
Technical Deep Dive on FlashBlade//S

Join this session as we walk you through how FlashBlade//S differs from traditional unstructured storage and share how you can drive improved outcomes for your workloads and applications.
00:01
Hello, everyone. Welcome to the technical deep dive on FlashBlade//S. My name is Aretha, and I'm a director of FlashBlade product management. My areas of focus are new platform introduction, data protection, and core software, and I'm very excited to talk to you about all that's coming with FlashBlade this year.
00:28
We're super proud of what we've built and what our customers have done with this product, and we're very excited about where we're going with it and the road ahead. But let's look back a little bit to 2016, when we launched FlashBlade. It was a disk-based world. The market believed that the only way to build out file and object capability at scale was
00:52
a hybrid architecture, with a little bit of flash caching here and there. We set the stage with our all-flash platform. By building an all-flash platform, we could deliver much higher degrees of performance, and we found lots of workloads that were hemmed in by those disk architectures, and that allowed us to really grow from an odd science project to a healthy niche to now a
01:19
major player in the file and object category. But there's a lot on this graph that's not orange yet. And so the challenge in front of us is: how do we take everything that's great about FlashBlade, the simplicity and the capabilities that we've built, and bring it to even more workloads? And this is exactly what the next-gen FlashBlade is all about.
01:44
We set out to double down on the thing that we do best, the co-design of hardware and software. And with QLC at the media level, we can solve both the highest performance requirements and some of the largest capacity-optimised requirements in one platform. It's going to double the density, double the performance, and double the power efficiency.
02:08
It's going to allow our customers to build out whole new kinds of platforms at a scale that is currently not possible with any other flash-based architecture. So we're very, very excited to launch our latest platform, FlashBlade//S. FlashBlade//S is here to make data storage simpler, forever. Let's talk about the ethos of FlashBlade//S.
02:36
It is built fundamentally on the marriage of hardware and software, with an efficient all-QLC-based design and the disaggregation of compute and storage. These two properties allow us to deliver big QLC performance without an SCM crutch, and we can flexibly scale performance or capacity based on customer requirements.
03:04
And it improves the resiliency, space efficiency, and serviceability of the platform. Last but absolutely not least, we are bringing Evergreen//Forever to FlashBlade. With Evergreen storage, our subscription to innovation, when we deliver new software features, customers receive those without the need for licensing or upgrades. With FlashBlade//S, we also offer included non-disruptive
03:32
hardware upgrades to future platform enhancements, all backed by a world-class customer experience, which has consistently achieved the highest Net Promoter Score of any enterprise storage company in the industry. And it's much more than just hardware: there's a new major release of Purity.
03:53
It adds a number of new features providing complete control and visibility. It's built for unlimited scale: you can grow from billions of files and objects to dozens of petabytes of data. And it's built for ultimate simplicity; it's still fundamentally the same FlashBlade that you're used to and love, and like I mentioned
04:16
before, we now have Evergreen//Forever with FlashBlade. So I'm going to take you on a guided tour of FlashBlade//S. We'll focus on the hardware components, the software components, all of the performance work that we've done, and beta feedback. We are introducing DirectFlash Modules to FlashBlade.
04:39
DFMs improve economics with better serviceability, an optimised supply chain, and better DRAM and CPU resource utilisation. DirectFlash uplevels media management activities that traditionally happened within the drive to the broader system. This allows us to make globally good decisions as opposed to locally optimised ones.
05:04
We control data placement ourselves, allowing us to extract the full parallelism inherent in the NAND. More importantly, for QLC this direct control gives us additional benefits, such as being able to pack data more efficiently and consequently not burning precious program cycles.
05:23
So our DirectFlash technology enables QLC. Elevating NAND management to the system level is the key to performance, reliability, density, cost efficiency, and so on, and it lets us leverage common IP across our products. FlashBlade//S uses proven DirectFlash Modules, and we'll have two capacities available when we launch: 24 terabytes and 48 terabytes. The configurations must be homogeneous,
05:52
which basically means the same number of drives per blade and the same size modules across the cluster. We are also disaggregating compute and storage. So we have the blades: each blade has one to four DFMs, which can be 24 terabytes or 48 terabytes. We have DDR4 DIMMs for more memory bandwidth, the CPUs are mainstream Intel products,
06:20
and the midplane connections are 100 gig and Gen 4 PCIe. Now you can scale by adding blades and DFMs. So we are introducing four models to our family, and you can scale compute or storage independently. You have the capacity systems, the S200 D and P, where the D stands for the 48-terabyte
06:44
systems and the P is for the 24-terabyte systems, and the S500 D and P systems, which are our performance systems. So, depending on the type of workload that you have, you can pick the model that suits your needs. You can grow within the same bucket or across buckets as needed. Now I'd like to walk you through the hardware components.
07:10
So this is a chassis overview, and this is how the front of the chassis looks. You can see the blade and drive architecture, where we have disaggregated compute and storage. This is a 5RU unit, and you can have a minimum of seven blades and a maximum of ten blades, and within each blade you can have one drive as a minimum or four,
07:35
so you can grow from 168 terabytes to two petabytes within a single chassis. That's a lot of density packed into a single chassis. And the system is modular and flexible: instead of having to replace the whole blade and rebuild on failure, blades and DFMs are now separate FRUs.
07:58
So if a blade fails, all you need to do is remove the DFMs, replace the blade, and reinstall the DFMs. There's no data rebuild needed. And if a DFM fails, you just replace that particular DFM and keep the blade online. So a lot of flexibility and modularity is built into the platform.
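To put those chassis figures in perspective, here is a back-of-the-envelope sketch using only the numbers quoted in this session (seven to ten blades, one to four DFMs per blade, 24- or 48-terabyte modules, homogeneous configurations):

```python
# Back-of-the-envelope sketch of the chassis capacity range described above.
# Figures come from this session: 7-10 blades per chassis, 1-4 DFMs per blade,
# DFMs of 24 or 48 TB, and homogeneous configurations.

def chassis_raw_tb(blades: int, dfms_per_blade: int, dfm_tb: int) -> int:
    """Raw capacity of a single homogeneous chassis configuration, in TB."""
    return blades * dfms_per_blade * dfm_tb

minimum = chassis_raw_tb(blades=7, dfms_per_blade=1, dfm_tb=24)   # 168 TB
maximum = chassis_raw_tb(blades=10, dfms_per_blade=4, dfm_tb=48)  # 1,920 TB, roughly 2 PB

print(f"Minimum chassis configuration: {minimum} TB")
print(f"Maximum chassis configuration: {maximum} TB (~2 PB)")
```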
08:17
This is a look at the rear of the chassis. You have fabric I/O modules, which are similar to the current FlashBlade's fabric modules. There are fan modules and power supply units, and these are serviceable individually. Looking at the fabric I/O module, there are four 100-gig ports enabled on day one
08:41
out of eight physical ports, with the rest available for later use, and 50-gig blade and 100-gig chassis networking. So we really worked on the speeds and feeds of our fabric I/O module. Like I mentioned, it is 50-gig blade and 100-gig chassis networking, plus all kinds of uplink capabilities that we've added: 100 gig,
09:03
40 gig, 25 gig, and 10 gig. And it's this integrated networking that we built in that really simplifies large-scale deployments. And this is a look at another view of the chassis. This is the future-proof midplane: the connectors are 400-gig ready and tested to PCIe Gen 4 specs.
09:25
Another thing that we are very proud of is our best-in-class power consumption. The nominal consumption for a fully loaded system is around 2,400 watts, so that's just 1.3 watts per terabyte of effective capacity, which gives us huge environmental benefits.
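As a quick sanity check on those power figures (an illustrative calculation, not an additional specification), 2,400 watts at 1.3 watts per effective terabyte implies roughly 1.8 petabytes of effective capacity behind a fully loaded system:

```python
# Sanity-check sketch of the power-efficiency figures quoted above.
nominal_watts = 2400          # fully loaded system, as quoted
watts_per_effective_tb = 1.3  # as quoted

implied_effective_tb = nominal_watts / watts_per_effective_tb
print(f"Implied effective capacity: ~{implied_effective_tb:,.0f} TB (about 1.8 PB)")
```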
09:47
Now let me quickly walk you through the software overview. The logical architecture stays the same: everything is still always distributed. I talked about the ethos of FlashBlade//S, where we have disaggregated compute and storage, and we have enabled QLC support without the usage of SCM. And there's the efficient NVRAM usage for QLC that we introduced in our software. I talked about the enhanced networking to
10:13
support the increased speeds and feeds within the product. And we're introducing always-on inline deep compression using a Pure proprietary algorithm; the S200 systems will have deeper compression than the S500 systems. And all the data protection capabilities that you're used to and love, replication for both file and object, snapshot support, SafeMode
10:41
support for file and object storage to protect against ransomware, all continue to be a part of Purity. From day zero we have an enhanced security focus during development: there's a lot of threat modelling, third-party penetration tests, and so on that we do. And after it's deployed, there's third-party-validated data-at-rest encryption using FIPS 140-3 Level 1
11:06
validated encryption, and rapid data locking for off-array key management. And here's a look at our updated hardware GUI pages. You can see that the hardware page has changed: you see blades and DFMs show up here, because we are focusing on the disaggregation of compute and storage.
11:29
And here you see the eight physical ports that are active and, for future-proofing, the eight physical ports that are available but unused. We have introduced a puredrive CLI command, and there's a new drive API endpoint, which helps you manage the DirectFlash Modules better. Within the UI you will see the blade temperatures show up, as well as in the purehw CLI command, and they're also returned by the hardware API endpoint.
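As an illustration of what pulling that drive information programmatically might look like, here is a minimal sketch of querying a drives-style REST endpoint. The endpoint path, authentication header, and response fields below are assumptions for illustration only, not details given in this session; the Purity//FB REST API reference is the authoritative source.

```python
# Minimal sketch of querying a drives-style REST endpoint on a FlashBlade//S
# management address. The endpoint path, auth header, and response fields are
# assumptions for illustration only; see the Purity//FB REST API docs for the
# actual interface.
import requests

ARRAY = "https://flashblade.example.com"   # hypothetical management address
TOKEN = "session-token-goes-here"          # obtained via the API's login flow

resp = requests.get(
    f"{ARRAY}/api/2.3/drives",             # assumed versioned path
    headers={"x-auth-token": TOKEN},       # assumed auth header
    timeout=10,
)
resp.raise_for_status()

for drive in resp.json().get("items", []):
    # Field names are illustrative.
    print(drive.get("name"), drive.get("status"), drive.get("capacity"))
```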
11:58
Now let me walk you through the performance configs and the beta testing feedback we've been receiving about FlashBlade//S. I talked to you already about the S200 and the S500 systems. So we have four models now, the S200 D and P and the S500 D and P,
12:22
where D is the 48-terabyte drives and P is the 24-terabyte drives. Now, as an illustrative mental model, as you move up you see higher IOPS per terabyte, and as you move down you see lower IOPS per terabyte. So now you have up to four options, with separate price points for each capacity. So what does that really mean?
12:44
Now let's try and build a one-petabyte system. With the current-gen FlashBlade, there would be just one option available: a 19-blade, 52-terabyte system. But now, with FlashBlade//S, you have four options: the S500 P and D and the S200 P and D. If you were looking for the highest-performance
13:08
option, you can just go with the S500 P, which is a ten-blade, four-drive, 24-terabyte system. If you were looking for the most capacity-optimised option, you go with the S200 D, which is a ten-blade, two-drive, 48-terabyte system. And if you want the best performance with room to expand, you go with the S500 D, which is a ten-blade, two-drive, 48-terabyte system.
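Here is a small sketch that tallies those roughly one-petabyte starting points from the blade, drive, and module sizes quoted above (the S200 P layout is inferred by symmetry with the S500 P and is not spelled out in the session):

```python
# Tally of the roughly one-petabyte configuration options described above.
# Values are raw terabytes computed as blades x DFMs per blade x DFM size.
options = {
    "Current-gen FlashBlade (19 blades x 52 TB)": 19 * 52,
    "S500 P (10 blades x 4 x 24 TB DFMs)": 10 * 4 * 24,
    "S500 D (10 blades x 2 x 48 TB DFMs)": 10 * 2 * 48,
    "S200 P (10 blades x 4 x 24 TB DFMs)": 10 * 4 * 24,  # inferred by symmetry
    "S200 D (10 blades x 2 x 48 TB DFMs)": 10 * 2 * 48,
}

for name, raw_tb in options.items():
    print(f"{name}: {raw_tb} TB raw, roughly one petabyte")
```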
13:30
So there's a lot of modularity and flexibility, and you can decide what configuration to go with for the type of workload that you have. I talked about the inline deep compression algorithm that we've introduced with FlashBlade//S, and we're seeing really good results from it: for Oracle native backup with RMAN and SQL native backup via SQL Studio, we're seeing 2.9:1 and 2.7:1 compression respectively, and we're seeing an improvement in compression across the board with this new algorithm.
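As a simple illustration of what those ratios mean for sizing (the ratios come from the session; the 100-terabyte backup sizes are made-up examples), the physical space consumed is roughly the logical backup size divided by the compression ratio:

```python
# Illustrative effect of the quoted compression ratios on backup footprints.
# The 100 TB logical backup sizes are made-up examples; only the ratios
# (2.9:1 for Oracle RMAN, 2.7:1 for SQL native backup) come from this session.
def physical_tb(logical_tb: float, ratio: float) -> float:
    """Approximate physical space consumed after inline compression."""
    return logical_tb / ratio

print(f"100 TB Oracle RMAN backup -> ~{physical_tb(100, 2.9):.1f} TB on flash")
print(f"100 TB SQL native backup  -> ~{physical_tb(100, 2.7):.1f} TB on flash")
```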
13:55
We've gotten really great feedback coming in from the beta programmes that we've been running for FlashBlade//S, and here is one from the Mississippi Department of Revenue. The Mississippi Department of Revenue provides services that drive 95% of the state's revenues,
14:24
including motor vehicle registrations as well as property taxes. They've been beta testing FlashBlade//S for multiple new workloads like Zerto, Splunk, Windows file sharing, and so on. And a quote from Mike Deana, enterprise architect for the Mississippi Department of Revenue: "FlashBlade//S retains all of the features that made FlashBlade a legendary solution for our backup
14:45
needs while simultaneously pushing the boundaries of performance and usability even further." So we're delighted to hear all the feedback coming in on FlashBlade//S. With FlashBlade//S we are doubling down on the co-design of hardware and software; with QLC we can serve the highest performance and capacity requirements, and we are disaggregating
15:08
compute and storage. So we're very excited about this launch. Thank you so much for listening, and I hope you can attend the other sessions on FlashBlade//S, where we're going to talk a lot more about the internals, the fundamentals, and the use cases. Thanks again.
  • Artificial Intelligence
  • Video
  • Data Analytics
  • Enterprise Data Protection
  • Pure//Accelerate
  • FlashBlade//S

FlashBlade//S is the storage platform preferred by a growing number of organisations for file and object workloads across Analytics, AI, Data Protection, HPC, and more use cases. Join us in this session to take a deeper look into the underlying architecture of FlashBlade and get under the hood of the latest FlashBlade//S technical enhancements. We’ll walk you through how FlashBlade//S differs from traditional unstructured storage and share how you can drive improved outcomes for your workloads and applications.
