1:03:41 Webinar

Love Software? Make Hardware. Behind the Curtain of Pure’s (Hardware) Labs

For July, host Andrew Miller invites Pete Kirkpatrick (VP of Engineering & Chief Platform Architect, Platform) to the Coffee Break to explore his 10 years at Pure and relentless pursuit of leveraging core industry standards and technologies in ways you might not expect.
This webinar first aired on July 19, 2023
00:00
Hello and welcome to this month's Coffee Break. My name is Andrew Miller, your host, lead principal technologist here at Pure. I'm really happy to be joined this month, as I always am, by an amazing guest, Pete Kirkpatrick. I first heard him present back at Tech Field Day, for those of you who like that venue, and really enjoyed his depth and his, I think,
00:21
mildly snarky sense of humor, but it's all right to say that; you know, it kind of came out, and so I just, you know, like we enjoy what we do. So today we're gonna be talking about Love Software, Make Hardware: Behind the Curtain of Pure's Hardware Labs, and I'll pull the music down here a little bit.
00:38
We'll let it keep going at the end. A little bit of housekeeping before we dive in. So as always, this is a series; thank you for coming back for month 31, if I've calculated it right. You can always find previous months, and due to the solution focus,
00:52
I think, and I've had feedback from others, they've aged quite well. I actually got a lot of feedback from folks at Accelerate last month, which was really cool, about how they enjoy Coffee Break and found the content useful over time, you know, to keep up to date with the industry and with what Pure does. You can find previous events or register for new ones at purestorage
01:08
dot com/events. You can also find previous Coffee Breaks listed there as well. And while we're not gonna play it again this month, the promotion video from Accelerate, I do want to make sure to highlight that the day one and day two keynotes are on YouTube, as
01:28
well as all of the on-demand sessions; all the breakout sessions are actually available on demand, with the audio and the slides. So there's a lot of really good content there, including, if you were there, some stuff you might recognize that Pete did over there. You'll see that a little later on, because we borrowed from previous work as
01:45
always. For some of you, and I can appreciate this, you're here for the gift card; some details there. You can see this in the follow-up that comes within the next week from Olivia at Pure Storage. There are some exclusions here. We appreciate everyone on this list attending; you just know why we can't send you stuff.
02:00
That's how life works. As I mentioned, my name is Andrew Miller, your host, as always. I'm not gonna say a lot about my background; there's customer, partner, and tech marketing in there. But in this case, we're talking about hardware labs. And I remember my first time actually in a data center.
02:15
I had just graduated from college, got my first IT operations job, and I walked into the data center and they said, you know, here's the data center. And then they opened the... I thought that the rack, you know, the 42U rack, was actually the computer or the server, because I came out of a help desk background,
02:30
and then they opened it up and it was like, wow, there's all this other stuff inside it too. Look at that, you know. So I started way, way back; I've come a little ways since then, you know. But there's been this fun sense of enjoying hardware and rack-mount stuff and even HVAC and all the cooling that goes into that. We're gonna explore those themes as we go along
02:45
the way. Pete, we're gonna wander through some of your professional history. But, you know, there's Pete as a person, not just Pete as a Pure employee, too. So would you mind giving a little bit of your background here? Yeah, this is fun, because I thought back to all the jobs I've had, not just the technical stuff, and, you know, I've had a lot of fun.
03:06
Some of these were more fun than others, especially, you know, working in agricultural fields, or cooking for folks; it's important life stuff. But as it's evolved, it got more and more fun, I'd say. Definitely skiing and rafting were kind of highlights.
03:30
But now those are more my hobbies rather than, you know, how I make a living. But, you know, a little bit of science, a lot of technology, and having that evolve over the years has been really, really thrilling. And now, you know, I've got a family; I've got my two teenage boys, who are just amazing, and I
03:54
spend a lot of time with them, and we do a lot of these activities together, and that's just been a great ride so far. I was gonna say, maybe you peaked a little bit early with ski bum and raft guide. I know. But if you're doing it again with your family, then, no, no, I guess not. So. Absolutely.
04:11
Actually, I have a cousin who's done that for years on the side, and he enjoys it; he's really good at it. So, last housekeeping note: next month, as this is a series, please join us again. It'll actually be with our third-time guest, Jack Hogan,
04:25
on data center fatigue, or maybe data center decision fatigue. So many choices; you need a bridge to the cloud, a cloud bridge. We're gonna explore, yet again, the theme of making the cloud fit your business: some of what we've been hearing from the last year about customers on their cloud journeys, some of the phases of cloud journeys that
04:40
we've seen, the six stages of the cloud journey from an analyst and customer perspective. And then, of course, there are always some new announcements from Pure in this area: overall cloud capabilities and especially bare metal as a service. So, onto the topic, Love Software, Make Hardware. Now, for those of you who are thinking, as someone asked for an agenda earlier:
04:59
here's the agenda. You know, it's the same kind of simple format as always, not too much, like it says. And hopefully you recognize the title as being related to this quote: "People who are really serious about software should make their own hardware." Maybe you know who that's from; we'll check that in a minute.
05:17
We're gonna first, as often, start with a little bit of exploring some of your technical background, Pete: quite a ride, lessons learned, some of what you've done over the years. Then we'll wander into a little bit of alphabet soup and technology inflection points around NVMe and TLC and QLC, then go into DirectFlash: the origin, you know, some of the gutsy bets behind Pure saying,
05:39
you know, and actually you were instrumental in this, Pete, deciding that we're not gonna continue to source SSDs; we're gonna make almost our own equivalent, if you will. And then there's always a little bit of what's new and the accelerating pace of hardware platform innovation. We've got some great folks online, Sean Kennedy, and I think Tuggle is joining later, to
05:56
help with Q&A and the chat. Please feel free to put stuff in the Q&A if you have specific questions. We will, as always, try to end about 45 minutes after the hour, give or take, and then we'll hang around for Q&A and kind of let our hair down a little bit and relax. But first, before we dive in: Olivia, if you don't mind tossing up the first poll.
06:15
This one has no commercial value to Pure; we're not gathering your data for any specific reason. Let's just have a little bit of fun. So, on that quote: who said "people who are really serious about software should make their own hardware"? I think that was you, right, Pete? Am I correct in attributing that to you?
06:31
OK, I may have said that from time to time. Yeah, actually, I know you have; I've heard you say it. Now, were you the first one who said it? Now, that's an interesting question. But hey, we will just leave that up, and feel free to fill that out.
06:43
No matter when we close the poll, it's always a little too late, but we'll leave that up. So, first, thinking about your title: VP, Chief Platform Architect. You don't just jump from college to doing that, but I'm kind of curious, what does that mean?
07:04
And then we're gonna wander into a little bit of how you got there. So maybe just start with a little bit of a description of what you do, and then maybe we'll start back in the days of when you were at N... Yeah. Yeah, cool. So, you know, the platform: I think it's a term that's a little overused and pretty heavily loaded.
07:20
So no matter where you are, somebody's gonna call something a platform. But what I think of it as: it's the hardware, right? You know, some of the products work on a box; this is a box. So it involves all the physical stuff, everything you can see and touch,
07:41
but, you know, you can't have all of that work without the software that's embedded in it, and the application that it's designed for, and even thinking about the environment that it fits into. So that's kind of what the platform is to us. I feel really fortunate to be an architect.
08:02
Architects, you know, the way I say it simply is: architecture is how it should work. And any engineer on the line that hears the term "should work" is immediately skeptical, but I actually mean that; it's how it's supposed to work. But it's not the full design,
08:22
right? And so there's a huge amount of work, and folks, that take this idea of how things should work and turn it into something real. And I think, as far as how I got here doing this, probably the number one thing is you gotta like people. You gotta enjoy working with people and talking with folks, because it
08:45
does take that whole village. You know, you need to talk to the end users to find out what they're trying to accomplish. You gotta talk to all the folks in all the adjacent areas to figure out what's possible, and what we could get done, and it's gotta be done in a certain time frame. And so you've gotta balance all of these different factors, and that requires a lot of
09:08
people and time, you know, spent balancing those factors with folks. You're saying people skills, and being able to... "I can talk to people"; a little bit of an Office Space reference there too, even with the technology, for sure. And when I came out of college I would have put it the other way. I'd say, oh, you gotta know all this technology and,
09:26
you know, gotta know things and do lots of math. And it's flipped: it's really a people thing. So when we were thinking through and looking a little bit, I mean, you actually started, a little way back, at NT, working on superconductors. You might kind of describe that a
09:43
little bit. Yeah, that was really fun. And, you know, it was pure science for science's sake. And, you know, I was young then, and so it was kind of a practical focus. But we did superconductors, and it's funny because it was called electrical transport, which, you know, I think of as like a wire: it's how you transport electrons.
10:08
But these were pretty sophisticated wires. There are a lot of flavors of superconductors, but they only work under certain conditions, you know, certain current and certain magnetic field and pressure, and there are a lot of things you can do to these to make them go normal and not be superconducting. But probably the coolest part is that the research we did led to the development of the
10:33
wires that make up the giant magnets in particle accelerators like the ones at CERN. And so when you work in science, you might do some pretty fundamental little element of it, and it gets built up into something that can lead to, you know, some of the most fundamental discoveries in physics. And so that was a real eye-opener.
10:56
But at the same time, I remember when I went to the head of the department and told him I was gonna move on to focus on what I was doing in school, which was electromagnetics. And he said, no, no, no, you should really focus on superconductors, it's the future. And it has come a long way, you know:
11:17
we have maglev trains and all this cool stuff. You know, they're even talking about building superconducting transmission lines for power delivery. And it's amazing. But I think I wanted a broader view of the world, and so I moved on into different areas,
11:39
but I really appreciate doing that stuff. I kind of cherish that time doing science. I think some of that foundational work led to the discovery of the Higgs boson. I feel intelligent just saying that. For those of you out there who know what that is, put that in the chat; that feels like it's worth a bonus gold star, just to
11:59
feel good. As you were shifting, though, from superconductors to electromagnetics, I mean, there was even some level of moving from thinking about technology in isolation to starting to think more in terms of systems, or systems theory. Yeah. My first, you know, real technology job was really product focused,
12:22
right? And so as soon as you're in the industry, then you're making a product or a service, and as someone coming out of school, you gotta focus, right? You don't know everything; you can't know everything. And so you gotta be ready to learn, and you probably get a smaller job.
12:39
And in the beginning I was designing optical communications. Some of it was down at the laser and detector level, but pretty quickly we built those into modules, and built those modules into transceivers, what today you'd think of as the transceivers that plug into all the ports on a switch or server.
12:59
And I was involved in the very first 10 Gigabit Ethernet transceiver. And believe it or not, those things that you can get for 50 bucks today, we used to get paid, you know, $15,000 to make a single one. And it didn't even plug in. Anyway, that's just one example, like one of,
13:22
you know, 100 technologies that I work with today that I happened to get really deep on. But as you build up experience in different areas, you end up being able to put those together; that's what a system designer does. And you'll probably hear me say a couple of terms over and over today,
13:43
like balance and efficiency. Those are really critical in any one of those areas, but when you put them together, the balance between all the components and requirements, and the efficiency of how things work, becomes super critical. So networking, computing, storage: I've had a lot of those experiences. And now, again, I feel fortunate to be able to apply it all in the same area.
14:09
I think sometimes architecture or design, to say it the nice way, is: what do we optimize for? But it's also: what do we choose not to optimize for? What do we choose to waste? We may choose to waste certain things, or not optimize for certain things, because they don't matter. Sometimes we optimize for time; sometimes we don't optimize for time,
14:24
like, it'll take a lot longer to do it, but it makes it easier for the system. That's a little bit of a teaser for Evergreen later. I mean, maybe the last piece in this section, if you don't mind, Pete: so, depth in hardware, even applied versus theoretical science, I'm losing my terms, right? You know, pure science, kind of thing.
14:40
But moving to the applied stage: why come to Pure? Because frankly, when you came here, Pure had no custom hardware. It was totally a software company. It was obviously the beginning of the era of flash, but Pure didn't make flash, or even anything beyond the software. So why does someone who's deep in hardware come to Pure?
15:00
Well, a lot of those other factors that I mentioned: the people. You know, I started talking to folks, and I even told everybody what I really didn't like about my current position at that time, and they told me that everything was different here, and the people, you know, learn from each other and support each other and respect each other and take
15:23
responsibility. And then I thought, well, I just told them all the things that I was looking for, and of course they said that's what it was like. But now it's 10 years later, and I can guarantee you that that has come true, and over and over, that people thing is critical. You know, we have a fantastic team. But, you know, in one sense, I remember really distinctly,
15:45
I was sitting around with a brilliant architect that I worked with at the time, and we were watching some of Pure's videos, and some of them are just funny. But some of them were kind of architectural, and they talked a lot about how they designed the product. At that time it was all software based, and we were sitting around and we saw some of the
16:09
capabilities that they demonstrated, and this architect said, how did they do that? And for me, that was a really distinct moment, where if somebody really smart is that interested, and then is a little bit baffled about how they accomplished it, I knew they had something special, before I ever joined.
16:28
But the fact that they didn't have custom hardware was also a huge opportunity, right? And so taking the fundamental value, which was honestly a bit closer to the application, where, you know, value kind of does flow from what users are doing, having a fundamentally great software architecture, and having it on
16:52
commodity hardware was a huge opportunity. And I could see that we could do a lot to improve it. And, you know, at the time I worked with Bill Cerreta, and he was interested too, and, you know, both of us joined Pure to do this custom hardware thing. And now we haven't looked back for 10 years, and it's been really great
17:14
to combine the value of hardware and the value of software, and co-design, and make that all work together. There's huge value in the co-design. We explored that in a previous session; Justin Emerson actually called that out a lot. I think the only other piece on here I just want to make sure to mention is that one of the
17:29
core pieces that you saw, you mentioned, and I wanna get to section two here, is that flash had such a huge performance advantage, you know, 1000x, that we could actually throw away a lot of the benefits for the economics. So we could actually, yes, it's so much crazy faster, we could choose not to optimize around certain things for the economics. The fundamental problem 10 years ago, even now,
17:51
was cost, and so obviously compression helps there, but flash had such an advantage, we could optimize around the other areas to make it fit, kind of thing. That's right. Cool. OK. On to section two; hopefully I'm not leaving out anything you want to say. It's always a little loose, folks; you know, we kind of plan it out, but not too much.
18:10
So let's think about some of those technology pieces. Maybe I think of them as technology building blocks. NVMe is in there; we could say QLC and TLC flash generations; we could say PCIe; there could be other ones. I think there were two main ones we wanted to explore for folks, ones that are really kind of just hardware
18:30
and, I don't know if "standard" is the right term, at a standards level. Maybe let's first explore NVMe, and actually, maybe I should do a little bit of a brief introduction there. So for folks familiar, you have SCSI, and actually maybe you can give me a grade on this piece; see how I do.
18:45
So, SCSI is serial; it actually goes all the way back, you know, SCSI, and so it was built originally more for disks than for flash. NVMe starts to have lots of lanes, and it basically can be highly parallelized. So there's an impact here as far as latency and parallelism that fits flash. I'll let you keep going there,
19:04
Pete, and you can correct me on what I said if it wasn't right. That was exactly right. And the impact on Pure, and what you've seen? So, yeah. Yeah. And I think it's a good example. I think anybody that's looking at it right now probably just considers NVMe the obvious
19:22
choice. If you're working with flash, it's undoubtedly the most popular, the, you know, the right interface. But if you rewind 10 years ago, it was fringe. And, you know, there's always new technology, there are always multiple candidates for new technologies, and any one of them probably gets
19:46
some degree of hype, and it can be very hard to tell what's important, what's the trend, what's gonna happen and what's not gonna happen. And you don't want to jump on the wrong boat, right? That can sink a product; it can sink a company if it's a startup. So you have to treat these things really seriously, but you also can't perfectly predict
20:08
the future; you have to kind of rely on fundamentals. And so we certainly believed that flash was the future. There was no doubt about that. I had learned that before I joined Pure; it was why Pure was founded. That's why we, you know, got together; it was our whole mission at the time,
20:25
but the technology was actually based on SAS and SATA, which, you know, are SCSI-based technologies, and those were a good bridge or a transition, right? Because the compatibility with the old technology enabled you to fit right in and bring some of the new benefits. And, you know, not everything is perfectly optimized, but that's kind of how these transitions work.
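The latency-and-parallelism point behind the NVMe bet can be made concrete with a rough queueing sketch. This is a toy model with made-up numbers (real stacks and devices differ widely); it just shows why many deep queues matter once the media itself is fast:

```python
# Illustrative sketch (invented numbers): why parallel queues matter for
# low-latency media. By Little's Law, sustained IOPS is bounded by
# outstanding commands divided by per-command latency.
def iops_ceiling(queues: int, depth_per_queue: int, latency_seconds: float) -> float:
    outstanding = queues * depth_per_queue   # commands in flight at once
    return outstanding / latency_seconds

# A legacy single-queue, SCSI-era path with a modest queue depth:
legacy = iops_ceiling(queues=1, depth_per_queue=32, latency_seconds=100e-6)

# An NVMe-style path: many deep submission queues (the spec allows up to
# 64K queues of up to 64K entries; 8 x 1024 here is just for illustration):
nvme = iops_ceiling(queues=8, depth_per_queue=1024, latency_seconds=100e-6)

print(f"single queue ceiling: {legacy:,.0f} IOPS")
print(f"multi-queue ceiling:  {nvme:,.0f} IOPS")
```

With disk-era latencies (milliseconds) a single shallow queue was never the bottleneck; at flash latencies it immediately is, which is the fundamentals argument made above.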
20:54
Compatibility can be even more important than, you know, having all of the perfectly optimized things that you may get later. Mhm. So again, it was a bet. We made a bet, but we looked at the fundamentals and said, if flash is gonna happen, a lot of the benefits derive from having a high amount of
21:14
parallelism and lower latency, and the SCSI technology was at odds with those fundamentals. Following those down into the depths of the protocol, that's exactly what NVMe is designed for. And so our challenge was, how do we do both? How do we have the old technology and then make this bet on the new technology? But let's not redo things every time. We wanted
21:42
a way to get there fluidly, more like you do with software, right? So good software can be updated in place; you don't want to disrupt your application to make the transitions. We wanted to do that with hardware, which might seem a bit funny, right? Because hardware is a physical thing:
22:02
you've got to get the new thing in and the old thing out. But the way we approached that was just to build both into the system and then start using NVMe where it mattered the most. And so we actually started with it in our NVRAM, where performance is critical. Later we moved that into the drives, and now we've got it end to end,
22:25
right? We've got all of the flavors of NVMe: fabrics, front-end protocols. And so the bet paid off, even though it wasn't obvious at the time. And now the system is essentially 100% NVMe. And even though there's still SAS in there, it's been a decade and we still
22:47
support the SAS that we did at the time. And so I think that's just a great example of how a transition happens. You don't drop the old and grab the new; it really is a gradual transition. Sometimes that feels like the definition of an enterprise company versus consumer. Sometimes you see stuff, even in software
23:05
land, in consumer companies that drop off support. Enterprise, we can't do that; businesses rely on it. I was pretty sure that you had an Evel Knievel reference there you wanted to work in, maybe. I love that one. Yeah. Yeah. Some people say "crossing the chasm,"
23:18
which I think is great; it's a little bit more of a business analogy. But I'm from Montana, and so, my fellow Montanan, Evel Knievel, is famous for trying to jump the Snake River Canyon in Idaho with a rocket. And so that's usually what I use, sometimes when I'm trying to discourage
23:39
people from making a big transition, because he didn't succeed. You know, he didn't die there, but he definitely didn't make the jump all the way across. And you can't take a risk that big in this kind of technology world, especially in the enterprise. But even in hyperscale or other places,
23:58
you've got to know how to get from A to B without completely failing, or you may find yourself in trouble. The other thing I think of there, and I want to give credit to Mike Richardson, who I borrowed this phrase from; he's a Pure principal technologist responsible for Americas presales. He talks about how our problems, you know, as technology companies,
24:17
you know, and sometimes these are major technology paradigm changes: our problems shouldn't be your problems, shouldn't be a customer's problem. It's our problem to figure that out, not to say, like, it's your problem, you have to move all your data and migrate and rip stuff in and out. We should facilitate those technology shifts for you.
24:32
I think the one other part here to explore a little bit, Pete, is just a little bit of flash generations. I'm gonna jump in the time machine here. So folks who have joined us for a while have seen this slide before; actually, JD and myself, in the second Coffee Break ever, walked through QLC and the launch of
24:50
FlashArray//C. But do you mind highlighting, briefly, we'll see if we can make up a little bit of time here, what were the major differences between MLC, TLC, and then QLC, at kind of the engineering level that you're having to deal with? Because there's even some stuff a little bit unexpected there, too.
25:08
Yeah, it's been a fascinating journey in terms of transitions. You know, the macro one is hard drive to flash; the whole world has been moving, and the end is near. We may talk about that a bit too. But these are more, hey, like, you talk about Moore's Law, and that's actually a collection of thousands of technologies that allow the trend to keep going.
25:32
So these transitions were more, you know, clearly you were using flash, but the main trend again was economics. How do you take this great technology and apply it more and more broadly? Well, you make it cheaper. It did have performance to spare.
25:52
I think you could argue it still does have plenty of performance to spare. But if you go all the way back to SLC, there was no way to take advantage of all the performance of SLC at a system level, especially when folks were used to older technologies. You know, you can't take a 1000x jump in all areas of a system at the same time. So you put in
26:15
the new technology and then have everything else kind of catch up. But really, the meaningful start of flash as a storage technology was with MLC. And looking back, you know, it seemed hard at the time, but looking back, those days were easy. MLC flash would arrive off the truck with no errors. You could read and write to it with no errors
26:38
in the beginning, and only towards the end of life would you start to deal with errors. And so the encoding was easy. But MLC was also associated with planar: the semiconductor architecture was planar, two dimensional. And so you just had cells that would hold the bits, and they would shrink and shrink and
27:00
shrink. And the cool physics thing there was that they shrunk small enough to where they only held, you know, maybe 100 electrons, something like that. And there's a term called critical electrons, which meant that if you lost track of, say, 10 of those electrons... you know, we're not used to counting those things as individual things.
27:20
You can count those on both your hands. If you lost track of 10, you had errors. And so that was really clearly the end of that era, the end of the 2D era, because, you know, it's electronics, you're storing electrons, and when you go below that limit,
27:39
you're gonna have serious problems. And so the transition from MLC, and the 2D scaling transition, happened both at the same time: to TLC, which is three bits in one of those little cells instead of two bits, but also to 3D. And now the whole world runs on 3D flash.
28:03
And really what that means is the semiconductor architecture went from two dimensional, where everything was defined in two dimensions, to now stacking layers of those cells along a column. And the cool part was, putting three bits per cell instead of two bits per cell should be a really difficult problem in terms of signal to noise and errors,
28:28
right? But because everything had scaled down so much in two dimensions, when we got a new vector to scale in, the third dimension, we backed off a bit, right? We backed off and made those cells big, so they became easy to work with again. And then we've been scaling in that TLC world for,
28:47
you know, eight years or so now. And that actually made TLC a lot easier to handle than a lot of us expected. OK. But we got used to that, and then along came QLC. And in the beginning, we just said, well, that's not useful. It's rotten, it's full of errors. It doesn't have any endurance, so if you use it too much, it'll wear out. It's slow. But we realized at that time that there
29:14
was a huge opportunity still. Again, we're chasing the economics. And so we decided to embrace all the characteristics of QLC. Instead of trying to compensate for all of those problems, at least the performance and the cost problems, we embraced them. We said, let it be slow; let's actually put
29:35
slow controllers in front of it, because if you have slow media, you don't need a really high-powered controller. And that enabled us to kind of multiply the benefit of the economics and make the whole system cheaper. And that was FlashArray//C. Mhm. But meanwhile, it didn't replace everything like in the earlier transitions.
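The "let it be slow" economics can be sketched with a toy cost model. All dollar figures here are invented for illustration (not actual pricing); the point is that cheaper QLC media plus a deliberately cheaper controller compound into a bigger system-level saving than the media discount alone:

```python
# Toy system-cost model (invented numbers): pairing cheaper-but-slower QLC
# media with a lower-powered controller compounds the savings beyond the
# media discount by itself.
def system_cost(media_per_tb: float, controller_cost: float, capacity_tb: float) -> float:
    return media_per_tb * capacity_tb + controller_cost

CAPACITY = 100  # TB, same usable capacity for both configurations
tlc_system = system_cost(media_per_tb=100.0, controller_cost=5000.0, capacity_tb=CAPACITY)
qlc_system = system_cost(media_per_tb=70.0, controller_cost=2000.0, capacity_tb=CAPACITY)

media_saving = 1 - 70.0 / 100.0              # media alone: 30% cheaper
system_saving = 1 - qlc_system / tlc_system  # whole system: larger saving
print(f"media discount:  {media_saving:.0%}")
print(f"system discount: {system_saving:.0%}")
```

With these illustrative numbers the media is 30% cheaper but the system comes out 40% cheaper, which is the "multiply the benefit" effect Pete describes.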
29:54
And so with the other transitions, everything moved to the cheaper stuff. We saw this one as an opportunity to say, let's do two systems. Let's make a low-cost and somewhat slow system, but then maintain the transactional systems: anything that worries about performance should still run on TLC. And we've maintained both in parallel
30:16
for several years now. And that feels like a segue to how we made some fundamental bets and then also accelerated some of the platform development. The only other piece I want to mention, for anybody following along, is that there's a shift from 2D to 3D and from MLC to TLC.
30:35
Uh, if you have ideas about how to make flash in 4D, please reach out. We'd be happy to help you file some patents on that. You know, that'll be the next major jump. That's about as nerdy a joke as I'm ever gonna make, I think. So, you know, hey, I feel like I should get points for trying.
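For anyone following the MLC-to-TLC-to-QLC thread here, the arithmetic behind the signal-to-noise problem is easy to sketch. This short Python snippet is an editorial illustration of generic NAND facts, not anything Pure-specific: each added bit per cell doubles the number of charge states a cell must distinguish, while capacity grows only linearly.

```python
# Each extra bit per cell doubles the voltage states the controller must
# distinguish (harder signal-to-noise), but capacity only grows linearly.

CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

def voltage_states(bits_per_cell: int) -> int:
    """Distinct charge levels a cell must hold to store n bits."""
    return 2 ** bits_per_cell

def capacity_gain(bits_per_cell: int, baseline_bits: int = 2) -> float:
    """Capacity multiplier versus an MLC baseline, same cell count."""
    return bits_per_cell / baseline_bits

for name, bits in CELL_TYPES.items():
    print(f"{name}: {voltage_states(bits)} states, "
          f"{capacity_gain(bits):.1f}x MLC capacity")
```

So TLC asks the controller to resolve 8 states per cell for a 1.5x capacity gain, and QLC asks for 16 states for 2x, which is why each step gets disproportionately harder.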
30:50
So I think it's all right, Pete. Well, actually, Olivia, if you don't mind closing the first poll and popping the results back up there. Thank you. So, for anyone wondering: I can understand the Steve Jobs answer. Where I first saw this quote was actually Steve Jobs in a keynote, because I worked Mac help desk
31:08
and used Macs a lot, so I saw it from him. But that quote is actually from Alan Kay, believe it or not. So go ahead, Olivia, if you don't mind, and I'll start the second poll and we'll get into section number three here. And we will share this back so you can see what everyone else is seeing too. I've got over 1,000 folks on today.
31:27
Uh, what's the biggest SSD or flash module in your data center today? Curious. And then, do you see yourself ever buying a hard drive again for your data center? We'll just let that play out. So obviously there's a lot of flash engineering and understanding of flash, even when we were doing SSDs, right, where it's
31:49
a kind of miniature storage controller, and there's reverse engineering going on to figure out what that looks like. But I almost want to jump into a little bit of the history of, well, I'll leave the slide up here for a second, DirectFlash 10 years ago, because it really was 10 years ago.
32:05
You started thinking about this. That feels like a pretty gutsy bet. I mean, Pure had just barely launched, right? And you're starting to, you know, kind of paper-and-pen out what this would look like. Take me through, if you don't mind, the decision process around deciding to
32:22
start making our own flash modules, and then even a little bit of the history there too, before we talk about all the benefits, though I'm guessing those will blur together. So what was the decision process like? Yeah, you know, if we thought NVMe was a big bet, at least that was a standard. This,
32:39
this is real invention, right? So we started from scratch, and again we relied on the fundamentals: we had the right trends, flash was gonna take over, and, yes, emulating hard drives using SSDs, which is really what they do, is a good way to transition, because the whole rest of the stack knows how to operate on top of that.
33:04
But also that, if we had the fundamentals right and we were planning for success, optimizing for flash would bring out a lot of efficiencies and all sorts of other benefits. We'll try to show how those have come true. You know, when you're trying to get something started, you have to promote it; you have to make some claims about things that aren't true
33:30
yet. Hopefully. And in this case, the fact that we're sitting here, and that in the beginning there was almost nothing while now all of Pure's products rely on this technology, optimized in different ways, shows it has been successful. So we can look back at those claims and kind
33:49
of celebrate how we got some of those things right. But this one took a village, just like everything else. And our founder, Coz, likes to say, oh man, we should have just done all this in the beginning. We shouldn't have waited until 2017; we should have shipped it,
34:07
you know, early on. And I'm very careful when I disagree with Coz, but the fact that we built the system, got it stable, and started to scale the business and the product line before we took on this step was, I think, actually a prudent choice. It could have been too much to do all at once, but we did a good job on the system side,
34:32
and then, built on that stability, we were able to do a good job on DirectFlash. Really, what happened here was, again, SSDs were emulating hard drives. And what that really means is you fire up a hard drive, you get a certain amount of capacity, and you can read and write through that
34:55
whole linear, contiguous address space kind of however you want. And flash, at the very bottom, just doesn't work that way. You have to erase large parts of it before you can write to other parts of it. And it's asymmetric in terms of the size that you write, the size that you can read back, and the size you can erase. There are all sorts of these quirks down at the
35:19
very low level. For most of you out there, you're probably happy that we just hide all that from you. But this is what we do. And so optimizing for how flash actually works, instead of faking out another interface, was kind of the insight. And further,
35:39
you know, any system already has that level of abstraction: from the space that the user sees, down to some other logical space, and then down to the physical space. It has these maps where you need to map all of those spaces together. And so we decided to completely eliminate one layer of that mapping and all of the work associated with it. In this case,
36:05
it's garbage collection. Whenever you have a map and you're reading and writing at a different level, you end up having to collect the garbage that you've created in the process. We literally eliminated that completely. And so a DirectFlash module doesn't do any of that work at all.
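The layer elimination Pete describes can be sketched with a toy model. Everything below is an editorial illustration of the general idea, two translation maps versus one, with invented class and field names; it is not Pure's actual implementation.

```python
# Toy contrast of the two mapping designs. Conventionally, array software
# maps user addresses to drive LBAs, and the SSD's internal FTL maps LBAs
# to physical pages (and garbage-collects behind the scenes). A design in
# the DirectFlash style keeps a single map from user address straight to
# physical flash, so the second map, and the device-side garbage
# collection it forces, simply disappear.

class ConventionalStack:
    def __init__(self):
        self.array_map = {}  # user address -> drive LBA
        self.ftl_map = {}    # drive LBA -> physical page, hidden in the SSD

    def write(self, user_addr, phys_page):
        lba = len(self.array_map)       # array allocates a drive LBA
        self.array_map[user_addr] = lba
        self.ftl_map[lba] = phys_page   # SSD maintains its own second map

    def lookup(self, user_addr):
        return self.ftl_map[self.array_map[user_addr]]  # two hops per read

class DirectStack:
    def __init__(self):
        self.map = {}  # user address -> physical page, one hop

    def write(self, user_addr, phys_page):
        self.map[user_addr] = phys_page

    def lookup(self, user_addr):
        return self.map[user_addr]  # single translation, no device-side state
```

The point of the sketch: the second dictionary in `ConventionalStack` is pure overhead that the device must keep consistent and garbage-collect; `DirectStack` never creates it.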
36:24
It turns out that it's dramatically simpler and dramatically more efficient, and these are the benefits that ensue from that decision, right? And so instead of designing around all of these goals, we got the fundamental architecture in place, and these are the things that naturally come from it. I think
36:49
we'll touch on how we get to each one of these things. But efficiency: the bottom one says efficiency, but all of these things come from efficiency; they just manifest in different ways. And not having the map from the logical to the physical space matters, because a traditional SSD will use roughly a 1,000-to-1 flash-to-DRAM
37:15
ratio, and that DRAM is required to hold the mapping from the higher level. We don't have that map. We still use a bit of DRAM, but we get to use it for, I think, much more exciting purposes.
37:29
Caching and buffering and things that make the system better, instead of this mapping functionality, which doesn't really give you any benefits; it's something you're just forced to do. But that lack of reliance on DRAM means, just think, if you want to make a 150-terabyte flash module, you need 150
37:52
gigabytes of DRAM. Crazy. Anybody that's seen a DIMM, even a 128-gig DIMM: try to imagine fitting that into an SSD, along with 150 terabytes of flash, plus all the power. That just doesn't work; it's not possible. So breaking some of those traditional views on how to do this
38:15
literally made things possible that weren't possible before. Now, 10 years ago you didn't have 150-terabyte flash modules, so you didn't worry about these problems. But again: plan for success, plan for the future, really embrace these trends, and you can see how this has come into a sweet spot,
38:36
breaking through those barriers. Performance actually follows a similar pattern, right? Because instead of spending a lot of time doing garbage collection behind the scenes, managing that map, and keeping it updated and reliable, we just literally don't do those
38:55
processes at all. And so the drive has a lot more resource with which to perform, and all it does is reads and writes according to direction from software. So this is a great co-design example, because if you can get a bunch of junk out of the middle and the hardware just does exactly what software wants, super deterministic performance just
39:22
happens. And so now we have a lot of excess performance that we can use to perform, or that we can use to generate efficiency, energy efficiency in this case, if we choose not to drive it so hard. So I think the piece there, and I was resisting flipping slides a little bit, because we're just gonna park here.
39:43
We've talked publicly about that 1-to-1,000 ratio and breaking it. That's why we put out publicly, and I think for you, Pete, it's your job to deliver on some of this, that we're talking about 150- and 300-terabyte SSDs by 2026. A couple of years out, that's past anyone else's roadmap.
40:03
There's a piece there about how removing that layer dramatically reduces write amplification, which leads to much longer endurance; your flash modules last that long. That's what underpins part of our flat and fair maintenance: we don't spike maintenance prices in year four or five or six, or ever, because we actually have the underlying physics and economics to do that.
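As a back-of-envelope check on the roughly 1,000-to-1 flash-to-DRAM ratio mentioned earlier: a conventional page-level FTL keeps a map entry per logical page in DRAM. The figures in this sketch, 4 bytes per entry and 4 KiB pages, are common generic assumptions for illustration, not Pure's numbers.

```python
# A conventional SSD's page-level map needs ~4 bytes of DRAM per 4 KiB
# logical page, which works out to roughly 1 GB of DRAM per 1 TB of flash,
# i.e. the ~1000:1 ratio discussed in the conversation.

def ftl_dram_bytes(flash_bytes: int, page: int = 4096, entry: int = 4) -> int:
    """DRAM a page-level FTL map needs for a given flash capacity."""
    return (flash_bytes // page) * entry

TB, GB = 10**12, 10**9
for cap_tb in (18, 75, 150):
    dram_gb = ftl_dram_bytes(cap_tb * TB) / GB
    print(f"{cap_tb} TB module -> ~{dram_gb:.0f} GB of map DRAM")
```

At 150 TB that map alone would demand on the order of 150 GB of DRAM inside a single drive, which is the physical impossibility Pete points at.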
40:22
The energy efficiency can then almost seem too good to be true. But you pull that stuff out, the lower amount of DRAM even as capacity grows, and it's not just 2 to 5 times better than other all-flash today; it's only gonna get better. And of course, I stole my own thunder; I was already talking about reliability, but we
40:38
had it there. So the entire Pure portfolio, I wanna make clear, is powered by what we're talking about here today. It's been a transition over time, but everything, whether it's latency-optimized platforms, throughput-optimized platforms, or cost-and-capacity-optimized platforms,
40:54
it's all powered by this fundamental engineering. Any last thoughts there before I move into the final section? Yeah, the one that gets less attention, but that I think could well be the most important. When I talk to folks, I spend a lot of time talking with folks that develop essentially hyperscale systems,
41:16
right? Those are people that really understand; of course they understand economics. But, you know, scaling relies on that reliability portion. And the cool thing there is that the reason it can be so much better than a traditional hard drive, or even an SSD, is its simplicity. At Pure, that's a mantra.
41:37
By removing almost all of the complexity from the drive, we're finding that it's remarkably reliable. And this is one of the things I was touching on: we claimed that would be true, and now we have years and years of evidence that it's certainly true. And that means you're not replacing something every day,
41:59
your encoding schemes can be much more efficient, and that gives you dividends in terms of performance and rebuild times. It just keeps giving back. And maybe the best part is that the reliability of an SSD is usually determined by firmware. People think about flash and how flaky it is,
42:21
and it's actually generally bugs. It's firmware getting too busy, running out of resource, and falling over. That's the traditional SSD failure mode. Our firmware is 10 times simpler, but it's still true that it's generally not flash that's causing these to fail.
42:39
And what that means is that as these things scale to these very, very large capacities, they're still reliable; they're not becoming less reliable as they scale. And that just gives you magic in terms of getting to really large failure domains. I mean, we work with petabytes all day long now, but we're doing exabyte-type deployments.
43:02
Those are staying sane because of this fundamental underlying quality and the reliability that comes from it. Awesome. I think we're gonna close up. And for those listening in who are thinking, well, we're getting close to the 45-minute mark: we're probably gonna cheat and go to about 50 minutes after the hour.
43:20
So we get into the last section, and I will end the second poll and share the results back here. Just in case you're wondering, it is actually a pretty decent distribution here. There's a clear winner in 10 to 20 terabytes, but right behind it is 5 to 9.9. I tried to set up the answers.
43:39
So they worked, you know, and weren't confusing. And we have 35% of folks where disk is still the main data storage device. Actually, the majority said yes, but only for specific things; we're gonna tease that out a little more. But a healthy percentage said, I'm done with disk. I'll wear that. Olivia, if you don't mind, launch the third poll.
44:00
And then, here we go. So looking at DirectFlash benefits, curious which is the most impactful for you. You can put more in the chat; please feel free. I actually love the comment from Tim: you know, we replaced all the NetApp and EMC and haven't looked back, and we're seeing the benefits of everything
44:16
we just talked about. That's great. Before today, did you know that Pure had an E family, which we're gonna talk about in the next section? And what do you think the E means? You can have fun with the answer if you want to; it's not like we're grading this.
44:30
The last section is, in some ways, a little bit of fun, when we talk about what is maybe even foundational architecture, so some of this should make a lot of sense. Three big items we want to hit on relatively briefly; it's not meant to be anticlimactic, but all of this should make sense. So the first one: at Accelerate, and even a little before that,
44:49
we announced some pretty large DirectFlash modules. What are the highlights here to you, Pete? Well, this should be obvious, since we put them in giant text: this is taking our fundamental advantage, and based on the chart you showed,
45:09
we're pushing our advantage, right? A 75-terabyte QLC drive is the biggest in the world, undoubtedly, and by a lot, and we're gonna push that further. But today this is what we've been able to do, and, you know, honestly, it takes a while for
45:29
all the rest of the systems to adapt to some of these capabilities. So that's the sweet spot now. The 36-terabyte TLC is the biggest TLC drive in the world, too. And so those two kind of travel together, and they will for a long time. But the capacity and high density that derive from those really give us a ton of system benefits.
45:53
The energy efficiency has always been near and dear to my heart. It's always been important for technology, but it's become critical, right? Especially in some parts of Europe. Fortunately, the US is waking up to the benefits. Everybody is, especially if you're out there in triple-digit temperatures.
46:16
Today, everybody is aware of these problems, and energy efficiency is one of the main solutions. And so we're great at energy efficiency, and being able to take big strides like this is a game changer in terms of how data centers are designed and how much you can get done in your infrastructure.
46:43
Of course, performance has to keep going up; that's always gonna be the story. And one thing we didn't really talk about much: we have systems that used to have a dedicated NVRAM function, and it's a super critical part of the system for keeping things reliable and performant, but it takes up a lot of space. And what we've
47:06
done is fold the NVRAM function into the DirectFlash modules, so that there's a little bit of NVRAM everywhere. And now it scales really well: as the system gets larger, you get more and more throughput and a little bit more capacity. And because of the advantages there, we've really emphasized that, and we've doubled
47:28
the capacity and performance of that function within the DirectFlash module. And so all of these are focused on making the systems better, and they're all directed at solving customer problems. But it's been a really nice transition to this new generation of technology.
47:46
We've got a path further. Item number two. This one, to me, was interesting, because it can almost be a little bit anticlimactic sometimes. I've worked both as a customer and as a partner, and sometimes it's like we have to talk about the
47:58
new hardware, because unless we get you excited enough about it, you won't do all the work to get to it, since it's gonna be so painful to move everything out. Insert the word Evergreen here; we've discussed that a lot. But it took a lot of work from you and your team, Pete, so I don't want to diminish it with the 'anti' side. FlashArray//X,
48:15
our fourth generation, the fourth generation of the X, and the C too, like it says there: increased performance at zero extra cost. If you're an Evergreen//Forever customer, you will eventually get these on your Evergreen refresh cycle. Obviously we put the highlights on here, but if you had to pick one or two highlights
48:33
for this announcement, what would it be? Yeah. You know, for me, this is like the seventh or eighth; I've lost count. And they do keep us on our toes. Again, one of the fundamental differences we're providing is the ability to upgrade
48:53
at a fast pace, without any disruption, and keep the infrastructure modern all the time. It's a mantra for us, and it's not just a marketing or business process; it's fundamental to the architecture, both in software and hardware. Another great example of how all those things really need to work together.
49:18
So this transition, even though we've done it before, is super exciting, and it says it right here: this is the largest single leap we've ever made in terms of performance. And so we're able to extend the value of the platform; folks can upgrade in place and get a really worthwhile bump in performance while we're scaling up the capacity.
49:43
Sometimes those need to go together. And so for X, I think the story here is that you can make this transition without disruption, and it's a really significant change to the capability. The story for C, I think, is a little different. C is newer. Again, that's our lower-cost line.
50:03
It's based on the QLC flash that we talked about. But, you know, it's probably four years old now. We started with one model, we went to two models, and this transition to R4 is a recognition of that success. It's got more models, three now I think,
50:23
and they bring a significant increase in top-end capacity. They are actually more performant as well. And so for me, that's really bringing the C from, hey, this should be important, to, wow, this has really been a huge success; let's bring it into the full-scale mainstream.
50:46
So with that, you'll see, and we do this because we send out the slides afterwards, there are various FlashArray//X models, with some of the use cases listed there, and various FlashArray//C models. There are three models now in the FlashArray//C line, matching the numbering of the X line, although it is more capacity-focused, call it 2 to 4 milliseconds. We used to say FlashArray//C
51:07
was the cost-optimized tier, until we announced both FlashBlade//E and FlashArray//E at Accelerate. And this is the first time we've actually had a family spanning FlashArray and FlashBlade. Hey, Pete, I'll give you last thoughts here before I summarize on
51:27
what's exciting about the E family, on both the FlashArray and the FlashBlade side. Well, I mean, fundamentally it's that trend that pulled us to do the C: doing larger and lower-cost systems even if they don't need to perform at the same level. Again, we're trying to replace disk, and this is the terrain of disk. And this will be the thing that does replace it. But also internally, you know,
51:54
last year we did FlashBlade//S, which was a huge improvement in the FlashBlade product line. We were able to put DirectFlash there and get a lot of commonality, which is great not just for the technology; it's great for our team to be able to focus on fewer things and make the quality better.
52:17
But now we're able to join those things together, bring those two product lines together, and create this continuum, because to get extremely large, you really want that FlashBlade architecture. And the picture here doesn't do it justice, because that thing can get physically big as you get to these huge 10-petabyte and,
52:40
spoiler, in the future, 20-petabyte namespaces. That's amazing, right? But that's where the FlashBlade architecture is the sweet spot. Below this transition, around the four-petabyte number, that's where FlashArray//E does a better job, right?
52:58
And so scaling down into the FlashArray architecture, that's the sweet spot there. We don't want to try to force one into the other or do something unnatural. So let's tie them together: you can understand the pricing very easily, you get a lot of common functionality, and you can manage those together
53:20
from our management plane, and it's a big happy family, like it shows there. So there's a panel that goes across, and I was looking at it; I was previously showing a more technical slide that showed how DirectFlash is underneath everything, whether it's performance, a blend of performance and capacity, or a total focus on capacity and, frankly, economics.
53:41
We've been weaving the cost thread throughout this. I do want to make sure to give a shout-out on the FlashArray side: we are now, I wanna say, 8 to 10 years into that chassis. It's pretty amazing the lifespan that we've gotten out of that chassis. You can see some of the workloads here; I'm not gonna try and summarize that.
53:59
I think that brings us to the end here. Thank you, Pete, so much for being an amazing guest. Don't leave yet; we're going to do a little bit of Q&A afterwards, so I hope you all stick around for a couple of minutes. Thank you for having me. It's been fun as always.
54:12
Thank you, really appreciate it. If anything, there's always more good stuff to talk about. For those who are wondering, we could have made this twice as long, and I think you would have enjoyed it too, because I heard a lot of good stuff. If you were
54:24
standing around for the drawing, just in case that's you: Brendan El from Ontario, you are the winner of an Ember 12-ounce travel mug, retail value I believe $130, or whatever Amazon says right now. It's the kind you can control with your phone, because, hey, that's cool. And it's actually kind of useful too.
54:40
My wife appropriated mine because she drinks a lot of tea, but I have friends who use theirs on a regular basis. Please make sure to join us next month for Data Center Fatigue? Need a Cloud Bridge? Make the Cloud Fit Your Business. We'll be joined by Jack Hogan, and we'll take the last five
54:58
minutes here to do a little bit of Q&A. I think we can virtually let our hair down; we both have about the same actual hair-letting-down capability, but we'll kind of pretend to do it. Let me close up and share the first poll here for anyone who is sticking around; here's where we are
55:14
now. We are officially done, but there are some great questions in the chat that I'll pull up in a second. If you were wondering, sharing the results with you: looking at DirectFlash benefits, what's the most impactful for you? I think, Pete, it's interesting that performance is number one,
55:28
but it's also kind of spread out, so hopefully this is interesting for you to see. And then, before today: a good number of folks follow along with Pure. Cool. But for 30% of you, we let you know something you didn't know. Cool, that's part of the point of this format. And then, what does the E stand for? I'm very disappointed that only 4% voted
55:50
for extraterrestrials, especially since I think there are gonna be congressional hearings on this stuff soon, or maybe I've just been watching the news. So, you know, economics, efficiency. What's your vote on that one, Pete? What's your favorite of the options?
56:05
I was actually really disappointed I wasn't able to vote, so we gotta fix that. But the energy one is near and dear for me. Though when we did brainstorm this, it meant all of those things, maybe even extraterrestrial; there's some magic inside. Yeah. So I'm thinking here, we're gonna do a
56:26
couple of questions, and it may only be two, because one is actually about something we talked about last month: the accelerator card. It now comes standard, the hardware compression accelerators, and someone was asking what they are. They are a custom piece of hardware that
56:50
we've built. Anything you want to comment on there, Pete, around the Direct Compress Accelerator card? I think I got the right name. I think so. Yeah, that one, I thought, was a really special project. So we started by saying, we've
57:07
got a gentleman here named Jan who went all the way; he went into compression so deep. It's a critical technology for us, and I love that we were able to have somebody like that go that deep. He became one of the top compression people in the world by studying the problem, and he came up with fundamentally better compression algorithms. With compression,
57:33
you can basically get better and better compression as long as you're willing to spend CPU cycles. And it's a really classic diminishing-returns problem, because you gotta spend a lot more cycles to find that next little bit of compression that's available. And by improving these algorithms, he actually made it much harder to compete with
57:54
hardware, because he made the software algorithm way more efficient. And so it's one of those moving targets: if you aim at the old target and somebody moves the state of the art forward, you're gonna miss. But what we did was take the new algorithms and implement them in software, and we also implemented them in hardware.
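The diminishing-returns tradeoff described here can be seen with any off-the-shelf compressor. This snippet uses Python's standard-library zlib purely as a stand-in illustration, not Pure's algorithms: higher effort levels spend more CPU for ever-smaller gains in ratio.

```python
# Compression levels trade CPU for ratio; the gains shrink as the level
# rises, which is the classic diminishing-returns curve described above.
import zlib

data = b"coffee break webinar transcript " * 4000  # compressible sample

for level in (1, 3, 6, 9):
    size = len(zlib.compress(data, level))
    print(f"level {level}: {size} bytes, ratio {len(data) / size:.1f}x")
```

Running this, the jump from level 1 to level 3 typically buys far more than the jump from 6 to 9, even though the higher levels cost considerably more CPU time.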
58:15
But, you know, these algorithms tend to have deeper levels. So we went deeper and deeper, so that we could get the best of both: better compression, and we also offloaded the CPU at the same time. And so this was a great system view, where you look at what's consuming a lot of resources in your system. But hey, we also want to push that part of the
58:38
capability forward, and this was a great application of that technology. And we're gonna do more; there are a lot of other areas that are similar, where we'd like to offload and we'd like to do it with specialized hardware. I think with that, we are at 59 minutes past the hour. We always kind of hang around the full time for questions,
58:58
a few fewer questions this time, maybe. Actually, I'll just ask: are you good for another couple of minutes, Pete? I'll toss out one last question, because it was actually just a really fun one. Let me see if I can find it back here now.
59:11
There have been comments about the reliability of DirectFlash and what folks have been seeing with their systems. So a question, if I get her name right, that was really: what did 25-to-30-years-ago Pete, or maybe 10-or-15-years-ago Pete, see as major developments?
59:28
Are there things that you thought might play out that didn't, or things that you thought wouldn't play out but did? Kind of Pete's view of the world 5, 10, 15 years ago, and how it shifted or not. Yeah. Well, I mean, 30 years ago, Pete was,
59:44
you know, concerned about skiing versus snowboarding or something like that. Again, I used to have a lot more fun. But yeah, these transitions keep coming, right? And so, 10 years ago, we talked about some of those transitions. But I think the fundamental one, even before I joined Pure,
01:00:04
was this flash revolution, right? And again, you look at every aspect of it being better except cost, and cost coming down, plus techniques like the ones we've talked about to further that cause. And that just changed my life. I decided to focus on the technology, I decided to join Pure, and then ride that trend.
01:00:27
But, you know, there are things like that in every area. We saw AI emerging at that time, and AI has been around for 30 years under different names, with different abilities, working at different things. But it was about 10 years ago, maybe 11, that it really hit a big inflection point, right? And now look at us: in some circles,
01:00:50
that's all we talk about. Certainly we spend a lot of time optimizing systems for those workloads now, and it's even changed the definition of compute for a lot of folks. You don't think about just a CPU anymore; you might think about TPUs and GPUs and how to apply them to these different kinds of workloads.
01:01:09
And so that one's been exciting. We work a lot in that area, and it's affected people I know that aren't in technology, which is also kind of fun, right? Kind of all aspects of life. Actually, I was joking a little bit with Calvin, who handles a lot of our solutions marketing around AI, and we were pulling forward the
01:01:29
cowbell meme: you know, more cowbell is great, and everything needs more AI these days, right? Kind of thing. That's right. OK. I think with that we're going to call it a wrap, Pete. Thank you.
01:01:42
Thank you once again for even staying a little bit longer because I wanted to toss that in. Please make sure to join us next month. I won't say the title again. You can see it on there. Um You'll get that. Yeah, everyone can use more cowbell. Absolutely. Right, Tim, you know, kind of thing in the, in the chat there.
01:01:56
Thank you again so much. I will turn the music back up and we will on behalf of Pete Kirkpatrick. Thank you for joining us and Pure Storage Coffee Break. Thank you so much for joining us today. I hope you have a great day, a great week and I will see you next month.

Andrew Miller

Lead Principal Technologist, Pure Storage

Pete Kirkpatrick

VP of Engineering & Chief Platform Architect, Platform, Pure Storage

Who knew that the best coffee break conversations would end up happening online? Each month, Pure’s Coffee Break series invites experts in technology and business to chat about the themes driving today’s IT agenda - much more ‘podcast’ than ‘webinar’. This is no webinar or training session—it’s a freewheeling conversation that’s as fun as it is informative and the perfect way to break up your day. While we’ll wander into Pure technology, our goal is to educate and entertain rather than sell.



This month we’ll explore:

  • Pete’s History at Pure - how he came to Pure and why he’s stayed.
  • TLC, QLC, NVMe, PCIe, Optane - say what? All the different trends and technologies Pete’s evaluated over the years, what worked, and what hasn’t.
  • DirectFlash - why it isn’t crazy for Pure to make our own flash modules instead of buying SSDs.
  • Recent platform hardware releases and the story behind each - FlashArray//XL, FlashBlade//S, FlashBlade//E, and more!

As always, we’ll keep it educational while exploring how Pure is offering capabilities and products that benefit you. The team will stay on after the webinar to answer questions for those who want to stay longer!
