00:01
Alright, hello everybody, how are you all doing? So I'm gonna talk a little bit about Enterprise Data Cloud. We're gonna jump straight into it, because it's a new thing, right, something that we've been working on for the last couple of months or so,
00:16
and it's been an evolution of where we've been going, not only over the last couple of years with our platform, but predominantly over the last ten years or so. I want to give you a little insight into what it is, how we're thinking about all the integration parts of it, and what we can use
00:32
today, and all things like that as well. Alright, I always like to start off with analogies. How many people think there are good drivers here? Yeah, I mean, I do, but I think you're all bad drivers.
00:48
And you should think I'm a bad driver as well, cos driving's probably one of the most dangerous things we do. Out of everything that we do, driving is absolutely one of the most dangerous, and why? Well, because we're all separate, we all do our own thing. There's some consistency in the way
01:04
that we all drive, but we're all individual people with individual things on our minds. Our cars all have different levels of maintenance. I live in Detroit; I see everything from salted-out rust buckets held together by gaffer tape and wiring,
01:20
to some of the craziest things coming out of the manufacturing centres there as well. But we're all reliant on each other to make sure our vehicles are maintained, and anything we do wrong, any difference in the way we think about things, creates accidents,
01:41
creates crashes, right? My understanding of a give way sign might be different from yours. Roundabouts: Americans, learn how to fucking drive roundabouts. We call them merry-go-rounds in Australia, but again, it's the same sort of thing.
01:56
And really, when we think about it (oh, where's my slide, yeah, let me go to this slide, I like this slide more), storage is very similar to the way that manual driving is today, right? Every system that we have is independently created.
02:17
I like to think about it as gardening versus farming. Every time you buy a new plant or flower, you're in a situation where you have to learn how to care for and feed that thing, and by the end of the day you're having to understand 20, 30, 40 different ways of doing things. That's generally the way we have to do storage: storage is very application-
02:36
architected, very connected into those applications as well, and we don't really lean in on the automation factor of it. Again, systems are multi-architecture and multi-siloed, so configuring those systems, updating them, and keeping them correct at the same level is super duper hard.
02:59
I mean, most organisations have anywhere between 3 and 6 different storage architectures, all the way from proper enterprise-ready block, file and object to some dev storing content on a Dropbox that they want to do a bit of AI training on, right? And we have to manage all that, the configuration, keeping it all up to date, because,
03:23
I always have this saying: we don't get fired because things are expensive, we get fired when things go wrong from a security perspective, and misconfiguration is probably the biggest area associated with that. When misconfigurations happen, that's when things go bad. If I have a new security policy,
03:44
rolling that out across most storage architectures is really, really hard, because I manually have to touch a tonne of stuff to do it. Maybe I can script it, but what if scripts go wrong and APIs change? So this has really been a problem for a while, and we've solved it through brute force. This has been the solution for the last 10 or 15
04:05
years: everybody in this room, myself included, we just solved it through brute force. We basically put more time, effort and operations into it. That's what we did as systems expanded and we took on new architectures, cloud architectures, different architectures. We just threw more and more resources
04:23
at it, and it's the dripping pipe analogy: sooner or later we're gonna hit a wall, and we've hit that wall right now. And AI is causing this. I know you're hearing the word AI all the time and you're gonna hear the
04:38
word AI a lot, right? But AI is causing a really interesting thing right now, and that is the amount of data sprawl that AI is causing is out of this world. We've gone from 3 to 4 copies of most data in our organisations to 9 to 11, and that's just because of AI, because people are copying data to train systems,
04:59
right? And on top of that, a lot of that data, I don't even know where it is, because again, devs are taking it and doing this stuff, so I can't even be sure the data I'm using is actually correct, that it's original, that it hasn't drifted.
05:13
The second thing, and I'm sure you're feeling this as well: budgets are super tight right now. I'm not getting more IT budget to expand my capability at the moment, and my CEO is belting on me to say, hey, we need to take half of that budget and put it towards AI projects now. And so where do we
05:32
make those savings? Operations. So they're like, hey, pull money out of operations, which is exactly where I'm pouring more money in to solve this legacy issue right now. And this is that multi-architecture, multi-operating-environment, multi-management environment that we've all grown up with in the past,
05:48
right? So we've had to think about how to solve this. If you think about the car analogy, how do you solve for that? Well, cars in the future, eventually, and it will take a little while, will take the human element out of it,
06:09
right? There are no misconfigurations or collisions when all the cars talk to each other. When all the cars on the road know what all the other cars are doing, there are no surprises. So you have that intelligent automation associated with that as well, and you have consistent operations.
06:27
Everything is done the same way; it's the same recipe, the same blueprint, the same workflow. The automated cars have all this built-in traffic avoidance, and of course we have this as-a-service aspect, you know, maybe Elon Musk will solve the robotaxi world,
06:49
right, and then we'll have this complete as-a-service intelligent capability. You can see there are a lot of analogies in there, and we wanted to translate that into storage and say, alright, can we take that multi-operational stuff we're dealing with right now and start automating it? Can we get to a world where we stop managing
07:13
storage infrastructure, where storage infrastructure is just a black box that we rely on being there? Almost like the cloud: with Azure and AWS, we just expect them to be there at a certain SLA for performance and capacity. And then we can start thinking about how we manage our data, because data is really
07:36
where the goal is, right? How do we start really managing and controlling our data: who has access, what has access, how they have access, and can I ensure I'm using that data in the correct way? That's the great vision, the great vision of Pure going, hey, eventually we're going to forge the same path as our brethren,
07:59
like Snowflake and Tableau do. Eventually I think Pure will evolve into a data organisation where we crack into data and we can make determinations on how we use that data. We're not there yet, but you're starting to see aspects of it. And that's a great future, but it's not a future you guys can take advantage of right
08:22
now, right? You wanna take advantage of something right now. Right now we've got an operations issue, this huge manual-overhead operations issue, and so the concept is: how do you build a system now that solves just what's hard right now, multi-architecture operations? Like, I'm having to deploy
08:44
every single application completely differently from every other application, and maintain and manage them completely differently from everything else. There's no consistency in the way I'm doing things, and solving that is gonna solve a massive problem for you right now. Think about provisioning, or using the cloud,
08:58
or how I do security policies and things like that, which is just day-to-day operations. If we can solve that and reduce it down to an almost zero-cost operations aspect, that frees up a tonne of time, effort and budget for you to go focus elsewhere. If we get out of that, then we can start thinking about how we transform into a very policy-driven workflow
09:19
aspect, where I start thinking about the world as policies and recipes and workflows, and I can spend my time and effort on that. To say, alright, how do we roll out an application, or how do we support a DevOps testing environment, like a sandbox environment? What are the steps I go through consistently, and can I create a recipe around that, feed that into the system, and then
09:41
build some compliance and governance around it? Cos once I've got that, then that's fully automated for the rest of my life. I don't ever have to be involved in it again. If a DevOps person wants to create a sandbox environment for a test they wanna run, that can be done straight through an API, straight through their Terraform
09:56
environment or something like that. I don't have to be involved in that. So my evolution goes from managing this storage to thinking about how I'm dealing with these policies and these workflows. Yes, you have a question.
10:21
The right-hand side, that's the future, yes, we're really focused on data there. Bingo, yes, knowing it's a social security number, right. [Audience question: but before that, for everything to do with operations, to what extent has infrastructure as code already addressed it?] Well, it has addressed it for a tonne of areas.
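The recipe idea from a moment ago, a workflow captured once as a policy and then replayed through an API with no storage admin in the loop, can be sketched like this. Every class, field and function name below is invented for illustration; this is not a real Purity, Fusion or Terraform API.

```python
# A minimal sketch of the "recipe" concept: capture the sandbox steps once,
# then replay them on demand. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class SandboxRecipe:
    size_gb: int            # capacity every sandbox volume gets
    snapshot_schedule: str  # protection policy baked into the recipe
    ttl_days: int           # sandboxes auto-expire, so no orphaned copies

def provision_sandbox(recipe: SandboxRecipe, requester: str) -> dict:
    """Replay the recipe: the same steps every time, fully automatable."""
    return {
        "owner": requester,
        "size_gb": recipe.size_gb,
        "protection": recipe.snapshot_schedule,
        "expires_in_days": recipe.ttl_days,
    }

# A DevOps person self-serves through the API (or Terraform wrapping it):
dev_sandbox = SandboxRecipe(size_gb=500, snapshot_schedule="hourly", ttl_days=14)
result = provision_sandbox(dev_sandbox, requester="dev-team-a")
```

The point is that the governance (protection schedule, expiry) travels with the recipe, so every replay is compliant by construction.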
10:37
I just don't think it's addressed it for storage, right? And this isn't new; I'm not gonna stand up here and say, hey, Enterprise Data Cloud is brand new and no one's ever done this. No. GreenLake is about 40% there, I guess,
10:53
in the way they can do things, and there are a tonne of companies trying to address this. The problem is that when you don't have these unified bottom layers, things are always gonna screw up. I worked at Microsoft for a long time and we were doing identity
11:11
management. We had this technology for identity and access management, IAM, a long, long time ago, right? And what was it trying to do? It said, hey, there are all these different authentication systems out there,
11:28
and there's no unification across the authentication systems, so what I'm gonna do is create a meta identity manager and hook into everything. And it works really well for a week, and then someone changes something, and that stops working, and then someone changes something over there, and that stops working,
11:42
and that's what happens, right? That's why this has failed in the past: you can build these meta systems on top, but as soon as you change anything below, they all break. And so the idea is, how do you make sure that the layer below is completely unified?
12:01
And that's the idea of what we're getting at here: how do we focus on this core area from a unification perspective and then build these workloads on top? I'm gonna walk through how we're doing that in just a second. Right? So what is an Enterprise Data Cloud?
12:16
Now, an Enterprise Data Cloud is not something we sell. It's not a product, it's not a SKU; it's something a customer is gonna build, something that you're gonna implement. And an Enterprise Data Cloud itself is actually pretty easy; it's really just a way to think about how we cloudify storage across any
12:34
environment. Cause we all want to run like the cloud, but one of the benefits the cloud has got is billions and billions of dollars of R&D to support this incredible investment in workflow automation, which is the concept here, right? And if you think about what the cloud did,
12:52
well, they took all their storage, all their networking and all their compute, and they virtualized it. They put a virtualization layer on top so that you don't have to deal with any of the dependencies of the hardware configurations, right? And then on top of that, they built a very intelligent, automated control plane.
13:11
If you go to the Quincy Azure data centre in Washington, there are 4 people that work there, and 3 of them are security guards. The other person pops hardware in and out when blades break, you know, when they get down to an SLA they go and replace hardware. No-one's sitting down configuring and typing when you create a service.
13:35
No-one's doing that, it's completely automated. So how do we take those learnings and apply them here? One thing the cloud did incredibly well is they really separated hardware and software. They said, hey, we can't have this very hard configuration dependency on hardware. We're gonna invest in the hardware vendors to
13:53
ensure that they're cloud capable, that they can hook into the APIs and those APIs are consistent, but we're gonna really control the software layer. And that's the core concept of this Enterprise Data Cloud: how do you create a very intelligent software layer on top of a hardware layer and be able to separate the two? Now, one thing that Chad talked about before is that we just did this big deal with
14:16
Meta. Meta's actually using our DFM hardware and our DirectFlash technology and integrating it into their operating environment. Because we have such a separation of software and hardware, they can take our DirectFlash translation layer, hook it straight into their software layer, and talk straight to a DFM that they're helping us to
14:39
create. So they've got a DFM architecture that they're creating with us. Because again, we don't rely on the complexity of an SSD. SSDs are wonderful things, but they're really complex; they have a tonne of on-chip capability. If you pull all of that out and put it in software,
14:56
you can really do amazing things with that hardware, like how can we go from 75 to 150 to 300 terabytes as fast as we can? Because our DFMs are just NAND, really just raw NAND. All the other stuff, the compression, the inline stuff, it's all in software for us, right?
15:15
That's why we get low watts per terabyte: we don't have any of the firmware systems that most SSDs have. So we looked at the structure and went, alright, can we take a lot of the complexity, like the integration at the hardware level, put that in software, and make our hardware as simple as possible?
15:31
This is why we can get such performance figures in our FlashArray and FlashBlade, //XL and //ST aspects, cos we push a tonne of this stuff into software. But again, an Enterprise Data Cloud is essentially a virtualized, unified data layer. How do I take all my storage and virtualize it into a single pool? Once I can virtualize that, I can run automation and intelligence and workflows
15:56
on top of it, and I can access it either directly through runbooks or, generally, through an API, so I can actually remove the human element, the ticketing element, out of those support flows. And how do we do it? We do it with our platform.
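The single-pool idea just described can be sketched as follows. This is illustrative only: the class, the array names and the capacities are all invented, and the real platform's placement logic is far richer than "most free space wins".

```python
# Illustrative sketch of a virtualized storage pool: heterogeneous arrays
# register into one virtual layer, and callers provision against the pool,
# never against a specific box.
class VirtualPool:
    def __init__(self):
        self.arrays = {}  # array name -> free capacity in TB

    def register(self, name: str, free_tb: float) -> None:
        self.arrays[name] = free_tb

    @property
    def total_free_tb(self) -> float:
        # The caller sees one pool-wide number, not per-box capacities.
        return sum(self.arrays.values())

    def provision(self, size_tb: float) -> str:
        # Place on whichever member has the most headroom; the human (and
        # the ticket) drops out of the loop entirely.
        target = max(self.arrays, key=self.arrays.get)
        self.arrays[target] -= size_tb
        return target

pool = VirtualPool()
pool.register("array-dc1", 120.0)
pool.register("array-dc2", 80.0)
placed_on = pool.provision(10.0)  # lands on the box with the most headroom
```

The caller never names an array; the pool decides, which is exactly the "black box you rely on being there" behaviour described above.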
16:12
So we have our Pure Storage platform, which we've been talking about for a little while, and that's the basis on which we provide an Enterprise Data Cloud. So let me walk through each of the layers of the platform and how we do it. Alright, let's start with the unified data plane. Now, our Evergreen architecture,
16:31
this is what we talked about before: why doesn't this work now? Why can't other vendors do this now? Because you had to start at the silicon, you had to start on day one and make things a little bit different. So we went, hey, with our Evergreen architecture, how do we...
16:49
Let me go plug this back in. Here we go, that should just work, shouldn't it? Yeah, look at that. So when we think about our architecture, as I said, 10 or 15 years ago, and I don't know if you've ever seen this, but if you haven't, and you're really bored one day and want to read a
17:11
really thick white paper written by Coz, he wrote the 15 architectural principles of Pure Storage, probably about 18 years ago. They were things like: we have to have stateless controllers, and all this type of stuff. But there were some really interesting things in there, and his point was: we have to be like the cloud.
17:30
He goes, I think the cloud's gonna be really important one day, and we have to think about what they're doing; we have to start being able to separate our hardware and software, and put more of our innovation into the software capabilities, because we feel that's where it's going to grow. And so that became an architectural principle in the way that we've
17:48
built things. If you think about our entire Evergreen architecture, we've invested in this idea of non-disruptive, always-continuous upgrades, both hardware and software, so that we can keep everything up to date. The separation of hardware and software means we put all this goodness into our Purity
18:08
environment, which is now unified across FlashArray, FlashBlade and Cloud Block Store, so it's one Purity operating system across the lot. And from an architectural perspective, FlashArray now supports block, file and object, so we have a single architecture and a single operating environment. This solves half your problems already,
18:27
right? Because now you don't have to deal with multiple different systems: a block system, a file system, an object system, and a cloud system. That's 4 architectures already, and I'm gonna multiply that by heaps. Now I have 1, and maybe 2 if I wanna run FlashBlade for specific, very high-end
18:43
unstructured data support. FlashBlade is a different architecture from FlashArray, but it's the same operating environment, so if I put a FlashArray and a FlashBlade together, it's the same operating system underneath, and I interact with it, manage it, build it and provision it exactly the same way.
19:01
So it starts with the Evergreen architecture; that was our value. We started this journey 15 years ago, and I call it the Iron Man journey. I don't know which Iron Man movie it was, 2 or 3, where Howard Stark built the big Expo model, and he said to Tony,
19:20
I don't have the technology today to build this element, I think it was. God, I'm terrible at Marvel movies, aren't I? But you will. And I think that was Coz 18 years ago: I think this is gonna be big one day, I think we're going to have to think about storage virtualization like a hyperscaler. But he couldn't put it into practice; we didn't have the technology,
19:39
we had to go and make all-flash available to everybody first. But now we're at a point where the way we architected it originally means we can have a unified single architecture from a hardware perspective and build a common operating environment on top. The next thing is this unified data plane.
19:55
How do we create a virtual cloud of storage? Fusion has really helped us do that. Now, I'm not sure if you've been around Fusion; this is our second version of it. We had a first version, which was very focused on the hyperscale model. Our first version of Fusion actually implemented availability zones and implemented provisioning like a cloud:
20:18
two different layers, a user provisioning service and an administrative provisioning service; it made things look and feel very much like a cloud. It had some limitations, though, around things like Fibre Channel, and we went, alright,
20:37
these limitations we have to get around. So we really went back to the drawing board and said, alright, how do we think about Fusion differently? And the first thing we did is we just put Fusion into Purity. Instead of having it as a separate technology, let's put it into the common operating environment, and then let's hook its tendrils into everything.
20:57
Fusion is like the brain of our entire Purity operating environment. Everything goes through Fusion from an API perspective, and when arrays get provisioned, they talk to each other, so every array knows what every other array is. Now you have this common operating environment that can actually understand capacity, workload,
21:15
usage, utilisation, and the whole makeup of the infrastructure. The other thing also: because within Purity we have this one unified environment that does everything, from supporting your block, file and object to all the data services you need as well, you now have this single operating environment that can be
21:37
deployed on premises, deployed in a hybrid environment, or deployed in a public cloud environment. Now, in a public cloud environment we're not gonna own the hardware; there's no FlashArray or FlashBlade there, so you're still gonna be subject to the Azure and AWS aspect,
21:49
but the way it interacts with Azure and AWS is exactly the same, right? You're not gonna get the same performance characteristics that you'd get out of a FlashArray, but then you're not gonna be using the cloud for that aspect anyway. The way you interact with it is exactly the
22:03
same; there is no difference. In fact, you could run a whole Pure enterprise cloud infrastructure 100% in Azure and have exactly the same experience that we're talking about today, that you saw out there today. You could run it 100% in Azure and not have one FlashArray or FlashBlade
22:24
device in your organisation, and you'd have exactly the same experience, because of this common operating environment. I quickly talked about the Meta stuff as well; this was pretty amazing for us because of what it opens the door to. Now, I might be a little cheeky here, but what it opens us up to do is this: because we've separated the hardware and software via the common operating environment,
22:45
and we have DirectFlash, which is our translation layer, the way we talk to the underlying hardware, that's extremely extensible, and Meta is using that extensibility. So there's an opportunity for us to extend it beyond what Meta's using. We're talking to all the other cloud vendors as well,
23:07
but we're also talking to our competitors, because there might be an opportunity for us to actually build an Enterprise Data Cloud on HPE or Dell technology. At that stage, we're just looking at raw storage, block, file and object,
23:25
and we can go and provision and interact with that storage through that layer. So that API layer, this new layer that we have, gives us the ability to do that. I'm not saying we're doing that now, and I'm not saying we'll ever do it, but the opportunity is absolutely there in the way that we think about
23:43
the translation layer between those as well; it's not just tied into our FlashArray and FlashBlade. Our Purity environment and our DirectFlash technology now treat FlashArray and FlashBlade really just as raw storage that has a tonne of performance and capability. So theoretically we could roll any type of storage in there
24:03
as part of it as well. And I think that's something that's super duper interesting for us as we move forward, once we've got our own backyard in order, cos we're still building this, working with everybody here to build it. But as we move forward, there might be an opportunity for us to think of ourselves
24:20
more as a data management layer. You can see the evolution into that data management story versus the storage infrastructure story. The other thing as well: because of the integration in the Purity environment, we have all our cyber capability associated with that, both front-door protection and the recovery aspect, just built
24:45
in and integrated as part of a workflow model. So security is never an add-on with us, because it's just available in Purity. So think about creating a workflow. If I want to deploy an application, well, deploying applications is a pain in the arse, right? Because I can deploy the application, I can deploy the volume, and I can deploy the application into that volume,
25:04
but then think about all the day 1 and day 2 operations: I've got security policies, I've got all this type of stuff, what about replication? There's all this stuff that I still need to do with it, and most of the time you can touch it 30 to 40 times in a deployment, right? Now, that's because you're using different
25:24
systems and different aspects to do it. Because this is all built into Purity, we can actually just automate it straight into a deployment mechanism. This is one of the biggest parts of our workflow aspect, which we're gonna talk about right now: our intelligent control plane. This is where the magic happens.
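The automated deployment just described, where the volume, protection, security and replication all travel in one call instead of 30 to 40 manual touches, can be sketched like this. The function and step names are hypothetical, not a real Purity API.

```python
# Sketch: one deployment call that bundles everything the talk lists as
# separate manual touches. Names are invented for the example.
def deploy_application(name: str, size_gb: int) -> dict:
    steps = [
        f"create volume {name}-vol ({size_gb} GB)",
        f"attach snapshot policy to {name}-vol",     # day-2 protection, done on day 0
        f"attach security policy to {name}-vol",     # security is never an add-on
        f"enable async replication for {name}-vol",  # replication baked into the workflow
    ]
    # Every step is executed by the platform, so the human touch count is zero.
    return {"app": name, "steps": steps, "manual_touches": 0}

deployment = deploy_application("billing", size_gb=2000)
```

The design point is that the policies are part of the deployment recipe itself, so there is no window where the volume exists unprotected or unsecured.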
25:41
So if you think about it, I've got an Evergreen architecture now, so I've got a consistent block, file and object hardware layer, or a cloud hardware layer that mimics that, supporting block, file or object. On top of that I've got a common operating environment. And on top of that, I've built a virtualized cloud of data, because now I can
26:03
take all those storage nodes and unify them under a virtual banner. They're still physical, cos you're never gonna get away from physical locations of storage, nor do you want to. I'm gonna have some in the cloud, some protected storage here, some high-capacity storage, maybe some archive storage,
26:22
and it's gonna behove me to continue to have physical storage. But now I can virtualize that into one virtual layer. I can create tiers if I want: gold, silver, bronze, high performance, low performance, security tiers. I can carve it up any way my business needs. But I'm dealing with that virtual environment now.
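The tier carving described above can be sketched as a small policy table: each tier is a policy, and an application's needs map to the cheapest tier that satisfies them. The tier names, fields and numbers below are illustrative only, not a real product feature.

```python
# Hypothetical tier definitions for the virtual layer. "iops" is the
# throughput a tier can deliver; "replication" whether it includes it.
TIERS = {
    "bronze": {"iops": 1_000,   "replication": False},  # e.g. archive
    "silver": {"iops": 20_000,  "replication": True},   # e.g. capacity flash
    "gold":   {"iops": 100_000, "replication": True},   # e.g. high-performance flash
}

def tier_for(required_iops: int, needs_replication: bool) -> str:
    """Walk from cheapest to most expensive and take the first tier that fits."""
    for name in ("bronze", "silver", "gold"):
        tier = TIERS[name]
        if tier["iops"] >= required_iops and (tier["replication"] or not needs_replication):
            return name
    return "gold"  # nothing cheaper fits, so fall back to the top tier

choice = tier_for(required_iops=15_000, needs_replication=True)
```

Because the business deals only with tier names, the physical layout underneath can change without any application-facing policy changing.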
26:40
I'm not dealing with the physical environment. Yeah, I'm gonna have some people dealing with that physical environment for me, maybe it's me too, deploying hardware, plugging it into the array. But when I plug it into, you know, the Skynet here, it should just automatically configure itself as part of where it needs to be, and get to the
26:58
point where, you know, Chad was talking about our dynamic rebalancing. The idea is that you should be able to put an array in, and the array is intelligent enough to know where it lives as part of your policies. We're gonna get to the point where the arrays are intelligent enough for that.
27:14
So once I've got that virtual cloud of data, I'm moving away from things like copy data management, because I can now expose all my original data and have all my people use unified data access to hit that data. I don't have to worry about different systems, different policies, different front doors; I can just control data access. If I am creating copies, I can govern those copies now,
27:35
I can control those copies. If people try to make copies of my copies, I can block them a lot more easily, because again I have a single point of reference. And then I can take that for a spin. I've built the cool car; let's go see what it can do, right? And that's the intelligent control plane. Because again, Chad was saying this morning,
27:54
there have been previous attempts at this. People have created these governance platforms or observability platforms, but you've had to hook them in, and you get really hot on them for a while, and then your enthusiasm wanes. People have tried storage virtualization layers,
28:15
right? It sort of happened, but again, if you're changing the underlying architecture a lot, then virtualization always breaks down. It works great for 6 months, maybe a year, and then you release a new family architecture, boom, everything breaks, and everything has to be redone. So there's a tonne of operational management and
28:35
overhead that you need just to keep that up and running, right? And people have tried to create automation platforms before. You know, my identity manager was a great example: they work for a hot minute, until an API changes, until an upgrade happens,
28:49
until something happens, and boom. The amount of effort you have to put in to keep these maintained is tough. Imagine trying to do all three; it's just too hard, right? So what we've done here is to say: alright, let's think about how we federate
29:04
all the arrays into one single plane where you can do simple and easy fleet management: remote management, so you can touch any array in your organisation from anywhere else in the organisation, and that's smart enough that it can start rebalancing and maximising utilisation.
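To make that "federate everything into one plane" idea concrete, here's a tiny sketch of a fleet registry that groups individual arrays into availability zones and tiers so they can be addressed as one pool. Every class, field name and number here is invented for illustration; this is not Fusion's actual API.

```python
# Hypothetical fleet registry: treat many arrays as one pool, grouped by
# availability zone and tier. All names and figures are illustrative only.
from collections import defaultdict

class Fleet:
    def __init__(self):
        self._arrays = {}  # array name -> metadata

    def register(self, name, az, tier, free_tib):
        """Add an array to the fleet with its zone, tier and free capacity."""
        self._arrays[name] = {"az": az, "tier": tier, "free_tib": free_tib}

    def by_zone(self):
        """Group array names by availability zone."""
        zones = defaultdict(list)
        for name, meta in self._arrays.items():
            zones[meta["az"]].append(name)
        return dict(zones)

    def capacity(self, tier):
        """Aggregate free capacity (TiB) across the whole fleet for one tier."""
        return sum(m["free_tib"] for m in self._arrays.values()
                   if m["tier"] == tier)

fleet = Fleet()
fleet.register("fa-xl-01", az="az-1", tier="performance", free_tib=120)
fleet.register("fa-x-02",  az="az-2", tier="performance", free_tib=30)
fleet.register("fa-e-01",  az="az-1", tier="capacity",    free_tib=500)
print(fleet.by_zone())                # {'az-1': ['fa-xl-01', 'fa-e-01'], 'az-2': ['fa-x-02']}
print(fleet.capacity("performance"))  # 150
```

Once arrays report into a registry like this, "touch any array from anywhere" and fleet-wide utilisation questions become lookups against one object instead of logins to many boxes.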
29:27
And you have a control plane that sits on top of all these and says: alright, I know what every single one of these arrays is doing. I know how they're utilised, I know their capacity, I know their throughput, and I know the characteristics of the application we're gonna deploy, based on a set of blueprints that you've sent me. I have an understanding of the
29:46
the, the performance needs of that application. So when I go and deploy that, I can make intelligent choices about where it gets provision, so I don't have to worry as a storage, I don't have to worry more about the the provision deployment. The system can just choose itself based on, hey, this application's gonna require this much
30:03
capacity, throughput and performance, so I'm gonna put it here; and by putting it here, I might have to rebalance across my array environment. That's what's coming with dynamic rebalancing: as I deploy these things out, let me rebalance workloads and data across the fleet based on what I need to do. Again, I need an intelligent layer that sits
30:23
above this that has access to every single array, all my data, all my usage characteristics, and not only that, has access to other people's characteristics as well, so we can start making intelligent decisions based on it. If you think about Pure1: from a Pure1 perspective,
30:43
you know, we've been working with our customers for over a decade, capturing characteristics of their organisations. I mean, you own your data, but you actually share the characteristics of that data with us: what type of data it is, the performance and capacity aspects of it,
30:58
where the problems are, where the bottlenecks are. We've been capturing that data with you for about 10 years, and we feed it back to you from an intelligent AIOps perspective. If you're a Pure customer now, you've probably had calls from Pure going: hey, you might have a problem.
31:13
Right, our systems will actually be aware of a problem before you will, because of this intelligent aspect. Now that's great on its own, but as I said, we've been working with hundreds of thousands of customers gathering that heuristic data, so we can start predicting what workloads look like, what the impact of workloads looks like,
31:33
because we have so much rich data that we can feed into that intelligent control plane. So you can see how all these things we've been building over the last 15 years are really starting to gel together now. Right, the journey to the intelligent control plane is pretty easy; to me it's like a maturity model.
31:49
Right? First thing: if you're a Pure customer now, go turn Fusion on. It's in Purity; you've got to upgrade to Purity 6.2.4, maybe, but it's there, right? You can just upgrade to it now, and then you can turn fleet and remote management on
32:07
that, so you can start bringing all your arrays into a single fleet, defining availability zones associated with those fleets. That's your first step of data virtualization: you can now start treating your multiple isolated sets of arrays as single sets of arrays, and start building availability zones or
32:25
tiers upon them. I'm gonna make that a performance tier? Yes, it will be, yes. It's a little different from a characteristics perspective, because a lot of our dark-site stuff uses our long-term tier, which isn't the first thing Fusion covers right now, so it's gonna lag a little bit, but we're working on the compliance aspects of
32:49
those, yes. So the next thing is starting to think about these presets and workloads. I saw a demo of presets and workloads this morning, with the idea that you can start developing these preset characteristics within Fusion. So you can then offer application deployers or API systems
33:12
a recipe or a menu of things that they can go and configure or consume. So it's not a free-for-all; it's: hey, I want to deploy this application, it has these certain performance characteristics, and it can then offer the certain tiers it can go on, which are essentially the array sets. Exactly the way Azure and AWS work,
33:30
where if you're deploying things in Azure or AWS you've got a recipe of tiers: what's the performance tier, capability tier, capacity tier that I need? Alright, two super-simple things to do: you can take our Oracle or our SQL workbook that's available up on our website, you can put Fusion on, and you can have that up and running in minutes,
33:50
and you can actually deploy a full end-to-end Oracle solution in under a couple of seconds, integrating with your ServiceNow; you can do it automatically. And that's just the start of it. As I said, that gives you that first blueprint to move into the next step: once the system is intelligent enough, once it has access to all your workbooks,
34:10
has access to all the configurations, has access to all this intelligence, it moves into a much more dynamic mode: able to rebalance and optimise, and then track everything it does to enable things like compliance, reporting and governance, because that's exactly where we want to get: the
34:32
idea of lifecycle of data, and then creating compliance of data. And to do that you have to have a set of dataset management, right? That's the journey as we go. Now, again, we're not entirely there yet; this is a journey that we're gonna go on together. You know, I think we're in
34:47
the third area right now; we have a little bit of the compliance dashboard stuff. I think Chad showed some of the compliance dashboard stuff we have today around drift support. You can see this stuff is just gonna get stronger and stronger as we go. Yes? So the future state:
35:03
when, let's say, AWS says "we want to run on you", and GCP says "we don't want to mess with this storage ops"? Yeah, yes. That is one desired future state, because we're never gonna
35:29
replace Google's control plane, right? They have their own automated control plane that they deal with storage through, but what we wanna do is replace every single one of their arrays underneath, because Google runs on hard drives, right? That's what Meta did. Meta was running on hard drives; they have a little bit of flash and a tonne of hard
35:45
drives, and they're like, we want to get out of the hard drive game. But their control plane is integrated into other areas. Now, they could totally use our control plane, but then they'd have to rebuild those other areas. So yes, I would love that, but to me,
36:00
it's not the hyperscalers. I don't want the hyperscalers using this, I want you guys using this, right? Because the concept here is that you can become exactly what a Google or an Azure or AWS can do from a storage perspective. Mhm. And on compute, the same thing that they do with storage?
36:29
Yes. No, they can't, no. And everybody else is trying to go with hyperscalers, because we're thinking: why do we need to reinvent everything, they already do it. So at the end of it, that's why. But see, the problem is, absolutely, if budget wasn't
36:51
a problem, hyperscalers would be a great answer, right, because hyperscalers give you this full automation. But regulations and compliance, you know, data sovereignty, there's all these things that are blockers for hyperscalers. And at the end, we all ran the hyperscaler idea: we put a tonne of stuff on hyperscalers, and there's more stuff coming back from
37:11
hyperscalers now because of cost, management, all these things as well. Because remember, most organisations still have to run all their on-premises stuff and their hyperscaler stuff, and hyperscalers just add a complete new management paradigm. So what we're trying to do is say: we'll come in across the top from a storage perspective and treat it as one, and you can then choose whether you wanna run that same
37:32
thing on premises or in the cloud. If you want to, yes, there's always gonna be a crossover point. Like archival: I would put archival in AWS because, you know, Glacier storage is so cheap, it makes a tonne of sense. You could do archive on premises for sure, and it'd be what,
37:53
18 cents a gig or something? But archive in AWS is like $0.01 a gig or something, right? So it makes sense to put archive in AWS. But again, you've now got another system that you manage, whereas in our system that would just be an archival tier as part of your enterprise data cloud. You wouldn't even know,
38:10
most of the time, that it's on AWS, unless you were the storage person. Alright, and Chad went through these aspects of pre-configurations, or presets, and what this looks like as well, so I won't labour on this idea. But the concept here is that when we think about all the steps that we do to configure a
38:33
volume, what we're trying to do with EDC is just automate all that capability: do it once, put it in the system, and then be able to utilise that as part of your workflows. And you've got complete control of that; you can just use the runbooks associated with each of these areas to deploy stuff, or you could automate that in an entire recipe,
38:53
of course. And this is a cute little diagram that my PMs put together about workload deployment with presets. I think the easiest way to read it is that on provisioning, all your day 0, 1 and 2 operations just happen. So not just the provisioning of the,
39:18
you know, the volume on the right system, on the right array, balanced the right way, so I'm sure it's not affecting every other volume, but all the other tasks too: protection, QoS, data volumes, all this stuff that that Oracle or SQL workflow needs is automatically integrated into that as well.
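As a rough picture of what a workbook that bundles provisioning with those day-1/2 policies might carry, here's a hypothetical preset. Every field name is invented for illustration; a real Fusion preset will differ.

```python
import json

# Hypothetical workload preset: provisioning plus protection, QoS and ITSM
# policies declared together, so day-0/1/2 operations happen in one step.
oracle_preset = {
    "name": "oracle-prod",
    "placement": {"tier": "performance", "availability_zone": "az-1"},
    "volumes": [
        {"name": "ora-data", "size_tib": 10, "qos": {"iops_limit": 300_000}},
        {"name": "ora-redo", "size_tib": 1,  "qos": {"iops_limit": 50_000}},
    ],
    # Day-1/2 policies attached at provision time rather than bolted on later:
    "protection": {"snapshots": "hourly", "replication": "async-15m"},
    "itsm": {"servicenow_ticket": True},  # close the loop with your catalogue
}

def validate(preset: dict) -> list[str]:
    """Minimal sanity check before handing the recipe to a control plane."""
    required = ("name", "placement", "volumes", "protection")
    return [f"missing {k}" for k in required if k not in preset]

assert validate(oracle_preset) == []
print(json.dumps(oracle_preset["placement"]))
```

Because the whole recipe is data, it can live in version control and be offered as a menu item to application teams, which is the "not a free-for-all" point from earlier.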
39:37
And then you can hook this into any type of tracking system you want, whether you're using ServiceNow or some other service catalogue; you can integrate that into it as well. So again, you're removing yourself from that process. Chad showed compliance reporting this morning, so I won't go through that as well.
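On the archive economics mentioned a couple of minutes back, the arithmetic is simple enough to sketch. Both rates are just the talk's rough figures (18 cents a gig on premises versus a cent a gig in AWS), not current price lists, and the archive size is hypothetical.

```python
# Back-of-envelope archive cost comparison using the talk's rough figures.
def archive_cost(gb: float, rate_per_gb: float) -> float:
    """Cost for one billing period at a flat per-GB rate."""
    return gb * rate_per_gb

archive_gb = 500_000  # hypothetical 500 TB archive tier
on_prem = archive_cost(archive_gb, 0.18)  # "18 cents a gig"
cloud = archive_cost(archive_gb, 0.01)    # "$0.01 a gig"
print(f"on-prem ${on_prem:,.0f} vs cloud ${cloud:,.0f} ({on_prem / cloud:.0f}x)")
```

At these rates the cloud tier is 18x cheaper, which is the whole argument for letting the enterprise data cloud treat AWS as just another archival tier.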
39:54
He also showed AI Copilot, which is so cool. You should just download it and send stupid things to your storage systems; it comes back with some amazing responses. So yeah, have a play with it; it's a really fantastic thing. We're gonna show you more tomorrow, like the future version of where we're going with Copilot as well.
40:16
And the last one, of course, is delivered as a service. Now, Charlie said this morning that 50% of our customers are now using some form of subscription services. That means 50% of our customers aren't, right? To deploy a cloud, you need to expect to run it like a cloud. It's delivered as a service,
40:37
right? Yes, you can take aspects of the enterprise data cloud and use them: you could do remote management and fleet management out of Fusion, you can do a little bit of the workbook-type stuff, and you can get operational advantages from that. But if you want this full data-as-a-service experience, you're going to have to really start thinking about how you move into a storage-as-a-service
41:00
model, right? And that model can't be done by acquiring hardware. Now, we have two options for how you would licence an enterprise data cloud. Evergreen One: we've talked about Evergreen One this morning. Evergreen One is our all-you-can-eat package; it's got everything in it.
41:19
Say you want it completely delivered as a service, where we manage the hardware: we manage the capacity, performance and throughput of your hardware, and we continually update that based on intelligence. So if you're starting to hit the top-level barriers of your capacity and throughput, like if you start doing more than what FlashBlade R2 can do, we'll put an extra one on site,
41:41
right? If you're hitting the top levels of FlashArray XL, we'll come out and put an ST on site. That's the idea: we will look at your hardware configuration and ensure that you always have capacity and performance, because we over-configure in those areas. And then, because of this integration with Fusion,
42:00
the system's always updating us about your usage of those systems, so we'll never leave you high and dry when you're deploying stuff. And on top of that, you get all the SLAs, all the business guarantees, everything associated with that. So it is the full cloud-like experience in the way that you're gonna consume storage.
42:20
Now, if you have to own hardware for some specific reason (you've got a regulatory issue, or you just have to know that the hardware is yours, whatever the reason), we have a thing called Flex, right? And Flex is essentially the same thing as Evergreen One, but you own the hardware.
42:39
So you actually go and determine which piece of hardware you want: I'm gonna buy a FlashArray XL 130 because this is the performance and capacity configuration I need. And on top of that, you still have access to all the services, all the SLAs, all the business guarantees; you just have to think about the
42:57
configuration planning of the hardware itself. Right? Go Evergreen One; it's so simple. It's the way you would consume from Azure and AWS; it's the way you would think about consuming a cloud today as well. So as you're thinking about deploying, or if you're working with customers
43:13
that are deploying things, Evergreen One is the best way to consume an enterprise data cloud, because this is a maturity model for us all, right? The first thing we're gonna do is put our first and second workloads in, we're gonna virtualize that, we're gonna create some runbooks and some workloads, and then eventually we're gonna
43:30
start really pushing into what an EDC can do. But you don't want to have to change subscription models through that and say, oh, I'm missing out on this, I really need this. Evergreen One is a really good way to handle that from the start as well. Of course, we've got other Evergreen subscription licences if you don't want an EDC,
43:47
so I won't talk about them. But again, if you've got customers, or yourself, that need to own that hardware for any reason, you can still get all the service-oriented capability of what Evergreen One provides with Flex. OK, and the last thing we talked about is supporting our workloads,
44:07
and this is that whole thing: an Enterprise Data Cloud doesn't change anything that you can support with your Pure Storage platform today on FlashArray. You can still run every single application associated with it. The good thing about it is we're working with every single one of these business vendors to create those presets and workbooks, ready to deploy automatically.
44:28
You want to deploy a new CRM system? We're working with vendors to create workbooks for CRM systems. You saw the stuff with Rubrik: how do we integrate Rubrik's security policy stuff automatically into day-zero operations? So you're seeing we're working with every
44:41
single one of these vendors, and more, to integrate their APIs into these workbooks as well. So it can completely support everything you've got today. You want to just run traditional VMware with SQL on top? It can absolutely support that. It will be better,
44:55
faster, stronger, less operational cost for you, absolutely. You want to then completely automate that? EDC is gonna enable you to do that as well. So that's the enterprise data cloud, right? Those are the core layers of the enterprise data cloud.
45:11
So, things you should look at this week, or as you start moving on this type of stuff: really get into Fusion. Fusion is the glue of the enterprise data cloud. There's Fusion the product, and then Fusion the jazz hands; Fusion is the brand,
45:30
right? And Fusion as the brand brings Copilot, our data analytics tool. Fusion as the product: if you go into Purity, you never even see the word Fusion; it's called fleet automation, I think, or maybe just automation now, I think that's what it's called, right? And Pure1 brings that all together.
45:46
I think, to make your life easier, we might actually bring all those things together under a single naming process. But really get into Fusion; there's a tonne of new resources associated with that. And then you're gonna start seeing Fusion and Pure1 really start coming together as a capability. So we're gonna start thinking about a
46:05
fusion of Pure1 in a dark site, and what attributes you have in a dark site, what attributes you have in a non-cloud-connected site, and then what attributes you have in a fully cloud-connected site as well, and what you have access to from a business process or business-guarantee SLA aspect. So I think that's the first step: really get into Fusion,
46:26
understand remote management, tagging and workflows, and virtualization; that's your first step into these enterprise data cloud pieces, and then we'll continue to work with you as we evolve these data management aspects. But with that, I really just wanted to give you a double-click into what you saw today. Got a question? What can I do?
46:52
Yes. Not yet, not yet, right? Snowflake does a tonne, so we're integrating a tonne of Snowflake stuff into it as well. I think what we're trying to do on our journey, if we think about Databricks or Collibra or Snowflake, some of the characteristics they have from a data
47:15
management perspective are the areas that we're evolving into, right? I think from a company perspective, you're gonna see Pure move into being a data management company, with a seriously strong storage infrastructure arm. Right now we're a storage infrastructure company; that's what we sell,
47:34
right? No, not now; we're working with them around integrating into their APIs to feed into the compliance and policy engines. Yeah. Yes, we do, yep. Analytics to feed the control plane? You know what, I don't actually know.
48:01
I know we have to do a bunch of our own proprietary stuff; I'm not sure if Snowflake's involved in that. I can find out, though. Yeah, I mean, our Pure1 analytics engine we built, we've been building for about a
48:22
decade or so; I'm not sure if it uses anybody else's or its own proprietary analytics stuff. Oh yeah. Not yet, no. Okey dokey, that sounds like the end of the questions, but I can hang around. Thanks for spending the morning with me; have a really great rest of the couple of days, and hopefully I'll see you out there.
48:47
Go.