36:16 Webinar

Simplifying Storage Management Automation with Pure Fusion™

See how Pure Fusion and APIs make storage automation easy—whether you're starting out or scaling advanced fleet-wide automation.
This webinar first aired on 18 June 2025
Transcript
00:01
Good afternoon, folks. How's Accelerate going for you so far? Good? Yeah, we're almost done, almost the end of the line. We've got a big party tonight — hope everybody's gonna come. Demo Fest part two is actually going on in the Expo Center, which,
00:18
if anybody was in Demo Fest part one, we're gonna have a whole bunch of stuff, and Brent is actually gonna be giving a demo, which is really cool. But I'd like to welcome you to our session today. We're gonna talk about simplifying storage management automation with Pure Fusion — kind of a long title; basically we're gonna be talking about automation,
00:38
and I've got two very important people with me. First of all, I'll introduce myself: Mike Nelson. I'm a technical evangelist for Pure Storage by title. I really deal with all the APIs, SDKs, and pretty much anything that deals with automation, particularly around Fusion, FlashArray,
00:58
FlashBlade, you name it. Then I have Mr. Brent Lim, who is a technical director. He's also one of the original engineering architects of Fusion, so he's really, really smart about that stuff, right? And it just means that if you think there's anything wrong here,
01:14
you get to yell at me because it's like my fault. Um, and then we have Mr. Chris G Jimenez. Uh, Chris is actually with Fanatics, who is a customer of ours, and, uh, he, well, I'll let him talk about himself in a moment, but we really appreciate him coming and being able to present with us and give his
01:33
perspective on how this automation and how Fusion is gonna be able to help him. And, oh, by the way, we have a surprise guest — for those of you who would like Ansible from an automation standpoint, Mr. Simon Dodsley has just walked into the room. He is Mr. Ansible to the rest of the world, just so you know.
01:55
Uh, but, uh, so I'll let Chris start this off, all right, um, he's gonna talk a little bit, Chris, if you could, about, you know, tell us first about yourself, about your career a little bit, and how you got into fanatics and, and your journey. Uh, yeah, so I've been in the IT world for close to 30 years now,
02:12
doing a little bit of this, a little bit of that, uh, mostly focused on the enterprise data center, you know, VMware, Windows, all that stuff, uh, a lot of storage, a lot of networking. Um, came to Fanatics almost 8 years ago now. Um, my former boss at my former company knew I had interest in coming to Florida,
02:33
so he went to Fanatics and said, hey, I've got a job for you, and brought me down to Florida with him, and it's been great. I guess a lot of experience over a long number of years dealing with a lot of different products in this space. As it comes to Pure Fusion — you know, I've seen the challenges over the years of being able to do things centralized
02:55
and being able to standardize stuff. In fact, we had a use case just last year, before Fusion got introduced, and we were talking to them about this: we were retiring an old data center and moving everything to a new data center, so we had a lot of things like NTP and DNS that were pointed at the old data center, and we needed to get them moved to the new data center, and because we didn't have
03:18
Fusion yet, we had to go out and manually touch all of our storage arrays and repoint them to the new NTP and DNS addresses. But with the help of Fusion, we're gonna be able to standardize a lot of that and make it a lot more automated and easy to use in the future. We're also getting ready to bring a new warehouse online,
03:39
so being able to take advantage of the presets within the workloads in Pure Fusion — we can basically build out the template of what we want that storage footprint in the new data center or the new warehouse to look like, click a button, and boom, it's there. So as we get the new array online, we can just run the preset, run the workflow, and we have everything all set up,
04:02
you know, the SQL Server databases, the VMware data stores, all that stuff, all built out quickly and easily. And then we're also able to utilize the automation and scripting with Ansible, where we've struggled over time with just maintaining consistency and standards, because it's always been: you reach out and touch this one, you reach out and touch that one,
04:25
and like I heard — I don't know if this was in the keynote today or another session — where they were talking about how anything you do manually is an opportunity to mess something up, and no matter how many times you do something, it's never gonna be perfect all the time. But if you use automation, you can roll out a consistent, standardized set of configuration for things, and it's not just an initial deployment of that.
04:51
You can actually run it on a scheduled basis to enforce things and keep the drift down, to have a consistent, standard footprint all the time, regardless of whether somebody went in there and made a change last week that they didn't mean to make. The next time you run that scheduled script or whatever, it'll re-enforce your standard baseline and reset it back to what
05:15
it should be. So let me ask you this, Chris: you're talking from your position, right, and what you do on a daily basis. I'd like to hear about how that interacts with your actual storage team and your infrastructure — I mean, you're mainly infrastructure, right?
05:33
Do you have a separate automation team? How is that structured? We have a separate automation team, yes. We just went through a reorg and split a bunch of people out into different focus teams, so we do have a separate automation team, and to your point, Mike,
05:50
we're looking at being able to — I know there was a big session on ServiceNow, and they were up in the keynote talking about stuff — we're looking to leverage a lot more automation with ServiceNow and create a service portal for our end users so that when they need something, they just put in the requirements of what's needed. It goes through ServiceNow; we have all this automation and all these templates and stuff
06:09
built out in the back end, and it becomes something as simple as: a user submits a ticket, it gets reviewed, and we say, yep, that's valid, approve it, and boom — workflows go, the storage gets built out, what we need to do in VMware gets done. All that stuff is done with the automation that we're trying to build using ServiceNow. And so I
06:28
want to expand on that a little bit, and I apologize the slides aren't matching because I'm kind of like really focused on what Chris is saying and I'm not paying attention to advancing slide. Um, but some of you folks, uh, obviously in bigger organizations you have that type of structure, but you also have smaller organizations that don't have the opportunity to expand past that.
06:47
Maybe you're a one-person shop, maybe you're a two-person shop, right? So from your perspective, Chris, if you didn't have that reorg, if you didn't have all those people involved, do you think this would really be beneficial, to be able to go directly from a ServiceNow request and take it all the way down? And — for sure.
07:02
I mean, when it comes to needing to build out — say we get a request from our DBAs to build out a new SQL server — that generally involves setting up, you know, 2 virtual servers, provisioning all the storage for that, and although we have a standard blueprint of what a SQL server cluster looks like, the disk layouts and stuff, it's always going to be a different size based on
07:25
whatever the needs are, and that's a long process to build out those servers, provision the storage, attach it, and all that, and do it all manually. But being able to have it scripted and automated to where you click a couple of buttons — because you have these workflows and these presets and whatever on the other technologies, you have that all defined — and I think people keep talking about
07:48
guardrails — using those configuration guardrails, you have all that stuff built and set up ahead of time. Yeah, it simplifies. I mean, it turns something that takes you 2 or 3 days to do into a 5-minute task. Well, that's very cool. Now you did mention that you actually have these data centers, these arrays, right? This is what your region looks like — our current
08:10
footprint today. Yeah, your current footprint — we're not just in a single site; we're in multiple sites here in the States as well as in the UK. And these are data centers or warehouses? Yes, data centers and warehouses. OK, so the warehouses are kind of like pseudo data centers, because you don't have a dedicated data center.
08:29
They kind of have their own closet or something like that. OK. All right, that's pretty common, right? But you're also gonna be — that's your plus one right there. Yeah, that's the new warehouse I mentioned that we're getting ready to build out. OK, what are you putting in there again?
08:44
What did we put — the C array; I think it's a C40 that we shifted from another site that we mothballed. Yeah — and I apologize for looking at the audience, but his account representative is here. Yes, and I apologize as well; I was trying to remember what it was. I can't remember if it's one of the X arrays, or — I think it's a C array that we moved,
09:03
yeah. Well, that's very cool, because you're expanding, right? Being able to do something in Dallas all the way to Washington over to Manchester, and not have to actually connect to Manchester to do that, is kind of beneficial, right? And take advantage of that one pane of glass to
09:24
just reach out and do all the tasks you need to. Yeah, very cool. All right — OK, and there we go. Yeah, so, thanks Chris for sharing your experience with us. So as Sean mentioned in the keynote, for storage management,
09:41
it's really a death by 1,000 cuts, and like hearing what Chris mentioned, even the day-to-day management of storage — just being able to bring up a new array, configuring something as basic as DNS and NTP and a syslog server — it might seem trivial if you're doing it once or twice, but if you have to do it over and over again, as and when you bring in new arrays, and
10:02
eventually your fleet gets big and you're not sure if you've actually configured each one of them correctly, things start getting tedious. Things start getting error-prone and unwieldy. So what Fusion really wants to do is not just the whole IT transformation, big-vision kind of thing that was mentioned in the keynote. We also want to make the simple, day-to-day
10:26
things easy. We want to make sure that the 1,000 cuts you're suffering from — we want to remove each one of those cuts, one cut at a time. And so, just hearing Chris share his story, I quickly put up a demo and said: what if you had something like this? Would it have helped you? Like, there's a new
10:43
site, I think, that you want to bring up. Would you use that script that we provide, for example? For sure. We definitely are going to. Yeah, so let's just go straight into the demo.
10:56
OK. Yeah, let's forget about the slides. I think they're here to see the demo; they're here to see the automation in action, so less slides, more demo. OK, we'll forget about that slide. OK. And it's gonna play.
11:14
It didn't hit play on here, so I'll hit the button here, right? Roll the demo. OK, so as Chris mentioned, there's a certain configuration that you want to be doing on each array. It could be things as simple as
11:31
configuring NTP, configuring DNS — and the problem is that you have to go in there each time. Over here, for example, you see that you're configuring the NTP servers, the syslog servers, and the DNS servers. The problem is that if you are doing this manually, one at a time, sometimes you might make mistakes. So for example,
11:53
now we're just going to introduce a mistake: instead of 8.8.8.8 as the correct name server DNS address, we're gonna change it to 8.8.8.9, just to simulate a typo that we make. And the problem is that a lot of times when we introduce a typo like this, the mistake doesn't show up until it affects some production workload.
12:11
And at that time, maybe everybody's panicking, and it could be a Friday night and you get a page. You don't want that to happen. What you really want is the ability to define a specification — a way to say, this is how my array should be configured. Like we learned about presets for workloads.
12:27
This is almost like a preset for the array. So define how you want the array to be configured, and be able to apply that consistently to every array in your fleet. And you might think that writing such a script might be really hard. But here in the demo, we want to show you that it's actually really simple, and you can see
12:44
in this Ansible script, it's just a specification. Write down exactly how you want your DNS to be configured — so there's the file DNS, the management DNS, the respective IP addresses, the NTP servers, the syslog servers — and actually, on top of that, we might also want to configure the default data protection.
13:02
So let's create a default protection group that's taking a snapshot, I think once a day at 3 p.m., and keeping it for 5 days. This would configure the default protection, with the default pgroup, on all the arrays. And if you have a host that's connected to multiple arrays, that's also something you might potentially want to configure on each array
13:23
consistently, so that's something you might want to throw into your array specification. So over here, we are configuring the Windows host and the ESXi host with their respective IQNs. You define it here once, and rest assured that it's going to be configured consistently on all the arrays.
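To make that idea concrete, here is a minimal sketch of what an array-specification vars file along those lines might look like. The structure, names, and addresses are illustrative assumptions, not the actual file from the demo or the Accelerate GitHub repo:

```yaml
# array_spec.yml -- illustrative array specification (hypothetical names/values)
array_spec:
  dns:
    management:
      domain: example.internal            # hypothetical domain
      nameservers: [8.8.8.8, 8.8.4.4]
    file:
      domain: example.internal
      nameservers: [8.8.8.8, 8.8.4.4]
  ntp_servers: [time1.example.internal, time2.example.internal]
  syslog_servers: ["tcp://syslog.example.internal:514"]
  default_protection:                     # the "default pgroup" outcome
    pgroup: default-pg
    snapshot_every: 1d                    # one snapshot per day
    snapshot_keep_for: 5d                 # retained for 5 days
  hosts:                                  # hosts defined once, applied everywhere
    - name: win-sql-01                    # hypothetical Windows host
      iqns: ["iqn.1991-05.com.microsoft:win-sql-01"]
    - name: esxi-01                       # hypothetical ESXi host
      iqns: ["iqn.1998-01.com.vmware:esxi-01"]
fleet_members: [vfa1, vfa2]               # arrays the spec should apply to
```

The point is that the file is pure data: it states the outcome (what DNS, NTP, syslog, protection, and host definitions every array should end up with), and the playbook's only job is to make reality match it.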
13:41
So now let's look at the actual Ansible playbook that applies that spec, and it's rather straightforward as well. We're gonna get some fleet information — figure out who the members of the fleet are — and just go through each one of the specs and apply them one at a time. So it's a really nice and simple playbook. It's not like 1,000 lines of code that you have to write. It's very straightforward — like, a key and
14:00
what the setting should be — and then we loop through all the fleet members to apply it to all arrays in the fleet. And now, if this is the first time you're running an Ansible playbook or running a script, there is a one-time setup that you have to do, and it's really simple: you have to create an API token. To do that, just go into the settings page, and then we go into users and policies
14:20
and access. And in the users table, in the top right-hand corner, click on the kebab menu and click create API token. It's gonna ask you for your username; just enter your username, and boom, you get your API token. So we're now gonna plug that into the config file — add the API token and specify the URL that you want your script to be pointed at.
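As a rough outline of that apply-the-spec pattern, here is a hedged sketch. The module and parameter names come from the purestorage.flasharray Ansible collection as best recalled and may not match the playbook in the Accelerate repo exactly, and the per-array loop is only an approximation of how the demo fans the spec out across the fleet — check the published playbook and the collection docs before relying on anything like this:

```yaml
# apply_array_spec.yml -- illustrative outline, not the playbook from the demo
- name: Apply the array specification across the fleet
  hosts: localhost
  gather_facts: false
  vars_files:
    - array_spec.yml                          # the spec sketched earlier
  vars:
    api_token: "{{ lookup('env', 'PURE_API_TOKEN') }}"   # token created in the GUI
  tasks:
    - name: Enforce the DNS settings on every fleet member
      purestorage.flasharray.purefa_dns:      # module/parameter names assumed
        domain: "{{ array_spec.dns.management.domain }}"
        nameservers: "{{ array_spec.dns.management.nameservers }}"
        fa_url: "{{ item }}"
        api_token: "{{ api_token }}"
      # With Fusion, a single fleet member can drive the rest; looping over
      # every array here is a simplification of that.
      loop: "{{ fleet_members }}"

    - name: Enforce the NTP servers on every fleet member
      purestorage.flasharray.purefa_ntp:      # module/parameter names assumed
        ntp_servers: "{{ array_spec.ntp_servers }}"
        fa_url: "{{ item }}"
        api_token: "{{ api_token }}"
      loop: "{{ fleet_members }}"
```

Because tasks like these only change what differs from the spec, a rerun is idempotent: anything already correct reports ok, and anything that drifted gets put back.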
14:41
With Fusion, every array can be used to manage any other array. So we're gonna pick any of the arrays in your fleet for that, and we're just going to use VFA1 for this script. So with that, let's just run the Ansible playbook. And you can see it's now going through each one of the configs and applying them.
15:01
When it's yellow, it means there was a change. So you can see it fixed our typo for us: it changed it back to 8.8.8.8 when it was previously 8.8.8.9. And you can see the rest are green. The Ansible playbook is idempotent — you can run it as many times as you want.
15:14
If there are no changes — if it was exactly the same — it shows up as green. If there were changes, it shows up as yellow, and it applies the config that was in the spec. So just by running the Ansible playbook, in a few minutes you see that the configuration has been applied. If you go back to the array, you can see that our typo here has been fixed.
15:31
No need to worry about something going wrong in production — and that's how you get the confidence that your configuration has been applied. Here you can see the snapshot schedule has been applied as well for default protection: it's taking one every day and keeping it for 5 days. And you can see the hosts have also been configured on each of the arrays here, the Windows and ESXi hosts,
15:52
on VFA1 and VFA2. And now imagine you want to bring in a 3rd array. So if this was without Fusion, without automation — so this is the new Dallas warehouse here. Yeah, this is the new Dallas warehouse.
16:10
Like, Chris or someone from his team would have to go in there and reconfigure everything by hand. But now all he has to do is add the array to the fleet. To add an array to the fleet, just go to any of the arrays that's already in the fleet and create a fleet key. Then go to the new array and click join an
16:26
existing fleet, type in the name of the fleet and the fleet key that was previously generated. With the fleet key, the array will now know how to join the fleet, and the arrays will start talking to each other. So, once the array is in the fleet, we can actually go back and rerun the Ansible playbook, and it will apply all the changes in the array spec onto the new array.
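In spec-driven terms, onboarding the new array can be as small as one line: once it has joined the fleet with the fleet key, add it to the fleet list in the vars file (names still illustrative) and rerun the same playbook:

```yaml
# array_spec.yml -- the only edit needed for the new Dallas array (illustrative)
fleet_members: [vfa1, vfa2, vfa3]   # vfa3 is the array that just joined the fleet
```

Rerunning `ansible-playbook apply_array_spec.yml` then brings the new member up to the same DNS, NTP, syslog, default protection, and host configuration, while the existing members report no changes because the run is idempotent.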
16:51
And here you can see that happening. So VFA3, that's the new array — setting the DNS servers, setting the NTP servers, configuring syslog, configuring the default data protection, and — just give it a few more seconds — now it's configuring the hosts, so creating the Windows host and the ESXi host.
17:15
You can see the other stuff is still green, because nothing has changed; it's just applying the config to the new array, and boom, there you go. On the new array, NTP is configured correctly, this is configured correctly, the DNS settings have been configured correctly, and the rest have also been configured correctly.
17:33
So one of the ideas here that we really want to hit home is that if you just do this manually one time, it seems simple. So many times people ask, why don't you just automate this? And it's not as simple as just automating it, right? Because for some other products that you might be used to, sure, there are APIs, but
17:55
automation against those APIs might not be easy. And so writing automation means you're introducing the risk that you might be introducing bugs. It's another script you have to maintain, and so there's always a constant trade-off between: should I just do this manually, or should I actually invest the time writing the automation? I'm not sure whether some of you are familiar
18:13
with the XKCD comic — there's one comic that shows a graph of the amount of time spent automating versus the amount of time spent manually doing the work. As soon as you start building automation, you just spend all your time debugging the automation and never ever get to doing the work.
18:30
So we want to make automation simpler as well, and you can see in that Ansible playbook, it's just really, really simple. We've removed as many opportunities for accidentally introducing bugs as possible, to make it very simple not just to maintain but also for you to get started. So if you are not using automation today, these Ansible playbooks are actually available for download in the Accelerate GitHub
18:53
repo, and you can just use that as your starting point to get introduced and get started with automation. So the first question I have to ask is: Chris, could you definitely see any kind of advantage to this? Definitely — instead of spending hours getting the new array ready in Dallas, when we are
19:11
ready to do that, I think we're just gonna go hit the bar early, because all we have to do is plug it in, and once we have it online we just run the playbook and everything's built out. We have all the standard configuration there; we'll have all our data stores built out based on another runbook we were looking at, and... ready to go. Yeah. Anybody else think this would be
19:32
advantageous for you to use? I see heads nodding — yeah, cool, right? It'd save you a lot of time, save you from human error, right? You know, some people are really good at what they do, some not so much, and maybe, you know,
19:47
you get human error involved and stuff like that. There's all kinds of things — compliance, all that kind of stuff — that come into play. So, yeah, this is a great demo that kind of outlines how this can actually help you, even just getting exposed to automation from the basics, right?
20:08
So, from a storage management workflow standpoint, what we're looking at is two different things. There's the manual approach — that's what we used to do, and what many of you probably still do: how many volumes, what sizes, which arrays, you define snapshots, does this workload require higher limits, and so on and so forth.
20:26
You can go all the way down the line. Now, when you're doing this over literally tens or hundreds of devices, it can cause all kinds of problems. That's why we started to introduce those things called presets. And with presets: one, you configure a preset once, you can deploy as many workloads as you want from that single preset, and that preset has
20:48
mobility, right? You have the ability to move that preset. You have the ability to export that preset and use that preset in other places, right? Um, so configure it, everything else is automatic and then actually deploy the workload at the end, OK? And Brent's gonna show another demo around that
21:05
coming up. So to define what you're looking at here, Brent — what are we looking at for the workloads? So actually, this is based on what Chris shared about the workload that he's currently running. So maybe Chris would want to give some context. You know, most of our facilities — we have, like everybody else, a data center footprint, and then we've got your edge stuff.
21:23
So in our case with Fanatics, you know, we have all the sports gear and everything, and we have a bunch of fulfillment centers where all that stuff is housed and where they do all the shipping from. In our fulfillment centers we have kind of a typical build-out: there's always gonna be a VMware cluster, and we have a typical build-out for that cluster of
21:42
three big data stores as part of a data store cluster. And then also, at all those sites, the reason they actually have compute on site is because there's typically a database that's running some part of a warehouse management system or whatever, so we're gonna build out at least one database server for everything there, and our database servers all have a standard build as
22:02
well, from a footprint of what the drives are, how they're laid out, what their names are — the sizes might be different based on the needs, but they're all laid out the same way. So what we go with is, we have a preset where we set up an initial size for those volumes, and we run the workflow, and that'll immediately and
22:23
quickly build out all those volumes; and then, if we have the hosts and everything set up and ready to go as well, we can make that part of the preset and actually attach all the disks to the hosts, and you do the provisioning and presentation all in one quick, scripted, automated step. And instead of taking many, many minutes or hours to set up all this stuff manually, like we're doing today and
22:51
have been, you set up a preset and a workflow and, like you said, you click and away you go. I'd actually like to see what the click count is on all that. I mean, how many clicks do you have to do in the GUI in order to accomplish that, you know? Yeah, let's see — creating like 9 disks each time, and you have to do like 4 or 5 clicks with
23:10
each one of those, and then the data stores, and then all the hosts, and then actually doing the presentation — yeah, it's a lot of clicks. Yeah, so the funny thing about automation is that even if you're doing all of this manually, you might not realize you're actually already automating it, because you have a runbook. You have a specified build that you
23:31
want to do, and the way we're "automating" it right now is having a person execute the things that you have specified in your runbook. It would be much better if we could somehow codify it, so it's not a human that has to run this runbook. So when you say you are not doing automation, maybe you already are — you're just doing it with a person
23:51
that's executing the instructions rather than with a computer that's executing the instructions. So, how about a robot? Yeah. So automation is really just translating your runbook into something that the robot can do. And you want to translate it in such a way that you don't have to write very obtuse code. In your runbook, you likely have the outcomes that you want.
24:13
You want to be able to specify what data protection, SLAs, like what drives you want. You're not going to specify do exactly this, click on this, click on that, click on this. Same thing for the automation, you don't want to specify very minute details in the API. You also want to be able to specify outcomes.
24:27
So to translate the runbook into something that we can automate, we just require you to give us the outcome, and that's codified in presets, that's codified in the Ansible playbook for the array specs. And then the robot would just — it's really the Fusion automation engine; we call it the robot. That's a name we use, you know.
24:49
But one thing I do want to point out here, Brent — if we can expand a little bit on the Pure1 portion of it, the integration with Pure1. Yup. So another thing that's not mentioned in the runbook is that you have to pick the right placement for it, especially if you have many arrays and not all arrays are
25:06
running equal — there could be some array that's overloaded and some array that's underutilized. Right now, I think there was a customer I spoke to that maintains a spreadsheet of all the different arrays. Yeah — how many people still use spreadsheets to maintain all that? Yeah, yeah, I know, for many things.
25:26
Yeah, so they have to just go in there and visually inspect: OK, this guy here is running like 10 workloads, this guy here is running 3 workloads, and they go into the array page, look at the performance metrics — oh, OK, this looks like a good candidate. All of that is now automated with the Pure1 workload placement engine.
25:42
So it's not just keeping track of the current performance and current load. It's also doing a projection, because when you provision a workload, you specify the workload type — whether it's a database workload or a VMware workload. So it knows, because it's collecting data from all of our existing customer base, all the telemetry data,
26:00
anonymized, of course. Uh, but we know that a database workload will have a certain IO characteristic. It will have a certain growth characteristics. So we can actually project like 30, 60, 90 days from today. How is your array gonna perform?
26:12
How's your workload gonna perform? And based on all that, we make a recommendation. We can say, OK, this array is likely going to be overloaded in like 90 days, or this array would have headroom even in 90 days' time, so this is a good fit for your workload. But the keyword there is recommendation. It's not something that you have to actually do, or something that's automatic.
26:31
It's just a recommendation. Right. Yeah. So, and that's a common theme here with uh automation and fusion. So, uh you hear words like, oh, we're gonna auto rebalance your workload, we're gonna automatically do a lot of these things. It's still a human that approves it, but the machine executes the steps for you
26:48
automatically. So everything is still: here's what we're going to do — tell us whether you're OK doing it. Same thing with the workload placement: here's where we're going to place it — are you OK with it? If you say approve, we're going to create the workload there.
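One generic way to keep that explicit approval step inside an otherwise automated playbook is an interactive gate. This is just a standard Ansible pattern sketched for illustration — it is not how Fusion or Pure1 actually implement the recommendation and approval flow:

```yaml
# approval_gate.yml -- generic human-in-the-loop sketch, not Pure1's real workflow
- name: Require explicit approval before acting on a placement recommendation
  hosts: localhost
  gather_facts: false
  vars:
    recommended_array: vfa2              # stand-in for the Pure1 recommendation
  tasks:
    - name: Ask the operator to approve the recommended placement
      ansible.builtin.pause:
        prompt: "Place the workload on {{ recommended_array }}? (yes/no)"
      register: approval

    - name: Stop here unless the operator typed yes
      ansible.builtin.assert:
        that: approval.user_input | lower == 'yes'
        fail_msg: "Placement not approved; nothing was provisioned."

    # ...provisioning tasks would follow here and only run after approval...
```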
27:01
So it's still a human in the loop that does the approval, but everything else — the execution of the steps — is done automatically. Absolutely, absolutely. So just a little bit of background on what we'll see in the demo. There are going to be two presets; as Chris mentioned, there are two main workloads:
27:16
there's the database workload and the VMware workload. The VMware workload is just provisioning 3 very large VMFS data stores. And the SQL workload, as Chris mentioned, is quite tedious to actually do by hand: there are a few disks that you have to configure, you have to configure data protection on it, and you want to configure replication on it — to another array in this case —
27:38
and being able to keep track and make sure we're doing this consistently across all SQL databases gets pretty tricky once you have a certain number of them. Right, let's roll the demo. OK, so the first place you would start is in the workloads screen. So if you have Purity 6.8.5 and above, you will see these two new tabs under
28:03
the Storage tab: Presets and Workloads. So, let's create a workload. We have the two presets already pre-created. The VMFS preset will be used to provision the VMFS data stores. It's fairly straightforward — as mentioned, it's just provisioning 3 VMFS volumes of 1 terabyte each.
28:24
So, OK, click on the preset; on the right-hand side, you can see what it's doing. You can see the workload type, VDI; the storage class is FlashArray//X. But the SQL preset is fairly more involved. You can see that these are the 9 disks that somebody would have to configure if they were doing it manually — the quorum disk, the MSDTC disk, and so on and so forth.
28:45
And on top of that, as Chris mentioned, you want to start small — provision with maybe 10 GB or 5 GB LUNs each first, and then resize them as they grow. And on top of that, we also want to be configuring snapshot data protection. So here we have a snapshot that's being taken every hour and kept for a day, and on top of that,
29:08
taking one every day and keeping it for a week. I think those are exactly the snapshot rules that you wrote in the email to me. So this is just translated as an SLA. We're not saying configure a pgroup, we're not saying configure this API — it's just exactly what he said in the email, translated as the outcome here in the preset. If somebody were to do this by hand again, they would have to create all these volumes by hand.
29:31
They would have to create a pgroup by hand, configure it by hand. And the scary part is, it's so easy to accidentally forget to add a volume into a pgroup, and you wouldn't realize it's not taking any backups until it's time to actually recover it and you realize you don't have backups, right? That would be a huge, major oops.
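Presets themselves are defined through the Fusion GUI or API rather than in a file you hand-edit, but conceptually they capture something like the following. This is a purely illustrative YAML rendering of the two presets described above — the volume names, sizes, and layout are assumptions, and this is not Fusion's actual preset schema:

```yaml
# Illustrative rendering of the two presets -- not Fusion's real preset format
presets:
  - name: vmfs-datastores
    workload_type: vmware
    storage_class: flasharray-x
    volumes:
      - { name: datastore-1, size: 1T }
      - { name: datastore-2, size: 1T }
      - { name: datastore-3, size: 1T }

  - name: sql-cluster
    workload_type: database
    volumes:                              # the nine disks otherwise built by hand
      - { name: quorum,   size: 5G }
      - { name: msdtc,    size: 10G }
      - { name: data-1,   size: 10G }
      - { name: data-2,   size: 10G }
      - { name: data-3,   size: 10G }
      - { name: log-1,    size: 10G }
      - { name: log-2,    size: 10G }
      - { name: tempdb-1, size: 10G }
      - { name: tempdb-2, size: 10G }
    protection:                           # stated as outcomes, not pgroup plumbing
      - { snapshot_every: 1h, keep_for: 1d }
      - { snapshot_every: 1d, keep_for: 7d }
    replication:
      target: another-array               # hypothetical replication target
```

Because the protection rules live in the preset, every workload deployed from it gets the same snapshot schedule automatically — which is exactly the "forgot to add the volume to the pgroup" failure mode this removes.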
29:52
And so you want something like this to be able to codify it and reduce the human error, right? So once this preset has been set up, I could now provision this workload directly in the GUI, but what I want to show you is how I can actually provision from this preset programmatically as well, and it's just as simple.
30:13
So if we now head over to the Ansible playbook, provisioning a workload is just these 4 parameters we have to provide. That simple. Kudos to Simon for making the Ansible playbook that simple — that's why he's Mr. Ansible; he made a lot of these Ansible scripts —
30:31
and to provision the workload, all you have to do is enter the name of the preset, the name of the array you want to provision on, the name of the workload, and the host that you want to connect the workload to. In this case, it's the ESXi host. So once you have provided these four parameters, you can just go ahead and run the Ansible
30:49
playbook. So if you have already set up the API token as demonstrated in the previous demo, you can just run new Ansible playbooks from now on. So this provisions the VMFS workload. Now if you were to refresh here — come on, refresh, refresh — yeah, there we go. The VMware cluster workload is provisioned
31:12
with these 3 VMFS data stores, and it's connected to the ESXi host that we know was consistently configured on all arrays by the previous Ansible playbook. So now let's actually provision the SQL cluster. To do that, just specify the name of the preset — change it to SQL.
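For reference, the complete input for a provisioning run is just those four values. Here is a hedged sketch of what they might look like as variables for the VMFS run just shown and the SQL run that follows — the identifiers are illustrative, not the demo's actual names:

```yaml
# Inputs for the VMFS workload run (illustrative names)
preset_name: vmfs-datastores
array_name: vfa1
workload_name: fulfillment-vmware
host_name: esxi-01

# Inputs for the SQL cluster run -- same playbook, only the values change
# preset_name: sql-cluster
# array_name: vfa2
# workload_name: fulfillment-sql
# host_name: win-sql-01
```

Each run is then a single invocation of the provisioning playbook with those variables (for example, `ansible-playbook provision_workload.yml -e @inputs.yml`, with the filenames being placeholders), pointed at whichever fleet member you already set up with the API token.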
31:32
Maybe this time we want to provision the SQL cluster on the other array, VFA2, change the name of the workload, and connect the Windows hosts to the SQL storage. And let's time it — this is just to show you how long it would take. I think Chris mentioned it takes you like 3 days to fully bring up
31:53
the full everything, from plugging it in and getting all the configuration, and then all the workloads on top of it — but it's at least a few hours to do this part by hand, right? Boom, run the playbook. All 8 volumes are created — sorry, 9 volumes are created — with the pgroups configured with the correct snapshots. We don't have to worry that we accidentally
32:16
left a volume out of the pgroup. And over here you can see replication is also set up correctly. And if we look at the hosts connected to the volumes, you can see the Windows host has been configured correctly to connect to all 9 volumes. And, you know, it just took 8 seconds to run this playbook, compared to the 2 to 3 hours you would
32:39
have taken to do this by hand — without even having any confidence that you did all the steps right. How about 8 seconds? How about that? That's pretty cool. Very impressive. All right, so this is really automation that you don't need to manage,
32:57
right? What we're talking about is: you can have your inputs — the GUI, the CLI, the SDK — you can interact with the array, with what we call the preset catalog, and then also deploy a workload, right? When it's automated, you're getting that efficiency, you're getting the
33:15
risk mitigation, you're getting the observability, right? This is the core of what we're looking at around Fusion — how we can automate this and make it easier. You know, from the standpoint of folks that say, I don't wanna even start down the road of automation —
33:36
maybe it's just too hard, maybe it's the resources, maybe it's something like that — we're just trying to make it easier. I mean, we're just trying to make it simple for you. Yeah, so I don't think anybody would complain that automation is too
33:51
simple. Wherever you are along the automation journey — yeah, exactly, whether you're just getting started or you've already written a lot of scripts. That's like my 3rd try with the AI for this, by the way; you wouldn't believe what the AI brought
34:10
back like the first two times. It was crazy. That's real code. That's real Python code. Yeah, so automation is not just about providing more APIs. It's providing the right kind of APIs, making the input to the APIs declarative so you specify the outcome instead of having to specify exactly precisely what you want each
34:32
step to do, because every API you invoke is potentially another bug that you might introduce, potentially another outage that you might accidentally cause. So just tell us the outcome you want, and let Fusion do most of the heavy lifting for you. Right. Exactly.
34:47
So we're gonna wrap it up here and take Q&A, but the next step we want you to take is a Test Drive. If you don't know what Test Drive is: Test Drive is where you can try out a lot of the Pure products, and Fusion is one of them. We've got like 3 or 4 Test Drives out there for it,
35:03
and we're running the hands-on labs here, which are based off of our Test Drives. We've got ones out there for FlashBlade and FlashArray. We even have other ones out there for Pure1, FA File, you name it. So go ahead and take a Test Drive. We have a bunch of videos and our GitHub repos
35:22
that are available on our site when you hit that QR code, and then basically go out and create a fleet. It does really nothing to your infrastructure if you have, uh, two arrays, even if you have a single array. Go ahead and hit that fleet key and create a fleet. And you can see what you can do with that. It's not,
35:40
you know, it's not damaging. It's not something that's gonna bring your infrastructure down or anything like that, right, Brent? No, it's not — that's about the worst that can happen. So yeah, just give it a try, you know, and
35:57
if you're uncomfortable doing it on your own hardware, log in to Test Drive and go ahead and beat those up, because those are ephemeral instances that you can do whatever you like to, and it really won't hurt anything. So it's a good opportunity to try things out.