41:59 Webinar

What's New with VMware, Pure Storage, and You in 2025?

Explore Pure Storage + VMware in 2025: ActiveDR™, ActiveCluster™, vVols, NVMe-oF, NFS, and deep platform integrations.
This webinar first aired on 18 June 2025
00:00
Hey, we're gonna go ahead and go through what's new this past year between VMware and Pure Storage, and what's coming up in 2025. A lot has changed since we met last year; some differences have been popping up, and some different announcements. I'm Alex Carver, senior technical product specialist.
00:20
I always forget my title. I've been here for 10.5 years and have been covering VMware the majority of that time, plus other virtualization aspects as well, so definitely looking more at the other ones. And I'm Dave Stevens. I'm a field solutions architect. You're like,
00:36
what the heck is that? You probably said the same thing when you looked at Alex's title as well. Alex is the OG guy. I've been here two years, but I'm one of the guys you'll probably hear on a lot of calls. Alex is also on a lot of calls, but we're here to talk about everything VMware.
00:55
OK. So we'll split it up in three ways. We'll take a quick look at what's been new with core storage, both from VMware and Pure, and then we're gonna spend some time on some new stuff that we've come out with in Pure1, specifically around analyzing your virtualization environments, and VMware
01:11
environments in particular, and then close out with some of the new stuff that's come with our vSphere plug-in. We've recently been making a lot of new updates, got some new stuff coming out too, and a demo of our integration with the Pure Fusion platform itself. All right, probably what everyone's anticipating:
01:31
VMware did officially announce yesterday that they are deprecating and going to end support for vSphere Virtual Volumes (vVols). If you've been following Pure at all the last 7 years, it's been a pretty strong message from Pure, more specifically about how we can differentiate from a storage aspect for
01:52
the vSphere admin. And application-granular storage was a large part of our messaging: hey, we wanna get the insights, and we've got these APIs that are all driven right through vCenter, and we built a lot of our integrations around that. And unfortunately, VMware has made the decision to deprecate and end it.
02:10
So what are they actually announcing? Well, in VCF 9.0, vVols will be in a deprecated status, meaning that you technically can be running it on 9.0, but the amount of support that VMware is gonna give you is minimal to zero. They have no plans for any updates, fixes, or security patches for it.
02:30
And it will be officially pulled and not supported in 9.1. Now, it's gonna be fully supported in vSphere 8 and VCF 5.2. So if you are still currently using vVols and you're on VCF 5.2 or vSphere 8, you'll be able to still use it without any problems, open up support cases, well, problems relatively speaking, but you'll still be able to use it there.
02:53
And if you go to 9.0, you'll still be able to do it there. The big thing here is that no future development is coming to it. So, show of hands, how many people are actually actively using Virtual Volumes today? I'm just curious. That's a pretty good number, OK. So, what's next?
03:12
Obviously, you can contact your sales rep and say, hey, this is really core and important to us, this is really gonna impact us. Let them know. Will it do anything? Time will tell, but at least let your voice be heard and the messaging be clear. I mean, we've been sharing it with them,
03:29
and we'll continue to share it with them, that it's really impactful and not really a great move in the right direction for us. That said, stay tuned for more information from us. We have documentation updates and user guide updates coming out, and some blog posts and demos that are gonna be coming.
03:43
We are really committed to delivering a lot of that differentiation we had with vVols, the application-granular and VM-granular storage and integrations, and to try to deliver that as much as we can to VMFS and RDMs. And within NFS we have a little bit more ability to do that, with NFS on the FlashArray, as we'll have more APIs driven directly there. And then we're seeing what we can do with VMFS and RDMs too,
04:09
and I had a question right before we got started. The question was, how do I get off Virtual Volumes? It's no different than how you got to Virtual Volumes in the first place. You'll literally just Storage vMotion the VM back to a VMFS or an NFS datastore and you're back to square one.
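That per-VM move can be sketched as a tiny capacity-aware plan. This Python snippet is purely illustrative; the VM names, sizes, and datastores are made-up examples, and in practice the actual move is just a Storage vMotion per VM:

```python
# Illustrative planner for a vVols -> VMFS/NFS Storage vMotion wave.
# All names and sizes below are hypothetical examples.
def plan_migrations(vms, targets):
    """Assign each vVols-backed VM to the target datastore with the most free space.

    vms: {vm_name: size_gib}, targets: {datastore_name: free_gib}.
    Returns a list of (vm_name, datastore_name) moves, biggest VMs first.
    """
    plan = []
    free = dict(targets)  # working copy of free space per datastore
    for name, size_gib in sorted(vms.items(), key=lambda kv: -kv[1]):
        dest = max(free, key=free.get)          # pick the emptiest datastore
        if free[dest] < size_gib:
            raise RuntimeError(f"no datastore can hold {name}")
        free[dest] -= size_gib
        plan.append((name, dest))
    return plan

# Example: two VMs, two candidate datastores
print(plan_migrations({"sql01": 500, "web01": 100},
                      {"vmfs-ds-01": 2000, "nfs-ds-01": 800}))
```

The greedy "emptiest datastore first" choice is just one reasonable policy; any placement that fits works, since each VM moves independently.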
04:24
So it's a pretty simple migration. Now, what are the options? First, if you're already using it, you can continue to use it. There's no rush to get off of it. It's not like there's a mandate that, hey, in order to upgrade to VCF 9.0, we need to remove everything from it.
04:43
You'll still have the ability to do it on 9.0. If you were planning to migrate workloads to vVols, and especially converting RDMs to vVols, hold off on that. I would not be doing that. I just had a conversation with a customer a month ago where
05:00
they were hours away from kicking off a huge migration from RDMs over to vVols, and we pumped the brakes on that a little bit for them. Thankfully the announcements were coming around the same time frame, so they didn't have to come back from that. Now, the other option is doing the Storage vMotion. There's a couple of ways to do that.
05:18
You can do the Storage vMotion to VMFS if you're already using block and happy with block, whether that's NVMe, iSCSI, or Fibre Channel; the options are gonna be there. You can convert your existing vVols to RDMs; the use case for that is failover clustering, whether it's Windows failover clusters or Oracle clusters where you still need to have,
05:36
you know, those raw devices and physical sharing between them, and clustered VMDKs on VMFS doesn't make as much sense for your workload. The other thing is we need to close feature gaps on NVMe over Fabrics; that includes ActiveCluster support for NVMe with VMFS, and trying to push VMware to support RDMs
05:59
with NVMe over Fabrics as well. So there's stuff that we can do there. Another option is NFS datastores on FlashArray. We're continuing to add more features there; ActiveDR is coming next year for file on the array, and we're additionally building out different integrations as well, so we've got lots coming there. The other part is,
06:23
hey, use vVols to migrate platforms. This is something that comes up in a lot of conversations: you're like, hey, we have workloads where we're in a position that we can go ahead and migrate to a different platform. We already have something set up that we've been testing out for the last year, or we're testing it out in the next 3 to 6 months.
06:39
vVols is an easy way to migrate off because there's no VMDK encapsulation. I don't have to convert this VMDK to a qcow2 or VHDX or whatever other virtual disk format the platform you're using is using. This is a raw device, which makes it really easy to import into OpenShift, even using the Portworx CSI.
06:59
We're developing different integrations and migration tooling to do that, so keep a watch on what we're doing there and keep in contact with us. Potentially Nutanix as well; we're working with them, making sure that we can make this transition as easy as possible if that's your choice. And staying on VMware is not a bad
07:19
option necessarily for anybody, right? If that's the choice that you're making, that you're gonna be sticking with VMFS and NFS, that's perfectly fine. I don't think anyone needs to be feeling pressure or making a rash decision or movement. Evaluate each of the options that you have; I think that's the biggest thing. It may make sense to still keep some workloads on VMware while you strategically move some
07:40
other workloads elsewhere. We have several different sessions throughout today and tomorrow where we're looking at the modern virtualization ecosystem, and then there's Nutanix sessions, some OpenShift ones, and we've got folks
07:55
doing demos on how to convert stuff to OpenStack and so forth. So there's gonna be a lot more information coming from us on that one. Our whole goal is we wanna make sure you're set up for success; that's the biggest thing. So we need to make sure we're evaluating each of the options and making sure we have the documentation in place for you as well,
08:11
and then make sure that you're gonna be successful there. So I'm curious, how many of you are not considering moving off VMware? Usually you'd expect me to ask the other question, but I'm just curious who's not considering it. OK, all right. And so, those of you who didn't raise your hand,
08:30
is it safe to assume that you're all looking at alternatives? Management. Right, right. Yeah, yeah. OK. All right, well, just curious what that looked like. I had a feeling I was gonna get this exact answer you all gave me, but figured it didn't hurt to ask.
08:49
Yeah. And if you wanna talk, we're gonna be here today and tomorrow in the expo, so just pull us aside, talk to us, ask questions, talk to your account teams, get stuff set up with us; more than happy to go through this. Now, onto some actually new stuff that's coming out with VCF 9.0 that we can be
09:08
a little bit more happy or upbeat about. 9.0, specifically for NFS, is releasing two different features. One, we're the only storage vendor supporting it, and we were design partners with VMware, and that is automatic guest-level unmap for VMs on NFS.
09:24
Meaning that, hey, I just provisioned a bunch of thin disks with these VMs on NFS datastores, and they deleted a whole bunch of files. Well, now they'll actually issue those in-guest unmaps to our VAAI plug-in, and we'll go ahead and reclaim that. Before, you'd have to wait to reuse some of that space, and it wasn't actually
09:44
reclaiming or de-allocating the storage on the array, and so that is something that we co-wrote and engineered with VMware. The certification is technically in process right now, but hopefully VMware helps us out with the broken tests, and then we can get that passed through. The other cool thing is that for NFS 4.1, they did actually bring encryption support with
10:09
Kerberos, and you can actually run that with Purity 6.6.2 and later. You don't even need to be on the latest branch of that; you'll just need to be on vCenter and ESXi 9.0 and later. So on that topic, how many people are actively using NFS today for their datastores? Just in general, not even on Pure.
10:30
Yeah, it doesn't matter if they're on Pure or not. OK. All right, that's much lower than I thought it was gonna be. So I assume the answer then is everybody is using VMFS? Is that correct? OK. All right. A couple of points on NFS:
10:45
on Purity 6.8.2 and later, you don't need to open up a support case to get it enabled. File is always on in 6.8.2, and so when 6.8.9 rolls around, that'll be there as well, so you don't need to get it enabled or approved or any of that. You just configure the server configuration, get the networking set up,
11:03
and you're good to go. So there's a lower bar of entry. The part that is really important there is the dynamic resource allocation, so that if you're not using block at all and you're only using file, we're able to allocate the correct resources to only the file services. If you're using block and no file, then you only need the resources for block.
11:23
If you're using both block and file, we make sure that there are resources available for both. And so your array is going to be able to dynamically allocate the resources depending on whether you're using block or file or both. The other aspect is, in 6.8.7 and later, we do have multi-server support. That means, in the configuration for your file services on the
11:43
array, you can set up multiple file servers. So this helps out with secure multi-tenancy, but also if you have different LDAP servers or authentication, maybe you have different domains, or several organizations within your group that you're needing to
11:59
provide support for. Maybe your NFS datastores are running on a completely different domain than the one your user profiles are actually being stored on, or the SMB services that you need to provide are on a totally different one. So that allows you the flexibility to not be locked into a single LDAP authentication and a single domain,
12:22
so. All right. And some stuff with VMware Cloud Foundation 9.0. Now, there are some actually promising things coming here with 9.0: in SDDC Manager, when you're deploying management domains, you're no longer required to actually use vSAN, so you're not locked into having to use vSAN
12:43
when you use VCF anymore. So your management domains can use NFSv3 or Fibre Channel VMFS as principal storage. They haven't confirmed that they'll ever add iSCSI or NVMe; it seems to be pretty locked into VMFS over FC, NFSv3, and vSAN. I guess the other
13:08
comment there would be that you can still use vSAN with it. Some of the stuff that we're working on: we were working on a 5.2 brownfield conversion guide. Everything really broke when trying to do a brownfield import for a management domain on 5.2, and
13:24
I think we opened up 5 different bugs with VMware. They fixed a couple of them, but the NSX part just always fell apart. So yeah, we had to run a bunch of custom Python scripts, and so I just told Nelson, who was writing it, don't worry about it. We're just gonna skip it and just focus on 9.0.
13:40
So this will be coming out in the next 4 to 6 weeks, as we'll be doing a big refresh of all our VCF documentation, in particular with NFS, iSCSI, FC, and NVMe, because technically iSCSI can only be supplemental storage, while principal storage can be the others. Well, NVMe over Fabrics right now can only be supplemental storage on workload domains too.
14:01
So we'll get you updates there, and we've got some Pure1 updates here too. Before we go there, how many people have made the transition to VCF, whether it's 5.2, 9.0, any flavor of it? OK, pretty low numbers. That's what I thought. OK. Part of 9.0 is they are locking you into
14:22
that. They want you to go through the SDDC Manager deployment and everything through their licensing manager, through Aria Operations, or VCF Operations; they change the name every 6 months, I swear. But yeah, and I will say, because I've seen a bunch of this stuff
14:38
as well, just like Alex has, the install to stand up a brand new VCF environment is considerably easier. They've taken away the Cloud Builder sort of thing, and it's become a little bit better. Aria, as you mentioned, is really baked into VCF now, so it's not like this kind of bolt-on tool that it's always been since the Aria, or vRealize, days from years ago,
15:03
so it's a really good improvement. Real people call it vCops. I shouldn't call it that, yeah, right. You got it. Yeah. All right. So, VM topology: how many people in their environment,
15:18
they're using Pure, which is all of you, I assume, and are utilizing the VM Analytics collector appliance? OK. So you're aware that when you utilize that, you can get an end-to-end topology all the way from the array down to a VMDK. It's really cool. It's a great free, or included, option when
15:42
you buy into Pure, but what a lot of people aren't quite aware of is that the data we're collecting for that can also be utilized for some business cases, like playing with some numbers to figure out what it costs to move back and forth and where some optimizations are, because we're looking at your overall virtualization environment and the Pure
16:09
arrays as well, and trying to show some optimization options. And so with that you can utilize VM Analytics and the virtualization assessment tool, and, in the topology view, download that topology. That was something that's been asked for quite a while, and it's really easy to do.
16:31
You'll see in here, and I kind of got ahead of myself, but it's really hard to see, so you're gonna make it bigger, yeah. I think this is actually really cool, right? One of the things people would ask for is, hey, I wanna actually export all this stuff so that we can actually see it.
16:47
And so here it's actually able to export everything. You really can't see it here, but that entire topology line that you were seeing with VM Analytics is exported to a CSV, and it does that from the inventory standpoint, but also from the performance metrics and capacity metrics too. And so you're able to get all of this information, rather than just exporting the
17:08
report where I get the top 10 VMs on CPU or top 10 VMs on capacity and that sort of thing. This will allow you to export everything as that CSV. So if you do have custom reports that you wanna build out from that, you'll have the ability to do it right away. You don't have to worry about integrating with vCenter and pulling all this information yourself or aggregating it.
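As a sketch of what post-processing that exported CSV could look like, here's a small Python example that ranks VMs by combined IOPS. The column names and the sample rows are invented for illustration, since the actual export schema isn't shown here:

```python
import csv
import io

# Invented sample rows standing in for an exported topology CSV;
# the real column names in the Pure1 export may differ.
SAMPLE = """vm,datastore,array,read_iops,write_iops
sql-prod-01,ds-vmfs-01,flasharray-a,1200,300
web-01,ds-nfs-01,flasharray-b,80,20
build-01,ds-vmfs-01,flasharray-a,400,600
"""

def top_vms_by_iops(csv_text, n=2):
    """Return the n busiest VM names, ranked by combined read + write IOPS."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda r: int(r["read_iops"]) + int(r["write_iops"]),
              reverse=True)
    return [r["vm"] for r in rows[:n]]

print(top_vms_by_iops(SAMPLE))  # → ['sql-prod-01', 'build-01']
```

Once the data is in a CSV, any custom report (per-cluster rollups, capacity trending, chargeback) is a few lines of the same pattern.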
17:28
We'll be able to do that directly for you, so. It's in a convenient way for you to actually build out this stuff um that can be more customized to you individually and this can be done for the previous, uh, you know, 12 hours, 24 hours, 3, and or however long the period is that you wanna do for it. And for those of you who didn't raise your hand
17:47
because you're not using it: I actually like to refer to the tool with a term I came up with, called mean time to innocence. I literally was talking to a customer who was using the same tool, and they came over with the same term, so I feel kind of validated. It's a great tool to have, mean time to innocence, as in, where is the problem? I know I used to be in your shoes; Alex
18:10
probably was in your shoes as well before, and we always hear that. I like to pick on my database guys on my team; they're the ones that cause the most problems, or complain the most about performance in their environment. And this is a great tool to say that it's not a problem on the array side, by looking at the topology view.
18:28
It's not a problem in your virtualization environment, but usually, if you go all the way to the left on the VM and look at the disks, this is where you can start seeing that there might be issues internally inside your environment. And now it's a great way to say, hey, system team, architect team, DBA team, go take a look at the other tools you're using to manage your
18:48
environment. And this is a way to really quickly get in there. It's not a real-time tool; it has a lag of about 3 hours. But it's still a great tool to see what's happening over the last 3 hours, 24 hours, 7 days, and things of that nature.
19:06
And then on top of that, as I alluded to before I got ahead of myself, that topology tool is pulling in data that you can use for assessing your environment. And this is a great way, if you're not using something like Aria already, to understand what's happening across your entire infrastructure.
19:25
This is showing in here we've got 7 vCenters, 7 clusters, 7 virtual data centers, and you can see we are massively overengineered on the amount of gear that we have in our lab, in this example here. So this is a way to figure out:
19:43
where are the optimizations? One is, do you really need all that gear? Is there a way you can save money, especially when it comes to upgrading next time? You can also play with some numbers over on the right-hand side; you'll see there are per-core numbers there. I like to refer to them as list prices.
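The per-core what-if being described is simple arithmetic, sketched here in Python. The fleet-wide core count is a made-up example; the two prices are just the figures under discussion on stage:

```python
def license_cost(total_cores, price_per_core):
    """Annual per-core subscription cost for a fleet."""
    return total_cores * price_per_core

cores = 4000                            # hypothetical fleet-wide core count
list_cost = license_cost(cores, 350)    # the oft-quoted $350/core list price
negotiated = license_cost(cores, 199)   # the $199/core what-if from the demo
print(f"savings: ${list_cost - negotiated:,}")  # → savings: $604,000
```

The assessment tool does this same multiplication against your actual core counts, after first suggesting where the core count itself could shrink.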
20:00
So, you know, in the early days, I've been hearing that Broadcom's not really sticking to the $350 per core anymore; they have kind of backed off on that. But now you can play with the numbers here based on your overall environment and see what the savings would be like if I were to get,
20:16
you know, $199 per core. I'm not saying you're ever gonna get that, but it's a great example here to say: go from $350 to $199, and based on the optimizations we're looking at across your overall environment, what might the savings look like? And so in this environment we're saying, if you look at the license right now, it's $3,002,000, and we believe there's a savings of
20:40
$227,000 that you could make by looking at the overall optimizations we have in there. Luckily I don't pay for those licenses; those are nice NFR licenses, so I can get away with this, yeah. And, like it says at the bottom, it shows you cluster-level analysis. You can look at an individual vCenter; you can look at host-level insights. Actually,
21:03
one of the things we plan to do in the future, it's not there yet, is to then show you things like, hey, maybe multipathing is kind of wacky, or you're seeing some other I/O that's kind of weird. And we're also looking at VM-level optimization; this is where the cost optimization piece comes in. This is looking at an individual vCenter. Yeah, so this is the vCenter
21:26
viewpoint on it, yeah, and this is looking at all the clusters in there. Red is what we're referring to as areas where we know there are some warnings and you should take a look at those, like low resource utilization. Not necessarily bad in the sense that your environment's in a bad state, but these are areas where we think you can make the most impact by taking
21:47
advantage of the optimization options in there. And then this is a view showing down at the individual virtual machine level; we can also look at hosts and datastores in there, and we're looking at a particular cluster. And see, every one of the VMs in here has a yellow hexagon beside it, and we can show what that looks like,
22:05
pulling some of the same basic information that the topology view was pulling already, but putting it in here. It's showing that, hey, this virtual machine that we have highlighted has barely used any resources over the last 7 days. So do you really need to have, I'm gonna pick on the top VM, do you really need 16 or 24 vCPUs in a virtual machine?
22:26
Maybe you can back off there. That gives you the ability to also expand and run more VMs in your environment as well. And then lastly, this is the top view that I was squinting at, trying to figure out the numbers. In this instance, this is a much larger environment; it's not as overengineered as the last one was, but a single vCenter,
22:48
20 clusters, $1.3 million, and we think, based on the optimizations that are available in there, that you can save $449,000. So instead of spending $1.3 million, you're gonna spend just under a million dollars for your overall licensing costs. And it gives you the ability to play with that: we've got VCF, VVF, and VVS in there, so you can play with those. They're baked-in numbers right now,
23:11
but you can easily change that $350 to $199 like I mentioned earlier. Yeah, perfect. All right. And then there's more that we're gonna be adding to it; obviously, there are a lot of aspirational aspects to improving the VM analytics and VM assessment and recommendations, so that we can help you get a higher-level view
23:34
of it. Now, obviously, everyone's gonna have a different environment, different recommendations, and depending on your VMware rep, different negotiating that needs to be done. So it's something that we wanna at least enable you to do. OK, with the vSphere plug-in, we've been making a lot of efforts to improve various aspects of
23:51
the vSphere plug-in. Obviously, deploying it, updating it, and all that stuff we're still working on, but we've also been adding a lot more support for NFS and NFS datastore features, because with vSphere 8.0 they did release some new stuff with nConnect and VMkernel NIC binding, both for NFS 3 and for NFS
24:11
4. But actually getting that configured is kind of a pain. They did add some of that stuff into 9.0, but not as much; for NFS 4 you still can't do VMkernel NIC binding in the GUI with 9.0. But here we have nConnect support and NFS 4.1 multipathing, meaning that when I go ahead and
24:29
create a new NFS datastore with the plug-in, I can specify the number of TCP connections that it's gonna be using, and I can also use multipathing if I have multiple file IPs set up, and so I can get that configured when I'm creating it with the plug-in itself. Additionally, we did build in the VMkernel NIC binding here. This is where you're able to specify, hey, I'm gonna be opening up these NFS connections
24:50
specifically from this VMkernel port. I don't want to be using some other vmk just because it happened to be routed there; I want to be going through these specific uplinks. And so we do have that ability baked in here for both NFS 3 and NFS 4. Now, there's a couple of ways you can do it.
25:06
You can use the same VMkernel NIC setup, so if they're all sharing the same distributed switch and they all have the same VMKs on the right networks, then yeah, it's easy to do that. But if you do have a more complicated setup where each host has a little bit different networking configuration, it does allow you to do it on a per-host basis too.
25:23
How many of you are actually taking advantage of the plug-in from us? Oh, cool. So for all of you who are not, I wanna encourage you to do this. I talk about the plug-in almost on a daily basis to customers. The plug-in can really simplify your life, and it's not just for creating datastores or doing things like VMkernel NIC binding or anything like that.
25:47
Now, there are some real reasons to use it: to save yourself time so you can focus on doing other things. The nice thing with this is it's always gonna do it the same way every time. I can tell you, in my days being a VMware admin in a production environment,
26:06
it was always easy to screw something up and miss something. That's why the plug-in is a really good way, because it always does it the same way every time. OK, a couple other features we added: mount to additional hosts. We didn't have this before, but this allows you to actually mount it to additional hosts in
26:23
there, but also set the VMkernel NIC binding and the nConnect itself. So you have all those same connection-level settings from creation; you can set those when you're mounting it to additional hosts in the cluster. And there's also editing the NFS policies and the datastore itself.
26:39
So if you actually need to change some of the access rules, or you want to change any of the policies for those exports, you're able to do that with the plug-in. And one of those other workflows that you can see highlighted, the undelete virtual machine, is still there too. We have that with NFS, so if a VM is deleted and you had a snapshot protection policy configured,
26:56
you'll be able to recover it. The other feature, and sorry, those features are with 5.4.0 as well, which is GA now: in 5.4.0 we also have VM Teleport. One of the fun things with vVols-based VMs is we do have the ability to have,
27:15
you know, that disk-to-volume granularity. So what we've done is we've taken the plug-in, and since we have insight into the VM itself, we're gonna go ahead and see what the VM configuration is. And we're gonna tag all of those data vVols on the array with the CPU, the memory, the SCSI controller or NVMe controller configuration,
27:38
the boot policy, all that information, and we can then go ahead and tag all the data vVols. So then whatever's going to be using it in the future can read through that and know, hey, these data vVols are associated to a virtual disk, essentially, and these are all the settings for them. And so we can build in integrations with other
27:58
platforms, or within VMware itself, so that you can recover it. And so there's a couple of options. You can just prepare it, where we just tag it, so there are no snapshots for it yet; you'll just wait for a protection group snapshot schedule to kick in. There's one where you go ahead and prepare it, where we'll tag it and then it'll go ahead and do a snapshot now
28:17
with its protection group, or you can go ahead and replicate it to another array. And this is what the tags will actually end up looking like. We'll do this for each of the data vVols, so if there are multiple virtual disks for that virtual machine, we're gonna do it for all the data vVols there, and we'll actually go ahead and put all this information in there so that we know the networking
28:37
configuration and the actual updates there. So this allows you to be in a state where, if I need to recover this from a specific point in time, or I need to recover this to a different vCenter, I'll have the ability to do it. And so here I actually go to a different vCenter. This one happens to be linked, but it doesn't need to be.
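The tag metadata just described can be pictured as a simple key-value map per data vVol. This Python sketch is purely illustrative; every key name and value in it is an assumption, not the plug-in's real tag schema:

```python
# Hypothetical per-data-vVol tags capturing the source VM's configuration;
# every key name and value here is invented for illustration.
teleport_tags = {
    "vm-name": "app-prod-01",
    "num-cpus": "8",
    "memory-mb": "32768",
    "controller": "nvme-0:1",    # which virtual controller/slot the disk sat on
    "boot-firmware": "efi",
    "portgroup": "vm-prod-100",  # networking configuration for recovery
}

def summarize(tags):
    """What a recovery workflow could reconstruct by reading the tags back."""
    gib = int(tags["memory-mb"]) // 1024
    return f"{tags['vm-name']}: {tags['num-cpus']} vCPU, {gib} GiB RAM"

print(summarize(teleport_tags))  # → app-prod-01: 8 vCPU, 32 GiB RAM
```

The point of tagging the array-side volumes rather than anything in vCenter is that any consumer connected to the array, a second vCenter, OpenShift, or something else, can read the tags and rebuild the VM without the original vCenter being reachable.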
28:57
This could be a completely unlinked vCenter that's just connected to this array, and so I can go through here and say, hey, I'm gonna teleport a virtual machine, and I just do this at the cluster level. And it'll go ahead and give me a list: hey, we found some tags in this namespace associated with this VM here. And if there were multiple VMs,
29:16
it would go ahead and show each one of them. And so now I can go ahead and recover that VM, and then I power it on, and there it is; it actually booted up and it's running. And so I recovered that to a different vCenter. Now, right now it's fairly simplified; it's just doing a single VM at a time. Obviously it's not gonna do,
29:35
you know, 500 VMs or 50 VMs at a time; it's not built to that yet. But this is more, hey, we proofed this out, we know it works correctly, and now we can do it within a vSphere environment. There are other aspects that we're wanting to make this ready for, and we're building out integrations. One is AVS, and eventually EVS,
29:55
in some format, where we can go ahead and recover or teleport this VM up into Azure or eventually AWS. And then also, are there ways for us to then integrate this into an Azure VM or an EC2 instance? There's gonna be a lot more extra work that needs to happen with that as far as the boot goes. But this also sets the groundwork for us
30:17
to build out integrations with OpenShift and Portworx, so we can teleport the VM through Portworx to OpenShift, or we can do this for OpenStack or any of the other kinds of platforms, because you can go ahead and read all these things and actually create a new VM there and copy it out. Obviously each of these won't be, you know, virtual disks on a VMFS or an NFS datastore, they're gonna be raw devices,
30:43
so you do have to consider the paths and, um, total devices and connections and stuff like that, but it is something that is gonna go ahead and build that in there. Um, and if there's anything that you would prefer us to build a workflow around, we're definitely open to suggestions and advice. Um, OK, so the other part here is Pure Fusion. You've probably heard a few things about Fusion so
31:06
far this week and you're gonna hear more today and tomorrow. Um, it's been a really big focus for us the last couple of years, and this year as well, really trying to change the kind of paradigm or viewpoint that we have with managing your storage arrays, rather than managing each array individually, which is what you tend to do with the remote plug-in right now, as you
31:26
register each array individually and do operations against each array individually. It's trying to look at this more from your Pure platform, right? I have all of these different arrays or these different FlashBlades, whatever the case may be, and I have this entire fleet that's gonna be configured.
31:43
And so this can be done through a fleet management level (there's animation on that one, OK), and this can be where, hey, I'm gonna connect, um, to this fleet, and I have all these arrays that are a part of here. I need to connect this volume to this cluster, um, but maybe I'm sending that API call, or that context, to one array and it'll do it for another array in that fleet, because I'm just
32:06
specifying that. So you're able to actually manage any of the arrays in that fleet as long as you connect to one of them. OK. Uh, same thing with file as well, block and file. OK. There's a lot more animations than I realized on that one.
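To make that fleet idea concrete, here's a toy Python model of the routing being described: you authenticate to one member array, but name the array the operation should actually take effect on as the "context". The class, method, and field names are invented for illustration; this is not the real Fusion API.

```python
class FleetClient:
    """Toy model of fleet-scoped management: connect to one member array,
    direct each operation at any member by naming it as the context."""

    def __init__(self, entry_array, fleet_members):
        # fleet_members: array name -> array state (a plain dict here)
        assert entry_array in fleet_members
        self.entry = entry_array
        self.members = fleet_members

    def connect_volume(self, context, volume, cluster):
        # The request is sent to self.entry, but takes effect on `context`.
        target = self.members[context]
        target.setdefault("connections", []).append((volume, cluster))
        return f"{volume} connected to {cluster} on {context} via {self.entry}"

fleet = {"array-a": {}, "array-b": {}}
client = FleetClient("array-a", fleet)
client.connect_volume("array-b", "vol1", "esx-cluster-01")
```

The point of the sketch is just the indirection: one authenticated endpoint, any member array as the target.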
32:23
OK, and so the other part here, and this is actually where I'm a little more excited. The fleet management thing does simplify things, especially when you have a lot of arrays that you're wanting to add in. But what actually excites me more, and what we'll be able to build out more integrations with, is our presets and workloads. This is where you can essentially define,
32:43
hey, this is a workload that I know is gonna be running on multiple different clusters, or that I'm gonna deploy multiple different times. It might be a collection of just one VMFS data store or one NFS data store, or maybe it could be a collection of 3 VMFS data stores that I need to run and they're sized differently. Um, in the future, hopefully we can get RDMs as part of that as well,
33:02
but there's a specific configuration that I want to be replicable, and that I know has snapshot protection, replication protection, QoS, tagging, whatever this, um, prescribed, uh, workload is gonna be. You can define that with your preset. Then what Fusion will go ahead and do is, when I deploy a new workload,
33:21
it'll automatically create all those volumes with that specific set of requirements, similar to how storage policy based management worked with, uh, vVols: hey, I have these rule sets and I'm creating these things, I want to make sure the array features are actually being used there. And so then eventually we will be able to add compliance, um, checks as well.
33:41
Hey, are these workloads actually in compliance with the settings they were supposed to have? Did someone remove it from a protection group, or did that protection group get disabled? Are these snapshots actually being taken? Are they being replicated? Is QoS still enabled? And so we can know, hey, these workloads are in compliance or not, and so you have that consistent workflow.
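Here's a rough sketch, in Python, of what a preset and the compliance check just described could look like as data. Every field name here is invented for illustration and is not Fusion's actual schema.

```python
# Illustrative preset: a prescribed workload of three differently sized
# volumes with protection and QoS requirements attached.
PRESET = {
    "name": "three-vmfs-app",
    "volumes": [{"size_gb": 512}, {"size_gb": 512}, {"size_gb": 1024}],
    "protection": {"snapshots": True, "replication": True},
    "qos": {"iops_limit": 100_000},
}

def is_compliant(workload, preset):
    """Check whether a deployed workload still matches its preset:
    right volume count, protection still enabled, QoS still set."""
    return (len(workload["volumes"]) == len(preset["volumes"])
            and workload["protection"] == preset["protection"]
            and workload["qos"] == preset["qos"])
```

A compliance sweep would run a check like this against every deployed workload and flag any drift, such as a volume pulled out of its protection group.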
34:01
So now we want to take that into the vSphere plug-in. The first part of it here is just getting the Fusion integration, um, from a fleet management standpoint. So when I go ahead and I add an array or a fleet, rather than needing to add all 8 arrays individually, I just grab one of the arrays in the fleet and add it,
34:18
and it'll add all the arrays in the fleet as well, so that you no longer need to do it individually one at a time. So this helps out when it's a new, you know, plug-in that you're deploying or a new vCenter you're setting up. Um, but then you'll also see, hey, I don't have some of these arrays in a fleet, um,
34:33
and it'll allow you the ability to go ahead and join those arrays that aren't in a fleet to an existing fleet, where you don't need to log into the arrays, get the key, log into the other array and join it, or do that through the CLI; you'll be able to do it right here through the plug-in. Additionally, it's being able to create data stores from the presets and workloads, so that
34:53
if I have all these different presets and I want to create a new data store, I'll be able to go ahead: hey, I'm deploying this workload from a Fusion preset, and it'll list out all the different presets, the types they are, how many resources. And we're still working on this part of it, it's still in development, but it'll give you more context of what this preset is actually doing
35:11
itself. And we do have a demo, here we go. So now it's actually gonna go ahead and work through, um, each one of these things here. And so I'm gonna start off in the demo actually going through in the plug-in. You know, there's some arrays that aren't, um, part of a fleet yet, but I don't have a fleet actually managed there.
35:33
And so as I click on there, I can see, well, there's 2 arrays here not in a fleet. I'm gonna go to add array or fleet. We'll be improving some of this kind of workflow; this was, uh, just doing it within the confines of our existing one. Here I just go ahead and select an array that I know is part of a Fusion fleet.
35:50
And so now I'm gonna go ahead and provide the URL and the username and password. Now, LDAP does need to be configured on each array that is in the Fusion fleet, so I need to use an LDAP admin or, uh, LDAP user in order to do this. And so I'll go ahead and give it that LDAP username and password. And so rather than just adding that individual array,
36:09
I now have both arrays added here. And now what I'm gonna do is I'm actually gonna go to one of the arrays and create a fleet key. What I'm doing here is, I'm manually, on the array side, going to add one of those arrays that wasn't in the fleet to the fleet. And what the plug-in is gonna be able to do is it's automatically gonna see, hey, this
36:30
other array was added to this fleet. And so you don't need to actually update anything in the plug-in; it's gonna already know that this fleet was added. So what this does mean is, if you deploy some new arrays and you add them to the fleet, they're automatically gonna show up in the plug-in then as well.
36:46
Like, you're gonna see that that array is actually there now in the array list. Now, right now we don't have all of the context switching and stuff like that, um, enabled within the plug-in, so there are some workflows that are gonna be unique, uh, specifically needing to have authentication with that individual array and token. So we'll have some guidance around this where,
37:09
hey, if you're gonna be using these different workflows, you do need to be authenticated with each array. The long-term goal is that you only need to authenticate with a single one of them, and then so long as you communicate with that array, you can still issue the commands and requests to any of the arrays in the fleet. And the other aspect to this LDAP user is we're going to try to get more LDAP integration directly
37:31
with the plug-in, so that whichever user is actually logged into vCenter is gonna need to provide their LDAP credentials for that fleet. That, um, vSphere user that logged in will then need to provide the LDAP user credentials, and they're paired together. So if a different user logs in, they're not gonna be using existing credentials that were
37:53
there; they need to provide their own LDAP credentials. So rather than seeing that one pureuser, or whatever username that you used to add the arrays before, showing up in the audit log, each user that's actually running these workflows will show up uniquely in the audit log as well.
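A small Python sketch of the per-user credential pairing being described, with invented names throughout; it only models the idea that each logged-in vSphere user must supply their own LDAP credentials rather than reusing someone else's.

```python
class FleetCredentialStore:
    """Toy model: pair each vSphere user with their own LDAP credentials
    for the fleet, so array audit logs show the real user instead of one
    shared service account."""

    def __init__(self):
        self._by_vsphere_user = {}

    def pair(self, vsphere_user, ldap_user, ldap_secret):
        self._by_vsphere_user[vsphere_user] = (ldap_user, ldap_secret)

    def credentials_for(self, vsphere_user):
        # A different vCenter user never inherits another user's pairing.
        try:
            return self._by_vsphere_user[vsphere_user]
        except KeyError:
            raise PermissionError(
                f"{vsphere_user} must supply their own LDAP credentials")
```

The real plug-in behavior may differ; the sketch just captures the one-pairing-per-user rule and why it yields a per-user audit trail.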
38:12
So it does give you a little bit more control from an RBAC and LDAP integration standpoint. All right, here I'm actually gonna go ahead and create the host groups now, but one of the things we wanted to do is be able to create these host and host group objects on all the arrays in the fleet rather than doing these one at a time. I wanna go ahead and create these on the entire fleet itself.
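The fleet-wide fan-out being described can be sketched in a few lines of Python, with the arrays modeled as plain dicts; function and field names are illustrative only.

```python
def create_host_group_fleetwide(fleet, host_group, hosts):
    """Create the same host group (with its member hosts) on every array
    in the fleet, instead of one array at a time; returns the names of
    the arrays that were updated."""
    updated = []
    for array in fleet:
        array.setdefault("host_groups", {})[host_group] = list(hosts)
        updated.append(array["name"])
    return updated

fleet = [{"name": "array-a"}, {"name": "array-b"}]
create_host_group_fleetwide(fleet, "esx-cluster-01", ["esx1", "esx2"])
```

A later refinement, as mentioned in the talk, would accept a subset of the fleet rather than always fanning out to every array.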
38:35
And so now this is gonna go through, and it's gonna create the host and host group objects on each of the arrays in the fleet. Um, so really useful when I deploy a new cluster that's gonna be connected to one of those ones. Eventually we'll add in the ability to maybe select just some of the arrays in the fleet that you want to add to rather than all of them at once,
38:53
um, but yeah, so there you can see that it's connected to all of them in the little widget there. And then this is the create data store workflow that I showed before. So here I can go ahead and deploy a workload, uh, from a Fusion preset, and here I can see the presets there. And I'm gonna actually go ahead and choose the first one, where it's gonna actually deploy 3
39:14
VMFS data stores at once rather than just doing a single VMFS data store at a time. This preset that I set up says I actually need to have, um, 3 different VMFS data stores of this size created, and so it'll go ahead and go through this process, give me a summary and the placement. Um, and then once this is done, it's gonna create all three of them: it's gonna create
39:36
that workload, um, through Fusion, and then it's gonna connect those volumes to the host group, and then it's gonna go ahead and create the VMFS data stores. So rather than needing to do this 3 times to create 3 different VMFS data stores, it's just going through this process once to do that, and then it's also correlated within there. Now, one thing that we'll be able to do now.
39:57
With the plug-in as well, we'll have the ability to, um, show and display the different, uh, workloads that are actually deployed within this vCenter: which virtual machines are using it, um, you know, what the preset breakdown is, what the workload breakdown is, to give greater fleet-wide insights into what's actually being run in there. And so it
40:19
gives a lot better, um, information to the user within vSphere, but also potentially beyond. And then here's just showing that the workload is right there. It deploys it within a volume group and then the volumes that are in there. So it does provide a lot more insight, potentially for us to build out further
40:38
integrations, but also for users like storage admins to see that as well, and then, um, our integration points too. So, alright, I think we only have 3 minutes left, I think. Yeah, I think it's 3 minutes. And if we don't get to other questions here, or get your questions answered here, by all means, Alex and I are over in the expo like you
40:59
mentioned in the very beginning. So please come over to the hypervisor section, get your points there if you're playing the game also, and we can answer any questions you have. But any questions while we're in here? No worries, um, now I guess there are some socks that you can get.
41:17
Oh yeah, socks are part of the game. So if you get enough points, I don't know how many points, don't ask me, um, because we're not eligible for the socks, um, but you guys are. Um, there's many other things, uh, but socks are kind of cool. I think that's for doing the survey or something like that,
41:33
Either way, yeah, feel free to come up and talk to us, um, about any of the questions and stuff like that, um. I think the biggest thing: stay tuned, though, like we're gonna have a lot more content coming out with each of these options, um, and there should be a community post coming on Pure Community. If you're not a member of Pure Community, um, yeah, feel free to join.
41:51
There'll be a response to the VA's end of life announcement coming, so.