47:14 Webinar

Discover Nutanix Cloud Platform and Start Your Hybrid Cloud Journey with Nutanix and Pure Storage

Learn how Nutanix + Pure Storage deliver simplified, scalable HCI solutions with AHV, data protection, and disaggregated infrastructure.
This webinar first aired on June 18, 2025
00:00
Well, welcome. Thanks for coming to our session. Uh, my name is Jirah Cox. I'm a principal architect on our field CTO team, designing, uh, and presenting Nutanix solutions. I actually cover both Americas, but mostly just North, because all I speak is English. Uh, anybody in the room today already a Nutanix customer?
00:19
Perfect. You're in the right session. Um, so, uh, you might have heard we have, you know, new friends, right, in the form of Pure Storage. Very delighted to be here. This is of course our first time here at Pure Accelerate. Um, so we're gonna be walking through what is
00:33
this new Nutanix Cloud Platform that Pure Storage is choosing to partner with, what's included in that? What can it do? Uh, we will of course talk about probably the reason everyone's in the room, the hypervisor, but also what does the rest of the stack have in store. And at the end, of course, we'll also talk about Nutanix with Pure Storage,
00:49
uh, not just the HCI part of our portfolio as well. So with that, uh, I'm gonna let you read this. Raise your hand when you're done. Just kidding. Uh, like 98% of what we're talking about today is totally GA, shipping now. Really just the forward-looking part is gonna be the Pure Storage stuff at the end because
01:07
that's not GA today, of course, coming soon. Before we get into the tech, uh, which we will certainly do plenty of, I wanted to kind of cover the "but why": why is it shaped the way it is? Why did we design it this way, how do we think about the data center and it being a cloud platform? Why do I even call it a cloud platform when everyone wants a hypervisor?
01:24
So we started here looking at clouds, and realizing that the experience of using public cloud was what we wanted our customers, the IT platform and infrastructure providers, to build and vend to their business units, right? The application teams. We looked at cloud and we said the way that you consume it via APIs, with scalability,
01:47
minimizing the care and feeding of infrastructure itself as much as possible. That's our true north, right? We actually want to be a cloud platform that you can vend to your customers and business units and app teams. Now, of course, even if we're a cloud platform, you can, of course, own it, install it, pay with CapEx, not rent forever,
02:04
run it on the servers of your choice. Do as much as we can in software, uh, of course give you total sovereignty, right? So it's your data, you own it, put it wherever you want to, um, and of course even in geographic terms as well, right, if you want to uh, operate and vend two availability zones in one country,
02:22
you get to choose to do that versus waiting for, like, a hyperscaler to build it and invite you to move in, so. But that's our true north. I'll reference back to this a few times. We want to look and act like a cloud platform, uh, which of course does mean we need to run VMs as well, but we didn't start with running VMs as the reason we built the company 15 years
02:38
ago, right? Look at a cloud, offer cloud services, um, uh, and be a cloud platform both to you and to your customers. This is kind of a self-portrait of what is on the truck with Nutanix today. At the bottom, what we call NCI, uh, this is the main part of our portfolio partnering with Pure Storage, with NCI. This is where we run VMs today, with HCI
03:03
software-defined storage, where we offer things like our SDN stack for networking, microsegmentation, built-in disaster recovery, analytics. All on our NCI stack, which of course can run on the servers of your choice, uh, in your data centers or in colos or at the edge or in cloud. So notice that I also put up here AWS and Azure, and we have Google Cloud
03:24
coming out soon as well, so that you can also run our entire full stack, the identical code base, on bare metal instances in public cloud as well. This is gonna be very cheesy. I'm gonna own that up front. What I tell my customers is, we help your customer, Acme Inc. or Contoso Inc. We help your cloud be shaped the way you want
03:41
it to look like. If you want your cloud to include data center, colo, edge, and public cloud with one continuous operating model, one way to do VM security, one way to do analytics, one way to do disaster recovery, we give you that. And if you want your cloud to contain a subset of that, that's fine, your choice.
04:00
Wherever you deploy Nutanix, you can have one cloud team administer all of that, wherever you choose to place it. The geography really doesn't matter. So beyond that, our middle layer here is where we're gonna offer cloud-like services. So, uh, file services as a, as a service, that's repetitive, sorry. Uh, SMB, NFS, S3, and block storage.
04:21
In the middle, our data services for Kubernetes is the decoder ring, right? Connecting container runtimes to that persistent storage, right, which usually is the harder part to solve for. Running a container, launching a pod is easy; storing its data, that takes more effort and more thought.
04:35
We do that too. Enterprise AI, not to be terribly reductive, but AI normally is a containerized application looking at data, talking to GPUs. We do all of that as well. We also help lifecycle those LLMs, right, because anybody can build an AI stack one time, but how do I take the new models as they come out, put them into a test environment,
04:54
get confidence in them, and then vend them to my application teams securely, uh, as an endpoint, as an OpenAI-compatible API endpoint. Uh, that's much harder. So yes, we can do the, the execution of running AI, but really that LLM app store is the better value add as well. We do that too.
05:12
Database as a service: again, we want to look like a cloud platform. Clouds involve database as a service. Click here, receive Postgres; click here, receive Oracle or SQL. But not just provisioning, also patching, scaling, cloning, uh, DR, uh, all involved with that. That's in our database service there.
05:30
At the top is our cloud management layer. So of course, if you've come by the booth, which is no longer standing, uh, today, uh, you saw us showing off Prism there. That's our main interface for management, uh, that runs on every cluster and also in Prism Central, which is basically our vCenter analog, our multi-cluster manager,
05:48
uh, of course, you know, all delivered in HTML5, all as a resilient service. Our cloud manager, which offers cloud intelligence, right? So things like AI operations, if this, then that. Self-service. So how do I vend an app store to my constituents, right? Like, you know, push button,
06:04
receive SQL VM, receive, uh, you know, a multi-VM stack. Whether that needs to hide behind ServiceNow in your environment or be a competitor to ServiceNow, your choice, right? We can be the storefront or we can hide our storefront behind another storefront, whatever you're doing for your enterprise today. Cost governance matters to us an awful lot.
06:22
Traditionally in IT, running VMs on traditional infrastructure leads to a lot of data blindness around, can I tell the business what a certain VM costs per month? Usually I can't, which means cloud has a price tag. I can't prove that it's higher. I suspect that it's higher, but I can't prove it. With our platform, we actually start doing that
06:40
for you basically from day one. With our phone home telemetry that you would send us, similar to Pure1 telemetry, when you send it to Nutanix, we know what you paid for our software. We have a guess at what you paid for the hardware, you can add in your own FTE costs,
06:52
licensing costs, whatnot. We will run the formulas for you and tell you that VM costs $16 a month. Now, whatever you do with that, that's layer 8, right? Showback, shameback, chargeback, you can start down your own, uh, FinOps model.
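As a rough illustration of that kind of per-VM cost allocation (a toy sketch; the inputs, the equal CPU/RAM weighting, and the dollar figures are assumptions for the example, not Nutanix's actual formula):

```python
# Toy sketch of per-VM cost allocation. The split and numbers are illustrative,
# not the actual Nutanix costing model.

def monthly_vm_cost(cluster_monthly_cost: float, cluster_vcpus: int,
                    cluster_ram_gb: int, vm_vcpus: int, vm_ram_gb: int) -> float:
    """Split a cluster's monthly cost across VMs by vCPU and RAM share."""
    cpu_share = vm_vcpus / cluster_vcpus
    ram_share = vm_ram_gb / cluster_ram_gb
    # Weight CPU and RAM equally; a real model would add storage, FTEs, licenses.
    return cluster_monthly_cost * (cpu_share + ram_share) / 2

# Example: software + hardware + staff amortized to $8,000/month for a cluster
# with 1,024 vCPUs and 16 TB of RAM; a 4 vCPU / 16 GB VM lands around $20/month.
print(f"${monthly_vm_cost(8000, 1024, 16384, 4, 16):.2f}/month")
```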
07:05
We give you now data instead of guesses around VM costing even for on-prem. Uh, and then, of course, you know, we're here to run both VMs and containerized applications probably forever, right? Containers are eating all the new workloads, but VMs aren't gonna go away in any of our lifetime either. So we wanna be able to do both well as a platform, uh,
07:23
at the exact same time. So here's the world we live in. Most of you are probably here today for the first column, right? Uh, planned and unplanned change, even around hypervisors, um, which of course we can help solve for, but even the, the other three all matter as well,
07:40
right? Workload patterns change, right? Things grow in unexpected ways, get bigger in places we didn't expect, get smaller in others. Data of course does what data does, uh, around growth, and then component failure, right? Again, that's the world we live in. We've yet to write software that doesn't need
07:56
good hardware to run, right? But hardware does what hardware does, which is, on a long enough timeline, hardware's gonna fail. That's what informs our philosophy of how we build the platform, and this is really our, we did our why, this is more of our how. So first off, we expect failure, right?
08:12
We're a software only company. You can run us on the hardware of your choice, whatever hardware you run us on. I, uh, with respect to all our friends at Cisco and Dell and HP, it's commodity, right? Same Intel CPUs, same AMD CPUs, same Nicks. We just, we spend zero time designing solutions
08:29
with extra resiliency in hardware. We do it all in software. We expect the hardware to fail and we handle that outcome, versus hoping it doesn't fail and running that way. Of course, as a web scale distributed system, we of course designed with no single points of failure. If we did that, we'd be pretty bad at our job.
08:47
Scale, of course, is inevitable. So with Nutanix you can start with as little as 3 nodes in a data center and then grow to as much as 32 nodes without ever having to revisit the initial design for those 3 nodes. So scaling up and scaling down. Workloads do change. We could even do smaller than 3 nodes at the edge.
09:02
We can do 1 and 2 node clusters as well, uh, but for most of us in the data center here, uh, 3 nodes and up is the common starting point. Um, teaching our software to evolve over time, right, so no rigid design choices, um, you know, I even call out Optane here, RIP to that, but over time, right, we've taught our code base, uh, parts of which are now 15 years old,
09:24
uh, how to evolve with newer technology, right? So like NVMe, with SPDK for addressing NVMe in user space; those are all components we can take. We see them in the wild, integrate them into our stack, and make the entire platform better for you guys, our customers. And then of course we do build on open source and we evolve it wherever we need to.
09:43
So like we do run Cassandra. It's fairly heavily modified, but it is Cassandra under the covers. Uh, we'll get to AHV in a minute. AHV is KVM with QEMU extensions. Our real, uh, value add there is what we've done around storage performance and manageability, right? KVM is great.
09:57
It runs the entire world in the hyperscalers, but the UI, of course, is not included, right? So we solve that with Prism. So, uh, again, our design principles, right: using off-the-shelf hardware, uh, doing everything in software that we consider to be fancy or resilient, no single point of failure, scaling out, uh, and of course self-healing,
10:15
right? Because if we don't do self-healing, then we wouldn't, uh, we wouldn't deserve to be your platform of choice. So the architecture, we're powered by metadata, right? Similar to the storage platforms you guys already uh run and love. So we also are powered by our metadata that gives us rapid cloning,
10:29
gives us intelligence, gives us autonomy. We'll get into how that powers even our snapshots as well, and how our snapshots are different from the traditional hypervisor snapshots we're all probably familiar with, how we differentiate, how we're even better and faster there. Scalability and data locality. So you've probably heard us say that before.
10:45
Um, that means that we know how to fingerprint your VM data, and when you're running us in the HCI deployment model, we know how to put the hottest data your VMs need on the nodes where they're running, on the same servers, right? So of course, a common question is, can I run a VM bigger than the storage on that host? Of course you can.
11:03
It's a cluster. It acts like a storage array from a capabilities perspective, but the hottest data of the VM I can bring to run on the same server where the VM is scheduled, which means all of its reads now are across the front side bus, right, which is like the fastest fabric in the data center. And then all writes are always protected. So, um,
11:22
you know, I'm a realist, we live in the real world, we are often not most people's first exposure to HCI. Lots of other HCI over the last, call it, decade have had a model where under failure conditions, under duress, when something breaks, the next VM write might be underprotected, right? Might not comply with all of your resiliency
11:41
guarantees that you've promised to the business. With Nutanix, we don't do that. Uh, I can have the most disastrous things happen to a cluster. A node goes offline, 2 nodes go offline. I can lose an entire rack of servers. The very next VM write is guaranteed by
11:56
definition to be compliant with your data resiliency promises to the business. I literally cannot honor a write for a VM on my platform that is not in compliance with your policies. Let's get nerdy, uh, into the anatomy of a node. So of course we run on servers. There's two kind of main components here. We, um,
12:15
for almost everything we do other than containers on bare metal, we deploy on a hypervisor. So of course, uh, for the first 5 years of our life, that was ESXi. We won Best of VMworld 3 years in a row, uh, for being an SDS solution on ESXi. We launched AHV 10 years ago.
12:30
Uh, we still support both hypervisors; of course over time we've had Hyper-V and, uh, whatever XCP-ng was before then, uh, XenServer, yeah, back in the day. You know, we tailor that to customer demand. So now we're down to just ESXi and AHV, and we're up to 82% of all of our customers running AHV as their flavor of choice in our stack.
12:51
But this is an agnostic diagram. No matter what hypervisor I run, this is the architecture. Our CVM there, our controller VM, this is where all of our code runs. Um, this is where we do our management, our resiliency, our snapshotting, our replication, our automation, our, uh, analytics; it all runs in the CVM there.
13:12
Uh, it runs in user space, so everything we do, uh, I have a 4 node POC going on right now, uh, delivering a million IOPS out of a 4 node cluster with a CVM that runs in user space, right? So the performance is there, and we do it all from user space, which your security teams will definitely love to hear, even if no one here cares about user space versus privileged kernel space.
13:31
So we're on, we're on servers, the CVM addresses all of its local storage, but then it creates a storage cluster that delivers traditional enterprise storage outcomes. So you're familiar with this kind of standard dual controller design. We run a controller per server, right? And we say per node.
13:47
A node to us, in our parlance, is a server running a hypervisor and also running a CVM. So if I have a 4 node cluster, I run 4 storage controllers. If I have a 10 node cluster, I run 10 storage controllers. It runs faster as you actually grow with Nutanix, because every single CVM is doing work in parallel.
14:05
So a node to us is a CVM plus hypervisor running on a server there, so I'll say node a lot. That's my verbal habit there. Um, it runs next to your VMs, right? So you'll see it; you used to see it in vCenter, now you see it in Prism if you're running AHV. Uh, we don't hide any of that. You'll see our VMs published there.
14:22
Now, we do our best to make sure you can't, like, shoot yourself in the foot. You can't, like, right-click, power them off. They're a virtual appliance, and they need to be running, but you'll see them there, right? We're very transparent about what our resource usage is there.
14:34
But what do I get for that, right? When I spend resources running that CVM, what do I get out of that? Uh, of course, the scale we talked about, the performance and the, the data placement, uh, and we'll get into what we, how we do that and how we, uh, excel at that. The simplicity again this is a kind of a deeper
14:49
dive architecture session, but remember how we started this is meant to look like cloud both to you and to your constituents, right? Your applications and teams that you host. So we wanna make this as simple as possible. We want you to trust the system and how it works, but this doesn't require a lot of care and feeding. We automate all of this for you so that it's not much more complicated than being,
15:10
say, a cloud IaaS VM admin compared to being a Nutanix cloud platform on-prem admin. Uh, and then of course resiliency, right? And why are we worthy of being trusted with your data? So again, most folks have seen HCI that was not Nutanix before they met Nutanix, so some of this is unique to us.
15:28
Let's say I start with a three node cluster which has certain characteristics around, you know, what's the compute, how fast is that, how much data can I store, and how fast is that data access. Let's say I start with a 3 node cluster here. I can grow that cluster in ways that a lot of other HCI solutions,
15:44
uh, can't offer. So I can start with smaller nodes. I can get bigger nodes over time, mix that into the exact same cluster. I can actually even get into storage only where I don't run your VMs, I only run my CVM here on this server.
15:58
And I can even do compute only. Now the last two here are not commonly deployed, uh, outside of licensed proprietary databases, Oracle, uh, because that's where the license core count really matters an awful lot, right? Uh, but if I need to design a solution where I can, uh, really, really corral which cores are visible to my workloads as schedulable,
16:19
I can use both of these technologies now to really ring fence what that looks like. If I don't have that constraint in my, uh, environment as a design, honestly, HCI really is the way to go, the way that 99% of our customers run, but we do have some other tricks up our sleeve here around, uh, solving for other kinds of software constraints.
16:38
Uh, data placement. So let's get, uh, into an example. You heard me say RF2 before; that stands for, uh, replication factor 2, which is how we do our data placement. So we don't use RAID anywhere in our entire stack. We do it all in software. So when your VM wants to write a piece of data, let's call it a megabyte,
16:53
we're gonna label as A here, uh, and then it writes a second megabyte, B, a third megabyte, C. We replicate that in our software stack from CVM to other CVMs in the cluster. So one copy stays here locally where that VM runs, and one copy is distributed throughout the rest of the cluster here. Um, with that,
17:11
whenever a VM moves, remember we started life on ESXi where we weren't even authoritative around VM scheduling. We had to follow that VM wherever vSphere DRS moved it around to. We will then relocalize that data to wherever it needs to sit to be again localized for that VM, and bring that hottest data to where the VM executes.
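A toy version of that RF2 placement (a minimal sketch; the node names and the random choice of peer are assumptions, since real placement also weighs disk usage, health, and so on):

```python
import random

# Sketch of the RF2 write placement described above: keep one replica on the
# node where the VM runs, put the second on any other node in the cluster.

def place_rf2_write(local_node: str, cluster_nodes: list[str]) -> list[str]:
    """Return the two nodes that will hold copies of this write (RF2)."""
    peers = [n for n in cluster_nodes if n != local_node]
    remote_node = random.choice(peers)  # real placement weighs usage and health
    return [local_node, remote_node]

nodes = ["node-1", "node-2", "node-3", "node-4"]
print(place_rf2_write("node-2", nodes))  # e.g. ['node-2', 'node-4']
```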
17:30
That lets us get back to, uh, reading all hot data over that local frontside bus. So with that, um, you know, if you've seen our designs, we ask for 10 gig, uh, networking or faster, which is nowadays pretty easy to get. Back 10 years ago that was, uh, a bit of a reach, but, uh, we want you to run us on a great network, and then we do our very best to not use it,
17:52
right? Because whatever your VM read/write blend is, if you're 70/30, if you're 80/20, that 70 or 80 doesn't even leave that box, right? Only under failure conditions, or rebuild conditions, or if you've added a new node to the cluster and we're scaling out. Other than those conditions, we're gonna read all those bits that your VMs care about.
18:09
Not every bit they own, but the hot working set of data, locally over the frontside bus. We've also brought data locality to our metadata. I talked about how we're a metadata-governed platform. So that means I have data about your data. I know where your vDisks live. I know what data's hot,
18:25
what data is cold. But we've also, uh, taken our Cassandra, uh, metadata layer and split that into what truly is global and what I can do autonomously, local on a node, that therefore I don't need to notify other nodes about, right? What can the node have autonomy over? So things like disk placement, right, when I'm doing a write.
18:43
Uh, we fingerprint that write for, like, uh, temperature. Uh, we know what disks are less and more used in our cluster, so I can place data that way, and things like that data temperature or even the disk placement algorithm can run locally on a node and not even need to bug the rest of the cluster. This lets us run even faster with the exact
19:02
same architecture. There are ways that we've taught, um, our data path new tricks to adapt and offer greater performance to our customers. So, for performance here, now this VM has its data set and it reads locally across this, uh, frontside bus, and we're following that VM wherever it moves in the cluster.
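A tiny model of that node-local autonomy (illustrative only; the dict shape and the least-used-disk rule stand in for whatever the actual placement heuristics are, the point is that the decision needs no cluster-wide coordination):

```python
# Sketch of a node-local placement decision, as described above: choose a
# target disk from per-node state without asking the rest of the cluster.

def pick_local_disk(local_disks: dict[str, dict]) -> str:
    """Pick the least-utilized local disk for an incoming write."""
    return min(local_disks, key=lambda d: local_disks[d]["used_pct"])

disks = {
    "nvme0": {"used_pct": 61},
    "nvme1": {"used_pct": 48},
    "nvme2": {"used_pct": 72},
}
print(pick_local_disk(disks))  # nvme1
```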
19:23
Now, uh, I can avoid using that network, uh, wherever possible for read access. Now I'm only using it for that RF2 write splitting and copying within the cluster, which minimizes the network chatter, and ultimately, why do we do this? It makes your VMs as fast as possible. So, resiliency. So, um, yes, yes, yes, this is all great, Jirah.
19:43
It runs fast and looks good on PowerPoint, um, but in the real world things happen, right? They, like, knock stuff offline. I have a new guy in the data center who tripped over the wrong power cord, so this node is no longer online. What happens there? So of course, the first thing, where that VM is running, I have to,
19:58
of course, just do HA, basic blocking and tackling for virtualization. I have to resuscitate that VM on some other node in the cluster, right? So of course the VM's gonna have a failure event just like any other HA event on the hardware. VM powers up. Um, what's neat here is that even from minute
20:15
one, this VM has full access to all of its data, right? So I can still see data blocks A, B, and C in the cluster. Yes, they're 100% remote, because that's where they were when this HA event happened. But I power this VM on immediately. I don't wait for a rebuild to happen here. The VM turns on immediately, standard VM HA here.
20:35
We call this Stargate, right? Because I can go in any node in the cluster and come out any other node in the cluster, because we're giant nerds. Just like the movie Stargate. Um, so this VM can immediately see all of its data. Some of it is remote, right? Just like today, if you're using a storage array across the fabric, all your data is remote,
20:52
so this is not a bad thing. Um, so I can fire this VM on immediately, and then over time I will detect all these remote reads, and I will (a) rebuild the data that is therefore single-copied due to that node failure, and (b) relocalize the hottest tier of data back where that VM runs. We do all this for you automatically, no interaction needed whatsoever.
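The sequence just described, modeled on toy data (every name here is invented for illustration; none of this is a Nutanix API):

```python
import random

# Toy model of the failure sequence above: restart VMs elsewhere (HA), then
# re-replicate any data left with a single copy so RF2 holds again. Over time
# the hot data would also migrate to each VM's new host.

nodes = {"n1", "n2", "n3", "n4"}
vm_host = {"vm-a": "n1", "vm-b": "n2"}                  # where each VM runs
extent_copies = {"A": {"n1", "n3"}, "B": {"n1", "n4"},  # RF2: two nodes each
                 "C": {"n2", "n3"}}

def handle_node_failure(failed: str):
    survivors = nodes - {failed}
    # 1. HA restart: VMs from the failed node power on elsewhere immediately,
    #    reading their data remotely for now.
    for vm, host in vm_host.items():
        if host == failed:
            vm_host[vm] = random.choice(sorted(survivors))
    # 2. Rebuild: restore a second copy for any extent left single-copied.
    for extent, holders in extent_copies.items():
        holders.discard(failed)
        if len(holders) < 2:
            holders.add(random.choice(sorted(survivors - holders)))

handle_node_failure("n1")
print(vm_host, extent_copies)
```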
21:14
One of our core, uh, design theses for Nutanix is we want to take data center emergencies and make them non-events, right? You know that they happened, you know that we handle them automatically, and no one gets out of bed to go react to this, right? It's fine, fix it in the morning. Now, let's say that this rebuild is completed.
21:34
My new guy in the data center trips over another power cord. I can still do this again, right? The cluster doesn't just tolerate one failure, it can rebuild and tolerate more failures, right? Because it's all done in software. So I can keep on losing nodes here, losing hardware down to the,
21:47
you know, the percentage of consumption of the cluster, right? So if I have a 10 node cluster and it's half full, I can lose one or two nodes at a time, as long as I'm allowed to rebuild between each event, down to half full, and keep on running here. So as much resiliency as we can possibly deliver and provide for you guys. Uh, all done in software.
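A rough way to reason about that "lose nodes down to your fill level" point, assuming RF2 and equal-sized nodes (toy numbers; real clusters also want rebuild headroom):

```python
# Rough arithmetic for tolerating repeated node losses with RF2: the cluster
# can absorb another loss, after rebuilding, as long as the surviving raw
# capacity can still hold two copies of everything.

node_capacity_tb = 20
total_nodes = 10
logical_data_tb = 50          # "half full": 50 TB data -> 100 TB raw with RF2

surviving = total_nodes
while surviving > 3:          # 3 nodes is the usual minimum cluster size
    raw_after_loss = (surviving - 1) * node_capacity_tb
    if raw_after_loss < logical_data_tb * 2:   # can't hold two copies anymore
        break
    surviving -= 1            # lose a node, rebuild completes, repeat

print(f"Can ride through failures down to {surviving} nodes in this example.")
```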
22:06
Um, let's keep going. I will have time for questions at the end. So, disaster recovery, natively built into our platform. So this is part of the "why would I use AHV," right? What's in it for me as an admin? So call it what it is. This is very, very similar to,
22:20
uh, VMware SRM or Zerto, but we've built it natively into our fabric, right? So we do per-VM DR. I can do as little as one-minute replication from site to site with, uh, automated recovery, boot order plans, re-IP if you need that. Non-impactful testing and auditable results you can give to your compliance team to say,
22:39
yes, I did my DR test. Natively built into our solution. Um, I can do a couple of different RPOs here. In our world, async means one hour or longer RPOs. Near sync means one minute.
22:52
I can do that across any distance, right? I can go coast to coast. I can go continent to continent at one minute RPO. And then Metro for us is sync rep, right, where of course the laws of physics do apply, right? So for sync rep, you know, think same city; the book says 5 milliseconds, I really want you to be at like 2 milliseconds
23:09
or less to consider that, because that becomes the new speed limit of the solution, right? If you are 4 milliseconds from site to site, every write now takes 4 milliseconds, because I have to honor that remote synchronous write. So you don't want 4 milliseconds, you want like 0, 1, 2, think dark fiber. But if you're already doing sync rep today, this is,
23:27
uh, able to do it over the exact same fabric there as well. I can go one to many. I can go fan in, fan out. If I have 10 edge sites, I can back them up to one central data center. I can do all of that.
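A back-of-the-envelope version of that sync-rep latency point (illustrative numbers only): with synchronous replication every write waits for the remote acknowledgement, so the site-to-site round trip becomes a floor on write latency.

```python
# Illustration of why inter-site RTT becomes the new speed limit for sync rep:
# a write isn't acknowledged until the remote copy lands.

def sync_write_latency_ms(local_write_ms: float, round_trip_ms: float) -> float:
    return local_write_ms + round_trip_ms

for rtt in (0.5, 2.0, 5.0):
    print(f"RTT {rtt} ms -> ~{sync_write_latency_ms(0.3, rtt):.1f} ms per write")
```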
23:38
I can also go, uh, think about, um, uh, straight from the headlines, right? This latest failure where we saw a Google Cloud outage impact, was it Cloudflare, which impacted AWS? I can now run VMs in one cloud provider and offer DR in another cloud provider just by running Nutanix in both sites.
23:58
Uh, of course, we also have, uh, partners that run DRaaS on Nutanix, because all my clusters can now speak S3 storage as well. I can replicate to S3 storage, which can either be your S3 storage on-prem or in cloud, or can also be in, uh, a partner data center or a DRaaS provider. I can also, of course, replicate live VMs or snapshots.
24:18
Um, I don't know if anybody here works for a backup provider. My shorthand is backups are worthless, recovery is priceless, right? So we have, uh, strong partnerships with all of our backup partners and recovery partners. Uh, so in that spectrum of recovery, right, I can recover at DR,
24:34
I can recover at the prod data center. I can, of course, use all my backup providers, um, all of which are gonna use our snapshots, uh, and we'll get into why that's awesome as well. We can design Nutanix really for any level of availability, from handling hardware failure, uh, node or server failure,
24:53
uh, of course we do resiliency at the data level as well, but also cross-site, even to the point where we can go, uh, with a sync rep cluster and have Metro HA, where I can have two paired partner data centers, and I can have one go fully dark at 3 in the morning, and light up every VM at the surviving data center with no humans getting out of bed,
25:10
right? That's in my mind that's nirvana, right? Um, now I can sleep well at night and even when I'm on call, know that my phone won't ring. I mentioned our snapshots. Let's talk about that. So since we're, since we're governed by metadata in our platform,
25:24
we're not building that sort of layer-cake solution of snapshots that you're used to for VM, uh, snapshotting. Uh, at this show, I'll say I consider our snapshots natively to be Pure Storage snapshot level of quality, right? You love those, you'll love ours too, um, so.
25:41
Uh, and what's really cool, coming up with our solution, uh, which we'll talk about at the end: our snapshots with the partnership become their snapshots. So when you take a hypervisor snapshot, you'll get a Pure snapshot. It'll be great. Um, on our snapshots, so if I'm running a VM
25:55
and I, uh, take a snapshot, I mark the original one as read-only. The new one becomes the read/write target. Our snapshots are redirect-on-write at the per-megabyte level. So only the changed data takes up any additional space. Uh, so if this VM was A, B, C, and D, and only the D block changes,
26:13
only the D block needs to get stored there. I maintain that block map of what is the state of the system over time, which is what a snapshot is. So with this, of course, uh, a couple of different things here. Taking the snapshot has no impact on performance. Running the old copy has no impact on performance.
26:28
They're both equally fast, right? I can actually use this for snapshotting and for cloning, because neither one is faster than the other. Neither one actually is prioritized over the other. They're equal citizens on the platform. No performance impact, uh, and of course, you only pay for stored,
26:42
uh, changed data. And a new trick we have up our sleeve is I can actually give this restore capability to one of my VM admins, right? My SQL admin; if you host Exchange, the Exchange admin. That VM owner can browse their own snapshots and pull a file back from my snapshots into,
26:59
into their, uh, running VM environment without needing to go restore a backup, without needing to go file a ticket with me. They can do this themselves if I give them permission to, right? So self-service, uh, end-user restore from my snapshots. Uh, getting a little bit crazier here. So again, source environment up there at the top, no snapshot.
27:16
We then take a snapshot at time zero. It takes up no additional space, has no impact on performance. Uh, as data changes, I only store the changed blocks there, but snapshots and clones in our environment are really the exact same thing. They're referencing shared blocks and leveraging that to your advantage.
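The block-map idea described here, in miniature (a hedged sketch; the class, the granularity, and the version numbers are toy stand-ins, but it shows why a snapshot and a clone are the same operation over shared blocks):

```python
# Miniature version of the block-map / redirect-on-write idea described above.
# Each map records which version of a block a VM or clone currently sees;
# snapshots and clones share unchanged blocks.

class VDisk:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})      # block id -> content version

    def snapshot(self):
        """Freeze current state as its own map; underlying blocks are shared."""
        return VDisk(self.blocks)

    def clone(self):
        """A clone is the same operation: a new map over shared blocks."""
        return VDisk(self.blocks)

    def write(self, block_id, version):
        self.blocks[block_id] = version       # only the changed block is new

vm = VDisk({"A": 1, "B": 1, "C": 1, "D": 1})
snap = vm.snapshot()
vm.write("D", 2)                              # only D's new data consumes space
print(snap.blocks["D"], vm.blocks["D"])       # 1 2 -> both views stay fast
```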
27:32
So this is exactly how I power my database as a service offering: I snap your SQL database or Oracle or Postgres or MySQL and offer a clone to a lower environment. That lower environment clone, um, for database cloning, only takes up changed data, but they all run equally fast. But what I didn't have to do here is a full backup and restore just to get a database clone, right?
27:56
Happens in a few minutes; we do it for you all automatically. Um, these are the exact same snapshots we also use for disaster recovery, right? Take the snapshot, ship the changed blocks over to, uh, DR. We also taught our system, I think it was 2 years ago, how to even do sub-megabyte change rates.
28:11
So now I even use less of the WAN pipe for smaller changes as well, so it runs even faster. Let's talk about security. Anybody here run a dark site, no internet connectivity, no phone home, no telemetry? You can run our entire stack the exact same way.
28:26
Uh, when you are, of course, internet connected, we can auto-fetch updates on your behalf, but when you're a dark site, you can run us entirely, uh, disconnected, and of course you bring your own updates into the environment. We designed the entire platform to be securable, but not just by a human; we can automate that for you too. So you can go to any running Nutanix cluster
28:46
and tell it, please self-harden yourself to DISA STIG baselines, and it will go away and do that for you, come back and say it's done. But that's not a one-time event in our world. We automate even the governance of that as well. So if a human goes in and monkeys with the configuration, loosens, say, SSH from key-based down to password-based,
29:03
we catch that event, alert you, and then re-heal back to the baseline that you defined we need to run at. Um, so of course we have full RBAC for the entire full stack. You can define roles for your ops team for what they need to be able to do. Um, uh, restricted shell, I'm not gonna read the whole slide.
29:22
Oh yeah, FIPS 140-2, which is as much as you can do in software. You can of course get higher than that, uh, with hardware as well, with SEDs. Um, and then, uh, yeah, native software encryption at rest as well, if you choose to use that. So we can also do that with no additional external KMS.
29:37
I will not be reading you this entire slide. The punch line is at the bottom, nutanix.com/trust. That is where we have our full pedigree for everything. If you say, but Jirah, I need cert XYZ, go find it here. Uh, I don't memorize it all. We'll look this up together,
29:53
but we are here to be able to run in the most secure, most hardened environments. We have all the pedigrees, uh, and attestations to that effect. OK. Probably the reason you're all here, the hypervisor. Attribute-based access controls? So, sort of, yes, we do that via tags on our platform.
30:14
We use those as categories, but if I tag it as prod, only my prod ops team could see that. I could write that rule very easily. Yeah. Yeah, fair. So AHV, uh, what is it? It's the reason why someone told you to come to this room and learn about using Nutanix, why we're, uh, partnering with, uh, Pure Storage.
30:33
So we built AHV to be an enterprise-grade hypervisor. Uh, we'll get into what's under the covers, what makes it up, on the next slide here, but effectively, think about it: we include this because we're gonna be a cloud platform, right? So when you use a public cloud, there's a
30:49
hypervisor in there too. It's not available separately. It's not even how they market it, right? The platform is a cloud platform that includes running VMs, just like us. Um, secure by design, of course, simple to operate. We want it to be as simple to run VMs on Nutanix
31:04
as it is to run VMs in the cloud of your choice, uh, but of course as speedy and enterprise-ready as possible. So, of course, like I said, uh, everything that we ever deploy other than bare metal containers is going to use a hypervisor. We include AHV with all licensing. Uh, so, what is AHV? Because, uh, now I've mentioned it like 5 times.
31:24
So AHV is our hypervisor. It's derived from KVM plus QEMU extensions, right, for both the scheduling and the isolation, uh, components there. Both of those come from, of course, upstream open source Linux, uh, environments. Uh, we, of course, also try to be good citizens in the open source world.
31:38
We actually attend KVM Forum and present there. We publish back upstream a lot of our improvements. There are some benefits that are only on Nutanix, uh, only because they're only applicable to Nutanix. So the bottom left corner here: we've taught the KVM disk path to be multi-threaded in a way that the native one is not, but it only speaks that way to our Nutanix CVMs.
31:57
So that's why that one's not in open source, because there's no benefit, because you can't run CVMs in open source. Uh, also the storage redirector, right? Same thing; let's talk about failure handling. I can have a 4-node cluster, 4 CVMs. I could have a CVM even maliciously halted, right?
32:14
Maybe the new guy on the job goes in, right-clicks, uh, or SSHes in and crashes one of my CVMs. That, on my platform, doesn't even generate an HA event. VMs keep on running on the exact same hypervisor and don't even notice. Because of the storage redirector, right, again, that Stargate service, I can get data from
32:31
anywhere in the cluster and bring it anywhere else; that VM can get access to its data from other nodes. That also is a KVM, uh, enhancement we put into AHV that only applies when you're running CVMs. So that one's not in upstream either. Uh, OVS, we'll get into that in a minute, but Open vSwitch is also a key part of our
32:48
stack. Uh, that's how we get our Open vSwitch, uh, basically a dvSwitch equivalent, right, where I can manage the entire, uh, cluster, and actually, coming very shortly, multiple clusters, all, uh, with policies around VLAN mapping. How do I patch it?
33:04
The answer is LCM. Our Life Cycle Manager engine understands the entire stack, no matter which hardware logo you choose to run us on. We can apply everything from BIOS firmware updates, out-of-band firmware updates, CPU microcode, hypervisor, storage fabric, governance fabric, the RAID controller, and even higher up the stack, all with one life cycle management engine.
33:27
LCM in our world understands what it can do in parallel and what it has to do in a serial fashion. So like, if you're running on HPE, I can patch your iLO firmware all in one big shot because it's not impactful. If I'm patching my hypervisor or even my storage fabric, I do one node at a time, and LCM knows how to do all of that automatically,
33:47
knows how to go as fast as possible, but also, uh, totally non-disruptive to your workloads. So with one interface I can scan for all available updates, if you're a lit site with internet access, download those updates as well, and then, uh, apply them at the time of your choosing. So now I've reduced the care and feeding, the effort for the stack, down to basically scan
34:09
for updates and tell me when to start, and I automate everything else for you. That gives you back a meaningful amount of time, uh, because my customers, even like hospitals, uh, core banking, patch during the work day. They don't wait for Friday night to roll around in a maintenance window, because all my patching is done non-disruptively.
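A sketch of that parallel-versus-rolling ordering (which component lands in which bucket here is an assumption; the scheduling shape is the point, not the exact classification LCM uses):

```python
# Sketch of the ordering rule described above: non-impactful updates fan out
# in parallel, impactful ones roll one node at a time.

NON_IMPACTFUL = {"bmc_firmware"}                  # e.g. out-of-band firmware
IMPACTFUL = {"hypervisor", "storage_fabric"}      # needs a rolling update

def plan_updates(nodes: list[str], components: list[str]) -> list[tuple]:
    plan = []
    for comp in components:
        if comp in NON_IMPACTFUL:
            plan.append(("parallel", comp, nodes))          # all nodes at once
        else:
            for node in nodes:                              # serialized rolling
                plan.append(("serial", comp, [node]))
    return plan

for step in plan_updates(["n1", "n2", "n3"], ["bmc_firmware", "hypervisor"]):
    print(step)
```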
34:26
So do it during the work day and get your nights and weekends back. So how do I get on to AHV? So, migration. So we give you the, uh, the Nutanix Move appliance. We thought long and hard about that name. Uh, it's a free virtual appliance, uh, we give it to you for free.
34:43
It can move from Hyper-V, from ESXi, can even grab VMs out of public cloud and bring them back on-prem, or even push VMs from on-prem to cloud as well. Uh, it also can do SMB migrations as well. So if you're running, say, Windows file shares, and I want to move them over to Nutanix-governed file shares, that's also fully automated for you as well. So again,
35:00
totally free. That's the best part. Uh, simple, uh, and automated, so my customers that are using Move at scale get between 200 to 400 VMs per week per engineer moved over, right? I have a customer in Chicago who did about 70,000 cores of ESXi moved over to AHV in about 9 months with no outside help at all. They did it,
35:19
they did it all themselves. So you write a migration plan. I could show you this at our booth, but I'll show it to you on my laptop after this, uh, if you want to see it. You write a migration plan, right? It discovers your vCenter VMs; you add them to a list, as little as 1 VM, as many as 100 per batch,
35:35
and I can run multiple batches in parallel as well. Uh, we'll replicate them all over during the work week, and then Friday night rolls around. You're ready to do a cutover, you hit go, and it's basically as little as a glorified VM reboot of impact to your applications. So we'll power down the VM on vSphere,
35:52
do that final delta replication of, like, megabytes' worth of data, power it on on AHV, and it just comes up. Um, also, it's very enterprise change-control friendly. There are native backout plans for what if something goes wrong. The answer is delete the AHV VM, power on the vSphere VM,
36:08
and again, a glorified reboot worth of impact to my running applications. It's not the only way to move to AHV, so there are, of course, also third-party options as well. Sometimes it's restore from backup, because a lot of our backup providers can do a cross-hypervisor restore.
36:26
Uh, there's also even third-party software, right, that can do an in-guest, agent-based replication from anywhere to anywhere else that also, of course, totally works fine on AHV. So a lot of that is licensed, but again, we give you our tool for free. However you choose to move to AHV, we can help you, support you, and give you advice there.
36:40
With Move, of course, the value prop is, watch my little VMs hop here: per-VM granularity and full control over what moves when, right? So now I can work with my application owners and have full control over when they're gonna feel that VM-reboot's worth of downtime to move that VM over onto AHV. We'll do everything else you need for you automatically: driver
36:59
injection, uh, because it does need KVM virtio NIC and storage drivers injected into it. Uh, IP preservation: if you're running static IP addresses, I'll run a script to capture all of those and then reapply them to the new vNIC on AHV. Uh, I can also automatically install my VM tools, and if you want to also uninstall VMware Tools, it does all that for you.
37:20
I'm gonna build this out real fast, but just to talk to it. Two different ways to move on to AHV. One, of course, is the forklift: go get your VMs that exist, replicate them over with Move, and recover them onto AHV. The other one I would, uh,
37:35
I would encourage you to think about is, if you're already an automation shop, go ahead and build that golden path to get on to AHV sooner rather than later, right? If I can connect ServiceNow to AHV sooner in the process, I build fewer VMs I have to go move twice, right? So incentivize my developers to get on to AHV
37:52
sooner, build that path wide and smooth, and now I've capped what I have to go back and catch via a V2V migration tool as well. So usually two paths here: the build new and the migrate old, uh, as well. Uh, I'm definitely not gonna read the whole table to you, but the punch line here is the cutover time.
38:11
No matter how big the VM is, on a network, this is a 10 gig baseline network, no matter how big the VM is on the same network, cutover time remains about the same, a VM reboot worth of impact, no matter how long it took to seed that data over, right? So seeding for bigger VMs takes longer, but cutover time remains flat. Therefore, very predictable to the business and
38:29
to your app teams. Like I said, I can move up to 100 VMs, uh, in a batch. I can have multiple batches. Multiple engineers are all working in parallel. Um, the, um, once I've finished seeding, let's cut over within about a week. This is not a DR tool. Don't run it forever.
38:48
Don't let it keep on taking snapshots and sending for months and months and months, uh, but once we're ready to cut over, go ahead and do that. Why does it fail, right? It has like a 98% success rate. The last 2% often comes down to, can I automate inside of your VMs? Can I run a script, right? So UAC is a common stumbling block,
39:05
and then if you don't have WinRM enabled, that also is a common stumbling block as well. SSH tends to be pretty ubiquitous if you're on Linux, but for Windows, these are the two common trip hazards. So think about that. Uh, of course you can enable both of these via, uh, GPOs, so it is very doable.
39:22
Uh, and even if you can't, we have a backup plan: we can do manual mode where you run the scripts by hand and we'll still do the VADP, uh, replication as well. Uh, and then remember there's also other ways to skin a cat. So, um, wherever you can do in-guest data replication,
39:38
think about that, right? Domain controllers, mail servers, SQL AAG. If you can build a new node, that's even cleaner than doing a V2V cutover, right? So don't forget other tools in the toolbox. Our stack can natively do microsegmentation at the vNIC layer, applied at Open vSwitch.
39:55
We're really applying an ACL to that, uh, Open vSwitch flow of what's allowed there. This means that as you move your VMs over to AHV, keep your IP address, keep your VLANs, uh, keep your MAC addresses, keep your networks, but just apply security, right? Get to zero trust. I can wrap a firewall around every single VM at the vNIC layer and separate prod from dev, or
40:16
web servers from app servers from database servers, and control all of that flow without any re-engineering of the network and without having to deploy my SDN stack. So we can do microseg or SDN or both, your choice, no dependencies between the two of them. So if all you want is microseg, just turn that on. If you do want the SDN as well, we can do the overlays, underlays,
40:36
Geneve encapsulation, VPCs, NAT networks, uh, BGP routing, um, cross-site VPN, and then in our latest release we actually even added an L4 load balancer. So for L7 or WAF functionality, we'll tell you, let's use a good partner for that, right? We have all of them, Citrix, F5 BIG-IP, but for L4 load balancing,
40:57
we can do that natively in the fabric now as well. So whether you want the SDN or microseg or both, we have all that solved. Uh, so for operating model, last one here, and then we'll get into, uh, what's around the corner with us with Pure. How do I act like a cloud operator to my business?
41:16
So, forecasting. The cluster will natively, uh, do some, we used to call it machine learning, now maybe call it AI, uh, forecasting on when this cluster will be full and run out of resources. So we have a 12-month horizon for CPU, memory, and storage, and we send you an alert whenever you're trending toward less than 12 months of runway on
41:34
any given cluster. So there's no surprises. You can write justifications to leadership, say, I need to grow this cluster, well in advance of there being a need or getting tight. When you do that right-sizing, I want that to be a very defensible proposition to your leadership. So let's try to do our best together to
41:51
eliminate waste on the platform. So I automatically scan every VM you run on Nutanix, and I highlight for you wherever I spot inefficiencies: if this VM has too much CPU, too much memory, more than we ever see it needing, I'm gonna put it on the naughty list and give you an alert or a report saying that VM's overbuilt,
42:08
claw back some resources, so that everything on this cluster is much more right-sized and very defensible. When you need to grow Nutanix, I want that to be a very easy decision, not, there's a lot of overbuilt waste inside that cluster. Uh, we talked about the cost visibility, right? I can give you a cost per VM for every VM you run on Nutanix to make it very,
42:25
very easy to then make data-driven decisions, right? Should this VM move to cloud? Maybe it should, but now we know it'll cost 2 or 3x more than keeping it on-prem. Now I have a lot less data blindness there. And of course, as a cloud platform, please just automate us any way you want to. You can use my frameworks, one of the like 3 different ways to do automation on Nutanix,
42:44
Ansible, Terraform, uh, APIs. I can show you how to deploy the entire, uh, stack, from bare metal through running services, all with Terraform. So however you want to automate and however much you want to automate, please just automate the cloud platform any way you choose to. So it used to be the main concern was the technology itself,
43:06
right, and now the main concern is redoing all our automation. Ah, yeah. So do you take what you've got and figure out how to make it, um, work in Nutanix, or do you take what Nutanix says, hey, do this, and ignore what you've already done and follow that? Everybody hear that? Do I need to repeat the question at all?
43:30
OK. The question was, I'm gonna, sorry, summarize it as, like, automation reuse: if I already have automation frameworks and procedures built, how easily can I use them on Nutanix, right? How much do I need to reinvent the wheel there? So we do have a lot of ability for code reuse.
43:44
So if you've written scripts, or if you have in-guest, like, Ansible playbooks that maybe vRA calls today, that's all very, very reusable on our platform. If you're already doing VM builds with Terraform natively, that's a, uh, light tweak to then target Nutanix with Terraform tomorrow. Um, the most involved migration, which would usually be a PS engagement, would be if you're
44:02
using vRA heavily; there we say, let's go in there and extract out the in-guest code and plug that into a new blueprint on Nutanix, but that's also very, very doable, uh, and often a lot faster. Uh, one of our, uh, automation architects, who's not here today, talks about how he did, like, a two-day boot camp with a customer, uh, on site, taught them how to use our automation framework.
44:21
Came back the following week and they were building their own blueprints, right? So very, very rapid time to value for using our vRA equivalent, called Self-Service, to offer that blueprinting, uh, and deployment of the service to the business. Am I answering your question? Thank you.
44:37
Um, so with that, let's get into what's coming up shortly. So, uh, the entire cloud platform that I just talked about, everything but the HCI storage, applies to our future partnership with Pure Storage here, right? So the microsegmentation, the SDN, the automation, all will be in play as part of this, uh, partnership here.
44:56
So this is the HCI cluster that we've run for 15 years: CVMs, next to your VMs, disks inside those servers. Coming soon will be, click, still CVMs, no disks in those servers, NVMe over TCP connectivity to what's been announced as FlashArray//X and //XL targets.
45:17
But everything I talked about for AHV functionality is in play in this solution. The Flow virtual networking and security, so SDN overlays, underlays, and microsegmentation, all available here. The native DR for per-VM granular replication, either site to site, site to cloud, or cloud to site replication there.
45:36
The same management framework with Prism. Uh, and of course, running on your choice of servers; closer to launch, we'll have a more defined list of what's, uh, validated there. What won't change, of course, is the same, uh, Pure arrays that you already know and love,
45:51
running, of course, Purity. We'll leave them in charge of all data efficiency and data reduction, uh, all resiliency and, uh, disk failure handling, of course still using Pure1 phone home, not trying to monkey with the way you already, uh, own and run your Pure arrays today.
46:07
The one thing we'll keep, probably, is gonna be that disaster recovery replication, right? But snapshotting will be offloaded to the Pure array as well. There you go. Uh, marketing slide for all the benefits, you get it. It's our stack plus their stack, peanut butter and chocolate,
46:24
two great tastes that taste great together. Um, the loudest partner we have, of course, is Cisco. They are the most excited for this, so it's been announced that we will be part of FlashStack. So if you're on UCS today, running Pure today, this is gonna be an easy drop-in replacement to swap out the software layer,
46:41
keep your UCS, keep your Pure. We will of course have more hardware announcements as well, closer to launch, from other OEMs, but Cisco is that excited that they've already announced and basically baked their list of what's gonna be part of the solution together. So they're actually here at the show.
46:54
Go meet them if you want to figure out which models they're gonna be supporting. That's my last slide. I got like a minute for questions? None, no time for questions, um, formally, but, uh, I'll be here and happy to take them. Uh, thanks for coming out.