39:14 Webinar

Unifying VMs and Containers: Adopting Modern Virtualisation with Portworx® and Red Hat OpenShift

See how Portworx® + Red Hat ease the move from VMware to modern platforms, using KubeVirt and container-native data strategies
This webinar first aired on 18 June 2025
00:00
Everybody. Uh, this session is about unifying virtual machines and containers together, adopting modern virtualization, and we're gonna be talking about Portworx and we're also gonna be talking about Red Hat OpenShift. My name is Eric Shanks. I'm a principal technical marketing engineer with Portworx by Pure Storage,
00:16
and I'm joined by George James. I'm a senior solutions architect at Red Hat on the global ecosystem team, uh, which means I work with partners such as Pure to kind of bring those, uh, solutions to market. And before we get started, I'm just curious from the audience, a show of hands, if you wouldn't mind.
00:32
Any of you currently or in the recent past, uh, VMware customers, or consider yourself a VM admin? That's what I expected. OK, um, so I am also; I've been a VMware person most of my career. I'm a 14-time vExpert. I'm a double VCDX, and I understand the things that are happening in the VMware ecosystem. I'm a little saddened by them,
00:58
um, but this is an opportunity for us. The VCDX program really talked about taking requirements, constraints, risks, and assumptions to manage your designs. That's the reason that we make a design and the reason that we're building things the way we do. In the recent months, let's say, um, some of our requirements and constraints have changed.
01:18
One of those constraints I think you're mostly aware of is a price increase from VMware, um, but we're also seeing some additional requirements that are being thrown upon us from, say, other teams or development teams, right? So you've probably seen statistics like this in the past, but this is one from Gartner. It shows by 2029,
01:39
35% of all enterprise applications will run in containers, which is an increase from less than 15% in 2023. The point of this slide is basically new applications are predominantly being built as containerized applications. They're not being built as virtual machines. Now, virtual machines still have their place. Um, we can consider them maybe legacy because we're not necessarily building new applications
02:01
on them as often. Um, but they're not obsolete. We're gonna be, uh, maintaining virtual machines for a long time to come. Yeah, there was also a, um, stat in one of the keynotes about 95% of all new applications are being containerized.
02:15
So, what we typically have seen in the past several years is you've got a set of physical hosts, and what you're doing is you have a vSphere layer, which is your hypervisor layer that runs your virtual machines. So that's one stack. We're also starting to see another stack that's typically being deployed in most um
02:35
environments where it's a Kubernetes layer. And so you can have this in two different form factors, right? In some cases you're gonna see there's a vSphere layer that runs your virtual machines, and I'm gonna deploy a Kubernetes cluster on top of vSphere, right? I've got virtual machines and those VMs become
02:49
my hosts for Kubernetes. In other cases, I might have a vSphere layer, um, on one side, and I've got bare metal hosts with Kubernetes on another side. So I'm running two completely separate stacks instead of them stacked on top of each other. In either case, what you end up managing is all of these other pieces. So for example,
03:09
performance monitoring in the vSphere layer, I might be using VCF Operations or IBM Turbonomic just as a couple of examples, but if I was in the Kubernetes layer, I would probably be using Prometheus and Grafana to do typical monitoring, right? So basically what I have is two different monitoring tools because I have two different stacks.
03:28
This continues, right? Logging and auditing. I might be using Datadog and Log Insight for the vSphere layer, but I'm using an ELK stack for Kubernetes. This continues with automation, with VCF Automation or vRA, uh, and Argo CD for Kubernetes. Um, backups, we have the same problem here, except in this case
03:45
we can use Portworx, um, and disaster recovery, a similar thing, right? The point of this is we have these two different stacks that we are trying to manage. This is cumbersome. So just having two different stacks, obviously you can see here,
04:02
um, that's a lot of different tools that I need to purchase and know how to use and have teams that can utilize those and I have to buy the software, right? That's typically what happens. But there's some additional things that happen even if you've already done those things. So there's operational redundancies as well.
04:19
What about license renewals, right? Now I've got this infrastructure team and a DevOps team. Let's assume those two teams are responsible for the two different stacks that we're talking about. The infrastructure team is like, oh no, our monitoring solution license is about to expire.
04:33
uh, we need to start the procurement process. So this isn't even an engineer, this is some sort of manager that has to go out and get these licenses renewed and probably, uh, negotiate some sort of a contract. The DevOps team is doing the exact same thing at the same time. So we're already seeing like these two stacks, obviously this is inefficient,
04:52
but also the processes around those are inefficient as well. So, how about another operational redundancy? What happens when we have to declare a disaster, for example? So, um, a lot of times if you were in the vSphere layer, you might be doing something like VMware Site Recovery Manager to fail over all your virtual machines. Is that the same process for your Kubernetes
05:10
layer? I don't know. That's a good question, but your DevOps team might have a completely different set of disaster recovery processes to handle. So not only is this additional work for them, but if I had a disaster, I might have a longer outage because these two teams now have to talk to each other to make
05:26
sure that they can coordinate their disasters at the same time, right? There's a lot of overhead here. What I'm suggesting is modern virtualization, and modern virtualization is we run a single Kubernetes layer and we run both virtual machines and containers on that one stack. And then all of your tooling here can be used one time.
05:46
I'm not gonna duplicate these efforts. Um, so I wanted to throw a couple of statistics at you here real quick. Um, one of the ways we do this is we're gonna be using a tool called KubeVirt to run virtual machines in our Kubernetes cluster. Um, we did a survey about a month ago. The survey results came in and we,
06:06
we collated those and published those, um, where it says 81% expect to migrate or modernize their VMs to Kubernetes. Now, I gotta stop you for a second. First off, you're probably looking at 81% and saying that's not accurate. Um, this is a list of people that have already deployed Kubernetes,
06:25
so they have a mature Kubernetes environment. And they're probably putting some sort of state inside their Kubernetes environment, right? So they're using a container storage interface, uh, or Portworx or something like that to provide storage capabilities for your Kubernetes layer, right?
06:40
If you're a mature Kubernetes company, this 81% seems to make a lot more sense, right? Um, if I had mature Kubernetes operations already, I could try out running and managing virtual machines on Kubernetes at the same time, right? And 65% of those plan to migrate within the
06:59
next two years, which shouldn't shock anybody, you know, if I've got the capabilities to do this, I probably would take advantage of it, um, just to get away from vSphere licensing at the current moment. Uh, and 79%, though, this is the interesting one, cite technology benefits or operational efficiencies as a primary driver for KubeVirt. It's not the cost issue that we're trying to
07:18
achieve. It's the modernization issue that we're trying to fix, right? We don't want all these inefficiencies with the double stacks and things. OK. So one of the ways we do this, um, we're leveraging an existing hypervisor. So there's probably people in this room that are like,
07:34
I'm scared of KubeVirt, I don't know what that is. I only heard about it recently. Um, this is a brand new thing. Uh, sort of true, but the underlying technology that runs your virtual machines in Kubernetes is a pretty tried and true hypervisor called KVM, right? So KVM was introduced in 2007 as an open source
07:55
hypervisor. Uh, in 2009, Red Hat actually started using it for Red Hat Virtualization, um, based on KVM. KVM got adopted by AWS in 2010. So it's been kind of tried and true, tested in EC2 for a long number of years now. KVM became the default hypervisor for OpenStack in 2012.
08:18
It was introduced into the Nutanix Acropolis Hypervisor in 2015. So if you were thinking about moving to Nutanix, KubeVirt's using the same underlying hypervisor as Nutanix is, right? In 2017, Portworx was founded to deliver Kubernetes-native storage and data management. We'll get to that in a little bit.
08:36
Uh, in 2017, Google Cloud started using KVM for virtualization, so now two of the major cloud providers are also using KVM. In 2019, it got added to the CNCF. In 2023, KubeVirt 1.0 got released. So that's available for commercial offerings. That's 1.0. It was being worked on prior to that, but it was released in 2023.
08:59
And then today, of course, um, we've got deployed commercial offerings, uh, and enterprise use cases, which George is gonna talk to in a minute. So the way this works, right, I'm running a Kubernetes cluster. Kubernetes clusters are meant to run containers. What is a container?
09:14
A container is simply a packaged process. Right? There's no boot process that has to happen. We're just running a process or an application on our host and using the underlying kernel as the rest of it. So containers encapsulate processes,
09:33
basically. Um, they have the same underlying resource needs. They need compute, they need network, and they need storage. We're starting to see that containers need the same things that VMs need. And the way we do this is we just package up the things necessary to run,
09:48
uh, a KVM-based hypervisor and a virtual machine inside a Kubernetes pod. So if you're not familiar with Kubernetes, uh, a pod is basically the smallest unit of measure in Kubernetes, and you either put a container in that pod, or you put in this case a virtual machine. But again, the virtual machine itself that's running is just a set of processes on that hypervisor.
10:09
Um, so KVM underlying the whole thing means you're getting a type 1 hypervisor. Uh, you're getting direct access to your CPU and memory through the KVM hypervisor. You're not doing some sort of emulation or something like that that would take us, you know, back to the stone ages. Nobody would want that performance, right? So we're really containerizing three processes
10:29
to make this work. There's virt-launcher. There's libvirtd, which is basically your API daemon so that you can make calls to KVM. And then you've got QEMU, which is the Quick Emulator. Now that one threw me for a loop when I first saw it, right? Uh, as soon as I saw a quick emulator, I was
10:47
like, nope, don't want that. I don't like emulation. I want a hypervisor, right? I'm, uh, you know, a vSphere snob from the old days where I needed a type 1 hypervisor. Nothing else was gonna do the job. But the quick emulator here does several things.
11:02
One of those things that it does is it's like VMware Tools for KVM, right? So we can get some additional access to the VM and do some additional things to it, um, but it also provides access to things like PCI devices if there's no VirtIO-type device driver, right? So if there's no virtualization for your device
11:20
driver, it will emulate other PCI devices and things if you need to. So that's how we actually get a virtual machine running inside of a Kubernetes cluster. We're just gonna package up the processes, and then Kubernetes is gonna do the normal Kubernetes things it does to manage pods and make sure they restart on different nodes and things like that. All right.
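To make that concrete, here's a rough sketch of what the user-facing object looks like: a KubeVirt VirtualMachine is just another Kubernetes resource, which KubeVirt turns into a virt-launcher pod running QEMU/KVM. The name and boot image below are illustrative, not from the webinar:

```yaml
# Minimal sketch of a KubeVirt VirtualMachine; name and image are illustrative.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm               # hypothetical name
spec:
  runStrategy: Always         # keep the VM (and its virt-launcher pod) running
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio   # paravirtualized VirtIO device, no full emulation
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # example boot image
```

Applying this with `kubectl apply -f` (or through the OpenShift console) is all it takes for Kubernetes to schedule the VM like any other pod.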
11:42
That was awesome. Can you guys hear me now? Awesome. All right, so Red Hat OpenShift, uh, OpenShift is Red Hat's enterprise grade Kubernetes. So underlying technology is still Kubernetes. Our virtualization stack is still KVM as Eric, uh, went into.
12:01
So with the Broadcom acquisition, you can imagine a lot of people are wanting to migrate to different hypervisors, different solutions, uh, different platforms. So our phone began ringing off the hook, as you can imagine, uh, I saw it happen at Red Hat. We went into overdrive on OpenShift virtualization and, and bringing those solutions to market.
12:23
Um, within a year I went from "What is Red Hat OpenShift?" to "Hey, I need to solve these specific use cases." Um, I live in Vegas, so I do a lot of these shows and I talk to a lot of people. And the interest in it has been phenomenal. So really what we're getting is two different conversations.
12:39
One is I want to modernize. So 95% of new applications are being containerized. People say, hey, I want to modernize. I don't know if I'm there yet. Can you help me migrate some applications into containers? Can you help me migrate some virtual machines onto a new platform?
12:55
And then we have other customers that are just like, hey, I just want to migrate my VMs. I'm not ready to go to a full modern workflow for containers, but I need to get off VMware. So how can you help me? So we have some solutions for that too.
13:11
So it's probably a good idea to just give an overview of what OpenShift is before moving on. So OpenShift is, as I said, our enterprise Kubernetes offering, and it runs anywhere. So we want to meet our customers where they are, right? We can use existing hardware; if you have existing bare metal, such as FlashStack, then we can run on that just fine.
13:30
And your same OpenShift is gonna run on physical bare metal. It can run on top of VMs. Little known fact. Most OpenShift is running on VMs. Uh, it can run in private cloud, public cloud all the way out to the edge, and you're gonna get that same OpenShift experience no matter where you run. So if you develop an application and you run it
13:48
on bare metal, and then one day you want to move that application out to the cloud for some reason, you don't have to refactor anything; you can just move it over. So, above that, you see the OpenShift Kubernetes Engine. That's just our base offering that gives you all of the Kubernetes functionality. But most people go for the OpenShift Container Platform, which is all those additional
14:12
services that you would have to DIY on Kubernetes normally. So we've packaged all that up. We've created an enterprise-grade offering that we support, and that way you don't have to do all these things yourself. And then above that, once your Kubernetes footprint grows and you have more and more
14:29
clusters, you need a way to manage everything. You have Advanced Cluster Management, uh, and then we also have Advanced Cluster Security in our OpenShift Platform Plus. All right, so I know in the beginning we had a lot of people raise their hands that they were VMware admins, and we want to help those people migrate over.
14:49
It is a new paradigm. We are not looking to replace VMware functionality one-to-one. Uh, we would fail if we did that, to be honest with you, uh, but we want to make it easy for VMware admins to migrate their VMs onto this new platform. So we have a bunch of tooling around that. We'll get into that in a second here.
15:09
What I didn't show you on that previous slide, that kind of layer cake slide, is we have a new offering called OpenShift Virtualization Engine. Now, this is a different SKU, but it's still the full OpenShift Container Platform. OpenShift virtualization is a feature of the OpenShift Container Platform. It is not an add-on.
15:27
So as long as you're running on a supported system such as bare metal, you just install the operator, refresh your UI, and you're gonna have all the functionality of importing VMs or creating new ones or whatever you need to do. And that SKU is specific to VMs only? Yes, so if you get the OpenShift virtualization SKU, it's called OpenShift Virtualization
15:46
Engine, OVE. It comes in at a much lower price point than OpenShift Container Platform. But we want to be able to get those customers who just need to migrate onto our platform so they can see the benefit of it. And then in 2 years, like you showed, they're like, oh hey, I want to start running containers now.
16:05
They're already in the ecosystem. They already know how it works, and they can start adding containers very easily. So this gives you a unified platform for containers and VMs. Now you don't have two different siloed teams trying to manage everything separately. We saw that, you know, creates a lot of operational inefficiencies.
16:23
It gives you a consistent management for all of your workloads, no matter if they're running in containers or VMs. And as Eric showed, all of this runs on Linux KVM. It's been in development for like 15 years or something like that. So this is tried and true technology. This is,
16:39
this is just a new way of managing what we already know works. So, uh, as Eric mentioned, it's built on KubeVirt. Uh, Red Hat contributes to that project. Um, it also includes OpenShift GitOps. So now we have a validated patterns program that allows you to quickly spin up these environments,
16:59
uh, with Git, so that you can start automating, because a lot of times if you're trying to do a small cluster, you know, doing it by hand is great, but when you want to replicate that over and over, you need a way to automate that. We also fully support Microsoft Windows. So if you're a Windows sysadmin,
17:18
and you want to make sure that you're gonna get support from Microsoft when you run it on OpenShift, yes, you will. So because a lot of people want to migrate, we have a Migration Toolkit for Virtualization. This is built in. It's a free tool. It's built on some underlying tools like virt-v2v, but we've made it very easy: basically
17:38
you give it your vSphere credentials. It goes out, pulls all your VMs down, says, hey, how do you want to migrate this? And you say, I want this disk storage, I want this NIC, I want this, uh, you know, VLAN, IP, whatever it is, and it's going to do all that for you. And then as you do that at scale, you know, hundreds or thousands of machines,
17:58
um, we have Ansible Automation Platform; we have a set of playbooks that we've developed that can help migrate all those over very easily. All of that is API-driven, so anything with an API pretty much you can automate with Ansible. So that makes it much easier to do these at scale. And then really, like, as I said, we can't
18:18
solve this problem without our partners such as Pure. So this isn't just me and Eric standing here, but this is a long road for us to get to this point today. Like we probably meet like 2 or 3 times a week, something like that. Yeah, we didn't. Yeah, we just didn't meet today.
18:34
Um, our team, we have alliance teams, we have marketing teams, we have, uh, you know, our product management teams that regularly meet to make sure that we're all on the same page. We all know what the path is going forward. All right, so as I mentioned, OpenShift virtualization engine, a new SKU gives you unlimited VMs. You can run as many as you need.
18:54
Uh, we don't limit that. It is, uh, on 120 bare metal cores. So it used to be 64, but a lot of people are running 96, so they felt like doubling that made it a much more enticing offer for people. Uh, Advanced Cluster Management is a separate, uh,
19:13
offering, as is AAP (Ansible Automation Platform), but even altogether, they come in at a much lower price point than Broadcom. And then also we offer, you know, workload monitoring, platform logging, of course. I also mentioned, too, the OVE SKU is a much-reduced-price solution if you're only gonna run VMs on OpenShift.
19:31
Uh, Portworx has a similar license. So if you're running OpenShift virtualization and you're only gonna run virtual machines on that cluster, Portworx has a license as well that's a much reduced price too, for the same reasons, so that we can match with Red Hat. Yeah, that's, um. Exactly, so that we can both come in and get people into the platform.
19:51
All right, let's go to the next one. So I'm not gonna go through this, as it's kind of an eye chart, but you're welcome to take a picture of it if you want. This basically breaks down all the different capabilities of OpenShift Virtualization Engine, Kubernetes Engine, Container Platform, and Platform Plus.
20:07
The main difference is: with OVE, your customer-facing or your application workloads have to be in VMs, can't be containers. Now the underlying infrastructure, it's still Kubernetes, it still runs containers, so all of your logging, all of Portworx, all of that is still a container, uh, but your, you know, your user database or your, uh,
20:29
you know, your, um, internet-facing website that runs in NGINX, that has to be in a VM. So really, we don't want you to just buy this software and then we say, OK, see you later, right? Red Hat, we want to be your trusted partner, your trusted advisor. Portworx, I know they're the same.
20:52
We want to help you on this journey, right? So we have tons of training. Red Hat is an open source company. Almost everything we do is put out open on the internet. Documentation, training videos, we offer uh paid training for OpenShift virtualization as well. We have a virtualization migration assessment.
21:11
We'll go in for a 2-week engagement and our experts will sit down with you and figure out exactly what it is gonna take to get you guys over onto OVE. Now, I'm not sure on the exact price for that, but if you end up subscribing, that price is then applied to your subscription. So we have a lot of customers taking advantage of that.
21:31
All right, so I think that's about it for me. Am I correct? Yeah. All right, now let's see if I can explain how Portworx fits into this whole thing. Yeah. So some of you might not even be familiar with Portworx. We'll try and fill in those gaps, uh, during this,
21:46
this section. Uh, what Portworks is is we, we are a software defined storage solution that runs inside your Kuberne as your red OpenShift clusters, right? Um, if you're a vSphere persona, right, you might be familiar with VM or VSAN. You can sort of at a really high level think of Portworks as VSAN for Kubernetes instead of VSAN for vSphere,
22:08
if that makes sense, right? So we provide a software-defined storage solution inside your Kubernetes cluster. So we run basically anywhere and on any storage. You need to provide some sort of block device up to your Kubernetes worker nodes, and then we create the storage cluster out of that. We don't care if it's local disks that are in your Kubernetes worker nodes
22:26
themselves, if it's a storage array like a Pure FlashArray, uh, where you create volumes and mount those. It could be one of our competitors' storage arrays as well. It just needs to be some sort of a block storage device, right? So, as OpenShift runs anywhere on almost anything, Portworx runs anywhere that works,
22:46
right? Um, so we'll run on Red Hat OpenShift, and then our job is to help you automate, protect, and unify your containers. Um, so automate: we're gonna give you storage classes and things to allow you to interact with Kubernetes instead of the storage layer. No one wants to interact with the storage layer. They want to interact with the Kubernetes layer like you would do with vSphere.
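As a rough illustration of that "interact with Kubernetes, not the storage layer" idea, storage intent is typically expressed as StorageClass parameters. The class name and parameter values here are illustrative, not from the webinar:

```yaml
# Hypothetical Portworx StorageClass; parameter values are illustrative.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated-secure    # hypothetical name
provisioner: pxd.portworx.com   # Portworx CSI provisioner
parameters:
  repl: "2"       # keep two copies of the data on different nodes
  secure: "true"  # encrypt volumes created from this class
```

App teams then just request a PersistentVolumeClaim against this class; they never touch the array directly.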
23:03
Um, so you can get storage classes that have specific configurations for what you want your storage to do, like encryption or disaster recovery, replication, or something like that. Then we help you protect your data. You know, Kubernetes initially was kind of thought of as a stateless application engine,
23:18
right? You deploy your web server there and if it crashes, it just redeploys another one and you don't have to worry about the state. As soon as you have to start worrying about state, it gets trickier. You gotta have things like backups and disaster recovery and snapshots and all these things. This is what Portworx was initially designed to do.
23:33
We did this for containers. We've been doing this for about 10 years, uh, and we got really good at this. So anybody who was willing to take the leap to say, you know what, I think you can run stateful applications in Kubernetes, uh, they often chose us. We're the leader in this space. Uh, as it turns out, when people started looking for alternatives to vSphere and they
23:53
found out they could run VMs in Kubernetes, they realized: huh, every one of these needs state, because I have a boot disk for every one of these virtual machines, right? So we've seen a huge uplift recently in the number of people contacting us and looking for support and things, uh, and how Portworx can help, because like I said, we've been doing this for a long time for
24:12
containers. We've just adjusted a little bit so that we can also do this for virtual machines, right? So that's what we're trying to do on that last screen. We're trying to help you unify both VMs and containers. And on top, our goal is to let you run any modern app. Uh, typically we're talking about databases
24:27
here because that's where we focus on the stateful applications, right? We think you can run your databases as a container. They can also run as a virtual machine if that's the form factor they're in. So let's talk a little bit about how this works, right?
24:38
Um, you've got your hardware array or a cloud block device; if we were deploying an Amazon EKS cluster, for example, you might have to use EBS volumes, right? Um, we could use EBS volumes, we can use local disks, we can use the storage array type disks. You just mount those to your worker nodes; we create that storage cluster for you.
24:57
We have to provide some sort of availability. So again, if you're familiar with VMware vSAN, um, they use a thing called FTT, failures to tolerate, right? And in their language, a failures to tolerate of one means there's two copies of data, right? Because I can tolerate one failure.
25:14
I still have a copy left. We use a different nomenclature, which I actually think is simpler, uh, where we say we're using repl 2, meaning there's two copies of your data, right? So in this case, you've got a pod which could run a VM or a container, right, in that pod, uh,
25:30
with a persistent volume, which is the disk for your container or VM. And then we have a replica which has got a copy of your data in it, and we make sure that there's an exact copy of that replica on another node, assuming you choose repl 2 or higher. So we can do repl 1, which is no availability; you should do application availability in that
25:47
case. Uh, repl 2 for 2 copies, repl 3 for 3 copies; that's the max you get. OK, so in Kubernetes, let's talk about high availability, right? You're used to having things like a vSphere instance: an ESXi host goes down, all the VMs on that host get automatically restarted on the other hosts in the cluster to
26:05
provide you the high availability and a low outage window. In our case, we have to do something similar. Kubernetes does this by default. It runs everything off of desired state. So in this case, my desired state says I need to have one container with a persistent volume running. If that host crashes, Kubernetes says, oh no, I don't meet desired state anymore.
26:26
I need to fix that, and it will redeploy your application on another node in your Kubernetes cluster, sort of like vSphere HA does. Um, in this case, we have an additional tool called Stork, because what you could see happening here is: what happens if this node went down, and Kubernetes decided to reschedule your container on that 3rd node on the far side,
26:47
which you notice does not have a replica in it? Well, that would technically work, but you'd have latency, right? Because now I have to get all of my data over the network before I can access it. So we use a tool called Stork, which stands for storage orchestrator runtime for Kubernetes.
27:04
Uh, and what it does is it informs the Kubernetes scheduler about where your data lives. So we can say, hey, we know you have to restart this pod; we think you should restart it on one of these other nodes that has the data already on it. That's the best fit for it. It's the best place.
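A minimal sketch of how a workload opts into Stork's data-aware placement; the pod, image, and PVC names below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-pod                # hypothetical name
spec:
  schedulerName: stork        # let Stork, not the default scheduler, place this pod
  containers:
    - name: db
      image: postgres:16      # illustrative image
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data    # assumed PVC backed by a Portworx volume
```

Stork acts as a scheduler extender, scoring nodes that already hold a replica of the `db-data` volume higher, so restarts land where the data already lives.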
27:18
So that's kind of how we provide high availability for your data, along with how Kubernetes provides high availability for your VMs and containers. So the reason Portworx fits in here so well is Kubernetes wasn't designed initially to do some of the things at the stateful layer, the stateful data layer. Um, you're used to having all these VMware capabilities that I've kind of listed
27:42
in the second column here. I know this is an eye chart. Um, but basically this shows you all the typical capabilities that people are used to using in vSphere. And then across here you'll see the KubeVirt virtualization capability. So this is running, say, OpenShift with OpenShift virtualization, or KubeVirt,
27:59
um, and some of the things like high availability are kind of built in, like we just went through that example where if something fails and desired state is not met, it restarts on another node. But there's this giant gap here in the middle, which you can see in orange. That's what Portworx is designed to solve, right?
28:16
How about application portability? How do I delete and move applications around? How do I do replication or migrations? Uh, how do I do disaster recovery? Any SRM, uh, folks in here, where you run SRM for disaster recovery?
28:30
So that was one of my favorite features when I was using vSphere, um, long ago, and Portworx has something similar that we've been using for containers for a long time. Now, not everyone needed a disaster recovery tool for container-based applications because, um, in many cases your disaster recovery process was just to redeploy those apps in another location, because they weren't worried
28:51
about state. As soon as they started worrying about state, we had to help them do things like fail over their applications. So we do things like, uh, for disaster recovery, of course, we replicate your replicas, the data for your applications, from one site to another site. Of course we do that. But we also do what we'll call Kubernetes-
29:09
aware disaster recovery. So when you fail over and push that big red button and say, I need to fail everything over to the other site because I had a disaster, we fail over your data, but we also fail over the objects in Kubernetes, right? So this is your virtual machine; this could be your services,
29:23
it could be your container objects, it could be Secrets and ConfigMaps. All the things you need for your application, we'll fail those over and start them up when you need them, right? And of course you do that from your destination cluster, in case your source cluster is no longer available. So that's where Portworx is trying to fill in these gaps.
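The key idea of Kubernetes-aware DR is that the *objects*, not just the volumes, get failed over. A minimal sketch of that idea, with a hypothetical object list (this is an illustration, not the Portworx or Stork API):

```python
# Illustrative only: gather every Kubernetes object the destination cluster
# needs to start the application after a disaster, not just the data.

def build_failover_bundle(namespace, objects):
    """Select the application's objects for replay on the destination cluster."""
    wanted_kinds = {"VirtualMachine", "Service", "Secret", "ConfigMap",
                    "PersistentVolumeClaim"}
    return [o for o in objects
            if o["namespace"] == namespace and o["kind"] in wanted_kinds]

cluster_objects = [
    {"kind": "VirtualMachine", "name": "db-vm",    "namespace": "prod"},
    {"kind": "Secret",         "name": "db-creds", "namespace": "prod"},
    {"kind": "ConfigMap",      "name": "db-conf",  "namespace": "dev"},
]
bundle = build_failover_bundle("prod", cluster_objects)
assert [o["name"] for o in bundle] == ["db-vm", "db-creds"]
```

Volume replication alone would give the destination site the data but no way to run it; shipping the VM, Service, Secret, and ConfigMap definitions alongside it is what makes the big red button work.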
29:41
Now what I want to talk about for a little bit is our architecture, and I want to compare it to vSAN because we mentioned that earlier, right? On the left you can see a typical architecture for VMware vSAN. I've got ESXi hosts here with a network between them.
29:56
My virtual machines live on these ESXi hosts, and they've got a VMDK on this datastore, right? On the right side, you're seeing the Portworx architecture, which you'll notice looks almost identical. We're using different terms. We have containers and VMs,
30:12
and we're using a persistent volume instead of a VMDK, but you can see these look pretty much identical in how they're architected. If I look at a VMFS datastore, you're going to see some additional things, like your hardware storage array. In a vSphere environment, you typically have a single LUN that's mounted to all of your
30:33
ESXi hosts as a shared datastore, and VMFS lets all of your ESXi hosts use that shared datastore. I'm seeing lots of heads nodding here, looking good. So your VM again has a VMDK. The right side is slightly different. This is the thing we do that VMware
30:49
typically didn't do: we need a LUN for each one of our worker nodes, right? We take one LUN for each worker node, and then we create our storage cluster out of them. So there's a slight difference at the storage array level, but otherwise this architecture looks similar.
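The arithmetic behind that difference is simple enough to sketch. A back-of-the-envelope comparison (numbers are illustrative): with VMFS, one shared LUN serves every host; with Portworx, each worker node gets its own backing LUN, and those LUNs are pooled into one storage cluster.

```python
# Hypothetical LUN counts for the two models described above.

def vmfs_luns(hosts):
    return 1          # a single shared datastore LUN, mounted to every host

def portworx_luns(workers):
    return workers    # one backing LUN per worker node, pooled by Portworx

workers = 3
assert vmfs_luns(workers) == 1
assert portworx_luns(workers) == 3
```

Either way, the LUN count is small and fixed by the number of nodes; it does not grow with the number of VMs or containers, which is the scaling point the comparison with CSI makes next.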
31:08
Now, why do I bring this up? The traditional way Kubernetes storage works with the Container Storage Interface (CSI) is like this. On the left-hand side, I have an external storage array. This could be a Pure FlashArray, it could be a NetApp or a Dell array or whatever.
31:26
It doesn't matter; CSI works roughly the same way. When I need storage, I use a StorageClass, which reaches out through the CSI provider and creates the volume, or LUN, on your array. It connects that LUN to whatever worker node your container or VM runs on, and once that's connected, your container can use that LUN.
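That provisioning-and-attach flow can be sketched as a sequence of steps, loosely modeled on the CSI RPC names (CreateVolume, ControllerPublishVolume, NodeStageVolume, NodePublishVolume). The "array" here is just a dict standing in for an external storage system:

```python
# Simplified model of the CSI flow: carve a LUN on the array, attach it to
# the worker node, then stage/mount it for the pod on that node.

def provision_and_attach(array, volume_name, node, size_gib):
    calls = []
    # 1. CreateVolume: carve a LUN on the array.
    array[volume_name] = {"size_gib": size_gib, "attached_to": None}
    calls.append("CreateVolume")
    # 2. ControllerPublishVolume: connect the LUN to the worker node.
    array[volume_name]["attached_to"] = node
    calls.append("ControllerPublishVolume")
    # 3/4. NodeStageVolume / NodePublishVolume: format and mount it into
    #      the pod on that node.
    calls.append("NodeStageVolume")
    calls.append("NodePublishVolume")
    return calls

array = {}
calls = provision_and_attach(array, "pvc-1234", "worker-1", 50)
assert array["pvc-1234"]["attached_to"] == "worker-1"
```

Every volume repeats this full round trip against the array, which is exactly where the scale problem discussed next comes from.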
31:47
So we're not connecting the LUN directly to the container; the LUN gets connected to the node, and the node then exposes it to your container. On the right side, we have the same Portworx storage architecture we saw earlier. At first glance, these look fairly similar. The only difference is we seem to have this
32:06
little abstraction layer in the middle, which looks a lot like a datastore if you're familiar with that. Now look at it at scale. And if you still don't see the problem, look here: I've got all these host connections. Those host connections are one of the reasons the Container Storage Interface has
32:25
trouble at scale. You may have even run tests to see how CSI works with your containerized applications, and they probably worked fine. At scale, you might see additional problems. That's what Portworx was designed for. OK, so each one of these
32:42
container hosts here is connecting and getting all these LUNs. I need one of these LUNs for every container, and I need another host connection for every one of those as well, right? Now, at the level of CRUD churn, and by CRUD churn I mean the creates, updates, and deletes that Kubernetes makes to your workloads,
33:04
it can overwhelm the storage array with API calls. What I'm saying is your storage array has no problem providing the data performance, but all those creates, updates, and deletes of LUNs, and all that creating of host connections, can overwhelm the storage array and take a long time to manage, right? Now you might be saying, well, I'm not doing
33:23
anything at large scale anyway, so it doesn't make any difference. Well, what happens when you've got a whole bunch of VMs on one node and that node fails? Every one of these host connections has to be disconnected and reconnected on whatever node those machines come back up on, right? That's an operation that happens all at
33:43
once, and I've got lots of API calls I need to make to the storage array at the same time. Portworx doesn't have the same problem. You have these three LUNs that you created for our three worker nodes; that's it. All the rest of the connections are within the Kubernetes cluster, and they happen immediately. How about live migration?
34:01
Nobody in here is going to move to a new hypervisor solution if they can't do a vMotion anymore, right? Am I wrong? That's what I thought. OK, so we can't call it vMotion anymore, obviously, but a live migration is still possible with OpenShift Virtualization or KubeVirt, and this is typically how it works.
34:20
It's built right into the UI. You just hit migrate and it does it. It's really simple. Yeah, it's very simple to do. There are a couple of subtle differences which will annoy a vSphere administrator for now, but it still works fine. What you need from a storage solution is
34:37
ReadWriteMany access modes. In a Kubernetes environment, you'll typically see a ReadWriteOnce access mode, which means only one node at a time can access the volume, right? That's what ReadWriteOnce means. We're looking for shared storage because we need to move our VMs around,
34:53
right? The ReadWriteMany access mode allows multiple nodes to connect to the same volume at the same time, so you need to make sure your storage provider can do this. During a live migration, you have a virt-launcher pod that represents your virtual machine. It connects through a persistent volume claim
35:10
to our persistent volume, which is on our storage solution; in this case, let's assume it's Portworx. When I want to do a live migration, Kubernetes, or OpenShift, creates a new virt-launcher pod that's identical to the old one. It uses a persistent volume claim that connects to the volume,
35:27
so it's got access to its data again. The memory is then copied over, and then the old pod is removed. It's a little different from what we're used to in vSphere, where we actually see a VM move from one place to another, but we're accomplishing the same goal.
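The sequence just described can be sketched step by step. This is a toy walkthrough, not the real KubeVirt API; the pod and node names are illustrative. Note how the check for ReadWriteMany is what makes the second node able to attach the same volume:

```python
# Toy model of KubeVirt-style live migration: a new virt-launcher pod on the
# target node attaches the same RWX volume, memory is copied, old pod removed.

def live_migrate(vm, pods, volume):
    assert "ReadWriteMany" in volume["access_modes"], \
        "shared storage is required so both nodes can attach the volume"
    old_pod = pods[vm]
    # 1. Create an identical virt-launcher pod on the target node,
    #    bound to the same PVC.
    new_pod = {"name": old_pod["name"] + "-mig", "node": "worker-2",
               "pvc": old_pod["pvc"]}
    # 2. Copy guest memory from the old pod to the new one (elided here).
    # 3. Remove the old pod; the new pod now runs the VM.
    pods[vm] = new_pod
    return new_pod

volume = {"name": "vm-disk", "access_modes": ["ReadWriteMany"]}
pods = {"my-vm": {"name": "virt-launcher-my-vm", "node": "worker-1",
                  "pvc": "vm-disk"}}
new_pod = live_migrate("my-vm", pods, volume)
assert new_pod["node"] == "worker-2"
assert new_pod["pvc"] == "vm-disk"  # same data, different pod identity
```

The data and the VM definition are unchanged; only the pod wrapping the VM is replaced, which is why the identifier differs afterward, as noted next.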
35:43
In this case, it's technically not the same VM, right? There's a different identifier for this VM because we created a new one and got rid of the old one, but that's kind of how Kubernetes works, and this is how live migration works. As George mentioned earlier, this isn't the first time George and I have met; we're on calls constantly, it seems,
36:05
about marketing events or how engineering is communicating back and forth to make sure we're providing a joint solution to the field, right? We need a way for Red Hat and Portworx to work very closely together so that you can make the best of your virtual machine environment.
36:22
So we're trying to lower your costs with the SKUs that OpenShift Virtualization Engine (OVE) and Portworx are providing for VM-only environments, but our goal is also to really improve your operational efficiency. All that stuff we were talking about at the beginning: why am I doing these processes twice for two different teams with two different sets of software?
36:40
Let's try to eliminate that. If we're starting from scratch and we need a new solution, why don't we think differently at this point? Yeah, that's that unified platform I mentioned. And we want app and data flexibility, so we want you to eventually be able to run VMs and containers together. We see the momentum that containers
36:58
are getting. I imagine that's only going to increase, partly for cost reasons right now, and new applications are being built as containers anyway. So we want you to be able to run virtual machines, and then, in the same platform you're used to managing, convert those to containers if that's a better fit for your organization.
37:18
We have a couple of documents here for you if you're interested in learning how to build this. I wrote both of these docs. They've been vetted by Red Hat and they've been vetted by Portworx. They're something like 80 pages each, I think.
37:34
There are two reference architectures. The first is a reference architecture for deploying Red Hat OpenShift on bare metal nodes with Portworx. It gives you all the design decisions you're interested in: how do I size this? What should I think about for the number of disks I need? What do I do for availability purposes? It'll give you the basics on setting up an
37:54
OpenShift cluster with Portworx in the right way. The second reference architecture is an addendum. Assuming you've gone through the first one, the second covers additional design considerations you might want to look at if you're going to run OpenShift Virtualization on that cluster.
38:10
So now I need to be able to manage virtual machines, which means I need ReadWriteMany block volumes so that you can do live migration and some other things, right? We have to talk about those in there as well. This might help you get started if you're thinking: I think
38:25
we understand how this works, we have Kubernetes experience, but we want some guidance on how we should actually build our clusters. This should help you. Or if you're brand new to this whole thing, Red Hat and Pure Portworx want to help you succeed, right? So come to us, talk to your Red Hat
38:43
team, talk to your Pure or Portworx team, and ask questions; let them help solve those issues for you. Yeah, and there's one more message in here, which you've probably seen in other sessions I'm guessing, but join us in the community. We've been posting a lot more information on our community pages.
39:02
We'd love to have you post as well about your experiences and the problems you're having, and get some information from some of your peers too.