
Solving Real World Challenges With Cloud Block Store

Accelerate 2021 session covering common use cases for Cloud Block Store, including data mobility, high-availability, application migration and enhanced data management.
Hi, my name is Cody Hosterman. I'm the Director of Product Management here at Pure Storage, focusing on cloud. What we're talking about today is, of course, Cloud Block Store, but not what it is and how it's built; there are other sessions on that. Instead: why. Not only why we built it, but why it matters to you, why a customer might want to use Cloud Block Store, and how it can help you solve challenges you may be encountering when leveraging hybrid cloud, public cloud, multi-cloud, whatever you want to call it. There are differences in how data is managed, how it's configured, how it's deployed, and how it's charged for inside of the hyperscalers. Simple things we've taken for granted for many years, like thin provisioning, data reduction (compression, dedupe, etc.), space-efficient snapshots, non-performance-impacting snapshots, and instant restore from snapshots, don't quite exist in the public cloud. So a lot of the benefits we've had for a while with the native on-premises experience just aren't there.

This is the reason we built Cloud Block Store: we saw a lot of the same struggles and problems the FlashArray was fundamentally built to solve, and we realized we could help customers in similar ways in the public clouds, and of course in new ways as well. Cloud Block Store is the evolution of Purity, the operating environment inside of the FlashArray, brought to life inside of the public cloud. It's not just a port; it's not some VM we slapped Purity into and moved into the public cloud. It is an architecture that leverages the underlying resources of Azure and of AWS in the right ways to provide what you see on the screen: reliability, efficiency, performance, consistency, simplicity, a lack of trade-offs. This is what the FlashArray was built for. This is what Purity did with flash, and now it's what Purity does with the underlying public cloud infrastructure: deliver that simplicity, resiliency, and availability to the end user. We keep up with the technology so you don't have to. That, in and of itself, is not a use case; the existence of Cloud Block Store is not a particular use case. So why does this matter?

Well, what does Purity fundamentally provide? It provides availability. It provides replication. It provides thin provisioning, data reduction, clones and snapshots, encryption. The use cases spawn from these features. But what those features mean on premises can mean different things in the public cloud, and the value they add changes when it comes to the public cloud. So if we move up a level, or down a level, depending on how you look at it, we can assemble those features together to understand what they mean for the common struggles, questions, and implementations around the public cloud: disaster recovery, migration, lift and shift, dev/test, even just improving availability. Where does CBS play into this, and what can we do? Well, let's start with some pretty straightforward ideas and move into some of the more complex architectures you might be trying to implement in your infrastructure.

First off is simplicity. Simplicity in and of itself can be a use case: I have complexity, it is getting in the way of me doing my job, how can I simplify this so I can focus on other things? This is what we did on premises. People don't have the time to look at all the different flash types and assemble them together to be resilient and encrypted and protected and available, etc. No, they pay us to figure that out: just give me a volume. This is what we can do in the public cloud, but instead of different types of hardware, flash, controllers, and connectivity, it's about what we can leverage inside of the AWS and Azure infrastructure to provide this simple storage layer. How can we adapt and change Cloud Block Store to take advantage of new things, so the applications using Cloud Block Store on the front end can immediately and instantly consume them as well, without having to experience change? How can we be the storage experts so you don't have to be, and you can focus your business on the directly business-related services, architectures, and applications that matter to you? Let us be your storage expert. That is, conceptually, one of the use cases for CBS: making storage simpler inside of the public cloud by taking the features and knowledge set inside of Purity and moving them into Azure and AWS.

More specifically on that point, let's look at what we do well and what it means in the public cloud: cost. What is one of the founding ideas of the FlashArray on premises?
Well, flash was traditionally expensive in dollars per raw gigabyte, so how could we make it cheaper? By effectively making that capacity bigger. We do that through thin provisioning, through deduplication, through compression, pattern removal, and snapshots and clones that are 100% efficient: when you clone or snapshot something, there is no increased storage capacity hit. It's all metadata-based, leveraging pointers into that globally deduplicated and compressed, efficient back end. Thin provisioning does not exist in the public cloud; you pay for what you provision, not for what you've written. The combination of all these features can reduce your storage footprint, and the money you save can be spent in other places in the public cloud, places that are more beneficial and more directly related to the business you're trying to run. Instead of spending it on the capacity footprint, spend it on services, on applications, on developers, as the case may be, to improve the business. Look into AWS or Azure offerings that maybe you couldn't afford before because you were spending on the storage footprint. That, right then and there, is a direct use case for CBS.

Now let's look at more of these architectural concepts. One of the really early ideas around leveraging the hybrid cloud was disaster recovery. My DR data center: I'm not failing over to it every day; I'm not using it most days. So do I figure out how to make it active-active? Many have done that. But suppose you don't want that; you have your one data center, and it's good for what you need. What if you use the elasticity of the compute in the public cloud to fail over to? When it comes to doing that, how do I get the data in? How do I protect it? The formats are different. How do I replicate in? And what if I want to fail back: how do I re-convert the data to whatever my on-premises environment is, and how do I convert it to my public cloud environment? These are the traditional problems with this use case in hybrid cloud. Because the FlashArray and CBS present the same front end, the same capacity, the same storage offering, there is no re-conversion. You don't need to replicate and convert; it is one simple, singular storage layer across the FlashArray and into the public cloud using CBS. This allows the mobility of that data from on premises into the public cloud, and of course that's a key piece of disaster recovery, where it is not a one-way migration: it is a temporary migration and then an eventual re-migration back. So that flexibility, that mobility, is particularly important.

This can be extended directly into the public cloud itself: if your production environment is in one availability zone or one region, your disaster recovery can be in another region or another availability zone. And one of the benefits of our replication touches on another consideration in the public cloud. One of the most important considerations is how efficiently you use your network. There are egress costs between availability zones and regions; when you send traffic across them, it costs money, so making sure replication is efficient, but complete, is important. The Purity replication we built on the FlashArray, and extended to Cloud Block Store, maintains deduplication, maintains compression, and maintains differencing: if a block was already sent to that target for some volume six hours ago, or a day ago, we will not resend it for any other volume. So we keep the replication between the different CBS instances efficient, really reducing the amount of data that needs to be sent while still ensuring that all of the data gets there. And of course the same engine allows not only asynchronous replication, but also active-active replication.
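The dedupe-aware differencing just described can be sketched in a few lines. This is a toy illustration, not Purity's actual implementation: the target tracks content hashes of every unique block it has ever received, so a block already landed for one volume is never resent for another.

```python
import hashlib

class ReplicationTarget:
    """Toy model of a replication target that remembers which unique
    blocks it already holds, keyed by content hash."""
    def __init__(self):
        self.blocks = {}            # content hash -> block payload

    def has(self, digest):
        return digest in self.blocks

    def receive(self, digest, payload):
        self.blocks[digest] = payload

def replicate(volume_blocks, target):
    """Send only blocks the target has never seen, regardless of which
    volume they came from. Returns the number of bytes actually sent."""
    sent = 0
    for block in volume_blocks:
        digest = hashlib.sha256(block).hexdigest()
        if not target.has(digest):
            target.receive(digest, block)
            sent += len(block)
    return sent

target = ReplicationTarget()
vol_a = [b"A" * 512, b"B" * 512, b"C" * 512]
vol_b = [b"B" * 512, b"C" * 512, b"D" * 512]   # overlaps vol_a

print(replicate(vol_a, target))  # 1536: every block is new
print(replicate(vol_b, target))  # 512: only the "D" block crosses the wire
```

Even though the two volumes are unrelated, only one of vol_b's three blocks incurs egress, which is exactly why this matters when traffic between zones and regions is billed.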
So if you want active-active synchronous protection across availability zones, you can leverage ActiveCluster as well.

Now, one of the attractions of using the public cloud for disaster recovery is, of course, that I don't want resources sitting there that I'm paying for when I'm not using them. So why do I need a CBS instance sitting there, waiting to be replicated to? Well, one of the options we have here is something called CloudSnap. CloudSnap is a replication offering, originating from a FlashArray or from a Cloud Block Store, that replicates snapshots into Azure Blob, into AWS S3, or onto NFS, sending those snapshots into that cheap-and-deep storage, where they wait to be needed and rehydrated into any given FlashArray, or into any CBS instance deployed on demand. CBS is software only; you can deploy it in minutes through the Azure or AWS marketplace. So you can replicate with CloudSnap to Azure Blob or S3, as the case may be, and then rehydrate back to Cloud Block Store. This works with CloudSnap, CBS, and the FlashArray alike, and so you do not need a persistent CBS target sitting there waiting. It's also another way of getting data across regions and across availability zones: send it into S3, then pull it back into Cloud Block Store as needed. No CBS has to sit there waiting; just the applications and data sets you need can be rehydrated, as you want to move them to different availability zones. That's another mechanism for efficient network traffic.

Now, you might ask about the main difference between our async replication and CloudSnap: why would I choose one or the other? Well, because CloudSnap has to go through an intermediate store, your RTO is going to be higher; it takes more time to pull data out of S3 and rehydrate it into CBS than if you were just replicating from one CBS or one FlashArray to another. So the decision point there is generally cost versus RTO. Depending on what your applications need, and what your business needs, you can choose one or the other, or you can certainly have configurations leveraging both.

Lift and shift, move and improve, etc., is probably one of the first things people do, because changing your disaster recovery and implementing it in a new architecture has a long tail: it takes time to understand, test, and build out. But lift and shift is one of the first things to try. Hey, I have some applications, I have this database, whatever it is, that I want to move into the public cloud. Makes sense; let's start with that, I can do that today. This is something we can help with, because the traditional problem here is: how do I get that data set in there? How do I replicate it? How do I get these large databases from on premises into the public cloud? We can replicate these volumes from the FlashArray into CBS and present them up there, because it is the same consistent front-end storage layer. There is no conversion of the data set; you can focus on moving the application logic and the infrastructure around it, and we can easily move the data.

Another use case comes back to our cloning and snapshots.
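As a toy sketch of the metadata-based snapshot and clone idea this use case builds on (again, an illustration of copy-on-write semantics in general, not Purity's actual data structures): a clone starts as a copy of the parent's block map only, so creating it consumes no capacity, and physical capacity grows only as the copies diverge.

```python
class Volume:
    """Toy copy-on-write volume: a clone starts as a copy of the
    parent's block *map* (metadata), sharing the block payloads, so
    creating it adds zero capacity. Capacity is consumed only when
    a write makes a block diverge."""
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # LBA -> block payload (shared)

    def clone(self):
        # Metadata-only copy: new map, same payload objects.
        return Volume(self.blocks)

    def write(self, lba, payload):
        self.blocks[lba] = payload

def unique_capacity(volumes):
    """Physical capacity = total size of unique, shared-once payloads."""
    seen = {}
    for vol in volumes:
        for payload in vol.blocks.values():
            seen[id(payload)] = payload
    return sum(len(p) for p in seen.values())

prod = Volume()
prod.write(0, b"x" * 512)
prod.write(1, b"y" * 512)
dev = prod.clone()                            # instant, zero extra capacity
test = prod.clone()
print(unique_capacity([prod, dev, test]))     # 1024: two clones add nothing
dev.write(0, b"z" * 512)                      # divergence costs one block
print(unique_capacity([prod, dev, test]))     # 1536
```

The point of the sketch: the number of clones has no bearing on the storage footprint at creation time, which is what makes spinning up many dev/test copies cheap.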
Snapshots and clones are costly in AWS and Azure. They're not instant restores, or, where they are instant, there are limits on how many you can use, and the capacity footprint is nonzero: there is additional footprint, and additional cost. On the FlashArray, and on CBS, you can create hundreds or thousands of snapshots with zero capacity footprint when they're created; each one is a complete metadata copy. So creating lots of clones and snapshots can bring up the development and test environments you might need for checking your databases and your applications. Scale up that elastic compute, scale out your clones, without the storage footprint cost increasing along with those elastic compute instances. This can be dev/test inside the cloud, or even across clouds: creating these clones, instantly provisioning them, restoring them, refreshing them. It's all metadata copies; copying from one volume to another, or from a snapshot to a volume, is an instant operation because it's just a metadata copy. So the efficiency and speed of restoring and cloning can be of significant benefit when it comes to dev/test in the cloud.

Another use case is a variation on that same idea; it kind of combines all of these things. My production runs on premises, but my development, my test, my analytics engines need more compute than I have sitting on premises, or rather, they need a lot of compute that I don't need running all the time. So leveraging the elastic compute inside of the public cloud, while being able to send your production data sets into the public cloud, clone them, create lots of copies really quickly for that elastic compute, and tear it all down when you're done, is an important part of this use case.

You might ask yourself, though: okay, cool, that makes sense, I get the data mobility thing, but I'm running VMware on premises, and I'm not running VMware in the public cloud. VMware uses virtual disks; there's VMFS, there's NFS. How do I get data that's in that particular format into something that Azure VMs or AWS EC2 instances can actually understand and use? It's a good question, because this has been a struggle for a variety of reasons, and not just for hybrid cloud: how do I get data in, how do I get data out, how do I move physical servers, and so on. And it comes down to one fundamental reason: VMware has built an abstraction layer with its virtual disks. There are a lot of benefits behind that: Storage vMotion, cloning, snapshots, all that kind of stuff. But what a virtual disk is, is a file on some kind of file system, VMFS or NFS, and inside that file is an abstracted block object that is what's actually presented to the guest. So the data is encapsulated in a virtual disk, which is encapsulated on a file system, which is on a block object. We could replicate that VMFS volume up into the public cloud, but Azure VMs don't understand VMFS, and AWS EC2 instances don't understand these VMDK files that were on NFS or VMFS. Even if you replicate the VMDK file itself, via some piece of software, to some target up there, the guests still don't understand it. So there's a data mobility and migration problem here: the data needs to be converted, and once you convert it, it also needs to be converted back if it ever returns. That really causes problems for things like disaster recovery, and for mobility across the board: every time you replicate, you also must convert. This is a problem. And since most environments we see these days are running on top of VMware, how does this work?

Well, there are a couple of ways to solve this problem. There's the old, traditional way: raw device mappings. Forget about the VMware layer and all the goodness that comes with it; I need that data mobility, I need a direct mapping to my guests. But, like I said, you lose the value of VMware in many, many ways; RDMs are horrific. So let's avoid that. There is a better option.
Virtual volumes. vVols are essentially raw device mappings, but orchestrated and integrated: the provisioning and management is integrated into the VMware SCSI layer, so all those features, clones, snapshots, backup via VADP, etc., are supported with vVols. But the data vVol, that virtual disk, is still open and mobile. A data vVol is essentially an integrated virtual RDM. There is no VMFS on it; there is no VMDK encapsulating that virtual volume. It is a volume directly mapped to the guest, a block volume with whatever file system your guest puts on it. The original example of this concept, the fact that it is just a block volume with some file system on it, played out in moving physical servers to virtual servers. Say you had a Windows server with a C: drive on one block volume and an E: drive on another, sitting on the FlashArray. You could create a vVol VM with virtual disks the same sizes as the C: and E: of that source physical server, copy those volumes (or their snapshots) directly, even from the physical server, into that vVol VM, and boot it up. Bam, you've instantly virtualized that physical server, because all that C: is, is NTFS with a boot partition, and E: is the same NTFS as well. So the data mobility story between virtual and physical plays out, allowing you to move things back and forth, or refresh from physical to virtual, as seen in the diagram on the slide.

And this same idea extends to the public cloud. This is one of the reasons vVols are so important: we can replicate these vVol virtual disks from an on-premises FlashArray into CBS and, without any kind of conversion or change, directly present those block objects that were originally vVols to your guests, whether it's an Azure VM or an AWS EC2 instance. Those objects can be presented straight from CBS, because they're just SCSI objects with ext4, NTFS, XFS, whatever the case may be, formatted by the guest. That data mobility is a key part of all of the use cases we've discussed when you're using a VMware environment, and vVols are a key part of that. Do your VMs need to be on vVols already to be able to do this? No. You can run them on vVols (in fact, I encourage you to do so), but you can also keep them on VMFS and clone them to a vVol datastore, or Storage vMotion them into a vVol datastore. Because the movement from VMFS to vVols is offloaded, it's a metadata copy on the FlashArray, so it is efficient and fast. You can move a VM to vVols when ready, or just when you need to, or of course, ideally, move to vVol datastores entirely and persistently. There is flexibility around that, and there are efficient options for making the move. So vVols are an important part of our hybrid cloud strategy when it comes to FlashArray and Cloud Block Store itself.

So, are any of these use cases interesting to you? Did any of them spark some ideas? Well, it's super easy to get started. Go to the AWS or Azure Marketplace; you can deploy CBS directly from there. We have a Terraform provider too, if you prefer to deploy things via Terraform. And the licensing is mobile through Pure as-a-Service: licensing used on premises can be applied to CBS itself. But we also have a free, on-demand 30-day license that you can leverage when deploying CBS, which you can get from the Azure Marketplace, the AWS Marketplace, or of course your account team, to try it out today.
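Before trying it out, here is a rough, back-of-envelope sketch of the capacity-cost argument from earlier in the session. All of the numbers below (10 TB provisioned, 4 TB written, a 3:1 reduction ratio, $0.10/GB-month) are made-up assumptions for illustration, not Pure or cloud-provider pricing; actual reduction ratios and prices vary by workload and provider.

```python
def effective_monthly_cost(written_gb, provisioned_gb, price_per_gb,
                           reduction_ratio=1.0, thin=False):
    """Estimate monthly block-storage spend.

    Native cloud block storage bills on provisioned capacity with no
    data reduction (reduction_ratio=1.0, thin=False). A thin-provisioned,
    reducing storage layer bills closer to written / reduction_ratio.
    """
    billable = written_gb if thin else provisioned_gb
    return billable / reduction_ratio * price_per_gb

# Hypothetical workload: 10 TB provisioned, 4 TB actually written.
native = effective_monthly_cost(4000, 10000, 0.10)
reduced = effective_monthly_cost(4000, 10000, 0.10,
                                 reduction_ratio=3.0, thin=True)
print(round(native, 2), round(reduced, 2))   # prints: 1000.0 133.33
```

The gap between those two numbers is the budget the session argues you could redirect toward compute, services, or developers instead of raw capacity.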
Thanks for your time. I really appreciate it. If you have any questions or have any comments or feedback, please let me know. Thank you.

Each public cloud solution is different and presents unique challenges for optimizing availability, provisioning, deduplication, compression, overall performance and a host of other factors involved in managing a modern, complex cloud data environment. How can enterprise cloud storage management be simplified?

This is why we developed Cloud Block Store. There are a variety of ways Cloud Block Store can be deployed and provide value in the cloud. In this session our technical experts will provide an overview of the common use cases for Cloud Block Store including data mobility, high-availability, application migration and enhanced data management.
