
Solving Real World Challenges With Cloud Block Store

Accelerate 2021 session covering common use cases for Cloud Block Store, including data mobility, high availability, application migration, and enhanced data management.
Transcript
00:08
Cody Hosterman: Hi, my name is Cody Hosterman. I'm the Director of Product Management here at Pure Storage, focusing on cloud. What we're talking about today is, of course, Cloud Block Store, but not what it is and how it's built; there are other sessions on that. Instead, it's the why: not only why we built it, but
00:28
why it matters to you, why a customer might want to use Cloud Block Store, and how it can help you solve challenges and struggles you may be encountering when leveraging hybrid cloud, public cloud, multi-cloud, whatever you want to call it. Because there are differences in how data is managed, how it's configured, how
00:46
it's deployed, and how it's charged inside of the hyperscalers, inside of the public cloud. Simple things that we've taken for granted, if you will, for many years, like thin provisioning, data reduction, compression, dedupe, space-efficient snapshots, snapshots that don't impact performance,
01:04
instant restore from snapshots, these things don't quite exist in the public cloud. So a lot of the benefits we've experienced for a while with the on-premises experience don't exist natively, and these things change. That is the reason we built Cloud Block Store: we saw a lot of the same
01:21
struggles and problems, fundamentally, that the FlashArray was built to solve, and we realized we could help customers in similar ways in the public clouds, and of course in new ways as well. That is what Cloud Block Store is. Cloud Block Store is the evolution of Purity, the operating environment inside of
01:41
the FlashArray, brought to life inside of the public cloud. It's not just a port; it's not some VM we slapped Purity into and moved into the public cloud. It is an architecture that leverages the underlying resources of Azure and AWS in the right ways to provide the things you see on the screen:
01:58
reliability, efficiency, performance, consistency, simplicity, a lack of trade-offs. This is what the FlashArray was built for. This is what Purity did with flash, and now it is what Purity does with the underlying public cloud infrastructure to deliver that simplicity, that resiliency,
02:16
and that availability to the end user. We keep up with the technology so you don't have to. But that in and of itself is not a use case; the existence of Cloud Block Store is not a particular use case. So why does this matter? Well, there are a couple of reasons here. Fundamentally, what does
02:35
Purity do? It provides availability. It provides replication, it provides features: thin provisioning, data reduction, clones and snapshots, encryption. The use cases spawn from these ideas, from these features. But what they mean on premises can also mean different
02:57
things in the public cloud. The use cases are built from these concepts, and what they mean and the value they can add changes when it comes to the public cloud. So if we move up a level, or down a level, depending on how you look at it, we can assemble those together to understand what they mean for
03:15
these common uses, these struggles, these questions, these implementations around the public cloud: disaster recovery, migration, lift and shift, dev/test, even just improving high availability. Where does CBS play into this, and what can we do? Well, let's start with some pretty straightforward ideas and
03:33
move into some of the more complex architectures that you might be trying to implement in your infrastructure. First off is simplicity. Simplicity in and of itself can be a use case: I have complexity, it is getting in the way of me being able to do my job, how can I simplify this so I can focus on other
03:51
things? I don't have the time to understand all of this. This is what we did on premises: people don't have the time to go look at all these different flash types, a pile of flash, and assemble it together to be resilient and encrypted and protected and available, and so on. No, they pay us to figure that out and
04:07
say, hey, just give me a volume. This is what we can do in the public cloud. But instead of different types of hardware, flash, controllers, and connectivity, it's about what we can leverage inside of the AWS and Azure infrastructure to provide this simple storage layer. How can we adapt,
04:24
change, and alter Cloud Block Store to take advantage of new things, so the applications using Cloud Block Store on the front end can immediately and instantly consume them as well without having to experience change? How can we be the storage experts so you don't have to be, and you can focus your
04:42
business on the directly business-related services, architectures, and applications that matter to you? Let us be your storage expert. That is, conceptually, one of the use cases around CBS. This is what we're doing to make storage simpler inside of the public cloud: taking the features
05:01
and knowledge set inside of Purity and moving that into Azure and AWS. More specifically on that point, let's look at what we do well and what it means in the public cloud for spend and cost. What was one of the original ideas around the FlashArray on premises?
05:21
Well, flash was traditionally expensive in dollars per raw gigabyte. So how can we make it cheaper? By somehow making that capacity effectively bigger. We do that through thin provisioning, through deduplication, through compression, pattern removal, and snapshots and clones that are 100% efficient: when you clone or
05:39
snapshot something, there is no increased storage capacity hit. It's all metadata based, leveraging pointers back to that globally deduplicated and compressed back end. Thin provisioning does not exist in the native public cloud offerings; you pay for what you provision, not
05:58
for what you've written. So the combination of all of these can reduce your storage footprint. And what you can then do with the money you save is spend it in other places in the public cloud, places that, once again, are more beneficial and directly related to the business that
06:14
you're trying to run. Instead of spending it on the capacity footprint, spend it on those services, those applications, those developers, as the case may be, to improve the business, or to look into AWS or Azure offerings that you maybe couldn't afford before because you were spending on the
06:30
storage footprint. That, right there, is a direct use case for CBS.
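To make that cost idea concrete, here is a rough back-of-the-envelope sketch. Every number in it, the provisioned capacity, the data reduction ratio, and the price per provisioned TiB, is a made-up assumption for illustration, not Pure Storage or cloud provider pricing.

# Illustrative only: every number below is an assumption, not a quoted price.
provisioned_tib = 100          # capacity the applications ask for
written_tib = 60               # data actually written (thin provisioning covers the rest)
data_reduction_ratio = 3.0     # assumed dedupe + compression ratio
price_per_tib_month = 80.0     # assumed $/TiB-month for provisioned cloud block storage

# Native provisioned block storage: you pay for what you provision.
native_monthly_cost = provisioned_tib * price_per_tib_month

# With thin provisioning plus data reduction, the physical footprint is
# roughly the written data divided by the reduction ratio.
reduced_footprint_tib = written_tib / data_reduction_ratio
reduced_monthly_cost = reduced_footprint_tib * price_per_tib_month

print(f"Native footprint cost:  ${native_monthly_cost:,.0f}/month")
print(f"Reduced footprint cost: ${reduced_monthly_cost:,.0f}/month")

Under those assumed numbers the monthly storage spend drops from $8,000 to about $1,600, which is the budget the session suggests redirecting toward other cloud services.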
06:51
Now let's look at more of these architecture concepts. One of the really early thoughts around leveraging the hybrid cloud was disaster recovery. Hey, my DR data center: I'm not failing over to it every day, I'm not using it most days. Do I figure out how to make it active-active? Many have done that. But suppose I don't want that; I have my one data center, and it's good for what I need. What if I use the elasticity of the compute in the public cloud to fail over to? But when it comes to doing that, how do I
07:08
get the data in? How do I protect it? The formats are different. How do I replicate in, and then what if I want to fail back? How do I convert it back to whatever my on-premises environment is? How do I convert it to my public cloud environment? These are traditional problems around this use case with hybrid
07:24
cloud. Because the FlashArray and CBS present the same front end, the same capacity, the same storage offering, there is no reconversion. You don't need to replicate and convert. It is one singular storage layer across the FlashArray and into the public cloud using CBS. And so this allows the mobility
07:44
of that data from on premises into the public cloud. Of course, that's a key piece around disaster recovery, where it is not a permanent migration; it is a temporary migration and then an eventual migration back. So that flexibility, that mobility, is particularly important. And this can be
08:04
extended directly into the public cloud itself. If your production environment is in one availability zone or one region, your disaster recovery can be in another region or another availability zone. And one of the benefits here from our replication addresses another consideration when it comes to
08:21
the public cloud. One of the most important considerations is how efficiently you use your network. There are egress costs between availability zones and regions; when you're sending traffic across these, it costs money. So making sure replication is efficient, but complete, is important. And our replication,
08:41
the Purity replication that we built on the FlashArray and extended to Cloud Block Store, maintains deduplication, maintains compression, maintains differencing. If a block was already sent to that target for this volume six hours ago, or a day ago, we will not resend it for any other volume. So we can
09:00
keep that replication between the different CBS instances efficient, really reducing the amount of data that needs to be sent while still ensuring that all of the data is there. And the same concept allows not only asynchronous replication, but also active-active replication.
09:18
So if you want to have active-active synchronous protection across availability zones, you can leverage ActiveCluster as well.
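As a rough sketch of what driving that periodic replication programmatically can look like, here is a hypothetical example using the legacy purestorage Python REST client against a FlashArray or CBS management endpoint. The hostnames, API token, object names, schedule, and the exact keyword arguments are assumptions for illustration; check the client's documentation for the precise calls your Purity version supports.

import purestorage

# Hypothetical management address and API token for the source FlashArray/CBS.
source = purestorage.FlashArray("cbs-source.example.com", api_token="SOURCE-API-TOKEN")

# Create a protection group, add volumes, point it at a replication target that
# is already connected to this array, and enable a periodic replication schedule.
source.create_pgroup("dr-pgroup")
source.set_pgroup("dr-pgroup", addvollist=["app-vol-01", "app-vol-02"])
source.set_pgroup("dr-pgroup", addtargetlist=["cbs-target"])   # target array connection assumed to exist
source.set_pgroup("dr-pgroup", replicate_frequency=900,        # assumed to be seconds (every 15 minutes)
                  replicate_enabled=True)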
09:35
Now, one consideration around using the public cloud for disaster recovery is, of course, that I don't want to have resources sitting there that I'm paying for when I'm not using them. So why do I need a CBS instance sitting there waiting to be replicated to? Well, one of the options we have here is something called CloudSnap. CloudSnap is a replication offering, from a FlashArray or originating from a Cloud Block Store, that replicates
09:52
snapshots into Azure Blob, into AWS S3, and into NFS as well, sending those snapshots into that cheap-and-deep storage, where they wait until they're needed and can be rehydrated into any given FlashArray, or any CBS that's been deployed on demand. CBS is software only; you can deploy it in minutes
10:14
through the Azure or AWS marketplace. So you replicate with CloudSnap to Azure Blob or S3, as the case may be, and then rehydrate back into Cloud Block Store. This is what we can do with CloudSnap, CBS, and the FlashArray as well. And so you do not need a persistent CBS target sitting there
10:36
waiting. That's also another way of getting data across regions and across availability zones: sending it into S3 and then pulling it back into Cloud Block Store as needed. So not even CBS has to be sitting there waiting; you can rehydrate just certain applications and certain data sets as needed, as you want
10:54
to move them over to different availability zones. That's another mechanism for efficient network traffic. Now, one of the main differences you might ask about, between our async replication and CloudSnap: why would I choose one or the other? Well, because CloudSnap means going through
11:11
one thing and then another, your RTO is going to be longer; there's more time to pull the data out of S3 and rehydrate it into CBS than if you're just replicating from one CBS or one FlashArray to another. So the decision point there is generally cost versus RTO. Depending on what
11:30
your applications need, depending on what your business needs, you can choose one or the other, or you can certainly have configurations leveraging both. Lift and shift, moving and improving, and so on, is probably one of the first things that people do, because changing your disaster recovery and implementing it in
11:49
a new architecture has a long tail; it takes some time to understand, test, and build out. But lift and shift is one of the first things to do: hey, I have some applications, I have this database, I have whatever it is that I want to move into the public cloud, that makes sense, let's start with that, I can do that
12:03
today. This is something we can help with, because a traditional problem here is: how do I get that data set in there? How do I replicate it? How do I get these large databases from on premises into the public cloud? We can replicate these volumes from the FlashArray into CBS and present them up
12:21
there, because it has that same consistent front-end storage layer. So there is no conversion of the data set; you can focus on moving the application logic and the infrastructure around it, and we can easily move the data. Another use case comes back to our cloning and snapshots.
12:41
Snapshots and clones are costly in AWS and Azure: they're not instant restores, or the ones that are instant are limited in how many you can use, and the capacity footprint is not zero; there is additional footprint and cost. On the FlashArray and on CBS, you can create hundreds and thousands of
12:58
snapshots with zero capacity footprint when they're created; a snapshot is a complete metadata copy. So creating lots of clones and snapshots can bring up the development and test environments you might need for checking your databases and your applications. Scale up that elastic compute, scale out your
13:16
clones, without the storage footprint cost increasing along with those elastic compute instances. And so this can be dev/test inside the cloud, or of course also across clouds: creating these clones, instantly provisioning them, restoring them, refreshing them. It's all
13:34
metadata copies; copying from one volume to another, or from one snapshot to a volume, is an instant operation because it's just a metadata copy. So the efficiency and the speed of restoring and cloning can be of significant benefit when it comes to dev/test in the cloud.
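As a rough illustration of how lightweight that clone-and-refresh workflow can be, here is a hypothetical sketch using the legacy purestorage Python REST client. The endpoint, API token, and volume names are placeholders, and the exact calls should be verified against the client's documentation for your Purity version.

import purestorage

# Hypothetical CBS management endpoint and API token.
cbs = purestorage.FlashArray("cbs-devtest.example.com", api_token="CBS-API-TOKEN")

# Take a point-in-time snapshot of the production volume (metadata only).
cbs.create_snapshot("prod-db", suffix="nightly")

# Instantly clone (or refresh) several dev/test copies from the production volume.
# overwrite=True lets an existing dev volume be refreshed in place.
for i in range(1, 4):
    cbs.copy_volume("prod-db", f"dev-db-{i:02d}", overwrite=True)

Because each copy is a metadata operation against the deduplicated back end, the three dev copies above would add essentially no new physical capacity until they diverge from production.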
13:54
Another use case is a variation on that same idea, kind of combining all of these things across the board. My production runs on premises, but my development, my test, my analytics engines need more compute than I have sitting on premises, or rather, they need a lot of compute that I don't need running all the time. And so
14:13
leveraging the elastic compute inside of the public cloud, while being able to send your production data sets into the public cloud, clone them, create lots of copies really quickly for that elastic compute, and tear it all down when you're done, is an important part of this use case. So you might ask yourself,
14:31
though: okay, cool, that makes sense, I get the data mobility thing, but I'm running VMware on premises and I'm not running VMware in the public cloud. VMware uses virtual disks; there's VMFS, there's NFS. How do I get data that's in that particular format into something that Azure VMs or
14:49
AWS EC2 instances can actually understand and use? It is a good question, because this has been a struggle for a variety of reasons. Forget about hybrid cloud: for a long time the questions have been how do I get data in, how do I get data out, how do I change this, how do I move
15:03
physical servers, and so on. And it's specifically because of one fundamental reason. VMware has built an abstraction layer with their virtual disks, and there are a lot of benefits: Storage vMotion, cloning, snapshots, all that kind of stuff. But what a virtual disk is, is a file on some kind
15:21
of file system, VMFS or NFS. And in that file is this abstracted block object that is what's actually presented to the guest. So the data is encapsulated in a virtual disk, which is then encapsulated on a file system, which is then on a block object. And so we could
15:43
replicate that VMFS volume up into the public cloud. But Azure VMs don't understand VMFS, and AWS EC2 instances don't understand these VMDK files that were on NFS or on VMFS. Even replicating the VMDK file from a software perspective to some target up there, they still don't understand it coming from VMFS or
16:03
NFS. So there's a data mobility and migration problem here: it needs to be converted. And when you convert it, that also means it needs to be converted again if it goes back. That really causes problems with things like disaster recovery and additional mobility across the board. Every time you replicate, you also
16:19
must convert. This is a problem. So if you can't use these, if you can't read these, and most environments we see these days are running on top of VMware, how does this work? Well, there are a couple of ways to solve this problem. There's an old and traditional way of doing this using raw device mappings.
16:38
Forget about the VMware layer and all the goodness that comes with it: I need that data mobility, I need a direct mapping to my guests. But, like I said, you lose the value of VMware in many, many ways; RDMs are horrific, right? So let's avoid that. There is a better option:
16:55
virtual volumes. vVols are essentially raw device mappings, but they are orchestrated and integrated; the provisioning and management is integrated into the VMware SCSI layer. So all those features, clones, snapshots, VADP backup, and so on, are supported with vVols. But the concept of that data vVol, that virtual
17:13
disk, is still open and mobile. A data vVol is essentially an integrated virtual RDM. There is no VMFS on it, there is no VMDK encapsulating that virtual volume; it is a volume directly mapped to the guest, a block volume with whatever file system your guest puts on it. And so the original example of
17:36
this concept, the fact that, hey, it is just a block volume with some file system on it, was exemplified when moving physical servers to virtual servers. You could take, let's say, a Windows Server with a C drive on one block volume and an E drive on another, sitting on a FlashArray.
17:53
Alright, I'm going to create a vVol VM with the same virtual disk sizes for C and E as on that source physical server, copy those snapshots, copy those volumes directly, even from the physical server, to that vVol VM, and boot it up. Bam, you've now instantly virtualized that physical server, because all
18:13
that C drive is, is NTFS with the boot partition, and the E drive is the same NTFS as well. So the data mobility story between virtual and physical plays out, allowing you to move things back and forth, refresh from physical to virtual, or convert, as seen in the diagram on the slide. And this same idea extends to the
18:32
public cloud. One of the reasons vVols are so important is that we can replicate these vVol virtual disks from an on-premises FlashArray into CBS and, without any kind of conversion or change, directly present those block objects that were originally vVols to your guests, whether
18:51
it's an Azure VM or an AWS EC2 instance. Those objects can be directly presented from CBS into them, because they're just SCSI block devices with ext4, NTFS, XFS, whatever the case may be, as formatted by the guest.
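As a rough sketch of that last step, presenting such a replicated volume to a cloud instance, here is a hypothetical example using the legacy purestorage Python REST client. The endpoint, token, initiator IQN, and names are placeholders, and the exact calls should be checked against the client's documentation; the guest would then discover the volume over iSCSI and mount the file system it already contains.

import purestorage

# Hypothetical CBS management endpoint and API token.
cbs = purestorage.FlashArray("cbs-prod.example.com", api_token="CBS-API-TOKEN")

# Register the EC2 instance (or Azure VM) as a host by its iSCSI initiator name.
cbs.create_host("ec2-app-01", iqnlist=["iqn.2021-01.com.example:ec2-app-01"])

# Connect the replicated volume (formerly a vVol virtual disk) to that host.
# The guest logs in over iSCSI and mounts the existing ext4/NTFS/XFS file system.
cbs.connect_host("ec2-app-01", "app-data-vvol-copy")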
19:09
And that data mobility is a key part of all of those use cases we've discussed when you're using a VMware environment; vVols are a key part of that. Do you need to move all your VMs onto vVols to be able to do this? Do your VMs need to be on vVols immediately to do it? No. You can run them on vVols, and in fact I encourage you to do so, but you can keep them on VMFS: you can clone them to
19:26
a vVol datastore, you can Storage vMotion them into a vVol datastore. Because the movement from VMFS to vVols is offloaded, it does a metadata copy on the FlashArray, so it is efficient and fast. And so you can move to vVols when ready, you can move just what you need when you need it, or of course, ideally, move to
19:43
vVol datastores entirely and persistently. But there is flexibility around that, and efficient options for making the move. And so vVols are an important part of our hybrid cloud strategy when it comes to the FlashArray and Cloud Block Store itself. So,
20:01
if any of these use cases are interesting to you, if any of these sparked some interest or gave you some ideas, well, it's super easy to get started. Go to the AWS or Azure Marketplace; you can deploy it directly from there, and we have a Terraform provider too if you prefer to deploy things via Terraform. And the licensing is
20:16
mobile through Pure as-a-Service: licensing you use on premises can be applied to CBS itself. But we also have a free, on-demand 30-day license that you can leverage when deploying CBS, which you can get from the Azure Marketplace, the AWS Marketplace, or of course your account team, to try it out today.
20:33
Thanks for your time. I really appreciate it. If you have any questions or any comments or feedback, please let me know. Thank you.

Each public cloud solution is different and presents unique challenges for optimizing availability, provisioning, deduplication, compression, overall performance and a host of other factors involved in managing a modern, complex cloud data environment. How can enterprise cloud storage management be simplified?

This is why we developed Cloud Block Store. There are a variety of ways Cloud Block Store can be deployed and provide value in the cloud. In this session, our technical experts provide an overview of the common use cases for Cloud Block Store, including data mobility, high availability, application migration, and enhanced data management.
