58:34 Webinar

Build Resilient Data Protection with FlashRecover on Epic

This session discusses using Pure FlashArray to achieve RPO/RTO of 0 for Epic. Also covered is using Pure FlashRecover to deliver rapid recovery of vital data in the event of a ransomware attack.
This webinar first aired on July 14, 2022
00:00
Excellent, thank you Emily. I appreciate the invite to talk to everybody at this tech talk about the solutions we're building, focused on resiliency and availability within the Epic ecosystem. We've got a very mature partnership with Epic. The focus is on the products across our FlashArray line, as well as, as we'll talk about here,
00:25
how FlashRecover plays into that. As mentioned, you've got myself, Chad Monty, and Thomas Whelan talking to you today about how we work together with Epic. The strategy we've got with Epic is really a better-together type of solution. We're tying together the mature ecosystem, the mature products,
00:47
and the mature processes that you're familiar with on Epic with the very robust, highly available, durable storage and recovery solution we've built with FlashRecover. The result is a solution that is ultimately better together from a performance, resiliency, data protection, and recoverability perspective. We're looking forward to talking to you about that today.
01:12
So with that, let me turn it over to Thomas to talk a little bit about Pure's work with Epic. Thanks, I appreciate that. And everyone, I'm glad you joined us today. It's awesome. So our work with Epic is important, obviously, for our delivery into the healthcare market space that we occupy, and we put a lot of
01:35
time and energy into ensuring that our technologies fit within the confines and the needs of their application as they evolve their application set across different modules, different clinical competencies, and things like that. When it comes down to how we build Epic out, and we're talking about customers that are looking at storage requirements for Epic,
02:00
there are really three areas that we tend to focus a lot of discussion around, and those are financial impact, operational impact, and technical impact. I'm a technology guy, so for me the financial impact is really about changing the paradigm around how IT is owned today versus what a lot of customers are used to in that ownership aspect, that CapEx versus OpEx question. Pure has done,
02:27
I think, a really great job of helping Epic customers who are looking at alternative storage products to see a different way of storage ownership, and technology ownership as a whole. And we offer a lot of programs around those things, especially around our whole Evergreen program, which has been a huge technology disruptor in this space for a while now.
02:52
The other things we really talk about are operational and technical impact, where we think we provide a huge amount of value to customers that are looking for storage products to work with the Epic application. As we know, it is a vast application. There's a lot of technology and modules, a lot of software in there that does a lot of great things for the clinical side of the business, but sometimes it can be a bit
03:16
challenging to get your hands around it and really see it as an overall architecture. So one thing Pure has done a great job of is providing the tools that we believe storage managers really require, not just to run their storage day to day, but also to view their storage in a much broader sense of scalability, capacity, and
03:44
performance, and really giving the toolsets that we think customers are asking for in order to provide those capabilities: everything from education, to making it very, very easy to install, to day-to-day support, to data protection technologies. There are a lot of moving parts and a lot of different ways of approaching
04:08
data protection technologies, which is really what we're here to talk about today, and we'll talk a little bit about all those various toolsets and where they fit within the Epic infrastructure architecture. It's also about making sure that the promises we make around capacity and things like data reduction
04:26
are all things we deliver on, and so we offer some guarantees around that. And then there's having the ability to take a look at some of the challenging points: backups and restore mechanisms are always a bit of a challenge for very, very large applications like an Epic, or other vast, application-
04:48
framework-oriented applications, and we can speak to how we approach those as well. And then lastly, on the technical side, there are things like data reduction technologies, the innovation we put into the products, high availability, future-proofing, and looking further out into where storage seems to be taking us, with evolving storage technologies and
05:13
beyond. Those are really areas we focus energy around, and we believe we're always on the cusp of what those technologies have to bear, and we try to bring them to customers so they can leverage them for their Epic environment as it evolves over time. Now, Epic has some guidance around how storage manufacturers in
05:40
this case are evaluated, in a document called the SPaTS guide, the Storage Products and Technology Status guide. Essentially, it's their vision of what they see in the storage space and how they view those various technologies, not just in terms of how they operate with their application technically, but also in terms of how their customers have either embraced those technologies or operationally built them
06:05
into existing Epic infrastructures. And we're very, very honored to be rated at the top end of those scales when it comes to our FlashArray product. For a company as new as Pure, considering the time scale of a lot of our friends in the storage manufacturing space, we've been able to reach that top end
06:31
pretty quickly, and we're very proud of that, and we don't take it for granted. So we're always looking to keep that status as high as we possibly can as we evolve our technology. We've also had some additional products recognized: our FlashBlade is now in that SPaTS guide, which is great.
06:52
We really feel like that is a product that is going to be incredibly impactful moving forward, especially in the space of data protection, which we're here to talk about today. And we're also starting to see our FlashBlade products, with FlashRecover and some of our other rapid restore solutions, fit within the backup area of that guide as well.
07:14
And we really want to continue to grow that out, because we do believe those products offer a significant improvement over some of the other technologies we see today, and address some of the challenges we see customers face in this space as well. So we're happy where we're at, but we don't take it for granted, and we're always
07:35
trying to improve and maintain it as best we can. Like with everything, it's all about keeping customers maintaining that clinical care continuum over time, and part of that requires high availability in order to get it done. Especially with COVID, and while we might be on the tail end of that,
08:03
I think it's really changed the whole mentality of how healthcare organizations, as well as purveyors of healthcare, view technology when it comes to health care. No longer can things be taken for granted when it comes to availability; availability is more important now than ever. And that's something that, at Pure,
08:29
we really try to maintain and focus a lot of energy around: always providing technology that customers can not only leverage, but also upgrade when they need to, whether that's adding data packs for additional capacity, doing controller swaps, or running Purity upgrades. We want those operational aspects to
08:54
not impact the delivery of patient care, because ideally that's what we're all here to do. And with that, we've been able to demonstrate some incredibly important benchmarks that I think set the pace for what good storage is doing today, which is being able to do data-in-place upgrades without having to take an array offline, stop I/O,
09:18
or quiesce the environment, as well as data pack migrations and things like that. All those things are really important for keeping a healthcare environment up and running, even when you have to do things like those various controller swaps and other work for which you would traditionally have to plan an outage window.
09:40
And when it comes to an application like Epic, where there are a lot of departments that have to be part of that decision-making process, it becomes harder and harder to go after those types of outages. So any technology that can assist IT by not only providing a better ability to do that, but also the confidence that the technology will actually perform
10:03
those roles, and can demonstrate that over and over again, is really important to us, and it's part of how we approach technology as an innovation platform moving forward. So it's really important that we ensure uptime is always maintained as much as we can with the way we build our products today. This is a fun slide, but it's a
10:25
slide we put together because we often ask customers what they think of our technology, and one thing I can say is that we have a lot of happy customers, which is really awesome. We also don't take that for granted, so we're always trying to improve ourselves. A lot of times we hit the mark, and sometimes we need to
10:48
improve some things, but on the whole, this gives you a simple snapshot of the feedback we get from customers that have adopted Pure for Epic in their environments. And we're really happy that customers are finding our technology compelling to use, and that it's solving problems in their environments today. Excellent. Thank you, Thomas.
11:16
I want to segue a little bit here to talk about the Pure ecosystem as it applies to Epic across all of our modern data protection solutions and strategies, and to highlight this, I want to talk about what we call the continuum of modern data protection and availability. This is really a way to highlight how Epic and Pure can work together from an SLA-
11:41
driven perspective. The SLA, whether it's for a database, a backup, or a restore, is really the point that's going to define what technology you want to use and how you want to integrate Pure Storage into your Epic solution. For the business needs of that primary production database, where you have
12:03
the tightest SLA, no disruption, as close to 100% availability of the application and the service as possible, and where you truly want an RPO and an RTO of zero, we solve that with a technology on our FlashArray//X, mentioned here, called ActiveCluster. Take two arrays,
12:23
keep synchronous replication between the two, and ensure that Epic's primary production environment gets the tightest SLA: no failover time from the application perspective, really maintaining that no-service-disruption strategy. As your SLA relaxes and it turns into more of a disaster recovery strategy,
12:44
your RPO may need to be near zero, your RTO may need to be near zero, but it's not going to be an automated process. It's still, you know, a big red button, if you will: hey, we've made this decision, let's fail the application over to the DR site.
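The stretched-pod setup behind ActiveCluster, described a moment ago, can be sketched as a short sequence of Purity//FA CLI steps. This is a minimal illustration rather than a deployment guide: the pod, array, and volume names are placeholders, and the exact `purepod`/`purevol` syntax should be confirmed against your Purity release.

```python
def activecluster_setup_commands(pod: str, peer_array: str,
                                 volume: str, size: str) -> list:
    """Return the Purity//FA CLI steps that stretch a pod across two
    arrays for synchronous (RPO 0 / RTO 0) replication.

    Command names follow the purepod/purevol CLI as we understand it;
    treat them as illustrative, not authoritative.
    """
    return [
        f"purepod create {pod}",                    # pod on the local array
        f"purepod add --array {peer_array} {pod}",  # stretch it to the peer array
        # volume lives inside the pod, so writes are synchronously mirrored
        f"purevol create --size {size} {pod}::{volume}",
    ]
```

Once the pod is stretched and in sync, either array can serve I/O for the volumes in the pod, which is what removes failover time from the application's perspective.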
12:58
There we implement our continuous data protection technology called ActiveDR. The power of this, especially in an Epic environment: whereas ActiveCluster is a synchronous, zero-RPO/zero-RTO solution, when you move to ActiveDR we utilize continuous data protection, which means the DR site may only be 10, or commonly 30, seconds behind the
13:24
actual production environment. So you can start to build very highly available business continuance and disaster recovery solutions for Epic, which we'll dig into on some later slides. Extending from there, as the RPO increases, where we now measure it perhaps in hours from an RPO and an RTO perspective, we utilize our robust snapshot technology: taking snapshots of the primary environment, replicating those
13:51
snapshots, or even making writable snapshot clones of that data that you can push down to a proxy environment for backups or for other testing and development purposes. And you can utilize technologies on FlashBlade, or even on FlashArray, like snap-to-NFS, in order to get snapshot offload capability within your Pure environment. And then, as we move past that, it's important to understand that all of our technologies,
14:22
across the entire Epic ecosystem that we're going to talk about here, are protected by our SafeMode technology. Those are immutable snapshots that are protected by the array, which even an administrator or an attacker is not able to delete. So that allows you to go back to your business and say: I've got a non-deletable, immutable copy of my data.
14:43
Maybe it's on FlashArray, maybe it's on FlashBlade, but it has the property that we're able to immediately restore to it, and it's protected from attackers or from systems or administrative error. And then we wrap around that the traditional backup and rapid restore that Pure brings to the table. So when you're measuring a recovery point or
15:06
recovery time within hours, maybe tens of minutes, with a large backup type of strategy, that's where technologies like FlashRecover and our rapid restore ecosystem partners come into play. And all of these can still write that data out for long-term retention, especially in the healthcare environment, where you may have copies of data that need to reside long term,
15:27
maybe, you know, in Caché or in other medical imaging type workflows. We can send that data to low-cost, long-term cloud storage from an archiving and long-term retention policy perspective. All of these technologies tie together into this continuum of modern data protection that we can apply to the Epic solution workflow. Now, which subset of these is really focused on Epic?
15:53
Well, it's ActiveCluster for that immediate, highest-level availability; ActiveDR for disaster recovery; snapshots for immutability, for ransomware protection, for fast recovery, and for your test and development use cases; all protected by SafeMode; and then backed up with rapid restore capabilities by FlashRecover, with long-term retention in the cloud.
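The continuum just walked through can be summarized as a simple lookup from SLA tier to technology. The RPO/RTO figures below are the approximate targets mentioned in the talk, not contractual guarantees, and the table itself is our own illustration of the mapping.

```python
# Illustrative summary of the data-protection continuum described above.
CONTINUUM = [
    {"tier": "no disruption",        "tech": "ActiveCluster (sync replication)",
     "rpo": "0",                     "rto": "0"},
    {"tier": "disaster recovery",    "tech": "ActiveDR (continuous replication)",
     "rpo": "~10-30 s",              "rto": "near zero (manual failover)"},
    {"tier": "operational recovery", "tech": "Snapshots + SafeMode (immutable)",
     "rpo": "minutes to hours",      "rto": "near instant"},
    {"tier": "backup / rapid restore", "tech": "FlashRecover (FlashBlade + Cohesity)",
     "rpo": "hours",                 "rto": "minutes to hours"},
    {"tier": "archive",              "tech": "Low-cost cloud / object storage",
     "rpo": "days",                  "rto": "days"},
]

def tech_for_tier(tier: str) -> str:
    """Look up which technology the talk maps to a given SLA tier."""
    for row in CONTINUUM:
        if row["tier"] == tier:
            return row["tech"]
    raise KeyError(tier)
```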
16:17
These are the pieces that we bring to bear on an Epic solution. Now let's have Thomas talk a little bit about what the solution looks like from the Epic solution builder perspective. Thomas? Thanks. So we thought we'd take a couple of minutes and talk about how we approach an Epic environment when it comes to providing
16:40
storage for it. We take a different approach than maybe other traditional storage providers would, which is normally to look for a single array that can cover all of the various application elements of Epic in one big box, then provide a secondary box in a DR location, and you're essentially done. Well, that's a strategy, and it's a strategy we could also easily
17:07
embrace, but we don't take that approach, because one thing we recognize is that there are many moving parts to what's happening on a storage array, and the fact is we're managing clinical data; we're helping the treatment process with physicians. We really want to be sure that that clinical care delivery process is protected at
17:34
the highest levels that we can offer. So what we actually do is create a design in which we separate the production instances, as well as a couple of other important key instances that are aligned with production, and put them on their own array. We do this because other elements of the environment as a whole, the data center environment, could have non-Epic data
18:00
intermixed with Epic data, things like that, which is pretty common. We want to protect the clinical care delivery of Epic as much as we possibly can, so we build a separate array that is designed to do just that job. We then also, based on the size of the customer,
18:17
their current Epic needs, as well as their non-Epic data needs, develop out other arrays that can service those capabilities, as well as manage the storage for their Cogito and Clarity environments, which can grow very, very large very quickly for reporting, analytics, and things like that, and then also provide DR options for
18:42
those. So we have these general methodologies, a two-plus-one versus a three-plus-one kind of idea, that allow us to take customers of varying sizes and put them into these containerization processes that we've worked out, and develop and build out an environment that is well performing,
19:07
scalable for the future, but that also recognizes the need for some specific data isolation where it makes sense within the application and within the various IT workloads as a whole. The way we begin this process, as always, is that we've developed some toolsets that allow us to build out storage using Epic's hardware configuration guide that
19:32
they build for customers. I think we can move one slide forward; there you go. We then build out infrastructures that look somewhat like this. These are examples of what those storage devices look like, and of the various Epic elements that would be on each of them.
19:55
This is a two-plus-one solution. We can advance; this is a three-plus-one solution, where we're doing some additional degrees of separation. But you'll notice at the top the labels for the servers: whether it's AIX, whether it's Linux, we can support both of those operating environments.
20:15
We know there are a lot of AIX customers, but there are also a lot of Linux customers out there today that have embraced that, and we're happy to work with both. Essentially, we can build out the arrays the way we choose to, while also remaining highly customizable to the customer. So if the customer would like some
20:36
specific placement of elements of the hardware configuration guide on certain arrays, for various reasons of their own, we absolutely can embrace those ideas as well, and really create an Epic environment that is tailored specifically to the customer, the way they would want to build it out, and provide them with all of the
21:00
storage. You'll also notice at the very bottom the number of snapshots: we use seven days. We use that because it's usually where the journal refresh process happens within the actual database engine itself, so we embrace that same rotational process with our snapshots.
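The seven-day rotation mentioned here can be expressed as a simple retention check. In practice this would be a Purity protection-group snapshot schedule rather than application code; the function below is just a sketch of the policy.

```python
from datetime import datetime, timedelta

# Seven days of snapshots, matching the journal refresh cycle noted above.
RETENTION = timedelta(days=7)

def snapshots_to_expire(snapshots, now):
    """Return the snapshot timestamps that have aged past the
    seven-day retention window and are eligible for deletion."""
    return [s for s in snapshots if now - s > RETENTION]
```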
21:16
We view that as a good number to use, along with longer retention for other copies of data as we move forward, and we'll talk about those in a few other slides. And at the end here, we also touch on some of the advanced data protection technology, like ActiveCluster, which has really been adopted by a lot
21:44
of Epic customers, particularly those that want to have an operational existence in one data center for, let's say, six or seven months, then shift over to another data center for another six or seven months, and have that cohabitation capability. ActiveCluster provides a great mechanism, not only for the data movement aspect of it,
22:10
but for working with other orchestration controls, such as at the VMware level, to help with the movement of that data, as well as of the servers as they move over from array to array. Or, if those servers are built out traditionally, exactly the way they would be in a primary data center, just having that data available when they want to make those transitions, so it all works
22:32
seamlessly as well. We can do that and at the same time still maintain all of the other data protection technologies that we leverage: the snapshotting, the SafeMode, all those things are carried across, giving the customer who really wants the premium of dual-site capability for operational control the ability to keep their environment whole, the way they choose to run it as a business.
22:57
We can help provide and deliver that to the customer without any issues. And because with Pure all of the software features are included with the array, customers can start off using a small subset of those capabilities and then evolve over time to leverage technologies like ActiveCluster, because they essentially own those technologies on the day they buy their first Pure array. So they can choose to embrace
23:23
and implement things on their own terms, rather than on the terms of some purchase process they have to go through, or a maintenance type of situation. They can simply adopt those technologies when it's important for their business to do so. And lastly, this slide gives you some examples of the varying degrees of options that we can offer customers. There's a lot going on here,
23:50
but really these are just different ways that we can configure our Pure Storage devices that still comply with what Epic requires for data usage, still retain our ability to keep some separation of control over what's going on, but also let us customize those things for a customer's specific environment needs. And the fact that we can be agile and flexible with all those things
24:16
really allows us to be more of a collaborator in a customer's environment than simply a storage vendor. We don't want to just sell them storage; we want to be part of how they deliver their healthcare process, and be an extension to their IT team if we can. And lastly, even technology integrations like we see here with FlashRecover, powered by
24:37
Cohesity, which is a great partnership. We've found a lot of success with backups with that specific product, and we can really help customers see how it integrates into the overall Epic architecture as well. So these are just some examples of the creativity, as well as the options, that we can offer customers to provide a solution for pretty much anything a customer may bring to us.
25:05
So let's talk a little bit about backup and restore. We're talking about data protection today, so let's get to it. The first thing that's important to understand with Epic is their embrace of a methodology called 3-2-1. 3-2-1 is an approach that is fairly standard in the industry; maybe not universally recognized, but it is a common approach among a lot of storage
25:28
manufacturers, and Epic has done, I think, a great job of embracing it as well, because it makes a lot of sense, especially today. (Yeah, my tape, sorry about that. I thought that was kind of funny.) The idea here is that we want data protection, and we want to replicate some of that protected data so that we have copies on devices that may not
25:53
necessarily be the same type of device, so that if something impacted, let's say, a block storage array like a FlashArray, we would also have a NAS-based product like FlashBlade holding that same data. The idea is that we have
26:14
coexistence across a couple of different storage platforms and media types, and then, at the very end, a single copy as that compliance copy that everyone has to have. It's also the furthest away from where all of the data lives. We have it there as that last resort if we have to go to it:
26:36
we know it's there, but we really want to use some of these other options for more immediate retrieval and backup recovery. And so 3-2-1 allows us to leverage our block storage array, FlashArray, and our NAS storage array, FlashBlade, to provide those two different storage mechanisms, along with software from the various integrators that we partner with, which
27:00
you'll see here in a little bit, as well as offering a way to get that data off to some other kind of last bastion, so to speak. Technologies like Cloud Block Store, and other manufacturers that offer backup solutions, can write data to the cloud, to magnetic tape, to a VTL; all those kinds of things can be happening. As for how we manage a lot of that: we build
27:25
some of those things out ourselves. We have a lot of scripting that we can do, not only at the API level, but also leveraging Epic's freeze and thaw mechanism, which is their quiescing for the database, so that we make sure we're always creating application-consistent backups based on the things Epic requires in order to position the database to do so.
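The freeze-and-thaw flow just described can be sketched as follows. The `epic-freeze` and `epic-thaw` commands here are stand-ins, since the real quiesce scripts are site-specific and provided through your Epic environment, and the `purevol snap` call is shown only as an example of triggering the array snapshot. The key point is the try/finally, which guarantees the database is thawed even if the snapshot step fails.

```python
import subprocess
from typing import Callable

def application_consistent_snapshot(
    volume: str,
    run: Callable = lambda cmd: subprocess.run(cmd, check=True),
) -> list:
    """Freeze the Epic database, snapshot its volume, then thaw.

    `epic-freeze` / `epic-thaw` are hypothetical placeholders for your
    site's quiesce scripts. Returns the list of commands executed so
    the sequence can be audited or tested.
    """
    executed = []

    def step(cmd):
        run(cmd)
        executed.append(cmd)

    step(["epic-freeze"])  # quiesce: flush and pause database writes
    try:
        step(["purevol", "snap", "--suffix", "backup", volume])
    finally:
        step(["epic-thaw"])  # always resume I/O, even on failure
    return executed
```

Keeping the freeze window to just the snapshot call is what makes this practical: the array snapshot itself is near-instant, so the database is only quiesced for a moment.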
27:54
Excellent, thank you Thomas. So we've talked a lot about how the different technologies we have at Pure fit into this ecosystem, from a zero-downtime SLA with an RPO of 0 and an RTO of 0, to where ActiveDR fits into it.
28:08
Now let's address the third part of that 3-2-1 strategy, where we want to get a full copy of the data off of those production systems, on a different medium, and maybe even replicated to a cloud provider, based upon what your business requirement is. First, I want to highlight a little bit about why we built FlashRecover. It was interesting looking at the history of technologies like
28:33
FlashBlade: we had originally designed them toward, you know, the chip design market, EDA, AI processing, the analytics processing use cases, which are characterized by large data sets and lots of small transactions where low latency is important. What's interesting is that our customers very quickly started coming to us and saying
28:55
that they had another use case, another way to apply a storage system that was both a NAS storage system and an object storage system with these properties. The problem they were trying to solve was specifically in their backup and restore: trying to improve the SLAs, the recovery points, and the recovery times that they had. And they started applying this technology to the rapid restore problem.
29:21
And why can this technology be applied to a rapid restore use case? Well, the same properties that served chip design and analytics also, interestingly enough, apply very well when your dataset is large and deduplicated, because deduplication ultimately spreads the data out, with transaction sizes that are very small and scattered across the system.
29:48
And when you build a system that's highly parallelized, you can reconstruct those small random transactions into a data stream with a high rate of throughput. And that was the problem that many of our customers who saw value in FlashBlade with rapid restore had brought to the table. So we used that as a foundation, and we asked ourselves: how can we build on top of this?
30:11
How can we continue to innovate on the properties that we have with FlashBlade and bring a better solution to the market? We did that with FlashRecover. Now, what are the properties you have to have when you build a solution like FlashRecover? What do we need to bring to the market? Well, the first one, and probably the most important, is the speed of the restore.
30:35
We need to bring to environments like Epic the ability to rapidly restore a data set, and this is a rapid restore on top of the instant restores you may get with a snapshot restoration, or the near-instant failover from an ActiveDR type of strategy. This is: hey, we've got data in backup, and we need to bring it back as fast as possible, regardless of what the
30:58
situation is that put us into that place. But since it's another copy of the data, and we've got all these concerns around ransomware recovery and immutability of the data, we need to ensure that it's secure. Security was a key aspect as well. So we need to make sure that that data is protected at multiple levels. It's protected by policies in the backup
31:19
application; it's protected by enforced immutability at the storage layer that even an administrator or an attacker can't circumvent; and it needs to be encrypted regardless of where we place the data, so that we can ensure protection of privacy- and HIPAA-related datasets as well. All of it is done with the KISS principle of simplicity:
31:42
we wanted to make sure that API integrations were present, and that you could manage it with the existing interfaces as much as possible within Pure and within Pure1. All of this is pulled into a system that has properties of scalability as well, and you'll see that what we built with our disaggregated architecture feeds into this very well, because now you can expand your system to meet your SLAs
32:07
at the most efficient cost. If you just need storage, add additional blades into a FlashBlade and have the system automatically detect that they're available, with no additional operational work. Similarly, if you have an SLA change, say, I need to back up this data set in
32:28
eight hours, but now my business has come to me and said I need to back it up in two hours. Well, you should be able to do that without having to rip and replace your infrastructure; instead, just add the necessary component that improves throughput, without necessarily having to increase the capacity within the environment. That's a key benefit of the disaggregated architecture we build here. And finally,
32:51
providing the principal and the property of sustainability and having a system that is able to substantially reduce the footprint from the storage perspective. The footprint from a compute perspective but also the footprint from an infrastructure perspective. And what's interesting about that is we've got examples in some of these environments where you know,
33:12
we would have to back up three petabytes worth of data And previous and and other solutions that would typically be deployed here May need six or 710 gig ports in order to back up that environment. But with with pure and with flash recover based solutions, we've been able to reduce that to 100 to 120 ports in order to so to serve that S. L. A requirement and we may even be able to reduce
33:37
that even farther in the future And the difference of six or 710 gig ports, especially with supply chain issues. Maybe the difference between whether you can solve this problem in your epic environment this year or you may have to solve it next year or even the year after. So we've built pure flash recover with these properties and to solve this specific problem
33:59
and it's composed of three key components that you can tie into your epic workflow. There's pure flash blade and as we've mentioned, Pure flash blade isn't as in an object interface that also works directly with your Epic environment. So you can consume it as just native SMB or NFS storage with the workflows. That's appropriate with an Epic.
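As an aside, the port-count figures quoted a moment ago (six or seven hundred 10-gig ports versus 100 to 120) can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below is illustrative only; the 8-hour window and the sustained-utilization figure are assumptions for the exercise, not Pure sizing guidance.

```python
import math

def ports_needed(dataset_tb, window_hours, port_gbps=10, utilization=0.7):
    """Estimate network ports required to move a dataset within a backup window.

    `utilization` is the assumed sustained fraction of line rate a port
    achieves end to end; real values vary widely with protocol and workload.
    """
    required_gbps = dataset_tb * 1e12 * 8 / (window_hours * 3600) / 1e9
    return math.ceil(required_gbps / (port_gbps * utilization))

# 3 PB in an assumed 8-hour window at ~70% sustained utilization of 10Gb ports:
print(ports_needed(3000, 8))  # -> 120, in line with the 100-120 ports quoted
```

At much lower sustained utilization per port, which legacy backup stacks often see in practice, the same window demands several hundred ports, which is where the six-or-seven-hundred-port figure comes from.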
34:22
But in this case we add some purpose-built compute nodes, optimized with CPU and networking, on which we run the mature Cohesity DataProtect software. If you're familiar with Cohesity, this is the same software you would be procuring directly from them, optimized and entirely integrated with Pure FlashBlade from an API and a UI perspective
34:45
in order to natively provide you all of the performance benefits of Pure, giving you something that ultimately has these properties of speed, security, and simplicity, all provided at scale. So what are some of the restore performance pieces that we can bring to bear in an Epic environment? That rapid restore piece is probably one of the
35:07
most important values we're going to bring. Against hyperconverged or purpose-built backup appliance types of strategies, we will see at least a 4x improvement in the backup and restore rate by bringing FlashRecover and the flash solution on FlashBlade to bear. This allows you to do instant or rapid restore recovery of thousands of virtual machines at a time.
35:32
You can absolutely turn on an instant recovery, put 100 or 200 VMs in it, and have it be a usable restore where you start to get your environment back while we do that Storage vMotion on the back end. This is all about returning your environment to service as fast as possible, and substantially faster than any other option. We've even demonstrated well above a petabyte per day of rapid restore capability with this
36:02
solution. What's interesting is that one of our partners actually characterized this as a third party to prove it out: is Pure accurate with these numbers? They compared us to a PBBA, a typical hyperconverged solution, and then Pure FlashRecover powered by FlashBlade. Here's the order of magnitude of the numbers we can bring to bear in an Epic environment: that PBBA was restoring the backup
36:28
set on the order of about 13 hours. That was the baseline we had to meet. When we moved to a next-generation hyperconverged type of strategy, interestingly enough, that reduced down to just under one hour, so hyperconverged technologies already provide a pretty substantial improvement in restore rate. But when we added FlashRecover and FlashBlade
36:53
to the mix, that one-hour number was further reduced to just over 10 minutes. So you can see the order of magnitude of the problem we're solving here with FlashRecover. As with the rest of the Pure ecosystem, FlashArray, FlashArray//C, and FlashBlade, you're procuring this through Pure, and you're getting one-call support from Pure for all of the infrastructure you have underneath
37:19
Epic. Whether that's FlashArray, FlashBlade, or FlashRecover, you can acquire it through Pure Storage and get one-call support for that stack, owned by Pure Storage. As I mentioned with the disaggregated architecture benefits, you're now able to expand and design your system to exactly what you need. If you need more throughput, you don't
37:42
necessarily need more capacity. If you need more capacity, you may not necessarily need more throughput or more licenses. Apply your investment where it's needed and solve the problem in a very discrete and effective manner. And then lastly, you get all of the benefits of non-disruptive upgrades and the single pane of
38:01
glass within Pure1 and the Helios cloud-based platforms, allowing you to have all the benefits you see across the Pure stack from an availability perspective. Now I want to highlight a new feature that we're expecting to GA fairly quickly here, hopefully in the next couple of months, where we tie FlashRecover SafeMode into FlashBlade.
38:28
This is something new to the market that nobody else is able to provide from a backup and restore perspective. With FlashRecover, it means we are tying the data and the metadata associated with your backup environment together into consistent, SafeMode-immutable snapshots.
38:49
What this means is that if an attacker gets in and damages a substantial amount of the environment, you're able to restore your backup environment to a known, certified-good point in time that contains the operational environment, the metadata, as well as the backup data. And you can do that process in about 25 minutes. Previously, you might have had to procure new server
39:16
hardware, install an operating system, install your backup software control plane, import a catalog, mount a snapshot version of the backup, do a recovery of your catalog to make it consistent with your backup, and then start your restore. That's a process that could potentially take days, and we have now automated it into a process you kick off that takes
39:38
about 25 minutes. At that point you can start your rapid restore and your instant recovery of VMs to start serving your data and get your application back fast. In an Epic environment, that speed to service is going to be absolutely critical. All of this ties together into the FlashRecover security framework, where we tie
40:00
together DataLock, SafeMode integration, the immutability of the file system, and the immutability of the snapshots on FlashBlade with operational integrations like multi-factor authentication, our very granular, mature role-based access control, our ransomware detection, as well as the clean-snapshot detection driven by AI and analytics services that look through your data within the Helios platform to
40:28
deterministically identify for you where we have determined a clean SafeMode snapshot is to recover to. So let me talk a little bit about the performance here, because I keep highlighting that it's important. A big question is, and we're not talking about the instant recoveries of ActiveCluster or ActiveDR snapshots, but a restore from a backup environment: what does that look like with
40:55
FlashRecover? Well, I want to highlight this particular example, a pretty small FlashRecover deployment. With VMware, and many times in Epic we see VMware deployed, you've got two options. The data restore, your traditional restore of the data: this very modest cluster was pushing about seven gigabytes per second, so it pulls the data back in 3.5 hours. But what I want
41:17
to highlight here is that instant recovery. If you've got enough service availability in your ESX farm, you can mount that same data set in about five minutes. It's really a question of how fast your ESX servers can power on those VMs, because we've demonstrated in other videos that we can mount that and have it booting the VMs; that process starts in about 30 seconds regardless of the size of
41:43
your VM or your ESX environment. Once those are booted, we automatically kick off, via API, that data-copy restore on the back end. All of this is about getting the most critical core services for your Epic environment back up, available, and serving data as fast as possible.
42:07
With that, we have published an Epic and FlashRecover white paper. The details I'm going to tell you on the next couple of slides are published in that white paper, specifically for Epic and FlashRecover, so you can see how to achieve this and how to integrate it from a best-practices perspective yourself. We've got the link on there, but it's also available via search on the Pure Storage website.
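To make the restore arithmetic just described concrete, here is a small sketch. The roughly 88 TB dataset size is inferred from the stated numbers (about 7 GB/s sustained for roughly 3.5 hours); it is not a figure given in the webinar itself.

```python
def full_restore_hours(dataset_tb, rate_gb_per_s):
    """Hours to copy a dataset back at a sustained restore throughput."""
    return dataset_tb * 1e12 / (rate_gb_per_s * 1e9) / 3600

# The 3.5-hour traditional restore implies a dataset of roughly 88 TB at 7 GB/s:
print(round(full_restore_hours(88, 7.0), 1))  # -> 3.5

# Instant recovery changes the equation: the backup copy is mounted directly
# (VMs begin booting in ~30 seconds, usable in about 5 minutes), while the
# full data copy happens in the background via Storage vMotion.
INSTANT_RECOVERY_MINUTES = 5
```

The point of the sketch is that time-to-service with instant recovery is dominated by VM boot time, not by dataset size, whereas a traditional restore scales linearly with the amount of data moved.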
42:28
So where is it that we're solving the problem with FlashRecover? As Thomas had shown, you've got two FlashArrays here, likely in an ActiveCluster type of configuration, there on the left serving your production and your failover servers. Then you may have another one in a reporting environment that's getting database-level replication to send the data to it.
42:50
But importantly, you're also sending a copy of that data to the disaster recovery server, and that's where your proxy server mount is going to pull that data up. It's going to have it all written on local disk, and then we back it up from there. That is your backup process. So we're getting a consistent copy of the data that we can then pick up, read, and write
43:10
into FlashRecover, or right into CloudArchive to send to a cloud for long-term archive, whatever your business policy is. The restore piece will often go, if you're going back to primary production, which means snapshots and other technologies weren't viable for whatever reason, through the agents sitting on the production server, so you can bring that data back directly there from a rapid restore perspective, or really anywhere else you
43:35
may need to. So what are those performance numbers we highlighted in that white paper? We demonstrated an 18-terabyte database backup of the Caché database with Epic, and this is actually the result, the screenshot of a successful run. Now, these are hero numbers, so this is the best-case scenario that we were
43:56
able to find and design in the production of our white paper, and it does require that you've got a similarly or better designed environment; realize that is the context we're looking at here. But we were able to push this at a rate in excess of 2.65 GB per second.
44:18
So if you scale this out to, say, a 100-terabyte database, you're going to be able to back that up in a 10- or 11-hour window from a backup perspective. Now, from a rapid restore perspective, here are the results of us bringing that data back in that same cluster.
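The backup-window scaling just mentioned follows directly from the measured rate. A minimal sketch, treating 2.65 GB/s as a sustained rate (which is exactly the hero-number caveat given above):

```python
def backup_window_hours(dataset_tb, rate_gb_per_s):
    """Hours to stream a database backup at a sustained throughput."""
    return dataset_tb * 1e12 / (rate_gb_per_s * 1e9) / 3600

print(round(backup_window_hours(18, 2.65), 1))   # the 18 TB white-paper run: ~1.9 hours
print(round(backup_window_hours(100, 2.65), 1))  # scaled to 100 TB: ~10.5 hours
```

The 100 TB figure lands inside the 10-to-11-hour window quoted in the talk, on the assumption that the rate holds constant as the dataset grows.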
44:39
This was pushing, I think, 1.5 gigabytes per second. We are continuing to work as part of this joint solution to identify where the bottlenecks are and remove them, especially from a rapid restore perspective, and this one is no different. As we continue to innovate, you will, simply through software upgrades, gain the benefit of additional concurrency and parallelism and be able
45:03
to really maximize the throughput capability of your individual restore jobs to individual hosts or nodes as well. But really, from an Epic perspective, that high rate of backup is pretty critical to meeting or exceeding your SLAs. As we wrap up here, I want to highlight some of the places you can go next.
45:26
There are white papers on the Pure Storage site; we've got the FlashRecover one, but we've also got a customer community portal available to you. Thomas, why don't you tell us a little bit about the Epic solutions portal and the community portal you've worked on? Yeah, so we have a couple of new things for customers. The links there are for a new Pure Storage Epic
45:49
portal where customers can register and have a bit more of an integrated, almost live discussion with us. If you have questions, comments, concerns, whatever, it's really for you to ask us any question you may have. Some of the questions being asked in there are great topics.
46:11
The nice thing is that other customers can see those questions as well, and we can all begin to collaborate a bit. I like to think of it as kind of like the UserWeb that Epic has for its customers; it's the same idea, allowing customers to collaborate with each other as well as with Pure directly, so we can all help each other
46:33
build our environments the way we want them. The last thing is, if there's ever a chance you need or want to get hold of us and talk a bit further about some of the technology you've heard about today, simply send us an email at epic@purestorage.com. That comes to myself and my partner, as well as some other folks on the healthcare
46:55
vertical team, and we're always monitoring it, so feel free to reach us there. We're always ready and willing to answer any question you may have. We love talking with customers, and we love talking about how we can help evolve your technology sets with Epic using Pure's features and capabilities.
47:16
So we're here to help you in any way we possibly can. Awesome, thank you. With that, I want to address some of the live Q&A that's come across, and I want to add my personal experience to Valentino's question here about what FlashArray's best feature is.
47:38
It's interesting when that question is asked, because there's one thing that always comes to mind from my perspective, and, I mean, FlashArray has so many great technical features associated with it. But from my perspective, having been in storage and pre-sales for 15 or 20 years, working in this market through many technologies and many companies, when I
47:59
hear that, I always remember back to when I was competing with Pure, when Pure was a nascent company first coming to market, and how we watched what they were changing in the industry. If I were personally to define what I think is the best feature Pure FlashArray brings to market, I would invariably converge on Evergreen. Evergreen
48:22
is both a technology and a business solution. What I've seen customers be able to do with Evergreen, especially when you're talking about the availability requirements of applications, is include it in the support contract, replace controllers, replace even data modules, the data packs under the covers, while maintaining service
48:50
availability, maintaining the uptime of your application, and making those generational improvements without having to take an outage. This is probably the most interesting and compelling technological advance I've seen in the market with FlashArray. No more having to do these forklift upgrades or data migrations; simply having that
49:14
built into the controller upgrade process was so compelling. It's funny, because 10 years ago product managers I worked with would say that's never going to work, they'll never survive with that type of strategy. And I think they were biased, because nobody else was providing it; it was a real competitive advantage.
49:34
But now we can see customer examples where they started with FA-400s and have done seven controller upgrades and replacements, replacing the NAND flash, with zero downtime across those seven or eight years. That's hugely valuable. There's nobody else really doing anything like that from an Evergreen perspective. And just personally,
49:58
I think that's probably the most valuable and compelling feature we've built. I agree. It's kind of funny, because when you talk about that technology, the non-disruptive upgrades, everyone has an opinion about it. Everyone says my stuff is non-disruptive too,
50:18
all those kinds of things. And the fine print is always where the reality is, right? What I like to see is when a customer actually goes through that process for the first time; it's almost like an awakening, if you will.
50:36
Oh my God, what you said actually happened; it worked the way it was designed to work, the way it was articulated. That's, in my opinion, one of the best things about Pure: when we tell you these things work this way, it's not a sales pitch, it's because they really do. My other technology piece, from when I joined here, and I joined from
51:03
another three-letter storage manufacturer, is Pure1. Pure1, in my opinion, doesn't get talked about enough, but it is an incredible tool to have in any storage admin's toolkit because it does so much. I'll give you an example, an actual use case from today. We have a very large customer, an Epic
51:29
customer, and they're reaching some pain points with their Clarity environment, which a lot of customers go through. Clarity grows incredibly large very, very quickly, especially with COVID; censuses went to triple or quadruple what they normally are, and all of that data had to go somewhere. Luckily, with Pure's
51:50
data reduction technology, a lot of our customers did not have to spend a ton of money on additional capacity, which is a cool benefit as well, especially in the days of COVID. But in this particular case, Clarity is a different workload than Caché; it's a SQL Server or Oracle, traditional relational-database type of
52:12
workload, and there are some dynamic differences between those two. One of the things we're going to help the customer do is figure out what the next version of that Clarity environment will look like from a storage-array perspective, and how they can actually know they're picking the right thing. One of the things Pure1 does is provide both workload and capacity planning
52:40
tools. These tools are traditionally very expensive to buy, and also very expensive and time-consuming to set up and maintain over the years, because, like everything, your environment is always changing in small amounts here and there, and you have to manage all of that. Pure1 takes a lot of that off the storage admin's shoulders.
53:07
So with this particular use case, we're able to look at the array currently running that workload, identify the volumes of that workload that are actually problematic, the volumes we really want to target for doing something different with, and then take those volumes and model them against an array they don't own yet: an X90, or our new XL
53:33
platform, the XL130 and XL170. A customer can take those specific volumes, with their specific I/O characteristics, and actually model them against an array they don't own, using our AI tools and all of the other technology we use to do that work, and be able to say: if I buy this array, if I upgrade to this model, if I do a controller swap to this model,
54:02
what will be my perceived headroom gain? Maybe it's 10%, maybe it's 25%, or if I go a bit further, do I get to 50%? How do you really know that? In traditional storage processes, where you're having that conversation with other storage manufacturers, it's sometimes difficult to really zero in on that number.
54:26
There's a lot of math, a lot of toolsets that are used, and different SEs have different approaches to those problems, so there's always a bit of uniqueness. What Pure really wanted to do is figure out how to use the science of our arrays, to categorize and take all of the
54:47
performance of every array sold by Pure and use that as a sampling mechanism to help customers make better decisions about how to do storage in the future. We find that valuable with Epic customers because their dynamics are always changing. When you add modules, modules can add a lot of additional capacity
55:09
versus what they have; in some cases it can be really dramatic. And there's a ripple effect, because with those new modules you also have to have development copies, builder copies, all these other non-production but incredibly important copies just for
55:27
building out the program across the businesses you have. By virtue of having all of that telemetry data right at the admin's fingertips, doing some of those forecasting and planning tasks is incredibly powerful, because now you can say: yes, I know that if I buy this array, based on what the modeling showed me, I should see this amount of headroom.
55:55
So it allows you to make better IT decisions as a whole, not just for Epic but for every application you serve that's running on a Pure Storage platform. In my opinion, it's a technology we should talk more about, and I really invite customers who maybe have never seen Pure before: take a look at what that product can do
56:19
for you, and then take a look at how it's all built out, because I think you'll find that with Pure it's a much easier storage platform. In fact, we were at SRT, Epic's Systems Roundtable conference, on site this year for the first time in a couple of years, which is awesome, and it's great to sit there at the table with all the other manufacturers and have people come up and
56:43
say, hey, just stopping by to let you know that our stuff runs awesome and we appreciate what you guys do. That's a common day at that conference, which is awesome. The fact that all of these things we're talking about factor into that, into what we view as being successful with storage the way Pure does it, is incredibly powerful. So I would invite you to ask us
57:08
the tough questions, the things you really struggle with, and let's figure out how we can solve them together as collaborators across healthcare, Epic, and Pure Storage. Awesome, thank you, Thomas. We're going to turn this over to Emily in about 30 seconds to do the raffle and wrap-up components.
57:29
But I did want to quickly answer some of the questions that came in around Cohesity and FlashRecover. Yes, this is an on-premises cluster. The data movement we highlighted there is resident on the local LAN; that is how we're able to get the performance, the throughput, and the integration with FlashBlade.
57:47
But you are still able to use technologies like CloudArchive to put data in the cloud, or, if you've got a cluster at a cloud provider, the ability to replicate to that cluster in Azure or AWS is fully available to you as well. So, Celeste, I think that should encompass your questions; again, reach out to us directly if you want to deep-dive on that a little more. And then, J, you asked about the capability to
58:16
protect volumes directly off the FlashArray. Absolutely, the ability to connect to FlashArray and back up those LUNs directly is supported in the platform as well. I think that's the bulk of the questions we had there. Emily, let me turn it over to you to wrap it up.

Chad Monteith

Principal Field Solutions Architect | Pure Storage

Thomas Whalen

Senior Epic Solutions Principal | Pure Storage

Epic is the most widely used electronic medical record (EMR) in the world.  With more than 250 million patient records digitally stored in Epic, maintaining security and service availability to these records is critical.

During this webinar, we will cover the following:

  • Modern Data Protection Continuum and how it applies to Epic
  • FlashRecover - what it adds to the equation and how it integrates
  • 3-2-1 and how customers use it for maximum availability and recoverability


We look forward to having you join us!

