47:40 Video

Accelerate Session - EPIC Healthcare Maximise Service Availability

00:00
Thank you for joining us here today, where we're going to talk to you about Pure Storage's solution and strategy for the Epic healthcare product: how we maximize the availability of your services and the recoverability of the Epic platform with both Pure FlashArray and FlashRecover.
00:27
I'm Chad Monty, Principal Architect with our Field Solutions and Strategy team, and I'm joined today by Thomas Whalen, a Senior Principal Solutions Architect with our Health and Life Sciences team, with a focus on our partnership and our solutions around Epic. You're welcome to contact us at the information provided here. We're going to focus today on how the combination of Epic, Pure Storage, and our backup
00:50
software, FlashRecover, join together to achieve high levels of availability and rapid recoverability in an Epic solution. So when we talk about Epic and EHR technology as a whole, there are a number of key areas that we like to focus on: financial impact, operational impact, and technical impact. On the financial side, many customers that come to Pure say
01:15
they are looking for a different buying experience and a different ownership experience. With programs like Evergreen, and other programs that we have for healthcare customers specifically as well as customers in general, Pure is offering more buying options and more flexibility in ownership today than we see from other manufacturers in the same storage space, which we think is really key to a lot
01:39
of our success. From an operational impact perspective, one of the core tenets of developing technology at Pure is simplicity: the right size, and the ability to present an array to the customer that does what they need, with some intelligence behind it, that doesn't impede or add more work to their plate when they're already really,
02:02
really busy. So the operational impact is really important to us: keep things simple yet incredibly effective, with intelligence behind it, so that the array can take on more of the job of helping to make decisions around storage, versus having a bunch of engineers who have to be storage engineers. We think there are uses for those folks that are more meaningful than simply
02:24
carving and creating LUNs all day long. The technical impact is really about how this solution, once in place, affects the rest of your infrastructure beyond just Epic. With any solution that we deploy today for Epic, we really want to have a great sense of data reduction, so that people are getting more
02:44
efficiency out of the capacity they're buying. And with any healthcare application, availability and uptime is a huge part of that discussion, as we learned during COVID, with the increased amount of capacity we're dealing with and so many patients in our clinical environments.
03:08
Getting outage windows is just a thing of the past. So having the ability to take a storage product like a Pure array and do the normal upgrades and updates and things like that without having to actually take an application offline is something that a lot of other storage manufacturers talk about.
03:27
But they have a hard time executing on it. Pure has figured out how to do this in such a way that we really do offer those capabilities. And lastly, there's having some services folks to leverage when your team is busy doing a lot of things. One thing we've really been doing in the Epic space specifically is really ramping up
03:49
professional services in areas like scripting, data migration, and customized engineering, to help customers that really need some additional assistance: direct access to someone they can speak to, to make sure that any issues they have are dealt with very quickly. And we think that these three areas
04:11
are really what set Pure apart from other storage manufacturers in the same space. One of the things that customers, especially existing Epic customers, say is really important to them is Honor Roll. Honor Roll is a program that Epic uses to help customers manage costs around their application by leveraging a set of best
04:32
practices that Epic publishes for customers. It's a great way for customers to gain some additional discounts while at the same time assuring that they're complying with all of the various Epic rules and requirements related to their software. And with Pure, we are very fortunate to be at the top end of the scale for a lot of those measures when it comes to storage.
04:58
So our FlashArray platforms achieve the highest comfort level for the operational database workload, which is your InterSystems IRIS database area. We also achieved the highest rating for the comfort of your analytical workloads: your Cogito, your Clarity, your Caboodle, those kinds of things. And then lastly, from a prevalence perspective, we are,
05:24
you know, happy and proud to have Very Common status. Essentially what that means is that, all things being equal, customers are gravitating toward our solutions over other storage manufacturers'. When it comes to a platform-specific product like FlashRecover, we're still working our way up; it's a newer product, so like all technologies,
05:43
you start lower on the scale. But having a Medium comfort level this early in the product's life is a huge statement about the addressability of what that product specifically covers, which is data protection, something all customers struggle with from time to time. And right now we're still growing that, and as we just talked about, we are getting more and
06:06
more customers to buy into that space, and as that happens, that prevalence will obviously continue to rise. And then FlashBlade. FlashBlade is our NAS-based product. It also became part of the SPATS guide, which is the guide that has all this information in it. It also came in at a Medium comfort rating right
06:25
away, which we were very thrilled to have, as well as the prevalence being Uncommon. Again, as we have developed out our SMB services and things like that, more and more customers are now coming back and looking at FlashBlade as a way of not only satisfying file storage for other clinical systems like PACS and maybe genomics; they're also using it now for some of their WebBLOB
06:50
work, which is also part of the Epic application. So, downtime is not an option in healthcare, and that is really true; like we talked about with COVID and the huge influx of patients, it's impossible to get outage windows for any level of technology. Well, at Pure, we decided on day one,
07:12
when we built these products, that they really needed the ability to continue operating workloads and still be able to be upgraded. We felt there was a technical challenge there that we were willing to solve, and with Pure, we're happy to say that we've done that, and we have many customers that get the advantages of it. But sometimes it's hard
07:37
to understand until you break down what that timeframe really looks like. You can see, from left to right, that you don't really get a lot of downtime, even though a lot of customers and other manufacturers will talk about the nines of availability. Sometimes the asterisks behind those numbers tend to be a bit murky
07:58
between different storage manufacturers. In our case, we provide upgrades and updates and things like that with zero loss of workload time, so if there's any stoppage of workload, we view that as a downtime situation. So all of the work that we do to try to
08:20
minimize that is always done with the full availability and accessibility of that storage for all the workloads you're running. So if you're going to do a controller upgrade, your applications are up and running, fully servicing your community of applications. If you're going to apply a patch or install a new version of software, all your applications are online, serving workload,
08:43
doing their normal jobs while these upgrades are happening at the same time. That, to us, is what non-disruptive upgrades really means. It's not quiescing your environment, taking some things offline, and then doing your upgrades; we don't require customers to do that. And so we think that with Pure, we've really achieved something that's been
09:05
really hard to achieve in the storage space, which is giving a true NDU kind of experience to our customers. This is kind of a fun slide. This is a slide that came out of talking with a lot of our customers and our field. One of the things that I really enjoy is
09:23
being able to ask customers what they feel about our products. We are lucky enough that customers will come to us and tell us how impactful they really are, and this little word cloud here really describes that; it's all these things. Some of the comments were really insightful as well as a bit eye-opening.
09:45
We were just at Epic SRT a few weeks ago, and we had a number of folks come up and say, hey, just stopping by to tell you that we love your stuff; it really helps us and makes our jobs much easier. To us, that's the best compliment we can receive. And so we think that we've done some interesting things that really change the way storage is looked at, not only
10:08
in customers' infrastructures, but as a technology that shares the spotlight with other technologies. We're doing some things that are rather different, customers really are embracing them, and we're really proud of that. Thank you, Thomas. Now I want to talk a little bit about where Epic fits with a modern data protection strategy across the Pure Storage portfolio.
10:32
What's important to understand is that we really have this continuum of data protection and availability that starts at an aggressive SLA, where our recovery point objective (RPO) and our recovery time objective (RTO) are both zero. We'll see that at the crux of those line graphs, right there at the crux of that cross,
10:51
where we've got ActiveCluster as a key technology. If you've got an Epic database that you need to spread across a couple of metropolitan-area sites, where you need to truly achieve zero/zero in the event of multiple failures (it could be within a data center, within a rack, or even within a metropolitan area), then zero RPO and zero RTO with ActiveCluster is the technology that you want to
11:17
choose for that. Now, as we grow out from there, we increase that recovery point and that recovery time with our solutions. So the next step is a technology like ActiveDR, a continuous data replication technology, which will take a third array that you can put at a remote site, in another city or in another state, and it will only be behind by a
11:42
matter of seconds. Then you can fail over to that array, again only behind by a number of seconds, and bring your application environment back up. And then you also have snapshot technology and snapshot replication technology: if you have recovery points that are measured in minutes or even hours, you can utilize that technology to bring your environment back.
12:08
So those are some key pieces that we provide for your core database application and your core application environment within Epic. As you move out to further recovery point objectives, technologies like FlashRecover, our data protection product, come into play. FlashRecover generally meets an RPO that is measured in hours.
12:28
You may take a database backup every four hours, maybe every eight hours, but you can actually tighten that even further, down to a recovery point maybe in the tens of minutes, maybe 10 minutes or 30 minutes, with the ability to do transaction log backups into that system, especially if you're running SQL Server, Oracle, or a similar type of supported database.
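The continuum just described can be sketched as a simple tier selector. This is an illustrative sketch, not a Pure Storage tool; the threshold values are assumptions chosen only to show the ordering of the tiers.

```python
# Illustrative sketch of the RPO/RTO continuum described above.
# Thresholds are assumptions for illustration, not Pure-published cutoffs.

def pick_recovery_tier(rpo_seconds: float, rto_seconds: float) -> str:
    """Return the tier from the talk that meets the requested objectives."""
    if rpo_seconds == 0 and rto_seconds == 0:
        # Synchronous, metro-area stretched storage
        return "ActiveCluster (zero RPO, zero RTO)"
    if rpo_seconds <= 60:
        # Continuous replication: behind by a matter of seconds
        return "ActiveDR (continuous replication)"
    if rpo_seconds <= 4 * 3600:
        # Recovery points measured in minutes to a few hours
        return "Snapshots and snapshot replication"
    # Backup RPO measured in hours; tens of minutes with transaction log backups
    return "FlashRecover backup"

for rpo, rto in [(0, 0), (30, 300), (1800, 3600), (8 * 3600, 8 * 3600)]:
    print(rpo, rto, "->", pick_recovery_tier(rpo, rto))
```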
12:51
So you've got the ability with Pure Storage to solve for zero RPO and zero RTO, all the way out to data that requires a less aggressive RPO/RTO but that you need to maintain cost-effectively for longer periods of time; that can be FlashRecover sending data to the cloud with Cloud Archive for compliance purposes. So how does this apply to Epic? Well, if we net it down to the key technologies
13:13
that apply to Epic: you're going to have ActiveCluster, you're going to have snapshot replication, you're going to have a technology like FlashRecover, and you'll have SafeMode support integrated with all of those, so that you can make sure you've got an enforced, immutable copy of your block data and even your backup data associated with that. So at Pure, when we talk about Epic
13:34
infrastructures, one of the most important and critical pieces of building out an Epic infrastructure is understanding what approach we want to take toward architecting those environments. The way that's traditionally done is that Epic will provide the customer a document called the hardware configuration guide.
13:55
And that guide is based on clinical metrics that the customer supplies to Epic. Epic then turns that into essentially a sizing document that gives technology providers like Pure and others the road map to create an infrastructure for the application, based on all of Epic's various best practices as well as other application-specific needs that might be prevalent for one customer. But probably the most important thing
14:25
is that it is customizable to each customer, so no two Epic implementations are ever essentially the same. And with the amount of time we've spent building this out, we've thought a lot about how to create infrastructures that are built around some solid IT methodologies but that also take into
14:48
consideration some of the dynamism and workload heaviness of the application, to determine how to best give customers not only a storage environment that services the need, but also a healthy storage environment: one that takes into consideration areas where the application can be really heavy-handed, architected so that the work that has to be done doesn't necessarily hurt
15:18
other things that are part of the Epic infrastructure. So we took that into consideration, and we basically have two different options; we call them, very simply, 2+1 and 3+1. The 2+1 and 3+1 are really based on the size of the customer, and we have a lot of metrics that we look at to determine that. But essentially what it does is it allows us to
15:40
take a smaller location, potentially, and divide the application into two specific pieces. One is the Epic production and sub-instances, which is where all of the clinical work is really happening. We define a specific array for that work, independent of everything else. The reason for that is simple: we want that environment to be as protected as possible from any other
16:07
outside influences that are also part of the Epic infrastructure, which can sometimes be temperamental and cause other issues. The fact that we can create a fault domain for just that instance gives us some peace of mind: the customer that uses it will have a much lower chance of any other kind of influence impacting the clinical performance of the production environment, because that is where your patient care is being driven from.
16:33
The other option is 3+1. The 3+1 is essentially that same idea but extended out a bit further. These are for larger customers; we're talking in the 5,300,000 IOPS range, where they really need to have some isolation of elements of the IRIS database as well as elements of their reporting environment. So for a 3+1 solution, we still have our dedicated
17:01
production array that's protected just like we talked about, but we also then separate some of the more generalized workloads from the SQL Server and Oracle workloads for that reporting piece. And because we can isolate those two, we can help to mitigate problems with very large and resource-intensive ETLs that have to happen from the reporting instance of the IRIS
17:27
database over to SQL Server or Oracle. By separating those two, we can assure that the controller utilization on those two pieces is giving the maximum amount of performance, because they have some isolation and separation there. For each of those solutions, the "plus one" that's part of that name means we also provide a DR array that is configured identically to what
17:51
the specifications in the hardware configuration guide stipulate. In a lot of cases, DR tends to be a bit more of a discussion that we have with customers around what DR really means to them, because customers have their own opinions and their own ideas about what they want it to be. But having that DR array built out in such a way allows us to have that
18:13
conversation with the customer, but also to understand that these are the things that Epic has asked customers to protect, with extra protection within storage, so that at the very bare minimum, the elements that they believe should be protected are in fact protected. Part of how we also do this is with snapshots. Obviously, as we talked about,
18:35
with data protection and utility data-copying types of use, all of our arrays come with a high degree of snapshot capability as well as the capacity to service snapshots. Every customer does snapshots a bit differently, but regardless of how many snapshots you want to maintain, we provide you the storage to get that done effectively.
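The 2+1 and 3+1 placements described above can be sketched as simple lookup tables. The array labels and instance groupings below are illustrative assumptions drawn from this discussion, not an Epic or Pure sizing artifact.

```python
# Rough sketch of instance-to-array placement for the 2+1 and 3+1 layouts.
# Labels and groupings are illustrative assumptions, not a sizing document.

LAYOUT_2_PLUS_1 = {
    "Prod 1":   ["Epic production", "sub-instances"],   # isolated clinical fault domain
    "Prod 2/3": ["reporting", "non-production", "training", "Clarity/Caboodle", "WebBLOB"],
    "DR":       ["DR copies per hardware configuration guide"],
}

LAYOUT_3_PLUS_1 = {
    "Prod 1": ["Epic production", "sub-instances"],     # isolated clinical fault domain
    "Prod 2": ["non-production IRIS", "WebBLOB", "Hyperspace capacity"],
    "Prod 3": ["Clarity", "Cogito", "Caboodle", "reporting ETL targets"],
    "DR":     ["DR copies per hardware configuration guide"],
}

def array_for(instance: str, layout: dict) -> str:
    """Look up which array an instance lands on in a given layout."""
    for array, instances in layout.items():
        if instance in instances:
            return array
    raise KeyError(instance)

print(array_for("Clarity", LAYOUT_3_PLUS_1))  # prints Prod 3
```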
19:02
Yes, let's take a quick look; we'll take these two infrastructures and break them down a little bit. This diagram is a really simplistic version of a general Epic infrastructure, what you would see for what we refer to as a 2+1 customer. Towards the top, going down,
19:20
you'll have your AIX or your ODB servers, which are running the actual Caché database itself, the IRIS database itself. You then have a good complement of VMware, and potentially some Citrix, for the middle layer, the services layer as it's referred to, as well as the presentation layer,
19:40
which is where Hyperspace, the client, is deployed for customers. Below that is where the storage, our Pure Storage arrays, come into play. So in this case you'll see a Prod 1 and then a Prod 2/3. That Prod 1 is that singular production and sub environment that's kind of standalone. Again, we do that to keep it
20:01
isolated from other things that are happening. And then the Prod 2/3 array is really more of a general-purpose array for smaller customers that fit into that. We put the reporting and shadow instances on the IRIS side, and we have non-production instances, like training and other builds, that are more geared toward specific module implementations that may come over the
20:26
lifecycle of your Epic application. But we also combine some of the Clarity and Caboodle type things, WebBLOB, all of these bundled within that Prod 2/3 array. So it's a combined, collapsed system for customers that may not necessarily need a lot of different arrays or the breakup of those arrays, but can actually
20:49
benefit: you can gain the same performance and the other kinds of benefits that we have with that. You'll see below there are seven days of immutable snapshots, local-to-source copies. So we really embrace the idea of using snapshots, and you'll notice on the Prod 1 array there on the left that little shield with an S next to it; that indicates that we're leveraging Safe
21:11
Mode. SafeMode is a function that we have that allows us to take snapshots and any other kind of storage element and preserve them in the case of a malicious attack or a ransomware kind of situation. It allows us to keep that data protected in such a way that Pure, as well as the customer, with Epic's help too, can remediate situations where there might be a malicious source,
21:35
a ransomware attack, and we want to be sure that we're going to protect against that. SafeMode helps us do that by introducing a policy engine around how data is retained in the system. Each snapshot can then be used as needed, and we recommend about a seven-day rotation; that recommendation is really built around the rolling of the transactional logs within the actual Caché or IRIS database system itself.
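As a rough illustration of that seven-day rotation, the retention logic can be sketched like this. This is illustrative scheduling logic only; SafeMode itself enforces retention on the array.

```python
# Minimal sketch of a seven-day immutable snapshot rotation, as described
# above. Illustrative only; SafeMode itself enforces retention on the array.

from datetime import datetime, timedelta

RETENTION = timedelta(days=7)

def expired(snapshot_time: datetime, now: datetime) -> bool:
    """A snapshot becomes eligible for expiry only after the retention window."""
    return now - snapshot_time > RETENTION

def prune(snapshots: list, now: datetime) -> list:
    """Return the snapshots still retained under the seven-day policy."""
    return [s for s in snapshots if not expired(s, now)]

now = datetime(2022, 1, 10)
daily = [now - timedelta(days=d) for d in range(10)]  # ten daily snapshots
kept = prune(daily, now)
print(len(kept))  # prints 8: days 0 through 7 are still inside the window
```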
22:01
We felt that was an ample amount of time to keep snapshots, but a customer can really define their own rules around that; it's just what we recommend. On the DR side of Epic, we provide the DR array that we just spoke about, which has the minimum configurations that Epic would like you to protect.
22:19
We talked about production and Hyperspace, as well as the WebBLOB storage. But really, it's more about having a discussion with the customer about what their data center needs are for backup and recovery. So while we provide this particular array, usually there are other things that are added to it, or it's part of a much larger discussion around data protection as a whole.
22:39
But in any case, we give the customers the same capabilities as on the production side, where you can do snapshots, and you'll have ample snapshot capacity. So again, seven days of immutable snapshots, which we think is more than adequate for a lot of companies. But again, it's the customer's needs that we are most interested in, so we can adapt to any kind of customer situation
23:02
that you may come up with. This solution here is an expanded view; this is the 3+1 solution for our medium to larger-sized customers. It's the same idea, generally; it's just some additional separation, as we talked about earlier, where Prod 1 is still that single clinical
23:19
array that is doing all of the clinical work. The Prod 2 is now kind of isolated, doing more of the non-production IRIS database management, as well as some WebBLOB and some of the Hyperspace capacity; it's more general-purpose, generalized workloads. But then you also have the Prod 3 array, which is dedicated to the reporting aspect of the Epic applications.
23:44
So this is going to be your Clarity, your Cogito, your Caboodle; WebBLOB we do service in there as well. And the reason is that many times we will see, with customers that are much larger in size, that the reporting extracts, the ETL over to the Clarity environment, can be really heavy-handed. So the ability to separate those two allows us
24:05
to make sure that our controllers allow for all the performance necessary to do the ETLs, as well as all the performance necessary in the Prod 3 array to bring all of that data into your SQL Server or Oracle environment. One of the things that I often get asked is, why not just have one array
24:25
that kind of does everything? We could absolutely do that, and we've made the choice not to for a number of reasons, but it's really more so to protect the workloads from situations where one can maybe step on the other. And what we found is that customers that embraced kind of an all-in-one solution sometimes can become impacted by things that may not necessarily be coming from Epic, that
24:50
may be coming from another application, but the challenge is that it somehow ripples through other areas of the storage solution. So while we can certainly sell a single array that does everything, we chose to make these various tactical separations of Epic work in order to make sure that all of the work that has to get done has the highest degree of availability as
25:14
well as the highest degree of performance driven toward it. And that's the reason for this design and the philosophy that we have. But again, same idea: seven days of immutable snapshots on the DR site as well. Same conversation; it's just a bit of a larger environment for those customers that have a
25:31
little bit more Epic workload to do. It's really essentially the same infrastructure, just with an additional array added to it. And this is an interesting design. This is a design for those organizations that really want the highest level of availability and really try to achieve an active-active kind of experience for their Epic environment.
25:53
So what this shows is essentially our three production arrays, and then on the DR side you'll see another set of three production arrays. This is actually a more common solution than most people think. One of the things that customers like about Pure
26:06
is that while maybe in the past, with other storage providers, it was a really expensive uplift to try to achieve that active-active kind of idea within their data center (the arrays are expensive, the software licensing is expensive, and the whole setup can require having dedicated people just doing that alone).
26:29
At Pure, we figured out how to do that in such a way that it's inexpensive to the customer; the arrays themselves really don't operate any differently. We provide the mechanism to move that data between those two locations and present those full-fidelity copies between the two data centers, really achieving that active-active experience.
26:50
We have a number of customers that have been very successful using this, and we see more and more customers, as they come to Pure, wanting to have more discussions around creating this capability in their environment, whereas previously, before Pure, they may not have been able to do that. So we feel that's a great benefit of our solution, as well as having the
27:10
flexibility in design to be able to accommodate Epic customers that are looking for that kind of experience. And this slide here is just giving you further examples of how creative we can be with the various solution build-outs. Like I said, with every Epic customer,
27:29
they're a bit unique, and because of that, sometimes there's some uniqueness to how we architect Epic. One of the values of Pure is that we're flexible enough that we can really do that and really tailor an Epic environment to the customer's needs, exactly the way they want to build it out. We can do this not only with the arrays themselves, but with how data is placed on each
27:50
array. Part of our sizing process allows a dialogue between us and the customer, in order to know exactly what instances are placed on what array. We can put a lot of detail into that and give the customer the environment that they absolutely would like to get to. That last diagram there is really kind
28:13
of what we do with other collaborations; in this case it's with Cohesity, for the FlashRecover product. It kind of helps us understand how that fits into the general Epic ecosystem. So regardless of how big or small you are, we can provide you with a solution that is completely customized to your needs, so you get the most out of your Pure Storage as well as your Epic implementation.
28:37
That's great. It looks like we've got answers across the board for any business requirement here. So when we move into talking about data protection in more detail, there's something that we really want to make sure that customers understand as part of our approach to data protection, and first is this idea of 3-2-1.
28:59
This is something that Epic has embraced as a standard in their supportability of customer environments. It's a pretty simple concept: essentially three copies of data; two copies on two different kinds of platforms, or protocols, to hold that information; and then a single copy that is kind of a compliance copy, further out of the blast radius, if you will,
29:23
that can be there if absolutely needed. So the idea with this is to really just offer some additional degree of protection, both from the data's point of view but also from a physical device point of view, which is why this embraces both block storage and NAS-based storage, in order to provide those two disparate kinds of storage mechanisms.
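The 3-2-1 idea above reduces to a simple check: at least three copies, at least two media types, and at least one copy out of the blast radius. A minimal sketch, with field names that are illustrative assumptions:

```python
# Sketch of a 3-2-1 compliance check for a backup plan. Field names are
# illustrative assumptions, not part of any Epic or Pure specification.

from dataclasses import dataclass

@dataclass
class Copy:
    name: str
    media: str      # e.g. "block", "nas", "object/cloud"
    offsite: bool   # outside the local blast radius?

def is_3_2_1_compliant(copies: list) -> bool:
    """Three copies, two distinct media types, one copy offsite."""
    return (
        len(copies) >= 3
        and len({c.media for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

plan = [
    Copy("production volume", media="block", offsite=False),    # FlashArray
    Copy("backup copy", media="nas", offsite=False),            # FlashBlade
    Copy("cloud archive", media="object/cloud", offsite=True),  # compliance copy
]
print(is_3_2_1_compliant(plan))  # prints True
```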
29:47
And with Pure, we can achieve this through a combination of some of the technology you're going to learn about here in a second: FlashArray and FlashBlade working together to provide a robust, 3-2-1-compliant backup and data recovery protection solution that will fit into, and be part of, Epic's Honor Roll best practices.
30:12
So thank you, Thomas. It's great to see all of the different ways that we can provide availability and recoverability, as well as performance, for Epic infrastructures. I want to talk a little bit next about FlashRecover, which is a modern Epic data protection strategy focused on the backup and rapid restore component.
30:33
It's important to know why we built FlashRecover originally, and why that's going to be important to an Epic type of deployment. When we built FlashBlade, the core of FlashRecover, many years ago, we really thought it would initially be big in the space of electronic design automation, analytics, AI, chip processing, things like that,
30:55
chip design type of workloads. What our customers started using it for were not only those use cases; they also started using it in workflows where recovery, or rapid restore, was important, because we found that a system designed well for electronic design automation or for analytics was also very good at doing rapid restore of deduplicated or optimized data sets.
31:22
That meant for some of these initial customers that if rapid restore or recovery was part of their business processes, or if they ever needed to bring some of that data back fast, they could uniquely do so with FlashBlade, faster than any other product on the market. And this was reinforced a few years ago by an ESG
31:42
report, where half of the organizations they surveyed identified improving SLAs, specifically defining or reducing recovery point and recovery time objectives in their backup and restore environment, as a top challenge they were facing. So we set out to augment FlashBlade with some key properties that became the FlashRecover solution.
32:08
We needed a product, whether for Epic or for other workloads, with rapid restore as its core property. If you're going to need to bring the data back from your backup system, you really need to bring it back fast, especially in the event of a ransomware attack. If your data is corrupt and the attack has corrupted multiple copies in your production environment
32:28
of the data, you're beholden to how fast you can bring that data back. Maybe you're lucky and it's from a snapshot, maybe it's a failover to a DR site, but if it's not one of those, a rapid restore is absolutely critical to service restoration and availability, and in some cases an instant recovery may be possible as well. So it needs the properties
32:50
of performance, simplicity, and scalability inherent to all of this. So we built a solution called FlashRecover, which is composed of three key components. At the center is FlashBlade, the Pure scale-out unified fast file and object platform that is heavily optimized for, and serves well in, that rapid restore use case. We tie in a very
33:18
mature data protection product from Cohesity, their DataProtect platform, which is really the backup interface to all of the applications in your environment, whether it's Epic, VMware, or other applications that require fast backup and rapid restore. And we run that DataProtect software on top of customized and optimized compute nodes that are really built
33:43
for the CPU and network throughput that we're asking of this next-generation disaggregated architecture. Those three components allowed us to build FlashRecover and really allow it to excel at high-size, high-performance, high-concurrency applications like an Epic backup or restore. And so we've built a system with speed, security, and simplicity at scale at its core.
34:09
Pure FlashRecover has four key properties that are important for Epic. The first is exceeding the highly demanding SLAs applied to it versus typical hyperconverged or even purpose-built backup appliance types of products. We will often see at least a 4x improvement in the backup rate and the restore rate over a typical hyperconverged platform.
34:35
It's actually substantially larger than that when you look at tests against a traditional purpose-built backup appliance, where that multiple may be well into the double digits. That allows you to restore, from an instant recovery perspective, thousands of virtual machines within a day, or for the largest environments, you can actually restore up to a petabyte or more of data a day with a modest system.
34:59
And in fact, if you have a large enough environment, you could easily exceed that rate from a restore perspective. That means the amount of data you can bring back for your Epic environment can be very large in a very short amount of time. We have tied this together with the Pure ecosystem and the Pure experience.
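As a rough sanity check on the petabyte-a-day figure, the sustained aggregate throughput it implies is straightforward arithmetic:

```python
# Back-of-the-envelope: sustained rate needed to restore 1 PB in 24 hours.
pb_in_gb = 1_000_000        # gigabytes in a petabyte (decimal units)
seconds_per_day = 24 * 3600
rate_gb_s = pb_in_gb / seconds_per_day
print(f"{rate_gb_s:.1f} GB/s sustained")  # ~11.6 GB/s aggregate across the cluster
```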
35:18
We've actually spent quite a few years jointly innovating on this with Cohesity and Pure engineering to ensure that you've got a platform that, out of the box, is integrated, self-healing, and designed and optimized to work together, and we continue that innovation over time with further optimizations from a performance and capability perspective, with technologies like SafeMode.
35:45
All of this is provided with a single support call from Pure Storage: the same people you call for your FlashArray or your ActiveCluster deployment can now support your FlashRecover implementation as well, as these are all Pure-delivered products. The enabling technology for this is the disaggregated scale-out architecture, and you can see visually here what those components are.
36:09
You see the FlashBlade there at the bottom, and you see the PXG-1 compute nodes in the middle that are running the DataProtect software. This solution as a whole represents the Pure FlashRecover technology. What's great about the disaggregated architecture is that you can optimize each component, from a performance, scale, and cost perspective, to do what it does best.
36:35
You have the FlashBlade at the bottom that can serve many additional workloads besides FlashRecover. Maybe you want to add analytics, or you have direct RMAN or SQL dump types of operations running there; you can absolutely do that. But what it also gives you is a cost-effective way to respond if the business comes to you with a change in
36:54
SLA. If you've built this system for your Epic environment with two weeks of retention, and your executives now come to you and say, I need 60 days of retention on FlashRecover for rapid restore, you don't have to repurchase a massive amount of infrastructure. You can simply hot-add additional blades to the FlashBlade to provide the additional
37:16
capacity necessary for that 60-day retention. FlashRecover will automatically detect that; you update the policies and the system continues on, with a minimal amount of work from an operational perspective. In fact, most of that is done as part of the upgrade and installation process by your Pure field engineers, and FlashRecover auto-detects it. From a node perspective,
37:42
the PXG-1s: let's say your SLA comes in and says, I have this environment and I need to back it up in 12 hours, so you build a four-, eight-, or twelve-node cluster to support that. And now your business comes to you and says, I can't have an eight-hour SLA to back up or to restore that data; I now need a two-hour SLA.
38:04
You don't necessarily need more storage for that. You don't even necessarily need more licensing to support that. You simply need more throughput. And so you can take that cluster from four or eight nodes and expand it to 12 or 24 or 36 or whatever the requirement is, simply by hot-adding additional nodes into the cluster.
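Assuming the roughly linear scale-out described here, the node count a tighter window requires is simple arithmetic. A hypothetical sizing helper, not a Pure tool:

```python
import math

# Hypothetical sizing helper: assuming roughly linear scale-out (as described
# for the PXG-1 compute nodes), scale the node count inversely with the
# desired backup/restore window.
def nodes_for_window(current_nodes, current_hours, target_hours):
    return math.ceil(current_nodes * current_hours / target_hours)

# An 8-node cluster meeting an 8-hour SLA, retargeted to a 2-hour SLA:
print(nodes_for_window(8, 8, 2))  # 32 nodes
```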
38:23
Then on the next job run, the job will automatically be distributed across all of the nodes that are now in the cluster. All of this is done automatically behind the scenes by the combined infrastructure powered by FlashRecover. And then lastly, you get all of the components that you are used to and familiar with from Pure, as well as some from Cohesity. Pure1, as the global
38:51
management platform, still supports and will give visibility into all of the infrastructure that you've got within FlashRecover. Similarly, from a data platform perspective, Helios from Cohesity is fully licensed and supported in this as well, with all of the features that you get from that technology. You still get all of the non-disruptive upgrades managed by Pure support, whether on the FlashBlade or within the FlashRecover cluster.
39:16
And there's full support for all of the long-term archiving capabilities, with our cloud archive technology and similar technologies that allow you to access data in public clouds, simplifying the implementation, management, and operation of these technologies. We've also got an enhancement provided with SafeMode integration on FlashBlade, and this is something that nobody else has on the market.
39:41
Just like you run SafeMode on top of a FlashArray, you can now run SafeMode with your FlashRecover deployment. What that means is that both the data and the metadata for your backup environment are stored on the FlashBlade. So if you're in a worst-case scenario where multiple layers have been compromised, you can reinstantiate your entire backup
40:04
environment by doing a SafeMode recovery, and that will bring back the configuration, the metadata, and the backed-up user data for your Epic environment, all in one operation, and it takes about 20 to 25 minutes. This is something unique in the market; nobody else has the ability to fully reinstantiate your backup environment to a known
40:27
good, immutable copy of the data within about 20 to 25 minutes. Everybody else has to reinstall control planes, import databases, bring the data back, reread it, re-index it. None of that has to happen here. In 20 to 25 minutes, you're back up, ready to do an instant recovery
40:45
or a rapid restore of your data in the event of a ransomware attack. And that's enabled by the core security tenets that are in here. You've got the FlashRecover SafeMode components. You've got immutability built into the file system as well as SafeMode. You've got the DataLock user profile that's built into the DataProtect platform.
41:07
So now you can have a security user that sits outside of your administrative function go in and say: this data needs to be unalterable, unchangeable, and immutable by anybody, including an administrator, including somebody with root privileges on the system, until this timer has expired. And that is owned by a security officer within your company, outside of the administrative function. So now you get enforcement at the application
41:35
layer, as well as enforcement at the storage layer, tightly integrated. And then you layer on top of that multi-factor authentication, very granular role-based access control down to an individual object level, as well as the ransomware detection and remediation that is inherent to the platform, allowing you to say: I was attacked here, SafeMode recovery here,
41:59
instant recovery here. The entire goal is to get your environment up and running in the shortest amount of time possible, allowing you to return your service faster than any other solution on the market. I do want to highlight a little bit about the rapid restore pieces that we've got here and finish with what it looks like in an Epic environment, because many Epic environments,
42:22
you know, have VMware as a key component. And so in that VMware environment with Epic, if for whatever reason you can't recover to a snapshot on FlashArray, you've got a couple of options. You can do a regular rapid restore of the data: we bring the data back using every node and every ESXi server,
42:41
bringing it back as fast and as parallel as possible. In this case there was almost 100 terabytes that we brought back in about 3.5 hours. That was a pretty modest system, only about an eight-node cluster, and I say that's modest because many of our larger clusters are measured in the twenties or thirties or forties of nodes, and there are even some at Cohesity that are measured in the hundreds of
43:03
nodes, and you get a linear scale-out of performance in this case. But you've got another capability with VMware which is important for Epic, and that's instant recovery. Now, instant recovery is unique with FlashRecover because it is backed by FlashBlade. So every virtual machine in that virtual datastore now has the power of FlashBlade behind
43:24
it. From a performance perspective, it's not a couple of spinning SAS drives like in a hyperconverged platform, and it doesn't first require a restore like a traditional purpose-built backup appliance. Instead, in the span of less than 30 seconds, we can create that virtual datastore, present those virtual machines, and ESXi can start powering them on, all automated with our API integration to orchestrate that
43:49
ransomware recovery process. And once that's done, we start a rapid restore of that data back, a Storage vMotion back to the FlashArray. This allows you, even if in a slightly degraded state, to get your service back as fast as possible, in this case potentially within five minutes, versus having to perform a full ransomware recovery with other technologies. Now let's look at what it looks like in an Epic
44:13
environment. You've seen from Thomas at the beginning a lot of these components built out in his solution builder slides, but I'm simplifying here to highlight just the pieces where FlashRecover works. Most often it's on your disaster recovery server: we're bringing that data over through an IRIS mirroring process, and then we are taking a snapshot of that, which we are mounting and
44:35
backing up through the proxy server with our Epic integration. That's giving us a rapid backup of the Epic database. We pull that into FlashRecover, and then in the event of a recovery, we can send it either back to your proxy server or, in many cases, we will install the agent on the production server and do a rapid restore of the Epic database directly back to the production server.
45:02
So this is the general workflow we're looking at for a rapid restore Pure FlashRecover solution. What does that look like? Well, with the current version of our Epic integration, we are pushing a pretty substantial amount of backup throughput. It's key to note that we are continuing to
45:20
innovate with Cohesity on this, and we expect through the rest of this year some pretty substantial performance improvements in both the backup rate and the restore rate with our integrations, especially on the AIX platform that many users are running. In this case you can see it took a little less than two hours for us to perform a full initial backup of an 18-terabyte Epic database.
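Those figures are consistent: at the roughly 2.5 gigabytes per second backup rate shown on the diagnostics page, an 18-terabyte full backup works out to about two hours:

```python
# Cross-check: time for a full backup of an 18 TB database at ~2.5 GB/s.
db_gb = 18_000      # 18 TB in gigabytes (decimal)
rate_gb_s = 2.5     # quoted backup rate
hours = db_gb / rate_gb_s / 3600
print(f"{hours:.1f} hours")  # 2.0 hours, matching "a little less than two hours"
```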
45:44
For the backup and restore rates, we show what those look like on the diagnostics page here. Our current rate for a backup is about 2.5 gigabytes per second, and that's a pretty fast backup rate. Similarly, and importantly, from a rapid restore perspective, we're demonstrating about 1.5 gigabytes per second to that single host as a rapid restore rate. And we are actually working
46:12
with Cohesity pretty tightly to increase those numbers, so you will see them become even faster from a rapid restore perspective later this year. And in fact, when those numbers are compared to other backup and restore technologies, I think you'll see that they are far in excess of what competitors are able to offer.
46:30
Not only are we taking the lead, but we want to press that lead as far forward as possible and increase those numbers. So with Epic and Pure Storage, where do we go next? You've got a tremendous amount of resources on purestorage.com. We've got a white paper published by our solutions architecture team that highlights Epic data protection with FlashRecover.
46:53
We've got the link here that you can follow. The Pure solutions portal on purestorage.com covers all of our healthcare solutions, and in particular our Epic components, so you'll see our healthcare Epic page listed there and linked here as well. And similarly, you can also hit our community portal for all things Pure and Epic.
47:16
If you're interested in hearing more about this, or you want to engage any of us on your Epic project, please reach out to your assigned pre-sales systems engineering team or your account team at Pure, and they can engage us to work with you. Thank you. Thank you, Thomas, for helping today. Thank you.
  • Healthcare
  • Video
  • Epic
  • Pure//Accelerate

Epic is the most widely used electronic medical record (EMR) in the world. With more than 250 million patient records stored digitally in Epic, maintaining security and service availability for these records is critical. This session discusses using Pure FlashArray to achieve an RPO/RTO of 0 for Epic.
