
Why Restore is the Next Backup

Join our panel of industry, customer and technology experts as they discuss real-world restore requirements and how Pure addresses them.
Transcript
00:01
Ladies and gentlemen, welcome to the Pure panel discussion. Please welcome your moderator, Andy Stone. Welcome to our live panel session, entitled "Why Restore Is the Next Backup." I'm Andy Stone, CTO for the Americas here at Pure Storage, and I'm honored to be hosting today. This session
00:22
is meant to give you some industry and peer insights into the data protection space, and hopefully answer some of your questions with real-world examples and experiences from our distinguished panelists and analysts. With that, let me introduce today's panelists who will participate
00:37
in our discussion today, and ask them each to tell you a little bit about themselves, their company, and the roles that they play. We'll start off with Mr. Christophe Bertrand. Christophe? Hi everyone, I'm Christophe Bertrand, Senior Analyst at ESG, and I'll be sharing some very interesting research and some numbers on the
00:55
topic of data protection with you. Thank you, Christophe. Justin Brasfield. Hi, everyone. I'm Justin Brasfield. I am the senior architect for storage at NetSpend, and I'm here to give some insight into how data recovery works in the pay
01:15
card industry. Hey, thanks. And Thomas Vecchio. Tom, you're on mute. Sorry about that, I clicked on the video instead. So I'm Tom Vecchio. I'm the director of infrastructure for Sinai Chicago, which is a safety-net hospital on the west side of
01:39
Chicago. Everything, servers, storage, and backup, falls sort of into my area. And I'm here to talk to you about some of the things we do in a mission-critical environment to maintain maximum uptime and the best SLAs, and just to talk about the importance of backups and recoveries. What a
01:58
great group. Thank you all for being here today, really appreciate it. Just a reminder to our audience that the Q&A is live, so please feel free to enter your questions in the chat, and we're going to do our best to get to them as we go. With that said, let's go ahead and start with
02:12
some background on the topic. Christophe has come today with some of his latest research. Christophe, would you mind kicking us off and sharing some of your latest thinking in the space? Well, I absolutely have some pretty interesting research to share. First of all, can you see it? Is it good? I can see
02:30
it. Great. Thank you. Excellent. So I wanted to spend a little bit of time on downtime, pun intended. Actually, there are so many reasons why things can go wrong, as you can see here, where we researched those reasons. We had about 350 or more IT
02:48
professionals in this research who told us why and how they lost data or had some interruption of their business applications. As you can see, you have two sets of numbers; let me walk you through them very quickly. In dark blue is the primary or the
03:04
most concerning potential impact, so "what is the most important impact, pick one," and then "pick all that apply" is the other labeling. So it's a bit of a popularity contest, though in this case not a very popular one, I admit. The first one, ranked by popularity, is really the loss of employee productivity. And
03:24
actually, there are quite a few things to say about that. What I've done is try to group them in a way that allows us to understand the top four big areas of impact that you have to consider. The first one is going to be around efficiency, and I've highlighted the three reasons
03:42
that I believe fall in that category. And obviously, one of the big things that happens is the diversion of IT resources: all hands on deck, ready to go fix the problem when something comes up. Because imagine you have a ransomware
04:01
attack, or some big data loss issue; you want to get all the resources to help the business back on track. Obviously, there are lots of other potential impacts around loss of IP and increased insurance premiums, but it goes beyond that. Unfortunately, you can also have
04:18
some very direct business impacts, such as loss of revenue; certainly bad publicity and damage to brand integrity would fall in that category, and direct impact on stock price. Clearly, when something goes wrong and it becomes public, that can be a direct consequence for publicly
04:36
traded organizations, and we've seen that time and time again. More importantly, you lose your customers' confidence, which, as you can see, ranks third here, pretty high as a primary reason as well. There are also some legal impacts. You know that the auditors and the lawyers are
04:55
going to start knocking on the door once a problem has happened and become visible; it's very likely that you will get sued or that you will have some compliance exposure. And finally, I mentioned employee productivity as one of the first impacts;
05:09
it's definitely a big one. But what's interesting here is that it also affects the confidence of the actual employees. Would you really like to keep working for a company that can't keep the business going because the systems have failed for some reason? So, lots of
05:26
things to unpack here. And definitely, I'm sure we'll hear from our panelists that they've been through some of this and are considering these four big areas. Now, let's talk about, essentially, downtime and
05:43
what the real numbers are. Again, this is based on a survey we recently conducted, where we asked hundreds of IT professionals to tell us what their RPOs and RTOs are. In this case, we're going to focus on downtime, or recovery time objective,
06:00
which is a very critical KPI here. First things first, you will note that 15% of the respondents said they can tolerate no downtime. I just want to point that out; it's a pretty high bar to meet. And again, these are objectives, not actuals. So those 15% are telling us, you know, I can never
06:19
have any downtime. That means, under the covers, you're likely going to look at technologies that allow you to do continuous data protection, failover, things like that. But more importantly, if I look at the mean of what we found from the sample, it's two hours. So when you
06:36
think about it, two hours is a pretty short period of time, especially on a busy day, to be able to resume business. But the real kicker is the one-hour mark. I would like to highlight here that you're looking at two sets of data: the mission-critical applications that are in the light blue, and
06:56
the other applications, or normal production workloads, that are in the darker blue. As you can see, if I drew the line at one hour, you get the big majority of mission-critical applications being in that one-hour window. So there's a bit of a magic number here with one hour. It's arbitrary, but
07:12
candidly, if you look at the distribution, a lot of applications really have to be back online quickly should anything come up. I'm not going into the reasons here; lots of different causes could lead to the outage. But the point is that, as you can see, there's a pretty high bar that
07:30
has been placed here. For normal applications, about 35% are expected to come back within the hour, too. So again, think about the technologies, effort, and processes it takes to be able to meet these objectives. And again, these are objectives. For RPO, how much data or how many
07:50
transactions can I afford to lose? It's a very different picture here, in the sense that the one-hour bar really captures most of the respondents. But note that same 15% that doesn't want to lose data at all. So there are organizations, and some of our listeners may be in one of them, that clearly have this mandate
08:13
that there can be no downtime, so you have to have some sort of failover technology in place, and you cannot lose data should anything come up. Again, a pretty tall order, but the technology exists to do that. It also takes people, processes, and investment to get there. But looking at the whole picture here for RPO:
08:32
clearly, I think losing more than one hour of data has become a thing of the past. People just can't tolerate it. So this is something that I think is fundamental to understanding not only the strategies you put in place for data protection and disaster
08:52
recovery, but also the type of solutions that you have to deploy to get there. Now, I talked about objectives; I want to give us a sense of what happens in reality with actuals. We asked organizations whether they had experienced a downtime type of
09:13
event within the last year, and 79% said yes. So, great, let's look at that 79%. What did they tell us? It turns out that the longest outage, so think, hey, that's the bad one, we all get a bad one, right? Well, the bad one was not the two-hour mean that the objectives would
09:32
require, based on what I just showed you; it was six hours, 3x the objective. Now again, that was the longest outage, but at the end of the day, the longest outage is the one that's likely going to create all of those impacts that I mentioned earlier: the employee impacts, the operational
09:51
efficiency impacts, the business impacts, and so on. So with that, I know it's a bit of a grim picture, but at the same time, it's nice to anchor our conversation in where the market is. That's where we are today; there's still a lot of work to do, and lots of great things can be achieved.
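To make the RTO and RPO vocabulary concrete, here is a minimal sketch in Python using the figures cited above (a two-hour downtime objective and a six-hour worst-case outage). It is only a worked example of the two KPIs, not ESG's survey methodology, and the class, field, and function names are made up for illustration.

```python
# Worked example of the two KPIs discussed above, using the figures cited in
# the talk: a 2-hour recovery time objective and a 6-hour worst-case outage.
from dataclasses import dataclass

@dataclass
class Objectives:
    rto_hours: float   # recovery time objective: maximum tolerable downtime
    rpo_hours: float   # recovery point objective: maximum tolerable data loss

@dataclass
class Incident:
    downtime_hours: float   # how long the application was actually down
    data_loss_hours: float  # gap between the last good copy and the failure

def check(obj: Objectives, inc: Incident) -> None:
    ratio = inc.downtime_hours / obj.rto_hours
    print(f"RTO missed: {inc.downtime_hours > obj.rto_hours} ({ratio:.1f}x the objective)")
    print(f"RPO missed: {inc.data_loss_hours > obj.rpo_hours}")

# A mission-critical app with the survey's mean objectives, hit by the
# "longest outage" respondents reported.
check(Objectives(rto_hours=2, rpo_hours=1),
      Incident(downtime_hours=6, data_loss_hours=0.5))
# -> RTO missed: True (3.0x the objective); RPO missed: False
```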
10:06
And I definitely want to hear from our panelists on this topic. So I'll pass this back to you. Well, thanks so much, Christophe; those are some brilliant data points. I'm always amazed at the data Christophe and his team are able to gather and the research
10:23
they're able to produce as a result. It certainly goes to illustrate the importance of the topic we have today. So let's go have a chat with our panelists now and get some real-world perspective on this topic. I'd like to ask you both to quickly reintroduce yourselves, your companies, and
10:38
your roles as we get started, if you don't mind. With that said, here we go. First question: what are the top two or three challenges you face relative to the whole data protection space in your organization? Justin, we'll start with you. Sure. Once again, as stated, I'm Justin
10:55
Brasfield, the senior storage architect at NetSpend. For the most part, the top thing, obviously, is, like Christophe said, no data loss; you can't have any in the pay card business. Time is money and data is money. If you have any kind of data loss, every one of those impacts he
11:15
listed seems to track. I kind of liked how, when you looked at the critical applications and the non-critical applications, they went in a sideways slope to one another. And that is pretty representative: for the most part, you're on the high end of things if you don't
11:38
want any data loss, and response time has to be snappy. People want backups of their backups when it comes to those things. So of course, you'd want some instantaneous level of restore recoverability that you can at least point to when people are in the room,
11:59
or when a problem goes bad. As soon as something goes bad, the first question is, how fast can we get it back online? How fast did we get it online last time? And from what we've seen from Christophe, it's really hope for the best but plan for the worst when it comes
12:16
down to it. So the six-hour mark: every time something happens, everyone's like, oh, I hope this isn't a six-hour event. Because everybody remembers that one time, that one terrible storm that happened. So you do your best,
12:32
when something like that happens, to learn from it. And utilizing Pure in the past, we've been able to make leaps and bounds doing those things. So those are the top things: we're looking for no data loss, and then data loss kind of affects
12:49
everything else. It affects your brand, it affects legal, everybody gets entangled with it, and you don't want the CTO calling you up when it's your application that went down. So you need to have a plan, because everyone's going to ask, what's the plan? Yeah, thanks for that. And,
13:04
Tom, I guess over to you: what are the top two or three challenges you face relative to the whole data protection space in your organization? Sure. I'm Tom Vecchio; again, I'm the director of infrastructure at Sinai Chicago, a safety-net hospital on the west side of Chicago. We share
13:18
similar issues, like Justin was talking about, but one of the other areas we hit on is growth. One of the things the organization wants you to do, especially if you have to maintain compliance for HIPAA or other government bodies, is
13:36
maintaining five years, seven years, ten years of data. What is sometimes difficult when it comes time to budget is that I have to budget to maintain what feels like an infinite amount of backup with a finite amount of budget. So it gets challenging: it's like, well, why do you need another so
13:53
many dollars to purchase more? Why do you need more backup? So one of the big challenges is being able to maintain several generations of copies, several years of copies of emails and medical record systems, going back for whatever compliance requires. And typically, you
14:13
know, since we're under HIPAA, there's usually a five-year, seven-year, or ten-year cutoff for various types of data, depending on their data category. One area that we're going to talk about today is obviously restore. So, if you do have an event where
14:28
you have to restore from backup, being able to restore as quickly as possible, because lives are literally at stake in healthcare organizations. So, being able to have maximum uptime, little to no downtime, and then, in the event that there is some sort of incident, being able
14:45
to restore and be up and running as quickly as possible. Another area is just your continuity plan, your BCPs: being able to figure out, in the event of a disaster, how I would maintain the organization's operations. And there are various ways, tools, and things that we will talk about today.
15:03
Thanks for that, Tom. So Justin, Tom started to allude to this a bit, but there's historically been this huge focus on backup and making backup fast, right? More and more, that's really shifting to restore. What are your views on that trend?
15:23
For the most part, I feel like that's always kind of been the case, particularly in our industry. Because most people don't know; the backup isn't visible to anybody but the backup admin, or whoever's doing that
15:41
kind of thing. As long as it gets backed up within the recovery SLA set by internal areas of the company, it doesn't have impact unless it goes over. And even then, most people wouldn't know that it went over unless there was an incident. All people care about is,
16:01
when something goes wrong, how fast can I have it back? When is it going to be in production, and when was the last backup? So as long as you're maintaining your steady backup schedule, that has always just kind of helped the IT teams. But when it comes to recoverability, that's what
16:23
matters to the whole company. And as things progress, that window has gotten smaller and smaller and smaller, because, for NetSpend, we're doing more transactions per second than we've ever done, year over year over year. As technology increases, as speed
16:40
increases, as people become more technologically advanced and start to use those systems, check things with their phones, everybody being more mobile and more technical puts more of the onus on IT to do those things, which also includes recoverability, which for the most part is a mostly manual
16:59
process; you have to have somebody kick it off. Not a lot happens automatically as soon as something crashes. There's obviously the question of why we have backups, but that's not the same thing as true recoverability. Right. Yeah, good point. So,
17:14
Tom, what would you like to add to build on where you started, in terms of this trend of migrating away from a focus on backup toward a focus on restore? Well, there's always been a focus on maintaining backups, and maintaining them from a compliance
17:31
standpoint, whether it's Sarbanes-Oxley or one of those governing bodies, that you are maintaining a backup and you tier your systems into tiers one, two, three. But what I've noticed over the years is that restore is a hot topic right now, because for many, many years people have been doing backups, but they
17:48
never really did a restore. A long time ago, when I was a developer, and this was a long time ago, we blew away a database. We just asked the database team, hey, could you restore this database for us? We were the first team to ever request a restore, and they didn't have the restore
18:06
procedure. It took them several days to discuss it, and in the end they said, you know, we need to work out a procedure; we can't do this for you right now, so you might as well just rebuild the data from scratch. So they had the request to do the restore, but they never vetted the restore,
18:20
and it was just kind of interesting, because then you're sitting there scratching your head: well, how many other backups are we not able to restore from? And over the years, meeting some of my peers and sharing stories like that, there are a lot of stories like
18:35
that. So being able to actually perform a restore is now becoming a hot topic, because backup and restore go together hand in glove; you can't have one without the other. I think everybody's got the backups covered, but restores are new territory for some organizations. That
18:53
brings up a really good point about audit requirements being in congruence with restore requirements, or what the business is asking. I feel like that's always kind of interesting. You're always going to get audited on the backup, but how many times are you audited on the
19:07
restore? When that's really what the business is looking at. Internal IT is focused on, well, can we back up? Are we meeting these windows? When really, the proof of all of that should be in the restore, which is what the actual business cares about. And
19:24
I always feel like that's kind of a funny comparison. So I think, because things are becoming more technologically advanced and people are figuring that out, that's why restore is hot. The C-level execs have finally figured out, oh, restores are what matter. It doesn't matter if we
19:40
go through and see whether they're doing the backups; can they do a restore? And that's actually what the audits are requiring you to prove now. The evidence used to be: are you performing backups? Show us the backup log. Now it's: are you doing your
19:55
tabletop exercises? Can you show evidence that you successfully restored your tier one systems? Exactly. And I figure it's similar for HIPAA as it is in the pay card industry for PCI; everybody's looking at that now, but five years ago, no
20:10
way. They just wanted to see the logs. So, one of our audience members, Michael Colby, has a question: why has it taken so long for the industry to recognize that the most important part of data protection is restore, not backup? In the past, the focus has been on backup
20:27
and methods of backup, not speed of restore. Any thoughts? Christophe, feel free to add some commentary too, if you'd like. So I will tell you, I think it's a question of, well, it happens to others. And unless or until it happens to you, you don't really fully realize, as an
20:44
organization, culturally, that it's about restores. I do think, though, that it's changing; I think there is a better understanding now that bad things can happen. I do believe we've seen an acceleration of this understanding due to ransomware and cyber attacks.
21:01
Those exposures are pretty high; they've actually intensified during COVID. We've seen an acceleration in the frequency of attacks, and of course a much broader attack surface with many people working from home. I mean, just look at the news in the past week, in Europe and in North America. So
21:21
I think this general perception is probably changing rapidly now, which is a good thing. And now we can really talk about the real SLAs: how much data can you afford to lose, and how quickly can you get back on your feet? I would also note there's been what I think is an iPhone-
21:39
ization of IT: people think that you can just click on something on your phone, and it's all going to come back, and it's going to be back quickly. Well, no, that's not the way it really works. Maybe it should, but it takes a lot of work to get there. So I think there's also a bit of education that needs
21:54
to happen between the business and IT in terms of what's realistic or not, and what it's going to take to get there. Okay, anything you guys want to add, either Tom or Justin? I would just say that I think the gap there is because, traditionally in IT, things were kind of humming along: you
22:12
were doing your backups, maybe a server crashed, and then, oh, the server crashed, so we'll restore it. But the way the landscape has changed, with virtualization, integration with other data centers, wide area networks, when there's an event, it tends to have a bigger impact on the organization. And I think
22:29
that's what has percolated restore to the top, because when there's an event that you have to recover from, the C-suite is going to say, well, why did this happen? How come we can't just restore from backup? Like Christophe was saying, it should be like an app on your phone: you click
22:44
restore, and the next thing you know, the server's back in the virtual farm. But I think some of these things that have come out, where it's a bigger to-do, have really started to shine a light on that area. If we need to restore
23:00
a 15-server system with 20 databases, and we're only taking backups of a portion of it, well, why aren't we backing up all of it? And then, why did it take so long to restore 50 terabytes of data? To piggyback on that, and also kind of get into
23:21
the depth of why now: Christophe said it's ransomware. I think it's not real until it's in your neighborhood, right? It's like, oh, the Johnsons got their house broken into; oh well, they caught the guy. The next thing you know, everybody in the whole neighborhood has a dog.
23:36
It's that kind of philosophy, right? As soon as it happens to someone in your industry, like a competitor, then your shareholders and your CTO are going, what are we doing about ransomware? Almost immediately, the next day, as soon as there's a breach, they're calling
23:54
everybody and asking, what are we doing about it? And then the storage and backup administrators get asked about restore time. So that's why there's a focus now; it's becoming real because it's becoming more frequent. Any bad actor can go on GitHub and get some
24:11
ransomware and find a way to load it up, or whatever they're going to do. It's not that hard; any script kiddie can do it now. So that kind of accessibility hurts the industry. I'd put it back on the industry itself as well, and the regulators. Frankly, the focus has always
24:29
been, hey, are you backing up? And the way that backup vendors get paid is by backing up data, right? The focus hasn't been restore. So I think there's a bit of that component to it as well. Let's move on, though. So, Justin and Tom, Christophe spoke to some pretty stringent
24:48
expectations in terms of recovery SLAs. Are your businesses similar in terms of what they expect, or do you view his data as being anomalous? Let's start with Justin this time. Justin? Sorry, what was the question again? So, Christophe
25:04
spoke to some pretty stringent expectations in terms of recovery SLAs. Is your business similar in terms of what it expects from a recoverability SLA perspective? Right. So, in that area, I would say even more stringent, because, like I said earlier, time is money. But it seems to
25:23
follow along with the trends. Realistically, what we want to tell people and then what actually happens follow that trend almost exactly, though we might stuff some more things up front when it comes to zero data loss. When something does happen, if you
25:40
have ever had an event, you obviously see all that go out the window, and your SLAs change and become more stringent, because there are more people involved in looking at it. So typically, that kind of thing doesn't get locked down until something either happens to you or, as Tom
25:57
mentioned earlier, an audit requirement changes and the industry has changed because of it. But for the most part, it does align. I would say my experience is pretty similar to what was shown on the slide deck. Okay. So, Tom, I'll build on this for you. Answering,
26:17
I guess, the question around the SLAs and what your business expects, but also, how do you actually go about achieving those SLAs? And are there any specific technologies or tools that you guys use to help meet those objectives? Sure. First and foremost, you want to collaborate with your
26:33
business partners, because in order to set that objective and make it realistic, you first have to partner with the business and say, this is what we can achieve. Realistically: I know you want zero downtime and to be able to recover within five minutes, but realistically, based on the tool
26:51
sets that I have, this is what I can do. And then you take it from there, with collaboration and partnership on the business side. For example, in our hospital, the medical record system is the heart of the organization.
27:09
The backups were not happening as frequently as we wanted. So as we invested in newer technologies, more possibilities opened up for us to say, okay, your RTO/RPO went from 12 hours, now we can go down to two hours, and if we continue down this path, we can get down to 15 or 30
27:27
minutes. And then what we do is set up a project to say, this is what we want to do, here are the objectives we want to achieve, with the caveat that we have budget to buy the software, this hardware, this FlashBlade array, and things like that. So
27:43
in meeting your objectives, I'd say the biggest part is making sure they're realistic, partnering with your business partners and your organization, and being upfront about what you can realistically achieve. Yeah, thanks. Justin, anything you want to add in terms of how you help your business achieve the
28:00
required SLAs, and any specific technologies or tools that you guys use? Sure, yeah. We do drills every year, restoring different pieces of content to give benchmarks. And like Tom said, those are realistic: hey, this is the fastest we were ever able to restore this
28:21
massive database, and we can't get any lower than that; it only scales up from here, because all hands were on deck when we did this. If this happens at a time when we're not all hands on deck,
28:35
you're going to have to add scramble time, and then we're going to have to factor that in as well. So being realistic is it. And when we got Pure Storage, both FlashBlade and FlashArray, that changed completely. We ran the test, and then the next time they ran the test, everyone was like, hold up.
28:53
Why are these numbers so different from what they were last year? And it's because of the way we changed how we set up our most important backups. And they were like, is that a typo? Is that hours? And it's like, no, that's minutes. So then you kind of give people
29:11
lofty heights there. But for the most part, it's staying pretty steady. That's great. We did something similar too: we went Pure FlashArray and FlashBlade, and we were able to reduce those windows from half a day or so down to hours. And as we continue to do tabletops, you
29:29
know, we want to get into the minutes. Oh, that's great. Well, thanks for those perspectives. I hear more and more from customers how important recovery has become, and really the fact that while backing up data fast is nice, getting the data back fast is even nicer. So it's
29:45
certainly an area we've been focusing on with the products and our partner integrations, so thanks for those mentions. Further to that, though, we hear a lot about ransomware attacks in the news today. You guys have mentioned it a little bit already. The bad guys have of course figured out how
30:00
to evolve, and now target backups in hopes of deleting companies' backup data and making it impossible for them to restore, or forcing them to pay a ransom as a result. To that end, protecting snapshots and backups has become a huge focal area. How has ransomware affected
30:19
your specific businesses' data protection strategies, I guess? Tom, do you want to start us off? Sure. So we made an investment in FlashBlade, and from day one, when FlashBlade was installed and we began the migration from our previous product to those products, we enabled the
30:37
immutable backups within the device. This way, every backup, everything that we put into that system, is immutable. So if there's ever a ransomware attack, we can go back to a known clean backup, and we can restore from that. So we really relied on Pure's technology to solve
30:57
that problem for us. Oh, that's great, I like to hear that. All right, Justin, how about you? How has ransomware affected your organization's data protection strategy? Almost the exact same: immutable backups and restore testing, as well as a kind of separation of duties
31:16
when it comes to that, and access to those backups and snapshots, and even data separation for the most important things. You can't go wrong. You might drive your costs up a little bit, but once someone looks at that, particularly
31:32
your DBAs and folks like that, they're like, oh, we want to keep this separate from this, in case this is ever compromised. They're thinking about that, and you'll get involved with infosec when it comes down to it too, right? Because they're going to see the vectors of
31:45
attack possibly a little differently than you do. You know how you'd mess things up if you were in there, but they might know a little bit about the outside. So with immutable backups, it's like, hey, if I can't mess it up, they can't mess it up, which is a pretty good frame of mind to be in.
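As a concrete illustration of the backup immutability the panelists describe, here is a minimal, hypothetical sketch using the generic S3 Object Lock API via boto3. It shows the write-once idea only; it is not how FlashBlade's built-in immutability is actually configured, and the bucket, key, and file names are invented.

```python
# Minimal illustration of the "immutable backup" idea discussed above, using
# the generic S3 Object Lock API. Bucket/key names are hypothetical; vendor
# features such as FlashBlade's built-in immutability are configured
# differently. This only shows the write-once-read-many concept.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")  # or any S3-compatible endpoint via endpoint_url=...

BUCKET = "backup-vault-example"           # hypothetical bucket
KEY = "db/prod/2021-05-01-full.dump"      # hypothetical backup object

# Object Lock has to be enabled when the bucket is created.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Write the backup copy with a compliance-mode retention date: until that
# date passes, neither a compromised admin account nor an attacker with
# stolen credentials can delete or overwrite this version of the object.
with open("2021-05-01-full.dump", "rb") as body:
    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=body,
        ObjectLockMode="COMPLIANCE",
        ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
    )
```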
32:01
Absolutely. It helps from an insider threat perspective as well, and even with developer mistakes or admin mistakes; people make mistakes. So having that fast ability to recover, I think, is great. I think this space
32:18
is going to continue to evolve and really become more and more important as a focal point. Along similar lines, making sure you can get access to your backed-up data after an event is of huge importance. We've touched on this a little bit in terms of testing, but given the level
32:34
of complexity of today's IT environments, are you really able to fully test your data protection and recovery capabilities? And if so, how often are you able to do that? We'll start with Justin. We do a good test every couple of months or so for
32:49
different applications, spinning things up. And DR is usually a good benchmark for that. We have one-for-one when it comes to storage arrays and some of the other technologies that are there in disaster recovery. And also, when you're running active-active
33:07
sometimes, it doesn't really matter; whatever is live is whatever's live. If you spin up a copy of it, and you can have your clone or whatever reach out to it, it's just as good as any. The only thing that changes is the network that it's on, right? So
33:24
at that point, it's just networking's job to make sure of that, and that's a separate test. So as long as we get the thumbs up from networking, we know we're going to be golden if we have to move over. Makes sense. And Tom, what about you guys? Are you able to fully test your data
33:39
protection and recovery capabilities? And if so, how often? Yeah, so what we do, and this kind of goes along the lines of the question in the chat, is develop it as a supply chain. You break up your backups into logical units of work: on your array, you have snaps
33:57
that happen so often, and then at certain intervals those snaps go to a FlashBlade. We have a DR site, which is a totally separate data center, where we're continuously sending replication data and backup data, and at the other end there's another set of
34:15
bare metal servers for a virtual environment. There's a whole set of Pure equipment there too, so that if we needed to, we could set up a sandbox for our tier one applications and test them. Additionally, with some quick tweaks to the network, we could also turn that sandbox live and serve it up to the
34:35
organization as the production environment if need be. And we do that twice a year. We don't turn it on for the organization, but we test it in a sandbox using tabletop exercises twice a year, so every six months, give or take a month; that's typically what we do. And then through that,
34:53
we look at it as a supply chain: how long did this step take, how long did that step take, were there any pain points, all the way until we get to that sandbox where everything drops in.
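As a rough sketch of that "supply chain" idea, here is a hypothetical way the stages and drill timings might be written down and compared. The stage names, intervals, retention periods, and durations are all made up; they are not the hospital's actual pipeline.

```python
# Hypothetical sketch of the backup "supply chain" described above: frequent
# array snapshots, periodic copies to an object store, continuous replication
# to a separate DR data center, and periodic sandbox restore tests.
# All names, intervals, and timings below are illustrative only.
BACKUP_PIPELINE = [
    {"stage": "array-snapshot",       "scope": "tier-1 volumes",  "every": "15m",        "keep": "48h"},
    {"stage": "copy-to-object-store", "scope": "tier-1 snaps",    "every": "4h",         "keep": "30d"},
    {"stage": "replicate-to-dr-site", "scope": "snaps + backups", "every": "continuous", "keep": "30d"},
    {"stage": "sandbox-restore-test", "scope": "tier-1 apps",     "every": "6mo",        "keep": "n/a"},
]

def slowest_stage(drill_timings_hours: dict) -> str:
    """Given per-stage durations measured during a tabletop exercise,
    return the stage that dominates end-to-end restore time."""
    return max(drill_timings_hours, key=drill_timings_hours.get)

# Example timings captured during a drill (hours, made up):
print(slowest_stage({
    "array-snapshot": 0.1,
    "copy-to-object-store": 1.5,
    "replicate-to-dr-site": 2.0,
    "sandbox-restore-test": 3.0,
}))  # -> "sandbox-restore-test"
```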
35:07
Awesome. So let's build on that a little bit. Justin, based on your testing, what types of results do you guys generally achieve? For example, do you find that your backups are providing all the coverage you need and you're able to hit 100% effectiveness in terms of recoverability? Or do you still find there are gaps, I guess, in terms of
35:22
what you can recover? You always say it's 100% when you break it down into different tests, until something actually hits the fan, right? But for the most part, we've had live events, and we've recovered from them in a
35:40
matter of minutes before, which is really impressive. And that's only possible through Pure, and usually that's a good enough benchmark for us. It's not like we don't continue to do the tests, and it's not that it happens all that frequently, but when it does happen, boy, you're
35:57
glad that you did the homework ahead of time. So I would say that method has proven effective, because it does work, even though we've never had, knock on wood, the kind of catastrophic event that would result in a total breakdown,
36:19
the kind you see when those things happen elsewhere. But chances are, if you keep all the other parts running, your scope only spreads out a little bit when everything has to run in concert, as opposed to the breakdown you might see when you only test those
36:36
things individually. Right. And Tom, I guess the same question for you guys: do you find that you're able to hit that 100% effectiveness score for recoverability? Well, what we do is tier our applications and systems into three different tiers: high, medium, and low. The tier one
36:54
applications are what I need to survive as an organization, and we typically do 100% of those. The other tiers, two and three, are sort of best effort; it's acceptable if it's a day or two to restore. But our main focus, if there was a catastrophic event, would be 100% of the tier one systems within a set window, which is
37:15
typically two or three hours. So within a few hours, we need to be able to have our core systems up. We haven't really set a timeline beyond that, but really, within the first 24 hours we'd want to have the tier two and tier three systems up in the secondary data center. Yeah. So all these solutions,
37:32
and the work that goes into making sure they're appropriately implemented and maintained, are certainly not inexpensive, right? A lot of organizations are spending more and more to achieve these capabilities and to get the right tools, technologies, and so on in this space. I
37:47
guess I want to bounce back over to Christophe for a second: what do you see as the trends in spending in the data protection space? And are there any specific data points you can share with us? First of all, great conversation, and I can only agree with practice making
38:05
perfect, and sandboxing, all great things. About a third of applications are actually mission critical, based on our research, so that tier one is a pretty big tier one. In terms of costs, obviously one of the questions I often get is, what has been the impact of 2020? Let's face it, for the first
38:20
few months there was a pullback in terms of budgets. It turns out that the second part of the year totally changed the picture, with organizations ending up spending a lot more of their IT budget than they had previously planned in 2019, and certainly more than they had readjusted to at
38:40
the beginning of the year. And it has a direct impact on data protection. First of all, people are going to keep investing: about 45% will maintain their budgets, and 50% will actually increase their spend on BC/DR, based on the research
38:59
I've seen, which we conducted at the end of last year. Pretty impressive. And when you look at data center modernization efforts, guess what tops the list: the number one area for investment in the data center is backup and recovery. So definitely bright days ahead, and maybe slightly
39:19
better budgets on the IT side. The other thing is, it's not just backup and recovery. It's also investments in infrastructure, again talking about the data center, and investments in containers. So, a few areas where certainly Pure is extremely well positioned to
39:38
help their customers. Thanks for that. So, back to the panelists, Tom and Justin. First of all, I guess you agree with what Christophe is saying? Based on your businesses or your organizations, are you seeing the same trends in terms
39:54
of spending? And if there was one area where you could significantly increase funding to help augment your data protection strategy in the organization, what would that be? We'll start with Tom.
40:08
I would agree with most of what Christophe is saying. Right now, some organizations are using this as an opportunity to go through modernization of their data centers and their systems. Typically, what I've done at some organizations is take some of my production systems, the
40:25
equipment, and use that to build out DR, because that's not as mission critical. I don't need 24/7 with four-hour turnaround at the DR site, so I typically can put that equipment there and let it run with a much lower spend on the support costs.
40:45
The second part of your question was whether there's any place where you would significantly increase funding. One of the things I would recommend to organizations is to try and leverage cloud, because a lot of the tools now for backup and for DR
41:05
allow you to take advantage of the different buckets, like an Amazon S3 bucket or Azure blobs, to take all your backups and just move them right into those cloud providers. And if you were in a scenario where you needed to, you could even spin some of those instances up in there and just pay as you
41:21
go, pay for the compute. What I said earlier was that one of the challenges is trying to maintain what feels like infinite copies of so many years of data with a finite budget; leveraging cloud with your on-prem would allow you to make more efficient use of your
41:38
budget to do live backups and be able to recover. Also, some smaller organizations can't afford to have two co-located data centers across geographic regions. So being able to say, if I'm in the Midwest of the US and I want to have something in the Northwest region, with Amazon as
41:59
my data center, that's literally just a couple of clicks and saying, put the data here now instead of there, and now I have geo-redundancy for my data center.
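To illustrate that geo-redundancy point, here is a small hypothetical sketch that writes the same backup object to buckets in two regions. The bucket names, regions, and file name are invented, and a real deployment would more likely use bucket replication policies and lifecycle rules than a loop like this.

```python
# Hypothetical illustration of the geo-redundancy idea above: the same backup
# object written to buckets in two different regions. Names are made up.
import boto3

TARGETS = {
    "backups-midwest-example":   "us-east-2",  # "home" region
    "backups-northwest-example": "us-west-2",  # second region for DR
}

for bucket, region in TARGETS.items():
    s3 = boto3.client("s3", region_name=region)
    with open("medrecords-2021-05-01.bak", "rb") as body:
        s3.put_object(
            Bucket=bucket,
            Key="daily/medrecords-2021-05-01.bak",
            Body=body,
        )
```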
42:17
Right. Awesome, thanks. And Justin, same question to you. The exact same thing. And I'd say, if you had maybe bought yourself some Cloud Block Store or something like that, so that you can have a Pure array in the cloud; that's what we're leveraging now. And if we had some extra money, we'd build out a full environment there, to not only do that but try and create parity for testing: not only is
42:34
storage going to be there, but the applications as well, so that we can do what Tom said, spin anything up anywhere, be able to test it, and then see what those metrics are. So I'd put more funding towards that kind of thing, to see what full cloud recoverability looks like, rather than just testing
42:52
certain applications. Awesome. Well, we've definitely covered a lot of ground here today. I really want to say thank you to our panelists for all the excellent engagement. Thank you for the questions that came in; we really appreciate that. And thanks to everyone for attending. Please feel free to
43:08
reach out to us for more information on how Pure can assist you in thinking through your data protection strategy. And if you'd like to continue the conversation, please go to Data Protection Ask the Experts, located in the virtual event chat. Thank you all so much, and
43:24
have a great rest of the day. Thank you. Thank you for having me. Thanks.

  • Andy Stone (Moderator), Field CTO Americas, Pure
  • Christophe Bertrand (Analyst), Senior Analyst, ESG Global
  • Justin Brasfield, Sr. Storage Architect, NetSpend
  • Thomas Vecchio, IT Director, Mount Sinai Hospital Medical Centre of Chicago

For years, IT teams have preached the importance of high-speed backups to ensure business recovery without slowing down production workloads. But with more data being used to support end users, service desks, and internal operations, high-speed restores are just as important.

