
The Mainframe Is Not Dead: Long Live the Mainframe

Explore how Luminex + Pure Storage enable cyber resilience across mainframe and open systems in healthcare.
This webinar first aired on June 18, 2025
00:00
Appreciate your time here today. We're going to talk about a topic where you probably don't realize Pure and mainframes go together. We're going to do a quick case study on CMS, if you're familiar with them, the agency that runs Medicare in the United States, which we helped modernize.
00:20
This guy here actually helped architect a lot of this. We showed them how to save money, be more efficient, and actually handle the claims a lot better. Like I said, we're going to talk about that a little bit. Most people don't know who Luminex is.
00:39
Part of the problem is marketing; we just don't get our name out there. But what we offer is a way to make your mainframe data center more efficient, so that your clientele, and when I say clientele I'm talking about your business units, can actually get more out of the data. We're going to show you how they get more out of the data,
01:01
and how they do a much better job processing it. You can read this slide, so I won't go through it all. We're going to get started here on the progression. When we came into CMS, a train wreck would have been a compliment.
01:19
Their DR data really did not exist. They had no way to go to a true DR site if something major happened, and no way to really protect their data. They believed they did, like most mainframe customers believe they do. We went in there and proved that they did not.
01:40
Matter of fact, I can get into a story a little later on where we did a presentation to the deputy director. When we talked to him about losing one of his data centers and losing all the data for all of Medicare in the United States, you could have heard a pin drop, because he didn't realize how bad it was,
02:00
but I've been into hundreds if not thousands of data centers in the same shape. Some are small, some are large, but most are living 20 or 30 years in the past, thinking they're protecting their data. There are things like ransomware, and there are all the different ways disasters happen,
02:19
and they have no way of protecting their data. So we're going to talk about this a little bit. Chris, you want to go ahead? Sure. My name is Christopher Rogers, and I'm the chief architect for CDS on the CMS project.
02:38
As Andy mentioned, the customer came to us because they had essentially no DR capabilities, and they wanted us to build out disaster recovery as a service using colo facilities, so they could DR the five or six data centers they had spread throughout the country into those facilities. Hang on,
03:01
let me add one thing to that: IBM had told them their DR would work, and basically certified it. Go ahead. So what we did is build out a new private cloud environment in a colo facility that they could start using as the test bed to perform DRs, because we had to be able to instantiate massive data centers, and one of the requirements
03:30
they came to us with was that we had to be able to restore up to a petabyte a day of open systems and mainframe-based workloads. Now, this is in 2018, so it was one of those where, OK, you want a petabyte a day, FlashBlade is what's going to get it. You're not going to get that off of anything else.
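To put that requirement in perspective, here is a rough back-of-the-envelope calculation; only the one-petabyte-a-day figure comes from the webinar, the rest is simple arithmetic.

```python
# Rough arithmetic: what "restore a petabyte a day" means as sustained throughput.
# Only the 1 PB/day requirement comes from the webinar; the rest is simple math.
PB = 10**15                      # bytes in a petabyte (decimal)
seconds_per_day = 24 * 60 * 60

sustained_bytes_per_sec = PB / seconds_per_day
print(f"Sustained rate: {sustained_bytes_per_sec / 1e9:.1f} GB/s "
      f"({sustained_bytes_per_sec * 8 / 1e9:.0f} Gb/s) around the clock")
# => roughly 11.6 GB/s, i.e. ~93 Gb/s of continuous restore traffic
```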
03:51
So we started working with Luminex to architect out the mainframe side of it, and worked with a couple of other backup vendors to architect out the open systems side, all of it based on FlashBlade. As you can see there, what you'd call DC1 was the data center we started with from a failover-testing perspective, building out and modeling everything.
04:18
I don't know that we need to go through all of these, but it basically became a domino process: we built out the DR, we tested it, and because it was successful, the customer said, well, fine, we're going to change the project. It's no longer going to be DR as a service,
04:35
it's going to be a data center consolidation. So once we built out and tested data center one, which was East Coast, Ashburn, we built out data center two on the West Coast, because they wanted geo diversity in their failover capabilities, and the first data center we collapsed was in the South,
04:59
basically in Columbia, South Carolina, so we moved that data center to Ashburn. I'll add one thing that he left off, right around DC1 and creating DC2: they were trying to do DR testing, and it would take anywhere from a weekend to a week to do that testing, and they could not move a petabyte of data. We have something called push-button DR for the mainframe.
05:25
It's very simple. The CTO said, OK, I'm not sure it'll work, but we'll try it. He called me up and said, how do we do this? I said, all you've got to do is hit the button on your screen. He failed over from DC1 to DC2, the whole data center.
05:45
He said, how do I fail back? He hit the button below it and failed it back. Guess what? That took no time. He did it five times in one morning.
05:57
And we're talking about petabytes of data being moved back and forth, five times in one morning. The IBM way was going to take them at least one or two weeks to do this, let alone five times. What's the value to any business unit when you can do it that fast? Go ahead. So that was part of our process: we had to be able to have the entire
06:20
data center back up and operational in 24 hours. Obviously, in that type of scenario, bringing all the mainframe workloads up and operational wasn't the hard part; the open systems side, from a backup and recovery perspective with the software, was tougher. But we were able to get all of those things built out, and we actually
06:42
did the failover. The first time it was 27 hours, and then we got it down to where, in the end, we failed over the entire data center in less than 12 hours. When we moved it for real, we moved it in about 9.5 hours. We forklift-moved the data center, literally logically picked it up, and moved it to Ashburn, Virginia.
07:08
Now, this is all with Pure on the back end, so I want to make sure you know that. The complication, like he mentioned, was that open systems were a bigger can of worms to deal with, so that's what slowed us down a little. But think about moving a major data center in nine hours: that's open systems, that's mainframe, that's everything.
07:30
Who else is doing that in this world? Nobody, nobody can do that. Go ahead. So since that was so successful, we then built out a DR data center on the West Coast, out in Seattle. We instantiated that, tested the DR failover scenarios, and instead of just being disaster
07:54
recovery as a service for their two other large data centers, plus a fifth data center that was running another mainframe workload, we went about collapsing all of those into these two primary data centers, in a kind of domino effect. We picked up data center three and migrated it to the
08:17
West Coast again. That one had z/Linux workloads, so there were still mainframe pieces in there, plus open systems pieces. The whole infrastructure, from a backup and recovery perspective, has always been based on FlashBlade. As Andy said, we started with about 512 terabytes, and by the time we hit data center three we were at
08:40
several petabytes' worth of FlashBlade. Then you fast forward, and you can see the timeline: we did a bunch of data center shifts, moved data centers over, and in the end we moved about 12 to 13 petabytes or more of mainframe workload into the two data centers. One of the requirements the customer
09:07
had is that the data had to be tertiary: we had to have three copies. So we actually have copies in either data center, and we have active-active workloads in either data center. From a Luminex and mainframe perspective, they back up, they replicate to the opposite data center, and then they also replicate the S3 buckets, and we manage all the open systems the same way.
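As an illustration of the kind of bucket replication being described, here is a minimal sketch using boto3 against a generic S3-compatible endpoint. The bucket names, role ARN, and endpoint URL are hypothetical, and the CMS environment replicates between on-prem S3-compatible object stores rather than AWS, so treat this as the concept only.

```python
# Hypothetical sketch: configure cross-site replication on an S3-compatible bucket
# so every object written at one data center is copied to the opposite one.
# Bucket names, role ARN, and endpoint URL are made up for illustration.
import boto3

s3 = boto3.client("s3", endpoint_url="https://objects.dc1.example.net")

s3.put_bucket_replication(
    Bucket="mainframe-tape-dc1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",
        "Rules": [
            {
                "ID": "replicate-to-dc2",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},                      # empty filter = replicate every object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::mainframe-tape-dc2"},
            }
        ],
    },
)
```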
09:32
This is something that any customer could use, because it's something they all need but are having a hard time figuring out. When you start asking them how they recover the mainframe data and how they recover the open systems data, this is very important. Some of the things they did here with open systems and some of the things we did with the
09:55
mainframe, we brought together and automated to the point where... How many admins do you have now to monitor this? Oh, well, we were joking about this at lunch. From a Pure perspective, we run FlashArray and FlashBlade. We've got over 2 petabytes of FlashArray, physical FlashArray, which obviously
10:18
represents a lot more storage with dedupe and compression, and we've got about 20 petabytes of FlashBlade. One of the things we lean on from a Pure perspective is that it's all API-based and automation-based, so we don't actually have any dedicated storage admins. We have three or four guys who work on storage.
10:41
I have a guy I call my primary storage guy, but he honestly doesn't work on anything Pure-based more than about 25% or 30% of his job. Absolutely, and the thing about the solution when it runs together is that we really don't need anybody monitoring it. It just runs itself; it takes care of itself, and it's very simple.
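A minimal sketch of what that API-driven, hands-off monitoring can look like. The host, token, endpoint path, and response field names below are assumptions modeled on the general shape of a FlashBlade-style REST API, not details taken from the webinar; verify them against the API version your array actually runs.

```python
# Hedged sketch of "API-based, automation-based" monitoring: poll array space over REST
# and flag when it is getting full. Host, token, endpoint path, and field names are
# assumptions; check them against the REST API version on your array.
import requests

ARRAY = "https://flashblade.example.net"          # placeholder management address
API_TOKEN = "replace-with-a-real-api-token"

# Typical pattern: exchange a long-lived API token for a short-lived session token.
login = requests.post(f"{ARRAY}/api/login", headers={"api-token": API_TOKEN}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

resp = requests.get(f"{ARRAY}/api/2.0/arrays/space", headers=auth, verify=False)
item = resp.json()["items"][0]                    # field names assumed, may differ by version
used_pct = 100 * item["space"]["total_physical"] / item["capacity"]

print(f"Array is {used_pct:.1f}% full")
if used_pct > 80:
    print("WARNING: plan an expansion before the next big ingest")
```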
11:04
You start talking about ROI, and everybody likes to talk about that. This is a huge one right here, just the manpower it saves. The second one is how quickly we can actually move data, how quickly we can protect data, and so forth. Now we get into a little bit about the cloud and data resilience,
11:22
and so forth. The cloud is an interesting place for the mainframe world. Everybody thinks they want to be there, but nobody really needs to be there, to be honest with you, because it's very expensive to pull data back from the cloud.
11:39
We went through that scenario, and I'll give a quick example. We turned on the cloud tier, and we were writing to it like we were supposed to. The CTO had told us his analysts said they never recall that data. Within the first two or three hours of that day they recalled 1,000 files. Do you know what that bill looked like from AWS?
12:06
Well, that causes a lot of problems, so we had to go through the studies we can do together that show you when to recall data, how to stub it out, and so forth, to protect your data. So we did go through that.
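To make the recall-bill point concrete, here is a rough illustration with made-up numbers: the 1,000 recalled files comes from the story, but the average file size and per-GB rates are placeholder assumptions, not actual cloud pricing or CMS figures.

```python
# Illustrative only: why unplanned recalls from a cloud tier get expensive.
# 1,000 recalled files comes from the story; size and rates below are assumptions.
files_recalled = 1_000
avg_file_gb = 50                      # assume large mainframe tape volumes
egress_per_gb = 0.09                  # placeholder egress rate, $/GB
retrieval_per_gb = 0.01               # placeholder cold-tier retrieval rate, $/GB

data_gb = files_recalled * avg_file_gb
cost = data_gb * (egress_per_gb + retrieval_per_gb)
print(f"{data_gb / 1000:.0f} TB recalled -> roughly ${cost:,.0f} in one morning")
# With these assumptions: 50 TB recalled, on the order of $5,000 for a single morning,
# before you count the time the recalls themselves take.
```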
12:26
There are two types of clouds here: off-prem, like Google, AWS, or Azure, and on-prem, like Pure S3 storage, or we can do a hybrid between the two. We'll get into the advantages in a few minutes, but that allows them to protect their data in multiple ways they couldn't before. Anything to add to this, since y'all implemented it? Oh yeah, I was about to say, probably
12:46
the biggest thing is business continuity, replication, improved network utilization, things like that. That was actually one of the biggest pieces: we had to bring in these big fat pipes to move all this stuff, but with the dedupe, the compression, and the replication, the things we expected to
13:06
flood these 100-gig pipes we had going all over the place never did. We sat back and looked at it and said, man, we never cap out at more than 20%, and that was on the first major ingest. Then once you start getting just the daily changes and everything, it was
13:22
in the low gigabytes, and we said, OK. It ended up being a lot better than expected, and it also helped us understand that once we started doing East Coast to West Coast, even though the physical connection is 100 gig, we actually only have 30 gig worth of
13:40
bandwidth, and that helps from a utilization and cost perspective. And not even needing two or three real storage admins, obviously operations are simpler. One of the other things I was thinking about when Andy was talking was that
14:03
we had a similar scenario with on-prem versus the cloud on our open systems side. We were running it all on FlashBlade, and they said, well, we'll try this in the cloud, so try running it and getting
14:23
one of your open systems workloads talking and dumping directly to S3. So we said, OK, fine, we'll do it, because they said that's where we want to be anyway. And about three months later they called us up and said, can you take that VM back out, because it's costing us about $65,000 a month.
14:44
So obviously I was like, wait a minute, on-prem is actually more cost-effective than the cloud. That's another value. We talk about values and ROI, and these are all real values that we can see now with Pure Storage and what we do with Luminex with the mainframe data. Like you said, everybody thought we were going to use a huge pipe.
15:06
We barely used any of it, because of compression and because of where we handle the data, which makes a big difference our competitors cannot match. They don't even come close. And there's something that was mentioned in the last presentation: protecting your data through encryption.
15:24
How many of you have heard about quantum computing and quantum protection? We have been, and I say this carefully, classified as the only quantum-protected encryption on the market, and this is by three-letter agencies. I hope they're not listening right now, but they're always listening.
15:47
That's the truth. But the fact that we can do that means that with this data, even if they are hacked, we have ways around it. We can protect them. Matter of fact, they said of our 256-bit encryption that if you attack it with quantum computing,
16:06
it would take them two years just to break it down to 128. Guess what we can do on the fly, and this will make Chris cringe: I can change over the encryption on everything, and there are multiple petabytes, by flipping one switch. Now, there's a process called XM we have to go through,
16:27
which he knows all about; that's what makes him cringe. But if we have any clue somebody is breaking into us, we can change it. So we don't care about quantum computing. We don't care, because you're going to know it's coming. You'll see it; you'll recognize it.
16:42
They leave marks, and we can handle that. Nobody else is doing that the way we do it. Sorry, Chris. No, you're good. Go ahead.
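As a sketch of how changing the encryption across petabytes without rewriting the data (the "flip one switch" point above) is commonly achieved, here is a generic envelope-encryption pattern. This is not a description of Luminex's actual mechanism, just the usual idea: the bulk data stays encrypted under per-volume data keys, and the switch rotates the key-encrypting key and re-wraps those small data keys.

```python
# Generic envelope-encryption sketch (not Luminex's implementation): rotating the
# key-encrypting key (KEK) re-wraps small per-volume data keys, so petabytes of
# already-encrypted data never have to be rewritten.
from cryptography.fernet import Fernet

# Data keys actually encrypt the tape volumes; the KEK only wraps the data keys.
kek = Fernet(Fernet.generate_key())
data_keys = {f"VOL{i:04d}": Fernet.generate_key() for i in range(5)}
wrapped = {vol: kek.encrypt(dk) for vol, dk in data_keys.items()}

def rotate_kek(old_kek: Fernet, wrapped_keys: dict) -> tuple[Fernet, dict]:
    """Unwrap every data key with the old KEK and re-wrap it with a new one."""
    new_kek = Fernet(Fernet.generate_key())
    return new_kek, {vol: new_kek.encrypt(old_kek.decrypt(w)) for vol, w in wrapped_keys.items()}

# "Flipping the switch": only kilobytes of wrapped keys change, not the volumes themselves.
kek, wrapped = rotate_kek(kek, wrapped)
print(f"Re-wrapped {len(wrapped)} volume keys without touching the encrypted data")
```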
17:03
So, ransomware protection. I'm going to get into this. This is a huge push in the mainframe arena right now, huge, and nobody can really do it as well as Pure and Luminex do. We're working with a major account right now, one that has about 60% of all the data in the world, because of this. We use Pure SafeMode, and, I'm going to use the word air gap here:
17:25
you can lock down admin to the point where nobody else can touch it. It's hard to get into your S3 buckets through this air gap, or through the way y'all lock it down. When we write the tape data, we use what we call infinite snap, which allows us to keep multiple days of the same
17:48
volume, written over and over again. So let's say we've got 20 days out there, and you find out that on day 10 you were encrypted, I mean corrupted. How do you solve that problem?
18:03
Well, you do your forensics. You tell us what's going on, and we do some forensics too, and then we can fence off all the bad actors and your bad tape data sets. Why is that important? This is what most mainframe shops have found out in recent years.
18:21
They'll do it with their online storage, their IBM or Hitachi storage. But guess what, they don't do it on the tape. One thing people don't understand is that mainframes use tape as data processing for the applications; it's just like online storage. Chris said it a while ago in one of these conversations: it's a cheaper way to run
18:43
applications than running them on the online storage, and it's normally faster too. If you did not protect your tape data sets on virtual tape and you recall one, what happens? That bad actor comes back with the recall and corrupts everything else, and you start all over again, so you can lose everything you just did. We have customers using this today.
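A minimal sketch of the recovery logic just described, with hypothetical volume names and dates: keep daily immutable copies per tape volume, and once forensics pins down the first bad day, fence everything from that day forward and restore each volume from its last clean copy. The real workflow runs through the vendors' tooling; this only shows the idea.

```python
# Hypothetical sketch of the "fence off day 10" recovery idea: keep daily immutable
# snapshots per tape volume, and when forensics pins down the first bad day,
# quarantine later copies and restore each volume from its last clean snapshot.
from datetime import date

# Daily snapshots per volume (made-up names); in reality these come from the array.
snapshots = {
    "VOL0001": [date(2025, 6, d) for d in range(1, 21)],   # 20 days retained
    "VOL0002": [date(2025, 6, d) for d in range(1, 21)],
}
first_bad_day = date(2025, 6, 10)    # determined by forensics

for volume, days in snapshots.items():
    clean = [d for d in days if d < first_bad_day]
    fenced = [d for d in days if d >= first_bad_day]
    restore_point = max(clean)       # last known-good copy
    print(f"{volume}: fence {len(fenced)} snapshots, restore from {restore_point}")
```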
19:08
We've got it in our lab, testing it even further today. It allows us to give the mainframe world true ransomware protection. Nobody else on the market is doing this the way we're doing it, nobody. I mean, IBM has tried, but they're struggling with it, and those are really the only two players. You're going to have some
19:31
other people saying they're doing it, but they're not; they struggle with this. Ransomware protection in mainframe environments is extremely important. If you have a mainframe in your site, or if you have a mainframe customer and you're not talking to them about this, you are missing the boat.
19:47
Guess what else happens with these mainframe customers? They have insurance policies, and one of the things the insurance company asks them all the time is: what kind of protection do you have on your data? They get lower rates when they come in with something like this.
20:04
That's a huge savings. We're using the S3 buckets to do this, and I know y'all are doing this on the open systems side too to a certain degree, and we're using Pure to do it. Nobody else is doing this at all. Does anybody have any questions on this? Because this is about an hour's conversation, this part
20:24
alone. A quick one: are you doing any API integration? We're working on some of that right now; we are definitely working on that, and we've got it in the labs right now. So that's kind of yes and kind of no, but give me about another month and I'll give you a better yes.
20:43
So we are definitely doing some things there. Matter of fact, that's one of the areas around the ransomware protection where the APIs we're integrating come together. Right now I'm told it works already, but I can't give you an official yes yet. Uh-huh. Mhm.
21:04
In that case, a question: with snapshots, the space can be significant, and you talked earlier about change rates. You've got roughly 20 petabytes, right? Are the snapshots taking up a ton of space? Are you seeing a dramatic load
21:24
to manage it? Mhm. A lot depends on the data rate and the data change. We're not seeing a big impact at all. Yeah, we're not seeing any big impact. And another thing: when we talk about push-button DR, that's another version of the type of snapshot we're taking, and when they do the
21:44
DR testing, we're not talking about terabytes of data. To the end user it looks like terabytes of data, but we're actually moving gigabytes, even kilobytes, and that makes a big difference. It's just how we manage the whole situation.
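Rough numbers on why a snapshot-driven failover moves so little data; the logical size, change rate, and compression ratio below are illustrative assumptions, not CMS measurements.

```python
# Illustrative arithmetic: a snapshot-based failover only ships blocks that changed
# since the last replication cycle. The sizes and rates below are assumptions.
logical_data_tb = 1_000          # ~1 PB of logical tape data presented to the mainframe
daily_change_rate = 0.001        # assume 0.1% of blocks change between sync points
compression_ratio = 4            # assume 4:1 reduction on the changed data in flight

changed_tb = logical_data_tb * daily_change_rate
shipped_tb = changed_tb / compression_ratio
print(f"Failover test ships ~{shipped_tb * 1000:.0f} GB, "
      f"even though the user 'sees' {logical_data_tb / 1000:.0f} PB fail over")
# => ~250 GB of actual transfer for a petabyte-scale logical failover, under these assumptions
```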
22:03
I'd say even on our FlashBlade, our growth rate, we've had a tremendous amount of growth because of the amount of data we've ingested, right? But once it got there, it's fairly flat, probably 5% a year or something like that, and that's really more just changes in operational data sizes and that kind of thing.
22:21
The amount we consume via snapshots and things like that is small. Another thing to talk about: I know the dedupe on the Pure side is there, but we also do compression, and we've seen as much as 9 to 1 on mainframe data. You go talk to people who say, well, we get 2 to 1, or 3 to 1, or 5 to 1; we've
22:45
seen 9 to 1. Now, you're going to take me into a customer, and North Carolina Farm Bureau is exactly one of those customers that compresses everything on the mainframe before we get it, so there we get 2.5 to 1. But with normal data, in normal environments, we can see as much as 9 to 1 or higher. Anecdotally, on our FlashBlades, open systems and mainframe are
23:07
separate FlashBlades when we're doing all of this. The FlashBlades that back the Luminex don't reduce much from a Pure perspective, because the data is already optimized, so when it lands we're storing it roughly 1 to 1. But it's kind of interesting: on the open systems side we actually still get compression on the backups and all
23:30
that kind of stuff on the FlashBlade. And the only reason we do it upstream on the mainframe side is because we do a lot of reporting and, like I said, the push-button DR movement of the data, and we'll get into more of that. When we do those things we need control of that information; that's why we do it.
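For a sense of what those ratios mean for sizing, a quick illustration: the 9:1 and 2.5:1 figures are from the discussion above, while the logical footprint is made up.

```python
# What the quoted reduction ratios mean for physical capacity (logical sizes assumed).
def physical_tb(logical_tb: float, ratio: float) -> float:
    """Physical capacity needed for a given logical footprint at a reduction ratio."""
    return logical_tb / ratio

typical_mainframe = physical_tb(1_000, 9.0)     # ~1 PB of typical mainframe tape at 9:1
precompressed     = physical_tb(1_000, 2.5)     # same footprint, host-compressed first

print(f"9:1   -> {typical_mainframe:.0f} TB physical for 1 PB logical")
print(f"2.5:1 -> {precompressed:.0f} TB physical for 1 PB logical")
# 111 TB vs 400 TB: where the data gets reduced matters a lot for sizing.
```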
23:50
Now, there's a question that always comes up with Luminex: what's your maximum throughput? IBM loves to ask that question. We don't know what that means; we don't have a maximum. Right now we've got 52 CGXs in there. Each CGX we know can push more than 1,200 megabytes per second, because that's
24:12
the maximum we test out at, and we know it will go faster than that. So we have no limitation on throughput. If a customer comes to you and says, well, we need this much throughput: OK, that's fine. We don't care.
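Taking those figures at face value, the aggregate works out as follows; the 52 units and 1,200 MB/s per unit are from the talk, treated here as a floor since each unit is said to go faster.

```python
# Aggregate throughput implied by 52 CGX units at >= 1,200 MB/s each (figures from the talk,
# treated here as a lower bound).
cgx_count = 52
per_cgx_mb_s = 1_200                      # MB/s, the tested floor per unit

aggregate_gb_s = cgx_count * per_cgx_mb_s / 1_000
print(f"Aggregate floor: ~{aggregate_gb_s:.1f} GB/s")

# At that rate, how long would one petabyte take?
pb_bytes = 10**15
hours = pb_bytes / (aggregate_gb_s * 1e9) / 3600
print(f"One petabyte in roughly {hours:.1f} hours, ignoring everything but the tape layer")
# => ~62 GB/s and ~4.5 hours per petabyte, as a rough lower-bound estimate
```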
24:27
What's the limitation on your FlashBlades? Has anybody ever hit it? Not really. We've tried to figure that out. Actually, when Andy was going through that I was sitting there thinking, because obviously when we built out the FlashBlade environments,
24:43
even on the smallest ones we always made sure we had a certain number of blades to get the right throughput and that sort of thing. But we've got 160-gig LAGs going in and out of every one of our chassis arrangements, and it's one of those where, I think we're both saying the same thing, everybody was
25:06
thinking our workload was just going to blow them away, and actually we aren't straining them at all. A little story: the CTO we were hoping could be here today had a situation and couldn't make it. IBM told him, well, you're going to blow right through that box with your first
25:26
data center. Fifty-two CGXs and multiple data centers later, we're probably running at about 40 or 50%. We're not running hard at all. I'm sweating, but this here, ransomware protection: every mainframe customer has this problem, and if you're not getting in there, if
25:48
you're a user and you're not talking to your executives about this, you're missing the boat, because guess what, it's going to hit you one day. It's going to hit you one day. Oops, is that it? No. Yeah, MDI.
26:10
This is a huge one here, mainframe data integration. I was talking to somebody earlier today with Pure, I can't remember who it was, I've talked to a lot of people, and he was saying, well, I've got a customer whose accounts receivable team is having a hard time getting data down to an open platform for analytics.
26:29
I said, what do you mean? He says, well, it's taking them forever to get it down, and it's costing them a ton in MSUs. I'm like, well, why don't we set up a meeting Monday morning? I can solve that problem for them. What the MDI does is take the data off the mainframe through FICON channels,
26:47
and you don't have to know what a FICON channel is, but it's very fast, and writes it to open Pure Storage, NAS storage, that you can do analytics from.
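Once the extracts land on NAS, the analytics side is ordinary open-systems work. A hypothetical sketch: the mount path, file name, and column names are invented, and the actual extract formats depend on how MDI is configured.

```python
# Hypothetical example of the open systems side: once a DB2 extract has landed on the
# NAS share, analytics tools read it directly, with no MSUs consumed on the mainframe.
# The mount path, file name, and column names are invented for illustration.
import pandas as pd

claims = pd.read_csv("/mnt/mdi-landing/db2/claims_extract.csv",
                     parse_dates=["service_date"])

# Example analysis: daily claim volume and total paid amount.
daily = (claims.groupby(claims["service_date"].dt.date)
               .agg(claims=("claim_id", "count"), paid=("paid_amount", "sum")))
print(daily.tail(7))
```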
27:04
What's the value of that? There's a large retailer in Arkansas, the largest retailer in the world. When COVID hit, they were having to do everything online, and they had a problem with that because they were going to have to buy a brand new Z processor. Do y'all know how much that costs? Have a guess? Tens of millions of dollars, plus software.
27:24
How much would you say a large Z costs, $20 million? Yeah, I was about to say, the ones we just upgraded to were several million dollars apiece. So we walked in there and said, OK, let's test this out. Let's put an MDI box in here. We'll pull the data off the DB2 database.
27:41
This was the first use case: we'll write it to our storage and then take it up to Google Cloud, because they wanted to do the analytics on Google Cloud for all their inventory worldwide. No big deal. We did it. We were running almost 40 times faster
28:02
than what they were trying to do on IBM in a test. Second, we used almost zero MSUs. Somebody asks, what's the ROI on that? It's almost immediate; the whole solution pays for itself almost immediately. Walmart swears by this now, and they have brought other applications in.
28:19
They take the data that's going through TCP/IP on the mainframe and run it through us instead, because it's cheaper to do that. I can tell you this, and he can probably tell you the same thing: any time you run anything through an IP port on the mainframe, it's very costly.
28:36
The MSUs will eat you alive; it's very costly. There was another company, one of the largest transportation companies, with trains all over the place. They were working on getting their boxcar and inventory data down, and they could only get it down to eight hours.
28:57
They were struggling with that, because they were losing inventory, losing customer confidence, and everything else. We went in there, and I remember the CIO, she told me, well, I just don't believe you boys can do this. She was an old Southern lady. I said, what do you mean? She said, I'll let you put it in here, but it ain't gonna work.
29:17
We've spent tons of money trying to make this thing work; there ain't no way y'all are gonna make it work. But guess what? We did it in 28 minutes. We went from eight hours to 28 minutes. The Wall Street Journal wrote an article after that saying this company was able to
29:35
get a better handle on the inventory in their boxcars and make customer satisfaction go up, and that's why their revenues are going up. We were able to put this in place. Why is this important to Pure? Every one of your customers has this need, and when we pull the data off,
29:48
where do we have to put it? We have to put it on storage. Where does that storage need to be? It needs to be NAS storage. That's huge. It works together. American Express does it all the time;
30:03
anything we do with credit cards is going through them. A huge insurance company: actuarial data. It was taking them all weekend long to get the actuarial data off the mainframe, and if it fails, and Chris knows this as well as I do,
30:19
if a mainframe job fails, you don't just restart where it failed; most of the time you start all over or you cancel it. So they were canceling runs left and right and going weeks without true actuarial data, and if you know insurance companies, you know that's important. Well, we went in there, same story, and we put it in.
30:41
We pulled the data off the DB2 databases inside of two hours, petabytes of data inside of two hours. So now they run the actuarial data on a daily basis, and their executives are saying we're more accurate than we've ever been. What's that worth to them?
31:03
What the MDI does is let you take DB2, VSAM, and other types of data off the mainframe and put it on your storage, or we can go to a cloud, for analytics, for data processing, for any kind of information they need in open systems. I believe it was Josh who was telling us the story this morning, and I'm like, just get in
31:28
front of these people. I'll drop an MDI box in there in a heartbeat, and you get Pure storage in there, and if it's taking them that many hours to do processing, we can show them we can beat it. Now, this again is another hour-long conversation with a bunch of slides we could go through. But it saves time.
31:53
IBM is so impressed with what we're doing that they have decided to try to copy it. But that's going to take a minimum of two years, and they still won't figure it out. I'm not trying to knock IBM, because they are a partner, but they're such a big machine that it's hard
32:10
for them to move fast, and that's exactly how we can make this happen. So any place you've got a mainframe customer, we need to be there, whether it's the ransomware protection, MDI, or virtual tape. Like Chris was talking about, there's no comparison between what we do and what IBM does,
32:32
no comparison when you talk about the marriage we have today. We're going to be cheaper, easier to use, faster, and actually more reliable. You just mentioned something a second ago that made me think about something we were talking about a
32:52
little earlier, about how the mainframe handles tape data. Tape data is just like regular DASD, to use a good old mainframe term; basically it treats it just like disk storage. So it's going to go pull something
33:07
from tape just like it would from disk storage to process something, for batch processing or anything like that. You were talking about being down, and obviously one of the wonderful things about FlashBlade is the resiliency that's built into it.
33:21
You get multiple chassis, which means every one of those blades is processing power, and then obviously the uplinks into the environment, the amount of redundancy, the resiliency. We were talking about this at lunch, but
33:40
if a blade goes bad, I get a message from Pure that basically says, hey, we saw this blade fail, open up remote assistance, we'll check it out and restart it. About an hour later they're like, oh yeah, we restarted it, everything is good, move on. Or, no, this blade needs to be replaced; we're going to
33:56
shut it down, evacuate everything off of it, and have a guy come out to replace it. That's a big deal from a mainframe perspective, because if that tape is down, then you might not be doing batch processing or other processing, which means whatever business you were trying to do on that mainframe is not getting
34:17
done. One of our customers told me that if they're down, it's a million dollars a minute, so we can't go down. We have something called STM that we put in along with your FlashBlades, and y'all will love this from the Pure side, because it means you've got to double everything you deploy.
34:36
So if it's a petabyte, you need to put two petabytes in. But it guarantees what's called continuous availability. We're the only ones that offer continuous availability. It guarantees that no component or group of components will ever take your system down.
34:54
And we can actually do it across buildings, so even if you lose a whole building, your data is still processing, and you do not have to restart a job. You do not have to restart a job. Most people do have high availability, and that's what he was talking about,
35:10
and that's better than what most people have today. But in very critical situations, if you cannot afford to ever go down for any reason, we're the only ones with continuous availability. And this works really well with the Pure storage; matter of fact, we highly recommend it.
35:29
So, this FlashBlade technology is something I've become a big fan of ever since we first started talking. I've been preaching it to a lot of places, and I think the future is bright.
35:45
The biggest problem is we don't get out and talk to people enough. If you're a customer and we haven't talked to you about it, give us a chance to show what we can do and how we can save you money. If you're a Pure rep and you're not opening this door, you're leaving money on the table. Matter of fact, who was your Pure rep when we first started at
36:09
CDS? Oh, Matt Martino. Yeah, he retired, didn't he? He retired; I can't speak to that conversation. I just know that, yeah, you are leaving money on the table,
36:28
just to be blunt, money that you haven't recognized before. And it's not that hard to collect, because everything we talked about is real. CDS is a great reference for us. Somebody asked the other day, do you know anybody that's large? Well, they're pretty large; they've got a little bit of data.
36:51
And actually the total amount of storage in there is not 20, it's 40 petabytes with everything, all the locations, considering what we keep in the cloud. Yeah, it's 40 petabytes of data.
  • Healthcare
  • Pure//Accelerate