
Modernise Gaming Industry Software Version Control with Perforce on Pure Storage

Discover how Pure brings scalability, performance, and improved developer productivity to Perforce, a software version control (SVC) platform for large repositories (e.g., the gaming industry).
Transcript
00:08
Bikash Roy Choudhury: Hello, and welcome to Accelerate 2021. Good morning, good afternoon, or good evening to everyone. Today we're going to talk about modernizing software version control with Perforce and Pure Storage. We'll talk about what exactly software version control is, where it applies, who
00:28
needs it, and why it is important; then the architecture, the challenges you normally have with Perforce, and how Pure Storage helps to eliminate them. My name is Bikash Roy Choudhury. I'm a technical director, working primarily on solutions for
00:46
software development, DevOps, and EDA. So, today's agenda: we'll talk about the areas where Perforce is applied, what exactly Perforce is (it is, obviously, an application) and what it does, and, very briefly, the architecture.
01:09
We'll cover what customers normally do today in their production environments, the challenges they come across, and how Pure Storage alleviates those challenges. We'll talk about one of the primary components, the database, and how we actually back up and restore it, because
01:27
without the database the Perforce application is inaccessible; it is dead in the water. Even though the files are still there, you cannot access those files without the database. So we'll talk about that. Another challenge Perforce users normally face is
01:43
workspace creation time. For customers with large repositories in their version control system, checking code in and out takes a very, very long time when creating user workspaces. We'll talk about how we address that challenge with FlashBlade. And finally, we
02:04
conclude with the key takeaways. So let's look into what exactly Perforce is, look into the architecture, and then talk about the performance. So what exactly is Perforce? Perforce is an application used for source control management. Now, what is
02:25
source control management? Whenever developers write code, they make multiple iterations of the code they're writing, and there must be some form of version control: every time you write new code, is it a new version? And how do you manage
02:43
those different versions? Just think about a Google Doc or the Google Slides that you're creating. In Google Docs or Google Slides, every time you make a change, it creates a history of all the changes you've made, so you can go back in time and pull out exactly what changes you have
02:58
made. That is exactly what source control management does: any time a developer, or anybody who's writing code, writes a new piece of code or makes changes or additions to the code, the code has to have a new version assigned to it, so that they can always go back in time
03:16
and find out exactly what changes were done. That provides change management, or change control over the changes that have been made. So what normally happens is, when developers create new code or make additions to existing code,
03:33
they check out the code. Checking out means they are pulling the code out from the main repository into a work area. The work area could be sitting on a laptop, or it could be on a desktop sitting in the office. So you pull the code out and then start making changes,
03:51
because you do not want to touch the pristine mainline code; making changes in the main location would pollute the entire code base. That is the reason, for security reasons and for mitigating risk, we check out the code and make changes there. Once the changes are done, you run
04:07
some tests to make sure the changes behave the way they should, and then you check the code back into the mainline repository. So there is a checkout and check-in process that happens for developers. And once the source code is checked in, you compile the code, and the
04:22
output is a binary, like an .exe or .bin file, which could be used for production. Now, there are various different applications out there, including Perforce; there is also Subversion and IBM Rational ClearCase. These are some of the names that will come up in your customer conversations.
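To make that checkout and check-in cycle concrete, here is a minimal sketch using the standard p4 command-line client; the depot path, file name, and changelist description are hypothetical placeholders, not something taken from the session:

    # Pull the latest revisions from the depot into the local work area
    p4 sync //depot/game/main/...
    # Open a file for edit (the "checkout"); Perforce tracks it in a changelist
    p4 edit //depot/game/main/src/renderer.cpp
    # ...edit, build, and test locally...
    # Check the change back in to the mainline repository
    p4 submit -d "Fix frame pacing in the renderer"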
04:37
But what is the difference between these and Git, a term you have been hearing a lot nowadays? Git has become one of the most commonly used software version control systems, mostly used in modern application development. The big
04:58
difference is that applications like Perforce, Subversion, and IBM Rational ClearCase are primarily for source code management for large repositories. Think about the gaming industry, which has a ton of games, various versions of them, and different kinds of extensions to each game. And
05:17
every time you have that code, it is in terabytes, and there are multiple revisions of it. In a situation like that, Git may not be as effective, because the files are large, and Git normally manages small files. That is where Perforce is one of the
05:35
predominant SCM tools used by our customers in gaming and in the various other verticals you come across: EDA, high tech, media and entertainment, and automotive. So if you're talking to any of these verticals, you should be asking how they are managing their source code today and
05:53
what applications they are running; and if Perforce turns out to be the answer, then I think we have an opportunity to talk to them, find out what the challenges are, and show how Pure Storage can help. So, at a very high level, there are three
06:13
main challenges with Perforce today. First of all, scalable performance. I was giving you the example of a gaming company with these large repositories. But think about it: if the repositories are large, your development team size will be equally large. That means you
06:32
will have developers sitting not only in local locations but also at remote sites. And when you have so many different developers trying to make changes to the source code, you need scalable performance. With the native architecture, deployments of
06:48
Perforce on local servers tend to run into challenges as you start to scale the number of users. Keep in mind, every time you add a new user, that user will be checking code out and checking it in, and they could be doing it across various revisions, release
07:08
branches, feature branches, or debug branches. Depending upon how your repository and your source code management are set up, a user could have multiple work areas, not just one, for working on the source code. And then, if there is a
07:29
failure, the recovery is very slow. That is what I was mentioning in the agenda: Perforce has a way of taking its own checkpoints, which are also known as snapshots in storage terms. But the checkpoint takes time depending upon the size of the database, and the checkpoint only applies to
07:48
the database, not to the repository. There are two different components here: the database and the repository. Why do we need a database? The database is something like the index of a book, which gives you the page number of the chapter that you'd like to
08:03
go to. In the same way, the database is the metadata that tells you where in the repository the different files you're looking for live: the source code, the branch information, all of that is stored in the metadata. So if your metadata is down,
08:19
it is like losing the index page: it is very challenging to go and find your files. So it is always recommended to protect the database and also to give it good performance, so the queries and commands you run against the database are fast enough. In a typical scenario,
08:42
Perforce is implemented on local storage, on an ext4 or XFS file system, and the repository also sits on local storage. Now, if your development team is small enough, where you have fewer than about 50 check-ins and checkouts in a day, and
09:04
the number of builds you run after your code changes, the compilation process I mentioned earlier, is limited, then you're perfectly fine. But the challenge comes when you start to scale the number of users, where every user has
09:19
multiple work areas of their own. Every time you check out the code, you make changes, you run your build and your tests, and you make sure the changes you've made are fine before you check back in. So if anything happens, your data has to be restored properly, and we will talk about how Perforce
09:37
does that natively today and how Pure Storage can help in that architecture. The third challenge, as I was saying, is checking code out and in: the repositories are sometimes so big, in the higher gigabytes or maybe a terabyte, that when you check out code over the
09:57
network to your local machine to make changes as a developer, it takes a lot of time. Consider also that there will be many developers in different locations and different time zones trying to check code out and in. So that
10:15
itself is a huge bottleneck, not only on the network but also on the Perforce server, because that is what serves those requests. That is why it becomes much more challenging when you have a lot of developers requesting multiple user workspaces, or work areas, where
10:34
they can start working on the code, making changes, and building and testing them. So, in the Perforce architecture, there are the two main components I was talking about. One is the metadata, which you see in the bottom box: the Perforce server with the p4
10:50
database, p4db. p4db is the heart and the brain of the entire Perforce implementation. It is always recommended to put it on block back-end storage, and that is why most customers use local servers for it, because
11:11
the performance is absolutely key. Now, there are customers who also run it over shared storage, over NFS, but NFS always has some overhead. That is why, for the best and most optimal performance, it is recommended to configure the Perforce database
11:27
on iSCSI or FCP, depending upon what you have in your environment. Then there are other components tied to the Perforce database, such as the transaction logs, or journals. Transaction logs are mostly needed when you're trying to recover from a disruption or
11:43
some kind of failure that has happened and you would like to recover the database. Just like the redo logs in Oracle, which you replay, think of these journals as the redo logs that get replayed against your database to recover from the failure. And then you have
12:02
database consistency. This is where the p4 checkpoint, the snapshot I was talking about earlier, comes in. Basically, Perforce quiesces the database and takes a p4 checkpoint, and the time that takes depends upon the size of the database. In an average Perforce environment, your
12:19
database would be around 500 gigabytes or less. But in industries like gaming, automotive, or EDA and high tech, these databases can be one terabyte or more. When you take a checkpoint of one terabyte, that takes a long time. And not
12:42
only do you take the checkpoint, you have to move that checkpoint to a location from which you can recover it at a later time, and that recovery also takes a long period. We'll talk more about it as we proceed through the presentation. So that is all about the performance-sensitive part of the Perforce architecture.
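For reference, the native checkpoint and recovery workflow described here looks roughly like the following with the p4d server binary; the server root and checkpoint number are hypothetical:

    # Take a checkpoint and rotate the journal (briefly quiesces the database)
    p4d -r /p4/root -jc
    # After a failure: replay the most recent checkpoint to rebuild the db.* files,
    # then replay the current journal to roll forward to the latest state
    p4d -r /p4/root -jr checkpoint.42
    p4d -r /p4/root -jr journal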
12:59
The less sensitive part, which instead requires scale, is the versioned files: the depot, the file store. That is where you actually have the physical files, the different versions of files, the different branches created for your development, all of that. Plus, you
13:17
have the user work areas I was talking about, where users do their checkouts and check-ins. Instead of doing that on your local machine, what happens if you create an NFS share and create your work areas instantaneously on that NFS share, rather than pulling the code down
13:36
onto your laptop or your personal desktop? One reason for that is security, and another is ease of management. Because when you have a central location where you're creating user work areas, you can set limits on how many work areas you can have, and you can apply
13:54
patches on the work areas rather than chasing people around for desktops hidden somewhere else. And the important part is security: how do you secure your code? If somebody has the code on a laptop and walks out of the building, the code is no longer secure. So you
14:10
need security in your development process as well. A central location allows everything to be kept in one place, making it easier for the admin to manage, apply the right patches, and keep it secure.
14:27
Now let's look into this architecture. As I was mentioning, on the left-hand side of the picture we have the Perforce database, the checkpoint, the journal, and the logs, all implemented and configured on FlashArray, which has
14:42
been backed by LUNs. XFS is the recommended file system to use with the latest versions of Perforce, the 2020 release and later; the older versions could use ext4 for that
15:03
matter, but the newer ones should use XFS when configuring on FlashArray. Now, if you look at this architecture on the right-hand side, you have the Perforce depot, the p4 depot, in green, and then you have all those orange boxes, which are the Perforce user work areas
15:21
I was talking about earlier. Those sit on FlashBlade. The purpose of doing this is, as I was saying: on the left-hand side, on FlashArray, you have the performance-sensitive pieces; the database is absolutely performance sensitive, and so are the checkpoints and your transaction logs,
15:37
the journals. Those need to sit on a block device, whereas the depot, which has an extreme requirement for scalability in large environments, needs to be on an NFS share. Now, the
15:54
question is, will it run on FlashArray? Absolutely. We have customers who run both the database and the depot on FlashArray, absolutely fine. But there will be a limitation in the amount of scale you can get from an end-user perspective. And that
16:12
is the reason why a lot of our customers are consciously moving off the block device at the back end onto a more scalable, shared infrastructure over NFS.
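As a rough illustration of this hybrid layout, the host-side provisioning could look something like this; the device name, mount points, and FlashBlade address are made-up examples:

    # Performance-sensitive metadata: XFS on a FlashArray LUN (iSCSI or FC block device)
    mkfs.xfs /dev/mapper/p4db-lun
    mount /dev/mapper/p4db-lun /p4/root
    # Scale-sensitive depot and user workspaces: NFS share served by FlashBlade
    mount -t nfs flashblade.example.com:/p4depot /p4/depot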
16:35
Now, this slide shows the improvements we saw in this architecture. As I said on the earlier slide, we have a hybrid architecture where we position the database and a few other components on FlashArray-backed LUNs with an XFS file system, while the depot, the versioned files, sits on FlashBlade over NFS. The numbers you see here, the
16:54
graph you see here, are primarily for all the file activities happening in the depot as we scale the number of users and operations. There were some very specific operations, like p4 sync, which is a very common operation in Perforce, and we saw about a 14%
17:14
improvement compared to local storage. The blue bar is local storage: we ran the test on SSD-backed local storage, and then we ran it on FlashArray and FlashBlade, the hybrid architecture we saw on the previous slide, running the same file system operations. We
17:31
saw about a 14% improvement for p4 sync. There are a lot of other operations too, and they're all documented in the white paper; the link to the white paper is towards the end of the presentation, and you can take a look for a more detailed view. And then there is an 11%
17:44
improvement on file system operations for things like the Perforce edge servers you create. Now, what is an edge server? You have a Perforce server, which is primarily responsible for read and write operations. Checking out is a read operation,
17:59
checking in is a read-write operation, and you're doing a ton of those. In a non-Pure-Storage environment, if you're running Perforce on local storage, you're always tied to the local storage bandwidth and the network bandwidth you have, and the scale
18:17
is limited. So what Perforce chose to do is to spread out and create edge servers. That means you can use those edge servers for read-only purposes, whereas your writes go directly to the main Perforce server to commit the changes you've made. So what happens is, when
18:36
you have multiple of these edge servers, you are able to get a bit more scalability from a Perforce-architecture perspective, but you end up with so many different servers in your
18:53
environment that you get into server sprawl, where you have to manage many different servers independently. That is one of the main design challenges we normally see in a Perforce environment, and one we can mitigate when we implement this with Pure Storage.
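For orientation, commissioning an edge server in Perforce's distributed architecture revolves around server specs and pull-based replication. A heavily abridged sketch follows; the server ID, paths, and port are hypothetical, and a real rollout also involves service users and seeding the edge from a checkpoint:

    # On the commit server: create a server spec for the edge
    # (set Services: edge-server in the spec form)
    p4 server edge-london
    # Configure the edge to continuously pull metadata from the commit server
    p4 configure set edge-london#startup.1="pull -i 1"
    # On the edge host: stamp the server ID and start p4d against its own root
    p4d -r /p4/edge -xD edge-london
    p4d -r /p4/edge -p 1666 -d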
19:12
So let's go into backup and restore; this is one of the key areas we are focusing on. When you install your Perforce database, the database is actually written to two locations. This is the default setting for Perforce: you have a db.1 and a db.2. The reason
19:26
they designed it that way is that db.2 is a passive database; db.1 is the active database. What that means is that every change that is made updates the database: your db.1 gets updated right away, because that's the active database, and
19:43
immediately db.2 is synced up too. That means there is traffic going to two different locations on your Perforce server, and that is the Perforce design, which would typically run on a local server. Even if you configure it on FlashArray, it's still the same, because it's
20:00
the same design. Now, what we recommend is that you can still go with the same design, or you can have just the single database, db.1. Instead of going to db.2, which would just be writing the same data to a second location, you work with a
20:21
Pure Storage technology called snap to NFS, and we'll talk more about that. Now, what exactly happens on db.2? db.2 is mostly used for taking checkpoints, because, as I said, a checkpoint takes a long time depending on the size of the database. Now, if you were to
20:38
take a checkpoint on the main database, you would be pausing database activity until the checkpoint is complete. To avoid that disruption, Perforce came up with this architecture where you have a db.2 and you take the checkpoint on db.2 instead.
20:54
Now, as I was mentioning, when you have a larger database, your checkpoint time is correspondingly higher; it takes a long time to take a checkpoint. And then you have to move the checkpoint to another location where you can
21:14
preserve it, so that you can recover from it later when a failure happens. In my customer conversations, I hear that when they have a large database, they do not take checkpoints that frequently; they take one maybe once a week. So what happens if there is a failure in
21:27
the middle of the week and you're not able to access the database? The last checkpoint you took was Sunday night, and you have the failure on Wednesday morning. Can you roll forward all three days? Those
21:42
journal logs are only kept for a certain window, a few hours, maybe 24 hours at most, not three days. So how do you roll it forward? Obviously, you will lose some amount of data. And the reason they don't take frequent checkpoints is that a checkpoint takes a
21:56
long time. That means your recovery point objective and recovery time objective are much more stretched. Also, because you're storing a lot of these files on local storage, there is no compression, because Perforce doesn't do any compression. And, as I was saying, as we
22:16
keep adding storage for capacity, or keep creating edge servers, there are a lot more servers to manage. Now, that is the traditional way of doing things. What follows is something we tested in a lab, working
22:33
together with Perforce. They gave us instructions and guidance on how to run the tests; even the performance tests we talked about on the previous slide were done in collaboration with Perforce, and this has been officially endorsed by Perforce.
22:50
If you look at this slide, on the left-hand side is what I was describing: db.1 and db.2, the native architecture, running on local storage, and then you have your journal, your depot, and everything else. If that goes down, if that
23:06
entire volume goes down and you want to recover, look at the right-hand side: the checkpoint took about 110 minutes; then, to recover, the data was synced back from the checkpoint location and the restore took 90 minutes, an overall recovery time of about 122 minutes.
23:20
That's a long time. That is the reason a lot of those big gaming companies would rather stagger their checkpoints to once a week, so they don't have to spend so much time on this. Now, with Perforce on Pure Storage, we could use a technology
23:39
called snap to NFS. Remember I was talking about db.2? Instead of db.2, you just do a clone of db.1, which is sitting on FlashArray; we use FlashArray for the cloning. Then we create a snapshot, where that clone becomes part of a protection group with a backup
23:59
folder on FlashBlade, because we are using FlashBlade anyway for the p4 depot as well as for user work area creation. So we just create another folder, tied into a protection group that has been enabled for snap to NFS. Then what happens is, whenever you're
24:17
taking a snapshot of the db.1 clone, that snapshot has not only the metadata but the data too. So you can set a much higher frequency: instead of doing a p4 checkpoint once a week, you can literally have these snapshots generated five or six times a day. And
24:38
then you can just set a policy to keep the most recent snapshots on the local FlashArray and move the older snapshots into the backup folder on FlashBlade. So in case any kind of disruption happens, we can immediately recover from a snapshot, and the database
24:59
can be up and running. So if you compare the left-hand side and the right-hand side here, with the snap to NFS technology we were able to recover in under one minute. Just imagine: it's 20x faster. So when you actually have this conversation with a customer,
25:20
understanding how their backup and restore works, what their existing RPO and RTO times are, and what we can accomplish using Pure Storage is absolutely important. And that's where we talk about this 20x-faster data recovery time in case of a failure.
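On the FlashArray side, that clone-and-snapshot workflow can be sketched with the Purity CLI roughly as follows; the volume, protection group, and snapshot names are hypothetical, and the snap to NFS offload target itself should be configured per Pure's documentation rather than from this outline:

    # Clone the active database volume (instantaneous and space-efficient)
    purevol copy p4db p4db-clone
    # Snapshot the clone as part of a protection group, several times a day
    purepgroup snap --apply-retention p4db-pgroup
    # To recover: materialize a new volume from a chosen snapshot, then restart p4d
    purevol copy p4db-pgroup.1234.p4db-clone p4db-restored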
25:39
Now, keep in mind, this is all to do with Perforce database recovery, because the database is the brain, and if you lose your database you can't access your versioned files; so it is very, very important. Just to summarize the benefits you get: obviously, as I was saying, with the cloning technology you can spin
25:56
up a couple more database instances for your edge locations and for high availability, because you can then serve your read workloads much faster, and isolate the read workloads from the writes coming into the main server, with zero downtime. Because of DirectFlash, we get much
26:13
faster performance; you saw the performance numbers. On scalability, we definitely get a lot more, about 10x, when we scale the number of users on FlashBlade accessing the versioned files and creating user work areas. The backup and recovery time improvement is from 20x
26:29
to 90x, depending upon the size of the database; the 20x was primarily for the 350-gigabyte database I was testing in the lab. If you have a one-terabyte database, you can imagine the time it would take to take a snapshot or a checkpoint and then recover, versus what you can do
26:45
with Pure Storage. And then finally, the data reduction: from the testing we did, we saw 9:1 data reduction on FlashArray. That is massive data savings. So this is the overall value, the benefit you normally get with Perforce on Pure Storage.
27:06
Now let's look into workspace creation. As I was mentioning, when users start to check out files, they need to do that in their own work areas, and I talked about the security aspect of doing that on a local laptop: the code travels with
27:21
the user wherever he or she goes. So it is absolutely important to put the code in a central location where you can actually manage it. And how do we create those workspaces? When you use the regular Perforce checkout process, depending upon the repository,
27:38
and as I was saying for the gaming industry these are huge repositories, a checkout takes somewhere between 15 minutes and an hour; we have seen data from testing showing even more than that, hours together, to create these work areas. And this is just one work area, and
27:52
as I mentioned, users create multiple work areas. That takes a lot of time, and it really limits developer productivity: until the work area is ready, he or she cannot get started on the work. That's the
28:07
reason a lot of these organizations have major concerns about it. So that's what I meant by faster workspace creation as you scale the number of users. And there is overhead: whenever you're checking in and checking out, all of this hits the Perforce server, and the server is equally
28:27
taxed. The CPU really goes through the roof; it hits about 98 to 100% utilization much of the time. That becomes a big challenge, because the Perforce server is completely starved of resources when you have a high number of users checking in and
28:45
checking out. So what we have done as a solution is to use the Rapid File Toolkit with FlashBlade, and keep in mind we have these work areas created on FlashBlade, and we have the versioned files sitting on FlashBlade. So what we did was use the Rapid File Toolkit, which
29:04
has an operation called pcopy, pure copy. You probably know what the Rapid File Toolkit is: a host-based utility, written and developed primarily for FlashBlade, for NFS shares from FlashBlade, covering a lot of the traditional Unix commands, like
29:26
ls, find, cp, and du, which take a long time when you have a deep directory structure. Those are the standard Linux commands; when you use the toolkit equivalents, pcopy, pls, pfind, or pdu, they are a lot faster, up to 5x to 50x faster depending upon the directory
29:48
structure. pcopy is also the most recent addition to the toolkit. We have been using it here, for a couple of reasons.
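For flavor, typical Rapid File Toolkit invocations mirror their Unix counterparts. A hedged sketch, since exact options can vary by toolkit version and these paths are invented:

    # Parallel listing and search across a deep directory tree on the NFS share
    pls /mnt/p4depot/game/main
    pfind /mnt/p4depot/game/main -name "*.cpp"
    # Parallel copy of a directory tree, e.g. to seed a user work area
    pcopy /mnt/p4depot/game/main /mnt/workspaces/alice/game-main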
30:06
Now, we could always have the conversation that our competitors have a cloning ability while FlashBlade doesn't have a cloning capability. A competitor may say, okay, we can do volume cloning. But that is volume cloning, not the directory-level cloning we're doing. Traditionally, customers tend to create user work areas as directories along
30:23
with the source code repository. Every version control setup will have different kinds of branches; I mentioned a production branch, release branches, feature branches, debug branches, various different branches. And from every branch, the end users
30:40
will be checking out code. So what they normally do is create the user workspace directories inside one large file system. Now, a competitor can do cloning, but they cannot do cloning of directories; they only do cloning at the file system
30:57
level. What pcopy does, because it runs on the host, is give you the ability to do a copy operation from the host at a directory level. So you can pick and choose whatever directories you'd like to copy over to create the user work area, and then register the
31:19
work area with Perforce. Because, keep in mind, pcopy is a Pure utility; how would Perforce know about the data that has been copied over into a work area? Once you copy the files over to the user work area, there is a Perforce command to register that work area with the Perforce database, and after
31:38
that it becomes official: Perforce understands the work area, so whatever changes you make in that work area and check in, it is going to acknowledge those changes as if you had synced the files natively.
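Putting the two steps together, seeding and registering a workspace might look roughly like this. The client name, paths, and depot layout are hypothetical; the registration command being described is p4 flush (equivalent to p4 sync -k), which updates Perforce's have-list without transferring any file content:

    # Define a client workspace whose root points at the NFS work area
    export P4CLIENT=alice-game-main
    p4 client    # set Root: /mnt/workspaces/alice/game-main in the spec form
    # Seed the work area by copying files directly on FlashBlade, bypassing p4d
    pcopy /mnt/p4depot/game/main /mnt/workspaces/alice/game-main
    # Tell Perforce this workspace already has these revisions (no data transfer)
    p4 flush //depot/game/main/...#head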
31:56
So we ran a test with 10 different Linux hosts and the Perforce depot on FlashBlade, as I was describing earlier in the architecture. We scaled it up to 10 to 12 Perforce clients (there was a license limitation when I was running the test), and we used the Rapid File Toolkit pcopy command to copy those work areas, checking the directories out from the
32:15
source code versioned files into the user work areas, and then registering those work areas with the Perforce database. We also wanted to see, as I was mentioning, how to address the bigger challenge, which was that the native way of checking out and checking in was stressing
32:31
the Perforce server with a very high load on the CPU and memory. The testing we did used various data sets: 52 GB, 101 GB, and up to 153 GB. And keep in mind, the data set I'm talking about
32:51
is the user work area you're creating; this can go into terabytes, depending upon the size of the versioned files or the repository you want to check out. So we tested this sample data using the regular Perforce commands and using pcopy. The red bars
33:08
are the runs using pcopy from the Rapid File Toolkit on FlashBlade, whereas the blue bars are the traditional Perforce commands, and we saw about a 10x performance improvement. Look at the last data set: it took close to 42 minutes natively and completed in less than
33:27
about three minutes with the Rapid File Toolkit pcopy. And once we did that, look at what happened: on the top left, you have the Perforce server with the traditional commands, and the network is completely saturated at one gigabyte per second, which is what the 20 clients are able to
33:46
drive, with 30% CPU utilization. Now imagine you have hundreds and hundreds of developers; that 30% will easily hit 100% CPU utilization very quickly. But on the bottom right-hand side, using pcopy, your network traffic is practically zero bytes; there is hardly
34:03
any network communication happening, and the CPU is almost at zero, sitting idle, because we are reading directly from FlashBlade. And here you see the corresponding activity on the FlashBlade. The top left is where you're running the traditional Perforce commands: there is hardly any
34:20
activity happening on the FlashBlade, because the Perforce server is extremely busy. Whereas on the bottom right, if you see here, the FlashBlade is immensely busy. Now, the question may arise: okay, if the FlashBlade is busy doing all of these copy operations using pcopy,
34:38
what happens to the front-end workload? Are there other front-end I/Os requested by other applications or other users doing other operations? Because this is just one operation, checking out your user work area, but there are a few
34:52
other operations that users could be doing. So yes, the answer is yes, the FlashBlade is busy, but only for a very limited time. Like I said, the 153 GB data set took about three minutes, and after those three minutes the FlashBlade's utilization drops back down, so you still have the band
35:12
width available. So, to wrap up the developer workspace-creation conundrum: we were about 10x faster using the pcopy command compared to the traditional way of creating workspaces. Second, we saw a data reduction of 2:1 in your work areas when you're checking out the
35:33
code. And there was hardly any stress on the Perforce server. Those were the three major advantages we had when using this solution on FlashBlade with the Rapid File Toolkit pcopy. So, to conclude, these are the other resources I have; the links are up there. If you want
35:54
to get more information about this architecture, the performance, backup and recovery, and user work area creation, please refer to these documents and assets that are available. And you're welcome to reach out to me if you have any further questions. Thank you very much for attending the session
36:13
and listening to this presentation.

The large code bases of video games create performance challenges. Pure provides a hybrid architecture, both block and file, to accelerate Perforce performance at scale for a greatly enhanced user experience and improved developer productivity. In this Tech Byte, Bikash Roy Choudhury, Technical Director at Pure Storage, covers Perforce architecture and performance, database backup and restore, and workspace creation on Pure FlashArray™ and FlashBlade™.
