
A Day In the Life of a FlashArray Files Administrator

What do we mean when we say “zero-headache setup” and “stress-free management”? See FlashArray file management capabilities in action.
00:01
Hey there, congratulations on your recent promotion to lead administrator for file shares. I know you're probably just as excited as I am to dig into what those duties are all about. I mean, who doesn't say as a kid, "I can't wait to be a file shares administrator when I grow up"? OK.
00:27
So the goal of this orientation is to really get you familiar with what a day in the life might look like for you as our new file systems admin. You're gonna be working with Pure's FlashArray file services to create, manage, and deliver a variety of file shares to our workloads and end users. You've already been working with FlashArray on the block services side,
00:48
so you can imagine managing SMB- and NFS-based data will be just as easy. For this orientation, we're gonna cover five main focus areas you'll want to pay attention to in your day-to-day. These are: zero-headache setup, stress-free management, transparent flexibility for data growth, enjoyable and usable data protection for file shares
01:14
(I promise you'll agree with me after you see it), and virtualization with NFS data stores made better. I'm confident you're going to love managing all of this on FlashArray a whole lot more than any of the complicated legacy solutions out there. With that said, let's dig right into it.
01:34
As you know, FlashArray is not designed to host only block-based data. It can also host network-attached file systems shared out via NFS or SMB. This is the heart of unified storage: being able to deliver different data services use cases from one single management and control plane. File systems are managed in the same menu area as volumes, under the Storage button in the
01:57
left navigation pane. From here you can see the four main variables associated with managing file sharing from a FlashArray. First up, let's take a look at the file systems the array is hosting. FlashArray can host multiple independent file systems that operate and are managed autonomously.
02:14
For our example, let's build out a new file system that we can then create and export directories from. Ta-da. And just like that, your new file system is ready to be built out and populated. I know you were expecting something much more complicated, requiring a bunch of resource allocation planning like RAID groups and aggregates.
02:34
Sorry to disappoint you. FlashArray's Purity operating system has been designed to automate and streamline rudimentary functions like file system provisioning. This includes eliminating the need to manually select media groups, set RAID levels, and create a usable volume for the file system.
02:50
Just like with block services, Purity has automated all of that behind the scenes and made the provisioning process only two steps: all you have to do is name the file system and hit the Create button. Sorry if you were disappointed by how streamlined that was, but Purity knows you're wearing nine or more hats at your job and wants to be the one that fits
03:10
the best. One other thing about file system provisioning: you may have noticed there was no prompt or need to identify a dedicated media group. This is because FlashArray file and block services are designed to share the same storage media pool on the underlying DFMs, where they can grow in tandem without any hard allocations to either service.
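By the way, if you'd ever rather script that two-step create than click through the GUI, here's a minimal sketch against the array's REST API. The endpoint path, API version, and login flow below are my assumptions modeled on what the GUI does, so verify them against your array's REST reference.

```python
import requests

ARRAY = "https://flasharray1.example.com"  # hypothetical array address
API = f"{ARRAY}/api/2.4"                   # version is an assumption; check GET /api/api_version

# REST 2.x-style login: trade a long-lived API token for a session token.
login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# The whole "two-step" provision: name it, create it. No RAID groups,
# aggregates, or media-group selection.
resp = requests.post(f"{API}/file-systems", params={"names": "acct-fs"},
                     headers=auth, verify=False)
resp.raise_for_status()
print("created:", resp.json())
```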
03:29
That shared pool eliminates the crystal-ball calculus legacy storage needs upfront, which never works as planned because data growth is largely unpredictable. And as a bonus, FlashArray can dedupe and compress globally across file and block services, something legacy vendors struggle to accomplish with their bolt-on approach to merging the two services onto the same
03:49
array in the first place. By the way, if you thought that was easy, let's take a look at how Pure Fusion can make it even easier. Fusion is an included service in the Purity operating system that lets you deploy and manage storage services across your entire array fleet. For instance, here's an example where I have
04:07
three FlashArrays in a fleet I manage. I can create file systems, manage directories, and manage folder exports on any of them from the same GUI. In this case, I'm creating a new file system on FlashArray 2 from FlashArray 1. And when I flip over to FlashArray 2's GUI, you can confirm it's there.
04:27
This is the core value of Fusion: it allows for managing data services across many arrays with the fewest moving parts possible. You may have worked with other vendors who claim a single management plane, and Pure is no different with FlashArray. As you've seen, you're able to manage block and file services in the same GUI.
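If you wanted to script that cross-array step too, here's a rough sketch. It assumes a fleet-targeting query parameter (shown as context_names) on the same file-systems call; that parameter name and behavior are assumptions on my part, so check the REST reference for your Purity release.

```python
import requests

ARRAY = "https://flasharray1.example.com"  # the fleet member you're logged in to
API = f"{ARRAY}/api/2.4"                   # version/endpoints are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Create a file system on FlashArray-2 while authenticated to FlashArray-1.
# "context_names" as a fleet-targeting parameter is an assumption here.
resp = requests.post(f"{API}/file-systems",
                     params={"names": "remote-fs",
                             "context_names": "FlashArray-2"},
                     headers=auth, verify=False)
print(resp.status_code, resp.json())
```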
04:47
With that established, let's dig into some key topics that'll drive your directory layout, based on the file system we just created. A root folder was built when you created the new file system. This is the top of your file structure, and while there's nothing preventing you from sharing it out or putting data in it, a best practice is to create directories
05:04
underneath it that can become the different share exports clients attach to. These are called managed directories. Managed directories are the heart of organizing file shares on FlashArray. They're where your management policies are applied and a great place to establish your core shares based on upstream workload needs.
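We'll click through creating some in just a second; if you'd rather script it, a managed directory is a single call. A minimal sketch, assuming a directories endpoint where a directory is addressed as <file-system>:<name> and carries a path (both assumptions):

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # paths and field names below are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Create a managed directory "accounting" under the root of acct-fs.
# The "<file-system>:<directory>" naming and the "path" field are assumptions.
resp = requests.post(f"{API}/directories",
                     params={"names": "acct-fs:accounting"},
                     json={"path": "/accounting"},
                     headers=auth, verify=False)
print(resp.status_code, resp.json())
```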
05:22
Let's go ahead and create some managed directories we can work with. The next thing we're going to do is set up a directory structure for that file system we just created. You may want to do some organizing and planning for this step, since FlashArray storage quotas are based on folder sizes, not end-user or group ownership. And as you can see through the process of
05:42
creating a new directory, we not only name it but also establish where it sits in the directory tree. Before we set up our directories for sharing, let's take a quick look at what NFS and SMB sharing policies look like, since you'll need to select one for each protocol you choose for the export. The NFS and SMB simple policies you see here are the defaults,
06:03
but you can add more that are fine-tuned to your IT security policies. Each policy consists of specific rules that can be created for any fine-tuning of access needed. This would include things like NFS version, SMB encryption, or client network access variables. Again, you have the ability to create as many policies and associated rules as you need.
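Scripted, a tightened-up export policy might look like the sketch below; the policies/nfs path and the rule fields (client, permission, access) mirror the GUI's knobs and are assumptions on my part:

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # endpoints and rule fields below are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Create an NFS export policy, then attach a client rule that limits
# access to one subnet with root squash. Field names mirror the GUI knobs.
requests.post(f"{API}/policies/nfs", params={"names": "secure-nfs"},
              headers=auth, verify=False)
resp = requests.post(f"{API}/policies/nfs/client-rules",
                     params={"policy_names": "secure-nfs"},
                     json={"rules": [{"client": "10.0.0.0/24",
                                      "permission": "rw",
                                      "access": "root-squash"}]},
                     headers=auth, verify=False)
print(resp.status_code, resp.json())
```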
06:28
Now, let's work on some directories to be exported. This is where we establish the managed directory share name and the file share protocol policies, NFS and/or SMB. One thing to note as you step through share creation: your protocol policy choices here are tied to how clients will be able to attach to the export. This means you can limit network access to only one sharing protocol or include both,
06:50
something called a multi-protocol file share. Once these have been set and committed, your directory exports are ready for your end clients.
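And the export step itself, scripted, just ties the managed directory to a policy and a share name. The directory-exports endpoint and its parameters here are assumptions to verify:

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # endpoint and parameters below are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Export the accounting managed directory over NFS with the secure-nfs
# policy; clients would then mount <array-files-ip>:/accounting.
resp = requests.post(f"{API}/directory-exports",
                     params={"directory_names": "acct-fs:accounting",
                             "policy_names": "secure-nfs"},
                     json={"export_name": "accounting"},
                     headers=auth, verify=False)
print(resp.status_code, resp.json())
```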
07:08
Finally, let's take a quick look at quota management. Like I mentioned earlier, FlashArray file share quota limits are based on the managed directory, not group or user file ownership. This means setting a 100-gigabyte limit on the accounting folder is based on the aggregate of files and folders contained in that managed directory.
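That 100-gigabyte example might be scripted as a quota policy applied to the directory. Every endpoint and field below (policies/quota, the rule shape, the member attach) is an assumption to check against the API reference:

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # every endpoint and field below is an assumption

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Create a quota policy with a 100 GB hard limit...
requests.post(f"{API}/policies/quota", params={"names": "quota-100g"},
              headers=auth, verify=False)
requests.post(f"{API}/policies/quota/rules",
              params={"policy_names": "quota-100g"},
              json={"rules": [{"quota_limit": 100 * 1024**3,
                               "enforced": True}]},
              headers=auth, verify=False)

# ...and apply it to the accounting managed directory.
resp = requests.post(f"{API}/policies/quota/members",
                     params={"policy_names": "quota-100g",
                             "member_names": "acct-fs:accounting"},
                     headers=auth, verify=False)
print(resp.status_code, resp.json())
```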
07:29
Purity leverages extremely efficient copy-on-redirect snapshots, which can be applied at the managed directory level, and the methods of creating and managing them are similar to how they work with block volumes. That means you can take one-off snapshots of your directories or have them join a protection group policy that automates the whole process. What you just saw there was us adding the sales managed directory to a snapshot policy. And as you can see here, restoring our one-off snapshot is just as straightforward and fast as it was for block storage.
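That one-off snapshot could be scripted as a single call, too; the directory-snapshots endpoint and source_names parameter below are assumptions:

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # endpoint and parameter below are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Take an ad hoc snapshot of the accounting managed directory. As covered
# later: one-off snaps like this are not SafeMode-protected; protection
# group snapshots are.
resp = requests.post(f"{API}/directory-snapshots",
                     params={"source_names": "acct-fs:accounting"},
                     headers=auth, verify=False)
print(resp.status_code, resp.json())
```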
07:49
The biggest difference with this restore is that we're using the native Windows Explorer tool to revert our directory snapshot. And like our earlier block volume restore example, we're simply restoring FlashArray's metadata address book mapping to when the snapshot of the accounting directory was originally taken. One thing to note about managed directory
08:07
snapshots: they're protected by default with a feature we call SafeMode. The great thing about SafeMode is it supports your ransomware remediation strategy by making snapshots indelible for a time frame you can configure, and it has proven to be an invaluable tool in helping other customers save the day when an attack has occurred. Let's dig into what it's all about with a cybersecurity colleague of mine.
08:29
There are two operating modes for SafeMode: there's array-wide, and there's per-object SafeMode. Array-wide is like locking everything down. Once you've got it configured and you've turned SafeMode on, you're gonna have to call Pure Support to make any changes. That's usually for folks that have very secure sites,
08:48
dark sites, whatever. If you're super, super security conscious, you might go with array-wide, but what we see most, and what our default is, is per-object SafeMode, which basically means you can't delete something that's protected in a protection group, or what we refer to as a PG. A protection group cannot be deleted.
09:06
And protection groups are basically the containers that are going to hold those volumes; I'll show that to you in the demo. Per-object SafeMode is the default SafeMode for any new array. So if you're not an existing customer and you go out and purchase a new array, when we come in and set it up, this will be turned on.
09:23
If you're an existing customer watching this webinar and you're not running Purity 6.4.1 or newer, once you upgrade your Purity environment to that level, that's when per-object SafeMode will be turned on. Like I said, when SafeMode is enabled, here are a couple of things that you can and can't do. You can increase the frequency of snapshots, but you can't decrease the frequency of
09:47
snapshots. Meaning, again, if that hacker gets in, you can make more of them to make things safer, but you can't take fewer of them to make yourself more at risk. You can't remove the protection group objects; again, that's going to be where you contain the volumes
10:03
that you're taking snapshots of and applying the SafeMode policy to. You can't disable the snapshot schedule, and you can't disable SafeMode itself. And one key thing to call out: only the snapshots created via these protection groups are protected from immediate manual deletion. So, like that snapshot I created in our snapshot demo,
10:24
I could go in there and delete it as my heart desires, because it's not created via this policy, right? So if you do need to take some one-off or ad hoc snaps for a quick recovery or a quick test, you can 100% go in there and delete them. This isn't based on the eradication timer; it's anything you do that follows this
10:42
protection group strategy. So, I've talked a lot, I apologize. Let's go ahead and get back into the demo. I'll get over here and go into protection groups. You're going to see I have this pgroup-auto group, right? That's the default group that gets created
10:58
when SafeMode is enabled. So I'll go in here, and you'll see on the left that the volumes I have, including that new clone volume I created, are already in here, right? You see the Langer demo copy and the other volumes. So again, everything that gets created is going
11:14
to be positioned or placed into this default group. And as you can see here, SafeMode is turned on, aka it's ratcheted; like I said, I can't make things worse. I can't even turn this off here. If I had a replication schedule (I don't; I only have one array in this lab,
11:32
so I can't show you that), I couldn't make it worse; I could only make it better. And then here's our snapshot schedule; again, I can't turn it off. I click disable, hit save, and it's like, nope, you can't do that. SafeMode's turned on,
11:48
you can't disable it. So I cancel there. Now let's say, instead of every 6 hours, I want to make snapshots every 8 hours, right? Hit save. Again, I'm making the protection strategy worse, so it's not gonna let me do that.
12:04
But what if I'm like, you know what, every 6 hours isn't enough, I want to do it every 3 hours, which again would be better? That change it is gonna let me make, right? So that's how it works: I can throttle things to make them more secure; I just can't throttle them to make them less secure.
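Scripted, the ratchet behaves the same way: tightening the pgroup-auto schedule succeeds, and loosening it gets rejected. The protection-groups endpoint, the snapshot_schedule field shape, and its millisecond units below are all assumptions:

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # field shape and units below are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

def set_snap_frequency(hours):
    # "snapshot_schedule.frequency" in milliseconds is an assumption;
    # check the schema your Purity version actually exposes.
    return requests.patch(f"{API}/protection-groups",
                          params={"names": "pgroup-auto"},
                          json={"snapshot_schedule":
                                {"frequency": hours * 3600 * 1000}},
                          headers=auth, verify=False)

print(set_snap_frequency(3).status_code)  # tightening: expect success
print(set_snap_frequency(8).status_code)  # loosening: expect a 4xx rejection
```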
12:24
Now if I go into the volumes again, into the Windows volume, you know, that D drive that we have, and we look at our destroyed snapshots, right, we've got the Langer demo and then we have this copy. So back to my comment that you can delete anything that's manually created: that's why you see my Langer demo here.
12:44
You see the countdown is still running, but since it's not protected by SafeMode, I can eradicate it now, so I can just go ahead. Basically it's gonna tell me the snapshot will be permanently deleted and lost; I'm like, yep, I'm good. But notice this Windows volume copy.
13:01
These are being created by the pgroup-auto group, right? The protection group that has SafeMode turned on. These ones I'm not gonna be able to destroy, because the policy's retention period hasn't been exhausted yet, right? So that's why this one is grayed out here.
13:21
Once this timer expires (remember, it's set to one day), I'll then be able to go in and delete it. I don't have to delete it at 24 hours; I could still leave it for a couple of days. That's just the soonest I'd be able to delete it. And again, this is all about protecting yourself from ransomware and your ability to recover from anything where,
13:40
you know, somebody comes in and encrypts your data or your drives. Thanks to my colleague Jason for that great explanation of how SafeMode can provide an extra layer of cybersecurity resilience in the event of a ransomware attack. While we're on the topic of data protection, let's talk a little bit about replication. FlashArray files can also be protected with asynchronous replication to a remote array via
14:04
ActiveDR, which is actually the same technology we already use to replicate our block volumes. With that said, here are a few things to remember when it comes to FlashArray files. ActiveDR is included in Purity at no extra cost. This is a huge advantage over the legacy storage guys that we're currently trying to
14:23
kick out of our data center. And for file share replication, you're going to replicate at the file system level. Like with block, ActiveDR is an active-passive replication solution that works by manually promoting and demoting your primary and secondary replication pods, with your synced copies being read-only until promoted.
14:43
It has a 5-to-15-minute recovery point objective and a near-zero recovery time objective. And finally, it can support fan-out and fan-in multi-site DR needs.
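If you scripted that setup, ActiveDR hangs off pods: create a pod for the file system being protected, then link it to a pod on the remote array. The pod-replica-links endpoint and parameter names below are assumptions to verify:

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # endpoints and parameters below are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Create a local pod to hold the file system being replicated.
requests.post(f"{API}/pods", params={"names": "dr-pod"},
              headers=auth, verify=False)

# Pair the local pod with a pod on the remote array (the ActiveDR link).
resp = requests.post(f"{API}/pod-replica-links",
                     params={"local_pod_names": "dr-pod",
                             "remote_names": "flasharray2",
                             "remote_pod_names": "dr-pod"},
                     headers=auth, verify=False)
print(resp.status_code, resp.json())
```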
15:04
OK. Stay with me, you're almost there. Our VMware clusters also leverage NFS shares as data stores, and FlashArray has some unique value when it comes to managing them. We're going to again lean on my colleague Jason for a quick breakdown of how FlashArray NFS capabilities make managing VMware data stores easier than legacy solutions. And lastly, let's go ahead and configure an NFS data store, which is something relatively new to the Pure Storage FlashArray.
15:27
So if we go back to the vSphere client, we right-click on TD cluster, go down to Pure Storage, and click on Create Datastore. Again, we get the create datastore type screen. We'll select NFS, so our options on the left change a little bit, as you see there. We'll click Next. Compute resource is going to be TD cluster; FlashArray 1 is our storage.
15:49
We'll give the data store a great name: Langer demo NFS. And for size, let's not give it a petabyte; how about we just do a terabyte, right? And we have this option to use a file system that has already been configured for NFS. Since we don't have anything, we just won't use this one, but if you had something already mounted or something
16:08
created before, you could mount it that way. But this is net new, so we'll just leave it as is and click Next. We'll use NFS version 3, but just know that we support 4.1 for those that want the security. As for pod: a pod is another Pure Storage construct that you get into when you're dealing with replication.
16:28
I only have a single array in this environment, so I don't need to worry about that, because we're not replicating between, say, a production site and a DR site. So we'll just say none and click Next. For policy, for security-minded folks, this is where you can really get in and lock down who can access these NFS mount points based on IP addresses and so on.
16:48
But to simplify this, I'm just going to say unrestricted and click Next. Enable Auto Directory: Auto Directory is a function of the Pure Storage FlashArray that, as I'll show you in a second, basically gives you the option to view each VM as its own directory. The benefit of doing that is you'll be able to see per-VM performance details within the Purity UI, or the FlashArray UI, right?
17:17
So with this Linux VM, if we move it to the NFS data store, we'd see it as its own directory, and then we'd see its VM details and actually be able to see its performance and so on. As we add additional VMs, so if we did Linux VM 2, VM 3, it'd be the same thing. Each one would have its own directory, which makes
17:35
it easier for you to identify and monitor latency, bandwidth, capacity, all those things. So we're gonna go ahead and leave Enable Auto Directory checked as the default. We'll click Next, we'll see the Ready to Complete screen, and we'll click Finish.
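The Pure vSphere plugin handles both the array side and the vSphere side in that wizard. If you only needed the vSphere half, mounting an existing export as an NFS data store looks roughly like this with pyVmomi (the host lookup is simplified and the names are hypothetical):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local", pwd="***",
                  sslContext=ctx)

# Grab the first ESXi host in inventory (real code would pick per cluster).
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# Mount the FlashArray export as an NFS datastore on that host.
spec = vim.host.NasVolume.Specification(
    remoteHost="flasharray1-files.example.com",  # array's file VIF (hypothetical)
    remotePath="/Langer-demo-NFS",               # the directory export
    localPath="Langer-demo-NFS",                 # datastore name in vSphere
    accessMode="readWrite",
    type="NFS")                                  # "NFS41" for NFS v4.1
host.configManager.datastoreSystem.CreateNasDatastore(spec)
```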
17:53
Give it one second; it says it's creating the data store, and voila, we now have the Langer demo NFS data store. If I flip over to the FlashArray, note that now we're not working with volumes, right, because a volume in FlashArray speak is a block construct. We now have a file system.
18:16
So under the Storage node, I'm gonna go over to File Systems, and you now see that I've got the Langer demo NFS, so I can click into that. There's the directory Langer demo NFS root, the root directory of the NFS export, and you can see I've got the export policy. This is what I was talking about before, where we set the client
18:37
rules of access. Right now I'm allowing any client, the asterisk or wildcard, to connect, and I've got no_root_squash set. So if I wanted to lock this down for security reasons, I could edit this rule and put an IP range in, and so on. Also, if we go back into Policies, you can see that we've got a quota policy.
18:56
So remember, when I created the data store, I set the size at 1 terabyte. Now it knows the quota on this should not be larger than a terabyte, and of course, as this grows, you can set up alerts either from the FlashArray side or within the vSphere client. And to show you one last thing about the NFS data store quickly,
19:18
let's migrate the Linux VM onto it. We'll do a storage-only migration ("change storage only") and move it over to the NFS data store. So we'll go ahead and select that, and we'll click Finish. We'll give it a second to migrate. OK, we see that it's migrated; it says completed.
19:40
So let's go back over to the FlashArray. Now when we go into File Systems, we see the root of the Langer demo NFS, which is what we created for the data store, but now we also see Langer demo NFS root / Linux VM 1. So we now have a directory for the Linux VM that you can go into,
20:02
where you can select directory exports, snapshots, quota, and so on. But the real power of this is if I now go into the Performance category of the UI and then into Directories. I can select from just the root as well as the Linux VM, so if I click on the Linux VM,
20:23
you can start seeing (granted, not the best example since I just moved it) the metrics for latency, IOPS, and bandwidth for this specific VM. Now imagine if you had hundreds or thousands of these VMs running on the NFS data store, where you could now get per-VM statistics on these metric points to help you isolate problems or troubleshoot further, and so on.
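Those same per-VM numbers are reachable programmatically. The sketch below assumes a directories/performance endpoint, a directory naming format, and typical Purity metric field names; verify all three against the REST reference:

```python
import requests

ARRAY = "https://flasharray1.example.com"
API = f"{ARRAY}/api/2.4"  # endpoint, naming, and metric fields are assumptions

login = requests.post(f"{API}/login",
                      headers={"api-token": "YOUR-API-TOKEN"}, verify=False)
auth = {"x-auth-token": login.headers["x-auth-token"]}

# Pull latency/IOPS/bandwidth for the per-VM directory Auto Directory made.
resp = requests.get(f"{API}/directories/performance",
                    params={"names": "Langer-demo-NFS:root/Linux-VM-1"},
                    headers=auth, verify=False)
for item in resp.json().get("items", []):
    print(item.get("usec_per_read_op"),
          item.get("reads_per_sec"),
          item.get("read_bytes_per_sec"))
```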
20:51
That's something special we bring to our NFS data stores when it comes to a VMware environment. And with that shown, let's go ahead and wrap up the demo. And that's it. Hopefully you now feel well prepared to be our next file share superhero.
21:10
It's going to be an enjoyable gig, because Pure FlashArray is really designed to make your life easy. Wait, did I just say file share management was easy? Let's make that the big conclusion to all of this, to get you fired up. Good luck in your new role.