00:01
Hmm. And they say this meditation book is really good stuff, but every time I try to read it, it just doesn't make any sense to me whatsoever. That's weird. Huh. Oh, hey, hi there. Didn't see you come in. My name's Don Porman, and let me be the first
00:19
to welcome you to the Pure Storage FlashArray walkthrough. Hey, thanks for jumping in for a deeper dive on FlashArray with me. I think you're gonna like what you see. This series aims to teach you some key management concepts for what delivering and protecting data services on FlashArray looks like.
00:43
That's gonna include three parts: getting to know the dashboard and creating, connecting, and locally protecting a block storage volume; working with file systems, including creating SMB and NFS file shares with local data protection; and FlashArray performance analysis and health monitoring. To make everything easier to digest,
01:02
I've split the orientation into three separate videos. With that, let's dive in and get to know FlashArray a little deeper. Enjoy. To get things kicked off on this FlashArray walkthrough, let's start with part one: dashboard orientation and working with block storage. This is the FlashArray dashboard.
01:21
As you can see, it's a pretty robust landing spot because it gives you a quick glance at the three main variables admins and operators focus on: capacity, physical hardware health, and performance metrics. Let's take a closer look at them. First, there's a quick glance at the array's capacity, which includes the physical
01:39
storage space used for unique data from volumes and file systems after data reduction and dedupe, the total amount of physical space consumed by snapshots, the physical space shared among volumes and snapshots as a result of deduplication, the physical space consumed by pod-based replication features like ActiveDR and ActiveCluster, the storage space consumed by the system's metadata,
02:02
and the unused space available for allocation. There's also a snapshot of how effective data reduction is on your data. The difference between the two ratios is that data reduction does not take thin provisioning into account, while total reduction does. For example, a total reduction ratio of 10 to 1 would mean that for every 10 megabytes of
02:21
provisioned space, only 1 megabyte is stored on the DirectFlash Modules. Finally, there's the total physical usable space of the array.
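To make the two ratios concrete, here is a small illustrative calculation. The numbers below are made up for the example and are not taken from the demo array: data reduction compares data the hosts actually wrote to its physically stored footprint, while total reduction also credits thin-provisioned space that was never written.

```python
# Illustrative only: hypothetical capacity figures, not from the demo array.
provisioned_mb = 10_000   # logical space provisioned to hosts
written_mb = 4_000        # data hosts have actually written (rest is thin-provisioned)
stored_mb = 1_000         # physical space used after dedupe and compression

data_reduction = written_mb / stored_mb       # ignores thin provisioning
total_reduction = provisioned_mb / stored_mb  # includes thin provisioning savings

print(f"Data reduction:  {data_reduction:.1f} to 1")   # 4.0 to 1
print(f"Total reduction: {total_reduction:.1f} to 1")  # 10.0 to 1
```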
02:41
Next, we have an abbreviated listing of any alerts the system has observed in the last 24 hours that need attention, along with a simplified view of the hardware status, including the controllers and DirectFlash Modules. Finally, a large amount of screen real estate is dedicated to the three big performance variables. Latency is how quickly the controllers are completing read or write requests. IOPS is how many input/output requests per second the array is servicing.
02:56
These could be read or write requests. Bandwidth is how much data per second is being transferred through the controllers to and from the array, similar to water pressure in a pipe. As you can see, the performance charts are updated every second and allow you to focus on block or file system performance.
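If you'd rather pull the same latency, IOPS, and bandwidth numbers programmatically, something along these lines should work. This is a sketch assuming the legacy purestorage 1.x Python REST client; the array address and API token are placeholders, the response field names follow the REST 1.x schema and may differ on other Purity versions, and the newer py-pure-client SDK uses different calls.

```python
# Sketch: real-time performance sample via the legacy "purestorage" 1.x REST client.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

perf = array.get(action="monitor")                      # current performance sample(s)
sample = perf[0] if isinstance(perf, list) else perf

print("Read latency (us): ", sample.get("usec_per_read_op"))
print("Write latency (us):", sample.get("usec_per_write_op"))
print("Read IOPS:         ", sample.get("reads_per_sec"))
print("Write IOPS:        ", sample.get("writes_per_sec"))
print("Read bandwidth:    ", sample.get("output_per_sec"), "bytes/sec")
print("Write bandwidth:   ", sample.get("input_per_sec"), "bytes/sec")

array.invalidate_cookie()  # end the REST session
```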
03:13
The last part of the dashboard, on the left, is the navigation menu. From here, you are more than likely one click away from where you need to go to configure, deliver, protect, and monitor things at a deeper level to ensure your FlashArray's data services are optimal. Before we get to creating a block volume, though, there are a couple of useful dashboard
03:31
items to know. First, we include a link to the FlashArray admin guide under the help button. This was actually handy for me when I was recording this video to double-check that what I was saying was correct. Second, Settings is where most of those set-and-forget variables are.
03:47
These include items on how login security is managed, where SNMP and syslog information can be sent, and items you may need to leverage when working with our tech support folks. One other setting to highlight here is the eradication configuration. This is where we can set the time window that SafeMode enforces before snapshots can be eradicated. Let's click on the Storage button to get under
04:09
the hood and see how block storage volumes are created and delivered to upstream hosts. One thing you'll immediately notice when working with FlashArray is how dedicated we were to simplifying everything we could about managing cornerstone actions like storage provisioning. Keep an eye on the steps I take to create a new block storage volume.
04:28
Watch while I step through it. All we need to create a new block storage volume are three variables: its name, its provisioned size, and whether we want it to belong to a pod or a volume group. Both of those are logical designations that enable a simplified way of setting up replication and other data protection policies, which we'll get into in another demo video series.
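For anyone scripting this instead of clicking through the GUI, the same simplicity shows up in the REST API. A minimal sketch, assuming the legacy purestorage 1.x Python client and placeholder names for the array, token, and volume:

```python
# Sketch: creating a block volume programmatically with the legacy "purestorage"
# 1.x REST client. Array address, API token, and volume name are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# Name and provisioned size are all that's required; Purity handles data placement.
vol = array.create_volume("demo-vol01", "500G")
print(vol)

array.invalidate_cookie()
```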
04:49
We can also expand some advanced options like protection group membership and QoS configuration. You can see here that any volume created is automatically joined to a default protection group for snapshots. We'll get into more detail on managing that later in this video. Did you notice a few things were missing in those steps,
05:08
like identifying media groups and setting a RAID type? That's because Purity, our storage operating system here, automatically places the volume in the optimal RAID group of RAID-protected DFMs based on real-time analysis of their performance. This is another example of Pure's commitment to simplifying the storage management experience.
05:27
We have automated the physical data placement during provisioning. Destroying a block volume is just as simple as creating one, and it carries a great fail-safe feature: the volume sits in a destroyed state for 24 hours and can be recovered before being permanently wiped from the system.
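The same destroy-then-recover fail-safe is available programmatically. Again a sketch, assuming the legacy purestorage 1.x client and the placeholder volume name from before:

```python
# Sketch: the destroy/recover fail-safe with the legacy "purestorage" 1.x REST client.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

array.destroy_volume("demo-vol01")   # volume enters the destroyed state
array.recover_volume("demo-vol01")   # undo within the 24-hour window

# Only after destroying again, and once the eradication window allows it,
# could the volume be permanently removed:
# array.destroy_volume("demo-vol01")
# array.eradicate_volume("demo-vol01")

array.invalidate_cookie()
```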
05:47
Connecting a block volume to an upstream host also requires only a few clicks. First, let's take a second to show you what a host we've already created looks like. As you can see with this ESX1 host, it shows any existing volumes connected to it, along with its assigned SAN initiators, the protection groups it's a part of, and other details like CHAP credentials, preferred arrays, and its personality.
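Defining a host like this can also be scripted. A sketch under the same assumption of the legacy purestorage 1.x client; the host name and IQN below are placeholders:

```python
# Sketch: defining a host and its initiators with the legacy "purestorage" 1.x client.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

array.create_host("esx-demo-01",
                  iqnlist=["iqn.1998-01.com.vmware:esx-demo-01-00000000"])
# For a Fibre Channel host, you would supply WWNs instead (wwnlist=[...]).
print(array.get_host("esx-demo-01"))

array.invalidate_cookie()
```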
06:06
The personality is actually set when the host is created and enables FlashArray to interact with the host in a way that fits its role. With all of that host info in place, let's connect our new volume to it. As you can see, four clicks is all you need. The connection has been made and a LUN designator has already been assigned.
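The connect (and disconnect) step maps to a single call in the API as well. Another sketch, assuming the legacy purestorage 1.x client and the placeholder names used above:

```python
# Sketch: connecting and later disconnecting the volume, legacy "purestorage" 1.x client.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

conn = array.connect_host("esx-demo-01", "demo-vol01")  # LUN is assigned automatically
print("Assigned LUN:", conn.get("lun"))

# Tearing the connection down is just as simple:
# array.disconnect_host("esx-demo-01", "demo-vol01")

array.invalidate_cookie()
```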
06:23
The system is smart enough to know which outbound interface needs to be used based on the host's initiator. There's no need to designate the connection as iSCSI, Fibre Channel, or NVMe over Fabrics. And as you can imagine, disconnecting the volume from the host is just as straightforward
06:40
and requires just as few clicks. Let's move on to the last section of this video, which is local data protection for volumes with snapshots. Now, for this video series, we're only going to focus on managing locally created snapshots. There's another video series on Pure 360 dedicated to protecting FlashArray data with asynchronous and synchronous replication.
07:00
Tune in to that one if you want to review that kind of information, but in this case, we're focusing only on local data protection. Before we get into the snapshot demo, we should highlight a core cyber resilience feature of FlashArray called SafeMode. This is our way of offering a protective element for your data that helps minimize the
07:19
impact of a ransomware attack. A mainstream attack vector of ransomware is to not just encrypt the active data it has infected, but also destroy any local snapshots associated with it to disrupt the chances of a quick recovery. SafeMode counters this method by preventing snapshots from being eradicated until a certain time frame, which you see here, has been met.
07:40
Snapshots protected with SafeMode cannot be eradicated from the array without a coordinated effort with Pure support staff. This prevents their elimination by bad actors. There are a few ways you can take snapshots of existing data volumes. The first method is to manually create one, and as you can see,
07:57
a few clicks from the volume's details page can create what we might consider a one-off snapshot. This could be handy for test/dev scenarios or as a rapid fallback safety net before applying emergency patches.
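A one-off snapshot like that is a one-liner from a script, too. Sketch only, assuming the legacy purestorage 1.x client and placeholder names:

```python
# Sketch: taking a one-off snapshot before a risky change, legacy "purestorage" 1.x client.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# The optional suffix makes the snapshot easy to identify later, e.g. demo-vol01.pre-patch.
snap = array.create_snapshot("demo-vol01", suffix="pre-patch")
print(snap)

array.invalidate_cookie()
```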
08:17
Destroying snapshots is just as straightforward as destroying volumes, requiring only a few clicks. We can also recover a destroyed snapshot on the spot, knowing that SafeMode will not allow its eradication for a 24-hour period based on the settings I showed you earlier. The second method of leveraging snapshots is through FlashArray's protection groups, which is a more automated and standardized approach. Let's see this in action.
08:35
Protection group configurations are reached through the Protection button in the left navigation menu of the FlashArray dashboard. From here, we are able to create a source protection group. This is different from a target protection group in that it creates snapshots on the same array as the volume.
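For completeness, here is how the same local protection group setup might look in a script. This is a sketch assuming the legacy purestorage 1.x client and the placeholder names used earlier; the schedule and retention settings are left out because their exact parameters vary by Purity/REST version.

```python
# Sketch: creating a local (source) protection group and adding our volume to it,
# with the legacy "purestorage" 1.x REST client.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

array.create_pgroup("demo-local-pg")                          # source protection group
array.set_pgroup("demo-local-pg", addvollist=["demo-vol01"])  # add our volume as a member
# The snapshot schedule, retention, and SafeMode/ratchet settings are likewise
# configured on the protection group (parameters vary by Purity/REST version).

array.invalidate_cookie()
```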
08:52
Again, we will only be focusing on local protection in this video series. Remote disaster recovery and business continuity will be covered in another series. As you can see, we have a default protection group the system has already created, and we can click on it to get a handle on how it's configured. If you remember the volume creation process from earlier,
09:12
this is the default protection group it was automatically assigned to when we created it. That's why you see it in the list with the other pre-existing volumes. And reviewing the snapshot schedule, we can see the volume members of this group will have a snapshot taken at certain intervals with varying retention periods. Next, we have the ability to configure a schedule for shipping our snapshots to another
09:33
target. This will be covered in a future video series. And finally, we can set whether the snapshots taken for this group are ratcheted. This means they are protected by SafeMode with the eradication window I showed you earlier under Settings. So now that we have some snapshots created, let's check out how they can be recovered or
09:52
manipulated. Block volume snapshots are accessed from the Snapshots submenu under Protection. From here, we can retrieve or send snapshots to or from other FlashArrays, or work with local ones. Local management here is pretty straightforward. The three-dot menu next to each one gives you the
10:09
ability to do things like copy, send, or restore. Let's conduct a restore of the one-off snapshot we took earlier. You can see that after we confirm we know what we're doing, the rollback on the protected volume happens pretty quickly. That's because copy-on-write snapshots are based on data pointers and are separate from the actual data blocks themselves. We are simply reverting FlashArray's metadata address-book mapping to the point when the snapshot was taken.
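Scripted, that rollback can be expressed as copying the snapshot back onto its volume. A sketch under the same assumption of the legacy purestorage 1.x client and the placeholder names from earlier:

```python
# Sketch: rolling a volume back to the one-off snapshot, legacy "purestorage" 1.x client.
# With that client, a restore is a copy of the snapshot onto its volume with overwrite.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# Revert demo-vol01 to the contents of its "pre-patch" snapshot.
array.copy_volume("demo-vol01.pre-patch", "demo-vol01", overwrite=True)

array.invalidate_cookie()
```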
10:28
Congratulations. You just made it through a quick and simple breakdown of how block storage is provisioned and locally protected on a FlashArray. When you're ready, be sure to check out the next video in this series, where we'll do the
10:48
same thing with file shares. See you there.