
Snap to NFS Overview and Administration

Introduction

In Purity//FA 5.1, Pure Storage introduced Snap to NFS, the first feature based on Pure’s new Portable Snapshot technology. Portable snapshots contain the data as well as the associated metadata required to restore them.
 
Encapsulating the metadata along with the data blocks makes these snapshots truly portable: they can be offloaded from a Pure FlashArray™ appliance to an NFS storage target from any vendor, and they are recoverable to any Pure FlashArray appliance.
 
FlashArray appliances already use snapshots for data protection, test/dev, and cloning workflows. Snap to NFS extends this functionality by adding the ability to move snapshots off the FlashArray appliances onto NFS storage appliances from any vendor.

Some examples of NFS storage appliances that can be used with Snap to NFS are:

  • The Pure Storage FlashBlade™ appliance
  • Third party NFS storage appliances
  • Generic Linux servers

Because a generic NFS storage appliance can serve as the offload target, customers who want an inexpensive solution for long-term retention can use a low-cost NFS storage system from any vendor as the NFS target. Customers who require extremely fast backups and rapid restores can use Pure’s FlashBlade appliance as the NFS target.


[Figure: Examples of NFS targets that can be used with Snap to NFS]

Snap to NFS Benefits

Since Snap to NFS was built from scratch for the FlashArray, it is deeply integrated with the Purity Operating Environment, resulting in highly efficient operation. A few examples of the efficiency of Snap to NFS are as follows:

  • Snap to NFS is a self-backup technology built into the FlashArray. No additional software licenses or media servers are required. There is no need to install or run a Pure software agent on the NFS target either.
  • Snap to NFS preserves data compression in transit and on the NFS target, saving network bandwidth and further increasing the efficiency of inexpensive NFS storage appliances, even ones without built-in compression.
  • Snap to NFS preserves data reduction across snapshots of a volume. After offloading the initial baseline snapshot of a volume, it only sends delta changes for subsequent snaps of the same volume. The snapshot differencing engine runs within the Purity Operating Environment in the FlashArray, and uses a local copy of the previous snapshot to compute the delta changes. Therefore, there is no back and forth network traffic between the FlashArray and the offload target to compute deltas between snapshots, further reducing network congestion. As a result: 
    • Less space is consumed on the NFS target
    • Network utilization is minimized
    • Backup windows are much smaller
  • Snap to NFS knows which data blocks already exist on the FlashArray, so during restores, it only pulls back the missing data blocks to rebuild the complete snapshot on the FlashArray. In addition, Snap to NFS uses dedupe-preserving restores, so when data is restored from the offload target to the FlashArray, it is deduped to save precious space on the FlashArray.

Snap to NFS Management

Snap to NFS can be managed natively on Pure FlashArrays via the GUI or CLI. It is also integrated with Pure1®, so users can monitor snapshots on the NFS target via Pure1. In addition, there is a robust, open REST API that third-party data management software can use to move incremental snapshots from FlashArrays to offload targets.

Snap to NFS has been designed to make administration very similar to asynchronous replication between FlashArrays. Like Async Rep, Snap to NFS also uses protection groups for replication. When configuring a protection group for Snap to NFS, the user selects an NFS appliance as the replication target instead of a secondary FlashArray.

Snap to NFS core components

The FlashArray, Purity, & Purity Run

The Snap to NFS feature is available starting with Purity version 5.1. Since Snap to NFS relies on an offload engine that operates in Purity Run, the Run platform must be enabled on the FlashArray, and a FlashArray model that supports Purity Run is required.

Please visit the following link for more details on Purity Run: https://support.purestorage.com/FlashArray/PurityFA/Purity_RUN

NFS Target

An NFS storage appliance is needed to serve as the offload target for Snap to NFS. An NFS server from any vendor can be used as the NFS target, so long as it supports NFS version 3 or 4. File locking (NLM) is required on the NFS target.
 
Snapshots are stored on the NFS target in Pure proprietary format, embedding the associated metadata along with the data. Snapshots are not directly readable by users or applications on the NFS target. A Pure FlashArray is needed to display snapshots on the NFS target, and to restore snapshots from the NFS target. Once data is restored to a FlashArray, it can be accessed by users.

An NFS storage appliance with enterprise-class reliability & performance is recommended, for the following reasons:

  • If the NFS server is slow, backup & restore windows will be longer
  • If the NFS server is unreachable, data on the NFS target cannot be listed, queried, or restored
  • If the NFS server loses data, the data is permanently lost, so an NFS server with RAID is recommended

An NFS share must be created on the NFS target, and the FlashArray must be given read/write/execute permissions to access the share. 
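
For example, on a generic Linux NFS server, the share is typically exported via /etc/exports. The following line is a sketch only, using the share path from the CLI examples later in this document and a placeholder client address; the exact options depend on your NFS server and security policies:

/mnt/exports/share1 X.X.X.X(rw,no_root_squash)

Here X.X.X.X stands for the FlashArray IP address used for Snap to NFS, rw grants read/write access, and no_root_squash is a commonly used export option that preserves root permissions for the client. Consult your NFS server documentation for the options appropriate to your environment, and ensure the directory permissions on the share itself allow read/write/execute.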

Network

The FlashArray ports configured for Snap to NFS must have network connectivity to the NFS target. An extra IP address is required for Snap to NFS, even if the ports already have IP addresses assigned for replication to another FlashArray. Both IPv4 & IPv6 are supported, and DNS name resolution of the NFS server is also supported.

Basic Snap to NFS Administration

This section covers basic Snap to NFS administration. Please refer to the FlashArray User’s Guide for a complete list of all the Snap to NFS capabilities and commands.

Connecting the FlashArray to the NFS target

The first step in Snap to NFS administration is connecting the NFS target to the FlashArray. The following screenshots show how to connect an NFS target to the FlashArray via the FlashArray GUI.

Go to Storage > Array, and under Offload Targets, click on the + sign.

The following window will appear. Enter a name for the NFS target, the IP address or hostname of the NFS server, and the share path.

We recommend using default values for all mount options unless there’s a good reason to change any of them. In the GUI, this is done by leaving the Mount Options field blank. Supported mount options include the NFS version, the port number, read/write block sizes, and the protocol (TCP or UDP). These are common mount options available in all NFS file systems.

 
While the FlashArray is connecting to the NFS target, there will be a yellow icon next to the NFS target, as shown below:

If the NFS target fails to connect, please check the following:

  • There is network connectivity between the FlashArray ports used for Snap to NFS and the NFS target
  • The FlashArray has read/write/execute permissions to access the share on the NFS server

Once the FlashArray successfully connects to the NFS target, the icon next to the NFS target will turn green, as shown below:

To connect the NFS target to the FlashArray via CLI, log in to the FlashArray, and issue the following command:

FlashArray> pureoffload nfs connect --address X.X.X.X --mount-point /mnt/exports/share1 nfstarget

Where:

  • X.X.X.X = IP address or hostname of the NFS server
  • /mnt/exports/share1 = NFS export on the NFS server
  • nfstarget = the name to assign to the NFS offload target on the FlashArray

When connecting the FlashArray to the NFS target via CLI, use --mount-options to change the default NFS mount options. We recommend using default values for all mount options unless there’s a good reason to change any of them.
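
For example, to explicitly request NFS version 3 over TCP, the connect command might look like the following sketch (the option names follow standard NFS mount conventions, as noted above; confirm the exact syntax in the FlashArray User’s Guide for your Purity version):

FlashArray> pureoffload nfs connect --address X.X.X.X --mount-point /mnt/exports/share1 --mount-options vers=3,proto=tcp nfstarget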
 
To verify that the NFS offload target has been connected to the FlashArray successfully, use the following command:

FlashArray> pureoffload list

To disconnect an NFS offload target from the FlashArray, use the following command:

FlashArray> pureoffload disconnect nfstarget

Where:

  • nfstarget = the name of the offload target to disconnect

When the NFS target is connected to the FlashArray, the next step is to create and configure a protection group (or use an existing protection group) on the FlashArray.

Protection Groups

A protection group is the unit of replication on the FlashArray. This means that all volumes added to a protection group are replicated to the target configured in the protection group, according to the replication schedule.
 
Following are the steps to create and configure a protection group via the GUI. Please refer to the FlashArray User’s Guide for how to create & configure protection groups via the CLI.

Creating a Protection Group

The steps listed below show how to create a protection group via the GUI.
 
Go to Storage > Protection Groups, and click on the + sign on the right.

 
The following window will appear. Enter a name for the protection group, and click on Create.
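
A protection group can also be created via the CLI with a single command. The following is a sketch using the example name pgroup1 (refer to the FlashArray User’s Guide for the full syntax):

FlashArray> purepgroup create pgroup1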


After a protection group has been created, it will appear in the list of existing protection groups. Next, follow the steps below to configure the protection group.

Configuring a Protection Group

The following three steps are required to configure a protection group:

  1. Adding volumes to the protection group (this is the data that is to be offloaded to the NFS target)
  2. Adding the NFS target to the protection group (this specifies the location to offload the data)
  3. Creating a replication schedule (this specifies how frequently the data will be offloaded to the NFS target, and how long it will be retained on the NFS target before it is expired)

Step 1: Adding volumes to the Protection Group

After a protection group has been created, the next step is to add volumes to the protection group. The screenshots below show how to add volumes to a protection group.

From the protection groups listed under Storage > Protection Groups, select the desired protection group. The following screen will appear. Select Add from the options menu under Members.

The following screen will appear, listing existing volumes on the FlashArray. Select the volume(s) that you want to add to the protection group, and click on Add.

 

You should see the newly added volumes listed under Members for the protection group. 
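
Volumes can also be added via the CLI with purepgroup setattr. The following is a sketch using the example names vol1, vol2, and pgroup1; note that setting the volume list in this way typically replaces the existing member list, so include every volume that should be a member (confirm the exact flags in the FlashArray User’s Guide):

FlashArray> purepgroup setattr --vollist vol1,vol2 pgroup1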

The next step is to add an NFS target to the protection group.

Step 2: Adding an NFS target to the Protection Group

The following screenshots show how to connect an NFS target to a protection group using the GUI.

Go to Storage > Protection Groups, and select the desired protection group. The following screen will appear. Select Add from the options menu under Targets.

The following screen will appear. Select the NFS target, and click on Add.

 

When the NFS target has been added, it should appear under Targets for the protection group as shown below.
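
The CLI equivalent is another purepgroup setattr call. The following is a sketch using the example names nfstarget and pgroup1 (confirm the exact flags in the FlashArray User’s Guide):

FlashArray> purepgroup setattr --targetlist nfstarget pgroup1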

 

The last step in configuring a protection group is to create a replication schedule.

Step 3: Creating a replication schedule for the Protection Group

The following steps show how to create a schedule for the protection group.
 
Go to Storage > Protection Groups, and select the desired protection group. The following screen will appear. Click on the small square box to the right of Replication Schedule, as shown below.

The following screen will appear. Enable the replication schedule, and select “hours” or “days” on the Replicate a snapshot to targets every line. Enter the number of hours or days. Note that Snap to NFS does not allow replication more frequently than once every 4 hours.
 
Next, set the retention period by entering the number of hours or days on the Retain all snapshots on target for line.
 
Optionally, you can also choose to enter an extended retention schedule by entering non-zero values in the then retain X snapshots per day for Y more days line. In the example below, snapshots are taken & offloaded to the NFS target once every 4 hours; they are retained on the NFS target for 24 hours. In addition, one snapshot per day is retained on the NFS target for another 30 days.
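
Via the CLI, the same schedule can be configured with the purepgroup schedule, retain, and enable subcommands. The commands below are a sketch that mirrors the 4-hour/24-hour/30-day example above using the example name pgroup1; the flag names may vary by Purity version, so confirm them in the FlashArray User’s Guide:

FlashArray> purepgroup schedule --replicate-frequency 4h pgroup1
FlashArray> purepgroup retain --target-all-for 24h --target-per-day 1 --target-days 30 pgroup1
FlashArray> purepgroup enable --replicate pgroup1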


 
Once the schedule is created, Snap to NFS will start offloading data immediately. Snapshots will be taken at the scheduled times and offloaded to the NFS Target.

Snap to NFS replication frequency best practice

The ideal replication frequency depends on several factors, including the size of the dataset, the network bandwidth between the FlashArray and the NFS target, and the data change rate. In most cases, however, the best practice for Snap to NFS is to offload data once or twice per day at most.

Displaying the Replication Schedule

To display the replication schedule for a protection group in the GUI, go to Storage > Protection Groups, select the desired protection group, and click on the small square box to the right of Replication Schedule.

Use the following CLI command to display all protection group replication schedules on the FlashArray:

FlashArray> purepgroup list --schedule

Displaying a list of snapshots on the NFS target

A FlashArray must be connected to the NFS target in order to display or restore the offloaded snapshots. 

Pure1 can be used to monitor offloaded snapshots on the NFS target. Active management of the offloaded snapshots, such as restoring snapshots from the NFS target, must be done via the FlashArray interface. In Pure1, users can click on the name of a FlashArray in order to be rerouted to the login screen of the FlashArray.

Using Pure1 to display NFS snapshots

After logging into Pure1, under Protection on the left, select either Protection Groups or Snapshots to view a list of all protection groups or snapshots. Selecting Protection > Protection Groups will display a list similar to the following:

To view a list of volume snapshots for a particular FlashArray, go to Protection > Snapshots, then search for the FlashArray by entering the name of the FlashArray under Array. Next, click on a particular row to view the snapshots of a specific volume. The following screen shows all snapshots of volume engineer_vms2 on the array dogfood_chuckwagon.

To narrow the above search down to all snapshots of the volume engineer_vms2 located on the NFS target, enter the name of the NFS target under Target.

To narrow the list of snapshots further down to a particular offload time, click on the purple dot on the right that is closest to the offload date/time.

To view details of a protection group snapshot, go to Protection > Protection Groups, narrow the search down, then click on a particular snapshot on the right. A screen similar to the following will appear.

In order to actively manage the offloaded snapshots, such as restoring snapshots from the NFS target, click on the name of a FlashArray to be redirected to the login screen for that FlashArray.

Using the FlashArray GUI/CLI to display NFS snapshots

To view protection groups & snapshots on the NFS target, go to Storage > Array, and click on the NFS target listed under Offload Targets, as shown below:

 
A list of protection groups will be displayed in the top half of the screen, and a list of the snapshots on the NFS target will be displayed in the bottom half, as shown below:

 
Use the following CLI commands to view information about the offloaded data on the NFS target. In each command below, nfstarget is the name of the NFS offload target.

To display a list of protection groups on the NFS target:

FlashArray> purepgroup list --on nfstarget

To display protection group level snapshots on the NFS target:

FlashArray> purepgroup list --snap --on nfstarget

To display volume level snapshots on the NFS target:

FlashArray> purevol list --snap --on nfstarget

To display protection group snapshots on the NFS target, including data transfer information:

FlashArray> purepgroup list --snap --transfer --on nfstarget

Restoring snapshots from the NFS target

Restoring a snapshot from an NFS target involves the following steps:

  1. Recovering the contents of the snapshot from the NFS server to the FlashArray. This creates a local copy of the snapshot on the FlashArray.
  2. Copying the local snapshot either to create a new volume or to overwrite an existing volume.
  3. Connecting the newly created volume to a host and accessing the data.

Step 1: Recovering a snapshot from the NFS target to the FlashArray

The following screenshots show how to recover a snapshot from the NFS target to the FlashArray.

Under Storage > Array > Offload Targets, click on the NFS target to view a list of Snapshots on the NFS target. Select a protection group snapshot to restore by clicking on the download button on the right, as shown below.

The following screen will appear, listing all the volume snapshots in the protection group. Select the volume snapshots that you want to restore. You can optionally add a suffix to the names of the restored snapshots to make them easier to identify. When done, click on Get.

Once the recovered snapshots are copied to the FlashArray, they appear in the Volume Snapshots tab under the Storage > Volumes menu, as shown below:

Step 2: Copying the snapshot to create a volume

Once a snapshot has been restored to the FlashArray, a new volume can be created from the snapshot, or an existing volume can be overwritten by it. The following screens show how to create a new volume from a restored snapshot using the GUI.
 
Under the Storage > Volumes menu, click on the options menu to the right of the snapshot, and select Copy.

 
The following screen will appear. Enter a name for the volume to be created, and click on Copy.

Once the volume has been created, it will appear in the list of volumes under Storage > Volumes.
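
Via the CLI, a new volume can be created from the restored snapshot with purevol copy. The following is a sketch, assuming a restored snapshot named vol1.restored and a new volume named vol1-recovered (confirm the exact syntax in the FlashArray User’s Guide):

FlashArray> purevol copy vol1.restored vol1-recovered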

Step 3: Connecting the volume to a host, & accessing data

To access the newly created volume from hosts, connect the volume to a host. Click on the options button to the right of the volume name, and select Connect, as shown below.

The following screen will appear. Select a host and click on Connect.


When the volume is connected to the host, it will be visible to the host. Connect to the volume from the host to access the restored data.
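
Via the CLI, the connection can be made with purehost connect. The following is a sketch using the example names host1 and vol1-recovered (confirm the exact syntax in the FlashArray User’s Guide):

FlashArray> purehost connect --vol vol1-recovered host1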

Summary

The Purity Operating Environment can efficiently move snapshots between FlashArrays. Starting with Purity version 5.1, Snap to NFS, with its ability to natively offload snapshots to generic NFS targets, is also available at no extra cost as part of Pure’s ever-expanding Evergreen capabilities.


Snap to NFS can be managed via the FlashArray GUI/CLI, Pure1, or the REST API. Setup and administration of Snap to NFS is simple. The NFS appliance is configured as just another replication target within protection groups. Recovery of snapshots from the NFS target is just as easy. Snapshots on NFS can be browsed from any FlashArray connected to the NFS target and restored to any connected FlashArray with a few simple clicks.