With Purity 6.4.0, Pure Storage introduced support for NFS datastores on FlashArray for VMware vSphere 7.0 and later. FlashArray now supports NFS, VMFS, and vVol datastores on the same platform.
In addition to NFS datastore support, FlashArray also offers tight management integration through the Pure Storage plugin for the vSphere Client, as well as a certified VAAI NFS plugin (VIB) for accelerated operations. This document covers how to manually configure NFS datastores in the FlashArray GUI and the vSphere Client. Please refer to Using the Pure Storage Plugin for the vSphere Client for NFS management with the vSphere Plugin.
Please note that FA File needs to be activated on the FlashArray before following this guide. Use this KB to enable FA File.
In order to have VM granularity on FlashArray, it is recommended to enable automatic directory management.
Pure Storage recommends installing and configuring the VAAI NFS VIB component for any ESXi hosts that are going to use FlashArray NFS datastores. Refer to this KB for detailed instructions.
Purity Object Model for NFS
To implement NFS in a flexible and transparent way, Pure Storage has added the following objects to the Purity object model.
- File Systems. This is the base object of control. There is a default file system created with FA File (called purefile), but additional ones can be created. In general, you would create additional file systems for replication granularity: the unit of replication for NFS on FlashArray is the file system (a file system is either replicated or it is not).
- Directories. Directories under FlashArray File's control are referred to as managed directories. A managed directory is the object of control for snapshots and quotas: when a snapshot is created, it is created for the whole managed directory. Managed directories can be nested into a hierarchy. Note that not all folders on an NFS mount are managed directories.
- Directory Export. This is how you present a directory or an entire file system to a host or set of hosts. An export is created by associating a directory with a policy; once exported, that directory can be mounted by a client via the NFS protocol.
- Directory Policies. This dictates how directories associated with this policy can be consumed and by what clients.
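The relationships between these four objects can be sketched as a simple data model. This is illustrative only; the class and field names below are hypothetical and do not correspond to Purity API objects:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExportPolicy:
    # Dictates which clients can consume associated directories, and how.
    name: str
    rules: List[str] = field(default_factory=list)  # client specs, e.g. "*" or "10.0.0.0/24"


@dataclass
class ManagedDirectory:
    # Unit of control for snapshots and quotas; can be nested into a hierarchy.
    name: str
    path: str
    children: List["ManagedDirectory"] = field(default_factory=list)


@dataclass
class DirectoryExport:
    # Presents a managed directory to clients under an export name.
    export_name: str
    directory: ManagedDirectory
    policy: ExportPolicy


@dataclass
class FileSystem:
    # Base object of control; also the unit of replication.
    name: str
    root: ManagedDirectory


# Example: the default file system with one exported subdirectory.
fs = FileSystem("purefile", ManagedDirectory("root", "/"))
vmdir = ManagedDirectory("nfs-vms", "/nfs-vms")
fs.root.children.append(vmdir)
export = DirectoryExport("newNFS-01", vmdir, ExportPolicy("vmware-policy", ["*"]))
```

The key relationship to keep in mind: snapshots and quotas apply at the managed-directory level, while replication applies at the file-system level.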
NFS Datastore Components
|Component|Description|
|---|---|
|Pure NFS VAAI Plugin (VIB)|Enables the offloading of storage-related tasks from the ESXi hosts to the FlashArray for better performance and faster operations|
|Remote vSphere Plugin release with NFS workflows|NFS datastore creation, NFS datastore connectivity information|
|Automatic directory management (autodir)|VM granularity for performance and capacity metrics on FlashArray, simpler management of array-based snapshots|
|NFS support on FlashArray File|FlashArray-backed NFSv3 and NFSv4.1 datastores|
Configure a new FlashArray NFS Export
The first step is to create a new file system (or use an existing one). In the FlashArray UI, navigate to Storage > File Systems and click on the plus sign to create a new File System.
Next, give it a friendly name and optionally choose a pod to put it in. A file system should go into a pod if replication is desired for the file system now or in the future. A file system can only be placed into a pod that is completely empty or that already contains a file system.
You may choose to export this entire file system or subdivide it into separate exports. To subdivide, continue on. To export the entire file system, skip to the next section.
To subdivide a file system, create a new directory on it. Click the plus sign in the Directories panel.
Choose a file system, a name, and a path. The path and name do not have to be the same.
The next step is to create an export policy. There is a default policy that can be used or you can create your own. To create your own, click on the Policies tab and click the plus sign in the Export Policies panel.
Give the policy a name, a type, and optionally a pod if you want this policy to be able to be applied to exports in a certain pod.
Click on the export policy and then click on the plus sign in the Rules panel.
This input requires a few things:
- Client. Enter the IPs, CIDR ranges, domain suffixes, or ranges of ESXi VMkernel addresses that should be able to access exports through this policy. The default configuration allows any client (an asterisk).
- Access. VMware NFS requires no-root-squash.
- Permission. Read/Write or Read Only. Choose one depending on the use case (backup or running VMs). To create and run VMs, make it RW.
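To make the Client field concrete, here is an illustrative sketch of how a client specification might be matched against a VMkernel address. This is not Purity's actual implementation; it assumes the rule is an asterisk, a single IP, or a CIDR block (the domain-suffix and range forms are omitted for brevity):

```python
import ipaddress


def client_matches(rule: str, client_ip: str) -> bool:
    """Return True if client_ip is allowed by the rule.

    Supports '*' (any client), a single IP, or a CIDR block.
    """
    if rule == "*":
        return True
    # A bare IP like "10.0.0.5" becomes a /32 network with strict=False.
    network = ipaddress.ip_network(rule, strict=False)
    return ipaddress.ip_address(client_ip) in network


# A VMkernel address inside the allowed subnet:
print(client_matches("10.21.88.0/24", "10.21.88.11"))  # True
print(client_matches("*", "192.168.1.5"))              # True
print(client_matches("10.21.88.0/24", "10.21.89.11"))  # False
```

Restricting the policy to the VMkernel subnet used for NFS (rather than the default asterisk) limits which hosts can mount the export.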
Associate Policy with an Export
The next step is to associate the policy with a directory so the directory can be mounted. Click on the Members panel under the policy, or under File Systems > Directory Export click the plus sign. Either option provides the same workflow.
Choose the directory and give it an export name. This is what VMware will use to mount and address the datastore.
Mount a FlashArray Export as an NFS Datastore
In vSphere, right-click on a host, cluster or datacenter and go to Storage > New Datastore.
Choose NFS and NFS v3 (NFS 4.1 is not yet supported).
Next, enter a datastore name (ideally the name of the export you want to mount). Provide the export name above (newNFS-01 in this example) with a / in front of it as the folder, and the filevip IP of the FlashArray as the server to use for NFS access.
To find the filevip IP address of the FlashArray, log into the FlashArray UI and navigate to Settings > Network; it is listed in the Ethernet panel.
Select the hosts you want to connect to the NFS datastore and click Next.
Review the details of the configuration so far; if anything needs updating, go back to the pertinent screen and update it. Otherwise, click Finish.
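As an alternative to the vSphere Client wizard, the same mount can be performed from an ESXi host's shell with esxcli. The address and export name below are placeholders matching this walkthrough's example, not values from your environment:

```shell
# Mount the FlashArray export as an NFSv3 datastore on this ESXi host.
#   -H: the FlashArray filevip address (example value shown)
#   -s: the export name with a leading slash
#   -v: the datastore name as it will appear in vSphere
esxcli storage nfs add -H 10.21.88.10 -s /newNFS-01 -v newNFS-01

# Verify the datastore is mounted and accessible:
esxcli storage nfs list
```

Repeat the `esxcli storage nfs add` command on each ESXi host that should see the datastore, using the same datastore name on every host.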