Pure Technical Services

vVols User Guide: NVMe-oF vVols


With the release of VASA Provider 3.0.0 and Purity//FA 6.6.2, Pure Storage supports NVMe vVols with vSphere 8.0 U1 and later.  This guide covers the implementation of NVMe vVols with the FlashArray.

NVMe-vVols on the Pure Storage FlashArray is currently being certified with VMware.  Certification for vVols with vSphere 8.0 is broken into two parts: the development and enablement phase, and the final build certification phase.  The development and enablement phase of the certification has been completed, but a VMware Compatibility Guide (VCG) listing is pending completion of the final certification phase.  Pure Storage fully supports NVMe vVols in the meantime.

NVMe-oF vVols Overview

With the vSphere 8.0 GA release, VMware introduced support for NVMe-oF with vVols over Fibre Channel.  In vSphere 8.0 U1, support for NVMe-oF with vVols over TCP was released.  Pure Storage supports both NVMe-oF TCP and FC for vVols on the FlashArray beginning with Purity//FA 6.6.2.

NVMe-oF Terminology 

There are some key terms that should be covered for NVMe-oF in general and for NVMe vVols in particular.

Asymmetric Namespace Access (ANA)

Asymmetric Namespace Access (ANA) is an NVMe standard that was implemented as a way for the target (FlashArray) to inform an initiator (ESXi in our case) of the most optimal way to access a given namespace. 

Depending on the design of an array, all paths may not be created equal; thus this is a way for the array to inform the initiator of these differences. A common scenario in the storage industry is that each controller "owns" specific resources (such as a namespace) while other controllers do not. While the namespace is still accessible through the secondary controller(s) front-end ports, there may be a performance penalty for doing so, thus accessing the namespace through the owning controller equates to faster service times.

Due to the potential performance penalties, the storage array may advertise all of the paths leading to the primary controller as "Optimized" and the paths to the secondary controller(s) as "Non-Optimized". This results in the connected hosts sending I/O only to the "Optimized" paths as long as they are still available. Should they become unavailable for any reason, only then will the host send I/O to the "Non-Optimized" paths. Obviously, slow I/O is better than no I/O.

A FlashArray utilizing NVMe-oF advertises all paths as "Optimized" to any given host. Due to the design of the FlashArray, there is negligible performance difference between the primary and secondary controllers; thus, "Non-Optimized" paths are not reported.

For the implementation of NVMe vVols, storage vendors are expected to provide two separate types of controller access, which the ESXi NVMe drivers will use accordingly.  vVol namespaces are expected to never be advertised by non-vVol controllers.

Asymmetric Namespace Access (ANA) and Asymmetric Logical Unit Access (ALUA) are essentially synonymous with one another. The difference is that the term ALUA is used for SCSI-based storage while the term ANA is used for NVMe-based storage.


Namespace

A volume presented from the FlashArray to the ESXi host is referred to as a namespace. This is the same concept as a Logical Unit (LU) with SCSI-based storage. The concept of NVMe namespaces maps very well to virtual volumes, and the implementation approach is to map vVols to NVMe namespaces on a one-to-one basis.

Namespace ID (NSID)

The namespace ID is used as an identifier for a namespace from any given controller. This equates to a Logical Unit Number (LUN) with SCSI-based storage.  For NVMe vVols, vSphere expects the array to support a large number of namespaces, but leaves the management and allocation of NSIDs to the storage provider.  vSphere does not store NSIDs, nor does it expect the same NSID to always be used for a given namespace.  The VASA specification is designed to make no assumptions about how arrays allocate NSIDs for the namespaces backing vVols or their snapshots.

Virtual Protocol Endpoint

With the SCSI implementation of vVols, Protocol Endpoints were crucial to overcome scaling issues with the number of LUNs connected to ESXi hosts.  With NVMe vVols, Protocol Endpoints as used with SCSI are no longer necessary, but there is still value in being able to manage multipathing for all vVols on the groups of namespaces that back NVMe vVols, as well as in reporting path status.

To achieve this, a "Virtual Protocol Endpoint" is implemented with NVMe vVols.  The virtual PE does not represent a specific storage object or "Administrative LUN" as with a SCSI PE; rather, it is a host-side object that represents all of the NVMe vVols namespace groups advertised by the storage array.  This means that only a single Virtual Protocol Endpoint shows up in an ESXi host's device list per storage array.  Even if there are multiple NVMe vVols storage containers with multiple PE objects created for them on the array, ESXi will only report a single virtual PE for that storage array.

NVMe Qualified Name (NQN)

The NVMe Qualified Name (NQN) is used to uniquely identify (and authenticate) a target or initiator. Similar to an iSCSI Qualified Name (IQN), there is a single NVMe Qualified Name associated with the FlashArray. The implementation for NVMe vVols has each ESXi host using a unique NQN for vVols and then using a different NQN for standard VMFS or RDM access.  ESXi ensures that separate pairs of host NQN and host ID will be used when accessing vVols and non-vVols.  

Using NVMe-oF with vVols

This is the workflow for getting started with NVMe vVols on a FlashArray running Purity//FA 6.6.2 or later.  The process requires the following:

  1. ESXi hosts and vCenter Server running 8.0 U1 or later
  2. FlashArray running Purity//FA 6.6.2 or later
  3. ESXi hosts configured to use NVMe-oF with TCP or FC
  4. FlashArray configured to use NVMe-oF with TCP or FC
  5. The unique ESXi host's NQN for vVols
  6. Create host object on FlashArray with the unique NQN
  7. Create a new Pod on the FlashArray
  8. Create a new Protocol Endpoint in the new Pod
  9. Connect the Protocol Endpoint to the host objects with the unique NQN
  10. Rescan Storage Providers in vCenter
  11. Create vVol Datastore with the NVMe Storage Container
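Under the assumption that the array-side steps are run from the FlashArray CLI, the workflow above can be sketched as a dry-run script.  All host, pod, host group, and NQN names below are hypothetical placeholders; substitute your own, and remove the DRY_RUN guard to actually execute the pure* commands on the array:

```shell
#!/bin/sh
# Dry-run sketch of the FlashArray-side NVMe vVols setup.
# With DRY_RUN set, each command is printed instead of executed.
DRY_RUN=1
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

POD="FA-NVMe-vVols-SC-01"                 # pod name == storage container name
HGROUP="NVMe-vVols-Host-Group-FC"
VVOL_NQN="nqn.2014-08.com.example:esxi-1-vvols"   # hypothetical; from esxcli on the host

# Host object with the unique vVol NQN, then an optional host group
run purehost create --personality esxi --nqnlist "$VVOL_NQN" ESXi-1-nvme-vvols
run purehgroup create --hostlist ESXi-1-nvme-vvols "$HGROUP"

# Pod, Protocol Endpoint in the pod, and PE-to-host-group connection
run purepod create --quota-limit 500T "$POD"
run purevol create --protocol-endpoint "$POD::pure-protocol-endpoint"
run purevol connect --hgroup "$HGROUP" "$POD::pure-protocol-endpoint"
```

The remaining steps (rescanning storage providers and creating the vVol datastore) are done in vCenter, as shown later in this guide.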

Creating NVMe-vVols Hosts on the FlashArray

One aspect of using NVMe vVols is that the ESXi host uses a unique NQN when connecting to the storage array for NVMe vVols.  At the time of writing this KB, the only way to get the ESXi host's vVol NQN is with esxcli, and the command differs between vSphere 8.0 U1 and vSphere 8.0 U2 and later.  Once the NQNs are retrieved, create a new host object on the FlashArray and assign the vVol NQN to the host object.

Using esxcli to list the host vVol NQN

vSphere 8.0 U1 and later
[root@esxi-1:~] /usr/bin/localcli --plugin-dir /usr/lib/vmware/esxcli/int storage internal vvol vasanvmecontext get
   Host ID: 52e3d127-0a3d-7217-90f0-7a8201e9a93e
   Host NQN:
vSphere 8.0 U2 and later
[root@esxi-1:~] esxcli storage vvol nvme info get
   Host ID: 52e3d127-0a3d-7217-90f0-7a8201e9a93e
   Host NQN:
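The NQN can also be pulled out of the esxcli output programmatically, for example when scripting host creation for a whole cluster.  This sketch filters a hypothetical sample of the `info get` output with awk; on a real host, pipe the esxcli command's output into the same filter:

```shell
#!/bin/sh
# SAMPLE is hypothetical output for illustration only; on an ESXi host use:
#   esxcli storage vvol nvme info get | awk -F': ' '/Host NQN:/ {print $2}'
SAMPLE='   Host ID: 52e3d127-0a3d-7217-90f0-7a8201e9a93e
   Host NQN: nqn.2014-08.com.example:esxi-1-vvols'

# Split on ": " so the colons inside the NQN itself are left intact
nqn=$(printf '%s\n' "$SAMPLE" | awk -F': ' '/Host NQN:/ {print $2}')
echo "$nqn"
```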

Create FlashArray Host Object with unique NQN

You can create these host objects with the CLI or with the GUI.

With the FA CLI
purehost create --personality esxi --nqnlist <ESXi-1-vVol-NQN> ESXi-1-nvme-vvols

purehost create --personality esxi --nqnlist <ESXi-2-vVol-NQN> ESXi-2-nvme-vvols
With the FA GUI

Create FlashArray Host Group Object with NVMe-vVols Hosts

In the event that there is an ESXi Cluster that will need to map directly to the ESXi hosts using NVMe-vVols, then create a Host Group for those hosts.

With the FA CLI
purehgroup create --hostlist ESXi-1-nvme-vvols,ESXi-2-nvme-vvols NVMe-vVols-Host-Group-FC
With the FA GUI

Creating NVMe-vVols Storage Containers on the FlashArray

To create an NVMe-vVols-capable storage container on the FlashArray, a new pod will need to be created.  Then the Protocol Endpoint created in the pod will need to be connected to the host group containing the hosts with the unique NVMe vVols NQNs.

With the FA CLI

When creating the new pod in the CLI, you can also set the quota for the pod.  This will allow the user to set the specific size for the new storage container.

# Create the new Pod #
purepod create --quota-limit 500T FA-NVMe-vVols-SC-01

# Create the new Pod's Protocol Endpoint #
purevol create --protocol-endpoint FA-NVMe-vVols-SC-01::pure-protocol-endpoint

# Connect the Protocol Endpoint to the NVMe-vVols Host Group #
purevol connect --hgroup NVMe-vVols-Host-Group-FC FA-NVMe-vVols-SC-01::pure-protocol-endpoint

With the FA GUI

When creating the new pod in the FA GUI you do not have the ability to set the quota on creation.

Create a new pod; the name of the pod will be the name of the new storage container
Click on Volumes:Options and select "Show Protocol Endpoints"
Click on "Create Protocol Endpoint"
Create the Protocol Endpoint
Click on the new Protocol Endpoint
Connect the PE to the NVMe-vVols Host Group

Creating the NVMe-vVols Datastores in vSphere

When creating the vVol datastore in vSphere, first re-synchronize the storage providers.  Then go through the normal process of creating a vVol datastore.  The important part is selecting the correct storage container, which will match the name of the pod that was created on the array.

After connecting the Pod::PE to the NVMe-vVols hosts, in vCenter synchronize storage providers
On the ESXi Cluster, right click on the cluster and click on New Datastore
Select vVol
Select the storage container that matches the Pod Name
Select the hosts in the Cluster
And finish the vVol Datastore creation wizard
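Once the datastore exists, the result can be checked from an ESXi shell.  This sketch wraps two esxcli listings (the registered storage containers and the protocol endpoints, including the virtual PE) in a guard so it is a no-op when run anywhere other than an ESXi host:

```shell
#!/bin/sh
# Verification sketch: run on an ESXi host (e.g. via SSH) after the vVol
# datastore has been created.
check_vvol_setup() {
    if command -v esxcli >/dev/null 2>&1; then
        # Storage containers the host knows about (should include the pod name)
        esxcli storage vvol storagecontainer list
        # Protocol endpoints; one virtual PE is expected per array
        esxcli storage vvol protocolendpoint list
    else
        echo "not an ESXi host: run these esxcli commands on ESXi (e.g. via SSH)"
    fi
}
check_vvol_setup
```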
Here is a quick look at the virtual Protocol Endpoint in the ESXi device list.  Notice that the LUN is 0 and the size is 1.00 GB.

Differences in vSphere 8.0 U2 and vSphere 8.0 U1 GUI Views

There are slight differences between the vSphere 8.0 U1 and 8.0 U2 GUIs in the Host view and Datastore view.  You'll notice that there isn't a specific spot for the NVMe vPE in 8.0 U1, while 8.0 U2 provides more detail and a dedicated listing.

vSphere 8.0 U2 Host Protocol Endpoint View
vSphere 8.0 U1 Host Protocol Endpoint View (notice that there are no NVMe PEs)
vSphere 8.0 U2 Datastore Configure View
vSphere 8.0 U1 Datastore Configure View

Overall, vSphere 8.0 U2 has many quality-of-life updates in the GUI and CLI for NVMe vVols.