
FAQ: NVMe-oF and VMware


NVMe-oF Terminology

Asymmetric Namespace Access (ANA) 

Simply put, Asymmetric Namespace Access (ANA) is an NVMe standard that gives a target (the FlashArray) a way to inform an initiator (ESXi in our case) of the optimal way to access a given namespace.

Depending on the design of an array, all paths may not be created equal, so ANA is a way for the array to inform the initiator of these differences. A common scenario in the storage industry is that each controller "owns" specific resources (such as a namespace) while the other controllers do not. The namespace is still accessible through the secondary controllers' front-end ports, but there may be a performance penalty for accessing it that way, so going through the owning controller equates to faster service times.

Because of this potential performance penalty, the storage array may advertise all of the paths leading to the primary controller as "Optimized" and the paths to the secondary controller(s) as "Non-Optimized". Connected hosts then send I/O only to the "Optimized" paths as long as they are available. Should those paths become unavailable for any reason, only then will the host send I/O to the "Non-Optimized" paths. Obviously, slow I/O is better than no I/O.

A FlashArray utilizing NVMe-oF advertises all paths to any given host as "Optimized". Due to the design of the FlashArray, there is a negligible performance difference between the primary and secondary controllers, so "Non-Optimized" paths are not reported.
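
To make this behavior concrete, here is a minimal Python sketch of ANA-aware path selection. The names and data structures are illustrative only, not a VMware or Pure Storage API; the logic simply mirrors the "Optimized first, Non-Optimized as fallback" behavior described above.

```python
from dataclasses import dataclass
import random

@dataclass
class Path:
    name: str
    ana_state: str   # "optimized" or "non-optimized", as reported by the target
    available: bool

def select_path(paths: list[Path]) -> Path:
    """Use only 'optimized' paths while any are available; fall back to
    'non-optimized' paths only when none are."""
    optimized = [p for p in paths if p.available and p.ana_state == "optimized"]
    if optimized:
        return random.choice(optimized)           # spread I/O across optimized paths
    fallback = [p for p in paths if p.available]  # slow I/O is better than no I/O
    if fallback:
        return random.choice(fallback)
    raise RuntimeError("no available paths to namespace")

# A FlashArray reports every path as 'optimized', so the fallback branch
# is never taken while any path remains up.
paths = [
    Path("vmhba64:C0:T0:L1", "optimized", True),
    Path("vmhba65:C0:T0:L1", "optimized", True),
]
print(select_path(paths).name)
```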

For all intents and purposes, Asymmetric Namespace Access (ANA) and Asymmetric Logical Unit Access (ALUA) are synonymous with one another; the difference is that the term ALUA is used for SCSI-based storage while ANA is used for NVMe-based storage.

Namespace 

A volume presented from the FlashArray to the ESXi host (or any other host) is referred to as a namespace. This is the same concept as a Logical Unit (LU) in SCSI-based storage.
 

Namespace ID (NSID)

The namespace ID (NSID) is the identifier a given controller uses for a namespace. Once again, this equates to a Logical Unit Number (LUN) in SCSI-based storage.
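
As a small illustration of the analogy, the Python sketch below shows a controller resolving per-controller NSIDs to namespaces, much as a SCSI target maps LUNs to logical units. The dictionary and volume names are hypothetical.

```python
# Illustrative only: a controller hands out per-controller integer handles
# (NSIDs) for its namespaces, analogous to LUNs on a SCSI target.
controller_namespaces = {
    1: "esx-datastore-vol-01",  # NSID 1
    2: "esx-datastore-vol-02",  # NSID 2
}

def resolve_namespace(nsid: int) -> str:
    try:
        return controller_namespaces[nsid]
    except KeyError:
        raise LookupError(f"no namespace attached at NSID {nsid}") from None

print(resolve_namespace(1))  # -> esx-datastore-vol-01
```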
 

NVMe Qualified Name (NQN)

The NVMe Qualified Name (NQN) is used to uniquely identify (and authenticate) a target or initiator. Similar to an iSCSI Qualified Name (IQN), there is a single NVMe Qualified Name associated with the FlashArray.
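
For illustration, the short Python sketch below checks a string against the two NQN formats defined by the NVMe specification (a reverse-domain-name form and a UUID form, with an overall cap of 223 bytes). The helper name and the example NQNs are made up; a real FlashArray or ESXi NQN will differ.

```python
import re

# Reverse-domain form: nqn.<yyyy-mm>.<reverse-domain>[:<user-string>]
NQN_DOMAIN = re.compile(r"^nqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$")
# UUID form from the NVMe spec: nqn.2014-08.org.nvmexpress:uuid:<uuid>
NQN_UUID = re.compile(
    r"^nqn\.2014-08\.org\.nvmexpress:uuid:"
    r"[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$"
)

def looks_like_nqn(nqn: str) -> bool:
    if len(nqn.encode()) > 223:  # the spec caps an NQN at 223 bytes
        return False
    return bool(NQN_UUID.match(nqn) or NQN_DOMAIN.match(nqn))

print(looks_like_nqn("nqn.2010-06.com.example:array.0123456789"))  # True (illustrative)
print(looks_like_nqn("iqn.1998-01.com.vmware:esx01"))              # False: that's an IQN
```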

RNIC

The term RNIC simply refers to an RDMA-capable NIC. Not all NICs are capable of RDMA, so if you plan to use NVMe-RDMA, ensure your NIC supports it.
 

NVMe-oF FAQ

Does NVMe-oF support boot from SAN?

Boot from SAN is supported only with NVMe-oF via Fibre Channel. This restriction is not a VMware or Pure Storage limitation but rather an HBA firmware limitation. For Fibre Channel, ensure your HBA firmware is a version that supports boot from an NVMe SAN.
 

Does NVMe-oF support vVols?

Yes, with Purity//FA 6.6.2 and later, NVMe vVols are supported on the FlashArray. Both NVMe-FC and NVMe-TCP are supported with Purity//FA 6.6.2 and later; vSphere 8.0 U1 is required for TCP support.

NVMe-vVols User Guide
 

Does NVMe-oF support Raw Device Mappings (RDMs)?

No, NVMe-oF does not support Raw Device Mappings (RDMs). Only VMFS connectivity is supported with NVMe-oF.
 

Does NVMe-oF support shared (multi-writer mode) VMDKs?

Yes, with vSphere 7.0 U1 and later, shared (multi-writer mode) VMDKs are supported with NVMe-oF.
 

Does NVMe-oF support clustered VMDKs?

Yes, NVMe-oF does support clustered VMDKs.


Does NVMe-oF support directly connected ESXi hosts?

It depends on the transport: NVMe-RoCE does support directly connected ESXi hosts, but NVMe/FC does not.


What Pure Storage and VMware features are compatible with NVMe-oF?

You can refer to the NVMe-oF Compatibility with VMware vSphere KB for the most up-to-date information.
 

What vSphere Storage APIs Array Integration (VAAI) features are available with NVMe-oF?

With the initial release of vSphere 7.0, not all vSphere Storage APIs Array Integration (VAAI) features are available. This isn't a limitation of Pure Storage or VMware but rather of the NVMe spec: not all offloading capabilities have been translated from SCSI to NVMe. See below for each SCSI command and its equivalent NVMe command (if applicable).

| Feature | SCSI Command | NVMe Command |
| --- | --- | --- |
| Atomic Test and Set (ATS) / Hardware Accelerated Locking | COMPARE AND WRITE (0x89) | Compare (0x05) + Write (0x01), fused command |
| Block Zero / Hardware Accelerated Init | WRITE SAME (0x93) | Write Zeroes (0x08) |
| Extended Copy (XCOPY) / Hardware Accelerated Copy | XCOPY (0x83) | No equivalent NVMe command |
| Dead Space Reclamation (Block Delete) | UNMAP (0x42) | Deallocate (via Dataset Management, 0x09) |
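
To illustrate why ATS maps to a fused Compare + Write pair, here is a toy Python sketch of compare-and-write semantics. The class is purely illustrative; a lock stands in for the controller's guarantee that the fused pair executes atomically.

```python
import threading

class LockRecord:
    """Toy on-disk lock record illustrating ATS semantics: NVMe fuses a
    Compare (0x05) and a Write (0x01) so they execute as one atomic unit,
    which the lock below stands in for."""
    def __init__(self, data: bytes):
        self._data = data
        self._lock = threading.Lock()

    def compare_and_write(self, expected: bytes, new: bytes) -> bool:
        with self._lock:          # both halves succeed or fail together
            if self._data != expected:
                return False      # compare failed, so the write is not performed
            self._data = new
            return True

record = LockRecord(b"free")
# Two hosts racing for the same VMFS lock record: only one fused op can win.
print(record.compare_and_write(b"free", b"host-A"))  # True
print(record.compare_and_write(b"free", b"host-B"))  # False: lost the race
```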

Which NVMe-oF transport protocols does ESXi 7.0+ support?

VMware ESXi 7.0+ supports NVMe over Fibre Channel (NVMe-FC), NVMe over RDMA over Converged Ethernet (NVMe-RoCE), and NVMe over Transmission Control Protocol (NVMe/TCP). As a point of clarification, NVMe-RoCE is also referred to as NVMe-RDMA; the two terms are used synonymously. NVMe-FC support was added to FlashArray in Purity 6.1, and NVMe/TCP support was added in Purity 6.4.2.

It is highly recommended to be on at least ESXi 7.0 U1 when using NVMe-oF.