Confirm NVMe-FC Support
- FlashArray //X R2 (or newer) or //C
- Purity 6.1+
- vSphere 7.0 or later (7.0 U1 or later recommended)
- Follow this KB to enable NVMe-FC on the FlashArray
- Review Pure's NVMe-oF Support Matrix
- Supported HBAs: https://www.vmware.com/resources/compatibility/search.php?deviceCategory=io&details=1&pFeatures=361&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc
- Switched FC fabric (direct connect is not supported)
- To configure NVMe-RoCE for VMware, you can use this KB
- To confirm that it is enabled, log in to your array and look at Settings > Network > Fibre Channel. You should see nvme-fc on the ports. Note that a given FC port is either SCSI or NVMe, not both.
If you do not see this, it needs to be enabled on your FlashArray; follow this KB to enable it.
ESXi Host Configuration
- The first configuration step is to enable NVMe-oF on your ESXi host(s). In this environment, I have a cluster that has two hosts, each with supported NVMe-oF/FC HBAs:
With some earlier NVMe HBA drivers, when you click on Storage Adapters, there isn't a listing for controllers, namespaces, etc. This is because the NVMe "feature" driver is not enabled by default.
With newer versions of certain adapters, the NVMe feature is enabled by default, such as with Emulex cards with version 12.8 or later. So, in that case the following step is not necessary.
For versions that do not have it enabled by default, this change cannot be made in the vSphere Client, but only through the command line or a tool that can make esxcli calls, such as PowerCLI.
- SSH into the host(s).
- Run the following command to enable this feature:
# Emulex (lpfc driver): enable both the FCP and NVMe FC4 types
esxcli system module parameters set -m lpfc -p "lpfc_enable_fc4_type=3"
# QLogic (qlnativefc driver):
esxcfg-module -s "ql2xnvmesupport=1" qlnativefc
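You can confirm the parameter took effect with `esxcli system module parameters list -m lpfc` (or `-m qlnativefc`). If you are collecting that output from several hosts, a small Python helper can parse the captured listing; this is a hedged sketch, and the exact column layout shown in the sample string is an assumption, not a transcript:

```python
def parameter_value(esxcli_output: str, name: str):
    """Return the configured value of a module parameter from captured
    'esxcli system module parameters list' output, or None if the
    parameter is not found. Assumes a 'Name  Type  Value  Description'
    column layout."""
    for line in esxcli_output.splitlines():
        fields = line.split()
        if fields and fields[0] == name:
            return fields[2] if len(fields) > 2 else None
    return None

# Illustrative sample output (layout is an assumption, not a transcript):
sample = """\
Name                  Type  Value  Description
--------------------  ----  -----  -----------
lpfc_enable_fc4_type  int   3      Defines FC4 types that are supported
"""

print(parameter_value(sample, "lpfc_enable_fc4_type"))  # prints: 3
```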
Ensure that your installed HBA driver version supports NVMe-FC by checking the VMware Compatibility Guide for the feature called IO Device NVMe/FC. If the currently installed driver does not have this listed, you will need to install a version that does.
- Then reboot the host.
- Repeat this on each host in the cluster.
- Once the host has come back up, navigate back to the storage adapters. You will see new adapters added (one for each port), which will likely be something like vmhba65 and so on.
- Once you click on one, you will see more information appear in the details panel:
- If your zoning is complete at this point (zoning is not covered here, but the process is the same as with SCSI-FC), note that the WWNs are the same for both the host and the array as they are for SCSI.
- So if zoning was already done for FC-SCSI, no additional steps are needed.
- If zoning is not complete, follow the normal process you use for SCSI zoning.
- The controllers will automatically appear. Unlike RoCE, you do not need to manually configure the controllers because they will auto-discover.
- If you do not see anything listed here, ensure zoning is complete and upgrade to the latest HBA drivers.
- You can see this event in the vmkernel log on the ESXi host (/var/log/).
2021-01-16T01:29:10.112Z cpu22:2098492)NVMFDEV:302 Adding new controller nqn.2010-06.com.purestorage:flasharray.6d122f70cac7d785 to active list
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2895 disabling controller…
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2904 enabling controller…
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:877 Ctlr 257, queue 0, update queue size to 31
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2912 reading version register…
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2926 get controller identify data…
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:5101 Controller 257, name nqn.2010-06.com.purestorage:flasharray.6d122f70cac7d785#vmhba65#524a93775676b304:524a93775676b304
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:4991 Ctlr(257) nqn.2010-06.com.purestorage:flasharray.6d122f70cac7d785#vmhba65#524a93775676b304:524a93775676b304 got slot 0
2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:6156 Succeeded to create recovery world for controller 257
- For optimal performance with Pure-backed devices, run the following two commands on the ESXi hosts you are going to use with NVMe-FC (more information here). Note that no reboot is required:
esxcli storage core claimrule add --rule 102 -t vendor -P HPP -V NVMe -M "Pure*" --config-string "pss=LB-Latency,latency-eval-time=180000"
esxcli storage core claimrule load
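You can confirm the rule is both defined and loaded with `esxcli storage core claimrule list`; you want to see rule 102 in both the file and runtime classes. As a hedged sketch (the column order in the sample listing is an assumption), a parser over captured output could look like:

```python
def rule_classes(claimrule_listing: str, rule_id: int):
    """Return the set of classes ('file', 'runtime') in which a claim
    rule appears, assuming 'Rule Class  Rule  Class  Type  Plugin ...'
    columns in the captured listing."""
    classes = set()
    for line in claimrule_listing.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == str(rule_id):
            classes.add(fields[2])
    return classes

# Illustrative sample rows (layout is an assumption, not a transcript):
sample = """\
Rule Class  Rule  Class    Type    Plugin  Matches
MP          102   runtime  vendor  HPP     vendor=NVMe model=Pure*
MP          102   file     vendor  HPP     vendor=NVMe model=Pure*
"""

print(sorted(rule_classes(sample, 102)))  # prints: ['file', 'runtime']
```

Seeing only `file` would mean the rule was added but not yet loaded with `esxcli storage core claimrule load`.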
- The next step is to create the host object on the FlashArray. In NVMe-oF, initiators use something called an NVMe Qualified Name (NQN).
The initiator has one and so does the target (the FlashArray). With NVMe-oF/FC, NQNs do not replace FC WWNs; both exist.
The WWN of each side is what is advertised at the FC layer to enable physical connectivity and zoning. The NQN is what enables the NVMe layer to communicate with the correct endpoints on the FC fabric. You can think of it much like IP networking (MAC addresses and IP addresses):
- Connect at the physical layer (WWN zoning),
- then the logical layer (NQNs).
For each ESXi host, you need to create a host object on the FlashArray and then add its NQN to it. So where do you get the NQN? Not from the vSphere Client, at least not yet. For now, you need to use esxcli.
- So, SSH back into the ESXi host and run:
esxcli nvme info get
- Copy the NQN.
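The NQN appears in the Host NQN field of the `esxcli nvme info get` output. If you are scripting this step (for example, collecting NQNs from every host in the cluster), a minimal parser might look like the following; the sample output string and the NQN in it are illustrative, made-up examples:

```python
def host_nqn(info_output: str):
    """Extract the Host NQN from captured 'esxcli nvme info get' output.
    Returns the NQN verbatim, since NQNs are case sensitive."""
    for line in info_output.splitlines():
        if "Host NQN:" in line:
            return line.split("Host NQN:", 1)[1].strip()
    return None

# Illustrative output; the NQN value is a made-up example.
sample = "   Host NQN: nqn.2014-08.com.vmware:nvme:esxi-host-1\n"

print(host_nqn(sample))  # prints: nqn.2014-08.com.vmware:nvme:esxi-host-1
```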
- Then log into the FlashArray.
- Create a host for the ESXi host (if one already exists for FC-SCSI, create a new FlashArray host object for NVMe).
NQNs are case sensitive; ensure that the proper case is entered. Typically, NQNs should be all lowercase.
- Give the host a name, and choose the ESXi personality.
- Next, navigate to that new host and click on the vertical ellipsis and choose Configure NQNs.
- Add that NQN to the input box and click Add.
- Assuming your zoning is done, flip over to the Health screen, followed by Connections.
- In the Host Connections panel, find your new host(s) and ensure they are redundantly connected to the array (both controllers) and each controller (more than one port).
- If they are not, ensure zoning and/or cabling is correct.
If you have not zoned yet, you can find the NVMe-oF/FC WWNs on the Settings > Network > Fibre Channel panel.
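The redundancy rule above (each host connected to both controllers, and to more than one port on each controller) can be expressed as a quick check. This sketch is generic Python over a list of (controller, port) paths you might collect from the array; the CT0/CT1 controller names follow FlashArray convention:

```python
from collections import defaultdict

def is_redundant(paths):
    """paths: iterable of (controller, port) tuples observed for one host.
    Redundant = both controllers (CT0 and CT1) are reached, and each
    controller is reached on more than one port."""
    ports_by_ctlr = defaultdict(set)
    for controller, port in paths:
        ports_by_ctlr[controller].add(port)
    return ({"CT0", "CT1"} <= set(ports_by_ctlr)
            and all(len(p) > 1 for p in ports_by_ctlr.values()))

good = [("CT0", "FC0"), ("CT0", "FC2"), ("CT1", "FC0"), ("CT1", "FC2")]
bad = [("CT0", "FC0"), ("CT0", "FC2")]  # only one controller reached

print(is_redundant(good), is_redundant(bad))  # prints: True False
```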
- Now create a new host group and add all of those hosts into it:
- Give it a name that makes sense for the cluster:
- Then add the hosts:
- Choose each in the cluster and click Add.
From this point on, storage provisioning is no different from SCSI-based provisioning from a process perspective (though NVMe-oF no longer requires a bus rescan on the host; all new storage and changes in size will appear automatically).
While the fundamental process is the same, not all tools support NVMe-oF based provisioning.