
How To: Set Up NVMe-FC with VMware


Confirm NVMe-FC Support

  1. To confirm that NVMe-FC is enabled, log in to your array and look at Settings > Network > Fibre Channel. You should see nvme-fc listed on the ports. Note that a given FC port runs either SCSI or NVMe, not both.

If you do not see this, NVMe-FC needs to be enabled on your FlashArray; follow this KB to enable it (you must be logged in to view).
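If you prefer the command line, the same check can be done from the Purity CLI. This is only a sketch, assuming a Purity//FA release with NVMe/FC support (the exact columns vary by version):

# NVMe-capable FC ports report an NQN in addition to a WWN;
# SCSI-only FC ports show a WWN but no NQN.
pureport list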

ESXi Host Configuration

  1. The first configuration step is to enable NVMe-oF on your ESXi host(s). In this environment, I have a cluster that has two hosts, each with supported NVMe-oF/FC HBAs:

With some earlier NVMe HBA drivers, when you click on Storage Adapters, there isn't a listing for controllers, namespaces, etc. This is because the NVMe "feature" driver is not enabled by default.

With newer versions of certain adapters, the NVMe feature is enabled by default, such as Emulex cards with driver version 12.8 or later. In that case, the following step is not necessary.


For driver versions that do not have it enabled by default, this change cannot be made in the vSphere Client; it must be done from the command line or with a tool that can make esxcli calls, such as PowerCLI.

  2. SSH into the host(s).
  3. Run the following command to enable this feature:

Emulex:

esxcli system module parameters set -m lpfc -p "lpfc_enable_fc4_type=3"

QLogic:

esxcfg-module -s "ql2xnvmesupport=1" qlnativefc

Ensure that your installed HBA driver version supports NVMe-FC by checking the VMware Compatibility Guide for the feature called IO Device NVMe/FC. If the currently installed driver does not have this listed, you will need to install a version that does.
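Before rebooting, you can confirm that the module parameter was recorded and check the installed driver version. These are standard esxcli commands, using the Emulex lpfc driver as the example:

# Confirm the NVMe enablement parameter is set on the lpfc module (Emulex)
esxcli system module parameters list -m lpfc | grep lpfc_enable_fc4_type

# Check the installed HBA driver version against the VMware Compatibility Guide
esxcli software vib list | grep lpfc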

  4. Then reboot the host.
  5. Repeat this on each host in the cluster.
  6. Once the host has come back up, navigate back to the storage adapters. You will see new adapters added (one for each port), which will likely be named something like vmhba65 and so on (you can also confirm this from the CLI, as shown below).

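A quick check from the command line, using the standard esxcli nvme commands available in ESXi 7.0 and later:

# The new NVMe adapters should be listed with a Fibre Channel transport type
esxcli nvme adapter list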

  7. Once you click on one, you will see more information appear in the details panel:
  8. If your zoning is complete at this point (zoning is not covered here, but the process is the same as for SCSI FC), the WWNs for both the host and the array are the same ones used for SCSI.
    1. So if zoning was already done for SCSI FC, no additional steps are needed.
    2. If zoning is not complete, follow the normal process you use for SCSI zoning.
    3. The controllers will appear automatically. Unlike RoCE, you do not need to manually configure the controllers; they auto-discover (you can also list them from the CLI, as shown below).
    4. If you do not see anything listed here, ensure zoning is complete and upgrade to the latest HBA drivers.
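To list the discovered controllers from the ESXi CLI (again, standard esxcli nvme commands in ESXi 7.0 and later):

# Each zoned FlashArray controller port should appear here once discovery completes
esxcli nvme controller list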
  9. You can see this discovery event in the vmkernel log on the ESXi host (/var/log/):
2021-01-16T01:29:10.112Z cpu22:2098492)NVMFDEV:302 Adding new controller nqn.2010-06.com.purestorage:flasharray.6d122f70cac7d785 to active list
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2895 disabling controller…
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2904 enabling controller…
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:877 Ctlr 257, queue 0, update queue size to 31
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2912 reading version register…
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:2926 get controller identify data…
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:5101 Controller 257, name nqn.2010-06.com.purestorage:flasharray.6d122f70cac7d785#vmhba65#524a93775676b304:524a93775676b304
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:4991 Ctlr(257) nqn.2010-06.com.purestorage:flasharray.6d122f70cac7d785#vmhba65#524a93775676b304:524a93775676b304 got slot 0
 2021-01-16T01:29:10.112Z cpu22:2098492)NVMEDEV:6156 Succeeded to create recovery world for controller 257
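To find these entries on a host, you can filter the vmkernel log directly. The path below is the standard ESXi log location, and the tags match the excerpt above; exact message text varies by build:

# Show NVMe controller discovery events from the vmkernel log
grep -iE 'nvmedev|nvmfdev' /var/log/vmkernel.log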
  10. For optimal performance with Pure-backed devices, run the following two commands on the ESXi hosts you are going to use with NVMe-FC (more information here). No reboot is required, and a quick verification is shown after the commands:

    esxcli storage core claimrule add --rule 102 -t vendor -P HPP -V NVMe -M "Pure*" --config-string "pss=LB-Latency,latency-eval-time=180000"
    esxcli storage core claimrule load
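To confirm that the rule was added and loaded, and (once namespaces are connected) that Pure devices are claimed by HPP with the latency-based path selection scheme, you can run:

# Rule 102 should appear in both the file and runtime claim rule classes
esxcli storage core claimrule list

# After namespaces are connected, confirm the path selection scheme on each device
esxcli storage hpp device list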
  11. The next step is to create the host object on the FlashArray. In NVMe-oF, initiators use something called an NVMe Qualified Name (NQN).

The initiator has one and so does the target (the FlashArray). With NVMe-oF/FC, NQNs do not replace FC WWNs; both exist.

The WWN of each side is what is advertised at the FC layer to enable physical connectivity and zoning. The NQN is what enables the NVMe layer to communicate with the correct endpoints on the FC fabric. You can think of it much like IP networking (MAC addresses and IP addresses).

  1. Connect at the physical layer (WWN zoning),
  2. then the logical layer (NQNs).

For each ESXi host, you need to create a host object on the FlashArray, then add the NQN to it. So where do you get the NQN? Not from the vSphere Client, unfortunately. For now, you need to use esxcli.

  12. So, SSH back into the ESXi host and run:
esxcli nvme info get
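If you only want the NQN line for copy and paste, you can filter the output (the Host NQN field name is how recent ESXi releases label it; verify on your build):

# Print just the host NQN line from the adapter info output
esxcli nvme info get | grep -i nqn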
  13. Copy the NQN.
  14. Then log in to the FlashArray.

FlashArray Configuration

  1. Create a host object for the ESXi host (if one already exists for SCSI FC, create a new FlashArray host object). A CLI equivalent is sketched after these steps.

NQNs are case sensitive; ensure that the proper case is entered. Typically, NQNs are all lowercase.

  2. Give the host a name and choose the ESXi personality.
  3. Next, navigate to that new host, click the vertical ellipsis, and choose Configure NQNs.
  4. Add that NQN to the input box and click Add.
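The same host creation can also be scripted from the Purity CLI. This is only a sketch: the host name and NQN below are placeholders, and the --personality and --nqnlist flags are as documented for recent Purity//FA releases, so verify them against your Purity version:

# Create the host object with the ESXi personality and attach the host's NQN
# (replace the host name and NQN with your own values)
purehost create --personality esxi --nqnlist nqn.2014-08.com.vmware:nvme:esxi-host-1 esxi-host-1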
  5. Assuming your zoning is done, flip over to the Health screen, then to Connections.
  6. In the Host Connections panel, find your new host(s) and ensure they are redundantly connected to the array: to both controllers, and to more than one port on each controller.
  7. If they are not, ensure zoning and/or cabling is correct.

If you have not zoned yet, you can find the NVMe-oF/FC WWNs on the Settings > Network > Fibre Channel panel.

  8. Now create a new host group and add all of those hosts to it (a CLI equivalent is shown after these steps):
  9. Give it a name that makes sense for the cluster:
  10. Then add the hosts:
  11. Choose each host in the cluster and click Add.
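A CLI sketch of the same host group step, with placeholder host and group names, assuming the purehgroup command's --hostlist flag in recent Purity//FA releases:

# Create a host group containing the cluster's ESXi hosts
purehgroup create --hostlist esxi-host-1,esxi-host-2 esxi-cluster-nvme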

From this point on, storage provisioning is no different from SCSI-based provisioning from a process perspective (though NVMe-oF no longer requires a bus rescan on the host; all storage, and changes in size and so on, will appear automatically).
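For example, after you connect a volume to the new host group on the FlashArray, the corresponding namespace should appear on each host without a rescan. One way to confirm this from the ESXi CLI:

# Newly connected namespaces show up here automatically, no rescan required
esxcli nvme namespace list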

While the fundamental process is the same, not all tools support NVMe-oF-based provisioning.