Pure Technical Services

How To: Setup NVMe/TCP with VMware


Confirm NVMe/TCP Support

  • Review supported FlashArray hardware and software 
  • Review VMware Compatibility Guide (VCG)
  • vSphere 7.0 U3 and later support NVMe/TCP (vSphere 8.0+ is recommended)
  • Validate ESXi NIC is supported with NVMe/TCP by the NIC vendor
  • Configure ESXi NIC for NVMe/TCP per the NIC vendor
  • To configure NVMe/RoCE for VMware, use this KB
  • To configure FC-NVMe for VMware, use this KB
  1. To confirm that NVMe/TCP is enabled on the FlashArray, log in to the FlashArray and navigate to (1) Settings > (2) Network > Ethernet; (3) nvme-tcp should be listed under the Services column for the pertinent ports. Note that a given Ethernet port provides either iSCSI or NVMe/TCP service, but not both. If nvme-tcp is not enabled, follow step 2 below.
  2. To enable NVMe/TCP at the array level, SSH to the FlashArray and change the ports from service type iSCSI to service type NVMe/TCP by running this command on the pertinent ports. If this fails, validate the software and hardware requirements are met. More details on this command can be found in the FlashArray CLI guide for Purity 6.4.2 on page 272:

    purenetwork eth setattr --servicelist nvme-tcp ctx.ethx
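
    To verify that the change took effect, you can list the Ethernet interfaces and check their services. This is a sketch assuming the Purity//FA 6.x CLI syntax; confirm the exact subcommand form against the CLI guide for your Purity release:

    # List Ethernet interfaces and their configured services on the FlashArray
    # (run over SSH; subcommand form assumes Purity//FA 6.x).
    purenetwork eth list
    # The Services column for the converted ports should now show nvme-tcp
    # instead of iscsi.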

Connect to ESXi Host

  1. To get the NQN of the host for configuring the host object on the FlashArray, SSH into the ESXi host and run:

    esxcli nvme info get
  2. Copy the NQN down for host object configuration in step 5 under Connect To The FlashArray GUI below.
  3. For optimal performance with Pure-backed devices, run the following two commands on the ESXi hosts that will be used with NVMe/TCP (more information here). Note that no reboot is required:
    esxcli storage core claimrule add --rule 102 -t vendor -P HPP -V NVMe -M "Pure*" --config-string "pss=LB-Latency,latency-eval-time=180000"
    esxcli storage core claimrule load
  4. Follow VMware's KB to configure NVMe/TCP port binding. 
  5. Follow VMware's KB to configure NVMe/TCP software adapters.
  6. Follow VMware's KB to add NVMe controllers on the NVMe/TCP software adapters. There should be at least one NVMe controller on ESXi per configured NVMe/TCP port on the FlashArray.
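
Once the software adapters and controllers are configured, the host-side state can be verified from the ESXi shell. The commands below are standard esxcli namespaces available in ESXi 7.0 U3 and later; exact output columns vary by build:

    # Show host NVMe information, including the host NQN used in step 2.
    esxcli nvme info get

    # Confirm the HPP claim rule added in step 3 is present and loaded.
    esxcli storage core claimrule list

    # List NVMe adapters, connected controllers, and visible namespaces.
    # Expect at least one controller per configured NVMe/TCP port on the FlashArray.
    esxcli nvme adapter list
    esxcli nvme controller list
    esxcli nvme namespace list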

Connect to FlashArray GUI

These steps will likely vary based on the needs of your environment, but the standard way to connect a host to the FlashArray is outlined below.

  1. To create a new host object, navigate to (1) Storage > (2) Hosts and click the (3) plus sign to add a new host object.
  2. Populate a (1) Name, change the personality type to (2) ESXi, optionally (3) Add to protection group after hosts are created and click (4) Create.
  3. Click the (1) host object hyperlink that was just created.
  4. Click the (1) ellipsis then select (2) Configure NQNs...
  5. Paste the NQN from your host into (1) Port NQNs and click (2) Add to add the NQN of the host to the host object.
  6. To create a new host group object, click the (1) plus sign on Host Groups to create a new host group.
  7. Populate (1) Name with an appropriate name, optionally (2) Add to protection group after hosts are created and click (3) Create.
  8. Click the (1) host group object hyperlink that was just created.
  9. To add the host objects to the host group object, click the (1) ellipsis then select (2) Add...
  10. Select the (1) hosts to be added to the host group and click (2) Add.
  11. Next, create a volume to connect to the host group and present to the ESXi hosts previously configured. Navigate to (1) Storage > (2) Volumes and click the (3) plus sign under Volumes.
  12. Populate a (1) Name and (2) Provisioned Size; click (3) Create to finish.
  13. In order to add the volume to the previously configured host group, click the (1) volume name hyperlink that was just created.
  14. Click the (1) ellipsis and select (2) Connect...
  15. Select the (1) Host Group object(s) to connect to and click (2) Connect.
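
For reference, the same workflow can be performed from the FlashArray CLI over SSH. This is a hedged sketch: the object names (esxi-host-01, esxi-hg, nvme-vol-01) and the host NQN are placeholders, and flag spellings should be checked against the FlashArray CLI guide for your Purity release:

    # Create a host object with the ESXi personality
    # (names below are placeholders).
    purehost create --personality esxi esxi-host-01

    # Register the host NQN copied from "esxcli nvme info get"
    # (placeholder NQN shown).
    purehost setattr --addnqnlist nqn.2014-08.com.vmware:nvme:esxi-host-01 esxi-host-01

    # Create a host group and add the host to it.
    purehgroup create --hostlist esxi-host-01 esxi-hg

    # Create a volume and connect it to the host group.
    purevol create --size 1T nvme-vol-01
    purevol connect --hgroup esxi-hg nvme-vol-01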