
NVMe-TCP setup and connectivity for Splunk


Confirm NVMe-TCP Support

  • Purity 6.4.2+
  • A FlashArray//XL, FlashArray//XR3, or FlashArray//CR3
  • A pair of NVMe-supporting NICs on the FlashArray
  • Host OS supporting NVMe-TCP (RHEL 8.4+); see the quick host-side check after this list
  • NVMe-oF support matrix
  • nvme-tcp services enabled on the FlashArray.
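
As a quick host-side check of the OS prerequisite above, the RHEL release and running kernel can be confirmed before proceeding (standard RHEL commands, nothing Pure-specific):

[root@splk-ix01 ~]# cat /etc/redhat-release
[root@splk-ix01 ~]# uname -r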

To confirm the nvme-tcp service is enabled, log into the FlashArray and navigate to Settings => Network => Ethernet.


There should be entries with the nvme-tcp service configured.  If not, configure the nvme-tcp service before performing the following section.

For more information please see this Pure Storage support article.

Setting up the host for NVMe-TCP

Install nvme-cli tool

The NVMe management command line interface tool is required on the host to connect to an NVMe target. If the tool is not installed, install the nvme-cli package using the yum command.

[root@splk-ix01 ~]# yum -y install nvme-cli
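
To confirm the installation, the package and tool version can be checked with the following commands (nvme version is part of nvme-cli):

[root@splk-ix01 ~]# rpm -q nvme-cli
[root@splk-ix01 ~]# nvme version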

For the host to communicate with another NVMe device, it must have a unique identity known as an NVMe Qualified Name (NQN).  If the host doesn't have one, it must be created.  Check whether a host NQN already exists using the following command.

[root@splk-ix01 ~]# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0030-4410-8039-c3c04f483133

If the hostnqn file doesn't exist, create the directory /etc/nvme and use the nvme gen-hostnqn command to write a generated NQN to a file named hostnqn in that directory. This host NQN is needed when creating the host on the FlashArray.

[root@splk-ix01 ~]# mkdir -p /etc/nvme
[root@splk-ix01 ~]# nvme gen-hostnqn > /etc/nvme/hostnqn
[root@splk-ix01 ~]# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0030-4410-8039-c3c04f483133
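
This host NQN is what gets registered against the host object on the FlashArray. As an illustrative sketch only (the host name splk-ix01 is taken from this example, and the exact purehost syntax should be verified against the Purity CLI reference for the Purity version in use), the host can be created from the FlashArray CLI with its NQN:

pureuser@flasharray> purehost create --nqnlist nqn.2014-08.org.nvmexpress:uuid:4c4c4544-0030-4410-8039-c3c04f483133 splk-ix01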

Install device-mapper-multipath

To use all of the available paths from the host to the NVMe target on the FlashArray, device-mapper multipath is recommended over native multipathing.  If device-mapper-multipath is not installed, it can be installed and configured using the following steps.

[root@splk-ix01 ~]# yum install device-mapper-multipath -y

Using vi, create a multipath.conf file under the /etc directory with the following contents.

# This is a basic configuration file for Pure NVMe-oF multipath
#
defaults {
       polling_interval 10
}

devices {
  device {
    vendor                 "NVME"
    product                "Pure Storage FlashArray"
    path_selector          "queue-length 0"
    path_grouping_policy   "multibus"
    fast_io_fail_tmo       10
    user_friendly_names    no
    no_path_retry          10
    features               0
    dev_loss_tmo           60
  }
}

Start the multipath service and enable it to start at reboot with the following commands.

[root@splk-ix01 ~]# systemctl start multipathd
[root@splk-ix01 ~]# systemctl enable multipathd
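
To confirm the service is running and that the Pure device section from /etc/multipath.conf was picked up, the effective configuration can be inspected (multipathd show config prints the merged configuration):

[root@splk-ix01 ~]# systemctl status multipathd
[root@splk-ix01 ~]# multipathd show config | grep -A 8 "Pure Storage FlashArray"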

Connecting the host to the FlashArray NVMe-TCP controller

1. Load the nvme_tcp module if not done already:

[root@splk-ix01 ~]# modprobe nvme_tcp

Note: The nvme_tcp module should load automatically once nvme-cli is installed.  In certain scenarios it has been observed that the nvme_tcp module does not load after a reboot and must be loaded manually.
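
If the module has to be loaded manually after reboots, one common way to make it persistent is a systemd modules-load.d entry (the file name below is arbitrary):

[root@splk-ix01 ~]# echo "nvme_tcp" > /etc/modules-load.d/nvme_tcp.conf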

2. Discover the available subsystems on the NVMe controller using the following command.

[root@splk-ix01 ~]# nvme discover --transport=tcp --traddr=<FlashArray-nvme-tcp-addr> --host-traddr=<host-ip-address> --trsvcid=8009

The above command should display the NVMe-TCP IP addresses and the subsystem NQN (subnqn) configured on the FlashArray.
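
For example, against the environment described in the next step (a FlashArray nvme-tcp address of 10.21.220.206 reached from the host interface 10.21.220.21), the discovery command would look like this:

[root@splk-ix01 ~]# nvme discover --transport=tcp --traddr=10.21.220.206 --host-traddr=10.21.220.21 --trsvcid=8009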

3. Create a file named /etc/nvme/discovery.conf and add the following entries with the details obtained from the nvme discover command. If the FlashArray returns four different IP addresses configured for the nvme-tcp service, include all of them along with the corresponding host IP addresses used to reach them.  If the FlashArray is configured with two different subnets for the nvme-tcp service, it is recommended to use matching subnets and host IP addresses on the host to take advantage of the parallel connectivity.

[root@splk-ix01 ~]# vi /etc/nvme/discovery.conf
--transport=tcp --traddr=10.21.220.206 --host-traddr=10.21.220.21 -s 4420 -i 48
--transport=tcp --traddr=10.21.124.206 --host-traddr=10.21.124.15 -s 4420 -i 48
--transport=tcp --traddr=10.21.220.204 --host-traddr=10.21.220.21 -s 4420 -i 48
--transport=tcp --traddr=10.21.124.204 --host-traddr=10.21.124.15 -s 4420 -i 48

Note: In the above entries, the FlashArray is configured with four nvme-tcp services, two connected to one controller and the other two connected to the second controller. On each controller, there are two IP addresses configured across subnets 124 and 220.  Similarly, the host has two network interfaces on the same subnets, 124 and 220.
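
To confirm the host actually has interfaces on both subnets before building discovery.conf, the interface addresses can be listed with a standard iproute2 command:

[root@splk-ix01 ~]# ip -br addr show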

4. Connect the host to the NVMe-TCP controller system (FlashArray) by running the following command.

[root@splk-ix01 ~]# nvme connect-all

The nvme connect-all command attempts to read the /etc/nvme/discovery.conf file.  If the file doesn't exist, the command exits with an error.
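
Individual portals can also be connected manually, for example while troubleshooting a single path. The following sketch assumes the subsystem NQN has been taken from the earlier nvme discover output (the value shown is a placeholder):

[root@splk-ix01 ~]# nvme connect --transport=tcp --traddr=10.21.220.206 --host-traddr=10.21.220.21 --trsvcid=4420 --nqn=<subnqn-from-discover-output>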

5. Verify the host can access the namespaces and list the NVMe devices.

[root@splk-ix01 ~]# nvme list-subsys
[root@splk-ix01 ~]# nvme list
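
Since device-mapper multipath was configured earlier, the multipath devices built on top of the NVMe namespaces can also be verified:

[root@splk-ix01 ~]# multipath -ll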
