
How To: Setup NVMe-FC for RHEL/CentOS 7.8


This document provides the steps necessary to configure an initiator running RHEL/CentOS 7.8 to support NVMe/FC. This document assumes that the Array and the Network fabric have already been properly configured.

Confirm NVMe-oF Support

  • FlashArray //X R2 (or newer) or //C
  • Purity 6.1+
  • RHEL/CentOS 7.8 (or later)
  • NVMe-oF Support Matrix
  • Switched FC fabric - direct connect is not supported.
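
As a quick sanity check before you begin, confirm the OS release on the initiator (the release string below is only illustrative; your minor release may differ):

[root@init139-17 ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)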

Installing Supporting Applications

To connect to an NVMe/FC target, you will need to verify that a few supporting applications are present. If these components are not installed, you will need to install them on your Linux host.

Installing nvme-cli Tool

In order to connect to an NVMe target, you will need to install the nvme-cli tools.  Some RHEL/CentOS distributions may already have the tools installed.  You can check with the command nvme version. 

[root@init139-17 ~]# nvme version
nvme version 1.6

If a version is listed, the utility is installed. If not, you will see the following response and will need to follow the steps below to install the package.

[root@init139-17 ~]# nvme version 
-bash: nvme: command not found

Install the nvme-cli package using the yum command.

[root@init139-17 ~]# yum -y install nvme-cli
Loaded plugins: fastestmirror, langpacks, priorities
Loading mirror speeds from cached hostfile
* base: pxe.dev.purestorage.com
* centosplus: pxe.dev.purestorage.com
* epel: d2lzkl7pfhq30w.cloudfront.net
* extras: pxe.dev.purestorage.com
* updates: pxe.dev.purestorage.com
pure-internal
| 2.9 kB 00:00:00
253 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
<Output Omitted>
Installed:
nvme-cli.x86_64 0:1.6-1.el7                                                                                          
Complete!

To communicate with an NVMe target, the host must have a unique identity known as an NVMe Qualified Name (NQN). If your system doesn't have one, you may need to create one; the nvme utility can do this. First, check whether a host NQN already exists using the following command.

[root@init139-17 ~]# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:7ea75ee6-35bd-4b97-b2cb-8763e02df8df

If you receive the following error, you will need to create the /etc/nvme directory and the hostnqn file.

[root@init139-17 ~]# cat /etc/nvme/hostnqn
cat: /etc/nvme/hostnqn: No such file or directory

To create a host NQN, create the directory /etc/nvme and then use the nvme gen-hostnqn command to write an ID to a file named hostnqn in that directory. You will need the host NQN when creating a host on the array.

[root@init139-17 ~]# mkdir /etc/nvme
[root@init139-17 ~]# nvme gen-hostnqn > /etc/nvme/hostnqn
[root@init139-17 ~]# cat /etc/nvme/hostnqn
nqn.2014-08.org.nvmexpress:uuid:7ea75ee6-35bd-4b97-b2cb-8763e02df8df
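
If you provision hosts with a script, the check-and-create steps above can be collapsed into a single idempotent command (a minimal sketch; it only generates a new NQN when the file is missing):

[root@init139-17 ~]# [ -f /etc/nvme/hostnqn ] || { mkdir -p /etc/nvme && nvme gen-hostnqn > /etc/nvme/hostnqn; }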

Installing device-mapper-multipath

To actively use all of the network paths to the array, device-mapper-multipath must be installed, configured, and enabled at startup.

If multipath is not already installed, install it with the following steps.

[root@init139-17 ~]# yum install device-mapper-multipath -y
Loaded plugins: fastestmirror, langpacks, priorities
Loading mirror speeds from cached hostfile
* base: pxe.dev.purestorage.com
* centosplus: pxe.dev.purestorage.com
* elrepo: repos.lax-noc.com
* epel: d2lzkl7pfhq30w.cloudfront.net
* extras: pxe.dev.purestorage.com
* updates: pxe.dev.purestorage.com
260 packages excluded due to repository priority protections
Resolving Dependencies
--> Running transaction check
---> Package device-mapper-multipath.x86_64 0:0.4.9-123.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=================================================================================================================
Package                               Arch                 Version                     Repository          Size
=================================================================================================================
Installing:
device-mapper-multipath               x86_64               0.4.9-123.el7               base               140 k
Transaction Summary
=================================================================================================================
Install  1 Package
Total download size: 140 k
Installed size: 192 k
Downloading packages:
device-mapper-multipath-0.4.9-123.el7.x86_64.rpm                                          
| 140 kB  00:00:00     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
 Installing : device-mapper-multipath-0.4.9-123.el7.x86_64                                
 1/1
 
 Verifying  : device-mapper-multipath-0.4.9-123.el7.x86_64                                                  
 1/1
Installed:
 device-mapper-multipath.x86_64 0:0.4.9-123.el7                                                                 
Complete!

Using vi, create a multipath.conf file.

[root@init139-17 ~]# vi /etc/multipath.conf

Press I for insert and enter the following text.

# This is a basic configuration file for Pure NVMe over Fabric multipath config
#
defaults {
       path_selector            "queue-length 0"
       path_grouping_policy     multibus
       fast_io_fail_tmo         10
       user_friendly_names      no
       no_path_retry            0
       features                 0
       dev_loss_tmo             60
       polling_interval         10
}
~
~                                                                                                          
-- INSERT --

Press Esc, then type :wq and press Enter to save and quit.

:wq

Start the multipath service and enable it to start at reboot with the following commands.

[root@init139-17 ~]# systemctl start multipathd
[root@init139-17 ~]# systemctl enable multipathd
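
Optionally, confirm that the service is running and enabled to start at boot (the responses shown are the standard systemd output):

[root@init139-17 ~]# systemctl is-active multipathd
active
[root@init139-17 ~]# systemctl is-enabled multipathd
enabled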

Updating Drivers/Firmware

To connect to an array using NVMe/FC, you may need to load the appropriate drivers for your HBA. Use the following steps to install and load the drivers.

Broadcom Emulex HBAs (LPe31xxx/LPe32xxx/LPe35xxx)

  1. Firmware install.

  2. Driver install.
    • Download the latest lpfc driver for RHEL/CentOS. The latest driver version should be available for download on the relevant HBA adapter page on Broadcom’s site (https://www.broadcom.com/products/st...t-bus-adapters).

      For the purpose of this example, we have an Emulex LPe31002 HBA on a CentOS 7.x initiator with driver version 12.6.240.48 or newer.

      • Download the latest lpfc driver for RHEL/CentOS 7.x from Broadcom’s site. The latest driver version is available at
        https://www.broadcom.com/products/storage/fibre-channel-host-bus-adapters/lpe31002-m6#downloads.

      • After downloading, install the driver per the README instructions for NVMe
        (e.g., ./elx_lpfc_install.sh --nvme).

      • Make sure there is an elx-lpfc.conf file in the /etc/modprobe.d directory and that it contains the driver parameter to enable both FCP and NVMe. The lpfc driver installation in the previous step normally creates this file with the option in it. If the file doesn't already exist, create a new one and add the option. The driver reads these options only at load time (you can confirm the setting after the reboot, as shown in the check following this list).

        • options lpfc lpfc_enable_fc4_type=3

      • Rebuild boot image on initiator.

        • #dracut -f

  3. Reboot and verify installation.
    • #reboot

    • #hbacmd hbaattributes <pwwn>

    • Verify the driver version and module binding after the reboot. The output should show the nvme_fc and nvmet_fc modules bound to lpfc:
      # lsmod | grep lpfc
      lpfc                  991145  0 
      nvmet_fc               27734  1 lpfc
      nvme_fc                33624  513 lpfc
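
To confirm that the lpfc driver picked up the combined FCP/NVMe setting, you can read the module parameter back from sysfs after the reboot; this assumes the stock lpfc_enable_fc4_type parameter from the step above, and a value of 3 means both FCP and NVMe are enabled:

[root@init139-17 ~]# cat /sys/module/lpfc/parameters/lpfc_enable_fc4_type
3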

Marvell QLogic HBAs (QLE27xx)

  1. Firmware install.
  2. Driver install.
  3. Reboot and verify installation. 
    • reboot

    • Verify the correct version is installed and running:
      grep -r . /sys/class/fc_host/host*/symbol*
      /sys/class/fc_host/host12/symbolic_name:QLE2742 FW:v8.08.231 DVR:v10.01.00.55.07.6-k
      /sys/class/fc_host/host13/symbolic_name:QLE2742 FW:v8.08.231 DVR:v10.01.00.55.07.6-k
      /sys/class/fc_host/host14/symbolic_name:QLE2742 FW:v8.08.231 DVR:v10.01.00.55.07.6-k
      /sys/class/fc_host/host15/symbolic_name:QLE2742 FW:v8.08.231 DVR:v10.01.00.55.07.6-k

    • Verify the NVMe module parameter is set:
      #cat /sys/module/qla2xxx/parameters/ql2xnvmeenable
      # The output should be 1
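
If the parameter reads 0, one common way to enable it (a generic modprobe-option sketch, not vendor-specific instructions; confirm against QLogic's documentation for your driver version) is to set the option in /etc/modprobe.d, rebuild the initramfs, and reboot:

[root@init29-16 ~]# echo "options qla2xxx ql2xnvmeenable=1" > /etc/modprobe.d/qla2xxx.conf
[root@init29-16 ~]# dracut -f
[root@init29-16 ~]# reboot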

Connecting NVMe Device (Array Volume) to the Host

NOTE: Before your initiator can connect to a volume on the array, you must have done the following on the array:

  1. Created a volume.

  2. Created a host and associated the initiator's NQN with the host entry.

  3. Connected the host to the volume.

See the following support article for array configuration details.
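
For orientation only, the array-side steps map to Purity CLI commands roughly as in the sketch below. The volume and host names are hypothetical, the host NQN is the example value generated earlier, and the exact flags may vary by Purity release, so follow the support article for the authoritative procedure.

pureuser@array> purevol create --size 500G nvme-vol1
pureuser@array> purehost create --nqnlist nqn.2014-08.org.nvmexpress:uuid:7ea75ee6-35bd-4b97-b2cb-8763e02df8df init139-17
pureuser@array> purehost connect --vol nvme-vol1 init139-17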

Once you have enabled the appropriate services and drivers, you may still need to connect to the array manually in order to access the NVMe devices over the fabric. The first step is to check whether auto-connect worked. If you already see NVMe controllers in the /dev directory, the discover and connect steps can be skipped.

[root@init139-17 ~]# ls /dev/nvme?
/dev/nvme0  /dev/nvme1  /dev/nvme2  /dev/nvme3  /dev/nvme4  /dev/nvme5  /dev/nvme6  /dev/nvme7

Next, check what is listed in the nvme_info file on the initiator. On a Broadcom Emulex initiator, run "cat /sys/class/scsi_host/host*/nvme_info" to get the WWNN and WWPN values. NVME LPORT indicates the initiator (local port) addresses and NVME RPORT indicates the target (remote port) addresses. In the case of Emulex, all of the RPORTs visible by an LPORT are listed together under that LPORT; which RPORTs appear depends on the zoning setup.

[root@init139-17 ~]# cat /sys/class/scsi_host/host*/nvme_info
NVME Initiator Enabled
XRI Dist lpfc0 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc0 WWPN x100000109ba70e00 WWNN x200000109ba70e00 DID x790100 ONLINE
NVME RPORT       WWPN x524a9375c807d302 WWNN x524a9375c807d302 DID x790800 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x524a9375c807d312 WWNN x524a9375c807d312 DID x790e00 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x524a9375c807d310 WWNN x524a9375c807d310 DID x791000 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x524a9375c807d300 WWNN x524a9375c807d300 DID x790700 TARGET DISCSRVC ONLINE

NVME Statistics
LS: Xmt 00000026c0 Cmpl 00000026c0 Abort 00000000
LS XMIT: Err 00000000  CMPL: xb 00000013 Err 00000013
Total FCP Cmpl 000000000b6ab35e Issue 000000000b6ab362 OutIO 0000000000000004
        abort 0000004d noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 000001e3 Err 0000028f
NVME Initiator Enabled
XRI Dist lpfc1 Total 6144 IO 5894 ELS 250
NVME LPORT lpfc1 WWPN x100000109ba70e01 WWNN x200000109ba70e01 DID x790400 ONLINE
NVME RPORT       WWPN x524a9375c807d303 WWNN x524a9375c807d303 DID x790900 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x524a9375c807d313 WWNN x524a9375c807d313 DID x790f00 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x524a9375c807d311 WWNN x524a9375c807d311 DID x791100 TARGET DISCSRVC ONLINE
NVME RPORT       WWPN x524a9375c807d301 WWNN x524a9375c807d301 DID x790600 TARGET DISCSRVC ONLINE
NVME Statistics
LS: Xmt 0000002851 Cmpl 0000002851 Abort 00000000
LS XMIT: Err 00000000  CMPL: xb 00000012 Err 00000012
Total FCP Cmpl 000000000b6df785 Issue 000000000b6df789 OutIO 0000000000000004
        abort 00000052 noxri 00000000 nondlp 00000000 qdepth 00000000 wqerr 00000000 err 00000000
FCP CMPL: xb 0000026f Err 00000334

On a QLogic initiator, run "cat /sys/class/scsi_host/host*/nvme_connect_str" to get the discover/connect CLI arguments. FC-NVMe LPORT indicates the initiator (local port) addresses and FC-NVMe RPORT indicates the target (remote port) addresses. In the case of QLogic, each NVMECLI entry lists one LPORT-RPORT pair; which pairs appear depends on the zoning setup.

[root@init29-16 ~]# cat /sys/class/scsi_host/host*/nvme_connect_str
FC-NVMe LPORT: host6 nn-0x2000f4e9d45488ca:pn-0x2100f4e9d45488ca port_id 162c00 ONLINE
FC-NVMe RPORT: host6 nn-0x524a937494322000:pn-0x524a937494322000 port_id 160c00 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488ca:pn-0x2100f4e9d45488ca traddr=nn-0x524a937494322000:pn-0x524a937494322000
FC-NVMe LPORT: host6 nn-0x2000f4e9d45488ca:pn-0x2100f4e9d45488ca port_id 162c00 ONLINE
FC-NVMe RPORT: host6 nn-0x524a937494322010:pn-0x524a937494322010 port_id 160f00 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488ca:pn-0x2100f4e9d45488ca traddr=nn-0x524a937494322010:pn-0x524a937494322010
FC-NVMe LPORT: host7 nn-0x2000f4e9d45488cb:pn-0x2100f4e9d45488cb port_id 162d00 ONLINE
FC-NVMe RPORT: host7 nn-0x524a937494322001:pn-0x524a937494322001 port_id 161a00 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488cb:pn-0x2100f4e9d45488cb traddr=nn-0x524a937494322001:pn-0x524a937494322001
FC-NVMe LPORT: host7 nn-0x2000f4e9d45488cb:pn-0x2100f4e9d45488cb port_id 162d00 ONLINE
FC-NVMe RPORT: host7 nn-0x524a937494322011:pn-0x524a937494322011 port_id 161b00 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488cb:pn-0x2100f4e9d45488cb traddr=nn-0x524a937494322011:pn-0x524a937494322011
FC-NVMe LPORT: host8 nn-0x2000f4e9d45488e0:pn-0x2100f4e9d45488e0 port_id 162e00 ONLINE
FC-NVMe RPORT: host8 nn-0x524a937494322002:pn-0x524a937494322002 port_id 160e00 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488e0:pn-0x2100f4e9d45488e0 traddr=nn-0x524a937494322002:pn-0x524a937494322002
FC-NVMe LPORT: host8 nn-0x2000f4e9d45488e0:pn-0x2100f4e9d45488e0 port_id 162e00 ONLINE
FC-NVMe RPORT: host8 nn-0x524a937494322012:pn-0x524a937494322012 port_id 162000 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488e0:pn-0x2100f4e9d45488e0 traddr=nn-0x524a937494322012:pn-0x524a937494322012
FC-NVMe LPORT: host9 nn-0x2000f4e9d45488e1:pn-0x2100f4e9d45488e1 port_id 162f00 ONLINE
FC-NVMe RPORT: host9 nn-0x524a937494322003:pn-0x524a937494322003 port_id 161d00 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488e1:pn-0x2100f4e9d45488e1 traddr=nn-0x524a937494322003:pn-0x524a937494322003
FC-NVMe LPORT: host9 nn-0x2000f4e9d45488e1:pn-0x2100f4e9d45488e1 port_id 162f00 ONLINE
FC-NVMe RPORT: host9 nn-0x524a937494322013:pn-0x524a937494322013 port_id 162400 TARGET DISCOVERY ONLINE
NVMECLI: host-traddr=nn-0x2000f4e9d45488e1:pn-0x2100f4e9d45488e1 traddr=nn-0x524a937494322013:pn-0x524a937494322013

The previous step indicates that the zoning setup is correct and the initiator can see the target ports. The next step is to use the discover command to retrieve the subsystem NQN from the target array. The syntax of the discover command is "nvme discover --transport=fc --host-traddr=nn-<WWNN>:pn-<WWPN> --traddr=nn-<WWNN>:pn-<WWPN>". Here, host-traddr indicates the initiator addresses and traddr indicates the target addresses. In the address format, nn means Node Name and pn means Port Name. For Pure arrays, the Node Name and Port Name are the same. For Emulex initiators, use the LPORT and RPORT values from the same set.

[root@init139-17 ~]# nvme discover --transport=fc --host-traddr=nn-0x200000109ba70e00:pn-0x100000109ba70e00 --traddr=nn-0x524a9375c807d302:pn-0x524a9375c807d302
Discovery Log Number of Records 1, Generation counter 3
=====Discovery Log Entry 0======
trtype:  fc
adrfam:  fibre-channel
subtype: nvme subsystem
treq:    not required
portid:  514
trsvcid: none
subnqn:  nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
traddr:  nn-0x524A9375C807D302:pn-0x524A9375C807D302

[root@init139-17 ~]# nvme discover --transport=fc --host-traddr=nn-0x200000109ba70e01:pn-0x100000109ba70e01 --traddr=nn-0x524a9375c807d303:pn-0x524a9375c807d303
Discovery Log Number of Records 1, Generation counter 3
=====Discovery Log Entry 0======
trtype:  fc
adrfam:  fibre-channel
subtype: nvme subsystem
treq:    not required
portid:  515
trsvcid: none
subnqn:  nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
traddr:  nn-0x524A9375C807D303:pn-0x524A9375C807D303

The next step is to use the connect command to connect to the NVMe controllers. The connect command has the same syntax as discover, except that it also requires the subsystem NQN. The connect command must be entered for each of the target ports. After that, we should see NVMe controllers in the /dev directory.

[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e00:pn-0x100000109ba70e00 --traddr=nn-0x524a9375c807d300:pn-0x524a9375c807d300 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e00:pn-0x100000109ba70e00 --traddr=nn-0x524a9375c807d302:pn-0x524a9375c807d302 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e00:pn-0x100000109ba70e00 --traddr=nn-0x524a9375c807d310:pn-0x524a9375c807d310 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e00:pn-0x100000109ba70e00 --traddr=nn-0x524a9375c807d312:pn-0x524a9375c807d312 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# ls /dev/nvme?
/dev/nvme0  /dev/nvme1  /dev/nvme2  /dev/nvme3
[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e01:pn-0x100000109ba70e01 --traddr=nn-0x524a9375c807d301:pn-0x524a9375c807d301 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e01:pn-0x100000109ba70e01 --traddr=nn-0x524a9375c807d303:pn-0x524a9375c807d303 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e01:pn-0x100000109ba70e01 --traddr=nn-0x524a9375c807d311:pn-0x524a9375c807d311 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# nvme connect --transport=fc --host-traddr=nn-0x200000109ba70e01:pn-0x100000109ba70e01 --traddr=nn-0x524a9375c807d313:pn-0x524a9375c807d313 --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e
[root@init139-17 ~]# ls /dev/nvme?
/dev/nvme0  /dev/nvme1  /dev/nvme2  /dev/nvme3  /dev/nvme4  /dev/nvme5  /dev/nvme6  /dev/nvme7

In some instances, the volumes may be automatically connected.  If you have connected volumes in the array and see them when you run the command nvme list, then you do not need to issue the connect command.

Issue the command nvme list to verify connectivity to the array. If the system doesn't connect, verify that all the previous steps were completed successfully.

[root@init139-17 ~]# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9
/dev/nvme1n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9
/dev/nvme2n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9
/dev/nvme3n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9
/dev/nvme4n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9
/dev/nvme5n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9
/dev/nvme6n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9
/dev/nvme7n1     299AB271BE885B2E     Pure Storage FlashArray                  70630     536.87  GB / 536.87  GB    512   B +  0 B   99.9.9

If the volumes do not automatically connect, you will need to issue the connect command for each initiator/target WWN pair to create multiple paths. You may want to create a script that you can run to connect all paths.

[root@init139-17 ~]# vi /opt/nvme_connect.sh

Press I for insert and enter the following commands into the script.

#!/bin/bash
#
# Repeat the connect command below for each initiator/target port pair.
nvme connect --transport=fc --host-traddr=nn-<WWNN>:pn-<WWPN> --traddr=nn-<WWNN>:pn-<WWPN> --nqn=nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e

Press Esc, then type :wq and press Enter to save the script and exit.

:wq
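
For hosts with several zoned target ports, the script can loop over the port pairs instead of repeating the command by hand. The sketch below reuses the example initiator WWNs, target WWNs, and subsystem NQN from this document; substitute your own values.

#!/bin/bash
#
# Connect one initiator port to every zoned target port on the array.
HOST_TRADDR="nn-0x200000109ba70e00:pn-0x100000109ba70e00"
NQN="nqn.2010-06.com.purestorage:flasharray.299ab271be885b2e"
for TGT in nn-0x524a9375c807d300:pn-0x524a9375c807d300 \
           nn-0x524a9375c807d302:pn-0x524a9375c807d302 \
           nn-0x524a9375c807d310:pn-0x524a9375c807d310 \
           nn-0x524a9375c807d312:pn-0x524a9375c807d312; do
    nvme connect --transport=fc --host-traddr="$HOST_TRADDR" --traddr="$TGT" --nqn="$NQN"
done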

Change the permissions of the file to include execute.

chmod +x /opt/nvme_connect.sh

Run the script to connect.

[root@init139-17 ~]# sh /opt/nvme_connect.sh

Verify that you have multiple connections by running the nvme list and the multipath -ll commands.