
Pure Storage All-Flash Storage Array and Citrix XenServer


Applies to: Citrix XenServer 6.1, 6.2, 6.5, 7.x, and 8.x 

Pure Storage FlashArray provides multi-hypervisor support, including Citrix XenServer, a fully functional, free hypervisor. Pure Storage is listed in the XenServer HCL for storage devices for both Fibre Channel (FC) and iSCSI.

Pure Storage FlashArray with ActiveCluster is supported in XenServer 7.x and above.

Boot from SAN Considerations

If you are using a LUN to boot from SAN, you need to ensure that changes to your configuration files are applied when the host reboots. This is done by rebuilding the initial ramdisk (initrd or initramfs) to include the proper kernel modules, files, and configuration directives after the configuration changes have been made. Because the procedure varies slightly depending on the host, we recommend that you refer to your vendor's documentation for the proper procedure.

When rebuilding the initial ramdisk, you will want to confirm that the necessary dependencies are in place before rebooting the host to avoid any errors during boot. Refer to your vendor's documentation for specific commands to confirm this information.

For example, on Citrix XenServer, you can run the following commands and then use the .cmd file to remake the image.

[root@symcert3 ~]# cd /boot
[root@symcert3 boot]# ls
chain.c32 grub initrd-fallback.img menu.c32 vmlinuz-fallback xen-4.4.1-xs90192.map
config-3.10.0+2 initrd-3.10.0+2.img ldlinux.sys System.map-3.10.0+2 xen-4.4.1-xs90192-d.gz xen-debug.gz
extlinux initrd-3.10.0+2.img.cmd mboot.c32 vmlinuz-3.10.0+2 xen-4.4.1-xs90192-d.map xen.gz
extlinux.conf initrd-3.10-xen.img memtest86+-1.65 vmlinuz-3.10-xen xen-4.4.1-xs90192.gz
[root@symcert3 boot]# sh ./initrd-3.10.0+2.img.cmd
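
The example above uses the .cmd helper that ships with XenServer 6.x. On XenServer 7.x and 8.x, whose dom0 uses dracut (an assumption to verify against your release), a minimal sketch of the same check-then-rebuild flow looks like this:

# Minimal sketch, assuming a dracut-based XenServer 7.x/8.x dom0.
# Confirm the multipath tools and ALUA handler are included in the current initrd:
lsinitrd | grep -E 'multipath|alua'
# Rebuild the initrd for the running kernel after the configuration changes:
dracut -f
# Reboot the host so the new initrd is used.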

Multipath Configuration

The Multipath Policy defines how the host distributes IOs across the available paths to the storage. The Round Robin (RR) policy distributes IOs evenly across all Active/Optimized paths. A newer MPIO policy, queue-length, is similar to round robin in that IOs are distributed across all available Active/Optimized paths; however, it provides an additional benefit: the queue-length path selector biases IOs toward paths that are servicing IO more quickly (paths with shorter queues). If one path becomes intermittently disruptive or experiences higher latency, queue-length steers IO away from that path, reducing the effect of the problem path.

These settings are applicable to both Fibre Channel and iSCSI and should be added to your multipath.conf file (a sketch for reloading the configuration afterward follows the version-specific settings below): 

  • XenServer 6.1, 6.2, 6.5 - /etc/multipath.conf
  • XenServer 7.x, 8.x - /etc/multipath.xenserver/multipath.conf
The recommended multipath settings for each XenServer version are shown below.

XenServer 7.x and 8.x

In ActiveCluster configurations, Pure Arrays make use of ALUA to handle path state changes in the event of a loss of replication link between arrays. 

defaults {
    polling_interval      10
}

devices {
       device {
                vendor                "PURE"
                product               "FlashArray"
                path_selector         "queue-length 0"
                path_grouping_policy  group_by_prio
                path_checker          tur
                fast_io_fail_tmo      10
                dev_loss_tmo          60
                no_path_retry         0
                hardware_handler      "1 alua"
                prio                  alua
                failback              immediate
        }
}
XenServer 6.5
 device {
                vendor                  "PURE"
                product                 "FlashArray"
                path_selector           "queue-length 0"
                path_grouping_policy    multibus
                rr_weight               uniform
                prio                    const
                rr_min_io_rq            1
                path_checker            tur
        }
XenServer 6.2
device {
                vendor                  "PURE"
                product                 "FlashArray"
                path_selector           "round-robin 0"
                path_grouping_policy    multibus
                rr_weight               uniform
                prio                    const
                rr_min_io_rq            1
                path_checker            tur
        }
XenServer 6.1
device {
      vendor                  "PURE"
      product                 "FlashArray"
      path_grouping_policy    multibus
      path_checker            tur
      rr_min_io               1
      path_selector           "round-robin 0"
      no_path_retry           0
      fast_io_fail_tmo        3
      dev_loss_tmo            30
      prio                    alua
      }
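
After editing multipath.conf, the new settings can be applied and verified from the dom0 command line. The following is a minimal sketch, assuming multipathd is managed as a standard service on your release; enabling multipathing itself is done through XenCenter or the xe CLI, as described in the sections below.

# Minimal sketch: reload the multipath configuration and verify the paths.
service multipathd reload
multipath -r      # reload the multipath device maps
multipath -ll     # confirm the PURE devices show the expected path policy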

 

FC Configuration

Shown below is a typical FC configuration with Pure Storage FlashArray.

1.png

  1. Determine the WWPNs of the servers by issuing the following command on the hosts: systool -c fc_host -v | grep port_name

    Use the WWPNs to configure zoning on the SAN switch and to create the host and host group on the Pure Storage side (populate the WWPNs in the Pure Storage GUI).
  2. We recommend that you update your multipath.conf file with the settings listed above in Multipath Configuration. To verify the status, run multipath -ll or multipath status.
  3. Now enable multipathing on the server: in XenCenter, put the server in Maintenance Mode, open the server's Properties, select Multipathing from the left panel, and enable multipathing. Then exit Maintenance Mode. The following steps describe in detail how to configure multipathing in XenCenter.
    • Open XenCenter and right-click on the XenServer Host, select Enter Maintenance Mode from the drop-down menu. It will ask you if you want to move the VMs to another host. 
      2.png

      3.png
      4.png
                                                                                                                                                                              
      Select Enter Maintenance Mode on the pop-up window.
    • Now the server is in Maintenance Mode
      5.png
       
    • Select properties and enable Multipathing as shown below.
      6.png
    • Exit Maintenance Mode and reboot the server to make the configuration stick.
      7.png
       
  4. Repeat these steps for all of the servers in the resource pool. Note that this can also be accomplished with xe CLI commands, as shown in the sketch below.
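
For example, a minimal sketch of the same workflow using the xe CLI is shown below; <host-uuid> is a placeholder for the UUID returned by xe host-list, and the other-config keys should be verified against the Citrix documentation for your version.

# Minimal sketch; <host-uuid> is a placeholder (see `xe host-list`).
xe host-disable uuid=<host-uuid>        # take the host out of service (Maintenance Mode)
xe host-evacuate uuid=<host-uuid>       # migrate any running VMs off the host
xe host-param-set uuid=<host-uuid> other-config:multipathing=true
xe host-param-set uuid=<host-uuid> other-config:multipathhandle=dmp
xe host-enable uuid=<host-uuid>         # bring the host back into service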
     

iSCSI Configuration

For general iSCSI and SAN recommendations, please see: SAN Guidelines for Maximizing Pure Performance

Following are the steps for iSCSI configuration:

  1. The best practice recommendation for configuring iSCSI is to use two switches and to configure them to be on different subnets. An example is shown below:
    8.png
    Notice how the individual NICs, and the corresponding target port IP addresses, are configured to be on different subnets. This is mandatory for multipathing to work correctly, which is why two switches on two subnets are needed. 
     

    NOTE: As of Purity 4.6.0, VLANs are supported, and as of Purity 5.2, LACP is supported as well. Also note that Citrix XenServer does not allow bonding and multipathing to be used together, so an existing bond will not work with the Pure Storage FlashArray.
  2. Enable jumbo frames (i.e., set MTU to 9000) end to end: on the XenServer initiator ports, the switch ports, and the FlashArray target ports (see the sketch after this list). 
  3. With XenServer, you can have either bonds or multipathing (MP), but not both at the same time. We prefer multipathing and therefore recommend configuring it on individual NICs. This may pose a challenge when coexisting with other iSCSI vendor arrays; in those cases, it is advised to use additional NICs for multipathing and to isolate the network. 
     
  4. We recommend that you update your /etc/multipath.conf file with the settings listed above in Multipath Configuration. To verify the status, run multipath -ll or multipath status.
     
  5. Additionally, the following parameters can be set for higher throughput (bandwidth):

                node.session.cmds_max to 1024 (from 128)
                node.session.queue_depth to 128 (from 32)
                iscsi.MaxRecvDataSegmentLength to 256k (128k is default)

    A reboot is required for the tunables to take effect (see the sketch after this list for one way to check and set them).
     
  6. Perform the above steps 1-5 on all the hosts in the resource pool.
     
  7. Obtain the IQN from the XenServer host:
    1. In the XenCenter resource panel, select the host.
    2. Click on the general tab.
    3. Right-click on the iSCSI IQN and copy it.
    4. If you want to change the iSCSI IQN follow these simple steps:
      1. Select the XenServer host on the XenCenter resource pane.
      2. Click on the General tab.
      3. Click on Properties.
      4. In the dialog window, make the appropriate changes (for example, change iqn.2015-02.com.example.my:optional-string to iqn.2015-02.com.pure-xserv-host-01:optional-string).
    5. Use this IQN to do the masking during Pure Storage host configuration.
       
  8. Configure the SR by right-clicking on the pool and selecting New SR. Add the SR with all of the target IP addresses (target IPs of the Pure Storage FlashArray). For example, add all the target port IPs: 10.10.1.10,10.10.2.20,10.10.1.11,10.10.2.21

    Select the first one from the Discovery LUN list (not the wildcard).
     
  9. Make sure you enable multipathing, as shown in step 3 of the FC Configuration section, for each host.
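
For step 2, a minimal sketch of setting and verifying jumbo frames from the xe CLI follows; <network-uuid> is a placeholder for the UUID of the iSCSI storage network, and 10.10.1.10 is one of the example target IPs above.

# Minimal sketch; <network-uuid> is a placeholder (see `xe network-list`).
xe network-param-set uuid=<network-uuid> MTU=9000
# Verify end to end with a non-fragmenting ping to a FlashArray target port
# (8972 bytes of payload plus headers fills a 9000-byte frame):
ping -M do -s 8972 10.10.1.10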
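
For step 5, the sketch below shows where these tunables typically live; it assumes the open-iscsi defaults are kept in /etc/iscsi/iscsid.conf and that the parameter names match your release (the MaxRecvDataSegmentLength key is usually the node.conn[0] variant), so verify against your host before editing.

# Minimal sketch, assuming /etc/iscsi/iscsid.conf holds the open-iscsi defaults.
# Check the current values:
grep -E 'cmds_max|queue_depth|MaxRecvDataSegmentLength' /etc/iscsi/iscsid.conf
# Then edit the file so the entries read, for example:
#   node.session.cmds_max = 1024
#   node.session.queue_depth = 128
#   node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
# and reboot the host so new sessions pick up the values.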

Applying Queue Settings with udev

Once the IO scheduler (elevator) has been set to 'noop', it is often desirable to keep the setting persistent across reboots.

Step 1: Create the Rules File

Create a new file in the following location. The OS will use the udev rules to set the elevators after each reboot.

/etc/udev/rules.d/99-pure-storage.rules

Step 2: Add the Following Entries to the Rules File

The following entries automatically set the elevator to 'noop' each time the system is rebooted. Create a file that has the following entries, ensuring each entry exists on one line with no carriage returns:

# Recommended settings for Pure Storage FlashArray.

# Use noop scheduler for high-performance solid-state storage
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/scheduler}="noop"

# Reduce CPU overhead due to entropy collection
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/add_random}="0"

# Spread CPU load by redirecting completions to originating CPU
ACTION=="add|change", KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_VENDOR}=="PURE", ATTR{queue/rq_affinity}="2"

# Set the HBA timeout to 60 seconds
ACTION=="add", SUBSYSTEMS=="scsi", ATTRS{model}=="FlashArray      ", RUN+="/bin/sh -c 'echo 60 > /sys/$DEVPATH/device/timeout'"

Please note that 6 spaces are needed after "FlashArray" under "Set the HBA timeout to 60 seconds" above for the rule to take effect.
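
The rules take effect automatically on the next boot. To apply them immediately without rebooting, a minimal sketch is shown below; sda is a placeholder for one of the Pure Storage devices.

# Minimal sketch: reload the udev rules and re-trigger block device events.
udevadm control --reload-rules
udevadm trigger --subsystem-match=block
# Verify the scheduler on a Pure device (sda is a placeholder); [noop] should be selected:
cat /sys/block/sda/queue/scheduler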

Reference

  1. Citrix XenServer performance tuning blog
  2. Classic network and throughput performance guide
  3. How to Collect Diagnostic Information for XenServer