
Pure Storage All-Flash Storage Array and Citrix XenServer 6.x Configuration and Best Practice Guide

Author: Ravi Venkat, Data Center Architect

Pure Storage FlashArray provides multi-hypervisor support, including Citrix XenServer (v6.1, v6.2, v6.5, and v7.0), a fully functional free hypervisor. Pure Storage is listed in the XenServer hardware compatibility list (HCL) for storage devices for both Fibre Channel (FC) and iSCSI. The listing is available at http://hcl.xensource.com/ProductDetails.aspx?ProductType=Storage&ProductName=Pure+Storage+FlashArray+FA-400+Series

This document highlights best practice recommendations for the Pure Storage FlashArray FA-400 series.

Boot from SAN Considerations

If you boot from a SAN LUN, you need to ensure that your configuration changes are applied when the host reboots. This is done by rebuilding the initial ramdisk (initrd or initramfs) to include the proper kernel modules, files, and configuration directives after the configuration changes have been made. Because the procedure varies slightly depending on the host, we recommend that you refer to your vendor's documentation for the proper procedure.

When rebuilding the initial ramdisk, you will want to confirm that the necessary dependencies are in place before rebooting the host to avoid any errors during boot. Refer to your vendor's documentation for specific commands to confirm this information.

For example, on Citrix XenServer you can run the following commands and then use the .cmd file to rebuild the image.

[root@symcert3 ~]# cd /boot
[root@symcert3 boot]# ls
chain.c32 grub initrd-fallback.img menu.c32 vmlinuz-fallback xen-4.4.1-xs90192.map
config-3.10.0+2 initrd-3.10.0+2.img ldlinux.sys System.map-3.10.0+2 xen-4.4.1-xs90192-d.gz xen-debug.gz
extlinux initrd-3.10.0+2.img.cmd mboot.c32 vmlinuz-3.10.0+2 xen-4.4.1-xs90192-d.map xen.gz
extlinux.conf initrd-3.10-xen.img memtest86+-1.65 vmlinuz-3.10-xen xen-4.4.1-xs90192.gz
[root@symcert3 boot]# sh ./initrd-3.10.0+2.img.cmd
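
To sanity-check the rebuilt image before rebooting, you can list its contents and confirm that the multipath pieces were included. The command below is a minimal sketch and assumes the initrd is a gzip-compressed cpio archive (the usual format on a XenServer dom0); adjust the image name to match your kernel version.

[root@symcert3 boot]# zcat initrd-3.10.0+2.img | cpio -it | grep -i multipath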

Multipath Configuration

The multipath policy defines how the host distributes I/Os across the available paths to the storage. The Round Robin (RR) policy distributes I/Os evenly across all Active/Optimized paths. A newer MPIO policy, queue-length, also distributes I/Os across all available Active/Optimized paths, but with an additional benefit: the queue-length path selector biases I/Os toward paths that are servicing I/O more quickly (paths with shorter queues). If one path becomes intermittently disruptive or experiences higher latency, queue-length reduces utilization of that path, limiting the impact of the problem path.

These settings apply to both Fibre Channel and iSCSI.

Multipath Recommendations by XenServer Version
XenServer 7.0
defaults {
    polling_interval      10
}

devices {
    device {
        vendor                "PURE"
        product               "FlashArray"
        path_selector         "queue-length 0"
        path_grouping_policy  multibus
        path_checker          tur
        fast_io_fail_tmo      10
        dev_loss_tmo          60
        no_path_retry         0
    }
}
XenServer 6.5
device {
    vendor                "PURE"
    product               "FlashArray"
    path_selector         "queue-length 0"
    path_grouping_policy  multibus
    rr_weight             uniform
    prio                  const
    rr_min_io_rq          1
    path_checker          tur
}
XenServer 6.2
device {
    vendor                "PURE"
    product               "FlashArray"
    path_selector         "round-robin 0"
    path_grouping_policy  multibus
    rr_weight             uniform
    prio                  const
    rr_min_io_rq          1
    path_checker          tur
}
XenServer 6.1
device {
    vendor                "PURE"
    product               "FlashArray"
    path_grouping_policy  multibus
    path_checker          tur
    rr_min_io             1
    path_selector         "round-robin 0"
    no_path_retry         0
    fast_io_fail_tmo      3
    dev_loss_tmo          30
    prio                  alua
}
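
After adding the stanza that matches your XenServer version to /etc/multipath.conf, reload the multipath maps and verify that every Pure Storage LUN shows all of its expected paths. The commands below are a minimal sketch; back up the existing file before editing it, and expect the device names and path counts in your output to differ.

# Back up the current configuration before editing it
cp /etc/multipath.conf /etc/multipath.conf.orig

# Reload the multipath device maps so the new settings take effect
multipath -r

# Verify the paths to the Pure Storage LUNs
multipath -ll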

 

FC Configuration

Shown below is a typical FC configuration with Pure Storage FlashArray.

  1. Determine the WWPNs of the servers by issuing the following command on each host:

    systool -c fc_host -v | grep port_name

    Use the WWPNs to configure zoning on the SAN switch and to create the host and host group on the Pure Storage side (populate the WWPNs in the Pure Storage GUI).
     
  2. We recommend that you update your /etc/multipath.conf file with the settings in the Multipath Configuration section above.  To verify the status, run multipath -ll or multipath status.
     
  3. Now enable multipathing on the server. In XenCenter, put the server into “Maintenance Mode”, open the server’s Properties, select Multipathing in the left panel, and enable multipathing.  Then exit “Maintenance Mode” and you are all set.  The following steps describe in detail how to configure multipathing in XenCenter.
     
    • Open XenCenter and right-click on the XenServer host, then select “Enter Maintenance Mode” from the drop-down menu.  It will ask you if you want to move the VMs to another host.


      Select “Enter Maintenance Mode” on the pop-up window.
    • Now the server is in “Maintenance Mode”.

    • In the server’s Properties, select Multipathing in the left panel and enable multipathing.

    • Exit “Maintenance Mode” and reboot the server to make the configuration stick.

  4. Repeat these steps for all of the servers in the resource pool.  Note that this can also be done with xe CLI commands, as sketched below.
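
For reference, here is a minimal sketch of enabling multipathing from the dom0 command line instead of XenCenter, following the procedure documented by Citrix for XenServer 6.x (check the administrator's guide for your release). It assumes the host's VMs have already been migrated off; <host-uuid> is a placeholder for your host's UUID.

# Find the UUID of the host to configure
xe host-list

# Put the host into maintenance mode
xe host-disable uuid=<host-uuid>

# Enable device-mapper multipathing on the host
xe host-param-set other-config:multipathing=true uuid=<host-uuid>
xe host-param-set other-config:multipathhandle=dmp uuid=<host-uuid>

# Bring the host back online
xe host-enable uuid=<host-uuid>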
     

iSCSI Configuration

Following are the steps for iSCSI configuration:

  1. The best practice recommendation for configuring iSCSI is to use two switches and place them on different subnets.

    Configure the individual NICs in different subnets, matching the corresponding target port IP addresses.  This is mandatory for multipathing to work correctly, which is why the two switches need to be on two subnets.
     
    NOTE: Pure Storage FlashArray (as of Purity 4.5.0, June 2015) does not support LACP or VLANs.  Hence, we work around that limitation by using different subnets while still providing fabric-level redundancy.  Also note that Citrix XenServer does not allow bonding and multipathing together, so an existing bond will not work with the Pure Storage FlashArray.
     
  2. Enable jumbo frames (i.e., set the MTU to 9000) end to end: on the XenServer initiator ports, the switch ports, and the Pure Storage target ports.
     
  3. With XenServer you can have either bonds or multipathing (MP), but not both at the same time.  We prefer MP and hence recommend configuring it on individual NICs.  This may pose a challenge when coexisting with other iSCSI vendor arrays; in those cases, it is advisable to use additional NICs for MP and isolate that network.
     
  4. We recommend that you update your /etc/multipath.conf file with the settings in the Multipath Configuration section above.  To verify the status, run multipath -ll or multipath status.
     
  5. Additionally, you can set the following parameters for higher throughput (bandwidth); a sketch of the corresponding iscsid.conf entries follows this list:

                node.session.cmds_max to 1024 (from 128)
                node.session.queue_depth to 128 (from 32)
                iscsi.MaxRecvDataSegmentLength to 256k (128k is default)

    A reboot is required to make the tunables stick.

     
  6. Perform the above steps 1-5 on all the hosts in the resource pool.
     
  7. Obtain the IQN from the XenServer host:
    1. In the XenCenter resource panel, select the host.
    2. Click on the general tab.
    3. Right-click the iSCSI IQN and copy it.
    4. If you want to change the iSCSI IQN, follow these steps:
      1. Select the XenServer host in the XenCenter resource pane.
      2. Click on the General tab.
      3. Click on Properties.
      4. In the dialog window, make the desired change (for example, change iqn.2015-02.com.example.my:optional-string to iqn.2015-02.com.pure-xserv-host-01:optional-string).
    5. Use this IQN for masking when you configure the host on the Pure Storage array.
       
  8. Configure the SR by right-clicking on the pool and selecting “New SR”.  Add the SR with all of the target IP addresses of the Pure Storage FlashArray; for example, add all of the target port IPs: 10.10.1.10, 10.10.2.20, 10.10.1.11, 10.10.2.21

    Select the first one from the Discovery LUN list (not the wildcard).

     
  9. Make sure you enable multipathing for each host, as described in step 3 of the “FC Configuration” section.
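
For step 5 above, these tunables correspond to entries in /etc/iscsi/iscsid.conf on each XenServer host. The excerpt below is a minimal sketch assuming the stock open-iscsi configuration layout in dom0; note that MaxRecvDataSegmentLength is a per-connection setting, hence the node.conn[0] prefix (262144 bytes = 256k).

# /etc/iscsi/iscsid.conf (excerpt) -- higher-throughput settings from step 5
node.session.cmds_max = 1024
node.session.queue_depth = 128
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144

As noted above, reboot the host (and repeat on every host in the pool) so the new values take effect for the iSCSI sessions.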

Other XenServer Performance tuning

Setting max_ring_page_order to 2 and the I/O scheduler to “noop” is known to give the best performance under I/O load.  Certain dom0 settings, such as increasing the number of vCPUs assigned to dom0, also make a big difference.  The following script shows how to do that.  Additional information can be obtained from references [4] and [5].

# cat set-scripts.sh
#!/bin/bash
# increase the blkback ring size to allow more outstanding I/O per virtual disk
cd /sys/module/blkbk/parameters/
echo 2 > max_ring_page_order
echo "Setting max_ring_page_order to `cat /sys/module/blkbk/parameters/max_ring_page_order`"
echo "set the dom0 parameters"
# give dom0 up to 6 vCPUs, pin them, and raise the blkback request count
/opt/xensource/libexec/xen-cmdline --set-xen dom0_max_vcpus=1-6
/opt/xensource/libexec/xen-cmdline --set-xen dom0_vcpus_pin
/opt/xensource/libexec/xen-cmdline --set-dom0 blkbk.reqs=256
# set the scheduler to noop (the script assumes the new devices are sdc, sdd, sde, sdf, sdg; please adjust accordingly)
for i in c d e f g; do echo noop > /sys/block/sd$i/queue/scheduler; done
# verify scheduler is set to noop for all devices
for i in c d e f g; do cat /sys/block/sd$i/queue/scheduler; done
[noop] anticipatory deadline cfq
[noop] anticipatory deadline cfq
[noop] anticipatory deadline cfq
[noop] anticipatory deadline cfq

You can put this script in the init.d directory (or otherwise run it at boot) so that the settings stick across every boot cycle; one approach is sketched below.
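
The sketch below assumes dom0 still runs an executable /etc/rc.d/rc.local at boot (true for the CentOS-based dom0 in these releases, including under systemd in XenServer 7.0 when the file is executable); the path /root/set-scripts.sh is just an example. Note that the xen-cmdline changes persist on their own in the boot configuration; it is the max_ring_page_order and scheduler settings that need to be reapplied on each boot.

# Save the tuning script and make it executable
cp set-scripts.sh /root/set-scripts.sh
chmod +x /root/set-scripts.sh

# Call it from rc.local so the sysfs settings are reapplied at every boot
echo "/root/set-scripts.sh" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local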