Solaris Recommended Settings
Pure Storage recommends the following options to achieve maximum performance. This article documents several general configuration parameters that apply to any Solaris environment; the best practices below are drawn from customer feedback and proven results with the Pure Storage FlashArray. This article is an overview of best-practice configuration.
As per the Compatibility Matrix, Solaris 10 Update 8 is required as a minimum. Solaris 11.2 is required for using ActiveCluster.
Initial Configuration
The host personality should be set before presenting LUNs to the host, because on some operating systems changing the personality afterwards can be disruptive.
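If the host object already exists in Purity, the personality can also be set on it directly. A minimal sketch, assuming a host named testhost and a Purity release that supports the --personality option of purehost setattr (the prompt is illustrative):
pureuser@array> purehost setattr --personality solaris testhost
Creating a new host with the personality already set is shown later in this article.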
Adding Pure Storage FlashArray LUNs to the Host
- Identify the WWNs on the Solaris initiator:
# fcinfo hba-port
HBA Port WWN: 21000024ff31e956
        OS Device Name: /dev/cfg/c2
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 05.04.03
        FCode/BIOS Version: BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402H00-1113936655
        Driver Name: qlc
        Driver Version: 20110321-3.05
        Type: N-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff31e956
HBA Port WWN: 21000024ff31e957
        OS Device Name: /dev/cfg/c3
        Manufacturer: QLogic Corp.
        Model: 371-4325-02
        Firmware Version: 05.04.03
        FCode/BIOS Version: BIOS: 2.02; fcode: 2.03; EFI: 2.01;
        Serial Number: 0402H00-1113936655
        Driver Name: qlc
        Driver Version: 20110321-3.05
        Type: N-port
        State: online
        Supported Speeds: 2Gb 4Gb 8Gb
        Current Speed: 8Gb
        Node WWN: 20000024ff31e957
- Check to ensure that the WWNs are logged into the array:
root@pure-b3-ct1:~# pureport list --initiator
Initiator WWN            Initiator Portal  Initiator IQN  Target   Target WWN               Target Portal  Target IQN
21:00:00:0E:1E:0E:37:13  -                 -              -        -                        -              -
21:00:00:0E:1E:0E:37:17  -                 -              -        -                        -              -
21:00:00:0E:1E:0E:37:AB  -                 -              -        -                        -              -
21:00:00:0E:1E:0E:37:AF  -                 -              -        -                        -              -
21:00:00:24:FF:31:E9:56  -                 -              CT1.FC0  52:4A:93:77:6B:DF:C0:10  -              -
21:00:00:24:FF:31:E9:57  -                 -              CT0.FC0  52:4A:93:77:6B:DF:C0:00  -              -
21:00:00:24:FF:31:E9:57  -                 -              CT1.FC0  52:4A:93:77:6B:DF:C0:10  -              -
- Configure the Pure LUNs for each WWN. Using the "OS Device Name" from step 1, run the following:
# cfgadm -c configure c2
#
# cfgadm -c configure c3
- Confirm the device is configured by running cfgadm and grepping for the Ap_Id:
# cfgadm -al | grep c2
c2                             fc-fabric    connected    configured   unknown
c2::21000024ff31e957           unknown      connected    unconfigured unknown
c2::524a93776bdfc000           disk         connected    configured   unknown
c2::524a93776bdfc010           disk         connected    configured   unknown
# cfgadm -al | grep c3
c3                             fc-fabric    connected    configured   unknown
c3::21000024ff31e956           unknown      connected    unconfigured unknown
c3::524a93776bdfc000           disk         connected    configured   unknown
c3::524a93776bdfc010           disk         connected    configured   unknown
- Confirm that the Pure Storage WWNs are showing up as connected and configured; in this example, they are.
- Use devfsadm to rescan the bus and load the drivers in the system:
# devfsadm -Cv
devfsadm[20722]: verbose: SUNW_port_link: port monitor ttymon0 added
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/0 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/1 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/2 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/3 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/4 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/5 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/6 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/7 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/8 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/9 added to sacadm
devfsadm[20722]: verbose: SUNW_port_link: /dev/term/10 added to sacadm
- To confirm the serial number of the LUN against the Pure Storage array, you can use the following (a matching example is shown after this list):
luxadm display /dev/rdsk/*****
NOTE: You need to use the "rdsk" identifier; this command will not work with /dev/dsk/*****
If you see the error "cfgadm: Library error: report LUNs failed", you may need to reset the fabric; please consult the Solaris documentation.
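As a sketch of the serial number check against the array: the Serial Num field reported by luxadm is the 24 hex characters that follow the 624A9370 prefix in the multipathed device name, and the same serial is shown for the volume on the FlashArray (the device name below is illustrative, and the Serial column of purevol list is assumed to be available on your Purity release):
# luxadm display /dev/rdsk/c0t624A9370217E49EA1344E178000167F5d0s2 | grep "Serial Num"
pureuser@array> purevol list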
Set Host Personality for Solaris on the Host declaration in Purity
pureuser@baie3> purehost create testhost --personality solaris
Name      WWN  IQN  NQN  Personality
testhost  -    -    -    solaris
pureuser@baie3>
Configuring MPxIO
For full documentation on Solaris and MPxIO, please refer to Oracle's Configuring Multipath Software. Below is a step-by-step summary of getting it working with Pure Storage.
For Solaris 10 Update 9+ and Solaris 11 SPARC-based systems
- Enable MPxIO on devices attached to Pure Storage. By default, MPxIO is disabled. If you have multiple controller types in a host and only want to enable it for a specific controller type, use the command stmsboot -D fp -e.
- In the following example, we only have one controller type, so we just run the stmsboot -e command to enable it on all controllers:
# stmsboot -e
WARNING: stmsboot operates on each supported multipath-capable controller
detected in a host. In your system, these controllers are
/devices/pci@780/pci@0/pci@8/SUNW,qlc@0/fp@0,0
/devices/pci@780/pci@0/pci@8/SUNW,qlc@0,1/fp@0,0
/devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@1
/devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@1
/devices/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2
/devices/pci@7c0/pci@0/pci@9/LSILogic,sas@0
/devices/pci@7c0/pci@0/pci@9/LSILogic,sas@0
If you do NOT wish to operate on these controllers, please quit stmsboot
and re-invoke with -D { fp | mpt } to specify which controllers you wish
to modify your multipathing configuration for.
Do you wish to continue? [y/n] (default: y) y
Checking mpxio status for driver fp
Checking mpxio status for driver mpt
WARNING: This operation will require a reboot.
Do you want to continue ? [y/n] (default: y) y
The changes will come into effect after rebooting the system.
Reboot the system now ? [y/n] (default: y) y
NOTE: During the reboot, which happens later in the process, /etc/vfstab and the dump configuration are updated to reflect the device name changes.
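After the reboot, a quick sanity check that MPxIO took effect is to list the device name mappings that stmsboot maintains; a minimal sketch:
# stmsboot -L
The -D fp option can be added to limit the listing to the fibre channel controllers.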
Confirm the Number of Device Paths
# mpathadm list LU
        /dev/rdsk/c4t5000C50031BFBACFd0s2
                Total Path Count: 1
                Operational Path Count: 1
        /dev/rdsk/c4t624A9370A71D594A700C03C400014D1Dd0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c4t5000C50031C0642Bd0s2
                Total Path Count: 1
                Operational Path Count: 1
        /dev/rdsk/c4t624A9370A71D594A700C03C400014D1Cd0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c4t624A9370A71D594A700C03C400014D1Bd0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c4t624A9370A71D594A700C03C400014D1Ad0s2
                Total Path Count: 8
                Operational Path Count: 8
        /dev/rdsk/c4t624A9370A71D594A700C03C400014D19d0s2
                Total Path Count: 8
                Operational Path Count: 8
Important: Solaris considers any path that appears to the OS as anything but failed to be "Operational". In ActiveCluster configurations, a path will be in a "standby" state during a resync operation; such a path is not available to perform I/O and should not be considered operational. For a better representation of path state, it is recommended to use the mpathadm show lu command and make determinations based on its output.
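A minimal sketch of that check, using one of the multipathed devices from the listing above (the device name is just an example taken from this article):
# mpathadm show lu /dev/rdsk/c4t624A9370A71D594A700C03C400014D1Dd0s2
The per-path access state reported by this command distinguishes active paths from standby paths, which the simple path counts above do not.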
Recommended HBA I/O Timeout Settings
Pure Storage recommends a timeout of at least 60 seconds to be applied to the HBA for Pure Storage devices. To do this, edit /etc/system and add the following sd and ssd settings, or modify them if they are already present:
set sd:sd_io_time = 0x3C
set ssd:ssd_io_time = 0x3C
(0x3C hexadecimal equals 60 seconds in decimal.)
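The /etc/system settings take effect at the next reboot. As a sketch of verifying the active values afterwards with the kernel debugger (the ssd symbol exists only where the ssd driver is in use, for example SPARC FC configurations):
# echo "sd_io_time/D" | mdb -k
# echo "ssd_io_time/D" | mdb -k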
Configuration of ActiveCluster
- Solaris 11.2 or higher is required.
- Verify MPxIO is enabled per Configuring MPxIO above
- For Solaris 11 and higher, the .conf driver files need to be created in or copied to the /etc/driver/drv/ directory (a copy sketch is shown at the end of this list); refer to Managing Devices in the Oracle Solaris OS - Oracle Solaris Administration: Devices and File Systems for more information.
- Include the following entry in the /etc/system file:
set ssd:ssd_reset_throttle_timeout = 7
- Include the following entry in the ssd.conf file for defining correct timeouts:
#
# Copyright (c) 1994, 2010, Oracle and/or its affiliates. All rights reserved.
#
name="ssd" parent="sf" target=0;
name="ssd" parent="fp" target=0;
#
# The following stub node is needed for pathological bottom-up
# devid resolution on a self-identifying transport.
#
name="ssd" class="scsi-self-identifying";
#
# Associate the driver with devid resolution.
#
ddi-devid-registrant=1;
ssd-config-list = "PURE    FlashArray","reset-lun:true, disksort:false, cache-nonvolatile:true";
# Above parameters updated from previous recommendation to allow VxDMP as the
# multipath solution for boot and ASM disk under ActiveCluster
- Include the following entry in the scsi_vhci.conf file. This is required for the multipath configuration; the Pure Storage-specific changes are the scsi-vhci-failover-override and scsi-vhci-update-pathstate-on-reset entries for "PURE FlashArray".
#
# Copyright (c) 2001, 2014, Oracle and/or its affiliates. All rights reserved.
#
#
name="scsi_vhci" class="root";
#
# Load balancing global configuration: setting load-balance="none" will cause
# all I/O to a given device (which supports multipath I/O) to occur via one
# path. Setting load-balance="round-robin" will cause each path to the device
# to be used in turn.
#
load-balance="round-robin";
#
# Automatic failback configuration
# possible values are auto-failback="enable" or auto-failback="disable"
auto-failback="enable";

#BEGIN: FAILOVER_MODULE_BLOCK (DO NOT MOVE OR DELETE)
#
# Declare scsi_vhci failover module paths with 'ddi-forceload' so that
# they get loaded early enough to be available for scsi_vhci root use.
#
# NOTE: Correct operation depends on the value of 'ddi-forceload', this
# value should not be changed. The ordering of entries is from
# most-specific failover modules (with a "probe" implementation that is
# completely VID/PID table based), to most generic (failover modules that
# are based on T10 standards like TPGS). By convention the last part of a
# failover module path, after "/scsi_vhci_", is called the
# "failover-module-name", which begins with "f_" (like "f_asym_sun"). The
# "failover-module-name" is also used in the override mechanism below.
ddi-forceload =
        "misc/scsi_vhci/scsi_vhci_f_asym_sun",
        "misc/scsi_vhci/scsi_vhci_f_asym_emc",
        "misc/scsi_vhci/scsi_vhci_f_sym_hds",
        "misc/scsi_vhci/scsi_vhci_f_sym_enc",
        "misc/scsi_vhci/scsi_vhci_f_tpgs_tape",
        "misc/scsi_vhci/scsi_vhci_f_tape",
        "misc/scsi_vhci/scsi_vhci_f_sym_emc",
        "misc/scsi_vhci/scsi_vhci_f_asym_emc",
        "misc/scsi_vhci/scsi_vhci_f_asym_lsi",
        "misc/scsi_vhci/scsi_vhci_f_sym",
        "misc/scsi_vhci/scsi_vhci_f_tpgs";
#
# For a device that has a GUID, discovered on a pHCI with mpxio enabled, vHCI
# access also depends on one of the scsi_vhci failover modules accepting the
# device. The default way this occurs is by a failover module's "probe"
# implementation (sfo_device_probe) indicating the device is supported under
# scsi_vhci. To override this default probe-oriented configuration in
# order to
#
# 1) establish support for a device not currently accepted under scsi_vhci
#
# or 2) override the module selected by "probe"
#
# or 3) disable scsi_vhci support for a device
#
# you can add a 'scsi-vhci-failover-override' tuple, as documented in
# scsi_get_device_type_string(9F). For each tuple, the first part provides
# basic device identity information (vid/pid) and the second part selects
# the failover module by "failover-module-name". If you want to disable
# scsi_vhci support for a device, use the special failover-module-name "NONE".
# Currently, for each failover-module-name in 'scsi-vhci-failover-override'
# (except "NONE") there needs to be a
# "misc/scsi_vhci/scsi_vhci_<failover-module-name>" in 'ddi-forceload' above.
#
# "                  111111"
# "012345670123456789012345", "failover-module-name" or "NONE"
# "|-VID--||-----PID------|",
#scsi-vhci-failover-override = "PURE    FlashArray", "f_sym";
scsi-vhci-failover-override = "PURE    FlashArray", "f_tpgs";
#scsi-vhci-failover-override = "PURE    FlashArray", "f_asym_lsi";
# scsi-vhci-failover-override =
#       "STK     FLEXLINE 400",         "f_asym_lsi",
#       "SUN     T4",                   "f_tpgs",
#       "CME     XIRTEMMYS",            "NONE";
#
#END: FAILOVER_MODULE_BLOCK (DO NOT MOVE OR DELETE)

#BEGIN: UPDATE_PATHSTATE_ON_RESET_BLOCK (DO NOT MOVE OR DELETE)
#
# Tunable for updating path states after a UNIT ATTENTION reset.
# There are arrays which do not queue UAs during resets
# after an implicit failover. For such arrays, we need to
# update the path states after any type of UA resets, since
# UA resets take higher precedence among other UNIT ATTENTION
# conditions. By default, scsi_vhci does not update path states
# on UA resets. To make scsi_vhci do that for such arrays, you need
# to set the tunable scsi-vhci-update-pathstate-on-reset to "yes"
# for the VID/PID combination as described below.
#
# "012345670123456789012345", "yes" or "no"
# "|-VID--||-----PID------|",
scsi-vhci-update-pathstate-on-reset =
        "Pillar  Axiom",        "yes",
        "Oracle  Oracle FS",    "yes",
        "PURE    FlashArray",   "yes";
#
#END: UPDATE_PATHSTATE_ON_RESET_BLOCK (DO NOT MOVE OR DELETE)

#BEGIN: SPREAD_IPORT_RESERVATION_BLOCK
#
# Tunable for path selection optimization of SCSI reservation command. With
# this optimization, a path with least busy initiator port will be selected
# for a SCSI reservation command. If optimization is disabled scsi_vhci will
# use load balancing policy "none" for SCSI reservation command's path
# selection. Tunable spread-iport-reservation is used to establish the default
# value. Its default value is "yes". To make scsi_vhci to turn off the
# optimization globally, you need to set the tunable spread-iport-reservation
# to "no". Tunable spread-iport-reservation-exceptions can describe exceptional
# cases with the VID/PID combination specified, which has higher priority than
# the tunable spread-iport-reservation.
#
#spread-iport-reservation = "yes";
#
# "012345670123456789012345", "yes" or "no"
# "|-VID--||-----PID------|",
# spread-iport-reservation-exceptions =
#       "STK     T10000C",              "yes",
#       "HP      Ultrium 4-SCSI",       "no";
#
# To find the least busy initiator port, traffic load of every initiator port
# need to be monitored. One important traffic load metric is rlentime: the
# cumulative run length*time product of every initiator port. Delta rlentime
# of latest period of time is used to represent the historical traffic load.
# The simultaneous snapshot rlentime of every initiator port is needed to
# calculate the delta rlentime. Tunable iport-rlentime-snapshot-interval is
# used to configure the time interval in seconds to create rlentime snapshot
# of every initiator port. Its default value is 30 seconds.
#
#iport-rlentime-snapshot-interval = 30;
#
#END: SPREAD_IPORT_RESERVATION_BLOCK

#BEGIN: LSR_CLIENT_GRACE_PERIOD_BLOCK
#
# Keep the LSR suspended client device as attached for
# lsr-client-lifetime seconds when all paths are LSR suspended.
# During this extended lifetime, all I/O requests would be queued up. When the
# extended lifetime is over, the I/O requests in the queue would be
# re-processed with the client device detached.
#
# Setting this value to 0 will disable the feature.
#
# suspend-client-grace-period = 0;
#
#END: LSR_CLIENT_GRACE_PERIOD_BLOCK

#BEGIN: NOTE_BLOCK
# The VID fields above should contain exactly eight left-aligned ASCII
# characters. If the VID is less than 8 characters, it should be padded with
# spaces (ASCII 0x20) to 8 characters.
#
# The PID fields above should contain at most sixteen left-aligned ASCII
# characters. The PID field has an implicit wild-card rule. The product ID
# in the returned SCSI inquiry string is considered a match if it has the
# PID field as its prefix. For example, "Pillar Axiom" applies to both
# the "Pillar Axiom 600" and the "Pillar Axiom 500".
#
#END: NOTE_BLOCK
- To verify the paths on Solaris 10, use the luxadm display command on the relevant volume, for example:
luxadm -v display /dev/rdsk/c0t624A9370217E49EA1344E178000167F5d0s2
- For Solaris 11 and higher, you can use the fcinfo lu -v and mpathadm show lu commands instead.
- Path changes are displayed as shown below:
Displaying information for: /dev/rdsk/c0t624A9370217E49EA1344E178000167F5d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c0t624A9370217E49EA1344E178000167F5d0s2
  Vendor:               PURE
  Product ID:           FlashArray
  Revision:             8888
  Serial Num:           217E49EA1344E178000167F5
  Unformatted capacity: 102400.000 MBytes
  Read Cache:           Enabled
    Minimum prefetch:   0x0
    Maximum prefetch:   0x0
  Device Type:          Disk device
  Path(s):
  /dev/rdsk/c0t624A9370217E49EA1344E178000167F5d0s2
  /devices/scsi_vhci/ssd@g624a9370217e49ea1344e178000167f5:c,raw
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0,1/fp@0,0
    Device Address              524a937249ed1a00,f6
    Host controller port WWN    2101001b32aed8ca
    Class                       secondary
    State                       ONLINE
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0,1/fp@0,0
    Device Address              524a937249ed1a10,f6
    Host controller port WWN    2101001b32aed8ca
    Class                       secondary
    State                       ONLINE
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0,1/fp@0,0
    Device Address              524a937a295f8210,f6
    Host controller port WWN    2101001b32aed8ca
    Class                       primary
    State                       ONLINE
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0,1/fp@0,0
    Device Address              524a937a295f8200,f6
    Host controller port WWN    2101001b32aed8ca
    Class                       primary
    State                       ONLINE
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0/fp@0,0
    Device Address              524a937a295f8211,f6
    Host controller port WWN    2100001b328ed8ca
    Class                       primary
    State                       ONLINE
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0/fp@0,0
    Device Address              524a937249ed1a11,f6
    Host controller port WWN    2100001b328ed8ca
    Class                       secondary
    State                       ONLINE
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0/fp@0,0
    Device Address              524a937249ed1a01,f6
    Host controller port WWN    2100001b328ed8ca
    Class                       secondary
    State                       ONLINE
   Controller                   /devices/pci@400/pci@2/pci@0/pci@c/SUNW,qlc@0/fp@0,0
    Device Address              524a937a295f8201,f6
    Host controller port WWN    2100001b328ed8ca
    Class                       primary
    State                       ONLINE
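Referenced from the driver .conf step earlier in this list: a minimal sketch of creating the editable copies on Solaris 11, assuming the stock files are shipped under /kernel/drv:
# cp /kernel/drv/ssd.conf /etc/driver/drv/ssd.conf
# cp /kernel/drv/scsi_vhci.conf /etc/driver/drv/scsi_vhci.conf
The entries described above are then added to the copies under /etc/driver/drv/.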
Notes for ActiveCluster Configuration
- If you set preferred paths, the luxadm output will show the paths grouped into target port groups/active optimized paths.
- In a uniform access configuration, if the replication link is lost, the paths to the offline site/array will go into a STANDBY state.
- The FlashArray relies on some I/O activity before any ALUA state changes are discovered; e.g., you might still see the state as standby on an inactive LUN until either a reboot or some activity resumes (a small-read example is sketched below).
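As noted in the last bullet, a small read is usually enough to make the host re-evaluate the path state of an otherwise idle LUN; a minimal sketch (the device name is illustrative):
# dd if=/dev/rdsk/c0t624A9370217E49EA1344E178000167F5d0s2 of=/dev/null bs=512 count=1
# mpathadm show lu /dev/rdsk/c0t624A9370217E49EA1344E178000167F5d0s2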
UNMAP
You can find more information on space reclamation for Solaris in the Reclaiming Space in Solaris article.