HP-UX Recommended Settings
To ensure the best performance with the Pure Storage FlashArray, use this guide when configuring and implementing HP-UX hosts in your environment. Pure Storage recommends that you follow HP's best practices and install the latest patch bundles and quality packs on your servers.
These recommendations apply to the versions of HP-UX that we have certified as per our Compatibility Matrix.
Initial Storage Setup
The general steps needed to initially set up storage on an HP-UX host are as follows:
- Ensure physical connections and zoning are in place and follow our best practices.
- Create the host object in Purity and manually add the WWNs (in the case of an HP-UX host, they likely will not show up in the list of available WWNs).
- Set host personality: `purehost setattr --personality hpux <HOST>`.
- Add a volume to the host.
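On the Purity side, the steps above can be sketched with the CLI. The host name, WWNs, and volume name below are placeholders, and the exact `purehost`/`purevol` syntax may vary between Purity releases, so treat this as an illustration rather than a literal script:

```shell
# Sketch (placeholder names and WWNs): create the host object with its WWNs,
# set the HP-UX personality, then create and connect a volume.
purehost create --wwnlist 10:00:00:00:C9:12:34:56,10:00:00:00:C9:12:34:57 HPUX-HOST
purehost setattr --personality hpux HPUX-HOST
purevol create --size 1T HPUX-VOL
purehost connect --vol HPUX-VOL HPUX-HOST
```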
General Considerations
- HP-UX has no native SCSI UNMAP support, so there is no way to reclaim deleted blocks with the native HP-UX JFS or OnlineJFS file systems.
- Do not connect Pure volumes to a host or host group until the host personality has been set to HP-UX: `purehost setattr --personality hpux <HOST>`.
- HP-UX 11i v3 introduces native multipathing, which provides not only failover protection but also performance gains from true load balancing. If you are running HP-UX 11i v2, we recommend that you consider upgrading.
HP-UX 11i v3
HP-UX 11i v3 introduces a new representation of mass storage devices, known as the agile view. In the agile view, disk devices and tape drives are identified by the actual object, not by a hardware path to the object. In addition, paths to the device can change dynamically and multiple paths to a single device can be transparently treated as a single virtualized path, with I/O being distributed across those multiple paths.
In HP-UX 11i v3, there are three different types of paths to a device: legacy hardware path, lunpath hardware path, and LUN hardware path. All three are numeric strings of hardware components, with each number typically representing the location of a hardware component on the path to the device.
The new agile view increases the reliability, adaptability, performance, and scalability of the mass storage stack, all without the need for operator intervention. For more information, see the white papers “The Next Generation Mass Storage Stack: HP-UX 11i v3” and “HP-UX 11i v3 Persistent DSF Migration Guide” (http://hp.com/go/hpux-core-docs ).
[Source]
Pure Storage recommends that you use Agile DSF due to the round-robin load-balancing capability that has been introduced.
Path Fail Settings
- Ensure that path_fail_secs is set to 120 seconds, which is the default. This is the number of seconds the Mass Storage Stack will wait after the first I/O failure on a path before marking the path as offline. At least one I/O request has to succeed within that window for the path not to be marked offline. In other words, if a lunpath continues to see I/O errors with no successful I/O completions for path_fail_secs seconds, the Mass Storage Stack marks the lunpath offline. [Source]
The command to check this attribute is as follows:

scsimgr get_attr -D /dev/rdisk/disk0 -a path_fail_secs

name = path_fail_secs
current = 120
default = 120
saved =
If you need to change it, use this command:

scsimgr save_attr -D /dev/rdisk/disk0 -a path_fail_secs=120

Value of attribute path_fail_secs saved successfully
Load Balancing Settings
The MPIO policy defines how the host distributes IOs across the available paths to the storage. The Round Robin (RR) policy distributes IOs evenly across all Active/Optimized paths. A newer MPIO policy, least_cmd_load, is similar to round robin in that IOs are distributed across all available Active/Optimized paths; however, it provides some additional benefits. The least_cmd_load policy biases IOs toward paths that are servicing IO more quickly (paths with shorter queues). If one path becomes intermittently disruptive or experiences higher latency, least_cmd_load steers IOs away from that path, reducing the effect of the problem path.
We recommend using "least_cmd_load".
- Check what is currently set with the following command:
scsimgr get_attr -D /dev/rdisk/disk0 -a load_bal_policy

name = load_bal_policy
current = least_cmd_load
default = round_robin
saved =
- Use the following command to set the load-balancing algorithm for the Pure Storage LUNs:

scsimgr set_attr -D /dev/rdisk/diskX -a load_bal_policy=least_cmd_load
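Note that scsimgr's set_attr changes only the running value; save_attr stores the value so it also survives a reboot. To apply the persistent setting across many devices, a simple loop can be used. This is a sketch only; it assumes every /dev/rdisk/disk* device on the host is a Pure Storage LUN, so filter the device list first if the host also sees other storage:

```shell
# Sketch: persist least_cmd_load on each disk device.
# save_attr stores the value so it survives a reboot, whereas set_attr
# only changes the running value.
# Assumes all /dev/rdisk/disk* devices are Pure Storage LUNs.
for d in /dev/rdisk/disk*; do
    scsimgr save_attr -D "$d" -a load_bal_policy=least_cmd_load
done
```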
Match the Pure Storage volume serial number to the HP-UX disk
Use the following command on the HP-UX server to get the serial number attributes:
scsimgr get_attr -D /dev/rdisk/disk0 -a serial_number

SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk0

name = serial_number
current = "7B04BBBD2E804C1100011059"
default =
saved =
To check LUN paths, run the following command:
scsimgr lun_map -D /dev/rdisk/disk26465

LUN PATH INFORMATION FOR LUN : /dev/rdisk/disk26465

Total number of LUN paths = 4
World Wide Identifier(WWID) = 0x624a93707b04bbbd2e804c1100011059

LUN path : lunpath1233
Class = lunpath
Instance = 1233
Hardware path = 0/0/0/7/0/0/0.0x524a937000007310.0x400d000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE

LUN path : lunpath1239
Class = lunpath
Instance = 1239
Hardware path = 0/0/0/7/0/0/1.0x524a937000007311.0x400d000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE

LUN path : lunpath1246
Class = lunpath
Instance = 1246
Hardware path = 0/0/0/8/0/0/0.0x524a937000007312.0x400d000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE

LUN path : lunpath1236
Class = lunpath
Instance = 1236
Hardware path = 0/0/0/8/0/0/1.0x524a937000007313.0x400d000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE
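As the output above shows, the Pure volume serial number is embedded in the WWID that HP-UX reports: the WWID is the NAA identifier "6", followed by Pure Storage's OUI (24a937), a zero, and then the 24-hex-digit volume serial in lower case. A small shell helper (a sketch; the function name is our own) can compute the expected WWID from a serial so the two can be matched:

```shell
# Sketch: build the expected HP-UX WWID for a Pure Storage volume serial.
# Prefix = NAA type "6" + Pure Storage OUI (24:a9:37) + "0".
pure_wwid() {
    serial_lc=$(printf '%s' "$1" | tr 'A-Z' 'a-z')
    printf '0x624a9370%s\n' "$serial_lc"
}

pure_wwid 7B04BBBD2E804C1100011059
# prints 0x624a93707b04bbbd2e804c1100011059
```

Comparing that value against the WWID from scsimgr lun_map identifies which HP-UX disk corresponds to which Pure volume.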
Device IDs
Customers can set device IDs, which are user-friendly names. User-friendly device identifiers can only be set for devices that support the SET DEVICE IDENTIFIER and REPORT DEVICE IDENTIFIER SCSI commands; in this case, the identifier resides in non-volatile memory on the device and can be queried by all systems accessing the device. An alias, by contrast, is stored locally in the system registry, so it must be set on each HP-UX system accessing the device (for example, on every node of a cluster).
To assign the user-friendly device identifier "Engineering" to disk device disk0:
scsimgr -f set_devid -D /dev/rdisk/disk0 "Engineering"

scsimgr: Device Identifier successfully set
To display the device identifier assigned to disk device disk0:
scsimgr get_devid -D /dev/rdisk/disk0

Device Identifier for /dev/rdisk/disk0 = Engineering
HP-UX 11i v2
This version of HP-UX does not include native round-robin load balancing.
Using PV Links in HP-UX 11i v2 for Multipathing
Considerations:
- PV Links are active/passive in nature, with only one of the paths being active to the array.
- If the primary path fails, PV links will switch the active path to one of the remaining paths.
- PV Links provide basic path failover only, and not the load-balancing and performance gains you would get from round robin by upgrading to HP-UX 11i v3.
- The order in which PV Links selects alternate paths during failures is controlled by the order in which the logical disk device special files are added into the volume group.
- Even with a 4 Gb or 8 Gb HBA, PV Links may not deliver the expected performance results or gains, because only one path to the array is active.
Configuration:
For this example, assume you have a Pure Storage volume with four paths to the array; you will then have four disk devices on the HP-UX system that map to the target ports.
/dev/dsk/c1t0d0 = ct0.fc0
/dev/dsk/c2t0d0 = ct0.fc2
/dev/dsk/c3t0d0 = ct1.fc0
/dev/dsk/c4t0d0 = ct1.fc2
As noted in the considerations above, you will want to add the disk devices to the volume group in such a way that you alternate between controllers. This is important because the default PV timeout value is 30 seconds: if a controller goes down, or you are upgrading Purity and your primary path and first alternate path are on the same controller, a failover can take a long time, since PV Links waits the full timeout (30 seconds by default, or whatever the value is set to) before switching paths and moving on to the next one.
With the example devices above, you would run the following commands:
vgcreate /dev/vg01 /dev/dsk/c1t0d0    (ct0.fc0 is the primary path)
vgextend /dev/vg01 /dev/dsk/c3t0d0    (ct1.fc0 is the first alternate path)
vgextend /dev/vg01 /dev/dsk/c2t0d0    (ct0.fc2 is the second alternate path)
vgextend /dev/vg01 /dev/dsk/c4t0d0    (ct1.fc2 is the third alternate path)
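After the volume group is built, the primary and alternate link ordering can be confirmed with vgdisplay (a sketch; the exact output layout varies by OS revision):

```shell
# Sketch: list the physical volumes in the volume group in verbose mode.
# Alternate paths are flagged as "Alternate Link" in the PV section,
# letting you verify the failover order alternates between controllers.
vgdisplay -v /dev/vg01
```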
Persistent FCIDs
In a switched environment, make sure persistent FCIDs are configured on Cisco MDS and Brocade switches. Another issue with 11i v2 and earlier is that the HP-UX disk device hardware paths reference the FCID of the target ports. If you move your Pure target ports to another switch port and persistent FCIDs are not in place, you will lose access to your LUNs, because the FCID, and therefore the hardware path to your LUN, will have changed, and HP-UX loses track of the device.
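As an illustration, on a Cisco MDS switch persistent FCID allocation is controlled per VSAN (VSAN 10 below is a placeholder; consult your switch documentation for the equivalent Brocade setting):

```shell
! Sketch: enable persistent FCIDs for VSAN 10 on a Cisco MDS switch
! (entered from configuration mode).
configure terminal
fcdomain fcid persistent vsan 10
end
```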