
AIX Recommended Settings

This article covers our recommended settings for AIX on Pure Storage. AIX has been tested to work well with logical partitions (LPARs) and the Pure Storage array.

Installing the Pure ODM File

The AIX ODM definition is attached. Please install it on all AIX initiators connecting to Pure Storage. This ODM definition was created by Pure Storage using IBM's supported method of ODM creation to comply with their supported standards. Load the AIX ODM definition fileset onto the AIX system, either manually or with the System Management Interface Tool (SMIT).
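
If installing manually with installp, a minimal sketch follows (run from the directory containing the downloaded fileset; smit install_latest is the SMIT equivalent):

installp -apYd. devices.fcp.disk.pure.flasharray.mpio.rte   # preview only, no changes made
installp -acYd. devices.fcp.disk.pure.flasharray.mpio.rte   # apply and commit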

It is best practice to load the Pure ODM definition before attaching any Pure LUNs. If you do not, you may need to manually clean up the "Other FC SCSI Disk Drive" devices that were created before the ODM definition was loaded. After installing the ODM definition you will be prompted to reboot.
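
If stale devices do exist, a minimal cleanup sketch (assuming hdisk2 is a stale "Other FC SCSI Disk Drive" entry with no data in use):

rmdev -dl hdisk2   # delete the stale device definition
cfgmgr             # rediscover; the disk returns as a PURE MPIO drive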

ODM Definition Versions

Fibre Channel 1.0.0.12 (released November 24, 2020)

Changes from the previous version (1.0.0.11):

  • Changed the lower bound of the rw_timeout value range from 30 to 10.
  • Added the execute ('x') bit to the permissions of the pre_d file, so that the ODM cannot be removed while Pure disks exist.
  • Changed the default value of reset_delay from 2 to 0.

Fibre Channel 1.0.0.11 (released May 22, 2019)

Requires AIX 6.1 TL9, AIX 7.1 TL3, or AIX 7.2.

This update:

  • Sets the shortest_queue algorithm as the default path selection policy.
  • Includes the lbp_enabled attribute (although not visible), which is necessary for space reclamation now that the AIX filesystem supports it.
  • Introduces the timeout_policy attribute with a default policy of fail_path.
  • Identifies the attributes that can be modified concurrently.

Fibre Channel 1.0.0.4 (released July 10, 2014)

For use on versions prior to AIX 6.1 TL9 or AIX 7.1 TL3. The shortest_queue algorithm was added in this ODM, but the default remains round_robin; a chdev to shortest_queue is allowed.

ODMs higher than 1.0.0.4 require AIX 6.1 TL9+, AIX 7.1 TL3+, or AIX 7.2.

iSCSI 1.0.0.1 (released August 8, 2014)

iSCSI-specific ODM introduced.

The shortest_queue option is available as of the following AIX versions:

  • AIX Version 7.1 with Service Pack 3, or later
  • AIX Version 6.1 with the 6100-06 Technology Level, and Service Pack 5, or later
  • AIX Version 6.1 with the 6100-05 Technology Level, and Service Pack 6, or later
  • AIX Version 6.1 with the 6100-04 Technology Level, and Service Pack 10, or later
  • AIX Version 5.3 with the 5300-12 Technology Level and Service Pack 4, or later
  • AIX Version 5.3 with the 5300-11 Technology Level and Service Pack 7, or later

The maximum JFS2 file system size on AIX is 32 TB, and the maximum file size is 16 TB.

ODM Install

NOTE: If you are configuring AIX with VIOS, please see the IBM VIOS section below for when and where to install the ODM.

  1. Zone only one HBA port on AIX to the desired Pure controller port(s).
  2. You will then have only one hdisk to install onto.
  3. Install AIX on that hdisk.
  4. You will see:
    -bash-4.3# lsdev -Cc disk
    hdisk0 Available 00-00-00 SAS Disk Drive
    hdisk1 Available 00-00-00 SAS Disk Drive
    hdisk2 Available 05-00-01 Other FC SCSI Disk Drive
  5. You can then install the Pure ODM definition.
  6. Reboot, as prompted by the tool.
  7. The Pure ODM definition will have replaced the Other FC drive:
    -bash-4.3# lsdev -Cc disk
    hdisk0 Available 00-00-00 SAS Disk Drive
    hdisk1 Available 00-00-00 SAS Disk Drive
    hdisk2 Available 05-00-01 PURE MPIO Drive (Fibre)
  8. It is possible that a SCSI reservation still exists on the Pure array. Versions prior to AIX 6.1 TL7 will need to clear these reservations using their preferred method.
    For AIX 6.1 TL7+ (including 7.1 and 7.2), run the following to clear the reservation from AIX before proceeding:
    bash-4.3# devrsrv -f -c release -l hdisk2
    Device Reservation State Information
    ==================================================
    Device Name : hdisk2
    Device Open On Current Host? : YES
    ODM Reservation Policy : NO RESERVE
    Device Reservation State : NO RESERVE
    Device is currently Open on this host by a process.
    Do you want to continue y/n:y
    The command was successful.
    
    

    Note: There is an IBM issue where the "devrsrv -f -c release -l hdisk1" command fails to clear a SCSI-2 reservation; see IBM APARs IV76821 (AIX 6.1) and IV76995 (AIX 7.1).

  9. You can now continue adding your subsequent zoned paths and LUNs; see the example below.
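
After adding each new zone, rescan and verify; a minimal sketch using commands shown elsewhere in this article:

cfgmgr                          # discover LUNs on the newly zoned paths
lsdev -Cc disk | grep -i pure   # confirm the new hdisks appear as PURE MPIO drives
lspath -l hdisk2                # verify all expected paths show Enabled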

Checking the Current ODM Version

You can run the following command to check which version of the Pure ODM file you are currently running:

# lslpp -l | grep -i pure
  devices.fcp.disk.pure.flasharray.mpio.rte
                             1.0.0.4  COMMITTED  AIX MPIO Support for PURE

Upgrading the ODM 

Versions prior to 1.0.0.11:

For ODMs earlier than release 1.0.0.11, upgrading to a newer ODM version requires removing the old one first. This requires a reboot on the initiator. The steps are as follows:

  1. Remove the current ODM file.
    installp -u devices.fcp.disk.pure.flasharray.mpio.rte
    
  2. Install the new ODM file.
    installp -acYd. devices.fcp.disk.pure.flasharray.mpio.rte
    
  3. Reboot the host.
    shutdown -r now 

Versions 1.0.0.11+ only:
For newer releases, specifically 1.0.0.11 and later, upgrading to the latest version of the ODM no longer requires removing the original ODM file first: the later ODM definition packages include scripts that clean up the previous configuration before installing the fresh one. Simply proceed as follows:

  1. Install new ODM over the pre-existing ODM. Do not uninstall the original ODM.
    installp -acYd. devices.fcp.disk.pure.flasharray.mpio.rte
    
  2. Reboot the Host.
    shutdown -r now
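
After the reboot, you can confirm the installed ODM version as described in "Checking the Current ODM Version" above:

lslpp -l devices.fcp.disk.pure.flasharray.mpio.rte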
    

Multipath Recommendations

The MPIO policy defines how the host distributes IOs across the available paths to the storage. The round_robin (RR) policy distributes IOs evenly across all Active/Optimized paths. A newer MPIO policy, shortest_queue, is similar to round robin in that IOs are distributed across all available Active/Optimized paths, but it provides additional benefits: it biases IOs towards paths that are servicing IO more quickly (paths with shorter queues). If one path becomes intermittently disruptive or experiences higher latency, shortest_queue reduces the utilization of that path, limiting the effect of the problem path.

The shortest_queue option was added in AIX version 6.1, Technology Level 9, and in AIX version 7.1, Technology Level 3.

Older versions of our ODM default to round_robin, so if you would like to take advantage of the shortest_queue setting, run the following command:

chdev -l hdiskX -a algorithm=shortest_queue

Keep in mind that you will need to reapply this change any time you overwrite the ODM (for example, when upgrading it); a scripted sketch follows.
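
As a convenience, a minimal sketch that applies the setting to every Pure hdisk (verify the device list before running it):

for d in $(lsdev -Cc disk | awk '/PURE/ {print $1}'); do
    chdev -l "$d" -a algorithm=shortest_queue   # if the disk is in use, add -P and reboot
done
lsattr -El hdisk2 -a algorithm                  # spot-check one disk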

You can also put the ODM fileset into your lpp_source and generate a new SPOT (the SPOT is tftp'd to the client as part of the bootp process during a NIM network install); this resolves the multiple-disk issue at NIM network install time.
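
A sketch of the NIM master commands, with hypothetical resource names (lpp_source_72, spot_72) and the fileset staged in /tmp/pure_odm: first add the fileset to the lpp_source, then build a new SPOT from it.

nim -o update -a packages=all -a source=/tmp/pure_odm lpp_source_72
nim -o define -t spot -a server=master -a source=lpp_source_72 \
    -a location=/export/nim/spot spot_72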

Scanning for new LUNs on AIX

On AIX 7, use the command cfgmgr without arguments to rescan the HBA for new LUNs.

# cfgmgr
See which Pure disks are visible to the system: 
root@hclaix:~# lsdev -c disk | grep PURE
hdisk2   Defined    03-00-01  PURE MPIO Drive (Fibre)
hdisk3   Defined    03-00-01  PURE MPIO Drive (Fibre)
hdisk4   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk5   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk6   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk7   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk8   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk9   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk10  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk11  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk12  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk13  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk14  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk15  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk16  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk17  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk18  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk19  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk20  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk21  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk22  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk23  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk24  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk25  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk26  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk27  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk28  Available  03-00-01  PURE MPIO Drive (Fibre)

 Use "odmget" to correlate the serial number to the volume on Pure: 

root@hclaix:~# odmget -q "name=hdisk26 and attribute=unique_id" CuAt
CuAt:
 name = "hdisk26"
 attribute = "unique_id"
 value = "3A213624A937018563310400CACD7000115F10AFlashArray04PUREfcp"
 type = "R"
 generic = ""
 rep = "nl"
 nls_index = 79

Run the following on the Pure Storage FlashArray to verify the serial number matches: 

$ purevol list
Name          Size  Source  Created                  Serial
support_test  100G  -       2014-03-05 11:43:52 PST  18563310400CACD7000115F1
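
If you have many disks to check, the volume serial can be extracted directly from the unique_id: it is the 24 hex characters following Pure's 624A9370 NAA vendor prefix. A minimal sketch:

odmget -q "name=hdisk26 and attribute=unique_id" CuAt | \
    sed -n 's/.*624A9370\([0-9A-F]\{24\}\).*/\1/p'

The output (18563310400CACD7000115F1) matches the Serial column from purevol list above.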

Configure Pure LUNs with AIX

Fast Fail and Dynamic Tracking

Fast Fail and Dynamic Tracking should be enabled for any HBA port zoned to a Pure FlashArray port.  

# chdev -l fscsi0 -a fc_err_recov=fast_fail -P 
fscsi0 changed 
# chdev -l fscsi0 -a dyntrk=yes -P 
fscsi0 changed 

You can confirm these settings with lsattr:

# lsattr -l fscsi0 -E
attach        switch     How this adapter is CONNECTED          False
dyntrk        yes        Dynamic Tracking of FC Devices         True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True
scsi_id       0x420d00   Adapter SCSI ID                        False
sw_fc_class   3          FC Class for Fabric                    True

HBA max_xfer_size

Pure Storage supports a maximum read and write transfer size of 4MB. The Pure AIX ODM definition ensures hdisk device max_transfer is configured at 4MB (0x400000).

Note that the HBA max_xfer_size defaults to 1MB (0x100000) and will limit transfers larger than 1MB. For optimal performance, ensure that the max_xfer_size for each HBA zoned to a Pure port is set to 4MB, if supported by the HBA.

Please consult with IBM support before changing this setting, as changing this setting may break boot from SAN if the HBA or system does not support the 4MB (0x400000) setting. 

In the example below, we set the max_xfer_size to the AIX maximum of 4MB for the fcs0 device. Please repeat this for all fcs devices.

# chdev -l fcs0 -a max_xfer_size=0x400000 -P 
fcs0 changed

This change requires a reboot of the LPAR, or disabling and re-enabling the fcs interface, for the new value to take effect.

If the LPAR is a VIO client using virtual FC adapters, the attribute should be changed first on the VIO Server physical port.

You can run the following command to verify the HBA max_xfer_size:

# lsattr -l fcs0 -E
DIF_enabled    no          DIF (T10 protection) enabled                        True
bus_intr_lvl               Bus interrupt level                                 False
bus_io_addr    0xff800     Bus I/O address                                     False
bus_mem_addr   0xffe76000  Bus memory address                                  False
bus_mem_addr2  0xffe78000  Bus memory address                                  False
init_link      auto        INIT Link flags                                     False
intr_msi_1     90645       Bus interrupt level                                 False
intr_priority  3           Interrupt priority                                  False
lg_term_dma    0x800000    Long term DMA                                       True
max_xfer_size  0x400000    Maximum Transfer Size                               True
num_cmd_elems  500         Maximum number of COMMANDS to queue to the adapter  True
pref_alpa      0x1         Preferred AL_PA                                     True
sw_fc_class    2           FC Class for Fabric                                 True
tme            no          Target Mode Enabled                                 True

LUN Connectivity and MPIO Control

After running cfgmgr, the following command can be used to validate LUN connectivity and MPIO control:

# lsdev -Cc disk 
hdisk0 Available 00-08-00 SAS Disk Drive 
hdisk1 Available 00-08-00 SAS Disk Drive 
hdisk2 Available 03-00-01 PURE MPIO Drive (Fibre) 
root@hclaix:/# lspath -l hdisk2 -F "status:name:path_id:parent:connection" 
Enabled:hdisk2:0:fscsi0:21000024ff391bbf,1000000000000 
Enabled:hdisk2:1:fscsi0:21000024ff3855bd,1000000000000 
Enabled:hdisk2:2:fscsi1:21000024ff385093,1000000000000 
Enabled:hdisk2:3:fscsi1:21000024ff385009,1000000000000 
Enabled:hdisk2:4:fscsi2:21000024ff391bbe,1000000000000 
Enabled:hdisk2:5:fscsi2:21000024ff3855bc,1000000000000 
Enabled:hdisk2:6:fscsi3:21000024ff385092,1000000000000 
Enabled:hdisk2:7:fscsi3:21000024ff385008,1000000000000 

The command output above displays the following:

  • 8 logical paths for hdisk2 (this AIX system has 4 dual-port HBAs).
  • The connection field contains the array target port WWPN and the host LUN ID.
  • Example: 21000024ff391bbf,1000000000000 (host LUN ID = 1, as set in the GUI/CLI).

Run the following command to check the disk attributes set by the Pure ODM definition (the output below is an example only):

# lsattr -El hdisk2
PCM                PCM/friend/Pure      Path Control Module               False 
PR_key_value       none                 Reserve Key                       True 
algorithm          round_robin          Algorithm                         True 
clr_q              no                   Device CLEARS its Queue on error  True 
hcheck_cmd         inquiry              Health Check Command              True 
hcheck_interval    10                   Health Check Interval             True 
hcheck_mode        nonactive            Health Check Mode                 True 
location                                Location Label                    True 
lun_id             0x1000000000000      Logical Unit Number ID            False 
lun_reset_spt      yes                  SCSI LUN reset                    True 
max_transfer       0x400000             Maximum Transfer Size             True 
node_name          0x20000024ff391bbf   Node Name                         False 
pvid               none                 Physical Volume ID                False 
q_err              yes                  Use QERR bit                      False 
q_type             simple               Queue TYPE                        True 
queue_depth*       256                  Queue DEPTH                       True 
reassign_to        120                  REASSIGN time out                 True 
reserve_policy     no_reserve           Reserve Policy                    True 
rw_timeout         60                   READ/WRITE time out               True 
scsi_id            0x10f00              SCSI ID                           False 
start_timeout      60                   START UNIT time out               True 
ww_name            0x21000024ff391bbf   FC World Wide Name                False

* NOTE: The 256 queue depth setting in the ODM should be treated as a starting point; it may need to be adjusted for the host environment.

Note: reserve_policy

If you want to set the reserve_policy to PR_shared or PR_exclusive, you will first need to set a PR_key_value. The value must be unique; you can set it to 0x1, for example.
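
A minimal sketch, assuming hdisk2 and an illustrative key of 0x1:

chdev -l hdisk2 -a PR_key_value=0x1 -a reserve_policy=PR_shared   # hypothetical key; add -P and reboot if the disk is in use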

NOTE: Our Purity 2.x implementation of AIX support does not include the auto contingent allegiance (ACA) support that AIX requires. Lacking this support, AIX will not allow a queue depth greater than 1 for Pure LUNs. ACA is supported starting in Purity 3.x.

Note: hcheck_interval

The Pure Storage ODM definition sets hcheck_interval to 10, as opposed to IBM's recommendation of 30; our best practice is to be in line with our path-checker default of 10 seconds on Linux. The hcheck_interval is set lower than the rw_timeout on Pure Storage devices because only non-active paths are health-checked, so the lower hcheck_interval has no SAN performance impact.

Live Partition Mobility

When using Live Partition Mobility (LPM), it is essential that the following steps are taken.

First, remove any reservations on Pure LUNs used by the LPAR by running the following command against each Pure LUN:

bash-4.3# devrsrv -f -c release -l hdisk2
Device Reservation State Information
==================================================
Device Name : hdisk2
Device Open On Current Host? : YES
ODM Reservation Policy : NO RESERVE
Device Reservation State : NO RESERVE
Device is currently Open on this host by a process.
Do you want to continue y/n:y
The command was successful.

When you create an NPIV adapter in VIOS, VIOS assigns it two WWPNs.

[Image: virtual Fibre Channel adapter showing its two assigned WWPNs]

Ensure you zone your SAN to the Pure array with *both* of the WWPNs the virtual Fibre Channel adapter creates. In operation, the LPAR will use only one of these, and only one will FLOGI into your SAN, so you may need to add the second one manually if you are using SAN management software such as DCNM.

An example zone on a Cisco switch would look like this:

zone name lpar1 vsan 1
  pwwn 52:4a:93:7f:97:09:5d:03
  pwwn 52:4a:93:7f:97:09:5d:13
  pwwn c0:50:76:09:50:03:00:0c
  pwwn c0:50:76:09:50:03:00:0d

Lastly, make sure that both of the NPIV WWPNs are defined for the host on the Pure Storage array; an example follows. Note that the unused one will show up as not connected; this is normal, and during an LPM operation it will switch over to the alternate connection.
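
For example, on the array, registering both NPIV WWPNs from the zone above for a hypothetical host named lpar1 (Purity CLI syntax varies by release; verify against your version):

purehost create --wwnlist c0:50:76:09:50:03:00:0c,c0:50:76:09:50:03:00:0d lpar1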

[Images: the LPAR host on the Pure Storage array with both NPIV WWPNs defined, one showing as not connected]

IBM VIOS

There are several possible combinations of using VIOS with Pure Storage, and the configurations differ depending on which one you are using.

VIOS Option #1: Pure FlashArray presented only to the VIO Server

[Diagram: Pure FlashArray presented only to the VIO Server]

Install the VIO Server with a single SAN path (if booting the VIO Server from SAN) and, when installation is complete, install the Pure ODM definition on the VIO Server before adding the remaining paths.

Present Pure LUNs directly to the VIO server, create storage pools as needed, and create virtual disks inside these storage pools to boot the LPARs from.
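
A sketch of the corresponding VIO Server restricted-shell (padmin) commands, with hypothetical pool, disk, and adapter names:

$ mksp -f pure_sp hdisk2                                   # create a storage pool on a Pure LUN
$ mkbdsp -sp pure_sp 50G -bd lpar1_boot -vadapter vhost0   # carve a virtual disk and map it to the LPAR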

The Pure Storage ODM definition is not required inside the LPAR.

VIOS Option #2: Physically Assign HBA to the LPAR

[Diagram: physical HBA assigned directly to the LPAR]

In this configuration, the VIO Server assigns a physical HBA to an LPAR, and the ODM definition should be installed in the LPAR's AIX system.

If installing a new system, it is recommended to complete the install with just a single path to the storage presented; once the install is complete, install the Pure Storage ODM definition onto the LPAR and add the remaining paths.

Set the max_xfer_size to the AIX maximum of 4MB for the fcs0 device, and repeat this for all fcs devices. There is no need to set this on the VIO Server, because the HBA hardware is assigned only to the LPAR.

VIOS Option #3: LPARs use Virtual FC Adapters to Talk to Pure (NPIV)

[Diagram: LPARs using virtual FC adapters (NPIV)]

If the VIO Server itself uses volumes from the Pure Storage array, complete the setup of the ODM in the same way as described in VIOS Option #1. (That step is not required if the VIO Server will not handle Pure Storage LUNs.)

For NPIV, the ODM needs to be installed on both the VIOS and the LPAR.

For the VIO Client LPARs that will be using Virtual Fiber Channel adapters, follow all the recommendations in VIOS Option #2.

If setting the larger 4MB transfer size, it is critical to set it on the VIO Server first. Be aware that a larger value in the LPAR's vFC max_xfer_size attribute than in the VIO Server's physical adapter max_xfer_size attribute could render the LPAR unable to boot.

Set the transfer size to 4MB at this time:

# chdev -l fcs0 -a max_xfer_size=0x400000 -P
fcs0 changed

This change requires a reboot of the VIO Server, or disabling and re-enabling the fcs interface, for the new value to take effect.

If changing the max_xfer_size, ensure the VIO Server is always set to a value equal to or larger than any of its LPARs. If an LPAR is already set to a value larger than 0x400000, lowering the VIO Server to 0x400000 before changing the LPAR will cause a lockup or other issues on that LPAR.

VIOS Option #4: VIOS-Managed Physical Volumes – vSCSI (Analogous to a Thin RDM on VMware)

Install the ODM on the VIO Server.
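
For example, a sketch mapping a whole Pure hdisk through to the client LPAR (device names are hypothetical):

$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev lpar1_disk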

VIOS Option #5: Virtual Disk

Create a volume group that can be carved into logical volumes (LVs), each presented as a vSCSI LUN to the LPAR; a sketch follows.
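
A sketch of the corresponding padmin commands, with hypothetical names:

$ mkvg -f -vg pure_vg hdisk2                               # volume group on a Pure LUN
$ mklv -lv lpar1_lv pure_vg 50G                            # carve a logical volume
$ mkvdev -vdev lpar1_lv -vadapter vhost0 -dev lpar1_disk   # present it to the LPAR as a vSCSI LUN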