
AIX Recommended Settings

This article covers our recommended settings for AIX on Pure Storage. AIX has been tested and works well with logical partitions (LPARs) and the Pure Storage array.

Installing the Pure ODM File

The AIX ODM definition is attached. Please install it on all AIX initiators connecting to Pure Storage. This ODM definition was created by Pure Storage using IBM's supported method of ODM creation to comply with their supported standards. Load the AIX ODM definition fileset into the AIX system, either manually or with the System Management Interface Tool (SMIT).

It is best practice to load the Pure ODM definition before attaching any Pure LUNs. If you do not, you may need to manually clean up the "Other FC SCSI Disk Drive" devices that were created before the ODM definition was loaded. After installing the ODM definition you will be prompted to reboot.

ODM Definition Versions

Protocol: Fibre Channel
Release Date: May 22, 2019
Notes: Requires AIX 6.1 TL9, AIX 7.1 TL3, or AIX 7.2. This update:

  • Sets the shortest_queue algorithm as the default path selection policy.
  • Includes the lbp_enabled attribute (although not visible), which is necessary for space reclamation now that the AIX filesystem supports it.
  • Introduces the timeout_policy attribute with a default of fail_path.
  • Identifies the attributes that can be modified concurrently.

Protocol: Fibre Channel
Release Date: July 10, 2014
Notes: For use on versions prior to AIX 6.1 TL9 or AIX 7.1 TL3. The shortest_queue algorithm was added in this ODM, but the default remains round_robin; a chdev to shortest_queue is allowed.

Protocol: iSCSI
Release Date: August 8, 2014
Notes: iSCSI-specific ODM introduced.

ODM Install

NOTE: If you are configuring AIX with VIOS, please see the IBM VIOS section below for when and where to install the ODM.

  1. Zone only one HBA port on the AIX host to the desired Pure controller port(s).
  2. You will have only one Pure hdisk to install onto.
  3. Install AIX on that hdisk.
  4. You will see:
    -bash-4.3# lsdev -Cc disk
    hdisk0 Available 00-00-00 SAS Disk Drive
    hdisk1 Available 00-00-00 SAS Disk Drive
    hdisk2 Available 05-00-01 Other FC SCSI Disk Drive
  5. You can then install the Pure ODM definition.
  6. Reboot, as prompted by the tool.
  7. The Pure ODM will have replaced the "Other FC SCSI Disk Drive" entry:
    -bash-4.3# lsdev -Cc disk
    hdisk0 Available 00-00-00 SAS Disk Drive
    hdisk1 Available 00-00-00 SAS Disk Drive
    hdisk2 Available 05-00-01 PURE MPIO Drive (Fibre)
  8. It is possible a SCSI reservation still exists on the Pure array. AIX versions prior to 6.1 TL7 will need to clear these reservations using their preferred method.
    For AIX 6.1 TL7 and later (including 7.1 and 7.2), run the following to clear the reservation from AIX before proceeding:
    bash-4.3# devrsrv -f -c release -l hdisk2
    Device Reservation State Information
    Device Name : hdisk2
    Device Open On Current Host? : YES
    ODM Reservation Policy : NO RESERVE
    Device Reservation State : NO RESERVE
    Device is currently Open on this host by a process.
    Do you want to continue y/n:y
    The command was successful.

    Note: There is an IBM issue where the "devrsrv -f -c release -l hdisk1" command fails to clear a SCSI-2 reservation; see IBM APARs IV76821 (AIX 6.1) and IV76995 (AIX 7.1).

  9. You can now continue adding your subsequent zoned paths and LUNs.
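
Step 7 can be spot-checked from the shell. A minimal sketch, shown against a sample of the lsdev output above (assumption: on a live host you would feed it the real output of lsdev -Cc disk instead of the sample variable):

```shell
#!/bin/sh
# Count disks still claimed by the generic FC driver instead of the Pure ODM.
# On a live AIX host, replace $sample with the output of: lsdev -Cc disk
sample='hdisk0 Available 00-00-00 SAS Disk Drive
hdisk1 Available 00-00-00 SAS Disk Drive
hdisk2 Available 05-00-01 PURE MPIO Drive (Fibre)'

unclaimed=$(printf '%s\n' "$sample" | grep -c "Other FC SCSI Disk Drive") || true
if [ "$unclaimed" -eq 0 ]; then
    echo "All FC disks are claimed by the Pure ODM"
else
    echo "$unclaimed disk(s) still generic; remove them with rmdev and rerun cfgmgr"
fi
```

A non-zero count after the reboot usually means the device was configured before the ODM was installed and needs the manual cleanup described earlier.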

Checking the Current ODM Version

You can run the following command to check which version of the Pure ODM file you are currently running:

# lslpp -l | grep -i pure
                     COMMITTED  AIX MPIO Support for PURE

Upgrading the ODM 

To upgrade to the latest version of the ODM, you must first remove the old one. This process requires rebooting the initiator twice; the steps are as follows:

  1. Remove the current ODM file. 
  2. Reboot the host. 
  3. Install the new ODM file.
  4. Reboot the host. 

Multipath Recommendations

The MPIO policy defines how the host distributes IOs across the available paths to the storage. The round_robin (RR) policy distributes IOs evenly across all Active/Optimized paths. A newer MPIO policy, shortest_queue, is similar to round robin in that IOs are distributed across all available Active/Optimized paths, but it provides some additional benefits: shortest_queue biases IOs toward paths that are servicing IO more quickly (paths with shorter queues). If one path becomes intermittently disruptive or experiences higher latency, shortest_queue reduces utilization of that path, limiting the effect of the problem path.

The shortest_queue option was added in AIX version 6.1, Technology Level 9, and in AIX version 7.1, Technology Level 3.

ODM versions prior to May 2019 default to round_robin, so if you would like to take advantage of the shortest_queue setting with one of those versions, you will need to run the following command:

chdev -l hdiskX -a algorithm=shortest_queue

Keep in mind that you will need to manually make this change anytime you overwrite the ODM (i.e. upgrading the ODM). 
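
The chdev above targets a single disk. To cover every Pure device, you can generate one chdev per hdisk that the Pure ODM has claimed. A dry-run sketch against sample lsdev output (assumption: on a live host you would feed it the real output of lsdev -Cc disk):

```shell
#!/bin/sh
# Generate a chdev command for every hdisk claimed by the Pure ODM.
# On a live AIX host, replace $sample with the output of: lsdev -Cc disk
sample='hdisk0 Available 00-08-00 SAS Disk Drive
hdisk2 Available 03-00-01 PURE MPIO Drive (Fibre)
hdisk3 Available 03-00-01 PURE MPIO Drive (Fibre)'

printf '%s\n' "$sample" |
    awk '/PURE MPIO Drive/ {print "chdev -l " $1 " -a algorithm=shortest_queue"}'
```

This prints one command per Pure disk (here, for hdisk2 and hdisk3); review the list, then pipe it to sh to apply.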

Scanning for new LUNs on AIX

On AIX 7, use the command cfgmgr without arguments to rescan the HBA for new LUNs.

# cfgmgr
See which Pure disks are visible to the system: 
root@hclaix:~# lsdev -c disk | grep PURE
hdisk2   Defined    03-00-01  PURE MPIO Drive (Fibre)
hdisk3   Defined    03-00-01  PURE MPIO Drive (Fibre)
hdisk4   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk5   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk6   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk7   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk8   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk9   Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk10  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk11  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk12  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk13  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk14  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk15  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk16  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk17  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk18  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk19  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk20  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk21  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk22  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk23  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk24  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk25  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk26  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk27  Available  03-00-01  PURE MPIO Drive (Fibre)
hdisk28  Available  03-00-01  PURE MPIO Drive (Fibre)

Use "odmget" to correlate the serial number to the volume on Pure:

root@hclaix:~# odmget -q "name=hdisk26 and attribute=unique_id" CuAt
 name = "hdisk26"
 attribute = "unique_id"
 value = "3A213624A937018563310400CACD7000115F10AFlashArray04PUREfcp"
 type = "R"
 generic = ""
 rep = "nl"
 nls_index = 79

Run the following on the Pure Storage FlashArray to verify the serial number matches: 

$ purevol list
Name          Size  Source  Created                  Serial
support_test  100G  -       2014-03-05 11:43:52 PST  18563310400CACD7000115F1
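
In the unique_id above, the FlashArray volume serial is the run of 24 hex digits immediately after Pure's vendor prefix 624A9370, which is how the odmget value maps to the Serial column of purevol list. A sketch of extracting it (the sample value is the one from the odmget output above):

```shell
#!/bin/sh
# Pull the 24-hex-digit FlashArray volume serial out of an hdisk unique_id.
# The serial follows the Pure NAA vendor prefix 624A9370.
unique_id="3A213624A937018563310400CACD7000115F10AFlashArray04PUREfcp"
serial=$(printf '%s\n' "$unique_id" | sed -n 's/.*624A9370\([0-9A-F]\{24\}\).*/\1/p')
echo "$serial"   # 18563310400CACD7000115F1
```

The extracted value matches the Serial column shown by purevol list above.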

Configuring Pure LUNs with AIX

Fast Fail and Dynamic Tracking

Fast Fail and Dynamic Tracking should be enabled for any HBA port zoned to a Pure FlashArray port.  

# chdev -l fscsi0 -a fc_err_recov=fast_fail -P 
fscsi0 changed 
# chdev -l fscsi0 -a dyntrk=yes -P 
fscsi0 changed 

These changes were made with the -P flag, so they take effect at the next reboot. You can confirm the settings with lsattr:

# lsattr -l fscsi0 -E
attach        switch     How this adapter is CONNECTED          False
dyntrk        yes        Dynamic Tracking of FC Devices         True
fc_err_recov  fast_fail  FC Fabric Event Error RECOVERY Policy  True
scsi_id       0x420d00   Adapter SCSI ID                        False
sw_fc_class   3          FC Class for Fabric                    True
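
When many fscsi devices are involved, checking by eye is error-prone. A sketch of a scripted check that parses the lsattr output (assumption: on a live host you would feed it the real output of lsattr -l fscsiN -E for each device):

```shell
#!/bin/sh
# Verify fast_fail and dyntrk from lsattr output for an fscsi device.
# On a live AIX host, replace $sample with the output of: lsattr -l fscsi0 -E
sample='dyntrk yes Dynamic Tracking of FC Devices True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True'

printf '%s\n' "$sample" |
    awk '$1 == "dyntrk"       { dt = $2 }
         $1 == "fc_err_recov" { er = $2 }
         END {
             if (dt == "yes" && er == "fast_fail") print "OK"
             else print "MISCONFIGURED: dyntrk=" dt " fc_err_recov=" er
         }'
```

Any output other than OK means one of the two attributes still needs the chdev shown above.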

HBA max_xfer_size

Pure Storage supports a maximum read and write transfer size of 4MB. The Pure AIX ODM definition ensures hdisk device max_xfer_size is configured at 4MB (0x400000).

Note that the HBA max_xfer_size defaults to 1MB (0x100000) and will limit transfers larger than 1MB. For optimal performance, ensure that the max_xfer_size for each HBA zoned to a Pure port is set to 4MB, if supported by the HBA.

Please consult with IBM support before changing this setting, as changing this setting may break boot from SAN if the HBA or system does not support the 4MB (0x400000) setting. 

More information can be found here: AIX and VIOS Disk And Fibre Channel Adapter Queue Tuning  

In the example below, we set the max_xfer_size to the AIX maximum of 4MB for the fcs0 device. Repeat this for all fcs devices.

# chdev -l fcs0 -a max_xfer_size=0x400000 -P 
fcs0 changed

This change requires a reboot of the LPAR, or disabling and re-enabling the fcs interface, for the new value to take effect.

If the LPAR is a VIO client using virtual FC adapters (NPIV), the attribute should be changed first on the VIO server's physical port.

You can run the following command to verify the HBA max_xfer_size:

# lsattr -l fcs0 -E
DIF_enabled    no          DIF (T10 protection) enabled                        True
bus_intr_lvl               Bus interrupt level                                 False
bus_io_addr    0xff800     Bus I/O address                                     False
bus_mem_addr   0xffe76000  Bus memory address                                  False
bus_mem_addr2  0xffe78000  Bus memory address                                  False
init_link      auto        INIT Link flags                                     False
intr_msi_1     90645       Bus interrupt level                                 False
intr_priority  3           Interrupt priority                                  False
lg_term_dma    0x800000    Long term DMA                                       True
max_xfer_size  0x400000    Maximum Transfer Size                               True
num_cmd_elems  500         Maximum number of COMMANDS to queue to the adapter  True
pref_alpa      0x1         Preferred AL_PA                                     True
sw_fc_class    2           FC Class for Fabric                                 True
tme            no          Target Mode Enabled                                 True
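
To cover all adapters, you can generate the chdev only for those fcs devices not already at 4MB. A dry-run sketch (assumption: on a live host the "name value" pairs below would come from lsdev and per-adapter lsattr queries rather than the sample variable):

```shell
#!/bin/sh
# Emit a chdev for every fcs adapter whose max_xfer_size is not yet 0x400000.
# On a live AIX host, build the "name value" pairs from lsdev/lsattr output.
sample='fcs0 0x400000
fcs1 0x100000'

printf '%s\n' "$sample" |
    awk '$2 != "0x400000" {print "chdev -l " $1 " -a max_xfer_size=0x400000 -P"}'
```

Here only fcs1 gets a command. Review the output, pipe it to sh, and reboot as described above.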

LUN Connectivity and MPIO Control

After running cfgmgr, the following command can be used to validate LUN connectivity and MPIO control:

# lsdev -Cc disk 
hdisk0 Available 00-08-00 SAS Disk Drive 
hdisk1 Available 00-08-00 SAS Disk Drive 
hdisk2 Available 03-00-01 PURE MPIO Drive (Fibre) 
root@hclaix:/# lspath -l hdisk2 -F "status:name:path_id:parent:connection" 

In this environment, the output of the lspath command shows the following:

  • 8 logical paths for hdisk2 (this AIX system has 4 dual-port HBAs) 
  • The connection is the array target port wwpn and host LUN ID 
  • Example: 21000024ff391bbf,1000000000000 (host LUN ID = 1 as set in GUI/CLI)
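
The LUN ID in the connection field is the 64-bit SCSI LUN in hex, with the first-level LUN in the top 16 bits, which is why 1000000000000 corresponds to host LUN 1. A sketch of decoding a connection string (the sample value is the example above):

```shell
#!/bin/sh
# Split an lspath "connection" field into target WWPN and host LUN number.
# The first-level LUN lives in the top 16 bits of the 64-bit LUN field.
conn="21000024ff391bbf,1000000000000"
wwpn=${conn%,*}
lun_hex=${conn#*,}
# Left-pad to 16 hex digits, then take the top 4 (the first-level LUN).
while [ ${#lun_hex} -lt 16 ]; do lun_hex=0$lun_hex; done
lun=$(printf '%d' "0x$(printf '%.4s' "$lun_hex")")
echo "wwpn=$wwpn lun=$lun"   # wwpn=21000024ff391bbf lun=1
```

The decoded LUN number should match the host LUN ID set in the FlashArray GUI/CLI.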

Run the following command to check disk attributes set by the Pure ODM definition:

# lsattr -El hdisk2
PCM                PCM/friend/Pure      Path Control Module               False 
PR_key_value       none                 Reserve Key                       True 
algorithm          round_robin          Algorithm                         True 
clr_q              no                   Device CLEARS its Queue on error  True 
hcheck_cmd         inquiry              Health Check Command              True 
hcheck_interval    10                   Health Check Interval             True 
hcheck_mode        nonactive            Health Check Mode                 True 
location                                Location Label                    True 
lun_id             0x1000000000000      Logical Unit Number ID            False 
lun_reset_spt      yes                  SCSI LUN reset                    True 
max_transfer       0x400000             Maximum Transfer Size             True 
node_name          0x20000024ff391bbf   Node Name                         False 
pvid               none                 Physical Volume ID                False 
q_err              yes                  Use QERR bit                      False 
q_type             simple               Queue TYPE                        True 
queue_depth*       256                  Queue DEPTH                       True 
reassign_to        120                  REASSIGN time out                 True 
reserve_policy     no_reserve           Reserve Policy                    True 
rw_timeout         60                   READ/WRITE time out               True 
scsi_id            0x10f00              SCSI ID                           False 
start_timeout      60                   START UNIT time out               True 
ww_name            0x21000024ff391bbf   FC World Wide Name                False

* NOTE: The 256 queue depth setting in the ODM should be treated as a starting point and may need to be adjusted for the host environment.

Note: reserve_policy

If you want to set the reserve_policy to "PR_shared" or "PR_exclusive", you will first need to set a PR_key_value. The value must be unique per host; for example, you can set it to 0x1.

NOTE: Our Purity 2.x implementation of AIX support does not include auto contingent allegiance (ACA), which is required by AIX. Lacking this support, AIX will not allow a queue depth greater than 1 for Pure LUNs. ACA is supported starting in Purity 3.x.


IBM VIOS

There are several possible combinations of using VIOS with Pure Storage, and the configurations differ depending on which one you are using.

VIOS Option #1: Pure FlashArray presented only to the VIO Server


Install the VIO server with a single SAN path (if booting the VIO server from SAN) and, when installation is complete, install the Pure ODM definition on the VIO server before adding the remaining paths.

Present Pure LUNs directly to the VIO server, create storage pools as needed, and create virtual disks inside these storage pools to boot the LPARs from.

The Pure Storage ODM definition is not required inside the LPARs.

VIOS Option #2: Physically Assign HBA to the LPAR


In this situation, the VIO server assigns a physical HBA to an LPAR. In this case the ODM definition should be installed in the LPAR AIX system.

If installing a new system, it's recommended to complete the install with just a single path to the storage presented, and once install is complete, install the Pure Storage ODM definition onto the LPAR and add the remaining paths.

Set the max_xfer_size to the AIX maximum of 4MB for the fcs0 device, and repeat this for all fcs devices. There is no need to set this on the VIO server, because the HBA hardware is assigned only to the LPAR.

VIOS Option #3: LPARs use Virtual FC Adapters to Talk to Pure (NPIV)


If the VIO server itself uses volumes from the Pure Storage array, complete the ODM setup as described in VIOS Option #1. This step is not required if the VIO server will not handle Pure Storage array LUNs.

For the VIO client LPARs that will be using virtual Fibre Channel adapters, follow all the recommendations in VIOS Option #2.

If setting the larger 4MB transfer size, it is critical to set it on the VIO server first: a larger max_xfer_size value on the LPAR's virtual FC adapter than on the VIO server's physical adapter could render the LPAR unable to boot.

Set the transfer size to 4MB on the VIO server at this time:

# chdev -l fcs0 -a max_xfer_size=0x400000 -P
fcs0 changed

This change requires a reboot of the VIO server, or disabling and re-enabling the fcs interface, for the new value to take effect.