
ESXi Host Configuration

This section reviews the recommended configuration of ESXi hosts for use with the Pure Storage FlashArray.

VMware Native Multipathing Plugin (NMP) Configuration

VMware offers a Native Multipathing Plugin (NMP) layer in vSphere through Storage Array Type Plugins (SATP) and Path Selection Policies (PSP) as part of the VMware APIs for Pluggable Storage Architecture (PSA). The SATP contains the array-specific knowledge needed to aggregate I/Os across multiple channels and to issue failover commands when a path fails. The Path Selection Policy can be “Fixed”, “Most Recently Used”, or “Round Robin”.

Round Robin Path Selection Policy

To best leverage the active-active nature of the front end of the FlashArray, Pure Storage requires that you configure FlashArray volumes to use the Round Robin Path Selection Policy. The Round Robin PSP rotates between all discovered paths for a given volume which allows ESXi (and therefore the virtual machines running on the volume) to maximize the possible performance by using all available resources (HBAs, target ports, etc.).

BEST PRACTICE: Use the Round Robin Path Selection Policy for FlashArray volumes.

Tuning Round Robin - the I/O Operations Limit

The Round Robin Path Selection Policy allows for additional tuning of its path-switching behavior in the form of a setting called the I/O Operations Limit. The I/O Operations Limit (sometimes called the “IOPS” value) dictates how often ESXi switches logical paths for a given device. By default, when Round Robin is enabled on a device, ESXi will switch to a new logical path every 1,000 I/Os. In other words, ESXi will choose a logical path, and start issuing all I/Os for that device down that path. Once it has issued 1,000 I/Os for that device, down that path, it will switch to a new logical path and so on.

Pure Storage recommends tuning this value down to the minimum of 1. This causes ESXi to change logical paths after every single I/O, instead of after every 1,000.

This recommendation is made for a few reasons:

  1. Performance. The reason most often cited for changing this value is performance. While changing it from 1,000 to 1 can improve performance in certain cases (especially with iSCSI), the impact is not usually profound (generally a single-digit percentage increase), and it will not, on its own, solve a major performance problem.
  2. Path Failover Time. Testing has shown that ESXi fails logical paths much more quickly when this value is set to the minimum of 1. During a physical failure in the storage environment (loss of an HBA, switch, cable, port, or controller), ESXi will, after a certain period of time, fail any logical path that relies on the failed physical hardware and stop attempting to use it for a given volume. This failure does not always happen immediately. When the I/O Operations Limit is set to the default of 1,000, path failover time can sometimes be in the tens of seconds, which can lead to noticeable disruption in performance during the failure. When this value is set to the minimum of 1, path failover generally decreases to under ten seconds. This greatly reduces the impact of a physical failure in the storage environment and provides greater performance resiliency and reliability.
  3. FlashArray Controller I/O Balance. When Purity is upgraded on a FlashArray, the high-level process is: upgrade Purity on one controller, reboot it, wait for it to come back up, then upgrade and reboot the other controller. Because of the reboots, half of the FlashArray front-end ports go away twice during the process, so it is important to ensure that all hosts are actively using both controllers prior to the upgrade. One method used to confirm this is to check the I/O balance from each host across both controllers. When volumes are configured to use Most Recently Used, an imbalance of 100% is usually observed (ESXi tends to select paths that lead to the same front-end port for all devices), which means additional troubleshooting to make sure the host can survive a controller reboot. When Round Robin is enabled with the default I/O Operations Limit, the port imbalance improves to about a 20-30% difference. When the I/O Operations Limit is set to 1, the imbalance is less than 1%. This gives Pure Storage and the end user confidence that all hosts are properly using all available front-end ports.

For the three reasons above, Pure Storage highly recommends setting the I/O Operations Limit to 1.

BEST PRACTICE: Change the Round Robin I/O Operations Limit from 1,000 to 1 for FlashArray volumes.

ESXi 6.0 Express Patch 5 or 6.5 Update 1 and Later

Starting with ESXi 6.0 Express Patch 5 (build 5572656) and ESXi 6.5 Update 1 (build 5969303), Round Robin with an I/O Operations Limit of 1 is the default configuration for all Pure Storage FlashArray devices (iSCSI and Fibre Channel) and no configuration is required. Refer to the respective release notes for details.

A new SATP rule, included by VMware by default, was built specifically for the FlashArray according to Pure Storage’s best practices. Inside of ESXi you will see the following system rule:

VMW_SATP_ALUA  PURE FlashArray   system  VMW_PSP_RR   iops=1 
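To confirm that this rule is present on a given host, it can be listed with PowerCLI; the following is a minimal sketch using the esxcli V2 interface (the host name is a placeholder):

# Sketch: list SATP rules on one host and filter for the built-in FlashArray rule
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2
$esxcli.storage.nmp.satp.rule.list.Invoke() | Where-Object { $_.Vendor -eq "PURE" }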


Prior to ESXi 6.0 Express Patch 5 or 6.5 Update 1

The Pure Storage FlashArray is seen by VMware ESXi as an ALUA-compliant array. The FlashArray advertises all paths as optimized for doing I/Os (active/optimized). When a FlashArray is connected to an ESXi farm and storage is provisioned, FlashArray volumes get claimed by the “VMW_SATP_ALUA” SATP.

The default pathing policy for the ALUA SATP is the Most Recently Used (MRU) Path Selection Policy. With this PSP, ESXi will use the first identified path for a volume until that path goes away or it is manually told to use a different path. For a variety of reasons, most notably performance, this is not the ideal configuration for FlashArray volumes.

Note that Round Robin is the default in 6.0 Patch 5 and later in the 6.0 code branch, and in 6.5 Update 1 and later in the 6.5 code branch.

Configuring Round Robin and the I/O Operations Limit

If you are running a release earlier than ESXi 6.0 Express Patch 5 or 6.5 Update 1, there are a variety of ways to configure Round Robin and the I/O Operations Limit. These settings can be applied on a per-device basis as each new volume is added, but that is not a particularly good option: it must be repeated for every new volume on every host, which is easy to forget and leaves a large margin for error.

The recommended option for configuring Round Robin and the correct I/O Operations Limit is to create a rule that will cause any new FlashArray device that is added in the future to that host to automatically get the Round Robin PSP and an I/O Operation Limit value of 1.

The following command creates a rule that achieves both of these for only Pure Storage FlashArray devices:

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1"

This must be repeated for each ESXi host.

This can also be accomplished through PowerCLI. Once connected to a vCenter Server, this script will iterate through all of the hosts in that vCenter and create a default rule that sets Round Robin for all Pure Storage FlashArray devices with an I/O Operation Limit of 1:

$creds = Get-Credential
Connect-VIServer -Server <vCenter> -Credential $creds
$hosts = Get-VMHost
foreach ($esx in $hosts)
{
    # Build the SATP rule arguments via the esxcli V2 interface
    $esxcli = Get-EsxCli -VMHost $esx -V2
    $satpArgs = $esxcli.storage.nmp.satp.rule.add.CreateArgs()
    $satpArgs.description = "Pure Storage FlashArray SATP Rule"
    $satpArgs.model = "FlashArray"
    $satpArgs.vendor = "PURE"
    $satpArgs.satp = "VMW_SATP_ALUA"
    $satpArgs.psp = "VMW_PSP_RR"
    $satpArgs.pspoption = "iops=1"
    # Add the rule to the host
    $esxcli.storage.nmp.satp.rule.add.Invoke($satpArgs)
}

Furthermore, this can be configured using vSphere Host Profiles.


It is important to note that existing, previously presented devices will need to be manually set to Round Robin and an I/O Operation Limit of 1. Optionally, the ESXi host can be rebooted so that it can inherit the multipathing configuration set forth by the new rule.

For setting a new I/O Operation Limit on an existing device, see Appendix I: Per-Device NMP Configuration.
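For quick reference, the following is a minimal per-device sketch using the esxcli V2 interface from PowerCLI; the host name and device NAA are placeholders, and Appendix I remains the full procedure:

# Sketch: set Round Robin and an I/O Operations Limit of 1 on one existing device
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2
$esxcli.storage.nmp.device.set.Invoke(@{device = "naa.<device NAA>"; psp = "VMW_PSP_RR"})
$esxcli.storage.nmp.psp.roundrobin.deviceconfig.set.Invoke(@{device = "naa.<device NAA>"; type = "iops"; iops = 1})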

BEST PRACTICE: Use a SATP rule to configure multipathing for FlashArray volumes.

Note that I/O Operations of 1 is the default in 6.0 Patch 5 and later in the 6.0 code branch, and in 6.5 Update 1 and later in the 6.5 code branch. 

Verifying Connectivity

It is important to verify proper connectivity prior to implementing production workloads on a host or volume. This consists of a few steps:

  1. Verifying proper multipathing settings in ESXi.
  2. Verifying the proper number of paths.
  3. Verifying I/O balance and redundancy on the FlashArray.

The Path Selection Policy and number of paths can be verified easily inside of the vSphere Web Client.



The vSphere Web Client will report the path selection policy and the number of logical paths. The number of logical paths will depend on the number of HBAs, zoning, and the number of ports cabled on the FlashArray.

The I/O Operations Limit cannot be checked from the vSphere Web Client—it can only be verified or altered via command line utilities. The following command can check a particular device for the PSP and I/O Operations Limit:

esxcli storage nmp device list -d naa.<device NAA>


Please remember that each of these settings is a per-host setting, so while a volume might be configured properly on one host, it may not be correct on another. A PowerCLI script, such as the sketch below, can help verify this at scale in a simple way.
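The following is a minimal sketch of such a script; it assumes an existing Connect-VIServer session and the esxcli V2 interface, and it identifies FlashArray devices by their "PURE" display name:

# Sketch: report the PSP and Round Robin configuration for FlashArray devices on every host
foreach ($esx in Get-VMHost) {
    $esxcli = Get-EsxCli -VMHost $esx -V2
    $esxcli.storage.nmp.device.list.Invoke() |
        Where-Object { $_.DeviceDisplayName -like "PURE*" } |
        ForEach-Object {
            [pscustomobject]@{
                Host      = $esx.Name
                Device    = $_.Device
                PSP       = $_.PathSelectionPolicy
                PSPConfig = $_.PathSelectionPolicyDeviceConfig   # contains the iops= value for Round Robin
            }
        }
}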

It is also possible to check multipathing from the FlashArray.

A CLI command exists to monitor I/O balance coming into the array:

purehost monitor --balance --interval <how long to sample> --repeat <how many iterations>

The command will report a few things:

  1. The host name.
  2. The individual initiators from the host. If an initiator is logged into more than one FlashArray port, it is reported once per port; if it is not logged in at all, it does not appear.
  3. The FlashArray port that the initiator is logged into.
  4. The number of I/Os that came into that port from that initiator over the sampled interval.
  5. The relative percentage of I/Os for that initiator as compared to the maximum.

The balance command will count the I/Os that came down the particular initiator during the sampled time period, and it will do that for all initiator/target relationships for that host. Whichever relationship/path has the most I/Os will be designated as 100%. The rest of the paths will be then denoted as a percentage of that number. So if a host has two paths, and the first path has 1,000 I/Os and the second path has 800, the first path will be 100% and the second will be 80%.

A well-balanced host should be within a few percentage points on each path. A difference of more than roughly 15% may warrant investigation.

The GUI will also report on host connectivity in general, based on initiator logins.


This report should list every host as redundant, meaning that it is connected to each controller. If it reports anything else, investigate zoning and/or host configuration to correct this. For a detailed explanation of the various reported states, refer to the FlashArray User Guide, which can be found directly in your GUI.



Disk.DiskMaxIOSize

The ESXi host setting Disk.DiskMaxIOSize controls the largest I/O size that ESXi will allow to be sent from ESXi to an underlying storage device. By default this is 32 MB. If an I/O is larger, ESXi will split it into segments under the configured limit.

This setting needs to be reduced in two situations:

  1. If a virtual machine is using EFI (Extensible Firmware Interface) instead of BIOS
  2. If you are using vSphere Replication

In these situations it is necessary to reduce the ESXi parameter Disk.DiskMaxIOSize from the default of 32 MB (32,768 KB) down to 4 MB (4,096 KB). If this is not configured on ESXi hosts running EFI-enabled VMs, the virtual machines will fail to boot properly. If it is not changed on hosts running VMs replicated by vSphere Replication, replication will fail.


This should be set on every ESXi host that EFI-enabled virtual machines have access to, in order to support vMotion. If EFI is not used, this can remain at the default. There is no known performance effect from changing this value. For more detail on this change, refer to the VMware KB article:

KB: Windows virtual machines using EFI fails to start

BEST PRACTICE: Reduce Disk.DiskMaxIOSize from the default of 32 MB down to 4 MB when EFI-enabled VMs or vSphere Replication are present. A lower value is also acceptable.
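The setting can also be changed at scale with PowerCLI; the following is a minimal sketch assuming an existing Connect-VIServer session to the vCenter Server:

# Sketch: reduce Disk.DiskMaxIOSize to 4 MB (4,096 KB) on every host in the connected vCenter
Get-VMHost | Get-AdvancedSetting -Name "Disk.DiskMaxIOSize" |
    Set-AdvancedSetting -Value 4096 -Confirm:$false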

VAAI Configuration

The VMware API for Array Integration (VAAI) primitives offer a way to offload and accelerate certain operations in a VMware environment.

Pure Storage requires that all VAAI features be enabled on every ESXi host that uses FlashArray storage. Disabling VAAI features can greatly reduce the efficiency and performance of FlashArray storage in ESXi environments.

All VAAI features are enabled by default (set to 1) in ESXi 5.x and later, so no action is typically required. The settings can, however, be verified via the vSphere Web Client or CLI tools (a PowerCLI sketch follows below):

  1. WRITE SAME—DataMover.HardwareAcceleratedInit
  2. XCOPY—DataMover.HardwareAcceleratedMove
  3. ATOMIC TEST & SET—VMFS3.HardwareAcceleratedLocking


BEST PRACTICE: Keep VAAI enabled: DataMover.HardwareAcceleratedInit, DataMover.HardwareAcceleratedMove, and VMFS3.HardwareAcceleratedLocking.
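These values can be checked across all hosts with PowerCLI; the following is a minimal sketch assuming an existing Connect-VIServer session:

# Sketch: report the three VAAI settings for every host (a value of 1 means enabled)
$vaaiSettings = "DataMover.HardwareAcceleratedInit",
                "DataMover.HardwareAcceleratedMove",
                "VMFS3.HardwareAcceleratedLocking"
foreach ($esx in Get-VMHost) {
    foreach ($name in $vaaiSettings) {
        $setting = Get-AdvancedSetting -Entity $esx -Name $name
        "{0}: {1} = {2}" -f $esx.Name, $setting.Name, $setting.Value
    }
}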

In order to provide a more efficient heart-beating mechanism for datastores, VMware introduced a host-wide setting called /VMFS3/UseATSForHBOnVMFS5. In VMware’s own words:

“A change in the VMFS heartbeat update method was introduced in ESXi 5.5 Update 2, to help optimize the VMFS heartbeat process. Whereas the legacy method involves plain SCSI reads and writes with the VMware ESXi kernel handling validation, the new method offloads the validation step to the storage system.”


Pure Storage recommends keeping this value enabled whenever possible. That said, it is a host-wide setting, and it can negatively affect storage arrays from other vendors. Read the VMware KB article here:

ESXi host loses connectivity to a VMFS3 and VMFS5 datastore

The Pure Storage FlashArray is NOT susceptible to this issue, but if an affected array from another vendor is present, it might be necessary to turn this setting off. In that case, Pure Storage supports disabling this value and reverting to the traditional heart-beating mechanism.


BEST PRACTICE: Keep VMFS3.UseATSForHBOnVMFS5 enabled; this is preferred. If another vendor’s array is present and requires it to be disabled, Pure Storage supports disabling it.
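The current value can be checked, and re-enabled if needed, with PowerCLI; a minimal sketch (the host name is a placeholder):

# Sketch: verify that ATS heartbeating is enabled on a host and re-enable it if it is not
$setting = Get-AdvancedSetting -Entity (Get-VMHost "esx01.example.com") -Name "VMFS3.UseATSForHBOnVMFS5"
if ($setting.Value -ne 1) {
    $setting | Set-AdvancedSetting -Value 1 -Confirm:$false
}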

iSCSI Configuration

As with any other array that supports iSCSI, Pure Storage recommends the following changes to an iSCSI-based vSphere environment for the best performance.

For a detailed walkthrough of setting up iSCSI on VMware ESXi and on the FlashArray, refer to VMware’s white paper on iSCSI best practices; it is required reading for any VMware/iSCSI user.

Set Login Timeout to a Larger Value

For example, to set the Login Timeout value to 30 seconds, use steps similar to the following:

  1. Log in to the vSphere Web Client and select the host under Hosts and Clusters.
  2. Navigate to the Manage tab.
  3. Select the Storage option.
  4. Under Storage Adapters, select the iSCSI vmhba to be modified.
  5. Select Advanced and change the Login Timeout parameter. This can be done on the iSCSI adapter itself or on a specific target.

The default Login Timeout value is 5 seconds and the maximum value is 60 seconds.

BEST PRACTICE: Set iSCSI Login Timeout for FlashArray targets to 30 seconds. A higher value is supported but not necessary.
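The same change can be made from PowerCLI via the esxcli V2 interface; the following is a minimal sketch that targets the software iSCSI adapter (the host name is a placeholder):

# Sketch: set the iSCSI LoginTimeout to 30 seconds on the software iSCSI adapter of one host
$esx    = Get-VMHost "esx01.example.com"
$vmhba  = (Get-VMHostHba -VMHost $esx -Type IScsi | Where-Object { $_.Model -eq "iSCSI Software Adapter" }).Device
$esxcli = Get-EsxCli -VMHost $esx -V2
$esxcli.iscsi.adapter.param.set.Invoke(@{adapter = $vmhba; key = "LoginTimeout"; value = "30"})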

Disable DelayedAck

DelayedAck is an advanced iSCSI option that allows or disallows an iSCSI initiator to delay acknowledgement of received data packets.

Disabling DelayedAck in ESXi 5.x:

  1. Log in to the vSphere Web Client and select the host under Hosts and Clusters.
  2. Navigate to the Manage tab.
  3. Select the Storage option.
  4. Under Storage Adapters, select the iSCSI vmhba to be modified.

Navigate to Advanced Options and modify the DelayedAck setting by using the option that best matches your requirements, as follows:

Option 1: Modify the DelayedAck setting on a particular discovery address (recommended) as follows:

  1. Select Targets.
  2. On a discovery address, select the Dynamic Discovery tab.
  3. Select the iSCSI server.
  4. Click Advanced.
  5. Change DelayedAck to false.

Option 2: Modify the DelayedAck setting on a specific target as follows:

  1. Select Targets.
  2. Select the Static Discovery tab.
  3. Select the iSCSI server and click Advanced.
  4. Change DelayedAck to false.

Option 3: Modify the DelayedAck setting globally for the iSCSI adapter as follows:

  1. Select the Advanced Options tab and click Advanced.
  2. Change DelayedAck to false.

Pure Storage highly recommends disabling DelayedAck, but it is not absolutely required. In highly congested networks, if packets are lost or take too long to be acknowledged because of that congestion, performance can drop. If DelayedAck is enabled, not every packet is acknowledged immediately (one acknowledgement is sent per batch of packets), so far more retransmission can occur, further exacerbating congestion. This can lead to continually decreasing performance until the congestion clears. Because DelayedAck can contribute to this behavior, disabling it greatly reduces the effect of congested networks and packet retransmission.

Enabling jumbo frames can make this worse, since retransmitted packets are far larger. If jumbo frames are enabled, it is strongly recommended to disable DelayedAck. See the VMware KB on DelayedAck for more information.

BEST PRACTICE: Disable DelayedAck for FlashArray iSCSI targets.
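Option 3 (disabling DelayedAck globally on the software iSCSI adapter) can also be scripted; the following is a minimal PowerCLI sketch using the vSphere storage-system API (the host name is a placeholder):

# Sketch: disable DelayedAck on the software iSCSI adapter of one host
$esx     = Get-VMHost "esx01.example.com"
$hba     = Get-VMHostHba -VMHost $esx -Type IScsi | Where-Object { $_.Model -eq "iSCSI Software Adapter" }
$storSys = Get-View $esx.ExtensionData.ConfigManager.StorageSystem
$param   = New-Object VMware.Vim.HostInternetScsiHbaParamValue
$param.Key   = "DelayedAck"
$param.Value = $false
$storSys.UpdateInternetScsiAdvancedOptions($hba.Device, $null, @($param))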

iSCSI Port Binding 

For software iSCSI initiators, the default behavior without additional configuration is for ESXi to use its routing tables to identify a path to the configured iSCSI targets. Without a solid understanding of the network configuration and routing behavior, this can lead to unpredictable pathing and/or path unavailability during a hardware failure. To achieve predictable and reliable path selection and failover, it is necessary to configure iSCSI port binding (iSCSI multipathing).

Configuration and detailed discussion are outside the scope of this document, but it is recommended to read the VMware documentation on iSCSI port binding, which describes this and other concepts in depth.

BEST PRACTICE: Use Port Binding for ESXi software iSCSI adapters when possible.

Note that ESXi 6.5 has expanded support for port binding and features such as iSCSI routing (though the use of iSCSI routing is not usually recommended) and multiple subnets. Refer to ESXi 6.5 release notes for more information. 
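Once compliant VMkernel ports exist (each with a single active uplink, as described in the VMware documentation), the binding step itself is small; the following is a minimal PowerCLI sketch using the esxcli V2 interface (host and vmk names are placeholders):

# Sketch: bind two VMkernel ports to the software iSCSI adapter of one host
$esx    = Get-VMHost "esx01.example.com"
$vmhba  = (Get-VMHostHba -VMHost $esx -Type IScsi | Where-Object { $_.Model -eq "iSCSI Software Adapter" }).Device
$esxcli = Get-EsxCli -VMHost $esx -V2
foreach ($vmk in "vmk1", "vmk2") {
    $esxcli.iscsi.networkportal.add.Invoke(@{adapter = $vmhba; nic = $vmk})
}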

Jumbo Frames

In some iSCSI environments it is necessary to enable jumbo frames to match the network configuration between the host and the FlashArray. Enabling jumbo frames is a cross-environment change, so careful coordination is required to ensure proper configuration. It is important to work with your networking team and Pure Storage representatives when enabling jumbo frames. Please note that this is not a requirement for iSCSI use on the Pure Storage FlashArray—in general, Pure Storage recommends leaving the MTU at the default setting.

That being said, altering the MTU is fully supported and is at the discretion of the user.

  1. Configure jumbo frames on the FlashArray iSCSI ports.
  2. Configure jumbo frames on the physical network switch/infrastructure for each port, using the relevant switch CLI or GUI.
  3. Configure jumbo frames on the ESXi host vSwitch and VMkernel port:
    1. Browse to a host in the vSphere Web Client navigator.
    2. Click the Manage tab and select Networking > Virtual Switches.
    3. Select the switch from the vSwitch list.
    4. Click the name of the VMkernel network adapter.
    5. Click the pencil icon to edit.
    6. Click NIC settings and set the MTU to your desired value.
    7. Click OK.
    8. Click the pencil icon at the top to edit the vSwitch itself.
    9. Set the MTU to your desired value.
    10. Click OK.
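The ESXi portion of these steps can also be performed with PowerCLI; the following is a minimal sketch (host, vSwitch, and VMkernel port names are placeholders):

# Sketch: set an MTU of 9000 on a standard vSwitch and its iSCSI VMkernel port
$esx = Get-VMHost "esx01.example.com"
Get-VirtualSwitch -VMHost $esx -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMHost $esx -VMKernel -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false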

Once jumbo frames are configured, verify end-to-end jumbo frame compatibility. To do so, ping an address on the storage network with vmkping, disabling fragmentation (-d) and using a payload size that accounts for the 28 bytes of ICMP/IP header overhead:

vmkping -d -s 8972 <ip address of Pure Storage iSCSI port>

If the ping operation does not return successfully, then jumbo frames are not properly configured in ESXi, the networking devices, and/or the FlashArray port.

Challenge-Handshake Authentication Protocol (CHAP)

iSCSI CHAP is supported on the FlashArray for unidirectional or bidirectional authentication. Enabling CHAP is optional and at the discretion of the user. Refer to the Pure Storage Support documentation for a detailed walkthrough.


Please note that iSCSI CHAP is not currently supported with dynamic iSCSI targets on the FlashArray. If CHAP is going to be used, please configure your iSCSI FlashArray targets as static only. Dynamic target support for CHAP will be added in a future Purity release.