Pure Technical Services

Web Guide: Implementing vSphere Virtual Volumes with FlashArray



VMware’s vSphere Virtual Volume (vVol) paradigm, introduced in vSphere version 6.0, is a storage technology that provides policy-based, granular storage configuration and control of virtual machines (VMs). Through API-based interaction with an underlying array, VMware administrators can maintain storage configuration compliance using only native VMware interfaces.

Version 5.0.0 of Purity//FA software introduced support for FlashArray-based vSphere Virtual Volumes (vVols). The accompanying FlashArray Plugin for the vSphere Web Client (the Plugin) makes it possible to create, manage, and use vVols that are based on FlashArray volumes from within the Web Client. This report describes the architecture, implementation, and best practices for using FlashArray-based vVols.


The primary audiences for this guide are VMware administrators, FlashArray administrators, and more generally, anyone interested in the architecture, implementation, administration, and use of FlashArray-based vVols.

Throughout this report, the terms FlashArray administrator, array administrator, and (in the context of array administration) administrator refer to both the storage and array administration roles for FlashArrays.

For further questions and requests for assistance, customers can contact Pure Storage Technical Support at

vVols Best Practice Summary

The following is a summary of general best practices for FlashArray-based vVols. For more detailed information on each topic, refer to the body of this report.


  • Configure NTP on every ESXi host, vCenter, and FlashArray involved in vVol management
  • Run vCenter Version 6.5 and ESXi Version 6.5 or newer throughout the VMware environment, including at replication target sites
    • If vSphere 6.7 is deployed, ensure that you are running either vSphere 6.7 U1 or 6.7 U3+
  • Register each FlashArray’s two VASA providers with vCenter
  • Ensure that vCenter Server and ESXi Host management networks have TCP port 8084 access to FlashArray controller management ports
  • Configure host and host groups with appropriate initiators on the FlashArray
  • Always use VMware tools to create, change, and provision FlashArray-based vVols. (Resizing or destroying FlashArray-based vVols directly requires manual clean-up within VMware.)
  • (Exception: FlashArray tools can be used to create snapshots and copies of FlashArray-based vVols.)
  • If EFI-boot virtual machines are in use, change Disk.MaxIOSize on the ESXi server(s) that host them from the 32 MB default to 4 MB
  • Configure VMware NMP Round Robin scheduling and set the I/O Operations Limit to 1
    • (These are the defaults in ESXi Version 6.5 Update 1 and newer versions.)
  • Never destroy the 'pure-protocol-endpoint'; this namespace must exist for the vVols management path to operate correctly.
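As an illustrative sketch, the Disk.MaxIOSize and Round Robin recommendations above can be applied from an ESXi shell roughly as follows. (These are commonly documented esxcli invocations, shown here as an example only; verify option names and values against your ESXi version before use.)

```shell
# Reduce the maximum I/O size from the 32 MB default to 4 MB
# (the DiskMaxIOSize value is expressed in KB) for hosts
# running EFI-boot virtual machines.
esxcli system settings advanced set -o /Disk/DiskMaxIOSize -i 4096

# Add a SATP claim rule so newly discovered FlashArray devices
# default to Round Robin with an I/O Operations Limit of 1
# (already the default behavior on ESXi 6.5 Update 1 and later).
esxcli storage nmp satp rule add --satp="VMW_SATP_ALUA" --vendor="PURE" \
    --model="FlashArray" --psp="VMW_PSP_RR" --psp-option="iops=1"
```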


  • Run ESXi and vCenter Version 6.7 Update 3 or later.
  • Run Purity//FA 5.3.6 or higher on the FlashArray.
  • Connect the Protocol Endpoint to host groups, not to individual hosts.
  • In most cases only a single PE is needed; if more than one PE is created on the FlashArray, connect only a single PE to a given host group, not multiple.
  • When registering the VASA Provider, use a local FlashArray user.
  • Do not run vCenter Servers on vVols.
  • Use virtual machine hardware version 11 or later.
  • Configure snapshot policies for all config vVols (VM home directories).
  • Present a protocol endpoint to the ESXi host group before mounting a vVol datastore, or use the Pure Storage vSphere Plugin to automate the procedure.



This report uses the following short forms of the names of frequently mentioned entities.

Inventory pane
    Common synonym for the Web Client Navigator pane.

Plugin
    The FlashArray vSphere Web Client Plugin: a plugin component for the vSphere Web Client that works in conjunction with Purity//FA Version 5.0.0 and later versions to enable the Web Client to manage FlashArray-based vVols.

PE (Protocol Endpoint)
    VMware term for the T10 administrative logical unit (ALU) concept.

VASA (VMware APIs for Storage Awareness)
    APIs for storage arrays that enable management from within VMware components.

VM (Virtual Machine)
    In this report, a virtual machine instantiated by VMware ESXi and running a guest operating system.

vVol (Virtual Volume)
    A VMware virtual storage paradigm that supports finer-grained control of virtual machine storage, and enables integration with advanced features offered by storage arrays.

Web Client
    The VMware vSphere Web Client: the web-based administration component of VMware.



Introduction to vVols

Historically, the datastores that have provided storage for VMware virtual machines (VMs) have been created as follows:

  1. A VMware administrator requests storage from a storage administrator
  2. The storage administrator creates a disk-like virtual device on an array and provisions it to the ESXi host environment for access via iSCSI or Fibre Channel
  3. The VMware administrator rescans ESXi host I/O interconnects to locate the new device and formats it with VMware’s Virtual Machine File System (VMFS) to create a datastore.
  4. The VMware administrator creates a VM and one or more virtual disks, each instantiated as a file in the datastore’s file system and presented to the VM as a disk-like block storage device.

Virtual storage devices instantiated by storage arrays go by multiple names. Among server users and administrators, LUN (numbered logical unit) is popular. The FlashArray term for a virtual device is volume. ESXi hosts and guests address SCSI commands to LUNs, which are usually assigned to volumes automatically.

While plugins can automate datastore creation to some extent, they have some fundamental limitations:

  • Every time additional capacity is required, VMware and storage administrators must coordinate their activities
  • Certain widely-used storage array features such as replication are implemented at the datastore level of granularity. Enabling them affects all VMs that use a datastore
  • VMware administrators cannot easily verify that required storage features are properly configured and enabled.

VMware designed vVols to mitigate these limitations. vVol benefits include:

Virtual Disk Granularity

Each virtual disk is a separate volume on the array with its own unique properties

Automatic Provisioning

When a VMware administrator requests a new virtual disk for a VM, VMware automatically directs the array to create a volume and present it to the VM. Similarly, when a VMware administrator resizes or deletes a virtual disk, VMware directs the array to resize or remove the volume.

Array-level VM Visibility

Because arrays recognize both VMs and their virtual disks, they can manage and report on performance and space utilization with both VM and individual virtual disk granularity.

Storage Policy Based Management

With visibility to individual virtual disks, arrays can take snapshots and replicate volumes at the precise granularity required. VMware can discover an array’s virtual disks and allow VMware administrators to manage each vVol’s capabilities either ad hoc or by specifying policies. If a storage administrator overrides a vVol capability configured by a VMware administrator, the VMware administrator is alerted to the non-compliance.

VMware designed the vVol architecture to mitigate the limitations of the VMFS-based storage paradigm while retaining the benefits, and merging them with the remaining advantages of Raw Device Mappings.

VMware’s vVol architecture consists of the following components:

Management Plane (section titled The FlashArray VASA Provider)

Implements the APIs that VMware uses to manage the array. Each supported array requires a vSphere API for Storage Awareness (VASA) provider, implemented by the array vendor.

Data Plane (section titled vVol Binding)

Provisions vVols to ESXi hosts

Policy Plane (section titled Storage Policy Based Management)

Simplifies and automates the creation and configuration of vVols.

Appendix I: Installing and Upgrading the Web Client Plugin describes Plugin installation and registering arrays with the Plugin.



The FlashArray VASA Provider

VMware APIs for Storage Awareness (VASA) is a VMware interface for out-of-band communication between VMware ESXi and vCenter and storage arrays. Arrays’ VASA providers are services registered with vCenter. Storage vendors implement providers for their arrays, either as VMs or embedded in the arrays. As of vSphere Version 6.5, VMware has introduced three versions of VASA:

Version 1 (Introduced in vSphere Version 5.0)
    Provides basic configuration information for storage volumes hosting VMFS datastores, as well as injection of some basic alerts into vCenter

Version 2 (Introduced in vSphere Version 6.0)
    First version to support vVols

Version 3 (Introduced in vSphere Version 6.5)
    Added support for array-based replication of vVols and Oracle RAC.

FlashArrays support VASA Version 3.

Because the FlashArray vVol implementation uses VASA Version 3, the VMware environment must be running vSphere Version 6.5 or a newer version in both ESXi hosts and vCenter. Pure Storage recommends vSphere Version 6.5 Update 1.

Appendix I: Installing and Upgrading the Web Client Plugin contains instructions for verifying that a Plugin version that supports vVols is installed in vCenter, and for installing or upgrading to a version with vVol support.

FlashArray vVol support is included in Purity//FA Version 5.0. The Purity//FA upgrade process automatically installs and configures a VASA provider in each controller; there is no separate installation or configuration. To use FlashArray-based vVols, however, an array’s VASA providers must be registered with vCenter. Either the FlashArray Plugin for vSphere Web Client (the Plugin), the vSphere GUI, or API/CLI-based tools may be used to register VASA providers with vCenter. 

Recommendation: Create Local vVol/VASA Admin if the FlashArray is on Purity 5.1+

One of Pure Storage's recommendations is to register the controllers' VASA providers with a local user on the array. This helps prevent authentication issues in the event that the AD server is unreachable, the user with which the storage providers were registered is deleted or removed, or other unexpected connectivity issues occur with the AD server.

Users are able to create local Array Admin, Storage Admin, and Read Only users starting in Purity 5.1.0. Pure Storage recommends creating a local Array Admin to register the array's storage providers.

Here is an example of creating a local array admin to use when registering the storage provider.

From the FlashArray GUI, navigate to Settings and then Users.
Then click the triple dots in the Users box and select "Create User...".
Create an Array Admin user, for example named "vvol-admin", and give it a strong password.
After the user is created, it appears in the list with its role. It can now be used to register the storage providers.

Registering FlashArray VASA Providers with the Plugin

While the Plugin is not required for FlashArray-based vVols, it simplifies most administrative functions, including VASA provider registration.

From the Web Client Home screen, select Pure Storage from the dropdown menu to display the FlashArray pane Objects tab. Right-click the array whose VASA providers are to be registered and select Register Storage Provider from the dropdown menu to launch the Register Storage Provider wizard.

vSphereView 1: Register Storage Provider

Entering credentials for a FlashArray administrator registers the array's two VASA providers with all vCenters present in the vSphere Single Sign-On domain. The FlashArray logs all subsequent vVol operations from those vCenters under the user used to register the storage providers. Pure Storage recommends the use of a local user to register the storage providers; starting in Purity 5.1 and higher, a local storage admin can be created for this purpose. The steps to create a local storage admin are outlined above.

vSphereView 2: Register Storage Provider Wizard

Other Methods for Registering FlashArray VASA Providers

Alternatively, VMware administrators can use the Web Client, PowerCLI, and other CLI and API tools to register VASA providers. This section describes registration of FlashArray providers with the Web Client and with PowerCLI.

Prior to registration, use the FlashArray GUI to obtain the IP addresses of both controllers’ eth0 management ports.

Click Settings in the GUI navigation pane, and select the Network tab, (ArrayView 1) to display the array’s management port IP addresses (ArrayView 2).

ArrayView 1: FlashArray GUI Network Tab
ArrayView 2: FlashArray Management Port IP Addresses

VASA Registration with the Web Client

In the Web Client inventory pane Host and Clusters view, select the target vCenter. Select the Configure tab and click Storage Providers in the menu. Click the green + icon (vSphereView 3) to launch the New Storage Provider wizard (vSphereViews 4 and 5).

vSphereView 3: Web Client VASA Provider Registration


vSphereView 4: New Storage Provider Wizard (ct0)


vSphereView 5: New Storage Provider (ct1)

Enter the following information:

    A friendly name for the VASA provider. A best practice is to use names that make operational sense (for example, array name concatenated with controller number).

    The URL of the controller’s VASA provider in the form:
    https://<controllerIP>:8084. HTTPS (not HTTP) is required, the controller’s IP address must be specified (not its FQDN), and port 8084 is required

    Credentials for an administrator of the target array. The user name entered is associated with VASA operations in future audit logs.

Click OK, and repeat the procedure for the other controller (vSphereView 5).

Perform the procedure for each array to be registered.
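The registration inputs above follow a simple pattern: one friendly name and one URL per controller, with HTTPS, the controller IP address, and port 8084 all mandatory. As a hypothetical helper (the function name and the array-name-plus-controller naming convention are illustrative, not part of any Pure Storage tool), the two registration entries can be derived like this:

```python
def vasa_registration(array_name: str, controller_ips: dict) -> list:
    """Build (friendly name, URL) pairs for registering both controllers'
    VASA providers. HTTPS and port 8084 are required, and the controller's
    IP address (not its FQDN) must be used."""
    entries = []
    for ct, ip in sorted(controller_ips.items()):
        # Friendly name follows the suggested "array name + controller" pattern.
        entries.append((f"{array_name}-{ct}", f"https://{ip}:8084"))
    return entries

# Hypothetical array name and management IPs, for illustration only.
print(vasa_registration("myarray", {"ct0": "10.0.0.10", "ct1": "10.0.0.11"}))
```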

VASA Registration with PowerCLI

When a number of FlashArrays’ VASA providers are to be registered, using a PowerCLI script may be preferable. The VMware PowerCLI cmdlet called New-VasaProvider registers VASA providers with vCenter (vSphereView 6).

vSphereView 6: Use of the New-VasaProvider Cmdlet

The script in vSphereView 7 below uses both PowerCLI and the Pure Storage PowerShell SDK to register an array’s two VASA Providers. The script requires that both PowerCLI and the Pure Storage PowerShell SDK be installed.

$vccreds = Get-Credential
$facreds = Get-Credential
$vcenter = Read-Host "Enter your vCenter IP/FQDN"
$flasharray = Read-Host "Enter your FlashArray IP/FQDN"
Connect-VIServer -Server $vcenter -Credential $vccreds
$endpoint = New-PfaArray -EndPoint $flasharray -Credentials $facreds -IgnoreCertificateError
$mgmtIPs = Get-PfaNetworkInterfaces -Array $endpoint | Where-Object {$_.name -like "*eth0"}
$arrayname = Get-PfaArrayAttributes -Array $endpoint
$ctnum = 0
foreach ($mgmtIP in $mgmtIPs)
{
    New-VasaProvider -Name ("$($arrayname.array_name)-CT$($ctnum)") -Credential $facreds -Url ("https://$($mgmtIP.address):8084") -Force
    $ctnum++
}
Disconnect-VIServer -Server $vcenter -Confirm:$false
Disconnect-PfaArray -Array $endpoint

 vSphereView 7: PowerCLI Script for Registering a FlashArray's Two VASA Providers

Verifying VASA Provider Registration

To verify that VASA Provider registration succeeded, in the Web Client Host and Clusters view, click the target vCenter in the inventory pane, select the Configure tab, and locate the newly-registered providers in the Storage Providers table (vSphereView 9).

vSphereView 9: Verification of VASA Provider Registration

The table can be arranged either by storage provider or by array (Storage system) by clicking the Group by dropdown and selecting the desired ordering (vSphereView 8).

vSphereView 8: Select Storage Provider Grouping Order

Although both FlashArray controllers’ VASA providers are online, vCenter uses one provider at a time. The provider in use is marked Active; its companion is marked Standby, as in vSphereView 9.

Alternatively, the PowerCLI Get-VasaProvider cmdlet can be used to list registered VASA providers (vSphereView 10).

vSphereView 10: PowerCLI Get-VasaProvider Cmdlet Usage



Configuring Host Connectivity

For an ESXi host to access FlashArray storage, an array administrator must create a host object. A FlashArray host object (usually called host) is a list of the ESXi host’s initiator iSCSI Qualified Names (IQNs) or Fibre Channel Worldwide Names (WWNs). Arrays represent each ESXi host as one host object.

Similarly, arrays represent a VMware cluster as a host group, a collection of hosts with similar storage-related attributes. For example, an array would represent a cluster of four ESXi hosts as a host group containing four host objects, each representing an ESXi host. The FlashArray User Guide contains instructions for creating hosts and host groups.

To use the Plugin to create a FlashArray host group, in the Web Client’s Host and Clusters view inventory pane, right-click a cluster, select Pure Storage from the dropdown menu, and Add Host Group from the secondary dropdown to launch the Add FlashArray Host Group wizard (vSphereView 11).

vSphereView 11: Add FlashArray Host Group Wizard

Select iSCSI or Fibre Channel, (optionally) enter a friendly name for the host group, and click Create to create host objects and a host group to represent the cluster.

The Plugin can also configure the ESXi hosts’ iSCSI target addresses (not shown).
The Pure Storage VMware Best Practices Guide at and the blog series:  contain vSphere iSCSI target address assignment instructions and best practices.

Fibre Channel zoning must be completed before provisioning storage to hosts. Refer to switch vendor documentation for zoning instructions.


Protocol Endpoints

The scale and dynamic nature of vVols intrinsically changes VMware storage provisioning. To provide scale and flexibility for vVols, VMware adopted the T10 administrative logical unit (ALU) standard, which it calls protocol endpoint (PE). vVols are connected to VMs through PEs acting as subsidiary logical units (SLUs, also called sub-luns).

The FlashArray vVol implementation makes PEs nearly transparent. Array administrators seldom deal with PEs, and not at all during day-to-day operations.

Protocol Endpoints (PEs)

Because a typical VM has multiple virtual disks, each instantiated as a volume on the array and addressed by a LUN, the ESXi Version 6.5 support limits of 512 SCSI devices (LUNs) per host and 2,000 logical paths to them can easily be exceeded by even a modest number of VMs.

Moreover, each time a new volume is created or an existing one is resized, VMware must rescan its I/O interconnects to discover the change. In large environments, rescans are time-consuming; rescanning each time the virtual disk configuration changes is generally considered unacceptable.

VMware uses PEs to eliminate these problems. A PE is a volume of zero capacity with a special setting in its Vital Product Data (VPD) page that ESXi detects during a SCSI inquiry. It effectively serves as a mount point for vVols. It is the only FlashArray volume that must be manually connected to hosts to use vVols.

Fun fact: Protocol endpoints were formerly called I/O de-multiplexers. PE is a much better name.

When an ESXi host requests access to a vVol (for example, when a VM is powered on), the array binds the vVol to it. Binding is a synonym for sub-lun connection. For example, if a PE uses LUN 255, a vVol bound to it would be addressed as LUN 255:1. The section titled vVol Binding describes vVol binding in more detail.

PEs greatly extend the number of vVols that can be connected to an ESXi cluster; each PE can have up to 16,383 vVols per host bound to it simultaneously. Moreover, a new binding does not require a complete I/O rescan. Instead, ESXi issues a REPORT_LUNS SCSI command with SELECT REPORT to the PE to which the sub-lun is bound. The PE returns a list of sub-lun IDs for the vVols bound to that host. In large clusters, REPORT_LUNS is significantly faster than a full I/O rescan because it is more precisely targeted.
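The binding model described above can be sketched in a few lines of Python. This is a toy illustration of the addressing scheme only, not array code: each vVol bound through a PE receives a per-host sub-lun ID, and a REPORT_LUNS-style query returns only the IDs bound to the requesting host.

```python
class ProtocolEndpoint:
    """Toy model of PE sub-lun binding (illustrative only)."""

    def __init__(self, lun: int):
        self.lun = lun        # administrative LUN of the PE, e.g. 255
        self.bindings = {}    # host name -> {sub_lun_id: vvol_name}

    def bind(self, host: str, vvol: str) -> str:
        """Bind a vVol for one host and return its sub-lun address."""
        subs = self.bindings.setdefault(host, {})
        sub_id = len(subs) + 1           # next free sub-lun ID for this host
        subs[sub_id] = vvol
        return f"{self.lun}:{sub_id}"    # e.g. "255:1"

    def report_luns(self, host: str) -> list:
        # Analogous to REPORT_LUNS with SELECT REPORT: only the
        # requesting host's bindings are returned, no full rescan needed.
        return sorted(self.bindings.get(host, {}))

pe = ProtocolEndpoint(lun=255)
print(pe.bind("esxi-01", "vm1-data-vvol"))   # 255:1
print(pe.report_luns("esxi-01"))             # [1]
```

Note how bindings are host-specific: a second host querying the same PE sees only its own sub-luns, which is the property that makes a single shared PE compatible with multi-tenancy.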

The FlashArray PE Implementation

When its first VASA provider is registered, a FlashArray automatically creates a PE called pure-protocol-endpoint, but the Web Client hides it from view until a sub-lun connection is made.

A FlashArray’s performance is independent of the number of volumes it hosts; an array’s full performance capability can be delivered through a single PE. PEs are not performance bottlenecks for vVols, so a single PE per array is all that is needed.

Configuring a single PE per array does not restrict multi-tenancy. Sub-lun connections are host-specific. ArrayView 3 illustrates this with excerpts from FlashArray GUI Host panes for two ESXi hosts. Both hosts share connections to the pure-protocol-endpoint PE (LUN 254). Both have shared connections to non-vVol volumes srm-vmfs and Template, using the same LUNs. The vVols bound to the host on the right, however, are only connected to that host; they use sub-luns of LUN 254.

ArrayView 3: Excerpts from FlashArray GUI Connected Volumes Panes for two ESXi Hosts

A FlashArray automatically creates a default pure-protocol-endpoint PE when its first VASA provider is registered. If necessary, additional PEs can also be created manually. Appendix II describes the use of the FlashArray CLI to create a new PE.

BEST PRACTICE: Use one (the default) PE per array. All hosts should share the same PE. vVol-to-host bindings are host-specific, so multi-tenancy is inherently supported.

More than one PE can be configured, but doing so is seldom necessary.

As is typical of the FlashArray architecture, vVol support, and the PE implementation in particular, is as simple as possible.

Protocol Endpoints in vSphere

To view the PE(s) presented to a host, in the Host and Clusters tab of the Web Client inventory pane, click the target host, select the Configure tab, and select Protocol Endpoints from the menu (vSphereView 12).

vSphereView 12: PE List for an ESXi Host

Click the table row for the PE of interest to display its network address authority (NAA) number, the protocol used to communicate with it, its state, the array that hosts it, the number of paths to it, its multipathing policy, and the datastore vVols associated with it. Of these, the only configurable property is multipathing. For optimal performance, Pure Storage recommends round robin path selection (the default policy with ESXi Version 6.5 update 1 and later versions) for all volumes, both VMFS and PE.

BEST PRACTICE: Configure the round robin path selection policy for PEs.

ESXi behaves differently with respect to queue depth limits for PEs than for other volumes. Pure Storage recommends leaving ESXi PE queue depth limits at the default values. 

BEST PRACTICE: Leave PE queue depth limits at the default values unless performance problems occur.
The blog post at contains additional information about PE queue depth limits.


vVol Datastores

vVols replace LUN-based datastores formatted with VMFS. There is no file system on a datastore vVol, nor are vVol-based virtual disks encapsulated in files.

The datastore concept does not disappear entirely, however. VMs must be provisioned somewhere. Historically, VMs have typically been implemented as files in NFS mounts or in a VMFS. Datastores are necessary, both because VM provisioning tools use them to house new VMs, and because they help control storage allocation and differentiate between different types of storage.

But VMFS datastores limit flexibility, primarily because their sizes and features are specified when they are created, and it is not possible to assign different features to individual objects in them. To overcome this limitation, the vVol architecture includes a storage container object, generally referred to as a vVol datastore, with two key properties:

Capacity limit

Allows an array administrator to limit the capacity that VMware administrators can provision as vVols.

Array capabilities

Allows vCenter to determine whether an array can satisfy a configuration request for a VM.

A vVol datastore is sometimes referred to as a storage container. Although the terms are essentially interchangeable, this report uses the term vVol datastore exclusively.

The FlashArray Implementation of vVol Datastores

FlashArray vVol datastores have no artificial size limit. The initial FlashArray vVol release supports a single 8-petabyte vVol datastore per array. Pure Storage Technical Support can change an array’s vVol datastore size on customer request to alter the amount of storage VMware can allocate.

Pure Storage anticipates supporting multiple vVol datastores per array and user-configurable vVol datastore sizes in the future.

Purity//FA Version 5.0.0 and newer versions automatically create an array’s vVol datastore when its VASA provider is registered with vCenter. Once created, a vVol datastore can be mounted to ESXi hosts.

FlashArrays require two items to create a volume—a size and a name. vVol datastores do not require any additional input or enforce any configuration rules on vVols, so creation of FlashArray-based vVols is simple.
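Because a volume needs only a name and a size, a provisioning request is trivial to express. The sketch below is a hypothetical helper, not the FlashArray API: the function names are illustrative, and it simply parses human-readable sizes (such as the 8-petabyte vVol datastore limit mentioned above) into a byte count.

```python
# Binary unit multipliers for human-readable sizes.
_UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4, "P": 1024**5}

def parse_size(size: str) -> int:
    """Convert a size such as '500G' or '8P' to bytes."""
    unit = size[-1].upper()
    if unit in _UNITS:
        return int(float(size[:-1]) * _UNITS[unit])
    return int(size)  # plain byte count

def volume_request(name: str, size: str) -> dict:
    # The only two required inputs for a FlashArray volume: a name and a size.
    return {"name": name, "size": parse_size(size)}

print(volume_request("demo-data-1", "500G"))
```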

Mounting a vVol Datastore

A vVol datastore can be mounted to any ESXi host with access to a PE on the array that hosts the vVol datastore. Mounting a vVol datastore to a host requires:

  • Registration of the array’s VASA providers with vCenter
  • Provisioning of at least one PE to the host.

The latter requires that (a) an array administrator connect the PE to the host or host group, and (b) a VMware administrator rescan the ESXi host’s I/O interconnects.

An array administrator can use the FlashArray GUI, CLI, or REST API to connect a PE and a host or host group; the FlashArray User Guide contains instructions for connecting a host or host group and a volume.

With the Plugin, a VMware administrator can connect a PE to an ESXi host and mount its vVol datastore without array administrator involvement.

Using the Plugin to Mount vVol Datastore

Navigate to Hosts and Clusters in the vCenter inventory pane, right-click the target cluster or host, select Pure Storage from the dropdown menu, and Create Datastore from the secondary dropdown to launch the Create Datastore wizard (vSphereView 13).

vSphereView 13: Create Datastore

Enter a friendly name for the vVol datastore (optional). Click the vVol radio button, select the array from which to provision in the Select Pure Storage Array dropdown, and click Create to provision the vVol datastore to the host or cluster.

vSphereView 14: Create Datastore Wizard

The Plugin connects the array’s PE(s) to the FlashArray host or host group that corresponds to the ESXi host or hosts and mounts the vVol datastore to the selected host(s).

vSphereView 15: An Already-mounted Datastore

A vVol datastore can be mounted to a cluster, or alternatively, to one of its hosts by expanding the display in the Select Host/Cluster box and selecting the host.

If the array’s vVol datastore has already been mounted to a host or cluster in the vCenter, the Datastore Name field is populated and the entry box is grayed out (vSphereView 15).

Error messages usually indicate that the array has no host group corresponding to the ESXi cluster, or that the host group is not configured properly. The section titled Configuring Host Connectivity describes connecting ESXi hosts and clusters to FlashArray volumes.

Mounting vVol Datastores Manually: FlashArray Actions 

Alternatively, vVol datastores can be provisioned by connecting the PE to the hosts or host group, rescanning each host’s I/O interconnects, and mounting the vVol datastore to each host. These operations require both FlashArray and VMware involvement, however. Array administrators can use the CLI or REST interfaces, or tools such as PowerShell. VMware administrators can use the Web Client, the VMware CLI, or the VMware SDK and SDK-based tools like PowerCLI.

Pure Storage recommends using the Plugin to provision PEs to hosts. The FlashArray GUI does not currently support provisioning or de-provisioning PEs; those are done via the CLI or the REST APIs.

To provision a PE using the FlashArray CLI, use the purevol list command to discover the array’s PE(s). Use the purehost connect or purehgroup connect command to connect a PE to a host or host group. (vSphereView 16)

vSphereView 16 illustrates (a) an array with three PEs, and (b) the purehgroup command for connecting the default pure-protocol-endpoint to the Infrastructure host group.

vSphereView 16: Sample Use of the FlashArray CLI to Connect to a Host Group

Registering an array’s VASA Provider with a vCenter creates a default PE. To provision a PE prior to registration, use the commands listed in Appendix II.

Mounting vVol Datastores Manually: Web Client Actions

The FlashArray GUI Storage view Hosts tab lists PE connections made by the CLI. (ArrayView 4)

ArrayView 4: FlashArray GUI Hosts Tab Showing PE Connection to Cluster

Although the PE volumes are connected to the ESXi hosts from a FlashArray standpoint, vCenter does not recognize them until an I/O rescan occurs.

To demonstrate this, select the target host in the Hosts and Clusters list in the Web Client inventory pane, click the Configure tab, and select Protocol Endpoints to display a table of PEs known to vCenter (vSphereView 17).

vSphereView 17: Protocol Endpoints Are Not Visible Until Storage Rescan

To rescan storage for a host or cluster, right-click the host or cluster in the inventory pane, select Storage from the dropdown menu, and Rescan Storage from the secondary dropdown to launch the Mission – Rescan Storage wizard (vSphereView 19).

vSphereView 18: Rescan Storage Command

Check Scan for new Storage Devices and click OK to start the rescan. (Rescanning for new VMFS volumes is not required, but it can be selected.) Rescanning does not cause the PE to immediately appear in the Protocol Endpoints view (vSphereView 17). A PE does not become visible in this view until it is in use by a vVol datastore.

vSphereView 19: Mission-Rescan Storage Wizard

To display a provisioned PE, select the host in the inventory pane, select the Configure tab, and click Storage Devices. The PE appears as a 1 megabyte device (vSphereView 20).

vSphereView 20: PE Listed as an ESXi Host's Storage Device after Rescan

Mounting a vVol Datastore

To mount a vVol datastore, right-click the target host or cluster, select Storage from the dropdown menu, and select New Datastore from the secondary dropdown (vSphereView 21) to launch the New Datastore wizard (vSphereView 22).

vSphereView 21: Mount vVol Datastore

Click the vVol radio button, then click Next. (not shown in vSphereView 22).

vSphereView 22: New Datastore Wizard (1)

Enter a friendly name for the datastore and select the vVol container in the Backing Storage Container list (vSphereView 23).

vSphereView 23: New Datastore Wizard (2)

Clicking a container displays the array that hosts it in the lower Backing Storage Container panel.

If no vVol datastore is listed, this typically indicates either that the array’s VASA providers have not been registered or that vCenter cannot communicate with them.

Select the host(s) on which to mount the vVol datastore and click Finish. (vSphereView 24—Finish button not shown) 

vSphereView 24: New Datastore Wizard (3)

Once a vVol datastore is mounted, the Configure tab for any ESXi host to which it is mounted lists the PEs available from the array that hosts it. (vSphereView 25).

vSphereView 25: PE Listed for an ESXi Host after vVol Datastore Creation

Mounting a vVol Datastore to Additional Hosts

To mount the vVol datastore to additional hosts, right-click its row in the Web Client inventory pane and select Mount Datastore to Additional Hosts from the dropdown menu (vSphereView 26) to launch the Mount Datastore to Additional Hosts wizard (vSphereView 27). Select the hosts on which to mount the vVol datastore by checking their boxes and click Finish (not shown).

vSphereView 26: Mount Datastore to Additional Hosts Command
vSphereView 27: Mount Datastore to Additional Hosts Wizard

Using a vVol Datastore

A vVol datastore is neither a file system nor a volume (LUN) per se, but an abstraction that emulates a file system to (a) represent VMs provisioned through it and (b) manage VM space allocation. It can be viewed as a collection of references to vVols.

vVol datastores are managed similarly to conventional datastores. For example, the Web Client file browser and an ESXi SSH session can display a vVol datastore’s contents (vSphereViews 28 and 29).


vSphereView 28: Web Client File Browser View of a vVol Datastore's Contents


vSphereView 29: vSphere CLI Listing of a vVol Datastore's Contents


Types of vVols

The benefits of vVols are rooted in the increased storage granularity achieved by implementing each vVol-based virtual disk as a separate volume on the array. This property makes it possible to apply array-based features to individual vVols.

FlashArray Organization of vVols

FlashArrays organize the vVols associated with each vVol-based VM as a volume group. Each time a VMware administrator creates a vVol-based VM, the hosting FlashArray creates a volume group whose name is the name of the VM, prefixed by vvol- and followed by -vg (ArrayView 5).

FlashArray syntax limits volume group names to letters, numbers, and dashes; during volume group creation, arrays remove other characters that are valid in virtual machine names.
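As a sketch of this naming rule, the derivation can be modeled in Python (the exact sanitization the array applies is assumed here for illustration):

```python
import re

def vvol_volume_group_name(vm_name: str) -> str:
    """Illustrative model of FlashArray vVol volume group naming: the VM
    name is prefixed with 'vvol-' and suffixed with '-vg', and characters
    other than letters, numbers, and dashes are removed (assumed behavior,
    for illustration only)."""
    sanitized = re.sub(r"[^A-Za-z0-9-]", "", vm_name)
    return f"vvol-{sanitized}-vg"

# A VM name containing a space and an underscore (both valid in vSphere):
print(vvol_volume_group_name("SQL Server_01"))  # vvol-SQLServer01-vg
```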

ArrayView 5: Volume Groups Area of GUI Volumes Tab

To list the volumes associated with a vVol-based VM, select the Storage view Volumes tab. In the Volume Groups area, select the volume group name containing the VM name from the list or enter the VM name in the search box (ArrayView 5).

The Volumes area of the pane lists the volumes associated with the VM (ArrayView 6).

ArrayView 6: GUI View of Volume Group Membership

Clicking a volume name displays additional detail about the selected volume (ArrayView 7).

ArrayView 7: GUI View of a vVol's Details

Clicking the volume group name in the navigation breadcrumbs returns to the volume groups display.

When the last vVol in a volume group is deleted (destroyed), the array destroys the volume group automatically. As with all FlashArray data objects, destroying a volume group moves it to the array’s Destroyed Volume Groups folder for 24 hours before eradicating it permanently.
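The destroy-then-eradicate lifecycle can be sketched with a toy model (the object and timestamps are illustrative; the 24-hour window is the array's documented behavior):

```python
from datetime import datetime, timedelta

ERADICATION_DELAY = timedelta(hours=24)

class DestroyedObject:
    """Toy model of a destroyed FlashArray object: it remains recoverable
    in the Destroyed folder for 24 hours, then is eradicated permanently."""

    def __init__(self, name: str, destroyed_at: datetime):
        self.name = name
        self.destroyed_at = destroyed_at

    def recoverable(self, now: datetime) -> bool:
        # Recovery is possible only within the 24-hour grace period.
        return now - self.destroyed_at < ERADICATION_DELAY

vg = DestroyedObject("vvol-demo-vg", destroyed_at=datetime(2020, 1, 1, 12, 0))
print(vg.recoverable(datetime(2020, 1, 1, 23, 0)))  # True: still in the Destroyed folder
print(vg.recoverable(datetime(2020, 1, 2, 13, 0)))  # False: eradicated after 24 hours
```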

To recover or eradicate a destroyed volume group, click the respective icons in the Destroyed Volume Groups pane.

ArrayView 8: FlashArray GUI Destroyed Volume Groups Folder

The FlashArray CLI and REST interfaces can also be used to manage volume groups of vVols.

VM Datastore Structures

vVols do not change the fundamental VM architecture:

  • Every VM has a configuration file (a VMX file) that describes its virtual hardware and special settings.
  • Every powered-on VM has a swap file.
  • Each virtual disk added to a VM is implemented as a storage object of fixed capacity presented to the guest OS.
  • A VM may have a memory (vmem) file used to store snapshots of its memory state.

Conventional VM Datastores

Every VM has a home directory that contains information such as:

Virtual hardware descriptions

Guest operating system version and settings, BIOS configuration, virtual SCSI controllers, virtual NICs, pointers to virtual disks, etc.

Log files

Information used during VM troubleshooting.

VMDK files

Files that correspond to the VM’s virtual disks, whether implemented on NFS or VMFS, as physical or virtual mode RDMs (Raw Device Mappings), or as vVols. VMDK files indicate where the ESXi vSCSI layer should send each virtual disk’s I/O.

For a complete list of VM home directory contents, see the VMware Workstation 5.0 article What Files Make Up a Virtual Machine.

When a VMware administrator creates a VM based on VMFS or NFS, VMware creates a directory for it in its home datastore (vSphereViews 30 and 31).

vSphereView 30: Web Client Edit Settings Wizard
vSphereView 31: Web Client File Browser View of a VM's Home Directory

With vVol-based VMs, there is no file system, but VMware makes the structure appear to be the same as that of a conventional VM. What occurs internally is quite different, however.

vVol-based VM Datastores

vVol-based VMs use four types of vVols:

  • Configuration vVol (usually called the “config vVol”; one per VM)
  • Data vVol (one or more per VM)
  • Swap vVol (one per powered-on VM)
  • Memory vVol (zero or more per VM)

The sections that follow describe these four types of vVols and the purposes they serve.

In addition to the four types of vVols used by vVol-based VMs, there are vVol snapshots, described in the section titled Snapshots of vVols.

Config vVols 

When a VMware administrator creates a vVol-based VM, vCenter creates a 4-gigabyte thin-provisioned configuration vVol (config vVol) on the array, which ESXi formats with VMFS. A VM’s config vVol stores the files required to build and manage it: its VMX file, logs, VMDK pointers, etc. To create a vVol-based VM, right-click any inventory pane object to launch the New Virtual Machine wizard and specify that the VM’s home directory be created on a vVol datastore.

vSphereView 32: New Virtual Machine Wizard

For simplicity, the VM in this example has no additional virtual disks.

vSphereView 33: Customize Hardware Wizard

When VM creation is complete, a directory with the name of the VM appears in the array’s vVol datastore. The directory contains the VM’s vmx file, log file and an initially empty vmsd file used to store snapshot information.

vSphereView 34: Directory of a New vVol-based VM

In the Web Client, a vVol datastore appears as a collection of folders, each representing a mount point for the mini-file system on a config vVol. The Web Client GUI Browse Datastore function and ESXi console cd operations work as they do with conventional VMs. Rather than traversing one file system, however, they transparently traverse the file systems hosted on all of the array’s config vVols.

A FlashArray creates a config vVol for each vVol-based VM. Arrays name config vVols by concatenating the volume group name with config-<UUID>. Arrays generate the UUIDs randomly; an array administrator can change them if desired.
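A quick sketch of this naming convention (the exact UUID format the array uses is an assumption here, shown for illustration only):

```python
import uuid

def config_vvol_name(volume_group: str) -> str:
    """Illustrative model: a config vVol name is the volume group name
    concatenated with 'config-' plus a randomly generated UUID
    (assumed format, for illustration)."""
    return f"{volume_group}/config-{uuid.uuid4()}"

name = config_vvol_name("vvol-demo-vg")
print(name)  # e.g. vvol-demo-vg/config-1b4e28ba-2fa1-11d2-883f-0016d3cca427
```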

An array administrator can search for volumes containing a vVol-based VM name to verify that its volume group and config vVol have been created.

ArrayView 9: Locating a VM's Config vVol

As objects are added to a vVol-based VM, VMware creates pointer files in its config vVol; these are visible in its directory. When a VM is deleted, moved to another array, or moved to a non-vVol datastore, VMware deletes its config vVol.

Data vVols

Each data vVol on an array corresponds to a virtual disk. When a VMware administrator creates a virtual disk in a vVol datastore, VMware directs the array to create a volume and creates a VMDK file pointing to it in the VM’s config vVol. Similarly, to resize or delete a virtual disk, VMware directs the array to resize or destroy the corresponding volume.

Creating a Data vVol

vVol-based virtual disk creation is identical to conventional virtual disk creation. To create a vVol-based virtual disk using the Web Client, for example, right-click a VM in the Web Client inventory pane and select Edit Settings from the dropdown menu to launch the Edit Settings wizard.

vSphereView 35: Web Client Edit Settings Command

Select New Hard Disk in the New device dropdown and click Add.

vSphereView 36: New Hard Disk Selection

Enter configuration parameters. The new virtual disk can use the VM’s home datastore (Datastore Default) or a different one, but to ensure that the virtual disk is vVol-based, select a vVol datastore.

vSphereView 37: Specifying Data vVol Parameters

Click OK to create the virtual disk. VMware does the following:

  1. For a VM’s first vVol on a given array, directs the array to create a volume group and a config vVol for it.
  2. Directs the array to create a volume in the VM’s volume group.
  3. Creates a VMDK pointer file in the VM’s config vVol to link the virtual disk to the data vVol on the array.
  4. Adds the new pointer file to the VM’s VMX file to enable the VM to use the data vVol.

The FlashArray GUI Storage view Volumes tab lists data vVols in the Volumes pane of the volume group display.

ArrayView 10: FlashArray GUI View of a Volume Group's Data vVols

Resizing a Data vVol

A VMware administrator can use any of several management tools to expand a data vVol to a maximum size of 62 terabytes while it is online. Although FlashArrays can shrink volumes as well, vSphere does not support that function.

vSphereView 38: vSphere Disallows Volume Shrinking

VMware enforces the 62 terabyte maximum to enable vVols to be moved to VMFS or NFS, both of whose maximum virtual disk size is 62 terabytes.
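These resize rules can be summarized in a short sketch (sizes are expressed in terabytes; the grow-only behavior and the 62-terabyte cap come from the text above, while the function itself is hypothetical):

```python
MAX_VIRTUAL_DISK_TB = 62  # VMware's ceiling for vVol, VMFS, and NFS virtual disks

def validate_resize(current_tb: float, requested_tb: float) -> float:
    """Sketch of the resize rules vSphere enforces on a data vVol:
    online expansion only, never shrinking, capped at 62 TB."""
    if requested_tb < current_tb:
        raise ValueError("vSphere does not support shrinking virtual disks")
    if requested_tb > MAX_VIRTUAL_DISK_TB:
        raise ValueError("virtual disks cannot exceed 62 TB")
    return requested_tb

print(validate_resize(1, 2))  # 2: growing from 1 TB to 2 TB is allowed
```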

At this time, VMware does not support expanding a volume attached to a SCSI controller with bus sharing enabled.

To expand a data vVol using the Web Client, right-click the VM in the inventory pane, select Edit Settings from the dropdown menu, and select the virtual disk to be expanded from the dropdown. The virtual disk’s current capacity is displayed. Enter the desired capacity and click OK, and use guest operating system tools to expose the additional capacity to the VM. 

vSphereView 39: Selecting Virtual Disk for Expansion
vSphereView 40: Entering Expanded Data vVol Capacity

Deleting a Data vVol

Deleting a data vVol is identical to deleting any other type of virtual disk. When a VMware administrator deletes a vVol-based virtual disk from a VM, ESXi deletes the reference VMDK file and directs the array to destroy the underlying volume.

To delete a vVol-based virtual disk, right-click the target VM in the Web Client inventory pane and select Edit Settings from the dropdown menu to launch the Edit Settings wizard. Select the virtual disk to be deleted, hover over the right side of its row, and click the removal icon when it appears.

vSphereView 41: Selecting Data vVol for Deletion

To remove the vVol from the VM, click the OK button. To remove it from the VM and destroy it on the array, check the Delete files from datastore checkbox and click OK.

vSphereView 42: Destroying the Volume on the Array

Delete files from datastore is not selected by default. If it is not selected, the vVol is detached from the VM but remains on the array; a VMware administrator can reattach it with the Add existing virtual disk Web Client command.

The ESXi host deletes the data vVol’s VMDK pointer file and directs the array to destroy the volume (that is, move it to its Destroyed Volumes folder for 24 hours).

ArrayView 11: Deleted Data vVol in an Array's Destroyed Volumes Folder

An array administrator can recover a deleted vVol-based virtual disk at any time during the 24 hours following deletion. After 24 hours, the array permanently eradicates the volume and it can no longer be recovered.

Swap vVols

VMware creates swap files for VMs of all types when they are powered on, and deletes them at power-off. When a vVol-based VM is powered on, VMware directs the array to create a swap vVol, and creates a swap (.vswp) file in the VM’s config vVol that points to it.

vSphereView 43: Powered-off VM Configuration
vSphereView 43 illustrates the components of a powered-off vVol-based VM; there is no vswp file.
ArrayView 12: Data vVol Volumes for Powered-off VM
As ArrayView 12 shows, the VM’s volume group does not include a swap volume.

To power on a vVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown. 

vSphereView 44: Power On VM Command

 When a VM is powered on, the Web Client file navigator lists two vswp files in its folder.

vSphereView 45: Powered-On VM with vswp File

VMware creates a vswp file for the VM’s memory image when it is swapped out and another for ESXi administrative purposes.

The swap vVol’s name in the VM’s volume group on the array is Swap- concatenated with a unique identifier. The GUI Volumes tab shows a volume whose size is the VM’s memory size. 

ArrayView 13: Swap Volume for Powered-On VM
vSphereView 46: VM's Virtual Memory Size

Like all FlashArray volumes, swap vVols are thin-provisioned—they occupy no space until data is written to them.

To power off a vVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Shut Down Guest OS from the secondary dropdown.

vSphereView 47: Web Client Power Off VM Command

When a VM is powered off, its vswp file disappears from the Web Client file navigator, and the FlashArray GUI Volumes tab no longer shows a swap volume on the array.

ArrayView 14: GUI View of Powered-off VM's Volumes (No Swap vVol)

VMware destroys and immediately eradicates swap vVols from the array. (They do not remain in the Destroyed Volumes folder for 24 hours.)

ArrayView 15: Destroyed and Eradicated Swap vVol

Memory vVols

VMware creates memory vVols for two reasons:

VM suspension

When a VMware administrator suspends a VM, VMware stores its memory state in a memory vVol. When the VM resumes, its memory state is restored from the memory vVol, which is then deleted.

VM snapshots

When a VMware management tool creates a snapshot of a vVol-based VM with the “store memory state” option, VMware creates a memory vVol. Memory vVols that contain VM snapshots are deleted when the snapshots are deleted. They are described in the section titled Creating a VM Snapshot with Saved Memory.

To suspend a running VM, right-click its entry in the Web Client inventory pane, select Power from the dropdown menu, and Suspend from the secondary dropdown.

vSphereView 48: VM Suspend Command

VMware halts the VM’s processes, creates a memory vVol and a vmss file to reference it, de-stages (writes) the VM’s memory contents to the memory vVol, and directs the array to destroy and eradicate its swap vVol.

ArrayView 16: Memory vVol Host Connection
ArrayView 17: GUI View of Memory vVol
vSphereView 49: Memory vVol in File Navigator

When the VM’s memory has been written, the ESXi host unbinds its vVols. They are bound again when the VM is powered on.

To resume a suspended VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown.

vSphereView 50: Web Client Command to Power On a Suspended VM

Powering on a suspended VM binds its vVols, including its memory vVol, to the ESXi host, and loads its memory state from the memory vVol. Once loading is complete, VMware unbinds the memory vVol and destroys (but does not immediately eradicate) it. The memory vVol moves to the array’s Destroyed Volumes folder, where it is eradicated permanently after 24 hours.

ArrayView 18: GUI View of Destroyed Memory vVol

Recovering Deleted vVols

Deleted data and config vVols are both recoverable within 24 hours of deletion.

Throughout a VM’s life, it has a config vVol in every vVol datastore it uses. The config vVol hosts the VM’s home folder which contains its VMX file, logs, swap pointer file, and data vVol (VMDK) and snapshot pointer files. Restoring a config vVol from a snapshot and the corresponding data and snapshot vVols effectively restores a deleted VM.

vSphereView 51: File Navigator View of a Typical VM Home Directory Folder

When a VM is deleted, VMware deletes the files in its config vVol and directs the array to destroy the config vVol and any of its data vVols that are not shared with other VMs.

vSphereView 52: Confirm Delete Wizard

An array administrator can recover destroyed vVols at any time within 24 hours of their destruction. But because the config vVol’s files are deleted before destruction, recovering a VM’s config vVol results in an empty folder. A recovered config vVol must be restored from its most recent snapshot.

Recovering a config vVol requires at least one pre-existing array-based snapshot. Without a config vVol snapshot, a VM can be recovered, but its configuration must be recovered manually.

When a VMware administrator deletes a VM, VMware directs the array to destroy its config vVol, data vVols, and any snapshots. The array moves the objects to its destroyed objects folders for 24 hours.

ArrayView 19: GUI View of a Destroyed VM's Volumes, Snapshots, and Volume Group

To recover a deleted VM, recover its volume group first, followed by its config and data vVols. To recover a single object on the array, click the recovery icon next to it.

To recover multiple objects of the same type with a single action, click the vertical ellipsis and select Recover… to launch the Recover Volumes wizard. Select the config vVol and the data vVols to be recovered by checking their boxes and click the Recover button.

ArrayView 20: GUI Command to Recover Objects
ArrayView 21: Selecting Volumes to Recover

In the GUI Snapshots pane, click the vertical ellipsis to the right of the snapshot from which to restore, and select Restore from the dropdown menu.

ArrayView 22: Restore Config vVol from Snapshot

When the Restore Volume from Snapshot wizard appears, click the Restore button.

ArrayView 23: Restore Volume Confirmation Wizard

Restoring the config vVol from a snapshot recreates the pointer files it contains. In the Web Client file navigator, right-click the vmx file and select Register VM… from the dropdown menu to register the VM.

vSphereView 53: Registering a Recovered VM

After registration, all data vVols, snapshots, and the VM configuration are as they were prior to VM deletion.

Recovering a Deleted Data vVol

During the 24-hour grace period between deletion of a vVol by a VMware administrator and its eradication by the array, the virtual disk can be restored.

When a VMware administrator deletes a vVol-based virtual disk and selects the Delete files from datastore option, the array moves the data vVol to its Destroyed Volumes folder for 24 hours.

vSphereView 54: Delete Virtual Disk Command

To use the Plugin to restore a deleted data vVol, click the VM in the inventory pane, select the FlashArray Virtual Volume Objects tab, and click the Plugin’s Restore Deleted Disk button to launch the Restore Deleted Disk wizard.

vSphereView 55: Restore Deleted Disk Command
vSphereView 56: Restore Deleted Disk Wizard

Select the data vVol to be restored from the list and click the Restore button. VMware directs the array to remove the data vVol from its Destroyed Volumes folder and makes the virtual disk visible to the VM and to the Web Client.


vVol Binding

A primary goal of the vVol architecture is scale: increasing the number of virtual disks that can be exported to ESXi hosts concurrently. With previous approaches, each volume required a separate LUN, and in large environments it is quite possible to exceed the ESXi limit of 512 LUNs. vVols introduce the concept of protocol endpoints (PEs) to extend this limit significantly.

ESXi hosts bind and unbind (connect and disconnect) vVols dynamically as needed. Hosts can provision VMs and power them on and off even when no vCenter is available.

When an ESXi host needs access to a vVol:

  • It issues a bind request to the VASA provider whose array hosts the vVol
  • The VASA provider binds the vVol to a PE visible to the requesting host and returns the binding information (the sub-lun) to the host
  • The host issues a SCSI REPORT LUNS command to the PE to make the newly-bound vVol accessible.

vVols are bound to specific ESXi host(s) for as long as they are needed. Binds (sub-lun connections) are specific to each ESXi host-PE-vVol relationship. A vVol bound to a PE that is visible to multiple hosts can only be accessed by hosts that have requested binds. Table 1 lists the most common scenarios in which ESXi hosts bind and unbind vVols.

What causes the bind? | Bound host | When is it unbound? | vVol type
VM power-on | Host running the VM | Power-off or vMotion | Config, data, swap
Folder navigated to in vVol datastore via GUI | Host selected by vCenter with access to the vVol datastore | When navigated away from or session ended | Config
Folder navigated to in vVol datastore via SSH or console | Host logged into | When navigated away from or session ended | Config
vMotion | Target host | Power-off or vMotion | Config, data, swap
VM creation | Target host | Creation completion | Config, data
VM deletion | Target host | Upon deletion completion | —
VM reconfiguration | Target host | Reconfiguration completion | —
VM clone | Target host | Clone completion | Config, data
VM snapshot | Target host | Snapshot completion | —

Table 1: Reasons for Binding vVols to ESXi Host


Binding and unbinding are automatic. There is never a need for a VMware or FlashArray administrator to manually bind a vVol to an ESXi host.

FlashArrays only bind vVols to ESXi hosts that make requests; they do not bind them to host groups.

If multiple PEs are presented to an ESXi host, the host selects one at random to satisfy each bind request. Array administrators cannot control which PE is used for a bind.
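The bind workflow described above can be sketched as a toy model (the class and method names are hypothetical, not a real VASA API; it illustrates random PE selection and per-host access):

```python
import random

class VasaProvider:
    """Toy model of vVol binding: a bind request pairs a vVol with a
    protocol endpoint (PE) visible to the requesting host and returns a
    sub-lun; only hosts that issued a bind can reach the vVol."""

    def __init__(self, pes):
        self.pes = pes
        self.bindings = {}      # (host, vvol) -> (pe, sublun)
        self._next_sublun = 1

    def bind(self, host, vvol):
        # The host selects a PE at random; administrators cannot choose.
        pe = random.choice(self.pes)
        sublun = self._next_sublun
        self._next_sublun += 1
        self.bindings[(host, vvol)] = (pe, sublun)
        return pe, sublun

    def can_access(self, host, vvol):
        return (host, vvol) in self.bindings

vp = VasaProvider(pes=["pure-protocol-endpoint"])
vp.bind("esxi-01", "data-vvol-1")
print(vp.can_access("esxi-01", "data-vvol-1"))  # True: esxi-01 requested a bind
print(vp.can_access("esxi-02", "data-vvol-1"))  # False: esxi-02 never bound it
```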

This blog post contains a detailed description of ESXi host to PE to vVol binding.

End users should never need to manually connect a vVol to a FlashArray host or host group.

A vVol with no sub-lun connection is not “orphaned”. No sub-lun connection simply indicates that no ESXi host has access to the vVol at that time. 



Snapshots of vVols

An important benefit of vVols is their handling of snapshots. With VMFS-based storage, ESXi takes VM snapshots by creating a delta VMDK file for each of the VM’s virtual disks. It redirects new virtual disk writes to the delta VMDKs, directs reads of unmodified blocks to the originals, and directs reads of modified blocks to the delta VMDKs. The technique works, but it introduces I/O latency that can profoundly affect application performance, and each additional snapshot intensifies the latency increase.

The performance impact is so pronounced that both VMware and storage vendors recommend the briefest possible snapshot retention periods; see the VMware KB article Best practices for using snapshots in the vSphere environment (1025279). Practically speaking, this limits snapshot use to:

Patches and upgrades

Taking a snapshot prior to patching or upgrading an application or guest operating system, and deleting it immediately after the update succeeds.

Backups

Quiescing a VM and taking a snapshot prior to a VADP-based VM backup. Again, the recommended practice is deleting the snapshot immediately after the backup completes.

These snapshots are typically of limited utility for other purposes, such as development testing. Adapting them for such purposes usually entails custom scripting and/or lengthy copy operations with heavy impact on production performance. In summary, conventional VMware snapshots solve some problems, but with significant limitations.

Array-based snapshots are generally preferable, particularly for their lower performance impact. FlashArray snapshots are created instantaneously, have negligible performance impact, and initially occupy no space. They can be scheduled or taken on demand, and replicated to remote arrays. Scripts and orchestration tools can use them to quickly bring up or refresh development testing environments.

Because FlashArray snapshots have negligible performance impact, they can be retained for longer periods. In addition, they can be copied to create new volumes for development testing and analytics, either by other VMs or by physical servers.

FlashArray administrators can take snapshots of VMFS volumes directly; however, there are limitations:

No integration with ESXi or vCenter

Plugins can enable VMFS snapshot creation and management from the Web Client, but vCenter and ESXi have no awareness of or capability for managing them.

Coarse granularity

Array-based snapshots of VMFS volumes capture the entire VMFS. They may include hundreds or thousands of VMs and their VMDKs. Restoring individual VMDKs requires extensive scripting.

vVols eliminate both limitations. VMware does not create vVol snapshots itself; it directs the array to create a snapshot for each of a VM’s data vVols. The Plugin translates Web Client commands into FlashArray operations. VMware administrators use the same tools to create, restore, and delete VMFS and vVol snapshots, but with vVols, they can operate on individual VMDKs. 

Starting in Purity//FA 5.1.3, when a managed snapshot is taken, the array copies the VM’s current data volume(s) to new data volume(s) with a -snap suffix.
Below is an example of a vVol VM on Purity 5.1.4 and a vVol VM on Purity 5.1.2. Each VM has had a managed snapshot taken that included its memory.

Purity 5.1.4: a copy of the data volume with the -snap suffix is visible.

Purity 5.1.2: snapshots have been taken of the two data volumes.

Taking Snapshots of vVol-based VMs

While the FlashArray GUI, REST, and CLI interfaces can be used for both per-VM and per-virtual disk vVol operations, a major advantage of the Plugin is management of vVols from within vCenter. VMware administrators can use the Web Client or any other VMware management tool to create array-based snapshots of vVol-based VMs.

To take a snapshot of a vVol-based VM with the Web Client, right-click the VM in the inventory pane, select Snapshots from the dropdown menu, and select Take Snapshot from the secondary dropdown (vSphereView 57) to launch the Take VM Snapshot for vVol-VM wizard (vSphereView 58).

vSphereView 57: Web Client Snapshot VM Command
vSphereView 58: Take Snapshot of vVol-VM Wizard

Enter a name for the snapshot and (optionally) check one of the boxes:

Snapshot the virtual machine’s memory:

Causes the snapshot to capture the VM’s memory state and power setting. Memory snapshots take longer to complete, and may cause a brief (a second or less) slowdown in VM response over the network.

Quiesce guest file system:

VMware Tools quiesces the VM’s file system before taking the snapshot. This allows outstanding I/O requests to complete, but queues new ones for execution after restart. When a VM restored from this type of snapshot restarts, any queued I/O requests complete. To use this option, VMware Tools must be installed in the VM. Either of these options can be used with vVol-based VMs.

VMware administrators can also take snapshots of vVol-based VMs with PowerCLI, for example:

New-Snapshot -Name NewSnapshot -Quiesce:$true -VM vVolVM -Memory:$false 
vSphereView 59: New Files Resulting from a Snapshot of a vVol-based VM

When a snapshot of a vVol-based VM is taken, new files appear in the VM’s vVol datastore folder (vSphereView 59).

The files are:

VMDK (vVol-VM-000001.vmdk)

A pointer file to a FlashArray volume or snapshot. If the VM is running from that VMDK, the file points to a data vVol. If the VM is not running from that snapshot VMDK, the file points to a vVol snapshot. As administrators change VMs’ running states, VMware automatically re-points VMDK files.

Database file (vVol-VM.vmsd)

The VMware Snapshot Manager’s primary source of information. Contains entries that define relationships between snapshots and the disks from which they are created.

Memory snapshot file (vVol-VM-Snapshot1.vmsn)

Contains the state of the VM’s memory. Makes it possible to revert directly to a powered-on VM state. (With non-memory snapshots, VMs revert to turned off states.) Created even if the Snapshot the virtual machine’s memory option is not selected.

Memory file (not shown in vSphereView 59)

A pointer file to a memory vVol. Created only for snapshots that include VM memory states.

Creating Snapshots Without Saving Memory

If neither Snapshot the virtual machine’s memory nor Quiesce guest file system is selected, VMware directs the array to create snapshots with no pre-work. All FlashArray snapshots are crash consistent, so snapshots of vVol-based VMs that they host are likewise at least crash consistent.

VMware takes snapshots of vVol-based VMs by directing the array (or arrays) to take snapshots of its data vVols. Viewing a VM’s data vVols on the array shows each one’s live snapshots.
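The one-snapshot-per-data-vVol behavior can be sketched as follows (the names and the mapping function are illustrative; real FlashArray snapshot names are auto-generated by the array):

```python
def take_managed_snapshot(data_vvols, snapshot_id):
    """Sketch: a managed (non-memory) snapshot of a vVol-based VM is
    implemented as one array snapshot per data vVol. Names here are
    hypothetical; the array auto-generates real snapshot names."""
    return {vvol: f"{vvol}.{snapshot_id}" for vvol in data_vvols}

snaps = take_managed_snapshot(
    ["vvol-demo-vg/Data-a1", "vvol-demo-vg/Data-b2"], "snap-1"
)
for vvol, snap in snaps.items():
    print(f"{vvol} -> {snap}")  # one snapshot per data vVol
```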

vSphereView 60: Completed VMware Snapshot of a VM

Purity 5.1.3+ non-memory managed snapshot in the array GUI:

ArrayView 24: Non-memory Snapshot in Array GUI (Purity 5.1.4)

Purity 5.0.7 non-memory managed snapshot in the array GUI:

ArrayView 24: Non-memory Snapshot in Array GUI

vSphereView 61: Non-memory Snapshot in Web Client

FlashArray snapshot names are auto-generated, but VMware tools list the snapshot name supplied by the VMware administrator (as in vSphereView 58).

Creating a VM Snapshot with Saved Memory

If the VMware administrator selects Snapshot the virtual machine’s memory, the underlying snapshot process is more complex.

Memory snapshots generally take somewhat longer than non-memory ones because the ESXi host directs the array to create a memory vVol to which it writes the VM’s entire memory image. Creation time is proportional to the VM’s memory size.

vSphereView 62: Take VM Snapshot Wizard
vSphereView 63: Memory Snapshot Progress Indicator

Memory snapshots typically cause a VM to pause briefly, usually for less than a second. vSphereView 64 shows a timeout in a sequence of ICMP pings to a VM due to a memory snapshot.

vSphereView 64: Missed Ping Due to Memory Copy During Snapshot Creation

The memory vVol created in a VM’s volume group as a consequence of a memory snapshot stores the VM’s active state (memory image). ArrayView 25 shows the volume group of a VM with a memory snapshot (vvol-vVol-VM-vg/Memory-b31d0eb0). The memory vVol’s size matches the VM’s memory size.

ArrayView 25: Memory vVol Created by Taking a Memory Snapshot of a VM

VMware flags a memory snapshot with a green play icon to indicate that it includes the VM’s memory state.

vSphereView 65: Web Client View of a Memory Snapshot

Reverting a VM to a Snapshot

VMware management tools can revert VMs to snapshots taken by VMware. As with snapshot creation, reverting is identical for conventional and vVol-based VM snapshots.

To restore a VM from a snapshot, from the Web Client Hosts & Clusters or VMs and Templates view, select the VM to be restored and click the Snapshots tab in the adjacent pane to display a list of the VM’s snapshots.

Select the snapshot from which to revert, click the All Actions button, and select Revert to from the dropdown menu.

vSphereView 66: Revert VM to Snapshot Command

Subsequent steps differ slightly for non-memory and memory snapshots.

Reverting a VM from a Non-memory Snapshot

The Revert to command displays a confirmation dialog. Click Yes to revert the VM to the selected snapshot.

vSphereView 67: Confirm Reverting a VM to a Non-memory Snapshot

The array overwrites the VM’s data vVols from their snapshots. Any data vVols added to the VM after the snapshot was taken are unchanged.

Before reverting a VM from a non-memory snapshot, VMware shuts the VM down. Thus, reverted VMs are initially powered off.

Reverting a VM from Memory Snapshot

To revert a VM to a memory snapshot, the ESXi host first directs the array to restore the VM’s data vVols from their snapshots, and then binds the VM’s memory vVol and reloads its memory. Reverting a VM to a memory snapshot takes slightly longer and results in a burst of read activity on the array.

A VM can be reverted to a memory snapshot either in a suspended or a running state. Check the Suspend this virtual machine when reverting to selected snapshot box in the Confirm Revert to Snapshot wizard to leave the reverted VM suspended. If the box is not checked, the VM is reverted to its running state at the time of the snapshot.

ArrayView 26: FlashArray Read Activity while Reverting a VM from a Memory Snapshot

Deleting a Snapshot

Snapshots created with VMware management tools can be deleted with those same tools. VMware administrators can only delete snapshots taken with VMware tools.

To delete a VM snapshot from the Web Client Hosts & Clusters or VMs and Templates view, select the target VM and click the Snapshots tab in the adjacent pane to display a list of its snapshots.

Select the snapshot to be deleted, click the All Actions button, and select Delete Snapshot from the dropdown menu to launch the Confirm Delete wizard. Click Yes to confirm the deletion. (vSphereViews 69 and 70)

vSphereView 69: Delete VM Snapshot Command
vSphereView 70: Confirm VM Snapshot Deletion

VMware removes the VM’s snapshot files from the vVol datastore and directs the array to destroy the snapshot. The array moves the snapshot and any corresponding memory vVols to its Destroyed Volumes folder for 24 hours, after which it eradicates them permanently. (ArrayView 27)

ArrayView 27: Memory vVol for a Destroyed Snapshot

When VMware deletes a conventional VM snapshot, it reconsolidates (overwrites the VM’s original VMDKs with the data from the delta VMDKs). Depending on the amount of data changed after the snapshot, this can take a long time and have significant performance impact. With FlashArray-based snapshots of vVols, however, there is no reconsolidation. Destroying a FlashArray snapshot is essentially instantaneous. Any storage reclamation occurs after the fact during the normal course of the array’s periodic background garbage collection (GC).
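The destroy-then-eradicate lifecycle described above can be sketched as a small model. This is purely illustrative; the class and method names are invented, not Purity or VASA API calls:

```python
from datetime import datetime, timedelta

# FlashArray pending-eradication window described in the text
ERADICATION_DELAY = timedelta(hours=24)

class DestroyedVolumes:
    """Toy model of the array's Destroyed Volumes folder."""
    def __init__(self):
        self._pending = {}  # volume name -> time it was destroyed

    def destroy(self, name, now):
        # Destruction is metadata-only, so it completes immediately;
        # the volume merely becomes eligible for eradication later.
        self._pending[name] = now

    def recoverable(self, name, now):
        destroyed_at = self._pending.get(name)
        return destroyed_at is not None and now - destroyed_at < ERADICATION_DELAY

    def eradicate_expired(self, now):
        # Background GC permanently removes volumes destroyed >= 24 hours ago.
        expired = [n for n, t in self._pending.items()
                   if now - t >= ERADICATION_DELAY]
        for n in expired:
            del self._pending[n]
        return expired

t0 = datetime(2024, 1, 1, 12, 0)
folder = DestroyedVolumes()
folder.destroy("vvol-VM-vg/Memory-b31d0eb0", t0)
print(folder.recoverable("vvol-VM-vg/Memory-b31d0eb0", t0 + timedelta(hours=1)))   # True
print(folder.eradicate_expired(t0 + timedelta(hours=25)))  # eradicated after 24h
```

Within the 24-hour window a destroyed snapshot remains recoverable from the FlashArray GUI or CLI; after eradication it is gone permanently.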

Unmanaged Snapshots

Snapshots created with VMware tools are called managed snapshots. Snapshots created by external means, such as the FlashArray GUI, CLI, and REST interfaces and protection group policies, are referred to as unmanaged. The only difference between the two is that managed snapshots can be manipulated with VMware tools, whereas unmanaged snapshots must be manipulated with external tools.

Unmanaged snapshots (and volumes) can be used in the VMware environment. For example, FlashArray tools can copy an unmanaged source snapshot or volume to a target data vVol, overwriting the latter’s contents, but with some restrictions:

Volume size

A source snapshot or volume must be of the same size as the target data vVol. FlashArrays can copy snapshots and volumes of different sizes (the target resizes to match the source), but VMware cannot accommodate external vVol size changes. To overwrite a data vVol with a snapshot or volume of a different size, use VMware tools to resize the target vVol prior to copying.

Offline copying

Overwriting a data vVol while it is in use typically causes the application to fail or produce incorrect results. A vVol should be offline to its VM, or the VM should be powered off before overwriting.

Config vVols

Config vVols should only be overwritten with their own snapshots.

Memory vVols

Memory vVols should never be overwritten. There is no reason to overwrite them, and doing so renders them unusable.
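The four restrictions above amount to a pre-flight check that can be sketched in a few lines. This is a minimal model, not a FlashArray or vSphere API; the field names (kind, size, snapshot_of) are invented for illustration:

```python
def check_overwrite(source, target, vm_powered_off):
    """Validate an unmanaged overwrite of a vVol against the guide's rules.

    `source` and `target` are dicts with illustrative keys:
    kind ('data' | 'config' | 'memory'), size (bytes), name.
    Returns a list of violated rules; an empty list means the copy is safe.
    """
    problems = []
    if target["kind"] == "memory":
        # Memory vVols must never be overwritten
        problems.append("memory vVols must never be overwritten")
    if target["kind"] == "config" and source.get("snapshot_of") != target["name"]:
        # Config vVols may only be overwritten with their own snapshots
        problems.append("config vVols may only be overwritten with their own snapshots")
    if source["size"] != target["size"]:
        # VMware cannot accommodate external vVol size changes
        problems.append("source and target must be the same size "
                        "(resize the target with VMware tools first)")
    if not vm_powered_off:
        # Overwriting an in-use vVol typically causes application failure
        problems.append("target VM should be powered off (or the vVol offline)")
    return problems

src = {"kind": "data", "size": 40 << 30, "name": "vvol-src"}
tgt = {"kind": "data", "size": 40 << 30, "name": "vvol-tgt"}
print(check_overwrite(src, tgt, vm_powered_off=True))  # []
```

An empty result corresponds to a copy the guide considers safe; each string names the restriction that would be violated.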

Snapshot Management with the Plugin

Plugin Version 3.0 introduces snapshot features that are not otherwise available with the Web Client. The vVol-based VM listing has a FlashArray Virtual Volume Objects tab that lists virtual disk-vVol relationships and includes four new feature buttons. (vSphereView 71)

vSphereView 71: Snapshot Features Available with Plugin Version 3.0

Three of the Plugin buttons invoke snapshot-related functions:

Import Disk

Instantly presents to the selected VM a copy of any data vVol or vVol snapshot from any vVol-based VM in the vCenter.

Create Snapshot

Creates a FlashArray snapshot of the selected data vVol.

Overwrite Disk

Overwrites the selected vVol with the contents of any data vVol or snapshot in any FlashArray vVol-based VM in the vCenter.

These functions can also be performed with PowerShell or the vRealize Orchestrator. They are included in the Plugin as “one button” conveniences. The subsections that follow describe the functions.

Import Disk

Click the Import Disk button to launch the Import Virtual Volume Disk wizard (vSphereView 72). The wizard lists all VMs with FlashArray data vVols and their managed and unmanaged snapshots.

vSphereView 72: Import vVol Disk Wizard

Select the data vVol or snapshot to be imported and click Create to create a new data vVol having the same size and content as the source.

Because copying FlashArray volumes only reproduces metadata, copies are nearly instantaneous regardless of volume size.
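Why a copy is size-independent can be illustrated with a toy copy-on-write model: a copy duplicates only the (small) block map, never the data blocks themselves, and the two volumes diverge only as blocks are written. This is a conceptual sketch, not how Purity is actually implemented:

```python
class Volume:
    """Toy copy-on-write volume: data lives in shared blocks; copying
    duplicates only the block map, never the underlying data."""
    def __init__(self, block_map):
        self.block_map = block_map  # logical block address -> content id

    def copy(self):
        # Metadata-only: cost scales with map entries, not data size
        return Volume(dict(self.block_map))

    def write(self, lba, content):
        self.block_map[lba] = content  # only the written block diverges

src = Volume({0: "A", 1: "B"})
dst = src.copy()          # "instantaneous" regardless of volume size
dst.write(1, "X")         # source and copy now differ in one block
print(src.block_map)      # {0: 'A', 1: 'B'}
print(dst.block_map)      # {0: 'A', 1: 'X'}
```

The same reasoning explains why FlashArray snapshot creation and Overwrite Disk are nearly instantaneous: each manipulates block maps, not data.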

Create Snapshot

VMware tools can create snapshots of VMs that include all of the VM’s data vVols. The Plugin Create Snapshot function can create a snapshot of a selected virtual disk (data vVol).

To create a snapshot of a data vVol, select the target virtual disk and click the Create Snapshot button (vSphereView 73) to launch the Create Snapshot wizard (vSphereView 75).

vSphereView 73: Create Snapshot Plugin Button

Alternatively, right-click the selected virtual disk and select Create Snapshot from the dropdown menu to launch the wizard. (vSphereView 74)

vSphereView 74: Alternative Create Snapshot Command

Enter a name for the snapshot (optional—if no name is entered, the array assigns a name) and click Create. VMware directs the array to create a snapshot of the data vVol.

Because FlashArray snapshots only reproduce metadata, creation is nearly instantaneous regardless of volume size.

vSphereView 75: Create Snapshot Wizard

Overwrite Disk

To overwrite a data vVol with any data vVol or snapshot of equal size on the same array, select the virtual disk to be overwritten and either click the Overwrite Disk button or right-click the selection and select Overwrite Disk from the dropdown menu (vSphereView 76) to launch the Overwrite Virtual Volume Disk wizard. (vSphereView 78)

vSphereView 76: Overwrite Disk Command

If the source and target objects are not of the same size, the Plugin blocks the overwrite. (vSphereView 77)

vSphereView 77: Plugin Blocking Overwriting of Different-size Source and Target

If the source and target are of equal size, but the VM is powered on, the Plugin warns the administrator to ensure that the target virtual disk is not mounted by the VM, but allows the overwrite to proceed. (vSphereView 78)

vSphereView 78: Overwrite Virtual Volume Disk Wizard

If the VM is powered off and the source and target objects are of the same size, no warnings are issued.

In either case, click Replace to overwrite the target volume with the contents of the source volume or snapshot.

Because copying a FlashArray volume from another volume or from a snapshot only reproduces its metadata, overwrites are nearly instantaneous regardless of target volume size.



Storage Policy Based Management

A major benefit of the vVol architecture is granularity—its ability to configure each virtual volume as required and ensure that the configuration does not change.

Historically, configuring storage with VMware management tools has required GUI plugins. Every storage vendor’s tools were unique; there was no consistency across vendors. Plugins were integrated with the Web Client, but not with vCenter itself, so there was no integration with the SDK or PowerCLI. Moreover, ensuring ongoing configuration compliance was not easy, especially in large environments; assuring compliance with storage policies generally required third-party tools.

With vVol data granularity, an array administrator can configure each virtual disk or VM exactly as required. Moreover, with vVols, data granularity is integrated with vCenter in the form of custom storage policies that VMware administrators create and apply to both VMs and individual virtual disks.

Storage policies are VMware administrator-defined collections of storage capabilities. Storage capabilities are array-specific features that can be applied to volumes on the array. When a storage policy is applied, VMware filters out non-compliant storage so that only compliant targets are presented as options for configuring storage for a VM or vVol.

If an array administrator makes a VM or volume non-compliant with a VMware policy, for example by changing its configuration on the array, VMware marks the VM or VMDK non-compliant. A VMware administrator can remediate non-compliant configurations using only VMware management tools; no array access is required.

FlashArray Storage Capabilities

An array’s capabilities represent the features it offers. When a FlashArray’s VASA providers are registered with vCenter, the array informs vCenter that it has the following capabilities:

  • Encryption of stored data (“data at rest”)
  • Deduplication
  • Compression
  • RAID protection
  • Flash storage

All FlashArrays offer these capabilities; they cannot be disabled. VMware administrators can configure the additional capabilities advertised by the VASA provider and listed in Table 2.

Each capability is specified as a name and a value (values are not case-sensitive):

  • Consistency Group Name: a FlashArray protection group name
  • FlashArray Group: names of one or more FlashArrays
  • Local Snapshot Interval: a time interval in seconds, minutes, hours, days, weeks, months, or years
  • Local Snapshot Policy Capable: Yes or No
  • Local Snapshot Retention: a time interval in seconds, minutes, hours, days, weeks, months, or years
  • Minimum Replication Concurrency: the number of target FlashArrays to replicate to at once
  • Pure Storage FlashArray: Yes or No
  • QoS Support: Yes or No
  • Replication Capable: Yes or No
  • Replication Interval: a time interval in seconds, minutes, hours, days, weeks, months, or years
  • Replication Retention: a time interval in seconds, minutes, hours, days, weeks, months, or years
  • Target Sites: names of specific FlashArrays desired as replication targets

Table 2: Configurable Capabilities Advertised by FlashArray VASA Providers

Storage Capability Compliance

Administrators can specify values for some or all of these capabilities when creating storage policies. VMware performs two types of policy compliance checks:

  • If a vVol were created on the array, could it be configured with the feature?
  • Is a vVol in compliance with its policy? For example, a vVol with a policy of hourly snapshots must (a) be on a FlashArray that hosts a protection group with hourly snapshots and (b) be a member of that protection group.

Only VMs and virtual disks configured with vVols can be compliant. VMFS-based VMs are never compliant, even if their volume is on a compliant FlashArray.

Table 3 lists the circumstances under which a policy offers each capability, and those under which a vVol is in or out of compliance with it. 

Pure Storage FlashArray

  • Offered when: the array is a FlashArray (i.e. always).
  • In compliance when: the vVol is on a FlashArray and the capability is set to Yes.
  • Out of compliance when: the vVol is on a different array vendor/model and the capability is set to Yes, or is on a FlashArray and the capability is set to No.

FlashArray Group

  • Offered when: the array is a FlashArray and its name is listed in the group.
  • In compliance when: the vVol is on a FlashArray with one of the configured names.
  • Out of compliance when: the vVol is not on a FlashArray with one of the configured names.

QoS Support

  • Offered when: the array is a FlashArray with QoS enabled.
  • In compliance when: the vVol is on a FlashArray with QoS enabled and the capability is set to Yes, or on a FlashArray with QoS disabled and the capability is set to No.
  • Out of compliance when: the vVol is on a FlashArray with QoS disabled and the capability is set to Yes, or on a FlashArray with QoS enabled and the capability is set to No.

Consistency Group Name

  • Offered when: the array is a FlashArray with a protection group of that name.
  • In compliance when: the vVol is in a protection group with that name.
  • Out of compliance when: the vVol is not in a protection group with that name.

Local Snapshot Policy Capable

  • Offered when: the array is a FlashArray with at least one protection group (an enabled policy is not required).
  • In compliance when: the vVol is on a FlashArray with at least one protection group (an enabled policy is not required).
  • Out of compliance when: the vVol is on a FlashArray without any protection groups, or on a non-FlashArray.

Local Snapshot Interval

  • Offered when: the array is a FlashArray with at least one protection group whose enabled local snapshot policy has the specified interval.
  • In compliance when: the vVol is in a protection group with an enabled local snapshot policy of the specified interval.
  • Out of compliance when: the vVol is not in a protection group with an enabled local snapshot policy of the specified interval.

Local Snapshot Retention

  • Offered when: the array is a FlashArray with at least one protection group whose enabled local snapshot policy has the specified retention.
  • In compliance when: the vVol is in a protection group with an enabled local snapshot policy of the specified retention.
  • Out of compliance when: the vVol is not in a protection group with an enabled local snapshot policy of the specified retention.

Replication Capable

  • Offered when: the array is a FlashArray (i.e. always).
  • In compliance when: the vVol is in a protection group with an enabled replication target.
  • Out of compliance when: the vVol is not in a protection group with an enabled replication target.

Replication Interval

  • Offered when: the array is a FlashArray with at least one protection group whose enabled replication policy has the specified interval.
  • In compliance when: the vVol is in a protection group with an enabled replication policy of the specified interval.
  • Out of compliance when: the vVol is not in a protection group with an enabled replication policy of the specified interval.

Replication Retention

  • Offered when: the array is a FlashArray with at least one protection group whose enabled replication policy has the specified retention.
  • In compliance when: the vVol is in a protection group with an enabled replication policy of the specified retention.
  • Out of compliance when: the vVol is not in a protection group with an enabled replication policy of the specified retention.

Minimum Replication Concurrency

  • Offered when: the array is a FlashArray with at least one protection group that allows at least the specified number of replication targets.
  • In compliance when: the vVol is in a protection group with at least the specified number of allowed replication targets.
  • Out of compliance when: the vVol is not in a protection group with at least the specified number of allowed replication targets.

Target Sites

  • Offered when: the array is a FlashArray with at least one protection group that includes one or more of the specified replication targets. If Minimum Replication Concurrency is also set, the group must include at least that many of the listed FlashArrays.
  • In compliance when: the vVol is in a protection group with one or more of the specified replication targets. If Minimum Replication Concurrency is also set, it must be replicated to at least that many of the listed target FlashArrays.
  • Out of compliance when: the vVol is not in a protection group replicating to the required number of the specified target FlashArrays.

Table 3: Capability Offering and Compliance Rules for FlashArray vVols
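Two representative rows of the table can be expressed as Python predicates. This is a sketch of the logic only; the function names and dictionary keys are illustrative and not part of any FlashArray or VASA API:

```python
def qos_compliant(on_flasharray, qos_enabled, capability_value):
    """'QoS Support' row: the vVol complies when the array's QoS state
    agrees with the capability value (Yes/No, case-insensitive)."""
    if not on_flasharray:
        return False
    return qos_enabled == (capability_value.lower() == "yes")

def snapshot_interval_compliant(protection_groups, required_interval):
    """'Local Snapshot Interval' row: the vVol must belong to a protection
    group whose enabled local snapshot policy has the required interval."""
    return any(g["snapshot_enabled"] and g["snapshot_interval"] == required_interval
               for g in protection_groups)

print(qos_compliant(True, qos_enabled=True, capability_value="Yes"))   # True
print(qos_compliant(True, qos_enabled=False, capability_value="Yes"))  # False

groups = [{"snapshot_enabled": True, "snapshot_interval": "1h"}]
print(snapshot_interval_compliant(groups, "1h"))  # True
print(snapshot_interval_compliant(groups, "5m"))  # False
```

Note that mere membership in some protection group is not enough: the group's policy must be enabled and must match the specified value, exactly as the table describes.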

Combining Capabilities and Storage Compliance

This section describes an example of combining capabilities into a policy. Storage policies are a powerful method of assuring specific configuration control, but they affect how vVol compliance is viewed. For an array or vVol to be compliant with a policy:

  1. The array or vVol must comply with all of the policy’s capabilities
  2. For snapshot and replication capabilities, the array must have at least one protection group that offers all of the policy’s capabilities. For example, if a policy requires hourly local snapshots and replication every 5 minutes, a protection group with hourly snapshots and a different protection group with 5-minute replication do not make the array compliant. VMware requires that volumes be in a single group during policy configuration, so to be compliant in this example, an array would need at least one protection group with both hourly snapshots and 5-minute replication.
  3. Some combinations of capabilities cannot be compliant. For example, setting an array’s Local Snapshot Policy Capable capability to No and specifying a policy that includes snapshots means that no storage compliant with the policy can be hosted on that array.
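Rule 2 above, that a single protection group must offer all of a policy's capabilities, can be sketched directly. The dictionary keys are invented for illustration:

```python
def array_compliant(protection_groups, policy):
    """An array satisfies a policy only if ONE protection group offers
    ALL of the policy's snapshot/replication capabilities; capabilities
    spread across several groups do not count."""
    def offers_all(group):
        return all(group.get(cap) == val for cap, val in policy.items())
    return any(offers_all(g) for g in protection_groups)

policy = {"snapshot_interval": "1h", "replication_interval": "5m"}
groups = [
    {"snapshot_interval": "1h"},      # hourly snapshots only
    {"replication_interval": "5m"},   # 5-minute replication only
]
print(array_compliant(groups, policy))  # False: no single group offers both

groups.append({"snapshot_interval": "1h", "replication_interval": "5m"})
print(array_compliant(groups, policy))  # True
```

The example mirrors the hourly-snapshot/5-minute-replication case in rule 2: two partial groups fail, while one group carrying both capabilities succeeds.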

Creating a Storage Policy

vCenter makes the capabilities advertised by an array’s VASA Provider available to VMware administrators for assembling into storage policies. Administrators can create policies by using APIs, GUI, CLI, or other tools. This section describes two ways of creating policies for FlashArray-based vVols:

Custom Policy Creation

Using the Web Client to create custom policies using capabilities published by the FlashArray VASA provider

Importing FlashArray Protection Groups

Using the Plugin to create storage policies by importing a FlashArray protection group configuration

Creating Custom Storage Policies

Click the home icon at the top of the Web Client home screen, and select Policies and Profiles from the dropdown menu (vSphereView 79) to display the VM Storage Policies pane.

vSphereView 79: Policies and Profiles Command

Select the VM Storage Policies tab and click the Create VM Storage Policy button (vSphereView 80) to launch the Create New VM Storage Policy wizard. (vSphereView 81)

vSphereView 80: Create VM Storage Policy Button

Select a vCenter from the dropdown and enter a descriptive name for the policy.

vSphereView 81: Create New VM Storage Policy Wizard

It is a best practice to use a naming convention that is operationally meaningful. For example, the name in vSphereView 81 suggests a policy configured on FlashArray storage with 1 hour local snapshots and a 15 minute replication interval.

Configure pages 2 and 2a as necessary (refer to VMware documentation for instructions), click forward to the 2b Rule-set 1 page, and select the FlashArray VASA provider in the <Select provider> dropdown to create the storage policy. (vSphereView 82)

vSphereView 82: Rule-set 1 Page 2b of the Create New VM Storage Policy Wizard

A storage policy requires at least one rule. To locate all VMs and virtual disks to which this policy will be assigned on FlashArrays, click the <Add rule> dropdown and select the Pure Storage FlashArray capability (vSphereView 83).

vSphereView 83: Adding a Storage Policy Rule

The selected rule name appears above the <Add rule> dropdown, and a dropdown list of valid values appears to the right of it. Select Yes and click Next (not shown) to create the policy. As defined thus far, the policy requires that VMs and vVols to which it is assigned be located on FlashArrays, but they are not otherwise constrained. When a policy is created, the Plugin checks registered arrays for compliance and displays a list of vVol datastores on arrays that support it (vSphereView 84).

vSphereView 84: List of Arrays Compatible with a New Storage Policy

The name assigned to the policy (FlashArray-1hrSnap15minReplication—see vSphereView 81) suggests that it should specify hourly snapshots and 15-minute replications of any VMs and virtual volumes to which it is assigned. Click Back (not shown in vSphereView 84) to edit the rule-set.

FlashArray replication and snapshot capabilities require component rules. Click Add component and select Replication from the dropdown (vSphereView 85) to display the Replication component rule pane (vSphereView 86).

vSphereView 85: Selecting a Component for the Policy

Select the provider (vSphereView 86), and add rules, starting with the local snapshot policy.

vSphereView 86: Selecting Replication Provider

Click the Add Rule dropdown, select Local Snapshot Interval, enter 1 in the text box, and select Hours as the unit. (vSphereView 87)

vSphereView 87: Specifying Snapshot Interval Rule

Click the Add Rule dropdown again, select Remote Replication Interval, enter 15 in the text box, select Minutes as the unit (vSphereView 88), and click Next to display the list of registered arrays that are compatible with the augmented policy. vSphereView 89 indicates that there are two such arrays.

vSphereView 88: Specifying Replication Interval Rule
vSphereView 89: Arrays Compatible with the "FlashArray-1hr-Snap15minReplication" Storage Policy

A policy can be created even if no registered vVol datastores are compatible with it, but it cannot be assigned to any VMs or vVols. Storage can be adjusted to comply, for example, by creating a compliant protection group, or alternatively, the policy can be adjusted to be compatible with existing storage.

Auto-policy Creation with the Plugin

As an alternative to custom policies, the Plugin can import FlashArray protection groups and create vCenter policies with the same attributes.

vSphereView 90: Plugin Import Protection Groups Button

From the Plugin’s home pane, select an array and either click the Import Protection Groups button (vSphereView 90) or right-click the selected array and select Import Protection Groups on the dropdown menu (vSphereView 91) to launch the Import Protection Groups wizard. (vSphereView 92)

vSphereView 91: Import Protection Group Command
vSphereView 92: Import Protection Groups Wizard (1)

The wizard lists the available protection groups on the selected array along with a brief summary of their local snapshot and remote replication policies. For more detailed information, refer to the protection group display in the FlashArray GUI.

A grayed-out listing indicates a protection group whose properties match an existing vCenter storage policy.

Select the protection groups to be imported by checking the boxes and click the Import button (vSphereView 93). 


vSphereView 93: Import Protection Groups Wizard (2)

The protection group parameters used to create a storage policy are:

  • Snapshot interval
  • Short-term per-snapshot retention
  • Replication interval
  • Short-term per-replication snapshot retention

The Plugin creates storage policies on all vCenters in the environment to which the logged-in administrator has access. If vCenters are in enhanced linked mode (sharing SSO environments), the policies are created on all of them.

On the Web Client Policies and Profiles page, select the VM Storage Policies tab to display the vCenter’s default, previously created, and imported storage policies (vSphereView 94). The lower grouping in vSphereView 94 represents the imported policies (vSphereView 93). Each policy is created in the two available vCenters.

vSphereView 94: Default and Imported Storage Policies

The policy names supplied by the Plugin describe the policies in terms of snapshot and replication intervals.

Select a policy to view the details of its capabilities (vSphereView 95). In the FlashArray GUI Storage view Protection Groups pane, select platinum to display the snapshot and replication details for the protection group imported to create the Snap 1 HOURS Replication 5 MINUTES policy. (ArrayView 28)

vSphereView 95: Web Client View of Policy Details for Snap 1 HOURS Replication 5 Minutes
ArrayView 28: FlashArray GUI View of Details for Platinum Protection Group

Changing a Storage Policy

A VMware administrator can edit a storage policy that no longer meets the needs of the VMs to which it is assigned.

To change a policy’s parameters from the Policies and Profiles page in the Web Client, select VM Storage Policies, select the policy to be changed, and click the Edit Settings… button to display a list of the policy’s rules. Make the needed rule changes and click OK.

vSphereView 96: Edit Settings... Button
vSphereView 97: Changing a Policy Rule

Clicking OK launches the VM Storage Policy in Use wizard (vSphereView 98), offering two options for resolution:

Manually later

Flags all VMs and virtual disks to which the changed policy is assigned as Out of Date (vSphereView 99).

Now

Assigns the changed policy to all VMs and virtual disks assigned to the original policy.

Click Yes to display the policy pane and select the Monitor tab.

vSphereView 98: VM Storage Policy in Use Wizard
vSphereView 99: Out of Date Storage Policies

If Manually later is selected, VMs and vVols show Out of Date compliance status. Update the policies for the affected VMs and virtual disks by selecting them and clicking the Reapply storage policy to all out of date entities button indicated in vSphereView 100.

vSphereView 100: Reapply Storage Policy Button

Selecting Now in the VM Storage Policy in Use wizard (vSphereView 98) does not reconfigure the vVols on the array, so it typically causes VMs and virtual disks to show Noncompliant status. (vSphereView 101)

vSphereView 101: Non-compliant VM Objects

The subsection titled Changing a VM’s Storage Policy describes the procedure for bringing non-compliant VMs and virtual disks into compliance.

Checking VM Storage Policy Compliance

A vVol-based VM or virtual disk may become noncompliant with its vCenter storage policy when a storage policy is changed, when an array administrator reconfigures volumes, or when the state of an array changes.

For example, if an array administrator changes the replication interval for a protection group that corresponds to a vCenter storage policy, the VMs and virtual disks to which the policy is assigned are no longer compliant.

To determine whether a VM or virtual disk is compliant with its assigned policy, either select the policy and display the objects assigned to it (vSphereViews 99 and 101), or validate VMs and virtual disks for compliance with a given policy.

From the Web Client home page, click the VM Storage Policies icon to view the vCenter’s list of storage policies (vSphereView 102). Select a policy, click the Monitor tab, and click the VMs and Virtual Disks button (vSphereView 104) to display a list of the VMs and virtual disks to which the policy is assigned.

vSphereView 102: VM Storage Policies Icon
vSphereView 103: Selecting a Policy for Validation
vSphereView 104: Validating Policy Compliance

Each policy’s status is one of the following:

Compliant

The VM or virtual disk is configured in compliance with the policy.

Noncompliant

The VM or virtual disk is not configured according to the policy.

Out of Date

The policy has been changed but has not been re-applied. The VM or virtual disk may still be compliant, but the policy must be re-applied to determine that.

The subsection titled Changing a VM’s Storage Policy describes making objects compliant with their assigned storage policies.

Assigning a Storage Policy to a VM or Virtual Disk

The Web Client can assign a storage policy to a new VM or virtual disk when it is created, deployed from a template, or cloned from another VM. A VMware administrator can change the policy assigned to a VM or virtual disk. Finally, a VM’s storage policy can be changed during Storage vMotion.

Assigning a Storage Policy to New VM

A VMware administrator can assign a storage policy to a new VM created using the Deploy from Template wizard. (The procedure is identical to policy assignment with the Create New Virtual Machine and Clone Virtual Machine wizards.)

Right-click the target template in the Web Client inventory pane’s VMs and Templates list, and select New VM from This Template.

vSphereView 105: New VM from Template Command

Select options in steps 1a and 1b, and advance the wizard to step 1c, Select Storage.

vSphereView 106: Select Storage Step of Template

Setting a Policy for an Entire VM

In the Select Storage pane, select Thin Provision from the Select virtual disk format dropdown (FlashArrays only support thin-provisioned volumes; selecting other options causes VM creation to fail), and either select a datastore (VMFS, NFS, or vVol) from the list or a policy from the VM storage policy dropdown.

Selecting a policy filters the list to include only compliant storage. For example, selecting the built-in vVol No Requirements Policy would filter the list to show only vVol datastores.

vSphereView 107: Selecting a Storage Policy

Selecting the FlashArray Snap 1 HOURS Replication 5 MINUTES policy filters out datastores on arrays that do not have protection groups with those properties.

vSphereView 108: Select VM Storage Policy
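The filtering behavior described above can be modeled as a chain of predicates applied to the datastore list. The datastore fields and policy checks below are invented for illustration and do not correspond to SPBM API objects:

```python
def compatible_datastores(datastores, policy_checks):
    """Keep only datastores that satisfy every capability check in the
    policy, the way the VM storage policy dropdown filters the list."""
    return [ds for ds in datastores
            if all(check(ds) for check in policy_checks)]

datastores = [
    {"name": "vmfs-1", "type": "VMFS", "array": "FlashArray"},
    {"name": "vvol-1", "type": "vVol", "array": "FlashArray"},
    {"name": "vvol-2", "type": "vVol", "array": "OtherVendor"},
]

# A "no requirements"-style policy: only requires a vVol datastore
no_req = [lambda ds: ds["type"] == "vVol"]
print([d["name"] for d in compatible_datastores(datastores, no_req)])
# ['vvol-1', 'vvol-2']

# A FlashArray-specific policy adds a further check
flasharray_policy = no_req + [lambda ds: ds["array"] == "FlashArray"]
print([d["name"] for d in compatible_datastores(datastores, flasharray_policy)])
# ['vvol-1']
```

Each additional rule in a policy narrows the candidate list, which is why a policy such as FlashArray Snap 1 HOURS Replication 5 MINUTES leaves only datastores on arrays with a matching protection group.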

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned (e.g., flasharray-vvol-1:platinum in vSphereView 110), or, if Automatic is selected, VMware directs the array to create a protection group with the specified capabilities.

vSphereView 109: Select Automatic Replication Group

Whichever option is chosen, the VM’s config vVol and all of its data vVols are assigned the same policy. (Swap vVols are never assigned a storage policy.) Click Finish (not shown in vSphereView 110) to complete the wizard. The VM is created and its data and config vVols are placed in the assigned protection group.

vSphereView 110: Assign an Existing Replication Group

BEST PRACTICES: Pure Storage recommends assigning local snapshot policies to all config vVols to simplify VM restoration.

All FlashArray volumes are thin provisioned, so the Thin Provision virtual disk format should always be selected. With FlashArray volumes, there is no performance impact for thin provisioning.

ArrayView 29 shows the FlashArray GUI view of a common storage policy for an entire vVol-based VM.

ArrayView 29: GUI View of a VM-wide Storage Policy

Assigning a Policy to Each of a VM's Virtual Disks

In most cases, VMware administrators put all of a VM’s volumes in the same protection group, thereby assigning the same storage policy to them.

Alternatively, assign a separate policy to some or all of a VM’s volumes by clicking the Advanced button of the Select Storage step (1c) of the Deploy from Template wizard to display the advanced view.

vSphereView 111: Advanced Button for per-vVol Storage Policies

In the advanced view, a separate storage policy can be specified for the VM’s config vVol as well as for each virtual disk (data vVol).

The Configuration File line refers to the VM’s config vVol. The remaining lines enumerate its data vVols (Hard Disk 1 in the example).

vSphereView 112: Select Storage Advanced View

To select a storage policy for a vVol, click the dropdown in the Storage column of its row and select Browse to launch the Select a datastore cluster or datastore wizard.

vSphereView 113: Browse for Custom Storage Policy

Either select a VMFS, NFS or vVol datastore from the list or select a policy from the dropdown.

vSphereView 114: Selecting a Storage Policy for a vVol

Selecting a policy from the VM storage policy dropdown filters the list to include only compliant datastores. For example, selecting the vVol No Requirements Policy lists only vVol datastores. 

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned (for example, flasharray-vvol-1:platinum in vSphereView 115).

vSphereView 115: Selecting Storage Policy for vVol

Alternatively, if Automatic is selected (as in vSphereView 115), the array creates a protection group with the capabilities specified by the policy. Whichever option is chosen, the policy is assigned to the vVol.

For example, a VM’s config vVol might be assigned a 1 hour snapshot and 1 hour replication storage policy, corresponding to the flasharray-vvol-1:gold replication group, whereas its data vVols might be assigned a 1 hour snapshot and 5 minute replication policy, corresponding to the flasharray-vvol-1:platinum replication group. vSphereView 116 shows the Select a datastore cluster or datastore panes for configuring the two policies.

vSphereView 116: Separate Storage Policies for Config and Data vVols

ArrayViews 30 and 31 list the contents of the two protection groups that correspond to the vCenter replication groups.

ArrayView 30: gold Protection Group
ArrayView 31: platinum Protection Group

Changing a VM's Storage Policy

To change a VM’s storage policy, a VMware administrator assigns a new policy to it. VMware directs the array to reconfigure the affected vVols. If the change makes the VM or any of its virtual disks non-compliant, the VMware administrator must adjust their policies.

To change a VM’s storage policy, select the VMs and Templates view in the Web Client inventory pane, (1) right-click the target VM, (2) select VM Policies from the dropdown menu, and (3) select Edit VM Storage Policies from the secondary dropdown (vSphereView 117) to launch the Edit VM Storage Policies wizard (vSphereView 118).

vSphereView 117: Edit VM Storage Policies Command

The storage policy for the VM in the example specifies a 1 hour snapshot interval and a 5 minute replication interval, so both the config and data vVols are in the array’s platinum protection group.

ArrayView 32: Config and Data vVols in the Same Protection Group
vSphereView 118: Edit VM Storage Policies Wizard
vSphereView 119: Apply a Common Storage Policy to All of a VM's vVols

To change the storage policy assigned to a VM’s config vVol or a single data vVol, select a policy from the dropdown in the VM Storage Policy column of its row in the table.

vSphereView 120: Change Config vVol Storage Policy

Selecting a policy that is not valid for the array that hosts a vVol displays a Datastore does not match current VM policy error message. To satisfy the selected policy, the VM would have to be moved to a different array (reconfiguration would not suffice).

A storage policy change may require that the replication groups for one or more vVols be changed. If this is the case, the Replication Groups indicator is marked with an alert icon.

vSphereView 121: Non-Compliant Datastore
vSphereView 122: One or More Replication Groups not Configured


This alert typically appears for one of two reasons:

  1. One or more vVols are in replication groups (FlashArray protection groups) that do not comply with the new storage policy.
  2. The new storage policy requires that vVols be in a replication group, and one or more vVols are not.

If the alert appears, or to verify or change the replication group, click Configure to launch the Configure VM Replication Groups wizard.

To assign a single replication group to all of a VM’s vVols, click the Common replication group radio button, select a replication group from the Replication group dropdown, and click OK.

vSphereView 123: Configure a VM Replication Group

Note: If no policy is shared by all of the VM’s vVols, the Replication group dropdown does not appear.

To assign replication groups to individual vVols, click the Replication group per storage object radio button, then select a replication group for each vVol to be replicated from the dropdown in its row. When selections are complete, click OK.

vSphereView 124: Configure vVol Replication Groups

Click OK again to complete reconfiguration. VMware directs the array to change the vVols’ protection group membership as indicated in the selections for the new policy.

vSphereView 125: Configure vVol Replication Groups
ArrayView 33: Common VM Protection Group

Assigning a Policy during Storage Migration

Compliance with an existing or newly assigned storage policy may require migrating a VM to a different array. For example, VM migration is required if:

  • A policy specifying a different array than the VM’s or virtual disk’s current location is assigned.
  • A policy requiring QoS (or not) is assigned to a VM or virtual disk located on an array with the opposite QoS setting.
  • A policy specifying snapshot or replication parameters not available with any protection group on a VM or virtual disk’s current array is assigned.

Some of these situations can be avoided by array reconfiguration, for example by creating a new protection group or inverting the array’s QoS setting. Others, such as a specific array requirement, cannot. If an array cannot be made to meet a policy requirement, the VMware administrator must use Storage vMotion to move the VM or virtual disk to one that can satisfy the requirement. The administrator can select a new storage policy during Storage vMotion.

For example, vSphereView 126 illustrates a VM whose assigned storage policy specifies hourly snapshots and replication with one-day retention for both.

vSphereView 126: VM Storage Policy Specifying Hourly Snapshots and Replication

The VM in this example is located on flasharray-vvol-1, in protection group gold.

ArrayView 34: Protection Group with Hourly Snapshots and Replication Specified

The VM is compliant with the vCenter-assigned Snap 1 HOURS Replication 1 HOURS policy.

vSphereView 127: VM Compliance with Storage Policy

If the VMware administrator changes the VM’s storage policy to one that requires not only the snapshot and replication parameters, but also that the VM and its vVols be located on array flasharray-vvol-2, the VM and its vVols become noncompliant because they are located on flasharray-vvol-1.

vSphereView 128: New VM Storage Policy Requiring Location on a Specific FlashArray

No amount of reconfiguration of FlashArray flasharray-vvol-1 can remedy the discrepancy, so to make the VM compliant with the new policy, Storage vMotion must move it to flasharray-vvol-2.

vSphereView 129: VM Out of Compliance with its Assigned Storage Policy

To move a VM between arrays using Storage vMotion, from the VMs and Templates inventory pane, right-click the VM to be moved, and select Migrate from the dropdown menu to launch the Select the migration type wizard.

vSphereView 130: Select Migration Type Wizard

Click Change storage only and Next to launch the Migrate wizard.

vSphereView 131: Migrate (Storage vMotion) Wizard

Reselect the storage policy from the dropdown (do not select Keep existing storage policy), reselect the target from the list of datastores with compatible policies, and click Finish to migrate the VM to the target array and configure the vVols as specified in the reselected policy. When migration completes, the VM is on the target array and it and its vVols are compliant with the assigned storage policy.

BEST PRACTICE: Pure Storage recommends reselecting the same storage policy rather than selecting the Keep existing storage policy option in order to provide Storage vMotion with the information it needs to complete a migration.

The Migrate wizard contains an Advanced button. The subsection titled Assigning a Policy to Each of a VM’s Virtual Disks describes the use of the advanced option to specify per-vVol storage policies.

vSphereView 132 illustrates the example VM (vSphereView 127) after (a) the policy in vSphereView 128 has been assigned to it, and (b) it has been migrated to flasharray-vvol-2. ArrayView 35 illustrates the GUI view of the example VM’s vVols, now located in flasharray-vvol-2’s gold protection group.

vSphereView 132: Migrated VM Compliant with its Assigned Storage Policy
ArrayView 35: Protection Group on flasharray-vvol-2 Showing Migrated VM's vVols



Replicating vVols

With VASA version 3, FlashArrays can replicate vVols. VMware is aware of replicated VMs and can fail them over and otherwise manage replication. Additional information is available from VMware at:

VMware vVol replication has three components:

Replication Policies 

Specify sets of VM requirements and configurations for replication that can be applied to VMs or virtual disks. If configuration changes violate a policy, VMs to which it is assigned become non-compliant.

Replication Groups

Correspond to FlashArray protection groups, and are therefore consistency groups in the sense that replicas of them are point-in-time consistent. Replication policies require replication groups.

Failure domains

Sets of replication targets. VMware requires that a VM’s config vVol and data vVols be replicated within a single failure domain.

In the FlashArray context, a failure domain is a set of arrays. For two vVols to be in the same failure domain, one must be replicated to the same arrays as the other. In other words, a VM’s vVols must all be located in protection groups that have the same replication targets.

vSphereView 133: A Policy that Specifies Different Replication Fault Domains
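From the FlashArray CLI, the replication targets of the protection groups involved can be compared directly. A sketch (the group names are hypothetical):

```shell
# Both protection groups must list the same target arrays for their
# vVols to be in the same failure domain
purepgroup list gold platinum
```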

Replication policies can only be assigned to config vVols and data vVols. Other VM objects inherit replication policies in the following way:

  • A memory vVol inherits the policy of its config vVol.
  • A swap vVol, which exists only while a VM is powered on, is never replicated.

The initial release of FlashArray vVol support does not preserve local snapshot chains through replication. VMware-managed local snapshots are not replicated and are therefore unavailable after a VM fails over. For VMs that are to be replicated, either do not create VMware-managed snapshots or delete them before failover. Pure Storage plans to deliver preservation of VMware-managed snapshot chains through failover in a future release of FlashArray software.

VMware can perform three types of failovers on vVol-based VMs:

Planned Failover

Movement of a VM from one datacenter to another, for example for disaster avoidance or planned migration. Both source and target sites are up and running throughout the failover. Once a planned failover is complete, replication can be reversed so that the failed over VM can be failed back.

Unplanned Failover

Movement of a VM when a production datacenter fails in some way. Failures may be temporary or irreversible. If the original datacenter recovers after failover, automated reprotection may be possible. Otherwise, a new replication scheme must be configured.

Test Failover

Similar to planned failover, but does not bring down the production VM. Test failover recovers temporary copies of protected VMs to verify the failover plan before an actual disaster or migration.

At the time of publication, VMware vCenter Site Recovery Manager (SRM) does not support vVols with array-based replication; SRM supports vVol failover only through vSphere Replication. Refer requests for SRM support of vVols and array-based replication to VMware.

These vVol failover modes can be implemented using the VMware SDK, tools such as PowerCLI or vRealize Orchestrator, or any other tool that can access the VMware SPBM SDK. Pure Storage plans to make PowerCLI example scripts and tools available on the Pure Storage Community and GitHub repositories as they are created and validated.

PowerCLI version 6.5.4 or newer is required for use with FlashArray-based vVols.
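As an illustration of what such scripting looks like, the following sketch drives a test failover using the SPBM cmdlets introduced in PowerCLI 6.5.4. The vCenter name, the group-selection logic, and the VM registration step are assumptions to be adapted to the environment:

```powershell
# Connect to the recovery-site vCenter (server name is hypothetical)
Connect-VIServer -Server vcenter-dr.example.com

# Pick a replication group for which this site is the replication target
$group = Get-SpbmReplicationGroup |
    Where-Object { $_.State -eq 'Target' } |
    Select-Object -First 1

# Rehearse recovery; the cmdlet returns the datastore paths of the
# recovered .vmx files, which can be registered for validation
$vmxPaths = Start-SpbmReplicationTestFailover -ReplicationGroup $group
$vmxPaths | ForEach-Object { New-VM -VMFilePath $_ }

# Clean up the rehearsal when validation is complete
Stop-SpbmReplicationTestFailover -ReplicationGroup $group
```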



vVol Reporting

The vVol architecture that gives VMware insight into FlashArrays also gives FlashArrays insight into VMware. With vVol granularity, arrays can recognize and report on both entire vVol-based VMs (implemented as volume groups) and individual virtual disks (implemented as volumes).

Storage Consumption Reporting

FlashArrays represent VMs as volume groups. The Volumes tab of the GUI Storage pane lists an array’s volume groups. Select a group that represents a VM to display a list of its volumes.

ArrayView 36: GUI View of a Volume Group and its Volumes

The top panel of the display shows averaged and aggregated storage consumption statistics for the VM. Click the Space button in the Volumes pane to display storage consumption statistics for individual vVols.

ArrayView 37: GUI View of a Volume Group's Per-Volume Storage Consumption

To view a VM’s storage consumption history, switch to the Analysis pane Capacity view and select the Volumes tab.

ArrayView 38: GUI Analysis

To view history for VMs (volume groups) or vVols (volumes), select an object type from the dropdown menu.

ArrayView 39: Selecting Volume Statistics

Click the desired object in the list to display its storage consumption history. (Alternatively, enter a full or partial VM name in the search box to filter the list.)

The array displays a graph of the selected object’s storage consumption over time. The graph is adjustable—time intervals from 24 hours to 1 year can be selected. It distinguishes between storage consumed by live volumes and that consumed by their snapshots. The consumption reported is for volume and snapshot data that is unique to the objects (i.e., not deduplicated against other objects). Data shared by two or more volumes or snapshots is reported separately on a volume group-wide basis as Shared.

ArrayView 40: GUI Storage Capacity History for a Volume Group

Data Reduction with vVol Managed Snapshots on Purity 5.1.3+

Beginning in Purity 5.1.3, managed snapshot behavior changed: instead of taking array-based snapshots of the data vVols, the array copies them to new volumes in the VM’s volume group. As a result, data reduction numbers differ. VMware is essentially asking the array, through VASA, to create several identical volumes, which the array deduplicates. The more managed snapshots that are taken, the higher the volume group’s reported data reduction becomes, which in turn raises the array-wide data reduction numbers.

Performance Reporting

The FlashArray GUI can also report VM and vVol performance history. In the Analysis pane Performance view, the history of a VM’s or vVol’s IOPS, latency, and data throughput (Bandwidth) can be viewed.

Click the Volumes tab to display a list of the array’s VMs (volume groups) and/or vVols (volumes).

ArrayView 41: GUI Analysis Pane

To view an object’s performance history, select Volume Groups, Volumes, or All in the dropdown, and select a VM or vVol from the resulting list.

ArrayView 42: Selecting Volume Display

A VM’s or vVol’s performance history graph shows its IOPS, throughput (Bandwidth), and latency history in separate stacked charts.

The graphs show the selected object’s performance history over time intervals from 24 hours to 1 year. Read and write performance can be shown as separate curves. For VMs, latency is the average across all volumes; throughput and IOPS are accumulated across volumes.

ArrayView 43: GUI Performance History for a Volume Group



Migrating VMs to vVols

Storage vMotion can migrate VMs from VMFS, NFS, or Raw Device Mappings (RDMs) to vVols.

Migrating a VMFS or NFS-based VM to a vVol-based VM

From the Web Client VMs and Templates inventory pane, right-click the VM to be migrated and select Migrate from the dropdown menu to launch the Migrate wizard.

vSphereView 134: Web Client Migrate Command

Select Change Storage Only to migrate the VM’s storage, or Change both compute resource and storage to migrate both storage and compute resources.

vSphereView 135: Selecting Storage-only Migration

In the ensuing Select storage step, select a vVol datastore as a migration target. Optionally, select a storage policy for the migrated VM to provide additional features. (The section titled Storage Policy Based Management describes storage policies.)

Click Finish (not visible in vSphereView 135) to migrate the VM. If original and target datastores are on the same array, the array uses XCOPY to migrate the VM. FlashArray XCOPY only creates metadata, so migration is nearly instantaneous.

If source and target datastores are on different arrays, VMware uses reads and writes, so migration time is proportional to the amount of data copied.

When migration completes, the VM is vVol-based. Throughout the conversion, the VM remains online.

vSphereView 136: Select Storage Policy

ArrayView 44 shows a migrated VM’s FlashArray volume group.

ArrayView 44: GUI View of a Migrated VM (Volume Group)

Migration of a VM with VMDK Snapshots

Migrating a VM that has VMware-managed snapshots is identical to the process described in the preceding subsection. In a VMFS or NFS-based VM, snapshots are VMDK files in the datastore that contain changes to the live VM. In a vVol-based VM, snapshots are FlashArray snapshots.

Storage vMotion automatically copies a VM’s VMware VMFS snapshots. ESXi directs the array to create the necessary data vVols, copies the source VMDK files to them and directs the array to take snapshots of them. It then copies each VMFS-based VMware snapshot to the corresponding data vVol, merging the changes. All copying occurs while the VM is online.

BEST PRACTICE: Only virtual hardware versions 11 and later are supported. If a VM has VMware-managed VMFS-based memory snapshots and is at virtual hardware level 10 or earlier, delete the memory snapshots prior to migration. Upgrading the virtual hardware does not resolve this issue. Refer to VMware’s note here

Migrating Raw Device Mappings

A Raw Device Mapping can be migrated to a vVol in any of the following ways:

  • Shut down the VM and perform a storage migration. Migration converts the RDM to a vVol.
  • Add to the VM a new virtual disk in a vVol datastore. The new virtual disk must be of the same size as the RDM and located on the same array. Copy the RDM volume to the vVol, redirect the VM’s applications to use the new virtual disk, and delete the RDM volume.
  • Remove the RDM from the VM and add it back as a vVol. At the time of publication, this process requires Pure Storage Technical Support assistance. Pure Storage plans to make a user-accessible mechanism for achieving this available in the future.

For more information, refer to the blog post



Data Mobility with vVols

A significant, but under-reported benefit of vVols is data set mobility. Because a vVol-based VM’s storage is not encapsulated in a VMDK file, the VM’s data can easily be shared and moved.

A data vVol is a virtual block device presented to a VM; it is essentially identical to a virtual mode RDM. Thus, a data vVol (or a volume created by copying a snapshot of it) can be used by any software that can interpret its contents, for example an NTFS or XFS file system created by the VM.

Therefore, it is possible to present a data vVol, or a volume created from a snapshot of one, to a physical server; to present a volume created by a physical server to a vVol-based VM as a vVol; or to overwrite a vVol from a volume created by a physical server.
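For example, a point-in-time copy of a data vVol might be presented to a physical host from the FlashArray CLI along these lines (the volume group, snapshot, volume, and host names are hypothetical):

```shell
# Copy a snapshot of a data vVol to a new, independent volume
purevol copy vg-vm1/data-vvol-1.snap1 analytics-copy

# Present the copy to a physical (non-ESXi) host
purevol connect analytics-copy --host physical-host-01
```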

This is an important benefit of the FlashArray vVol implementation. The following blog posts contain examples of and additional information about data mobility with FlashArray vVols:



Appendix I

While the Plugin is not required to use FlashArray-based vVols, it simplifies administrative procedures that would otherwise require either coordinated use of multiple GUIs or scripting.

Version 3.0 of the Plugin and later versions support vVols. To verify that a Plugin version that supports vVols is installed, select Administration in the Web Client home screen inventory pane and select Client Plug-Ins to display the Client Plug-ins pane (vSphereView 137).

Version 3.0 of the FlashArray Plugin for the vSphere Web Client integrates with the vSphere Web Client (also called Flash/Flex Client). Plugin support for VMware’s emerging vSphere Client (HTML5) is under development.

vSphereView 137: Web Client Plug-ins Pane

If the Pure Storage Plugin is not installed, or if the installed version is earlier than 3.0, use the FlashArray GUI to install or upgrade the Plugin to a version that supports vVols.

As a FlashArray administrator, select the Software tab on the Settings pane. The Available Version field (ArrayView 45) lists the array’s current Plugin version. If the version is earlier than 3.0, move to an array that hosts Version 3.0 or later. If no such array is available, contact Pure Storage Support to obtain a supported version of the Plugin.

ArrayView 45: Plugin Installation and Upgrade

To install the Plugin in the vCenter Web Client, click the edit button in the vSphere Plugin pane (ArrayView 45) to launch the Edit vSphere Plugin Configuration wizard (ArrayView 46).

ArrayView 46: Edit vSphere Plugin Configuration Wizard

The target vCenter validates the administrator credentials and returns the version of the installed Plugin (if any) in the Version on vCenter field. ArrayView 47 shows the vCenter responses when no Plugin is installed (left) and when the installed version is earlier than 3.0 (right). Click Install or Upgrade as required.

ArrayView 47: vCenter Responses to FlashArray Plugin Query

When installation is complete, the wizard displays a confirmation message. Install the Plugin in additional vCenter instances as required. To verify the installation, log out of and back into vCenter, and look for the Pure Storage icon in the Web Client Home tab.

vSphereView 138: Using Web Client to Verify Plugin Installation

Authenticating FlashArray to the Plugin

To authenticate a FlashArray to a Plugin installed in vCenter, either click the Pure Storage icon on the Web Client Home tab (vSphereView 138) or click the Home button at the top of the pane and select Pure Storage from the dropdown menu (vSphereView 139) to display the FlashArray pane Objects tab (vSphereView 140).

vSphereView 139: FlashArray Authentication (1)
vSphereView 140: FlashArray Authentication (2)

Click + Add FlashArray to launch the Add FlashArray wizard.


vSphereView 141: Add FlashArray Wizard

vSphereView 142 illustrates the Web Client FlashArray pane Objects tab after the array has been added.

vSphereView 142: Array Authenticated to vCenter

Note: Role-Based Access Control is available for the Plugin, but configuration and use of this feature is beyond the scope of this report. 

For further information, refer to the Plugin User Guide.



Appendix II: FlashArray CLI Commands for Protocol Endpoints

Specifying the --protocol-endpoint option with the Purity//FA CLI purevol create command creates the volume as a protocol endpoint.

ArrayView 48: FlashArray CLI Command to Create a PE

Specifying the --protocol-endpoint option with the Purity//FA CLI purevol list command displays a list of volumes on the array that were created as PEs.

ArrayView 49: FlashArray CLI Command to List an Array's PEs
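As a sketch, the two commands look roughly like the following (the PE volume name is hypothetical; a protocol endpoint is created without a size):

```shell
# Create a volume as a protocol endpoint
purevol create --protocol-endpoint pe-esxi-cluster1

# List all volumes on the array that were created as protocol endpoints
purevol list --protocol-endpoint
```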



Appendix III: VMware ESXi CLI Commands for vVols

Use the esxcli storage vvol commands to troubleshoot a vVol environment.




esxcli storage core device list

Identify protocol endpoints. The output entry Is VVOL PE: true indicates that the storage device is a protocol endpoint.

esxcli storage vvol daemon unbindall

Unbind all vVols from all VASA providers known to the ESXi host.

esxcli storage vvol protocolendpoint list

List all protocol endpoints that a host can access.

esxcli storage vvol storagecontainer list

List all available storage containers.

esxcli storage vvol storagecontainer abandonedvvol scan

Scan the specified storage container for abandoned vVols.

esxcli storage vvol vasacontext get

Show the VASA context (VC UUID) associated with the host.

esxcli storage vvol vasaprovider list

List all storage (VASA) providers associated with the host.
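For example, to confirm which devices on a host are protocol endpoints, the device listing can be filtered (the amount of grep context is an assumption; adjust as needed to show the full device record):

```shell
# Print each device record that is flagged as a vVol protocol endpoint
esxcli storage core device list | grep -B 12 "Is VVOL PE: true"
```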



Appendix IV: Disconnecting a Protocol Endpoint from a Host

Decommissioning ESXi hosts or clusters normally includes removal of protocol endpoints (PEs). The usual FlashArray volume disconnect process is used to disconnect PEs from hosts. As with removal of any block storage device, however, the best practice is to detach the PE from each host in vCenter before disconnecting it on the array.

vSphereView 143: Web Client Tool for Detaching a PE from an ESXi Host

To detach a PE from a host, select the host in the Web Client inventory pane, navigate to the Storage Devices view of the Configure tab, select the PE to be detached, and click the detach tool to launch the Detach Device confirmation wizard. Click Yes to detach the selected PE from the host.

vSphereView 144: Confirm Detach Wizard

vSphereView 145 shows the Web Client storage listing after successful detachment of a PE.

vSphereView 145: Detached PE

Failure to detach a PE from a host (vSphereView 146) typically occurs because there are vVols bound to the host through the PE that is being detached.

vSphereView 146: Failure to Detach PE (LUN) from a Host

FlashArrays prevent disconnecting a PE from a host (including members of a FlashArray host group) that has vVols bound through it.

The Purity//FA Version 5.0.0 GUI does not support disconnecting PEs from hosts. Administrators can only disconnect PEs via the CLI or REST API.

Before detaching a PE from an ESXi host, use one of the following VMware techniques to clear all bindings through it:

  1. vMotion all VMs to a different host
  2. Power off all VMs on the host that use the PE
  3. Storage vMotion the VMs on that host that use the PE to a different FlashArray or to a VMFS datastore

To completely delete a PE, remove all vVol connections through it. To prevent erroneous disconnects, FlashArrays prevent destruction of PE volumes with active connections.
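Once all bindings are cleared, the PE can be disconnected from the Purity//FA CLI; a sketch along these lines (the volume, host, and host group names are hypothetical):

```shell
# Disconnect the protocol endpoint from one host...
purevol disconnect pe-esxi-cluster1 --host esxi-01

# ...or from every member of a host group at once
purevol disconnect pe-esxi-cluster1 --hgroup esxi-cluster1
```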



Appendix V: vVols and Volume Group Renaming

FlashArray volume groups are not in the VM management critical path. Therefore, renaming or deleting a volume group does not affect VMware’s ability to provision, delete or change a VM’s vVols.

A volume group is primarily a tool that enables FlashArray administrators to manage a VM’s volumes as a unit. Pure Storage highly recommends creating and deleting volume groups only through VMware tools, which direct arrays to perform actions through their VASA providers.

Volume group and vVol names are not related to VASA operations. vVols can be added to and removed from a volume group whose name has been changed by an array administrator. If, however, a VM’s config vVol is removed from its volume group, any vVols created for the VM after the removal are not placed in any volume group. If a VM’s config vVol is moved to a new volume group, any new vVols created for it are placed in the new volume group.

VMware does not inform the array when it renames a vVol-based VM, so renaming a VM does not automatically rename its volume group. Consequently, it is possible for volume group names to differ from those of the corresponding VMs. For this reason, the FlashArray vVol implementation does not put volume group or vVol names in the vVol provisioning and management critical path.

For ease of management, however, Pure Storage recommends renaming volume groups when the corresponding VMs are renamed in vCenter.
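Renaming a volume group is a single, non-disruptive CLI operation. A sketch (the volume group names are hypothetical, and the purevgroup rename subcommand is assumed to be available in the Purity//FA release in use):

```shell
# Keep the volume group name in step with the renamed VM
purevgroup rename vvol-oldvmname-vg vvol-newvmname-vg
```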


Appendix VI: Cisco FNIC Driver Support for vVols

Older Cisco UCS FNIC drivers do not support the SCSI features required for protocol endpoints and vVol sub-LUN connections. To use vVols with Cisco UCS, FNIC drivers must be updated to a version that supports sub-LUNs. For firmware information and update instructions, consult:


About the Author


Cody Hosterman is the Technical Director for VMware Solutions at Pure Storage. His primary responsibility is overseeing, testing, designing, documenting, and demonstrating VMware-based integration with the Pure Storage FlashArray platform. Cody has been with Pure Storage since 2014 and has been working in vendor enterprise storage/VMware integration roles since 2008.

Cody graduated from the Pennsylvania State University with a bachelor's degree in Information Sciences & Technology in 2008. Special areas of focus include core ESXi storage, vRealize (Orchestrator, Automation and Log Insight), Site Recovery Manager and PowerCLI. Cody has been a named VMware vExpert every year since 2013.





© 2018 Pure Storage, Inc. All rights reserved. Pure Storage, Pure1, and the Pure Storage Logo are trademarks or registered trademarks of Pure Storage, Inc. in the U.S. and other countries. Other company, product, or service names may be trademarks or service marks of their respective owners.