
Web Guide: Implementing vSphere Virtual Volumes with FlashArray

Abstract

VMware’s vSphere Virtual Volume (VVol) paradigm, introduced in vSphere version 6.0, is a storage technology that provides policy-based, granular storage configuration and control of virtual machines (VMs). Through API-based interaction with an underlying array, VMware administrators can maintain storage configuration compliance using only native VMware interfaces.

Version 5.0.0 of Purity//FA software introduced support for FlashArray-based vSphere Virtual Volumes (VVols). The accompanying FlashArray Plugin for the vSphere Web Client (the Plugin) makes it possible to create, manage, and use VVols that are based on FlashArray volumes from within the Web Client. This report describes the architecture, implementation, and best practices for using FlashArray-based VVols.

Audience

The primary audiences for this guide are VMware administrators, FlashArray administrators, and more generally, anyone interested in the architecture, implementation, administration, and use of FlashArray-based VVols.

Throughout this report, the terms FlashArray administrator, array administrator, and administrator in the context of array administration, refer to both the storage and array administration roles for FlashArrays.

For further questions and requests for assistance, customers can contact Pure Storage Technical Support at support@purestorage.com.

VVol Best Practice Summary

The following is a summary of general best practices for FlashArray-based VVols. For more detailed information on each topic, refer to the body of this report.

REQUIREMENTS:

  • Configure NTP on every ESXi host, vCenter, and FlashArray involved in VVol management
  • Run vCenter Version 6.5 and ESXi Version 6.5 or newer versions throughout the VMware environment, including at replication target sites
  • Register each FlashArray’s two VASA providers with vCenter
  • Ensure that vCenter and ESXi management networks have TCP port 8084 access to FlashArray controller management ports
  • Configure host and host groups with appropriate initiators on the FlashArray
  • Always use VMware management tools to create, change, and provision FlashArray-based VVols. (Resizing or destroying FlashArray-based VVols directly on the array requires manual cleanup within VMware.)
  • (Exception: FlashArray tools can be used to create snapshots and copies of FlashArray-based VVols.)
  • If EFI-boot virtual machines are in use, change the Disk.MaxIOSize setting on the ESXi host(s) that run them from the 32 MB default to 4 MB
  • Configure VMware NMP Round Robin path selection and set the I/O Operations Limit to 1 (a PowerCLI sketch follows this list)
  • (These are defaults in ESXi version 6.5 Update 1 and newer versions.)
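The multipathing and Disk.MaxIOSize requirements can also be applied with a short PowerCLI sketch such as the one below. It assumes the advanced setting name Disk.DiskMaxIOSize (value in KB), the PURE vendor string for FlashArray devices, and an existing Connect-VIServer session; verify these against your environment before use.

foreach ($esx in (Get-VMHost)) {
    # Round Robin, switching paths after every I/O, for all Pure Storage devices
    Get-ScsiLun -VmHost $esx -LunType disk |
        Where-Object { $_.Vendor -eq "PURE" } |
        Set-ScsiLun -MultipathPolicy RoundRobin -CommandsToSwitchPath 1

    # Disk.DiskMaxIOSize is expressed in KB; 4096 KB = 4 MB (the default corresponds to 32 MB)
    Get-AdvancedSetting -Entity $esx -Name "Disk.DiskMaxIOSize" |
        Set-AdvancedSetting -Value 4096 -Confirm:$false
}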

RECOMMENDATIONS:

  • Run ESXi and vCenter Version 6.5 Update 1 or later
  • Configure a single protocol endpoint per FlashArray, shared among all hosts
  • Use Virtual Machine hardware version 11 or later.
  • Configure snapshot policies for all config VVols (VM home directories)
  • Present a protocol endpoint to any ESXi host prior to mounting a VVol datastore or use the Pure Storage vSphere Plugin to automate the procedure.

Terminology

This report uses the following short forms of the names of frequently mentioned entities.

Inventory pane
    Common synonym for the Web Client Navigator pane.

Plugin (the FlashArray vSphere Web Client Plugin)
    A plugin component for the vSphere Web Client that works in conjunction with Purity//FA Version 5.0.0 and later versions to enable the Web Client to manage FlashArray-based VVols.

PE (Protocol Endpoint)
    VMware term for the T10 administrative logical unit (ALU) concept.

VASA (VMware APIs for Storage Awareness)
    APIs for storage arrays that enable management from within VMware components.

VM (Virtual Machine)
    In this report, a virtual machine instantiated by VMware ESXi and running a guest operating system.

VVol (Virtual Volume)
    A VMware virtual storage paradigm that supports finer-grained control of virtual machine storage, and enables integration with advanced features offered by storage arrays.

Web Client (the VMware vSphere Web Client)
    The web-based administration component of VMware.

Introduction to VVols

Historically, the datastores that have provided storage for VMware virtual machines (VMs) have been created as follows:

  1. A VMware administrator requests storage from a storage administrator
  2. The storage administrator creates a disk-like virtual device on an array and provisions it to the ESXi host environment for access via iSCSI or Fibre Channel
  3. The VMware administrator rescans ESXi host I/O interconnects to locate the new device and formats it with VMware’s Virtual Machine File System (VMFS) to create a datastore.
  4. The VMware administrator creates a VM and one or more virtual disks, each instantiated as a file in the datastore’s file system and presented to the VM as a disk-like block storage device.

Virtual storage devices instantiated by storage arrays are called by multiple names. Among server users and administrators, LUN (numbered logical unit) is popular. The FlashArray term for virtual devices is volume. ESXi hosts and guests address commands to LUNs, which are usually assigned to volumes automatically.

While plugins can automate datastore creation to some extent, VMFS datastores have some fundamental limitations:

  • Every time additional capacity is required, VMware and storage administrators must coordinate their activities
  • Certain widely-used storage array features such as replication are implemented at the datastore level of granularity. Enabling them affects all VMs that use a datastore
  • VMware administrators cannot easily verify that required storage features are properly configured and enabled.

VMware designed VVols to mitigate these limitations. VVol benefits include:

Virtual Disk Granularity

Each virtual disk is a separate volume on the array with its own unique properties

Automatic Provisioning

When a VMware administrator requests a new virtual disk for a VM, VMware automatically directs the array to create a volume and present it to the VM. Similarly, when a VMware administrator resizes or deletes a virtual disk, VMware directs the array to resize or remove the volume

Array-level VM Visibility

Because arrays recognize both VMs and their virtual disks, they can manage and report on performance and space utilization with both VM and individual virtual disk granularity.

Storage Policy Based Management

With visibility to individual virtual disks, arrays can take snapshots and replicate volumes at the precise granularity required. VMware can discover an array’s virtual disks and allow VMware administrators to manage each VVol’s capabilities either ad hoc or by specifying policies. If a storage administrator overrides a VVol capability configured by a VMware administrator, the VMware administrator is alerted to the non-compliance.

VMware designed the VVol architecture to mitigate the limitations of the VMFS-based storage paradigm while retaining the benefits, and merging them with the remaining advantages of Raw Device Mappings.

VMware’s VVol architecture consists of the following components:

Management Plane (section titled The FlashArray VASA Provider)
Implements the APIs that VMware uses to manage the array. Each supported array requires a vSphere API for Storage Awareness (VASA) provider, implemented by the array vendor.

Data Plane (section titled VVol Binding)
Provisions VVols to ESXi hosts

Policy Plane (section titled Storage Policy Based Management)
Simplifies and automates the creation and configuration of VVols.

Appendix I: Installing and Upgrading the Web Client Plugin describes Plugin installation and registering arrays with the Plugin.

The FlashArray VASA Provider

VMware APIs for Storage Awareness (VASA) is a VMware interface for out-of-band communication between VMware ESXi and vCenter and storage arrays. Arrays’ VASA providers are services registered with vCenter. Storage vendors implement providers for their arrays, either as VMs or embedded in the arrays. As of vSphere Version 6.5, VMware has introduced three versions of VASA:

Version 1 (Introduced in vSphere Version 5.0)
    Provides basic configuration information for storage volumes hosting VMFS datastores, as well as injection of some basic alerts into vCenter

Version 2 (Introduced in vSphere Version 6.0)
    First version to support VVols

Version 3 (Introduced in vSphere Version 6.5)
    Added support for array-based replication of VVols and Oracle RAC.

FlashArrays support VASA Version 3.

Because the FlashArray VVol implementation uses VASA Version 3, the VMware environment must be running vSphere Version 6.5 or a newer version in both ESXi hosts and vCenter. Pure Storage recommends vSphere Version 6.5 Update 1.

Appendix I: Installing and Upgrading the Web Client Plugin contains instructions for verifying that a Plugin version that supports VVols is installed in vCenter, and for installing or upgrading to a version with VVol support.

FlashArray VVol support is included in Purity//FA Version 5.0. The Purity//FA upgrade process automatically installs and configures a VASA provider in each controller; there is no separate installation or configuration. To use FlashArray-based VVols, however, an array’s VASA providers must be registered with vCenter. Either the FlashArray Plugin for vSphere Web Client (the Plugin), the vSphere GUI, or API/CLI-based tools may be used to register VASA providers with vCenter. 

Registering FlashArray VASA Providers with the Plugin

While the Plugin is not required for FlashArray-based VVols, it simplifies most administrative functions, including VASA provider registration.

From the Web Client Home screen, select Pure Storage from the dropdown menu to display the FlashArray pane Objects tab. Right-click the array whose VASA providers are to be registered and select Register Storage Provider from the dropdown menu (vSphereView 1) to launch the Register Storage Provider wizard (vSphereView 2).

vv1.png

vSphereView 1: Register Storage Provider

Entering credentials for a FlashArray administrator registers the array’s two VASA providers with all vCenters present in the vSphere Single Sign-On domain. Arrays log all subsequent VVol operations from those vCenters under the user name entered. For optimal audit control, Pure Storage recommends using a dedicated LDAP or Active Directory FlashArray account for VASA provider registrations.

vv2.png

vSphereView 2: Register Storage Provider Wizard

Other Methods for Registering FlashArray VASA Providers

Alternatively, VMware administrators can use the Web Client, PowerCLI, and other CLI and API tools to register VASA providers. This section describes registration of FlashArray providers with the Web Client and with PowerCLI.

Prior to registration, use the FlashArray GUI to obtain the IP addresses of both controllers’ eth0 management ports.

Click Settings in the GUI navigation pane, and select the Network tab, (ArrayView 1) to display the array’s management port IP addresses (ArrayView 2).

vv3.png

ArrayView 1: FlashArray GUI Network Tab

vv4.png

ArrayView 2: FlashArray Management Port IP Addresses

VASA Registration with the Web Client

In the Web Client inventory pane Host and Clusters view, select the target vCenter. Select the Configure tab and click Storage Providers in the menu. Click the green + icon (vSphereView 3) to launch the New Storage Provider wizard (vSphereViews 4 and 5).

vv5.png

vSphereView 3: Web Client VASA Provider Registration

vv6.png

vSphereView 4: New Storage Provider Wizard (ct0)

vv7.png

vSphereView 5: New Storage Provider (ct1)

Enter the following information:

Name
    A friendly name for the VASA provider. A best practice is to use names that make operational sense (for example, array name concatenated with controller number).

URL
    The URL of the controller’s VASA provider in the form:
    https://<controllerIP>:8084. HTTPS (not HTTP) is required, the controller’s IP address must be specified (not its FQDN), and port 8084 is required

Credentials
    Credentials for an administrator of the target array. The user name entered is associated with VASA operations in future audit logs.

Click OK, and repeat the procedure for the other controller (vSphereView 5).

Perform the procedure for each array to be registered.

VASA Registration with PowerCLI

When a number of FlashArrays’ VASA providers are to be registered, using a PowerCLI script may be preferable. The VMware PowerCLI cmdlet called New-VasaProvider registers VASA providers with vCenter (vSphereView 6).

vv8.png

vSphereView 6: Use of the New-VasaProvider Cmdlet

The script in vSphereView 7 below uses both PowerCLI and the Pure Storage PowerShell SDK to register an array’s two VASA providers. The script requires that both PowerCLI and the Pure Storage PowerShell SDK be installed.

# Gather vCenter and FlashArray credentials and addresses
$vccreds = Get-Credential
$facreds = Get-Credential
$vcenter = Read-Host "Enter your vCenter IP/FQDN"
$flasharray = Read-Host "Enter your FlashArray IP/FQDN"

# Connect to vCenter and to the FlashArray REST endpoint
Connect-VIServer -Server $vcenter -Credential $vccreds
$endpoint = New-PfaArray -EndPoint $flasharray -Credentials $facreds -IgnoreCertificateError

# Each controller's eth0 management interface hosts a VASA provider on port 8084
$mgmtIPs = Get-PfaNetworkInterfaces -Array $endpoint | Where-Object {$_.name -like "*eth0"}
$arrayname = (Get-PfaArrayAttributes -Array $endpoint).array_name
$ctnum = 0
foreach ($mgmtIP in $mgmtIPs)
{
    # Register each controller's provider as <array name>-CT<controller number>
    New-VasaProvider -Name ("$($arrayname)-CT$($ctnum)") -Credential $facreds -Url ("https://$($mgmtIP.address):8084") -Force
    $ctnum++
}
Disconnect-VIServer -Server $vcenter -Confirm:$false
Disconnect-PfaArray -Array $endpoint

vSphereView 7: PowerCLI Script for Registering a FlashArray's Two VASA Providers 

Verifying VASA Provider Registration

To verify that VASA Provider registration succeeded, in the Web Client Host and Clusters view, click the target vCenter in the inventory pane, select the Configure tab, and locate the newly-registered providers in the Storage Providers table (vSphereView 9).

vv9.png

vSphereView 9: Verification of VASA Provider Registration

The table can be arranged either by storage provider or by array (Storage system) by clicking the Group by dropdown and selecting the desired ordering (vSphereView 8).

vv10.png 

vSphereView 8: Select Storage Provider Grouping Order

Although both FlashArray controllers’ VASA providers are online, vCenter uses one provider at a time. The provider in use is marked Active; its companion is marked Standby, as in vSphereView 9.

Alternatively, the PowerCLI Get-VasaProvider cmdlet can be used to list registered VASA providers (vSphereView 10).

vv11.png

vSphereView 10: PowerCLI Get-VasaProvider Cmdlet Usage
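As a minimal sketch, assuming an existing Connect-VIServer session and the provider naming convention used at registration, the same check can be scripted:

# List registered VASA providers; the name filter reflects the <array>-CT<n> convention and is illustrative
Get-VasaProvider | Where-Object { $_.Name -like "flasharray1-CT*" } | Format-Table -AutoSize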

Configuring Host Connectivity

For an ESXi host to access FlashArray storage, an array administrator must create a host object. A FlashArray host object (usually called host) is a list of the ESXi host’s initiator iSCSI Qualified Names (IQNs) or Fibre Channel Worldwide Names (WWNs). Arrays represent each ESXi host as one host object.

Similarly, arrays represent a VMware cluster as a host group, a collection of hosts with similar storage-related attributes. For example, an array would represent a cluster of four ESXi hosts as a host group containing four host objects, each representing an ESXi host. The FlashArray User Guide contains instructions for creating hosts and host groups.
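Host objects and host groups can also be created with the Pure Storage PowerShell SDK. The sketch below is illustrative only; the host names, WWNs, and cmdlet parameters (New-PfaHost -WwnList, New-PfaHostGroup -Hosts) are assumptions to verify against the SDK version in use.

# Create FlashArray host objects for two ESXi hosts and group them to mirror the ESXi cluster
Import-Module PureStoragePowerShellSDK
$fa = New-PfaArray -EndPoint flasharray1.example.com -Credentials (Get-Credential) -IgnoreCertificateError

# One host object per ESXi host, listing that host's Fibre Channel initiator WWNs (illustrative values)
New-PfaHost -Array $fa -Name "esxi-01" -WwnList "21000024FF4C11A1","21000024FF4C11A2"
New-PfaHost -Array $fa -Name "esxi-02" -WwnList "21000024FF4C22B1","21000024FF4C22B2"

# One host group representing the ESXi cluster
New-PfaHostGroup -Array $fa -Name "Cluster-A" -Hosts "esxi-01","esxi-02"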

To use the Plugin to create a FlashArray host group, in the Web Client’s Host and Clusters view inventory pane, right-click a cluster, select Pure Storage from the dropdown menu, and Add Host Group from the secondary dropdown to launch the Add FlashArray Host Group wizard (vSphereView 11).

vv12.png

vSphereView 11: Add FlashArray Host Group Wizard

Select iSCSI or Fibre Channel, (optionally) enter a friendly name for the host group, and click Create to create host objects and a host group to represent the cluster.

Notes:
The Plugin can also configure the ESXi hosts’ iSCSI target addresses (not shown).
The Pure Storage VMware Best Practices Guide at support.purestorage.com and the blog series at https://blog.purestorage.com/author/cody-hosterman/ contain vSphere iSCSI target address assignment instructions and best practices.

Fibre Channel zoning must be completed before provisioning storage to hosts. Refer to switch vendor documentation for zoning instructions.

Protocol Endpoints

The scale and dynamic nature of VVols intrinsically changes VMware storage provisioning. To provide scale and flexibility for VVols, VMware adopted the T10 administrative logical unit (ALU) standard, which it calls protocol endpoint (PE). VVols are connected to VMs through PEs acting as subsidiary logical units (SLUs, also called sub-luns).

The FlashArray VVol implementation makes PEs nearly transparent. Array administrators seldom deal with PEs, and not at all during day-to-day operations.

Protocol Endpoints (PEs)

Because a typical VM has multiple virtual disks, each instantiated as a volume on the array and addressed by a LUN, the ESXi Version 6.5 support limits of 512 SCSI devices (LUNs) per host and 2,000 logical paths to them can easily be exceeded by even a modest number of VMs.

Moreover, each time a new volume is created or an existing one is resized, VMware must rescan its I/O interconnects to discover the change. In large environments, rescans are time-consuming; rescanning each time the virtual disk configuration changes is generally considered unacceptable.

VMware uses PEs to eliminate these problems. A PE is a volume of zero capacity with a special setting in its Vital Product Data (VPD) page that ESXi detects during a SCSI inquiry. It effectively serves as a mount point for VVols. It is the only FlashArray volume that must be manually connected to hosts to use VVols.

Fun fact: Protocol endpoints were formerly called I/O de-multiplexers. PE is a much better name.

When an ESXi host requests access to a VVol (for example, when a VM is powered on), the array binds the VVol to it. Binding is a synonym for sub-lun connection. For example, if a PE uses LUN 255, a VVol bound to it would be addressed as LUN 255:1. The section titled VVol Binding describes VVol binding in more detail.

PEs greatly extend the number of VVols that can be connected to an ESXi cluster; each PE can have up to 16,383 VVols per host bound to it simultaneously. Moreover, a new binding does not require a complete I/O rescan. Instead, ESXi issues a REPORT_LUNS SCSI command with SELECT REPORT to the PE to which the sub-lun is bound. The PE returns a list of sub-lun IDs for the VVols bound to that host. In large clusters, REPORT_LUNS is significantly faster than a full I/O rescan because it is more precisely targeted.

The FlashArray PE Implementation

When its first VASA provider is registered, a FlashArray automatically creates a PE called pure-protocol-endpoint, but the Web Client hides it from view until a sub-lun connection is made.

A FlashArray’s performance is independent of the number of volumes it hosts; an array’s full performance capability can be delivered through a single PE. PEs are not performance bottlenecks for VVols, so a single PE per array is all that is needed.

Configuring a single PE per array does not restrict multi-tenancy. Sub-lun connections are host-specific. ArrayView 3 illustrates this with excerpts from FlashArray GUI Host panes for two ESXi hosts. Both hosts share connections to the pure-protocol-endpoint PE (LUN 254). Both have shared connections to non-VVol volumes srm-vmfs and Template, using the same LUNs. The VVols bound to the host on the right, however, are only connected to that host; they use sub-luns of LUN 254.

vv13.png

ArrayView 3: Excerpts from FlashArray GUI Connected Volumes Panes for two ESXi Hosts

A FlashArray automatically creates a default pure-protocol-endpoint PE when its first VASA provider is registered. If necessary, additional PEs can also be created manually. Appendix II describes the use of the FlashArray CLI to create a new PE.

BEST PRACTICE: Use one (the default) PE per array. All hosts should share the same PE. VVol-to-host bindings are host-specific, so multi-tenancy is inherently supported.

More than one PE can be configured, but this is seldom necessary.

As is typical of the FlashArray architecture, VVol support, and the PE implementation in particular, are as simple as it is possible for them to be.

Protocol Endpoints in vSphere

To view the PE(s) presented to a host, in the Host and Clusters tab of the Web Client inventory pane, click the target host, select the Configure tab, and select Protocol Endpoints from the menu (vSphereView 12).

vv14.png

vSphereView 12: PE List for an ESXi Host

Click the table row for the PE of interest to display its network address authority (NAA) number, the protocol used to communicate with it, its state, the array that hosts it, the number of paths to it, its multipathing policy, and the datastore VVols associated with it. Of these, the only configurable property is multipathing. For optimal performance, Pure Storage recommends round robin path selection (the default policy with ESXi Version 6.5 update 1 and later versions) for all volumes, both VMFS and PE.

BEST PRACTICE: Configure the round robin path selection policy for PEs.

ESXi behaves differently with respect to queue depth limits for PEs than for other volumes. Pure Storage recommends leaving ESXi PE queue depth limits at the default values. 

BEST PRACTICE: Leave PE queue depth limits at the default values unless performance problems occur.
The blog post at https://blog.purestorage.com/queue-depth-limits-and-vvol-protocol-endpoints/ contains additional information about PE queue depth limits.

VVol Datastores

VVols replace LUN-based datastores formatted with VMFS. There is no file system on a VVol datastore, nor are VVol-based virtual disks encapsulated in files.

The datastore concept does not disappear entirely, however. VMs must be provisioned somewhere. Historically, VMs have typically been implemented as files in NFS mounts or in a VMFS. Datastores are necessary, both because VM provisioning tools use them to house new VMs, and because they help control storage allocation and differentiate between different types of storage.

But VMFS datastores limit flexibility, primarily because their sizes and features are specified when they are created, and it is not possible to assign different features to individual objects in them. To overcome this limitation, the VVol architecture includes a storage container object, generally referred to as a VVol datastore, with two key properties:

Capacity limit
    Allows an array administrator to limit the capacity that VMware administrators can provision as VVols.

Array capabilities
    Allows vCenter to determine whether an array can satisfy a configuration request for a VM.

A VVol datastore is sometimes referred to as a storage container. Although the terms are essentially interchangeable, this report uses the term VVol datastore exclusively.

The FlashArray Implementation of VVol Datastores

FlashArray VVol datastores have no artificial size limit. The initial FlashArray VVol release supports a single 8-petabyte VVol datastore per array. Pure Storage Technical Support can change an array’s VVol datastore size on customer request to alter the amount of storage VMware can allocate.

Pure Storage anticipates supporting multiple VVol datastores per array and user-configurable VVol datastore sizes in the future.

Purity//FA Version 5.0.0 and newer versions automatically create an array’s VVol datastore when its VASA provider is registered with vCenter. Once created, a VVol datastore can be mounted to ESXi hosts.

FlashArrays require two items to create a volume—a size and a name. VVol datastores do not require any additional input or enforce any configuration rules on VVols, so creation of FlashArray-based VVols is simple.

Mounting a VVol Datastore

A VVol datastore can be mounted to any ESXi host with access to a PE on the array that hosts the VVol datastore. Mounting a VVol datastore to a host requires:

  • Registration of the array’s VASA providers with vCenter
  • Provisioning of at least one PE to the host.

The latter requires that (a) an array administrator connect the PE to the host or host group, and (b) a VMware administrator rescan the ESXi host’s I/O interconnects.

An array administrator can use the FlashArray GUI, CLI, or REST API to connect a PE and a host or host group; the FlashArray User Guide contains instructions for connecting a host or host group and a volume.

With the Plugin, a VMware administrator can connect a PE to an ESXi host and mount its VVol datastore without array administrator involvement.

Using the Plugin to Mount a VVol Datastore

Navigate to Hosts and Clusters in the vCenter inventory pane, right-click the target cluster or host, select Pure Storage from the dropdown menu, and Create Datastore from the secondary dropdown to launch the Create Datastore wizard (vSphereView 13).

vv15.png

vSphereView 13: Create Datastore

Enter a friendly name for the VVol datastore (optional). Click the VVol radio button, select the array from which to provision in the Select Pure Storage Array dropdown, and click Create to provision the VVol datastore to the host or cluster.

vv16.png

vSphereView 14: Create Datastore Wizard

The Plugin connects the array’s PE(s) to the FlashArray host or host group that corresponds to the ESXi host or hosts and mounts the VVol datastore to the selected host(s).

vv17.png

vSphereView 15: An Already-mounted Datastore

Notes:
A VVol datastore can be mounted to a cluster, or alternatively, to one of its hosts by expanding the display in the Select Host/Cluster box and selecting the host.

If the array’s VVol datastore has already been mounted to a host or cluster in the vCenter, the Datastore Name field is populated and the entry box is grayed out (vSphereView 15).

Error messages usually indicate that the array has no host group corresponding to the ESXi cluster, or that the host group is not configured properly. The section titled Configuring Host Connectivity describes connecting ESXi hosts and clusters to FlashArray volumes.

Mounting VVol Datastores Manually: FlashArray Actions 

Alternatively, VVol datastores can be provisioned by connecting the PE to the hosts or host group, rescanning each host’s I/O interconnects, and mounting the VVol datastore to each host. These operations require both FlashArray and VMware involvement, however. Array administrators can use the FlashArray CLI or REST interfaces, or tools such as PowerShell. VMware administrators can use the Web Client, the VMware CLI, or the VMware SDK and SDK-based tools like PowerCLI.

Pure Storage recommends using the Plugin to provision PEs to hosts. The FlashArray GUI does not currently support provisioning or de-provisioning PEs; those are done via the CLI or the REST APIs.

To provision a PE using the FlashArray CLI, use the purevol list command to discover the array’s PE(s). Use the purehost connect or purehgroup connect command to connect a PE to a host or host group. (vSphereView 16)

vSphereView 16 illustrates (a) an array with three PEs, and (b) the purehgroup command for connecting the default pure-protocol-endpoint to the Infrastructure host group.

vv19.png

vSphereView 16: Sample Use of the FlashArray CLI to Connect to a Host Group

Registering an array’s VASA provider with a vCenter creates a default PE. To provision a PE prior to registration, use the commands listed in Appendix II.
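For administrators who prefer PowerShell, a roughly equivalent sketch using the Pure Storage PowerShell SDK follows; the New-PfaHostGroupVolumeConnection cmdlet and the host group name are assumptions to verify against your SDK version and environment.

# Connect the default protocol endpoint to an existing host group
Import-Module PureStoragePowerShellSDK
$fa = New-PfaArray -EndPoint flasharray1.example.com -Credentials (Get-Credential) -IgnoreCertificateError
New-PfaHostGroupVolumeConnection -Array $fa -VolumeName "pure-protocol-endpoint" -HostGroupName "Infrastructure"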

Mounting VVol Datastores Manually: Web Client Actions

The FlashArray GUI Storage view Hosts tab lists PE connections made by the CLI. (ArrayView 4)

vv20.png

ArrayView 4: FlashArray GUI Hosts Tab Showing PE Connection to Cluster

Although the PE volumes are connected to the ESXi hosts from a FlashArray standpoint, vCenter does not recognize them until an I/O rescan occurs.

To demonstrate this, select the target host in the Hosts and Clusters list in the Web Client inventory pane, click the Configure tab, and select Protocol Endpoints to display a table of PEs known to vCenter (vSphereView 17).

vv21.png

vSphereView 17: Protocol Endpoints Are Not Visible Until Storage Rescan

To rescan storage for a host or cluster, right-click the host or cluster in the inventory pane, select Storage from the dropdown menu, and Rescan Storage from the secondary dropdown to launch the Mission – Rescan Storage wizard (vSphereView 19).

vv22.png

vSphereView 18: Rescan Storage Command

Check Scan for new Storage Devices and click OK to start the rescan. (Rescanning for new VMFS volumes is not required, but it can be selected.) Rescanning does not cause the PE to immediately appear in the Protocol Endpoints view (vSphereView 17). A PE does not become visible in this view until it is in use by a VVol datastore.

vv23.png

vSphereView 19: Mission-Rescan Storage Wizard
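The rescan can also be performed for an entire cluster with PowerCLI; a minimal sketch (the cluster name is illustrative):

# Rescan all HBAs on every host in a cluster so newly connected PEs are discovered
Get-Cluster "Production" | Get-VMHost | Get-VMHostStorage -RescanAllHba | Out-Null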

To display a provisioned PE, select the host in the inventory pane, select the Configure tab, and click Storage Devices. The PE appears as a 1 megabyte device (vSphereView 20).

vv24.png

vSphereView 20: PE Listed as an ESXi Host's Storage Device after Rescan

Mounting a VVol Datastore

To mount a VVol datastore, right-click the target host or cluster, select Storage from the dropdown menu, and select New Datastore from the secondary dropdown (vSphereView 21) to launch the New Datastore wizard (vSphereView 22).

vv25.png

vSphereView 21: Mount VVol Datastore 

Click the VVol radio button, then click Next. (not shown in vSphereView 22).

vv26.png

vSphereView 22: New Datastore Wizard (1)

Enter a friendly name for the datastore and select the VVol container in the Backing Storage Container list (vSphereView 23).

vv27.png

vSphereView 23: New Datastore Wizard (2)

Notes:
Clicking a container displays the array that hosts it in the lower Backing Storage Container panel.

If no VVol datastore is listed, it typically indicates either that the array’s VASA providers have not been registered or that vCenter cannot communicate with them.

Select the host(s) on which to mount the VVol datastore and click Finish. (vSphereView 24—Finish button not shown) 

vv28.png

vSphereView 24: New Datastore Wizard (3)

Once a VVol datastore is mounted, the Configure tab for any ESXi host to which it is mounted lists the PEs available from the array that hosts it. (vSphereView 25).

vv29.png

vSphereView 25:  PE Listed for an ESXi Host after VVol Datastore Creation

Mounting a VVol Datastore to Additional Hosts

To mount the VVol datastore to additional hosts, right-click its row in the Web Client inventory pane and select Mount Datastore to Additional Hosts from the dropdown menu to launch the Mount Datastore to Additional Hosts wizard (vSphereView 27). Select the hosts to which to mount the VVol datastore by checking their boxes and click Finish (not shown).

 vv30.png

vSphereView 26: Mount Datastore to Additional Hosts Command

vv31.png

vSphereView 27: Mount Datastore to Additional Hosts Wizard

Using a VVol Datastore

A VVol datastore is neither a file system nor a volume (LUN) per se, but an abstraction that emulates a file system to (a) represent VMs provisioned through it and (b) manage VM space allocation. It can be viewed as a collection of references to VVols.

VVol datastores are managed similarly to conventional datastores. For example, the Web Client file browser and an ESXi SSH session can display a VVol datastore’s contents (vSphereViews 28 and 29).

vv32.png

vSphereView 28: Web Client File Browser View of a VVol Datastore's Contents

vv34.png

vSphereView 29: vSphere CLI Listing of a VVol Datastore's Contents

Types of VVols

The benefits of VVols are rooted in the increased storage granularity achieved by implementing each VVol-based virtual disk as a separate volume on the array. This property makes it possible to apply array-based features to individual VVols.

FlashArray Organization of VVols

FlashArrays organize the VVols associated with each VVol-based VM as a volume group. Each time a VMware administrator creates a VVol-based VM, the hosting FlashArray creates a volume group whose name is the name of the VM, prefixed by vvol- and followed by -vg (ArrayView 5).

FlashArray syntax limits volume group names to letters, numbers and dashes; arrays remove other characters that are valid in virtual machine names during volume group creation.

vv35.png

ArrayView 5: Volume Groups Area of GUI Volumes Tab

To list the volumes associated with a VVol-based VM, select the Storage view Volumes tab. In the Volume Groups area, select the volume group name containing the VM name from the list or enter the VM name in the search box (ArrayView 5).

The Volumes area of the pane lists the volumes associated with the VM (ArrayView 6).

vv36.png

ArrayView 6: GUI View of Volume Group Membership

Clicking a volume name displays additional detail about the selected volume (ArrayView 7).

vv37.png

ArrayView 7: GUI View of a VVol's Details

Note:
Clicking the volume group name in the navigation breadcrumbs returns to the volume groups display.

When the last VVol in a volume group is deleted (destroyed), the array destroys the volume group automatically. As with all FlashArray data objects, destroying a volume group moves it to the array’s Destroyed Volume Groups folder for 24 hours before eradicating it permanently.

To recover or eradicate a destroyed volume group, click the respective icons in the Destroyed Volume Groups pane.
(ArrayView 8)

 vv38.png

ArrayView 8: FlashArray GUI Destroyed Volume Groups Folder

The FlashArray CLI and REST interfaces can also be used to manage volume groups of VVols.
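As a minimal sketch with the Pure Storage PowerShell SDK (assuming an existing New-PfaArray connection in $fa and that volumes in a volume group are reported with the vvol-<VM name>-vg prefix), a VM's VVols can be listed as follows:

# List the volumes in a VVol-based VM's volume group (the VM name "VVol-VM" is illustrative)
Get-PfaVolumes -Array $fa | Where-Object { $_.name -like "vvol-VVol-VM-vg*" } | Select-Object name, size, created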

VM Datastore Structures

VVols do not change the fundamental VM architecture:

  • Every VM has a configuration file (a VMX file) that describes its virtual hardware and special settings
  • Every powered-on VM has a swap file.
  • Each virtual disk added to a VM is implemented as a storage object that limits guest OS disk capacity.
  • Every VM has a memory (vmem) file used to store snapshots of its memory state.

Conventional VM Datastores

Every VM has a home directory that contains information, such as:

Virtual hardware descriptions 

Guest operating system version and settings, BIOS configuration, virtual SCSI controllers, virtual NICs, pointers to virtual disks, etc.

Logs

Information used during VM troubleshooting

VMDK files 

Files that correspond to the VM’s virtual disks, whether implemented as NFS, VMFS, physical and virtual mode RDMs (Raw Device Mappings), or VVols. VMDK files indicate where the ESXi vSCSI layer should send each virtual disk’s I/O.

For a complete list of VM home directory contents, see the VMware article What Files Make Up a Virtual Machine.

When a VMware administrator creates a VM based on VMFS or NFS, VMware creates a directory in its home datastore. (vSphereView 30).

vv39.png

vSphereView 30: Web Client Edit Settings Wizard

vv40.png

vSphereView 31: Web Client File Browser View of a VM's Home Directory

With VVol-based VMs, there is no file system, but VMware makes the structure appear to be the same as that of a conventional VM. What occurs internally is quite different, however.

VVol-based VM Datastores

VVol-based VMs use four types of VVols:

  • Configuration VVol (usually called a “config VVol”; one per VM)
  • Data VVol (one or more per VM)
  • Swap VVol (one per VM)
  • Memory VVol (zero, one or more per VM)

The sections that follow describe these four types of VVols and the purposes they serve.

In addition to the four types of VVols used by VVol-based VMs, there are VVol snapshots, described in the section titled Snapshots of VVols.

Config VVols 

When a VMware administrator creates a VVol-based VM, vCenter creates a 4-gigabyte thin-provisioned configuration VVol (config VVol) on the array, which ESXi formats with VMFS. A VM’s config VVol stores the files required to build and manage it: its VMX file, logs, VMDK pointers, etc. To create a VVol-based VM, right-click any inventory pane object to launch the New Virtual Machine wizard and specify that the VM’s home directory be created on a VVol datastore.

vv41.png

vSphereView 32: New Virtual Machine Wizard

Note:
For simplicity, the VM in this example has no additional virtual disks.


 vv42.png

vSphereView 33: Customize Hardware Wizard

When VM creation is complete, a directory with the name of the VM appears in the array’s VVol datastore. The directory contains the VM’s vmx file, log file and an initially empty vmsd file used to store snapshot information. (vSphereView 34)

vv43.png

vSphereView 34: Directory of a New VVol-based VM
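VM creation can also be scripted with PowerCLI; a minimal sketch (the datastore, host, and VM names are illustrative):

# Create a VM whose home directory (config VVol) is placed on a FlashArray VVol datastore
$vvolDs = Get-Datastore "FlashArray-VVol-Datastore"
New-VM -Name "VVol-VM" -VMHost (Get-VMHost "esxi-01.example.com") -Datastore $vvolDs -NumCpu 2 -MemoryGB 4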

In the Web Client, a VVol datastore appears as a collection of folders, each representing a mount point for the mini-file system on a config VVol. The Web Client GUI Browse Datastore function and ESXi console cd operations work as they do with conventional VMs. Rather than traversing one file system, however, they transparently traverse the file systems hosted on all of the array’s config VVols.

A FlashArray creates a config VVol for each VVol-based VM. Arrays name config VVols by concatenating the volume group name with config-<UUID>. Arrays generate UUIDs randomly; an array administrator can change them if desired.

An array administrator can search for volumes containing a VVol-based VM name to verify that its volume group and config VVol have been created (ArrayView 9).

vv44.png

ArrayView 9: Locating a VM's Config VVol

As objects are added to a VVol-based VM, VMware creates pointer files in its config VVol; these are visible in its directory. When a VM is deleted, moved to another array, or moved to a non-VVol datastore, VMware deletes its config VVol.

Data VVols

Each data VVol on an array corresponds to a virtual disk. When a VMware administrator creates a virtual disk in a VVol datastore, VMware directs the array to create a volume and creates a VMDK file pointing to it in the VM’s config VVol. Similarly, to resize or delete a virtual disk, VMware directs the array to resize or destroy the corresponding volume.

Creating a Data VVol

VVol-based virtual disk creation is identical to conventional virtual disk creation. To create a VVol-based virtual disk using the Web Client, for example, right-click a VM in the Web Client inventory pane and select Edit Settings from the dropdown menu to launch the Edit Settings wizard.
(vSphereView 35)

vv45.png

vSphereView 35: Web Client Edit Settings Command

Select New Hard Disk in the New device dropdown and click Add (not shown in vSphereView 36).

vv46.png

vSphereView 36: New Hard Disk Selection

Enter configuration parameters (vSphereView 37). Select the VM’s home datastore (Datastore Default) or a different one for the new virtual disk, but to ensure that the virtual disk is VVol-based, select a VVol datastore.

vv47.png

vSphereView 37: Specifying Data VVol Parameters

 

Click OK to create the virtual disk (not shown in vSphereView 37). VMware does the following:

  1. For a VM’s first VVol on a given array, directs the array to create a volume group and a config VVol for it.
  2. Directs the array to create a volume in the VM’s volume group.
  3. Creates a VMDK pointer file in the VM’s config VVol to link the virtual disk to the data VVol on the array.
  4. Adds the new pointer file to the VM’s VMX file to enable the VM to use the data VVol.
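The same sequence can be driven from PowerCLI; a minimal sketch (the VM and datastore names are illustrative):

# Add a 100 GB VVol-backed virtual disk to an existing VM
$vm = Get-VM "VVol-VM"
New-HardDisk -VM $vm -CapacityGB 100 -Datastore (Get-Datastore "FlashArray-VVol-Datastore")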

The FlashArray GUI Storage view Volumes tab lists data VVols in the Volumes pane of the volume group display. (ArrayView 10)

 vv48.png

ArrayView 10: FlashArray GUI View of a Volume Group's Data VVols

Resizing a Data VVol

A VMware administrator can use any of several management tools to expand a data VVol to a maximum size of 62 terabytes while it is online. Although FlashArrays can shrink volumes as well, vSphere does not support that function (vSphereView 38).

vv49.png

vSphereView 38: vSphere Disallows Volume Shrinking

Note:
VMware enforces the 62 terabyte maximum to enable VVols to be moved to VMFS or NFS, both of whose maximum virtual disk size is 62 terabytes.

To expand a data VVol using the Web Client, right-click the VM in the inventory pane, select Edit Settings from the dropdown menu, and select the virtual disk to be expanded from the dropdown. The virtual disk’s current capacity is displayed (vSphereView 39). Enter the desired capacity and click OK (not shown in vSphereView 40), and use guest operating system tools to expose the additional capacity to the VM. 

vv50.png

vSphereView 39: Selecting Virtual Disk for Expansion

vv51.png

vSphereView 40: Entering Expanded Data VVol Capacity
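Expansion can also be scripted; a minimal PowerCLI sketch (the VM name and disk selection are illustrative), after which the guest file system must still be grown:

# Expand a VM's first virtual disk to 200 GB
$disk = Get-HardDisk -VM (Get-VM "VVol-VM") | Select-Object -First 1
Set-HardDisk -HardDisk $disk -CapacityGB 200 -Confirm:$false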

Deleting a Data VVol

Deleting a data VVol is identical to deleting any other type of virtual disk. When a VMware administrator deletes a VVol-based virtual disk from a VM, ESXi deletes the reference VMDK file and directs the array to destroy the underlying volume.

To delete a VVol-based virtual disk, right-click the target VM in the Web Client inventory pane, select Edit Settings from the dropdown menu to launch the Edit Settings wizard. Select the virtual disk to be deleted, hover over the right side of its row and click the  vv52.png  symbol when it appears (vSphereView 41).

vv53.png

vSphereView 41: Selecting Data VVol for Deletion

To remove the VVol from the VM, click the OK button. To remove it from the VM and destroy it on the array, check the Delete files from datastore checkbox (vSphereView 42) and click OK.

vv54.png

vSphereView 42: Destroying the Volume on the Array

Note:
Delete files from datastore is not a default—if it is not selected, the VVol is detached from the VM, but remains on the array. A VMware administrator can reattach it with the Add existing virtual disk Web Client command.
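A hedged PowerCLI sketch of the same choice (names are illustrative): Remove-HardDisk alone detaches the disk, while the -DeletePermanently switch corresponds to checking Delete files from datastore.

# Detach a virtual disk, or detach it and destroy the underlying data VVol on the array
$disk = Get-HardDisk -VM (Get-VM "VVol-VM") | Where-Object { $_.Name -eq "Hard disk 2" }
Remove-HardDisk -HardDisk $disk -Confirm:$false
# To detach and destroy instead, use: Remove-HardDisk -HardDisk $disk -DeletePermanently -Confirm:$false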

The ESXi host deletes the data VVol’s VMDK pointer file and directs the array to destroy the volume (move it to its Destroyed Volumes folder for 24 hours) (ArrayView 11).

vv55.png

ArrayView 11: Deleted Data VVol in an Array's Destroyed Volumes Folder

An array administrator can recover a deleted VVol-based virtual disk at any time during the 24 hours following deletion. After 24 hours, the array permanently eradicates the volume and it can no longer be recovered.

Swap VVols

VMware creates swap files for VMs of all types when they are powered on, and deletes them at power-off. When a VVol-based VM is powered on, VMware directs the array to create a swap VVol, and creates a swap (.vswp) file in the VM’s config VVol that points to it.

vSphereView 43 illustrates the components of a powered-off VVol-based VM. There is no vswp file. Likewise, ArrayView 12 shows that the VM’s volume group does not include a swap volume.

vv56.png

vSphereView 43: Powered-off VM Configuration

vv57.png

ArrayView 12: Data VVol Volumes for Powered-off VM

To power on a VVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown (vSphereView 44).

vv58.png

vSphereView 44: Power On VM Command

 When a VM is powered on, the Web Client file navigator lists two vswp files in its folder (vSphereView 45).

vv59.png

vSphereView 45: Powered-On VM with vswp File

VMware creates a vswp file for the VM’s memory image when it is swapped out and another for ESXi administrative purposes.

The swap VVol’s name in the VM’s volume group on the array is Swap- concatenated with a unique identifier. The GUI Volumes tab shows a volume whose size is the VM’s memory size. (ArrayView 13). 

vv60.png

ArrayView 13: Swap Volume for Powered-On VM

vv61.png

vSphereView 46: VM's Virtual Memory Size

Like all FlashArray volumes, swap VVols are thin-provisioned—they occupy no space until data is written to them.

To power off a VVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Shut Down Guest OS from the secondary dropdown
(vSphereView 47).

vv62.png

vSphereView 47: Web Client Power Off VM Command

When a VM is powered off, its vswp file disappears from the Web Client file navigator, and the FlashArray GUI Volumes tab no longer shows a swap volume on the array
(cf. ArrayViews 13 and 14).

vv63.png  

ArrayView 14: GUI View of Powered-off VM's Volumes (No Swap VVol)

VMware destroys and immediately eradicates swap VVols from the array. (They do not remain in the Destroyed Volumes folder for 24 hours.) (ArrayView 15)

vv64.png

ArrayView 15: Destroyed and Eradicated Swap VVol

Memory VVols

VMware creates memory VVols for two reasons:

VM suspension

When a VMware administrator suspends a VM, VMware stores its memory state in a memory VVol. When the VM resumes, its memory state is restored from the memory VVol, which is then deleted.

VM snapshots

When a VMware management tool creates a snapshot of a VVol-based VM with the “store memory state” option, VMware creates a memory VVol. Memory VVols that contain VM snapshots are deleted when the snapshots are deleted. They are described in the section titled Creating a VM Snapshot with Saved Memory.

To suspend a running VM, right-click its entry in the Web Client inventory pane, select Power from the dropdown menu, and Suspend from the secondary dropdown (vSphereView 48).

vv65.png

vSphereView 48: VM Suspend Command

VMware halts the VM’s processes, creates a memory VVol (ArrayViews 16 and 17) and a vmss file to reference it (vSphereView 49), de-stages (writes) the VM’s memory contents to the memory VVol, and directs the array to destroy and eradicate its swap VVol.

vv66.png

ArrayView 16: Memory VVol Host Connection

vv67.png

ArrayView 17: GUI View of Memory VVol

vv68.png

vSphereView 49: Memory VVol in File Navigator

When the VM’s memory has been written, the ESXi host unbinds its VVols. They are bound again when it is powered on.

To resume a suspended VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown.
(vSphereView 50)

vv69.png

vSphereView 50: Web Client Command to Power On a Suspended VM

Powering on a suspended VM binds its VVols, including its memory VVol, to the ESXi host, and loads its memory state from the memory VVol. Once loading is complete, VMware unbinds the memory VVol and destroys (but does not immediately eradicate) it. The memory VVol moves to the array’s destroyed volumes folder (ArrayView 18), where it is eradicated permanently after 24 hours.

vv70.png

ArrayView 18: GUI View of Destroyed Memory VVol

Recovering Deleted VVols

Deleted data and config VVols are both recoverable within 24 hours of deletion.

Throughout a VM’s life, it has a config VVol in every VVol datastore it uses. The config VVol hosts the VM’s home folder which contains its VMX file, logs, swap pointer file, and data VVol (VMDK) and snapshot pointer files. Restoring a config VVol from a snapshot and the corresponding data and snapshot VVols effectively restores a deleted VM.

vSphereView 51 illustrates a typical VM’s home folder.

vv71.png

vSphereView 51: File Navigator View of a VM Home Directory Folder

To delete a VM, VMware deletes the files in its config VVol and directs the array to destroy the config VVol and any of its data VVols that are not shared with other VMs.
(vSphereView 52)

vv72.png

vSphereView 52: Confirm Delete Wizard

An array administrator can recover destroyed VVols at any time within 24 hours of their destruction. But because the config VVol’s files are deleted before destruction, recovering a VM’s config VVol results in an empty folder. A recovered config VVol must be restored from its most recent snapshot.

Recovering a config VVol requires at least one pre-existing array-based snapshot. Without a config VVol snapshot, a VM can be recovered, but its configuration must be recovered manually.

When a VMware administrator deletes a VM, VMware directs the array to destroy its config VVol, data VVols, and any snapshots. The array moves the objects to its destroyed objects folders for 24 hours. (ArrayView 19)

vv73.png

ArrayView 19: GUI View of a Destroyed VM's Volumes, Snapshots, and Volume Group

To recover a deleted VM, recover its volume group first, followed by its config and data VVols. To recover a single object on the array, click the icon next to it (ArrayView 19).

To recover multiple objects of the same type with a single action, click the vertical ellipsis and select Recover… (ArrayView 20) to launch the Recover Volumes wizard (ArrayView 21). Select the config VVol and the data VVols to be recovered by checking their boxes and click the Recover button.

vv74.png

ArrayView 20: GUI Command to Recover Objects

vv75.png

ArrayView 21: Selecting Volumes to Recover

 

In the GUI Snapshots pane, click the vertical ellipsis to the right of the snapshot from which to restore, and select Restore from the dropdown menu. (ArrayView 22)

vv76.png

ArrayView 22: Restore Config VVol from Snapshot

 

When the Restore Volume from Snapshot wizard appears, click the Restore button. (ArrayView 23)

vv77.png

 

ArrayView 23: Restore Volume Confirmation Wizard

Restoring the config VVol from a snapshot recreates the pointer files it contains. In the Web Client file navigator, right-click the vmx file and select Register VM… from the dropdown menu to register the VM. (vSphereView 53)

vv78.png

vSphereView 53: Registering a Recovered VM

After registration, all data VVols, snapshots, and the VM configuration are as they were prior to VM deletion.
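Registration can also be scripted with PowerCLI; a minimal sketch (the datastore path and host name are illustrative):

# Register a recovered VM from the vmx file in its restored config VVol
New-VM -VMFilePath "[FlashArray-VVol-Datastore] VVol-VM/VVol-VM.vmx" -VMHost (Get-VMHost "esxi-01.example.com")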

Recovering a Deleted Data VVol

During the 24-hour grace period between deletion of a VVol by a VMware administrator and its eradication by the array, the virtual disk can be restored.

When a VMware administrator deleting a VVol-based virtual disk selects the Delete files from datastore option (vSphereView 54), the array moves the data VVol to its Destroyed Volumes folder for 24 hours.

vv79.png

vSphereView 54: Delete Virtual Disk Command

To use the Plugin to restore a deleted data VVol, click the VM in the inventory pane, select the FlashArray Virtual Volume Objects tab, and click the Restore Deleted Disk Plugin button to launch the Restore Deleted Disk wizard.
(vSphereView 56)

vv80.png

vSphereView 55: Restore Deleted Disk Command

vv81.png

vSphereView 56: Restore Deleted Disk Wizard

Select the data VVol to be restored from the list and click the Restore button. VMware directs the array to remove the data VVol from its Destroyed Volumes folder and makes the virtual disk visible to the VM and to the Web Client.

VVol Binding

A primary goal of the VVol architecture is scale—increasing the number of virtual disks that can be exported to ESXi hosts concurrently. With previous approaches, each volume would require a separate LUN. In large environments, it is quite possible to exceed the ESXi limit of 512 LUNs. VVols introduces the concept of protocol endpoints (PEs) to significantly extend this limit.

ESXi hosts bind and unbind (connect and disconnect) VVols dynamically as needed. Hosts can provision VMs and power them on and off even when no vCenter is available.

When an ESXi host needs access to a VVol:

  • It issues a bind request to the VASA provider whose array hosts the VVol
  • The VASA provider binds the VVol to a PE visible to the requesting host and returns the binding information (the sub-lun) to the host
  • The host issues a SCSI REPORT LUNS command to the PE to make the newly-bound VVol accessible.

VVols are bound to specific ESXi host(s) for as long as they are needed. Binds (sub-lun connections) are specific to each ESXi host-PE-VVol relationship. A VVol bound to a PE that is visible to multiple hosts can only be accessed by hosts that request binds. Table 1 lists the most common scenarios in which ESXi hosts bind and unbind VVols.

What causes the bind? | Bound host | When is it unbound? | VVol type
Power-on | Host running the VM | Power-off or vMotion | Config, data, swap
Folder navigated to in VVol datastore via GUI | Host selected by vCenter with access to the VVol datastore | When navigated away from or session ended | Config
Folder navigated to in VVol datastore via SSH or console | Host logged into | When navigated away from or session ended | Config
vMotion | Target host | Power-off or vMotion | Config, data, swap
VM creation | Target host | Creation completion | Config, data
VM deletion | Target host | Deletion completion | Config
VM reconfiguration | Target host | Reconfiguration completion | Config
Clone | Target host | Clone completion | Config, data
Snapshot | Target host | Snapshot completion | Config

Table 1: Reasons for Binding VVols to ESXi Host

 

Notes:
Binding and unbinding are automatic. There is never a need for a VMware or FlashArray administrator to manually bind a VVol to an ESXi host.

FlashArrays only bind VVols to ESXi hosts that make requests; they do not bind them to host groups.

If multiple PEs are presented to an ESXi host, the host selects one at random to satisfy each bind request. Array administrators cannot control which PE is used for a bind.

The blog post at https://blog.purestorage.com/virtual-volumes-vvol-bindings-explained/ contains a detailed description of ESXi host to PE to VVol binding.

A VVol with no sub-lun connection is not “orphaned”. No sub-lun connection simply indicates that no ESXi host has access to the VVol at that time. 

Snapshots of VVols 

An important benefit of VVols is their handling of snapshots. With VMFS-based storage, ESXi takes VM snapshots by creating a delta VMDK file for each of the VM’s virtual disks. It redirects new virtual disk writes to the delta VMDKs, directs reads of unmodified blocks to the originals, and directs reads of modified blocks to the delta VMDKs. The technique works, but it introduces I/O latency that can profoundly affect application performance. Additional snapshots intensify the latency increase.

The performance impact is so pronounced that both VMware and storage vendors recommend the briefest possible snapshot retention periods; see the VMware KB article Best practices for using snapshots in the vSphere environment (1025279). Practically speaking, this limits snapshot uses to:

Patches and upgrades
Taking a snapshot prior to patching or upgrading an application or guest operating system, and deleting it immediately after the update succeeds.

Backup
Quiescing a VM and taking a snapshot prior to a VADP-based VM backup. Again, the recommended practice is deleting the snapshot immediately after the backup completes.

These snapshots are typically of limited utility for other purposes, such as development testing. Adapting them for such purposes usually entails custom scripting and/or lengthy copy operations with heavy impact on production performance. In summary, conventional VMware snapshots solve some problems, but with significant limitations.

Array-based snapshots are generally preferable, particularly for their lower performance impact. FlashArray snapshots are created instantaneously, have negligible performance impact, and initially occupy no space. They can be scheduled or taken on demand, and replicated to remote arrays. Scripts and orchestration tools can use them to quickly bring up or refresh development testing environments.

Because FlashArray snapshots have negligible performance impact, they can be retained for longer periods. In addition, they can be copied to create new volumes for development testing and analytics, either by other VMs or by physical servers.

FlashArray administrators can take snapshots of VMFS volumes directly; however, there are limitations:

No integration with ESXi or vCenter

Plugins can enable VMFS snapshot creation and management from the Web Client, but vCenter and ESXi have no awareness of or capability for managing them.

Coarse granularity

Array-based snapshots of VMFS volumes capture the entire VMFS. They may include hundreds or thousands of VMs and their VMDKs. Restoring individual VMDKs requires extensive scripting.

VVols eliminate both limitations. VMware does not create VVol snapshots itself; it directs the array to create a snapshot for each of a VM’s data VVols. The Plugin translates Web Client commands into FlashArray operations. VMware administrators use the same tools to create, restore, and delete VMFS and VVol snapshots, but with VVols, they can operate on individual VMDKs.

Taking Snapshots of VVol-based VMs

While the FlashArray GUI, REST, and CLI interfaces can be used for both per-VM and per-virtual disk VVol operations, a major advantage of the Plugin is management of VVols from within vCenter. VMware administrators can use the Web Client or any other VMware management tool to create array-based snapshots of VVol-based VMs.

To take a snapshot of a VVol-based VM with the Web Client, right-click the VM in the inventory pane, select Snapshots from the dropdown menu, and Take Snapshot from the secondary dropdown to launch the Take VM Snapshot for VVol-VM wizard. (vSphereView 58)

vv82.png

vSphereView 57: Web Client Snapshot VM Command

vv83.png

vSphereView 58: Take Snapshot of VVol-VM Wizard

Enter a name for the snapshot and (optionally) check one of the boxes:

Snapshot the virtual machine’s memory:

Causes the snapshot to capture the VM’s memory state and power setting. Memory snapshots take longer to complete, and may cause a brief (a second or less) slowdown in VM response over the network.

Quiesce guest file system:

VMware Tools quiesces the VM’s file system before taking the snapshot. This allows outstanding I/O requests to complete, but queues new ones for execution after restart. When a VM restored from this type of snapshot restarts, any queued I/O requests complete. To use this option, VMware Tools must be installed in the VM. Either of these options can be used with VVol-based VMs.

VMware administrators can also take snapshots of VVol-based VMs with PowerCLI, for example:

New-Snapshot -Name NewSnapshot -Quiesce:$true -VM VVolVM -Memory:$false 

 vv84.png

vSphereView 59: New Files Resulting from a Snapshot of a VVol-based VM

When a snapshot of a VVol-based VM is taken, new files appear in the VM’s VVol datastore folder. (vSphereView 59)

The files are:

VMDK (VVol-VM-000001.vmdk)

A pointer file to a FlashArray volume or snapshot. If the VM is running from that VMDK, the file points to a data VVol. If the VM is not running from that snapshot VMDK, the file points to a VVol snapshot. As administrators change VMs’ running states, VMware automatically re-points VMDK files.

Database file (VVol-VM.vmsd)

The VMware Snapshot Manager’s primary source of information. Contains entries that define relationships between snapshots and the disks from which they are created.

Memory snapshot file (VVol-VM-Snapshot1.vmsn)

Contains the state of the VM’s memory. Makes it possible to revert directly to a powered-on VM state. (With non-memory snapshots, VMs revert to turned off states.) Created even if the Snapshot the virtual machine’s memory option is not selected.

Memory file (not shown in vSphereView 59)

A pointer file to a memory VVol. Created only for snapshots that include VM memory states.

Creating Snapshots Without Saving Memory

If neither Snapshot the virtual machine’s memory nor Quiesce guest file system is selected, VMware directs the array to create snapshots with no pre-work. All FlashArray snapshots are crash consistent, so snapshots of the VVol-based VMs that FlashArrays host are likewise at least crash consistent.

VMware takes snapshots of VVol-based VMs by directing the array (or arrays) to take snapshots of their data VVols. Viewing a VM’s data VVols on the array shows each one’s live snapshots. (ArrayView 24)

vv85.png

vSphereView 60: Completed VMware VM Snapshot

vv86.png

ArrayView 24: Non-memory Snapshot in Array GUI

vv87.png

vSphereView 61: Non-memory Snapshot in Web Client

 

Note:
FlashArray snapshot names are auto-generated, but VMware tools list the snapshot name supplied by the VMware administrator (as in vSphereView 58).

Creating a VM Snapshot with Saved Memory

If the VMware administrator selects Snapshot the virtual machine’s memory, the underlying snapshot process is more complex.

Memory snapshots generally take somewhat longer than non-memory ones because the ESXi host directs the array to create a memory VVol to which it writes the VM’s entire memory image. Creation time is proportional to the VM’s memory size. vSphereView 63 shows the progress indicator for a memory snapshot of a VM.

vv88.png

vSphereView 62: Take VM Snapshot Wizard

Memory snapshots typically cause a VM to pause briefly, usually for less than a second. vSphereView 64 shows a timeout in a sequence of ICMP pings to a VM due to a memory snapshot.

vv89.png

vSphereView 63: Memory Snapshot Progress Indicator

The memory VVol created in a VM’s volume group as a consequence of a memory snapshot stores the VM’s active state (memory image). ArrayView 25 shows the volume group of a VM with a memory snapshot (vvol-VVol-VM-vg/Memory-b31d0eb0). The size of the memory VVol equals the size of the VM’s memory image.

vv90.png

vSphereView 64: Missed Ping Due to Memory Copy During Snapshot Creation

vv91.png

ArrayView 25: Memory VVol Created by Taking a Memory Snapshot of a VM 

VMware flags a memory snapshot with a green vv92.png (play) icon to indicate that it includes the VM’s memory state. (vSphereView 65)

vv93.png

vSphereView 65: Web Client View of a Memory Snapshot

Reverting a VM to a Snapshot

VMware management tools can revert VMs to snapshots taken by VMware. As with snapshot creation, reverting is identical for conventional and VVol-based VM snapshots.

To restore a VM from a snapshot, from the Web Client Hosts & Clusters or VMs and Templates view, select the VM to be restored and click the Snapshots tab in the adjacent pane to display a list of the VM’s snapshots.

Select the snapshot from which to revert, click the All Actions button, and select Revert to from the dropdown menu. (vSphereView 66)

vv94.png

vSphereView 66: Revert VM to Snapshot Command

Subsequent steps differ slightly for non-memory and memory snapshots.

Reverting a VM from a Non-memory Snapshot

The Revert to command displays a confirmation dialog (vSphereView 67). Click Yes to revert the VM to the selected snapshot.

vv95.png

vSphereView 67: Confirm Reverting a VM to a Non-memory Snapshot

The array overwrites the VM’s data VVols from their snapshots. Any data VVols added to the VM after the snapshot was taken are unchanged.

Before reverting a VM from a non-memory snapshot, VMware shuts the VM down. Thus, reverted VMs are initially powered off.

Reverting a VM from Memory Snapshot

To revert a VM to a memory snapshot, the ESXi host first directs the array to restore the VM’s data VVols from their snapshots, and then binds the VM’s memory VVol and reloads its memory. Reverting a VM to a memory snapshot takes slightly longer and results in a burst of read activity on the array (ArrayView 26).

A VM can be reverted to a memory snapshot either in a suspended state or in its running state. Check the Suspend this virtual machine when reverting to selected snapshot box in the Confirm Revert to Snapshot wizard (vSphereView 68) to leave the reverted VM suspended. If the box is not checked, the VM reverts directly to its running state at the time of the snapshot.

vv96.png

ArrayView 26: FlashArray Read Activity while Reverting a VM from a Memory Snapshot
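Reverting can also be scripted with PowerCLI. The following is a minimal sketch; the VM and snapshot names are examples:

# Revert a VVol-based VM to a previously taken managed snapshot
$vm   = Get-VM -Name 'VVolVM'
$snap = Get-Snapshot -VM $vm -Name 'NewSnapshot'
Set-VM -VM $vm -Snapshot $snap -Confirm:$false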

Deleting a Snapshot

Snapshots created with VMware management tools can be deleted with those same tools. VMware administrators can only delete snapshots taken with VMware tools.

To delete a VM snapshot from the Web Client Host and Clusters or VMs and Templates view, select the target VM and click the Snapshots tab in the adjacent pane to display a list of its snapshots.

Select the snapshot to be deleted, click the All Actions button, and select Delete Snapshot from the dropdown menu to launch the Confirm Delete wizard. Click Yes to confirm the deletion. (vSphereViews 69 and 70)

vv97.png

vSphereView 69: Delete VM Snapshot Command

vv98.png

vSphereView 70: Confirm VM Snapshot Deletion
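Snapshot deletion can also be scripted with PowerCLI. The following is a minimal sketch; the VM and snapshot names are examples:

# Delete a managed snapshot of a VVol-based VM
Get-Snapshot -VM 'VVolVM' -Name 'NewSnapshot' | Remove-Snapshot -Confirm:$false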

VMware removes the VM’s snapshot files from the VVol datastore and directs the array to destroy the snapshot. The array moves the snapshot and any corresponding memory VVols to its Destroyed Volumes folder for 24 hours, after which it eradicates them permanently. (ArrayView 27)

vv99.png

ArrayView 27: Memory VVol for a Destroyed Snapshot

When VMware deletes a conventional VM snapshot, it reconsolidates (overwrites the VM’s original VMDKs with the data from the delta VMDKs). Depending on the amount of data changed after the snapshot, this can take a long time and have significant performance impact. With FlashArray-based snapshots of VVols, however, there is no reconsolidation. Destroying a FlashArray snapshot is essentially instantaneous. Any storage reclamation occurs after the fact during the normal course of the array’s periodic background garbage collection (GC).

Unamanged Snapshots

Snapshots created with VMware tools are called managed snapshots. Snapshots created by external means, such as the FlashArray GUI, CLI, and REST interfaces and protection group policies, are referred to as unmanaged. The only difference between the two is that VMware tools can be used with managed snapshots, whereas unmanaged ones must be managed with external tools.

Unmanaged snapshots (and volumes) can be used in the VMware environment. For example, FlashArray tools can copy an unmanaged source snapshot or volume to a target data VVol, overwriting the latter’s contents, but with some restrictions:

Volume size

A source snapshot or volume must be of the same size as the target data VVol. FlashArrays can copy snapshots and volumes of different sizes (the target resizes to match the source), but VMware cannot accommodate external VVol size changes. To overwrite a data VVol with a snapshot or volume of a different size, use VMware tools to resize the target VVol prior to copying (see the PowerCLI sketch after this list).

Offline copying

Overwriting a data VVol while it is in use typically causes the application to fail or produce incorrect results. A VVol should be offline to its VM, or the VM should be powered off before overwriting.

Config VVols

Config VVols should only be overwritten with their own snapshots.

Memory VVols

Memory VVols should never be overwritten. There is no reason to overwrite them, and doing so renders them unusable.
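The volume size restriction above can be satisfied with VMware tools before an external overwrite. The following is a hedged PowerCLI sketch; the VM name, virtual disk name, and capacity are examples:

# Resize the target virtual disk (data VVol) with VMware tools so that it matches
# the size of the source snapshot or volume before overwriting it with FlashArray tools
$disk = Get-HardDisk -VM 'VVolVM' | Where-Object { $_.Name -eq 'Hard disk 2' }
Set-HardDisk -HardDisk $disk -CapacityGB 200 -Confirm:$false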

Snapshot Management with the Plugin

Plugin Version 3.0 introduces snapshot features that are not otherwise available with the Web Client. The VVol-based VM listing has a FlashArray Virtual Volume Objects tab that lists virtual disk-VVol relationships and includes four new feature buttons. (vSphereView 71)

vv100.png

vSphereView 71: Snapshot Features Available with Plugin Version 3.0

 

Three of the Plugin buttons invoke snapshot-related functions:

Import Disk

Instantly presents to the selected VM a copy of any data VVol or VVol snapshot from any VVol-based VM in the vCenter.

Create Snapshot

Creates a FlashArray snapshot of the selected data VVol.

Overwrite Disk

Overwrites the selected VVol with the contents of any data VVol or snapshot in any FlashArray VVol-based VM in the vCenter.

These functions can also be performed with PowerShell or the vRealize Orchestrator. They are included in the Plugin as “one button” conveniences. The subsections that follow describe the functions.

Import Disk

Click the Import Disk button to launch the Import Virtual Volume Disk wizard (vSphereView 72). The wizard lists all VMs with FlashArray data VVols and their managed and unmanaged snapshots.

vv101.png

vSphereView 72: Import VVol Disk Wizard

Select the data VVol or snapshot to be imported and click Create to create a new data VVol having the same size and content as the source.

Because copying FlashArray volumes only reproduces metadata, copies are nearly instantaneous regardless of volume size.

Create Snapshot

VMware tools can create snapshots of VMs that include all of the VM’s data VVols. The Plugin Create Snapshot function can create a snapshot of a selected virtual disk (data VVol).

To create a snapshot of a data VVol, select the target virtual disk and click the Create Snapshot button (vSphereView 73) to launch the Create Snapshot wizard (vSphereView 75).

vv102.png

vSphereView 73: Create Snapshot Plugin Button

 

Note:
Alternatively, right-click the selected virtual disk and select Create Snapshot from the dropdown menu to launch the wizard. (vSphereView 74)

vv103.png

vSphereView 74: Alternative Create Snapshot Command

 

Enter a name for the snapshot (optional—if no name is entered, the array assigns a name) and click Create. VMware directs the array to create a snapshot of the data VVol.

Because FlashArray snapshots only reproduce metadata, creation is nearly instantaneous regardless of volume size.

vv104.png

vSphereView 75: Create Snapshot Wizard

Overwrite Disk

To overwrite a data VVol with any data VVol or snapshot of equal size on the same array, select the virtual disk to be overwritten and either click the Overwrite Disk button or right-click the selection and select Overwrite Disk from the dropdown menu (vSphereView 76) to launch the Overwrite Virtual Volume Disk wizard. (vSphereView 78)

vv105.png

vSphereView 76: Overwrite Disk Command

If the source and target objects are not of the same size, the Plugin blocks the overwrite. (vSphereView 77)

vv106.png

vSphereView 77: Plugin Blocking Overwriting of Different-size Source and Target

If the source and target are of equal size, but the VM is powered on, the Plugin warns the administrator to ensure that the target virtual disk is not mounted by the VM, but allows the overwrite to proceed. (vSphereView 78)

vv107.png

vSphereView 78: Overwrite Virtual Volume Disk Wizard

If the VM is powered off and the source and target objects are of the same size, no warnings are issued.

In either case, click Replace to overwrite the target volume with the contents of the source volume or snapshot.

Because copying a FlashArray volume from another volume or from a snapshot only reproduces its metadata, overwrites are nearly instantaneous regardless of target volume size.

Storage Policy Based Management

A major benefit of the VVol architecture is granularity—its ability to configure each virtual volume as required and ensure that the configuration does not change.

Historically, configuring storage with VMware management tools has required GUI plugins. Every storage vendor’s tools were unique—there was no consistency across vendors. Plugins were integrated with the Web Client, but not with vCenter itself, so there was no integration with the SDK or PowerCLI. Moreover, ensuring ongoing configuration compliance was not easy, especially in large environments. Assuring compliance with storage policies generally required third-party tools.

With VVol data granularity, an array administrator can configure each virtual disk or VM exactly as required. Moreover, with VVols, data granularity is integrated with vCenter in the form of custom storage policies that VMware administrators create and apply to both VMs and individual virtual disks.

Storage policies are VMware administrator-defined collections of storage capabilities. Storage capabilities are array-specific features that can be applied to volumes on the array. When a storage policy is applied, VMware filters out non-compliant storage so that only compliant targets are presented as options for configuring storage for a VM or VVol.

If an array administrator makes a VM or volume non-compliant with a VMware policy, for example by changing its configuration on the array, VMware marks the VM or VMDK non-compliant. A VMware administrator can remediate non-compliant configurations using only VMware management tools; no array access is required.

FlashArray Storage Capabilities

An array’s capabilities represent the features it offers. When any FlashArray’s VASA providers are registered with vCenter, the array informs vCenter that it offers the following capabilities:

  • Encryption of stored data (“data at rest”)
  • Deduplication
  • Compression
  • RAID protection
  • Flash storage

All FlashArrays offer these capabilities; they cannot be disabled. VMware administrators can configure the additional capabilities advertised by the VASA provider and listed in Table 2.

Each capability is configured with a value (values are not case-sensitive):

  • Consistency Group Name: A FlashArray protection group name
  • FlashArray Group: Name of one or more FlashArrays
  • Local Snapshot Interval: A time interval in seconds, minutes, hours, days, weeks, months, or years
  • Local Snapshot Policy Capable: Yes or No
  • Local Snapshot Retention: A time interval in seconds, minutes, hours, days, weeks, months, or years
  • Minimum Replication Concurrency: Number of target FlashArrays to replicate to at once
  • Pure Storage FlashArray: Yes or No
  • QoS Support: Yes or No
  • Replication Capable: Yes or No
  • Replication Interval: A time interval in seconds, minutes, hours, days, weeks, months, or years
  • Replication Retention: A time interval in seconds, minutes, hours, days, weeks, months, or years
  • Target Sites: Names of specific FlashArrays desired as replication targets

Table 2: Configurable Capabilities Advertised by FlashArray VASA Providers

Storage Capability Compliance

Administrators can specify values for some or all of these capabilities when creating storage policies. VMware performs two types of policy compliance checks:

  • If a VVol were created on the array, could it be configured with the feature?
  • Is a VVol in compliance with its policy? For example, a VVol with a policy of hourly snapshots must be (a) on a FlashArray that hosts a protection group with hourly snapshots and (b) a member of that protection group.

Only VMs and virtual disks configured with VVols can be compliant. VMFS-based VMs are never compliant, even if their volume is on a compliant FlashArray.

Table 3 lists the circumstances under which a policy offers each capability, and those under which a VVol is in or out of compliance with it. 

In the entries below, "offered when" describes the array advertising the capability; "in compliance" and "out of compliance" describe a VVol whose assigned policy includes the capability.

Pure Storage FlashArray
  Offered when: it is a FlashArray (i.e., always).
  In compliance when: it is on a FlashArray and the capability is set to ‘Yes’.
  Out of compliance when: it is on a different array vendor/model and the capability is set to ‘Yes’, or it is on a FlashArray and the capability is set to ‘No’.

FlashArray Group
  Offered when: it is a FlashArray and its name is listed in this group.
  In compliance when: it is on a FlashArray with one of the configured names.
  Out of compliance when: it is not on a FlashArray with one of the configured names.

QoS Support
  Offered when: it is a FlashArray and has QoS enabled.
  In compliance when: it is on a FlashArray with QoS enabled and the capability is set to ‘Yes’, or on a FlashArray with QoS disabled and the capability is set to ‘No’.
  Out of compliance when: it is on a FlashArray with QoS disabled and the capability is set to ‘Yes’, or on a FlashArray with QoS enabled and the capability is set to ‘No’.

Consistency Group Name
  Offered when: it is a FlashArray and has a protection group with that name.
  In compliance when: it is in a protection group with that name.
  Out of compliance when: it is not in a protection group with that name.

Local Snapshot Policy Capable
  Offered when: it is a FlashArray and has at least one protection group (the group need not have an enabled policy).
  In compliance when: it is on a FlashArray with at least one protection group (the group need not have an enabled policy).
  Out of compliance when: it is on a FlashArray with no protection groups, or on a non-FlashArray.

Local Snapshot Interval
  Offered when: it is a FlashArray and has at least one protection group with an enabled local snapshot policy of the specified interval.
  In compliance when: it is in a protection group with an enabled local snapshot policy of the specified interval.
  Out of compliance when: it is not in a protection group with an enabled local snapshot policy of the specified interval.

Local Snapshot Retention
  Offered when: it is a FlashArray and has at least one protection group with an enabled local snapshot policy of the specified retention.
  In compliance when: it is in a protection group with an enabled local snapshot policy of the specified retention.
  Out of compliance when: it is not in a protection group with an enabled local snapshot policy of the specified retention.

Replication Capable
  Offered when: it is a FlashArray (i.e., always).
  In compliance when: it is in a protection group with an enabled replication target.
  Out of compliance when: it is not in a protection group with an enabled replication target.

Replication Interval
  Offered when: it is a FlashArray and has at least one protection group with an enabled replication policy of the specified interval.
  In compliance when: it is in a protection group with an enabled replication policy of the specified interval.
  Out of compliance when: it is not in a protection group with an enabled replication policy of the specified interval.

Replication Retention
  Offered when: it is a FlashArray and has at least one protection group with an enabled replication policy of the specified retention.
  In compliance when: it is in a protection group with an enabled replication policy of the specified retention.
  Out of compliance when: it is not in a protection group with an enabled replication policy of the specified retention.

Minimum Replication Concurrency
  Offered when: it is a FlashArray and has at least one protection group with the specified number (or more) of allowed replication targets.
  In compliance when: it is in a protection group that has at least the specified number of allowed replication targets.
  Out of compliance when: it is not in a protection group that has at least the specified number of allowed replication targets.

Target Sites
  Offered when: it is a FlashArray and has at least one protection group with one or more of the specified allowed replication targets. If Minimum Replication Concurrency is also set, the group must include at least that many of the listed FlashArrays.
  In compliance when: it is in a protection group with one or more of the specified allowed replication targets. If Minimum Replication Concurrency is also set, it must be replicated to at least that many of the listed target FlashArrays.
  Out of compliance when: it is not in a protection group replicating to the required number of the specified target FlashArrays.

Table 3: Capability Offering and VVol Compliance Conditions

Combining Capabilities and Storage Compliance

This section describes an example of combining capabilities into a policy. Storage policies are a powerful method of assuring specific configuration control, but they affect how VVol compliance is viewed. For an array or VVol to be compliant with a policy:

  1. The array or VVol must comply with all of the policy’s capabilities
  2. For snapshot and replication capabilities, the array must have at least one protection group that offers all of the policy’s capabilities. For example, if a policy requires hourly local snapshots and replication every 5 minutes, a protection group with hourly snapshots and a different protection group with 5-minute replication do not make the array compliant. VMware requires that volumes be in a single group during policy configuration, so to be compliant for this example, an array would require at least one protection group with both hourly snapshots and 5-minute replication.
  3. Some combinations of capabilities cannot be compliant. For example, setting an array’s Local Snapshot Policy Capable capability to No and specifying a policy that includes snapshots means that no storage compliant with the policy can be hosted on that array.

Creating a Storage Policy

vCenter makes the capabilities advertised by an array’s VASA Provider available to VMware administrators for assembling into storage policies. Administrators can create policies by using APIs, GUI, CLI, or other tools. This section describes two ways of creating policies for FlashArray-based VVols:

Custom Policy Creation

Using the Web Client to create custom policies using capabilities published by the FlashArray VASA provider

Importing FlashArray Protection Groups

Using the Plugin to create storage policies by importing a FlashArray protection group configuration

Creating Custom Storage Policies

Click the home icon at the top of the Web Client home screen, and select Policies and Profiles from the dropdown menu (vSphereView 79) to display the VM Storage Policies pane.

vv108.png

vSphereView 79: Policies and Profiles Command

Select the VM Storage Policies tab and click the Create VM Storage Policy button (vSphereView 80) to launch the Create New VM Storage Policy wizard. (vSphereView 81)

vv109.png

vSphereView 80: Create VM Storage Policy Button

Select a vCenter from the dropdown and enter a descriptive name for the policy.

vv110.png

vSphereView 81: Create New VM Storage Policy Wizard

It is a best practice to use a naming convention that is operationally meaningful. For example, the name in vSphereView 81 suggests a policy configured on FlashArray storage with 1 hour local snapshots and a 15 minute replication interval.

Configure pages 2 and 2a as necessary (refer to VMware documentation for instructions), advance to the 2b Rule-set 1 page, and select com.purestorage.storage.policy in the <Select provider> dropdown to build the storage policy from the FlashArray VASA provider’s rules. (vSphereView 82)

vv111.png

vSphereView 82: Rule-set 1 Page 2b of the Create New VM Storage Policy Wizard

A storage policy requires at least one rule. To locate all VMs and virtual disks to which this policy will be assigned on FlashArrays, click the <Add rule> dropdown and select the Pure Storage FlashArray capability (vSphereView 83).

vv112.png

vSphereView 83: Adding a Storage Policy Rule

The selected rule name appears above the <Add rule> dropdown, and a dropdown list of valid values appears to the right of it. Select Yes and click Next (not shown) to create the policy. As defined thus far, the policy requires that VMs and VVols to which it is assigned be located on FlashArrays, but they are not otherwise constrained. When a policy is created, the Plugin checks registered arrays for compliance and displays a list of VVol datastores on arrays that support it (vSphereView 84).

vv113.png

vSphereView 84: List of Arrays Compatible with a New Storage Policy

The name assigned to the policy (FlashArray-1hrSnap15minReplication—see vSphereView 81) suggests that it should specify hourly snapshots and 15-minute replications of any VMs and virtual volumes to which it is assigned. Click Back (not shown in vSphereView 84) to edit the rule-set.

FlashArray replication and snapshot capabilities require component rules. Click Add component and select Replication from the dropdown (vSphereView 85) to display the Replication component rule pane (vSphereView 86).

vv114.png

vSphereView 85: Selecting a Component for the Policy

Select the provider (vSphereView 86), and add rules, starting with the local snapshot policy.

vv115.png

vSphereView 86: Selecting Replication Provider

Click the Add Rule dropdown, select Local Snapshot Interval, enter 1 in the text box, and select Hours as the unit. (vSphereView 87)

vv116.png

vSphereView 87: Specifying Snapshot Interval Rule

Click the Add Rule dropdown again, select Remote Replication Interval, enter 15 in the text box, select Minutes as the unit (vSphereView 88), and click Next to display the list of registered arrays that are compatible with the augmented policy. vSphereView 89 indicates that there are two such arrays.

vv117.png

vSphereView 88: Specifying Replication Interval Rule

vv118.png

vSphereView 89: Arrays Compatible with the "FlashArray-1hr-Snap15minReplication" Storage Policy

Note:
A policy can be created even if no registered VVol datastores are compatible with it, but it cannot be assigned to any VMs or VVols. Storage can be adjusted to comply, for example, by creating a compliant protection group, or alternatively, the policy can be adjusted to be compatible with existing storage.
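The same kind of policy can be assembled with the PowerCLI SPBM cmdlets. The sketch below is illustrative only: the capability identifier used for the rule is an assumption, and the actual identifiers under the com.purestorage.storage.policy namespace should be discovered with Get-SpbmCapability.

# List the capabilities published by the FlashArray VASA provider
Get-SpbmCapability | Where-Object { $_.Name -like 'com.purestorage.*' } | Select-Object Name

# Build a one-rule policy requiring placement on a FlashArray
# (the capability name filter below is an assumed example)
$faCap   = Get-SpbmCapability | Where-Object { $_.Name -like '*PureFlashArray*' }
$rule    = New-SpbmRule -Capability $faCap -Value $true
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name 'FlashArray-1hrSnap15minReplication' -AnyOfRuleSets $ruleSet

Additional rules, such as the snapshot and replication intervals added in the Web Client walkthrough, can be built the same way from the discovered capabilities and included in the rule set.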

Auto-policy Creation with the Plugin 

As an alternative to custom policies, the Plugin can import FlashArray protection groups and create vCenter policies with the same attributes.

vv119.png

vSphereView 90: Plugin Import Protection Groups Button

From the Plugin’s home pane, select an array and either click the Import Protection Groups button (vSphereView 90) or right-click the selected array and select Import Protection Groups on the dropdown menu (vSphereView 91) to launch the Import Protection Groups wizard. (vSphereView 92)

vv120.png

vSphereView 91: Import Protection Group Command

vv121.png

vSphereView 92: Import Protection Groups Wizard (1)

The wizard lists the available protection groups on the selected array along with a brief summary of their local snapshot and remote replication policies. For more detailed information, refer to the protection group display in the FlashArray GUI.

Note:
A grayed-out listing indicates a protection group whose properties match an existing vCenter storage policy.

Select the protection groups to be imported by checking the boxes and click the Import button (vSphereView 93). 

vv122.png

vSphereView 93: Import Protection Groups Wizard (2)

The protection group parameters used to create a storage policy are:

  • Snapshot interval
  • Short-term per-snapshot retention
  • Replication interval
  • Short-term per-replication snapshot retention

The Plugin creates storage policies on all vCenters in the environment to which the logged-in administrator has access. If vCenters are in enhanced linked-mode (by sharing SSO environments) the policies are created on all of them.

On the Web Client Policies and Profiles page, select the VM Storage Policies tab to display the vCenter’s default, previously created, and imported storage policies (vSphereView 94). The lower grouping in vSphereView 94 represents the imported policies (vSphereView 93). Each policy is created in the two available vCenters.

vv123.png

vSphereView 94: Default and Imported Storage Policies

The policy names supplied by the Plugin describe the policies in terms of snapshot and replication intervals.

Select a policy to view the details of its capabilities (vSphereView 95). In the FlashArray GUI Storage view Protection Groups pane, select platinum to display the snapshot and replication details for the protection group imported to create the Snap 1 HOURS Replication 5 MINUTES policy. (ArrayView 28)

vv124.png

vSphereView 95: Web Client View of Policy Details for Snap 1 HOURS Replication 5 Minutes

vv125.png

ArrayView 28: FlashArray GUI View of Details for Platinum Protection Group

Changing a Storage Policy

A VMware administrator can edit a storage policy that no longer meets the needs of the VMs to which it is assigned.

To change a policy’s parameters from the Policies and Profiles page in the Web Client, select VM Storage Policies, select the policy to be changed, and click the Edit Settings… button to display a list of the policy’s rules. Make the needed rule changes and click OK.

vv126.png

vSphereView 96: Edit Settings... Button

vv127.png

vSphereView 97: Changing a Policy Rule

Clicking OK launches the VM Storage Policy in Use wizard (vSphereView 98), offering two options for resolution:

Manually later 

Flags all VMs and virtual disks to which the changed policy is assigned as Out of Date (vSphereView 99).

Now

Assigns the changed policy to all VMs and virtual disks assigned to the original policy.

Click Yes to display the policy pane and select the Monitor tab.

vv128.png

vSphereView 98: VM Storage Policy in Use Wizard  

vv129.png

vSphereView 99: Out of Date Storage Policies

If Manually later is selected, VMs and VVols show Out of Date compliance status. Update the policies for the affected VMs and virtual disks by selecting them and clicking the Reapply storage policy to all out of date entities button indicated in vSphereView 100.

vv130.png

vSphereView 100: Reapply Storage Policy Button

 

Selecting Now in the VM Storage Policy in Use wizard (vSphereView 98) does not reconfigure the VVols on the array, so it typically causes VMs and virtual disks to show Noncompliant status. (vSphereView 101)

vv131.png

vSphereView 101: Non-compliant VM Objects

The subsection titled Changing a VM’s Storage Policy describes the procedure for bringing non-compliant VMs and virtual disks into compliance.

Checking VM Storage Policy Compliance

A VVol-based VM or virtual disk may become noncompliant with its vCenter storage policy when a storage policy is changed, when an array administrator reconfigures volumes, or when the state of an array changes.

For example, if an array administrator changes the replication interval for a protection group that corresponds to a vCenter storage policy, the VMs and virtual disks to which the policy is assigned are no longer compliant.

To determine whether a VM or virtual disk is compliant with its assigned policy, either select the policy and display the objects assigned to it (vSphereViews 99 and 101), or validate VMs and virtual disks for compliance with a given policy.

From the Web Client home page, click the VM Storage Policies icon to view the vCenter’s list of storage policies (vSphereView 102). Select a policy, click the Monitor tab, and click the VMs and Virtual Disks button (vSphereView 104) to display a list of the VMs and virtual disks to which the policy is assigned.

vv132.png

vSphereView 102: VM Storage Policies Icon

vv133.png

vSphereView 103: Selecting a Policy for Validation

vv134.png

vSphereView 104: Validating Policy Compliance

Each VM or virtual disk to which a policy is assigned has one of the following compliance statuses:

Compliant
    The VM or virtual disk is configured in compliance with the policy.

Noncompliant
    The VM or virtual disk is not configured according to the policy.

Out-of-date
    The policy has been changed but has not been re-applied. The VM or virtual disk may still be compliant, but the policy must be re-applied to determine that.

The subsection titled Changing a VM’s Storage Policy describes making objects compliant with their assigned storage policies.
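Compliance can also be checked with PowerCLI. The following is a minimal sketch; the VM name is an example:

# Report the storage policy and compliance status of a VM and its virtual disks
$vm = Get-VM -Name 'VVolVM'
Get-SpbmEntityConfiguration -VM $vm
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm)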

Assigning a Storage Policy to a VM or Virtual Disk

The Web Client can assign a storage policy to a new VM or virtual disk when it is created, deployed from a template, or cloned from another VM. A VMware administrator can change the policy assigned to a VM or virtual disk. Finally, a VM’s storage policy can be changed during Storage vMotion.

Assigning a Storage Policy to New VM

A VMware administrator can assign a storage policy to a new VM created using the Deploy from Template wizard. (The procedure is identical to policy assignment with the Create New Virtual Machine and Clone Virtual Machine wizards.)

Right-click the target template in the Web Client inventory pane’s VMs and Templates list, and select New VM from This Template (vSphereView 105).

vv135.png

vSphereView 105: New VM from Template Command

Select options in steps 1a and 1b, and advance the wizard to step 1c, Select Storage (vSphereView 106).

vv136.png

vSphereView 106: Select Storage Step of Template

Setting a Policy for an Entire VM

In the Select Storage pane, select Thin Provision from the Select virtual disk format dropdown (FlashArrays only support thin provisioned volumes; selecting other options causes VM creation to fail), and either select a datastore (VMFS, NFS or VVol) from the list or a policy from the VM storage policy dropdown.

Selecting a policy filters the list to include only compliant storage. For example, selecting the built-in VVol No Requirements Policy would filter the list to show only VVol datastores. (vSphereView 107)

vv137.png

vSphereView 107: Selecting a Storage Policy

Selecting the FlashArray Snap 1 HOURS Replication 5 MINUTES policy filters out datastores on arrays that do not have protection groups with those properties. (vSphereView 108)

vv138.png

vSphereView 108: Select VM Storage Policy

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned (e.g., flasharray-vvol-1:platinum in vSphereView 110), or, if Automatic is selected (vSphereView 109), VMware directs the array to create a protection group with the specified capabilities.

vv139.png

vSphereView 109: Select Automatic Replication Group

Whichever option is chosen, the VM’s config VVol and all of its data VVols are assigned the same policy. (Swap VVols are never assigned a storage policy.) Click Finish (not shown in vSphereView 110) to complete the wizard. The VM is created and its data and config VVols are placed in the assigned protection group.

vv140.png

vSphereView 110: Assign an Existing Replication Group

BEST PRACTICES: Pure Storage recommends assigning local snapshot policies to all config VVols to simplify VM restoration.

All FlashArray volumes are thin provisioned, so the Thin Provision virtual disk format should always be selected. With FlashArray volumes, there is no performance impact for thin provisioning.

ArrayView 29 shows the FlashArray GUI view of a common storage policy for an entire VVol-based VM.

vv141.png

ArrayView 29: GUI View of a VM-wide Storage Policy

Assigning a Policy to Each of a VM's Virtual Disks

In most cases, VMware administrators put all of a VM’s volumes in the same protection group, thereby assigning the same storage policy to them.

Alternatively, assign a separate policy to some or all of a VM’s volumes by clicking the Advanced button of the Select Storage step (1c) of the Deploy from Template wizard (vSphereView 111) to display the advanced view. (vSphereView 112).

vv142.png

vSphereView 111: Advanced >> Button for Per-VVol Storage Policies

In the advanced view, a separate storage policy can be specified for the VM’s config VVol as well as for each virtual disk (data VVol).

The Configuration File line in vSphereView 112 refers to the VM’s config VVol. The remaining lines enumerate its data VVols (Hard Disk 1 in the example).

vv143.png

vSphereView 112: Select Storage Advanced View

To select a storage policy for a VVol, click the dropdown in the Storage column of its row and select Browse (vSphereView 113) to launch the Select a datastore cluster or datastore wizard. Either select a VMFS, NFS, or VVol datastore from the list or select a policy from the dropdown. (vSphereView 114)

vv144.png

vSphereView 113: Browse for Custom Storage Policy

vv145.png

vSphereView 114: Selecting a Storage Policy for a VVol

Selecting a policy from the VM storage policy dropdown filters the list to include only compliant datastores. For example, selecting the VVol No Requirements Policy lists only VVol datastores. 

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned (for example, flasharray-vvol-1:platinum in vSphereView 115).

vv146.png

vSphereView 115: Selecting Storage Policy for VVol

Alternatively, if Automatic is selected (as in vSphereView 115), the array creates a protection group with the capabilities specified by the policy. Whichever option is chosen, the policy is assigned to the VVol.

For example, a VM’s config VVol might be assigned a 1 hour snapshot and 1 hour replication storage policy, corresponding to the flasharray-vvol-1:gold replication group, whereas its data VVols might be assigned a 1 hour snapshot and 5 minute replication policy, corresponding to the flasharray-vvol-1:platinum replication group. vSphereView 116 shows the Select a datastore cluster or datastore panes for configuring the two policies.

vv147.png

vSphereView 116: Separate Storage Policies for Config and Data VVols

ArrayViews 30 and 31 list the contents of the two protection groups that correspond to the vCenter replication groups.

vv148.png

ArrayView 30: gold Protection Group

vv149.png

ArrayView 31: platinum Protection Group

Changing a VM's Storage Policy

To change a VM’s storage policy, a VMware administrator assigns a new policy to it. VMware directs the array to reconfigure the affected VVols. If the change makes the VM or any of its virtual disks non-compliant, the VMware administrator must adjust their policies.

 

To change a VM’s storage policy, select the VMs and Templates view in the Web Client inventory pane, (1) right-click the target VM, (2) select VM Policies from the dropdown menu, and (3) select Edit VM Storage Policies from the secondary dropdown (vSphereView 117) to launch the Edit VM Storage Policies wizard (vSphereView 118).

vv150.png

vSphereView 117: Edit VM Storage Policies Command

The storage policy for the VM in the example specifies a 1 hour snapshot interval and a 5 minute replication interval, so both the config and data VVols are in the array’s platinum protection group. (ArrayView 32)

vv151.png

ArrayView 32: Config and Data VVols in the Same Protection Group

vv152.png

vSphereView 118: Edit VM Storage Policies Wizard

vv153.png

vSphereView 119: Apply a Common Storage Policy to All of a VM's VVols

To change the storage policy assigned to a VM’s config VVol or a single data VVol, select a policy from the dropdown in the VM Storage Policy column of its row in the table.
(vSphereView 120)

vv154.png

vSphereView 120: Change Config VVol Storage Policy

 

Selecting a policy that is not valid for the array that hosts a VVol displays a Datastore does not match current VM policy error message. To satisfy the selected policy, the VM would have to be moved to a different array (reconfiguration would not suffice).

A storage policy change may require that the replication groups for one or more VVols be changed. If this is the case, the Replication Groups indicator is marked with an alert (vv155.png ) icon (vSphereView 122).

vv156.png

vSphereView 121: Non-Compliant Datastore

vv157.png

vSphereView 122: One or More Replication Groups not Configured

 

This alert typically appears for one of two reasons:

  1. One or more VVols are in replication groups (FlashArray protection groups) that do not comply with the new storage policy.
  2. The new storage policy requires that VVols be in a replication group, and one or more VVols are not.

If the alert appears, or to verify or change the replication group, click Configure to launch the Configure VM Replication Groups wizard (vSphereView 123).

To assign a policy to all of a VM’s VVols, click the Common replication group radio button, select a replication group from the Replication group dropdown, and click OK.
(vSphereView 123)

vv158.png

vSphereView 123: Configure a VM Replication Group

 

Note: If no policy is shared by all of the VM’s VVols, the Replication group dropdown does not appear.

To assign different policies to individual VVols, click the Replication group per storage object radio button, and select a replication group for each VVol to be replicated from the dropdown in its row. When selections are complete, click OK. (vSphereView 124)

 vv159.png

vSphereView 124: Configure VVol Replication Groups

Click OK again to complete reconfiguration. VMware directs the array to change the VVols’ protection group membership as indicated in the selections for the new policy.

vv160.png

 

vSphereView 125: Configure VVol Replication Groups

vv161.png

ArrayView 33: Common VM Protection Group
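Storage policy changes can also be scripted with the PowerCLI SPBM cmdlets. The following is a hedged sketch; the VM, policy, and replication group names are examples:

# Assign a new storage policy and replication group to a VM and all of its virtual disks
$vm     = Get-VM -Name 'VVolVM'
$policy = Get-SpbmStoragePolicy -Name 'Snap 1 HOURS Replication 5 MINUTES'
$rg     = Get-SpbmReplicationGroup | Where-Object { $_.Name -eq 'flasharray-vvol-1:platinum' }
Get-SpbmEntityConfiguration -VM $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $rg
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) | Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $rg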

Assigning a Policy during Storage Migration

Compliance with an existing or newly assigned storage policy may require migrating a VM to a different array. For example, VM migration is required if:

  • A policy specifying a different array than the current VM or virtual disk location is assigned
  • A policy requiring QoS (or not) is assigned to a VM or virtual disk located on an array with the opposite QoS setting.
  • A policy specifying snapshot or replication parameters not available with any protection group on a VM or virtual disk’s current array is assigned.
Some of these situations can be avoided by array reconfiguration, for example by creating a new protection group or inverting the array’s QoS setting. Others, such as a specific array requirement, cannot. If an array cannot be made to meet a policy requirement, the VMware administrator must use Storage vMotion to move the VM or virtual disk to one that can satisfy the requirement. The administrator can select a new storage policy during Storage vMotion.

For example, vSphereView 126 illustrates a VM whose assigned storage policy specifies hourly snapshots and replication with one-day retention for both.

vv162.png

vSphereView 126: VM Storage Policy Specifying Hourly Snapshots and Replication

The VM in this example is located on flasharray-vvol-1, in protection group gold. (ArrayView 34)

vv163.png

ArrayView 34: Protection Group with Hourly Snapshots and Replication Specified

The VM is compliant with the vCenter-assigned Snap 1 HOURS Replication 1 HOURS policy (vSphereView 127).

vv164.png

vSphereView 127: VM Compliance with Storage Policy

If the VMware administrator changes the VM’s storage policy to one that requires not only the snapshot and replication parameters, but also that the VM and its VVols be located on array flasharray-vvol-2, the VM and its VVols become noncompliant because they are located on flasharray-vvol-1. (vSphereViews 128 and 129)

vv165.png

vSphereView 128: New VM Storage Policy Requiring Location on a Specific FlashArray

No amount of reconfiguration of FlashArray flasharray-vvol-1 can remedy the discrepancy, so to make the VM compliant with the new policy, Storage vMotion must move it to flasharray-vvol-2.

vv167.png

vSphereView 129: VM Out of Compliance with its Assigned Storage Policy

To move a VM between arrays using Storage vMotion, from the VMs and Templates inventory pane, right-click the VM to be moved, and select Migrate from the dropdown menu to launch the Select the migration type wizard (vSphereView 130).

vv168.png

vSphereView 130: Select Migration Type Wizard

Click Change storage only and Next (not shown in vSphereView 130) to launch the Migrate wizard (vSphereView 131).

vv169.png

vSphereView 131: Migrate (Storage vMotion) Wizard

Reselect the storage policy from the dropdown (do not select Keep existing storage policy), reselect the target from the list of datastores with compatible policies, and click Finish to migrate the VM to the target array and configure the VVols as specified in the reselected policy. When migration completes, the VM is on the target array and it and its VVols are compliant with the assigned storage policy.

BEST PRACTICE: Pure Storage recommends reselecting the same storage policy rather than the Keep existing storage policy option in order to provide Storage vMotion with the information it needs to complete a migration.
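One way to script an equivalent result is to move the VM's storage and then re-apply the policy with the SPBM cmdlets. The following is a hedged sketch; the VM, datastore, and policy names are examples:

# Storage-only migration to the compatible VVol datastore on the target array
$vm     = Get-VM -Name 'VVolVM'
$target = Get-Datastore -Name 'flasharray-vvol-2-VVolDS'
$policy = Get-SpbmStoragePolicy -Name 'Snap 1 HOURS Replication 1 HOURS'
Move-VM -VM $vm -Datastore $target
# Re-apply the policy so the migrated VVols are placed in a compliant protection group
Get-SpbmEntityConfiguration -VM $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) | Set-SpbmEntityConfiguration -StoragePolicy $policy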

The Migrate wizard contains an Advanced button. The subsection titled Assigning a Policy to Each of a VM’s Virtual Disks describes the use of the advanced option to specify per-VVol storage policies.

vSphereView 132 illustrates the example VM (vSphereView 127) after (a) the policy in vSphereView 128 has been assigned to it, and (b) it has been migrated to flasharray-vvol-2. ArrayView 35 illustrates the GUI view of the example VM’s VVols, now located in flasharray-vvol-2’s gold protection group.

vv170.png

vSphereView 132: Migrated VM Compliant with its Assigned Storage Policy

vv171.png

ArrayView 35: Protection Group on flasharray-vvol-2 Showing Migrated VM's VVols

Replicating VVols

With VASA version 3, FlashArrays can replicate VVols. VMware is aware of replicated VMs and can fail them over and otherwise manage replication. Additional information is available from VMware at:

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-6346A936-5084-4F38-ACB5-B5EC70AB8269.html

VMware VVol replication has three components:

Replication Policies 

Specify sets of VM requirements and configurations for replication that can be applied to VMs or virtual disks. If configuration changes violate a policy, VMs to which it is assigned become non-compliant.

Replication Groups

Correspond to FlashArray protection groups, and are therefore consistency groups in the sense that replicas of them are point-in-time consistent. Replication policies require replication groups.

Failure domains

Sets of replication targets. VMware requires that a VM’s config VVol and data VVols be replicated within a single failure domain.

In the FlashArray context, a failure domain is a set of arrays. For two VVols to be in the same failure domain, one must be replicated to the same arrays as the other. In other words, a VM’s VVols must all be located in protection groups that have the same replication targets.

vv172.png

vSphereView 133: A Policy that Specifies Different Replication Fault Domains

Replication policies can only be assigned to config VVols and data VVols. Other VM objects inherit replication policies in the following way:

  • A memory VVol inherits the policy of its configuration VVol
  • The swap VVol, which only exists when a VM is powered on, is never replicated.

The initial release of FlashArray VVol support does not preserve local snapshot chains through replication. VMware-managed local snapshots are not replicated and are therefore unavailable after a VM fails over. For VMs that are to be replicated, either do not create VMware-managed snapshots or delete them before failover. Pure Storage plans to deliver preservation of VMware-managed snapshot chains through failover in a future release of FlashArray software.

VMware can perform three types of failovers on VVol-based VMs:

Planned Failover

Movement of a VM from one datacenter to another, for example for disaster avoidance or planned migration. Both source and target sites are up and running throughout the failover. Once a planned failover is complete, replication can be reversed so that the failed over VM can be failed back.

Unplanned Failover

Movement of a VM when a production datacenter fails in some way. Failures may be temporary or irreversible. If the original datacenter recovers after failover, automated reprotection may be possible. Otherwise, a new replication scheme must be configured.

Test Failover

Similar to planned failover, but does not bring down the production VM. Test failover recovers temporary copies of protected VMs to verify the failover plan before an actual disaster or migration.

At the time of publication, VMware vCenter Site Recovery Manager does not support VVols or array-based replication; SRM supports VVol failover only with vSphere Replication. Refer requests for SRM support of VVols and array-based replication to VMware.

These VVol failover modes can be implemented using the VMware SDK, tools such as PowerCLI or vRealize Orchestrator, or any tool that can access the VMware SPBM SDK. Pure Storage plans to make PowerCLI example scripts and tools available on the Pure Storage Community and GitHub repositories as they are created and validated.

PowerCLI version 6.5.4 or newer is required for use with FlashArray-based VVols.
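For example, a test failover of a replication group can be scripted with the PowerCLI SPBM replication cmdlets. The following is a hedged sketch run against the target (recovery) vCenter; the replication group name is an example:

# Identify the target-side replication group and start a test failover
$targetGroup = Get-SpbmReplicationGroup | Where-Object { $_.Name -like '*:platinum' }
$vmxPaths    = Start-SpbmReplicationTestFailover -ReplicationGroup $targetGroup
# Register and power on test VMs from the returned .vmx paths, verify them, then clean up
Stop-SpbmReplicationTestFailover -ReplicationGroup $targetGroup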

 

VVol Reporting

The VVol architecture that gives VMware insight into FlashArrays also gives FlashArrays insight into VMware. With VVol granularity, the array can recognize and report on both entire VVol-based VMs (implemented as volume groups) and individual virtual disks (implemented as volumes).

Storage Consumption Reporting

FlashArrays represent VMs as volume groups. The Volumes tab of the GUI Storage pane lists an array’s volume groups. Select a group that represents a VM to display a list of its volumes (ArrayView 36). 

vv173.png

ArrayView 36: GUI View of a Volume Group and its Volumes 

The top panel of the display shows averaged and aggregated storage consumption statistics for the VM. Click the Space button in the Volumes pane to display storage consumption statistics for individual VVols (ArrayView 37).

vv174.png

ArrayView 37: GUI View of a Volume Group's Per-volume Storage Consumption

To view a VM’s storage consumption history, switch to the Analysis pane Capacity view and select the Volumes tab. (ArrayView 38)

vv175.png 

ArrayView 38: GUI Analysis

 

To view history for VMs (volume groups) or VVols (volumes), select an object type from the dropdown menu. (ArrayView 39)

vv176.png

ArrayView 39: Selecting Volume Statistics

 

Click the desired object in the list to display its storage consumption history. (Alternatively, enter a full or partial VM name in the search box to filter the list.)

The array displays a graph of the selected object’s storage consumption over time. The graph is adjustable—time intervals from 24 hours to 1 year can be selected. It distinguishes between storage consumed by live volumes and that consumed by their snapshots. The consumption reported is for volume and snapshot data that is unique to the objects (i.e., not deduplicated against other objects). Data shared by two or more volumes or snapshots is reported separately on a volume group-wide basis as Shared (for example, see ArrayView 36).

vv177.png

ArrayView 40: GUI Storage Capacity History for a Volume Group

Performance Reporting

The FlashArray GUI can also report VM and VVol performance history. In the Analysis pane Performance view, the history of a VM’s or VVol’s IOPS, latency, and data throughput (Bandwidth) can be viewed.

Click the Volumes tab to display a list of the array’s VMs (volume groups) and/or VVols (volumes). To view an object’s performance history, select Volume Groups, Volumes, or All in the dropdown (ArrayView 42), and select a VM or VVol from the resulting list.

vv178.png

ArrayView 41: GUI Analysis Pane

A VM’s or VVol’s performance history graph shows its IOPS, throughput (Bandwidth), and latency history in separate stacked charts (ArrayView 43).

vv179.png

ArrayView 42: Selecting Volume Display

The graphs show the selected object’s performance history over time intervals from 24 hours to 1 year. Read and write performance can be shown in separate curves. For VMs, latency is the average for all volumes; throughput and IOPS are an accumulation across volumes.

vv180.png

ArrayView 43: GUI Performance History for a Volume Group

Migrating VMs to VVols

Storage vMotion can migrate VMs from VMFS, NFS, or Raw Device Mappings (RDMs) to VVols.

Migrating a VMFS or NFS-based VM to a VVol-based VM

From the Web Client VMs and Templates inventory pane, right-click the VM to be migrated and select Migrate from the dropdown menu to launch the Migrate wizard (vSphereView 134).

vv181.png

vSphereView 134: Web Client Migrate Command

Select Change Storage Only to migrate the VM’s storage (vSphereView 135), or Change both compute resource and storage to migrate both storage and compute resources.

vv182.png

vSphereView 135: Selecting Storage-only Migration

 

In the ensuing Select storage step, select a VVol datastore as a migration target. Optionally, select a storage policy for the migrated VM to provide additional features. (The section titled Storage Policy Based Management describes storage policies.)

Click Finish (not visible in vSphereView 135) to migrate the VM. If original and target datastores are on the same array, the array uses XCOPY to migrate the VM. FlashArray XCOPY only creates metadata, so migration is nearly instantaneous.

If source and target datastores are on different arrays, VMware uses reads and writes, so migration time is proportional to the amount of data copied.

When migration completes, the VM is VVol-based. Throughout the conversion, the VM remains online.

vv183.png

vSphereView 136: Select Storage Policy

ArrayView 44 shows a migrated VM’s FlashArray volume group.

vv184.png

ArrayView 44: GUI View of a Migrated VM (Volume Group)
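The same storage-only migration can be scripted with PowerCLI. The following is a minimal sketch; the VM and datastore names are examples:

# Migrate a VMFS-based VM to a VVol datastore (the VM remains online)
$vm     = Get-VM -Name 'VMFS-VM'
$vvolDS = Get-Datastore -Name 'FlashArray-VVol-Datastore'
Move-VM -VM $vm -Datastore $vvolDS
# After the move, the VM's virtual disks report the VVol datastore as their location
Get-HardDisk -VM $vm | Select-Object Name, Filename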

Migration of a VM with VMDK Snapshots

Migrating a VM that has VMware managed snapshots is identical to the process described in the preceding subsection. In a VMFS or NFS-based VM, snapshots are VMDK files in the datastore that contain changes to the live VM. In a VVol-based VM, snapshots are FlashArray snapshots.

Storage vMotion automatically copies a VM’s VMware VMFS snapshots. ESXi directs the array to create the necessary data VVols, copies the source VMDK files to them and directs the array to take snapshots of them. It then copies each VMFS-based VMware snapshot to the corresponding data VVol, merging the changes. All copying occurs while the VM is online.

BEST PRACTICE: Only virtual hardware versions 11 and later are supported. If a VM has VMware-managed VMFS-based memory snapshots and is at virtual hardware level 10 or earlier, delete the memory snapshots prior to migration. Upgrading the virtual hardware does not resolve this issue. Refer to VMware’s note here

Migrating Raw Device Mappings

A Raw Device Mapping can be migrated to a VVol in any of the following ways:

  • Shut down the VM and perform a storage migration. Migration converts the RDM to a VVol.
  • Add to the VM a new virtual disk in a VVol datastore. The new virtual disk must be of the same size as the RDM and located on the same array. Copy the RDM volume to the VVol, redirect the VM’s applications to use the new virtual disk, and delete the RDM volume. (A PowerCLI sketch of adding the new virtual disk follows this list.)
  • Remove the RDM from the VM and add it back as a VVol. At the time of publication, this process requires Pure Storage Technical Support assistance. Pure Storage plans to make a user-accessible mechanism for this available in the future.
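For the second option, the new virtual disk can be added with PowerCLI. The following is a hedged sketch; the VM, datastore, and capacity are examples, and the subsequent RDM-to-VVol copy is performed with FlashArray tools:

# Add a new thin-provisioned virtual disk (data VVol) of the same size as the RDM
$vm     = Get-VM -Name 'RDM-VM'
$vvolDS = Get-Datastore -Name 'FlashArray-VVol-Datastore'
New-HardDisk -VM $vm -CapacityGB 500 -Datastore $vvolDS -StorageFormat Thin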

For more information, refer to the blog post https://www.codyhosterman.com/2017/11/moving-from-an-rdm-to-a-vvol/

Data Mobility with VVols

A significant but under-reported benefit of VVols is data set mobility. Because a VVol-based VM’s storage is not encapsulated in a VMDK file, the VM’s data can easily be shared and moved.

A data VVol is a virtual block device presented to a VM; it is essentially identical to a virtual-mode RDM. Thus, a data VVol (or a volume created by copying a snapshot of it) can be used by any software that can interpret its contents, for example an NTFS or XFS file system created by the VM.

Therefore, it is possible to present a data VVol, or a volume created from a snapshot of one, to a physical server, to present a volume created by a physical server to a VVol-based VM as a VVol, or to overwrite a VVol with the contents of a volume created by a physical server.

This is an important benefit of the FlashArray VVol implementation. The following blog posts contain examples of and additional information about data mobility with FlashArray VVols:

https://www.codyhosterman.com/2017/10/comparing-vvols-to-vmdks-and-rdms/

https://www.codyhosterman.com/2017/12/vvol-data-mobility-virtual-to-physical/

Appendix I: Installing and Configuring the FlashArray vSphere Web Client Plugin

While the Plugin is not required to use FlashArray-based VVols, it simplifies administrative procedures that would otherwise require either coordinated use of multiple GUIs or scripting.

Version 3.0 of the Plugin and later versions support VVols. To verify that a Plugin version that supports VVols is installed, select Administration in the Web Client home screen inventory pane and select Client Plug-Ins to display the Client Plug-ins pane (vSphereView 137).

Version 3.0 of the FlashArray Plugin for the vSphere Web Client integrates with the vSphere Web Client (also called Flash/Flex Client). Plugin support for VMware’s emerging vSphere Client (HTML5) is under development.

 vv185.png

 vSphereView 137: Web Client Plug-ins Pane

If the Pure Storage Plugin is not installed, or if the installed version is earlier than 3.0, use the FlashArray GUI to install or upgrade the Plugin to a version that supports VVols.

As a FlashArray administrator, select the Software tab on the Settings pane. The Available Version field (ArrayView 45) lists the Plugin version available on the array. If that version is earlier than 3.0, use an array that hosts Version 3.0 or later. If no such array is available, contact Pure Storage Support to obtain a supported version of the Plugin.

vv186.png

ArrayView 45: Plugin Installation and Upgrade

To install the Plugin in the vCenter Web Client, click the  vv187b.png button in the vSphere Plugin pane (ArrayView 45) to launch the Edit vSphere Plugin Configuration wizard (ArrayView 46).

vv182.png

ArrayView 46: Edit vSphere Plugin Configuration Wizard

The target vCenter validates the administrator credentials and returns the version of the installed Plugin (if any) in the Version on vCenter field. ArrayView 47 shows the vCenter responses when no Plugin is installed (left) and when the installed version is earlier than 3.0 (right). Click Install or Upgrade as required.

vv183.png

ArrayView 47: vCenter Responses to FlashArray Plugin Query

When installation is complete, the wizard displays a confirmation message. Install the Plugin in additional vCenter instances as required. To verify the installation, log out of and back into vCenter, and look for the vv187.png icon in the Web Client Home tab (vSphereView 138).

vv189.png

vSphereView 138: Using Web Client to Verify Plugin Installation

 Authenticating FlashArray to the Plugin

To authenticate a FlashArray to a Plugin installed in vCenter, either click the vv187.png icon on the Web Client Home tab (vSphereView 138) or click the Home button at the top of the pane and select Pure Storage from the dropdown menu (vSphereView 139) to display the FlashArray pane Objects tab (vSphereView 140).

vv190.png

vSphereView 139: FlashArray Authentication (1)

vv191.png

vSphereView 140: FlashArray Authentication (2)

Click + Add FlashArray to launch the Add FlashArray wizard (vSphereView 141).

vv192.png

vSphereView 141: Add FlashArray Wizard

vSphereView 142 illustrates the Web Client FlashArray pane Objects tab after the array has been added.

vv193.png

vSphereView 142: Array Authenticated to vCenter

Note: Role-Based Access Control is available for the Plugin, but configuration and use of this feature is beyond the scope of this report. 

Refer to the Plugin User Guide, available on support.purestorage.com, for further information.

Appendix II: FlashArray CLI Commands for Protocol Endpoints

Specifying the --protocol-endpoint option in the Purity//FA CLI purevol create command creates the volume as a protocol endpoint. (ArrayView 48)

vv194.png

ArrayView 48: FlashArray CLI Command to Create a PE

Specifying the --protocol-endpoint option in the Purity//FA CLI purevol list command displays a list of volumes on the array that were created as PEs. (ArrayView 49)

vv195.png

ArrayView 49: FlashArray CLI Command to List an Array's PEs

 

Appendix III: VMware ESXi CLI Commands for VVols

Use the esxcli storage vvol commands to troubleshoot a VVol environment.

  • esxcli storage core device list: Identify protocol endpoints. The output entry Is VVOL PE: true indicates that the storage device is a protocol endpoint.
  • esxcli storage vvol daemon unbindall: Unbind all VVols from all VASA providers known to the ESXi host.
  • esxcli storage vvol protocolendpoint list: List all protocol endpoints that a host can access.
  • esxcli storage vvol storagecontainer list: List all available storage containers.
  • esxcli storage vvol storagecontainer abandonedvvol scan: Scan the specified storage container for abandoned VVols.
  • esxcli storage vvol vasacontext get: Show the VASA context (VC UUID) associated with the host.
  • esxcli storage vvol vasaprovider list: List all storage (VASA) providers associated with the host.
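These commands can be run in an ESXi shell or remotely with PowerCLI. The following sketch uses Get-EsxCli -V2 against a placeholder host name; the command namespaces mirror the list above.

# Sketch: run the VVol troubleshooting commands remotely via PowerCLI (placeholder host name)
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esx01.example.com') -V2

$esxcli.storage.vvol.protocolendpoint.list.Invoke()    # PEs the host can access
$esxcli.storage.vvol.storagecontainer.list.Invoke()    # available storage containers
$esxcli.storage.vvol.vasaprovider.list.Invoke()        # VASA providers known to the host
$esxcli.storage.vvol.vasacontext.get.Invoke()          # VASA context (VC UUID)

# List all SCSI devices; protocol endpoints show the entry "Is VVOL PE: true"
$esxcli.storage.core.device.list.Invoke()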

 

Appendix IV: Disconnecting a Protocol Endpoint from a Host

Decommissioning ESXi hosts or clusters normally includes removal of protocol endpoints (PEs). The usual FlashArray volume disconnect process is used to disconnect PEs from hosts. As with removal of any non-VVol block storage device, however, the best practice is to detach the PE from each host in vCenter before disconnecting it from those hosts on the array.

vv196.png

vSphereView 143: Web Client Tool for Detaching a PE from an ESXi Host

To detach a PE from a host, select the host in the Web Client inventory pane, navigate to the Storage Devices view Configure tab, select the PE to be detached, and click the vv197.png tool (vSphereView 143) to launch the Detach Device confirmation wizard (vSphereView 144). Click Yes to detach the selected PE from the host.

vv198.png

vSphereView 144: Confirm Detach Wizard
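Detaching a PE can also be done programmatically. The sketch below is one possible approach using the vSphere API DetachScsiLun method through PowerCLI; the host name and device identifier are placeholders.

# Sketch: detach a PE device from an ESXi host via the vSphere API (placeholder names)
$vmhost = Get-VMHost -Name 'esx01.example.com'
$pe     = Get-ScsiLun -VmHost $vmhost -CanonicalName 'naa.xxxxxxxxxxxxxxxx'   # the PE device

$storSys = Get-View -Id $vmhost.ExtensionData.ConfigManager.StorageSystem
$storSys.DetachScsiLun($pe.ExtensionData.Uuid)   # fails if VVols are still bound through the PE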

vSphereView 145 shows the Web Client storage listing after successful detachment of a PE.

vv199.png

vSphereView 145: Detached PE

Failure to detach a PE from a host (vSphereView 146) typically occurs because there are VVols bound to the host through the PE that is being detached.

vv200.png

vSphereView 146: Failure to Detach PE (LUN) from a Host

FlashArrays prevent disconnecting a PE from a host (including members of a FlashArray host group) that has VVols bound through it.

The Purity//FA Version 5.0.0 GUI does not support disconnecting PEs from hosts. Administrators can only disconnect PEs via the CLI or REST API.

Before detaching a PE from an ESXi host, use one of the following VMware techniques to clear all bindings through it (a PowerCLI sketch follows the list):

  1. vMotion all VMs to a different host
  2. Power off all VMs on the host that use the PE
  3. Storage vMotion the VMs on that host that use the PE to a different FlashArray or to a VMFS datastore
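As a sketch of the first two techniques, the PowerCLI fragment below evacuates or powers off the VMs on a host; host and VM names are placeholders.

# Sketch: clear VVol bindings on a host before detaching its PE (placeholder names)
$vmhost = Get-VMHost -Name 'esx01.example.com'
$target = Get-VMHost -Name 'esx02.example.com'

# Technique 1: vMotion every VM off the host
Get-VM -Location $vmhost | Move-VM -Destination $target

# Technique 2 (alternative): shut down the VMs instead (requires VMware Tools in the guests)
# Get-VM -Location $vmhost | Shutdown-VMGuest -Confirm:$false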

To completely delete a PE, remove all VVol connections through it. To prevent erroneous disconnects, FlashArrays prevent destruction of PE volumes with active connections.

 

Appendix V: VVols and Volume Group Renaming

FlashArray volume groups are not in the VM management critical path. Therefore, renaming or deleting a volume group does not affect VMware’s ability to provision, delete or change a VM’s VVols.

A volume group is primarily a tool that enables FlashArray administrators to manage a VM’s volumes as a unit. Pure Storage highly recommends creating and deleting volume groups only through VMware tools, which direct arrays to perform actions through their VASA providers.

Volume group and VVol names are not related to VASA operations. VVols can be added to and removed from a volume group whose name has been changed by an array administrator. If, however, a VM’s config VVol is removed from its volume group, any VVols created for the VM after the removal are not placed in any volume group. If a VM’s config VVol is moved to a new volume group, any new VVols created for it are placed in the new volume group.

VMware does not inform the array that it has renamed a VVol-based VM, so renaming a VM does not automatically rename its volume group. Consequently, it is possible for volume group names to differ from those of the corresponding VMs. For this reason, the FlashArray VVol implementation does not put volume group or VVol names in the VVol provisioning and management critical path.

For ease of management, however, Pure Storage recommends renaming volume groups when the corresponding VMs are renamed in vCenter.

Appendix VI: Cisco FNIC Driver Support for VVols

Older Cisco UCS drivers do not support the SCSI features required for protocol endpoints and VVol sub-LUN connections. To use VVols with Cisco UCS, FNIC drivers must be updated to a version that supports sub-LUNs. For information on firmware and update instructions, consult:

https://my.vmware.com/group/vmware/details?productId=491&downloadGroup=DT-ESX60-CISCO-FNIC-16033

https://quickview.cloudapps.cisco.com/quickview/bug/CSCux64473

 

About the Author

vv201.png

Cody Hosterman is the Technical Director for VMware Solutions at Pure Storage. His primary responsibility is overseeing, testing, designing, documenting, and demonstrating VMware-based integration with the Pure Storage FlashArray platform. Cody has been with Pure Storage since 2014 and has been working in vendor enterprise storage/VMware integration roles since 2008.

Cody graduated from the Pennsylvania State University with a bachelor’s degree in Information Sciences & Technology in 2008. Special areas of focus include core ESXi storage, vRealize (Orchestrator, Automation, and Log Insight), Site Recovery Manager, and PowerCLI. Cody has been a named VMware vExpert every year since 2013.

Blog: www.codyhosterman.com

Twitter: www.twitter.com/codyhosterman

YouTube: https://www.youtube.com/codyhosterman


© 2018 Pure Storage, Inc. All rights reserved. Pure Storage, Pure1, and the Pure Storage Logo are trademarks or registered trademarks of Pure Storage, Inc. in the U.S. and other countries. Other company, product, or service names may be trademarks or service marks of their respective owners. The Pure Storage products described in this documentation are distributed under a license agreement restricting the use, copying, distribution, and decompilation/reverse engineering of the products. The Pure Storage products described in this documentation may only be used in accordance with the terms of the license agreement. No part of this documentation may be reproduced in any form by any means without prior written authorization from Pure Storage, Inc. and its licensors, if any. Pure Storage may make improvements and/or changes in the Pure Storage products and/or the programs described in this documentation at any time without notice.

THIS DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. PURE STORAGE SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.