
Web Guide: Implementing vSphere Virtual Volumes with FlashArray


Abstract

VMware’s vSphere Virtual Volume (vVol) paradigm, introduced in vSphere version 6.0, is a storage technology that provides policy-based, granular storage configuration and control of virtual machines (VMs). Through API-based interaction with an underlying array, VMware administrators can maintain storage configuration compliance using only native VMware interfaces.

Version 5.0.0 of Purity//FA software introduced support for FlashArray-based vSphere Virtual Volumes (vVols). The accompanying FlashArray Plugin for the vSphere Web Client (the Plugin) makes it possible to create, manage, and use vVols that are based on FlashArray volumes from within the Web Client. This report describes the architecture, implementation, and best practices for using FlashArray-based vVols.

Audience

The primary audiences for this guide are VMware administrators, FlashArray administrators, and more generally, anyone interested in the architecture, implementation, administration, and use of FlashArray-based vVols.

Throughout this report, the terms FlashArray administrator, array administrator, and administrator in the context of array administration, refer to both the storage and array administration roles for FlashArrays.

For further questions and requests for assistance, customers can contact Pure Storage Technical Support at support@purestorage.com.

vVols Best Practice Summary

 


 

Terminology

These are the core terms to know and understand when discussing vVols and their implementation on Pure Storage's FlashArray.  Some concepts are known by more than one term; both terms are covered in the table.

Name/Concept Explanation
Protocol Endpoint
(PE)
A PE is a volume of zero capacity with a special setting in its Vital Product Data (VPD) page that ESXi detects during a SCSI inquiry. The PE effectively serves as a mount point for vVols. A PE is the only FlashArray volume that must be manually connected to hosts to use vVols.  The industry term for a PE is "Administrative Logical Unit".

VASA

vSphere APIs for Storage Awareness (VASA) is the VMware-designed API used to communicate between vSphere and the underlying storage.  For Pure Storage, this is the FlashArray.

SOAP Before REST APIs were widely used, SOAP (Simple Object Access Protocol) was the common messaging protocol for exchanging structured data via web services (HTTP).  SOAP uses an XML structure to exchange information between source and destination.  SOAP is heavily used in the management communication of the vSphere environment and vCenter services and, most importantly for the purposes of this KB, by VASA.
Management Path
or
Control Path
This is the TCP/IP path between the compute management layer (vSphere) and the storage management layer (FlashArray).  Requests such as creating, deleting, and otherwise managing storage are issued on this path.  For the FlashArray VASA Provider, this is done via HTTPS with TLS 1.2 over port 8084.  (A quick connectivity check for this path is sketched after this table.)
Data Path
or
Data Plane
The Data Path is the established connection from the ESXi hosts to the Protocol Endpoint on the FlashArray.  The Data Path is the path over which SCSI operations are sent and received, just as with any traditional SAN.  This connection is established over the storage fabric; today this means iSCSI or Fibre Channel.
SPBM Storage Policy Based Management (SPBM) is a framework designed by VMware to provision and manage storage. Users can create policies of selected capabilities or tags and assign them to a VM or a specific virtual disk. SPBM for internal storage is delivered through vSAN; SPBM for external storage is delivered through vVols. A vendor must support VASA to enable SPBM for its storage.
VASA Provider

Storage Provider
A VASA provider is an instance of the VASA service that a storage vendor provides and that is deployed in the customer's environment. For the FlashArray, the VASA Providers are built into the FlashArray controllers and are represented as VASA-CT0 and VASA-CT1.  The term Storage Provider is used in vCenter to represent the VASA Providers for a given FlashArray.
Virtual Volume (vVol) Virtual Volumes (vVols) is the name for this full architecture. A specific vVol is any volume on the array that is in use by the vSphere environment and managed by the VASA provider. A vVol-based volume is not fundamentally different from any other volume on the FlashArray.  The main distinction is that when it is in use, it is attached as a sub-lun via a PE, instead of via a direct LUN.
vVol Datastore

vVol Storage Container
The vVol Datastore is not a LUN, file system or volume. A vVol Datastore is a target provisioning object that represents a FlashArray, a quota for capacity, and is a logical collection of config vVols.  While the object created in vCenter is represented as a Datastore, the vVol Datastore is really a Storage Container that represents that given FlashArray.
SPS This is a vCenter daemon called the Storage Policy Service (SPS or vmware-sps).  The SMS and SPBM services run as part of the Storage Policy Service.
SMS A vCenter Service called Storage Management Service (SMS).
vvold This is the service running on ESXi that handles the management requests directly from the ESXi host to the VASA provider as well as communicates with the vCenter SMS service to get the Storage Provider information.  
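
A quick way to confirm the management/control path described above is to test that TCP port 8084 on each controller's management address is reachable from the management network. The sketch below is a minimal, hedged PowerShell example (the IP addresses are placeholders for the CT0/CT1 eth0 addresses); it only confirms the port is open, not that the VASA service itself is healthy.

Management Path Connectivity Check (PowerShell)
# Verify the FlashArray VASA provider port (HTTPS/8084) is reachable on both controllers.
$vasaEndpoints = '10.21.149.22', '10.21.149.23'   # placeholder CT0/CT1 management IPs

foreach ($ip in $vasaEndpoints) {
    $client = [System.Net.Sockets.TcpClient]::new()
    try {
        $client.Connect($ip, 8084)                # VASA providers listen on TCP 8084
        Write-Host "VASA provider port open on $ip"
    }
    catch {
        Write-Warning "Cannot reach $($ip):8084 - check routing/firewalls on the management path"
    }
    finally {
        $client.Dispose()
    }
}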
   

[Back to Top]  


 

Introduction to vVols

Historically, the datastores that have provided storage for VMware virtual machines (VMs) have been created as follows:

  1. A VMware administrator requests storage from a storage administrator
  2. The storage administrator creates a disk-like virtual device on an array and provisions it to the ESXi host environment for access via iSCSI or Fibre Channel
  3. The VMware administrator rescans ESXi host I/O interconnects to locate the new device and formats it with VMware’s Virtual Machine File System (VMFS) to create a datastore.
  4. The VMware administrator creates a VM and one or more virtual disks, each instantiated as a file in the datastore’s file system and presented to the VM as a disk-like block storage device.

Virtual storage devices instantiated by storage arrays are called by multiple names. Among server users and administrators, LUN (logical unit number) is popular. The FlashArray term for virtual devices is volume. ESXi and guest hosts address commands to LUNs, which are usually assigned to volumes automatically.

While plugins can automate datastore creation to some extent, they have some fundamental limitations:

  • Every time additional capacity is required, VMware and storage administrators must coordinate their activities
  • Certain widely-used storage array features such as replication are implemented at the datastore level of granularity. Enabling them affects all VMs that use a datastore
  • VMware administrators cannot easily verify that required storage features are properly configured and enabled.

VMware designed vVols to mitigate these limitations. vVol benefits include:

  • Virtual Disk Granularity
    • Each virtual disk is a separate volume on the array with its own unique properties
  • Automatic Provisioning
    • When a VMware administrator requests a new virtual disk for a VM, VMware automatically directs the array to create a volume and present it to the VM. Similarly, when a VMware administrator resizes or deletes a virtual disk, VMware directs the array to resize or remove the volume
  • Array-level VM Visibility
    • Because arrays recognize both VMs and their virtual disks, they can manage and report on performance and space utilization with both VM and individual virtual disk granularity.
  • Storage Policy Based Management
    • With visibility to individual virtual disks, arrays can take snapshots and replicate volumes at the precise granularity required. VMware can discover an array’s virtual disks and allow VMware administrators to manage each vVol’s capabilities either ad hoc or by specifying policies. If a storage administrator overrides a vVol capability configured by a VMware administrator, the VMware administrator is alerted to the non-compliance.
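
As a concrete illustration of the policy compliance behavior just described, the PowerCLI SPBM cmdlets can show which storage policy a VM or virtual disk uses and whether it is currently compliant. This is a minimal sketch; the VM name is a placeholder.

SPBM Compliance Check (PowerCLI)
# Show the storage policy and compliance status of a VM and of each of its virtual disks.
Get-VM 'vVols-m20-VM-01' | Get-SpbmEntityConfiguration

Get-VM 'vVols-m20-VM-01' | Get-HardDisk | Get-SpbmEntityConfiguration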

vVol Architecture

Below is a generic, high-level view of the vVol architecture.
Note that the Control/Management Path is separate from the Data Path.

Picture1.png

VMware designed the vVol architecture to mitigate the limitations of the VMFS-based storage paradigm while retaining the benefits, and merging them with the remaining advantages of Raw Device Mappings.

VMware’s vVol architecture consists of the following components:

  • Management Plane (section titled The FlashArray VASA Provider)
    • Implements the APIs that VMware uses to manage the array. Each supported array requires a vSphere API for Storage Awareness (VASA) provider, implemented by the array vendor.
  • Data Plane (section titled vVol Binding)
    • Provisions vVols to ESXi hosts
  • Policy Plane (section titled Storage Policy Based Management)
    • Simplifies and automates the creation and configuration of vVols.

[Back to Top]


The FlashArray VASA Provider

VMware's vSphere APIs for Storage Awareness (VASA) is a VMware interface for out-of-band communication between VMware ESXi, vCenter, and storage arrays. An array's VASA providers are services registered with vCenter Server. Storage vendors implement providers for their arrays, either as VMs or embedded in the arrays. As of vSphere Version 7.0 U1, VMware has introduced four versions of VASA:

  • Version 1 (Introduced in vSphere Version 5.0)
    • Provides basic configuration information for storage volumes hosting VMFS datastores, as well as injection of some basic alerts into vCenter
  • Version 2 (Introduced in vSphere Version 6.0)
    • First version to support vVols
  • Version 3 (Introduced in vSphere Version 6.5)
    • Added support for array-based replication of vVols and Oracle RAC.
  • Version 3.5 (Introduced in vSphere Version 7.0 U1)
    • Added feature support for iSCSI CHAP and improved snapshot performance.

The Pure Storage FlashArray currently supports VASA Version 3; support for Version 3.5 is planned for a future release.

Because the FlashArray vVol implementation uses VASA Version 3, the VMware environment must be running vSphere Version 6.5 or a newer version in both ESXi hosts and vCenter.

Pure Storage recommends running vSphere 6.7 U3 p03 or higher for the various fixes and improvements found in that release.  Please see the KB that outlines VASA/vVols-related fixes by ESXi release.
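
Because VASA 3 requires vSphere 6.5 or later in both vCenter and ESXi, it can be worth confirming the versions in use before proceeding. The PowerCLI sketch below assumes an existing Connect-VIServer session and simply reports versions; it is a convenience check, not a requirement.

vSphere Version Check (PowerCLI)
# Report the connected vCenter version and the version/build of every ESXi host.
$global:DefaultVIServer | Select-Object Name, Version, Build

Get-VMHost | Sort-Object Name |
    Select-Object Name, Version, Build, ConnectionState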

FlashArray vVol support is included in Purity//FA Version 5.0. The Purity//FA upgrade process automatically installs and configures a VASA provider in each controller; there is no separate installation or configuration. To use FlashArray-based vVols, however, an array’s VASA providers must be registered with vCenter. Either the FlashArray Plugin for vSphere Web Client (the Plugin), the vSphere GUI, or API/CLI-based tools may be used to register VASA providers with vCenter. 


VASA Provider Certificate Management

Management of VASA Provider certificates is supported on the FlashArray with the release of Purity//FA 5.3 and VASA Provider version 1.1.0.

Please see the following KBs that detail the management of the VASA Provider Certificates:


Registering the FlashArray VASA Provider

There are multiple ways to register the FlashArray VASA Provider.  

Registering FlashArray VASA Providers with the Pure Storage vSphere Plugin

  1. A FlashArray will need to be added/registered in the Plugin in order to register the Storage Provider for a given FlashArray.  Once the FlashArray is registered, navigate to the main Plugin page, select the FlashArray, and then click on "Register Storage Provider".
    vvols-plugin-kb-01-registering-sp-1.png
  2. The recommended practice is to register the storage providers with a local FlashArray array-admin user.  In the example below, the Storage Providers are registered with a local array admin named "vvols-admin".  If the vCenter is in Enhanced Linked Mode, the option to choose which vCenter to register the storage providers with will be given.
    Registering the Storage Provider with a Single vCenter
    vvols-plugin-kb-01-registering-sp-2.png
    Registering the Storage Provider with a vCenter in Linked Mode
    vvols-plugin-kb-01-registering-sp-4.png
  3. Once the Storage Provider is successfully registered, navigate to the vCenter Server page, then Configure > Storage Providers.  Confirm that the storage providers are online and healthy.
    vvols-plugin-kb-01-registering-sp-3.png

The FlashArray will log all subsequent vVol operations from those vCenters under the user used to register the storage providers.


Manually Registering the FlashArray VASA Providers with the vCenter UI

Alternatively, VMware administrators can use the vCenter Web Client, PowerCLI, and other CLI and API tools to register VASA providers. This section describes registration of FlashArray providers with the vCenter Web Client and with PowerCLI.

Finding the FlashArray Controller IP Addresses

Prior to registration, use the FlashArray GUI or CLI to obtain the IP addresses of both controllers’ eth0 management ports.

Click Settings in the GUI navigation pane, and select the Network tab to display the array’s management port IP addresses

vVols-User-Guide-VASA-Provider-01.png
FlashArray GUI Network Tab - Management IP Addresses
pureuser@sn1-x70-c05-33> purenetwork list ct0.eth0,ct1.eth0
Name      Enabled  Subnet  Address       Mask           Gateway      MTU   MAC                Speed      Services    Subinterfaces
ct0.eth0  True     -       10.21.149.22  255.255.255.0  10.21.149.1  1500  24:a9:37:01:f2:de  1.00 Gb/s  management  -
ct1.eth0  True     -       10.21.149.23  255.255.255.0  10.21.149.1  1500  24:a9:37:02:0b:8e  1.00 Gb/s  management  -
FlashArray CLI - Management IP Addresses

Registering the Storage Provider in vCenter

After the management IPs for the FlashArray are gathered, go to the vCenter UI and run through the following workflow to register the storage providers.

  1. Navigate to either the Hosts/Datacenters, VMs, Storage, or Network page.
  2. Select the vCenter Server to register the storage providers with.
  3. Navigate to the Configure tab and click Storage Providers under More.
  4. Click the Add button.
    vVols-User-Guide-VASA-Provider-02.png
  5. Register CT0's storage provider
    vVols-User-Guide-VASA-Provider-03.png

    Name

    • A friendly name for the VASA provider. A best practice is to use names that make operational sense (for example, array name concatenated with controller number).
       

    URL

    • The URL of the controller’s VASA provider in the form https://<controllerIP>:8084. HTTPS (not HTTP) is required, the controller’s IP address must be specified (unless a custom certificate with an FQDN in the Subject Alternative Name is used), and port 8084 is required.
       

    Credentials

    • Credentials for an administrator of the target array.   Best practice is to use a local array user and not the default user (pureuser).
      The user name entered is associated with VASA operations in future audit logs.
  6. Register CT1's storage provider
    vVols-User-Guide-VASA-Provider-04.png

    Name

    • A friendly name for the VASA provider. A best practice is to use names that make operational sense (for example, array name concatenated with controller number).
       

    URL

    • The URL of the controller’s VASA provider in the form https://<controllerIP>:8084. HTTPS (not HTTP) is required, the controller’s IP address must be specified (unless a custom certificate with an FQDN in the Subject Alternative Name is used), and port 8084 is required.
       

    Credentials

    • Credentials for an administrator of the target array.   Best practice is to use a local array user and not the default user (pureuser).
      The user name entered is associated with VASA operations in future audit logs.

Please ensure that both CT0 and CT1's storage providers are registered.


Manually Registering the FlashArray VASA Providers with PowerShell

When a number of FlashArrays’ VASA providers are to be registered, using a PowerCLI script may be preferable. The VMware PowerCLI cmdlet New-VasaProvider registers VASA providers with vCenter.

New-VasaProvider Cmdlet
New-VasaProvider -Name "MyProvider" -Username "UserName" -Password "Password" -Url "MyUrl"
New-VasaProvider Cmdlet  PowerShell Core Example
PS /Users/alex.carver> $vc_creds = Get-Credential

PowerShell credential request
Enter your credentials.
User: purecloud\alex
Password for user purecloud\alex: 

PS /Users/alex.carver> $vasa_creds = Get-Credential

PowerShell credential request
Enter your credentials.
User: vvol-admin
Password for user vvol-admin: 

PS /Users/alex.carver> connect-viserver -Server 10.21.202.95 -Credential $vc_creds

Name                           Port  User
----                           ----  ----
10.21.202.95                   443   PURECLOUD\alex

PS /Users/alex.carver> New-VasaProvider -Name 'sn1-x70-c05-36-ct0' -Credential $vasa_creds -Url 'https://10.21.149.22:8084'

Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---
sn1-x70-c05-36-ct0   online       3.0         11/5/2020 1:28:50 PM   com.purestorage      https://10.21.149.22:8084

PS /Users/alex.carver> New-VasaProvider -Name 'sn1-x70-c05-36-ct1' -Credential $vasa_creds -Url 'https://10.21.149.23:8084'

Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---

The empty output from registering CT1 is expected, as CT1 will be the standby provider.  Currently, PowerCLI only displays details for active Storage Providers and not for standby providers.
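
When several FlashArrays need to be registered, the same cmdlet can simply be wrapped in a loop. The sketch below uses placeholder array names and controller IPs and assumes $vasa_creds holds a local array-admin credential as in the example above.

Registering Multiple Arrays (PowerCLI)
# Register both controllers' VASA providers for each FlashArray in the list.
$arrays = @(
    @{ Name = 'sn1-x70-c05-36'; Ct0 = '10.21.149.22'; Ct1 = '10.21.149.23' }
    @{ Name = 'sn1-x70-b05-33'; Ct0 = '10.21.149.40'; Ct1 = '10.21.149.41' }
)

foreach ($fa in $arrays) {
    New-VasaProvider -Name "$($fa.Name)-ct0" -Credential $vasa_creds -Url "https://$($fa.Ct0):8084"
    New-VasaProvider -Name "$($fa.Name)-ct1" -Credential $vasa_creds -Url "https://$($fa.Ct1):8084"
}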

Another PowerShell option is the New-PfaVasaProvider cmdlet from the Pure Storage VMware PowerShell module.  This requires the Pure Storage PowerShell SDK to also be installed, and works with either Windows PowerShell or PowerShell Core.  A connection to both a vCenter Server and a FlashArray is required to use the New-PfaVasaProvider cmdlet.

New-PfaVasaProvider Cmdlet
New-PfaConnection -Endpoint "Management IP" -Credentials (Get-Credential) -DefaultArray -IgnoreCertificateError

New-PfaVasaProvider -Flasharray $Global:DefaultFlashArray -Credentials (Get-Credential)
New-PfaVasaProvider Cmdlet  PowerShell Core Example
PS /Users/alex.carver> Install-Module -Name PureStoragePowerShellSDK
PS /Users/alex.carver>
PS /Users/alex.carver> Install-Module -Name PureStorage.FlashArray.VMware
PS /Users/alex.carver>
PS /Users/alex.carver> New-PfaConnection -Endpoint 10.21.149.21 -Credentials (Get-Credential) -DefaultArray -IgnoreCertificateError

PowerShell credential request
Enter your credentials.
User: vvol-admin
Password for user vvol-admin: 


Disposed   : False
EndPoint   : 10.21.149.21
UserName   : vvol-admin
ApiVersion : 1.17
Role       : ArrayAdmin
ApiToken   : 18e939a3

PS /Users/alex.carver> connect-viserver -Server 10.21.202.95 -Credential (Get-Credential)

PowerShell credential request
Enter your credentials.
User: purecloud\alex
Password for user purecloud\alex: 


Name                           Port  User
----                           ----  ----
10.21.202.95                   443   PURECLOUD\alex

PS /Users/alex.carver> New-PfaVasaProvider -Flasharray $Global:DefaultFlashArray -Credentials (Get-Credential)

PowerShell credential request
Enter your credentials.
User: vvol-admin
Password for user vvol-admin: 


Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---
sn1-x70-c05-33-CT0   online       3.0         11/5/2020 5:06:10 PM   com.purestorage      https://10.21.149.22:8084

 


Verifying VASA Provider Registration

To verify that VASA Provider registration succeeded, in the Web Client Hosts and Clusters view:

  1. Click the target vCenter in the inventory pane
  2. Select the Configure tab
  3. Locate the newly-registered providers in Storage Providers
vVols-User-Guide-VASA-Provider-05.png

On the Storage Providers page there are some useful sections that display the information for the VASA Providers.

  1. The first column lists the Storage Provider names used when registering the storage providers.  The storage array that each VASA Provider manages is listed below it, along with the number of online providers for that array.
  2. The Status column lists whether the provider is online and accessible from vCenter.
  3. vCenter can only have a single active storage provider for a given storage array.  The Active/Standby column displays whether the provider is the active or standby provider.
  4. The Certificate Expiry column displays how many days are left before the certificate expires for that storage provider.  A yellow warning is displayed once 180 days remain.
  5. After selecting a Storage Provider, additional tabs and information can be selected for that provider.  The General tab displays all the basic information for the given storage provider.

Alternatively, the PowerCLI Get-VasaProvider cmdlet can be used to list registered VASA providers.  The results can be filtered to just display the VASA Providers that belong to the Pure Storage namespace.  Only the Active Storage providers are returned with this cmdlet.

PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'}

Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---
sn1-x70-c05-33-ct0   online       3.0         11/5/2020 5:06:10 PM   com.purestorage      https://10.21.149.22:8084
sn1-x70-b05-33-ct0   online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.40:8084/version.…
sn1-m20r2-c05-36-ct0 online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.61:8084/version.…

Un-registering and Removing the Storage Providers

There are a couple of ways to remove a storage provider, whether the end user needs to remove and re-register a Storage Provider or simply wants to remove the storage providers.  This can be done either from the vCenter Server UI or with PowerShell via PowerCLI.

Removing Storage Providers in the vCenter Server UI

Here is the workflow to remove the storage providers in the vCenter Server UI:

  1. Navigate to the vCenter Server -> Configure -> Storage Provider Page
  2. Select the standby Storage Provider that is being removed, click Remove, and click Yes to confirm the removal
    vVols-User-Guide-VASA-Provider-06.png
    vVols-User-Guide-VASA-Provider-07.png
  3. Repeat the steps for the active storage provider

Removing Storage Providers via PowerCLI

Here is the workflow to remove storage providers with PowerShell via PowerCLI:

  1. After connecting to the vCenter Server, find the storage provider and storage provider ID that needs to be removed and set a provider variable.
    PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'} | Select-Object Name,Id
    
    Name                 Id
    ----                 --
    sn1-x70-b05-33-ct0   VasaProvider-vasaProvider-3
    sn1-x70-c05-33-ct0   VasaProvider-vasaProvider-7
    sn1-m20r2-c05-36-ct0 VasaProvider-vasaProvider-5
    
    PS /Users/alex.carver> $provider = Get-VasaProvider -Id VasaProvider-vasaProvider-7
    PS /Users/alex.carver> $provider
    
    Name                 Status       VasaVersion LastSyncTime           Namespace            Url
    ----                 ------       ----------- ------------           ---------            ---
    sn1-x70-c05-33-ct0   online       3.0         11/5/2020 5:06:10 PM   com.purestorage      https://10.21.149.22:8084
    
  2. Remove the storage provider by passing the provider variable to Remove-VasaProvider.
    PS /Users/alex.carver> Remove-VasaProvider -Provider $provider -confirm:$false
    PS /Users/alex.carver>
    
  3. Repeat the same steps for the second storage provider
    PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'} | Select-Object Name,Id
    
    Name                 Id
    ----                 --
    sn1-x70-b05-33-ct0   VasaProvider-vasaProvider-3
    sn1-x70-c05-33-ct1   VasaProvider-vasaProvider-8
    sn1-m20r2-c05-36-ct0 VasaProvider-vasaProvider-5
    
    PS /Users/alex.carver> $provider = Get-VasaProvider -Id VasaProvider-vasaProvider-8
    PS /Users/alex.carver> $provider
    
    Name                 Status       VasaVersion LastSyncTime           Namespace            Url
    ----                 ------       ----------- ------------           ---------            ---
    sn1-x70-c05-33-ct1   online       3.0         11/11/2020 1:19:57 PM  com.purestorage      https://10.21.149.23:8084
    
    PS /Users/alex.carver> Remove-VasaProvider -Provider $provider -confirm:$false
    PS /Users/alex.carver>
    PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'}
    
    Name                 Status       VasaVersion LastSyncTime           Namespace            Url
    ----                 ------       ----------- ------------           ---------            ---
    sn1-x70-b05-33-ct0   online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.40:8084/version.…
    sn1-m20r2-c05-36-ct0 online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.61:8084/version.…
    

The workflow uses the VASA provider ID because removal of the second provider behaved inconsistently when the provider name was used; using the provider ID was much more reliable.


[Back to Top]

Configuring Host Connectivity

For an ESXi host to access FlashArray storage, an array administrator must create a host object. A FlashArray host object (usually called host) is a list of the ESXi host’s initiator iSCSI Qualified Names (IQNs) or Fibre Channel Worldwide Names (WWNs). Arrays represent each ESXi host as one host object.

Similarly, arrays represent a VMware cluster as a host group, a collection of hosts with similar storage-related attributes. For example, an array would represent a cluster of four ESXi hosts as a host group containing four host objects, each representing an ESXi host. The FlashArray User Guide contains instructions for creating hosts and host groups.

Pure Storage recommends using the Pure vSphere Plugin to create FlashArray hosts and host groups that are mapped to ESXi Hosts and ESXi Clusters.


Using the Pure Storage vSphere Plugin to Create and Configure FlashArray Host Groups

The Pure Storage Plugin for the vSphere Client gives VMware users insight into, and control of, their Pure Storage FlashArray environment directly from the vSphere Client. The plugin extends the vSphere Client interface to include environmental statistics and the objects that underpin the VMware objects in use, and to provision new resources as needed.

Viewing Host Configuration

Creating Host Groups

Without the Pure Storage plugin, creating hosts and host groups on the FlashArray can be a slow and tedious process.

The steps required to complete this task manually are:

  1. Navigate to each ESXi host you wish to connect to the FlashArray and locate the initiator port identifiers (WWPNs, IQN(s), or NQN).
  2. Login to the FlashArray and create a new host object for each ESXi host followed by setting the applicable port identifiers for each of the hosts.
  3. Once the host objects have been created a new host group is created and each host object is manually moved to the applicable host group.

Not only is the process above slow but it also leaves room for human error during the configuration process. In many instances we have found that port identifiers have been applied to the wrong host objects, misspelled, or missing entirely if the end-user was not paying close attention. Additionally, this process often requires coordination between vSphere and Storage administrators which leaves room for additional errors and delays in completing this critical task.

By utilizing the Pure Storage plugin, this process becomes entirely automated, allowing for the creation of dozens of hosts in a matter of seconds or minutes. It can also be completed by the vSphere administrator directly from the vSphere Client, which frees up the storage administrator to focus on other, more pressing issues within the environment.

For the reasons outlined above, Pure Storage recommends using the plugin for the creation of new host and host group objects.

Starting with the 4.4.0 version of the Pure Storage Plugin, the new hosts created during host group creation will also be configured with the ESXi host personality.

Because creating a Fibre Channel (FC) host group differs slightly from creating an iSCSI host group in the plugin, each process is outlined separately below.

Also: all hosts must be in a VMware cluster; the plugin does not support creating host groups for ESXi hosts that are not in a cluster. If for some reason a host cannot be put in a VMware cluster, manual creation of the FlashArray host is required (a PowerShell sketch follows below). For the host-side configuration in the case of iSCSI, this can still be done via the plugin; skip to the last section of this page for information.
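
For a standalone host, the manual FlashArray host creation can also be scripted. The sketch below gathers the host's FC WWPNs with PowerCLI and creates a matching host object with the Pure Storage PowerShell SDK; the host name is a placeholder, and the SDK cmdlet and parameter names (New-PfaHost -WwnList) should be verified against the installed 1.x module.

Manual Host Creation (PowerCLI + PowerShell SDK)
# Gather the standalone host's Fibre Channel WWPNs.
$esx  = Get-VMHost -Name 'esxi-standalone-01.example.com'
$wwns = Get-VMHostHba -VMHost $esx -Type FibreChannel | ForEach-Object {
            # PortWorldWideName is a decimal value; convert it to colon-separated hex (e.g. 52:4a:93:...).
            (('{0:x16}' -f $_.PortWorldWideName) -replace '..(?!$)', '$&:').ToUpper()
        }

# Create the FlashArray host object (assumes a New-PfaConnection session as shown earlier in this guide).
New-PfaHost -Array $Global:DefaultFlashArray -Name 'esxi-standalone-01' -WwnList $wwns

# The ESXi host personality can then be set from the FlashArray GUI or CLI (purehost setattr --personality esxi).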


Creating a Host Group
  1. Right-click on the ESXi cluster you wish to create a host group for.
  2. Navigate to Pure Storage > Add/Update Host Group.
clipboard_e66cc8bd80b211b19cb9b635320be9f3a.png
  3. Select the FlashArray on which to configure the host connectivity.
clipboard_ee4bbc9ed09cf1a856dcca8601bd57f5c.png
  4. Select Fibre Channel or iSCSI. The plugin will then auto-generate the name of the hosts and host group. They can be changed if needed at a later time.
If the host/host group does not yet exist on the array, it will be marked as Will be created. If it does exist, it will be marked as Already configured.
clipboard_eddc7d64d0a2c20b31a463ff868695b7d.png clipboard_ed21786f92f5c9004b2d9221a25521ef0.png

If the host name already exists on the array, the plugin will append the protocol name to the host name to make it unique. clipboard_e2d3de88ee5cf06ef4334d800f196bd84.png
A protocol will be grayed out if the target FlashArray does not currently offer that particular protocol. clipboard_e7a771a86269ad03ada4d698b1f3b6f95.png
  5. If you have selected Configure iSCSI initiators on the hosts, the plugin will also configure the iSCSI target information and best practices on that particular host or hosts. See the section entitled iSCSI Configuration Workflow for details.
  6. Click Create to complete the creation.
clipboard_e22e31cbcd2dc38bc3491399622b6692b.png

Configuring iSCSI Host Groups

[Back to Top]


Protocol Endpoints

The scale and dynamic nature of vVols intrinsically changes VMware storage provisioning. To provide scale and flexibility for vVols, VMware adopted the T10 administrative logical unit (ALU) standard, which it calls protocol endpoint (PE). vVols are connected to VMs through PEs acting as subsidiary logical units (SLUs, also called sub-luns).

The FlashArray vVol implementation makes PEs nearly transparent. Array administrators seldom deal with PEs, and not at all during day-to-day operations.

Protocol Endpoints (PEs)

A typical VM has multiple virtual disks, each instantiated as a volume on the array and addressed by a LUN. If every virtual disk consumed a LUN, the ESXi Version 6.5 support limits of 512 SCSI devices (LUNs) per host and 2,000 logical paths to them could easily be exceeded by even a modest number of VMs.

Moreover, each time a new volume is created or an existing one is resized, VMware must rescan its I/O interconnects to discover the change. In large environments, rescans are time-consuming; rescanning each time the virtual disk configuration changes is generally considered unacceptable.

VMware uses PEs to eliminate these problems. A PE is a volume of zero capacity with a special setting in its Vital Product Data (VPD) page that ESXi detects during a SCSI inquiry. It effectively serves as a mount point for vVols. It is the only FlashArray volume that must be manually connected to hosts to use vVols.

Fun fact: Protocol endpoints were formerly called I/O de-multiplexers. PE is a much better name.

When an ESXi host requests access to a vVol (for example, when a VM is powered on), the array binds the vVol to it. Binding is a synonym for sub-lun connection. For example, if a PE uses LUN 255, a vVol bound to it would be addressed as LUN 255:1.  The section titled vVol Binding describes vVol binding in more detail.

PEs greatly extend the number of vVols that can be connected to an ESXi cluster; each PE can have up to 16,383 vVols per host bound to it simultaneously. Moreover, a new binding does not require a complete I/O rescan. Instead, ESXi issues a REPORT_LUNS SCSI command with SELECT REPORT to the PE to which the sub-lun is bound. The PE returns a list of sub-lun IDs for the vVols bound to that host. In large clusters, REPORT_LUNS is significantly faster than a full I/O rescan because it is more precisely targeted.
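
The binding behavior described above can be observed directly from an ESXi host. The sketch below uses PowerCLI's Get-EsxCli to run the esxcli storage vvol commands; the host name is a placeholder.

Inspecting PEs and Storage Containers from ESXi (PowerCLI)
# Open an esxcli session against one host.
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi-01.example.com') -V2

# Protocol endpoints known to the host (equivalent to: esxcli storage vvol protocolendpoint list).
$esxcli.storage.vvol.protocolendpoint.list.Invoke()

# vVol storage containers and their VASA provider state (esxcli storage vvol storagecontainer list).
$esxcli.storage.vvol.storagecontainer.list.Invoke()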

The FlashArray PE Implementation

A Protocol Endpoint on the FlashArray can be viewed and connected from either the FlashArray UI or CLI.

Using the FlashArray UI to Manage the Protocol Endpoint

When its first VASA provider is registered, a FlashArray automatically creates a PE called pure-protocol-endpoint.  The pure-protocol-endpoint can be filtered in the Volumes view.  A PE can be connected from the PE volume view or from a Host/Host Group view in the FlashArray UI.

From the Storage -> Volumes view
Click on the options and select Show Protocol Endpoints
vVols-User-Guide-PE-01.png
This view will display the Protocol Endpoints for the FlashArray

vVols-User-Guide-PE-02.png

From the PE View the PE can be connected to a Host or Host Group
Best Practice is to connect the PE to a Host Group and not Hosts individually. 
vVols-User-Guide-PE-03.png

From the Connect Host Groups page you can select one or multiple Host Groups to connect the PE to
vVols-User-Guide-PE-04.png

Using the FlashArray CLI to Manage the Protocol Endpoint

From the FlashArray CLI a storage admin can manage the Protocol Endpoint.  This includes listing/viewing, creating, connecting, disconnecting or destroying a protocol endpoint.

Protocol endpoints that have been created can be listed with purevol list --protocol-endpoint

pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6

A protocol endpoint can be created with purevol create --protocol-endpoint

pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint prod-protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7

pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6

To connect a protocol endpoint use either purehgroup connect or purevol connect

pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 10 prod-protocol-endpoint
Name                    Host Group       Host       LUN
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  10

pureuser@sn1-x50r2-b12-36> purevol list --connect
Name                                Size  LUN  Host Group       Host
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-1-FC
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-2-FC
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-3-FC

A protocol endpoint can be disconnected from a host and host group with purevol disconnect.

However, if there are any active sub-lun connections, this operation will fail, as disconnecting the PE would cause a data path failure (a severity-1 event) for that ESXi host.

pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 11 pure-protocol-endpoint
Name                    Host Group       Host       LUN
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  11
pureuser@sn1-x50r2-b12-36> purevol disconnect --hgroup Prod-Cluster-FC pure-protocol-endpoint
Name                    Host Group       Host
pure-protocol-endpoint  Prod-Cluster-FC  -

A disconnected Protocol Endpoint can be destroyed with purevol destroy. DO NOT DESTROY THE DEFAULT PURE-PROTOCOL-ENDPOINT!

pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint dr-protocol-endpoint
Name                  Source  Created                  Serial
dr-protocol-endpoint  -       2020-12-02 14:15:23 PST  F4252922ADE248CF000113EA

pureuser@sn1-x50r2-b12-36> purevol destroy dr-protocol-endpoint
Name
dr-protocol-endpoint

A FlashArray’s performance is independent of the number of volumes it hosts; an array’s full performance capability can be delivered through a single PE. PEs are not performance bottlenecks for vVols, so a single PE per array is all that is needed.

Configuring a single PE per array does not restrict multi-tenancy. Sub-lun connections are host-specific.

A FlashArray automatically creates a default pure-protocol-endpoint PE when its first VASA provider is registered. If necessary, additional PEs can also be created manually.  However, in most cases the default pure-protocol-endpoint is fine to use.  There is no additional HA value added by connecting a host to multiple protocol endpoints.

Do not destroy or eradicate the pure-protocol-endpoint PE on the FlashArray.  This namespace is required for VASA to store the metadata it needs to work correctly with the FlashArray.

BEST PRACTICE: Use one (the default) PE per array. All hosts can share the same PE; vVol-to-host bindings are host-specific, so multi-tenancy is inherently supported.

More than one PE can be configured, but this is seldom necessary.

As is typical of the FlashArray architecture, vVol support, and in particular the PE implementation, is as simple as possible.

Protocol Endpoints in vSphere

There are multiple ways to view the Protocol Endpoints that an ESXi host is connected to or is currently using as a mount point for a vVol Datastore.

  • From the Hosts and Datacenter view, navigate to Host -> Configure -> Storage Devices.
    This view will show all storage devices connected to this ESXi host.
    All Protocol Endpoints that are connected via the SAN will show as a 1.00 MB device.
    vvols-guide-pe-vsphere-view-01.png
    From this view the LUN ID, Transport, Multipathing, and much more can be found.
  • From the Hosts and Datacenter view, navigate to Host -> Configure -> Protocol Endpoints.
    This view only displays Protocol Endpoints that are actively being used as a mount point for a vVol Datastore, along with their Operational State.

    vvols-guide-pe-vsphere-view-02.png
    On the previous page there was a PE with LUN ID 253; on this page, however, that PE does not show up as configured or operational.
    This is because that PE is not backing a mounted vVol Datastore.  This is expected behavior: if a vVol datastore is not mounted to the ESXi host, no configured PEs will display in this view.

    Multipathing is configured on the Protocol Endpoint and not on a sub-lun.  Each sub-lun connection inherits the multipathing policy set on the PE.

    BEST PRACTICE: Configure the round robin path selection policy for PEs (a PowerCLI sketch follows this list).

  • From the Datastore View, Navigate to a vVol Datastore -> Configure -> Protocol Endpoints
    This page will display all the PEs on the FlashArray that back this vVol Datastore (storage container).  By default there will only be one PE on the FlashArray.
    In this example there are two PEs.

    vvols-guide-pe-vsphere-view-03.png
    Select one of the PEs and click on the Host Mount Data tab.
    From here the mounted hosts will be displayed.  Take note that there is a UI bug that will always show the Operational Status as not accessible.
  • By comparison, when the second PE is viewed, there are no mounted hosts.  This is because the second PE is not connected via the SAN to any ESXi hosts in this vCenter.

    vvols-guide-pe-vsphere-view-04.png
  • From the Datastore View page, Navigate to a vVol Datastore -> Configure -> Connectivity with Hosts

    vvols-guide-pe-vsphere-view-05.png
    This page shows the mounted hosts' connectivity with the vVol Datastore.  The expected status is Connected.  If a host has lost management connectivity, the host will show as disconnected.
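
As referenced in the multipathing note above, the PE path selection policy can also be checked and set with PowerCLI. The sketch below is hedged: the cluster name is a placeholder, it identifies PEs by the 1 MB PURE device signature described earlier, and newer ESXi/Purity combinations may already apply round robin through a default SATP rule.

Round Robin on Protocol Endpoints (PowerCLI)
# Set Round Robin on every FlashArray protocol endpoint in the cluster.
foreach ($esx in Get-Cluster 'Prod-Cluster' | Get-VMHost) {
    Get-ScsiLun -VmHost $esx -LunType disk |
        Where-Object { $_.Vendor -eq 'PURE' -and $_.CapacityMB -eq 1 } |
        Set-ScsiLun -MultipathPolicy RoundRobin
}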

ESXi handles queue depth limits differently for PEs than for other volumes. Pure Storage recommends leaving ESXi PE queue depth limits at the default values.

BEST PRACTICE: Leave PE queue depth limits at the default values unless performance problems occur.
The blog post at https://blog.purestorage.com/queue-depth-limits-and-vvol-protocol-endpoints/ contains additional information about PE queue depth limits.

[Back to Top]


vVol Datastore

vVols replace LUN-based datastores formatted with VMFS. There is no file system on a vVol datastore, nor are vVol-based virtual disks encapsulated in files.

The datastore concept does not disappear entirely, however. VMs must be provisioned somewhere. Historically, VMs have typically been implemented as files in NFS mounts or in a VMFS. Datastores are necessary, both because VM provisioning tools use them to house new VMs, and because they help control storage allocation and differentiate between different types of storage.

However, VMFS datastores limit flexibility, primarily because their sizes and features are specified when they are created, and it is not possible to assign different features to individual objects in them. To overcome this limitation, the vVol architecture includes a storage container object, generally referred to as a vVol datastore, with two key properties:

Capacity limit

  • Allows an array administrator to limit the capacity that VMware administrators can provision as vVols.

Array capabilities

  • Allows vCenter to determine whether an array can satisfy a configuration request for a VM.

A vVol datastore is sometimes referred to as a storage container. Although the terms are essentially interchangeable, this report uses the term vVol datastore exclusively.

The FlashArray Implementation of vVol Datastores

FlashArray vVol datastores have no artificial size limit. The initial FlashArray vVols release supports a single 8-petabyte vVol datastore per array. Pure Storage Technical Support can change an array’s vVol datastore size on customer request to alter the amount of storage VMware can allocate.  Should this be desired, open a support case with Pure Storage to have the size change applied.

Pure Storage anticipates supporting multiple vVol datastores per array and user-configurable vVol datastore sizes in the future.

In Purity//FA Version 5.0.0 and newer, the VASA service is a core part of the Purity OS, so if Purity is up, VASA is running.  Once the storage providers are registered, a vVol Datastore can be "created" and mounted to ESXi hosts.  However, for vSphere to implement and use vVols, a Protocol Endpoint on the FlashArray must be connected to the ESXi hosts; otherwise there is only a management path connection and not a data path connection.

FlashArrays require two items to create a volume—a size and a name. vVol datastores do not require any additional input or enforce any configuration rules on vVols, so creation of FlashArray-based vVols is simple.

Mounting a vVol Datastore

A vVol datastore should be mounted to an ESXi host with access to a PE on the array that hosts the vVol datastore. Mounting a vVol datastore to a host requires that the array’s VASA providers be registered with vCenter and that the host have access to a PE on the array.

The latter requires that (a) an array administrator connect the PE to the host or host group, and (b) a VMware administrator rescan the ESXi host’s I/O interconnects.

An array administrator can use the FlashArray GUI, CLI, or REST API to connect a PE and a host or host group; the FlashArray User Guide contains instructions for connecting a host or host group and a volume.

With Pure Storage's vSphere Plugin, a VMware administrator can connect a PE to an ESXi Cluster and mount its vVol datastore without array administrator involvement.

Using the Plugin to Mount vVol Datastore

Once the Storage Providers are registered, the vVol Datastore can be created and mounted using the vSphere Plugin.  The workflow for creating the vVol Datastore and mounting it to an ESXi Cluster is shown below; it is also covered in the demo video.

Mounting the vVol Datastore with the Pure Storage vSphere Plugin

The ESXi hosts will need to have been added to the FlashArray, and the best practice is to map the ESXi cluster to a FlashArray Host Group, with each ESXi host in that cluster added to the Host Group.

  1. Right-click the ESXi Cluster for which you want to create and mount the vVol Datastore.  Go to the Pure Storage option and then click Create Datastore.
    vvols-plugin-kb-02-mounting-vvol-ds-1.png
  2. Choose to create a vVol FlashArray Storage Container (vVol Datastore).
    vvols-plugin-kb-02-mounting-vvol-ds-2.png
  3. Choose a name for the vVol Datastore
    vvols-plugin-kb-02-mounting-vvol-ds-3.png
  4. Select the ESXi Cluster that will be the compute resource to mount the vVol Datastore to.  Best practice for vVols is to mount the vVol Datastore to the host group and not to individual ESXi hosts.  Why is this important?  During this step, the Plugin will check that the Host Group on the FlashArray is connected to a Protocol Endpoint; if there is no connection, the Plugin will automatically connect the Protocol Endpoint on that FlashArray to the Host Group.
    vvols-plugin-kb-02-mounting-vvol-ds-4.png
  5. Confirm the FlashArray that the vVol Datastore will be created for.

    vvols-plugin-kb-02-mounting-vvol-ds-5.png
  6. Review the information and finish the workflow.
    vvols-plugin-kb-02-mounting-vvol-ds-6.png
  7. From the Datastore Page, click on the newly created vVol Datastore and then check the Connectivity with the Hosts in the ESXi Cluster to ensure that they are connected and healthy.
    vvols-plugin-kb-02-mounting-vvol-ds-7.png

Mounting vVol Datastores Manually: FlashArray Actions 

Alternatively, vVol datastores can be provisioned by connecting the PE to the hosts or host group, rescanning each host’s I/O interconnects, and mounting the vVol datastore to each host. These operations require both FlashArray and VMware involvement, however. Array administrators can use the GUI, CLI, or REST interfaces, or tools such as PowerShell. VMware administrators can use the Web Client, the VMware CLI, or the VMware SDK and SDK-based tools like PowerCLI.

Pure Storage recommends using the Plugin to provision PEs to hosts.  Keep in mind that the FlashArray UI does not allow creation of Protocol Endpoints.  The FlashArray UI does allow finding the Protocol Endpoint and connecting it to Hosts and Host Groups; the FlashArray UI and CLI workflows for viewing and connecting a Protocol Endpoint are covered in the section titled The FlashArray PE Implementation above.


Mounting vVol Datastores Manually: Web Client Actions

Navigate to the vCenter UI once the PE is connected to the FlashArray Host Group that corresponds to the vSphere ESXi Cluster.

Although the PE volumes are connected to the ESXi hosts from a FlashArray standpoint, ESXi may not recognize them until an I/O rescan occurs.  On recent versions of Purity and ESXi, a Unit Attention is issued to the ESXi hosts when the PE is connected, and the ESXi host dynamically updates the devices presented via the SAN.  If the FlashArray is not on a recent release of Purity (5.1.15+, 5.3.6+, or 6.0.0+), a storage rescan from the ESXi hosts will be required for the PE to show up in the ESXi host's connected devices.
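
If a rescan is needed, it can be issued to every host in the cluster at once from PowerCLI rather than host by host. A minimal sketch; the cluster name is a placeholder.

Cluster-Wide Storage Rescan (PowerCLI)
# Rescan HBAs and VMFS on every host in the cluster so the newly connected PE is discovered.
Get-Cluster 'Prod-Cluster' | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null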

To display a provisioned PE, select the host in the inventory pane, select the Configure tab, and click Storage Devices. The PE appears as a 1 megabyte device.

vvols-guide-pe-vsphere-view-01.png

This screen is useful for finding the PEs that have been successfully connected via a SAN transport.  Multipathing can also be configured on the PE from this view.

Note that in this example there are three PEs from three different arrays.  When navigating to the Storage -> Protocol Endpoints screen, only the PEs that are used as a vVol Datastore mount are displayed.  In this example only two show, as there are currently only two vVol Datastores (from two different arrays) created.

vvols-guide-pe-vsphere-view-02.png

The expected behavior is that the ESXi host will only display connected PEs that are currently being used as mounts for a vVol Datastore.

To mount a vVol datastore, right-click the target host or cluster, select Storage from the dropdown menu, and select New Datastore from the secondary dropdown to launch the New Datastore wizard.

vvols-guide-vvol-ds-01.png

Best Practice is to create and mount the vVol Datastore against the ESXi Cluster which would be mapped to a FlashArray Host Group.

Click the vVol Type

vvols-guide-vvol-ds-02.png

Enter a friendly name for the datastore and select the vVol container in the Backing Storage Container list.

vvols-guide-vvol-ds-03.png

Clicking a container displays the array that hosts it in the lower Backing Storage Container panel.

No Backing Storage listing typically indicates either that the array’s VASA providers have not been registered or that vCenter cannot communicate with them.

Select the host(s) on which to mount the vVol datastore.  Best practice is to mount the vVol Datastore to all hosts in the ESXi Cluster.

vvols-guide-vvol-ds-04.png

Review the configuration details and then click Finish.

vvols-guide-vvol-ds-05.png

Once a vVol datastore is mounted, the Configure tab for any ESXi host to which it is mounted lists the PEs available from the array to which the host is connected via SAN transport.  Note that LUN 253 is now listed as a PE for the ESXi host.

vvols-guide-vvol-ds-09.png
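
The same mount can also be scripted for every host in a cluster. The sketch below assumes a PowerCLI release that includes Get-VasaStorageContainer and the vVol parameter set of New-Datastore (-Vvol/-ScId); the cluster, container, and datastore names are placeholders and should be verified against your PowerCLI version before use.

Mounting a vVol Datastore (PowerCLI)
# Find the array's vVol storage container and mount it as a datastore on each host in the cluster.
$container = Get-VasaStorageContainer | Where-Object { $_.Name -like '*sn1-x70-c05-33*' }

foreach ($esx in Get-Cluster 'Prod-Cluster' | Get-VMHost) {
    New-Datastore -VMHost $esx -Name 'sn1-x70-c05-33-vVol-DS' -Vvol -ScId $container.Id
}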

Mounting a vVol Datastore to Additional Hosts

If an ESXi host has been added to a cluster, or the vVol Datastore was only mounted to some hosts in the cluster, there is a workflow to mount the vVol Datastore to additional hosts.

To mount the vVol datastore to additional hosts, right-click on the vVol Datastore and select Mount Datastore to Additional Hosts from the dropdown menu to launch the Mount Datastore to Additional Hosts wizard.

vvols-guide-vvol-ds-06.png

Select the hosts to which to mount the vVol datastore by checking their boxes and click Finish.

vvols-guide-vvol-ds-07.png

Using a vVol Datastore

A vVol datastore is neither a file system nor a volume (LUN) per se, but an abstraction that emulates a file system to (a) represent VMs provisioned through it and (b) manage VM space allocation. It can be viewed as a collection of references to vVols.

vVol datastores are managed similarly to conventional datastores. For example, the Web Client file browser and an ESXi SSH session can display a vVol datastore’s contents.

vSphere UI vVol Datastore View
vvols-guide-vvol-ds-08.png
ESXi CLI view of vVol Datastore Content
[root@ac-esxi-a-16:~] cd /vmfs/volumes/
[root@ac-esxi-a-16:/vmfs/volumes] cd sn1-m20r2-c05-36-vVols-DS/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] ls
AC-3-vVols-VM-1                               rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a  rfc4122.a46478bc-300d-459e-9b68-fa6acb59c01c  vVols-m20-VM-01                               vvol-w2k16-no-cbt-c-2
AC-3-vVols-VM-2                               rfc4122.7255934c-0a2e-479b-b231-cef40673ff1b  rfc4122.ba344b42-276c-4ad7-8be1-3b8a65a52846  vVols-m20-VM-02
rfc4122.1f972b33-12c9-4016-8192-b64187e49249  rfc4122.7384aa04-04c4-4fc5-9f31-8654d77be7e3  rfc4122.edfc856c-7de1-4e70-abfe-539e5cec1631  vvol-w2k16-light-c-1
rfc4122.24f0ffad-f394-4ea4-ad2c-47f5a11834d0  rfc4122.8a49b449-83a6-492f-ae23-79a800eb5067  vCLS (1)                                      vvol-w2k16-light-c-2
rfc4122.31123240-6a5d-4ead-a1e8-b5418ab72a3e  rfc4122.97815229-bbef-4c87-b69b-576fb55a780c  vVols-b05-VM-02                               vvol-w2k16-no-cbt-c-1
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] cd vVols-m20-VM-01/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS/vVols-m20-VM-01
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] ls
vVols-m20-VM-01-000001.vmdk                                          vVols-m20-VM-01.vmdk                                                 vmware-2.log
vVols-m20-VM-01-321c4c5a.hlog                                        vVols-m20-VM-01.vmsd                                                 vmware-3.log
vVols-m20-VM-01-3549e0a8.vswp                                        vVols-m20-VM-01.vmx                                                  vmware-4.log
vVols-m20-VM-01-3549e0a8.vswp.lck                                    vVols-m20-VM-01.vmx.lck                                              vmware-5.log
vVols-m20-VM-01-Snapshot2.vmsn                                       vVols-m20-VM-01.vmxf                                                 vmware.log
vVols-m20-VM-01-aux.xml                                              vVols-m20-VM-01.vmx~                                                 vmx-vVols-m20-VM-01-844ff34dc6a3e333b8e343784b3c65efa2adffa1-2.vswp
vVols-m20-VM-01.nvram                                                vmware-1.log



Types of vVols

The benefits of vVols are rooted in the increased storage granularity achieved by implementing each vVol-based virtual disk as a separate volume on the array. This property makes it possible to apply array-based features to individual vVols.

FlashArray Organization of vVols

FlashArrays organize the vVols associated with each vVol-based VM as a volume group. Each time a VMware administrator creates a vVol-based VM, the hosting FlashArray creates a volume group whose name is the name of the VM, prefixed by vvol- and followed by -vg (ArrayView 5).

FlashArray syntax limits volume group names to letters, numbers and dashes; arrays remove other characters that are valid in virtual machine names during volume group creation.

vv35.png
ArrayView 5: Volume Groups Area of GUI Volumes Tab

To list the volumes associated with a vVol-based VM, select the Storage view Volumes tab. In the Volume Groups area, select the volume group name containing the VM name from the list or enter the VM name in the search box (ArrayView 5).

The Volumes area of the pane lists the volumes associated with the VM (ArrayView 6).

vv36.png
ArrayView 6: GUI View of Volume Group Membership

Clicking a volume name displays additional detail about the selected volume (ArrayView 7).

vv37.png
ArrayView 7: GUI View of a vVol's Details

Note:
Clicking the volume group name in the navigation breadcrumbs returns to the volume groups display.

When the last vVol in a volume group is deleted (destroyed), the array destroys the volume group automatically. As with all FlashArray data objects, destroying a volume group moves it to the array’s Destroyed Volume Groups folder for 24 hours before eradicating it permanently.

To recover or eradicate a destroyed volume group, click the respective icons in the Destroyed Volume Groups pane.

vv38.png
ArrayView 8: FlashArray GUI Destroyed Volume Groups Folder

The FlashArray CLI and REST interfaces can also be used to manage volume groups of vVols.

VM Datastore Structures

vVols do not change the fundamental VM architecture:

  • Every VM has a configuration file (a VMX file) that describes its virtual hardware and special settings
  • Every powered-on VM has a swap file.
  • Each virtual disk added to a VM is implemented as a storage object that limits guest OS disk capacity.
  • Every VM has a memory (vmem) file used to store snapshots of its memory state.

Conventional VM Datastores

Every VM has a home directory that contains information, such as:

Virtual hardware descriptions 

Guest operating system version and settings, BIOS configuration, virtual SCSI controllers, virtual NICs, pointers to virtual disks, etc.

Logs

Information used during VM troubleshooting

VMDK files 

Files that correspond to the VM’s virtual disks, whether implemented as NFS, VMFS, physical and virtual mode RDMs (Raw Device Mappings), or vVols. VMDK files indicate  where the ESXi vSCSI layer should send each virtual disk’s I/O.

For a complete list of VM home directory contents, see the VMware Workstation 5.0 article What Files Make Up a Virtual Machine.

When a VMware administrator creates a VM based on VMFS or NFS, VMware creates a directory in its home datastore. (vSphereView 30).

vv39.png
vSphereView 30: Web Client Edit Settings Wizard
vv40.png
vSphereView 31: Web Client File Browser View of a VM's Home Directory


With vVol-based VMs, there is no file system, but VMware makes the structure appear to be the same as that of a conventional VM. What occurs internally is quite different, however.

vVol-based VM Datastores

vVol-based VMs use four types of vVols:

  • Configuration vVol (usually called “config vVol”; one per VM)
  • Data vVol (one or more per VM)
  • Swap vVol (one per VM)
  • Memory vVol (zero, one or more per VM)

The sections that follow describe these four types of vVols and the purposes they serve.

In addition to the four types of vVols used by vVol-based VMs, there are vVol snapshots, described in the section titled Snapshots of vVols.

Config vVols 

When a VMware administrator creates a vVol-based VM, vCenter creates a 4-gigabyte thin-provisioned configuration vVol (config vVol) on the array, which ESXi formats with VMFS. A VM’s config vVol stores the files required to build and manage it: its VMX file, logs, VMDK pointers, etc. To create a vVol-based VM, right-click any inventory pane object to launch the New Virtual Machine wizard and specify that the VM’s home directory be created on a vVol datastore.

vv41.png
vSphereView 32: New Virtual Machine Wizard

Note:
For simplicity, the VM in this example has no additional virtual disks.


vv42.png
vSphereView 33: Customize Hardware Wizard

When VM creation is complete, a directory with the name of the VM appears in the array’s vVol datastore. The directory contains the VM’s vmx file, log file and an initially empty vmsd file used to store snapshot information.

vv43.png
vSphereView 34: Directory of a New vVol-based VM

In the Web Client, a vVol datastore appears as a collection of folders, each representing a mount point for the mini-file system on a config vVol. The Web Client GUI Browse Datastore function and ESXi console cd operations work as they do with conventional VMs. Rather than traversing one file system, however, they transparently traverse the file systems hosted on all of the array’s config vVols.

A FlashArray creates a config vVol for each vVol-based VM. Arrays name config vVols by concatenating the volume group name with config-<UUID>. Arrays generate UUIDs randomly; an array administrator can change them if desired.

An array administrator can search for volumes containing a vVol-based VM name to verify that its volume group and config vVol have been created.

vv44.png
ArrayView 9: Locating a VM's Config vVol

As objects are added to a vVol-based VM, VMware creates pointer files in its config vVol; these are visible in its directory. When a VM is deleted, moved to another array, or moved to a non-vVol datastore, VMware deletes its config vVol.

Data vVols

Each data vVol on an array corresponds to a virtual disk. When a VMware administrator creates a virtual disk in a vVol datastore, VMware directs the array to create a volume and creates a VMDK file pointing to it in the VM’s config vVol. Similarly, to resize or delete a virtual disk, VMware directs the array to resize or destroy the corresponding volume.

Creating a Data vVol

vVol-based virtual disk creation is identical to conventional virtual disk creation. To create a vVol-based virtual disk using the Web Client, for example, right-click a VM in the Web Client inventory pane and select Edit Settings from the dropdown menu to launch the Edit Settings wizard.

vv45.png
vSphereView 35: Web Client Edit Settings Command

Select New Hard Disk in the New device dropdown and click Add.

vv46.png
vSphereView 36: New Hard Disk Selection

Enter configuration parameters. Select the VM’s home datastore (Datastore Default) or a different one for the new virtual disk, but to ensure that the virtual disk is vVol-based, select a vVol datastore.

vv47.png
vSphereView 37: Specifying Data vVol Parameters

Click OK to create the virtual disk. VMware does the following:

  1. For a VM’s first vVol on a given array, directs the array to create a volume group and a config vVol for it.
  2. Directs the array to create a volume in the VM’s volume group.
  3. Creates a VMDK pointer file in the VM’s config vVol to link the virtual disk to the data vVol on the array.
  4. Adds the new pointer file to the VM’s VMX file to enable the VM to use the data vVol.

The FlashArray GUI Storage view Volumes tab lists data vVols in the Volumes pane of the volume group display.

vv48.png
ArrayView 10: FlashArray GUI View of a Volume Group's Data vVols
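The same operation can be scripted. A minimal PowerCLI sketch follows; the VM and datastore names are placeholders drawn from the examples in this guide.

# Adding a disk on a vVol datastore causes the array to create a data vVol in the VM's volume group
$vm = Get-VM -Name 'vVols-m20-VM-01'
$ds = Get-Datastore -Name 'sn1-m20r2-c05-36-vVols-DS'
New-HardDisk -VM $vm -Datastore $ds -CapacityGB 100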

Resizing a Data vVol

A VMware administrator can use any of several management tools to expand a data vVol online to a maximum size of 62 terabytes. Although FlashArrays can shrink volumes as well, vSphere does not support that function.

vv49.png
vSphereView 38: vSphere Disallows Volume Shrinking

Note:
VMware enforces the 62 terabyte maximum to enable vVols to be moved to VMFS or NFS, both of whose maximum virtual disk size is 62 terabytes.

At this time, VMware does not support expanding a volume attached to a SCSI controller with sharing enabled.

To expand a data vVol using the Web Client, right-click the VM in the inventory pane, select Edit Settings from the dropdown menu, and select the virtual disk to be expanded from the dropdown. The virtual disk’s current capacity is displayed. Enter the desired capacity and click OK, and use guest operating system tools to expose the additional capacity to the VM. 

vv50.png
vSphereView 39: Selecting Virtual Disk for Expansion
vv51.png
vSphereView 40: Entering Expanded Data vVol Capacity
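Expansion can also be performed with PowerCLI. A hedged sketch follows; the VM and disk names are placeholders, and the guest operating system must still be told to use the added capacity.

$vm   = Get-VM -Name 'vVols-m20-VM-01'
$disk = Get-HardDisk -VM $vm -Name 'Hard disk 2'
# Online expansion of the data vVol; shrinking is not supported by vSphere
Set-HardDisk -HardDisk $disk -CapacityGB 200 -Confirm:$false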

Deleting a Data vVol

Deleting a data vVol is identical to deleting any other type of virtual disk. When a VMware administrator deletes a vVol-based virtual disk from a VM, ESXi deletes the reference VMDK file and directs the array to destroy the underlying volume.

To delete a vVol-based virtual disk, right-click the target VM in the Web Client inventory pane and select Edit Settings from the dropdown menu to launch the Edit Settings wizard. Select the virtual disk to be deleted, hover over the right side of its row, and click the  vv52.png  symbol when it appears.

vv53.png
vSphereView 41: Selecting Data vVol for Deletion

To remove the vVol from the VM, click the OK button. To remove it from the VM and destroy it on the array, check the Delete files from datastore checkbox and click OK.

vv54.png
vSphereView 42: Destroying the Volume on the Array

Note:
Delete files from datastore is not a default—if it is not selected, the vVol is detached from the VM, but remains on the array. A VMware administrator can reattach it with the Add existing virtual disk Web Client command.

The ESXi host deletes the data vVol’s VMDK pointer file and directs the array to destroy the volume (move it to its Destroyed Volumes folder for 24 hours).

vv55.png
ArrayView 11: Deleted Data vVol in an Array's Destroyed Volumes Folder

An array administrator can recover a deleted vVol-based virtual disk at any time during the 24 hours following deletion. After 24 hours, the array permanently eradicates the volume and it can no longer be recovered.
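The deletion workflow can also be scripted. A hedged PowerCLI sketch follows; the VM and disk names are placeholders.

$vm   = Get-VM -Name 'vVols-m20-VM-01'
$disk = Get-HardDisk -VM $vm -Name 'Hard disk 2'
# Without -DeletePermanently the virtual disk is only detached and the data vVol remains on the array;
# with it, the array destroys the volume (recoverable for 24 hours)
Remove-HardDisk -HardDisk $disk -DeletePermanently -Confirm:$false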

Swap vVols

VMware creates swap files for VMs of all types when they are powered on, and deletes them at power-off. When a vVol-based VM is powered on, VMware directs the array to create a swap vVol, and creates a swap (.vswp) file in the VM’s config vVol that points to it.

vv56.png
vSphereView 43: Powered-off VM Configuration
Illustrates the components of a powered-off vVol-based VM. There is no vswp file.
vv57.png
ArrayView 12: Data vVol Volumes for Powered-off VM
The VM’s volume group does not include a swap volume.

To power on a vVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown. 

vv58.png
vSphereView 44: Power On VM Command

 When a VM is powered on, the Web Client file navigator lists two vswp files in its folder.

vv59.png
vSphereView 45: Powered-On VM with vswp File

VMware creates a vswp file for the VM’s memory image when it is swapped out and another for ESXi administrative purposes.

The swap vVol’s name in the VM’s volume group on the array is Swap- concatenated with a unique identifier. The GUI Volumes tab shows a volume whose size is the VM’s memory size. 

vv60.png
ArrayView 13: Swap Volume for Powered-On VM
vv61.png
vSphereView 46: VM's Virtual Memory Size

Like all FlashArray volumes, swap vVols are thin-provisioned—they occupy no space until data is written to them.

To power off a vVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Shut Down Guest OS from the secondary dropdown.

vv62.png
vSphereView 47: Web Client Power Off VM Command

When a VM is powered off, its vswp file disappears from the Web Client file navigator, and the FlashArray GUI Volumes tab no longer shows a swap volume on the array.

vv63.png
ArrayView 14: GUI View of Powered-off VM's Volumes (No Swap vVol)

VMware destroys and immediately eradicates swap vVols from the array. (They do not remain in the Destroyed Volumes folder for 24 hours.)

vv64.png
ArrayView 15: Destroyed and Eradicated Swap vVol

Memory vVols

VMware creates memory vVols for two reasons:

VM suspension

When a VMware administrator suspends a VM, VMware stores its memory state in a memory vVol. When the VM resumes, its memory state is restored from the memory vVol, which is then deleted.

VM snapshots

When a VMware management tool creates a snapshot of a vVol-based VM with the “store memory state” option, VMware creates a memory vVol. Memory vVols that contain VM snapshots are deleted when the snapshots are deleted. They are described in the section titled Creating a VM Snapshot with Saved Memory.

To suspend a running VM, right-click its entry in the Web Client inventory pane, select Power from the dropdown menu, and Suspend from the secondary dropdown.

vv65.png
vSphereView 48: VM Suspend Command

VMware halts the VM’s processes, creates a memory vVol and a vmss file to reference it, de-stages (writes) the VM’s memory contents to the memory vVol, and directs the array to destroy and eradicate its swap vVol.

vv66.png
ArrayView 16: Memory vVol Host Connection
vv67.png
ArrayView 17: GUI View of Memory vVol
vv68.png
vSphereView 49: Memory vVol in File Navigator

When the VM’s memory has been written, the ESXi host unbinds its vVols. They are bound again when the VM is powered on.

To resume a suspended VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown.

vv69.png
vSphereView 50: Web Client Command to Power On a Suspended VM

Powering on a suspended VM binds its vVols, including its memory vVol, to the ESXi host and loads its memory state from the memory vVol. Once loading is complete, VMware unbinds the memory vVol and destroys (but does not immediately eradicate) it. The memory vVol moves to the array’s Destroyed Volumes folder, where it is eradicated permanently after 24 hours.

vv70.png
ArrayView 18: GUI View of Destroyed Memory vVol
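Suspend and resume can be scripted as well. A minimal PowerCLI sketch follows; the VM name is a placeholder.

$vm = Get-VM -Name 'vVols-m20-VM-01'
Suspend-VM -VM $vm -Confirm:$false   # the array creates a memory vVol; the swap vVol is destroyed and eradicated
Start-VM -VM $vm                     # memory state is reloaded, then the memory vVol is destroyed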

Recovering Deleted vVols

Deleted data and config vVols are both recoverable within 24 hours of deletion.

Throughout a VM’s life, it has a config vVol in every vVol datastore it uses. The config vVol hosts the VM’s home folder which contains its VMX file, logs, swap pointer file, and data vVol (VMDK) and snapshot pointer files. Restoring a config vVol from a snapshot and the corresponding data and snapshot vVols effectively restores a deleted VM.

vv71.png
vSphereView 51: File Navigator View of a Typical VM Home Directory Folder

To delete a VM, VMware deletes the files in its config vVol and directs the array to destroy the config vVol and any of its data vVols that are not shared with other VMs.

vv72.png
vSphereView 52: Confirm Delete Wizard

An array administrator can recover destroyed vVols at any time within 24 hours of their destruction. But because the config vVol’s files are deleted before destruction, recovering a VM’s config vVol results in an empty folder. A recovered config vVol must be restored from its most recent snapshot.

Recovering a config vVol requires at least one pre-existing array-based snapshot. Without a config vVol snapshot, a VM can be recovered, but its configuration must be recovered manually.

When a VMware administrator deletes a VM, VMware directs the array to destroy its config vVol, data vVols, and any snapshots. The array moves the objects to its destroyed objects folders for 24 hours.

vv73.png
ArrayView 19: GUI View of a Destroyed VM's Volumes, Snapshots, and Volume Group

To recover a deleted VM, recover its volume group first, followed by its config and data vVols. To recover a single object on the array, click the array options image.png  icon next to it.

To recover multiple objects of the same type with a single action, click the vertical ellipsis and select Recover… to launch the Recover Volumes wizard. Select the config vVol and the data vVols to be recovered by checking their boxes and click the Recover button.

vv74.png
ArrayView 20: GUI Command to Recover Objects
vv75.png
ArrayView 21: Selecting Volumes to Recover

In the GUI Snapshots pane, click the vertical ellipsis to the right of the snapshot from which to restore, and select Restore from the dropdown menu.

vv76.png
ArrayView 22: Restore Config vVol from Snapshot

When the Restore Volume from Snapshot wizard appears, click the Restore button.

vv77.png
ArrayView 23: Restore Volume Confirmation Wizard

Restoring the config vVol from a snapshot recreates the pointer files it contains. In the Web Client file navigator, right-click the vmx file and select Register VM… from the dropdown menu to register the VM.

vv78.png
vSphereView 53: Registering a Recovered VM

After registration, all data vVols, snapshots, and the VM configuration are as they were prior to VM deletion.

Recovering a Deleted Data vVol

During the 24-hour grace period between deletion of a vVol by a VMware administrator and its eradication by the array, the virtual disk can be restored.

When a VMware administrator deletes a vVol-based virtual disk with the Delete files from datastore option selected, the array moves the data vVol to its Destroyed Volumes folder for 24 hours.

vv79.png
vSphereView 54: Delete Virtual Disk Command

To use the Plugin to restore a deleted data vVol, click the VM in the inventory pane, select the FlashArray Virtual Volume Objects tab, and click the Restore Deleted Disk Plugin button to launch the Restore Deleted Disk wizard.

vv80.png
vSphereView 55: Restore Deleted Disk Command
vv81.png
vSphereView 56: Restore Deleted Disk Wizard

Select the data vVol to be restored from the list and click the Restore button. VMware directs the array to remove the data vVol from its Destroyed Volumes folder and makes the virtual disk visible to the VM and to the Web Client.



vVol Binding

A primary goal of the vVol architecture is scale—increasing the number of virtual disks that can be exported to ESXi hosts concurrently. With previous approaches, each volume would require a separate LUN. In large environments, it is quite possible to exceed the ESXi limit of 512 LUNs. vVols introduces the concept of protocol endpoints (PEs) to significantly extend this limit.

ESXi hosts bind and unbind (connect and disconnect) vVols dynamically as needed. Hosts can provision VMs and power them on and off even when no vCenter is available.

When an ESXi host needs access to a vVol:

  • It issues a bind request to the VASA provider whose array hosts the vVol
  • The VASA provider binds the vVol to a PE visible to the requesting host and returns the binding information (the sub-lun) to the host
  • The host issues a SCSI REPORT LUNS command to the PE to make the newly-bound vVol accessible.

vVols are bound to specific ESXi host(s) for as long as they are needed. Binds (sub-lun connections) are specific to each ESXi host-PE-vVol relationship. A vVol bound to a PE that is visible to multiple hosts can only be accessed by the hosts that have requested binds. Table 1 lists the most common scenarios in which ESXi hosts bind and unbind vVols.

What causes the bind? | Bound Host | When is it unbound? | vVol type
Power-on | Host running the VM | Power-off or vMotion | Config, data, swap
Folder navigated to in vVol Datastore via GUI | Host selected by vCenter with access to vVol datastore | When navigated away from or session ended | Config
Folder navigated to in vVol Datastore via SSH or console | Host logged into | When navigated away from or session ended | Config
vMotion | Target host | Power-off or vMotion | Config, data, swap
VM creation | Target host | Creation completion | Config, data
VM deletion | Target host | Upon deletion completion | Config
VM reconfiguration | Target host | Reconfiguration completion | Config
Clone | Target host | Clone completion | Config, data
Snapshot | Target host | Snapshot completion | Config

Table 1: Reasons for Binding vVols to ESXi Host

 

Notes:
Binding and unbinding are automatic. There is never a need for a VMware or FlashArray administrator to manually bind a vVol to an ESXi host.

FlashArrays only bind vVols to ESXi hosts that make requests; they do not bind them to host groups.

If multiple PEs are presented to an ESXi host, the host selects one at random to satisfy each bind request. Array administrators cannot control which PE is used for a bind.

This blog post contains a detailed description of ESXi host to PE to vVol binding.

The end user should never need to manually connect a vVol to a FlashArray Host or Hostgroup.  Read more about why you shouldn't manually connect the vVol here.

A vVol with no sub-lun connection is not “orphaned”. No sub-lun connection simply indicates that no ESXi host has access to the vVol at that time. 
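To inspect the PEs and storage containers a host currently reports, the esxcli vVol namespace can be called through PowerCLI. A minimal sketch follows; the host name is a placeholder based on the earlier examples.

$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'ac-esxi-a-16.example.com') -V2
# Protocol endpoints the host currently sees
$esxcli.storage.vvol.protocolendpoint.list.Invoke()
# vVol storage containers (vVol datastores) known to the host
$esxcli.storage.vvol.storagecontainer.list.Invoke()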


 

Snapshots of vVols

An important benefit of vVols is their handling of snapshots. With VMFS-based storage, ESXi takes VM snapshots by creating a delta VMDK file for each of the VM’s virtual disks. It redirects new virtual disk writes to the delta VMDKs, directs reads of unmodified blocks to the originals, and directs reads of modified blocks to the delta VMDKs. The technique works, but it introduces I/O latency that can profoundly affect application performance. Additional snapshots intensify the latency increase.

The performance impact is so pronounced that both VMware and storage vendors recommend the briefest possible snapshot retention periods; see the VMware KB article Best practices for using snapshots in the vSphere environment (1025279). Practically speaking, this limits snapshot uses to:

Patches and upgrades
Taking a snapshot prior to patching or upgrading an application or guest operating system, and deleting it immediately after the update succeeds.

Backup
Quiescing a VM and taking a snapshot prior to a VADP-based VM backup. Again, the recommended practice is deleting the snapshot immediately after the backup completes.

These snapshots are typically of limited utility for other purposes, such as development testing. Adapting them for such purposes usually entails custom scripting and/or lengthy copy operations with heavy impact on production performance. In summary, conventional VMware snapshots solve some problems, but with significant limitations.

Array-based snapshots are generally preferable, particularly for their lower performance impact. FlashArray snapshots are created instantaneously, have negligible performance impact, and initially occupy no space. They can be scheduled or taken on demand, and replicated to remote arrays. Scripts and orchestration tools can use them to quickly bring up or refresh development testing environments.

Because FlashArray snapshots have negligible performance impact, they can be retained for longer periods. In addition, they can be copied to create new volumes for development testing and analytics, either by other VMs or by physical servers.

FlashArray administrators can take snapshots of VMFS volumes directly, however there are limitations:

No integration with ESXi or vCenter

Plugins can enable VMFS snapshot creation and management from the Web Client, but vCenter and ESXi have no awareness of or capability for managing them.

Coarse granularity

Array-based snapshots of VMFS volumes capture the entire VMFS. They may include hundreds or thousands of VMs and their VMDKs. Restoring individual VMDKs requires extensive scripting.

vVols eliminate both limitations. VMware does not create vVol snapshots itself; it directs the array to create a snapshot for each of a VM’s data vVols. The Plugin translates Web Client commands into FlashArray operations. VMware administrators use the same tools to create, restore, and delete VMFS and vVol snapshots, but with vVols, they can operate on individual VMDKs. 

Starting in Purity//FA 5.1.3, when a managed snapshot is taken, the array copies the VM's current data volume(s) to new data volume(s) with a '-snap' suffix.
Below is an example of a vVol VM on Purity 5.1.4 and a vVol VM on Purity 5.1.2. Each VM has had a managed snapshot taken that included the memory.

Purity 5.1.4
Managed Snapshot - Array Volume Copy.png
Here you can see that there is a copy of the data volume with the -snap suffix.
Purity 5.1.2
Managed Snapshot - Array Volume Snapshot.png
Here you can see that the two Data Volumes have had snapshots taken of them.

Taking Snapshots of vVol-based VMs

While the FlashArray GUI, REST, and CLI interfaces can be used for both per-VM and per-virtual disk vVol operations, a major advantage of the Plugin is management of vVols from within vCenter. VMware administrators can use the Web Client or any other VMware management tool to create array-based snapshots of vVol-based VMs.

To take a snapshot of a vVol-based VM with the Web Client, right-click the VM in the inventory pane, select Snapshots from the dropdown menu, and Take Snapshot from the secondary dropdown to launch the Take VM Snapshot for vVol-VM wizard. (vSphereView 58)

vv82.png
vSphereView 57: Web Client Snapshot VM Command
vv83.png
vSphereView 58: Take Snapshot of vVol-VM Wizard

Enter a name for the snapshot and (optionally) check one of the boxes:

Snapshot the virtual machine’s memory:

Causes the snapshot to capture the VM’s memory state and power setting. Memory snapshots take longer to complete, and may cause a brief (a second or less) slowdown in VM response over the network.

Quiesce guest file system:

VMware Tools quiesces the VM’s file system before taking the snapshot. This allows outstanding I/O requests to complete, but queues new ones for execution after restart. When a VM restored from this type of snapshot restarts, any queued I/O requests complete. To use this option, VMware Tools must be installed in the VM. Either of these options can be used with vVol-based VMs.

VMware administrators can also take snapshots of vVol-based VMs with PowerCLI, for example:

New-Snapshot -Name NewSnapshot -Quiesce:$true -VM vVolVM -Memory:$false 
vv84.png
vSphereView 59: New Files Resulting from a Snapshot of a vVol-based VM

When a snapshot of a vVol-based VM is taken, new files appear in the VM’s vVol datastore folder. (vSphereView 59)

The files are:

VMDK (vVol-VM-000001.vmdk)

A pointer file to a FlashArray volume or snapshot. If the VM is running from that VMDK, the file points to a data vVol. If the VM is not running from that snapshot VMDK, the file points to a vVol snapshot. As administrators change VMs’ running states, VMware automatically re-points VMDK files.

Database file (vVol-VM.vmsd)

The VMware Snapshot Manager’s primary source of information. Contains entries that define relationships between snapshots and the disks from which they are created.

Memory snapshot file (vVol-VM-Snapshot1.vmsn)

Contains the state of the VM’s memory. Makes it possible to revert directly to a powered-on VM state. (With non-memory snapshots, VMs revert to turned off states.) Created even if the Snapshot the virtual machine’s memory option is not selected.

Memory file (not shown in vSphereView 59)

A pointer file to a memory vVol. Created only for snapshots that include VM memory states.

Creating Snapshots Without Saving Memory

If neither Snapshot the virtual machine’s memory nor Quiesce guest file system is selected, VMware directs the array to create snapshots with no pre-work. All FlashArray snapshots are crash consistent, so snapshots of the vVol-based VMs that they host are likewise at least crash consistent.

VMware takes snapshots of vVol-based VMs by directing the array (or arrays) to take snapshots of its data vVols. Viewing a VM’s data vVols on the array shows each one’s live snapshots.

vv85.png
vSphereView 60: Completed VMware Snapshot of a VM
Purity 5.1.3+ Non-Memory Managed Snapshot on Array GUI
ArrayView 24  Non-memory Snapshot if Array GUI 5.1.4.png
Purity 5.0.7 Non-Memory Managed Snapshot on Array GUI
vv86.png
ArrayView 24: Non-memory Snapshot in Array GUI
vv87.png
vSphereView 61: Non-memory Snapshot in Web Client

Note:
FlashArray snapshot names are auto-generated, but VMware tools list the snapshot name supplied by the VMware administrator (as in vSphereView 58).

Creating a VM Snapshot with Saved Memory

If the VMware administrator selects Store the Virtual Machine’s Memory State, the underlying snapshot process is more complex.

Memory snapshots generally take somewhat longer than non-memory ones because the ESXi host directs the array to create a memory vVol to which it writes the VM’s entire memory image. Creation time is proportional to the VM’s memory size.

vv88.png
vSphereView 62: Take VM Snapshot Wizard
vv89.png
vSphereView 63: Memory Snapshot Progress Indicator

Memory snapshots typically cause a VM to pause briefly, usually for less than a second. vSphereView 64 shows a timeout in a sequence of ICMP pings to a VM due to a memory snapshot.

vv90.png
vSphereView 64: Missed Ping Due to Memory Copy During Snapshot Creation

The memory vVol created in a VM’s volume group as a consequence of a memory snapshot stores the VM’s active state (memory image). ArrayView 25 shows the volume group of a VM with a memory snapshot (vvol-vVol-VM-vg/Memory-b31d0eb0). The memory vVol’s size equals the size of the VM’s memory image.

vv91.png
ArrayView 25: Memory vVol Created by Taking a Memory Snapshot of a VM

VMware flags a memory snapshot with a green vv92.png (play) icon to indicate that it includes the VM’s memory state.

vv93.png
vSphereView 65: Web Client View of a Memory Snapshot

Reverting a VM to a Snapshot

VMware management tools can revert VMs to snapshots taken by VMware. As with snapshot creation, reverting is identical for conventional and vVol-based VM snapshots.

To restore a VM from a snapshot, from the Web Client Hosts & Clusters or VMs and Templates view, select the VM to be restored and click the Snapshots tab in the adjacent pane to display a list of the VM’s snapshots.

Select the snapshot from which to revert, click the All Actions button, and select Revert to from the dropdown menu.

vv94.png
vSphereView 66: Revert VM to Snapshot Command

Subsequent steps differ slightly for non-memory and memory snapshots.

Reverting a VM from a Non-memory Snapshot

The Revert to command displays a confirmation dialog. Click Yes to revert the VM to the selected snapshot.

vv95.png
vSphereView 67: Confirm Reverting a VM to a Non-memory Snapshot

The array overwrites the VM’s data vVols from their snapshots. Any data vVols added to the VM after the snapshot was taken are unchanged.

Before reverting a VM from a non-memory snapshot, VMware shuts the VM down. Thus, reverted VMs are initially powered off.
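Reverting can also be scripted. A hedged PowerCLI sketch follows, reusing the VM and snapshot names from the earlier PowerCLI example.

$vm   = Get-VM -Name 'vVolVM'
$snap = Get-Snapshot -VM $vm -Name 'NewSnapshot'
# Revert the VM to the selected managed snapshot
Set-VM -VM $vm -Snapshot $snap -Confirm:$false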

Reverting a VM from Memory Snapshot

To revert a VM to a memory snapshot, the ESXi host first directs the array to restore the VM’s data vVols from their snapshots, and then binds the VM’s memory vVol and reloads its memory. Reverting a VM to a memory snapshot takes slightly longer and results in a burst of read activity on the array.

A VM reverted to a memory snapshot can be restored either suspended or to a running state. Check the Suspend this virtual machine when reverting to selected snapshot box in the Confirm Revert to Snapshot wizard to leave the reverted VM suspended. If the box is not checked, the VM is reverted to its state at the time of the snapshot.

vv96.png
ArrayView 26: FlashArray Read Activity while Reverting a VM from a Memory Snapshot

Deleting a Snapshot

Snapshots created with VMware management tools can be deleted with those same tools. VMware administrators can only delete snapshots taken with VMware tools.

To delete a VM snapshot from the Web Client Host and Clusters or VMs and Templates view, select the target VM and click the Snapshots tab in the adjacent pane to display a list of its snapshots.

Select the snapshot to be deleted, click the All Actions button, and select Delete Snapshot from the dropdown menu to launch the Confirm Delete wizard. Click Yes to confirm the deletion. (vSphereViews 69 and 70)

vv97.png
vSphereView 69: Delete VM Snapshot Command
vv98.png
vSphereView 70: Confirm VM Snapshot Deletion

VMware removes the VM’s snapshot files from the vVol datastore and directs the array to destroy the snapshot. The array moves the snapshot and any corresponding memory vVols to its Destroyed Volumes folder for 24 hours, after which it eradicates them permanently. (ArrayView 27)

vv99.png
ArrayView 27: Memory vVol for a Destroyed Snapshot

When VMware deletes a conventional VM snapshot, it reconsolidates (overwrites the VM’s original VMDKs with the data from the delta VMDKs). Depending on the amount of data changed after the snapshot, this can take a long time and have significant performance impact. With FlashArray-based snapshots of vVols, however, there is no reconsolidation. Destroying a FlashArray snapshot is essentially instantaneous. Any storage reclamation occurs after the fact during the normal course of the array’s periodic background garbage collection (GC).
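Managed snapshots can likewise be deleted with PowerCLI. A minimal sketch, using the same placeholder names as above:

Get-Snapshot -VM (Get-VM -Name 'vVolVM') -Name 'NewSnapshot' |
    Remove-Snapshot -Confirm:$false   # the array destroys the snapshot; no reconsolidation occurs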

Unmanaged Snapshots

Snapshots created with VMware tools are called managed snapshots. Snapshots created by external means, such as the FlashArray GUI, CLI, and REST interfaces or protection group policies, are referred to as unmanaged. The only difference between the two is that VMware tools can be used with managed snapshots, whereas unmanaged ones must be managed with external tools.

Unmanaged snapshots (and volumes) can be used in the VMware environment. For example, FlashArray tools can copy an unmanaged source snapshot or volume to a target data vVol, overwriting the latter’s contents, but with some restrictions:

Volume size

A source snapshot or volume must be of the same size as the target data vVol. FlashArrays can copy snapshots and volumes of different sizes (the target resizes to match the source), but VMware cannot accommodate external vVol size changes. To overwrite a data vVol with a snapshot or volume of a different size, use VMware tools to resize the target vVol prior to copying.

Offline copying

Overwriting a data vVol while it is in use typically causes the application to fail or produce incorrect results. A vVol should be offline to its VM, or the VM should be powered off before overwriting.

Config vVols

Config vVols should only be overwritten with their own snapshots.

Memory vVols

Memory vVols should never be overwritten. There is no reason to overwrite them, and doing so renders them unusable.

Snapshot Management with the Plugin

Plugin Version 3.0 introduces snapshot features that are not otherwise available with the Web Client. The vVol-based VM listing has a FlashArray Virtual Volume Objects tab that lists virtual disk-vVol relationships and includes four new feature buttons. (vSphereView 71)

vv100.png
vSphereView 71: Snapshot Features Available with Plugin Version 3.0

Three of the Plugin buttons invoke snapshot-related functions:

Import Disk

Instantly presents a copy of any data vVol or vVol snapshot in any vVol-based VM in the vCenter to the selected VM.

Create Snapshot

Creates a FlashArray snapshot of the selected data vVol.

Overwrite Disk

Overwrites the selected vVol with the contents of any data vVol or snapshot in any FlashArray vVol-based VM in the vCenter.

These functions can also be performed with PowerShell or the vRealize Orchestrator. They are included in the Plugin as “one button” conveniences. The subsections that follow describe the functions.

Import Disk

Click the Import Disk button to launch the Import Virtual Volume Disk wizard (vSphereView 72). The wizard lists all VMs with FlashArray data vVols and their managed and unmanaged snapshots.

vv101.png
vSphereView 72: Import vVol Disk Wizard

Select the data vVol or snapshot to be imported and click Create to create a new data vVol having the same size and content as the source.

Because copying FlashArray volumes only reproduces metadata, copies are nearly instantaneous regardless of volume size.

Create Snapshot

VMware tools can create snapshots of VMs that include all of the VM’s data vVols. The Plugin Create Snapshot function can create a snapshot of a selected virtual disk (data vVol).

To create a snapshot of a data vVol, select the target virtual disk and click the Create Snapshot button (vSphereView 73) to launch the Create Snapshot wizard (vSphereView 75).

vv102.png
vSphereView 73: Create Snapshot Plugin Button

Note:
Alternatively, right-click the selected virtual disk and select Create Snapshot from the dropdown menu to launch the wizard. (vSphereView 74)

vv103.png
vSphereView 74: Alternative Create Snapshot Command

Enter a name for the snapshot (optional—if no name is entered, the array assigns a name) and click Create. VMware directs the array to create a snapshot of the data vVol.

Because FlashArray snapshots only reproduce metadata, creation is nearly instantaneous regardless of volume size.

vv104.png
vSphereView 75: Create Snapshot Wizard

Overwrite Disk

To overwrite a data vVol with any data vVol or snapshot of equal size on the same array, select the virtual disk to be overwritten and either click the Overwrite Disk button or right-click the selection and select Overwrite Disk from the dropdown menu (vSphereView 76) to launch the Overwrite Virtual Volume Disk wizard. (vSphereView 78)

vv105.png
vSphereView 76: Overwrite Disk Command

If the source and target objects are not of the same size, the Plugin blocks the overwrite. (vSphereView 77)

vv106.png
vSphereView 77: Plugin Blocking Overwriting of Different-size Source and Target

If the source and target are of equal size, but the VM is powered on, the Plugin warns the administrator to ensure that the target virtual disk is not mounted by the VM, but allows the overwrite to proceed. (vSphereView 78)

vv107.png
vSphereView 78: Overwrite Virtual Volume Disk Wizard

If the VM is powered off and the source and target objects are of the same size, no warnings are issued.

In either case, click Replace to overwrite the target volume with the contents of the source volume or snapshot.

Because copying a FlashArray volume from another volume or from a snapshot only reproduces its metadata, overwrites are nearly instantaneous regardless of target volume size.


 

Storage Policy Based Management

A major benefit of the vVol architecture is granularity—its ability to configure each virtual volume as required and ensure that the configuration does not change.

Historically, configuring storage with VMware management tools has required GUI plugins. Every storage vendor’s tools were unique—there was no consistency across vendors. Plugins were integrated with the Web Client, but not with vCenter itself, so there was no integration with the SDK or PowerCLI. Moreover, ensuring on-going configuration compliance was not easy, especially in large environments. Assuring compliance with storage policies generally required 3rd party tools.

With vVol data granularity, an array administrator can configure each virtual disk or VM exactly as required. Moreover, with vVols, data granularity is integrated with vCenter in the form of custom storage policies that VMware administrators create and apply to both VMs and individual virtual disks.

Storage policies are VMware administrator-defined collections of storage capabilities. Storage capabilities are array-specific features that can be applied to volumes on the array. When a storage policy is applied, VMware filters out non-compliant storage so that only compliant targets are presented as options for configuring storage for a VM or vVol.

If an array administrator makes a VM or volume non-compliant with a VMware policy, for example by changing its configuration on the array, VMware marks the VM or VMDK non-compliant. A VMware administrator can remediate non-compliant configurations using only VMware management tools; no array access is required.

FlashArray Storage Capabilities

An array’s capabilities represent the features it offers. When any FlashArray’s VASA providers are registered with vCenter, the array informs vCenter that the array has the following capabilities:

  • Encryption of stored data (“data at rest”)
  • Deduplication
  • Compression
  • RAID protection
  • Flash storage

All FlashArrays offer these capabilities; they cannot be disabled. VMware administrators can configure the additional capabilities advertised by the VASA provider and listed in Table 2.

Capability Name | Value (not case-sensitive)
Consistency Group Name | A FlashArray protection group name
FlashArray Group | Name of one or more FlashArrays
Local Snapshot Interval | A time interval in seconds, minutes, hours, days, weeks, months, or years
Local Snapshot Policy Capable | Yes or No
Local Snapshot Retention | A time interval in seconds, minutes, hours, days, weeks, months, or years
Minimum Replication Concurrency | Number of target FlashArrays to replicate to at once
Pure Storage FlashArray | Yes or No
QoS Support | Yes or No
Replication Capable | Yes or No
Replication Interval | A time interval in seconds, minutes, hours, days, weeks, months, or years
Replication Retention | A time interval in seconds, minutes, hours, days, weeks, months, or years
Target Sites | Names of specific FlashArrays desired as replication targets

Table 2: Configurable Capabilities Advertised by FlashArray VASA Providers

Storage Capability Compliance

Administrators can specify values for some or all of these capabilities when creating storage policies. VMware performs two types of policy compliance checks:

  • If a vVol were created on the array, could it be configured with the feature?
  • Is a vVol in compliance with its policy? For example, a vVol with a policy of hourly snapshots must be (a) on a FlashArray that hosts a protection group with hourly snapshots and (b) a member of that protection group.

Only VMs and virtual disks configured with vVols can be compliant. VMFS-based VMs are never compliant, even if their volume is on a compliant FlashArray.

Table 3 lists the circumstances under which a policy offers each capability, and those under which a vVol is in or out of compliance with it. 

Capability Name | An array offers this capability when… | A vVol is in compliance when… | A vVol is out of compliance when…
Pure Storage FlashArray | …it is a FlashArray (i.e. always). | …it is on a FlashArray and the capability is set to ‘Yes’. | …it is on a different array vendor/model and the capability is set to ‘Yes’; or it is on a FlashArray and the capability is set to ‘No’.
FlashArray Group | …it is a FlashArray and its name is listed in this group. | …it is on a FlashArray with one of the configured names. | …it is not on a FlashArray with one of the configured names.
QoS Support | …it is a FlashArray and has QoS enabled. | …it is on a FlashArray with QoS enabled and the capability is set to ‘Yes’; or it is on a FlashArray with QoS disabled and the capability is set to ‘No’. | …it is on a FlashArray with QoS disabled and the capability is set to ‘Yes’; or it is on a FlashArray with QoS enabled and the capability is set to ‘No’.
Consistency Group Name | …it is a FlashArray and has a protection group with that name. | …it is in a protection group with that name. | …it is not in a protection group with that name.
Local Snapshot Policy Capable | …it is a FlashArray and has at least one protection group (an enabled policy is not required). | …it is on a FlashArray with at least one protection group (an enabled policy is not required). | …it is on a FlashArray that does not have at least one protection group, or it is on a non-FlashArray.
Local Snapshot Interval | …it is a FlashArray and has at least one protection group with an enabled local snapshot policy of the specified interval. | …it is in a protection group with an enabled local snapshot policy of the specified interval. | …it is not in a protection group with an enabled local snapshot policy of the specified interval.
Local Snapshot Retention | …it is a FlashArray and has at least one protection group with an enabled local snapshot policy of the specified retention. | …it is in a protection group with an enabled local snapshot policy of the specified retention. | …it is not in a protection group with an enabled local snapshot policy of the specified retention.
Replication Capable | …it is a FlashArray (i.e. always). | …it is in a protection group with an enabled replication target. | …it is not in a protection group with an enabled replication target.
Replication Interval | …it is a FlashArray and has at least one protection group with an enabled replication policy of the specified interval. | …it is in a protection group with an enabled replication policy of the specified interval. | …it is not in a protection group with an enabled replication policy of the specified interval.
Replication Retention | …it is a FlashArray and has at least one protection group with an enabled replication policy of the specified retention. | …it is in a protection group with an enabled replication policy of the specified retention. | …it is not in a protection group with an enabled replication policy of the specified retention.
Minimum Replication Concurrency | …it is a FlashArray and has at least one protection group with the specified number or more of allowed replication targets. | …it is in a protection group that has the specified number of allowed replication targets. | …it is not in a protection group that has the specified number of allowed replication targets.
Target Sites | …it is a FlashArray and has at least one protection group with one or more of the specified allowed replication targets; if Minimum Replication Concurrency is set, the group must include at least that many of the listed FlashArrays. | …it is in a protection group with one or more of the specified allowed replication targets; if Minimum Replication Concurrency is set, it must be replicated to at least that many of the listed target FlashArrays. | …it is not in a protection group replicating to the minimum number of the specified target FlashArrays.

Table 3: FlashArray Storage Capability Compliance Conditions

Combining Capabilities and Storage Compliance

This section describes an example of combining capabilities into a policy. Storage policies are a powerful method of assuring specific configuration control, but they affect how vVol compliance is viewed. For an array or vVol to be compliant with a policy:

  1. The array or vVol must comply with all of the policy’s capabilities
  2. For snapshot and replication capabilities, the array must have at least one protection group that offers all of the policy’s capabilities. For example, if a policy requires hourly local snapshots and replication every 5 minutes, a protection group with hourly snapshots and a different protection group with 5-minute replication do not make the array compliant. VMware requires that volumes be in a single group during policy configuration, so to be compliant for this example, an array would require at least one protection group with both hourly snapshots and 5-minute replication.
  3. Some combinations of capabilities cannot be compliant. For example, setting an array’s Local Snapshot Policy Capable capability to No and specifying a policy that includes snapshots means that no storage compliant with the policy can be hosted on that array.

Creating a Storage Policy

vCenter makes the capabilities advertised by an array’s VASA Provider available to VMware administrators for assembling into storage policies. Administrators can create policies by using APIs, GUI, CLI, or other tools. This section describes two ways of creating policies for FlashArray-based vVols:

Custom Policy Creation

Using the Web Client to create custom policies using capabilities published by the FlashArray VASA provider

Importing FlashArray Protection Groups

Using the Plugin to create storage policies by importing a FlashArray protection group configuration

Creating Custom Storage Policies

Click the home icon at the top of the Web Client home screen, and select Policies and Profiles from the dropdown menu (vSphereView 79) to display the VM Storage Policies pane.

vv108.png
vSphereView 79: Policies and Profiles Command

Select the VM Storage Policies tab and click the Create VM Storage Policy button (vSphereView 80) to launch the Create New VM Storage Policy wizard. (vSphereView 81)

vv109.png
vSphereView 80: Create VM Storage Policy Button

Select a vCenter from the dropdown and enter a descriptive name for the policy.

vv110.png
vSphereView 81: Create New VM Storage Policy Wizard

It is a best practice to use a naming convention that is operationally meaningful. For example, the name in vSphereView 81 suggests a policy configured on FlashArray storage with 1 hour local snapshots and a 15 minute replication interval.

Configure pages 2 and 2a as necessary (refer to VMware documentation for instructions), click forward to the 2b Rule-set 1 page, and select com.purestorage.storage.policy in the <Select provider> dropdown to use the FlashArray VASA provider rules to create the storage policy. (vSphereView 82)

vv111.png
vSphereView 82: Rule-set 1 Page 2b of the Create New VM Storage Policy Wizard

A storage policy requires at least one rule. To locate all VMs and virtual disks to which this policy will be assigned on FlashArrays, click the <Add rule> dropdown and select the Pure Storage FlashArray capability (vSphereView 83).

vv112.png
vSphereView 83: Adding a Storage Policy Rule

The selected rule name appears above the <Add rule> dropdown, and a dropdown list of valid values appears to the right of it. Select Yes and click Next (not shown) to create the policy. As defined thus far, the policy requires that VMs and vVols to which it is assigned be located on FlashArrays, but they are not otherwise constrained. When a policy is created, the Plugin checks registered arrays for compliance and displays a list of vVol datastores on arrays that support it (vSphereView 84).

vv113.png
vSphereView 84: List of Arrays Compatible with a New Storage Policy

The name assigned to the policy (FlashArray-1hrSnap15minReplication—see vSphereView 81) suggests that it should specify hourly snapshots and 15-minute replications of any VMs and virtual volumes to which it is assigned. Click Back (not shown in vSphereView 84) to edit the rule-set.

FlashArray replication and snapshot capabilities require component rules. Click Add component and select Replication from the dropdown (vSphereView 85) to display the Replication component rule pane (vSphereView 86).

vv114.png
vSphereView 85: Selecting a Component for the Policy

Select the provider (vSphereView 86), and add rules, starting with the local snapshot policy.

vv115.png
vSphereView 86: Selecting Replication Provider

Click the Add Rule dropdown, select Local Snapshot Interval, enter 1 in the text box, and select Hours as the unit. (vSphereView 87)

vv116.png
vSphereView 87: Specifying Snapshot Interval Rule

Click the Add Rule dropdown again, select Remote Replication Interval, enter 15 in the text box, select Minutes as the unit (vSphereView 88), and click Next to display the list of registered arrays that are compatible with the augmented policy. vSphereView 89 indicates that there are two such arrays.

vv117.png
vSphereView 88: Specifying Replication Interval Rule
vv118.png
vSphereView 89: Arrays Compatible with the "FlashArray-1hr-Snap15minReplication" Storage Policy

Note:
A policy can be created even if no registered vVol datastores are compatible with it, but it cannot be assigned to any VMs or vVols. Storage can be adjusted to comply, for example, by creating a compliant protection group, or alternatively, the policy can be adjusted to be compatible with existing storage.
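Policies can also be assembled with the PowerCLI SPBM cmdlets. The sketch below is illustrative only: the capability is picked by its com.purestorage namespace rather than by a confirmed capability name, the rule value shown is a placeholder, and the value type must match whichever capability is actually chosen.

# List the capabilities advertised by registered FlashArray VASA providers
Get-SpbmCapability | Where-Object { $_.Name -like 'com.purestorage*' } | Select-Object Name

# Build a one-rule policy from a placeholder capability (value shown is illustrative)
$cap     = Get-SpbmCapability | Where-Object { $_.Name -like 'com.purestorage*' } | Select-Object -First 1
$rule    = New-SpbmRule -Capability $cap -Value $true
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
$policy  = New-SpbmStoragePolicy -Name 'FlashArray-1hrSnap15minReplication' -AnyOfRuleSets $ruleSet

# Show vVol datastores compatible with the new policy
Get-SpbmCompatibleStorage -StoragePolicy $policy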

Auto-policy Creation with the Plugin

As an alternative to custom policies, the Plugin can import FlashArray protection groups and create vCenter policies with the same attributes.

vv119.png
vSphereView 90: Plugin Import Protection Groups Button

From the Plugin’s home pane, select an array and either click the Import Protection Groups button (vSphereView 90) or right-click the selected array and select Import Protection Groups on the dropdown menu (vSphereView 91) to launch the Import Protection Groups wizard. (vSphereView 92)

vv120.png
vSphereView 91: Import Protection Group Command
vv121.png
vSphereView 92: Import Protection Groups Wizard (1)

The wizard lists the available protection groups on the selected array along with a brief summary of their local snapshot and remote replication policies. For more detailed information, refer to the protection group display in the FlashArray GUI.

Note:
A grayed-out listing indicates a protection group whose properties match an existing vCenter storage policy.

Select the protection groups to be imported by checking the boxes and click the Import button (vSphereView 93). 

 

vv122.png
vSphereView 93: Import Protection Groups Wizard (2)

The protection group parameters used to create a storage policy are:

  • Snapshot interval
  • Short-term per-snapshot retention
  • Replication interval
  • Short-term per-replication snapshot retention

The Plugin creates storage policies on all vCenters in the environment to which the logged-in administrator has access. If vCenters are in enhanced linked-mode (by sharing SSO environments) the policies are created on all of them.

On the Web Client Policies and Profiles page, select the VM Storage Policies tab to display the vCenter’s default, previously created, and imported storage policies (vSphereView 94). The lower grouping in vSphereView 94 represents the imported policies (vSphereView 93). Each policy is created in both of the available vCenters.

vv123.png
vSphereView 94: Default and Imported Storage Policies

The policy names supplied by the Plugin describe the policies in terms of snapshot and replication intervals.
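
For illustration only, the short Python sketch below derives a policy name of this form from a protection group's snapshot and replication frequencies, expressed in seconds. The helper functions and example values are assumptions for demonstration; they are not part of the Plugin or of any Pure Storage API.

# Illustrative only: builds a "Snap <n> <UNIT> Replication <n> <UNIT>" style name
# from protection group frequencies given in seconds.

def interval_to_units(seconds):
    """Express an interval in the largest whole unit, e.g. 3600 -> (1, 'HOURS')."""
    for size, unit in ((86400, "DAYS"), (3600, "HOURS"), (60, "MINUTES")):
        if seconds % size == 0:
            return seconds // size, unit
    return seconds, "SECONDS"

def policy_name(snap_frequency_sec, replicate_frequency_sec):
    snap_n, snap_unit = interval_to_units(snap_frequency_sec)
    repl_n, repl_unit = interval_to_units(replicate_frequency_sec)
    return "Snap {} {} Replication {} {}".format(snap_n, snap_unit, repl_n, repl_unit)

# A protection group with hourly snapshots and 5-minute replication:
print(policy_name(3600, 300))   # Snap 1 HOURS Replication 5 MINUTES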

Select a policy to view the details of its capabilities (vSphereView 95). In the FlashArray GUI Storage view Protection Groups pane, select platinum to display the snapshot and replication details for the protection group imported to create the Snap 1 HOURS Replication 5 MINUTES policy. (ArrayView 28)

vv124.png
vSphereView 95: Web Client View of Policy Details for Snap 1 HOURS Replication 5 Minutes
vv125.png
ArrayView 28: FlashArray GUI View of Details for Platinum Protection Group

Changing a Storage Policy

A VMware administrator can edit a storage policy that no longer meets the needs of the VMs to which it is assigned.

To change a policy’s parameters from the Policies and Profiles page in the Web Client, select VM Storage Policies, select the policy to be changed, and click the Edit Settings… button to display a list of the policy’s rules. Make the needed rule changes and click OK.

vv126.png
vSphereView 96: Edit Settings... Button
vv127.png
vSphereView 97: Changing a Policy Rule

Clicking OK launches the VM Storage Policy in Use wizard (vSphereView 98), offering two options for resolution:

Manually later 

Flags all VMs and virtual disks to which the changed policy is assigned as Out of Date (vSphereView 99).

Now

Assigns the changed policy to all VMs and virtual disks assigned to the original policy.

Click Yes to display the policy pane and select the Monitor tab.

vv128.png
vSphereView 98: VM Storage Policy in Use Wizard
vv129.png
vSphereView 99: Out of Date Storage Policies

If Manually later is selected, VMs and vVols show Out of Date compliance status. Update the policies for the affected VMs and virtual disks by selecting them and clicking the Reapply storage policy to all out of date entities button indicated in vSphereView 100.

vv130.png
vSphereView 100: Reapply Storage Policy Button

Selecting Now in the VM Storage Policy in Use wizard (vSphereView 98) does not reconfigure the vVols on the array, so it typically causes VMs and virtual disks to show Noncompliant status (vSphereView 101).

vv131.png
vSphereView 101: Non-compliant VM Objects

The subsection titled Changing a VM’s Storage Policy describes the procedure for bringing non-compliant VMs and virtual disks into compliance.

Checking VM Storage Policy Compliance

A vVol-based VM or virtual disk may become noncompliant with its vCenter storage policy when a storage policy is changed, when an array administrator reconfigures volumes, or when the state of an array changes.

For example, if an array administrator changes the replication interval for a protection group that corresponds to a vCenter storage policy, the VMs and virtual disks to which the policy is assigned are no longer compliant.

To determine whether a VM or virtual disk is compliant with its assigned policy, either select the policy and display the objects assigned to it (vSphereViews 99 and 101), or validate VMs and virtual disks for compliance with a given policy.

From the Web Client home page, click the VM Storage Policies icon to view the vCenter’s list of storage policies (vSphereView 102). Select a policy, click the Monitor tab, and click the VMs and Virtual Disks button (vSphereView 104) to display a list of the VMs and virtual disks to which the policy is assigned.

vv132.png
vSphereView 102: VM Storage Policies Icon
vv133.png
vSphereView 103: Selecting a Policy for Validation
vv134.png
vSphereView 104: Validating Policy Compliance

The compliance status of each VM or virtual disk is one of the following:

Compliant

The VM or virtual disk is configured in compliance with the policy.

Noncompliant

The VM or virtual disk is not configured according to the policy.

Out-of-date

The policy has been changed but has not been re-applied. The VM or virtual disk may still be compliant, but the policy must be re-applied to determine that.

The subsection titled Changing a VM’s Storage Policy describes making objects compliant with their assigned storage policies.

Assigning a Storage Policy to a VM or Virtual Disk

The Web Client can assign a storage policy to a new VM or virtual disk when it is created, deployed from a template, or cloned from another VM. A VMware administrator can change the policy assigned to a VM or virtual disk. Finally, a VM’s storage policy can be changed during Storage vMotion.

Assigning a Storage Policy to a New VM

A VMware administrator can assign a storage policy to a new VM created using the Deploy from Template wizard. (The procedure is identical to policy assignment with the Create New Virtual Machine and Clone Virtual Machine wizards.)

Right-click the target template in the Web Client inventory pane’s VMs and Templates list, and select New VM from This Template.

vv135.png
vSphereView 105: New VM from Template Command

Select options in steps 1a and 1b, and advance the wizard to step 1c, Select Storage.

vv136.png
vSphereView 106: Select Storage Step of Template

Setting a Policy for an Entire VM

In the Select Storage pane, select Thin Provision from the Select virtual disk format dropdown (FlashArrays only support thin provisioned volumes; selecting other options causes VM creation to fail), and either select a datastore (VMFS, NFS or vVol) from the list or a policy from the VM storage policy dropdown.

Selecting a policy filters the list to include only compliant storage. For example, selecting the built-in vVol No Requirements Policy filters the list to show only vVol datastores.

vv137.png
vSphereView 107: Selecting a Storage Policy

Selecting the FlashArray Snap 1 HOURS Replication 5 MINUTES policy filters out datastores on arrays that do not have protection groups with those properties.

vv138.png
vSphereView 108: Select VM Storage Policy

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned (e.g., flasharray-vvol-1:platinum in vSphereView 110), or, if Automatic is selected, VMware directs the array to create a protection group with the specified capabilities.

vv139.png
vSphereView 109: Select Automatic Replication Group

Whichever option is chosen, the VM’s config vVol and all of its data vVols are assigned the same policy. (Swap vVols are never assigned a storage policy.) Click Finish (not shown in vSphereView 110) to complete the wizard. The VM is created and its data and config vVols are placed in the assigned protection group.

vv140.png
vSphereView 110: Assign an Existing Replication Group

BEST PRACTICES: Pure Storage recommends assigning local snapshot policies to all config vVols to simplify VM restoration.

All FlashArray volumes are thin provisioned, so the Thin Provision virtual disk format should always be selected. With FlashArray volumes, there is no performance impact for thin provisioning.

ArrayView 29 shows the FlashArray GUI view of a common storage policy for an entire vVol-based VM.

vv141.png
ArrayView 29: GUI View of a VM-wide Storage Policy

Assigning a Policy to Each of VM's Virtual Disks 

In most cases, VMware administrators put all of a VM’s volumes in the same protection group, thereby assigning the same storage policy to them.

Alternatively, assign a separate policy to some or all of a VM’s volumes by clicking the Advanced button of the Select Storage step (1c) of the Deploy from Template wizard to display the advanced view.

vv142.png
vSphereView 111: Advanced >> Button for Per-vVol Storage Policies

In the advanced view, a separate storage policy can be specified for the VM’s config vVol as well as for each virtual disk (data vVol).

The Configuration File line refers to the VM’s config vVol. The remaining lines enumerate its data vVols (Hard Disk 1 in the example).

vv143.png
vSphereView 112: Select Storage Advanced View

To select a storage policy for a vVol, click the dropdown in the Storage column of its row and select Browse to launch the Select a datastore cluster or datastore wizard.

vv144.png
vSphereView 113: Browse for Custom Storage Policy

Either select a VMFS, NFS or vVol datastore from the list or select a policy from the dropdown.

vv145.png
vSphereView 114: Selecting a Storage Policy for a vVol

Selecting a policy from the VM storage policy dropdown filters the list to include only compliant datastores. For example, selecting the vVol No Requirements Policy lists only vVol datastores. 

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned (for example, flasharray-vvol-1:platinum in vSphereView 115).

vv146.png
vSphereView 115: Selecting Storage Policy for vVol

Alternatively, if Automatic is selected (as in vSphereView 115), the array creates a protection group with the capabilities specified by the policy. Whichever option is chosen, the policy is assigned to the vVol.

For example, a VM’s config vVol might be assigned a 1 hour snapshot and 1 hour replication storage policy, corresponding to the flasharray-vvol-1:gold replication group, whereas its data vVols might be assigned a 1 hour snapshot and 5 minute replication policy, corresponding to the flasharray-vvol-1:platinum replication group. vSphereView 116 shows the Select a datastore cluster or datastore panes for configuring the two policies.

vv147.png
vSphereView 116: Separate Storage Policies for Config and Data vVols

ArrayViews 30 and 31 list the contents of the two protection groups that correspond to the vCenter replication groups.

vv148.png
ArrayView 30: gold Protection Group
vv149.png
ArrayView 31: platinum Protection Group

Changing a VM's Storage Policy

To change a VM’s storage policy, a VMware administrator assigns a new policy to it. VMware directs the array to reconfigure the affected vVols. If the change makes the VM or any of its virtual disks non-compliant, the VMware administrator must adjust their policies.

To change a VM’s storage policy, select the VMs and Templates view in the Web Client inventory pane, (1) right-click the target VM, (2) select VM Policies from the dropdown menu, and (3) select Edit VM Storage Policies from the secondary dropdown (vSphereView 117) to launch the Edit VM Storage Policies wizard (vSphereView 118).

vv150.png
vSphereView 117: Edit VM Storage Policies Command

The storage policy for the VM in the example specifies a 1 hour snapshot interval and a 5 minute replication interval, so both the config and data vVols are in the array’s platinum protection group.

vv151.png
ArrayView 32: Config and Data vVols in the Same Protection Group
vv152.png
vSphereView 118: Edit VM Storage Policies Wizard
vv153.png
vSphereView 119: Apply a Common Storage Policy to All of a VM's vVols

To change the storage policy assigned to a VM’s config vVol or a single data vVol, select a policy from the dropdown in the VM Storage Policy column of its row in the table.

vv154.png
vSphereView 120: Change Config vVol Storage Policy

Selecting a policy that is not valid for the array that hosts a vVol displays a Datastore does not match current VM policy error message. To satisfy the selected policy, the VM would have to be moved to a different array (reconfiguration would not suffice).

A storage policy change may require that the replication groups for one or more vVols be changed. If this is the case, the Replication Groups indicator is marked with an alert (vv155.png ) icon.

vv156.png
vSphereView 121: Non-Compliant Datastore
vv157.png
vSphereView 122: One or More Replication Groups not Configured

 

This alert typically appears for one of two reasons:

  1. One or more vVols are in replication groups (FlashArray protection groups) that do not comply with the new storage policy.
  2. The new storage policy requires that vVols be in a replication group, and one or more vVols are not.

If the alert appears, or to verify or change the replication group, click Configure to launch the Configure VM Replication Groups wizard.

To assign a policy to all of a VM’s vVols, click the Common replication group radio button, select a replication group from the Replication group dropdown, and click OK.

vv158.png
vSphereView 123: Configure a VM Replication Group

Note: If no policy is shared by all of the VM’s vVols, the Replication group dropdown does not appear.

To assign different policies to individual vVols, click the Replication group per storage object radio button and select a replication group for each vVol to be replicated from the dropdown in its row. When selections are complete, click OK.

vv159.png
vSphereView 124: Configure vVol Replication Groups

Click OK again to complete reconfiguration. VMware directs the array to change the vVols’ protection group membership as indicated in the selections for the new policy.

vv160.png
vSphereView 125: Configure vVol Replication Groups
vv161.png
ArrayView 33: Common VM Protection Group

Assigning a Policy during Storage Migration

Compliance with an existing or newly assigned storage policy may require migrating a VM to a different array. For example, VM migration is required if:

  • A policy specifying a different array than the current VM or virtual disk location is assigned.
  • A policy requiring QoS (or not) is assigned to a VM or virtual disk located on an array with the opposite QoS setting.
  • A policy specifying snapshot or replication parameters not available with any protection group on a VM or virtual disk’s current array is assigned.

Some of these situations can be avoided by array reconfiguration, for example by creating a new protection group or inverting the array’s QoS setting. Others, such as a specific array requirement, cannot. If an array cannot be made to meet a policy requirement, the VMware administrator must use Storage vMotion to move the VM or virtual disk to one that can satisfy the requirement. The administrator can select a new storage policy during Storage vMotion.

For example, vSphereView 126 illustrates a VM whose assigned storage policy specifies hourly snapshots and replication with one-day retention for both.

vv162.png
vSphereView 126: VM Storage Policy Specifying Hourly Snapshots and Replication

The VM in this example is located on flasharray-vvol-1, in protection group gold.

vv163.png
ArrayView 34: Protection Group with Hourly Snapshots and Replication Specified

The VM is compliant with the vCenter-assigned Snap 1 HOURS Replication 1 HOURS policy.

vv164.png
vSphereView 127: VM Compliance with Storage Policy

If the VMware administrator changes the VM’s storage policy to one that requires not only the snapshot and replication parameters, but also that the VM and its vVols be located on array flasharray-vvol-2, the VM and its vVols become noncompliant because they are located on flasharray-vvol-1.

vv165.png
vSphereView 128: New VM Storage Policy Requiring Location on a Specific FlashArray

No amount of reconfiguration of FlashArray flasharray-vvol-1 can remedy the discrepancy, so to make the VM compliant with the new policy, Storage vMotion must move it to flasharray-vvol-2.

vv167.png
vSphereView 129: VM Out of Compliance with its Assigned Storage Policy

To move a VM between arrays using Storage vMotion, from the VMs and Templates inventory pane, right-click the VM to be moved, and select Migrate from the dropdown menu to launch the Select the migration type wizard.

vv168.png
vSphereView 130: Select Migration Type Wizard

Click Change storage only and Next to launch the Migrate wizard.

vv169.png
vSphereView 131: Migrate (Storage vMotion) Wizard

Reselect the storage policy from the dropdown (do not select Keep existing storage policy), reselect the target from the list of datastores with compatible policies, and click Finish to migrate the VM to the target array and configure the vVols as specified in the reselected policy. When migration completes, the VM is on the target array and it and its vVols are compliant with the assigned storage policy.

BEST PRACTICE: Pure Storage recommends reselecting the same storage policy rather than the Keep existing storage policy option in order to provide Storage vMotion with the information it needs to complete a migration.

The Migrate wizard contains an Advanced button. The subsection titled Assigning a Policy to Each of a VM’s Virtual Disks describes the use of the advanced option to specify per-vVol storage policies.

vSphereView 132 illustrates the example VM (vSphereView 127) after (a) the policy in vSphereView 128 has been assigned to it, and (b) it has been migrated to flasharray-vvol-2. ArrayView 35 illustrates the GUI view of the example VM’s vVols, now located in flasharray-vvol-2’s gold protection group.

vv170.png
vSphereView 132: Migrated VM Compliant with its Assigned Storage Policy
vv171.png
ArrayView 35: Protection Group on flasharray-vvol-2 Showing Migrated VM's vVols


 

Replicating vVols

With VASA version 3, FlashArrays can replicate vVols. VMware is aware of replicated VMs and can fail them over and otherwise manage replication. Additional information is available from VMware at:

https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-6346A936-5084-4F38-ACB5-B5EC70AB8269.html

VMware vVol replication has three components:

Replication Policies 

Specify sets of VM requirements and configurations for replication that can be applied to VMs or virtual disks. If configuration changes violate a policy, VMs to which it is assigned become non-compliant.

Replication Groups

Correspond to FlashArray protection groups, and are therefore consistency groups in the sense that replicas of them are point-in-time consistent. Replication policies require replication groups.

Failure domains

Sets of replication targets. VMware requires that a VM’s config vVol and data vVols be replicated within a single failure domain.

In the FlashArray context, a failure domain is a set of arrays. For two vVols to be in the same failure domain, one must be replicated to the same arrays as the other. In other words, a VM’s vVols must all be located in protection groups that have the same replication targets.

vv172.png
vSphereView 133: A Policy that Specifies Different Replication Fault Domains
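
As an illustration of that constraint, the short Python sketch below checks that all of a VM's vVols sit in protection groups whose replication targets match, i.e., within a single failure domain. The group-to-target mapping is sample data for demonstration, not information queried from an array.

# Illustrative only: verifies that every vVol of a VM belongs to a protection group
# with the same replication targets (a single failure domain).

protection_group_targets = {
    "gold":     {"flasharray-vvol-2"},
    "platinum": {"flasharray-vvol-2"},
    "silver":   {"flasharray-vvol-3"},
}

vm_vvol_groups = {
    "config-vvol": "gold",
    "data-vvol-1": "platinum",
    "data-vvol-2": "platinum",
}

def single_failure_domain(vvol_groups, pg_targets):
    target_sets = {frozenset(pg_targets[pg]) for pg in vvol_groups.values()}
    return len(target_sets) == 1

print(single_failure_domain(vm_vvol_groups, protection_group_targets))   # True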

Replication policies can only be assigned to config vVols and data vVols. Other VM objects inherit replication policies in the following way:

  • A memory vVol inherits the policy of its configuration vVol
  • The swap vVol, which only exists when a VM is powered on, is never replicated.

The initial release of FlashArray vVol support does not preserve local snapshot chains through replication. VMware-managed local snapshots are not replicated and are therefore unavailable after a VM fails over. For VMs that are to be replicated, either do not create VMware-managed snapshots or delete them before failover. Pure Storage plans to deliver preservation of VMware-managed snapshot chains through failover in a future release of FlashArray software.

VMware can perform three types of failovers on vVol-based VMs:

Planned Failover

Movement of a VM from one datacenter to another, for example for disaster avoidance or planned migration. Both source and target sites are up and running throughout the failover. Once a planned failover is complete, replication can be reversed so that the failed over VM can be failed back.

Unplanned Failover

Movement of a VM when a production datacenter fails in some way. Failures may be temporary or irreversible. If the original datacenter recovers after failover, automated reprotection may be possible. Otherwise, a new replication scheme must be configured.

Test Failover

Similar to planned failover, but does not bring down the production VM. Test failover recovers temporary copies of protected VMs to verify the failover plan before an actual disaster or migration.

At the time of publication, VMware vCenter Site Recovery Manager does not support vVols with array-based replication; SRM currently supports vVol failover only with vSphere Replication. Direct requests for SRM support of vVols and array-based replication to VMware.

These vVol failover modes can be implemented using the VMware SDK, tools such as PowerCLI or vRealize Orchestrator, or any other tool that can access the VMware SPBM SDK. Pure Storage plans to make PowerCLI example scripts and tools available on the Pure Storage Community and GitHub repositories as they are created and validated.

PowerCLI version 6.5.4 or newer is required for use with FlashArray-based vVols.


 

vVol Reporting

The vVols architecture that gives VMware insight into FlashArrays also gives FlashArrays insight into VMware. With vVol granularity, the FlashArray can recognize and report on both entire vVol-based VMs (implemented as volume groups) and individual virtual disks (implemented as volumes).

Storage Consumption Reporting

FlashArrays represent VMs as volume groups. The Volumes tab of the GUI Storage pane lists an array’s volume groups. Select a group that represents a VM to display a list of its volumes. 

Volume group names follow the pattern vvol-VMname-vg, where VMname is the VM's name when it is first created as a vVol-based VM or Storage vMotioned to a vVol datastore.

When a VM is renamed in vCenter, its volume group is not automatically renamed on the FlashArray; likewise, renaming a volume group on the FlashArray does not change the VM's name in vSphere. If a VM's name is changed in vCenter, the volume group name must be updated either manually or through a PowerCLI or Python workflow. See this KB section for more information on this workflow.
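
A minimal sketch of such a workflow appears below, assuming the purestorage Python REST client (pip install purestorage) is available. The array address, API token, VM names, and the set_vgroup rename call are assumptions for illustration; verify the exact rename method against the SDK documentation for your Purity version.

# Sketch: rename a vVol VM's volume group after the VM is renamed in vCenter.
# The set_vgroup(..., name=...) rename call is an assumption; confirm it in the
# purestorage SDK documentation before relying on it.
import purestorage

ARRAY_MGMT_ADDRESS = "flasharray.example.com"   # hypothetical management address
API_TOKEN = "xxxx-xxxx-xxxx"                    # hypothetical API token

def rename_vm_volume_group(old_vm_name, new_vm_name):
    array = purestorage.FlashArray(ARRAY_MGMT_ADDRESS, api_token=API_TOKEN)
    old_vgroup = "vvol-{}-vg".format(old_vm_name)   # naming pattern described above
    new_vgroup = "vvol-{}-vg".format(new_vm_name)
    array.set_vgroup(old_vgroup, name=new_vgroup)   # assumed rename call
    array.invalidate_cookie()                       # end the REST session

rename_vm_volume_group("sql-vm-01", "sql-vm-02")    # hypothetical VM names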

GUI View of a Volume Group and its Volumes
vv173.png

The top panel of the display shows averaged and aggregated storage consumption statistics for the VM. Click the Space button in the Volumes pane to display storage consumption statistics for individual vVols.

GUI View of a Volume Group's Per-volume Storage Consumption
vv174.png

To view a VM’s storage consumption history, switch to the Analysis pane Capacity view and select the Volumes tab.

GUI Analysis
vv175.png

To view history for VMs (volume groups) or vVols (volumes), select an object type from the dropdown menu.

Selecting Volume Statistics
vv176.png

Click the desired object in the list to display its storage consumption history. (Alternatively, enter a full or partial VM name in the search box to filter the list.)

The array displays a graph of the selected object’s storage consumption over time. The graph is adjustable—time intervals from 24 hours to 1 year can be selected. It distinguishes between storage consumed by live volumes and that consumed by their snapshots. The consumption reported is for volume and snapshot data that is unique to the objects (i.e., not deduplicated against other objects). Data shared by two or more volumes or snapshots is reported separately on a volume group-wide basis as Shared.
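
The same per-volume space metrics can also be retrieved programmatically. The sketch below assumes the purestorage Python REST client, that volumes belonging to a volume group are reported with names of the form <volume-group>/<volume>, and that get_volume(..., space=True) returns space details; the field names and the volume group name are assumptions to confirm against the REST API reference for your Purity version.

# Sketch: list space metrics for each volume in a (hypothetical) VM volume group.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="xxxx-xxxx-xxxx")  # hypothetical

vgroup = "vvol-sql-vm-01-vg"    # hypothetical VM volume group
for vol in array.list_volumes():
    if vol["name"].startswith(vgroup + "/"):          # assumed vgroup/volume naming
        space = array.get_volume(vol["name"], space=True)
        if isinstance(space, list):                   # guard in case a list is returned
            space = space[0]
        print(vol["name"],
              "volume:", space.get("volumes"),
              "snapshots:", space.get("snapshots"),
              "data reduction:", space.get("data_reduction"))

array.invalidate_cookie()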

GUI Storage Capacity History for a Volume Group
vv177.png

Data Reduction with vVol Managed Snapshots on Purity 5.1.3+

Beginning in Purity 5.1.3, managed snapshot behavior changed: the data volumes are copied to new volumes in the VM's volume group on the array rather than captured as array-based snapshots of the data volumes. As a result, data reduction numbers now differ. VMware is essentially asking the array, through VASA, to create several identical volumes, and the array deduplicates them appropriately. Consequently, the more managed snapshots are taken, the higher the volume group's data reduction number becomes, which in turn increases the array-wide data reduction numbers.


Performance Reporting

The FlashArray GUI can also report VM and vVol performance history. In the Analysis pane Performance view, the history of a VM's or vVol's IOPS, latency, and data throughput (Bandwidth) can be viewed.

Click the Volumes tab to display a list of the array’s VMs (volume groups) and/or vVols (volumes).

GUI Analysis Pane
vv178.png

To view an object’s performance history, select Volume Groups, Volumes, or All in the dropdown, and select a VM or vVol from the resulting list.

Selecting Volume Display
vv179.png

A VM’s or vVol’s performance history graph shows its IOPS, throughput (Bandwidth), and latency history in separate stacked charts.

The graphs show the selected object’s performance history over time intervals from 24 hours to 1 year. Read and write performance can be shown in separate curves. For VMs, latency is the average for all volumes; throughput and IOPS are an accumulation across volumes.
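
These counters can also be pulled programmatically. The sketch below assumes the purestorage Python REST client and that get_volume(..., action='monitor') maps to the REST monitor action, returning fields such as reads_per_sec and usec_per_read_op; the volume name and credentials are hypothetical, and the field names should be confirmed against the REST API reference for your Purity version.

# Sketch: read instantaneous performance counters for a (hypothetical) data vVol.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="xxxx-xxxx-xxxx")  # hypothetical

perf = array.get_volume("vvol-sql-vm-01-vg/Data-a1b2c3", action="monitor")   # assumed monitor action
if isinstance(perf, list):          # guard in case a list is returned
    perf = perf[0]
print("IOPS (read/write):", perf.get("reads_per_sec"), perf.get("writes_per_sec"))
print("Latency in microseconds (read/write):", perf.get("usec_per_read_op"), perf.get("usec_per_write_op"))

array.invalidate_cookie()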

GUI Performance History for a Volume Group
vv180.png

 



Migrating VMs to vVols

Storage vMotion can migrate VMs from VMFS, NFS, or Raw Device Mappings (RDMs) to vVols.

Migrating a VMFS or NFS-based VM to a vVol-based VM

From the Web Client VMs and Templates inventory pane, right-click the VM to be migrated and select Migrate from the dropdown menu to launch the Migrate wizard.

vv181.png
vSphereView 134: Web Client Migrate Command

Select Change Storage Only to migrate the VM’s storage, or Change both compute resource and storage to migrate both storage and compute resources.

vv182.png
vSphereView 135: Selecting Storage-only Migration

In the ensuing Select storage step, select a vVol datastore as a migration target. Optionally, select a storage policy for the migrated VM to provide additional features. (The section titled Storage Policy Based Management describes storage policies.)

Click Finish (not visible in vSphereView 135) to migrate the VM. If original and target datastores are on the same array, the array uses XCOPY to migrate the VM. FlashArray XCOPY only creates metadata, so migration is nearly instantaneous.

If source and target datastores are on different arrays, VMware uses reads and writes, so migration time is proportional to the amount of data copied.

When migration completes, the VM is vVol-based. Throughout the conversion, the VM remains online.
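
For administrators who prefer to script such migrations, the sketch below shows a minimal pyVmomi storage-only relocation of a VM to a vVol datastore. The vCenter address, credentials, VM name, and datastore name are hypothetical; error handling is omitted, and assigning a storage policy during the move is left out and would be done separately.

# Sketch: storage-only Storage vMotion of a VM to a vVol datastore with pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()    # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)                     # hypothetical vCenter
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "sql-vm-01")           # hypothetical VM
vvol_ds = find_by_name(vim.Datastore, "flasharray-vvol-1")   # hypothetical vVol datastore

spec = vim.vm.RelocateSpec(datastore=vvol_ds)   # storage only; compute resource unchanged
WaitForTask(vm.RelocateVM_Task(spec))           # the VM remains online during the move

Disconnect(si)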

vv183.png
vSphereView 136: Select Storage Policy

ArrayView 44 shows a migrated VM’s FlashArray volume group.

vv184.png
ArrayView 44: GUI View of a Migrated VM (Volume Group)

Migration of a VM with VMDK Snapshots

Migrating a VM that has VMware managed snapshots is identical to the process described in the preceding subsection. In a VMFS or NFS-based VM, snapshots are VMDK files in the datastore that contain changes to the live VM. In a vVol-based VM, snapshots are FlashArray snapshots.

Storage vMotion automatically copies a VM’s VMware VMFS snapshots. ESXi directs the array to create the necessary data vVols, copies the source VMDK files to them and directs the array to take snapshots of them. It then copies each VMFS-based VMware snapshot to the corresponding data vVol, merging the changes. All copying occurs while the VM is online.

BEST PRACTICE: Only virtual hardware versions 11 and later are supported. If a VM has VMware-managed VMFS-based memory snapshots and is at virtual hardware level 10 or earlier, delete the memory snapshots prior to migration. Upgrading the virtual hardware does not resolve this issue. Refer to VMware’s note here

Migrating Raw Device Mappings

A Raw Device Mapping can be migrated to a vVol in any of the following ways:

  • Shut down the VM and perform a storage migration. Migration converts the RDM to a vVol.
  • Add to the VM a new virtual disk in a vVol datastore. The new virtual disk must be of the same size as the RDM and located on the same array. Copy the RDM volume to the vVol, redirect the VM’s applications to use the new virtual disk, and delete the RDM volume.
  • Remove the RDM from the VM and add it back as a vVol. At the time of publication, this process requires Pure Storage Technical Support assistance. Pure Storage plans to make a user-accessible mechanism for this available in the future.

For more information, refer to the blog post https://www.codyhosterman.com/2017/11/moving-from-an-rdm-to-a-vvol/


 

Data Mobility with vVols

A significant, but under-reported benefit of vVols is data set mobility. Because a vVol-based VM’s storage is not encapsulated in a VMDK file, the VM’s data can easily be shared and moved.

A data vVol is a virtual block device presented to a VM; it is essentially identical to a virtual mode RDM. Thus, a data vVol (or a volume created by copying a snapshot of it) can be used by any software that can interpret its contents, for example an NTFS or XFS file system created by the VM.

Therefore, it is possible to present a data vVol, or a volume created from a snapshot of one, to a physical server; to present a volume created by a physical server to a vVol-based VM as a vVol; or to overwrite a vVol from a volume created by a physical server.

This is an important benefit of the FlashArray vVol implementation. The following blog posts contain examples of and additional information about data mobility with FlashArray vVols:

https://www.codyhosterman.com/2017/10/comparing-vvols-to-vmdks-and-rdms/

https://www.codyhosterman.com/2017/12/vvol-data-mobility-virtual-to-physical/


 

Appendix I

While the Plugin is not required for the use of FlashArray-based vVols, it simplifies administrative procedures that would otherwise require either coordinated use of multiple GUIs or scripting.

Version 3.0 of the Plugin and later versions support vVols. To verify that a Plugin version that supports vVols is installed, select Administration in the Web Client home screen inventory pane and select Client Plug-Ins to display the Client Plug-ins pane (vSphereView 137).

Version 3.0 of the FlashArray Plugin for the vSphere Web Client integrates with the vSphere Web Client (also called Flash/Flex Client). Plugin support for VMware’s emerging vSphere Client (HTML5) is under development.

vv185.png
vSphereView 137: Web Client Plug-ins Pane

If the Pure Storage Plugin is not installed, or if the installed version is earlier than 3.0, use the FlashArray GUI to install or upgrade the Plugin to a version that supports vVols.

As a FlashArray administrator, select the Software tab on the Settings pane. The Available Version field (ArrayView 45) lists the array's current Plugin version. If the version is earlier than 3.0, use an array that hosts version 3.0 or later. If no such array is available, contact Pure Storage Support to obtain a supported version of the Plugin.

vv186.png
ArrayView 45: Plugin Installation and Upgrade

To install the Plugin in the vCenter Web Client, click the vv187b.png button in the vSphere Plugin pane (ArrayView 45) to launch the Edit vSphere Plugin Configuration wizard (ArrayView 46).

vv182.png
ArrayView 46: Edit vSphere Plugin Configuration Wizard

The target vCenter validates the administrator credentials and returns the version of the installed Plugin (if any) in the Version on vCenter field. ArrayView 47 shows the vCenter responses when no Plugin is installed (left) and when the installed version is earlier than 3.0 (right). Click Install or Upgrade as required.

vv183.png
ArrayView 47: vCenter Responses to FlashArray Plugin Query

When installation is complete, the wizard displays a confirmation message. Install the Plugin in additional vCenter instances as required. To verify the installation, log out of and back into vCenter, and look for the vv187.png icon in the Web Client Home tab.

vv189.png
vSphereView 138: Using Web Client to Verify Plugin Installation

Authenticating FlashArray to the Plugin

To authenticate a FlashArray to a Plugin installed in vCenter, either click the vv187.png icon on the Web Client Home tab (vSphereView 138) or click the Home button at the top of the pane and select Pure Storage from the dropdown menu (vSphereView 139) to display the FlashArray pane Objects tab (vSphereView 140).

vv190.png
vSphereView 139: FlashArray Authentication (1)
vv191.png
vSphereView 140: FlashArray Authentication (2)

Click + Add FlashArray to launch the Add FlashArray wizard.

 

vv192.png
vSphereView 141: Add FlashArray Wizard

vSphereView 142 illustrates the Web Client FlashArray pane Objects tab after the array has been added.

vv193.png
vSphereView 142: Array Authenticated to vCenter

Note: Role-Based Access Control is available for the Plugin, but configuration and use of this feature is beyond the scope of this report. 

Refer to the Plugin User Guide, available on support.purestorage.com, for further information.


 

Appendix II: FlashArray CLI Commands for Protocol Endpoints

Specifying the --protocol-endpoint option in the Purity//FA CLI purevol create command creates the volume as a protocol endpoint.

vv194.png
ArrayView 48: FlashArray CLI Command to Create a PE

Specifying the --protocol-endpoint option in the Purity//FA CLI purevol list command displays a list of volumes on the array that were created as PEs.

vv195.png
ArrayView 49: FlashArray CLI Command to List an Array's PEs
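
These commands can also be scripted. The sketch below runs them on the array over SSH using the paramiko library; the array address, credentials, and PE name are hypothetical.

# Sketch: create and list protocol endpoints by running the Purity//FA CLI over SSH.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("flasharray.example.com", username="pureuser", password="password")   # hypothetical

for cmd in ("purevol create --protocol-endpoint pure-protocol-endpoint",
            "purevol list --protocol-endpoint"):
    stdin, stdout, stderr = client.exec_command(cmd)
    print("###", cmd)
    print(stdout.read().decode(), stderr.read().decode())

client.close()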


 

Appendix III: VMware ESXi CLI Commands for vVols

Use the esxcli storage vvol commands to troubleshoot a vVol environment.

esxcli storage core device list
Identify protocol endpoints. The output entry Is VVOL PE: true indicates that the storage device is a protocol endpoint.

esxcli storage vvol daemon unbindall
Unbind all vVols from all VASA providers known to the ESXi host.

esxcli storage vvol protocolendpoint list
List all protocol endpoints that a host can access.

esxcli storage vvol storagecontainer list
List all available storage containers.

esxcli storage vvol storagecontainer abandonedvvol scan
Scan the specified storage container for abandoned vVols.

esxcli storage vvol vasacontext get
Show the VASA context (VC UUID) associated with the host.

esxcli storage vvol vasaprovider list
List all storage (VASA) providers associated with the host.
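
The output of these commands can also be collected remotely for troubleshooting. The sketch below runs a few of them over SSH with paramiko; the host address and credentials are hypothetical, and the ESXi host's SSH service must be enabled.

# Sketch: gather vVol troubleshooting output from an ESXi host over SSH.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("esxi-01.example.com", username="root", password="password")   # hypothetical

for cmd in ("esxcli storage vvol vasaprovider list",
            "esxcli storage vvol protocolendpoint list",
            "esxcli storage vvol storagecontainer list"):
    stdin, stdout, stderr = client.exec_command(cmd)
    print("###", cmd)
    print(stdout.read().decode())

client.close()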


 

Appendix IV: Disconnecting a Protocol Endpoint from a Host

Decommissioning ESXi hosts or clusters normally includes removal of protocol endpoints (PEs). The usual FlashArray volume disconnect process is used to disconnect PEs from hosts. As with removal of any non-vVol block storage device, however, the best practice is to detach the PE from each host in vCenter prior to disconnecting it from them on the array.

vv196.png
vSphereView 143: Web Client Tool for Detaching a PE from an ESXi Host

To detach a PE from a host, select the host in the Web Client inventory pane, navigate to the Storage Devices view Configure tab, select the PE to be detached, and click the vv197.png tool to launch the Detach Device confirmation wizard. Click Yes to detach the selected PE from the host.

vv198.png
vSphereView 144: Confirm Detach Wizard

vSphereView 145 shows the Web Client storage listing after successful detachment of a PE.

vv199.png
vSphereView 145: Detached PE

Failure to detach a PE from a host (vSphereView 146) typically occurs because there are vVols bound to the host through the PE that is being detached.

vv200.png
vSphereView 146: Failure to Detach PE (LUN) from a Host

FlashArrays prevent disconnecting a PE from a host (including members of a FlashArray host group) that has vVols bound through it.

The Purity//FA Version 5.0.0 GUI does not support disconnecting PEs from hosts. Administrators can only disconnect PEs via the CLI or REST API.

Before detaching a PE from an ESXi host, use one of the following VMware techniques to clear all bindings through it:

  1. vMotion all VMs to a different host
  2. Power-off all VMs on the host that use the PE
  3. Storage vMotion the VMs on that host that use the PE to a different FlashArray or to a VMFS

To completely delete a PE, remove all vVol connections through it. To prevent erroneous disconnects, FlashArrays prevent destruction of PE volumes with active connections.


 

Appendix V: vVols and Volume Group Renaming

FlashArray volume groups are not in the VM management critical path. Therefore, renaming or deleting a volume group does not affect VMware’s ability to provision, delete or change a VM’s vVols.

A volume group is primarily a tool that enables FlashArray administrators to manage a VM’s volumes as a unit. Pure Storage highly recommends creating and deleting volume groups only through VMware tools, which direct arrays to perform actions through their VASA providers.

Volume group and vVol names are not related to VASA operations. vVols can be added to and removed from a volume group whose name has been changed by an array administrator. If, however, a VM’s config vVol is removed from its volume group, any vVols created for the VM after the removal are not placed in any volume group. If a VM’s config vVol is moved to a new volume group, any new vVols created for it are placed in the new volume group.

VMware does not inform the array that it has renamed a vVol-based VM, so renaming a VM does not automatically rename its volume group. Consequently, it is possible for volume group names to differ from those of the corresponding VMs. For this reason, the FlashArray vVol implementation does not put volume group or vVol names in the vVol provisioning and management critical path.

For ease of management, however, Pure Storage recommends renaming volume groups when the corresponding VMs are renamed in vCenter.

 

Appendix VI: Cisco FNIC Driver Support for vVols

Older Cisco UCS FNIC drivers do not support the SCSI features required for protocol endpoints and vVol sub-lun connections. To use vVols with Cisco UCS, FNIC drivers must be updated to a version that supports sub-luns. For information on firmware and update instructions, consult:

https://my.vmware.com/group/vmware/details?productId=491&downloadGroup=DT-ESX60-CISCO-FNIC-16033

https://quickview.cloudapps.cisco.com/quickview/bug/CSCux64473

 

About the Author

vv201.png

Cody Hosterman is the Technical Director for VMware Solutions at Pure Storage. His primary responsibility is overseeing, testing, designing, documenting, and demonstrating VMware-based integration with the Pure Storage FlashArray platform. Cody has been with Pure Storage since 2014 and has been working in vendor enterprise storage/VMware integration roles since 2008.

Cody graduated from the Pennsylvania State University with a bachelor's degree in Information Sciences & Technology in 2008. Special areas of focus include core ESXi storage, vRealize (Orchestrator, Automation and Log Insight), Site Recovery Manager and PowerCLI. Cody has been a named VMware vExpert every year since 2013.

Blog: www.codyhosterman.com

Twitter: www.twitter.com/codyhosterman

YouTube: https://www.youtube.com/codyhosterman



© 2018 Pure Storage, Inc. All rights reserved. Pure Storage, Pure1, and the Pure Storage Logo are trademarks or registered trademarks of Pure Storage, Inc. in the U.S. and other countries. Other company, product, or service names may be trademarks or service marks of their respective owners.