Web Guide: Implementing vSphere Virtual Volumes with FlashArray

Abstract

VMware’s vSphere Virtual Volume (vVol) paradigm, introduced in vSphere version 6.0, is a storage technology that provides policy-based, granular storage configuration and control of virtual machines (VMs). Through API-based interaction with an underlying array, VMware administrators can maintain storage configuration compliance using only native VMware interfaces.

Version 5.0.0 of Purity//FA software introduced support for FlashArray-based vSphere Virtual Volumes (vVols). The accompanying FlashArray Plugin for the vSphere Web Client (the Plugin) makes it possible to create, manage, and use vVols that are based on FlashArray volumes from within the Web Client. This report describes the architecture, implementation, and best practices for using FlashArray-based vVols.

Audience

The primary audiences for this guide are VMware administrators, FlashArray administrators, and more generally, anyone interested in the architecture, implementation, administration, and use of FlashArray-based vVols.

Throughout this report, the terms FlashArray administrator, array administrator, and administrator in the context of array administration, refer to both the storage and array administration roles for FlashArrays.

Both management interfaces must be configured on both controllers of each array, with enabled and active links. The management interfaces are as follows:

  • FlashArray//XR4 - ct0.eth4, ct0.eth5, ct1.eth4, ct1.eth5
  • All other FlashArray models - ct0.eth0, ct0.eth1, ct1.eth0, ct1.eth1

For further questions and requests for assistance, customers can contact Pure Storage Technical Support at support@purestorage.com.

vVols Best Practices Summary

Requirements

  • Purity//FA 6.1.8 or higher
  • FlashArray 400 Series, FlashArray//M, FlashArray//X, FlashArray//C
  • vCenter 6.5+ and ESXi 6.5+
    • vSphere 6.5 and 6.7 are end of life per VMware; 7.0 or higher should be used moving forward
  • Configure NTP for the VMware environment and the FlashArray (see the PowerCLI sketch following this summary)
    • Ensure that all hosts, vCenter Servers, and arrays are synced
  • Ensure that vCenter Server and ESXi host management networks have TCP port 8084 access to the FlashArray controller management ports.
  • Configure hosts and host groups with the appropriate initiators on the FlashArray.
  • The 'pure-protocol-endpoint' must not be destroyed.
    • This namespace must exist for the vVols management path to operate correctly.

Recommendations

  • Purity//FA 6.2.10 or later
  • vCenter 7.0 Update 3f (build 20051473) or later
  • ESXi 7.0 Update 3f (build 20036589) or later
  • When registering the VASA Provider, use a local FlashArray user
  • Do not run vCenter Servers on vVols
  • Connect the Protocol Endpoint to host groups, not individual hosts
  • Configure a syslog server for the vSphere environment
  • Configure snapshot policies for all Config vVols (VM home directories)
  • Use Virtual Machine hardware version 15 or later
    • Hardware version 15 or later is required when a Virtual Machine needs more than 15 virtual devices per SCSI controller
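Where PowerCLI is already in use, NTP for the ESXi hosts can be configured consistently with a short script. The following is a minimal sketch only; the NTP server name is a placeholder for your environment's time source, and the vCenter Server and FlashArray NTP settings still need to be configured through their own interfaces.

# Minimal PowerCLI sketch: point every ESXi host at the same NTP source and start ntpd.
# "pool.ntp.org" is a placeholder - substitute your environment's NTP servers.
$ntpServer = "pool.ntp.org"
foreach ($esx in Get-VMHost) {
    Add-VMHostNtpServer -VMHost $esx -NtpServer $ntpServer -ErrorAction SilentlyContinue
    Get-VMHostService -VMHost $esx | Where-Object { $_.Key -eq "ntpd" } |
        Set-VMHostService -Policy On -Confirm:$false |
        Restart-VMHostService -Confirm:$false
}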

If using Virtual Volumes and FlashArray replication, ensure that the anticipated recovery site is running vSphere 6.7 or later.

As always, please ensure you follow standard Pure Storage best practices for vSphere.

Currently, VMware does not support stretched storage with vVols. Due to limitations in both Purity//FA and vSphere, vVols are not supported with ActiveCluster. vVols are also not supported with ActiveDR.

Pure Storage and VMware are actively partnered to develop support for stretched storage (ActiveCluster) with vVols and have targeted the first half of CY2024 for release; release timelines are subject to change.

vVols Best Practices Quick Guidance Points

Here are some quick points of guidance when using vVols with the Pure Storage FlashArray. These are not meant to be a Best Practices deep dive nor a comprehensive outline of all best practices when using vVols with Pure Storage; a Best Practices deep dive will be given in the future. More explanation of the requirements and recommendations is given in the summary above.

Purity Version

While vVols support was first introduced with Purity 5.0.0, there have been significant fixes and enhancements to the VASA provider in later releases of Purity. Because of this, Pure has set the required Purity version for vVols to a later release.

  • For general vVols use, while Purity 5.1 and 5.3 can support vVols, both Purity releases are end of life.  As such, the minimum target Purity version should be Purity//FA 6.1.8.

Pure Storage recommends that customers running vVols upgrade to Purity//FA 6.2.10 or higher.

The main reason behind this is that later releases include enhancements to VASA that support vVols at higher scale, improve the performance of Managed Snapshots, and improve the SPBM Replication Group Failover API at scale. For more information, please see What's New with VASA Provider 2.0.0.

vSphere Version

While vSphere Virtual Volumes 2.0 was released with vSphere 6.0, the Pure Storage FlashArray only supports vSphere Virtual Volumes 3.0, which was released with vSphere 6.5.  As such, the minimum required vSphere version is the 6.5 GA release.  That being said, there are significant vVols-specific fixes in later releases, so the required and recommended versions are as follows:

vSphere 7.0 Update 3 released several improvements for vVols and running vVols at scale.  vSphere 7.0 Update 3f should be the minimum vSphere version to target for running vVols with Pure Storage in your vSphere environment.

vSphere Environment

With regards to the vSphere environment, there are some networking requirements and some strong recommendations from Pure Storage when implementing vVols in your vSphere Environment.

  • Requirement: NTP must be configured the same across all ESXi hosts and vCenter Servers in the environment.  The time and date must be set to the current date/time.
  • Recommended: Configure syslog forwarding for the vSphere environment.
  • Requirement: Network port 8084 must be open and accessible from vCenter Servers and ESXi hosts to the FlashArray that will be used for vVols (see the connectivity check after this list).
  • Recommended: Use Virtual Machine hardware version 14 or higher.
  • Requirement: Do not run vCenter Servers on vVols.
    • While a vCenter Server can run on vVols, in the event of any failure on the VASA management path combined with a vCenter Server restart, the environment could enter a state where the vCenter Server may not be able to boot or start.  Please see the failure scenario KB for more detail on this.
  • Recommended: Either configure an SPBM policy to snapshot all of the vVol VMs' Config vVols or manually put Config vVols in a FlashArray protection group with a snapshot schedule enabled.
    • A snapshot of the Config vVol is required for the vSphere Plugin's VM undelete feature.  Having a backup of the Config vVol also helps the recovery or rollback process for the VM in the event that there is an issue.  There is a detailed KB that outlines some of these workflows that can be found here.
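As a quick spot check of the port 8084 requirement above, a connectivity test can be run from a Windows management host that sits on a network comparable to the vCenter Server and ESXi management networks. The controller IP addresses below are examples taken from later in this guide; TcpTestSucceeded should return True for each controller.

# Confirm TCP 8084 is reachable on each FlashArray controller management address (example IPs).
Test-NetConnection -ComputerName 10.21.149.22 -Port 8084
Test-NetConnection -ComputerName 10.21.149.23 -Port 8084

Note that this only proves reachability from the machine running PowerShell; firewall rules between the ESXi hosts or vCenter Server and the array still need to be verified separately.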

FlashArray Environment

Here is some more detail and color for the requirements and recommendations with the FlashArray:

  • Requirement: The FlashArray Protocol Endpoint object 'pure-protocol-endpoint' must exist. The FlashArray admin must not rename, delete, or otherwise edit the default FlashArray Protocol Endpoint.
    • Currently, Pure Storage stores important information for the VASA service in the pure-protocol-endpoint namespace.  Destroying or renaming this object will cause VASA to be unable to forward requests to the database service in the FlashArray.  This effectively makes the VASA Provider unable to process requests and causes the management path to fail.  Pure Storage is working to improve this implementation in a future Purity release.
  • Recommendation: Create a local array admin user when running Purity 5.1 and higher.  This user should then be used when registering the storage providers in vCenter.
  • Recommendation: Following vSphere best practices with the FlashArray, ESXi clusters should map to FlashArray host groups and ESXi hosts should map to FlashArray hosts.
  • Recommendation: The protocol endpoint should be connected to host groups on the FlashArray and not to individual hosts.
  • Recommendation: While multiple protocol endpoints can be created manually, the default device queue depth for protocol endpoints is 128 in ESXi and can be configured up to 4096, so adding additional protocol endpoints is often unnecessary.
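If the protocol endpoint queue depth does need to be examined, it is controlled per ESXi host by the Scsi.ScsiVVolPESNRO advanced setting, which defaults to 128. The PowerCLI sketch below is illustrative only; verify the setting name against your ESXi release and only raise the value if a specific need has been identified.

# Check the current vVol PE outstanding-request limit on each host (default is 128).
Get-VMHost | Get-AdvancedSetting -Name "Scsi.ScsiVVolPESNRO" |
    Select-Object Entity, Name, Value

# Raise it only when required (example value shown):
# Get-VMHost | Get-AdvancedSetting -Name "Scsi.ScsiVVolPESNRO" |
#     Set-AdvancedSetting -Value 256 -Confirm:$false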

VASA Provider/Storage Provider

The FlashArray has a storage provider running on each FlashArray controller called the VASA Service. The VASA Service is part of the core Purity services, meaning that it automatically starts when Purity is running on that controller.  In vSphere, the VASA Providers are registered as Storage Providers.  While Storage Providers/VASA Providers can manage multiple storage arrays, the Pure VASA Provider only manages the FlashArray it runs on.  Even though the VASA Service is running and active on both controllers, vCenter will only use one VASA Provider as the active Storage Provider; the other VASA Provider will be the standby provider.

Here are some requirements and recommendations when working with the FlashArray VASA Provider.

  • Requirement: Register both VASA Providers, CT0 and CT1.
    • While it is possible to register only a single VASA Provider, this leaves a single point of failure in your management path.
  • Recommendation: Do not use an Active Directory user to register the storage providers.
    • Should the AD service/server be running on vVols, Pure Storage strongly recommends not using an AD user to register the storage providers.  This leaves a single point of failure on the management path in the event that the AD user's permissions change, the password changes, or the account is deleted.
  • Recommendation: Use a local array admin account to register the storage providers.
  • Recommendation: Should the FlashArray be running Purity 5.3.6 or higher, import CA-signed certificates to VASA-CT0 and VASA-CT1.

Managed Snapshots for vVols based VMs

One of the core benefits of using vVols is the integration of array storage with vSphere Managed Snapshots.  The operations of the managed snapshot are offloaded to the FlashArray, and there is no performance penalty for keeping the managed snapshots.  Because these operations are offloaded to VASA and the FlashArray, they do create additional work on the FlashArray that is not present with managed snapshots on VMFS.

Massive improvements to vVols performance at scale and under load have been released with the FlashArray VASA Provider 2.0.0 in Purity//FA 6.2 and 6.3.

Pure Storage's recommendation when using vVols with the FlashArray is to upgrade to Purity//FA 6.2.10 or higher.

Please see the KB What's new with VASA Provider 2.0.0 for more information.

Here are some points to keep in mind when using Managed Snapshots with vVols based VMs.

  • Managed snapshots for vVols-based VMs create volumes for each Data vVol on that VM that have a -snap suffix in their naming.
    • The process of taking a managed snapshot for a vVol-based VM will first issue a Prepare Snapshot Virtual Volume operation, which causes VASA to create placeholder data-snap volumes.  Once completed, vSphere will then send the Snapshot Virtual Volume request after stunning the VM.  VASA will then take consistent point-in-time snapshots of each Data vVol and copy them out to the placeholder volumes previously created.  Once the requests complete for each virtual disk, the VM is unstunned and the snapshot is completed.
    • Because FlashArray volumes are created for the managed snapshot, this directly impacts the volume count on the FlashArray.  For example, a vVol VM with 5 VMDKs (Data vVols) will create 5 new volumes on the FlashArray for each managed snapshot.  If 3 managed snapshots are taken, this VM accounts for 21 volumes on the FlashArray while powered off (1 Config vVol and 20 Data vVol volumes, including the 15 -snap copies) and 22 volumes while powered on (1 additional Swap vVol).
  • Managed snapshots only trigger point-in-time snapshots of the Data vVols and not the Config vVol.  In the event that the VM is deleted and a recovery of the VM is desired, it will have to be done manually from a pgroup snapshot.
  • The process of VMware taking a managed snapshot is fairly serialized; specifically, the snapshotVirtualVolume operations are serialized.  This means that if a VM has 3 VMDKs (Data vVols), the snapshotVirtualVolume request will be issued for one VMDK, and only after it completes will the next VMDK have the operation issued against it. The more VMDKs a VM has, the longer the managed snapshot will take to complete, which can increase the stun time for that VM.
    • VMware has committed to improving the performance of these calls from vSphere.  In vSphere 7.0 U3 they updated snapshotVirtualVolume to use the max batch size advertised by VASA to issue snapshotVirtualVolume calls with multiple Data vVols.  Multiple snapshotVirtualVolume calls for the same VM will also now be issued close to the same time in the event that the number of virtual disks is greater than the max batch size.
  • Recommendation:  Plan accordingly when setting up managed snapshots (scheduled or manual) and when configuring backup software which leverages managed snapshots for incremental backups.  The size and number of Data vVols per VM can impact how long the snapshot virtual volume op takes and how long the stun time can be for the VM.  A minimal PowerCLI example of taking a managed snapshot follows this list.
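Managed snapshots of vVol-based VMs are taken with the normal vSphere tooling; nothing FlashArray-specific is needed to trigger the prepare/snapshot workflow described above. The sketch below is minimal; the VM and snapshot names are placeholders.

# Take a managed snapshot of a vVol-based VM; the -snap copies are created on the FlashArray.
$vm = Get-VM -Name "sql-prod-01"          # placeholder VM name
New-Snapshot -VM $vm -Name "pre-patch" -Description "Before OS patching" -Quiesce:$false -Memory:$false

# Remove the snapshot once it is no longer needed so the -snap volumes are cleaned up on the array.
# Get-Snapshot -VM $vm -Name "pre-patch" | Remove-Snapshot -Confirm:$false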

Storage Policy Based Management (SPBM)

There are a few aspects of utilizing Storage Policies with vVols and the FlashArray to keep in mind when managing your vSphere Environment.

  • Storage Policies can be compatible with one or multiple replication groups (FlashArray protection groups).
    • While storage policies can be compatible with multiple replication groups, when applying the policy to a VM, multiple replication groups should not be used.  The VM should be part of a single consistency group (a PowerCLI sketch follows this list).
  • SPBM failover workflow APIs are run against the replication group and not the storage policy itself.
  • Recommendation: Attempt to keep replication groups under 100 VMs.  This will help with the VASA ops issued against the policies and replication groups and the time it takes to return these queries.
    • This includes both snapshot- and replication-enabled protection groups.  These VASA ops, such as queryReplicationGroup, will look up all objects in both local replication and snapshot pgroups, as well as target protection groups.  The more protection groups and the more objects in protection groups, the longer these queries will inherently take.  Please see vVols Deep Dive: Lifecycle of a VASA Operation for more information.
  • Recommendation: Do not change the default storage policy of the vVols Datastore.  Doing so could cause issues in the vSphere UI when provisioning to the vVols Datastore.
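As an illustration of assigning a replication-enabled policy and a single replication group to a VM, a PowerCLI sketch using the SPBM cmdlets might look like the following. The policy, protection group, and VM names are placeholders, and the behavior should be validated in a test environment before use at scale.

# Assign a vVol storage policy and a single replication group (FlashArray protection group) to a VM.
$policy = Get-SpbmStoragePolicy -Name "FlashArray-Replicated"                        # placeholder policy name
$rg     = Get-SpbmReplicationGroup | Where-Object { $_.Name -like "*prod-pgroup*" }  # placeholder pgroup name
$vm     = Get-VM -Name "app-vvol-01"                                                 # placeholder VM name

# Apply the policy and replication group to the VM home (Config vVol) and to all of its virtual disks.
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $rg
Get-HardDisk -VM $vm | Get-SpbmEntityConfiguration |
    Set-SpbmEntityConfiguration -StoragePolicy $policy -ReplicationGroup $rg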

FlashArray SafeMode with vVols

For FlashArrays with SafeMode enabled, additional considerations and planning will be required for the best experience.  Because the management of storage is done through VASA, the VASA service will frequently create new volumes, destroy volumes, eradicate volumes, place volumes in FlashArray protection groups, remove volumes from FlashArray protection groups, and disable snapshot/replication schedules.

For more detailed information on SafeMode with vVols see the User Guide.  Here is a quick summary of recommendations when running vVols with SafeMode enabled on the FlashArray.

  • Any FlashArray using vVols should be running Purity 6.1.8 or higher before enabling SafeMode.
  • A vSphere environment running 7.0 U1 or higher is ideal, to leverage the allocated bitmap hint that is part of VASA 3.5.
  • Object count, object count, object count.  Seriously, the biggest impact that enabling SafeMode will have is on object count.  Customers that want to enable SafeMode must plan to continuously monitor the object counts for volumes, volume groups, volume snapshots, and pgroup snapshots.  Do not just monitor current object counts, but pending-eradication object counts as well.
  • Auto-RG should not be used for SPBM when assigning replication groups to a VM.
  • Once a VM has a storage policy replication group assigned, VASA will be unable to assign a different replication group.  Plan on the fact that once a storage policy and replication group are assigned, the vSphere admin will be unable to change them with SafeMode enabled.
  • Failover replication group workflows will not be able to disable replication group schedules, nor will cleanup workflows be able to eradicate objects.  Users must plan for higher object counts after any tests or failover workflows.
  • Environments that frequently power VMs on/off or vMotion them between hosts will have higher numbers of Swap vVols pending eradication.  Should the eradication timer be changed to longer than 24 hours, they will be pending eradication for a longer time.  Storage and vSphere admins will have to plan around higher object counts in these environments.
    • In some cases, vSphere admins may want to configure a VMFS datastore that is shared between all hosts as the target for VM swap.
  • When changed block tracking (CBT) is enabled for the first time, this will increase the number of volume snapshots pending eradication.  Backup workflows that periodically refresh CBT (disable and re-enable CBT) will increase the number of these volume diffs that are issued.  Pure does not recommend frequently refreshing CBT.  Once enabled, CBT should not normally need to be refreshed.

[Back to Top]  


Terminology

These are the core terms to know and understand when discussing vVols and their implementation with Pure Storage's FlashArray.  Some concepts have more than one term that applies to them; both terms are covered below.

Protocol Endpoint (PE)
A PE is a volume of zero capacity with a special setting in its Vital Product Data (VPD) page that ESXi detects during a SCSI inquiry. The PE effectively serves as a mount point for vVols. A PE is the only FlashArray volume that must be manually connected to hosts to use vVols.  The industry term for a PE is "Administrative Logical Unit".

VASA
vSphere APIs for Storage Awareness (VASA) is the VMware-designed API used to communicate between vSphere and the underlying storage.  For Pure Storage, this is the FlashArray.

SOAP
In the days before REST APIs were widely used, SOAP (Simple Object Access Protocol) was a messaging protocol used to exchange structured data (information) via web services (HTTP).  SOAP uses an XML structure to exchange the information between source and destination.  SOAP is heavily used in the management communication of the vSphere environment, vCenter services and, most important for the purposes of this KB, VASA.

Management Path or Control Path
This is the TCP/IP path between the compute management layer (vSphere) and the storage management layer (FlashArray).  Requests such as creating, deleting, and otherwise managing storage are issued on this path.  This is done via HTTPS and TLS 1.2 over port 8084 for the FlashArray VASA Provider.

Data Path or Data Plane
The Data Path is the established connection from the ESXi hosts to the Protocol Endpoint on the FlashArray. The Data Path is the flow over which SCSI ops are sent and received, just as with any traditional SAN.  This connection is established over the storage fabric; today this means iSCSI or Fibre Channel.

SPBM
Storage Policy Based Management (SPBM) is a framework designed by VMware to provision and/or manage storage. Users can create policies of selected capabilities or tags and assign them to a VM or specific virtual disk. SPBM for internal storage is called vSAN; SPBM for external storage is called vVols. A vendor must support VASA to enable SPBM for their storage.

VASA Provider or Storage Provider
A VASA provider is an instance of the VASA service that a storage vendor offers for deployment in a customer's environment. For the FlashArray, the VASA Providers are built into the FlashArray controllers and are represented as VASA-CT0 and VASA-CT1.  The term Storage Provider is used in vCenter to represent the VASA Providers for a given FlashArray.

Virtual Volume (vVol)
Virtual Volumes (vVols) is the name for the full architecture. A specific vVol is any volume on the array that is in use by the vSphere environment and managed by the VASA provider. A vVol-based volume is not fundamentally different from any other volume on the FlashArray.  The main distinction is that when it is in use, it is attached as a sub-lun via a PE instead of via a direct LUN.

vVol Datastore or vVol Storage Container
The vVol Datastore is not a LUN, file system, or volume. A vVol Datastore is a target provisioning object that represents a FlashArray, a quota for capacity, and a logical collection of Config vVols.  While the object created in vCenter is represented as a Datastore, the vVol Datastore is really a Storage Container that represents the given FlashArray.

SPS
A vCenter daemon called the Storage Policy Service (SPS or vmware-sps).  The SMS and SPBM services run as part of the Storage Policy Service.

SMS
A vCenter service called the Storage Management Service (SMS).

vvold
The service running on ESXi that handles management requests directly from the ESXi host to the VASA provider, and communicates with the vCenter SMS service to get the Storage Provider information.

[Back to Top]  


Introduction to vVols

Historically, the datastores that have provided storage for VMware virtual machines (VMs) have been created as follows:

  1. A VMware administrator requests storage from a storage administrator
  2. The storage administrator creates a disk-like virtual device on an array and provisions it to the ESXi host environment for access via iSCSI or Fibre Channel
  3. The VMware administrator rescans ESXi host I/O interconnects to locate the new device and formats it with VMware’s Virtual Machine File System (VMFS) to create a datastore.
  4. The VMware administrator creates a VM and one or more virtual disks, each instantiated as a file in the datastore’s file system and presented to the VM as a disk-like block storage device.

Virtual storage devices instantiated by storage arrays go by multiple names. Among server users and administrators, LUN (logical unit number) is popular. The FlashArray term for a virtual device is volume. ESXi and guest hosts address commands to LUNs, which are usually assigned automatically to volumes.

While plugins can automate datastore creation to some extent, they have some fundamental limitations:

  • Every time additional capacity is required, VMware and storage administrators must coordinate their activities
  • Certain widely-used storage array features such as replication are implemented at the datastore level of granularity. Enabling them affects all VMs that use a datastore
  • VMware administrators cannot easily verify that required storage features are properly configured and enabled.

VMware designed vVols to mitigate these limitations. vVol benefits include:

  • Virtual Disk Granularity
    • Each virtual disk is a separate volume on the array with its own unique properties
  • Automatic Provisioning
    • When a VMware administrator requests a new virtual disk for a VM, VMware automatically directs the array to create a volume and present it to the VM. Similarly, when a VMware administrator resizes or deletes a virtual disk, VMware directs the array to resize or remove the volume
  • Array-level VM Visibility
    • Because arrays recognize both VMs and their virtual disks, they can manage and report on performance and space utilization with both VM and individual virtual disk granularity.
  • Storage Policy Based Management
    • With visibility to individual virtual disks, arrays can take snapshots and replicate volumes at the precise granularity required. VMware can discover an array’s virtual disks and allow VMware administrators to manage each vVol’s capabilities either ad hoc or by specifying policies. If a storage administrator overrides a vVol capability configured by a VMware administrator, the VMware administrator is alerted to the non-compliance.

vVol Architecture

Here is a generic high level view of the vVol Architecture.
Make note that the Control/Management Path is separate from the Data Path.

Picture1.png

VMware designed the vVol architecture to mitigate the limitations of the VMFS-based storage paradigm while retaining the benefits, and merging them with the remaining advantages of Raw Device Mappings.

VMware’s vVol architecture consists of the following components:

  • Management Plane (section titled The FlashArray VASA Provider)
    • Implements the APIs that VMware uses to manage the array. Each supported array requires a vSphere API for Storage Awareness (VASA) provider, implemented by the array vendor.
  • Data Plane (section titled vVol Binding)
    • Provisions vVols to ESXi hosts
  • Policy Plane (section titled Storage Policy Based Management)
    • Simplifies and automates the creation and configuration of vVols.

[Back to Top]


The FlashArray VASA Provider

VMware's vSphere APIs for Storage Awareness (VASA) is a VMware interface for out-of-band communication between VMware ESXi, vCenter, and storage arrays. The arrays' VASA providers are services registered with the vCenter Server. Storage vendors implement providers for their arrays, either as VMs or embedded in the arrays. As of vSphere 8.0 U1, VMware has introduced the following versions of VASA:

  • Version 1 (Introduced in vSphere Version 5.0)
    • Provides basic configuration information for storage volumes hosting VMFS datastores, as well as injection of some basic alerts into vCenter
  • Version 2 (Introduced in vSphere Version 6.0)
    • First version to support vVols
  • Version 3 (Introduced in vSphere Version 6.5)
    • Added support for array-based replication of vVols and Oracle RAC.
  • Version 3.5 (Introduced in vSphere Version 7.0 U1)
    • Added additional feature support for improved bitmap operation performance and iSCSI CHAP.
  • Version 4.0 (Introduced in vSphere Version 8.0 GA)
    • Added support for NVMe-FC with vVols.
    • Support for NVMe-TCP with vVols was added in 8.0 U1.
  • Version 5.0 (Introduced in vSphere Version 8.0 U1)
    • Added support for multiple unlinked vCenters without requiring custom certs.  

The Pure Storage FlashArray supports the following VASA versions by Purity release:

  • Version 3 support was added with Purity//FA 5.0.0 and higher
  • Version 3.5 support was added with Purity//FA 6.1.6 and higher
  • Version 4 support was added with Purity//FA 6.6.2 and higher
  • Version 5 support was added with Purity//FA 6.4.9 and higher

Because the FlashArray vVol implementation uses VASA Version 3, the VMware environment must be running vSphere 6.5 or newer on both ESXi hosts and vCenter.

Pure Storage does recommend running vSphere 6.7 U3 p03 or higher for the various fixes and improvements found in this release.  Please see the KB that outlines VASA/vVols related fixes by ESXi release found here.

FlashArray vVol support is included in Purity//FA Version 5.0. The Purity//FA upgrade process automatically installs and configures a VASA provider in each controller; there is no separate installation or configuration. To use FlashArray-based vVols, however, an array’s VASA providers must be registered with vCenter. Either the FlashArray Plugin for vSphere Web Client (the Plugin), the vSphere GUI, or API/CLI-based tools may be used to register VASA providers with vCenter. 


VASA Provider Certificate Management

The management of VASA Provider certificates is supported by the FlashArray with the release of Purity//FA 5.3 and VASA 1.1.0.

Please see the following KBs that detail the management of the VASA Provider certificates:


Registering the FlashArray VASA Provider

There are multiple ways to register the FlashArray VASA Provider.  

Registering FlashArray VASA Providers with the Pure Storage vSphere Plugin

  1. A FlashArray will need to be added/registered in the Plugin in order to register the Storage Provider for a given FlashArray.  Once the FlashArray is registered, navigate to the main Plugin page, select the FlashArray, and then click on "Register Storage Provider".
    vvols-plugin-kb-01-registering-sp-1.png
  2. The recommended practice is to use a local FlashArray array admin user to register the storage providers.  In the example below, there is a local array admin named "vvols-admin" that the Storage Providers will be registered with.  If the vCenter is in Enhanced Linked Mode, the option to choose which vCenter to register the storage providers with will be given.
    Registering the Storage Provider with a Single vCenter
    vvols-plugin-kb-01-registering-sp-2.png
    Registering the Storage Provider with a vCenter in Linked Mode
    vvols-plugin-kb-01-registering-sp-4.png
  3. Once the Storage Provider is successfully registered, navigate to the vCenter Server page, then Configure, and then the Storage Providers tab.  Confirm that the storage providers are online and healthy.
    vvols-plugin-kb-01-registering-sp-3.png

The FlashArray will log all subsequent vVol operations from those vCenters under the user used to register the storage providers.


Manually Registering the FlashArray VASA Providers with the vCenter UI

Alternatively, VMware administrators can use the vCenter Web Client, PowerCLI, and other CLI and API tools to register VASA providers. This section describes registration of FlashArray providers with the vCenter Web Client and with PowerCLI.

Finding the FlashArray Controller IP Addresses

Prior to registration, use the FlashArray GUI or CLI to obtain the IP addresses of both controllers’ eth0 management ports.

Click Settings in the GUI navigation pane, and select the Network tab to display the array’s management port IP addresses

vVols-User-Guide-VASA-Provider-01.png
FlashArray GUI Network Tab - Management IP Addresses
pureuser@sn1-x70-c05-33> purenetwork list ct0.eth0,ct1.eth0
Name      Enabled  Subnet  Address       Mask           Gateway      MTU   MAC                Speed      Services    Subinterfaces
ct0.eth0  True     -       10.21.149.22  255.255.255.0  10.21.149.1  1500  24:a9:37:01:f2:de  1.00 Gb/s  management  -
ct1.eth0  True     -       10.21.149.23  255.255.255.0  10.21.149.1  1500  24:a9:37:02:0b:8e  1.00 Gb/s  management  -
FlashArray CLI - Management IP Addresses

Registering the Storage Provider in vCenter

After the management IPs for the FlashArray are gathered, head over to the vCenter UI and run through the following workflow to register the storage providers.

  1. Navigate to either the Hosts/Datacenters, VMs, Storage, or Network page.
  2. Select the vCenter Server to register the storage providers with.
  3. Navigate to Configure and click Storage Providers under More.
  4. Click on the Add Button
    vVols-User-Guide-VASA-Provider-02.png
  5. Register CT0's storage provider
    vVols-User-Guide-VASA-Provider-03.png

    Name

    • A friendly name for the VASA provider. A best practice is to use names that make operational sense (for example, array name concatenated with controller number).
       

    URL

    • The URL of the controller's VASA provider, in the form https://<controllerIP>:8084. HTTPS (not HTTP) is required, the controller's IP address must be specified (unless a custom certificate with an FQDN in the Subject Alternative Name is used), and port 8084 is required.
       

    Credentials

    • Credentials for an administrator of the target array.   Best practice is to use a local array user and not the default user (pureuser).
      The user name entered is associated with VASA operations in future audit logs.
  6. Register CT1's storage provider
    vVols-User-Guide-VASA-Provider-04.png

    Name

    • A friendly name for the VASA provider. A best practice is to use names that make operational sense (for example, array name concatenated with controller number).
       

    URL

    • The URL of the controller's VASA provider, in the form https://<controllerIP>:8084. HTTPS (not HTTP) is required, the controller's IP address must be specified (unless a custom certificate with an FQDN in the Subject Alternative Name is used), and port 8084 is required.
       

    Credentials

    • Credentials for an administrator of the target array.   Best practice is to use a local array user and not the default user (pureuser).
      The user name entered is associated with VASA operations in future audit logs.

Please ensure that both CT0 and CT1's storage providers are registered.


Manually Registering the FlashArray VASA Providers with PowerShell

When a number of FlashArrays' VASA providers are to be registered, using a PowerCLI script may be preferable. The VMware PowerCLI cmdlet New-VasaProvider registers VASA providers with vCenter.

New-VasaProvider Cmdlet
New-VasaProvider -Name "MyProvider" -Username "UserName" -Password "Password" -Url "MyUrl"
New-VasaProvider Cmdlet  PowerShell Core Example
PS /Users/alex.carver> $vc_creds = Get-Credential

PowerShell credential request
Enter your credentials.
User: purecloud\alex
Password for user purecloud\alex: 

PS /Users/alex.carver> $vasa_creds = Get-Credential

PowerShell credential request
Enter your credentials.
User: vvol-admin
Password for user vvol-admin: 

PS /Users/alex.carver> connect-viserver -Server 10.21.202.95 -Credential $vc_creds

Name                           Port  User
----                           ----  ----
10.21.202.95                   443   PURECLOUD\alex

PS /Users/alex.carver> New-VasaProvider -Name 'sn1-x70-c05-36-ct0' -Credential $vasa_creds -Url 'https://10.21.149.22:8084'

Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---
sn1-x70-c05-36-ct0   online       3.0         11/5/2020 1:28:50 PM   com.purestorage      https://10.21.149.22:8084

PS /Users/alex.carver> New-VasaProvider -Name 'sn1-x70-c05-36-ct1' -Credential $vasa_creds -Url 'https://10.21.149.23:8084'

Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---

The empty output from registering ct1 is expected, as ct1 will be the standby provider.  Currently, PowerCLI only displays details for active Storage Providers and not for standby providers.

An additional method with PowerShell is the New-PfaVasaProvider cmdlet from the Pure Storage VMware PowerShell module.  This requires having the Pure Storage PowerShell SDK installed as well, but works with either Windows PowerShell or PowerShell Core.  A connection to a vCenter Server and a FlashArray is required to use the New-PfaVasaProvider cmdlet.

New-PfaVasaProvider Cmdlet
New-PfaConnection -Endpoint "Management IP" -Credentials (Get-Credential) -DefaultArray -IgnoreCertificate

New-PfaVasaProvider -Flasharray $Global:DefaultFlashArray -Credentials (Get-Credential)
New-PfaVasaProvider Cmdlet PowerShell Core Example
PS /Users/alex.carver> Install-Module -Name PureStoragePowerShellSDK
PS /Users/alex.carver>
PS /Users/alex.carver> Install-Module -Name PureStorage.FlashArray.VMware
PS /Users/alex.carver>
PS /Users/alex.carver> New-PfaConnection -Endpoint 10.21.149.21 -Credentials (Get-Credential) -DefaultArray -IgnoreCertificateError

PowerShell credential request
Enter your credentials.
User: vvol-admin
Password for user vvol-admin: 


Disposed   : False
EndPoint   : 10.21.149.21
UserName   : vvol-admin
ApiVersion : 1.17
Role       : ArrayAdmin
ApiToken   : 18e939a3

PS /Users/alex.carver> connect-viserver -Server 10.21.202.95 -Credential (Get-Credential)

PowerShell credential request
Enter your credentials.
User: purecloud\alex
Password for user purecloud\alex: 


Name                           Port  User
----                           ----  ----
10.21.202.95                   443   PURECLOUD\alex

PS /Users/alex.carver> New-PfaVasaProvider -Flasharray $Global:DefaultFlashArray -Credentials (Get-Credential)

PowerShell credential request
Enter your credentials.
User: vvol-admin
Password for user vvol-admin: 


Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---
sn1-x70-c05-33-CT0   online       3.0         11/5/2020 5:06:10 PM   com.purestorage      https://10.21.149.22:8084

 


Verifying VASA Provider Registration

To verify that VASA Provider registration succeeded, in the Web Client Host and Clusters: 

  1. Click the target vCenter in the inventory pane
  2. Select the Configure tab
  3. Locate the newly-registered providers in Storage Providers
vVols-User-Guide-VASA-Provider-05.png

On the Storage Providers page there are some useful sections that display the information for the VASA Providers.

  1. The first column has the Storage Providers names that were used to register the storage providers.  Additionally, the storage array that the VASA Provider is managing is listed below it, along with the number of online providers for that storage array.
  2. The Status column will list if the provider is online and accessible from vCenter.
  3. vCenter can only have a single Active storage provider for a given storage array.  The Active/Standby column will display if the provider is the active or standby provider.
  4. The Certificate Expiry column displays how many days are left before the certificate expires for that storage provider.  At 180 days a yellow warning will be displayed.
  5. After selecting a Storage Provider there are additional tabs and information that can be selected for that provider.  The general tab will display all the basic information for the given storage provider.  This is a very useful information tab.

Alternatively, the PowerCLI Get-VasaProvider cmdlet can be used to list registered VASA providers.  The results can be filtered to display only the VASA Providers that belong to the Pure Storage namespace.  Only the active Storage Providers are returned by this cmdlet.

PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'}

Name                 Status       VasaVersion LastSyncTime           Namespace            Url
----                 ------       ----------- ------------           ---------            ---
sn1-x70-c05-33-ct0   online       3.0         11/5/2020 5:06:10 PM   com.purestorage      https://10.21.149.22:8084
sn1-x70-b05-33-ct0   online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.40:8084/version.…
sn1-m20r2-c05-36-ct0 online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.61:8084/version.…

Un-registering and Removing the Storage Providers

There are a couple of ways to remove a storage provider, whether the end user needs to remove and re-register a Storage Provider or simply wants to remove the storage providers.  This can be done either from the vCenter Server UI or with PowerShell via PowerCLI.

Removing Storage Providers in the vCenter Server UI

Here is the workflow to remove the storage providers in the vCenter Server UI:

  1. Navigate to the vCenter Server -> Configure -> Storage Provider Page
  2. Select the standby Storage Provider that is being removed, click Remove, and click Yes to confirm the removal
    vVols-User-Guide-VASA-Provider-06.png
    vVols-User-Guide-VASA-Provider-07.png
  3. Repeat the steps for the active storage provider

Removing Storage Providers via PowerCLI

Here is the workflow to remove storage providers with PowerShell via PowerCLI:

  1. After connecting to the vCenter Server, find the storage provider and storage provider ID that needs to be removed and set a provider variable.
    PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'} | Select-Object Name,Id
    
    Name                 Id
    ----                 --
    sn1-x70-b05-33-ct0   VasaProvider-vasaProvider-3
    sn1-x70-c05-33-ct0   VasaProvider-vasaProvider-7
    sn1-m20r2-c05-36-ct0 VasaProvider-vasaProvider-5
    
    PS /Users/alex.carver> $provider = Get-VasaProvider -Id VasaProvider-vasaProvider-7
    PS /Users/alex.carver> $provider
    
    Name                 Status       VasaVersion LastSyncTime           Namespace            Url
    ----                 ------       ----------- ------------           ---------            ---
    sn1-x70-c05-33-ct0   online       3.0         11/5/2020 5:06:10 PM   com.purestorage      https://10.21.149.22:8084
    
  2. Remove the storage provider with Remove-VASAProvider with the provider variable. 
    PS /Users/alex.carver> Remove-VasaProvider -Provider $provider -confirm:$false
    PS /Users/alex.carver>
    
  3. Repeat the same steps with the second storage provider
    PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'} | Select-Object Name,Id
    
    Name                 Id
    ----                 --
    sn1-x70-b05-33-ct0   VasaProvider-vasaProvider-3
    sn1-x70-c05-33-ct1   VasaProvider-vasaProvider-8
    sn1-m20r2-c05-36-ct0 VasaProvider-vasaProvider-5
    
    PS /Users/alex.carver> $provider = Get-VasaProvider -Id VasaProvider-vasaProvider-8
    PS /Users/alex.carver> $provider
    
    Name                 Status       VasaVersion LastSyncTime           Namespace            Url
    ----                 ------       ----------- ------------           ---------            ---
    sn1-x70-c05-33-ct1   online       3.0         11/11/2020 1:19:57 PM  com.purestorage      https://10.21.149.23:8084
    
    PS /Users/alex.carver> Remove-VasaProvider -Provider $provider -confirm:$false
    PS /Users/alex.carver>
    PS /Users/alex.carver> Get-VasaProvider | Where-Object {$_.Namespace -eq 'com.purestorage'}
    
    Name                 Status       VasaVersion LastSyncTime           Namespace            Url
    ----                 ------       ----------- ------------           ---------            ---
    sn1-x70-b05-33-ct0   online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.40:8084/version.…
    sn1-m20r2-c05-36-ct0 online       3.0         10/23/2020 11:37:26 AM com.purestorage      https://10.21.149.61:8084/version.…
    

The main reason the workflow uses the VASA provider ID is inconsistent behavior observed when using the VASA provider name to remove the second provider.  The behavior was much more consistent when using the provider ID.


[Back to Top]


Configuring Host Connectivity

For an ESXi host to access FlashArray storage, an array administrator must create a host object. A FlashArray host object (usually called host) is a list of the ESXi host’s initiator iSCSI Qualified Names (IQNs) or Fibre Channel Worldwide Names (WWNs). Arrays represent each ESXi host as one host object.

Similarly, arrays represent a VMware cluster as a host group, a collection of hosts with similar storage-related attributes. For example, an array would represent a cluster of four ESXi hosts as a host group containing four host objects, each representing an ESXi host. The FlashArray User Guide contains instructions for creating hosts and host groups.

Pure Storage recommends using the Pure vSphere Plugin to create FlashArray hosts and host groups that are mapped to ESXi Hosts and ESXi Clusters.


Using the Pure Storage vSphere Plugin to Create and Configure FlashArray Host Groups

The Pure Storage Plugin for the vSphere Client gives VMware users insight into and control of their Pure Storage FlashArray environment while directly logged into the vSphere Client. The Pure Storage plugin extends the vSphere Client interface to include environmental statistics and objects that underpin the VMware objects in use, and to provision new resources as needed.

Viewing Host Configuration

Creating Host Groups

Without the Pure Storage plugin the process of creating hosts and host groups on the FlashArray can be a slow and tedious process.

The steps required to complete this task would be to:

  1. Navigate to each ESXi host you wish to connect to the FlashArray and locate the initiator port identifiers (WWPNs, IQN(s), or NQN).
  2. Login to the FlashArray and create a new host object for each ESXi host followed by setting the applicable port identifiers for each of the hosts.
  3. Once the host objects have been created a new host group is created and each host object is manually moved to the applicable host group.

Not only is the process above slow but it also leaves room for human error during the configuration process. In many instances we have found that port identifiers have been applied to the wrong host objects, misspelled, or missing entirely if the end-user was not paying close attention. Additionally, this process often requires coordination between vSphere and Storage administrators which leaves room for additional errors and delays in completing this critical task.
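For the first of those steps, a PowerCLI sketch like the one below can at least gather the FC initiator WWPNs (or iSCSI IQNs) for every host in a cluster in one pass so they can be copied accurately; the cluster name is a placeholder.

# Collect FC WWPNs for every ESXi host in a cluster ("Prod-Cluster" is a placeholder name).
Get-Cluster -Name "Prod-Cluster" | Get-VMHost | ForEach-Object {
    $esx = $_
    Get-VMHostHba -VMHost $esx -Type FibreChannel | ForEach-Object {
        [PSCustomObject]@{
            Host = $esx.Name
            Hba  = $_.Device
            WWPN = "{0:x}" -f $_.PortWorldWideName   # format the decimal WWPN as hex
        }
    }
}

# For iSCSI, the IQN is on the software adapter instead:
# Get-Cluster -Name "Prod-Cluster" | Get-VMHost | Get-VMHostHba -Type IScsi | Select-Object VMHost, IScsiName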

By utilizing the Pure Storage plugin this process becomes entirely automated and allows for the creation of dozens of hosts in a matter of seconds or minutes. It can also be completed by the vSphere administrator directly from the vSphere Client, which frees up the storage administrator to focus on other, more pressing issues within the environment.

Due to the reasons outlined above, Pure Storage recommends using the plugin for the creation of new host and host group objects.

Starting with the 4.4.0 version of the Pure Storage Plugin, the new hosts created during host group creation will also be configured with the ESXi host personality.

Due to a slight difference between creating a Fibre Channel (FC) and an iSCSI host group from the plugin, each process is outlined separately below.

Also note that all hosts must be in a VMware cluster; the plugin does not support creating host groups for ESXi hosts that are not in a cluster. If for some reason the host cannot be put in a VMware cluster, manual creation of the FlashArray host is required. For the host-side configuration in the case of iSCSI, this can be done via the plugin. Skip to the last section of this page for information.


Creating a Host Group
  1. Right-click on the ESXi cluster you wish to create a host group for.
  2. Navigate to Pure Storage > Add/Update Host Group.
    clipboard_e66cc8bd80b211b19cb9b635320be9f3a.png
  3. Select the FlashArray on which to configure the host connectivity.
    clipboard_ee4bbc9ed09cf1a856dcca8601bd57f5c.png
  4. Select Fibre Channel or iSCSI. The plugin will then auto-generate the names of the hosts and host group. They can be changed later if needed.
    If the host/host group does not yet exist on the array, it will be marked as Will be created. If it does exist, it will be marked as Already configured.
    clipboard_eddc7d64d0a2c20b31a463ff868695b7d.png clipboard_ed21786f92f5c9004b2d9221a25521ef0.png
    If the host name already exists on the array, the plugin will append the protocol name to the host name to make it unique. clipboard_e2d3de88ee5cf06ef4334d800f196bd84.png
    A protocol will be grayed out if the target FlashArray does not currently offer that particular protocol. clipboard_e7a771a86269ad03ada4d698b1f3b6f95.png
  5. If you have selected Configure iSCSI initiators on the hosts, the plugin will also configure the iSCSI target information and best practices on that particular host or hosts. See the section entitled iSCSI Configuration Workflow for details.
  6. Click Create to complete the creation.
    clipboard_e22e31cbcd2dc38bc3491399622b6692b.png

Configuring iSCSI Host Groups

The task of configuring iSCSI has traditionally been tedious and error-prone, as there are a lot of steps to remember throughout the process. The plugin aims to eliminate some of this complexity by automating parts of the configuration.

iSCSI Configuration Workflow

When the Configure iSCSI initiators on hosts workflow is selected, the following actions are taken by the plugin:

  • Creates an iSCSI Software Adapter on each selected ESXi host (if one is not already created).
  • Adds the FlashArray iSCSI IP addresses to the "Dynamic Discovery" section of the iSCSI Software Adapter.
  • Applies Pure Storage best practices for iSCSI configurations on the newly established iSCSI sessions (a verification sketch follows below), including:
    • DelayedAck set to disabled
    • LoginTimeout set to 30 seconds

These actions are completely non-disruptive to existing iSCSI connections to other Pure Storage FlashArrays and third-party storage vendors. This is because the configuration changes are only applied at the individual iSCSI session level rather than globally.
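To spot-check the resulting settings on a host, the esxcli iSCSI namespace can be queried through PowerCLI's Get-EsxCli. The sketch below is illustrative; the host name is a placeholder and the exact output fields can vary by ESXi release.

# Inspect iSCSI adapter parameters (look for LoginTimeout in the output) on one host.
$esx     = Get-VMHost -Name "esxi-01.example.com"                              # placeholder host name
$esxcli  = Get-EsxCli -VMHost $esx -V2
$adapter = (Get-VMHostHba -VMHost $esx -Type IScsi | Select-Object -First 1).Device

$esxcli.iscsi.adapter.param.get.Invoke(@{ adapter = $adapter })

# DelayedAck is applied by the plugin per discovery address/session rather than globally,
# so check it at that level (for example, esxcli iscsi adapter discovery sendtarget param get).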

If you review the Creating a Host Group section in this document, you will note there is an option to Configure iSCSI initiators on hosts when creating a new host group. If you created the new host/host group objects on the FlashArray with this option, then you do not need to execute the Configure iSCSI workflow separately.

This workflow is for configuring iSCSI after the host / host group objects have already been created on the FlashArray but have not yet completed the iSCSI configuration.

Step 1: Right-click on the ESXi cluster or individual ESXi host you wish to configure iSCSI on.

Step 2: Navigate to Pure Storage > Configure iSCSI.

configure-iscsi.png

Step 3: Select the FlashArray you wish to connect to via iSCSI and select Configure.

configure-iscsi-select-array.png

Once the iSCSI configuration has been completed, you can then start the process of creating new VMFS or vVol datastores for use within the environment.

[Back to Top]  


Protocol Endpoints

The scale and dynamic nature of vVols intrinsically changes VMware storage provisioning. To provide scale and flexibility for vVols, VMware adopted the T10 administrative logical unit (ALU) standard, which it calls protocol endpoint (PE). vVols are connected to VMs through PEs acting as subsidiary logical units (SLUs, also called sub-luns).

The FlashArray vVol implementation makes PEs nearly transparent. Array administrators seldom deal with PEs, and not at all during day-to-day operations.

Protocol Endpoints (PEs)

A typical VM has multiple virtual disks, each instantiated as a volume on the array and addressed by a LUN. The ESXi 6.5 support limits of 512 SCSI devices (LUNs) per host and 2,000 logical paths to them can easily be exceeded by even a modest number of VMs.

Moreover, each time a new volume is created or an existing one is resized, VMware must rescan its I/O interconnects to discover the change. In large environments, rescans are time-consuming; rescanning each time the virtual disk configuration changes is generally considered unacceptable.

VMware uses PEs to eliminate these problems. A PE is a volume of zero capacity with a special setting in its Vital Product Data (VPD) page that ESXi detects during a SCSI inquiry. It effectively serves as a mount point for vVols. It is the only FlashArray volume that must be manually connected to hosts to use vVols.

Fun fact: Protocol endpoints were formerly called I/O de-multiplexers. PE is a much better name.

When an ESXi host requests access to a vVol (for example, when a VM is powered on), the array binds the vVol to it. Binding is a synonym for sub-lun connection. For example, if a PE uses LUN 255, a vVol bound to it would be addressed as LUN 255:1.  The section titled vVol Binding describes vVol binding in more detail.

PEs greatly extend the number of vVols that can be connected to an ESXi cluster; each PE can have up to 16,383 vVols per host bound to it simultaneously. Moreover, a new binding does not require a complete I/O rescan. Instead, ESXi issues a REPORT_LUNS SCSI command with SELECT REPORT to the PE to which the sub-lun is bound. The PE returns a list of sub-lun IDs for the vVols bound to that host. In large clusters, REPORT_LUNS is significantly faster than a full I/O rescan because it is more precisely targeted.

The FlashArray PE Implementation

A Protocol Endpoint on the FlashArray can be viewed and connected from either the FlashArray UI or CLI.

Using the FlashArray UI to Manage the Protocol Endpoint

When its first VASA provider is registered, a FlashArray automatically creates a PE called pure-protocol-endpoint.  The pure-protocol-endpoint can be filtered in the Volumes view.  A PE can be connected from the PE volume view or from a Host/Host Group view in the FlashArray UI.

From the Storage -> Volumes view
Click on the options and select Show Protocol Endpoints
vVols-User-Guide-PE-01.png
This view will display the Protocol Endpoints for the FlashArray

vVols-User-Guide-PE-02.png

From the PE View the PE can be connected to a Host or Host Group
Best Practice is to connect the PE to a Host Group and not Hosts individually. 
vVols-User-Guide-PE-03.png

From the Connect Host Groups page you can select one or multiple Host Groups to connect the PE to
vVols-User-Guide-PE-04.png

Using the FlashArray CLI to Manage the Protocol Endpoint

From the FlashArray CLI a storage admin can manage the Protocol Endpoint.  This includes listing/viewing, creating, connecting, disconnecting or destroying a protocol endpoint.

Protocol endpoints that have been created can be listed with purevol list --protocol-endpoint

pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6

A protocol endpoint can be created with purevol create --protocol-endpoint

pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint prod-protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7

pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6

To connect a protocol endpoint use either purehgroup connect or purevol connect

pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 10 prod-protocol-endpoint
Name                    Host Group       Host       LUN
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  10

pureuser@sn1-x50r2-b12-36> purevol list --connect
Name                                Size  LUN  Host Group       Host
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-1-FC
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-2-FC
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-3-FC

A protocol endpoint can be disconnected from a host and host group with purevol disconnect.

However, if there are any active sub-lun connections this operation will fail as disconnecting the PE would cause a sev-1 and data path failure to that ESXi host.

pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 11 pure-protocol-endpoint
Name                    Host Group       Host       LUN
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  11
pureuser@sn1-x50r2-b12-36> purevol disconnect --hgroup Prod-Cluster-FC pure-protocol-endpoint
Name                    Host Group       Host
pure-protocol-endpoint  Prod-Cluster-FC  -

A disconnected Protocol Endpoint can be destroyed with purevol destroy. DO NOT DESTROY THE DEFAULT PURE-PROTOCOL-ENDPOINT!

pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint dr-protocol-endpoint
Name                  Source  Created                  Serial
dr-protocol-endpoint  -       2020-12-02 14:15:23 PST  F4252922ADE248CF000113EA

pureuser@sn1-x50r2-b12-36> purevol destroy dr-protocol-endpoint
Name
dr-protocol-endpoint

A FlashArray’s performance is independent of the number of volumes it hosts; an array’s full performance capability can be delivered through a single PE. PEs are not performance bottlenecks for vVols, so a single PE per array is all that is needed.

Configuring a single PE per array does not restrict multi-tenancy, because sub-lun connections are host-specific.

A FlashArray automatically creates a default pure-protocol-endpoint PE when its first VASA provider is registered. If necessary, additional PEs can also be created manually.  However, in most cases the default pure-protocol-endpoint is fine to use.  There is no additional HA value added by connecting a host to multiple protocol endpoints.

Do not rename, destroy, or eradicate the pure-protocol-endpoint PE on the FlashArray.  This namespace is required for VASA to store the metadata it needs to work correctly with the FlashArray.

BEST PRACTICE: Use one PE per vVol container. All hosts should share the same PE; vVol-to-host bindings are host-specific, so multi-tenancy is inherently supported.

More than one PE can be configured, but this is seldom necessary.

As is typical of the FlashArray architecture, vVol support, and the PE implementation in particular, is as simple as possible.

Protocol Endpoints in vSphere

There are multiple ways to view the Protocol Endpoints that an ESXi host is connected to or is currently using as a mount point for a vVol Datastore.

  • From the Hosts and Datacenter view, navigate to Host -> Configure -> Storage Devices.
    This view shows all storage devices connected to this ESXi host.
    All Protocol Endpoints that are connected via the SAN will show as a 1.00 MB device.
    vvols-guide-pe-vsphere-view-01.png
    From this view the LUN ID, Transport, Multipathing, and much more can be found.
  • From the Hosts and Datacenter view, navigate to Host -> Configure -> Protocol Endpoints.
    This view only displays Protocol Endpoints that are actively being used as a mount point for a vVol Datastore, along with their Operational State.

    vvols-guide-pe-vsphere-view-02.png
    On the previous page there was a PE with LUN ID 253; however, on this page that PE does not show up as configured or operational.
    This is because no vVol Datastore is currently using that PE.  This is expected behavior: if a vVol datastore is not mounted to the ESXi host, no configured PEs will display in this view.

    Multipathing is configured on the Protocol Endpoint and not on a sub-lun connection.  Each sub-lun connection inherits the multipathing policy set on the PE.

    BEST PRACTICE: Configure the round robin path selection policy for PEs (see the esxcli sketch after this list).

  • From the Datastore View, Navigate to a vVol Datastore -> Configure -> Protocol Endpoints
    This page displays all of the PEs on the FlashArray that hosts this vVol Datastore (storage container).  By default there will only be one PE on the FlashArray.
    In this example there are two PEs.

    vvols-guide-pe-vsphere-view-03.png
    Select one of the PEs and click on the Host Mount Data tab.
    From here the mounted hosts will be displayed.  Take note that there is a UI bug that will always show the Operational Status as not accessible.
  • By comparison, when the second PE is viewed, there are no mounted hosts.  This is because the second PE is not connected via the SAN to any ESXi hosts in this vCenter.

    vvols-guide-pe-vsphere-view-04.png
  • From the Datastore view, navigate to a vVol Datastore -> Configure -> Connectivity with Hosts.

    vvols-guide-pe-vsphere-view-05.png
    This page shows the mounted hosts' connectivity with the vVol Datastore.  Here the expected status is Connected.  If a host has lost management connectivity, the host will show as disconnected.
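
The round robin policy can also be checked or applied from the ESXi command line. A minimal sketch using standard esxcli commands; the naa identifier shown is a placeholder for the PE device identifier on your host:

# Add a SATP claim rule so newly claimed Pure Storage devices, including PEs, default to round robin
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "PURE" -M "FlashArray" -P "VMW_PSP_RR" -O "iops=1" -e "Pure FlashArray Round Robin"

# Or set the path selection policy on an already-claimed PE device
esxcli storage nmp device set --device naa.624a9370xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

# Verify the policy currently applied to the device
esxcli storage nmp device list --device naa.624a9370xxxxxxxxxxxxxxxx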

With regard to PE queue depths, ESXi handles queue depth limits differently for PEs than for other volumes. Pure Storage recommends leaving ESXi PE queue depth limits at the default values.

BEST PRACTICE: Leave PE queue depth limits at the default values unless performance problems occur.
The blog post at https://blog.purestorage.com/queue-depth-limits-and-vvol-protocol-endpoints/ contains additional information about PE queue depth limits.
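
As a quick host-side check, the current PE queue depth defaults can be inspected with esxcli. A brief sketch, assuming the advanced option name Scsi.ScsiVVolPESNRO used by recent ESXi releases for the PE outstanding-I/O default; the naa identifier is a placeholder:

# Show the default number of outstanding I/Os applied to newly discovered protocol endpoints
esxcli system settings advanced list -o /Scsi/ScsiVVolPESNRO

# Show the queue depth values reported for a specific PE device
esxcli storage core device list --device naa.624a9370xxxxxxxxxxxxxxxx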

[Back to Top]  


vVol Datastore

vVols replace LUN-based datastores formatted with VMFS. There is no file system on a vVol datastore, nor are vVol-based virtual disks encapsulated in files.

The datastore concept does not disappear entirely, however. VMs must be provisioned somewhere. Historically, VMs have typically been implemented as files in NFS mounts or in a VMFS. Datastores are necessary, both because VM provisioning tools use them to house new VMs, and because they help control storage allocation and differentiate between different types of storage.

However, VMFS datastores limit flexibility, primarily because their sizes and features are specified when they are created, and it is not possible to assign different features to individual objects in them. To overcome this limitation, the vVol architecture includes a storage container object, generally referred to as a vVol datastore, with two key properties:

Capacity limit

  • Allows an array administrator to limit the capacity that VMware administrators can provision as vVols.

Array capabilities

  • Allows vCenter to determine whether an array can satisfy a configuration request for a VM.

A vVol datastore is sometimes referred to as a storage container. Although the terms are essentially interchangeable, this report uses the term vVol datastore exclusively.


The FlashArray Implementation of vVol Datastores

FlashArray vVol datastores have no artificial size limit. The initial FlashArray vVols release, Purity//FA 5.0.0, supports a single 8-petabyte vVol datastore per array; in Purity//FA 6.4.1 and higher, the number of vVol datastores has been increased to the array's pod limit. The default vVol datastore size is 1 petabyte in 6.4.1 and later because of an issue with the vSphere OVF deployment process. Prior to Purity//FA 6.4.4, Pure Storage Technical Support can change an array’s default vVol datastore size on customer request to alter the amount of storage VMware can allocate; should this be desired, open a support case with Pure Storage to have the size change applied.  With Purity//FA 6.4.4 and later, the size of pod-based storage containers can be set manually with the pod quota; in effect, the pod quota is the storage container size.
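
As a rough illustration of the Purity//FA 6.4.4 and later behavior described above, the pod quota can be managed from the FlashArray CLI. The pod name below is a hypothetical example and the quota flag name is an assumption; confirm the exact syntax against the CLI reference for your Purity release:

# Set a 500 TB quota on the pod backing a vVol storage container (flag name assumed: --quota-limit)
purepod setattr --quota-limit 500T vVols-Pod-01

# Review the pod afterward
purepod list vVols-Pod-01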

With the release of Purity//FA version 6.4.1, the VASA provider now supports multiple storage containers.  In order to leverage multiple storage containers and multiple vVol Datastores on the same array, the Purity version will need to be at 6.4.1 or higher.

Purity//FA version 5.0.0 and newer have the VASA service as a core part of the Purity operating environment, so if Purity is up, then VASA is running.  Once storage providers are registered, a vVol Datastore can be "created" and/or mounted to ESXi hosts.  However, in order for vSphere to implement and use vVols, a Protocol Endpoint on the FlashArray must be connected to the ESXi hosts; otherwise there is only a management path connection and not a data path connection.

FlashArrays require two items to create a volume—a size and a name. vVol datastores do not require any additional input or enforce any configuration rules on vVols, so creation of FlashArray-based vVols is simple.


Creating a Storage Container on the FlashArray (Optional)

With the release of Purity 6.4.1, multiple storage containers can be created on a single FlashArray. There will always be the default storage container, which is the "root pod", or default location for volumes and volume groups.  On the FlashArray, additional storage containers are managed through the Pod object.

Not all use cases require creating additional storage containers.  By default, every FlashArray has the default storage container in the root, so an additional storage container is not required to use vVols.  However, in the event that multiple storage containers are required, this section covers how to create them through the FlashArray GUI (an equivalent CLI sketch follows the steps below).

1. Navigate to the pod creation screen by clicking (1) Storage, (2) Pods and (3) + sign to create a new pod.

MSC1.png

2. Give the pod a (1) Name then click (2) Create.

MSC2.png

3. After pod creation, the GUI will direct you to the screen for the pod object. From here, under volumes, select the (1) ellipses then click (2) Show Protocol Endpoints. This will change this part of the GUI to show only PEs (Protocol Endpoints) attached to the pod. 

MSC3.png

4. To complete the storage container creation process, create the protocol endpoint by selecting the (1) ellipses then clicking (2) Create Protocol Endpoint. Please note that, generally speaking, only one PE per pod is necessary, but more are supported if needed.

MSC4.png

5. Give the PE a (1) Name then click (2) Create to create it.

MSC5.png

6. After the PE has been created, it will show up under Volumes in the pod screen. (1) Click on the PE name. Note that the name format is PodName::ProtocolEndpointName.

MSC6.png

7. First, highlight and copy the serial number of the PE; this will be used later in the vCenter GUI to validate the connection of the PE to the host object. Click the (1) ellipses then click (2) Connect. While the PE can be connected to individual hosts, the recommended approach is to connect it to a host group.

MSC7.png

8. Select the (1) Host Group to connect the PE to then click (2) Connect.

MSC8.png

9. To validate that the PE was successfully connected to the correct host objects, log into the vCenter client that manages the hosts in the host group that were connected earlier. Select (1) Hosts and Clusters view, select (2) a Host object, select (3) Storage Devices, left click the (4) Filter button and finally (5) paste the PE serial number. vCenter will filter devices that have serial number in the name. If the PE does not show up initially, you might need to rescan the storage devices associated with that host.

MSC9.png

10. If the PE shows up correctly as a Storage Device, next rescan the Storage Providers. Still under Hosts and Clusters view, select (1) the vCenter previously configured with the appropriate storage provider, select (2) Configure, (3) Storage Providers, (4) the Storage Provider for the array where the pod was configured and for the FlashArray controller that is Active (not Standby) then select (5) Rescan.

MSC10.png

11. Now that the additional PE has been connected and configured in a pod on the FlashArray, proceed to Mounting a vVol Datastore.
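
For reference, the same storage container can be built from the FlashArray CLI instead of the GUI. A minimal sketch; the pod, protocol endpoint, and host group names are hypothetical examples:

# Create the pod that will act as the additional storage container
purepod create vVols-Pod-01

# Create a protocol endpoint inside the pod (pod volumes use the pod::volume naming convention)
purevol create --protocol-endpoint vVols-Pod-01::vVols-Pod-01-pe

# Connect the new protocol endpoint to the host group that backs the ESXi cluster
purevol connect --hgroup Prod-Cluster-FC vVols-Pod-01::vVols-Pod-01-pe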


Mounting a vVol Datastore

A vVol datastore should be mounted to an ESXi host with access to a PE on the array that hosts the vVol datastore. Mounting a vVol datastore to a host requires:

  • Registration of the array's VASA providers with vCenter (the management path).
  • A SAN connection between the ESXi host and a PE on the array (the data path).

The latter requires that (a) an array administrator connect the PE to the host or host group, and (b) a VMware administrator rescan the ESXi host’s I/O interconnects.

An array administrator can use the FlashArray GUI, CLI, or REST API to connect a PE and a host or host group; the FlashArray User Guide contains instructions for connecting a host or host group and a volume.

With Pure Storage's vSphere Plugin, a VMware administrator can connect a PE to an ESXi Cluster and mount its vVol datastore without array administrator involvement.

Using the Plugin to Mount vVol Datastore

Once the Storage Providers are registered, the vVol Datastore can be created and mounted using the vSphere Plugin.  Click below to expand the workflow for creating the vVol Datastore and mounting it to an ESXi Cluster.  The workflow can also be found in the demo video at this point.

Mounting the vVol Datastore with the Pure Storage vSphere Plugin

The ESXi hosts will need to have been added to the FlashArray, and the best practice is to correlate the ESXi cluster to a FlashArray Host Group. Each ESXi host in that cluster should then be added to the FlashArray Host Group.

  1. Right-click the ESXi Cluster on which you want to create and mount the vVol Datastore.  Go to the Pure Storage option and then click on Create Datastore.
    MountvVolDatastore1.png
  2. Choose to create a vVol FlashArray Storage Container (vVol Datastore).
    MountvVolDatastore2.png
  3. Select the ESXi Cluster that will be the compute resource to mount the vVol Datastore to.  Best practice for vVols is to mount the vVol Datastore to the host group and not to individual ESXi hosts.  Why is this important?  During this step, the Plugin checks that the Host Group on the FlashArray is connected to a Protocol Endpoint; if there is no connection, the Plugin automatically connects the Protocol Endpoint on that FlashArray to the Host Group.
    MountvVolDatastore3.png
  4. Select a FlashArray to back the vVol datastore.
    MountvVolDatastore4.png
  5. Select an existing container or, optionally, create a new container. If using an existing container, select the container to use. Please note: on FlashArrays running Purity 6.4.2 or higher, multiple storage containers are managed through the pod object on the FlashArray.

    MountvVolDatastore5.png
  6. Populate the datastore name to be created. The container name is not editable because an existing container was selected, so the name selected from the previous window is pre-populated.
    MountvVolDatastore6-existing.png
  7. Optional. If a new container was selected, populate the datastore name. If the container name should match the datastore name, check the Same as datastore name checkbox; otherwise, uncheck the checkbox and populate the container name field. Finally, populate the container quota value with the size of the datastore you'd like to reflect in vSphere. This will set a capacity quota on the pod on the FlashArray.
    MountvVolDatastore7-new.png
  8. Review the information and finish the workflow.
    MountvVolDatastore8.png
  9. From the Datastore Page, click on the newly created vVol Datastore and then check the Connectivity with the Hosts in the ESXi Cluster to ensure that they are connected and healthy.
    MountvVolDatastore9.png

Creating multiple containers through Pure's vSphere plugin is not currently supported but will be in an upcoming release of the plugin.

Mounting vVol Datastores Manually: FlashArray Actions 

Alternatively, vVol datastores can be provisioned by connecting the PE to the hosts or host group, rescanning each host’s I/O interconnects, and mounting the vVol datastore to each host. These operations require both FlashArray and VMware involvement, however. Array administrators can use the FlashArray GUI, CLI, or REST interfaces, or tools such as PowerShell. VMware administrators can use the Web Client, the VMware CLI, or the VMware SDK and SDK-based tools like PowerCLI.

Pure Storage recommends using the Plugin to provision PEs to hosts.  Keep in mind that the FlashArray UI does not allow creation of Protocol Endpoints; it does, however, allow finding Protocol Endpoints and connecting them to Hosts and Host Groups.

A Protocol Endpoint on the FlashArray can be viewed and connected from either the FlashArray UI or CLI; the detailed workflows are covered earlier in Using the FlashArray UI to Manage the Protocol Endpoint and Using the FlashArray CLI to Manage the Protocol Endpoint.

Mounting vVol Datastores Manually: Web Client Actions

Navigate to the vCenter UI once the PE is connected to the FlashArray Host Group that corresponds to the vSphere ESXi Cluster.

Although the PE volumes are connected to the ESXi hosts from a FlashArray standpoint, ESXi may not recognize them until an I/O rescan occurs. On recent versions of Purity and ESXi, a Unit Attention is issued to the ESXi hosts when the PE is connected, and the hosts dynamically update the devices presented via the SAN. If the FlashArray is not on a recent release of Purity (5.1.15+, 5.3.6+, or 6.0.0+), a storage rescan from the ESXi hosts will be required for the PE to show up in the hosts' connected devices.
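
If a rescan is needed, it can be triggered from vCenter (right-click the host or cluster and select Storage -> Rescan Storage) or directly on the host. A minimal sketch using standard esxcli commands:

# Rescan all storage adapters so the newly connected PE is discovered
esxcli storage core adapter rescan --all

# Confirm the PE appears as a device; Pure FlashArray volumes report with the naa.624a9370 prefix
esxcli storage core device list | grep -i naa.624a9370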

To display a provisioned PE, select the host in the inventory pane, select the Configure tab, and click Storage Devices. The PE appears as a 1 megabyte device.

vvols-guide-pe-vsphere-view-01.png

This screen is useful for finding the PEs that have been successfully connected via a SAN transport method.  Multipathing can be configured on the PE from this view as well.

Note that in this example there are three PEs from three different arrays.  When navigating to the Storage -> Protocol Endpoints screen, only the PEs that are used as a vVol Datastore mount are displayed.  In this example only two show, as there are currently only two vVol Datastores (from two different arrays) created.

vvols-guide-pe-vsphere-view-02.png

The expected behavior is that the ESXi host will only display connected PEs that are currently being used as mounts for a vVol Datastore.
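
The same information can be confirmed from the ESXi command line; a brief sketch using the esxcli storage vvol namespace:

# List the protocol endpoints the host is currently using for vVols
esxcli storage vvol protocolendpoint list

# List the vVol storage containers (vVol datastores) known to the host
esxcli storage vvol storagecontainer list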

To mount a vVol datastore, right-click the target host or cluster, select Storage from the dropdown menu, and select New Datastore from the secondary dropdown to launch the New Datastore wizard.

vvols-guide-vvol-ds-01.png

Best Practice is to create and mount the vVol Datastore against the ESXi Cluster which would be mapped to a FlashArray Host Group.

Select vVol as the datastore type.

vvols-guide-vvol-ds-02.png

Enter in a friendly name for the datastore and select the vVol container in the Backing Storage Container list.

This is how the storage container list looks on Purity//FA 6.4.1 and higher.  The default container will show up as the default_storage_container (red box) and all others will show up with the pod name as the storage container name (orange box).
DefaultContainer641.png
This is how the storage container list looks in Purity//FA 6.4.0 or earlier.  The default container for an array will only be shown as Vvol container.
vvols-guide-vvol-ds-03.png

Clicking a container displays the array that hosts it in the lower Backing Storage Container panel.

No Backing Storage listing typically indicates either that the array’s VASA providers have not been registered or that vCenter cannot communicate with them.

Select the host(s) on which to mount the vVol datastore.  Best Practice would be to connect the vVol Datastore to all hosts in that ESXi Cluster.

vvols-guide-vvol-ds-04.png

Review the configuration details and then click Finish.

vvols-guide-vvol-ds-05.png

Once a vVol datastore is mounted, the Configure tab for any ESXi host to which it is mounted lists the PEs available from the arrays to which the host is connected via SAN transport.  Note that the PE with LUN 253 is now listed as a PE for the ESXi host.

vvols-guide-vvol-ds-09.png

Mounting a vVol Datastore to Additional Hosts

In the event that an ESXi host has been added to a cluster, or the vVol Datastore was only mounted to some hosts in the cluster, there is a workflow to mount the vVol Datastore to additional hosts.

To mount the vVol datastore to additional hosts, right-click on the vVol Datastore and select Mount Datastore to Additional Hosts from the dropdown menu to launch the Mount Datastore to Additional Hosts wizard.

vvols-guide-vvol-ds-06.png

Select the hosts to which to mount the vVol datastore by checking their boxes and click Finish.

vvols-guide-vvol-ds-07.png

Using a vVol Datastore

A vVol datastore is neither a file system nor a volume (LUN) per se, but an abstraction that emulates a file system to (a) represent VMs provisioned through it and (b) manage VM space allocation. It can be viewed as a collection of references to vVols.

vVol datastores are managed similarly to conventional datastores. For example, the Web Client file browser and an ESXi SSH session can display a vVol datastore’s contents.

vSphere UI vVol Datastore View
vvols-guide-vvol-ds-08.png
ESXi CLI view of vVol Datastore Content
[root@ac-esxi-a-16:~] cd /vmfs/volumes/
[root@ac-esxi-a-16:/vmfs/volumes] cd sn1-m20r2-c05-36-vVols-DS/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] ls
AC-3-vVols-VM-1                               rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a  rfc4122.a46478bc-300d-459e-9b68-fa6acb59c01c  vVols-m20-VM-01                               vvol-w2k16-no-cbt-c-2
AC-3-vVols-VM-2                               rfc4122.7255934c-0a2e-479b-b231-cef40673ff1b  rfc4122.ba344b42-276c-4ad7-8be1-3b8a65a52846  vVols-m20-VM-02
rfc4122.1f972b33-12c9-4016-8192-b64187e49249  rfc4122.7384aa04-04c4-4fc5-9f31-8654d77be7e3  rfc4122.edfc856c-7de1-4e70-abfe-539e5cec1631  vvol-w2k16-light-c-1
rfc4122.24f0ffad-f394-4ea4-ad2c-47f5a11834d0  rfc4122.8a49b449-83a6-492f-ae23-79a800eb5067  vCLS (1)                                      vvol-w2k16-light-c-2
rfc4122.31123240-6a5d-4ead-a1e8-b5418ab72a3e  rfc4122.97815229-bbef-4c87-b69b-576fb55a780c  vVols-b05-VM-02                               vvol-w2k16-no-cbt-c-1
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] cd vVols-m20-VM-01/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS/vVols-m20-VM-01
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] ls
vVols-m20-VM-01-000001.vmdk                                          vVols-m20-VM-01.vmdk                                                 vmware-2.log
vVols-m20-VM-01-321c4c5a.hlog                                        vVols-m20-VM-01.vmsd                                                 vmware-3.log
vVols-m20-VM-01-3549e0a8.vswp                                        vVols-m20-VM-01.vmx                                                  vmware-4.log
vVols-m20-VM-01-3549e0a8.vswp.lck                                    vVols-m20-VM-01.vmx.lck                                              vmware-5.log
vVols-m20-VM-01-Snapshot2.vmsn                                       vVols-m20-VM-01.vmxf                                                 vmware.log
vVols-m20-VM-01-aux.xml                                              vVols-m20-VM-01.vmx~                                                 vmx-vVols-m20-VM-01-844ff34dc6a3e333b8e343784b3c65efa2adffa1-2.vswp
vVols-m20-VM-01.nvram                                                vmware-1.log

[Back to Top]


Types of vVols

The benefits of vVols are rooted in the increased storage granularity achieved by implementing each vVol-based virtual disk as a separate volume on the array. This property makes it possible to apply array-based features to individual vVols.

FlashArray Organization of vVols

FlashArrays organize the vVols associated with each vVol-based VM into a volume group. Each time a VMware administrator creates a vVol-based VM, the hosting FlashArray creates a volume group with the following naming schema: vvol-{VM Name}-{unique 8 character string}-vg

FlashArray syntax limits volume group names to letters, numbers, and dashes; during volume group creation, arrays remove other characters that are valid in virtual machine names.  The length of the volume group name is limited to between 1 and 63 characters; if the VM name is longer than 46 characters, the VM name is truncated in the volume group name.

Volume Groups Area of GUI Volumes Tab
vvol-kb-volume-type-01.png

To list the volumes associated with a vVol-based VM, select the Storage view Volumes tab. In the Volume Groups area, select the volume group name containing the VM name from the list or enter the VM name in the search box.

The Volumes area of the pane lists the volumes associated with the VM.

GUI View of Volume Group Membership
vvol-kb-volume-type-02.png

Clicking a volume name displays additional detail about the selected volume.

GUI View of a vVol's Details
vvol-kb-volume-type-03.png

Note:
Clicking the volume group name in the navigation breadcrumbs returns to the volume groups display.

As with all FlashArray data objects, destroying a volume group moves it to the array’s Destroyed Volume Groups folder for 24 hours before eradicating it permanently.

To recover or eradicate a destroyed volume group, click the respective icons in the Destroyed Volume Groups pane.  A destroyed volume group can only be eradicated if all objects in that volume group have already been eradicated.

FlashArray GUI Destroyed Volume Groups View
vvol-kb-volume-type-04.png

The FlashArray CLI and REST interfaces can also be used to manage volume groups of vVols.
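
As a brief sketch, the volume groups that a FlashArray creates for vVol-based VMs can be listed from the CLI; member vVols appear in purevol list prefixed with their volume group name:

# List all volume groups, including the vvol-{VM Name}-{suffix}-vg groups created for vVol-based VMs
purevgroup list

# List volumes; a VM's vVols show up as <volume-group-name>/<volume-name>
purevol list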

VM Datastore Structures

vVols do not change the fundamental VM architecture:

  • Every VM has a configuration file (a VMX file) that describes its virtual hardware and special settings
  • Every powered-on VM has a swap file.
  • Each virtual disk added to a VM is implemented as a storage object that limits guest OS disk capacity.
  • Every VM has a memory (vmem) file used to store snapshots of its memory state.

Conventional VM Datastores

Every VM has a home directory that contains information, such as:

Virtual hardware descriptions 

Guest operating system version and settings, BIOS configuration, virtual SCSI controllers, virtual NICs, pointers to virtual disks, etc.

Logs

Information used during VM troubleshooting

VMDK files 

Files that correspond to the VM’s virtual disks, whether implemented as NFS, VMFS, physical or virtual mode RDMs (Raw Device Mappings), or vVols. VMDK files indicate where the ESXi vSCSI layer should send each virtual disk’s I/O.

For a complete list of VM home directory contents, refer to VMware's documentation that covers Virtual Machine Files.

When a VMware administrator creates a VM based on VMFS or NFS, VMware creates a directory in its home datastore.

vCenter UI View - VM Settings - VM Options
vvol-kb-volume-type-05.png

vvol-kb-volume-type-06.png

 

Web Client File Browser View of a VM's Home Directory
vvol-kb-volume-type-07.png


With vVol-based VMs, there is no file system, but VMware makes the structure appear to be the same as that of a conventional VM. What occurs internally is quite different, however.

vVol-based VM Datastores

vVol-based VMs use four types of vVols:

  • Configuration vVol (usually called “config vVol”; one per VM)
  • Data vVol (one or more per VM)
  • Swap vVol (one per VM)
  • Memory vVol (zero, one or more per VM)

The sections that follow describe these four types of vVols and the purposes they serve.

In addition to the four types of vVols used by vVol-based VMs, there are vVol snapshots, described in the section titled Snapshots of vVols.

Config vVols 

When a VMware administrator creates a vVol-based VM, vCenter creates a 4-gigabyte thin-provisioned configuration vVol (config vVol) on the array, which ESXi formats with VMFS. A VM’s config vVol stores the files required to build and manage it: its VMX file, logs, VMDK pointers, etc. To create a vVol-based VM, right-click any inventory pane object to launch the New Virtual Machine wizard and specify that the VM’s home directory be created on a vVol datastore.

vCenter UI View - New VM
types-of-vvols-kb-01.png

Note:
For simplicity, the VM in this example has no additional virtual disks.

vCenter UI View - VM Hardware Settings
types-of-vvols-kb-02.png

When VM creation is complete, a directory with the name of the VM appears in the array’s vVol datastore. The directory contains the VM’s vmx file, log file and an initially empty vmsd file used to store snapshot information.

vCenter UI View - vVol DS File Browser
types-of-vvols-kb-03.png

In the Web Client, a vVol datastore appears as a collection of folders, each representing a mount point for the mini-file system on a config vVol. The Web Client GUI Browse Datastore function and ESXi console cd operations work as they do with conventional VMs. Rather than traversing one file system, however, they transparently traverse the file systems hosted on all of the array’s config vVols.

A FlashArray creates a config vVol for each vVol-based VM. Arrays name config vVols by concatenating the volume group name with config-<UUID>. Arrays generate UUIDs randomly; an array administrator can change them if desired.

An array administrator can search for volumes containing a vVol-based VM name to verify that its volume group and config vVol have been created.

FlashArray UI View - Volumes List
types-of-vvols-kb-04.png

As objects are added to a vVol-based VM, VMware creates pointer files in its config vVol; these are visible in its directory. When a VM is deleted, moved to another array, or moved to a non-vVol datastore, VMware deletes its config vVol.

Data vVols

Each data vVol on an array corresponds to a virtual disk. When a VMware administrator creates a virtual disk in a vVol datastore, VMware directs the array to create a volume and creates a VMDK file pointing to it in the VM’s config vVol. Similarly, to resize or delete a virtual disk, VMware directs the array to resize or destroy the corresponding volume.

Creating a Data vVol

vVol-based virtual disk creation is identical to conventional virtual disk creation. To create a vVol-based virtual disk using the Web Client, for example, right-click a VM in the Web Client inventory pane and select Edit Settings from the dropdown menu to launch the Edit Settings wizard.

vCenter UI View - VM Edit Settings
types-of-vvols-kb-05.png

Select New Hard Disk in the New device dropdown and click Add.

vCenter UI View - New Hard Disk Selection
types-of-vvols-kb-06.png

Enter configuration parameters. Select the VM’s home datastore (Datastore Default) or a different one for the new virtual disk, but to ensure that the virtual disk is vVol-based, select a vVol datastore.

vCenter UI View - Specifying Data vVol Parameters
types-of-vvols-kb-07.png

Click OK to create the virtual disk. VMware does the following:

  1. For a VM’s first vVol on a given array, directs the array to create a volume group and a config vVol for it.
  2. Directs the array to create a volume in the VM’s volume group.
  3. Creates a VMDK pointer file in the VM’s config vVol to link the virtual disk to the data vVol on the array.
  4. Adds the new pointer file to the VM’s VMX file to enable the VM to use the data vVol.

The FlashArray GUI Storage view Volumes tab lists data vVols in the Volumes pane of the volume group display.

FlashArray UI View - Volume Group Volume Objects View
types-of-vvols-kb-08.png

Resizing a Data vVol

A VMware administrator can use any of several management tools to expand a data vVol to a maximum size of 62 terabytes while it is online. Although FlashArrays can shrink volumes as well, vSphere does not support that function.

vCenter UI View - vSphere Disallows Volume Shrinking
types-of-vvols-kb-09.png

Note:
VMware enforces the 62 terabyte maximum to enable vVols to be moved to VMFS or NFS, both of whose maximum virtual disk size is 62 terabytes.

At this time VMware does not support expanding a volume that is attached to a SCSI controller with sharing enabled.

To expand a data vVol using the Web Client, right-click the VM in the inventory pane, select Edit Settings from the dropdown menu, and select the virtual disk to be expanded from the dropdown. The virtual disk’s current capacity is displayed. Enter the desired capacity and click OK, and use guest operating system tools to expose the additional capacity to the VM. 

vCenter UI View - Entering Expanded Data vVol Capacity
types-of-vvols-kb-10.png
FlashArray UI View - Updated Capacity Size of the Data vVol
types-of-vvols-kb-11.png

Deleting a Data vVol

Deleting a data vVol is identical to deleting any other type of virtual disk. When a VMware administrator deletes a vVol-based virtual disk from a VM, ESXi deletes the reference VMDK file and directs the array to destroy the underlying volume.

To delete a vVol-based virtual disk, right-click the target VM in the Web Client inventory pane, select Edit Settings from the dropdown menu to launch the Edit Settings wizard. Select the virtual disk to be deleted, hover over the right side of its row and click the  vv52.png  symbol when it appears.

vCenter UI View - Selecting Data vVol for Deletion
types-of-vvol-kb-adhoc.png

To remove the vVol from the VM, click the OK button. To remove it from the VM and destroy it on the array, check the Delete files from datastore checkbox and click OK.

vCenter UI View - Destroying the Volume on the Array
types-of-vvols-kb-12.png

Note:
Delete files from datastore is not a default—if it is not selected, the vVol is detached from the VM, but remains on the array. A VMware administrator can reattach it with the Add existing virtual disk Web Client command.

The ESXi host deletes the data vVol’s VMDK pointer file and directs the array to destroy the volume (move it to its Destroyed Volumes folder for 24 hours).

FlashArray UI View - Deleted Data vVol in Destroyed Volumes Objects
types-of-vvols-kb-13.png

An array administrator can recover a deleted vVol-based virtual disk at any time during the 24 hours following deletion. After 24 hours, the array permanently eradicates the volume and it can no longer be recovered.

Swap vVols

VMware creates swap files for VMs of all types when they are powered on, and deletes them at power-off. When a vVol-based VM is powered on, VMware directs the array to create a swap vVol, and creates a swap (.vswp) file in the VM’s config vVol that points to it.

vCenter UI View - vVol DS Browser - Powered Off VM Files
types-of-vvols-kb-14.png

The files shown above are the components of a powered-off vVol-based VM; there is no vswp file.
FlashArray UI View - Volumes for Powered Off vVol based VM
types-of-vvols-kb-15.png

The VM’s volume group does not include a swap volume.

To power on a vVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown. 

vCenter UI View - Powering On the VM
types-of-vvols-kb-16.png

 When a VM is powered on, the Web Client file navigator lists two vswp files in its folder.

vCenter UI View - vVol DS Browser - Powered On VM Files
types-of-vvols-kb-17.png

VMware creates a vswp file for the VM’s memory image when it is swapped out and another for ESXi administrative purposes.

The swap vVol’s name in the VM’s volume group on the array is Swap- concatenated with a unique identifier. The GUI Volumes tab shows a volume whose size is the VM’s memory size. 

FlashArray UI View - Swap Volume for Powered On VM
types-of-vvols-kb-18.png
vCenter UI View - VMs Virtual Memory Size
types-of-vvols-kb-19.png

Like all FlashArray volumes, swap vVols are thin-provisioned—they occupy no space until data is written to them.

To power off a vVol-based VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Shut Down Guest OS from the secondary dropdown.

vCenter UI View - Command to Shutdown the Guest OS
types-of-vvols-kb-20.png

When a VM is powered off, its vswp file disappears from the Web Client file navigator, and the FlashArray GUI Volumes tab no longer shows a swap volume on the array.

FlashArray UI View - Powered Off VM's Volumes (note there is no swap)
types-of-vvols-kb-21.png

VMware destroys and immediately eradicates swap vVols from the array. (They do not remain in the Destroyed Volumes folder for 24 hours.)

FlashArray UI View - Audit log of operations to destroy and eradicate Swap
types-of-vvols-kb-22.png

Memory vVols

VMware creates memory vVols for two reasons:

VM suspension

When a VMware administrator suspends a VM, VMware stores its memory state in a memory vVol. When the VM resumes, its memory state is restored from the memory vVol, which is then deleted.

VM snapshots

When a VMware management tool creates a snapshot of a vVol-based VM with the “store memory state” option, VMware creates a memory vVol. Memory vVols that contain VM snapshots are deleted when the snapshots are deleted. They are described in the section titled Creating a VM Snapshot with Saved Memory.

To suspend a running VM, right-click its entry in the Web Client inventory pane, select Power from the dropdown menu, and Suspend from the secondary dropdown.

vCenter UI View - Command to Suspend the VM
types-of-vvols-kb-23.png

VMware halts the VM’s processes, creates a memory vVol and a vmss file to reference it, de-stages (writes) the VM’s memory contents to the memory vVol, and directs the array to destroy and eradicate its swap vVol.

FlashArray UI View - Memory vVol in the VM Volume Group
types-of-vvols-kb-24.png
vCenter UI View - vVol DS Browser - Memory vVol File
types-of-vvols-kb-25.png

When the VM’s memory has been written, the ESXi host unbinds its vVols. They are bound again when it is powered on.

To resume a suspended VM, right-click it in the Web Client inventory pane, select Power from the dropdown menu, and Power On from the secondary dropdown.

vCenter UI View - Powering on the Suspended VM
types-of-vvols-kb-26.png

Powering on a suspended VM binds its vVols, including its memory vVol, to the ESXi host, and loads its memory state from the memory vVol. Once loading is complete, VMware unbinds the memory vVol and destroys the volume, and the VASA provider automatically eradicates it on the FlashArray.

FlashArray UI View - Eradicated Memory vVol
types-of-vvols-kb-27.png

Recovering Deleted vVols

Deleted data and config vVols are both recoverable within 24 hours of deletion.

Throughout a VM’s life, it has a config vVol in every vVol datastore it uses. The config vVol hosts the VM’s home folder which contains its VMX file, logs, swap pointer file, and data vVol (VMDK) and snapshot pointer files. Restoring a config vVol from a snapshot and the corresponding data and snapshot vVols effectively restores a deleted VM.

vCenter UI View - vVol DS Browser - Typical VM Home Directory
types-of-vvols-kb-28.png

Creating a Config vVol FlashArray Snapshot

Because a snapshot of the config vVol is needed in order to run through this recovery workflow, Pure has provided several ways to snapshot the config vVol.

FlashArray UI View - Taking an array based snapshot of the Config vVol
types-of-vvols-kb-29.png
vCenter UI View - Pure vSphere Plugin - Create Config Snapshot from VM Overiew Page
types-of-vvols-kb-30.png
vCenter UI View - Pure vSphere Plugin - Create VM Home (config vVol) Snapshot from Pure Storage Snapshot Management Page
types-of-vvols-kb-31.png

There are other ways to do this, including the FlashArray CLI, having the config vVol be part of a FlashArray protection group, using storage policies with snapshot rulesets, etc.  The main thing to remember is that, by default, no array snapshots are taken for any of the vVols.  Pure encourages the use of Storage Policies to leverage array-based snapshots to help protect VMs from accidental deletion.
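
As one example, an array-based snapshot of a config vVol can be taken from the FlashArray CLI. A minimal sketch; the volume group and config vVol names are hypothetical placeholders for the names on your array:

# Snapshot the VM's config vVol; the suffix makes the snapshot easy to identify later
purevol snap --suffix manual-config-backup vvol-Example-VM-1a2b3c4d-vg/config-aabbccdd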

Here the Config vVol now shows three volume snapshots that were taken using the above three methods.

FlashArray UI View - Config vVol that has volume snapshots on the FlashArray
types-of-vvols-kb-32.png

Manually Restoring a Deleted Data vVol

Without using the Pure Storage Plugin for the vSphere Client, manually restoring a deleted data vVol without a backup of the config vVol looks like this:

  1. In vCenter, create a new virtual disk that is the same size as the VMDK that was destroyed.
  2. On the FlashArray, recover the destroyed data vVol.
  3. Overwrite the new VMDK's data vVol with the data vVol that was just recovered.
  4. From the guest OS, check that everything is recovered.

This workflow is outlined in detail in another KB article and can be found here.

Manually Restoring a Deleted vVol VM

To delete a VM, VMware deletes the files in its config vVol and directs the array to destroy the config vVol and any of its data vVols that are not shared with other VMs.

vCenter UI View - Destroying a Powered Off VM
types-of-vvols-kb-33.png

types-of-vvols-kb-34.png

An array administrator can recover destroyed vVols at any time within 24 hours of their destruction. But because the config vVol’s files are deleted before destruction, recovering a VM’s config vVol results in an empty folder. A recovered config vVol must be restored from its most recent snapshot.

Recovering a config vVol requires at least one pre-existing array-based snapshot. Without a config vVol snapshot, a VM can be recovered, but its configuration must be recovered manually.

When a VMware administrator deletes a VM, VMware directs the array to destroy its config vVol, data vVols, and any snapshots. The array moves the objects to its destroyed objects folders for 24 hours.

FlashArray UI View - Destroyed Volumes and Volume Group for the VM that was Deleted
types-of-vvols-kb-35.png

To recover a deleted VM, recover its volume group first, followed by its config and data vVols. To recover a single object on the array, click the array options image.png  icon next to it.

To recover multiple objects of the same type with a single action, click the vertical ellipsis and select Recover… to launch the Recover Volumes wizard. Select the config vVol and the data vVols to be recovered by checking their boxes and click the Recover button.

FlashArray UI View - Command to Recover Destroyed Volumes
types-of-vvols-kb-36.png
FlashArray UI View - Selecting Volumes to Recover
types-of-vvols-kb-37.png

While the VM's volumes and volume group were restored, recall that during the VM deletion process the config vVol is first erased at the VMFS level by vSphere.  When navigating to the VM's directory, it will be empty.

vCenter UI View - Empty Directory of the Recovered Config
types-of-vvols-kb-39.png

In the GUI Snapshots pane, click the vertical ellipsis to the right of the snapshot from which to restore, and select Restore from the dropdown menu.

FlashArray UI View - Restoring Config vVol from Volume Snapshot - 1
types-of-vvols-kb-38.png

When the Restore Volume from Snapshot wizard appears, click the Restore button.

FlashArray UI View - Restoring Config vVol from Volume Snapshot - 2
Screen Shot 2021-08-26 at 2.20.03 PM.png

Restoring the config vVol from a snapshot recreates the pointer files it contains. In the Web Client file navigator, right-click the vmx file and select Register VM… from the dropdown menu to register the VM.

vCenter UI View - Registering the Recovered VM
types-of-vvols-kb-40.png

After registration, all data vVols, snapshots, and the VM configuration are as they were when the snapshot of the config vVol was taken.

Restoring a Deleted Data vVol with the FlashArray vSphere Plugin

The Pure Storage vSphere plugin has the ability to recover a destroyed vVol within 24 hours of when the vVol was destroyed.  There is also an integration to overwrite an existing vVol with a previous FlashArray snapshot of the vVol.  These workflows are covered in the Demo Video here.  Click to expand the workflows below.

Restoring a Destroyed vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Restore Deleted Disk.

    When deleting a Data vVol, the FlashArray will destroy the volume and the volume will be in a Pending Eradication state for 24 hours.

    In this workflow example, the VM 405-Win-VM-2 has had the virtual disk "Hard disk 2" deleted from disk.  
    vvols-plugin-kb-05-Restoring-vvol-1.png
  2. After selecting the Restore Deleted Disk option, any Data vVols that have been destroyed and are pending eradication will be displayed.  Select the Data vVol that should be restored and click Restore to complete the workflow.
    vvols-plugin-kb-05-Restoring-vvol-2.png
  3. After the workflow is complete, the recovered vVol will be displayed in the Pure Storage Virtual Volumes tab.
    vvols-plugin-kb-05-Restoring-vvol-3.png
Rolling Back a vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Overwrite Disk.
    vvols-plugin-kb-05-Restoring-vvol-4.png
  2. From this page, select the vVol-based VM and the Data vVol to use to overwrite the existing Data vVol.  While the source can be a different vVol VM or the same VM, the example shown rolls this Data vVol back to a previous snapshot.  Here Hard disk 2 is selected, and when expanded, all snapshots for that vVol are shown.  In this case, the snapshot selected is from the FlashArray pgroup "vSphere-Plugin-pgroup-2" with the snapshot name "Safe-Snapshot".
    vvols-plugin-kb-05-Restoring-vvol-5.png
    In the Volume Information for the selected snapshot, we can see when the snapshot was created and the information for this vVol that will be used to Overwrite the Existing Data vVol.
    Click on Overwrite to complete the workflow. 

Restoring a Deleted vVol VM with the FlashArray vSphere Plugin

This KB covers how to use the Pure Storage vSphere Plugin to manage the Virtual Volumes (vVols) Environment from the vCenter UI.

 


Registering VASA Providers

The vSphere Plugin allows users with permission to register Storage Providers to register the VASA Providers of a FlashArray that has been added to the FlashArray list.  The workflow to register the Storage Providers for vCenters in non-linked or linked mode is outlined below and also in the demo video.  Click to expand the workflow.

Registering the VASA Providers with the Pure Storage vSphere Plugin
  1. A FlashArray will need to be added/registered in the Plugin in order to register the Storage Provider for a given FlashArray.  Once the FlashArray is registered, navigate to the main Plugin page, select the FlashArray, and then click on "Register Storage Provider".
    vvols-plugin-kb-01-registering-sp-1.png
  2. The recommended practice is to have a local FlashArray Array Admin user to register the storage providers with.  In the example below (and in the demo video), there is a local array admin named "vvols-admin" that the Storage Providers will be registered with.  In the event that the vCenter is in Enhanced Linked Mode, the option to choose which vCenter to register the storage providers with will be given.
    Registering the Storage Provider with a Single vCenter
    vvols-plugin-kb-01-registering-sp-2.png
    Registering the Storage Provider with a vCenter in Linked Mode
    vvols-plugin-kb-01-registering-sp-4.png
  3. Once the Storage Provider is successfully registered, navigate to the vCenter Server page, then Configure and the Storage Providers tab.  Confirm that the storage providers are online and healthy (an optional host-side check is sketched after this workflow).
    vvols-plugin-kb-01-registering-sp-3.png
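
Optionally, once a vVol datastore is in use, the registration can also be confirmed from an ESXi host; a quick sketch using a standard esxcli command:

# List the VASA providers (and their online status) known to this ESXi host
esxcli storage vvol vasaprovider list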

Mounting a vVol Datastore

Once the Storage Providers are registered, the vVol Datastore can be created and mounted using the vSphere Plugin. The workflow is identical to the one shown earlier in Using the Plugin to Mount vVol Datastore, and is also covered in the demo video.

Edit a vVol Datastore

The vSphere Plugin allows users that have permissions to edit vVol datastores. Click to expand the workflow below.

Edit a vVol Datastore

Left-click the (1) datastore view in vSphere, right-click the (2) vVol datastore you want to edit, hover over (3) Pure Storage and finally left-click (4) Edit Datastore to open the edit datastore workflow.

EditvVolDatastore1.png

Populate (1) Datastore Name, (2) Container Name and (3) Container Size, then left-click (4) Submit. The datastore name will always be customizable as long as vCenter permissions are correct, but if the FlashArray is on a Purity version lower than 6.4.4, Container Name and Container Size will not be customizable, because multiple storage containers for vVols on a single FlashArray and pod quotas were introduced in that release.

EditvVolDatastore2.png

 


Destroy a vVol Datastore

The vSphere Plugin allows users that have permissions to destroy vVol datastores. Click to expand the workflow below.

Destroy a vVol Datastore

Left-click the (1) datastore view in vSphere, right-click the (2) vVol datastore to be destroyed, hover over (3) Pure Storage and finally left-click (4) Destroy Datastore to open the destroy datastore workflow.

DeletevVolDatastore1.png

Left-click (1) UNMOUNT.
DeletevVolDatastore2.png

 

SPBM Storage Policy Wizard

The vSphere Plugin allows users with permission to create vCenter Storage Policies to import a FlashArray Protection Group's schedule as policy rules.  Click to expand the workflow below.

Storage Policy Wizard (5.2.0 and higher)

vSphere Remote Plugin 5.2.0 and higher

With 5.2.0 and higher, the workflow is now the storage policy wizard.  This is largely due to the fact that the workflow is no longer just importing a protection group's schedule as a policy; capabilities and rules are more granular than they have been before.  Here is the new workflow:

  1. From the main plugin page select the create storage policies to open up the wizard workflow.  New policies can be created for one or more vCenter servers that are in linked mode.
    Storage-Policy-Wizard-01.png
    Plugin Home Page - Create Storage Policies
    Storage-Policy-Wizard-02.png
    Select vCenter Server(s) to create the policy for
  2. There are several features that can be selected to create the policy with.  Some features require specific Purity versions in order to be used, so please make note of those versions.  Once the features are selected, a list of compatible arrays that can support those features will be returned.
    Storage-Policy-Wizard-03.png
    Select Features
    Storage-Policy-Wizard-04.png
    Array Compatibility List
  3. There is an option to specify which array(s) the policy can be restricted to.  Here one array is selected and this will restrict the policy to using datastores from only this array.
    Storage-Policy-Wizard-05.png
    Array selection - optional ruleset
  4. With QoS support, the per-disk bandwidth and IOPS limits can be set.  Remember that these are enforced at the individual virtual disk level, not the VM level.
    Storage-Policy-Wizard-06.png
    QoS Feature Support
  5. The volume tagging feature allows policies to have a key value pair that is tagged on the volumes with the policy applied to them.  
    Storage-Policy-Wizard-07.png
    Volume Tagging Feature
  6. The local snapshot protection feature uses the capabilities and rule sets for the policy to automatically create and manage a protection group on the array based on those capabilities.  Long term retention can also be configured for this feature.
    Storage-Policy-Wizard-08.png
    Snapshot Protection Feature
  7. With the replication feature there are two options.  One is to use a pre-existing protection group on a given array as the base/template for the replication schedule used in the policy.  The other is to manually configure/specify the replication rules that are used.
    Storage-Policy-Wizard-09.png
    Replication Policy Feature - Using an existing protection group as a template
    Storage-Policy-Wizard-10.png
    Replication Policy Feature - manually configuring the protection settings
  8. When manually setting the replication protection, all the same rules/capabilities that are normally available are organized in a clearer way.  In this example only the replication interval and retention are configured.
    Storage-Policy-Wizard-11.png
    Replication Feature - Customized Protection
  9. Once all the features are selected and configured, it's time to name the storage policy.
    Storage-Policy-Wizard-12.png
    Naming the new Storage Policy
  10. At the end of the wizard a query is issued to see if any datastores match the rules outlined for each of the features.  Once completed a new storage policy is created with the rules specified in the wizard.
    Storage-Policy-Wizard-13.png
    Storage Policy Wizard - Ready to Complete Summary View
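
For reference, a policy with similar rules can also be sketched with PowerCLI's SPBM cmdlets instead of the wizard. This is a hedged, minimal sketch: the capability filter and policy name below are placeholders, and the exact Pure Storage capability names must first be discovered with Get-SpbmCapability against your own vCenter.

# Discover the capabilities advertised by the registered VASA providers.
Get-SpbmCapability | Select-Object Name, Description

# Build a simple rule set from a discovered capability (the name filter below is a placeholder).
$cap     = Get-SpbmCapability | Where-Object { $_.Name -like 'com.purestorage*' } | Select-Object -First 1
$rule    = New-SpbmRule -Capability $cap -Value $true        # the value type must match the chosen capability
$ruleSet = New-SpbmRuleSet -AllOfRules $rule

# Create the policy (name is hypothetical).
New-SpbmStoragePolicy -Name 'FlashArray-vVol-Policy' -AnyOfRuleSets $ruleSet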

Here is a video demo and walkthrough of the new storage policy wizard:


 

Importing FlashArray Protection Groups as SPBM Policies with the Pure Storage vSphere Plugin (5.1.1 and lower workflow)

vSphere Plugin 4.5.0 through 5.1.1

  1. From the main plugin page, select the FlashArray to import the protection group settings from and click on Import Protection Groups
vvols-plugin-kb-03-importing-pgroup-1.png
  2. Select the vCenters in which you would like the policy or policies to be created.
clipboard_eb8c25714b3a33f3dea91600e6da8a04d.png
  3. Choose one or more protection groups. The selected protection groups will be used as "templates" to create vVol storage policies.
clipboard_e1092e0cea44013635c6ea584ca4ac232.png
  4. In the next screen, you can enter a name for the policies. It will default to the protection group name, but you can change it as needed here.

clipboard_e26cf46c671e1b318af6e5256fec63e03.png

Note that it will prevent you from using a name that is in-use in one or more of the vCenters:
clipboard_e7b3e0d6d45cefa92304ca6feb672d761.png

 

  5. The last screen offers two optional settings.
    1. Require array match: This will add the selected FlashArray into the policy, ensuring that only storage from that specific FlashArray complies with the policy. This option maps to the FlashArray Group capability.
    2. Require protection group match: This will add the selected protection group name into the policy, ensuring that only arrays with the specific protection group configuration AND name comply with the policy. This option maps to the Consistency Group Name capability.

These settings are configured uniformly for each selected protection group in the wizard, so if you want to configure the resultant policies differently, run through the wizard more than once, selecting the specific protection groups each time. Note that a given protection group can be imported more than once; the source protection groups are used as templates for the policies, and there is no strict one-to-one mapping.

clipboard_e2bd1052e79d397ecbdc2ed1544d4348d.png
Complete the process after confirming the selections on the final screen.
clipboard_e00812127a5985eaa5608d7fb2e3d0e5f.png
Note that policies do not span vCenters, so the policy will be created once per selected vCenter in the wizard.
clipboard_e1a36b6839fedf6035de4a593ac02027f.png
 
If you selected the Require array match option in the wizard you will see the array name populated in the policy:
clipboard_e5b61a4cedb8a99edbe34a625418104a5.png
If you selected the Require protection group match option in the wizard you will see the protection group name populated in the policy:
clipboard_e53beae478d7779890f8bc13b86915f2f.png

vSphere Plugin 4.4.0 and Earlier

  1. From the main plugin page, select the FlashArray to import the protection group settings from and click on "Import Protection Groups"
    vvols-plugin-kb-03-importing-pgroup-1.png
  2. The next screen lists the FlashArray protection groups.  The schedule and capabilities of each protection group are listed in parentheses.  If a Storage Policy in vCenter already matches a FlashArray pgroup schedule, the option to select that pgroup will be grayed out. Select the protection group or groups and click Import.
    vvols-plugin-kb-03-importing-pgroup-2.png
  3. Navigate to "Policies and Profiles" and click on the VM Storage Policies tab.  From here you will see that the Storage Policies have been created.  The naming schema for these policies will be [FlashArray] [either Snapshot or Replication] [Schedule Interval].  Below there is a Replication and Snapshot policy shown.
    vvols-plugin-kb-03-importing-pgroup-3.png

 

Viewing VM vVol details

When a FlashArray is registered with the vSphere Plugin, details are reported in vCenter for vVols based Virtual Machines that are stored on that FlashArray.

Viewing the Virtual Machine vVol Details with the Pure Storage vSphere Plugin (versions 5.2.0 or higher)
  1. On the VM main page view there is the undelete protection box that also has links to the capacity, performance and virtual volumes management page.
    VM-Insights-01.png
    VM View - Pure Storage Undelete Protection Status and Quick Links
  2. From the VM view, navigate to Monitor and then the Pure Storage view.  Here performance and capacity can be monitored at a volume or volume group level.
    VM-Insights-02.png
    VM View - Monitor - Pure Storage - Capacity - Volume View
    VM-Insights-03.png
    VM View - Monitor - Pure Storage - Capacity - Volume Group View
    VM-Insights-04.png
    VM View - Monitor - Pure Storage - Performance - Volume View
    VM-Insights-05.png
    VM View - Monitor - Pure Storage - Performance - Volume Group View
  3. From the VM view, navigate to Configure and then the Pure Storage view.  From this page there are various workflows available, as well as Guest Insights that are displayed for a supported guest OS and VMware Tools version.
    VM-Insights-06.png
    VM View - Configure - Pure Storage - Virtual Volumes - VM Home Select - Rename Volume
    (Volume Group Rename is only available when renaming the VM Home)
    VM-Insights-07.png
    VM View - Configure - Pure Storage - Virtual Volumes - Hard Disk Select - Guest Insights

Here is a Demo on the new VM Insights from the 5.2.0 Plugin.


 

Viewing the Virtual Machine vVol Details with the Pure Storage vSphere Plugin (versions 5.1.0 or lower)
  1. From the Virtual Machine view and Summary tab, there is a FlashArray widget box.  This shows whether or not the VM has Undelete Protection.  Undelete Protection means that there is currently a FlashArray Snapshot of this VM's Config vVol.
    vvols-plugin-kb-04-VM-Details-1.png
  2. On the Virtual Machine's Configure Page, there is a Pure Storage Virtual Volumes tab.  
    vvols-plugin-kb-04-VM-Details-2.png

    The page allows end users to run the workflows to import a virtual disk (vVol), restore a destroyed vVol, or overwrite an existing vVol.
    Additionally, the page contains important information about the VM's Data vVols, such as the Virtual Device (SCSI controller connection), the vVol Datastore that the vVol is on, which array the vVol is on, and the FlashArray Volume Group name and Volume name.

Viewing vVol Datastore Details

When a FlashArray is registered with the vSphere Plugin, details are reported in vCenter on the respective vVol datastore objects with some useful information about that datastore. Note that this feature is only available in plugin version 4.5.0 and later.

Viewing the vVol Datastore Details with the Pure Storage vSphere Plugin
  1. Click on a vVol datastore and view the summary tab. There you will see a new panel titled FlashArray that shows details of the underlying environment for that datastore.
clipboard_e71bfe339c856f75154a74582dd7f109a.png

Currently, there are five properties:

  1. Array: This is the FlashArray hosting that datastore.
  2. Active Storage Provider: This is the VASA provider that is currently in-use for that vVol datastore by the selected vCenter. While either VASA provider on the array can be used at any given time, vCenter will only use one at a time.
  3. Protocol Endpoint: This shows the protocol endpoints that are available on that FlashArray for that vVol datastore, including their names and device serial numbers.
  4. Volume Groups in Use: Generally, a given VM (and its volumes) on a vVol datastore is managed on the FlashArray via a volume group. This shows the number of volume groups currently configured with vVol-type volumes (volumes with vVol tags). If there are more volume groups than registered VMs, it is likely that there are some unregistered VMs on the array, or that the vVol datastore is in use by another vCenter which is running those additional VMs.
  5. Volumes in Use: This shows how many volumes on the underlying FlashArray are in use as vVols with that particular vVol datastore, i.e., the volumes whose vVol tags indicate that this vVol datastore owns them.
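
Some of these same details can be cross-checked from PowerCLI; a minimal sketch is shown below (output properties can vary slightly by PowerCLI version):

# List the registered VASA providers and the vVol datastores known to this vCenter.
Get-VasaProvider | Select-Object Name, Status, Url
Get-Datastore | Where-Object { $_.Type -eq 'VVOL' } |
    Select-Object Name, CapacityGB, FreeSpaceGB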

Creating a FlashArray Snapshot of a vVol Disk

The Pure Storage Plugin version 4.4.0 and later for the vSphere Client has the ability to create a new snapshot of only a vVol virtual disk.

Create a Snapshot of a vVol Disk
  1. From the Virtual Machine Configure tab, navigate to the Pure Storage - Virtual Volumes pane, select the disk you would like to snapshot and click Create Snapshot.

     

     

    clipboard_ee1b1a9dde32840f3374ab7d72fcfc010.png

  2. After clicking the Create Snapshot button, a dialog appears. You can optionally enter a snapshot name, otherwise it will assign the next available numerical name for the snapshot. Click Create.

     

    clipboard_ead9911a2a356f2ae181f303ec96b1e4f.png

  3. After the workflow is complete, you can verify the snapshot by either clicking the Import Disk or the Overwrite Disk button and finding the correct disk and expanding its snapshots.

    clipboard_ee7ee1501092792ea27f6dda452961b65.png

 

Restoring a vVol from a FlashArray Snapshot

The Pure Storage vSphere plugin has the ability to recover a destroyed vVol within 24 hours of when the vVol was destroyed.  There is also an integration to overwrite an existing vVol with a previous FlashArray snapshot of the vVol.  These workflows are covered in the Demo Video here.  Click to expand the workflows below.

Restoring a Destroyed vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Restore Deleted Disk.

    When deleting a Data vVol, the FlashArray will destroy the volume and the volume will be in a Pending Eradication state for 24 hours.

    In this workflow example, the VM 405-Win-VM-2 has had the virtual disk "Hard disk 2" deleted from disk.  
    vvols-plugin-kb-05-Restoring-vvol-1.png
  2. After selecting the Restore Deleted Disk option, any Data vVols that have been destroyed and are pending eradication will be displayed.  Select the Data vVol that should be restored and click Restore to complete the workflow.
    vvols-plugin-kb-05-Restoring-vvol-2.png
  3. After the workflow is complete, the recovered vVol will be displayed in the Pure Storage Virtual Volumes tab.
    vvols-plugin-kb-05-Restoring-vvol-3.png
Rolling Back a vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Overwrite Disk.
    vvols-plugin-kb-05-Restoring-vvol-4.png
  2. From this page, select the vVol based VM and the Data vVol to use to overwrite the existing Data vVol.  While this can be from a different vVol VM or the same vVol VM, the example shown rolls this Data vVol back to a previous snapshot.  Here Hard Disk 2 is selected and, when expanded, all Snapshots for that vVol are shown.  In this case, the one selected is a Snapshot from the FlashArray pgroup "vSphere-Plugin-pgroup-2" with the Snapshot Name "Safe-Snapshot".
    vvols-plugin-kb-05-Restoring-vvol-5.png
    In the Volume Information for the selected snapshot, we can see when the snapshot was created and the information for this vVol that will be used to Overwrite the Existing Data vVol.
    Click on Overwrite to complete the workflow. 

Creating a vVol Copy

With the Pure Storage vSphere plugin there is the ability to import a vVol from the same vVol VM or from another vVol VM.  The source can be either a FlashArray Snapshot or a Managed Snapshot.  The workflows for importing the same vVol from either a FlashArray Snapshot or a Managed Snapshot are walked through below, as well as in the Demo Video here.

Creating the Copy from a FlashArray Snapshot with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Import Disk.
    vvols-plugin-kb-06-vvol-copy-1.png
  2. From this page, select the vVol based VM and the Data vVol from that VM to recover.  This can be a different vVol VM or the same vVol VM that the Data vVol will be imported to.  In this example Hard Disk 2 is selected and, when expanded, all Snapshots for that vVol are shown.  In this case, the one selected is a Snapshot from the FlashArray pgroup "vSphere-Plugin-pgroup-2" with the Snapshot Name "53".
    vvols-plugin-kb-06-vvol-copy-2.png
    In the Volume Information for the selected snapshot, we can see when the snapshot was created and the information for this vVol that will be imported.
    Click on Import to complete the workflow. 
Creating the Copy from a vSphere Managed Snapshot with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Import Disk.
    vvols-plugin-kb-06-vvol-copy-1.png
  2. Instead of using a FlashArray pgroup snapshot to import the vVol, this time a Managed Snapshot will be selected.  Notice the difference in the naming for the selected vVol: there is no pgroup or snapshot name associated with it, just the volume group and data vVol name followed by "-snap", indicating that this is a managed snapshot for this vVol.
    vvols-plugin-kb-06-vvol-copy-3.png
    The same type of information is provided in the Volume Information for Managed Snapshot or FlashArray Snapshots.
    To complete the import workflow, click on Import.
     
  3. Once the Import Workflows have completed, the new Data vVols will show up on the Virtual Volumes page.
    vvols-plugin-kb-06-vvol-copy-4.png

Recovering a vVols based Virtual Machine - VM Undelete and VM Revert
-- Purity//FA 6.2.6 or Higher --

With the release of Purity//FA 6.2.6 and Pure Storage vSphere Remote Plugin 5.1.0, two new VM Undelete features were released: the ability to revert a VM to a specific Point in Time (PiT) array-based snapshot, and the ability to undelete a vVols based VM that has been eradicated on the array to a specific PiT array snapshot.  The standard VM Undelete is still present.

Before getting into the workflows specifically let's cover what is required in order to be able to execute the PiT Revert and PiT Undelete workflows.

  • In order to be able to execute VM Undelete within the eradication timer (default 24 hours), at minimum an array snapshot of the VM's config vVol is required.  
  • In order to be able to execute a PiT VM Undelete of a vVols based VM that has been eradicated, an FA protection group snapshot of all of the VM's Data vVols, managed snapshots and Config vVol is required.
  • In order to be able to execute a PiT VM Revert of a vVols based VM, an FA protection group snapshot of all vVol disks of the currently configured VM is required.
    • The volume snapshot objects should be associated with an FA protection group directly and not through a host object or host group attached to the protection group.
      •  If the goal is to consistently back up the environment with snapshots, a VM that is powered off while a snapshot is taken of the host or host group object, and then powered on after that snapshot interval, might not have the required protection, because no snapshot of that VM would have been taken.  The same applies if the VM is in flight during the snapshot operation.
      • The vVol VM can shift hosts with DRS enabled in vCenter or can be manually migrated to other hosts; this changes where the volumes are mapped from a FlashArray host object perspective.
      • The host will disconnect from some of the vVol volumes when the VM is powered off; and while the VM is powered on, the swap volume would be backed up, which is unnecessary.
      • The vVols service on the FlashArray, VASA, won't have insight into why those volumes are part of the protection group. Depending on the SPBM policy or replication group applied to the VM, there might be misleading compliance results on the VM in vCenter.

With that covered, Pure Storage does recommend leveraging SPBM and local snapshot protection placement rules for vVols based VMs.  Please see the implementation guide for SPBM for more information on local snapshot protection placement in particular.

Using a storage policy to enable VM Undelete Protection and the policy's configuration
vSphere Plugin - vVols Management KB - VM Undelete - 01.png
When a vVols based VM does not have a snapshot of the Config vVol, the VM will report that it does not have Undelete Protection
vSphere Plugin - vVols Management KB - VM Undelete - 02.png
For these workflows, a Storage Policy is created that has Local Snapshot Protection placement rule sets and the FlashArray Group specified
vSphere Plugin - vVols Management KB - VM Undelete - 03.png
The VM that showed that Undelete Protection was not current had the vVol No Requirements Policy applied to it
vSphere Plugin - vVols Management KB - VM Undelete - 04.png
The VM storage policy is changed to the one previously covered and applied to the whole VM
vSphere Plugin - vVols Management KB - VM Undelete - 01 - additional.png
Once the protection group snapshot schedule triggers, we now see that the VM has Undelete Protection
vSphere Plugin - vVols Management KB - VM Undelete - 07.png
Looking at the VM Compliance for the Policy we see the 4 VMs that have it applied and that they are all in a Compliant state
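
The compliance view shown above can also be queried with PowerCLI's SPBM cmdlets; a minimal sketch, using one of the example VMs from this section:

# Show which storage policy the VM uses and whether it is currently compliant.
Get-SpbmEntityConfiguration -VM (Get-VM -Name 'c08-17-vvol-vm-01') |
    Select-Object Entity, StoragePolicy, ComplianceStatus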

Now that the way the VMs are protected on the array has been covered, here are the specific workflows with these examples.

VM Undelete

In this example, the VMs have been powered off and Deleted from disk. Two of the VMs have additionally been eradicated from the array.

Recovering a Deleted vVol VM pending eradication with the Pure Storage vSphere Plugin - VM Undelete
  1. VM Undelete is a workflow available to the vVols Datastore.  Navigate to the vVols Datastore - Right click on the vVol Datastore - click on Pure Storage - Select Undelete Virtual Machine
    vSphere Plugin - vVols Management KB - VM Undelete - 13.png
  2. Take note of the Sources column, as this denotes the available protection group snapshots on the array that can be used to recover the VM.  The exception is VMs that have been deleted from disk in vCenter but whose array volumes have not yet been eradicated; the sources will always show as 1 for them
    vSphere Plugin - vVols Management KB - VM Undelete - 14.png
    1. Checking from the array GUI, we can see that the 01 and 05 VMs have not been eradicated and are in a pending eradication state
      vSphere Plugin - vVols Management KB - VM Undelete - 15.png
  3. The VM c08-17-vvol-vm-01 is selected to be undeleted, and the next page shows the points in time to choose from.  As this VM is in a pending eradication state, only one PiT source is shown, with the "destroyed volumes" notation, meaning that the VM's objects on the array have been destroyed but not yet eradicated
    vSphere Plugin - vVols Management KB - VM Undelete - 16.png
  4. A compute resource is selected next
    vSphere Plugin - vVols Management KB - VM Undelete - 17.png
  5. Review the details and then click Finish to recover the VM that has been deleted in vSphere but whose volumes on the array have not been eradicated
    vSphere Plugin - vVols Management KB - VM Undelete - 18.png
  6. Once the VM has been recovered, review that everything looks correct and the VM can be powered on and verified
  7. One thing to keep in mind is that storage policies are not applied or reviewed when registering a VM in vSphere
    vSphere Plugin - vVols Management KB - VM Undelete - 20.png
    As such, please apply the appropriate storage policy for the VM to ensure that the VM is still protected
    vSphere Plugin - vVols Management KB - VM Undelete - 26.png
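
Reapplying the storage policy after the recovery (step 7) can also be scripted with PowerCLI; a minimal sketch, where the policy name is hypothetical:

# Re-associate a storage policy with the recovered VM and its virtual disks.
$vm     = Get-VM -Name 'c08-17-vvol-vm-01'
$policy = Get-SpbmStoragePolicy -Name 'FlashArray-Snapshot-Policy'    # hypothetical policy name
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) |
    Set-SpbmEntityConfiguration -StoragePolicy $policy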

Point in Time VM Undelete

In this example, the VMs have been powered off and Deleted from Disk. Two of the VMs have additionally been eradicated from the array. The workflow shown below is selecting a specific array protection group snapshot to recover (Undelete) the eradicated VMs from.

Recovering a Deleted vVol VM that has been eradicated with the Pure Storage vSphere Plugin - PiT VM Undelete
  1. Similar to the standard VM Undelete process, but instead of choosing a VM that is pending eradication on the array, select a VM that has been eradicated.  In the example the VM c08-17-vvol-vm-02 is selected
    vSphere Plugin - vVols Management KB - VM Undelete - 19.png
  2. Now the sources will show each array protection group snapshot that includes all vVols for that VM
    vSphere Plugin - vVols Management KB - VM Undelete - 21.png
  3. After a source is selected, click Next; then select the desired compute resource and click Next
    vSphere Plugin - vVols Management KB - VM Undelete - 22.png
  4. Review the details and then click Finish
    vSphere Plugin - vVols Management KB - VM Undelete - 23.png
  5. After the VM is recovered, confirm that the VM looks healthy and then the VM can be powered on and verified
  6. As with the standard Undelete process, the policy association is lost in vSphere when the VM is deleted.  Ensure that the storage policy is reapplied to the VM to ensure that protection is still applied.

Point in Time VM Revert

In this example one powered-off VM is reverted (rolled back) to a specific array protection group snapshot.

Reverting a vVol VM with the Pure Storage vSphere Plugin - PiT VM Revert
  1. Ensure that the VM is powered off.  The VM PiT Revert can only be executed against a VM that is powered off

  2. Right Click on the Powered Off VM - Navigate to the Pure Storage option - Select Revert to Snapshot

    vSphere Plugin - vVols Management KB - VM Undelete - 08.png
  3. The array protection group snapshots will be listed as the sources to revert to
    vSphere Plugin - vVols Management KB - VM Undelete - 09.png
    1. The sources are determined by the array snapshots that contain all of the objects for the current configuration of the VM
      vSphere Plugin - vVols Management KB - VM Undelete - 10.png
      On the array we see that there are three protection group snapshots, and those line up with the 3 options to revert the VM to
  4. Click Next after selecting the PiT to revert to and then Finish

    vSphere Plugin - vVols Management KB - VM Undelete - 11.png
  5. Now that the VM has been rolled back to that specific PiT, power on the VM and confirm that it is in the desired state
    vSphere Plugin - vVols Management KB - VM Undelete - 12.png
    The audit log will show the specific workflow that was executed to revert the VM as well

Recovering a vVols based Virtual Machine - VM Undelete
-- Purity//FA 6.1 and Lower --

The Pure Storage vSphere Plugin has a workflow that can recover a vVol based VM that has a FlashArray snapshot of the VMs config vVol.  The section in the Demo Video that covers this workflow can be found here.  This Undelete workflow is specific to arrays running Purity//FA 6.1 and lower.  See the previous section if Purity//FA 6.2.6 or higher is running.

Recovering a Deleted vVol VM with the Pure Storage vSphere Plugin
  1. From the Virtual Machine view, there is a FlashArray box.  This shows whether or not the VM has Undelete Protection.  Undelete Protection means that there is currently a FlashArray Snapshot of this VM's Config vVol.  This is required for the Undelete workflow for the following reasons:
    1. When a vVol VM is deleted, VMware first deletes the Data vVol inventory information from the config.
    2. After that is complete, VMware issues a volume unbind and destroys the Config vVol.  This means that by the time the FlashArray has destroyed the Config vVol, the inventory mapping and Data vVol information have already been deleted.
    3. In order to recover a VM that has been deleted, the Config vVol has to be overwritten with the snapshot of that Config vVol
  2. From the Virtual Machine view, we can see that the last snapshot of the Config vVol on the FlashArray was taken at 3:17 PM on July 21st.  This means that any changes made to the VM after that point, such as CPU, memory, or new vVols, will not be recovered; the state of the VM at the Undelete Protection timestamp is what will be recovered.
    vvols-plugin-kb-04-VM-Details-1.png
  3. This VM has been powered off and is now going to be deleted.
    vvols-plugin-kb-07-vm-undelete-1.png
  4. From the Datastore tab, select the vVol Datastore.  Right Click on the vVol Datastore, go to the Pure Storage option, and select "Undelete Virtual Machine"
    vvols-plugin-kb-07-vm-undelete-2.png
  5. The first page, "Virtual Machine", lets you select which destroyed VM to recover.  The caveat is that, by default, a destroyed volume on the FlashArray has 24 hours until it is eradicated.  This page shows how much Time Remaining each VM has to be recovered.
    vvols-plugin-kb-07-vm-undelete-3.png
  6. On the next page, "Compute Resource", select the ESXi host that will recover the VM.
    vvols-plugin-kb-07-vm-undelete-4.png
  7. Review the details and then select Finish.
    vvols-plugin-kb-07-vm-undelete-5.png
  8. Power on the VM and check that everything is powering on and is healthy.
    vvols-plugin-kb-07-vm-undelete-6.png

Video Demo 

Here is a Video Demo that walks through each of the steps covered in this KB.  The undelete workflows in this example are for Purity//FA 6.1 and lower; the workflows for Purity//FA 6.2 and higher are not shown.

 

[Back to Top]


vVol Binding

A primary goal of the vVol architecture is scale—increasing the number of virtual disks that can be exported to ESXi hosts concurrently. With previous approaches, each volume would require a separate LUN. In large environments, it is quite possible to exceed the ESXi limit of 512 LUNs. vVols introduces the concept of protocol endpoints (PEs) to significantly extend this limit.

ESXi hosts bind and unbind (connect and disconnect) vVols dynamically as needed. Hosts can provision VMs and power them on and off even when no vCenter is available if they still have valid sessions to the FlashArray's VASA Provider.

When an ESXi host needs access to a vVol:

  • ESXi issues a bind request to the VASA provider whose array hosts the vVol
  • The VASA provider binds the vVol to a PE visible to the requesting host and returns the binding information (the sub-lun) to the host
  • The FlashArray issues a Unit Attention (UA) to the host, notifying it that a new volume has been connected to the host (SCSI rescans are unnecessary with vVols)
  • The host issues a SCSI REPORT LUNS command to the PE to make the newly-bound vVol accessible.

vVols are bound to specific ESXi host(s) for as long as they are needed. Binds (sub-lun connections) are specific to each ESXi host-PE-vVol relationship. A vVol bound to a PE that is visible to multiple hosts can only be accessed by the hosts that request binds. The following table lists the most common scenarios in which ESXi hosts bind and unbind vVols.

| What causes the bind? | Bound Host | When is it unbound? | vVol type |
| --- | --- | --- | --- |
| Power-on | Host running the VM | Power-off or vMotion | Config, data, swap |
| Folder navigated to in vVol Datastore via GUI | Host selected by vCenter with access to vVol datastore | When navigated away from or session ended | Config |
| Folder navigated to in vVol Datastore via SSH or console | Host logged into | When navigated away from or session ended | Config |
| vSphere vMotion | Target host | Power-off or vMotion | Config, data, swap |
| VM Creation | Target host | Creation completion | Config, data |
| VM Deletion | Target host | Upon deletion completion | Config |
| VM Reconfiguration | Target host | Reconfiguration completion | Config |
| VM or Template Clone | Target host | Clone completion | Config, data |
| VM Managed Snapshot | Target host | Snapshot completion | Config |

Notes:
Binding and unbinding are automatic. There is never a need for a VMware or FlashArray administrator to manually bind a vVol to an ESXi host.

FlashArrays only bind vVols to ESXi hosts that make requests; they do not bind them to host groups.

If multiple PEs are presented to an ESXi host, the host selects a PE that the VASA provider listed as available to satisfy each bind request. Array administrators cannot control which PE is used for a bind.

This blog post contains a detailed description of ESXi host to PE to vVol binding.

The end user should never need to manually connect a vVol to a FlashArray host or host group.  Without the bind request from vSphere issued to VASA, the VASA provider and the vSphere environment will not recognize that the volume is connected.

A vVol with no sub-lun connection is not “orphaned”. No sub-lun connection simply indicates that no ESXi host has access to the vVol at that time. 
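
The protocol endpoints, storage containers, and VASA providers that a given host sees can be inspected with esxcli, wrapped here in PowerCLI. A minimal sketch with a hypothetical host name:

# Inspect the vVol view from a single ESXi host (host name is hypothetical).
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esxi-01.example.com') -V2
$esxcli.storage.vvol.protocolendpoint.list.Invoke()    # protocol endpoints visible to this host
$esxcli.storage.vvol.storagecontainer.list.Invoke()    # vVol storage containers (datastores)
$esxcli.storage.vvol.vasaprovider.list.Invoke()        # VASA providers known to this host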


[Back to Top]  


Snapshots of vVols

An important benefit of vSphere Virtual Volumes (vVols) is in its handling of snapshots. With VMFS-based storage, ESXi takes VM snapshots by creating a delta VMDK file for each of the VM’s virtual disks. It redirects new virtual disk writes to the delta VMDKs, and directs reads of unmodified blocks to the originals, and reads of modified blocks to the delta VMDKs. The technique works, but it introduces I/O latency that can profoundly affect application performance. Additional snapshots intensify the latency increase.

The performance impact is so pronounced that both VMware and storage vendors recommend the briefest possible snapshot retention periods - see the VMware KB article Best practices for using snapshots in the vSphere environment (1025279). Practically speaking, this limits snapshot uses to:

Patches and upgrades
Taking a snapshot prior to patching or upgrading an application or guest operating system, and deleting it immediately after the update succeeds.

Backup
Quiescing a VM and taking a snapshot prior to a VADP-based VM backup. Again, the recommended practice is deleting the snapshot immediately after the backup completes.

These snapshots are typically of limited utility for other purposes, such as development testing. Adapting them for such purposes usually entails custom scripting and/or lengthy copy operations with heavy impact on production performance. In summary, conventional VMware snapshots solve some problems, but with significant limitations.

Array-based snapshots are generally preferable, particularly for their lower performance impact. FlashArray snapshots are created instantaneously, have negligible performance impact, and initially occupy no space. They can be scheduled or taken on demand, and replicated to remote arrays. Scripts and orchestration tools can use them to quickly bring up or refresh development testing environments.

Because FlashArray snapshots have negligible performance impact, they can be retained for longer periods. In addition, they can be copied to create new volumes for development testing and analytics, either by other VMs or by physical servers.

FlashArray administrators can take snapshots of VMFS volumes directly; however, there are limitations:

No integration with ESXi or vCenter

Plugins can enable VMFS snapshot creation and management from the Web Client, but vCenter and ESXi have no awareness of or capability for managing them.

Coarse granularity

Array-based snapshots of VMFS volumes capture the entire VMFS. They may include hundreds or thousands of VMs and their VMDKs. Restoring individual VMDKs requires extensive scripting.

vVols eliminate both limitations. VMware does not create vVol snapshots itself; vSphere directs the array to create a snapshot for each of a VM’s data vVols.  VASA then translates vSphere commands into FlashArray operations. VMware administrators use the same tools to create, restore, and delete VMFS and vVol snapshots, but with vVols, they can operate on individual VMDKs. 

With Purity//FA, when a managed snapshot is taken, the array copies each of the VM's current data volumes to a new data volume with a '-snap' suffix.  Keep this in mind from an object count perspective: every managed snapshot creates an additional array volume for each virtual disk.

Taking Managed Snapshots of vVol-based VMs

While the FlashArray GUI, REST, and CLI interfaces can be used for both per-VM and per-virtual disk vVol operations, a major advantage of vVols is management of vVols from within vCenter. VMware administrators can use the Web Client or any other VMware management tool to create array-based snapshots of vVol-based VMs.

To take a snapshot of a vVol-based VM with the Web Client, right-click the VM in the inventory pane, select Snapshots from the dropdown menu, and then Take Snapshot from the secondary dropdown to launch the Take VM Snapshot for vVol-VM wizard.  With vSphere 7.0 and higher in the HTML client, Manage Snapshots has its own tab again in the VM view. Snapshots can be managed, taken, reverted and deleted from this view.

vVols Implementation Guide - Snapshots - 01.png
vSphere Client View - Right Clicking a VM to Take a Snapshot
vVols Implementation Guide - Snapshots - 02.png
vSphere Client View - Snapshot Management View 
vVols Implementation Guide - Snapshots - 03.png
vSphere Client View - Managed Snapshot Wizard 

Enter a name for the snapshot, a description (optional) and optionally check one of the boxes:

Snapshot the virtual machine’s memory:

Causes the snapshot to capture the VM’s memory state and power setting. Memory snapshots take longer to complete, and may cause a slowdown in VM response over the network.

Quiesce guest file system:

VMware Tools quiesces the VM’s file system before taking the snapshot. This allows outstanding I/O requests to complete, but queues new ones for execution after restart. When a VM restored from this type of snapshot restarts, any queued I/O requests complete. To use this option, VMware Tools must be installed in the VM. Either of these options can be used with vVol-based VMs.

VMware administrators can also take snapshots of vVol-based VMs with PowerCLI, for example:

New-Snapshot -Name NewSnapshot -Quiesce:$false -VM vVolVM -Memory:$false 
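
The resulting managed snapshot can then be verified from PowerCLI as well; the names below match the example command above:

# List the managed snapshots on the VM.
Get-Snapshot -VM vVolVM | Select-Object Name, Created, PowerState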
vVols Implementation Guide - Snapshots - 05.png
vSphere Client View - vVols based VM's New Files 

When a snapshot of a vVol-based VM is taken, new files appear in the VM’s vVol datastore folder.

The files are:

VMDK (MSSQL-VM-01-000001.vmdk)

A pointer file to a FlashArray volume for the managed snapshot. If the VM is running from that VMDK, the file points to the data vVol that will have the active bind. If the VM is not running from that snapshot VMDK, the file points to a data vVol that is not bound. As administrators change VMs’ running states, VMware automatically re-points VMDK files.

Database file (MSSQL-VM-01.vmsd)

The VMware Snapshot Manager’s primary source of information. Contains entries that define relationships between snapshots and the disks from which they are created.

Memory snapshot file (MSSQL-VM-01-Snapshot7.vmsn)

Contains the state of the VM’s memory. Makes it possible to revert directly to a powered-on VM state. (With non-memory snapshots, VMs revert to turned off states.) Created even if the Snapshot the virtual machine’s memory option is not selected.

Memory file (not shown)

A pointer file to a memory vVol. Created only for snapshots that include VM memory states.

Creating a Managed Snapshot Without Saving Memory

If neither Snapshot the virtual machine’s memory nor Quiesce guest file system is selected, VMware directs the array to create snapshots with no pre-work. All FlashArray snapshots are crash consistent, so snapshots of the vVol-based VMs that they host are likewise at least crash consistent.

The managed snapshot process for vVols comes in two parts: Prepare to Snapshot Virtual Volume (prepareToSnapshotVirtualVolume) and then Snapshot Virtual Volume (snapshotVirtualVolume).  When a vSphere user initiates a managed snapshot operation, vSphere communicates with the array's VASA Provider as follows:

  1. vSphere issues a Prepare to Snapshot Virtual Volume request to the VASA Provider
  2. The VASA Provider ensures that the virtual volume is ready to have a managed snapshot taken
    1. On the FlashArray, this is the step where the volumes are created with the data-vvol name and a -snap suffix
    2. If the VM has a policy and replication group associated with it, the data-snap volumes are placed in the protection group associated with that policy and replication group
  3. The VASA Provider responds back to vSphere that the prepare operation has completed and provides a uuid for the snapshot
  4. vSphere pauses the VM to ensure that no outstanding activity happens (Usually referred to as VM Stun time)
  5. vSphere issues a Snapshot Virtual Volume request to the VASA Provider for each Data vVol that the VM has
  6. The VASA provider copies out the data vVols to the managed snapshot
    1. On the FlashArray a purevol copy is issued where the data-vvol is copied out and overwrites the data-vvol-snap that was created during the prepare phase
    2. In vSphere 7.0 U3 and higher the Snapshot Virtual Volume request passes multiple vVol uuids as part of the request, which helps improve batching and performance at scale
  7. The VASA provider responds back to vSphere that Snapshot Virtual Volume is complete
  8. vSphere unpauses the VM (the VM is unstunned) and the managed snapshot operation is complete 

While this may seem like a lot of steps at first, they typically complete in under a millisecond.  The biggest goal for both VMware and Pure Storage is to decrease the number of calls, and the time taken, between when the VM is stunned and when Snapshot Virtual Volume is issued and completed.

Here is a view from the Snapshot Management for a normal snapshot that had completed.

vVols Implementation Guide - Snapshots - 04.png
vSphere Client View - Snapshot Management View - Normal Snapshot 

Here is a view from the FlashArray that shows the new volume objects created for a managed snapshot.

vVols Implementation Guide - Snapshots - 06.png
FlashArray GUI View - Managed Snapshot for vVols based VM 

Note:
FlashArray volume names are auto-generated, but VMware tools list the snapshot name supplied by the VMware administrator.

Creating a Managed Snapshot with Saved Memory

If the VMware administrator selects Store the Virtual Machine’s Memory State, the underlying snapshot process is more complex.

Memory snapshots generally take somewhat longer than non-memory ones because the ESXi host directs the array to create a memory vVol to which it writes the VM’s entire memory image. Creation time is proportional to the VM’s memory size.

vVols Implementation Guide - Snapshots - 07.png
vSphere Client View - Managed Snapshot Wizard 

Normal snapshots, memory snapshots and file-quiesced snapshots will all cause a VM to pause briefly, with a normal snapshot causing the smallest amount of "VM stun" during the snapshot process.  File-quiesced and memory-based snapshots can have a VM stun time that varies depending on how busy the VM is, how large the VM is (memory or storage) and the version of vSphere.  Typically the stun time for a VM snapshot is a matter of seconds, but a memory snapshot can vastly increase the stun time if the VM is large and very busy.

The memory vVol created in a VM’s volume group as a consequence of a memory snapshot stores the VM’s active state (memory image). The FlashArray GUI view below shows the volume group of a VM with a memory snapshot (vvol-test-a-VM-light-0011-92ecaac2-vg/Memory-f60d917b). The size of the memory vVol is the size of the VM’s memory image.

vVols Implementation Guide - Snapshots - 09.png
FlashArray GUI View - Memory Managed Snapshot Object 

VMware flags a memory snapshot with a green play icon to indicate that it includes the VM’s memory state.

vVols Implementation Guide - Snapshots - 08.png
vSphere Client View - Snapshot Management View - Memory Managed Snapshot 

Reverting a VM to a Managed Snapshot

VMware management tools can revert VMs to snapshots taken by VMware. As with snapshot creation, reverting is identical for conventional and vVol-based VM snapshots.

To restore a VM from a snapshot, from the Web Client Hosts & Clusters or VMs and Templates view, select the VM to be restored and click the Snapshots tab in the adjacent pane to display a list of the VM’s snapshots.

Select the snapshot from which to revert, click the All Actions button, and select Revert to from the dropdown menu.

vVols Implementation Guide - Snapshots - 10 - 1.png
vSphere Client View - Reverting a VM to a Managed Snapshot 

Subsequent steps differ slightly for non-memory and memory snapshots.

Reverting a VM from a Non-memory Managed Snapshot

The Revert to command displays a confirmation dialog. Click Yes to revert the VM to the selected snapshot.

The array overwrites the VM’s data vVols from their snapshots. Any data vVols added to the VM after the snapshot was taken are unchanged.

Before reverting a VM from a non-memory snapshot, VMware shuts the VM down. Thus, reverted VMs are initially powered off.
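
Reverting can also be scripted with PowerCLI; a minimal sketch with illustrative VM and snapshot names:

# Revert a vVol-based VM to an existing managed snapshot (names are illustrative).
$vm   = Get-VM -Name 'vVolVM'
$snap = Get-Snapshot -VM $vm -Name 'NewSnapshot'
Set-VM -VM $vm -Snapshot $snap -Confirm:$false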

Reverting a VM from Memory Managed Snapshot

To revert a VM to a memory snapshot, the ESXi host first directs the array to restore the VM’s data vVols from their snapshots, and then binds the VM’s memory vVol and reloads its memory. Reverting a VM to a memory snapshot takes slightly longer and results in a burst of read activity on the array.

A VM reverted to a memory snapshot can be reverted either suspended or to a running state. Check the Suspend this virtual machine when reverting to selected snapshot box in the Confirm Revert to Snapshot wizard to leave the reverted VM suspended rather than running. If the box is not checked, the VM is reverted to its running state at the time of the snapshot.

vVols Implementation Guide - Snapshots - 11.png
FlashArray GUI View - Memory Snapshot Read IO

Deleting a Managed Snapshot

Snapshots created with VMware management tools can be deleted with those same tools. VMware administrators can only delete snapshots taken with VMware tools.

To delete a VM snapshot from the Web Client Host and Clusters or VMs and Templates view, select the target VM and click the Snapshots tab in the adjacent pane to display a list of its snapshots.

Select the snapshot to be deleted, click the All Actions button, and select Delete Snapshot from the dropdown menu to launch the Confirm Delete wizard. Click Yes to confirm the deletion.
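
The same deletion can be performed with PowerCLI; a minimal sketch with illustrative names:

# Delete a managed snapshot of a vVol-based VM (names are illustrative).
Get-Snapshot -VM 'vVolVM' -Name 'NewSnapshot' | Remove-Snapshot -Confirm:$false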

vVols Implementation Guide - Snapshots - 12.png

VMware removes the VM’s snapshot files from the vVol datastore and directs the array to destroy the snapshot. What happens next on the array depends on whether FlashArray Safemode is enabled.

When Safemode is disabled the VASA Provider will destroy and automatically eradicate the deleted managed snapshot.  This helps reduce object count churn in environments that leverage managed snapshots as part of backup workflows.

One of the options when Safemode is enabled is disabling the manual eradication on the array.  This means that the VASA Provider is no longer able to automatically eradicate the deleted managed snapshot.  Keep this in mind when planning object count headroom when using vVols with FlashArray Safemode.

vVols Implementation Guide - Snapshots - 13.png
FlashArray GUI View - Deleted Managed Snapshot objects are eradicated
vVols Implementation Guide - Snapshots - 14.png
FlashArray GUI View - Safemode is enabled - Deleted Managed Snapshot objects are not eradicated 

When VMware deletes a conventional VM snapshot, it reconsolidates (overwrites the VM’s original VMDKs with the data from the delta VMDKs). Depending on the amount of data changed after the snapshot, this can take a long time and have a significant performance impact. With FlashArray-based snapshots of vVols, however, there is no reconsolidation. Destroying a FlashArray snapshot is essentially instantaneous. Any storage reclamation occurs after the fact during the normal course of the array’s periodic background garbage collection (GC).

Unmanaged Snapshots - FlashArray Snapshots

Snapshots created with VMware tools are called managed snapshots. Snapshots created by external means, such as the FlashArray GUI, CLI, and REST interfaces and protection group policies, are referred to as unmanaged. The only difference between the two is that VMware tools can be used with managed snapshots, whereas unmanaged ones must be managed with external tools.

Unmanaged snapshots (and volumes) can be used in the VMware environment. For example, FlashArray tools can copy an unmanaged source snapshot or volume to a target data vVol, overwriting the latter’s contents, but with some restrictions:

Volume size

A source snapshot or volume must be of the same size as the target data vVol. FlashArrays can copy snapshots and volumes of different sizes (the target resizes to match the source), but VMware cannot accommodate external vVol size changes. To overwrite a data vVol with a snapshot or volume of a different size, use VMware tools to resize the target vVol prior to copying.

Offline copying

Overwriting a data vVol while it is in use typically causes the application to fail or produce incorrect results. A vVol should be offline to its VM, or the VM should be powered off before overwriting.

Config vVols

Config vVols should only be overwritten with their own snapshots.

Memory vVols

Memory vVols should never be overwritten. There is no reason to overwrite them, and doing so renders them unusable.

Snapshot Management with the Plugin

The snapshot-related plugin workflows under this heading (viewing VM vVol details, creating a FlashArray snapshot of a vVol disk, restoring a vVol from a FlashArray snapshot, and creating a vVol copy) are identical to the workflows documented earlier in this guide; refer to those sections above for the step-by-step details.


[Back to Top]  


Storage Policy Based Management

A major benefit of the vVol architecture is granularity—its ability to configure each virtual volume as required and ensure that the configuration does not change.

Historically, configuring storage with VMware management tools has required GUI plugins. Every storage vendor’s tools were unique—there was no consistency across vendors. Plugins were integrated with the Web Client, but not with vCenter itself, so there was no integration with the SDK or PowerCLI. Moreover, ensuring on-going configuration compliance was not easy, especially in large environments. Assuring compliance with storage policies generally required 3rd party tools.

With vVol data granularity, an array administrator can configure each virtual disk or VM exactly as required. Moreover, with vVols, data granularity is integrated with vCenter in the form of custom storage policies that VMware administrators create and apply to both VMs and individual virtual disks.

Storage policies are VMware administrator-defined collections of storage capabilities. Storage capabilities are array-specific features that can be applied to volumes on the array. When a storage policy is applied, VMware filters out non-compliant storage so that only compliant targets are presented as options for configuring storage for a VM or vVol.

If an array administrator makes a VM or volume non-compliant with a VMware policy, for example by changing its configuration on the array, VMware marks the VM or VMDK non-compliant. A VMware administrator can remediate non-compliant configurations using only VMware management tools; no array access is required.


FlashArray Storage Capabilities

An array’s capabilities represent the features it offers. When a FlashArray’s VASA providers are registered with vCenter, the array informs vCenter that it has the following capabilities:

  • Encryption of stored data (“data at rest”)
  • Deduplication
  • Compression
  • RAID protection
  • Flash storage

All FlashArrays offer these capabilities; they cannot be disabled. VMware administrators can configure the additional capabilities advertised by the VASA provider and listed in the following table.

Capability Name: Value (values are not case-sensitive)

  • Consistency Group Name: A FlashArray protection group name
  • FlashArray Group: Name of one or more FlashArrays
  • Local Snapshot Interval: A time interval in seconds, minutes, hours, days, weeks, months or years
  • Local Snapshot Policy Capable: Yes or No
  • Local Snapshot Retention: A time interval in seconds, minutes, hours, days, weeks, months or years
  • Minimum Replication Concurrency: Number of target FlashArrays to replicate to at once
  • Pure Storage FlashArray: Yes or No
  • QoS Support: Yes or No
  • Replication Capable: Yes or No
  • Replication Interval: A time interval in seconds, minutes, hours, days, weeks, months or years
  • Replication Retention: A time interval in seconds, minutes, hours, days, weeks, months or years
  • Target Sites: Names of specific FlashArrays desired as replication targets

Configurable Capabilities Advertised by FlashArray VASA Providers 1.0.0 and higher

New capabilities are advertised with the release of Purity//FA 6.2.6 and the Pure VASA Provider 2.0.  These capabilities are all placement-based rules and will only match a vVol datastore on a FlashArray running Purity//FA 6.2.6 or higher.  The capabilities available starting with Purity//FA 6.2.6 and VASA Provider 2.0 are listed below.

QoS Placement Capability Name: Value (values are not case-sensitive)

  • Per Virtual Disk IOPS Limit: A value and the unit of measurement (hundreds, thousands or millions)
  • Per Virtual Disk Bandwidth Limit: A value and the unit of measurement (KB/s, MB/s or GB/s)

Local Snapshot Protection Placement Capability Name: Value (values are not case-sensitive)

  • Snapshot Interval: A time interval in seconds, minutes, hours, days, weeks, months or years
  • Retain all Snapshots for: A time interval in seconds, minutes, hours, days, weeks, months or years
  • Retain Additional Snapshots: Number of snapshots to be retained
  • Days to Retain Additional Snapshots: Number of days to retain the additional snapshots

Volume Tagging Placement Capability Name: Value (values are not case-sensitive)

  • Key: Name of the volume key tag
  • Value: Name of the volume value tag
  • Copyable: Yes or No

Configurable Capabilities Advertised by FlashArray VASA Providers 2.0.0 and higher
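Once the VASA providers are registered, the advertised capabilities can also be listed from PowerCLI. Below is a minimal sketch, assuming a connected vCenter session; the vCenter name is hypothetical, and the com.purestorage.* namespace filter is inferred from the rule strings shown in the Get-SpbmStoragePolicy output later in this guide, so verify it against your environment.

# Connect to vCenter (VMware.PowerCLI module)
Connect-VIServer -Server vcenter.example.com

# List the storage capabilities advertised by registered VASA providers,
# filtered to the Pure Storage namespace
Get-SpbmCapability -Name "com.purestorage.*" | Sort-Object Name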


Storage Capability Compliance

Administrators can specify values for some or all of these capabilities when creating storage policies. VMware performs two types of policy compliance checks:

  • If a vVol were to be created on the array, could it be configured with the feature?
  • Is a vVol in compliance with its policy? For example, a vVol with a policy of hourly snapshots must be (a) on a FlashArray that hosts a protection group with hourly snapshots enabled and (b) a member of that protection group.

Only VMs and virtual disks configured with vVols can be compliant. VMFS-based VMs are never compliant, even if their volume is on a compliant FlashArray.

The following lists, for each capability, the circumstances under which an array offers it, and under which a vVol is in or out of compliance with it.

  • Pure Storage FlashArray
    • An array offers this capability when it is a FlashArray (i.e. always).
    • A vVol is in compliance when it is on a FlashArray and the capability is set to ‘Yes’.
    • A vVol is out of compliance when it is on a different array vendor/model and the capability is set to ‘Yes’, or when it is on a FlashArray and the capability is set to ‘No’.
  • FlashArray Group
    • An array offers this capability when it is a FlashArray and its name is listed in this group.
    • A vVol is in compliance when it is on a FlashArray with one of the configured names.
    • A vVol is out of compliance when it is not on a FlashArray with one of the configured names.
  • QoS Support
    • An array offers this capability when it is a FlashArray and has QoS enabled.
    • A vVol is in compliance when it is on a FlashArray with QoS enabled and the capability is set to ‘Yes’, or on a FlashArray with QoS disabled and the capability is set to ‘No’.
    • A vVol is out of compliance when it is on a FlashArray with QoS disabled and the capability is set to ‘Yes’, or on a FlashArray with QoS enabled and the capability is set to ‘No’.
  • Consistency Group Name
    • An array offers this capability when it is a FlashArray and has a protection group with that name.
    • A vVol is in compliance when it is in a protection group with that name.
    • A vVol is out of compliance when it is not in a protection group with that name.
  • Local Snapshot Policy Capable
    • An array offers this capability when it is a FlashArray and has at least one protection group with an enabled snapshot schedule.
    • A vVol is in compliance when it is on a FlashArray with at least one protection group with an enabled snapshot schedule.
    • A vVol is out of compliance when it is on a FlashArray that does not have at least one protection group with an enabled snapshot schedule, or when it is not on a FlashArray.
  • Local Snapshot Interval
    • An array offers this capability when it is a FlashArray and has at least one protection group with an enabled local snapshot policy of the specified interval.
    • A vVol is in compliance when it is in a protection group with an enabled local snapshot policy of the specified interval.
    • A vVol is out of compliance when it is not in a protection group with an enabled local snapshot policy of the specified interval.
  • Local Snapshot Retention
    • An array offers this capability when it is a FlashArray and has at least one protection group with an enabled local snapshot policy of the specified retention.
    • A vVol is in compliance when it is in a protection group with an enabled local snapshot policy of the specified retention.
    • A vVol is out of compliance when it is not in a protection group with an enabled local snapshot policy of the specified retention.
  • Replication Capable
    • An array offers this capability when it is a FlashArray and has at least one protection group with an enabled replication schedule.
    • A vVol is in compliance when it is in a protection group with an enabled replication target.
    • A vVol is out of compliance when it is not in a protection group with an enabled replication target.
  • Replication Interval
    • An array offers this capability when it is a FlashArray and has at least one protection group with an enabled replication policy of the specified interval.
    • A vVol is in compliance when it is in a protection group with an enabled replication policy of the specified interval.
    • A vVol is out of compliance when it is not in a protection group with an enabled replication policy of the specified interval.
  • Replication Retention
    • An array offers this capability when it is a FlashArray and has at least one protection group with an enabled replication policy of the specified retention.
    • A vVol is in compliance when it is in a protection group with an enabled replication policy of the specified retention.
    • A vVol is out of compliance when it is not in a protection group with an enabled replication policy of the specified retention.
  • Minimum Replication Concurrency
    • An array offers this capability when it is a FlashArray and has at least one protection group with the specified number or more of allowed replication targets.
    • A vVol is in compliance when it is in a protection group that has the specified number of allowed replication targets.
    • A vVol is out of compliance when it is not in a protection group that has the specified number of allowed replication targets.
  • Target Sites
    • An array offers this capability when it is a FlashArray and has at least one protection group with one or more of the specified allowed replication targets. If “Minimum Replication Concurrency” is set, the protection group must include at least that configured number of the listed FlashArrays.
    • A vVol is in compliance when it is in a protection group with one or more of the specified allowed replication targets. If “Minimum Replication Concurrency” is set, it must be replicated to at least that configured number of the listed target FlashArrays.
    • A vVol is out of compliance when it is not in a protection group replicating to the minimum number of correct target FlashArrays.

QoS Placement Capabilities

  • Per Virtual Disk IOPS Limit
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when the volume's QoS IOPS Limit matches the value of the rule.
    • A vVol is out of compliance when the volume's QoS IOPS Limit is either unset or does not match the value in the rule.
  • Per Virtual Disk Bandwidth Limit
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when the volume's QoS Bandwidth Limit matches the value of the rule.
    • A vVol is out of compliance when the volume's QoS Bandwidth Limit is either unset or does not match the value in the rule.

Local Snapshot Protection Placement Capabilities

  • Snapshot Interval
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group and the snapshot schedule is enabled.
    • A vVol is out of compliance when the vVol is not a member of the paired protection group, the interval does not match the policy rule or the snapshot schedule is disabled.
  • Retain all Snapshots for
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group and the snapshot schedule is enabled.
    • A vVol is out of compliance when the vVol is not a member of the paired protection group, the retention interval does not match the policy rule or the snapshot schedule is disabled.
  • Retain Additional Snapshots
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group and the snapshot schedule is enabled.
    • A vVol is out of compliance when the vVol is not a member of the paired protection group, the value does not match the policy rule or the snapshot schedule is disabled.
  • Days to Retain Additional Snapshots
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when the protection group that VASA is using for the storage policy matches the ruleset, the vVol is a member of the protection group and the snapshot schedule is enabled.
    • A vVol is out of compliance when the vVol is not a member of the paired protection group, the value does not match the policy rule or the snapshot schedule is disabled.

Volume Tagging Placement Capabilities

  • Key
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when a tag exists on the volume that matches the key value pair dictated by the rule.
    • A vVol is out of compliance when a tag does not exist or does not match the key value pair dictated by the rule.
  • Value
    • An array offers this capability when it is a FlashArray running Purity//FA 6.2.6 or higher.
    • A vVol is in compliance when a tag exists on the volume that matches the key value pair dictated by the rule.
    • A vVol is out of compliance when a tag does not exist or does not match the key value pair dictated by the rule.

 


Combining Capabilities and Storage Compliance

This section describes an example of combining capabilities into a policy. Storage policies are a powerful method of assuring specific configuration control, but they affect how vVol compliance is viewed. For an array or vVol to be compliant with a policy:

  1. The array or vVol must comply with all of the policy’s capabilities
  2. For snapshot and replication capabilities, the array must have at least one protection group that offers all of the policy’s capabilities. For example, if a policy requires hourly local snapshots and replication every 5 minutes, a protection group with hourly snapshots and a different protection group with 5-minute replication do not make the array compliant. VMware requires that volumes be in a single group during policy configuration, so to be compliant for this example, an array would require at least one protection group with hourly snapshots and 5-minute replication.
  3. Some combinations of capabilities cannot be compliant. For example, setting an array’s Local Snapshot Policy Capable capability to No and specifying a policy that includes snapshots means that no storage compliant with the policy can be hosted on that array.

Creating a Storage Policy

vCenter makes the capabilities advertised by an array’s VASA Provider available to VMware administrators for assembling into storage policies. Administrators can create policies by using APIs, GUI, CLI, or other tools. This section describes two ways of creating policies for FlashArray-based vVols:

  1. Custom Policy Creation
    1. Using the Web Client to create custom policies using capabilities published by the FlashArray VASA provider
  2. Importing FlashArray Protection Groups
    1. Using the Plugin to create storage policies by importing a FlashArray protection group configuration

Creating Custom Storage Policies

Click the home icon at the top of the Web Client home screen, and select Policies and Profiles from the dropdown menu to display the VM Storage Policies pane.

vVols Implementation Guide - SPBM - 01.png
Policies and Profiles Command

Select the VM Storage Policies tab and click the Create VM Storage Policy button  to launch the Create New VM Storage Policy wizard.

vVols Implementation Guide - SPBM - 02.png
Create VM Storage Policy Button

Select a vCenter from the dropdown and enter a name and description for the policy.

vVols Implementation Guide - SPBM - 03.png
Create New VM Storage Policy Wizard

It is a best practice to use a naming convention that is operationally meaningful. For example, the name above suggests that the policy will have local snapshot protection with a one-hour interval and 15-minute replication.

Select com.purestorage.storage.policy in the <Select provider> dropdown to use the FlashArray VASA provider rules (com.purestorage.storage.policy) to create the storage policy.

vVols Implementation Guide - SPBM - 04.png
Before Purity 6.2.6 and VASA Provider 2.0.0, the FlashArray VASA Provider rulesets showed up as a string like this.  Moving forward, the rules have a friendlier name and display as just "Pure Storage".

vVols Implementation Guide - SPBM - 04 - b.png
Rule-set Page of the Create New VM Storage Policy Wizard

A storage policy requires at least one rule. To locate all VMs and virtual disks to which this policy will be assigned on FlashArrays, click the <Add rule> dropdown and select the Pure Storage FlashArray capability.

vVols Implementation Guide - SPBM - 05.png
Adding a Storage Policy Rule

At least one Placement rule needs to be provided in order to create a storage policy with the Pure Storage rules.

vVols Implementation Guide - SPBM - 17.png
Error if no placement rule is provided

The selected rule name appears above the <Add rule> dropdown, and a dropdown list of valid values appears to the right of it. Select Yes and click Next to create the policy. As defined thus far, the policy requires that VMs and vVols to which it is assigned be located on FlashArrays, but they are not otherwise constrained. When a policy is created, the Plugin checks registered arrays for compliance and displays a list of vVol datastores on arrays that support it.

vVols Implementation Guide - SPBM - 09.png
List of Arrays Compatible with a New Storage Policy

The name assigned to the policy (FlashArray-1hrSnap15minReplication) suggests that it should specify hourly snapshots and 15-minute replications of any VMs and virtual volumes to which it is assigned. Click Back to edit the rule-set.

With the release of Purity 6.2.6 and VASA Provider 2.0.0, a placement rule was added for local snapshot protection, meaning that replication groups are no longer required to provide local snapshot protection to VMs.  If using a Purity//FA version lower than 6.2.6, replication groups are still needed to provide local snapshot protection.  The example in this KB shows how to leverage the Snapshot Placement ruleset in a policy.

FlashArray replication and snapshot (only required for Purity//FA versions below 6.2.6) capabilities require component rules. Click Custom and select Replication from the dropdown to display the Replication component rule pane.

vVols Implementation Guide - SPBM - 07.png
Selecting Replication Capabilities for the Policy

Click the Add Rule dropdown again, select Remote Replication Interval, enter 15 in the text box, select Minutes as the unit and click Next to display the list of registered arrays that are compatible with the augmented policy.

vVols Implementation Guide - SPBM - 08.png
Specifying Replication Interval Rule

In vSphere there is the ability to create pre-defined replication rule requirements.  These can be created in the Storage Policy Components tab under Policies and Profiles.

vVols Implementation Guide - SPBM - 11.png
Creating a new Storage Policy Component

After selecting the Pure Storage Replication Provider the same replication rules are available to be used.  Here we create a component ruleset that matches the custom one we provided in the storage policy above.

vVols Implementation Guide - SPBM - 12.png
Storage Policy Component with Replication rules

If we had wanted to use a replication component in the new storage policy we would have selected it from a drop down instead of choosing the custom option.

vVols Implementation Guide - SPBM - 13.png
Selecting a Storage Policy Component

With Purity//FA 6.2.6 and VASA Provider 2.0.0 local snapshot protection is provided through a placement capability.  To provide local snapshot protection the "Local Snapshot Protection" rule will need to be selected.  While the local snapshot rules can be added to a replication component, we recommend using local snapshot placement rules for this moving forward.  Here is what the local snapshot placement rules look like.

vVols Implementation Guide - SPBM - 06.png
Selecting Local Snapshot Placement Capability

With local snapshot placement, VASA creates a new FlashArray protection group that is mapped one to one with this storage policy.  As such, all protection group settings are required as part of enabling that capability ruleset.  Additionally, help tooltips are provided for each rule.

vVols Implementation Guide - SPBM - 06 - b.png
Snapshot placement rule hints

In the example provided here we have a Storage Policy that has both local snapshot placement enabled along with replication rules.

vVols Implementation Guide - SPBM - 10.png
Storage Policy review and finish

Note:
A policy can be created even if no registered vVol datastores are compatible with it, but it cannot be assigned to any VMs or vVols. Storage can be adjusted to comply, for example, by creating a compliant protection group, or alternatively, the policy can be adjusted to be compatible with existing storage.
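The same kind of policy can also be built programmatically. Below is a minimal PowerCLI sketch, assuming a connected vCenter session; the two capability names are taken from the rule strings shown in the Get-SpbmStoragePolicy output later in this guide, and the value types (a boolean and a TimeSpan) are assumptions inferred from that output, so verify them with Get-SpbmCapability before use.

# Look up two capabilities advertised by the Pure Storage VASA provider
$faCap   = Get-SpbmCapability -Name "com.purestorage.storage.policy.PureFlashArray"
$replCap = Get-SpbmCapability -Name "com.purestorage.storage.replication.RemoteReplicationInterval"

# Build rules: require a FlashArray and a 15-minute remote replication interval
$faRule   = New-SpbmRule -Capability $faCap -Value $true
$replRule = New-SpbmRule -Capability $replCap -Value (New-TimeSpan -Minutes 15)

# Combine the rules into a rule set and create the storage policy
$ruleSet = New-SpbmRuleSet -AllOfRules $faRule, $replRule
New-SpbmStoragePolicy -Name "FlashArray-15minReplication-PowerCLI" -Description "FlashArray vVols replicated every 15 minutes" -AnyOfRuleSets $ruleSet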

Auto-policy Creation with the Plugin

The vSphere Plugin allows users that have permissions to create vCenter Storage Policies to import a FlashArray protection group's schedule as policy rules.  Click to expand the workflow below.

Storage Policy Wizard (5.2.0 and higher)

vSphere Remote Plugin 5.2.0 and higher

With 5.2.0 and higher the workflow is now the storage policy wizard.  This is largely due to the fact that the workflow is no longer just importing a protection group's schedule as a policy; capabilities and rules are more granular than they have been before.  Here is the new workflow:

  1. From the main plugin page, select Create Storage Policies to open the wizard workflow.  New policies can be created for one or more vCenter Servers that are in linked mode.
    Storage-Policy-Wizard-01.png
    Plugin Home Page - Create Storage Policies
    Storage-Policy-Wizard-02.png
    Select vCenter Server(s) to create the policy for
  2. There are several features that can be selected when creating the policy.  Some features require specific Purity versions in order to be used, so please make note of those versions.  Once the features are selected, a list of compatible arrays that can support them is returned.
    Storage-Policy-Wizard-03.png
    Select Features
    Storage-Policy-Wizard-04.png
    Array Compatibility List
  3. There is an option to specify which array(s) the policy can be restricted to.  Here one array is selected and this will restrict the policy to using datastores from only this array.
    Storage-Policy-Wizard-05.png
    Array selection - optional ruleset
  4. With QoS support, the per-disk bandwidth and IOPS limits can be set.  Remember that these are enforced at an individual virtual disk level, not at a VM level.
    Storage-Policy-Wizard-06.png
    QoS Feature Support
  5. The volume tagging feature allows policies to have a key value pair that is tagged on the volumes with the policy applied to them.  
    Storage-Policy-Wizard-07.png
    Volume Tagging Feature
  6. The local snapshot protection feature uses the capabilities and rulesets for the policy to automatically create and manage a protection group on the array based off these capabilities.  Long term retention can also be configured for this feature.
    Storage-Policy-Wizard-08.png
    Snapshot Protection Feature
  7. With the replication feature there are two options.  One is to use a pre-existing protection group on a given array as the base/template for the replication schedule used in the policy.  The next is to manually configure/specify the replication rules that are used.
    Storage-Policy-Wizard-09.png
    Replication Policy Feature - Using an existing protection group as a template
    Storage-Policy-Wizard-10.png
    Replication Policy Feature - manually configuring the protection settings
  8. When manually setting the replication protection, all the same rules/capabilities that are normally available are organized more clearly.  In the example, only the replication interval and retention are configured.
    Storage-Policy-Wizard-11.png
    Replication Feature - Customized Protection
  9. Once all the features are selected and configured, it's time to name the storage policy.
    Storage-Policy-Wizard-12.png
    Naming the new Storage Policy
  10. At the end of the wizard a query is issued to see if any datastores match the rules outlined for each of the features.  Once completed a new storage policy is created with the rules specified in the wizard.
    Storage-Policy-Wizard-13.png
    Storage Policy Wizard - Ready to Complete Summary View

Here is a video demo and walkthrough of the new storage policy wizard:


 

Importing FlashArray Protection Groups as SPBM Policies with the Pure Storage vSphere Plugin (5.1.1 and lower workflow)

vSphere Plugin 4.5.0 through 5.1.1

  1. From the main plugin page, select the FlashArray to import the protection group settings and click on Import Protection Groups
vvols-plugin-kb-03-importing-pgroup-1.png
  2. Select the vCenters in which you would like the policy or policies to be created.
clipboard_eb8c25714b3a33f3dea91600e6da8a04d.png
  3. Choose one or more protection groups. The selected protection groups will be used as "templates" to create vVol storage policies.
clipboard_e1092e0cea44013635c6ea584ca4ac232.png
  4. In the next screen, you can enter a name for the policies. It will default to the protection group name, but you can change it as needed here.

clipboard_e26cf46c671e1b318af6e5256fec63e03.png

Note that it will prevent you from using a name that is in-use in one or more of the vCenters: clipboard_e7b3e0d6d45cefa92304ca6feb672d761.png

 

  5. The last screen offers two optional settings.
    1. Require array match: This will add the selected FlashArray into the policy and doing so will make sure that only storage from that specific FlashArray comply with the policy. This option maps to the FlashArray Group capability.
    2. Require protection group match: This will add the selected protection group name into the policy and doing so will make sure that only arrays with the specific protection group configuration AND name will comply with the policy. This option maps to the Consistency Group Name capability.

These settings will be configured uniformly for each selected protection group in the wizard, so if you want to configure the resultant policies differently, run through the wizard more than once, selecting the specific protection groups each time. Note that a given protection group can be imported more than once, as the source protection groups are used as templates for policies; there is no strict one-to-one mapping.

clipboard_e2bd1052e79d397ecbdc2ed1544d4348d.png
Complete the process after confirming the selections on the final screen.
clipboard_e00812127a5985eaa5608d7fb2e3d0e5f.png
Note that policies do not span vCenters, so the policy will be created once per selected vCenter in the wizard.
clipboard_e1a36b6839fedf6035de4a593ac02027f.png
 
If you selected the Require array match option in the wizard you will see the array name populated in the policy: clipboard_e5b61a4cedb8a99edbe34a625418104a5.png
If you selected the Require protection group match option in the wizard you will see the protection group name populated in the policy: clipboard_e53beae478d7779890f8bc13b86915f2f.png

vSphere Plugin 4.4.0 and Earlier

  1. From the main plugin page, select the FlashArray to import the protection group settings and click on "Import Protection Groups"
    vvols-plugin-kb-03-importing-pgroup-1.png
  2. The screen that shows up next will list the FlashArray protection groups.  In the parentheses the schedule and capabilities of the protection group will be listed.  In the event that a Storage Policy in vCenter already matches the FlashArray pgroup schedule the option to select that pgroup will be grayed out. Select the policy or policies and click Import.
    vvols-plugin-kb-03-importing-pgroup-2.png
  3. Navigate to "Policies and Profiles" and click on the VM Storage Policies tab.  From here you will see that the Storage Policies have been created.  The naming schema for these policies will be [FlashArray] [either Snapshot or Replication] [Schedule Interval].  Below there is a Replication and Snapshot policy shown.
    vvols-plugin-kb-03-importing-pgroup-3.png

 

Changing a Storage Policy

A VMware administrator can edit a storage policy that no longer fulfills the needs of the VMs assigned to it so that it meets current needs.

To change a policy’s parameters from the Policies and Profiles page in the Web Client, select VM Storage Policies, select the policy to be changed, and click the Edit button to display a list of the policy’s rules. Make the needed rule changes and click OK.

vVols Implementation Guide - SPBM - 14.png
Edit Settings Button
vVols Implementation Guide - SPBM - 15.png
Changing or adding a Policy Rule

Clicking OK launches the VM Storage Policy in Use wizard, offering two options for resolution:

Manually later 

Flags all VMs and virtual disks to which the changed policy is assigned as Out of Date.

Now

Assigns the changed policy to all VMs and virtual disks assigned to the original policy.

vVols Implementation Guide - SPBM - 16.png
VM Storage Policy in Use Wizard

If Manually later was chosen, VM compliance can be checked by selecting the storage policy and then looking at the VM Compliance tab.

vVols Implementation Guide - SPBM - 18.png
Out of Date Storage Policies

If Manually later is selected, VMs and vVols show Out of Date compliance status. Update the policies for the affected VMs by clicking Reapply in the VM Storage Policy header bar.

vVols Implementation Guide - SPBM - 19.png
Reapply Storage Policy Button
vVols Implementation Guide - SPBM - 20.png
Confirm the Reapply Action

Now that the policy has been reapplied, the VM's policy status will be in compliance.


Checking VM Storage Policy Compliance

A vVol-based VM or virtual disk may become noncompliant with its vCenter storage policy when a storage policy is changed, when an array administrator reconfigures volumes, or when the state of an array changes.

For example, if an array administrator changes the replication interval for a protection group that corresponds to a vCenter storage policy, the VMs and virtual disks to which the policy is assigned are no longer compliant.

To determine whether a VM or virtual disk is compliant with its assigned policy, either select the policy and display the objects assigned to it, or validate VMs and virtual disks for compliance with a given policy.

From the Web Client home page, click the VM Storage Policies icon to view the vCenter’s list of storage policies. Select a policy, click the Monitor tab, and click the VMs and Virtual Disks button to display a list of the VMs and virtual disks to which the policy is assigned.

vVols Implementation Guide - SPBM - 22.png
VM Storage Policies Icon
vVols Implementation Guide - SPBM - 21.png
Selecting a Policy and showing the VM Compliance List

Each policy’s status is one of the following:

Compliant

The VM or virtual disk is configured in compliance with the policy.

Noncompliant

The VM or virtual disk is not configured according to the policy.

Out-of-date

The policy has been changed but has not been re-applied. The VM or virtual disk may still be compliant, but the policy must be re-applied to determine that.
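Compliance can also be checked from PowerCLI. Below is a minimal sketch, assuming a connected vCenter session; the VM name is hypothetical.

# Show the storage policy configuration and compliance of a VM and its virtual disks
$vm = Get-VM -Name "vVol-VM-01"
Get-SpbmEntityConfiguration -VM $vm
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm)

The output lists each entity, its assigned storage policy, and its current compliance status.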


Assigning a Storage Policy to a VM or Virtual Disk

The Web Client can assign a storage policy to a new VM or virtual disk when it is created, deployed from a template, or cloned from another VM. A VMware administrator can change the policy assigned to a VM or virtual disk. Finally, a VM’s storage policy can be changed during Storage vMotion.

Assigning a Storage Policy to New VM

A VMware administrator can assign a storage policy to a new VM created using the Deploy from Template wizard. (The procedure is identical to policy assignment with the Create New Virtual Machine and Clone Virtual Machine wizards.)

Right-click the target template in the Web Client inventory pane’s VMs and Templates list, and select New VM from This Template.

vVols Implementation Guide - SPBM - 23.png
New VM from Template Command

From the Storage selection page there is a section for VM Storage Policy that allows the selection of any policies that have been created.

vVols Implementation Guide - SPBM - 24.png
Select Storage Step of Template

Setting a Policy for an Entire VM

In the Select Storage pane, select Thin Provision from the Select virtual disk format dropdown (FlashArrays not running Purity//FA 6.2.0 or later only support thin provisioned volumes; selecting other options causes VM creation to fail), and either select a datastore (VMFS, NFS or vVol) from the list or a policy from the VM storage policy dropdown.

Selecting a policy filters the list to include only compliant storage. For example, selecting the built-in vVol No Requirements Policy would filter the list to show only vVol datastores.

Selecting the FlashArray-15MinReplication-Component policy filters out datastores on arrays that do not have protection groups with those properties.

vVols Implementation Guide - SPBM - 25.png
Selecting a Storage Policy

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned, or, if Automatic is selected, VMware directs the array to create a protection group with the specified capabilities.

Whichever option is chosen, the VM’s config vVol and all of its data vVols are assigned the same policy. (Swap vVols are never assigned a storage policy.) Click Finish to complete the wizard. The VM is created and its data and config vVols are placed in the assigned protection group.

vVols Implementation Guide - SPBM - 26.png
Assign an Existing Replication Group

BEST PRACTICES: Pure Storage recommends assigning local snapshot policies to all config vVols to simplify VM restoration.

All FlashArray volumes are thin provisioned, so the Thin Provision virtual disk format should always be selected. With FlashArray volumes, there is no performance impact for thin provisioning.

The screenshot below shows the FlashArray GUI view of a common storage policy for an entire vVol-based VM.

vVols Implementation Guide - SPBM - 27.png

FlashArray GUI View of a VM-wide Storage Policy
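For reference, the same end state can be reached from PowerCLI. Below is a minimal sketch, assuming a connected vCenter session; the host, datastore, and VM names are hypothetical, the policy name follows the example above, and the replication group lookup and Set-SpbmEntityConfiguration parameter binding should be verified against your PowerCLI version.

# Create a VM with a thin-provisioned disk on the FlashArray vVol datastore
$esxHost = Get-VMHost -Name "esxi-01.example.com"
$vvolDS  = Get-Datastore -Name "FlashArray-vVol-DS"
$vm = New-VM -Name "vVol-VM-01" -VMHost $esxHost -Datastore $vvolDS -DiskGB 40 -DiskStorageFormat Thin

# Assign the policy (and, because it includes replication, a replication group)
# to the VM's config vVol and to all of its data vVols
$policy = Get-SpbmStoragePolicy -Name "FlashArray-1hrSnap15minReplication"
$rg     = Get-SpbmReplicationGroup -StoragePolicy $policy | Select-Object -First 1
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -VM $vm) -StoragePolicy $policy -ReplicationGroup $rg
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm)) -StoragePolicy $policy -ReplicationGroup $rg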

Assigning a Policy to Each of VM's Virtual Disks 

In most cases, VMware administrators put all of a VM’s volumes in the same protection group, thereby assigning the same storage policy to them.

Alternatively, there is the option to apply the storage policy on a per-virtual-disk basis.  Select Configure Per Disk.

vVols Implementation Guide - SPBM - 28.png
Configure storage policy per Disk

In this view, a separate storage policy can be specified for the VM’s config vVol as well as for each virtual disk (data vVol).

The Configuration File line refers to the VM’s config vVol. The remaining lines enumerate its data vVols (Hard Disk 1 in the example).

vVols Implementation Guide - SPBM - 29.png
Configure Per Disk - Configuring the Configuration File

The objects can be configured individually or by selecting multiple virtual disks or the configuration file.

vVols Implementation Guide - SPBM - add 01.png
Configure Per Disk - Selecting one virtual disk
vVols Implementation Guide - SPBM - 30.png
Configure Per Disk - Selecting two virtual disks

Selecting a policy from the VM storage policy dropdown filters the list to include only compliant datastores. For example, selecting the vVol No Requirements Policy lists only vVol datastores. 

A storage policy that includes local snapshots or remote replication requires a replication group. An existing group can be assigned (for example, sn1-x70-c05-33:FlashArray-SPBM-15minReplication).

Alternatively, if Automatic is selected, the array creates a protection group with the capabilities specified by the policy. Whichever option is chosen, the policy is assigned to the vVol.

vVols Implementation Guide - SPBM - add 02.png
Select VM Storage Policy - Storage - Replication Group

For example, a VM’s config vVol might be assigned a FlashArray-1hrLocalSnap storage policy, which uses Local Snapshot Protection rules, whereas its boot data vVol might be assigned the FlashArray-15minReplication policy, corresponding to the sn1-x70-c05-33:FlashArray-SPBM-15minReplication replication group.

vVols Implementation Guide - SPBM - 31.png
Separate Storage Policies for Config and Data vVols

Here are the screenshots from the Array that list the contents of the two protection groups that correspond to the policies chosen for the config and data.

vVols Implementation Guide - SPBM - add 03.png
Config vVol in the VASA managed protection group for the given Storage Policy
vVols Implementation Guide - SPBM - 32.png
Data vVol in the 15 Minute Replication Array Protection Group

VMware does not allow a single VM to have different Replication Groups for different virtual disks.  If a VM has objects in a replication group, then all objects that have a replication policy applied should use the same replication group.  
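Below is a minimal PowerCLI sketch of this per-disk assignment, assuming a connected vCenter session and reusing the policy and replication group names from the example above; the VM name is hypothetical, and exact parameter binding may vary by PowerCLI version.

# Snapshot-only policy for the VM home (config vVol), replication policy for the boot disk
$snapPolicy = Get-SpbmStoragePolicy -Name "FlashArray-1hrLocalSnap"
$replPolicy = Get-SpbmStoragePolicy -Name "FlashArray-15minReplication"
$rg = Get-SpbmReplicationGroup -StoragePolicy $replPolicy | Where-Object { $_.Name -eq "sn1-x70-c05-33:FlashArray-SPBM-15minReplication" }

$vm = Get-VM -Name "vVol-VM-01"
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -VM $vm) -StoragePolicy $snapPolicy

# All replicated objects of the VM should use the same replication group
$bootDisk = Get-HardDisk -VM $vm -Name "Hard disk 1"
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -HardDisk $bootDisk) -StoragePolicy $replPolicy -ReplicationGroup $rg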

Changing a VM's Storage Policy

To change a VM’s storage policy, a VMware administrator assigns a new policy to it. VMware directs the array to reconfigure the affected vVols. If the change makes the VM or any of its virtual disks non-compliant, the VMware administrator must adjust their policies.

To change a VM’s storage policy, select the VMs and Templates view in the Web Client inventory pane, (1) right-click the target VM, (2) select VM Policies from the dropdown menu, and (3) select Edit VM Storage Policies from the secondary dropdown to launch the Edit VM Storage Policies wizard.

vVols Implementation Guide - SPBM - 33.png
Edit VM Storage Policies Command

The VM Storage Policies can be edited at a per-disk level or all at the same time.  The same applies to the replication group selection.

vVols Implementation Guide - SPBM - 34.png
Edit VM Storage Policies Wizard
vVols Implementation Guide - SPBM - 35.png
Edit VM Storage Policy Replication Group

To change the storage policy assigned to a VM’s config vVol or a single data vVol, select a policy from the dropdown in the VM Storage Policy column of its row in the table.

Selecting a policy that is not valid for the array that hosts a vVol displays a Datastore does not match current VM policy error message. To satisfy the selected policy, the VM would have to be moved to a different array (reconfiguration would not suffice).

A storage policy change may require that the replication groups for one or more vVols be changed.

vVols Implementation Guide - SPBM - 37.png

vVols Implementation Guide - SPBM - 38.png
One or More Replication Groups not Configured

This warning typically appears for one of two reasons:

  1. One or more vVols are in replication groups (FlashArray protection groups) that do not comply with the new storage policy.
  2. The new storage policy requires that vVols be in a replication group, and one or more vVols are not.

Note: If no policy is shared by all of the VM’s vVols, the Replication group dropdown does not appear.

When the policy and replication groups (if required) are set, the Policy summary page will show the state and compliance for each object.


Assigning a Policy during Storage Migration

Compliance with an existing or newly assigned storage policy may require migrating a VM to a different array. For example, VM migration is required if:

  • A policy specifying a different array than the current VM or virtual disk location is assigned
  • A policy requiring QoS (or not) is assigned to a VM or virtual disk located on an array with the opposite QoS setting.
  • A policy specifying snapshot or replication parameters not available with any protection group on a VM or virtual disk’s current array is assigned.

Some of these situations can be avoided by array reconfiguration, for example by creating a new protection group or inverting the array’s QoS setting. Others, such as a specific array requirement, cannot. If an array cannot be made to meet a policy requirement, the VMware administrator must use Storage vMotion to move the VM or virtual disk to an array that can satisfy the requirement. The administrator can select a new storage policy during Storage vMotion.

Here is what the process looks like when migrating a VM from VMFS to vVols.  During the storage selection process you can choose the storage policy and see what compatible storage is available.  After choosing to migrate the VM, select Change storage only.

Screen Shot 2022-04-21 at 5.10.33 PM.png
Migrate VM - Storage Only

From the Select Storage page in the wizard there is the option to select a storage policy.  When selecting a storage policy, the compatible storage will be shown at the top of the storage list.  If choosing a policy that requires a replication group, there is a replication group selection box at the bottom.

Screen Shot 2022-04-21 at 5.11.47 PM.png
Screen Shot 2022-04-21 at 5.12.34 PM.png
Migrate VM - Storage and Policy Selection

Once the desired policy and replication group (if needed) is selected the migration wizard can be completed.  

BEST PRACTICE: Pure Storage recommends reselecting the same storage policy rather than the Keep existing storage policy option in order to provide Storage vMotion with the information it needs to complete a migration.
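Below is a minimal PowerCLI sketch of the same storage-only migration, assuming a connected vCenter session; the VM and datastore names are hypothetical, and the policy is re-applied after the move with Set-SpbmEntityConfiguration rather than selected during the migration itself.

# Storage-only migration of a VMFS-based VM to the FlashArray vVol datastore
$vm     = Get-VM -Name "VMFS-VM-01"
$vvolDS = Get-Datastore -Name "FlashArray-vVol-DS"
Move-VM -VM $vm -Datastore $vvolDS

# Then re-apply the desired storage policy (and replication group, if required)
# to the VM and its disks with Set-SpbmEntityConfiguration, as shown earlier.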

[Back to Top]  


Replicating vVols

With VASA version 3, FlashArrays can replicate vVols. VMware is aware of replicated VMs and can fail them over and otherwise manage replication. This User Guide will dive into vVols replication: the concepts, API calls, operations, and how to use them.

The terminology when discussing vVols Replication should be covered before discussing the workflows and methods of replicating vVols.


vVols Replication Terminology

These terms are fundamental to how the APIs and integration with vVols replication will work.  

Name/Concept Explanation

Replication Provider

A VASA provider that supports VASA version 3 and array based replication-type features.

This will inform VMware of replication features, configure VMs with replication, and inform VMware of compliance.

Storage Capabilities

The array based replication features offered up by a replication provider. What these are is very vendor specific.

This can be replication interval, consistency groups, concurrency, retention, etc.

Storage Policy

A collection of VASA capabilities; assembled together by a user and assigned values.

Fault Domain

This is an available target in the replication group. In other words, each fault domain is an array that you can fail VMs in that replication group over to. 

Fault domain = Array.

Source Replication Group

A unit of failover for replicated vVol VMs. Individual VM failover is not possible (unless it is the only VM in the replication group).

Replicated vVols are put into a source group. Every source group has a respective target group on each replication target (fault domain).

The source replication group will be associated to a FlashArray protection group on the source FlashArray. e.g. pgroup-1

Target Replication Group

For every fault domain specified in a source replication group, there is a target replication group.

Test failovers, failovers, and reprotects are executed against a target replication group.

If there is a DR event, it is possible that only the target group is left. It is designed to withstand the failure of the source.

The target replication group will be associated to a target protection group on the target FlashArray.  e.g. FlashArray-A:pgroup-1

With these terms covered, here is a visual representation of what these terms correlate to.  In the illustration below there are three FlashArrays, with FlashArray-A replicating to FlashArray-B and FlashArray-C.  

work-in-progress-vvol-replication-overview.png

VMware's vSphere user guide covers vVols replication groups and fault domains in some additional detail.  Please refer to that user guide if additional context is desired.


vVols Array Based Replication Overview

VMware vVol replication has three components:

  • Replication Policies (Storage Policy)
    • Specify sets of VM requirements and configurations for replication that can be applied to VMs or virtual disks. If configuration changes violate a policy, VMs to which it is assigned become non-compliant
  • Replication Groups (FlashArray Protection Groups)
    • Correspond to FlashArray protection groups, and are therefore consistency groups in the sense that replicas of them are point-in-time consistent. Replication policies require replication groups
  • Failure domains
    • Sets of replication targets. VMware requires that a VM’s config vVol and data vVols be replicated within a single failure domain.

In the FlashArray context, a failure domain is a set of arrays. For two vVols to be in the same failure domain, one must be replicated to the same arrays as the other. In other words, a VM’s vVols must all be located in protection groups that have the same replication targets.

vv172.png

Replication policies can only be assigned to config vVols and data vVols. Other VM objects inherit replication policies in the following way:

  • A memory vVol inherits the policy of its configuration vVol.
  • Managed Snapshots (and their chains) inherit the policy of the configuration vVol.
  • vVols of Other type (such as digest or sidecar) inherit the policy of the configuration vVol.
  • The swap vVol, which only exists when a VM is powered on, is never replicated.

VMware can perform three types of failovers on vVol-based VMs:

Planned Failover

Movement of a VM from one datacenter to another, for example for disaster avoidance or planned migration. Both source and target sites are up and running throughout the failover. Once a planned failover is complete, replication can be reversed so that the failed over VM can be failed back.

Unplanned Failover

Movement of a VM when a production datacenter fails in some way. Failures may be temporary or irreversible. If the original datacenter recovers after failover, automated re-protection may be possible. Otherwise, a new replication scheme must be configured.

Test Failover

Similar to planned failover, but does not bring down the production VM. Test failover recovers temporary copies of protected VMs to verify the failover plan before an actual disaster or migration.

These vVol failover modes can be implemented using the VMware SDK, tools such as Site Recovery Manager (SRM), PowerCLI or vRealize Orchestrator, or any tool that can access the VMware SPBM SDK.


vVols Replication Operations

With the terminology foundation laid, it's time to dig into the specific APIs that drive vVols replication and management.  With each API call the API's operation, purpose and use cases will be covered.

API Call Operation and Explanation
SyncReplicationGroup_Task()

Operation:  Synchronize Replication Group

Purpose:  To tell a replication group to synchronize from its source group. You can specify a name to also indicate a point-in-time to reference later.

Use Case:  Useful for creating quiesced or special failover points.

You issue this against a target replication group—it will then return a task. When the replication to that fault domain is complete, the task will return completion.

What VASA Does:  VASA on the Target FlashArray triggers an on demand snapshot to be replicated from the source FlashArray to the Target FlashArray. 

For example:
purepgroup snap --replicate-now --on source-fa-name source-pgroup-name

This allows the command to correctly be issued against the target replication group.  VASA will add a suffix to these snapshots named "VasaSyncRG" followed by 6 random letters/numbers.  This is an async task and VASA will return a task ID to vCenter for this async task.

TestFailoverReplicationGroupStart_Task()

Operation:  Test Failover Start

Purpose:  This initiates a target replication group to present writable vVols of the VMs in a replication group to the target side. This can be issued without a point-in-time (using the latest version) or a specified point-in-time.

Use Case:  Testing your recovery plans without affecting production.

A test is run against a target replication group. This changes the target replication group from the TARGET state into the INTEST state.

What VASA Does:  VASA on the Target FlashArray starts the workflow of grabbing the most recent snapshot for the target replication group.  Then, VASA copies out all of the volumes from the snapshot on the target FlashArray.  The correct volume groups will be created and the config and data vVols will be placed in the correct vgroups.

In addition to copying out the volumes from the snapshot, VASA will create a pgroup with the same replication schedule/settings as the source replication group if this is the first time that the replication group has had a testFailover or Failover run.  If a testFailover or Failover has previously been run on the target Replication Group, VASA will attempt to reuse the pgroup that was created in that process.  If that pgroup was destroyed or if additional unknown volumes were added to the pgroup, VASA will treat the testFailover as the first one and create a new pgroup.

In the event that the testFailover is being run after a successful Failover and re-protect, VASA will first attempt to reuse the vgroups, volumes and pgroup when copying out the volumes from the snapshot.  In the event that the pgroup has been destroyed or has been edited, VASA will create a new pgroup and will not reuse the existing vgroups and volumes.

After the volumes are copied out, all of the metadata associated with these vVol VMs are updated for the new storage container and VASA provider as needed.  The files are then accessible from the target vVol storage container.  Once the job is complete, the updated .vmx filepaths are returned.  The VMs must be registered as part of an independent task from this API.

Here is a look at browsing the vVol Datastore on the Target vCenter/FlashArray after a testFailoverStart has completed.
TargetVvolDatastore-1-smaller.png

Here is a look at browsing the vVol Datastore on the Source vCenter/Array.  We can see the Files and paths for the source compared to how they show up on the Target.
SourceVvolDatastore-1-smaller.png

TestFailoverReplicationGroupStop_Task()

Operation:  Test Failover Stop

Purpose:  This ends a test failover and cleans up any volumes or created VMs on the target side.

Use Case:  Ending a test failover and cleaning up the storage provisioned for it.

A test stop is run against a target replication group that is in the INTEST state. This reverts it back to the TARGET state.

What VASA Does:  VASA will destroy and eradicate the copied-out volumes and vgroups that were created for the test.  Note that a volume cannot be destroyed if it is connected to a host.  If existing binds exist, the stop task will fail.  Once all the volumes and vgroups have been destroyed and eradicated, the replication group will be updated back to the TARGET state.

Here is a look at the vVol Datastore on the Target after the testFailoverReplicationGroupStop has completed:
TargetVvolDatastore-2-smaller.png

Notice that none of the VM Files exist.  Please note that the VMs should be powered off and unregistered before running this API.
Otherwise the TestFailoverReplicationGroupStop task would fail to complete, due to binds still existing on these vVols.

PromoteReplicationGroup_Task()

Operation:  Promote Replication Group

Purpose:  In the case of a disaster during a test recovery, this allows you to specify VMs that are in the test state to become the production VMs.

Use Case:  Loss of source site VMs during a test recovery.

This is executed against a target replication group in the INTEST state and converts the state to FAILEDOVER.
Note that running this will cause any attempt to run test failover stop to fail.

What VASA Does:  When testFailoverReplicationGroup is run against a Target Replication Group, the Replication Group state is changed from Target to INTEST.  When running a PromoteReplicationGroup on an INTEST Replication Group VASA will update the state of the Replication Group to FAILEDOVER.  This then allows the ReverseReplicationGroup call to be issued to update it to Source.

PrepareFailoverReplicationGroup_Task()

Operation:  Prepare Failover

Purpose:  This synchronizes the replication group to a fault domain. The target replication group will no longer accept syncReplicationGroup operations.

Use Case:  Doing a final synchronization before a failover.

This is issued to the source replication group, so it is really only useful for planned migrations. 

It is not recommended for a test failover, just actual failovers.

FailoverReplicationGroup_Task()

Operation:  Failover

Purpose:  To run a migration or a disaster recovery failover of VMs in a replication group.

Use Case:  Disruptively moving VMs in a replication group from one array to another for a DR event or a planned migration.

This is run against a target replication group and changes the state from TARGET to FAILEDOVER.

What VASA Does:  This process is similar to the testFailover, in that the most recent snapshot (or PiT if specified) has the volumes copied out and updated on the target FlashArray (fault domain). 

The difference here is that the target replication group has its state updated to FAILEDOVER and not INTEST, meaning that a ReverseReplicationGroup can be issued once the Failover task has completed.

In addition to copying out the volumes from the snapshot, VASA will create a pgroup with the same replication schedule/settings as the source replication group if a testFailover has not been run and this is the first time a Failover has been run.

If there has previously been a testFailover or Failover run on the target Replication Group, VASA will attempt to reuse the pgroup that was created in that process.  If that pgroup was destroyed or if additional unknown volumes were added to the pgroup, VASA will treat the Failover as the first time and create a new pgroup.

VASA does not destroy or eradicate the source volumes, vgroup and pgroup for the VMs that are failed over as part of this replication group.  In the event that a testFailover is run, those volumes, vgroups and pgroup will be reused and then destroyed when the testFailover is cleaned up.  If a Failover is run before a testFailover, then VASA will attempt to reuse the vgroups, volumes and pgroup when failing over to the target.

Please note that the API does not power off or unregister the VMs at the source vCenter/FlashArray; nor do the recovered VMs get registered in the recovery vCenter Server. This must all be done by the end user.

ReverseReplicateGroup_Task()

Operation:  Reprotect

Purpose:  Makes a failed over group a source that replicates back to the original source.

Use Case:  Ensures that your VMs are protected back to the original site.

Run against a FAILEDOVER replication group and changes its state to SOURCE.

This is not necessarily required—you can also just apply a new storage policy to the VMs to protect them. This is only needed to reset the state of the original target group.

What VASA Does:  VASA will initiate a snapshot replication from the pgroup that the copied-out volumes have been added to.  Once this snapshot has completed, the replication schedule is enabled and the replication group's state is updated from FAILEDOVER to SOURCE.

The ReverseReplicationGroup does not re-apply storage policies or assign replication groups in SMS/vCenter.  In order to complete the re-protect process, the end user will need to reset the storage policy to vVols No Requirements and then apply the storage policy on the new protected site and the correct replication group.

VmServiceProfile-1-small.png
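Below is a minimal PowerCLI sketch of this reprotect step, assuming a connected vCenter session; the VM name is hypothetical, the policy names come from examples elsewhere in this guide, and the ReplicationState value and return value assumptions should be verified with Get-SpbmReplicationGroup in your environment.

# Reverse replication on the failed-over group so it becomes the new source
$failedOverRg = Get-SpbmReplicationGroup | Where-Object { $_.ReplicationState -eq "FailedOver" } | Select-Object -First 1
$newSourceRg  = Start-SpbmReplicationReverse -ReplicationGroup $failedOverRg   # assumed to return the new source group

# Complete the reprotect: reset the recovered VM to the No Requirements policy,
# then apply the replication policy together with the new source replication group
$vm     = Get-VM -Name "Recovered-VM-01"
$noReq  = Get-SpbmStoragePolicy -Name "VVol No Requirements Policy"
$policy = Get-SpbmStoragePolicy -Name "FlashArray Replication 8 HOURS"
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -VM $vm) -StoragePolicy $noReq
Set-SpbmEntityConfiguration -Configuration (Get-SpbmEntityConfiguration -VM $vm) -StoragePolicy $policy -ReplicationGroup $newSourceRg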

QueryReplicationGroup()

QueryPointInTimeReplica()

QueryReplicationPeer()

Operation:  Queries

Purpose:  Retrieve state of replication environment.

Use Case:  Used to script/detect state of replication, available point-in-times, and status of a group.

These can be run against most types of groups to find out the state of replication.

What VASA Does:  For each query, VASA will check the metadata and tags for each of the associated requests.  Then returns the results of the request.  

Please pay close attention to the notice below:

Regarding Management Path changes:

API calls for FailoverReplicationGroup and TestFailoverReplicationGroup do not register VMs, power them on, or change networks. These steps are still required.

The vVols replication management APIs just make sure the VM storage is ready on the target site.

Once the VMs appear on the target storage, they can be registered and configured as needed.

Each of these APIs can be leveraged with the vCenter MOB.  However, that is not the optimal way to manage a vVols ecosystem. vRealize Orchestrator, PowerCLI and Site Recovery Manager (8.3+) all integrate with these APIs to support vVols array replication workflows.
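For example, below is a minimal PowerCLI sketch of a test failover cycle using the SPBM replication cmdlets that wrap these APIs, assuming a connected session against the recovery vCenter; the resource pool and folder names are hypothetical, and the ReplicationState value used for filtering is an assumption, so verify it with Get-SpbmReplicationGroup.

# Pick the target replication group to test against (normally chosen by pairing
# it with the intended source group via Get-SpbmReplicationPair)
$targetRg = Get-SpbmReplicationGroup | Where-Object { $_.ReplicationState -eq "Target" } | Select-Object -First 1

# Start the test failover; the cmdlet returns the .vmx paths of the recovered VMs
$vmxPaths = Start-SpbmReplicationTestFailover -ReplicationGroup $targetRg

# Register and power on the test copies (the API does not do this)
foreach ($vmx in $vmxPaths) {
    $testVM = New-VM -VMFilePath $vmx -ResourcePool (Get-ResourcePool -Name "DR-Test") -Location (Get-Folder -Name "DR-Test-VMs")
    Start-VM -VM $testVM
}

# When testing is done, power off and unregister the test VMs, then clean up
# the copied-out vVols and return the group to the TARGET state
Stop-SpbmReplicationTestFailover -ReplicationGroup $targetRg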


vVols Replication with Site Recovery Manager

Full support with SRM and vVols FA replication is GA with the release of Pure Storage's VASA 1.1.0 (available with Purity 5.3.6+) and VMware's Site Recovery Manager 8.3.

This integration is certified by both VMware and Pure Storage.

Please refer to the Pure SRM user guide for further information.


vVols Replication PowerCLI Commands

Here are the PowerCLI commands that relate to managing vVols array-based replication with storage policies.  Each command has a brief explanation of what it does.  If further information is needed, please run Get-Help followed by the name of the command that you want more information about.

 

Name/Concept Explanation
Get-SpbmFaultDomain Retrieves fault domains based on name or ID filter - Prints the Name of the FaultDomain and the VASA Provider managing it.
> Get-SpbmFaultDomain

Name                 VasaProvider
----                 ------------
sn1-m20r2-c05-36     sn1-m20r2-c05-36-ct0
sn1-x70-b05-33       sn1-x70-b05-33-ct0
sn1-x70-c05-33       sn1-x70-c05-33-ct0
Get-SpbmReplicationGroup Retrieves the replication groups queried from the VASA Providers - Prints the Name and State of the replication groups.
> Get-SpbmReplicationGroup

Name                                      ReplicationState
----                                      ----------------
sn1-x70-b05-33:vVols-Replication          Source
sn1-x70-b05-33:x70-1-policy-ac1-light-001 Source

Get-SpbmReplicationPair

Retrieves the relation of replication groups in a pair of source & target replication group.

The source replication group is printed as FlashArray:pgroup-name, and the target is printed with its ID.

> Get-SpbmReplicationPair

Source Group                              Target Group
------------                              ------------
sn1-x70-b05-33:vVols-Replication          395a60c2-5803-40be-95b7-029b1b3ffc3e:62
sn1-x70-c05-33:x70-2-policy-ac2-light-001 35770c78-edaf-4afc-9b75-f3fb5c2acee9:9
Get-SpbmPointInTimeReplica Retrieves the point-in-time replicas (array-based snapshots) for a provided replication group.

Scheduled pgroup snapshots will not have a name or description.  
> Get-SpbmPointInTimeReplica

Name  CreationTime         ReplicationGroup
----  ------------         ----------------
      8/25/2020 3:34:25 PM 395a60c2-5803-40be-95b7-029b1b3ffc3e:62
PiT-1 8/25/2020 3:33:48 PM 395a60c2-5803-40be-95b7-029b1b3ffc3e:62
Get-SpbmStoragePolicy Retrieves the Storage Policies from the connected vCenter Servers.
> Get-SpbmStoragePolicy

Name                                     Description                                                                                         Rule Sets
----                                     -----------                                                                                         ---------
Pure-Demo                                                                                                                                    {(com.purestorage.storage.policy.PureFlashArray=True) AND (com.purestorage.storage.replication.RemoteReplicationInterval=00:05:00…
VVol No Requirements Policy              Allow the datastore to determine the best placement strategy for storage objects
FlashArray Snap 1 DAYS                   FlashArray Storage Policy. Snapshot every 1 Days, retained for 7 Days.                              {(com.purestorage.storage.policy.PureFlashArray=True) AND (com.purestorage.storage.replication.LocalSnapshotInterval=1.00:00:00) …
FlashArray Replication 8 HOURS           FlashArray Storage Policy. Remote Replication every 8 Hours, retained for 1 Days.                   {(com.purestorage.storage.policy.PureFlashArray=True) AND (com.purestorage.storage.replication.RemoteReplicationInterval=08:00:00…
Sync-SpbmReplicationGroup

Triggers an on-demand snapshot replication job.

This is run against the target replication group and is initiated from the target FlashArray.

> Sync-SpbmReplicationGroup -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62' -PointInTimeReplicaName 'PiT-3'
Sync-SpbmReplicationGroup: 8/25/2020 3:48:04 PM Sync-SpbmReplicationGroup               Error doing 'Sync' on replication group '30488813-7524-3538-868d-66c8037a6d39/395a60c2-5803-40be-95b7-029b1b3ffc3e:62'. Reason:                                                         
Error 1: Sync of the replication group is ongoing. Ongoing task ID: 'SmsTask-SmsTask-90'

> Sync-SpbmReplicationGroup -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62' -PointInTimeReplicaName 'PiT-4'
Sync-SpbmReplicationGroup: 8/25/2020 3:56:26 PM Sync-SpbmReplicationGroup               Error doing 'Sync' on replication group '30488813-7524-3538-868d-66c8037a6d39/395a60c2-5803-40be-95b7-029b1b3ffc3e:62'. Reason:                                                         
Error 1: Sync of the replication group is ongoing. Ongoing task ID: 'SmsTask-SmsTask-92'

This type of error is an expected outcome for a syncReplicationGroup task. The key here is that the "error" reports an "ongoing task," which means a replication job was started and is now in progress.  syncReplicationGroup is an asynchronous task, and the Pure VASA provider returns a task ID for each asynchronous task.

The PowerShell cmdlet for syncReplicationGroup does not process task IDs or query the VASA Provider for task progress.

Start-SpbmReplicationTestFailover Performs a test failover against the target replication group - upon completion the replication group is in an INTEST state.
> Start-SpbmReplicationTestFailover -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62'

[FlashArray-B-vVol-DS] rfc4122.918928d8-01aa-47f1-80cd-f31e66d5eac7/vVols-Rep-VM-1.vmx
[FlashArray-B-vVol-DS] rfc4122.fa025596-332f-4e39-82e8-8055f7b589fb/vVols-Rep-VM-2.vmx
[FlashArray-B-vVol-DS] rfc4122.f24fc678-26a4-4234-9356-3b712abbc20b/vVols-Rep-VM-3.vmx

> Get-SpbmReplicationGroup -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62'

Name                                      ReplicationState
----                                      ----------------
395a60c2-5803-40be-95b7-029b1b3ffc3e:62   InTest
Start-SpbmReplicationPromote Promotes a target replication group from InTest to FailedOver state.
   
Stop-SpbmReplicationTestFailover Stops the test failover on the specified replication groups and performs a cleanup on the target site.
> Stop-SpbmReplicationTestFailover -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62'

Name                      ReplicationState
----                      ----------------
395a60c2-5803-40be-95b7-… Target
Start-SpbmReplicationPrepareFailover Prepares the specified replication groups to fail over - this is run against the source replication group.
> Start-SpbmReplicationPrepareFailover -ReplicationGroup 'sn1-x70-b05-33:vVols-Replication'
Start-SpbmReplicationFailover Performs a failover of the devices in the specified replication groups.
> Start-SpbmReplicationFailover -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62'

Confirm
Are you sure you want to perform this action?
Performing the operation "Starting failover on" on target "Replication group '395a60c2-5803-40be-95b7-029b1b3ffc3e:62'".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): y

[FlashArray-B-vVol-DS] rfc4122.f44d3e0f-f25d-4107-bbd6-9a8c2940720b/vVols-Rep-VM-1.vmx
[FlashArray-B-vVol-DS] rfc4122.6a3f3e3c-755d-4c00-a63c-1fd4a69b1476/vVols-Rep-VM-2.vmx
[FlashArray-B-vVol-DS] rfc4122.995eff3c-630c-4ea1-bf33-2eb0f06de84d/vVols-Rep-VM-3.vmx

> Get-SpbmReplicationGroup -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62'

Name                                      ReplicationState
----                                      ----------------
395a60c2-5803-40be-95b7-029b1b3ffc3e:62   FailedOver
Start-SpbmReplicationReverse

Initiates reverse replication; this reverses the state of the replication groups from source to target and target to source.

> Start-SpbmReplicationReverse -ReplicationGroup '395a60c2-5803-40be-95b7-029b1b3ffc3e:62'

Name                      ReplicationState
----                      ----------------
sn1-x70-c05-33:r-vVols-R… Source

Now that we have covered the available commands, let's look at what a failover workflow looks like.

Running a Replication Group Failover with PowerCLI
First is a breakdown of each of the steps that will be run through in this workflow.
  • Setting the vCenter name variables and then connecting to both vCenter Servers
    $vc1 = "ac-vcenter-1.purecloud.com"
    $vc2 = "ac-vcenter-2.purecloud.com"
    
    Connect-VIServer -server $vc1,$vc2
    
  • Getting the VM/s to Failover and setting them to a Variable
    $vm = get-vm -name "vVol-Rep-VM-*"
    
  • Setting the Source Replication Group Variable and then Printing that information
    $sourceGroup = $vm | Get-SpbmReplicationGroup -server $vc1
    $sourceGroup | format-table -autosize
    
  • Setting the Target Replication Pair Variable from the source replication group variable
    $targetPair = get-spbmreplicationpair -source $sourceGroup
    $targetPair | format-table -autosize
    
  • Setting the Replication Group name from the Target variable
    $syncgroup = $targetPair.Target.Name
    
  • Running a Sync Replication Group before stopping the VMs with a specific Point in Time name
    Sync-SpbmReplicationGroup -ReplicationGroup $syncgroup -PointInTimeReplicaName 'Sync-Powered-On-VM'
    
  • Stopping the VMs that are going to be failed over
    Stop-VM -VM $vm -Confirm:$false
    
  • Running another sync replication group job after the VMs have powered off
    Sync-SpbmReplicationGroup -ReplicationGroup $syncgroup -PointInTimeReplicaName 'Sync-Powered-Off-VM'
    
  • Setting the target replication group variable from the target pair
    $targetGroup = $targetPair.Target
    
  • Beginning the Failover Process by first running a prepare Failover
    Start-SpbmReplicationPrepareFailover -ReplicationGroup $sourceGroup
    
  • Failing over the replication group for the VMs and setting the operations to a variable
    $testVms = Start-SpbmReplicationFailover -ReplicationGroup $targetGroup -Confirm:$false
    
  • In the event that multiple VMs were part of the replication group, a for-each loop will need to be run against the $testVms variable to register each one (see the sketch after this list)
    This example registers a single VM
    new-vm -VMFilePath $testVms -ResourcePool Replicated-SNY -Server $vc2
    
  • Starting the VM. Note that vSphere asks whether the VM was copied or moved; this question must be answered for the power-on to complete
    get-vm -name 'vVol-Rep-VM-*' -Server $vc2 | Start-VM
    
  • Running the reverse replication group operation to reverse the source and target status for the Replication Group
    $new_source_group = Start-SpbmReplicationReverse -ReplicationGroup $targetGroup
    
  • Setting the New VM to a variable on the recovery vCenter Server
    $newvm = get-vm -name "vVol-Rep-VM-1" -Server $vc2
    
  • Resetting the storage policy for the VM and each virtual disk to the "VVol No Requirements Policy"
    $HD1 = $newvm | Get-HardDisk
    $newvm_policy_1 = $newvm, $HD1 | Get-SpbmEntityConfiguration
    Write-Host -ForegroundColor Cyan "Setting policy to VVol No Requirements for $newvm"
    $newvm_policy_1 | Set-SpbmEntityConfiguration -StoragePolicy "VVol No Requirements Policy" -Server $vc2
    
  • Setting the Variables for the replication group and storage policy that we want to use to re-protect the VM to the previous source/protected site
    $policy_1 = Get-SpbmStoragePolicy -Server $vc2 -name "AC-2-vVol-20-min-replication-policy"
    $new_rg = $policy_1 | Get-SpbmReplicationGroup -Server $vc2 -Name "sn1-x70-c05-33:r-vVol-replication-group-1"
    
  • Applying the Storage Policy and Replication group to the VMs to complete the Re-protect process
    Write-Host -ForegroundColor Cyan "Setting $newvm storage policy to $policy_1 and replication group to $new_rg."
    $newvm_policy_1 | Set-SpbmEntityConfiguration -StoragePolicy $policy_1 -ReplicationGroup $new_rg
    Write-Host -ForegroundColor Green "$newvm's storage policy has been set"
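Where the replication group contains more than one VM, the registration step above can be wrapped in a loop. A minimal sketch, reusing the $testVms variable and the Replicated-SNY resource pool from this example:

## Register every recovered VMX path returned by the failover ##
foreach ($vmxPath in $testVms) {
    New-VM -VMFilePath $vmxPath -ResourcePool Replicated-SNY -Server $vc2
}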
    
Here is the workflow, all in a single snippet:

## Setting the vCenter name variables and then connecting to both vCenter Servers ##

$vc1 = "ac-vcenter-1.purecloud.com"
$vc2 = "ac-vcenter-2.purecloud.com"

Connect-VIServer -server $vc1,$vc2

## Getting the VM/s to Failover and setting them to a Variable ##

$vm = get-vm -name "vVol-Rep-VM-*"

## Setting the Source Replication Group Variable and then Printing that information ##

$sourceGroup = $vm | Get-SpbmReplicationGroup -server $vc1
$sourceGroup | format-table -autosize

## Setting the Target Replication Pair Variable from the source replication group variable ##

$targetPair = get-spbmreplicationpair -source $sourceGroup
$targetPair | format-table -autosize

## Setting the Replication Group name from the Target variable ##

$syncgroup = $targetPair.Target.Name

## Running a Sync Replication Group before stopping the VMs with a specific Point in Time name ##

Sync-SpbmReplicationGroup -ReplicationGroup $syncgroup -PointInTimeReplicaName 'Sync-Powered-On-VM'

## Stopping the VMs that are going to be failed over ##

Stop-VM -VM $vm -Confirm:$false

## Running another sync replication group job after the VMs have powered off ##

Sync-SpbmReplicationGroup -ReplicationGroup $syncgroup -PointInTimeReplicaName 'Sync-Powered-Off-VM'

## Setting the target replication group variable from the target pair ##

$targetGroup = $targetPair.Target

## Beginning the Failover Process by first running a prepare Failover ##

Start-SpbmReplicationPrepareFailover -ReplicationGroup $sourceGroup

## Failing over the replication group for the VMs and setting the operations to a variable ##

$testVms = Start-SpbmReplicationFailover -ReplicationGroup $targetGroup -Confirm:$false

## In the event that multiple VMs were part of the replication group, a for-each loop will need to be run against the $testVms variable to register each one ##
## This example just has a single VM that is being registered ##

new-vm -VMFilePath $testVms -ResourcePool Replicated-SNY -Server $vc2

## Starting the VM. Note that vSphere asks whether the VM was copied or moved; this question must be answered for the power-on to complete ##

get-vm -name 'vVol-Rep-VM-*' -Server $vc2 | Start-VM

## Running the reverse replication group operation to reverse the source and target status for the Replication Group ##

$new_source_group = Start-SpbmReplicationReverse -ReplicationGroup $targetGroup

## Setting the New VM to a variable on the recovery vCenter Server ##

$newvm = get-vm -name "vVol-Rep-VM-1" -Server $vc2

## Resetting the storage policy for the VM and each virtual disk to the "VVol No Requirements Policy" ##

$HD1 = $newvm | Get-HardDisk
$newvm_policy_1 = $newvm, $HD1 | Get-SpbmEntityConfiguration
Write-Host -ForegroundColor Cyan "Setting policy to VVol No Requirements for $newvm"
$newvm_policy_1 | Set-SpbmEntityConfiguration -StoragePolicy "VVol No Requirements Policy" -Server $vc2

## Setting the Variables for the replication group and storage policy that we want to use to re-protect the VM to the previous source/protected site ##

$policy_1 = Get-SpbmStoragePolicy -Server $vc2 -name "AC-2-vVol-20-min-replication-policy"
$new_rg = $policy_1 | Get-SpbmReplicationGroup -Server $vc2 -Name "sn1-x70-c05-33:r-vVol-replication-group-1"

## Applying the Storage Policy and Replication group to the VMs to complete the Re-protect process ##

Write-Host -ForegroundColor Cyan "Setting $newvm storage policy to $policy_1 and replication group to $new_rg."
$newvm_policy_1 | Set-SpbmEntityConfiguration -StoragePolicy $policy_1 -ReplicationGroup $new_rg
Write-Host -ForegroundColor Green "$newvm's storage policy has been set"

Overall, the workflow is straightforward, but to fully re-protect the VMs after the reverse operation, there are some extra steps that can easily be missed or skipped.


vVols Replication with vRealize Orchestrator

The Pure Storage vRO plugin contains workflows for vVols replication such as a testFailover and Failover for FlashArray replication groups.  Additionally, there are workflows to assign storage policies and replication groups to VMs.

Pure is currently revamping the vRO documentation, and there is no KB yet for running vVols-based workflows.  This section will be updated once the KB covering vVols replication management with vRO is complete.

Please keep an eye on the KBs for vRO for more information.


[Back to Top]  


vVol Reporting

The vVols architecture that gives VMware insight into FlashArrays also gives FlashArrays insight into VMware. With vVol granularity, the FlashArray can recognize and report on both entire vVol-based VMs (implemented as volume groups) and individual virtual disks (implemented as volumes).

Storage Consumption Reporting

FlashArrays represent VMs as volume groups. The Volumes tab of the GUI Storage pane lists an array’s volume groups. Select a group that represents a VM to display a list of its volumes. 

The volume group naming schema follows the pattern vvol-VMname-vg, with the VM name being set when the VM is first created as a vVols-based VM or Storage vMotioned to the vVol datastore.

When a VM is renamed in vCenter, the volume group is not automatically renamed on the FlashArray.  The reverse is also true: renaming a volume group on the FlashArray does not change the VM name in vSphere.  If a VM's name is changed in vCenter, the volume group name needs to be updated manually or via a PowerCLI or Python workflow; a rough sketch of such a rename follows.  See this KB section for more information on this workflow.
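As an illustration, the rename could be scripted as follows. This is a sketch only: it assumes the Pure Storage PowerShell SDK v2 is installed, the array address and volume group names are placeholders, and the exact cmdlet parameters should be verified with Get-Help before use.

## Hypothetical sketch: rename a FlashArray volume group after the VM was renamed in vCenter ##
## The cmdlet and parameter names below are assumptions - verify them with Get-Help ##
$array = Connect-Pfa2Array -Endpoint 'flasharray.example.com' -Credential (Get-Credential) -IgnoreCertificateError
Update-Pfa2VolumeGroup -Array $array -Name 'vvol-old-vm-name-vg' -NewName 'vvol-new-vm-name-vg'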

GUI View of a Volume Group and its Volumes
vv173.png

The top panel of the display shows averaged and aggregated storage consumption statistics for the VM. Click the Space button in the Volumes pane to display storage consumption statistics for individual vVols.

GUI View of a Volume Group's Per-volume Storage Consumption
vv174.png

To view a VM’s storage consumption history, switch to the Analysis pane Capacity view and select the Volumes tab.

GUI Analysis
vv175.png

To view history for VMs (volume groups) or vVols (volumes), select an object type from the dropdown menu.

Selecting Volume Statistics
vv176.png

Click the desired object in the list to display its storage consumption history. (Alternatively, enter a full or partial VM name in the search box to filter the list.)

The array displays a graph of the selected object’s storage consumption over time. The graph is adjustable; time intervals from 24 hours to 1 year can be selected. It distinguishes between storage consumed by live volumes and that consumed by their snapshots. The consumption reported is for volume and snapshot data that is unique to the objects (i.e., not deduplicated against other objects). Data shared by two or more volumes or snapshots is reported separately on a volume group-wide basis as Shared.

GUI Storage Capacity History for a Volume Group
vv177.png

Data Reduction with vVol Managed Snapshots on Purity 5.1.3+

Beginning in Purity 5.1.3, managed snapshot behavior was changed to copy the data volumes to new volumes in the array volume group instead of taking array-based snapshots of the data volumes.  As part of this update, data reduction numbers will differ: VMware is essentially asking the array, through VASA, to create several identical volumes, and the array obliges and deduplicates them appropriately.  This means that the more managed snapshots are taken, the higher the data reduction number on the volume group becomes, which increases the overall array data reduction numbers.


Performance Reporting

The FlashArray GUI can also report VM and vVol performance history. In the Analysis pane Performance view, the history of a VM's or vVol's IOPS, latency, and data throughput (Bandwidth) can be viewed.

Click the Volumes tab to display a list of the array’s VMs (volume groups) and/or vVols (volumes).

GUI Analysis Pane
vv178.png

To view an object’s performance history, select Volume Groups, Volumes, or All in the dropdown, and select a VM or vVol from the resulting list.

Selecting Volume Display
vv179.png

A VM’s or vVol’s performance history graph shows its IOPS, throughput (Bandwidth), and latency history in separate stacked charts.

The graphs show the selected object’s performance history over time intervals from 24 hours to 1 year. Read and write performance can be shown as separate curves. For VMs, latency is the average across all volumes; throughput and IOPS are aggregated across volumes.

GUI Performance History for a Volume Group
vv180.png

 

[Back to Top]  


Migrating VMs to vVols

Storage vMotion can migrate VMs from VMFS, NFS, or Raw Device Mappings (RDMs) to vVols.

Migrating a VMFS or NFS-based VM to a vVol-based VM

From the Web Client VMs and Templates inventory pane, right-click the VM to be migrated and select Migrate from the dropdown menu to launch the Migrate wizard.

vvol migrate 01.png
vSphere View: Web Client Migrate Command

Select Change Storage Only to migrate the VM’s storage, or Change both compute resource and storage to migrate both storage and compute resources.

vvol migrate 02.png
vSphere View: Selecting Storage-only Migration

In the ensuing Select storage step, select a vVol datastore as a migration target. Optionally, select a storage policy for the migrated VM to provide additional features. (The section titled Storage Policy Based Management describes storage policies.)

Click Finish (not visible in the screenshot) to migrate the VM. If the original and target datastores are on the same array, the array uses XCOPY to migrate the VM. FlashArray XCOPY only creates metadata, so migration is nearly instantaneous.

If source and target datastores are on different arrays, VMware uses reads and writes, so migration time is proportional to the amount of data copied.

When migration completes, the VM is vVol-based. Throughout the conversion, the VM remains online.

vvol migrate 03.png
vSphere View: Select Storage Policy

The array view below shows a migrated VM’s FlashArray volume group.

vvol migrate 04.png
Array View: GUI View of a Migrated VM (Volume Group)
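The same storage-only migration can also be scripted with PowerCLI's Move-VM cmdlet. A minimal sketch, where the VM and datastore names are placeholders:

## Storage-only migration of a VM onto a vVol datastore (placeholder names) ##
$vm = Get-VM -Name 'App-VM-1'
Move-VM -VM $vm -Datastore (Get-Datastore -Name 'FlashArray-vVol-DS')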

Migration of a VM with VMDK Snapshots

Migrating a VM that has VMware managed snapshots is identical to the process described in the preceding subsection. In a VMFS or NFS-based VM, snapshots are VMDK files in the datastore that contain changes to the live VM. In a vVol-based VM, snapshots are FlashArray snapshots.

Storage vMotion automatically copies a VM’s VMware VMFS snapshots. ESXi directs the array to create the necessary data vVols, copies the source VMDK files to them and directs the array to take snapshots of them. It then copies each VMFS-based VMware snapshot to the corresponding data vVol, merging the changes. All copying occurs while the VM is online.

BEST PRACTICE: Only virtual hardware versions 11 and later are supported. If a VM has VMware-managed VMFS-based memory snapshots and is at virtual hardware version 10 or earlier, delete the memory snapshots prior to migration. Upgrading the virtual hardware does not resolve this issue. Refer to VMware’s note here.
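A quick PowerCLI check before migrating can confirm the virtual hardware version and list any existing snapshots to review (the VM name is a placeholder):

## Check the virtual hardware version and review existing snapshots before migration ##
$vm = Get-VM -Name 'App-VM-1'
$vm.ExtensionData.Config.Version                ## e.g. vmx-13; vmx-11 or later is required
Get-Snapshot -VM $vm | Select-Object Name, Created, PowerState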

Migrating Raw Device Mappings

A Raw Device Mapping can be migrated to a vVol in any of the following ways:

  • Shut down the VM and perform a storage migration. Migration converts the RDM to a vVol.
  • Add a new virtual disk in a vVol datastore to the VM. The new virtual disk must be the same size as the RDM and located on the same array. Copy the RDM volume to the vVol, redirect the VM’s applications to use the new virtual disk, and then delete the RDM volume.

For more information, refer to the blog post https://www.codyhosterman.com/2017/11/moving-from-an-rdm-to-a-vvol/

 

[Back to Top]

 

Data Mobility with vVols

A significant but under-reported benefit of vVols is data set mobility. Because a vVol-based VM’s storage is not encapsulated in a VMDK file, the VM’s data can easily be shared and moved.

A data vVol is a virtual block device presented to a VM; it is essentially identical to a virtual mode RDM. Thus, a data vVol (or a volume created by copying a snapshot of it) can be used by any software that can interpret its contents, for example an NTFS or XFS file system created by the VM.

Therefore, it is possible to present a data vVol, or a volume created from a snapshot of one, to a physical server; to present a volume created by a physical server to a vVol-based VM as a vVol; or to overwrite a vVol from a volume created by a physical server.

This is an important benefit of the FlashArray vVol implementation. The following blog posts contain examples of and additional information about data mobility with FlashArray vVols:

https://www.codyhosterman.com/2017/10/comparing-vvols-to-vmdks-and-rdms/

https://www.codyhosterman.com/2017/12/vvol-data-mobility-virtual-to-physical/

[Back to Top]

 

Appendices

Appendix I: Authenticating FlashArray to the Plugin

While the Plugin is not required to use FlashArray-based vVols, it simplifies administrative procedures that would otherwise require either coordinated use of multiple GUIs or scripting.

There are many workflows specific to vVols in the Pure Storage vSphere Client Plugin.  Please see the following documentation for how to use the plugin and leverage vVols with the plugin.

[Back to Top]

 

Appendix II: FlashArray CLI Commands for Protocol Endpoints

Specifying the --protocol-endpoint option in the Purity//FA CLI purevol create command creates the volume as a protocol endpoint.

vv194.png
ArrayView 48: FlashArray CLI Command to Create a PE

Specifying the --protocol-endpoint option in the Purity//FA CLI purevol list command displays a list of volumes on the array that were created as PEs.

vv195.png
ArrayView 49: FlashArray CLI Command to List an Array's PEs
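For reference, the commands shown in the screenshots take the following general form (the PE name is a placeholder; consult the Purity CLI help for the full syntax):

purevol create --protocol-endpoint my-protocol-endpoint
purevol list --protocol-endpoint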

[Back to Top]

 

Appendix III: VMware ESXi CLI Commands for vVols

Use the esxcli storage vvol commands to troubleshoot a vVol environment.

Command - Purpose

esxcli storage core device list - Identify protocol endpoints. The output entry Is VVOL PE: true indicates that the storage device is a protocol endpoint.

esxcli storage vvol daemon unbindall - Unbind all vVols from all VASA providers known to the ESXi host.

esxcli storage vvol protocolendpoint list - List all protocol endpoints that a host can access.

esxcli storage vvol storagecontainer list - List all available storage containers.

esxcli storage vvol storagecontainer abandonedvvol scan - Scan the specified storage container for abandoned vVols.

esxcli storage vvol vasacontext get - Show the VASA context (VC UUID) associated with the host.

esxcli storage vvol vasaprovider list - List all storage (VASA) providers associated with the host.
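The same esxcli namespaces can also be queried remotely through PowerCLI's Get-EsxCli V2 interface. A minimal sketch, with a placeholder host name:

## Query vVol protocol endpoints and VASA providers on a host via esxcli from PowerCLI ##
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esxi-01.example.com') -V2
$esxcli.storage.vvol.protocolendpoint.list.Invoke()
$esxcli.storage.vvol.vasaprovider.list.Invoke()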

[Back to Top]

 

Appendix IV: Disconnecting a Protocol Endpoint from a Host

Decommissioning ESXi hosts or clusters normally includes removal of protocol endpoints (PEs). The usual FlashArray volume disconnect process is used to disconnect PEs from hosts. As with the removal of any non-vVol block storage device, however, the best practice is to detach the PE from each host in vCenter prior to disconnecting it from those hosts on the array.

vv196.png
vSphereView 143: Web Client Tool for Detaching a PE from an ESXi Host

To detach a PE from a host, select the host in the Web Client inventory pane, navigate to the Storage Devices view Configure tab, select the PE to be detached, and click the vv197.png tool to launch the Detach Device confirmation wizard. Click Yes to detach the selected PE from the host.

vv198.png
vSphereView 144: Confirm Detach Wizard

vSphereView 145 shows the Web Client storage listing after successful detachment of a PE.

vv199.png
vSphereView 145: Detached PE

Failure to detach a PE from a host (vSphereView 146) typically occurs because there are vVols bound to the host through the PE that is being detached.

vv200.png
vSphereView 146: Failure to Detach PE (LUN) from a Host

FlashArrays prevent disconnecting a PE from a host (including members of a FlashArray host group) that has vVols bound through it.

The Purity//FA Version 5.0.0 GUI does not support disconnecting PEs from hosts. Administrators can only disconnect PEs via the CLI or REST API.

Before detaching a PE from an ESXi host, use one of the following VMware techniques to clear all bindings through it:

  1. vMotion all VMs to a different host (a PowerCLI sketch of this option follows the list)
  2. Power off all VMs on the host that use the PE
  3. Storage vMotion the VMs on that host that use the PE to a different FlashArray or to a VMFS datastore
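As an example of the first option, running VMs can be evacuated from the host with PowerCLI before the PE is detached. A minimal sketch with placeholder host names:

## vMotion all powered-on VMs off the host before detaching the PE ##
Get-VM -Location (Get-VMHost -Name 'esxi-01.example.com') |
    Where-Object { $_.PowerState -eq 'PoweredOn' } |
    Move-VM -Destination (Get-VMHost -Name 'esxi-02.example.com')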

To completely delete a PE, remove all vVol connections through it. To prevent erroneous disconnects, FlashArrays prevent destruction of PE volumes with active connections.

[Back to Top]

 

Appendix V: vVols and Volume Group Renaming

FlashArray volume groups are not in the VM management critical path. Therefore, renaming or deleting a volume group does not affect VMware’s ability to provision, delete or change a VM’s vVols.

A volume group is primarily a tool that enables FlashArray administrators to manage a VM’s volumes as a unit. Pure Storage highly recommends creating and deleting volume groups only through VMware tools, which direct arrays to perform actions through their VASA providers.

Volume group and vVol names are not related to VASA operations. vVols can be added to and removed from a volume group whose name has been changed by an array administrator. If, however, a VM’s config vVol is removed from its volume group, any vVols created for the VM after the removal are not placed in any volume group. If a VM’s config vVol is moved to a new volume group, any new vVols created for it are placed in the new volume group.

VMware does not inform the array that it has renamed a vVol-based VM, so renaming a VM does not automatically rename its volume group. Consequently, it is possible for volume group names to differ from those of the corresponding VMs. For this reason, the FlashArray vVol implementation does not put volume group or vVol names in the vVol provisioning and management critical path.

For ease of management, however, Pure Storage recommends renaming volume groups when the corresponding VMs are renamed in vCenter.

 

Appendix VI: Cisco FNIC Driver Support for vVols

Older Cisco UCS drivers do not support the SCSI features required for protocol endpoints and vVol sub-LUN connections. To use vVols with Cisco UCS, FNIC drivers must be updated to a version that supports sub-LUNs. For information on firmware and update instructions, consult:

https://my.vmware.com/group/vmware/details?productId=491&downloadGroup=DT-ESX60-CISCO-FNIC-16033

https://quickview.cloudapps.cisco.com/quickview/bug/CSCux64473

[Back to Top]