Pure Technical Services

Web Guide: Virtual Volumes Quick Start Guide



With the Purity 5.0.0 release, Pure Storage introduced support for vSphere Virtual Volumes on the FlashArray storage platform. This quick start guide will provide the necessary information to get Virtual Volumes up and running on the FlashArray and configured in the VMware environment. This guide assumes use of the FlashArray Plugin for the vSphere Web Client.

Quick Start Checklist

Please ensure the following before attempting vVols setup and configuration:

The checklist below also serves as a vVols best practices summary.



  • Purity//FA 5.1.16+
    • Purity//FA 5.3.6+ is required for SRM + vVols support.
  • FlashArray 400 Series, FlashArray//M, FlashArray//X, FlashArray//C
  • vCenter 6.5+ and ESXi 6.5+
  • Configure NTP and Syslog for VMware environment and FlashArray
  • Ensure that vCenter Server and ESXi host management networks have TCP port 8084 access to FlashArray controller management ports.
  • Configure host and host groups with appropriate initiators on the FlashArray.
  • The 'pure-protocol-endpoint' must not be destroyed.
    • This namespace must exist for vVols management path to operate correctly.
  • Purity//FA 6.1.10 or later
  • vCenter 6.7 U3 or later
  • ESXi 6.7 U3 P03 or later
  • When registering the VASA Provider, use a local FlashArray User
  • Do not run vCenter Servers on vVols
  • The Protocol Endpoint should be connected to Host Groups and not Individual Hosts.
  • Configure snapshot policies for all Config vVols (VM home directories).
  • Use Virtual Machine hardware version 11 or later.
    • The Hardware Version will need to be 15 or later when the Virtual Machine needs to have more than 15 virtual devices per SCSI controller.

If using Virtual Volumes and FlashArray replication, ensure that the anticipated recovery site is running vSphere 6.5 or later.

If using vVols array based replication for failover and recovery methods, Pure Storage strongly recommends running at minimum Purity//FA 5.3.6.

As always, please ensure you follow standard Pure Storage best practices for vSphere.

vVols Best Practices Quick Guidance Points

Here are some quick points of guidance when using vVols with the Pure Storage FlashArray. These are not meant to be a Best Practices deep dive nor a comprehensive outline of all best practices when using vVols with Pure Storage; a Best Practices deep dive will be given in the future. However, more explanation of the requirements and recommendations is given in the summary above.

Purity Version

While vVols support was first introduced with Purity 5.0.0, there have been significant fixes and enhancements to the VASA provider in later releases of Purity. Because of this, Pure has set the required Purity version for vVols to a later release.

  • For general vVols use, Purity 5.1.15+ or Purity 5.3.6+ is required.
  • Purity 5.3.6+ is required for vVols support with Site Recovery Manager (SRM) Array Based Replication protection and recovery.

Pure Storage strongly recommends that customers running vVols upgrade to Purity//FA 6.1.8 or higher.

The main reason is that later releases include enhancements to VASA that help support vVols at higher scale, improve the performance of Managed Snapshots, and improve the SPBM Replication Group Failover API at scale.

vSphere Version

While vSphere Virtual Volumes 2.0 was released with vSphere 6.0, the Pure Storage FlashArray only supports vSphere Virtual Volumes 3.0, which was released with vSphere 6.5. As such, the minimum required vSphere version is the 6.5 GA release. That said, there have been significant vVols-specific fixes since then, so plan against the required and recommended versions below.

With the release of vSphere 6.7 U3 P03, VMware fixed a few major issues that customers had seen when migrating workloads to vVols. Pure Storage tracks these fixes in a KB that outlines any VASA, vVols, or Storage Provider fixes per ESXi release. Please refer to that KB and VMware's vSphere release notes when planning version recommendations for your vSphere environment.

vSphere Environment

With regards to the vSphere environment, there are some networking requirements and some strong recommendations from Pure Storage when implementing vVols in your vSphere Environment.

  • Requirement: NTP must be configured the same across all ESXi hosts and vCenter Servers in the environment. The time and date must be set to the current date/time.
  • Recommended: Configure Syslog forwarding for vSphere environment.
  • Requirement: Network port 8084 must be open and accessible from vCenter Servers and ESXi hosts to the FlashArray that will be used for vVols.
  • Recommended: Use Virtual Machine Hardware version 11 or higher.
    • The best practice is to use the HW version recommended for the vSphere version your environment is running, as long as it is 11 or higher.
  • Requirement: Do not run vCenter servers on vVols.
    • While a vCenter Server can run on vVols, in the event of any failure on the VASA management path combined with a vCenter Server restart, the environment could enter a state where vCenter Server may not be able to boot or start. Please see the failure scenario KB for more detail on this.
  • Recommended: Either configure an SPBM policy to snapshot all of the vVol VMs' Config vVols or manually put Config vVols in a FlashArray protection group with a snapshot schedule enabled.
    • A snapshot of the Config vVol is required for the vSphere Plugin's VM undelete feature. Having a backup of the Config vVol also helps the recovery or rollback process for the VM in the event that there is an issue. There is a detailed KB that outlines some of these workflows that can be found here.
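The port requirement above can be spot-checked from any management host before registering the storage providers. Below is a minimal Python sketch of the TCP port 8084 check; the controller FQDNs shown are placeholders, not real addresses:

```python
import socket

def vasa_port_reachable(host: str, port: int = 8084, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, connection refusals, and timeouts
        return False

# Placeholder controller management addresses; substitute your own.
for ct in ("flasharray-ct0.example.com", "flasharray-ct1.example.com"):
    print(ct, "port 8084 reachable:", vasa_port_reachable(ct))
```

Run a check like this from the vCenter Server and from each ESXi host's management network to confirm the VASA management path is open.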

FlashArray Environment

Here is some more detail and color for the requirements and recommendations with the FlashArray:

  • Requirement: The FlashArray Protocol Endpoint object 'pure-protocol-endpoint' must exist. The FlashArray admin must not rename, delete or otherwise edit the default FlashArray Protocol Endpoint.
    • Currently, Pure Storage stores important information for the VASA Service with the pure-protocol-endpoint namespace.  Destroying or renaming this object will cause VASA to be unable to forward requests to the database service in the FlashArray.  This effectively makes the VASA Provider unable to process requests and the Management Path to fail.  Pure Storage is working to correct this and improve this implementation in a future Purity release.
  • Recommendation: Create a local array admin user when running Purity 5.1 and higher. This user should then be used when registering the storage providers in vCenter.
  • Recommendation: Following vSphere Best Practices with the FlashArray, ESXi clusters should map to FlashArray host groups and ESXi hosts should map to FlashArray hosts.  
  • Recommendation: The protocol endpoint should be connected to host groups on the FlashArray and not to individual hosts.
  • Recommendation: While multiple protocol endpoints can be created manually, the default device queue depth for protocol endpoints is 128 in ESXi and can be configured up to 4096.  This generally means adding additional protocol endpoints is often unnecessary.

VASA Provider/Storage Provider

The FlashArray has a storage provider running on each FlashArray controller called the VASA Service. The VASA Service is part of the core Purity Service, meaning that it automatically starts when Purity is running on that controller.  In vSphere, the VASA Providers will be registered as Storage Providers.  While Storage Providers/VASA Providers can manage multiple Storage Arrays, the Pure VASA Provider will only manage the FlashArray that it is running on.  Even though the VASA Service is running and active on both controllers, vCenter will only use one VASA Provider as the active Storage Provider and the other VASA Provider will be the Standby Provider.

Here are some requirements and recommendations when working with the FlashArray VASA Provider.

  • Requirement: Register both VASA Providers, CT0 and CT1.
    • While it's possible to only register a single VASA provider, this leaves a single point of failure in your management path.
  • Recommendation: Do not use an Active Directory user to register the storage providers.
    • Should the AD service/server be running on vVols, Pure Storage strongly recommends not using an AD user to register the storage providers. This leaves a single point of failure on the management path in the event that the AD user has its permissions changed, its password changed, or the account deleted.
  • Recommendation: Use a local array admin user to register the storage providers.
  • Recommendation: Should the FlashArray be running Purity 5.3.6 or higher, import CA-signed certificates to VASA-CT0 and VASA-CT1.

Managed Snapshots for vVols based VMs

One of the core benefits of using vVols is the integration between storage and vSphere Managed Snapshots. The operations of the managed snapshot are offloaded to the FlashArray and there is no performance penalty for keeping the managed snapshots. Offloading these operations to VASA and the FlashArray does, however, create additional work on the FlashArray that does not exist with managed snapshots on VMFS.

Here are some points to keep in mind when using Managed Snapshots with vVols based VMs.

  • Managed Snapshots for vVols based VMs create volumes for each Data vVol on that VM that have a -snap suffix in their naming.
    • The process of taking a managed snapshot for a vVol based VM will first issue a Prepare Snapshot Virtual Volume operation which will cause VASA to create placeholder data-snap volumes.  Once completed vSphere will then send the Snapshot Virtual Volume request after stunning the VM.  VASA will then take consistent point in time snapshots of each data vVol and copy them out to the placeholder volumes previously created.  Once the requests complete for each virtual disk the VM is unstunned and the snapshot is completed.
    • With FA volumes being created for the managed snapshot, this directly impacts the volume count on the FlashArray. For example, a vVol VM with 5 VMDKs (Data vVols) will create 5 new volumes on the FA for each managed snapshot. If 3 managed snapshots are taken, then this VM has a volume count on the FA of 21 volumes while powered off (1 Config vVol and 20 Data vVols) and 22 while powered on (1 additional Swap vVol).
  • Managed Snapshots only trigger Point in Time snapshots of the Data vVols and not the Config vVol.  In the event that the VM is deleted and a recovery of the VM is desired, it will manually have to be done from a pgroup snapshot.
  • The process of VMware taking a managed snapshot is fairly serialized; specifically, the snapshotVirtualVolume operations are serialized. This means that if a VM has 3 VMDKs (Data vVols), the snapshotVirtualVolume request will be issued for one VMDK and, after it completes, the next VMDK will have the operation issued against it. The more VMDKs a VM has, the longer the managed snapshot will take to complete. This could increase the stun time for that VM.
    • VMware has committed to improving the performance of these calls from vSphere. In vSphere 7.0 U3, snapshotVirtualVolume was updated to use the max batch size advertised by VASA so that a single call can cover multiple Data vVols. Multiple snapshotVirtualVolume calls for the same VM will now also be issued close to the same time in the event that the number of virtual disks is greater than the max batch size.
  • Recommendation:  Plan accordingly when setting up managed snapshots (scheduled or manual) and configuring backup software which leverages managed snapshots for incremental backups.  The size of the Data vVols and the amount of Data vVols per VM can impact how long the snapshot virtual volume op takes and how long the stun time can be for the VM.
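The volume-count arithmetic above can be sketched as a small helper; this is an illustrative calculation only, not a FlashArray API:

```python
def fa_volume_count(data_vvols: int, managed_snapshots: int, powered_on: bool) -> int:
    """FlashArray volume count for a single vVol VM:
    1 Config vVol, plus each Data vVol, plus one -snap copy of each
    Data vVol per managed snapshot, plus a Swap vVol while powered on."""
    count = 1 + data_vvols * (1 + managed_snapshots)
    return count + 1 if powered_on else count

# The example above: 5 VMDKs (Data vVols) and 3 managed snapshots.
print(fa_volume_count(5, 3, powered_on=False))  # 21
print(fa_volume_count(5, 3, powered_on=True))   # 22
```

A quick estimate like this is useful when planning against FlashArray object-count limits.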

Storage Policy Based Management (SPBM)

There are a few aspects of utilizing Storage Policies with vVols and the FlashArray to keep in mind when managing your vSphere Environment.

  • Storage Policies can be compatible with one or multiple replication groups (FlashArray protection groups).
    • While storage policies can be compatible with multiple replication groups, when applying the policy to a VM, multiple replication groups should not be used. The VM should be part of a single consistency group.
  • SPBM failover workflow APIs are run against the replication group and not the storage policy itself.
  • Recommendation: Attempt to keep replication groups under 100 VMs.  This will assist with the VASA Ops being issued against the policies and replication groups and the time it takes to return these queries.
    • This includes both Snapshot and Replication enabled protection groups.  These VASA Ops, such as queryReplicationGroup, will look up all objects in both local replication and snapshot pgroups, as well as target protection groups.  The more protection groups and the more objects in protection groups will inherently cause these queries to take longer.  Please see vVols Deep Dive: Lifecycle of a VASA Operation for more information.
  • Recommendation: Do not change the default storage policy of the vVols Datastore. This could cause issues in the vSphere UI when provisioning to the vVols Datastore.

FlashArray SafeMode with vVols

For FlashArrays with SafeMode enabled, additional considerations and planning will be required for the best experience. Because the management of storage is done through VASA, the VASA service frequently needs to create new volumes, destroy volumes, eradicate volumes, place volumes in FlashArray protection groups, remove volumes from FlashArray protection groups, and disable snapshot/replication schedules.

For more detailed information on SafeMode with vVols see the User Guide.  Here is a quick summary of recommendations when running vVols with SafeMode enabled on the FlashArray.

  • Any FlashArray should be running Purity 6.1.8 or higher when using vVols before enabling SafeMode.
  • vSphere Environment running 7.0 U1 or higher is ideal to leverage the allocated bitmap hint as part of VASA 3.5.
  • Object count, object count, object count. Seriously, the biggest impact that enabling SafeMode will have is on object count. Customers that want to enable SafeMode must plan to continuously monitor the object counts for volumes, volume groups, volume snapshots, and pgroup snapshots. Do not just monitor current object counts, but pending-eradication object counts as well.
  • Auto-RG should not be used for SPBM when assigning replication groups to a VM.
  • Once a VM has a storage policy replication group assigned, VASA will be unable to assign a different replication group.  Plan that once a storage policy and replication group are assigned, that the vSphere admin will be unable to change that with SafeMode enabled.
  • Failover replication group workflows will not be able to disable replication group schedules.  Nor will cleanup workflows be able to eradicate objects.  Users must plan for higher object counts after any tests or failover workflows.  
  • Environments that are frequently powering VMs on/off or vMotioning between hosts will have higher numbers of Swap vVols pending eradication. Should the eradication timer be changed to longer than 24 hours, these volumes will be pending eradication for a longer time. Storage and vSphere admins will have to plan around higher object counts in these environments.
    • In some cases, vSphere Admins may want to configure a VMFS Datastore that is shared between all hosts to be the target for VMs Swap.
  • When changed block tracking (CBT) is enabled for the first time, this will increase the number of volume snapshots pending eradication. Backup workflows that periodically refresh CBT (disable and re-enable CBT) will increase the number of these volume snapshots as well. Pure does not recommend frequently refreshing CBT; once enabled, CBT should not normally need to be refreshed.

Introduction to Virtual Volumes

Traditional storage provisioning of VMware-based virtual machines was done via a datastore mechanism.
The process was typically as follows:

  1. VMware administrator requests storage
  2. Storage administrator creates a “LUN” and provisions it to the ESXi environment via SAN protocol, such as iSCSI or Fibre Channel.
  3. VMware administrator rescans the SCSI bus of the ESXi host(s), identifies the device, and then formats it with the Virtual Machine File System (VMFS).
  4. A virtual machine is then created with various virtual disks. Each virtual disk was a file on that datastore. These virtual disks were then presented as block devices back up to the virtual machine.

While this process could be automated via plugins and the like, it still presented a variety of problems. First off, every time additional capacity was needed, this process was required to be followed. Also, if a virtual machine needed a certain array feature (replication for instance), how was that achieved? Array based replication was at the datastore level, so enabling a feature on that datastore affected all of the other virtual machines on that datastore (for better or for worse). Furthermore, how could the VMware administrator be sure that feature was, at any point in the future, still configured properly or even enabled?

There were not a lot of great answers to these questions.

Enter VMware vSphere Virtual Volumes (henceforth referred to as vVols).

vVols solve these problems. At a high level, vVols offer the following benefits:

  • Virtual Disk granularity on the array:
    Each virtual disk is a physical volume on the array.
  • Automatic Provisioning:
    When a new virtual disk is requested for a VM, VMware automatically has the array create a corresponding volume and present it to that VM. A 100 GB virtual disk means a 100 GB volume on the array. When that virtual disk is resized, so is the array volume. When the virtual disk is deleted, so is the array volume.
  • VM-insights on the array:
    Since the array now sees each virtual disk, it can report on that granularity.  The array also understands the virtual machine object, so an array can now manage and report on a VM itself or its individual virtual disks.
  • Storage Policy Based Management:
    Since the array now has virtual disk granularity, features like array snapshots or array-based replication can be provided at the exact granularity needed. With vVols, VMware can communicate to the array to find out what features it supports and allow the VMware administrator to assign, change, or remove functionality on a vVol on demand and via policies. If a storage administrator overrides a configured feature on a vVol, the VMware administrator is alerted because the VM is marked as non-compliant with its assigned policy.

Configuring the vSphere Web Client Plugin

While the FlashArray Plugin for the vSphere Web Client is not required for vVols on the FlashArray, it does help streamline some processes that would otherwise require coordinated use of multiple GUIs or scripting work. The vSphere Plugin should be installed and the FlashArray connection(s) added to the plugin. The plugin can be installed a few different ways, the most common being the FlashArray Web Interface or the Pure Storage VMware PowerShell module.

Installing the FlashArray vSphere Plugin with the FlashArray Web Interface

Installing the vSphere Client Plugin can be accomplished directly from the FlashArray Web Interface with a few simple steps. This option is used most often when customer environments are unable to utilize the PowerShell or vRealize Orchestrator options due to software or firewall limitations. 

Step 1: Login to the Pure Storage FlashArray Web Interface (GUI).

Step 2: In the left hand pane of the GUI select the Settings option.

Step 3: After you have selected Settings in the right hand pane you will then select the Software option (shown below).


Step 4: Once in the Software section you will see an option on the top right of the lower pane titled "vSphere Plugin".

Step 5: In the top right hand of that section you will note an icon resembling a pencil and paper; select that icon to edit the vSphere Plugin options (shown below).


Step 6: An embedded window is opened allowing you to fill in the vCenter Server details where the vSphere Client Plugin will be installed. Once completed select Save.


Step 7: Once the vCenter details have been saved, and the plugin status has been verified on the vCenter Server, an Install option will be displayed.


Please note the Available Version on the FlashArray. If the available version is not listed you will need to open a ticket with Pure Storage Support to have the desired version loaded onto the FlashArray. 

Step 8: After the install has completed you will note that the Version on vCenter field is now populated and the Install button has changed to Uninstall.


Step 9: Verify the installation is successful by logging out of the vCenter Server and back in again. Note that this process in the FlashArray UI is only an installer; once you navigate away from this page the credentials are not stored. This is not a persistent connection to vCenter, just a one-time install process. All management of the plugin is now done from within the vSphere Client itself. This allows you to install the vSphere Plugin to many vCenters from the same FlashArray.

When utilizing the Flash Client there are times where you may need to restart the vSphere Web Client service (vsphere-client) for the Plugin to appear. 

Installing the FlashArray vSphere Plugin with PowerShell

To use the installation cmdlet described below, you need to first ensure you have the Pure Storage module installed and loaded.

  • The first step is to ensure you have installed the VMware PowerCLI module on the server. Without this module installed the Pure Storage commands will not work:
PS C:\> Install-Module VMware.PowerCLI
  • After the VMware PowerCLI module has been installed you can then install the Pure Storage module on the server and load it for use:
PS C:\> Install-Module PureStorage.FlashArray.VMware
PS C:\> Import-Module PureStorage.FlashArray.VMware

Updating the Pure Storage Module

The Pure Storage PowerShell module is actively maintained and thus updating every so often is recommended. This will ensure you have the most recent fixes and features available when utilizing this module. This process is relatively quick and requires only a single command:

PS C:\> Update-Module PureStorage.FlashArray.VMware

If you would like to know which versions are available for download, for both the Flash and HTML-5 client, you can execute the following command:

PS C:\> Get-PfavSpherePlugin

Source Type   Version
------ ----   -------
Pure1  Flash  3.1.3
Pure1  HTML-5 4.3.1

Once you have installed the required modules on the server, and know which version you want installed, you can then install the vSphere Client Plugin using the PowerShell cmdlet "Install-PfavSpherePlugin".

Step 1: Connect to the vCenter Server that you want to install the plugin on (one example below):

PS C:\> Connect-VIServer Ip.Address.Goes.Here

Step 2: Install the vSphere Client Plugin:

Installing the HTML-5 version:

PS C:\> Install-PfavSpherePlugin -html 

Installing the Flash Client version:

PS C:\> Install-PfavSpherePlugin -flash 

Step 3: Verify the installation is successful by logging out of the vCenter Server and back in again.

When utilizing the Flash Client there are times where you may need to restart the vSphere Web Client service (vsphere-client) for the Plugin to appear. 


Adding a FlashArray Connection with the vSphere Plugin

To add a single FlashArray, login to the vSphere Client and click on the Menu drop-down and choose Pure Storage.


Click on the +Add button shown under the Pure Storage icon.


Choose Add a Single Array:


Enter in:

  • Array name. This does not have to be the actual FlashArray's domain name, but it is recommended. This name is not verified, but it should be descriptive either way.
  • Array URL. In the form of an IP address or fully-qualified domain name representing a FlashArray virtual address. FQDN is always preferred.
  • Username. A username of either a local user or a directory attached user.
  • Password. The corresponding password of the selected user.


The virtual address can be verified from the array on Settings > Network > Subnets & Interfaces:


FQDN can be verified with nslookup or similar tools:
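Name resolution can also be checked with a short script. A minimal Python sketch, assuming a placeholder FQDN:

```python
import socket

def resolves(fqdn: str) -> bool:
    """Return True if the FQDN resolves to an IPv4 address."""
    try:
        socket.gethostbyname(fqdn)
        return True
    except socket.gaierror:
        return False

# Placeholder FlashArray virtual-interface FQDN; substitute your own.
print("flasharray.example.com resolves:", resolves("flasharray.example.com"))
```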


Now that the vSphere Plugin is installed and the FlashArray(s) have been registered, the next step is registering the VASA Provider.

Registering the FlashArray VASA Provider

The quickest method for registering the FlashArray VASA Provider is through the use of the FlashArray Plugin for the vSphere Web Client. It should be noted that this plugin is NOT required to use vVols with the FlashArray, though it does help streamline some processes such as this.

Pure Storage recommends using a local array admin to register the storage provider.  A local array admin user can be created starting in Purity 5.1 and higher.  This process is outlined here.

Registering the VASA Providers with the Pure Storage vSphere Plugin

  1. A FlashArray will need to be added/registered in the Plugin in order to register the Storage Provider for a given FlashArray. Once the FlashArray is registered, navigate to the main Plugin page, select the FlashArray, and then click on "Register Storage Provider".
  2. The recommended practice is to have a local FlashArray Array Admin user to register the storage providers with.  In the example below, there is a local array admin named "vvols-admin" that the Storage Providers will be registered with.  In the event that the vCenter is in Enhanced Linked Mode, the option to choose which vCenter to register the storage providers with will be given.
    Registering the Storage Provider with a Single vCenter
    Registering the Storage Provider with a vCenter in Linked Mode
  3. Once the Storage Provider is successfully registered, navigate to the vCenter Server page, then Config and the Storage Providers tab.  Confirm that the storage providers are online and healthy.

The FlashArray will log all subsequent vVol operations from those vCenters under the user used to register the storage providers.

Mounting the FlashArray vVol Datastore

Once VASA has been registered, the FlashArray Plugin for the vSphere Web Client can automate the process to connect a PE to a cluster and also mount the vVol datastore. 

Mounting the vVol Datastore with the Pure Storage vSphere Plugin

The ESXi hosts will need to have been added to the FlashArray and best practice is to correlate the ESXi cluster to a FlashArray Host Group. Then each ESXi host that is in that Cluster should be added to the FlashArray Host Group.

  1. Right-click on the ESXi cluster on which you want to create and mount the vVol Datastore. Go to the Pure Storage option and then click on Create Datastore.
  2. Choose to create a vVol FlashArray Storage Container (vVol Datastore).
  3. Choose a name for the vVol Datastore.
  4. Select the ESXi cluster that will be the compute resource to mount the vVol Datastore to. The best practice for vVols is to mount the vVol Datastore to the host group and not to individual ESXi hosts. Why is this important? During this step, the Plugin will check that the host group on the FlashArray is connected to a Protocol Endpoint. In the event that there is no connection, the Plugin will automatically connect the Protocol Endpoint on that FA to the host group.
  5. Confirm the FlashArray that the vVol Datastore will be created for.

  6. Review the information and finish the workflow.
  7. From the Datastore Page, click on the newly created vVol Datastore and then check the Connectivity with the Hosts in the ESXi Cluster to ensure that they are connected and healthy.

Creating VM Storage Policies

A quick option for the creation of VM storage policies is to use the FlashArray Plugin for the vSphere Client. The 4.1.0 release of the plugin offers the ability to import one or more FlashArray Protection Groups and create respective storage policies in vCenter. 

Importing FlashArray Protection Groups as SPBM Policies with the Pure Storage vSphere Plugin

  • From the main plugin page, select the FlashArray to import the protection group settings and click on "Import Protection Groups"
  • The next screen will list the FlashArray protection groups. In parentheses, the schedule and capabilities of each protection group are listed. In the event that a Storage Policy in vCenter already matches a FlashArray pgroup's schedule, the option to select that pgroup will be grayed out. Select the policy or policies and click Import.
  • Navigate to "Policies and Profiles" and click on the VM Storage Policies tab.  From here you will see that the Storage Policies have been created.  The naming schema for these policies will be [FlashArray] [either Snapshot or Replication] [Schedule Interval].  Below there is a Replication and Snapshot policy shown.

Policies can also be manually created or changed using the following FlashArray capabilities:

Capability Name                  Value
Pure Storage FlashArray          Yes or No
FlashArray Group                 Name of one or more FlashArrays
QoS Support                      Yes or No
Consistency Group Name           A FlashArray protection group name
Local Snapshot Policy Capable    Yes or No
Local Snapshot Interval          A time interval in seconds, minutes, hours, days, weeks, months, or years
Local Snapshot Retention         A time interval in seconds, minutes, hours, days, weeks, months, or years
Replication Capable              Yes or No
Replication Interval             A time interval in seconds, minutes, hours, days, weeks, months, or years
Replication Retention            A time interval in seconds, minutes, hours, days, weeks, months, or years
Minimum Replication Concurrency  Number of target FlashArrays to replicate to at once
Target Sites                     Names of specific FlashArrays desired as replication targets
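For illustration, a policy's capability set can be thought of as a simple mapping of the names above to values. The sketch below is a hypothetical example, not output from vCenter or the FlashArray:

```python
# Hypothetical capability set for a one-hour replication policy.
policy = {
    "Pure Storage FlashArray": "Yes",
    "Consistency Group Name": "vvols-pgroup",  # placeholder pgroup name
    "Replication Capable": "Yes",
    "Replication Interval": "1 hours",
    "Replication Retention": "1 days",
}

def is_replication_policy(capabilities: dict) -> bool:
    """A replication policy must be replication capable and define an interval."""
    return (capabilities.get("Replication Capable") == "Yes"
            and "Replication Interval" in capabilities)

print(is_replication_policy(policy))  # True
```

A replication-capable policy like this is what requires choosing a replication group when the policy is applied to a VM.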

Moving a VM to vVols

A virtual machine can be easily and non-disruptively migrated from NFS or VMFS to Virtual Volumes via a Storage vMotion operation. This will convert its virtual disks into separate volumes on the array.

  • Click on a virtual machine in the vSphere Web Client and choose Migrate.
  • Then choose “Change storage only”.
  • Choose a VM storage policy if desired, or just choose a vVol datastore.
  • If you chose a policy, VMware will filter out non-compatible datastores. If the policy has replication or snapshot capabilities, you will need to choose a replication group (which is a FlashArray protection group).
  • Click Next and then Finish to have VMware convert the VM online to vVols and apply any policy configurations if selected.

Virtual Volume Reporting

The Virtual Volume architecture not only gives VMware insight into the FlashArray, but it also gives the FlashArray insight into VMware. The granularity provided by Virtual Volumes gives the FlashArray the ability to understand the virtual machine object (volume group) and its various virtual disks (volumes).

Data Reduction Reporting

  • As noted in previous sections, a VM is represented on the FlashArray as a volume group. By clicking on the Storage pane and then the Volumes tab, you can see the volume groups. Click on the one that represents your virtual machine. 
  • The top panel of the volume group shows averaged or aggregate information for your virtual machine. If you click on the Space button in the Volumes box, the space stats will be displayed for the individual vVols as well. 
  • To see historical information, click on the Analysis pane and choose Capacity and then the Volumes tab. 
  • To look at VMs (volume groups) or vVols (volumes), click on the drop-down and choose the appropriate object type.  
  • Once selected, browse the objects and select the VM or vVol of your choosing. Alternatively, type the name of the VM into the search box and the listing will be filtered automatically. Up to five volumes or five volume groups can be selected in the GUI.
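The aggregate space view described above can be sketched as a rollup of per-vVol stats into volume-group totals. All field names and numbers here are illustrative; the real values come from the FlashArray GUI or REST API:

```python
# Sketch of how a volume group's aggregate space view can be derived from
# per-vVol space stats. Field names and values are hypothetical.
vvols = [  # data vVols of one VM (volume group)
    {"name": "vm1/Data-1", "provisioned": 100, "unique": 10, "data_reduction": 4.0},
    {"name": "vm1/Data-2", "provisioned": 50,  "unique": 5,  "data_reduction": 2.0},
]

def group_space(vols):
    """Aggregate per-volume space stats up to the volume-group level."""
    provisioned = sum(v["provisioned"] for v in vols)
    unique = sum(v["unique"] for v in vols)
    # Average data reduction, weighted by each volume's unique space.
    reduction = sum(v["data_reduction"] * v["unique"] for v in vols) / unique
    return {"provisioned": provisioned, "unique": unique, "data_reduction": reduction}
```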

Performance Reporting

  • VM and vVol performance can also be reported on. By clicking on the Analysis section and the Performance sub-section, details like IOPS, latency, and throughput can be viewed. Click on the Volumes tab to find the various VMs (volume groups) or vVols (volumes).
  • To report on a VM, choose volume groups in the drop-down. To see specific vVols, choose volumes. 

The Analysis tab breaks out the performance stats (IOPS, throughput, and latency) into different charts, which can be split further into Reads or Writes. VM latency is averaged across all volumes in that VM, while throughput and IOPS are cumulative across the volumes. If a specific volume is selected, the stats shown are for just that volume. 
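The VM-level rollup just described can be sketched as follows: latency is averaged across the VM's volumes, while IOPS and throughput are summed. Field names and numbers are illustrative:

```python
# Sketch of the VM-level rollup described above: latency averaged,
# IOPS and throughput summed across the VM's volumes. Illustrative only.
def vm_perf(volumes):
    """Roll per-volume performance stats up to the VM (volume group) level."""
    n = len(volumes)
    return {
        "iops": sum(v["iops"] for v in volumes),                  # cumulative
        "throughput": sum(v["throughput"] for v in volumes),      # cumulative
        "latency_ms": sum(v["latency_ms"] for v in volumes) / n,  # averaged
    }

stats = vm_perf([
    {"iops": 1000, "throughput": 80,  "latency_ms": 0.4},
    {"iops": 3000, "throughput": 160, "latency_ms": 0.6},
])
```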


Creating a Snapshot

While a benefit of virtual volumes is that you can go to the array GUI/REST/CLI and perform per-VM or per-virtual disk operations, a primary advantage is that this can be done from within vCenter natively.  

When you have a virtual machine that consists of vVols, you can use the vSphere Web Client (or any VMware management tool) to create array-based snapshots. 

  • For instance, the process to create a snapshot with the vSphere Web Client is as follows. Navigate to the Hosts & Clusters view and identify the target virtual machine. Then right-click on the VM, choose Snapshots, and click Take Snapshot.
  • This will bring up the snapshot creation panel. Here, you can choose a name and select one or neither of the following options:
    • Snapshot the virtual machine’s memory
      When you create a memory snapshot, the snapshot captures the state of the virtual machine's memory and the virtual machine power settings. When you capture the virtual machine's memory state, the snapshot operation takes longer to complete. You might also see a momentary lapse in VM response over the network (a ping or so).
    • Quiesce the guest file system
      When you quiesce a virtual machine during a snapshot, VMware Tools quiesces the file system in the virtual machine. The quiesce operation pauses or alters the state of running processes on the virtual machine, especially processes that might modify information stored on the disk during a restore operation. This does require VMware Tools to be installed inside of the VM. 
  • Purity//FA 5.1.3+ -- Snapshots will be a copy of the volumes for the VM that you are taking a managed snapshot of.
    • Here is a look at the volume group and the config, data, and data snapshot volumes.
    • Here is the data volume for the running VM; you'll see it is connected to the host.
    • Then here is the snapshot volume of the current data volume. Notice that it is not connected to any hosts and is not a Pure snapshot, but its own volume.
  • Pre-Purity//FA 5.1.3 -- If you look at one or all of the data vVols on the FlashArray, you will now see their respective snapshots:
  • The snapshot will also appear inside of VMware interfaces where it can be fully managed.

Additional vVols Features of the FlashArray vSphere Web Client Plugin

The FlashArray Plugin for the vSphere Web Client 3.0 introduces a few value-add snapshot and recovery features that are not otherwise built into the vSphere Web Client.  

Viewing VM vVol details

When a FlashArray is registered with the vSphere Plugin there will be details reported in vCenter for vVols based Virtual Machines that are stored on that FlashArray.  These details are explained here in the Demo Video.  Click to expand the explanation below.

Viewing the Virtual Machine vVol Details with the Pure Storage vSphere Plugin
  1. From the Virtual Machine view and Summary tab, there is a FlashArray widget box. This will show whether or not the VM has Undelete Protection. Undelete Protection means that there is currently a FlashArray snapshot of this VM's Config vVol.
  2. On the Virtual Machine's Configure Page, there is a Pure Storage Virtual Volumes tab.  

    The page allows end users to run the workflows to import a virtual disk (vVol), restore a destroyed vVol, or overwrite an existing vVol.
    Additionally, the page contains important information about the VM's Data vVols, including the Virtual Device (SCSI controller connection), the vVol datastore that the vVol is on, which array the vVol is on, and the FlashArray volume group name and volume name.

Creating a FlashArray Snapshot of a vVol Disk

The Pure Storage Plugin version 4.4.0 and later for the vSphere Client has the ability to create a new snapshot of only a vVol virtual disk.

Create a Snapshot of a vVol Disk
  1. From the Virtual Machine Configure tab, navigate to the Pure Storage - Virtual Volumes pane, select the disk you would like to snapshot and click Create Snapshot.




  2. After clicking the Create Snapshot button, a dialog appears. You can optionally enter a snapshot name; otherwise, the next available numerical name will be assigned to the snapshot. Click Create.



  3. After the workflow is complete, you can verify the snapshot by either clicking the Import Disk or the Overwrite Disk button and finding the correct disk and expanding its snapshots.
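The default naming behavior mentioned in step 2 can be sketched as a small helper. This is an illustration of the described behavior ("the next available numerical name"), not the plugin's actual code:

```python
# Sketch of the default numerical naming described in step 2: when no
# snapshot name is entered, the next available number is used.
# Hypothetical helper, not the plugin's implementation.
def next_numeric_name(existing_names):
    """Return the next available numerical snapshot name as a string."""
    numbers = [int(n) for n in existing_names if n.isdigit()]
    return str(max(numbers, default=0) + 1)
```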



Restoring a vVol from a FlashArray Snapshot

The Pure Storage vSphere plugin has the ability to recover a destroyed vVol within 24 hours of when the vVol was destroyed.  There is also an integration to overwrite an existing vVol with a previous FlashArray snapshot of the vVol.  These workflows are covered in the Demo Video here.  Click to expand the workflows below.

Restoring a Destroyed vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machine's Configure page, navigate to the Pure Storage - Virtual Volumes tab and select Restore Deleted Disk.

    When deleting a Data vVol, the FlashArray will destroy the volume and the volume will be in a Pending Eradication state for 24 hours.

    In this workflow example, the VM 405-Win-VM-2 has had the virtual disk "Hard disk 2" deleted.  
  2. After selecting the Restore Deleted Disk option, any Data vVols that have been destroyed and are pending eradication will be displayed. Select the Data vVol that should be restored and click Restore to complete the workflow.
  3. After the workflow is complete, the recovered vVol will be displayed in the Pure Storage Virtual Volumes tab.
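The 24-hour pending-eradication window described above can be sketched as a simple calculation. The timestamps here are hypothetical; the FlashArray tracks and reports this window itself:

```python
# Sketch of the 24-hour pending-eradication window described above: a
# destroyed vVol can be restored until 24 hours after it was destroyed.
# Timestamps are illustrative.
from datetime import datetime, timedelta

ERADICATION_WINDOW = timedelta(hours=24)

def time_remaining(destroyed_at: datetime, now: datetime) -> timedelta:
    """Time left to restore a destroyed vVol; zero once the window has passed."""
    remaining = destroyed_at + ERADICATION_WINDOW - now
    return max(remaining, timedelta(0))

destroyed = datetime(2021, 7, 21, 15, 0)
left = time_remaining(destroyed, datetime(2021, 7, 21, 18, 30))
```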
Rolling Back a vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machine's Configure page, navigate to the Pure Storage - Virtual Volumes tab and select Overwrite Disk.
  2. From this page, select the vVol-based VM and the Data vVol to use to overwrite the existing Data vVol. While this can be a different vVol VM or the same one, the example shown rolls this Data vVol back to a previous snapshot. Here Hard disk 2 is selected, and when expanded, all snapshots for that vVol are shown. In this case, the one selected is a snapshot from the FlashArray pgroup "vSphere-Plugin-pgroup-2" with the snapshot name "Safe-Snapshot".
    In the Volume Information for the selected snapshot, we can see when the snapshot was created and the information for the vVol that will be used to overwrite the existing Data vVol.
    Click Overwrite to complete the workflow. 

Creating a vVol Copy

With the Pure Storage vSphere plugin there is the ability to import a vVol from the same vVol VM or from another vVol VM. The source can be either a FlashArray snapshot or a Managed Snapshot. The workflows for importing the same vVol from either a FlashArray snapshot or a Managed Snapshot are walked through below, as well as in the Demo Video here.

Creating the Copy from a FlashArray Snapshot with the Pure Storage vSphere Plugin
  1. From the Virtual Machine's Configure page, navigate to the Pure Storage - Virtual Volumes tab and select Import Disk.
  2. From this page, select the vVol-based VM and the Data vVol from that VM that you want to recover. This can be a different vVol VM or the same vVol VM that you want to import the Data vVol to. In this example, Hard disk 2 is selected, and when expanded, all snapshots for that vVol are shown. In this case, the one selected is a snapshot from the FlashArray pgroup "vSphere-Plugin-pgroup-2" with the snapshot name "53".
    In the Volume Information for the selected snapshot, we can see when the snapshot was created and the information for the vVol that will be imported.
    Click Import to complete the workflow. 
Creating the Copy from a Managed Snapshot with the Pure Storage vSphere Plugin
  1. From the Virtual Machine's Configure page, navigate to the Pure Storage - Virtual Volumes tab and select Import Disk.
  2. Instead of using a FlashArray pgroup snapshot to import the vVol, this time a Managed Snapshot will be selected. Notice the difference in the naming for the selected vVol: there is no pgroup or snapshot name associated with it, just the volume group and Data vVol name followed by "-snap", indicating that this is a managed snapshot for this vVol.  
    The same type of information is provided in the Volume Information for Managed Snapshot or FlashArray Snapshots.
    To complete the import workflow, click on Import.
  3. Once the Import Workflows have completed, the new Data vVols will show up on the Virtual Volumes page.
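The naming difference between the two sources above can be sketched as a simple heuristic. Both name formats here are illustrative (the dotted pgroup form follows the common "pgroup.snapshot.volume" pattern), not an authoritative FlashArray naming specification:

```python
# Sketch of the naming difference described above: a FlashArray pgroup
# snapshot carries the pgroup and snapshot name, while a managed snapshot
# is just the volume group and Data vVol name with a "-snap" suffix.
# Both formats shown are illustrative, not an official specification.
def is_managed_snapshot(name: str) -> bool:
    """Heuristic: managed snapshots end in '-snap' and have no dotted pgroup prefix."""
    return name.endswith("-snap") and "." not in name

examples = {
    "vSphere-Plugin-pgroup-2.53.vg1/Data-abc": False,  # pgroup snapshot "53"
    "vg1/Data-abc-snap": True,                         # managed snapshot
}
```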

Recovering a Deleted VM from a FlashArray Snapshot (VM Undelete)

The Pure Storage vSphere Plugin has a workflow that can recover a vVol-based VM, provided a FlashArray snapshot of the VM's Config vVol exists. The section in the Demo Video that covers this workflow can be found here. Click below to expand the workflow in the KB.

Recovering a Deleted vVol VM with the Pure Storage vSphere Plugin
  1. From the Virtual Machine view, there is a FlashArray box. This will show whether or not the VM has Undelete Protection. Undelete Protection means that there is currently a FlashArray snapshot of this VM's Config vVol. This is required for the Undelete workflow for the following reasons:
    1. When a vVol VM is deleted, VMware first deletes the Data vVol inventory information from the config.
    2. After that is complete, VMware issues a volume unbind and destroys the Config vVol. This means that by the time the FlashArray has destroyed the Config vVol, the inventory mapping and Data vVol information have already been deleted.  
    3. In order to recover a VM that has been deleted, the Config vVol has to be overwritten with the snapshot of that Config vVol.
  2. From the Virtual Machine view, we can see that the last snapshot of the Config vVol on the FlashArray was taken at 3:17 PM on July 21st. This means that any edits made to the VM after that point, such as CPU, memory, or new vVols, will not be recovered. The state of the VM at the Undelete Protection timestamp is what will be recovered.
  3. This VM has been powered off and is now going to be deleted.
  4. From the Datastore tab, select the vVol datastore. Right-click on the vVol datastore, go to the Pure Storage option, and select "Undelete Virtual Machine".
  5. The first page, "Virtual Machine", lets you select which destroyed VM you want to recover. The caveat is that, by default, a destroyed volume on the FlashArray has 24 hours until it is eradicated. This page will show how much time remaining the VM has to be recovered.
  6. On the next page, "Compute Resource", select the ESXi host that will recover the VM.
  7. Review the details and then select Finish.
  8. Power on the VM and check that everything powers on and is healthy.

vSphere Client Video Demo 

Here is a Video Demo that walks through each of the steps covered in this KB.



There may be instances where some ESXi hosts are mounted to the vVol datastore while other hosts are unable to mount it. Here are some posts to help troubleshoot those instances.