
Web Guide: Virtual Volumes Quick Start Guide



With the Purity 5.0.0 release, Pure Storage introduced support for vSphere Virtual Volumes (vVols) on the FlashArray storage platform. This quick start guide provides the necessary information to get Virtual Volumes up and running on the FlashArray and configured in the VMware environment. This guide assumes use of the FlashArray Plugin for the vSphere Web Client.

Quick Start Checklist

Please ensure the following before attempting vVols setup and configuration:

vVols Best Practices Summary

Requirements

  • Purity//FA 6.1.8 or higher
  • FlashArray 400 Series, FlashArray//M, FlashArray//X, FlashArray//C
  • vCenter 6.5+ and ESXi 6.5+
    • vSphere 6.5 and 6.7 have reached end of life with VMware; 7.0 or higher should be used moving forward
  • Configure NTP for the VMware environment and the FlashArray
    • Ensure that all hosts, vCenter Servers and arrays are synced
  • Ensure that vCenter Server and ESXi host management networks have TCP port 8084 access to FlashArray controller management ports.
  • Configure hosts and host groups with appropriate initiators on the FlashArray.
  • The 'pure-protocol-endpoint' must not be destroyed.
    • This namespace must exist for the vVols management path to operate correctly.

Recommendations

  • Purity//FA 6.2.10 or later
  • vCenter 7.0 Update 3f (build 20051473) or later
  • ESXi 7.0 Update 3f (build 20036589) or later
  • When registering the VASA Provider, use a local FlashArray user
  • Do not run vCenter Servers on vVols
  • The Protocol Endpoint should be connected to Host Groups and not individual Hosts.
  • Configure a syslog server for the vSphere environment
  • Configure snapshot policies for all Config vVols (VM home directories).
  • Use Virtual Machine hardware version 15 or later.
    • The Hardware Version will need to be 15 or later when the Virtual Machine needs more than 15 virtual devices per SCSI controller.

If using Virtual Volumes and FlashArray replication, ensure that the anticipated recovery site is running vSphere 6.7 or later.

As always, please ensure you follow standard Pure Storage best practices for vSphere.

Currently VMware does not support stretched storage with vVols. Due to limitations in both Purity//FA and vSphere, vVols are not supported with ActiveCluster. vVols are also not supported with ActiveDR.

Pure Storage and VMware are actively partnering to develop support for stretched storage (ActiveCluster) with vVols and have targeted the first half of CY2024 for release; release timelines are subject to change.

vVols Best Practices Quick Guidance Points

Here are some quick points of guidance when using vVols with the Pure Storage FlashArray. These are not meant to be a Best Practices deep dive nor a comprehensive outline of all best practices when using vVols with Pure Storage; a Best Practices deep dive will be given in the future. However, more explanation of the requirements and recommendations is given in the summary above.

Purity Version

While vVols support was first introduced with Purity 5.0.0, there have been significant fixes and enhancements to the VASA provider in later releases of Purity. Because of this, Pure has set the required Purity version for vVols to a later release.

  • For general vVols use, while Purity 5.1 and 5.3 can support vVols, both Purity releases are end of life. As such, the minimum target Purity version should be Purity//FA 6.1.8.

Pure Storage recommends that customers running vVols upgrade to Purity//FA 6.2.10 or higher.

The main reasons behind this are enhancements to VASA that support vVols at higher scale, improved performance of Managed Snapshots, and the SPBM Replication Group Failover API at scale. For more information, please see What's New with VASA Provider 2.0.0.

vSphere Version

While vSphere Virtual Volumes 2.0 was released with vSphere 6.0, the Pure Storage FlashArray only supports vSphere Virtual Volumes 3.0, which was released with vSphere 6.5.  As such, the minimum required vSphere version is the 6.5 GA release.  That being said, there are significant fixes specific to vVols, so the required and recommended versions are as follows:

vSphere 7.0 Update 3 introduced several improvements for vVols and for running vVols at scale.  vSphere 7.0 Update 3f should be the minimum vSphere version to target for running vVols with Pure Storage in your vSphere environment.

vSphere Environment

With regards to the vSphere environment, there are some networking requirements and some strong recommendations from Pure Storage when implementing vVols in your vSphere Environment.

  • Requirement: NTP must be configured the same across all ESXi hosts and vCenter Servers in the environment.  The time and date must be set to the current date/time (a quick verification sketch is shown after this list).
  • Recommended: Configure syslog forwarding for the vSphere environment.
  • Requirement: Network port 8084 must be open and accessible from vCenter Servers and ESXi hosts to the FlashArray that will be used for vVols.
  • Recommended: Use Virtual Machine hardware version 15 or higher.
  • Requirement: Do not run vCenter Servers on vVols.
    • While a vCenter Server can run on vVols, in the event of a failure on the VASA management path combined with a vCenter Server restart, the environment could enter a state where the vCenter Server is unable to boot or start.  Please see the failure scenario KB for more detail on this.
  • Recommended: Either configure an SPBM policy to snapshot all of the vVol VMs' Config vVols or manually place the Config vVols in a FlashArray protection group with a snapshot schedule enabled.
    • A snapshot of the Config vVol is required for the vSphere Plugin's VM undelete feature.  Having a backup of the Config vVol also helps the recovery or rollback process for the VM in the event that there is an issue.  There is a detailed KB that outlines some of these workflows that can be found here.
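
As a quick sanity check for the NTP and port 8084 requirements above, the following can be run from an ESXi host shell.  This is a minimal sketch: the FlashArray controller management address shown is a placeholder for your own, and command availability can vary by ESXi release.

$ esxcli system time get    # confirm the host clock matches the current date/time
$ nc -z fa-ct0.example.com 8084 && echo "VASA port 8084 reachable"    # repeat for each controller management address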

FlashArray Environment

Here is some more detail and color for the requirements and recommendations with the FlashArray:

  • Requirement: The FlashArray Protocol Endpoint object 'pure-protocol-endpoint' must exist. The FlashArray admin must not rename, delete or otherwise edit the default FlashArray Protocol Endpoint.
    • Currently, Pure Storage stores important information for the VASA Service within the pure-protocol-endpoint namespace.  Destroying or renaming this object will cause VASA to be unable to forward requests to the database service on the FlashArray.  This effectively makes the VASA Provider unable to process requests and causes the Management Path to fail.  Pure Storage is working to improve this implementation in a future Purity release.
  • Recommendation: Create a local array admin user when running Purity 5.1 and higher.  This user should then be used when registering the storage providers in vCenter.
  • Recommendation: Following vSphere best practices with the FlashArray, ESXi clusters should map to FlashArray host groups and ESXi hosts should map to FlashArray hosts.
  • Recommendation: The protocol endpoint should be connected to host groups on the FlashArray and not to individual hosts.
  • Recommendation: While multiple protocol endpoints can be created manually, the default device queue depth for protocol endpoints in ESXi is 128 and can be configured up to 4096, so adding additional protocol endpoints is often unnecessary (see the example after this list).
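
The protocol endpoint queue depth mentioned above can be checked from an ESXi host with the commands below.  This is a hedged example; confirm the advanced option name against your ESXi release before changing it.

$ esxcli storage vvol protocolendpoint list
$ esxcli system settings advanced list -o /Scsi/ScsiVVolPESNRO    # default 128, configurable up to 4096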

VASA Provider/Storage Provider

The FlashArray has a storage provider running on each FlashArray controller called the VASA Service. The VASA Service is part of the core Purity Service, meaning that it automatically starts when Purity is running on that controller.  In vSphere, the VASA Providers will be registered as Storage Providers.  While Storage Providers/VASA Providers can manage multiple Storage Arrays, the Pure VASA Provider will only manage the FlashArray that it is running on.  Even though the VASA Service is running and active on both controllers, vCenter will only use one VASA Provider as the active Storage Provider and the other VASA Provider will be the Standby Provider.

Here are some requirements and recommendations when working with the FlashArray VASA Provider.

  • Requirement: Register both VASA Providers (CT0 and CT1).
    • While it's possible to only register a single VASA Provider, this leaves a single point of failure in your management path.
  • Recommendation: Do not use an Active Directory user to register the storage providers.
    • Should the AD service/server be running on vVols, Pure Storage strongly recommends not using an AD user to register the storage providers.  This leaves a single point of failure on the management path in the event that the AD user's permissions are changed, the password is changed or the account is deleted.
  • Recommendation: Use a local array admin user to register the storage providers (see the example after this list).
  • Recommendation: If the FlashArray is running Purity 5.3.6 or higher, import CA-signed certificates to VASA-CT0 and VASA-CT1.
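
As a sketch of the local-user recommendation above, a dedicated array admin can be created from the FlashArray CLI.  The user name here is only an example, and the exact pureadmin syntax and role names should be confirmed for your Purity release.

# Create a local array admin to use for VASA Provider registration #
pureadmin create --role array_admin vvols-admin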

Managed Snapshots for vVols based VMs

One of the core benefits of using vVols is the integration between array storage and vSphere Managed Snapshots.  The operations of the managed snapshot are offloaded to the FlashArray and there is no performance penalty for keeping the managed snapshots.  Offloading the operations behind managed snapshots to VASA and the FlashArray does, however, create additional work on the FlashArray that is not there with managed snapshots on VMFS.

Massive improvements to vVols performance at scale and under load were released with the FlashArray VASA Provider 2.0.0 in Purity//FA 6.2 and 6.3.

Pure Storage's recommendation when using vVols with the FlashArray is to upgrade to Purity//FA 6.2.10 or higher.

Please see the KB What's new with VASA Provider 2.0.0 for more information.

Here are some points to keep in mind when using Managed Snapshots with vVols based VMs.

  • Managed Snapshots for vVols based VMs create volumes for each Data vVol on that VM; these have a -snap suffix in their naming (see the listing sketch after this list).
    • The process of taking a managed snapshot for a vVol based VM will first issue a Prepare Snapshot Virtual Volume operation, which causes VASA to create placeholder data-snap volumes.  Once that completes, vSphere sends the Snapshot Virtual Volume request after stunning the VM.  VASA then takes consistent point-in-time snapshots of each Data vVol and copies them out to the placeholder volumes previously created.  Once the requests complete for each virtual disk, the VM is unstunned and the snapshot is complete.
    • Because FlashArray volumes are created for the managed snapshot, this directly impacts the volume count on the FlashArray.  For example, a vVol VM with 5 VMDKs (Data vVols) will create 5 new volumes on the FlashArray for each managed snapshot.  If 3 managed snapshots are taken, this VM has a volume count on the FlashArray of 21 volumes while powered off (1 Config vVol, 5 Data vVols and 15 snapshot copies) and 22 volumes while powered on (1 additional Swap vVol).
  • Managed Snapshots only trigger point-in-time snapshots of the Data vVols and not the Config vVol.  In the event that the VM is deleted and a recovery of the VM is desired, the recovery will have to be done manually from a pgroup snapshot.
  • The process of VMware taking a managed snapshot is fairly serialized; specifically, the snapshotVirtualVolume operations are serialized.  This means that if a VM has 3 VMDKs (Data vVols), the snapshotVirtualVolume request will be issued for one VMDK, and only after it completes will the next VMDK have the operation issued against it.  The more VMDKs a VM has, the longer the managed snapshot will take to complete.  This can increase the stun time for that VM.
    • VMware has committed to improving the performance of these calls from vSphere.  In vSphere 7.0 U3 they updated snapshotVirtualVolume to use the max batch size advertised by VASA to issue snapshotVirtualVolume calls with multiple Data vVols.  Multiple snapshotVirtualVolume calls for the same VM will now also be issued close to the same time in the event that the number of virtual disks is greater than the max batch size.
  • Recommendation:  Plan accordingly when setting up managed snapshots (scheduled or manual) and configuring backup software which leverages managed snapshots for incremental backups.  The size and number of Data vVols per VM can impact how long the snapshot virtual volume operation takes and how long the stun time can be for the VM.
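
As a rough way to see the -snap copies described above on the array, the volume list can be filtered over SSH.  This is a minimal sketch that assumes SSH access to the FlashArray CLI; the array name is a placeholder and the pattern will also match any other volume names that happen to contain -snap.

# List volumes whose names contain -snap (managed snapshot copies) #
ssh pureuser@flasharray-1.example.com purevol list | grep -- '-snap'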

Storage Policy Based Management (SPBM)

There are a few aspects of utilizing Storage Policies with vVols and the FlashArray to keep in mind when managing your vSphere Environment.

  • Storage Policies can be compatible with one or multiple replication groups (FlashArray protection groups); a sketch of creating a matching protection group follows this list.
    • While storage policies can be compatible with multiple replication groups, when applying the policy to a VM, multiple replication groups should not be used.  The VM should be part of a single consistency group.
  • SPBM failover workflow APIs are run against the replication group and not the storage policy itself.
  • Recommendation: Attempt to keep replication groups under 100 VMs.  This helps with the VASA operations issued against the policies and replication groups and the time it takes to return those queries.
    • This includes both snapshot and replication enabled protection groups.  These VASA operations, such as queryReplicationGroup, will look up all objects in both local replication and snapshot pgroups, as well as target protection groups.  The more protection groups there are, and the more objects in those protection groups, the longer these queries will take.  Please see vVols Deep Dive: Lifecycle of a VASA Operation for more information.
  • Recommendation: Do not change the default storage policy of the vVols Datastore.  Doing so could cause issues in the vSphere UI when provisioning to the vVols Datastore.
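
Because SPBM replication groups map to FlashArray protection groups, a protection group for a policy to match against can be created from the FlashArray CLI.  This is a hedged sketch: the group and target names are placeholders, and the flags for snapshot/replication schedules vary by Purity release, so confirm them with purepgroup --help or configure the schedule in the GUI.

# Create a protection group that replicates to a target array (names are examples) #
purepgroup create --targetlist target-array-1 vVols-Replication-pgroup-1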

FlashArray SafeMode with vVols

For FlashArrays with SafeMode enabled, additional considerations and planning will be required for the best experience.  Because the management of storage is done through VASA, the VASA service will frequently create new volumes, destroy volumes, eradicate volumes, place volumes in FlashArray protection groups, remove volumes from FlashArray protection groups and disable snapshot/replication schedules.

For more detailed information on SafeMode with vVols see the User Guide.  Here is a quick summary of recommendations when running vVols with SafeMode enabled on the FlashArray.

  • Any FlashArray using vVols should be running Purity 6.1.8 or higher before enabling SafeMode.
  • A vSphere environment running 7.0 U1 or higher is ideal in order to leverage the allocated bitmap hint that is part of VASA 3.5.
  • Object count, object count, object count.  Seriously, the biggest impact that enabling SafeMode will have is on object count.  Customers that want to enable SafeMode must plan to continuously monitor the object counts for volumes, volume groups, volume snapshots and pgroup snapshots.  Do not just monitor current object counts but pending-eradication object counts as well (a monitoring sketch follows this list).
  • Auto-RG should not be used when assigning replication groups to a VM through SPBM.
  • Once a VM has a storage policy replication group assigned, VASA will be unable to assign a different replication group.  Plan that once a storage policy and replication group are assigned, the vSphere admin will be unable to change them with SafeMode enabled.
  • Failover replication group workflows will not be able to disable replication group schedules, nor will cleanup workflows be able to eradicate objects.  Users must plan for higher object counts after any tests or failover workflows.
  • Environments that frequently power VMs on/off or vMotion them between hosts will have higher numbers of Swap vVols pending eradication.  Should the eradication timer be changed to longer than 24 hours, then they will be pending eradication for a longer time.  Storage and vSphere admins will have to plan around higher object counts in these environments.
    • In some cases, vSphere admins may want to configure a VMFS Datastore that is shared between all hosts to be the target for VM swap.
  • When changed block tracking (CBT) is enabled for the first time, this will increase the number of volume snapshots pending eradication.  Backup workflows that periodically refresh CBT (disable and re-enable CBT) will increase the number of these volume snapshots that are created.  Pure does not recommend frequently refreshing CBT.  Once enabled, CBT should not normally need to be refreshed.
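
As a sketch of the pending-eradication monitoring mentioned above, destroyed volumes can be listed from the FlashArray CLI over SSH.  Flag names can differ between Purity releases, so verify them with purevol list --help; the array name is a placeholder.

# Count volumes currently pending eradication (output includes a header line) #
ssh pureuser@flasharray-1.example.com purevol list --pending-only | wc -l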

Introduction to Virtual Volumes

Traditional storage provisioning of VMware-based virtual machines was done via a datastore mechanism.
The process was typically as follows:

  1. VMware administrator requests storage
  2. Storage administrator creates a “LUN” and provisions it to the ESXi environment via SAN protocol, such as iSCSI or Fibre Channel.
  3. VMware administrator rescans the SCSI bus of the ESXi host(s), identifies the device, and then formats it with the Virtual Machine File System (VMFS).
  4. A virtual machine is then created with various virtual disks. Each virtual disk was a file on that datastore. These virtual disks were then presented as block devices back up to the virtual machine.

While this process could be automated via plugins and the like, it still presented a variety of problems. First off, every time additional capacity was needed, this process was required to be followed. Also, if a virtual machine needed a certain array feature (replication for instance), how was that achieved? Array based replication was at the datastore level, so enabling a feature on that datastore affected all of the other virtual machines on that datastore (for better or for worse). Furthermore, how could the VMware administrator be sure that feature was, at any point in the future, still configured properly or even enabled?

There were not a lot of great answers to these questions.

Enter VMware vSphere Virtual Volumes (henceforth referred to as vVols).

vVols solve these problems. At a high level, vVols offer the following benefits:

  • Virtual Disk granularity on the array:
    Each virtual disk is a physical volume on the array.
  • Automatic Provisioning:
    When a new virtual disk is requested for a VM, VMware automatically has the array create a corresponding volume and present it to that VM. A 100 GB virtual disk means a 100 GB volume on the array. When that virtual disk is resized, so is the array volume. When the virtual disk is deleted, so is the array volume.
  • VM-insights on the array:
    Since the array now sees each virtual disk, it can report on that granularity.  The array also understands the virtual machine object, so an array can now manage and report on a VM itself or its individual virtual disks.
  • Storage Policy Based Management:
    Since the array now has virtual disk granularity, features like array snapshots or array-based replication can be provided at the exact granularity needed. With vVols, VMware can communicate to the array to find out what features it supports and allow the VMware administrator to assign, change, or remove functionality on a vVol on demand and via policies. If a storage administrator overrides a configured feature on a vVol, the VMware administrator is alerted because the VM is marked as non-compliant with its assigned policy.

Configuring the vSphere Client Plugin

While the FlashArray Plugin for the vSphere Web Client is not required for vVols on the FlashArray, it does help streamline some processes that would otherwise require coordinated use of multiple GUIs or scripting.  The vSphere Client plugin now uses the remote plugin architecture.  Please refer to the remote plugin deployment user guide for further information, but here are the quick notes below.


Plugin Download Location 

The Pure VMware Appliance is an OVA that can be downloaded from HERE. Please note you will need to download the OVA for both online and offline deployments.


Online Deployment of the Remote Plugin

The OVA must have access to Pure1 for appliance installations, application upgrades, queries and tasks with puresw. Access to deb.cloud-support.purestorage.com via port 443 is required.

Additionally, the appliance will need access to each vCenter it will be registered with on ports 8443 and 443.  The FlashArrays that will be used with the plugin will need to be accessible via port 443 to the appliance.

For offline deployments that are unable to access deb.cloud-support.purestorage.com, please see the Offline Deployment Procedure section of this KB.

Deployment of the Pure Storage Appliance is very similar to a typical OVA deployment.  We outline the steps below.

  1. To start, right-click on the cluster you wish to deploy the OVA to, and then select Deploy OVF Template...

    install1.png

  2. In the Deploy OVF template wizard, either provide the URL for the OVA or if it has been downloaded locally, select Local File, then click on Upload Files and choose the OVA file from your local hard drive.

    install2.png

  3. Click Next when the OVA file has been specified.

    install3.png

  4. Optionally provide a unique Virtual machine name and select a folder for it to be deployed into.  Click on Next when these selections have been made.

    install4.png

  5. Pick the ESXi cluster or Host where you want to deploy the OVA.  Click on Next once the selection has been made.

    install5.png

  6. Confirm the details selected thus far and then click on Next.

    install6.png

  7. Read the licensing agreement, click the checkbox to accept the licensing terms, and click Next.

    install7.png

  8. Pick a storage device to install the OVA template to.  Optionally change the virtual disk format and/or select a VM storage policy and then click on Next.

    install8.png

  9. Pick the network you wish to use for the appliance.  Note that it must be routable to the vCenter management network.  Click on Next to continue.

    install9.png

  10. In the OVA customization template, at the top of the screen first pick the vSphere Remote Client Plugin option from the Appliance Type list.

    Reminder: While the OVA supports both the VM Analytics and Remote Plugin, the OVA can only support one integration at a time.  When multiple integrations are needed, more than one OVA will need to be deployed.

    If your environment does not have access to Pure1 during the OVA deployment please see the Offline Deployment Procedure for this step.

    install10.png

  11. If you want to use DHCP, check the DHCP checkbox and skip step 11a; otherwise, un-check the DHCP checkbox and follow step 11a below.

    The remote vSphere plugin runs on Pure Storage's OVA as a Docker container. Docker has its own bridge network configuration, with the default being the 172.17.0.0/16 subnet with 172.17.0.1 as the gateway. If your internal network configuration overlaps with this default, you must change the Docker bridge network via the field below (Docker IP Range) to avoid IP address conflicts.

    install11.png

    1. If a static IP address is to be associated with the Pure VMware Appliance, fill out the relevant networking information, including IP address, netmask, gateway and DNS server(s).  Optionally specify a custom hostname, and if a proxy is being used, supply the URL, port and login information for the proxy.  Click on Next when finished filling out the required custom template fields.
    install12.png

  12. Review the details of the OVA deployment and then click on Finish to deploy.

    install13.png

  13. Once the OVA has finished deployment within vCenter, power it on to finish its configuration and to make it available to login to.

  14. Follow Change pureuser's Password from this KB (you must be logged in to view the link) to change the default appliance password.

  15. If your vCenter's fully qualified domain name (FQDN) contains .local, you will need to run the following command from the command line of the OVA to ensure you can add arrays properly from the plugin in vCenter:

    pureuser@purestorage-vmware-appliance:~$ puredns setattr --search {your .local domain} --nameservers {ip or FQDN of DNS server}

Configuring the vSphere Plugin and Registering with vCenter

Perform these steps after the OVA has been installed in a vCenter.

You are required to change the password for the pureuser account when you first log in. Be sure to note your new password. If pureuser cannot log in, you will have to redeploy the OVA to gain access.

  1. Open an SSH connection to the appliance using the OVA VM's DNS name or IP address displayed in vCenter.
  2. In the Pure VMware Appliance shell:
    1. On first login, you are prompted to change the pureuser password.
    2. Log back in to the appliance with the new password for the pureuser account.

Running the following command will confirm that the appliance is using the correct domain and DNS servers.  Both are required to be set for the plugin to function correctly.

$ puredns list

 If any DNS changes are required, run the following command:

$ puredns setattr --search <Domain Name> --nameservers <DNS Server(s)>

Once the appliance has been deployed and configured, then the vSphere Remote Plugin can be configured.

Use the pureplugin register command and its arguments to register the remote plugin's extensions with the vCenter Server.

Please see the following section for more detailed information for what type of vSphere user to use when registering the plugin extensions with a vCenter Server.

$ pureplugin register --host <IP_or_FQDN_of_vCenter> --user <vSphere Account>

Optionally, the --plugin-fqdn <IP address or FQDN> argument can be appended to the above command line for instances where the plugin does not have external internet access.

A remote plugin OVA instance may be registered against a single vCenter instance or a set of vCenters that are in linked-mode.  For the linked-mode scenario, the plugin must be registered against every vCenter instance that is linked.  Non-linked vCenter instances each require their own Pure Storage VMware appliance.  

A successful registration will appear within vCenter soon after.

plugin-deploy1.png

plugin-deploy2.png

Make sure that the Pure VMware appliance OVA remains powered on after the plugin has been registered to vCenter as it actively communicates with vCenter and stores relevant configuration information.

For environments where there are vCenter instances in linked-mode, repeat the pureplugin registration process for each unique vCenter IP address or FQDN.

Below is an example of two vCenter instances in linked-mode that have each been registered with the pureplugin command with their IP addresses:

$ pureplugin status
Plugin   Status   Version  Registrations
vSphere  running  5.0.0    10.21.143.120
                           10.21.143.150

It is important to note that multiple vCenter registrations against the same appliance instance are only applicable to vCenters in linked-mode.  Non-linked vCenter instances will each require their own separate VMware appliance instance to be registered against.


Adding a FlashArray Connection with the vSphere Plugin

To add a single FlashArray, login to the vSphere Client and click on the Menu drop-down and choose Pure Storage.

clipboard_ed786352f108ec40c9f36c5fec08def78.png

Click on the +Add button shown under the Pure Storage icon.

clipboard_ef0071cd2399fe166fefa7242461ae62e.png

Choose Add a Single Array:

clipboard_e5a0cee3578e36b9b558375dd86d6aaaa.png

Enter in:

  • Array name. This does not have to be the actual FlashArray's domain name, but that is recommended. This name is not verified, but should be descriptive either way.
  • Array URL. An IP address or fully-qualified domain name representing a FlashArray virtual address. FQDN is always preferred.
  • Username. A username of either a local user or a directory-attached user.
  • Password. The corresponding password of the selected user.

clipboard_ed3d40e3eaf2e8bd0856aa3d7f1e319d2.png

The virtual address can be verified from the array on Settings > Network > Subnets & Interfaces:

clipboard_e6eef0325e4b565dc0ceada35da8292db.png

FQDN can be verified with nslookup or similar tools:

clipboard_ebdf84a1db6911ac71a92702c744d44f4.png
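
For example, a quick lookup from any workstation with DNS access (the hostname below is a placeholder for your array's virtual interface):

$ nslookup flasharray-1.example.com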


Now that the vSphere Plugin is installed and the FlashArray(s) have been registered, the next step is registering the VASA Provider.


Registering the FlashArray VASA Provider

The quickest method for registering the FlashArray VASA Provider is through the use of the FlashArray Plugin for the vSphere Web Client. It should be noted that this plugin is NOT required to be installed to use vVols with the FlashArray—though it does help streamline some processes such as this.

Pure Storage recommends using a local array admin to register the storage provider.  A local array admin user can be created starting in Purity 5.1 and higher.  This process is outlined here.


Registering the VASA Providers with the Pure Storage vSphere Plugin

  1. A FlashArray will need to be added/registered in the Plugin in order to register the Storage Provider for a given FlashArray.  Once the FlashArray is registered, navigate to the main Plugin page, select the FlashArray and then click on "Register Storage Provider".
    vvols-plugin-kb-01-registering-sp-1.png
  2. The recommended practice is to use a local FlashArray array admin user to register the storage providers.  In the example below, there is a local array admin named "vvols-admin" that the Storage Providers will be registered with.  In the event that the vCenter is in Enhanced Linked Mode, the option to choose which vCenter to register the storage providers with will be given.
    Registering the Storage Provider with a Single vCenter
    vvols-plugin-kb-01-registering-sp-2.png
    Registering the Storage Provider with a vCenter in Linked Mode
    vvols-plugin-kb-01-registering-sp-4.png
  3. Once the Storage Provider is successfully registered, navigate to the vCenter Server page, then Config and the Storage Providers tab.  Confirm that the storage providers are online and healthy.
    vvols-plugin-kb-01-registering-sp-3.png

The FlashArray will log all subsequent vVol operations from those vCenters under the user used to register the storage providers.

Mounting the FlashArray vVol Datastore

Once VASA has been registered, the FlashArray Plugin for the vSphere Web Client can automate the process to connect a PE to a cluster and also mount the vVol datastore. 

Mounting the vVol Datastore with the Pure Storage vSphere Plugin

The ESXi hosts will need to have been added to the FlashArray and best practice is to correlate the ESXi cluster to a FlashArray Host Group. Then each ESXi host that is in that Cluster should be added to the FlashArray Host Group.

  1. Right-click on the ESXi Cluster that you want to create and mount the vVol Datastore on.  Go to the Pure Storage option and then click on Create Datastore.
    vvols-plugin-kb-02-mounting-vvol-ds-1.png
  2. Choose to create a vVol FlashArray Storage Container (vVol Datastore).
    vvols-plugin-kb-02-mounting-vvol-ds-2.png
  3. Choose a name for the vVol Datastore
    vvols-plugin-kb-02-mounting-vvol-ds-3.png
  4. Select the ESXi Cluster that will be the compute resource to mount the vVol Datastore to.  Best practice for vVols is to mount the vVol Datastore to the host group and not to individual ESXi hosts.  Why is this important?  During this step, the Plugin will check that the Host Group on the FlashArray is connected to a Protocol Endpoint.  In the event that there is no connection, the Plugin will automatically connect the Protocol Endpoint on that FlashArray to the Host Group.
    vvols-plugin-kb-02-mounting-vvol-ds-4.png
  5. Confirm the FlashArray that the vVol Datastore will be created for.

    vvols-plugin-kb-02-mounting-vvol-ds-5.png
  6. Review the information and finish the workflow.
    vvols-plugin-kb-02-mounting-vvol-ds-6.png
  7. From the Datastore page, click on the newly created vVol Datastore and then check the Connectivity with the Hosts in the ESXi Cluster to ensure that they are connected and healthy.  An esxcli spot-check is shown after this list.
    vvols-plugin-kb-02-mounting-vvol-ds-7.png
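
If you prefer to verify from the host side, the protocol endpoint and the vVol storage container can also be listed from an ESXi shell.  This is a minimal spot-check; output details vary by ESXi release.

$ esxcli storage vvol protocolendpoint list
$ esxcli storage vvol storagecontainer list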

NVMe-vVols

Pure Storage shipped support for vVols with NVMe-oF in the Purity//FA 6.6.2 release.  Please see the NVMe-vVols Implementation Guide for more detail.  However, here is a quick view of getting started with NVMe-vVols.

NVMe-vVols on the Pure Storage FlashArray is currently being certified with VMware.  Certification for vVols with vSphere 8.0 is broken into two parts: the development and enablement phase, and the final build certification phase.  The engineering, development and enablement phase of the certification has been completed, but a VCG (VMware Compatibility Guide) listing is pending the completion of the final certification process.  Pure Storage fully supports NVMe-vVols while that listing is pending.

This is the workflow for getting started with NVMe-vVols on a FlashArray running Purity//FA 6.6.2 or later.  The process will require the following:

  1. ESXi hosts and vCenter Server running 8.0 U1 or later
  2. FlashArray running Purity//FA 6.6.2 or later
  3. ESXi hosts configured to use NVMe-oF with TCP or FC
  4. FlashArray configured to use NVMe-oF with TCP or FC
  5. The unique ESXi host's NQN for vVols
  6. Create host object on FlashArray with the unique NQN
  7. Create a new Pod on the FlashArray
  8. Create a new Protocol Endpoint in the new Pod
  9. Connect the Protocol Endpoint to the host objects with the unique NQN
  10. Rescan Storage Providers in vCenter
  11. Create vVol Datastore with the NVMe Storage Container

Creating NVMe-vVols Hosts on the FlashArray

One aspect of using NVMe-vVols is that there is a unique NQN that the ESXi host will use when connecting to the storage array to leverage NVMe-vVols.  At the time of writing this KB, the only way to get the ESXi host's vVol NQN is with esxcli.  There are two ways to get the unique NQN: one for vSphere 8.0 U1 and another for vSphere 8.0 U2 and later.  Once the NQNs are retrieved, create a new host object on the FlashArray and assign the vVol NQN to the host object.

Using esxcli to list the host vVol NQN

vSphere 8.0 U1 and later
[root@esxi-1:~] /usr/bin/localcli --plugin-dir /usr/lib/vmware/esxcli/int storage internal vvol vasanvmecontext get
VasaNvmeContext:
   Host ID: 52e3d127-0a3d-7217-90f0-7a8201e9a93e
   Host NQN: nqn.2021-01.com.vmware:62e2d72d-8c06-16ca-abf7-0025b5b10b99-vvol-7a8201e9a93e
vSphere 8.0 U2 and later
[root@esxi-1:~] esxcli storage vvol nvme info get
   Host ID: 52e3d127-0a3d-7217-90f0-7a8201e9a93e
   Host NQN: nqn.2021-01.com.vmware:62e2d72d-8c06-16ca-abf7-0025b5b10b99-vvol-7a8201e9a93e

Create FlashArray Host Object with unique NQN

You can create these host objects with the CLI or with the GUI.

With the FA CLI
purehost create --personality esxi --nqnlist nqn.2021-01.com.vmware:62e2d72d-8c06-16ca-abf7-0025b5b10b99-vvol-7a8201e9a93e ESXi-1-nvme-vvols

purehost create --personality esxi --nqnlist nqn.2021-01.com.vmware:5caf7451-fd14-c51c-e2f0-0025b521005d-vvol-155830bed346 ESXi-2-nvme-vvols
With the FA GUI
Screen Shot 2024-01-18 at 1.09.38 PM.png
Screen Shot 2024-01-18 at 1.10.28 PM.png

Create FlashArray Host Group Object with NVMe-vVols Hosts

In the event that there is an ESXi cluster that needs to map directly to the ESXi hosts using NVMe-vVols, create a Host Group for those hosts.

With the FA CLI
purehgroup create --hostlist ESXi-1-nvme-vvols,ESXi-2-nvme-vvols NVMe-vVols-Host-Group-FC
With the FA GUI
Screen Shot 2024-01-18 at 1.17.07 PM.png
Screen Shot 2024-01-18 at 1.19.13 PM.png

Creating NVMe-vVols Storage Containers on the FlashArray

To create an NVMe-vVols capable storage container on the FlashArray, a new pod will need to be created.  The Protocol Endpoint created within that pod will then need to be connected to the host group containing the hosts with the unique NVMe-vVols NQNs.

With the FA CLI

When creating the new pod in the CLI, you can also set the quota for the pod.  This will allow the user to set the specific size for the new storage container.

# Create the new Pod #
purepod create --quota-limit 500 TB FA-NVMe-vVols-SC-01

# Create the new Pod's Protocol Endpoint #
purevol create --protocol-endpoint FA-NVMe-vVols-SC-01::pure-protocol-endpoint

# Connect the Protocol Endpoint to the NVMe-vVols Host Group #
purevol connect --hgroup NVMe-vVols-Host-Group-FC FA-NVMe-vVols-SC-01::pure-protocol-endpoint

With the FA GUI

When creating the new pod in the FA GUI you do not have the ability to set the quota on creation.

Create a new pod; the name of the pod will be the name of the new storage container
create-pod-1.png
Click on Volumes:Options and select "Show Protocol Endpoints"
create-pe-2.png
Click on "Create Protocol Endpoint"
create-pe-3.png
Create the Protocol Endpoint
create-pe-4.png
Click on the new Protocol Endpoint
after-pe-created.png
Connect the PE to the NVMe-vVols Host Group
connect-pe-to-hostgroup.png

Creating the NVMe-vVols Datastores in vSphere

When creating the vVol Datastore in vSphere you will need to first re-sync the storage providers.  Then you can go through the normal process of creating a vVol datastore.  The important part is selecting the correct storage container which will match the name that was given to the new pod that was created on the array.

After connecting the Pod::PE to the NVMe-vVols hosts, in vCenter synchronize storage providers
resync sp.png
On the ESXi Cluster, right click on the cluster and click on New Datastore
storage - new datastore.png
Select vVol
new datastore 1.png
Select the storage container that matches the Pod Name
new datastore 2.png
Select the hosts in the Cluster
new datastore 3.png
And finish the vVol Datastore creation wizard
new datastore 4.png
 
Here is a quick look at the virtual Protocol Endpoint.  Notice that the LUN is 0 and the size is 1.00 GB.
vPE Listing.png

Differences in vSphere 8.0 U2 and vSphere 8.0 U1 GUI Views

There is a small difference between the vSphere 8.0 U1 GUI and the 8.0 U2 GUI in the Host view and Datastore view.  You'll notice that there isn't a specific spot for the NVMe vPE in 8.0 U1, but in 8.0 U2 there is more detail and listing provided.

vSphere 8.0 U2 Host Protocol Endpoint View
80u2-host-view.png
vSphere 8.0 U1 Host Protocol Endpoint View (notice that there are no NVMe PEs)
80u1-host-view.png
vSphere 8.0 U2 Datastore Configure View
80u2-ds-view.png
vSphere 8.0 U1 Datastore Configure View
80u1-ds-view.png

Overall vSphere 8.0 U2 has a lot more quality of life updates in the GUI and CLI for NVMe vVols.

Creating VM Storage Policies

A quick option for the creation of VM storage policies is to use the FlashArray Plugin for the vSphere Client. The 4.1.0 release of the plugin offers the ability to import one or more FlashArray Protection Groups and create respective storage policies in vCenter. 


Importing FlashArray Protection Groups as SPBM Policies with the Pure Storage vSphere Plugin

  • From the main plugin page, select the FlashArray to import the protection group settings and click on "Import Protection Groups"
    vvols-plugin-kb-03-importing-pgroup-1.png
  • The screen that shows up next will list the FlashArray protection groups.  In the parentheses, the schedule and capabilities of the protection group will be listed.  In the event that a Storage Policy in vCenter already matches the FlashArray pgroup schedule, the option to select that pgroup will be grayed out. Select the policy or policies and click Import.
    vvols-plugin-kb-03-importing-pgroup-2.png
  • Navigate to "Policies and Profiles" and click on the VM Storage Policies tab.  From here you will see that the Storage Policies have been created.  The naming schema for these policies will be [FlashArray] [either Snapshot or Replication] [Schedule Interval].  Below there is a Replication and Snapshot policy shown.
    vvols-plugin-kb-03-importing-pgroup-3.png

Policies can also be manually created or changed using the following FlashArray capabilities:

Capability Name                    Value
Pure Storage FlashArray            Yes or No
FlashArray Group                   Name of one or more FlashArrays
QoS Support                        Yes or No
Consistency Group Name             A FlashArray protection group name
Local Snapshot Policy Capable      Yes or No
Local Snapshot Interval            A time interval in seconds, minutes, hours, days, weeks, months or years
Local Snapshot Retention           A time interval in seconds, minutes, hours, days, weeks, months or years
Replication Capable                Yes or No
Replication Interval               A time interval in seconds, minutes, hours, days, weeks, months or years
Replication Retention              A time interval in seconds, minutes, hours, days, weeks, months or years
Minimum Replication Concurrency    Number of target FlashArrays to replicate to at once
Target Sites                       Names of specific FlashArrays desired as replication targets

Moving a VM to vVols

A virtual machine can easily and non-disruptively be migrated from NFS or VMFS to Virtual Volumes via a Storage vMotion operation. This converts the virtual disks into separate volumes on the array.

  • Click on a virtual machine in the vSphere Web Client and choose Migrate.
    migrate.png
  • Then choose “Change storage only”.
    stprageponmly.png
  • Choose a VM storage policy if desired, or just choose a vVol datastore.
    choosepolicy.png
  • If you chose a policy, VMware will filter out non-compatible datastores. If the policy has replication or snapshot capabilities, you will need to choose a replication group (which is a FlashArray protection group).
    chooserg.png
  • Click Next and then Finish to have VMware convert the VM online to vVols and apply any selected policy configuration.

Virtual Volume Reporting

The Virtual Volume architecture not only gives VMware insight into the FlashArray, but it also gives the FlashArray insight into VMware. The granularity provided by Virtual Volumes gives the FlashArray the ability to understand the virtual machine object (volume group) and its various virtual disks (volumes).

Data Reduction Reporting

  • As noted in previous sections, a VM is represented on the FlashArray as a volume group. By clicking on the Storage pane and then the Volumes tab, you can see the volume groups. Click on the one that represents your virtual machine. 
  • The top panel of the volume group shows averaged or aggregate information for your virtual machine. If you click on the Space button in the Volumes box, the space stats will be displayed for the individual vVols as well. 
  • To see historical information, click on the Analysis pane and choose Capacity and then the Volumes tab. 
  • To look at VMs (volume groups) or vVols (volumes) click on the drop down and choose the appropriate object type.  
  • Once selected, look through the objects and select the VM or vVol of your choosing. Alternatively, type the name of the VM into the search box and the listing will be filtered automatically.  Up to 5 volumes or 5 volume groups can be selected in the GUI.
    volume-capacity.png
    volume-group-capacity.png

Performance Reporting

  • VM and vVol performance can also be reported on. By clicking on the Analysis section and the Performance sub-section, details like IOPS, latency, and throughput can be viewed. Click on the Volumes tab to find the various VMs (volume groups) or vVols (volumes).
  • To see a report on a VM, choose volume groups in the drop-down. To see specific vVols, choose volumes.

The Analysis tab will break out the performance stats (IOPS, throughput, and latency) into different charts, which can be split further into reads or writes. For VM latency, the value is averaged across all volumes in that VM. For throughput and IOPS, it is cumulative across the volumes. If a specific volume is selected, the stats are for just that volume.

volume-perf.png
volume-group-perf.png

Creating a Snapshot

While a benefit of virtual volumes is that you can go to the array GUI/REST/CLI and perform per-VM or per-virtual disk operations, a primary advantage is that this can be done from within vCenter natively.  

When you have a virtual machine that consists of vVols, you can use the vSphere Web Client (or any VMware management tool) to create array-based snapshots. 

  • For instance, the process to create a snapshot with the vSphere Web Client is as follows. Navigate to the Host & Clusters view and identify the target virtual machine. Then right-click on the VM and choose Snapshots and then click Take Snapshot
    VM-Snapshot-01.png
  • This will bring up the snapshot creation panel. In here, you can choose a name and one or neither of the following options.
    VM-Snapshot-02.png
    • Snapshot the virtual machine’s memory
      When you create a memory snapshot, the snapshot captures the state of the virtual machine's memory and the virtual machine power settings. When you capture the virtual machine's memory state, the snapshot operation takes longer to complete. You might also see a momentary lapse in VM response over the network (a ping or so).
    • Quiesce the guest file system
      When you quiesce a virtual machine during a snapshot, VMware Tools quiesces the file system in the virtual machine. The quiesce operation pauses or alters the state of running processes on the virtual machine, especially processes that might modify information stored on the disk during a restore operation. This does require VMware Tools to be installed inside of the VM. 
  • Purity//FA 5.1.3+ -- Snapshots will be a copy of the volumes for the VM that you are taking a managed snapshot of.
    • Here is a look at the volume group and the config, data and data snapshot volumes.
      VM-Snapshot-03.png
    • Here is the Data volume for the running VM; you'll see it's connected to the host.
      VM-Snapshot-04.png
    • Then here is the snapshot volume of the current data volume.  Notice that it is not connected to any hosts and it is not a Pure snapshot, but its own volume.
      VM-Snapshot-05.png
  • Pre Purity//FA 5.1.3 -- If you look at one or all of the data vVols on the FlashArray, you will now see their respective snapshots:
  • The snapshot will also appear inside of VMware interfaces where it can be fully managed.
    VM-Snapshot-07.png

Additional vVols Features of the FlashArray vSphere Web Client Plugin

The FlashArray Plugin for the vSphere Web Client 3.0 introduces a few value-add snapshot and recovery features that are not otherwise built-in to the vSphere Web Client.  


Viewing VM vVol details

When a FlashArray is registered with the vSphere Plugin there will be details reported in vCenter for vVols based Virtual Machines that are stored on that FlashArray.  

Viewing the Virtual Machine vVol Details with the Pure Storage vSphere Plugin (versions 5.2.0 or higher)
  1. On the VM main page view there is the undelete protection box that also has links to the capacity, performance and virtual volumes management page.
    VM-Insights-01.png
    VM View - Pure Storage Undelete Protection Status and Quick Links
  2. From the VM view, navigate to the monitor and then Pure Storage view.  Here performance and capacity can be monitored at a volume or volume group level.
    VM-Insights-02.png
    VM View - Monitor - Pure Storage - Capacity - Volume View
    VM-Insights-03.png
    VM View - Monitor - Pure Storage - Capacity - Volume Group View
    VM-Insights-04.png
    VM View - Monitor - Pure Storage - Performance - Volume View
    VM-Insights-05.png
    VM View - Monitor - Pure Storage - Performance - Volume Group View
  3. From the VM view, navigate to the configure and then Pure Storage view.  From this page there are various workflows available as well as Guest Insights that are displayed for a supported guest OS and VMware tools version.
    VM-Insights-06.png
    VM View - Configure - Pure Storage - Virtual Volumes - VM Home Select - Rename Volume
    (Volume Group Rename is only available when renaming the VM Home)
    VM-Insights-07.png
    VM View - Configure - Pure Storage - Virtual Volumes - Hard Disk Select - Guest Insights

Here is a Demo on the new VM Insights from the 5.2.0 Plugin.


 

Viewing the Virtual Machine vVol Details with the Pure Storage vSphere Plugin (versions 5.1.0 or lower)
  1. From the Virtual Machine view and Summary tab, there is a FlashArray widget box.  This will show whether or not the VM has Undelete Protection.  Undelete Protection means that there is currently a FlashArray snapshot of this VM's Config vVol.
    vvols-plugin-kb-04-VM-Details-1.png
  2. On the Virtual Machine's Configure Page, there is a Pure Storage Virtual Volumes tab.  
    vvols-plugin-kb-04-VM-Details-2.png

    The page will allow end users to run the workflows to import a virtual disk (vVol), restore a destroyed vVol or overwrite an existing vVol.
    Additionally, the page contains important information about the VM's Data vVols.  Some of the important information here is the Virtual Device (SCSI controller connection), the vVol Datastore that the vVol is on, which array the vVol is on, and the FlashArray volume group name and volume name.


Creating a FlashArray Snapshot of a vVol Disk

The Pure Storage Plugin version 4.4.0 and later for the vSphere Client has the ability to create a new snapshot of only a vVol virtual disk.

Create a Snapshot of a vVol Disk
  1. From the Virtual Machine Configure tab, navigate to the Pure Storage - Virtual Volumes pane, select the disk you would like to snapshot and click Create Snapshot.

     

     

    clipboard_ee1b1a9dde32840f3374ab7d72fcfc010.png

  2. After clicking the Create Snapshot button, a dialog appears. You can optionally enter a snapshot name, otherwise it will assign the next available numerical name for the snapshot. Click Create.

     

    clipboard_ead9911a2a356f2ae181f303ec96b1e4f.png

  3. After the workflow is complete, you can verify the snapshot by either clicking the Import Disk or the Overwrite Disk button and finding the correct disk and expanding its snapshots.

    clipboard_ee7ee1501092792ea27f6dda452961b65.png

 


Restoring a vVol from a FlashArray Snapshot

The Pure Storage vSphere plugin has the ability to recover a destroyed vVol within 24 hours of when the vVol was destroyed.  There is also an integration to overwrite an existing vVol with a previous FlashArray snapshot of the vVol.  These workflows are covered in the Demo Video here.  Click to expand the workflows below.

Restoring a Destroyed vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Restore Deleted Disk.

    When deleting a Data vVol, the FlashArray will destroy the volume and the volume will be in a Pending Eradication state for 24 hours.

    In this workflow example, the VM 405-Win-VM-2 has had the virtual disk "Hard disk 2" deleted from disk.  
    vvols-plugin-kb-05-Restoring-vvol-1.png
  2. After selecting the Restore Deleted Disk option, any Data vVols that have been destroyed and are pending eradication will be displayed.  Select the Data vVol that should be restored and click Restore to complete the workflow.
    vvols-plugin-kb-05-Restoring-vvol-2.png
  3. After the workflow is complete, the recovered vVol will be displayed in the Pure Storage Virtual Volumes tab.
    vvols-plugin-kb-05-Restoring-vvol-3.png
Rolling Back a vVol with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Overwrite Disk.
    vvols-plugin-kb-05-Restoring-vvol-4.png
  2. From this page, select the vVol based VM and the Data vVol from that VM that you want to use to overwrite the existing Data vVol.  While this can be a different vVol VM or the same vVol VM, the example shown rolls this Data vVol back to a previous snapshot.  Here Hard Disk 2 is selected and, when expanded, all snapshots for that vVol are shown.  In this case, the one selected is a snapshot from the FlashArray pgroup "vSphere-Plugin-pgroup-2" with the snapshot name "Safe-Snapshot".
    vvols-plugin-kb-05-Restoring-vvol-5.png
    In the Volume Information for the selected snapshot, we can see when the snapshot was created and the information for this vVol that will be used to Overwrite the Existing Data vVol.
    Click on Overwrite to complete the workflow. 


Creating a vVol Copy

With the Pure Storage vSphere plugin there is the ability to import a vVol from the same vVol VM or from another vVol VM.  The source can be either a FlashArray snapshot or a managed snapshot.  The workflows for importing a vVol from either a FlashArray snapshot or a managed snapshot are walked through below as well as in the Demo Video here.

Creating the Copy from a FlashArray Snapshot with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Import Disk.
    vvols-plugin-kb-06-vvol-copy-1.png
  2. From this page, select the vVol based VM and the Data vVol from that VM that you want to recover.  This can be a different vVol VM or the same vVol VM that you want to import the Data vVol to.  In this example Hard Disk 2 is selected and, when expanded, all snapshots for that vVol are shown.  In this case, the one selected is a snapshot from the FlashArray pgroup "vSphere-Plugin-pgroup-2" with the snapshot name "53".
    vvols-plugin-kb-06-vvol-copy-2.png
    In the Volume Information for the selected snapshot, we can see when the snapshot was created and the information for this vVol that will be imported.
    Click on Import to complete the workflow. 
Creating the Copy from a vSphere Managed Snapshot with the Pure Storage vSphere Plugin
  1. From the Virtual Machines Configure page, navigate to the Pure Storage - Virtual Volumes tab, select Import Disk.
    vvols-plugin-kb-06-vvol-copy-1.png
  2. Instead of using a FlashArray pgroup snapshot to import the vVol, this time a managed snapshot will be selected.  Notice the difference in the naming for the selected vVol.  There is no pgroup or snapshot name associated with it, just the volume group and Data vVol name followed by "-snap", indicating that this is a managed snapshot for this vVol.
    vvols-plugin-kb-06-vvol-copy-3.png
    The same type of information is provided in the Volume Information for Managed Snapshot or FlashArray Snapshots.
    To complete the import workflow, click on Import.
     
  3. Once the Import Workflows have completed, the new Data vVols will show up on the Virtual Volumes page.
    vvols-plugin-kb-06-vvol-copy-4.png


Recovering a Deleted VM from a FlashArray Snapshot (VM Undelete)

The Pure Storage vSphere plugin also provides a VM undelete workflow that can recover a deleted vVol VM from a FlashArray snapshot, provided a snapshot of the VM's Config vVol exists (see the Config vVol snapshot recommendation earlier in this guide).  The disk restore and overwrite workflows are covered in the sections above, and the undelete workflow is shown in the video demo below.



vSphere Client Video Demo 

Here is a Video Demo that walks through each of the steps covered in this KB.  For the undelete workflows, this example shows Purity//FA 6.1 and lower; it does not show the workflows in Purity//FA 6.2 and higher.

 


Troubleshooting

There may be instances where some ESXi hosts can mount the vVol Datastore while other hosts are unable to mount it.  Here are some resources to help troubleshoot those instances.