
vSphere Plugin User Guide: VMFS Management

Prerequisites:  This section assumes that the HTML5 vSphere Web Client Plugin has been installed in vCenter, that one or more FlashArrays have been added as connections to it, and that one or more ESXi hosts or clusters have been configured for use with the plugin.

Creating a VMFS Datastore

The Pure Storage vSphere plugin provides a straightforward way to create a new VMFS datastore, automating several steps that the native vSphere datastore creation workflow requires you to perform manually.

To start, right-click on the host or ESXi cluster where the VMFS datastore will be attached, select the Pure Storage menu item and choose Create Datastore.  This operation is shown in the below image:

2020-07-16_10-44-06.gif

Next, select the VMFS datastore type.  VMFS 6 is recommended for its automatic space reclamation (UNMAP) support.

2020-07-16_11-12-23.gif

Provide a name and size for the VMFS datastore in the third section of the wizard.  Available size units are KB, MB, GB and TB.

2020-07-16_11-15-20.gif

Select the cluster of ESXi hosts or an individual ESXi host as the Compute Resource to connect the VMFS volume to:

2020-07-16_13-53-30.gif

Choose the desired FlashArray for the VMFS datastore to be provisioned to. 

If Pure1 Authentication is enabled, the plugin will recommend a FlashArray on which to place the VMFS datastore based upon reported Load and Capacity, as shown in this example.

2020-07-17_12-03-17.gif

Optional:  Selecting the Clustered option on the Storage page will enable creating the VMFS datastore within an ActiveCluster Pod.  Please see this KB article for more information on VMware and ActiveCluster.

2020-07-17_12-11-33.gif

Optional:  You can associate the VMFS volume you are creating with one or more Protection Groups that exist on the FlashArray.  VMFS datastores can also be added to one or more Protection Groups at a later time.  Using Protection Groups with VMFS volumes is covered in more detail later in this guide.

2020-07-17_12-13-12.gif

Optional:  QoS limits for the VMFS volume can be set via a Bandwidth and/or IOPS limit to ensure that the newly created volume does not consume more FlashArray resources than desired.  Placing the VMFS volume within a Volume Group is also supported, but generally is not used.  For more information on Volume Groups, please visit the vVols section of the VMware Platform Guide.

2020-07-17_12-15-16.gif

The final page of the wizard shows a summary of the options selected prior to VMFS datastore creation.  If the values are as intended, hit Finish to create the VMFS datastore on the FlashArray and attach it to the selected host or host group.

VMFS-Creation-summary.png
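Behind the scenes, the plugin creates the backing volume on the FlashArray, connects it to the selected host or host group, rescans storage and formats the device with VMFS.  For reference only, the array-side portion of that workflow can be approximated with the open-source purestorage Python REST client; the sketch below is illustrative (array address, API token, volume and host group names are placeholders) and is not the plugin's internal implementation.

```python
# Illustrative array-side equivalent of the plugin's Create Datastore workflow,
# using the open-source "purestorage" Python REST client. All names and
# credentials are placeholders; the vSphere-side rescan and VMFS format are
# not shown here.
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Create the backing volume for the new VMFS datastore.
array.create_volume("vmfs-datastore-01", "4T")

# Connect it to the ESXi cluster's host group so every host can see the device.
array.connect_hgroup("esxi-cluster-01", "vmfs-datastore-01")
```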


Resizing a VMFS Datastore

Generally speaking, VMware users fill volumes with data over time, and a volume will eventually reach maximum capacity if it is not expanded and/or UNMAP is not utilized.  Resizing a volume to add more capacity is a simple operation using the Pure Storage vSphere Web Client plugin.

To start, select the VMFS volume in question and right-click on it.  From there, select the Pure Storage menu option and pick the Edit Datastore option.

2020-07-17_15-00-19.gif

From the window that opens, simply go to the Size field and enter the desired new size.  Note that only VMFS volume expansions are supported within the plugin.

2020-07-17_15-01-21.gif

After the volume expansion and host storage rescan, we can see that the new capacity has been added and is immediately available for use.

resize-expansion.png
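For context, the array-side step behind the plugin's Edit Datastore resize is a simple volume extension.  A minimal sketch with the purestorage Python client follows; the volume name and new size are hypothetical, and the plugin additionally handles the host rescan and VMFS expansion in vSphere.

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Grow the backing volume; like the plugin, this supports expansion only.
array.extend_volume("vmfs-datastore-01", "8T")
```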


Adding a VMFS Datastore to Another Cluster

Accessibility to a datastore is of key importance, especially as ESXi hosts are moved around to best suit evolving datacenter requirements.  The vSphere plugin can mount an existing VMFS datastore to additional hosts or even to an additional cluster within a vSphere datacenter.

To mount an additional host or cluster, right-click on the VMFS datastore and open the Pure Storage context menu.  From there, select the Mount on Additional Hosts option.  That will open a wizard where you can select one or more hosts or an entire additional cluster to attach the VMFS volume to.  The below GIF shows the step-by-step process for accomplishing this task.

2020-07-20_13-57-57.gif

Upon completion of this task, the datastore will now be available on the additional host(s) or cluster(s).
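On the FlashArray side, this amounts to connecting the existing volume to additional hosts or host groups.  A rough sketch using the purestorage Python client follows; the host and host group names are placeholders, and the plugin also takes care of rescanning and mounting the datastore in vSphere.

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Expose the existing volume to one more host and to another cluster's host group.
array.connect_host("esxi-07", "vmfs-datastore-01")
array.connect_hgroup("esxi-cluster-02", "vmfs-datastore-01")
```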


VMFS Snapshot Management

Creating a Snapshot on Demand

Pure Storage snapshots are extremely space efficient, immutable and enable multiple recovery and copy options both on and off the FlashArray.  Before a snapshot can be used, however, it must first be created.  Within the Pure Storage vSphere Plugin there are two ways to create a VMFS snapshot:  on demand, or on a schedule via a Pure Storage Protection Group.  We will start by showing how to create an on-demand snapshot via the Pure Storage plugin.

To create a snapshot on-demand, there are two options:

1.  Right-click on the VMFS datastore, select the Pure Storage menu item and select Create Snapshot as shown in the below example GIF.

2020-07-20_12-01-41.gif

After the create snapshot wizard opens, simply provide a suffix for naming and uniquely identifying the snapshot.

2020-07-20_12-02-41.gif

 

The newly created snapshot (along with previously created snapshots) can be viewed by clicking on the datastore within vSphere and navigating to Configure > Pure Storage > Snapshots:

2020-07-20_12-07-29.gif

2.  The other way to create an on-demand snapshot is accomplished within the Pure Storage datastore Configure screen by selecting the + Create Snapshot button.  The below two images illustrate how to use this method:

2020-07-20_12-13-36.gif

2020-07-20_12-14-33.gif
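For reference, the array operation behind either method is a single volume snapshot with the chosen suffix.  A minimal sketch with the purestorage Python client, using placeholder names:

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Take an on-demand snapshot; the resulting snapshot is named
# "vmfs-datastore-01.before-patch".
array.create_snapshot("vmfs-datastore-01", suffix="before-patch")
```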

Destroying a Snapshot

Removing a VMFS snapshot from Pure Storage is just as simple an operation as creating one.  To delete a snapshot, select the target datastore, go to Configure, select the snapshot in question and click on Destroy Snapshot.  By default, the snapshot is retained for 24 hours on the FlashArray after deletion, but there is a selectable option to Eradicate the snapshot, which destroys it immediately and frees up capacity for use.  The tradeoff is that an eradicated snapshot cannot be recovered during the normal 24-hour window after deletion from the FlashArray.  The following two GIFs show how to destroy a snapshot with the default 24-hour retention option.

First, select Destroy Snapshot to spawn the related wizard.

2020-07-20_14-10-49.gif

Then, confirm deletion and optionally select to eradicate the snapshot, rather than waiting the normal 24 hour period.

2020-07-20_14-11-38.gif
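The equivalent array-side calls look roughly like the following purestorage Python client sketch (the snapshot name is a placeholder); eradication is a separate, optional step after the destroy.

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Destroy the snapshot; it remains recoverable on the array for 24 hours.
array.destroy_volume("vmfs-datastore-01.before-patch")

# Optionally eradicate it to free the capacity immediately (not recoverable).
array.eradicate_volume("vmfs-datastore-01.before-patch")
```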

Restore a VMFS Datastore from a Snapshot

The vSphere plugin provides the capability to automatically overwrite and resignature an existing VMFS volume from one of its FlashArray snapshots.  This use case is particularly important in the event of a ransomware attack, a disaster recovery scenario, an operating system patch that breaks one or more applications after it has been applied, or an administrator mistake that requires recovering multiple VMs on the same volume to their previous state.  This Restore operation applies only to the current location of the VMFS volume.

In order for the Restore VMFS from a snapshot workflow to function properly, make certain to power off all virtual machines running within the volume, as the overwrite operation will also remove these VMs and re-add the versions captured in the snapshot to the vCenter inventory.

To access this workflow, select the datastore, go to Configure > Pure Storage > Snapshots, highlight the snapshot you want to use and then click on the Restore to Original Datastore button.  This operation is shown below:

2020-07-20_15-08-45.gif

restore_vm_powered_off.png  

If one or more virtual machines on the datastore you wish to restore are powered on, the Restore button will be grayed out.  The wizard will display which VMs need to be powered off before the snapshot restoration operation becomes available.  The below example shows a single VM that is still powered on and thus preventing the snapshot restore.

restore_vm_powered_on.png

Upon clicking Restore, the snapshot overwrites the existing volume, which is then automatically resignatured and made immediately available for use in vSphere.
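Conceptually, the array-side portion of the restore is a copy of the snapshot back over its source volume; the resignature and VM re-registration are vSphere-side steps the plugin handles.  An illustrative sketch with the purestorage Python client (names are placeholders):

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Overwrite the live volume with the contents of one of its snapshots.
# overwrite=True is required because the destination volume already exists.
array.copy_volume("vmfs-datastore-01.before-patch", "vmfs-datastore-01",
                  overwrite=True)
```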

Creating a VMFS Copy from a Snapshot

Another way that Pure Storage FlashArray snapshots can be utilized via the plugin is by copying a snapshot into a new VMFS volume and mounting it to a cluster or single ESXi host on the local FlashArray.  This is particularly useful for test/dev instances (e.g. creating a copy of a VMFS datastore with recent SQL data for a developer), disaster recovery operations at a remote site, or cases where only some of the VMs that reside on a particular VMFS volume need to be restored to a previous state.

To get started, select the datastore within vSphere and go to Configure > Pure Storage > Snapshots.  Next, select the desired snapshot that you wish to copy and click on the Copy to New Datastore button.  This process is shown in the below image:

2020-07-22_13-21-59.gif

From there, the steps to copy the snapshot to a new VMFS volume are more or less identical to creating a new VMFS datastore.  First, a Name is given to the copied datastore (you will not be able to resize the datastore until after this procedure has been completed).  Next, select the ESXi cluster or ESXi host you wish to connect the copied volume to, optionally add it to one or more Protection Groups and lastly, optionally assign QoS limits to the copied volume and/or place it in a Volume Group.  An example of this entire process is depicted in the below GIF:

2020-07-22_13-42-02.gif

Once the volume copy, mount and ESXi storage rescan operations have completed, we can see that the copied volume is available for use in vSphere.

clipboard_e194d05e0839a6a7a0fb0075991b24976.png
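The array-side portion of this workflow is, roughly, copying the snapshot into a brand-new volume and connecting it to the chosen compute resource; the plugin then resignatures and mounts it as a new datastore.  An illustrative purestorage Python client sketch, with placeholder names:

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Copy the snapshot into a new volume and present it to the cluster's host group.
array.copy_volume("vmfs-datastore-01.before-patch", "vmfs-datastore-01-devcopy")
array.connect_hgroup("esxi-cluster-01", "vmfs-datastore-01-devcopy")
```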

 


Destroying a VMFS Datastore

Destroying a VMFS datastore through the Pure Storage vSphere plugin automatically unmounts it from all attached hosts, removes it and all of its snapshots from the FlashArray and lastly performs an ESXi storage rescan.  By default, the volume and all snapshots are fully recoverable from the FlashArray for 24 hours after deletion.

Prior to destroying a VMFS datastore, make sure that all VMs residing on that volume have either been deleted, Storage vMotioned to a different datastore, or powered off and removed from the vSphere inventory.

To proceed with deleting the datastore, select it within vSphere, right-click and open the Pure Storage menu option.  Next, select the Destroy Datastore option.  This will launch a window to confirm that you wish to proceed with datastore removal.  This procedure is shown in the two below images.

2020-07-22_14-21-04.gif

clipboard_e21b8a6da787a25a0eaa20d22849a97c5.png

From the FlashArray GUI, we can see that the volume is available to be restored for 24 hours.

clipboard_ed3aa57689f6888057e16ca261a6cb774.png
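On the FlashArray, the plugin's Destroy Datastore action corresponds approximately to disconnecting and destroying the backing volume, which is what makes the 24-hour recovery window shown above possible.  A sketch with the purestorage Python client (names are placeholders):

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Disconnect the volume from the host group, then destroy it.
array.disconnect_hgroup("esxi-cluster-01", "vmfs-datastore-01")
array.destroy_volume("vmfs-datastore-01")

# For 24 hours after the destroy, the volume can still be recovered:
array.recover_volume("vmfs-datastore-01")
```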


Viewing VMFS Datastore Details

When the Pure Storage vSphere plugin is installed, selecting a Pure Storage VMFS volume within vSphere will show several important metrics and insights into how the volume is being utilized.  To see this information, select the VMFS volume of interest and go to the Summary screen.

 

clipboard_e5e4892f8d992e80d48aca20bebe9bf05.png

Most of the FlashArray information shown within the Summary screen is fairly self-explanatory; however, please see the below key for more detail on the numbered items shown in the datastore Summary image.

  1. Array:  The FlashArray or FlashArrays (if using ActiveCluster) on which the VMFS datastore resides.
  2. Volume Name:  The name of the volume on the FlashArray.  Useful if the name in vSphere does not match the volume name on the FlashArray.
  3. Volume Bandwidth Limit:  This field displays any optional QoS bandwidth limit placed upon the VMFS volume when it was created.
  4. Volume IOPS Limit:  This field shows any optional QoS IOPS limit placed upon the VMFS volume when it was created.
  5. Data Reduction:  The level of data reduction (deduplication + compression) that the FlashArray reports for the volume.
  6. Pod:  If the VMFS datastore is a member of an ActiveCluster Pod, the name of the Pod is shown here for easy cross-reference to the associated underlying arrays.
  7. Volume Group:  A VMFS datastore can optionally be added to a Volume Group when it is created.  This option is generally not used, but is displayed here if it is.
  8. Serial#:  The volume serial number assigned on the FlashArray.
  9. Snapshot Count:  The number of Pure Storage FlashArray snapshots associated with the VMFS volume.
  10. Protection Group Count:  The number of Pure Storage Protection Groups that the VMFS volume is a member of.

Real-time Capacity and Performance Metrics from FlashArray may also be viewed from the vSphere Web Client.  To access those, simply select the Capacity or Performance buttons at the top of the summary screen:

clipboard_e882ebc10ebf4daed600f135b497b13c3.png

Alternatively, these metrics may be accessed from the Monitor tab of the datastore when selected.  

2020-07-22_14-48-35.gif

Capacity will show some more granular details about the VMFS datastore selected:

clipboard_ee400599d782b0713d4402e31d0b03594.png

In addition to volume name, data reduction, volume size and percentage full, this screen also shows the following information:

VMFS Used:  This is the VMware-reported provisioned space on the datastore. This includes the sum total of the sizes of all files and VMDKs on the VMFS datastore.

Array Host Written:  This is the amount currently written to the underlying volume as seen by the array BEFORE data reduction. If this number is higher than ‘VMFS Used’, then UNMAP needs to be enabled (VMFS-6) or manually run (VMFS-5).  See the last section of this guide for more information on UNMAP.

Array Unique Space:  This is the amount of physical capacity that is currently unique to this volume—meaning that if this volume was deleted, this is how much would be freed up on the array. This value has little to no correlation with % used of the volume.
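The same kind of array-side space metrics can also be queried directly with the purestorage Python client, which can be handy for scripted reporting.  The sketch below is illustrative only; field names vary slightly between REST API versions, and the volume name is a placeholder.

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Retrieve per-volume space details for the datastore's backing volume.
info = array.get_volume("vmfs-datastore-01", space=True)
print(info["size"])            # provisioned size, in bytes
print(info["data_reduction"])  # deduplication + compression ratio
print(info["volumes"])         # physical space unique to this volume, in bytes
```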

Clicking on the Performance option for the VMFS volume will display statistics of interest directly from the underlying FlashArray.  These statistics include separate windows for Latency, IOPS and Bandwidth.  Select from those tabs to see real-time array statistics.  Note that you may need to log in to the FlashArray GUI within the metric window with the pureuser credential for access.  The various performance windows all feature the capability to zoom in, or to expand the sampling window from as little as 1 hour up to 1 year.  This image shows an example of zooming in on a short timeframe and then viewing the same metric over a much larger window:

2020-07-22_15-14-57.gif

 


Adding/Removing a VMFS Datastore from a Protection Group

Protection Groups and VMFS Volumes

For the scope of this KB article, we can think of Protection Groups as a policy-driven construct that controls snapshot frequency, retention, replication frequency and replication target(s) for one or more VMFS datastores.  A VMFS volume can be a member of one or multiple Protection Groups, or of no Protection Group at all.  The Pure Storage vSphere plugin can add or remove VMFS volumes from Protection Groups either when the VMFS datastore is created (see the Creating a VMFS Datastore or Creating a VMFS Copy from a Snapshot sections) or later, from the Pure Storage plugin menu.  It is important to note that Protection Groups must be created and set up at the FlashArray level before they are available for use in the plugin.  Protection Group policies may be altered from the FlashArray GUI, CLI, PowerShell or via other VMware integrations available within the VMware Platform Guide.

To add an existing VMFS datastore to a Protection Group, first highlight it and right-click.  Select the Pure Storage menu and pick the Update Datastore Protection option.

2020-07-22_15-00-47.gif

That will open the below window, which is automatically populated with the available Protection Groups on the FlashArray.  For ActiveCluster-enabled VMFS volumes, Protection Groups from either underlying FlashArray can be selected.  Simply select the Protection Groups you wish to associate with the VMFS volume and click on Update Protection.

 

2020-07-22_15-01-36.gif

To remove a VMFS volume from one or more Protection Groups, follow the same procedure to access the Update Datastore Protection menu, but instead deselect one or more Protection Groups and click on Update Protection.

2020-07-22_15-02-11.gif
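For reference, adding a datastore's backing volume to, or removing it from, a Protection Group maps to a single array call.  A purestorage Python client sketch with placeholder names:

```python
import purestorage

array = purestorage.FlashArray("flasharray1.example.com", api_token="<api-token>")

# Add the datastore's backing volume to an existing Protection Group...
array.set_pgroup("daily-replication-pg", addvollist=["vmfs-datastore-01"])

# ...or remove it again later.
array.set_pgroup("daily-replication-pg", remvollist=["vmfs-datastore-01"])
```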


Running/Scheduling UNMAP on a VMFS Datastore

Running UNMAP on a VMFS Datastore

UNMAP is a valuable tool to keep VMFS datastore capacity utilization accurate on the FlashArray.  As VMs, files, containers and other items are removed through vSphere, the underlying VMFS volume on the FlashArray is not automatically aware that the now-deleted space is available for use.  UNMAP signals to the array that those previously written blocks are available and can be returned as free capacity for future use.
Fortunately, running UNMAP is easy with the vSphere plugin.  UNMAP can either be run on demand (immediately) or set to run on a schedule.

To run UNMAP on demand, right-click on the target VMFS datastore, select the Pure Storage menu and pick Run Space Reclamation.

2020-07-22_15-39-26.gif
That selection will open the Run Space Reclamation window.  Select an ESXi host within the cluster that the VMFS datastore is attached to and click on Run to start the UNMAP operation.

  clipboard_edc992112022d8e7be200724630dcdec8.png

Many factors, including how much dead space there is to clean up and how busy the FlashArray and ESXi host are, can impact how long the UNMAP job takes to run.  For this reason, it is advisable to look at the performance utilization of those two items prior to running UNMAP and, if possible, to delay the operation when resource utilization is high.  The other option, as the next section explains, is to create an UNMAP schedule that runs during periods of relatively low resource utilization.
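Outside the plugin, the same reclamation can also be triggered manually on an attached host with esxcli (esxcli storage vmfs unmap).  Purely as an illustration, the sketch below drives that command over SSH using the paramiko library; the host name, credentials, datastore label and reclaim-unit count are placeholders, and this is not how the plugin itself runs the job.

```python
# Hypothetical manual alternative to the plugin's Run Space Reclamation action.
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("esxi-01.example.com", username="root", password="<password>")

# Reclaim dead space on the datastore; --reclaim-unit (-n) can be tuned to
# limit how aggressively each UNMAP iteration runs.
_, stdout, _ = ssh.exec_command(
    "esxcli storage vmfs unmap -l vmfs-datastore-01 -n 200")
print(stdout.read().decode())
ssh.close()
```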

Scheduling UNMAP on a VMFS Datastore

UNMAP requires ESXi and FlashArray resources to execute.  While the resource requirements are not extreme at all, it is advisable to set a schedule to run UNMAP during lower utilization times in the environment, if available.

To build a schedule for running UNMAP, right-click on the target VMFS datastore and select the Pure Storage menu option.  Pick the Schedule Space Reclamation option as shown in the below image:

2020-07-22_15-40-21.gif 

The window that opens has the following options that need to be set:

  • Which ESXi host in the cluster will run the UNMAP job.  It is advisable to spread Space Reclamation jobs for different VMFS datastores across different hosts if multiple jobs will be running.
  • Frequency of running the job.
  • Day to run the job
  • Time to run the job

clipboard_ef7db635f42eeafd0c9b8461ea914e71d.png

Once these fields have been filled out, click on the Schedule button to activate the job.  Each VMFS datastore will require its own Space Reclamation job.

To update the schedule or delete an existing job entirely, simply return to the Schedule Space Reclamation menu as shown above.

clipboard_e23b14b58d30a4e4757e33c9a402f53ab.png

How often to run UNMAP is a common question, and the answer is:  it depends.  For environments like VDI, where virtual machines may be created and destroyed on a regular basis (thus accumulating dead space quickly), it probably makes sense to run UNMAP on a weekly cadence.  For VMFS datastores that do not experience much churn, running UNMAP monthly will likely suffice.  Users can review the amount of dead space to be reclaimed from the VMFS Capacity monitoring section covered earlier in this guide to decide which UNMAP schedule will work best for their unique environment.