vSphere Plugin User Guide: VMFS Management
Prerequisites: This KB article assumes that the local or remote vSphere Plugin has been installed in vCenter, that one or more FlashArrays have been added as connections to it, and that one or more ESXi hosts or clusters have been configured for use with the plugin.
Creating a VMFS Datastore
The Pure Storage vSphere plugin provides a straightforward way to create new VMFS datastores, automating several steps compared to the native vSphere datastore creation method.
To start, right-click on the host or ESXi cluster where the VMFS datastore will be attached, select the Pure Storage menu item and choose Create Datastore. This operation is shown in the below image:
Next, select the VMFS datastore type. VMFS 6 is recommended for its automatic space reclamation support.
Provide a name and size for the VMFS datastore in the third section of the wizard. Available size units are KB, MB, GB and TB.
Select the cluster of ESXi hosts or an individual ESXi host as the Compute Resource to connect the VMFS volume to:
Choose the desired FlashArray for the VMFS datastore to be provisioned to.
When you have Pure1 Authentication enabled, you will see a recommended FlashArray to place the VMFS datastore on based upon reported Load and Capacity, as shown in this example.
Optional: Selecting the Clustered option on the Storage page will enable creating the VMFS datastore within an ActiveCluster Pod if one has been previously set up. Please see this KB article for more information on VMware and ActiveCluster.
Optional: Selecting the Continuous option on the Storage page will enable creating the VMFS datastore within an ActiveDR-enabled Pod if one has been previously set up. Please see this guide for more information on VMware and ActiveDR. This feature requires plugin 4.4.0 or later.
Optional: You can associate the VMFS volume you are creating with one or more Protection Group(s) that exist on the FlashArray. VMFS datastores can also be added to one or more Protection Group(s) at a later time. Using Protection Groups with VMFS volumes will be covered in more detail later within this guide.
Optional: QoS limits for the VMFS volume can be set via a Bandwidth and/or IOPS limit to ensure that the newly created volume does not consume more FlashArray resources than desired. Placing the VMFS volume within a Volume Group is also supported, but generally not used. For more information on Volume Groups, please visit the vVols section of the VMware Platform Guide.
The final page of the wizard shows a summary of the options selected prior to VMFS datastore creation. If the values are as intended, click Finish to create the VMFS datastore on the FlashArray and attach it to the selected host or host group.
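For administrators who script these steps, the array-side portion of what the wizard automates can be sketched with the purestorage Python SDK (the REST 1.x client). This is a minimal sketch under assumptions: the array address, API token, volume name, size, and host group name are all hypothetical placeholders, and the ESXi rescan and VMFS format steps the plugin also performs are not shown.

```python
# Minimal sketch of the array-side steps behind "Create Datastore", using the
# purestorage Python SDK. All names and values below are placeholders.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")

# 1. Create the volume that will back the VMFS datastore.
#    (Optional QoS limits, as offered by the wizard, could also be applied here.)
array.create_volume("vmfs-ds-01", "4T")

# 2. Connect the volume to the host group representing the ESXi cluster.
array.connect_hgroup("prod-cluster", "vmfs-ds-01")

array.invalidate_cookie()  # end the REST session
```

The plugin then rescans the selected hosts and formats the volume as VMFS 6, which this sketch omits.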
Resizing a VMFS Datastore
Generally speaking, VMware users fill volumes with data over time, and at some point a volume will reach maximum capacity if it is not expanded and/or UNMAP is utilized. Resizing a volume to add more capacity is a simple operation using the Pure Storage vSphere plugin.
To start, select the VMFS volume in question and right-click on it. From there, select the Pure Storage menu option and pick the Edit Datastore option.
From the spawned window, simply go to the Size field and enter the desired new size. Note that only VMFS volume expansions are supported within the plugin.
After clicking Submit, the volume expands and the hosts automatically rescan. The new capacity is added and immediately available for use.
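On the array side, the resize amounts to a single extend_volume call with the purestorage SDK. A hedged sketch (names and sizes are placeholders); note the plugin also grows the VMFS partition and rescans the hosts, which this sketch does not do.

```python
# Grow the backing volume; only expansion is supported, matching the plugin.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.extend_volume("vmfs-ds-01", "8T")  # new total size, not an increment
array.invalidate_cookie()
```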
Adding/Removing a VMFS Datastore to/from a Cluster
Accessibility to a datastore is of key importance, especially as ESXi hosts are moved around to suit evolving datacenter requirements. The vSphere plugin therefore provides the ability to mount an existing VMFS datastore to, or remove it from, compute resources.
Plugin 4.4.0 and earlier
Prior to plugin version 4.5.0, to mount a datastore to an additional host or cluster, right-click on the VMFS datastore and open the Pure Storage context menu. From there, select the Mount on Additional Hosts option.
That will spawn a wizard where you can either select one or more host(s) or an entire additional cluster to attach the VMFS volume to.
After making the appropriate selection(s), click Mount to complete.
Upon completion of this task, the datastore will now be available on the additional host(s) or cluster(s).
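A rough scripted equivalent of this mount workflow, assuming the purestorage and pyvmomi SDKs: connect the backing volume to another FlashArray host group, then rescan the ESXi hosts so the existing VMFS is discovered and mounted. All hostnames, names, and credentials below are placeholders.

```python
# Sketch of "Mount on Additional Hosts": array-side connect, then host rescans.
import ssl
import purestorage
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.connect_hgroup("dev-cluster", "vmfs-ds-01")

ctx = ssl._create_unverified_context()  # lab use; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    ss = host.configManager.storageSystem
    ss.RescanAllHba()  # discover the newly connected LUN
    ss.RescanVmfs()    # mount the existing VMFS filesystem
Disconnect(si)
```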
Plugin 4.5.0 and later
Host Connection Screen
In plugin version 4.5.0 and later, all host connectivity management for a datastore (besides provisioning an entirely new datastore) has been moved to its own screen, accessible by selecting a datastore in the left-hand inventory panel and then choosing Configure > Pure Storage > Host Connections.
Alternatively, there is a link on the summary tab of a selected datastore that redirects to the Host Connections screen:
The Host Connections screen shows the hosts/clusters that the datastore is currently connected to.
There are five columns:
- Cluster/Host: The name of the ESXi host or cluster (and, in the case of a host, the cluster it belongs to, if any). A host may be listed twice if it has two objects on the FlashArray (for instance, one FlashArray host for iSCSI and one for Fibre Channel). You can filter by the name of the host or cluster.
- Status: Information about the host connection in general, or the datastore connection status to that host. You can filter by the status of the host or cluster connection with the datastore. Possible values are:
  - Mounted (properly and fully connected to this object)
  - Warnings (there is some issue on this host or cluster, or on a host in the cluster)
  - Not Connected (the host is configured on the FlashArray but the connection is not online)
  - Not Configured (the host is not configured on the FlashArray)
  - Not Mounted (the host is configured and online with the FlashArray but the volume is not connected to it)
- Array: The name of the array(s) that host the volume. In the case of ActiveCluster this will be two arrays. You can filter by array name.
- Array Host Group or Host: The name of the FlashArray host or host group object that corresponds to that host or cluster. You can filter by the host or host group name.
- Protocol: The in-use protocol (iSCSI, Fibre Channel, or NVMe-oF) for that host. You can filter by iSCSI or Fibre Channel; NVMe filtering is forthcoming.

Understanding Status Warnings
If certain configuration problems are identified, the plugin will warn on the Host Connections screen:
To remove the filter, click RESET FILTERS in the same box. A summary of common identified issues:
- Direct mounts found. One or more hosts in the cluster have the datastore connected directly to the host instead of through the host group.
- Mounted Directly to the host. The datastore is connected directly to the host and not to the host group the host is in. While this is not specifically a dangerous situation, it is recommended to connect storage via the host group for a cluster to ensure uniform access. You may ignore this warning if the direct connection is intended.
- Multiple Host Groups found. More than one host group has been found for the cluster. This can cause provisioning to be non-uniform and should be investigated to ensure the configuration is intended.

Add Datastore to Host or Cluster
To add a datastore to a host or cluster, click on the desired cluster or host in the Host Connections panel and click the Mount Datastore button.
If the host or cluster is not configured, the window will surface a link to the Host Connections management panel to configure the host or cluster connectivity. If the host or host group is configured properly, the window will ask for confirmation to mount. The process will connect the volume to the host or host group, rescan, and then mount the datastore. If you have selected a host that is in a cluster and in the host group, the plugin will by default prompt to add the datastore to the host group instead of directly to the host. If you do want to add it just to that host, deselect the Mount via Host Group option.

Mount Datastore is Grayed out
If the Mount Datastore button is grayed out, it means one of the following:
- The datastore is presented to another host through a different protocol. VMware does not support datastores being used via more than one protocol at a time.
- The datastore is already mounted to that host.
- You have selected a cluster and the datastore is already mounted to a host directly in that cluster. To resolve this either remove the direct mount and re-add to the whole cluster (preferred), or mount it individually to each host.
Remove Datastore from Host or Cluster
If a datastore is connected at the host group (cluster) level (meaning it does not have a direct mount warning), select the cluster or a host in that cluster and click the Unmount Datastore button.
Once the window pops up, you must confirm the operation.
If you selected a cluster, you can just click Unmount to start the operation. If you selected a host in a cluster, the window will ask you to confirm that you mean to proceed, as this operation will remove access to the datastore for ALL hosts in the cluster. A datastore cannot be removed from a single host if it is provisioned at the host group level on the FlashArray, so the plugin will iterate this operation through all hosts in the host group.
To remove the datastore from a single host, the connection must be a direct mount; in this case, the host will be noted as shown below:
If so, the datastore can be removed from that single host. Select the host and click the Unmount Datastore button. Confirm the operation.
This will unmount the volume, detach the device from the host, disconnect the volume on the FlashArray from the host object, and then rescan the host storage subsystem to remove the datastore object safely.
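For reference, a rough scripted sketch of that same removal sequence, assuming the purestorage and pyvmomi SDKs. The VMFS UUID, LUN UUID, and all names are placeholders you would look up from the host's storage inventory.

```python
# Sketch of the unmount/detach/disconnect/rescan sequence described above.
import ssl
import purestorage
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    ss = host.configManager.storageSystem
    ss.UnmountVmfsVolume("<vmfs-uuid>")  # 1. unmount the datastore
    ss.DetachScsiLun("<lun-uuid>")       # 2. detach the device from the host

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.disconnect_hgroup("prod-cluster", "vmfs-ds-01")  # 3. disconnect the volume

for host in view.view:
    host.configManager.storageSystem.RescanAllHba()    # 4. rescan to clean up
Disconnect(si)
```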
VMFS Snapshot Management
Creating a Snapshot on Demand
Pure Storage snapshots are extremely space efficient, immutable, and enable multiple recovery and copy options both on and off the FlashArray. Before a snapshot can be used, however, it must be created. Within the Pure Storage vSphere Plugin there are two ways to create a VMFS snapshot: on demand, or on a schedule via a Pure Storage Protection Group. We will start by showing how to create an on-demand snapshot via the Pure Storage plugin.
To create a snapshot on-demand, there are two options:
1. Right-click on the VMFS datastore, select the Pure Storage menu item and select Create Snapshot as shown in the below example:
After the create snapshot wizard opens, simply provide a suffix for naming and uniquely identifying the snapshot.
The newly created snapshot (along with previously created snapshots) can be viewed by clicking on the datastore within vSphere and navigating to Configure > Pure Storage > Snapshots:
2. The other way to create an on-demand snapshot is accomplished within the Pure Storage datastore Configure screen by selecting the + Create Snapshot button. The below two images illustrate how to use this method:
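Either path maps to a single snapshot call on the array. A hedged sketch with the purestorage SDK, where the suffix plays the same role as the one the plugin's wizard asks for (names are placeholders):

```python
# On-demand snapshot of the volume backing a VMFS datastore.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.create_snapshot("vmfs-ds-01", suffix="before-patch")
# The resulting snapshot is named <volume>.<suffix>, e.g. vmfs-ds-01.before-patch
```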
Destroying a Snapshot
Removing a VMFS snapshot from Pure Storage is just as simple as creating one. To delete a snapshot, select the target datastore, go to Configure, select the snapshot in question and click Destroy Snapshot. By default, the snapshot will be retained on the FlashArray for 24 hours after deletion, but there is a selectable option to Eradicate the snapshot, which destroys it immediately and frees up capacity for use. The trade-off of the Eradicate operation is that the snapshot cannot be recovered during the normal 24-hour period after deletion from the FlashArray. The following images show how to destroy a snapshot with the default 24-hour retention option.
First, select Destroy Snapshot to spawn the related wizard.
Then, confirm deletion and optionally select to eradicate the snapshot rather than waiting out the normal 24-hour period during which the snapshot can be recovered from the array.
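The equivalent purestorage SDK calls, sketched with placeholder names; snapshots are addressed by their full <volume>.<suffix> name:

```python
# Destroy (and optionally eradicate) a FlashArray snapshot.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.destroy_volume("vmfs-ds-01.before-patch")    # recoverable for 24 hours
array.eradicate_volume("vmfs-ds-01.before-patch")  # optional: skip the 24-hour grace period
```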
Restore a VMFS Datastore from a Snapshot
The vSphere plugin provides the capability to automatically overwrite and resignature an existing VMFS volume from one of its FlashArray snapshots. This capability is particularly important in the event of a ransomware attack, for disaster recovery, when an operating system patch breaks one or more applications after it has been applied, or when an administrator makes a mistake and needs to recover multiple connected VMs within the same volume to their previous state. This Restore operation applies only to the current location of the VMFS volume.
In order for the Restore VMFS from a snapshot workflow to function properly, please make certain to power off all virtual machines that are running within the volume, as the overwrite operation will remove these VMs and re-add the versions captured in the snapshot to vCenter inventory.
Recommendation: VMs are stopped and unregistered, and the datastore unmounted. Reason: if the datastore is not unmounted, Windows VMs cannot be started if they were running at the time the snapshot was taken.
Summary of steps required:
- Stop the VMs.
- Unregister the VMs (remove them from inventory).
- Unmount the datastore.
- Restore the snapshot to the original volume.
- Mount the datastore.
- Register the VMs.
To access this workflow, select the Datastore, go to Configure > Pure Storage > Snapshots, highlight the snapshot you want to use and then click on the Restore to Original Datastore button. This operation is shown below:
If one or more virtual machines on the datastore you wish to restore are powered on, the Restore button will be grayed out. The wizard will display which VM(s) need to be powered off before the snapshot restore operation becomes available. The below example shows a single VM that is still powered on and thus preventing the snapshot restore.
Once all VMs that reside on the VMFS volume to be restored have been vMotioned elsewhere and/or powered off, the Restore button shows as available.
Upon clicking Restore, the snapshot will overwrite the existing volume; the resulting VMFS volume is automatically resignatured for immediate use in vSphere.
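On the array side, this restore is a snapshot-to-volume copy with overwrite. A hedged sketch with the purestorage SDK (names are placeholders; the vSphere-side resignature is handled by the plugin and not shown):

```python
# Copy the snapshot back over the live volume; overwrite=True is required
# because the destination volume already exists.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.copy_volume("vmfs-ds-01.before-patch", "vmfs-ds-01", overwrite=True)
```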
Creating a VMFS Copy from a Snapshot
Another way that Pure Storage FlashArray snapshots can be utilized via the plugin is that a snapshot can be copied into a new VMFS volume and mounted to a cluster or single ESXi host on the local FlashArray. This is particularly useful for test/dev instances (e.g. creating a copy of a VMFS datastore with recent SQL data for a developer), disaster recovery operations at a remote site, or if one or a few VMs but not all VMs that reside on a particular VMFS volume need to be restored to a previous state.
To get started, select the datastore within vSphere and go to Configure > Pure Storage > Snapshots. Next, select the desired snapshot that you wish to copy and click on the Copy to New Datastore button. This process is shown in the below image:
From there, the steps to copy the snapshot to a new VMFS volume are more or less identical to how a new VMFS datastore is created. An example of this entire process is depicted in the below sequence of images:
First, a Name is given to the copied datastore (you will not be able to resize the datastore until after this procedure has been completed).
Next, select the ESXi cluster or ESXi host you wish to connect the copied volume to:
Optionally add the volume to be copied to one or more Protection Groups:
Lastly, optionally assign the copied volume QoS limits and/or to a Volume Group.
Confirm that your selections are correct and click Finish to copy the datastore snapshot to a new volume.
Once the volume copy, mount and ESXi storage rescan operations have completed, we can see that the copied volume is available for use in vSphere.
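The array-side portion of this copy workflow, sketched with the purestorage SDK under placeholder names (the plugin's resignature, mount, and rescan steps are not shown):

```python
# Copy the snapshot to a brand-new volume and present it to the target
# cluster's host group.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.copy_volume("vmfs-ds-01.before-patch", "vmfs-ds-01-devcopy")
array.connect_hgroup("dev-cluster", "vmfs-ds-01-devcopy")
```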
Recover a VMFS VM from FlashArray snapshot
Starting in plugin version 5.2.0, users can restore VMFS-backed VMs through the plugin using array-based snapshots as the recovery point. These workflows provide a streamlined way of recovering a VMFS-backed VM from a FlashArray volume snapshot without having to log into the FlashArray for any of the operations.
There are two different paths to accomplish this task:
- Use the Configure section for the VMFS datastore to select a specific array-based snapshot at the onset of the workflow
- Because the snapshot is picked at the beginning of the workflow, this skips some steps in the wizard from path 2 below
- Right click on the VMFS datastore you want to recover the VM from and use the Recover VMs from Snapshot workflow
The steps taken by the two different workflows are ultimately the same. At a high level, they are:
- The user selects the VMFS datastore that is backed by a FlashArray volume that has FlashArray snapshots to recover a VM from.
- The user selects the snapshot to recover the VM from.
- The plugin gathers details on the FlashArray volume and snapshots that back the VMFS datastore.
- The plugin copies out the snapshot to a new temporary volume on the FlashArray.
- The plugin connects this temporary volume to the pertinent host or host group on the FlashArray.
- The plugin creates a temporary datastore on the pertinent hosts backed by the temporary FlashArray volume.
- The plugin presents the discovered .vmx files on the temporary datastore to the user.
- The user selects the .vmx file associated with the VM they would like to recover.
- The user selects a compute resource to place the VM on.
- The user modifies the name of the recovered VM.
- The plugin registers the recovered VM on the datastore that was initially selected by the user.
- The plugin unmounts and removes the temporary datastore from vSphere, then destroys and eradicates the temporary volume on the FlashArray.
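The registration step near the end of the list above can be sketched with pyvmomi. The datacenter/cluster lookups below are simplified, and the datastore path, names, and credentials are placeholders.

```python
# Register a recovered .vmx file into vCenter inventory.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
dc = si.content.rootFolder.childEntity[0]   # first datacenter
cluster = dc.hostFolder.childEntity[0]      # first cluster/compute resource
task = dc.vmFolder.RegisterVM_Task(
    path="[vmfs-ds-01] recovered-vm/recovered-vm.vmx",
    asTemplate=False,
    pool=cluster.resourcePool)              # recovered VM lands powered off
Disconnect(si)
```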
Configure Section Path
1. (1) Select the VMFS datastore you want to recover the VM from under the Storage view. Then, select (2) Configure, (3) Snapshots, (4) the array-based snapshot you want to recover from, and finally click the (5) RECOVER VMS FROM SNAPSHOT button.
2. (1) Select the virtual machine to be restored from the list of .vmx files populated and (2) click NEXT.
3. (1) Select the host that the VM should be associated with and (2) click NEXT.
4. (1) (Optional) Modify the VM's name as needed and (2) click NEXT.
5. (1) Click FINISH to complete the workflow.
The recovered VM will now appear in the Hosts and Clusters view as a powered-off VM.
Right Click on VMFS Datastore Path
For this path, things are largely the same as the Configure Section Path, except the path starts from right-clicking on the VMFS datastore you want to recover from.
1. (1) Right-click the VMFS datastore you want to recover the VM from, (2) hover over the Pure Storage option and finally (3) click Recover VMs from Snapshot.
2. (1) Select the snapshot you want to recover the VM from then (2) click NEXT.
3. Follow steps 2-5 under the Configure Section Path above.
If you would like a video demo of the workflow, please watch the following video:
Destroying a VMFS Datastore
Destroying a VMFS datastore through the Pure Storage vSphere plugin will automatically unmount it from all attached hosts, remove it and all of its snapshots from the FlashArray, and lastly perform an ESXi storage rescan. By default, the volume and all snapshots remain fully recoverable on the FlashArray for 24 hours after deletion.
Prior to destroying a VMFS datastore, make sure that all VMs residing on that volume have been deleted, Storage vMotioned to a different datastore, or powered off and removed from vSphere inventory.
To proceed with deleting the datastore, select it within vSphere, right-click and open the Pure Storage menu option. Next, select the Destroy Datastore option. This will launch a window to confirm that you wish to proceed with datastore removal. This procedure is shown in the two below images.
From the FlashArray GUI, we can see that the volume is available to be restored for 24 hours.
Viewing VMFS Datastore Details
When the Pure Storage vSphere plugin is installed, selecting a Pure Storage VMFS volume within vSphere will show several important metrics and insights into how the volume is being utilized. To see this information, select the VMFS volume of interest and go to the Summary screen.
Most of the FlashArray information shown within the Summary screen is fairly self-explanatory; however, please see the below key for more detail on the numbered items shown in the datastore Summary image. Note that some details do not appear for all datastores (Lag, for instance, applies only to ActiveDR datastores).
- Array: The FlashArray or FlashArrays (if using ActiveCluster) that the VMFS datastore resides on.
- Volume Name: The name of the volume on the FlashArray. Useful if the name in vSphere does not match the volume name on the FlashArray.
- Volume Bandwidth Limit: This field displays any optional QoS bandwidth limitations placed upon the VMFS volume when it was created.
- Volume IOPS Limit: This field shows any optional QoS IOPS limitations placed upon the VMFS volume when it was created.
- Data Reduction: The level of data reduction (deduplication + compression) that the FlashArray reports for the volume.
- Pod: If the VMFS datastore is a member of an ActiveCluster Pod, the name of the Pod is shown here for easy cross-reference to the associated underlying arrays. If this is a linked and enabled ActiveDR pod, it will show the source and target pods and the replication direction.
- Remote Array: The FlashArray the volume is being replicated to, if in an ActiveDR pair.
- Lag: If in an ActiveDR pod, this shows how far behind the target pod is (the recovery point).
- Volume Group: A VMFS datastore can optionally be added to a Volume Group when it is created. This option is generally not used, but is displayed here if it is.
- Serial#: This is the volume serial number assigned on the FlashArray.
- Snapshot Count: Number of Pure Storage FlashArray snapshots associated with the VMFS volume.
- Protection Group Count: The number of Pure Storage Protection Groups that the VMFS volume is a member of.
VMFS Capacity Metrics
Capacity metrics from FlashArray may also be viewed from the vSphere Client. To access this, simply select the Capacity button at the top of the summary screen:
Alternatively, these metrics may be accessed from the Monitor tab of the datastore when selected.
Capacity will show some more granular details about the VMFS datastore selected:
In addition to volume name, data reduction, volume size and percentage full, this screen also shows the following information:
VMFS Used: This is the VMware-reported provisioned space on the datastore. This includes the sum total of the sizes of all files and VMDKs on the VMFS datastore.
Array Host Written: This is the amount currently written to the underlying volume as seen by the array BEFORE data reduction. If this number is higher than ‘VMFS Used’, then UNMAP needs to be enabled (VMFS-6) or manually run (VMFS-5). See the last section of this guide for more information on UNMAP.
Array Unique Space: This is the amount of physical capacity that is currently unique to this volume—meaning that if this volume was deleted, this is how much would be freed up on the array. This value has little to no correlation with % used of the volume.
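These same figures can be read straight from the array with the purestorage SDK. A hedged sketch; the key names follow the REST 1.x space report and should be treated as assumptions to verify against your Purity version.

```python
# Pull the volume's space report from the FlashArray.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
space = array.get_volume("vmfs-ds-01", space=True)
print(space["size"])            # provisioned size
print(space["data_reduction"])  # deduplication + compression ratio
print(space["volumes"])         # physical space unique to this volume
```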
VMFS Performance Metrics
Real-time performance metrics from FlashArray may also be viewed from the vSphere Client. To access this, simply select the Performance button at the top of the summary screen:
If you run into errors loading the performance metrics screen, it is highly advisable to upgrade to vSphere Plugin version 4.5.0 or later, as the method used to retrieve this data has been completely rewritten, eliminating issues commonly seen in previous releases.
Alternatively, these metrics may be accessed from the Monitor tab of the datastore when selected.
The screen shows the performance metrics over time, separated into three charts: Latency, IOPS, and Bandwidth:
Some details about the chart:
- When you hover your cursor over a point in time on the chart, it will show the details for each metric type.
- The chart defaults to reads and writes, but you can deselect one or the other to show more detailed information on read or write data. Refer to the list of metrics below for details. When either reads or writes are selected, they will be shown in a specific color to indicate the chart has changed.
- If the datastore is protected by ActiveCluster (stretched storage), an additional Mirrored Write metric will be available.
- The default dataset is for the past 1 hour, but the FlashArray metric table goes back as much as a year if the volume has existed for that length of time.

The following metrics are available:
- Latency
- Read/write latency: the amount of time it takes for the FlashArray to service a read or write request. This is an internal statistic, meaning that once the FlashArray receives the request, it starts a timer, retrieves or commits the data, initiates the response, then stops the timer. This, therefore, does not include external latency caused by the network, or host-based queue or resource-induced latency.
- SAN time: This is how long the I/O is waiting in the SAN during host and array exchanges. SAN time plus read/write latency is the latency the host actually experiences. If SAN time plus read or write latency is significantly lower than a metric a host reports, there is likely a bottleneck or issue within the host itself.
- Queue time: This is how long the I/O is waiting in the FlashArray queue. If this is anything besides near zero, please reach out to Pure Storage support.
- QoS Rate Limit time: This is how long an I/O has waited in the queue due to a QoS limit being hit. Significant and sudden increases in latency can often be due to this.
- Mirrored Writes (or MW): This is the time it takes to commit a write to an ActiveCluster volume on BOTH arrays involved in replication.
- IOPS
- How many I/Os are being sent to that FlashArray volume from all hosts. This is split between reads and writes.
- Bandwidth
- The total amount of data being sent to that FlashArray volume from all hosts. This is split between reads and writes.
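A comparable sample can be pulled directly from the array via the REST 1.x "monitor" action. This is a sketch under assumptions: the action name and the field names below come from that API's performance report and should be verified against your Purity version.

```python
# Sample a volume's latest performance metrics from the FlashArray.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
perf = array.get_volume("vmfs-ds-01", action="monitor")
if isinstance(perf, list):  # some versions return a one-element list
    perf = perf[0]
print(perf["reads_per_sec"], perf["writes_per_sec"])        # IOPS
print(perf["output_per_sec"], perf["input_per_sec"])        # bandwidth, bytes/sec
print(perf["usec_per_read_op"], perf["usec_per_write_op"])  # latency, microseconds
```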
Adding/Removing a VMFS Datastore from a Protection Group
For the scope of this KB article, we can think of Protection Groups as a policy-driven construct that controls snapshot frequency, retention, replication frequency and replication target(s) for one or more VMFS datastores. A VMFS volume can be a member of one, multiple, or no Protection Groups. The Pure Storage vSphere plugin can add or remove VMFS volumes from Protection Groups either when the VMFS datastore is created (see the Creating a VMFS Datastore or Creating a VMFS Copy from a Snapshot sections) or later, from the Pure Storage plugin menu. It is important to note that Protection Groups must be created and set up at the FlashArray level before they are available for use in the plugin. Protection Group policies may be altered from the FlashArray GUI, CLI, PowerShell or via other VMware integrations available within the VMware Platform Guide.
To add an existing VMFS datastore to a Protection Group, first highlight it and right-click. Select the Pure Storage menu and pick the Update Datastore Protection option.
If SafeMode is enabled on the array or on the volume backing this datastore, and the workflow attempts to reduce protection, the workflow is expected to fail.
That will spawn the below window, which is automatically populated with the available Protection Groups on the FlashArray. ActiveCluster-enabled VMFS volumes will have the ability to select between the underlying FlashArrays' Protection Groups. Simply select the Protection Groups you wish to associate with the VMFS volume and click Update Protection.
To remove a VMFS volume from one or more Protection Groups, follow the same procedure to access the Update Datastore Protection menu, but instead deselect one or more Protection Groups and click Update Protection.
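Scripted, these membership changes map to set_pgroup calls with addvollist/remvollist in the purestorage SDK. A hedged sketch with placeholder group and volume names:

```python
# Add a volume to one Protection Group and remove it from another.
import purestorage

array = purestorage.FlashArray("flasharray.example.com", api_token="<api-token>")
array.set_pgroup("daily-replication", addvollist=["vmfs-ds-01"])  # add
array.set_pgroup("hourly-snaps", remvollist=["vmfs-ds-01"])       # remove
```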
Running/Scheduling UNMAP on a VMFS Datastore
Running UNMAP on a VMFS Datastore
UNMAP is a valuable tool for keeping VMFS datastore capacity utilization accurate on the FlashArray. As VMs, files, containers and other items are removed through vSphere, the underlying VMFS volume on the FlashArray is not automatically aware that the now-deleted space is available for use. UNMAP signals to the array that those previously written blocks are free and can be returned as usable capacity.
Fortunately, running UNMAP is easy with the vSphere plugin. UNMAP can either be run on demand (immediately) or set to run on a schedule. To run UNMAP on demand, right-click on the target VMFS datastore, select the Pure Storage menu and pick Run Space Reclamation.
That selection will open the Run Space Reclamation window. Select an ESXi host within the cluster that the VMFS datastore is attached to and click Run to start the UNMAP operation. Many factors, including how much dead space there is to clean up and how busy the FlashArray and ESXi host are, can impact how long the UNMAP job will take to run. For this reason, it is advisable to look at the performance utilization of those two systems prior to running UNMAP, and to delay if resource utilization is high. The other option, as the next section explains, is to create an UNMAP schedule that runs during periods of relatively low resource utilization.
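For scripting the same on-demand reclamation outside the plugin, a hedged pyvmomi sketch is below. It assumes the UnmapVmfsVolumeEx_Task method is available on the host's storage system for your ESXi version; `esxcli storage vmfs unmap -l <label>` is the equivalent on the ESXi command line. UUIDs, hostnames, and credentials are placeholders.

```python
# Kick off UNMAP on a VMFS volume from a chosen ESXi host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="<password>", sslContext=ctx)
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
host = view.view[0]  # the ESXi host chosen to run the reclamation
host.configManager.storageSystem.UnmapVmfsVolumeEx_Task(vmfsUuid=["<vmfs-uuid>"])
Disconnect(si)
```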
Scheduling UNMAP on a VMFS Datastore
UNMAP requires ESXi and FlashArray resources to execute. While the resource requirements are modest, it is advisable to schedule UNMAP to run during lower-utilization times in the environment, if available.
To build a schedule for running UNMAP, right-click on the target VMFS datastore and select the Pure Storage menu option. Pick the Schedule Space Reclamation option as shown in the below image:
The window that spawns has a few options that need to be set:
- Which ESXi host in the cluster will run the UNMAP job. If multiple Space Reclamation jobs will run for different VMFS datastores, it is advised to spread them across different hosts.
- Frequency of running the job.
- Day to run the job.
- At what time to run the job.
Once these fields have been filled out, click the Schedule button to activate the job. Each VMFS datastore requires its own Space Reclamation job.
To update the schedule or delete an existing job entirely, simply return to the Schedule Space Reclamation menu as shown above.
How often to run UNMAP is a common question, and the answer is: it depends. For environments like VDI where virtual machines may be created and destroyed on a regular basis (thus accumulating dead space quickly), it probably makes sense to run UNMAP on a weekly cadence. For VMFS datastores that do not experience much churn, running UNMAP monthly will likely suffice. Users can confirm the amount of dead space to be reclaimed from the VMFS Capacity metrics section covered earlier in this guide to decide which UNMAP schedule will work best for their unique environment.