vVols User Guide: vVol Datastore
vVol Datastores
vVols replace LUN-based datastores formatted with VMFS. There is no file system on a vVol datastore, nor are vVol-based virtual disks encapsulated in files.
The datastore concept does not disappear entirely, however. VMs must be provisioned somewhere, and historically they have been implemented as files in NFS mounts or on VMFS datastores. Datastores are necessary both because VM provisioning tools use them to house new VMs and because they help control storage allocation and differentiate between types of storage.
However, VMFS datastores limit flexibility, primarily because their sizes and features are specified when they are created, and it is not possible to assign different features to individual objects in them. To overcome this limitation, the vVol architecture includes a storage container object, generally referred to as a vVol datastore, with two key properties:
- Capacity limit: allows an array administrator to limit the capacity that VMware administrators can provision as vVols.
- Array capabilities: allow vCenter to determine whether an array can satisfy a configuration request for a VM.
A vVol datastore is sometimes referred to as a storage container. Although the terms are essentially interchangeable, this report uses the term vVol datastore exclusively.
The FlashArray Implementation of vVol Datastores
FlashArray vVol datastores have no artificial size limit. The initial FlashArray vVols release, Purity 5.0.0, supports a single 8-petabyte vVol datastore per array; in Purity 6.4.1 and higher, the number of vVol datastores supported has been increased to match the array's pod limit, and the default vVol datastore size is 1 petabyte. Pure Storage Technical Support can change an array's vVol datastore size on request to alter the amount of storage VMware can allocate; to have the size changed, open a support case with Pure Storage.
With the release of Purity//FA version 6.4.1, the VASA provider supports multiple storage containers. To use multiple storage containers, and therefore multiple vVol datastores, on the same array, the array must be running Purity 6.4.1 or higher.
Purity//FA version 5.0.0 and newer include the VASA service as a core part of the Purity operating environment, so if Purity is up, VASA is running. Once storage providers are registered, a vVol datastore can be created and mounted to ESXi hosts. However, for vSphere to actually use vVols, a protocol endpoint on the FlashArray must also be connected to the ESXi hosts; otherwise there is only a management path connection and no data path connection.
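A quick way to confirm from the VMware side that the storage providers are registered and reachable is PowerCLI's Get-VasaProvider cmdlet. This is only a sketch; the vCenter address is a placeholder.

```powershell
# PowerCLI sketch: confirm the FlashArray storage (VASA) providers are
# registered and online. The vCenter address is a placeholder.
Connect-VIServer -Server 'vcenter.example.com' -Credential (Get-Credential)

# Both FlashArray controllers' providers should appear and report an online status.
Get-VasaProvider | Select-Object Name, Status, Url | Format-Table -AutoSize
```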
FlashArrays require two items to create a volume—a size and a name. vVol datastores do not require any additional input or enforce any configuration rules on vVols, so creation of FlashArray-based vVols is simple.
Creating a Storage Container on the FlashArray
With the release of Purity 6.4.1, multiple storage containers can now be created in a single vSphere environment from a single FlashArray. On the FlashArray, these multiple storage containers are managed through the Pod object.
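For administrators who prefer to script the array side, a rough sketch using the Pure Storage PowerShell SDK 2 is shown below. Treat the cmdlet and parameter names as assumptions to verify against the installed SDK version; the array address, API token, and pod name are placeholders. The equivalent GUI workflow follows in the steps below.

```powershell
# Sketch (assumptions; verify against the Pure Storage PowerShell SDK 2 docs):
# create a pod that will back an additional vVol storage container.
# The array address, API token, and pod name are placeholders.
Import-Module PureStoragePowerShellSDK2

$array = Connect-Pfa2Array -Endpoint 'flasharray.example.com' `
                           -ApiToken '<api-token>' -IgnoreCertificateError

New-Pfa2Pod -Array $array -Name 'vvols-pod-01'
```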
1. First, navigate to the pod creation screen by clicking (1) Storage, (2) Pods, and finally the (3) + sign to create a new pod.
2. Give the pod a (1) Name then click (2) Create.
3. After pod creation, the GUI will direct you to the screen for the pod object. From here, under Volumes, select the (1) ellipsis, then click (2) Show Protocol Endpoints. This changes this part of the GUI to show only the protocol endpoints (PEs) attached to the pod.
4. Now create the protocol endpoint by selecting the (1) ellipsis, then clicking (2) Create Protocol Endpoint. Note that generally only one PE per pod is necessary, but more are supported if needed.
5. Give the PE a (1) Name then click (2) Create to create it.
6. After the PE has been created, it will show up under Volumes in the pod screen. (1) Click on the PE name. Note that the name format is PodName::ProtocolEndpointName.
7. First, highlight and copy the serial number of the PE; this will be used later in the vCenter GUI to validate the connection of the PE to the host object. Then click the (1) ellipsis and click (2) Connect. Alternatively, the PE can be connected to an individual host here; a host group is not a requirement for using vVols on the FlashArray.
8. Select the (1) Host Group to connect the PE to then click (2) Connect.
9. To validate that the PE was successfully connected to the correct host objects, log into the vCenter client that manages the hosts in the host group connected earlier. Select the (1) Hosts and Clusters view, select (2) a host object, select (3) Storage Devices, click the (4) Filter button, and finally (5) paste the PE serial number. vCenter will filter to devices that have the serial number in the name. If the PE does not show up initially, you might need to rescan the storage devices associated with that host. (This check can also be scripted; see the PowerCLI sketch after these steps.)
10. If the PE shows up correctly as a storage device, next rescan the storage providers. Still in the Hosts and Clusters view, select (1) the vCenter previously configured with the appropriate storage provider, select (2) Configure, (3) Storage Providers, and (4) the storage provider for the array where the pod was configured and for the FlashArray controller that is Active (not Standby), then select (5) Rescan.
11. Now that the additional PE has been connected and configured in a pod on the FlashArray, proceed to Mounting a vVol Datastore.
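As a command-line alternative to the validation in step 9, the same check can be scripted with PowerCLI. This is only a sketch: the cluster name and serial number are placeholders, and it assumes FlashArray devices surface with the naa.624a9370 prefix followed by the lowercase volume serial.

```powershell
# PowerCLI sketch of the step 9 validation: look for the PE device on every
# host in the cluster by matching the serial number copied from the FlashArray
# GUI in step 7. The cluster name and serial below are placeholders.
$peSerial = 'F4D9D4181EB2479C000113E6'.ToLower()

Get-Cluster 'vVols-Cluster' | Get-VMHost | ForEach-Object {
    $_ | Get-ScsiLun |
        Where-Object { $_.CanonicalName -eq "naa.624a9370$peSerial" } |
        Select-Object @{Name='Host'; Expression={$_.VMHost.Name}}, CanonicalName, CapacityMB
}
```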
Mounting a vVol Datastore
A vVol datastore should be mounted to an ESXi host with access to a PE on the array that hosts the vVol datastore. Mounting a vVol datastore to a host requires:
- Registration of the array’s VASA providers with vCenter
- Connection of a PE to the ESXi hosts that will mount the vVol datastore
The latter requires that (a) an array administrator connect the PE to the host or host group, and (b) a VMware administrator rescan the ESXi host’s I/O interconnects.
An array administrator can use the FlashArray GUI, CLI, or REST API to connect a PE and a host or host group; the FlashArray User Guide contains instructions for connecting a host or host group and a volume.
With Pure Storage's vSphere Plugin, a VMware administrator can connect a PE to an ESXi Cluster and mount its vVol datastore without array administrator involvement.
Using the Plugin to Mount a vVol Datastore
Creating multiple containers through Pure's vSphere plugin is not currently supported but will be in an upcoming release of the plugin.
Mounting vVol Datastores Manually: FlashArray Actions
Alternatively, vVol datastores can be provisioned by connecting the PE to the hosts or host group, rescanning each host’s I/O interconnects, and mounting the vVol datastore to each host. These operations require both FlashArray and VMware involvement, however. Array administrators can use the FlashArray GUI, CLI, or REST interfaces, or tools such as PowerShell. VMware administrators can use the Web Client, the VMware CLI, or the VMware SDK and SDK-based tools like PowerCLI.
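As one VMware-side example, the VASA provider registration listed earlier as a prerequisite can be scripted with PowerCLI. The sketch below assumes the VMware.VimAutomation.Storage module is available; the provider names, controller management addresses, and credential are placeholders.

```powershell
# PowerCLI sketch: register the VASA providers on both FlashArray controllers
# with vCenter. Names, addresses, and the credential are placeholders; the
# FlashArray VASA provider listens on port 8084 of each controller.
$cred = Get-Credential   # a FlashArray user with rights to register storage providers

New-VasaProvider -Name 'flasharray-ct0' -Url 'https://10.21.0.10:8084' -Credential $cred
New-VasaProvider -Name 'flasharray-ct1' -Url 'https://10.21.0.11:8084' -Credential $cred
```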
Pure Storage recommends using the Plugin to provision PEs to hosts. Keep in mind that, outside of the pod workflow shown above, the FlashArray GUI does not allow creation of protocol endpoints; it does, however, allow finding existing protocol endpoints and connecting them to hosts and host groups.
Mounting vVol Datastores Manually: Web Client Actions
Navigate to the vCenter UI once the PE is connected to the FlashArray Host Group that corresponds to the vSphere ESXi Cluster.
Although the PE volumes are connected to the ESXi hosts from a FlashArray standpoint, ESXi may not recognize them until an I/O rescan occurs. On recent versions of Purity (5.1.15+, 5.3.6+, or 6.0.0+) and ESXi, a Unit Attention is issued to the ESXi hosts when the PE is connected, and the hosts dynamically update the devices presented via the SAN. If the FlashArray is not on one of those Purity releases, a storage rescan from the ESXi hosts is required for the PE to show up in the hosts' connected devices.
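On Purity releases older than those listed, the rescan itself is easy to script. A minimal PowerCLI sketch, with a placeholder cluster name, follows.

```powershell
# PowerCLI sketch: rescan the I/O interconnects (HBAs) on every host in the
# cluster so a newly connected PE is discovered. The cluster name is a placeholder.
Get-Cluster 'vVols-Cluster' | Get-VMHost |
    Get-VMHostStorage -RescanAllHba -RescanVmfs | Out-Null
```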
To display a provisioned PE, select the host in the inventory pane, select the Configure tab, and click Storage Devices. The PE appears as a 1 megabyte device.
This screen is useful for finding the PEs that have been successfully connected via a SAN transport method. Multipathing can also be configured on the PE from this view.
Note that in this example there are three PEs from three different arrays. The Storage -> Protocol Endpoints screen displays only the PEs that back a mounted vVol datastore; in this example only two appear, because only two vVol datastores (from two different arrays) currently exist.
The expected behavior is that the ESXi host displays only connected PEs that are currently being used as mounts for a vVol datastore.
To mount a vVol datastore, right-click the target host or cluster, select Storage from the dropdown menu, and select New Datastore from the secondary dropdown to launch the New Datastore wizard.
Best practice is to create and mount the vVol datastore against the ESXi cluster that maps to a FlashArray host group.
Select vVol as the datastore type.
Enter a friendly name for the datastore and select the vVol container from the Backing Storage Container list.
This is how the storage container list looks on Purity//FA 6.4.1 and higher: the default container shows up as default_storage_container, and all other containers show up with their pod name as the storage container name.
This is how the storage container list looks on Purity//FA 6.4.0 and earlier: the array's default container is shown only as Vvol container.
Clicking a container displays the array that hosts it in the lower Backing Storage Container panel.
An empty Backing Storage Container list typically indicates either that the array’s VASA providers have not been registered or that vCenter cannot communicate with them.
Select the host(s) on which to mount the vVol datastore. Best practice is to mount the vVol datastore to all hosts in the ESXi cluster.
Review the configuration details and then click Finish.
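The New Datastore wizard steps above can also be scripted. The sketch below is a hedged PowerCLI equivalent: the cluster, container, and datastore names are placeholders, and the -ScId parameter is passed the storage container ID reported by Get-VasaStorageContainer, so verify the parameter usage against your PowerCLI version.

```powershell
# PowerCLI sketch of the New Datastore wizard: mount a vVol datastore backed by
# a FlashArray storage container on every host in an ESXi cluster.
# Cluster, container, and datastore names are placeholders.
$container = Get-VasaStorageContainer -Name 'default_storage_container'

Get-Cluster 'vVols-Cluster' | Get-VMHost | ForEach-Object {
    New-Datastore -VMHost $_ -Name 'FlashArray-vVol-DS' -Vvol -ScId $container.Id
}
```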
Once a vVol datastore is mounted, the Configure tab for any ESXi host to which it is mounted lists the PEs available from the array to which the host is connected via SAN transport. Note that PE LUN 253 is now listed as a PE for the ESXi host.
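The same PE information can be pulled from the command line through the esxcli storage vvol namespace, invoked here via PowerCLI; the host name is a placeholder.

```powershell
# PowerCLI sketch: list the protocol endpoints an ESXi host is using for vVols,
# via the esxcli storage vvol namespace. The host name is a placeholder.
$esxcli = Get-EsxCli -VMHost (Get-VMHost 'esxi-01.example.com') -V2
$esxcli.storage.vvol.protocolendpoint.list.Invoke()
```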
Mounting a vVol Datastore to Additional Hosts
If an ESXi host has been added to a cluster, or the vVol datastore was mounted to only some hosts in the cluster, there is a workflow to mount the vVol datastore to additional hosts.
To mount the vVol datastore to additional hosts, right-click on the vVol Datastore and select Mount Datastore to Additional Hosts from the dropdown menu to launch the Mount Datastore to Additional Hosts wizard.
Select the hosts to which to mount the vVol datastore by checking their boxes and click Finish.
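If scripting this instead, the same New-Datastore call used for the initial mount can be repeated against just the new hosts. This is a sketch with placeholder host, datastore, and container names.

```powershell
# PowerCLI sketch: mount an existing vVol datastore to additional hosts by
# repeating New-Datastore -Vvol with the same datastore name and storage
# container ID. Host, datastore, and container names are placeholders.
$container = Get-VasaStorageContainer -Name 'default_storage_container'

foreach ($vmhost in (Get-VMHost 'esxi-05.example.com', 'esxi-06.example.com')) {
    New-Datastore -VMHost $vmhost -Name 'FlashArray-vVol-DS' -Vvol -ScId $container.Id
}
```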
Using a vVol Datastore
A vVol datastore is neither a file system nor a volume (LUN) per se, but an abstraction that emulates a file system to (a) represent VMs provisioned through it and (b) manage VM space allocation. It can be viewed as a collection of references to vVols.
vVol datastores are managed similarly to conventional datastores. For example, the Web Client file browser and an ESXi SSH session can display a vVol datastore’s contents.
vSphere UI vVol Datastore View
ESXi CLI view of vVol Datastore Content
```
[root@ac-esxi-a-16:~] cd /vmfs/volumes/
[root@ac-esxi-a-16:/vmfs/volumes] cd sn1-m20r2-c05-36-vVols-DS/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] ls
AC-3-vVols-VM-1                               rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a  rfc4122.a46478bc-300d-459e-9b68-fa6acb59c01c  vVols-m20-VM-01       vvol-w2k16-no-cbt-c-2
AC-3-vVols-VM-2                               rfc4122.7255934c-0a2e-479b-b231-cef40673ff1b  rfc4122.ba344b42-276c-4ad7-8be1-3b8a65a52846  vVols-m20-VM-02
rfc4122.1f972b33-12c9-4016-8192-b64187e49249  rfc4122.7384aa04-04c4-4fc5-9f31-8654d77be7e3  rfc4122.edfc856c-7de1-4e70-abfe-539e5cec1631  vvol-w2k16-light-c-1
rfc4122.24f0ffad-f394-4ea4-ad2c-47f5a11834d0  rfc4122.8a49b449-83a6-492f-ae23-79a800eb5067  vCLS (1)                                      vvol-w2k16-light-c-2
rfc4122.31123240-6a5d-4ead-a1e8-b5418ab72a3e  rfc4122.97815229-bbef-4c87-b69b-576fb55a780c  vVols-b05-VM-02                               vvol-w2k16-no-cbt-c-1
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] cd vVols-m20-VM-01/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS/vVols-m20-VM-01
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] ls
vVols-m20-VM-01-000001.vmdk         vVols-m20-VM-01.vmdk      vmware-2.log
vVols-m20-VM-01-321c4c5a.hlog       vVols-m20-VM-01.vmsd      vmware-3.log
vVols-m20-VM-01-3549e0a8.vswp       vVols-m20-VM-01.vmx       vmware-4.log
vVols-m20-VM-01-3549e0a8.vswp.lck   vVols-m20-VM-01.vmx.lck   vmware-5.log
vVols-m20-VM-01-Snapshot2.vmsn      vVols-m20-VM-01.vmxf      vmware.log
vVols-m20-VM-01-aux.xml             vVols-m20-VM-01.vmx~      vmx-vVols-m20-VM-01-844ff34dc6a3e333b8e343784b3c65efa2adffa1-2.vswp
vVols-m20-VM-01.nvram               vmware-1.log
```