
vVols User Guide: vVol Datastore


vVol Datastores

vVols replace LUN-based datastores formatted with VMFS. There is no file system on a vVol datastore, nor are vVol-based virtual disks encapsulated in files.

The datastore concept does not disappear entirely, however. VMs must be provisioned somewhere. Historically, VMs have been implemented as files in NFS mounts or in VMFS datastores. Datastores are necessary both because VM provisioning tools use them to house new VMs, and because they help control storage allocation and differentiate between different types of storage.

However, VMFS datastores limit flexibility, primarily because their sizes and features are specified when they are created, and it is not possible to assign different features to individual objects in them. To overcome this limitation, the vVol architecture includes a storage container object, generally referred to as a vVol datastore, with two key properties:

Capacity limit

  • Allows an array administrator to limit the capacity that VMware administrators can provision as vVols.

Array capabilities

  • Allows vCenter to determine whether an array can satisfy a configuration request for a VM.

A vVol datastore is sometimes referred to as a storage container. Although the terms are essentially interchangeable, this report uses the term vVol datastore exclusively.


The FlashArray Implementation of vVol Datastores

FlashArray vVol datastores have no artificial size limit. The initial FlashArray vVols release, Purity 5.0.0, supports a single 8-petabyte vVol datastore per array; in Purity 6.4.1 and higher, the number of vVol datastores has been increased to the array's pod limit. As part of that change, the default vVol datastore size is 1 petabyte in 6.4.1 and later. Pure Storage Technical Support can change an array's vVol datastore size on customer request to alter the amount of storage VMware can allocate; should this be desired, open a support case with Pure Storage to have the size change applied.

With the release of Purity//FA 6.4.1, the VASA provider supports multiple storage containers. To use multiple storage containers, and therefore multiple vVol Datastores, on the same array, the array must be running Purity 6.4.1 or higher.

Purity//FA 5.0.0 and newer versions include the VASA service as a core part of the Purity operating environment, so if Purity is up, VASA is running. Once the storage providers are registered, a vVol Datastore can be created and/or mounted to ESXi hosts. However, for vSphere to actually use vVols, a Protocol Endpoint (PE) on the FlashArray must be connected to the ESXi hosts; otherwise there is only a management path connection and no data path connection.
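
The management path can be checked quickly from an ESXi host by listing the VASA providers the host knows about. A minimal sketch, assuming SSH access to the host (output varies by environment):

# Shows the registered VASA providers and whether each is online
esxcli storage vvol vasaprovider list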

FlashArrays require only two items to create a volume: a name and a size. vVol datastores do not require any additional input or enforce any configuration rules on vVols, so creation of FlashArray-based vVols is simple.
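
As an illustration, an ordinary FlashArray volume can be created from the CLI with nothing more than those two inputs (the volume name below is hypothetical):

purevol create --size 1T example-volume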


Creating a Storage Container on the FlashArray

With the release of Purity 6.4.1, multiple storage containers can now be created in a single vSphere environment from a single FlashArray. On the FlashArray, these multiple storage containers are managed through the Pod object.

1. First, navigate to the pod creation screen by clicking (1) Storage, (2) Pods, and finally (3) the + sign to create a new pod.

MSC1.png

2. Give the pod a (1) Name then click (2) Create.

MSC2.png

3. After pod creation, the GUI directs you to the screen for the pod object. From here, under Volumes, select the (1) ellipsis, then click (2) Show Protocol Endpoints. This changes this part of the GUI to show only the PEs (Protocol Endpoints) attached to the pod.

MSC3.png

4. Now create the protocol endpoint by selecting the (1) ellipsis, then clicking (2) Create Protocol Endpoint. Note that, generally speaking, only one PE per pod is necessary, but more are supported if needed.

MSC4.png

5. Give the PE a (1) Name then click (2) Create to create it.

MSC5.png

6. After the PE has been created, it will show up under Volumes in the pod screen. (1) Click on the PE name. Note that the name format is PodName::ProtocolEndpointName.

MSC6.png

7. First, highlight and copy the serial number of the PE; it will be used later in the vCenter GUI to validate the connection of the PE to the host object. Click the (1) ellipsis, then click (2) Connect. Alternatively, the PE can be connected to an individual host here; a host group is not a requirement for using vVols on the FlashArray.

MSC7.png

8. Select the (1) Host Group to connect the PE to then click (2) Connect.

MSC8.png

9. To validate that the PE was successfully connected to the correct host objects, log into the vCenter client that manages the hosts in the host group connected earlier. Select the (1) Hosts and Clusters view, select (2) a host object, select (3) Storage Devices, click the (4) Filter button, and finally (5) paste the PE serial number. vCenter filters for devices that have the serial number in their name. If the PE does not show up initially, you might need to rescan the storage devices on that host.

MSC9.png

10. If the PE shows up correctly as a storage device, next rescan the storage providers. Still in the Hosts and Clusters view, select (1) the vCenter previously configured with the appropriate storage provider, select (2) Configure, then (3) Storage Providers, select (4) the storage provider for the array where the pod was configured, choosing the entry for the FlashArray controller that is Active (not Standby), and then select (5) Rescan.

MSC10.png

11. Now that the additional PE has been connected and configured in a pod on the FlashArray, you can proceed to Mounting a vVol Datastore. An equivalent FlashArray CLI workflow is sketched below for reference.
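
A minimal FlashArray CLI sketch of the same steps, assuming hypothetical pod and PE names and the Prod-Cluster-FC host group used in the CLI examples later in this guide (a capacity quota can also be set on the pod; see the FlashArray CLI reference for purepod). Create the pod, create a protocol endpoint inside it, connect the PE to the host group, and confirm:

purepod create vVols-Pod-01
purevol create --protocol-endpoint vVols-Pod-01::vVols-Pod-01-PE
purevol connect --hgroup Prod-Cluster-FC vVols-Pod-01::vVols-Pod-01-PE
purevol list --protocol-endpoint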


Mounting a vVol Datastore

A vVol datastore should be mounted to an ESXi host with access to a PE on the array that hosts the vVol datastore. Mounting a vVol datastore to a host requires two things: the array's storage (VASA) providers must be registered in vCenter, and the host must have access to a PE on the array.

The latter requires that (a) an array administrator connect the PE to the host or host group, and (b) a VMware administrator rescan the ESXi host's I/O interconnects.

An array administrator can use the FlashArray GUI, CLI, or REST API to connect a PE and a host or host group; the FlashArray User Guide contains instructions for connecting a host or host group and a volume.

With Pure Storage's vSphere Plugin, a VMware administrator can connect a PE to an ESXi Cluster and mount its vVol datastore without array administrator involvement.

Using the Plugin to Mount vVol Datastore

Once the storage providers are registered, the vVol Datastore can be created and mounted using the vSphere Plugin. The workflow below covers creating the vVol Datastore and mounting it to an ESXi Cluster; it is also shown in the demo video.

Mounting the vVol Datastore with the Pure Storage vSphere Plugin

The ESXi hosts must already be added to the FlashArray. Best practice is to correlate the ESXi cluster with a FlashArray Host Group, with each ESXi host in that cluster added to the FlashArray Host Group.
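
For reference, grouping existing host objects into a host group can also be done from the FlashArray CLI. A minimal sketch, assuming the host and host group names used in the CLI examples later in this guide already exist on the array (see the purehost documentation for the initiator flags appropriate to your transport):

purehgroup create --hostlist ESXi-1-FC,ESXi-2-FC Prod-Cluster-FC
purehgroup list Prod-Cluster-FC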

  1. Right-click on the ESXi Cluster that you want to create and mount the vVol Datastore on.  Go to the Pure Storage option and then click on Create Datastore.
    MountvVolDatastore1.png
  2. Choose to create a vVol FlashArray Storage Container (vVol Datastore).
    MountvVolDatastore2.png
  3. Select the ESXi Cluster that will be the compute resource to mount the vVol Datastore to.  Best practice for vVols is to mount the vVol Datastore to the host group and not to individual ESXi hosts.  Why is this important?  During this step, the Plugin checks that the Host Group on the FlashArray is connected to a Protocol Endpoint.  If there is no connection, the Plugin automatically connects a Protocol Endpoint on that FlashArray to the Host Group.  Best practice is to connect PEs to Host Groups and not to individual ESXi Hosts.
    MountvVolDatastore3.png
  4. Select a FlashArray to back the vVol datastore.
    MountvVolDatastore4.png
  5. Select an existing container or, optionally, create a new container. If using an existing container, select the container to use. Please note: on FlashArrays running Purity//FA 6.4.1 or higher, multiple storage containers are managed through the pod object on the FlashArray.

    MountvVolDatastore5.png
  6. Populate the name of the datastore to be created. Because an existing container was selected, the container name is not editable; the name selected in the previous window is pre-populated.
    MountvVolDatastore6-existing.png
  7. Optional. If a new container was selected, populate the datastore name. If the container name should match the datastore name, check the Same as datastore name checkbox; otherwise, uncheck the checkbox and populate the container name field. Finally, populate the container quota value with the size of the datastore that should be reflected in vSphere. This sets a capacity quota on the pod on the FlashArray.
    MountvVolDatastore7-new.png
  8. Review the information and finish the workflow.
    MountvVolDatastore8.png
  9. From the Datastore page, click on the newly created vVol Datastore and then check the Connectivity with the Hosts in the ESXi Cluster to ensure that they are connected and healthy. A quick CLI check from an ESXi host is sketched after this list.
    MountvVolDatastore9.png
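
To confirm from an ESXi host that the container backing the new datastore is visible, the vVol storage containers known to the host can be listed from the ESXi shell (a minimal sketch; output varies by environment):

# Lists the vVol storage containers (datastores) this host can see
esxcli storage vvol storagecontainer list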

Creating multiple containers through Pure's vSphere plugin is not currently supported but will be in an upcoming release of the plugin.

Mounting vVol Datastores Manually: FlashArray Actions 

Alternatively, vVol datastores can be provisioned by connecting the PE to the hosts or host group, rescanning each host’s I/O interconnects, and mounting the vVol datastore to each host. These operations require both FlashArray and VMware involvement, however. Array administrators can use the FlashArray GUI, CLI, or REST interfaces, or tools such as PowerShell. VMware administrators can use the Web Client, the VMware CLI, or the VMware SDK and SDK-based tools such as PowerCLI.

Pure Storage recommends using the Plugin to provision PEs to hosts.  Keep in mind that the FlashArray UI does not allow creation of Protocol Endpoints; it does, however, allow finding Protocol Endpoints and connecting them to Hosts and Host Groups.

A Protocol Endpoint on the FlashArray can be viewed and connected from either the FlashArray UI or CLI.

Using the FlashArray UI to Manage the Protocol Endpoint

When its first VASA provider is registered, a FlashArray automatically creates a PE called pure-protocol-endpoint.  The pure-protocol-endpoint can be found by filtering the Volumes view.  A PE can be connected from the PE volume view or from a Host/Host Group view in the FlashArray UI.

From the Storage -> Volumes view
Click on the options and select Show Protocol Endpoints
vVols-User-Guide-PE-01.png
This view will display the Protocol Endpoints for the FlashArray

vVols-User-Guide-PE-02.png

From the PE View the PE can be connected to a Host or Host Group
Best Practice is to connect the PE to a Host Group and not Hosts individually. 
vVols-User-Guide-PE-03.png

From the Connect Host Groups page you can select one or multiple Host Groups to connect the PE to
vVols-User-Guide-PE-04.png

Using the FlashArray CLI to Manage the Protocol Endpoint

From the FlashArray CLI a storage admin can manage the Protocol Endpoint.  This includes listing/viewing, creating, connecting, disconnecting or destroying a protocol endpoint.

Protocol endpoints that have been created can be listed with purevol list --protocol-endpoint

pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6

A protocol endpoint can be created with purevol create --protocol-endpoint

pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint prod-protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7

pureuser@sn1-x50r2-b12-36> purevol list --protocol-endpoint
Name                    Source  Created                  Serial
prod-protocol-endpoint  -       2020-12-02 12:29:21 PST  F4252922ADE248CF000113E7
pure-protocol-endpoint  -       2020-12-02 12:28:08 PST  F4252922ADE248CF000113E6

To connect a protocol endpoint use either purehgroup connect or purevol connect

pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 10 prod-protocol-endpoint
Name                    Host Group       Host       LUN
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  10
prod-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  10

pureuser@sn1-x50r2-b12-36> purevol list --connect
Name                                Size  LUN  Host Group       Host
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-1-FC
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-2-FC
prod-protocol-endpoint              -     10   Prod-Cluster-FC  ESXi-3-FC

A protocol endpoint can be disconnected from a host or host group with purevol disconnect.

However, if there are any active sub-LUN connections (bound vVols), this operation will fail, because disconnecting the PE would cause a data path failure (a severity-1 event) for that ESXi host.

pureuser@sn1-x50r2-b12-36> purevol connect --hgroup Prod-Cluster-FC --lun 11 pure-protocol-endpoint
Name                    Host Group       Host       LUN
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-3-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-2-FC  11
pure-protocol-endpoint  Prod-Cluster-FC  ESXi-1-FC  11
pureuser@sn1-x50r2-b12-36> purevol disconnect --hgroup Prod-Cluster-FC pure-protocol-endpoint
Name                    Host Group       Host
pure-protocol-endpoint  Prod-Cluster-FC  -

A disconnected Protocol Endpoint can be destroyed with purevol destroy. DO NOT DESTROY THE DEFAULT PURE-PROTOCOL-ENDPOINT!

pureuser@sn1-x50r2-b12-36> purevol create --protocol-endpoint dr-protocol-endpoint
Name                  Source  Created                  Serial
dr-protocol-endpoint  -       2020-12-02 14:15:23 PST  F4252922ADE248CF000113EA

pureuser@sn1-x50r2-b12-36> purevol destroy dr-protocol-endpoint
Name
dr-protocol-endpoint

A FlashArray’s performance is independent of the number of volumes it hosts; an array’s full performance capability can be delivered through a single PE. PEs are not performance bottlenecks for vVols, so a single PE per array is all that is needed.

Configuring a single PE per array does not restrict multi-tenancy, because sub-LUN connections (vVol bindings) are host-specific.

A FlashArray automatically creates a default pure-protocol-endpoint PE when its first VASA provider is registered. If necessary, additional PEs can also be created manually.  However, in most cases the default pure-protocol-endpoint is fine to use.  There is no additional HA value added by connecting a host to multiple protocol endpoints.

Do not rename, destroy, or eradicate the pure-protocol-endpoint PE on the FlashArray.  This namespace is required for VASA to store the metadata it needs to work correctly with the FlashArray.

BEST PRACTICE: Use one PE per vVol container. All hosts can share the same PE; because vVol-to-host bindings are host-specific, multi-tenancy is inherently supported.

More than one PE can be configured, but this is seldom necessary.

As is typical of the FlashArray architecture, vVol support, and the PE implementation in particular, is as simple as possible.

Mounting vVol Datastores Manually: Web Client Actions

Navigate to the vCenter UI once the PE is connected to the FlashArray Host Group that corresponds to the vSphere ESXi Cluster.

Although the PE volumes are connected to the ESXi hosts from a FlashArray standpoint, ESXi may not recognize them until an I/O rescan occurs. On recent combinations of Purity (5.1.15+, 5.3.6+, or 6.0.0+) and ESXi, a Unit Attention is issued to the ESXi hosts when the PE is connected, and the hosts dynamically update the devices presented via the SAN. If the FlashArray is not on one of those Purity releases, a storage rescan from the ESXi hosts is required for the PE to show up in the hosts' connected devices.
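
If a manual rescan is needed, it can be triggered from the vSphere client or directly from the ESXi shell. A minimal sketch, assuming SSH access to the host (FlashArray devices typically present with the naa.624a9370 prefix):

# Rescan all storage adapters on this host
esxcli storage core adapter rescan --all
# List SCSI devices and look for FlashArray serial numbers
esxcli storage core device list | grep -i naa.624a9370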

To display a provisioned PE, select the host in the inventory pane, select the Configure tab, and click Storage Devices. The PE appears as a 1 megabyte device.

vvols-guide-pe-vsphere-view-01.png

This screen is useful for finding the PEs that have been successfully connected via a SAN transport.  Multipathing can be configured on the PE from this view as well.

Note that in this example there are three PEs from three different arrays.  When navigating to the Storage -> Protocol Endpoints screen, only the PEs that back a mounted vVol Datastore are displayed.  In this example only two show, because only two vVol Datastores (from two different arrays) have been created so far.

vvols-guide-pe-vsphere-view-02.png

The expected behavior is that the ESXi host will only display connected PEs that are currently being used as mounts for a vVol Datastore.
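
This can also be verified from the ESXi shell; the list is empty until a vVol datastore backed by the PE has been mounted (a minimal sketch):

# Shows the PEs in use by mounted vVol datastores on this host
esxcli storage vvol protocolendpoint list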

To mount a vVol datastore, right-click the target host or cluster, select Storage from the dropdown menu, and select New Datastore from the secondary dropdown to launch the New Datastore wizard.

vvols-guide-vvol-ds-01.png

Best Practice is to create and mount the vVol Datastore against the ESXi Cluster which would be mapped to a FlashArray Host Group.

Select vVol as the datastore type.

vvols-guide-vvol-ds-02.png

Enter in a friendly name for the datastore and select the vVol container in the Backing Storage Container list.

This is how the storage container list looks on Purity//FA 6.4.1 and higher.  The default container will show up as the default_storage_container (red box) and all others will show up with the pod name as the storage container name (orange box).
DefaultContainer641.png
This is how the storage container list looks in Purity//FA 6.4.0 or earlier.  The default container for an array will only be shown as Vvol container.
vvols-guide-vvol-ds-03.png

Clicking a container displays the array that hosts it in the lower Backing Storage Container panel.

An empty Backing Storage Container list typically indicates either that the array’s VASA providers have not been registered or that vCenter cannot communicate with them.

Select the host(s) on which to mount the vVol datastore.  Best Practice would be to connect the vVol Datastore to all hosts in that ESXi Cluster.

vvols-guide-vvol-ds-04.png

Review the configuration details and then click Finish.

vvols-guide-vvol-ds-05.png

Once a vVol datastore is mounted, the Configure tab for any ESXi host to which it is mounted lists the PEs available from the array to which the host is connected via SAN transport.  Note that the PE at LUN 253 is now listed as a PE for the ESXi host.

vvols-guide-vvol-ds-09.png

Mounting a vVol Datastore to Additional Hosts

If an ESXi host has been added to a cluster, or the vVol Datastore was mounted to only some hosts in the cluster, there is a workflow to mount the vVol Datastore to additional hosts.

To mount the vVol datastore to additional hosts, right-click on the vVol Datastore and select Mount Datastore to Additional Hosts from the dropdown menu to launch the Mount Datastore to Additional Hosts wizard.

vvols-guide-vvol-ds-06.png

Select the hosts to which to mount the vVol datastore by checking their boxes and click Finish.

vvols-guide-vvol-ds-07.png

Using a vVol Datastore

A vVol datastore is neither a file system nor a volume (LUN) per se, but an abstraction that emulates a file system to (a) represent VMs provisioned through it and (b) manage VM space allocation. It can be viewed as a collection of references to vVols.

vVol datastores are managed similarly to conventional datastores. For example, the Web Client file browser and an ESXi SSH session can display a vVol datastore’s contents.
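
The mounted vVol datastore also appears in the host's filesystem list, which maps the friendly datastore name to the vvol: UUID visible in the shell prompts below (a minimal sketch):

# vVol datastores are listed with a vvol filesystem type
esxcli storage filesystem list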

vSphere UI vVol Datastore View
vvols-guide-vvol-ds-08.png
ESXi CLI view of vVol Datastore Content
[root@ac-esxi-a-16:~] cd /vmfs/volumes/
[root@ac-esxi-a-16:/vmfs/volumes] cd sn1-m20r2-c05-36-vVols-DS/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] ls
AC-3-vVols-VM-1                               rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a  rfc4122.a46478bc-300d-459e-9b68-fa6acb59c01c  vVols-m20-VM-01                               vvol-w2k16-no-cbt-c-2
AC-3-vVols-VM-2                               rfc4122.7255934c-0a2e-479b-b231-cef40673ff1b  rfc4122.ba344b42-276c-4ad7-8be1-3b8a65a52846  vVols-m20-VM-02
rfc4122.1f972b33-12c9-4016-8192-b64187e49249  rfc4122.7384aa04-04c4-4fc5-9f31-8654d77be7e3  rfc4122.edfc856c-7de1-4e70-abfe-539e5cec1631  vvol-w2k16-light-c-1
rfc4122.24f0ffad-f394-4ea4-ad2c-47f5a11834d0  rfc4122.8a49b449-83a6-492f-ae23-79a800eb5067  vCLS (1)                                      vvol-w2k16-light-c-2
rfc4122.31123240-6a5d-4ead-a1e8-b5418ab72a3e  rfc4122.97815229-bbef-4c87-b69b-576fb55a780c  vVols-b05-VM-02                               vvol-w2k16-no-cbt-c-1
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf] cd vVols-m20-VM-01/
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] pwd
/vmfs/volumes/sn1-m20r2-c05-36-vVols-DS/vVols-m20-VM-01
[root@ac-esxi-a-16:/vmfs/volumes/vvol:db046da6c3633fd8-b4df272b7c417bdf/rfc4122.3408aa5d-da4d-4b34-84ac-54ac220ca40a] ls
vVols-m20-VM-01-000001.vmdk                                          vVols-m20-VM-01.vmdk                                                 vmware-2.log
vVols-m20-VM-01-321c4c5a.hlog                                        vVols-m20-VM-01.vmsd                                                 vmware-3.log
vVols-m20-VM-01-3549e0a8.vswp                                        vVols-m20-VM-01.vmx                                                  vmware-4.log
vVols-m20-VM-01-3549e0a8.vswp.lck                                    vVols-m20-VM-01.vmx.lck                                              vmware-5.log
vVols-m20-VM-01-Snapshot2.vmsn                                       vVols-m20-VM-01.vmxf                                                 vmware.log
vVols-m20-VM-01-aux.xml                                              vVols-m20-VM-01.vmx~                                                 vmx-vVols-m20-VM-01-844ff34dc6a3e333b8e343784b3c65efa2adffa1-2.vswp
vVols-m20-VM-01.nvram                                                vmware-1.log