In VMware Cloud Foundation version 4.1, vVols have taken center stage as a Principal Storage type available for Workload Domain deployments. This inclusion in one of VMware's flagship products should eliminate any doubt that vVols is an important area of investment for VMware and its ecosystem partners. This technical KB will walk through the steps required to deploy a Workload Domain using Fibre Channel and vVols with the Pure Storage FlashArray.
The following prerequisites must be in place before starting:
- VMware Cloud Foundation Management Domain deployed with VCF/Cloud Builder version 4.1.
- FlashArray with Fibre Channel connectivity
- FlashArray running Purity 5.3.6+
- NOTE: do not use Purity version 5.3.9 as there is a bug which prevents vVols deployment from completing--this is resolved in 5.3.10
- Three or more ESXi hosts with the following characteristics:
- Fibre Channel connectivity and zoning completed
- ESXi version 7.0.1 or above
- Setup for use in VCF 4.1 per VMware's Documentation
- Added as a host object to the FlashArray with their respective WWNs assigned
- Hosts should not have any shared VMFS datastores attached to them (a private volume like boot from SAN is fine, though)
With the above prerequisites confirmed, the process of building a Workload Domain using vVols and FC can be broken down into these steps that we will detail in the remainder of this article:
- Register FlashArray VASA Provider in SDDC Manager
- Create a Host Group and attach it to Pure Storage Protocol Endpoint on FlashArray
- Commission ESXi Hosts within SDDC Manager for Workload Domain
- Create the Workload Domain with the vVols Storage Type
- Complete VASA registration with Pure Storage vSphere Plugin post Workload Domain Deployment
Add FlashArray VASA Provider in SDDC Manager
A cornerstone of building vVols-based Workload Domains is registering a Storage Provider with the Workload Domain vCenter instance during the deployment process. Storage Providers leverage VASA (vSphere APIs for Storage Awareness), the mechanism that enables vCenter to deploy vVols on a FlashArray and take advantage of the numerous benefits they provide. As of VMware Cloud Foundation 4.1, VASA Storage Providers can be added and managed in SDDC Manager under the Administration > Storage Settings menu.
Within the Storage Settings menu, click + Add VASA Provider to open the wizard for that task.
The fields required for adding a VASA provider are broken out individually below to show what information is needed to register the FlashArray in SDDC Manager.
- Provide a descriptive name for the VASA provider. It is recommended to use the FlashArray name and append it with -ct0 or -ct1 to denote which controller the entry is associated with.
- Provide the URL for the VASA provider. This cannot be the management VIP of the array; instead, this field must contain the management IP address of one of the controllers. The URL must also have the VASA port and version.xml appended to it, in the format: https://<IP of FlashArray Controller>:8084/version.xml
- Provide a FlashArray username with the arrayadmin role. The procedure for creating such a user can be found here. While the pureuser account can be used, we recommend creating a separate FlashArray local user dedicated to VASA operations.
- Provide the password for the FlashArray username to be used.
- The Container Name must be vVol container. Note that this value is case-sensitive. This is the default container name; support for customized container names will come in a future release.
- For Container Type, select FC from the drop-down menu to use Fibre Channel.
- Once all entries are completed, click Save.
This completes the SDDC Manager VASA Registration component of the process and we can now proceed to the next step.
Note that each FlashArray has two VASA providers--one on each controller. SDDC Manager only offers the ability to register a single VASA provider when deploying a new Workload Domain, so register one VASA provider per FlashArray (which controller you choose is up to you, though consistency is good--for instance, always choose CT0), and ensure that the second VASA provider is registered post-deployment. Instructions for that process are below.
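The naming (-ct0/-ct1) and URL conventions described above can be summarized in a short sketch. The array name and controller IPs below are illustrative placeholders, not values from any real environment:

```python
# Sketch: build the SDDC Manager VASA provider entries for a FlashArray,
# following the -ct0/-ct1 naming and URL conventions described above.
# Array name and controller IPs are illustrative placeholders.

def vasa_provider_entries(array_name, ct0_ip, ct1_ip):
    """Return one (name, url) entry per controller-local VASA provider."""
    entries = []
    for suffix, ip in (("ct0", ct0_ip), ("ct1", ct1_ip)):
        entries.append({
            "name": f"{array_name}-{suffix}",
            # Must be a controller management IP, not the array's management
            # VIP, with the VASA port and version.xml appended.
            "url": f"https://{ip}:8084/version.xml",
        })
    return entries

for entry in vasa_provider_entries("flasharray-x50", "10.0.0.11", "10.0.0.12"):
    print(entry["name"], entry["url"])
```

Register one of the two entries in SDDC Manager before deployment; the other is added post-deployment as described later in this article.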
Create and Attach PE to FlashArray Host Group
A protocol endpoint (PE) is a logical I/O proxy that establishes the data path between the ESXi hosts and FlashArray for vVols. Establishing this connection at the FlashArray level is a simple, but required step before the VMware Cloud Foundation Workload Domain can be successfully deployed.
As mentioned earlier, it is expected that the ESXi hosts to be used in the Workload Domain have been added to the FlashArray and their WWNs have been associated with that host object. Another required step is to create a Host Group and add those hosts to it.
From there, select the Host Group (1), click on the radio button under Connected Volumes and click on the Connect... button (2).
Click the checkbox next to pure-protocol-endpoint (1) and then click Connect (2) to complete the operation.
Inspect the Host Group to confirm that the pure-protocol-endpoint has been successfully connected to it.
Alternatively, the Purity CLI can be used to connect the PE to a Host Group via the following command, which also provides confirmation of PE connectivity to each host within the host group:
pureuser@FlashArray> purevol connect --hgroup VCF-Sisters pure-protocol-endpoint
Name                    Host Group   Host           LUN
pure-protocol-endpoint  VCF-Sisters  sn1-m5-ch1-05  254
pure-protocol-endpoint  VCF-Sisters  sn1-m5-ch1-06  254
pure-protocol-endpoint  VCF-Sisters  sn1-m5-ch1-07  254
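If you want to sanity-check output like the above programmatically, a small sketch can parse the CLI table and confirm that every host in the group sees the protocol endpoint. This is a convenience check, not part of the required procedure; the host names below are the ones from the example output:

```python
# Sketch: parse the purevol connect output table to confirm the protocol
# endpoint is connected to every host in the host group.

CLI_OUTPUT = """\
Name                    Host Group   Host           LUN
pure-protocol-endpoint  VCF-Sisters  sn1-m5-ch1-05  254
pure-protocol-endpoint  VCF-Sisters  sn1-m5-ch1-06  254
pure-protocol-endpoint  VCF-Sisters  sn1-m5-ch1-07  254
"""

def pe_connections(output):
    """Return {host: lun} for rows describing the protocol endpoint."""
    rows = output.strip().splitlines()[1:]  # skip the header row
    conns = {}
    for row in rows:
        name, _hgroup, host, lun = row.split()
        if name == "pure-protocol-endpoint":
            conns[host] = int(lun)
    return conns

conns = pe_connections(CLI_OUTPUT)
# Every host should see the PE, typically at the same LUN (254 here).
assert len(conns) == 3 and set(conns.values()) == {254}
```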
Now that the PE has been connected to the host group, we can move forward with commissioning the ESXi hosts into SDDC Manager for use.
Commission ESXi Hosts to SDDC Manager
With all preparatory work completed for our vVols-based Workload Domain, we can now proceed to commission the hosts into SDDC Manager.
To get started, click on Hosts under Inventory and then select the Commission Hosts button on the top-right.
Below the next screen capture, we break out the various fields and what to populate them with.
- ESXi Host FQDN: Enter in the ESXi Host name you wish to commission.
- Storage Type: Select the vVol option.
- vVol Storage Protocol Type: Pick FC from the drop-down menu of available protocols.
- Network Pool Name: Select a Network Pool that can be used with the Workload Domain ESXi hosts. Only a vMotion network is required for FC-based vVols.
- Provide the root username for the ESXi host.
- Provide the root password for the ESXi host.
- Click the Add button to save the host entry, then repeat the above process for each additional host you will be adding. Note that a JSON template can be populated and imported to speed up this process by commissioning ESXi hosts in batches.
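The bulk-commission JSON mentioned above can be generated with a short script. The field names in this sketch are illustrative only--always start from the template that SDDC Manager itself offers for download in the Commission Hosts wizard, and keep real credentials out of version-controlled files:

```python
# Sketch: build a bulk-commission payload for several FC vVol hosts.
# Field names here are illustrative assumptions -- use the JSON template
# downloaded from the SDDC Manager Commission Hosts wizard as the source
# of truth for the actual schema.
import json

def commission_spec(fqdns, network_pool, username="root", password="********"):
    return {
        "hostsSpec": [
            {
                "hostfqdn": fqdn,
                "username": username,
                "password": password,  # placeholder, not a real credential
                "storageType": "VVOL",
                "networkPoolName": network_pool,
            }
            for fqdn in fqdns
        ]
    }

spec = commission_spec(
    ["esxi-1.example.com", "esxi-2.example.com", "esxi-3.example.com"],
    "wld-network-pool",
)
print(json.dumps(spec, indent=2))
```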
Once all hosts have been added, the following actions are needed to finish the commissioning process.
- Select all or individual hosts you wish to validate.
- Click on Confirm FingerPrint.
- Click on Validate All to precheck the hosts to confirm that they are ready for use in SDDC Manager and Workload Domains.
- Once validation has completed successfully click on Next to proceed.
Confirm that the hosts are as expected (making certain that Storage Type is VVOL) before clicking the Commission button to add them to SDDC Manager inventory.
With the ESXi hosts now available in SDDC Manager inventory, we can now use them to build a vVols-based Workload Domain.
Create vVols-based Workload Domain
All of the previous steps come together in this section when we create our Workload Domain as this next procedure will showcase.
To get started, select Workload Domains under the Inventory menu item, then click + Workload Domain and select VI - Workload Domain.
In the first window that spawns, select vVol from the available Storage Selections and then click on Begin.
Provide a descriptive Virtual Infrastructure Name and Organization Name. For our vVols-based Workload Domain we will later be using Workload Management, so we select the option to Enable vSphere Lifecycle Manager Baselines rather than vSphere Lifecycle Manager images, and then click Next.
Provide a Cluster Name for the Workload Domain and click on Next.
Input the vCenter FQDN. This should already have been added to your DNS server, which is confirmed when the IP address, subnet mask, and gateway all auto-populate once the FQDN resolves correctly. Provide the vCenter Root Password, then click Next.
NSX-T Deployment parameters are provided in more detail below:
- A Host Overlay (TEP) VLAN must be provided. This VLAN should have an available DHCP scope from which the ESXi hosts can obtain IP addresses. This is critical for Workload Management/vSphere with Kubernetes to function properly, and the VLAN should also be routable to the Edge TEP network on a separate VLAN.
- Similar to vCenter, all NSX-T component FQDNs should be added to DNS and when the FQDNs are added the IP addresses associated with them should be automatically resolved.
- Provide a strong Admin Password for NSX-T.
- Click on Next to proceed.
The vVol Storage section allows us to specify which array and protocol we wish to associate with the Workload Domain vVol datastore.
- Select the FC protocol from the drop-down list.
- Select the name of the VASA Provider we entered in the first section of this KB article.
- Pick Vvol container as the Storage Container. Note that this is the only container option currently supported.
- Pick the FlashArray user account added during VASA registration.
- Provide a descriptive Datastore Name for the vVol datastore that will be deployed with the Workload Domain.
- Click the Next button to proceed.
Hosts that match the vVol storage protocol (FC in our example) will be shown as available for use with the Workload Domain. Select a minimum of three hosts, then click Next to proceed.
Pick the licenses you wish to use for vSphere and NSX-T (page redacted to not show license info).
Review the object names to be used with the Workload Domain deployment and click Next.
Review the overall Workload Domain deployment specification and then click Finish to kick off the deployment process.
Typical deployments take around an hour to complete. Once the Workload Domain has been built, we can see it within SDDC Manager, where it is indeed using the vVol storage option as shown here:
Upon logging in to the Workload Domain vCenter instance, we can see that the correct FlashArray has been added as a Storage Provider.
However, for failover and performance reasons, it is required to add the second FlashArray controller's VASA provider as a Storage Provider as well. We outline that procedure in the next section.
Complete VASA Registration with Pure Storage vSphere Plugin
The first step is to install the Pure Storage vSphere Plugin. There are multiple ways to install the plugin which can be found here.
Note that it is not required to install the Pure Storage Plugin for the vSphere Client, but it is generally recommended. There are many other ways to register the VASA provider besides the plugin--such as manually in the vSphere Client, PowerShell, or vRealize Orchestrator. Find more information here.
With the plugin installed, the next step is to register an array with it. Click on Menu and then Pure Storage.
Click on the + Add button to register an array.
Arrays can be added individually or via Pure1 if you have authenticated against it. The procedure for authenticating and then importing arrays via Pure1 is available here.
To add a Single array, we break out each field below.
- Click on Add a Single Array.
- Provide a descriptive Array Name.
- Provide the FlashArray management VIP IP address, or ideally its corresponding FQDN.
- Provide the pureuser username and password.
- Click on Submit to add the array.
Next, select the array we just added and select the Register Storage Provider button.
To register the Storage Provider, we recommend providing the FlashArray username and password created specifically for VASA use, then selecting the Workload Domain vCenter. Click Register to complete the process.
Returning to the Storage Providers registered against our Workload Domain vCenter instance, we can see that both FlashArray controllers have been added. The Workload Domain is now ready for production use.
Narrated Technical Demo Video