
How To: Deploy VCF Workload Domains with vVols and iSCSI as Principal Storage


In VMware Cloud Foundation version 4.1, vVols have taken center stage as a Principal Storage type available for Workload Domain deployments.  This inclusion in one of VMware's products of focus should eliminate any doubt about how important vVols is for VMware and their ecosystem partners.  This technical KB will walk through the steps required to deploy a Workload Domain using iSCSI and vVols with the Pure Storage FlashArray.  vVols with iSCSI is particularly exciting as this is the first time the iSCSI protocol can be used as a Principal Storage type within VCF.

Prerequisites

  • VMware Cloud Foundation Management Domain deployed with VCF/Cloud Builder version 4.1.
  • FlashArray with iSCSI connectivity
  • FlashArray running Purity 5.3.6+
    • NOTE:  do not use Purity version 5.3.9, as a bug prevents vVols deployment from completing; this is resolved in 5.3.10.
  • Three or more ESXi hosts with the following characteristics:
    • iSCSI connectivity.
    • ESXi version 7.0.1 or above
    • Setup for use in VCF 4.1 per VMware's Documentation
    • Added as a host object to the FlashArray
    • Hosts should not have any shared VMFS datastores attached to them.  (A private volume like boot from SAN is fine, though).
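
The version prerequisites can be spot-checked from the command line before moving on.  A minimal sketch (the prompts and host names are examples, and output columns can vary by release):

# On each ESXi host, confirm the version is 7.0.1 or later
[root@esxi-01:~] esxcli system version get

# On the FlashArray, confirm Purity is 5.3.6 or later (and not 5.3.9)
pureuser@FlashArray> purearray list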

With the above prerequisites confirmed, the process of building a Workload Domain using vVols and iSCSI can be broken down into these steps that we will detail in the remainder of this article:

  1. Register FlashArray VASA Provider in SDDC Manager 
  2. Add a Network Pool to SDDC Manager with iSCSI
  3. Enable and Configure Software iSCSI on ESXi Hosts
  4. Create a Host Group and attach it to the Pure Storage Protocol Endpoint on the FlashArray
  5. Commission ESXi Hosts within SDDC Manager for Workload Domain
  6. Create the Workload Domain with the vVols Storage Type
  7. Complete VASA registration and Set iSCSI Best Practices with Pure Storage vSphere Plugin

Add FlashArray VASA Provider in SDDC Manager

A cornerstone of building vVols-based Workload Domains is registering a Storage Provider with the Workload Domain vCenter instance during the deployment process.  Storage Providers leverage VASA (vSphere APIs for Storage Awareness), which is what enables vCenter to deploy vVols and take advantage of the numerous benefits they provide with a FlashArray.  As of VMware Cloud Foundation 4.1, VASA Storage Providers can be added and managed in SDDC Manager by going to the Administration > Storage Settings menu.


Within the Storage Settings menu, select + Add VASA Provider to open the wizard for that task.


The fields required for adding a VASA provider are broken out individually below to show what information is needed to register the FlashArray in SDDC Manager.

  1. Provide a descriptive name for the VASA provider.  It is recommended to use the FlashArray name and append it with -ct0 or -ct1 to denote which controller the entry is associated with.
  2. Provide the URL for the VASA provider.  This cannot be the management VIP of the array; instead, this field must be the management IP address associated with one of the controllers.  The URL is also required to have the VASA port and version.xml appended to it.  The format for the URL is:  https://<IP of FlashArray Controller>:8084/version.xml
  3. Provide a FlashArray username with the arrayadmin role.  The procedure for how to create such a user can be found here.  While the pureuser account can be used, we recommend creating and using a separate FlashArray local user for VASA operations.
  4. Provide the password for the FlashArray username to be used.
  5. Container Name must be Vvol container.  Note that this value is case-sensitive.  This is the default container; support for customized container names will come in a future release.
  6. For Container Type, select iSCSI from the drop-down menu to use iSCSI.
  7. Once all entries are completed, click Save.
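
Before clicking Save, it is worth confirming that the VASA endpoint actually responds at the URL entered in step 2.  A quick sketch with curl (the controller IP is a placeholder; -k is used because the array may present a self-signed certificate):

# A healthy endpoint returns a small XML document of supported VASA versions
curl -k https://<IP of FlashArray Controller>:8084/version.xml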


Note that there is a current bug in SDDC Manager where updates to this information do not take effect unless the JSON configuration files are manually edited.  To avoid having to do that, verify the entries above for typos before proceeding.  This issue has been reported to VMware.

Note that each FlashArray has two VASA providers, one on each controller.  SDDC Manager only offers the ability to register a single VASA provider when deploying a new Workload Domain, so register one VASA provider per FlashArray with SDDC Manager (which one is up to you; consistency is good, so always choose CT0, for instance) and ensure that the second VASA provider is registered post-deployment.  Instructions for that process are below.

This completes the SDDC Manager VASA Registration component of the process and we can now proceed to the next step.

Add a Network Pool to SDDC Manager with iSCSI

Hosts commissioned for an iSCSI vVols Workload Domain must be associated with a Network Pool that includes both iSCSI and vMotion networks.  To create one, go to Administration > Network Settings in SDDC Manager, click + Create Network Pool, provide a descriptive name, select both the vMotion and iSCSI network types, and supply the VLAN ID, MTU, subnet, gateway, and an included IP address range for each network.
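
Network pools can also be created through the SDDC Manager public API.  The sketch below is illustrative only: the /v1/network-pools endpoint is part of the VCF public API, but every value shown (names, VLANs, subnets, ranges) is a placeholder, and the payload schema should be verified against the VCF API reference for your release:

# Create a network pool carrying both vMotion and iSCSI networks (example values)
curl -k -X POST https://sddc-manager.example.com/v1/network-pools \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{
    "name": "iscsi-pool-01",
    "networks": [
      {"type": "VMOTION", "vlanId": 100, "mtu": 9000, "subnet": "10.0.100.0",
       "mask": "255.255.255.0", "gateway": "10.0.100.1",
       "ipPools": [{"start": "10.0.100.10", "end": "10.0.100.50"}]},
      {"type": "ISCSI", "vlanId": 200, "mtu": 9000, "subnet": "10.0.200.0",
       "mask": "255.255.255.0", "gateway": "10.0.200.1",
       "ipPools": [{"start": "10.0.200.10", "end": "10.0.200.50"}]}
    ]
  }'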

Enable and Configure Software iSCSI on ESXi Hosts

Prior to commissioning ESXi hosts for use within a Workload Domain backed by iSCSI vVols, the member hosts must have software iSCSI enabled on them and have their associated IQNs added to the FlashArray.  This section will detail how to accomplish this task.

First, connect to the ESXi Host Client of an ESXi host via its FQDN or IP address and log in as root.  Select Host and then Storage on the left side of the GUI.


Pick the Adapters tab and then click on Software iSCSI to open the wizard to enable that feature.


The relevant Configure iSCSI fields that need to be populated are:

  1. iSCSI enabled:  Toggle this to Enabled.
  2. Name & Alias:  This is the iSCSI IQN for the ESXi host.  Copy this value once it is generated, as it will be used in a subsequent step.
  3. Dynamic Targets:  Click on Add dynamic target and then enter the IP address of an iSCSI port on the FlashArray.
  4. Once the above three fields have been populated, click on Edit Settings.


Click on Advanced Settings, and disable Inherit from parent for both LoginTimeout and DelayedAck.  Change LoginTimeout to 30 seconds and disable DelayedAck.


Click Save then Save Configuration to complete.
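
The same configuration can be applied from the ESXi shell with esxcli.  The sketch below assumes the software iSCSI adapter enumerates as vmhba64; confirm the actual adapter name with esxcli iscsi adapter list before running the remaining commands:

# Enable the software iSCSI adapter
esxcli iscsi software set --enabled=true
# Identify the software iSCSI adapter name (vmhba64 is assumed below)
esxcli iscsi adapter list
# Add a FlashArray iSCSI port as a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=<FlashArray iSCSI IP>
# Apply the best practices from above: LoginTimeout of 30, DelayedAck disabled
esxcli iscsi adapter param set --adapter=vmhba64 --key=LoginTimeout --value=30
esxcli iscsi adapter param set --adapter=vmhba64 --key=DelayedAck --value=false
# Display the adapter details, including the host IQN needed in the next step
esxcli iscsi adapter get --adapter=vmhba64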

With iSCSI successfully enabled on the host, we can now add it to the FlashArray to complete the storage layer connection.  In the FlashArray GUI, click on Storage and then the Hosts tab.


Select the Host (or create one) to associate with the ESXi host we just enabled iSCSI on.  Keeping the host entry on the FlashArray consistent with the ESXi hostname is strongly recommended.  Click on the vertical ellipsis in the Host Ports panel on the right and then click on Configure IQNs...


Copy and paste the ESXi host IQN that was shown in the Name & Alias field when Software iSCSI was enabled on the ESXi host earlier in this section, then click Add.


Repeat the above process for all other iSCSI ESXi hosts you wish to add to the Workload Domain.
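
When adding many hosts, the Purity CLI can be faster than the GUI for this step.  A minimal sketch (the host name and IQN below are examples; substitute the IQN copied from each ESXi host):

# Create a new host object with its iSCSI IQN
pureuser@FlashArray> purehost create --iqnlist iqn.1998-01.com.vmware:esxi-01-1a2b3c4d esxi-01
# Or append the IQN to an existing host object
pureuser@FlashArray> purehost setattr --addiqnlist iqn.1998-01.com.vmware:esxi-01-1a2b3c4d esxi-01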

Create a Host Group and attach it to the Pure Storage Protocol Endpoint on the FlashArray

A protocol endpoint (PE) is a logical I/O proxy that establishes the data path between the ESXi hosts and the FlashArray for using vVols.  Establishing this connection at the FlashArray level is a simple but required step before the VMware Cloud Foundation Workload Domain can be successfully deployed.

As mentioned earlier, it is expected that the ESXi hosts to be used in the Workload Domain have already been added to the FlashArray as Host objects.  Another required step is to create a Host Group and add those hosts to it (a CLI sketch for this follows the example below).

From there, select the Host Group, click on the vertical ellipsis in the Connected Volumes panel, and then click on the Connect... button.


Click the checkbox next to pure-protocol-endpoint and then click Connect to complete the operation.


Inspect the Host Group to confirm that the pure-protocol-endpoint has been successfully connected to it.


Alternatively, the Purity CLI can be used to connect the PE to a Host Group via the following command, which also provides confirmation of PE connectivity to each host within the host group:

pureuser@FlashArray> purevol connect --hgroup Bend pure-protocol-endpoint
Name                    Host Group  Host           LUN
pure-protocol-endpoint  Bend        sn1-m4-ch1-03  254
pure-protocol-endpoint  Bend        sn1-m4-ch1-04  254
pure-protocol-endpoint  Bend        sn1-m4-ch1-02  254
pure-protocol-endpoint  Bend        sn1-m4-ch1-01  254
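
The Host Group itself can likewise be created and populated from the CLI.  A sketch using the host names shown above (run this before the purevol connect command if the group does not already exist):

pureuser@FlashArray> purehgroup create --hostlist sn1-m4-ch1-01,sn1-m4-ch1-02,sn1-m4-ch1-03,sn1-m4-ch1-04 Bend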

Now that the PE has been connected to the host group, we can move forward with commissioning the ESXi hosts into SDDC Manager for use.

Commission ESXi Hosts within SDDC Manager for Workload Domain

With all preparatory work completed for our vVols-based Workload Domain, we can now proceed to commission the hosts into SDDC Manager.

To get started, click on Hosts under Inventory and then select the Commission Hosts button on the top-right.


The various fields and what to populate them with are broken out below.

  1. ESXi Host FQDN:  Enter the FQDN of the ESXi host you wish to commission.
  2. Storage Type:  Select the vVol option.
  3. vVol Storage Protocol Type:  Pick iSCSI from the drop-down menu of available protocols.
  4. Network Pool Name:  Select a Network Pool that has both iSCSI and vMotion network ranges like the example from an earlier section.
  5. Provide the root username for the ESXi host.
  6. Provide the root password for the ESXi host.
  7. Click the Add button to save the host entry and then repeat the above process for as many additional hosts as you will be adding.  Note that a JSON template can also be populated and imported to commission ESXi hosts in batches; a sketch of such a template follows this list.
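
The bulk-import file is a JSON document describing each host to commission.  The sketch below is illustrative only; the field names are assumptions based on the commissioning form, so download the template offered in the Commission Hosts wizard for the authoritative schema for your VCF release:

{
  "hostsSpec": [
    {
      "hostfqdn": "esxi-01.example.com",
      "username": "root",
      "password": "********",
      "storageType": "VVOL",
      "vvolStorageProtocolType": "ISCSI",
      "networkPoolName": "iscsi-pool-01"
    }
  ]
}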

Once all hosts have been added, the following actions are needed to finish the commissioning process.


  1. Select all or individual hosts you wish to validate.
  2. Click on Confirm FingerPrint.
  3. Click on Validate All to precheck the hosts to confirm that they are ready for use in SDDC Manager and Workload Domains.
  4. Once validation has completed successfully, click on Next to proceed.

Confirm that the hosts are as expected (making certain that Storage Type is VVOL) before clicking the Commission button to add them to SDDC Manager inventory.


With the ESXi hosts now available in SDDC Manager inventory, we can use them to build a vVols-based Workload Domain.

Create a vVols-based Workload Domain

All of the previous steps come together in this section when we create our Workload Domain as this next procedure will showcase.

To get started, select Workload Domains under the Inventory menu item and then click on + Workload Domain and VI - Workload Domain.


In the first window that appears, select vVol from the available Storage Selections and then click on Begin.


Provide a descriptive Virtual Infrastructure and Organization Name.  For our vVols-based Workload Domain we will later be using Workload Management, so we select the option to Enable vSphere Lifecycle Manager Baselines rather than vSphere Lifecycle Manager images, and then click on Next.


Provide a Cluster Name for the Workload Domain and click on Next.

Input the vCenter FQDN.  This should have already been added to your DNS server, which is confirmed when the IP address, subnet mask, and gateway all auto-populate once the FQDN resolves correctly.  Provide the vCenter Root Password and then click Next.


NSX-T Deployment parameters are provided in more detail below:


  1. Host Overlay (TEP) VLAN needs to be provided.  This VLAN should have an available DHCP scope from which the ESXi hosts can obtain IP addresses.  This is a critical piece for Workload Management/vSphere with Kubernetes to function properly, and it should also be routable to the Edge TEP network on a separate VLAN.
  2. Similar to vCenter, all NSX-T component FQDNs should already be added to DNS; when the FQDNs are entered, the IP addresses associated with them should resolve automatically.
  3. Provide a strong Admin Password for NSX-T.
  4. Click on Next to proceed.

The vVol Storage section allows us to specify which array and protocol we wish to associate with the Workload Domain vVol datastore.


  1. Select the iSCSI protocol from the drop-down list.
  2. Select the name of the VASA Provider we entered in the first section of this KB article.
  3. Pick Vvol container as the Storage Container.  Note that this is currently the only supported container option.
  4. Pick the FlashArray user account added during VASA registration.
  5. Provide a descriptive Datastore Name for the vVol datastore that will be deployed with the Workload Domain.
  6. Click the Next button to proceed.

Hosts that match the vVol storage protocol (iSCSI in our example) will be shown as available for use with the Workload Domain.  Select at least three hosts and then click Next to proceed.


Pick the licenses you wish to use for vSphere and NSX-T (page redacted to not show license info).

Review the object names to be used with the Workload Domain deployment and click Next.


Review the overall Workload Domain deployment specification and then click Finish to kick off the deployment process.


Typical deployments can take around an hour to complete.  Once the Workload Domain has been built, we can see it within SDDC Manager; we also see that it is indeed using the vVol storage option as shown here:


Upon logging in to the Workload Domain vCenter instance, we can see that the correct FlashArray has been added as a Storage Provider.

However, for failover and performance considerations, we highly recommend registering the second FlashArray controller as a Storage Provider as well.  We will outline that procedure in the next section.


Complete VASA registration and Set iSCSI Best Practices with Pure Storage vSphere Plugin

The first step is to install the Pure Storage vSphere Plugin.  There are multiple ways to install the plugin, which are covered here.

Note that it is not required to install the Pure Storage Plugin for the vSphere Client, but it is generally recommended.  There are many other ways to register the VASA provider besides the plugin, such as manually in the vSphere Client, with PowerShell, or with vRealize Orchestrator.  Find more information here.

With the plugin installed, the next step is to register an array with it.  Click on Menu and then Pure Storage.


Click on the + Add button to register an array.


Arrays can be added individually or via Pure1 if you have authenticated against it.  The procedure for authenticating and then importing arrays via Pure1 is available here.

To add a single array, we break out each field below.


  1. Click on Add a Single Array.
  2. Provide a descriptive Array Name.
  3. Give the FlashArray IP address (the management VIP is recommended).
  4. Provide the pureuser username and password.
  5. Click on Submit to add the array.

Next, select the array we just added and select the Register Storage Provider button.


To register the Storage Provider, we recommend providing the FlashArray username and password created specifically for VASA use, and then selecting the Workload Domain vCenter.  Select Register to complete the process.


Returning to the Storage Providers registered against our Workload Domain vCenter instance, we can see that both FlashArray controllers have been added.


Narrated Demo Video