
Tanzu User Guide: Enabling vSphere Workload Management



Prerequisites for Using VMware Workload Management with the FlashArray

Before using this guide, it is important to review and confirm that the conditions in the prerequisites section below are met.  Much of the setup for these prerequisites can be found in either the User Guides or How-To sections of the VMware Platform Guide.

Enabling Kubernetes with the vSphere Workload Management option introduced in vSphere 7.0 assumes that a considerable amount of work has already been accomplished prior to following the steps in this section.  The supported method for enabling Workload Management shown within this KB is to use VMware Cloud Foundation 4.0+ to automate the deployment of vCenter, NSX-T, and the underlying ESXi hosts in a consistent manner, ensuring infrastructure stability as users deploy containers and virtual machines side by side.

VMware Workload Management Requirements

  • VMware Cloud Foundation 4.0+ with Management Domain and SDDC Manager deployed via CloudBuilder.
  • vSphere with Kubernetes license feature applied for all hosts and vCenter (version 7.0+) in Workload Domain.
  • One or more Pure Storage FlashArray(s) with iSCSI and/or Fibre Channel running Purity 5.3.6+.  
  • Workload Domain using VMFS on FC Principal Storage option and/or iSCSI/vVols as Supplemental Storage.
  • Minimum of one VMFS volume tagged and referenced within a Storage Policy Based Management (SPBM) policy for Supervisor Cluster storage requirements.
  • Minimum of one Workload Domain built via SDDC Manager with an associated NSX-T instance.
  • Jumbo Frames (minimum 1600 MTU) enabled end-to-end in the environment; Workload Management with VMware Cloud Foundation will not function without this.  (A verification example follows this prerequisite list.)
  • Inter-VLAN routing of jumbo frames enabled at network layer.
  • At least one NSX Edge Node* 

*For production environments, a minimum of two Edge Nodes using the Large form factor is recommended for availability and performance.  If you are using the Edge Cluster Deployment wizard via SDDC Manager, a minimum of two nodes is required for completion, and the wizard will automate their deployment.  A video showing how to set up an edge cluster manually can be found here.

  • One NSX Edge Cluster#
  • One NSX Tier-0 Gateway#

# The Edge Cluster deployment wizard within SDDC Manager will provide automated deployment of these components.

  • 5 sequential IP addresses on the vCenter Management Network for the Kubernetes Supervisor VM Cluster
  • Minimum of 32 sequential IP addresses (/27 network) for Ingress Network on a routable VLAN^
  • Minimum of 32 sequential IP addresses (/27 network) for Egress Network on a routable VLAN^

^Note that the ingress and egress networks may be on the same VLAN.  One or more static IP addresses on the ingress and egress VLAN must be reserved for the NSX Edge Node Uplink associated with the Tier-0 gateway.
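For reference, a /27 network provides exactly 32 addresses.  Below is a quick sanity check of an ingress range, assuming the 10.21.132.64/27 CIDR implied by the example API endpoint address used later in this guide (the ipcalc utility may not be present on every system):

# A /27 spans 32 addresses; the CIDR below is an assumed example range.
ipcalc 10.21.132.64/27
# Expected span: 10.21.132.64 - 10.21.132.95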

  • TOR switch access if using BGP
    • In our examples below, we will assume that the typical user does not have top-of-rack switch access, so we will be using the static routing option instead.
  • An internal NTP server, or an accessible external NTP server (e.g. pool.ntp.org), that all components are synchronized to.
  • Pure Storage vSphere Plugin version 4.3+ installed on Workload Domain vCenter.
  • Optional:  FlashArray VASA provider registered and vVol datastore created on Workload Domain vCenter.  The vVol datastore should be tagged and referenced with an SPBM policy if vVols are desired for persistent volumes.
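Before proceeding, it is worth validating the jumbo frame prerequisite called out above.  A minimal sketch from an SSH session on an NSX-prepared ESXi host; the destination address below is a hypothetical peer TEP IP:

# -d disallows fragmentation; a 1572-byte payload plus ICMP/IP headers
# approximates the 1600 MTU floor.  Replace the IP with a peer host's TEP.
vmkping ++netstack=vxlan -d -s 1572 192.168.130.12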


Enabling vSphere Workload Management

This section shows an example of how to enable Workload Management using a Pure Storage FlashArray.  Before proceeding, please confirm that all steps listed in the Prerequisites section above have been completed, or Workload Management will not work properly.

To get started, click on the main vSphere Client menu and select the Workload Management option.  Note that this option is only available in vSphere 7 and later.

wm1.jpg

Click on the Clusters tab and select Add Cluster.

wm2.jpg

In the window that opens, one or more compatible clusters on which Workload Management can be enabled will be shown.  Select the desired cluster and click Next.

wm3.jpg

Depending on the use case, select the appropriate Control Plane size.  Smaller exploratory test/dev instances should use the Tiny or Small options, while larger production Kubernetes deployments should consider Medium or Large to have the appropriate amount of resources available.  Once the size is selected, click Next.

wm4.jpg

The next couple of steps in Workload Management enablement assign the correct networking components.  We will break out each entry in the networking section individually here:

  1. Network:  Select the VM Management Network
  2. Starting IP Address:  Enter the starting IP address that resides on the Management Network selected in the previous step.  Note that 5 consecutive IP addresses must be available; in this example, 10.21.143.225 - 10.21.143.229 are all free for use by the Supervisor Cluster.  (An availability check example follows this list.)
  3. Subnet Mask for Management Network.
  4. Gateway for Management Network.
  5. DNS Server IP or FQDN.  While optional, having this field populated is recommended.
  6. NTP Server:  It is recommended to set this field to the same source used by your ESXi hosts and vCenter instance to ensure all components remain in time sync.
  7. Optionally include any DNS Search Domains.
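Before committing these values, you may want to verify that the five consecutive management addresses from step 2 are actually free.  A simple sketch using this example's range, run from any machine on the Management Network:

# Any reply means the address is already in use and a different range is needed.
for i in $(seq 225 229); do
  ping -c 1 -W 1 10.21.143.$i >/dev/null 2>&1 && echo "10.21.143.$i is in use"
done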

wm5.jpg

Here are the relevant entries for the second section of Networking Information:

  1. Select the Distributed vSwitch that your Edge Cluster resides on.
  2. Select the NSX-T Edge Cluster you wish to use.
  3. Optionally enter the API Server Endpoint FQDN for Kubernetes.  This entry must be pre-populated in DNS and will be associated with the Ingress CIDR entry in field 7 below.  For this example, the API Server endpoint FQDN would have an IP address of 10.21.132.65 set in DNS.  (A DNS verification example follows this list.)
  4. Enter a DNS server that can be reached by the VDS selected in the first step.
  5. Pod CIDRs:  No need to change this from default unless that network range is already in use.
  6. Service CIDRs:  No need to change this from default unless that network range is already in use.
  7. Ingress CIDRs:  This needs to be a minimum of a /27 network to be usable.  The Ingress CIDR also needs to be on the same network as one of the Uplinks defined on the Tier-0 Gateway from the Edge Node VM.
  8. Egress CIDRs:  This also needs to be a minimum of a /27 network to be usable.  The Egress CIDR needs to be on the same network as one of the Uplinks defined on the Tier-0 Gateway from the Edge Node VM.  Note that in this example we are using a single VLAN for both ingress and egress.
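If you populated the API Server Endpoint FQDN in field 3, a quick lookup confirms the DNS record points into the Ingress CIDR.  The FQDN below is a placeholder; substitute the name you created in DNS:

nslookup k8s-api.example.com
# In this example, the answer should be 10.21.132.65 (inside the Ingress CIDR).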

With all of the above fields populated, click on Next to proceed.

wm6.jpg

The Supervisor Control Plane has multiple storage components associated with it.  This section of Workload Management is where we select the SPBM policy created earlier in this user guide.  Both vVols and VMFS are supported for the Supervisor Cluster.
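As an optional sanity check, the SPBM policies visible to vCenter can be listed from the command line before making a selection.  Below is a sketch using the open-source govc CLI, assuming a recent release that includes the storage.policy commands; all connection values are placeholders:

# govc is part of the vmware/govmomi project; credentials here are placeholders.
export GOVC_URL='https://vcenter.example.com'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='changeme'
export GOVC_INSECURE=1
govc storage.policy.ls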

Click on the Select Storage text.

wm7.jpg

From the Select Storage Policies window, pick a VMFS-based SPBM policy and click OK.

Repeat this procedure for Ephemeral Disks and Image Cache.

wm8.jpg

With a policy now associated with all three storage pieces of the Control Plane, click on Next.

wm9.jpg

The summary window gives you a chance to review how Workload Management will be deployed.  Click on Finish to begin enabling Workload Management; note that it can take anywhere from 30 minutes to an hour to complete.

The relevant log file for Workload Management can be viewed from an SSH session to vCenter by running the following command:

tail -f /var/log/vmware/wcp/wcpsvc.log

wm10.jpg
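To narrow in on problems during enablement, the same log can be filtered for error-level messages:

grep -i error /var/log/vmware/wcp/wcpsvc.log | tail -n 20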

Here we can see that Workload Management has been successfully enabled and that the API endpoint for Kubernetes is available at an IP address from our Ingress CIDR.

wm11.jpg
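As a final validation, you can log in to the new Supervisor Cluster with the kubectl vSphere plugin, which is downloadable from the control plane endpoint.  Below is a sketch using this example's ingress address and a placeholder SSO user:

# The server IP comes from this example's Ingress CIDR; the username is a placeholder.
kubectl vsphere login --server=10.21.132.65 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify
kubectl config get-contexts   # the Supervisor context should now be listed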

Workload Management Enablement Technical Video Demo