Pure Technical Services

Tanzu User Guide: vSphere Namespace Overview and Setup


Introduction

VMware Tanzu enables vSphere administrators to create, manage and delete container-based Kubernetes workloads side-by-side with their existing virtual machine environments from a familiar single pane of glass.  This common framework greatly simplifies daily operations while providing a simple path for migrating applications to and from a container-based ecosystem as business needs evolve.  Users of the Pure Storage vSphere Plugin and other VMware integration points with Pure Storage will benefit, as the functionality provided by the storage plugin applies to both traditional virtual machines and containers.  Replication from an on-premises FlashArray to Cloud Block Store in AWS or Azure for use with public cloud solutions like Azure VMware Solution (AVS) or VMware Cloud (VMC) helps realize the many benefits of a modern hybrid-cloud infrastructure.

This user guide presents technical guidance and best practices on how storage is consumed, backed up and restored within the VMware Tanzu ecosystem.  As Kubernetes, and by extension Tanzu, iterates at a rapid pace, please check this user guide often, as supported features and functions will be added as new versions of Kubernetes, Tanzu and Pure Storage are released.  The scope of this guide is purposefully narrow: it specifically covers how Pure Storage interacts with, and is consumed by, VMware Tanzu.

The following resources provide insight into Pure Service Orchestrator (PSO), generic Kubernetes and Portworx:

Pure Service Orchestrator

Kubernetes Concepts

Portworx on Kubernetes

To begin, we will cover some basic concepts, definitions and examples of how the storage layer on the Pure Storage FlashArray interacts with VMware Tanzu when setting up and managing vSphere Namespaces and contexts.

A prerequisite for the remainder of this guide is that Workload Management in vSphere has been enabled and is operational.

To see the steps for enabling vSphere Workload Management with the FlashArray, please visit the following resources:

Enabling Workload Management KB Article

Enabling Workload Management Demo Videos

Namespace Overview and Setup

A Namespace (ns) represents an isolated pool of resources that the VMware administrator creates for Kubernetes developers and users to access, build and manage their container environments.  In many ways, it is similar to a vSphere Resource Pool.  The Namespace also serves as the gatekeeper for user access, reports resource usage and optionally allows limits to be set and applied against it.  Within the kubectl CLI, a Context is distinct from a Namespace: a context has an individual user associated with it, while the Namespace represents the larger aggregated pool of resources that groups of users can access.
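In kubeconfig terms, a context simply ties together a cluster endpoint, a user identity and a namespace.  A minimal sketch of what the kubectl vsphere login plugin writes into ~/.kube/config might look like the following (all names, addresses and the token are illustrative, not taken from a real environment):

```yaml
# Hypothetical kubeconfig excerpt: the context "tkc-test" binds a user
# identity to the Supervisor Cluster endpoint and a vSphere Namespace.
apiVersion: v1
kind: Config
clusters:
- name: supervisor                      # illustrative cluster name
  cluster:
    server: https://192.0.2.10:6443     # illustrative control plane address
users:
- name: devuser@vsphere.local           # illustrative SSO user
  user:
    token: <redacted>
contexts:
- name: tkc-test
  context:
    cluster: supervisor
    user: devuser@vsphere.local
    namespace: tkc-test                 # the vSphere Namespace this context targets
current-context: tkc-test
```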

Namespace Creation

To begin, click on the vSphere Menu and select Workload Management.

ns-create1.png

Make sure that the Namespaces tab is selected and then click on New Namespace.

ns-create2.png

Select the vCenter Cluster where Workload Management is enabled and where you want to create the new Namespace.  Provide the Namespace a DNS-compliant Name and optionally give it a brief Description to help identify its usage then click on the Create button to finish.

ns-create5.png

The newly created Namespace Summary tab shown below provides important monitoring and management functions for vSphere administrators to report, allocate and access the Namespace:

ns-create4.png

The six main tabs are summarized here, with more detail on items 2-4 covered in the remainder of this guide.  vSphere Pods and Tanzu Kubernetes Clusters (items 5 and 6) will have their own dedicated KB articles.

  1. Status: Provides Kubernetes service status information, the cluster and vCenter instances the Namespace belongs to, and useful URL links to CLI tools.
  2. Permissions: Who can view and who can edit the Namespace through the cluster API endpoint provided via the Workload Management Supervisor Cluster.
  3. Storage: Which StorageClass or StorageClasses defined via vSphere SPBM are assigned to the Namespace.
  4. Capacity and Usage Limits:  How much CPU, memory and storage on a per datastore basis can be used by the Namespace.
  5. vSphere Pods:  vSphere Pods are supported only on NSX-T backed Workload Management instances (available with VMware Cloud Foundation).  This summary screen shows how many are running and what state they are in.
  6. Tanzu Kubernetes Clusters:  This shows the number of Tanzu Kubernetes Cluster (TKC) instances running underneath this Namespace and also provides the ability to change the Content Library associated with them and view more detail about the TKC components.  

Namespace Permissions

Adding one or more developers or groups of developers to the Namespace gives them the ability to access it with the kubectl command via the Supervisor Cluster control plane node IP or FQDN.  Depending on the role assigned, developers can build, monitor and/or delete container deployments.

To add a user to the Namespace, start by clicking on the Add Permissions button on the Permissions window.  

ns-mgmt1.png

Next, choose your Identity Source from the pull-down menu.  The Identity Source can be the local vSphere SSO (the vsphere.local user in our example) or something like Active Directory if that has been integrated with your vCenter instance.

Once the proper Identity Source has been selected, search for the user or group that you want to add for access and select it.

Finally, give them the desired Role.  As you might expect, Can view gives read-only permissions to the Namespace while Can edit gives administrative rights to the Namespace.

ns-mgmt4.png

Click on OK when you have made the desired selections.

ns-mgmt5.png
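Under the covers, the Can view and Can edit roles surface inside the Namespace as ordinary Kubernetes RBAC bindings against the built-in view and edit ClusterRoles.  A hedged sketch of what an equivalent RoleBinding might look like (the binding name and subject name are illustrative, not the exact objects Tanzu creates):

```yaml
# Illustrative RoleBinding: grants the built-in "edit" ClusterRole
# (roughly vSphere's "Can edit") to a developer within the Namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: devuser-can-edit                # hypothetical binding name
  namespace: tkc-test
subjects:
- kind: User
  name: devuser@vsphere.local           # hypothetical SSO user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                            # use "view" for read-only access
  apiGroup: rbac.authorization.k8s.io
```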

The Control Plane IP address can be found in the Workload Management menu under the Clusters tab. 

ns-mgmt8.png

Use this IP address (or an FQDN assigned to it on your DNS server) to connect to the Namespace we added the user to in this section.

An example of how to authenticate to the Tanzu cluster is shown in this code snippet:

# kubectl vsphere login --server <Control Plane Node> --vsphere-username <user@domain> --insecure-skip-tls-verify

The user then provides their password to authenticate to the cluster.

Logged in successfully.

You have access to the following contexts:
   <Control Plane Node>
   tkc-test

 

In this example we see we have access to the tkc-test context as expected.  

To select our new context to use, enter the following command:

# kubectl config use-context tkc-test
Switched to context "tkc-test".
#

Next, let's see what storageclass is available to use:

$ kubectl describe namespace tkc-test
Name:         tkc-test
Labels:       vSphereClusterID=domain-c8
Annotations:  ls_id-0: 4a03e9a0-beea-4198-bbf1-ce0516653567
              ncp/extpoolid: domain-c8:aeb518f4-209a-4d70-87ae-4e267b5c9338-ippool-10-21-132-129-10-21-132-190
              ncp/router_id: t1_06f663d9-4330-4651-b8cc-62613a48ffc7_rtr
              ncp/snat_ip: 10.21.132.130
              ncp/subnet-0: 10.244.0.16/28
              vmware-system-resource-pool: resgroup-3009
              vmware-system-vm-folder: group-v3010
Status:       Active

No resource quota.

No LimitRange resource.

This output is expected: since we have not yet assigned any SPBM policies to our Namespace, no storageclass is available for use.  Tanzu translates SPBM policies assigned to the Namespace into usable storageclasses for permitted users.  We will return to the vSphere UI in the next section of this guide to assign SPBM policies and resolve this.

Namespace Storage

In this section we will show the relationship between an SPBM policy and a storageclass for VMware Tanzu.  The prerequisite for this section is that one or more SPBM policies for VMFS and/or vVols has been previously created.  These KB articles will walk through the procedure and differences between SPBM for vVols and VMFS on the FlashArray:

SPBM with vVols

SPBM with VMFS

A storageclass determines what underlying storage and what storage features are available to be consumed by a given Namespace.  Both VMFS and vVols can be used with VMware Tanzu.
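When an SPBM policy is assigned to a Namespace, Tanzu generates a corresponding storageclass backed by the vSphere CSI driver.  A sketch of what such a generated object might look like is shown here; the storagepolicyname parameter value is an assumption, while the provisioner and other fields mirror the kubectl get storageclass output later in this guide:

```yaml
# Sketch of a Tanzu-generated StorageClass.  The name is derived from
# the SPBM policy name; the provisioner is the vSphere CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wld1-vvols-k8
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "wld1-vvols-k8"   # assumed: the SPBM policy name
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
```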

To assign one or more SPBM policies to the Namespace, start by clicking on the Add Storage button on the Namespace Summary tab.

ns-mgmt2.png

All SPBM policies associated with the vCenter instance that can be used with your Namespace will be displayed.  To add them, simply select the checkbox or checkboxes for the policies that you wish to use with the Namespace.  You can optionally expand each policy to see which datastore(s) are mapped to it in order to confirm that it is the policy you wish to use.  In our example we will select a VMFS SPBM policy and a vVols SPBM policy and then click on the OK button to assign them.

ns-mgmt6.png

Successful policy assignment is confirmed in the Storage window.  Note that by default no limits/quotas are assigned to them.  We will cover assigning limits in the next section.

ns-mgmt9.png

With SPBM policies now added to the Namespace, we can re-run the kubectl describe namespace command and see that storageclass resources are now available to the sample context.

$ kubectl describe namespace tkc-test
Name:         tkc-test
Labels:       vSphereClusterID=domain-c8
Annotations:  ls_id-0: 4a03e9a0-beea-4198-bbf1-ce0516653567
              ncp/extpoolid: domain-c8:aeb518f4-209a-4d70-87ae-4e267b5c9338-ippool-10-21-132-129-10-21-132-190
              ncp/router_id: t1_06f663d9-4330-4651-b8cc-62613a48ffc7_rtr
              ncp/snat_ip: 10.21.132.130
              ncp/subnet-0: 10.244.0.16/28
              vmware-system-resource-pool: resgroup-3009
              vmware-system-vm-folder: group-v3010
Status:       Active

Resource Quotas
 Name:                                                       tkc-test-storagequota
 Resource                                                    Used  Hard
 --------                                                    ---   ---
 wld1-vmfs-k8.storageclass.storage.k8s.io/requests.storage   0     9223372036854775807
 wld1-vvols-k8.storageclass.storage.k8s.io/requests.storage  0     9223372036854775807

The Hard value of 9223372036854775807 (the maximum 64-bit integer) simply indicates that no storage quota has been set yet.  Depending on the vCenter permissions assigned to the user accessing the context, the kubectl get storageclass command will also show the available storageclasses, their reclaimpolicy and a few other important characteristics for the context, as shown in the example below.  If the user does not have sufficient permissions, the command will throw an error saying the user lacks sufficient permissions to access the API.  More information on vCenter storage permissions with Tanzu can be found here.

# kubectl get storageclass
NAME                       PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
wld1-vmfs-k8               csi.vsphere.vmware.com   Delete          Immediate           true                   23h
wld1-vvols-k8              csi.vsphere.vmware.com   Delete          Immediate           true                   23h

Namespace Resource and Object Limits

The Capacity and Usage panel within the vSphere Namespace allows the vSphere administrator to set how much CPU, memory and/or storage will be available to the Namespace and its associated user contexts.  This is an extremely useful feature as it prevents individual Kubernetes applications from negatively impacting other running workloads on the same vSphere cluster.  It also provides insights for the vSphere administrator to differentiate between those Namespaces and clusters which have higher resource demands and those that are underutilized.

To set limitations against the Namespace, click on the Edit Limits button.

ns-mgmt3.png

That will spawn the below window:

ns-mgmt7.png

  1. CPU: Sets the maximum amount of CPU that the Namespace can consume.  Available units are MHz and GHz.
  2. Memory: Sets the maximum amount of ESXi host RAM that the Namespace can consume.  Available units are MB and GB.
  3. Storage: Sets the overall maximum amount of storage space that the Namespace can consume, including images and persistent volumes, across all SPBM policies associated with the Namespace.  Available units are MB and GB.
  4. In addition, storage limits can be set on a per-SPBM-policy basis.
  5. Click the OK button once one or more Resource Limits has been set.
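The storage limits set in this dialog surface inside the Namespace as an ordinary Kubernetes ResourceQuota.  A sketch of roughly what that object contains (the quota name and values mirror the tkc-test example used throughout this guide):

```yaml
# Sketch of the ResourceQuota backing per-policy storage limits.
# Each assigned storageclass gets its own requests.storage entry.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tkc-test-storagequota
  namespace: tkc-test
spec:
  hard:
    # 6Gi limit on the vVols-backed storageclass; the VMFS-backed
    # storageclass entry is left unlimited in this example.
    wld1-vvols-k8.storageclass.storage.k8s.io/requests.storage: 6Gi
```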

Below is an example of how Resource Limits that are set within vSphere can be viewed by a Tanzu user and how they are enforced.

First, we set some arbitrary limits for CPU, RAM and a limit for one of our two SPBM policies.

limit2.png

When we describe the Namespace, this time we see new annotations that show our newly added CPU/memory limits, as well as a new storage quota limit against our vVols-based storageclass.

$ kubectl describe ns tkc-test
Name:         tkc-test
Labels:       vSphereClusterID=domain-c8
Annotations:  ls_id-0: 4a03e9a0-beea-4198-bbf1-ce0516653567
              ncp/extpoolid: domain-c8:aeb518f4-209a-4d70-87ae-4e267b5c9338-ippool-10-21-132-129-10-21-132-190
              ncp/router_id: t1_06f663d9-4330-4651-b8cc-62613a48ffc7_rtr
              ncp/snat_ip: 10.21.132.130
              ncp/subnet-0: 10.244.0.16/28
              vmware-system-resource-pool: resgroup-3009
              vmware-system-resource-pool-cpu-limit: 5.2310
              vmware-system-resource-pool-memory-limit: 25600Mi
              vmware-system-vm-folder: group-v3010
Status:       Active

Resource Quotas
 Name:     tkc-test
 Resource  Used  Hard
 --------  ---   ---

 Name:                                                       tkc-test-storagequota
 Resource                                                    Used  Hard
 --------                                                    ---   ---
 wld1-vmfs-k8.storageclass.storage.k8s.io/requests.storage   0     9223372036854775807
 wld1-vvols-k8.storageclass.storage.k8s.io/requests.storage  0     6Gi

No LimitRange resource.

To see storage quota enforcement in action, below is a simple yaml file that creates a 30GB persistent volume claim using the vVols storageclass above, which has a 6GB quota applied to it.

# cat ./mysql-storage.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-vvols-mysql
spec:
  storageClassName: wld1-vvols-k8
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi

Attempting to apply the yaml file yields the following error message, informing the Tanzu user that they have requested more space than has been made available to that storageclass.

# kubectl apply -f ./mysql-storage.yaml
Error from server (Forbidden): error when creating "./mysql-storage.yaml": persistentvolumeclaims "pvc-vvols-mysql" is forbidden: exceeded quota: tkc-test-storagequota, requested: wld1-vvols-k8.storageclass.storage.k8s.io/requests.storage=30Gi, used: wld1-vvols-k8.storageclass.storage.k8s.io/requests.storage=0, limited: wld1-vvols-k8.storageclass.storage.k8s.io/requests.storage=6Gi

If we return to the vSphere UI and increase the quota to 50GB as shown below, we should be able to successfully apply the yaml file.  The alternative (not shown) is to decrease the storage request in the yaml file itself to something below 6GB.

limit3.png

Now when we apply the yaml file we can see that the PVC is successfully created.

$ kubectl apply -f ./mysql-storage.yaml
persistentvolumeclaim/pvc-vvols-mysql created

Taking an excerpt from the namespace description with the PVC now present, we can see that the 30GB persistent volume claim is used against the new 50GB quota:

 Name:                                                       tkc-test-storagequota
 Resource                                                    Used  Hard
 --------                                                    ---   ---
 wld1-vmfs-k8.storageclass.storage.k8s.io/requests.storage   0     9223372036854775807
 wld1-vvols-k8.storageclass.storage.k8s.io/requests.storage  30Gi  50Gi


The other method by which vSphere administrators can set limitations on a Namespace is through setting maximum object counts for common Kubernetes components.  To access this menu, go to the Namespace in vCenter and select the following:

limit4.png

The next screenshot shows the various Kubernetes objects that can optionally have limits set against them.  For our example we will set a limit of 5 persistent volume claims for this Namespace:

limit6.png

Once this limit has been applied, we can see the available number of persistent volume claims from the Namespace description command:

Resource Quotas
 Name:                         tkc-test
 Resource                      Used  Hard
 --------                      ---   ---
 count/persistentvolumeclaims  1     5
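Object count limits are enforced the same way, via a ResourceQuota on the Namespace.  A sketch of the equivalent object for the 5-PVC limit above (the quota name matches the output shown):

```yaml
# Sketch of a ResourceQuota enforcing an object count limit:
# at most 5 PersistentVolumeClaims may exist in the Namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tkc-test
  namespace: tkc-test
spec:
  hard:
    count/persistentvolumeclaims: "5"
```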