Tanzu User Guide: Persistent Storage Usage with Pure Storage
VMware Tanzu Overview and Prerequisites for Usage with Pure Storage
Using VMware vVols and VMFS within the Tanzu ecosystem combines VMware's premier technologies in powerful and important ways. While a key benefit of containers is that they are efficient and can run on virtually any infrastructure anywhere, a core requirement for them to be useful is the ability to generate, retain and re-use persistent application data on individual volumes. That requirement is what makes vVols in particular a perfect storage option to use with Tanzu. Because a single data vVol maps to a persistent volume (PV) within Tanzu, devops users and vSphere administrators now have a simple and granular storage choice for matching persistent data to its corresponding Kubernetes cluster. This KB article will step through some of the core storage concepts within Tanzu and show how they map to, and are consumed from, the Pure Storage FlashArray.
As we will present in this guide, using VMware Tanzu with Pure Storage works out of the box with no tuning required at the storage layer beyond our standard recommended VMware best practices. We encourage customers and prospective customers to use this guide to familiarize themselves with basic concepts around Kubernetes and how they are presented and consumed via vSphere and Tanzu. As familiarity is gained and production workloads are shifted from traditional virtual machine deployments to Tanzu, we highly suggest readers check out our Portworx offering, as Portworx provides enterprise-grade features and functionality that layer seamlessly on top of a Tanzu deployment.
Portworx documentation specific to VMware can be found here; note that we will be updating this documentation in the near future upon the release of the direct integration between Portworx and Tanzu.
Prior to using the Pure Storage FlashArray with VMware Tanzu as PVs, there are a few overarching prerequisites that need to be completed to bring Workload Management online. Links are provided to KB articles or other resources that explain how to complete that particular prerequisite:
- vSphere Workload Management enabled and using either NSX-T (with VCF), HA-Proxy or NSX-Advanced as the Load Balancer with an API endpoint available for kubectl.
- Register FlashArray Storage Provider against the vCenter instance where Workload Management is running for vVols enablement.
- VMFS and vVol datastores created.
- Create at least one SPBM policy associated with the VMFS and vVol datastore. Alternatively, use the default vVols No Requirements SPBM policy.
- Workload Management Namespace created with appropriate user access, and one or more SPBM policies assigned to it.
- Tanzu Kubernetes Guest Cluster deployed within the Namespace.
- Optional: Set CPU/RAM and/or Storage limits against the Namespace (see above Namespace KB for more info).
Persistent Volume Claims, Persistent Volumes and StorageClass
A Persistent Volume Claim (PVC) is how a Kubernetes deployment requests storage for data that needs to persist at least as long as the Kubernetes pod or node exists. While Kubernetes pods request system CPU and memory, a PVC is how the user requests a certain amount and/or type of storage. Because pods are ephemeral and easily replaced, having a durable storage volume where application data can be saved for that eventuality is a critical component. The PVC includes a specific set of characteristics the developer requires for that particular Kubernetes application; notable options include the reclamation policy, access mode and amount of storage required. If a persistent volume that matches those characteristics exists and is available (i.e. not bound to another claim), the application will claim it for its exclusive use. The PVC, in turn, is bound to a Persistent Volume (PV) with that set of needed features.
The Persistent Volume is what is actually created and written to on the underlying storage device. The key difference between a PV and a PVC is that the Kubernetes pod or node generally binds to the Persistent Volume Claim, and the Persistent Volume Claim in turn binds to a Persistent Volume. Both PVs and PVCs can be unbound and re-used with other PVCs or pods, respectively. Another important differentiator is that the Persistent Volume Reclaim Policy only exists on a Persistent Volume; we will go into more detail on that later in this guide. If there is no persistent volume that matches the required characteristics of the PVC, a persistent volume (PV) with those characteristics will be created automatically, provided dynamic provisioning is enabled via a StorageClass (more on that in the next section). If dynamic provisioning is not an option, the pod or node will not become available and will require a storage administrator to statically provision a persistent volume that is compatible with the claim before the node or pod can come online successfully.
The de-facto method for a vCenter/storage administrator to provide their Kubernetes developers with storage options within Tanzu is via a StorageClass. A StorageClass is mapped on a 1:1 basis from an SPBM policy, which is defined within vCenter and then assigned to one or more Namespaces. The StorageClass carries a manifest of features that enables a Persistent Volume to be created dynamically when a corresponding Persistent Volume Claim uses it. From a FlashArray perspective, the policy-driven characteristics encapsulated within a StorageClass can include the datastore type (VMFS or vVols), the performance tier (e.g. an //X90 or an //X10), whether snapshots are enabled, and whether a replication option such as ActiveCluster (VMFS only) or asynchronous replication (VMFS and vVols) is needed.
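For reference, when an SPBM policy is assigned to a Namespace, Tanzu surfaces it to developers as a StorageClass object that can be listed with kubectl get storageclass. The sketch below is a hedged approximation of what such an auto-generated object looks like; the name matches the example used later in this guide, while the storagePolicyID value is a placeholder assumption (Tanzu creates these objects automatically from the SPBM policy, they are not authored by hand):
# Hedged sketch of an auto-generated Tanzu StorageClass.
# The storagePolicyID value is a placeholder; the real object is created
# by Tanzu from the SPBM policy assigned to the Namespace.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubernetes-vvols
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyID: "<SPBM-policy-ID>"
reclaimPolicy: Delete
allowVolumeExpansion: true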
Now that we have defined what PVCs, PVs and StorageClasses are and how they are related to one another, the next important concept to dive into is the conditions under which they are provisioned: dynamic or static.
Dynamic and Static Provisioning
Dynamic Provisioning is the method used to create persistent volumes for a Kubernetes application on demand; it frees the Kubernetes administrator from having to pre-provision potentially many persistent volumes up front that a PVC can then claim when needed (i.e. static provisioning). Dynamic Provisioning entails creating a persistent volume claim that is matched to an application and a StorageClass. In fact, the StorageClass is the key construct which enables dynamic provisioning, as it includes the volume characteristics consumed by the developer and is defined and set up by the vCenter administrator.
The beauty of dynamic provisioning is two-fold. First, no storage is actually provisioned until the Kubernetes deployment is instantiated with a persistent volume claim, meaning that a data vVol or VMFS VMDK and its associated space are not consumed on the FlashArray until the Kubernetes application is in the process of being created. Second, as environments grow, dynamic provisioning quickly becomes the only reasonable option, since the alternative, static provisioning, requires storage administrators to manually create each persistent volume before it can be used, which does not scale as easily.
Let's break down a simple persistent volume claim YAML file into the important components for dynamic provisioning. There are certainly more options than what is shown here; many of which we will expand upon in later sections of this KB and overall guide:
apiVersion: v1
kind: PersistentVolumeClaim (1)
metadata:
  name: pvc-vvols-mysql (2)
spec:
  storageClassName: kubernetes-vvols (3)
  accessModes:
  - ReadWriteOnce (4)
  resources:
    requests:
      storage: 30Gi (5)
- kind: This is where we designate what the YAML file is creating. In this case, we are creating a PVC.
- name: This is the name we provide our PVC. A Kubernetes deployment YAML file, for example, will reference this name in order to use this PVC (a minimal example follows this list).
- storageClassName: This is an optional field unless no default storageclass has been defined in your Namespace, in which case it is required. This field dictates what storageClass you want the persistent volume to use. The recommended way to create a StorageClass in Tanzu is via an SPBM policy and then assign that policy to a Namespace.
- accessModes: Today only ReadWriteOnce is supported for the FlashArray. This means that the persistent volume can only be attached to, and consumed by a single Kubernetes pod. We are optimistic that sometime in the future ReadWriteMany will also become a supported option. There are also numerous application-specific methods for mirroring data across multiple persistent volumes within something like a statefulset.
- storage: This is where the user inputs how much storage they want the persistent volume to use when it is created. Persistent Volume expansion is supported once the PVC has been created and is covered in a later section of this KB.
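With the PVC manifest defined, it can be applied with kubectl and then referenced by name from a pod or deployment. The sketch below is a minimal, hedged example: the manifest filename and pod name are illustrative assumptions, while the image, environment variable and mount path mirror the mysql example used later in this guide.
$ kubectl apply -f pvc-vvols-mysql.yaml
$ kubectl get pvc pvc-vvols-mysql
# Minimal illustrative pod consuming the PVC defined above.
# The pod name is an assumption; the image, env and mount path follow the
# mysql deployment shown later in this guide.
apiVersion: v1
kind: Pod
metadata:
  name: mysql-demo
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: password
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-data
    persistentVolumeClaim:
      claimName: pvc-vvols-mysql   # matches metadata.name from the PVC above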
Static Provisioning, meanwhile, is when a Kubernetes application is bound to a pre-existing Persistent Volume through a Persistent Volume Claim. Most commonly, Static Provisioning is used when a PV already containing relevant data for the application needs to be used in its course of operations. A system administrator may also manually provision one or more PVs that can later be used by a PVC, so long as they match the specifications the PVC is looking for. It is obvious but important to mention: while Static Provisioning certainly works, it becomes more and more untenable as environments increase in size and scale, since it requires system administrators to provision each volume by hand. After some additional concepts and examples are introduced, an example of using Static Provisioning to migrate a persistent volume between Tanzu Kubernetes Guest Clusters will be provided at the end of this guide.
The below demo video provides a brief demonstration of the differences between dynamic and static provisioning:
At this point we have covered the two methods behind how and when PVs are created for use. However, of equal if not more importance is defining their behavior when it comes time for the associated application to be deleted or migrated. The capability to retain, copy and move data from one Kubernetes instance to potentially many others is obviously of huge importance, and is the subject of the next section.
Persistent Volume Reclaim Policy
By default within VMware Tanzu, a Persistent Volume exists only as long as its deployment does, meaning it will be deleted along with the rest of the Kubernetes deployment when the devops user shuts it down. This is due to the default persistentVolumeReclaimPolicy being set to Delete. However, changing the persistentVolumeReclaimPolicy of a persistent volume to Retain enables Tanzu users to save and re-use persistent data easily.
The easiest way to retain an existing PV is with the following kubectl CLI command:
kubectl patch pv (PV_Name) -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
The below sequence shows an example of how to change a PV reclaim policy. Note that this operation can be done while the volume is bound to a PVC without causing any issues.
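For example, using the PV name that appears throughout this section, the patch command would look like this:
$ kubectl patch pv pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'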
With the reclaim policy set to Retain, the persistent data remains available for re-use once either the application or the PVC is deleted.
Here we see that our PV is set to Retain and is bound to a PVC called pvc-vvols-mysql.
$ kubectl get pv
NAME CAPACITY RECLAIM POLICY STATUS CLAIM STORAGECLASS
pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c 6Gi Retain Bound default/pvc-vvols-mysql cns-vvols
Looking at the PVC also confirms the connection between the PV and PVC:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY STORAGECLASS
pvc-vvols-mysql Bound pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c 6Gi cns-vvols
If we want to use the PV with a new PVC, the claim that the PV is attached to must first be removed or deleted:
$ kubectl delete pvc pvc-vvols-mysql
persistentvolumeclaim "pvc-vvols-mysql" deleted
There can be a caveat with re-using a persistent volume after the associated persistent volume claim is deleted, though: the Persistent Volume moves into the Released status once the PVC is removed. This is because, while the PVC object is gone, the claim reference associated with the PV still exists and needs to be manually removed.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS
pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c 6Gi RWO Retain Released default/pvc-vvols-mysql cns-vvols
In order to revert the PV to an available state, the following kubectl patch command must be run against the volume to remove the stale claim reference:
kubectl patch pv pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c -p '{"spec":{"claimRef": null}}'
With the PV patched to remove the old PVC claim, it now reverts to an available state and can be bound to a different PVC:
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS
pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c 6Gi RWO Retain Available cns-vvols
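To put the now-Available PV back into service, a new claim can reference it explicitly. The manifest below is a hedged sketch (the claim name is an illustrative assumption); the same volumeName technique is used for static provisioning later in this guide.
# Hedged sketch: a new PVC that explicitly claims the released-and-patched PV.
# The PVC name here is an illustrative assumption.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-vvols-mysql-new
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 6Gi                     # matches the capacity of the PV shown above
  storageClassName: cns-vvols
  volumeName: pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c   # bind to this specific PV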
Persistent Volume Expansion
A Persistent Volume is assigned a given capacity when it is created as part of the Persistent Volume Claim for dynamic provisioning. Over time the volume will fill with data, and the Kubernetes developer will need to decide whether to expand the volume to allow for additional capacity. There are two methods to expand a PV: Online Expansion and Offline Expansion.
In vCenter 7.0U2, VMware has added the capability for online volume expansion, meaning that the Persistent Volume Claim does not need to be brought out of service in order to be expanded as was the case in previous versions of Tanzu. This means that a simple patch command can dynamically expand the volume transparently without impacting the running Kubernetes application.
Source: https://docs.vmware.com/en/VMware-vS...2FF46F59B.html
Offline Persistent Volume Claim Expansion
For developers using relatively older versions of Kubernetes, the primary method to expand a Persistent Volume was Offline Expansion. Basically, this means that a Persistent Volume Claim must be unbound from a pod before the expansion can complete successfully. One way this annoyance has been addressed is by setting up deployments or statefulsets that have multiple persistent volumes sharing data amongst themselves, thereby enabling a single volume to be taken offline, expanded and brought back into service sequentially until all volumes are at the new desired size.
For this simple example of offline expansion, we have a single replica mysql deployment with an attached PVC:
NAME STATUS VOLUME CAPACITY STORAGE CLASS
pvc-vvols-mysql Bound pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c 5Gi cns-vvols
By describing the mysql deployment, we can see that the above PVC is bound to it:
$ kubectl describe deployment mysql-deployment
Name: mysql-deployment
Namespace: default
CreationTimestamp: Wed, 09 Jun 2021 14:41:11 -0700
Labels: app=mysql
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=mysql
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=mysql
Containers:
mysql:
Image: mysql:5.7
Port: 3306/TCP
Host Port: 0/TCP
Environment:
MYSQL_ROOT_PASSWORD: password
Mounts:
/var/lib/mysql from mysql-data (rw,path="mysql")
Volumes:
mysql-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-vvols-mysql
ReadOnly: false
The below patch command is applied to the PVC to request the additional storage needed:
$ kubectl patch pvc pvc-vvols-mysql -p '{"spec": {"resources": {"requests": {"storage": "6Gi"}}}}'
However, since this version (Supervisor cluster 7.0U1 in this example) does not support online expansion, we see this message when we describe the PVC:
$ kubectl describe pvc pvc-vvols-mysql
Essentially, the FileSystemResizePending condition reported in the describe output means that the PVC is waiting to be unmounted from its pod before it can complete the resize operation. One way to accomplish this is simply to delete the mysql deployment, wait, and then recreate it.
$ kubectl delete deployment mysql-deployment
$ kubectl apply -f ./mysql-deployment.yaml
Now, when we describe the PVC again after waiting a few moments for the mysql deployment to be recreated, we can see under the Events section that the volume has been successfully expanded.
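One quick way to confirm the new size from the CLI, using the PVC from this walkthrough, is:
$ kubectl get pvc pvc-vvols-mysql
$ kubectl describe pvc pvc-vvols-mysql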
An example of Offline Volume Expansion can be viewed in the below technical demo:
Online Persistent Volume Claim Expansion
As the name suggests, Online Volume Expansion means that a given PV can have its capacity expanded without first being detached from a pod or node. The key benefits of this expansion method are that users do not need to take the associated application or deployment offline and that the volume expansion completes transparently without performance degradation. This is a much simpler operation than offline volume expansion, and we will walk through the same resize operation as in the previous example.
We start with a 5Gi PVC as before:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY STORAGE CLASS
pvc-vvols-mysql Bound pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c 5Gi cns-vvols
Then run the same patch command against the PVC, increasing the amount of storage it is requesting from 5Gi to 6Gi:
$ kubectl patch pvc pvc-vvols-mysql -p '{"spec": {"resources": {"requests": {"storage": "6Gi"}}}}'
After waiting a few moments we can see that the PVC has increased its capacity to 6Gi:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
pvc-vvols-mysql Bound pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c 6Gi RWO cns-vvols
Taking a closer look at the PVC via the describe command shows that the PV size was indeed increased while it remained mounted to the mysql-deployment pod:
$ kubectl describe pvc
Name: pvc-vvols-mysql
Namespace: default
StorageClass: cns-vvols
Status: Bound
Volume: pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
volumehealth.storage.kubernetes.io/health: accessible
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 6Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: mysql-deployment-5d8574cb78-xhhq5
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ExternalExpanding 52s volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
Normal Resizing 52s external-resizer csi.vsphere.vmware.com External resizer is resizing volume pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c
Normal FileSystemResizeRequired 51s external-resizer csi.vsphere.vmware.com Require file system resize of volume on node
Normal FileSystemResizeSuccessful 40s kubelet, tkc-120-workers-mbws2-68d7869b97-sdkgh MountVolume.NodeExpandVolume succeeded for volume "pvc-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c"
Below you can find a technical demo video showing a quick example of online volume expansion:
Persistent Volume Usage and Monitoring
Once one or more persistent volumes have been created, there are multiple ways to monitor them from vCenter, the kubectl CLI and Pure Storage.
vCenter provides a UI-based method to check on persistent volume claims starting in 7.0U1. To find the Container Volumes associated with your environment start by clicking on the Cluster of interest, then Monitor and lastly Container Volumes. This is depicted in the below screenshot.
Once you are in the Container Volumes menu, you can see some basic information about each persistent volume claim. Worth noting is that a persistent volume will not be displayed if it has not been claimed via a PVC and/or if it is not associated with an SPBM policy/StorageClass that was defined within vCenter.
To get additional information on a certain Container Volume, click on the volume of interest. That will spawn a new window and if the Basics tab is selected, you can view such things as the Volume path on the array, the datastore the Container Volume resides upon and the SPBM policy it is associated with as well as compliance and health status of the volume.
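From the kubectl CLI, much of the same information is available. As a hedged example using the PVC from earlier in this guide, the commands below list the volumes and surface the CNS volume health annotation (volumehealth.storage.kubernetes.io/health) that was visible in the describe output in the expansion section:
$ kubectl get pv,pvc
$ kubectl describe pvc pvc-vvols-mysql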
Using Static Provisioning to Migrate a Persistent Volume Between Tanzu Kubernetes Clusters
Data mobility of Persistent Volumes is a critical element of what makes Kubernetes so attractive: being able to move data and their associated applications to almost any underlying infrastructure whether it be off-premises or on-premises. Within the scope of this guide, another key consideration is being able to move PVs from one Tanzu Kubernetes Cluster to another. For example, moving from Kubernetes release v1.17 to v1.20 in order to take advantage of some of the new features and functions available in the later release, such as online volume expansion.
This section will show how to move a persistent volume from a Tanzu Kubernetes Cluster running v1.17 to a Tanzu Kubernetes Cluster running v1.20 within the same Tanzu Namespace. As in earlier examples, we assume that we have a single instance of mysql with a persistent volume running in TKC v1.17. We wish to retain that persistent volume and redeploy the mysql instance in a new Tanzu Kubernetes Cluster at v1.20.
We will first need to pull the volumeHandle and storageclass values from the existing mysql PV. This information can be found in a couple of ways: either by describing the PV from within the 'old' TKC context or by switching to the vSphere Namespace within which the TKC resides and describing the Persistent Volume Claim. As shown in these examples, it is also expected that the Persistent Volume reclaim policy has previously been set to Retain.
$ kubectl describe pv
Name: pvc-9e7822f0-2a81-4e3f-b1a4-986b4e9f98bb
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
Finalizers: [kubernetes.io/pv-protection]
StorageClass: cns-vvols
Status: Bound
Claim: default/pvc-mysql
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: csi.vsphere.vmware.com
FSType: ext4
VolumeHandle: c88a36fb-6035-44b8-9eca-1b913d72954b-9e7822f0-2a81-4e3f-b1a4-986b4e9f98bb
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/csiProvisionerIdentity=1623435682465-8081-csi.vsphere.vmware.com
type=vSphere CNS Block Volume
Events: <none>
For the second method of finding the volumeHandle and storageclass information, we see that both TKC clusters (tkc-117 and tkc-120) reside inside the kg-ns1 vSphere Namespace, so we will switch to that context.
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
10.21.132.25 10.21.132.25 wcp:10.21.132.25:administrator@vsphere.local
kg-ns1 10.21.132.25 wcp:10.21.132.25:administrator@vsphere.local kg-ns1
* tkc-117 10.21.132.27 wcp:10.21.132.27:administrator@vsphere.local
tkc-120 10.21.132.27 wcp:10.21.132.27:administrator@vsphere.local
$ kubectl config use-context kg-ns1
Switched to context "kg-ns1".
The name of the PVC is the volumeHandle value we require, and the StorageClass is also shown:
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY STORAGECLASS
a0394f5b-87d6-4716-be75-9f114498bb68-f37c39fd-dbe9-4f27-abe8-bca85bf9e87c Bound pvc-c0a73e9b-77ba-471c-a70f-b945662cf1bb 6Gi cns-vvols
c88a36fb-6035-44b8-9eca-1b913d72954b-9e7822f0-2a81-4e3f-b1a4-986b4e9f98bb Bound pvc-b547a08b-b61d-4c27-8840-da5058eaf8f2 5Gi cns-vvols
With the values needed for creating the Persistent Volume in the 'new' v1.20 TKC cluster acquired, we can now remove the old v1.17 TKC. If the older v1.17 TKC is still needed for other uses, it does not need to be removed; however, the PVC and PV objects must be deleted from the old v1.17 TKC cluster before they can be used in the new v1.20 TKC.
$ kubectl config use-context tkc-117
Switched to context "tkc-117".
$ kubectl delete pvc pvc-mysql
persistentvolumeclaim "pvc-mysql" deleted
$ kubectl delete pv pvc-9e7822f0-2a81-4e3f-b1a4-986b4e9f98bb
persistentvolume "pvc-9e7822f0-2a81-4e3f-b1a4-986b4e9f98bb" deleted
Moving along with our example, we now change contexts to the new v1.20 TKC cluster and create a YAML file to statically provision the existing persistent volume in the new v1.20 TKC cluster, which includes the information gathered above from the persistent volume.
$ kubectl config use-context tkc-120
Switched to context "tkc-120".
apiVersion: v1
kind: PersistentVolume (1)
metadata:
  name: pv-mysql-new (2)
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  storageClassName: cns-vvols (3)
  capacity:
    storage: 5Gi (4)
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete (5)
  claimRef:
    namespace: default
    name: mysql-pvc-new (6)
  csi:
    driver: "csi.vsphere.vmware.com"
    volumeAttributes:
      type: "vSphere CNS Block Volume"
    volumeHandle: "c88a36fb-6035-44b8-9eca-1b913d72954b-9e7822f0-2a81-4e3f-b1a4-986b4e9f98bb" (7)
- kind: This is where we designate what the YAML file is creating. In this case, we are creating a PV.
- name: This is the name we provide our PV.
- storageClassName: This points the persistent volume to the StorageClass to use and must be the same StorageClass the original persistent volume was associated with.
- storage: The amount of capacity we want to give the persistent volume. This needs to match the original persistent volume storage capacity.
- persistentVolumeReclaimPolicy: Because the StorageClass uses Delete as its default, this value needs to be set to Delete as well. It can be changed to Retain after the persistent volume is created.
- claimRef.name: We need to provide the persistent volume claim name that the persistent volume will be matched to in the next step.
- volumeHandle: The volumeHandle is the alphanumeric identifier that we found earlier in this section on the old TKC v1.17 cluster.
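Assuming the manifest above is saved as pv-mysql-new.yaml (the filename is an assumption), it can be applied and verified with:
$ kubectl apply -f pv-mysql-new.yaml
$ kubectl get pv pv-mysql-new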
Once the above YAML file has been applied and the persistent volume has been created, we can now bind it to a PVC. Note that the subtle difference with this PVC is that it uses static provisioning (while the earlier example in this guide used dynamic provisioning) by calling out the volumeName of the PV we just created above.
kind: PersistentVolumeClaim (1)
apiVersion: v1
metadata:
  name: mysql-pvc-new (2)
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi (3)
  storageClassName: cns-vvols (4)
  volumeName: pv-mysql-new (5)
- kind: This is where we designate what the YAML file is creating. In this case, we are creating a PVC.
- name: This is the name we provide our PVC. This value must match the claimRef.Name value (#6) from the above persistent volume YAML.
- storage: The amount of capacity we want to give the persistent volume. This needs to match the requested storage amount from the above persistent volume.
- storageClassName: This points the persistent volume to the StorageClass to use and is the same storage class the persistent volume was associated with.
- volumeName: This is the persistent volume name value (#2) from the above persistent volume YAML file.
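Assuming the claim above is saved as mysql-pvc-new.yaml (again, the filename is an assumption), apply it and confirm the binding:
$ kubectl apply -f mysql-pvc-new.yaml
$ kubectl get pvc mysql-pvc-new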
After applying the above YAML file, we see that our static volume has been correctly bound to the new Persistent Volume Claim in the v1.20 TKC cluster. It can now be used by a new mysql instance deployed within the new TKC v1.20 cluster.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS
mysql-pvc-new Bound pv-mysql-new 5Gi RWO cns-vvols
A more in-depth example of this procedure can be viewed in this technical demo video: