vSphere CNS/CSI – Architecture

Container applications are highly dynamic, with hundreds and sometimes even thousands of containers being created and destroyed in a short span. vSphere solves the problem of managing the lifecycle of all these containers by leveraging Kubernetes and Tanzu. However, managing storage for these applications is a different challenge.

vSphere Cloud Native Storage (CNS) is a vSphere and Kubernetes feature that enables dynamic provisioning and management of container storage by making Kubernetes aware of the underlying vSphere infrastructure.

CNS is natively deployed within vCenter Server and allows the vSphere CSI driver running in the Kubernetes clusters to provision and manage container storage on demand.

The CSI driver watches for new PV/PVC requests and forwards that information to the CNS control plane running in the vCenter Server Appliance (VCSA), which in turn provisions a new volume, attaches it to the Kubernetes nodes, and relays the information back to CSI.

  1. User creates a new Persistent Volume Claim.
  2. PVC information is communicated to CNS.
  3. CNS communicates with vCenter Server for volume provisioning.
  4. vCenter Server provisions the container volume and attaches it to the Kubernetes node.
  5. Information is passed back to the CNS.
  6. CSI gets the PV information, marks the PVC as bound to the PV, and makes it available to the pods.
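
Before trying this out, you can confirm that the vSphere CSI driver is registered in the cluster. The commands below are standard kubectl; note that the driver's namespace varies by version (vmware-system-csi is an assumption for recent releases, older deployments use kube-system):

# Confirm the vSphere CSI driver (csi.vsphere.vmware.com) is registered
kubectl get csidrivers

# Check the CSI controller and node pods (namespace varies by driver version)
kubectl get pods -n vmware-system-csi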

To see all of this in action, we first need to create a StorageClass that maps to either a datastore or a storage policy for volume provisioning.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: datastorestorageclass
provisioner: csi.vsphere.vmware.com
parameters:
  datastoreurl: "ds:///vmfs/volumes/<datastore id>/"

Note: datastoreurl can be substituted with storagePolicyName if you want to use SPBM for placement.
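
For example, an SPBM-backed StorageClass might look like the following (a minimal sketch; "gold-policy" is a hypothetical name and must match a storage policy defined in your vCenter Server):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: policystorageclass
provisioner: csi.vsphere.vmware.com
parameters:
  # Hypothetical policy name; replace with an existing SPBM policy in vCenter
  storagePolicyName: "gold-policy"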

Once the StorageClass is created, we can create a PVC that references it.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testpvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: datastorestorageclass

As soon as the PVC is created, it should trigger the creation of a new PV in Kubernetes and a new container volume in vCenter.
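
You can verify this from the Kubernetes side with standard kubectl commands:

# The PVC should reach the Bound state once CNS provisions the volume
kubectl get pvc testpvc

# The dynamically provisioned PV bound to the claim should also appear
kubectl get pv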

On the vSphere side, you can view the details of the PV by navigating to Cluster -> Monitor -> Cloud Native Storage -> Container Volumes.

Next, you can mount the PVC into a container inside a pod.

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  imagePullSecrets:
  - name: regcred
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: testpvc
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage 

Volume access can be validated by opening an interactive shell in the container.
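
For example (assuming the pod above is running):

# Open an interactive shell in the pod
kubectl exec -it task-pv-pod -- /bin/bash

# Inside the container, confirm the volume is mounted and writable
df -h /usr/share/nginx/html
echo "CNS volume test" > /usr/share/nginx/html/index.html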

The latest version of CNS, included with vCenter Server 7.0, provides the following functionality:

TKG Deployment Series

Tanzu Kubernetes Grid – Overview

Tanzu Kubernetes Grid – Lab Walkthrough

Tanzu Kubernetes Grid – Prepare Bootstrap Machine

Tanzu Kubernetes Grid – Bash Autocompletion

Tanzu Kubernetes Grid – Networking with Advanced Load Balancer

Tanzu Kubernetes Grid – Deploy NSX Advanced Load Balancer

Tanzu Kubernetes Grid – Configure NSX Advanced Load Balancer

Tanzu Kubernetes Grid – Pre-requisites

Tanzu Kubernetes Grid – Deploy Management Cluster with CLI

Tanzu Kubernetes Grid – Deploy Workload Cluster with CLI

Tanzu Kubernetes Grid – Authentication – Under the Hood

TKG Day-2 Series

MinIO – Introduction and Installation

Tanzu Kubernetes Grid – Backup and Restore with Velero
