This documentation is not applicable to the vSphere CSI Driver. Please visit the vSphere CSI Driver documentation for information about it.

This section describes the steps to create persistent storage for containers to be consumed by MongoDB services on vSAN. After these steps are completed, the vSphere Cloud Provider creates the virtual disks (volumes in Kubernetes) and mounts them to the Kubernetes nodes automatically. The virtual disks are created with the default vSAN storage policy.

Define StorageClass

A StorageClass provides a mechanism for administrators to describe the "classes" of storage they offer. Different classes might map to quality-of-service levels, to backup policies, or to arbitrary policies determined by the cluster administrators. The following YAML defines a "platinum" level StorageClass.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: platinum
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin

Note: Although all volumes are created on the same vSAN datastore, users can adjust the policy to match actual storage capability requirements by modifying the vSAN policy in vCenter Server. Users can also specify vSAN storage capabilities in the StorageClass definition based on the application's needs.
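As a sketch of the second option, vSAN storage capabilities can be passed as StorageClass parameters. The class name below is hypothetical, and the parameter names (hostFailuresToTolerate, cacheReservation) are assumed to follow the vSphere Cloud Provider's vSAN parameter naming; verify them against your vSphere Cloud Provider version before use.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: platinum-vsan          # hypothetical class name
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
  hostFailuresToTolerate: "2"  # vSAN failures-to-tolerate (assumed parameter name)
  cacheReservation: "20"       # flash read cache reservation, percent (assumed parameter name)
```

Volumes provisioned from such a class are created with the requested vSAN capabilities instead of the datastore's default policy.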

Claim Persistent Volume

A PersistentVolumeClaim (PVC) is a request for storage by a user. Claims can request a specific size and access modes (for example, a volume can be mounted once read/write or many times read-only). The following YAML claims a 128GB volume with read and write capability.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc128gb
  annotations:
    volume.beta.kubernetes.io/storage-class: "platinum"
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Gi

Specify the Volume to Be Mounted for Consumption by the Containers

The following YAML specifies a MongoDB 3.4 image that uses the volume claimed in Step 2 and mounts it at the path /data/db.

apiVersion: v1
kind: Pod
metadata:
  name: mongo-ps
spec:
  containers:
    - image: mongo:3.4
      name: mongo-ps
      ports:
        - name: mongo-ps
          containerPort: 27017
          hostPort: 27017
      volumeMounts:
        - name: pvc-128gb
          mountPath: /data/db
  volumes:
    - name: pvc-128gb
      persistentVolumeClaim:
        claimName: pvc128gb

Storage was created and provisioned from vSAN for the MongoDB service containers by using dynamic provisioning through YAML files. The volumes were claimed as persistent volumes to preserve the data on them. All mongo servers are combined into one Kubernetes pod per node.

In Kubernetes, each pod is assigned one IP address, so each service within a pod must use a distinct port. Because the mongos instances are the services through which users access the shards from other applications, the standard MongoDB port 27017 is assigned to them.
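To make the mongos instances reachable on that port, a Kubernetes Service can front them. The sketch below is illustrative: the Service name and the selector label are hypothetical and must match the labels actually set on the mongos pods in your deployment.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongos-router    # hypothetical Service name
spec:
  selector:
    app: mongos          # must match the label on your mongos pods (assumption)
  ports:
    - port: 27017        # standard MongoDB port exposed by the Service
      targetPort: 27017  # port the mongos container listens on
```

Other applications in the cluster can then connect to the shard through mongos-router:27017.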

Please refer to this Reference Architecture for a detailed understanding of how persistent storage for containers is consumed by MongoDB services on vSAN.

Download the YAML files for deploying MongoDB on Kubernetes with vSphere Cloud Provider from here.

To understand the configuration in these YAML files, please refer to this link.

Execute the following commands to deploy a sharded MongoDB cluster on Kubernetes with the vSphere Cloud Provider.

Create StorageClass

kubectl create -f

Create Storage Volumes for the Sharded MongoDB Cluster

kubectl create -f
kubectl create -f
kubectl create -f
kubectl create -f

Create MongoDB Pods

kubectl create -f
kubectl create -f
kubectl create -f
kubectl create -f

Create Services

kubectl create -f
kubectl create -f
kubectl create -f
kubectl create -f