## Deprecation Notice
CSE Server and the Kubernetes Container Clusters plugin will soon drop support for TKGi (formerly known as Enterprise PKS). Consider using VMware Tanzu Kubernetes Grid (TKG) or VMware Tanzu Kubernetes Grid Service (TKGs) for the management of Kubernetes clusters with VCD.
# Enterprise PKS enablement

## Overview
CSE 2.0 enables orchestration of K8s cluster deployments on VMware Enterprise PKS, while maintaining the CSE 1.x feature set of native K8s cluster deployments directly on VMware Cloud Director. As a result, CSE 2.0 lets tenants leverage both K8s providers, Native and Enterprise PKS, for seamless K8s cluster deployments while ensuring cluster isolation between tenants. It also gives administrators the flexibility to onboard tenants on the K8s provider(s) of their choice, be it Native and/or Enterprise PKS.
This page describes the CSE 2.0 architecture with Enterprise PKS, the infrastructure set-up and configuration steps, and the key command-line interfaces for K8s deployments.
## Architecture
The CSE 2.0 architecture comprises the Enterprise PKS infrastructure stack, the VMware Cloud Director infrastructure stack, and the CSE 2.0 modules. The Enterprise PKS infrastructure stack is necessary only if there is an intention to leverage it for K8s cluster deployments. The diagram below illustrates a physical view of the complete infrastructure, as well as its logical mapping into the VMware Cloud Director hierarchy, for ease of understanding.
Legend:
- Green - Depicts the vSphere infrastructure managed by VMware Cloud Director, just as in CSE 1.x, without any Enterprise PKS components.
- Blue - Depicts the Enterprise PKS infrastructure stack managed by, and available for use in, VMware Cloud Director for K8s cluster deployments. It also illustrates multi-tenancy for K8s cluster deployments on a single Enterprise PKS infrastructure.
- Purple - Depicts a single-tenant dedicated Enterprise PKS infrastructure stack managed by, and available for use in, VMware Cloud Director for K8s cluster deployments. It also illustrates the use case of a tenant leveraging multiple instances of the Enterprise PKS infrastructure stack, say, to segregate K8s cluster workloads.
- K8-prov - This label depicts the K8s provider that is enabled on a given tenant's organization VDC in VMware Cloud Director.
## Infrastructure set-up and configuration

### Before you begin
- Ensure a fresh installation of the Enterprise PKS infrastructure stack, with no prior K8s cluster deployments on it.
- Ensure CSE, the VMware Cloud Director infrastructure stack, and the Enterprise PKS infrastructure stack are all on the same management network, without a proxy in between.
### Enterprise PKS on-boarding
The timeline diagram below depicts the infrastructure set-up and tenant on-boarding. The cloud provider has to complete the steps below before on-boarding tenants.
- Set up one or more Enterprise PKS-vSphere-NSX-T instances.
- Ensure the OpenID Connect feature is disabled on each Enterprise PKS instance. Refer to the FAQ for more details.
- Create an Enterprise PKS service account for each Enterprise PKS instance.
- On-board Enterprise PKS instance(s) in VCD:
  - Attach each Enterprise PKS instance's corresponding vSphere to VCD through the VCD UI.
  - Create provider VDC(s) in VCD from the underlying resources of the newly attached vSphere(s). Ensure these PVDC(s) are dedicated to Enterprise PKS K8s deployments only.
- Install, configure and start CSE:
  - Follow the instructions to install CSE 2.0 beta here.
  - Use the `cse sample` command to generate the `config.yaml` and `pks.yaml` skeleton config files.
  - Configure `config.yaml` with VCD details.
  - Configure `pks.yaml` with Enterprise PKS details. This file is necessary only if there is an intention to leverage Enterprise PKS for K8s deployments. Refer here for more details on how to fill in `pks.yaml`.
  - Run the `cse install` command. Specify the Enterprise PKS configuration file and the regular CSE configuration file via the `--pks-config-file` and `--config` flags respectively. The install process prepares the NSX-T instance(s) of the Enterprise PKS instances for tenant isolation. Ensure this command is run again whenever new Enterprise PKS instances are on-boarded at a later point in time.
  - Start the CSE service, again specifying the Enterprise PKS configuration file and the regular CSE configuration file via `--pks-config-file` and `--config`. A minimal sketch of this sequence is shown after this list.
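The sketch below strings these steps together. Note that this page refers to the PKS config option as both `--pks-config` and `--pks-config-file`, so verify the exact flag spelling with `cse install --help` on your CSE version:

```sh
# Generate skeleton config files, then fill in VCD details (config.yaml)
# and Enterprise PKS details (pks.yaml).
cse sample

# Install CSE with both config files; this also prepares the NSX-T
# instance(s) of Enterprise PKS for tenant isolation.
cse install --config config.yaml --pks-config-file pks.yaml

# Start the CSE service with the same pair of config files.
cse run --config config.yaml --pks-config-file pks.yaml
```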
Enabling Enterprise PKS as a K8s provider changes the default behavior of CSE as described below. The presence of the option `--pks-config <pks-config-file>` while executing `cse run` indicates to CSE that Enterprise PKS is enabled (in addition to native VCD) as a K8s provider in the system.
- CSE begins to mandate that every ovdc be enabled for either Native or Enterprise PKS as its backing K8s provider. Cloud administrators can do so via the `vcd cse ovdc enable` command (see the sketch after this list). This step is mandatory even for ovdc(s) with pre-existing native K8s clusters, i.e., if CSE is upgraded from 1.2.x to 2.0.0 and started with the `--pks-config-file` option, then it becomes mandatory to enable those ovdc(s) with pre-existing native K8s clusters.
- In other words, if CSE runs with `--pks-config-file PKS_CONFIG_FILE_PATH` and an ovdc is not enabled for either of the supported K8s providers, users will not be able to perform any further K8s deployments on that ovdc.

If CSE runs without the `--pks-config-file` option, there is no change in CSE's default behavior, i.e., all ovdc(s) remain open for native K8s cluster deployments.
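As a hypothetical example, an administrator could back one ovdc with each provider. The tenant, ovdc, plan, and domain names are placeholders, and the PKS-specific flags follow the `vcd cse pks ovdc enable` example later on this page; confirm the exact options with `vcd cse ovdc enable --help`:

```sh
# Back ovdc1 with the native K8s provider (placeholder names throughout).
vcd cse ovdc enable ovdc1 -o tenant1 -k native

# Back ovdc2 with Enterprise PKS, choosing a PKS plan and cluster domain.
vcd cse ovdc enable ovdc2 -o tenant1 -k ent-pks --pks-plan "gold" --pks-cluster-domain "tenant1.com"
```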
### Tenant on-boarding
- Create ovdc(s) in the tenant organization from the newly created provider VDC(s) above via the VCD UI. Do not choose the Pay-as-you-go model for these ovdc(s). Refer to the FAQ for more details.
- Use the CSE commands below to grant K8s deployment rights to chosen tenants and tenant users. Refer to the RBAC feature for more details.
- Use the CSE command to enable organization vdc(s) with a chosen K8s provider (native or TKGi).
The diagram below illustrates a time-sequence view of setting up the infrastructure for CSE 2.0, followed by the on-boarding of tenants. The expected steps are executed by cloud providers or administrators.
## CSE, VCD, Enterprise PKS Component Illustration
The diagram below outlines the communication flow between components for the tenant work-flow of creating a new K8s cluster.
Legend:
- The path depicted in pink signifies the work-flow of K8s cluster deployments on the native K8s provider solution in CSE 2.0.
- The path depicted in blue signifies the work-flow of K8s cluster deployments on the Enterprise PKS K8s provider solution in CSE 2.0.
Refer to the tenant work-flow to understand the grey decision box below in detail.
## Tenant work-flow of create-cluster operation
To understand the work-flow of creating a new K8s cluster in detail, review the flow chart below in its entirety. In this illustration, a user from tenant “Pepsi” attempts to create a new K8s cluster in organization VDC “ovdc-1”, and the course of action can vary based on the administrator's enablement of “ovdc-1”.
## CSE commands

### Administrator commands to on-board a tenant
Granting rights to Tenants and Users:
The steps below for granting rights are required only if the RBAC feature is turned on.
* `vcd right add "{cse}:CSE NATIVE DEPLOY RIGHT" -o tenant1`
* `vcd right add "{cse}:CSE NATIVE DEPLOY RIGHT" -o tenant2`
* `vcd right add "{cse}:PKS DEPLOY RIGHT" -o tenant1`
* `vcd role add-right "Native K8 Author" "{cse}:CSE NATIVE DEPLOY RIGHT"`
* `vcd role add-right "PKS K8 Author" "{cse}:PKS DEPLOY RIGHT"`
* `vcd role add-right "Omni K8 Author" "{cse}:CSE NATIVE DEPLOY RIGHT"`
* `vcd role add-right "Omni K8 Author" "{cse}:PKS DEPLOY RIGHT"`
* `vcd user create 'native-user' 'password' 'Native K8 Author'`
* `vcd user create 'pks-user' 'password' 'PKS K8 Author'`
* `vcd user create 'power-user' 'password' 'Omni K8 Author'`
Enabling ovdc(s) for TKGi deployments: Starting with CSE 3.0, a separate command group has been dedicated to TKGi (Enterprise PKS):
* `vcd cse pks ovdc list`
* `vcd cse pks ovdc enable ovdc2 -o tenant1 -k ent-pks --pks-plan "gold" --pks-cluster-domain "tenant1.com"`
### Cluster management commands

Starting with CSE 3.0, a separate command group has been dedicated to TKGi (Enterprise PKS):
* `vcd cse pks cluster list`
* `vcd cse pks cluster create`
* `vcd cse pks cluster info`
* `vcd cse pks cluster resize`
* `vcd cse pks cluster delete`
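A typical lifecycle with these commands might look like the sketch below. The cluster name is a placeholder, and each command accepts further options (for example, resize almost certainly needs a node-count option), so check `--help` on your CSE version:

```sh
vcd cse pks cluster create my-cluster   # provision a new TKGi cluster
vcd cse pks cluster list                # confirm the new cluster is listed
vcd cse pks cluster info my-cluster     # inspect the cluster's details
vcd cse pks cluster resize my-cluster   # scale the cluster (see Limitations below)
vcd cse pks cluster delete my-cluster   # tear the cluster down
```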
## FAQ
- How do I create an Enterprise PKS service account?
  - Refer to UAA Client to grant PKS access to a client.
  - Define your own `client_id` and `client_secret`. The scope should be `uaa.none` and the `authorized_grant_types` should be `client_credentials`.
  - Example of creating a client using the UAA CLI: `uaac client add test --name test --scope uaa.none --authorized_grant_types client_credentials --authorities clients.read,clients.write,clients.secret,scim.read,scim.write,pks.clusters.manage`
  - Log in to PKS: `pks login -a https://${PKS_UAA_URL}:9021 -k --client-name test --client-secret xx`
  - Input the credentials in `pks.yaml`.
- Why does the OpenID Connect feature need to remain disabled in Enterprise PKS?
  - OpenID Connect based authentication in VMware Enterprise PKS is a global configuration for all tenants. Its enablement misaligns with the multi-tenant model of Container Service Extension.
- What allocation models are supported for organization vdc(s) powered by Enterprise PKS?
  - Allocation and reservation models only. Pay-as-you-go is unsupported. Elasticity with the other models is also not supported.
- Are Enterprise PKS based clusters visible in the VCD UI?
  - Kubernetes Container Clusters UI plugin versions 2.0 and 1.0.3 can both be used to manage Enterprise PKS clusters. Refer to the compatibility matrix.
- Do Enterprise PKS based clusters adhere to their parent organization VDC compute settings?
  - Yes. Both native and Enterprise PKS clusters' combined usage is accounted towards reaching the compute limits of a given organization VDC resource pool.
- Are Enterprise PKS clusters isolated at the network layer?
  - Yes. Tenant-1 clusters cannot reach Tenant-2 clusters via node IP addresses.
- Do Enterprise PKS based clusters adhere to their parent organization VDC storage limits?
  - This functionality is not available yet. As of today, organization VDC storage limits apply only to native K8s clusters.
- Can native K8s clusters be deployed in organization vdc(s) dedicated to TKGi?
  - This functionality is not available yet.
- Can a tenant get dedicated storage for their Enterprise PKS based clusters?
  - This functionality is not available yet.
- Why is the response time of commands slower sometimes?
  - Response times can be slow for a variety of reasons. For example, the RBAC feature is known to add some latency, and Enterprise PKS based K8s cluster deployments have performance implications of their own. Performance optimizations will come in the near future.
- If there are extension timeout errors while executing commands, how can they be remedied?
  - Increase the VCD extension timeout to a higher value. Refer to Setting the API Extension Timeout; an example is shown below.
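For example, on a VCD cell the extension timeout can be inspected and raised with the cell management tool (the path and the 20-second value follow the linked guide; adjust for your environment):

```sh
cd /opt/vmware/vcloud-director/bin
./cell-management-tool manage-config -n extensibility.timeout -l    # view the current timeout
./cell-management-tool manage-config -n extensibility.timeout -v 20 # raise it to 20 seconds
```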
## Enterprise PKS Limitations
- Once `vcd cse pks cluster resize` is run on Enterprise PKS based clusters, an organization administrator's attempts to view and perform CRUD operations on those clusters will begin to fail with errors.
- Once `vcd cse pks cluster resize` is run on Enterprise PKS based clusters, the commands `vcd cse cluster info` and `vcd cse cluster list` will begin to display incomplete results for those resized clusters.
- Once a given OrgVDC is enabled for Enterprise PKS, renaming that OrgVDC in VCD will cause further K8s cluster deployments in that OrgVDC to fail.