Kubernetes on Photon OS

You can use Kubernetes with Photon OS. The instructions in this section present a manual configuration that gets one worker node running to help you understand the underlying packages, services, ports, and so forth.

The Kubernetes package provides several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd, and their configuration resides in a central location: /etc/kubernetes.
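
If you want to confirm what the package provides on a host, you can optionally list the unit files and look at the configuration directory (a quick check; the exact file names can vary between Photon OS releases):

    # List the Kubernetes systemd units installed by the package
    systemctl list-unit-files | grep kube

    # Inspect the shared configuration directory
    ls /etc/kubernetes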

1 - Prerequisites

You need two or more machines with Photon OS 3.0 GA or later installed. Using the latest GA version is recommended.

2 - Running Kubernetes on Photon OS

This procedure describes how to split the Kubernetes services across the hosts.

The first host, photon-master, is the Kubernetes master. This host runs the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master also runs etcd. Although etcd is not needed on the master if etcd runs on a different host, this guide assumes that etcd and the Kubernetes master run on the same host. The remaining host, photon-node, is the node and runs kubelet, proxy, and docker.

2.1 - System Information

Hosts:

photon-master = 192.168.121.9
photon-node = 192.168.121.65

2.2 - Prepare the Hosts

The following packages have to be installed. If the tdnf command returns “Nothing to do,” the package is already installed.

  • Install Kubernetes on all hosts (both photon-master and photon-node).

    tdnf install kubernetes
    
  • Install iptables on photon-master and photon-node:

    tdnf install iptables
    
  • Open TCP port 8080 (API server) on photon-master in the firewall:

    iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
    
  • Open TCP port 10250 (kubelet API) on photon-node in the firewall:

    iptables -A INPUT -p tcp --dport 10250 -j ACCEPT
    
  • Install Docker on photon-node:

    tdnf install docker
    
  • Add master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS). Make sure that communication works between photon-master and photon-node by using a utility such as ping.

    echo "192.168.121.9	photon-master
    192.168.121.65	photon-node" >> /etc/hosts
    
  • Edit /etc/kubernetes/config, which will be the same on all the hosts (master and node), so that it contains the following lines:

    # How the controller manager, scheduler, and proxy find the API server
    KUBE_MASTER="--master=http://photon-master:8080"
    
    # logging to stderr routes it to the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal message level, 0 is debug
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow_privileged=false"
    

2.3 - Configure Kubernetes Services on the Master

Perform the following steps to configure Kubernetes services on the master:

  1. Edit /etc/kubernetes/apiserver so that it appears as follows. The service-cluster-ip-range value must be a block of IP addresses that is not used anywhere else; the addresses do not need to be routed or assigned to anything.

    # The address on the local server to listen to.
    KUBE_API_ADDRESS="--address=0.0.0.0"
    
    # Comma separated list of nodes in the etcd cluster
    KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
    
    # Address range to use for services
    KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
    
    # Add your own
    KUBE_API_ARGS=""
    
  2. Start the appropriate services on master:

    for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    	systemctl restart $SERVICES
    	systemctl enable $SERVICES
    	systemctl status $SERVICES
    done
    
  3. To add the other node, create the following node.json file on the Kubernetes master node:

    {
        "apiVersion": "v1",
        "kind": "Node",
        "metadata": {
            "name": "photon-node",
            "labels":{ "name": "photon-node-label"}
        },
        "spec": {
            "externalID": "photon-node"
        }
    }
    
  4. Create a node object internally in your Kubernetes cluster by running the following command:

    $ kubectl create -f ./node.json
    
    $ kubectl get nodes
    NAME                LABELS              STATUS
    photon-node         name=photon-node-label     Unknown
    

Note: The above example only creates a representation for the node photon-node internally. It does not provision the actual photon-node. Also, it is assumed that photon-node (as specified in name) can be resolved and is reachable from the Kubernetes master node.
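
Before photon-node registers, you can optionally confirm from photon-master that the hostname resolves and the node is reachable (a quick check using standard tools; adjust the hostname if yours differs):

    # Verify name resolution (reads /etc/hosts and DNS)
    getent hosts photon-node

    # Verify basic network reachability
    ping -c 3 photon-node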

2.4 - Configure the Kubernetes Services on the Node

Perform the following steps to configure the kubelet on the node:

  1. Edit /etc/kubernetes/kubelet to appear like this:

    ###
    # Kubernetes kubelet (node) config
    
    # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
    KUBELET_ADDRESS="--address=0.0.0.0"
    
    # You may leave this blank to use the actual hostname
    KUBELET_HOSTNAME="--hostname_override=photon-node"
    
    # location of the api-server
    KUBELET_API_SERVER="--kubeconfig=/etc/kubernetes/kubeconfig"
    
    # Add your own
    #KUBELET_ARGS=""
    
  2. Make sure that the API server endpoint in /etc/kubernetes/kubeconfig points to the API server on the master node and not to the loopback interface:

    apiVersion: v1
    clusters:
    - cluster:
        server: <ip_master_node>:8080
    
  3. Start the appropriate services on the node (photon-node):

    for SERVICES in kube-proxy kubelet docker; do 
        systemctl restart $SERVICES
        systemctl enable $SERVICES
        systemctl status $SERVICES 
    done
    
  4. On photon-master, check that the cluster can now see photon-node and that its status changes to Ready.

    kubectl get nodes
    NAME                LABELS              STATUS
    photon-node          name=photon-node-label     Ready
    

    If the node status is NotReady, verify that the firewall rules are permissive for Kubernetes.

    • Deleting nodes: To delete photon-node from your Kubernetes cluster, run the following on photon-master (for reference only; do not run it now):
    kubectl delete -f ./node.json
    

Result

You should have a functional cluster. You can now launch a test pod. For an introduction to working with Kubernetes, see the Kubernetes documentation.
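
For example, a minimal smoke test could look like the following. This is only a sketch; it assumes photon-node can pull the nginx image from a public registry, and any other image works just as well:

    # Run a simple test workload
    kubectl run test-nginx --image=nginx

    # Confirm that it was scheduled onto photon-node
    kubectl get pods -o wide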

3 - Kubernetes-Kubeadm Cluster on Photon OS

This section describes how to set up a Kubernetes-Kubeadm cluster on Photon OS. You need to configure the following two nodes:

  • Master Photon OS VM
  • Worker Photon OS VM

The following sections in the document describe how to configure the master and worker nodes, and then run a sample application.

3.1 - Configuring a Master Node

This section describes how to configure a master node with the following details:

Node Name: kube-master
Node IP Address: 10.197.103.246

Host Names

Change the host name on the VM using the following command:

hostnamectl set-hostname kube-master

To ensure connectivity with the future worker node, kube-worker, modify the file /etc/hosts as follows:

cat /etc/hosts
# Begin /etc/hosts (network card version)
10.197.103.246 kube-master
10.197.103.232 kube-worker
  
::1         ipv6-localhost ipv6-loopback
127.0.0.1   localhost.localdomain
127.0.0.1   localhost
127.0.0.1   photon-machine
# End /etc/hosts (network card version)

System Tuning

IP Tables

Run the following iptables commands to open the required ports for Kubernetes to operate.

Save the updated set of rules so that they become available the next time you reboot the VM.

Firewall Settings
# ping
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
  
# etcd
iptables -A INPUT -p tcp -m tcp --dport 2379:2380 -j ACCEPT
  
# kubernetes
iptables -A INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 10250:10252 -j ACCEPT
  
# calico
iptables -A INPUT -p tcp -m tcp --dport 179 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4789 -j ACCEPT
  
# save rules
iptables-save > /etc/systemd/scripts/ip4save

Kernel Configuration

You need to enable IPv4 IP forwarding and iptables filtering on the bridge devices. Create the file /etc/sysctl.d/kubernetes.conf as follows:

# Load br_netfilter module to facilitate traffic between pods
modprobe br_netfilter
 
 
cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

Apply the new sysctl settings as follows:

sysctl --system
...
 
* Applying /etc/sysctl.d/kubernetes.conf ...
.........
/proc/sys/net/ipv4/ip_forward = 1
/proc/sys/net/bridge/bridge-nf-call-ip6tables = 1
/proc/sys/net/bridge/bridge-nf-call-iptables = 1
/proc/sys/net/bridge/bridge-nf-call-arptables = 1
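
Note that modprobe loads br_netfilter only for the current boot. If you want the module loaded automatically after a reboot, one option is a modules-load.d entry such as the following (the file name kubernetes.conf is an arbitrary choice):

cat /etc/modules-load.d/kubernetes.conf
br_netfilter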

Containerd Runtime Configuration

Use the following commands to install crictl and configure containerd as the runtime endpoint:

#install crictl
tdnf install -y cri-tools
 
#modify crictl.yaml
cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
disable-pull-on-run: false

Use systemd as the cgroup driver for containerd, as shown in the following configuration file:

Configuration File
cat /etc/containerd/config.toml
#disabled_plugins = ["cri"]
 
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
version = 2
 
#[grpc]
#  address = "/run/containerd/containerd.sock"
#  uid = 0
#  gid = 0
 
[plugins."io.containerd.grpc.v1.cri"]
enable_selinux = true
  [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
 
#[debug]
#  address = "/run/containerd/debug.sock"
#  uid = 0
#  gid = 0
#  level = "info"

Restart the containerd service, and then use the following commands to check that containerd is running with the systemd cgroup driver:

Restart containerd service
systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd.service
systemctl status containerd
 
crictl info | grep -i cgroup | grep true
            "SystemdCgroup": true

Kubeadm

Install kubernetes-kubeadm and other packages on the master node, and then use Kubeadm to install and configure Kubernetes.

Installing Kubernetes

Run the following commands to install kubeadm, kubectl, kubelet, and apparmor-parser:

tdnf install -y kubernetes-kubeadm apparmor-parser
systemctl enable --now kubelet

Pull the Kubernetes images using the following command:

kubeadm config images pull
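
If you want to see which images kubeadm will download before pulling them, you can optionally list them first:

kubeadm config images list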

Run Kubeadm

Use the following commands to run kubeadm and initialize the control plane:

kubeadm init
#For Flannel/Canal
kubeadm init --pod-network-cidr=10.244.0.0/16
 
 
I0420 05:45:08.440671    2794 version.go:256] remote version is much newer: v1.27.1; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.4
..........
..........
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
Alternatively, if you are the root user, you can run:
 
  export KUBECONFIG=/etc/kubernetes/admin.conf
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 10.197.103.246:6443 --token bf9mwy.vhs88r1g2vlwprsg \
    --discovery-token-ca-cert-hash sha256:be5f76dde01285a6ec9515f20abc63c4af890d9741e1a6e43409d1894043c19b
 
 
#For Calico
kubeadm init --pod-network-cidr=192.168.0.0/16

If everything goes well, the kubeadm init command should end with a message as displayed above.

Note: Copy and save the kubeadm join command printed at the end, including the token and the sha256 discovery hash. The worker node needs them to join the cluster.
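
If you lose the join command, you can regenerate it later on the master node; kubeadm prints a fresh token together with the discovery hash:

kubeadm token create --print-join-command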

The --pod-network-cidr parameter is a requirement for Calico. The 192.168.0.0/16 network is Calico’s default. For Flannel/Canal, it is 10.244.0.0/16.

You need to export the Kubernetes configuration. Repeat this export step in every new shell session.

Also, untaint the control plane node so that pods can be scheduled on the master VM.

Use the following commands to export the Kubernetes configuration and untaint the control plane VM:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
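
To confirm that the taint was removed, you can check the node description; the Taints field should now read <none>:

kubectl describe node kube-master | grep -i taints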

The Network Plugin

Download the Canal network plugin manifest using the following command (alternatives for Flannel and Calico are also shown):

#canal
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/canal.yaml -o canal.yaml
 
# Alternatively if using flannel
curl https://raw.githubusercontent.com/flannel-io/flannel/v0.21.4/Documentation/kube-flannel.yml -o flannel.yaml
# Alternatively if using calico
curl  https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -o calico.yaml

Pull the CNI images required for the network policy pods to work:

tdnf install -y docker
systemctl restart docker
docker login -u $username
 
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull flannelcni/flannel:v0.16.3
docker pull calico/kube-controllers:v3.25.0

Note: This example proceeds with downloading the images required by Canal.

Use the following command to apply the network plugin configuration:

#Apply network plugin configuration
kubectl apply -f canal.yaml

The Kubernetes master node should be up and running now. Try the following commands to verify the state of the cluster:

kubectl cluster-info
Kubernetes control plane is running at https://10.197.103.246:6443
CoreDNS is running at https://10.197.103.246:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
 
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
 
kubectl get nodes
NAME          STATUS   ROLES           AGE   VERSION
kube-master   Ready    control-plane   10m   v1.26.1
 
kubectl get pods --all-namespaces
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57b57c56f-qxz4s   1/1     Running   0          6m54s
kube-system   canal-w4d5r                               1/2     Running   0          6m54s
kube-system   coredns-787d4945fb-nnll2                  1/1     Running   0          10m
kube-system   coredns-787d4945fb-wfv8j                  1/1     Running   0          10m
kube-system   etcd-kube-master                          1/1     Running   1          11m
kube-system   kube-apiserver-kube-master                1/1     Running   1          11m
kube-system   kube-controller-manager-kube-master       1/1     Running   1          11m
kube-system   kube-proxy-vjwwr                          1/1     Running   0          10m
kube-system   kube-scheduler-kube-master                1/1     Running   1          11m

3.2 - Configuring a Worker Node

This section describes how to configure a worker node with the following details:

Node Name: kube-worker
Node IP Address: 10.197.103.232

Install the worker VM using the same Photon OS image.

Note: The VM configuration is similar to that of the master node, just with a different IP address.

Host Names

Change the hostname on the VM using the following command:

hostnamectl set-hostname kube-worker

To ensure connectivity with the master node, kube-master, modify the file /etc/hosts as follows:

cat /etc/hosts
# Begin /etc/hosts (network card version)
10.197.103.246 kube-master
10.197.103.232 kube-worker
  
::1         ipv6-localhost ipv6-loopback
127.0.0.1   localhost.localdomain
127.0.0.1   localhost
127.0.0.1   photon-machine
# End /etc/hosts (network card version)

System Tuning

IP Tables

Run the following iptables commands to open the required ports for Kubernetes to operate. Save the updated set of rules so that they become available the next time you reboot the VM.

Firewall Settings
# ping
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
  
# kubernetes
iptables -A INPUT -p tcp -m tcp --dport 10250:10252 -j ACCEPT
  
# workloads
iptables -A INPUT -p tcp -m tcp --dport 30000:32767 -j ACCEPT
  
# calico
iptables -A INPUT -p tcp -m tcp --dport 179 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4789 -j ACCEPT
  
# save rules
iptables-save > /etc/systemd/scripts/ip4save

Kernel Configuration

You need to enable IPv4 IP forwarding and iptables filtering on the bridge devices. Create the file /etc/sysctl.d/kubernetes.conf as follows:

# Load br_netfilter module to facilitate traffic between pods
modprobe br_netfilter
 
 
cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1

Apply the new sysctl settings as follows:

sysctl --system
...
 
* Applying /etc/sysctl.d/kubernetes.conf ...
.........
/proc/sys/net/ipv4/ip_forward = 1
/proc/sys/net/bridge/bridge-nf-call-ip6tables = 1
/proc/sys/net/bridge/bridge-nf-call-iptables = 1
/proc/sys/net/bridge/bridge-nf-call-arptables = 1

Containerd Runtime Configuration

Use the following commands to install crictl and configure containerd as the runtime endpoint:

#install crictl
tdnf install -y cri-tools
 
#modify crictl.yaml
cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
disable-pull-on-run: false

Use systemd as the cgroup driver for containerd, as shown in the following configuration file:

Configuration File
cat /etc/containerd/config.toml
#disabled_plugins = ["cri"]
 
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
version = 2
 
#[grpc]
#  address = "/run/containerd/containerd.sock"
#  uid = 0
#  gid = 0
 
[plugins."io.containerd.grpc.v1.cri"]
enable_selinux = true
  [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
 
#[debug]
#  address = "/run/containerd/debug.sock"
#  uid = 0
#  gid = 0
#  level = "info"

Restart the containerd service, and then use the following commands to check that containerd is running with the systemd cgroup driver:

Restart containerd service
systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd.service
systemctl status containerd
 
crictl info | grep -i cgroup | grep true
            "SystemdCgroup": true

Kubeadm

Install kubernetes-kubeadm and other packages on the worker node, and then use Kubeadm to install and configure Kubernetes.

Installing Kubernetes

Run the following commands to install kubeadm, kubectl, kubelet, and apparmor-parser:

tdnf install -y kubernetes-kubeadm apparmor-parser
systemctl enable --now kubelet

Pull the Kubernetes images using the following command:

kubeadm config images pull

Join the Cluster

Use kubeadm to join the cluster with the token and discovery hash that were generated when you ran kubeadm init on the master node:

Join the master
kubeadm join 10.197.103.246:6443 --token eaq5cl.gqnzgmqj779xtym7 \
    --discovery-token-ca-cert-hash sha256:90b9da1b34de007c583aec6ca65f78664f35b3ff03ceffb293d6ec9332142d05
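
After the join completes, you can optionally verify on the worker itself that the kubelet is active and that containers are being started through containerd:

systemctl status kubelet
crictl ps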

Use the following commands to pull the CNI images required for the network policy pods to work:

Pull required docker images
tdnf install -y docker
systemctl restart docker
docker login -u $username
 
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull flannelcni/flannel:v0.16.3
docker pull calico/kube-controllers:v3.25.0

Cluster Test

The Kubernetes worker node should be up and running now. Run the following command from the kube-master node to verify the state of the cluster:

kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
kube-master   Ready    control-plane   21m     v1.26.1
kube-worker   Ready    <none>          6m26s   v1.26.1

It can take a few seconds for the kube-worker node to appear and show the Ready status.
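
The worker shows <none> under ROLES because kubeadm does not assign a role label to worker nodes. If you prefer the output to show a role, you can optionally add the conventional label yourself (cosmetic only; it does not change scheduling behavior):

kubectl label node kube-worker node-role.kubernetes.io/worker=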

3.3 - Run a Hello-World Application

Run a hello-world application to verify that the new two-node cluster works properly. All commands in this section must be executed from kube-master.

Create a pod named “hello” that prints “Hello Kubernetes”.

First, create a hello.yaml file with the following contents:

cat hello.yaml
 
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: projects.registry.vmware.com/photon/photon4:latest
    command: ["/bin/bash"]
    args: ["-c", "echo Hello Kubernetes"]

Use the following commands to create the hello application and verify its output:

kubectl apply -f hello.yaml
#check status
kubectl get pods
#check logs
kubectl logs hello | grep "Hello Kubernetes"
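
When you are done, you can optionally remove the test pod:

kubectl delete -f hello.yaml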

You have successfully set up a two-VM Kubernetes-Kubeadm cluster.