You can use Kubernetes with Photon OS. The instructions in this section present a manual configuration that gets one worker node running to help you understand the underlying packages, services, ports, and so forth.
The Kubernetes package provides several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd. Their configuration resides in a central location: /etc/kubernetes.
1 - Prerequisites
You need two or more machines with version 3.0 “GA” or later of Photon OS installed. It is recommended to use the latest GA version.
2 - Running Kubernetes on Photon OS
The procedure describes how to distribute the services between the hosts.
The first host, photon-master, is the Kubernetes master. This host runs the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master also runs etcd. Although etcd is not needed on the master if etcd runs on a different host, this guide assumes that etcd and the Kubernetes master run on the same host. The remaining host, photon-node, is the node and runs kubelet, proxy, and docker.
The following packages have to be installed. If the tdnf command returns “Nothing to do,” the package is already installed.
Install Kubernetes on all hosts (both photon-master and photon-node).
tdnf install kubernetes
Install iptables on photon-master and photon-node:
tdnf install iptables
Open TCP port 8080 (API server) on photon-master in the firewall:
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
Open TCP port 10250 (kubelet API) on photon-node in the firewall:
iptables -A INPUT -p tcp --dport 10250 -j ACCEPT
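By default, the iptables rules above are lost on reboot. Photon OS conventionally loads its IPv4 firewall rules from /etc/systemd/scripts/ip4save at boot; assuming your installation uses that default path (verify it exists on your system), you can persist the rules as follows:

```shell
# Persist the current iptables rules so they survive a reboot
# (/etc/systemd/scripts/ip4save is the conventional Photon OS rules file)
iptables-save > /etc/systemd/scripts/ip4save
```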
Install Docker on photon-node:
tdnf install docker
Add master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS). Make sure that communication works between photon-master and photon-node by using a utility such as ping.
Edit /etc/kubernetes/config, which will be the same on all the hosts (master and node), so that it contains the following lines:
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://photon-master:8080"

# logging to stderr routes it to the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
2.3 - Configure Kubernetes Services on the Master
Perform the following steps to configure Kubernetes services on the master:
Edit /etc/kubernetes/apiserver so that it appears as shown below. The --service-cluster-ip-range addresses must be an unused block of addresses that is not in use anywhere else on your network. They do not need to be routed or assigned to anything.
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# Add your own
KUBE_API_ARGS=""
Start the appropriate services on master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
To add the other node, create the following node.json file on the Kubernetes master node:
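The original file contents are not reproduced here; a minimal node.json consistent with the node name and label used below might look like the following sketch (the apiVersion and fields match the legacy v1 Node object used by early Kubernetes releases; adjust for your cluster version):

```shell
# Hypothetical minimal node.json matching the name/label shown in this guide
cat > ./node.json <<'EOF'
{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "name": "photon-node",
        "labels": { "name": "photon-node-label" }
    }
}
EOF
```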
Create a node object internally in your Kubernetes cluster by running the following command:
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
photon-node name=photon-node-label Unknown
Note: The above example only creates a representation for the node photon-node internally. It does not provision the actual photon-node. Also, it is assumed that photon-node (as specified in name) can be resolved and is reachable from the Kubernetes master node.
2.4 - Configure the Kubernetes Services on the Node
Perform the following steps to configure the kubelet on the node:
Edit /etc/kubernetes/kubelet to appear like this:
#### Kubernetes kubelet (node) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=photon-node"

# location of the api-server
KUBELET_API_SERVER="--kubeconfig=/etc/kubernetes/kubeconfig"

# Add your own
#KUBELET_ARGS=""
Make sure that the API server endpoint in /etc/kubernetes/kubeconfig targets the API server on the master node and does not point at the loopback interface:
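The exact kubeconfig contents depend on your environment; a minimal unauthenticated sketch pointing at the master's insecure port, consistent with the addresses used elsewhere in this guide, might look like this:

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    # Must be the master's address, not 127.0.0.1
    server: http://photon-master:8080
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
```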
Start the appropriate services on the node (photon-node):
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
Check to make sure that the cluster can now see the photon-node on photon-master and that its status changes to Ready.
kubectl get nodes
NAME LABELS STATUS
photon-node name=photon-node-label Ready
If the node status is NotReady, verify that the firewall rules are permissive for Kubernetes.
Deleting nodes: To delete photon-node from your Kubernetes cluster, run the following command on photon-master (do not run it now; it is shown for information only):
kubectl delete -f ./node.json
Result
You should have a functional cluster. You can now launch a test pod. For an introduction to working with Kubernetes, see Kubernetes documentation.
3 - Kubernetes-Kubeadm Cluster on Photon OS
This section of the document describes how to set up a Kubernetes cluster with kubeadm on Photon OS. You need to configure the following two nodes:
Master Photon OS VM
Worker Photon OS VM
The following sections in the document describe how to configure the master and worker nodes, and then run a sample application.
3.1 - Configuring a Master Node
This section describes how to configure a master node with the following details:
Node Name: kube-master
Node IP Address: 10.197.103.246
Host Names
Change the host name on the VM using the following command:
hostnamectl set-hostname kube-master
To ensure connectivity with the future working node, kube-worker, modify the file /etc/hosts as follows:
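The entries to add are not reproduced here; assuming a worker VM at 10.197.103.247 (a hypothetical address; substitute your worker's actual IP), the added lines might look like this:

```text
10.197.103.246   kube-master
10.197.103.247   kube-worker
```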
Pull the Kubernetes images using the following commands:
kubeadm config images pull
Run Kubeadm
Use the following commands to run Kubeadm and initialize the system:
kubeadm init
#For Flannel/Canal
kubeadm init --pod-network-cidr=10.244.0.0/16
I0420 05:45:08.440671 2794 version.go:256] remote version is much newer: v1.27.1; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.4
..........
..........
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.197.103.246:6443 --token bf9mwy.vhs88r1g2vlwprsg \
--discovery-token-ca-cert-hash sha256:be5f76dde01285a6ec9515f20abc63c4af890d9741e1a6e43409d1894043c19b
#For Calico
kubeadm init --pod-network-cidr=192.168.0.0/16
If everything goes well, the kubeadm init command should end with a message as displayed above.
Note: Copy and save the sha256 token value at the end. You need to use this token for the worker node to join the cluster.
The --pod-network-cidr parameter is a requirement for Calico. The 192.168.0.0/16 network is Calico’s default. For Flannel/Canal, it is 10.244.0.0/16.
You need to export the Kubernetes configuration. Repeat this export step for each new session.
Also, untaint the control plane VM to schedule pods on the master VM.
Use the following command to export the Kubernetes configuration and untaint the control plane VM:
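A sketch of those commands, assuming Kubernetes v1.26 as in the kubeadm init output above (v1.26 uses the node-role.kubernetes.io/control-plane taint):

```shell
# Point kubectl at the new cluster for this session
export KUBECONFIG=/etc/kubernetes/admin.conf

# Remove the control-plane taint so pods can be scheduled on the master VM
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
```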
Run the following iptables commands to open the required ports for Kubernetes to operate.
Save the updated set of rules so that they become available the next time you reboot the VM.
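A sketch of the rules, opening the ports that a kubeadm control plane conventionally uses (API server 6443, etcd 2379-2380, kubelet 10250, kube-controller-manager 10257, kube-scheduler 10259); the ip4save path is Photon OS's conventional rules file, so verify it on your system:

```shell
# Open the standard kubeadm control-plane ports
iptables -A INPUT -p tcp --dport 6443 -j ACCEPT
iptables -A INPUT -p tcp --dport 2379:2380 -j ACCEPT
iptables -A INPUT -p tcp --dport 10250 -j ACCEPT
iptables -A INPUT -p tcp --dport 10257 -j ACCEPT
iptables -A INPUT -p tcp --dport 10259 -j ACCEPT

# Persist the rules across reboots (verify this path on your system)
iptables-save > /etc/systemd/scripts/ip4save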
Pull the Kubernetes images using the following commands:
kubeadm config images pull
Join the Cluster
Use Kubeadm to join the cluster with the token you got after running the kubeadm init command on the master node. Use the following command to join the cluster:
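Using the token and CA certificate hash printed by kubeadm init earlier in this guide, the join command has the following shape (your token and hash values will differ):

```shell
# Run as root on the worker node; substitute your own token and hash
kubeadm join 10.197.103.246:6443 --token bf9mwy.vhs88r1g2vlwprsg \
    --discovery-token-ca-cert-hash sha256:be5f76dde01285a6ec9515f20abc63c4af890d9741e1a6e43409d1894043c19b
```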