User Guide
The Photon OS User Guide provides information about how to use Photon OS as a developer.
The User Guide covers the basics of setting up a Network PXE Boot Server, working with Kickstart and Kubernetes, and mounting remote file systems.
Product version: 5.0
This documentation applies to all 5.0.x releases.
Intended Audiences
This information is intended for developers who use Photon OS.
1 - Setting Up Network PXE Boot
Photon OS supports the Preboot Execution Environment, or PXE, over a network connection. This document describes how to set up a PXE boot server to install Photon OS.
Server Setup
To set up a PXE server, you will need to have the following:
- A DHCP server to allow hosts to get an IP address.
- A TFTP server, which is a file transfer protocol similar to FTP with no authentication.
- Optionally, an HTTP server. The HTTP server will serve the RPMs yum repo, or you can use the official VMware Photon Packages repo. Also, this HTTP server can be used if you want to provide a kickstart config for unattended installation.
The instructions to set up the servers assume you have an Ubuntu 14.04 machine with a static IP address of 172.16.78.134.
DHCP Setup
Install the DHCP server:
sudo apt-get install isc-dhcp-server
Edit /etc/default/isc-dhcp-server and set the Ethernet interface to INTERFACES="eth0".
Edit the DHCP configuration in /etc/dhcp/dhcpd.conf to allow machines to boot and get an IP address via DHCP in the range 172.16.78.230 - 172.16.78.250, for example:
subnet 172.16.78.0 netmask 255.255.255.0 {
range 172.16.78.230 172.16.78.250;
option subnet-mask 255.255.255.0;
option routers 172.16.78.134;
option broadcast-address 172.16.78.255;
filename "pxelinux.0";
next-server 172.16.78.134;
}
Restart the DHCP server:
sudo service isc-dhcp-server restart
TFTP Setup
Install the TFTP server:
sudo apt-get install tftpd-hpa
Enable the boot service and restart the service:
sudo update-inetd --enable BOOT
sudo service tftpd-hpa restart
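By default, tftpd-hpa serves files from /var/lib/tftpboot, the directory used in the PXE boot files setup below. For reference, the defaults in /etc/default/tftpd-hpa typically look similar to the following sketch (values may differ on your system):
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="--secure"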
Optional: HTTP server setup
This step is only needed if you plan to serve your local yum repo or a kickstart (ks) config file from this server; refer to Kickstart support for details.
- Serving your local yum repo.
You can install the Apache HTTP web server:
sudo apt-get install apache2
Mount the Photon iso to get the RPMS repo and sample ks config file:
mkdir /mnt/photon-iso
sudo mount <photon_iso> /mnt/photon-iso/
Copy the RPMS repo:
cp -r /mnt/photon-iso/RPMS /var/www/html/
To support kickstart, you can copy the sample config file from the ISO and edit it; refer to Kickstart support for details.
cp /mnt/photon-iso/isolinux/sample_ks.cfg /var/www/html/my_ks.cfg
PXE boot files setup
- Mount photon.iso to get the Linux kernel and initrd images:
mkdir /mnt/photon-iso
sudo mount <photon_iso> /mnt/photon-iso/
- Set up the PXE boot files:
wget https://www.kernel.org/pub/linux/utils/boot/syslinux/syslinux-6.03.tar.gz
tar -xvf syslinux-6.03.tar.gz
pushd /var/lib/tftpboot
cp -r /mnt/photon-iso/isolinux/* .
cp ~/syslinux-6.03/bios/com32/elflink/ldlinux/ldlinux.c32 .
cp ~/syslinux-6.03/bios/com32/lib/libcom32.c32 .
cp ~/syslinux-6.03/bios/com32/libutil/libutil.c32 .
cp ~/syslinux-6.03/bios/com32/menu/vesamenu.c32 .
cp ~/syslinux-6.03/bios/core/pxelinux.0 .
mkdir pxelinux.cfg
mv isolinux.cfg pxelinux.cfg/default
- Update the repo parameter to point to your HTTP yum repo; alternatively, you can point it to the official Photon packages repo.
sed -i "s/append/append repo=http:\/\/172.16.78.134\/RPMS/g" menu.cfg
popd
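After the sed command runs, each append line in menu.cfg gains a repo parameter that points at your HTTP server, for example (the trailing parameters are whatever the original ISO menu already contains):
append repo=http://172.16.78.134/RPMS initrd=initrd.img root=/dev/ram0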
2 - Kickstart Support in Photon OS
Photon OS works with kickstart for unattended and automated installations. The kickstart configuration file can be served through an HTTP server. You can also provide the kickstart configuration file through a secondary device or a CD-ROM attached to the host.
Kickstart also allows you to configure the installer and deploy virtual machines.
Ways to Provide Kickstart File
You can provide the path to the kickstart file in one of the following ways:
Remote kickstart
To provide a remote path for the kickstart file, use the following format:
ks=http://<kickstart-link>
Kickstart from CD-ROM attached with ISO
To provide a path for the kickstart file in a CD-ROM with ISO, use the following format:
ks=cdrom:/isolinux/sample_ks.cfg
Secondary Device Kickstart
To provide a secondary device path for the kickstart file, use the following format:
ks=<device-path>:<path-referential-to-device>
Example:
ks=/dev/sr1:/isolinux/sample_ks.cfg
Kickstart Capabilities
On Photon OS, you can configure many settings such as the hostname, password, disk to install, post installation script, and so on.
To find out more about the Kickstart capabilities and the permitted JSON parameters in Kickstart, see the following page: Kickstart Features
Sample Configuration File
Example kickstart configuration file:
{
"hostname": "photon-machine",
"password":
{
"crypted": false,
"text": "changeme"
},
"disk": "/dev/sda",
"partitions":[
{
"mountpoint":"/",
"size":0,
"filesystem":"ext4"
},
{
"mountpoint":"/boot",
"size":128,
"filesystem":"ext4"
},
{
"mountpoint":"/root",
"size":128,
"filesystem":"ext4"
},
{
"size":128,
"filesystem":"swap"
}
],
"bootmode": "bios",
"packagelist_file": "packages_minimal.json",
"additional_packages": [
"vim"
],
"postinstall": [
"#!/bin/sh",
"echo \"Hello World\" > /etc/postinstall"
],
"public_key": "<ssh-key-here>",
"linux_flavor": "linux",
"network": {
"type": "dhcp"
}
}
To see more such sample Kickstart configuration files, see the following page: Kickstart Sample Configuration Files
Installing Root Partition as LVM
In the kickstart file, modify the partitions field to mount the root partition as LVM.
For example:
"disk":"/dev/sda"
"partitions":[
{
"mountpoint":"/",
"size":0,
"filesystem":"ext4",
"lvm":{
"vg_name":"vg1",
"lv_name":"rootfs"
}
},
{
"mountpoint":"/boot",
"size":128,
"filesystem":"ext4"
},
{
"mountpoint":"/root",
"size":128,
"filesystem":"ext4",
"lvm":{
"vg_name":"vg1",
"lv_name":"root"
}
},
{
"size":128,
"filesystem":"swap",
"lvm":{
"vg_name":"vg2",
"lv_name":"swap"
}
}
]
Note:
- vg_name : Volume Group Name
- lv_name : Logical Volume Name
In the above example, rootfs and root are logical volumes in the volume group vg1, swap is a logical volume in the volume group vg2, and the physical volumes are part of the disk /dev/sda.
Multiple disks are also supported. For example:
"disk": "/dev/sda"
"partitions":[
{
"mountpoint":"/",
"size":0,
"filesystem":"ext4",
"lvm":{
"vg_name":"vg1",
"lv_name":"rootfs"
}
},
{
"mountpoint":"/boot",
"size":128,
"filesystem":"ext4"
},
{
"disk":"/dev/sdb",
"mountpoint":"/root",
"size":128,
"filesystem":"ext4",
"lvm":{
"vg_name":"vg1",
"lv_name":"root"
}
},
{
"size":128,
"filesystem":"swap",
"lvm":{
"vg_name":"vg1",
"lv_name":"swap"
}
}
]
If the disk name is not specified, the physical volumes become part of the default disk, /dev/sda.
In the above example, rootfs, root, and swap are logical volumes in the volume group vg1; the physical volume for root is on /dev/sdb, while the remaining partitions are created on /dev/sda.
Note: Mounting /boot partition as LVM is not supported.
Unattended Installation Through Kickstart
For an unattended installation, you pass the ks=<config_file> parameter on the kernel command line. To pass the config file, there are three options:
- Provide it in the ISO through a CD-ROM attached to the host.
- Provide it in the ISO through a specified secondary device.
- Serve it from an HTTP server.
The syntax to pass the configuration file to the kernel through the CD-ROM takes the following form:
ks=cdrom:/<config_file_path>
For example:
ks=cdrom:/isolinux/ks.cfg
The syntax to pass the configuration file to the kernel through a secondary device takes the following form:
ks=<device-path>:<path-referential-to-device>
For example:
ks=/dev/sr1:/isolinux/sample_ks.cfg
The syntax to serve the configuration file to the kernel from an HTTPS server takes the following form:
ks=https://<server>/<config_file_path>
To use an HTTP path or a self-signed HTTPS path, you have to enable insecure_installation by passing insecure_installation=1 along with the ks path. The kernel command line argument insecure_installation acts as a flag that the user can set to 1 to allow some operations that are normally disallowed for security reasons. It is disabled by default, and it is up to the user to ensure security when this option is enabled.
HTTP example:
ks=http://<server>/<config_file_path> insecure_installation=1
HTTPS (self-signed) example:
ks=https://<server>/<config_file_path> insecure_installation=1
Building an ISO with a Kickstart Config File
Here’s an example of how to add a kickstart config file to the Photon OS ISO by mounting the ISO on an Ubuntu machine and then rebuilding the ISO. The following example assumes you can adapt the sample kickstart configuration file that comes with the Photon OS ISO to your needs. You can obtain the Photon OS ISO for free from VMware at the following URL:
https://packages.vmware.com/photon
Once you have the ISO, mount it.
mkdir /tmp/photon-iso
sudo mount photon.iso /tmp/photon-iso
Then copy the content of the ISO to a writable directory and push it into the directory stack:
mkdir /tmp/photon-ks-iso
cp -r /tmp/photon-iso/* /tmp/photon-ks-iso/
pushd /tmp/photon-ks-iso/
Next, copy the sample kickstart configuration file that comes with the Photon OS ISO and modify it to suit your needs. In the ISO, the sample kickstart config file appears in the isolinux directory and is named sample_ks.cfg.
The name of the directory and the name of the file might be in all uppercase letters.
cp isolinux/sample_ks.cfg isolinux/my_ks.cfg
nano isolinux/my_ks.cfg
With a copy of the sample kickstart config file open in nano, make the changes that you want.
Now add a new item to the installation menu by modifying isolinux/menu.cfg and boot/grub2/grub.cfg:
cat >> isolinux/menu.cfg << EOF
label my_unattended
menu label ^My Unattended Install
menu default
kernel vmlinuz
append initrd=initrd.img root=/dev/ram0 ks=<ks_path>/my_ks.cfg loglevel=3 photon.media=cdrom
EOF
cat >> boot/grub2/grub.cfg << EOF
set default=0
set timeout=3
loadfont ascii
set gfxmode="1024x768"
gfxpayload=keep
set theme=/boot/grub2/themes/photon/theme.txt
terminal_output gfxterm
probe -s photondisk -u ($root)
menuentry "Install" {
linux /isolinux/vmlinuz root=/dev/ram0 ks=<ks_path>/my_ks.cfg loglevel=3 photon.media=UUID=$photondisk
initrd /isolinux/initrd.img
}
EOF
Following is an example of the ks path:
`ks_path=cdrom:/isolinux`
Note: You can specify any mount media through which you want to boot Photon OS. To do so, provide the path of the mount media device in the photon.media field, using the following syntax:
photon.media=/dev/<path of the Photon OS ISO>
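For example, if the ISO is attached as a SATA CD-ROM device (the device name below is illustrative; check your system):
photon.media=/dev/sr0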
Finally, rebuild the ISO so that it includes your kickstart config file:
mkisofs -R -l -L -D -b isolinux/isolinux.bin -c isolinux/boot.cat \
-no-emul-boot -boot-load-size 4 -boot-info-table \
-eltorito-alt-boot --eltorito-boot boot/grub2/efiboot.img -no-emul-boot \
-V "PHOTON_$(date +%Y%m%d)" . > <new_iso_path>.iso
popd
3 - Packer Examples for Photon OS
Packer is an open source tool that enables you to create identical machine images for multiple platforms.
VMware maintains two GitHub projects that include examples for creating Photon OS machine images using Packer.
All examples are authored in the HashiCorp Configuration Language (“HCL2”).
vSphere Virtual Machine Images
GitHub Project: vmware-samples/packer-examples-for-vsphere
This project provides examples to automate the creation of virtual machine images and their guest operating systems on VMware vSphere using Packer and the Packer Plugin for VMware vSphere (vsphere-iso). This project includes Photon OS as one of the guest operating systems.
Vagrant Boxes
GitHub Project: vmware/photon-packer-templates
This project provides examples to automate the creation of Photon OS machine images as Vagrant boxes using Packer and the Packer plugins for VMware (vmware-iso) and VirtualBox (virtualbox).
The Vagrant boxes included in the project can be run on the following providers:
- VMware Fusion (vmware_desktop)
- VMware Workstation Pro (vmware_desktop)
- VirtualBox (virtualbox)
This project is also used to generate the official vmware/photon Vagrant boxes.
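As a quick way to try one of the published boxes, you can, for example, bring it up with Vagrant (this assumes Vagrant and one of the listed providers are installed; the provider flag is illustrative):
vagrant init vmware/photon
vagrant up --provider=virtualbox
vagrant ssh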
4 - Kubernetes on Photon OS
You can use Kubernetes with Photon OS. The instructions in this section present a manual configuration that gets one worker node running to help you understand the underlying packages, services, ports, and so forth.
The Kubernetes package provides several services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, kube-proxy. These services are managed by systemd. Their configuration resides in a central location: /etc/kubernetes.
4.1 - Prerequisites
You need two or more machines with version 3.0 “GA” or later of Photon OS installed. It is recommended to use the latest GA version.
4.2 - Running Kubernetes on Photon OS
The procedure describes how to break the services up between the hosts.
The first host, photon-master, is the Kubernetes master. This host runs the kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master also runs etcd. Although etcd is not needed on the master if etcd runs on a different host, this guide assumes that etcd and the Kubernetes master run on the same host. The remaining host, photon-node, is the node and runs kubelet, proxy, and docker.
4.2.1 - System Information
Hosts:
photon-master = 192.168.121.9
photon-node = 192.168.121.65
4.2.2 - Prepare the Hosts
The following packages have to be installed. If the tdnf command returns “Nothing to do,” the package is already installed.
Install Kubernetes on all hosts (both photon-master and photon-node).
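For example, assuming the package is named kubernetes in the Photon OS repository:
tdnf install kubernetes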
Install iptables on photon-master and photon-node:
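A sketch, assuming the package name iptables:
tdnf install iptables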
Open TCP port 8080 (API server) on photon-master in the firewall:
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
Open TCP port 10250 (kubelet API) on photon-node in the firewall:
iptables -A INPUT -p tcp --dport 10250 -j ACCEPT
Install Docker on photon-node:
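A sketch, assuming the package name docker (the same package is installed in the kubeadm sections later in this guide):
tdnf install docker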
Add master and node to /etc/hosts on all machines (not needed if the hostnames are already in DNS). Make sure that communication works between photon-master and photon-node by using a utility such as ping.
echo "192.168.121.9 photon-master
192.168.121.65 photon-node" >> /etc/hosts
Edit /etc/kubernetes/config, which will be the same on all the hosts (master and node), so that it contains the following lines:
# Comma separated list of nodes in the etcd cluster
KUBE_MASTER="--master=http://photon-master:8080"
# logging to stderr routes it to the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow_privileged=false"
4.2.3 - Configure Kubernetes Services on the Master
Perform the following steps to configure Kubernetes services on the master:
Edit /etc/kubernetes/apiserver to appear as follows. The service_cluster_ip_range IP addresses must be an unused block of addresses, not used anywhere else. They do not need to be routed or assigned to anything.
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own
KUBE_API_ARGS=""
Start the appropriate services on master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
To add the other node, create the following node.json file on the Kubernetes master node:
{
"apiVersion": "v1",
"kind": "Node",
"metadata": {
"name": "photon-node",
"labels":{ "name": "photon-node-label"}
},
"spec": {
"externalID": "photon-node"
}
}
Create a node object internally in your Kubernetes cluster by running the following command:
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME LABELS STATUS
photon-node name=photon-node-label Unknown
Note: The above example only creates a representation for the node photon-node internally. It does not provision the actual photon-node. Also, it is assumed that photon-node (as specified in name) can be resolved and is reachable from the Kubernetes master node.
4.2.4 - Configure the Kubernetes Services on the Node
Perform the following steps to configure the kubelet on the node:
Edit /etc/kubernetes/kubelet to appear like this:
###
# Kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=photon-node"
# location of the api-server
KUBELET_API_SERVER="--kubeconfig=/etc/kubernetes/kubeconfig"
# Add your own
#KUBELET_ARGS=""
Make sure that the api-server endpoint in /etc/kubernetes/kubeconfig targets the api-server on the master node and does not point to the loopback interface:
apiVersion: v1
clusters:
- cluster:
server: <ip_master_node>:8080
Start the appropriate services on the node (photon-node):
for SERVICES in kube-proxy kubelet docker; do
systemctl restart $SERVICES
systemctl enable $SERVICES
systemctl status $SERVICES
done
Check to make sure that the cluster can now see the photon-node on photon-master and that its status changes to Ready.
kubectl get nodes
NAME LABELS STATUS
photon-node name=photon-node-label Ready
If the node status is NotReady, verify that the firewall rules are permissive for Kubernetes.
- Deleting nodes: To delete photon-node from your Kubernetes cluster, run the following on photon-master (do not run it now; it is shown for reference only):
kubectl delete -f ./node.json
Result
You should have a functional cluster. You can now launch a test pod. For an introduction to working with Kubernetes, see Kubernetes documentation.
4.3 - Kubernetes-Kubeadm Cluster on Photon OS
This section describes how to set up a Kubernetes cluster with kubeadm on Photon OS. You need to configure the following two nodes:
- Master Photon OS VM
- Worker Photon OS VM
The following sections in the document describe how to configure the master and worker nodes, and then run a sample application.
4.3.1 - Configuring a Master Node
This section describes how to configure a master node with the following details:
Node Name: kube-master
Node IP Address: 10.197.103.246
Host Names
Change the host name on the VM using the following command:
hostnamectl set-hostname kube-master
To ensure connectivity with the future worker node, kube-worker, modify the file /etc/hosts as follows:
cat /etc/hosts
# Begin /etc/hosts (network card version)
10.197.103.246 kube-master
10.197.103.232 kube-worker
::1 ipv6-localhost ipv6-loopback
127.0.0.1 localhost.localdomain
127.0.0.1 localhost
127.0.0.1 photon-machine
# End /etc/hosts (network card version)
System Tuning
IP Tables
Run the following iptables commands to open the required ports for Kubernetes to operate.
Save the updated set of rules so that they become available the next time you reboot the VM.
Firewall Settings
# ping
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
# etcd
iptables -A INPUT -p tcp -m tcp --dport 2379:2380 -j ACCEPT
# kubernetes
iptables -A INPUT -p tcp -m tcp --dport 6443 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 10250:10252 -j ACCEPT
# calico
iptables -A INPUT -p tcp -m tcp --dport 179 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4789 -j ACCEPT
# save rules
iptables-save > /etc/systemd/scripts/ip4save
Kernel Configuration
You need to enable IPv4 IP forwarding and iptables filtering on the bridge devices. Create the file /etc/sysctl.d/kubernetes.conf as follows:
# Load br_netfilter module to facilitate traffic between pods
modprobe br_netfilter
cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
Apply the new sysctl settings as follows:
sysctl --system
...
* Applying /etc/sysctl.d/kubernetes.conf ...
.........
/proc/sys/net/ipv4/ip_forward = 1
/proc/sys/net/bridge/bridge-nf-call-ip6tables = 1
/proc/sys/net/bridge/bridge-nf-call-iptables = 1
/proc/sys/net/bridge/bridge-nf-call-arptables = 1
Containerd Runtime Configuration
Use the following commands to install crictl and use containerd as the runtime endpoint:
#install crictl
tdnf install -y cri-tools
#modify crictl.yaml
cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
disable-pull-on-run: false
Use systemd as the cgroup driver for containerd, as shown in the following configuration:
Configuration File
cat /etc/containerd/config.toml
#disabled_plugins = ["cri"]
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
version = 2
#[grpc]
# address = "/run/containerd/containerd.sock"
# uid = 0
# gid = 0
[plugins."io.containerd.grpc.v1.cri"]
enable_selinux = true
[plugins."io.containerd.grpc.v1.cri".containerd]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
#[debug]
# address = "/run/containerd/debug.sock"
# uid = 0
# gid = 0
# level = "info"
Use the following commands to check whether containerd is running with the systemd cgroup:
Restart containerd service
systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd.service
systemctl status containerd
crictl info | grep -i cgroup | grep true
"SystemdCgroup": true
Kubeadm
Install kubernetes-kubeadm and other packages on the master node, and then use Kubeadm to install and configure Kubernetes.
Installing Kubernetes
Run the following commands to install kubeadm
, kubectl
, kubelet
, and apparmor-parser
:
tdnf install -y kubernetes-kubeadm apparmor-parser
systemctl enable --now kubelet
Pull the Kubernetes images using the following commands:
kubeadm config images pull
Run Kubeadm
Use the following commands to run Kubeadm and initialize the system:
kubeadm init
#For Flannel/Canal
kubeadm init --pod-network-cidr=10.244.0.0/16
I0420 05:45:08.440671 2794 version.go:256] remote version is much newer: v1.27.1; falling back to: stable-1.26
[init] Using Kubernetes version: v1.26.4
..........
..........
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.197.103.246:6443 --token bf9mwy.vhs88r1g2vlwprsg \
--discovery-token-ca-cert-hash sha256:be5f76dde01285a6ec9515f20abc63c4af890d9741e1a6e43409d1894043c19b
#For Calico
kubeadm init --pod-network-cidr=192.168.0.0/16
If everything goes well, the kubeadm init command should end with a message as displayed above.
Note: Copy and save the sha256 token value at the end. You need to use this token for the worker node to join the cluster.
The --pod-network-cidr parameter is a requirement for Calico. The 192.168.0.0/16 network is Calico’s default. For Flannel/Canal, it is 10.244.0.0/16.
You need to export the Kubernetes configuration; repeat this export step for every new session. Also, untaint the control plane VM so that pods can be scheduled on the master VM.
Use the following commands to export the Kubernetes configuration and untaint the control plane VM:
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
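To confirm that the taint was removed, you can, for example, inspect the node:
kubectl describe node kube-master | grep Taints
The output should no longer list the node-role.kubernetes.io/control-plane taint.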
The Network Plugin
Download the Canal network plugin manifest using the following command:
#canal
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/canal.yaml -o canal.yaml
# Alternatively if using flannel
curl https://raw.githubusercontent.com/flannel-io/flannel/v0.21.4/Documentation/kube-flannel.yml -o flannel.yaml
# Alternatively if using calico
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -o calico.yaml
Get the CNI images required for the network policy to work:
tdnf install -y docker
systemctl restart docker
docker login -u $username
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull flannelcni/flannel:v0.16.3
docker pull calico/kube-controllers:v3.25.0
Note: Here we proceed with downloading the images required by Canal.
Use the following command to apply the network policy:
#Apply network plugin configuration
kubectl apply -f canal.yaml
The Kubernetes master node should be up and running now. Try the following commands to verify the state of the cluster:
kubectl cluster-info
Kubernetes control plane is running at https://10.197.103.246:6443
CoreDNS is running at https://10.197.103.246:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready control-plane 10m v1.26.1
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-57b57c56f-qxz4s 1/1 Running 0 6m54s
kube-system canal-w4d5r 1/2 Running 0 6m54s
kube-system coredns-787d4945fb-nnll2 1/1 Running 0 10m
kube-system coredns-787d4945fb-wfv8j 1/1 Running 0 10m
kube-system etcd-kube-master 1/1 Running 1 11m
kube-system kube-apiserver-kube-master 1/1 Running 1 11m
kube-system kube-controller-manager-kube-master 1/1 Running 1 11m
kube-system kube-proxy-vjwwr 1/1 Running 0 10m
kube-system kube-scheduler-kube-master 1/1 Running 1 11m
4.3.2 - Configure a Worker Node
This section describes how to configure a worker node with the following details:
Node Name: kube-worker
Node IP Address: 10.197.103.232
Install the worker VM using the same Photon OS image.
Note: The VM configuration is similar to that of the master node, just with a different IP address.
Host Names
Change the hostname on the VM using the following command:
hostnamectl set-hostname kube-worker
To ensure connectivity with the master node, kube-master, modify the file /etc/hosts as follows:
cat /etc/hosts
# Begin /etc/hosts (network card version)
10.197.103.246 kube-master
10.197.103.232 kube-worker
::1 ipv6-localhost ipv6-loopback
127.0.0.1 localhost.localdomain
127.0.0.1 localhost
127.0.0.1 photon-machine
# End /etc/hosts (network card version)
System Tuning
IP Tables
Run the following iptables commands to open the required ports for Kubernetes to operate.
Save the updated set of rules so that they become available the next time you reboot the VM.
Firewall settings
# ping
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
# kubernetes
iptables -A INPUT -p tcp -m tcp --dport 10250:10252 -j ACCEPT
# workloads
iptables -A INPUT -p tcp -m tcp --dport 30000:32767 -j ACCEPT
# calico
iptables -A INPUT -p tcp -m tcp --dport 179 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 4789 -j ACCEPT
# save rules
iptables-save > /etc/systemd/scripts/ip4save
Kernel Configuration
You need to enable IPv4 IP forwarding and iptables filtering on the bridge devices. Create the file /etc/sysctl.d/kubernetes.conf as follows:
# Load br_netfilter module to facilitate traffic between pods
modprobe br_netfilter
cat /etc/sysctl.d/kubernetes.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
Apply the new sysctl settings as follows:
sysctl --system
...
* Applying /etc/sysctl.d/kubernetes.conf ...
.........
/proc/sys/net/ipv4/ip_forward = 1
/proc/sys/net/bridge/bridge-nf-call-ip6tables = 1
/proc/sys/net/bridge/bridge-nf-call-iptables = 1
/proc/sys/net/bridge/bridge-nf-call-arptables = 1
Containerd Runtime Configuration
Use the following commands to install crictl and use containerd as the runtime endpoint:
#install crictl
tdnf install -y cri-tools
#modify crictl.yaml
cat /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
disable-pull-on-run: false
Use systemd as the cgroup driver for containerd, as shown in the following configuration:
Configuration File
cat /etc/containerd/config.toml
#disabled_plugins = ["cri"]
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
version = 2
#[grpc]
# address = "/run/containerd/containerd.sock"
# uid = 0
# gid = 0
[plugins."io.containerd.grpc.v1.cri"]
enable_selinux = true
[plugins."io.containerd.grpc.v1.cri".containerd]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
#[debug]
# address = "/run/containerd/debug.sock"
# uid = 0
# gid = 0
# level = "info"
Use the following commands to check whether containerd is running with the systemd cgroup:
Restart containerd service
systemctl daemon-reload
systemctl restart containerd
systemctl enable containerd.service
systemctl status containerd
crictl info | grep -i cgroup | grep true
"SystemdCgroup": true
Kubeadm
Install kubernetes-kubeadm and other packages on the worker node, and then use Kubeadm to install and configure Kubernetes.
Installing Kubernetes
Run the following commands to install kubeadm
, kubectl
, kubelet
, and apparmor-parser
:
tdnf install -y kubernetes-kubeadm apparmor-parser
systemctl enable --now kubelet
Pull the Kubernetes images using the following commands:
kubeadm config images pull
Join the Cluster
Use Kubeadm to join the cluster with the token you got after running the kubeadm init command on the master node. Use the following command to join the cluster:
Join the master
kubeadm join 10.197.103.246:6443 --token eaq5cl.gqnzgmqj779xtym7 \
--discovery-token-ca-cert-hash sha256:90b9da1b34de007c583aec6ca65f78664f35b3ff03ceffb293d6ec9332142d05
Use the following command to get cni images for network policy pods to work:
Pull required docker images
tdnf install -y docker
systemctl restart docker
docker login -u $username
docker pull calico/cni:v3.25.0
docker pull calico/node:v3.25.0
docker pull flannelcni/flannel:v0.16.3
docker pull calico/kube-controllers:v3.25.0
Cluster Test
The Kubernetes worker node should be up and running now. Run the following command from the kube-master node to verify the state of the cluster:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready control-plane 21m v1.26.1
kube-worker Ready <none> 6m26s v1.26.1
It takes a few seconds for the kube-worker node to appear and display the ready status.
4.3.3 - Run a Hello-World Application
Run a hello-world application to verify that the new two-node cluster works properly. All commands in this section must be executed from kube-master.
Create a pod named “hello” that prints “Hello Kubernetes”. The following hello.yaml file defines the pod:
cat hello.yaml
apiVersion: v1
kind: Pod
metadata:
name: hello
spec:
restartPolicy: Never
containers:
- name: hello
image: projects.registry.vmware.com/photon/photon4:latest
command: ["/bin/bash"]
args: ["-c", "echo Hello Kubernetes"]
Use the following commands to create the hello application and check its status and output:
kubectl apply -f hello.yaml
#check status
kubectl get pods
#check logs
kubectl logs hello | grep "Hello Kubernetes"
You have successfully set up a two-VM Kubernetes kubeadm cluster.
5 - Photon NFS Utilities for Mounting Remote File Systems
This document describes how to mount a remote file system on Photon OS by using nfs-utils, a commonly used package that contains tools to work with the Network File System protocol (NFS).
Check a Remote Server
showmount -e <nfs-server-name-or-IP>
Example:
showmount -e eastern-filer.eng.vmware.com
showmount -e 10.109.87.129
Mount a Remote File System in Photon Full
The nfs-utils package is installed by default in the full version of Photon OS. Here is how to mount a directory through NFS on Photon OS:
mount -t nfs nfs-ServernameOrIp:/exportfolder /mnt/folder
Example:
mount -t nfs eastern-filer.eng.vmware.com:/export/filer /mnt/filer
mount -t nfs 10.109.87.129:/export /mnt/export
Mount a Remote File System in Photon Minimal
The nfs-utils package is not installed in the minimal version of Photon OS. You install it by running the following command:
tdnf install nfs-utils
For more information on installing packages with the tdnf command, see the Photon OS Administration Guide.
Once nfs-utils is installed, you can mount a file system by running the following commands, replacing the placeholders with the path of the directory that you want to mount:
mount nfs
mount -t nfs nfs-ServernameOrIp:/exportfolder /mnt/folder
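To make an NFS mount persistent across reboots, you can also add an entry to /etc/fstab. A minimal sketch, reusing the example server and export from above:
eastern-filer.eng.vmware.com:/export/filer /mnt/filer nfs defaults 0 0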
6 - Seamless Update with A/B Partition System
You can seamlessly update or roll back Photon OS with the support for A/B partition system. You can create a shadow partition set of the system and maintain the two partition sets. For example, an active set of partitions (partition A) and an inactive set of partitions (shadow partition or partition B).
The two partition sets ensure that the working system runs seamlessly on the active partition set while the update is performed on the inactive partition set. After the inactive partition set is updated, you can execute a kexec to boot quickly into the updated partition set. If the updated partition set does not work, the system can reboot and roll back to the previously working state on partition A.
Note: The kexec boot is executed with the abupdate switch command. The kexec boot does not modify the EFI boot manager (or MBR in the case of BIOS).
6.1 - Configuring A/B Partition System
You need to create a shadow partition set and configure the A/B partition system to use it for Photon OS updates and modifications.
To use the A/B partition system, ensure the following prerequisites:
If you boot with BIOS, only a root filesystem pair is needed. If you boot with UEFI, an EFI partition pair is also needed.
In the kickstart configuration file, when you create a partition, set the value of the ab parameter to true to create a shadow partition of the user-defined partition.
To know more about the kickstart configuration, see the following page: Kickstart Support in Photon OS
The following example shows how to create a shadow partition mounted at /:
{
"partitions": [
{
"disk": "/dev/sda",
"mountpoint": "/",
"size": 0,
"filesystem": "ext4",
"ab": true
},
{
"disk": "/dev/sda",
"mountpoint": "/sda",
"size": 100,
"filesystem": "ext4"
}
]
}
Configure the system details of the partitions for A/B update in the following configuration file: /etc/abupdate.conf
The following template shows what the configuration file looks like:
# either UEFI, BIOS, or BOTH
# BOOT_TYPE=<boot type>
# automatically switch to other partition set after update?
# AUTO_SWITCH=NO
# automatically finalize the update after a switch?
# AUTO_FINISH=no
# can choose to either use tdnf or rpm as a package manager
# if not specified, tdnf is used
# PACKAGE_MANAGER=tdnf
# Provide information about partition sets
# PartUUID info can be found with the "blkid" command
#
# EFI is needed if booting with UEFI
# Format: PARTUUID A, PARTUUID B, mount point
#
# Example: HOME=("PARTUUID A" "PARTUUID B" "/home")
# Note that the / partition should be labeled as _ROOT
# EFI=("PARTUUID A" "PARTUUID B" "/boot/efi")
# _ROOT=("PARTUUID A" "PARTUUID B" "/")
# List of all partition sets
# SETS=( "ROOT" )
# exclude the following directories/files from being synced
# note that these directory paths are absolute, not relative to current working directory
#
# Format: <set name>_EXCLUDE=( "/dir1/" "/dir2" "/dir3/subdir/file" ... "/dirN/" )
#
# Example:
# HOME_EXCLUDE=( "/mnt" "lost+found" )
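For reference, a minimal filled-in configuration for a UEFI system might look like the following sketch (the PARTUUID placeholders are hypothetical; obtain the real values with blkid):
BOOT_TYPE=UEFI
AUTO_SWITCH=NO
AUTO_FINISH=no
PACKAGE_MANAGER=tdnf
EFI=("<partuuid-efi-a>" "<partuuid-efi-b>" "/boot/efi")
_ROOT=("<partuuid-root-a>" "<partuuid-root-b>" "/")
SETS=( "ROOT" )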
You can use the abupdate init command to auto-populate all the fields. However, it is recommended that you manually enter the fields for better accuracy.
Note: Persistent or shared partitions that exist outside the active and inactive partition sets are also supported in the A/B partition system. You need not specify the persistent or shared partitions in the configuration files.
6.2 - Executing Update and Rollback Using A/B Partition System
To update or roll back Photon OS using the A/B partition system, perform the following workflow:
Edit the files on the inactive partition set. You can use command options such as mount, update, and deploy to mount and edit the files based on your requirements.
Switch to the inactive partition set using the abupdate switch command.
If you are not satisfied with the update, execute abupdate switch or reboot to roll back to the old active partition set.
If you are satisfied with the update on the inactive partition set, finalize the switch with the abupdate finish command.
Note: Once you execute the abupdate finish command, a reboot does not roll back to the previous partition set.
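Putting the workflow together, a typical update session built from the commands described in the next topic might look like this sketch:
abupdate check
abupdate update
abupdate switch
# verify the system now running on the updated partition set, then finalize
abupdate finish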
To know more about the commands for various operations, visit the following topic: Commands for Operations
6.3 - Commands for Operations
You can perform various operations in the A/B partition system using the following commands:
abupdate mount/unmount: Use this command to mount or unmount the inactive partition set. The partition set is mounted as a tree at /mnt/abupdate/. After you mount the partition set, the files are accessible for modifications.
abupdate update: Use this command to automatically upgrade the packages on the inactive partition set. This command supports tdnf and rpm as the package managers.
abupdate sync: Use this command to synchronize the active partition set with the inactive partition set. Note that this command eliminates the ability to roll back to a safe system, because both partition sets become mirrors of each other after the command is executed.
abupdate clean: Use this command to erase everything on the inactive partition set.
abupdate deploy <tar.gz>: Use this command to erase and clean the inactive partition set, mount it, and then install or unpack the specified OS image into it from a tar file.
abupdate check: Use this command to run checks on the inactive partition set from the active partition set. Execute it before the switch command to ensure that the inactive partition set is not broken. This command also runs checks on the tools needed to update or switch from the active partition set.
abupdate switch: Use this command to switch from the active partition set to the inactive partition set. Note that this command does not modify the EFI boot manager (or MBR in the case of BIOS); hence, any subsequent reboot rolls back to the previously active partition set.
If the AUTO_SWITCH parameter is set to yes in the configuration file, the system automatically switches to the updated partition set after the update is complete.
abupdate finish: Use this command to finalize the update. This command modifies the EFI boot manager (or MBR in the case of BIOS). After you execute this command, subsequent reboots load this partition set instead of rolling back to the previous partition set.
Note: If the AUTO_FINISH parameter is set to yes in the configuration file, the system automatically finalizes the switch with the finish command.
abupdate help: Use this command to print the help menu.