1 - Photon Network Config Manager Command-line Interface (nmctl)

You can use the network-config-manager command-line interface (nmctl) to configure and introspect the state of network links as seen by systemd-networkd. nmctl can query and configure devices for addresses, routes, gateways, DNS, NTP, domains, and the hostname. You can also use nmctl to create virtual network devices (VLAN, VXLAN, bridge, bond, and so on) and to configure link properties such as WakeOnLanPassword, Port, BitsPerSecond, Duplex, and Advertise. nmctl uses the sd-bus and sd-device APIs to interact with systemd, systemd-networkd, systemd-resolved, systemd-hostnamed, and systemd-timesyncd over D-Bus, and it reports status using the same terminology as systemd-networkd. nmctl generates configuration files in the systemd-networkd format, so the configuration persists across reboots.

The following example shows the system status:

❯ nmctl
         System Name: zeus
              Kernel: Linux (5.10.152-3.ph4)
     systemd version: v252-1
        Architecture: x86-64
      Virtualization: vmware
    Operating System: VMware Photon OS/Linux
          Machine ID: aa6e4cb92bee4c1aa8b304eafe28166c
        System State: routable
        Online State: partial
           Addresses: fe80::982e:b0ff:fe07:cc12/64   on device cni-podman0
                      fe80::20c:29ff:fe64:cb18/64    on device eth0
                      172.16.130.145/24              on device eth1
                      172.16.130.144/24              on device eth0
                      127.0.0.1/8                    on device lo
                      fe80::20c:29ff:fe5f:d143/64    on device eth1
                      ::1/128                        on device lo
                      fe80::c027:acff:fe19:d741/64   on device vethe8dc6ac9
                      10.88.0.1/16                   on device cni-podman0
             Gateway: 172.16.130.2	                 on device eth1
                      172.16.130.2	                 on device eth0
                 DNS: 172.16.130.2 172.16.130.1 172.16.130.126
                  NTP: 10.128.152.81 10.166.1.120 10.188.26.119 10.84.55.42

The following example shows the network status:

❯ nmctl status eth0
           Alternative names: eno1 enp11s0 ens192
                       Flags: UP BROADCAST RUNNING MULTICAST LOWERUP
                        Type: ether
                        Path: pci-0000:0b:00.0
                      Driver: vmxnet3
                      Vendor: VMware
                       Model: VMXNET3 Ethernet Controller
                   Link File: /usr/lib/systemd/network/99-default.link
                Network File: /etc/systemd/network/99-dhcp-en.network
                       State: routable (configured)
               Address State: routable
          IPv4 Address State: routable
          IPv6 Address State: degraded
                Online State: online
         Required for Online: yes
           Activation Policy: up
                  HW Address: 00:0c:29:64:cb:18 (VMware, Inc.)
                         MTU: 1500 (min: 60 max: 9000)
                      Duplex: full
                       Speed: 10000
                       QDISC: mq
              Queues (Tx/Rx): 2/2
             Tx Queue Length: 1000
IPv6 Address Generation Mode: eui64
                GSO Max Size: 65536 GSO Max Segments: 65535
                     Address: 10.197.103.228/23 (DHCPv4 via 10.142.7.86) lease time: 7200 seconds T1: 3600 seconds T2: 6300 seconds
                              fe80::20c:29ff:fe64:cb18/64
                     Gateway: 172.16.130.2
                         DNS: 172.16.130.3 172.16.130.4 172.16.130.5
                         NTP: 172.16.130.6 172.16.130.7 172.16.130.8 172.16.130.9
           DHCP6 Client DUID: DUID-EN/Vendor:0000ab119a69db91b911f3180000

To add DNS, use the following command:

nmctl add-dns dev eth0 dns 192.168.1.45 192.168.1.46

To set the MTU, use the following command:

nmctl set-mtu dev eth0 mtu 1400

To set the MAC address, use the following command:

nmctl set-mac dev eth0 mac 00:0c:29:3a:bc:11

To set link options (ARP, multicast, all-multicast, and promiscuous mode), use the following command:

nmctl set-link-option dev eth0 arp yes mc yes amc no pcs no

To add a static address, use the following command:

nmctl add-addr dev eth0 a 192.168.1.45/24

To add a default gateway, use the following command:

nmctl add-default-gw dev eth0 gw 192.168.1.1 onlink yes
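
After adding the address and the gateway, you can verify the result with nmctl status eth0 (shown earlier) or with standard iproute2 commands, for example:

❯ ip addr show dev eth0
❯ ip route show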

The following example shows how to create a VLAN using nmctl. The command creates the .netdev and .network files and assigns the VLAN to the underlying device. nmctl also sets the ownership of these files to systemd-network automatically.

The command syntax is as follows:

❯ nmctl create-vlan [VLAN name] dev [MASTER DEVICE] id [ID INTEGER] proto [PROTOCOL {802.1q|802.1ad}]

For example:

❯ sudo nmctl create-vlan vlan-95 dev eth0 id 19
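
The generated files are plain systemd-networkd configuration. A minimal sketch of what they typically contain is shown below; the exact file name (10-vlan-95.netdev is an assumption here) and defaults can vary between network-config-manager versions. nmctl also attaches the VLAN to the master device, which in systemd-networkd terms means a VLAN=vlan-95 entry in the [Network] section of the eth0 .network file.

❯ sudo cat /etc/systemd/network/10-vlan-95.netdev
[NetDev]
Name=vlan-95
Kind=vlan

[VLAN]
Id=19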

The following example shows how to create a VXLAN using nmctl:

❯ sudo nmctl create-vxlan vxlan-98 vni 32 local 192.168.1.2 remote 192.168.1.3 port 7777 independent yes
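
As with the VLAN, nmctl writes the VXLAN definition as a systemd-networkd .netdev file. A sketch of the sections that correspond to the options above (the file name 10-vxlan-98.netdev is an assumption) looks like this:

❯ sudo cat /etc/systemd/network/10-vxlan-98.netdev
[NetDev]
Name=vxlan-98
Kind=vxlan

[VXLAN]
VNI=32
Local=192.168.1.2
Remote=192.168.1.3
DestinationPort=7777
Independent=yes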

The following example shows how to create a virtual routing and forwarding (VRF) device:

❯ sudo nmctl create-vrf test-vrf table 555                                                                                               
❯ ip -d link show test-vrf
4: test-vrf: <NOARP,MASTER,UP,LOWER_UP> mtu 65575 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 86:ad:9b:50:83:1f brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 1280 maxmtu 65575 
    vrf table 555 addrgenmode none numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535  
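
The ip output above corresponds to a .netdev definition similar to the following sketch (the file name 10-test-vrf.netdev is an assumption):

❯ sudo cat /etc/systemd/network/10-test-vrf.netdev
[NetDev]
Name=test-vrf
Kind=vrf

[VRF]
Table=555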

The following example shows how to remove a virtual netdev:

❯ sudo nmctl remove-netdev vlan-95                                                                                         
❯ ip -d link show vlan-95 
Device "vlan-95" does not exist.

Note: nmctl not only removes the .netdev and .network files but also removes the virtual netdev.

1.1 - Configuring WireGuard using Network Configuration Manager

WireGuard is a lightweight, simple, fast, and secure VPN that is built into Linux kernel 5.6 and above. This topic provides sample WireGuard configurations for systemd-networkd using network-config-manager on Photon OS, a Linux-based operating system.

To generate the required configuration, you need to install the WireGuard tools. You can download the WireGuard tools or install them using tdnf.

To install the WireGuard tools using tdnf, run the following command:

❯ sudo tdnf install wireguard-tools -y

To configure WireGuard VPN, you need to create a pair of keys on both the sites between which you want to establish the VPN connection. Each site needs the public key of the other site. To create the pair of keys, use the following command:

❯ wg genkey | tee wg-private.key | wg pubkey > wg-public.key

You also need to change the ownership of the key files so that the private key is readable by systemd-networkd (the systemd-network group), as shown in the following example:

❯ chown root:systemd-network wg-private.key wg-public.key
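
Optionally, you can also tighten the file mode so that the private key is not world-readable. The exact mode is a local policy decision; for example:

❯ chmod 0640 wg-private.key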

The following examples show the configurations of the two sites:

Site 1

❯ nmctl
         System Name: photon
              Kernel: Linux (5.10.152-6.ph4)
     systemd version: v247.11-4.ph4
        Architecture: x86-64
      Virtualization: vmware
    Operating System: VMware Photon OS/Linux
          Machine ID: 5103175aac7f4967acbdf97946c27ca3
        System State: routable
           Addresses: fe80::20c:29ff:fe3c:d58f/64    on device eth0
                      fe80::20c:29ff:fe3c:d599/64    on device eth1
                      127.0.0.1/8                    on device lo
                      192.168.1.10/24                on device eth0
                      192.168.1.9/24                 on device eth1
                      ::1/128                        on device lo
             Gateway: 192.168.1.1                    on device eth0
                      192.168.1.1                    on device eth1
                 DNS: 125.99.61.254 116.72.253.254



❯ cat wg-public.key 
d0AR4V68TJPA65ddKADmyTBbEgPTo75Xq/EVE1nsVFA=

Site 2

❯ nmctl        
         System Name: Zeus
              Kernel: Linux (6.1.10-8.ph5)
     systemd version: v253-1
        Architecture: x86-64
      Virtualization: vmware
    Operating System: VMware Photon OS/Linux
          Machine ID: d4f740d7e70d423cb46c8b1def547701
        System State: routable
        Online State: partial
           Addresses: fe80::20c:29ff:fe5f:d139/64    on device ens33
                      fe80::20c:29ff:fe5f:d143/64    on device ens37
                      127.0.0.1/8                    on device lo
                      ::1/128                        on device lo
                      192.168.1.8/24                 on device ens33
                      192.168.1.7/24                 on device ens37
             Gateway: 192.168.1.1                    on device ens33
                      192.168.1.1                    on device ens37
                 DNS: 125.99.61.254 116.72.253.254


➜ cat wg-public.key
lhR9C3iZGKC+CIibXsOxDql8m7YulZA5I2tqgU2PnhM=

To generate the WireGuard configuration using nmctl for Site 1, use the following command:

➜ nmctl create-wg wg99 private-key-file /etc/systemd/network/wg-private.key listen-port 34966 public-key lhR9C3iZGKC+CIibXsOxDql8m7YulZA5I2tqgU2PnhM= endpoint 192.168.1.11:34966 allowed-ips 10.0.0.2/32

➜ nmctl add-addr dev wg99 a 10.0.0.1/24

The following configuration is generated for systemd-networkd:

❯ cat 10-wg99.netdev

[NetDev]
Name=wg99
Kind=wireguard


[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg-private.key
ListenPort=34966


[WireGuardPeer]
# Public key of Site #2
PublicKey=lhR9C3iZGKC+CIibXsOxDql8m7YulZA5I2tqgU2PnhM=
Endpoint=192.168.1.11:34966
AllowedIPs=10.0.0.2/32

❯ cat 10-wg99.network
[Match]
Name=wg99


[Address]
Address=10.0.0.1/24

➜  ~ nmctl status wg99
                       Flags: UP RUNNING NOARP LOWERUP
                        Kind: wireguard
                        Type: wireguard
                      Driver: wireguard
                   Link File: /usr/lib/systemd/network/99-default.link
                Network File: /etc/systemd/network/10-wg99.network
                       State: routable (configured) 
               Address State: routable
          IPv4 Address State: routable
          IPv6 Address State: off
                Online State: online
         Required for Online: yes
           Activation Policy: up
                         MTU: 1420 (min: 0 max: 2147483552) 
                       QDISC: noqueue 
              Queues (Tx/Rx): 1/1 
             Tx Queue Length: 1000 
IPv6 Address Generation Mode: eui64 
                GSO Max Size: 65536 GSO Max Segments: 65535 
                     Address: 10.0.0.1/24

The following wg command output shows the WireGuard state on Site 1:

➜  wg

interface: wg99
  public key: d0AR4V68TJPA65ddKADmyTBbEgPTo75Xq/EVE1nsVFA=
  private key: (hidden)
  listening port: 34966

peer: lhR9C3iZGKC+CIibXsOxDql8m7YulZA5I2tqgU2PnhM=
  endpoint: 192.168.1.11:34966
  allowed ips: 10.0.0.2/32
  latest handshake: 20 minutes, 36 seconds ago
  transfer: 57.70 KiB received, 58.37 KiB sent

To generate the WireGuard configuration using nmctl for Site 2, use the following command:

➜ nmctl create-wg wg99 private-key-file /etc/systemd/network/wg-private.key listen-port 34966 public-key d0AR4V68TJPA65ddKADmyTBbEgPTo75Xq/EVE1nsVFA= endpoint 192.168.1.7:34966 allowed-ips 10.0.0.1/32

➜ nmctl add-addr dev wg99 a 10.0.0.2/24

The following configuration is generated for systemd-networkd:

➜ cat 10-wg99.netdev 
                 
[NetDev]
Name=wg99
Kind=wireguard


[WireGuard]
PrivateKeyFile=/etc/systemd/network/wg-private.key
ListenPort=34966


[WireGuardPeer]
# Public key of Site #1
PublicKey=d0AR4V68TJPA65ddKADmyTBbEgPTo75Xq/EVE1nsVFA=
Endpoint=192.168.1.7:34966
AllowedIPs=10.0.0.1/32


➜ cat 10-wg99.network
[Match]
Name=wg99


[Address]
Address=10.0.0.2/24


❯ nmctl status wg99
                       Flags: UP RUNNING NOARP LOWERUP 
                        Kind: wireguard
                        Type: wireguard
                      Driver: wireguard
                   Link File: /usr/lib/systemd/network/99-default.link
                 Network File: /etc/systemd/network/10-wg99.network
                       State: routable (configured) 
               Address State: routable
          IPv4 Address State: routable
          IPv6 Address State: off
                Online State: online
         Required for Online: yes
           Activation Policy: up
                         MTU: 1420 (min: 0 max: 2147483552) 
                       QDISC: noqueue 
              Queues (Tx/Rx): 1/1 
             Tx Queue Length: 1000 
IPv6 Address Generation Mode: eui64 
                GSO Max Size: 65536 GSO Max Segments: 65535 
                     Address: 10.0.0.2/24
                                                

➜ wg

interface: wg99
  public key: lhR9C3iZGKC+CIibXsOxDql8m7YulZA5I2tqgU2PnhM=
  private key: (hidden)
  listening port: 34966


peer: d0AR4V68TJPA65ddKADmyTBbEgPTo75Xq/EVE1nsVFA=
  endpoint: 192.168.1.7:34966
  allowed ips: 10.0.0.1/32
  latest handshake: 23 minutes, 57 seconds ago
  transfer: 57.70 KiB received, 58.37 KiB sent

To verify connectivity from Site 1, check the WireGuard interface address and then ping the peer:

❯ ip a show wg99

Response:

25: wg99: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.0.0.1/24 brd 10.0.0.255 scope global wg99
       valid_lft forever preferred_lft forever

❯ ping 10.0.0.2

PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=4.90 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=3.77 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=23.0 ms
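
You can also confirm that the tunnel completed a recent handshake with another form of the wg command:

❯ sudo wg show wg99 latest-handshakes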

To verify connectivity from Site 2, check the WireGuard interface address and then ping the peer:

➜  ip a show wg99

Response:

209: wg99: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.0.0.2/24 scope global wg99
       valid_lft forever preferred_lft forever

➜  ping 10.0.0.1

PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=1.92 ms

1.2 - Configure SR-IOV using Network Configuration Manager

SR-IOV technology enables multiple virtual machines to share a single PCIe device. SR-IOV allows a single PCIe device to appear as multiple and separate PCIe interfaces. This enables direct connection of multiple virtual machines to the PCIe devices. PCI-SIG (Peripheral Component Interconnect Special Interest Group) defines the standard interface and requirements in the SR-IOV specification to promote interoperability of the SR-IOV enabled devices.

SR-IOV introduces the concept of Physical Functions (PFs) and Virtual Functions (VFs). PFs refer to full-featured PCIe functions. VFs refer to the lightweight functions that lack certain configuration resources.

You can configure SR-IOV on Photon OS using the Network Configuration Manager (nmctl). Note that systemd-networkd also supports SR-IOV.

You can use the netdevsim kernel module to configure and test SR-IOV, as shown in the following example:

➜  ~  modprobe netdevsim                                                                                                                    
➜  ~  lsmod | grep netdevsim 
netdevsim             102400  0
psample                20480  1 netdevsim

The values written to new_device are the device ID and the number of ports. The following command creates the netdevsim99 device with one port, which appears as the eni99np1 network interface:

➜  ~ echo "99 1" > /sys/bus/netdevsim/new_device

➜  ~ ip -d link show eni99np1
287: eni99np1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether ca:28:ff:4e:73:2a brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 portname p1 switchid 82ae398327c5db81a27dc2756c43f00315f442de1779fcfbfc582bbb3e62cb parentbus netdevsim parentdev netdevsim99 

The following command enables three VFs on the device:

➜  ~ echo "3" > /sys/bus/netdevsim/devices/netdevsim99/sriov_numvfs

➜  ~ ip -d link show eni99np1                                      
287: eni99np1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether ca:28:ff:4e:73:2a brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 portname p1 switchid 82ae398327c5db81a27dc2756c43f00315f442de1779fcfbfc582bbb3e62cb parentbus netdevsim parentdev netdevsim99 
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off, query_rss off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off, query_rss off
    vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust off, query_rss off

To configure a VF using nmctl, use the add-sr-iov command as shown in the following example:

➜  ~ nmctl add-sr-iov dev eni99np1 vf 0 vlanid 5 qos 1 macspoofck yes qrss True trust yes linkstate yes macaddr 00:11:22:33:44:55

➜  ~ ip -d link show eni99np1                                                                                                
287: eni99np1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether ca:28:ff:4e:73:2a brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 portname p1 switchid 82ae398327c5db81a27dc2756c43f00315f442de1779fcfbfc582bbb3e62cb parentbus netdevsim parentdev netdevsim99 
    vf 0     link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff, vlan 5, qos 1, spoof checking on, link-state enable, trust on, query_rss on


➜  ~ sudo cat /etc/systemd/network/10-eni99np1.network
[Match]
Name=eni99np1


[SR-IOV]
VirtualFunction=0
VLANId=5
QualityOfService=1
MACSpoofCheck=yes
QueryReceiveSideScaling=yes
Trust=yes
LinkState=yes
MACAddress=00:11:22:33:44:55

nmctl generates the SR-IOV configuration in the systemd-networkd format and reloads the configuration, so systemd-networkd applies the settings to the VF.
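
If you later edit the generated .network file by hand, you can ask systemd-networkd to re-read its configuration and then inspect the link with its own tooling, for example:

➜  ~ sudo networkctl reload
➜  ~ networkctl status eni99np1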

To configure the remaining VFs, run the command again for each VF, as shown in the following example:

➜  ~ nmctl add-sr-iov dev eni99np1 vf 1 vlanid 6 qos 2 macspoofck yes qrss True trust yes linkstate yes macaddr 00:11:22:33:44:56
➜  ~ nmctl add-sr-iov dev eni99np1 vf 2 vlanid 7 qos 3 macspoofck yes qrss True trust yes linkstate yes macaddr 00:11:22:33:44:57

➜  ~ ip -d link show eni99np1                                                                                                
287: eni99np1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether ca:28:ff:4e:73:2a brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 portname p1 switchid 82ae398327c5db81a27dc2756c43f00315f442de1779fcfbfc582bbb3e62cb parentbus netdevsim parentdev netdevsim99 
    vf 0     link/ether 00:11:22:33:44:55 brd ff:ff:ff:ff:ff:ff, vlan 5, qos 1, spoof checking on, link-state enable, trust on, query_rss on
    vf 1     link/ether 00:11:22:33:44:56 brd ff:ff:ff:ff:ff:ff, vlan 6, qos 2, spoof checking on, link-state enable, trust on, query_rss on
    vf 2     link/ether 00:11:22:33:44:57 brd ff:ff:ff:ff:ff:ff, vlan 7, qos 3, spoof checking on, link-state enable, trust on, query_rss on

nmctl generates the following .network file with one [SR-IOV] section per VF:

➜  ~ cat /etc/systemd/network/10-eni99np1.network
[Match]
Name=eni99np1


[SR-IOV]
VirtualFunction=0
VLANId=5
QualityOfService=1
MACSpoofCheck=yes
QueryReceiveSideScaling=yes
Trust=yes
LinkState=yes
MACAddress=00:11:22:33:44:55


[SR-IOV]
VirtualFunction=1
VLANId=6
QualityOfService=2
MACSpoofCheck=yes
QueryReceiveSideScaling=yes
Trust=yes
LinkState=yes
MACAddress=00:11:22:33:44:56


[SR-IOV]
VirtualFunction=2
VLANId=7
QualityOfService=3
MACSpoofCheck=yes
QueryReceiveSideScaling=yes
Trust=yes
LinkState=yes
MACAddress=00:11:22:33:44:57

2 - Photon Real-Time Operating System Command-line Interface

Photon Real-Time Operating System provides commands for manipulating real-time properties of processes.

tuna

The tuna utility can be used to view and modify process priorities, CPU isolation, and other real-time characteristics of the system.

Examples:

View processes and their RT scheduling policies and priorities:

    $ tuna -P

                           thread        ctxt_switches
      pid  SCHED_ rtpri  affinity  voluntary  nonvoluntary                   cmd
        1   OTHER     0         0       1211           917               systemd
        2   OTHER     0         0        281             0              kthreadd
        3   OTHER     0         0          3             1                rcu_gp
        4   OTHER     0         0          2             1            rcu_par_gp
        6   OTHER     0         0          8             1  kworker/0:0H-kblockd
       13   FIFO      1         0        317             1             rcu_sched
       16   FIFO     99         0          3             2         posixcputmr/0
       17   FIFO     99         0          6             2           migration/0
      679   FIFO     50         0    1647541             1       irq/58-eth0-rxt

You can perform the following tasks by using the tuna command:

  • Isolate a set of CPUs

    $ tuna -c <cpulist> -i (where <cpulist> can be X,Y-Z)

  • See the list of processes running on the specific CPUs before and after isolation

      $ tuna -c <cpulist> --show_threads
      $ tuna -c <cpulist> -i --show_threads
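
For example, to isolate CPUs 2 and 3 (a hypothetical CPU list) and then list the threads that remain on them:

      $ tuna -c 2,3 -i --show_threads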
    

taskset

The taskset command can be used to get or set the CPU affinity of tasks:

  • Run a program bound to a set of CPUs

    $ taskset -c <cpulist> ./program (where <cpulist> can be X,Y-Z)

  • Move a running task to a set of CPUs

    $ taskset -c -p <cpulist> <pid>

  • View the CPU affinity settings of a running task

    $ taskset -c -p <pid>
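
For example, to start a program bound to CPUs 0 and 1, and then check the affinity of an already running task (./myapp and PID 4321 are hypothetical):

    $ taskset -c 0,1 ./myapp
    $ taskset -c -p 4321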

chrt

The chrt command can be used to get or set the real-time scheduling policies and priorities of processes:

  • Modify the scheduling policy and priority of a running task

    $ chrt -f -p <priority> <pid> (sets the task with the given PID to the SCHED_FIFO policy with the given priority)

  • View the current scheduling policy and priority of a running task

    $ chrt -p <pid>
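
For example, to switch a task with the hypothetical PID 4321 to SCHED_FIFO at priority 80, and then confirm the change:

    $ chrt -f -p 80 4321
    $ chrt -p 4321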

ps

The ps command can be used to list processes with their scheduling policies and priorities:

$ ps -eo cmd,pid,cpu,pri,cls


CMD                            PID  CPU  PRI  CLS
/lib/systemd/systemd --swit      1    -   19   TS
[kthreadd]                       2    -   19   TS
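
To focus on real-time tasks, you can also include the real-time priority column and sort by it; rtprio and --sort are standard ps options, so this is a variation of the same command:

$ ps -eo pid,cls,rtprio,pri,cmd --sort=-rtprio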