Module 12 — Configure CNI Networking
In Module 11, both worker nodes registered with the cluster but still show NotReady. The reason: kubelet needs a Container Network Interface (CNI) plugin to set up networking for each pod. Without a CNI configuration, kubelet reports the node NotReady, so the scheduler will not place pods on it.
In this module you configure the bridge CNI plugin on each worker, set up static routes between nodes for cross-node pod communication, and verify pods can talk to each other across the cluster.
1. How CNI Works
When kubelet creates a pod, it does not set up networking itself. Instead, it calls a CNI plugin — an executable in /opt/cni/bin/ — and passes it the pod's network namespace. The plugin is responsible for:
- Creating a virtual ethernet (veth) pair — one end in the pod, one end on the host
- Assigning an IP address to the pod from the node's pod CIDR
- Connecting the host end to a bridge interface
- Setting up routing so the pod can reach other pods and the outside world
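That handoff can be exercised by hand. The sketch below (an illustration, not a required module step) drives the bridge plugin the same way kubelet's container runtime does: the network config arrives on stdin and the per-pod parameters arrive in `CNI_*` environment variables. It assumes root on a worker that already has the config from section 3; the netns name `cni-demo` is arbitrary, and the block is guarded so it does nothing elsewhere.

```shell
# Sketch: call the bridge plugin directly, as the container runtime would.
# Guarded so it only acts on a worker with the plugin and config installed.
if [ "$(id -u)" -eq 0 ] && [ -x /opt/cni/bin/bridge ] && [ -f /etc/cni/net.d/10-bridge.conf ]; then
  ip netns add cni-demo                       # stand-in for a pod's network namespace
  CNI_COMMAND=ADD \
  CNI_CONTAINERID=cni-demo \
  CNI_NETNS=/var/run/netns/cni-demo \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
    /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf   # prints a JSON result with the assigned IP
  ip netns exec cni-demo ip addr show eth0    # the veth end inside the "pod"
  CNI_COMMAND=DEL \
  CNI_CONTAINERID=cni-demo \
  CNI_NETNS=/var/run/netns/cni-demo \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
    /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf   # release the IP and veth
  ip netns del cni-demo
  cni_demo=ran
else
  echo "skipping: needs root on a configured worker"
  cni_demo=skipped
fi
```

On a worker, the ADD call creates the veth pair, attaches it to cni0, and prints the assigned address; the DEL call undoes all of it.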
The bridge plugin
The bridge plugin creates a Linux bridge (cni0) on each node. All pods on that node connect to this bridge. Traffic between pods on the same node goes through the bridge. Traffic to pods on other nodes goes through the host's routing table.
worker1 (10.200.0.0/24) worker2 (10.200.1.0/24)
┌─────────────────────────┐ ┌─────────────────────────┐
│ Pod A Pod B │ │ Pod C Pod D │
│ 10.200.0.2 10.200.0.3│ │ 10.200.1.2 10.200.1.3│
│ │ │ │ │ │ │ │
│ ┌──┴────────────┴──┐ │ │ ┌──┴────────────┴──┐ │
│ │ cni0 bridge │ │ │ │ cni0 bridge │ │
│ │ 10.200.0.1 │ │ │ │ 10.200.1.1 │ │
│ └────────┬─────────┘ │ │ └────────┬─────────┘ │
│ │ │ │ │ │
│ eth1 │ .23 │ │ eth1 │ .24 │
└───────────┼──────────────┘ └───────────┼──────────────┘
│ static route │
└────────────────────────────────────────────┘
10.200.1.0/24 via 192.168.56.24
10.200.0.0/24 via 192.168.56.23
Why static routes?
In production, overlay networks (Flannel, Calico, Cilium) handle cross-node routing automatically using VXLAN tunnels or BGP. In this training you use static routes to understand what those tools do under the hood. Each worker needs a route to the other worker's pod CIDR.
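The mapping those tools maintain can be sketched as a tiny lookup table. This toy function (my illustration, not a module command) maps a destination pod IP to the node that owns it, i.e., the next hop a peer worker uses; the CIDRs and node IPs match this lab's topology:

```shell
# Toy sketch of the cross-node routing decision used in this lab.
next_hop() {
  case "$1" in
    10.200.0.*) echo "192.168.56.23" ;;   # worker1 owns 10.200.0.0/24
    10.200.1.*) echo "192.168.56.24" ;;   # worker2 owns 10.200.1.0/24
    *)          echo "default gateway" ;;
  esac
}
next_hop 10.200.1.2   # prints 192.168.56.24: reach worker2's pods via worker2
```

An overlay network builds and updates an equivalent mapping dynamically as nodes join and leave; the static routes in section 4 simply hard-code it for two workers.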
2. Verify CNI Plugins Are Installed
The CNI plugin binaries were installed in Module 11. Verify on both worker1 and worker2:
ls /opt/cni/bin/
Expected: You should see binaries including bridge, host-local, loopback, portmap, and others.
The three you need:
- `bridge` — creates the Linux bridge and veth pairs
- `host-local` — assigns IP addresses from a local range (IPAM)
- `loopback` — configures the loopback interface inside each pod
Checkpoint:
`/opt/cni/bin/bridge`, `/opt/cni/bin/host-local`, and `/opt/cni/bin/loopback` exist on both workers.
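Beyond checking that the files exist, you can ask a plugin to identify itself. CNI plugins answer a VERSION command over the same stdin/stdout protocol kubelet's runtime uses; the sketch below (guarded so it is a no-op on machines without the plugins) confirms the binary is a genuine plugin and reports which CNI spec versions it supports:

```shell
# Sketch: query a plugin's supported CNI spec versions (VERSION command).
if [ -x /opt/cni/bin/bridge ]; then
  CNI_COMMAND=VERSION /opt/cni/bin/bridge </dev/null
else
  echo "skipping: /opt/cni/bin/bridge not present on this machine"
fi
version_check=done
```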
3. Configure the Bridge Plugin
CNI reads configuration files from /etc/cni/net.d/ in alphabetical order. The first matching config wins.
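"Alphabetical" here is plain lexicographic file-name ordering, which is why the `10-` and `99-` prefixes matter. A quick local demonstration, with a temp directory standing in for /etc/cni/net.d/:

```shell
# Sketch: lexicographic ordering decides which CNI config is found first.
dir=$(mktemp -d)
touch "$dir/99-loopback.conf" "$dir/10-bridge.conf"
first=$(ls "$dir" | head -n 1)   # ls sorts names lexicographically
echo "$first"                    # prints 10-bridge.conf: the bridge network wins
rm -rf "$dir"
```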
3.1 Create the bridge configuration
On worker1 (pod CIDR 10.200.0.0/24):
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "10.200.0.0/24"}]
    ],
    "routes": [
      {"dst": "0.0.0.0/0"}
    ]
  }
}
EOF
On worker2 (pod CIDR 10.200.1.0/24):
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "10.200.1.0/24"}]
    ],
    "routes": [
      {"dst": "0.0.0.0/0"}
    ]
  }
}
EOF
What each field does
| Field | Purpose |
|---|---|
| `type: bridge` | Use the bridge CNI plugin |
| `bridge: cni0` | Name of the Linux bridge to create |
| `isGateway: true` | Assign the bridge an IP (first usable IP in the subnet, e.g., 10.200.0.1) |
| `ipMasq: true` | SNAT pod traffic going outside the pod CIDR (so pods can reach the internet) |
| `ipam.type: host-local` | Use the host-local IPAM plugin to assign IPs from a local range |
| `ipam.ranges` | The subnet to allocate pod IPs from |
| `ipam.routes` | Default route for pods — all traffic goes through the bridge gateway |
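Because the config is JSON fed through a heredoc, a stray comma or quote is easy to introduce, and kubelet's error message when that happens is terse. A quick validation pass catches it early. The snippet below checks a local copy of the worker1 config from section 3.1 so it also runs off-node; on a worker, point it at /etc/cni/net.d/10-bridge.conf instead:

```shell
# Sketch: validate CNI config JSON before kubelet reads it.
cat > /tmp/10-bridge.conf <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [{"subnet": "10.200.0.0/24"}]
    ],
    "routes": [
      {"dst": "0.0.0.0/0"}
    ]
  }
}
EOF
python3 -m json.tool < /tmp/10-bridge.conf > /dev/null && echo "valid JSON"
```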
3.2 Create the loopback configuration
Run on both worker1 and worker2:
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
  "cniVersion": "1.0.0",
  "name": "lo",
  "type": "loopback"
}
EOF
This ensures every pod has a working 127.0.0.1 loopback interface.
Checkpoint:
`/etc/cni/net.d/10-bridge.conf` and `/etc/cni/net.d/99-loopback.conf` exist on both workers. The subnet in `10-bridge.conf` is different on each worker.
4. Add Static Routes Between Workers
Without routes, worker1 does not know how to reach pods on worker2 (and vice versa). You need to add a route to the other node's pod CIDR pointing to that node's IP.
4.1 Add routes
On worker1 — route to worker2's pod CIDR:
sudo ip route add 10.200.1.0/24 via 192.168.56.24
On worker2 — route to worker1's pod CIDR:
sudo ip route add 10.200.0.0/24 via 192.168.56.23
4.2 Verify routes
On worker1:
ip route | grep 10.200
Expected:
10.200.0.0/24 dev cni0 proto kernel scope link src 10.200.0.1 (appears after first pod)
10.200.1.0/24 via 192.168.56.24 dev eth1
On worker2:
ip route | grep 10.200
Expected:
10.200.0.0/24 via 192.168.56.23 dev eth1
10.200.1.0/24 dev cni0 proto kernel scope link src 10.200.1.1 (appears after first pod)
4.3 Make routes persistent
Routes added with ip route add are lost on reboot. Make them persistent using a netplan configuration.
On worker1, create /etc/netplan/60-pod-routes.yaml:
cat <<EOF | sudo tee /etc/netplan/60-pod-routes.yaml
network:
  version: 2
  ethernets:
    eth1:
      routes:
        - to: 10.200.1.0/24
          via: 192.168.56.24
EOF
sudo netplan apply
On worker2, create /etc/netplan/60-pod-routes.yaml:
cat <<EOF | sudo tee /etc/netplan/60-pod-routes.yaml
network:
  version: 2
  ethernets:
    eth1:
      routes:
        - to: 10.200.0.0/24
          via: 192.168.56.23
EOF
sudo netplan apply
Tip: If your VMs do not use netplan (e.g., older Ubuntu), create a systemd-networkd `.network` file instead. On worker1:
cat <<EOF | sudo tee /etc/systemd/network/60-pod-routes.network
[Match]
Name=eth1
[Route]
Destination=10.200.1.0/24
Gateway=192.168.56.24
EOF
sudo systemctl restart systemd-networkd
4.4 Verify persistence
After applying, confirm the route is still there:
ip route | grep 10.200
Checkpoint: Each worker has a route to the other worker's pod CIDR. Routes are persistent across reboots.
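To re-check after a reboot, a small guarded script helps. This sketch (an illustration; the route string matches worker1's expected table from section 4.2) reports whether the cross-node route survived, and is a no-op on machines without the `ip` tool:

```shell
# Sketch: verify the persistent cross-node route (worker1's view).
if command -v ip >/dev/null 2>&1; then
  if ip route | grep -q "10.200.1.0/24 via 192.168.56.24"; then
    echo "route present"
  else
    echo "route missing: re-check /etc/netplan/60-pod-routes.yaml"
  fi
else
  echo "skipping: ip command not available"
fi
route_checked=yes
```

On worker2, swap in `10.200.0.0/24 via 192.168.56.23`.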
5. Verify Nodes Are Ready
5.1 Restart kubelet
Kubelet checks for CNI configuration at startup and periodically. Restart it to pick up the new config immediately:
On both worker1 and worker2:
sudo systemctl restart kubelet
5.2 Check node status
From a control plane node (cp1 or cp2):
kubectl get nodes --kubeconfig /var/lib/kubernetes/admin.kubeconfig
Expected:
NAME STATUS ROLES AGE VERSION
worker1 Ready <none> 10m v1.31.0
worker2 Ready <none> 10m v1.31.0
Both nodes should now show Ready. The change from NotReady to Ready happens because kubelet found the CNI configuration in /etc/cni/net.d/.
Checkpoint:
`kubectl get nodes` shows both workers in `Ready` status.
6. Test Cross-Node Pod Networking
Deploy a test pod on each worker and verify they can communicate.
6.1 Create a pod on worker1
From a control plane node:
kubectl run test-worker1 \
--image=busybox:1.36 \
--restart=Never \
--overrides='{"spec":{"nodeName":"worker1"}}' \
--kubeconfig /var/lib/kubernetes/admin.kubeconfig \
-- sleep 3600
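The --overrides flag takes raw JSON that is merged into the generated pod spec; here it pins spec.nodeName, so the pod bypasses the scheduler and is picked up directly by worker1's kubelet. Quoting mistakes in that string are common, so a quick sanity check (illustrative, runs anywhere) is worthwhile:

```shell
# Sketch: verify the --overrides payload is well-formed JSON before use.
overrides='{"spec":{"nodeName":"worker1"}}'
echo "$overrides" | python3 -m json.tool
```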
6.2 Create a pod on worker2
kubectl run test-worker2 \
--image=busybox:1.36 \
--restart=Never \
--overrides='{"spec":{"nodeName":"worker2"}}' \
--kubeconfig /var/lib/kubernetes/admin.kubeconfig \
-- sleep 3600
6.3 Get pod IPs
kubectl get pods -o wide --kubeconfig /var/lib/kubernetes/admin.kubeconfig
Expected:
NAME READY STATUS RESTARTS AGE IP NODE
test-worker1 1/1 Running 0 30s 10.200.0.2 worker1
test-worker2 1/1 Running 0 25s 10.200.1.2 worker2
6.4 Test connectivity
Ping from the pod on worker1 to the pod on worker2:
kubectl exec test-worker1 --kubeconfig /var/lib/kubernetes/admin.kubeconfig -- ping -c 3 10.200.1.2
Expected:
PING 10.200.1.2 (10.200.1.2): 56 data bytes
64 bytes from 10.200.1.2: seq=0 ttl=62 time=1.234 ms
64 bytes from 10.200.1.2: seq=1 ttl=62 time=0.567 ms
64 bytes from 10.200.1.2: seq=2 ttl=62 time=0.489 ms
And from worker2 to worker1:
kubectl exec test-worker2 --kubeconfig /var/lib/kubernetes/admin.kubeconfig -- ping -c 3 10.200.0.2
6.5 Clean up test pods
kubectl delete pod test-worker1 test-worker2 --kubeconfig /var/lib/kubernetes/admin.kubeconfig
Checkpoint: Pods on different workers can ping each other by IP. Cross-node networking works.
7. Troubleshooting
Routes lost after reboot
The route was added with ip route add but not persisted. Verify your netplan or systemd-networkd configuration:
cat /etc/netplan/60-pod-routes.yaml
Re-apply with sudo netplan apply and reboot to test.
cni0 bridge does not exist
The bridge is created when the first pod is scheduled on the node. Before any pods run, there is no bridge. Run a test pod to trigger bridge creation:
kubectl run test --image=busybox:1.36 --restart=Never --kubeconfig /var/lib/kubernetes/admin.kubeconfig -- sleep 60
Then check:
ip addr show cni0
kubelet still shows NotReady after CNI configuration
- Check CNI config exists: `ls /etc/cni/net.d/`
- Check kubelet logs: `sudo journalctl -u kubelet --no-pager | grep -i cni`
- Restart kubelet: `sudo systemctl restart kubelet`
- Verify the CNI config JSON is valid: `cat /etc/cni/net.d/10-bridge.conf | python3 -m json.tool`
Pods cannot ping across nodes
- Verify routes exist: `ip route | grep 10.200`
- Check IP forwarding is enabled: `sysctl net.ipv4.ip_forward` (should be 1)
- If forwarding is disabled, enable it:
  `sudo sysctl -w net.ipv4.ip_forward=1`
  `echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf`
- Check that the VirtualBox host-only network allows traffic between VMs (it should by default)
Pod stuck in ContainerCreating
Check kubelet logs for CNI errors:
sudo journalctl -u kubelet --no-pager | grep -i "cni\|network" | tail -10
Common cause: malformed JSON in /etc/cni/net.d/10-bridge.conf. Validate the JSON syntax.
8. What You Have Now
| Capability | Verification Command |
|---|---|
| CNI bridge plugin configured | `cat /etc/cni/net.d/10-bridge.conf` on workers |
| Loopback plugin configured | `cat /etc/cni/net.d/99-loopback.conf` on workers |
| Workers show Ready | `kubectl get nodes` — STATUS is Ready |
| Pods get IPs from node subnet | `kubectl get pods -o wide` — IPs in 10.200.x.0/24 |
| Cross-node pod networking works | Ping between pods on different workers |
| Routes are persistent | `ip route \| grep 10.200` after reboot |
The cluster networking is operational. Pods can communicate across nodes, and nodes are Ready to accept workloads. The missing piece is cluster DNS (so pods can find services by name) and API server load balancing.
Next up: Module 13 — CoreDNS & HAProxy Load Balancer — deploy CoreDNS for service discovery and configure HAProxy for API server high availability.