Module 11 — Bootstrap the Worker Nodes

Worker nodes are where your application pods actually run. Each worker runs three components:

  • containerd — the container runtime. It pulls images, creates containers, and manages their lifecycle. Kubernetes talks to containerd through the Container Runtime Interface (CRI).
  • kubelet — the node agent. It receives pod specifications from the API server, tells containerd to create containers, and reports node/pod status back.
  • kube-proxy — maintains network rules on the node. It programs iptables rules so that traffic to a ClusterIP service gets forwarded to the correct backend pod.

In this module you install all three on worker1 and worker2. Nodes will show NotReady after this module — that is expected because CNI networking is not configured yet (Module 12).


1. Download Worker Binaries

Run these steps on both worker1 and worker2. SSH into each node:

ssh worker1

1.1 Set version variables

K8S_VERSION=v1.31.0
CONTAINERD_VERSION=1.7.22
RUNC_VERSION=v1.1.15
CNI_VERSION=v1.5.1
CRICTL_VERSION=v1.31.1

1.2 Download all binaries

# Kubernetes binaries
for binary in kubelet kube-proxy kubectl; do
  curl -sL "https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/${binary}" \
    -o "/tmp/${binary}"
done

# containerd
curl -sL "https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz" \
  -o /tmp/containerd.tar.gz

# runc
curl -sL "https://github.com/opencontainers/runc/releases/download/${RUNC_VERSION}/runc.amd64" \
  -o /tmp/runc

# CNI plugins
curl -sL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" \
  -o /tmp/cni-plugins.tgz

# crictl (CRI CLI tool)
curl -sL "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" \
  -o /tmp/crictl.tar.gz

1.3 Install binaries

# Create directories
sudo mkdir -p \
  /opt/cni/bin \
  /etc/cni/net.d \
  /var/lib/kubelet \
  /var/lib/kube-proxy \
  /var/run/kubernetes \
  /etc/containerd

# Install containerd
sudo tar -xzf /tmp/containerd.tar.gz -C /usr/local

# Install runc
chmod +x /tmp/runc
sudo mv /tmp/runc /usr/local/bin/

# Install CNI plugins
sudo tar -xzf /tmp/cni-plugins.tgz -C /opt/cni/bin/

# Install crictl
sudo tar -xzf /tmp/crictl.tar.gz -C /usr/local/bin/

# Install Kubernetes binaries
for binary in kubelet kube-proxy kubectl; do
  chmod +x "/tmp/${binary}"
  sudo mv "/tmp/${binary}" /usr/local/bin/
done

# Cleanup
rm -f /tmp/containerd.tar.gz /tmp/cni-plugins.tgz /tmp/crictl.tar.gz

1.4 Verify

containerd --version
runc --version
kubelet --version
kube-proxy --version
kubectl version --client
crictl --version
ls /opt/cni/bin/ | head -5

Expected: containerd 1.7.22, runc 1.1.15, Kubernetes binaries v1.31.0, crictl v1.31.1, and CNI plugin binaries in /opt/cni/bin/.
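If you prefer a single pass/fail summary over eyeballing each command, a small helper can compare expected and actual version strings. This is a sketch, not part of the lab — `check_version` is a made-up name, and the commented examples assume the binaries are already on your PATH:

```shell
# Hypothetical helper: compare an expected version string against the actual one.
check_version() {
  local name="$1" expected="$2" actual="$3"
  if [ "$actual" = "$expected" ]; then
    echo "OK: ${name} ${actual}"
  else
    echo "MISMATCH: ${name} expected ${expected}, got ${actual}"
    return 1
  fi
}

# On a worker you would feed it real command output, for example:
#   check_version kubelet v1.31.0 "$(kubelet --version | awk '{print $2}')"
```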

Repeat all steps on worker2 before continuing.

Checkpoint: All binaries are installed on both worker1 and worker2. CNI plugins are in /opt/cni/bin/.


2. Configure and Start containerd

Run on both worker1 and worker2.

2.1 Generate the default configuration

sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

2.2 Enable the systemd cgroup driver

Kubernetes requires the container runtime to use the systemd cgroup driver so that kubelet and the runtime agree on resource accounting. Edit the configuration:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

Verify the change:

grep SystemdCgroup /etc/containerd/config.toml

Expected: SystemdCgroup = true
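If you script these steps, the grep check can be turned into a guard that fails loudly instead of relying on a visual inspection. A sketch — the function name is made up:

```shell
# Hypothetical guard: succeed only if the systemd cgroup driver is enabled
# in the given containerd config file.
cgroup_driver_ok() {
  grep -q 'SystemdCgroup = true' "$1"
}

# cgroup_driver_ok /etc/containerd/config.toml || echo "rerun the sed command"
```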

2.3 Create the systemd unit file

cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity

[Install]
WantedBy=multi-user.target
EOF

Key settings:

  • Delegate=yes — allows containerd to manage cgroups for its containers
  • KillMode=process — only kill the containerd process, not the containers it manages
  • OOMScoreAdjust=-999 — protect containerd from the OOM killer

2.4 Start containerd

sudo systemctl daemon-reload
sudo systemctl enable containerd
sudo systemctl start containerd
sudo systemctl status containerd

Expected: Active: active (running).

Checkpoint: sudo systemctl status containerd shows active (running) on both workers.


3. Trust the Private Registry

Your container images are stored in the private registry at 192.168.56.12:5000 (set up in Module 05). containerd needs to trust the registry's TLS certificate.

3.1 Create the registry certificate directory

sudo mkdir -p /etc/containerd/certs.d/192.168.56.12:5000

3.2 Copy the registry's CA certificate

The registry's certificate is signed by the same CA that issued the cluster certificates, so the ca.pem already distributed to each worker covers it. Copy the CA cert:

sudo cp ~/ca.pem /etc/containerd/certs.d/192.168.56.12:5000/ca.pem

3.3 Configure containerd to use the registry

cat <<EOF | sudo tee /etc/containerd/certs.d/192.168.56.12:5000/hosts.toml
server = "https://192.168.56.12:5000"

[host."https://192.168.56.12:5000"]
capabilities = ["pull", "resolve"]
ca = "/etc/containerd/certs.d/192.168.56.12:5000/ca.pem"
EOF

3.4 Update containerd config to use the certs directory

Add the config path for the registry mirror in the containerd configuration:

sudo sed -i '/\[plugins."io.containerd.grpc.v1.cri".registry\]/a\      config_path = "/etc/containerd/certs.d"' /etc/containerd/config.toml

3.5 Restart containerd

sudo systemctl restart containerd

3.6 Test registry access

sudo crictl pull 192.168.56.12:5000/customerapp:v1 2>&1 || echo "Pull test complete (image may not exist yet — that is OK)"

Note: If the image does not exist in the registry yet, the pull will fail with "not found" — that is fine. What matters is that you do NOT see a TLS/certificate error. If you see x509: certificate signed by unknown authority, the CA cert was not placed correctly.

Checkpoint: Registry TLS trust is configured. crictl pull does not return certificate errors.
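The troubleshooting steps in Section 8 check this layout by hand; it can also be checked in one call. The following is a hypothetical helper (`check_registry_trust` is not part of the lab) that verifies the certs.d directory contains both files and that hosts.toml points at the right server:

```shell
# Hypothetical helper: verify the containerd certs.d layout for a registry.
check_registry_trust() {
  local registry="$1"
  local dir="$2/${registry}"
  local ok=0
  for f in ca.pem hosts.toml; do
    if [ -f "${dir}/${f}" ]; then
      echo "found: ${f}"
    else
      echo "missing: ${f}"
      ok=1
    fi
  done
  if grep -q "server = \"https://${registry}\"" "${dir}/hosts.toml" 2>/dev/null; then
    echo "hosts.toml points at https://${registry}"
  else
    echo "hosts.toml server entry not found"
    ok=1
  fi
  return "$ok"
}

# On a worker:
#   check_registry_trust 192.168.56.12:5000 /etc/containerd/certs.d
```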


4. Configure kubelet

Each worker node runs its own kubelet with a node-specific certificate and kubeconfig.

4.1 Copy certificates and kubeconfig

The certificates (Module 07) and kubeconfig (Module 08) were distributed to ~/ on each worker.

On worker1:

HOSTNAME=worker1
sudo cp ~/${HOSTNAME}-key.pem ~/${HOSTNAME}.pem /var/lib/kubelet/
sudo cp ~/${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo cp ~/ca.pem /var/lib/kubelet/

On worker2:

HOSTNAME=worker2
sudo cp ~/${HOSTNAME}-key.pem ~/${HOSTNAME}.pem /var/lib/kubelet/
sudo cp ~/${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo cp ~/ca.pem /var/lib/kubelet/

4.2 Set the pod CIDR

Each worker gets a unique pod subnet from the cluster CIDR (10.200.0.0/16):

On worker1:

POD_CIDR=10.200.0.0/24

On worker2:

POD_CIDR=10.200.1.0/24
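Setting these by hand is fine for two nodes, but the subnet can also be derived from the node name to avoid copy/paste mistakes. A sketch (not part of the lab), assuming node names end in a 1-based index like worker1, worker2:

```shell
# Hypothetical helper: map workerN to 10.200.(N-1).0/24.
# Assumes node names end in a 1-based index (worker1, worker2, ...).
pod_cidr_for() {
  local name="$1"
  local index="${name##*[!0-9]}"   # keep only the trailing digits
  echo "10.200.$(( index - 1 )).0/24"
}

# POD_CIDR=$(pod_cidr_for "$(hostname -s)")
```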

4.3 Create the kubelet configuration file

HOSTNAME=$(hostname -s)

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubelet/ca.pem"
authorization:
  mode: Webhook
cgroupDriver: systemd
clusterDNS:
  - "10.32.0.10"
clusterDomain: "cluster.local"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
containerRuntimeEndpoint: "unix:///var/run/containerd/containerd.sock"
EOF

What each field does

  • authentication.webhook — kubelet asks the API server to validate bearer tokens
  • authentication.x509.clientCAFile — CA used to verify client certificates (the API server connecting to the kubelet)
  • authorization.mode: Webhook — kubelet asks the API server for authorization decisions
  • cgroupDriver: systemd — must match containerd's cgroup driver
  • clusterDNS — IP of the CoreDNS service (deployed in Module 13)
  • clusterDomain — DNS domain for the cluster
  • podCIDR — pod subnet for this node
  • resolvConf — host DNS config file (systemd-resolved path)
  • tlsCertFile / tlsPrivateKeyFile — kubelet's serving certificate for API server callbacks
  • containerRuntimeEndpoint — socket path for containerd
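Most kubelet startup failures in this setup come down to one of the referenced files being absent. A small pre-flight check can catch that before you start the service — a sketch, with a made-up function name:

```shell
# Hypothetical helper: confirm every file the kubelet config references exists.
check_kubelet_files() {
  local dir="$1" host="$2" missing=0
  for f in ca.pem "${host}.pem" "${host}-key.pem" kubeconfig kubelet-config.yaml; do
    if [ ! -f "${dir}/${f}" ]; then
      echo "missing: ${dir}/${f}"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "all kubelet files present"
  return "$missing"
}

# On a worker:
#   check_kubelet_files /var/lib/kubelet "$(hostname -s)"
```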

4.4 Create the kubelet systemd unit file

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Checkpoint: /var/lib/kubelet/kubelet-config.yaml and /etc/systemd/system/kubelet.service exist on both workers.


5. Configure kube-proxy

kube-proxy handles service networking. It watches the API server for Service and Endpoint objects and programs iptables rules to route traffic.

5.1 Copy the kube-proxy kubeconfig

Run on both worker1 and worker2:

sudo cp ~/kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

5.2 Create the kube-proxy configuration file

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF

Key settings:

  • mode: iptables — use iptables for service routing (simpler than IPVS for learning)
  • clusterCIDR — pod network CIDR, used for masquerade rules
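Once kube-proxy is running (Section 6), you can see its work in the nat table: it creates chains with a KUBE- prefix, such as KUBE-SERVICES. As an illustration, here is a tiny hypothetical helper that counts those chains in an `iptables-save` dump:

```shell
# Hypothetical helper: count the KUBE-* chains in an iptables-save dump.
# kube-proxy programs chains like KUBE-SERVICES in the nat table.
count_kube_chains() {
  grep -c '^:KUBE-' <<< "$1"
}

# On a worker, after kube-proxy is up:
#   count_kube_chains "$(sudo iptables-save -t nat)"
```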

5.3 Create the kube-proxy systemd unit file

cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Checkpoint: /var/lib/kube-proxy/kube-proxy-config.yaml and /etc/systemd/system/kube-proxy.service exist on both workers.


6. Start Worker Services

Run on both worker1 and worker2:

sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy

6.1 Verify services

sudo systemctl status kubelet
sudo systemctl status kube-proxy

Expected: Both show Active: active (running).

Note: kubelet logs will show warnings about CNI not being configured. This is expected — you will configure CNI in Module 12.

6.2 Verify nodes registered

From a control plane node (cp1 or cp2):

kubectl get nodes --kubeconfig /var/lib/kubernetes/admin.kubeconfig

Expected:

NAME      STATUS     ROLES    AGE   VERSION
worker1   NotReady   <none>   30s   v1.31.0
worker2   NotReady   <none>   25s   v1.31.0

Both nodes appear with NotReady status. This is correct — the nodes are registered with the API server but cannot schedule pods because there is no CNI network plugin configured yet.

Checkpoint: kubectl get nodes shows both worker1 and worker2 in NotReady status.


7. Configure crictl

crictl is a CLI tool for debugging the container runtime. Configure it to use the containerd socket:

Run on both worker1 and worker2:

cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
EOF

Now you can use crictl without specifying the socket each time:

sudo crictl info | head -5
sudo crictl images

crictl info shows the runtime status. crictl images lists cached images (should be empty at this point).


8. Troubleshooting

Node does not appear in kubectl get nodes

  1. kubelet is not running: sudo systemctl status kubelet
  2. Check kubelet logs: sudo journalctl -u kubelet --no-pager -l | tail -30
  3. The kubeconfig is missing or has the wrong server URL: grep server /var/lib/kubelet/kubeconfig
  4. The API server is unreachable from the worker: curl -k https://192.168.56.20:6443/healthz (should return ok)
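For step 3, the server URL can also be extracted in a script without a YAML parser. A grep-based sketch (`kubeconfig_server` is a made-up name; it assumes the standard single-cluster "server: https://..." layout and would pick the first entry in a multi-cluster file):

```shell
# Hypothetical helper: pull the first server URL out of a kubeconfig.
kubeconfig_server() {
  grep -m1 'server:' "$1" | awk '{print $2}'
}

# kubeconfig_server /var/lib/kubelet/kubeconfig
```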

kubelet fails with "x509: certificate signed by unknown authority"

The CA cert is missing or wrong. Verify:

ls -la /var/lib/kubelet/ca.pem
openssl x509 -in /var/lib/kubelet/ca.pem -noout -subject

The subject should show CN = Kubernetes.

containerd fails to start — "failed to load plugin"

The config file may be malformed after the sed edit. Regenerate it:

sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd

Registry pull fails with "x509: certificate signed by unknown authority"

The registry CA cert is not in the right location. Verify:

ls /etc/containerd/certs.d/192.168.56.12:5000/
cat /etc/containerd/certs.d/192.168.56.12:5000/hosts.toml

The directory should contain ca.pem and hosts.toml.

kubelet logs show "network plugin is not ready: cni"

This is expected at this stage. CNI is configured in Module 12. The node will stay NotReady until then.

kube-proxy fails with "dial tcp 192.168.56.20:6443: connect: connection refused"

The load balancer (HAProxy) is not configured yet. kube-proxy connects through the LB IP. Until Module 13, kube-proxy will retry connections. This is normal — it will work once HAProxy is running.

Tip: If you want kube-proxy to work before the load balancer is set up, you can temporarily edit the kube-proxy kubeconfig to point directly to a control plane IP (192.168.56.21:6443). Remember to change it back after Module 13.


9. What You Have Now

  • containerd running on both workers — ssh worker1 "sudo systemctl status containerd"
  • kubelet running on both workers — ssh worker1 "sudo systemctl status kubelet"
  • kube-proxy running on both workers — ssh worker1 "sudo systemctl status kube-proxy"
  • Workers registered with API server — kubectl get nodes lists worker1 and worker2
  • Nodes show NotReady (expected) — kubectl get nodes shows STATUS NotReady
  • Private registry trusted — no x509 errors when pulling from 192.168.56.12:5000
  • CNI plugins installed — ls /opt/cni/bin/ on the workers
  • crictl configured — sudo crictl info on the workers

The worker nodes are registered with the cluster but cannot run pods yet. The NotReady status will change to Ready once you configure CNI networking in the next module.


Next up: Module 12 — Configure CNI Networking — set up the bridge CNI plugin and routing so pods can communicate across nodes.