Module 14 — Deploy the Customer Information App
In the Fundamentals track you deployed the Customer Information App across three VMs — a database server, an application server, and a web server. Each component was installed and configured manually with systemd services and config files.
Now you deploy the same application on your Kubernetes cluster using native Kubernetes resources:
| Fundamentals (bare VM) | Kubernetes equivalent |
|---|---|
| VM with systemd service | Pod managed by a Deployment |
| Static IP address | ClusterIP Service |
| Config file on disk | ConfigMap |
| Database password in .env | Secret |
| Nginx config file | ConfigMap mounted as volume |
| Exposing port on VM | NodePort Service |
The architecture:
┌──────────────────────────┐
│ NodePort :30080 │
│ (any worker node IP) │
└────────────┬─────────────┘
│
┌────────────▼─────────────┐
│ Nginx (1 replica) │
│ reverse proxy │
│ ClusterIP Service │
└────────────┬─────────────┘
│
┌────────────▼─────────────┐
│ Go Backend (2 replicas) │
│ ClusterIP Service │
└────────────┬─────────────┘
│
┌────────────▼─────────────┐
│ PostgreSQL (1 replica) │
│ PV + PVC (hostPath) │
│ ClusterIP Service │
└──────────────────────────┘
All kubectl commands in this module are run from your Mac (with the kubectl context configured in Module 13).
1. Create the Namespace
kubectl create namespace customerapp
All application resources go into the customerapp namespace to keep them separate from system components.
Checkpoint: `kubectl get namespace customerapp` returns the namespace.
2. Create the Registry Pull Secret
Your container images are stored in the private registry at 192.168.56.12:5000 which requires authentication. Kubernetes needs a pull secret to authenticate when downloading images.
kubectl create secret docker-registry regcred \
  --namespace=customerapp \
  --docker-server=192.168.56.12:5000 \
  --docker-username=admin \
  --docker-password=registrypass123
Verify:
kubectl get secret regcred -n customerapp
Checkpoint: The `regcred` secret exists in the `customerapp` namespace.
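Under the hood, kubectl stores these credentials as base64-encoded Docker config JSON in the secret's `.dockerconfigjson` field. This sketch reproduces that encoding locally so you can recognize the shape when you inspect the secret with `kubectl get secret regcred -n customerapp -o yaml` (the values mirror the command above; the JSON kubectl actually emits may carry extra fields such as `username` and `email`):

```shell
# Docker registry secrets wrap the credentials in an "auths" map keyed by
# registry address, with "user:password" base64-encoded in the "auth" field.
auth=$(printf 'admin:registrypass123' | base64)
printf '{"auths":{"192.168.56.12:5000":{"auth":"%s"}}}\n' "$auth"
```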
3. Deploy PostgreSQL
PostgreSQL needs persistent storage so data survives pod restarts. You use a hostPath PersistentVolume pinned to worker1.
3.1 Create the database Secret
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
  namespace: customerapp
type: Opaque
stringData:
  POSTGRES_USER: "customerapp"
  POSTGRES_PASSWORD: "customerpass123"
  POSTGRES_DB: "customerdb"
EOF
3.2 Create the database init ConfigMap
This SQL runs when PostgreSQL starts for the first time:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init
  namespace: customerapp
data:
  init.sql: |
    CREATE TABLE IF NOT EXISTS customers (
      id SERIAL PRIMARY KEY,
      name VARCHAR(100) NOT NULL,
      email VARCHAR(100) UNIQUE NOT NULL,
      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE TABLE IF NOT EXISTS users (
      id SERIAL PRIMARY KEY,
      username VARCHAR(50) UNIQUE NOT NULL,
      password_hash VARCHAR(255) NOT NULL,
      created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    -- Default admin user (password: admin123)
    INSERT INTO users (username, password_hash)
    VALUES ('admin', '$2a$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy')
    ON CONFLICT (username) DO NOTHING;
EOF
3.3 Create the PersistentVolume and PersistentVolumeClaim
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres
    type: DirectoryOrCreate
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: customerapp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
The nodeAffinity ensures the PV is only usable on worker1, where the hostPath directory exists. The PVC binds to this PV automatically.
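Because neither the PV nor the PVC sets a `storageClassName`, the control plane pairs them by matching capacity and access mode. If other 1Gi PVs ever exist in the cluster, you can make the pairing explicit instead of relying on matching. A sketch of the same PVC with the binding pinned (`volumeName` is a standard PVC field; the rest mirrors the manifest above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: customerapp
spec:
  volumeName: postgres-pv    # bind to this exact PV, skip matching
  storageClassName: ""       # prevent fallback to dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```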
3.4 Create the PostgreSQL Deployment and Service
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: customerapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeName: worker1
      containers:
        - name: postgres
          image: postgres:16-alpine
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgres-secret
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
              subPath: pgdata
            - name: init-sql
              mountPath: /docker-entrypoint-initdb.d
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
        - name: init-sql
          configMap:
            name: postgres-init
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: customerapp
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
EOF
Note: The `subPath: pgdata` setting prevents PostgreSQL from complaining about a non-empty data directory when the PV is first created. PostgreSQL creates its data files inside `/var/lib/postgresql/data/pgdata/`.
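An alternative to `subPath` that the official postgres image supports is pointing the `PGDATA` environment variable at a subdirectory of the mount. A sketch of the relevant container fields (either approach solves the non-empty-directory problem; use one, not both):

```yaml
env:
  - name: PGDATA
    value: /var/lib/postgresql/data/pgdata
volumeMounts:
  - name: postgres-storage
    mountPath: /var/lib/postgresql/data
```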
3.5 Verify PostgreSQL
kubectl get pods -n customerapp -l app=postgres
Expected: postgres-xxxxxxxxxx-xxxxx with STATUS Running.
Wait for it to be ready, then test the database:
kubectl exec -n customerapp deploy/postgres -- \
psql -U customerapp -d customerdb -c "SELECT tablename FROM pg_tables WHERE schemaname='public';"
Expected: The customers and users tables are listed.
Checkpoint: PostgreSQL is running with persistent storage. The init SQL created the tables.
4. Deploy the Go Backend
The Go backend connects to PostgreSQL and serves the REST API and login endpoints.
4.1 Create the backend Deployment and Service
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: customerapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: backend
          image: 192.168.56.12:5000/customerapp:v1
          ports:
            - containerPort: 8080
          env:
            - name: DB_HOST
              value: "postgres"
            - name: DB_PORT
              value: "5432"
            - name: DB_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_USER
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
            - name: DB_NAME
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_DB
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: customerapp
spec:
  selector:
    app: backend
  ports:
    - port: 8080
      targetPort: 8080
EOF
Key points:
- `imagePullSecrets` — tells Kubernetes to use the `regcred` secret when pulling from the private registry
- `DB_HOST: postgres` — the backend uses the ClusterIP service name. CoreDNS resolves `postgres` to `postgres.customerapp.svc.cluster.local`
- 2 replicas — pods are spread across worker1 and worker2 by the scheduler
- Liveness and readiness probes ensure unhealthy pods are restarted and excluded from service traffic
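The short name `postgres` works because the pod's `/etc/resolv.conf` carries search domains for its own namespace; the fully qualified name is composed from the service name, the namespace, and the cluster domain. A minimal sketch of that composition (assuming the default `cluster.local` domain, which is set at cluster bootstrap):

```shell
# Compose the FQDN that CoreDNS ultimately answers for the "postgres" service.
service=postgres
namespace=customerapp
cluster_domain=cluster.local   # assumed default; yours may differ
echo "${service}.${namespace}.svc.${cluster_domain}"
```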
4.2 Verify the backend
kubectl get pods -n customerapp -l app=backend
Expected: Two pods in Running status.
kubectl get pods -n customerapp -l app=backend -o wide
Verify the pods are on different nodes (worker1 and worker2).
Checkpoint: Two backend pods are running on different worker nodes.
5. Deploy the Nginx Reverse Proxy
Nginx acts as the entry point, proxying requests to the backend service.
5.1 Create the Nginx ConfigMap
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: customerapp
data:
  nginx.conf: |
    events {
      worker_connections 1024;
    }
    http {
      upstream backend {
        server backend:8080;
      }
      server {
        listen 80;
        location / {
          proxy_pass http://backend;
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
      }
    }
EOF
The `server backend:8080` entry uses the Kubernetes Service name. CoreDNS resolves it to the backend Service's ClusterIP, which distributes traffic to the backend pods.
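If this nginx.conf were ever reused from a different namespace, the short name would stop resolving; the fully qualified form keeps working from anywhere in the cluster. A sketch of the equivalent upstream block (assuming the default `cluster.local` domain):

```nginx
upstream backend {
    # Equivalent to "server backend:8080" from inside the customerapp
    # namespace, but resolvable from any namespace.
    server backend.customerapp.svc.cluster.local:8080;
}
```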
5.2 Create the Nginx Deployment and NodePort Service
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: customerapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: nginx
          image: 192.168.56.12:5000/nginx:alpine
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: customerapp
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
EOF
The NodePort Service exposes the app on port 30080 on every worker node. You can access the app at http://192.168.56.23:30080 or http://192.168.56.24:30080.
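Since every node proxies the same NodePort, checking all workers is just a loop over their IPs. A small sketch that builds the health-check URLs (feed each one to curl against your running cluster; the IPs mirror the text above):

```shell
# Print the health endpoint URL exposed on each worker node.
nodeport=30080
for ip in 192.168.56.23 192.168.56.24; do
  echo "http://${ip}:${nodeport}/health"
done
```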
5.3 Verify Nginx
kubectl get pods -n customerapp -l app=nginx
Expected: One pod in Running status.
Checkpoint: Nginx pod is running with the config mounted from the ConfigMap.
6. Verify the Full Application
6.1 Check all pods
kubectl get pods -n customerapp
Expected:
NAME                        READY   STATUS    RESTARTS   AGE
backend-xxxxxxxxxx-xxxxx    1/1     Running   0          2m
backend-xxxxxxxxxx-yyyyy    1/1     Running   0          2m
nginx-xxxxxxxxxx-xxxxx      1/1     Running   0          1m
postgres-xxxxxxxxxx-xxxxx   1/1     Running   0          3m
All four pods should be Running with 1/1 ready.
6.2 Check services
kubectl get svc -n customerapp
Expected:
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
backend    ClusterIP   10.32.0.x    <none>        8080/TCP       2m
nginx      NodePort    10.32.0.y    <none>        80:30080/TCP   1m
postgres   ClusterIP   10.32.0.z    <none>        5432/TCP       3m
6.3 Test the health endpoint
curl http://192.168.56.23:30080/health
Expected: {"status":"ok"} or similar health response.
6.4 Test from a browser
Open http://192.168.56.23:30080 in your browser. You should see the Customer Information App login page. Either worker IP works:
- http://192.168.56.23:30080
- http://192.168.56.24:30080
6.5 Test login and CRUD operations
- Log in with the default credentials (`admin` / `admin123`)
- Add a new customer
- View the customer list
- Edit a customer
- Delete a customer
The app should behave exactly like the Fundamentals bare-VM version.
Checkpoint: The app is accessible on port 30080, login works, and CRUD operations succeed.
7. Troubleshooting
Backend pods in ImagePullBackOff
The registry is not accessible or the credentials are wrong:
kubectl describe pod -n customerapp -l app=backend | grep -A 5 "Events"
Common fixes:
- Verify the pull secret: `kubectl get secret regcred -n customerapp -o yaml`
- Test registry from a worker: `ssh worker1 "sudo crictl pull 192.168.56.12:5000/customerapp:v1"`
- Verify the image exists: `curl -u admin:registrypass123 https://192.168.56.12:5000/v2/_catalog -k`
PostgreSQL pod in CrashLoopBackOff
Check PostgreSQL logs:
kubectl logs -n customerapp deploy/postgres
Common causes:
- Init SQL syntax error — check the `postgres-init` ConfigMap
- PV permission error — verify `/data/postgres` on worker1: `ssh worker1 "ls -la /data/"`
- Database already initialized with different credentials — delete the PV data and redeploy:

  ssh worker1 "sudo rm -rf /data/postgres/*"
  kubectl delete pod -n customerapp -l app=postgres
Backend cannot connect to PostgreSQL
kubectl logs -n customerapp deploy/backend | head -20
Common causes:
- DNS not resolving `postgres` — verify CoreDNS is running: `kubectl get pods -n kube-system`
- Wrong database credentials — check the Secret values match what PostgreSQL expects
- PostgreSQL not ready yet — the backend pod may have started before PostgreSQL finished initialization. Check readiness probes: `kubectl describe pod -n customerapp -l app=backend | grep -A 3 "Readiness"`
NodePort not accessible
- Verify the service: `kubectl get svc nginx -n customerapp`
- Verify the pod is running: `kubectl get pods -n customerapp -l app=nginx`
- Test from inside the cluster first: `kubectl run curl-test --image=busybox --rm -it -- wget -qO- http://nginx.customerapp:80/health`
- Check kube-proxy is running on workers: `ssh worker1 "sudo systemctl status kube-proxy"`
- Check iptables rules exist: `ssh worker1 "sudo iptables -t nat -L KUBE-SERVICES | grep 30080"`
Nginx returns 502 Bad Gateway
Nginx cannot reach the backend. Check:
- Backend pods are running and ready
- The upstream name in nginx.conf (`backend:8080`) matches the Service name and port
- Test directly: `kubectl exec -n customerapp deploy/nginx -- wget -qO- http://backend:8080/health`
8. What You Have Now
| Capability | Verification Command |
|---|---|
| Application namespace | kubectl get ns customerapp |
| Registry pull secret | kubectl get secret regcred -n customerapp |
| PostgreSQL with persistent storage | kubectl get pods,pvc -n customerapp -l app=postgres |
| Go backend (2 replicas) | kubectl get pods -n customerapp -l app=backend |
| Nginx reverse proxy | kubectl get pods -n customerapp -l app=nginx |
| App accessible on NodePort 30080 | curl http://192.168.56.23:30080/health |
| Login and CRUD working | Browser test at http://192.168.56.23:30080 |
| Service discovery via DNS | Backend connects to postgres by service name |
The full application stack is running on Kubernetes — the same app from the Fundamentals track, now managed by the cluster. Deployments handle restarts, Services handle load balancing, ConfigMaps and Secrets manage configuration, and PersistentVolumes handle storage.
Next up: Module 15 — End-to-End Verification — run comprehensive cluster tests including scaling, rolling updates, node failure simulation, and secret encryption verification.