Module 03 — Docker Networking & Volumes

In Module 02 you containerized the Go backend and pointed it at the bare-metal PostgreSQL instance on db-server (192.168.56.11). That works, but it is not how multi-container applications typically run. In production, the database is usually a container too.

When you move PostgreSQL into a container, two new problems appear:

  1. Discovery — How does the app container find the database container? IP addresses change every time a container restarts.
  2. Persistence — Container filesystems are ephemeral. If you docker rm the database container, the data is gone.

Docker solves problem 1 with custom bridge networks (automatic DNS) and problem 2 with named volumes (storage that survives container removal). This module teaches both.

Everything runs on app-server (192.168.56.12) where Docker is already installed.


1. Why This Matters

Compare where you were at the end of Module 02 with where you are going:

| Aspect | Module 02 (bare-metal DB) | Module 03 (containerized DB) |
|---|---|---|
| PostgreSQL runs on | db-server VM (systemd) | app-server as a Docker container |
| App finds DB via | Hardcoded IP 192.168.56.11 | Container name postgres (DNS) |
| Data survives restart? | Yes (files on disk) | Only if a named volume is mounted |
| Network path | Container → host LAN → db-server | Container → Docker bridge → container |
| Port exposure | DB listens on LAN | DB only reachable inside Docker network |

By the end of this module you will have two containers talking to each other over a private Docker network, with database files stored on a named volume that survives container removal.


2. Docker Network Types

Docker creates three networks by default. List them:

docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
a1b2c3d4e5f6   bridge    bridge    local
f6e5d4c3b2a1   host      host      local
1a2b3c4d5e6f   none      null      local

What each type does

| Network Type | DNS between containers | Port mapping (-p) | Isolation | Use when |
|---|---|---|---|---|
| Default bridge | No (IP only) | Yes | Partial | Quick one-off containers |
| Custom bridge | Yes (by container name) | Yes | Full | Multi-container apps |
| Host | N/A (shares host stack) | No (not needed) | None | Performance-critical networking |
| None | No networking | No | Complete | Security-hardened batch jobs |

The default bridge gives each container an IP address, but containers cannot find each other by name — you would have to look up the IP manually every time a container restarts. Fragile.

A custom bridge adds automatic DNS. Any container on the network can resolve other containers by their --name. This is what you want for a multi-container app.
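From the application's point of view, this is ordinary name resolution: Docker's embedded DNS server answers the standard resolver calls, so no special client code is needed. A minimal sketch of what the app does when it looks up another container (using localhost as a stand-in here, since a container name like postgres only resolves from inside the Docker network):

```python
import socket

def resolve(hostname: str) -> str:
    """Resolve a hostname to an IPv4 address -- the same standard
    resolver call an app inside a container uses to find another
    container by its --name."""
    return socket.gethostbyname(hostname)

# Inside a container on customerapp-net this would be resolve("postgres");
# here localhost stands in, resolving to a loopback address.
print(resolve("localhost"))
```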


3. Create a Custom Bridge Network

docker network create customerapp-net

Inspect it:

docker network inspect customerapp-net

Key fields in the output:

{
    "Name": "customerapp-net",
    "Driver": "bridge",
    "IPAM": {
        "Config": [
            {
                "Subnet": "172.18.0.0/16",
                "Gateway": "172.18.0.1"
            }
        ]
    },
    "Containers": {}
}

The network exists but has no containers yet. Any container you attach to customerapp-net will get an IP from the 172.18.0.0/16 subnet and will be able to resolve other containers on the same network by name.

Checkpoint: docker network ls shows customerapp-net with driver bridge.


4. Docker Volumes

Before running PostgreSQL, you need somewhere to store its data files that survives container removal.

The problem

When you remove a container with docker rm, its entire filesystem is deleted — including any data the application wrote. For a web server that is fine (stateless). For a database it is catastrophic.

Named volumes vs bind mounts

Docker offers two solutions:

| Feature | Named Volume | Bind Mount |
|---|---|---|
| Managed by | Docker (/var/lib/docker/volumes/) | You (any host directory) |
| Created with | docker volume create or auto on first use | -v /host/path:/container/path |
| Survives docker rm | Yes | Yes (files are on the host) |
| Portable across hosts | Yes (volume name is the identifier) | No (depends on host directory structure) |
| Permissions | Docker handles them | You must match UID/GID |
| Best for | Database storage, application data | Config files, development source code |

For production databases, named volumes are the standard choice. They are portable, Docker-managed, and you do not have to worry about host directory permissions.

Create a volume

docker volume create pgdata

Inspect it:

docker volume inspect pgdata
[
    {
        "Name": "pgdata",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/pgdata/_data",
        "Labels": {},
        "Scope": "local"
    }
]

The volume is empty. When PostgreSQL starts and writes to /var/lib/postgresql/data inside the container, Docker stores those files at the Mountpoint path on the host.

Checkpoint: docker volume ls shows pgdata.


5. Run PostgreSQL in a Container

Pull the official PostgreSQL image:

docker pull postgres:16-alpine

Run it on the custom network with the named volume:

docker run -d \
  --name postgres \
  --network customerapp-net \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=apppassword123 \
  -e POSTGRES_DB=customerdb \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

What each flag does:

| Flag | Purpose |
|---|---|
| --name postgres | Container name — also becomes its DNS hostname on the network |
| --network customerapp-net | Attach to the custom bridge (enables DNS resolution) |
| -e POSTGRES_USER | Creates this database user on first start |
| -e POSTGRES_PASSWORD | Sets the password for that user |
| -e POSTGRES_DB | Creates this database owned by POSTGRES_USER |
| -v pgdata:/var/lib/postgresql/data | Mounts the named volume at PostgreSQL's data directory |

The official postgres image reads these environment variables on first startup (when the data directory is empty) and automatically creates the user and database. No manual CREATE USER or CREATE DATABASE needed.

Verify it started

docker logs postgres

Look for the line near the end:

LOG:  database system is ready to accept connections

If you see that, PostgreSQL is running and the customerdb database exists.

Checkpoint: docker ps shows the postgres container running. docker logs postgres shows "ready to accept connections".


6. Initialize the Database Schema

The postgres image created the database, but it has no tables yet. You need the customers and users tables from the Fundamentals track.

6.1 The docker-entrypoint-initdb.d mechanism

The official postgres image has a built-in feature: on first start (when the data directory is empty), it executes any .sql or .sh files found in /docker-entrypoint-initdb.d/ inside the container. This is the standard way to seed a PostgreSQL container.

6.2 Create the init script

On app-server, create the SQL file:

mkdir -p ~/customerapp/docker
Create ~/customerapp/docker/init.sql with the following contents:
-- Create tables for the Customer Information App

CREATE TABLE IF NOT EXISTS customers (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    email VARCHAR(100) NOT NULL,
    phone VARCHAR(20),
    address TEXT,
    created_at TIMESTAMP DEFAULT NOW(),
    updated_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    password VARCHAR(100) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Seed the default admin user
INSERT INTO users (username, password) VALUES ('admin', 'admin123');

6.3 Re-create the postgres container with the init script

The init mechanism only runs when the data directory is empty. Since the current container already initialized the volume (without tables), you need to remove both the container and the volume, then start fresh:

docker stop postgres && docker rm postgres
docker volume rm pgdata

Now re-create the volume and run postgres with a bind mount for the init script:

docker volume create pgdata
docker run -d \
  --name postgres \
  --network customerapp-net \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=apppassword123 \
  -e POSTGRES_DB=customerdb \
  -v pgdata:/var/lib/postgresql/data \
  -v ~/customerapp/docker/init.sql:/docker-entrypoint-initdb.d/init.sql:ro \
  postgres:16-alpine

The second -v flag is a bind mount — it maps a single file from the host into the container. The :ro suffix makes it read-only inside the container.

Note: This uses both a named volume (for database data) and a bind mount (for the init script) in the same container. They serve different purposes: the named volume persists data, the bind mount injects configuration.

6.4 Verify the tables

Wait a few seconds for PostgreSQL to finish initialization, then check:

docker exec -it postgres psql -U appuser -d customerdb -c '\dt'

Expected output:

            List of relations
 Schema |   Name    | Type  |  Owner
--------+-----------+-------+---------
 public | customers | table | appuser
 public | users     | table | appuser
(2 rows)

Verify the admin user:

docker exec -it postgres psql -U appuser -d customerdb \
  -c "SELECT username FROM users;"

 username
----------
 admin
(1 row)

Checkpoint: Both customers and users tables exist in customerdb. The admin user is seeded.


7. Connect the App Container

Now connect the Go app to the containerized PostgreSQL. The app container must be on the same Docker network so it can resolve the hostname postgres.

7.1 Remove the old app container

If the app container from Module 02 is still running:

docker stop customerapp && docker rm customerapp

7.2 Run the app on the same network

docker run -d \
  -p 8080:8080 \
  --name customerapp \
  --network customerapp-net \
  -e DB_HOST=postgres \
  -e DB_USER=appuser \
  -e DB_PASSWORD=apppassword123 \
  -e DB_NAME=customerdb \
  customerapp:v1

The critical change: DB_HOST=postgres instead of DB_HOST=192.168.56.11. Docker's built-in DNS resolves the name postgres to the IP address of the postgres container on customerapp-net. No hardcoded IPs, no fragile lookups.
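On the app side, nothing Docker-specific is required: the hostname is just another environment variable fed into the connection string. A sketch of that assembly (the env var names match the -e flags above; the exact DSN format and the default port 5432 are illustrative assumptions, since the Go app's actual connection code is not shown here):

```python
import os

def build_dsn() -> str:
    """Assemble a PostgreSQL connection string from the same
    environment variables passed to the container via -e.
    The URL shape shown is one common convention, not the app's
    confirmed format."""
    host = os.environ.get("DB_HOST", "localhost")  # "postgres" inside the network
    user = os.environ["DB_USER"]
    password = os.environ["DB_PASSWORD"]
    name = os.environ["DB_NAME"]
    return f"postgres://{user}:{password}@{host}:5432/{name}?sslmode=disable"
```

Because only DB_HOST changes between Module 02 and Module 03, the same image works against a bare-metal or a containerized database.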

7.3 Verify the connection

docker logs customerapp

Expected output:

2024/01/01 12:00:00 Connected to database
2024/01/01 12:00:00 Server starting on :8080

7.4 Test the full flow

Health check:

curl http://localhost:8080/health
{"database":"connected","status":"healthy"}

Login and list customers:

# Login
curl -c cookies.txt -X POST http://localhost:8080/api/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"admin123"}'

# List customers (empty at first)
curl -b cookies.txt http://localhost:8080/api/customers

The app is now talking to a containerized PostgreSQL over the Docker bridge network — no host LAN involved.

Checkpoint: curl http://localhost:8080/health returns {"database":"connected","status":"healthy"}. Both containers are on customerapp-net.


8. Verify Data Persistence

This is the most important test. You need to prove that data survives container removal because the named volume exists independently of the container.

8.1 Create test data

curl -b cookies.txt -X POST http://localhost:8080/api/customers \
  -H "Content-Type: application/json" \
  -d '{"name":"Persistence Test","email":"test@example.com","phone":"555-0100"}'

Verify it exists:

curl -b cookies.txt http://localhost:8080/api/customers

You should see the "Persistence Test" customer in the response.

8.2 Destroy and recreate the postgres container

docker stop postgres && docker rm postgres

The container is gone, but the pgdata volume still exists:

docker volume ls
DRIVER    VOLUME NAME
local     pgdata

Re-run the postgres container with the same volume. Do not include the init.sql bind mount this time — the data directory already has data, so the init mechanism would be skipped anyway:

docker run -d \
  --name postgres \
  --network customerapp-net \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=apppassword123 \
  -e POSTGRES_DB=customerdb \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

8.3 Restart the app container

The app container lost its connection when postgres was removed. Restart it:

docker restart customerapp

8.4 Verify data survived

# Re-login (session cookie may have expired)
curl -c cookies.txt -X POST http://localhost:8080/api/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"admin123"}'

# Check customers
curl -b cookies.txt http://localhost:8080/api/customers

The "Persistence Test" customer is still there. The named volume preserved the database files across container destruction and recreation.

Checkpoint: Data created before docker rm postgres is still present after recreating the container with the same volume.


9. Inspect and Debug Networks

Now that both containers are running, look at how Docker networking works under the hood.

9.1 Inspect the network

docker network inspect customerapp-net

In the Containers section you will see both containers with their assigned IPs:

"Containers": {
"...": {
"Name": "postgres",
"IPv4Address": "172.18.0.2/16"
},
"...": {
"Name": "customerapp",
"IPv4Address": "172.18.0.3/16"
}
}

9.2 Test container-to-container connectivity

From the postgres container, ping the app:

docker exec postgres ping -c 2 customerapp
PING customerapp (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.100 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms

Docker DNS resolved customerapp to its IP address. This is the same mechanism the app uses to resolve postgres.

9.3 Port exposure rules

An important distinction:

| Communication | Port mapping (-p) needed? |
|---|---|
| Container → Container (same network) | No — they communicate directly over the bridge |
| Host → Container | Yes — use -p hostPort:containerPort |
| External machine → Container | Yes — and the host firewall must allow it |

Notice that the postgres container has no -p flag. It is not reachable from the host or external machines — only from other containers on customerapp-net. This is more secure than exposing the database port to the network.

9.4 Useful debug commands

| Command | Purpose |
|---|---|
| docker network inspect customerapp-net | See containers and their IPs |
| docker exec postgres ping customerapp | Test DNS + connectivity |
| docker exec customerapp nslookup postgres | Verify DNS resolution (if nslookup available) |
| docker logs postgres | Check PostgreSQL startup/errors |
| docker exec -it postgres psql -U appuser -d customerdb | Open an interactive SQL session |

10. Clean Up

When you are done experimenting, clean up all resources:

# Stop and remove containers
docker stop customerapp postgres
docker rm customerapp postgres

# Remove the network
docker network rm customerapp-net

# Remove the volume (deletes all database data!)
docker volume rm pgdata

To remove everything at once (unused containers, networks, images, and build cache):

docker system prune

Note: docker system prune does not remove named volumes. Use docker volume prune separately if you want to delete unused volumes.


11. Troubleshooting

lookup postgres: no such host

The app container cannot resolve the hostname postgres. This means the containers are on different networks. Verify both are on customerapp-net:

docker inspect customerapp --format '{{json .NetworkSettings.Networks}}' | python3 -m json.tool
docker inspect postgres --format '{{json .NetworkSettings.Networks}}' | python3 -m json.tool

Both must show customerapp-net. If either is on the default bridge, remove the container and re-run with --network customerapp-net.

connection refused on port 5432

PostgreSQL takes a few seconds to initialize on first start (especially when running init scripts). The app container may start before postgres is ready. Fix: restart the app container after postgres logs show "ready to accept connections":

docker restart customerapp

In Module 04 (Docker Compose), you will learn about health checks that handle this startup ordering automatically.
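The restart works, but the underlying pattern is a readiness wait: keep retrying the TCP connection until the database port accepts. A sketch of that loop under the assumption that a reachable port means the service is up (in this stack the app would call it with host "postgres" and port 5432 before opening its connection pool; a production check would also verify PostgreSQL answers protocol-level queries, not just TCP):

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Retry a TCP connection to host:port until it accepts or
    `timeout` seconds elapse. Returns True once reachable."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True          # port is accepting connections
        except OSError:
            time.sleep(0.5)          # not ready yet: back off and retry
    return False
```

Compose health checks in Module 04 formalize the same idea declaratively, so the app container does not have to implement its own wait loop.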

Init script not running (no tables after start)

The /docker-entrypoint-initdb.d/ mechanism only runs when the data directory is empty. If the volume already has data from a previous run, the init scripts are silently skipped. To re-run them:

docker stop postgres && docker rm postgres
docker volume rm pgdata
docker volume create pgdata
# Re-run the docker run command with the init.sql bind mount

Data lost after docker rm

You ran the postgres container without -v pgdata:/var/lib/postgresql/data. Without a named volume, the data directory lives inside the container filesystem and is deleted with the container. Always use a named volume for database containers.

Cannot connect to postgres from host machine

The postgres container has no -p flag, so it is only reachable from other containers on the same Docker network. If you need host access for debugging:

docker stop postgres && docker rm postgres
docker run -d \
  --name postgres \
  --network customerapp-net \
  -p 5432:5432 \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=apppassword123 \
  -e POSTGRES_DB=customerdb \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

Then connect from the host: psql -h localhost -U appuser -d customerdb


12. What You Have Now

| Capability | Verification Command |
|---|---|
| Custom bridge network with DNS | docker network inspect customerapp-net |
| PostgreSQL running in a container | docker logs postgres — "ready to accept connections" |
| Named volume for persistent data | docker volume inspect pgdata |
| App resolves DB by container name | docker logs customerapp — "Connected to database" |
| Container-to-container communication | docker exec postgres ping -c 1 customerapp |
| Data survives container removal | Remove + recreate postgres, verify data persists |
| Init script seeds database schema | docker exec -it postgres psql -U appuser -d customerdb -c '\dt' |

Next up: Module 04 — Docker Compose — define the entire multi-container stack in a single YAML file so you can start everything with one command.