Are you tired of juggling multiple `docker run` commands just to get your application stack up and running? Do you wish there was a simpler, more efficient way to define, link, and deploy your services, like a web app with a database, or a microservices architecture?
If so, you're in the right place! Docker Compose is your ultimate solution. It allows you to define and run multi-container Docker applications with a single command. Think of it as a blueprint for your entire application stack.
In this comprehensive guide, we'll dive deep into Docker Compose, showing you how to write practical `docker-compose.yml` files, complete with real-world examples that you can follow along with. Let's turn that deployment headache into a deployment breeze!
What is Docker Compose and Why Do You Need It?
At its core, Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file (typically `docker-compose.yml`) to configure your application's services, networks, and volumes. Then, with a single command, you can create and start all the services from your configuration.
Key Benefits of Docker Compose:
- Simplicity: Define your entire application stack in one file. No more complex shell scripts or remembering long `docker run` commands.
- Reproducibility: Ensure that your application runs consistently across different environments (development, testing, production) because the entire stack is defined in code.
- Dependency Management: Easily define dependencies between services (e.g., your web app depends on a database), ensuring they start in the correct order.
- Isolation: Services within a Compose application run in isolated environments, making it easier to manage dependencies and avoid conflicts.
- Portability: Your `docker-compose.yml` file can be shared and used by anyone with Docker installed, making collaboration seamless.
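To make the "Simplicity" point concrete, here is a rough sketch of what the manual, Compose-less workflow for a small two-service stack might look like (the names, images, and paths are illustrative):

```bash
# Without Compose: create a network, then start each container by hand
docker network create backend-network

docker run -d --name db --network backend-network \
  -e POSTGRES_PASSWORD=mypassword \
  -v db_data:/var/lib/postgresql/data \
  postgres:13

docker run -d --name web --network backend-network \
  -p 80:80 \
  -v "$(pwd)/nginx/html:/usr/share/nginx/html" \
  nginx:latest

# With Compose: the same stack, described once in docker-compose.yml
docker compose up -d
```

Every one of those flags moves into the `docker-compose.yml` file, and the whole stack comes up (and goes down) with a single command.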
Prerequisites Before We Start
To follow along with this guide, you’ll need:
- Docker Desktop (or Docker Engine and Docker Compose): Make sure you have Docker installed on your system. Docker Desktop conveniently bundles Docker Engine, Docker Compose, and other tools. You can download it from the official Docker website.
- A Text Editor: Any code editor will do (VS Code, Sublime Text, Atom, etc.).
- Basic Understanding of Docker: Familiarity with concepts like images, containers, volumes, and networks will be helpful.
Once Docker Desktop is installed, you'll have access to the `docker compose` command (note the space, as opposed to the older hyphenated `docker-compose` command, though both often work). We'll use the modern `docker compose` syntax throughout this guide.
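You can quickly check which variant is available on your machine (the exact output varies by installation):

```bash
docker compose version       # modern plugin syntax
# or, on older setups:
# docker-compose --version   # legacy standalone binary
```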
Understanding the `docker-compose.yml` File Structure
The `docker-compose.yml` file is the heart of your Compose application. It's a YAML file, which means indentation and spacing are crucial!
Here’s a breakdown of its fundamental sections:
```yaml
version: '3.8'          # Specifies the Compose file format version
services:               # Defines the services (containers) that make up your application
  web:                  # Name of your first service
    image: nginx:latest # Docker image to use for this service
    ports:
      - "80:80"         # Port mapping: HOST_PORT:CONTAINER_PORT
    volumes:
      - ./nginx/conf.d:/etc/nginx/conf.d   # Mount host path to container path
      - ./nginx/html:/usr/share/nginx/html
    environment:        # Environment variables to pass to the container
      - NGINX_ROOT=/usr/share/nginx/html
  db:                   # Name of your second service
    image: postgres:13
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
    volumes:
      - db_data:/var/lib/postgresql/data   # Mount a named volume for persistence
    networks:
      - backend-network # Assign to a custom network
networks:               # Defines custom networks for inter-service communication
  backend-network:
    driver: bridge      # Default driver for networks
volumes:                # Defines named volumes for persistent data storage
  db_data:              # Name of the volume
```
Let's break down the most common directives you'll use within a `services` definition:
- `image`: Specifies the Docker image to use for the service (e.g., `nginx:latest`, `mysql:8.0`). Docker will pull this image if it's not already present locally.
- `build`: If you have a custom `Dockerfile` for your service, use `build` to specify the context path (usually `.` for the current directory) where the `Dockerfile` resides.
- `ports`: Maps ports from the host machine to the container. Format: `"HOST_PORT:CONTAINER_PORT"`.
- `volumes`: Mounts host paths or named volumes into the container. Essential for data persistence or injecting configuration files.
  - Host bind mounts: `./my-data:/app/data`
  - Named volumes: `my-volume:/app/data`
- `environment`: Sets environment variables inside the container. Great for configuring applications without modifying the image.
- `depends_on`: Defines dependencies between services. Compose will start services in dependency order. Note: this only waits for the container to start, not necessarily for the application inside to be ready. For more robust readiness checks, consider health checks (see the sketch after this list).
- `networks`: Connects a service to specified networks. Services on the same network can communicate with each other using their service names as hostnames.
- `restart`: Defines the restart policy for the container (e.g., `always`, `on-failure`, `no`).
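As hinted at under `depends_on`, a `healthcheck` lets Compose wait for a service to be genuinely ready rather than merely started. Here is a minimal sketch using the long form of `depends_on` supported by the modern `docker compose` command; it assumes the official Postgres image, which ships with `pg_isready`:

```yaml
services:
  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=mypassword
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]  # exits 0 once Postgres accepts connections
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    image: nginx:latest
    depends_on:
      db:
        condition: service_healthy  # wait for the health check to pass, not just for container start
```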
Practical Examples: Let's Get Our Hands Dirty!
We’ll start with simple examples and gradually move to more complex, real-world scenarios.
Example 1: Simple Static Website (NGINX)
This is the "Hello World" of multi-container apps: even though it's just one container, it demonstrates the basic structure! We'll serve a static HTML page using an NGINX web server.
1. Project Structure:
```text
my-static-site/
├── docker-compose.yml
└── html/
    └── index.html
```
2. Create `my-static-site/html/index.html`:
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Hello Docker Compose!</title>
  <style>
    body { font-family: Arial, sans-serif; text-align: center; margin-top: 50px; }
    h1 { color: #333; }
    p { color: #666; }
  </style>
</head>
<body>
  <h1>Hello from Docker Compose!</h1>
  <p>This page is served by an NGINX container.</p>
  <p>Easy peasy deployments!</p>
</body>
</html>
```
3. Create `my-static-site/docker-compose.yml`:
```yaml
version: '3.8'
services:
  webserver:
    image: nginx:alpine   # Using a lightweight Alpine-based NGINX image
    ports:
      - "80:80"           # Map host port 80 to container port 80
    volumes:
      - ./html:/usr/share/nginx/html:ro  # Mount the 'html' directory into NGINX's web root (read-only)
    restart: always       # Always restart the container if it stops
```
Explanation:
- `webserver`: This is the name of our service.
- `image: nginx:alpine`: We're using the official NGINX image; the `alpine` variant is smaller.
- `ports: - "80:80"`: This exposes port 80 of the NGINX container on port 80 of your host machine.
- `volumes: - ./html:/usr/share/nginx/html:ro`: This is crucial! It mounts your local `html` directory (relative to `docker-compose.yml`) into NGINX's default web root inside the container. `:ro` makes it read-only for safety.
- `restart: always`: Ensures NGINX automatically restarts if it crashes or Docker restarts.
4. Run it!
Navigate to the `my-static-site` directory in your terminal and run:

```bash
docker compose up -d
```
- `up`: Builds, creates, starts, and attaches to containers for a service.
- `-d`: Runs containers in "detached" mode (in the background).
5. Verify:
Open your web browser and go to http://localhost. You should see your "Hello from Docker Compose!" page!
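If you prefer the terminal, a quick check with curl (assuming it's installed on your host) should return the same HTML:

```bash
curl http://localhost
```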
6. Clean up:
When you're done, stop and remove the containers and the network created from the `docker-compose.yml` file (named volumes, if any, are kept unless you add `-v`):

```bash
docker compose down
```
Example 2: WordPress Application (Web App + Database)
This is a classic example of a multi-container application, demonstrating how to link a web server (WordPress) with a database (MySQL). We’ll use named volumes for data persistence.
1. Project Structure:
```text
my-wordpress-blog/
└── docker-compose.yml
```
2. Create `my-wordpress-blog/docker-compose.yml`:
```yaml
version: '3.8'
services:
  db:
    image: mysql:8.0                   # Using MySQL as our database
    container_name: wordpress_db       # Optional: gives a specific name to the container
    environment:
      MYSQL_ROOT_PASSWORD: my_super_secret_password  # IMPORTANT: Change this in production!
      MYSQL_DATABASE: wordpress        # Database name for WordPress
      MYSQL_USER: wordpressuser        # User for WordPress
      MYSQL_PASSWORD: wordpresspassword  # Password for WordPress user
    volumes:
      - db_data:/var/lib/mysql         # Persistent storage for database data
    networks:
      - wordpress-network              # Connect to our custom network
    restart: always

  wordpress:
    depends_on:                        # Ensure 'db' service starts before 'wordpress'
      - db
    image: wordpress:latest            # Official WordPress image
    container_name: my_wordpress_blog  # Optional: specific name for the container
    ports:
      - "8000:80"                      # Map host port 8000 to container port 80
    environment:
      WORDPRESS_DB_HOST: db:3306       # Hostname is the service name 'db', default MySQL port
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wordpressuser
      WORDPRESS_DB_PASSWORD: wordpresspassword
    volumes:
      - wordpress_data:/var/www/html   # Persistent storage for WordPress files (themes, plugins, uploads)
    networks:
      - wordpress-network              # Connect to our custom network
    restart: always

networks:                              # Define the custom network for internal communication
  wordpress-network:
    driver: bridge                     # Default network driver

volumes:                               # Define named volumes for data persistence
  db_data:                             # Volume for MySQL data
  wordpress_data:                      # Volume for WordPress files
```
Explanation:
- `db` service:
  - `image: mysql:8.0`: Uses the official MySQL 8.0 image.
  - `environment`: Crucial for configuring MySQL (root password, database name, user, password). Remember to change the passwords for real applications!
  - `volumes: - db_data:/var/lib/mysql`: Creates a named volume called `db_data`, which persists the database's files even if the `db` container is removed. This is vital for your data!
  - `networks: - wordpress-network`: Connects the `db` service to our custom `wordpress-network`. This allows `wordpress` to communicate with `db` using its service name.
- `wordpress` service:
  - `depends_on: - db`: Tells Compose to start the `db` service first.
  - `image: wordpress:latest`: Uses the official WordPress image.
  - `ports: - "8000:80"`: Exposes WordPress on port 8000 of your host.
  - `environment`: Configures WordPress to connect to the MySQL database. Notice `WORDPRESS_DB_HOST: db:3306`: `db` is the service name, which acts as the hostname within the `wordpress-network`.
  - `volumes: - wordpress_data:/var/www/html`: Similar to `db_data`, this creates a named volume for WordPress files (themes, plugins, uploads), ensuring they persist.
  - `networks: - wordpress-network`: Connects to the same custom network as `db`.
- `networks` and `volumes` sections at the root level: These define the custom network and named volumes referenced by the services. Using a custom network (like `wordpress-network`) provides better isolation and makes communication between services explicit.
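If you want to see the service-name networking in action, you can poke around from inside the running containers. A quick sketch (it assumes `getent` is present in the WordPress image, which is typical for Debian-based images, and uses the `mysql` client bundled with the `mysql` image):

```bash
# Resolve the 'db' service name from inside the 'wordpress' container
docker compose exec wordpress getent hosts db

# Run a query against MySQL with the credentials from docker-compose.yml
docker compose exec db mysql -u wordpressuser -pwordpresspassword wordpress -e "SHOW TABLES;"
```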
3. Run it!
Navigate to the `my-wordpress-blog` directory in your terminal and run:

```bash
docker compose up -d
```
It might take a few moments for WordPress and MySQL to fully initialize.
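You can follow the startup progress of both containers while they initialize:

```bash
docker compose logs -f db wordpress   # press Ctrl+C to stop following
```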
4. Verify:
Open your web browser and go to http://localhost:8000. You should see the WordPress installation wizard! Follow the steps to set up your blog. Your data (database and WordPress files) will be safely stored in the `db_data` and `wordpress_data` named volumes.
5. Clean up:
```bash
docker compose down -v   # -v also removes named volumes
```

- `down -v`: The `-v` flag is important here. It tells Docker Compose to also remove the named volumes (`db_data` and `wordpress_data`) associated with this Compose file. Be careful with this in production, as it deletes your data! For development, it's great for a clean slate. If you want to see what would be removed first, check the quick sketch below.
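To double-check which volumes exist before deleting them, list them first; Compose typically prefixes volume names with the project name (by default, the directory name):

```bash
docker volume ls   # expect entries like my-wordpress-blog_db_data and my-wordpress-blog_wordpress_data
```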
Example 3: Simple Python Flask API with Redis Cache
This example demonstrates building a custom image from a `Dockerfile`, using a `depends_on` relationship, and inter-service communication over a custom network for a simple API that uses Redis as a cache.
1. Project Structure:
```text
my-api-cache/
├── docker-compose.yml
└── app/
    ├── Dockerfile
    ├── app.py
    └── requirements.txt
```
2. Create `my-api-cache/app/requirements.txt`:
```text
Flask==2.3.2
redis==5.0.1
```
3. Create `my-api-cache/app/Dockerfile`:
```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim-buster

# Set the working directory in the container
WORKDIR /app

# Copy the dependency list first and install dependencies (better layer caching)
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY app.py ./

# Expose port 5000 for the Flask app
EXPOSE 5000

# Run the Flask app
CMD ["python", "app.py"]
```
4. Create `my-api-cache/app/app.py`:
```python
from flask import Flask, jsonify
import redis
import time

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)  # 'redis' is the service name!

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return jsonify({"message": "Hello from Flask API!", "hits": count})

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000, debug=True)
```
- Important Note: Notice `redis.Redis(host='redis', port=6379)`. The `host` is simply the service name defined in `docker-compose.yml` (`redis`), not `localhost` or an IP address. Docker Compose's internal networking makes this possible!
5. Create `my-api-cache/docker-compose.yml`:
```yaml
version: '3.8'
services:
  redis:
    image: redis:6.2-alpine    # Lightweight Redis image
    container_name: my_redis_cache
    ports:
      - "6379:6379"            # Expose Redis port (optional, but useful for debugging)
    networks:
      - app-network            # Connect to our custom network
    restart: always

  api:
    build: ./app               # Build the image from the Dockerfile in the 'app' directory
    container_name: my_flask_api
    depends_on:
      - redis                  # Ensure Redis starts before the API
    ports:
      - "5000:5000"            # Expose Flask API on host port 5000
    networks:
      - app-network            # Connect to our custom network
    volumes:
      - ./app:/app             # Mount the app directory for live code changes during development (optional)
    restart: always

networks:
  app-network:
    driver: bridge
```
Explanation:
- `redis` service:
  - `image: redis:6.2-alpine`: Uses a stable, lightweight Redis image.
  - `ports: - "6379:6379"`: While the API connects internally, exposing the port allows you to connect to Redis directly from your host for debugging if needed.
  - `networks: - app-network`: Connects to our custom network.
- `api` service:
  - `build: ./app`: Instead of `image`, we use `build` to tell Compose to build a Docker image from the `Dockerfile` located in the `./app` directory.
  - `depends_on: - redis`: Ensures the `redis` container starts before the `api` container.
  - `ports: - "5000:5000"`: Exposes the Flask API on host port 5000.
  - `networks: - app-network`: Connects to the same custom network as `redis`, enabling internal communication using service names.
  - `volumes: - ./app:/app`: This is a great development trick! It mounts your local `app` directory into the container, so changes to `app.py` show up inside the container without rebuilding the image (and the Flask dev server with `debug=True` reloads automatically). Changes to the `Dockerfile` itself still require a rebuild.
6. Run it!
Navigate to the `my-api-cache` directory in your terminal and run:

```bash
docker compose up -d --build
```
- `--build`: This flag ensures that Docker Compose rebuilds the images defined with `build` instructions. It's good practice to include it whenever you've made changes to your `Dockerfile` or any files within the build context.
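If you've only changed one service, you can limit the rebuild to it; for example, to rebuild and restart just the `api` service:

```bash
docker compose up -d --build api
```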
7. Verify:
Open your web browser and go to http://localhost:5000. You should see something like:
```json
{
  "message": "Hello from Flask API!",
  "hits": 1
}
```
Refresh the page multiple times, and the `hits` count should increment, confirming that your Flask API is communicating with Redis!
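You can also hit the endpoint a few times from the terminal (assuming curl and a POSIX shell) and watch the counter climb:

```bash
for i in 1 2 3; do curl -s http://localhost:5000; echo; done
```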
8. Clean up:
```bash
docker compose down
```
Essential Docker Compose Commands
You've already seen `up` and `down`, but here are other frequently used commands:
- `docker compose up` (or `docker compose up -d` for detached mode): Builds, creates, starts, and attaches to containers for a service.
- `docker compose down`: Stops and removes the containers and networks defined in `docker-compose.yml`. Use `docker compose down -v` to also remove named volumes.
- `docker compose ps`: Lists all services and their current status for the current Compose project.
- `docker compose logs [service_name]`: Displays log output from services. You can specify a service name (e.g., `docker compose logs api`) or omit it to see logs from all services.
- `docker compose build [service_name]`: Builds or rebuilds services. Useful if you've changed your `Dockerfile` or the build context.
- `docker compose exec [service_name] [command]`: Executes a command in a running container. For example, `docker compose exec db bash` opens a bash shell inside your database container.
- `docker compose restart [service_name]`: Restarts containers.
- `docker compose stop [service_name]`: Stops running containers without removing them.
- `docker compose start [service_name]`: Starts stopped containers.
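To see how these fit together, here is a typical debugging sequence you might run against the Flask/Redis stack from Example 3 (the service names match that example):

```bash
docker compose ps                              # what's running, and on which ports?
docker compose logs -f api                     # follow the API's logs (Ctrl+C to stop)
docker compose exec redis redis-cli get hits   # peek at the hit counter stored in Redis
docker compose restart api                     # restart just the API service
```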
Best Practices and Tips
- Specify Image Versions: Always use specific image tags (e.g., `nginx:1.23.4`, `python:3.9-slim-buster`) instead of `latest`. This ensures reproducibility and prevents unexpected breakage when a new `latest` version is released.
- Use Named Volumes for Persistence: For any data you want to keep (databases, uploaded files), use named volumes. They are managed by Docker and easier to back up and restore.
- Environment Variables for Configuration: Externalize sensitive information (passwords, API keys) using environment variables. Never hardcode them directly in your `docker-compose.yml` or `Dockerfile`. Consider `.env` files for local development (see the sketch after this list).
- Separate Development and Production Configurations: For more complex applications, you might have different configurations for development (e.g., bind mounts for hot-reloading) and production (e.g., optimized builds, different networking). Docker Compose lets you layer multiple Compose files with `docker compose -f docker-compose.yml -f docker-compose.prod.yml up`.
- Define Custom Networks: While Compose creates a default network, defining your own (as shown in the examples) provides better organization and explicit control over service communication.
- Health Checks: For production applications, add `healthcheck` directives to your services to ensure that a service is not just "running" but actually "ready" to accept connections (e.g., the database is fully initialized). This is more robust than `depends_on` alone.
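As a small sketch of the environment-variable practice above: the modern `docker compose` command interpolates `${VAR}` references from a `.env` file sitting next to `docker-compose.yml` (the variable name here is illustrative; keep the real file out of version control):

```yaml
# docker-compose.yml (excerpt)
# A .env file in the same directory contains a single line such as: DB_PASSWORD=change_me
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}   # filled in from .env at 'docker compose up' time
```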
Conclusion
Docker Compose is an incredibly powerful and indispensable tool for anyone working with multi-container Docker applications. It simplifies your development workflow, enhances reproducibility, and makes deploying complex application stacks a breeze. By mastering the `docker-compose.yml` file and its various directives, you gain full control over your application's environment.
We’ve covered the basics, walked through practical examples, and explored essential commands. Now you have the knowledge and hands-on experience to start composing your own multi-container applications.
So, go ahead, give it a try! Start streamlining your deployments and build more robust, reproducible applications with Docker Compose. Happy composing!