In recent years, containerization has revolutionized software development and deployment by addressing the "it works on my machine" problem. At the forefront of this revolution is Docker—a platform that simplifies building, shipping, and running applications in isolated environments called containers. Unlike traditional virtual machines (VMs), which virtualize an entire operating system, Docker containers share the host system’s OS kernel, making them lightweight, fast, and resource-efficient. This guide is designed for Linux users new to Docker. We’ll start with core concepts, walk through installation on major Linux distributions, explore essential commands, learn to build custom images with Dockerfiles, and cover best practices to ensure efficient and secure container management. By the end, you’ll have the skills to start using Docker for development, testing, and small-scale deployment.
Table of Contents
- Understanding Docker Fundamentals
- Installing Docker on Linux
- Basic Docker Commands
- Working with Dockerfiles: Building Custom Images
- Common Docker Practices
- Docker Best Practices
- Troubleshooting Common Issues
- Conclusion
- References
1. Understanding Docker Fundamentals
Before diving into commands, let’s clarify key Docker concepts:
Containers vs. Virtual Machines (VMs)
- VMs: Virtualize hardware to run multiple OS instances (e.g., Windows on Linux). Each VM includes a full OS, making them heavy and slow to start.
- Containers: Share the host OS kernel and isolate application dependencies (libraries, binaries). They are lightweight (~MBs vs. GBs for VMs) and start in seconds.
Key Docker Components
- Docker Engine: The core runtime that manages containers (daemon + CLI).
- Images: Read-only templates containing instructions to build a container (e.g., an Ubuntu OS with Nginx installed). Think of images as “blueprints.”
- Containers: Runnable instances of images. You can create, start, stop, or delete containers from an image.
- Dockerfile: A text file with instructions to build a custom image (e.g., “install Python, copy app code, run the app”).
- Docker Hub: A public registry (like GitHub for Docker images) with pre-built images (e.g., `ubuntu`, `nginx`, `node`).
- Docker Compose: A tool to define and run multi-container apps (e.g., a web app + database), covered briefly later.
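Docker Compose is covered only briefly in this guide, but a minimal example makes the idea concrete. Here is a sketch of a hypothetical `docker-compose.yml` for a web server plus a database (the service names and password are illustrative, not required values):

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"   # host:container, same idea as -p in docker run
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example
```

Start both services with `docker compose up -d` and stop them with `docker compose down`.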
2. Installing Docker on Linux
Docker provides official packages for most Linux distributions. Below are steps for Ubuntu, Fedora, and Debian.
Prerequisites
- A 64-bit Linux distribution with kernel version ≥ 4.15 (check with `uname -r`).
- `sudo` privileges.
Install Docker on Ubuntu
1. Update package lists and install dependencies:

```bash
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
```

2. Add Docker’s official GPG key:

```bash
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```

3. Add Docker’s stable repository:

```bash
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```

4. Install Docker Engine:

```bash
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
```

5. Start and enable Docker (to run on boot):

```bash
sudo systemctl start docker
sudo systemctl enable docker
```

6. Verify the installation by running the `hello-world` test image:

```bash
sudo docker run hello-world
```

You’ll see a message like: “Hello from Docker! This message shows that your installation appears to be working correctly.”
Install on Fedora
1. Add Docker’s repository:

```bash
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
```

2. Install Docker:

```bash
sudo dnf install -y docker-ce docker-ce-cli containerd.io
```

3. Start and enable Docker:

```bash
sudo systemctl start docker
sudo systemctl enable docker
```

4. Verify:

```bash
sudo docker run hello-world
```
Install on Debian
Similar to Ubuntu, but use `debian` instead of `ubuntu` in the repository URLs:

```bash
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
sudo systemctl start docker && sudo systemctl enable docker
```
Post-Installation: Run Docker Without sudo
By default, Docker requires sudo to run commands. To avoid this:
- Add your user to the `docker` group:

```bash
sudo usermod -aG docker $USER
```

- Log out and back in (or restart your terminal) for the change to take effect.
- Verify with `docker run hello-world` (no `sudo` needed now!).
3. Basic Docker Commands
Let’s explore essential commands to work with images and containers.
Check Docker Version
```bash
docker --version   # Docker Engine version
docker info        # Detailed system info (e.g., number of containers/images)
```
Working with Images
Images are pulled from Docker Hub or built locally.
Pull an Image
Download an image from Docker Hub (e.g., ubuntu:22.04):
```bash
docker pull ubuntu:22.04   # ":22.04" is the tag (version); omit it to pull "latest"
docker pull nginx          # Pulls the latest Nginx image
```
List Local Images
```bash
docker images      # List all images
docker images -a   # Include intermediate images (rarely needed)
```
Delete an Image
```bash
docker rmi <image-id>     # Replace <image-id> with the IMAGE ID from `docker images`
docker rmi nginx:latest   # Delete by name:tag
```
Working with Containers
Containers are created from images. Let’s run, manage, and interact with them.
Run a Container
The docker run command creates and starts a container. Let’s break it down with examples:
Example 1: Run an interactive Ubuntu shell
```bash
docker run -it --name my-ubuntu ubuntu:22.04 /bin/bash
```
- `-it`: `-i` (interactive) keeps STDIN open; `-t` (tty) allocates a terminal.
- `--name my-ubuntu`: Assign a name to the container (optional but recommended).
- `/bin/bash`: Command to run in the container (starts a shell).
You’ll now be inside the Ubuntu container. Type `exit` to leave.
Example 2: Run Nginx in detached mode (background)
Nginx is a web server; we’ll run it in “detached” mode and map port 8080 on the host to port 80 in the container:
```bash
docker run -d --name my-nginx -p 8080:80 nginx
```
- `-d`: Run in detached mode (the container runs in the background).
- `-p 8080:80`: Map host port 8080 to container port 80 (so you can access Nginx via `http://localhost:8080`).
Visit http://localhost:8080 in your browser to see Nginx’s default page!
List Containers
```bash
docker ps      # List running containers
docker ps -a   # List all containers (running + stopped)
```
Stop/Restart/Delete Containers
```bash
docker stop my-nginx      # Stop a running container
docker start my-nginx     # Start a stopped container
docker restart my-nginx   # Restart a running container
docker rm my-nginx        # Delete a stopped container (add -f to force-delete a running one)
```
Execute Commands in a Running Container
Use docker exec to run commands in a running container. For example, get a shell in the my-nginx container:
```bash
docker exec -it my-nginx /bin/bash
```
Now you can explore the Nginx container (e.g., check /usr/share/nginx/html/index.html).
Quick Reference: Common Commands
| Command | Purpose |
|---|---|
| `docker pull <image>` | Download an image from Docker Hub |
| `docker run <image>` | Create and start a container |
| `docker ps -a` | List all containers |
| `docker exec -it <name> <cmd>` | Run a command in a container |
| `docker stop <name>` | Stop a container |
| `docker rm <name>` | Delete a container |
| `docker rmi <image>` | Delete an image |
4. Working with Dockerfiles: Building Custom Images
A Dockerfile is a script that defines how to build a custom image. Let’s create a simple Node.js app and package it into a Docker image.
Step 1: Create a Sample App
Create a project folder with these files:
mkdir my-node-app && cd my-node-app
touch app.js Dockerfile
app.js (a simple web server):
```javascript
const http = require('http');
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello from Docker!\n');
});

server.listen(port, () => {
  console.log(`Server running on port ${port}`);
});
```
Step 2: Write the Dockerfile
Dockerfile (no file extension):
```dockerfile
# Use Node.js 18 as the base image
FROM node:18-alpine

# Set working directory in the container
WORKDIR /app

# Copy package.json (if we had dependencies, we'd run `npm install` here)
# For this example, we skip dependencies since there are none.

# Copy app code into the container
COPY app.js .

# Expose port 3000 (documentation only; doesn't publish the port)
EXPOSE 3000

# Command to run the app
CMD ["node", "app.js"]
```
Key Dockerfile Instructions
- `FROM`: Specify the base image (use official images for security). `node:18-alpine` is a lightweight Node.js image.
- `WORKDIR`: Set the working directory for subsequent instructions (avoids messy paths like `/app/app.js`).
- `COPY`: Copy files from the host into the image (prefer `COPY` over `ADD` unless you need URL or archive-extraction support).
- `EXPOSE`: Document which port the container listens on (doesn’t actually publish the port—use `-p` in `docker run` for that).
- `CMD`: Define the default command to run when the container starts (overridable via `docker run`).
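The example app has no dependencies, so its Dockerfile skips the install step. For an app that does ship a `package.json`, a common pattern (sketched here with hypothetical manifest names) copies the manifests first, so Docker can reuse the cached install layer until the dependencies actually change:

```dockerfile
FROM node:18-alpine
WORKDIR /app

# Copy only the dependency manifests first: this layer (and the npm ci
# layer below) stays cached unless package*.json changes.
COPY package*.json ./
RUN npm ci --omit=dev

# Now copy the rest of the source; code edits don't invalidate the install layer.
COPY . .

EXPOSE 3000
CMD ["node", "app.js"]
```

This ordering is what makes rebuilds fast: a one-line code change re-runs only the final `COPY` layer instead of reinstalling every dependency.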
Step 3: Build the Image
Run this in the my-node-app folder to build the image:
docker build -t my-node-app:1.0 .
- `-t my-node-app:1.0`: Tag the image with a name (`my-node-app`) and version (`1.0`).
- `.`: The build context (current directory), where Docker looks for the Dockerfile and any files to copy.
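Teams often tag images with something traceable rather than a hand-picked version number. Here is a small sketch (the fallback value and naming scheme are hypothetical conventions, not Docker requirements) that derives a tag from the current git commit:

```shell
# Derive an image tag from the current git commit; fall back to "dev"
# when run outside a git repository (or when git isn't installed).
short_sha=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
tag="my-node-app:${short_sha}"
echo "$tag"
```

You would then build with `docker build -t "$tag" .`, so every image can be traced back to the exact commit it was built from.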
Step 4: Run the Container
Start the app with port mapping (host port 3000 → container port 3000):
docker run -d -p 3000:3000 --name my-app-container my-node-app:1.0
Visit http://localhost:3000—you’ll see “Hello from Docker!” 🎉
5. Common Docker Practices
Managing Containers and Images
- Clean up stopped containers: Use
docker container pruneto delete all stopped containers (add-fto skip confirmation). - Clean up unused images:
docker image prune -adeletes all unused images (use with caution!). - Auto-delete containers: Add
--rmtodocker runto delete the container when it stops (useful for testing):docker run --rm -it ubuntu:22.04 /bin/bash # Container is deleted after exit
Networking
Docker creates a default “bridge” network for containers, but automatic name-based DNS between containers only works on user-defined networks. Create one and attach containers to it:

```bash
# Create a user-defined bridge network
docker network create my-net

# Run a PostgreSQL container (database) on that network
docker run -d --name my-db --network my-net -e POSTGRES_PASSWORD=pass postgres:14

# Another container on the same network can reach it by name ("my-db")
docker run --rm --network my-net postgres:14 pg_isready -h my-db
```
Persistent Data with Volumes
Containers are ephemeral: data inside is lost when the container is deleted. Use volumes to persist data:
Named Volumes (Docker-managed storage):
```bash
# Create a volume
docker volume create my-data

# Mount the volume into a container (data written to /app/data persists)
docker run -d -v my-data:/app/data --name data-container nginx
```
Bind Mounts (link host directory to container):
Useful for development (e.g., sync code changes between host and container):
```bash
# Mount the current host directory to /app in the container
docker run -it -v "$(pwd)":/app node:18-alpine /bin/sh
```
Now, edits to app.js on your host will reflect immediately in the container!
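The named-volume pattern above can also be declared in a Compose file, which is handy once an app grows beyond one container. A sketch (the service name, volume name, and password are illustrative):

```yaml
services:
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # named volume survives container deletion
volumes:
  db-data:
```

With this, `docker compose down` removes the container but leaves `db-data` intact, so the database survives upgrades and restarts.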
6. Docker Best Practices
Security
- Use official images: Avoid untrusted images from Docker Hub (check “Official Image” badge).
- Run containers as non-root: Add a non-root user in your Dockerfile:

```dockerfile
FROM node:18-alpine
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Switch to the non-root user
USER appuser
```

- Scan images for vulnerabilities: Use `docker scan my-node-app:1.0` (requires a Docker Hub account).
- Limit container capabilities: Restrict what containers can do (e.g., `--cap-drop=ALL` to remove all Linux capabilities).
Image Optimization
- Use multi-stage builds: Reduce image size by discarding build tools. Example for a Go app:

```dockerfile
# Stage 1: Build the binary
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .

# Stage 2: Run with a tiny image
FROM alpine:3.18
COPY --from=builder /app/myapp .
CMD ["./myapp"]
```

- Minimize layers: Combine `RUN` commands with `&&` to reduce layers:

```dockerfile
# Bad: 3 layers
RUN apt update
RUN apt install -y curl
RUN rm -rf /var/lib/apt/lists/*

# Good: 1 layer
RUN apt update && apt install -y curl && rm -rf /var/lib/apt/lists/*
```

- Use `.dockerignore`: Exclude unnecessary files (e.g., `node_modules`, `.git`) from the build context:

```
# .dockerignore file
node_modules
.git
.env
```
Resource Limits
Prevent containers from hogging resources with --memory and --cpus:
```bash
docker run -d --name limited-nginx --memory 512m --cpus 0.5 nginx
```

This limits the container to 512 MB of RAM and half a CPU core.
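If you run services with Compose, comparable limits can be declared per service. A sketch (these keys work with recent `docker compose`; exact support varies by Compose version):

```yaml
services:
  web:
    image: nginx
    mem_limit: 512m   # cap memory at 512 MB
    cpus: 0.5         # cap at half a CPU core
```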
7. Troubleshooting Common Issues
- “permission denied” when running Docker: Add your user to the `docker` group (see the Post-Installation section above).
- Container exits immediately: Ensure your `CMD` runs a foreground process (e.g., `nginx -g 'daemon off;'`).