Complete Docker Guide: Containerization for Modern Software Development
Docker has revolutionized software development and deployment by introducing lightweight containerization that packages applications with their dependencies into portable, consistent environments. Unlike traditional virtual machines, Docker containers share the host operating system kernel, resulting in minimal overhead and near-instant startup times. This efficiency has made Docker the industry standard for microservices architecture, continuous integration pipelines, and cloud-native development.
The containerization paradigm solves the perennial “works on my machine” problem by ensuring identical execution environments from development through production. Developers can define exact dependencies, configurations, and system requirements in a Dockerfile, then build images that run consistently across any Docker-enabled system. This consistency accelerates development cycles, simplifies deployment, and makes horizontal scaling straightforward.
Core Docker Concepts
Understanding Docker’s architecture requires familiarity with its fundamental concepts. Images serve as read-only templates containing application code, runtime, libraries, and configuration. Containers are running instances of images, isolated from other containers and the host system. Registries store and distribute images, with Docker Hub serving as the primary public registry.
The Docker daemon manages containers on a host system, responding to commands from the Docker CLI or API. Volumes provide persistent storage that survives container restarts and deletion. Networks enable communication between containers and external systems, with multiple network drivers supporting different isolation and connectivity requirements.
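A minimal sketch tying these concepts together, assuming Docker is installed and using the public nginx image from Docker Hub:
# Pull an image (the template), run a container (a running instance), and attach a volume
docker pull nginx:alpine
docker volume create web_data
docker run -d --name demo -p 8080:80 -v web_data:/usr/share/nginx/html nginx:alpine
docker ps # the running container appears here
docker stop demo && docker rm demo
docker volume ls # web_data persists after the container is removed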
Installing Docker
Linux Installation
# Ubuntu/Debian - Docker Engine installation
sudo apt update
sudo apt install ca-certificates curl gnupg
# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add repository
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Run without sudo
sudo usermod -aG docker $USER
newgrp docker
# Verify installation
docker --version
docker run hello-world
# Fedora
sudo dnf install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
# Start Docker service
sudo systemctl start docker
sudo systemctl enable docker
# Arch Linux
sudo pacman -S docker
sudo systemctl start docker
sudo systemctl enable docker
macOS Installation
# Download Docker Desktop from docker.com
# Or install via Homebrew
brew install --cask docker
# Start Docker Desktop application
# Verify in terminal
docker --version
docker run hello-world
Windows Installation
# Install Docker Desktop from docker.com
# Enable WSL 2 backend (recommended)
# Or via Winget
winget install Docker.DockerDesktop
# Verify in PowerShell
docker --version
docker run hello-world
Essential Docker Commands
Mastering Docker’s command-line interface enables efficient container management.
# Image commands
docker images # List images
docker pull nginx # Download image
docker pull nginx:alpine # Specific tag
docker build -t myapp:latest . # Build image
docker tag myapp:latest user/myapp # Tag image
docker push user/myapp # Push to registry
docker rmi image-name # Remove image
docker image prune # Remove unused images
# Container commands
docker run nginx # Run container
docker run -d nginx # Run detached
docker run -p 8080:80 nginx # Port mapping
docker run --name web nginx # Named container
docker run -v /host:/container nginx # Volume mount
docker run -e VAR=value nginx # Environment variable
docker run --rm nginx # Remove on exit
docker run -it ubuntu bash # Interactive terminal
# Container management
docker ps # Running containers
docker ps -a # All containers
docker stop container-id # Stop container
docker start container-id # Start container
docker restart container-id # Restart container
docker rm container-id # Remove container
docker rm -f container-id # Force remove
docker container prune # Remove stopped containers
# Container inspection
docker logs container-id # View logs
docker logs -f container-id # Follow logs
docker exec -it container-id bash # Execute command
docker inspect container-id # Detailed info
docker stats # Resource usage
docker top container-id # Running processes
# System commands
docker system df # Disk usage
docker system prune # Remove stopped containers, unused networks, dangling images, build cache
docker system prune -a # Include unused images
docker info # System information
Writing Dockerfiles
Dockerfiles define how images are built, specifying base images, dependencies, and configuration.
# Basic Dockerfile example
# Python application
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Copy requirements first (caching optimization)
COPY requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code
COPY . .
# Expose port
EXPOSE 8000
# Run command
CMD ["python", "app.py"]
# Node.js application
FROM node:20-alpine
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install production dependencies only
# (assumes the build step below needs no devDependencies; otherwise use the multi-stage build shown next)
RUN npm ci --omit=dev
# Copy source code
COPY . .
# Build application
RUN npm run build
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Multi-stage build (smaller final image)
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
FROM node:20-alpine
WORKDIR /app
# Copy built output and dependencies from the builder stage
# (node_modules still includes devDependencies here; prune or reinstall with --omit=dev for a slimmer image)
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
EXPOSE 3000
CMD ["node", "dist/index.js"]
# Best practices
# - Use specific image tags
# - Minimize layers
# - Order commands by change frequency
# - Use .dockerignore (example below)
# - Run as non-root user
# Example with non-root user
FROM node:20-alpine
RUN addgroup -S app && adduser -S app -G app
WORKDIR /app
COPY --chown=app:app . .
USER app
CMD ["node", "index.js"]
Docker Compose
Docker Compose manages multi-container applications through declarative YAML configuration.
# docker-compose.yml
version: '3.8'

services:
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
    depends_on:
      - db
      - redis
    volumes:
      - ./src:/app/src
    restart: unless-stopped

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web

volumes:
  postgres_data:
  redis_data:

networks:
  default:
    driver: bridge
# Docker Compose commands
docker compose up # Start services
docker compose up -d # Start detached
docker compose up --build # Rebuild images
docker compose down # Stop services
docker compose down -v # Remove volumes too
docker compose logs # View logs
docker compose logs -f web # Follow specific service
docker compose ps # List services
docker compose exec web bash # Execute in service
docker compose pull # Pull latest images
docker compose restart # Restart services
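Because the db service above defines a healthcheck, depends_on can optionally be written in its long form so that web waits until the database reports healthy rather than merely started. A minimal sketch of that variation:
# docker-compose.yml (excerpt)
services:
  web:
    build: .
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started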
Volume Management
Volumes provide persistent storage that outlives containers.
# Create named volume
docker volume create mydata
# List volumes
docker volume ls
# Inspect volume
docker volume inspect mydata
# Remove volume
docker volume rm mydata
# Remove unused volumes
docker volume prune
# Use volume in container
docker run -v mydata:/app/data nginx
docker run -v $(pwd)/local:/app/data nginx # Bind mount
# Volume in compose
volumes:
  - mydata:/app/data # Named volume
  - ./local:/app/data # Bind mount
  - ./config.json:/app/config.json:ro # Read-only
# Backup volume
docker run --rm -v mydata:/data -v $(pwd):/backup alpine tar cvf /backup/backup.tar /data
# Restore volume
docker run --rm -v mydata:/data -v $(pwd):/backup alpine tar xvf /backup/backup.tar -C /
Networking
Docker networking enables container communication and isolation.
# List networks
docker network ls
# Create network
docker network create mynetwork
docker network create --driver bridge mynetwork
# Inspect network
docker network inspect mynetwork
# Connect container to network
docker network connect mynetwork container-id
docker network disconnect mynetwork container-id
# Run container on network
docker run --network mynetwork nginx
# Network drivers:
# bridge - Default, isolated network
# host - Share host network
# none - No networking
# overlay - Multi-host networking (Swarm)
# Container DNS
# Containers on same network can reach each other by name
docker run --network mynetwork --name web nginx
docker run --network mynetwork alpine ping web
# Expose specific ports
docker run -p 80:80 nginx # Map 80 to 80
docker run -p 8080:80 nginx # Map 8080 to 80
docker run -p 127.0.0.1:80:80 nginx # Localhost only
docker run -P nginx # Random ports
Docker Registry
Registries store and distribute Docker images.
# Docker Hub
docker login
docker tag myapp username/myapp:latest
docker push username/myapp:latest
docker pull username/myapp:latest
# Private registry
docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp localhost:5000/myapp
docker push localhost:5000/myapp
# AWS ECR
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ACCOUNT.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/myapp
docker push ACCOUNT.dkr.ecr.us-east-1.amazonaws.com/myapp
# Google Container Registry
gcloud auth configure-docker
docker tag myapp gcr.io/project/myapp
docker push gcr.io/project/myapp
# GitHub Container Registry
echo $TOKEN | docker login ghcr.io -u USERNAME --password-stdin
docker tag myapp ghcr.io/username/myapp
docker push ghcr.io/username/myapp
Production Best Practices
Production deployments require attention to security, efficiency, and reliability.
# Security hardening
FROM alpine:3.18
RUN addgroup -S app && adduser -S -G app app
USER app
COPY --chown=app:app . /app
WORKDIR /app
# Resource limits
docker run --memory=512m --cpus=1 myapp
# Health checks
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
# Read-only filesystem
docker run --read-only myapp
# Drop capabilities
docker run --cap-drop=ALL myapp
# Security scanning
docker scout cves myimage # Docker Scout (current tooling)
docker scan myimage # Deprecated; removed from recent Docker releases
# Logging
docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 myapp
# Restart policies
docker run --restart=unless-stopped myapp
docker run --restart=on-failure:5 myapp
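Combining several of these options, a sketch of a hardened production run; the image name myapp is a placeholder:
docker run -d \
  --name myapp \
  --read-only \
  --cap-drop=ALL \
  --memory=512m --cpus=1 \
  --restart=unless-stopped \
  --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 \
  myapp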
Debugging Containers
Effective debugging techniques help diagnose container issues.
# View logs
docker logs container-id
docker logs --tail 100 container-id
docker logs --since 1h container-id
# Execute shell
docker exec -it container-id /bin/sh
docker exec -it container-id /bin/bash
# Copy files
docker cp container-id:/app/file.txt ./file.txt
docker cp ./file.txt container-id:/app/
# Inspect container
docker inspect container-id
docker inspect -f '{{.State.Status}}' container-id
docker inspect -f '{{.NetworkSettings.IPAddress}}' container-id
# Resource usage
docker stats
docker stats container-id
# Processes
docker top container-id
# Events
docker events
docker events --filter container=container-id
# Debug failed build
docker build --no-cache . # Rebuild without cached layers
docker build --progress=plain . # Show full build output
DOCKER_BUILDKIT=0 docker build . # Fall back to the legacy builder
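When a container exits immediately on startup, overriding the entrypoint provides an interactive shell inside the image for inspection; myimage below is a placeholder:
# Start a shell instead of the normal entrypoint
docker run -it --rm --entrypoint /bin/sh myimage
# Check the exit code of a stopped container
docker inspect -f '{{.State.ExitCode}}' container-id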
Conclusion
Docker has transformed software development by making environments reproducible, deployments consistent, and scaling straightforward. From local development to production orchestration, Docker’s containerization technology provides the foundation for modern application architecture. Understanding Docker’s core concepts and command-line tools enables developers to build, ship, and run applications with confidence across any environment.