Intermediate · 15 min read · 3,206 words

Docker for Developers: Dockerfile, docker-compose, and Multi-Stage Builds

Introduction

Docker transforms development by guaranteeing that your application runs identically everywhere. This guide covers the essential practices for containerizing your applications effectively.

1. Core Concepts

Docker Architecture

┌─────────────────────────────────────┐
│   Application (Your Code)           │
├─────────────────────────────────────┤
│   Container Runtime (Docker)        │
├─────────────────────────────────────┤
│   Host OS (Linux/Windows/Mac)       │
├─────────────────────────────────────┤
│   Infrastructure (Physical/Cloud)   │
└─────────────────────────────────────┘

Installing Docker

# Ubuntu/Debian
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

# Post-installation configuration
sudo systemctl enable docker
sudo systemctl start docker

# Verify
docker --version
docker run hello-world

# macOS/Windows
# Download Docker Desktop from:
# https://www.docker.com/products/docker-desktop

# Verify the installation
docker info
docker compose version

Docker Daemon Configuration

// /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "default-address-pools": [
    {
      "base": "172.17.0.0/16",
      "size": 24
    }
  ],
  "dns": ["8.8.8.8", "8.8.4.4"],
  "insecure-registries": [],
  "registry-mirrors": [],
  "features": {
    "buildkit": true
  }
}

# Reload the configuration
sudo systemctl daemon-reload
sudo systemctl restart docker
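Since a malformed daemon.json prevents the daemon from coming back up after the restart above, it is worth syntax-checking the file first. A minimal sketch, assuming python3 is available on the host; the temp-file demo stands in for /etc/docker/daemon.json:

```shell
# Hypothetical helper: syntax-check a daemon.json candidate before restarting Docker.
validate_daemon_config() {
    # json.tool exits non-zero on a parse error
    if python3 -m json.tool "$1" > /dev/null 2>&1; then
        echo "valid"
    else
        echo "invalid"
    fi
}

# Demo on a temp file; on a real host, point it at /etc/docker/daemon.json
printf '{"log-driver": "json-file", "log-opts": {"max-size": "10m"}}' > /tmp/daemon.json.candidate
validate_daemon_config /tmp/daemon.json.candidate   # prints: valid
```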

2. Dockerfile: From Basic to Advanced

Simple Dockerfile (Node.js)

# Bad approach (DO NOT DO THIS)
FROM node:18
WORKDIR /app
COPY . .
RUN npm install
CMD ["npm", "start"]

# Problems:
# - Huge image (~1GB)
# - Installs dev dependencies
# - Inefficient build cache
# - Runs as root
# - No healthcheck

Optimized Dockerfile (Node.js)

# Better approach
FROM node:18-alpine AS base

# Metadata
LABEL maintainer="dev@example.com"
LABEL version="1.0"
LABEL description="Production-ready Node.js application"

# Install security updates
RUN apk update && \
    apk upgrade && \
    apk add --no-cache dumb-init && \
    rm -rf /var/cache/apk/*

# Create app directory
WORKDIR /app

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Copy dependency files first (better cache)
COPY --chown=nodejs:nodejs package*.json ./

# Install dependencies
RUN npm ci --only=production && \
    npm cache clean --force

# Copy application code
COPY --chown=nodejs:nodejs . .

# Switch to non-root user
USER nodejs

# Expose port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node healthcheck.js || exit 1

# Use dumb-init to handle signals properly
ENTRYPOINT ["dumb-init", "--"]

# Start application
CMD ["node", "server.js"]

Multi-Stage Build (Complete Application)

# Stage 1: Dependencies
FROM node:18-alpine AS deps

WORKDIR /app

COPY package*.json ./

RUN npm ci --only=production && \
    npm cache clean --force

# Stage 2: Build
FROM node:18-alpine AS builder

WORKDIR /app

# Copy dependencies from deps stage
COPY --from=deps /app/node_modules ./node_modules

# Copy source code
COPY . .

# Build application
RUN npm run build && \
    npm prune --production

# Stage 3: Test (optional, can be skipped in prod builds)
FROM builder AS test

# Install dev dependencies for testing
RUN npm install --only=development

# Run tests
RUN npm run test && \
    npm run lint

# Stage 4: Production
FROM node:18-alpine AS production

WORKDIR /app

# Install only runtime dependencies
RUN apk add --no-cache dumb-init

# Create non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Copy built application from builder
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./

# Create necessary directories
RUN mkdir -p /app/logs && \
    chown -R nodejs:nodejs /app

USER nodejs

EXPOSE 3000

ENV NODE_ENV=production \
    PORT=3000

HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget --no-verbose --tries=1 --spider http://localhost:3000/health || exit 1

ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Dockerfiles for Different Languages

Python (Flask/Django)

# Multi-stage Python application
FROM python:3.11-slim AS base

# Prevent Python from writing .pyc files
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

# Install system dependencies
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
    gcc \
    libpq-dev && \
    rm -rf /var/lib/apt/lists/*

# Stage: Dependencies
FROM base AS deps

COPY requirements.txt .

RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# Stage: Production
FROM python:3.11-slim AS production

WORKDIR /app

# Copy installed packages
COPY --from=deps /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=deps /usr/local/bin /usr/local/bin

# Create non-root user
RUN useradd -m -u 1001 appuser && \
    chown -R appuser:appuser /app

# Copy application
COPY --chown=appuser:appuser . .

USER appuser

EXPOSE 8000

# Assumes the `requests` package is listed in requirements.txt
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import requests; requests.get('http://localhost:8000/health').raise_for_status()" || exit 1

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]

Go Application

# Stage 1: Build
FROM golang:1.21-alpine AS builder

WORKDIR /app

# Copy go mod files
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy source code
COPY . .

# Build binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .

# Stage 2: Production
FROM alpine:latest

RUN apk --no-cache add ca-certificates

WORKDIR /root/

# Copy binary from builder
COPY --from=builder /app/main .

# Expose port
EXPOSE 8080

# Run
CMD ["./main"]
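Because the binary above is built with CGO_ENABLED=0, it is fully static, so the runtime stage can shrink further with `scratch`. A sketch, not a drop-in replacement: there is no shell for debugging, and CA certificates must be copied in if the app makes outbound TLS calls:

```dockerfile
# Sketch: scratch-based runtime stage for the static Go binary built above
FROM scratch

# CA certificates for outbound TLS (assumes the builder image ships them)
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

COPY --from=builder /app/main /main

EXPOSE 8080

ENTRYPOINT ["/main"]
```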

Java (Spring Boot)

# Stage 1: Build
FROM maven:3.9-eclipse-temurin-17 AS builder

WORKDIR /app

# Copy pom.xml first for better caching
COPY pom.xml .

# Download dependencies
RUN mvn dependency:go-offline

# Copy source and build
COPY src ./src

RUN mvn package -DskipTests

# Stage 2: Production
FROM eclipse-temurin:17-jre-alpine

WORKDIR /app

# Create non-root user
RUN addgroup -g 1001 spring && \
    adduser -S spring -u 1001 -G spring

# Copy jar from builder
COPY --from=builder /app/target/*.jar app.jar

# Change ownership
RUN chown spring:spring app.jar

USER spring

EXPOSE 8080

HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget --quiet --tries=1 --spider http://localhost:8080/actuator/health || exit 1

ENTRYPOINT ["java", "-jar", "app.jar"]

3. Docker Compose: Local Orchestration

Complete Web Application (MERN Stack)

# docker-compose.yml
version: '3.8'

# Custom networks
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge

# Persistent volumes
volumes:
  mongo-data:
    driver: local
  redis-data:
    driver: local
  nginx-logs:
    driver: local
  prometheus-data:
    driver: local
  grafana-data:
    driver: local

services:
  # MongoDB Database
  mongodb:
    image: mongo:7
    container_name: app-mongodb
    restart: unless-stopped
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USER:-admin}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD:-secret}
      MONGO_INITDB_DATABASE: ${MONGO_DATABASE:-appdb}
    volumes:
      - mongo-data:/data/db
      - ./mongo-init:/docker-entrypoint-initdb.d:ro
    networks:
      - backend
    ports:
      - "27017:27017"
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
      interval: 10s
      timeout: 5s
      retries: 5
    command: --quiet --logpath /dev/null

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: app-redis
    restart: unless-stopped
    volumes:
      - redis-data:/data
      - ./redis.conf:/usr/local/etc/redis/redis.conf:ro
    networks:
      - backend
    ports:
      - "6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 3s
      retries: 5
    command: redis-server /usr/local/etc/redis/redis.conf

  # Backend API (Node.js)
  api:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: production
      args:
        NODE_ENV: development
    container_name: app-api
    restart: unless-stopped
    depends_on:
      mongodb:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      NODE_ENV: ${NODE_ENV:-development}
      PORT: 5000
      MONGODB_URI: mongodb://${MONGO_ROOT_USER:-admin}:${MONGO_ROOT_PASSWORD:-secret}@mongodb:27017/${MONGO_DATABASE:-appdb}?authSource=admin
      REDIS_URL: redis://redis:6379
      JWT_SECRET: ${JWT_SECRET:-your-secret-key}
      LOG_LEVEL: ${LOG_LEVEL:-info}
    volumes:
      - ./backend:/app
      - /app/node_modules
      - ./logs:/app/logs
    networks:
      - frontend
      - backend
    ports:
      - "5000:5000"
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # Frontend (React)
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: development
      args:
        REACT_APP_API_URL: http://localhost:5000/api
    container_name: app-frontend
    restart: unless-stopped
    depends_on:
      - api
    environment:
      REACT_APP_API_URL: ${API_URL:-http://localhost:5000/api}
      CHOKIDAR_USEPOLLING: "true"
    volumes:
      - ./frontend:/app
      - /app/node_modules
    networks:
      - frontend
    ports:
      - "3000:3000"
    stdin_open: true
    tty: true

  # Nginx Reverse Proxy
  nginx:
    image: nginx:alpine
    container_name: app-nginx
    restart: unless-stopped
    depends_on:
      - api
      - frontend
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - nginx-logs:/var/log/nginx
    networks:
      - frontend
    ports:
      - "80:80"
      - "443:443"
    healthcheck:
      test: ["CMD", "wget", "--quiet", "--tries=1", "--spider", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3

  # Worker for background jobs
  worker:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: production
    container_name: app-worker
    restart: unless-stopped
    depends_on:
      mongodb:
        condition: service_healthy
      redis:
        condition: service_healthy
    environment:
      NODE_ENV: ${NODE_ENV:-development}
      MONGODB_URI: mongodb://${MONGO_ROOT_USER:-admin}:${MONGO_ROOT_PASSWORD:-secret}@mongodb:27017/${MONGO_DATABASE:-appdb}?authSource=admin
      REDIS_URL: redis://redis:6379
    volumes:
      - ./backend:/app
      - /app/node_modules
    networks:
      - backend
    command: node workers/queue-processor.js

  # Monitoring - Prometheus
  prometheus:
    image: prom/prometheus:latest
    container_name: app-prometheus
    restart: unless-stopped
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus-data:/prometheus
    networks:
      - backend
    ports:
      - "9090:9090"
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'

  # Monitoring - Grafana
  grafana:
    image: grafana/grafana:latest
    container_name: app-grafana
    restart: unless-stopped
    depends_on:
      - prometheus
    environment:
      GF_SECURITY_ADMIN_USER: ${GRAFANA_USER:-admin}
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD:-admin}
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    networks:
      - backend
      - frontend
    ports:
      - "3001:3000"


.env File

# .env
# Application
NODE_ENV=development
LOG_LEVEL=debug

# Database
MONGO_ROOT_USER=admin
MONGO_ROOT_PASSWORD=supersecretpassword
MONGO_DATABASE=appdb

# API
JWT_SECRET=your-jwt-secret-key-change-in-production
API_URL=http://localhost:5000/api

# Monitoring
GRAFANA_USER=admin
GRAFANA_PASSWORD=admin123
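Compose's `${VAR:-default}` substitution, used throughout the docker-compose.yml above, follows shell parameter-expansion semantics. A quick illustration of how the defaults behave:

```shell
# Same ${VAR:-default} semantics Compose applies when a variable is unset
unset MONGO_ROOT_USER
echo "user=${MONGO_ROOT_USER:-admin}"      # prints: user=admin

MONGO_ROOT_USER=alice
echo "user=${MONGO_ROOT_USER:-admin}"      # prints: user=alice
```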

Nginx Configuration

# nginx/conf.d/default.conf
upstream frontend {
    server frontend:3000;
}

upstream api {
    server api:5000;
}

# Rate limiting
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

server {
    listen 80;
    server_name localhost;

    client_max_body_size 20M;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;

    # Compression
    gzip on;
    gzip_vary on;
    gzip_types text/plain text/css text/xml text/javascript application/json application/javascript application/xml+rss;

    # Frontend
    location / {
        proxy_pass http://frontend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # API with rate limiting
    location /api {
        limit_req zone=api_limit burst=20 nodelay;
        limit_conn conn_limit 10;

        proxy_pass http://api;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeout settings
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # Health check endpoint
    location /health {
        access_log off;
        add_header Content-Type text/plain;
        return 200 "healthy\n";
    }
}

Essential Docker Compose Commands

# Start all services
docker compose up -d

# Start with a rebuild
docker compose up -d --build

# Follow the logs
docker compose logs -f

# Logs for a specific service
docker compose logs -f api

# Stop all services
docker compose down

# Stop services and remove the volumes
docker compose down -v

# Restart a service
docker compose restart api

# Run a command inside a service
docker compose exec api npm test

# Scale a service
docker compose up -d --scale worker=3

# Show service status
docker compose ps

# Show resource usage
docker compose stats

# Validate the docker-compose file
docker compose config

# Pull the latest images
docker compose pull

# Build without cache
docker compose build --no-cache

# Create and start only specific services
docker compose up -d mongodb redis api

4. Advanced Optimizations

A Complete .dockerignore

# .dockerignore
# Dependencies
node_modules/
npm-debug.log
yarn-error.log
# (keep package-lock.json in the context: npm ci needs it)

# Testing
coverage/
.nyc_output/
test-results/

# Build
dist/
build/
*.log

# IDE
.vscode/
.idea/
*.swp
*.swo
.DS_Store

# Git
.git/
.gitignore
.gitattributes

# CI/CD
.github/
.gitlab-ci.yml
.travis.yml

# Docker
Dockerfile
docker-compose*.yml
.dockerignore

# Documentation
README.md
CHANGELOG.md
LICENSE
docs/

# Environment
.env
.env.*
!.env.example

# OS
Thumbs.db

BuildKit and Cache Optimization

# syntax=docker/dockerfile:1.4

FROM node:18-alpine AS base

# Enable BuildKit cache mounts
WORKDIR /app

# Stage: Dependencies with cache mounts
FROM base AS deps

RUN --mount=type=cache,target=/root/.npm \
    --mount=type=bind,source=package.json,target=package.json \
    --mount=type=bind,source=package-lock.json,target=package-lock.json \
    npm ci --only=production

# Stage: Build with cache mounts
FROM base AS builder

COPY --from=deps /app/node_modules ./node_modules

COPY . .

RUN --mount=type=cache,target=/root/.npm \
    npm run build

# Stage: Minimal production
FROM base AS production

COPY --from=builder /app/dist ./dist
COPY --from=deps /app/node_modules ./node_modules

CMD ["node", "dist/server.js"]

# Build with BuildKit
DOCKER_BUILDKIT=1 docker build -t myapp:latest .

# Build with a registry cache
docker buildx build \
  --cache-from=type=registry,ref=myregistry.com/myapp:cache \
  --cache-to=type=registry,ref=myregistry.com/myapp:cache,mode=max \
  -t myapp:latest \
  --push \
  .

Multi-Architecture Builds

# Set up buildx
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap

# Build for multiple architectures
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  -t myapp:latest \
  --push \
  .

5. Production Patterns

Docker Compose for Production

# docker-compose.prod.yml
version: '3.8'

services:
  api:
    image: myregistry.com/myapp:${VERSION:-latest}
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      resources:
        limits:
          cpus: '1'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    healthcheck:
      test: ["CMD", "wget", "--spider", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

Healthcheck Script

// healthcheck.js
const http = require('http');

const options = {
  host: 'localhost',
  port: process.env.PORT || 3000,
  path: '/health',
  timeout: 2000
};

const request = http.request(options, (res) => {
  console.log(`STATUS: ${res.statusCode}`);
  if (res.statusCode === 200) {
    process.exit(0);
  } else {
    process.exit(1);
  }
});

request.on('error', (err) => {
  console.error('ERROR:', err);
  process.exit(1);
});

request.end();

Security Best Practices

# Dockerfile with hardened security
FROM node:18-alpine AS base

# Scan for vulnerabilities during the build
RUN apk add --no-cache \
    curl \
    && curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

# Scan stage
FROM base AS scan

COPY package*.json ./

RUN trivy fs --severity HIGH,CRITICAL --exit-code 1 .

# Production stage
FROM node:18-alpine AS production

WORKDIR /app

# Install only the required packages
RUN apk add --no-cache dumb-init

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

# Do not bake secrets into the image
ARG BUILD_DATE
ARG VCS_REF

LABEL org.opencontainers.image.created=$BUILD_DATE \
      org.opencontainers.image.revision=$VCS_REF

# Copy only what is needed
COPY --chown=nodejs:nodejs dist/ ./dist/
COPY --chown=nodejs:nodejs node_modules/ ./node_modules/

# Read-only filesystem except /tmp
RUN chmod -R 555 /app && \
    mkdir /tmp/app && \
    chown nodejs:nodejs /tmp/app

USER nodejs

# Use ENTRYPOINT for the init system
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

6. Debugging and Troubleshooting

Debug Dockerfile Build

# Build with verbose output
docker build --progress=plain --no-cache -t myapp .

# Build up to a specific stage
docker build --target builder -t myapp:builder .

# Inspect the layers
docker history myapp:latest

# Show image details
docker inspect myapp:latest

# Get a shell in a failing image
docker run -it --entrypoint /bin/sh myapp:latest

# Copy files out of a container
docker cp container_id:/app/logs ./local-logs

Debug Container Runtime

# View the logs
docker logs -f container_name

# Logs with timestamps
docker logs -f --timestamps container_name

# Exec into a running container
docker exec -it container_name sh

# Show the processes
docker top container_name

# Live resource stats
docker stats container_name

# Inspect the network
docker network inspect bridge

# Watch events
docker events

# Debug with strace
docker run --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
  -it myapp strace -f node server.js

Common Problems

# 1. Container exits immediately
# Problem: invalid CMD or startup error
# Debug:
docker logs container_name
docker run -it --entrypoint /bin/sh myapp

# 2. Slow builds
# Solution: order COPY instructions for caching
# Check:
docker build --progress=plain .

# 3. Image too large
# Analyze:
docker history myapp:latest
dive myapp:latest  # Install: https://github.com/wagoodman/dive

# 4. Network issues between containers
# Debug:
docker network ls
docker network inspect mynetwork
docker exec container1 ping container2
docker exec container1 nslookup container2

# 5. Permission denied
# Fix (in the Dockerfile):
RUN chown -R nodejs:nodejs /app
USER nodejs

# 6. Out of disk space
docker system df
docker system prune -a --volumes

7. Performance Tuning

Monitoring Docker

# docker-compose.monitoring.yml
version: '3.8'

services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    privileged: true
    devices:
      - /dev/kmsg:/dev/kmsg
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /cgroup:/cgroup:ro
    ports:
      - "8080:8080"

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"

Resource Limits Best Practices

services:
  api:
    # Soft limits (reservations) and hard caps
    deploy:
      resources:
        reservations:
          cpus: '0.5'
          memory: 256M
        limits:
          cpus: '2'
          memory: 1G

    # Runtime-level memory limits (outside Swarm)
    mem_limit: 1g
    mem_reservation: 512m
    cpus: 2

    # OOM killer priority
    oom_score_adj: -500

    # ulimits
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
      nproc: 65535

8. CI/CD Integration

GitHub Actions with Docker

# .github/workflows/docker.yml
name: Docker Build and Push

on:
  push:
    branches: [main]
    tags: ['v*']

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to registry
        uses: docker/login-action@v2
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Scan image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          format: 'sarif'
          output: 'trivy-results.sarif'
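The Trivy step above writes trivy-results.sarif, but nothing consumes it. A sketch of a follow-up step that publishes the results to the repository's Security tab using GitHub's `upload-sarif` action (note it also requires the `security-events: write` permission on the job):

```yaml
      - name: Upload scan results to GitHub Security
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: 'trivy-results.sarif'
```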

9. Utility Scripts

Makefile for Docker

# Makefile
.PHONY: help build up down logs clean

COMPOSE=docker compose
COMPOSE_PROD=docker compose -f docker-compose.yml -f docker-compose.prod.yml

help: ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'

build: ## Build the images
	$(COMPOSE) build

up: ## Start in dev mode
	$(COMPOSE) up -d

down: ## Stop the services
	$(COMPOSE) down

logs: ## Follow the logs
	$(COMPOSE) logs -f

restart: ## Restart
	$(COMPOSE) restart

clean: ## Clean containers, images, volumes
	$(COMPOSE) down -v --rmi local
	docker system prune -f

prod: ## Start in production mode
	$(COMPOSE_PROD) up -d

test: ## Run the tests
	$(COMPOSE) run --rm api npm test

shell: ## Shell into the api container
	$(COMPOSE) exec api sh

db-shell: ## MongoDB shell
	$(COMPOSE) exec mongodb mongosh -u admin -p secret

backup: ## Back up MongoDB
	mkdir -p backups
	$(COMPOSE) exec -T mongodb mongodump --archive > backups/dump_$$(date +%Y%m%d_%H%M%S).archive

restore: ## Restore MongoDB (usage: make restore FILE=backups/dump.archive)
	$(COMPOSE) exec -T mongodb mongorestore --archive < $(FILE)

Deployment Script

#!/bin/bash
# deploy.sh

set -e

VERSION=${1:-latest}
ENVIRONMENT=${2:-staging}

echo "Deploying version $VERSION to $ENVIRONMENT..."

# Pull latest images
docker compose -f docker-compose.${ENVIRONMENT}.yml pull

# Backup database
echo "Creating backup..."
docker compose exec -T mongodb mongodump --archive > backup_$(date +%Y%m%d_%H%M%S).archive

# Deploy with zero-downtime
echo "Starting new containers..."
docker compose -f docker-compose.${ENVIRONMENT}.yml up -d --no-deps --scale api=2 api

# Wait for health check
echo "Waiting for health check..."
sleep 10

# Health check
if curl -f http://localhost:5000/health; then
    echo "Health check passed"

    # Scale back down to a single (new) replica
    docker compose -f docker-compose.${ENVIRONMENT}.yml up -d --no-deps --scale api=1 api

    echo "Deployment successful!"
else
    echo "Health check failed, rolling back..."
    # Drop back to one replica, then redeploy the previous image tag
    docker compose -f docker-compose.${ENVIRONMENT}.yml up -d --no-deps --scale api=1 api
    exit 1
fi

# Cleanup old images
docker image prune -f
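The fixed `sleep 10` in the script above is fragile: too short and the check fails spuriously, too long and every deploy stalls. A polling helper is more robust. A sketch; the URL and the 30-attempt budget are assumptions to tune per service:

```shell
# Hypothetical helper: poll a health endpoint until it responds or the budget runs out.
wait_for_health() {
    local url=$1
    local attempts=${2:-30}   # ~30 s budget at one probe per second
    local i=1
    while [ "$i" -le "$attempts" ]; do
        if curl -fsS "$url" > /dev/null 2>&1; then
            echo "healthy"
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    echo "unhealthy"
    return 1
}

# Usage in deploy.sh, replacing the fixed sleep:
#   wait_for_health http://localhost:5000/health || { echo "rolling back"; exit 1; }
```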

Conclusion

Docker transforms modern development by providing:

  • Portability: runs identically everywhere
  • Isolation: no dependency conflicts
  • Reproducibility: deterministic builds
  • Efficiency: optimal resource usage
  • Scalability: a smooth path to production

Production checklist:

  • [ ] Multi-stage builds for minimal images
  • [ ] Non-root user in containers
  • [ ] Health checks configured
  • [ ] Resource limits defined
  • [ ] Centralized logging
  • [ ] Security scanning integrated
  • [ ] Automated backups
  • [ ] Monitoring in place

With these practices, Docker becomes a powerful tool for both development and production.

Questions or feedback?

This article is a living document: corrections, counter-arguments, and production feedback are all welcome. Three channels follow; pick whichever suits you.