Kubernetes: Deploying Containerized Applications

Estimated reading time: 12 minutes

Introduction

Kubernetes (K8s) is the de facto standard for container orchestration in production. This guide covers deploying real-world applications with high availability, scaling, and resilience.

1. Kubernetes Architecture

Cluster Components

┌─────────────────────────────────────────────────────┐
│                    Control Plane                    │
│  ┌────────────┐  ┌───────────┐  ┌───────────────┐   │
│  │ API Server │  │ Scheduler │  │ Controller Mgr│   │
│  └────────────┘  └───────────┘  └───────────────┘   │
│  ┌────────────┐                                     │
│  │    etcd    │                                     │
│  └────────────┘                                     │
└─────────────────────────────────────────────────────┘
                        │
    ┌───────────────────┼───────────────────┐
    │                   │                   │
┌───▼─────┐         ┌───▼─────┐         ┌───▼─────┐
│ Node 1  │         │ Node 2  │         │ Node 3  │
│┌───────┐│         │┌───────┐│         │┌───────┐│
││Kubelet││         ││Kubelet││         ││Kubelet││
│└───────┘│         │└───────┘│         │└───────┘│
│┌───────┐│         │┌───────┐│         │┌───────┐│
││ Pods  ││         ││ Pods  ││         ││ Pods  ││
│└───────┘│         │└───────┘│         │└───────┘│
└─────────┘         └─────────┘         └─────────┘

Local Kubernetes Installation

# Install with kind (Kubernetes in Docker)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Create a dev cluster
cat << EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
    - |
      kind: InitConfiguration
      nodeRegistration:
        kubeletExtraArgs:
          node-labels: "ingress-ready=true"
    extraPortMappings:
    - containerPort: 80
      hostPort: 80
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      protocol: TCP
  - role: worker
  - role: worker
EOF

kind create cluster --config kind-config.yaml --name dev

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/

# Verify the cluster
kubectl cluster-info
kubectl get nodes

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version

2. Complete Web Application

Application Architecture

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
    environment: prod

ConfigMaps and Secrets

# configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  # Application configuration
  NODE_ENV: "production"
  LOG_LEVEL: "info"
  PORT: "3000"

  # Redis configuration
  REDIS_HOST: "redis-service"
  REDIS_PORT: "6379"

  # MongoDB configuration
  MONGODB_HOST: "mongodb-service"
  MONGODB_PORT: "27017"
  MONGODB_DATABASE: "appdb"

  # Nginx config
  nginx.conf: |
    upstream backend {
      server api-service:3000;
    }

    server {
      listen 80;
      server_name _;

      location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
      }

      location /health {
        return 200 "healthy\n";
      }
    }
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: production
type: Opaque
stringData:
  # Database credentials
  MONGODB_USERNAME: "admin"
  MONGODB_PASSWORD: "SuperSecretPassword123!"

  # JWT secret
  JWT_SECRET: "your-super-secret-jwt-key-change-in-production"

  # API keys
  STRIPE_API_KEY: "sk_live_xxxxxxxxxxxxx"
  SENDGRID_API_KEY: "SG.xxxxxxxxxxxxx"
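
Note that stringData values end up merely base64-encoded in etcd, not encrypted; restrict who can read Secrets via RBAC, or manage them with a tool such as sealed-secrets or external-secrets. A minimal sketch to apply and sanity-check these objects, assuming the file names from the comments above:

kubectl apply -f namespace.yaml -f configmap.yaml

# Decode one key to confirm the Secret landed as expected
kubectl -n production get secret app-secrets \
  -o jsonpath='{.data.JWT_SECRET}' | base64 -d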

MongoDB StatefulSet

# mongodb-statefulset.yaml
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  namespace: production
  labels:
    app: mongodb
spec:
  ports:
  - port: 27017
    targetPort: 27017
    name: mongodb
  clusterIP: None
  selector:
    app: mongodb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
  namespace: production
spec:
  serviceName: mongodb-service
  replicas: 3
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      terminationGracePeriodSeconds: 10

      containers:
      - name: mongodb
        image: mongo:7
        ports:
        - containerPort: 27017
          name: mongodb

        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: MONGODB_USERNAME
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: MONGODB_PASSWORD

        volumeMounts:
        - name: mongodb-data
          mountPath: /data/db

        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 2
            memory: 4Gi

        livenessProbe:
          exec:
            command:
            - mongosh
            - --eval
            - "db.adminCommand('ping')"
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3

        readinessProbe:
          exec:
            command:
            - mongosh
            - --eval
            - "db.adminCommand('ping')"
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 3

  volumeClaimTemplates:
  - metadata:
      name: mongodb-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "standard"
      resources:
        requests:
          storage: 10Gi
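
Because the Service above is headless (clusterIP: None), each replica gets a stable DNS name of the form mongodb-<ordinal>.mongodb-service.production.svc.cluster.local. Note that as written the three replicas are independent mongod instances; a true replica set would also pass --replSet and run rs.initiate(). A minimal connectivity check against the first pod, assuming the manifests above:

kubectl -n production run -it --rm mongo-check --image=mongo:7 --restart=Never -- \
  mongosh "mongodb://mongodb-0.mongodb-service.production.svc.cluster.local:27017" \
  --eval "db.adminCommand('ping')"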

Redis Deployment

# redis-deployment.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: production
data:
  redis.conf: |
    maxmemory 2gb
    maxmemory-policy allkeys-lru
    save 900 1
    save 300 10
    save 60 10000
    appendonly yes
    appendfsync everysec
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: production
spec:
  selector:
    app: redis
  ports:
  - port: 6379
    targetPort: 6379
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:7-alpine
        ports:
        - containerPort: 6379

        command:
        - redis-server
        - /etc/redis/redis.conf

        volumeMounts:
        - name: redis-config
          mountPath: /etc/redis
        - name: redis-data
          mountPath: /data

        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 2Gi

        livenessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 15
          periodSeconds: 10

        readinessProbe:
          exec:
            command:
            - redis-cli
            - ping
          initialDelaySeconds: 5
          periodSeconds: 5

      volumes:
      - name: redis-config
        configMap:
          name: redis-config
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: production
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard
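
Once the Deployment is running, two one-liners confirm Redis is up and actually loaded the custom config (a sanity check, nothing more):

kubectl -n production exec deploy/redis -- redis-cli ping               # expect PONG
kubectl -n production exec deploy/redis -- redis-cli config get maxmemory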

API Deployment with HPA

# api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: production
  labels:
    app: api
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0

  selector:
    matchLabels:
      app: api

  template:
    metadata:
      labels:
        app: api
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "3000"
        prometheus.io/path: "/metrics"

    spec:
      # Anti-affinity to spread replicas across nodes
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - api
              topologyKey: kubernetes.io/hostname

      # Init container that waits for MongoDB
      initContainers:
      - name: wait-for-mongodb
        image: busybox:1.35
        command:
        - sh
        - -c
        - |
          until nc -z mongodb-service 27017; do
            echo "Waiting for MongoDB..."
            sleep 2
          done

      containers:
      - name: api
        image: myregistry.io/myapp-api:v1.2.0
        imagePullPolicy: Always

        ports:
        - containerPort: 3000
          name: http
          protocol: TCP

        env:
        # From ConfigMap
        - name: NODE_ENV
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: NODE_ENV
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        - name: PORT
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: PORT

        # From Secrets
        - name: JWT_SECRET
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: JWT_SECRET
        - name: MONGODB_USERNAME
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: MONGODB_USERNAME
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: MONGODB_PASSWORD

        # Constructed values
        - name: MONGODB_URI
          value: "mongodb://$(MONGODB_USERNAME):$(MONGODB_PASSWORD)@mongodb-service:27017/appdb?authSource=admin"
        - name: REDIS_URL
          value: "redis://redis-service:6379"

        resources:
          requests:
            cpu: 250m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1Gi

        livenessProbe:
          httpGet:
            path: /health
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
          successThreshold: 1

        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          failureThreshold: 2
          successThreshold: 1

        # Lifecycle hooks
        lifecycle:
          preStop:
            exec:
              command:
              - sh
              - -c
              - sleep 15

        # Security context
        securityContext:
          runAsNonRoot: true
          runAsUser: 1001
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - ALL

        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: cache
          mountPath: /app/.cache

      volumes:
      - name: tmp
        emptyDir: {}
      - name: cache
        emptyDir: {}

      # Graceful shutdown
      terminationGracePeriodSeconds: 30

      # Image pull secrets
      imagePullSecrets:
      - name: registry-credentials
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  selector:
    app: api
  ports:
  - port: 3000
    targetPort: 3000
    protocol: TCP
  type: ClusterIP
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
---
# Horizontal Pod Autoscaler
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 15
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 2
        periodSeconds: 15
      selectPolicy: Max
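
The resource metrics above require metrics-server in the cluster (kind does not ship it by default). To watch the autoscaler's decisions in real time:

kubectl -n production get hpa api-hpa --watch
kubectl -n production describe hpa api-hpa   # current metrics and scaling events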

Frontend Deployment

# frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: myregistry.io/myapp-frontend:v1.2.0
        ports:
        - containerPort: 80

        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
        - name: nginx-cache
          mountPath: /var/cache/nginx

        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi

        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 10

        readinessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5

      volumes:
      - name: nginx-config
        configMap:
          name: app-config
          items:
          - key: nginx.conf
            path: default.conf
      - name: nginx-cache
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: production
spec:
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP

Ingress with TLS

# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: production
  annotations:
    # NGINX Ingress Controller
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"

    # Rate limiting
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/limit-connections: "10"

    # Timeouts
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"

    # Body size
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"

    # CORS
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"

    # Security headers
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: SAMEORIGIN";
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "X-XSS-Protection: 1; mode=block";

    # Cert-manager for TLS
    cert-manager.io/cluster-issuer: "letsencrypt-prod"

spec:
  tls:
  - hosts:
    - example.com
    - www.example.com
    - api.example.com
    secretName: example-com-tls

  rules:
  # Frontend
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80

  # API
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 3000
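
Once the controller has reconciled the resource, a quick end-to-end check (this assumes DNS for the hosts above already points at the Ingress controller's external IP):

kubectl -n production get ingress app-ingress
curl -sI https://api.example.com/health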

Installing cert-manager

# Install cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.0/cert-manager.yaml

# ClusterIssuer for Let's Encrypt
cat << EOF | kubectl apply -f -
cat << EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
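
Issuance is asynchronous; when a certificate stays pending, the problem usually surfaces in one of these objects:

kubectl get pods -n cert-manager
kubectl -n production describe certificate example-com-tls
kubectl -n production get certificaterequests,orders,challenges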

3. Advanced Patterns

CronJob for Backups

# backup-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-backup
  namespace: production
spec:
  schedule: "0 2 * * *"  # Every day at 2:00 AM
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  concurrencyPolicy: Forbid

  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure

          containers:
          - name: backup
            image: mongo:7
            command:
            - /bin/bash
            - -c
            - |
              BACKUP_FILE="backup-$(date +%Y%m%d-%H%M%S).archive"

              mongodump \
                --host=mongodb-service \
                --username=$MONGODB_USERNAME \
                --password=$MONGODB_PASSWORD \
                --authenticationDatabase=admin \
                --archive=/backup/$BACKUP_FILE \
                --gzip

              # Upload to S3
              aws s3 cp /backup/$BACKUP_FILE s3://my-backups/mongodb/$BACKUP_FILE

              # Cleanup local
              rm /backup/$BACKUP_FILE

              # Cleanup old backups (keep 30 days)
              aws s3 ls s3://my-backups/mongodb/ |
                while read -r line; do
                  createDate=$(echo $line | awk '{print $1" "$2}')
                  createTimestamp=$(date -d "$createDate" +%s)
                  olderThan=$(date -d "30 days ago" +%s)
                  if [[ $createTimestamp -lt $olderThan ]]; then
                    fileName=$(echo $line | awk '{print $4}')
                    aws s3 rm s3://my-backups/mongodb/$fileName
                  fi
                done

            env:
            - name: MONGODB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: MONGODB_USERNAME
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: MONGODB_PASSWORD
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: access-key-id
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: aws-credentials
                  key: secret-access-key

            volumeMounts:
            - name: backup-storage
              mountPath: /backup

          volumes:
          - name: backup-storage
            emptyDir: {}
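
One caveat: the stock mongo:7 image does not include the AWS CLI, so in practice this container needs an image bundling both mongodump and aws. To exercise the backup path without waiting for the schedule, spawn a one-off Job from the CronJob:

kubectl -n production create job mongodb-backup-manual --from=cronjob/mongodb-backup
kubectl -n production logs job/mongodb-backup-manual -f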

Jobs for Migrations

# migration-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration-v1-2-0
  namespace: production
spec:
  ttlSecondsAfterFinished: 86400  # Cleanup after 24h
  backoffLimit: 3

  template:
    spec:
      restartPolicy: OnFailure

      initContainers:
      - name: wait-for-db
        image: busybox:1.35
        command:
        - sh
        - -c
        - until nc -z mongodb-service 27017; do sleep 2; done

      containers:
      - name: migrate
        image: myregistry.io/myapp-api:v1.2.0
        command:
        - npm
        - run
        - migrate:up

        env:
        - name: MONGODB_USERNAME
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: MONGODB_USERNAME
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: MONGODB_PASSWORD
        # $(VAR) expansion only sees variables defined earlier in this list
        - name: MONGODB_URI
          value: "mongodb://$(MONGODB_USERNAME):$(MONGODB_PASSWORD)@mongodb-service:27017/appdb?authSource=admin"
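
Run the migration as a gate in the deploy script; kubectl wait turns it into a blocking step:

kubectl apply -f migration-job.yaml
kubectl -n production wait --for=condition=complete job/db-migration-v1-2-0 --timeout=300s
kubectl -n production logs job/db-migration-v1-2-0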

Network Policies

# network-policies.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  - Egress

  ingress:
  # Allow from frontend
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 3000

  # Allow from ingress controller
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
    ports:
    - protocol: TCP
      port: 3000

  egress:
  # Allow to MongoDB
  - to:
    - podSelector:
        matchLabels:
          app: mongodb
    ports:
    - protocol: TCP
      port: 27017

  # Allow to Redis
  - to:
    - podSelector:
        matchLabels:
          app: redis
    ports:
    - protocol: TCP
      port: 6379

  # Allow DNS
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53

  # Allow external HTTPS
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: TCP
      port: 443
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mongodb-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: mongodb
  policyTypes:
  - Ingress

  ingress:
  # Only allow from API pods
  - from:
    - podSelector:
        matchLabels:
          app: api
    ports:
    - protocol: TCP
      port: 27017
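
Two caveats: NetworkPolicy is only enforced when the CNI supports it (Calico and Cilium do; kind's default kindnet does not), and the namespaceSelector entries above assume a name label has been added to those namespaces. A quick negative test, run from a pod matching no allowed selector, which should therefore report "blocked":

kubectl -n production run -it --rm np-test --image=busybox:1.35 --restart=Never -- \
  nc -z -w 2 mongodb-service 27017 && echo reachable || echo blocked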

Pod Disruption Budget

# pdb.yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
  namespace: production
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: api
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: mongodb-pdb
  namespace: production
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: mongodb
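
The ALLOWED DISRUPTIONS column below is what voluntary evictions (kubectl drain, cluster upgrades) actually consult:

kubectl -n production get pdb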

4. Monitoring with Prometheus

ServiceMonitor

# servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: api-metrics
  namespace: production
  labels:
    app: api
spec:
  selector:
    matchLabels:
      app: api
  endpoints:
  - port: http
    path: /metrics
    interval: 30s
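
ServiceMonitor is a CRD installed by the Prometheus Operator (e.g., via kube-prometheus-stack); it does nothing on a cluster without it. To confirm the application side is scrapable, assuming the API really serves /metrics as its annotations claim:

kubectl -n production port-forward svc/api-service 3000:3000 &
curl -s http://localhost:3000/metrics | head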

PrometheusRule for Alerts

# prometheus-rules.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: api-alerts
  namespace: production
spec:
  groups:
  - name: api
    interval: 30s
    rules:
    - alert: HighErrorRate
      expr: |
        sum(rate(http_requests_total{status=~"5.."}[5m])) /
        sum(rate(http_requests_total[5m])) > 0.05
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "High error rate detected"
        description: "Error rate is {{ $value | humanizePercentage }}"

    - alert: HighMemoryUsage
      expr: |
        container_memory_usage_bytes{pod=~"api-.*"} /
        container_spec_memory_limit_bytes{pod=~"api-.*"} > 0.9
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: "High memory usage on {{ $labels.pod }}"

    - alert: PodCrashLooping
      expr: |
        rate(kube_pod_container_status_restarts_total{namespace="production"}[15m]) > 0
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Pod {{ $labels.pod }} is crash looping"

5. Deployment and Operations

Kustomize for Multi-Env

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: production

resources:
  - namespace.yaml
  - configmap.yaml
  - secrets.yaml
  - mongodb-statefulset.yaml
  - redis-deployment.yaml
  - api-deployment.yaml
  - frontend-deployment.yaml
  - ingress.yaml
  - hpa.yaml
  - pdb.yaml

commonLabels:
  app.kubernetes.io/managed-by: kustomize

# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: staging

bases:
  - ../../base

namePrefix: staging-

replicas:
  - name: api
    count: 2
  - name: frontend
    count: 1

images:
  - name: myregistry.io/myapp-api
    newTag: staging-latest

patches:
  - target:
      kind: Deployment
      name: api
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests/cpu
        value: 100m

# Deploy with kustomize
kubectl apply -k overlays/staging/
kubectl apply -k overlays/production/

# Preview changes
kubectl diff -k overlays/production/

Helm Chart

# Chart.yaml
apiVersion: v2
name: myapp
description: Full-stack application
version: 1.2.0
appVersion: "1.2.0"

dependencies:
  - name: mongodb
    version: "13.x.x"
    repository: "https://charts.bitnami.com/bitnami"
  - name: redis
    version: "18.x.x"
    repository: "https://charts.bitnami.com/bitnami"
# values.yaml
replicaCount: 3

image:
  repository: myregistry.io/myapp-api
  tag: "1.2.0"
  pullPolicy: IfNotPresent

service:
  type: ClusterIP
  port: 3000

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
  - host: api.example.com
    paths:
    - path: /
      pathType: Prefix
  tls:
  - secretName: api-example-com-tls
    hosts:
    - api.example.com

resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    cpu: 1000m
    memory: 1Gi

autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

mongodb:
  enabled: true
  auth:
    rootPassword: "changeme"
    database: appdb

redis:
  enabled: true
  auth:
    enabled: false
    
# Install with Helm
helm install myapp ./myapp-chart \
  --namespace production \
  --create-namespace \
  --values values-production.yaml

# Upgrade
helm upgrade myapp ./myapp-chart \
  --namespace production \
  --values values-production.yaml

# Rollback to the previous revision
helm rollback myapp 0

# Status
helm status myapp -n production
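
Before an upgrade it helps to render the chart locally and eyeball the generated manifests; helm template does this without touching the cluster:

helm template myapp ./myapp-chart --values values-production.yaml | less
helm history myapp -n production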

Blue/Green Deployment

# blue-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-blue
  namespace: production
  labels:
    app: api
    version: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
      version: blue
  template:
    metadata:
      labels:
        app: api
        version: blue
    spec:
      containers:
      - name: api
        image: myregistry.io/myapp-api:v1.1.0
        # ... rest of spec
---
# green-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-green
  namespace: production
  labels:
    app: api
    version: green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
      version: green
  template:
    metadata:
      labels:
        app: api
        version: green
    spec:
      containers:
      - name: api
        image: myregistry.io/myapp-api:v1.2.0
        # ... rest of spec
---
# service.yaml (switch between blue/green)
apiVersion: v1
kind: Service
metadata:
  name: api-service
  namespace: production
spec:
  selector:
    app: api
    version: green  # Change to 'blue' for rollback
  ports:
  - port: 3000
    targetPort: 3000
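
The cutover is a one-line selector patch, and rollback is the same patch in reverse; a sketch against the manifests above:

# Send traffic to green
kubectl -n production patch service api-service \
  -p '{"spec":{"selector":{"app":"api","version":"green"}}}'

# Instant rollback: point the selector back at blue
kubectl -n production patch service api-service \
  -p '{"spec":{"selector":{"app":"api","version":"blue"}}}'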

6. Troubleshooting

Debug Commands

# Pod information
kubectl get pods -n production
kubectl describe pod api-xxx -n production
kubectl logs api-xxx -n production
kubectl logs api-xxx -n production --previous  # Previous container instance

# Exec into pod
kubectl exec -it api-xxx -n production -- /bin/sh

# Port forward
kubectl port-forward svc/api-service 3000:3000 -n production

# Events
kubectl get events -n production --sort-by='.lastTimestamp'

# Resource usage
kubectl top pods -n production
kubectl top nodes

# Debug networking
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- /bin/bash
# Inside the container:
nslookup api-service
curl api-service:3000/health
traceroute api-service

# Check ConfigMaps and Secrets
kubectl get configmap app-config -n production -o yaml
kubectl get secret app-secrets -n production -o yaml

# Rollout status
kubectl rollout status deployment/api -n production
kubectl rollout history deployment/api -n production
kubectl rollout undo deployment/api -n production

# Cordon/Drain nodes
kubectl cordon node-1
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
kubectl uncordon node-1

Common Issues

# 1. Image pull errors
# Check:
kubectl describe pod failing-pod -n production
# Fix: verify the image exists and the credentials are correct
kubectl create secret docker-registry registry-credentials \
  --docker-server=myregistry.io \
  --docker-username=user \
  --docker-password=pass

# 2. CrashLoopBackOff
# Debug:
kubectl logs failing-pod -n production --previous
kubectl describe pod failing-pod -n production
# Check readiness/liveness probes

# 3. Pending pods
# Check:
kubectl describe pod pending-pod -n production
# Usually resource constraints or PVC issues
kubectl get pvc -n production

# 4. Service not accessible
# Debug DNS:
kubectl run -it dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 --rm
nslookup api-service.production.svc.cluster.local

# Check endpoints:
kubectl get endpoints api-service -n production

7. Production Best Practices

Resource Quotas

# resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "100"
    requests.memory: 200Gi
    limits.cpu: "200"
    limits.memory: 400Gi
    persistentvolumeclaims: "50"
    pods: "100"
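
kubectl reports consumption against the quota directly, which is worth checking before a deploy that adds replicas:

kubectl -n production describe resourcequota production-quota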

LimitRange

# limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: production-limits
  namespace: production
spec:
  limits:
  - max:
      cpu: "4"
      memory: 8Gi
    min:
      cpu: 100m
      memory: 128Mi
    default:
      cpu: 500m
      memory: 512Mi
    defaultRequest:
      cpu: 250m
      memory: 256Mi
    type: Container

Security Policies

# podsecuritypolicy.yaml (deprecated and removed in Kubernetes 1.25; use Pod Security Standards)
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities:
  - ALL
  volumes:
  - 'configMap'
  - 'emptyDir'
  - 'projected'
  - 'secret'
  - 'downwardAPI'
  - 'persistentVolumeClaim'
  runAsUser:
    rule: 'MustRunAsNonRoot'
  seLinux:
    rule: 'RunAsAny'
  fsGroup:
    rule: 'RunAsAny'
  readOnlyRootFilesystem: true
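
Since PodSecurityPolicy was removed in Kubernetes 1.25, the built-in replacement is Pod Security Admission, driven by namespace labels. A minimal sketch enforcing the restricted profile on the production namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    environment: prod
    # Reject pods that violate the "restricted" Pod Security Standard
    pod-security.kubernetes.io/enforce: restricted
    # Additionally surface warnings on kubectl apply
    pod-security.kubernetes.io/warn: restricted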

Conclusion

Kubernetes offers a robust platform for deploying containerized applications in production.

Production checklist:

  • [ ] Multi-replica deployments with HPA
  • [ ] Health checks (liveness + readiness)
  • [ ] Resource limits and requests
  • [ ] Pod Disruption Budgets
  • [ ] Network Policies
  • [ ] Secure secrets management
  • [ ] Monitoring and alerting
  • [ ] Automated backups
  • [ ] Blue/Green or Canary deployments
  • [ ] Disaster recovery plan

With these patterns, you can run highly available, scalable applications on Kubernetes.
