Kubernetes: Resources, kubectl Commands, and Configurations
Introduction to Kubernetes
Basic architecture
Kubernetes cluster:
- Control Plane (Master):
- API Server (kube-apiserver)
- Scheduler (kube-scheduler)
- Controller Manager (kube-controller-manager)
- etcd (key-value store)
- Nodes (Workers):
- kubelet (agent)
- kube-proxy (networking)
- Container Runtime (Docker, containerd, CRI-O)
Check version
kubectl version # --short is deprecated; recent kubectl prints the short form by default
Cluster information
kubectl cluster-info
kubectl get componentstatuses # deprecated since v1.19
kubectl get nodes
kubectl describe node <node>
kubectl configuration
Default config file: ~/.kube/config
Show the config
kubectl config view
Contexts (cluster + user + namespace)
kubectl config get-contexts
kubectl config current-context
kubectl config use-context <context>
Create a context
kubectl config set-context dev \
  --cluster=dev-cluster \
  --user=dev-user \
  --namespace=development
Change the default namespace
kubectl config set-context --current --namespace=production
Clusters
kubectl config get-clusters
kubectl config set-cluster <cluster> \
  --server=https://k8s.example.com:6443 \
  --certificate-authority=/path/to/ca.crt
Users (credentials)
kubectl config set-credentials admin \
  --client-certificate=/path/to/admin.crt \
  --client-key=/path/to/admin.key
kubectl config set-credentials user \
  --token=<bearer-token>
Delete
kubectl config delete-context <context>
kubectl config delete-cluster <cluster>
kubectl config delete-user <user>
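The contexts, clusters, and users managed by the commands above all live in the kubeconfig file. A minimal sketch of what ~/.kube/config might look like after those commands (names and paths are illustrative):

```yaml
apiVersion: v1
kind: Config
current-context: dev
clusters:
  - name: dev-cluster
    cluster:
      server: https://k8s.example.com:6443
      certificate-authority: /path/to/ca.crt
users:
  - name: dev-user
    user:
      client-certificate: /path/to/admin.crt
      client-key: /path/to/admin.key
contexts:
  # A context is just a (cluster, user, namespace) triple
  - name: dev
    context:
      cluster: dev-cluster
      user: dev-user
      namespace: development
```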
Essential kubectl commands
General syntax
Structure: kubectl [command] [type] [name] [flags]
Main commands:
get - List resources
describe - Show details of a resource
create - Create from a file
apply - Apply a config (create/update)
delete - Delete a resource
edit - Edit a resource
exec - Execute in a container
logs - Show logs
port-forward - Forward a port
Common flags:
-n, --namespace - Specify the namespace
-o, --output - Output format (yaml, json, wide, name)
-w, --watch - Watch for changes
-l, --selector - Filter by labels
--all-namespaces - All namespaces
--dry-run=client - Simulate
-f, --filename - Config file
Basic commands
GET - List resources
kubectl get pods
kubectl get pods -n kube-system
kubectl get pods --all-namespaces
kubectl get pods -o wide # More columns
kubectl get pods -o yaml # YAML output
kubectl get pods -o json # JSON output
kubectl get pods -o name # Names only
kubectl get pods --show-labels # Show labels
kubectl get pods -l app=nginx # Filter by label
kubectl get pods --field-selector status.phase=Running
kubectl get pods -w # Watch for changes
Multiple resources
kubectl get pods,services
kubectl get all # All main resource types
kubectl get all -A # All namespaces
DESCRIBE - Details
kubectl describe pod <pod>
kubectl describe node <node>
kubectl describe service <service>
CREATE - Create
kubectl create -f pod.yaml
kubectl create -f ./configs/ # Directory
kubectl create namespace dev
kubectl create deployment nginx --image=nginx
kubectl create service clusterip my-svc --tcp=80:80
APPLY - Apply (recommended)
kubectl apply -f pod.yaml
kubectl apply -f ./configs/
kubectl apply -k ./kustomize-dir/ # Kustomize
DELETE - Delete
kubectl delete pod <pod>
kubectl delete -f pod.yaml
kubectl delete pods --all
kubectl delete pods -l app=nginx
kubectl delete namespace dev
kubectl delete all --all -n dev # Everything in the namespace
EDIT - Edit
kubectl edit pod <pod>
kubectl edit deployment <deployment>
SCALE - Scale
kubectl scale deployment nginx --replicas=5
kubectl scale statefulset mysql --replicas=3
ROLLOUT - Manage rollouts
kubectl rollout status deployment/nginx
kubectl rollout history deployment/nginx
kubectl rollout undo deployment/nginx
kubectl rollout undo deployment/nginx --to-revision=2
kubectl rollout restart deployment/nginx
kubectl rollout pause deployment/nginx
kubectl rollout resume deployment/nginx
EXEC - Execute in a container
kubectl exec <pod> -- ls /app
kubectl exec -it <pod> -- bash
kubectl exec -it <pod> -c <container> -- sh
LOGS - Show logs
kubectl logs <pod>
kubectl logs -f <pod> # Follow
kubectl logs <pod> -c <container>
kubectl logs <pod> --previous # Previous container instance
kubectl logs <pod> --since=1h
kubectl logs <pod> --tail=100
kubectl logs -l app=nginx # By label
PORT-FORWARD - Port forwarding
kubectl port-forward pod/<pod> 8080:80
kubectl port-forward service/<service> 8080:80
kubectl port-forward deployment/<deployment> 8080:80
CP - Copy files
kubectl cp <pod>:/path/to/file ./local-file
kubectl cp ./local-file <pod>:/path/to/file
TOP - Metrics (requires metrics-server)
kubectl top nodes
kubectl top pods
kubectl top pods --containers
LABEL - Manage labels
kubectl label pods <pod> env=prod
kubectl label pods <pod> env=prod --overwrite
kubectl label pods <pod> env- # Remove the label
ANNOTATE - Manage annotations
kubectl annotate pods <pod> description="Web application"
kubectl annotate pods <pod> description- # Remove the annotation
DRAIN - Drain a node (maintenance)
kubectl drain <node>
kubectl drain <node> --ignore-daemonsets
kubectl drain <node> --delete-emptydir-data --force
CORDON/UNCORDON - Disable scheduling
kubectl cordon <node>
kubectl uncordon <node>
TAINT - Manage taints
kubectl taint nodes <node> key=value:NoSchedule
kubectl taint nodes <node> key=value:NoExecute
kubectl taint nodes <node> key- # Remove
API-RESOURCES - List resource types
kubectl api-resources
kubectl api-resources --namespaced=true
kubectl api-resources --api-group=apps
EXPLAIN - Resource documentation
kubectl explain pod
kubectl explain pod.spec
kubectl explain pod.spec.containers
kubectl explain deployment.spec.strategy
Pods
Simple Pod
pod-simple.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
namespace: default
labels:
app: nginx
env: prod
annotations:
description: "Nginx web server"
spec:
containers:
- name: nginx
image: nginx:1.24
ports:
- containerPort: 80
name: http
protocol: TCP
env:
- name: ENV_VAR
value: "production"
resources:
requests:
memory: "64Mi"
cpu: "250m"
limits:
memory: "128Mi"
cpu: "500m"
Multi-container Pod
pod-multi.yaml
apiVersion: v1
kind: Pod
metadata:
name: webapp
spec:
containers:
# Main container
- name: app
image: myapp:1.0
ports:
- containerPort: 8080
volumeMounts:
- name: shared-data
mountPath: /app/data
# Sidecar: proxy
- name: proxy
image: nginx:1.24
ports:
- containerPort: 80
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: shared-data
mountPath: /usr/share/nginx/html
# Sidecar: log collector
- name: log-collector
image: fluent/fluentd:v1.16
volumeMounts:
- name: shared-logs
mountPath: /var/log
volumes:
- name: shared-data
emptyDir: {}
- name: shared-logs
emptyDir: {}
- name: nginx-config
configMap:
name: nginx-config
Init Containers
pod-init.yaml
apiVersion: v1
kind: Pod
metadata:
name: app-with-init
spec:
# Init containers (run before the main containers)
initContainers:
- name: init-db
image: busybox:1.36
command:
- sh
- -c
- |
echo "Waiting for the database..."
until nslookup mysql-service; do
echo "Still waiting..."
sleep 2
done
echo "Database is ready!"
- name: init-config
image: busybox:1.36
command:
- sh
- -c
- |
echo "Generating configuration..."
echo "configdata" > /config/app.conf
volumeMounts:
- name: config
mountPath: /config
# Main containers
containers:
- name: app
image: myapp:1.0
volumeMounts:
- name: config
mountPath: /etc/app
volumes:
- name: config
emptyDir: {}
Probes (Health checks)
pod-probes.yaml
apiVersion: v1
kind: Pod
metadata:
name: app-with-probes
spec:
containers:
- name: app
image: myapp:1.0
ports:
- containerPort: 8080
# Liveness probe - the container is restarted on failure
livenessProbe:
httpGet:
path: /health
port: 8080
httpHeaders:
- name: X-Custom-Header
value: Health
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
successThreshold: 1
# Readiness probe - the pod is removed from Service endpoints on failure
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
# Startup probe - for slow-starting apps
startupProbe:
httpGet:
path: /startup
port: 8080
initialDelaySeconds: 0
periodSeconds: 10
failureThreshold: 30 # 30 * 10s = 5 min max
---
Other probe types
apiVersion: v1
kind: Pod
metadata:
name: probe-examples
spec:
containers:
- name: app
image: myapp:1.0
# TCP Socket Probe
livenessProbe:
tcpSocket:
port: 8080
initialDelaySeconds: 15
periodSeconds: 20
# Exec probe (command)
readinessProbe:
exec:
command:
- cat
- /tmp/healthy
initialDelaySeconds: 5
periodSeconds: 5
# gRPC Probe (K8s 1.24+)
startupProbe:
grpc:
port: 9090
initialDelaySeconds: 0
periodSeconds: 10
Resources and Limits
pod-resources.yaml
apiVersion: v1
kind: Pod
metadata:
name: resource-demo
spec:
containers:
- name: app
image: myapp:1.0
resources:
# Requests - guaranteed minimum
requests:
memory: "256Mi" # 256 mebibytes
cpu: "500m" # 500 millicores (0.5 CPU)
ephemeral-storage: "2Gi"
# Limits - maximum allowed
limits:
memory: "512Mi" # OOMKilled if exceeded
cpu: "1000m" # Throttled if exceeded
ephemeral-storage: "4Gi"
---
QoS classes (assigned automatically from resources)
1. Guaranteed: requests = limits for both CPU and memory
apiVersion: v1
kind: Pod
metadata:
name: qos-guaranteed
spec:
containers:
- name: app
image: nginx
resources:
requests:
memory: "200Mi"
cpu: "700m"
limits:
memory: "200Mi"
cpu: "700m"
---
2. Burstable: requests < limits
apiVersion: v1
kind: Pod
metadata:
name: qos-burstable
spec:
containers:
- name: app
image: nginx
resources:
requests:
memory: "128Mi"
limits:
memory: "256Mi"
---
3. BestEffort: no requests or limits
apiVersion: v1
kind: Pod
metadata:
name: qos-besteffort
spec:
containers:
- name: app
image: nginx
Pod Security
pod-security.yaml
apiVersion: v1
kind: Pod
metadata:
name: secure-pod
spec:
# Pod-level security context
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
fsGroupChangePolicy: "OnRootMismatch"
seccompProfile:
type: RuntimeDefault
supplementalGroups: [4000, 5000]
containers:
- name: app
image: myapp:1.0
# Container-level security context
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2000
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
seLinuxOptions:
level: "s0:c123,c456"
volumeMounts:
- name: tmp
mountPath: /tmp
- name: cache
mountPath: /app/cache
volumes:
- name: tmp
emptyDir: {}
- name: cache
emptyDir: {}
---
Pod with a ServiceAccount
apiVersion: v1
kind: Pod
metadata:
name: pod-with-sa
spec:
serviceAccountName: my-service-account
automountServiceAccountToken: true
containers:
- name: app
image: myapp:1.0
Affinity and Anti-Affinity
pod-affinity.yaml
apiVersion: v1
kind: Pod
metadata:
name: with-node-affinity
spec:
# Node affinity - placement on nodes
affinity:
nodeAffinity:
# Required - MUST match
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node-1
- node-2
- key: node-type
operator: NotIn
values:
- spot
# Preferred - a preference (non-blocking)
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
preference:
matchExpressions:
- key: zone
operator: In
values:
- zone-a
- weight: 50
preference:
matchExpressions:
- key: disk
operator: In
values:
- ssd
containers:
- name: app
image: myapp:1.0
---
Pod affinity/anti-affinity - placement relative to other Pods
apiVersion: v1
kind: Pod
metadata:
name: with-pod-affinity
labels:
app: web
spec:
affinity:
# Pod affinity - schedule near other pods
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- cache
topologyKey: kubernetes.io/hostname
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: database
topologyKey: zone
# Pod anti-affinity - schedule away from other pods
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: web
topologyKey: kubernetes.io/hostname
containers:
- name: web
image: nginx:1.24
---
NodeSelector (simple)
apiVersion: v1
kind: Pod
metadata:
name: with-node-selector
spec:
nodeSelector:
disktype: ssd
environment: production
containers:
- name: app
image: myapp:1.0
Taints and Tolerations
Apply a taint to a node:
kubectl taint nodes node1 key=value:NoSchedule
kubectl taint nodes node1 key=value:NoExecute
kubectl taint nodes node1 key=value:PreferNoSchedule
pod-tolerations.yaml
apiVersion: v1
kind: Pod
metadata:
name: with-tolerations
spec:
tolerations:
# Tolerates an exact taint
- key: "node-role"
operator: "Equal"
value: "database"
effect: "NoSchedule"
# Tolerates any value for this key
- key: "environment"
operator: "Exists"
effect: "NoExecute"
# Tolerated for 300s before eviction
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 300
# Tolerates all taints
- operator: "Exists"
containers:
- name: app
image: myapp:1.0
Deployments
Basic Deployment
deployment-basic.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: default
labels:
app: nginx
annotations:
kubernetes.io/change-cause: "Update to nginx 1.24"
spec:
# Number of replicas
replicas: 3
# Pod selector
selector:
matchLabels:
app: nginx
# Pod template
template:
metadata:
labels:
app: nginx
version: v1
spec:
containers:
- name: nginx
image: nginx:1.24
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "100m"
limits:
memory: "128Mi"
cpu: "200m"
Deployment strategies
deployment-strategies.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-rolling
spec:
replicas: 5
# Rolling update (default)
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1 # Max extra pods (+1 = 6 total max)
maxUnavailable: 1 # Max unavailable pods (at least 4 running)
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: app
image: myapp:2.0
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
# Revision history
revisionHistoryLimit: 10
# Time before the rollout is considered failed
progressDeadlineSeconds: 600
# Min time a new pod must stay Ready to count as available
minReadySeconds: 10
---
Recreate strategy - delete everything, then recreate
apiVersion: apps/v1
kind: Deployment
metadata:
name: app-recreate
spec:
replicas: 3
strategy:
type: Recreate
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: app
image: myapp:2.0
Advanced Deployment
deployment-advanced.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
labels:
app: web
tier: frontend
spec:
replicas: 4
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
version: v2.1
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
spec:
# Service Account
serviceAccountName: web-sa
# Image Pull Secret
imagePullSecrets:
- name: registry-credentials
# Init Container
initContainers:
- name: migration
image: myapp:2.1
command:
- /bin/sh
- -c
- |
echo "Running migrations..."
/app/migrate.sh
env:
- name: DB_HOST
value: "mysql-service"
# Containers
containers:
- name: web
image: myapp:2.1
imagePullPolicy: Always
ports:
- name: http
containerPort: 8080
protocol: TCP
- name: metrics
containerPort: 9090
env:
- name: PORT
value: "8080"
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: password
- name: CONFIG_FILE
valueFrom:
configMapKeyRef:
name: app-config
key: config.json
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
resources:
requests:
memory: "256Mi"
cpu: "250m"
limits:
memory: "512Mi"
cpu: "500m"
livenessProbe:
httpGet:
path: /health
port: 8080
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
volumeMounts:
- name: config
mountPath: /etc/config
readOnly: true
- name: data
mountPath: /var/data
- name: cache
mountPath: /tmp/cache
# Volumes
volumes:
- name: config
configMap:
name: app-config
- name: data
persistentVolumeClaim:
claimName: data-pvc
- name: cache
emptyDir:
sizeLimit: 1Gi
# Affinity
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchLabels:
app: web
topologyKey: kubernetes.io/hostname
# Topology Spread
topologySpreadConstraints:
- maxSkew: 1
topologyKey: zone
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
app: web
Services
Service ClusterIP
service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
name: backend-service
labels:
app: backend
spec:
# Default type - internal cluster IP only
type: ClusterIP
# Pod selector
selector:
app: backend
# Ports
ports:
- name: http
port: 80 # Service port
targetPort: 8080 # Container port
protocol: TCP
- name: metrics
port: 9090
targetPort: 9090
# Fixed IP (optional)
clusterIP: 10.0.0.10
# Session affinity
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
Service NodePort
service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
name: web-nodeport
spec:
# Reachable from outside via <NodeIP>:<NodePort>
type: NodePort
selector:
app: web
ports:
- name: http
port: 80
targetPort: 8080
nodePort: 30080 # Port on every node (30000-32767)
protocol: TCP
# Route only to node-local endpoints
externalTrafficPolicy: Local # Preserves the source IP
Service LoadBalancer
service-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
name: web-lb
annotations:
# Cloud-provider annotations (AWS example)
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
service.beta.kubernetes.io/aws-load-balancer-internal: "false"
spec:
# Creates an external load balancer (cloud provider)
type: LoadBalancer
selector:
app: web
ports:
- name: http
port: 80
targetPort: 8080
- name: https
port: 443
targetPort: 8443
# Fixed public IP (if supported; deprecated since K8s 1.24)
loadBalancerIP: 203.0.113.10
# Restrict source IPs
loadBalancerSourceRanges:
- 192.168.1.0/24
- 10.0.0.0/8
externalTrafficPolicy: Local
Service ExternalName
service-externalname.yaml
apiVersion: v1
kind: Service
metadata:
name: external-db
spec:
# DNS alias to an external service
type: ExternalName
externalName: database.example.com
# No selector and no ports
Service Headless
service-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-headless
spec:
# No ClusterIP - DNS returns the pod IPs directly
clusterIP: None
selector:
app: mysql
ports:
- name: mysql
port: 3306
targetPort: 3306
# Also publish not-ready pods
publishNotReadyAddresses: true
---
Used with a StatefulSet for stable DNS:
mysql-0.mysql-headless.default.svc.cluster.local
mysql-1.mysql-headless.default.svc.cluster.local
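A client pod can target one specific replica through these per-pod DNS names. A minimal sketch (the mysql-0 hostname assumes the headless Service and a StatefulSet named mysql, as in the StatefulSets section below; the client image and env var name are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client
spec:
  containers:
    - name: client
      image: mysql:8.0
      env:
        # Always talk to the first replica (e.g. the primary)
        - name: DB_HOST
          value: "mysql-0.mysql-headless.default.svc.cluster.local"
      command: ["sleep", "infinity"]
```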
Manual Endpoints
Service without a selector
apiVersion: v1
kind: Service
metadata:
name: external-service
spec:
ports:
- port: 80
targetPort: 80
---
Manual Endpoints (the name must match the Service)
apiVersion: v1
kind: Endpoints
metadata:
name: external-service
subsets:
- addresses:
- ip: 192.168.1.100
- ip: 192.168.1.101
ports:
- port: 80
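Since the EndpointSlice API went GA (v1.21), the same external backends can also be declared with a discovery.k8s.io EndpointSlice, which is what Kubernetes generates internally for Services with selectors. A sketch using the same illustrative IPs as above:

```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: external-service-1
  labels:
    # This label links the slice to the Service
    kubernetes.io/service-name: external-service
addressType: IPv4
endpoints:
  - addresses:
      - "192.168.1.100"
  - addresses:
      - "192.168.1.101"
ports:
  - port: 80
    protocol: TCP
```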
ConfigMaps and Secrets
ConfigMap
configmap-literal.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
data:
# Simple key-value pairs
database_url: "mysql://db.example.com:3306"
max_connections: "100"
log_level: "info"
# Configuration files
nginx.conf: |
server {
listen 80;
server_name example.com;
location / {
proxy_pass http://backend:8080;
}
}
app.json: |
{
"api": {
"version": "v2",
"timeout": 30
},
"features": {
"analytics": true,
"cache": true
}
}
---
Usage in a Pod
apiVersion: v1
kind: Pod
metadata:
name: app-with-config
spec:
containers:
- name: app
image: myapp:1.0
# Environment variable from a ConfigMap
env:
- name: DATABASE_URL
valueFrom:
configMapKeyRef:
name: app-config
key: database_url
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: app-config
key: log_level
# All keys as env vars
envFrom:
- configMapRef:
name: app-config
prefix: CONFIG_
# Volume from a ConfigMap
volumeMounts:
- name: config
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: app-config
mountPath: /etc/app
volumes:
- name: config
configMap:
name: app-config
items:
- key: nginx.conf
path: nginx.conf
- name: app-config
configMap:
name: app-config
defaultMode: 0644
Create a ConfigMap
From literals
kubectl create configmap app-config \
  --from-literal=key1=value1 \
  --from-literal=key2=value2
From a file
kubectl create configmap nginx-config \
  --from-file=nginx.conf
From a directory
kubectl create configmap configs \
  --from-file=./config-dir/
From an env file
kubectl create configmap env-config \
  --from-env-file=.env
View
kubectl get configmap
kubectl describe configmap app-config
kubectl get configmap app-config -o yaml
Secrets
secret-generic.yaml
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
type: Opaque
data:
# base64-encoded values
username: YWRtaW4= # admin
password: cGFzc3dvcmQxMjM= # password123
stringData:
# Plain-text values (encoded automatically)
host: "mysql.example.com"
port: "3306"
---
Secret TLS
apiVersion: v1
kind: Secret
metadata:
name: tls-cert
type: kubernetes.io/tls
data:
tls.crt: LS0tLS1CRUdJTi... # Base64 encoded
tls.key: LS0tLS1CRUdJTi...
---
Secret Docker Registry
apiVersion: v1
kind: Secret
metadata:
name: registry-credentials
type: kubernetes.io/dockerconfigjson
data:
.dockerconfigjson: eyJhdXRocyI6eyJodH...
---
Secret SSH Auth
apiVersion: v1
kind: Secret
metadata:
name: ssh-key
type: kubernetes.io/ssh-auth
data:
ssh-privatekey: LS0tLS1CRUdJTi...
---
Secret Basic Auth
apiVersion: v1
kind: Secret
metadata:
name: basic-auth
type: kubernetes.io/basic-auth
stringData:
username: admin
password: secretpass
---
Usage in a Pod
apiVersion: v1
kind: Pod
metadata:
name: app-with-secrets
spec:
containers:
- name: app
image: myapp:1.0
# Environment variables from a Secret
env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-credentials
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: password
# All keys as env vars
envFrom:
- secretRef:
name: db-credentials
# Volume from a Secret
volumeMounts:
- name: tls
mountPath: /etc/tls
readOnly: true
- name: ssh
mountPath: /root/.ssh
defaultMode: 0400
volumes:
- name: tls
secret:
secretName: tls-cert
- name: ssh
secret:
secretName: ssh-key
items:
- key: ssh-privatekey
path: id_rsa
# Pull secret
imagePullSecrets:
- name: registry-credentials
Create Secrets
Generic secret from literals
kubectl create secret generic db-secret \
  --from-literal=username=admin \
  --from-literal=password=secret123
From files
kubectl create secret generic ssh-key \
  --from-file=ssh-privatekey=~/.ssh/id_rsa
TLS secret
kubectl create secret tls tls-cert \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key
Docker registry
kubectl create secret docker-registry registry-cred \
  --docker-server=registry.example.com \
  --docker-username=user \
  --docker-password=pass \
  --docker-email=user@example.com
Encode/decode base64
echo -n 'admin' | base64 # Encode
echo 'YWRtaW4=' | base64 -d # Decode
View secrets
kubectl get secrets
kubectl describe secret db-secret
kubectl get secret db-secret -o yaml
kubectl get secret db-secret -o jsonpath='{.data.password}' | base64 -d
Volumes
Volume Types
pod-volumes.yaml
apiVersion: v1
kind: Pod
metadata:
name: volume-examples
spec:
containers:
- name: app
image: nginx:1.24
volumeMounts:
- name: empty-dir
mountPath: /cache
- name: host-path
mountPath: /host-data
- name: config
mountPath: /etc/config
- name: secret
mountPath: /etc/secrets
readOnly: true
- name: pvc
mountPath: /data
- name: nfs
mountPath: /shared
volumes:
# emptyDir - temporary, shared between containers
- name: empty-dir
emptyDir:
sizeLimit: 1Gi
medium: Memory # or "" for disk
# hostPath - mount from the node (avoid in production)
- name: host-path
hostPath:
path: /data
type: DirectoryOrCreate
# Types: Directory, File, Socket, etc.
# configMap
- name: config
configMap:
name: app-config
defaultMode: 0644
items:
- key: config.json
path: app.json
# secret
- name: secret
secret:
secretName: db-credentials
defaultMode: 0400
# persistentVolumeClaim
- name: pvc
persistentVolumeClaim:
claimName: my-pvc
# nfs
- name: nfs
nfs:
server: nfs-server.example.com
path: /exports/data
readOnly: false
PersistentVolume and PersistentVolumeClaim
pv-local.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce # RWO - one node, read/write
# - ReadOnlyMany # ROX - many nodes, read-only
# - ReadWriteMany # RWX - many nodes, read/write
# - ReadWriteOncePod # RWOP - a single pod (K8s 1.22+)
persistentVolumeReclaimPolicy: Retain
# Retain - manual cleanup (data is kept)
# Delete - deleted automatically
# Recycle - basic scrub (deprecated)
storageClassName: local-storage
local:
path: /mnt/data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node-1
---
pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 50Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
mountOptions:
- hard
- nfsvers=4.1
nfs:
server: nfs-server.example.com
path: /exports/data
---
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: local-storage
# Match a specific PV (optional)
selector:
matchLabels:
type: fast
matchExpressions:
- key: environment
operator: In
values:
- prod
---
Pod using the PVC
apiVersion: v1
kind: Pod
metadata:
name: app-with-pvc
spec:
containers:
- name: app
image: nginx:1.24
volumeMounts:
- name: data
mountPath: /usr/share/nginx/html
volumes:
- name: data
persistentVolumeClaim:
claimName: data-pvc
StorageClass
storageclass-local.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-local
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
storageclass-aws-ebs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: aws-ebs-gp3
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
type: gp3
iops: "3000"
throughput: "125"
encrypted: "true"
kmsKeyId: arn:aws:kms:us-east-1:123456789012:key/...
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
reclaimPolicy: Delete
---
storageclass-nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs-client
provisioner: nfs-subdir-external-provisioner
parameters:
archiveOnDelete: "true"
pathPattern: "${.PVC.namespace}/${.PVC.name}"
reclaimPolicy: Delete
allowVolumeExpansion: true
---
PVC with a StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dynamic-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: aws-ebs-gp3
resources:
requests:
storage: 20Gi
Volume commands
PersistentVolumes
kubectl get pv
kubectl describe pv <pv>
kubectl delete pv <pv>
PersistentVolumeClaims
kubectl get pvc
kubectl describe pvc <pvc>
kubectl delete pvc <pvc>
StorageClasses
kubectl get storageclass
kubectl describe storageclass <storageclass>
kubectl get sc -o yaml
Volume expansion (if allowVolumeExpansion: true)
kubectl edit pvc <pvc>
Edit spec.resources.requests.storage
Debug
kubectl get events --sort-by='.lastTimestamp'
kubectl describe pvc <pvc> | grep -A 5 Events
StatefulSets
Basic StatefulSet
statefulset-basic.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql-headless
spec:
clusterIP: None
selector:
app: mysql
ports:
- port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
serviceName: mysql-headless
replicas: 3
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8.0
ports:
- containerPort: 3306
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: root-password
volumeMounts:
- name: data
mountPath: /var/lib/mysql
# volumeClaimTemplates - one PVC per pod
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
storageClassName: fast-ssd
resources:
requests:
storage: 10Gi
Pods are created with stable names:
mysql-0, mysql-1, mysql-2
DNS: mysql-0.mysql-headless.default.svc.cluster.local
Advanced StatefulSet
statefulset-advanced.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: web-headless
replicas: 5
# Deployment order
podManagementPolicy: OrderedReady
# OrderedReady - one at a time, in order
# Parallel - all in parallel
# Update strategy
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: 2 # Only pods with ordinal >= partition are updated
selector:
matchLabels:
app: web
template:
metadata:
labels:
app: web
spec:
terminationGracePeriodSeconds: 30
containers:
- name: nginx
image: nginx:1.24
ports:
- containerPort: 80
name: http
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
periodSeconds: 5
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes:
- ReadWriteOnce
storageClassName: ssd
resources:
requests:
storage: 1Gi
# Revision history
revisionHistoryLimit: 10
# Min time a new pod must stay Ready to count as available
minReadySeconds: 10
StatefulSet commands
List
kubectl get statefulset
kubectl get sts # Alias
Details
kubectl describe sts mysql
Scale
kubectl scale sts mysql --replicas=5
Update
kubectl apply -f statefulset.yaml
Partition strategy
kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
Rollout
kubectl rollout status sts web
kubectl rollout history sts web
kubectl rollout undo sts web
Delete
kubectl delete sts mysql
kubectl delete sts mysql --cascade=orphan # Keeps the pods
Pods
kubectl get pods -l app=mysql
kubectl delete pod mysql-0 # Recreated automatically
PVCs
kubectl get pvc -l app=mysql
kubectl delete pvc data-mysql-0
DaemonSets
Basic DaemonSet
daemonset-basic.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: node-exporter
labels:
app: node-exporter
spec:
selector:
matchLabels:
app: node-exporter
template:
metadata:
labels:
app: node-exporter
spec:
# Tolerate control-plane taints
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
# Host network to reach node metrics
hostNetwork: true
hostPID: true
containers:
- name: node-exporter
image: prom/node-exporter:v1.6.1
args:
- --path.sysfs=/host/sys
- --path.rootfs=/host/root
- --path.procfs=/host/proc
ports:
- containerPort: 9100
hostPort: 9100
name: metrics
volumeMounts:
- name: sys
mountPath: /host/sys
readOnly: true
- name: root
mountPath: /host/root
mountPropagation: HostToContainer
readOnly: true
- name: proc
mountPath: /host/proc
readOnly: true
resources:
requests:
memory: 30Mi
cpu: 100m
limits:
memory: 50Mi
cpu: 200m
volumes:
- name: sys
hostPath:
path: /sys
- name: root
hostPath:
path: /
- name: proc
hostPath:
path: /proc
DaemonSet with node selection
daemonset-selective.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: ssd-monitor
spec:
selector:
matchLabels:
app: ssd-monitor
template:
metadata:
labels:
app: ssd-monitor
spec:
# Node selector - only nodes with these labels
nodeSelector:
disktype: ssd
environment: production
# Or node affinity
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: disktype
operator: In
values:
- ssd
- nvme
containers:
- name: monitor
image: ssd-monitor:1.0
securityContext:
privileged: true
DaemonSet update strategy
daemonset-update.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
spec:
updateStrategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1 # or a percentage: 25%
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
containers:
- name: fluentd
image: fluent/fluentd:v1.16
Jobs and CronJobs
Simple Job
job-simple.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: pi-calculation
spec:
# Number of successful completions required
completions: 1
# Number of parallel pods
parallelism: 1
# Number of retries on failure
backoffLimit: 4
# Delay before automatic cleanup
ttlSecondsAfterFinished: 100
# Timeout
activeDeadlineSeconds: 600
template:
spec:
restartPolicy: Never # or OnFailure
containers:
- name: pi
image: perl:5.34
command:
- perl
- -Mbignum=bpi
- -wle
- print bpi(2000)
Parallel Job
job-parallel.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: parallel-job
spec:
completions: 10 # 10 total completions
parallelism: 3 # 3 pods at a time
backoffLimit: 5
template:
metadata:
labels:
app: worker
spec:
restartPolicy: OnFailure
containers:
- name: worker
image: busybox:1.36
command:
- /bin/sh
- -c
- |
echo "Processing job $HOSTNAME"
sleep 30
echo "Job $HOSTNAME completed"
Indexed Job
job-indexed.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: indexed-job
spec:
completions: 5
parallelism: 3
completionMode: Indexed # Each pod gets an index 0-4
template:
spec:
restartPolicy: OnFailure
containers:
- name: worker
image: busybox:1.36
command:
- /bin/sh
- -c
- |
echo "Processing index $JOB_COMPLETION_INDEX"
# Index-based processing
sleep 10
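A worker in an Indexed Job can map its completion index to a data shard. A minimal sketch of that pattern (the shard list and the hard-coded index value are illustrative; in a real pod, `JOB_COMPLETION_INDEX` is injected by Kubernetes):

```shell
# Simulate the JOB_COMPLETION_INDEX env var injected into each pod of an Indexed Job
JOB_COMPLETION_INDEX=2
# Hypothetical list of shards; pod N processes shard N (0-based index)
SHARDS="shard-a shard-b shard-c shard-d shard-e"
# Pick the shard matching this pod's index
shard=$(echo "$SHARDS" | tr ' ' '\n' | sed -n "$((JOB_COMPLETION_INDEX + 1))p")
echo "Processing $shard"
```

With index 2 this prints `Processing shard-c`; each of the 5 completions handles exactly one shard.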
CronJob
cronjob-basic.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
name: backup-job
spec:
# Schedule (cron format)
schedule: "0 2 * * *" # Every day at 2 AM
# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of week (0 - 6, 0 = Sunday)
# │ │ │ │ │
# * * * * *
# Timezone (K8s 1.25+)
timeZone: "Europe/Paris"
# Concurrency policy
concurrencyPolicy: Forbid
# Allow - allow concurrent jobs
# Forbid - forbid (skip if the previous job is still running)
# Replace - replace the running job
# Allowed delay for a late start
startingDeadlineSeconds: 300
# History
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
# Suspend
suspend: false
jobTemplate:
spec:
backoffLimit: 3
ttlSecondsAfterFinished: 86400 # 24h
template:
spec:
restartPolicy: OnFailure
containers:
- name: backup
image: backup-tool:1.0
command:
- /bin/sh
- -c
- |
echo "Starting backup at $(date)"
/usr/local/bin/backup.sh
echo "Backup completed at $(date)"
env:
- name: BACKUP_DIR
value: /backups
volumeMounts:
- name: backup
mountPath: /backups
volumes:
- name: backup
persistentVolumeClaim:
claimName: backup-pvc
---
Schedule examples
"*/5 * * * *" - Every 5 minutes
"0 * * * *" - Every hour
"0 0 * * *" - Every day at midnight
"0 0 * * 0" - Every Sunday at midnight
"0 0 1 * *" - First day of the month
"0 0 1 1 *" - January 1st
"30 2 * * 1-5" - 2:30 AM, Monday to Friday
"0 */6 * * *" - Every 6 hours
"0 9-17 * * 1-5" - 9 AM to 5 PM, Monday to Friday
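Before deploying a CronJob, it helps to sanity-check the expression by splitting it into its five fields. A small helper (the function name is illustrative):

```shell
# Print each field of a five-field cron expression with its meaning
explain_cron() {
  set -f            # disable globbing so '*' stays literal
  set -- $1         # split the expression on whitespace into $1..$5
  printf 'minute=%s hour=%s day-of-month=%s month=%s day-of-week=%s\n' \
    "$1" "$2" "$3" "$4" "$5"
  set +f
}
explain_cron "30 2 * * 1-5"
# minute=30 hour=2 day-of-month=* month=* day-of-week=1-5
```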
Job and CronJob commands
Jobs
kubectl get jobs
kubectl describe job pi-calculation
kubectl logs job/pi-calculation
kubectl delete job pi-calculation
Create a Job from a CronJob
kubectl create job backup-manual --from=cronjob/backup-job
CronJobs
kubectl get cronjobs
kubectl get cj # Alias
kubectl describe cronjob backup-job
Suspend/resume
kubectl patch cronjob backup-job -p '{"spec":{"suspend":true}}'
kubectl patch cronjob backup-job -p '{"spec":{"suspend":false}}'
Logs of the last run
kubectl logs -l job-name=backup-job-28239200
Clean up completed jobs
kubectl delete jobs --field-selector status.successful=1
Ingress
Basic Ingress
ingress-basic.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
Ingress with TLS
ingress-tls.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: secure-ingress
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- example.com
- www.example.com
secretName: example-tls
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
Multi-service Ingress
ingress-multi.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: multi-service-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- host: api.example.com
http:
paths:
# Path types:
# Prefix - prefix match (/api, /api/users)
# Exact - exact match only
# ImplementationSpecific - depends on the ingress controller
- path: /v1(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: api-v1
port:
number: 8080
- path: /v2(/|$)(.*)
pathType: ImplementationSpecific
backend:
service:
name: api-v2
port:
number: 8080
- host: app.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- path: /api
pathType: Prefix
backend:
service:
name: backend
port:
number: 3000
Common Ingress annotations
ingress-annotations.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: advanced-ingress
annotations:
# Redirection
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
# CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, PUT, DELETE, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
# Rate limiting
nginx.ingress.kubernetes.io/limit-rps: "10"
nginx.ingress.kubernetes.io/limit-connections: "5"
# Timeouts
nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
nginx.ingress.kubernetes.io/proxy-send-timeout: "30"
nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
# Body size
nginx.ingress.kubernetes.io/proxy-body-size: "10m"
# IP Whitelist
nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8,192.168.1.0/24"
# Auth
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: basic-auth
nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
# Custom headers
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "X-Custom-Header: Value";
more_set_headers "X-Frame-Options: DENY";
# Backend protocol
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
# Session affinity
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-max-age: "86400"
# Canary deployments
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "20"
spec:
ingressClassName: nginx
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web-service
port:
number: 80
NetworkPolicies
Default NetworkPolicies
network-policy-deny-all.yaml
Deny all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all-ingress
namespace: production
spec:
podSelector: {}
policyTypes:
- Ingress
---
Deny all egress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: deny-all-egress
namespace: production
spec:
podSelector: {}
policyTypes:
- Egress
---
Allow all ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-all-ingress
spec:
podSelector: {}
ingress:
- {}
policyTypes:
- Ingress
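A common baseline is to deny both directions in a single policy and then add per-workload allow rules on top. A minimal sketch (the namespace name is illustrative):

```yaml
# Deny all ingress AND egress for every pod in the namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}        # empty selector = all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
```

Remember that with egress denied, pods also lose DNS; an explicit allow rule to kube-dns (as in the egress example below) is almost always needed alongside this baseline.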
NetworkPolicy ingress
network-policy-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: backend-policy
namespace: production
spec:
# Applies to pods with these labels
podSelector:
matchLabels:
app: backend
tier: api
policyTypes:
- Ingress
ingress:
# Rule 1: from frontend on port 8080
- from:
- podSelector:
matchLabels:
app: frontend
ports:
- protocol: TCP
port: 8080
# Rule 2: from the monitoring namespace
- from:
- namespaceSelector:
matchLabels:
name: monitoring
ports:
- protocol: TCP
port: 9090
# Rule 3: from specific IPs
- from:
- ipBlock:
cidr: 192.168.1.0/24
except:
- 192.168.1.5/32
ports:
- protocol: TCP
port: 8080
NetworkPolicy egress
network-policy-egress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: app-egress
spec:
podSelector:
matchLabels:
app: webapp
policyTypes:
- Egress
egress:
# Database access
- to:
- podSelector:
matchLabels:
app: database
ports:
- protocol: TCP
port: 5432
# External API access
- to:
- ipBlock:
cidr: 203.0.113.0/24
ports:
- protocol: TCP
port: 443
# DNS (kube-dns/CoreDNS)
- to:
- namespaceSelector:
matchLabels:
name: kube-system
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
Complex NetworkPolicy
network-policy-complex.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: complex-policy
namespace: production
spec:
podSelector:
matchLabels:
role: api
policyTypes:
- Ingress
- Egress
ingress:
# Frontend in the same namespace OR the monitoring namespace
- from:
- podSelector:
matchLabels:
role: frontend
- namespaceSelector:
matchLabels:
name: monitoring
ports:
- protocol: TCP
port: 8080
- protocol: TCP
port: 9090
egress:
# Database
- to:
- podSelector:
matchLabels:
role: database
ports:
- protocol: TCP
port: 5432
# External services
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 169.254.169.254/32 # Metadata service
ports:
- protocol: TCP
port: 443
- protocol: TCP
port: 80
# DNS
- to:
- namespaceSelector: {}
podSelector:
matchLabels:
k8s-app: kube-dns
ports:
- protocol: UDP
port: 53
- protocol: TCP
port: 53
RBAC (Role-Based Access Control)
Role and RoleBinding
role-developer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: developer
namespace: development
rules:
# Pods
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch", "create", "delete"]
# Deployments
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
# Services
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "create", "delete"]
# ConfigMaps and Secrets (read-only)
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list"]
---
rolebinding-developer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: developer-binding
namespace: development
subjects:
# User
- kind: User
  name: john@example.com
  apiGroup: rbac.authorization.k8s.io
# Group
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
# ServiceAccount
- kind: ServiceAccount
  name: dev-sa
  namespace: development
roleRef:
kind: Role
name: developer
apiGroup: rbac.authorization.k8s.io
ClusterRole and ClusterRoleBinding
clusterrole-viewer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-viewer
rules:
# Read all pods
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
# Read deployments, daemonsets, statefulsets
- apiGroups: ["apps"]
  resources: ["deployments", "daemonsets", "statefulsets", "replicasets"]
  verbs: ["get", "list", "watch"]
# Read services and ingresses
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
# Read nodes
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list"]
---
clusterrolebinding-viewer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: viewer-binding
subjects:
- kind: Group
  name: viewers
  apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-viewer
apiGroup: rbac.authorization.k8s.io
ServiceAccount
serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: app-sa
namespace: production
automountServiceAccountToken: true
imagePullSecrets:
- name: registry-credentials
---
Role for the ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: app-role
namespace: production
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-secret"] # Only this specific secret
  verbs: ["get"]
---
RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: app-rolebinding
namespace: production
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: production
roleRef:
kind: Role
name: app-role
apiGroup: rbac.authorization.k8s.io
---
Use in a Pod
apiVersion: v1
kind: Pod
metadata:
name: app-with-sa
namespace: production
spec:
serviceAccountName: app-sa
containers:
- name: app
image: myapp:1.0
RBAC commands
Roles
kubectl get roles -n development
kubectl describe role developer -n development
kubectl create role developer --verb=get,list --resource=pods -n development
RoleBindings
kubectl get rolebindings -n development
kubectl describe rolebinding developer-binding -n development
kubectl create rolebinding dev-binding --role=developer --user=john@example.com -n development
ClusterRoles
kubectl get clusterroles
kubectl describe clusterrole cluster-viewer
kubectl create clusterrole viewer --verb=get,list --resource=pods
ClusterRoleBindings
kubectl get clusterrolebindings
kubectl describe clusterrolebinding viewer-binding
ServiceAccounts
kubectl get serviceaccounts
kubectl get sa # Alias
kubectl create serviceaccount app-sa -n production
Check permissions
kubectl auth can-i get pods --as=john@example.com
kubectl auth can-i delete deployments --as=system:serviceaccount:prod:app-sa -n prod
kubectl auth can-i --list --as=john@example.com
Reconcile RBAC rules from a manifest
kubectl auth reconcile -f rbac.yaml
ResourceQuotas and LimitRanges
ResourceQuota
resourcequota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-quota
namespace: development
spec:
hard:
# Compute resources
requests.cpu: "10"
requests.memory: 20Gi
limits.cpu: "20"
limits.memory: 40Gi
# Storage
requests.storage: 100Gi
persistentvolumeclaims: "10"
# Objects count
pods: "50"
services: "20"
services.loadbalancers: "2"
services.nodeports: "5"
replicationcontrollers: "20"
secrets: "50"
configmaps: "50"
---
Per-priority quota
apiVersion: v1
kind: ResourceQuota
metadata:
name: high-priority-quota
namespace: production
spec:
hard:
pods: "10"
requests.cpu: "5"
requests.memory: 10Gi
scopeSelector:
matchExpressions:
- operator: In
scopeName: PriorityClass
values: ["high"]
---
Per-QoS quota
apiVersion: v1
kind: ResourceQuota
metadata:
name: besteffort-quota
namespace: development
spec:
hard:
pods: "5"
scopes:
- BestEffort
LimitRange
limitrange.yaml
apiVersion: v1
kind: LimitRange
metadata:
name: resource-limits
namespace: development
spec:
limits:
# Container constraints
- type: Container
max:
cpu: "2"
memory: 2Gi
min:
cpu: 100m
memory: 64Mi
default:
cpu: 500m
memory: 512Mi
defaultRequest:
cpu: 250m
memory: 256Mi
maxLimitRequestRatio:
cpu: 2
memory: 2
# Pod constraints
- type: Pod
max:
cpu: "4"
memory: 4Gi
min:
cpu: 200m
memory: 128Mi
# PVC constraints
- type: PersistentVolumeClaim
max:
storage: 10Gi
min:
storage: 1Gi
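With this LimitRange in place, a container declared without a resources block receives the namespace defaults at admission time. A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-resources-demo
  namespace: development
spec:
  containers:
  - name: app
    image: nginx:1.25
    # No resources block: the LimitRange injects
    # requests cpu=250m/memory=256Mi and limits cpu=500m/memory=512Mi
```

`kubectl get pod no-resources-demo -o yaml` would then show the injected values under `spec.containers[0].resources`.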
HorizontalPodAutoscaler (HPA)
Basic HPA
hpa-basic.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: web-hpa
namespace: production
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: web-app
minReplicas: 2
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 70
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
behavior:
scaleDown:
stabilizationWindowSeconds: 300
policies:
- type: Percent
value: 50
periodSeconds: 60
- type: Pods
value: 2
periodSeconds: 60
selectPolicy: Min
scaleUp:
stabilizationWindowSeconds: 0
policies:
- type: Percent
value: 100
periodSeconds: 30
- type: Pods
value: 4
periodSeconds: 30
selectPolicy: Max
HPA with custom metrics
hpa-custom.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: custom-hpa
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: api-server
minReplicas: 3
maxReplicas: 20
metrics:
# Resource metrics
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 60
# Pods metrics (e.g. scraped via Prometheus annotations)
- type: Pods
pods:
metric:
name: http_requests_per_second
target:
type: AverageValue
averageValue: "1000"
# Object metrics
- type: Object
object:
metric:
name: requests-per-second
describedObject:
apiVersion: networking.k8s.io/v1
kind: Ingress
name: main-route
target:
type: Value
value: "10k"
# External metrics
- type: External
external:
metric:
name: queue_messages_ready
selector:
matchLabels:
queue: worker-tasks
target:
type: AverageValue
averageValue: "30"
HPA commands
Create an HPA
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10
List
kubectl get hpa
kubectl describe hpa web-hpa
Edit
kubectl edit hpa web-hpa
Delete
kubectl delete hpa web-hpa
View metrics (requires metrics-server)
kubectl top nodes
kubectl top pods
Watch HPA
kubectl get hpa -w
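The controller sizes the target with the standard formula desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A quick sketch of that arithmetic (the function name and sample values are illustrative):

```shell
# ceil(current_replicas * current_metric / target_metric)
desired_replicas() {
  awk -v c="$1" -v m="$2" -v t="$3" 'BEGIN {
    d = c * m / t
    r = (d > int(d)) ? int(d) + 1 : int(d)
    print r
  }'
}
# 4 pods averaging 90% CPU against a 70% target
desired_replicas 4 90 70
# -> 6
```

So at 90% average utilization with a 70% target, 4 replicas scale up to 6; once the average falls back to the target, the desired count returns to the current count and scaling stops.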
Kustomize
Basic structure
Kustomize project layout
base/
├── kustomization.yaml
├── deployment.yaml
├── service.yaml
└── configmap.yaml
overlays/
├── dev/
│ ├── kustomization.yaml
│ └── patch-replicas.yaml
├── staging/
│ ├── kustomization.yaml
│ └── patch-replicas.yaml
└── production/
├── kustomization.yaml
├── patch-replicas.yaml
└── patch-resources.yaml
Base Kustomization
base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Common labels
commonLabels:
  app: myapp
  managed-by: kustomize
# Common annotations
commonAnnotations:
  version: "1.0.0"
# Name prefix/suffix
namePrefix: app-
nameSuffix: -v1
# Namespace
namespace: default
# Resources
resources:
- deployment.yaml
- service.yaml
- configmap.yaml
# ConfigMap generator
configMapGenerator:
- name: app-config
  literals:
  - LOG_LEVEL=info
  - MAX_CONNECTIONS=100
  files:
  - config.properties
# Secret generator
secretGenerator:
- name: db-credentials
  literals:
  - username=admin
  - password=secret123
  type: Opaque
# Images
images:
- name: myapp
  newName: registry.example.com/myapp
  newTag: 1.2.3
Development overlay
overlays/dev/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
# Base
bases:
- ../../base
# Namespace override
namespace: development
# Replicas patch
patchesStrategicMerge:
- patch-replicas.yaml
# ConfigMap overlay
configMapGenerator:
- name: app-config
  behavior: merge
  literals:
  - LOG_LEVEL=debug
  - ENVIRONMENT=development
# Labels
commonLabels:
  environment: dev
---
overlays/dev/patch-replicas.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
replicas: 1
Production overlay
overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
namespace: production
patchesStrategicMerge:
- patch-replicas.yaml
- patch-resources.yaml
# JSON patches
patchesJson6902:
- target:
    group: apps
    version: v1
    kind: Deployment
    name: myapp
  patch: |-
    - op: add
      path: /spec/template/spec/containers/0/env/-
      value:
        name: CACHE_ENABLED
        value: "true"
configMapGenerator:
- name: app-config
  behavior: merge
  literals:
  - LOG_LEVEL=warn
  - ENVIRONMENT=production
  - CACHE_ENABLED=true
commonLabels:
  environment: prod
images:
- name: myapp
  newTag: 1.2.3-stable
---
overlays/production/patch-resources.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
template:
spec:
containers:
- name: myapp
resources:
requests:
cpu: 500m
memory: 512Mi
limits:
cpu: 1000m
memory: 1Gi
Kustomize commands
Build (view the generated YAML)
kubectl kustomize ./base
kubectl kustomize ./overlays/dev
kubectl kustomize ./overlays/production
Apply directly
kubectl apply -k ./overlays/dev
kubectl apply -k ./overlays/production
Delete
kubectl delete -k ./overlays/dev
Diff
kubectl diff -k ./overlays/production
Helm
Essential Helm commands
Install Helm 3
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Add repositories
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
Update repos
helm repo update
Search for charts
helm search repo nginx
helm search hub wordpress
Install a chart
helm install my-release bitnami/nginx
helm install my-nginx bitnami/nginx --version 13.2.0
helm install my-app ./my-chart
helm install my-app ./my-chart --namespace production --create-namespace
helm install my-app ./my-chart -f values-prod.yaml
helm install my-app ./my-chart --set replicaCount=3 --set image.tag=1.2.3
Dry-run (preview what would be created)
helm install my-app ./my-chart --dry-run --debug
List releases
helm list
helm list --all-namespaces
helm list -n production
Status
helm status my-release
helm status my-release -n production
Upgrade
helm upgrade my-release bitnami/nginx
helm upgrade my-release ./my-chart -f values.yaml
helm upgrade --install my-release ./my-chart # Install if the release doesn't exist
Rollback
helm rollback my-release
helm rollback my-release 2 # Specific revision
helm history my-release
Uninstall
helm uninstall my-release
helm uninstall my-release --keep-history
Get values
helm get values my-release
helm get values my-release --all
helm get manifest my-release
Show chart info
helm show chart bitnami/nginx
helm show values bitnami/nginx
helm show all bitnami/nginx
Template (render YAML locally)
helm template my-release ./my-chart
helm template my-release ./my-chart -f values-prod.yaml
Lint (validate a chart)
helm lint ./my-chart
Package
helm package ./my-chart
helm package ./my-chart --version 1.2.3
Create chart
helm create my-app
Debugging and troubleshooting
Debug commands
Logs
kubectl logs -f <pod>
kubectl logs <pod> --previous
kubectl logs <pod> -c <container>
kubectl logs -l app=nginx --all-containers=true
kubectl logs <pod> --since=1h
kubectl logs <pod> --tail=100
Describe (events)
kubectl describe pod <pod>
kubectl describe node <node>
kubectl describe service <service>
Events
kubectl get events --sort-by='.lastTimestamp'
kubectl get events --field-selector involvedObject.name=<pod>
kubectl get events -n kube-system
Exec into a pod
kubectl exec -it <pod> -- /bin/bash
kubectl exec -it <pod> -c <container> -- sh
kubectl exec <pod> -- env
kubectl exec <pod> -- ps aux
kubectl exec <pod> -- cat /etc/resolv.conf
Port forward
kubectl port-forward pod/<pod> 8080:80
kubectl port-forward service/<service> 8080:80
Debug with an ephemeral container (K8s 1.23+)
kubectl debug -it <pod> --image=busybox:1.36
kubectl debug -it <pod> --image=nicolaka/netshoot --target=<container>
Debug a node
kubectl debug node/<node> -it --image=ubuntu
Proxy API server
kubectl proxy --port=8001
Access: http://localhost:8001/api/v1/namespaces/default/pods
Top resources
kubectl top nodes
kubectl top pods --all-namespaces
kubectl top pods --containers
Get raw
kubectl get pod <pod> -o yaml
kubectl get pod <pod> -o json
kubectl get pods -o wide
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,IP:.status.podIP
Dry run
kubectl create deployment test --image=nginx --dry-run=client -o yaml
kubectl apply -f pod.yaml --dry-run=server
Diff
kubectl diff -f deployment.yaml
Explain
kubectl explain pod
kubectl explain pod.spec.containers
kubectl explain pod --recursive
API resources
kubectl api-resources
kubectl api-resources --namespaced=true
kubectl api-resources --api-group=apps
Common problems
Pod in CrashLoopBackOff
kubectl logs <pod> --previous
kubectl describe pod <pod> # Check the events
kubectl get pod <pod> -o yaml # Check the config
Pod in ImagePullBackOff
kubectl describe pod <pod> # Pull error details
kubectl get pod <pod> -o jsonpath='{.spec.containers[*].image}'
Pod stuck in Pending
kubectl describe pod <pod> # Scheduling reason
kubectl get events --field-selector involvedObject.name=<pod>
kubectl describe nodes # Available resources
Service not responding
kubectl get endpoints <service> # Check the endpoints
kubectl get pods -l app=<label>
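When the endpoints look correct but traffic still fails, probing from inside the cluster separates DNS problems from routing problems. A sketch of that check (service and namespace names are illustrative; this requires a live cluster, so it is not directly testable here):

```shell
# Run a throwaway pod inside the cluster and probe the service by DNS name
kubectl run debug --rm -it --image=busybox:1.36 --restart=Never -- \
  sh -c 'nslookup web-service.production.svc.cluster.local && \
         wget -qO- http://web-service.production/'
```

If the lookup fails, suspect CoreDNS or a NetworkPolicy blocking port 53; if the lookup succeeds but the request hangs, suspect the service selector or the pods' readiness probes.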
Appendix
Common labels and selectors
Recommended labels
metadata:
labels:
app.kubernetes.io/name: myapp
app.kubernetes.io/instance: myapp-prod
app.kubernetes.io/version: "1.2.3"
app.kubernetes.io/component: backend
app.kubernetes.io/part-of: ecommerce
app.kubernetes.io/managed-by: helm
# Custom
environment: production
tier: backend
team: platform
Selectors
kubectl get pods -l app=nginx
kubectl get pods -l 'environment in (prod,staging)'
kubectl get pods -l 'environment!=dev'
kubectl get pods -l 'app=nginx,tier=frontend'
kubectl get pods -l 'version' # Has label
kubectl get pods -l '!version' # Doesn't have label
Common annotations
metadata:
annotations:
# General
description: "Main web application"
owner: "team-platform@example.com"
documentation: "https://docs.example.com/myapp"
# Kubernetes
kubernetes.io/change-cause: "Update to v1.2.3"
# Ingress
nginx.ingress.kubernetes.io/rewrite-target: /
cert-manager.io/cluster-issuer: letsencrypt-prod
# Prometheus
prometheus.io/scrape: "true"
prometheus.io/port: "9090"
prometheus.io/path: "/metrics"
# Service mesh (Istio)
sidecar.istio.io/inject: "true"
Useful environment variables
env:
# Pod metadata
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: SERVICE_ACCOUNT
  valueFrom:
    fieldRef:
      fieldPath: spec.serviceAccountName
# Resource limits
- name: CPU_REQUEST
  valueFrom:
    resourceFieldRef:
      containerName: app
      resource: requests.cpu
- name: MEMORY_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: app
      resource: limits.memory
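Inside the container, the downward API values above arrive as plain environment variables. A minimal sketch of how an app consumes them (the values here simulate what Kubernetes would inject):

```shell
# Simulate the downward API injection
POD_NAME=web-6d4cf56db9-x2x4v
POD_NAMESPACE=production
NODE_NAME=node-1
# Typical use: tag log lines with the pod's identity
echo "[$POD_NAMESPACE/$POD_NAME on $NODE_NAME] starting"
```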
kubectl shortcuts
Aliases
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kdel='kubectl delete'
alias kl='kubectl logs'
alias kx='kubectl exec -it'
alias kaf='kubectl apply -f'
alias kdf='kubectl delete -f'
Bash completion
source <(kubectl completion bash)
complete -o default -F __start_kubectl k
Switch namespace quickly
alias kns='kubectl config set-context --current --namespace'
kns production