Estimated duration: 20 minutes
Objective: Understand how to direct Pods to specific nodes using nodeSelector, nodeName, and basic nodeAffinity.
Context: You have different types of nodes in your cluster (e.g., some with fast SSDs, some general purpose) and need to schedule Pods accordingly.
Instructions:
Assume you have at least two worker nodes. Let's call them node01 and controlplane for this exercise. If you are using minikube or a single-node cluster, adapt the node names accordingly (e.g., minikube).
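If you are unsure of the node names in your cluster, list them first and substitute the values from the NAME column throughout this exercise:

kubectl get nodes -o wide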
1. Create a namespace named scheduling-test.
2. Label the nodes as follows:
- disk=ssd to node01
- tier=frontend to node01
- tier=backend to controlplane
Store the kubectl label node commands used in /opt/cka_exercises/attribution-avancee/Ex1/label_commands.sh.
3. Pod with nodeSelector:
- Create a Pod named nginx-ssd in the scheduling-test namespace using the image nginx:1.27.4.
- Use a nodeSelector so that it is only scheduled on a node labeled disk=ssd.
- Save the manifest to /opt/cka_exercises/attribution-avancee/Ex1/pod-nodeselector.yaml and apply it.
- Verify that the Pod is scheduled on node01. Store the verification command (e.g., kubectl get pod ... -o wide) output in /opt/cka_exercises/attribution-avancee/Ex1/nodeselector_pod_location.txt.
4. Pod with nodeName:
- Create a Pod named busybox-specific in scheduling-test using busybox:1.36 and the command sleep 3600.
- Use nodeName to place it directly on controlplane (or your second node name).
- Save the manifest to /opt/cka_exercises/attribution-avancee/Ex1/pod-nodename.yaml and apply it.
- Verify that the Pod runs on that node. Store the verification output in /opt/cka_exercises/attribution-avancee/Ex1/nodename_pod_location.txt.
5. Pod with nodeAffinity (Required):
- Create a Pod named frontend-app in scheduling-test using nginx:1.27.4.
- Use nodeAffinity to require scheduling on a node that has the label tier=backend.
- Save the manifest to /opt/cka_exercises/attribution-avancee/Ex1/pod-nodeaffinity.yaml and apply it.
- Verify that the Pod is scheduled on the tier=backend node (controlplane). Store the verification in /opt/cka_exercises/attribution-avancee/Ex1/nodeaffinity_pod_location.txt.
Solution:
1. Namespace creation:
kubectl create namespace scheduling-test
mkdir -p /opt/cka_exercises/attribution-avancee/Ex1
2. Node labeling:
File /opt/cka_exercises/attribution-avancee/Ex1/label_commands.sh:
#!/bin/bash
# Replace node01 and controlplane with your actual node names if needed.
kubectl label nodes node01 disk=ssd
kubectl label nodes node01 tier=frontend
kubectl label nodes controlplane tier=backend
# Verification (optional):
kubectl get nodes --show-labels
Run bash /opt/cka_exercises/attribution-avancee/Ex1/label_commands.sh.
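To confirm that each label landed on the intended node, you can also filter nodes by label selector (an optional check):

kubectl get nodes -l disk=ssd
kubectl get nodes -l tier=frontend
kubectl get nodes -l tier=backend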
3. Pod with nodeSelector:
File /opt/cka_exercises/attribution-avancee/Ex1/pod-nodeselector.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
  namespace: scheduling-test
spec:
  containers:
  - name: nginx
    image: nginx:1.27.4
  nodeSelector:
    disk: ssd
Apply and verify:
kubectl apply -f /opt/cka_exercises/attribution-avancee/Ex1/pod-nodeselector.yaml
# Wait for the pod to be Running
kubectl get pod nginx-ssd -n scheduling-test -o wide > /opt/cka_exercises/attribution-avancee/Ex1/nodeselector_pod_location.txt
# The content of nodeselector_pod_location.txt should show node01
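If you prefer not to write the manifest from scratch, one common approach is to generate a skeleton with a client-side dry run and then add the nodeSelector field by hand (shown here as an optional shortcut, not the only way):

kubectl run nginx-ssd --image=nginx:1.27.4 -n scheduling-test --dry-run=client -o yaml > /opt/cka_exercises/attribution-avancee/Ex1/pod-nodeselector.yaml
# Then edit the generated file and add under spec:
#   nodeSelector:
#     disk: ssd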
4. Pod with nodeName:
File /opt/cka_exercises/attribution-avancee/Ex1/pod-nodename.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-specific
  namespace: scheduling-test
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sleep", "3600"]
  nodeName: controlplane # Adjust if your node name is different
Apply and verify:
kubectl apply -f /opt/cka_exercises/attribution-avancee/Ex1/pod-nodename.yaml
# Wait for the pod to be Running
kubectl get pod busybox-specific -n scheduling-test -o wide > /opt/cka_exercises/attribution-avancee/Ex1/nodename_pod_location.txt
# The content of nodename_pod_location.txt should show controlplane.
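As an extra check, the assigned node can be read directly from the Pod spec; because nodeName bypasses the scheduler, this should match exactly what you wrote in the manifest:

kubectl get pod busybox-specific -n scheduling-test -o jsonpath='{.spec.nodeName}'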
5. Pod with nodeAffinity (required):
File /opt/cka_exercises/attribution-avancee/Ex1/pod-nodeaffinity.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: frontend-app
  namespace: scheduling-test
spec:
  containers:
  - name: nginx
    image: nginx:1.27.4
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: tier
            operator: In
            values:
            - backend
Apply and verify:
kubectl apply -f /opt/cka_exercises/attribution-avancee/Ex1/pod-nodeaffinity.yaml
# Wait for the pod to be Running
kubectl get pod frontend-app -n scheduling-test -o wide > /opt/cka_exercises/attribution-avancee/Ex1/nodeaffinity_pod_location.txt
# The content of nodeaffinity_pod_location.txt should show controlplane.
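For contrast with the required rule above, nodeAffinity also has a soft form that the scheduler tries to honour but may ignore if no matching node exists. A minimal sketch of the preferred variant (not required for this exercise):

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: tier
          operator: In
          values:
          - backend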
Cleanup commands:
kubectl delete namespace scheduling-test
# Replace node01 and controlplane with your actual node names if needed.
kubectl label node node01 disk- tier-
kubectl label node controlplane tier-
rm -rf /opt/cka_exercises/attribution-avancee/Ex1
Estimated duration: 25 minutes
Objective: Use taints to prevent Pods from scheduling on certain nodes, tolerations to allow specific Pods to use those nodes, and DaemonSets to run Pods on all eligible nodes.
Context: You have a specialized node (e.g., for GPU tasks) that should not run general workloads, and you want to deploy a monitoring agent on all nodes in your cluster.
Instructions:
Assume node01 is the specialized node.
1. Create a namespace named advanced-scheduling.
2. Apply the following taint to node01: special=true:NoSchedule. Store the kubectl taint node command in /opt/cka_exercises/attribution-avancee/Ex2/taint_command.sh.
3. Pod without toleration:
- Create a Pod named regular-pod in advanced-scheduling without any tolerations.
- Save the manifest to /opt/cka_exercises/attribution-avancee/Ex2/pod-no-toleration.yaml and apply it.
- The Pod should remain Pending if node01 is the only available/suitable node, or be scheduled on another node (e.g., controlplane) if one is available and untainted.
- If the Pod is Pending, use kubectl describe pod regular-pod -n advanced-scheduling to find out why. Note the message in the Events section related to taints. Store your findings in /opt/cka_exercises/attribution-avancee/Ex2/pod_no_toleration_status.txt.
4. Pod with toleration:
- Create a Pod named special-pod in advanced-scheduling.
- It must be able to schedule on node01 (tolerating special=true:NoSchedule).
- Save the manifest to /opt/cka_exercises/attribution-avancee/Ex2/pod-with-toleration.yaml and apply it.
- Verify that it is scheduled on node01. Store the verification in /opt/cka_exercises/attribution-avancee/Ex2/pod_with_toleration_location.txt.
5. Monitoring DaemonSet:
- Create a DaemonSet named node-monitor in the namespace advanced-scheduling.
- Its Pods must carry the labels app=monitoring,component=agent.
- Use the image busybox:1.36 with the command sh -c "while true; do echo 'Monitoring node $(hostname)'; sleep 30; done".
- It must run on all nodes, including node01 despite its taint.
- Save the manifest to /opt/cka_exercises/attribution-avancee/Ex2/daemonset-monitor.yaml and apply it.
- Verify that a Pod runs on every node, including node01. Store the output of kubectl get pods -n advanced-scheduling -l app=monitoring -o wide in /opt/cka_exercises/attribution-avancee/Ex2/daemonset_pods_status.txt.
Solution:
1. Namespace creation:
kubectl create namespace advanced-scheduling
mkdir -p /opt/cka_exercises/attribution-avancee/Ex2
2. Taint on node node01:
File /opt/cka_exercises/attribution-avancee/Ex2/taint_command.sh:
#!/bin/bash
# Replace node01 with your actual node name if needed.
kubectl taint nodes node01 special=true:NoSchedule
# Verification (optional):
kubectl describe node node01 | grep Taints
Run bash /opt/cka_exercises/attribution-avancee/Ex2/taint_command.sh.
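To list the taints of every node at once, a jsonpath query such as the following can be used (the output format is only a suggestion):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'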
3. Pod without toleration:
File /opt/cka_exercises/attribution-avancee/Ex2/pod-no-toleration.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: regular-pod
  namespace: advanced-scheduling
spec:
  containers:
  - name: nginx
    image: nginx:1.27.4
Apply it:
kubectl apply -f /opt/cka_exercises/attribution-avancee/Ex2/pod-no-toleration.yaml
File /opt/cka_exercises/attribution-avancee/Ex2/pod_no_toleration_status.txt:
The Pod `regular-pod` should remain in the Pending state if `node01` is the only available node.
If other nodes (such as `controlplane`) are available and untainted, it will be scheduled on one of them.
If the Pod is `Pending`, the output of `kubectl describe pod regular-pod -n advanced-scheduling` will show something like the following in the Events section:
`Warning FailedScheduling <time> default-scheduler 0/X nodes are available: 1 node(s) had taint {special: true}, that the pod didn't tolerate.` (where X is the number of nodes in the cluster).
The reason is that `node01` carries a `special=true:NoSchedule` taint and `regular-pod` has no toleration for that specific taint.
To check its status: kubectl get pod regular-pod -n advanced-scheduling -o wide
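One possible way to build the requested status file is to combine the Pod status with its scheduling events; this is only a sketch, any equivalent commands are fine:

kubectl get pod regular-pod -n advanced-scheduling -o wide > /opt/cka_exercises/attribution-avancee/Ex2/pod_no_toleration_status.txt
kubectl describe pod regular-pod -n advanced-scheduling | grep -A 10 'Events:' >> /opt/cka_exercises/attribution-avancee/Ex2/pod_no_toleration_status.txt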
4. Pod with toleration:
File /opt/cka_exercises/attribution-avancee/Ex2/pod-with-toleration.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: special-pod
  namespace: advanced-scheduling
spec:
  containers:
  - name: nginx
    image: nginx:1.27.4
  tolerations:
  - key: "special"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
Apply and verify:
kubectl apply -f /opt/cka_exercises/attribution-avancee/Ex2/pod-with-toleration.yaml
# Wait for the pod to be Running
kubectl get pod special-pod -n advanced-scheduling -o wide > /opt/cka_exercises/attribution-avancee/Ex2/pod_with_toleration_location.txt
# The content of pod_with_toleration_location.txt should show node01 (the toleration allows
# scheduling there; on a typical cluster the control-plane node's own taint keeps the pod off it).
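An equivalent toleration can be written with the Exists operator, which matches any value of the key (useful when only the key and effect matter):

tolerations:
- key: "special"
  operator: "Exists"
  effect: "NoSchedule"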
5. Monitoring DaemonSet:
File /opt/cka_exercises/attribution-avancee/Ex2/daemonset-monitor.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
  namespace: advanced-scheduling
spec:
  selector:
    matchLabels:
      app: monitoring
      component: agent
  template:
    metadata:
      labels:
        app: monitoring
        component: agent
    spec:
      containers:
      - name: monitor-agent
        image: busybox:1.36
        command:
        - sh
        - -c
        - "while true; do echo 'Monitoring node $(hostname)'; sleep 30; done"
        resources:
          limits:
            memory: "100Mi"
          requests:
            cpu: "10m"
            memory: "50Mi"
      tolerations:
      - key: "special"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
Apply and verify:
kubectl apply -f /opt/cka_exercises/attribution-avancee/Ex2/daemonset-monitor.yaml
# Wait for all pods to be Running
kubectl get pods -n advanced-scheduling -l app=monitoring -o wide > /opt/cka_exercises/attribution-avancee/Ex2/daemonset_pods_status.txt
# daemonset_pods_status.txt should show one pod per node, including node01 (if your
# control-plane node carries its own taint, see the note below).
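Note: on clusters bootstrapped with kubeadm, the control-plane node usually carries the taint node-role.kubernetes.io/control-plane:NoSchedule (node-role.kubernetes.io/master on older releases), so the DaemonSet above will not place a Pod there. A minimal sketch of the tolerations list to put under the Pod template spec if you also want coverage of the control-plane node (assuming the standard kubeadm taint):

tolerations:
- key: "special"
  operator: "Equal"
  value: "true"
  effect: "NoSchedule"
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"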
Cleanup commands:
kubectl delete namespace advanced-scheduling
# Replace node01 with your actual node name if needed.
kubectl taint nodes node01 special-
rm -rf /opt/cka_exercises/attribution-avancee/Ex2