
Lab 23: Liveness and Readiness Probes

Introduction:
Liveness: Liveness probes let Kubernetes know whether your app is alive or dead. If your app is alive, Kubernetes leaves it alone. If your app is dead, Kubernetes removes the Pod and starts a new one to replace it.

Readiness: Readiness probes let Kubernetes know when your app is ready to serve traffic. Kubernetes makes sure the readiness probe passes before allowing a service to send traffic to the Pod. If a readiness probe starts to fail, Kubernetes stops sending traffic to the Pod until it passes again.
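For orientation, a container spec that defines both probe types might look like the following minimal sketch (the image, paths, and timing values are illustrative assumptions, not taken from this lab's manifests):

containers:
- name: web
  image: nginx
  livenessProbe:            # restart the container when this check fails
    httpGet:
      path: /
      port: 80
    periodSeconds: 5
    failureThreshold: 2
  readinessProbe:           # withhold service traffic until this check passes
    httpGet:
      path: /
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 5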

In this Lab, you will learn

• Liveness Probes
• Readiness Probes
• Clean up

Note: Ensure you have a running cluster deployed.


1. Ensure that you have logged in as the root user (password: linux) on the kube-master node.

1.1 Let us clone the git repository which contains the manifests required for this exercise, by executing the below command.

# git clone https://github.com/EyesOnCloud/k8s-probes.git


Output:

1.2 Let us view the manifest file to create a Deployment and a Service, by executing the below
command.
# cat -n ~/k8s-probes/deployment.yaml
Output:

Note: The manifest defines a deployment that starts the Nginx server in the background and runs a while loop that keeps the container up and running, even if the Nginx process fails.
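The manifest itself is shown only in the lab output; a minimal sketch of what such a Deployment and Service could look like follows (the names, image, label, and port are assumptions inferred from the commands used later in this lab):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        command: ["/bin/bash", "-c"]
        # start nginx in the background, then loop forever so the
        # container stays Running even if the nginx process dies
        args: ["nginx; while true; do sleep 1; done"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80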

1.3 Let us create the deployment, by executing the below command.

# kubectl apply -f ~/k8s-probes/deployment.yaml


Output:

1.4 Let us verify the details, by executing the below command.


# kubectl get all -l app=nginx

Output:

1.5 Let us capture the cluster ip of the service, by executing the below commands.
# kubectl get svc nginx
# CLUSTER_IP=$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')
# echo $CLUSTER_IP
Output:

1.6 Let us access the webserver, by executing the below commands.

# curl -v -s $CLUSTER_IP 2>&1 | head -n 10


Output:

1.7 Let us kill the Nginx Server Process, by executing the below command.

Let us kill the Nginx process to simulate a broken POD that still lives, in the sense that the container is up and running. For that, let us find the process ID.

# POD=$(kubectl get pod | grep nginx | awk '{print $1}')

# kubectl exec $POD -- bash -c 'find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null'
Output:

1.8 Here, we can see that the process ID of the nginx process is 6. Let us kill the process now:

# kubectl exec $POD -- bash -c 'kill 6'

1.9 Let us access the application again and notice that it is unresponsive:

# curl -s -v $CLUSTER_IP 2>&1 | head -n 10


Output:

1.10 However, the POD is not restarted. It is still the old POD:
# kubectl get pods
Output:

1.11 To overcome this issue, let us add a Liveness Probe and re-deploy the application. First, let us delete the old deployment, by executing the below command.
# kubectl delete -f ~/k8s-probes/deployment.yaml
1.12 Let us view the manifest file, by executing the below command.

# cat -n ~/k8s-probes/deployment_live_probe.yaml
Output: Truncated output; the liveness probe is added to the previous deployment.
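The truncated manifest is not reproduced here; the added section would look roughly like the sketch below (the probe type and timing values are assumptions, chosen so that a dead nginx is detected within the roughly 20-second wait noted before step 1.19):

livenessProbe:
  httpGet:                 # probe nginx over HTTP
    path: /
    port: 80
  initialDelaySeconds: 5   # give nginx time to start
  periodSeconds: 5         # check every 5 seconds
  failureThreshold: 2      # restart after 2 consecutive failures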

1.13 Let us create the deployment with the livenessProbe added, by executing the below command.


# kubectl apply -f ~/k8s-probes/deployment_live_probe.yaml
Output:

1.14 Let us verify the details, by executing the below command.


# kubectl get all -l app=nginx

Output:

1.15 Let us capture the cluster ip of the service, by executing the below commands.
# kubectl get svc nginx
# CLUSTER_IP=$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')
# echo $CLUSTER_IP
Output:

1.16 Let us access the application, by executing the below command.


# curl -v -s $CLUSTER_IP 2>&1 | head -n 10
Output:

1.17 Let us kill the Nginx process again, by executing the below commands.

# POD=$(kubectl get pod | grep nginx | awk '{print $1}')

# kubectl exec $POD -- bash -c 'find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null' | grep nginx
Output:

1.18 We see that the process ID of the nginx process is 6. Let us kill the process now:
# kubectl exec $POD -- bash -c 'kill 6'

Note: Wait for about 20 seconds before running the next command, so the liveness probe has time to detect the failure and restart the container.
1.19 Let us list the pod, by executing the below command.

# kubectl get pod $POD


Output:

1.20 Now let us access the application, by executing the below command.
# curl -s -v $CLUSTER_IP 2>&1 | head -n 10
Output:

Note: With the help of the Liveness Probe, the problem has healed itself. Any POD that does
not respond properly is restarted automatically.

1.21 Let us delete the deployment, by executing the below command.


# kubectl delete -f ~/k8s-probes/deployment_live_probe.yaml
Output:

2 Readiness Probes

Readiness probes, in turn, are used to detect how long a container needs to boot the application properly. With a readiness probe, a rollout of a non-functional version of a ReplicaSet or Deployment stops after the first new POD is detected as non-functional. That way, the service is not degraded, giving the administrator a chance to mitigate the problem.

We want to see what happens to an exposed service if a POD/container starts with a long boot time. For this, we create a service with two PODs, one of which has a boot time of 120 seconds. We simulate this by running a 120-second sleep command before starting the actual application. Statistically, every second curl request will then fail: the HTTP requests are load-balanced between the POD that is already working and the one that is still booting, leading to a "failure" status. The problem is that both endpoints are added to the service right away, even though one of the PODs is not responsive yet.

2.1 Let us view the manifest file, by executing the below command.

# cat -n ~/k8s-probes/pods.yaml
Output:
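The manifest is shown only in the lab output; a minimal sketch of the idea follows (the pod name nginx-slowboot matches step 2.5, while the label, image, and service details are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx-probes
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-slowboot
  labels:
    app: nginx-probes
spec:
  containers:
  - name: nginx
    image: nginx
    command: ["/bin/bash", "-c"]
    # simulate a 120-second boot time before nginx starts serving
    args: ["sleep 120; nginx -g 'daemon off;'"]
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx-probes
  ports:
  - port: 80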

2.2 Let us create the pods and service, by executing the below command.

# kubectl apply -f ~/k8s-probes/pods.yaml

Output:

2.3 Let us capture the cluster ip of the service, by executing the below command.

# CLUSTER_IP=$(kubectl get svc | grep nginx | awk '{print $3}')

2.4 Let us access the application, by executing the below command.

# while true; do curl -s $CLUSTER_IP -o /dev/null && echo success || echo failure; sleep 2; done

Output:

Press Ctrl+C to stop the loop.

If you wait long enough (more than 2 minutes), all curl commands will be successful again, indicating that the slower nginx POD is ready as well.

In the next step, we will improve the initialization procedure by adding a readiness probe.

2.5 Let us delete the slow-boot pod and recreate it with a readiness probe, by executing the below command.
# kubectl delete pod nginx-slowboot
2.6 Let us view the manifest of POD with a readiness probe, by executing the below command.
# cat -n ~/k8s-probes/slowboot.yaml
Output: Truncated output…
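The readiness probe section added to the slow-booting POD might look roughly like this sketch (the probe type and timing values are assumptions):

readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 10   # no point probing immediately after start
  periodSeconds: 10         # re-check until the probe finally passes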

2.7 Let us create the pod, by executing the below command.
# kubectl apply -f ~/k8s-probes/slowboot.yaml
Output:

2.8 Let us list the pods, by executing the below command.


# kubectl get pods

Output:

2.9 Let us list the endpoints, by executing the below command.
# kubectl get ep | grep nginx

Output:

Let us run the curl loop again, by executing the below command.
# while true; do curl -s $CLUSTER_IP -o /dev/null && echo success || echo failure; sleep 2; done
Output:

Press Ctrl+C to stop the loop.

All curl requests will be successful. The reason is that the endpoint of the slowly booting POD is not added to the service's endpoint list until it successfully replies to the readiness probe. This way, you will never create a black hole for HTTP requests.

If you wait for more than 60 seconds, stop the while loop with Ctrl+C and look at the list of endpoints again; the second endpoint will now be available:
# kubectl get ep | grep nginx
Output:

2.10 Let us cleanup, by executing the below command.
# kubectl delete -f ~/k8s-probes/pods.yaml
Output:

