
Managing Containers Effectively with Kubernetes
Introduction to Kubernetes
 There are two major container orchestration tools on the market:

 Docker Swarm

 Kubernetes

 The major difference between the two platforms is their complexity.


Kubernetes is well suited for complex applications.
 On the other hand, Docker Swarm is designed for ease of use, making
it a preferable choice for simple applications.
2
Introduction to Kubernetes
 Features of Kubernetes

 Automated Scheduling

 Self-Healing Capabilities

 Automated rollouts & rollback

 Horizontal Scaling & Load Balancing

 Offers environment consistency for development, testing, and production

3
Introduction to Kubernetes
 Features of Kubernetes

 Infrastructure is loosely coupled, so each component can act as a separate unit.
 Provides a higher density of resource utilization

 Offers enterprise-ready features

 Application-centric management

 Auto-scalable infrastructure

 You can create predictable infrastructure


4
Introduction to Kubernetes
 Kubernetes Basics

 Cluster: A collection of hosts (servers) that lets you aggregate their available resources (RAM, CPU, disk, and devices) into a usable pool.
 Master: The master is the collection of components which make up the control plane of Kubernetes. These components are used for all cluster decisions, including scheduling and responding to cluster events.

5
Introduction to Kubernetes
 Kubernetes Basics

 Node: A single host, which can be a physical or virtual machine, capable of running pods. A node runs kube-proxy and the kubelet, which are considered part of the cluster.
 Namespace: A logical cluster or environment. It is a widely used method for scoping access or dividing a cluster.

6
Kubernetes Architecture

7
Kubernetes Architecture
 Master Node:
 The master node is the first and most vital component, responsible for the management of the Kubernetes cluster.
 It is the entry point for all kinds of administrative tasks.
 There may be more than one master node in the cluster for fault tolerance.
 The master node has various components: the API Server, Controller Manager, Scheduler, and etcd.

8
Kubernetes Architecture
 API Server:
 The API server acts as an entry point for all the REST commands used for
controlling the cluster.

 Scheduler:
 The scheduler assigns tasks to the worker (slave) nodes.

 It stores the resource usage information for every worker node.

 It is responsible for distributing the workload.

9
Kubernetes Architecture
 Scheduler:
 It also helps you to track how the workload is distributed across cluster nodes.

 It helps you to place the workload on resources which are available and can accept the workload.

 Etcd:
 etcd stores configuration details and the cluster's key-value state data.

 The other control plane components communicate with etcd to read and write this state.
 Network rules and port-forwarding activity are managed on each node by kube-proxy, not by etcd.
10
Kubernetes Architecture
 Worker/Slave nodes:
 Worker nodes are another essential component. They contain all the services required to manage the networking between containers, communicate with the master node, and assign resources to the scheduled containers.

 Kubelet:
 This gets the configuration of a Pod from the API server and ensures that
the described containers are up and running.

11
Kubernetes Architecture
 Docker Container:
 A container runtime (Docker) runs on each of the worker nodes and runs the configured pods.

 Kube-proxy:
 Kube-proxy acts as a network proxy and load balancer, handling service routing on each worker node.

 Pods:
 A pod is a combination of single or multiple containers that logically run
together on nodes.
12
Kubernetes - Other Key Terminologies
 Replication Controllers
 A replication controller is an object which defines a pod template.

 It also controls parameters to scale identical replicas of a pod horizontally by increasing or decreasing the number of running copies.

 Replica Sets (ReplicaSets)
 Replica sets are an iteration on the replication controller design, with more flexibility in how the controller recognizes the pods it is meant to manage (see the sketch below).
 They have largely replaced replication controllers because of their more expressive replica selection (set-based selectors).
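A minimal ReplicaSet sketch (the name, labels, and image are illustrative, not from the slides) showing the set-based selector that distinguishes it from a replication controller:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs              # hypothetical name
spec:
  replicas: 3                    # number of identical pod copies to keep running
  selector:
    matchExpressions:            # set-based selector (not available on ReplicationController)
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: web
        image: nginx             # illustrative image
        ports:
        - containerPort: 80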
13
Kubernetes - Other Key Terminologies
 Deployments
 Deployment is a common workload which can be directly created and
manage.
 Deployment use replication set as a building block which adds the feature
of life cycle management.

 Stateful Sets
 It is a specialized pod control which offers ordering and uniqueness.

 It is mainly used to have fine-grained control, which you have a particular


need regarding deployment order, stable networking, and persistent data.
14
Kubernetes - Other Key Terminologies
 Daemon Sets
 Daemon sets are another specialized form of pod controller that runs a
copy of a pod on every node in the cluster.
 This type of pod controller is an effective method for deploying pods that
allows you to perform maintenance and offers services for the nodes
themselves.

15
Kubernetes vs. Docker Swarm

16
Introduction to Kubernetes
 Advantages of Kubernetes:
 Easy organization of service with pods

 It is developed by Google, who bring years of valuable industry


experience to the table
 Largest community among container orchestration tools

 Offers a variety of storage options, including on-premises, SANs and


public clouds
 Adheres to the principles of immutable infrastructure

 Kubernetes can run on-premises on bare metal, on OpenStack, or on public clouds such as Google, Azure, AWS, etc.

17
Introduction to Kubernetes
 Advantages of Kubernetes:
 Helps you avoid vendor lock-in, as it does not rely on vendor-specific APIs or services except where Kubernetes provides an abstraction, e.g., load balancers and storage.
 Containerization with Kubernetes allows software to be packaged in a way that serves these goals: applications can be released and updated without any downtime.
 Kubernetes lets you ensure that containerized applications run where and when you want, and helps them find the resources and tools they need to work.
18
Introduction to Kubernetes
 Disadvantages of Kubernetes:
 The Kubernetes dashboard is not as useful as it could be.

 Kubernetes is somewhat complicated and unnecessary in environments where all development is done locally.

 Security is not very effective out of the box.

19
Installing Kubernetes on a local machine
 When developing a containerized application that is to be hosted on
Kubernetes, it is important to be able to run the application (with its
containers) on your local machine, before deploying it on remote
Kubernetes production clusters.
 In order to install a Kubernetes cluster locally, there are several
solutions, which are as follows:
 The first solution is to use Docker Desktop.

1. In Docker Desktop, activate the Enable Kubernetes option in the Kubernetes tab of Settings.
20
Installing Kubernetes on a local machine

1. In Docker Desktop, activate the Enable Kubernetes option in the Kubernetes tab of Settings.

21
Installing Kubernetes on a local machine

2. After clicking on the Apply button, Docker Desktop will install a mini
Kubernetes cluster, and the kubectl client tool, on the local machine.
 The second solution is to install Minikube, which also installs a
simplified Kubernetes cluster locally.
 Following the local installation of Kubernetes, check its installation by
executing the following command in a Terminal:
 kubectl version --short
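 If Minikube is used instead, a rough sketch of the commands (assuming Minikube is already installed for your platform) is:

minikube start                    # creates and starts a single-node local cluster
kubectl config current-context    # should report minikube (or docker-desktop)
kubectl version --short           # confirms the client and the cluster respond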

22
Installing the Kubernetes dashboard
 After installing our Kubernetes cluster, there is a need for another
element, which is the Kubernetes dashboard.
 In order to install the Kubernetes dashboard, which is a pre-packaged
containerized web application that will be deployed in our cluster, we will run
the following command in a Terminal:
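A sketch of that command, assuming the standard recommended manifest published by the Kubernetes dashboard project (the version tag may differ from the one used in the course):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml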

23
Installing the Kubernetes dashboard
 Its execution is shown in the following screenshot:

24
Installing the Kubernetes dashboard
 To open the dashboard and connect to it from our local machine, first
create a proxy between the Kubernetes cluster and our machine by
performing the following steps:
 1. To create the proxy, we execute the kubectl proxy command in a
Terminal, and the detail of the execution is shown in the following
screenshot:

 The proxy listens on the localhost address (127.0.0.1) on port 8001.
25
Installing the Kubernetes dashboard
 Then, in a web browser, open the URL:
 http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
 This is a local URL (localhost and port 8001) that is created by the proxy, and that points to the Kubernetes dashboard application.
26
Installing the Kubernetes dashboard
 After clicking on the SIGN IN button, the dashboard is displayed as
follows:

27
First example of Kubernetes application deployment

 After installing our Kubernetes cluster, deploy an application in it.

 First of all, it is important to know that deploying an application in Kubernetes creates new instances of a Docker image in cluster pods, so we need a Docker image that contains the application.
 To deploy an instance of the Docker image, create a new k8sdeploy folder and, inside it, create a Kubernetes deployment YAML specification file (myapp-deployment.yml) with the following content:

28
First example of Kubernetes application deployment
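The original slide shows the file as a screenshot; a plausible sketch of myapp-deployment.yml, consistent with the fields discussed on the following slides (the webapp name and the image reference are assumptions), is:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp                   # pods will be named webapp-<unique id>
spec:
  replicas: 2                    # two identical pod instances
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: <dockerhub-user>/demobook:latest   # image name and tag from Docker Hub (placeholder)
        ports:
        - containerPort: 80      # port exposed by the container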

29
First example of Kubernetes application deployment

 In this code, description of deployment is as follows:

 The apiVersion property is the version of the Kubernetes API that should be used.

 In the kind property, we indicate that the specification type is a Deployment.

 The replicas property indicates the number of pods that Kubernetes will create in the cluster; here, we choose two instances.

30
First example of Kubernetes application deployment

 In this example, we chose two replicas, which can, at the very least, distribute the traffic load of the application (add more replicas if there is a high volume of load).
 Replication also ensures the proper functioning of the application: if one of the two pods has a problem, the other, which is an identical replica, keeps the application working.
 Then, in the containers section, we indicate the image (from Docker Hub) with its name and tag.
 Finally, the ports property indicates the port that the container will expose.
31
First example of Kubernetes application deployment

 To deploy our application, we go to our Terminal, and execute one of


the essential kubectl commands (kubectl apply) as follows:
 kubectl apply -f myapp-deployment.yml

 The -f parameter corresponds to the YAML specification file.

 This command applies the deployment that is described in the YAML


specification file on the Kubernetes cluster.
 Following the execution of this command, check the status of this
deployment, by displaying the list of pods in the cluster.

32
First example of Kubernetes application deployment

 To do this in the Terminal, we execute the kubectl get pods command,


which returns the list of cluster pods.
 The following screenshot shows the execution of the deployment and
displays the information in the pods, which we use to check the
deployment:

33
First example of Kubernetes application deployment

 In the preceding screenshot, the second command displays two pods,


with the name (webapp) specified in the YAML file, followed by a
unique ID, and Running status.
 We can also visualize the cluster status on the Kubernetes web dashboard: the webapp deployment with the Docker image that was used, and the two pods that were created.
 The application has been successfully deployed in the Kubernetes cluster.

 But, for the moment, it is only accessible inside the cluster.

 For it to be usable, we need to expose it outside the cluster.


34
First example of Kubernetes application deployment

 In order to access the web application from outside the cluster, add a Service of type NodePort to the cluster.
 To add this NodePort service, in the same way as for the deployment, create a second YAML file (myapp-service.yml) with the service specification in the same k8sdeploy directory, which has the following code:
 In this code, we specify the kind, Service, as well as the type of service, NodePort.
35
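The slide shows this file as a screenshot; a hedged sketch of what myapp-service.yml would look like, assuming the webapp label used in the deployment sketch earlier:

apiVersion: v1
kind: Service
metadata:
  name: webapp-service            # hypothetical name
spec:
  type: NodePort
  selector:
    app: webapp                   # must match the pod labels of the deployment
  ports:
  - port: 80                      # port exposed internally by the service
    targetPort: 80                # port the container listens on
    nodePort: 31000               # port exposed externally on each node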
First example of Kubernetes application deployment

 Then, in the ports section, we specify the port translation: the 80


port, which is exposed internally, and the 31000 port, which is
exposed externally to the cluster.
 To create this service on the cluster, we execute the kubectl apply command, but this time with our myapp-service.yml file as a parameter, as follows:
 kubectl apply -f myapp-service.yml

36
First example of Kubernetes application deployment

 The execution of the command creates the service within the cluster, and, to test the application, open a web browser at the http://localhost:31000 URL; the page is displayed as follows:

 The application is now deployed on a Kubernetes cluster, and it can be accessed from outside the cluster.
37
Automatic deployment of the application in Kubernetes

● The settings for the Deploy to Kubernetes task are as follows:


○ Choose the endpoint of the Kubernetes cluster—the New button allows us
to add a new endpoint configuration of a cluster.

○ Then, choose the command to be executed by kubectl; here, we will execute apply.

○ Finally, choose the directory, coming from the artifacts, which contains
the YAML specification files.

4. We save the release definition by clicking on the Save button.

5. Finally, we click on the Create a new release button, which triggers a


deployment in our AKS cluster.
38
Automatic deployment of the application in Kubernetes

● At the end of the release execution, it is possible to check that the


application has been deployed by executing the command in a
Terminal as follows:
○ kubectl get pods,services

● This command displays the list of pods and services that are present
in our AKS Kubernetes cluster, and the result of this command is
shown in the following screenshot:

39
Automatic deployment of the application in Kubernetes

● We can see our two deployed web application pods and the NodePort service that exposes our application outside the cluster.
● Then, we open a web browser at the http://localhost:31000 URL, and our application is displayed correctly:

40
Kubernetes Core Concepts
● Kubernetes is built on a few basic objects, each of which can be manipulated
via kubectl and yaml files.
● Common Components
○ When writing Kubernetes .yaml you will always use a “metadata header” that looks like this:

kind: KIND
apiVersion: VERSION
metadata:
  name: NAME

● Basic Objects
○ Containers
  A container is a standardized group of software packaged as a unit and run on a container platform.
  Note: containers are always embedded in a pod spec, for example:

  name: nginx
  image: nginx:1.10
  ports:
  - containerPort: 80
41
Kubernetes Core Concepts
● Pods
o A pod is a logical unit of one or more containers which represents a single unit of deployment.

kind: Pod
apiVersion: v1
metadata:
  name: my-favorite-pod
  namespace: ckad-notes
  labels:
    app: my-app
spec:
  containers:
  - name: first-container
    image: busybox
    command: ["echo"]
    args: ["Hello World"]
  - name: second-container
    image: nginx
    ports:
    - containerPort: 80

● Services
o A service is a logical group of pods together with a policy for accessing them. It is a networking abstraction that allows the containers in a pod to be accessed via DNS entries and TCP or UDP channels.

kind: Service
apiVersion: v1
metadata:
  name: my-unique-name
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: random-app
42
Kubernetes Core Concepts
● Volumes
● A volume is a local filesystem which is accessible to the containers in a pod as though it were present in a local directory. The lifetime of a volume is tied to the lifetime of the pod it is attached to.
● There is no “generic” volume spec. The specification is highly dependent on the source of the filesystem, which will probably be a cloud provider.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}

43
Kubernetes Core Concepts
● Namespaces
o A namespace is a group of distinctly named cluster entities. No two entities of the
same type in the same namespace can share the same name.
o It can be useful to create these for scratch space when playing around.
o One-liner to change namespace:
kubectl config set-context $(kubectl config current-context) --namespace=<insert-namespace-name-here>

● Controllers
o Controllers build on the basic objects and are where the power of Kubernetes lies. Most allow for horizontal scaling of pods to meet demand.

44
Kubernetes Core Concepts
● Deployments
o Deployments manage a group of replicated pods which use the same spec. They can be scaled up and down on demand.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

45
Kubernetes Core Concepts
● StatefulSets
● StatefulSets are deployments which additionally provide guarantees on the ordering and uniqueness of the pods. They provide persistent network identifiers and persistent storage, and all deployments and rolling updates also occur in order.
● A StatefulSet cannot exist without an accompanying Service, specified in spec.serviceName.

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx            # MUST BE SAME as the template labels
  serviceName: "nginx"      # Should match the Service name
  replicas: 3               # by default is 1
  template:
    metadata:
      labels:
        app: nginx          # SAME AS THIS
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web

46
Kubernetes Core Concepts
● DaemonSet
● A DaemonSet is a deployment which ensures that all nodes in the system run a copy of a pod.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      nodeSelector:          # Selects nodes to run this pod on.
        env: prod
      containers:
      - name: fluentd-elasticsearch
        image: k8s.gcr.io/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi

47
Kubernetes Core Concepts
● Job
● A Job is a temporary deployment which Kubernetes will start and ensure that a required number of them successfully terminate.

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4

48
Kubernetes Health check
● Kubernetes is the leading orchestration platform for containerized applications. To
manage containers effectively, Kubernetes needs a way to check their health to
see if they are working correctly and receiving traffic. Kubernetes uses health
checks—also known as probes—to determine if instances of your app are running
and responsive.
● Probes are the technique Kubernetes applications use to expose information about their internal state for monitoring. They let your cluster identify the running pods (containers), monitor the health of an application, and make sure that only healthy Pods serve traffic.
● As you deploy and operate distributed applications, containers are created,
started, run, and terminated. To check a container's health in the different stages
of its lifecycle, Kubernetes uses different types of probes.

49
Why Probes are Important

● Applications can become unreliable for a variety of reasons, such as temporary connection loss, configuration errors, and application errors. Developers monitor application health using probes. Probes help developers learn about application status, resource utilization, and bugs, which makes it easier to fix application problems, manage resources, and organize them effectively.
● You can only assert the system's health if all of its components are working. Using
probes, you can determine whether a container is dead or alive, and decide if
Kubernetes should temporarily prevent other containers from accessing it. Kubernetes
verifies individual containers’ health to determine the overall pod health.
● Probes are used to detect the following:
○ Containers that haven’t become ready yet and can’t serve traffic.
○ Containers that are overloaded and can’t serve the supplementary traffic.
○ Containers that are completely dead and not serving any traffic.

50
Types of Probes

● As you deploy and operate distributed applications, containers are created, started,
run, and terminated. To check a container's health in the different stages of its
lifecycle, Kubernetes uses different types of probes.
● Liveness probes
○ Allow Kubernetes to check if your app is alive. The kubelet agent that runs on each node uses
the liveness probes to ensure that the containers are running as expected. If a container app is
no longer serving requests, kubelet will intervene and restart the container.
● Readiness probes
○ Readiness probes run during the entire lifecycle of the container. Kubernetes uses this probe to
know when the container is ready to start accepting traffic. If a readiness probe fails,
Kubernetes will stop routing traffic to the pod until the probe passes again.
● Startup probes
○ Startup probes are used to determine when a container application has been initialized
successfully. If a startup probe fails, the pod is restarted. When pod containers take too long to
become ready, readiness probes may fail repeatedly. In this case, containers risk being
terminated by kubelet before they are up and running. This is where the startup probe comes to
the rescue. 51
Creating Probes

● To create health check probes, you must issue requests against a


container. There are three ways of implementing Kubernetes
liveness, readiness, and startup probes:
○ tcpSocket: checks that a TCP connection can be opened to the given port
○ exec: runs a command inside the container; exit code 0 means success
○ httpGet: sends an HTTP request; a response code between 200 and 399 means success
● A probe/health check may return the following results:
○ Success: The container passed the health check.
○ Failure: The container failed the health check.
○ Unknown: The health check failed for unknown reasons.
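As a minimal sketch (container ports and paths are illustrative, each variant shown separately), the same probe can be declared with any of the three mechanisms inside a container spec:

    # tcpSocket variant: kubelet opens a TCP connection to port 3306
    livenessProbe:
      tcpSocket:
        port: 3306

    # exec variant: kubelet runs the command; exit code 0 means healthy
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]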

52
Configure Probes

You can further utilise different options in these probes spec to control the behaviour
of liveness and readiness probes:
•initialDelaySeconds: Number of seconds after the container has started before liveness or
readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
•periodSeconds: How often (in seconds) to perform the probe. Default to 10 seconds.
Minimum value is 1.
•timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1
second. Minimum value is 1.
•successThreshold: Minimum consecutive successes for the probe to be considered
successful after having failed. Defaults to 1. Must be 1 for liveness and startup
Probes. Minimum value is 1.
•failureThreshold: When a probe fails, Kubernetes will try failureThreshold times before
giving up. Giving up in case of liveness probe means restarting the container. In case
of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.
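A hedged sketch of how these options sit inside a probe definition (the path, port, and values are illustrative):

readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # wait 10 s after the container starts
  periodSeconds: 5          # probe every 5 s
  timeoutSeconds: 2         # each probe must answer within 2 s
  successThreshold: 1       # one success marks the pod Ready again
  failureThreshold: 3       # three consecutive failures mark the pod Unready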
HTTP probes

HTTP probes have additional fields that can be set on httpGet:


● host: Host name to connect to, defaults to the pod IP. You
probably want to set "Host" in httpHeaders instead.
● scheme: Scheme to use for connecting to the host (HTTP or
HTTPS). Defaults to HTTP.
● path: Path to access on the HTTP server. Defaults to /.
● httpHeaders: Custom headers to set in the request. HTTP allows
repeated headers.
● port: Name or number of the port to access on the container.
Number must be in the range 1 to 65535.
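Put together, an httpGet probe using these fields might look like the following sketch (the host header, path, and port are illustrative):

livenessProbe:
  httpGet:
    scheme: HTTPS                  # HTTP or HTTPS, defaults to HTTP
    path: /status                  # defaults to /
    port: 8443                     # name or number, 1 to 65535
    httpHeaders:
    - name: Host
      value: myapp.example.com     # preferred over setting the host field
  periodSeconds: 10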

54
livenessProbe

● Allow Kubernetes to check if your app is alive. The kubelet agent that runs on each node uses the liveness
probes to ensure that the containers are running as expected. If a container app is no longer serving
requests, kubelet will intervene and restart the container.
● For example, if an application is not responding and cannot make progress because of a deadlock, the
liveness probe detects that it is faulty. Kubelet then terminates and restarts the container. Even if the
application carries defects that cause further deadlocks, the restart will increase the container's availability.
It also gives your developers time to identify the defects and resolve them later.
● Liveness probes can catch situations when the application is no longer responding or unable to make
progress and restart it. We address the case of HTTP liveness probes that send a request to the application’s
back-end (e.g., some server) and decide whether the application is healthy based on its response.
● Liveness probes let Kubernetes know if your app is alive or dead. If your app is alive, then Kubernetes leaves it alone. If your app is dead, Kubernetes removes the Pod and starts a new one to replace it.
● Restart Policy :We can define the restart Policy in the pod to instruct the controller about the conditions
required to restart the Pod.
○ Always: Always restart the pod when it terminates.
○ OnFailure: Restart the pod only when it terminates with failure.
○ Never: Never restart the pod after it terminates.
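As a small sketch, the policy is set at the pod spec level (the value shown is just one of the three options above, and the image is illustrative):

spec:
  restartPolicy: OnFailure   # Always (default) | OnFailure | Never
  containers:
  - name: app
    image: busybox           # illustrative image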

55
livenessProbe : Hands on
● In this yaml file we will define the Liveness Probe and no restart Policy.

liveness.yml

kind: Pod
apiVersion: v1
metadata:
  name: liveness-probe
spec:
  containers:
  - name: ubuntu-container
    image: ubuntu
    command:
    - /bin/bash
    - -ec
    - touch /tmp/live; sleep 30; rm /tmp/live; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/live
      initialDelaySeconds: 5
      periodSeconds: 5

56
Liveness probe: Hands on
Pod configuration
• Creating a container with the ubuntu image.
• When the container starts, it creates the file /tmp/live, then sleeps for 30 seconds, and finally removes /tmp/live.
• This means the file is available only for 30 seconds; after that it is no longer present in the container.
• In the liveness configuration, kubelet tries to find the file every 5 seconds, with an initial delay of 5 seconds.

initialDelaySeconds: number of seconds the controller will wait before launching the probe.
periodSeconds: number of seconds after which the probe is repeated periodically.

Create the pod:
kubectl create -f liveness.yml
kubectl describe pod liveness-probe

You will see that the liveness probe succeeds, because the command executes successfully.
Now wait for 30 seconds and then run the command below:

kubectl describe pod liveness-probe

You will see that the liveness probe has failed.
57
● Now you can see that the container is restarting again and again because of the default restart policy (Always).

58
readinessProbe
● A readiness probe indicates whether the container is ready to service requests. If the readiness probe fails, the endpoints controller removes the Pod’s IP address from the endpoints of all Services that match the Pod. The default state of readiness before the initial delay is Failure. If a container does not provide a readiness probe, the default state is Success.
● Let’s imagine that your app takes a minute to warm up and start. Your service won’t work until it
is up and running, even though the process has started. You will also have issues if you want to
scale up this deployment to have multiple copies. A new copy shouldn’t receive traffic until it is
fully ready, but by default Kubernetes starts sending it traffic as soon as the process inside the
container starts. By using a readiness probe, Kubernetes waits until the app is fully started before
it allows the service to send traffic to the new copy.
● Readiness probes are configured similarly to liveness probes. Readiness probes runs on the
container during its whole lifecycle.
● Caution: Incorrect implementation of readiness probes may result in an ever growing number of
processes in the container, and resource starvation if this is left unchecked.

59
Busybox Container with a Kubernetes Readiness Probe
readiness.yml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness-exec
spec:
  containers:
  - name: readiness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 30
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

kubectl apply -f readiness.yml
kubectl describe pod readiness-exec
kubectl get pod readiness-exec
kubectl get events --watch

After a few seconds, execute again:
kubectl describe pod readiness-exec

After 40 seconds (sleep 30 + initial delay 5 seconds + period 5 seconds), execute again:
kubectl describe pod readiness-exec
60


readinessProbe: Hands-on

Pod configuration
● Creating a container with the busybox image.
● The container's startup command creates the file /tmp/healthy (touch /tmp/healthy); while that file exists, kubelet considers the container ready to accept and serve traffic.
● The command then sleeps for 30 seconds and removes the file, so the file is available only for 30 seconds.

61
● For the readiness probe, kubelet runs cat /tmp/healthy inside the container.
● Below is the output when you perform an events --watch command:

• In the readiness configuration, kubelet tries to find the file every 5 seconds with an initial delay of 5 seconds. Once the file is gone, the probe fails, and Kubernetes marks the pod as not ready and stops routing traffic to it (readiness failures by themselves do not restart the container).

62
Creating readiness Probes for a Node.js application
● Step 1: Creating a Node JS App prepared for readiness Probes
o To implement a working readiness probe, we designed a containerized application capable of processing it. We containerized a simple Node JS web server with two routes configured to process requests from the readiness probes. The application was containerized using the Docker container runtime and pushed to a public Docker repository. The code that implements basic server functionality and routing is located in the app.js file:

const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send(`
    <h1>Kubernetes-Demo app!</h1>
    <p>Try sending a request to /error and see what happens</p>
  `);
});

app.get('/error', (req, res) => {
  process.exit(1);
});

app.get('/healthz', (req, res) => {
  res.send(`
    <h1>Kubernetes-Demo app!</h1>
    <p>health check</p>
  `);
});

app.listen(8080);
63
Creating readiness Probes for a Node.js application

● We’ve configured three server routes responding to client GET requests. The first one
serves requests to the server’s web root path / that sends a basic greeting from the
server.
● The second path, /error, terminates the process with exit code 1, indicating that the container shut down and telling kubelet that the application has crashed or deadlocked.
● The third path, /healthz, returns a 200 HTTP success status, telling a readiness probe that our application is healthy and running. By default, any HTTP status code greater than or equal to 200 and less than 400 indicates success. Status codes of 400 and above indicate failure.
● This application is just a simple example to illustrate how you can configure your
server to respond to readiness/liveness probes. All you need to implement HTTP
readiness probes is to allocate some paths in your application and expose your
server’s port to Kubernetes.
64
Creating readiness Probes for a Node.js application
● Step 2: Configure your Pod to use readiness Probes
Let's create a pod spec (dep.yml) for our Node JS application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: second-app
      tier: backend
  template:
    metadata:
      labels:
        app: second-app
        tier: backend
    spec:
      containers:
      - name: my-app-container
        image: geetha0405/healthcheck
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3

• spec.containers.readinessProbe.httpGet.path: a path on the HTTP server that processes the readiness probe. Note: by default, the httpGet host is set to the pod's IP. Since we will access our application from within the cluster, we don't need to specify an external host.
• spec.containers.readinessProbe.httpGet.port: a name or number of the port to access the HTTP server on. A port number must be in the range of 1 to 65535.
• spec.containers.readinessProbe.initialDelaySeconds: number of seconds after the container has started before the readiness probe can be initiated.
• spec.containers.readinessProbe.periodSeconds: how often to perform the readiness probe. The default value is 10 seconds and the minimum value is 1.

kubectl apply -f dep.yml
kubectl expose deployment my-app --type=LoadBalancer --port=8080
kubectl get services
65
● As you can see, we defined /healthz as the server path for our readiness probe. In this case, our Node JS server will always return the 200 success status code. This means that the readiness probe will always succeed and the pod will continue running.
● However, what happens when the readiness probe fails? We can find out by changing the probe's path to /error.
● Now, our readiness probe sends requests to the /error path, which makes the server process exit with an error; the container crashes and kubelet recreates it, and /error displays an error page after 3 seconds.
● As you might have noticed, the readiness probe started exactly after the three seconds specified in spec.containers.readinessProbe.initialDelaySeconds. Afterwards, the probe request made the process exit with an error, which triggered killing and recreating the container.
66
Check pod events at the end of the pod description

67
Startup Probes

● Startup probes are a newly developed feature, supported as beta since Kubernetes v1.18. These probes are very useful for slow-starting applications; using them is much better than increasing initialDelaySeconds on readiness or liveness probes. A startup probe gives our application time to become ready; combined with readiness and liveness probes, it can dramatically increase our applications' availability.
● It is responsible for the application which is deployed inside the container. It indicates if the application
started successfully.
● Kubernetes can determine whether your software, which executes within a container inside a pod, has
properly started using startup probes.
● To review: the liveness probe restarts a container when it becomes unresponsive; the readiness probe decides when a container is ready to start or stop accepting traffic; and the startup probe allows your containers to tell Kubernetes when they have started up and are ready to be evaluated for liveness and readiness.
● Use startup probes to decouple liveness and readiness checks from application initialization and
ultimately make services more reliable.

68
Creating a Startup Probe

Startup probes are created by adding a startupProbe field within the spec.containers portion of a pod's manifest. Here's a simple example of a startup probe using the exec mechanism. It runs a command inside the container:

startup.yml

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-demo
spec:
  containers:
  - name: startup-probe-demo
    image: busybox:latest
    args:
    - /bin/sh
    - -c
    - sleep 300
    startupProbe:
      exec:
        command:
        - cat
        - /etc/hostname
      periodSeconds: 10
      failureThreshold: 10

kubectl apply -f startup.yml
kubectl get events --watch
kubectl describe pod startup-probe-demo

• The container will start and run normally. You can verify this by viewing its details in kubectl.
• The probe in the example above uses the presence of the /etc/hostname file to determine whether the container has started. As this file exists inside the container, the startup probe will succeed without logging any events.
69
Creating a Startup Probe
● In the example above, a periodSeconds of 10 and
a failureThreshold of 10 means the container will have up to a
hundred seconds in which to start—up to ten checks with ten
seconds between them. The container will be restarted if the probe
still doesn’t succeed after this time.
● You can use the other config parameters to further tune your probe.
If you know a container has a minimum startup time,
setting initialDelaySeconds will prevent it from being probed
immediately after creation, when you know the check will fail.

70
Creating a Startup Probe
A pod with a startup probe that will fail: in this case, the probe looks at /etc/foobar, which doesn't exist in the container. The probe will run every ten seconds, as specified by the value of periodSeconds. Up to ten attempts will be made, as allowed by failureThreshold.

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-demo
spec:
  containers:
  - name: startup-probe-demo
    image: busybox:latest
    args:
    - /bin/sh
    - -c
    - sleep 300
    startupProbe:
      exec:
        command:
        - cat
        - /etc/foobar
      periodSeconds: 10
      failureThreshold: 10

kubectl apply -f startup.yml
kubectl get events --watch
kubectl describe pod startup-probe-demo
71
Creating a Startup Probe
If the container creates /etc/foobar before the last attempt, the probe will succeed, and Kubernetes will begin to direct liveness
and readiness probes to the container. Otherwise, the startup probe will be marked as failed, and the container will be killed.

This event log shows that the startup probe failed because of the
missing /etc/foobar file. After ten attempts, the container’s status changed
to Killing, and a restart was scheduled. Looking for failed startup probe lines
in your pod’s logs will help you find containers that have been restarted for
this reason.

72
Kubernetes Networking
● The communication between pods, services and external services to the ones in a cluster brings in
the concept of Kubernetes networking.
● Kubernetes networking allows Kubernetes components to communicate with each other
and with other applications. The Kubernetes platform is different from other networking
platforms because it is based on a flat network structure that eliminates the need to map host
ports to container ports.
● When pods are created, they are assigned an IP address. You use this IP to access the pod from
anywhere within the Kubernetes cluster. Containers inside a pod share the same network space,
which means that, within the pod, containers can communicate with each other by using the
localhost address.
● A Kubernetes cluster might be split across different nodes. A node is a physical machine where
resources run. A cluster is a logical view of a set of nodes. These nodes are different machines, but
they work together as a logical unit. This makes it easier to work with different machines at the
same time because you can simply deploy resources to the cluster and not to individual nodes.

73
Kubernetes Networking Visualized

Kubernetes Networking addresses


four concerns:
• Containers within a pod use
networking to communicate via
loopback.
• Cluster Networking provides
communication between different
pods.
• The service resources let you
expose an application running in
pods to be reachable from outside
of your cluster.
• You can also use services to
publish services only for
consumption inside your cluster.

74
Networking in a Kubernetes cluster
● There are mainly 4 problems to solve with the networking concepts.
○ Container to container communication
○ Pod to pod Communication
○ Pod to service communication
○ External to service Communication

75
Container to Container communication on the same pod

● Happens through localhost within the containers.

Containers can communicate with each other only within the same pod (via localhost).

Individual containers do not have their own IP addresses; the IP belongs to the pod.

76
Try Yourself?
Create a manifest file (pod_nw.yml) for creating a pod with 2 containers, then apply it.

kind: Pod
apiVersion: v1
metadata:
  name: testpod
spec:
  containers:
  - name: c00
    image: ubuntu
    command: ["/bin/bash", "-c", "while true; do echo Hello; sleep 5; done"]
  - name: c01
    image: httpd
    ports:
    - containerPort: 80

kubectl apply -f pod_nw.yml
kubectl get pods

#### Exec inside one container, c00
kubectl exec testpod -it -c c00 -- /bin/bash
apt update
apt install curl
curl localhost:80     # reach container c01 (httpd) from container c00 via localhost
77
Communication between two different Pods within the same machine(Node)

● Pod to Pod communication on same worker node through Pod IP.


● By Default Pod’s IP will not be accessible outside the node.

In a Kubernetes cluster, a pod could be scheduled on any node in the cluster. Another pod which wants to access it should not, ideally, need to know where this pod is running or its pod IP address. Kubernetes provides a basic service discovery mechanism by giving DNS names to Kubernetes services (which are associated with pods). When a pod wants to talk to another pod, it should use the DNS name of the service.

78
Try Yourself?
Create 2 Pods on the same node.

pod1_nw.yml:

kind: Pod
apiVersion: v1
metadata:
  name: testpod1
spec:
  containers:
  - name: c00
    image: nginx
    ports:
    - containerPort: 80

pod2_nw.yml:

kind: Pod
apiVersion: v1
metadata:
  name: testpod2
spec:
  containers:
  - name: c03
    image: httpd
    ports:
    - containerPort: 80

kubectl apply -f pod1_nw.yml
kubectl apply -f pod2_nw.yml

Inside the node, send requests to the pods' IP addresses:

kubectl exec testpod1 -it -c c00 -- /bin/bash
curl <IP-of-testpod2>:80     # use the pod IP shown by 'kubectl get pods -o wide'
79
Pod-to-Service Communication

● A Kubernetes service maps a single IP address to a pod group to address the issue of a pod's changing address. The service
creates a single endpoint, an immutable IP address and hostname, and forwards requests to a pod in that service.
● Basically, services are a type of resource that configures a proxy to forward the requests to a set of pods, which will receive
traffic & is determined by the selector. Once the service is created it has an assigned IP address which will accept requests on
the port.
● A service in kubernetes can be considered as a virtual load balancer for set of pods in the cluster, so the flow for any request
coming to kubernetes cluster will be redirected to pods through a service. The matching between service and pod is done
through a service selector and a pod label. Also routing requests between different pods in kubernetes cluster is done through
services
● Now, there are various service types that give you the option for exposing a service outside of your cluster IP address.

Types of Services

● ClusterIP: This is the default service type which exposes the service on a cluster-internal IP by making the service only
reachable within the cluster.
● NodePort: This exposes the service on each node's IP at a static port. A ClusterIP service, to which the NodePort service routes, is automatically created, and we can contact the NodePort service from outside the cluster.
● LoadBalancer: This service type exposes the service externally using a cloud provider's load balancer. The NodePort and ClusterIP services, to which the external load balancer routes, are automatically created.
● ExternalName: This service type maps the service to the contents of the externalName field by returning a CNAME record
with its value.

80
ClusterIP Service type
The ClusterIP service type is used when we want to expose pods only inside the Kubernetes cluster, so the pods will not be available to the outside world. Database pods are usually managed by ClusterIP services, as we don't want database endpoints to be exposed to the outside world.
As you can notice in the declarative definition for a service below, this service has a selector of (layer: db), so only pods with the label (layer: db) will match the service; any request coming to this service will be redirected to one of these pods.
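The definition referenced above appears on the slide as an image; a plausible sketch with the layer: db selector (the name and port are illustrative) would be:

apiVersion: v1
kind: Service
metadata:
  name: db-service          # hypothetical name
spec:
  type: ClusterIP           # the default service type
  selector:
    layer: db               # only pods labelled layer: db receive the traffic
  ports:
  - port: 3306              # illustrative database port
    targetPort: 3306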

81
NodePort Service type
The NodePort service type exposes the service to the outside world through <NodeIP>:<NodePort>. The default range of the NodePort is 30000–32767. It is an optional field in the definition of the service. Each node of the cluster proxies that port to the Kubernetes service port.
Don't get confused here about the difference between port, targetPort and nodePort: "port" is the port of the service, and "targetPort" is the port of the pod. The request flow starts at the nodePort, is proxied to the port, and is then forwarded to the targetPort.
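A hedged sketch of a NodePort service illustrating the three port fields (names and numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80                  # service port inside the cluster
    targetPort: 8080          # port the pod's container listens on
    nodePort: 30080           # static port opened on every node (30000-32767)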

82
LoadBalancer service type

● The LoadBalancer service type exposes the service through a load balancer resource issued by the cloud provider (AWS, Azure, GCP, etc.). Once this load balancer is created, its DNS name is assigned to the service. Traffic reaches the load balancer and is then redirected to the service, which forwards it to the pods. Maintaining this load balancer, in terms of availability and scaling, is the cloud provider's responsibility.
● The LoadBalancer service is an extension of the NodePort service. The NodePort and ClusterIP services, to which the external load balancer routes, are automatically created.
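A minimal sketch of a LoadBalancer service (names and ports are illustrative; the cloud provider provisions the actual load balancer):

apiVersion: v1
kind: Service
metadata:
  name: web-lb                # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80                  # port exposed by the cloud load balancer
    targetPort: 8080          # port the pods listen on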

83
ExternalName

● Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service. You specify these Services with the spec.externalName parameter. It maps the Service to the contents of the externalName field (e.g. database.external.com) by returning a CNAME record with its value. No proxying of any kind is established.
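A short sketch of an ExternalName service (the external hostname is the example used above; the service name is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: database.external.com   # returned to clients as a CNAME record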

84
Try Yourself?
● Deploying NGINX on a Kubernetes Cluster and expose
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80
kubectl get svc

85
Kubernetes Service Discovery

● In Kubernetes, an application
deployment consists of a pod or set
of pods. Those pods are dynamic in
nature, meaning that the IP
addresses and ports change
constantly. This constant change
makes service discovery a significant
challenge in the Kubernetes world.
● One way Kubernetes provides service
discovery is through its endpoints
API. With the endpoints API, client
software can discover the IP and
ports of pods in an application.

86
Try Yourself?
Create a test namespace:

kubectl create ns demo

Next, create an nginx app deployment:

kubectl -n demo apply -f https://k8s.io/examples/application/deployment-update.yaml

Next, see if the pods are up and running and confirm whether the endpoints are available:

kubectl -n demo get pods
kubectl -n demo get ep

You will notice that the endpoints are not


available. This is because we have not created
a service object yet.

87
Create a service object for the deployment using the kubectl expose command:

kubectl -n demo expose deployment/nginx-deployment
kubectl -n demo get svc

Now, check the endpoints and see that they report pod IP/port addresses. Note that there are two addresses, because we are running two replica pods for the deployment.

kubectl -n demo get endpoints

You can see the service definition created by the expose command using this command:

kubectl -n demo get svc nginx-deployment -o yaml
88
Note the IP address of the service. It is auto-mapped by DNS. Additionally, as we can see below, environment variables for the service name are automatically injected into pods by Kubernetes for service discovery.
Now, let’s create a client pod to connect to the application deployment. We will test service
discovery by doing nslookup on the service name and see the auto-created environment
variables related to service discovery.

kubectl -n demo get svc

89
Now, let’s create a client pod to
connect to the application deployment.
We will test service discovery by doing
nslookup on the service name and see
the auto-created environment
variables related to service discovery.

kubectl -n demo run tmp-shell2 --rm -i --tty --image nginx -- /bin/bash

Let's do a name lookup for the nginx service to see if we can find it:

nslookup nginx-deployment

curl nginx-deployment

90
Kubernetes volumes
● Create a Pod that runs two containers. The two containers share a volume that they can use to communicate.

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: debian-container
    image: debian
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data
    command: ["/bin/sh"]
    args: ["-c", "echo Hello from the debian container > /pod-data/index.txt"]

91
• Save the manifest as deploy_multicontainer.yaml, then:
• kubectl apply -f deploy_multicontainer.yaml
• kubectl describe po two-containers
• kubectl exec two-containers -c nginx-container -- /bin/cat /usr/share/nginx/html/index.txt
• kubectl exec two-containers -c debian-container -- /bin/cat /pod-data/index.txt
• kubectl exec --stdin --tty two-containers -c debian-container -- /bin/sh

92
