Kubernetes
Email: [email protected]
LinkedIn: linkedin.com/in/sherif-yehia-389071178/
Table of Contents
• Install k8s from scratch
➢ Requirements
➢ Architecture
➢ Installation Steps
• What does k8s consist of?
➢ Kubernetes Architecture consists of a Master Node and Worker Nodes
➢ Master Node components
➢ Worker Node components
• How to manage k8s?
➢ What is a Pod and how to create one?
➢ What is a Replication Controller and how to create one?
➢ What is a Replica Set and how to create one?
➢ How to scale up a Replica Set?
➢ What is a Deployment and how to create one?
➢ Some important commands
➢ What is a Service and how to create one?
➢ NodePort Service
➢ ClusterIP Service
➢ What is a namespace and how to create one?
➢ What is scheduling and how to create a pod on a specific node?
➢ What are Labels and Selectors and how to create them?
➢ What are Taints and Tolerations and how to create them?
➢ What are Node Selectors and how to create one?
➢ What is Node Affinity and how to create one?
➢ Taints and Tolerations vs Node Affinity
➢ What are Resource Requirements and Limits and how to create them?
➢ What are DaemonSets and how to create one?
• Logging & Monitoring
➢ Kubernetes Dashboard Solution
➢ What is Rancher?
➢ How to implement Rancher?
➢ How to import our cluster into the Rancher dashboard?
➢ How to implement Grafana and Prometheus using the Rancher dashboard?
• Configure Environment Variables vs ConfigMap vs Secrets
➢ Configure environment variables in applications
➢ Configuring ConfigMaps in applications
➢ Configure Secrets in applications
• Storage in Kubernetes
➢ Persistent Volumes and Persistent Volume Claims
• Ingress
➢ What is Ingress?
➢ Another easy way
• Install k8s from scratch: -
➢ Requirements: -
1- 3 machines – OS: Ubuntu 20.04.
2- 2 GB or more of RAM per machine.
3- 2 CPUs or more.
4- Full network connectivity between all machines in the cluster (you can disable the firewall since this is just a testing environment).
5- Unique hostname, MAC address, and product_uuid for every node.
6- Root user.
➢ Architecture: -
Master Node – 192.168.1.1
Worker Node 1 – 192.168.1.2
Worker Node 2 – 192.168.1.3
➢ Installation Steps: -
a) Commands to run on all three machines (Master node – Worker Node 1 – Worker Node 2): -
1- You MUST disable swap (swapoff is temporary; also comment out the swap line in /etc/fstab to keep it disabled after reboot):
$$ sudo swapoff -a
2- Set up the IPv4 bridge on all nodes (run all of the below at one time).
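These are the standard commands from the kubeadm installation docs: load the overlay and br_netfilter modules and enable bridged/forwarded IPv4 traffic.
$$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
$$ sudo modprobe overlay
$$ sudo modprobe br_netfilter
$$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
$$ sudo sysctl --system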
3- Verify that the br_netfilter and overlay modules are loaded by running the following commands:
$$ lsmod | grep br_netfilter
$$ lsmod | grep overlay
5- Disable the firewall (to make sure the 3 machines can connect to each other): -
$$ ufw status
$$ ufw disable
6- Install containerd as the container runtime. First add Docker's apt repository: -
$$ echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
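The repository also needs Docker's GPG key before apt can use it; the standard commands from Docker's install docs, followed by installing containerd, are:
$$ sudo install -m 0755 -d /etc/apt/keyrings
$$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
$$ sudo apt-get update
$$ sudo apt-get install -y containerd.io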
7- Set the cgroup driver to systemd (SystemdCgroup = true): -
$$ nano /etc/containerd/config.toml
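A common way to do this is to generate containerd's default config first and then flip the SystemdCgroup flag; the section path below follows containerd's default config layout:
$$ containerd config default | sudo tee /etc/containerd/config.toml
Then inside /etc/containerd/config.toml, under
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
set:
SystemdCgroup = true
And restart containerd:
$$ sudo systemctl restart containerd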
8- Let’s install kubelet, kubeadm, and kubectl to create a Kubernetes cluster. They play an
important role in managing a Kubernetes cluster.
$$ mkdir -p /etc/apt/keyrings
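Add the Kubernetes apt repository and install the three packages; the commands below follow the current upstream instructions (the v1.28 in the URL is an example, pick the minor version you want):
$$ curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
$$ echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
$$ sudo apt-get update
$$ sudo apt-get install -y kubelet kubeadm kubectl
$$ sudo apt-mark hold kubelet kubeadm kubectl
Then verify the installation: -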
$$ kubeadm version
$$ kubelet --version
$$ kubectl version --short
b) Commands to run on the Master node only (192.168.1.1): -
1- Initialize your master node. The --pod-network-cidr flag sets the IP address range for the pod network (you must use the 10.244.0.0/16 range for the Flannel network add-on – next step).
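Given the flags described above, the init command takes this form:
$$ sudo kubeadm init --apiserver-advertise-address=192.168.1.1 --pod-network-cidr=10.244.0.0/16
When it finishes, configure kubectl for your user: -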
$$ mkdir -p $HOME/.kube
$$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
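2- Install the Flannel pod network add-on (the "next step" referenced above); the manifest URL below is the one published by the flannel project:
$$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml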
c) Join the worker nodes to the cluster (WorkerNode1 192.168.1.2 and WorkerNode2 192.168.1.3, as shown in the architecture above): -
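1- On WorkerNode1, run the join command that kubeadm init printed at the end of its output; it has this general form (the token and hash come from your own init output):
$$ kubeadm join 192.168.1.1:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
If you no longer have it, regenerate it on the master with:
$$ kubeadm token create --print-join-command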
2- From machine WorkerNode2, run the same join command.
➢ Finally, verify the cluster from the master node: -
$$ kubectl get nodes
• What does k8s consist of?
o Master Node: responsible for cluster management and for providing the API that is used to configure and manage resources within the Kubernetes cluster.
o Worker Nodes: responsible for running the containers and doing any work assigned to them by the master node.
➢ The Master Node consists of: -
1- ETCD CLUSTER: a data store that holds information about the cluster, such as nodes, pods, configs, secrets, accounts, roles, and bindings. When you run a kubectl get command, the data comes from the etcd server, and every change in the cluster, such as adding nodes or deploying pods, is recorded in it.
2- Kube-scheduler: identifies the right node to place a container on, based on the container's resource requirements and the worker nodes' capacity.
3- Kube Controller Manager: takes care of nodes; it is responsible for onboarding new nodes to the cluster, handling nodes that become unavailable or destroyed, and managing replication. By default, the controller manager checks the nodes every 5 seconds through the kube-apiserver. If it stops receiving heartbeats from a node, it waits 40 seconds before marking it unreachable, then gives the node 5 minutes to come back up; if it does not, the pods (containers) that were created on that node are removed and recreated on another node via their replica set.
4- Kube-Apiserver: the API server that handles requests and responses between the master node and the worker nodes. For example, using curl or Postman you can POST a request to create a new pod (pod = container = application):
$$ curl -X POST /api/v1/namespaces/default/pods ...
➢ The Worker Nodes consist of: -
1- Kubelet: every worker node has its own kubelet. The kubelet is the captain of the worker node: it takes orders from the master node and sends reports about the worker node's status back to it through the kube-apiserver.
2- Kube-Proxy: Kube-proxy maintains network rules on nodes. These network rules allow network communication to your
Pods from network sessions inside or outside of your cluster.
• How to manage k8s?
➢ What is Pod and how to create one?
Pods: the smallest deployable units of computing that you can create and manage in Kubernetes; a pod contains the container that holds the application plus any other dependencies.
1- Create the yaml file: -
$$ nano pod-definition.yaml
2- The yaml contains: -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx
metadata: gives the object a name and labels (the name and labels are later used to connect a Service to this pod, among other purposes).
labels: used to identify this object; here we use two keys (app and type), and the values are up to you.
- Think about picking one pod out of 1000 pods: labels make that possible.
- You can add more labels besides app and type.
spec: specifies the container(s) in this pod; here we create a single container in a single pod.
containers: the information about the application's container(s).
- name: nginx-container >>> the name of the application container (up to you)
- image: nginx >>> the name of the image in the Docker repository
3- Apply the yaml file: -
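$$ kubectl apply -f pod-definition.yaml
$$ kubectl get pods -o wide
The -o wide output shows which worker node the pod was scheduled on (worker-node-1 / 192.168.1.2 in this walkthrough).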
➢ What is Replication Controller and how to create one: -
Replication Controller: a Kubernetes resource that ensures a pod (or multiple copies of the same pod) is always up and running. If a pod disappears for any reason (like a node disappearing from the cluster), the replication controller creates a new pod immediately.
2- The yaml contains: -
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc      >>> this name and these labels belong to the ReplicationController
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3         >>> the number of pods is 3
  template:           >>> this part (name and labels) belongs to the Pod
    metadata:
      name: myapp-pod
      labels:         >>> the ReplicationController uses labels to bind the pods to it
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
3- Apply the yaml file: -
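Assuming the file was saved as rc-definition.yaml:
$$ kubectl apply -f rc-definition.yaml
$$ kubectl get pods -o wide
The three replicas are spread across worker-node-1 and worker-node-2.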
➢ What is Replica Set and how to create one: -
Replica Set: the same concept as the Replication Controller, but the ReplicaSet is the newer replication technology that supersedes the Replication Controller.
2- The yaml contains: -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:              >>> the selector matches each replica to its pods by label (here: type)
    matchLabels:
      type: front-end    >>> used to monitor the three instances; if any of the 3 goes down, it is redeployed
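3- Apply the yaml file (assuming it was saved as replica-set.yaml, the name used in the scaling section below):
$$ kubectl apply -f replica-set.yaml
$$ kubectl get pods -o wide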
➢ How to scale up Replica set: -
1- Edit the yaml file to set replicas: 6, then re-apply it: -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 6
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end
- To get the status of the new pods: -
$$ kubectl get pods -o wide
Another way: -
$$ kubectl scale --replicas=6 -f replica-set.yaml
Or:
$$ kubectl edit replicaset myapp-rs    >>> myapp-rs is the name of the replicaset in the yaml file
➢ What is Deployment and how to create one: -
Deployment: a higher-level concept that manages ReplicaSets and provides declarative updates to Pods, along with many other useful features.
Deployments vs ReplicaSets: -
- Deployment: a high-level abstraction that manages ReplicaSets. / ReplicaSet: a lower-level abstraction that manages the desired number of replicas of a pod.
- A Deployment manages a template of pods and uses ReplicaSets to ensure that the specified number of replicas of the pod is running. / A ReplicaSet only manages the desired number of replicas of a pod.
- A Deployment provides a mechanism for rolling updates and rollbacks of the application, enabling seamless updates and reducing downtime. / With a plain ReplicaSet, applications must be manually updated or rolled back.
2- The yaml contains: -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end
3- Apply the yaml file: -
It will automatically create the pods by pulling the official nginx image from Docker Hub and running it in pods on the worker nodes.
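Assuming the file was saved as deployment.yaml:
$$ kubectl apply -f deployment.yaml
$$ kubectl get deployments
Because Deployments manage rollouts, you can also watch an update or roll it back:
$$ kubectl rollout status deployment/myapp-rs
$$ kubectl rollout undo deployment/myapp-rs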
➢ Some important commands: -
1- To get all resources (pod – deployment – replicaset – service – daemonset – etc.) in the default namespace (we will learn more about namespaces in the next slides):
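$$ kubectl get all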
2- To get all resources in all namespaces, with extra information such as which worker node each pod runs on and the virtual IP of every pod:
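$$ kubectl get all -A -o wide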
3- To open a terminal inside the pod and execute commands there:
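$$ kubectl exec -it myapp-rs-7c4d4f7fc6-bjjm6 -- /bin/bash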
4- $$ kubectl describe pod myapp-rs-7c4d4f7fc6-bjjm6    >>> shows the full details of the pod (node, image, events, etc.)
5- An easy way to generate a yaml file with basic configuration without actually running the pod (--dry-run=client), for example:
$$ kubectl run nginx --image=nginx --dry-run=client -o yaml
6- $$ kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml
(add -n <namespace> to any of these commands to target a specific namespace; -n = namespace)
➢ What is Service and how to create one: -
Kubernetes Service: enables communication between various components within and outside of the application; it helps us connect applications together, with other applications or with users.
1- NodePort-Service: Exposes the service on a static port on each node's IP. It makes the service accessible
externally at the specified node port.
2- ClusterIP-Service: The default service type, which provides a cluster-internal IP address. It is used for
communication between different parts of an application within the cluster.
3- LoadBalancer-Service: Exposes the service externally using a cloud provider's load balancer. The external
IP is provisioned, and traffic is distributed to the service.
1-- NodePort-Service:
After creating the nginx deployment shown earlier, we are going to create a service (type: NodePort) to make those pods accessible from outside the cluster.
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 31500
  selector:    >>> must be the same as the labels in the deployment yaml, to bind the service to the pods it routes to
    app: myapp
    type: front-end
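Apply it and check the service (assuming the file was saved as nginx-service.yaml), then browse to any node's IP on the node port:
$$ kubectl apply -f nginx-service.yaml
$$ kubectl get svc
$$ curl http://192.168.1.2:31500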
◼ Now a very important example to understand how NodePort works: we have a demo application called consumer, consisting of 2 pods on the two worker nodes, listening on port 4040, and I will expose it on node port 31040.
---
apiVersion: v1
kind: Service
metadata:
  name: consumer
spec:
  type: NodePort
  ports:
  - targetPort: 4040
    port: 4040
    nodePort: 31040
  selector:
    app: consumer
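From outside the cluster the app is now reachable on any node's IP at the node port, for example:
$$ curl http://192.168.1.3:31040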
So it is important to know that a NodePort service exposes our application outside the cluster using the nodePort only; but what is targetPort used for? We will see in the next slide.
◼ Now it is very important to understand how to connect two pods (the nginx application connecting to the consumer application): [nginx pod -> consumer service -> consumer pod].
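Inside the cluster, a pod reaches the consumer through the service name and the service port (port, not nodePort); cluster DNS resolves the service name. From inside the nginx pod, for example:
$$ curl http://consumer:4040
The service then forwards port 4040 to targetPort 4040 on the consumer pods; that is what targetPort is for.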
2-- ClusterIP-Service:
It is the same concept as the NodePort service, except it is used only to connect pods inside the cluster (it exposes no node port to the outside).
---
apiVersion: v1
kind: Service
metadata:
  name: consumer
spec:
  type: ClusterIP
  ports:
  - targetPort: 4040
    port: 4040
  selector:
    app: consumer
➢ What is a namespace and how to create one: -
- By default, any pods, services, or other components you create without specifying a namespace are created in the [default] namespace, as in the earlier examples.
- Namespaces: used to divide pods under different names, so you don't accidentally modify the wrong production pods.
let’s start: -
we have a namespace (backend), which contains the consumer app,
and a namespace (gateway), which contains nginx.
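1- Create the two namespaces: -
$$ kubectl create namespace backend
$$ kubectl create namespace gateway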
2-- Modify the yaml file of the consumer app to add the namespace, then apply it.
3-- Modify the service of the consumer app to add the namespace, then apply it.
---
apiVersion: v1
kind: Service
metadata:
  name: consumer
  namespace: backend
spec:
  type: NodePort
  ports:
  - targetPort: 4040
    port: 4040
    nodePort: 31040
  selector:
    app: consumer
4-- Modify the yaml file of the nginx app to add the namespace, then apply it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  namespace: gateway
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 2
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end
5-- Modify the service of nginx to add the namespace, then apply it.
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: gateway
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 31500
  selector:
    app: myapp
    type: front-end
◼ How to connect two pods in different namespaces?
1- Go inside the nginx pod (any pod): -
$$ kubectl exec -it myapp-rs-7c4d4f7fc6-ccmx8 -n gateway -- /bin/bash
(-n = namespace)
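2- From inside the pod, call the consumer service in the other namespace using the <service>.<namespace> DNS name (the standard form for cross-namespace service discovery):
$$ curl http://consumer.backend:4040
or the fully qualified name:
$$ curl http://consumer.backend.svc.cluster.local:4040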
◼ You can also limit the total resources a namespace may consume by creating a ResourceQuota in it: -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: backend
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi
➢ What is scheduling and how to create a pod on a specific node: -
Scheduling: the scheduler identifies the right node to place a pod on, based on the pod's resource requirements and the worker nodes' capacity.
- By default, scheduling is handled by the [kube-scheduler] component on the master node.
- You can manually schedule a pod on a specific node with nodeName.
- Create the deployment yaml, then apply it: -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  namespace: gateway
  labels:
    app: myapp
spec:
  replicas: 2
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
      nodeName: worker-node-2    >>> bypasses the scheduler and places both replicas on worker-node-2
  selector:
    matchLabels:
      app: myapp

➢ What is Labels and Selectors and how to create one: -
Labels are key/value pairs attached to objects (app and type on the pods below), and selectors are how other objects such as ReplicaSets, Deployments, and Services find those objects. In the deployment below, the selector matchLabels binds the deployment to the pods carrying the label type: front-end: -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  namespace: gateway
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 2
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end
➢ What is Taints and Tolerations and how to create one: -
Taints and Tolerations: the relationship between pods and nodes: how you can restrict which pods may land on a specific node.
- Picture 3 pods with 3 tolerations and 3 nodes with 3 taints.
- A node with a taint will not accept any pod except a pod whose toleration matches the taint's value.
- Important note: a pod with a toleration can be scheduled on the node whose taint it tolerates, but it can also be scheduled on any node that has NO taint; to avoid that, we use something called node affinity, which we will see in the next slides.
- Why are pods not automatically deployed on the master node? Because a taint is set automatically on the master node that prevents any pods from being deployed on it.
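1- Taint a node; the standard syntax, using the key/value that the toleration below matches, is:
$$ kubectl taint nodes worker-node-1 myapp=red:NoSchedule
(You can see the master's built-in taint with $$ kubectl describe node <master-node-name> | grep Taint)
2- Then create a deployment whose pods tolerate that taint: -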
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  labels:
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
      tolerations:
      - key: "myapp"
        operator: "Equal"    >>> Equal means the toleration matches the taint [myapp = red]
        value: "red"
        effect: "NoSchedule"
  selector:
    matchLabels:
      type: front-end
➢ What is Node Selectors and how to create one: -
Node Selectors: suppose a container runs an application that needs high node resources (for example 64 GB of RAM). With node selectors we label the high-resource node with size = large, then edit the application's yaml to request that label, so the pod gets deployed on the high-resource node.
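The labeling command has this standard form (worker-node-2 is an example; label whichever node has the large resources):
$$ kubectl label nodes worker-node-2 size=Large
Then add nodeSelector to the pod yaml: -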
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeSelector:
    size: Large
➢ What is Node Affinity and how to create one: -
Node Affinity: the same idea as node selectors, but node affinity is more expressive and gives you more features.
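1- Label the node, as in the node selector example (the node name is an example):
$$ kubectl label nodes worker-node-2 size=Large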
2- Edit the yaml file so the pod deploys on a node with label Large, then apply it: -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
Or edit the yaml file so the pod deploys on a node labeled Large or Medium, then apply it: -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium
Or edit the yaml file so the pod avoids nodes labeled Small (operator NotIn), then apply it: -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: NotIn
            values:
            - Small
An example that deploys the pod on any node that has the size label key, regardless of its value: -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: Exists
◼ What happens if someone deletes the size label from a node that already runs a pod with nodeAffinity?
1- requiredDuringSchedulingIgnoredDuringExecution >>> the pod must find a node with the matching labels; if there is none, the pod will not be deployed.
>>>> IgnoredDuringExecution: any change on the node (like deleting its label) has no effect after the pod has been deployed.
2- preferredDuringSchedulingIgnoredDuringExecution >>> the pod does not need to find a node with the matching labels, so it can still be deployed on another node.
Stage one (DuringScheduling): the pod does not exist yet and is being created for the first time.
Stage two (DuringExecution): the pod exists and something changes in the environment that affects node affinity, such as deleting the label on a node that already runs pods with node affinity.
➢ Taints and Tolerations vs Node Affinity: -
Taints and Tolerations: as you know, the tolerating pod can be deployed on the tainted node, but it can also be deployed on a normal (untainted) node.
Node Affinity: as you know, a pod with node affinity will be deployed on the node with the matching label, but a normal pod can also end up on that labeled node.
So if you want a specific pod deployed only on a specific node, you need a combination of Taints and Tolerations together with Node Affinity. Alternatively, you can simply pin the pod to a node with nodeName: -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeName: Node-1
➢ What is Resource Requirements and Limits and how to create one: -
Resource Requirements and Limits: here we specify resources for every pod. Note that by default a pod has no resource limits, so as the application's load increases its resource usage grows, which can suffocate the other pods.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "4Gi"    >>> 1Gi = 1073741824 bytes / 1Mi = 1048576 bytes / 1Ki = 1024 bytes
        cpu: 2           >>> can also be written as 100m or 0.1, as 1 CPU = 1000m
2- Another example, this time also limiting the maximum utilization: -
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "4Gi"
        cpu: 2
      limits:
        memory: "8Gi"
        cpu: 4
NOTE: as load increases, the pod can never use more than 4 CPUs; even if it needs more, the CPU is simply throttled at the limit.
NOTE: if load increases and the pod tries to use more than 8Gi of RAM, it will be terminated with an OOM (Out Of Memory) error.
So the best practice for resource requirements and limits is to set requests without limits, BUT you must set requests for ALL pods, because a pod with no requests can end up consuming all the resources.
3- We can create a LimitRange, a namespace-level policy that sets CPU and RAM defaults for all pods in the namespace: -
NOTE: a LimitRange does not affect pods that already exist; it only affects newly created pods!
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - default:
      cpu: 500m
    defaultRequest:
      cpu: 500m
    max:
      cpu: "1"
    min:
      cpu: 100m
    type: Container
---
apiVersion: v1
kind: LimitRange
metadata:
  name: memory-resource-constraint
spec:
  limits:
  - default:
      memory: 1Gi
    defaultRequest:
      memory: 1Gi
    max:
      memory: 1Gi
    min:
      memory: 1Gi
    type: Container
4- We can also cap the total CPU and RAM used by all pods together in a namespace with a ResourceQuota: -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-resource-quota
spec:
  hard:
    requests.cpu: 4
    requests.memory: 4Gi
    limits.cpu: 10
    limits.memory: 10Gi
- To extract the pod definition in YAML format to a file, use the command: -
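$$ kubectl get pod myapp-pod -o yaml > pod-definition.yaml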
➢ What is DaemonSets and how to create one: -
DaemonSets: used to deploy one copy of a specific pod on each node in your cluster; if you join a new node to the cluster, the DaemonSet deploys that pod (such as a monitoring agent) on it automatically.
Let's say you want to deploy a monitoring agent or a log collector as a pod on each of your nodes in the cluster, to monitor your cluster better: a DaemonSet is perfect for that.
- A DaemonSet deploys one copy of your specific pod on each node in the cluster.
- A ReplicaSet, in contrast, may place 2 or 3 copies of your pod on the same node.
Examples of DaemonSets: kube-proxy and network agents such as flannel run as one pod per node.
Let’s create one: -
1- Create the yaml file (daemonset.yaml) of the pod, then apply it (see the commands after the yaml): -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent
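2- Apply it and verify that one pod is running per node: -
$$ kubectl apply -f daemonset.yaml
$$ kubectl get daemonsets
$$ kubectl get pods -o wide    >>> one pod per node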
• Logging & Monitoring.
➢ Kubernetes Dashboard Solution: -
-- Kubernetes does not come with a full-featured built-in monitoring solution, so you can use open-source or commercial tools like Prometheus, Elastic, Datadog, or Dynatrace.
-- As you know, Kubernetes monitors pods and automatically starts a new one when one goes down; this works because the kubelet contains a small agent called cAdvisor.
-- cAdvisor is responsible for retrieving performance metrics from pods and exposing them through the kubelet API to the metrics server.
-- Every Kubernetes cluster can run a metrics server, which retrieves metrics from the nodes and pods, but it cannot store metrics data on disk, so you cannot see historical performance data; for that you must implement an external tool.
➢ What is Rancher: -
Rancher is a complete software stack for teams adopting containers. It addresses the operational and security
challenges of managing multiple Kubernetes clusters, while providing DevOps teams with integrated tools for running
containerized workloads.
➢ How to implement Rancher: -
1- Install Docker Engine and the Docker CLI on any worker node of your cluster, for example server 192.168.1.2: -
$$ sudo apt-get install docker-ce docker-ce-cli
2- We will run Rancher as a Docker container, separate from the Kubernetes cluster: -
$$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:latest
3- To get the password for the admin user, run the command below on the same server, replacing the container ID with your Rancher container's ID (from docker ps): -
$$ docker logs 9503e066bb38 2>&1 | grep "Bootstrap Password:"
Then take the password from the docker logs output and enter it in the dashboard.
➢ How to import our cluster into the Rancher dashboard: -
1- Choose Import Existing Cluster.
4- Then run the generated import command on the cluster, then click next.
After that you MUST wait about 20 minutes until you see all pods running.
➢ How to implement Grafana and Prometheus using the Rancher dashboard: -
1- Access the cluster named Kubernetes.
3- Then search for Grafana, click install, and wait about 10 minutes until the pods are up and running.
4- Then access Grafana by clicking Monitoring, then clicking the Grafana tile.
5- Congratulations!
• Configure Environment Variables Vs ConfigMap Vs Secrets.
➢ Configure environment variables in applications: -
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    env:
    - name: APP_COLOR
      value: pink
➢ Configuring ConfigMaps in Applications: -
1- Create the ConfigMap, then apply it: -
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
2- Then bind the pod to ConfigMap.
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    envFrom:
    - configMapRef:
        name: app-config
- In case we need to retrieve only one variable, APP_COLOR, see below.
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    env:
    - name: APP_COLOR
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_COLOR
- You can list all the ConfigMaps on the cluster: -
$$ kubectl get configmaps
➢ Configure Secrets in Applications: -
1- Create the Secret, then apply it (the values are base64 encoded): -
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_Host: bXlzcWw=       >>> on linux run $$ echo -n 'mysql' | base64 >>> it gives bXlzcWw=
  DB_User: cm9vdA==       >>> on linux run $$ echo -n 'root' | base64 >>> it gives cm9vdA==
  DB_Password: cGFzd3Jk   >>> on linux run $$ echo -n 'paswrd' | base64 >>> it gives cGFzd3Jk
2- Then bind the pod to Secret.
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    envFrom:
    - secretRef:
        name: app-secret
- In case we need to retrieve only one secret, DB_Password, see below.
apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    env:
    - name: DB_Password
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: DB_Password
Important note: -
______________
-- Anyone able to create pods/deployments in the same namespace can access the secrets, so you must configure least-privilege access to Secrets (RBAC).
-- You can also manage secrets through an external provider such as Vault.
• Storage in Kubernetes.
- Kubernetes behaves like Docker: it keeps data only while the pod is running; if the pod is terminated, the data is deleted! Below is a demo pod that writes random numbers to a text file at /opt/number.out inside the pod.
apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh","-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out;"]
    volumeMounts:
    - mountPath: /opt      >>> volumeMounts: points to the path of the data inside the pod
      name: data-volume
  volumes:
  - name: data-volume      >>> volumes: maps the mounted data to the path /data on the node
    hostPath:
      path: /data
      type: Directory
- Instead of hostPath you can use a cloud volume, for example AWS Elastic Block Store: -
  volumes:
  - name: data-volume
    awsElasticBlockStore:
      volumeID: <volume-id>    >>> the ID of the EBS volume
      fsType: ext4
➢ Persistent Volumes and Persistent Volumes Claims: -
1- Create the PersistentVolume yaml, then apply it: -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
  labels:
    name: my-pv
spec:
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  #persistentVolumeReclaimPolicy: Delete
  capacity:
    storage: 1Gi
  hostPath:
    path: /data
accessModes >>> there are three access modes, describing how the volume may be mounted: ReadOnlyMany (read-only by many nodes), ReadWriteOnce (read-write by a single node), and ReadWriteMany (read-write by many nodes).
persistentVolumeReclaimPolicy: Retain >>> when you delete the PVC, the volume it used is not automatically deleted or released; you must clean it up manually before another PVC can use it.
#persistentVolumeReclaimPolicy: Delete >>> when you delete the PVC, the volume it used is deleted automatically, so the storage can be reused by another PVC.
To get it:
$$ kubectl get persistentvolume
2- Create the PersistentVolumeClaim yaml, then apply it: -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce        >>> the access mode must match the persistent volume it will bind with
  resources:
    requests:
      storage: 500Mi     >>> the requested storage must not exceed the persistent volume it will bind with
  selector:
    matchLabels:
      name: my-pv        >>> uses the label to bind the PVC to the PV
To get it
$$kubectl get persistentvolumeclaim
3- Create the yaml file of the Pod that mounts the PVC, then apply it: -
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"   >>> the path of the data in the pod that I want to persist
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim           >>> the same name as the PVC (PersistentVolumeClaim)
• Ingress.
➢ What is Ingress: -
- Ingress: an API object that helps developers expose their applications and manage external access by providing HTTP/HTTPS routing rules to the services within a Kubernetes cluster.
- Let's start: -
1-Create ConfigMap-ingress.yaml with your custom "err-log-path","keep-alive","ssl protocols" and apply it: -
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
2-Create ServiceAccount-ingress.yaml with your custom Roles, ClusterRoles, RoleBindings and apply it: -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
3-Create ingress-deployment.yaml and apply it: -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
4-Create NodePort-ingress.yaml service and apply it: -
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress
➢ Another easy way: -
- From any node with kubectl access, run a single command that creates the ConfigMap, service account, ingress deployment, and ingress service all at once: -
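The ingress-nginx project publishes a single manifest for this; the URL below is an example for a bare-metal cluster (check the ingress-nginx docs for the current version):
$$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml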
- To get the node port that ingress listens on (31500 here): -
$$ kubectl get all -n ingress-nginx
- Then create an Ingress resource with the routing rules and apply it: -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-backend-dev
  annotations:
    nginx.ingress.kubernetes.io/error-page: "/(/|$)(.*)"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /consumer(/|$)(.*)    >>> /consumer is the route prefix, e.g. http://192.168.1.2:31500/consumer
        pathType: Prefix
        backend:
          service:
            name: consumer          >>> the name of the service of the application pod to route to
            port:
              number: 4040          >>> the port of that service (4040)
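Then test the route from outside the cluster, using a node IP and the ingress node port found above:
$$ curl http://192.168.1.2:31500/consumer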
It was an amazing journey!!!
"In the intricate dance of containers and orchestration, where pods waltz and services harmonize, Kubernetes
emerges as the maestro orchestrating a symphony of scalability and resilience. As I navigate this ever-evolving
landscape, I am reminded that in the world of distributed systems, Kubernetes is the conductor that transforms
chaos into seamless serenity."
"I am Sherif Yehia, a DevOps engineer. I trust that this reference proves beneficial to DevOps
engineers navigating the dynamic realm of technology. I am enthusiastic about sharing insights on
various technologies, and you can explore more of my posts and papers on my LinkedIn profile.
Thank you for your time and consideration.