Kubernetes

The document provides instructions for installing Kubernetes from scratch on 3 Ubuntu machines. It outlines the requirements, architecture, and installation steps. The requirements include 3 machines with at least 2 GB of RAM, 2 CPUs, and full network connectivity. The architecture consists of a master node and 2 worker nodes. The installation steps guide the user through setting up containerd, kubelet, kubeadm, and kubectl on all nodes, then initializing the master node and joining the worker nodes to create a Kubernetes cluster.

By: Eng.

Sherif Yehia Mostafa

Email: [email protected]

LinkedIn: linkedin.com/in/sherif-yehia-389071178/

1|P age
Table of Contents.
• Install k8s from scratch…………………………………………………………………………………….……….3
➢ Requirements………………………………………………………………….……………………………………….………….3
➢ Architecture…………………………………………………………………………………………………………….…………..3
➢ Installation Steps……………………………………………………………………………………………………….…………3
• What does k8s consist of?..........................................................................................10
➢ Kubernetes Architecture is consisting of Master Node and Worker nodes ……………….………...10
➢ Master Node consists of ……………………………………………………………………………………………………..10
➢ Worker Nodes consists of…………………………………………………………………………………….………………11
• How to manage k8s?..................................................................................................12
➢ What is Pod and how to create one?.....................................................................................12
➢ What is Replication Controller and how to create one?.........................................................14
➢ What is Replica Set and how to create one?..........................................................................16
➢ How to scale up Replica set?.................................................................................................17
➢ What is Deployment and how to create one?........................................................................19
➢ Some Important command………………………………………………………………………………………………….20
➢ What is Service and how to create one?....………………………………………………………………………....22
➢ NodePort-Service ……………………………………………………………………………………………………………….23
➢ ClusterIP-Service ………………………………………………………………………………………………………………..26
➢ What is a namespace and how to create one?......................................................................27
➢ What is scheduling and how to create pod on specific node?................................................31
➢ What is Labels and Selectors and how to create one?...........................................................32
➢ What is Taints and Tolerations and how to create one?........................................................33
➢ What is Node Selectors and how to create one?...................................................................35
➢ What is Node Affinity and how to create one?......................................................................36
➢ Taints and Tolerations vs Node Affinity…………………………………………………………………….…………39
➢ What is Resource Requirements and Limits and how to create one?.....................................40
➢ What is DaemonSets and how to create one?.......................................................................44
• Logging & Monitoring……………………………………………………………………………………………….46
➢ Kubernetes Dashboard Solution………………………………………………………………………………………….46
➢ What is Rancher?..................................................................................................................46
➢ How to Implement rancher?.................................................................................................47
➢ How to import our cluster in rancher dashboard?.................................................................48
➢ How to Implement Grafana and Prometheus using Rancher Dashboard?..............................50
• Configure Environment Variables Vs ConfigMap Vs Secrets………………………………………52
➢ Configure environment variables in applications…………………………………………………………………52
➢ Configuring ConfigMaps in Applications………………………………………………………………………………52
➢ Configure Secrets in Applications………………………………………………………………………………………..54
2|P age
• Storage in Kubernetes……………………………………………………………………………………………..57
➢ Persistent Volumes and Persistent Volumes Claims……………………………………………………………58
• Ingress………………………………………………………………………………………………..…………………..61
➢ What is Ingress?...................................................................................................................61
➢ Another Easy way………………………………………………………………………………………………………………64

3|P age
• Install k8s from scratch: -
➢ Requirements: -
1- 3 machines – OS: Ubuntu 20.04.
2- 2 GB or more of RAM per machine.
3- 2 CPUs or more.
4- Full network connectivity between all machines in the cluster (you can disable the firewall since this is just a testing
environment).
5- Unique hostname, MAC address, and product_uuid for every node.
6- Root user.

➢ Architecture: -

Master Node: 192.168.1.1
Worker Node 1: 192.168.1.2
Worker Node 2: 192.168.1.3

➢ Installation Steps: -
a) Commands to run on all nodes ( Master node – Worker Node 1 – Worker Node 2 ) :-
1- You MUST disable swap.
$$ sudo swapoff -a

Then disable swap permanently by commenting out the swap entry in /etc/fstab:

$$ nano /etc/fstab
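
If you prefer to do this from the command line instead of editing the file by hand, a minimal sketch (it comments out any line mentioning swap, so review /etc/fstab afterwards):

$$ sudo sed -i '/swap/ s/^/#/' /etc/fstab
$$ grep swap /etc/fstab     >>> the swap entry should now be commented out
$$ free -h                  >>> Swap should show 0B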

4|P age
2- Set up the IPV4 bridge on all nodes ( run all below at one time).

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf


overlay
br_netfilter
EOF

sudo modprobe overlay


sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots


cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot


sudo sysctl --system

3- Verify that the br_netfilter, overlay modules are loaded by running the following
commands:
$$lsmod | grep br_netfilter
$$lsmod | grep overlay


4- Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and
net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the
following command:

$$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

5|P age
5- Disable the firewall (to make sure the 3 machines can reach each other). If ufw is active, disable it, then confirm its status: -
$$ ufw disable
$$ ufw status

6- Install Containerd from docker website: -

$$sudo mkdir /etc/apt/keyrings

$$sudo apt-get update

$$sudo apt-get install ca-certificates curl gnupg

$$sudo install -m 0755 -d /etc/apt/keyrings

$$curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

$$sudo chmod a+r /etc/apt/keyrings/docker.gpg

$$echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

$$sudo apt-get update

$$sudo apt-get install containerd.io

$$systemctl status containerd.service

Check that the containerd service is active (running).

6|P age
7- Set the cgroup driver to systemd (SystemdCgroup = true) in the containerd configuration.

$$ nano /etc/containerd/config.toml
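
The screenshot of this edit is not reproduced here; as a minimal sketch, you can regenerate containerd's default configuration and switch SystemdCgroup to true (review the file afterwards):

$$ containerd config default | sudo tee /etc/containerd/config.toml > /dev/null
$$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
$$ grep SystemdCgroup /etc/containerd/config.toml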

Then restart service to read new configuration.


$$ systemctl restart containerd.service

8- Let’s install kubelet, kubeadm, and kubectl to create a Kubernetes cluster. They play an
important role in managing a Kubernetes cluster.

$$ sudo apt-get update

$$ sudo apt-get install -y apt-transport-https ca-certificates curl

$$ mkdir -p /etc/apt/keyrings

$$ curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

$$ echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

$$ sudo apt-get update

$$ sudo apt install -y kubelet=1.25.4-00 kubeadm=1.25.4-00 kubectl=1.25.4-00

$$ sudo apt-mark hold kubelet kubeadm kubectl

To check the installation of kubelet-kubeadm-kubectl :-

$$ kubeadm version
$$ kubelet --version
$$ kubectl version --short


7|P age
b) Commands to run on the Master node only ( 192.168.1.1 ) :-

1- Initialize your master node. The --pod-network-cidr flag sets the IP address range for the
pod network (the 10.244.0.0/16 range is required for the Flannel network add-on installed in a later step).

- 192.168.1.1 is the IP of master node.

$$kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.1

The output of kubeadm init includes a kubeadm join command with a token and a certificate hash; save it, as it is used to join the worker nodes in step (c).

2- Set up config file.

$$ mkdir -p $HOME/.kube
$$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

c) Join the worker nodes to the cluster (WorkerNode1 192.168.1.2 and WorkerNode2 192.168.1.3 from the
architecture above).

1- From machine WorkerNode1 run

$$ kubeadm join 192.168.1.1:6443 --token d2tgh8.f9q3vjf1i5t1uneu \
   --discovery-token-ca-cert-hash sha256:59b6ac2294eb69ccf84743fc2b9ea5113b64bbe5ea0d5372938b1e81468c47da

Then copy the kubeconfig from the MasterNode path /root/.kube/config and paste it into WorkerNode1 at the same path
/root/.kube/config (so kubectl can also be used from the worker).

8|P age
2- From machine WorkerNode2 run

$$ kubeadm join 192.168.1.1:6443 --token d2tgh8.f9q3vjf1i5t1uneu \
   --discovery-token-ca-cert-hash sha256:59b6ac2294eb69ccf84743fc2b9ea5113b64bbe5ea0d5372938b1e81468c47da

Then copy the kubeconfig from the MasterNode path /root/.kube/config and paste it into WorkerNode2 at the same path
/root/.kube/config (so kubectl can also be used from the worker).
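
If the token printed by kubeadm init has expired or was not saved, a fresh join command can be generated on the master node at any time (standard kubeadm command, added here as a convenience):

$$ kubeadm token create --print-join-command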

d) Commands to run on the Master node only ( 192.168.1.1 ) :-


1- Install the Flannel add-on as the pod network solution.
$$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

➢ Finally, verify that all nodes appear and reach the Ready state:
$$ kubectl get nodes


9|P age
• What does k8s consist of?

➢ Kubernetes architecture consists of a Master Node and Worker Nodes: -

o Master Node: responsible for cluster management and for providing the API that is used to configure and
manage resources within the Kubernetes cluster.

o Worker Nodes: responsible for running the containers and doing any work assigned to them by the master
node.

➢ Master Node consists of: -

1- ETCD CLUSTER: a key-value data store that holds information about the cluster such as nodes, pods, configs, secrets, accounts, roles,
bindings, and more. When you run a kubectl get command, the data comes from the etcd server. Every change in the cluster, such as adding
nodes or deploying pods, is recorded in the etcd server.

--ETCD listens on port 2379 for connections from the kube-apiserver.

10 | P a g e
2- Kube-scheduler: identifies the right node to place a container on, based on the container's resource
requirements and the worker nodes' capacity.

3- Kube Controller Manager: takes care of the nodes; it is responsible for onboarding new nodes to the cluster,
handling nodes that become unavailable or get destroyed, and for replication.

By default, the controller manager checks node status every 5 seconds through the kube-apiserver. If it stops receiving heartbeats from a
node, it waits 40 seconds before marking the node as unreachable, then gives the node 5 minutes to come back. If the node does not come
back, the pods (containers) that were created on that node are removed and recreated on another node to maintain the desired replication.

4- Kube-apiserver: the API server that handles requests and responses between the master node components and the worker nodes.

Kube-API: using a tool like Postman or curl you can post a request (curl -X POST /api/v1/namespaces/default/pods) to create a new
pod (pod = container = application): -

How does kube-API handle that request?

1- Kube-api first authenticates the user.
2- Then kube-api validates the request.
3- Kube-api retrieves data from ETCD and records the new pod in ETCD.
4- Kube-api notifies the kube-scheduler that there is a new pod to be created, so the scheduler can decide which node is
suitable for it.
5- Kube-api then receives the placement decision from the kube-scheduler and updates it in ETCD.
6- Kube-api sends all the information to the kubelet on the node that the scheduler selected.
7- That kubelet creates the pod on the node it manages (a quick way to query the API server directly is shown below).
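
As a side illustration (not part of the original walkthrough), you can query the kube-apiserver directly from the master node, assuming the kubeconfig set up during installation is in place:

$$ kubectl proxy --port=8001 &
$$ curl http://localhost:8001/version
$$ curl http://localhost:8001/api/v1/namespaces/default/pods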

➢ Worker Nodes consists of: -

1- Kubelet: every worker node has its own kubelet. The kubelet is the captain of the worker node: it takes orders from the
master node and reports the status of the node and its pods back to the master node through the kube-apiserver.

2- Kube-Proxy: Kube-proxy maintains network rules on nodes. These network rules allow network communication to your
Pods from network sessions inside or outside of your cluster.

11 | P a g e
• How to manage k8s?
➢ What is Pod and how to create one?

Pods: Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A pod contains
the container that runs the application plus any other dependencies.

1-create yaml file such as pod-definition.yaml

$$nano pod-definition.yaml

2-yaml contain: -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
  - name: nginx-container
    image: nginx

apiVersion: v1 >>> the API version used for this object (v1 for a Pod)

kind: Pod >>> the kind of object to create, such as Pod, Service, ReplicaSet, or Deployment

metadata: >>> gives the object its name and labels (the name and labels are later used to connect a Service to
this pod, among other purposes)

name: myapp-pod >>> name of the pod (up to you)

labels: >>> used to identify this object; here app and type are used as labels (up to you)
- Think about having 1000 pods: labels are how you find the right ones.
- You can add more labels besides app and type.

spec: >>> specifies the container(s) in this pod; here we create a single container in a single pod
containers: >>> information about the application container(s)
- name: nginx-container >>> name of the application container (up to you)
  image: nginx >>> name of the image in the Docker repository

12 | P a g e
3-Apply yaml file: -

$$kubectl create -f pod-definition.yaml


-f : file name
Or
--We can use kubectl apply when the yaml file has been modified, to update the existing object with the new definition.
$$kubectl apply -f pod-definition.yaml
--To get status of the new pod
$$kubectl get pods -owide


--To get more information about the new pod


$$kubectl describe pod myapp-pod


--To open session inside pod to run any command


$$ kubectl exec -it myapp-pod -- /bin/bash

4-To delete it: -

$$kubectl delete -f pod-definition.yaml

13 | P a g e
➢ What is Replication Controller and how to create one: -

Replication Controller: is a Kubernetes resource that ensures a pod (or multiple copies of the same pod) is always up and
running. If the pod disappears for any reason (like in the event of a node disappearing from the cluster), the replication
controller creates a new pod immediately.

Let’s create one: -

1-create yaml file such as replication-controller-definition.yaml


$$nano replication-controller-definition.yaml

2-yaml contain: -

apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc          >>> this name and these labels belong to the ReplicationController itself
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3             >>> number of pods is 3
  template:               >>> this part holds the name and labels of the Pod
    metadata:
      name: myapp-pod
      labels:             >>> the ReplicationController uses these labels to bind the pods to it
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx

14 | P a g e
3-Apply yaml file: -

$$kubectl apply -f replication-controller-definition.yaml


--To get status of the new pod
$$kubectl get pods -owide

The three pods are distributed across worker-node-1 and worker-node-2.

$$kubectl get replicationcontroller

4-To delete it: -

$$kubectl delete -f replication-controller-definition.yaml

15 | P a g e
➢ What is Replica Set and how to create one: -

ReplicaSet: the same concept as the Replication Controller, but the ReplicaSet is the newer replication
mechanism that replaces the Replication Controller.

Let’s create one: -

1-create yaml file such as replication-set-definition.yaml


$$nano replica-set.yaml

2-yaml contain: -

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:               >>> the selector binds the ReplicaSet to its pods by matching labels (here: type)
    matchLabels:
      type: front-end     >>> used to monitor the three instances; if any of the 3 goes down it is redeployed

3-Apply yaml file: -

$$kubectl apply -f replica-set.yaml


--To get status of the new pod
$$kubectl get pods -owide


16 | P a g e
➢ How to scale up Replica set: -

Scaling up a ReplicaSet means increasing the number of pods.

Let's edit the file created above: -

1-Edit the yaml file replica-set.yaml by changing the number of replicas from 3 to 6.


$$nano replica-set.yaml

2-yaml contain: -

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 6
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end

3-Apply yaml file: -

$$kubectl apply -f replica-set.yaml

17 | P a g e
--To get status of the new pod
$$kubectl get pods -owide


4-Another way: -
$$kubectl scale --replicas=6 -f replica-set.yaml
Or:
$$kubectl edit replicaset myapp-rs >>> myapp-rs is the name of the ReplicaSet in the yaml file

18 | P a g e
➢ What is Deployment and how to create one: -

Deployment: is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods
along with a lot of other useful features.

Deployment vs ReplicaSet: -

- Deployment: a high-level abstraction that manages ReplicaSets.
  ReplicaSet: a lower-level abstraction that manages the desired number of replicas of a pod.

- Deployment: manages a template of pods and uses ReplicaSets to ensure that the specified number of replicas of the pod is running.
  ReplicaSet: only manages the desired number of replicas of a pod.

- Deployment: provides a mechanism for rolling updates and rollbacks of the application, enabling seamless updates and reducing downtime.
  ReplicaSet: applications must be manually updated or rolled back.

Let’s create one: -

1-create yaml file such as deployment.yaml


$$nano deployment.yaml

2-yaml contain: -

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 3
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end

19 | P a g e
3-Apply yaml file: -

$$kubectl apply -f deployment.yaml


--To get status of the new pod
$$kubectl get pods -owide
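
Since the comparison above highlights rolling updates and rollbacks, here is a short illustrative sequence against the deployment created above (not from the original document; the image tag 1.25 is only an example):

$$ kubectl set image deployment/myapp-rs nginx-container=nginx:1.25
$$ kubectl rollout status deployment/myapp-rs
$$ kubectl rollout history deployment/myapp-rs
$$ kubectl rollout undo deployment/myapp-rs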

➢ Some Important command: -

1-$$kubectl run nginx --image=nginx

It automatically creates a pod by pulling the official nginx image from Docker Hub and running it in a pod on a
worker node, without a yaml file.

2-$$kubectl get all

To get all resources (pod – deployment – replicaset – service – daemonset – etc.) in the default namespace (we will
learn more about namespaces in the next slides)


3-$$kubectl get all -A

To get all resources in all namespaces

4-$$kubectl get all -A -owide

To get all resources in all namespaces, with more information such as which worker node each pod is running on and the
virtual IP of every pod

5-$$kubectl exec -it myapp-rs-7c4d4f7fc6-bjjm6 -- /bin/bash

It is used to open the terminal inside the pod to execute any command inside the pod.

myapp-rs-7c4d4f7fc6-bjjm6 >>> name of the pod

20 | P a g e
6- $$ kubectl describe pod myapp-rs-7c4d4f7fc6-bjjm6

7-$$kubectl run redis --image=redis123 --dry-run=client -o yaml > redis-definition.yaml

It is easy way to create yaml file with basic configuration without run this pod (--dry-run=client).

The output: -

8-$$ kubectl create deployment --image=nginx nginx --replicas=4 --dry-run=client -o yaml > nginx-deployment.yaml

9-To get logs of any pod


$$ kubectl logs consumer-55c97bfbc7-7nfn6 -n backend-dev

-n = namespace

21 | P a g e
➢ What is Service and how to create one: -

Kubernetes Service: enables communication between various components within and outside of the
application; it helps us connect applications to other applications or to users.

--There are three types of main service: -

1- NodePort-Service: Exposes the service on a static port on each node's IP. It makes the service accessible
externally at the specified node port.

2- ClusterIP-Service: The default service type, which provides a cluster-internal IP address. It is used for
communication between different parts of an application within the cluster.

3- LoadBalancer-Service: Exposes the service externally using a cloud provider's load balancer. The external
IP is provisioned, and traffic is distributed to the service.

22 | P a g e
1-- NodePort-Service:

After creating the nginx deployment as shown on Pg:17, we are going to create a service (type: NodePort) to make those
pods accessible from the outside world.

Let’s create one: -

1-create yaml file such as nginx-service.yaml.

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - targetPort: 80
    port: 80
    nodePort: 31500
  selector:            >>> must match the labels in the deployment yaml, to bind the service to the pods it routes to
    type: front-end
    app: myapp
  type: NodePort

3-Apply yaml file: -

$$kubectl apply -f nginx-service.yaml

--To get status of the new pod


$$kubectl get all -owide

23 | P a g e
◼ Now a very important example to understand how NodePort works: we have a demo application
called consumer, consisting of 2 pods on the two worker nodes listening on port 4040,
which I will expose on nodePort 31040.

Let’s create one: -

1-create yaml file such as consumer-service.yaml and apply it.

---
apiVersion: v1
kind: Service
metadata:
  name: consumer
spec:
  ports:
  - targetPort: 4040
    port: 4040
    nodePort: 31040
  selector:
    app: consumer
  type: NodePort

2- Check the connection with telnet from the machines themselves (outside the pods).

Telnet from any machine to a worker node IP on the nodePort succeeds, for example: telnet 192.168.1.3 31040.

What about checking telnet on port 4040? Telnet to a worker node IP on port 4040 (for example telnet 192.168.1.3 4040)
does not connect from outside the cluster.

So, it is important to know that a NodePort service exposes our application outside the cluster through the
nodePort only; but what is targetPort used for? We will see on the next page.

24 | P a g e
◼ Now it is very important to understand how to connect between two pods (the nginx application connecting to
the consumer application): [nginx pod -> consumer service -> consumer pod].

1- go inside nginx pod (any pod): -


$$kubectl exec -it myapp-rs-7c4d4f7fc6-h2tj9 -- /bin/bash

2- install telnet package: -


$$ apt-get update && apt-get install telnet

3- check connection from inside nginx: -


$$telnet consumer.default.svc.cluster.local 4040

The connection to the consumer service on port 4040 succeeds.

What does consumer.default.svc.cluster.local mean?

consumer: name of the service (Pg:22)
default: the default namespace (we will learn more about namespaces in the next slides)
svc: refers to Kubernetes services
cluster.local: the cluster's default DNS domain

Let's have a brief summary of the NodePort service: -

- The nodePort must be in the range 30000 to 32767.
- To connect from outside the cluster to a pod inside the cluster, use any worker node IP plus the nodePort.
- To connect from one pod to another inside the cluster, use the service name plus the targetPort (an optional imperative shortcut is shown below).
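
As an aside (not in the original text), a NodePort service can also be created imperatively; note that kubectl expose cannot pin a specific nodePort (Kubernetes picks one from the range), and the name below would clash with the yaml-created service if it already exists:

$$ kubectl expose deployment myapp-rs --name=nginx-service --type=NodePort --port=80 --target-port=80
$$ kubectl get service nginx-service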

25 | P a g e
2-- ClusterIP-Service:

It is similar in concept to the NodePort service, but it is used only to connect pods inside the cluster (no external port is opened).

Let’s create one: -

1-create yaml file such as ClusterIP-service.yaml and apply it.

---
apiVersion: v1
kind: Service
metadata:
  name: consumer
spec:
  ports:
  - targetPort: 4040
    port: 4040
  selector:
    app: consumer
  type: ClusterIP

2- go inside nginx pod (any pod): -


$$kubectl exec -it myapp-rs-7c4d4f7fc6-h2tj9 -- /bin/bash

3- install telnet package: -


$$ apt-get update && apt-get install telnet

4- check connection from inside nginx: -


$$telnet consumer.default.svc.cluster.local 4040

The connection to the consumer service on port 4040 succeeds.

3-- LoadBalancer-Service (works with cloud providers only):

For VMware (on premises) we use a NodePort service together with an external load balancer.
For cloud providers such as AWS and Azure we use the LoadBalancer service type.

26 | P a g e
➢ What is namespace and how to create one: -

- By default, any pod, service, or other component you create without specifying a
namespace is created in the [default] namespace, as in the example on Pg: 20.

- Namespaces: used to divide pods (and other resources) under different names, to avoid accidentally modifying the
wrong production pods.

Let's start: -
We will have a namespace (backend) which contains the consumer app,
and a namespace (gateway) which contains nginx.

1—create a namespace called backend.


$$ kubectl create namespace backend

2--modify yaml file of consumer app to add namespace then apply it.

27 | P a g e
3 -- modify the service of consumer app to add namespace then apply it.

---
apiVersion: v1
kind: Service
metadata:
  name: consumer
  namespace: backend
spec:
  ports:
  - targetPort: 4040
    port: 4040
    nodePort: 31040
  selector:
    app: consumer
  type: NodePort

4 – create a namespace called gateway.


$$ kubectl create namespace gateway

5-- modify the yaml file of the nginx app to add a namespace, then apply it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  namespace: gateway
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 2
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end
28 | P a g e
6-- modify the service of nginx to add a namespace, then apply it.

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: gateway
spec:
  ports:
  - targetPort: 80
    port: 80
    nodePort: 31500
  selector:
    type: front-end
    app: myapp
  type: NodePort

7—Let's get everything in the backend namespace:

$$kubectl get all -owide -n backend
-n = namespace

8-- Let's get everything in the gateway namespace:


$$kubectl get all -owide -n gateway

29 | P a g e
◼ How to connect between two pods in different namespaces?
1- go inside nginx pod (any pod): -
$$ kubectl exec -it myapp-rs-7c4d4f7fc6-ccmx8 -n gateway -- /bin/bash
-n = namespaces

2- install telnet package: -


$$ apt-get update && apt-get install telnet

3- telnet from nginx pod to consumer pod: -


$$ telnet consumer.backend.svc.cluster.local 4040

The connection on port 4040 succeeds.

◼ To limit each namespace to specific resources, create a ResourceQuota.

Create pod-resources.yml and apply it: -

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: backend
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi
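
To confirm the quota was created and see how much of it is currently used (standard kubectl commands, added for convenience):

$$ kubectl get resourcequota -n backend
$$ kubectl describe resourcequota compute-quota -n backend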

30 | P a g e
➢ What is scheduling and how to create pod on specific node: -

Scheduling: the scheduler identifies the right node to place a pod on, based on the pod's resource requirements and the worker nodes' capacity.

- By default, scheduling is handled by the [Kube-scheduler] component on the master node.
- You can manually schedule a pod on a specific node.
- Create deployment.yaml, then apply it.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  namespace: gateway
  labels:
    app: myapp
spec:
  replicas: 2
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx-container
        image: nginx
      nodeName: worker-node-2
  selector:
    matchLabels:
      app: myapp

- We can check it.


31 | P a g e
Both pods are scheduled on worker-node-2.

➢ What is Labels and Selectors and how to create one: -

Labels and Selectors: Are a standard method to group things together.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  namespace: gateway
  labels:
    app: myapp
    type: front-end
spec:
  replicas: 2
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
  selector:
    matchLabels:
      type: front-end

you can search in pods by labels: -

$$kubectl get pods --selector app=myapp
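
A few related label queries (standard kubectl flags, added as illustration):

$$ kubectl get pods --selector app=myapp,type=front-end
$$ kubectl get pods -l type=front-end --show-labels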

32 | P a g e
➢ What is Taints and Tolerations and how to create one: -

Taints and Tolerations: these control the relationship between pods and nodes, i.e. how you can restrict which pods land on a specific
node.

- As you can see in the figure above, we have 3 pods with 3 tolerations and 3 nodes with 3 taints.
- A node with a taint will not accept any pod except a pod whose toleration matches the taint's value.
- Important note: a pod with a toleration can be scheduled on the node whose taint matches its toleration, but it can also be
scheduled on a node that has NO taint. To avoid that we use something called node affinity, which we
will see in the next slides.
- Why are pods not automatically deployed on the master node?
Because a taint is set automatically on the master node that prevents any pods from being scheduled there.

How do we taint a node?


$$kubectl taint nodes node-1 key=Red:taint-effect
- There are three types of taint-effect: -
NoSchedule: pods will not be scheduled on the node, except pods whose toleration matches the node's taint.
PreferNoSchedule: the system will try to avoid placing a pod on the node, but it is not guaranteed.
33 | P a g e
NoExecute: new pods will not be scheduled on the node, and existing pods already running on the node (placed there
before the taint was applied) will be evicted if they do not tolerate the taint.
1- Assign node (worker-node-1) the taint [myapp=red]:
$$ kubectl taint nodes worker-node-1 myapp=red:NoSchedule
2- Create a pod with the toleration [myapp=red], then apply it: -

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-rs
  labels:
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        type: front-end
    spec:
      containers:
      - name: nginx-container
        image: nginx
      tolerations:
      - key: "myapp"
        operator: "Equal"       >>> Equal means the toleration matches the taint [myapp = red] exactly
        value: "red"
        effect: "NoSchedule"
  selector:
    matchLabels:
      type: front-end

3- If you want to remove the taint from node (worker-node-1): -

$$ kubectl taint nodes worker-node-1 myapp=red:NoSchedule-
Just add a (-) at the end of the command.
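
To check which taints are currently set on a node (standard kubectl usage, shown here as a convenience):

$$ kubectl describe node worker-node-1 | grep -i taint
$$ kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints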

34 | P a g e
➢ What is Node Selectors and how to create one: -

Node Selectors: suppose we have a container whose application needs a node with high resources, such as 64 GB of RAM. We
label the high-resource node with size=large, and then add a nodeSelector with size=large to the application's yaml, so the
pod is deployed on the high-resource node.

1- Assign node as a Size = large: -


$$ kubectl label nodes worker-node-1 size=Large

2- Edit yaml file with node selector and apply it: -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeSelector:
    size: Large
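
To confirm the label and see where the pod was scheduled (standard kubectl commands, added as a convenience):

$$ kubectl get nodes --show-labels
$$ kubectl get pod myapp-pod -owide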

35 | P a g e
➢ What is Node Affinity and how to create one: -

Node Affinity: it serves the same purpose as the node selector, but node affinity is more expressive and gives you more
features.

1- Assign node as a Size = large: -


$$ kubectl label nodes worker-node-1 size=Large

2-Edit yaml file to deploy on pod with label Large then apply it: -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large

36 | P a g e
Or edit the yaml file to deploy on a node with label Large or Medium, then apply it: -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium

Or edit the yaml file to avoid nodes with label Small (operator NotIn), then apply it: -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: NotIn
            values:
            - Small

37 | P a g e
Example: deploy the pod on any node that has the label key size, regardless of its value (key only): -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: Exists

◼ What happens if someone deletes the label size from a node that already contains a pod scheduled with nodeAffinity?

- The answer is in requiredDuringSchedulingIgnoredDuringExecution.

- There are two types: -

1- requiredDuringSchedulingIgnoredDuringExecution >>> the pod must find a node with the matching label; if none exists, the
pod will not be deployed.

>>>> IgnoredDuringExecution: any change on the node (such as deleting its label) does not affect a pod
after it has been deployed.

2- preferredDuringSchedulingIgnoredDuringExecution >>> the pod does not need to find a node with the matching label, so the
pod can still be deployed on another node.

Stage one (DuringScheduling): the pod does not exist yet and is being created for the first time.

Stage two (DuringExecution): the pod already exists and something changes in the environment that affects node affinity, such as
deleting the label on a node that already contains pods with node affinity.

38 | P a g e
➢ Taints and Tolerations vs Node Affinity: -
Taints and Tolerations: as you know, a pod with a toleration is allowed on the tainted node, but it can also be deployed on a
normal (untainted) node.

Node Affinity: as you know, a pod with node affinity is deployed on the node with the matching label, but a normal pod
(without affinity) can also be deployed on that labeled node.

So if you want a specific pod to be deployed only on a specific node, you need a combination of Taints and Tolerations together with
Node Affinity.

Or there is an easier way: add nodeName to specify the node to deploy on: -

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  nodeName: Node-1

39 | P a g e
➢ What is Resource Requirements and Limits and how to create one: -

Resource Requirements and Limits: we are going to specify resources for every pod.

Note that by default a pod has no resource limit; as the load on the application increases, its resource usage increases, which can
starve other pods.

Let’s create one: -


1- create yaml file of pod then apply it:-

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "4Gi"   >>>>> 1Gi = 1073741824 bytes / 1Mi = 1048576 bytes / 1Ki = 1024 bytes
        cpu: 2          >>>>> it can also be written as 100m or 0.1, as 1 CPU = 1000m

40 | P a g e
2- Also, we have another example to limit maximum utilization.

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "4Gi"
        cpu: 2
      limits:
        memory: "8Gi"
        cpu: 4

NOTE: as the load increases, the pod can never use more than the CPU limit (4 CPUs here); even if it needs more, the CPU is
throttled instead of the pod being killed.
NOTE: as the load increases, if the pod tries to use more than the 8Gi RAM limit, it will be terminated with an OOM (Out Of
Memory) error.

So the best practice for resource requirements and limits is to add requests without limits, BUT you must set
requests for all pods, because a pod with no requests may end up consuming all the resources: -

41 | P a g e
3- We can create a LimitRange, an object that specifies default CPU and RAM requests/limits for all pods in a
namespace: -

NOTE: a LimitRange does not affect pods that already exist; it only affects newly created pods!

Let’s create one: -


- Limit for CPU

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - default:
      cpu: 500m
    defaultRequest:
      cpu: 500m
    max:
      cpu: "1"
    min:
      cpu: 100m
    type: Container

- Limit for RAM

apiVersion: v1
kind: LimitRange
metadata:
  name: memory-resource-constraint
spec:
  limits:
  - default:
      memory: 1Gi
    defaultRequest:
      memory: 1Gi
    max:
      memory: 1Gi
    min:
      memory: 1Gi
    type: Container

42 | P a g e
4- We can also limit the total CPU and RAM used by all pods together (the sum across all pods) in a
namespace with a ResourceQuota: -

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-resource-quota
spec:
  hard:
    requests.cpu: 4
    requests.memory: 4Gi
    limits.cpu: 10
    limits.memory: 10Gi

Just for your knowledge: -

-To extract the pod definition in YAML format to a file using the command

$$kubectl get pod webapp -o yaml > my-new-pod.yaml

43 | P a g e
➢ What is DaemonSets and how to create one: -

DaemonSets: used to deploy one copy of a specific pod on each node in your cluster. If a new node joins the
cluster, the DaemonSet automatically deploys that pod (for example a monitoring agent) on it as well.

Let's say you want to deploy a monitoring agent or a log collector as a pod on each node in the
cluster, to monitor your cluster better; a DaemonSet is perfect for that.

What is the difference between a DaemonSet and a ReplicaSet?

___________________________________________________

- A DaemonSet deploys exactly one copy of your specific pod on each node in your cluster.
- A ReplicaSet may deploy 2 or 3 copies of your pod on the same node in your cluster.

Examples of DaemonSets: -

kube-proxy is deployed on each node of the cluster as a DaemonSet.

Flannel is deployed on each node of the cluster as a DaemonSet.

44 | P a g e
Let’s create one: -
1- create yaml file (daemonset.yaml) of pod then apply it:-

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent

2- You can check it.

$$ kubectl get daemonsets --all-namespaces

45 | P a g e
• Logging & Monitoring.
➢ Kubernetes Dashboard Solution: -
-- Kubernetes does not come with a full-featured built-in monitoring solution, so you can use tools like
Prometheus, Elastic, Datadog, or Dynatrace.

-- Kubernetes used to provide a tool called Heapster, but it is now deprecated.

-- As you know, Kubernetes monitors pods and automatically starts a new one when a pod goes down; this is possible
because the kubelet contains a small agent called cAdvisor.

-- cAdvisor is responsible for retrieving performance metrics from pods and exposing them through the kubelet API to
the metrics server.

-- A Kubernetes cluster can run a metrics server, which retrieves metrics from the nodes and pods,
but it cannot store the metrics data on disk, so you cannot see historical performance data; for that you must
deploy an open-source monitoring tool.

➢ What is Rancher: -
Rancher is a complete software stack for teams adopting containers. It addresses the operational and security
challenges of managing multiple Kubernetes clusters, while providing DevOps teams with integrated tools for running
containerized workloads.

46 | P a g e
➢ How to Implement rancher: -

1- Install Docker CLI on any WorkerNode of your cluster for example server (192.168.1.2): -
$$ sudo apt-get install docker-ce docker-ce-cli

2- We will run Rancher as a Docker container (containerized app), separate from the Kubernetes cluster: -
$$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:latest

3- To get the password for the admin user, run the command below on the same server: -
$$ docker logs 9503e066bb38 2>&1 | grep "Bootstrap Password:"

9503e066bb38 >>> is the container ID


To get the container ID run below.
$$docker ps

Then take the password from the docker logs output and enter it in the Rancher dashboard.

4- From your PC, open a browser and access Rancher at https://192.168.1.2.

47 | P a g e
➢ How to import our cluster in rancher dashboard: -
1- Import Existing Cluster as you see below.

2- Choose generic one.

3- To get the name of the cluster.


$$ kubectl config view

48 | P a g e
4- Then enter the cluster name as shown below and click Next: -

5- Run the registration command that Rancher displays on the master node (192.168.1.1): -

After that you MUST wait about 20 minutes until you see all the pods running: -

49 | P a g e
➢ How to Implement Grafana and Prometheus using the Rancher Dashboard: -
1- Access the cluster named Kubernetes.

2- Click on Charts.

3- Then search for the monitoring chart, click Install, and wait about 10 minutes until the pods are up and running.

50 | P a g e
4- Then access Grafana by clicking Monitoring and then the Grafana link.

5- Congratulations.

51 | P a g e
• Configure Environment Variables Vs ConfigMap Vs Secrets.
➢ Configure environment variables in applications: -

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    env:
    - name: APP_COLOR
      value: pink

➢ Configuring ConfigMaps in Applications: -


ConfigMaps: the same idea as environment variables, but kept in a separate yaml file; when the pod starts, the ConfigMap is
injected into the pod as environment variables.

Let’s create one: -


1- Create yaml file of the ConfigMap then apply it: -

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_COLOR: blue
  APP_MODE: prod
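
The same ConfigMap can also be created imperatively, without a yaml file (standard kubectl syntax, added for convenience):

$$ kubectl create configmap app-config --from-literal=APP_COLOR=blue --from-literal=APP_MODE=prod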

52 | P a g e
2- Then bind the pod to ConfigMap.

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    envFrom:
    - configMapRef:
        name: app-config

-In case we need to retrieve only one variable APP_COLOR, check below.

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    env:
    - name: APP_COLOR
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: APP_COLOR

53 | P a g e
-You can get all ConfigMap on the cluster.

$$kubectl get configmaps -A >>> for all namespaces.

$$kubectl get configmaps -n ingress-nginx >>> for namespace called ingress-nginx.

$$kubectl describe configmap app-config

➢ Configure Secrets in Applications: -


Suppose I have a Python application that connects to a SQL database; the host, user, and password appear in my code, which is not a
good idea, so we are going to configure Secrets: -

Let’s create one: -


1- Create yaml file of the Secrets then apply it: -

apiVersion: v1
kind: Secret
metadata:
  name: app-secret
data:
  DB_Host: bXlzcWw=      >>> on linux run $$ echo -n 'mysql' | base64   >>> it gives bXlzcWw=
  DB_User: cm9vdA==      >>> on linux run $$ echo -n 'root' | base64    >>> it gives cm9vdA==
  DB_Password: cGFzd3Jk  >>> on linux run $$ echo -n 'paswrd' | base64  >>> it gives cGFzd3Jk

[ bXlzcWw= ] is [ mysql ] encoded in base64
[ cm9vdA== ] is [ root ]
[ cGFzd3Jk ] is [ paswrd ]
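
The same Secret can also be created imperatively, in which case kubectl performs the base64 encoding for you (standard kubectl syntax, using the same demo values as above):

$$ kubectl create secret generic app-secret --from-literal=DB_Host=mysql --from-literal=DB_User=root --from-literal=DB_Password=paswrd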

-You can get secrets

$$kubectl get secrets

$$kubectl describe secrets app-secret

$$kubectl get secret app-secret -o yaml

54 | P a g e
2- Then bind the pod to Secret.

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    envFrom:
    - secretRef:
        name: app-secret

-In case we need to retrieve only one secret DB_Password, check below.

apiVersion: v1
kind: Pod
metadata:
  name: simple-webapp-color
  labels:
    name: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    env:
    - name: DB_Password
      valueFrom:
        secretKeyRef:
          name: app-secret
          key: DB_Password

55 | P a g e
Important note: -
______________

-- Secrets are not encrypted, only base64-encoded.

-- Anyone able to create pods/deployments in the same namespace can access the Secrets,
so you must configure least-privilege access to Secrets (RBAC).
-- You can also manage secrets through an external provider such as HashiCorp Vault.

56 | P a g e
• Storage in Kubernetes.

- Kubernetes, like Docker, keeps data only while the pod is running; if the pod is terminated, the data is deleted!!

How to solve it?


---------------------

As you see below, a demo pod generates a random number into a text file at the path /opt/number.out inside the pod.

apiVersion: v1
kind: Pod
metadata:
  name: random-number-generator
spec:
  containers:
  - image: alpine
    name: alpine
    command: ["/bin/sh","-c"]
    args: ["shuf -i 0-100 -n 1 >> /opt/number.out;"]
    volumeMounts:
    - mountPath: /opt        >>> volumeMounts: points to the path of the data inside the pod
      name: data-volume
  volumes:
  - name: data-volume
    hostPath:                >>> volumes: makes the data written under the mount available at the
      path: /data                path /data on the node
      type: Directory
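
A quick way to see the hostPath volume in action, assuming the yaml above is saved as random-number-generator.yaml (the file name is just an example):

$$ kubectl apply -f random-number-generator.yaml
$$ kubectl get pod random-number-generator -owide     >>> note which worker node it ran on
$$ cat /data/number.out                               >>> run this on that worker node to see the generated number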

- for Amazon web services just edit volumes part: -

volumes:
- name: data-volume
  awsElasticBlockStore:
    volumeID: <volume-Id>    >>> ID of the EBS volume
    fsType: ext4

57 | P a g e
➢ Persistent Volumes and Persistent Volumes Claims: -

- Persistent Volume Claims (PVC): objects used to bind a PersistentVolume to pods.

- Persistent Volumes (PV): cluster volumes that provide disk space to be consumed by claims.
- If there is no suitable PersistentVolume, the PersistentVolumeClaim stays Pending until it finds a PersistentVolume to bind with.

Let’s create one: -


1- Create yaml file of the PersistentVolume then apply it: -

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-vol1
  labels:
    name: my-pv
spec:
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  #persistentVolumeReclaimPolicy: Delete
  capacity:
    storage: 1Gi
  hostPath:
    path: /data

accessModes:
- ReadWriteOnce >>> there are three access modes (ReadOnlyMany - ReadWriteOnce - ReadWriteMany). The access mode describes how
the data mounted on the volume can be accessed: read-only by many nodes, read-write by a single node, or read-write by many
nodes.

58 | P a g e
persistentVolumeReclaimPolicy: Retain >>> when you delete the PVC, the volume it used is not automatically
deleted or released; you must clean it up manually before it can be used by another PVC.

#persistentVolumeReclaimPolicy: Delete >>> when you delete the PVC, the volume it used is automatically
deleted, so it can be used by another PVC.

To get it
$$kubectl get persistentvolume

2- Create yaml file of the PersistentVolumeClaim then apply it: -

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce        >>> the access mode must match the persistent volume it will bind with
  resources:
    requests:
      storage: 500Mi     >>> the requested storage must be less than or equal to the capacity of the persistent volume it will bind with
  selector:
    matchLabels:
      name: my-pv        >>> using the label to bind the PVC to the PV

To get it
$$kubectl get persistentvolumeclaim

59 | P a g e
3- Create yaml file of the Pod to mount data to PVC then apply it: -

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"    >>> the path inside the pod whose data I want to persist
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim            >>> same name as the PVC (PersistentVolumeClaim)

60 | P a g e
• Ingress.
➢ What is Ingress: -

- Ingress: an API object that helps developers expose their applications and manage external access by providing http/s
routing rules to the services within a Kubernetes cluster.

- What does ingress consist of?


1-ConfigMap: used to configure parameters such as "err-log-path", "keep-alive", "ssl protocols", the same as an nginx configuration.
2-ServiceAccount: used to set the permissions (Roles, ClusterRoles, RoleBindings) needed to access these objects (the ingress
ConfigMap, the ingress deployment, the NodePort service).
3-Ingress deployment: the ingress controller itself.
4-NodePort service: a service that exposes the ingress controller to the external world.
5-Ingress resource: used to configure the routing to my application services.

61 | P a g e
- Let’s start?
1-Create ConfigMap-ingress.yaml with your custom "err-log-path","keep-alive","ssl protocols" and apply it: -

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration

2-Create ServiceAccount-ingress.yaml with your custom Roles, ClusterRoles, RoleBindings and apply it: -

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount

62 | P a g e
3-Create ingress-deployment.yaml and apply it: -

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443

POD_NAME: this variable is taken from the pod's own metadata via fieldRef.

POD_NAMESPACE: this variable is taken from the pod's own metadata via fieldRef and is used in the --configmap argument.

63 | P a g e
4-Create NodePort-ingress.yaml service and apply it: -

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress

5-Create ingress-resources.yaml and apply it: -

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: critical-space        >>> in case you want a specific namespace
  name: ingress-wear-watch
spec:
  rules:
  - http:
      paths:
      - path: /wear                >>> to access the wear application through the ingress:
        pathType: Prefix               http://{IP_OF_ANY_NODE}:{PORT_OF_INGRESS}/wear
        backend:
          service:
            name: wear-service     >>> name of the service of your application
            port:
              number: 80           >>> port of the service of your application
      - path: /watch               >>> to access the watch application through the ingress:
        pathType: Prefix               http://{IP_OF_ANY_NODE}:{PORT_OF_INGRESS}/watch
        backend:
          service:
            name: watch-service    >>> name of the service of your application
            port:
              number: 80           >>> port of the service of your application

64 | P a g e
➢ Another Easy way: -

- From any of the worker nodes, run the command below, which creates the ConfigMap, service account, ingress deployment, and ingress service: -

$$kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/baremetal/deploy.yaml

-Then delete the ValidatingWebhookConfiguration by running: -

$$kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

-Check that everything is running: -

$$kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx


$$kubectl get services ingress-nginx-controller --namespace=ingress-nginx

65 | P a g e
-To get the nodePort of the ingress controller used for routing (31500 in this example): -
$$kubectl get all -n ingress-nginx

-Then create yaml for ingress routing and apply it: -

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-backend-dev
  annotations:
    nginx.ingress.kubernetes.io/error-page: "/(/|$)(.*)"
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    # nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /consumer(/|$)(.*)     >>> /consumer is the route path, e.g. http://192.168.1.2:31500/consumer
        pathType: Prefix
        backend:
          service:
            name: consumer           >>> consumer is the name of the service of the application pod I want to route to
            port:
              number: 4040           >>> 4040 is the port of that service
66 | P a g e
It was an amazing journey!!!

"In the intricate dance of containers and orchestration, where pods waltz and services harmonize, Kubernetes
emerges as the maestro orchestrating a symphony of scalability and resilience. As I navigate this ever-evolving
landscape, I am reminded that in the world of distributed systems, Kubernetes is the conductor that transforms
chaos into seamless serenity."

"I am Sherif Yehia, a DevOps engineer. I trust that this reference proves beneficial to DevOps
engineers navigating the dynamic realm of technology. I am enthusiastic about sharing insights on
various technologies, and you can explore more of my posts and papers on my LinkedIn profile.
Thank you for your time and consideration.

Best regards, Sherif Yehia"

Reference: -
1- https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/
2- https://kubernetes.io/docs/home/
3- https://www.ibm.com/topics/kubernetes
4- https://www.redhat.com/en/topics/containers/what-is-kubernetes
5- https://medium.com/@marko.luksa/kubernetes-in-action-introducing-replication-controllers-aaa2c05e0b4e
6- https://www.baeldung.com/ops/kubernetes-deployment-vs-replicaset
7- https://avinetworks.com/glossary/kubernetes-ingress-services/
8- https://cloud.google.com/learn/what-is-kubernetes
67 | P a g e
