Kubernetes Zero To Zero

The document provides a comprehensive overview of Kubernetes, detailing its architecture, key objects, installation process, and various components such as Pods, Services, Volumes, and Health Probes. It includes practical examples and commands for creating and managing these components, along with explanations of features like ReplicationController, ReplicaSet, and Deployment. Additionally, it covers the use of namespaces for resource isolation and the types of storage options available in Kubernetes.

Index

Why Kubernetes
Kubernetes Objects
Kubernetes Installation
Pod Creation
Kubernetes Services
ReplicationController
ReplicaSet
Deployment
Health Probe
Namespace
Volume
DaemonSet
Job
StatefulSet
Horizontal Pod Autoscaling
Ingress
Why Kubernetes –
• Auto-start of containers
• Health checks
• Autoscaling
• Load balancing
• Networking
• Auto-healing
• Automatic node allocation (scheduling)
• Updates / new releases / deployments
• Secrets / ConfigMaps (managed by the master node)

Kubernetes architecture –
A Kubernetes cluster consists of a control plane (master node) running the kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, and one or more worker nodes running the kubelet, kube-proxy, and a container runtime.
Kubernetes Objects –
• Pod
• Service
• Volume
• Network
• ConfigMap
• Secret
• Ingress

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
1. Pod
• Definition: The smallest and simplest Kubernetes object,
representing a single instance of a running process in your
cluster.
• Purpose: Groups one or more containers (e.g., Docker
containers) that share the same network namespace and
storage volumes.
• Key Features:
o Containers in a pod share the same IP address and port
space.
o Pods are ephemeral; when a pod is deleted, a new pod may
be created with a different IP.
• Example –
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx

2. Service
• Definition: A stable, permanent endpoint to access one or more
pods.
• Purpose: Provides load balancing and service discovery for pods.
• Key Features:

o Types: ClusterIP (default), NodePort, LoadBalancer,
ExternalName.
o Abstracts away the dynamic nature of pod IPs.
• Example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP

3. Volume
• Definition: A way to provide persistent storage to containers
running inside a pod.
• Purpose: Persist data beyond the lifetime of a pod and share
data between containers in a pod.
• Key Features:
o Types include emptyDir, hostPath, NFS, ConfigMap, Secret,
PersistentVolume, PersistentVolumeClaim.
o Decouples storage from pod lifecycle.
• Example
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: "/data"
      name: my-volume
  volumes:
  - name: my-volume
    emptyDir: {}

4. Network
• Definition: Refers to Kubernetes networking objects and
policies that manage communication within and outside the
cluster.
• Purpose: Allow communication between pods, services, and
external resources.
• Key Features:
o Kubernetes assigns each pod a unique IP address.
o Supports network policies for controlling traffic flow.
o Uses CNI (Container Network Interface) plugins like
Calico, Flannel, or WeaveNet.
• Example (NetworkPolicy):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traffic
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: allowed-app

5. ConfigMap
• Definition: Stores configuration data as key-value pairs.
• Purpose: Allows separation of configuration from application
code.
• Key Features:
o Inject configuration data into containers as environment
variables, command-line arguments, or mounted files.
o Non-sensitive data storage
• Example
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  key1: value1
  key2: value2

7. Ingress
• Definition: Manages external HTTP/S access to services within
the cluster.

• Purpose: Acts as a reverse proxy and load balancer, enabling
custom URLs and SSL/TLS termination.
• Key Features:
o Provides routing rules based on hostnames and paths.
o Requires an ingress controller (e.g., NGINX, Traefik) to
function.
• Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80

These objects together provide the building blocks for deploying, scaling, and managing applications in Kubernetes.
Kubernetes Installation –
Launch two Ubuntu servers: masternode (t2.small) and workernode (t2.micro).

On masternode and workernode:

sudo apt-get update
sudo apt-get install -y docker.io
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl kubeadm kubelet

On masternode:

sudo kubeadm init --ignore-preflight-errors=all
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
kubeadm token create --print-join-command

(Open port 6443 in the security group inbound rules of both the master and worker nodes.)

On workernode:
Run the kubeadm join command printed by the master (copy it from the master and run it on the worker with sudo).

On masternode:
kubectl get nodes

Pod creation –
kubectl get nodes

kubectl get pods -n kube-system

mkdir nginxpod
cd nginxpod
nano nginxpod.yml
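The contents of nginxpod.yml are not reproduced in the text (they appear only as a screenshot in the original). A minimal sketch is shown below, assuming a plain nginx Pod named nginxpod (the name used by the kubectl exec command later) and an app: myapp label so that it matches the NodePort service selector used in the next section.

apiVersion: v1
kind: Pod
metadata:
  name: nginxpod               # name used by later kubectl exec/describe commands
  labels:
    app: myapp                 # assumed label; matches the service selector used later
spec:
  containers:
  - name: nginxcontainer       # hypothetical container name
    image: nginx
    ports:
    - containerPort: 80

Apply the manifest before listing the pods:
kubectl apply -f nginxpod.yml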

kubectl get pods

kubectl describe pod podname


kubectl logs pod_name
kubectl exec -it nginxpod -- /bin/bash

Kubernetes Services –
1)NodePort –

cd nginxpod
ls
nano myservice.yml
apiVersion: v1
kind: Service
metadata:
  name: mynodeportservice
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30001

kubectl apply -f myservice.yml


kubectl get pods
kubectl get services
Add an inbound rule for the NodePort range (30000-32767) in the worker node's security group.

kubectl exec -it nginxpod -- /bin/bash
cd /usr/share/nginx/html
ls
touch ravi.html
echo "this is ravi.html page under nginxpod" > ravi.html

For load balancing –


cd nginxpod
cp myservice.yml loadwalaservice.yml
(make the required changes in loadwalaservice.yml)

Create a load balancer in AWS and add the service's NodePort to the target group.
Copy the DNS name of the load balancer.

ReplicationController –
• Pod recreation
• Scaling (manual)
• Replicas

Practical –
kubectl get nodes
kubectl delete all --all --force
mkdir replicationcontroller
cd replicationcontroller
nano mynginx.yml
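The mynginx.yml manifest appears only as a screenshot in the original. A minimal sketch is given below, assuming the controller is named myrc (the name used by the later kubectl scale rc myrc command) and that the pods carry an env label, since later commands filter pods with -l env=dev.

apiVersion: v1
kind: ReplicationController
metadata:
  name: myrc                   # matches the later "kubectl scale rc myrc" command
spec:
  replicas: 2                  # assumed initial replica count
  selector:
    env: dev                   # assumed label; pods are later listed with -l env=dev
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - name: nginxcontainer   # hypothetical container name
        image: nginx

The copies yournginx.yml and ournginx.yml created further below presumably differ only in the controller name and the env label value (e.g. dev, testing).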

kubectl apply -f mynginx.yml


kubectl get rc

kubectl get pods

kubectl delete pod anypod_name --force

(to check whether a new pod is created automatically)

kubectl get pods -o wide

kubectl scale rc myrc --replicas=4

kubectl get rc
kubectl get pod -o wide

kubectl delete all --all --force

cp mynginx.yml yournginx.yml
cp mynginx.yml ournginx.yml
ls

kubectl apply -f yournginx.yml

kubectl apply -f ournginx.yml

kubectl get all

kubectl get rc -o wide

kubectl get pod -l env

kubectl get pod -l env=dev

ReplicaSet –

cp -r replicationcontroller replicasetwala
cd replicasetwala
ls
nano mynginx.yml
nano yournginx.yml
nano ournginx.yml
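The edited files are shown only as screenshots. The essential change from a ReplicationController is the apiVersion, the kind, and the set-based matchLabels selector; a minimal sketch, reusing the assumed env labels from the previous section:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myrs                   # hypothetical name
spec:
  replicas: 2                  # assumed replica count
  selector:
    matchLabels:
      env: dev                 # assumed label; later queried with 'env in (dev, testing)'
  template:
    metadata:
      labels:
        env: dev
    spec:
      containers:
      - name: nginxcontainer   # hypothetical container name
        image: nginx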

kubectl apply -f mynginx.yml
kubectl apply -f yournginx.yml
kubectl apply -f ournginx.yml

kubectl get all

kubectl get pods --selector 'env in (dev, testing)'

Deployment –

A Deployment internally manages a ReplicaSet (the successor of the ReplicationController), which in turn manages the Pods.
A Deployment solves the problems of:
a) Rolling updates
b) Batch-wise (staged) rollouts
c) Rollback functionality

mkdir deployment
cd deployment

nano mynginx.yml
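The mynginx.yml used here is shown only as a screenshot; a minimal sketch follows, similar to the Deployment example at the beginning of the document but using the names mydeploy and nginxcontainer that appear in the later set image and rollout commands (replica count and labels are assumptions).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy               # name used by the later rollout/set image commands
spec:
  replicas: 2                  # assumed replica count
  selector:
    matchLabels:
      app: myapp               # assumed label
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginxcontainer   # container referenced by "kubectl set image ... nginxcontainer=httpd"
        image: nginx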

kubectl apply -f mynginx.yml

kubectl get deploy


kubectl get all

For update –
kubectl set image deployment mydeploy nginxcontainer=httpd
kubectl rollout status deployment mydeploy

kubectl rollout history deployment mydeploy

kubectl get deployment mydeploy -o wide

kubectl get deployment mydeploy -o yaml

kubectl apply -f mynginx.yml
kubectl rollout history deployment mydeploy

kubectl rollout undo deploy mydeploy --to-revision=2


kubectl rollout history deploy mydeploy

kubectl annotate deployments.apps mydeploy kubernetes.io/change-cause="version4"

nano updatefile.yml
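updatefile.yml is not reproduced in the text. It is presumably the same Deployment manifest with a new image tag and a change-cause annotation, roughly as sketched below (image tag and annotation value are assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
  annotations:
    kubernetes.io/change-cause: "version5"   # hypothetical change-cause value
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginxcontainer
        image: nginx:1.25                    # assumed new image tag to trigger a rollout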

kubectl apply -f updatefile.yml


kubectl rollout history deployment mydeploy
kubectl get deploy --watch
kubectl get deploy/mydeploy --watch

To configure a health probe (livenessProbe) –

nano configurefile.yml   # livenessProbe
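configurefile.yml appears only as a screenshot; a minimal sketch with an HTTP livenessProbe on an nginx container inside a Deployment is shown below (names and probe timings are assumptions).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: livenessdeploy         # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: livenessapp         # hypothetical label
  template:
    metadata:
      labels:
        app: livenessapp
    spec:
      containers:
      - name: nginxcontainer
        image: nginx
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5   # assumed probe timings
          periodSeconds: 10
          failureThreshold: 3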

kubectl apply -f configurefile.yml

kubectl get pod


kubectl get all

kubectl delete pod pod_name --force

kubectl get pods

Check whether a new pod is created in place of the crashed pod.

Health Probe –
Kubernetes has various types of probes:
Liveness probe
Liveness probes determine when to restart a container. For example, liveness probes could catch a
deadlock when an application is running but unable to make progress.

If a container fails its liveness probe repeatedly, the kubelet restarts the container.

Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a
liveness probe, you can either define initialDelaySeconds or use a startup probe.

Readiness probe
Readiness probes determine when a container is ready to start accepting traffic. This is useful when
waiting for an application to perform time-consuming initial tasks, such as establishing network
connections, loading files, and warming caches.

If the readiness probe returns a failed state, Kubernetes removes the pod from all matching service
endpoints.

Readiness probes run on the container during its whole lifecycle.

mkdir healthprobe
cd healthprobe
nano readinessprobe.yml
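readinessprobe.yml is shown only as a screenshot. The steps that follow delete and recreate /usr/share/nginx/html/index.html to make the probe fail and then recover, so the sketch below assumes an HTTP readiness check against that file (names and timings are assumptions).

apiVersion: v1
kind: Pod
metadata:
  name: readinesspod           # hypothetical name
  labels:
    app: readinessapp          # hypothetical label
spec:
  containers:
  - name: nginxcontainer
    image: nginx
    readinessProbe:
      httpGet:
        path: /index.html      # deleting this file later makes the probe fail
        port: 80
      initialDelaySeconds: 5   # assumed probe timings
      periodSeconds: 5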

kubectl apply -f readinessprobe.yml

kubectl get all

kubectl exec -it pod_name -- /bin/bash
cd /usr/share/nginx/html
ls
rm index.html
exit

kubectl get pods --watch

kubectl exec -it pod_name -- /bin/bash

cd /usr/share/nginx/html
touch index.html
echo "hi there" > index.html
exit

kubectl get pods

Startup probe
A startup probe verifies whether the application within a container is started. This can be used to
adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before
they are up and running.

If such a probe is configured, it disables liveness and readiness checks until it succeeds.

This type of probe is only executed at startup, unlike liveness and readiness probes, which are run
periodically.

nano startupprobe.yml
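startupprobe.yml is shown only as a screenshot; a minimal sketch assuming an HTTP startup probe on nginx, combined with a liveness probe that only begins once the startup probe has succeeded:

apiVersion: v1
kind: Pod
metadata:
  name: startuppod             # hypothetical name
spec:
  containers:
  - name: nginxcontainer
    image: nginx
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30     # assumed: allows up to 30 x 10s for the app to start
      periodSeconds: 10
    livenessProbe:             # disabled until the startup probe succeeds
      httpGet:
        path: /
        port: 80
      periodSeconds: 10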

kubectl apply -f startupprobe.yml

kubectl describe pod pod_name

kubectl get pods

kubectl get pods #After startup

Namespace -
In Kubernetes, namespaces provide a mechanism for isolating groups
of resources within a single cluster. Names of resources need to be
unique within a namespace, but not across namespaces. Namespace-based scoping applies only to namespaced objects (e.g. Deployments, Services, etc.) and not to cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).

Namespace by commands.
kubectl get namespace

kubectl get pods -n kube-system

kubectl create namespace mynamespace

kubectl get namespace

kubectl apply -f namespaceexample.yml -n mynamespace

kubectl get pods


kubectl get pods -n mynamespace

kubectl config set-context --current --namespace=mynamespace
# To set the custom namespace as the default namespace

kubectl get pods

kubectl config set-context --current --namespace=default


kubectl get pods

kubectl delete namespace mynamespace

Namespace by .yml file
nano customnamespace.yml
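customnamespace.yml is shown only as a screenshot; a minimal sketch of a Namespace manifest (the name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: customnamespace        # hypothetical namespace name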

kubectl apply -f customnamespace.yml


kubectl get namespace

Volume -
In Kubernetes, Volumes are used to provide persistent or ephemeral
storage to containers running within Pods. Unlike container storage,
which is ephemeral and tied to the lifecycle of a container, volumes
allow data to persist beyond the container's lifecycle.
• emptyDir
• hostPath
• nfs
• PersistentVolume and PersistentVolumeClaim
• ConfigMap
• Secret

emptyDir:
emptyDir is a type of Kubernetes volume that is created when a Pod is
assigned to a Node. As the name suggests, it starts out empty and is
used as a temporary storage location for data. The data in an emptyDir
volume is deleted when the Pod is terminated or removed. It's typically
used for tasks like caching, temporary file storage, or storing data that
doesn't need to persist across Pod restarts.

hostPath:
hostPath is a Kubernetes volume that mounts a file or directory from the
host node’s filesystem into the Pod. This allows the Pod to access data or
configuration files from the host machine. It can be useful in scenarios
where you need to access specific files from the host or interact with
hardware devices on the host machine. However, it can have security
implications, as it provides the Pod access to the host’s filesystem.
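A minimal hostPath sketch (names, path, and type are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /tmp/hostdata      # directory on the node's filesystem
      type: DirectoryOrCreate  # create the directory if it does not exist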

nfs:
nfs (Network File System) is a volume type that allows Pods to access
shared storage over a network. NFS volumes are used when you need to
share data across multiple Pods or Nodes. This is helpful in situations
where the data needs to be accessible by multiple Pods concurrently or
when you want to store data externally.

Persistent Volume (PV) and Persistent Volume Claim (PVC):


• Persistent Volume (PV): A PV is a piece of storage in the cluster that has
been provisioned by an administrator. It can be backed by various
storage backends like NFS, cloud storage, or even local disks. The PV has
a lifecycle independent of the Pods that use it.
• Persistent Volume Claim (PVC): A PVC is a request for storage by a user.
It is similar to a Pod in the sense that it is a resource request for storage.
A PVC binds to a PV that matches the requested size and access modes.
Once a PVC is bound to a PV, the Pod can use the storage defined in the
PV.
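A minimal PV/PVC sketch using a hostPath-backed PV (capacity, path, and names are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/pvdata              # illustrative backing storage on the node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                 # the claim binds to a PV that satisfies this request
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: pv-storage
  volumes:
  - name: pv-storage
    persistentVolumeClaim:
      claimName: my-pvc            # the pod uses the storage through the claim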

ConfigMap:
A ConfigMap is a Kubernetes object that allows you to store
configuration data as key-value pairs. ConfigMaps provide a way to
inject configuration into Pods without altering the container image. They
are often used for environment variables, command-line arguments, or
configuration files that Pods can reference at runtime. ConfigMaps help
to decouple configuration from the application code, making it easier to
manage and change settings without rebuilding or redeploying the
application.
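To complement the ConfigMap definition shown earlier, a sketch of a Pod consuming my-config as environment variables (the pod and variable names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod              # hypothetical name
spec:
  containers:
  - name: my-container
    image: nginx
    env:
    - name: KEY1                   # illustrative environment variable name
      valueFrom:
        configMapKeyRef:
          name: my-config          # the ConfigMap defined earlier
          key: key1
    envFrom:
    - configMapRef:
        name: my-config            # injects all keys (key1, key2) as environment variables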

Secret:
A Secret is similar to a ConfigMap, but it is used to store sensitive information, such as passwords, API keys, or certificates. Kubernetes Secrets are only base64-encoded (not encrypted by default), so you should still apply strict access control and enable encryption at rest to protect sensitive data.
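A minimal Secret sketch with illustrative values; stringData accepts plain text and Kubernetes stores it base64-encoded:

apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  username: admin                  # illustrative values only
  password: changeme
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - name: my-container
    image: nginx
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: password            # exposes the secret value as an environment variable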

DaemonSet –
A DaemonSet in Kubernetes ensures that a copy of a specific Pod is running on all (or some) nodes in a cluster. It is commonly used to deploy system-level workloads like logging agents, monitoring daemons, or other infrastructure-related components (see the sketch after the list below).
Key Features of DaemonSet:
• Node-Level Deployment: Ensures that one Pod is running on each eligible node.
• Dynamic Scaling: Automatically deploys Pods to new nodes when they are added to the cluster.
• Selective Deployment: Can be restricted to specific nodes using node selectors, affinity rules, or taints and tolerations.
• Self-Healing: Automatically redeploys Pods if they fail or if the node they are on becomes unavailable.
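A minimal DaemonSet sketch, using a generic logging-agent image as a placeholder (names and image are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: fluentd             # placeholder image for a node-level agent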

Jobs –
A Job creates one or more Pods and will continue to retry execution
of the Pods until a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful
completions. When a specified number of successful completions is
reached, the task (i.e., the Job) is complete. Deleting a Job will clean up
the Pods it created. Suspending a Job will delete its active Pods until
the Job is resumed again.
Definition: A Kubernetes Job is a higher-level abstraction that
manages one or more Pods to ensure a task runs to completion
successfully.
Purpose: Ideal for batch or one-time tasks like data processing,
backups, or running scripts.
Lifecycle: The Job ensures that the specified number of Pods
complete successfully.
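A minimal Job sketch that runs a one-off command to completion (image and command are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  completions: 1                   # number of successful completions required
  backoffLimit: 4                  # retries before the Job is marked failed
  template:
    spec:
      containers:
      - name: my-task
        image: busybox
        command: ["sh", "-c", "echo processing batch data && sleep 5"]
      restartPolicy: Never         # Jobs require Never or OnFailure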

CronJob –
A cron job is a scheduled task in Unix-like operating systems that allows you to automate repetitive tasks at specified times or intervals; these tasks are executed by the cron daemon. In Kubernetes, a CronJob object creates Jobs on a repeating schedule defined in standard cron syntax.
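A minimal Kubernetes CronJob sketch that runs a Job every five minutes (schedule and command are illustrative):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  schedule: "*/5 * * * *"          # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-task
            image: busybox
            command: ["sh", "-c", "date; echo scheduled backup"]
          restartPolicy: OnFailure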

StatefulSet -
A StatefulSet is a Kubernetes API object used to manage and deploy
applications that require stable and unique network identifiers,
persistent storage, and ordered or graceful deployment and scaling.
It's ideal for stateful applications like databases, messaging systems, and distributed systems (a minimal manifest sketch follows the feature list below).
Key Features of StatefulSet:
1. Stable Pod Identity:
o Each pod in a StatefulSet gets a unique, stable hostname and identity (e.g.,
pod-name-0, pod-name-1).
o This identity persists across pod restarts.
2. Stable Storage:
o Each pod can be associated with its own PersistentVolume (PV), ensuring
data persists even if the pod is rescheduled or restarted.
3. Ordered Deployment and Scaling:
o Pods are deployed, updated, or deleted sequentially.
o Ensures that dependent systems can rely on the order of operations.
4. Graceful Rollout and Termination:
o Pods are scaled up or down gracefully to ensure consistent application
behavior.
5. DNS Management:
o StatefulSets automatically assign DNS names to pods, simplifying service
discovery
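A minimal StatefulSet sketch with a headless Service and per-pod storage (names, image, and sizes are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc
spec:
  clusterIP: None                  # headless service gives each pod a stable DNS name
  selector:
    app: my-db
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-statefulset
spec:
  serviceName: my-headless-svc
  replicas: 3                      # pods are created in order as my-statefulset-0, -1, -2
  selector:
    matchLabels:
      app: my-db
  template:
    metadata:
      labels:
        app: my-db
    spec:
      containers:
      - name: my-db
        image: nginx               # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:            # one PVC per pod, retained across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi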

Horizontal Pod Autoscaling –
In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand (see the sketch at the end of this section).
Horizontal scaling means that the response to increased load is to deploy
more pods. This is different from vertical scaling, which for Kubernetes would
mean assigning more resources (for example: memory or CPU) to the Pods that
are already running for the workload.
If the load decreases, and the number of Pods is above the configured
minimum, the HorizontalPodAutoscaler instructs the workload resource (the
Deployment, StatefulSet, or other similar resource) to scale back down.
Horizontal pod autoscaling does not apply to objects that can't be scaled (for example: a DaemonSet).
The HorizontalPodAutoscaler is implemented as a Kubernetes API resource and
a controller. The resource determines the behavior of the controller. The
horizontal pod autoscaling controller, running within the Kubernetes control
plane, periodically adjusts the desired scale of its target (for example, a
Deployment) to match observed metrics such as average CPU utilization,
average memory utilization, or any other custom metric you specify.
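A minimal HorizontalPodAutoscaler sketch targeting the mydeploy Deployment used earlier (thresholds are illustrative; a metrics server must be installed for CPU metrics to be available):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeploy                 # Deployment created in the Deployment section
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50     # scale out when average CPU exceeds 50%

The same behaviour can be requested imperatively with:
kubectl autoscale deployment mydeploy --cpu-percent=50 --min=2 --max=10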

Ingress –
Ingress in the context of Kubernetes refers to a set of rules that
govern how external access to services within a Kubernetes cluster is
managed. It acts as an entry point to the cluster, routing external
HTTP/HTTPS traffic to the appropriate services.

Benefits of Using Ingress:


1. Single Entry Point: Centralized management of external access.
2. Load Balancing: Balances traffic across services.
3. TLS/SSL Termination: Secure communication with HTTPS.
4. Path-Based Routing: Route traffic based on URL paths.
5. Name-Based Virtual Hosting: Use multiple domain names to access different services (see the sketch below).
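To illustrate path-based routing and name-based virtual hosting together, a sketch with two hosts (hostnames and service names are illustrative; an ingress controller such as NGINX must be installed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
spec:
  rules:
  - host: app.example.com          # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /v1                  # path-based routing
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80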

