Kubernetes Zero To Zero
Why Kubernetes –
• Auto start of containers
• Health checks
• Autoscaling
• Load balancing
• Networking
• Auto-healing
• Automatic node allocation (scheduling)
• Updates / new releases / deployments
• Secrets / config (master node)
Kubernetes architecture –
Kubernetes Objects –
• Pod
• Service
• Volume
• Network
• ConfigMap
• Secret
• Ingress
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
1. Pod
• Definition: The smallest and simplest Kubernetes object,
representing a single instance of a running process in your
cluster.
• Purpose: Groups one or more containers (e.g., Docker
containers) that share the same network namespace and
storage volumes.
• Key Features:
o Containers in a pod share the same IP address and port
space.
o Pods are ephemeral; when a pod is deleted, a new pod may
be created with a different IP.
• Example –
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
2. Service
• Definition: A stable, permanent endpoint to access one or more
pods.
• Purpose: Provides load balancing and service discovery for pods.
• Key Features:
o Types: ClusterIP (default), NodePort, LoadBalancer,
ExternalName.
o Abstracts away the dynamic nature of pod IPs.
• Example:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP
3. Volume
• Definition: A way to provide persistent storage to containers
running inside a pod.
• Purpose: Persist data beyond the lifetime of a pod and share
data between containers in a pod.
• Key Features:
o Types include emptyDir, hostPath, NFS, ConfigMap, Secret,
PersistentVolume, PersistentVolumeClaim.
o Decouples storage from pod lifecycle.
• Example
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: "/data"
      name: my-volume
  volumes:
  - name: my-volume
    emptyDir: {}
4. Network
• Definition: Refers to Kubernetes networking objects and
policies that manage communication within and outside the
cluster.
• Purpose: Allow communication between pods, services, and
external resources.
• Key Features:
o Kubernetes assigns each pod a unique IP address.
o Supports network policies for controlling traffic flow.
o Uses CNI (Container Network Interface) plugins like
Calico, Flannel, or WeaveNet.
• Example (NetworkPolicy):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traffic
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: allowed-app
5. ConfigMap
• Definition: Stores configuration data as key-value pairs.
• Purpose: Allows separation of configuration from application
code.
• Key Features:
o Inject configuration data into containers as environment
variables, command-line arguments, or mounted files.
o Non-sensitive data storage
• Example
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  key1: value1
  key2: value2
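The object list above also includes Secret (item 6). A minimal sketch of a Secret manifest, assuming an object named my-secret with two base64-encoded values (the name and keys are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=       # base64 of "admin"
  password: cGFzc3dvcmQ=   # base64 of "password"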
7. Ingress
• Definition: Manages external HTTP/S access to services within
the cluster.
• Purpose: Acts as a reverse proxy and load balancer, enabling
custom URLs and SSL/TLS termination.
• Key Features:
o Provides routing rules based on hostnames and paths.
o Requires an ingress controller (e.g., NGINX, Traefik) to
function.
• Example :
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
Kubernetes Installation –
Launch two Ubuntu servers, named masternode (t2.small) and workernode (t2.micro).
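kubeadm, kubelet, kubectl, and a container runtime must already be installed on both nodes before the steps below; a rough sketch of the prerequisite installation, assuming the Kubernetes apt repository has already been configured on Ubuntu:
# on both masternode and workernode
sudo apt-get update
sudo apt-get install -y containerd kubeadm kubelet kubectl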
On masternode -->
sudo kubeadm init --ignore-preflight-errors=all
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml
kubeadm token create --print-join-command
[open port 6443 in the security groups of both the master and worker nodes]
On Worker -->
Run the kubeadm join command with the token (copy the command printed on the master and paste it on the worker):
sudo (command)
On Master -->
kubectl get nodes
Pod creation –
kubectl get nodes
mkdir nginxpod
cd nginxpod
nano nginxpod.yml
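A minimal sketch of what nginxpod.yml could contain, assuming a pod named nginxpod labelled app: myapp (the label the NodePort service in the next section selects); the container name is an assumption:
apiVersion: v1
kind: Pod
metadata:
  name: nginxpod
  labels:
    app: myapp
spec:
  containers:
  - name: nginxcontainer   # container name is illustrative
    image: nginx
    ports:
    - containerPort: 80
Apply it and verify:
kubectl apply -f nginxpod.yml
kubectl get pod -o wide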
Kubernetes Services –
1)NodePort –
cd nodepod
ls
nano myservicepod.yml
apiVersion: v1
kind: Service
metadata:
  name: mynodeportservice
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30001
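Assuming the file above, apply the service and reach nginx on any node's public IP at the chosen nodePort (the security group must allow port 30001):
kubectl apply -f myservicepod.yml
kubectl get svc mynodeportservice
curl http://<node-public-ip>:30001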
kubectl exec -it nginxpod -- /bin/bash
cd /usr/share/nginx/html
ls
touch ravi.html
echo "this is ravi.html page under nginxpod" > ravi.html
Create a load balancer in AWS and, in its target group, register the service's NodePort.
Copy the DNS name of the load balancer.
ReplicationController –
• Pod recreation
• Scaling (manual)
• Replicas
Practical -
kubectl get nodes
kubectl delete all --all --force
mkdir replicationController
cd replicationController
nano mynginx.yml
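A minimal sketch of mynginx.yml as a ReplicationController, assuming three replicas of an nginx pod labelled app: myapp (names and replica count are illustrative):
apiVersion: v1
kind: ReplicationController
metadata:
  name: myrc
spec:
  replicas: 3
  selector:
    app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginxcontainer
        image: nginx
        ports:
        - containerPort: 80
Apply it, then delete one of its pods to watch the controller recreate it:
kubectl apply -f mynginx.yml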
(to check whether the pods are auto-recreated or not)
kubectl get rc
kubectl get pod -o wide
cp mynginx.yml yournginx.yml
cp mynginx.yml ournginx.yml
ls
kubectl apply -f ournginx.yml
ReplicaSet –
cp -r replicationController replicasetwala
cd replicasetwala
ls
nano mynginx.yml
nano yournginx.yml
nano ournginx.yml
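A minimal sketch of one of these files rewritten as a ReplicaSet, assuming the only changes from the ReplicationController version are apiVersion, kind, and a matchLabels selector (names are illustrative):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myrs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginxcontainer
        image: nginx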
kubectl apply -f mynginx.yml
kubectl apply -f yournginx.yml
kubectl apply -f ournginx.yml
Deployment –
mkdir deployment
cd deployment
nano mynginx.yml
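A minimal sketch of mynginx.yml, assuming the Deployment is named mydeploy with a container named nginxcontainer (the names used by the rollout commands below); the replica count is an assumption:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginxcontainer
        image: nginx
        ports:
        - containerPort: 80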
kubectl apply -f mynginx.yml
For update –
kubectl set image deployment mydeploy nginxcontainer=httpd
kubectl rollout status deployment mydeploy
kubectl apply -f mynginx.yml
kubectl rollout history deployment mydeploy
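If a revision needs to be rolled back, the standard companion to rollout history is rollout undo (the revision number is optional):
kubectl rollout undo deployment mydeploy
kubectl rollout undo deployment mydeploy --to-revision=1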
nano updatefile.yml
kubectl apply -f updatefile.yml
Health Probe –
Kubernetes has various types of probes:
Liveness probe
Liveness probes determine when to restart a container. For example, liveness probes could catch a
deadlock when an application is running but unable to make progress.
If a container fails its liveness probe repeatedly, the kubelet restarts the container.
Liveness probes do not wait for readiness probes to succeed. If you want to wait before executing a
liveness probe, you can either define initialDelaySeconds or use a startup probe.
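A minimal sketch of a liveness probe on an nginx container, assuming an HTTP check on port 80; the pod name and timing values are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: livenesspod
spec:
  containers:
  - name: nginxcontainer
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # check every 10 seconds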
Readiness probe
Readiness probes determine when a container is ready to start accepting traffic. This is useful when
waiting for an application to perform time-consuming initial tasks, such as establishing network
connections, loading files, and warming caches.
If the readiness probe returns a failed state, Kubernetes removes the pod from all matching service
endpoints.
mkdir healthprobe
cd healthprobe
nano readinessprobe.yml
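A minimal sketch of what readinessprobe.yml might look like, assuming an nginx pod whose readiness is checked by fetching /index.html on port 80 (names and timings are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: readinesspod
spec:
  containers:
  - name: nginxcontainer
    image: nginx
    readinessProbe:
      httpGet:
        path: /index.html
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
Deleting index.html inside the container (the steps below) makes the probe fail, so the pod is removed from the service endpoints until the file exists again.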
kubectl exec -it pod_name -- /bin/bash
cd /usr/share/nginx/html
ls
rm index.html
exit
Startup probe
A startup probe verifies whether the application within a container is started. This can be used to
adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before
they are up and running.
If such a probe is configured, it disables liveness and readiness checks until it succeeds.
This type of probe is only executed at startup, unlike liveness and readiness probes, which are run
periodically.
nano startupprobe.yml
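A minimal sketch of what startupprobe.yml might look like, assuming an nginx container that is given up to 30 checks, 10 seconds apart, to start before the kubelet intervenes (names and thresholds are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: startuppod
spec:
  containers:
  - name: nginxcontainer
    image: nginx
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30   # allow up to 30 failed checks
      periodSeconds: 10      # one check every 10 seconds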
kubectl get pods
Namespace -
In Kubernetes, namespaces provide a mechanism for isolating groups
of resources within a single cluster. Names of resources need to be
unique within a namespace, but not across namespaces. Namespace-
based scoping is applicable only for namespaced objects (e.g.
Deployments, Services, etc.) and not for cluster-wide objects (e.g.
StorageClass, Nodes, PersistentVolumes, etc.).
Namespace by commands.
kubectl get namespace
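The namespace itself can be created from the command line first; a sketch, assuming the name mynamespace used below:
kubectl create namespace mynamespace
kubectl get namespace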
kubectl apply -f namespaceexample.yml -n mynamespace
Namespace by .yml file
nano customnamespace.yml
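A minimal sketch of customnamespace.yml, assuming it simply declares the mynamespace namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
Apply it with:
kubectl apply -f customnamespace.yml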
Volume -
In Kubernetes, Volumes are used to provide persistent or ephemeral
storage to containers running within Pods. Unlike container storage,
which is ephemeral and tied to the lifecycle of a container, volumes
allow data to persist beyond the container's lifecycle.
• emptyDir
• hostPath
• nfs
• PersistentVolume and PersistentVolumeClaim
• ConfigMap
• Secret
emptyDir:
emptyDir is a type of Kubernetes volume that is created when a Pod is
assigned to a Node. As the name suggests, it starts out empty and is
used as a temporary storage location for data. The data in an emptyDir
volume is deleted when the Pod is terminated or removed. It's typically
used for tasks like caching, temporary file storage, or storing data that
doesn't need to persist across Pod restarts.
hostPath:
hostPath is a Kubernetes volume that mounts a file or directory from the
host node’s filesystem into the Pod. This allows the Pod to access data or
configuration files from the host machine. It can be useful in scenarios
where you need to access specific files from the host or interact with
hardware devices on the host machine. However, it can have security
implications, as it provides the Pod access to the host’s filesystem.
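A minimal sketch of a hostPath mount, assuming a pod that exposes the node's /var/log directory inside the container (path and names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-pod
spec:
  containers:
  - name: nginxcontainer
    image: nginx
    volumeMounts:
    - mountPath: /hostlogs   # where the directory appears inside the container
      name: host-volume
  volumes:
  - name: host-volume
    hostPath:
      path: /var/log         # directory on the node
      type: Directory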
nfs:
nfs (Network File System) is a volume type that allows Pods to access
shared storage over a network. NFS volumes are used when you need to
share data across multiple Pods or Nodes. This is helpful in situations
where the data needs to be accessible by multiple Pods concurrently or
when you want to store data externally.
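A minimal sketch of a pod using an nfs volume, assuming an NFS server reachable at 10.0.0.5 exporting /exports/data (server address, path, and names are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  containers:
  - name: nginxcontainer
    image: nginx
    volumeMounts:
    - mountPath: /shared
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs:
      server: 10.0.0.5       # hypothetical NFS server
      path: /exports/data    # hypothetical exported directory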
ConfigMap:
A ConfigMap is a Kubernetes object that allows you to store
configuration data as key-value pairs. ConfigMaps provide a way to
inject configuration into Pods without altering the container image. They
are often used for environment variables, command-line arguments, or
configuration files that Pods can reference at runtime. ConfigMaps help
to decouple configuration from the application code, making it easier to
manage and change settings without rebuilding or redeploying the
application.
Secret:
A Secret is similar to a ConfigMap, but it is used to store sensitive
information, such as passwords, API keys, or certificates. Kubernetes
Secrets are only base64-encoded, not encrypted by default, so you
should still be careful with access control and enable encryption at rest
to protect sensitive data.
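The list above also mentions PersistentVolume and PersistentVolumeClaim; a minimal sketch of the pair, assuming a 1Gi hostPath-backed volume (names, size, and path are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/my-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
A pod then references the claim under spec.volumes using persistentVolumeClaim with claimName: my-pvc.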
DaemonSet –
A DaemonSet in Kubernetes ensures that a copy of a specific Pod is
running on all (or some) nodes in a cluster. It's commonly used to
deploy system-level workloads like logging agents, monitoring
daemons, or other infrastructure-related components.
Key Features of DaemonSet :
• Node-Level Deployment: Ensures that one Pod is running on
each eligible node.
• Dynamic Scaling: Automatically deploys Pods to new nodes
when they are added to the cluster.
• Selective Deployment: Can be restricted to specific nodes using
node selectors, affinity rules, or taints and tolerations.
• Self-Healing: Automatically redeploys Pods if they fail or if the
node they are on becomes unavailable.
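A minimal sketch of a DaemonSet that runs a logging agent on every node; the name, labels, and fluentd image are illustrative:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: fluentd
        image: fluentd        # illustrative logging-agent image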
Jobs –
A Job creates one or more Pods and will continue to retry execution
of the Pods until a specified number of them successfully terminate.
As pods successfully complete, the Job tracks the successful
completions. When a specified number of successful completions is
reached, the task (i.e., the Job) is complete. Deleting a Job will clean up
the Pods it created. Suspending a Job will delete its active Pods until
the Job is resumed again.
Definition: A Kubernetes Job is a higher-level abstraction that
manages one or more Pods to ensure a task runs to completion
successfully.
Purpose: Ideal for batch or one-time tasks like data processing,
backups, or running scripts.
Lifecycle: The Job ensures that the specified number of Pods
complete successfully.
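A minimal sketch of a Job that runs a one-off command to completion, assuming a busybox container and illustrative names:
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  completions: 1        # how many successful pods are required
  backoffLimit: 4       # retries before the Job is marked failed
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sh", "-c", "echo Hello from the Job"]
      restartPolicy: Never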
CronJob –
A cron job is a scheduled task in Unix-like operating systems that allows you to automate
repetitive tasks at specified times or intervals; these tasks are executed by the cron
daemon. In Kubernetes, a CronJob object runs Jobs on a repeating schedule in the same way.
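A minimal sketch of a Kubernetes CronJob that runs a busybox command every five minutes (schedule and names are illustrative):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/5 * * * *"        # standard cron syntax: every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date; echo Hello from the CronJob"]
          restartPolicy: OnFailure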
StatefulSet -
A StatefulSet is a Kubernetes API object used to manage and deploy
applications that require stable and unique network identifiers,
persistent storage, and ordered or graceful deployment and scaling.
It's ideal for stateful applications like databases, messaging systems,
and distributed systems.
Key Features of StatefulSet:
1. Stable Pod Identity:
o Each pod in a StatefulSet gets a unique, stable hostname and identity (e.g.,
pod-name-0, pod-name-1).
o This identity persists across pod restarts.
2. Stable Storage:
o Each pod can be associated with its own PersistentVolume (PV), ensuring
data persists even if the pod is rescheduled or restarted.
3. Ordered Deployment and Scaling:
o Pods are deployed, updated, or deleted sequentially.
o Ensures that dependent systems can rely on the order of operations.
4. Graceful Rollout and Termination:
o Pods are scaled up or down gracefully to ensure consistent application
behavior.
5. DNS Management:
o StatefulSets automatically assign DNS names to pods, simplifying service
discovery.
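A minimal sketch of a StatefulSet, assuming a headless Service named web already exists and each replica claims its own 1Gi volume (names and sizes are illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "web"       # headless Service assumed to exist
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
The pods are created in order as web-0 and web-1, each bound to its own PersistentVolumeClaim.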
Horizontal Pod Autoscaling –
In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload
resource (such as a Deployment or StatefulSet), with the aim of automatically
scaling the workload to match demand.
Horizontal scaling means that the response to increased load is to deploy
more pods. This is different from vertical scaling, which for Kubernetes would
mean assigning more resources (for example: memory or CPU) to the Pods that
are already running for the workload.
If the load decreases, and the number of Pods is above the configured
minimum, the HorizontalPodAutoscaler instructs the workload resource (the
Deployment, StatefulSet, or other similar resource) to scale back down.
Horizontal pod autoscaling does not apply to objects that can't be scaled (for
example, a DaemonSet).
The HorizontalPodAutoscaler is implemented as a Kubernetes API resource and
a controller. The resource determines the behavior of the controller. The
horizontal pod autoscaling controller, running within the Kubernetes control
plane, periodically adjusts the desired scale of its target (for example, a
Deployment) to match observed metrics such as average CPU utilization,
average memory utilization, or any other custom metric you specify.
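A minimal sketch of a HorizontalPodAutoscaler targeting the mydeploy Deployment from earlier, assuming CPU-based scaling between 2 and 10 replicas (a metrics source such as metrics-server must be installed for the controller to observe utilization):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: mydeploy-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeploy
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU crosses 50%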
Ingress –
Ingress in the context of Kubernetes refers to a set of rules that
govern how external access to services within a Kubernetes cluster is
managed. It acts as an entry point to the cluster, routing external
HTTP/HTTPS traffic to the appropriate services.
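In addition to the single-rule example in the Ingress object section above, a sketch of path-based fan-out routing to two hypothetical services, assuming an NGINX ingress controller is installed in the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  ingressClassName: nginx       # assumes the NGINX ingress controller
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service   # hypothetical backend service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service   # hypothetical backend service
            port:
              number: 80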