Lecture 3

Container orchestration automates the management, deployment, scaling, and operation of containerized applications, ensuring efficiency and reliability, especially for microservices. Tools like Kubernetes, Docker Swarm, and Apache Mesos are commonly used, with Kubernetes being the most widely adopted due to its robust features. Key concepts include clusters, nodes, pods, services, and various components that facilitate communication and resource management within the orchestration framework.

Learn First to Lead The Rest

Container Orchestration

DevOps Fundamentals
Container Orchestration

🆃 Container orchestration is the automated management, deployment, scaling, and operation of containerized applications.
🆃 It ensures that applications composed of multiple containers run efficiently, reliably, and securely, even in complex and dynamic environments.
🆃 Container orchestration is especially important when managing microservices-based applications that use multiple containers across clusters of servers.
🆃 Tools like Kubernetes, Docker Swarm, and Apache Mesos are commonly used for this purpose.
Key Features & Benefits of CO

🆃 Automated Deployment and Scaling
🆃 Load Balancing
🆃 Resource Allocation
🆃 Health Monitoring and Self-Healing
🆃 Service Discovery and Networking
🆃 Persistent Storage Management
🆃 High Availability
🆃 Resource Efficiency
🆃 Simplified Management
🆃 Portability
🆃 Configuration Management
🆃 Rolling Updates and Rollbacks
Popular Container Orchestration Tools

🆃 Kubernetes:
🅣 The most widely used container orchestration tool.
🅣 Features robust scalability, extensive community support, and integration with cloud providers.
🆃 Docker Swarm:
🅣 Docker's native orchestration tool.
🅣 Easy to use but less feature-rich compared to Kubernetes.
🆃 Apache Mesos:
🅣 A more generalized tool for managing distributed systems, with container orchestration as one of its use cases.
🆃 AWS ECS (Elastic Container Service):
🅣 A cloud-native orchestration service provided by AWS.
🆃 Red Hat OpenShift:
🅣 Built on Kubernetes but offers additional enterprise-level features.
Container Orchestration with Kubernetes

🆃 An open-source container orchestration platform initially developed by Google.
🆃 Manages containerized applications across clusters of machines.
🆃 Automated deployment and scaling.
🆃 Self-healing and fault tolerance.
🆃 Service discovery and load balancing.
🆃 Declarative configuration management.

K8s
Kubernetes Core Concepts

🆃 Cluster: A set of worker nodes and a control plane.


🆃 Node: A single machine (virtual or physical) in the cluster.
🆃 Pod: The smallest deployable unit in Kubernetes, typically encapsulates one
or more containers.
🆃 Service: Provides a stable IP and DNS name to access a group of pods.
🆃 Deployment: Manages stateless application pods.
🆃 ConfigMap and Secret: Externalize configuration and manage sensitive
information.
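
Most of these objects can be listed with kubectl once a cluster is running (an illustration, not part of the slide):

🆃 kubectl get nodes
🆃 kubectl get pods
🆃 kubectl get services
🆃 kubectl get deployments
🆃 kubectl get configmaps,secrets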
Kubernetes Architecture

Control Plane Components
🆃 API Server: Entry point for all administrative tasks.
🆃 Scheduler: Assigns pods to nodes based on resource availability.
🆃 Controller Manager: Ensures the desired state of the system.
🆃 etcd: A distributed key-value store for configuration and state data.

Worker Node Components
🅣 Kubelet: Ensures containers are running in pods.
🅣 Kube-proxy: Handles networking and load balancing.
🅣 Container Runtime: Runs the containers (e.g., Docker, containerd).
Kubernetes Installation

🆃 Using Minikube (for local setups):
🅣 Install Minikube: sudo apt install minikube
🅣 Start a cluster: minikube start
🅣 Verify setup: kubectl get nodes
Prerequisites

🆃 2 CPUs or more
🆃 2GB of free memory
🆃 20GB of free disk space
🆃 Internet connection
🆃 Container or virtual machine manager, such as: Docker,
QEMU, Hyperkit, Hyper-V, KVM, Parallels, Podman,
VirtualBox, or VMware Fusion/Workstation
🆃 Reference: https://minikube.sigs.k8s.io/docs/start/?arch=%2Fmacos%2Farm64%2Fstable%2Fbinary+download#Service
Installation Complete
Let's Get Started: Test Application

kubernetes_application.yaml

apiVersion: v1
kind: Pod
metadata:
  name: kub-app-pod
spec:
  containers:
  - name: kub-app-container
    image: nginx:latest
    ports:
    - containerPort: 80

🆃 kubectl apply -f kubernetes_application.yaml
🆃 kubectl get pods
🆃 kubectl port-forward kub-app-pod 8080:80
🆃 kubectl port-forward kub-app-pod 8080:80 & (run in background)
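
To check that the pod is serving traffic (a quick sanity check, assuming the port-forward above is running), request the forwarded port; nginx should return its default welcome page:

🆃 curl http://localhost:8080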
Architecture Explained (Master Node)

The Master Node serves as the central control unit of a Kubernetes cluster,
managing its overall state and operations. It oversees key functions such as
scheduling pods, monitoring the health of nodes and applications, and scaling
resources as needed.
Key components of the Master Node include:

🆃 API Server: The primary interface for managing the cluster, exposing the Kubernetes API for interaction.
🆃 etcd: A distributed key-value store that holds the cluster’s
configuration and state data.
🆃 Controller Manager: A collection of controllers that monitor the
cluster via the API Server and ensure it operates as desired, such
as maintaining the correct number of pod replicas.
🆃 Scheduler: Allocates new pods to nodes, optimizing resource use
and balancing workloads across worker nodes.
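
As an aside (not on the slide), these control-plane components can be observed directly in a Minikube cluster, where they run as pods in the kube-system namespace; the exact pod names below assume a node called "minikube":

🆃 kubectl get pods -n kube-system
🅣 Expect pods such as kube-apiserver-minikube, etcd-minikube, kube-scheduler-minikube, and kube-controller-manager-minikube.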
Architecture Explained (Worker Nodes)

Worker Nodes are the machines in a Kubernetes cluster where containers, organized into pods, are deployed and executed. They make up the data plane, handling the actual workloads. Each Worker Node includes several essential components:

🆃 Kubelet: An agent on each Worker Node that interacts with the Master Node, ensuring containers specified in pod definitions are running and functioning correctly.
🆃 Container Runtime: Responsible for pulling container
images and running containers. Kubernetes supports
various runtimes, such as Docker and containerd.
🆃 Kube Proxy: Manages internal cluster networking by
handling service routing and load balancing.
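
For illustration (not on the slide), a node's kubelet version, container runtime, and resource allocation can be inspected with a single command, where <node-name> is one of the names shown by kubectl get nodes:

🆃 kubectl describe node <node-name>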
Architecture Explained (Master & Worker Interaction)

🆃 The Master Node and Worker Nodes communicate through the Kubernetes API Server, which also serves as the main interaction point for users and components. For instance, application deployment configurations are sent to the API Server and stored in etcd.
🆃 The Controller Manager monitors the cluster state via the
API Server and corrects any deviations from the desired
state, such as ensuring all required pods are running.
🆃 The Scheduler assigns new pods to suitable Worker Nodes
by evaluating resource availability and constraints. The API
Server then notifies the selected Worker Node, where the
Kubelet launches the container.
🆃 Worker Nodes report the status of running pods back to the
Master Node through the Kubelet, ensuring the Master
Node has a real-time view of the cluster.
K8s Major Components

🆃 Nodes (Master and Worker)
🆃 Pods
🆃 Service
🆃 Ingress
🆃 ConfigMap
🆃 Secrets
🆃 Volume
🆃 Deployment
🆃 StatefulSet
Nodes

🆃 A Node is a worker machine in Kubernetes, responsible for running containers.


🆃 It can be a physical or virtual machine and provides resources for workloads.
🆃 A cluster typically consists of multiple nodes to distribute and manage workloads.

Example:

🆃 A Kubernetes cluster with three worker nodes:


🅣 kubectl get nodes
𝕿 Node A
𝕿 Node B
𝕿 Node C
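
For illustration (hypothetical node names, ages, and versions), the output of kubectl get nodes on such a cluster would look roughly like:

NAME     STATUS   ROLES           AGE   VERSION
node-a   Ready    control-plane   10d   v1.30.0
node-b   Ready    <none>          10d   v1.30.0
node-c   Ready    <none>          10d   v1.30.0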
Pods & Container

🆃 A Pod is the smallest deployable unit in Kubernetes.
🆃 It can contain one or more containers that share resources such as networking and storage.
🆃 Containers within a pod communicate over localhost.
🆃 A pod represents a single instance of a running process in the cluster.
🆃 Pods provide several advantages over containers:
🅣 Logical Grouping
🅣 Shared Resources → Containers in a pod share the same network namespace and can use shared storage volumes.
🅣 Atomic Deployment → Pods act as a single unit of deployment, simplifying scaling and updates.
🅣 Efficient Scheduling → Kubernetes schedules pods (not individual containers), ensuring related containers are placed on the same node.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: app-1                     # pod names must be lowercase alphanumerics and "-", so "app_1" would be rejected
  labels:
    app: software_app
spec:
  containers:
  - name: web
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: database
    image: mysql:latest
    env:
    - name: MYSQL_ROOT_PASSWORD   # the mysql image requires a root password to start
      value: "rootpassword"

Explanation:

🆃 The pod is named app-1.
🆃 It contains two containers: web and database.
🆃 The web container runs the nginx:latest image, a widely used web server and reverse proxy. Port 80 is exposed to allow access to the web application.
🆃 The database container runs the mysql:latest image, a popular database.
🆃 Both containers share the same network namespace, enabling communication via localhost.
Service

🆃 Expose Pods internally or externally.
🆃 Types:
🅣 ClusterIP
🅣 NodePort (see the variation after the example)
🅣 LoadBalancer

Example:

apiVersion: v1               # Specifies the API version used for this resource
kind: Service                # Defines the resource type as a Service
metadata:
  name: webapp-service       # The name of the Service
spec:
  selector:
    app: software_app        # Matches Pods carrying the "app: software_app" label
  ports:
  - protocol: TCP            # Defines the protocol
    port: 80                 # The port on which the Service is exposed within the cluster
    targetPort: 80           # The port the request is forwarded to inside the Pod (nginx listens on 80)
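
As an illustrative variation (not on the slide), exposing the same Service outside the cluster only requires the NodePort type; the port 30050 below is an assumption chosen from the default 30000-32767 range, the same number used in the curl example on the later troubleshooting slide:

apiVersion: v1
kind: Service
metadata:
  name: webapp-nodeport          # hypothetical name for the NodePort variant
spec:
  type: NodePort                 # exposes the Service on every node's IP
  selector:
    app: software_app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30050              # assumed port; reachable via http://<node-ip>:30050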
Ingress

🆃 A Service allows internal communication between pods within a Kubernetes cluster.
🆃 Ingress, in contrast, enables external access by acting as an entry point for services.
🆃 It allows you to define routing rules and implement load balancing for incoming traffic.
🆃 To use Ingress, an Ingress Controller must be deployed in the Kubernetes cluster.
🆃 The controller is responsible for implementing routing rules and managing external traffic.
🆃 By combining Pods, Services, and Ingress, you can deploy a web application along with its database.
🆃 This setup ensures scalability, reliability, and efficient request handling for external users.

Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: xyz.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80

🆃 networking.k8s.io → Refers to the networking API group in Kubernetes.
🆃 v1 → Indicates the stable version of the API.
Ingress Controller

An Ingress Controller is required to manage Ingress resources and handle external traffic routing to services in a Kubernetes cluster. Kubernetes does not install an Ingress Controller by default, so you need to deploy one separately.

Choose an Ingress Controller
🆃 There are multiple Ingress Controllers available, including:
🅣 NGINX Ingress Controller (most popular)
🅣 Traefik (lightweight and dynamic)
🅣 HAProxy (high-performance)
🅣 AWS ALB, GCE, Istio (cloud-provider-specific options)

NGINX Ingress Controller
🆃 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
🅣 This will create the necessary deployments, services, and roles for the NGINX Ingress Controller.
🆃 kubectl get pods -n default
🆃 kubectl get pods -n ingress-nginx
🅣 Verification → You should see a running pod with a name like ingress-nginx-controller-xxxx.
🆃 Testing
🅣 minikube addons enable ingress
🅣 echo "127.0.0.1 xyz.com" | sudo tee -a /etc/hosts
🅣 The app should be reachable at http://xyz.com
ConfigMap

🆃 A ConfigMap in Kubernetes is used to store non-sensitive configuration data as key-value pairs.
🆃 It allows separating configuration from application code, making applications more flexible and manageable.
🆃 ConfigMaps can store entire configuration files, environment variables, or command-line arguments.
🆃 Applications running inside pods can consume ConfigMaps as environment variables, command-line arguments, or mounted volumes (a sketch follows the example below).
🆃 Unlike Secrets, ConfigMaps do not encrypt data and are meant for non-sensitive information.

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_HOST: "mysql-service"
  DB_PORT: "3306"

🆃 This ConfigMap can be used inside a pod to configure a database connection.
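
A minimal sketch (not on the slide) of a pod consuming this ConfigMap as environment variables; the pod and container names are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-config          # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:latest
    envFrom:
    - configMapRef:
        name: db-config          # injects DB_HOST and DB_PORT as environment variables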
Secrets

🆃 A Secret in Kubernetes is used to store sensitive data such as passwords, API keys, and tokens securely.
🆃 It helps prevent exposing confidential information directly in pod definitions or ConfigMaps.
🆃 Secrets are Base64-encoded but not encrypted by default; additional security measures (e.g., encryption at rest) should be used.
🆃 Secrets can be mounted as files, injected as environment variables, or accessed via the Kubernetes API (a sketch follows the example below).
🆃 Kubernetes provides different Secret types, including Opaque (default), docker-registry, and tls for specific use cases.

Example:

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=      # Base64-encoded "password"
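
For illustration (not on the slide), a container can read this Secret through an environment variable; the pod and container names are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret          # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:latest
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret        # the Secret defined above
          key: DB_PASSWORD       # the key inside that Secret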
Volume

🆃 A Volume in Kubernetes provides persistent or temporary storage for pods.
🆃 Unlike container storage, a volume lives as long as the pod exists, preventing data loss when a container restarts.
🆃 Kubernetes supports various volume types, including emptyDir, hostPath, persistentVolumeClaim (PVC), configMap, secret, and cloud-based storage (EBS, Azure Disk, etc.).
🆃 Volumes can be shared between multiple containers within the same pod.
🆃 Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) enable long-term storage independent of pod lifetimes (see the sketch after the example).

Example:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}

🆃 The volume cache-volume is created using emptyDir, which exists only as long as the pod runs.
🆃 The container mounts this volume at /cache, allowing data to persist across container restarts.
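
A minimal PVC sketch (illustrative, not on the slide) that a pod could reference instead of emptyDir when storage must outlive the pod; the claim name and size are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache-pvc                # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # assumed size

In the pod above, the volumes entry would then use persistentVolumeClaim with claimName: cache-pvc instead of emptyDir: {}.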
Deployment
🆃 A Deployment in Kubernetes is used to manage and scale stateless applications by ensuring the desired number of pod replicas are running.
🆃 It supports rolling updates, allowing seamless updates without downtime.
🆃 Deployments ensure high availability by automatically replacing failed pods and distributing workloads across nodes.
🆃 You can scale up or down the number of replicas dynamically using kubectl scale or the Horizontal Pod Autoscaler (HPA), as shown after the example.
🆃 Deployments create and manage ReplicaSets, which in turn manage the actual pods.

Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

🆃 Creates a Deployment named nginx-deployment with 3 replicas.
🆃 Uses a ReplicaSet to manage pod scaling and availability.
🆃 Runs Nginx on port 80, ensuring multiple instances for load balancing and fault tolerance.
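
For illustration (commands assume the nginx-deployment above), replicas can be changed manually or managed by an autoscaler:

🆃 kubectl scale deployment nginx-deployment --replicas=5
🆃 kubectl autoscale deployment nginx-deployment --min=3 --max=10 --cpu-percent=80
🅣 The autoscaler needs a metrics source; on Minikube this can be enabled with minikube addons enable metrics-server.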
StatefulSet
🆃 A StatefulSet in Kubernetes is used to manage stateful applications that require stable, persistent storage and network identities.
🆃 Unlike Deployments, StatefulSets assign each pod a unique, stable hostname and maintain their order during scaling or restarts.
🆃 Pods in a StatefulSet are named sequentially (e.g., db-0, db-1, db-2) and maintain the same identity even if restarted or rescheduled.
🆃 They are commonly used for databases (MySQL, PostgreSQL, MongoDB), distributed systems (Kafka, Zookeeper), and other stateful applications.
🆃 StatefulSets work with Persistent Volume Claims (PVCs) to ensure each pod has dedicated, persistent storage.

Example:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  replicas: 2
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:latest
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "rootpassword"
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
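
The serviceName: "mysql" field refers to a governing headless Service that the slide does not show; a minimal sketch of one (an assumption, not from the slide) would look like this, giving each pod a stable DNS name such as mysql-0.mysql:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None                # headless Service: no cluster IP, per-pod DNS records instead
  selector:
    app: mysql
  ports:
  - port: 3306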
Kubernetes Architecture (diagram)

Sample Commands and Troubleshooting

🆃 minikube ip
🆃 curl http://$(minikube ip):30050
🆃 kubectl get svc -A
🆃 minikube status/stop/start/dashboard
🆃 minikube service nginx-service --url
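
A few additional commands that are often useful when troubleshooting (not on the slide; <pod-name> is a placeholder):

🆃 kubectl describe pod <pod-name>
🆃 kubectl logs <pod-name>
🆃 kubectl get events --sort-by=.metadata.creationTimestamp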
Thank You!! :)
