Introduction To Kubernetes

The document provides an overview of container orchestration, focusing on Kubernetes as a leading platform. It discusses the benefits of container orchestration, including simplified operations, resilience, and security, while comparing Kubernetes with Docker Swarm. Additionally, it outlines Kubernetes architecture, installation methods, and key features such as automated scaling, service discovery, and health monitoring.


Containers Orchestration and Kubernetes

Agenda

01 Introduction to Kubernetes
02 Docker Swarm vs Kubernetes
03 Kubernetes Architecture
04 Kubernetes Installation
05 Working of Kubernetes
06 Deployments in Kubernetes
07 Services in Kubernetes
08 Ingress in Kubernetes
09 Kubernetes Dashboard
Containers are Good…

Both Linux containers and Docker containers:
• Isolate the application from the host
• Are faster, reliable, efficient, lightweight, and scalable
Container Problems!

Both Linux containers and Docker containers isolate the application from the host and are faster, reliable, efficient, and lightweight, but they are not easily scalable.
Problems with Scaling Up Containers

Containers were not scalable on their own:
1. Containers could not easily communicate with each other
2. Containers had to be deployed appropriately
3. Containers had to be managed carefully
4. Auto scaling was not possible
5. Distributing traffic was still challenging

Containers Without Orchestration
Why do we need container orchestration?
• Because containers are lightweight and ephemeral by nature, running them
in production can quickly become a massive effort.
• Particularly when paired with microservices—which typically each run in
their own containers—a containerized application might translate into
operating hundreds or thousands of containers, especially when building
and operating any large-scale system.
• This can introduce significant complexity if managed manually.
• Container orchestration is what makes that operational complexity
manageable for development and operations—or DevOps—because it
provides a declarative way of automating much of the work.
Why Do We Need Container Orchestration?
Container orchestration is used to automate the following tasks at scale:
• Configuring and scheduling of containers
• Provisioning and deployments of containers
• Availability of containers
• The configuration of applications in terms of the containers that
they run in
• Scaling of containers to equally balance application workloads
across infrastructure
• Allocation of resources between containers
• Load balancing, traffic routing and service discovery of containers
• Health monitoring of containers
• Securing the interactions between containers.
Container Orchestration
What is container orchestration?

• Container orchestration is the automation of much of the operational effort required to run containerized workloads and services.
• This includes a wide range of things software teams need to manage a container’s lifecycle, including: provisioning, deployment, scaling (up and down), networking, load balancing, and more.
Container Orchestration
Container orchestration automates and simplifies the provisioning, deployment, and management of containerized applications.
Container Orchestration

• Container orchestration is the automatic process of managing or scheduling the work of individual containers for applications based on microservices within multiple clusters.

• The widely deployed container orchestration platforms are based on open-source projects such as Kubernetes and Docker Swarm, or on commercial offerings such as Red Hat OpenShift.
Cloud Orchestration
Infrastructure-as-a-Service (IaaS): provisioning virtual machines from a cloud service provider (CSP).

Container Orchestration
Application containers: lightweight OS virtualization and application packaging for portable, reusable software.
Container Orchestration Benefits
• Container orchestration is key to working with containers, and it allows
organizations to unlock their full benefits. It also offers its own benefits
for a containerized environment, including:
• Simplified operations
• Most important benefit of container orchestration and the main
reason for its adoption.
• Manages the complexity of Containers
• Resilience
• Automatically restart or scale a container or cluster, boosting
resilience.
• Added security
• Keeping containerized applications secure by reducing or
eliminating the chance of human error.
Container Orchestration Benefits

• Fault-tolerance
• On-demand scalability
• Optimal resource usage
• Auto-discovery to automatically discover and communicate with
each other
• Accessibility from the outside world
• Seamless updates/rollbacks without any downtime.
Container Orchestration Tools

These are container orchestration and cloud management platforms, each serving different purposes:

1. Kubernetes (K8s) – An open-source container orchestration system for automating deployment, scaling, and management of containerized applications.
2. Nomad – A lightweight, decentralized orchestrator from HashiCorp that supports containerized and non-containerized workloads.
3. OpenShift – A Kubernetes-based enterprise PaaS (Platform-as-a-Service) by Red Hat, providing additional security, CI/CD tools, and developer-friendly features.
4. Cloudify – A cloud orchestration tool for managing multi-cloud deployments, providing Infrastructure-as-Code (IaC) capabilities.
5. Minikube – A tool for running a local Kubernetes cluster on a personal machine for development and testing purposes.
Introduction to Kubernetes

• Kubernetes is an open-source container orchestration software.
• It was originally developed by Google.
• It was first released on July 21st, 2015.
• It is the ninth most active repository on GitHub in terms of number of commits.
Features of Kubernetes

• Pods
• Replication Controller
• Storage Management
• Resource Monitoring
• Health Checks
• Service Discovery
• Networking
• Secret Management
• Rolling Updates
Kubernetes Features
Automated Bin Packing – Kubernetes efficiently schedules
workloads (Pods) onto Nodes, optimizing resource utilization while
considering CPU, memory, and constraints.
Self-Healing – Kubernetes automatically detects and replaces failed
containers, restarts unhealthy Pods, and reschedules workloads on
available Nodes to ensure continuous application availability.
Storage Orchestration – Kubernetes dynamically manages storage
volumes, provisions persistent storage, and handles failover, replication,
and reattachment of volumes in case of node failures.
Service Discovery & Load Balancing – Kubernetes automatically
assigns DNS names to services and distributes traffic across Pods using
internal and external load balancers.
Automated Rollouts & Rollbacks – Kubernetes enables seamless
application updates, supports canary deployments, and automatically
rolls back in case of failures.
Secret & Configuration Management – Kubernetes securely stores
and manages sensitive data (Secrets) and non-sensitive configurations
(ConfigMaps) for dynamic application settings.
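Several of these features are configured directly on workload manifests. As a minimal sketch of self-healing and health monitoring, the hypothetical Pod below adds a liveness probe so the kubelet restarts the container when its HTTP health check fails; the pod name, probe path, and timing values are assumptions, not from this deck:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-live            # hypothetical name for illustration
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9        # same image version used later in this deck
    ports:
    - containerPort: 80
    livenessProbe:            # kubelet restarts the container when this fails
      httpGet:
        path: /               # assumed health endpoint
        port: 80
      initialDelaySeconds: 5  # wait before the first check
      periodSeconds: 10       # check every 10 seconds
```

Like the other manifests in this deck, it would be applied with kubectl create -f.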
Kubernetes Myths

Kubernetes is not:
✗ To be compared with Docker
✗ For containerizing apps
✗ For apps with simple architecture

Kubernetes is actually:
✓ Robust and reliable
✓ A container orchestration platform
✓ A solution for scaling up containers
✓ Backed by a huge community
Docker Swarm vs Kubernetes
Docker Swarm vs Kubernetes

The main reason Kubernetes is more popular than Docker Swarm is its advanced automation and scalability features for managing containerized applications in production.

Source: trends.google.com
Docker Swarm vs Kubernetes

Docker Swarm:
• Easy to install and initialize
• Faster when compared to Kubernetes
• Less reliable and has fewer features

Kubernetes:
• Complex procedure to install
• Slower when compared with Docker Swarm
• More reliable and comparatively has more features
Features: Kubernetes vs Docker Swarm

• Installation and Cluster Configuration: Docker Swarm is easy and fast; Kubernetes is complicated and time-consuming.
• GUI: not available in Docker Swarm; available in Kubernetes.
• Scalability: scaling up is faster in Docker Swarm, but the cluster is not as robust; Kubernetes scales up more slowly but guarantees a stronger cluster state.
• Load Balancing: Docker Swarm has a built-in load balancing technique; Kubernetes requires manual service configuration.
• Updates and Rollback: Docker Swarm offers progressive updates and service health monitoring; Kubernetes uses process scheduling to maintain services while updating.
• Data Volumes: Docker Swarm volumes can be shared with any container; Kubernetes volumes are only shared with containers in the same Pod.
• Logging and Monitoring: Docker Swarm relies on 3rd-party tools only; Kubernetes has built-in logging and monitoring tools.
Kubernetes Installation

There are numerous ways to install Kubernetes; the following are some of the popular ones:

▪ Kubeadm – bare-metal installation
▪ Minikube – virtualized environment for Kubernetes
▪ Kops – Kubernetes on AWS
▪ Kubernetes on GCP – Kubernetes running on Google Cloud Platform
Kubernetes Architecture

A Kubernetes cluster consists of a Master Node and one or more Slave Nodes.

• The Master Node runs etcd, the API Server, the Scheduler, and the Controller Manager.
• Each Slave Node runs Docker, a Kubelet, and a Kube-proxy.
Kubernetes Architecture – Master Components

etcd
A highly available distributed key-value store, used to store cluster-wide secrets and configuration. It is accessible only by the Kubernetes API Server, as it holds sensitive information.

API Server
Exposes the Kubernetes API. The Kubernetes API is the front end of the Kubernetes control plane and is used to deploy and execute all operations in Kubernetes.

Scheduler
Takes care of scheduling all the processes, performs dynamic resource management, and manages present and future events on the cluster.

Controller Manager
Runs all the controllers on the Kubernetes cluster. Although each controller is logically a separate process, all the controllers are compiled into a single process to reduce complexity. They are: the Node Controller, Replication Controller, Endpoints Controller, and Service Account & Token Controllers.
Kubernetes Architecture – Slave Components

Kubelet
Takes the specification from the API Server and ensures the application is running according to that specification. Each node has its own kubelet service.

Kube-proxy
This proxy service runs on each node and helps make services available to external hosts. It forwards connections to the correct resources and is also capable of primitive load balancing.
Working of Kubernetes

• Pods can have one or more containers coupled together; they are the basic unit of Kubernetes.
• To increase high availability, we always prefer pods to run as replicas (Pod – Replica 1, 2, 3).
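The idea of a pod coupling containers together can be sketched with a minimal manifest. The pod name and the sidecar container below are hypothetical illustrations, not from this deck:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar      # hypothetical name
spec:
  containers:
  - name: web                 # main application container
    image: nginx:1.7.9
    ports:
    - containerPort: 80
  - name: sidecar             # second container coupled in the same pod
    image: busybox
    command: ["sh", "-c", "while true; do sleep 3600; done"]
```

Both containers share the pod's network namespace, so they can reach each other over localhost.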
Working of Kubernetes

• Services are used to load-balance the traffic among the pods. A service follows round-robin distribution among the healthy pods (Service → Pod – Replica 1, 2, 3).
Working of Kubernetes

[Diagram: an Ingress routes yahoo.com/image to an image-processing service and google.com/video to a video-processing service; each service load-balances across three pod replicas.]

Ingress is a traffic manager in Kubernetes that controls how external users access services inside a cluster using HTTP/HTTPS routes. Instead of creating a separate LoadBalancer for each service, Ingress allows you to use one entry point for multiple services. You configure access by creating a collection of rules that define which inbound connections reach which services, routing traffic based on URLs or domain names.
Deployments in Kubernetes

A Deployment in Kubernetes is a controller which helps your application reach the desired state; the desired state is defined inside the deployment file (Deployment → Pods).
YAML Syntax for Deployments

This YAML file will deploy 3 pods for nginx and maintain the desired state, which is 3 pods, until the deployment is deleted:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Creating a Deployment
Once the file is created, to deploy this deployment use the following syntax:

Syntax
kubectl create -f nginx.yaml
List the Pods
To view the pods, type the following command:

Syntax
kubectl get po

As you can see, the number of pods matches the number of replicas specified in the deployment file
Creating a Service

A Service is basically a round-robin load balancer for all the pods that match its name or selector. It constantly monitors the pods; in case a pod becomes unhealthy, the service starts sending the traffic to the other healthy pods (Service → Pod – Replica 1, 2, 3).
Service Types

• ClusterIP: exposes the service on a cluster-internal IP.
• NodePort: exposes the service on each Node’s IP at a static port.
• LoadBalancer: exposes the service externally using a cloud provider’s load balancer.
• ExternalName: maps the service to the contents of the externalName field.
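A service can also be defined declaratively instead of with kubectl create service. The sketch below is a hypothetical NodePort service matching the app: nginx labels of the deployment shown earlier in this deck; the nodePort value is an assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                 # assumed service name
spec:
  type: NodePort              # expose on each node's IP at a static port
  selector:
    app: nginx                # route traffic to pods carrying this label
  ports:
  - port: 80                  # port of the service
    targetPort: 80            # port of the container
    nodePort: 30080           # assumed port in the default NodePort range (30000-32767)
```

If nodePort is omitted, Kubernetes picks a free port from the range automatically.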
Creating a NodePort Service
We can create a NodePort service using the following syntax:

Syntax
kubectl create service nodeport <name-of-service> --tcp=<port-of-service>:<port-of-container>

For example, kubectl create service nodeport nginx --tcp=80:80 exposes port 80 of the nginx containers.
Creating a NodePort Service
To know the port on which the service is being exposed, type the following command:

Syntax
kubectl get svc nginx
Creating an Ingress
What is an Ingress?

Kubernetes Ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster.

[Diagram: an Ingress routes yahoo.com/image and google.com/video to their respective services.]
What is an Ingress?

[Diagram: external traffic reaches the Ingress Controller through a NodePort service; ingress rules route Intellipaat.com/video and Intellipaat.com/image to separate ClusterIP services, each load-balancing across three pod replicas.]
Installing Ingress Controller
We will be using the nginx ingress controller for our demo. We can download it from the following link:

Link
https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md
Define Ingress Rules

The following rule will redirect traffic which asks for /foo to the nginx service. All the other requests will be redirected to the ingress controller’s default page:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: nginx
          servicePort: 80
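Note that the extensions/v1beta1 Ingress API used above was removed in Kubernetes 1.22. On newer clusters, an equivalent rule would be written against networking.k8s.io/v1; this sketch assumes the same nginx service on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /foo
        pathType: Prefix        # required in the v1 API
        backend:
          service:              # v1 replaces serviceName/servicePort
            name: nginx
            port:
              number: 80
```

It is deployed the same way, with kubectl create -f.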
Deploying Ingress Rules
To deploy the ingress rules, we use the following syntax:

Syntax
kubectl create -f ingress.yaml
Viewing Ingress Rules
To view the ingress rules, we use the following syntax:

Syntax
kubectl get ing
Kubernetes Dashboard
Kubernetes Dashboard

Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy
containerized applications to a Kubernetes cluster, troubleshoot your containerized
application, and manage the cluster resources.
Installing Kubernetes Dashboard
To install Kubernetes Dashboard, execute the following command:

Syntax
kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
Accessing Kubernetes Dashboard
Change the service type for kubernetes-dashboard to NodePort

Syntax
kubectl -n kube-system edit service kubernetes-dashboard
Logging into Kubernetes Dashboard
1. Check the NodePort from the kubernetes-dashboard service
2. Browse to your cluster in your internet browser, and enter the IP address
3. Click on Token; it will ask you for the token entry
4. Generate a token using the following commands:

$ kubectl create serviceaccount cluster-admin-dashboard-sa

$ kubectl create clusterrolebinding cluster-admin-dashboard-sa \
    --clusterrole=cluster-admin \
    --serviceaccount=default:cluster-admin-dashboard-sa

$ TOKEN=$(kubectl describe secret $(kubectl -n kube-system get secret | awk '/^cluster-admin-dashboard-sa-token-/{print $1}') | awk '$1=="token:"{print $2}')

$ echo $TOKEN

5. Finally, enter the token and log in to your dashboard
