Muthayammal Engineering College: IT III/VI

This document provides lecture handouts on Kubernetes, covering topics such as cluster management, deployment strategies, and deploying Kubernetes on AWS and Google Cloud platforms. It details the lifecycle management of Kubernetes clusters, various deployment strategies like rolling updates and canary deployments, and step-by-step instructions for setting up clusters on cloud services. Additionally, it includes resources for further learning and important books on Docker and Kubernetes.


MUTHAYAMMAL ENGINEERING COLLEGE

(An Autonomous Institution)


(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-28
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Cluster Management

Introduction: (Maximum 5 sentences) : A cluster manager is usually a backend graphical user
interface (GUI) or command-line interface (CLI) application that runs on a set of cluster nodes that it manages (in
some cases it runs on a separate server or on a cluster of management servers). The cluster manager works
together with a cluster management agent.

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)
These agents run on each node of the cluster to manage and configure services, a set of
services, or to manage and configure the complete cluster server itself (see supercomputing). In some cases the
cluster manager is mostly used to dispatch work for the cluster (or cloud) to perform. In this last case, a subset of
the cluster manager can be a remote desktop application that is used not for configuration but just to send work
to the cluster and get back the results.

Kubernetes cluster lifecycle management allows you to:


 Create a new cluster
 Remove a cluster
 Update the control plane and compute nodes
 Maintain and update the node
 Upgrade the Kubernetes API version
 Secure the cluster
 Upgrade the cluster

Developers want easy access to new clusters. Operations teams and site reliability engineers (SREs) need new
clusters to be configured correctly so that apps are available in production, and they need to be able to monitor
the health of the clusters in their environment.

Creating and configuring a cluster: Install Kubernetes on a set of machines using the method appropriate for
your environment.

Upgrading a cluster: The current state of cluster upgrades is provider dependent, and some releases may require
special care when upgrading. It is recommended that administrators consult both the release notes and the
version-specific upgrade notes prior to upgrading their clusters.

Upgrading an Azure Kubernetes Service (AKS) cluster: Azure Kubernetes Service enables easy self-service
upgrades of the control plane and nodes in your cluster. The process is currently user-initiated and is described
in the Azure AKS documentation.

Upgrading Google Compute Engine clusters: Google Compute Engine Open Source (GCE-OSS) supports master
upgrades by deleting and recreating the master while keeping the same Persistent Disk (PD), so that data is
retained across the upgrade. Node upgrades for GCE use a Managed Instance Group; each node is sequentially
destroyed and then recreated with new software. Any Pods running on such a node need to be controlled by a
Replication Controller, or manually re-created after the rollout. Upgrades on open source Google Compute
Engine (GCE) clusters are controlled by the cluster/gce/upgrade.sh script; get its usage by running
cluster/gce/upgrade.sh -h. For example, to upgrade just your master to a specific version (v1.0.2):

    cluster/gce/upgrade.sh -M v1.0.2

Alternatively, to upgrade your entire cluster to the latest stable release:

    cluster/gce/upgrade.sh release/stable

Upgrading Google Kubernetes Engine clusters: Google Kubernetes Engine automatically updates master
components (e.g. kube-apiserver, kube-scheduler) to the latest version. It also handles upgrading the operating
system and other components that the master runs on.

Upgrading an Amazon EKS cluster: An Amazon EKS cluster's master components can be upgraded by using
eksctl, the AWS Management Console, or the AWS CLI. The process is user-initiated and is described in the
Amazon EKS documentation.

Upgrading an Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) cluster: Oracle creates and
manages a set of master nodes (and associated Kubernetes infrastructure such as etcd nodes) in the Oracle
control plane on your behalf, to ensure you have a highly available managed Kubernetes control plane. You can
also seamlessly upgrade these master nodes to new versions of Kubernetes with zero downtime. These actions
are described in the OKE documentation.

Upgrading clusters on other platforms: Different providers and tools manage upgrades differently. It is
recommended that you consult their main documentation regarding upgrades.
 kops
 kubespray
 CoreOS Tectonic
 Digital Rebar
To upgrade a cluster on a platform not mentioned in the above list, check the order of component
upgrade on the Skewed versions page.
Video Content/Details of website for further learning(if any)
https://www.youtube.com/watch?v=F3WbvybRzr8

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-29
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Deploy Kubernetes

Introduction: (Maximum 5 sentences) :


If you want to deploy your containerized application to a Kubernetes cluster, you have a choice between
several cloud providers. I describe step by step what to do, from creating an account, setting up the cluster,
setting up a registry, and accessing the cluster with your local client, up to the actual deployment and running
the Kubernetes Dashboard via proxy.

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)
 Kubernetes Deployments
 Deploying your first app on Kubernetes
 Deploying the Application

Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To
do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to
create and update instances of your application. Once you've created a Deployment, the Kubernetes control
plane schedules the application instances included in that Deployment to run on individual Nodes in the
cluster.

Once the application instances are created, a Kubernetes Deployment controller continuously monitors those
instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the
instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address
machine failure or maintenance. In a pre-orchestration world, installation scripts would often be used to start
applications, but they did not allow recovery from machine failure. By both creating your application
instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally
different approach to application management.

Deploying your first app on Kubernetes
You can create and manage a Deployment by using the Kubernetes command-line interface, kubectl. kubectl
uses the Kubernetes API to interact with the cluster. In this module, you'll learn the most common kubectl
commands needed to create Deployments that run your applications on a Kubernetes cluster. When you create
a Deployment, you'll need to specify the container image for your application and the number of replicas that
you want to run. You can change that information later by updating your Deployment; Modules 5 and 6 of the
bootcamp discuss how you can scale and update your Deployments. Applications need to be packaged into
one of the supported container formats in order to be deployed on Kubernetes. For your first Deployment,
you'll use a hello-node application packaged in a Docker container that uses NGINX to echo back all the
requests.
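The two pieces of information mentioned above, the container image and the replica count, map directly onto a Deployment manifest. A minimal sketch, with an illustrative name (hello-node) and a stand-in NGINX image:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node          # illustrative name
spec:
  replicas: 3               # number of application instances to keep running
  selector:
    matchLabels:
      app: hello-node
  template:                 # pod template: what each replica runs
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: nginx:1.14.2 # any supported container image
        ports:
        - containerPort: 80
```

Applying this with kubectl apply -f deployment.yaml asks the control plane to create the Deployment and keep three replicas running.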
What are Kubernetes Deployment Strategies?
Kubernetes offers several deployment strategies to handle a broad range of application development and
deployment needs. Once you define the desired state of the application, the deployment controller goes to
work. It can make changes at a controlled rate to optimize the deployment.
What is a Kubernetes Recreate Deployment?
The recreate strategy terminates pods that are currently running and ‘recreates’ them with the new version.
This approach is commonly used in a development environment where user activity isn’t an issue.
Because the recreate deployment entirely refreshes the pods and the state of the application, you can expect
downtime due to the shutdown of the old deployment and the initiation of new deployment instances.
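In a Deployment manifest, this strategy is a one-line choice. A minimal fragment (the surrounding Deployment fields are omitted):

```yaml
spec:
  strategy:
    type: Recreate   # terminate all old pods before any new-version pods start
```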
What is a Kubernetes Rolling Update Deployment?
The rolling update deployment provides an orderly, ramped migration from one version of an application to a
newer version. A new ReplicaSet with the new version is launched, and replicas of the old version are
terminated systematically as replicas of the new version launch. Eventually, all pods from the old version are
replaced by the new version.
The rolling update deployment is beneficial because it provides an organized transition between versions.
However, it can take time to complete.
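The pace of the ramped migration described above is tunable in the Deployment's strategy block. A sketch with illustrative surge/unavailability limits:

```yaml
spec:
  strategy:
    type: RollingUpdate      # the default strategy for Deployments
    rollingUpdate:
      maxSurge: 1            # at most one extra pod above the desired count
      maxUnavailable: 1      # at most one pod below the desired count during rollout
```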
What is a Kubernetes Blue/Green Deployment?
The Blue/Green strategy offers a rapid transition from the old to new version once the new version is tested in
production. Here the new ‘green’ version is deployed along with the existing ‘blue’ version. Once there is
enough confidence that the ‘green’ version is working as designed, the version label is replaced in the selector
field of the Kubernetes Service object that performs load balancing. This action immediately switches traffic
to the new version.
The Kubernetes blue/green deployment option provides a rapid rollout that avoids versioning issues.
However, this strategy doubles the resource utilization since both versions run until cutover.
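The cutover described above lives in the Service's selector field. A sketch with an illustrative version label; switching the value from blue to green immediately redirects traffic:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: green   # change from "blue" to "green" to cut traffic over
  ports:
  - port: 80
    targetPort: 8080
```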
What is a Kubernetes Canary Deployment?
In a canary deployment, a small group of users is routed to the new version of an application, which runs on a
smaller subset of pods. The purpose of this approach is to test functionality in a production environment.
Once satisfied that testing is error-free, replicas of the new version are scaled up, and the old version is
replaced in an orderly manner.
Canary deployments are beneficial when you want to test new functionality on a smaller group of users. Since
you can easily roll back canary deployments, this strategy helps gauge how new code will impact the overall
system operation without significant risk.
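The canary pattern above can be sketched as two Deployments that share an app label, so that a Service selecting only app: my-app splits traffic roughly 9:1. Names, images and replica counts are illustrative:

```yaml
# stable version: carries most of the traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: my-app, track: stable}
  template:
    metadata:
      labels: {app: my-app, track: stable}
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
---
# canary: a single replica running the new version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app, track: canary}
  template:
    metadata:
      labels: {app: my-app, track: canary}
    spec:
      containers:
      - name: my-app
        image: my-app:2.0
```

Rolling back is then just scaling the canary Deployment to zero; scaling it up and the stable one down completes the migration.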
What are use cases for Kubernetes Deployments?
Deployments are the easiest way to manage and scale how applications run on a Kubernetes cluster, and
Kubernetes’ open API simplifies integration into CI/CD pipelines.
Here are some common use cases for deployments:
 Run stateless web servers, like the popular open-source NGINX. The deployment can request that a fixed
number of pod replicas be instantiated, and Kubernetes will maintain that number of pods during the
deployment.
 Applications that require persistent storage, like a database instance, would use the StatefulSet type of
deployment and mount a persistent volume to ensure data integrity and longevity.
 Deployments can automatically scale the number of replicas in the cluster as the workload increases. For
example, they can automatically balance incoming requests between the replicas, create new replicas as
demand increases, and terminate replicas as demand subsides.
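The automatic scaling described above is typically expressed with a HorizontalPodAutoscaler object rather than in the Deployment itself. A sketch targeting a hypothetical Deployment named web, with illustrative bounds and CPU target:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%
```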

Video Content/Details of website for further learning(if any)


https://www.youtube.com/watch?v=F3WbvybRzr8

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-30
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Deploy Kubernetes on AWS and Google cloud platforms

Introduction: (Maximum 5 sentences) : Kubernetes on AWS offers one of the most powerful
distributed computing platforms. Thanks to technologies like Amazon Fargate and the vast reach of
Amazon's cloud computing infrastructure, the Elastic Kubernetes Service, or EKS, can offer a truly
distributed environment where your applications can run and scale.

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)

 Amazon AWS

 Microsoft Azure

 IBM Kubernetes Service (IKS) on IBM Cloud

 RedHat OpenShift (now IBM).

Credit goes to the big drivers of Kubernetes (the Linux Foundation, the Open Container Initiative (OCI) and
the Cloud Native Computing Foundation (CNCF)) and also to long-time advocates of open source software:
Google, IBM and RedHat. I really like the way IBM offers the IBM Cloud Kubernetes Service (IKS), and as an
IBM employee that is what I know best. IBM also offers a free, open source Community Edition (CE) on
GitHub of its on-prem cloud, called IBM Cloud Private (ICP). There is also RedHat's OpenShift platform,
which has a firm share of the Kubernetes market, but here I want to venture into how to use Google Kubernetes
Engine (GKE) on Google Cloud Platform (GCP) instead. A more complete list of Kubernetes Certified Service
Providers (KCSPs) can be found on the Kubernetes Partners site.
These are the steps to deploy a service to GKE:
 Create a Google Cloud Account
 Create a Project
 Create a Kubernetes Cluster
 Install Google Cloud SDK
 Initialize Google Cloud SDK
 Set kube config
 Push Image to the Container Registry
 Deploy Kubernetes Resources
 Create an Ingress Load Balancer
 Run Kubernetes Web UI Dashboard
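The steps above correspond roughly to the following gcloud commands. This is a sketch only: the project name (my-k8s-project), cluster name (my-cluster), zone, and image name (my-app) are illustrative assumptions, and the commands require an installed, authenticated Google Cloud SDK:

```shell
gcloud projects create my-k8s-project                  # create a project
gcloud config set project my-k8s-project
gcloud container clusters create my-cluster \
    --num-nodes=3 --zone=us-central1-a                 # create a Kubernetes cluster
gcloud container clusters get-credentials my-cluster \
    --zone=us-central1-a                               # set kube config for kubectl
docker tag my-app gcr.io/my-k8s-project/my-app
docker push gcr.io/my-k8s-project/my-app               # push image to the registry
kubectl apply -f deployment.yaml                       # deploy Kubernetes resources
```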
Create a Kubernetes Cluster on AWS
The easiest way to get started with EKS is to use the command-line utilities, which include:
1. AWS CLI to interact with your AWS account
2. eksctl to create, manage and delete EKS clusters
3. kubectl to interact with the Kubernetes cluster itself
4. docker to create and containerize your application
5. a Docker Hub account to host your Docker images (the free tier will work)

1. Setting Up the AWS CLI
AWS provides users with a command-line tool and the possibility to provision AWS resources straight from
the terminal. It talks directly to the AWS API and provisions resources on your behalf. This eliminates the
need to configure an EKS cluster or other resources manually using the AWS Web Console, and automating
the process with the CLI also makes it less error-prone. Let's set up the AWS CLI on our local computer.
1. First, get the CLI binaries suitable for your system.
2. The AWS CLI allows you to quickly and programmatically create resources in AWS' cloud without having
to work in the dashboard, which also eliminates human errors.
3. In order to create and manage EKS clusters you need to be either the root user or an IAM user with
Administrator access.
4. I will be using my root account for the sake of brevity. Click on your Profile in the top right corner of your
AWS Web Console and select "My Security Credentials".
5. Open up your terminal, type in the following command, and when prompted enter your Access Key ID and
Secret Access Key:

    $ aws configure
    AWS Access Key ID [None]:
    AWS Secret Access Key [None]:
    Default region name [None]: us-east-2
    Default output format [None]: text

You will also be asked to select a default region. We are going with us-east-2, but you can pick the region that
benefits you the most (or is closest to you). The default output format is going to be text in our case. Your
configuration and credentials live in your HOME directory in a subdirectory called .aws, and will be used by
both aws and eksctl to manage resources. Now we can move on to creating a cluster.

2. Creating and Deleting an EKS Cluster Using Fargate
To create a cluster with Fargate nodes, simply run the following command:

    $ eksctl create cluster --name my-fargate-cluster --fargate

That is it! The command can take around 15-30 minutes to finish, and as it runs it will print to your terminal
all the resources that are being created to launch the cluster.

Video Content/Details of website for further learning(if any)


https://www.youtube.com/watch?v=ojfh2t_sQWY

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-31
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Pods and Deployments

Introduction: (Maximum 5 sentences) : A Deployment manages a set of Pods to run an application


workload, usually one that doesn't maintain state. A Deployment provides declarative updates for Pods and
ReplicaSets. You describe a desired state in a Deployment, and the Deployment Controller changes the actual
state to the desired state at a controlled rate.

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)
 Pods that run a single container. The "one-container-per-Pod" model is the most common Kubernetes use case; in
this case, you can think of a Pod as a wrapper around a single container, and Kubernetes manages Pods rather than
managing the containers directly.
 Pods that run multiple containers that need to work together

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod (as in a
pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a
specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a
shared context. A Pod models an application-specific "logical host": it contains one or more application containers
which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual
machine are analogous to cloud applications executed on the same logical host. As well as application containers, a
Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging
if your cluster offers this.

What is a Pod?
Note: While Kubernetes supports more container runtimes than just Docker, Docker is the most commonly known
runtime, and it helps to describe Pods using some terminology from Docker. The shared context of a Pod is a set of
Linux namespaces, cgroups, and potentially other facets of isolation - the same things that isolate a Docker
container. Within a Pod's context, the individual applications may have further sub-isolations applied. In terms of
Docker concepts, a Pod is similar to a group of Docker containers with shared namespaces and shared filesystem
volumes.

Using Pods
The following is an example of a Pod which consists of a container running the image nginx:1.14.2:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

To create the Pod shown above, run the following command:

    kubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml

Pods are generally not created directly; they are created using workload resources. See Working with Pods for
more information on how Pods are used with workload resources.

Workload resources for managing Pods
Usually you don't need to create Pods directly, even singleton Pods. Instead, create them using workload resources
such as Deployment or Job. If your Pods need to track state, consider the StatefulSet resource. Pods in a
Kubernetes cluster are used in two main ways:
 Pods that run a single container. This is the most common Kubernetes use case.
 Pods that run multiple containers that need to work together. A Pod can encapsulate an application composed of
multiple co-located containers that are tightly coupled and need to share resources. These co-located containers
form a single cohesive unit of service - for example, one container serving data stored in a shared volume to the
public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage
resources, and an ephemeral network identity together as a single unit.

Note: Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case.
You should use this pattern only in specific instances in which your containers are tightly coupled.

Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally
(to provide more overall resources by running more instances), you should use multiple Pods, one for each
instance. In Kubernetes, this is typically referred to as replication. Replicated Pods are usually created and
managed as a group by a workload resource and its controller. See Pods and controllers for more information on
how Kubernetes uses workload resources, and their controllers, to implement application scaling and auto-healing.

How Pods manage multiple containers
Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service.
The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in
the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate
when and how they are terminated. For example, you might have a container that acts as a web server for files in a
shared volume, and a separate "sidecar" container that updates those files from a remote source. Some Pods have
init containers as well as app containers; init containers run and complete before the app containers are started.
Pods natively provide two kinds of shared resources for their constituent containers: networking and storage.

Working with Pods
You'll rarely create individual Pods directly in Kubernetes, even singleton Pods. This is because Pods are designed
as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a
controller), the new Pod is scheduled to run on a Node in your cluster. The Pod remains on that node until the Pod
finishes execution, the Pod object is deleted, the Pod is evicted for lack of resources, or the node fails.

Note: Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an
environment for running container(s). A Pod persists until it is deleted. When you create the manifest for a Pod
object, make sure the name specified is a valid DNS subdomain name.

Pods and controllers
You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles
replication and rollout and automatic healing in case of Pod failure. For example, if a Node fails, a controller
notices that Pods on that Node have stopped working and creates a replacement Pod. The scheduler places the
replacement Pod onto a healthy Node. Here are some examples of workload resources that manage one or more
Pods:
 Deployment
 StatefulSet
 DaemonSet

Pod templates
Controllers for workload resources create Pods from a pod template and manage those Pods on your behalf.
PodTemplates are specifications for creating Pods, and are included in workload resources such as Deployments,
Jobs, and DaemonSets.
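The web-server-plus-sidecar pattern discussed above can be sketched as a single multi-container Pod manifest. The container names, the sidecar's refresh command, and the shared-volume layout are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  volumes:
  - name: shared-data          # emptyDir volume shared by both containers
    emptyDir: {}
  containers:
  - name: web                  # serves the files written by the sidecar
    image: nginx:1.14.2
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: content-sync         # hypothetical sidecar refreshing the files
    image: alpine:3.19
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Both containers are co-scheduled onto the same node and share the volume's lifetime with the Pod.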
Video Content/Details of website for further learning(if any)
https://www.youtube.com/watch?v=ojfh2t_sQWY

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-32
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Kubernetes Master

Introduction: (Maximum 5 sentences) :


The master node is responsible for cluster management and for providing the API that is used to configure and
manage resources within the Kubernetes cluster. Kubernetes master node components can be run within
Kubernetes itself, as a set of containers within a dedicated pod

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)
The Kubernetes master runs the Scheduler, Controller Manager, API Server and etcd components and is
responsible for managing the Kubernetes cluster. Essentially, it’s the brain of the cluster! Now, let’s dive into each
master component.

In production, you should set up Kubernetes with multiple masters for high availability. See the official guides
on how to build high-availability clusters for further information.
Etcd
Etcd is a distributed, consistent key-value store used for configuration management, service discovery, and
coordinating distributed work.
When it comes to Kubernetes, etcd reliably stores the configuration data of the Kubernetes cluster, representing the
state of the cluster (what nodes exist in the cluster, what pods should be running, which nodes they are running on,
and a whole lot more) at any given point of time.
As all cluster data is stored in etcd, you should always have a backup plan for it. You can easily back up your etcd
data using the etcdctl snapshot save command. In case you are running Kubernetes on AWS, you can also back up
etcd by taking a snapshot of the EBS volume.
Etcd is written in Go and uses the Raft consensus algorithm to manage a highly-available replicated log. Raft is a
consensus algorithm designed as an alternative to Paxos. The Consensus problem involves multiple servers
agreeing on values; a common problem that arises in the context of replicated state machines. Raft defines three
different roles (Leader, Follower, and Candidate) and achieves consensus via an elected leader. For further
information, please read the Raft paper.
Etcdctl is the command-line interface tool written in Go that allows manipulating an etcd cluster. It can be used to
perform a variety of actions, such as:
Set, update and remove keys.
Verify the cluster health.
Add or remove etcd nodes.
Generate database snapshots.
You can play online with a 5-node etcd cluster at http://play.etcd.io.
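The actions listed above map onto etcdctl (v3) commands. A sketch only, assuming a reachable etcd endpoint; the key name and snapshot path are illustrative:

```shell
export ETCDCTL_API=3
etcdctl put /demo/key "value"               # set a key
etcdctl put /demo/key "new-value"           # update it
etcdctl get /demo/key                       # read it back
etcdctl del /demo/key                       # remove it
etcdctl endpoint health                     # verify cluster health
etcdctl member list                         # inspect (add/remove) etcd nodes
etcdctl snapshot save /tmp/etcd-backup.db   # generate a database snapshot
```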
Etcd also implements a watch feature, which provides an event-based interface for asynchronously monitoring
changes to keys. Once a key is changed, its “watchers” get notified. This is a crucial feature in the context of
Kubernetes, as the API Server component heavily relies on this to get notified and call the appropriate business
logic components to move the current state towards the desired state.
API Server
When you interact with your Kubernetes cluster using the kubectl command-line interface, you are actually
communicating with the master API Server component.
The API Server is the main management point of the entire cluster. In short, it processes REST operations,
validates them, and updates the corresponding objects in etcd. The API Server serves up the Kubernetes API and is
intended to be a relatively simple server, with most business logic implemented in separate components or in
plugins.
The API Server is the only Kubernetes component that connects to etcd; all the other components must go through
the API Server to work with the cluster state.
The API Server is also responsible for the authentication and authorization mechanism. All API clients should be
authenticated in order to interact with the API Server.
The API Server also implements a watch mechanism (similar to etcd) for clients to watch for changes. This allows
components such as the Scheduler and Controller Manager to interact with the API Server in a loosely coupled
manner.
This pattern is extensively used in Kubernetes. For example, when you create a pod using kubectl, this is what
happens:

Create Pod Flow. Source: heptio.com


1. kubectl writes to the API Server.
2. The API Server validates the request and persists it to etcd.
3. etcd notifies back the API Server.
4. The API Server invokes the Scheduler.
5. The Scheduler decides where to run the pod and returns that to the API Server.
6. The API Server persists it to etcd.
7. etcd notifies back the API Server.
8. The API Server invokes the Kubelet on the corresponding node.
9. The Kubelet talks to the Docker daemon using the API over the Docker socket to create the container.
10. The Kubelet updates the pod status to the API Server.
11. The API Server persists the new state in etcd.

Video Content/Details of website for further learning(if any)


https://www.youtube.com/watch?v=3W53ftHeceY

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-33
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: - master nodes

Introduction: (Maximum 5 sentences) : Master nodes are part of the infrastructure that sustains
cryptocurrencies such as Bitcoin, Ethereum, and Dash. Unlike regular nodes, master nodes do not add new blocks
of transactions to the blockchain. Instead, they verify new blocks and perform special roles in governing the
blockchain.

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)
Master nodes verify new blocks of transactions in a cryptocurrency but, unlike other nodes, do not submit new
blocks to the network for verification. Master nodes operate on a collateral-based system, meaning the operators
need to own a significant amount of the cryptocurrency.

There are several types of nodes that together form the infrastructure of a decentralized blockchain, collectively
providing transparency and security and running the software that implements a cryptocurrency's rules and
functionality. Nodes maintain the massive ledger of public transactions in a given cryptocurrency and verify new
transactions. Master nodes also play a special role in the management and governance of the blockchain's protocol.
Operating a master node requires a significant financial investment and running costs, including a significant stake
in the cryptocurrency itself and computer hardware that is far more expensive than your average laptop. It also
requires expertise. As an incentive for people to maintain master nodes, operators are rewarded with
cryptocurrency earnings, usually a share of block rewards.
Dash, a fork of Bitcoin, was the first virtual currency to adopt the master node model, but many other
cryptocurrencies have since adopted it.
Master Nodes vs. Full Nodes
Full nodes play critical roles in keeping a cryptocurrency functioning. Each full node contains an entire copy of the
blockchain's history of transactions and submits new blocks of transactions for verification by other nodes. Every
time a new block of transactions is submitted, all the other nodes must verify the transactions before they are added
to the permanent ledger. This includes master nodes. The difference is that master nodes generally do not submit
transactions for verification; they only verify those submitted by other nodes.
Master nodes also have other responsibilities that full nodes do not. This includes governing voting events on
changes to the ecosystem and executing protocol operations.
Profitability of Master Nodes
Master nodes are seen as a relatively simple alternative to mining, requiring far less expertise and incurring lower
operating costs. But it can still be challenging to make an attractive profit from operating a master node,
particularly given the relatively high initial investments, including the currency stake and equipment, and running
costs such as power charges and hosting fees.
The master node is also usually configured as a worker node within the cluster.
Therefore, the master node also runs the standard node services: the kubelet service, the container runtime and the
kube proxy service. Note that it is possible to taint a node to prevent workloads from running on an inappropriate
node. The kubeadm utility automatically taints the master node so that no other workloads or containers can run on
this node. This helps to ensure that the master node is never placed under any unnecessary load and that backup
and restore of the master node for the cluster is simplified.
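The taint that kubeadm applies can be inspected and, if desired, removed with kubectl. The following is a sketch assuming a node named `master-1`; note that the taint key differs across Kubernetes versions (`node-role.kubernetes.io/master` in older releases, `node-role.kubernetes.io/control-plane` in newer ones):

```shell
# Show the taints applied to the master node.
kubectl describe node master-1 | grep Taints

# Remove the NoSchedule taint so workloads may run on this node
# (useful on single-node clusters; the trailing '-' deletes the taint).
kubectl taint nodes master-1 node-role.kubernetes.io/control-plane:NoSchedule-
```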
If the master node becomes unavailable for a period, cluster functionality is suspended, but the worker nodes
continue to run container applications without interruption.
For single node clusters, when the master node is offline, the API is unavailable, so the environment is unable to
respond to node failures and there is no way to perform new operations like creating new resources or editing or
moving existing resources.
A high availability cluster with multiple master nodes ensures that more requests for master node functionality can
be handled, and with the assistance of master replica nodes, uptime is significantly improved.
API Server (kube-apiserver): The Kubernetes REST API is exposed by the API Server. This component processes
and validates operations and then updates information in the Cluster State Store to trigger operations on the worker
nodes. The API is also the gateway to the cluster.
Cluster State Store (etcd): Configuration data relating to the cluster state is stored in the Cluster State Store, which
can roll out changes to the coordinating components like the Controller Manager and the Scheduler. It is essential
to have a backup plan in place for the data stored in this component of your cluster.
Cluster Controller Manager (kube-controller-manager): This manager is used to perform many of the cluster-level
functions, as well as application management, based on input from the Cluster State Store and the API Server.
Scheduler (kube-scheduler): The Scheduler automatically determines where containers should be run by
monitoring availability of resources, quality of service, and affinity and anti-affinity specifications.
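The note about backing up the Cluster State Store can be made concrete with `etcdctl`. The following is a sketch only; it assumes etcdctl v3 and the default kubeadm certificate paths, which vary by installation:

```shell
# Save a point-in-time snapshot of etcd to a local file.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
```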

Video Content/Details of website for further learning(if any)


https://www.youtube.com/watch?v=wz6tzaOhu58

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-34
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Introduction

Introduction: (Maximum 5 sentences) :


Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and
management. Originally designed by Google, the project is now maintained by the Cloud Native Computing
Foundation. The name Kubernetes originates from Ancient Greek, meaning 'helmsman' or 'pilot'

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)
Kubernetes was first developed by engineers at Google before being open-sourced in 2014. It is a descendant of
Borg, a container orchestration platform used internally at Google. Kubernetes is Greek for helmsman or pilot,
hence the helm in the Kubernetes logo.

This lecture covers the basics of the Kubernetes cluster orchestration system. Each module contains some background information on
major Kubernetes features and concepts, and a tutorial for you to follow along.
 Deploy a containerized application on a cluster.
 Scale the deployment.
 Update the containerized application with a new software version.
 Debug the containerized application.
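The four tasks above map onto kubectl commands roughly as follows; this is a sketch, and the deployment name and image are illustrative:

```shell
# Deploy a containerized application on a cluster.
kubectl create deployment hello --image=nginx

# Scale the deployment.
kubectl scale deployment hello --replicas=4

# Update the containerized application with a new software version.
kubectl set image deployment/hello nginx=nginx:1.25

# Debug the containerized application.
kubectl logs deployment/hello
kubectl describe deployment hello
```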
What can Kubernetes do for you?
With modern web services, users expect applications to be available 24/7, and developers expect to deploy new
versions of those applications several times a day. Containerization helps package software to serve these goals,
enabling applications to be released and updated without downtime. Kubernetes helps you make sure those
containerized applications run where and when you want, and helps them find the resources and tools they need to
work. Kubernetes is a production-ready, open source platform designed with Google's accumulated experience in
container orchestration, combined with best-of-breed ideas from the community. It may be easier or more helpful
to understand containers as the latest point on the continuum of IT infrastructure automation and abstraction.
In traditional infrastructure, applications run on a physical server and grab all the resources they can get. This
leaves you the choice of running multiple applications on a single server and hoping one doesn’t hog resources at
the expense of the others or dedicating one server per application, which wastes resources and doesn’t scale.
Virtual machines (VMs) are servers abstracted from the actual computer hardware, enabling you to run multiple
VMs on one physical server or a single VM that spans more than one physical server. Each VM runs its own OS
instance, and you can isolate each application in its own VM, reducing the chance that applications running on the
same underlying physical hardware will impact each other. VMs make better use of resources and are much easier
and more cost-effective to scale than traditional infrastructure. And, they’re disposable — when you no longer
need to run the application, you take down the VM.
For more information on VMs, see "What are virtual machines?"
Containers take this abstraction to a higher level—specifically, in addition to sharing the underlying virtualized
hardware, they share an underlying, virtualized OS kernel as well. Containers offer the same isolation, scalability,
and disposability of VMs, but because they don’t carry the payload of their own OS instance, they’re lighter weight
(that is, they take up less space) than VMs. They’re more resource-efficient—they let you run more applications on
fewer machines (virtual and physical), with fewer OS instances. Containers are more easily portable across
desktop, data center, and cloud environments. And they’re an excellent fit for Agile and DevOps development
practices.
"What are containers?" provides a complete explanation of containers and containerization. And the blog post
"Containers vs. VMs: What's the difference?" gives a full rundown of the differences.
What is Docker?
Docker is the most popular tool for creating and running Linux® containers. While early forms of containers were
introduced decades ago (with technologies such as FreeBSD Jails and AIX Workload Partitions), containers were
democratized in 2013 when Docker brought them to the masses with a new developer-friendly and cloud-friendly
implementation.
Docker began as an open source project, but today it also refers to Docker Inc., the company that produces Docker
—a commercial container toolkit that builds on the open source project (and contributes those improvements back
to the open source community).
Docker was built on traditional Linux container (LXC) technology, but enables more granular virtualization of
Linux kernel processes and adds features to make containers easier for developers to build, deploy, manage, and
secure.
While alternative container platforms exist today (such as Open Container Initiative (OCI), CoreOS, and Canonical
(Ubuntu) LXD), Docker is so widely preferred that it is virtually synonymous with containers and is sometimes
mistaken as a competitor to complementary technologies such as Kubernetes (see the video “Kubernetes vs.
Docker: It’s Not an Either/Or Question” further below).
Container orchestration with Kubernetes
As containers proliferated — today, an organization might have hundreds or thousands of them — operations
teams needed to schedule and automate container deployment, networking, scalability, and availability. And so, the
container orchestration market was born.
While other container orchestration options — most notably Docker Swarm and Apache Mesos — gained some
traction early on, Kubernetes quickly became the most widely adopted (in fact, at one point, it was the fastest-
growing project in the history of open source software).
Developers chose and continue to choose Kubernetes for its breadth of functionality, its vast and
growing ecosystem of open source supporting tools, and its support and portability across cloud service providers.
All leading public cloud providers — including Amazon Web Services (AWS), Google Cloud, IBM Cloud and
Microsoft Azure — offer fully managed Kubernetes services.

Video Content/Details of website for further learning(if any)


https://www.youtube.com/watch?v=wz6tzaOhu58

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-35
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Kubernetes Architecture-

Introduction: (Maximum 5 sentences) :


Kubernetes is an architecture that offers a loosely coupled mechanism for service discovery across a cluster. A
Kubernetes cluster has one or more control planes, and one or more compute nodes

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)

A Kubernetes control plane is the control plane for a Kubernetes cluster. Its components include:
kube-apiserver: As its name suggests, the API Server exposes the Kubernetes API, which is communications
central. External communications via a command line interface (CLI) or other user interfaces (UI) pass to the
kube-apiserver, and all control plane to node communications also go through the API Server.
etcd: The key-value store where all data relating to the cluster is stored. etcd is kept highly available and
consistent; all access to etcd by other components goes through the API Server. Information in etcd is generally
formatted in human-readable YAML (which stands for the recursive “YAML Ain’t Markup Language”).
kube-scheduler: When a new Pod is created, this component assigns it to a node for execution based on resource
requirements, policies, and ‘affinity’ specifications regarding geolocation and interference with other workloads.
kube-controller-manager: Although a Kubernetes cluster has several controller functions, they are all compiled into
a single binary known as kube-controller-manager.
Controller functions included in this process include:
Replication controller: Ensures the correct number of pods is in existence for each replicated pod running in the
cluster
Node controller: Monitors the health of each node and notifies the cluster when nodes come online or become
unresponsive
Endpoints controller: Connects Pods and Services to populate the Endpoints object
Service Account and Token controllers: Allocates API access tokens and default accounts to new namespaces in
the cluster
cloud-controller-manager: If the cluster is partly or entirely cloud-based, the cloud controller manager links the
cluster to the cloud provider’s API. Only those controls specific to the cloud provider will run. The cloud controller
manager does not exist on clusters that are entirely on-premises. More than one cloud controller manager can be
running in a cluster for fault tolerance or to improve overall cloud performance.
Elements of the cloud controller manager include:
Node controller: Determines status of a cloud-based node that has stopped responding, i.e., if it has been deleted
Route controller: Establishes routes in the cloud provider infrastructure
Service controller: Manages cloud provider’s load balancers
What is Kubernetes node architecture?
Nodes are the machines, either VMs or physical servers, where Kubernetes place Pods to execute. Node
components include:
kubelet: Every node has an agent called kubelet. It ensures that the container described in PodSpecs are up and
running properly.
kube-proxy: A network proxy on each node that maintains network nodes which allows for the communication
from Pods to network sessions, whether inside or outside the cluster, using operating system (OS) packet filtering
if available.
container runtime: Software responsible for running the containerized applications. Although Docker is the most
popular, Kubernetes supports any runtime that adheres to the Kubernetes CRI (Container Runtime Interface).
What are other Kubernetes infrastructure components?
Pods: By encapsulating one (or more) application containers, pods are the most basic execution unit of a
Kubernetes application. Each Pod contains the code and storage resources required for execution and has its own
IP address. Pods include configuration options as well. Typically, a Pod contains a single container or few
containers that are coupled into an application or business function and that share a set of resources and data.
Deployments: A method of deploying containerized application Pods. A desired state described in a Deployment
will cause controllers to change the actual state of the cluster to achieve that state in an orderly manner. Learn more
about Kubernetes Deployments.
ReplicaSet: Ensures that a specified number of identical Pods are running at any given point in time.
Cluster DNS: serves DNS records needed to operate Kubernetes services.
Container Resource Monitoring: Captures and records container metrics in a central database.
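The Deployment and ReplicaSet concepts above can be made concrete with a minimal manifest; the object name, labels, and image are illustrative choices:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # the ReplicaSet created by this Deployment keeps 3 identical Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # assumed image and tag, for illustration only
        ports:
        - containerPort: 80
```

Applying this manifest (for example with `kubectl apply -f`) declares the desired state; the Deployment controller then creates a ReplicaSet, which in turn creates the Pods.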
What are Kubernetes architecture best practices and design principles?
Gartner’s Container Best Practices suggest a platform strategy that considers security, governance, monitoring,
storage, networking, container lifecycle management and orchestration like Kubernetes.
Here are some best practices for architecting Kubernetes clusters:
Ensure you have updated to the latest Kubernetes version (1.18 as of this writing).
Invest up-front in training for developer and operations teams.
Establish governance enterprise-wide. Ensure tools and vendors are aligned and integrated with Kubernetes
orchestration.
Enhance security by integrating image-scanning processes as part of your CI/CD process, scanning during build
and run phases. Open-source code pulled from a Github repository should always be considered suspect.
Adopt role-based access control (RBAC) across the cluster. Least-privilege, zero-trust models should be the
standard where it makes sense. Automating the CI/CD pipeline lets you avoid manual Kubernetes deployments.
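The RBAC recommendation can be illustrated with a minimal least-privilege Role and RoleBinding; the namespace and user name here are hypothetical:

```yaml
# A Role that only allows reading Pods in one namespace,
# bound to a single illustrative user.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]          # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: jane               # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```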
Video Content/Details of website for further learning(if any)
https://www.youtube.com/watch?v=wz6tzaOhu58

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
MUTHAYAMMAL ENGINEERING COLLEGE
(An Autonomous Institution)
(Approved by AICTE, New Delhi, Accredited by NAAC & Affiliated to Anna University)
Rasipuram-637408, Namakkal Dist., TamilNadu

L-36
LECTURE HANDOUTS

IT III/VI
Course Name with Code Docker And Kubernetes -19ITE26

Course Teacher : Mr.M.Dhamodaran

Unit : IV - INTRODUCTION TO KUBERNETES Date of Lecture:

Topic of Lecture: Components of a Kubernetes cluster

Introduction: (Maximum 5 sentences) :


A Kubernetes cluster consists of a set of worker machines, called nodes, that run containerized applications. Every
cluster has at least one worker node.
The worker node(s) host the Pods that are the components of the application workload. The control plane manages
the worker nodes and the Pods in the cluster.

Prerequisite knowledge for Complete understanding and learning of Topic: (Max. Four
important topics)
The control plane comprises five components: kube-apiserver, etcd, kube-scheduler, kube-controller-manager, and
cloud-controller-manager.

When you’re just getting started with Kubernetes, the first step is to understand the foundations of the functionality
and how clusters are set up. At a base level, a Kubernetes cluster is made up of a control plane (master), distributed
storage system for keeping the cluster state (etcd) consistent and a group of cluster nodes (workers).

The master is there to expose the API and manage the general running of the cluster. The nodes are at the centre of
a Kubernetes cluster and are often referred to as the ‘powerhouses’ or ‘workhorses’. Below is a deeper dive into not
only nodes, but each of the various components that you’ll need to have a complete and working Kubernetes
cluster.

The Control plane (master)


The Control plane is made up of the kube-api server, kube scheduler, cloud-controller-manager and kube-
controller-manager. Kube proxies and kubelets live on each node, talking to the API and managing the workload of
each node.

As the control plane handles most of Kubernetes’ ‘decision making’, nodes which have these components running
generally don’t have any user containers running – these are normally named as master nodes.

Cloud-controller-manager
The cloud-controller-manager runs in the control plane as a replicated set of processes (typically, these would be
containers in Pods). The cloud-controller-manager is what allows you to connect your clusters with the cloud
provider’s API and only runs controllers specific to the cloud provider you’re using.
Just note: If you’re running Kubernetes on-prem, your cluster won’t have a cloud-controller-manager.
Etcd
etcd is a distributed key-value store and the primary datastore of Kubernetes. It stores and replicates the
Kubernetes cluster state. To run etcd, you first need to have a Kubernetes cluster and the command-line tool
configured to communicate with that cluster. If you don’t already have a Kubernetes cluster, follow our tutorial to
play around with Kubernetes and create one.

Kubelet
The kubelet functions as an agent within nodes and is responsible for running pod lifecycles within each node.
It watches for new or changed pod specifications from the master nodes and ensures that pods within the node
it resides in are healthy, and that the state of the pods matches the pod specification.

Kube-proxy
Kube-proxy is a network proxy that runs on each node. It maintains network rules, which allow for network
communication to Pods from network sessions inside or outside of a cluster. Kube-proxy is used to
reach Kubernetes services and provides load balancing across services.

Desired state in Kubernetes (vs actual state)


Desired state is a core concept of Kubernetes. It means that, through a declarative or an imperative API, you
describe the state of the objects that will run your containers.
Kubernetes is able to handle constant change and will continuously try hard to match your desired state with your
actual state, which is how your containers are actually running. You might never fully reach your desired
state, but that does not matter, because the controllers for your cluster should be constantly running and working
to fix and recover from errors where possible.
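The reconciliation idea can be sketched as a toy control loop in plain Python. This is an illustration of the pattern only, not Kubernetes code; the "pods" here are just strings:

```python
def reconcile(desired: int, actual: list) -> list:
    """One pass of a toy replica controller: move the actual set of
    running 'pods' one step toward the desired replica count."""
    actual = list(actual)
    if len(actual) < desired:
        actual.append(f"pod-{len(actual)}")  # start a missing replica
    elif len(actual) > desired:
        actual.pop()                         # stop a surplus replica
    return actual


def control_loop(desired: int, actual: list, max_steps: int = 100) -> list:
    """Repeat reconcile passes until the actual state matches the desired state."""
    for _ in range(max_steps):
        if len(actual) == desired:
            break
        actual = reconcile(desired, actual)
    return actual


print(control_loop(desired=3, actual=[]))  # -> ['pod-0', 'pod-1', 'pod-2']
```

Real Kubernetes controllers work the same way in spirit: they observe actual state via the API Server's watch mechanism and repeatedly nudge it toward the declared desired state.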

Video Content/Details of website for further learning(if any)


https://www.youtube.com/watch?v=y3WTwzx5ABk

Important Books/Journals for further learning including the page nos.:


Karl Matthias, Sean P. Kane, Docker: Up and Running, O'Reilly Media, 2015
Deepak Vohra, Kubernetes Management Design Patterns, Apress, 2017

Course Teacher

Verified by HoD
