
Kubernetes Architecture

Control plane
Let’s begin in the nerve center of our Kubernetes cluster: The control plane. Here we find
the Kubernetes components that control the cluster, along with data about the cluster’s
state and configuration. These core Kubernetes components handle the important work
of making sure your containers are running in sufficient numbers and with the necessary
resources.

The control plane is in constant contact with your compute machines. You’ve configured
your cluster to run a certain way. The control plane makes sure it does.

kube-apiserver
Need to interact with your Kubernetes cluster? Talk to the API. The Kubernetes API is the
front end of the Kubernetes control plane, handling internal and external requests. The
API server determines if a request is valid and, if it is, processes it. You can access the
API through REST calls, through the kubectl command-line interface, or through other
command-line tools such as kubeadm.

kube-scheduler
Is your cluster healthy? If new containers are needed, where will they fit? These are the
concerns of the Kubernetes scheduler.

The scheduler considers the resource needs of a pod, such as CPU or memory, along
with the health of the cluster. Then it schedules the pod to an appropriate compute
node.
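
For instance, here is a hedged sketch of the part of a pod spec the scheduler considers;
the container name, image, and values are illustrative:

spec:
  containers:
  - name: app
    image: nginx:1.25    # example image
    resources:
      requests:          # the scheduler only places the pod on a node with this much free capacity
        cpu: 250m
        memory: 256Mi
      limits:            # upper bounds enforced at runtime, not used for scheduling
        cpu: 500m
        memory: 512Mi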

kube-controller-manager
Controllers take care of actually running the cluster, and the Kubernetes controller-
manager contains several controller functions in one. One controller consults the
scheduler and makes sure the correct number of pods is running. If a pod goes down,
another controller notices and responds. A controller connects services to pods, so
requests go to the right endpoints. And there are controllers for creating accounts and
API access tokens.
etcd
Configuration data and information about the state of the cluster lives in etcd, a key-
value store database. Fault-tolerant and distributed, etcd is designed to be the ultimate
source of truth about your cluster.

What happens in a Kubernetes node?


Nodes
A Kubernetes cluster needs at least one compute node, but will normally have many.
Pods are scheduled and orchestrated to run on nodes. Need to scale up the capacity of
your cluster? Add more nodes.

Pods
A pod is the smallest and simplest unit in the Kubernetes object model. It represents a
single instance of an application. Each pod is made up of a container or a series of
tightly coupled containers, along with options that govern how the containers are run.
Pods can be connected to persistent storage in order to run stateful applications.
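
As a minimal sketch, assuming an illustrative name and a stock nginx image, a
single-container pod manifest looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: nginx:1.25    # example image
    ports:
    - containerPort: 80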

Container runtime engine


To run the containers, each compute node has a container runtime engine. Docker is one
example, but Kubernetes supports other Open Container Initiative-compliant runtimes as
well, such as containerd and CRI-O.

kubelet
Each compute node contains a kubelet, a tiny application that communicates with the
control plane. The kubelet makes sure containers are running in a pod. When the control
plane needs something to happen in a node, the kubelet executes the action.

kube-proxy

Each compute node also contains kube-proxy, a network proxy for facilitating
Kubernetes networking services. The kube-proxy handles network communications
inside or outside of your cluster—relying either on your operating system’s packet
filtering layer, or forwarding the traffic itself.

Controllers

A Kubernetes controller is a control loop: it watches the state of your cluster and
makes changes as needed, always working to maintain your desired state.

Controllers can track many objects including:

 What workloads are running and where
 Resources available to those workloads
 Policies around how the workloads behave (restarts, upgrades, fault tolerance)

When the controller notices a divergence between the actual state and the desired
state, it will send messages to the Kubernetes API server to make any necessary
changes.

Types of Controllers

In our essential Kubernetes concepts, we highlight a number of Pod controllers,
including those below. We'll then dig into the four most used controllers.

 ReplicaSet - A ReplicaSet creates a stable set of pods, all running the same
workload. You will almost never create this directly.
 Deployment - A Deployment is the most common way to get your app on
Kubernetes. It maintains a ReplicaSet with the desired configuration, with some
additional configuration for managing updates and rollbacks.
 StatefulSet - A StatefulSet is used to manage stateful applications with persistent
storage. Pod names are persistent and are retained when rescheduled (app-0,
app-1). Storage stays associated with replacement pods, and volumes persist when
pods are deleted.
 Job - A Job creates one or more short-lived Pods and expects them to successfully
terminate.
 CronJob - A CronJob creates Jobs on a schedule.
 DaemonSet - A DaemonSet ensures that all (or some) Nodes run a copy of a Pod.
As nodes are added to the cluster, Pods are added to them. As nodes are removed
from the cluster, those Pods are garbage collected. Common for system processes
such as CNI plugins, monitoring agents, and proxies (a sketch follows this list).
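
DaemonSet is not one of the four controllers covered below, so here is a minimal sketch
of one; the name and agent image are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent           # illustrative name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:                  # one copy of this pod runs on every eligible node
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: fluentd:v1.16 # example log-collection agent image; tag is illustrative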

ReplicaSet

A ReplicaSet is a set of multiple, identical pods with no unique identities. ReplicaSets
were designed to address two requirements:

 Containers are ephemeral. When they fail, we need their Pod to restart.
 We need to run a defined number of Pods. If one is terminated or fails, we need
new Pods to be activated.

A ReplicaSet ensures that a specified number of Pod replicas are running at any given
time. Even though you’ll probably never create a ReplicaSet directly, it’s an important
part of keeping your Kubernetes infrastructure up and running.
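
Even though you would normally let a Deployment manage it, a minimal ReplicaSet sketch
helps show the shape; the name, labels, and image are illustrative:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs             # illustrative name
spec:
  replicas: 3              # the controller keeps exactly three pods running
  selector:
    matchLabels:
      app: web
  template:                # pod template used to create replacements
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # example image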

Deployment

Like a ReplicaSet, a Deployment is a set of multiple, identical pods with no unique
identities. However, Deployments can be upgraded and patched more easily than
ReplicaSets. Users can configure the strategy for rolling out new versions of a
Deployment, making it possible to roll out changes with minimal downtime.

For example, we might choose a RollingUpdate strategy. Then, when we go to update
our Deployment, a second ReplicaSet will be created alongside the existing one, and as
more Pods become ready in the new ReplicaSet, some will be removed from the old
ReplicaSet, until the old ReplicaSet is empty and can be deleted. We can even set
parameters like maxUnavailable and maxSurge to control the mechanics of this
switchover.
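
A hedged sketch of such a Deployment, with illustrative names and values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy         # illustrative name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one pod below the desired count during the rollout
      maxSurge: 1          # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # changing this field triggers the rolling update described above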

StatefulSet

While Deployments manage stateless applications, StatefulSets are valuable for
applications that require one or more of the following:

 Stable, unique network identifiers
 Stable, persistent storage
 Ordered, graceful deployment and scaling
 Ordered, automated rolling updates

A StatefulSet keeps a unique identity for each Pod it manages. It uses the same identity
whenever it needs to reschedule those Pods.
StatefulSets are recommended when running Cassandra, MongoDB, MySQL, PostgreSQL
or any other workload utilizing persistent storage. They can help maintain state during
scaling and update events, and are particularly useful for maintaining high availability.
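
A minimal sketch of a StatefulSet; the names, image, and storage size are illustrative,
and it assumes a headless Service named db already exists. Each replica gets a stable
name (db-0, db-1) and its own PersistentVolumeClaim:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                  # pods will be named db-0, db-1, ...
spec:
  serviceName: db           # assumed headless Service giving each pod a stable DNS name
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16  # example image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:     # one PVC per pod; stays bound when the pod is rescheduled
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi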

Job

Jobs are short-lived workloads that can be used to carry out a single task. A Job will
simply create one or more Pods, and wait for those Pods to finish successfully. Jobs can
be helpful for doing things like:

 Running database migrations
 Performing maintenance tasks
 Rotating logs

Jobs can also run multiple Pods in parallel, giving you some extra throughput.
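
A hedged sketch of a Job; the name and command are illustrative stand-ins for a real
task:

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task      # illustrative name
spec:
  backoffLimit: 3         # retry a failing pod up to three times
  template:
    spec:
      restartPolicy: Never   # Jobs require Never or OnFailure
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo running task && sleep 5"]  # stand-in for real work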

CronJob

CronJobs simply run Jobs on a user-defined schedule. You can provide the schedule
using cron syntax, and the CronJob controller will automatically create a Job every time
the schedule requires one.

The CronJob controller also allows you to specify how many Jobs can run concurrently,
and how many succeeded or failed Jobs to keep around for logging and debugging.
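
A hedged sketch of a CronJob that runs nightly; the schedule, names, and command are
illustrative:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-cleanup           # illustrative name
spec:
  schedule: "0 2 * * *"           # cron syntax: every day at 02:00
  concurrencyPolicy: Forbid       # skip a run if the previous Job is still going
  successfulJobsHistoryLimit: 3   # keep three succeeded Jobs for debugging
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: busybox:1.36
            command: ["sh", "-c", "echo cleaning up"]  # stand-in for a real task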

Pod phases

1. Pending: The pod has been accepted by the cluster, but one or more of its containers
is still being set up and is not yet ready to run.
2. Succeeded: The pod has performed its task; all the containers within it have
terminated successfully and they won't restart.
3. Failed: All containers within the pod have terminated, and one or more containers
exited with a failure.
4. Running: The pod has been scheduled on a node and at least one container within the
pod is running.
5. Unknown: The kubelet is unable to get the state of the pod for some reason, usually
because the connection to the node hosting the pod has failed.

Pod Conditions

As well as pod phases, there are pod conditions. These also give information about the
state the pod is in.

 PodScheduled: A node has been successfully selected for the pod, and scheduling
is complete.
 ContainersReady: All the containers in the pod are ready.
 Initialized: All init containers have completed successfully.
 Ready: The pod is able to serve requests and should be included in matching
Services and load balancers.

Common Probe Parameters


Each type of probe has common configurable fields:

 initialDelaySeconds: Seconds after the container starts before probes begin.
(default: 0)
 periodSeconds: How often the probe runs, in seconds. (default: 10)
 timeoutSeconds: Timeout for the expected response. (default: 1)
 successThreshold: Number of consecutive successes required to transition from
failure to a healthy state. (default: 1)
 failureThreshold: Number of consecutive failures required to transition from
healthy to a failure state. (default: 3)
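
Here is a sketch of these fields on a readiness probe; the endpoint path and values are
illustrative:

spec:
  containers:
  - name: app
    image: nginx:1.25          # example image
    readinessProbe:
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 80
      initialDelaySeconds: 5   # wait 5s after the container starts
      periodSeconds: 10        # probe every 10s
      timeoutSeconds: 1
      successThreshold: 1
      failureThreshold: 3      # mark unready after three consecutive failures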

Probe Handlers
There are three available handlers that can cover almost any scenario.
Exec Action
ExecAction executes a command inside the container. This is a gateway feature that can
handle almost anything, since we can run any executable: a script that runs several
curl requests to determine status, or a binary that connects to an external dependency.
Make sure the executable does not create zombie processes.

TCP Socket Action
TCPSocketAction connects to a defined port to check whether it is open; it is mostly
used for endpoints that do not speak HTTP.

HTTP Get Action
HTTPGetAction sends an HTTP GET request as a probe to the defined path; the HTTP
response code determines whether the probe is successful.
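
As a sketch, here is one container using each of the three handlers, one per probe
type; the image, command, ports, and path are all illustrative:

spec:
  containers:
  - name: app
    image: myapp:1.0                      # hypothetical image
    startupProbe:
      exec:                               # ExecAction: run a command inside the container
        command: ["cat", "/tmp/started"]  # illustrative success marker
    readinessProbe:
      tcpSocket:                          # TCPSocketAction: check that a non-HTTP port is open
        port: 5432                        # illustrative port
    livenessProbe:
      httpGet:                            # HTTPGetAction: probe an HTTP endpoint
        path: /healthz                    # hypothetical endpoint
        port: 8080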

Startup Probes
If your process requires time to get ready, reading a file, parsing a large
configuration, preparing some data, and so on, you should use startup probes. If the
probe keeps failing and the failure threshold is exceeded, the container is restarted
so the operation can start over. You need to adjust initialDelaySeconds and
periodSeconds accordingly to make sure the process has sufficient time to complete;
otherwise, you can find your pod in a loop of restarts.
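
For example, a hedged startup probe sketch; the endpoint is hypothetical, and the
values give the process up to 30 x 10 = 300 seconds to start:

startupProbe:
  httpGet:
    path: /healthz       # hypothetical endpoint
    port: 8080
  failureThreshold: 30   # with periodSeconds, allows up to 300s before a restart
  periodSeconds: 10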

Readiness Probes
If you want to control the traffic sent to the pod, use readiness probes. A readiness
probe modifies the pod's Ready condition, which determines whether the pod should be
included in Services and load balancers. When the probe succeeds enough times
(successThreshold), the pod can receive traffic and is included in the Services and
load balancers. Readiness probes also apply if your process can take itself out of
service, for maintenance or to read a large amount of data to be used by the service:
the pod can signal to the kubelet via its readiness probe that it wants out of the
service for a while.

Liveness Probes
If your container cannot crash by itself when an unexpected error occurs, use liveness
probes. Liveness probes can work around some of the bugs the process might have: the
kubelet restarts the container once the liveness probe fails. If your process can
handle these errors by exiting, you don't need liveness probes; however, they are
advantageous for accommodating unknown bugs until they are fixed.

K8S Services
A service is a logical set of pods. It can be defined as an abstraction on top of a set
of pods that provides a single IP address and DNS name by which the pods can be
accessed. A Service makes it easy to manage load-balancing configuration and helps pods
scale. A Service is a REST object in Kubernetes whose definition can be posted to the
Kubernetes API server to create a new instance.

Types of Services

ClusterIP − This helps in restricting the service within the cluster. It exposes the
service only within the defined Kubernetes cluster. A sketch (ports are illustrative):
spec:
  type: ClusterIP
  ports:
  - port: 8080
    targetPort: 8080
    name: clusteripservice
NodePort − It exposes the service on a static port on every node in the cluster.
A ClusterIP service, to which the NodePort service routes, is automatically created.
The service can be accessed from outside the cluster using NodeIP:nodePort.
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: nodeportservice
  clusterIP: 10.20.30.40
LoadBalancer − It uses the cloud provider's load balancer. NodePort and ClusterIP
services, to which the external load balancer routes, are created automatically.
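A hedged sketch of a LoadBalancer spec; the ports are illustrative and the external
address is assigned by the cloud provider:
spec:
  type: LoadBalancer
  ports:
  - port: 80           # port exposed by the cloud load balancer
    targetPort: 8080   # port the backing pods listen on
    name: lbservice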
A full Service YAML file with service type NodePort is shown below. Try to create one
yourself.
apiVersion: v1
kind: Service
metadata:
  name: appname
  labels:
    k8s-app: appname
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 31999
    name: omninginx
  selector:
    k8s-app: appname
    component: nginx
    env: env_name
