
Kube

The document discusses container orchestration and challenges in managing containers at scale without orchestration. It introduces container orchestration engines like Kubernetes and Docker Swarm that help organize containers across machines and ensure services are healthy and distributed. It then provides more details on Kubernetes, describing its architecture, features for automated scheduling, self-healing, rollouts/rollbacks, and horizontal scaling. It outlines the main components of Kubernetes including the master node, API server, scheduler, etc. and worker nodes with pods, Kubelet, Kube-proxy, etc. Finally, it discusses different Kubernetes service types like ClusterIP, NodePort, LoadBalancer, and ExternalName.


Challenges Without Container Orchestration

As you can see in the above diagram, when multiple services run inside containers, you eventually need to scale those containers. At large scale this is genuinely hard to do manually: running many services side by side increases both the cost of maintaining them and the complexity of operating them.

To avoid setting up services manually and to overcome these challenges, something bigger was needed. This is where a container orchestration engine comes into the picture.

Such an engine lets us organize multiple containers so that all the underlying machines are launched, and the containers stay healthy and distributed across a clustered environment. In today's world, there are mainly two such engines: Kubernetes and Docker Swarm.

Kubernetes Tutorial: Kubernetes vs Docker Swarm


Kubernetes and Docker Swarm are leading container orchestration tools in
today’s market

What is Kubernetes?
Kubernetes is an open-source system that handles the work of scheduling containers onto a compute cluster and manages the workloads to ensure they run as the user intends. Originally a Google project, it has an excellent community and works well with all the major cloud providers, making it a strong multi-container management solution.

Kubernetes Features

 Automated Scheduling: Kubernetes provides an advanced scheduler that launches containers on cluster nodes based on their resource requirements and other constraints, without sacrificing availability.
 Self-Healing Capabilities: Kubernetes replaces and reschedules containers when nodes die. It also kills containers that don't respond to user-defined health checks, and doesn't advertise them to clients until they are ready to serve.
 Automated Rollouts & Rollbacks: Kubernetes rolls out changes to the application or its configuration while monitoring application health, ensuring it doesn't kill all your instances at the same time. If something goes wrong, Kubernetes lets you roll back the change.
 Horizontal Scaling & Load Balancing: Kubernetes can scale an application up and down as required with a simple command, through a UI, or automatically based on CPU usage.

Kubernetes Tutorial: Kubernetes Architecture


Kubernetes Architecture has the following main components:

 Master nodes
 Worker/Slave nodes

Master Node
The master node is responsible for managing the Kubernetes cluster and is the main entry point for all administrative tasks. There can be more than one master node in a cluster to provide fault tolerance.
As you can see in the above diagram, the master node has various components like
API Server, Controller Manager, Scheduler and ETCD.

 API Server: The API server is the entry point for all the REST commands used to control the cluster.
 Controller Manager: A daemon that regulates the Kubernetes cluster and manages the different non-terminating control loops.
 Scheduler: The scheduler assigns tasks to the slave nodes and stores the resource-usage information for each slave node.
 etcd: etcd is a simple, distributed, consistent key-value store. It's mainly used for shared configuration and service discovery.

Worker/Slave nodes
Worker nodes contain all the necessary services to manage the networking
between the containers, communicate with the master node, and assign resources
to the scheduled containers.
As you can see in the above diagram, the worker node has various components like
Docker Container, Kubelet, Kube-proxy, and Pods.

 Docker Container: Docker runs on each worker node and runs the configured Pods.
 Kubelet: The kubelet gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
 Kube-proxy: Kube-proxy acts as a network proxy and a load balancer for a service on a single worker node.
 Pods: A Pod is one or more containers that logically run together on a node.

Kubernetes — Service Types Overview


Introduction to Service types in K8s — Types of Kubernetes Services.
Kubernetes — Service Types
TL;DR
There are four types of Kubernetes services — ClusterIP, NodePort,
LoadBalancer and ExternalName. The type property in the Service's spec
determines how the service is exposed to the network.

Read about Kubernetes Services and Ingress

1. ClusterIP
 ClusterIP is the default and most common service type.
 Kubernetes will assign a cluster-internal IP address to a ClusterIP service. This makes the service reachable only within the cluster.
 You cannot make requests to the service (Pods) from outside the cluster.
 You can optionally set the cluster IP in the service definition file.
Use Cases
 Inter-service communication within the cluster. For example, communication between the front-end and back-end components of your app.
Example
Kubernetes ClusterIP Service
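The caption above refers to a figure that did not survive extraction. As a minimal sketch (the service and label names here are illustrative assumptions, not from the original), a ClusterIP Service definition looks like this:

```yaml
# Hypothetical backend service, reachable only inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: backend-svc        # assumed name for illustration
spec:
  type: ClusterIP          # the default type; this line can be omitted
  selector:
    app: backend           # routes to Pods labeled app=backend
  ports:
    - port: 80             # port exposed on the cluster-internal IP
      targetPort: 8080     # container port traffic is forwarded to
```

Other Pods in the cluster could then reach these Pods at backend-svc:80 via cluster DNS.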

2. NodePort
 A NodePort service is an extension of the ClusterIP service. A ClusterIP Service, to which the NodePort Service routes, is automatically created.
 It exposes the service outside of the cluster by adding a cluster-wide port on top of the ClusterIP.
 NodePort exposes the service on each Node's IP at a static port (the NodePort). Each node proxies that port into your Service, so external traffic has access to a fixed port on each Node. Any request to your cluster on that port gets forwarded to the service.
 You can contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
 The node port must be in the range 30000–32767. Manually allocating a port to the service is optional; if it is undefined, Kubernetes will assign one automatically.
 If you choose a node port explicitly, ensure that the port is not already used by another service.
Use Cases
 When you want to enable external connectivity to your service.
 Using a NodePort gives you the freedom to set up your own load balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IPs directly.
 Prefer placing a load balancer above your nodes to guard against node failures.
Example
Kubernetes NodePort Service
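The caption above refers to a figure; a hedged sketch of the equivalent manifest (names and ports are illustrative assumptions) might look like this:

```yaml
# Hypothetical frontend service exposed on every node's IP
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc       # assumed name for illustration
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
    - port: 80             # cluster-internal service port
      targetPort: 8080     # container port
      nodePort: 30080      # optional; must fall in 30000-32767
```

External clients could then reach the service at <any-node-ip>:30080.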

3. LoadBalancer
 A LoadBalancer service is an extension of the NodePort service. The NodePort and ClusterIP Services, to which the external load balancer routes, are created automatically.
 It integrates NodePort with cloud-based load balancers.
 It exposes the Service externally using a cloud provider's load balancer.
 Each cloud provider (AWS, Azure, GCP, etc.) has its own native load balancer implementation. The cloud provider creates a load balancer, which then automatically routes requests to your Kubernetes Service.
 Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.
 The actual creation of the load balancer happens asynchronously.
 Every time you want to expose a service to the outside world this way, you have to create a new LoadBalancer and get a new IP address.
Use Cases
 When you are using a cloud provider to host your Kubernetes cluster.

This type of service is typically heavily dependent on the cloud provider.


Example
Kubernetes LoadBalancer Service
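The caption above refers to a figure; as a sketch (names and ports are illustrative assumptions), the manifest differs from the NodePort case only in the type field:

```yaml
# Hypothetical service backed by a cloud provider's load balancer
apiVersion: v1
kind: Service
metadata:
  name: public-svc         # assumed name for illustration
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - port: 80             # port the external load balancer listens on
      targetPort: 8080     # container port behind it
```

Because provisioning is asynchronous, `kubectl get svc public-svc` shows `<pending>` in the EXTERNAL-IP column until the cloud provider finishes creating the balancer.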

4. ExternalName
 Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service.
 You specify these Services with the `spec.externalName` parameter.
 It maps the Service to the contents of the externalName field (e.g. foo.bar.example.com) by returning a CNAME record with its value.
 No proxying of any kind is established.
Use Cases
 This is commonly used to create a service within Kubernetes to represent an external datastore, such as a database that runs outside of Kubernetes.
 You can use an ExternalName service (as a local service) when Pods from one namespace need to talk to a service in another namespace.
Example
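No manifest follows the Example heading above; as a sketch reusing the foo.bar.example.com name from the bullets (the service name is an illustrative assumption):

```yaml
# Maps the in-cluster name "external-db" to an external DNS name
apiVersion: v1
kind: Service
metadata:
  name: external-db        # assumed name for illustration
spec:
  type: ExternalName
  externalName: foo.bar.example.com   # CNAME target returned to clients
```

Pods in the same namespace can then connect to external-db as if it were a local service, and cluster DNS answers with the CNAME.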

Ingress
You can also use Ingress to expose your Service. Ingress is not a Service
type, but it acts as the entry point for your cluster. It lets you consolidate
your routing rules into a single resource as it can expose multiple services
under the same IP address.
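To illustrate the consolidation of routing rules described above, here is a hedged sketch of an Ingress that exposes two services under one host (the host, path, and service names are illustrative assumptions; a running ingress controller is also required):

```yaml
# Routes two URL paths of one host to two different Services
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: backend-svc    # assumed service name
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc   # assumed service name
                port:
                  number: 80
```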

Pods can be made up of multiple containers, so when a volume is attached to a Pod, all the containers running inside that Pod can access the volume for reading and writing.
Volumes can be backed by storage on virtual machines or physical machines. We use PVs and PVCs to provide storage in Kubernetes.
Persistent Volume (PV)
Kubernetes makes physical storage devices, such as SSDs or NFS servers, available to your cluster in the form of objects called Persistent Volumes.
A Persistent Volume is a piece of pre-provisioned storage inside the cluster, provisioned by an administrator.
Data inside these volumes can exist beyond the life cycle of a Pod.
A Persistent Volume is an abstraction for the physical storage device that you have attached to the cluster, and Pods can use this storage space through Persistent Volume Claims.
Persistent Volume Claim (PVC)
A Persistent Volume Claim is a storage request by a user, typically a developer.
The developer requests some capacity along with an access mode, such as read/write or read-only.
Claims can request a specific size and access modes; for example, they can be mounted once read/write or many times read-only.
If none of the static persistent volumes match the user's PVC request, the cluster may attempt to dynamically provision a PV that matches the request, based on its storage class.
Creating PV and PVC :-
Example: Claiming 3GB storage from the 10GB capacity.

PV Manifest file (pv-1.yml)


apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume-2
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

PVC Manifest file (pvc-claim.yml)


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim-2
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Commands to run:
1. Create the persistent volume:

$ kubectl create -f pv-1.yml

2. Get the persistent volume information:

$ kubectl get pv pv-volume-2

3. Create the persistent volume claim:

$ kubectl create -f pvc-claim.yml

4. Check the status of the PV (it should change from Available to Bound):

$ kubectl get pv pv-volume-2

5. Check the status of the claim:

$ kubectl get pvc pvc-claim-2

Clean up (delete the claim before the volume):
$ kubectl delete -f pvc-claim.yml
$ kubectl delete -f pv-1.yml

What Are Stateful Applications?


Stateful applications are applications that store data and keep track of it. All databases, such as MySQL, Oracle, and PostgreSQL, are examples of stateful applications. Stateless applications, on the other hand, do not keep data. Node.js and Nginx are examples of stateless applications; for each request, a stateless application receives new data and processes it.
In a modern web application, the stateless application connects with stateful applications to serve the
user’s request. A Node.js application is a stateless application that receives new data on each request from
the user. This application is then connected with a stateful application, such as a MySQL database, to
process the data. MySQL stores data and keeps updating the data based on the user’s request.
Read on to learn more about StatefulSets in the Kubernetes cluster—what they are, when to use them,
how to create them, and what the best practices are.
#What Are StatefulSets?
A StatefulSet is the Kubernetes controller used to run stateful applications as containers (Pods) in the Kubernetes cluster. StatefulSets assign a sticky identity (an ordinal number starting from zero) to each Pod instead of assigning a random ID to each replica Pod. A new Pod is created by cloning the previous Pod's data; if the previous Pod is still in the pending state, the new Pod will not be created. When you scale down, Pods are deleted in reverse ordinal order, not in random order. For example, if you had four replicas and you scaled down to three, the Pod numbered 3 would be deleted.
The diagram below shows how the Pods are numbered from zero and how a persistent volume is attached to each Pod in the StatefulSet.
#When to Use StatefulSets
There are several reasons to consider using StatefulSets. Here are two examples:
1. Assume you deployed a MySQL database in the Kubernetes cluster and scaled this to three replicas, and a frontend application wants to access the MySQL cluster to read and write data. The read request will be forwarded to three Pods. However, the write request will only be forwarded to the first (primary) Pod, and the data will be synced with the other Pods. You can achieve this by using StatefulSets.
2. Deleting or scaling down a StatefulSet will not delete the volumes associated with the stateful application. This gives you data safety. If you delete the MySQL Pod or if the MySQL Pod restarts, you still have access to the data in the same volume.
#Deployment vs. StatefulSets
You can also create Pods (containers) using the Deployment object in the Kubernetes cluster. This allows
you to easily replicate Pods and attach a storage volume to the Pods. The same thing can be done by using
StatefulSets. What then is the advantage of using StatefulSets?
Well, the Pods created using the Deployment object are assigned random IDs. For example, you are
creating a Pod named “my-app”, and you are scaling it to three replicas. The names of the Pods are
created like this:
my-app-123ab
my-app-098bd
my-app-890yt

After the name “my-app”, random IDs are added. If the Pod restarts or you scale it down, then again, the
Kubernetes Deployment object will assign different random IDs for each Pod. After restarting, the names
of all Pods appear like this:
my-app-jk879
my-app-kl097
my-app-76hf7

All these Pods are associated with one load balancer service. So in a stateless application, changes in the
Pod name are easily identified, and the service object easily handles the random IDs of Pods and
distributes the load. This type of deployment is very suitable for stateless applications.

However, stateful applications cannot be deployed like this. A stateful application needs a sticky identity for each Pod, because replica Pods are not identical.
Take a look at a MySQL database deployment. Assume you are creating Pods for the MySQL database using the Kubernetes Deployment object and scaling the Pods. If you write data to one MySQL Pod, the same data is not replicated to another MySQL Pod when a Pod is restarted. This is the first problem with using the Kubernetes Deployment object for a stateful application.
Stateful applications always need a sticky identity. While the Kubernetes Deployment object offers
random IDs for each Pod, the Kubernetes StatefulSets controller offers an ordinal number for each Pod
starting from zero, such as mysql-0, mysql-1, mysql-2, and so forth.

For stateful applications with a StatefulSet controller, it is possible to set the first Pod as primary and
other Pods as replicas—the first Pod will handle both read and write requests from the user, and other
Pods always sync with the first Pod for data replication. If the Pod dies, a new Pod is created with the
same name.
The diagram below shows a MySQL primary and replica architecture with persistent volume and data
replication architecture.
Now, add another Pod to that. The fourth Pod will only be created if the third Pod is up and running, and
it will clone the data from the previous Pod.

In summary, StatefulSets provide the following advantages when compared to Deployment objects:
1. Ordered numbers for each Pod
2. The first Pod can be a primary, which makes it a good choice when creating a replicated database setup that handles both reads and writes
3. Other Pods act as replicas
4. New Pods will only be created if the previous Pod is in the running state, and they will clone the previous Pod's data
5. Deletion of Pods occurs in reverse order

#How to Create a StatefulSet in Kubernetes


In this section, you will learn how to create a Pod for a MySQL database using the StatefulSet controller.
#Create a Secret
To start, you will need to create a Secret for the MySQL application to store sensitive information, such as usernames and passwords. Here, I am creating a simple Secret; in a production environment, however, using HashiCorp Vault is recommended. Use the following code to create a Secret for MySQL:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: password

Save the code using the file name mysql-secret.yaml and execute the code using the following
command on your Kubernetes cluster:
kubectl apply -f mysql-secret.yaml

Get the list of Secrets:


kubectl get secrets

#Create a MySQL StatefulSet Application


Before creating a StatefulSet application, check your volumes by getting the persistent volume list:
kubectl get pv
NAME        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS
pvc-e0567   10Gi       RWO            Retain           Bound

Next, get the persistent volume claim list:


kubectl get pvc
NAME                      STATUS   VOLUME                 CAPACITY   ACCESS
mysql-store-mysql-set-0   Bound    pvc-e0567d43ffc6405b   10Gi       RWO

Last, get the storage class list:


kubectl get storageclass
NAME                                    PROVISIONER               RECLAIMPOLICY
linode-block-storage                    linodebs.csi.linode.com   Delete
linode-block-storage-retain (default)   linodebs.csi.linode.com   Retain

Then use the following code to create a MySQL StatefulSet application in the Kubernetes cluster:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-set
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mysql
          image: mysql:5.7
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-store
              mountPath: /var/lib/mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-password
                  key: MYSQL_ROOT_PASSWORD
  volumeClaimTemplates:
    - metadata:
        name: mysql-store
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "linode-block-storage-retain"
        resources:
          requests:
            storage: 5Gi

Here are a few things to note:

1. The kind is StatefulSet; this tells Kubernetes to create the MySQL application with stateful behavior.
2. The password is taken from the Secret object using a secretKeyRef.
3. Linode block storage is used in the volumeClaimTemplates. If you do not mention any storage class name here, the default storage class of your cluster will be used.
4. The replica count here is 3 (set with the replicas parameter), so three Pods will be created, named mysql-set-0, mysql-set-1, and mysql-set-2.

Next, save the code using the file name mysql.yaml and execute using the following command:
kubectl apply -f mysql.yaml

Now that the MySQL Pods are created, get the Pods list:
kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mysql-set-0   1/1     Running   0          142s
mysql-set-1   1/1     Running   0          132s
mysql-set-2   1/1     Running   0          120s

#Create a Service for the StatefulSet Application


Now, create the service for the MySQL Pod. Do not use the load balancer service for a stateful
application, but instead, create a headless service for the MySQL application using the following code:
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  ports:
    - port: 3306
  clusterIP: None
  selector:
    app: mysql

Save the code using the file name mysql-service.yaml and execute using the following command:
kubectl apply -f mysql-service.yaml

Get the list of running services:


kubectl get svc

#Create a Client for MySQL


If you want to access MySQL, then you will need a MySQL client tool. Deploy a MySQL client using the
following manifest code:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client
spec:
  containers:
    - name: mysql-container
      image: alpine
      command: ['sh', '-c', 'sleep 1800m']
      imagePullPolicy: IfNotPresent

Save the code using the file name mysql-client.yaml and execute using the following command:
kubectl apply -f mysql-client.yaml

Then open a shell inside the client Pod:


kubectl exec --stdin --tty mysql-client -- sh

Finally, install the MySQL client tool:


apk add mysql-client

#Access the MySQL Application Using the MySQL Client


Next, access the MySQL application using the MySQL client and create databases on the Pods.
If you are not already in the MySQL client Pod, enter it now:
kubectl exec -it mysql-client -- /bin/sh

To access MySQL, you can use the same standard MySQL command to connect with the MySQL server:
mysql -u root -p -h host-server-name

For access, you will need a MySQL server name. The syntax of the MySQL server in the Kubernetes
cluster is given below:
stateful_name-ordinal_number.mysql.default.svc.cluster.local

# Example
mysql-set-0.mysql.default.svc.cluster.local

Connect with the MySQL primary Pod using the following command. When asked for a password, enter
the one you made in the “Create a Secret” section above.
mysql -u root -p -h mysql-set-0.mysql.default.svc.cluster.local

Next, create a database on the MySQL primary, then exit:


create database erp;
exit;

Now connect to the other Pods and create the database in the same way:
mysql -u root -p -h mysql-set-1.mysql.default.svc.cluster.local
mysql -u root -p -h mysql-set-2.mysql.default.svc.cluster.local
