Kubernetes
As you can see in the above diagram, when multiple services run inside containers, you may want to scale those containers. At large scale, this is genuinely hard to do: running many services side by side increases both the cost of maintaining them and the complexity of operating them.
To avoid setting up services manually and to overcome these challenges, something bigger was needed. This is where the container orchestration engine comes into the picture.
Such an engine lets us organize multiple containers so that all the underlying machines are launched, the containers stay healthy, and the workload is distributed across a clustered environment. Today there are two main engines of this kind: Kubernetes and Docker Swarm.
What is Kubernetes?
Kubernetes is an open-source system that handles scheduling containers onto a compute cluster and manages workloads to ensure they run as the user intends. Originally a Google project, it has an excellent community and works with all the major cloud providers, making it a strong multi-container management solution.
Kubernetes Features
Horizontal Scaling & Load Balancing: Kubernetes can scale an application up or down as requirements change, using a simple command, through a UI, or automatically based on CPU usage.
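For example, assuming a Deployment named my-app (a hypothetical name), scaling can be done with a single command or configured to happen automatically based on CPU usage:
$ kubectl scale deployment my-app --replicas=5
$ kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80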
A Kubernetes cluster consists of two types of nodes:
Master nodes
Worker/Slave nodes
Master Node
The master node is responsible for managing the Kubernetes cluster and is the main entry point for all administrative tasks. A cluster can have more than one master node for fault tolerance.
As you can see in the above diagram, the master node has various components like
API Server, Controller Manager, Scheduler and ETCD.
API Server: The API server is the entry point for all the REST commands used
to control the cluster.
Controller Manager: A daemon that regulates the Kubernetes cluster and manages the different non-terminating control loops.
Scheduler: The scheduler assigns tasks (Pods) to worker nodes. It stores the resource usage information for each node.
ETCD: ETCD is a simple, distributed, consistent key-value store. It’s mainly
used for shared configuration and service discovery.
Worker/Slave nodes
Worker nodes contain all the necessary services to manage the networking
between the containers, communicate with the master node, and assign resources
to the scheduled containers.
As you can see in the above diagram, the worker node has various components like
Docker Container, Kubelet, Kube-proxy, and Pods.
Docker Container: Docker runs on each worker node and runs the configured Pods.
Kubelet: The kubelet gets the configuration of a Pod from the API server and ensures that the described containers are up and running.
Kube-proxy: Kube-proxy acts as a network proxy and load balancer for a service on a single worker node.
Pods: A pod is one or more containers that logically run together on nodes.
1. ClusterIP
ClusterIP is the default and most common service type.
Kubernetes assigns a cluster-internal IP address to a ClusterIP service, which makes the service reachable only from within the cluster.
You cannot make requests to the service (its Pods) from outside the cluster.
You can optionally set the cluster IP in the service definition file.
Use Cases
Inter service communication within the cluster. For example,
communication between the front-end and back-end components of
your app.
Example
Kubernetes ClusterIP Service
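The original example is not reproduced here; a minimal ClusterIP Service manifest could look like the following sketch (the name, selector, and ports are illustrative assumptions):
apiVersion: v1
kind: Service
metadata:
  name: backend-service      # hypothetical service name
spec:
  type: ClusterIP            # the default type; may be omitted
  selector:
    app: backend             # routes to Pods labeled app=backend
  ports:
  - port: 80                 # port exposed inside the cluster
    targetPort: 8080         # port the container listens on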
2. NodePort
NodePort service is an extension of ClusterIP service. A ClusterIP
Service, to which the NodePort Service routes, is automatically
created.
It exposes the service outside of the cluster by adding a cluster-wide
port on top of ClusterIP.
NodePort exposes the service on each Node's IP at a static port (the NodePort). Each node proxies that port to your Service, so external traffic gets access to a fixed port on every Node. Any request to your cluster on that port is forwarded to the service.
You can contact the NodePort Service, from outside the cluster, by
requesting <NodeIP>:<NodePort>.
Node port must be in the range of 30000–32767. Manually allocating a
port to the service is optional. If it is undefined, Kubernetes will
automatically assign one.
If you choose a node port explicitly, make sure the port is not already in use by another service.
Use Cases
When you want to enable external connectivity to your service.
Using a NodePort gives you the freedom to set up your own load
balancing solution, to configure environments that are not fully
supported by Kubernetes, or even to expose one or more nodes’ IPs
directly.
Prefer to place a load balancer above your nodes so that a single node failure does not make the service unreachable.
Example
Kubernetes NodePort Service
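Again the original example is not reproduced; a NodePort Service sketch with an explicitly chosen node port in the allowed range (names and ports are assumptions) could be:
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport         # hypothetical service name
spec:
  type: NodePort
  selector:
    app: web                 # routes to Pods labeled app=web
  ports:
  - port: 80                 # ClusterIP port inside the cluster
    targetPort: 8080         # container port
    nodePort: 30080          # optional; must be in 30000-32767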
3. LoadBalancer
LoadBalancer service is an extension of NodePort service. NodePort
and ClusterIP Services, to which the external load balancer routes, are
automatically created.
It integrates NodePort with cloud-based load balancers.
It exposes the Service externally using a cloud provider’s load
balancer.
Each cloud provider (AWS, Azure, GCP, etc) has its own native load
balancer implementation. The cloud provider will create a load
balancer, which then automatically routes requests to your
Kubernetes Service.
Traffic from the external load balancer is directed at the backend
Pods. The cloud provider decides how it is load balanced.
The actual creation of the load balancer happens asynchronously.
Every time you want to expose a service to the outside world, you
have to create a new LoadBalancer and get an IP address.
Use Cases
When you are using a cloud provider to host your Kubernetes cluster.
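As a sketch (names and ports are assumptions), a LoadBalancer Service differs from the examples above only in its type; the cloud provider provisions the external load balancer for it:
apiVersion: v1
kind: Service
metadata:
  name: web-loadbalancer     # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80                 # port exposed by the cloud load balancer
    targetPort: 8080         # container port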
4. ExternalName
Services of type ExternalName map a Service to a DNS name, not to a
typical selector such as my-service.
You specify these Services with the `spec.externalName` parameter.
It maps the Service to the contents of the externalName field (e.g.
foo.bar.example.com), by returning a CNAME record with its value.
No proxying of any kind is established.
Use Cases
This is commonly used to create a service within Kubernetes to
represent an external datastore like a database that runs externally to
Kubernetes.
You can use an ExternalName service (as a local service) when Pods in one namespace need to talk to a service in another namespace.
Example
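The original example is not reproduced; an ExternalName Service mapping to the DNS name mentioned above would look roughly like this (the service name is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: external-db                     # hypothetical service name
spec:
  type: ExternalName
  externalName: foo.bar.example.com     # DNS name the Service resolves to (CNAME)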
Ingress
You can also use Ingress to expose your Service. Ingress is not a Service
type, but it acts as the entry point for your cluster. It lets you consolidate
your routing rules into a single resource as it can expose multiple services
under the same IP address.
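A minimal Ingress sketch routing one host to a backend Service (the host and service name are assumptions; an Ingress controller must be installed for it to take effect):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
  - host: app.example.com      # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service  # hypothetical backend Service
            port:
              number: 80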
Pods can be made up of multiple containers, so when a volume is attached to a Pod, all the containers running inside that Pod can access the volume for reading and writing.
Volumes can be backed by storage on virtual machines or physical machines. In Kubernetes we use PVs and PVCs to provide this storage.
Persistent Volume (PV)
Kubernetes makes physical storage devices, such as SSDs and NFS servers, available to your cluster in the form of objects called Persistent Volumes.
A Persistent Volume is a piece of pre-provisioned storage inside the cluster, provisioned by an administrator.
Data inside these volumes can exist beyond the lifecycle of the Pod.
A Persistent Volume is an abstraction for the physical storage device that you have attached to the cluster
and pods can use this storage space using Persistent Volume Claims.
Persistent Volume Claim (PVC)
A Persistent Volume Claim is a storage request made by a user, typically a developer.
The developer requests a certain capacity along with an access mode such as read/write or read-only.
Claims can request a specific size and access modes; for example, a volume can be mounted once read/write or many times read-only.
If none of the static Persistent Volumes match the user's PVC request, the cluster may attempt to dynamically create a PV that matches the request, based on a storage class.
Creating a PV and PVC:
Example: claiming 3GB of storage from a volume with 10GB capacity.
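The manifests themselves are not shown in the original, so the following is only a sketch of what pv.yml and pvc-claim.yml could look like (the hostPath backing store, storage class, and object names are assumptions):
# pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo                # hypothetical name
spec:
  capacity:
    storage: 10Gi              # the 10GB volume from the example
  accessModes:
  - ReadWriteOnce
  storageClassName: manual     # assumed class so the claim binds to this PV
  hostPath:
    path: /mnt/data            # hypothetical backing path on the node

# pvc-claim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim              # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: manual     # must match the PV above
  resources:
    requests:
      storage: 3Gi             # claim 3GB of the 10GB capacity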
Commands to run:
1. Create the persistent volume using:
$ kubectl create -f pv.yml
2. Create the persistent volume claim using:
$ kubectl create -f pvc-claim.yml
Clean up:
$ kubectl delete -f pv.yml
$ kubectl delete -f pvc-claim.yml
When Pods are created by a Kubernetes Deployment object (for example, one named my-app), random IDs are appended after the name. If a Pod restarts, or if you scale the Deployment down and back up, the Deployment object assigns new random IDs to the Pods. After restarting, the Pod names look like this:
my-app-jk879
my-app-kl097
my-app-76hf7
All these Pods are associated with one load balancer Service. In a stateless application, a change in a Pod's name does not matter: the Service object handles the random Pod IDs and distributes the load across them. This type of deployment is well suited to stateless applications.
However, stateful applications cannot be deployed like this. A stateful application needs a sticky identity for each Pod, because the replica Pods are not identical.
Take a MySQL database deployment as an example. Assume you create the MySQL Pods using the Kubernetes Deployment object and scale them. If you write data to one MySQL Pod, that data is not replicated to the other MySQL Pods when a Pod is restarted. This is the first problem with using the Kubernetes Deployment object for a stateful application.
Stateful applications always need a sticky identity. While the Kubernetes Deployment object offers
random IDs for each Pod, the Kubernetes StatefulSets controller offers an ordinal number for each Pod
starting from zero, such as mysql-0, mysql-1, mysql-2, and so forth.
For stateful applications with a StatefulSet controller, it is possible to set the first Pod as primary and
other Pods as replicas—the first Pod will handle both read and write requests from the user, and other
Pods always sync with the first Pod for data replication. If the Pod dies, a new Pod is created with the
same name.
The diagram below shows a MySQL primary/replica architecture with persistent volumes and data replication.
Now, add another Pod to that. The fourth Pod will only be created if the third Pod is up and running, and
it will clone the data from the previous Pod.
In summary, StatefulSets provide the following advantages when compared to Deployment objects:
1. Ordered numbers for each Pod
2. The first Pod can act as the primary, which makes StatefulSets a good choice for a replicated database setup where one Pod handles both reads and writes
3. The other Pods act as replicas
4. New Pods are created only if the previous Pod is in a running state, and they clone the previous Pod's data
5. Deletion of Pods occurs in reverse order
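To store the MySQL root password, you first create a Secret. The Secret manifest itself is not included here; a sketch that is consistent with the StatefulSet below (which reads the MYSQL_ROOT_PASSWORD key from a Secret named mysql-password) is:
apiVersion: v1
kind: Secret
metadata:
  name: mysql-password               # name referenced by the StatefulSet's secretKeyRef
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: change-me     # placeholder value; use your own password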
Save the code using the file name mysql-secret.yaml and execute the code using the following
command on your Kubernetes cluster:
kubectl apply -f mysql-secret.yaml
Then use the following code to create a MySQL StatefulSet application in the Kubernetes cluster:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-set
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: "mysql"
  replicas: 3
  template:
    metadata:
      labels:
        app: mysql
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-store
          mountPath: /var/lib/mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-password
              key: MYSQL_ROOT_PASSWORD
  volumeClaimTemplates:
  - metadata:
      name: mysql-store
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: "linode-block-storage-retain"
      resources:
        requests:
          storage: 5Gi
Next, save the code using the file name mysql.yaml and execute using the following command:
kubectl apply -f mysql.yaml
Now that the MySQL Pods are created, get the Pods list:
kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
mysql-set-0   1/1     Running   0          142s
mysql-set-1   1/1     Running   0          132s
mysql-set-2   1/1     Running   0          120s
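The Service manifest is not shown here. Because the StatefulSet sets serviceName: "mysql" and the Pods are later addressed as mysql-set-0.mysql.default.svc.cluster.local, a headless Service along these lines would fit:
apiVersion: v1
kind: Service
metadata:
  name: mysql                  # must match serviceName in the StatefulSet
spec:
  clusterIP: None              # headless: gives each Pod a stable DNS name
  selector:
    app: mysql                 # matches the StatefulSet's Pod labels
  ports:
  - port: 3306                 # MySQL port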
Save the code using the file name mysql-service.yaml and execute using the following command:
kubectl apply -f mysql-service.yaml
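The client manifest is likewise not shown. One simple option is a Pod that runs the MySQL image only for its mysql command-line client (the name and image are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: mysql-client                    # hypothetical name
spec:
  containers:
  - name: mysql-client
    image: mysql:5.7                    # same image as the StatefulSet; provides the mysql CLI
    command: ["sleep", "infinity"]      # keep the Pod running so you can exec into it
You can then open a shell in it with kubectl exec -it mysql-client -- bash and run the mysql commands shown below.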
Save the code using the file name mysql-client.yaml and execute using the following command:
kubectl apply -f mysql-client.yaml
To access MySQL, you can use the same standard MySQL command to connect with the MySQL server:
mysql -u root -p -h host-server-name
For access, you will need a MySQL server name. The syntax of the MySQL server in the Kubernetes
cluster is given below:
stateful_name-ordinal_number.mysql.default.svc.cluster.local
# Example
mysql-set-0.mysql.default.svc.cluster.local
Connect with the MySQL primary Pod using the following command. When asked for a password, enter
the one you made in the “Create a Secret” section above.
mysql -u root -p -h mysql-set-0.mysql.default.svc.cluster.local
Now connect the other Pods and create the database like above:
mysql -u root -p -h mysql-set-1.mysql.default.svc.cluster.local
mysql -u root -p -h mysql-set-2.mysql.default.svc.cluster.local