Kubernetes Tutorial - A Comprehensive Guide For Kubernetes
Now, before moving forward in this blog, let me quickly brief you on containerization.
Before containers came into existence, developers and testers were always at odds. This usually happened because what worked on the dev side would not work on the testing side, since the two teams worked in different environments. Containers were introduced to avoid such scenarios, so that both the Developers and the Testers were on the same page.
Handling a large number of containers together was also a problem. Sometimes, while running containers on the production side, issues were raised that had not been present at the development stage. These kinds of scenarios led to the introduction of the Container Orchestration System.
Before I dive deep into the orchestration system, let me quickly list the challenges faced without such a system.
Now, to avoid setting up services manually and to overcome these challenges, something bigger was needed. This is where the Container Orchestration Engine comes into the picture.
This engine lets us organize multiple containers in such a way that all the underlying machines are launched, and the containers are healthy and distributed across a clustered environment. In today's world, there are mainly two such engines: Kubernetes and Docker Swarm.
As you can see in the above image, Kubernetes, when compared with Docker Swarm, has a large, active community and powers auto-scaling in many organizations. Docker Swarm, on the other hand, offers an easier cluster setup than Kubernetes, but it is limited to the capabilities of the Docker API.
What is Kubernetes?
Kubernetes is an open-source system that handles the work of scheduling containers onto a
compute cluster and manages the workloads to ensure they run as the user intends. Being
Google's brainchild, it offers an excellent community and works brilliantly with all the cloud
providers as a multi-container management solution.
Kubernetes Features
The main features of Kubernetes are as follows:
Automated rollouts and rollbacks
Service discovery and load balancing
Self-healing of failed containers
Horizontal scaling
Secret and configuration management
Storage orchestration
Kubernetes Architecture
Kubernetes Architecture has the following main components:
Master nodes
Worker/Slave nodes
I am going to discuss each of them one by one. Let's start by understanding the
Master Node.
Master Node
The master node is responsible for the management of the Kubernetes cluster and is mainly the entry
point for all administrative tasks. There can be more than one master node in the cluster
for fault tolerance.
As you can see in the above diagram, the master node has various components like API Server,
Controller Manager, Scheduler and ETCD.
API Server: The API server is the entry point for all the REST commands used to
control the cluster.
Controller Manager: A daemon that regulates the Kubernetes cluster and manages
different non-terminating control loops.
Scheduler: The scheduler schedules the tasks to slave nodes. It stores the resource
usage information for each slave node.
etcd: etcd is a simple, distributed, consistent key-value store. It is mainly used for
shared configuration and service discovery.
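Since the API server is the cluster's single REST entry point, every kubectl call is ultimately an HTTP request to it. A quick sketch of this, assuming kubectl is already configured against a running cluster:

```shell
# List nodes - kubectl translates this into a GET request to the API server
kubectl get nodes

# The same resource fetched via the raw REST path the API server exposes
kubectl get --raw /api/v1/nodes
```

Both commands hit the same endpoint; kubectl simply formats the JSON response into a table.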
Worker/Slave nodes
Worker nodes contain all the necessary services to manage the networking between the
containers, communicate with the master node, and assign resources to the scheduled
containers.
As you can see in the above diagram, the worker node has various components like Docker
Container, Kubelet, Kube-proxy, and Pods.
Docker Container: Docker runs on each of the worker nodes and runs the configured
pods.
Kubelet: Kubelet gets the configuration of a Pod from the API server and ensures that
the described containers are up and running.
Kube-proxy: Kube-proxy acts as a network proxy and a load balancer for a service on
a single worker node.
Pods: A pod is one or more containers that logically run together on nodes.
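To make the Pod concept concrete, here is a minimal Pod manifest with a single container (the name, labels, and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  labels:
    app: example
spec:
  containers:
  - name: web              # one container; a Pod may hold several
    image: httpd:2.4       # image chosen for illustration
    ports:
    - containerPort: 80
```

All containers in a Pod share the same network namespace and can reach each other over localhost.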
Kubernetes Case-Study
Problem: How to create images for all required platforms from one application code, and
deploy those images onto each platform?
For your better understanding, refer to the below image. When the code is changed at the code
registry, then bare metal images, Docker containers, and VM images are created by continuous
integration tools, pushed into the image registry, and then deployed to each infrastructure
platform.
Now, let us focus on the container workflow to understand how they used Kubernetes as a
deployment platform. Refer to the below image for a sneak peek into the platform architecture.
OpenStack instances are used, with Docker, Kubernetes, Calico, etcd on top of it to perform
various operations like Container Networking, Container Registry, and so on.
When you have a large number of clusters, it becomes hard to manage them, right?
So, they just wanted to create a simple, base OpenStack cluster to provide the basic
functionality needed for Kubernetes and make the OpenStack environment easier to manage.
By combining the image creation workflow and Kubernetes, they built the below toolchain,
which makes everything from code push to deployment easy.
This kind of toolchain ensured that all the factors for production deployment, such as multi-tenancy, authentication, storage, networking, and service discovery, were considered.
That is how Yahoo! JAPAN built an automation toolchain for "one-click" code
deployment to Kubernetes running on OpenStack, with help from Google and Solinea.
Hands-On
In this hands-on, I will show you how to create a deployment and a service. I am using an
Amazon EC2 instance to run Kubernetes. Amazon has also come up with Amazon Elastic
Container Service for Kubernetes (Amazon EKS), which allows you to create Kubernetes
clusters in the cloud very quickly and easily.
Step 1: First create a folder inside which you will create your deployment and service. After
that, use an editor and open a Deployment file.
mkdir handsOn
cd handsOn
vi Deploy.yaml
Step 2: Once you open the deployment file, mention all the specifications for the application
you want to deploy. Here I am trying to deploy an httpd application.
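The deployment file itself is not reproduced above. A minimal Deploy.yaml for an httpd application might look like this (the deployment name, labels, and replica count are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-deployment     # hypothetical name
spec:
  replicas: 2                # assumed replica count
  selector:
    matchLabels:
      app: httpd             # must match the pod template labels below
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:latest  # official httpd image from Docker Hub
        ports:
        - containerPort: 80
```

Step 4 below assumes the deployment has been applied; that step, not shown here, would presumably be `kubectl apply -f Deploy.yaml`.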
Step 4: Now, once the deployment is applied, get the list of running pods.
Here, the -o wide flag is used to find out on which node the deployment is running.
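The listing command referred to above is presumably:

```shell
# List pods; -o wide adds the NODE and IP columns to the output
kubectl get pods -o wide
```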
Step 5: After you have created a deployment, you have to create a service. For that, again
use an editor and open a blank service.yaml file.
vi service.yaml
Step 6: Once you open a service file, mention all the specifications for the service.
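A service.yaml exposing the httpd deployment could look like this (the service name, selector labels, and NodePort type are assumptions; the selector must match the labels used in your deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service       # hypothetical name
spec:
  type: NodePort            # exposes the service on each node's IP at a cluster-assigned port
  selector:
    app: httpd              # must match the pod labels from the deployment
  ports:
  - port: 80
    targetPort: 80
```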
Step 7: After you write your service file, apply the service file using the following command.
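The apply command, not shown above, is presumably:

```shell
# Create (or update) the service from the manifest
kubectl apply -f service.yaml
```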
Step 8: Now, once your service is applied, check whether the service is running or not using
the following command.
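The check is presumably done with:

```shell
# List services; the output shows TYPE, CLUSTER-IP, and PORT(S) for each service
kubectl get services
```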
Step 9: Now, to see the specifications of the service and check which Endpoint it is bound to,
use the following command.
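Presumably the describe command, where the service name is the assumed name from the service file:

```shell
# Show the service's full specification, including its Endpoints (pod IPs)
kubectl describe service httpd-service   # service name is an assumption
```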
Step 10: Now, since we are using an Amazon EC2 instance, to fetch the webpage and check the
output, use the following command.
curl ip-address