
These components work together to provide a scalable and resilient platform for deploying and managing containerized applications in a Kubernetes cluster.

● Master Node is responsible for managing the Kubernetes cluster and consists of these:
1. Kube API Server: Allows users and other components to interact with the cluster. All communications, internal or external, go through it (port 443). The API server talks to the kubelet on each worker.
2. ETCD: Used for storing the current state of the cluster and its configuration data.
3. Kube Controller Manager: Runs various controllers that handle tasks such as node management and replication. It is always watching out for changes, then applies them accordingly.
4. Cloud Controller Manager: Responsible for interacting with the cloud provider.
5. Scheduler: Responsible for assigning pods to nodes based on resource requirements.
● Worker Node (minion node) is a member of a cluster where the containers run, and it has:
1. Kubelet: An agent that runs on a worker and communicates with the master (scheduler). It manages the containers, ensures they're running correctly, and reports node and pod state.
2. Container Runtime: Responsible for pulling container images and running containers on the node. Docker, containerd, CRI-O, and (historically) rkt are common runtimes. The runtime abstracts container management for Kubernetes. Containers don't have permanent IP addresses.
3. Kube Proxy: Responsible for local cluster networking and load balancing between services across the cluster. If you want to send a request into the cluster, you go through it.
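
Most of these components run as pods themselves. On a Minikube or kubeadm-based cluster, a quick way to peek at them is:
kubectl get pods -n kube-system
You should see the API server, etcd, scheduler, controller manager and kube-proxy listed there.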

● Pod is the basic unit of deployment in Kubernetes. It can consist of one or more containers that share the same resources. It is an abstraction over a container. Pods get IP addresses, and they're ephemeral.
● ReplicaSet ensures that a specified number of identical pods are running at all times. It provides scaling and high availability by creating or removing pods as needed.
● A Deployment is a higher-level abstraction that manages ReplicaSets. It's a blueprint for pods. It enables rolling updates, rollbacks, and scaling of applications. It ensures that the desired number of replicas are running and monitors their health. You'll never create pods directly; you'll create deployments.
● StatefulSets are like Deployments, but used for stateful applications such as databases. You should create databases this way.
● A Service provides a permanent IP for accessing a group of pods. It abstracts the dynamic IP addresses of the pods and provides load balancing and service discovery.
● A Namespace is a logical boundary within a cluster that allows multiple virtual clusters to coexist.
Docker support is deprecated; Kubernetes now works with containerd. The concepts and commands are the same.

● Ingress: forwards traffic from outside the cluster to your Service.
● Volumes: a Volume is external storage where your pods store data. The Volume is not part of your cluster.

Minikube
A one-node K8s cluster: both master and worker processes run on one node (machine). The node also comes with the Docker runtime pre-installed. Minikube creates a virtual environment on your laptop, so you can use K8s this way for testing and learning.

Kubectl
To create components and configure them, you need a way to interact with Minikube. Kubectl (a CLI tool) is your gateway to connect to Minikube and do the configuration. Kubectl communicates with the API server. There are 3 ways/clients to contact the API server: UI, API and CLI. Kubectl is the most powerful client of them all, and you can use it with any type of cluster, for example a cloud cluster or a hybrid cluster.
Installation of Minikube and Kubectl (L-122)
Install a virtualization environment first, then Minikube (it comes with Kubectl, also known as kubernetes-cli, so there is no need to install that separately):
brew update
brew install hyperkit
brew install minikube

Creating a Minikube Kubernetes Cluster

minikube start --vm-driver=hyperkit (enter your laptop password if asked)
If all goes well, your new Minikube cluster is ready, and your kubectl is connected to it.

How do you check the status of the nodes? kubectl get nodes
How do you check the status of our Minikube? minikube status
What version of Kubernetes have you installed? kubectl version (check both server & client)

This command “minikube start --vm-driver=hyperkit” is only for starting the cluster. We will be
using kubectl to interact with our minikube cluster from here onwards.

Basic Kubectl commands (L-123)

kubectl get nodes (We already know that we have 1 node that is acting as a worker and master)
kubectl get pod (checks what pods are available, but we don’t have any pods yet)
kubectl get services (Will check what IP and port are available)
kubectl get deployment
kubectl get replicaset
Let us create an nginx pod (we don't create pods directly; we create a deployment, and the deployment then creates the pod):
kubectl create deployment nginx-depl --image=nginx
kubectl get deployment (checks the deployment we've just created)

Remember, a deployment is just a blueprint for a pod. Between a deployment and a pod there is another layer called a replicaset, which Kubernetes manages for you. Let's check it: kubectl get replicaset
Your pod inherits the name and hash from the replicaset, then adds its own ID at the end, and that's how its name is made. You don't update, create or delete a replicaset. You ONLY deal with deployments.
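For example (the hash and pod ID here are illustrative; yours will differ):
deployment: nginx-depl
replicaset: nginx-depl-5c8bf76b5b
pod: nginx-depl-5c8bf76b5b-x7k2m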

Layers of abstraction (remember, anything below the deployment is managed by Kubernetes, so you don't have to worry!):
Deployment manages a replicaset. Replicaset manages a pod. A pod is an abstraction of a container.

If you want to edit the image name, you do it in the deployment, NOT in the pod itself. Let's do it:
kubectl edit deployment nginx-depl (add a version tag, e.g. :1.16, after the image name under "containers"; the editor is Vi)

Debugging the application running inside the pod


kubectl logs <pod name>
If the container is still starting, the command above won't show anything. To see additional information and events, use:
kubectl describe pod <pod name>
To get a shell inside a specific container (pod), you can use:
kubectl exec -it <pod name> -- bash

Let's delete our pods (find out our blueprint first: kubectl get deployment), then delete if you want:
kubectl delete deployment <deployment name> This will also delete the pod made by that deployment.
Check that the replicaset and the pods are both gone. This is your challenge!
Kubernetes Configuration File (L-124)
Typing these long lines (kubectl create deployment nginx-depl --image=nginx) is tedious, and that's why we need configuration files. We describe everything in the file, then execute the file with this command:
kubectl apply -f <file-name.yml>

Basic Kubernetes config file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80

Creating your own YML config files and understanding the syntax (L-124)
A Kubernetes config file has 3 parts:
1. Metadata (mainly contains the name of what you're building)
2. Spec (how many replicas, ports, etc.)
3. Status (maintained by Kubernetes itself)
In the script above, the outer section specifies the deployment, while the template section nested inside it specifies the pod. The pod configuration sits inside the deployment configuration: a config inside another config.

● You can store your K8 Config file in any location but it is better practice to keep it with the rest of the
code files.

Ports and Port-forwarding: When external networks want to communicate with the pods, they contact the Service first through port 80. The Service then forwards all legitimate communications to the pods, through port 8080 in this case. This means external entities don't have direct access to the pods, which is good for security.
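
For quick local testing you can also forward a port on your machine straight to the Service; a minimal example, assuming the nginx-service defined below:
kubectl port-forward service/nginx-service 8080:80
Then open http://localhost:8080 in your browser.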

Do this hands-on lab

Name these 2 scripts nginx-deployment.yaml and nginx-service.yaml.

nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 8080

nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

Explanation & execution:
Execute both files with kubectl apply -f <file-name> (do this to both files).
In the files: replicas: 2 = how many replicas we need; targetPort = the port traffic is forwarded to; in app: nginx, app is the key and nginx is the value.
Check the new pods and services:
kubectl get pods
kubectl get service
(If you see a service called 'kubernetes', remember that it is the default service that comes with Kubernetes. We did not create it. Our service is listening on port 80 and the default one is on port 443.)

Check what pods the service is forwarding the requests to with:


kubectl describe service nginx-service (nginx-service is the service name; don't add .yaml)
To see a long list of multiple pods and all their IPs, use: kubectl get pod -o wide

Kubernetes keeps the live status of your deployment in ETCD. To see it, type this:
kubectl get deployment nginx-deployment -o yaml

You can redirect the output to another file if you'd like to save it and investigate more (pick a different file name so you don't overwrite your original config):
kubectl get deployment nginx-deployment -o yaml > nginx-deployment-result.yaml
Open the output file side by side with the deployment file and concentrate on the status part. Useful for debugging.

Delete the deployment: kubectl delete -f nginx-deployment.yaml


Delete the service: kubectl delete -f nginx-service.yaml
So, kubectl apply and kubectl delete both work with configuration files.

Demo Project (L-125)


Deploy MongoDB and Mongo-express
We will create a MongoDB pod with only an internal service, so no one can contact the pod from the outside world. Only other pods from the same cluster can talk to it.
We will also create a Mongo-express deployment.

Things we need:
1. A database URL for MongoDB, so Mongo-express can connect to it. For this, we will create a ConfigMap. This file will contain the database URL.
2. Credentials, i.e. the username and password of the database, so Mongo-express can authenticate to the DB. For this, we will create a Secret. It will contain the credentials. We can pass these credentials to Mongo-express through its deployment file using environment variables.
We will reference both in our deployment file. Once we finish, we can access Mongo-express through the browser.

The flow of connections: browser → Mongo Express external service → Mongo Express pod → MongoDB internal service → MongoDB pod.

The steps:
1. Let's check and see everything in our cluster: kubectl get all (mine is completely empty)
2. First, create a file that will contain our credentials. Call this file mongo-secret.yaml. (This secret file has to be applied before the pod deployments; otherwise your pods will throw errors.)

In your terminal, type:
echo -n 'jubeeyste' | base64
echo -n 'Pa33w0rd' | base64
Then paste the output after the username and password sections of the file respectively. (Use straight quotes; curly quotes would get base64-encoded into the values too.)

apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: anViZWV5c3Rl
  mongo-root-password: UGEzM3cwcmQ=

'Opaque' is the default Secret type, for arbitrary key-value data.
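
Alternatively, the same Secret can be created imperatively, letting kubectl do the base64 encoding for you:
kubectl create secret generic mongodb-secret --from-literal=mongo-root-username=jubeeyste --from-literal=mongo-root-password=Pa33w0rd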

3. Apply the secret file: kubectl apply -f mongo-secret.yaml (you can see it with kubectl get secret)
4. Create a MongoDB deployment and call it mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017

The 3 dashes (---) in the script above are officially known in the YML format as a syntax for document separation. This means you can put multiple deployments in a single file; YML will understand this and create them separated by these dashes.

The Service block at the bottom creates a service, while the Deployment at the top creates MongoDB. You can make separate files if you want, but we combine these 2 in the same file as they belong together.

The Service part has a key-value pair (app: mongodb, under selector) and that's how your service knows which pods to connect to. Keep an eye on these parts:
1. kind: Service
2. metadata / name (any name you like)
3. selector (connection to the pods through a label)
4. ports (the Service port)

Here, for the internal service, we don't mention the type. That way, this service automatically becomes internal-only. Later, in the next script for Mongo Express, we will set the type to LoadBalancer. That will make the next Service an external one.

5. Apply the deployment: kubectl apply -f mongo.yaml
6. It might take a while for the pod to be ready. You can watch the progress: kubectl get pod --watch
7. If it is taking too long, you can use: kubectl describe pod <pod name>
8. Create an internal service, so that other components can talk to your MongoDB (this is the Service part of mongo.yaml above).
9. Type kubectl describe service <service name> to see more info about it.
10. Create the Mongo Express deployment and service, plus a ConfigMap for the MongoDB URL.
Your Mongo Express needs 3 things (3 environment variables):
1. The application it will connect to, which is the internal service (ME_CONFIG_MONGODB_SERVER)
2. The credentials to authenticate with (ME_CONFIG_MONGODB_ADMINUSERNAME and ME_CONFIG_MONGODB_ADMINPASSWORD)

Call this file mongo-configmap.yaml. Create this file first, before the Mongo Express deployment: the ConfigMap must be in K8s first, because Mongo Express will reference it.

This is the ConfigMap script:

apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service

Combined Mongo Express deployment and Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-express
  labels:
    app: mongo-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo-express
  template:
    metadata:
      labels:
        app: mongo-express
    spec:
      containers:
      - name: mongo-express
        image: mongo-express
        ports:
        - containerPort: 8081
        env:
        - name: ME_CONFIG_MONGODB_ADMINUSERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: ME_CONFIG_MONGODB_ADMINPASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
        - name: ME_CONFIG_MONGODB_SERVER
          valueFrom:
            configMapKeyRef:
              name: mongodb-configmap
              key: database_url
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-express-service
spec:
  selector:
    app: mongo-express
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30000

The value 'LoadBalancer' is what makes this service external. The nodePort: 30000 is the port that will follow the server IP in the browser; the nodePort has to be between 30000 and 32767.
Type kubectl get services. LoadBalancer is for external usage; the other type, ClusterIP, is for internal usage. Your LoadBalancer should have an external IP, but in Minikube it does not get one (it stays pending). The way around it is this command: minikube service mongo-express-service. This will assign an IP to mongo-express and open it in the browser.

K8s Namespaces (L-126)

In Kubernetes, you can organise your resources in namespaces. When you set up Kubernetes, you get some namespaces (4 of them) out of the box. To see what you already have, type: kubectl get namespaces (the kubernetes-dashboard namespace only comes with Minikube; you won't find it in a standard cluster). You can create your own namespace with: kubectl create namespace my-namespace and check it with kubectl get namespaces. A better way is to define the namespace in the configuration file (see below).
New resources you create are put in the default namespace if you don't specify where they should go.
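
The 4 out-of-the-box namespaces are typically:
default (where your resources land if you specify nothing)
kube-system (system processes; not meant for your resources)
kube-public (publicly accessible cluster data)
kube-node-lease (heartbeat/lease objects for the nodes)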

Why do we need namespaces? We need them for the following reasons:

1. If you create everything in the default namespace, it will soon be filled with too many resources. Unorganised!
2. We create multiple namespaces for better organisation: database, nginx, monitoring, etc. Grouping them makes management easier.
3. Two teams with the same application names can overwrite each other if they work in the same namespace.
4. Running Development and Staging environments in one cluster, where they can share the same components. This way, you don't have to allocate separate resources to them.
5. Limiting a team to a specific namespace when resources are scarce or you want to control their access. You can even control how many resources (CPU, RAM or storage) a specific namespace can consume.

Consider these before you use namespaces (NS):

1. You can't use some resources across different NSs. E.g. a ConfigMap in projectA cannot be used in projectB. You can make a copy of the ConfigMap and use it in projectB if you want. The same goes for a Secret. A Service, by contrast, is one of the resources that can be shared across NSs.
2. Some components cannot be created inside a NS, e.g. Volumes and Nodes (they're global, shared by all).
kubectl api-resources --namespaced=false will show resources that are not bound to a NS
kubectl api-resources --namespaced=true will show resources that are bound to a NS (try it!)

Create components in a NS (remember, if we don't mention a NS, components get created in the default one).

If we apply a ConfigMap with:
kubectl apply -f mysql-configmap.yaml
it gets created in the default NS. If we do it this way:
kubectl apply -f mysql-configmap.yaml --namespace=my-namespace
it gets created in my-namespace. A better way is to add the NS (namespace: my-namespace) in the metadata; putting the NS inside the file is better for documentation and automation.
If you create your own NS, you can only find its components later by naming that NS, like:
kubectl get configmap -n my-namespace (this is important)
If you don't mention a namespace at all, the system will always assume the default.
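
A minimal sketch of what that metadata looks like (the name and data values here are just examples for this section):

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: my-namespace
data:
  db_url: mysql-service.my-namespace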

You can change the system's behaviour of creating components in the default NS by using kubens. Install it on a Mac with brew install kubectx. Once it is installed, typing kubens will show all NSs while also highlighting the active one. Typing kubens my-namespace will make my-namespace the default.
4 Service types - Connecting to Applications inside the cluster (L-127)
1) ClusterIP Services, 2) Headless Services, 3) NodePort Services, and 4) LoadBalancer Services
A Pod is ephemeral. When it dies, it loses everything, including its IP. A new Pod gets a new IP, and that is not good because you'd have to adjust your environment each time this happens. A Service, however, gives you a stable, static IP that stays even when the pods behind it die. A Service also provides load balancing: if you have 3 application replicas, your Service will receive the requests from your clients and forward the traffic to whichever pod is available. Here are the types:
1. ClusterIP Services. If you don't specify the type of Service in your Service file, you get ClusterIP by default.

Example in the diagram: the first Pod in node 2 hosts a microservice app and a side-car container that collects logs from the microservice app. Each will need its own IP and port.
● The Service knows which Pods it should forward requests to by looking at the selector attribute in the Service specification. This must match the Pods' label specification in their metadata.
● The Service also knows which port to forward traffic to by looking at the targetPort attribute, in case a Pod has multiple ports.
● When you create a Service, K8s creates an 'endpoints' object with the same name you gave to the Service. K8s uses this endpoints object to keep track of which Pods are members of the Service. Type these commands and see how they're related:
kubectl get endpoints and kubectl get services
● In your Service specification, you can choose any number as the port number, but targetPort has to match the port the application container listens on.
Port vs targetPort
If a Service exposes more than one port, the ports have to be named in the file. For a single port, there is no need.
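
A sketch of a Service exposing two named ports (the second, metrics, port is a hypothetical example for a monitoring side-car):

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - name: mongodb
    protocol: TCP
    port: 27017
    targetPort: 27017
  - name: metrics
    protocol: TCP
    port: 9216
    targetPort: 9216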

2. Headless Service.
In a Headless Service, the clusterIP value in the spec is set to "None".
Headless is used when a client wants to communicate with one specific pod rather than any pod, and also when pods need to communicate with one another directly (e.g. for data syncing) without going through the K8s Service first. If you set the clusterIP to "None", the DNS lookup returns the Pod IP addresses instead of a single cluster IP. This way, clients can use a Pod's IP to reach that exact Pod. Headless Services are used for stateful applications (like databases).
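
A minimal headless sketch, reusing the mongodb labels from the demo above:

apiVersion: v1
kind: Service
metadata:
  name: mongodb-service-headless
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017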

3. NodePort Service. The 'type' attribute can be one of three values (ClusterIP, NodePort or LoadBalancer).
Unlike ClusterIP (which is accessible ONLY from within the cluster), NodePort lets external traffic into the cluster using a fixed, static port. This means traffic from outside reaches the worker node via that port, without using an ingress. The port (its range has to be between 30000 and 32767) is defined in the Service file in the nodePort attribute. This type of Service is neither efficient nor secure, because external traffic can reach your pods directly.
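
A NodePort variant of the earlier nginx service could look like this (the nodePort value is just an example from the allowed range); the app would then be reachable at <node-ip>:30008:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30008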

4. LoadBalancer Service. External traffic reaches the Service through a LoadBalancer from the cloud provider.
When a LoadBalancer Service gets created, a ClusterIP and a NodePort also get created automatically, and the LoadBalancer forwards traffic from outside to the NodePort. This means the NodePort cannot be reached directly. The LoadBalancer type is an extension of the NodePort type, which itself is an extension of the ClusterIP type. In a real K8s production setup, you may use NodePort for quick testing but not for external connections. If you have an application that will be used through a browser, you'll either configure an ingress, which forwards traffic to your ClusterIP, or you'll use the LoadBalancer from your cloud provider.
Ingress - Connecting to Applications from outside the cluster (L-128)
With an external Service, you have to visit your app through an IP and port, which is not the way a company app would serve clients. A better way is to use an Ingress. When clients visit from a browser, the request first reaches the Ingress, and the Ingress then redirects the traffic to the internal Service.

● In the ingress, the domain name under 'rules' must be real. Any request that comes in at that domain name will be forwarded to the service named at the bottom. The ingress has a port for the service it forwards traffic to. In the Ingress, we don't have a nodePort.

Let us create an Ingress.

Before we configure an Ingress, we first need an implementation called an Ingress Controller, which is another pod running in your cluster. This Ingress Controller will do the redirections and evaluate the ingress rules. In other words, the Ingress Controller will be the entrypoint of your cluster for redirecting requests by domain name.

Configuring the Ingress Controller in Minikube.

There are many third-party implementations, but we will use Kubernetes' Nginx Ingress Controller. Type this:
minikube addons enable ingress (check it is there with kubectl get pods -n ingress-nginx)
You should get an ingress-nginx controller running.
Create rules the Ingress controller can evaluate.
As an exercise, we will create rules for the kubernetes-dashboard component. Your kubernetes-dashboard lives in its own NS; check all NSs with kubectl get ns and you'll find it. We cannot access the dashboard in its current state, and that's why we are creating an ingress rule for it. First, let's see what is available inside that namespace with:
kubectl get all -n kubernetes-dashboard
As you can see, the dashboard has its own internal service and pod. We know that the service is internal since its type shows "ClusterIP".

The rule:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.com
    http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80

Apply the rule (kubectl apply -f <file-name>), then type kubectl get ingress -n kubernetes-dashboard into your terminal. After a short while, an IP is assigned to the ingress. Take that IP and map it to dashboard.com in the hosts file on your local machine (/etc/hosts).
Now, visit dashboard.com through your browser.
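The /etc/hosts entry would look something like this (the IP is whatever your ingress shows; this one is a typical Minikube address):
192.168.64.5   dashboard.com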

Extra commands:

minikube profile list
kubectl create -h (asking for help on a specific command, e.g. kubectl create)
minikube stop (opposite of minikube start)
