
Kubernetes Personal Notes - Naresh Kumar Chityala

22 August 2023 16:05

What is Kubernetes
• Kubernetes, also called K8s, is a container orchestration platform originally developed by Google.
• It is designed to automate the deployment, scaling and management of containerized applications across a cluster of nodes.
Cluster of Nodes Example
• Consider a Docker host: the host runs multiple containers, and the containers run applications (microservices).
• In Kubernetes, a host running containers is called a worker node.
• An entity containing multiple worker nodes is called a cluster of nodes.
• Kubernetes is the tool that manages this cluster of nodes.
• Kubernetes does not interact with containers directly; it interacts with a layer called the "POD".
• Containers run inside the "POD".
• Kubernetes-specific commands operate on the "POD".
Microservices
• A combination of multiple microservices in a piece of software is called a service.
• Microservices are independently deployable services. They are deployed in containers.
• Each microservice is designed to perform a specific function and can communicate with other microservices over defined APIs and protocols.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Architecture of Kubernetes
• Works on a master-slave architecture.
• The master node is the brain of the infrastructure. Multiple master nodes can be configured for high availability. In managed cloud services, the cloud service provider takes care of the master node;
the cloud administrator only needs to take care of the worker nodes.
• Worker nodes - slave nodes which do the actual work.
• All worker nodes need to communicate with the master node.

Master Node (Control Plane) Components


API Server
• Entry point to the cluster; it acts as the front end of the Master Node (Control Plane). Worker nodes in the cluster communicate with the Master node, and this happens through the API
server.
• It takes requests from worker nodes, communicates with other components within the Master node whenever required, and gets the task completed.
• It takes care of authentication and authorization of worker nodes.
Controller Manager
There are many controllers, such as the Pod controller, Service controller, etc. The Controller Manager manages these controllers.
Scheduler
• The Scheduler watches for newly created pods and, using the information available in etcd, assigns them to worker nodes that are free and capable of running them.
Etcd
• A database which stores cluster data in key-value pairs. It is the backing store for all cluster data and is also highly available.

Worker Node (Node) Components


Kubelet
• This agent passes information such as running containers, running services and memory usage on the worker node to the API server on the Master node. With this, the Master node knows the
state of the worker node and saves the data in etcd (the database of all worker nodes on the Master node).
• The kubelet runs continuously on every worker node.
Kube-Proxy (K-Proxy)
• Kube-proxy takes care of network communication between containers across worker nodes. This helps when microservices are running in multiple containers on multiple
nodes.
• It also helps find the best path for communication between containers. It optimizes the communication.
Container Platform
• The container platform/runtime, such as Docker, which is installed on the nodes and actually runs the containers.

Workload Components
Pod
A collection of one or more containers; the lowest-level component within worker nodes.
Deployments
To create and manage multiple copies of a Pod, we use Deployments.
Services & Ingresses
To enable communication between Deployments and Pods, we use Services and Ingresses.
To access the applications running within containers/Pods, we also use Services and Ingresses.
ConfigMaps
To save the configuration of Pods, we use ConfigMaps.
Secrets
To save sensitive data such as passwords and tokens.
Namespaces
Logical partitions within a cluster are called Namespaces.
Volumes
Volumes are used for storage.
StatefulSets
These are used for stateful workloads such as database containers.
There are other workload components as well.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Installation of Kubernetes via Google Cloud Platform

To Connect to Google Kubernetes Engine (GKE)


1. Install Google SDK
choco install gcloudsdk
2. Update Google SDK using below command
gcloud components update
3. Install Authorization Plugin
gcloud components install gke-gcloud-auth-plugin
4. Log in to Google Cloud
gcloud auth login

To Connect to the Kubernetes Cluster


1. In the Kubernetes Engine console, click Connect on your cluster and copy the command; it will look like the example below:
gcloud container clusters get-credentials myfirstakscluster1 --zone us-central1-c --project verdant-upgrade-396713

Some Basic Kubernetes CLI Commands

To get worker nodes list


kubectl get nodes
kubectl get nodes -o wide

To get Information about particular Node


Syntax: kubectl describe node <node name>
Example : kubectl describe node gke-myfirstakscluster1-default-pool-42207af3-s6l0

-----------------------------------------------------------
To get all kubernetes workload info
• kubectl get all -A

-----------------------------------------------------------
To get pods list
• kubectl get pods
• kubectl get pods -w (watches and shows live updates. To test this, run the command, open another command window, and delete a pod that belongs to a deployment; you will see live
output showing the pod being terminated. Because a deletion from a deployment must keep the number of pods declared in the manifest file, the deployment creates/deletes pods to match
the number mentioned in the manifest file.)
• kubectl get pods -o wide (gives a few more details than get pods)
• kubectl get pods -A (gets the pods from all namespaces)

To delete pod
Syntax : kubectl delete pod <pod name>
Example : kubectl delete pod my-deployment-c84f48dd5-sgv6g

To get Information about particular Pod


Syntax: kubectl describe pod <pod name>
Example : kubectl describe pod my-pod

-----------------------------------------------------------
To get deployments list
kubectl get deployments

To get Information about particular deployment


Syntax: kubectl describe deploy <deployment name>
Example : kubectl describe deploy my-deployment

To Edit the deployment


Syntax: kubectl edit deploy <deployment name>
Example : kubectl edit deploy my-deployment

Once this command is executed, a manifest file opens so you can make changes to the deployment. Try changing the number of replicas and save.
Then test it (execute kubectl get deployments).

-----------------------------------------------------------
To get ReplicaSets list
kubectl get replicasets

To get Information about particular ReplicaSet


Syntax: kubectl describe replicaset <replicaset name>
Example : kubectl describe replicaset my-deployment

If you update the number of replicas in the ReplicaSet manifest file and apply it, the same number of pods will be running for the respective deployment name mentioned in the
ReplicaSet manifest file.

Changes made to the manifest file on the fly are applied, and the corresponding pods are created/deleted.
-----------------------------------------------------------
To Apply Manifest File Configurations
kubectl apply -f .\sample_deploy.yml

To Apply Multiple Manifest File in Single Run


Step 1: Navigate to folder where the manifest files are saved.
Step 2: Run the below command
kubectl apply -f .

To Delete Manifest File Configurations


kubectl delete -f .\sample_deploy.yml

To Get a List of Services


kubectl get service

To get information about particular Service


Syntax: kubectl describe service <service name>
Example : kubectl describe service my-app-cip

-----------------------------------------------------------
To get a List of Ingress
kubectl get ingress

-----------------------------------------------------------
To get a list of namespaces
kubectl get namespaces

To get resources in particular namespace


Syntax: kubectl get all -n <namespace Name>
Example: kubectl get all -n kube-public

To create a namespace
Syntax: kubectl create namespace <namespace name>
Example : kubectl create namespace ingress-nginx

To Delete a namespace
Syntax: kubectl delete namespace <namespace name>
Example : kubectl delete namespace ingress-nginx

To Get PODs running in specific namespace


Syntax: kubectl get pods -n <namespace Name>
Example: kubectl get pods -n ingress-nginx
Note: you can get similar info about deployments, replicasets, ingress, services and other components namespace-wise.
-----------------------------------------------------------
To Create Ingress Controller
Below is an example of creating the Nginx ingress controller. Each service provider has its own respective ingress controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml

-----------------------------------------------------------
To get a list of Config Maps
kubectl get configmap

To get information about particular ConfigMap


Syntax: kubectl describe cm <configmap name>
Example : kubectl describe cm kube-root-ca.crt

-----------------------------------------------------------
To get a list of Secrets
kubectl get secrets

To get information about particular Secret


Syntax: kubectl describe secret <secret name>
Example : kubectl describe secret mongodb-secret

To get the environment variables defined within a POD


Syntax : kubectl exec <pod name> -- env
Example : kubectl exec mongo-express-58b7cb7879-dwq5s -- env
-----------------------------------------------------------

To Get list of PVC


kubectl get pvc

After running the above pvc command, if the Status is "Bound", the PVC has been bound to a Persistent Volume and is available for the respective pod to use.

To get information about particular pvc


Syntax: kubectl describe pvc <pvc name>
Example : kubectl describe pvc my-pvc
-----------------------------------------------------------
To Get list of PV
kubectl get pv

To get information about particular pv


Syntax: kubectl describe pv <pv name>
Example : kubectl describe pv my-pv

-----------------------------------------------------------
To Get list of DaemonSets
kubectl get daemonset

To get information about particular daemonset


Syntax: kubectl describe daemonset <daemonset name>
Example : kubectl describe daemonset prometheus-node-exporter

-----------------------------------------------------------
To Get list of Jobs
kubectl get jobs

To get information about particular Job


Syntax: kubectl describe job <job name>
Example : kubectl describe job factorial-job

To get Logs about the Job


Syntax: kubectl logs <POD name which is running job>
Example : kubectl logs factorial-job-v5rtz
-----------------------------------------------------------

To Get list of HPA


kubectl get hpa

To get information about particular HPA


Syntax: kubectl describe hpa <hpa name>
Example : kubectl describe hpa nginx

-----------------------------------------------------------

To Get list of Limit Range


kubectl get limitrange

To get information about particular limitrange


Syntax: kubectl describe limitrange <limitrange name>
Example : kubectl describe limitrange my-limit-range

-----------------------------------------------------------

To Get list of ResourceQuota


kubectl get resourcequota

To Apply resource quota to specific Namespace


Syntax: kubectl apply -f <resourcequota yml file path> -n <namespace name>
Example : kubectl apply -f .\resourcequota.yml -n my-ns

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

In order to communicate with Kubernetes, we need the kubectl plugin. Below is the command to install the Kubernetes CLI using Choco:
choco install kubernetes-cli

PODS
• Smallest and most basic unit of deployment.
• A single instance of a running process.
• It contains one or more containers, storage resources, network configuration and whatever else is required to run together. Besides the main container, a pod may contain
additional containers, called helper containers, depending on the requirement.
• In order to create multiple copies of Pods for high availability, scaling, rolling updates and self-healing, we use Deployments, ReplicaSets and StatefulSets.
• Pods provide benefits including resource isolation, flexible deployment strategies and enhanced reliability.
• POD Manifest in YAML Format.
apiVersion: the API version; each version supports a set of properties, and only those properties should be used in the manifest
kind: the type of workload being defined in the manifest
metadata: information or tags about the object defined in the manifest
spec: the specification of the POD defined in the manifest
Format :
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx:latest
    ports:
    - containerPort: 8080

Deployments
• A Deployment is an object which provides declarative updates and management for a set of replica Pods.
• When you create a Deployment, you specify the desired state via container images, number of replicas and other configuration parameters.
• Whenever there is an issue with any Pod, Kubernetes replaces it with a Pod that matches the desired state defined when the Deployment was created. This is called self-healing.
Deployment Manifest in YAML Format
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 8080

Replica Sets
• A Deployment internally uses ReplicaSets. A ReplicaSet ensures that the number of Pods given during creation of the Deployment is running at any time.
• It manages the lifecycle of the Pods.
Deployment Manifest in YAML Format (the ReplicaSet is created and managed internally by this Deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:latest
        ports:
        - containerPort: 8080

Services
• A Service is a layer on top of a deployment/pod/replicaset which acts as a stable network endpoint for accessing the pods, enabling inter-pod communication and also load balancing.
• The Service itself is assigned a stable IP address, independent of the individual pod IPs.
• Whenever a Pod is deleted/recreated (for example in the self-healing process), the respective Pod's IP address is also deleted/recreated. This makes it quite difficult to
communicate with the respective pod, and as a result, the application running on the pod may appear to go down. To resolve this, a Service is the appropriate component: the Service has a
stable IP address and interacts with both new and existing pods. This helps the application run without downtime.
• It is an independent entity bound to an IP address. It routes/load-balances the traffic to its backend Pods.
• There are 4 types of Services.
○ ClusterIP: When the service type is set to ClusterIP, the Kubernetes network controller assigns an IP address to the Service. This Service is accessible only within the cluster and
cannot be reached from outside the cluster. It is a secure service type, as the communication happens only within the cluster. This is the default service type.
○ NodePort: When the service type is set to NodePort, a node-port layer is added on top of ClusterIP, and the backend Pods become reachable via any node's IP address plus the
assigned node port. This is not a very secure approach and is typically used only for development/testing purposes. NodePort values start at 30000 (range 30000-32767).
○ LoadBalancer: This is a layer 4 load balancer service, available when the cluster is provided by a cloud provider. The IP address is assigned by the cloud provider and is
publicly accessible. A load balancer layer is created on top of NodePort. This is also a secure way of communication, as the traffic can be encrypted
using a certificate.
○ ExternalName: Maps the service to a DNS name, allowing the service to redirect requests to an external endpoint outside the cluster.
• Example of ClusterIP: when the Service's selector matches the labels used in the Deployment (via the matchLabels attribute), traffic is routed only to those pods whose labels
match; see the sketch below.
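A minimal ClusterIP Service manifest, assuming the my-app label and containerPort 8080 from the Deployment example above (the service port 80 is an illustrative choice; my-app-cip matches the describe example earlier):
apiVersion: v1
kind: Service
metadata:
  name: my-app-cip
spec:
  type: ClusterIP
  selector:
    app: my-app            # must match the Pod labels set in the Deployment template
  ports:
  - port: 80               # port exposed by the Service
    targetPort: 8080       # containerPort of the backend Pods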
Ingress Controller and Ingress Resources
Ingress:
• Ingress works as a layer 7 load balancer (example: Application Gateway).

In general, there are 2 types of load balancers:


a. Layer 4 Load Balancer (example: Azure Load Balancer)
○ A layer 4 load balancer just routes the traffic to its backend without making further changes.
b. Layer 7 Load Balancer
○ A layer 7 load balancer inspects incoming requests and routes the traffic to the respective backend server based on HTTP/HTTPS routes and rules.
○ It routes the traffic based on the URL. This is called path-based routing.

• In order to expose the microservices running in PODs within the cluster, we use Ingress.
• If we only used Services, each microservice running in PODs would need its own LoadBalancer Service (layer 4) to be reachable from an external network.
Instead, we can use a single layer 7 load balancer (Ingress), which routes the traffic based on the incoming request.
• Ingress Controller
○ A component which manages and operates Ingress resources.
○ It is responsible for fulfilling the Ingress rules by configuring and managing the underlying load balancer that handles the incoming traffic.
○ It provides a convenient interface to define and manage the Ingress rules for applications running in the cluster.
• Ingress Resources
○ An API object that defines rules for routing external HTTP and HTTPS traffic to Services within the cluster.
○ It acts as a layer 7 (application layer) load balancer and allows more advanced traffic routing and configuration compared to basic Service and NodePort approaches.
○ Using Ingress resources, you can define complex routing rules, manage multiple backend services and customize the behavior of incoming traffic for your applications (see the sketch below).
○ In order to deploy Ingress resources, first deploy an Ingress controller.
○ Once you deploy an Ingress resource, an external IP address is assigned to the Ingress by the Ingress controller.
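A minimal Ingress resource sketch with path-based routing, assuming the Nginx ingress controller installed by the command earlier; the app1-service and app2-service backend names are hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: app1-service     # hypothetical backend Service
            port:
              number: 80
      - path: /app2
        pathType: Prefix
        backend:
          service:
            name: app2-service     # hypothetical backend Service
            port:
              number: 80
Traffic to /app1 goes to app1-service and traffic to /app2 goes to app2-service, so a single external IP can serve multiple microservices.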

ConfigMap
• ConfigMap is a centralized component used to store configuration data and apply it to deployments of PODs or other resources in the cluster.
• It stores data in key-value pairs. It also provides the ability to mount configuration files as data. These can be exposed as environment variables within a container.
• All this information is saved in the etcd database on the Master Node.
○ Usage Example
 Consider a web application in which the frontend is the user interface, the backend is the business logic, and behind that is the database.
In order to establish communication between the user interface and the business logic, some configuration is needed, such as the destination server name, connection
strings, etc. This kind of configuration data is stored in a ConfigMap.
• ConfigMap Manifest Structure (a full example follows below)
apiVersion
kind
metadata
  name
data
  database_url
○ In the above example, in order to connect to the database, a username and password for the database are also required. Such sensitive information is stored in another
component called Secrets.
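A minimal ConfigMap manifest following the structure above; the mongodb-configmap name and database_url key match the env example later in these notes, while the value is an illustrative assumption:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-configmap
data:
  database_url: mongodb-service     # assumed value, e.g. the name of the MongoDB Service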

Secrets
• Any sensitive information like passwords, tokens, SSH keys, API keys or TLS certificates should be saved in Secrets.
• It is a secure way to store and manage sensitive data within a cluster.
• Secrets can be used by applications and pods to access sensitive information without exposing it in plain text.
• There are 4 types of Secrets:
○ Opaque: the most common type of Secret. It allows you to store key-value pairs as base64-encoded strings. Suitable for general-purpose sensitive information.
○ Docker-Registry: used for storing private Docker registry credentials for authentication. Includes server, username, password and email fields.
○ TLS: used to store TLS certificates and private keys. Includes tls.crt and tls.key fields.
○ Service Account: automatically created Secrets that provide credentials for accessing the Kubernetes API. These are associated with service accounts and allow pods to
authenticate with the API server.
• Secrets Manifest Structure (a full example follows below)
apiVersion
kind
metadata
  name
type
data
  mongo-root-username:
  mongo-root-password:
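A minimal Opaque Secret manifest matching the structure above; the mongodb-secret name and keys match the env example below, and the base64 values are illustrative (encodings of "username" and "password"):
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: dXNlcm5hbWU=     # base64-encoded "username" (illustrative)
  mongo-root-password: cGFzc3dvcmQ=     # base64-encoded "password" (illustrative)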

Note: Information from ConfigMaps and Secrets is referenced as environment variables in the deployment file, under the spec > containers > env section.
Example:

spec:
  containers:
  - name: mongo-express
    image: mongo-express
    ports:
    - containerPort: 8081
    env:
    - name: ME_CONFIG_MONGODB_ADMINUSERNAME
      valueFrom:
        secretKeyRef:
          name: mongodb-secret
          key: mongo-root-username
    - name: ME_CONFIG_MONGODB_ADMINPASSWORD
      valueFrom:
        secretKeyRef:
          name: mongodb-secret
          key: mongo-root-password
    - name: ME_CONFIG_MONGODB_SERVER
      valueFrom:
        configMapKeyRef:
          name: mongodb-configmap
          key: database_url

Namespaces
• A way to create virtual clusters within a physical cluster.
• Namespaces are useful if you want to segregate the cluster logically and isolate resources per group, per resource type or per application. They run
independently within the same cluster, which helps utilize the resources effectively.
• You can maintain specific resources like pods, services, deployments, secrets, ConfigMaps, etc. within their respective namespaces.
• Each object in Kubernetes belongs to a namespace.
○ Key points:
 Isolation: objects in one namespace are not aware of objects in other namespaces, unless explicitly configured to communicate.
 Resource allocation: CPU, memory, storage and network bandwidth can be allocated and managed at the namespace level.
 Access control: Kubernetes RBAC can be used to configure access control policies at the namespace level.
 Scopes: Kubernetes has four default namespaces.
□ default: most objects in Kubernetes are created in the default namespace.
□ kube-system: all system-related objects are created in the kube-system namespace.
□ kube-public: resources which are cluster-wide and publicly readable are created in kube-public.
□ kube-node-lease: Kubernetes nodes use this namespace to communicate their lease (heartbeat) status.
• Note: If no namespace is mentioned in the manifest file, all resources are created in the default namespace during deployment (see the snippet below).
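To place a resource in a specific namespace, set metadata.namespace in its manifest; a minimal sketch (my-ns is the namespace used in the ResourceQuota example later):
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-ns        # omit this field and the Pod is created in the default namespace
spec:
  containers:
  - name: my-container
    image: nginx:latest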

Persistent Volumes and Persistent Volume Claims


• A Persistent Volume (PV) is a piece of storage provisioned in a cluster, created in advance by administrators or allocated dynamically.
• This storage can be mounted into the containers running in a POD.
• If the POD is deleted, the storage mounted into its containers is NOT deleted. You can retain the storage and then mount it to a container running
in another POD as well.
• This volume can come from local storage, NAS or a cloud service (GCP, Azure, etc.).
• Persistent Volumes can be provisioned in 2 ways:
○ Statically: PVs are created in advance and then mounted into containers.
○ Dynamically: PVs are created on demand when requested by applications.
 To dynamically provision PVs, we need to use Storage Classes. Storage classes rely on provisioner plugins developed by cloud
service providers to extend storage support for Kubernetes.
 Storage Classes: storage classes define different classes of storage with specific characteristics such as performance and access mode, based on the provisioner (cloud service
provider).
• PVs support different types of access modes as per requirements:
○ RWO - ReadWriteOnce, for single-node access.
○ ROX - ReadOnlyMany, for multi-node access.
○ RWX - ReadWriteMany, for multi-node access.
• Lifecycle - Reclaim Policy: this policy defines whether a PV should be deleted or not when its claim is released.
These policies are independent of Pods and can be managed separately from Pods.
 When a PV is no longer needed by any pod, it can be released and made available for reuse.
 The reclaim policy determines what happens to the released PV, with the options below:
□ Retain - the PV is not deleted and retains the data when the PVC is deleted.
□ Delete - the PV gets deleted when the PVC is deleted.
□ Recycle - the PV is not deleted but its data is wiped for reuse. This policy is deprecated and not recommended for use.
 Persistent Volume Claims (PVCs)
□ PVCs are defined in manifest files. They are used by applications to request storage resources from PVs.
□ PVCs specify the desired configuration such as storage size, access mode and storage class.
□ When a PVC is created, Kubernetes automatically binds it to an available PV that matches the requirements specified in the claim.
• Note: the POD/Deployment manifest file refers to the PVC manifest file.
The PVC manifest file, in turn, refers to the Storage Class manifest file.
The Storage Class manifest file refers to the provisioner, which holds the details of the storage provider such as GCP or Azure (see the sketch below).
• To verify that a PV is attached to a POD, run the command "kubectl describe deployment <deployment name>" and check the Volumes section to see whether the PV/PVC mentioned in the
manifest file is listed there.
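A minimal sketch of that chain: a PVC requesting storage (the my-pvc name matches the describe example earlier; the "standard" storage class name is an assumption and varies per provider), followed by a Pod spec referencing the claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard     # assumed provider-specific StorageClass name
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-with-storage      # hypothetical name
spec:
  containers:
  - name: my-container
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc          # links the Pod to the PVC above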

StatefulSets:
• A traditional Deployment creates Pods, Services and other resources with random names, and they can start in any sequence while deploying.
• StatefulSets help deploy the resources with the required configuration, such as fixed naming conventions and creating pods in sequential order.
• When a StatefulSet is configured, if any pod has an issue or is terminated, Kubernetes deletes that pod and creates a new pod with the same name. This is not the
case with a regular Deployment workload, which deletes an arbitrary pod and creates a replacement with a random name.
○ Example: In a StatefulSet deployment, if the manifest file is configured for 3 PODs, Kubernetes creates POD0, POD1 and POD2; if we later edit the manifest file to 2 PODs,
Kubernetes deletes POD2, since it was created last.
• Stateful applications like database applications (MySQL, PostgreSQL) or messaging applications like Kafka require stable network identities (unique and stable hostnames) and stable
storage. Such applications should be deployed as a StatefulSet, with mandatory attachment of a PV and PVC, Services, etc. (see the manifest sketch after this list).
○ Features and Characteristics
 Unique and stable hostname for network identity: each POD is assigned a unique and stable hostname based on a naming convention. This maintains stable
network connectivity when applications are scaled up or down.
 Ordered deployment and scaling: each pod is created and fully running before the next pod is started, ensuring dependency and sequencing requirements are
maintained.
 Stable storage: StatefulSets provide stable and unique storage volumes for each POD. To achieve this, PVs and PVCs are used to provide storage to pods, allowing
the data to be persisted and retained even when pods are restarted. If a POD is deleted, Kubernetes creates a POD with the same name and attaches the
existing PV mentioned in the manifest file; it does not delete and recreate the PV.
 Headless service: a StatefulSet works with a headless Service (specified via serviceName), which gives each pod its own DNS entry. This enables direct communication between
pods using their hostnames rather than IP addresses.
 Stateful POD scaling: StatefulSets support both vertical and horizontal scaling. Vertical scaling deals with adding/removing CPU and memory for a POD; horizontal
scaling deals with the number of replicas of the POD.
 Ordered termination: when scaling down, a StatefulSet deletes the pods in the reverse order of their creation. This allows for orderly application
shutdown and ensures data integrity and consistency.
 Data replication: since each POD in a StatefulSet has its own PV and PVC, data replication can be handled at the storage level. Many cloud service providers offer data
replication capabilities to ensure high availability and data integrity.
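A minimal StatefulSet sketch illustrating these characteristics; the mongodb names, image and port are illustrative assumptions:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-headless
spec:
  clusterIP: None                # headless Service: gives each pod its own DNS entry
  selector:
    app: mongodb
  ports:
  - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb
spec:
  serviceName: mongodb-headless  # the headless Service defined above
  replicas: 3                    # pods are created in order as mongodb-0, mongodb-1, mongodb-2
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo:6.0
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:          # one PVC per pod, retained across pod restarts
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi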

DaemonSets Controller
• Traditional ReplicaSets or Deployments maintain a desired number of copies (replicas) of pods across the cluster; with them, there is a chance that more than one pod runs
on a single node while other nodes have none. To ensure that one pod runs on every node, we use a DaemonSet, ensuring that essential services are present and running
throughout the cluster.
• Use cases: running system services or a monitoring agent on each node.
• This is configured in the manifest file with kind: DaemonSet. There is no replicas section in the manifest, since its objective is to run the service on every node (see the sketch below).
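A minimal DaemonSet sketch that runs a monitoring agent on every node; the node-exporter name and image are illustrative (prometheus-node-exporter appears earlier only as a describe example):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:latest   # illustrative monitoring-agent image
        ports:
        - containerPort: 9100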

Jobs Controller
• A Job is a controller that creates one or more pods to perform a task and ensures that the task is completed successfully.
• ReplicaSets and Deployments maintain a desired number of replicas, whereas a Job is designed for short-lived tasks or batch processing. Once the task is completed, it
terminates the pods. It is ideal for batch jobs or tasks that need to be executed once.
• Once the Job is completed, the status of its POD goes to the "Completed" state.
• You can check the Job's log output using the command "kubectl logs <pod name running the job>". For example, if a script is run in a Job, the output of the script
can be obtained from the logs (see the sketch below).
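A minimal Job sketch; the factorial-job name matches the describe/logs examples earlier, while the image and command are illustrative assumptions:
apiVersion: batch/v1
kind: Job
metadata:
  name: factorial-job
spec:
  backoffLimit: 3                # retry the pod up to 3 times on failure
  template:
    spec:
      restartPolicy: Never       # Jobs require Never or OnFailure
      containers:
      - name: factorial
        image: python:3.11       # illustrative image
        command: ["python", "-c", "import math; print(math.factorial(10))"]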

Horizontal POD Autoscaler (HPA)


• HPA automatically adjusts the number of replicas of a Deployment, ReplicaSet or StatefulSet based on CPU utilization, memory utilization or custom metrics of a POD.
• It ensures that enough replicas are available to handle the load. Once the threshold value of CPU or another configured metric is exceeded, it scales out automatically; when
the metrics decrease, it automatically scales the replicas back down.
• For this functionality to work, a plugin called the metrics server must be configured in Kubernetes. If Kubernetes is provided by a cloud service provider, it is configured by
default; if Kubernetes is installed manually on a local machine, it needs to be configured explicitly. This plugin runs in the namespace called kube-system.
• Threshold values are configured in the manifest file in the resources section. The resources section should have both requests and limits key-value pairs (the HPA computes utilization relative to the requests).
• To achieve this, apply the HPA manifest file after the regular Deployment, ReplicaSet or StatefulSet manifest file (see the sketch below).
○ For example: kubectl apply -f .\deployment.yml
 The above command creates the pods with the number of replicas mentioned in the deployment file. Then apply the manifest file which has the HPA configured:
□ kubectl apply -f .\hpa.yml
□ With this, the HPA will monitor the pod metric values mentioned in the hpa manifest file and scale the pods automatically.
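A minimal hpa.yml sketch targeting the my-deployment example from earlier; the thresholds and replica bounds are illustrative, and the Deployment's containers are assumed to declare resources requests/limits as noted above:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU exceeds 50% of requests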

Cluster AutoScaler
• The Cluster Autoscaler (CA) automatically adjusts the size of the Kubernetes cluster by adding/removing nodes based on the resource demands of the pods. It is managed outside of the cluster.
• It ensures that enough resources are available in the cluster to accommodate the scheduled pods.
• In order to increase the number of nodes in a cluster, we generally go to the portal and edit the number of nodes. However, all cloud service providers offer an autoscaling profile in
which the minimum and maximum number of nodes can be configured. Once there is load on the nodes, nodes are added automatically as required (see the command sketch below). On managed
services this is handled by the provider's cluster autoscaler component, which watches for pods that cannot be scheduled, rather than by the metrics server used for the HPA.
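On GKE, for example, node-pool autoscaling can be enabled from the CLI; a sketch assuming the cluster name and zone from the connect example above and the default node pool (flag names may vary slightly by gcloud version):
gcloud container clusters update myfirstakscluster1 --zone us-central1-c --node-pool default-pool --enable-autoscaling --min-nodes 1 --max-nodes 5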

Limit Range
• We can limit resource utilization at the POD level, e.g. the specific amount of CPU or memory that may be used. This ensures that the POD is not utilizing more resources from the
node than intended.
• The requests and limits set this way also feed into autoscaling, since the HPA measures utilization against the configured requests when deciding whether to scale.
• These limits are defined in manifest files. In the manifest file you can set the namespace value, so that the limits apply only to the pods in that
namespace.
○ For example, if the deployment manifest file does not mention resource limits or metrics, you can later apply a LimitRange-specific manifest file mentioning the
namespace (see the sketch below).
• Limit Range is applicable at the POD level.
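A minimal LimitRange sketch; the my-limit-range name matches the describe example earlier, my-ns matches the ResourceQuota example below, and the values are illustrative:
apiVersion: v1
kind: LimitRange
metadata:
  name: my-limit-range
  namespace: my-ns            # applies only to pods in this namespace
spec:
  limits:
  - type: Container
    defaultRequest:           # default requests when a container declares none
      cpu: 100m
      memory: 128Mi
    default:                  # default limits when a container declares none
      cpu: 250m
      memory: 256Mi
    max:                      # hard per-container ceiling
      cpu: 500m
      memory: 512Mi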

ResourceQuota
• Resource Quota is applicable at the namespace level.
• You can limit the resource quota (CPU and memory) within a namespace.
• This helps control the resource consumption of all pods and containers within that namespace.
• These limits are defined in manifest files (see the sketch below).
○ These manifest files can be applied at the namespace level.
 For example: kubectl create namespace my-ns
kubectl apply -f .\resourcequota.yml -n my-ns
In the above example, first a namespace is created and then the resource quota is applied to that namespace.
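A minimal resourcequota.yml sketch for the example above; the name and the values are illustrative:
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-resource-quota     # hypothetical name
spec:
  hard:
    pods: "10"                # at most 10 pods in the namespace
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi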

Interview Questions
What is a Service in Kubernetes?
What is a ClusterIP Service?
What is a NodePort Service?
What is a LoadBalancer Service?
How does POD communication happen in Kubernetes?
• DNS-based service discovery: Kubernetes provides a built-in DNS service called CoreDNS. Each Pod is automatically assigned a DNS name based on its metadata.
• This DNS name can be used by other pods or services to discover and communicate with the POD.
• DNS name resolution is handled by CoreDNS, which maps DNS names to the Pod's IP address.
• This works only internally, between PODs.
Which service is secure in Kubernetes?
• ClusterIP, as it is an internal-only service within the cluster.
How is a Service mapped to its respective PODs?
• Using labels and selectors.
• Labels are key-value pairs that are attached to Kubernetes objects, including pods and services.
• Selectors are used to match labels and objects.
How do the containers in a POD communicate with each other?
• Using localhost.
• They share the same network namespace, so they can talk over the loopback address 127.0.0.1.
• It is just like running on the same machine.
How to look at which PODs are mapped to a Service?
• kubectl describe service <service name>
Common commands to work with Services:
• kubectl get services
• kubectl describe service <service name>
• kubectl edit service <service name>
• kubectl delete service <service name>
NodePort port allocation starts from?
• 30000
