K8S
HISTORY:
Google initially created an internal system called Borg (a later successor was called Omega) to manage its thousands of applications. Kubernetes grew out of Borg, was open-sourced by Google in 2014, and was later donated to the CNCF.
The word Kubernetes comes from a Greek word meaning "helmsman" or "pilot".
Kubernetes: open-sourced in 2014
INTRO:
It is used to automate many of the manual processes involved in deploying, managing, and scaling containerized applications.
MEM -- > GOOGLE -- > CLUSTER -- > MULTIPLE APPS OF GOOGLE -- > BORG -- >
ARCHITECTURE:
DOCKER : CNCA
K8S: CNPCA
C : CLUSTER
N : NODE
P : POD
C : CONTAINER
A : APPLICATION
COMPONENTS:
MASTER:
1. API SERVER: communicates with the user (takes commands, executes them, and gives the output)
WORKER:
CLUSTER TYPES:
MINIKUBE:
It is platform independent.
REQUIREMENTS:
2 CPUs or more
Internet connection
SETUP:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
Then download the minikube binary and give it executable permission (chmod +x) so it works as a command.
POD:
It is a group of one or more containers.
When we create a pod, the containers inside it share the same network namespace and can share the same storage volumes.
While creating a pod, we must specify the image, along with any necessary configuration and resource limits.
K8s does not manage containers directly; it manages pods (the smallest deployable unit).
1. Imperative (command)
2. Declarative (manifest file)
DECLARATIVE: a manifest file has four top-level fields:
apiVersion:
kind:
metadata:
spec:
vim pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - image: vinodvanama/paytmtrain:latest
    name: cont1
execution:
kubectl apply -f pod.yml
kubectl get pods
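Because containers in a pod share one network namespace, one container can reach another over localhost. A minimal sketch of a two-container pod (image names and the command are illustrative):

```yml
apiVersion: v1
kind: Pod
metadata:
  name: multi-cont-pod
spec:
  containers:
  - name: web
    image: nginx:latest          # serves on port 80
  - name: helper
    image: busybox:latest
    # the helper reaches the web container on localhost:80
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]
```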
===========================================================================
REPLICASET:
rs -- > pods
LABELS: individual pods are difficult to manage because they have different names,
so we give a common label to group them and work with them together.
vim replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: movies
  labels:
    app: paytm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: paytm
  template:
    metadata:
      labels:
        app: paytm
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
ADV:
Self healing
scaling
DRAWBACKS:
A ReplicaSet cannot perform rolling updates or rollbacks; updating the image does not replace the running pods.
DEPLOYMENT:
vim deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: paytm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: paytm
  template:
    metadata:
      labels:
        app: paytm
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
vim .bashrc (add shortcuts, for example: alias kgr='kubectl get rs' and alias kgp='kubectl get pods')
source .bashrc
kgr
kgp
================================================================================
KOPS:
used to create, destroy, upgrade, and maintain a highly available, production-grade Kubernetes
cluster.
ADVANTAGES:
ALTERNATIVES:
AWS needs to grant permissions for it, so we create an IAM user to allocate permissions for the kops tool.
IAM -- > USER -- > CREATE USER -- > NAME: KOPS -- > Attach Policies Directly -- >
AdministratorAccess -- > NEXT -- > CREATE USER
USER -- > SECURITY CREDENTIALS -- > CREATE ACCESS KEYS -- > CLI -- > CHECKBOX -- > CREATE
ACCESS KEYS -- > DOWNLOAD
wget https://github.com/kubernetes/kops/releases/download/v1.25.0/kops-linux-amd64
chmod +x kops-linux-amd64
mv kops-linux-amd64 /usr/local/bin/kops
mv kubectl /usr/local/bin/kubectl
vim .bashrc
source .bashrc
export KOPS_STATE_STORE=s3://rahamsdevopsbatchmay292024pm.k8s.local
Suggestions:
ADMIN ACTIVITIES:
NOTE: In real time we use a five-node cluster: two master nodes and three worker nodes.
NOTE: It is my humble request not to delete the cluster manually or delete any server; use the kops delete cluster command to delete the cluster.
==================================================================
NAMESPACES:
CLUSTER: HOUSE
NAMESPACES: ROOM
If the dev team creates a pod in the dev namespace, the testing team cannot access it.
We cannot access objects in one namespace from another namespace.
TYPES:
default : the default namespace; objects are created here unless another one is specified.
kube-node-lease : holds node Lease objects, which each node updates as a heartbeat so the control plane can detect node failures.
kube-system : objects that k8s itself creates by default are stored in this ns.
kube-public : readable by all users; mostly holds public cluster information.
NOTE: Every component of the Kubernetes cluster itself runs in the form of a pod (in the kube-system namespace).
NOTE: BY DEFAULT A K8S NAMESPACE PROVIDES ISOLATION BUT NOT RESTRICTION.
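The isolation described above can be sketched with a Namespace object and a pod created inside it (names are illustrative):

```yml
# create a dev namespace
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
# this pod exists only inside the dev namespace
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: dev
spec:
  containers:
  - name: cont1
    image: nginx:latest
```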
SERVICES:
TYPES:
1. CLUSTERIP: exposes the application on an internal IP, reachable only from within the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: ClusterIP
  selector:
    app: movies
  ports:
  - port: 80
DRAWBACK:
The application can be accessed only from within the cluster.
2. NODEPORT: exposes the application on a static port (30000-32767) on every node.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  selector:
    app: movies
  ports:
  - port: 80
    nodePort: 31111
NOTE: UPDATE THE SG (REMOVE OLD TRAFFIC AND GIVE ALL TRAFFIC)
DRAWBACK:
PORT RESTRICTION (the nodePort must fall in the 30000-32767 range).
3. LOADBALANCER: It will expose our app and distribute the load between pods.
It will expose the application with a DNS name [Domain Name System] -- > port 53
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: swiggy
  name: swiggy-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
      - name: cont1
        image: rahamshaik/trainservice:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: abc
spec:
  type: LoadBalancer
  selector:
    app: swiggy
  ports:
  - port: 80
    targetPort: 80
=======================================================================================
METRIC SERVER:
It collects metrics like CPU and RAM from all the pods and nodes in the cluster.
It is resource efficient, using about 1 millicore of CPU and 2 MB of memory per node in the cluster.
Horizontal scaling: create new pods; Vertical scaling: add resources to existing pods.
Example: if pod-1 has 50% load and pod-2 has 50% load, the average is (50+50)/2 = 50.
But if pod-1 exceeds 60% while pod-2 stays at 50%, the average becomes 55%; when it exceeds the target, a pod-3 must be created.
Here we need the metric server, whose job is to collect the metrics (CPU & memory info).
The HPA analyses the metrics periodically (every 30 sec) and creates a new pod if needed.
COOLING PERIOD: the amount of time the HPA waits before terminating pods after the load decreases.
Scaling can be done only for scalable objects (ex: ReplicaSet, Deployment, ReplicationController).
The controller periodically adjusts the number of replicas in an RS, RC, or Deployment depending on the average metric value.
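The HPA behaviour described above can be sketched with an autoscaling/v2 manifest (the replica counts and target utilization are illustrative):

```yml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: movies-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: movies               # the deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU crosses 50%
```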
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
apt update -y
apt install stress -y
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 100s
DAEMONSET: runs exactly one copy of the pod on every node in the cluster (new nodes get the pod automatically).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: swiggy
  name: swiggy-deploy
spec:
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
==============================================================================================
QUOTAS:
By default, a pod in K8s runs with no limitations on memory and CPU.
A quota limits the number of objects that can be created in a namespace and the total amount of resources they can consume.
When we create a pod, the scheduler checks the limits of the node before deploying the pod on it.
1 CPU = 1000 millicpus (half a CPU = 500 millicpus, or 0.5 CPU)
IMPORTANT:
Every pod in the namespace must have CPU limits.
The amount of CPU used by all pods inside the namespace must not exceed the specified limit.
DEFAULT RANGE:
CPU :
MAX = LIMIT = 1
MEMORY :
MAX = LIMIT = 1G
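The default max limits listed above are typically enforced with a LimitRange object; a hedged sketch for a dev namespace (the default values are illustrative):

```yml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    max:
      cpu: "1"          # MAX = LIMIT = 1 CPU
      memory: 1Gi       # MAX = LIMIT = 1Gi
    default:
      cpu: "0.5"        # applied when a container sets no limit (illustrative)
      memory: 512Mi
```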
vim dev-quota.yml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "5"
    limits.cpu: "1"
    limits.memory: 1Gi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "1"
            memory: 512Mi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "0.2"
            memory: 100Mi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "0.2"
            memory: 100Mi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          requests:
            cpu: "0.2"
            memory: 100Mi
==================================================================================
PV:
Once a PV is created, it can be bound to a PersistentVolumeClaim (PVC), which is a request for storage by a pod.
When a pod requests storage via a PVC, K8s searches for a suitable PV to satisfy the request.
The PV is bound to the PVC and the pod can use the storage.
If no suitable PV is found, K8s will either dynamically create a new one (if the storage class supports dynamic provisioning) or the PVC will remain unbound.
PVC:
A PVC requests a PV with your desired specification (size, access modes, speed, etc.) from K8s, and once a suitable PV is found it is bound to the PVC.
Once the user finishes their work, the attached PV can be released, and the underlying storage can be reclaimed and recycled for future use.
RESTRICTIONS:
pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-07e5c6c3fe273239f
    fsType: ext4
pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
dep.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvdeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
      - name: raham
        image: centos
        command: ["sleep", "infinity"]   # keep the container running so we can exec into it
        volumeMounts:
        - name: my-pv
          mountPath: "/tmp/persistent"
      volumes:
      - name: my-pv
        persistentVolumeClaim:
          claimName: my-pvc
cd /tmp/persistent/
ls
vim raham
exit
Now delete the pod; a new pod will be created, and in that pod you will see the same content.
SIDE CAR:
The main container runs the application, and the helper (sidecar) container assists the main container (e.g. log shipping or proxying).
INIT CONTAINER:
Runs and completes before the main container starts; used for setup tasks.
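A hedged sketch of a pod with an init container and a sidecar helper sharing a volume (images, commands, and names are illustrative):

```yml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  initContainers:
  - name: init-setup              # runs to completion before the main container starts
    image: busybox:latest
    command: ["sh", "-c", "echo setup done > /work/ready"]
    volumeMounts:
    - name: shared
      mountPath: /work
  containers:
  - name: main                    # application container
    image: nginx:latest
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
  - name: sidecar                 # helper container, e.g. a log shipper
    image: busybox:latest
    command: ["sh", "-c", "tail -f /work/ready"]
    volumeMounts:
    - name: shared
      mountPath: /work
  volumes:
  - name: shared
    emptyDir: {}
```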
========================================================================
ENV VARIABLES:
To set env vars, include the env or envFrom field in the pod configuration file.
ENV:
allows you to set environment variables for a container, specifying a value directly for each variable that you name.
ENVFROM:
allows you to set environment variables for a container by referencing either a ConfigMap or a Secret.
CONFIGMAPS:
Used to store non-sensitive data as key-value pairs, files, or command-line arguments that can be used by pods, containers, and other resources in the cluster.
The limit of ConfigMap data is only 1 MB (we cannot store more than that).
If we want to store a larger amount of data, we have to mount a volume or use a separate database or file service.
USE CASES:
By using ConfigMaps we can store data like IP addresses, URLs, DNS names, etc.
vim vars
MYSQL_ROOT_PASSWORD=Raham123
MYSQL_USER=admin
kubectl get cm
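The env and envFrom fields described above can be sketched as follows (the ConfigMap name myvars is an assumption, e.g. created with kubectl create cm myvars --from-env-file=vars):

```yml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: cont1
    image: mysql:latest
    env:                        # set a variable directly
    - name: MYSQL_DATABASE
      value: moviesdb
    envFrom:                    # pull all keys from the ConfigMap
    - configMapRef:
        name: myvars
```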
SECRETS: used to store sensitive data like passwords, ssh keys, etc. The data is base64-encoded, not encrypted by default.
By default, K8s creates some secrets itself; these are used for communication inside the cluster.
TYPES:
Generic: creates secrets from files, directories, or literal values.
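A generic Secret and a pod consuming it can be sketched as below (names and values are illustrative; values in stringData are base64-encoded by K8s on creation):

```yml
apiVersion: v1
kind: Secret
metadata:
  name: password
type: Opaque
stringData:                     # stored base64-encoded, not encrypted by default
  MYSQL_ROOT_PASSWORD: Raham123
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: cont1
    image: mysql:latest
    envFrom:
    - secretRef:
        name: password          # inject all secret keys as env vars
```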
kubectl get po
TO SEE SECRETS:
kubectl get secrets password -o yaml
===============================================================================