
K8S:

LIMITATIONS OF DOCKER SWARM:

1. CAN'T DO AUTO-SCALING AUTOMATICALLY

2. CAN'T DO LOAD BALANCING AUTOMATICALLY

3. NO DEFAULT DASHBOARD

4. WE CAN'T PLACE A CONTAINER ON A REQUIRED SERVER.

5. SUITABLE ONLY FOR SIMPLE APPS.

DOCKER ALTERNATIVES: containerd, rkt (rocket), CRI-O

HISTORY:

Initially Google created an internal system called Borg (and later a successor called Omega) to manage its
thousands of applications. Based on that experience, Google built Kubernetes, open-sourced it, and it is now
maintained under the CNCF.

The internal system was Borg; the open-source project Google released was named Kubernetes.

The word Kubernetes comes from a Greek word meaning pilot or helmsman.

Kubernetes was open-sourced in 2014.

The first K8s version (v1.0) came in 2015.

INTRO:

It is an open-source container orchestration platform.

It automates many manual processes such as deploying, managing, and scaling containerized applications.

Kubernetes was developed by Google using the Go language.

MEM (machines) -- > GOOGLE -- > CLUSTER -- > MULTIPLE APPS OF GOOGLE -- > MANAGED BY BORG

Google open-sourced Kubernetes (based on Borg) in 2014 and later donated it to the CNCF.

1st version was released in 2015.

ARCHITECTURE:
DOCKER : CNCA

K8S: CNPCA

C : CLUSTER

N : NODE

P : POD

C : CONTAINER

A : APPLICATION

COMPONENTS:

MASTER:

1. API SERVER: communicates with the user (takes commands, executes them & gives output)

2. ETCD: database of the cluster (stores the complete info of the cluster as key-value pairs)

3. SCHEDULER: selects the worker node on which to schedule pods (depends on the hardware/resources of the node)

4. CONTROLLER MANAGER: controls the k8s objects (network, service, node)

WORKER:

1. KUBELET : it is an agent (it reports all activities to the master)

2. KUBE-PROXY: it deals with networking (IPs, networks, ports)

3. POD: group of containers (inside the pod we have the app)

Note: most control-plane components of a cluster themselves run as pods (in the kube-system namespace).

CLUSTER TYPES:

1. SELF MANAGED: WE NEED TO CREATE & MANAGE THEM


minikube = single node cluster

kubeadm = multi node cluster (manual)

kops = multi-node cluster (automation)

2. CLOUD-BASED: CLOUD PROVIDERS WILL MANAGE THEM

AWS = EKS = ELASTIC KUBERNETES SERVICE

AZURE = AKS = AZURE KUBERNETES SERVICE

GOOGLE = GKE = GOOGLE KUBERNETES ENGINE

MINIKUBE:

It is a tool used to set up a single-node Kubernetes cluster.

Here the master and worker run on the same machine.

It contains the API server, etcd database and a container runtime.

It is used for development, testing, and experimentation purposes on a local machine.

It is platform independent.

Installing Minikube is simple compared to other tools.

NOTE: we don't use this for real-time production.

REQUIREMENTS:

2 CPUs or more

2GB of free memory

20GB of free disk space

Internet connection

Container or virtual machine manager, such as: Docker.


Kubectl is the command line tool for k8s

if we want to execute commands we need to use kubectl.

SETUP:

sudo apt update -y

sudo apt upgrade -y

sudo apt install curl wget apt-transport-https -y

sudo curl -fsSL https://get.docker.com -o get-docker.sh

sudo sh get-docker.sh

sudo curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

sudo mv minikube-linux-amd64 /usr/local/bin/minikube

sudo chmod +x /usr/local/bin/minikube

sudo minikube version

sudo curl -LO "https://dl.k8s.io/release/$(curl -L -s


https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

sudo curl -LO "https://dl.k8s.io/$(curl -L -s


https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"

sudo echo "$(cat kubectl.sha256) kubectl" | sha256sum --check

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

sudo minikube start --driver=docker --force

NOTE: When you download a command as a binary file, it needs to be placed in /usr/local/bin,

because directories like /usr/local/bin are on the PATH, which is where Linux looks for commands,

and you need to give executable permission for that binary file to work as a command.
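A quick sanity check that the setup worked (a minimal sketch; version numbers will differ on your machine):

minikube status            # host, kubelet and apiserver should show Running

kubectl version --client   # confirms the kubectl binary is on the PATH

kubectl get nodes          # the single minikube node should be Ready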

POD:

It is the smallest unit of deployment in K8s.

It is a group of containers.

Pods are ephemeral (short-lived objects).

Mostly we use a single container inside a pod, but if required, we can create multiple containers inside the same pod.

When we create a pod, the containers inside it can share the same network namespace and the same storage volumes.

While creating a pod, we must specify the image, along with any necessary configuration and resource limits.

K8s does not manage containers directly; it deals only with pods.

We can create this pod in two ways,

1. Imperative(command)

2. Declarative (Manifest file)

IMPERATIVE:

kubectl run pod1 --image vinodvanama/paytmmovies:latest

kubectl get pods/pod/po

kubectl get pod -o wide

kubectl describe pod pod1

kubectl delete pod pod1

DECLARATIVE: by using a file called a manifest file

MANDATORY FIELDS: without these fields we can't create a manifest

apiVersion:

kind:

metadata:

spec:

vim pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - image: vinodvanama/paytmtrain:latest
    name: cont1

execution:

kubectl create -f pod.yml

kubectl get pods/pod/po

kubectl get pod -o wide

kubectl describe pod pod1

kubectl delete -f pod.yml

DRAWBACK: once a pod is deleted, it is gone; it will not be recreated automatically.

===========================================================================

REPLICASET:

rs -- > pods

it will create multiple copies of the same pod.

if we delete one pod, it will automatically create a new pod.

All the pods will have the same config.

only the pod names will be different.

LABELS: individual pods are difficult to manage because they have different names,

so we give them a common label to group them and work with them together.

SELECTOR: used to select pods with the same labels.


use kubectl api-resources to check object info (kinds, short names, API versions)

vim replicaset.yml

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: movies
  labels:
    app: paytm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: paytm
  template:
    metadata:
      labels:
        app: paytm
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest

To list rs :kubectl get rs/replicaset

To show additional info :kubectl get rs -o wide


To show complete info :kubectl describe rs name-of-rs

To delete the rs :kubectl delete rs name-of-rs

to get labels of pods : kubectl get pods -l app=paytm

to delete pods : kubectl delete po -l app=paytm

TO scale rs : kubectl scale rs/movies --replicas=10 (LIFO)

LIFO: LAST IN FIRST OUT.

IF A POD WAS CREATED LAST, IT WILL BE DELETED FIRST WHEN WE SCALE BACK IN.

ADV:

Self healing

scaling

DRAWBACKS:

1. we can't do rolling updates or rollbacks; we can't update the application in an rs.

DEPLOYMENT:

deploy -- > rs -- > pods

we can update the application.

it is a higher-level k8s object.

vim deploy.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: paytm
spec:
  replicas: 3
  selector:
    matchLabels:
      app: paytm
  template:
    metadata:
      labels:
        app: paytm
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest

To list deployment :kubectl get deploy

To show additional info :kubectl get deploy -o wide

To show complete info :kubectl describe deploy name-of-deployment

To delete the deploy :kubectl delete deploy name-of-deploy

to get labels of pods :kubectl get pods -l app=paytm

TO scale deploy :kubectl scale deploy/name-of-deploy --replicas=10 (LIFO)

To edit deploy :kubectl edit deploy/name-of-deploy

to show all pod labels :kubectl get pods --show-labels

To delete all pods :kubectl delete pod --all

kubectl rollout history deploy/movies

kubectl rollout undo deploy/movies

kubectl rollout status deploy/movies

kubectl rollout pause deploy/movies

kubectl rollout resume deploy/movies
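
The rollout commands above are most useful after changing the pod template, for example the image. A minimal sketch (the :v2 tag is hypothetical and assumes such an image exists):

kubectl set image deploy/movies cont1=yashuyadav6339/movies:v2

kubectl rollout status deploy/movies

kubectl rollout history deploy/movies

kubectl rollout undo deploy/movies --to-revision=1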


COMMANDS FOR SHORTCUTS:

vim .bashrc

alias kgp="kubectl get pods"

alias kgr="kubectl get rs"

alias kgd="kubectl get deploy"

source .bashrc

kgr

kgp

================================================================================

KOPS:

INFRASTRUCTURE: Resources used to run our application on cloud.

EX: EC2, VPC, ALB, ASG, etc.

Minikube -- > single node cluster

All the pods run on a single node,

so if that node is deleted, all the pods are gone.

KOPS:

kOps, also known as Kubernetes Operations,

is a free and open-source tool

used to create, destroy, upgrade, and maintain a highly available, production-grade Kubernetes
cluster.

Depending on the requirement, kOps can also provision the cloud infrastructure.

kOps is mostly used for deploying AWS and GCE Kubernetes clusters.


But officially, the tool fully supports only AWS. Support for other cloud providers (such as DigitalOcean,
GCP, and OpenStack) is in the beta stage.

ADVANTAGES:

• Automates the provisioning of AWS and GCE Kubernetes clusters

• Deploys highly available Kubernetes masters

• Supports rolling cluster updates

• Autocompletion of commands in the command line

• Generates Terraform and CloudFormation configurations

• Manages cluster add-ons.

• Supports state-sync model for dry-runs and automatic idempotency

• Creates instance groups to support heterogeneous clusters

ALTERNATIVES:

Amazon EKS , MINIKUBE, KUBEADM, RANCHER, TERRAFORM.

STEP-1: GIVING PERMISSIONS

kOps is a third-party tool; for it to create infrastructure on AWS,

AWS needs to grant it permission, so we create an IAM user to allocate permissions for the kOps tool.

IAM -- > USERS -- > CREATE USER -- > NAME: KOPS -- > Attach Policies Directly -- >
AdministratorAccess -- > NEXT -- > CREATE USER

USER -- > SECURITY CREDENTIALS -- > CREATE ACCESS KEY -- > CLI -- > CHECKBOX -- > CREATE
ACCESS KEY -- > DOWNLOAD

aws configure (run this command on the server and enter the access keys)

STEP-2: INSTALL KUBECTL AND KOPS


curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

wget https://github.com/kubernetes/kops/releases/download/v1.25.0/kops-linux-amd64

chmod +x kops-linux-amd64 kubectl

mv kubectl /usr/local/bin/kubectl

mv kops-linux-amd64 /usr/local/bin/kops

vim .bashrc

export PATH=$PATH:/usr/local/bin/ -- > save and exit

source .bashrc
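
Before creating the cluster, it helps to confirm both binaries and the AWS credentials work (a small sanity check):

kops version

kubectl version --client

aws s3 ls    # should succeed with the access keys entered via 'aws configure'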

STEP-3: CREATING THE BUCKET

aws s3api create-bucket --bucket rahamsdevopsbatchmay292024pm.k8s.local --region us-east-1

aws s3api put-bucket-versioning --bucket rahamsdevopsbatchmay292024pm.k8s.local --region us-east-1 --versioning-configuration Status=Enabled

export KOPS_STATE_STORE=s3://rahamsdevopsbatchmay292024pm.k8s.local

STEP-4: CREATING THE CLUSTER

kops create cluster --name rahamdevops.k8s.local --zones us-east-1a --master-count=1 --master-size t2.medium --node-count=2 --node-size t2.micro

kops update cluster --name rahamdevops.k8s.local --yes --admin

Suggestions:

* list clusters with: kops get cluster

* edit this cluster with: kops edit cluster rahamdevops.k8s.local

* edit your node instance group: kops edit ig --name=rahamdevops.k8s.local nodes-us-east-1a

* edit your master instance group: kops edit ig --name=rahamdevops.k8s.local master-us-east-1a
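
Once the update finishes, the EC2 instances take a few minutes to join; a quick way to verify the cluster (sketch):

kops validate cluster --wait 10m

kubectl get nodes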

ADMIN ACTIVITIES:

To scale the worker nodes:


kops edit ig --name=rahamdevops.k8s.local nodes-us-east-1a

kops update cluster --name rahamdevops.k8s.local --yes --admin

kops rolling-update cluster --yes

ADMIN ACTIVITIES:

kops update cluster --name rahamdevops.k8s.local --yes

kops rolling-update cluster

NOTE: In real time we typically use a five-node cluster: two master nodes and three worker nodes.

NOTE: it's my humble request to all of you not to delete the cluster manually and not to delete any
server directly; use the command below to delete the cluster.

TO DELETE: kops delete cluster --name rahamdevops.k8s.local --yes

==================================================================

NAMESPACES:

NAMESPACE: it is used to divide the cluster between multiple teams in real time.

it is used to isolate environments.

CLUSTER: HOUSE

NAMESPACES: ROOMS

TEAM MATES: FAMILY MEMBERS

Each namespace is isolated:

if you are in room-1, you can't see what is inside room-2.

If the dev team creates a pod in the dev ns, the testing team can't access it;
we can't access the objects of one namespace from another namespace.

TYPES:

default : the default namespace; all objects are created here unless another namespace is specified.

kube-node-lease : stores node Lease objects, which nodes use as heartbeats to the control plane.

kube-public : all public (readable-by-everyone) objects are stored here.

kube-system : objects that k8s creates for itself by default are stored in this ns.

NOTE: most components of the Kubernetes cluster are created in the form of pods,

and all these pods are stored in the kube-system ns.

kubectl get pod -n kube-system : to list all pods in kube-system namespace

kubectl get pod -n default : to list all pods in default namespace

kubectl get pod -n kube-public : to list all pods in kube-public namespace

kubectl get po -A : to list all pods in all namespaces

kubectl get po --all-namespaces

kubectl create ns dev : to create namespace

kubectl config set-context --current --namespace=dev : to switch to the namespace

kubectl config view --minify | grep namespace : to see current namespace

kubectl run dev1 --image nginx

kubectl run dev2 --image nginx

kubectl run dev3 --image nginx

kubectl create ns test : to create namespace

kubectl config set-context --current --namespace=test : to switch to the namespace

kubectl config view --minify | grep namespace : to see current namespace

kubectl get po -n dev

kubectl delete pod dev1 -n dev

kubectl delete ns dev : to delete namespace


kubectl delete pod --all: to delete all pods

NOTE: BY DEFAULT A K8S NAMESPACE PROVIDES ISOLATION BUT NOT RESTRICTION.

TO RESTRICT A USER'S ACCESS TO A NAMESPACE IN REAL TIME WE USE RBAC:

WE CREATE A USER, WE CREATE ROLES, AND WE ATTACH A ROLE TO THE USER (see the sketch below).
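
A minimal sketch of the Role and RoleBinding that would restrict a user to reading pods in the dev namespace (the user name dev-user is hypothetical):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io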

SERVICE: It is used to expose the application in k8s.

TYPES:

1. CLUSTERIP: It will work inside the cluster.

it will not expose to outer world.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: ClusterIP
  selector:
    app: movies
  ports:
  - port: 80

DRAWBACK:

We cannot access the app from outside the cluster.
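
To confirm the ClusterIP service works from inside the cluster, one quick check (a sketch using a temporary busybox pod):

kubectl get svc service1

kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://service1:80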

2. NODEPORT: It will expose our application in a particular port.

Range: 30000 - 32767 (in sg we need to give all traffic)

if we don't specify a nodePort, the k8s service will take a random port number from that range.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: movies
  name: movies-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  type: NodePort
  selector:
    app: movies
  ports:
  - port: 80
    nodePort: 31111

NOTE: UPDATE THE SG (REMOVE OLD TRAFFIC AND GIVE ALL TRAFFIC)
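
After updating the security group, the app can be reached on any worker node's public IP (a sketch; substitute your node's IP for the placeholder):

kubectl get svc service1

kubectl get nodes -o wide     # shows the EXTERNAL-IP of each node

curl http://<node-public-ip>:31111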

DRAWBACK:

EXPOSING PUBLIC-IP & PORT

PORT RESTRICTION.

3. LOADBALANCER: it will expose our app and distribute the load between pods.
it will expose the application with a DNS name [DNS = Domain Name System -- > port 53].

to create our own DNS records we use Route 53.

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: swiggy
  name: swiggy-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
      - name: cont1
        image: rahamshaik/trainservice:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: abc
spec:
  type: LoadBalancer
  selector:
    app: swiggy
  ports:
  - port: 80
    targetPort: 80
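
Once applied, the cloud provider provisions a load balancer; its DNS name appears under EXTERNAL-IP after a couple of minutes (sketch):

kubectl get svc abc

curl http://<load-balancer-dns-name>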

==================================================================================

scaling: increasing the count of pods.

why to scale: to handle increasing load.

METRICS SERVER:

it collects metrics like CPU and RAM from all the pods and nodes in the cluster.

we can use kubectl top po / kubectl top no to see the metrics.

previously this job was done by Heapster.
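
Metrics Server is not installed by default on most self-managed clusters. The usual way to add it, plus a quick check (a sketch; on some clusters extra kubelet TLS flags may be needed):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

kubectl get pod -n kube-system | grep metrics-server

kubectl top no

kubectl top po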

Metrics Server offers:

A single deployment that works on most clusters (see Requirements)

Fast autoscaling, collecting metrics every 15 seconds.

Resource efficiency, using 1 milli core of CPU and 2 MB of memory for each node in a cluster.

Scalable support up to 5,000 node clusters.

You can use Metrics Server for:

CPU/Memory based horizontal autoscaling (Horizontal Autoscaling)

Automatically adjusting/suggesting resources needed by containers (Vertical Autoscaling)


Horizontal: add new pods.

Vertical: resize the existing pods.

In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a
Deployment or ReplicaSet), with the aim of automatically scaling the workload to match demand.

Example: if you have pod-1 with 50% load and pod-2 with 50% load, then the average is ((50+50)/2) = 50%.

But if pod-1 goes up to 60% and pod-2 stays at 50%, the average becomes 55%; here we need to create
a pod-3 because the average exceeds the target.

Here we need the metrics server, whose job is to collect the metrics (CPU & memory info).

The metrics server is connected to the HPA and gives that information to the HPA.

The HPA analyses the metrics periodically (every 15 seconds by default) and creates a new pod if needed.

COOLING PERIOD: the amount of time taken to terminate pods after the load has decreased.

Scaling can be done only for scalable objects (ex: RS, Deployment, RC).

HPA is implemented as a K8s API resource and a controller.

The controller periodically adjusts the number of replicas in an RS, RC or Deployment depending on the average utilization.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest

kubectl apply -f hpa.yml

kubectl get all

kubectl get deploy

kubectl autoscale deployment movies --cpu-percent=20 --min=1 --max=10

kubectl get hpa

kubectl describe hpa movies

kubectl get all

open a second terminal and run

kubectl get po --watch

come to the first terminal and exec into a pod

kubectl exec mydeploy-6bd88977d5-7s6t8 -it -- /bin/bash

apt update -y

apt install stress -y

stress
stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 100s

check terminal two to see live pods
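
Note that CPU-based autoscaling only works when the containers declare resources.requests.cpu; otherwise the HPA target shows <unknown>. The kubectl autoscale command above can also be written declaratively; a minimal sketch:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: movies
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: movies
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 20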

DAEMONSET: used to create one pod on each worker node.

It is a workload object like a Deployment, but instead of a replica count it runs exactly one pod per node.

if we create a new node, a pod will be automatically created on it.

if we delete an old node, its pod will be automatically removed.

in real time, daemonsets are generally not removed.

Usecases: we can create pods for logging and monitoring of nodes;

components such as kube-proxy and many logging/monitoring agents run as DaemonSets.

NOTE: in a DaemonSet we don't specify the replicas.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: swiggy
  name: swiggy-deploy
spec:
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
      - name: cont1
        image: rahamshaik/moviespaytm:latest
        ports:
        - containerPort: 80
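
To check that exactly one pod landed on each worker node (a quick verification):

kubectl get ds

kubectl get po -o wide     # the NODE column should show a different node per pod

kubectl describe ds swiggy-deploy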

==================================================================================

QUOTAS:

a k8s cluster can be divided into namespaces.

By default a pod in K8s runs with no limits on memory and CPU,

but we need to give limits for the pod.

A ResourceQuota can limit the number of objects that can be created in a namespace and the total amount of resources they use.

when we create a pod, the scheduler checks the available resources of the nodes to decide where to place it.

here we can set limits on CPU, memory and storage.

CPU is measured in cores and memory in bytes.

1 cpu = 1000 millicpu (half a cpu = 500 millicpu, or 0.5 cpu)

Here Request means how much we ask for (guaranteed to the container),

Limit means the maximum it is allowed to use.

limits can be given per container/pod, and namespace-wide defaults can be set with a LimitRange.

if nothing is mentioned, there is no limit by default.

if you mention both request and limit, they are used as given.

if you don't mention the request but mention the limit, then Request = Limit.

if you mention the request and don't mention the limit, then Request != Limit (the limit stays unset).

IMPORTANT:

Every pod in the namespace must have CPU limits.

The amount of CPU used by all pods inside the namespace must not exceed the specified limit.

DEFAULT RANGE:

CPU :

MIN = REQUEST = 0.5

MAX = LIMIT = 1

MEMORY :

MIN = REQUEST = 500M

MAX = LIMIT = 1G

kubectl create ns dev

kubectl config set-context $(kubectl config current-context) --namespace=dev

vim dev-quota.yml

apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    pods: "5"
    limits.cpu: "1"
    limits.memory: 1Gi

kubectl create -f dev-quota.yml

kubectl get quota
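
kubectl describe shows how much of the quota is already consumed, which is handy while trying the examples below:

kubectl describe quota dev-quota -n dev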


EX-1: Mentioning Limits = SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "1"
            memory: 512Mi

kubectl create -f dep.yml
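
With the dev-quota above (limits.cpu: "1"), three replicas with a 1-CPU limit each cannot all be admitted, so some pods will be missing; the reason shows up as FailedCreate events on the ReplicaSet:

kubectl get deploy,rs,po

kubectl describe rs -l app=movies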

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "0.2"
            memory: 100Mi

kubectl create -f dep.yml

EX-2: MENTION LIMITS & REQUESTS = SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "0.2"
            memory: 100Mi

EX-3: MENTION only REQUESTS = NOT SAFE WAY

apiVersion: apps/v1
kind: Deployment
metadata:
  name: movies
  labels:
    app: movies
spec:
  replicas: 3
  selector:
    matchLabels:
      app: movies
  template:
    metadata:
      labels:
        app: movies
    spec:
      containers:
      - name: cont1
        image: yashuyadav6339/movies:latest
        resources:
          requests:
            cpu: "0.2"
            memory: 100Mi

==================================================================================

STATEFUL: STORES PREVIOUS DATA.

STATELESS: DOESN'T STORE PREVIOUS DATA.

pv:

Persistent means always available.


PVs are independent; they can exist even if no pod is using them.

A PV is created by an administrator or dynamically by a StorageClass.

Once a PV is created, it can be bound to a Persistent Volume Claim (PVC), which is a request for
storage by a pod.

When a pod requests storage via a PVC, K8S will search for a suitable PV to satisfy the request.

PV is bound to the PVC and the pod can use the storage.

If no suitable PV is found, K8S will either dynamically create a new one (if the storage class supports
dynamic provisioning) or the PVC will remain unbound.

pvc:

To use a PV we need to claim the volume using a PVC.

A PVC requests a PV with your desired specification (size, access modes, speed etc.) from k8s, and once
a suitable PV is found it is bound to the PVC.

After the binding is done, the pod can mount it as a volume.

once the user has finished its work, the PVC can be released and the underlying PV can be reclaimed &
recycled for future use.

RESTRICTIONS:

1. Instances must be in the same AZ as the EBS volume.

2. EBS supports mounting to only a single EC2 instance.

pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-07e5c6c3fe273239f
    fsType: ext4

pvc.yml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
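
After creating both objects, the PVC should show as Bound against my-pv (a quick check):

kubectl create -f pv.yml

kubectl create -f pvc.yml

kubectl get pv,pvc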

dep.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvdeploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: swiggy
  template:
    metadata:
      labels:
        app: swiggy
    spec:
      containers:
      - name: raham
        image: centos
        command: ["/bin/bash", "-c", "sleep 10000"]
        volumeMounts:
        - name: my-pv
          mountPath: "/tmp/persistent"
      volumes:
      - name: my-pv
        persistentVolumeClaim:
          claimName: my-pvc

kubectl exec pvdeploy-86c99cf54d-d8rj4 -it -- /bin/bash

cd /tmp/persistent/

ls

vim raham

exit

now delete the pod; a new pod will be created, and in that pod you will see the same content.

ReadWriteOnce: the volume can be mounted as read-write by a single node.

ReadOnlyMany: the volume can be mounted as read-only by many nodes.

ReadWriteMany: the volume can be mounted as read-write by many nodes.

ReadWriteOncePod: the volume can be mounted as read-write by a single Pod.


MULTI CONTAINER PODS:

SIDE CAR:

It adds a helper container next to the main container.

The main container has the application, and the helper container assists the main container.

Adapter Design Pattern:

standardizes the output of the main container.

Ambassador Design Pattern:

used to connect containers with the outside world.

Init Container:

it does the initialization work first and exits before the main container starts.
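
A minimal sketch of a pod with an init container (the busybox command is just a placeholder for real setup work):

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: setup
    image: busybox
    command: ["sh", "-c", "echo preparing... && sleep 5"]
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80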

========================================================================

ENV VARIABLES:

It is a way to pass configuration information to containers running within pods.

To set env vars, include the env or envFrom field in the container spec of the manifest.

ENV: DIRECTLY PASSING

ENVFROM: PASSING FROM FILE

ENV:

allows you to set environment variables for a container, specifying a value directly for each variable
that you name.

ENVFROM:
allows you to set environment variables for a container by referencing either a ConfigMap or a
Secret.

You can also specify a common prefix string
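
A minimal sketch showing both forms on one container (the APP_MODE variable is just an illustration; the dbvars ConfigMap is the one created later in this section):

apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: app
    image: nginx
    env:                      # env: values given directly
    - name: APP_MODE
      value: "dev"
    envFrom:                  # envFrom: all keys of a ConfigMap become env vars
    - configMapRef:
        name: dbvars
      prefix: DB_             # optional common prefix for the imported keys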

CONFIGMAPS:

It is used to store data as key-value pairs, files, or command-line arguments that can be used by
pods, containers and other resources in the cluster.

But the data should be non-confidential.

It does not provide security or encryption;

if we want encryption, we use Secrets in Kubernetes.

The limit of ConfigMap data is only 1 MB (we cannot store more than that);

if we want to store a larger amount of data, we have to mount a volume or use a
separate database or file service.

USE CASES:

Configure application setting

Configuring a pod or container

Sharing configuration data across multiple resources

We can store data like IP addresses, URLs, DNS names etc. by using ConfigMaps.

kubectl create deploy swiggydb --image=mariadb

kubectl get pods

kubectl logs swiggydb-5d49dc56-cbbqk

It crashed because we haven't specified the root password for it.

kubectl set env deploy swiggydb MYSQL_ROOT_PASSWORD=Raham123

kubectl get pods


now it will be on running state

kubectl delete deploy swiggydb

PASSING FROM VAR FILE:

kubectl create deploy swiggydb --image=mariadb

kubectl get pods

kubectl logs swiggydb-5d49dc56-cbbqk

vim vars

MYSQL_ROOT_PASSWORD=Raham123

MYSQL_USER=admin

kubectl create cm dbvars --from-env-file=vars

kubectl describe cm dbvars

kubectl get cm

kubectl set env deploy swiggydb --from=configmap/dbvars

kubectl get pods

SECRETS: used to store sensitive data like passwords, ssh-keys etc.

it uses a base64-encoded format (encoded, not encrypted).

password=raham (now we can encode and decode the value)

WHY: we use SECRETS when we don't want to expose sensitive info in plain text.

By default k8s creates some Secrets; these are used for communication

between one resource and another inside the cluster.

These are system-created secrets; we should not delete them.

TYPES:
Generic: creates secrets from files, dir, literal (direct values)

TLS: Keys and certs

Docker Registry: used to get private images by using the password

kubectl create deploy swiggydb --image=mariadb

kubectl get po

kubectl create secret generic password --from-literal=ROOT_PASSWORD=raham123 (from cli)

kubectl create secret generic my-secret --from-env-file=vars (from file)

kubectl get secrets

kubectl describe secret password

kubectl set env deploy swiggydb --from=secrets/password

kubectl get po

kubectl set env deploy newdb --from=secret/password --prefix=MYSQL_

without passing the prefix we can't get the pod to running status (the prefix turns ROOT_PASSWORD into MYSQL_ROOT_PASSWORD, which the image expects).

TO SEE SECRETS:
kubectl get secrets password -o yaml

echo -n "cmFoYW0xMjM" | base64 -d

echo -n "cmFoYW0xMjM" | base64 --decode


===============================================================================
