k8-commands

The document provides a comprehensive guide on using Kubernetes commands for managing pods, services, and deployments. It includes instructions for creating and scaling pods, setting up services with different types, and configuring resource limits. Additionally, it covers the use of YAML files for defining Kubernetes resources and highlights important commands for monitoring and managing the cluster.

=========================

Use SSH from the host. If ssh is not installed, install it with "sudo apt-get install openssh-server -y".
============================

Cluster Information :

41 kubectl cluster-info
42 kubectl get nodes
46 kubectl get nodes -w
47 kubectl get nodes -o wide
48 kubectl get pods -n kube-system
50 sudo lscpu
51 free
52 free -h
========================

### Pod Creation

Method 1: YAML

55 sudo apt install vim -y


56 vim pod.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    class: cka
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    ports:
    - containerPort: 80

----------
57 kubectl create -f pod.yml
58 kubectl get pods
59 kubectl get pods -w

⇒ get the pod IP from the wide output below and curl it
61 kubectl get pods -o wide
62 curl http://10.44.0.1
73 kubectl delete pod mypod1

=====================================
Task : try pod creation with image docker.io/httpd:latest (a sample sketch follows below)
==============================================
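A sample solution sketch for this task (the pod name here is my illustrative assumption):

---
apiVersion: v1
kind: Pod
metadata:
  name: taskpod1
  labels:
    class: cka
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80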

####### Service Creation ###########


### Labels and Selectors ############
87 kubectl delete pods --all

91 vim test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp1
  labels:
    color: red
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80

------------------------------
92 kubectl create -f test1.yaml
93 vim test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp2
  labels:
    color: red
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
-----------------------------
94 kubectl create -f test1.yaml
95 kubectl get pod
96 vim test2.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    class: cka
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    ports:
    - containerPort: 80
-------------------------------
97 kubectl create -f test2.yaml
98 vim test2.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    class: cka
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    ports:
    - containerPort: 80
-------------------------------
99 kubectl create -f test2.yaml
100 kubectl get pod
102 kubectl get pod --show-labels
103 vim myservice.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-1
spec:
  selector:
    color: red
  ports:
  - protocol: TCP
    targetPort: 80
    port: 8080
----------------------------
104 kubectl create -f myservice.yaml

⇒ note the service IP address and service port from the output below


105 kubectl get services
106 curl http://10.108.179.178:8080
107 kubectl describe service my-service-1
108 kubectl get pod --show-labels
109 kubectl get pod -o wide

=======================================
### Traffic Load balancing is feature of Service

114 kubectl get pod --show-labels


115 kubectl get services
116 kubectl get services --show-labels
118 kubectl exec -it webapp1 bash
$ echo "welcome to webapp1" > htdocs/index.html
$ exit

119 kubectl exec -it webapp2 bash


$ echo "welcome to webapp2" > htdocs/index.html
$ exit

120 curl http://10.108.179.178:8080


120 curl http://10.108.179.178:8080
=====================
If we increase the number of pods with the same label, they automatically become part of the service, because the service's selector matches the pods' labels.

121 vim test1.yaml


---
apiVersion: v1
kind: Pod
metadata:
  name: webapp3
  labels:
    color: red
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80

-----------------------
122 kubectl create -f test1.yaml
123 kubectl get pod --show-labels
124 kubectl get services
125 kubectl describe service my-service-1
126 curl http://10.108.179.178:8080

=======================================
TASK : Create a service named "test-service2" with selector class: cka (the Extras block below shows one solution, extended to two ports)
==================================================================

## Extras :
One service to two ports :

---
apiVersion: v1
kind: Service
metadata:
  name: test-service2
  labels:
    trainer: pavan
spec:
  selector:
    class: cka
  ports:
  - name: port1
    protocol: TCP
    targetPort: 80
    port: 8080
  - name: port2
    protocol: TCP
    targetPort: 443
    port: 18080
====================================
###### Types of Services :
1. ClusterIP ⇒ access within the cluster
2. NodePort ⇒ access from outside the cluster too, using NodeIP:portNo
3. LoadBalancer ⇒ gets an external IP from an external LB

===========================
Note : Default type is ClusterIP
###

## Changing Service type from ClusterIP to NodePort

146 kubectl get service


147 curl http://10.108.179.178:8080

148 kubectl edit service my-service-1
⇒ change the line "type: ClusterIP" to "type: NodePort", then save and exit

149 kubectl get service

⇒ note down any one node's IP and the port number given by the service


150 kubectl get nodes -o wide

151 curl 192.168.221.130:31255
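
⇒ Extras: the same type change can be done non-interactively (a sketch, assuming the service name from above):

kubectl patch service my-service-1 -p '{"spec":{"type":"NodePort"}}'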

###############################################################

#### Replica Set :

158 vim replicaset1.yaml


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myreplicaset
  labels:
    color: red
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      color: red
  template:
    metadata:
      labels:
        color: red
    spec:
      containers:
      - name: webserver
        image: docker.io/httpd:latest
        ports:
        - containerPort: 80
-------------------------------------------

159 kubectl delete pods --all


160 kubectl delete service --all
161 kubectl create -f replicaset1.yaml

162 kubectl get replicasets


163 kubectl get pods
164 kubectl delete pod myreplicaset-45whq

165 kubectl get pods


166 kubectl delete pods --all
167 kubectl get pods

### Changing replica count

⇒ replicas: 3 ⇒ change it to 7
176 kubectl edit replicasets myreplicaset

177 kubectl get pods

⇒ replicas: 7 ⇒ change it to 2
178 kubectl edit replicasets myreplicaset

179 kubectl get pods
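
⇒ Extras: the replica count can also be changed non-interactively (a sketch, assuming the ReplicaSet created above):

kubectl scale replicaset myreplicaset --replicas=7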

168 kubectl delete replicaset myreplicaset

169 kubectl get pods


170 kubectl get replicasets
========================================================
Date : 28th June 2023
======================
Replica Set (continued...)
193 vim rs2.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  labels:
    color: red
spec:
  # modify replicas according to your case
  replicas: 1
  selector:
    matchLabels:
      color: red
  template:
    metadata:
      labels:
        color: red
    spec:
      containers:
      - name: webserver
        image: docker.io/nginx:latest
        ports:
        - containerPort: 80
--------------------------------------
194 kubectl create -f rs2.yaml
195 kubectl get rs
196 kubectl get pods
197 kubectl delete pod myapp-gcspk
198 kubectl get pods
199 kubectl delete pod myapp-hvfw5
200 kubectl get pods
201 kubectl edit rs myapp
202 kubectl get pods
203 kubectl get pods -o wide
204 kubectl delete rs myapp
205 kubectl get pods

==============================================
Extras : Creating Multiple Resources from one yaml

208 vim rs2.yaml


---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp
  labels:
    color: red
spec:
  # modify replicas according to your case
  replicas: 1
  selector:
    matchLabels:
      color: red
  template:
    metadata:
      labels:
        color: red
    spec:
      containers:
      - name: webserver
        image: docker.io/nginx:latest
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    color: red
spec:
  selector:
    color: red
  ports:
  - name: port1
    protocol: TCP
    targetPort: 80
    port: 8080

===========================

209 kubectl get rs


210 kubectl get svc
211 kubectl create -f rs2.yaml
212 kubectl get rs
213 kubectl get svc
214 curl 10.100.233.6:8080
216 kubectl describe svc myapp
217 kubectl edit rs myapp
220 kubectl get pods -o wide
221 kubectl describe svc myapp
222 kubectl delete -f rs2.yaml

=================================
Service with type NodePort and a nodePort number of your choice

$ cat service1.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice123
  labels:
    color: blue
spec:
  type: NodePort
  selector:
    color: blue
  ports:
  - name: port1
    protocol: TCP
    targetPort: 80
    port: 8080
    nodePort: 30100

Extras :
Note : NodePort supports ports in the range 30000-32767
==================================================
232 kubectl delete service --all
233 kubectl delete rs --all
234 kubectl delete pods --all

### Deployments :
=================

235 kubectl get deployments


236 kubectl create deployment myapp --image=docker.io/httpd:latest
237 kubectl get deployments
238 kubectl get pods
239 kubectl get pods -o wide
240 curl 10.44.0.1

============
241 kubectl delete pod myapp-7b579b5d5b-lgpfz
242 kubectl get pods -o wide
243 kubectl get deployments
244 kubectl get rs
245 kubectl get pods

============== Scaling pod replicas


246 kubectl scale deployment myapp --replicas=5
247 kubectl get pods -w
248 kubectl get pods
249 kubectl scale deployment myapp --replicas=12
250 kubectl get pods -w
251 kubectl scale deployment myapp --replicas=2
252 kubectl get pods

=========== Creating Service from Deployments

254 kubectl get deployments


255 kubectl get svc
257 kubectl expose deployment myapp --port=80
258 kubectl get svc
259 curl http://10.100.40.24
### Delete deployments
260 kubectl delete deployment myapp
261 kubectl get deployments
262 kubectl get rs
263 kubectl get pods
264 kubectl delete svc myapp
Extras :
#######################
In this setup, if the CoreDNS pods are not running, try installing the network plugin:

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
#############################

### Deployment Example : MARIO

266 kubectl create deployment mario --image=docker.io/pengbai/docker-supermario --replicas=2
267 kubectl get dep
268 kubectl get deployment
269 kubectl get pods
270 kubectl get pods -w
271 kubectl get deployment
272 kubectl expose deployment mario --type NodePort --port 8080
273 kubectl get service
274 kubectl get nodes -o wide

Access it with IPofNode:<port no given by service>
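
For example (a sketch; substitute one of your node IPs and the NodePort shown by "kubectl get service"):

curl http://<NodeIP>:<NodePort>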

==============================================
297 kubectl delete svc --all
297 kubectl delete deployment --all

### Generating YAMLs for deployments and services

==============================================

287 kubectl create deployment mario --image=docker.io/pengbai/docker-supermario --dry-run=client -o yaml > mario.yaml
289 ls
290 cat mario.yaml
292 kubectl create -f mario.yaml
294 kubectl get deployment

==================
295 kubectl expose deployment mario --type NodePort --port 8080 --dry-run=client -o yaml >> sample.yaml
298 kubectl create -f sample.yaml
299 kubectl get svc

==========================================

#### Labels and Selectors :


==================
## Equality-based selectors, Set-based selectors

304 vim pod.yml


---
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
-------------------------------

305 kubectl create -f pod.yml


306 vim pod.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
-------------------------------
307 kubectl create -f pod.yml
308 vim pod.yml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod3
  labels:
    application: client
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
-------------------------------

315 kubectl create -f pod.yml


316 kubectl get pods
317 kubectl get pods --show-labels
318 kubectl get pod -l application=client
319 kubectl get pod -l application=internal
320 kubectl get pod -l color=red
321 kubectl get pod -l 'application notin (client)'
322 kubectl get pod -l 'application in (client)'
323 kubectl get pod -l 'application in (internal)'
324 kubectl get pod -l 'application notin (internal)'
325 kubectl get pod -l 'application notin (internal,client)'
326 kubectl get pod -l 'color notin (red)'
327 kubectl get pod -l 'color in (red)'

=======================================

#### Resource Limitations for pods (CPU and Memory)

333 vim limit.yaml


---
apiVersion: v1
kind: Pod
metadata:
  name: limited
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: "1"
      requests:
        cpu: "0.5"

----------------------------------
334 kubectl apply -f limit.yaml
335 kubectl get pod
336 kubectl describe pod limited
Ref:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/

https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
======================================================================

341 vim limit1.yaml


---
apiVersion: v1
kind: Pod
metadata:
  name: limited1
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
    resources:
      limits:
        cpu: "1"
        memory: "400Mi"
      requests:
        cpu: "0.5"
        memory: "100Mi"
----------------------------------------------------
342 kubectl get pods
343 kubectl create -f limit1.yaml
344 kubectl get pod
345 kubectl describe pod limited1
=================================================

##### Daemon Set ###


352 vim daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: test1
spec:
  selector:
    matchLabels:
      color: red
  template:
    metadata:
      labels:
        color: red
    spec:
      containers:
      - name: webserver
        image: docker.io/httpd:latest
        ports:
        - containerPort: 80

---------------------
356 kubectl get daemonsets
357 kubectl create -f daemonset.yaml
358 kubectl get daemonsets
360 kubectl get pods -o wide
361 kubectl get nodes

### Try to make the Master schedulable

362 kubectl edit node ubuntu-controlplane

⇒ remove the following lines from the spec section and save:

  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane

363 kubectl get pods -o wide


364 kubectl get daemonsets

⇒ Let's make the Master unschedulable again.

Add the following lines back to the spec section:

  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane

365 kubectl edit node ubuntu-controlplane


366 kubectl get pods -o wide
367 kubectl delete pod test1-vxjmn
368 kubectl delete pod test1-dnd6d
369 kubectl get pods -o wide
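
⇒ Extras: instead of kubectl edit, the taint can be removed and re-added with kubectl taint (a sketch; the trailing "-" removes the taint):

kubectl taint node ubuntu-controlplane node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint node ubuntu-controlplane node-role.kubernetes.io/control-plane:NoSchedule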

=====================================================

#### Node Labels and Node Selectors


372 kubectl get nodes
373 kubectl get nodes --show-labels
374 kubectl get nodes
375 kubectl label node ubuntu-workernode environment=production
376 kubectl label node ubuntu-controlplane environment=test
377 kubectl get nodes --show-labels

------------------

$ cat nodeSelector.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    ports:
    - containerPort: 80
    imagePullPolicy: IfNotPresent
  nodeSelector:
    environment: test

---------------------------

379 vim nodeSelector.yaml

380 kubectl delete daemonset --all
381 kubectl delete pod --all
382 kubectl create -f nodeSelector.yaml
383 vim nodeSelector.yaml
384 kubectl create -f nodeSelector.yaml
385 kubectl get pod
386 kubectl logs nginx
387 kubectl get pod -o wide
388 kubectl describe pod nginx
389 kubectl get pod -o wide
390 kubectl edit node ubuntu-controlplane
391 kubectl describe pod nginx
392 kubectl get pod -o wide
393 kubectl delete pod nginx

=============
Note : make your control plane (Master) node schedulable again
==========================================================

## Unlabel (remove/delete a label)

403 kubectl get nodes --show-labels


404 kubectl label node ubuntu-controlplane environment-
405 kubectl get nodes --show-labels
406 kubectl label node ubuntu-workernode environment-
407 kubectl get nodes --show-labels

=========================================

#### MultiTier Application :

Mysql/Mariadb + Wordpress
========================
410 kubectl create deployment database1 --image docker.io/mariadb:latest --dry-run=client -o yaml > database.yaml
411 ls
412 vim database.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: database1
  name: database1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: database1
    spec:
      containers:
      - image: docker.io/mariadb:latest
        name: mariadb
        env:
        - name: MARIADB_ROOT_PASSWORD
          value: mypass
        - name: MYSQL_DATABASE
          value: hpindia
        resources: {}
status: {}
-----------------------------
413 kubectl create -f database.yaml
414 kubectl get deployment
415 kubectl get pod
416 kubectl get pod -w
417 kubectl get pod

# ⇒ Password "mypass"
418 kubectl exec -it database1-6d47968b4f-85tsf bash
> mariadb -u root -p
>> show databases;
>> exit
> exit

419 kubectl get deployment


420 kubectl expose deployment database1 --port=3306
421 kubectl get svc

⇒ Application: WordPress

422 kubectl create deployment wordpress1 --image docker.io/wordpress:latest --dry-run=client -o yaml > wordpress.yaml
423 vim wordpress.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: wordpress1
  name: wordpress1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: wordpress1
    spec:
      containers:
      - image: docker.io/wordpress:latest
        name: wordpress
        env:
        # ⇒ value is the database Service name
        - name: WORDPRESS_DB_HOST
          value: database1
        - name: WORDPRESS_DB_USER
          value: root
        - name: WORDPRESS_DB_PASSWORD
          value: mypass
        - name: WORDPRESS_DB_NAME
          value: hpindia
        resources: {}
status: {}

----------------------------------
424 kubectl create -f wordpress.yaml
425 kubectl get deployment
426 kubectl get pod -w
427 kubectl logs wordpress1-7746bcf5f-gbgvj
428 kubectl get deployments
429 kubectl expose deployment wordpress1 --port=80 --type=NodePort
430 kubectl get svc

⇒ access with NodeIP:<service NodePort>


431 kubectl get nodes -o wide

========================================

Date : 03 July 2023


===============================

CONFIGMAPS :

==============

443 kubectl get configmaps


444 kubectl create configmap mydata --from-literal developer=pavan --from-literal client=hpe
446 kubectl get configmaps
447 kubectl create configmap yourdata --from-literal developer=deepika --from-literal client=dell
448 kubectl get configmaps
449 kubectl describe configmap mydata

--------------------

### Use of ConfigMap data as env variables

$ cat configmap.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    envFrom:
    - configMapRef:
        name: yourdata
    ports:
    - containerPort: 80

------------------------------------
453 kubectl create -f configmap.yaml
454 kubectl get pods
455 kubectl exec -it mypod2 env

==========================================

## Using selected data from Config maps

---
apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    application: internal
spec:
  containers:
  - name: container1
    ports:
    - containerPort: 80
    image: httpd:latest
    env:
    - name: pythonDeveloper
      valueFrom:
        configMapKeyRef:
          name: yourdata
          key: developer

=================================
------------------------------------
453 kubectl create -f configmap.yaml
454 kubectl get pods
455 kubectl exec -it test1 env

==========================================

## Configmap as Volume mount


=========================

$ cat configmap_vol.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test2
  labels:
    application: internal
spec:
  containers:
  - name: container1
    ports:
    - containerPort: 80
    image: httpd:latest
    volumeMounts:
    - name: volume1
      mountPath: /tmp/myenvs/
  volumes:
  - name: volume1
    configMap:
      name: mydata
  restartPolicy: Never

======================

$ kubectl exec -it test2 bash


2 ls /tmp/myenvs/ -l
3 cat /tmp/myenvs/developer
4 cat /tmp/myenvs/client
5 exit
===========================================

Task : try to create a ConfigMap using --from-file (a sample sketch follows below)


=========================================
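A sample solution sketch (the file names and values here are my illustrative assumptions):

echo -n "pavan" > developer
echo -n "hpe" > client
kubectl create configmap filedata --from-file ./developer --from-file ./client
kubectl describe configmap filedata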

#### Secrets :
=======================
504 kubectl get secrets
505 kubectl create secret generic mysecret --from-literal mypassword=centos --from-literal dbname=hp
506 kubectl get secrets
507 kubectl describe secret mysecret
508 ls
509 cp database.yaml database1.yaml
510 vim database1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: database1
  name: database1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: database1
    spec:
      containers:
      - image: docker.io/mariadb:latest
        name: mariadb
        env:
        - name: MARIADB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mypassword   ## key name from secret mysecret
              name: mysecret
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              key: dbname
              name: mysecret
        resources: {}
status: {}

--------------------
511 kubectl apply -f database1.yaml
512 kubectl get deployments
513 kubectl describe deployment database1
514 kubectl get pod
515 kubectl exec -it database1-7744b6d897-hvzdd bash ## podname
1 env
2 mariadb -u -p
3 mariadb -u root -p
> show databases;
> exit
4 exit

======================================================================
### Secrets using source file (--from-file)

518 kubectl get secrets


519 echo -n "pavan" > username
520 echo -n "p@$$worD1908" > password
521 ls
522 kubectl create secret generic db-user-pass --from-file ./username --from-file ./password
523 kubectl get secrets
525 kubectl describe secret db-user-pass

===========================================

## EXTRAS :
Decode Secret
===========

531 kubectl get secret db-user-pass -o json


532 echo 'cEA0NTE5NndvckQxOTA4' | base64 --decode
533 echo 'cGF2YW4=' | base64 --decode
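
⇒ Extras: a single key can be extracted and decoded in one line (a sketch using the secret above). Also note that in command 520 the "$$" inside double quotes was expanded by the shell to its PID, which is why the decoded password differs from what was typed:

kubectl get secret db-user-pass -o jsonpath='{.data.username}' | base64 --decode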

=======================================================

#### Namespaces :
===================
Running resources in isolation
=========================

540 kubectl get namespaces


541 kubectl get pods -n default
542 kubectl get pods -n kube-node-lease
543 kubectl get pods -n kube-public
544 kubectl get pods -n kube-system
545 kubectl create namespace hp
546 kubectl create namespace nokia
547 kubectl get namespaces
548 kubectl create deployment linuxserver --image docker.io/centos -n hp
549 kubectl create deployment linuxserver --replicas 3 --image docker.io/centos -n nokia
550 kubectl get deployments
551 kubectl get deployments -n hp
552 kubectl get deployments -n nokia
553 kubectl get pods -n hp
554 kubectl get pods -n nokia
555 kubectl delete deployment linuxserver -n nokia
556 kubectl get pods -n nokia
557 kubectl delete deployment linuxserver -n hp
558 kubectl delete namespace hp
559 kubectl delete namespace nokia
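
⇒ Extras: to avoid repeating -n on every command, the default namespace of the current context can be switched (a sketch):

kubectl config set-context --current --namespace=hp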

======================================================
Metrics Server

========
514 kubectl get pod -n kube-system
515 kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
516 kubectl get pod -n kube-system
517 wget -c https://gist.githubusercontent.com/initcron/1a2bd25353e1faa22a0ad41ad1c01b62/raw/008e23f9fbf4d7e2cf79df1dd008de2f1db62a10/k8s-metrics-server.patch.yaml

518 ls
519 kubectl patch deploy metrics-server -p "$(cat k8s-metrics-server.patch.yaml)" -n kube-system
520 kubectl get pod -n kube-system
521 kubectl top nodes
522 kubectl top pod
523 kubectl top pod -n kube-system
-----------
### HPA - Horizontal Pod Autoscaler (needs Metrics Server running)
585 vim example-hpa.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: php-apache
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: php-apache
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: php-apache
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      run: php-apache
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: php-apache
    spec:
      containers:
      - image: k8s.gcr.io/hpa-example
        name: php-apache
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m
status: {}

--------------------------------
586 kubectl create -f example-hpa.yaml
587 kubectl get deployment
588 kubectl get pods
589 kubectl get pods -w
590 kubectl get svc

591 vim hpa.yaml


apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: php-apache
spec:
  maxReplicas: 10
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  targetCPUUtilizationPercentage: 50
status:
  currentReplicas: 0
  desiredReplicas: 0
-------------------------------
592 kubectl create -f hpa.yaml
594 kubectl get hpa
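
⇒ Extras: the same HPA can also be created with a one-liner instead of the YAML above (a sketch):

kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10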

================================
## Let's generate load for the application to test HPA from another terminal on the same control plane

598 kubectl run -i --tty load-generator --image busybox /bin/sh

⇒ run the following loop from the busybox terminal

⇒ press Ctrl+C to stop the loop


$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done

=================

Observe with
kubectl get pod -w

==================================

===============================

### Rollout and Rollback


## Command is rollout
586 vi ghost.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kubernetes.io/change-cause: kubectl run mydep --image=ghost:0.9 --record=true --dry-run=true --output=yaml
  creationTimestamp: null
  labels:
    run: mydep
  name: mydep
spec:
  replicas: 1
  selector:
    matchLabels:
      run: mydep
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: mydep
    spec:
      containers:
      - image: ghost:0.9
        name: mydep
        resources: {}
status: {}

587 kubectl create -f ghost.yml


588 kubectl get deployment
589 kubectl get pods
590 kubectl describe deployment mydep
591 kubectl rollout --help
592 kubectl rollout history deployment mydep
593 kubectl set image deployment mydep mydep=ghost:0.11 --record
594 kubectl rollout history deployment mydep
595 kubectl describe deployment mydep
596 kubectl set image deployment mydep mydep=ghost:0.9 --record
597 kubectl rollout history deployment mydep
598 kubectl describe deployment mydep

--------------
603 kubectl rollout history deployment mydep

### rollout undo is rollback


605 kubectl rollout undo deployment mydep --to-revision=3
606 kubectl rollout history deployment mydep
607 kubectl rollout pause deployment mydep
608 kubectl get pod
609 kubectl set image deployment mydep mydep=ghost:5.52.1
610 kubectl get pod
611 kubectl rollout history deployment mydep
612 kubectl rollout resume deployment mydep
613 kubectl rollout history deployment mydep
614 kubectl get pod
=======================================
#### Dashboard
==================
641 kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
642 kubectl get namespaces
643 kubectl get pod -n kubernetes-dashboard
644 kubectl get svc -n kubernetes-dashboard
### Token to access dashboard
650 kubectl create token kubernetes-dashboard -n kubernetes-dashboard

## Change the service type to NodePort and access it


645 kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
646 kubectl get svc -n kubernetes-dashboard
======
####
Log in and check: nothing is shown on the dashboard, as there is no cluster role binding yet.

Let's create one with view access:

656 kubectl create clusterrolebinding mybinding1 --clusterrole=view --serviceaccount=kubernetes-dashboard:kubernetes-dashboard

==========
662 kubectl delete clusterrolebinding mybinding1
663 history
664 kubectl create clusterrolebinding mybinding1 --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard

==============
Extras :
To list the available cluster roles for clusterrolebindings, check:

666 kubectl get clusterrole

===============================================================

### RBAC :

825 kubectl create namespace hp


826 mkdir hp
827 cd hp/

### Let's create certificates for authentication, with username user1, for the hp namespace

828 sudo openssl genrsa -out user1.key 2048


829 ls
830 sudo openssl req -new -key user1.key -out user1.csr
831 ls
832 sudo openssl x509 -req -in user1.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out user1.crt -days 500

⇒ In the CSR prompts (830): Organisation Name → namespace (hp in my case), Common Name → user1

833 ls
#### For authorization, let's create a Role and a RoleBinding
834 vi role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: hp
  name: user1-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "pods", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

------------------------

835 vi rolebinding.yml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-test
  namespace: hp
subjects:
- kind: User
  name: user1
  apiGroup: ""
roleRef:
  kind: Role
  name: user1-role
  apiGroup: ""
-------------------------------------------

836 kubectl create -f role.yml


837 kubectl create -f rolebinding.yml

## Next steps
Step 1: Set credentials (a sketch is shown below)
Step 2: Set the context to the hp namespace:
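
Step 1 was not captured in the history; a minimal sketch (assuming the user1 key and certificate created above are in the current directory):

kubectl config set-credentials user1 --client-certificate=user1.crt --client-key=user1.key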

838 kubectl config set-context user1-context --cluster=kubernetes --namespace=hp --user=user1

839 kubectl config get-contexts

=================================================
### Let's set up the user account and working directory

843 sudo adduser pavan


844 sudo cp user1.* /home/pavan/
845 sudo chown pavan:pavan /home/pavan/user1.*
846 sudo mkdir /home/pavan/.kube
847 sudo chown pavan:pavan /home/pavan/.kube
848 ls ~/.kube
849 sudo cp /home/rps/.kube/config /home/pavan/.kube/
850 sudo chown pavan:pavan /home/pavan/.kube/config
============

$ su - pavan

(In the config file, you need to delete the last two client key-data lines, mention the paths of the created keys instead, and change the "current-context" line.)

1 whoami
2 pwd
3 ls -la
4 ls -la .kube/

5 vi .kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1UQXlNekUyTXpJeU0xb1hEVE14TVRBeU1URTJNekl5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDhPClBMb3dkUDFNMVlHQzNNMEdlSXczTTNuTElUa2YyOEtsc1JpdzFSUGxCWG5HK1ZqRDRHZzFCMXVxVGxpa0xtL1gKZ1NCeDRLQzMzbUM0UG1kWENsY20vS3pVbmx3V01uVE1zNll3blNsUGFhLzVKMkdvNHpMZTgvRnlocVBWb0FpSgpEUHFnNHE1RFZQbHZpY2NQb3JVNW9rYXl6b0VldVNmNUJTMEttZlJJVndvalNNMXp5b0hEczNxcWNGUVRCMjhiCjdYNm9NSWZFT2c3NmJDeFRSM3BGK3g2eCtjZjNCWWl5SXFmbzRHWUhHb29oMnFQVHpSbEV3cmgyVnoxK2VvK2UKTXo1akpBaG11aWhhdjY5WkpQOXBaZFlEY3EzdWdpNUliZDR4QUt5eXhaR05oWFdRWFBSS2FWa2lNMjZaNDhRcQpZT1hMTlBkYlEwdGNQdVZhdW1FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZOaDJMSE53UC9YNDJ3RVFhRFBLVkp0cTJVd2xNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDcHIzRnlxU0QyNG1XTUNya2tPQzA0N0YxTmR4YnlTOVRzaGg2Q2swS2pMbWF2OE9MMwppSE56eWdYQVJyRUxkNTNEVnBkamFMc2dmZGkyWUQvcEdWeXZHK2xwK1RNTFVxMGdNNkdjcXRHOFVHN2prNVJoCkZxSVFiOUFmY0tMN1B4T1NacWlvbzFJeTFDWXVnNVBnMlBUc01DV00rV01zTkl2UzlGbmNhOGpRTTBzeWg0d1EKWWF3Y1BqdjNoaVVsNnBEc1puUDE4M0dXVnpKazZrUkxYVXc5cm82aEJCOVJLdGNhcmZLZ3RyOE9DeHU1WGRaRgowZTlZRFVvcFJ4ZDlkMEhkR016bVl6UjZsWTc3dHFKeXVzV2ExUEorVnJhRFR6Wk81SURGdlNKNGkyRWlCWFM2CnpOUkNyUDJ6UWdJU0ZNQ0Z0QVRpM3dkTzZxa1A2dWNBOEtKNAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.221.130:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    namespace: hp
    user: user1
  name: user1-context
current-context: user1-context
kind: Config
preferences: {}
users:
- name: user1
  user:
    client-certificate: /home/pavan/user1.crt
    client-key: /home/pavan/user1.key

===========================
6 kubectl get pods ### should give no error; "No resources found" may be the output
7 kubectl get nodes ### should give an error, as user1 has no access to cluster-level resources

========================================================

#### Ingress :


Let's deploy the ingress-nginx controller and make its service type NodePort

841 kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
842 kubectl get svc -n ingress-nginx
843 kubectl edit svc ingress-nginx-controller -n ingress-nginx ### change type to NodePort
844 kubectl get svc -n ingress-nginx
=====================================

### Deploy 2 apps and expose them to get services

845 kubectl create deployment myapp1 --image docker.io/httpd


847 kubectl create deployment myapp2 --image docker.io/openshift/hello-openshift
851 kubectl expose deployment myapp1 --port=80
852 kubectl expose deployment myapp2 --port=8080
853 kubectl get svc
=======================

### Let's add rules to the Ingress


$ cat ingress1.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /myapp1
        pathType: Prefix
        backend:
          service:
            name: myapp1
            port:
              number: 80
      - path: /myapp2
        pathType: Prefix
        backend:
          service:
            name: myapp2
            port:
              number: 8080
-------------------------------------

931 kubectl create -f ingress1.yaml

⇒ Note the NodePort of the ingress service together with a node IP (in my case the port is 30742)
932 curl localhost:30742/myapp1
933 curl localhost:30742/myapp2
921 kubectl get nodes -o wide

919 curl 192.168.221.131:30742/myapp1


919 curl 192.168.221.131:30742/myapp2
=========================================================

######

Adapter pattern: (demo)


947 vim adapter.yaml
apiVersion: v1
kind: Pod
metadata:
  name: adapter-container-example
spec:
  # Create a volume called 'shared-logs' that the
  # app and adapter share.
  volumes:
  - name: shared-logs
    emptyDir: {}

  containers:

  # Main application container
  - name: main-container
    # Simple application: busybox writes system usage information (`top`)
    # to a top.txt status file every five seconds,
    # at the volume mount location.
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do date > /var/log/top.txt && top -n 1 -b >> /var/log/top.txt; sleep 5; done"]
    resources: {}
    # Mount the pod's shared log file into the app
    # container. The app writes logs here.
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log

  # Adapter container
  - name: adapter-container
    # Simple adapter: alpine creates logs in the same
    # location as the main application, with a timestamp.
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do (cat /var/log/top.txt | head -1 > /var/log/status.txt) && (cat /var/log/top.txt | head -2 | tail -1 | grep -o -E '\\d+\\w' | head -1 >> /var/log/status.txt) && (cat /var/log/top.txt | head -3 | tail -1 | grep -o -E '\\d+%' | head -1 >> /var/log/status.txt); sleep 5; done"]
    resources: {}
    # Mount the pod's shared log file into the adapter
    # container.
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log

----------------------
948 kubectl create -f adapter.yaml
949 kubectl get pods
950 kubectl get pods -w
951 kubectl describe pods adapter-container-example
952 kubectl get pods
954 kubectl exec -it adapter-container-example -c main-container sh
cat /var/log/top.txt
ls
exit
955 kubectl exec -it adapter-container-example -c adapter-container sh
cat /var/log/top.txt
cat /var/log/status.txt
ls
exit
=====================================================
### Init Container Pods

### Download a DB dump first, then start the mysql container

$ cat mydb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mydb
  labels:
    app: db
spec:
  initContainers:
  - name: fetch
    image: mwendler/wget
    command: ["wget", "--no-check-certificate", "https://sample-videos.com/sql/Sample-SQL-File-1000rows.sql", "-O", "/docker-entrypoint-initdb.d/dump.sql"]
    volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: dump
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "example"
    - name: MYSQL_DATABASE
      value: "hp"
    volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: dump
  volumes:
  - emptyDir: {}
    name: dump

==============================
994 kubectl create -f mydb.yaml
995 kubectl get pod
996 kubectl logs -c fetch mydb -f
997 kubectl get pod
998 kubectl logs -c fetch mydb -f
=============================================
---
Reference : https://kubernetes.io/docs/concepts/services-networking/network-policies/
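
A minimal sketch in the spirit of that page (a default deny-all-ingress policy for the current namespace; the name is illustrative):

---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress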

---
Helm book → https://drive.google.com/file/d/1bxJBi9GzHk2j2UslybDreVlfOSInsEwv/
view?usp=sharing
---

HELM

----

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
427 chmod 777 get_helm.sh
428 ./get_helm.sh
429 helm version
430 helm repo
431 helm repo list
432 helm repo add bitnami https://charts.bitnami.com/bitnami
433 helm repo list
434 helm install my-release bitnami/apache
435 kubectl get pods
436 kubectl get svc
438 curl 10.97.58.16:80
439 kubectl get pods -o wide
442 curl 10.32.0.16:8080

---

--
helm create inteltest
11 cd inteltest/
12 yum install tree -y
13 clear
14 tree
15 vi values.yaml
# Default values for inteltest.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

image:
  repository: openshift/hello-openshift
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "latest"

imagePullSecrets: []
nameOverride: "myintelapp"
fullnameOverride: "myintelchart"

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: NodePort
  port: 80

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}

16 cd
17 helm install myfirstapp inteltest/ --values inteltest/values.yaml
18 helm status myfirstapp
19 kubectl get pods

---

vi values.yaml
Change something, like the nodePort, to observe an upgrade

27 history
28 #helm upgrade myfirstapp
29 cd
30 helm upgrade myfirstapp inteltest -f inteltest/values.yaml
31 helm status
32 helm status myfirstapp
33 kubectl get pods
34 kubectl get svc
35 vi inteltest/values.yaml
36 helm upgrade myfirstapp inteltest -f inteltest/values.yaml
37 helm list
38 kubectl get svc
39 helm rollback myfirstapp 2
40 kubectl get svc
41 helm rollback myfirstapp 1

---

Conditionals (example: the ingress template from the grafana chart)
---
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "grafana.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $extraPaths := .Values.ingress.extraPaths -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
apiVersion: networking.k8s.io/v1beta1
{{ else }}
apiVersion: extensions/v1beta1
{{ end -}}
kind: Ingress
metadata:
  name: {{ $fullName }}
  namespace: {{ template "grafana.namespace" . }}
  labels:
    {{- include "grafana.labels" . | nindent 4 }}
    {{- if .Values.ingress.labels }}
{{ toYaml .Values.ingress.labels | indent 4 }}
    {{- end }}
  {{- if .Values.ingress.annotations }}
  annotations:
    {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ tpl $value $ | quote }}
    {{- end }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
  {{- end }}
  rules:
  {{- if .Values.ingress.hosts }}
  {{- range .Values.ingress.hosts }}
  - host: {{ . }}
    http:
      paths:
{{ if $extraPaths }}
{{ toYaml $extraPaths | indent 10 }}
{{- end }}
      - path: {{ $ingressPath }}
        backend:
          serviceName: {{ $fullName }}
          servicePort: {{ $servicePort }}
  {{- end }}
  {{- else }}
  - http:
      paths:
      - backend:
          serviceName: {{ $fullName }}
          servicePort: {{ $servicePort }}
        {{- if $ingressPath }}
        path: {{ $ingressPath }}
        {{- end }}
  {{- end -}}
{{- end }}
---

Upgrade Kubernetes ---


https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

---

Statefulsets

https://docs.google.com/document/d/1q751Qj-M6kfcg48q5pm-qtw2Pvu_F-SXmFy_tT_xk0w/edit

================================
Istio service mesh components - Google Docs