k8-commands
Use SSH from the host; if ssh is not installed, install it with "sudo apt-get install openssh-server -y"
============================
Cluster Information :
41 kubectl cluster-info
42 kubectl get nodes
46 kubectl get nodes -w
47 kubectl get nodes -o wide
48 kubectl get pods -n kube-system
50 sudo lscpu
51 free
52 free -h
========================
Method 1 : YAML
----------
57 kubectl create -f pod.yml
58 kubectl get pods
59 kubectl get pods -w
61 kubectl get pods -o wide ⇒ this shows the pod IP
62 curl http://10.44.0.1
73 kubectl delete pod mypod
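The pod.yml used above isn't reproduced in these notes; a minimal sketch consistent with the pod name mypod (the image is an assumption, any web server works for the curl test):
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest   # assumed image
    ports:
    - containerPort: 80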
=====================================
Task : try pod creation with image : docker.io/httpd:latest
==============================================
91 vim test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp1
  labels:
    color: red
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
------------------------------
92 kubectl create -f test1.yaml
93 vim test1.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp2
  labels:
    color: red
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80
-----------------------------
94 kubectl create -f test1.yaml
95 kubectl get pod
96 vim test2.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
  labels:
    class: cka
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    ports:
    - containerPort: 80
-------------------------------
97 kubectl create -f test2.yaml
98 vim test2.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    class: cka
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    ports:
    - containerPort: 80
-------------------------------
99 kubectl create -f test2.yaml
100 kubectl get pod
102 kubectl get pod --show-labels
103 vim myservice.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-service-1
spec:
  selector:
    color: red
  ports:
  - protocol: TCP
    targetPort: 80
    port: 8080
----------------------------
104 kubectl create -f myservice.yaml
=======================================
### Traffic load balancing is a feature of a Service
-----------------------
122 kubectl create -f test1.yaml
123 kubectl get pod --show-labels
124 kubectl get services
125 kubectl describe service my-service-1
126 curl http://10.108.179.178:8080
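With both red-labelled pods behind my-service-1, repeated requests are spread across them; a quick way to observe the load balancing (the ClusterIP comes from the describe output above):
for i in 1 2 3 4 5; do curl -s http://10.108.179.178:8080; done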
=======================================
TASK : Create a service by name “test-service2” with selector : class: cka
==================================================================
## Extras :
One service to two ports :
---
apiVersion: v1
kind: Service
metadata:
  name: test-service2
  labels:
    trainer: pavan
spec:
  selector:
    class: cka
  ports:
  - name: port1
    protocol: TCP
    targetPort: 80
    port: 8080
  - name: port2
    protocol: TCP
    targetPort: 443
    port: 18080
====================================
###### Types of Services :
1. ClusterIP ⇒ access within the cluster
2. NodePort ⇒ access from outside the cluster as well, using NodeIP:portNo
===========================
Note : Default type is ClusterIP
###############################################################
⇒ replicas: 3 ⇒ change it to 7
176 kubectl edit replicasets myreplicaset
⇒ replicas: 7 ⇒ change it to 2
178 kubectl edit replicasets myreplicaset
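The ReplicaSet being edited was created earlier; a minimal sketch of what myreplicaset might look like (labels and image are assumptions), plus the scale command as an editor-free alternative:
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myreplicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: mycontainer1
        image: docker.io/nginx:latest
---
# equivalent without opening an editor:
kubectl scale replicaset myreplicaset --replicas=7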
==============================================
Extras : Creating multiple resources from one YAML (separate the documents with ---; a second resource is sketched below)
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    color: red
spec:
  selector:
    color: red
  ports:
  - name: port1
    protocol: TCP
    targetPort: 80
    port: 8080
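Only the first document survives above; a sketch of a second resource that could follow in the same file (pod name and image are assumptions; its label matches the Service selector):
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    color: red
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    ports:
    - containerPort: 80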
===========================
=================================
Service with type NodePort and a nodePort of your choice
$ cat service1.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myservice123
  labels:
    color: blue
spec:
  type: NodePort
  selector:
    color: blue
  ports:
  - name: port1
    protocol: TCP
    targetPort: 80
    port: 8080
    nodePort: 30100
Extras :
Note : NodePort supports ports in the range 30000-32767
==================================================
232 kubectl delete service --all
233 kubectl delete rs --all
234 kubectl delete pods --all
### Deployments :
=================
============
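(The myapp deployment that owns the pod deleted below was created before this point; a likely creation command, image assumed:)
kubectl create deployment myapp --image=docker.io/nginx:latest --replicas=3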
241 kubectl delete pod myapp-7b579b5d5b-lgpfz
242 kubectl get pods -o wide
243 kubectl get deployments
244 kubectl get rs
245 kubectl get pods
==============================================
297 kubectl delete svc --all
297 kubectl delete deployment --all
==============================================
==================
295 kubectl expose deployment mario --type NodePort --port 8080 --dry-run=client -o yaml >> sample.yaml
298 kubectl create -f sample.yaml
299 kubectl get svc
==========================================
=======================================
----------------------------------
334 kubectl apply -f limit.yaml
335 kubectl get pod
336 kubectl describe pod limited
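limit.yaml isn't reproduced here; a sketch consistent with the pod name limited and the linked docs (image and values are assumptions):
---
apiVersion: v1
kind: Pod
metadata:
  name: limited
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi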
Ref:
https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
======================================================================
========
---------------------
356 kubectl get daemonsets
357 kubectl create -f daemonset.yaml
358 kubectl get daemonsets
360 kubectl get pods -o wide
361 kubectl get nodes
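daemonset.yaml isn't shown in these notes; a minimal sketch (name, labels, and image assumed). A DaemonSet has no replicas field - it runs one pod per schedulable node:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: mydaemonset
spec:
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      containers:
      - name: mycontainer1
        image: docker.io/nginx:latest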
“
taints:
- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
”
=====================================================
------------------
$ cat nodeSelector.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/nginx:latest
    ports:
    - containerPort: 80
    imagePullPolicy: IfNotPresent
  nodeSelector:
    environment: test
---------------------------
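For the nodeSelector to match, some node must carry that label first; e.g. (node name assumed):
kubectl label node worker1 environment=test
kubectl get nodes --show-labels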
=============
Note : make your control-plane (master) node schedulable again
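Removing the NoSchedule taint quoted above does this (node name assumed; the trailing "-" removes the taint):
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-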
==========================================================
=========================================
MySQL/MariaDB + WordPress
========================
410 kubectl create deployment database1 --image docker.io/mariadb:latest --dry-run=client -o yaml > database.yaml
411 ls
412 vim database.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: database1
  name: database1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: database1
    spec:
      containers:
      - image: docker.io/mariadb:latest
        name: mariadb
        env:
        - name: MARIADB_ROOT_PASSWORD
          value: mypass
        - name: MYSQL_DATABASE
          value: hpindia
        resources: {}
status: {}
-----------------------------
413 kubectl create -f database.yaml
414 kubectl get deployment
415 kubectl get pod
416 kubectl get pod -w
417 kubectl get pod
# ⇒ Password “mypass”
418 kubectl exec -it database1-6d47968b4f-85tsf bash
> mariadb -u root -p
>> show databases;
>> exit
> exit
⇒ Application : WordPress
----------------------------------
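wordpress.yaml isn't reproduced here; a sketch of a likely deployment pointing WordPress at the database (env values follow database.yaml above; the DB host assumes the database1 deployment is exposed as a Service):
# assumed prerequisite:
# kubectl expose deployment database1 --port=3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wordpress1
  name: wordpress1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress1
  template:
    metadata:
      labels:
        app: wordpress1
    spec:
      containers:
      - image: docker.io/wordpress:latest
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: database1
        - name: WORDPRESS_DB_USER
          value: root
        - name: WORDPRESS_DB_PASSWORD
          value: mypass
        - name: WORDPRESS_DB_NAME
          value: hpindia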
424 kubectl create -f wordpress.yaml
425 kubectl get deployment
426 kubectl get pod -w
427 kubectl logs wordpress1-7746bcf5f-gbgvj
428 kubectl get deployments
429 kubectl expose deployment wordpress1 --port=80 --type=NodePort
430 kubectl get svc
========================================
CONFIGMAPS :
==============
--------------------
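The pod below injects every key of a ConfigMap named yourdata as environment variables; a likely creation command (the keys/values are assumptions consistent with the configMapKeyRef example further down):
kubectl create configmap yourdata --from-literal=developer=python --from-literal=trainer=pavan
kubectl describe configmap yourdata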
$ cat configmap.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: mypod2
  labels:
    application: internal
spec:
  containers:
  - name: mycontainer1
    image: docker.io/httpd:latest
    envFrom:
    - configMapRef:
        name: yourdata
    ports:
    - containerPort: 80
------------------------------------
453 kubectl create -f configmap.yaml
454 kubectl get pods
455 kubectl exec -it mypod2 env
==========================================
---
apiVersion: v1
kind: Pod
metadata:
  name: test1
  labels:
    application: internal
spec:
  containers:
  - name: container1
    ports:
    - containerPort: 80
    image: httpd:latest
    env:
    - name: pythonDeveloper
      valueFrom:
        configMapKeyRef:
          name: yourdata
          key: developer
=================================
------------------------------------
453 kubectl create -f configmap.yaml
454 kubectl get pods
455 kubectl exec -it test1 env
==========================================
$ cat configmap_vol.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: test2
  labels:
    application: internal
spec:
  containers:
  - name: container1
    ports:
    - containerPort: 80
    image: httpd:latest
    volumeMounts:
    - name: volume1
      mountPath: /tmp/myenvs/
  volumes:
  - name: volume1
    configMap:
      name: mydata
  restartPolicy: Never
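Each key of the ConfigMap appears as a file under the mount path; assuming a mydata ConfigMap exists (created the same way as yourdata above), verify with:
kubectl create -f configmap_vol.yaml
kubectl exec -it test2 -- ls /tmp/myenvs/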
======================
#### Secrets :
=======================
504 kubectl get secrets
505 kubectl create secret generic mysecret --from-literal mypassword=centos --from-literal dbname=hp
506 kubectl get secrets
507 kubectl describe secret mysecret
508 ls
509 cp database.yaml database1.yaml
510 vim database1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: database1
  name: database1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: database1
    spec:
      containers:
      - image: docker.io/mariadb:latest
        name: mariadb
        env:
        - name: MARIADB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mypassword   ## key name from secret mysecret
              name: mysecret
        - name: MYSQL_DATABASE
          valueFrom:
            secretKeyRef:
              key: dbname
              name: mysecret
        resources: {}
status: {}
--------------------
511 kubectl apply -f database1.yaml
512 kubectl get deployments
513 kubectl describe deployment database1
514 kubectl get pod
515 kubectl exec -it database1-7744b6d897-hvzdd bash ## podname
1 env
2 mariadb -u -p
3 mariadb -u root -p
> show databases;
> exit
4 exit
======================================================
=============
### Secrets using source file (--from-file)
===========================================
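A sketch of the --from-file flow (file name and content are assumptions; the key name defaults to the file name):
echo -n "centos" > ./password.txt
kubectl create secret generic filesecret --from-file=./password.txt
kubectl describe secret filesecret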
## EXTRAS :
Decode Secret
===========
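Secret values are only base64-encoded, not encrypted; decoding the mysecret created above:
kubectl get secret mysecret -o jsonpath='{.data.mypassword}' | base64 -d
kubectl get secret mysecret -o jsonpath='{.data.dbname}' | base64 -d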
=======================================================
#### Namespaces :
===================
Running resources in isolation
=========================
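Typical commands (the namespace name "demo" here is arbitrary):
kubectl get namespaces
kubectl create namespace demo
kubectl run testpod --image=docker.io/nginx:latest -n demo
kubectl get pods -n demo
kubectl delete namespace demo   # deleting a namespace removes everything in it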
======================================================
Metrics Server
========
514 kubectl get pod -n kube-system
515 kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
516 kubectl get pod -n kube-system
517 wget -c https://gist.githubusercontent.com/initcron/1a2bd25353e1faa22a0ad41ad1c01b62/raw/008e23f9fbf4d7e2cf79df1dd008de2f1db62a10/k8s-metrics-server.patch.yaml
518 ls
519 kubectl patch deploy metrics-server -p "$(cat k8s-metrics-server.patch.yaml)" -n kube-system
520 kubectl get pod -n kube-system
521 kubectl top nodes
522 kubectl top pod
523 kubectl top pod -n kube-system
-----------
### HPA - Horizontal Pod Autoscaler (needs the Metrics Server running)
585 vim example-hpa.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: php-apache
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: php-apache
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: php-apache
  name: php-apache
spec:
  replicas: 1
  selector:
    matchLabels:
      run: php-apache
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: php-apache
    spec:
      containers:
      - image: k8s.gcr.io/hpa-example
        name: php-apache
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 200m
status: {}
--------------------------------
586 kubectl create -f example-hpa.yaml
587 kubectl get deployment
588 kubectl get pods
589 kubectl get pods -w
590 kubectl get svc
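The commands above deploy the app and Service but not the autoscaler itself; the standard way to add it (thresholds follow the upstream HPA walkthrough):
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
kubectl get hpa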
================================
## Let's generate load on the application from another terminal on the same control plane, to test the HPA
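A suitable load generator (from the upstream HPA walkthrough; the service name php-apache matches the Service above):
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"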
=================
Observe with
kubectl get pod -w
==================================
===============================
--------------
603 kubectl rollout history deployment mydep
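Related rollout commands worth keeping nearby (mydep is the deployment name from the history line above):
kubectl rollout status deployment mydep
kubectl rollout undo deployment mydep                  # back to the previous revision
kubectl rollout undo deployment mydep --to-revision=1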
==========
662 kubectl delete clusterrolebinding mybinding1
663 history
664 kubectl create clusterrolebinding mybinding1 --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
==============
Extras :
To list the cluster roles available for clusterrolebindings : kubectl get clusterroles
===============================================================
==
### RBAC :
### Let's create certificates for authentication and the user name user1 for the hp namespace
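The certificate commands themselves aren't in the history; a typical flow, assuming the kubeadm CA paths under /etc/kubernetes/pki and CN=user1 as the user name:
openssl genrsa -out user1.key 2048
openssl req -new -key user1.key -out user1.csr -subj "/CN=user1"
sudo openssl x509 -req -in user1.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out user1.crt -days 365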
833 ls
#### For authorization, let's create a Role and a RoleBinding
834 vi role.yml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: hp
  name: user1-role
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "pods", "services"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
------------------------
835 vi rolebinding.yml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: role-test
  namespace: hp
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: user1-role
  apiGroup: rbac.authorization.k8s.io
-------------------------------------------
## Next steps
Step 1: Set credentials :
Step 2: Set context to hp Namespace:
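A likely pair of commands for the two steps (the paths match the kubeconfig shown below):
kubectl config set-credentials user1 --client-certificate=/home/pavan/user1.crt --client-key=/home/pavan/user1.key
kubectl config set-context user1-context --cluster=kubernetes --namespace=hp --user=user1
kubectl config use-context user1-context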
=================================================
### Let's set up the user account and working directory
$ su - pavan
(In the config file, delete the last two key-data lines, give the paths of the created key files instead, and change the "current-context" line.)
1 whoami
2 pwd
3 ls -la
4 ls -la .kube/
5 vi .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1UQXlNekUyTXpJeU0xb1hEVE14TVRBeU1URTJNekl5TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDhPClBMb3dkUDFNMVlHQzNNMEdlSXczTTNuTElUa2YyOEtsc1JpdzFSUGxCWG5HK1ZqRDRHZzFCMXVxVGxpa0xtL1gKZ1NCeDRLQzMzbUM0UG1kWENsY20vS3pVbmx3V01uVE1zNll3blNsUGFhLzVKMkdvNHpMZTgvRnlocVBWb0FpSgpEUHFnNHE1RFZQbHZpY2NQb3JVNW9rYXl6b0VldVNmNUJTMEttZlJJVndvalNNMXp5b0hEczNxcWNGUVRCMjhiCjdYNm9NSWZFT2c3NmJDeFRSM3BGK3g2eCtjZjNCWWl5SXFmbzRHWUhHb29oMnFQVHpSbEV3cmgyVnoxK2VvK2UKTXo1akpBaG11aWhhdjY5WkpQOXBaZFlEY3EzdWdpNUliZDR4QUt5eXhaR05oWFdRWFBSS2FWa2lNMjZaNDhRcQpZT1hMTlBkYlEwdGNQdVZhdW1FQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZOaDJMSE53UC9YNDJ3RVFhRFBLVkp0cTJVd2xNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDcHIzRnlxU0QyNG1XTUNya2tPQzA0N0YxTmR4YnlTOVRzaGg2Q2swS2pMbWF2OE9MMwppSE56eWdYQVJyRUxkNTNEVnBkamFMc2dmZGkyWUQvcEdWeXZHK2xwK1RNTFVxMGdNNkdjc1RHOFVHN2prNVJoCkZxSVFiOUFmY0tMN1B4T1NacWlvbzFJeTFDWXVnNVBnMlBUc01DV00rV01zTkl2UzlGbmNhOGpRTTBzeWg0d1EKWWF3Y1BqdjNoaVVsNnBEc1puUDE4M0dXVnpKazZrUkxYVXc5cm82aEJCOVJLdGNhcmZLZ3RyOE9DeHU1WGRaRgowZTlZRFVvcFJ4ZDlkMEhkR016bVl6UjZsWTc3dHFKeXVzV2ExUEorVnJhRFR6Wk81SURGdlNKNGkyRWlCWFM2CnpOUkNyUDJ6UWdJU0ZNQ0Z0QVRpM3dkTzZxa1A2dWNBOEtKNAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.221.130:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
- context:
    cluster: kubernetes
    namespace: hp
    user: user1
  name: user1-context
current-context: user1-context
kind: Config
preferences: {}
users:
- name: user1
  user:
    client-certificate: /home/pavan/user1.crt
    client-key: /home/pavan/user1.key
===========================
6 kubectl get pods ### should give no error ## No pods may be the output
7 kubectl get nodes ### should give error as no access to user1
========================================================
#### Ingress :
Let's deploy the Ingress controller and make its Service type NodePort.
⇒ Note the NodePort of the ingress service along with the NodeIP (in my case it is 30742)
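The Ingress resource behind these paths isn't reproduced in the notes; a sketch that would route the two paths (service names, ports, and the nginx ingressClassName are assumptions):
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /myapp1
        pathType: Prefix
        backend:
          service:
            name: myapp1-service
            port:
              number: 80
      - path: /myapp2
        pathType: Prefix
        backend:
          service:
            name: myapp2-service
            port:
              number: 80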
932 curl localhost:30742/myapp1
933 curl localhost:30742/myapp2
921 kubectl get nodes -o wide
######
containers:
# Adapter container
- name: adapter-container
  # simple adapter: alpine creates logs in the same
  # location as main application with a timestamp
  image: alpine
  command: ["/bin/sh"]
  args: ["-c", "while true; do (cat /var/log/top.txt | head -1 > /var/log/status.txt) && (cat /var/log/top.txt | head -2 | tail -1 | grep -o -E '\\d+\\w' | head -1 >> /var/log/status.txt) && (cat /var/log/top.txt | head -3 | tail -1 | grep -o -E '\\d+%' | head -1 >> /var/log/status.txt); sleep 5; done"]
  resources: {}
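Only the adapter fragment survives above; the full pod presumably also has a main container writing top output into a shared volume. A sketch of the complete pod consistent with the exec commands below (main-container behavior and the emptyDir are assumptions based on the usual adapter pattern):
---
apiVersion: v1
kind: Pod
metadata:
  name: adapter-container-example
spec:
  volumes:
  # shared between both containers
  - name: var-logs
    emptyDir: {}
  containers:
  # main application: appends raw top output to /var/log/top.txt
  - name: main-container
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do date >> /var/log/top.txt && top -n 1 -b >> /var/log/top.txt; sleep 5; done"]
    volumeMounts:
    - name: var-logs
      mountPath: /var/log
  # adapter from the fragment above (use its full args), mounting the same volume
  - name: adapter-container
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do (cat /var/log/top.txt | head -1 > /var/log/status.txt); sleep 5; done"]
    volumeMounts:
    - name: var-logs
      mountPath: /var/log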
----------------------
948 kubectl create -f adapter.yaml
949 kubectl get pods
950 kubectl get pods -w
951 kubectl describe pods adapter-container-example
952 kubectl get pods
954 kubectl exec -it adapter-container-example -c main-container sh
cat /var/log/top.txt
ls
exit
955 kubectl exec -it adapter-container-example -c adapter-container sh
cat /var/log/top.txt
cat /var/log/status.txt
ls
exit
=====================================================
### INIT Container Pods
$ cat mydb.yaml
apiVersion: v1
kind: Pod
metadata:
  name: mydb
  labels:
    app: db
spec:
  initContainers:
  - name: fetch
    image: mwendler/wget
    command: ["wget","--no-check-certificate","https://sample-videos.com/sql/Sample-SQL-File-1000rows.sql","-O","/docker-entrypoint-initdb.d/dump.sql"]
    volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: dump
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "example"
    - name: MYSQL_DATABASE
      value: "hp"
    volumeMounts:
    - mountPath: /docker-entrypoint-initdb.d
      name: dump
  volumes:
  - emptyDir: {}
    name: dump
==============================
994 kubectl create -f mydb.yaml
995 kubectl get pod
996 kubectl logs -c fetch mydb -f
997 kubectl get pod
998 kubectl logs -c fetch mydb -f
=============================================
---
Reference : https://kubernetes.io/docs/concepts/services-networking/network-policies/
---
Helm book → https://drive.google.com/file/d/1bxJBi9GzHk2j2UslybDreVlfOSInsEwv/view?usp=sharing
---
HELM
----
---
helm create inteltest
11 cd inteltest/
12 yum install tree -y
13 clear
14 tree
15 vi values.yaml
# Default values for inteltest.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

image:
  repository: openshift/hello-openshift
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "latest"

imagePullSecrets: []
nameOverride: "myintelapp"
fullnameOverride: "myintelchart"

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: NodePort
  port: 80

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
16 cd
17 helm install myfirstapp inteltest/ --values inteltest/values.yaml
18 helm status myfirstapp
19 kubectl get pods
---
vi values.yaml
Change something, e.g. the nodePort, to observe the upgrade
27 history
28 #helm upgrade myfirstapp
29 cd
30 helm upgrade myfirstapp inteltest -f inteltest/values.yaml
31 helm status
32 helm status myfirstapp
33 kubectl get pods
34 kubectl get svc
35 vi inteltest/values.yaml
36 helm upgrade myfirstapp inteltest -f inteltest/values.yaml
37 helm list
38 kubectl get svc
39 helm rollback myfirstapp 2
40 kubectl get svc
41 helm rollback myfirstapp 1
---
Conditional
---
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "grafana.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
{{- $extraPaths := .Values.ingress.extraPaths -}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }}
apiVersion: networking.k8s.io/v1beta1
{{ else }}
apiVersion: extensions/v1beta1
{{ end -}}
kind: Ingress
metadata:
  name: {{ $fullName }}
  namespace: {{ template "grafana.namespace" . }}
  labels:
    {{- include "grafana.labels" . | nindent 4 }}
    {{- if .Values.ingress.labels }}
{{ toYaml .Values.ingress.labels | indent 4 }}
    {{- end }}
  {{- if .Values.ingress.annotations }}
  annotations:
    {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ tpl $value $ | quote }}
    {{- end }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
{{ toYaml .Values.ingress.tls | indent 4 }}
  {{- end }}
  rules:
  {{- if .Values.ingress.hosts }}
  {{- range .Values.ingress.hosts }}
    - host: {{ . }}
      http:
        paths:
{{ if $extraPaths }}
{{ toYaml $extraPaths | indent 10 }}
{{- end }}
          - path: {{ $ingressPath }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $servicePort }}
  {{- end }}
  {{- else }}
    - http:
        paths:
          - backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $servicePort }}
            {{- if $ingressPath }}
            path: {{ $ingressPath }}
            {{- end }}
  {{- end -}}
{{- end }}
---
---
Statefulsets
https://docs.google.com/document/d/1q751Qj-M6kfcg48q5pm-qtw2Pvu_F-SXmFy_tT_xk0w/edit
================================
Istio service mesh components - Google Docs