CKA Exam


alias k=kubectl # will already be pre-configured

export do="--dry-run=client -o yaml" # k create deploy nginx --image=nginx $do


export now="--force --grace-period 0" # k delete pod x $now

https://github.com/bbachi/CKAD-Practice-Questions/blob/master/core-concepts.md

https://medium.com/bb-tutorials-and-thoughts/practice-enough-with-these-questions-for-the-ckad-exam-2f42d1228552

Create a pod that echoes "hello world" and then exits. Have the pod deleted automatically when
it's completed.
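A minimal sketch (the pod name hello is an assumption; --rm removes the pod once it completes):
k run hello --image=busybox --rm -it --restart=Never -- echo "hello world"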

List Pods with name and status
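One way to list only the pod name and status (a sketch using custom columns):
k get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase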

Create NS and with pod into namespace:

Get pod without describe command.

Hungry-bear pod question:


Create and configure the service front-end-service so it's accessible through NodePort and routes to
the existing pod named front-end.
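A hedged one-liner for this (assuming the front-end pod listens on port 80):
k expose pod front-end --name=front-end-service --port=80 --type=NodePort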

Create an nginx pod and show different verbosity levels:
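For example (-v sets kubectl's log verbosity; higher values show the underlying API requests):
k run nginx --image=nginx -v=7
k get pod nginx -v=9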

Get Pod IP address


kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
Busybox image sleep
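A sketch (the 1-hour duration is an assumption):
k run busybox --image=busybox -- sleep 3600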

Create deployment name : nginx-random


Expose deployment with name nginx-random
The container(s) within any pod(s) running as a part of this deployment should use the nginx
Image

Next, use the utility nslookup to look up the DNS records of the service & pod and write the
output to /opt/KUNW00601/service.dns and /opt/KUNW00601/pod.dns respectively.
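A possible command sequence for the nginx-random task above (a sketch; the test pod name and busybox:1.28 image are assumptions, and <pod-ip-with-dashes> is a placeholder):
k create deployment nginx-random --image=nginx
k expose deployment nginx-random --port=80
k run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-random > /opt/KUNW00601/service.dns
k get pods -o wide # note the nginx-random pod IP
k run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup <pod-ip-with-dashes>.default.pod > /opt/KUNW00601/pod.dns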

Q1 Completed : sa, clusterrole, clusterrolebinding

Context -
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a
specific namespace.
Task -
Create a new ClusterRole named deployment-clusterrole, which only allows to create the following resource types:
✑ Deployment
✑ Stateful Set
✑ DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.

Sol :
Set context : kubectl config use-context k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl create sa cicd-token -n app-team1
kubectl create rolebinding deploy-b -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
# a RoleBinding (not a ClusterRoleBinding) is used so the permissions are limited to the app-team1 namespace

################################################################################################
####################################################
Q2 Completed drain, unavailable
Task -
Set the node named ek8s-node-0 as unavailable and reschedule all the pods running on it.
kubectl config use-context ek8s

kubectl get nodes


kubectl drain ek8s-node-0 --ignore-daemonsets --delete-emptydir-data # make unavailable
kubectl uncordon <node name> # make available again
kubectl get nodes # the drained node should show SchedulingDisabled

################################################################################################
####################################################
Q3: Completed upgrade all control plane and node
Task -
Given an existing Kubernetes cluster running version 1.22.1, upgrade all of the Kubernetes control plane and node components
on the master node only to version 1.22.2.
Be sure to drain the master node before upgrading it and uncordon it after the upgrade.
You are also expected to upgrade kubelet and kubectl on the master node.

set context : kubectl config use-context mk8s


kubectl get nodes
First drain master node : kubectl drain mk8s-master-0 --ignore-daemonsets
check node status : kubectl get nodes
ssh into node : ssh mk8s-master-0
sudo -i
apt install kubeadm=1.22.2-00 kubelet=1.22.2-00 kubectl=1.22.2-00
kubeadm upgrade plan
kubeadm upgrade apply v1.22.2
systemctl restart kubelet
exit from ssh
kubectl uncordon mk8s-master-0
kubectl get nodes to check the version
Summary of the steps:

Step 1: drain the node

Step 2: install the new component versions (kubeadm, then kubelet and kubectl)

Step 3: run kubeadm upgrade plan, then kubeadm upgrade apply with the target version

Step 4: if said to, join the nodes to the cluster

Step 5: restart the kubelet process (systemctl restart kubelet)

Step 6: uncordon the node and verify the version

################################################################################################
####################################################

Q4: Completed backup and restore

Q4 : kubectl config use-context mk8s

Step1 : run the etcdctl snapshot save command (below); this will create the snapshot at /var/lib/backup/etcd-snapshot.db

Step2 : then go to /var/lib/backup/etcd-snapshot.db and check that the file is present


Also look for etcd-snapshot-previous.db
Step3 : stop the control plane components before restoring (see the sketch below)
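A minimal sketch of pausing the control plane static pods by moving their manifests out of the manifests directory (paths are the kubeadm defaults, an assumption about the cluster setup):
mv /etc/kubernetes/manifests/kube-apiserver.yaml /etc/kubernetes/kube-apiserver.yaml # kubelet stops the static pod
# move kube-controller-manager.yaml and kube-scheduler.yaml the same way if needed
# after the restore, move the files back into /etc/kubernetes/manifests/ to start the components again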

Etcd backup
kubectl config use-context k8s-c3-CCC

create backup and save it /tmp/etcd-backup.db


touch 25.yaml
copy command from documentation to yaml file

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-backup.db

k get pods -n kube-system


ssh cluster3-master1
cat /etc/kuber*/manifest*/etcd.yaml | grep -i file # check location of server and cert file , ca file , key file

Run the command on the cluster node (make sure you are still in the ssh session):
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db

check the file location and db file as backup taken

create pod in cluster : to test backup


k run test --image=nginx --command sleep 1d
k get pods
k get pods -n kube-system | wc -l

Restore the backup and make sure cluster still working


ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot restore /tmp/etcd-backup.db --data-dir=/var/lib/etcd-backup-1

cd /var/lib/etcd
ls -lrt

update the file vi /etc/kub*/mani*/etcd.yaml


update the etcd-data volume parameter: volumes --> hostPath --> path: /var/lib/etcd-backup-1
k get pods -A
k get all -A
The API server may stop or hang for a while as etcd switches to the restored data directory and the control plane pods are recreated.
k get pods -A
k get pods -n kube-system --no-headers | wc -l
##############################################################################################

Q5: Completed networkpolicy


Task -
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace fubar.
Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 9000 of Pods in namespace fubar.
Further ensure that the new NetworkPolicy:
✑ does not allow access to Pods, which don't listen on port 9000
✑ does not allow access from Pods, which are not in namespace internal
Ans :
set context : kubectl config use-context hk8s
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace # update name
  namespace: fubar # update ns
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: internal # must match a label on the source namespace "internal"
    ports:
    - protocol: TCP
      port: 9000
Save the file
kubectl label ns internal project=internal # label the source namespace so the selector matches
kubectl describe ns internal
kubectl create -f policy.yaml
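An optional verification sketch (the tmp pod name, busybox image, and <fubar-pod-ip> placeholder are illustrative):
k -n internal run tmp --rm -it --restart=Never --image=busybox -- wget -qO- -T 2 <fubar-pod-ip>:9000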
###############################################################################################
Q6 Completed expose service
Task -
Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing
container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

Answer : kubectl config use-context k8s


kubectl get deployments.apps
kubectl edit deployment front-end
--> add a port specification named http exposing 80/tcp on the existing nginx container
kubectl expose deployment front-end --name=front-end-svc --port=80 --type=NodePort --protocol=TCP
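The port stanza to add under the nginx container in the kubectl edit step above (a sketch):
        ports:
        - name: http
          containerPort: 80
          protocol: TCP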

###############################################################################################
Q7: Completed scale deployment 3
Task -
Scale the deployment presentation to 3 pods.

Answer : kubectl config use-context k8s


kubectl get deployments.apps
kubectl scale deployment presentation --replicas=3
kubectl get pod

###############################################################################################
Q8: Completed nodeselector disk:ssd
Task -
Schedule a pod as follows:
✑ Name: nginx-kusc00401
✑ Image: nginx
✑ Node selector: disk=ssd
Set context : kubectl config use-context k8s
k run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 8.yaml
vi 8.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
  nodeSelector:
    disk: ssd # matches the required node selector disk=ssd
kubectl create -f 8.yaml
kubectl get pods
###############################################################################################
Q9:Completed many nodes

Task -
Check to see how many nodes are ready (not including nodes tainted NoSchedule) and
write the number to /opt/KUSC00402/kusc00402.txt.

Set context : kubectl config use-context k8s


kubectl get nodes
kubectl get nodes -o jsonpath="{range .items[*]}{.metadata.name} {.spec.taints[?(@.effect=='NoSchedule')].effect}{\"\n\"}{end}"

echo "3" > /opt/KUSC00402/kusc00402.txt


cat /opt/KUSC00402/kusc00402.txt
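Rather than counting by hand, a rough sketch for computing the number (both commands are approximations; double-check against the kubectl get nodes output):
kubectl get nodes --no-headers | grep -w Ready | wc -l # Ready nodes
kubectl describe nodes | grep -i taints | grep -c NoSchedule # nodes tainted NoSchedule (subtract these)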
###############################################################################################
Q10: Completed 2 containers
Schedule a Pod as follows:
✑ Name: kucc8
✑ App Containers: 2
✑ Container Name/Images:
- nginx
- consul

set context : kubectl config use-context k8s


kubectl run kucc8 --image=nginx --dry-run=client -o yaml > 10.yaml
vi 10.yaml and add 2nd container name and image name
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kucc8
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: consul
    name: consul
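Then create the pod from the file:
kubectl create -f 10.yaml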
##############################################################################################
Q11: Completed persistent volume
Task -
Create a persistent volume with name app-data, of capacity 2Gi and
access mode ReadOnlyMany.
The type of volume is hostPath and its location is /srv/app-data.

set context : kubectl config use-context hk8s


see persistent volume document and get yaml file and update the values below

kind: PersistentVolume
apiVersion: v1
metadata:
  name: app-data
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: "/srv/app-data"

kubectl get pv
kubectl create -f 11.yaml
kubectl get pv # verify the same
###############################################################################################
Q12: Completed error logs

Task -
Monitor the logs of pod foo and:
✑ Extract log lines corresponding to error file-not-found
✑ Write them to /opt/KUTR00101/foo

set context : kubectl config use-context k8s

kubectl get pods foo


kubectl logs foo
kubectl logs foo | grep -i "file-not-found" > /opt/KUTR00101/foo
cat /opt/KUTR00101/foo # to verify it

###############################################################################################
Q13: Completed sidecar container
Context -
An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs).
Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Task -
Add a sidecar container named sidecar, using the busybox image, to the existing Pod big-corp-app.
The new sidecar container has to run the following command:

Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.

Answer :
set context : kubectl config use-context k8s
kubectl get pods -A
kubectl get pod big-corp-app -o yaml > 13.yaml
edit the file: vi 13.yaml
add the following into it

spec:
  volumes:
  - name: logs
    emptyDir: {} # volume source for the shared log directory
  containers:
  - image: busybox
    name: sidecar
    command: ["/bin/sh"]
    args: ["-c", "tail -n+1 -f /var/log/big-corp-app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  # also mount the same volume at /var/log in the existing big-corp-app container

save the file, then re-create the pod and verify it
kubectl replace -f 13.yaml --force
kubectl get pod
###############################################################################################
Q14: Completed high CPU workloads
Task -
From the pod label name=overloaded-cpu, find pods running high CPU workloads and
write the name of the pod consuming most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).

Answer :
set context : kubectl config use-context k8s
kubectl get pods -A
kubectl top pods
kubectl top pods -l name=overloaded-cpu --sort-by=cpu
echo "overloaded-cpu-1234" > /opt/KUTR00401/KUTR00401.txt
cat /opt/KUTR00401/KUTR00401.txt
###############################################################################################
Q15: Completed kubelet fix not ready
Task -
A Kubernetes worker node, named wk8s-node-0 is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state,
ensuring that any changes are made permanent.

Answer :
set context : kubectl config use-context wk8s
kubectl get nodes
kubectl describe nodes wk8s-node-0
ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet # make the change permanent across reboots
exit
kubectl get node
###############################################################################################
Q16: Completed claim
Task -
Create a new PersistentVolumeClaim:
✑ Name: pv-volume
✑ Class: csi-hostpath-sc
✑ Capacity: 10Mi

Create a new Pod which mounts the PersistentVolumeClaim as a volume:


✑ Name: web-server
✑ Image: nginx
✑ Mount path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.

Finally, using kubectl edit or kubectl patch expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.

Answer :
set context : kubectl config use-context ok8s

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
save the file as 16-pvc.yaml and
kubectl create -f 16-pvc.yaml
kubectl get pv,pvc # verify the pvc having capacity of 10Mi

Claim as volume :

apiVersion: v1
kind: Pod
metadata:
  name: web-server # update name of pod
spec:
  containers:
  - name: web-server # update name of container
    image: nginx # update image name
    volumeMounts:
    - mountPath: "/usr/share/nginx/html" # update path
      name: task-pv-storage
  volumes:
  - name: task-pv-storage # update name
    persistentVolumeClaim:
      claimName: pv-volume # update name

save file and run it


kubectl create -f 16pod.yaml
kubectl get pods -o wide
3) kubectl edit pvc pv-volume (or use kubectl patch)
for that, run kubectl get pvc pv-volume -o yaml > 16.yaml
update the capacity from 10Mi to 70Mi in the new yaml file
save the file and check the pvc again
kubectl replace -f 16.yaml --record # --record keeps the change recorded
kubectl get pvc # verify the capacity changed from 10Mi to 70Mi
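Alternatively, a kubectl patch sketch for the same expansion (--record is deprecated but still records the change-cause):
kubectl patch pvc pv-volume --record -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'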
###############################################################################################
Q17: Completed ingress
Task -
Create a new nginx Ingress resource as follows:
✑ Name: pong
✑ Namespace: ing-internal
✑ Exposing service hello on path /hello using service port 5678

Answer :
set context : kubectl config use-context k8s
check document and search nginx Ingress resource
The Ingress resource
vi 17.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
save file and run file
kubectl create -f 17.yaml
kubectl get ingress -n ing-internal
check the service : curl -kL IPAddress/hello # to verify the same
################################################################################################
####################################################
Q18:
Create a new service account with the name pvviewer.
Grant this Service account access to list all PersistentVolumes in the cluster by
creating an appropriate cluster role called pvviewer-role and
ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image: redis and serviceaccount: pvviewer in the default namespace.

Answer :
kubectl create sa pvviewer
kubectl create clusterrole pvviewer-role --verb=list --resource=persistentvolumes
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer

verify it :
kubectl auth can-i list persistentvolumes --as system:serviceaccount:default:pvviewer

create pod
kubectl run pvviewer --image=redis -n default --dry-run=client -o yaml > 18.yaml # then add serviceAccountName

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pvviewer
  name: pvviewer
spec:
  containers:
  - image: redis
    name: pvviewer
  serviceAccountName: pvviewer

save yaml file and create pod


kubectl create -f 18.yaml

###############################################################################################
Q19:
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica.
Record the version. Next upgrade the deployment to version 1.17 using rolling update.
Make sure that the version upgrade is recorded in the resource annotation.

get yaml file from document

kubectl create deployment nginx-deploy --image=nginx:1.16 --replicas=1 --dry-run=client -o yaml > 19.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx:1.16
        name: nginx
        resources: {}
status: {}
# Update the yaml file as below :

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
      - image: nginx:1.16
        name: nginx-deploy
        ports:
        - containerPort: 80

save file 19.yaml

kubectl apply -f 19.yaml --record


kubectl get deployment
kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
1 kubectl apply --filename=19.yaml --record=true

kubectl set image deployment/nginx-deploy nginx-deploy=nginx:1.17 --record


kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION CHANGE-CAUSE
1 kubectl apply --filename=19.yaml --record=true
2 kubectl set image deployment/nginx-deploy nginx-deploy=nginx:1.17 --record=true

###############################################################################################
Q20:
Create a Pod called non-root-pod , image: redis:alpine
runAsUser: 1000
fsGroup: 2000

apiVersion: v1
kind: Pod
metadata:
  name: non-root-pod
  labels:
    name: non-root-pod
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - image: redis:alpine
    name: non-root-pod
###############################################################################################
Q21:
Create a NetworkPolicy which denies all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

kubectl apply -f 21.yaml


kubectl get networkpolicies.networking.k8s.io

Create a NetworkPolicy which denies all Egress traffic

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all-egress
spec:
  podSelector: {}
  policyTypes:
  - Egress

kubectl describe networkpolicies.networking.k8s.io default-deny-all-ingress


kubectl describe networkpolicies.networking.k8s.io default-deny-all-egress

#################################################################################################################
Medium.com site 50 questions

1.Create a new pod with the nginx image


k run nginx --image=nginx

2.Create a new pod with the name redis and with the image redis:1.99.
k run redis --image=redis:1.99

3.Identify the problem with the pod.


k get pod redis
k logs redis
k describe pod redis

4.Create a new replicaset rs-1


using image nginx with
labels tier:frontend and
selectors as tier:frontend.
Create 3 replicas for the same.

get yaml file from document


apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: rs-1
  labels:
    app: nginx
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx

save file and run file


kubectl create -f re.yaml
kubectl get rs

5.Scale up the above replicaset to 5 replicas


kubectl scale rs rs-1 --replicas=5
6. Update the image of the above replicaset to use nginx:alpine image.
edit the replicate set using kubectl edit rs rs-1 and update image name
save file and verify the same

7.Scale down the above replicaset to 2 replicas


kubectl scale rs rs-1 --replicas=2

#################################################################################################################
Create three pods; the pod names and their image name / command are given below:

Pod Name   Image Name & command


nginx      nginx
busybox1   busybox, sleep 3600
busybox2   busybox, sleep 1800
Make sure only the busybox1 pod is able to communicate with the nginx pod on port 80.
Pod busybox2 should not be able to connect to pod nginx.

Answer :
kubectl run nginx --image=nginx
kubectl run busybox1 --image=busybox -- sleep 3600
kubectl run busybox2 --image=busybox -- sleep 1800
kubectl get pods # all three should be Running

Create network policy :


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: networkpolicy
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: busybox1
    ports:
    - protocol: TCP
      port: 80
kubectl create -f networkpolicy.yaml
kubectl describe networkpolicy networkpolicy

do validation:

kubectl exec -it busybox1 -- telnet 192.168.77.131 80 # it should succeed


kubectl exec -it busybox2 -- telnet 192.168.77.131 80 # it should fail
#################################################################################################################
Pods with hard limit on CPU & Memory
Create a pod “cka-pod” using image “nginx”, with a hard limit of 0.5 CPU and 20 Mi memory in “cka-exam” namespace.

check the namespace is available or not , if not the create it


kubectl create ns cka-exam
k run cka-pod --image=nginx --dry-run=client -n cka-exam -o yaml > question.yaml
update the file and add the resource limits

apiVersion: v1
kind: Pod
metadata:
  name: cka-pod
  namespace: cka-exam
spec:
  containers:
  - name: cka-pod
    image: nginx
    resources:
      limits:
        cpu: "0.5"
        memory: 20Mi

kubectl create -f question.yaml


#################################################################################################################

#################################################################################################################
Which of the following commands is used to display a ReplicaSet? -> kubectl get rs
What is the minimum number of network adapter(s) per VM needed to form a Kubernetes multi-node cluster? -> 2
Which sub command of gcloud is used to set the project ID? -> config
Which one is the containerization tool among the following? -> Docker
Which tool is not installed on the worker node of a Kubernetes cluster?
Which service in Google Cloud Platform is used to create a Kubernetes cluster? -> Kubernetes Engine
The command to start the dashboard from minikube?
A ReplicaSet ensures -> a specific number of replicas are running at any specific time
Which one is not a container clustering tool? -> ansible
What is the valid apiVersion of a Pod in the yaml object file? -> apiVersion: v1
To create a customized image -> a Dockerfile is used
Every Pod gets an IP address
What is the minimum number of VMs needed to configure a kubeadm multi-node Kubernetes cluster? -> 3
Through which tool can a single-node cluster of Kubernetes be installed? -> minikube
To ensure high availability in a Kubernetes multi-node cluster, how many master nodes are needed at minimum? -> 3
To see the detailed configuration parameters of a Pod -> kubectl describe <podname> is used
The command to start minikube -> minikube start
Which sub command of gcloud is used to create a Kubernetes cluster? -> container
Which tool is used in the aws cli to create and manage a Kubernetes cluster in AWS?
Should we keep the firewall off to interconnect all the VMs in a Kubernetes multi-node cluster? -> yes
https://github.com/dgkanatsios/CKAD-exercises/blob/master/a.core_concepts.md
https://medium.com/@sensri108/practice-examples-dumps-tips-for-cka-ckad-certified-kubernetes-administrator-exam-by-cncf-4826233ccc27
https://de.slideshare.net/vinodmeltoe/cka-dumps

********************Pods****************************************
kubectl get pods
kubectl run nginx --image=nginx --generator=run-pod/v1 --> create pod with name nginx
kubectl run redis --image=redis123 --generator=run-pod/v1 --> create pod
kubectl edit pod podname --> to edit pods
kubectl describe pod podname-id --> to get details information
kubectl get pods -o wide
kubectl delete pod webapp --> delete pod
READY 1/2 means 1 running container out of 2 total containers in the pod
Create a new pod with the name 'redis' and with the image 'redis123' --> Using file ###### pending
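A sketch of the file-based approach for the pending item above (the file name redis.yaml is an assumption):
k run redis --image=redis123 --dry-run=client -o yaml > redis.yaml
k create -f redis.yaml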

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
kubectl version --client
*********************************Replicaset*****************
'kubectl get replicaset'.
'kubectl get replicaset' -o wide
'kubectl describe replicaset'
a replicaset ensures that the desired number of pods are in a running state
correct api version for replicaset : apps/v1
kubectl create -f replicaset-definition-1.yaml --> create replicaset
kubectl get rs
kubectl delete replicaset replicasetname
kubectl edit replicaset replicasetname
if the replicaset is edited, either delete its pods so they are recreated,
or delete the replicaset; before deleting, take/export the definition file
kubectl get rs new-replica-set -o yaml > new-replica-set.yaml
kubectl delete replicaset replicasetname
kubectl create -f new-replicaset.yaml
to scale the replicaset to 5, either edit the yaml file or run kubectl edit replicaset new-replica-set
kubectl scale rs new-replica-set --replicas=5

*********************************Deployments*****************
kubectl get deployment
kubectl get deployment -o wide
kubectl create -f deployment-definition.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      name: httpd-frontend-pods
  template:
    metadata:
      labels:
        name: httpd-frontend-pods
    spec:
      containers:
      - name: httpd-frontend-container
        image: httpd:2.4-alpine
        command:
        - sh
        - "-c"
        - echo Hello Kubernetes! && sleep 3600

********************************Namespaces*****************
kubectl get namespaces
kubectl get pods --namespace=spacename --> get pods in a namespace
kubectl run redis --image=redis --namespace=finance --generator=run-pod/v1 --> create pod in namespace
kubectl get pods --all-namespaces
********************************services*****************
kubectl get service
the default Kubernetes Service type is ClusterIP
kubectl describe service
kubectl create -f service-definition.yaml
Service definition file:

---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  ports:
  - targetPort: 8080
    port: 8080
    nodePort: 30080
  selector:
    name: simple-webapp

*******************************imperative commands*****************
kubectl run nginx-pod --image=nginx:alpine --generator=run-pod/v1 --> create pod
kubectl run nginx-pod --image=nginx:alpine --generator=run-pod/v1 -l tier=db --> create pod with a label
kubectl expose pod redis --port=6379 --name redis-service --> create service to expose application port
kubectl create deployment webapp --image=kodekloud/webapp-color --> create deployment
kubectl scale deployment webapp --replicas=3
kubectl expose deployment webapp --type=NodePort --port=8080 --name=webapp-service --dry-run -o yaml > webapp-service.yaml

*******************************scheduling and monitoring*****************


kubectl get pods
kubectl top node
kubectl top pod
kubectl logs webapp1

*****************************ALM Rolling update , env variable container ****************


kubectl get deployments
kubectl describe deployments frontend
strategy type : Rollingupdate
Create pod with given spects :
Pod Name: webapp-green
Image: kodekloud/webapp-color
Command line arguments: --color=green

apiVersion: v1
kind: Pod
metadata:
  name: webapp-green
  labels:
    name: webapp-green
spec:
  containers:
  - name: simple-webapp
    image: kodekloud/webapp-color
    command: ["python", "app.py"]
    args: ["--color", "green"]
*****************************Env variable ****************
kubectl get pod webapp-color -o yaml > webapp-color.yaml
kubectl get configmaps
kubectl describe configmaps db-config
kubectl create configmap webapp-config-map --from-literal=APP_COLOR=darkblue
kubectl get secrets
kubectl create secret generic db-secret --from-literal=DB_HOST=sql01 --from-literal=DB_USER=root --from-literal=DB_Password=password123 --> create secret
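The section creates ConfigMaps and Secrets but does not show consuming them; a sketch of injecting them as environment variables into a pod container (the pod/container names are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: webapp-color
spec:
  containers:
  - name: webapp-color
    image: kodekloud/webapp-color
    envFrom:
    - configMapRef:
        name: webapp-config-map # all keys become env variables
    env:
    - name: DB_Password
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: DB_Password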
*****************************Multicontainer pods****************init container need practices#######
multicontainer.yaml
apiVersion: v1
kind: Pod
metadata:
  name: yellow
spec:
  containers:
  - name: lemon
    image: busybox

  - name: gold
    image: redismaster

kubectl exec -it app -- cat /log/app.log --> get the log file
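The section header above notes that init containers need practice; a minimal sketch (names and images are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db
    image: busybox
    command: ["sh", "-c", "until nslookup db-service; do sleep 2; done"] # runs to completion before the app container starts
  containers:
  - name: app
    image: nginx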


******************************* Cluster Maintainace#########################################
kubectl get nodes
kubectl get deployments --> see the number of applications hosted on the cluster
We need to take node01 out for maintenance. Empty the node of all applications and mark it unschedulable.
kubectl drain node01 --ignore-daemonsets
kubectl uncordon node01 --> make the node schedulable again
kubectl describe node master
kubectl drain node02 --ignore-daemonsets --force
Node03 has our critical applications. We do not want to schedule any more apps on node03. Mark node03 as unschedulable but do not
remove any apps currently running on it
--> Kubectl cordon node03
*****************Cluster upgrade**************************
kubectl get nodes --> check the current cluster/node version
kubeadm upgrade plan
kubectl drain master --ignore-daemonsets --> drain node and mark it unschedulable
-- upgrade kubeadm, then run upgrade plan and apply
apt install kubeadm=1.12.0-00
kubeadm upgrade apply v1.12.0
apt install kubelet=1.12.0-00
kubectl uncordon master --> make the master node schedulable again
kubectl drain node01 --ignore-daemonsets --> off-load the node and mark it unschedulable
--> upgrade the worker node
apt install kubeadm=1.12.0-00
apt install kubelet=1.12.0-00
kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
master $ kubeadm upgrade node config --kubelet-version $(kubelet --version | cut -d ' ' -f 2)
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

apt install kubelet


*******************Backup and storage****************************
check etcd version
kubectl logs etcd-master -n kube-system
kubectl describe pod etcd-master -n kube-system -> check for listen-client
/etc/kubernetes/pki/etcd/server.crt --> server certificate location
/etc/kubernetes/pki/etcd/ca.crt --> ca certificate
Take backup before rebooting master node
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/snapshot-pre-boot.db

To restore the etcd snapshot


ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--name=master \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
--data-dir /var/lib/etcd-from-backup \
--initial-cluster=master=https://127.0.0.1:2380 \
--initial-cluster-token etcd-cluster-1 \
--initial-advertise-peer-urls=https://127.0.0.1:2380 \
snapshot restore /tmp/snapshot-pre-boot.db
*********************************Certificate********************************

Kube-api server certificate file :


cat /etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/pki/apiserver-etcd-client.crt

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text


kube api server certificate
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text --> api server
openssl x509 -in /etc/kubernetes/pki/etcd/server.crt -text --> etcd server certificate
openssl x509 -in /etc/kubernetes/pki/ca.crt -text --> Root CA

openssl x509 -req -in /etc/kubernetes/pki/apiserver-etcd-client.csr -CA /etc/kubernetes/pki/etcd/ca.crt -CAkey /etc/kubernetes/pki/etcd/ca.key -CAcreateserial -out /etc/kubernetes/pki/apiserver-etcd-client.crt --> Generate the new certificate
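A quick way to check certificate validity dates (a sketch):
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -dates # prints notBefore / notAfter
kubeadm certs check-expiration # on older kubeadm versions this was "kubeadm alpha certs check-expiration"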
export dr='--dry-run=client -o yaml'
export now='--grace-period=0 --force'

k run pod --image=nginx -- sleep 1d


k delete pod pod $now --> Forcefully deleted

#################################################################################################################
Q1 :
Answer :
k config get-contexts -o name
k config get-contexts -o name > /opt/course/1/contexts
cat /opt/course/1/contexts

With Kubectl
kubectl config current-context # Display Current contexts
echo "kubectl config current-context" > /opt/course/1/context_default_kubectl.sh
cat /opt/course/1/context_default_kubectl.sh
sh /opt/course/1/context_default_kubectl.sh

Without Kubectl
you will get all contexts from the ~/.kube/config text file
cat ~/.kube/config | grep -i current-context
echo "cat ~/.kube/config | grep -i current-context | sed 's/current-context: //'" > /opt/course/1/context_default_no_kubectl.sh
sh /opt/course/1/context_default_no_kubectl.sh
#################################################################################################################
Q2 :
kubectl config use-context k8s-c1-Hello
k run -n default pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml
k run -n default pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml > 2.yaml
edit 2.yaml file and update container name : pod1-container
schedule the pod on the master node: get the master node name from kubectl get nodes and set it under spec
spec:
  nodeName: cluster1-master

k create -f 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  tolerations: # add
  - effect: NoSchedule # add
    key: node-role.kubernetes.io/control-plane # add
  nodeSelector: # add
    node-role.kubernetes.io/control-plane: "" # add (the control-plane label has an empty value)
status: {}

k get pods -n default pod1


k get pods -n default pod1 -o wide # check node running on master node or not
k describe pod pod1 -n default
#################################################################################################################

Q3 :
kubectl config use-context k8s-c1-H # set this context
k get all -n project-c13 # to check pods
k get pods -n project-c13 # to check pods
k scale statefulset -n project-c13 o3db --replicas=1 # Scale down to 1 replica
k get pods -n project-c13 # to verify pods now one pods should run
k get all -n project-c13 # to verify pods now one pods should run
#################################################################################################################
Q4 :
kubectl config use-context k8s-c1-H
k run -n default ready-if-service-ready --image=nginx:1.16.1-alpine --dry-run=client -o yaml > 4.yaml
edit 4.yaml file
configure a livenessProbe and readinessProbe; take help from the documentation and put them under the container spec section
livenessProbe:
  exec:
    command:
    - 'true'
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - 'wget -T2 -O- http://service-am-i-ready:80'

save file and create the pod


k get pods -n default # check the pods with same name found or not
k create -f 4.yaml
check the pods status and describe service and it should failed with readiness probe

k run am-i-ready -n default --image=nginx:1.16.1-alpine --labels="id=cross-servicer-ready" --dry-run=client -o yaml


k get svc -n default service-am-i-ready
k get ep -n default
k run am-i-ready -n default --image=nginx:1.16.1-alpine --labels="id=cross-servicer-ready"
k get pods -n default am-i-ready
k get svc -n default service-am-i-ready
k get ep -n default
k get pods -n default am-i-ready -o wide

Now first pod should be in ready status :


k get pods -n default ready-if-service-ready
k describe pod -n default ready-if-service-ready # check the pod is in a running state

# 4_pod1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ready-if-service-ready
  name: ready-if-service-ready
spec:
  containers:
  - image: nginx:1.16.1-alpine
    name: ready-if-service-ready
    resources: {}
    livenessProbe: # add from here
      exec:
        command:
        - 'true'
    readinessProbe:
      exec:
        command:
        - sh
        - -c
        - 'wget -T2 -O- http://service-am-i-ready:80' # to here
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

#################################################################################################################
Q5 :
kubectl config use-context k8s-c1-H
k get pods --all-namespaces or k get pods -A
k get pods -A --sort-by=metadata.creationTimestamp
echo "k get pods -A --sort-by=metadata.creationTimestamp" > /opt/course/5/find_pods.sh
cat /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods.sh

k get pods -A --sort-by=metadata.uid


echo "kubectl get pods -A --sort-by=metadata.uid" > /opt/course/5/find_pods_uid.sh
sh /opt/course/5/find_pods_uid.sh
#################################################################################################################
Q6 :
kubectl config use-context k8s-c1-H
copy yaml from k8s document with 6.yaml
modify the name of the persistent volume : safari-pv

kind: PersistentVolume
apiVersion: v1
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"
k get pv
k create -f 6.yaml
k get pv
**************************************
pvc : PersistentVolumeClaim
update : name safari-pvc
namespace : project-tiger
storage : 2Gi
access mode : ReadWriteOnce
do not define a storageClassName

k get pvc
k create -f 6-pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: safari-pvc
  namespace: project-tiger
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

k get pvc -n project-tiger


****************************************
k create deployment safari -n project-tiger --image=httpd:2.4.41-alpine --dry-run=client -o yaml > 6-dep.yaml
update the yaml file to mount the PVC created above at /tmp/safari-data, as per the question:
    spec:
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: safari-pvc
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:
        - name: data
          mountPath: "/tmp/safari-data"
k get deployments -A

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: safari
  name: safari
  namespace: project-tiger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: safari
    spec:
      volumes:                        # add
      - name: data                    # add
        persistentVolumeClaim:        # add
          claimName: safari-pvc       # add
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:                 # add
        - name: data                  # add
          mountPath: /tmp/safari-data # add

k create -f 6-dep.yaml
verify deployments and describe deployment
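A quick way to confirm the pieces fit together (the app=safari label comes from the generated manifest above):
k get pv,pvc -n project-tiger # safari-pvc should show STATUS Bound
k describe deployment safari -n project-tiger | grep -i -A2 mounts # volume mounted at /tmp/safari-data
k get pods -n project-tiger -l app=safari # pod should be Running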
#################################################################################################################
Q7 :
kubectl config use-context k8s-c1-H

Show node resource usage


k top node # resource utilization
echo "kubectl top node" > /opt/course/7/node.sh
cat /opt/course/7/node.sh
sh /opt/course/7/node.sh

Show pods and their container resource usage


kubectl top pods --containers
echo "kubectl top pods --containers" > /opt/course/7/pod.sh
cat /opt/course/7/pod.sh
sh /opt/course/7/pod.sh
verify the both files using ls -lrt
#################################################################################################################
Q8:
kubectl config use-context k8s-c1-H
SSH to the master node and find out how the DNS application and the control plane components are started (process, static pod or pod)
touch /opt/course/8/master-components.txt
copy the required output format from the question
ssh cluster1-master1
k get all -n kube-system | grep -i dns
k get all -n kube-system | grep -i etcd
ls /etc/kube*/manifests | grep etcd
k get all -n kube-system | grep -i kube-controller-manager
k get all -n kube-system | grep -i kube-scheduler
k get all -n kube-system | grep -i kube-apiserver
k get all -n kube-system | grep -i kubelet # kubelet is not a pod, it runs as a process (systemd service)

dns Type : pod, Name : coredns


etcd Type : static-pod
kube-controller-manager Type : static-pod
kube-scheduler Type : static-pod
kube-apiserver Type : static-pod
kubelet Type : process
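
A quick way to tell a static pod from a normal pod (a sketch, run while still ssh'd into the master): a static pod has a manifest file on the node and its name carries the node-name suffix:
ls /etc/kubernetes/manifests/ # static pod manifests live here
k get pods -n kube-system -o wide | grep cluster1-master1 # static pods show the -cluster1-master1 suffix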
#################################################################################################################
Q9
kubectl config use-context k8s-c2-AC
stop kube scheduler :
ssh cluster2-master1
k get pods -n kube-system | grep -i kube-scheduler
cd /etc/kub*/manifests
mv kube-scheduler.yaml kube-scheduler_org.yaml
k get pods -n kube-system | grep -i kube-scheduler # pod should be terminated

create a pod; with the scheduler stopped it will stay in Pending


k run manual-schedule --image=httpd:2.4-alpine
k get pod manual-schedule # stays Pending because no scheduler is running

manually schedule the pod


k get pod manual-schedule -o yaml > 9.yaml
vi 9.yaml
in the spec section add the controlplane node name, for example:
  nodeName: cluster2-master1 # use the exact controlplane node name from k get nodes
k replace -f 9.yaml --force
k get pod manual-schedule -o wide # now running on the controlplane node

start the scheduler again and run a 2nd pod


restore the file: mv kube-scheduler_org.yaml kube-scheduler.yaml
k get pods -n kube-system | grep -i kube-scheduler # scheduler static pod is Running again
k run manual-schedule2 --image=httpd:2.4-alpine # create the 2nd pod (use the exact name given in the question)
k get pods -o wide # the new pod is scheduled normally to a worker node

#################################################################################################################
Q10:

A ClusterRole|Role defines a set of permissions and where it is available, in the whole cluster or just a single Namespace.
A ClusterRoleBinding|RoleBinding connects a set of permissions with an account and defines where it is applied, in the whole cluster or just
a single Namespace.
Because of this there are 4 different RBAC combinations and 3 valid ones:

Role + RoleBinding (available in single Namespace, applied in single Namespace)


ClusterRole + ClusterRoleBinding (available cluster-wide, applied cluster-wide)
ClusterRole + RoleBinding (available cluster-wide, applied in single Namespace)
Role + ClusterRoleBinding (NOT POSSIBLE: available in single Namespace, applied cluster-wide)

kubectl config use-context k8s-c1-H


Service account create
k create sa processor --namespace project-hamster
k get sa -A

create a Role that allows the SA to create Secrets and ConfigMaps in the Namespace


kubectl api-resources # check the resource names in the first column
k create role processor -n project-hamster --verb=create --resource=secrets,configmaps
k get role -n project-hamster
k describe role -n project-hamster
k get configmap -n project-hamster

Create rolebinding
k create rolebinding -n project-hamster processor --serviceaccount=project-hamster:processor --role=processor --dry-run=client -o yaml
k create rolebinding -n project-hamster processor --serviceaccount=project-hamster:processor --role=processor

k get rolebindings -n project-hamster # verify the same


k describe rolebindings -n project-hamster # describe the same
check whether the ServiceAccount is allowed to create the resources
k auth can-i create secrets --namespace project-hamster --as=system:serviceaccount:project-hamster:processor # yes
k auth can-i create configmaps --namespace project-hamster --as=system:serviceaccount:project-hamster:processor # yes
k auth can-i create pods --namespace project-hamster --as=system:serviceaccount:project-hamster:processor # no

➜ k -n project-hamster auth can-i create secret --as system:serviceaccount:project-hamster:processor
yes

➜ k -n project-hamster auth can-i create configmap --as system:serviceaccount:project-hamster:processor
yes

➜ k -n project-hamster auth can-i create pod --as system:serviceaccount:project-hamster:processor
no

➜ k -n project-hamster auth can-i delete secret --as system:serviceaccount:project-hamster:processor
no

➜ k -n project-hamster auth can-i get configmap --as system:serviceaccount:project-hamster:processor
no

#################################################################################################################
Q11 :
kubectl config use-context k8s-c1-H

Create a DaemonSet: start from a Deployment manifest from the documentation and change the kind


set cpu and memory requests
the DaemonSet pods must run on all nodes, controlplane and workers (add a toleration)

# 11.yaml
apiVersion: apps/v1
kind: DaemonSet                  # change from Deployment to DaemonSet
metadata:
  creationTimestamp: null
  labels:                        # add
    id: ds-important             # add
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462   # add
  name: ds-important
  namespace: project-tiger       # important
spec:
  #replicas: 1                   # remove
  selector:
    matchLabels:
      id: ds-important           # add
      uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
  #strategy: {}                  # remove
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: ds-important         # add
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
    spec:
      containers:
      - image: httpd:2.4-alpine
        name: ds-important
        resources:
          requests:              # add
            cpu: 10m             # add
            memory: 10Mi         # add
      tolerations:               # add
      - effect: NoSchedule       # add
        key: node-role.kubernetes.io/control-plane   # add
  #status: {}                    # remove

k create -f 11.yaml
verify that the DaemonSet and its pods are created on every node, as shown below
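A quick check sketch (the label selector comes from the manifest above):
k get ds -n project-tiger ds-important # DESIRED/READY should equal the number of nodes
k get pods -n project-tiger -l id=ds-important -o wide # one pod per node, including the controlplane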
#################################################################################################################
Q12
kubectl config use-context k8s-c1-H

k create deployment --namespace project-tiger deploy-important --image=nginx:1.17.6-alpine --replicas=3 --dry-run=client -o yaml > 12.yaml
vi 12.yaml

# 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    id: very-important           # change
  name: deploy-important
  namespace: project-tiger       # important
spec:
  replicas: 3                    # change
  selector:
    matchLabels:
      id: very-important         # change
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: very-important       # change
    spec:
      containers:
      - image: nginx:1.17.6-alpine
        name: container1         # change
        resources: {}
      - image: kubernetes/pause  # add
        name: container2         # add
      affinity:                                            # add
        podAntiAffinity:                                   # add
          requiredDuringSchedulingIgnoredDuringExecution:  # add
          - labelSelector:                                 # add
              matchExpressions:                            # add
              - key: id                                    # add
                operator: In                               # add
                values:                                    # add
                - very-important                           # add
            topologyKey: kubernetes.io/hostname            # add
status: {}

save the file and create the deployment


k create -f 12.yaml
k get deployment -n project-tiger # 3 desired replicas, only 2 ready/available
k get pods -n project-tiger -o wide # 3 pods, 2 Running and 1 Pending (anti-affinity allows one pod per node and there are only two workers)

validate the solution
k describe deployment -n project-tiger deploy-important
--> verify the name and the id=very-important label
--> 3 desired replicas, 2 available, container1 and container2 defined
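
An alternative to podAntiAffinity that gives the same one-pod-per-node spread (a sketch, not required by the question) is a topology spread constraint in the pod template spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            id: very-important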

#################################################################################################################
Q13:
kubectl config use-context k8s-c1-H

Create a pod with 3 containers + a volume mounted in every container, not shared or persisted with other pods (emptyDir)
pod : multi-container-playground
c1 : nginx:1.17.6-alpine, env variable MY_NODE_NAME containing the node name
c2 : busybox:1.31.1
write the output of the date command every second into the shared volume file date.log, use a while true loop
c3 : busybox:1.31.1, send the content of file date.log from the shared volume to stdout, use tail -f
check the logs of c3 to confirm the setup

k run multi-container-playground -n default --image=nginx:1.17.6-alpine --dry-run=client -o yaml > 13.yaml


open vi 13.yaml file

# 13.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-container-playground
  name: multi-container-playground
spec:
  containers:
  - image: nginx:1.17.6-alpine
    name: c1                         # change
    resources: {}
    env:                             # add
    - name: MY_NODE_NAME             # add
      valueFrom:                     # add
        fieldRef:                    # add
          fieldPath: spec.nodeName   # add
    volumeMounts:                    # add
    - name: vol                      # add
      mountPath: /vol                # add
  - image: busybox:1.31.1            # add
    name: c2                         # add
    command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"]   # add
    volumeMounts:                    # add
    - name: vol                      # add
      mountPath: /vol                # add
  - image: busybox:1.31.1            # add
    name: c3                         # add
    command: ["sh", "-c", "tail -f /vol/date.log"]   # add
    volumeMounts:                    # add
    - name: vol                      # add
      mountPath: /vol                # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                           # add
  - name: vol                        # add
    emptyDir: {}                     # add
status: {}

do verification :
k create -f 13.yaml
k get pods -n default
k describe pod multi-container-playground -n default
k exec -it multi-container-playground -c c1 -- printenv # print env variables
k exec -it multi-container-playground -c c1 -- printenv | grep -i MY_NODE_NAME # env variable with the node name

k exec -it multi-container-playground -c c1 -- cat /vol/date.log # a new date line is written every second


k logs multi-container-playground -c c3 # c3 streams the date.log content to stdout
#################################################################################################################
Q14:
kubectl config use-context k8s-c1-H

How many Master Nodes :


k get nodes

How many worker nodes:


k get nodes

What is service CIDR:


ssh cluster1-master1
k get pods -n kube-system | grep -i kube-api # this show static pod
cd /etc/kubernetes/manifests
cat kube-apiserver.yaml | grep -i range # shows --service-cluster-ip-range

Which CNI plugin is configured and where is its config file located?


cd /etc/cni/net.d
ls -lrt
cat 10-weave.conflist

Which suffix will static pods have that run on cluster1-worker1?


the suffix of a static pod is the node name, i.e. -cluster1-worker1

touch /opt/course/14/cluster-info
1: 1 master node
2: 2 worker nodes
3: 10.96.0.0/12
4: Weave, /etc/cni/net.d/10-weave.conflist
5: -cluster1-worker1

cat /opt/course/14/cluster-info
#################################################################################################################
Q15:
kubectl config use-context k8s-c2-AC

touch /opt/course/15/cluster_events.sh
echo 'kubectl get events -A --sort-by=.metadata.creationTimestamp' > /opt/course/15/cluster_events.sh
sh /opt/course/15/cluster_events.sh

kubectl get pods -A -o wide | grep kube-proxy


kubectl delete pod <kube-proxy pod name> -n kube-system
kubectl get pods -A -o wide | grep kube-proxy
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp --> copy the events to /opt/course/15/pod_kill.log

kubectl get pods -A -o wide | grep kube-proxy --> get pod name
ssh cluster2-worker1
crictl ps
copy the container ID shown next to the kube-proxy pod name
the task is to kill that container
crictl stop 653345545 # containerID
crictl rm 653345545 # containerID
crictl ps

kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp --> copy the new events to /opt/course/15/container_kill.log
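One way to capture the events into the requested files (a sketch; the script created above already contains the sorted events command):
sh /opt/course/15/cluster_events.sh > /opt/course/15/pod_kill.log # right after deleting the kube-proxy pod
sh /opt/course/15/cluster_events.sh > /opt/course/15/container_kill.log # right after killing the container on the node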


#################################################################################################################
Q16:
kubectl config use-context k8s-c1-H

Create namespace
kubectl create ns cks-master

write all namespaced resources of the Kubernetes cluster into the file


echo "kubectl api-resources --namespaced -o name" > /opt/course/16/resources.txt
cat /opt/course/16/resources.txt # verify the output

find the project-* Namespace with the highest number of Roles

kubectl get roles -n project-c14 --no-headers | wc -l


approx 300

k get ns
verify with the other project-* namespaces as well, just for confirmation:
kubectl get roles -n <other namespace> --no-headers | wc -l

vi /opt/course/16/crowded-namespace.txt
add
project-c14 300
save file and cat file /opt/course/16/crowded-namespace.txt
#################################################################################################################
Q17:
kubectl config use-context k8s-c1-H

kubectl run -n project-tiger tigers-reunite --image=httpd:2.4.41-alpine --labels="pods=container,container=pod" --dry-run=client -o yaml


kubectl run -n project-tiger tigers-reunite --image=httpd:2.4.41-alpine --labels="pods=container,container=pod"

k get pods -n project-tiger -o wide # shows the node the pod is running on
cluster1-worker2

ssh cluster1-worker2
crictl ps | grep -i tigers-reunite
copy the container ID and write it into the txt file
crictl inspect 5656645656565 | grep -i runtimeType # use the containerID from the previous command
check the details

vi /opt/course/17/pod-container.txt
5656645656565 io.containerd.runc.v2

write logs into /opt/course/17/pod-container.log


crictl logs 5656645656565 # check the logs
copy the log output into /opt/course/17/pod-container.log
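A sketch for writing the logs straight into the file from the main terminal instead of copying by hand (assumes the same containerID; &> also captures stderr, where crictl prints the log output):
ssh cluster1-worker2 'crictl logs 5656645656565' &> /opt/course/17/pod-container.log
cat /opt/course/17/pod-container.log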
#################################################################################################################
Q18:
kubectl config use-context k8s-c2-CCC

the kubelet is not running on the worker node, fix it

k get nodes # 2 nodes one will be not ready


ssh cluster3-worker1
ps aux | grep -i kubelet # check whether the kubelet process is running
service kubelet status
service kubelet start # then check whether it actually starts
--> check location of kubelet : /usr/bin/kubelet and in config file it is configured as /usr/local/bin/kubelet
--> correct the file and save it
--> start the service again systemctl daemon-reload && systemctl restart kubelet
--> service kubelet status
--> k get nodes # it should be ready status

write reason of the issue into /opt/course/18/reason.txt


reason: the systemd unit config pointed to the wrong kubelet binary path /usr/local/bin/kubelet
corrected the path to /usr/bin/kubelet
config file: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#################################################################################################################
Q19
Secret Test
kubectl config use-context k8s-c2-CCC

k create ns secret # create the Namespace


# 19_secret1.yaml
apiVersion: v1
data:
  halt: IyEgL2Jpbi9zaAo...
kind: Secret
metadata:
  creationTimestamp: null
  name: secret1
  namespace: secret   # change

k run -n secret secret-pod --image=busybox:1.31.1 --dry-run=client -o yaml -- sh -c "sleep 1d" > 19.yaml


vi 19.yaml
# 19.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: secret-pod
  name: secret-pod
  namespace: secret            # add
spec:
  containers:
  - args:
    - sh
    - -c
    - sleep 1d
    image: busybox:1.31.1
    name: secret-pod
    resources: {}
    env:                       # add
    - name: APP_USER           # add
      valueFrom:               # add
        secretKeyRef:          # add
          name: secret2        # add
          key: user            # add
    - name: APP_PASS           # add
      valueFrom:               # add
        secretKeyRef:          # add
          name: secret2        # add
          key: pass            # add
    volumeMounts:              # add
    - name: secret1            # add
      mountPath: /tmp/secret1  # add
      readOnly: true           # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:                     # add
  - name: secret1              # add
    secret:                    # add
      secretName: secret1      # add
status: {}

update the secret1.yaml file given in the question: set namespace: secret
k create -f secret1.yaml # the file specified in the question
k get secrets -n secret

k create secret generic secret2 -n secret --from-literal="user=user1" --from-literal="pass=1234"


k describe secret secret2 -n secret
copy the base64-encoded user and pass values and verify them with: echo <value> | base64 --decode
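
A quick in-pod check that both the env variables and the mounted secret arrived (values as created above):
k exec -n secret secret-pod -- env | grep APP_ # APP_USER=user1, APP_PASS=1234
k exec -n secret secret-pod -- cat /tmp/secret1/halt # content of secret1 mounted at /tmp/secret1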

#################################################################################################################
Q20:
kubectl config use-context k8s-c2-CCC

Node version upgrade from old to new

kubectl get nodes -o wide


cluster3-master1 is at v1.25.1

ssh cluster3-worker2
kubeadm version
--> 1.25.2
kubelet --version
--> v1.24.6

upgrade kubelet and kubectl on the worker node
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.25.2-00 kubectl=1.25.2-00
apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubeadm join # join the worker to the cluster (get the join command from the master, see below)
********************************************************
ssh cluster3-master1
kubeadm token create --print-join-command
copy the output and paste it on the worker node
********************************************************
kubectl get nodes
#################################################################################################################
Q21:
kubectl config use-context k8s-c2-CCC

Create a static pod with cpu/memory requests and expose it with a NodePort service

ssh cluster3-master1
cd /etc/kub*/mani*
ls -rlt
Go to another terminal where context is set
k run my-static-pod -n default --image=nginx:1.16-alpine --dry-run=client -o yaml > 21.yaml
vi 21.yaml
# /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-static-pod
  name: my-static-pod
spec:
  containers:
  - image: nginx:1.16-alpine
    name: my-static-pod
    resources:
      requests:
        cpu: 10m
        memory: 20Mi
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

add the cpu and memory requests only (no limits)


save the file into the manifests directory on cluster3-master1
k get pods -n default # the static pod shows up as my-static-pod-cluster3-master1
k describe pod my-static-pod-cluster3-master1 -n default
**************************************************************************
create the NodePort service
k expose pod my-static-pod-cluster3-master1 -n default --name=static-pod-service --port=80 --type=NodePort --dry-run=client -o yaml
k expose pod my-static-pod-cluster3-master1 -n default --name=static-pod-service --port=80 --type=NodePort
k get svc,ep -n default | grep static-pod-service # check the ClusterIP, endpoint IP and NodePort
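A sketch to verify the service actually reaches the nginx static pod (tmp is a throwaway pod name; assumes the nginx:alpine image ships curl):
k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 static-pod-service:80 # should return the nginx welcome page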
#################################################################################################################
Q22:
kubectl config use-context k8s-c2-AC
check how long the kube-apiserver server certificate is valid
use openssl or cfssl
ssh cluster2-master1

k get po -n kube-system | grep -i kube-api


cd /etc/kub*/mani*
cat kube-apiserver.yaml | grep -i tls-cert
cat /etc/kub*/pki/apiserver.crt
openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt | grep -i valid -A 2
copy the expiration date (the "Not After" value) into /opt/course/22/expiration

kubeadm certs check-expiration | grep -i apiserver # get certificate date using kubeadm

certificate renew command using kubeadm


touch /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver # copy this command into the file mentioned above
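The two answer files end up roughly like this (a sketch; the actual date comes from the openssl/kubeadm output):
# /opt/course/22/expiration
<Not After date copied from the openssl output>

# /opt/course/22/kubeadm-renew-certs.sh
kubeadm certs renew apiserver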
#################################################################################################################
Q23 :
kubectl config use-context k8s-c2-AC

find the Issuer and Extended Key Usage of the two kubelet certificates on cluster2-worker1


1. kubelet client
2. kubelet server

ssh cluster2-worker1
service kubelet status # check status running or not
cd /etc/systemd/system/kubelet.service.d
cat 10-kubeadm.conf
cat /etc/kubernetes/kubelet.conf | grep -i cert
this shows the location of the client certificate, a pem file

openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep -i issuer


copy the Issuer into the certificate-info file
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet-client-current.pem | grep -i usage -A 10
copy the Extended Key Usage into the certificate-info file
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep -i issuer -A 10
copy the Issuer into the certificate-info file
openssl x509 -noout -text -in /var/lib/kubelet/pki/kubelet.crt | grep -i usage -A 10
copy the Extended Key Usage into the certificate-info file

write info into file and compare both fields


kubelet client certificate:
Issuer: CN = kubernetes
Extended Key Usage: TLS Web Client Authentication

kubelet server certificate (kubelet.crt):
Issuer: CN = cluster2-worker1-ca@16667677
Extended Key Usage: TLS Web Server Authentication
#################################################################################################################
Q24
kubectl config use-context k8s-c1-H
Create network policy

set context :
k get all -o wide -n project-snake

k exec -it -n project-snake backend-0 -- curl 10.47.0.11:1111


--> database one
k exec -it -n project-snake backend-0 -- ping 10.47.0.11

k exec -it -n project-snake backend-0 -- curl 10.47.0.12:2222


--> database two
k exec -it -n project-snake backend-0 -- ping 10.47.0.12

k exec -it -n project-snake backend-0 -- curl 10.47.0.13:3333


--> vault secret storage
k exec -it -n project-snake backend-0 -- ping 10.47.0.13

k get -n project-snake pods --show-labels


# 24_np.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress                 # policy is only about Egress
  egress:
  -                        # first rule
    to:                    # first condition "to"
    - podSelector:
        matchLabels:
          app: db1
    ports:                 # second condition "port"
    - protocol: TCP
      port: 1111
  -                        # second rule
    to:                    # first condition "to"
    - podSelector:
        matchLabels:
          app: db2
    ports:                 # second condition "port"
    - protocol: TCP
      port: 2222

k get networkpolicies -n project-snake


k create -f 24_np.yaml
do validation once policy is implemented.
k get pods -n project-snake -o wide

curl to ports 1111 and 2222 should still get a response


k exec -it -n project-snake backend-0 -- curl 10.47.0.13:3333
access to the vault pod should now be blocked (the request times out)
k exec -it -n project-snake backend-0 -- ping 10.47.0.13

#################################################################################################################
##########
Q25
Etcd backup
kubectl config use-context k8s-c3-CCC

create backup and save it /tmp/etcd-backup.db


copy the etcdctl snapshot save command from the documentation into a scratch file, e.g. 25.sh

ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-backup.db

k get pods -n kube-system


ssh cluster3-master1
cat /etc/kuber*/manifest*/etcd.yaml | grep -i file # check location of server and cert file , ca file , key file

Run the command on the master node (make sure you are inside the ssh session):
ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db

check that /tmp/etcd-backup.db exists, i.e. the backup was taken

create pod in cluster : to test backup


k run test --image=nginx --command -- sleep 1d
k get pods
k get pods -n kube-system --no-headers | wc -l

Restore the backup and make sure the cluster is still working


ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot restore /tmp/etcd-backup.db --data-dir=/var/lib/etcd-backup-1

cd /var/lib/etcd
ls -lrt

update the file vi /etc/kub*/mani*/etcd.yaml


update the hostPath of the etcd-data volume --> path: /var/lib/etcd-backup-1
k get pods -A
k get all -A
the API server may hang for a short while until the etcd static pod restarts with the restored data directory
k get pods -A
k get pods -n kube-system --no-headers | wc -l
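One sanity check after the restore (the test pod was created after the snapshot, so it must be gone once the restored state is active):
k get pod test # should now return NotFound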

#################################################################################################################

You might also like