CKA Exam
https://github.com/bbachi/CKAD-Practice-Questions/blob/master/core-concepts.md
https://medium.com/bb-tutorials-and-thoughts/practice-enough-with-these-questions-for-the-ckad-exam-2f42d1228552
Create a pod that echoes "hello world" and then exits. Have the pod deleted automatically when
it's completed.
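A sketch of one way to do this (the pod name "hello" is my own choice, the task does not name it):
kubectl run hello --image=busybox -it --rm --restart=Never -- /bin/sh -c 'echo hello world'
# --restart=Never creates a plain Pod, --rm deletes it automatically once it completes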
Next, use the utility nslookup to look up the DNS records of the service & pod and write the
output to /opt/KUNW00601/service.dns and /opt/KUNW00601/pod.dns respectively.
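A sketch, assuming the service and pod from the (not captured) first part of the question are nginx-resolver-service and nginx-resolver; substitute the real names:
kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service > /opt/KUNW00601/service.dns
kubectl get pod nginx-resolver -o wide        # note the pod IP, e.g. 10.32.0.9
kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup 10-32-0-9.default.pod.cluster.local > /opt/KUNW00601/pod.dns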
Context -
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a
specific namespace.
Task -
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
✑ Deployment
✑ Stateful Set
✑ DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
Sol :
Set context : kubectl config use-context k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=Deployment,StatefulSet,DaemonSet
kubectl create sa cicd-token -n app-team1
kubectl create rolebinding deploy-b -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token  # a RoleBinding (not a ClusterRoleBinding) limits the permissions to the namespace
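Optional verification of the binding:
kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # yes
kubectl auth can-i create pods --as=system:serviceaccount:app-team1:cicd-token -n app-team1          # no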
####################################################################################################################################################
Q2 Completed drain, unavailable
Task -
Set the node named ek8s-node-0 as unavailable and reschedule all the pods running on it.
kubectl config use-context ek8s
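The commands themselves are not in the notes; a sketch (older kubectl uses --delete-local-data instead of --delete-emptydir-data):
kubectl cordon ek8s-node-0
kubectl drain ek8s-node-0 --ignore-daemonsets --delete-emptydir-data --force
kubectl get nodes   # ek8s-node-0 should show SchedulingDisabled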
####################################################################################################################################################
Q3: Completed upgrade all control plane and node
Task -
Given an existing Kubernetes cluster running version 1.22.1, upgrade all of the Kubernetes control plane and node components
on the master node only to version 1.22.2.
Be sure to drain the master node before upgrading it and uncordon it after the upgrade.
You are also expected to upgrade kubelet and kubectl on the master node.
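The upgrade steps were not captured; a sketch of the standard kubeadm sequence for 1.22.2 (Debian/Ubuntu package names assumed, replace <master-node> with the name from kubectl get nodes):
kubectl drain <master-node> --ignore-daemonsets
ssh <master-node>
sudo -i
apt-mark unhold kubeadm && apt-get update && apt-get install -y kubeadm=1.22.2-00 && apt-mark hold kubeadm
kubeadm upgrade plan
kubeadm upgrade apply v1.22.2     # add --etcd-upgrade=false only if the task says etcd is hosted outside the control plane
apt-mark unhold kubelet kubectl && apt-get install -y kubelet=1.22.2-00 kubectl=1.22.2-00 && apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet
exit   # leave the root shell, then leave the node
kubectl uncordon <master-node>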
####################################################################################################################################################
Etcd backup
kubectl config use-context k8s-c3-CCC
SSH into the controlplane node first, then run:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db
cd /var/lib/etcd
ls -lrt
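Optionally verify the snapshot (snapshot status is deprecated on newer etcdctl but still works):
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /tmp/etcd-backup.db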
###############################################################################################
Q7: Completed scale deployment to 3
Task -
Scale the deployment presentation to 3 pods.
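A sketch of the answer:
kubectl config use-context k8s
kubectl scale deployment presentation --replicas=3
kubectl get deployment presentation   # READY should show 3/3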
###############################################################################################
Q8: Completed nodeselector disk:ssd
Task -
Schedule a pod as follows:
✑ Name: nginx-kusc00401
✑ Image: nginx
✑ Node selector: disk=ssd
Set context : kubectl config use-context k8s
k run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > 8.yaml
vi 8.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
  nodeSelector:
    disk: ssd        # the task asks for disk=ssd
kubectl create -f 8.yaml
kubectl get pods
###############################################################################################
Q9:Completed many nodes
Task -
Check to see how many nodes are ready (not including nodes tainted NoSchedule) and
write the number to /opt/KUSC00402/kusc00402.txt.
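The answer is not in the notes; one common approach is to compare the two counts below and write the difference:
kubectl get nodes | grep -w Ready | wc -l                                  # nodes in Ready state
kubectl describe nodes | grep -i taints | grep -i NoSchedule | wc -l       # nodes tainted NoSchedule
echo 2 > /opt/KUSC00402/kusc00402.txt                                      # write the resulting number (2 is only an example)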
###############################################################################################
Q11: Completed create PersistentVolume app-data
kind: PersistentVolume
apiVersion: v1
metadata:
name: app-data
spec:
capacity:
storage: 2Gi
accessModes:
- ReadOnlyMany
hostPath:
path: "/srv/app-data"
kubectl get pv
kubectl create -f 11.yaml
kubectl get pv # verify the same
###############################################################################################
Q12: Completed error logs
Task -
Monitor the logs of pod foo and:
✑ Extract log lines corresponding to error file-not-found
✑ Write them to /opt/KUTR00101/foo
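A sketch of the answer:
kubectl logs foo | grep file-not-found > /opt/KUTR00101/foo
cat /opt/KUTR00101/foo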
###############################################################################################
Q13: Completed sidecar container
Context -
An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs).
Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Task -
Add a sidecar container named sidecar, using the busybox image, to the existing Pod big-corp-app.
The new sidecar container has to run the following command:
Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.
Answer :
set context : kubectl config use-context k8s
kubectl get pods -A
kubectl get pod big-corp-app -o yaml > 13.yaml   # pod names are lowercase
edit the vi 13.yaml
add the following into it
spec:
  volumes:
  - name: logs
    emptyDir: {}            # shared volume (no source was given; emptyDir fits the logging-sidecar pattern)
  containers:
  - image: busybox
    name: sidecar
    command: ["/bin/sh"]
    args: ["-c", "tail -n+1 -f /var/log/big-corp-app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  # the existing big-corp-app container also needs the same volumeMount so its log file lands in the shared volume
save file and re-create the pod and verify the pod
kubectl replace --force -f 13.yaml   # deletes and re-creates the pod
kubectl get pod
###############################################################################################
Q14: Completed high CPU workloads
Task -
From the pod label name=overloaded-cpu, find pods running high CPU workloads and
write the name of the pod consuming most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
Answer :
set context : kubectl config use-context k8s
kubectl get pods -A
kubectl top pods
kubectl top pods -l name=overloaded-cpu --sort-by=cpu
echo "overloaded-cpu-1234" > /opt/KUTR00401/KUTR00401.txt
cat /opt/KUTR00401/KUTR00401.txt
###############################################################################################
Q15: Completed kubelet fix not ready
Task -
A Kubernetes worker node, named wk8s-node-0 is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state,
ensuring that any changes are made permanent.
Answer :
set context : kubectl config use-context wk8s
kubectl get nodes
kubectl describe nodes wk8s-node-0
ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet    # make the fix permanent across reboots
exit
kubectl get node
###############################################################################################
Q16: Completed claim
Task -
Create a new PersistentVolumeClaim:
✑ Name: pv-volume
✑ Class: csi-hostpath-sc
✑ Capacity: 10Mi
Finally, using kubectl edit or kubectl patch expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.
Answer :
set context : kubectl config use-context ok8s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: csi-hostpath-sc
save file 16-pvc.yaml and
kubectl create -f 16-pvc.yaml
kubectl get pv,pvc # verify the pvc having capacity of 10Mi
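The last part of the task (expand to 70Mi and record the change) is missing above; a sketch:
kubectl edit pvc pv-volume --record      # change spec.resources.requests.storage from 10Mi to 70Mi (--record stores the change-cause; deprecated but accepted)
kubectl get pvc pv-volume                # capacity shows 70Mi once the CSI driver has resized the volume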
Claim as volume :
apiVersion: v1
kind: Pod
metadata:
name: web-server # update name of pod
spec:
containers:
- name: web-server # update name of container
image: nginx # update image name
volumeMounts:
- mountPath: "/usr/share/nginx/html" # update path
name: task-pv-storage
volumes:
- name: task-pv-storage # update namee
persistentVolumeClaim:
claimName: pv-volume # update name
Answer :
set context : kubectl config use-context k8s
check document and search nginx Ingress resource
The Ingress resource
vi 17.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: pong
namespace: ing-internal
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /hello
pathType: Prefix
backend:
service:
name: hello
port:
number: 5678
save file and run file
kubectl create -f 17.yaml
kubectl get ingress -n ing-internal
check the service : curl -kL IPAddress/hello # to verify the same
####################################################################################################################################################
Q18:
Create a new service account with the name pvviewer.
Grant this Service account access to list all PersistentVolumes in the cluster by
creating an appropriate cluster role called pvviewer-role and
ClusterRoleBinding called pvviewer-role-binding.
Next, create a pod called pvviewer with the image: redis and serviceaccount: pvviewer in the default namespace.
Answer :
kubectl create sa pvviewer
kubectl create clusterrole pvviewer-role --verb=list --resource=persistentvolumes   # the task asks for list access
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
verify it :
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer
create the pod (generate the YAML, add serviceAccountName: pvviewer, then create it):
kubectl run pvviewer --image=redis -n default --dry-run=client -o yaml > 18.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: pvviewer
name: pvviewer
spec:
containers:
- image: redis
name: pvviewer
serviceAccountName: pvviewer
###############################################################################################
Q19:
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica.
Record the version. Next upgrade the deployment to version 1.17 using rolling update.
Make sure that the version upgrade is recorded in the resource annotation.
kubectl create deployment nginx-deploy --image=nginx:1.16 --replicas=1 --dry-run=client -o yaml > 19.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
name: nginx-deploy
spec:
replicas: 1
selector:
matchLabels:
app: nginx-deploy
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
spec:
containers:
- image: nginx:1.16
name: nginx
resources: {}
status: {}
# Update the yaml file as below :
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx-deploy
name: nginx-deploy
spec:
replicas: 1
selector:
matchLabels:
app: nginx-deploy
template:
metadata:
labels:
app: nginx-deploy
spec:
containers:
- image: nginx:1.16
name: nginx-deploy
ports:
- containerPort: 80
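The rolling update itself is missing above; a sketch (the container name must match the manifest, nginx-deploy in the edited YAML):
kubectl apply -f 19.yaml --record
kubectl set image deployment/nginx-deploy nginx-deploy=nginx:1.17 --record
kubectl rollout status deployment nginx-deploy
kubectl rollout history deployment nginx-deploy   # the recorded change-cause annotation shows the upgrade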
###############################################################################################
Q20:
Create a Pod called non-root-pod , image: redis:alpine
runAsUser: 1000
fsGroup: 2000
apiVersion: v1
kind: Pod
metadata:
labels:
name: non-root-pod
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
containers:
- image: redis:alpine
name: non-root-pod
###############################################################################################
Q21:
Create a NetworkPolicy which denies all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all-ingress
spec:
podSelector: {}
policyTypes:
- Ingress
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all-egress
spec:
podSelector: {}
policyTypes:
- Egress
Q1
Context -
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific
namespace.
Task -
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:
✑ Deployment
✑ Stateful Set
✑ DaemonSet
Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.
Sol :
Set context : kubectl config use-context k8s
kubectl create clusterrole deployment-clusterrole --verb=create --resource=Deployment,StatefulSet,DaemonSet
kubectl create sa cicd-token -n app-team1
kubectl create rolebinding deploy-b -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token  # a RoleBinding (not a ClusterRoleBinding) limits the permissions to the namespace
#################################################################################################################
Q2
Task -
Set the node named ek8s-node-0 as unavailable and reschedule all the pods running on it.
kubectl config use-context ek8s
Etcd backup
kubectl config use-context k8s-c3-CCC
SSH into the controlplane node first, then run:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db
cd /var/lib/etcd
ls -lrt
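The snapshot first has to be restored into the new data directory before pointing etcd at it; a sketch, assuming the backup taken above:
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db --data-dir /var/lib/etcd-backup-1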
update the file: vi /etc/kub*/mani*/etcd.yaml
under volumes, change the etcd-data hostPath to: path: /var/lib/etcd-backup-1
k get pods -A
k get all -A
The API server will briefly stop/hang while etcd restarts from the restored data directory; the control-plane pods are then recreated.
k get pods -A
k get pods -n kube-system --no-headers | wc -l
#################################################################################################################
Q5 :
Task -
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace fubar.
Ensure that the new NetworkPolicy allows Pods in namespace internal to connect to port 9000 of Pods in namespace fubar.
Further ensure that the new NetworkPolicy:
✑ does not allow access to Pods, which don't listen on port 9000
✑ does not allow access from Pods, which are not in namespace internal
Ans :
set context : kubectl config use-context hk8s
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace   # name from the task
  namespace: fubar                  # the policy lives in the target namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: internal         # label put on the source namespace "internal" below
    ports:
    - protocol: TCP
      port: 9000
Save file
kubectl label ns internal project=internal   # the traffic must come from namespace internal, so label that namespace
kubectl describe ns internal
kubectl create -f policy.yaml
#################################################################################################################
Q6 :
Task -
Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container
nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.
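The answer is not in the notes; a sketch:
kubectl edit deployment front-end      # add under the nginx container:
#   ports:
#   - name: http
#     containerPort: 80
#     protocol: TCP
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort
kubectl get svc front-end-svc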
#################################################################################################################
Q12:
Task -
Monitor the logs of pod foo and:
✑ Extract log lines corresponding to error file-not-found
✑ Write them to /opt/KUTR00101/foo
Task -
Add a sidecar container named sidecar, using the busybox image, to the existing Pod big-corp-app.
The new sidecar container has to run the following command:
Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.
Answer :
set context : kubectl config use-context k8s
kubectl get pods -A
kubectl get pod big-corp-app -o yaml > 13.yaml   # pod names are lowercase
edit the vi 13.yaml
add the following into it
spec:
  volumes:
  - name: logs
    emptyDir: {}            # shared volume (no source was given; emptyDir fits the logging-sidecar pattern)
  containers:
  - image: busybox
    name: sidecar
    command: ["/bin/sh"]
    args: ["-c", "tail -n+1 -f /var/log/big-corp-app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  # the existing big-corp-app container also needs the same volumeMount so its log file lands in the shared volume
save file and re-create the pod and verify the pod
kubectl replace --force -f 13.yaml   # deletes and re-creates the pod
kubectl get pod
#################################################################################################################
Q14:
Task -
From the pod label name=overloaded-cpu, find pods running high CPU workloads and
write the name of the pod consuming most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
Answer :
set context : kubectl config use-context k8s
kubectl get pods -A
kubectl top pods
kubectl top pods -l name=overloaded-cpu --sort-by=cpu
echo "overloaded-cpu-1234" > /opt/KUTR00401/KUTR00401.txt
cat /opt/KUTR00401/KUTR00401.txt
#################################################################################################################
Q15:
Task -
A Kubernetes worker node, named wk8s-node-0 is in state NotReady.
Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state,
ensuring that any changes are made permanent.
Answer :
set context : kubectl config use-context wk8s
kubectl get nodes
kubectl describe nodes wk8s-node-0
ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet    # make the fix permanent across reboots
exit
kubectl get node
#################################################################################################################
Q16:
Task -
Create a new PersistentVolumeClaim:
✑ Name: pv-volume
✑ Class: csi-hostpath-sc
✑ Capacity: 10Mi
Finally, using kubectl edit or kubectl patch expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.
Answer :
set context : kubectl config use-context ok8s
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
storageClassName: csi-hostpath-sc
apiVersion: v1
kind: Pod
metadata:
name: web-server # update name of pod
spec:
containers:
- name: web-server # update name of container
image: nginx # update image name
volumeMounts:
- mountPath: "/usr/share/nginx/html" # update path
name: task-pv-storage
volumes:
- name: task-pv-storage # update namee
persistentVolumeClaim:
claimName: pv-volume # update name
Answer :
set context : kubectl config use-context k8s
check document and search nginx Ingress resource
The Ingress resource
vi 17.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: pong
namespace: ing-internal
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /hello
pathType: Prefix
backend:
service:
name: hello
port:
number: 5678
Answer :
kubectl create sa pvviewer
kubectl create clusterrole pvviewer-role --verb=list --resource=persistentvolumes   # the task asks for list access
kubectl create clusterrolebinding pvviewer-role-binding --clusterrole=pvviewer-role --serviceaccount=default:pvviewer
verify it :
kubectl auth can-i list persistentvolumes --as=system:serviceaccount:default:pvviewer
create the pod (generate the YAML, add serviceAccountName: pvviewer, then create it):
kubectl run pvviewer --image=redis -n default --dry-run=client -o yaml > 18.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: pvviewer
name: pvviewer
spec:
containers:
- image: redis
name: pvviewer
serviceAccountName: pvviewer
#################################################################################################################
Q19:
Create a new deployment called nginx-deploy, with image nginx:1.16 and 1 replica.
Record the version. Next upgrade the deployment to version 1.17 using rolling update.
Make sure that the version upgrade is recorded in the resource annotation.
kubectl create deployment nginx-deploy --image=nginx:1.16 --replicas=1 --dry-run=client -o yaml > 19.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
name: nginx-deploy
spec:
replicas: 1
selector:
matchLabels:
app: nginx-deploy
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: nginx-deploy
spec:
containers:
- image: nginx:1.16
name: nginx
resources: {}
status: {}
# Update the yaml file as below :
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx-deploy
name: nginx-deploy
spec:
replicas: 1
selector:
matchLabels:
app: nginx-deploy
template:
metadata:
labels:
app: nginx-deploy
spec:
containers:
- image: nginx:1.16
name: nginx-deploy
ports:
- containerPort: 80
#################################################################################################################
Q20:
Create a Pod called non-root-pod , image: redis:alpine
runAsUser: 1000
fsGroup: 2000
apiVersion: v1
kind: Pod
metadata:
labels:
name: non-root-pod
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
containers:
- image: redis:alpine
name: non-root-pod
#################################################################################################################
Q21:
Create a NetworkPolicy which denies all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all-ingress
spec:
podSelector: {}
policyTypes:
- Ingress
kubectl apply -f 21.yaml
kubectl get networkpolicies.networking.k8s.io
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all-egress
spec:
podSelector: {}
policyTypes:
- Egress
#################################################################################################################
Medium.com site 50 questions
2.Create a new pod with the name redis and with the image redis:1.99.
k run redis --image=redis:1.99
#################################################################################################################
Create three pods; the pod names and their images are given below:
Answer :
kubectl run nginx --image=nginx
kubectl run busybox1 --image=busybox1 -- sleep 3600
kubectl run busybox2 --image=busybox2 -- sleep 1800
kubectl get pods # they should be Running
do validation:
apiVersion: v1
kind: Pod
metadata:
  name: cka-pod
  namespace: cka-exam
spec:
  containers:
  - name: cka-pod
    image: nginx
    resources:
      limits:
        cpu: "0.5"
        memory: 20Mi
#################################################################################################################
Which of the following commands is used to display ReplicaSets?
kubectl get rs
What is the minimum number of network adapters per VM needed to form a Kubernetes multi-node cluster?
2
Which sub-command of gcloud is used to set the project ID?
config
Which one is the containerization tool among the following?
Docker
Which tool is not installed on the worker node of a Kubernetes cluster?
Which service in Google Cloud Platform is used to create a Kubernetes cluster?
Kubernetes Engine
The command to start the dashboard from minikube:
minikube dashboard
A ReplicaSet ensures: a specific number of replicas are running at any given time
Which one is not a container clustering tool?
ansible
What is the valid apiVersion for a Pod in the YAML object file?
apiVersion : v1
To create a customized image,
a Dockerfile is used.
Every Pod gets an IP address.
What is the minimum number of VMs needed to configure a kubeadm multi-node Kubernetes cluster?
3
Through which tool can a single-node cluster of Kubernetes be installed?
minikube
To ensure high availability in a Kubernetes multi-node cluster, how many master nodes are needed at minimum?
3
To see the detailed configuration parameters of a Pod,
the kubectl describe pod <podname> command is used
The command to start minikube:
minikube start
Which sub-command of gcloud is used to create a Kubernetes cluster?
container
Which tool is used in the AWS CLI to create and manage a Kubernetes cluster in AWS?
Should we keep the firewall off to interconnect all the VMs in a Kubernetes multi-node cluster?
yes
https://github.com/dgkanatsios/CKAD-exercises/blob/master/a.core_concepts.md
https://medium.com/@sensri108/practice-examples-dumps-tips-for-cka-ckad-certified-kubernetes-administrator-exam-by-cncf-4826233ccc27
https://de.slideshare.net/vinodmeltoe/cka-dumps
********************Pods****************************************
kubectl get pods
kubectl run nginx --image=nginx --generator=run-pod/v1 --> create pod with name nginx
kubectl run redis --image=redis123 --generator=run-pod/v1 --> create pod
kubectl edit pod podname --> to edit pods
kubectl describe pod podname-id --> to get details information
kubectl get pods -o wide
kubectl delete pod webapp --> delete pod
READY 1/2 in the output means: running containers in the pod / total containers in the pod
Create a new pod with the name 'redis' and with the image 'redis123' --> Using file ###### pending
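One way to do the pending "using file" item (modern kubectl, so --dry-run=client instead of --generator):
kubectl run redis --image=redis123 --dry-run=client -o yaml > redis-pod.yaml
kubectl create -f redis-pod.yaml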
*********************************Deployments*****************
kubectl get deployment
kubectl get deployment -o wide
kubectl create -f deployment-definition.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      name: httpd-frontend-pods
  template:
    metadata:
      labels:
        name: httpd-frontend-pods
    spec:
      containers:
      - name: httpd-frontend-container
        image: httpd:2.4-alpine
        command:
        - sh
        - "-c"
        - echo Hello Kubernetes! && sleep 3600
********************************Namespacess*****************
kubectl get namespaces
kubectl get pods --namespace=spacename --> get pods in a namespace
kubectl run redis --image=redis --namespace=finance --generator=run-pod/v1 --> create pod in a namespace
kubectl get pods --all-namespaces
********************************services*****************
kubectl get service
default kubernetes service type is ClusterIP
kubectl describe service
kubectl create -f service-definition.yaml
Service definition file
---
apiVersion: v1
kind: Service
metadata:
name: webapp-service
spec:
type: NodePort
ports:
- targetPort: 8080
port: 8080
nodePort: 30080
selector:
name: simple-webapp
*******************************imperative commands*****************
kubectl run nginx-pod --image=nginx:alpine --generator=run-pod/v1 --> create pod
kubectl run nginx-pod --image=nginx:alpine --generator=run-pod/v1 -l tier=db --> create pod with label
kubectl expose pod redis --port=6379 --name redis-service --> create service to expose application port
kubectl create deployment webapp --image=kodekloud/webapp-color --> create deployment
kubectl scale deployment webapp --replicas=3
kubectl expose deployment webapp --type=NodePort --port=8080 --name=webapp-service --dry-run -o yaml > webapp-service.yaml
apiVersion: v1
kind: Pod
metadata:
name: webapp-green
labels:
name: webapp-green
spec:
containers:
- name: simple-webapp
image: kodekloud/webapp-color
command: ["python", "app.py"]
args: ["--color", "pink"]
*****************************Env variable ****************
kubectl get pod webapp-color -o yaml > webapp-color.yaml
kubectl get configmaps
kubectl describe configmaps db-config
kubectl create configmap \
> webapp-config-map --from-literal=APP_COLOR=darkblue
kubectl get secrets
kubectl create secret generic db-secret --from-literal=DB_HOST=sql01 --from-literal=DB_USER=root --from-literal=DB_Password=password123 --> create secret
*****************************Multicontainer pods****************init containers need practice#######
multicontainer.yaml
apiVersion: v1
kind: Pod
metadata:
name: yellow
spec:
containers:
- name: lemon
image: busybox
- name: gold
image: redismaster
#################################################################################################################
Q1 :
Answer :
k config get-contexts -o name
k config get-contexts -o name > /opt/course/1/contexts
cat /opt/course/1/contexts
With Kubectl
kubectl config current-context # Display Current contexts
echo "kubectl config current-context" > /opt/course/1/context_default_kubectl.sh
cat /opt/course/1/context_default_kubectl.sh
sh /opt/course/1/context_default_kubectl.sh
Without Kubectl
all contexts are stored in the ~/.kube/config file
cat ~/.kube/config | grep -i current-context
echo "cat ~/.kube/config | grep current-context | sed -e 's/current-context: //'" > /opt/course/1/context_default_no_kubectl.sh
sh /opt/course/1/context_default_no_kubectl.sh
#################################################################################################################
Q2 :
kubectl config use-context k8s-c1-Hello
k run -n default pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml
k run -n default pod1 --image=httpd:2.4.41-alpine --dry-run=client -o yaml > 2.yaml
edit 2.yaml file and update container name : pod1-container
schedule the pod on the controlplane node: get the master node name from kubectl get nodes and either set it directly under spec:
spec:
  nodeName: cluster1-master
or use the tolerations + nodeSelector approach shown in the manifest below
k create -f 2.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: pod1
name: pod1
spec:
containers:
- image: httpd:2.4.41-alpine
name: pod1-container # change
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
tolerations: # add
- effect: NoSchedule # add
key: node-role.kubernetes.io/control-plane # add
nodeSelector: # add
node-role.kubernetes.io/control-plane: "" # add (on kubeadm clusters the control-plane label has an empty value)
status: {}
Q3 :
kubectl config use-context k8s-c1-H # set this context
k get all -n project-c13 # to check pods
k get pods -n project-c13 # to check pods
k scale statefulset -n project-c13 o3db --replicas=1 # Scale down to 1 replica
k get pods -n project-c13 # to verify pods now one pods should run
k get all -n project-c13 # to verify pods now one pods should run
#################################################################################################################
Q4 :
kubectl config use-context k8s-c1-H
k run -n default ready-if-service-ready --image=nginx:1.16.1-alpine --dry-run=client -o yaml > 4.yaml
edit 4.yaml file
configure livenessProbe and readinessProbe (take help of the docs) and put them under the container spec section
livenessProbe:
  exec:
    command:
    - 'true'
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - 'wget -T2 -O- http://service-am-i-ready:80'
# 4_pod1.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: ready-if-service-ready
name: ready-if-service-ready
spec:
containers:
- image: nginx:1.16.1-alpine
name: ready-if-service-ready
resources: {}
livenessProbe: # add from here
exec:
command:
- 'true'
readinessProbe:
exec:
command:
- sh
- -c
- 'wget -T2 -O- http://service-am-i-ready:80' # to here
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
#################################################################################################################
Q5 :
kubectl config use-context k8s-c1-H
k get pods --all-namespaces or k get pods -A
k get pods -A --sort-by=metadata.creationTimestamp
echo "k get pods -A --sort-by=metadata.creationTimestamp" > /opt/course/5/find_pods.sh
cat /opt/course/5/find_pods.sh
sh /opt/course/5/find_pods.sh
kind: PersistentVolume
apiVersion: v1
metadata:
name: safari-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/Volumes/Data"
k get pv
k create -f 6.yaml
k get pv
**************************************
PVC: PersistentVolumeClaim
update : name safari-pvc
namespace : project-tiger
storage : 2Gi
access mode : ReadWriteOnce
do not define a storageClassName
k get pvc
k create -f 6-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: safari-pvc
namespace: project-tiger
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: safari
name: safari
namespace: project-tiger
spec:
replicas: 1
selector:
matchLabels:
app: safari
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: safari
spec:
volumes: # add
- name: data # add
persistentVolumeClaim: # add
claimName: safari-pvc # add
containers:
- image: httpd:2.4.41-alpine
name: container
volumeMounts: # add
- name: data # add
mountPath: /tmp/safari-data # add
k create -f 6-dep.yaml
verify deployments and describe deployment
#################################################################################################################
Q7 :
kubectl config use-context k8s-c1-H
#################################################################################################################
Q10:
A ClusterRole|Role defines a set of permissions and where it is available, in the whole cluster or just a single Namespace.
A ClusterRoleBinding|RoleBinding connects a set of permissions with an account and defines where it is applied, in the whole cluster or just
a single Namespace.
Because of this there are 4 different RBAC combinations and 3 valid ones:
Create rolebinding
k create rolebinding -n project-hamster processor --serviceaccount=project-hamster:processor --role=processor --dry-run=client -o yaml
k create rolebinding -n project-hamster processor --serviceaccount=project-hamster:processor --role=processor
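The ServiceAccount and Role referenced above have to exist first; a sketch, assuming (as in the usual version of this question) the Role should only allow creating Secrets and ConfigMaps in project-hamster:
k create sa processor -n project-hamster
k create role processor -n project-hamster --verb=create --resource=secrets,configmaps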
#################################################################################################################
Q11 :
kubectl config use-context k8s-c1-H
# 11.yaml
apiVersion: apps/v1
kind: DaemonSet # change from Deployment to Daemonset
metadata:
creationTimestamp: null
labels: # add
id: ds-important # add
uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
name: ds-important
namespace: project-tiger # important
spec:
#replicas: 1 # remove
selector:
matchLabels:
id: ds-important # add
uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
#strategy: {} # remove
template:
metadata:
creationTimestamp: null
labels:
id: ds-important # add
uuid: 18426a0b-5f59-4e10-923f-c0e078e82462 # add
spec:
containers:
- image: httpd:2.4-alpine
name: ds-important
resources:
requests: # add
cpu: 10m # add
memory: 10Mi # add
tolerations: # add
- effect: NoSchedule # add
key: node-role.kubernetes.io/control-plane # add
#status: {} # remove
k create -f 11.yaml
verify the all resources are created or not
#################################################################################################################
Q12
kubectl config use-context k8s-c1-H
k create deployment --namespace project-tiger deploy-important --image=nginx:1.17.6-alpine --replicas=3 --dry-run=client -o yaml > 12.yaml
vi 12.yaml
# 12.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
id: very-important # change
name: deploy-important
namespace: project-tiger # important
spec:
replicas: 3 # change
selector:
matchLabels:
id: very-important # change
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
id: very-important # change
spec:
containers:
- image: nginx:1.17.6-alpine
name: container1 # change
resources: {}
- image: kubernetes/pause # add
name: container2 # add
affinity: # add
podAntiAffinity: # add
requiredDuringSchedulingIgnoredDuringExecution: # add
- labelSelector: # add
matchExpressions: # add
- key: id # add
operator: In # add
values: # add
- very-important # add
topologyKey: kubernetes.io/hostname # add
status: {}
validate solution
k describe deployment deploy-important -n project-tiger
--> verify name
--> id label, replicas: 3 desired / 2 available (the required podAntiAffinity allows only one pod per node), container1, container2
#################################################################################################################
Q13:
kubectl config use-context k8s-c1-H
Create a pod with 3 containers and a shared volume mounted in each container, not shared or persisted with other pods
pod : multi-container-playground
c1 : nginx:1.17.6-alpine
env variable : MY_NODE_NAME
c2 : busybox:1.31.1
write the output of the date command every second into the shared volume file date.log (use a while true loop)
c3 : busybox:1.31.1, send the content of file date.log from the shared volume to stdout, use tail -f
check the logs of c3 to confirm the setup
# 13.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: multi-container-playground
name: multi-container-playground
spec:
containers:
- image: nginx:1.17.6-alpine
name: c1 # change
resources: {}
env: # add
- name: MY_NODE_NAME # add
valueFrom: # add
fieldRef: # add
fieldPath: spec.nodeName # add
volumeMounts: # add
- name: vol # add
mountPath: /vol # add
- image: busybox:1.31.1 # add
name: c2 # add
command: ["sh", "-c", "while true; do date >> /vol/date.log; sleep 1; done"] # add
volumeMounts: # add
- name: vol # add
mountPath: /vol # add
- image: busybox:1.31.1 # add
name: c3 # add
command: ["sh", "-c", "tail -f /vol/date.log"] # add
volumeMounts: # add
- name: vol # add
mountPath: /vol # add
dnsPolicy: ClusterFirst
restartPolicy: Always
volumes: # add
- name: vol # add
emptyDir: {} # add
status: {}
do verification :
k create -f 13.yaml
k get pods -n default
k describe pod podname -n default
k exec -it multi-container-playground -c c1 -- printenv # print env variables
k exec -it multi-container-playground -c c1 -- printenv | grep -i MY_NODE_NAME # print env variable
touch /opt/course/14/cluster-info
# answers to the five sub-questions (controlplane count, worker count, Service CIDR, CNI plugin + config file, static Pod suffix):
1: 1
2: 2
3: 10.96.0.0/12
4: Weave, /etc/cni/net.d/10-weave.conflist
5: -cluster1-worker1
cat /opt/course/14/cluster-info
#################################################################################################################
Q15:
kubectl config use-context k8s-c2-AC
touch /opt/course/15/cluster_events.sh
echo 'kubectl get events -A --sort-by=.metadata.creationTimestamp' > /opt/course/15/cluster_events.sh
sh /opt/course/15/cluster_events.sh
kubectl get pods -A -o wide | grep kube-proxy --> get pod name
ssh cluster2-worker1
crictl ps
copy the container number or name against pod name
task is to kill container
crictl stop 653345545(containeID)
crictl rm 653345545(containeID)
crictl ps
Create namespace
kubectl create ns cks-master
k get ns
verify with other ns also just for confirmation.
kubectl get roles -A --no-headers | grep -i project-c14 | wc -l
vi /opt/course/16/crowded-namespace.txt
add
project-c14 300
save file and cat file /opt/course/16/crowded-namespace.txt
#################################################################################################################
Q17:
kubectl config use-context k8s-c1-H
k get pods -n project-tiger -o wide # shows the node the pod is running on
cluster1-worker2
ssh cluster1-worker2
crictl ps | grep -i tiger-reunite
copy the container ID and write it into the txt file
crictl inspect 5656645656565(containerID) | grep info.runtimeType
check the details
vi /opt/course/17/pod-container.txt
5656645656565 io.containerd.runc.v2
check secret1.yaml (at the location given in the question) and update its namespace field to the required namespace, then create it
k get secrets -n secrets
k create -f secret1.yaml # which is specified in question
#################################################################################################################
Q20:
kubectl config use-context k8s-c2-CCC
ssh cluster3-worker2
kubeadm version
--> 1.25.2
kubelet --version
v1.24.6
upgrade kubelet and kubectl:
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.25.2-00 kubectl=1.25.2-00
apt-mark hold kubelet kubectl
sudo systemctl daemon-reload
sudo systemctl restart kubelet
kubeadm join ... # join the worker to the cluster (use the join command printed on the controlplane, see below)
********************************************************
ssh cluster3-master1
kubeadm token create --print-join-command
copy the output and paste it on the worker node
********************************************************
kubectl get nodes
#################################################################################################################
Q21:
kubectl config use-context k8s-c2-CCC
ssh cluster3-master1
cd /etc/kub*/mani*
ls -rlt
Go to another terminal where context is set
k run my-static-pod -n default --image=nginx:1.16-alpine --dry-run=client -o yaml > 21.yaml
vi 21.yaml
# /etc/kubernetes/manifests/my-static-pod.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: my-static-pod
name: my-static-pod
spec:
containers:
- image: nginx:1.16-alpine
name: my-static-pod
resources:
requests:
cpu: 10m
memory: 20Mi
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
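The manifest then has to land in the static pod directory on cluster3-master1 (copy it there, e.g. via scp, or create it on the node directly); the kubelet picks it up automatically:
cp 21.yaml /etc/kubernetes/manifests/my-static-pod.yaml
k get pod -A | grep my-static-pod     # the pod name gets the node name as suffix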
kubeadm certs check-expiration | grep -i apiserver # get certificate date using kubeadm
ssh cluster2-worker1
service kubelet status # check status running or not
cd /etc/systemd/system/kubelet.service.d
cat 10-kubeadm.conf
cat /etc/kub*/kubelet.conf | grep -i cert
you will get the client certificate location, which is a pem file
Issuer: CN = cluster2-worker-ca@16667677
Extended key usage : TLS Web Client Authentication
#################################################################################################################
Q24
kubectl config use-context k8s-c1-H
Create network policy
set context :
k get all -o wide -n project-snake
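The concrete policy is not in the notes; a generic sketch with hypothetical names and labels (np-backend, app: backend, app: db1, port 1111), to be replaced with the values from the task:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend              # hypothetical name
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend              # hypothetical label of the source pods
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1              # hypothetical label of the allowed destination pods
    ports:
    - protocol: TCP
      port: 1111                # hypothetical allowed port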
####################################################################################################################################################
Q25
Etcd backup
kubectl config use-context k8s-c3-CCC
SSH into the controlplane node first, then run:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /tmp/etcd-backup.db
cd /var/lib/etcd
ls -lrt
#################################################################################################################