CKA - Practice Questions

This document contains practice questions covering core Kubernetes concepts: ClusterRoles and bindings, draining nodes, installing and upgrading a kubeadm cluster, taking and restoring etcd backups, network policies, scaling deployments, multi-container pods, persistent volumes, ingress resources, logging, node selectors, the metrics server, and services.


Question 1: ClusterRole and ClusterRole Binding

Create a new ClusterRole named demo-clusterrole. Any subject bound to the ClusterRole should be able to create the following resources:
Deployment, StatefulSet, DaemonSet
Create a new namespace named demo-namespace. Within that namespace, create a new ServiceAccount named demo-token.
Limited to the namespace demo-namespace, bind the new ClusterRole demo-clusterrole to the new ServiceAccount demo-token.

Question 2: Drain Nodes


Before proceeding, make sure you have a Kubernetes cluster with two worker nodes.
Apply the following manifest file:
kubectl apply -f https://raw.githubusercontent.com/zealvora/myrepo/master/demo-files/drain.yaml
Requirements:
- Find out on which node the pod named test-pd is running.
- Make that node unavailable and reschedule all pods running on it.

Question 3: KUBEADM Cluster Initialization


Create a new kubeadm-based K8s cluster. The cluster should be based on Kubernetes version 1.18.0. All components, including kubelet and kubectl, should be on the same version. The cluster should be based on Ubuntu OS.

Question 4: KUBEADM Cluster Upgrade


Upgrade the kubeadm cluster created in the previous question from version 1.18.0 to 1.19.0 only. Also make sure that kubelet and kubectl are upgraded to the same version as kubeadm. Drain the master node before upgrading and uncordon it after upgrading. No worker nodes should be upgraded.

Question 5: ETCD Backup & Restore


For this question, you should have a working etcd cluster with TLS certificates. You can refer to the following guide to create an etcd cluster:
https://github.com/zealvora/certified-kubernetes-administrator/blob/master/Domain%206%20-%20Cluster%20Architecture%2C%20Installation%20%26%20Configuration/k8s-scratch-step-3-install-etcd.md
Requirements:
i) Add the following key-value pair to the etcd cluster:
course "I am awesome"
ii) Take a backup of the etcd cluster. Store the backup at the path /tmp/firstbackup.db.
iii) Add one more key-value pair to the etcd cluster:
course "We are awesome"
iv) Restore the backup from /tmp/firstbackup.db to etcd.
v) Create an etcd user with the useradd etcd command.
vi) Make sure that whichever directory the etcd backup data is restored into has full permissions for the etcd user.
vii) Verify that the "I am awesome" data is present.

Question 6: Network Policies


Create a new namespace named custom-namespace.
Create a new network policy named my-network-policy in the custom-namespace.
Requirements:
i) The network policy should allow PODS within the custom-namespace to connect to each other only on port 80. No other ports should be allowed.
ii) No PODs from outside of the custom-namespace should be able to connect to any pods inside the custom-namespace.
Question 7: Scale Deployments
Create a new deployment named my-deployment based on the nginx image. The deployment should have 1 replica.
Scale the deployment to 5 replicas.

Question 8: Multi-Container Pods


Create a pod named multi-container. This POD should have two containers: nginx and redis.

Question 9: Persistent Volume


Create a Persistent Volume with the following specification:
1. Name: my-pv
2. The host path /data should be mounted inside the POD at /mydata
3. Storage: 2Gi

Question 10: Ingress


Create a new ingress resource based on the following specification:
1. Name: demo-ingress
2. Namespace: demo-namespace
3. Expose the service web on the path /web using service port 8080

Question 11: POD and Logging


Apply the following manifest to your K8s cluster:
https://raw.githubusercontent.com/zealvora/myrepo/master/demo-files/cka_logs.yaml
Monitor the logs for all the containers that are part of the counter2 pod. Extract the log lines that contain the number 02 and write them to /opt/kplabs-foobar.txt.

Question 12: NodeSelector


Create a pod named selector-pod based on the nginx image. The POD should only run on a node that has the label disk=ssd.

Question 13: Metric Server


Note: For the exam, the Metrics Server will be pre-installed. You do not have to install it.
Create a deployment named overload-pods based on the nginx image. There should be 3 replicas.
Create a deployment named underload-pods based on the nginx image. There should be 2 replicas.
For the overload-pods deployment, find the PODS that are consuming the highest CPU and store the output in /tmp/overload.txt.

Question 14: Service and Deployment


i) Create a deployment named service-deployment based on the nginx image with 3 replicas.
ii) Re-configure the existing deployment and add a named port http to expose port 80/tcp of the containers.
iii) Create a new service named node-port-service to expose the http port. This service should be of type NodePort.

Solutions:

----------------------------------------------
Solution 1 - Cluster Role & Service Account
-----------------------------------------------------
Step 1: Create Namespace
kubectl create namespace demo-namespace
Step 2: Create Service Account in Custom Namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-token
  namespace: demo-namespace
Step 3: Create a Cluster Role
kubectl create clusterrole demo-clusterrole --verb=create --resource=Deployment,StatefulSet,DaemonSet --dry-run=client -o yaml > clusterrole.yaml
Following is the snippet generated from the above command:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: demo-clusterrole
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  - daemonsets
  verbs:
  - create
kubectl apply -f clusterrole.yaml
Step 4: Bind the ClusterRole to the Service Account. Since access must be limited to demo-namespace, a RoleBinding (not a ClusterRoleBinding) is used:
kubectl create rolebinding demo-role-bind --clusterrole=demo-clusterrole --serviceaccount=demo-namespace:demo-token --namespace=demo-namespace
The following snippet is generated from the above command:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: demo-role-bind
  namespace: demo-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: demo-clusterrole
subjects:
- kind: ServiceAccount
  name: demo-token
  namespace: demo-namespace
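Step 5: Verify the binding by impersonating the ServiceAccount; the check should return "yes" inside demo-namespace:
kubectl auth can-i create deployments --as=system:serviceaccount:demo-namespace:demo-token --namespace demo-namespace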

----------------------------------------------
Solution 2 - Draining Node
-----------------------------------------------------
Step 1: Find which node the test-pd POD is running on
kubectl get pods -o wide
Step 2: Make that node unavailable and reschedule the pods
kubectl drain k8s-worker-nodes-3pohh --delete-local-data --ignore-daemonsets --force
(On newer kubectl versions, --delete-local-data has been renamed to --delete-emptydir-data.)
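Once maintenance is complete, the node can be made schedulable again; until then, kubectl get nodes will report it as SchedulingDisabled:
kubectl get nodes
kubectl uncordon k8s-worker-nodes-3pohh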

----------------------------------------------
Solution 3 - Kubeadm installation
-----------------------------------------------------
Follow the steps mentioned on the following page to install kubeadm version 1.18:
https://github.com/zealvora/certified-kubernetes-administrator/tree/master/Domain%206%20-%20Cluster%20Architecture,%20Installation%20&%20Configuration
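As a rough sketch of the core commands from that guide (assuming the container runtime and the official Kubernetes apt repository are already configured on the Ubuntu nodes):
apt-get update && apt-get install -y kubeadm=1.18.0-00 kubelet=1.18.0-00 kubectl=1.18.0-00
kubeadm init
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config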

--------------------------------------------------
Solution 4 - Kubeadm Cluster Upgrade
-----------------------------------------------------
apt-mark unhold kubeadm kubelet kubectl
apt-get update && apt-get install -qy kubeadm=1.19.0-00
kubectl drain kubeadm-master --ignore-daemonsets
kubeadm upgrade plan
kubeadm upgrade apply v1.19.0
apt-get install -qy kubelet=1.19.0-00 kubectl=1.19.0-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon kubeadm-master
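To confirm that only the master moved to the new version, check the node list; the master should report v1.19.0 while the workers remain on v1.18.0:
kubectl get nodes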

--------------------------------------------------
Solution 5 - ETCD Backup & Restore
-----------------------------------------------------
Step 1: Add data to the cluster
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key put course "I am awesome"
Step 2: Take a backup of the etcd cluster
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key snapshot save /tmp/firstbackup.db
Step 3: Add more data to the cluster
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key put course "We are awesome"
Step 4: Restore from the snapshot (the restore writes a new data directory, default.etcd, in the current working directory)
systemctl stop etcd
rm -rf /var/lib/etcd/*
cd /tmp
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key snapshot restore firstbackup.db
mv /tmp/default.etcd/* /var/lib/etcd
Step 5: Create the etcd user
useradd etcd
Step 6: Give the etcd user full permissions on the restore path
chown -R etcd:etcd /var/lib/etcd
Step 7: Start etcd
systemctl start etcd
Step 8: Verify that the "I am awesome" data is present
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key get course
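Since the snapshot was taken before the second put, the restored cluster should return the original value:
course
I am awesome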

--------------------------------------------------
Solution 6 - Network Policy
-----------------------------------------------------
Step 1: Create a custom namespace
kubectl create namespace custom-namespace
Step 2: Create POD in the namespace
kubectl run nginx --image=nginx --namespace custom-namespace
Step 3: Add a Network Policy According to Requirement
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: custom-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 80
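To spot-check the policy (assuming a CNI plugin that enforces NetworkPolicy, such as Calico), try reaching the nginx pod from outside the namespace; <pod-ip> below is a placeholder for the pod's IP, which you can get from kubectl get pods -n custom-namespace -o wide. The request should time out:
kubectl run test --rm -it --restart=Never --image=busybox -n default -- wget -qO- -T 2 http://<pod-ip>
The same command run from a pod inside custom-namespace should succeed on port 80.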
--------------------------------------------------
Solution 7 - Scaling Deployments
-----------------------------------------------------
Step 1: Create Deployment
kubectl create deployment my-deployment --image=nginx
Step 2: Scale Deployment
kubectl scale --replicas 5 deployment/my-deployment
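To confirm the scale-out, the deployment should report 5/5 ready replicas:
kubectl get deployment my-deployment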

--------------------------------------------------
Solution 8 - Multi-Container PODS
-----------------------------------------------------

apiVersion: v1
kind: Pod
metadata:
  name: multi-container
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
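To verify that both containers are present, print their names; the output should be "nginx redis":
kubectl get pod multi-container -o jsonpath='{.spec.containers[*].name}'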

--------------------------------------------------
Solution 9 - Persistent Volumes - Host Path
-----------------------------------------------------
Step 1: Create PV according to specification
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/data"
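Step 2: Create a PVC to bind the PV. A pod cannot reference a PersistentVolume directly; it binds through a PersistentVolumeClaim. A minimal claim sketch (the name my-pvc is chosen here for illustration):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi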
Step 3: Mount the PV in a POD through the claim
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /mydata
      name: my-pv
  volumes:
  - name: my-pv
    persistentVolumeClaim:
      claimName: my-pvc   # the claim sketched in Step 2
--------------------------------------------------
Solution 10 - Ingress
-----------------------------------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo-namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
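Note that the Ingress only routes traffic if an ingress controller is running in the cluster; the rewrite-target annotation above is specific to the NGINX ingress controller. To confirm the resource was created:
kubectl get ingress -n demo-namespace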
--------------------------------------------------
Solution 11 - POD Logging
-----------------------------------------------------
Step 1: Create Objects based on URL
kubectl apply -f https://raw.githubusercontent.com/zealvora/myrepo/master/demo-files/cka_logs.yaml
Step 2: Write Specific Logs to a file
kubectl logs counter2 --all-containers | grep 02 > /opt/kplabs-foobar.txt
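While debugging, it can help to see which container produced each line; on kubectl versions that support the --prefix flag (1.17+):
kubectl logs counter2 --all-containers --prefix | grep 02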

--------------------------------------------------
Solution 12 - Node Selector
-----------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: selector-pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd
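The pod will stay in Pending until a node carries the required label; to add it (the node name below is a placeholder):
kubectl label node <node-name> disk=ssd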

--------------------------------------------------
Solution 13 - Deployment & Metric Server
-----------------------------------------------------
Step 1: Create Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: overload-pods
  name: overload-pods
spec:
  replicas: 3
  selector:
    matchLabels:
      app: overload-pods
  template:
    metadata:
      labels:
        app: overload-pods
    spec:
      containers:
      - image: nginx
        name: nginx
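Step 2: Create the underload-pods deployment, which the question also requires, imperatively with 2 replicas:
kubectl create deployment underload-pods --image=nginx
kubectl scale --replicas 2 deployment/underload-pods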
Step 3: Find the pods with the highest CPU and store the output in a file
kubectl top pods -l app=overload-pods > /tmp/overload.txt
(The deployment's pods carry the label app=overload-pods, not name=overload-pods. On recent kubectl versions, adding --sort-by=cpu lists the heaviest consumers first.)

--------------------------------------------------
Solution 14 - Deployment & NodePort
-----------------------------------------------------
Step 1 - Create the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: service-deployment
  name: service-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: service-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: service-deployment
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Step 2 - Edit the Deployment and add a Named port under spec
kubectl edit deployment service-deployment
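Inside the container entry of the pod template, the named port looks like this (indented to match the container's image and name fields):
ports:
- name: http
  containerPort: 80
  protocol: TCP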

Step 3 - Create Service According to Specification


kubectl expose deployment service-deployment --name node-port-service --port=80 --target-port=http --type=NodePort
