CKA - Practice Questions
Create a new ClusterRole named demo-clusterrole. Any subject bound to the
ClusterRole should be able to create the following resources: Deployment,
StatefulSet, DaemonSet.
Create a new namespace named demo-namespace. Within that namespace, create a new
ServiceAccount named demo-token.
Limited to the namespace demo-namespace, bind the new ClusterRole demo-clusterrole
to the new ServiceAccount demo-token.
Solution:
----------------------------------------------
Solution 1 - Cluster Role & Service Account
-----------------------------------------------------
Step 1: Create Namespace
kubectl create namespace demo-namespace
Step 2: Create Service Account in Custom Namespace
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-token
  namespace: demo-namespace
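The manifest still has to be applied; the filename serviceaccount.yaml is an assumption. The imperative equivalent is shown as well:

kubectl apply -f serviceaccount.yaml
# or, imperatively:
kubectl create serviceaccount demo-token -n demo-namespace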
Step 3: Create a Cluster Role
kubectl create clusterrole demo-clusterrole --verb=create \
  --resource=Deployment,StatefulSet,DaemonSet --dry-run=client -o yaml > clusterrole.yaml
The following snippet is generated by the above command:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: demo-clusterrole
rules:
- apiGroups:
  - apps
  resources:
  - deployments
  - statefulsets
  - daemonsets
  verbs:
  - create
kubectl apply -f clusterrole.yaml
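To review the created rules (a verification step, not part of the original task):

kubectl describe clusterrole demo-clusterrole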
Step 4: Bind the ClusterRole with Service Account
kubectl create clusterrolebinding demo-role-bind --clusterrole=demo-clusterrole \
  --serviceaccount=demo-namespace:demo-token
The following snippet is generated from the above command (with --dry-run=client -o yaml):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: demo-role-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: demo-clusterrole
subjects:
- kind: ServiceAccount
  name: demo-token
  namespace: demo-namespace
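Note that a ClusterRoleBinding grants the permissions cluster-wide. Since the task limits the grant to demo-namespace, a RoleBinding in that namespace referencing the same ClusterRole matches the requirement more closely; kubectl auth can-i then verifies the result. Both commands below are standard kubectl, with the binding name reused from above:

kubectl create rolebinding demo-role-bind -n demo-namespace \
  --clusterrole=demo-clusterrole --serviceaccount=demo-namespace:demo-token
kubectl auth can-i create deployments -n demo-namespace \
  --as=system:serviceaccount:demo-namespace:demo-token   # should print: yes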
----------------------------------------------
Solution 2 - Draining Node
-----------------------------------------------------
Step 1: Find which node the test-pd POD is running on
kubectl get pods -o wide
Step 2: Make that node unschedulable and evict its Pods
kubectl drain k8s-worker-nodes-3pohh --delete-local-data --ignore-daemonsets --force
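To confirm the node was drained (a verification step, not part of the original task):

kubectl get nodes          # the drained node should show SchedulingDisabled
kubectl get pods -o wide   # test-pd should no longer appear on that node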
----------------------------------------------
Solution 3 - Kubeadm installation
-----------------------------------------------------
Follow the steps on the following page to install kubeadm version 1.18:
https://github.com/zealvora/certified-kubernetes-administrator/tree/master/Domain%206%20-%20Cluster%20Architecture,%20Installation%20&%20Configuration
--------------------------------------------------
Solution 4 - Kubeadm Cluster Upgrade
-----------------------------------------------------
apt-mark unhold kubeadm kubelet kubectl
apt-get install -qy kubeadm=1.19.1-00
kubectl drain kubeadm-master --ignore-daemonsets
kubeadm upgrade plan
kubeadm upgrade apply v1.19.1   # target version must match the kubeadm package installed above
apt-get install -qy kubelet=1.19.1-00 kubectl=1.19.1-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon kubeadm-master
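To confirm the upgrade took effect (a verification step, not part of the original task):

kubectl get nodes   # the VERSION column for kubeadm-master should report v1.19.1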
--------------------------------------------------
Solution 5 - ETCD Backup & Restore
-----------------------------------------------------
Step 1: Add Data to cluster:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key put course "I am awesome"
Step 2: Take Backup of ETCD Cluster
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key snapshot save /tmp/firstbackup.db
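etcdctl can confirm the snapshot file is readable (a verification step, not part of the original task):

ETCDCTL_API=3 etcdctl snapshot status /tmp/firstbackup.db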
Step 3: Add Data to Cluster
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key put course "We are awesome"
Step 4: Restore from Snapshot
systemctl stop etcd
rm -rf /var/lib/etcd/*
cd /tmp
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key snapshot restore firstbackup.db
mv /tmp/default.etcd/* /var/lib/etcd
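Alternatively, the restore can write directly into the data directory with --data-dir, which removes the need to move files afterwards. snapshot restore is a local, offline operation, so the endpoint and certificate flags are not needed:

rm -rf /var/lib/etcd   # restore refuses to write into an existing directory
ETCDCTL_API=3 etcdctl snapshot restore /tmp/firstbackup.db --data-dir=/var/lib/etcd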
Step 5: Create etcd user
useradd etcd
Step 6: Give the etcd user ownership of the restored data directory
chown -R etcd:etcd /var/lib/etcd
Step 7: Start etcd
systemctl start etcd
Step 8: Verify that the "I am awesome" value is present
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/etc/etcd/ca.crt \
  --cert=/etc/etcd/etcd.crt --key=/etc/etcd/etcd.key get course
The value written after the snapshot ("We are awesome") is discarded by the restore, so the command should return "I am awesome".
--------------------------------------------------
Solution 6 - Network Policy
-----------------------------------------------------
Step 1: Create a custom namespace
kubectl create namespace custom-namespace
Step 2: Create POD in the namespace
kubectl run nginx --image=nginx --namespace custom-namespace
Step 3: Add a Network Policy According to Requirement
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
  namespace: custom-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 80
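The policy still has to be applied; the filename network-policy.yaml is an assumption. kubectl describe then confirms it selects every Pod in the namespace and allows ingress on TCP 80 only from Pods in the same namespace:

kubectl apply -f network-policy.yaml
kubectl describe networkpolicy my-network-policy -n custom-namespace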
--------------------------------------------------
Solution 7 - Scaling Deployments
-----------------------------------------------------
Step 1: Create Deployment
kubectl create deployment my-deployment --image=nginx
Step 2: Scale Deployment
kubectl scale --replicas 5 deployment/my-deployment
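To confirm the scale-out (a verification step, not part of the original task):

kubectl get deployment my-deployment   # READY should report 5/5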
--------------------------------------------------
Solution 8 - Multi-Container PODS
-----------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pods
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
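Assuming the manifest is saved as multi-container.yaml (the filename is an assumption):

kubectl apply -f multi-container.yaml
kubectl get pod multi-container-pods   # READY should report 2/2 once both containers start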
--------------------------------------------------
Solution 9 - Persistent Volumes - Host Path
-----------------------------------------------------
Step 1: Create PV according to specification
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/data"   # hostPath requires an absolute path
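A Pod cannot reference a PersistentVolume by name directly; it binds through a PersistentVolumeClaim. The following is a minimal claim matching the PV above (the name my-pv-claim is an assumption, not given in the original task):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim
spec:
  storageClassName: ""   # bind to the statically provisioned PV rather than a default StorageClass
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi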
Step 2: Mount PV in POD through the claim
apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  containers:
  - image: nginx
    name: test-container
    volumeMounts:
    - mountPath: /mydata
      name: my-pv
  volumes:
  - name: my-pv
    persistentVolumeClaim:
      claimName: my-pv-claim   # the claim created above; the original snippet omitted the volume source
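To confirm the volume chain (a verification step, not part of the original task):

kubectl get pv my-pv     # STATUS should be Bound once the claim matches
kubectl get pod pv-pod   # should reach Running with /mydata mounted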
--------------------------------------------------
Solution 10 - Ingress
-----------------------------------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo-namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
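Assuming the manifest is saved as ingress.yaml (the filename is an assumption):

kubectl apply -f ingress.yaml
kubectl get ingress demo-ingress -n demo-namespace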
--------------------------------------------------
Solution 11 - POD Logging
-----------------------------------------------------
Step 1: Create Objects based on URL
kubectl apply -f https://raw.githubusercontent.com/zealvora/myrepo/master/demo-files/cka_logs.yaml
Step 2: Write Specific Logs to a file
kubectl logs counter2 --all-containers | grep 02 > /opt/kplabs-foobar.txt
--------------------------------------------------
Solution 12 - Node Selector
-----------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: selector-pod
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:
    disk: ssd
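The Pod stays Pending until some node carries the disk=ssd label. If none does yet, label one (the node name below is a placeholder):

kubectl label node <node-name> disk=ssd
kubectl get pod selector-pod -o wide   # shows the node once scheduling succeeds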
--------------------------------------------------
Solution 13 - Deployment & Metric Server
-----------------------------------------------------
Step 1: Create Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: overload-pods
  name: overload-pods
spec:
  replicas: 3
  selector:
    matchLabels:
      app: overload-pods
  template:
    metadata:
      labels:
        app: overload-pods
    spec:
      containers:
      - image: nginx
        name: nginx
Step 2: Find the Pods with the highest CPU usage and store the result in a file
kubectl top pods -l app=overload-pods > /tmp/overload.txt
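On newer kubectl versions, top can sort server-side, which makes the heaviest consumer easy to spot (this assumes the metrics server is running):

kubectl top pods -l app=overload-pods --sort-by=cpu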
--------------------------------------------------
Solution 14 - Deployment & NodePort
-----------------------------------------------------
Step 1 - Create the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: service-deployment
  name: service-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: service-deployment
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: service-deployment
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Step 2 - Edit the Deployment and add a named port under the container spec (see the sketch below)
kubectl edit deployment service-deployment
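A minimal sketch of the container section after the edit; the port name http and port 80 are assumptions, since the task does not state them:

    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
          name: http   # the named port; a Service can reference it as targetPort: http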