Linux Foundation
CKA Exam
Certified Kubernetes Administrator
Product Questions: 65
Version: 7.0
Question: 1
Explanation:
solution
Question: 2
List all persistent volumes sorted by capacity, saving the full kubectl output to
/opt/KUCC00102/volume_list. Use kubectl's own functionality for sorting the output, and do not
manipulate it any further.
Explanation:
solution
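One way to do this with kubectl's own --sort-by flag:
kubectl get pv --sort-by=.spec.capacity.storage > /opt/KUCC00102/volume_list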
Question: 3
Ensure a single instance of pod nginx is running on each node of the Kubernetes cluster, where nginx
is also the name of the image that has to be used. Do not override any taints currently in place.
Use a DaemonSet to complete this task and use ds-kusc00201 as the DaemonSet name.
Explanation:
solution
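A minimal DaemonSet manifest along these lines satisfies the task; no tolerations are added, so existing taints are left untouched (ds.yaml is just a working filename):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-kusc00201
spec:
  selector:
    matchLabels:
      app: ds-kusc00201
  template:
    metadata:
      labels:
        app: ds-kusc00201
    spec:
      containers:
      - name: nginx
        image: nginx
kubectl create -f ds.yaml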
Question: 4
Explanation:
solution
Question: 5
Create a pod named kucc8 with a single app container for each of the
following images running inside (there may be between 1 and 4 images specified):
nginx + redis + memcached.
Explanation:
solution
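A minimal manifest for the three images listed in the task (Question 59 below shows a variant of the same exercise); kucc8.yaml is an assumed filename:
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
kubectl create -f kucc8.yaml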
Question: 6
Explanation:
solution
Question: 7
Explanation:
solution
Question: 8
Create and configure the service front-end-service so it's accessible through NodePort and routes to
the existing pod named front-end.
Explanation:
solution
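One way to do this is kubectl expose; the port is not reproduced here, so 80 below is an assumption and should be replaced with the port the front-end pod actually serves:
kubectl expose pod front-end --name=front-end-service --port=80 --type=NodePort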
Question: 9
solution below.
Explanation:
solution
Question: 10
Explanation:
solution
Question: 11
Create a file:
/opt/KUCC00302/kucc00302.txt that lists all pods that implement service baz in namespace
development.
The format of the file should be one pod name per line.
Explanation:
solution
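A sketch of one approach: read the selector of service baz first, then list the pods matching it (app=baz below is an assumed selector; use whatever the first command prints):
kubectl -n development get svc baz -o jsonpath='{.spec.selector}'
kubectl -n development get pods -l app=baz -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' > /opt/KUCC00302/kucc00302.txt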
Question: 12
Explanation:
solution
Question: 13
Explanation:
solution
Question: 14
Explanation:
solution
Question: 15
Check to see how many worker nodes are ready (not including nodes tainted NoSchedule) and write
the number to /opt/KUCC00104/kucc00104.txt.
Explanation:
solution
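A sketch, following the same approach as Question 58: count Ready nodes, count NoSchedule taints, and write the difference (the final number depends on the cluster):
kubectl get nodes | grep -i -w ready | wc -l
kubectl describe nodes | grep -i taints | grep -i noschedule | wc -l
# write the difference of the two counts (3 is only an example)
echo 3 > /opt/KUCC00104/kucc00104.txt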
Question: 16
From the pod label name=cpu-utilizer, find pods running high CPU workloads and
write the name of the pod consuming most CPU to the file /opt/KUTR00102/KUTR00102.txt (which
already exists).
Explanation:
solution
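A sketch, assuming the metrics server is available so that kubectl top works:
kubectl top pods -l name=cpu-utilizer --sort-by=cpu
# or write the top entry directly:
kubectl top pods -l name=cpu-utilizer --sort-by=cpu --no-headers | head -1 | awk '{print $1}' > /opt/KUTR00102/KUTR00102.txt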
Question: 17
Explanation:
Solution:
Question: 18
Create a snapshot of the etcd instance running at https://127.0.0.1:2379, saving the snapshot to the
file path /srv/data/etcd-snapshot.db.
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
CA certificate: /opt/KUCM00302/ca.crt
Client certificate: /opt/KUCM00302/etcd-client.crt
Client key: /opt/KUCM00302/etcd-client.key
Explanation:
solution
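The snapshot can be taken with etcdctl using the supplied certificates:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cacert=/opt/KUCM00302/ca.crt --cert=/opt/KUCM00302/etcd-client.crt --key=/opt/KUCM00302/etcd-client.key snapshot save /srv/data/etcd-snapshot.db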
Question: 19
Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it.
Explanation:
solution
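The standard approach is to cordon the node and then drain it (on newer kubectl versions the flag --delete-local-data is spelled --delete-emptydir-data):
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --ignore-daemonsets --delete-local-data --force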
Question: 20
A Kubernetes worker node, named wk8s-node-0 is in state NotReady. Investigate why this is the case,
and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are
made permanent.
You can ssh to the failed node using:
[student@node-1] $ ssh wk8s-node-0
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-0] $ sudo -i
Explanation:
solution
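In this scenario the kubelet service is typically stopped or disabled; a sketch of the usual fix (mirroring the solution to Question 65):
ssh wk8s-node-0
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet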
Question: 21
Configure the kubelet systemd-managed service, on the node labelled with name=wk8s-node-1, to
automatically launch a pod containing a single container named webtool using the image httpd. Any spec
files required should be placed in the /etc/kubernetes/manifests directory on the node.
You can ssh to the appropriate node using:
[student@node-1] $ ssh wk8s-node-1
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-1] $ sudo -i
Explanation:
solution
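A sketch of the static-pod approach; webtool.yaml is an assumed filename, and the kubelet is assumed to already use /etc/kubernetes/manifests as its staticPodPath (the kubeadm default):
ssh wk8s-node-1
sudo -i
cat <<EOF > /etc/kubernetes/manifests/webtool.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webtool
spec:
  containers:
  - name: webtool
    image: httpd
EOF
systemctl restart kubelet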
Question: 22
For this item, you will have to ssh to the nodes ik8s-master-0 and ik8s-node-0 and complete all tasks
on these nodes. Ensure that you return to the base node (hostname: node-1) when you have
completed this item.
Context
As an administrator of a small development team, you have been asked to set up a Kubernetes
cluster to test the viability of a new application.
Task
You must use kubeadm to perform this task. Any kubeadm invocations will require the use of the
--ignore-preflight-errors=all option.
Configure the node ik8s-master-0 as a master node.
Join the node ik8s-node-0 to the cluster.
Explanation:
solution
You must use the kubeadm configuration file located at /etc/kubeadm.conf when initializing your cluster.
You may use any CNI plugin to complete this task, but if you don't have your favourite CNI plugin's
manifest URL at hand, Calico is one popular option:
https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Docker is already installed on both nodes and apt has been configured so that you can install the
required tools.
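A sketch of the kubeadm steps; the master IP, join token and CA hash are placeholders that kubeadm init prints out, not real values:
# on ik8s-master-0
kubeadm init --config=/etc/kubeadm.conf --ignore-preflight-errors=all
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
# on ik8s-node-0, paste the join command printed by kubeadm init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all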
Question: 23
Explanation:
solution
Question: 24
Create a persistent volume with name app-data, of capacity 2Gi and access mode ReadWriteMany.
The type of volume is hostPath and its location is /srv/app-data.
Explanation:
solution
Persistent Volume
A persistent volume is a piece of storage in a Kubernetes cluster. PersistentVolumes are a cluster-level
resource, like nodes, and do not belong to any namespace. They are provisioned by the administrator
with a particular size. This way, a developer deploying an app on Kubernetes need not know the
underlying infrastructure: when the developer needs a certain amount of persistent storage for an
application, they simply consume a PersistentVolume that the administrator has already provisioned.
Creating Persistent Volume
kind: PersistentVolume
apiVersion: v1
metadata:
  name: app-data
spec:
  storageClassName: shared
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /srv/app-data
Our persistent volume's status is Available, meaning it has not yet been bound to a claim. This status
changes to Bound once the PersistentVolume is mounted by a PersistentVolumeClaim.
PersistentVolumeClaim
In a real ecosystem, a system administrator creates the PersistentVolume, and then a developer creates a
PersistentVolumeClaim which is referenced in a pod. A PersistentVolumeClaim is created by
specifying the minimum size and the access mode required from the PersistentVolume.
Challenge
Create a Persistent Volume Claim that requests the Persistent Volume we had created above. The
claim should request 2Gi. Ensure that the Persistent Volume Claim has the same storageClassName
as the persistentVolume you had previously created.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: shared
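With both manifests saved (pv.yaml and pvc.yaml are assumed filenames), create the objects and confirm that the claim binds to the volume:
kubectl create -f pv.yaml
kubectl create -f pvc.yaml
kubectl get pv,pvc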
Question: 25
Create a namespace called 'development' and a pod with image nginx called nginx in this
namespace.
Explanation:
kubectl create namespace development
kubectl run nginx --image=nginx --restart=Never -n development
Question: 26
Explanation:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: engineering
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
Question: 27
Get a list of all pods in all namespaces and write it to the file /opt/pods-list.yaml.
Explanation:
kubectl get po --all-namespaces > /opt/pods-list.yaml
Question: 28
Create a pod with image nginx called nginx and allow traffic on port 80
Explanation:
kubectl run nginx --image=nginx --restart=Never --port=80
Question: 29
Create a busybox pod that runs the command "env" and save the output to a file named "envpod".
Explanation:
kubectl run busybox --image=busybox --restart=Never --rm -it -- env > envpod
Question: 30
Print the logs of the pod named "frontend", search for the pattern "started", and write the matching
lines to the file /opt/error-logs.
Explanation:
kubectl logs frontend | grep -i started > /opt/error-logs
Question: 31
Create a pod that echoes "hello world" and then exits. Have the pod deleted automatically when it's
completed.
Explanation:
kubectl run busybox --image=busybox -it --rm --restart=Never --
/bin/sh -c 'echo hello world'
kubectl get po # You shouldn't see pod with the name "busybox"
Question: 32
Create a pod with the environment variable var1=value1. Check the environment variable in the pod.
Explanation:
kubectl run nginx --image=nginx --restart=Never --env=var1=value1
# then
kubectl exec -it nginx -- env
# or
kubectl exec -it nginx -- sh -c 'echo $var1'
# or
kubectl describe po nginx | grep value1
Question: 33
Get list of all the pods showing name and namespace with a jsonpath expression.
Explanation:
kubectl get pods -o=jsonpath="{.items[*]['metadata.name'
, 'metadata.namespace']}"
Question: 34
Explanation:
kubectl get po nginx -o
jsonpath='{.spec.containers[].image}{"\n"}'
Question: 35
List the nginx pod with custom columns POD_NAME and POD_STATUS
Explanation:
kubectl get po -o=custom-columns="POD_NAME:.metadata.name,
POD_STATUS:.status.containerStatuses[].state"
Question: 36
Explanation:
kubectl get pods --sort-by=.metadata.name
Question: 37
Explanation:
The pod should run the images nginx, redis, and consul.
Name the nginx container "nginx-container".
Name the redis container "redis-container".
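A manifest along these lines would satisfy the naming requirements; the pod name multi-container-pod is an assumption, since it is not given above:
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  - name: redis-container
    image: redis
  - name: consul
    image: consul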
Question: 38
Create 2 nginx image pods in which one of them is labelled with env=prod and another one labelled
with env=dev and verify the same.
Explanation:
kubectl run nginx-prod --image=nginx --labels=env=prod --dry-run=client -o yaml > nginx-prod-pod.yaml
Now, edit the nginx-prod-pod.yaml file and remove entries such as "creationTimestamp: null" and
"dnsPolicy: ClusterFirst":
vim nginx-prod-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    env: prod
  name: nginx-prod
spec:
  containers:
  - image: nginx
    name: nginx-prod
  restartPolicy: Always
# kubectl create -f nginx-prod-pod.yaml
kubectl run nginx-dev --image=nginx --labels=env=dev --dry-run=client -o yaml > nginx-dev-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    env: dev
  name: nginx-dev
spec:
  containers:
  - image: nginx
    name: nginx-dev
  restartPolicy: Always
# kubectl create -f nginx-dev-pod.yaml
Verify:
kubectl get po --show-labels
kubectl get po -l env=prod
kubectl get po -l env=dev
Question: 39
Explanation:
kubectl get po -o wide
Using JSONPath:
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
Question: 40
Explanation:
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
Question: 41
Explanation:
kubectl get po nginx-dev -o jsonpath='{.spec.containers[].image}{"\n"}'
Question: 42
Explanation:
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c
"sleep 3600"
Question: 43
Create an nginx pod and list the pod with different levels of verbosity
Explanation:
// create a pod
kubectl run nginx --image=nginx --restart=Never --port=80
// List the pod with different verbosity
kubectl get po nginx --v=7
kubectl get po nginx --v=8
kubectl get po nginx --v=9
Question: 44
List the nginx pod with custom columns POD_NAME and POD_STATUS
Explanation:
kubectl get po -o=custom-columns="POD_NAME:.metadata.name,
POD_STATUS:.status.containerStatuses[].state"
Question: 45
Explanation:
kubectl get pods --sort-by=.metadata.name
Question: 46
Explanation:
kubectl get pods --sort-by=.metadata.creationTimestamp
Question: 47
List all the pods showing name and namespace with a json path expression
Explanation:
kubectl get pods -o=jsonpath="{.items[*]['metadata.name',
'metadata.namespace']}"
Question: 48
Explanation:
kubectl get pods -o wide
kubectl delete po nginx-dev
kubectl delete po nginx-prod
Question: 49
Score: 4%
Context
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific
ServiceAccount scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following
resource types:
• Deployment
• StatefulSet
• DaemonSet
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to
the namespace app-team1.
Explanation:
Solution:
This task should be completed on the cluster k8s (1 master, 2 workers). To connect to it, use:
[student@node-1] $ ssh k8s
kubectl create clusterrole deployment-clusterrole --verb=create --
resource=deployments,statefulsets,daemonsets
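The ServiceAccount and the namespace-scoped binding complete the task; the RoleBinding name below is an assumption, as the task does not specify one:
kubectl create serviceaccount cicd-token -n app-team1
kubectl create rolebinding deployment-clusterrole-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token -n app-team1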
Question: 50
Score: 4%
Task
Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it.
Explanation:
SOLUTION:
[student@node-1] > ssh ek8s
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force
Question: 51
Score: 7%
Task
Given an existing Kubernetes cluster running version 1.20.0, upgrade all of the Kubernetes control
plane and node components on the master node only to version 1.20.1.
Be sure to drain the master node before upgrading it and uncordon it after the upgrade.
You are also expected to upgrade kubelet and kubectl on the master node.
solution below.
Explanation:
SOLUTION:
[student@node-1] > ssh ek8s
kubectl cordon k8s-master
kubectl drain k8s-master --delete-local-data --ignore-daemonsets --force
apt-get update && apt-get install -y kubeadm=1.20.1-00 kubelet=1.20.1-00 kubectl=1.20.1-00
kubeadm upgrade apply 1.20.1 --etcd-upgrade=false
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon k8s-master
Question: 52
Score: 7%
Task
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the
snapshot to /srv/data/etcd-snapshot.db.
Explanation:
Solution:
#backup
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db
#restore
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
Question: 53
Score: 7%
Task
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace echo.
Ensure that the new NetworkPolicy allows Pods in namespace my-app to connect to port 9000 of
Pods in namespace echo.
Further ensure that the new NetworkPolicy:
• does not allow access to Pods which don't listen on port 9000
• does not allow access from Pods which are not in namespace my-app
Explanation:
Solution:
#network.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: echo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-app
    ports:
    - protocol: TCP
      port: 9000
# the namespaceSelector above assumes the my-app namespace carries the
# kubernetes.io/metadata.name label, which Kubernetes sets automatically on recent versions
kubectl create -f network.yaml
Question: 54
Score: 7%
Task
Reconfigure the existing deployment front-end and add a port specification named http exposing
port 80/tcp of the existing container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which
they are scheduled.
Explanation:
Solution:
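A sketch of one way to do this: add the named port to the nginx container with kubectl edit, then expose the deployment as a NodePort service:
kubectl edit deployment front-end
# in the nginx container spec, add:
#   ports:
#   - name: http
#     containerPort: 80
#     protocol: TCP
kubectl expose deployment front-end --name=front-end-svc --port=80 --target-port=http --type=NodePort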
Question: 55
Score: 7%
Task
Create a new nginx Ingress resource as follows:
• Name: ping
• Namespace: ing-internal
• Exposing service hi on path /hi using service port 5678
Explanation:
Solution:
vi ingress.yaml
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
#
kubectl create -f ingress.yaml
Question: 56
Score: 4%
Task
Scale the deployment presentation to 6 pods.
Explanation:
Solution:
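A single command covers it:
kubectl scale deployment presentation --replicas=6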
Question: 57
Score: 4%
Task
Schedule a pod as follows:
• Name: nginx-kusc00401
• Image: nginx
• Node selector: disk=ssd
Explanation:
Solution:
#yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
#
kubectl create -f node-select.yaml
Question: 58
Score: 4%
Task
Check to see how many nodes are ready (not including nodes tainted NoSchedule ) and write the
number to /opt/KUSC00402/kusc00402.txt.
Explanation:
Solution:
# count nodes in Ready state
kubectl get node | grep -i ready | wc -l
# count nodes carrying a NoSchedule taint
kubectl describe nodes | grep -i taints | grep -i noschedule | wc -l
# subtract the second number from the first and write the result (2 in this example)
echo 2 > /opt/KUSC00402/kusc00402.txt
Question: 59
Score: 4%
Task
Create a pod named kucc8 with a single app container for each of the following images running
inside (there may be between 1 and 4 images specified): nginx + redis + memcached .
Explanation:
Solution:
apiVersion: v1
kind: Pod
metadata:
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
  - image: consul
    name: consul
#
kubectl create -f kucc8.yaml
Question: 60
Score: 4%
Task
Create a persistent volume with name app-data , of capacity 1Gi and access mode ReadOnlyMany.
The type of volume is hostPath and its location is /srv/app-data .
Explanation:
Solution:
#vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /srv/app-data
#
kubectl create -f pv.yaml
Question: 61
Score:7%
Task
Create a new PersistentVolumeClaim
• Name: pv-volume
• Class: csi-hostpath-sc
• Capacity: 10Mi
Create a new Pod which mounts the PersistentVolumeClaim as a volume:
• Name: web-server
• Image: nginx
• Mount path: /usr/share/nginx/html
Configure the new Pod to have ReadWriteOnce access on the volume.
Finally, using kubectl edit or kubectl patch expand the PersistentVolumeClaim to a capacity of 70Mi
and record that change.
Explanation:
Solution:
vi pvc.yaml
# PersistentVolumeClaim requesting storage from the csi-hostpath-sc StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
kubectl create -f pvc.yaml
# vi pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pv-volume
# create
kubectl create -f pod-pvc.yaml
# expand the claim to 70Mi with kubectl edit, recording the change
kubectl edit pvc pv-volume --record
Question: 62
Score: 5%
Task
Monitor the logs of pod bar and:
Explanation:
Solution:
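The remaining requirements of this task are not reproduced here; monitoring would start from the pod's logs, for example:
kubectl logs bar
with the relevant lines then redirected to whatever output file the full task specifies.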
Question: 63
Score:7%
Context
An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl
logs). Adding a streaming sidecar container is a good and common way to accomplish this
requirement.
Task
Add a sidecar container named sidecar, using the busybox Image, to the existing Pod big-corp-app.
The new sidecar container has to run the following command: /bin/sh -c "tail -n+1 -f /var/log/big-corp-app.log"
Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar
container.
Explanation:
Solution:
#
kubectl get pod big-corp-app -o yaml
#
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
#
kubectl logs big-corp-app -c sidecar
Question: 64
Score: 5%
Task
From the pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of
the pod consuming most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
Explanation:
Solution:
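A sketch, assuming the metrics server is available so that kubectl top works:
kubectl top pods -l name=cpu-utilizer --sort-by=cpu --no-headers | head -1 | awk '{print $1}' > /opt/KUTR00401/KUTR00401.txt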
Question: 65
Score: 13%
Task
A Kubernetes worker node, named wk8s-node-0 is in state NotReady. Investigate why this is the case,
and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are
made permanent.
Explanation:
Solution:
sudo -i
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet