Test-Exam-Linux Foundation-CKA
CKA
Certified Kubernetes
Administrator (CKA)
Program
Version: 6.0
Email: [email protected]
IMPORTANT NOTICE
Feedback
We have developed a quality product and state-of-the-art service to protect our customers' interests. If you have any
suggestions, please feel free to contact us at [email protected]
Support
If you have any questions about our product, please provide the following items:
exam code
screenshot of the question
login id/email
Please contact us at [email protected] and our technical experts will provide support within 24 hours.
Copyright
The product of each order has its own encryption code, so it must be used only by the purchaser. Any unauthorized
changes may result in legal action. We reserve the right of final interpretation of this statement.
Practice Exam - Linux Foundation - CKA
Question #:1
Name: nginx-app
Next, deploy the application with the new version 1.11.13-alpine by performing a rolling update.
Explanation
solution
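No solution text survived for this item; a minimal sketch, assuming the deployment and its container are both named nginx-app:
# update the image to trigger a rolling update (deployment/container names assumed from the task)
kubectl set image deployment/nginx-app nginx-app=nginx:1.11.13-alpine
# watch the rollout finish
kubectl rollout status deployment/nginx-app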
Score: 7%
Context
An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs).
Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Task
Add a sidecar container named sidecar, using the busybox Image, to the existing Pod big-corp-app. The new
sidecar container has to run the following command:
Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.
Explanation
Solution:
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    # the loop below also writes to /var/log/big-corp-app.log so there is something to stream
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: sidecar              # the task asks for the name "sidecar"
    image: busybox
    # stream the application log so `kubectl logs big-corp-app -c sidecar` shows it
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
Question #:3
Score: 4%
Task
Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it.
Explanation
SOLUTION:
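The solution body is missing; a minimal sketch of the usual approach (flag names vary slightly between kubectl versions):
kubectl cordon ek8s-node-1
kubectl drain ek8s-node-1 --ignore-daemonsets --delete-emptydir-data --force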
Question #:4
Create a pod that echoes "hello world" and then exits. Have the pod deleted automatically when it's completed.
Explanation
kubectl run busybox --image=busybox -it --rm --restart=Never -- echo "hello world"
kubectl get po # You shouldn't see pod with the name "busybox"
Question #:5
Create a persistent volume with name app-data, of capacity 2Gi and access mode ReadWriteMany. The type
of volume is hostPath and its location is /srv/app-data.
Explanation
solution
Persistent Volume
Challenge
Create a Persistent Volume named app-data, with access mode ReadWriteMany, storage class name
shared, 2Gi of storage capacity and the host path /srv/app-data.
Save the file and create the persistent volume.
PersistentVolumeClaim
In a real ecosystem, a system admin will create the PersistentVolume then a developer will create a
PersistentVolumeClaim which will be referenced in a pod. A PersistentVolumeClaim is created by specifying
the minimum size and the access mode they require from the persistentVolume.
Challenge
Create a Persistent Volume Claim that requests the Persistent Volume we had created above. The claim
should request 2Gi. Ensure that the Persistent Volume Claim has the same storageClassName as the
persistentVolume you had previously created.
In the claim's spec, set storageClassName: shared. Creating it should report:
persistentvolumeclaim/app-data created
Create a new pod named myapp with image nginx that mounts the Persistent Volume Claim at the
path /var/app/config.
Mounting a Claim
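The manifests themselves did not survive extraction; a minimal sketch covering the PV, the PVC and the mounting pod, following the names and sizes given in the challenges above:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  storageClassName: shared
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /srv/app-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: shared
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: nginx
    volumeMounts:
    - name: app-data
      mountPath: /var/app/config
  volumes:
  - name: app-data
    persistentVolumeClaim:
      claimName: app-data
Apply each manifest with kubectl apply -f <file> and confirm the claim is Bound with kubectl get pvc.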
Question #:6
Score: 4%
Task
Create a persistent volume with name app-data, of capacity 1Gi and access mode ReadOnlyMany. The
type of volume is hostPath and its location is /srv/app-data.
Explanation
Solution:
#vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /srv/app-data
Question #:7
Add an init container to hungry-bear (which has been defined in spec file
/opt/KUCC00108/pod-spec-KUCC00108.yaml)
Once the spec file has been updated with the init container definition, the pod should be created
Explanation
solution
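The init container's required command and the solution are missing from this copy; a minimal sketch of adding an init container to the existing spec, assuming (purely for illustration) that it must create a file on a shared workdir volume:
# fragment to add under .spec of /opt/KUCC00108/pod-spec-KUCC00108.yaml (volume name and file are assumptions)
  initContainers:
  - name: init
    image: busybox
    command: ['sh', '-c', 'touch /workdir/calm.txt']
    volumeMounts:
    - name: workdir
      mountPath: /workdir
Then create the pod with kubectl create -f /opt/KUCC00108/pod-spec-KUCC00108.yaml.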
Explanation
kubectl get po nginx-dev -o jsonpath='{.spec.containers[].image}{"\n"}'
Question #:9
Create an nginx pod and list the pod with different levels of verbosity
See the solution below.
Explanation
# create a pod
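The commands themselves are missing; a minimal sketch:
kubectl run nginx --image=nginx --restart=Never
# list the pod at increasing verbosity levels
kubectl get po nginx -v=4
kubectl get po nginx -v=7
kubectl get po nginx -v=9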
Question #:10
Explanation
kubectl get po nginx -o jsonpath='{.spec.containers[].image}{"\n"}'
Question #:11
Name: super-secret
password: bob
Create a pod named pod-secrets-via-file, using the redis Image, which mounts a secret named super-secret at
/secrets.
Create a second pod named pod-secrets-via-env, using the redis Image, which exports password as
CONFIDENTIAL
Explanation
solution
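The solution body is missing; a minimal sketch, assuming the secret's key is password as given in the task:
kubectl create secret generic super-secret --from-literal=password=bob
# pod-secrets-via-file.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-file
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /secrets
  volumes:
  - name: secret-vol
    secret:
      secretName: super-secret
# pod-secrets-via-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-via-env
spec:
  containers:
  - name: redis
    image: redis
    env:
    - name: CONFIDENTIAL
      valueFrom:
        secretKeyRef:
          name: super-secret
          key: password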
Score: 7%
Task
• Class: csi-hostpath-sc
• Capacity: 10Mi
• Name: web-server
• Image: nginx
Finally, using kubectl edit or kubectl patch expand the PersistentVolumeClaim to a capacity of 70Mi and
record that change.
Explanation
Solution:
vi pvc.yaml
# PersistentVolumeClaim using the csi-hostpath-sc storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Mi
  storageClassName: csi-hostpath-sc
# vi pod-pvc.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web-server
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pv-volume
# create both objects
# edit the PVC to request 70Mi and record the change
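A minimal sketch of those last two steps (the --record flag is one way to record the change and may be deprecated on newer kubectl; an annotation works too):
kubectl apply -f pvc.yaml
kubectl apply -f pod-pvc.yaml
kubectl edit pvc pv-volume --record   # raise spec.resources.requests.storage to 70Mi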
Question #:13
List the logs of the pod named "frontend", search for the pattern "started", and write the matches to the file "/opt/error-logs"
Explanation
kubectl logs frontend | grep -i "started" > /opt/error-logs
Question #:14
Explanation
# required images: nginx, redis, consul
# then
vim multi-container.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: multi-container
  name: multi-container
spec:
  containers:
  - image: nginx
    name: nginx-container
  - image: redis
    name: redis-container
  - image: consul
    name: consul-container
  restartPolicy: Always
Question #:15
Explanation
kubectl get po -o wide
# or, using JSONPath:
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
Question #:16
Get list of all pods in all namespaces and write it to file “/opt/pods-list.yaml”
Explanation
kubectl get po --all-namespaces > /opt/pods-list.yaml
Question #:17
Configure the kubelet systemd-managed service, on the node labelled with name=wk8s-node-1, to launch a
pod containing a single container of Image httpd named webtool automatically. Any spec files required
should be placed in the /etc/kubernetes/manifests directory on the node.
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-1] $ sudo -i
Explanation
solution
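The referenced screenshot is not available; a minimal sketch of the usual approach (the manifest filename is an assumption):
ssh wk8s-node-1
sudo -i
# confirm the kubelet's staticPodPath is /etc/kubernetes/manifests (see /var/lib/kubelet/config.yaml), then:
cat <<EOF > /etc/kubernetes/manifests/webtool.yaml
apiVersion: v1
kind: Pod
metadata:
  name: webtool
spec:
  containers:
  - name: webtool
    image: httpd
EOF
systemctl restart kubelet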
Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it.
Explanation
solution
Name: nginx-random
Ensure that the service & pod are accessible via their respective DNS records
The container(s) within any pod(s) running as a part of this deployment should use the nginx Image
Next, use the utility nslookup to look up the DNS records of the service & pod and write the output to
/opt/KUNW00601/service.dns and /opt/KUNW00601/pod.dns respectively.
Explanation
Solution:
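The solution body is missing; a minimal sketch, assuming the deployment and the service are both named nginx-random and a throwaway busybox pod is used for the lookups:
kubectl create deployment nginx-random --image=nginx
kubectl expose deployment nginx-random --port=80 --name=nginx-random
kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-random > /opt/KUNW00601/service.dns
kubectl get pod -l app=nginx-random -o wide   # note the pod IP
kubectl run busybox --image=busybox:1.28 --rm -it --restart=Never -- nslookup <pod-ip-with-dashes>.default.pod > /opt/KUNW00601/pod.dns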
Score: 4%
Task
Create a pod named kucc8 with a single app container for each of the following images running inside (there
may be between 1 and 4 images specified): nginx + redis + memcached .
Explanation
Solution:
# vi kucc8.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
Question #:21
Explanation
solution
Question #:22
Score: 7%
Task
Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp
of the existing container nginx.
Create a new service named front-end-svc exposing the container port http.
Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are
scheduled.
Explanation
Solution:
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: http
    protocol: TCP
    name: http
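The first half of the task (adding the named port to the deployment) has no solution text; a sketch of the fragment to add under the nginx container via kubectl edit deployment front-end:
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
Then create the service with kubectl apply -f service.yaml.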
Question #:23
Explanation
kubectl get pods --sort-by=.metadata.name
Question #:24
Create a file:
/opt/KUCC00302/kucc00302.txt that lists all pods that implement service baz in namespace development.
The format of the file should be one pod name per line.
Explanation
solution
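The solution body is missing; a minimal sketch (the service's selector is looked up first rather than assumed):
kubectl -n development describe service baz | grep -i selector
# list the matching pods, one name per line (substitute the selector found above)
kubectl -n development get pods -l <key>=<value> -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' > /opt/KUCC00302/kucc00302.txt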
Name: nginx-kusc00101
Image: nginx
Explanation
solution
Explanation
kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c "sleep 3600"
Question #:27
List all the pods showing name and namespace with a json path expression
Explanation
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'metadata.namespace']}"
Question #:28
A Kubernetes worker node, named wk8s-node-0 is in state NotReady. Investigate why this is the case, and
perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made
permanent.
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-0] $ sudo -i
Explanation
solution
Create and configure the service front-end-service so it's accessible through NodePort and routes to the
existing pod named front-end.
Explanation
solution
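The solution body is missing; a one-line sketch, assuming the pod listens on port 80:
kubectl expose pod front-end --name=front-end-service --port=80 --type=NodePort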
Explanation
kubectl get pods --sort-by=.metadata.creationTimestamp
Question #:31
Create a pod with image nginx called nginx and allow traffic on port 80
Explanation
kubectl run nginx --image=nginx --restart=Never --port=80
Question #:32
Name: non-persistent-redis
The pod should launch in the staging namespace and the volume must not be persistent.
Explanation
solution
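The rest of the task text and the solution are missing; a minimal sketch based on the common variant of this question (container image redis, an emptyDir volume named cache-control mounted at /data/redis — all assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: non-persistent-redis
  namespace: staging
spec:
  containers:
  - name: redis
    image: redis
    volumeMounts:
    - name: cache-control
      mountPath: /data/redis
  volumes:
  - name: cache-control
    emptyDir: {}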
Ensure a single instance of pod nginx is running on each node of the Kubernetes cluster where nginx
also represents the Image name which has to be used. Do not override any taints currently in place.
Use DaemonSet to complete this task and use ds-kusc00201 as DaemonSet name.
Explanation
solution
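The solution body is missing; a minimal sketch (the label key/value are arbitrary, and no tolerations are added so existing taints are respected):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-kusc00201
spec:
  selector:
    matchLabels:
      app: ds-kusc00201
  template:
    metadata:
      labels:
        app: ds-kusc00201
    spec:
      containers:
      - name: nginx
        image: nginx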
Score: 7%
Task
Create a new nginx Ingress resource as follows:
• Name: ping
• Namespace: ing-internal
Explanation
Solution:
vi ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
Question #:35
unable-to-access-website
Explanation
solution
Explanation
kubectl run nginx --image=nginx --restart=Never --labels=env=test --namespace=engineering --dry-run=client -o yaml > nginx-pod.yaml
YAML File:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: engineering
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
Question #:37
Score: 7%
Task
Given an existing Kubernetes cluster running version 1.20.0, upgrade all of the Kubernetes control plane and
node components on the master node only to version 1.20.1.
Be sure to drain the master node before upgrading it and uncordon it after the upgrade.
You are also expected to upgrade kubelet and kubectl on the master node.
Explanation
SOLUTION:
systemctl daemon-reload
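Only the daemon-reload step survived; a fuller sketch of the usual flow on an apt-based master node (the node name and package pins are assumptions; packages may need apt-mark unhold first):
kubectl drain <master-node> --ignore-daemonsets
apt-get update && apt-get install -y kubeadm=1.20.1-00
kubeadm upgrade plan
kubeadm upgrade apply v1.20.1
apt-get install -y kubelet=1.20.1-00 kubectl=1.20.1-00
systemctl daemon-reload
systemctl restart kubelet
kubectl uncordon <master-node>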
Question #:38
Create 2 nginx image pods in which one of them is labelled with env=prod and another one labelled with
env=dev and verify the same.
Explanation
kubectl run nginx-prod --image=nginx --labels=env=prod --dry-run=client -o yaml > nginx-prod-pod.yaml
Now, edit nginx-prod-pod.yaml and remove entries like "creationTimestamp: null" and "dnsPolicy: ClusterFirst".
vim nginx-prod-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    env: prod
  name: nginx-prod
spec:
  containers:
  - image: nginx
    name: nginx-prod
  restartPolicy: Always
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    env: dev
  name: nginx-dev
spec:
  containers:
  - image: nginx
    name: nginx-dev
  restartPolicy: Always
Verify:
kubectl get pods --show-labels
Question #:39
For this item, you will have to ssh to the nodes ik8s-master-0 and ik8s-node-0 and complete all tasks on these
nodes. Ensure that you return to the base node (hostname: node-1) when you have completed this item.
Context
As an administrator of a small development team, you have been asked to set up a Kubernetes cluster to test
the viability of a new application.
Task
You must use kubeadm to perform this task. Any kubeadm invocations will require the use of the
--ignore-preflight-errors=all option.
Explanation
solution
You must use the kubeadm configuration file located at /etc/kubeadm.conf when initializing your cluster.
You may use any CNI plugin to complete this task, but if you don't have your favourite CNI plugin's
manifest URL at hand, Calico is one popular option:
https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Docker is already installed on both nodes and apt has been configured so that you can install the required
tools.
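The solution screenshot is missing; a minimal sketch of the usual flow (the join parameters come from the kubeadm init output and are abbreviated here):
ssh ik8s-master-0
sudo -i
kubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=all
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
exit   # leave the root shell
exit   # back to node-1
ssh ik8s-node-0
sudo -i
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash> --ignore-preflight-errors=all
exit
exit   # return to the base node node-1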
Question #:40
Score: 4%
Task
Check to see how many nodes are ready (not including nodes tainted NoSchedule ) and write the number to
/opt/KUSC00402/kusc00402.txt.
Explanation
Solution:
# count the nodes that are Ready and not tainted NoSchedule
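A sketch of one way to arrive at the number (other approaches are equally valid):
kubectl get nodes | grep -w Ready                          # nodes currently Ready
kubectl describe nodes | grep -i taints | grep NoSchedule  # which nodes carry a NoSchedule taint
# subtract the tainted nodes from the Ready ones, then write the result:
echo <count> > /opt/KUSC00402/kusc00402.txt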
Explanation
kubectl get pods -o wide
Question #:42
Determine the node, the failing service, and take actions to bring up the failed service and restore the health
of the cluster. Ensure that any changes are made permanently.
You can assume elevated privileges on any node in the cluster with the following command:
[student@nodename] $ sudo -i
Explanation
solution
Score: 4%
Context
You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific
ServiceAccount scoped to a specific namespace.
Task
Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following
resource types:
• Deployment
• StatefulSet
• DaemonSet
Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the
namespace app-team1.
Explanation
Solution:
Complete this task on the k8s cluster (1 master, 2 worker nodes); connect to it using the command provided in the exam environment.
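The solution steps are missing; a minimal sketch (the ServiceAccount is created here too, as the usual variant of this task requires):
kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
kubectl -n app-team1 create serviceaccount cicd-token
kubectl -n app-team1 create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token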
Question #:44
(or /opt/KUAL00201/spec_deployment.json).
When you are done, clean up (delete) any new Kubernetes API object that you produced during this task.
Create a snapshot of the etcd instance running at https://127.0.0.1:2379, saving the snapshot to the file path
/srv/data/etcd-snapshot.db.
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
CA certificate: /opt/KUCM00302/ca.crt
Explanation
solution
Question #:46
From the pod label name=cpu-utilizer, find pods running high CPU workloads and
write the name of the pod consuming most CPU to the file /opt/KUTR00102/KUTR00102.txt (which already
exists).
Explanation
solution
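The solution body is missing; a minimal sketch (kubectl top requires metrics-server to be running):
kubectl top pod -l name=cpu-utilizer
# note the pod with the highest CPU value, then:
echo <pod-name> > /opt/KUTR00102/KUTR00102.txt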
Explanation
kubectl get pods --sort-by=.metadata.name
Question #:48
Create a pod named kucc8 with a single app container for each of the
following images running inside (there may be between 1 and 4 images specified):
Explanation
solution
Score: 4%
Task
Schedule a pod as follows:
• Name: nginx-kusc00401
• Image: nginx
Explanation
Solution:
#yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning
Question #:50
Score: 5%
Task
Explanation
Solution:
cat /opt/KUTR00101/bar
Question #:51
Create a pod with the environment variable var1=value1. Check the environment variable in the pod.
Explanation
kubectl run nginx --image=nginx --restart=Never --env=var1=value1
# then check the environment variable
kubectl exec -it nginx -- env
# or
kubectl exec -it nginx -- sh -c 'echo $var1'
# or
kubectl describe pod nginx | grep value1
Explanation
kubectl get pods -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
Question #:53
Get list of all the pods showing name and namespace with a jsonpath expression.
Explanation
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'metadata.namespace']}"
Question #:54
Explanation
solution
List all persistent volumes sorted by capacity, saving the full kubectl output to
/opt/KUCC00102/volume_list. Use kubectl's own functionality for sorting the output, and do not
manipulate it any further.
Explanation
solution
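The solution body is missing; a one-line sketch:
kubectl get pv --sort-by=.spec.capacity.storage > /opt/KUCC00102/volume_list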
Score: 13%
Task
A Kubernetes worker node, named wk8s-node-0 is in state NotReady. Investigate why this is the case, and
perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made
permanent.
Explanation
Solution:
sudo -i
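Only the sudo -i step survived; a minimal sketch of the usual diagnosis (the root cause is typically a stopped kubelet, but confirm on the node itself):
ssh wk8s-node-0
sudo -i
systemctl status kubelet      # usually inactive or failed
systemctl enable kubelet      # make the fix survive reboots
systemctl start kubelet
exit
exit
kubectl get nodes             # wk8s-node-0 should now report Ready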
Question #:57
List the nginx pod with custom columns POD_NAME and POD_STATUS
Explanation
kubectl get po -o=custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses[].state"
Question #:58
Check to see how many worker nodes are ready (not including nodes tainted NoSchedule) and write the
number to /opt/KUCC00104/kucc00104.txt.
Explanation
solution
Score: 7%
Task
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace echo. Ensure that
the new NetworkPolicy allows Pods in namespace my-app to connect to port 9000 of Pods in namespace
echo.
• does not allow access to Pods, which don't listen on port 9000
• does not allow access from Pods, which are not in namespace my-app
Explanation
Solution:
#network.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: echo
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: my-app   # any label that uniquely identifies namespace my-app works
    ports:
    - protocol: TCP
      port: 9000
# an empty spec.podSelector targets every pod in namespace echo; the ingress rule restricts the source namespace and port
Question #:60
Create a namespace called 'development' and a pod with image nginx called nginx on this namespace.
Explanation
kubectl create namespace development
kubectl run nginx --image=nginx --restart=Never -n development
Question #:61
Score: 5%
Task
From the pod label name=cpu-utilizer, find pods running high CPU workloads and write the name of the
pod consuming most CPU to the file /opt/KUTR00401/KUTR00401.txt (which already exists).
Explanation
Solution:
Question #:62
List the nginx pod with custom columns POD_NAME and POD_STATUS
See the solution below.
Explanation
kubectl get po -o=custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses[].state"
Question #:63
Score: 7%
Task
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to
/srv/data/etcd-snapshot.db.
Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.
Explanation
Solution:
#backup
#restore
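The etcdctl commands are missing; a minimal sketch (the certificate paths are the typical kubeadm locations and are assumptions — use the paths supplied in your environment):
# backup
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /srv/data/etcd-snapshot.db
# restore into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore
# then point the etcd static pod manifest (hostPath volume) at the restored data directory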
Question #:64
Create a busybox pod that runs the command “env” and save the output to “envpod” file
Explanation
kubectl run busybox --image=busybox --restart=Never --rm -it -- env > envpod.yaml
Question #:65
Score: 4%
Task
Explanation
Solution:
We help you pass any IT / Business certification exam with a 100% pass guarantee or a full refund, including
Cisco, CompTIA, Citrix, EMC, HP, Oracle, VMware, Juniper, Check Point, LPI, Nortel, EXIN and more.
We prepare state-of-the-art practice tests for certification exams. You can reach us at any of the email addresses listed
below.
Sales: [email protected]
Feedback: [email protected]
Support: [email protected]
If you have any problems with IT certification or our products, write to us and we will get back to you within 24
hours.