CKS Kubernetes Security Specialist Practice Questions

This document provides a comprehensive set of practice questions for the CKS exam, focusing on key topics and configurations relevant to Kubernetes security. It includes detailed tasks and solutions related to API server settings, admission controllers, service accounts, secrets, and network policies. The material is intended for personal study and should not be redistributed without permission.

This PDF contains a set of carefully selected practice questions for the CKS exam. These questions are designed to reflect the structure, difficulty, and topics covered in the actual exam, helping you reinforce your understanding and identify areas for improvement.

What's Inside:

1. Topic-focused questions based on the latest exam objectives


2. Accurate answer keys to support self-review
3. Designed to simulate the real test environment
4. Ideal for final review or daily practice

Important Note:

This material is for personal study purposes only. Please do not redistribute or use for commercial purposes without permission.

For full access to the complete question bank and topic-wise explanations, visit:
CertQuestionsBank.com

Our YouTube: https://www.youtube.com/@CertQuestionsBank

FB page: https://www.facebook.com/certquestionsbank
Share some CKS exam online questions below.
1. Create a Namespace (if not already existing)

2. CORRECT TEXT
Fix all issues via configuration and restart the affected components to ensure the new settings take effect.
Fix all of the following violations that were found against the API server:
a. Ensure that the RotateKubeletServerCertificate argument is set to true.
b. Ensure that the admission control plugin PodSecurityPolicy is set.
c. Ensure that the --kubelet-certificate-authority argument is set as appropriate.
Fix all of the following violations that were found against the Kubelet:
a. Ensure that the --anonymous-auth argument is set to false.
b. Ensure that the --authorization-mode argument is set to Webhook.
Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true.
b. Ensure that the --peer-auto-tls argument is not set to true.
Hint: use the kube-bench tool.
Answer:
Fix all of the following violations that were found against the API server:
a. Ensure that the RotateKubeletServerCertificate argument is set to true.
The RotateKubeletServerCertificate feature gate is set on the kube-controller-manager (the manifest below had its names garbled; the command shown is kube-controller-manager, so the labels and names are restored to match):
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --feature-gates=RotateKubeletServerCertificate=true   # add this flag
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
b. Ensure that the admission control plugin PodSecurityPolicy is set.
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
  test_items:
  - flag: "--enable-admission-plugins"
    compare:
      op: has
      value: "PodSecurityPolicy"
    set: true
remediation: |
  Follow the documentation and create Pod Security Policy objects as per your environment. Then, edit the API server pod specification file $apiserverconf on the master node and set the --enable-admission-plugins parameter to a value that includes PodSecurityPolicy:
  --enable-admission-plugins=...,PodSecurityPolicy,...
  Then restart the API server.
scored: true
c. Ensure that the --kubelet-certificate-authority argument is set as appropriate.
audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep"
tests:
  test_items:
  - flag: "--kubelet-certificate-authority"
    set: true
remediation: |
  Follow the Kubernetes documentation and set up the TLS connection between the apiserver and kubelets. Then, edit the API server pod specification file $apiserverconf on the master node and set the --kubelet-certificate-authority parameter to the path of the certificate authority's cert file:
  --kubelet-certificate-authority=<ca-string>
scored: true
Fix all of the following violations that were found against etcd:
a. Ensure that the --auto-tls argument is not set to true.
Edit the etcd pod specification file $etcdconf on the master node and either remove the --auto-tls parameter or set it to false: --auto-tls=false
b. Ensure that the --peer-auto-tls argument is not set to true.
Edit the etcd pod specification file $etcdconf on the master node and either remove the --peer-auto-tls parameter or set it to false: --peer-auto-tls=false

3. Validate the control configuration and change it to implicit deny.


Finally, test the configuration by deploying a pod whose image tag is latest.
Answer:
To achieve the tasks described, you need to configure the cluster's admission control to enforce image policies through a webhook admission plugin. Below are the detailed steps to configure the admission webhook for container image validation and to test the setup.
Step 1: Enable the Admission Plugin (Webhook)
Kubernetes supports dynamic admission control using webhooks. You will need to set up a
ValidatingAdmissionWebhook that points to your container image scanner.
Edit the API Server Configuration : To enable admission webhooks, you must ensure that the
Kubernetes API server is started with the following options:
--enable-admission-plugins=ValidatingAdmissionWebhook,MutatingAdmissionWebhook
Create a ValidatingWebhookConfiguration: Here is an example configuration for this scenario:
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-validation-webhook
webhooks:
- name: imagepolicy.test-server.local
  clientConfig:
    service:
      name: test-server
      namespace: default
      path: "/image_policy"
    caBundle: "<CA_BUNDLE>"   # Base64-encoded CA certificate (if using HTTPS)
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  failurePolicy: Fail         # Set to Fail to enforce implicit deny
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
Replace <CA_BUNDLE> with the base64-encoded CA certificate that validates the TLS certificate presented by https://test-server.local:8081. This is crucial for secure communication over HTTPS.
Apply the Webhook Configuration: Save the above configuration into a file named webhook-config.yaml and apply it:
kubectl apply -f webhook-config.yaml

Step 2: Validate the Control Configuration


The webhook is configured to implicitly deny all pod creations for which image validation fails (failurePolicy: Fail). This means that if the webhook server (https://test-server.local:8081/image_policy) does not explicitly approve an image, the pod creation request is denied.
Step 3: Test the Configuration
Deploy a pod with an image tagged as latest to see if the admission control works as expected.
Create a Test Pod: Here is an example Pod definition that uses an image with the latest tag.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: test-container
    image: nginx:latest
Deploy the Pod: Save the above Pod specification to a file named test-pod.yaml and try to create the pod:
kubectl apply -f test-pod.yaml
Observe the Result : If your image policy webhook is set up to block images with the latest tag, the
pod creation should fail. Check the output and the Kubernetes event logs:
kubectl describe pod test-pod
This setup tests the entire workflow from the webhook configuration to the policy enforcement and
helps ensure that your image policy is enforced correctly within your Kubernetes cluster.

4. Does not allow access to pod not listening on port 80.

5.2.8 authorization-mode argument includes Node FAIL
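This kube-bench finding is fixed by including Node in the API server's --authorization-mode flag. A sketch of the relevant line in the static pod manifest, assuming the kubeadm default path /etc/kubernetes/manifests/kube-apiserver.yaml:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC   # ensure Node is included
```

The kubelet picks up the change and restarts the API server automatically once the static pod manifest is saved.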

6. CORRECT TEXT
Create a new ServiceAccount named backend-sa in the existing namespace default, which has the capability to list the pods inside the namespace default.
Create a new Pod named backend-pod in the namespace default, mount the newly created ServiceAccount backend-sa to the pod, and verify that the pod is able to list pods.
Ensure that the Pod is running.
Answer:
A service account provides an identity for processes that run in a Pod.
When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).
When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see that the spec.serviceAccountName field has been automatically set.
You can access the API from inside a pod using automatically mounted service account credentials,
as described in Accessing the Cluster. The API permissions of the service account depend on the
authorization plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
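The answer above gives background but not the actual solution. A minimal sketch of manifests that satisfy the task as stated (the Role and RoleBinding names pod-lister and pod-lister-binding are illustrative, not taken from the exam):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-sa
  namespace: default
---
# Role name is illustrative; it grants only "list" on pods
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-lister
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-lister-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-lister
subjects:
- kind: ServiceAccount
  name: backend-sa
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: default
spec:
  serviceAccountName: backend-sa
  containers:
  - name: backend
    image: nginx
```

Apply with kubectl apply -f, then verify with kubectl auth can-i list pods -n default --as=system:serviceaccount:default:backend-sa and confirm the pod is Running with kubectl get pod backend-pod -n default.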

7. Do not use/modify the created files in the following steps; create new temporary files if needed.
Create a new secret named newsecret in the safe namespace, with the following content:
Username: dbadmin
Password: moresecurepas
Finally, create a new Pod that has access to the secret newsecret via a volume:
Namespace: safe
Pod name: mysecret-pod
Container name: db-container
Image: redis
Volume name: secret-vol
Mount path: /etc/mysecret
Answer:
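The answer section is empty here; a minimal sketch that satisfies the task as stated:

```yaml
# Create the secret first, e.g.:
#   kubectl create secret generic newsecret -n safe \
#     --from-literal=Username=dbadmin --from-literal=Password=moresecurepas
# Then mount it into the pod:
apiVersion: v1
kind: Pod
metadata:
  name: mysecret-pod
  namespace: safe
spec:
  containers:
  - name: db-container
    image: redis
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/mysecret
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: newsecret
```

Verify the mount with kubectl exec -n safe mysecret-pod -- ls /etc/mysecret, which should list the Username and Password keys.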

8.2 Ensure that the client-cert-auth argument is set to true

Answer:
worker1 $ vim /var/lib/kubelet/config.yaml
worker1 $ systemctl restart kubelet   # to reload the kubelet config
ssh to master1
master1 $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
  - --authorization-mode=Node,RBAC
master1 $ vim /etc/kubernetes/manifests/etcd.yaml
  - --client-cert-auth=true
Explanation:
ssh to worker1
worker1 $ vim /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false            # was "enabled: true"; anonymous auth must be off
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook               # was "mode: AlwaysAllow"
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
worker1 $ systemctl restart kubelet   # to reload the kubelet config
ssh to master1
master1 $ vim /etc/kubernetes/manifests/kube-apiserver.yaml
  - --authorization-mode=Node,RBAC
master1 $ vim /etc/kubernetes/manifests/etcd.yaml
  - --client-cert-auth=true

9. CORRECT TEXT
Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:

10. Log files are retained for 12 days.
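Questions 9 and 10 are typically solved together via the API server's audit flags. A sketch assuming the kubeadm default manifest path and an illustrative policy file location:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml   # enables auditing with this policy
    - --audit-log-path=/var/log/kubernetes/audit/audit.log    # enables the log backend
    - --audit-log-maxage=12                                   # retain log files for 12 days
```

The policy file path and log path above are assumptions; use the paths given in the exam task. Remember to mount the policy file and log directory into the API server pod if they are not already available.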

11. Enable the necessary plugins to create an image policy


12. For the Dockerfile: fix the image version and the user name. For mydeployment.yaml: fix the security contexts.
Explanation:
[desk@cli] $ vim /home/cert_masters/Dockerfile
FROM ubuntu:latest       # Remove this
FROM ubuntu:18.04        # Add this
USER root                # Remove this
USER nobody              # Add this
RUN apt-get install -y lsof=4.72 wget=1.17.1 nginx=4.2
ENV ENVIRONMENT=testing
USER root                # Remove this
USER nobody              # Add this
CMD ["nginx -d"]

[desk@cli] $ vim /home/cert_masters/mydeployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: kafka
  name: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kafka
    spec:
      containers:
      - image: bitnami/kafka
        name: kafka
        volumeMounts:
        - name: kafka-vol
          mountPath: /var/lib/kafka
        securityContext: {"capabilities": {"add": ["NET_ADMIN"], "drop": ["all"]}, "privileged": true, "readOnlyRootFilesystem": false, "runAsUser": 65535}    # Delete this
        securityContext: {"capabilities": {"add": ["NET_ADMIN"], "drop": ["all"]}, "privileged": false, "readOnlyRootFilesystem": true, "runAsUser": 65535}    # Add this
        resources: {}
      volumes:
      - name: kafka-vol
        emptyDir: {}
status: {}

13. pods with label version: v1 in any namespace.


Make sure to apply the network policy.
Answer:
To create a network policy named restrict-np that allows only specific pods to connect to the pod
named nginx-test in the testing namespace, you'll need to set up the policy to allow traffic from pods
in the default namespace and from pods with the label version: v1 in any namespace. Here's how you
can define this policy:
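The policy manifest itself is missing from this answer; below is a sketch consistent with the explanation that follows (the name: default label on the default namespace is an assumption, as the explanation notes). Matching pods with version: v1 in any namespace requires combining an empty namespaceSelector with the podSelector in a single from entry; a bare podSelector would only match pods in the policy's own namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-np
  namespace: testing
spec:
  podSelector:
    matchLabels:
      app: nginx-test          # assumes the nginx-test pod carries this label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: default        # assumes the default namespace is labeled name=default
    - namespaceSelector: {}    # any namespace...
      podSelector:
        matchLabels:
          version: v1          # ...but only pods labeled version=v1
```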
Explanation:
podSelector: targets the policy at pods in the testing namespace with the label app: nginx-test. Ensure that your nginx-test pod has this label.
policyTypes: specifies that the policy applies to ingress traffic.
ingress: defines the rules for incoming traffic.
namespaceSelector: allows traffic from all pods in the default namespace (note: this assumes that your default namespace has a label name: default, which is not standard and may need adjustment based on your actual namespace labels).
podSelector (inside ingress): allows traffic from pods with the label version: v1 across all namespaces.
Applying the Network Policy:
Once you've created your YAML configuration, save it to a file (e.g., restrict-np.yaml). Then apply it to
your Kubernetes cluster using the following command:
kubectl apply -f restrict-np.yaml
This command will create the network policy in your cluster, enforcing the connectivity rules as
defined. Make sure to verify that your namespaces and pods are labeled correctly to match the
selectors specified in the policy. If the default namespace does not have labels, you might need to
adjust the namespaceSelector to use a different identifying feature or apply labels to the namespace.

14. CORRECT TEXT

On the cluster worker node, enforce the prepared AppArmor profile:
#include <tunables/global>

profile nginx-deny flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
Edit the prepared manifest file to include the AppArmor profile:
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
spec:
  containers:
  - name: apparmor-pod
    image: nginx
Finally, apply the manifest files and create the Pod specified in them.
Verify: try to create a file inside the restricted directory.
Answer:
To successfully deploy an AppArmor profile to a Kubernetes cluster and integrate it with a pod, you
need to follow these steps: create the AppArmor profile, modify the Kubernetes pod manifest to use
this profile, and then deploy the pod. Here's a step-by-step guide:
Step 1: Create the AppArmor Profile
Create an AppArmor profile named nginx-deny. This profile will deny all file write operations,
enhancing the security stance of the container. Here’s the AppArmor profile content:
#include <tunables/global>

profile nginx-deny flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
Save this profile on the Kubernetes worker node:
Save it to a file, typically named nginx-deny, in the AppArmor profile directory /etc/apparmor.d/.
Load the profile into AppArmor:
Use the following command to load and enforce the profile:
sudo apparmor_parser -r -W /etc/apparmor.d/nginx-deny
Step 2: Modify the Kubernetes Pod Manifest
To use the AppArmor profile in a Kubernetes pod, modify the pod definition to include the AppArmor
annotation. This instructs Kubernetes to apply the nginx-deny profile to the pod.
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/apparmor-pod: localhost/nginx-deny
spec:
  containers:
  - name: apparmor-pod
    image: nginx
Note that the annotation value must use the localhost/ prefix to reference a profile loaded on the node.
Step 3: Apply the Manifest File
With the profile loaded and the manifest prepared, apply the manifest to create the pod:
kubectl apply -f apparmor-pod.yaml
Step 4: Verify the Enforcement
To verify that the AppArmor profile is actively enforcing the security policies:
Check the Pod Status : Make sure the pod is running
kubectl get pods
Exec into the Pod :
Try to create a file within the container to test the write denial
kubectl exec -it apparmor-pod -- /bin/sh -c "echo 'test' > /tmp/test.txt"
This command should fail with a permission denied error, indicating that the AppArmor profile is
actively preventing file writes as expected.
Conclusion
By following these steps, you have successfully enforced an AppArmor profile on a Kubernetes pod,
enhancing its security by restricting file write operations. Ensure that you test thoroughly in a
development environment before applying such security measures in a production setting, as they
can affect application functionality.

15. CORRECT TEXT


Create a RuntimeClass named gvisor-rc using the prepared runtime handler named runsc.
Create a Pod with image nginx in the namespace server to run on the gVisor runtime class.
Answer:
Install the RuntimeClass for gVisor (using the name from the task, gvisor-rc, rather than the generic gvisor):
# Step 1: Install a RuntimeClass
cat <<EOF | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor-rc
handler: runsc
EOF
Create a Pod with the gVisor RuntimeClass in the server namespace:
# Step 2: Create a pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor
  namespace: server
spec:
  runtimeClassName: gvisor-rc
  containers:
  - name: nginx
    image: nginx
EOF
Verify that the Pod is running:
# Step 3: Get the pod
kubectl get pod nginx-gvisor -n server -o wide

16. CORRECT TEXT


Given an existing Pod named test-web-pod running in the namespace test-system:
Edit the existing Role bound to the Pod's ServiceAccount named sa-backend to only allow performing get operations on endpoints.
Create a new Role named test-system-role-2 in the namespace test-system, which can perform patch operations on resources of type statefulsets.
Create a new RoleBinding named test-system-role-2-binding binding the newly created Role to the Pod's ServiceAccount sa-backend.
Answer:
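The answer is missing here; a sketch, assuming the existing Role (whose name is not given in the task) is edited in place with kubectl edit:

```yaml
# Edit the existing Role bound to sa-backend (kubectl edit role <existing-role> -n test-system)
# so that its rules only allow get on endpoints:
#   rules:
#   - apiGroups: [""]
#     resources: ["endpoints"]
#     verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: test-system-role-2
  namespace: test-system
rules:
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-system-role-2-binding
  namespace: test-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: test-system-role-2
subjects:
- kind: ServiceAccount
  name: sa-backend
  namespace: test-system
```

Verify with kubectl auth can-i patch statefulsets -n test-system --as=system:serviceaccount:test-system:sa-backend.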

17. CORRECT TEXT


Use the kubesec docker image to scan the given YAML manifest, edit and apply the advised changes, and pass with a score of 4 points.
kubesec-test.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
Hint: docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
Answer:
To utilize Kubesec for scanning a Kubernetes YAML manifest and applying recommended security
changes, you need to follow a series of steps to ensure the manifest is up to standards and achieves
a security score of at least 4 points. Below are the instructions to scan, evaluate, and update the
YAML manifest accordingly.
Step 1: Prepare the YAML Manifest
Here is the given YAML manifest that needs to be scanned and potentially modified based on
Kubesec's recommendations:
kubesec-test.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
Step 2: Scan the Manifest Using Kubesec
Run the Kubesec Docker container to scan your YAML file. You'll need to have Docker installed on
your machine. Use the following command in your terminal:
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml
This command will output a security assessment report with a score and possible suggestions for
improving the security of the Pod.
Step 3: Interpret the Results and Edit the Manifest
Based on the output from Kubesec, you will need to make adjustments to the YAML file. Common
recommendations might include:
Adding a runAsNonRoot: true directive to ensure the container does not run as the root user.
Specifying a runAsUser value to define which user the container should run as.
Limiting the capabilities of the container with a capabilities block.
For instance, if the initial scan suggests improvements in handling user permissions and capabilities,
you could update the YAML file as follows:
apiVersion: v1
kind: Pod
metadata:
  name: kubesec-demo
spec:
  containers:
  - name: kubesec-demo
    image: gcr.io/google-samples/node-hello:1.0
    securityContext:
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000              # example user ID
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
Step 4: Re-scan the Updated Manifest
After making changes, re-scan the updated manifest to verify that it meets the required security score:
docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < updated-kubesec-test.yaml
Step 5: Apply the Manifest
Once the manifest achieves a satisfactory security score, apply it to your Kubernetes cluster:
kubectl apply -f updated-kubesec-test.yaml
Step 6: Verify the Pod Deployment
Check the status of your pod to ensure it deploys correctly with the new security settings:
kubectl get pods
kubectl describe pod kubesec-demo
This process ensures that the Kubernetes manifest meets basic security benchmarks as advised by Kubesec, enhancing the security posture of your deployments.
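Kubesec emits its report as JSON, a list of per-object results each carrying a numeric score field. A small Python sketch of checking the score programmatically; the sample report below is illustrative, not captured from a real kubesec run:

```python
import json

# Illustrative sample in the shape kubesec v2 reports use:
# a list of per-object results, each with a numeric "score" field.
sample_report = json.loads("""
[
  {
    "object": "Pod/kubesec-demo.default",
    "valid": true,
    "score": 4
  }
]
""")

def passes(report, required_score=4):
    """Return True if every scanned object meets the required score."""
    return all(item["score"] >= required_score for item in report)

print(passes(sample_report))
```

In practice you would pipe the real scanner output into this check, for example: docker run -i kubesec/kubesec:512c5e0 scan /dev/stdin < kubesec-test.yaml | python3 check_score.py.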

Get CKS exam dumps full version.
