

Thank You for your download

Linux Foundation CKS Exam Question & Answers
(Demo)

Certified Kubernetes Security Specialist Exam


https://www.validexamdumps.com/CKS.html

Version: 6.0

Question: 1

Create a new ServiceAccount named backend-sa in the existing namespace default, which has the
capability to list the pods inside the namespace default.

Create a new Pod named backend-pod in the namespace default, mount the newly created sa backend-sa to the pod, and verify that the pod is able to list pods.
Ensure that the Pod is running.

Answer: See the explanation below.
Explanation:

A service account provides an identity for processes that run in a Pod.



When you (a human) access the cluster (for example, using kubectl), you are authenticated by the apiserver as a particular User Account (currently this is usually admin, unless your cluster administrator has customized your cluster). Processes in containers inside pods can also contact the apiserver. When they do, they are authenticated as a particular Service Account (for example, default).

When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace. If you get the raw json or yaml for a pod you have created (for example, kubectl get pods/<podname> -o yaml), you can see the spec.serviceAccountName field has been automatically set.


You can access the API from inside a pod using automatically mounted service account credentials, as
described in Accessing the Cluster. The API permissions of the service account depend on
the authorization plugin and policy in use.
In version 1.6+, you can opt out of automounting API credentials for a service account by
setting automountServiceAccountToken: false on the service account:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-robot
automountServiceAccountToken: false
...
In version 1.6+, you can also opt out of automounting API credentials for a particular pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: build-robot
  automountServiceAccountToken: false
...
The pod spec takes precedence over the service account if both specify an automountServiceAccountToken value.
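
The explanation above is background only; a minimal sketch of the objects the task actually asks for (the Role/RoleBinding names, container name, and image below are illustrative, not mandated by the task) could look like this:

kubectl create serviceaccount backend-sa -n default
kubectl create role backend-pod-lister --verb=list --resource=pods -n default
kubectl create rolebinding backend-sa-lister --role=backend-pod-lister --serviceaccount=default:backend-sa -n default

apiVersion: v1
kind: Pod
metadata:
  name: backend-pod
  namespace: default
spec:
  serviceAccountName: backend-sa
  containers:
  - name: backend     # container name and image are illustrative
    image: nginx

Verify:
kubectl auth can-i list pods -n default --as=system:serviceaccount:default:backend-sa   # should print yes
kubectl get pod backend-pod -n default                                                  # should be Running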

Question: 2
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:-

a. Ensure the --authorization-mode argument includes RBAC

b. Ensure the --authorization-mode argument includes Node
c. Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the Kubelet:-

a. Ensure the --anonymous-auth argument is set to false.


b. Ensure that the --authorization-mode argument is set to Webhook.

Fix all of the following violations that were found against the ETCD:-

a. Ensure that the --auto-tls argument is not set to true



Hint: Use the kube-bench tool.



Answer: See the explanation below.
Explanation:
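
As the hint suggests, kube-bench can be used to surface these findings before fixing them. A typical invocation (assuming kube-bench is already installed on the nodes; older releases use kube-bench master / kube-bench node instead of the run subcommand) is:

kube-bench run --targets master   # on the control-plane node: API server and etcd checks
kube-bench run --targets node     # on the worker node: kubelet checks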
API server:

Ensure the --authorization-mode argument includes RBAC


Turn on Role Based Access Control.
Role Based Access Control (RBAC) allows fine-grained control over the operations that different
entities can perform on different objects in the cluster. It is recommended to use the RBAC
authorization mode.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  namespace: kube-system
spec:
  containers:
  - command:
+   - kube-apiserver
+   - --authorization-mode=RBAC,Node
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver-should-pass
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki

Ensure the --authorization-mode argument includes Node

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'Node,RBAC' has 'Node'

Ensure that the --profiling argument is set to false

Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the master node and set the below parameter.
--profiling=false
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'false' is equal to 'false'
Fix all of the following violations that were found against the Kubelet:-

Ensure the --anonymous-auth argument is set to false.

Remediation: If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node and set the below parameter in the KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Audit:
/bin/ps -fC kubelet
Audit Config:
/bin/cat /var/lib/kubelet/config.yaml
Expected result:
'false' is equal to 'false'


Ensure that the --authorization-mode argument is set to Webhook.

Audit:
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned Value: --authorization-mode=Webhook
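
The audit above only shows the check passing; the corresponding remediation, sketched here for a kubeadm-provisioned node (paths may differ on other setups), is to set the kubelet authorization mode to Webhook and restart the kubelet:

# /var/lib/kubelet/config.yaml
authorization:
  mode: Webhook

# or, if using executable arguments, add --authorization-mode=Webhook to
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, then:
systemctl daemon-reload
systemctl restart kubelet.service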

Fix all of the following violations that were found against the ETCD:-
a. Ensure that the --auto-tls argument is not set to true
Do not use self-signed certificates for TLS. etcd is a highly-available key value store used by
Kubernetes deployments for persistent storage of all of its REST API objects. These objects are
sensitive in nature and should not be available to unauthenticated clients. You should enable the
client authentication via valid certificates to secure the access to the etcd service.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  annotations:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --auto-tls=false   # must not be set to true; alternatively, remove the flag entirely
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}


Question: 3
Create a PSP that will prevent the creation of privileged pods in the namespace.
Create a new PodSecurityPolicy named prevent-privileged-policy which prevents the creation of privileged pods.
Create a new ServiceAccount named psp-sa in the namespace default.
Create a new ClusterRole named prevent-role, which uses the newly created Pod Security Policy prevent-privileged-policy.
Create a new ClusterRoleBinding named prevent-role-binding, which binds the created ClusterRole prevent-role to the created SA psp-sa.
Also, check that the configuration is working by trying to create a privileged pod; it should fail.

Answer: See the explanation below.
Explanation:
Create a PSP that will prevent the creation of privileged pods in the namespace.

$ cat clusterrole-use-privileged.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: use-privileged-psp
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - default-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: privileged-role-bind
  namespace: psp-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: use-privileged-psp
subjects:
- kind: ServiceAccount
  name: privileged-sa
$ kubectl -n psp-test apply -f clusterrole-use-privileged.yaml

After a few moments, the privileged Pod should be created.



Create a new PodSecurityPolicy named prevent-privileged-policy which prevents the creation of privileged pods.

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: prevent-privileged-policy
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'

And create it with kubectl:


kubectl-admin create -f example-psp.yaml
Now, as the unprivileged user, try to create a simple pod:
kubectl-user create -f- <<EOF


apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause
EOF

The output is similar to this:
Error from server (Forbidden): error when creating "STDIN": pods "pause" is forbidden: unable to
validate against any pod security policy: []

Create a new ServiceAccount named psp-sa in the namespace default.
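
A single command is enough here (sketch):

kubectl create serviceaccount psp-sa -n default
kubectl get serviceaccount psp-sa -n default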


Create a new ClusterRole named prevent-role, which uses the newly created Pod Security Policy
prevent-privileged-policy.
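
Following the use-privileged-psp pattern shown earlier, a minimal sketch of the requested ClusterRole would be:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prevent-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - prevent-privileged-policy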



Create a new ClusterRoleBinding named prevent-role-binding, which binds the created ClusterRole
prevent-role to the created SA psp-sa.

apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
# You need to already have a Role named "pod-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

Question: 4


Context

A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the
following tasks to reduce the set of permissions.

Task

Edit the existing Role bound to the Pod's ServiceAccount sa-dev-1 to only allow performing watch
operations, only on resources of type services.

Create a new Role named role-2 in the namespace security, which only allows performing update
operations, only on resources of type namespaces.

Create a new RoleBinding named role-2-binding binding the newly created Role to the Pod's
ServiceAccount.

Answer: See the explanation below.
Explanation:
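The explanation in the source PDF consists of screenshots; a command-line sketch of the three tasks (the existing Role's name and namespace are not given in this excerpt, so role-1 and the namespaces below are placeholders) might look like this:

# 1. Restrict the existing Role bound to ServiceAccount sa-dev-1 (placeholder name/namespace):
kubectl edit role role-1 -n dev
#    rules:
#    - apiGroups: [""]
#      resources: ["services"]
#      verbs: ["watch"]

# 2. Create role-2 in namespace security, allowing only update on namespaces:
kubectl create role role-2 --verb=update --resource=namespaces -n security

# 3. Bind role-2 to the Pod's ServiceAccount (assuming the ServiceAccount lives in namespace security):
kubectl create rolebinding role-2-binding --role=role-2 --serviceaccount=security:sa-dev-1 -n security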

Question: 5

Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/kubernetes-logs.txt.
2. log files are retained for 12 days.
3. at most, 8 old audit log files are retained.
4. the maximum size before a log file gets rotated is 200 MB.
Edit and extend the basic policy to log:
1. namespaces changes at the RequestResponse level.
2. the request body of secrets changes in the namespace kube-system.
3. all other resources in core and extensions at the Request level.
4. "pods/portforward" and "services/proxy" at the Metadata level.
5. all other requests at the Metadata level.

Answer: See the explanation below.
Explanation:
Kubernetes auditing provides a security-relevant chronological set of records about a cluster. The kube-apiserver performs auditing. Each request on each stage of its execution generates an event, which is then pre-processed according to a certain policy and written to a backend. The policy determines what's recorded and the backends persist the records.
You might want to configure the audit log as part of compliance with the CIS (Center for Internet Security) Kubernetes Benchmark controls.

The audit log can be enabled by default using the following configuration in cluster.yml:

services:
  kube-api:
    audit_log:
      enabled: true
When the audit log is enabled, you should be able to see the default values at /etc/kubernetes/audit-policy.yaml.

The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using the following kube-apiserver flags:

--audit-log-path specifies the log file path that the log backend uses to write audit events. Not specifying this flag disables the log backend; - means standard out.
--audit-log-maxage defines the maximum number of days to retain old audit log files.
--audit-log-maxbackup defines the maximum number of audit log files to retain.
--audit-log-maxsize defines the maximum size in megabytes of the audit log file before it gets rotated.

If your cluster's control plane runs the kube-apiserver as a Pod, remember to mount the hostPath to the location of the policy file and log file, so that audit records are persisted. For example:

--audit-policy-file=/etc/kubernetes/audit-policy.yaml \
--audit-log-path=/var/log/audit.log
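
A sketch of how the task's specific values could be wired in (on a kubeadm cluster the flags go into /etc/kubernetes/manifests/kube-apiserver.yaml, together with hostPath mounts for the policy and log files; the exact base policy on the exam cluster may differ):

- --audit-policy-file=/etc/kubernetes/audit-policy.yaml
- --audit-log-path=/var/log/kubernetes-logs.txt
- --audit-log-maxage=12
- --audit-log-maxbackup=8
- --audit-log-maxsize=200

# /etc/kubernetes/audit-policy.yaml (basic policy extended per the task)
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
- "RequestReceived"
rules:
- level: RequestResponse
  resources:
  - group: ""
    resources: ["namespaces"]
- level: Request
  resources:
  - group: ""
    resources: ["secrets"]
  namespaces: ["kube-system"]
- level: Metadata
  resources:
  - group: ""
    resources: ["pods/portforward", "services/proxy"]
- level: Request
  resources:
  - group: ""
  - group: "extensions"
- level: Metadata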

Thank you for your visit.


To try more exams, please visit the link below:
https://www.validexamdumps.com/CKS.html

om
.c
ps
m
du
am
ex
id
al
.v
w
w
// w
s:
tp
ht
