Actual4test CKS
➱Exam Code: CKS
Visit Actual4test to download the full version of the CKS questions & answers.
NEW QUESTION # 17
Create a network policy named restrict-np to restrict access to Pod nginx-test running in namespace testing.
Only allow the following Pods to connect to Pod nginx-test:
1. pods in the namespace default
2. pods with label version:v1 in any namespace.
Make sure to apply the network policy.
A. Send us your Feedback on this.
Answer: A
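A manifest along these lines should satisfy the task (a sketch: the podSelector label run: nginx-test is an assumption, adjust it to the labels actually set on Pod nginx-test, and the kubernetes.io/metadata.name namespace label assumes Kubernetes v1.21+ where it is set automatically):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-np
  namespace: testing
spec:
  podSelector:
    matchLabels:
      run: nginx-test                            # assumed label of Pod nginx-test
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default   # 1. pods in the namespace default
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          version: v1                            # 2. pods with label version:v1 in any namespace
Apply it with: kubectl apply -f restrict-np.yaml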
NEW QUESTION # 18
Fix all issues via configuration and restart the affected components to ensure the new setting takes effect.
Fix all of the following violations that were found against the API server:
a. Ensure the --authorization-mode argument includes RBAC
b. Ensure the --authorization-mode argument includes Node
c. Ensure that the --profiling argument is set to false
Fix all of the following violations that were found against the Kubelet:
a. Ensure that the --anonymous-auth argument is set to false.
b. Ensure that the --authorization-mode argument is set to Webhook.
Fix all of the following violations that were found against the ETCD:
a. Ensure that the --auto-tls argument is not set to true
Hint: Use the kube-bench tool.
Answer:
Explanation:
API server:
Ensure the --authorization-mode argument includes RBAC
Turn on Role Based Access Control. Role Based Access Control (RBAC) allows fine-grained control over the
operations that different entities can perform on different objects in the cluster. It is recommended to use the
RBAC authorization mode.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
+   - kube-apiserver
+   - --authorization-mode=RBAC,Node
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.6.0
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver-should-pass
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/
      name: k8s
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: certs
    - mountPath: /etc/pki
      name: pki
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes
    name: k8s
  - hostPath:
      path: /etc/ssl/certs
    name: certs
  - hostPath:
      path: /etc/pki
    name: pki
Ensure the --authorization-mode argument includes Node
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the
master node and set the --authorization-mode parameter to a value that includes Node.
--authorization-mode=Node,RBAC
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'Node,RBAC' has 'Node'
Ensure that the --profiling argument is set to false
Remediation: Edit the API server pod specification file /etc/kubernetes/manifests/kube-apiserver.yaml on the
master node and set the below parameter.
--profiling=false
Audit:
/bin/ps -ef | grep kube-apiserver | grep -v grep
Expected result:
'false' is equal to 'false'
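Putting the three API server fixes together, the command section of /etc/kubernetes/manifests/kube-apiserver.yaml ends up containing flags along these lines (a sketch; every other existing flag stays as it is):
spec:
  containers:
  - command:
    - kube-apiserver
    - --authorization-mode=Node,RBAC   # includes both Node and RBAC
    - --profiling=false                # profiling disabled
    # ... keep the remaining existing flags unchanged ...
Because kube-apiserver runs as a static Pod, the kubelet restarts it automatically once the manifest is saved; re-run the ps audit commands above to confirm the expected results.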
Fix all of the following violations that were found against the Kubelet:
1) Ensure that the --anonymous-auth argument is set to false.
Remediation: If using a Kubelet config file, edit the file to set authentication: anonymous: enabled to false. If using
executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on
each worker node and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--anonymous-auth=false
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
Audit:
/bin/ps -fC kubelet
Audit Config:
/bin/cat /var/lib/kubelet/config.yaml
Expected result:
'false' is equal to 'false'
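Both Kubelet fixes in this task live in the Kubelet config file /var/lib/kubelet/config.yaml referenced by the audit above. Roughly, the affected fields look like this (a sketch showing only the relevant portion of the file):
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false        # equivalent to --anonymous-auth=false
authorization:
  mode: Webhook           # equivalent to --authorization-mode=Webhook
After editing the file, restart the kubelet:
systemctl daemon-reload
systemctl restart kubelet.service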
2) Ensure that the --authorization-mode argument is set to Webhook.
Remediation: In the Kubelet config file, set authorization: mode to Webhook (see the config sketch above), or set --authorization-mode=Webhook in the kubelet service file, then restart the kubelet service.
Audit:
docker inspect kubelet | jq -e '.[0].Args[] | match("--authorization-mode=Webhook").string'
Returned Value: --authorization-mode=Webhook
Fix all of the following violations that were found against the ETCD:
a. Ensure that the --auto-tls argument is not set to true
Do not use self-signed certificates for TLS. etcd is a highly-available key-value store used by Kubernetes deployments for persistent storage of all of its REST API objects. These objects are sensitive in nature and should not be available to unauthenticated clients. You should enable client authentication via valid certificates to secure access to the etcd service.
Fix - Buildtime
Kubernetes
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
+   - etcd
+   - --auto-tls=false        # either set to false or remove the flag entirely; it must not be true
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[192.168.22.9]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/etcd
      type: DirectoryOrCreate
    name: etcd-certs
status: {}
NEW QUESTION # 19
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod-account
Context: A Role bound to a Pod's ServiceAccount grants overly permissive permissions. Complete the following tasks to reduce the set of permissions.
Task: Given an existing Pod named web-pod running in the namespace database:
1. Edit the existing Role bound to the Pod's ServiceAccount test-sa to only allow performing get operations, only on resources of type Pods.
2. Create a new Role named test-role-2 in the namespace database, which only allows performing update operations, only on resources of type StatefulSets.
3. Create a new RoleBinding named test-role-2-bind binding the newly created Role to the Pod's ServiceAccount.
Note: Don't delete the existing RoleBinding.
Answer:
Explanation:
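A sketch of one way to carry out the three steps (the name of the existing Role bound to test-sa is not given in the task, so role-1 below is a placeholder; confirm the real name with the first two commands):
[desk@cli] $ kubectl config use-context prod-account
[desk@cli] $ kubectl get rolebindings -n database -o wide      # find the Role bound to test-sa
[desk@cli] $ kubectl edit role role-1 -n database              # restrict its rules to:
#   rules:
#   - apiGroups: [""]
#     resources: ["pods"]
#     verbs: ["get"]
[desk@cli] $ kubectl create role test-role-2 -n database --verb=update --resource=statefulsets
[desk@cli] $ kubectl create rolebinding test-role-2-bind -n database --role=test-role-2 --serviceaccount=database:test-sa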
NEW QUESTION # 20
You must complete this task on the following cluster/nodes:
Cluster: trace
Master node: master
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context trace
Given: You may use Sysdig or Falco documentation.
Task:
Use detection tools to detect anomalies such as processes frequently spawning and executing something unusual in the single container belonging to Pod tomcat.
Two tools are available to use:
1. falco
2. sysdig
Tools are pre-installed on the worker1 node only.
Analyse the container's behaviour for at least 40 seconds, using filters that detect newly spawning and executing
processes.
Store an incident file at /home/cert_masters/report, in the following format:
[timestamp],[uid],[processName]
Note: Make sure to store incident file on the cluster's worker node, don't move it to master node.
Answer:
Explanation:
$ vim /etc/falco/falco_rules.local.yaml
- rule: Container Drift Detected (open+create)
  desc: New executable created in a container due to open+create
  condition: >
    evt.type in (open,openat,creat) and
    evt.is_open_exec=true and
    container and
    not runc_writing_exec_fifo and
    not runc_writing_var_lib_docker and
    not user_known_container_drift_activities and
    evt.rawres>=0
  output: >
    %evt.time,%user.uid,%proc.name        # change the output to this format; see the Falco documentation
  priority: ERROR
$ kill -1 <PID of falco>                  # reload the Falco rules (SIGHUP)
Explanation
[desk@cli] $ ssh worker1
[worker1@cli] $ vim /etc/falco/falco_rules.yaml
Search for the Container Drift Detected rule and copy it into falco_rules.local.yaml.
[worker1@cli] $ vim /etc/falco/falco_rules.local.yaml
Paste the rule shown above, changing its output line to %evt.time,%user.uid,%proc.name so events match the required [timestamp],[uid],[processName] format.
[worker1@cli] $ vim /etc/falco/falco.yaml
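From here, one way to capture the events for the required duration and produce the report (a sketch; -M stops Falco after the given number of seconds, and falco.yaml only needs editing if you prefer the file_output approach):
[worker1@cli] $ sudo falco -M 45 > /home/cert_masters/report    # default config loads falco_rules.local.yaml
# Alternatively, enable file_output in /etc/falco/falco.yaml (enabled: true, filename: /home/cert_masters/report),
# restart the falco service, and stop it after ~45 seconds.
# Remove any startup/info lines so only [timestamp],[uid],[processName] event lines remain.
# Keep the report on worker1; do not move it to the master node.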
NEW QUESTION # 21
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context qa
Context: A Pod fails to run because of an incorrectly specified ServiceAccount.
Task: Create a new ServiceAccount named backend-qa in the existing namespace qa, which must not have access to any Secret. Edit the frontend Pod YAML to use the backend-qa ServiceAccount.
Note: You can find the frontend Pod YAML at /home/cert_masters/frontend-pod.yaml
Answer:
Explanation:
[desk@cli] $ k create sa backend-qa -n qa
serviceaccount/backend-qa created
[desk@cli] $ k get role,rolebinding -n qa
No resources found in qa namespace.
[desk@cli] $ k create role backend -n qa --resource pods,namespaces,configmaps --verb list    # No access to secrets
role.rbac.authorization.k8s.io/backend created
[desk@cli] $ k create rolebinding backend -n qa --role backend --serviceaccount qa:backend-qa
rolebinding.rbac.authorization.k8s.io/backend created
[desk@cli] $ vim /home/cert_masters/frontend-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  serviceAccountName: backend-qa    # Add this
  containers:
  - image: nginx
    name: frontend
[desk@cli] $ k apply -f /home/cert_masters/frontend-pod.yaml
pod/frontend created
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
NEW QUESTION # 22
SIMULATION
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.
Create a Pod of image alpine:3.13.2 in the Namespace default to run on the gVisor runtime class.
Verify: Exec into the Pod and run dmesg; you will see output like this:
NEW QUESTION # 23
Context
A default-deny NetworkPolicy helps avoid accidentally exposing a Pod in a namespace that doesn't have any other NetworkPolicy defined.
Task
Create a new default-deny NetworkPolicy named defaultdeny in the namespace testing for all traffic of type
Egress.
The new NetworkPolicy must deny all Egress traffic in the namespace testing.
Apply the newly created default-deny NetworkPolicy to all Pods running in namespace testing.
Answer:
Explanation:
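A minimal manifest that satisfies the task:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: defaultdeny
  namespace: testing
spec:
  podSelector: {}        # empty selector: applies to all Pods in namespace testing
  policyTypes:
  - Egress               # no egress rules are listed, so all Egress traffic is denied
Save it to a file and apply it with kubectl apply -f.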
NEW QUESTION # 24
Using the runtime detection tool Falco, analyse the container behaviour for at least 30 seconds, using filters that detect newly spawning and executing processes.
A. Store the incident file at /opt/falco-incident.txt, containing the detected incidents, one per line, in the format
Answer: A
Explanation:
[timestamp],[uid],[user-name],[processName]
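The approach is the same as in Question 20; only the output format and the target file change (a sketch):
# In the Container Drift Detected rule of /etc/falco/falco_rules.local.yaml use:
#   output: >
#     %evt.time,%user.uid,%user.name,%proc.name
$ sudo falco -M 35 > /opt/falco-incident.txt    # -M stops Falco after 35 seconds; keep only the event lines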
NEW QUESTION # 25
Create a User named john, create the CSR Request, fetch the certificate of the user after approving it.
Create a Role name john-role to list secrets, pods in namespace john
Finally, Create a RoleBinding named john-role-binding to attach the newly created role john-role to the user john
in the namespace john. To Verify: Use the kubectl auth CLI command to verify the permissions.
Answer:
Explanation:
Use kubectl to create a CSR for the user john and approve it.
Get the list of CSRs:
kubectl get csr
Approve the CSR:
kubectl certificate approve john
Get the certificate
Retrieve the certificate from the CSR:
kubectl get csr/john -o yaml
here are the Role and RoleBinding used in this example to give john permission to create NEW_CRD resources:
kubectl apply -f roleBindingJohn.yaml --as=john
rolebinding.rbac.authorization.k8s.io/john_external-resource-rb created
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: john_crd
  namespace: development-john
subjects:
- kind: User
  name: john
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: crd-creation
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: crd-creation
rules:
- apiGroups: ["kubernetes-client.io/v1"]
  resources: ["NEW_CRD"]
  verbs: ["create", "list", "get"]
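The YAML above is a generic CRD-permission example; a sketch of commands that match the task as stated (Role john-role listing secrets and pods in namespace john, RoleBinding john-role-binding, verification with kubectl auth can-i; key and CSR generation with openssl is omitted for brevity):
kubectl get csr
kubectl certificate approve john
kubectl get csr/john -o yaml                                        # the signed certificate is in status.certificate
kubectl create role john-role -n john --verb=list --resource=secrets,pods
kubectl create rolebinding john-role-binding -n john --role=john-role --user=john
kubectl auth can-i list secrets -n john --as=john                   # expect: yes
kubectl auth can-i list pods -n john --as=john                      # expect: yes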
NEW QUESTION # 26
A service is running on port 389 inside the system. Find the process ID of the process, store the names of all of its open files in /candidate/KH77539/files.txt, and delete the binary.
Answer:
Explanation:
root# netstat -ltnup
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address   State    PID/Program name
tcp   0      0      127.0.0.1:17600    0.0.0.0:*         LISTEN   1293/dropbox
tcp   0      0      127.0.0.1:17603    0.0.0.0:*         LISTEN   1293/dropbox
tcp   0      0      0.0.0.0:22         0.0.0.0:*         LISTEN   575/sshd
tcp   0      0      127.0.0.1:9393     0.0.0.0:*         LISTEN   900/perl
tcp   0      0      :::80              :::*              LISTEN   9583/docker-proxy
tcp   0      0      :::443             :::*              LISTEN   9571/docker-proxy
udp   0      0      0.0.0.0:68         0.0.0.0:*                  8822/dhcpcd
...
root# netstat -ltnup | grep ':22'
tcp   0      0      0.0.0.0:22         0.0.0.0:*         LISTEN   575/sshd
The ss command is the replacement of the netstat command.
Now let's see how to use the ss command to see which process is listening on port 22:
root# ss -ltnup 'sport = :22'
Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp   LISTEN 0      128    0.0.0.0:22         0.0.0.0:*    users:(("sshd",pid=575,fd=3))
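Applied to the actual task (port 389), a sketch of the remaining steps; PID 2567 and the program name are hypothetical placeholders for whatever netstat/ss reports on the exam system:
root# netstat -ltnup | grep ':389'                   # or: ss -ltnup 'sport = :389'
tcp   0   0   0.0.0.0:389   0.0.0.0:*   LISTEN   2567/some-daemon    # hypothetical output
root# lsof -p 2567 > /candidate/KH77539/files.txt    # store the names of all open files
root# ls -l /proc/2567/exe                           # locate the binary on disk
root# rm -f $(readlink /proc/2567/exe)               # delete the binary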
NEW QUESTION # 27
Create a RuntimeClass named untrusted using the prepared runtime handler named runsc.
Create a Pod of image alpine:3.13.2 in the Namespace default to run on the gVisor runtime class.
Answer:
Explanation:
Verify: Exec into the Pod and run dmesg; you will see output like this:
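A sketch of the two objects (the Pod name gvisor-test is an arbitrary placeholder):
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: untrusted
handler: runsc                    # the prepared gVisor runtime handler
---
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-test               # placeholder name
  namespace: default
spec:
  runtimeClassName: untrusted
  containers:
  - name: alpine
    image: alpine:3.13.2
    command: ["sh", "-c", "sleep 3600"]    # keep the container running so it can be exec'd into
kubectl exec -it gvisor-test -- dmesg should then show gVisor's own kernel messages rather than the host kernel's.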
NEW QUESTION # 28
Context:
Cluster: prod
Master node: master1
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context prod
Task: Analyse and edit the given Dockerfile (based on the ubuntu:18.04 image) at /home/cert_masters/Dockerfile, fixing two instructions present in the file that are prominent security/best-practice issues.
Analyse and edit the given manifest file /home/cert_masters/mydeployment.yaml, fixing two fields present in the file that are prominent security/best-practice issues.
Note: Don't add or remove configuration settings; only modify the existing configuration settings, so that two configuration settings each are no longer security/best-practice concerns. Should you need an unprivileged user for any of the tasks, use user nobody with user id 65535.
Answer:
Explanation:
1. For the Dockerfile: fix the image version and the user name.
2. For mydeployment.yaml: fix the security contexts.
Explanation
[desk@cli] $ vim /home/cert_masters/Dockerfile
FROM ubuntu:latest # Remove this
FROM ubuntu:18.04 # Add this
USER root # Remove this
USER nobody # Add this
RUN apt get install -y lsof=4.72 wget=1.17.1 nginx=4.2
ENV ENVIRONMENT=testing
USER root # Remove this
USER nobody # Add this
CMD ["nginx -d"]
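The manifest fix is not spelled out above; the contents of /home/cert_masters/mydeployment.yaml are not reproduced here, so the snippet below is only a hedged sketch of the kind of securityContext fields this task usually asks to correct (only two of them will actually appear in the given file):
[desk@cli] $ vim /home/cert_masters/mydeployment.yaml
    securityContext:
      privileged: true               # if present, change to false
      readOnlyRootFilesystem: false  # if present, change to true
      runAsUser: 0                   # if present, change to 65535 (user nobody, as instructed)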
NEW QUESTION # 29
SIMULATION
Enable audit logs in the cluster. To do so, enable the log backend, and ensure that:
1. logs are stored at /var/log/kubernetes-logs.txt.
2. log files are retained for 12 days.
3. at most 8 old audit log files are retained.
4. the maximum size before a log file gets rotated is 200MB.
Edit and extend the basic policy to log:
1. namespaces changes at RequestResponse level.
2. the request body of secrets changes in the namespace kube-system.
3. all other resources in core and extensions at the Request level.
4. "pods/portforward" and "services/proxy" at the Metadata level.
5. omit the RequestReceived stage.
All other requests at the Metadata level.
Answer:
Explanation:
Kubernetes auditing provides a security-relevant chronological set of records about a cluster. Kube-apiserver
performs auditing. Each request on each stage of its execution generates an event, which is then pre-processed
according to a certain policy and written to a backend. The policy determines what's recorded and the backends
persist the records.
You might want to configure the audit log as part of compliance with the CIS (Center for Internet Security)
Kubernetes Benchmark controls.
The audit log can be enabled by default using the following configuration in cluster.yml:
services:
  kube-api:
    audit_log:
      enabled: true
When the audit log is enabled, you should be able to see the default values at /etc/kubernetes/audit-policy.yaml
The log backend writes audit events to a file in JSONlines format. You can configure the log audit backend using
the following kube-apiserver flags:
--audit-log-path specifies the log file path that log backend uses to write audit events. Not specifying this flag disables the log backend.
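A sketch of settings that satisfy the task. The flags go into the kube-apiserver command in /etc/kubernetes/manifests/kube-apiserver.yaml (the policy file path below is an assumption; any path mounted into the API server container works):
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/kubernetes-logs.txt
    - --audit-log-maxage=12       # retain log files for 12 days
    - --audit-log-maxbackup=8     # retain at most 8 old audit log files
    - --audit-log-maxsize=200     # rotate once a file reaches 200 MB
# Remember to mount the policy file and the log directory into the kube-apiserver container
# (hostPath volumes plus volumeMounts), or the static Pod will fail to start.
The extended policy in /etc/kubernetes/audit-policy.yaml (rules are matched first-match, so the more specific rules come first):
apiVersion: audit.k8s.io/v1
kind: Policy
omitStages:
  - "RequestReceived"              # omit the RequestReceived stage everywhere
rules:
  - level: RequestResponse         # namespaces changes
    resources:
    - group: ""
      resources: ["namespaces"]
  - level: Request                 # request body of secrets changes in kube-system
    namespaces: ["kube-system"]
    resources:
    - group: ""
      resources: ["secrets"]
  - level: Metadata                # pods/portforward and services/proxy
    resources:
    - group: ""
      resources: ["pods/portforward", "services/proxy"]
  - level: Request                 # all other resources in core and extensions
    resources:
    - group: ""
    - group: "extensions"
  - level: Metadata                # everything else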
NEW QUESTION # 30
SIMULATION
Before making any changes, build the Dockerfile with the tag base:v1.
Now analyze and edit the given Dockerfile (based on ubuntu:16.04),
fixing two instructions present in the file; check from a security aspect and a reduce-image-size point of view.
Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt install nginx -y
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu
entrypoint.sh
#!/bin/bash
echo "Hello from CKS"
After fixing the Dockerfile, build the Docker image with the tag base:v2.
To Verify: Check the size of the image before and after the build.
A. Send us the Feedback on it.
Answer: A
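A sketch of one acceptable fix: pin the base image instead of latest, and collapse the two apt layers (cleaning the apt cache) to shrink the image:
docker build -t base:v1 .                     # build the original first
# Edited Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y && apt-get install -y --no-install-recommends nginx && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /
RUN useradd ubuntu
ENTRYPOINT ["/entrypoint.sh"]
USER ubuntu
# Rebuild and compare sizes:
docker build -t base:v2 .
docker images | grep base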
NEW QUESTION # 31
Cluster: scanner
Master node: controlplane
Worker node: worker1
You can switch the cluster/configuration context using the following command:
[desk@cli] $ kubectl config use-context scanner
Given:
You may use Trivy's documentation.
Task:
Use the Trivy open-source container scanner to detect images with severe vulnerabilities used by Pods in the
namespace nato.
Look for images with High or Critical severity vulnerabilities and delete the Pods that use those images.
Trivy is pre-installed on the cluster's master node; use the master node to run Trivy.
Answer:
Explanation:
[controlplane@cli] $ k get pods -n nato -o yaml | grep "image: "
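A sketch of the remaining steps (nginx:1.19 and <pod-name> are placeholders for whatever the command above reports):
[controlplane@cli] $ trivy image --severity HIGH,CRITICAL nginx:1.19    # repeat for every image found above
# Delete every Pod in namespace nato whose image reports HIGH or CRITICAL vulnerabilities:
[controlplane@cli] $ k delete pod <pod-name> -n nato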
NEW QUESTION # 32
......