CIS Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE) Benchmark
v1.6.0 - 10-24-2024
For information on referencing and/or citing CIS Benchmarks in 3rd party documentation
(including using portions of Benchmark Recommendations) please contact CIS Legal
([email protected]) and request guidance on copyright usage.
NOTE: It is NEVER acceptable to host a CIS Benchmark in ANY format (PDF, etc.)
on a 3rd party (non-CIS owned) site.
These tools make the hardening process much more scalable for large numbers of
systems and applications.
NOTE: Some tooling focuses only on the CIS Benchmarks™ Recommendations that
can be fully automated (skipping ones marked Manual). It is important that
ALL Recommendations (Automated and Manual) be addressed, since all
are important for properly securing systems and are typically in scope for
audits.
In addition, CIS has developed CIS Build Kits for some common technologies to assist
in applying CIS Benchmarks™ Recommendations.
NOTE: CIS and the CIS Benchmarks™ development communities in CIS WorkBench
do their best to test and have high confidence in the Recommendations, but
they cannot test potential conflicts with all possible system deployments.
Known potential issues identified during CIS Benchmarks™ development are
documented in the Impact section of each Recommendation.
By using CIS and/or CIS Benchmarks™ Certified tools, and being careful with
remediation deployment, it is possible to harden large numbers of deployed systems in
a cost effective, efficient, and safe manner.
NOTE: As previously stated, the PDF versions of the CIS Benchmarks™ are
available for free, non-commercial use on the CIS Website. All other formats
of the CIS Benchmarks™ (MS Word, Excel, and Build Kits) are available for
CIS SecureSuite® members.
Convention | Meaning
Title
Concise description for the recommendation's intended configuration.
Assessment Status
An assessment status is included for every recommendation. The assessment status
indicates whether the given recommendation can be automated or requires manual
steps to implement. Both statuses are equally important and are determined and
supported as defined below:
Automated
Represents recommendations for which assessment of a technical control can be fully
automated and validated to a pass/fail state. Recommendations will include the
necessary information to implement automation.
Manual
Represents recommendations for which assessment of a technical control cannot be
fully automated and requires all or some manual steps to validate that the configured
state is set as expected. The expected state can vary depending on the environment.
Profile
A collection of recommendations for securing a technology or a supporting platform.
Most benchmarks include at least a Level 1 and Level 2 Profile. Level 2 extends Level 1
recommendations and is not a standalone profile. The Profile Definitions section in the
benchmark provides the definitions as they pertain to the recommendations included for
the technology.
Description
Detailed information pertaining to the setting with which the recommendation is
concerned. In some cases, the description will include the recommended value.
Rationale Statement
Detailed reasoning for the recommendation to provide the user a clear and concise
understanding on the importance of the recommendation.
Audit Procedure
Systematic instructions for determining if the target system complies with the
recommendation.
Remediation Procedure
Systematic instructions for applying recommendations to the target system to bring it
into compliance according to the recommendation.
Default Value
Default value for the given setting in this recommendation, if known. If not known, either
not configured or not defined will be applied.
References
Additional documentation relative to the recommendation.
Additional Information
Supplementary information that does not correspond to any other field but may be
useful to the user.
• Level 1
• Level 2 (Extends Level 1)
Author/s
Gilson Melo
Adao Oliveira
Editor
Randall Mowen
Contributors
Mark Larinde
Logan Kleier
Adao Oliveira Junior KCNA, CKA, Oracle Corporation
Sherwood Zern
Josh Hammer
Yogesh Asalkar
• The security configuration of the data plane, including the configuration of the
security groups that allow traffic to pass from the Oracle OKE control plane into
the customer Virtual Cloud Network (VCN)
• The configuration of the worker nodes and the containers themselves
• The worker node guest operating system (including updates and security
patches)
o OKE follows the shared responsibility model for CVEs and security
patches on managed node groups. Because managed nodes run the
OKE-optimized OPIs, OKE is responsible for building patched versions of
these OPIs when bugs or issues are reported and a fix can be published.
Customers are responsible for deploying these patched OPI versions
to their managed node groups.
• Other associated application software:
o Setting up and managing network controls, such as firewall rules
o Managing platform-level identity and access management, either with or in
addition to IAM
• The sensitivity of your data, your company’s requirements, and applicable laws
and regulations
Oracle is responsible for securing the control plane, though you might be able to
configure certain options based on your requirements. Section 2 of this Benchmark
addresses these configurations.
• Level 1
Description:
Kubernetes provides the option to use client certificates for user authentication.
However, as there is no way to revoke these certificates when a user leaves an
organization or loses their credential, they are not suitable for this purpose.
It is not possible to fully disable client certificate use within a cluster as it is used for
component to component authentication.
Rationale:
With any authentication mechanism, the ability to revoke credentials if they are
compromised or no longer required is a key control. Kubernetes client certificate
authentication does not allow for this due to a lack of support for certificate revocation.
Impact:
External mechanisms for authentication generally require additional software to be
deployed.
Audit:
Review user access to the cluster and ensure that users are not making use of
Kubernetes client certificate authentication.
To verify the availability of client certificates in your OKE cluster, run the following
command:
kubectl get secrets --namespace kube-system
This command lists all the secrets in the kube-system namespace, which includes the
client certificates used for authentication.
Look for secrets with names starting with oci- or oke-. These secrets contain the client
certificates. If the command returns secrets with such names, it indicates that client
certificates are available in your OKE cluster.
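A quick way to filter for such secrets (a sketch; the oci-/oke- prefixes come from the guidance above) is:
kubectl get secrets --namespace kube-system | grep -E '^(oci-|oke-)'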
Remediation:
Alternative mechanisms provided by Kubernetes such as the use of OIDC should be
implemented in place of client certificates.
You can remediate the availability of client certificates in your OKE cluster.
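For example, OKE kubeconfig files based on short-lived Oracle Cloud Infrastructure IAM tokens (rather than client certificates) can be generated with the OCI CLI; a sketch, with the cluster OCID as a placeholder (verify the flags against the current OCI CLI documentation):
oci ce cluster create-kubeconfig --cluster-id <cluster-ocid> --file $HOME/.kube/config --token-version 2.0.0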
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
Additional Information:
The lack of certificate revocation was flagged as a high-risk issue in the recent
Kubernetes security audit. Without this feature, client certificate authentication is not
suitable for end users.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
The audit logs are part of the Kubernetes control plane logs managed by OKE. OKE
integrates with the Oracle Cloud Infrastructure Audit service.
All operations performed by the Kubernetes API server are visible as log events on the
Oracle Cloud Infrastructure Audit service.
Rationale:
Logging is a crucial detective control for all systems to detect potential unauthorized
access.
Impact:
The Control plane audit logs are managed by OKE. OKE Control plane logs are written
to the Oracle Cloud Infrastructure Audit Service. The Oracle Cloud Infrastructure Audit
service automatically records calls to all supported Oracle Cloud Infrastructure public
application programming interface (API) endpoints as log events.
Audit:
1. In the Console, open the navigation menu. Under Solutions and Platform, go to
Developer Services and click Kubernetes Clusters.
2. Choose a Compartment you have permission to work in.
3. On the Cluster List page, click the cluster's name for which you want to monitor
and manage operations.
4. The Cluster page shows information about the cluster.
5. Display the Work Requests tab, showing the recent operations performed on the
cluster.
To view operations performed by Container Engine for Kubernetes and the Kubernetes
API server as log events in the Oracle Cloud Infrastructure Audit service:
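Audit log events for the cluster's compartment can also be listed with the OCI CLI (a sketch; the compartment OCID and time window are placeholders, and the flags should be verified against the current OCI CLI documentation):
oci audit event list --compartment-id <compartment-ocid> --start-time 2024-10-01T00:00:00Z --end-time 2024-10-02T00:00:00Z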
Remediation:
No remediation is necessary for this control.
Default Value:
By default, Kubernetes API server logs and Container Engine for Kubernetes audit
events are sent to the Oracle Cloud Infrastructure Audit service. By default, the Audit
Log retention period is 90 days.
References:
1. https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Tasks/contengmonitoringoke.htm
3. https://docs.cloud.oracle.com/en-
us/iaas/Content/Audit/Tasks/viewinglogevents.htm#Viewing_Audit_Log_Events
4. https://docs.cloud.oracle.com/en-
us/iaas/Content/Audit/Tasks/settingretentionperiod.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
This section covers recommendations for configuration files on the Oracle OKE worker
nodes.
• Level 1
Description:
If kubelet is running, and if it is using a file-based oke_kubelet_conf.json file, ensure
that the oke_kubelet_conf.json file has permissions of 644 or more restrictive.
Rationale:
The kubelet oke_kubelet_conf.json file controls various parameters of the kubelet
service in the worker node. You should restrict its file permissions to maintain the
integrity of the file. The file should be writable by only the administrators on the system.
It is possible to run kubelet with the kubeconfig parameters configured as a
Kubernetes ConfigMap instead of a file. In this case, there is no
oke_kubelet_conf.json file.
Impact:
None.
Audit:
Method 1
SSH to the worker nodes
To check to see if the Kubelet Service is running:
sudo systemctl status kubelet
The output should return Active: active (running) since..
Run the following command on each node to find the appropriate oke_kubelet_conf.json
file:
ps -ef | grep kubelet
The output of the above command should return something similar to --kubeconfig
/var/lib/kubelet/oke_kubelet_conf.json which is the location of the
oke_kubelet_conf.json file.
Run this command to obtain the oke_kubelet_conf.json file permissions:
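For example, using the file location identified above:
stat -c %a /var/lib/kubelet/oke_kubelet_conf.json
Verify that the permissions are 644 or more restrictive.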
Remediation:
Run the below command (based on the file location on your system) on each worker
node.
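For example, assuming the file is at the location identified in the Audit step:
chmod 644 /var/lib/kubelet/oke_kubelet_conf.json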
Default Value:
References:
1. https://kubernetes.io/docs/admin/kube-proxy/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
If kubelet is running, ensure that the file ownership of its oke_kubelet_conf.json file is
set to root:root.
Rationale:
The oke_kubelet_conf.json file for kubelet controls various parameters for the kubelet
service in the worker node. You should set its file ownership to maintain the integrity of
the file. The file should be owned by root:root.
Impact:
None
Audit:
Method 1
SSH to the worker nodes
To check to see if the Kubelet Service is running:
sudo systemctl status kubelet
The output should return Active: active (running) since..
Run the following command on each node to find the appropriate oke_kubelet_conf.json
file:
ps -ef | grep kubelet
The output of the above command should return something similar to --kubeconfig
/var/lib/kubelet/oke_kubelet_conf.json which is the location of the
oke_kubelet_conf.json file.
Run this command to obtain the oke_kubelet_conf.json file ownership:
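For example, using the file location identified above:
stat -c %U:%G /var/lib/kubelet/oke_kubelet_conf.json
Verify that the ownership is set to root:root.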
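Method 2 is to run a pod that mounts the node's file system, as described for the other file checks in this Benchmark. A minimal sketch (the pod name file-check and the busybox image are illustrative; add nodeName to pin the pod to the worker node being audited):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: file-check
spec:
  containers:
  - name: file-check
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-root
      mountPath: /host       # the node's root file system appears under /host
      readOnly: true
  volumes:
  - name: host-root
    hostPath:
      path: /
EOF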
Once the pod is running, you can exec into it to check file ownership on the node:
kubectl exec -it file-check -- sh
Now you are in a shell inside the pod, but you can access the node's file system through
the /host directory and check the ownership of the file:
ls -l /host/var/lib/kubelet/oke_kubelet_conf.json
The output of the above command gives you the oke_kubelet_conf.json file's ownership.
Verify that the ownership is set to root:root.
Remediation:
Run the below command (based on the file location on your system) on each worker
node.
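For example, assuming the file is at the location identified in the Audit step:
chown root:root /var/lib/kubelet/oke_kubelet_conf.json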
Default Value:
References:
1. https://kubernetes.io/docs/admin/kube-proxy/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Ensure that if the kubelet refers to a configuration file with the --config argument, that
file has permissions of 644 or more restrictive.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file is specified you should restrict its file
permissions to maintain the integrity of the file. The file should be writable by only the
administrators on the system.
Impact:
None.
Audit:
Method 1
First, SSH to the relevant worker node:
To check to see if the Kubelet Service is running:
sudo systemctl status kubelet
The output of the above command should return something similar to --config
/etc/kubernetes/kubelet.conf which is the location of the Kubelet config file.
Run the following command:
stat -c %a /etc/kubernetes/kubelet.conf
The output of the above command is the Kubelet config file's permissions. Verify that
the permissions are 644 or more restrictive.
Method 2
Create and Run a Privileged Pod.
You will need to run a pod that is privileged enough to access the host's file system.
This can be achieved by deploying a pod that uses the hostPath volume to mount the
node's file system into the pod.
Here's an example of a simple pod definition that mounts the root of the host to /host
within the pod:
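A pod definition like the file-check example shown earlier in this Benchmark (a hostPath mount of the node's root file system at /host) can be reused here. Once the pod is running, check the permissions from inside it, for example:
kubectl exec -it file-check -- stat -c %a /host/etc/kubernetes/kubelet.conf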
Verify that if a file is specified and it exists, the permissions are 644 or more restrictive.
Remediation:
Run the following command (using the config file location identified in the Audit step):
chmod 644 /etc/kubernetes/kubelet.conf
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Ensure that if the kubelet refers to a configuration file with the --config argument, that
file is owned by root:root.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file is specified you should set its file
ownership to maintain the integrity of the file. The file should be owned by root:root.
Impact:
None.
Audit:
Method 1
First, SSH to the relevant worker node:
To check to see if the Kubelet Service is running:
sudo systemctl status kubelet
The output of the above command should return something similar to --config
/etc/kubernetes/kubelet.conf which is the location of the Kubelet config file.
Run the following command:
stat -c %U:%G /etc/kubernetes/kubelet.conf
The output of the above command is the Kubelet config file's ownership. Verify that the
ownership is set to root:root
Method 2
Create and Run a Privileged Pod.
You will need to run a pod that is privileged enough to access the host's file system.
This can be achieved by deploying a pod that uses the hostPath volume to mount the
node's file system into the pod.
Here's an example of a simple pod definition that mounts the root of the host to /host
within the pod:
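A pod definition like the file-check example shown earlier in this Benchmark (a hostPath mount of the node's root file system at /host) can be reused here. Once the pod is running, check the ownership from inside it, for example:
kubectl exec -it file-check -- ls -l /host/etc/kubernetes/kubelet.conf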
The output of the above command gives you the Kubelet config file's ownership. Verify
that the ownership is set to root:root.
Remediation:
Run the following command (using the config file location identified in the Audit step):
chown root:root /etc/kubernetes/kubelet.conf
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
If the --kubeconfig argument is present, this gives the location of the Kubelet config
file. This config file could be in JSON or YAML format depending on your distribution.
• Level 1
Description:
Disable anonymous requests to the Kubelet server.
Rationale:
When enabled, requests that are not rejected by other configured authentication
methods are treated as anonymous requests. These requests are then served by the
Kubelet server. You should rely on authentication to authorize access and disallow
anonymous requests.
Impact:
Anonymous requests will be rejected.
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for authentication:
anonymous: enabled set to false.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
sudo more /etc/systemd/system/kubelet.service
Verify that --anonymous-auth is set to false.
Audit Method 2:
If using the api configz endpoint consider searching for the status of
authentication... "anonymous":{"enabled":false} by extracting the live
configuration from the nodes running kubelet.
Set the local proxy port and the following variables and provide proxy port number and
node name;
HOSTNAME_PORT="localhost-and-port-number"
NODE_NAME="The-Name-Of-Node-To-Extract-Configuration" (from the output of "kubectl get nodes")
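A typical way to query the configz endpoint through the proxy (a sketch; the port, the node name, and the use of jq are illustrative) is:
kubectl proxy --port=8001 &
export HOSTNAME_PORT=localhost:8001
export NODE_NAME=<node-name-from-kubectl-get-nodes>
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig.authentication.anonymous'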
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--anonymous-auth=false
Remediation Method 2:
If using the api configz endpoint consider searching for the status of
"authentication.*anonymous":{"enabled":false}" by extracting the live
configuration from the nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-
authentication
3. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Do not allow all requests. Enable explicit authorization.
Rationale:
Kubelets, by default, allow all authenticated requests (even anonymous ones) without
needing explicit authorization checks from the apiserver. You should restrict this
behavior and only allow explicitly authorized requests.
Impact:
Unauthorized requests will be denied.
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for authorization:
mode set to Webhook.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
sudo more /etc/systemd/system/kubelet.service
Verify that --authorization-mode is set to Webhook.
Audit Method 2:
If using the api configz endpoint consider searching for the status of
authentication... "webhook":{"enabled":true} by extracting the live
configuration from the nodes running kubelet.
Set the local proxy port and the following variables and provide proxy port number and
node name;
HOSTNAME_PORT="localhost-and-port-number"
NODE_NAME="The-Name-Of-Node-To-Extract-Configuration" (from the output of "kubectl get nodes")
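As with the other configz checks in this section, the live kubelet configuration can be queried through the proxy once the variables above are set (a sketch; the jq filter is illustrative):
kubectl proxy --port=8001 &
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig.authentication.webhook'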
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--authorization-mode=Webhook
Remediation Method 2:
If using the api configz endpoint consider searching for the status of
"authentication.*webhook":{"enabled":true}" by extracting the live
configuration from the nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-
authentication
3. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Enable Kubelet authentication using certificates.
Rationale:
The connections from the apiserver to the kubelet are used for fetching logs for pods,
attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding
functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default,
the apiserver does not verify the kubelet’s serving certificate, which makes the
connection subject to man-in-the-middle attacks, and unsafe to run over untrusted
and/or public networks. Enabling Kubelet certificate authentication ensures that the
apiserver could authenticate the Kubelet before submitting any requests.
Impact:
You require TLS to be configured on apiserver as well as kubelets.
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for --client-ca-file
set to the location of the client certificate authority file.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
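As in the other kubelet checks in this section, display the file and check the relevant argument, for example:
sudo more /etc/systemd/system/kubelet.service
Verify that --client-ca-file is set to the location of the client certificate authority file.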
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--client-ca-file=/etc/kubernetes/ca.crt
Remediation Method 2:
If using the api configz endpoint consider searching for the status of
"authentication.*x509": {"clientCAFile": "/etc/kubernetes/pki/ca.crt"} by
extracting the live configuration from the nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-
authentication-authorization/
3. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Disable the read-only port.
Rationale:
The Kubelet process provides a read-only API in addition to the main Kubelet API.
Unauthenticated access is provided to this read-only API, which could allow retrieval of
potentially sensitive information about the cluster.
Impact:
Removal of the read-only port will require that any service which made use of it be
re-configured to use the main Kubelet API.
Audit:
If using a Kubelet configuration file, check that there is an entry for --read-only-
port=0.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet service config
file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
sudo more /etc/systemd/system/kubelet.service
Verify that the --read-only-port argument exists and is set to 0.
If the --read-only-port argument is not present, check that there is a Kubelet config
file specified by --config. Check that if there is a readOnlyPort entry in the file, it is
set to 0.
Remediation:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--read-only-port=0
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Do not disable timeouts on streaming connections.
Rationale:
Setting idle timeouts ensures that you are protected against Denial-of-Service attacks,
inactive connections and running out of ephemeral ports.
Note: By default, --streaming-connection-idle-timeout is set to 4 hours which
might be too high for your environment. Setting this as appropriate would additionally
ensure that such streaming connections are timed out after serving legitimate use
cases.
Impact:
Long-lived connections could be interrupted.
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that the entry for --streaming-
connection-idle-timeout is not set to 0.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
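Following the same pattern as the other checks in this section, for example:
sudo more /etc/systemd/system/kubelet.service
Verify that --streaming-connection-idle-timeout is not set to 0.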
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--streaming-connection-idle-timeout=5m (or another value greater than 0)
Remediation Method 2:
If using the api configz endpoint consider searching for the status of
"streamingConnectionIdleTimeout": by extracting the live configuration from the
nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://github.com/kubernetes/kubernetes/pull/18552
3. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Allow Kubelet to manage iptables.
Rationale:
Kubelets can automatically manage the required changes to iptables based on how you
choose your networking options for the pods. It is recommended to let kubelets manage
the changes to iptables. This ensures that the iptables configuration remains in sync
with pods networking configuration. Manually configuring iptables with dynamic pod
network configuration changes might hamper the communication between
pods/containers and to the outside world. You might have iptables rules too restrictive
or too open.
Impact:
Kubelet would manage the iptables on the system and keep it in sync. If you are using
any other iptables management solution, then there might be some conflicts.
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for --make-iptables-
util-chains set to true.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
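Following the same pattern as the other checks in this section, for example:
sudo more /etc/systemd/system/kubelet.service
Verify that --make-iptables-util-chains is set to true.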
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--make-iptables-util-chains=true
Remediation Method 2:
If using the api configz endpoint consider searching for the status of
"makeIPTablesUtilChains": true by extracting the live configuration from the nodes
running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
T1190 | TA0001
• Level 1
Description:
Security relevant information should be captured. The --event-qps flag on the Kubelet
can be used to limit the rate at which events are gathered. Setting this too low could
result in relevant events not being logged, however the unlimited setting of 0 could
result in a denial of service on the kubelet.
Rationale:
It is important to capture all events and not restrict event creation. Events are an
important source of security information and analytics that ensure that your environment
is consistently monitored using the event data.
Impact:
Setting this parameter to 0 could result in a denial of service condition due to excessive
events being created. The cluster's event processing and storage systems should be
scaled to handle expected event loads.
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for --event-qps set to
0 or a level which ensures appropriate event capture.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
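Following the same pattern as the other checks in this section, for example:
sudo more /etc/systemd/system/kubelet.service
Verify that --event-qps is set to 0 or a level which ensures appropriate event capture.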
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--event-qps=0
Remediation Method 2:
If using the api configz endpoint consider searching for the status of "eventRecordQPS"
by extracting the live configuration from the nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/kubeletco
nfig/v1beta1/types.go
3. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Setup TLS connection on the Kubelets.
Rationale:
Kubelet communication contains sensitive parameters that should remain encrypted in
transit. Configure the Kubelets to serve only HTTPS traffic.
Impact:
TLS and client certificate authentication must be configured for your Kubernetes cluster
deployment.
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for tls-cert-file set
to the correct PEM file and that tls-private-key-file is set to the correct key file.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
sudo more /etc/systemd/system/kubelet.service
Verify that tls-cert-file is set to /var/lib/kubelet/pki/tls.pem.
Verify that tls-private-key-file is set to /var/lib/kubelet/pki/tls.key.
Audit Method 2:
If using the api configz endpoint consider searching for the status of tlsCertFile and
tlsPrivateKeyFile are set by extracting the live configuration from the nodes running
kubelet.
Set the local proxy port and the following variables and provide proxy port number and
node name;
HOSTNAME_PORT="localhost-and-port-number"
NODE_NAME="The-Name-Of-Node-To-Extract-Configuration" (from the output of "kubectl get nodes")
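As with the other configz checks in this section, the live kubelet configuration can be queried through the proxy once the variables above are set (a sketch; the jq filter is illustrative):
kubectl proxy --port=8001 &
curl -sSL "http://${HOSTNAME_PORT}/api/v1/nodes/${NODE_NAME}/proxy/configz" | jq '.kubeletconfig | {tlsCertFile, tlsPrivateKeyFile}'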
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameters
tls-cert-file=/var/lib/kubelet/pki/tls.pem
tls-private-key-file=/var/lib/kubelet/pki/tls.key
Remediation Method 2:
If using the api configz endpoint consider searching for the status of tlsCertFile and
tlsPrivateKeyFile are set by extracting the live configuration from the nodes running
kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. http://rootsquash.com/2016/05/10/securing-the-kubernetes-api/
3. https://github.com/kelseyhightower/docker-kubernetes-tls-guide
4. https://jvns.ca/blog/2017/08/05/how-kubernetes-certificates-work/
5. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Enable kubelet client certificate rotation.
Rationale:
The --rotate-certificates setting causes the kubelet to rotate its client certificates
by creating new CSRs as its existing credentials expire. This automated periodic
rotation ensures that there is no downtime due to expired certificates and thus
addresses availability in the CIA security triad.
Note: This recommendation only applies if you let kubelets get their certificates from the
API server. In case your kubelet certificates come from an outside authority/tool (e.g.
Vault) then you need to take care of rotation yourself.
Note: This feature also requires the RotateKubeletClientCertificate feature gate to
be enabled (which is the default since Kubernetes v1.7).
Impact:
None
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that there is an entry for --rotate-
certificates set to true.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
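Following the same pattern as the other checks in this section, for example:
sudo more /etc/systemd/system/kubelet.service
Verify that --rotate-certificates is set to true.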
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--rotate-certificates=true
Remediation Method 2:
If using the api configz endpoint consider searching for the status of
rotateCertificates by extracting the live configuration from the nodes running
kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://github.com/kubernetes/kubernetes/pull/41912
2. https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-
bootstrapping/#kubelet-configuration
3. https://kubernetes.io/docs/imported/release/notes/
4. https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
5. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Enable kubelet server certificate rotation.
Rationale:
--rotate-server-certificates causes the kubelet to both request a serving
certificate after bootstrapping its client credentials and rotate the certificate as its
existing credentials expire. This automated periodic rotation ensures that the there are
no downtimes due to expired certificates and thus addressing availability in the CIA
security triad.
Note: This recommendation only applies if you let kubelets get their certificates from the
API server. In case your kubelet certificates come from an outside authority/tool (e.g.
Vault) then you need to take care of rotation yourself.
Impact:
None
Audit:
Audit Method 1:
If using a Kubelet configuration file, check that the entry for --rotate-server-
certificates is set to true.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
find / -name kubelet.service
The output of the above command should return the file and location
/etc/systemd/system/kubelet.service, which is the location of the Kubelet service
config file.
Open the Kubelet service config file:
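Following the same pattern as the other checks in this section, for example:
sudo more /etc/systemd/system/kubelet.service
Verify that --rotate-server-certificates is set to true.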
Remediation:
Remediation Method 1:
If modifying the Kubelet service config file, edit the kubelet.service file
/etc/systemd/system/kubelet.service and set the below parameter
--rotate-server-certificates=true
Remediation Method 2:
If using the api configz endpoint consider searching for the status of --rotate-server-
certificates by extracting the live configuration from the nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
kubectl proxy --port=8001 &
Default Value:
See the OKE documentation for the default value.
References:
1. https://github.com/kubernetes/kubernetes/pull/45059
2. https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/#kubelet-configuration
3. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
4. https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-
bootstrapping/#certificate-rotation
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
4 Policies
This section contains recommendations for various Kubernetes policies which are
important to the security of the Oracle OKE customer environment.
• Level 1
Description:
The RBAC role cluster-admin provides wide-ranging powers over the environment
and should be used only where and when needed.
Rationale:
Kubernetes provides a set of default roles where RBAC is used. Some of these roles
such as cluster-admin provide wide-ranging privileges which should only be applied
where absolutely necessary. Roles such as cluster-admin allow super-user access to
perform any action on any resource. When used in a ClusterRoleBinding, it gives full
control over every resource in the cluster and in all namespaces. When used in a
RoleBinding, it gives full control over every resource in the rolebinding's namespace,
including the namespace itself.
Impact:
Audit:
Obtain a list of the principals who have access to the cluster-admin role by reviewing
the clusterrolebinding output for each role binding that has access to the cluster-
admin role.
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name
Review each principal listed and ensure that cluster-admin privilege is required for it.
Remediation:
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and if
they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role:
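For example, a binding identified in the Audit step can be removed as follows (the binding name is a placeholder):
kubectl delete clusterrolebinding <binding-name>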
Default Value:
References:
1. https://kubernetes.io/docs/admin/authorization/rbac/#user-facing-roles
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
The Kubernetes API stores secrets, which may be service account tokens for the
Kubernetes API or credentials used by workloads in the cluster. Access to these secrets
should be restricted to the smallest possible group of users to reduce the risk of
privilege escalation.
Rationale:
Inappropriate access to secrets stored within the Kubernetes cluster can allow for an
attacker to gain additional access to the Kubernetes cluster or external resources
whose credentials are stored as secrets.
Impact:
Care should be taken not to remove access to secrets from system components which
require this for their operation.
Audit:
Review the users who have get, list or watch access to secrets objects in the
Kubernetes API.
Run the following command to list all the users who have access to secrets objects in
the Kubernetes API:
kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated
Default Value:
By default, the following list of principals have get privileges on secret objects
References:
1. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Kubernetes Roles and ClusterRoles provide access to resources based on sets of
objects and actions that can be taken on those objects. It is possible to set either of
these to be the wildcard "*" which matches all items.
Use of wildcards is not optimal from a security perspective as it may allow for
inadvertent access to be granted when new resources are added to the Kubernetes API
either as CRDs or in later versions of the product.
Rationale:
The principle of least privilege recommends that users are provided only the access
required for their role and nothing more. The use of wildcard rights grants is likely to
provide excessive rights to the Kubernetes API.
Audit:
Retrieve the roles defined across each namespace in the cluster and review for
wildcards
kubectl get roles --all-namespaces -o yaml
Retrieve the cluster roles defined in the cluster and review for wildcards
kubectl get clusterroles -o yaml
Remediation:
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
The ability to create pods in a namespace can provide a number of opportunities for
privilege escalation, such as assigning privileged service accounts to these pods or
mounting hostPaths with access to sensitive data (unless Pod Security Policies are
implemented to restrict this access)
As such, access to create new pods should be restricted to the smallest possible group
of users.
Rationale:
The ability to create pods in a cluster opens up possibilities for privilege escalation and
should be restricted, where possible.
Impact:
Care should be taken not to remove access to pods from system components which
require this for their operation.
Audit:
Review the users who have create access to pod objects in the Kubernetes API.
Run the following command to list all the users who have access to create pod objects
in the Kubernetes API:
kubectl auth can-i create pods --all-namespaces --as=system:authenticated
This command checks if the system:authenticated group (which includes all
authenticated users) has the create permission for pod objects in all namespaces. The
output will display either yes or no.
Note: If you want to check a specific namespace, replace --all-namespaces with
--namespace <namespace>.
Remediation:
Where possible, remove create access to pod objects in the cluster.
Default Value:
By default, the following list of principals have create privileges on pod objects
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
The default service account should not be used; this ensures that rights granted to
applications can be more easily audited and reviewed.
Rationale:
Kubernetes provides a default service account which is used by cluster workloads
where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account
should be created for that pod, and rights granted to that service account.
The default service account should be configured such that it does not provide a service
account token and does not have any explicit rights assignments.
Impact:
All workloads which require access to the Kubernetes API will require an explicit service
account to be created.
Audit:
For each namespace in the cluster, review the rights assigned to the default service
account and ensure that it has no roles or cluster roles bound to it apart from the
defaults.
Additionally ensure that the automountServiceAccountToken: false setting is in
place for each default service account.
Check for automountServiceAccountToken using:
export SERVICE_ACCOUNT=<service account name>
kubectl get serviceaccounts/$SERVICE_ACCOUNT -o yaml
Remediation:
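A common approach (a sketch; the namespace is a placeholder) is to disable token automounting on the default service account in each namespace and create dedicated service accounts for workloads that need API access:
kubectl patch serviceaccount default -n <namespace> -p '{"automountServiceAccountToken": false}'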
References:
1. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-
account/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Service account tokens should not be mounted in pods except where the workload
running in the pod explicitly needs to communicate with the API server
Rationale:
Mounting service account tokens inside pods can provide an avenue for privilege
escalation attacks where an attacker is able to compromise a single pod in the cluster.
Avoiding mounting these tokens removes this attack avenue.
Impact:
Pods that do not mount service account tokens will not be able to communicate with the
API server, except where the resource is available to unauthenticated principals.
Audit:
Review pod and service account objects in the cluster and ensure that the option below
is set, unless the resource explicitly requires this access.
automountServiceAccountToken: false
Check for automountServiceAccountToken using:
export SERVICE_ACCOUNT=<service account name>
kubectl get serviceaccounts/$SERVICE_ACCOUNT -o yaml
Remediation:
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
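For example, a pod can opt out of token mounting in its own spec (a sketch; the pod name and image are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-no-token
spec:
  automountServiceAccountToken: false   # do not mount a service account token
  containers:
  - name: app
    image: nginx
EOF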
Default Value:
References:
1. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-
account/
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
Pod Security Standards (PSS) are recommendations for securing deployed workloads
to reduce the risks of container breakout. There are a number of ways of implementing
PSS, including the built-in Pod Security Admission controller, or external policy control
systems which integrate with Kubernetes via validating and mutating webhooks.
The previously available Pod Security Policy feature was deprecated in Kubernetes
version 1.21 and removed as of version 1.25. Clusters still relying on the deprecated
feature must disable it in order to perform future cluster upgrades and stay within
Oracle support.
• Level 1
Description:
Do not generally permit containers to be run with the securityContext.privileged
flag set to true.
Rationale:
Privileged containers have access to all Linux Kernel capabilities and devices. A
container running with full privileges can do almost everything that the host can do. This
flag exists to allow special use-cases, like manipulating the network stack and
accessing devices.
There should be at least one admission control policy defined which does not permit
privileged containers.
If you need to run privileged containers, this should be defined in a separate policy and
you should carefully check to ensure that only limited service accounts and users are
given permission to use that policy.
Impact:
Pods defined with spec.containers[].securityContext.privileged: true,
spec.initContainers[].securityContext.privileged: true and
spec.ephemeralContainers[].securityContext.privileged: true will not be
permitted.
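Pods using these fields can be located in the same way as in the other checks in this section, for example (a sketch; only spec.containers is inspected here, so extend the filter to initContainers and ephemeralContainers as needed):
kubectl get pods --all-namespaces -o json | jq -r '.items[] | select(any(.spec.containers[]?; .securityContext.privileged == true)) | "\(.metadata.namespace)/\(.metadata.name)"'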
Default Value:
By default, there are no restrictions on the creation of privileged containers.
References:
1. https://kubernetes.io/docs/concepts/security/pod-security-admission/
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Do not generally permit containers to be run with the hostPID flag set to true.
Rationale:
A container running in the host's PID namespace can inspect processes running outside
the container. If the container also has access to ptrace capabilities this can be used to
escalate privileges outside of the container.
There should be at least one admission control policy defined which does not permit
containers to share the host PID namespace.
If you need to run containers which require hostPID, this should be defined in a
separate policy and you should carefully check to ensure that only limited service
accounts and users are given permission to use that policy.
Impact:
Pods defined with spec.hostPID: true will not be permitted unless they are run under
a specific policy.
Audit:
List the policies in use for each namespace in the cluster, ensure that each policy
disallows the admission of hostPID containers
Search for the hostPID flag: in the pod specification output, look for the hostPID setting
under the spec section to check if it is set to true.
kubectl get pods --all-namespaces -o json | jq -r '.items[] |
select(.spec.hostPID == true) |
"\(.metadata.namespace)/\(.metadata.name)"'
OR
kubectl get pods --all-namespaces -o json | jq '.items[] |
select(.metadata.namespace != "kube-system" and .spec.hostPID == true)
| {pod: .metadata.name, namespace: .metadata.namespace, container:
.spec.containers[].name}'
When creating a Pod Security Policy, ["kube-system"] namespaces are excluded by
default.
This command retrieves all pods across all namespaces in JSON format, then uses jq
to filter out those with the hostPID flag set to true, and finally formats the output to
show the namespace and name of each matching pod.
Default Value:
By default, there are no restrictions on the creation of hostPID containers.
References:
1. https://kubernetes.io/docs/concepts/security/pod-security-admission/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Do not generally permit containers to be run with the hostIPC flag set to true.
Rationale:
A container running in the host's IPC namespace can use IPC to interact with processes
outside the container.
There should be at least one admission control policy defined which does not permit
containers to share the host IPC namespace.
If you need to run containers which require hostIPC, this should be defined in a
separate policy and you should carefully check to ensure that only limited service
accounts and users are given permission to use that policy.
Impact:
Pods defined with spec.hostIPC: true will not be permitted unless they are run under
a specific policy.
Audit:
List the policies in use for each namespace in the cluster, ensure that each policy
disallows the admission of hostIPC containers
Search for the hostIPC flag: in the pod specification output, look for the hostIPC setting
under the spec section to check if it is set to true.
kubectl get pods --all-namespaces -o json | jq -r '.items[] |
select(.spec.hostIPC == true) |
"\(.metadata.namespace)/\(.metadata.name)"'
OR
kubectl get pods --all-namespaces -o json | jq '.items[] |
select(.metadata.namespace != "kube-system" and .spec.hostIPC == true)
| {pod: .metadata.name, namespace: .metadata.namespace, container:
.spec.containers[].name}'
When creating a Pod Security Policy, ["kube-system"] namespaces are excluded by
default.
This command retrieves all pods across all namespaces in JSON format, then uses jq
to filter out those with the hostIPC flag set to true, and finally formats the output to
show the namespace and name of each matching pod.
Default Value:
By default, there are no restrictions on the creation of hostIPC containers.
References:
1. https://kubernetes.io/docs/concepts/security/pod-security-admission/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
Techniques / Sub-techniques | Tactics | Mitigations
• Level 1
Description:
Do not generally permit containers to be run with the hostNetwork flag set to true.
Rationale:
A container running in the host's network namespace could access the local loopback
device, and could access network traffic to and from other pods.
There should be at least one admission control policy defined which does not permit
containers to share the host network namespace.
If you need to run containers which require access to the host's network namespaces,
this should be defined in a separate policy and you should carefully check to ensure that
only limited service accounts and users are given permission to use that policy.
Impact:
Pods defined with spec.hostNetwork: true will not be permitted unless they are run
under a specific policy.
Audit:
List the policies in use for each namespace in the cluster, ensure that each policy
disallows the admission of hostNetwork containers
Given that manually checking each pod can be time-consuming, especially in large
environments, you can use a more automated approach to filter out pods where
hostNetwork is set to true. Here’s a command using kubectl and jq:
kubectl get pods --all-namespaces -o json | jq -r '.items[] |
select(.spec.hostNetwork == true) |
"\(.metadata.namespace)/\(.metadata.name)"'
OR
kubectl get pods --all-namespaces -o json | jq '.items[] |
select(.metadata.namespace != "kube-system" and .spec.hostNetwork ==
true) | {pod: .metadata.name, namespace: .metadata.namespace,
container: .spec.containers[].name}'
When creating a Pod Security Policy, the "kube-system" namespace is excluded by default, which is why the second command filters it out.
This command retrieves all pods across all namespaces in JSON format, then uses jq to select those with the hostNetwork flag set to true, and finally formats the output to show the namespace and name of each matching pod.
Default Value:
By default, there are no restrictions on the creation of hostNetwork containers.
References:
1. https://kubernetes.io/docs/concepts/security/pod-security-admission/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
• Level 1
Description:
Do not generally permit containers to be run with the allowPrivilegeEscalation flag set to true. Allowing this right can lead to a process running in a container getting more rights than it started with.
It's important to note that these rights are still constrained by the overall container
sandbox, and this setting does not relate to the use of privileged containers.
Rationale:
A container running with the allowPrivilegeEscalation flag set to true may have
processes that can gain more privileges than their parent.
There should be at least one admission control policy defined which does not permit
containers to allow privilege escalation. The option exists (and is defaulted to true) to
permit setuid binaries to run.
If you have need to run containers which use setuid binaries or require privilege
escalation, this should be defined in a separate policy and you should carefully check to
ensure that only limited service accounts and users are given permission to use that
policy.
Impact:
Pods defined with spec.allowPrivilegeEscalation: true will not be permitted
unless they are run under a specific policy.
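The audit for this recommendation can follow the jq-based checks shown for the host namespace flags; an illustrative sketch is below. Note that it only finds containers that set the flag explicitly, while containers that omit the field still default to allowing privilege escalation:
kubectl get pods --all-namespaces -o json | jq -r '.items[] |
  select(any(.spec.containers[]; .securityContext.allowPrivilegeEscalation == true)) |
  "\(.metadata.namespace)/\(.metadata.name)"'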
Default Value:
By default, there are no restrictions on a contained process's ability to escalate privileges within the context of the container.
References:
1. https://kubernetes.io/docs/concepts/security/pod-security-admission/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
• Level 1
Description:
There are a variety of CNI plugins available for Kubernetes. If the CNI in use does not
support Network Policies it may not be possible to effectively restrict traffic in the
cluster.
Rationale:
Kubernetes network policies are enforced by the CNI plugin in use. As such it is
important to ensure that the CNI plugin supports both Ingress and Egress network
policies.
Impact:
None.
Audit:
Review the documentation of CNI plugin in use by the cluster, and confirm that it
supports network policies.
Check the DaemonSets in the kube-system namespace:
Many CNI plugins operate as DaemonSets within the kube-system namespace. To see
what's running:
kubectl get daemonset -n kube-system
Look for known CNI providers like Calico, Flannel, Cilium, etc.
You can further inspect the configuration of these DaemonSets to understand more
about the CNI setup:
kubectl describe daemonset <daemonset-name> -n kube-system
Check the CNI Configuration Files:
If you have access to the nodes (via SSH), you can check the CNI configuration directly
in /etc/cni/net.d/. This often requires node-level access, which might not be available
depending on your permissions and the security setup of your environment.
Remediation:
As with RBAC policies, network policies should adhere to the principle of least privilege. Start by creating a deny-all policy that restricts all inbound and outbound traffic in a namespace, or create a global policy using Calico.
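A minimal namespace-scoped default deny-all policy might look like the following (the namespace name is illustrative):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace    # illustrative namespace
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
  - Ingress
  - Egress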
Default Value:
This will depend on the CNI plugin in use.
References:
1. https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-
net/network-plugins/
2. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
Additional Information:
One example here is Flannel (https://github.com/coreos/flannel) which does not support
Network policy unless Calico is also in use.
• Level 1
Description:
Use network policies to isolate traffic in your cluster network.
Rationale:
Running different applications on the same Kubernetes cluster creates a risk of one
compromised application attacking a neighboring application. Network segmentation is
important to ensure that containers can communicate only with those they are supposed
to. A network policy is a specification of how selections of pods are allowed to
communicate with each other and other network endpoints.
Network Policies are namespace scoped. When a network policy is introduced to a
given namespace, all traffic not allowed by the policy is denied. However, if there are no
network policies in a namespace all traffic will be allowed into and out of the pods in that
namespace.
Impact:
Once network policies are in use within a given namespace, traffic not explicitly allowed
by a network policy will be denied. As such it is important to ensure that, when
introducing network policies, legitimate traffic is not blocked.
Audit:
Run the below command and review the NetworkPolicy objects created in the cluster.
kubectl get networkpolicy --all-namespaces
Ensure that each namespace defined in the cluster has at least one Network Policy.
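Where a cluster has many namespaces, a scripted sketch such as the following (assuming shell access with kubectl configured) can list namespaces that have no policy:
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  count=$(kubectl get networkpolicy -n "$ns" --no-headers 2>/dev/null | wc -l)
  if [ "$count" -eq 0 ]; then
    echo "No NetworkPolicy defined in namespace: $ns"
  fi
done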
References:
1. https://kubernetes.io/docs/concepts/services-networking/networkpolicies/
2. https://octetz.com/posts/k8s-network-policy-apis
3. https://kubernetes.io/docs/tasks/configure-pod-container/declare-network-policy/
• Level 1
Description:
Kubernetes supports mounting secrets as data volumes or as environment variables.
Minimize the use of environment variable secrets.
Rationale:
It is reasonably common for application code to log out its environment (particularly in
the event of an error). This will include any secret values passed in as environment
variables, so secrets can easily be exposed to any user or entity who has access to the
logs.
Impact:
Application code which expects to read secrets in the form of environment variables
would need modification
Audit:
Run the following command to find references to objects which use environment
variables defined from secrets.
kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind}
{.metadata.name} {"\n"}{end}' -A
Remediation:
If possible, rewrite application code to read secrets from mounted secret files, rather
than from environment variables.
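For illustration, a pod that consumes a secret as a mounted, read-only file rather than as environment variables might be defined as follows (the names and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret-volume
spec:
  containers:
  - name: app
    image: <your-image>
    volumeMounts:
    - name: app-credentials
      mountPath: /etc/app/credentials
      readOnly: true
  volumes:
  - name: app-credentials
    secret:
      secretName: app-credentials   # an existing Secret object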
Default Value:
By default, secrets are not defined
References:
1. https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
Additional Information:
Mounting secrets as volumes has the additional benefit that secret values can be
updated without restarting the pod
• Level 1
Description:
Consider the use of an external secrets storage and management system, instead of
using Kubernetes Secrets directly, if you have more complex secret management
needs. Ensure the solution requires authentication to access secrets, has auditing of
access to and use of secrets, and encrypts secrets. Some solutions also make it easier
to rotate secrets.
Rationale:
Kubernetes supports secrets as first-class objects, but care needs to be taken to ensure
that access to secrets is carefully limited. Using an external secrets provider can ease
the management of access to secrets, especially where secrets are used across both
Kubernetes and non-Kubernetes environments.
Impact:
None
Audit:
Review your secrets management implementation.
Remediation:
Refer to the secrets management options offered by your cloud provider or a third-party
secrets management solution.
The master nodes in a Kubernetes cluster store sensitive configuration data (such as
authentication tokens, passwords, and SSH keys) as Kubernetes secret objects in etcd.
Etcd is an open source distributed key-value store that Kubernetes uses for cluster
coordination and state management. In the Kubernetes clusters created by Container
Engine for Kubernetes, etcd writes and reads data to and from block storage volumes in
the Oracle Cloud Infrastructure Block Volume service. Although the data in block
storage volumes is encrypted, Kubernetes secrets at rest in etcd itself are not encrypted
by default.
For additional security, when you create a new cluster you can specify that Kubernetes
secrets at rest in etcd are to be encrypted using the Oracle Cloud Infrastructure Vault
service.
Default Value:
By default, no external secret management is configured.
References:
1. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
These policies relate to general cluster management topics, like namespace best
practices and policies applied to pod objects in the cluster.
• Level 1
Description:
Use namespaces to isolate your Kubernetes objects.
Rationale:
Limiting the scope of user permissions can reduce the impact of mistakes or malicious
activities. A Kubernetes namespace allows you to partition created resources into
logically named groups. Resources created in one namespace can be hidden from
other namespaces. By default, each resource created by a user in a Kubernetes cluster
runs in a default namespace, called default. You can create additional namespaces
and attach resources and users to them. You can use Kubernetes Authorization plugins
to create policies that segregate access to namespace resources between different
users.
Impact:
You need to switch between namespaces for administration.
Audit:
Run the below command and review the namespaces created in the cluster.
kubectl get namespaces
Ensure that these namespaces are the ones you need and are adequately administered
as per your requirements.
Remediation:
Follow the documentation and create namespaces for objects in your deployment as
you need them.
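For example (the namespace name is illustrative):
kubectl create namespace dev
# Optionally make it the default namespace for the current kubectl context
kubectl config set-context --current --namespace=dev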
Default Value:
By default, Kubernetes starts with two initial namespaces: default (for objects with no other namespace) and kube-system (for objects created by the Kubernetes system).
References:
1. https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
2. http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-
deployment.html
• Level 1
Description:
Apply Security Context to Your Pods and Containers
Rationale:
A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc.) applied to a container. When designing your containers and pods,
make sure that you configure the security context for your pods, containers, and
volumes. A security context is a property defined in the deployment yaml. It controls the
security parameters that will be assigned to the pod/container/volume. There are two
levels of security context: pod level security context, and container level security
context.
Impact:
If you incorrectly apply security contexts, you may have trouble running the pods.
Audit:
Review the pod definitions in your cluster and verify that you have security contexts
defined as appropriate.
Remediation:
As a best practice we recommend that you scope the binding for privileged pods to
service accounts within a particular namespace, e.g. kube-system, and limiting access
to that namespace. For all other serviceaccounts/namespaces, we recommend
implementing a more restrictive policy such as this:
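As an illustrative sketch of such a restrictive configuration (not the specific policy referenced above), pod-level and container-level security contexts might be set as follows:
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
spec:
  securityContext:              # pod-level security context
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: app
    image: <your-image>
    securityContext:            # container-level security context
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]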
References:
1. https://kubernetes.io/docs/concepts/policy/security-context/
2. https://learn.cisecurity.org/benchmarks
3. https://aws.github.io/aws-eks-best-practices/pods/#restrict-the-containers-that-
can-run-as-privileged
4. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
• Level 1
Description:
Kubernetes provides a default namespace, where objects are placed if no namespace
is specified for them. Placing objects in this namespace makes application of RBAC and
other controls more difficult.
Rationale:
Resources in a Kubernetes cluster should be segregated by namespace, to allow for
security controls to be applied at that level and to make it easier to manage resources.
Impact:
None
Audit:
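A minimal check, assuming that only the built-in kubernetes Service should remain in the default namespace, is:
kubectl get all -n default
# Anything beyond the built-in "service/kubernetes" object indicates workloads
# that should be moved to a dedicated namespace.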
Remediation:
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
Default Value:
Unless a namespace is specified on object creation, the default namespace will be used.
References:
1. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengoverview.htm
5 Managed services
This section consists of security recommendations for Oracle OKE. These recommendations are applicable to configurations that Oracle OKE customers own and manage.
• Level 1
Description:
Oracle regularly performs penetration and vulnerability testing and security
assessments against the Oracle Cloud infrastructure, platforms, and applications.
These tests are intended to validate and improve the overall security of Oracle Cloud
services.
Rationale:
Vulnerabilities in software packages can be exploited by hackers or malicious users to
obtain unauthorized access to local cloud resources. Oracle Cloud Container Analysis
and other third party products allow images stored in Oracle Cloud to be scanned for
known vulnerabilities.
Impact:
None.
Audit:
As a service administrator, you can run tests for some Oracle Cloud services. Before
running the tests, you must first review the Oracle Cloud Testing Policies section.
Follow the steps below to notify Oracle of a penetration and vulnerability test.
Remediation:
As a service administrator, you can run tests for some Oracle Cloud services. Before
running the tests, you must first review the Oracle Cloud Testing Policies section.
Note:
You must have an Oracle Account with the necessary privileges to file service
maintenance requests, and you must be signed in to the environment that will be the
subject of the penetration and vulnerability testing.
Submitting a Cloud Security Testing Notification
References:
1. https://docs.cloud.oracle.com/en-
us/iaas/Content/Security/Concepts/security_testing-policy.htm
• Level 1
Description:
Restrict user access to OKE, limiting interaction with build images to only authorized
personnel and service accounts.
Rationale:
Weak access control to OKE may allow malicious users to replace built images with
vulnerable or backdoored containers.
Impact:
Care should be taken not to remove access to Oracle Cloud Infrastructure Registry
(OCR) for accounts that require this for their operation. Any account granted the
Storage Object Viewer role at the project level can view all objects stored in OCS for the
project.
Audit:
For most operations on Kubernetes clusters created and managed by Container Engine
for Kubernetes, Oracle Cloud Infrastructure Identity and Access Management (IAM)
provides access control. A user's permissions to access clusters comes from the groups
to which they belong. The permissions for a group are defined by policies. Policies
define what actions members of a group can perform, and in which compartments.
Users can then access clusters and perform operations based on the policies set for the
groups they are members of.
IAM provides control over which groups can perform which actions on clusters and registry repositories, and in which compartments.
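As an illustrative sketch (the group and compartment names are assumptions), policies granting read-only registry access to most users while reserving management rights for a dedicated group might look like:
Allow group Developers to read repos in compartment AppCompartment
Allow group RegistryAdmins to manage repos in compartment AppCompartment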
• Level 1
Description:
Configure the Cluster Service Account to only allow read-only access to OKE.
Rationale:
The Cluster Service Account does not require administrative access to OCR, only
requiring pull access to containers to deploy onto OKE. Restricting permissions follows
the principles of least privilege and prevents credentials from being abused beyond the
required role.
Impact:
A separate dedicated service account may be required for use by build servers and
other robot users pushing or managing container images.
Audit:
Review the Oracle OKE worker node IAM policy permissions to verify that they are set to the minimum required level.
If utilizing a third-party tool to scan images, use the minimum permission level required to interact with the cluster; generally this should be read-only.
Remediation:
To access a cluster using kubectl, you have to set up a Kubernetes configuration file
(commonly known as a 'kubeconfig' file) for the cluster. The kubeconfig file (by default
named config and stored in the $HOME/.kube directory) provides the necessary details
to access the cluster. Having set up the kubeconfig file, you can start using kubectl to
manage the cluster.
The steps to follow when setting up the kubeconfig file depend on how you want to
access the cluster:
• To access the cluster using kubectl in Cloud Shell, run an Oracle Cloud
Infrastructure CLI command in the Cloud Shell window to set up the kubeconfig
file.
• To access the cluster using a local installation of kubectl:
1. Generate an API signing key pair (if you don't already have one).
2. Upload the public key of the API signing key pair.
3. Install and configure the Oracle Cloud Infrastructure CLI.
4. Set up the kubeconfig file.
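For example, step 4 can be performed with the OCI CLI; the OCIDs and region below are placeholders, and the exact options should be verified against the current CLI documentation:
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..<unique_ID> \
  --file $HOME/.kube/config \
  --region <region> \
  --token-version 2.0.0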
• Level 1
Description:
Use approved container registries.
Rationale:
Allowing unrestricted access to external container registries provides the opportunity for
malicious or unapproved containers to be deployed into the cluster. Allow listing only
approved container registries reduces this risk.
Impact:
All container images to be deployed to the cluster must be hosted within an approved
container image registry.
Audit:
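One way to audit this is to list every image currently running and compare it against the approved registry list; the registry hostname shown in the comment is an assumption:
kubectl get pods --all-namespaces \
  -o jsonpath="{.items[*].spec.containers[*].image}" | tr ' ' '\n' | sort -u
# Flag any image not served from an approved registry, e.g. <region-key>.ocir.io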
Remediation:
This section contains recommendations relating to using Oracle Cloud IAM with OKE.
• Level 1
Description:
Kubernetes workloads should not use cluster node service accounts to authenticate to
Oracle Cloud APIs. Each Kubernetes Workload that needs to authenticate to other
Oracle services using Cloud IAM should be provisioned a dedicated Service account.
Rationale:
Manual approaches for authenticating Kubernetes workloads running on OKE against Oracle Cloud APIs include storing service account keys as a Kubernetes secret (which introduces manual key rotation and the potential for key compromise), or using the underlying node's IAM service account. The latter violates the principle of least privilege on a multi-tenant node: one pod may need access to a service, but every other pod on the node sharing that service account does not.
Audit:
For each namespace in the cluster, review the rights assigned to the default service
account and ensure that it has no roles or cluster roles bound to it apart from the
defaults.
Additionally ensure that the automountServiceAccountToken: false setting is in place for
each default service account.
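For example, automatic token mounting can be disabled on a namespace's default service account (the namespace name is a placeholder):
kubectl patch serviceaccount default -n <namespace> \
  -p '{"automountServiceAccountToken": false}'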
Remediation:
When you create a pod, if you do not specify a service account, it is automatically
assigned the default service account in the same namespace. If you get the raw json or
yaml for a pod you have created (for example, kubectl get pods/<podname> -o
yaml), you can see the spec.serviceAccountName field has been automatically set.
See Configure Service Accounts for Pods
This section contains recommendations relating to using Cloud KMS with OKE.
• Level 1
Description:
Encrypt Kubernetes secrets, stored in etcd, at the application-layer using a customer-
managed key.
Rationale:
The master nodes in a Kubernetes cluster store sensitive configuration data (such as
authentication tokens, passwords, and SSH keys) as Kubernetes secret objects in etcd.
Etcd is an open source distributed key-value store that Kubernetes uses for cluster
coordination and state management. In the Kubernetes clusters created by Container
Engine for Kubernetes, etcd writes and reads data to and from block storage volumes in
the Oracle Cloud Infrastructure Block Volume service. Although the data in block
storage volumes is encrypted, Kubernetes secrets at rest in etcd itself are not encrypted
by default.
Audit:
Before you can create a cluster where Kubernetes secrets are encrypted in the etcd
key-value store, you have to:
• know the name and OCID of a suitable master encryption key in Vault
• create a dynamic group that includes all clusters in the compartment in which you
are going to create the new cluster
• create a policy authorizing the dynamic group to use the master encryption key
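A sketch of the dynamic group matching rule and the authorizing policy follows; the names and OCIDs are placeholders, and the syntax should be verified against the Vault and Container Engine documentation:
Dynamic group matching rule (all clusters in the compartment):
ALL {resource.type = 'cluster', resource.compartment.id = 'ocid1.compartment.oc1..<unique_ID>'}
Policy authorizing the dynamic group to use the key:
Allow dynamic-group <dynamic-group-name> to use keys in compartment <compartment-name> where target.key.id = 'ocid1.key.oc1..<unique_ID>'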
Remediation:
You can create a cluster in one tenancy that uses a master encryption key in a different
tenancy. In this case, you have to write cross-tenancy policies to enable the cluster in its
tenancy to access the master encryption key in the Vault service's tenancy. Note that if
you want to create a cluster and specify a master encryption key that's in a different
tenancy, you cannot use the Console to create the cluster.
For example, assume the cluster is in the ClusterTenancy, and the master encryption
key is in the KeyTenancy. Users belonging to a group (OKEAdminGroup) in the
ClusterTenancy have permissions to create clusters. A dynamic group
(OKEAdminDynGroup) has been created in the cluster, with the rule ALL
{resource.type = 'cluster', resource.compartment.id =
'ocid1.compartment.oc1..<unique_ID>'}, so all clusters created in the
ClusterTenancy belong to the dynamic group.
In the root compartment of the KeyTenancy, the following policies:
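A heavily simplified sketch of such statements is shown below; the names, OCIDs, and exact verbs are assumptions that should be verified against the OCI cross-tenancy policy documentation, and a corresponding Endorse statement is also required in the ClusterTenancy:
Define tenancy ClusterTenancy as ocid1.tenancy.oc1..<unique_ID>
Define dynamic-group OKEAdminDynGroup as ocid1.dynamicgroup.oc1..<unique_ID>
Admit dynamic-group OKEAdminDynGroup of tenancy ClusterTenancy to use keys in tenancy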
See Accessing Object Storage Resources Across Tenancies for more examples of
writing cross-tenancy policies.
Having entered the policies, you can now run a command similar to the following to
create a cluster in the ClusterTenancy that uses the master key obtained from the
KeyTenancy:
oci ce cluster create --name oke-with-cross-kms --kubernetes-version v1.16.8 \
  --vcn-id ocid1.vcn.oc1.iad.<unique_ID> \
  --service-lb-subnet-ids '["ocid1.subnet.oc1.iad.<unique_ID>"]' \
  --compartment-id ocid1.compartment.oc1..<unique_ID> \
  --kms-key-id ocid1.key.oc1.iad.<unique_ID>
References:
1. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Tasks/contengencryptingdata.htm
• Level 1
Description:
Enable Master Authorized Networks to restrict access to the cluster's control plane
(master endpoint) to only an allowlist (whitelist) of authorized IPs.
Rationale:
Authorized networks are a way of specifying a restricted range of IP addresses that are
permitted to access your cluster's control plane. Kubernetes Engine uses both
Transport Layer Security (TLS) and authentication to provide secure access to your
cluster's control plane from the public internet. This provides you the flexibility to
administer your cluster from anywhere; however, you might want to further restrict
access to a set of IP addresses that you control. You can set this restriction by
specifying an authorized network.
Restricting access to an authorized network can provide additional security benefits for your container cluster.
Impact:
When implementing Master Authorized Networks, be careful to ensure all desired
networks are on the allowlist (whitelist) to prevent inadvertently blocking external access
to your cluster's control plane.
Audit:
Check for the following:
export OKE_CLUSTERID=<oke cluster id>
oci ce cluster get --cluster-id $OKE_CLUSTERID
Check for the endpoint URL of the Kubernetes API server in the output.
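For example, the endpoint configuration can be narrowed with a JMESPath query; the field names are based on current CLI output and may vary by version:
oci ce cluster get --cluster-id $OKE_CLUSTERID --query 'data.endpoints'
oci ce cluster get --cluster-id $OKE_CLUSTERID --query 'data."endpoint-config"'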
• Level 1
Description:
Disable access to the Kubernetes API from outside the node network if it is not required.
Rationale:
In a private cluster, the master node has two endpoints, a private and a public endpoint. The private endpoint is the internal IP address of the master, behind an internal load balancer in the master's VCN. Nodes communicate with the master using the private endpoint. The public endpoint enables the Kubernetes API to be accessed from outside the master's VCN.
Although the Kubernetes API requires an authorized token to perform sensitive actions, a vulnerability could potentially expose the Kubernetes API publicly with unrestricted access. Additionally, an attacker may be able to identify the current cluster and Kubernetes API version and determine whether it is vulnerable to attack. Unless required, disabling the public endpoint will help prevent such threats, and requires the attacker to be on the master's VCN to perform any attack on the Kubernetes API.
Impact:
This topic gives an overview of the options for enabling private access to services within
Oracle Cloud Infrastructure. Private access means that traffic does not go over the
internet. Access can be from hosts within your virtual cloud network (VCN) or your on-
premises network.
• You can enable private access to certain services within Oracle Cloud
Infrastructure from your VCN or on-premises network by using either a private
endpoint or a service gateway. See the sections that follow.
• For each private access option, these services or resource types are available:
o With a private endpoint: Autonomous Database (shared Exadata
infrastructure)
o With a service gateway: Available services
• With either private access option, the traffic stays within the Oracle Cloud
Infrastructure network and does not traverse the internet. However, if you use a
service gateway, requests to the service use a public endpoint for the service.
• If you do not want to access a given Oracle service through a public endpoint,
Oracle recommends using a private endpoint in your VCN (assuming the service
supports private endpoints). A private endpoint is represented as a private IP
address within a subnet in your VCN.
Remediation:
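As a sketch (the command and options should be verified against the current OCI CLI documentation for your cluster configuration), the public IP on an existing cluster's Kubernetes API endpoint might be disabled with:
oci ce cluster update-endpoint-config \
  --cluster-id ocid1.cluster.oc1..<unique_ID> \
  --is-public-ip-enabled false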
Default Value:
• Level 1
Description:
Disable public IP addresses for cluster nodes, so that they only have private IP
addresses. Private Nodes are nodes with no public IP addresses.
Rationale:
Disabling public IP addresses on cluster nodes restricts access to only internal
networks, forcing attackers to obtain local network access before attempting to
compromise the underlying Kubernetes hosts.
Impact:
To enable Private Nodes, the cluster has to also be configured with a private master IP
range and IP Aliasing enabled.
Private Nodes do not have outbound access to the public internet. If you want to provide
outbound Internet access for your private nodes, you can use Cloud NAT or you can
manage your own NAT gateway.
Audit:
Check for the following:
export OKE_NODEPOOLID=<node pool id>
export OKE_NODEID=<node id>
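For example, the node pool configuration and node addresses can then be reviewed with the commands below (a sketch; confirm the subnet and IP details in the output):
oci ce node-pool get --node-pool-id $OKE_NODEPOOLID
# Review the placement/subnet configuration and confirm the worker subnets are private.
kubectl get nodes -o wide
# The EXTERNAL-IP column should show <none> for private nodes.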
Remediation:
Default Value:
By default, Private Nodes are disabled.
CIS Controls:
v7: 12 Boundary Defense
• Level 1
Description:
Use Network Policy to restrict pod to pod traffic within a cluster and segregate
workloads.
Rationale:
By default, all pod to pod traffic within a cluster is allowed. Network Policy creates a
pod-level firewall that can be used to restrict traffic between sources. Pod traffic is
restricted by having a Network Policy that selects it (through the use of labels). Once
there is any Network Policy in a namespace selecting a particular pod, that pod will
reject any connections that are not allowed by any Network Policy. Other pods in the
namespace that are not selected by any Network Policy will continue to accept all traffic.
Network Policies are managed via the Kubernetes Network Policy API and enforced by a network plugin; simply creating the resource without a compatible network plugin to implement it will have no effect. OKE supports Network Policy enforcement through the use of Calico.
Impact:
Network Policy requires the Network Policy add-on. This add-on is included
automatically when a cluster with Network Policy is created, but for an existing cluster,
needs to be added prior to enabling Network Policy.
Enabling/Disabling Network Policy causes a rolling update of all cluster nodes, similar to
performing a cluster upgrade. This operation is long-running and will block other
operations on the cluster (including delete) until it has run to completion.
If Network Policy is used, a cluster must have at least 2 nodes of type n1-standard-1
or higher. The recommended minimum size cluster to run Network Policy enforcement
is 3 n1-standard-1 instances.
Enabling Network Policy enforcement consumes additional resources in nodes.
Specifically, it increases the memory footprint of the kube-system process by
approximately 128MB, and requires approximately 300 millicores of CPU.
Audit:
Check for the following:
Check that the Network Policy setting is enabled and set correctly for the cluster:
"kubernetesNetworkConfig": {
"podsCidr": "10.244.0.0/16",
"servicesCidr": "10.96.0.0/12",
"networkPolicyConfig": {
"isEnabled": true
}
}
Remediation:
Configure Network Policy for the Cluster
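After installing a policy-capable plugin such as Calico, its components can be verified with the commands below; the resource names and namespace assume the standard Calico manifests and may differ for operator-based installs:
kubectl get daemonset calico-node -n kube-system
kubectl get pods -n kube-system -l k8s-app=calico-node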
Default Value:
• Level 1
Description:
Encrypt traffic to HTTPS load balancers using TLS certificates.
Rationale:
Encrypting traffic between users and your Kubernetes workload is fundamental to
protecting data sent over the web.
Audit:
Your load balancer vendor can provide details on auditing the certificates and policies
required to utilize TLS.
Remediation:
Your load balancer vendor can provide details on configuring HTTPS with TLS.
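On OKE, for example, TLS termination on the provisioned OCI load balancer can be requested with Service annotations similar to the following; the annotation names and the TLS secret are assumptions to verify against the OCI load balancer documentation:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: my-tls-secret
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: https
    port: 443
    targetPort: 8080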
• Level 1
Description:
Cluster Administrators should leverage Oracle Cloud IAM groups to assign Kubernetes user roles to a collection of users, instead of binding roles to individual users with Cloud IAM alone.
Rationale:
For most operations on Kubernetes clusters created and managed by Container Engine
for Kubernetes, Oracle Cloud Infrastructure Identity and Access Management (IAM)
provides access control. A user's permissions to access clusters comes from the groups
to which they belong. The permissions for a group are defined by policies. Policies
define what actions members of a group can perform, and in which compartments.
Users can then access clusters and perform operations based on the policies set for the
groups they are members of.
IAM provides control over:
Audit:
By default, users are not assigned any Kubernetes RBAC roles (or clusterroles). So before attempting to create a new role (or clusterrole), you must be assigned
an appropriately privileged role (or clusterrole). A number of such roles and clusterroles
are always created by default, including the cluster-admin clusterrole (for a full list, see
Default Roles and Role Bindings in the Kubernetes documentation). The cluster-admin
clusterrole essentially confers super-user privileges. A user granted the cluster-admin
clusterrole can perform any operation across all namespaces in a given cluster.
Page 137
1. If you haven't already done so, follow the steps to set up the cluster's kubeconfig
configuration file and (if necessary) set the KUBECONFIG environment variable
to point to the file. Note that you must set up your own kubeconfig file. You
cannot access a cluster using a kubeconfig file that a different user set up. See
Setting Up Cluster Access.
2. In a terminal window, grant the Kubernetes RBAC cluster-admin clusterrole to
the user by entering:
kubectl create clusterrolebinding <binding-name> --clusterrole=cluster-admin --user=<user_OCID>
where:
• <binding-name> is a string of your choice to be used as the name for the binding between the user and the Kubernetes RBAC cluster-admin clusterrole. For example, jdoe_clst_adm
• <user_OCID> is the user's OCID (obtained from the Console ). For example,
ocid1.user.oc1..aaaaa...zutq (abbreviated for readability).
For example:
$ kubectl create clusterrolebinding jdoe_clst_adm --clusterrole=cluster-admin
--user=ocid1.user.oc1..aaaaa...zutq
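To follow the group-based approach described above, the same command can bind a clusterrole to an IAM group OCID instead of an individual user; the clusterrole and OCID below are placeholders:
$ kubectl create clusterrolebinding acme-devops-view --clusterrole=view \
  --group=ocid1.group.oc1..<unique_ID>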
References:
1. https://docs.cloud.oracle.com/en-
us/iaas/Content/ContEng/Concepts/contengaboutaccesscontrol.htm
Date        Version   Changes
9/23/2023   1.4.0     Updated and tested all AAC automated content against the latest cluster version/s
5/15/2023   1.3.0     Support and AAC for Kubernetes Engine versions 1.2.4, 1.2.5 & 1.2.6