CIS Google Kubernetes Engine (GKE) Benchmark V1.7.0
For information on referencing and/or citing CIS Benchmarks in 3rd party documentation
(including using portions of Benchmark Recommendations) please contact CIS Legal
([email protected]) and request guidance on copyright usage.
NOTE: It is NEVER acceptable to host a CIS Benchmark in ANY format (PDF, etc.)
on a 3rd party (non-CIS owned) site.
These tools make the hardening process much more scalable for large numbers of
systems and applications.
NOTE: Some tooling focuses only on the Benchmark Recommendations that can
be fully automated (skipping ones marked Manual). It is important that ALL
Recommendations (Automated and Manual) be addressed since all are
important for properly securing systems and are typically in scope for
audits.
Key Stakeholders
Cybersecurity is a collaborative effort, and cross-functional cooperation is imperative
within an organization to discuss, test, and deploy Benchmarks in an effective and
efficient way. The Benchmarks are developed to be best practice configuration
guidelines applicable to a wide range of use cases. In some organizations, exceptions
to specific Recommendations will be needed, and this team should work to prioritize the
problematic Recommendations based on several factors like risk, time, cost, and labor.
These exceptions should be properly categorized and documented for auditing
purposes.
• Use the most recent version of a Benchmark: This is true for all Benchmarks,
but especially true for cloud technologies. Cloud technologies change frequently
and using an older version of a Benchmark may have invalid methods for
auditing and remediation.
Exceptions
The guidance items in the Benchmarks are called recommendations and not
requirements, and exceptions to some of them are expected and acceptable. The
Benchmarks strive to be a secure baseline, or starting point, for a specific technology,
with known issues identified during Benchmark development documented in the
Impact section of each Recommendation. In addition, organizational or system-specific
requirements, or local site policy, may require changes, or an exception to a
Recommendation or group of Recommendations (e.g., a Benchmark could recommend
that a web server not be installed on the system, but if a system's primary purpose is to
function as a web server, there should be a documented exception to this
Recommendation for that specific server).
It is the responsibility of the organization to determine their overall security policy, and
which settings are applicable to their unique needs based on the overall risk profile for
the organization.
NOTE: As previously stated, the PDF versions of the CIS Benchmarks™ are
available for free, non-commercial use on the CIS Website. All other formats
of the CIS Benchmarks™ (MS Word, Excel, and Build Kits) are available for
CIS SecureSuite® members.
Intended Audience
This document is intended for cluster administrators, security specialists, auditors, and
any personnel who plan to develop, deploy, assess, or secure solutions that incorporate
Google Kubernetes Engine (GKE).
Title
Concise description for the recommendation's intended configuration.
Assessment Status
An assessment status is included for every recommendation. The assessment status
indicates whether the given recommendation can be automated or requires manual
steps to implement. Both statuses are equally important and are determined and
supported as defined below:
Automated
Represents recommendations for which assessment of a technical control can be fully
automated and validated to a pass/fail state. Recommendations will include the
necessary information to implement automation.
Manual
Represents recommendations for which assessment of a technical control cannot be
fully automated and requires all or some manual steps to validate that the configured
state is set as expected. The expected state can vary depending on the environment.
Profile
A collection of recommendations for securing a technology or a supporting platform.
Most benchmarks include at least a Level 1 and Level 2 Profile. Level 2 extends Level 1
recommendations and is not a standalone profile. The Profile Definitions section in the
benchmark provides the definitions as they pertain to the recommendations included for
the technology.
Description
Detailed information pertaining to the setting with which the recommendation is
concerned. In some cases, the description will include the recommended value.
Rationale Statement
Detailed reasoning for the recommendation to provide the user a clear and concise
understanding on the importance of the recommendation.
Audit Procedure
Systematic instructions for determining if the target system complies with the
recommendation.
Remediation Procedure
Systematic instructions for applying recommendations to the target system to bring it
into compliance according to the recommendation.
Default Value
Default value for the given setting in this recommendation, if known. If not known, either
not configured or not defined will be applied.
References
Additional documentation relative to the recommendation.
Additional Information
Supplementary information that does not correspond to any other field but may be
useful to the user.
• Level 1
• Level 2
With Special Thanks to the Google team of: Poonam Lamba, Michele Chubirka,
Shannon Kularathana, Vinayak Goyal, Andrew Peabody and Padma Padmalatha.
Author/s
Andrew Martin
Rowan Baker
Kevin Ward
Editor/s
Randall Mowen
Poonam Lamba
Michele Chubirka
Shannon Kularathana
Vinayak Goyal
Contributor/s
Rory Mccune
Jordan Liggitt
Liz Rice
Maya Kaczorowski
Mark Wolters
Iulia Ion
Andrew Kiggins
Greg Castle
Mark Larinde
Andrew Thompson
Gareth Boyes
Rachel Rice
3 Worker Nodes
This section consists of security recommendations for the components that run on GKE
worker nodes.
This section covers recommendations for configuration files on the worker nodes.
• Level 1
Description:
If kube-proxy is running, and if it is configured by a kubeconfig file, ensure that the
proxy kubeconfig file has permissions of 644 or more restrictive.
Rationale:
The kube-proxy kubeconfig file controls various parameters of the kube-proxy service
on the worker node. You should restrict its file permissions to maintain the integrity of
the file. The file should be writable only by the administrators on the system.
Impact:
Overly permissive file permissions increase security risk to the platform.
Audit:
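One way to inspect node files without SSH is a short-lived inspection pod that mounts the node's root filesystem at /host. The manifest below is a minimal sketch (the pod name file-check matches the exec commands that follow; the busybox image and the nodeName placeholder are illustrative assumptions):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: file-check
spec:
  nodeName: <node-to-audit>   # illustrative placeholder: pin the pod to the node being audited
  containers:
  - name: file-check
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-root
      mountPath: /host
      readOnly: true
  volumes:
  - name: host-root
    hostPath:
      path: /
EOF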
Once the pod is running, you can exec into it to check file permissions on the node:
kubectl exec -it file-check -- sh
Now you are in a shell inside the pod, but you can access the node's file system through
the /host directory and check the permission level of the file:
ls -l /host/var/lib/kubelet/kubeconfig
Verify that if a file is specified and it exists, the permissions are 644 or more restrictive.
Remediation:
Run the below command (based on the file location on your system) on each worker
node. For example,
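Assuming the kubeconfig path identified in the audit step, a minimal sketch is:
chmod 644 /var/lib/kubelet/kubeconfig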
Default Value:
References:
1. https://kubernetes.io/docs/admin/kube-proxy/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
CIS Controls:
• Level 1
Description:
If kube-proxy is running, ensure that the file ownership of its kubeconfig file is set to
root:root.
Rationale:
The kubeconfig file for kube-proxy controls various parameters for the kube-proxy
service in the worker node. You should set its file ownership to maintain the integrity of
the file. The file should be owned by root:root.
Impact:
Overly permissive file access increases the security risk to the platform.
Audit:
Using Google Cloud Console
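The kubeconfig location can be read from the kube-proxy process arguments; as a sketch (run on the node, or from an inspection pod that shares the host PID namespace, which is an assumption here):
ps -ef | grep kube-proxy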
The output of the above command should return something similar to --kubeconfig
/var/lib/kubelet/kubeconfig which is the location of the kubeconfig file.
To check the kubeconfig file ownership, use a file-check inspection pod as described in the previous recommendation.
Once the pod is running, you can exec into it to check file ownership on the node:
kubectl exec -it file-check -- sh
Now you are in a shell inside the pod, but you can access the node's file system through
the /host directory and check the ownership of the file:
ls -l /host/var/lib/kubelet/kubeconfig
The output of the above command gives you the kubeconfig file's ownership. Verify that
the ownership is set to root:root.
Remediation:
Run the below command (based on the file location on your system) on each worker
node. For example,
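Assuming the kubeconfig path identified in the audit step, a minimal sketch is:
chown root:root /var/lib/kubelet/kubeconfig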
Default Value:
References:
1. https://kubernetes.io/docs/admin/kube-proxy/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
CIS Controls:
• Level 1
Description:
Ensure that if the kubelet configuration file exists, it has permissions of 644 or more restrictive.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file exists, you should restrict its file
permissions to maintain the integrity of the file. The file should be writable by only the
administrators on the system.
Impact:
Overly permissive file access increases the security risk to the platform.
Audit:
Using Google Cloud Console
Once the pod is running, you can exec into it to check file permissions on the node:
kubectl exec -it file-check -- sh
Now you are in a shell inside the pod, but you can access the node's file system through
the /host directory and check the permission level of the file:
ls -l /host/etc/kubernetes/kubelet-config.yaml
Verify that if a file is specified and it exists, the permissions are 644 or more restrictive.
Remediation:
Run the following command (using the kubelet config file location):
chmod 644 <kubelet_config_file>
Default Value:
The default permissions for the kubelet configuration file are 600.
References:
1. https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
CIS Controls:
• Level 1
Description:
Ensure that if the kubelet configuration file exists, it is owned by root:root.
Rationale:
The kubelet reads various parameters, including security settings, from a config file
specified by the --config argument. If this file is specified you should restrict its file
permissions to maintain the integrity of the file. The file should be owned by root:root.
Impact:
Overly permissive file access increases the security risk to the platform.
Audit:
Using Google Cloud Console
Once the pod is running, you can exec into it to check file ownership on the node:
kubectl exec -it file-check -- sh
Now you are in a shell inside the pod, but you can access the node's file system through
the /host directory and check the ownership of the file:
ls -l /host/etc/kubernetes/kubelet/kubelet-config.yaml
The output of the above command gives you the file's ownership. Verify that the
ownership is set to root:root.
Remediation:
Run the following command (using the config file location identified in the Audit step):
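A minimal sketch, using a placeholder in the same style as the permissions remediation earlier in this section:
chown root:root <kubelet_config_file>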
Default Value:
References:
1. https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cis-benchmarks
CIS Controls:
Kubelets can accept configuration via a configuration file and in some cases via
command line arguments. It is important to note that parameters provided as command
line arguments will override their counterpart parameters in the configuration file (see --
config details in the Kubelet CLI Reference for more info, where you can also find out
which configuration parameters can be supplied as a command line argument).
With this in mind, it is important to check for the existence of command line arguments
as well as configuration file entries when auditing Kubelet configuration.
Firstly, SSH to each node and execute the following command to find the Kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active Kubelet process, from
which we can see the command line arguments provided to the process. Also note the
location of the configuration file, provided with the --config argument, as this will be
needed to verify configuration. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
This config file could be in JSON or YAML format depending on your distribution.
• Level 1
Description:
Disable anonymous requests to the Kubelet server.
Rationale:
When enabled, requests that are not rejected by other configured authentication
methods are treated as anonymous requests. These requests are then served by the
Kubelet server. You should rely on authentication to authorize access and disallow
anonymous requests.
Impact:
Anonymous requests will be rejected.
Audit:
Audit Method 1:
Kubelets can accept configuration via a configuration file and in some cases via
command line arguments. It is important to note that parameters provided as command
line arguments will override their counterpart parameters in the configuration file (see --
config details in the Kubelet CLI Reference for more info, where you can also find out
which configuration parameters can be supplied as a command line argument).
With this in mind, it is important to check for the existence of command line arguments
as well as configuration file entries when auditing Kubelet configuration.
Firstly, SSH to each node and execute the following command to find the Kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active Kubelet process, from
which we can see the command line arguments provided to the process. Also note the
location of the configuration file, provided with the --config argument, as this will be
needed to verify configuration. The file can be viewed with a command such as more or
less, like so:
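A minimal sketch of the remaining steps, assuming the config-file placeholder path and the local proxy port used elsewhere in this section:
sudo less /path/to/kubelet-config.json
Verify that anonymous authentication is disabled, i.e. that authentication.anonymous.enabled is set to false.
Audit Method 2:
If using the API configz endpoint, extract the live configuration from the nodes running kubelet. With kubectl proxy running locally, query each node's configz endpoint:
kubectl proxy --port=8080 &
export NODE_NAME=my-node-name   # substitute a node name from "kubectl get nodes"
curl -sSL "http://localhost:8080/api/v1/nodes/${NODE_NAME}/proxy/configz"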
The curl command will return the API response which will be a JSON formatted string
representing the Kubelet configuration.
Verify that Anonymous Authentication is not enabled by checking that "authentication":
{ "anonymous": { "enabled": false } } is in the API response.
Remediation:
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
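Disable anonymous authentication by setting the following parameter (a sketch in the same JSON form used by the other kubelet remediations in this section):
"authentication": { "anonymous": { "enabled": false } }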
Default Value:
See the GKE documentation for the default value.
References:
1. https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
2. https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication
3. https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
CIS Controls:
• Level 1
Description:
Do not allow all requests. Enable explicit authorization.
Rationale:
Kubelets can be configured to allow all authenticated requests (even anonymous ones)
without needing explicit authorization checks from the apiserver. You should restrict this
behavior and only allow explicitly authorized requests.
Impact:
Audit:
The curl command will return the API response which will be a JSON formatted string
representing the Kubelet configuration.
Verify that Webhook Authentication is enabled with "authentication": {
"webhook": { "enabled": true } } in the API response.
Verify that the Authorization Mode is set to Webhook with "authorization": {
"mode": "Webhook" } in the API response.
Remediation:
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Enable Webhook Authentication by setting the following parameter:
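A sketch in the same JSON form used by the other kubelet remediations in this section:
"authentication": { "webhook": { "enabled": true } }
Next, set the authorization mode to Webhook:
"authorization": { "mode": "Webhook" }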
Default Value:
See the GKE documentation for the default value.
References:
1. https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
2. https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication
3. https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
• Level 1
Description:
Enable Kubelet authentication using certificates.
Rationale:
The connections from the apiserver to the kubelet are used for fetching logs for pods,
attaching (through kubectl) to running pods, and using the kubelet’s port-forwarding
functionality. These connections terminate at the kubelet’s HTTPS endpoint. By default,
the apiserver does not verify the kubelet’s serving certificate, which makes the
connection subject to man-in-the-middle attacks, and unsafe to run over untrusted
and/or public networks. Enabling Kubelet certificate authentication ensures that the
apiserver can authenticate the Kubelet before submitting any requests.
Impact:
TLS needs to be configured on the apiserver as well as on the kubelets.
Audit:
Audit Method 1:
Kubelets can accept configuration via a configuration file and in some cases via
command line arguments. It is important to note that parameters provided as command
line arguments will override their counterpart parameters in the configuration file (see --
config details in the Kubelet CLI Reference for more info, where you can also find out
which configuration parameters can be supplied as a command line argument).
With this in mind, it is important to check for the existence of command line arguments
as well as configuration file entries when auditing Kubelet configuration.
Firstly, SSH to each node and execute the following command to find the Kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active Kubelet process, from
which we can see the command line arguments provided to the process. Also note the
location of the configuration file, provided with the --config argument, as this will be
needed to verify configuration. The file can be viewed with a command such as more or
less, like so:
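A minimal sketch of the remaining audit steps, assuming the config-file placeholder path used elsewhere in this section:
sudo less /path/to/kubelet-config.json
Verify that authentication.x509.clientCAFile points to the client certificate authority file.
Audit Method 2:
If using the API configz endpoint, start a local proxy to the API server:
kubectl proxy --port=8080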
With this running, in a separate terminal run the following command for each node:
export NODE_NAME=my-node-name
curl http://localhost:8080/api/v1/nodes/${NODE_NAME}/proxy/configz
The curl command will return the API response which will be a JSON formatted string
representing the Kubelet configuration.
Verify that a client certificate authority file is configured with "authentication": {
"x509": {"clientCAFile": <path/to/client-ca-file> } } in the API response.
Remediation:
Remediation Method 1:
If configuring via the Kubelet config file, you first need to locate the file.
To do this, SSH to each node and execute the following command to find the kubelet
process:
ps -ef | grep kubelet
The output of the above command provides details of the active kubelet process, from
which we can see the location of the configuration file provided to the kubelet service
with the --config argument. The file can be viewed with a command such as more or
less, like so:
sudo less /path/to/kubelet-config.json
Configure the client certificate authority file by setting the following parameter
appropriately:
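A sketch in the same JSON form used by the other kubelet remediations in this section (the CA file path is a placeholder):
"authentication": { "x509": { "clientCAFile": "<path/to/client-ca-file>" } }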
Default Value:
See the GKE documentation for the default value.
References:
1. https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
2. https://kubernetes.io/docs/reference/access-authn-authz/kubelet-authn-authz/#kubelet-authentication
3. https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
CIS Controls:
• Level 1
Description:
Disable the read-only port.
Rationale:
The Kubelet process provides a read-only API in addition to the main Kubelet API.
This read-only API is served without authentication and could expose potentially
sensitive information about the cluster.
Impact:
Removal of the read-only port will require that any service which made use of it be
re-configured to use the main Kubelet API.
Audit:
If using a Kubelet configuration file, check that there is an entry for readOnlyPort set
to 0.
First, SSH to the relevant node:
Run the following command on each node to find the appropriate Kubelet config file:
ps -ef | grep kubelet
The output of the above command should return something similar to --config
/etc/kubernetes/kubelet/kubelet-config.json which is the location of the
Kubelet config file.
Open the Kubelet config file:
cat /etc/kubernetes/kubelet/kubelet-config.json
If the --read-only-port argument exists, verify that it is set to 0.
If the --read-only-port argument is not present, check that there is a Kubelet config
file specified by --config. Check that if there is a readOnlyPort entry in the file, it is
set to 0.
Remediation:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to 0
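A sketch of the parameter, in the same JSON form used by the other kubelet remediations in this section:
"readOnlyPort": 0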
Default Value:
See the GKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
CIS Controls:
• Level 1
Description:
Do not disable timeouts on streaming connections.
Rationale:
Setting idle timeouts ensures that you are protected against Denial-of-Service attacks,
inactive connections and running out of ephemeral ports.
Note: By default, --streaming-connection-idle-timeout is set to 4 hours which
might be too high for your environment. Setting this as appropriate would additionally
ensure that such streaming connections are timed out after serving legitimate use
cases.
Impact:
Long-lived connections could be interrupted.
Audit:
Audit Method 1:
First, SSH to the relevant node:
Run the following command on each node to find the running kubelet process:
ps -ef | grep kubelet
If the command line for the process includes the argument
--streaming-connection-idle-timeout, verify that it is not set to 0.
If the --streaming-connection-idle-timeout argument is not present in the output of
the above command, refer instead to the --config argument that specifies the location of
the Kubelet config file, e.g. --config /etc/kubernetes/kubelet-config.yaml.
Open the Kubelet config file:
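A minimal sketch of the remaining audit steps, using the config file path given by the --config argument above:
cat /etc/kubernetes/kubelet-config.yaml
Verify that streamingConnectionIdleTimeout is not set to 0.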
Remediation:
Remediation Method 1:
If modifying the Kubelet config file, edit the /etc/kubernetes/kubelet-config.yaml file
and set the below parameter to a non-zero value in the format of #h#m#s
"streamingConnectionIdleTimeout": "4h0m0s"
You should ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not
specify a --streaming-connection-idle-timeout argument because it would
override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--streaming-connection-idle-timeout=4h0m0s
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"streamingConnectionIdleTimeout": by extracting the live configuration from the
nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
Default Value:
See the GKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://github.com/kubernetes/kubernetes/pull/18552
CIS Controls:
• Level 1
Description:
Allow Kubelet to manage iptables.
Rationale:
Kubelets can automatically manage the required changes to iptables based on how you
choose your networking options for the pods. It is recommended to let kubelets manage
the changes to iptables. This ensures that the iptables configuration remains in sync
with the pod networking configuration. Manually configuring iptables with dynamic pod
network configuration changes might hamper the communication between
pods/containers and to the outside world. You might end up with iptables rules that are
too restrictive or too open.
Impact:
Kubelet would manage the iptables on the system and keep it in sync. If you are using
any other iptables management solution, then there might be some conflicts.
Audit:
Audit Method 1:
First, SSH to each node:
Run the following command on each node to find the Kubelet process:
ps -ef | grep kubelet
If the output of the above command includes the argument --make-iptables-util-chains,
then verify it is set to true.
If the --make-iptables-util-chains argument does not exist, and there is a Kubelet
config file specified by --config, verify that the file does not set
makeIPTablesUtilChains to false.
Audit Method 2:
If using the api configz endpoint, consider searching for the status of
"makeIPTablesUtilChains": true by extracting the live configuration from the nodes
running kubelet.
Set the local proxy port and the following variables, providing the proxy port number
and node name:
HOSTNAME_PORT="localhost-and-port-number"
NODE_NAME="The-Name-Of-Node-To-Extract-Configuration" # as shown in the output of "kubectl get nodes"
Remediation:
Remediation Method 1:
If modifying the Kubelet config file, edit the kubelet-config.json file
/etc/kubernetes/kubelet/kubelet-config.json and set the below parameter to
true
"makeIPTablesUtilChains": true
Default Value:
See the GKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
CIS Controls:
• Level 1
Description:
Security relevant information should be captured. The eventRecordQPS on the Kubelet
configuration can be used to limit the rate at which events are gathered and sets the
maximum event creations per second. Setting this too low could result in relevant
events not being logged; however, the unlimited setting of 0 could result in a denial of
service on the kubelet.
Rationale:
It is important to capture all events and not restrict event creation. Events are an
important source of security information and analytics that ensure that your environment
is consistently monitored using the event data.
Impact:
Setting this parameter to 0 could result in a denial of service condition due to excessive
events being created. The cluster's event processing and storage systems should be
scaled to handle expected event loads.
Audit:
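A minimal sketch of an audit, following the same pattern as the other kubelet checks in this section (paths are placeholders):
Run the following command on each node to find the Kubelet process:
ps -ef | grep kubelet
If the output includes the --event-qps argument, verify that it is not set to 0. Otherwise, open the Kubelet config file referenced by the --config argument and review any eventRecordQPS entry:
sudo less /path/to/kubelet-config.json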
Remediation:
If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate
level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf on each worker node
and set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
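A sketch of such a parameter, assuming --event-qps as the command-line counterpart of eventRecordQPS (the value shown is only illustrative):
--event-qps=5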
Based on your system, restart the kubelet service. For example:
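A minimal sketch, assuming a systemd-managed kubelet:
systemctl daemon-reload
systemctl restart kubelet.service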
Default Value:
See the GKE documentation for the default value.
References:
1. https://kubernetes.io/docs/admin/kubelet/
2. https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/kubeletconfig/v1beta1/types.go
3. https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
CIS Controls:
• Level 1
Description:
Enable kubelet client certificate rotation.
Rationale:
The --rotate-certificates setting causes the kubelet to rotate its client certificates
by creating new CSRs as its existing credentials expire. This automated periodic
rotation ensures that there is no downtime due to expired certificates, thus
addressing availability in the CIA (Confidentiality, Integrity, and Availability) security
triad.
Note: This recommendation only applies if you let kubelets get their certificates from the
API server. In case your kubelet certificates come from an outside authority/tool (e.g.
Vault) then you need to implement rotation yourself.
Note: This feature also requires the RotateKubeletClientCertificate feature gate
to be enabled.
Impact:
None
Audit:
Audit Method 1:
SSH to each node and run the following command to find the Kubelet process:
ps -ef | grep kubelet
If the output of the command above includes the --rotate-certificates executable
argument, verify that it is set to true.
If the output of the command above does not include the --rotate-certificates
executable argument, then check the Kubelet config file. The output of the above
command should return something similar to --config
/etc/kubernetes/kubelet/kubelet-config.json which is the location of the
Kubelet config file.
Open the Kubelet config file:
cat /etc/kubernetes/kubelet/kubelet-config.json
Verify that the rotateCertificates entry is not present, or is set to true.
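Remediation:
A minimal sketch, following the pattern of the neighboring recommendations: if modifying the Kubelet config file, set
"rotateCertificates": true
If using executable arguments, add --rotate-certificates=true to the KUBELET_ARGS variable in the kubelet service file on each worker node, then restart the kubelet service.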
Default Value:
See the GKE documentation for the default value.
References:
1. https://github.com/kubernetes/kubernetes/pull/41912
2. https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/#kubelet-configuration
3. https://kubernetes.io/docs/imported/release/notes/
4. https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
5. https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/
CIS Controls:
• Level 1
Description:
Enable kubelet server certificate rotation.
Rationale:
RotateKubeletServerCertificate causes the kubelet to both request a serving
certificate after bootstrapping its client credentials and rotate the certificate as its
existing credentials expire. This automated periodic rotation ensures that there is
no downtime due to expired certificates, thus addressing availability in the CIA
(Confidentiality, Integrity, and Availability) security triad.
Note: This recommendation only applies if you let kubelets get their certificates from the
API server. In case your kubelet certificates come from an outside authority/tool (e.g.
Vault) then you need to implement rotation yourself.
Impact:
None
Audit:
Audit Method 1:
First, SSH to each node:
Run the following command on each node to find the Kubelet process:
ps -ef | grep kubelet
If the output of the command above includes the --rotate-kubelet-server-certificate
executable argument, verify that it is set to true.
If the process does not have the --rotate-kubelet-server-certificate executable
argument, then check the Kubelet config file. The output of the above command should
return something similar to --config /etc/kubernetes/kubelet-config.yaml
which is the location of the Kubelet config file.
Open the Kubelet config file:
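A minimal sketch of the remaining audit steps, using the config file path noted above:
cat /etc/kubernetes/kubelet-config.yaml
Verify that RotateKubeletServerCertificate under featureGates is not present, or is set to true.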
Remediation:
Remediation Method 1:
If modifying the Kubelet config file, edit the /etc/kubernetes/kubelet-config.yaml file
and set the below parameter to true
"featureGates": {
"RotateKubeletServerCertificate":true
},
Additionally, ensure that the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf does not set
the --rotate-kubelet-server-certificate executable argument to false because
this would override the Kubelet config file.
Remediation Method 2:
If using executable arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf on each
worker node and add the below parameter at the end of the KUBELET_ARGS variable
string.
--rotate-kubelet-server-certificate=true
Remediation Method 3:
If using the api configz endpoint consider searching for the status of
"RotateKubeletServerCertificate": by extracting the live configuration from the
nodes running kubelet.
See detailed step-by-step configmap procedures in Reconfigure a Node's Kubelet in a
Live Cluster, and then rerun the curl statement from the audit process to check for
kubelet configuration changes.
Default Value:
See the GKE documentation for the default value.
References:
1. https://github.com/kubernetes/kubernetes/pull/45059
2. https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/#kubelet-configuration
CIS Controls:
4 Policies
This section contains recommendations for various Kubernetes policies which are
important to the security of the environment.
• Level 1
Description:
The RBAC role cluster-admin provides wide-ranging powers over the environment
and should be used only where and when needed.
Rationale:
Kubernetes provides a set of default roles where RBAC is used. Some of these roles
such as cluster-admin provide wide-ranging privileges which should only be applied
where absolutely necessary. Roles such as cluster-admin allow super-user access to
perform any action on any resource. When used in a ClusterRoleBinding, it gives full
control over every resource in the cluster and in all namespaces. When used in a
RoleBinding, it gives full control over every resource in the rolebinding's namespace,
including the namespace itself.
Impact:
Audit:
Obtain a list of the principals who have access to the cluster-admin role by reviewing
the clusterrolebinding output for each role binding that has access to the cluster-
admin role.
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name
Review each principal listed and ensure that cluster-admin privilege is required for it.
Remediation:
Identify all clusterrolebindings to the cluster-admin role. Check if they are used and if
they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower-privileged role and then remove the
clusterrolebinding to the cluster-admin role:
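A minimal sketch, using the same placeholder style as the other RBAC remediations in this section:
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]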
Default Value:
References:
1. https://kubernetes.io/docs/concepts/cluster-administration/
2. https://kubernetes.io/docs/reference/access-authn-authz/rbac/
CIS Controls:
• Level 1
Description:
The Kubernetes API stores secrets, which may be service account tokens for the
Kubernetes API or credentials used by workloads in the cluster. Access to these secrets
should be restricted to the smallest possible group of users to reduce the risk of
privilege escalation.
Rationale:
Inappropriate access to secrets stored within the Kubernetes cluster can allow for an
attacker to gain additional access to the Kubernetes cluster or external resources
whose credentials are stored as secrets.
Impact:
Care should be taken not to remove access to secrets from system components which
require it for their operation.
Audit:
Review the users who have get, list or watch access to secrets objects in the
Kubernetes API.
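One possible way to enumerate candidate roles (an illustrative sketch that assumes jq is available; bindings to the returned roles should then be reviewed):
kubectl get clusterroles,roles --all-namespaces -o json | jq -r '.items[] | select(any(.rules[]?; (.resources // []) | index("secrets"))) | [.kind, (.metadata.namespace // "-"), .metadata.name] | @tsv'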
Remediation:
Where possible, remove get, list and watch access to secret objects in the cluster.
CIS Controls:
• Level 1
Description:
Kubernetes Roles and ClusterRoles provide access to resources based on sets of
objects and actions that can be taken on those objects. It is possible to set either of
these to be the wildcard "*", which matches all items.
Use of wildcards is not optimal from a security perspective as it may allow for
inadvertent access to be granted when new resources are added to the Kubernetes API
either as CRDs or in later versions of the product.
Rationale:
The principle of least privilege recommends that users are provided only the access
required for their role and nothing more. The use of wildcard rights grants is likely to
provide excessive rights to the Kubernetes API.
Audit:
Retrieve the roles defined across all namespaces in the cluster and review them for
wildcards:
kubectl get roles --all-namespaces -o yaml
Retrieve the cluster roles defined in the cluster and review them for wildcards:
kubectl get clusterroles -o yaml
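One possible way to narrow the review (an illustrative sketch that assumes jq is available) is to list roles and cluster roles whose rules contain a wildcard verb, resource, or API group:
kubectl get clusterroles,roles --all-namespaces -o json | jq -r '.items[] | select(any(.rules[]?; ((.verbs // []) + (.resources // []) + (.apiGroups // [])) | index("*"))) | [.kind, (.metadata.namespace // "-"), .metadata.name] | @tsv'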
Remediation:
Where possible replace any use of wildcards in clusterroles and roles with specific
objects or actions.
References:
1. https://kubernetes.io/docs/reference/access-authn-authz/rbac/
• Level 1
Description:
The default service account should not be used to ensure that rights granted to
applications can be more easily audited and reviewed.
Rationale:
Kubernetes provides a default service account which is used by cluster workloads
where no specific service account is assigned to the pod.
Where access to the Kubernetes API from a pod is required, a specific service account
should be created for that pod, and rights granted to that service account.
The default service account should be configured such that it does not provide a service
account token and does not have any explicit rights assignments.
Impact:
All workloads which require access to the Kubernetes API will require an explicit service
account to be created.
Audit:
For each namespace in the cluster, review the rights assigned to the default service
account and ensure that it has no roles or cluster roles bound to it apart from the
defaults.
Additionally ensure that the automountServiceAccountToken: false setting is in
place for each default service account.
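One possible check (an illustrative sketch that assumes jq is available) lists the namespaces whose default service account does not explicitly disable token automounting:
kubectl get serviceaccounts --all-namespaces -o json | jq -r '.items[] | select(.metadata.name == "default") | select(.automountServiceAccountToken != false) | .metadata.namespace'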
Remediation:
Create explicit service accounts wherever a Kubernetes workload requires specific
access to the Kubernetes API server.
Modify the configuration of each default service account to include this value
automountServiceAccountToken: false
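For example (a sketch; substitute each namespace in turn):
kubectl patch serviceaccount default -n <namespace> -p '{"automountServiceAccountToken": false}'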
Default Value:
By default the default service account allows for its service account token to be
mounted in pods in its namespace.
References:
1. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
CIS Controls:
• Level 1
Description:
Service accounts tokens should not be mounted in pods except where the workload
running in the pod explicitly needs to communicate with the API server
Rationale:
Mounting service account tokens inside pods can provide an avenue for privilege
escalation attacks where an attacker is able to compromise a single pod in the cluster.
Avoiding mounting these tokens removes this attack avenue.
Impact:
Pods mounted without service account tokens will not be able to communicate with the
API server, except where the resource is available to unauthenticated principals.
Audit:
Review pod and service account objects in the cluster and ensure that the option below
is set, unless the resource explicitly requires this access.
automountServiceAccountToken: false
Remediation:
Modify the definition of pods and service accounts which do not need to mount service
account tokens to disable it.
Default Value:
By default, all pods get a service account token mounted in them.
References:
1. https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
• Level 1
Description:
The special group system:masters should not be used to grant permissions to any
user or service account, except where strictly necessary (e.g. bootstrapping access
prior to RBAC being fully available)
Rationale:
The system:masters group has unrestricted access to the Kubernetes API hard-coded
into the API server source code. An authenticated user who is a member of this group
cannot have their access reduced, even if all bindings and cluster role bindings which
mention it are removed.
When combined with client certificate authentication, use of this group can allow for
irrevocable cluster-admin level credentials to exist for a cluster.
GKE includes the CertificateSubjectRestriction admission controller, which
rejects requests for the system:masters group: "This admission controller observes
creation of CertificateSigningRequest resources that have a spec.signerName of
kubernetes.io/kube-apiserver-client. It rejects any request that specifies a 'group' (or
'organization attribute') of system:masters."
(https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#certificatesubjectrestriction)
Impact:
Once the RBAC system is operational in a cluster system:masters should not be
specifically required, as ordinary bindings from principals to the cluster-admin cluster
role can be made where unrestricted access is required.
Audit:
Review a list of all credentials which have access to the cluster and ensure that the
group system:masters is not used.
Remediation:
Remove the system:masters group from all users in the cluster.
Default Value:
By default some clusters will create a "break glass" client certificate which is a member
of this group. Access to this client certificate should be carefully controlled and it should
not be used for general cluster operations.
References:
1. https://github.com/kubernetes/kubernetes/blob/master/pkg/registry/rbac/escalation_check.go#L38
• Level 1
Description:
Cluster roles and roles with the impersonate, bind or escalate permissions should not
be granted unless strictly required. Each of these permissions allows a particular subject
to escalate their privileges beyond those explicitly granted by cluster administrators.
Rationale:
The impersonate privilege allows a subject to impersonate other users gaining their
rights to the cluster. The bind privilege allows the subject to add a binding to a cluster
role or role which escalates their effective permissions in the cluster. The escalate
privilege allows a subject to modify cluster roles to which they are bound, increasing
their rights to that level.
Each of these permissions has the potential to allow for privilege escalation to cluster-
admin level.
Impact:
There are some cases where these permissions are required for cluster service
operation, and care should be taken before removing these permissions from system
service accounts.
Audit:
Review the users who have access to cluster roles or roles which provide the
impersonate, bind or escalate privileges.
Remediation:
Where possible, remove the impersonate, bind and escalate rights from subjects.
Default Value:
References:
1. https://www.impidio.com/blog/kubernetes-rbac-security-pitfalls
2. https://raesene.github.io/blog/2020/12/12/Escalating_Away/
3. https://raesene.github.io/blog/2021/01/16/Getting-Into-A-Bind-with-Kubernetes/
• Level 2
Description:
Avoid ClusterRoleBindings and RoleBindings with the user system:anonymous.
Rationale:
Kubernetes assigns user system:anonymous to API server requests that have no
authentication information provided. Binding a role to user system:anonymous gives
any unauthenticated user the permissions granted by that role and is strongly
discouraged.
Impact:
Unauthenticated users will have privileges and permissions associated with roles
associated with the configured bindings.
Care should be taken before removing any clusterrolebindings or rolebindings
from the environment to ensure they were not required for operation of the cluster. Use
a more specific and authenticated user for cluster operations.
Audit:
Both ClusterRoleBindings and RoleBindings should be audited. Use the following
command to confirm there are no ClusterRoleBindings to system:anonymous:
$ kubectl get clusterrolebindings -o json | jq -r '["Name"], ["-----"], (.items[] | select((.subjects | length) > 0) | select(any(.subjects[]; .name == "system:anonymous")) | [.metadata.namespace, .metadata.name]) | @tsv'
Remediation:
Identify all clusterrolebindings and rolebindings to the user system:anonymous.
Check if they are used and review the permissions associated with the binding using the
commands in the Audit section above or refer to GKE documentation.
Strongly consider replacing unsafe bindings with an authenticated, user-defined group.
Where possible, bind to non-default, user-defined groups with least-privilege roles.
If there are any unsafe bindings to the user system:anonymous, proceed to delete them
after consideration for cluster operations with only necessary, safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
Default Value:
By default, there are no clusterrolebindings or rolebindings with the user system:anonymous.
References:
1. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles
• Level 1
Description:
Avoid non-default ClusterRoleBindings and RoleBindings with the group
system:unauthenticated, except the ClusterRoleBinding system:public-info-
viewer.
Rationale:
Kubernetes assigns the group system:unauthenticated to API server requests that
have no authentication information provided. Binding a role to this group gives any
unauthenticated user the permissions granted by that role and is strongly discouraged.
Impact:
Unauthenticated users will have privileges and permissions associated with roles
associated with the configured bindings.
Care should be taken before removing any non-default clusterrolebindings or
rolebindings from the environment to ensure they were not required for operation of
the cluster. Leverage a more specific and authenticated user for cluster operations.
Audit:
Both ClusterRoleBindings and RoleBindings should be audited. Use the following
command to confirm there are no non-default ClusterRoleBindings to the group
system:unauthenticated:
$ kubectl get clusterrolebindings -o json | jq -r '["Name"], ["-----"], (.items[] | select((.subjects | length) > 0) | select(any(.subjects[]; .name == "system:unauthenticated")) | [.metadata.namespace, .metadata.name]) | @tsv'
Remediation:
Identify all non-default clusterrolebindings and rolebindings to the group
system:unauthenticated. Check if they are used and review the permissions
associated with the binding using the commands in the Audit section above or refer to
GKE documentation.
Strongly consider replacing non-default, unsafe bindings with an authenticated, user-
defined group. Where possible, bind to non-default, user-defined groups with least-
privilege roles.
If there are any non-default, unsafe bindings to the group system:unauthenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
kubectl delete clusterrolebinding [CLUSTER_ROLE_BINDING_NAME]
kubectl delete rolebinding [ROLE_BINDING_NAME] --namespace [ROLE_BINDING_NAMESPACE]
Default Value:
ClusterRoleBindings with the group system:unauthenticated:
• system:public-info-viewer
References:
1. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles
CIS Controls:
• Level 1
Description:
Avoid non-default ClusterRoleBindings and RoleBindings with the group
system:authenticated, except the ClusterRoleBindings system:basic-user,
system:discovery, and system:public-info-viewer.
Google's approach to authentication is to make authenticating to Google Cloud and
GKE as simple and secure as possible without adding complex configuration steps. The
group system:authenticated includes all users with a Google account, which
includes all Gmail accounts. Consider your authorization controls with this extended
group scope when granting permissions. Thus, group system:authenticated is not
recommended for non-default use.
Rationale:
GKE assigns the group system:authenticated to API server requests made by any
user who is signed in with a Google Account, including all Gmail accounts. In practice,
this isn't meaningfully different from system:unauthenticated because anyone can
create a Google Account.
Binding a role to the group system:authenticated gives any user with a Google
Account, including all Gmail accounts, the permissions granted by that role and is
strongly discouraged.
Impact:
Authenticated users in group system:authenticated should be treated similarly to
users in system:unauthenticated, having privileges and permissions associated with
roles associated with the configured bindings.
Care should be taken before removing any non-default clusterrolebindings or
rolebindings from the environment to ensure they were not required for operation of
the cluster. Leverage a more specific and authenticated user for cluster operations.
Audit:
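Both ClusterRoleBindings and RoleBindings should be audited. Mirroring the jq-based commands shown for the system:anonymous and system:unauthenticated checks (an assumption that the same approach is intended here), ClusterRoleBindings to the group can be listed with:
$ kubectl get clusterrolebindings -o json | jq -r '["Name"], ["-----"], (.items[] | select((.subjects | length) > 0) | select(any(.subjects[]; .name == "system:authenticated")) | [.metadata.namespace, .metadata.name]) | @tsv'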
Remediation:
Identify all non-default clusterrolebindings and rolebindings to the group
system:authenticated. Check if they are used and review the permissions associated
with the binding using the commands in the Audit section above or refer to GKE
documentation.
Strongly consider replacing non-default, unsafe bindings with an authenticated, user-
defined group. Where possible, bind to non-default, user-defined groups with least-
privilege roles.
If there are any non-default, unsafe bindings to the group system:authenticated,
proceed to delete them after consideration for cluster operations with only necessary,
safer bindings.
Default Value:
ClusterRoleBindings with group system:authenticated:
• system:basic-user
• system:discovery
References:
1. https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles
CIS Controls:
Pod Security Standards (PSS) are recommendations for securing deployed workloads
to reduce the risks of container breakout. There are a number of ways of implementing
PSS, including the built-in Pod Security Admission controller, or external policy control
systems which integrate with Kubernetes via validating and mutating webhooks.
• Level 1
Description:
The Pod Security Standard Baseline profile defines a baseline for container security.
You can enforce this by using the built-in Pod Security Admission controller.
Rationale:
Without an active mechanism to enforce the Pod Security Standard Baseline profile, it is
not possible to limit the use of containers with access to underlying cluster nodes, via
mechanisms like privileged containers, or the use of hostPath volume mounts.
Impact:
Enforcing a Baseline profile will restrict the functionality available to workloads, such as the use of privileged containers or hostPath volume mounts.
Audit:
Ensure that Pod Security Admission is in place for every namespace which contains
user workloads.
Remediation:
Run the following command to enforce the Baseline profile in a namespace:
kubectl label namespace <namespace-name> pod-security.kubernetes.io/enforce=baseline
Default Value:
• Level 1
Description:
There are a variety of CNI plugins available for Kubernetes. If the CNI in use does not
support Network Policies it may not be possible to effectively restrict traffic in the
cluster.
Rationale:
Kubernetes network policies are enforced by the CNI plugin in use. As such it is
important to ensure that the CNI plugin supports both Ingress and Egress network
policies.
See also recommendation 5.6.7.
Impact:
None
Audit:
Review the documentation of CNI plugin in use by the cluster, and confirm that it
supports Ingress and Egress network policies.
Remediation:
To use a CNI plugin with Network Policy, enable Network Policy in GKE, and the CNI
plugin will be updated. See recommendation 5.6.7.
Default Value:
This will depend on the CNI plugin in use.
References:
1. https://kubernetes.io/docs/concepts/services-networking/network-policies/
2. https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
3. https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview
Additional Information:
One example here is Flannel (https://github.com/flannel-io/flannel) which does not
support Network policy unless Calico is also in use.
• Level 2
Description:
Use network policies to isolate traffic in the cluster network.
Rationale:
Running different applications on the same Kubernetes cluster creates a risk of one
compromised application attacking a neighboring application. Network segmentation is
important to ensure that containers can communicate only with those they are supposed
to. A network policy is a specification of how selections of pods are allowed to
communicate with each other and other network endpoints.
Network Policies are namespace scoped. When a network policy is introduced to a
given namespace, all traffic not allowed by the policy is denied. However, if there are no
network policies in a namespace all traffic will be allowed into and out of the pods in that
namespace.
Impact:
Once network policies are in use within a given namespace, traffic not explicitly allowed
by a network policy will be denied. As such it is important to ensure that, when
introducing network policies, legitimate traffic is not blocked.
Audit:
Run the below command and review the NetworkPolicy objects created in the cluster.
kubectl get networkpolicy --all-namespaces
Ensure that each namespace defined in the cluster has at least one Network Policy.
Remediation:
Follow the documentation and create NetworkPolicy objects as needed.
See: https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#creating_a_network_policy for more information.
Default Value:
By default, network policies are not created.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/network-policy#creating_a_network_policy
2. https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
3. https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview
CIS Controls:
• Level 2
Description:
Kubernetes supports mounting secrets as data volumes or as environment variables.
Minimize the use of environment variable secrets.
Rationale:
It is reasonably common for application code to log out its environment (particularly in
the event of an error). This will include any secret values passed in as environment
variables, so secrets can easily be exposed to any user or entity who has access to the
logs.
Impact:
Application code which expects to read secrets in the form of environment variables
would need modification
Audit:
Run the following command to find references to objects which use environment
variables defined from secrets.
kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A
Remediation:
If possible, rewrite application code to read secrets from mounted secret files, rather
than from environment variables.
Default Value:
By default, secrets are not defined
References:
1. https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
Additional Information:
Mounting secrets as volumes has the additional benefit that secret values can be
updated without restarting the pod
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
v8 | 3 Data Protection: Develop processes and technical controls to identify, classify, securely handle, retain, and dispose of data. | | |
v7 | 13 Data Protection: Data Protection | | |
• Level 2
Description:
Consider the use of an external secrets storage and management system instead of
using Kubernetes Secrets directly, if more complex secret management is required.
Ensure the solution requires authentication to access secrets, has auditing of access to
and use of secrets, and encrypts secrets. Some solutions also make it easier to rotate
secrets.
Rationale:
Kubernetes supports secrets as first-class objects, but care needs to be taken to ensure
that access to secrets is carefully limited. Using an external secrets provider can ease
the management of access to secrets, especially where secrets are used across both
Kubernetes and non-Kubernetes environments.
Impact:
None
Audit:
Review your secrets management implementation.
Remediation:
Refer to the secrets management options offered by the cloud service provider or a
third-party secrets management solution.
Default Value:
By default, no external secret management is configured.
References:
1. https://kubernetes.io/docs/concepts/configuration/secret/
2. https://cloud.google.com/secret-manager/docs/overview
CIS Controls:
• v8: 3 Data Protection - Develop processes and technical controls to identify, classify, securely handle, retain, and dispose of data.
• v7: 13 Data Protection
• Level 2
Description:
Configure Image Provenance for the deployment.
Rationale:
Kubernetes supports plugging in provenance rules to accept or reject the images in
deployments. Rules can be configured to ensure that only approved images are
deployed in the cluster.
Also see recommendation 5.10.4.
Impact:
Regular maintenance for the provenance configuration should be carried out, based on
container image updates.
Audit:
Review the pod definitions in the cluster and verify that image provenance is configured
as appropriate.
Also see recommendation 5.10.4.
Remediation:
Follow the Kubernetes documentation and setup image provenance.
Also see recommendation 5.10.4.
Default Value:
By default, image provenance is not set.
References:
1. https://kubernetes.io/docs/concepts/containers/images/
2. https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
These policies relate to general cluster management topics, like namespace best
practices and policies applied to pod objects in the cluster.
• Level 1
Description:
Use namespaces to isolate your Kubernetes objects.
Rationale:
Limiting the scope of user permissions can reduce the impact of mistakes or malicious
activities. A Kubernetes namespace allows you to partition created resources into
logically named groups. Resources created in one namespace can be hidden from
other namespaces. By default, each resource created by a user in a Kubernetes cluster
runs in a default namespace, called default. You can create additional namespaces
and attach resources and users to them. You can use Kubernetes Authorization plugins
to create policies that segregate access to namespace resources between different
users.
Impact:
You need to switch between namespaces for administration.
Audit:
Run the below command and review the namespaces created in the cluster.
kubectl get namespaces
Ensure that these namespaces are the ones you need and are adequately administered
as per your requirements.
Remediation:
Follow the documentation and create namespaces for objects in your deployment as
you need them.
Default Value:
By default, Kubernetes starts with two initial namespaces:
References:
1. https://kubernetes.io/docs/concepts/overview/working-with-
objects/namespaces/#viewing-namespaces
2. http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-
deployment.html
3. https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/589-
efficient-node-heartbeats
CIS Controls:
• v7: 12 Boundary Defense
• Level 2
Description:
Enable RuntimeDefault seccomp profile in the pod definitions.
Rationale:
Seccomp (secure computing mode) is used to restrict the set of system calls
applications can make, allowing cluster administrators greater control over the security
of workloads running in the cluster. Kubernetes disables seccomp profiles by default for
historical reasons. It should be enabled to ensure that the workloads have restricted
actions available within the container.
Impact:
If the RuntimeDefault seccomp profile is too restrictive for you, you would have to
create/manage your own Localhost seccomp profiles.
Audit:
Review the pod definitions output for all namespaces in the cluster with the command
below.
kubectl get pods --all-namespaces -o json | jq -r '.items[] |
select(.metadata.annotations."seccomp.security.alpha.kubernetes.io/pod" ==
"runtime/default" or .spec.securityContext.seccompProfile.type ==
"RuntimeDefault") | {namespace: .metadata.namespace, name: .metadata.name,
seccompProfile: .spec.securityContext.seccompProfile.type}'
Remediation:
Use a security context to enable the RuntimeDefault seccomp profile in your pod
definitions. A conforming pod, as returned by the audit command above, appears as below:
{
"namespace": "kube-system",
"name": "metrics-server-v0.7.0-dbcc8ddf6-gz7d4",
"seccompProfile": "RuntimeDefault"
}
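As one way to apply this setting to an existing workload (a sketch only; the deployment name and namespace are placeholders), the pod template's security context can be patched:
kubectl patch deployment <deployment_name> -n <namespace> --type merge -p '{"spec":{"template":{"spec":{"securityContext":{"seccompProfile":{"type":"RuntimeDefault"}}}}}}'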
Default Value:
By default, the seccomp profile is set to Unconfined, which means that no seccomp profile
is enabled.
References:
1. https://kubernetes.io/docs/tutorials/security/seccomp/
2. https://cloud.google.com/kubernetes-engine/docs/concepts/seccomp-in-gke
CIS Controls:
• Level 2
Description:
Apply Security Context to Pods and Containers
Rationale:
A security context defines the operating system security settings (uid, gid, capabilities,
SELinux role, etc.) applied to a container. When designing containers and pods, make
sure that the security context is configured for pods, containers, and volumes. A security
context is a property defined in the deployment yaml. It controls the security parameters
that will be assigned to the pod/container/volume. There are two levels of security
context: pod level security context, and container level security context.
Impact:
If you incorrectly apply security contexts, there may be issues running the pods.
Audit:
Review the pod definitions in the cluster and verify that the security contexts have been
defined as appropriate.
Remediation:
Follow the Kubernetes documentation and apply security contexts to your pods. For a
suggested list of security contexts, you may refer to the CIS Google Container-
Optimized OS Benchmark.
Default Value:
By default, no security contexts are automatically applied to pods.
References:
1. https://kubernetes.io/docs/concepts/workloads/pods/
2. https://kubernetes.io/docs/concepts/containers/
3. https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
4. https://learn.cisecurity.org/benchmarks
• Level 2
Description:
Kubernetes provides a default namespace, where objects are placed if no namespace
is specified for them. Placing objects in this namespace makes application of RBAC and
other controls more difficult.
Rationale:
Resources in a Kubernetes cluster should be segregated by namespace, to allow for
security controls to be applied at that level and to make it easier to manage resources.
Impact:
None
Audit:
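One simple check (a sketch; it lists only the common resource kinds) is to review what currently resides in the default namespace and confirm that only expected system objects remain:
kubectl get all -n default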
Remediation:
Ensure that namespaces are created to allow for appropriate segregation of Kubernetes
resources and that all new resources are created in a specific namespace.
Default Value:
5 Managed services
This section consists of security recommendations for the direct configuration of
Kubernetes managed service components, namely, Google Kubernetes Engine (GKE).
These recommendations are directly applicable for features which exist only as part of a
managed service.
• Level 2
Description:
Note: GCR is now deprecated, being superseded by Artifact Registry starting 15th May
2024. Runtime Vulnerability scanning is available via GKE Security Posture.
Scan images stored in Google Container Registry (GCR) or Artifact Registry (AR) for
vulnerabilities.
Rationale:
Vulnerabilities in software packages can be exploited by malicious users to obtain
unauthorized access to local cloud resources. GCR Container Analysis API or Artifact
Registry Container Scanning API allow images stored in GCR or AR respectively to be
scanned for known vulnerabilities.
Impact:
None.
Audit:
1. Go to AR by visiting https://console.cloud.google.com/artifacts
2. Select Settings and check if Vulnerability scanning is Enabled.
Ensure that Container Scanning API and Artifact Registry API are listed in the
output.
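A command-line check along the following lines (a sketch; it simply searches the enabled services list) can confirm that the relevant APIs are enabled:
gcloud services list --enabled | grep -E "containerscanning|containeranalysis|artifactregistry"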
Remediation:
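Using Command Line (a sketch; enables the scanning API for AR and, if GCR is still in use, the Container Analysis API):
gcloud services enable containerscanning.googleapis.com
gcloud services enable containeranalysis.googleapis.com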
Default Value:
References:
1. https://cloud.google.com/artifact-registry/docs/analysis
2. https://cloud.google.com/artifact-analysis/docs/os-overview
3. https://console.cloud.google.com/marketplace/product/google/containerregistry.g
oogleapis.com
4. https://cloud.google.com/kubernetes-engine/docs/concepts/about-configuration-
scanning
5. https://containersecurity.googleapis.com
CIS Controls:
• Level 2
Description:
Note: GCR is now deprecated, see the references for more details.
Restrict user access to GCR or AR, limiting interaction with build images to only
authorized personnel and service accounts.
Rationale:
Weak access control to GCR or AR may allow malicious users to replace built images
with vulnerable or back-doored containers.
Impact:
Care should be taken not to remove access to GCR or AR for accounts that require this
for their operation. Any account granted the Storage Object Viewer role at the project
level can view all objects stored in GCS for the project.
Audit:
Users may have permissions to use Service Accounts, and thus users could inherit
privileges on the AR repositories. To check the accounts that could do this:
Note that other privileged project level roles will have the ability to write and modify AR
repositories. Consult the GCP CIS benchmark and IAM documentation for further
reference.
Using Command Line:
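A command of the following form (a sketch; repository name, location and project are placeholders) returns the IAM policy for an AR repository:
gcloud artifacts repositories get-iam-policy <repository_name> --location <location> --project <project_id>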
The output of the command will return roles associated with the AR repository and
which members have those roles.
Users may have permissions to use Service Accounts, and thus users could inherit
privileges on the GCR Bucket. To check the accounts that could do this:
Note that other privileged project level roles will have the ability to write and modify
objects and the GCR bucket. Consult the GCP CIS benchmark and IAM documentation
for further reference.
Using Command Line:
To check GCR bucket specific permissions
gsutil iam get gs://artifacts.<project_id>.appspot.com
The output of the command will return roles associated with the GCR bucket and which
members have those roles.
Additionally, run the following to identify users and service accounts that hold privileged
roles at the project level, and thus inherit these privileges within the GCR bucket:
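One way to perform this check (a sketch; review the output for the Storage Admin, Storage Object Admin and Storage Object Creator roles):
gcloud projects get-iam-policy <project_id> --flatten="bindings[].members" --format='table(bindings.members,bindings.role)'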
Remediation:
For a User or Service account with Project level permissions inherited by the GCR
bucket, or the Service Account User Role:
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role>
gs://artifacts.<project_id>.appspot.com
where:
• <type> can be one of the following:
  o user, if the <email_address> is a Google account.
  o serviceAccount, if <email_address> specifies a Service account.
• <email_address> can be one of the following:
  ▪ a Google account (for example, a user's email address).
  ▪ a Cloud IAM service account.
To modify roles defined at the project level and subsequently inherited within the GCR
bucket, or the Service Account User role, extract the IAM policy file, modify it
accordingly and apply it using:
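The corresponding command (as used elsewhere in this Benchmark; project ID and policy file are placeholders) is:
gcloud projects set-iam-policy <project_id> <policy_file>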
Default Value:
By default, GCR is disabled and access controls are set during initialisation.
References:
1. https://cloud.google.com/container-registry/docs/
2. https://cloud.google.com/kubernetes-engine/docs/how-to/service-accounts
3. https://cloud.google.com/kubernetes-engine/docs/how-to/iam
4. https://cloud.google.com/artifact-registry/docs/access-control#grant
CIS Controls:
• Level 2
Description:
Note: GCR is now deprecated, see the references for more details.
Configure the Cluster Service Account with Artifact Registry Viewer Role to only allow
read-only access to AR repositories. Configure the Cluster Service Account with
Storage Object Viewer Role to only allow read-only access to GCR.
Rationale:
The Cluster Service Account does not require administrative access to GCR or AR, only
requiring pull access to containers to deploy onto GKE. Restricting permissions follows
the principles of least privilege and prevents credentials from being abused beyond the
required role.
Impact:
A separate dedicated service account may be required for use by build servers and
other robot users pushing or managing container images.
Any account granted the Storage Object Viewer role at the project level can view all
objects stored in GCS for the project.
Audit:
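Using Command Line, a check along these lines (a sketch; repository name and location are placeholders) returns the repository's IAM policy:
gcloud artifacts repositories get-iam-policy <repository_name> --location <location> --format json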
The output of the command will return roles associated with the AR repository. If listed,
ensure the GKE Service account is set to "role":
"roles/artifactregistry.reader".
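An equivalent check (a sketch; the exact filter syntax may need adjusting) lists members holding privileged storage roles at the project level:
gcloud projects get-iam-policy <project_id> --flatten="bindings[].members" --format='table(bindings.members)' --filter="bindings.role:roles/storage.admin OR bindings.role:roles/storage.objectAdmin OR bindings.role:roles/storage.objectCreator"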
Your GKE Service Account should not be output when this command is run.
Remediation:
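Using Command Line (a sketch; grants the read-only Artifact Registry Reader role to the cluster Service account):
gcloud artifacts repositories add-iam-policy-binding <repository_name> --location <location> --member "serviceAccount:<gke_sa_email>" --role "roles/artifactregistry.reader"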
For an account that inherits access to the bucket through Project level permissions:
• <type> can be one of the following:
  o user, if the <email_address> is a Google account.
  o serviceAccount, if <email_address> specifies a Service account.
Then remove the excessively privileged role (Storage Admin / Storage Object
Admin / Storage Object Creator) using:
gsutil iam ch -d <type>:<email_address>:<role>
gs://artifacts.<project_id>.appspot.com
For an account that inherits access to the GCR Bucket through Project level
permissions, modify the Projects IAM policy file accordingly, then upload it using:
gcloud projects set-iam-policy <project_id> <policy_file>
Default Value:
The default permissions for the cluster Service account are dependent on the initial
configuration and IAM policy.
References:
1. https://cloud.google.com/container-registry/docs/
2. https://cloud.google.com/kubernetes-engine/docs/how-to/service-accounts
3. https://cloud.google.com/kubernetes-engine/docs/how-to/iam
CIS Controls:
• Level 2
Description:
Use Binary Authorization to allowlist (whitelist) only approved container registries.
Rationale:
Allowing unrestricted access to external container registries provides the opportunity for
malicious or unapproved containers to be deployed into the cluster. Ensuring only
trusted container images are used reduces this risk.
Also see recommendation 5.10.4.
Impact:
All container images to be deployed to the cluster must be hosted within an approved
container image registry. If public registries are not on the allowlist, a process for
bringing commonly used container images into an approved private registry and
keeping them up to date will be required.
Audit:
Using Google Cloud Console:
Check that Binary Authorization is enabled for the GKE cluster:
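Using Command Line, a check along these lines (a sketch; cluster name and zone are placeholders) shows whether Binary Authorization is enabled, and the configured policy can then be reviewed:
gcloud container clusters describe <cluster_name> --zone <zone> --format json | jq '.binaryAuthorization'
gcloud container binauthz policy export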
Remediation:
Using Google Cloud Console:
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
Default Value:
By default, Binary Authorization is disabled along with container registry allowlisting.
References:
1. https://cloud.google.com/binary-authorization/docs/policy-yaml-reference
2. https://cloud.google.com/binary-authorization/docs/setting-up
CIS Controls:
This section contains recommendations relating to using Cloud IAM with GKE.
• Level 1
Description:
Create and use minimally privileged Service accounts to run GKE cluster nodes instead
of using the Compute Engine default Service account. Unnecessary permissions could
be abused in the case of a node compromise.
Rationale:
A GCP service account (as distinct from a Kubernetes ServiceAccount) is an identity
that an instance or an application can use to make GCP API requests. This identity is
used to identify virtual machine instances to other Google Cloud Platform services. By
default, Kubernetes Engine nodes use the Compute Engine default service account.
This account has broad access by default, as defined by access scopes, making it
useful to a wide variety of applications on the VM, but it has more permissions than are
required to run your Kubernetes Engine cluster.
A minimally privileged service account should be created and used to run the
Kubernetes Engine cluster instead of the Compute Engine default service account, and
separate service accounts should be created for each Kubernetes Workload (see
Recommendation 5.2.2).
Kubernetes Engine requires, at a minimum, the node service account to have the
monitoring.viewer, monitoring.metricWriter, and logging.logWriter roles.
Additional roles may need to be added for the nodes to pull images from GCR.
Impact:
Instances are automatically granted the https://www.googleapis.com/auth/cloud-
platform scope to allow full access to all Google Cloud APIs. This is so that the IAM
permissions of the instance are completely determined by the IAM roles of the Service
account. Thus if Kubernetes workloads were using cluster access scopes to perform
actions using Google APIs, they may no longer be able to, if not permitted by the
permissions of the Service account. To remediate, follow recommendation 5.2.2.
The Service account roles listed here are the minimum required to run the cluster.
Additional roles may be required to pull from a private instance of Google Container
Registry (GCR).
Audit:
Using Google Cloud Console:
To check the permissions allocated to the service account are the minimum required for
cluster operation:
• Logs Writer
• Monitoring Metric Writer
• Monitoring Viewer
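Using Command Line, a check along these lines (a sketch) shows which Service account the cluster nodes use:
gcloud container clusters describe <cluster_name> --zone <zone> --format="value(nodeConfig.serviceAccount)"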
The output of the above command will return default if the default Service account is
used for Project access.
To check that the permissions allocated to the service account are the minimum
required for cluster operation:
gcloud projects get-iam-policy <project_id> \
--flatten="bindings[].members" \
--format='table(bindings.role)' \
--filter="bindings.members:<service_account>"
Review the output to ensure that the service account only has the roles required to run
the cluster:
• roles/logging.logWriter
• roles/monitoring.metricWriter
• roles/monitoring.viewer
Remediation:
Note: The workloads will need to be migrated to the new Node pool, and the old node
pools that use the default service account should be deleted to complete the
remediation.
Using Command Line:
To create a minimally privileged service account:
gcloud iam service-accounts create <node_sa_name> --display-name "GKE Node
Service Account"
export NODE_SA_EMAIL=$(gcloud iam service-accounts list --format='value(email)' --filter='displayName:GKE Node Service Account')
Grant the following roles to the service account:
export PROJECT_ID=$(gcloud config get-value project)
gcloud projects add-iam-policy-binding <project_id> --member
serviceAccount:<node_sa_email> --role roles/monitoring.metricWriter
gcloud projects add-iam-policy-binding <project_id> --member
serviceAccount:<node_sa_email> --role roles/monitoring.viewer
gcloud projects add-iam-policy-binding <project_id> --member
serviceAccount:<node_sa_email> --role roles/logging.logWriter
To create a new Node pool using the Service account, run the following command:
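A sketch of the command (node pool, cluster, zone and Service account email are placeholders):
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --service-account <node_sa_email>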
Default Value:
By default, nodes use the Compute Engine default service account when you create a
new cluster.
References:
1. https://cloud.google.com/compute/docs/access/service-
accounts#compute_engine_default_service_account
CIS Controls:
• Level 2
Description:
Kubernetes workloads should not use cluster node service accounts to authenticate to
Google Cloud APIs. Each Kubernetes Workload that needs to authenticate to other
Google services using Cloud IAM should be provisioned a dedicated Service account.
Enabling Workload Identity manages the distribution and rotation of Service account
keys for the workloads to use.
Rationale:
Impact:
Workload Identity replaces the need to use Metadata Concealment and as such, the
two approaches are incompatible. The sensitive metadata protected by Metadata
Concealment is also protected by Workload Identity.
When Workload Identity is enabled, the Compute Engine default Service account can
not be used. Correspondingly, Workload Identity can't be used with Pods running in the
host network. Workloads may also need to be modified in order for them to use
Workload Identity, as described within: https://cloud.google.com/kubernetes-
engine/docs/how-to/workload-identity
GKE infrastructure pods such as Stackdriver will continue to use the Node's Service
account.
Audit:
Using Google Cloud Console:
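Using Command Line, a check along these lines (a sketch) shows whether a Workload Identity pool is configured for the cluster:
gcloud container clusters describe <cluster_name> --zone <cluster_zone> --format json | jq '.workloadIdentityConfig'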
Remediation:
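Using Command Line, Workload Identity can be enabled on an existing cluster along these lines (a sketch; the workload pool name follows the <project_id>.svc.id.goog convention):
gcloud container clusters update <cluster_name> --zone <cluster_zone> --workload-pool=<project_id>.svc.id.goog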
Note that existing Node pools are unaffected. New Node pools default to --workload-
metadata-from-node=GKE_METADATA_SERVER.
Then, modify existing Node pools to enable GKE_METADATA_SERVER:
gcloud container node-pools update <node_pool_name> --cluster <cluster_name>
--zone <cluster_zone> --workload-metadata=GKE_METADATA
Workloads may need to be modified in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-
identity. Also consider the effects on the availability of hosted workloads as Node pools
are updated. It may be more appropriate to create new Node Pools.
Default Value:
By default, Workload Identity is disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
2. https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
CIS Controls:
This section contains recommendations relating to using Cloud KMS with GKE.
• Level 2
Description:
Encrypt Kubernetes secrets, stored in etcd, at the application-layer using a customer-
managed key in Cloud KMS.
Rationale:
By default, GKE encrypts customer content stored at rest, including Secrets. GKE
handles and manages this default encryption for you without any additional action on
your part.
Application-layer Secrets Encryption provides an additional layer of security for sensitive
data, such as user defined Secrets and Secrets required for the operation of the cluster,
such as service account keys, which are all stored in etcd.
Using this functionality, you can use a key, that you manage in Cloud KMS, to encrypt
data at the application layer. This protects against attackers in the event that they
manage to gain access to etcd.
Impact:
To use the Cloud KMS CryptoKey to protect etcd in the cluster, the 'Kubernetes Engine
Service Agent' Service account must hold the 'Cloud KMS CryptoKey
Encrypter/Decrypter' role.
Audit:
Using Google Cloud Console:
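A command-line check along these lines (a sketch) returns the cluster's database encryption configuration:
gcloud container clusters describe <cluster_name> --zone <zone> --format json | jq '.databaseEncryption'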
If configured correctly, the output from the command returns a response containing the
following detail:
{
"currentState": "CURRENT_STATE_ENCRYPTED",
"keyName": "projects/<key_project_id>/locations/us-
central1/keyRings/<ring_name>/cryptoKeys/<key_name>",
"state": "ENCRYPTED"
}
Remediation:
To enable Application-layer Secrets Encryption, several configuration items are
required. These include:
• A key ring
• A key
• A GKE service account with Cloud KMS CryptoKey Encrypter/Decrypter
role
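Create a key ring (a sketch of the corresponding command; names and location are placeholders):
gcloud kms keyrings create <ring_name> --location <location> --project <key_project_id>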
Create a key:
gcloud kms keys create <key_name> --location <location> --keyring <ring_name>
--purpose encryption --project <key_project_id>
Grant the Kubernetes Engine Service Agent service account the Cloud KMS
CryptoKey Encrypter/Decrypter role:
gcloud kms keys add-iam-policy-binding <key_name> --location <location> --
keyring <ring_name> --member serviceAccount:<service_account_name> --role
roles/cloudkms.cryptoKeyEncrypterDecrypter --project <key_project_id>
To create a new cluster with Application-layer Secrets Encryption:
gcloud container clusters create <cluster_name> --cluster-version=latest --
zone <zone> --database-encryption-key
projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKey
s/<key_name> --project <cluster_project_id>
Default Value:
By default, Application-layer Secrets Encryption is disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/encrypting-secrets
• Level 2
Description:
Running the GKE Metadata Server prevents workloads from accessing sensitive
instance metadata and facilitates Workload Identity.
Rationale:
Every node stores its metadata on a metadata server. Some of this metadata, such as
kubelet credentials and the VM instance identity token, is sensitive and should not be
exposed to a Kubernetes workload. Enabling the GKE Metadata server prevents pods
(that are not running on the host network) from accessing this metadata and facilitates
Workload Identity.
When unspecified, the default setting allows running pods to have full access to the
node's underlying metadata server.
Impact:
The GKE Metadata Server must be run when using Workload Identity. Because
Workload Identity replaces the need to use Metadata Concealment, the two approaches
are incompatible.
When the GKE Metadata Server and Workload Identity are enabled, unless the Pod is
running on the host network, Pods cannot use the Compute Engine default service
account.
Workloads may need modification in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-
identity.
Audit:
Using Google Cloud Console
Remediation:
The GKE Metadata Server requires Workload Identity to be enabled on a cluster. Modify
the cluster to enable Workload Identity and enable the GKE Metadata Server.
Using Google Cloud Console
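A command-line equivalent, following the Node pool update shown earlier in this Benchmark (a sketch):
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> --zone <cluster_zone> --workload-metadata=GKE_METADATA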
Workloads may need modification in order for them to use Workload Identity as
described within: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-
identity.
Default Value:
By default, running pods have full access to the node's underlying metadata server.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/protecting-cluster-
metadata#concealment
2. https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
3. https://cloud.google.com/kubernetes-engine/docs/concepts/workload-identity
CIS Controls:
• Level 1
Description:
Use Container-Optimized OS (cos_containerd) as a managed, optimized and hardened
base OS that limits the host's attack surface.
Rationale:
COS is an operating system image for Compute Engine VMs optimized for running
containers. With COS, the containers can be brought up on Google Cloud Platform
quickly, efficiently, and securely.
Using COS as the node image provides the following benefits:
• Run containers out of the box: COS instances come pre-installed with the
container runtime and cloud-init. With a COS instance, the container can be
brought up at the same time as the VM is created, with no on-host setup
required.
• Smaller attack surface: COS has a smaller footprint, reducing the instance's
potential attack surface.
• Locked-down by default: COS instances include a locked-down firewall and other
security settings by default.
Impact:
If modifying an existing cluster's Node pool to run COS, the upgrade operation used is
long-running and will block other operations on the cluster (including delete) until it has
run to completion.
COS nodes also provide an option with containerd as the main container runtime
directly integrated with Kubernetes instead of docker. Thus, on these nodes, Docker
cannot view or access containers or images managed by Kubernetes. Applications
should not interact with Docker directly. For general troubleshooting or debugging, use
crictl instead.
Audit:
Using Google Cloud Console:
Default Value:
Container-Optimized OS with containerd (cos_containerd) is the default option for a
cluster node image.
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/using-containerd
2. https://cloud.google.com/kubernetes-engine/docs/concepts/node-images
CIS Controls:
• Level 2
Description:
Nodes in a degraded state are an unknown quantity and so may pose a security risk.
Rationale:
Kubernetes Engine's node auto-repair feature helps you keep the nodes in the cluster in
a healthy, running state. When enabled, Kubernetes Engine makes periodic checks on
the health state of each node in the cluster. If a node fails consecutive health checks
over an extended time period, Kubernetes Engine initiates a repair process for that
node.
Impact:
If multiple nodes require repair, Kubernetes Engine might repair them in parallel.
Kubernetes Engine limits the number of repairs depending on the size of the cluster (bigger
clusters have a higher limit) and the number of broken nodes in the cluster (limit
decreases if many nodes are broken).
Node auto-repair is not available on Alpha Clusters.
Audit:
Using Google Cloud Console
Remediation:
Using Google Cloud Console
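Using Command Line (a sketch; enables auto-repair on an existing Node pool):
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --enable-autorepair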
Default Value:
Node auto-repair is enabled by default.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair
CIS Controls:
• Level 2
Description:
Node auto-upgrade keeps nodes at the current Kubernetes and OS security patch level
to mitigate known vulnerabilities.
Rationale:
Node auto-upgrade helps you keep the nodes in the cluster or node pool up to date with
the latest stable patch version of Kubernetes as well as the underlying node operating
system. Node auto-upgrade uses the same update mechanism as manual node
upgrades.
Node pools with node auto-upgrade enabled are automatically scheduled for upgrades
when a new stable Kubernetes version becomes available. When the upgrade is
performed, the Node pool is upgraded to match the current cluster master version. From
a security perspective, this has the benefit of applying security updates automatically to
the Kubernetes Engine when security fixes are released.
Impact:
Enabling node auto-upgrade does not cause the nodes to upgrade immediately.
Automatic upgrades occur at regular intervals at the discretion of the Kubernetes
Engine team.
To prevent upgrades occurring during a peak period for the cluster, a maintenance
window should be defined. A maintenance window is a four-hour timeframe that can be
chosen, during which automatic upgrades should occur. Upgrades can occur on any
day of the week, and at any time within the timeframe. To prevent upgrades from
occurring during certain dates, a maintenance exclusion should be defined. A
maintenance exclusion can span multiple days.
Audit:
Using Google Cloud Console
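A command-line check along these lines (a sketch) returns the Node pool's management settings:
gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.management'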
If node auto-upgrade is disabled, the output of the above command will not contain the
autoUpgrade entry.
Remediation:
Using Google Cloud Console
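Using Command Line (a sketch; enables auto-upgrade on an existing Node pool):
gcloud container node-pools update <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --enable-autoupgrade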
Default Value:
Node auto-upgrade is enabled by default.
Even if a cluster has been created with node auto-repair enabled, this only applies to
the default Node pool. Subsequent node pools do not have node auto-upgrade enabled
by default.
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/node-auto-upgrades
2. https://cloud.google.com/kubernetes-engine/docs/how-to/maintenance-windows-
and-exclusions
• Level 1
Description:
Subscribe to the Regular or Stable Release Channel to automate version upgrades to
the GKE cluster and to reduce version management complexity to the number of
features and level of stability required.
Rationale:
Release Channels signal a graduating level of stability and production-readiness. These
are based on observed performance of GKE clusters running that version and represent
experience and confidence in the cluster version.
The Regular release channel upgrades every few weeks and is for production users
who need features not yet offered in the Stable channel. These versions have passed
internal validation, but don't have enough historical data to guarantee their stability.
Known issues generally have known workarounds.
The Stable release channel upgrades every few months and is for production users who
need stability above all else, and for whom frequent upgrades are too risky. These
versions have passed internal validation and have been shown to be stable and reliable
in production, based on the observed performance of those clusters.
Critical security patches are delivered to all release channels.
Impact:
Once release channels are enabled on a cluster, they cannot be disabled. To stop using
release channels, the cluster must be recreated without the --release-channel flag.
Node auto-upgrade is enabled (and cannot be disabled), so the cluster is updated
automatically from releases available in the chosen release channel.
Audit:
Using Google Cloud Console:
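A command-line check along these lines (a sketch) returns the cluster's release channel:
gcloud container clusters describe <cluster_name> --zone <zone> --format="value(releaseChannel.channel)"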
Returned Value:
"REGULAR"
The output of the above command will return REGULAR or STABLE if these release
channels are being used to manage automatic upgrades for the cluster.
Remediation:
Currently, cluster Release Channels are only configurable at cluster provisioning time.
Using Google Cloud Console:
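Using Command Line, a cluster can be created on a release channel along these lines (a sketch; the channel value is a placeholder):
gcloud container clusters create <cluster_name> --zone <zone> --release-channel <regular|stable>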
Default Value:
Currently, release channels are not enabled by default.
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/release-channels
2. https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades
3. https://cloud.google.com/kubernetes-engine/docs/how-to/maintenance-windows-
and-exclusions
• Level 1
Description:
Shielded GKE Nodes provides verifiable integrity via secure boot, virtual trusted
platform module (vTPM)-enabled measured boot, and integrity monitoring.
Rationale:
Shielded GKE nodes protect clusters against boot- or kernel-level malware or rootkits
which persist beyond an infected OS.
Shielded GKE nodes run firmware which is signed and verified using Google's
Certificate Authority, ensuring that the nodes' firmware is unmodified and establishing
the root of trust for Secure Boot. GKE node identity is strongly protected via virtual
Trusted Platform Module (vTPM) and verified remotely by the master node before the
node joins the cluster. Lastly, GKE node integrity (i.e., boot sequence and kernel) is
measured and can be monitored and verified remotely.
Impact:
After Shielded GKE Nodes is enabled in a cluster, any nodes created in a Node pool
without Shielded GKE Nodes enabled, or created outside of any Node pool, aren't able
to join the cluster.
Shielded GKE Nodes can only be used with Container-Optimized OS (COS), COS with
containerd, and Ubuntu node images.
Audit:
Using Google Cloud Console:
This will return the following if Shielded GKE Nodes are enabled:
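A command-line check and the kind of response expected when enabled (a sketch; the field name is assumed):
gcloud container clusters describe <cluster_name> --zone <zone> --format json | jq '.shieldedNodes'
{
  "enabled": true
}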
Remediation:
Note: From version 1.18, clusters will have Shielded GKE nodes enabled by default.
Using Google Cloud Console:
To update an existing cluster to use Shielded GKE nodes:
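A command-line equivalent (a sketch):
gcloud container clusters update <cluster_name> --zone <zone> --enable-shielded-nodes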
Default Value:
Clusters will have Shielded GKE nodes enabled by default, as of version v1.18
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-nodes
CIS Controls:
• Level 1
Description:
Enable Integrity Monitoring for Shielded GKE Nodes to be notified of inconsistencies
during the node boot sequence.
Rationale:
Integrity Monitoring provides active alerting for Shielded GKE nodes which allows
administrators to respond to integrity failures and prevent compromised nodes from
being deployed into the cluster.
Impact:
None.
Audit:
Using Google Cloud Console:
Remediation:
Once a Node pool is provisioned, it cannot be updated to enable Integrity Monitoring.
New Node pools must be created within the cluster with Integrity Monitoring enabled.
Using Google Cloud Console
Workloads from existing non-conforming Node pools will need to be migrated to the
newly created Node pool, then delete non-conforming Node pools to complete the
remediation
Using Command Line
To create a Node pool within the cluster with Integrity Monitoring enabled, run the
following command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name>
--zone <compute_zone> --shielded-integrity-monitoring
Workloads from existing non-conforming Node pools will need to be migrated to the
newly created Node pool, then delete non-conforming Node pools to complete the
remediation
Default Value:
Integrity Monitoring is disabled by default on GKE clusters. Integrity Monitoring is
enabled by default for Shielded GKE Nodes; however, if Secure Boot is enabled at
creation time, Integrity Monitoring is disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-nodes
2. https://cloud.google.com/compute/shielded-vm/docs/integrity-monitoring
CIS Controls:
• Level 2
Description:
Enable Secure Boot for Shielded GKE Nodes to verify the digital signature of node boot
components.
Rationale:
An attacker may seek to alter boot components to persist malware or root kits during
system initialisation. Secure Boot helps ensure that the system only runs authentic
software by verifying the digital signature of all boot components, and halting the boot
process if signature verification fails.
Impact:
Secure Boot will not permit the use of third-party unsigned kernel modules.
Audit:
Using Google Cloud Console:
Remediation:
Once a Node pool is provisioned, it cannot be updated to enable Secure Boot. New
Node pools must be created within the cluster with Secure Boot enabled.
Using Google Cloud Console:
Workloads will need to be migrated from existing non-conforming Node pools to the
newly created Node pool, then delete the non-conforming pools.
Using Command Line:
To create a Node pool within the cluster with Secure Boot enabled, run the following
command:
gcloud container node-pools create <node_pool_name> --cluster <cluster_name>
--zone <compute_zone> --shielded-secure-boot
Workloads will need to be migrated from existing non-conforming Node pools to the
newly created Node pool, then delete the non-conforming pools.
Default Value:
By default, Secure Boot is disabled in GKE clusters, including when Shielded GKE Nodes
is enabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-
nodes#secure_boot
2. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster
• Level 2
Description:
Enable VPC Flow Logs and Intranode Visibility to see pod-level traffic, even for traffic
within a worker node.
Rationale:
Enabling Intranode Visibility makes intranode pod to pod traffic visible to the networking
fabric. With this feature, VPC Flow Logs or other VPC features can be used for
intranode traffic.
Impact:
Enabling it on an existing cluster causes the cluster master and the cluster nodes to restart,
which might cause disruption.
Audit:
Using Google Cloud Console:
Remediation:
Enable Intranode Visibility:
Using Google Cloud Console:
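A command-line equivalent (a sketch; note the restart impact described above):
gcloud container clusters update <cluster_name> --zone <zone> --enable-intra-node-visibility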
Default Value:
By default, Intranode Visibility is disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/intranode-visibility
2. https://cloud.google.com/vpc/docs/using-flow-logs
• Level 1
Description:
Create Alias IPs for the node network CIDR range in order to subsequently configure
IP-based policies and firewalling for pods. A cluster that uses Alias IPs is called a VPC-
native cluster.
Rationale:
Using Alias IPs has several benefits:
• Pod IPs are reserved within the network ahead of time, which prevents conflict
with other compute resources.
• The networking layer can perform anti-spoofing checks to ensure that egress
traffic is not sent with arbitrary source IPs.
• Firewall controls for Pods can be applied separately from their nodes.
• Alias IPs allow Pods to directly access hosted services without using a NAT
gateway.
Impact:
You cannot currently migrate an existing cluster that uses routes for Pod routing to a
cluster that uses Alias IPs.
Cluster IPs for internal services remain only available from within the cluster. If you want
to access a Kubernetes Service from within the VPC, but from outside of the cluster,
use an internal load balancer.
Audit:
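A command-line check along these lines (a sketch) can be used:
gcloud container clusters describe <cluster_name> --zone <zone> --format json | jq '.ipAllocationPolicy.useIpAliases'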
The output of the above command should return true, if VPC-native (using alias IP) is
enabled. If VPC-native (using alias IP) is disabled, the above command will return null
({ }).
Remediation:
Alias IPs cannot be enabled on an existing cluster. To create a new cluster using Alias
IPs, follow the instructions below.
Using Google Cloud Console:
If using Standard configuration mode:
Default Value:
By default, VPC-native (using alias IP) is enabled when you create a new cluster in the
Google Cloud Console, however this is disabled when creating a new cluster using the
gcloud CLI, unless the --enable-ip-alias argument is specified.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips
2. https://cloud.google.com/kubernetes-engine/docs/concepts/alias-ips
• Level 2
Description:
Enable Control Plane Authorized Networks to restrict access to the cluster's control
plane to only an allowlist of authorized IPs.
Rationale:
Authorized networks are a way of specifying a restricted range of IP addresses that are
permitted to access your cluster's control plane. Kubernetes Engine uses both
Transport Layer Security (TLS) and authentication to provide secure access to your
cluster's control plane from the public internet. This provides you the flexibility to
administer your cluster from anywhere; however, you might want to further restrict
access to a set of IP addresses that you control. You can set this restriction by
specifying an authorized network.
Control Plane Authorized Networks blocks untrusted IP addresses. Google Cloud
Platform IPs (such as traffic from Compute Engine VMs) can reach your master through
HTTPS provided that they have the necessary Kubernetes credentials.
Restricting access to an authorized network can provide additional security benefits for
your container cluster.
Impact:
When implementing Control Plane Authorized Networks, be careful to ensure all desired
networks are on the allowlist to prevent inadvertently blocking external access to your
cluster's control plane.
Audit:
Using Google Cloud Console:
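A command-line check along these lines (a sketch) returns the authorized networks configuration:
gcloud container clusters describe <cluster_name> --zone <zone> --format json | jq '.masterAuthorizedNetworksConfig'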
Remediation:
Using Google Cloud Console:
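A command-line equivalent (a sketch; the CIDR list is a placeholder):
gcloud container clusters update <cluster_name> --zone <zone> --enable-master-authorized-networks --master-authorized-networks <cidr_1>,<cidr_2>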
Default Value:
By default, Control Plane Authorized Networks is disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks
CIS Controls:
• Level 2
Description:
Disable access to the Kubernetes API from outside the node network if it is not required.
Rationale:
In a private cluster, the master node has two endpoints, a private and public endpoint.
The private endpoint is the internal IP address of the master, behind an internal load
balancer in the master's VPC network. Nodes communicate with the master using the
private endpoint. The public endpoint enables the Kubernetes API to be accessed from
outside the master's VPC network.
Although the Kubernetes API requires an authorized token to perform sensitive actions, a
vulnerability could potentially expose the Kubernetes API publicly with unrestricted access.
Additionally, an attacker may be able to identify the current cluster and Kubernetes API
version and determine whether it is vulnerable to an attack. Unless required, disabling
public endpoint will help prevent such threats, and require the attacker to be on the
master's VPC network to perform any attack on the Kubernetes API.
Impact:
To enable a Private Endpoint, the cluster has to also be configured with private nodes, a
private master IP range and IP aliasing enabled.
If the Private Endpoint flag --enable-private-endpoint is passed to the gcloud CLI,
or the external IP address undefined in the Google Cloud Console during cluster
creation, then all access from a public IP address is prohibited.
Audit:
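A command-line check along these lines (a sketch) can be used:
gcloud container clusters describe <cluster_name> --zone <zone> --format json | jq '.privateClusterConfig.enablePrivateEndpoint'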
The output of the above command returns true if a Private Endpoint is enabled with
Public Access disabled.
For an additional check, the endpoint parameter can be queried with the following
command:
gcloud container clusters describe <cluster_name> --format json | jq
'.endpoint'
The output of the above command returns a private IP address if Private Endpoint is
enabled with Public Access disabled.
Remediation:
Once a cluster is created without enabling Private Endpoint only, it cannot be
remediated. Rather, the cluster must be recreated.
Using Google Cloud Console:
Default Value:
By default, the Private Endpoint is disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
CIS Controls:
• v7: 12 Boundary Defense
• Level 1
Description:
Private Nodes are nodes with no public IP addresses. Disable public IP addresses for
cluster nodes, so that they only have private IP addresses.
Rationale:
Disabling public IP addresses on cluster nodes restricts access to only internal
networks, forcing attackers to obtain local network access before attempting to
compromise the underlying Kubernetes hosts.
Impact:
To enable Private Nodes, the cluster has to also be configured with a private master IP
range and IP Aliasing enabled.
Private Nodes do not have outbound access to the public internet. If you want to provide
outbound Internet access for your private nodes, you can use Cloud NAT or you can
manage your own NAT gateway.
To access Google Cloud APIs and services from private nodes, Private Google Access
needs to be set on Kubernetes Engine Cluster Subnets.
Audit:
Using Google Cloud Console:
Remediation:
Default Value:
By default, Private Nodes are disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
CIS Controls:
• v7: 12 Boundary Defense
• Level 2
Description:
Reduce the network attack surface of GKE nodes by using Firewalls to restrict ingress
and egress traffic.
Rationale:
Utilizing stringent ingress and egress firewall rules minimizes the ports and services
exposed to a network-based attacker, whilst also restricting egress routes within or out
of the cluster in the event that a compromised component attempts to form an outbound
connection.
Impact:
All instances targeted by a firewall rule, either using a tag or a service account will be
affected. Ensure there are no adverse effects on other instances using the target tag or
service account before implementing the firewall rule.
Audit:
Using Google Cloud Console:
Remediation:
Using Google Cloud Console:
Default Value:
Every VPC network has two implied firewall rules. These rules exist, but are not shown
in the Cloud Console:
• The implied allow egress rule: An egress rule whose action is allow, destination
is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send
traffic to any destination, except for traffic blocked by GCP. Outbound access
may be restricted by a higher priority firewall rule. Internet access is allowed if no
other firewall rules deny outbound traffic and if the instance has an external IP
address or uses a NAT instance.
• The implied deny ingress rule: An ingress rule whose action is deny, source is
0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by
blocking incoming traffic to them. Incoming access may be allowed by a higher
priority rule. Note that the default network includes some additional rules that
override this one, allowing certain types of incoming traffic.
The implied rules cannot be removed, but they have the lowest possible priorities.
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture
2. https://cloud.google.com/vpc/docs/using-firewalls
• Level 2
Description:
Encrypt traffic to HTTPS load balancers using Google-managed SSL certificates.
Rationale:
Encrypting traffic between users and the Kubernetes workload is fundamental to
protecting data sent over the web.
Google-managed SSL Certificates are provisioned, renewed, and managed for domain
names. This is only available for HTTPS load balancers created using Ingress
Resources, and not TCP/UDP load balancers created using Service of
type:LoadBalancer.
Impact:
Google-managed SSL Certificates are less flexible than certificates that are self-obtained
and self-managed. Managed certificates support a single, non-wildcard domain.
Self-managed certificates can support wildcards and multiple subject alternative names
(SANs).
Audit:
Using Command Line:
Identify if there are any workloads exposed publicly using Services of
type:LoadBalancer:
kubectl get svc -A -o json | jq '.items[] |
select(.spec.type=="LoadBalancer")'
Consider using ingresses instead of these services in order to use Google managed
SSL certificates.
For the ingresses within the cluster, run the following command:
kubectl get ingress -A -o json | jq .items[] | jq '{name: .metadata.name,
annotations: .metadata.annotations, namespace: .metadata.namespace, status:
.status}'
The above command should return the name of the ingress, namespace, annotations
and status. Check that the following annotation is present to ensure managed
certificates are referenced.
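The annotation in question is assumed to be the GKE managed-certificates annotation, along these lines (a sketch; the certificate name is a placeholder):
networking.gke.io/managed-certificates: <certificate_name>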
Remediation:
If services of type:LoadBalancer are discovered, consider replacing the Service with
an Ingress.
To configure the Ingress and use Google-managed SSL certificates, follow the
instructions as listed at: https://cloud.google.com/kubernetes-engine/docs/how-
to/managed-certs.
Default Value:
By default, Google-managed SSL Certificates are not created when an Ingress resource
is defined.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs
2. https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
CIS Controls:
• Level 1
Description:
Send logs and metrics to a remote aggregator to mitigate the risk of local tampering in
the event of a breach.
Rationale:
Exporting logs and metrics to a dedicated, persistent datastore such as Cloud
Operations for GKE ensures availability of audit data following a cluster security event,
and provides a central location for analysis of log and metric data collated from multiple
sources.
Audit:
Using Google Cloud Console:
LOGGING AND CLOUD MONITORING SUPPORT (PREFERRED):
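Command-line checks along these lines (a sketch) show the logging and monitoring services configured for the cluster; remediation uses the corresponding update flags listed in references 4 and 5:
gcloud container clusters describe <cluster_name> --zone <zone> --format="value(loggingService,monitoringService)"
gcloud container clusters update <cluster_name> --zone <zone> --logging=SYSTEM,WORKLOAD --monitoring=SYSTEM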
References:
1. https://cloud.google.com/stackdriver/docs/solutions/gke/observing
2. https://cloud.google.com/stackdriver/docs/solutions/gke/managing-logs
3. https://cloud.google.com/stackdriver/docs/solutions/gke/installing
4. https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--
logging
5. https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--
monitoring
CIS Controls:
• Level 2
Description:
Run the auditd logging daemon to obtain verbose operating system logs from GKE
nodes running Container-Optimized OS (COS).
Rationale:
Auditd logs provide valuable information about the state of the cluster and workloads,
such as error messages, login attempts, and binary executions. This information can be
used to debug issues or to investigate security incidents.
Impact:
Increased logging activity on a node increases resource usage on that node, which may
affect the performance of the workload and may incur additional resource costs. Audit
logs sent to Stackdriver consume log quota from the project. The log quota may require
increasing and storage to accommodate the additional logs.
Note that the provided logging daemonset only works on nodes running Container-
Optimized OS (COS).
Audit:
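A check along these lines (a sketch; the description annotation key is an assumption) lists daemonsets together with their descriptions and status:
kubectl get daemonsets --all-namespaces -o json | jq '.items[] | {name: .metadata.name, annotations: .metadata.annotations."kubernetes.io/description", namespace: .metadata.namespace, status: .status}'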
The above command returns the name, namespace and status of the daemonsets that
use the Stackdriver logging agent. The example auditd logging daemonset has a
description within the annotation as output by the command above:
{
  "name": "cos-auditd-logging",
  "annotations": "DaemonSet that enables Linux auditd logging on COS nodes.",
  "namespace": "cos-auditd",
  "status": {...}
}
Ensure that the status fields return that the daemonset is running as expected.
Remediation:
Using Command Line:
Download the example manifests:
curl https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-node-
tools/master/os-audit/cos-auditd-logging.yaml > cos-auditd-logging.yaml
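Deploying the downloaded manifests would then follow, along these lines (a sketch; the manifests may require the project ID to be substituted first):
kubectl apply -f cos-auditd-logging.yaml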
Default Value:
By default, the auditd logging daemonset is not launched when a GKE cluster is
created.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/linux-auditd-logging
2. https://cloud.google.com/container-optimized-os/docs
• Level 1
Description:
Disable Client Certificates, which require certificate rotation, for authentication. Instead,
use another authentication method like OpenID Connect.
Rationale:
With Client Certificate authentication, a client presents a certificate that the API server
verifies with the specified Certificate Authority. In GKE, Client Certificates are signed by
the cluster root Certificate Authority. When retrieved, the Client Certificate is only
base64 encoded and not encrypted.
GKE manages authentication via gcloud for you using the OpenID Connect token
method, setting up the Kubernetes configuration, getting an access token, and keeping
it up to date. This means Basic Authentication using static passwords and Client
Certificate authentication, which both require additional management overhead of key
management and rotation, are not necessary and should be disabled.
When Client Certificate authentication is disabled, you will still be able to authenticate to
the cluster with other authentication methods, such as OpenID Connect tokens. See
also Recommendation 6.8.1 to disable authentication using static passwords, known as
Basic Authentication.
Impact:
Users will no longer be able to authenticate with the pre-provisioned x509 certificate.
You will have to configure and use alternate authentication mechanisms, such as
OpenID Connect tokens.
Audit:
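One possible command-line check (a sketch; the exact field depends on the API version) inspects the cluster's master authentication block, which should not contain an issued client certificate:
gcloud container clusters describe <cluster_name> --zone <zone> --format json | jq '.masterAuth.clientCertificate'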
Default Value:
In Google Kubernetes Engine (GKE), both Basic Authentication and Client Certificate
issuance are disabled by default for new clusters. This change was implemented
starting with GKE version 1.12 to enhance security by reducing the attack surface
associated with these authentication methods.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-
cluster#restrict_authn_methods
• Level 2
Description:
Cluster Administrators should leverage G Suite Groups and Cloud IAM to assign
Kubernetes user roles to a collection of users, instead of to individual emails using only
Cloud IAM.
Rationale:
On- and off-boarding users is often difficult to automate and prone to error. Using a
single source of truth for user permissions via G Suite Groups reduces the number of
locations that an individual must be off-boarded from, and prevents users from accumulating unique permission sets that increase the cost of auditing.
Impact:
When migrating to using security groups, an audit of RoleBindings and
ClusterRoleBindings is required to ensure all users of the cluster are managed using
the new groups and not individually.
When managing RoleBindings and ClusterRoleBindings, be wary of inadvertently
removing bindings required by service accounts.
Audit:
Using G Suite Admin Console and Google Cloud Console
Page 193
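For reference, enabling Google Groups for RBAC on a cluster can be sketched as follows (the --security-group flag and the gke-security-groups group name are taken from the Google Groups for RBAC guide referenced below; verify the exact procedure there):
gcloud container clusters update <cluster_name> --zone <compute_zone> --security-group="gke-security-groups@<your_domain>"
RoleBindings and ClusterRoleBindings can then name group email addresses as subjects instead of individual users.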
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/google-groups-rbac
2. https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-
control
Page 194
• Level 1
Description:
Legacy Authorization, also known as Attribute-Based Access Control (ABAC), has been superseded by Role-Based Access Control (RBAC) and is not under active
development. RBAC is the recommended way to manage permissions in Kubernetes.
Rationale:
In Kubernetes, RBAC is used to grant permissions to resources at the cluster and
namespace level. RBAC allows the definition of roles with rules containing a set of
permissions, whilst the legacy authorizer (ABAC) in Kubernetes Engine grants broad,
statically defined permissions. As RBAC provides significant security advantages over
ABAC, it is the recommended option for access control. Where possible, legacy
authorization must be disabled for GKE clusters.
Impact:
Once the cluster has the legacy authorizer disabled, the user must be granted the ability
to create authorization roles using RBAC to ensure that the role-based access control
permissions take effect.
Audit:
Using Command Line:
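A check along the following lines can be used (a sketch; the legacyAbac field is an assumption based on the cluster describe output):
gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.legacyAbac'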
The output should return null ({}) if Legacy Authorization is Disabled. If Legacy
Authorization is Enabled, the above command will return a true value.
Remediation:
Page 195
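Using Command Line:
To disable Legacy Authorization on an existing cluster, a command along the following lines can be used (the --no-enable-legacy-authorization flag is the one referenced in the Additional Information section below):
gcloud container clusters update <cluster_name> --zone <compute_zone> --no-enable-legacy-authorization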
Default Value:
Kubernetes Engine clusters running GKE version 1.8 and later disable the legacy
authorization system by default, and thus role-based access control permissions take
effect with no special action required.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-
control
2. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-
cluster#leave_abac_disabled_default_for_110
Additional Information:
On clusters running GKE 1.6 or 1.7, Kubernetes Service accounts have full permissions
on the Kubernetes API by default. To ensure that the role-based access control
permissions take effect for a Kubernetes service account, the cluster must be created or
updated with the option --no-enable-legacy-authorization. This requirement is
removed for clusters running GKE version 1.8 or higher.
Page 196
Page 197
Page 198
• Level 2
Description:
Use Customer-Managed Encryption Keys (CMEK) to encrypt dynamically-provisioned
attached Google Compute Engine Persistent Disks (PDs) using keys managed within
Cloud Key Management Service (Cloud KMS).
Rationale:
GCE persistent disks are encrypted at rest by default using envelope encryption with
keys managed by Google. For additional protection, users can manage the Key
Encryption Keys using Cloud KMS.
Impact:
Encryption of dynamically-provisioned attached disks requires the use of the self-
provisioned Compute Engine Persistent Disk CSI Driver v0.5.1 or higher.
If CMEK is being configured with a regional cluster, the cluster must run GKE 1.14 or
higher.
Audit:
Using Google Cloud Console:
Page 199
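Using Command Line:
One way to check dynamically-provisioned disks is to inspect the StorageClasses used by the cluster (a sketch; the pd.csi.storage.gke.io provisioner name and the disk-encryption-kms-key parameter are assumptions based on the Compute Engine Persistent Disk CSI driver documentation):
kubectl get storageclasses -o json | jq '.items[] | {name: .metadata.name, provisioner: .provisioner, parameters: .parameters}'
StorageClasses that provision CMEK-protected disks should include a disk-encryption-kms-key parameter referencing a Cloud KMS key.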
Default Value:
Persistent disks are encrypted at rest by default, but are not encrypted using Customer-
Managed Encryption Keys by default. By default, the Compute Engine Persistent Disk
CSI Driver is not provisioned within the cluster.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek
2. https://cloud.google.com/compute/docs/disks/customer-managed-encryption
3. https://cloud.google.com/security/encryption-at-rest/default-encryption/
4. https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes
5. https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Page 200
• Level 2
Description:
Use Customer-Managed Encryption Keys (CMEK) to encrypt node boot disks using
keys managed within Cloud Key Management Service (Cloud KMS).
Rationale:
GCE persistent disks are encrypted at rest by default using envelope encryption with
keys managed by Google. For additional protection, users can manage the Key
Encryption Keys using Cloud KMS.
Impact:
Encryption of dynamically-provisioned attached disks requires the use of the self-
provisioned Compute Engine Persistent Disk CSI Driver v0.5.1 or higher.
If CMEK is being configured with a regional cluster, the cluster must run GKE 1.14 or
higher.
Audit:
Using Google Cloud Console:
Page 201
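Using Command Line:
A check along the following lines can be used for each node pool (a sketch; the bootDiskKmsKey field is an assumption based on the node pool describe output):
gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.config.bootDiskKmsKey'
The output should reference a Cloud KMS key if the node boot disks are encrypted with CMEK.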
Remediation:
Using Command Line:
Create a cluster using customer-managed encryption keys for the node boot disk, where <disk_type> is either pd-standard or pd-ssd.
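For example (a sketch; verify the --boot-disk-kms-key flag and the key resource path against the gcloud reference listed in the References section):
gcloud container clusters create <cluster_name> --zone <compute_zone> --disk-type <disk_type> --boot-disk-kms-key projects/<key_project_id>/locations/<location>/keyRings/<ring_name>/cryptoKeys/<key_name>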
Page 202
Default Value:
Persistent disks are encrypted at rest by default, but are not encrypted using Customer-
Managed Encryption Keys by default. By default, the Compute Engine Persistent Disk
CSI Driver is not provisioned within the cluster.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek
2. https://cloud.google.com/compute/docs/disks/customer-managed-encryption
3. https://cloud.google.com/security/encryption-at-rest/default-encryption/
4. https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes
5. https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Page 203
Page 204
• Level 1
Description:
Note: The Kubernetes web UI (Dashboard) does not have admin access by default in
GKE 1.7 and higher. The Kubernetes web UI is disabled by default in GKE 1.10 and
higher. In GKE 1.15 and higher, the Kubernetes web UI add-on KubernetesDashboard
is no longer supported as a managed add-on.
The Kubernetes Web UI (Dashboard) has historically been a source of vulnerabilities and
should only be deployed when necessary.
Rationale:
You should disable the Kubernetes Web UI (Dashboard) when running on Kubernetes
Engine. The Kubernetes Web UI is backed by a highly privileged Kubernetes Service
Account.
The Google Cloud Console provides all the required functionality of the Kubernetes
Web UI and leverages Cloud IAM to restrict user access to sensitive cluster controls
and settings.
Impact:
Users will be required to manage cluster resources using the Google Cloud Console or the command line, both of which require appropriate permissions. Using the command line requires the command line client, kubectl, to be installed on the user's device (it is already included in Cloud Shell), along with knowledge of command line operations.
Audit:
Using Google Cloud Console:
Currently not possible, due to the add-on having been removed. Must use the command
line.
Using Command Line:
Run the following command:
gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.addonsConfig.kubernetesDashboard'
Ensure the output of the above command has the JSON key attribute disabled set to true:
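A sketch of the expected output when the dashboard is disabled:
{
  "disabled": true
}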
Page 205
Remediation:
Using Google Cloud Console:
Currently not possible, due to the add-on having been removed. Must use the command
line.
Using Command Line:
To disable the Kubernetes Dashboard on an existing cluster, run the following
command:
gcloud container clusters update <cluster_name> --zone <zone> --update-addons=KubernetesDashboard=DISABLED
Default Value:
The Kubernetes web UI (Dashboard) does not have admin access by default in GKE
1.7 and higher. The Kubernetes web UI is disabled by default in GKE 1.10 and higher.
In GKE 1.15 and higher, the Kubernetes web UI add-on KubernetesDashboard is no
longer supported as a managed add-on.
References:
1. https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-
cluster#disable_kubernetes_dashboard
Page 206
• Level 1
Description:
Alpha clusters are not covered by an SLA and are not production-ready.
Rationale:
Alpha clusters are designed for early adopters to experiment with workloads that take
advantage of new features before those features are production-ready. They have all
Kubernetes API features enabled, but are not covered by the GKE SLA, do not receive
security updates, have node auto-upgrade and node auto-repair disabled, and cannot
be upgraded. They are also automatically deleted after 30 days.
Impact:
Users and workloads will not be able to take advantage of features included within
Alpha clusters.
Audit:
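Using Command Line:
A check along the following lines can be used (a sketch; the enableKubernetesAlpha field is an assumption based on the cluster describe output):
gcloud container clusters describe <cluster_name> --zone <compute_zone> --format json | jq '.enableKubernetesAlpha'
The output should be null or false for clusters that are not Alpha clusters.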
Remediation:
Alpha features cannot be disabled. To remediate, a new cluster must be created.
Using Google Cloud Console
Page 207
Default Value:
By default, Kubernetes Alpha features are disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters
Page 208
• Level 2
Description:
Use GKE Sandbox to restrict untrusted workloads as an additional layer of protection
when running in a multi-tenant environment.
Rationale:
GKE Sandbox provides an extra layer of security to prevent untrusted code from
affecting the host kernel on your cluster nodes.
When you enable GKE Sandbox on a Node pool, a sandbox is created for each Pod
running on a node in that Node pool. In addition, nodes running sandboxed Pods are
prevented from accessing other GCP services or cluster metadata. Each sandbox uses
its own userspace kernel.
Multi-tenant clusters and clusters whose containers run untrusted workloads are more
exposed to security vulnerabilities than other clusters. Examples include SaaS
providers, web-hosting providers, or other organizations that allow their users to upload
and run code. A flaw in the container runtime or in the host kernel could allow a process
running within a container to 'escape' the container and affect the node's kernel,
potentially bringing down the node.
The potential also exists for a malicious tenant to gain access to and exfiltrate another
tenant's data in memory or on disk, by exploiting such a defect.
Impact:
Using GKE Sandbox requires the node image to be set to Container-Optimized OS with
containerd (cos_containerd).
It is not currently possible to use GKE Sandbox along with the following Kubernetes
features:
Page 209
Audit:
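Using Command Line:
The node pool configuration can be checked along the following lines (a sketch; the sandboxConfig field name and its exact contents may differ between API versions):
gcloud container node-pools describe <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --format json | jq '.config.sandboxConfig'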
The output of the above command will return the following if the Node pool is running a
sandbox:
{
"sandboxType":"gvisor"
}
If there is no sandbox, the above command output will be null ({ }).
The default node pool cannot use GKE Sandbox.
Remediation:
GKE Sandbox cannot be enabled on an existing node pool; a new node pool must be created instead. The default node pool (the first node pool in your cluster, created when the
cluster is created) cannot use GKE Sandbox.
Using Google Cloud Console:
Page 210
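Using Command Line:
A sandboxed node pool can be created along the following lines (a sketch; verify the --sandbox flag against the sandbox-pods how-to guide referenced below):
gcloud container node-pools create <node_pool_name> --cluster <cluster_name> --zone <compute_zone> --image-type=cos_containerd --sandbox="type=gvisor"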
Default Value:
By default, GKE Sandbox is disabled.
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/sandbox-pods
2. https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools
3. https://cloud.google.com/kubernetes-engine/docs/how-to/sandbox-pods
Additional Information:
The default node pool (the first node pool in your cluster, created when the cluster is
created) cannot use GKE Sandbox.
When using GKE Sandbox, your cluster must have at least two node pools. You must
always have at least one node pool where GKE Sandbox is disabled. This node pool
must contain at least one node, even if all your workloads are sandboxed.
It is optional but recommended that you enable Stackdriver Logging and Stackdriver
Monitoring, by adding the flag --enable-stackdriver-kubernetes. gVisor messages
are logged.
Page 211
• Level 2
Description:
Binary Authorization helps to protect supply-chain security by only allowing images with
verifiable cryptographically signed metadata into the cluster.
Rationale:
Binary Authorization provides software supply-chain security for images that are
deployed to GKE from Google Container Registry (GCR) or another container image
registry.
Binary Authorization requires images to be signed by trusted authorities during the
development process. These signatures are then validated at deployment time. By
enforcing validation, tighter control over the container environment can be gained by
ensuring only verified images are integrated into the build-and-release process.
Impact:
Care must be taken when defining policy in order to prevent inadvertent denial of
container image deployments. Depending on policy, attestations for existing container
images running within the cluster may need to be created before those images are
redeployed or pulled as part of the pod churn.
To prevent key system images from being denied deployment, consider the use of
global policy evaluation mode, which uses a global policy provided by Google and
exempts a list of Google-provided system images from further policy evaluation.
Audit:
Page 212
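Using Command Line:
First export the current Binary Authorization policy to a file (a sketch; see the setting-up guide referenced below for the complete audit procedure):
gcloud container binauthz policy export > current-policy.yaml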
Ensure that the current policy is not configured to allow all images (evaluationMode:
ALWAYS_ALLOW):
cat current-policy.yaml
...
defaultAdmissionRule:
evaluationMode: ALWAYS_ALLOW
Remediation:
Using Google Cloud Console
Page 213
Example:
gcloud container clusters update $CLUSTER_NAME --zone $COMPUTE_ZONE --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--binauthz-evaluation-mode for more details on the available evaluation modes.
Create a Binary Authorization Policy using the Binary Authorization Policy Reference:
https://cloud.google.com/binary-authorization/docs/policy-yaml-reference for guidance.
Import the policy file into Binary Authorization:
gcloud container binauthz policy import <yaml_policy>
Default Value:
By default, Binary Authorization is disabled.
References:
1. https://cloud.google.com/binary-authorization/docs/setting-up
2. https://cloud.google.com/sdk/gcloud/reference/container/clusters/update#--
binauthz-evaluation-mode
Page 214
• Level 2
Description:
Enable the GKE security posture dashboard and its configuration auditing so that workload configuration concerns are surfaced for the cluster.
Rationale:
The security posture dashboard provides insights about your workload security posture
at the runtime phase of the software delivery life-cycle.
Impact:
GKE security posture configuration auditing checks your workloads against a set of
defined best practices. Each configuration check has its own impact or risk. Learn more
about the checks: https://cloud.google.com/kubernetes-engine/docs/concepts/about-
configuration-scanning
Example: The host namespace check identifies pods that share host namespaces.
Pods that share host namespaces allow Pod processes to communicate with host
processes and gather host information, which could lead to a container escape.
Audit:
Check the SecurityPostureConfig on your cluster:
gcloud container clusters describe <cluster_name> --location <location>
The output should include:
securityPostureConfig:
  mode: BASIC
Remediation:
Enable security posture via the Google Cloud console, gcloud, or the API.
https://cloud.google.com/kubernetes-engine/docs/how-to/protect-workload-configuration
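For example, from the command line (a sketch; the --security-posture flag and its accepted values are assumptions and should be verified against the linked guide):
gcloud container clusters update <cluster_name> --location <location> --security-posture=standard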
Default Value:
GKE security posture has multiple features, and not all are enabled by default. Configuration auditing is enabled by default for new Standard and Autopilot clusters:
securityPostureConfig:
  mode: BASIC
References:
1. https://cloud.google.com/kubernetes-engine/docs/concepts/about-security-
posture-dashboard
Page 215
Page 216
3 Worker Nodes
3.1.1 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Automated)
3.2 Kubelet
4 Policies
5 Managed services
5.2.1 Ensure GKE clusters are not running using the Compute Engine default service account (Automated)
5.7 Logging
5.9 Storage
5.10.2 Ensure that Alpha clusters are not used for production workloads (Automated)
Page 222