This page explains how to configure cluster-wide network policies for Google Kubernetes Engine (GKE).
Network policies and FQDN network policies help you define communication traffic rules between Pods. Network policies control how Pods communicate with each other within their applications and with external endpoints.
As a cluster administrator, you can configure Cilium cluster-wide network policies (CCNP), which overcome the network policy limitations for managing cluster-wide administrative traffic. Cilium cluster-wide network policies enforce strict network rules for all workloads across the entire cluster, across namespaces, overriding any application-specific rules.
Cilium cluster-wide network policy for GKE is a cluster-scoped CustomResourceDefinition (CRD) that specifies policies enforced by GKE. By enabling Cilium cluster-wide network policy in GKE, you can centrally manage network rules for your entire cluster. You can control basic Layer 3 (IP-level) and Layer 4 (port-level) access for traffic entering and leaving the cluster.
Benefits
With Cilium cluster-wide network policy you can:
- Enforce centralized security: With CCNP, you can define network access rules that apply to your entire network. These CCNP rules act as a top-level security layer, overriding any potentially conflicting policies at the namespace level.
- Protect multi-tenancy: If your cluster hosts multiple teams or tenants, you can secure isolation within a shared cluster by implementing CCNP rules, which focus on network traffic control. You can enforce network-level separation by assigning namespaces or groups of namespaces to specific teams.
- Define flexible default policies: With CCNP, you can define default network rules for the entire cluster. You can customize these rules when necessary without compromising your overall cluster security.
To implement CCNP, enable GKE Dataplane V2 on your cluster. Ensure that the CCNP CRD is enabled, then create policies that define network access rules for your cluster.
Before you begin
Before you start, make sure you have performed the following tasks:
- Enable the Google Kubernetes Engine API.
- If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.
Requirements
Cilium cluster-wide network policies have the following requirements:
- Google Cloud CLI version 465.0.0 or later.
- You must have a GKE cluster running one of the following versions:
- 1.28.6-gke.1095000 or later
- 1.29.1-gke.1016000 or later
- Your cluster must use GKE Dataplane V2.
- You must enable the Cilium cluster-wide network policy CRD.
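To verify the gcloud CLI and cluster requirements, you can run checks similar to the following. This is a sketch: it assumes the cluster already exists, and CLUSTER_NAME and COMPUTE_LOCATION are placeholders for your own values.
# Check the installed Google Cloud CLI version (must be 465.0.0 or later).
gcloud version
# Check the cluster version and whether GKE Dataplane V2 (ADVANCED_DATAPATH) is in use.
gcloud container clusters describe CLUSTER_NAME \
    --location COMPUTE_LOCATION \
    --format="value(currentMasterVersion,networkConfig.datapathProvider)"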
Limitations
Cilium cluster-wide network policies have the following limitations:
- Layer 7 policies are not supported.
- Node selectors are not supported.
- The maximum number of CiliumClusterwideNetworkPolicy resources per cluster is 1000. You can count the existing policies as shown after this list.
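To check how many cluster-wide policies already exist, you can count the CiliumClusterwideNetworkPolicy objects with kubectl; this is a minimal sketch that assumes the CRD is already installed.
# Count existing cluster-wide policies and compare against the 1000-policy limit.
kubectl get ciliumclusterwidenetworkpolicies.cilium.io --no-headers | wc -l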
Enable Cilium cluster-wide network policy in a new cluster
You can enable Cilium cluster-wide network policy in a new cluster by using the Google Cloud CLI or the Google Kubernetes Engine API.
gcloud
To enable Cilium cluster-wide network policy in a new cluster, create a new cluster with the --enable-cilium-clusterwide-network-policy flag.
Autopilot
gcloud container clusters create-auto CLUSTER_NAME \
--location COMPUTE_LOCATION \
--enable-cilium-clusterwide-network-policy
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the location of your cluster.
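For example, the following command creates an Autopilot cluster with the feature enabled. The cluster name and region shown here are placeholders for illustration only.
gcloud container clusters create-auto example-ccnp-cluster \
    --location us-central1 \
    --enable-cilium-clusterwide-network-policy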
Standard
gcloud container clusters create CLUSTER_NAME \
--location COMPUTE_LOCATION \
--enable-cilium-clusterwide-network-policy \
--enable-dataplane-v2
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the location of your cluster.
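After the cluster is created, you can confirm both settings by describing the cluster. This is a sketch; the enableCiliumClusterwideNetworkPolicy field name is taken from the API example later on this page.
# Expected output: ADVANCED_DATAPATH followed by True.
gcloud container clusters describe CLUSTER_NAME \
    --location COMPUTE_LOCATION \
    --format="value(networkConfig.datapathProvider,networkConfig.enableCiliumClusterwideNetworkPolicy)"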
API
To enable Cilium cluster-wide network policy when you create a new cluster, set the datapathProvider field to ADVANCED_DATAPATH and the enableCiliumClusterwideNetworkPolicy field to true in the networkConfig object:
{
  "cluster": {
    ...
    "networkConfig": {
      "datapathProvider": "ADVANCED_DATAPATH",
      "enableCiliumClusterwideNetworkPolicy": true
    }
  }
}
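As an illustration of how that payload might be submitted, the following sketch calls the projects.locations.clusters.create method with curl. PROJECT_ID, COMPUTE_LOCATION, and CLUSTER_NAME are placeholders, the request is authorized with your current gcloud credentials, and other required cluster fields (such as node pool or Autopilot configuration) are omitted for brevity.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://container.googleapis.com/v1/projects/PROJECT_ID/locations/COMPUTE_LOCATION/clusters" \
    -d '{
      "cluster": {
        "name": "CLUSTER_NAME",
        "networkConfig": {
          "datapathProvider": "ADVANCED_DATAPATH",
          "enableCiliumClusterwideNetworkPolicy": true
        }
      }
    }'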
Verify that ciliumclusterwidenetworkpolicies.cilium.io
is present in the output of the following command:
kubectl get crds ciliumclusterwidenetworkpolicies.cilium.io
The output should be similar to the following:
ciliumclusterwidenetworkpolicies.cilium.io 2023-09-19T16:54:48Z
Enable Cilium cluster-wide network policy in an existing cluster
You can enable Cilium cluster-wide network policy in an existing cluster by using the Google Cloud CLI or the Google Kubernetes Engine API.
gcloud
Confirm the cluster has GKE Dataplane V2 enabled.
gcloud container clusters describe CLUSTER_NAME \
    --location COMPUTE_LOCATION \
    --format="value(networkConfig.datapathProvider)"
Replace the following:
- CLUSTER_NAME: the name of your cluster.
- COMPUTE_LOCATION: the location of your cluster.
The output should be ADVANCED_DATAPATH, which indicates that GKE Dataplane V2 is enabled.
Update the cluster using the --enable-cilium-clusterwide-network-policy flag.
gcloud container clusters update CLUSTER_NAME \
    --location COMPUTE_LOCATION \
    --enable-cilium-clusterwide-network-policy
Restart the anetd DaemonSet.
kubectl rollout restart ds -n kube-system anetd && \
    kubectl rollout status ds -n kube-system anetd
API
Confirm that the cluster has GKE Dataplane V2 enabled: in the cluster resource returned by the API, the networkConfig.datapathProvider field must be set to ADVANCED_DATAPATH.
To update an existing cluster, run the following update cluster command:
{
  "update": {
    "desiredEnableCiliumClusterwideNetworkPolicy": true
  },
  "name": "cluster"
}
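For illustration, a request like the following could submit that update through the projects.locations.clusters.update method; PROJECT_ID, COMPUTE_LOCATION, and CLUSTER_NAME are placeholders and the request is authorized with your current gcloud credentials.
curl -X PUT \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://container.googleapis.com/v1/projects/PROJECT_ID/locations/COMPUTE_LOCATION/clusters/CLUSTER_NAME" \
    -d '{
      "update": {
        "desiredEnableCiliumClusterwideNetworkPolicy": true
      }
    }'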
Verify that ciliumclusterwidenetworkpolicies.cilium.io
is present in the
output of the following command:
kubectl get crds ciliumclusterwidenetworkpolicies.cilium.io
The output should be similar to the following:
ciliumclusterwidenetworkpolicies.cilium.io 2023-09-19T16:54:48Z
Using Cilium cluster-wide network policy
This section lists examples for configuring Cilium cluster-wide network policy.
Example 1: Control ingress traffic to a workload
The following example enables all endpoints with the label role=backend to accept ingress connections on port 80 from endpoints with the label role=frontend. Endpoints with the label role=backend reject all ingress connections that are not allowed by this policy.
Save the following manifest as l4-rule-ingress.yaml:
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "l4-rule-ingress"
spec:
  endpointSelector:
    matchLabels:
      role: backend
  ingress:
  - fromEndpoints:
    - matchLabels:
        role: frontend
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
Apply the manifest:
kubectl apply -f l4-rule-ingress.yaml
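To confirm that the policy was accepted, you can inspect it with kubectl; the exact fields shown by describe can vary by Cilium version.
kubectl get ciliumclusterwidenetworkpolicies.cilium.io l4-rule-ingress
kubectl describe ciliumclusterwidenetworkpolicies.cilium.io l4-rule-ingress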
Example 2: Restrict egress traffic from a workload on a given port
The following rule limits all endpoints with the label app=myService to only be able to emit packets using TCP on port 80, to any Layer 3 destination:
Save the following manifest as l4-rule-egress.yaml:
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "l4-rule-egress"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP
Apply the manifest:
kubectl apply -f l4-rule-egress.yaml
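Because this policy selects the endpoints for egress, any egress traffic that is not explicitly allowed is denied, including DNS lookups. If the workload needs to resolve names, you can add a rule that also allows traffic to the cluster DNS. The following manifest is a hedged sketch that assumes the DNS Pods run in kube-system with the label k8s-app: kube-dns; verify those labels in your cluster before applying it.
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "l4-rule-egress-with-dns"
spec:
  endpointSelector:
    matchLabels:
      app: myService
  egress:
  # Allow DNS lookups to the cluster DNS Pods (assumed labels).
  - toEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      - port: "53"
        protocol: TCP
  # Keep the original egress rule that allows TCP on port 80.
  - toPorts:
    - ports:
      - port: "80"
        protocol: TCP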
Example 3: Restrict egress traffic from a workload on a given port and CIDR
The following example limits all endpoints with the label role=crawler to only be able to send packets on port 80, protocol TCP, to destinations in the CIDR 192.0.2.0/24.
Save the following manifest as cidr-l4-rule.yaml:
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "cidr-l4-rule"
spec:
  endpointSelector:
    matchLabels:
      role: crawler
  egress:
  - toCIDR:
    - 192.0.2.0/24
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
Apply the manifest:
kubectl apply -f cidr-l4-rule.yaml
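After you apply the examples, you can list all cluster-wide policies in one place:
kubectl get ciliumclusterwidenetworkpolicies.cilium.io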
Monitoring and troubleshooting network traffic
You can monitor and troubleshoot the network traffic that is affected by Cilium cluster-wide network policies by using network policy logging and GKE Dataplane V2 observability.
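For example, if network policy logging is enabled for the cluster, you can read the policy action logs from Cloud Logging. The following command is a sketch: PROJECT_ID is a placeholder, and the exact log name and fields are described in the network policy logging documentation.
# Read recent network policy log entries for the cluster's nodes.
gcloud logging read 'resource.type="k8s_node" AND logName="projects/PROJECT_ID/logs/policy-action"' \
    --limit=10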
Attempt to use Layer 7 policies or node selectors
Symptom
If you are using GKE with GKE Dataplane V2 and you attempt to define CCNPs that include Layer 7 rules (for example, HTTP filtering) or node selectors, you might see an error message similar to the following:
Error
Error from server (GKE Warden constraints violations): error when creating
"ccnp.yaml": admission webhook
"warden-validating.common-webhooks.networking.gke.io" denied the request: GKE
Warden rejected the request because it violates one or more constraints.
Violations details: {"[denied by gke-cilium-network-policy-limitation]":["L7
rules are not allowed in CiliumClusterwideNetworkPolicy"]} Requested by user:
'[email protected]', groups: 'system:authenticated'.
Potential cause
GKE has specific limitations on CCNPs. Layer 7 policies, which allow filtering based on application-level data (like HTTP headers), and node selectors are not supported within GKE's Cilium integration.
Resolution
If you need advanced Layer 7 filtering capabilities in your GKE cluster, consider using Cloud Service Mesh. This provides more granular application-level traffic control.
Cilium cluster-wide network policy not enabled
Symptom
When you try to configure Cilium cluster-wide network policies (CCNP) in a cluster where the feature has not been explicitly enabled, you can't configure them and might see an error message similar to the following:
Error
error: resource mapping not found for name: "l4-rule" namespace: "" from
"ccnp.yaml": no matches for kind "CiliumClusterwideNetworkPolicy" in version
"cilium.io/v2" ensure CRDs are installed first
Potential cause
Cilium cluster-wide network policies rely on a Custom Resource Definition (CRD). The error message indicates that the CRD is missing in the cluster.
Resolution
Enable the Cilium cluster-wide network policy CRD, as described earlier on this page, before using CCNPs.
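To check whether the feature is already enabled before you reapply your manifests, you can describe the cluster. This is a sketch that reads the same networkConfig field used in the API examples earlier on this page.
# Returns True when Cilium cluster-wide network policy is enabled.
gcloud container clusters describe CLUSTER_NAME \
    --location COMPUTE_LOCATION \
    --format="value(networkConfig.enableCiliumClusterwideNetworkPolicy)"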