Kubernetes Namespaces Offer No Isolation

Namespaces are one of the fundamental resources in Kubernetes.

But they don't provide network isolation, are ignored by the scheduler and can't limit resource usage.

How do they actually work, and what are they useful for?

Namespaces are not real and don't exist in the infrastructure.

You could think of namespaces as labels that you can use to group resources when you list them.

Example: show me all pods in the default (label) namespace.
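That mental model maps directly onto kubectl; the namespace is just a filter over the same cluster-wide collection of pods:

  kubectl get pods --namespace default

  # the very same pods, with the filter removed
  kubectl get pods --all-namespaces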

We often picture them as flexible boundaries around objects, but there's no real grouping of resources on the node.
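You can see this by listing pods alongside the nodes they run on; pods from different namespaces happily share the same machine (the namespaces, pods and node below are illustrative):

  kubectl get pods --all-namespaces -o wide
  # NAMESPACE   NAME   READY   STATUS    ...   NODE
  # team-a      pod1   1/1     Running   ...   node-1
  # team-b      pod4   1/1     Running   ...   node-1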
The Kubernetes network should always satisfy two requirements:

1. Any pod can talk to any pod.
2. Pods have "stable" IP addresses.

Notice how the first requirement already invalidates the idea of namespaces as security boundaries.

But it doesn't end there.
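A quick sketch to prove the point (the namespace names, pod names and IP address are made up for the example):

  # a pod in team-b...
  kubectl create namespace team-a
  kubectl create namespace team-b
  kubectl run web --image=nginx --namespace team-b
  kubectl get pod web --namespace team-b -o jsonpath='{.status.podIP}'
  # 10.244.1.7

  # ...is reachable from a pod in team-a, with no extra configuration
  kubectl run client --rm -it --image=busybox --namespace team-a \
    -- wget -qO- http://10.244.1.7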

Namespaces are just labels and don't define how many resources can be created or assigned; the namespace simply grows with the resources.

You can have as many (and as big) workloads as you want.
[Diagram: the scheduler framework. The scheduling phase (Queue, PreFilter, Filter, PreScore, Score, Normalise Score) filters valid nodes and scores them from best to worst; the binding phase (WaitOnPermit, PreBind, Bind, PostBind) assigns the pod to a node.]

On top of that, the Kubernetes scheduler isn't namespace-aware.

When it decides where the pod should be placed in the infrastructure, it doesn't take namespaces into account.

And why should it?
[Diagram: two permission tables. A ClusterRole grants identities read/write access to pods and deployments across the whole cluster; a Role grants the same access only inside a single namespace, e.g. DEV.]

When we look at permissions and RBAC, we see things changing.

You have ClusterRoles that grant you access to resources regardless of the namespace.

But you also have Roles (and RoleBindings) that are restricted by it.

And here's the first clue on how namespaces work: the Kubernetes API.
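A sketch of a namespaced grant (the dev namespace, pod-reader role and user jane are placeholder names):

  apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: pod-reader
    namespace: dev        # the Role only exists inside this namespace
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list", "watch"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: read-pods
    namespace: dev
  subjects:
    - kind: User
      name: jane
      apiGroup: rbac.authorization.k8s.io
  roleRef:
    kind: Role
    name: pod-reader
    apiGroup: rbac.authorization.k8s.io

The same rules in a ClusterRole bound with a ClusterRoleBinding would apply to pods in every namespace.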

[Diagram: the API server pipeline. Requests from normal users and service accounts pass through authentication, authorisation, mutating admission controllers (plus mutating webhooks), schema validation and validating admission controllers (plus validating webhooks), with API aggregation routing to external APIs.]

If you are familiar with the Kubernetes API, you might know there are Validating and Mutating admission controllers designed to check and mutate resources before they reach etcd.

Each controller has several checks or mutating actions designed to modify pods, namespaces, ingresses, storage classes and more.
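These built-in controllers are compiled into the API server and toggled with a flag; a representative (not complete) invocation:

  kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ResourceQuota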
[Diagram: the API server's admission controllers include CertificateSigning, CertificateSubjectRestriction, LimitRanger, NamespaceLifecycle, PersistentVolumeClaimResize, ResourceQuota, RuntimeClass and ValidatingAdmissionWebhook.]

So, what happens when you want to limit the resources in a namespace?

You don't attach that policy to the scheduler; instead, you create a ResourceQuota object.

The ResourceQuota validating admission controller in the API server will enforce it before the resource is stored in the database.
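A minimal sketch of such a quota (the namespace and the numbers are arbitrary):

  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: compute-quota
    namespace: dev
  spec:
    hard:
      pods: "10"
      requests.cpu: "4"
      requests.memory: 8Gi
      limits.cpu: "8"
      limits.memory: 16Gi

Any object that would push the namespace past those totals is rejected at admission time, before the scheduler ever sees it.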
You can also define the default requests and limits for workloads in a namespace with a LimitRange resource.

As with the ResourceQuota, those values aren't enforced by the scheduler; the mutating and validating admission controllers (the LimitRanger) still inspect and mutate the values.
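A sketch of a LimitRange that injects defaults (the values are illustrative):

  apiVersion: v1
  kind: LimitRange
  metadata:
    name: default-limits
    namespace: dev
  spec:
    limits:
      - type: Container
        default:           # becomes the limit when a container sets none
          cpu: 500m
          memory: 256Mi
        defaultRequest:    # becomes the request when a container sets none
          cpu: 250m
          memory: 128Mi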
What about networking?

If it's true that any pod can talk to any pod, how do namespaced Network Policies work?

In this case, network policies are rules (iptables, eBPF) set on the node by a DaemonSet.

The DaemonSet is namespace-aware and can craft the correct firewall rules; the network itself is still unaware of namespaces.
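A sketch of a policy that only admits traffic from within the same namespace (the names are examples):

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-same-namespace
    namespace: dev
  spec:
    podSelector: {}          # applies to every pod in dev
    policyTypes:
      - Ingress
    ingress:
      - from:
          - podSelector: {}  # with no namespaceSelector, this matches only pods in dev

It's the CNI plugin's agent (e.g. Calico's DaemonSet) that translates this object into iptables or eBPF rules on each node.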

[Diagram: on each node, the new iptables rules land in the root network namespace (eth0, Calico's cali-0 interface); the pod's own network namespace and IP, e.g. 10.244.0.3, are untouched.]

[Diagram: pods from different namespaces all share one CoreDNS deployment; a single pod can overload the DNS server.]
Namespaces are not designed for multi-tenancy, and it shows when you focus on shared Kubernetes components such as CoreDNS.

Nothing stops one tenant from overloading the DNS and affecting all other tenants.

The DNS isn't namespace-aware, and you can't set policies based on namespace usage (unless you install extra plugins).

You might experience the same scenario with the Kubernetes API: a tenant overwhelming the API with requests.
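Because CoreDNS is one shared deployment, any pod can resolve (and hammer) names from any namespace; a quick illustration (the names are examples):

  # from a pod in team-a, resolve a service that lives in team-b
  kubectl run dns-test --rm -it --image=busybox --namespace team-a \
    -- nslookup web.team-b.svc.cluster.local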

[Diagram: pods in Namespace #1 overloading the API server.]

In Kubernetes 1.29, a feature called API Priority and Fairness graduated to stable; it is designed to mitigate exactly this scenario.
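A sketch of capping one tenant's service accounts with API Priority and Fairness (every name and number here is made up; check the flowcontrol.apiserver.k8s.io/v1 reference for your cluster):

  apiVersion: flowcontrol.apiserver.k8s.io/v1
  kind: PriorityLevelConfiguration
  metadata:
    name: tenant-a
  spec:
    type: Limited
    limited:
      nominalConcurrencyShares: 20
      limitResponse:
        type: Reject          # shed excess requests instead of queuing them
  ---
  apiVersion: flowcontrol.apiserver.k8s.io/v1
  kind: FlowSchema
  metadata:
    name: tenant-a-workloads
  spec:
    priorityLevelConfiguration:
      name: tenant-a
    matchingPrecedence: 1000
    distinguisherMethod:
      type: ByUser
    rules:
      - subjects:
          - kind: ServiceAccount
            serviceAccount:
              name: "*"
              namespace: team-a   # all service accounts of this tenant
        resourceRules:
          - verbs: ["*"]
            apiGroups: ["*"]
            resources: ["*"]
            namespaces: ["*"]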
Namespaces are a great building block for higher abstractions in Kubernetes.

However, they are not meant to be used as a multi-tenancy solution by themselves.

The community has developed several tools to fill this gap and build more robust abstractions for managing tenants:

1. Soft multi-tenancy: Hierarchical Namespace Controller, Capsule.
2. Control plane isolation: vCluster, Kamaji, HyperShift.
3. Hard multi-tenancy: Karmada.
Join Salman Iqbal's session on building Kubernetes platforms this Thursday (7th of Mar) at 8am PT / 5pm CET: "Kubernetes namespaces offer no isolation, and how to work around it!"

Register (it's free): bit.ly/multitenancy2