Tutorials - Kubernetes
Tutorials
1: Hello Minikube
2: Learn Kubernetes Basics
2.1: Create a Cluster
2.1.1: Using Minikube to Create a Cluster
2.1.2: Interactive Tutorial - Creating a Cluster
2.2: Deploy an App
2.2.1: Using kubectl to Create a Deployment
2.2.2: Interactive Tutorial - Deploying an App
2.3: Explore Your App
2.3.1: Viewing Pods and Nodes
2.3.2: Interactive Tutorial - Exploring Your App
2.4: Expose Your App Publicly
2.4.1: Using a Service to Expose Your App
2.4.2: Interactive Tutorial - Exposing Your App
2.5: Scale Your App
2.5.1: Running Multiple Instances of Your App
2.5.2: Interactive Tutorial - Scaling Your App
2.6: Update Your App
2.6.1: Performing a Rolling Update
2.6.2: Interactive Tutorial - Updating Your App
3: Configuration
3.1: Example: Configuring a Java Microservice
3.1.1: Externalizing config using MicroProfile, ConfigMaps and Secrets
3.1.2: Interactive Tutorial - Configuring a Java Microservice
3.2: Configuring Redis using a ConfigMap
4: Stateless Applications
4.1: Exposing an External IP Address to Access an Application in a Cluster
4.2: Example: Deploying PHP Guestbook application with Redis
5: Stateful Applications
5.1: StatefulSet Basics
5.2: Example: Deploying WordPress and MySQL with Persistent Volumes
5.3: Example: Deploying Cassandra with a StatefulSet
5.4: Running ZooKeeper, A Distributed System Coordinator
6: Clusters
6.1: Restrict a Container's Access to Resources with AppArmor
6.2: Restrict a Container's Syscalls with seccomp
7: Services
7.1: Using Source IP
Configuration
Example: Configuring a Java Microservice (/docs/tutorials/configuration/configure-java-microservice/)
Configuring Redis Using a ConfigMap (/docs/tutorials/configuration/configure-redis-using-configmap/)

Stateless Applications
Exposing an External IP Address to Access an Application in a Cluster (/docs/tutorials/stateless-application/expose-external-ip-address/)
Example: Deploying PHP Guestbook application with Redis (/docs/tutorials/stateless-application/guestbook/)

Stateful Applications
StatefulSet Basics (/docs/tutorials/stateful-application/basic-stateful-set/)
Example: WordPress and MySQL with Persistent Volumes (/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/)
Example: Deploying Cassandra with Stateful Sets (/docs/tutorials/stateful-application/cassandra/)
Running ZooKeeper, A CP Distributed System (/docs/tutorials/stateful-application/zookeeper/)

Clusters
AppArmor (/docs/tutorials/clusters/apparmor/)
seccomp (/docs/tutorials/clusters/seccomp/)

Services
Using Source IP (/docs/tutorials/services/source-ip/)

What's next
If you would like to write a tutorial, see Content Page Types (/docs/contribute/style/page-content-types/) for information about the tutorial page type.
1 - Hello Minikube
This tutorial shows you how to run a sample app
on Kubernetes using minikube and Katacoda.
Katacoda provides a free, in-browser Kubernetes environment.
Note: You can also follow this tutorial if you've installed minikube locally.
See minikube
start (https://minikube.sigs.k8s.io/docs/start/) for installation instructions.
Objectives
Deploy a sample application to minikube.
Run the app.
View application logs.
Note: If you installed minikube locally, run minikube start . Before you run minikube
dashboard , you should open a new terminal, start minikube dashboard there, and then
switch back to the main terminal.
2. Open the Kubernetes dashboard in a browser:
minikube dashboard
3. Katacoda environment only: At the top of the terminal pane, click the plus sign, and then click
Select port to view on Host 1.
4. Katacoda environment only: Type 30000 , and then click Display Port.
Note:
The dashboard command enables the dashboard add-on and opens the proxy in the
default web browser.
You can create Kubernetes resources on the dashboard such as
Deployment and Service.
If you are running in an environment as root, see Open Dashboard with URL.
By default, the dashboard is only accessible from within the internal Kubernetes virtual
network.
The dashboard command creates a temporary proxy to make the dashboard
accessible from outside the Kubernetes virtual network.
To stop the proxy, run Ctrl+C to exit the process.
After the command exits, the
dashboard remains running in the Kubernetes cluster.
You can run the dashboard
command again to create another proxy to access the dashboard.
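If you would rather obtain a URL than keep the proxy in the foreground, minikube can print the dashboard address instead of opening a browser; a representative command (run in a separate terminal) is:

minikube dashboard --url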
Create a Deployment
A Kubernetes Pod (/docs/concepts/workloads/pods/) is a group of one or more Containers,
tied
together for the purposes of administration and networking. The Pod in this
tutorial has only one
Container. A Kubernetes
Deployment (/docs/concepts/workloads/controllers/deployment/) checks on
the health of your
Pod and restarts the Pod's Container if it terminates. Deployments are the
recommended way to manage the creation and scaling of Pods.
1. Use the kubectl create command to create a Deployment that manages a Pod. The
Pod runs a
Container based on the provided Docker image.
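A representative pair of commands, assuming the echoserver sample image referred to later on this page, is the following; kubectl get deployments then lists the Deployment, producing output similar to the line below.

kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
kubectl get deployments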
hello-node 1/1 1 1 1m
Note: For more information about kubectl commands, see the kubectl overview
(/docs/reference/kubectl/overview/).
Create a Service
By default, the Pod is only accessible by its internal IP address within the
Kubernetes cluster. To
make the hello-node Container accessible from outside the
Kubernetes virtual network, you have to
expose the Pod as a
Kubernetes Service (/docs/concepts/services-networking/service/).
1. Expose the Pod to the public internet using the kubectl expose command:
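A representative form of the command, using the hello-node Deployment created above and the port that the echoserver image listens on:

kubectl expose deployment hello-node --type=LoadBalancer --port=8080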
The --type=LoadBalancer flag indicates that you want to expose your Service
outside of the
cluster.
The application code inside the image k8s.gcr.io/echoserver only listens on TCP port 8080. If
you used
kubectl expose to expose a different port, clients could not connect to that other
port.
2. View the Service you created:
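For example:

kubectl get services

On minikube the LoadBalancer Service is published on a randomly assigned NodePort (such as the 30369 mentioned in step 5); outside the Katacoda environment you would typically open it with minikube service hello-node.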
4. Katacoda environment only: Click the plus sign, and then click Select port to view on Host 1.
5. Katacoda environment only: Note the 5-digit port number displayed opposite to 8080 in
services output. This port number is randomly generated and it can be different for you. Type
your number in the port number text box, then click Display Port. Using the example from
earlier, you would type 30369 .
This opens up a browser window that serves your app and shows the app's response.
Enable addons
The minikube tool includes a set of built-in addons (/docs/concepts/cluster-administration/addons/)
that can be enabled, disabled and opened in the local Kubernetes environment.
1. List the currently supported addons:
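The listing that follows comes from minikube's addon manager; the command is:

minikube addons list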
addon-manager: enabled
dashboard: enabled
default-storageclass: enabled
efk: disabled
freshpod: disabled
gvisor: disabled
helm-tiller: disabled
ingress: disabled
ingress-dns: disabled
logviewer: disabled
metrics-server: disabled
nvidia-driver-installer: disabled
nvidia-gpu-device-plugin: disabled
registry: disabled
registry-creds: disabled
storage-provisioner: enabled
storage-provisioner-gluster: disabled
4. Disable metrics-server :
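Addons are enabled and disabled with the same subcommand; a representative invocation for this step is:

minikube addons disable metrics-server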
Clean up
Now you can clean up the resources you created in your cluster:
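Assuming the hello-node names used above, the kubectl cleanup commands are:

kubectl delete service hello-node
kubectl delete deployment hello-node

Optionally, stop and delete the local minikube cluster: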
minikube stop
minikube delete
What's next
Learn more about Deployment objects (/docs/concepts/workloads/controllers/deployment/).
Learn more about Deploying applications (/docs/tasks/run-application/run-stateless-
application-deployment/).
Learn more about Service objects (/docs/concepts/services-networking/service/).
2 - Learn Kubernetes Basics

2.1.1 - Using Minikube to Create a Cluster

Kubernetes Clusters

Kubernetes coordinates a highly available cluster of computers that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them specifically to individual machines. To make use of this new model of deployment, applications need to be packaged in a way that decouples them from individual hosts: they need to be containerized. Containerized applications are more flexible and available than in past deployment models, where applications were installed directly onto specific machines as packages deeply integrated into the host. Kubernetes automates the distribution and scheduling of application containers across a cluster in a more efficient way. Kubernetes is an open-source platform and is production-ready.

Summary: Kubernetes cluster; Minikube. Kubernetes is a production-grade, open-source platform that orchestrates the placement (scheduling) and execution of application containers within and across computer clusters.

A Kubernetes cluster consists of two types of resources:
The Control Plane coordinates the cluster
Nodes are the workers that run applications

Cluster Diagram (figure): a Kubernetes cluster made up of a Control Plane and Nodes running node processes.
2.2.1 - Using kubectl to Create a Deployment

Kubernetes Deployments

Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. To do so, you create a Kubernetes Deployment configuration. The Deployment instructs Kubernetes how to create and update instances of your application. Once you've created a Deployment, the Kubernetes control plane schedules the application instances included in that Deployment to run on individual Nodes in the cluster.

Summary: Deployments; kubectl. A Deployment is responsible for creating and updating instances of your application.

Once the application instances are created, a Kubernetes Deployment controller continuously monitors those instances. If the Node hosting an instance goes down or is deleted, the Deployment controller replaces the instance with an instance on another Node in the cluster. This provides a self-healing mechanism to address machine failure or maintenance.

In a pre-orchestration world, installation scripts would often be used to start applications, but they did not allow recovery from machine failure. By both creating your application instances and keeping them running across Nodes, Kubernetes Deployments provide a fundamentally different approach to application management.

Deployment diagram (figure): a Deployment of a containerized app running on a Node, coordinated by the Control Plane of a Kubernetes cluster.

2.3.1 - Viewing Pods and Nodes

Pods overview (figure)
Nodes overview (figure)
2.4.1 - Using a Service to Expose Your App

Objectives
Learn about a Service in Kubernetes
Understand how labels and LabelSelector objects relate to a Service
Expose an application outside a Kubernetes cluster using a Service

Overview of Kubernetes Services

Kubernetes Pods (/docs/concepts/workloads/pods/) are mortal. Pods in fact have a lifecycle (/docs/concepts/workloads/pods/pod-lifecycle/). When a worker node dies, the Pods running on the Node are also lost. A ReplicaSet (/docs/concepts/workloads/controllers/replicaset/) might then dynamically drive the cluster back to desired state via creation of new Pods to keep your application running. As another example, consider an image-processing backend with 3 replicas. Those replicas are exchangeable; the front-end system should not care about backend replicas or even if a Pod is lost and recreated. That said, each Pod in a Kubernetes cluster has a unique IP address, even Pods on the same Node, so there needs to be a way of automatically reconciling changes among Pods so that your applications continue to function.

Summary: exposing Pods to external traffic; load balancing traffic across multiple Pods; using labels. A Kubernetes Service is an abstraction layer which defines a logical set of Pods and enables external traffic exposure, load balancing and service discovery for those Pods.

A Service in Kubernetes is an abstraction which defines a logical set of Pods and a policy by which to access them. Services enable a loose coupling between dependent Pods. A Service is defined using YAML (preferred) (/docs/concepts/configuration/overview/#general-configuration-tips) or JSON, like all Kubernetes objects. The set of Pods targeted by a Service is usually determined by a LabelSelector (see below for why you might want a Service without including selector in the spec).

Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec.

Services and Labels

A Service routes traffic across a set of Pods. Services are the abstraction that allows Pods to die and replicate in Kubernetes without impacting your application.
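The idea is easiest to see in a manifest. A minimal sketch of a NodePort Service that selects Pods by label (the names are illustrative, borrowed from the interactive tutorial rather than defined on this page):

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: NodePort
  selector:
    app: kubernetes-bootcamp
  ports:
  - port: 8080
    targetPort: 8080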
2.5.1 - Running Multiple Instances of Your App

Scaling an application

In the previous modules we created a Deployment (/docs/concepts/workloads/controllers/deployment/), and then exposed it publicly via a Service (/docs/concepts/services-networking/service/). The Deployment created only one Pod for running our application. When traffic increases, we will need to scale the application to keep up with user demand.

Summary: scaling a Deployment. You can create from the start a Deployment with multiple instances using the --replicas parameter for the kubectl create deployment command.

Scaling is accomplished by changing the number of replicas in a Deployment.

Scaling overview (figure)
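The interactive module drives scaling with kubectl; a minimal sketch, using the kubernetes-bootcamp names from that tutorial as illustrative placeholders:

kubectl scale deployments/kubernetes-bootcamp --replicas=4
kubectl get deployments
kubectl get pods -o wide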
2.6.1 - Performing a Rolling Update

Updating an application

Users expect applications to be available all the time and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones. The new Pods will be scheduled on Nodes with available resources.

Summary: updating an app. Rolling updates allow Deployments' update to take place with zero downtime by incrementally updating Pods instances with new ones.

In the previous module we scaled our application to run multiple instances. This is a requirement for performing updates without affecting application availability. By default, the maximum number of Pods that can be unavailable during the update and the maximum number of new Pods that can be created, is one. Both options can be configured to either numbers or percentages (of Pods). In Kubernetes, updates are versioned and any Deployment update can be reverted to a previous (stable) version.

Rolling updates overview (figure)
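In the interactive module the update is performed and, if needed, rolled back with kubectl; a minimal sketch, again using the kubernetes-bootcamp names from that tutorial as illustrative placeholders:

kubectl set image deployments/kubernetes-bootcamp kubernetes-bootcamp=jocatalin/kubernetes-bootcamp:v2
kubectl rollout status deployments/kubernetes-bootcamp
kubectl rollout undo deployments/kubernetes-bootcamp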
3 - Configuration
3.1 - Example: Configuring a Java
Microservice
3.1.1 - Externalizing config using
MicroProfile, ConfigMaps and Secrets
In this tutorial you will learn how and why to externalize your microservice’s configuration.
Specifically, you will learn how to use Kubernetes ConfigMaps and Secrets to set environment
variables and then consume them using MicroProfile Config.
Objectives
Create a Kubernetes ConfigMap and Secret
Inject microservice configuration using MicroProfile Config
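A minimal sketch of the kubectl side of this workflow, with illustrative ConfigMap/Secret names, keys, and values (the full tutorial wires these into MicroProfile Config through environment variables):

kubectl create configmap sys-app-name --from-literal name=my-system
kubectl create secret generic sys-app-credentials --from-literal username=bob --from-literal password=bobpwd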
3.2 - Configuring Redis using a ConfigMap
Objectives
Create a ConfigMap with Redis configuration values
Create a Redis Pod that mounts and uses the created ConfigMap
Verify that the configuration was correctly applied.
The example shown on this page works with kubectl 1.14 and above.
Understand Configure Containers Using a ConfigMap (/docs/tasks/configure-pod-
container/configure-pod-configmap/).
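One way to create the initial, empty configuration is to write the manifest below to a local file with a shell here-document (the file name example-redis-config.yaml is an assumption, reused in the apply step that follows):

cat <<EOF >./example-redis-config.yaml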
apiVersion: v1
kind: ConfigMap
metadata:
name: example-redis-config
data:
redis-config: ""
EOF
Apply the ConfigMap created above, along with a Redis pod manifest:
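For example, assuming the file written above and the Pod manifest linked below:

kubectl apply -f example-redis-config.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml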
Examine the contents of the Redis pod manifest and note the following:
A volume named config is created by spec.volumes[1]
The key and path under spec.volumes[1].items[0] exposes the redis-config key from the
example-redis-config ConfigMap as a file named redis.conf on the config volume.
This has the net effect of exposing the data in data.redis-config from the example-redis-config
ConfigMap above as /redis-master/redis.conf inside the Pod.
pods/config/redis-pod.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml)
apiVersion: v1
kind: Pod
metadata:
name: redis
spec:
containers:
- name: redis
image: redis:5.0.4
command:
- redis-server
- "/redis-master/redis.conf"
env:
- name: MASTER
value: "true"
ports:
- containerPort: 6379
resources:
limits:
cpu: "0.1"
volumeMounts:
- mountPath: /redis-master-data
name: data
- mountPath: /redis-master
name: config
volumes:
- name: data
emptyDir: {}
- name: config
configMap:
name: example-redis-config
items:
- key: redis-config
path: redis.conf
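To inspect what was created, representative commands are kubectl get for a quick listing (the tail of its output is the ConfigMap line shown next) and kubectl describe for the detail that follows:

kubectl get pod/redis configmap/example-redis-config
kubectl describe configmap/example-redis-config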
configmap/example-redis-config 1 14s
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
Use kubectl exec to enter the pod and run the redis-cli tool to check the current configuration:
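A representative invocation, using the Pod name from the manifest above:

kubectl exec -it redis -- redis-cli

At the redis-cli prompt, CONFIG GET maxmemory and CONFIG GET maxmemory-policy return the values shown below.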
Check maxmemory :
1) "maxmemory"
2) "0"
1) "maxmemory-policy"
2) "noeviction"
Now add Redis memory-management values to the example-redis-config ConfigMap:
pods/config/example-redis-config.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/example-redis-config.yaml)
apiVersion: v1
kind: ConfigMap
metadata:
name: example-redis-config
data:
redis-config: |
maxmemory 2mb
maxmemory-policy allkeys-lru
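Apply the updated ConfigMap and describe it to confirm the change; for example:

kubectl apply -f example-redis-config.yaml
kubectl describe configmap/example-redis-config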
Name: example-redis-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis-config:
----
maxmemory 2mb
maxmemory-policy allkeys-lru
Check the Redis Pod again using redis-cli via kubectl exec to see if the configuration was applied:
Check maxmemory :
1) "maxmemory"
2) "0"
Similarly, maxmemory-policy still returns its noeviction default:
1) "maxmemory-policy"
2) "noeviction"
The configuration values have not changed because the Pod needs to be restarted to grab updated
values from associated ConfigMaps. Let's delete and recreate the Pod:
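For example, reusing the Pod manifest linked earlier:

kubectl delete pod redis
kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/config/redis-pod.yaml

Now re-check the configuration values via redis-cli as before.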
Check maxmemory :
1) "maxmemory"
2) "2097152"
1) "maxmemory-policy"
2) "allkeys-lru"
What's next
Learn more about ConfigMaps (/docs/tasks/configure-pod-container/configure-pod-
configmap/).
4 - Stateless Applications
4.1 - Exposing an External IP Address to
Access an Application in a Cluster
This page shows how to create a Kubernetes Service object that exposes an
external IP address.
Objectives
Run five instances of a Hello World application.
Create a Service object that exposes an external IP address.
Use the Service object to access the running application.
service/load-balancer-example.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/service/load-balancer-example.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/name: load-balancer-example
name: hello-world
spec:
replicas: 5
selector:
matchLabels:
app.kubernetes.io/name: load-balancer-example
template:
metadata:
labels:
app.kubernetes.io/name: load-balancer-example
spec:
containers:
- image: gcr.io/google-samples/node-hello:1.0
name: hello-world
ports:
- containerPort: 8080
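Run the Hello World application in your cluster by applying the manifest linked above:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/service/load-balancer-example.yaml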
The preceding command creates a Deployment (/docs/concepts/workloads/controllers/deployment/) and an associated ReplicaSet (/docs/concepts/workloads/controllers/replicaset/). The ReplicaSet has five Pods (/docs/concepts/workloads/pods/), each of which runs the Hello World application.
2. Display information about the Deployment:
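Representative commands for this and the following steps, which display the Deployment and ReplicaSet and then expose the Deployment as a Service named my-service:

kubectl get deployments hello-world
kubectl describe deployments hello-world
kubectl get replicasets
kubectl describe replicasets
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service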
Note: If the external IP address is shown as <pending>, wait for a minute and
enter the same command again.
6. Display detailed information about the Service:
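For example, using the Service name created above:

kubectl describe services my-service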
Name: my-service
Namespace: default
Labels: app.kubernetes.io/name=load-balancer-example
Annotations: <none>
Selector: app.kubernetes.io/name=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
Events: <none>
curl http://<external-ip>:<port>
Hello Kubernetes!
Cleaning up
To delete the Service, enter this command:
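Assuming the Service name used above:

kubectl delete services my-service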
To delete the Deployment, the ReplicaSet, and the Pods that are running
the Hello World
application, enter this command:
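Assuming the Deployment name used above:

kubectl delete deployment hello-world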
What's next
Learn more about
connecting applications with services (/docs/concepts/services-
networking/connect-applications-service/).
4.2 - Example: Deploying PHP Guestbook application with Redis
Objectives
Start up a Redis leader.
Start up two Redis followers.
Start up the guestbook frontend.
Expose and view the Frontend Service.
Clean up.
application/guestbook/redis-leader-deployment.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-leader-deployment.yaml)
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-leader
labels:
app: redis
role: leader
tier: backend
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
role: leader
tier: backend
spec:
containers:
- name: leader
image: "docker.io/redis:6.0.5"
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
1. Launch a terminal window in the directory you downloaded the manifest files.
2. Apply the Redis Deployment from the redis-leader-deployment.yaml file:
3. Query the list of Pods to verify that the Redis Pod is running:
4. Run the following command to view the logs from the Redis leader Pod:
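Representative commands for steps 2-4, using the manifest URL linked above:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-leader-deployment.yaml
kubectl get pods
kubectl logs -f deployment/redis-leader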
application/guestbook/redis-leader-service.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-leader-service.yaml)
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
name: redis-leader
labels:
app: redis
role: leader
tier: backend
spec:
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
role: leader
tier: backend
2. Query the list of Services to verify that the Redis Service is running:
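Representative commands for applying the leader Service manifest linked above and listing it:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-leader-service.yaml
kubectl get service redis-leader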
Note: This manifest file creates a Service named redis-leader with a set of labels
that
match the labels previously defined, so the Service routes network
traffic to the Redis
Pod.
application/guestbook/redis-follower-deployment.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-follower-deployment.yaml)
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-follower
labels:
app: redis
role: follower
tier: backend
spec:
replicas: 2
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
role: follower
tier: backend
spec:
containers:
- name: follower
image: gcr.io/google_samples/gb-redis-follower:v2
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
2. Verify that the two Redis follower replicas are running by querying the list of Pods:
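Representative commands for applying the follower Deployment linked above and listing the Pods:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-follower-deployment.yaml
kubectl get pods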
application/guestbook/redis-follower-service.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-follower-service.yaml)
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
name: redis-follower
labels:
app: redis
role: follower
tier: backend
spec:
ports:
- port: 6379
selector:
app: redis
role: follower
tier: backend
2. Query the list of Services to verify that the Redis Service is running:
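Representative commands for applying the follower Service linked above and listing Services:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/redis-follower-service.yaml
kubectl get service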
Note: This manifest file creates a Service named redis-follower with a set of
labels
that match the labels previously defined, so the Service routes network
traffic to the
Redis Pod.
Now that you have the Redis storage of your guestbook up and running, start
the guestbook web
servers. Like the Redis followers, the frontend is deployed
using a Kubernetes Deployment.
The guestbook app uses a PHP frontend. It is configured to communicate with
either the Redis
follower or leader Services, depending on whether the request
is a read or a write. The frontend
exposes a JSON interface, and serves a
jQuery-Ajax-based UX.
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 3
selector:
matchLabels:
app: guestbook
tier: frontend
template:
metadata:
labels:
app: guestbook
tier: frontend
spec:
containers:
- name: php-redis
image: gcr.io/google_samples/gb-frontend:v5
env:
- name: GET_HOSTS_FROM
value: "dns"
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 80
2. Query the list of Pods to verify that the three frontend replicas are running:
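Representative commands; the frontend Deployment manifest is assumed here to follow the same naming pattern as the other guestbook manifests (frontend-deployment.yaml):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/frontend-deployment.yaml
kubectl get pods -l app=guestbook -l tier=frontend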
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so a client can request the Service from outside the Kubernetes cluster. However, as a Kubernetes user you can use kubectl port-forward to access the Service even though it uses a ClusterIP.
Note: Some cloud providers, like Google Compute Engine or Google Kubernetes Engine,
support external load balancers. If your cloud provider supports load
balancers and you
want to use it, uncomment type: LoadBalancer .
application/guestbook/frontend-service.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/frontend-service.yaml)
# SOURCE: https://cloud.google.com/kubernetes-engine/docs/tutorials/guestbook
apiVersion: v1
kind: Service
metadata:
name: frontend
labels:
app: guestbook
tier: frontend
spec:
# type: LoadBalancer
ports:
- port: 80
selector:
app: guestbook
tier: frontend
2. Query the list of Services to verify that the frontend Service is running:
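Representative commands, using the frontend-service.yaml manifest linked above:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/guestbook/frontend-service.yaml
kubectl get services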
2. Copy the external IP address, and load the page in your browser to view your guestbook.
Note: Try adding some guestbook entries by typing in a message, and clicking Submit.
The message you typed appears in the frontend. This message indicates that
data is
successfully added to Redis through the Services you created earlier.
2. Query the list of Pods to verify the number of frontend Pods running:
3. Run the following command to scale down the number of frontend Pods:
4. Query the list of Pods to verify the number of frontend Pods running:
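Representative commands for scaling the frontend up, listing the Pods, and scaling back down (the label selectors match the frontend manifest above):

kubectl scale deployment frontend --replicas=5
kubectl get pods -l app=guestbook -l tier=frontend
kubectl scale deployment frontend --replicas=2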
Cleaning up
Deleting the Deployments and Services also deletes any running Pods. Use
labels to delete multiple
resources with one command.
1. Run the following commands to delete all Pods, Deployments, and Services.
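A representative set of deletion commands, using the labels and names defined in the manifests above:

kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment frontend
kubectl delete service frontend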
What's next
Complete the Kubernetes Basics (/docs/tutorials/kubernetes-basics/) Interactive Tutorials
Use Kubernetes to create a blog using Persistent Volumes for MySQL and Wordpress
(/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#visit-your-new-
wordpress-blog)
Read more about connecting applications (/docs/concepts/services-networking/connect-
applications-service/)
Read more about Managing Resources (/docs/concepts/cluster-administration/manage-
deployment/#using-labels-effectively)
5 - Stateful Applications
5.1 - StatefulSet Basics
This tutorial provides an introduction to managing applications with
StatefulSets (/docs/concepts/workloads/controllers/statefulset/).
It demonstrates how to create,
delete, scale, and update the Pods of StatefulSets.
Note: This tutorial assumes that your cluster is configured to dynamically provision
PersistentVolumes. If your cluster is not configured to do so, you
will have to manually
provision two 1 GiB volumes prior to starting this
tutorial.
Objectives
StatefulSets are intended to be used with stateful applications and distributed
systems. However,
the administration of stateful applications and
distributed systems on Kubernetes is a broad,
complex topic. In order to
demonstrate the basic features of a StatefulSet, and not to conflate the
former
topic with the latter, you will deploy a simple web application using a StatefulSet.
After this tutorial, you will be familiar with the following.
How to create a StatefulSet
How a StatefulSet manages its Pods
How to delete a StatefulSet
How to scale a StatefulSet
How to update a StatefulSet's Pods
Creating a StatefulSet
Begin by creating a StatefulSet using the example below. It is similar to the
example presented in the
StatefulSets (/docs/concepts/workloads/controllers/statefulset/) concept.
It creates a headless
Service (/docs/concepts/services-networking/service/#headless-services),
nginx , to publish the IP
addresses of Pods in the StatefulSet, web .
application/web/web.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/web/web.yaml)
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
You will need to use two terminal windows. In the first terminal, use
kubectl get
(/docs/reference/generated/kubectl/kubectl-commands/#get) to watch the creation
of the
StatefulSet's Pods.
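A representative pair of commands for the two terminals, assuming the manifest above is saved locally as web.yaml:

kubectl get pods -w -l app=nginx
kubectl apply -f web.yaml

The kubectl apply command reports: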
service/nginx created
statefulset.apps/web created
...then get the web StatefulSet, to verify that both were created successfully:
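For example:

kubectl get service nginx
kubectl get statefulset web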
web 2 1 20s
Notice that the web-1 Pod is not launched until the web-0 Pod is
Running (see Pod Phase
(/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase))
and Ready (see type in Pod Conditions
(/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions)).
Pods in a StatefulSet
Pods in a StatefulSet have a unique ordinal index and a stable network identity.
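One way to see the stable identities is to print each Pod's hostname, for example:

for i in 0 1; do kubectl exec "web-$i" -- sh -c 'hostname'; done

which returns the Pod names shown next; the nslookup queries that follow are run from a temporary Pod inside the cluster (see the busybox example further below).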
web-0
web-1
nslookup web-0.nginx
Server: 10.0.0.10
Name: web-0.nginx
Address 1: 10.244.1.6
nslookup web-1.nginx
Server: 10.0.0.10
Name: web-1.nginx
Address 1: 10.244.2.6
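To see that these identities survive rescheduling, the Pods are deleted while being watched from another terminal; representative commands are:

kubectl get pod -w -l app=nginx
kubectl delete pod -l app=nginx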
Wait for the StatefulSet to restart them, and for both Pods to transition to
Running and Ready:
Use kubectl exec and kubectl run to view the Pods' hostnames and in-cluster
DNS entries. First,
view the Pods' hostnames:
web-0
web-1
then, run:
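A representative way to do this is to start a temporary busybox Pod and use its shell for the nslookup queries shown next:

kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm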
nslookup web-0.nginx
Server: 10.0.0.10
Name: web-0.nginx
Address 1: 10.244.1.7
nslookup web-1.nginx
Server: 10.0.0.10
Name: web-1.nginx
Address 1: 10.244.2.8
The StatefulSet controller created two PersistentVolumeClaims (/docs/concepts/storage/persistent-volumes/) that are bound to two PersistentVolumes (/docs/concepts/storage/persistent-volumes/).
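The tutorial writes each Pod's hostname into the index.html served from that storage and then reads it back; representative commands are:

for i in 0 1; do kubectl exec "web-$i" -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'; done
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done

which returns: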
web-0
web-1
Note:
If you instead see 403 Forbidden responses for the above curl command,
you will need
to fix the permissions of the directory mounted by the volumeMounts
(due to a bug when
using hostPath volumes (https://github.com/kubernetes/kubernetes/issues/2630)),
by
running:
for i in 0 1; do kubectl exec web-$i -- chmod 755 /usr/share/nginx/html; done
Examine the output of the kubectl get command in the first terminal, and wait
for all of the Pods to
transition to Running and Ready.
web-0
web-1
Even though web-0 and web-1 were rescheduled, they continue to serve their
hostnames because
the PersistentVolumes associated with their
PersistentVolumeClaims are remounted to their
volumeMounts . No matter what
node web-0 and web-1 are scheduled on, their PersistentVolumes
will be
mounted to the appropriate mount points.
Scaling a StatefulSet
Scaling a StatefulSet refers to increasing or decreasing the number of replicas.
This is accomplished
by updating the replicas field. You can use either
kubectl scale
(/docs/reference/generated/kubectl/kubectl-commands/#scale) or
kubectl patch
(/docs/reference/generated/kubectl/kubectl-commands/#patch) to scale a StatefulSet.
Scaling Up
In one terminal window, watch the Pods in the StatefulSet:
In another terminal window, use kubectl scale to scale the number of replicas
to 5:
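For example:

kubectl scale sts web --replicas=5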
statefulset.apps/web scaled
Examine the output of the kubectl get command in the first terminal, and wait
for the three
additional Pods to transition to Running and Ready.
Scaling Down
In one terminal, watch the StatefulSet's Pods:
In another terminal, use kubectl patch to scale the StatefulSet back down to
three replicas:
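For example:

kubectl patch sts web -p '{"spec":{"replicas":3}}'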
statefulset.apps/web patched
Updating StatefulSets
In Kubernetes 1.7 and later, the StatefulSet controller supports automated updates. The
strategy
used is determined by the spec.updateStrategy field of the
StatefulSet API Object. This feature can
be used to upgrade the container
images, resource requests and/or limits, labels, and annotations
of the Pods in a
StatefulSet. There are two valid update strategies, RollingUpdate and
OnDelete .
The RollingUpdate update strategy is the default for StatefulSets.
Rolling Update
The RollingUpdate update strategy will update all Pods in a StatefulSet, in
reverse ordinal order,
while respecting the StatefulSet guarantees.
Patch the web StatefulSet to apply the RollingUpdate update strategy:
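A representative patch:

kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'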
statefulset.apps/web patched
In one terminal window, patch the web StatefulSet to change the container
image again:
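A representative image patch, assuming the move from k8s.gcr.io/nginx-slim:0.8 to :0.7 seen in the image listings below:

kubectl patch statefulset web --type='json' -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"k8s.gcr.io/nginx-slim:0.7"}]'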
statefulset.apps/web patched
The Pods in the StatefulSet are updated in reverse ordinal order. The
StatefulSet controller
terminates each Pod, and waits for it to transition to Running and
Ready prior to updating the next
Pod. Note that, even though the StatefulSet
controller will not proceed to update the next Pod until
its ordinal successor
is Running and Ready, it will restore any Pod that fails during the update to
its
current version.
Pods that have already received the update will be restored to the updated version,
and Pods that
have not yet received the update will be restored to the previous
version. In this way, the controller
attempts to continue to keep the application
healthy and the update consistent in the presence of
intermittent failures.
Get the Pods to view their container images:
k8s.gcr.io/nginx-slim:0.8
k8s.gcr.io/nginx-slim:0.8
k8s.gcr.io/nginx-slim:0.8
All the Pods in the StatefulSet are now running the previous container image.
Note: You can also use kubectl rollout status sts/<name> to view
the status of a
rolling update to a StatefulSet
Staging an Update
You can stage an update to a StatefulSet by using the partition parameter of
the RollingUpdate
update strategy. A staged update will keep all of the Pods
in the StatefulSet at the current version
while allowing mutations to the
StatefulSet's .spec.template .
Patch the web StatefulSet to add a partition to the updateStrategy field:
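A representative patch, using a partition equal to the current number of replicas (3) so that no Pod is updated yet:

kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'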
statefulset.apps/web patched
statefulset.apps/web patched
k8s.gcr.io/nginx-slim:0.8
Notice that, even though the update strategy is RollingUpdate, the StatefulSet restored the Pod with its original container. This is because the ordinal of the Pod is less than the partition specified by the updateStrategy.
statefulset.apps/web patched
k8s.gcr.io/nginx-slim:0.7
k8s.gcr.io/nginx-slim:0.8
web-1 was restored to its original configuration because the Pod's ordinal
was less than the
partition. When a partition is specified, all Pods with an
ordinal that is greater than or equal to the
partition will be updated when the
StatefulSet's .spec.template is updated. If a Pod that has an
ordinal less
than the partition is deleted or otherwise terminated, it will be restored to
its original
configuration.
statefulset.apps/web patched
Wait for all of the Pods in the StatefulSet to become Running and Ready.
Get the container image details for the Pods in the StatefulSet:
k8s.gcr.io/nginx-slim:0.7
k8s.gcr.io/nginx-slim:0.7
k8s.gcr.io/nginx-slim:0.7
On Delete
The OnDelete update strategy implements the legacy (1.6 and prior) behavior,
When you select this
update strategy, the StatefulSet controller will not
automatically update Pods when a modification is
made to the StatefulSet's
.spec.template field. This strategy can be selected by setting the
.spec.template.updateStrategy.type to OnDelete .
Deleting StatefulSets
StatefulSet supports both Non-Cascading and Cascading deletion. In a
Non-Cascading Delete, the
StatefulSet's Pods are not deleted when the StatefulSet is deleted. In a Cascading Delete, both the
StatefulSet and its Pods are
deleted.
Non-Cascading Delete
In one terminal window, watch the Pods in the StatefulSet.
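Then, in another terminal, delete the StatefulSet without deleting its Pods by supplying the --cascade=orphan parameter referred to below:

kubectl delete statefulset web --cascade=orphan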
Even though web has been deleted, all of the Pods are still Running and Ready.
Delete web-0 :
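For example:

kubectl delete pod web-0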
As the web StatefulSet has been deleted, web-0 has not been relaunched.
In one terminal, watch the StatefulSet's Pods.
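In a second terminal, recreate the StatefulSet and headless Service by applying web.yaml again:

kubectl apply -f web.yaml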
statefulset.apps/web created
service/nginx unchanged
Ignore the error. It only indicates that an attempt was made to create the nginx
headless Service
even though that Service already exists.
Examine the output of the kubectl get command running in the first terminal.
Let's take another look at the contents of the index.html file served by the
Pods' webservers:
web-0
web-1
Even though you deleted both the StatefulSet and the web-0 Pod, it still
serves the hostname
originally entered into its index.html file. This is
because the StatefulSet never deletes the
PersistentVolumes associated with a
Pod. When you recreated the StatefulSet and it relaunched
web-0 , its original
PersistentVolume was remounted.
Cascading Delete
In one terminal window, watch the Pods in the StatefulSet.
In another terminal, delete the StatefulSet again. This time, omit the
--cascade=orphan parameter.
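For example:

kubectl delete statefulset web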
Examine the output of the kubectl get command running in the first terminal,
and wait for all of the
Pods to transition to Terminating.
Note: Although a cascading delete removes a StatefulSet together with its Pods,
the
cascade does not delete the headless Service associated with the StatefulSet.
You must
delete the nginx Service manually.
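Representative commands to delete the Service and then recreate everything from web.yaml:

kubectl delete service nginx
kubectl apply -f web.yaml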
service/nginx created
statefulset.apps/web created
When all of the StatefulSet's Pods transition to Running and Ready, retrieve
the contents of their
index.html files:
web-0
web-1
Even though you completely deleted the StatefulSet, and all of its Pods, the
Pods are recreated with
their PersistentVolumes mounted, and web-0 and
web-1 continue to serve their hostnames.
Finally, delete the nginx Service...
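...and the web StatefulSet, for example:

kubectl delete service nginx
kubectl delete statefulset web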
application/web/web-parallel.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/web/web-parallel.yaml)
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
serviceName: "nginx"
podManagementPolicy: "Parallel"
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 1Gi
This manifest is identical to the one you downloaded above except that the
.spec.podManagementPolicy
of the web StatefulSet is set to Parallel .
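In one terminal watch the Pods, and in another apply the manifest (saved locally as web-parallel.yaml):

kubectl get pods -w -l app=nginx
kubectl apply -f web-parallel.yaml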
service/nginx created
statefulset.apps/web created
Examine the output of the kubectl get command that you executed in the first terminal.
The StatefulSet controller launched both web-0 and web-1 at the same time.
Keep the second terminal open, and, in another terminal window scale the
StatefulSet:
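A representative command, scaling from the original two replicas to four:

kubectl scale statefulset/web --replicas=4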
statefulset.apps/web scaled
Examine the output of the terminal where the kubectl get command is running.
The StatefulSet launched two new Pods, and it did not wait for
the first to become Running and
Ready prior to launching the second.
Cleaning up
You should have two terminals open, ready for you to run kubectl commands as
part of cleanup.
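Representative commands for the two terminals:

kubectl get pods -w -l app=nginx
kubectl delete statefulset web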
You can watch kubectl get to see those Pods being deleted.
During deletion, a StatefulSet removes all Pods concurrently; it does not wait for
a Pod's ordinal
successor to terminate prior to deleting that Pod.
Close the terminal where the kubectl get command is running and delete the nginx
Service:
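For example:

kubectl delete service nginx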
Note:
You also need to delete the persistent storage media for the PersistentVolumes
used in
this tutorial.
Follow the necessary steps, based on your environment, storage configuration,
and
provisioning method, to ensure that all storage is reclaimed.
5.2 - Example: Deploying WordPress and MySQL with Persistent Volumes
Warning: This deployment is not suitable for production use cases, as it uses single
instance WordPress and MySQL Pods. Consider using WordPress Helm Chart
(https://github.com/kubernetes/charts/tree/master/stable/wordpress) to deploy
WordPress in production.
Note: The files provided in this tutorial are using GA Deployment APIs and are specific
to kubernetes version 1.9 and later. If you wish to use this tutorial with an earlier
version of Kubernetes, please update the API version appropriately, or reference earlier
versions of this tutorial.
Objectives
Create PersistentVolumeClaims and PersistentVolumes
Create a kustomization.yaml with
a Secret generator
MySQL resource configs
WordPress resource configs
Apply the kustomization directory by kubectl apply -k ./
Clean up
Warning: In local clusters, the default StorageClass uses the hostPath provisioner.
hostPath volumes are only suitable for development and testing. With hostPath
volumes, your data lives in /tmp on the node the Pod is scheduled onto and does not
move between nodes. If a Pod dies and gets scheduled to another node in the cluster,
or the node is rebooted, the data is lost.
Note: If you are bringing up a cluster that needs to use the hostPath provisioner, the --enable-hostpath-provisioner flag must be set in the controller-manager component.
Note: If you have a Kubernetes cluster running on Google Kubernetes Engine, please
follow this guide (https://cloud.google.com/kubernetes-
engine/docs/tutorials/persistent-disk).
Create a kustomization.yaml
Add a Secret generator
A Secret (/docs/concepts/configuration/secret/) is an object that stores a piece of sensitive data like a
password or key. Since 1.14, kubectl supports the management of Kubernetes objects using a
kustomization file. You can create a Secret by generators in kustomization.yaml .
Add a Secret generator in kustomization.yaml from the following command. You will need to replace
YOUR_PASSWORD with the password you want to use.
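One way to do this is with a shell here-document that writes the file (the file name kustomization.yaml is the one referenced throughout this page):

cat <<EOF >./kustomization.yaml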
secretGenerator:
- name: mysql-pass
literals:
- password=YOUR_PASSWORD
EOF
application/wordpress/mysql-deployment.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/wordpress/mysql-deployment.yaml)
apiVersion: v1
kind: Service
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
ports:
- port: 3306
selector:
app: wordpress
tier: mysql
clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
application/wordpress/wordpress-deployment.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/wordpress/wordpress-deployment.yaml)
apiVersion: v1
kind: Service
metadata:
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
tier: frontend
type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: wp-pv-claim
labels:
app: wordpress
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:4.8-apache
name: wordpress
env:
- name: WORDPRESS_DB_HOST
value: wordpress-mysql
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
ports:
- containerPort: 80
name: wordpress
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: wp-pv-claim
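Download both manifests and append them to kustomization.yaml, for example:

curl -LO https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/wordpress/mysql-deployment.yaml
curl -LO https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/wordpress/wordpress-deployment.yaml
cat <<EOF >>./kustomization.yaml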
resources:
- mysql-deployment.yaml
- wordpress-deployment.yaml
EOF
The kustomization.yaml now contains all the resources for deploying WordPress and MySQL. Apply the kustomization directory to create them:

kubectl apply -k ./
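Verify that the generated Secret exists, for example:

kubectl get secrets

The response should be similar to the following (the generated suffix will differ):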
mysql-pass-c57bb4t7mf Opaque 1 9s
Note: It can take up to a few minutes for the PVs to be provisioned and bound.
The response should be like this:
Note: It can take up to a few minutes for the Pod's Status to be RUNNING .
Note: Minikube can only expose Services through NodePort . The EXTERNAL-IP is
always pending.
5. Run the following command to get the IP Address for the WordPress Service:
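On Minikube a representative command is:

minikube service wordpress --url

The response should be similar to: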
http://1.2.3.4:32406
6. Copy the IP address, and load the page in your browser to view your site.
You should see the WordPress set up page similar to the following screenshot.
Warning: Do not leave your WordPress installation on this page. If another user finds it,
they can set up a website on your instance and use it to serve malicious content.
Either install WordPress by creating a username and password or delete your instance.
Cleaning up
1. Run the following command to delete your Secret, Deployments, Services and
PersistentVolumeClaims:
kubectl delete -k ./
What's next
Learn more about Introspection and Debugging (/docs/tasks/debug-application-cluster/debug-
application-introspection/)
Learn more about Jobs (/docs/concepts/workloads/controllers/job/)
Learn more about Port Forwarding (/docs/tasks/access-application-cluster/port-forward-
access-application-cluster/)
Learn how to Get a Shell to a Container (/docs/tasks/debug-application-cluster/get-shell-
running-container/)
5.3 - Example: Deploying Cassandra with a StatefulSet
Note:
Cassandra and Kubernetes both use the term node to mean a member of a cluster. In
this
tutorial, the Pods that belong to the StatefulSet are Cassandra nodes and are
members
of the Cassandra cluster (called a ring). When those Pods run in your
Kubernetes cluster,
the Kubernetes control plane schedules those Pods onto
Kubernetes
Nodes (/docs/concepts/architecture/nodes/).
When a Cassandra node starts, it uses a seed list to bootstrap discovery of other
nodes
in the ring.
This tutorial deploys a custom Cassandra seed provider that lets the
database discover
new Cassandra Pods as they appear inside your Kubernetes cluster.
Objectives
Create and validate a Cassandra headless
Service (/docs/concepts/services-networking/service/).
Use a StatefulSet (/docs/concepts/workloads/controllers/statefulset/) to create a Cassandra
ring.
Validate the StatefulSet.
Modify the StatefulSet.
Delete the StatefulSet and its Pods (/docs/concepts/workloads/pods/).
To complete this tutorial, you should already have a basic familiarity with
Pods (/docs/concepts/workloads/pods/),
Services (/docs/concepts/services-networking/service/), and
StatefulSets (/docs/concepts/workloads/controllers/statefulset/).
application/cassandra/cassandra-service.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/cassandra/cassandra-service.yaml)
apiVersion: v1
kind: Service
metadata:
labels:
app: cassandra
name: cassandra
spec:
clusterIP: None
ports:
- port: 9042
selector:
app: cassandra
Create a Service to track all Cassandra StatefulSet members from the cassandra-service.yaml file:
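For example, applying the manifest directly from the URL linked above:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/cassandra/cassandra-service.yaml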
Validating (optional)
Get the Cassandra Service.
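A representative command, using the Service name from the manifest above:

kubectl get svc cassandra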
The response should show the cassandra Service with a ClusterIP of None (it is headless) and port 9042/TCP.
If you don't see a Service named cassandra , that means creation failed. Read
Debug Services
(/docs/tasks/debug-application-cluster/debug-service/)
for help troubleshooting common issues.
application/cassandra/cassandra-statefulset.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/cassandra/cassandra-statefulset.yaml)
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
terminationGracePeriodSeconds: 1800
containers:
- name: cassandra
image: gcr.io/google-samples/cassandra:v13
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
resources:
limits:
cpu: "500m"
memory: 1Gi
requests:
cpu: "500m"
memory: 1Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- nodetool drain
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEEDS
value: "cassandra-0.cassandra.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "K8Demo"
- name: CASSANDRA_DC
value: "DC1-K8Demo"
- name: CASSANDRA_RACK
value: "Rack1-K8Demo"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
readinessProbe:
exec:
command:
- /bin/bash
- -c
- /ready-probe.sh
initialDelaySeconds: 15
timeoutSeconds: 5
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
volumeMounts:
- name: cassandra-data
mountPath: /cassandra_data
volumeClaimTemplates:
- metadata:
name: cassandra-data
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: fast
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: fast
provisioner: k8s.io/minikube-hostpath
parameters:
type: pd-ssd
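Create the Cassandra StatefulSet (and the fast StorageClass defined alongside it) from the manifest linked above, then check it:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/cassandra/cassandra-statefulset.yaml
kubectl get statefulset cassandra

The response should be similar to: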
cassandra 3 0 13s
It can take several minutes for all three Pods to deploy. Once they are deployed, the same
command
returns output similar to:
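A representative way to list the Pods and then inspect the ring from inside the first Pod:

kubectl get pods -l app=cassandra
kubectl exec -it cassandra-0 -- nodetool status

The nodetool report begins like this: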
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
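To change the size of the ring, edit the StatefulSet:

kubectl edit statefulset cassandra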
This command opens an editor in your terminal. The line you need to change is the replicas
field.
The following sample is an excerpt of the StatefulSet file:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures.
apiVersion: apps/v1
kind: StatefulSet
metadata:
creationTimestamp: 2016-08-13T18:40:58Z
generation: 1
labels:
app: cassandra
name: cassandra
namespace: default
resourceVersion: "323"
uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
replicas: 3
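Change the number of replicas to 4, save the manifest, and then verify the change:

kubectl get statefulset cassandra

The response should be similar to: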
cassandra 4 4 36m
Cleaning up
Deleting or scaling a StatefulSet down does not delete the volumes associated with the StatefulSet.
This setting is for your safety because your data is more valuable than automatically purging all
related StatefulSet resources.
Warning: Depending on the storage class and reclaim policy, deleting the
PersistentVolumeClaims may cause the associated volumes
to also be deleted. Never
assume you'll be able to access data if its volume claims are deleted.
1. Run the following commands (chained together into a single command) to delete everything in
the Cassandra StatefulSet:
2. Run the following command to delete the Service you set up for Cassandra:
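A sketch of the commands for steps 1 and 2, assuming the StatefulSet, PersistentVolumeClaims, and Service all carry the app=cassandra label used in the manifests above:
grace=$(kubectl get pod cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
  && kubectl delete statefulset -l app=cassandra \
  && echo "Sleeping ${grace} seconds" 1>&2 \
  && sleep $grace \
  && kubectl delete persistentvolumeclaim -l app=cassandra

kubectl delete service -l app=cassandra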
What's next
Learn how to Scale a StatefulSet (/docs/tasks/run-application/scale-stateful-set/).
Learn more about the KubernetesSeedProvider
(https://github.com/kubernetes/examples/blob/master/cassandra/java/src/main/java/io/k8s/cassandra/KubernetesSeedProvider.java)
See more custom Seed Provider Configurations
(https://git.k8s.io/examples/cassandra/java/README.md)
You must have a cluster with at least four nodes, and each node requires at least 2 CPUs and 4 GiB
of memory. In this tutorial you will cordon and drain the cluster's nodes. This means that the
cluster will terminate and evict all Pods on its nodes, and the nodes will temporarily become
unschedulable. You should use a dedicated cluster for this tutorial, or you should ensure that the
disruption you cause will not interfere with other tenants.
This tutorial assumes that you have configured your cluster to dynamically provision
PersistentVolumes. If your cluster is not configured to do so, you
will have to manually provision
three 20 GiB volumes before starting this
tutorial.
Objectives
After this tutorial, you will know the following.
How to deploy a ZooKeeper ensemble using StatefulSet.
How to consistently configure the ensemble.
How to spread the deployment of ZooKeeper servers in the ensemble.
How to use PodDisruptionBudgets to ensure service availability during planned maintenance.
ZooKeeper
Apache ZooKeeper (https://zookeeper.apache.org/doc/current/) is a
distributed, open-source
coordination service for distributed applications.
ZooKeeper allows you to read, write, and observe updates to data. Data is organized in a file-system-like hierarchy and replicated to all ZooKeeper servers in the ensemble (a set of ZooKeeper servers). All operations on data
are atomic and
sequentially consistent. ZooKeeper ensures this by using the
Zab
(https://pdfs.semanticscholar.org/b02c/6b00bd5dbdbd951fddb00b906c82fa80f0b3.pdf)
consensus
protocol to replicate a state machine across all servers in the ensemble.
The ensemble uses the Zab protocol to elect a leader, and the ensemble cannot write data until that
election is complete. Once complete, the ensemble uses Zab to ensure that it replicates all writes to
a quorum before it acknowledges and makes them visible to clients. Without respect to weighted
quorums, a quorum is a majority component of the ensemble containing the current leader. For
instance, if the ensemble has three servers, a component that contains the leader and one other
server constitutes a quorum. If the ensemble can not achieve a quorum, the ensemble cannot write
data.
ZooKeeper servers keep their entire state machine in memory, and write every mutation to a
durable WAL (Write Ahead Log) on storage media. When a server crashes, it can recover its previous
state by replaying the WAL. To prevent the WAL from growing without bound, ZooKeeper servers will
periodically snapshot them in memory state to storage media. These snapshots can be loaded
directly into memory, and all WAL entries that preceded the snapshot may be discarded.
application/zookeeper/zookeeper.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/zookeeper/zookeeper.yaml)
apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
ports:
- port: 2888
name: server
- port: 3888
name: leader-election
clusterIP: None
selector:
app: zk
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
ports:
- port: 2181
name: client
selector:
app: zk
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: zk
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
containers:
- name: kubernetes-zookeeper
imagePullPolicy: Always
image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
command:
- sh
- -c
- "start-zookeeper \
--servers=3 \
--data_dir=/var/lib/zookeeper/data \
--data_log_dir=/var/lib/zookeeper/data/log \
--conf_dir=/opt/zookeeper/conf \
--client_port=2181 \
--election_port=3888 \
--server_port=2888 \
--tick_time=2000 \
--init_limit=10 \
--sync_limit=5 \
--heap=512M \
--max_client_cnxns=60 \
--snap_retain_count=3 \
--purge_interval=12 \
--max_session_timeout=40000 \
--min_session_timeout=4000 \
--log_level=INFO"
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
securityContext:
runAsUser: 1000
fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
service/zk-hs created
service/zk-cs created
poddisruptionbudget.policy/zk-pdb created
statefulset.apps/zk created
Once the zk-2 Pod is Running and Ready, use CTRL-C to terminate kubectl.
The StatefulSet controller creates three Pods, and each Pod has a container with
a ZooKeeper
(https://www-us.apache.org/dist/zookeeper/stable/) server.
The StatefulSet controller provides each Pod with a unique hostname based on its ordinal index. The
hostnames take the form of <statefulset name>-<ordinal index> . Because the replicas field of the
zk StatefulSet is set to 3 , the Set's controller creates three Pods with their hostnames set to zk-0 ,
zk-1 , and
zk-2 .
zk-0
zk-1
zk-2
The servers in a ZooKeeper ensemble use natural numbers as unique identifiers, and store each
server's identifier in a file called myid in the server's data directory.
To examine the contents of the myid file for each server use the following command.
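One possible form of that command, assuming the Pods are named zk-0 through zk-2 and the data directory from the manifest above (/var/lib/zookeeper/data):
for i in 0 1 2; do echo "myid zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done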
Because the identifiers are natural numbers and the ordinal indices are non-negative integers, you
can generate an identifier by adding 1 to the ordinal.
myid zk-0
myid zk-1
myid zk-2
To get the Fully Qualified Domain Name (FQDN) of each Pod in the zk StatefulSet use the following
command.
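A sketch of that command, assuming the container image provides hostname -f:
for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done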
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/log
tickTime=2000
initLimit=10
syncLimit=2000
maxClientCnxns=60
minSessionTimeout= 4000
maxSessionTimeout= 40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
Achieving consensus
Consensus protocols require that the identifiers of each participant be unique. No two participants
in the Zab protocol should claim the same unique identifier. This is necessary to allow the processes
in the system to agree on which processes have committed which data. If two Pods are launched
with the same ordinal, two ZooKeeper servers would both identify themselves as the same server.
The A records for each Pod are entered when the Pod becomes Ready. Therefore,
the FQDNs of the
ZooKeeper servers will resolve to a single endpoint, and that
endpoint will be the unique ZooKeeper
server claiming the identity configured
in its myid file.
zk-0.zk-hs.default.svc.cluster.local
zk-1.zk-hs.default.svc.cluster.local
zk-2.zk-hs.default.svc.cluster.local
This ensures that the server properties in the ZooKeeper servers' zoo.cfg files represent a correctly configured ensemble.
server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
When the servers use the Zab protocol to attempt to commit a value, they will either achieve
consensus and commit the value (if leader election has succeeded and at least two of the Pods are
Running and Ready), or they will fail to do so (if either of the conditions are not met). No state will
arise where one server acknowledges a write on behalf of another.
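The output that follows comes from writing a value into the ensemble. A sketch of such a write, assuming the zkCli.sh client shipped in the image, stores world under the path /hello on the zk-0 Pod:
kubectl exec zk-0 -- zkCli.sh create /hello world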
WATCHER::
Created /hello
To get the data from the zk-1 Pod use the following command.
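A sketch of that command, again assuming the zkCli.sh client:
kubectl exec zk-1 -- zkCli.sh get /hello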
The data that you created on zk-0 is available on all the servers in the
ensemble.
WATCHER::
world
cZxid = 0x100000002
mZxid = 0x100000002
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
This creates the zk StatefulSet object, but the other API objects in the manifest are not modified
because they already exist.
Watch the StatefulSet controller recreate the StatefulSet's Pods.
Once the zk-2 Pod is Running and Ready, use CTRL-C to terminate kubectl.
Use the command below to get the value you entered during the sanity test,
from the zk-2 Pod.
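A sketch of that command, assuming the zkCli.sh client:
kubectl exec zk-2 -- zkCli.sh get /hello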
Even though you terminated and recreated all of the Pods in the zk StatefulSet, the ensemble still
serves the original value.
WATCHER::
world
cZxid = 0x100000002
mZxid = 0x100000002
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
volumeClaimTemplates:
- metadata:
name: datadir
annotations:
volume.alpha.kubernetes.io/storage-class: anything
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 20Gi
When the StatefulSet recreated its Pods, it remounts the Pods' PersistentVolumes.
The volumeMounts section of the StatefulSet 's container template mounts the PersistentVolumes in
the ZooKeeper servers' data directories.
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
command:
- sh
- -c
- "start-zookeeper \
--servers=3 \
--data_dir=/var/lib/zookeeper/data \
--data_log_dir=/var/lib/zookeeper/data/log \
--conf_dir=/opt/zookeeper/conf \
--client_port=2181 \
--election_port=3888 \
--server_port=2888 \
--tick_time=2000 \
--init_limit=10 \
--sync_limit=5 \
--heap=512M \
--max_client_cnxns=60 \
--snap_retain_count=3 \
--purge_interval=12 \
--max_session_timeout=40000 \
--min_session_timeout=4000 \
--log_level=INFO"
The command used to start the ZooKeeper servers passed the configuration as command line parameters. You can also use environment variables to pass configuration to the ensemble.
Configuring logging
One of the files generated by the zkGenConfig.sh script controls ZooKeeper's logging.
ZooKeeper
uses Log4j (https://logging.apache.org/log4j/2.x/), and, by default,
it uses a time and size based
rolling file appender for its logging configuration.
Use the command below to get the logging configuration from one of the Pods in the zk StatefulSet .
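A sketch of that command, assuming the configuration generated by zkGenConfig.sh lives at /usr/etc/zookeeper/log4j.properties:
kubectl exec zk-0 -- cat /usr/etc/zookeeper/log4j.properties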
The logging configuration below will cause the ZooKeeper process to write all
of its logs to the
standard output file stream.
zookeeper.root.logger=CONSOLE
zookeeper.console.threshold=INFO
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
This is the simplest possible way to safely log inside the container.
Because the applications write
logs to standard out, Kubernetes will handle log rotation for you.
Kubernetes also implements a
sane retention policy that ensures application logs written to
standard out and standard error do
not exhaust local storage media.
Use kubectl logs (/docs/reference/generated/kubectl/kubectl-commands/#logs) to retrieve the last
20 log lines from one of the Pods.
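For example, to get the last 20 lines from the zk-0 Pod:
kubectl logs zk-0 --tail 20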
You can view application logs written to standard out or standard error using kubectl logs and
from the Kubernetes Dashboard.
Kubernetes integrates with many logging solutions. You can choose a logging solution
that best fits
your cluster and applications. For cluster-level logging and aggregation,
consider deploying a sidecar
container (/docs/concepts/cluster-administration/logging#sidecar-container-with-logging-agent) to
rotate and ship your logs.
securityContext:
runAsUser: 1000
fsGroup: 1000
In the Pods' containers, UID 1000 corresponds to the zookeeper user and GID 1000
corresponds to
the zookeeper group.
Get the ZooKeeper process information from the zk-0 Pod.
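For example, by running ps inside the container:
kubectl exec zk-0 -- ps -ef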
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
By default, when the Pod's PersistentVolume is mounted to the ZooKeeper server's data directory,
it is only accessible by the root user. This configuration prevents the ZooKeeper process from writing
to its WAL and storing its snapshots.
Use the command below to get the file permissions of the ZooKeeper data directory on the zk-0
Pod.
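A sketch of that command, assuming the data directory from the manifest:
kubectl exec -ti zk-0 -- ls -ld /var/lib/zookeeper/data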
Because the fsGroup field of the securityContext object is set to 1000, the ownership of the Pods'
PersistentVolumes is set to the zookeeper group, and the ZooKeeper process is able to read and
write its data.
statefulset.apps/zk patched
This terminates the Pods, one at a time, in reverse ordinal order, and recreates them with the new
configuration. This ensures that quorum is maintained during a rolling update.
Use the kubectl rollout history command to view a history of previous configurations.
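For example, for the zk StatefulSet:
kubectl rollout history sts/zk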
statefulsets "zk"
REVISION
Use the kubectl rollout undo command to roll back the modification.
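For example:
kubectl rollout undo sts/zk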
The command used as the container's entry point has PID 1, and
the ZooKeeper process, a child of
the entry point, has PID 27.
In another terminal watch the Pods in the zk StatefulSet with the following command.
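For example, using the app=zk label from the manifest:
kubectl get pod -w -l app=zk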
In another terminal, terminate the ZooKeeper process in Pod zk-0 with the following command.
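A sketch of that command, assuming the server is the only java process in the container:
kubectl exec zk-0 -- pkill java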
The termination of the ZooKeeper process caused its parent process to terminate. Because the
RestartPolicy of the container is Always, it restarted the parent process.
livenessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 15
timeoutSeconds: 5
The probe calls a bash script that uses the ZooKeeper ruok four letter
word to test the server's
health.
OK=$(echo ruok | nc 127.0.0.1 $1)
if [ "$OK" == "imok" ]; then
    exit 0
else
    exit 1
fi
In one terminal window, use the following command to watch the Pods in the zk StatefulSet.
In another window, use the following command to delete the zookeeper-ready script from the file
system of Pod zk-0 .
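A sketch of both commands, assuming the app=zk label and a readiness script installed at /opt/zookeeper/bin/zookeeper-ready:
kubectl get pod -w -l app=zk
kubectl exec zk-0 -- rm /opt/zookeeper/bin/zookeeper-ready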
When the liveness probe for the ZooKeeper process fails, Kubernetes will
automatically restart the
process for you, ensuring that unhealthy processes in
the ensemble are restarted.
There are cases, particularly during initialization and termination, when a process can be alive but
not ready.
If you specify a readiness probe, Kubernetes will ensure that your application's
processes will not
receive network traffic until their readiness checks pass.
For a ZooKeeper server, liveness implies readiness. Therefore, the readiness
probe from the
zookeeper.yaml manifest is identical to the liveness probe.
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 15
timeoutSeconds: 5
Even though the liveness and readiness probes are identical, it is important
to specify both. This
ensures that only healthy servers in the ZooKeeper
ensemble receive network traffic.
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
kubernetes-node-cxpk
kubernetes-node-a5aq
kubernetes-node-2g2d
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
Surviving maintenance
In this section you will cordon and drain nodes. If you are using this tutorial
on a shared cluster, be
sure that this will not adversely affect other tenants.
The previous section showed you how to spread your Pods across nodes to survive
unplanned node
failures, but you also need to plan for temporary node failures
that occur due to planned
maintenance.
Use this command to get the nodes in your cluster.
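For example:
kubectl get nodes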
The maxUnavailable field indicates to Kubernetes that at most one Pod from the
zk StatefulSet can
be unavailable at any time.
zk-pdb N/A 1 1
In one terminal, use this command to watch the Pods in the zk StatefulSet .
In another terminal, use this command to get the nodes that the Pods are currently scheduled on.
for i in 0 1 2; do kubectl get pod zk-$i --template {{.spec.nodeName}}; echo ""; done
kubernetes-node-pb41
kubernetes-node-ixsl
kubernetes-node-i4c4
kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --dele
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-clo
pod "zk-0" deleted
As there are four nodes in your cluster, kubectl drain succeeds and the zk-0 Pod is rescheduled to
another node.
Keep watching the StatefulSet 's Pods in the first terminal and drain the node on which
zk-1 is
scheduled.
kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --dele
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-clo
pod "zk-1" deleted
The zk-1 Pod cannot be scheduled because the zk StatefulSet contains a PodAntiAffinity rule
preventing
co-location of the Pods, and as only two nodes are schedulable, the Pod will remain in a
Pending state.
Continue to watch the Pods of the stateful set, and drain the node on which
zk-2 is scheduled.
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --dele
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-clo
WARNING: Ignoring DaemonSet-managed pods: node-problem-detector-v0.1-dyrog; Deleting pods not managed b
There are pending pods when an error occurred: Cannot evict pod as it would violate the pod's disruptio
pod/zk-2
world
cZxid = 0x200000002
mZxid = 0x200000002
pZxid = 0x200000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
zk-1 is rescheduled on this node. Wait until zk-1 is Running and Ready.
kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --dele
The output:
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, or DaemonSet: fluentd-clo
pod "heapster-v1.2.0-2604621511-wht1r" deleted
You can use kubectl drain in conjunction with PodDisruptionBudgets to ensure that your services
remain available during maintenance.
If drain is used to cordon nodes and evict pods prior to taking
the node offline for maintenance,
services that express a disruption budget will have that budget
respected.
You should always allocate additional capacity for critical services so that their Pods can
be immediately rescheduled.
Cleaning up
Use kubectl uncordon to uncordon all the nodes in your cluster.
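For example, once per drained node (the node name here is a placeholder):
kubectl uncordon <node-name>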
You must delete the persistent storage media for the PersistentVolumes used in this tutorial.
Follow the necessary steps, based on your environment, storage configuration,
and
provisioning method, to ensure that all storage is reclaimed.
6 - Clusters
6.1 - Restrict a Container's Access to
Resources with AppArmor
FEATURE STATE: Kubernetes v1.4 [beta]
AppArmor is a Linux kernel security module that supplements the standard Linux user and group
based
permissions to confine programs to a limited set of resources. AppArmor can be configured
for any
application to reduce its potential attack surface and provide greater in-depth defense. It is
configured through profiles tuned to allow the access needed by a specific program or container,
such as Linux capabilities, network access, file permissions, etc. Each profile can be run in either
enforcing mode, which blocks access to disallowed resources, or complain mode, which only reports
violations.
AppArmor can help you to run a more secure deployment by restricting what containers are allowed
to
do, and/or provide better auditing through system logs. However, it is important to keep in mind
that AppArmor is not a silver bullet and can only do so much to protect against exploits in your
application code. It is important to provide good, restrictive profiles, and harden your
applications
and cluster from other angles as well.
Objectives
See an example of how to load a profile on a node
Learn how to enforce the profile on a Pod
Learn how to check that the profile is loaded
See what happens when a profile is violated
See what happens when a profile cannot be loaded
gke-test-default-pool-239f5d02-gyn2: v1.4.0
gke-test-default-pool-239f5d02-x1kf: v1.4.0
gke-test-default-pool-239f5d02-xwux: v1.4.0
2. AppArmor kernel module is enabled -- For the Linux kernel to enforce an AppArmor profile, the
AppArmor kernel module must be installed and enabled. Several distributions enable the
module by
default, such as Ubuntu and SUSE, and many others provide optional support. To
check whether the
module is enabled, check the /sys/module/apparmor/parameters/enabled file:
cat /sys/module/apparmor/parameters/enabled
If the Kubelet contains AppArmor support (>= v1.4), it will refuse to run a Pod with AppArmor
options if the kernel module is not enabled.
Note: Ubuntu carries many AppArmor patches that have not been merged into the
upstream Linux
kernel, including patches that add additional hooks and features.
Kubernetes has only been
tested with the upstream version, and does not promise
support for other features.
3. Container runtime supports AppArmor -- Currently all common Kubernetes-supported
container
runtimes should support AppArmor, like Docker (https://docs.docker.com/engine/),
apparmor-test-deny-write (enforce)
apparmor-test-audit-write (enforce)
docker-default (enforce)
k8s-nginx (enforce)
As long as the Kubelet version includes AppArmor support (>= v1.4), the Kubelet will reject a Pod
with AppArmor options if any of the prerequisites are not met. You can also verify AppArmor
support
on nodes by checking the node ready condition message (though this is likely to be
removed in a
later release):
Securing a Pod
Note: AppArmor is currently in beta, so options are specified as annotations. Once
support graduates to
general availability, the annotations will be replaced with first-
class fields (more details in
Upgrade path to GA).
AppArmor profiles are specified per-container. To specify the AppArmor profile to run a Pod
container with, add an annotation to the Pod's metadata:
container.apparmor.security.beta.kubernetes.io/<container_name>: <profile_ref>
Where <container_name> is the name of the container to apply the profile to, and <profile_ref>
specifies the profile to apply. The profile_ref can be one of:
runtime/default to apply the runtime's default profile
localhost/<profile_name> to apply the profile loaded on the host with the name <profile_name>
See the API Reference for the full details on the annotation and profile name formats.
Kubernetes AppArmor enforcement works by first checking that all the prerequisites have been
met,
and then forwarding the profile selection to the container runtime for enforcement. If the
prerequisites have not been met, the Pod will be rejected, and will not run.
To verify that the profile was applied, you can look for the AppArmor security option listed in the
container created event:
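One way to do this (a sketch):
kubectl get events | grep Created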
You can also verify directly that the container's root process is running with the correct profile by
checking its proc attr:
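A sketch of that check, where <pod_name> is a placeholder for your Pod's name:
kubectl exec <pod_name> -- cat /proc/1/attr/current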
k8s-apparmor-example-deny-write (enforce)
Example
This example assumes you have already set up a cluster with AppArmor support.
First, we need to load the profile we want to use onto our nodes. This profile denies all file writes:
#include <tunables/global>
profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>
  file,
  # Deny all file writes.
  deny /** w,
}
Since we don't know where the Pod will be scheduled, we'll need to load the profile on all our
nodes.
For this example we'll use SSH to install the profiles, but other approaches are
discussed in Setting
up nodes with profiles.
NODES=(
    gke-test-default-pool-239f5d02-gyn2.us-central1-a.my-k8s
    gke-test-default-pool-239f5d02-x1kf.us-central1-a.my-k8s
    gke-test-default-pool-239f5d02-xwux.us-central1-a.my-k8s)
for NODE in ${NODES[*]}; do ssh $NODE 'sudo apparmor_parser -q <<EOF
#include <tunables/global>
profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>
  file,
  # Deny all file writes.
  deny /** w,
}
EOF'
done
Next, we'll run a simple "Hello AppArmor" pod with the deny-write profile:
pods/security/hello-apparmor.yaml
(https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/security/hello-apparmor.yaml)
apiVersion: v1
kind: Pod
metadata:
name: hello-apparmor
annotations:
# Note that this is ignored if the Kubernetes node is not running version 1.4 or greater.
container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
containers:
- name: hello
image: busybox
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
If we look at the pod events, we can see that the Pod container was created with the AppArmor
profile "k8s-apparmor-example-deny-write":
We can verify that the container is actually running with that profile by checking its proc attr:
k8s-apparmor-example-deny-write (enforce)
Finally, we can see what happens if we try to violate the profile by writing to a file:
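A sketch of such a write attempt:
kubectl exec hello-apparmor -- touch /tmp/test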
error: error executing remote command: command terminated with non-zero exit code: Error executing in D
To wrap up, let's look at what happens if we try to specify a profile that hasn't been loaded:
kubectl create -f /dev/stdin <<EOF
apiVersion: v1
kind: Pod
metadata:
name: hello-apparmor-2
annotations:
container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-allow-write
spec:
containers:
- name: hello
image: busybox
command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
EOF
pod/hello-apparmor-2 created
Name: hello-apparmor-2
Namespace: default
Node: gke-test-default-pool-239f5d02-x1kf/
Labels: <none>
Annotations: container.apparmor.security.beta.kubernetes.io/hello=localhost/k8s-apparmor-example-allo
Status: Pending
Reason: AppArmor
IP:
Controllers: <none>
Containers:
hello:
Container ID:
Image: busybox
Image ID:
Port:
Command:
sh
-c
State: Waiting
Reason: Blocked
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-dnz7v:
SecretName: default-token-dnz7v
Optional: false
Node-Selectors: <none>
Tolerations: <none>
Events:
Note the pod status is Pending, with a helpful error message: Pod Cannot enforce AppArmor: profile
"k8s-apparmor-example-allow-write" is not loaded . An event was also recorded with the same
message.
Administration
Setting up nodes with profiles
Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto
nodes. There are lots of ways to setup the profiles though, such as:
Through a DaemonSet (/docs/concepts/workloads/controllers/daemonset/) that runs a Pod on
each node to
ensure the correct profiles are loaded. An example implementation can be found
here (https://git.k8s.io/kubernetes/test/images/apparmor-loader).
At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or
image.
By copying the profiles to each node and loading them through SSH, as demonstrated in the
Example.
The scheduler is not aware of which profiles are loaded onto which node, so the full set of profiles
must be loaded onto every node. An alternative approach is to add a node label for each profile (or
class of profiles) on the node, and use a node selector (/docs/concepts/scheduling-eviction/assign-pod-node/) to ensure the Pod is run on a node with the required profile.
--enable-admission-plugins=PodSecurityPolicy[,others...]
apparmor.security.beta.kubernetes.io/defaultProfileName: <profile_ref>
apparmor.security.beta.kubernetes.io/allowedProfileNames: <profile_ref>[,others...]
The default profile name option specifies the profile to apply to containers by default when none is
specified. The allowed profile names option specifies a list of profiles that Pod containers are
allowed to be run with. If both options are provided, the default must be allowed. The profiles are
specified in the same format as on containers. See the API Reference for the full
specification.
Disabling AppArmor
If you do not want AppArmor to be available on your cluster, it can be disabled by a command-line
flag:
--feature-gates=AppArmor=false
When disabled, any Pod that includes an AppArmor profile will fail validation with a "Forbidden"
error. Note that by default docker always enables the "docker-default" profile on non-privileged
pods (if the AppArmor kernel module is enabled), and will continue to do so even if the feature-gate
is disabled. The option to disable AppArmor will be removed when AppArmor graduates to general
availability (GA).
Authoring Profiles
Getting AppArmor profiles specified correctly can be a tricky business. Fortunately there are some
tools to help with that:
aa-genprof and aa-logprof generate profile rules by monitoring an application's activity and
logs, and admitting the actions it takes. Further instructions are provided by the
AppArmor
documentation (https://gitlab.com/apparmor/apparmor/wikis/Profiling_with_tools).
bane (https://github.com/jfrazelle/bane) is an AppArmor profile generator for Docker that uses
a
simplified profile language.
API Reference
Pod Annotation
Specifying the profile a container will run with:
key: container.apparmor.security.beta.kubernetes.io/<container_name>
Where <container_name>
matches the name of a container in the Pod.
A separate profile can be specified for each
container in the Pod.
value: a profile reference, described below
Profile Reference
runtime/default : Refers to the default runtime profile.
Equivalent to not specifying a profile (without a PodSecurityPolicy default), except it still
requires AppArmor to be enabled.
For Docker, this resolves to the
docker-default
(https://docs.docker.com/engine/security/apparmor/) profile for non-privileged
containers, and unconfined (no profile) for privileged containers.
localhost/<profile_name> : Refers to a profile loaded on the node (localhost) by name.
The possible profile names are detailed in the
core policy reference
(https://gitlab.com/apparmor/apparmor/wikis/AppArmor_Core_Policy_Reference#profile-names-and-attachment-specifications).
unconfined : This effectively disables AppArmor on the container.
PodSecurityPolicy Annotations
Specifying the default profile to apply to containers when none is provided:
key: apparmor.security.beta.kubernetes.io/defaultProfileName
value: a profile reference, described above
What's next
Additional resources:
Quick guide to the AppArmor profile language
(https://gitlab.com/apparmor/apparmor/wikis/QuickProfileLanguage)
6.2 - Restrict a Container's Syscalls with seccomp
Seccomp stands for secure computing mode and has been a feature of the Linux
kernel since
version 2.6.12. It can be used to sandbox the privileges of a
process, restricting the calls it is able to
make from userspace into the
kernel. Kubernetes lets you automatically apply seccomp profiles
loaded onto a
Node to your Pods and containers.
Identifying the privileges required for your workloads can be difficult. In this
tutorial, you will go
through how to load seccomp profiles into a local
Kubernetes cluster, how to apply them to a Pod,
and how you can begin to craft
profiles that give only the necessary privileges to your container
processes.
Objectives
Learn how to load seccomp profiles on a node
Learn how to apply a seccomp profile to a container
Observe auditing of syscalls made by a container process
Observe behavior when a missing profile is specified
Observe a violation of a seccomp profile
Learn how to create fine-grained seccomp profiles
Learn how to apply a container runtime default seccomp profile
If you were introducing this feature into a production-like cluster, the Kubernetes project
recommends that you enable this feature gate on a subset of your nodes and then
test workload
execution before rolling the change out cluster-wide.
More detailed information about a possible upgrade and downgrade strategy can be
found in the
related Kubernetes Enhancement Proposal (KEP)
(https://github.com/kubernetes/enhancements/tree/a70cc18/keps/sig-node/2413-seccomp-by-
default#upgrade--downgrade-strategy).
Since the feature is in alpha state it is disabled by default. To enable it, pass the flags
--feature-gates=SeccompDefault=true --seccomp-default to the kubelet CLI or enable it via the kubelet
configuration file (/docs/tasks/administer-cluster/kubelet-config-file/). To enable the feature gate in
kind (https://kind.sigs.k8s.io), ensure that kind provides the minimum required Kubernetes version
and enables the SeccompDefault feature in the kind configuration
(https://kind.sigs.k8s.io/docs/user/quick-start/#enable-feature-gates-in-your-cluster):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
SeccompDefault: true
pods/security/seccomp/profiles/audit.json (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/security/seccomp/profiles/audit.json)
"defaultAction": "SCMP_ACT_LOG"
pods/security/seccomp/kind.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/security/seccomp/kind.yaml)
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
extraMounts:
- hostPath: "./profiles"
containerPath: "/var/lib/kubelet/seccomp/profiles"
Download the example above, and save it to a file named kind.yaml . Then create
the cluster with
the configuration.
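For example:
kind create cluster --config=kind.yaml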
Once the cluster is ready, identify the container running as the single node
cluster:
docker ps
You should see output indicating that a container is running with name
kind-control-plane .
If you observe the filesystem of that container, you should see that the profiles/ directory has been
successfully loaded into the default seccomp path of the kubelet. Use docker exec to run a
command in the Pod:
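A sketch of that check, assuming the node container is named kind-control-plane as above:
docker exec -it kind-control-plane ls /var/lib/kubelet/seccomp/profiles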
pods/security/seccomp/ga/audit-pod.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/security/seccomp/ga/audit-pod.yaml)
apiVersion: v1
kind: Pod
metadata:
name: audit-pod
labels:
app: audit-pod
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/audit.json
containers:
- name: test-container
image: hashicorp/http-echo:0.2.3
args:
securityContext:
allowPrivilegeEscalation: false
This profile does not restrict any syscalls, so the Pod should start
successfully.
Check what port the Service has been assigned on the node.
Now you can curl the endpoint from inside the kind control plane container at
the port exposed by
this Service. Use docker exec to run a command in the Pod:
You can see that the process is running, but what syscalls did it actually make?
Because this Pod is
running in a local cluster, you should be able to see those
in /var/log/syslog . Open up a new
terminal window and tail the output for
calls from http-echo :
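A sketch of that command, assuming syslog is readable on your host:
tail -f /var/log/syslog | grep 'http-echo'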
You should already see some logs of syscalls made by http-echo , and if you
curl the endpoint in
the control plane container you will see more written.
You can begin to understand the syscalls required by the http-echo process by
looking at the
syscall= entry on each line. While these are unlikely to
encompass all syscalls it uses, it can serve as
a basis for a seccomp profile
for this container.
Clean up that Pod and Service before moving to the next section:
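A sketch of the cleanup, assuming the Service was exposed under the same audit-pod name:
kubectl delete service audit-pod --wait
kubectl delete pod audit-pod --wait --now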
pods/security/seccomp/ga/violation-pod.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/security/seccomp/ga/violation-pod.yaml)
apiVersion: v1
kind: Pod
metadata:
name: violation-pod
labels:
app: violation-pod
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/violation.json
containers:
- name: test-container
image: hashicorp/http-echo:0.2.3
args:
securityContext:
allowPrivilegeEscalation: false
If you check the status of the Pod, you should see that it failed to start.
As seen in the previous example, the http-echo process requires quite a few
syscalls. Here seccomp
has been instructed to error on any syscall by setting
"defaultAction": "SCMP_ACT_ERRNO" . This is
extremely secure, but removes the
ability to do anything meaningful. What you really want is to give
workloads
only the privileges they need.
Clean up that Pod and Service before moving to the next section:
pods/security/seccomp/ga/fine-pod.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/security/seccomp/ga/fine-pod.yaml)
apiVersion: v1
kind: Pod
metadata:
name: fine-pod
labels:
app: fine-pod
spec:
securityContext:
seccompProfile:
type: Localhost
localhostProfile: profiles/fine-grained.json
containers:
- name: test-container
image: hashicorp/http-echo:0.2.3
args:
securityContext:
allowPrivilegeEscalation: false
Open up a new terminal window and tail the output for calls from http-echo :
Check what port the Service has been assigned on the node:
curl the endpoint from inside the kind control plane container:
You should see no output in the syslog because the profile allowed all
necessary syscalls and
specified that an error should occur if one outside of
the list is invoked. This is an ideal situation
from a security perspective, but
required some effort in analyzing the program. It would be nice if
there was a
simple way to get closer to this security without requiring as much effort.
Clean up that Pod and Service before moving to the next section:
pods/security/seccomp/ga/default-pod.yaml (https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/pods/security/seccomp/ga/default-pod.yaml)
apiVersion: v1
kind: Pod
metadata:
name: audit-pod
labels:
app: audit-pod
spec:
securityContext:
seccompProfile:
type: RuntimeDefault
containers:
- name: test-container
image: hashicorp/http-echo:0.2.3
args:
securityContext:
allowPrivilegeEscalation: false
The default seccomp profile should provide adequate access for most workloads.
What's next
Additional resources:
7 - Services
7.1 - Using Source IP
Applications running in a Kubernetes cluster find and communicate with each
other, and the outside
world, through the Service abstraction. This document
explains what happens to the source IP of
packets sent to different types
of Services, and how you can toggle this behavior according to your
needs.
Prerequisites
You need to have a Kubernetes cluster, and the kubectl command-line tool must
be configured to
communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two
nodes that are not acting as control plane hosts. If you do not already have a
cluster, you can create
one by using
minikube (https://minikube.sigs.k8s.io/docs/tutorials/multi_node/)
or you can use one
of these Kubernetes playgrounds:
Katacoda (https://www.katacoda.com/courses/kubernetes/playground)
Play with Kubernetes (http://labs.play-with-k8s.com/)
The examples use a small nginx webserver that echoes back the source
IP of requests it receives
through an HTTP header. You can create it as follows:
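A sketch of that command, assuming the echoserver sample image:
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4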
deployment.apps/source-ip-app created
Objectives
Expose a simple application through various types of Services
Understand how each Service type handles source IP NAT
Understand the tradeoffs involved in preserving source IP
Packets sent to ClusterIP from within the cluster are never source NAT'd if
you're running kube-
proxy in
iptables mode (/docs/concepts/services-networking/service/#proxy-mode-iptables),
(the
default). You can query the kube-proxy mode by fetching
http://localhost:10249/proxyMode on the
node where kube-proxy is running.
Get the proxy mode on one of the nodes (kube-proxy listens on port 10249):
curl http://localhost:10249/proxyMode
iptables
You can test source IP preservation by creating a Service over the source IP app:
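A sketch of that command:
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080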
service/clusterip exposed
Waiting for pod default/busybox to be running, status is Pending, pod ready: false
ip addr
# Replace "10.0.170.92" with the IPv4 address of the Service named "clusterip"
CLIENT VALUES:
client_address=10.244.3.8
command=GET
...
The client_address is always the client pod's IP address, whether the client pod and server pod are
in the same node or in different nodes.
service/nodeport exposed
client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3
Note that these are not the correct client IPs, they're cluster internal IPs. This is what happens:
Client sends packet to node2:nodePort
node2 replaces the source IP address (SNAT) in the packet with its own IP address
Visually:
graph LR;
  client(client)-->node2[Node 2];
  node2-->client;
  node2-. SNAT .->node1[Node 1];
  node1-. SNAT .->node2;
  node1-->endpoint(Endpoint);
  classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
  classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
  class node1,node2,endpoint k8s;
  class client plain;
service/nodeport patched
client_address=198.51.100.79
Note that you only got one reply, with the right client IP, from the one node on which the endpoint
pod
is running.
This is what happens:
client sends packet to node2:nodePort , which doesn't have any endpoints
packet is dropped
client sends packet to node1:nodePort , which does have endpoints
node1 routes packet to endpoint with the correct source IP
Visually:
graph TD;
  client --> node1[Node 1];
  client(client) --x node2[Node 2];
  node1 --> endpoint(endpoint);
  endpoint --> node1;
  classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000;
  classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff;
  class node1,node2,endpoint k8s;
  class client plain;
service/loadbalancer exposed
curl 203.0.113.140
CLIENT VALUES:
client_address=10.240.0.5
...
(Figure: the load balancer's health check against the Service returns 200 for Node 1 and 500 for Node 2.)
healthCheckNodePort: 32122
curl localhost:32122/healthz
curl localhost:32122/healthz
curl 203.0.113.140
CLIENT VALUES:
client_address=198.51.100.79
...
Cross-platform support
Only some cloud providers offer support for source IP preservation through
Services with
Type=LoadBalancer .
The cloud provider you're running on might fulfill the request for a loadbalancer
in a few different ways:
1. With a proxy that terminates the client connection and opens a new connection
to your
nodes/endpoints. In such cases the source IP will always be that of the
cloud LB, not that of the
client.
2. With a packet forwarder, such that requests from the client sent to the
loadbalancer VIP end
up at the node with the source IP of the client, not
an intermediate proxy.
Cleaning up
Delete the Services:
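A sketch of the cleanup, assuming the Services and Deployment carry the app=source-ip-app label:
kubectl delete svc -l app=source-ip-app
kubectl delete deployment source-ip-app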
What's next
Learn more about connecting applications via services (/docs/concepts/services-
networking/connect-applications-service/)
Read how to Create an External Load Balancer (/docs/tasks/access-application-cluster/create-
external-load-balancer/)