08 Managing State.v1
Table of contents
1. Managing state
Storing configuration
Storing secrets securely
Persisting changes with Volumes
The emptyDir Volume
Volume drivers in Kubernetes
Using Secrets and ConfigMaps as Volumes
Mounting local folders as Volumes
Using Volumes from a cloud provider
Abstracting volumes for portability with Persistent Volumes
Persistent Volume access modes
Using Persistent Volumes with Persistent Volume Claims
Using Persistent Volumes as Volumes in your Pods
Dynamic provisioning of Persistent Volumes
Using local volumes as an alternative for hostPath
2. Deploying a stateless MySQL
3. Persisting changes
Creating a Persistent Volume
Claiming a Persistent Volume
Using volumes in Deployments
4. Dynamic provisioning
Creating volumes with a Storage Class
Testing the Deployment
5. Lab
Extracting configs
Chapter 1
Managing state
Fig. 2 You might have an existing database already that lives outside the cluster.
Fig. 3 You could use the connection string to connect to the database.
Storing configuration
ConfigMaps are objects that contain key-value pairs.
configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  database_url: postgresql://my-postgres:5432
Once you have decided which values you wish to store, you
can use the ConfigMap in your Pod.
It's common practice to use the values stored in the
ConfigMap as environment variables inside the Pod.
You could inject the URL of the database as a
DATABASE_URL environment variable in the Pod.
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: my-config
              key: database_url
Storing secrets securely
Secrets are similar to ConfigMaps, but they are designed to hold sensitive values such as passwords.
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: bGVhcm5rOHMK
  password: cHJlY2lvdSQK
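The values under data are base64-encoded, not encrypted. As a quick sketch of how the values above could have been generated (note that echo appends a trailing newline, which becomes part of the encoded value):
bash
$ echo 'learnk8s' | base64
bGVhcm5rOHMK
$ echo 'preciou$' | base64
cHJlY2lvdSQK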
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      env:
        - name: DATABASE_USERNAME
          valueFrom:
            secretKeyRef:
              name: my-secret
              key: username
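If you want to verify that the value reached the container, you could inspect its environment. A sketch, assuming the Pod is named test-pod as above:
bash
$ kubectl exec test-pod -- env | grep DATABASE_USERNAME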
Persisting changes with Volumes
With Secrets and ConfigMaps you can store all the details that you need to connect your Pods to a database securely.
But what if you want to host the database in Kubernetes
rather than connecting to an external one?
You know how to store the details in ConfigMaps and
Secrets.
However, what you need is a way to persist the data on disk for a database running as a Pod.
Fig. 2 When you wish to persist data into a Pod, you can use a Volume.
Fig. 3 Even if one of the containers inside the Pod restarts, the data stored in the Volume is preserved.
The emptyDir Volume
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}
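To see emptyDir in action, you could write a file into the mounted path and read it back. A sketch, assuming the Pod is named test-pod as above:
bash
$ kubectl exec test-pod -- sh -c 'echo hello > /cache/hello.txt'
$ kubectl exec test-pod -- cat /cache/hello.txt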
emptyDir isn't the only Volume you can use, but it's the default.
Volume drivers in Kubernetes
Kubernetes comes with several Volume drivers, including:
nfs
persistentVolumeClaim
projected
portworxVolume
quobyte
rbd
scaleIO
secret
storageos
vsphereVolume
Exploring each Volume is beyond the scope of this section.
Instead, you will focus on a few important Volumes and
drivers:
emptyDir
configMap and secret
hostPath
persistentVolumeClaim
local
flexVolumes and CSI
Using Secrets and ConfigMaps as Volumes
You already learned about the emptyDir Volume and how it's the default Volume in a Pod.
You also just learned about ConfigMaps and Secrets and how you can use their values to inject environment variables in your Pods.
You can also mount ConfigMaps and Secrets as files inside your Pods.
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: hello-world-config
data:
  application.properties: |
    greeting=Hello
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: spring-boot
      volumeMounts:
        - name: config
          mountPath: /config
  volumes:
    - name: config
      configMap:
        name: hello-world-config
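Every key in the ConfigMap becomes a file in the mounted directory. A sketch to verify it, assuming the Pod is named app as above:
bash
$ kubectl exec app -- cat /config/application.properties
greeting=Hello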
Using Volumes from a cloud provider
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-ebs
spec:
  containers:
    - image: k8s.gcr.io/test-webserver
      name: test-container
      volumeMounts:
        - mountPath: /test-ebs
          name: test-volume
  volumes:
    - name: test-volume
      # This AWS EBS volume must already exist.
      awsElasticBlockStore:
        volumeID: <volume-id>
        fsType: ext4
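The manifest assumes that the EBS volume already exists. As a sketch, you could create one with the AWS CLI (the availability zone is an example and should match your Nodes):
bash
$ aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2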
Abstracting volumes for portability with Persistent Volumes
Please note that disks such as Elastic Block Store and Azure File are not part of the cluster and are provisioned manually.
For each storage backend you wish to use, you should create an equivalent Persistent Volume in Kubernetes that defines the specs of that volume.
pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-867g5kii
    fsType: ext4
Using Persistent Volumes with Persistent Volume Claims
pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
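Once both objects exist, Kubernetes binds the claim to a matching Persistent Volume. You can inspect the binding with:
bash
$ kubectl get pv,pvc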
Using Persistent Volumes as Volumes in your Pods
You claimed the Volume, but you haven't used it in your Pod yet.
pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: myfrontend
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: task-pv
Fig. 2 The Persistent Volume Claim is bound to a Persistent Volume. When the Pod requests the storage, the volume is mounted to the Node first.
Fig. 3 After the Volume is mounted into the Node, it can finally be mounted inside the Pod as well.
Dynamic provisioning of Persistent Volumes
The idea is smart: instead of having to create Persistent Volumes manually ahead of time, you let the cluster provision them on demand when a claim asks for storage.
pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
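The claim references a Storage Class named slow, which has to exist in the cluster. A minimal sketch of what it could look like, assuming AWS EBS as the provisioner (the file name and parameters are illustrative):
storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/aws-ebs
parameters:
  type: sc1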
Fig. 2 Once the primary is booted, you can start more secondary instances that replicate from it through the replication mechanism.
Now that you have a plan, let's see what Kubernetes has to offer.
Each instance of the database could be wrapped into a Pod.
Since there are three Pods, this suggests that you could create
a Deployment with three replicas.
Pods have to talk to each other to form a cluster.
So perhaps you could create a Service that points to the same label as the Deployment.
Finally, you need to provision Persistent Volumes and Persistent Volume Claims to make sure that the data is stored safely on disk.
You could use the local volume for that.
The final bill of materials is the following:
A Deployment with three replicas for the three instances: one primary and two secondary instances of PostgreSQL
A Service to connect the Pods
Three Persistent Volumes with a local Volume and three Persistent Volume Claims to store the data
Fig. 1 You could run the primary and the secondaries as Pods.
Fig. 2 You could use a Deployment to manage the Pods.
That doesn't play well with the database because you don't know which Pod ends up being the primary.
Fig. 3 The Service doesn't know which Pod is the primary and routes the traffic to any Pod it can find.
Ideally:
one of the two secondaries becomes the primary
the StatefulSet recreates the Pod
the Pod joins as a secondary and starts replicating from the primary
If the order of actions above is respected, the database can recover gracefully.
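A StatefulSet gives each Pod a stable identity and its own storage, which is what the recovery sequence above relies on. A minimal sketch of the database as a StatefulSet (the image, labels and sizes are illustrative assumptions):
statefulset.yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      name: postgres
  template:
    metadata:
      labels:
        name: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:9.6
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi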
Database operators
Speaking of operators and databases, have you ever seen a
Postgres kind?
postgres.yaml
apiVersion: kubedb.com/v1alpha1
kind: Postgres
metadata:
  name: p1
  namespace: demo
spec:
  version: 9.6.5
  replicas: 1
  doNotPause: true
dynamodb.yaml
apiVersion: service-operator.aws/v1alpha1
kind: DynamoDB
metadata:
  name: dynamo-table
spec:
  hashAttribute:
    name: name
    type: S
  rangeAttribute:
    name: created_at
    type: S
  readCapacityUnits: 5
  writeCapacityUnits: 5
Fig. 1 In OpenEBS you have Pods that mount the local filesystem on the Node as a hostPath Volume.
Fig. 2 hostPath has a number of drawbacks.
Since the Pods are used only to mount the filesystem and handle replication, they are referred to as storage replicas.
Those Pods are then exposed to Kubernetes as Persistent
Volumes through a StorageClass.
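From the consumer's point of view nothing changes: you request storage with a claim that references the OpenEBS Storage Class. A sketch, assuming a Storage Class named openebs-hostpath is installed in the cluster:
pvc-openebs.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: openebs-claim
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi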
Every volume comes with several components, including the containers that mount the storage and two controllers managing the replicas.
Running stateful workloads in Kubernetes isn't easy.
You should pay attention to how many components are involved and to the risk of one of them going down.
If you are risk-averse and wish to leverage proven solutions, using operators such as the AWS operator or the Service Catalog might be a better solution than rolling out your own clustered database or unified storage layer.
Chapter 2
Deploying a stateless MySQL
Prerequisites
You should have installed:
minikube
kubectl
MySQL Workbench
deployment-mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0.2
          ports:
            - containerPort: 3306
              protocol: TCP
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: password
            - name: MYSQL_DATABASE
              value: sample
            - name: MYSQL_USER
              value: mysql
            - name: MYSQL_PASSWORD
              value: mysql
bash
$ kubectl apply -f deployment-mysql.yaml
service-mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
      name: mysql
      targetPort: 3306
      nodePort: 31000
  selector:
    name: mysql
  type: NodePort
bash
$ kubectl apply -f service-mysql.yaml
secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: bXlzcWw=
  password: bXlzcWw=
  root: cGFzc3dvcmQ=
bash
$ kubectl apply -f secret.yaml
deployment-mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0.2
          ports:
            - containerPort: 3306
              protocol: TCP
          env:
            - name: "MYSQL_ROOT_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: root
            - name: "MYSQL_DATABASE"
              value: "sample"
            - name: "MYSQL_USER"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: "MYSQL_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
bash
$ kubectl apply -f deployment-mysql.yaml
bash
$ minikube ip
Credentials
Connect with MySQL Workbench to the minikube IP on port 31000, using username mysql and password mysql.
SQL
Chapter 3
Persisting changes
bash
bash
bash
Creating a Persistent Volume
pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv0001
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/somepath/data01"
bash
$ kubectl apply -f pv.yaml
bash
$ kubectl get pv
Claiming a Persistent Volume
pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  storageClassName: "" # Static provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
bash
$ kubectl apply -f pvc.yaml
bash
$ kubectl get pvc
Using volumes in Deployments
deployment-mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment
  labels:
    name: mysql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0.2
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysqlvolume
          ports:
            - containerPort: 3306
              protocol: TCP
          env:
            - name: "MYSQL_ROOT_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: root
            - name: "MYSQL_DATABASE"
              value: "sample"
            - name: "MYSQL_USER"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: "MYSQL_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
      volumes:
        - name: mysqlvolume
          persistentVolumeClaim:
            claimName: test-pvc
bash
$ kubectl apply -f deployment-mysql.yaml
SQL
bash
bash
Chapter 4
Dynamic provisioning
bash
Creating volumes with a Storage Class
pvc-standard.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc-2
spec:
  storageClassName: "standard"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
bash
$ kubectl apply -f pvc-standard.yaml
bash
$ kubectl get pvc
deployment-mysql2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-deployment2
  labels:
    name: mysql-deployment2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mysql2
  template:
    metadata:
      labels:
        name: mysql2
    spec:
      containers:
        - name: mysql
          image: mysql:8.0.2
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysqlvolume
          ports:
            - containerPort: 3306
              protocol: TCP
          env:
            - name: "MYSQL_ROOT_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: root
            - name: "MYSQL_DATABASE"
              value: "sample"
            - name: "MYSQL_USER"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: username
            - name: "MYSQL_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: mysecret
                  key: password
      volumes:
        - name: mysqlvolume
          persistentVolumeClaim:
            claimName: test-pvc-2
bash
$ kubectl apply -f deployment-mysql2.yaml
service2.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql2
spec:
  ports:
    - port: 3306
      name: mysql
      targetPort: 3306
      nodePort: 32000
  selector:
    name: mysql2
  type: NodePort
bash
$ kubectl apply -f service2.yaml
Testing the Deployment
Credentials
Connect with MySQL Workbench to the minikube IP on port 32000, using username mysql and password mysql.
SQL
Even if you delete the MySQL Pod, you should still be able
to retrieve the same table.
You can list the Pods with:
bash
$ kubectl get pods
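To test the persistence, you could delete the Pod by its label and let the Deployment recreate it; a sketch:
bash
$ kubectl delete pod -l name=mysql2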
Chapter 5
Lab
Extracting configs
The name of the database is stored in a Secret. But it isn't a secret as such; it's configuration.
You should store the value in a ConfigMap and refactor the
Deployment to use that ConfigMap.
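If you want to check your approach, a minimal sketch of the refactor (the file, object and key names are illustrative assumptions):
configmap-mysql.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  database: sample
In the Deployment, the MYSQL_DATABASE environment variable would then read from the ConfigMap instead:
deployment-mysql2.yaml (fragment)
- name: "MYSQL_DATABASE"
  valueFrom:
    configMapKeyRef:
      name: mysql-config
      key: database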