
Deployments

When you use kubectl run, you are deploying an object called a
Deployment. Like other objects, a Deployment can also be created using a
YAML or JSON file, known as spec files.

If you want to modify the configuration of your objects, such as a pod, you
can use kubectl apply with a spec file or kubectl edit.
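For example, assuming a spec file called meu-pod.yaml describing a pod named meu-pod (hypothetical names, just to illustrate), the two approaches look like this:

# kubectl apply -f meu-pod.yaml
# kubectl edit pod meu-pod

kubectl apply submits the (edited) file to the API server, while kubectl edit opens the live object in your editor and applies the changes when you save.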

Previous versions of ReplicaSets are retained, enabling rollback in case of failures.
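For example, assuming our Deployment is called primeiro-deployment (we create it below), rolling back to the previous revision is a sketch like this:

# kubectl rollout history deployment primeiro-deployment
# kubectl rollout undo deployment primeiro-deployment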

Labels are crucial for cluster management: they let you search for and select resources, organizing your pods and other cluster resources into smaller categories. Labels are not API resources on their own; they are stored as key-value pairs in each object's metadata.
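As a quick illustration (hypothetical pod name and label), you can attach a label to a running pod and then select by it:

# kubectl label pod meu-pod ambiente=producao
# kubectl get pods -l ambiente=producao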

Previously, we had only the Replication Controller (RC), which controlled the number of replicas a pod was running. The issue was that this management was handled on the client side. To address this, the Deployment object was introduced, enabling server-side updates. Deployments generate ReplicaSets, which offer better options than the Replication Controller, which is now being phased out.
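You can see this relationship right after creating a Deployment: the generated ReplicaSet carries the Deployment's name plus a template hash suffix:

# kubectl get replicasets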

Let’s create our first Deployments:

# vim primeiro-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
    app: giropops
  name: primeiro-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
        dc: UK
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx2
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

# kubectl create -f primeiro-deployment.yaml


deployment.extensions/primeiro-deployment created

# vim segundo-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: segundo-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
        dc: Netherlands
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx2
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

# kubectl create -f segundo-deployment.yaml


deployment.extensions/segundo-deployment created

# kubectl get deployment


NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
primeiro-deployment 1 1 1 1 6m
segundo-deployment 1 1 1 1 1m

# kubectl get pods


NAME READY STATUS RESTARTS AGE
primeiro-deployment-68c9dbf8b8-kjqpt 1/1 Running 0 19s
segundo-deployment-59db86c584-cf9pp 1/1 Running 0 15s

# kubectl describe pod primeiro-deployment-68c9dbf8b8-kjqpt


Name: primeiro-deployment-68c9dbf8b8-kjqpt
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: elliot-02/10.138.0.3
Start Time: Sat, 04 Aug 2018 00:45:29 +0000
Labels: dc=UK
pod-template-hash=2475869464
run=nginx
Annotations: <none>
Status: Running
IP: 10.46.0.1
Controlled By: ReplicaSet/primeiro-deployment-68c9dbf8b8
Containers:
nginx2:
Container ID: docker://963ec997a0aa4aa3cecabdb3c59f67d80e7010c51eac23735524899f7f2dd4f9
Image: nginx
Image ID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 04 Aug 2018 00:45:36 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-np77m (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-np77m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-np77m
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 51s default-scheduler Successfully assigned default/primeiro-deployment-68c9dbf8b8-kjqpt to elliot-02
Normal Pulling 50s kubelet, elliot-02 pulling image "nginx"
Normal Pulled 44s kubelet, elliot-02 Successfully pulled image "nginx"
Normal Created 44s kubelet, elliot-02 Created container
Normal Started 44s kubelet, elliot-02 Started container

# kubectl describe pod segundo-deployment-59db86c584-cf9pp


Name: segundo-deployment-59db86c584-cf9pp
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: elliot-02/10.138.0.3
Start Time: Sat, 04 Aug 2018 00:45:49 +0000
Labels: dc=Netherlands
pod-template-hash=1586427140
run=nginx
Annotations: <none>
Status: Running
IP: 10.46.0.2
Controlled By: ReplicaSet/segundo-deployment-59db86c584
Containers:
nginx2:
Container ID: docker://a9e6b5463341e62eff9e45c8c0aace14195f35e41be088ca386949500a1f2bb0
Image: nginx
Image ID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 04 Aug 2018 00:45:51 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-np77m (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-np77m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-np77m
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m default-scheduler Successfully assigned default/segundo-deployment-59db86c584-cf9pp to elliot-02
Normal Pulling 2m kubelet, elliot-02 pulling image "nginx"
Normal Pulled 2m kubelet, elliot-02 Successfully pulled image "nginx"
Normal Created 2m kubelet, elliot-02 Created container
Normal Started 2m kubelet, elliot-02 Started container

# kubectl describe deployment primeiro-deployment


Name: primeiro-deployment
Namespace: default
CreationTimestamp: Sat, 04 Aug 2018 00:45:29 +0000
Labels: app=giropops
run=nginx
Annotations: deployment.kubernetes.io/revision=1
Selector: run=nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: dc=UK
run=nginx
Containers:
nginx2:
Image: nginx
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: primeiro-deployment-68c9dbf8b8 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m deployment-controller Scaled up replica set primeiro-deployment-68c9dbf8b8 to 1

Filtering by Labels

When we created our Deployments, we added the following labels:

labels:
  run: nginx
  dc: UK

labels:
  run: nginx
  dc: Netherlands

Labels are used for cluster organization. Let’s list our pods by searching for
these labels.

First, let’s search using the labels dc=UK and dc=Netherlands:

# kubectl get pods -l dc=UK


NAME READY STATUS RESTARTS AGE
primeiro-deployment-68c9dbf8b8-kjqpt 1/1 Running 0 3m
# kubectl get pods -l dc=Netherlands
NAME READY STATUS RESTARTS AGE
segundo-deployment-59db86c584-cf9pp 1/1 Running 0 4m

For a more customized output, we can list as follows:

# kubectl get pod -L dc


NAME READY STATUS RESTARTS AGE DC
primeiro-deployment-68c9... 1/1 Running 0 5m UK
segundo-deployment-59db... 1/1 Running 0 5m Netherlands

Node Selector

The nodeSelector is a way to schedule pods onto nodes that match certain labels, classifying our nodes by their characteristics. For example, our node elliot-02 has an SSD disk and is located in the UK data center, while elliot-03 has an HDD disk and is located in the Netherlands data center.

Now that we have this information, let’s create these labels on our nodes to
use the nodeSelector:

# kubectl label node elliot-02 disk=SSD


node/elliot-02 labeled
# kubectl label node elliot-02 dc=UK
node/elliot-02 labeled
# kubectl label node elliot-03 dc=Netherlands
node/elliot-03 labeled
# kubectl label nodes elliot-03 disk=hdd
node/elliot-03 labeled

Oops! We declared disk=hdd in lowercase. How do we fix this? By overwriting the label:

# kubectl label nodes elliot-03 disk=HDD --overwrite


node/elliot-03 labeled
To check the configured labels on each node:

# kubectl label nodes elliot-02 --list


dc=UK
disk=SSD
kubernetes.io/hostname=elliot-02
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux

# kubectl label nodes elliot-03 --list


beta.kubernetes.io/os=linux
dc=Netherlands
disk=HDD
kubernetes.io/hostname=elliot-03
beta.kubernetes.io/arch=amd64

Now, let’s redeploy, adding two new options to the YAML, and see the magic
happen. Our pod will be created on node elliot-02, which has the label
disk=SSD:

# vim terceiro-deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: terceiro-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
        dc: Netherlands
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx2
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      nodeSelector:
        disk: SSD

# kubectl create -f terceiro-deployment.yaml


deployment.extensions/terceiro-deployment created

# kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE IP NODE
primeiro-deployment-56d9... 1/1 Running 0 14m 172.17.0.4 elliot-03
segundo-deployment-869f... 1/1 Running 0 14m 172.17.0.5 elliot-03
terceiro-deployment-59cd... 1/1 Running 0 22s 172.17.0.6 elliot-02

Imagine the endless possibilities this provides! You could categorize based
on whether it’s production, high CPU or RAM usage, specific rack placement,
and more.
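For example (purely hypothetical labels), you could mark a node by environment and rack and pin workloads to it:

# kubectl label node elliot-02 ambiente=producao rack=r42

And in the pod template:

nodeSelector:
  ambiente: producao
  rack: r42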

Simple as flying, right?

kubectl Edit

Now, let’s use the kubectl edit command to modify our first Deployment
“live” with the pod still running:

# kubectl edit deployment primeiro-deployment

This opens an editor. Let's change the dc to Netherlands, imagining this Deployment now runs in the Netherlands data center. We need to change the label and add the nodeSelector:

spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        dc: Netherlands
        app: giropops
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx2
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        dc: Netherlands
...
deployment.extensions/primeiro-deployment edited

As we can see, we changed the dc label and modified the nodeSelector to run on the node with the label dc=Netherlands. Easy!

Let’s verify the result:

# kubectl get pods -l dc=Netherlands -o wide


NAME READY STATUS RESTARTS AGE NODE
primeiro-deployment-7... 1/1 Running 0 3m elliot-03
segundo-deployment-5... 1/1 Running 0 49m elliot-02
terceiro-deployment-5... 1/1 Running 0 14m elliot-02

The pod was indeed created on node elliot-03, as we previously labeled it.

ReplicaSet

The ReplicaSet ensures the requested number of pods and their required
resources for a Deployment. Once a Deployment is created, the ReplicaSet
controls the number of running pods. If a pod terminates, the ReplicaSet
detects it and requests a new pod to maintain the desired replica count.

Let’s create our first ReplicaSet:

# vim primeiro-replicaset.yaml

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: replica-set-primeiro
spec:
  replicas: 3
  template:
    metadata:
      labels:
        system: Giropops
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

# kubectl create -f primeiro-replicaset.yaml


replicaset.extensions/replica-set-primeiro created
# kubectl get replicaset
NAME DESIRED CURRENT READY AGE
replica-set-primeiro 3 3 2 5s

We can observe the running pods:

# kubectl get pods


NAME READY STATUS RESTARTS AGE
replica-set-primeiro-6drmt 1/1 Running 0 12s
replica-set-primeiro-7j59w 1/1 Running 0 12s
replica-set-primeiro-mg8q9 1/1 Running 0 12s

We have exactly three nginx pods running simultaneously.
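To see the ReplicaSet doing its job, delete one of the pods and list them again; a replacement pod is created almost immediately (pod names will differ in your cluster):

# kubectl delete pod replica-set-primeiro-6drmt
# kubectl get pods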

We can get more details about our ReplicaSet using the describe command:

# kubectl describe rs replica-set-primeiro


Name: replica-set-primeiro
Namespace: default
Selector: system=Giropops
Labels: system=Giropops
Annotations: <none>
Replicas: 3 current / 3 desired
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: system=Giropops
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 12s replicaset-controller Created pod: replica-set-primeiro-6drmt
Normal SuccessfulCreate 12s replicaset-controller Created pod: replica-set-primeiro-7j59w
Normal SuccessfulCreate 12s replicaset-controller Created pod: replica-set-primeiro-mg8q9

DaemonSet

A DaemonSet ensures that a copy of a pod runs on every node in the cluster. This is useful for tasks like log collection or monitoring.

Let’s create our first DaemonSet:

# vim primeiro-daemonset.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: daemon-set-primeiro
spec:
  template:
    metadata:
      labels:
        system: DaemonOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  updateStrategy:
    type: RollingUpdate

# kubectl create -f primeiro-daemonset.yaml


daemonset.extensions/daemon-set-primeiro created

Let’s verify if our DaemonSet started correctly:

# kubectl get daemonset


NAME DESIRED CURRENT READY AGE
daemon-set-primeiro 3 3 3 5m

# kubectl describe ds daemon-set-primeiro


Name: daemon-set-primeiro
Selector: system=DaemonOne
Node-Selector: <none>
Labels: system=DaemonOne
Annotations: <none>
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status: 3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: system=DaemonOne
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 5m daemonset-controller Created pod: daemon-set-primeiro-52k8k
Normal SuccessfulCreate 5m daemonset-controller Created pod: daemon-set-primeiro-6sln2
Normal SuccessfulCreate 5m daemonset-controller Created pod: daemon-set-primeiro-9v2w9

Let’s verify our newly added RollingUpdate configuration:

# kubectl get ds daemon-set-primeiro -o yaml | grep -A 2 strategy


  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1

Now, with our DaemonSet configured, let’s change the nginx image and see
what happens:

# kubectl set image ds daemon-set-primeiro nginx=nginx:1.15.0


daemonset.extensions/daemon-set-primeiro image updated

Let’s list the DaemonSet and pods to ensure nothing broke:

# kubectl get daemonset


NAME DESIRED CURRENT READY AGE
daemon-set-primeiro 3 3 3 7m

# kubectl get pods


NAME READY STATUS RESTARTS AGE
daemon-set-primeiro-j788v 1/1 Running 0 1m
daemon-set-primeiro-7mpwr 1/1 Running 0 1m
daemon-set-primeiro-v5m47 1/1 Running 0 1m

# kubectl describe pod daemon-set-primeiro-j788v | grep -i image:


Image: nginx:1.15.0

The RollingUpdate terminated the old pods and recreated them with the new
image we set using kubectl set.
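If you want the rollout to replace more pods at a time, you can raise maxUnavailable; a sketch with an illustrative value:

# kubectl patch ds daemon-set-primeiro -p '{"spec":{"updateStrategy":{"rollingUpdate":{"maxUnavailable":2}}}}'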

Let’s check our modification history:

# kubectl rollout history ds daemon-set-primeiro


daemonsets "daemon-set-primeiro"
REVISION CHANGE-CAUSE
1 <none>
2 <none>

Let’s detail each revision:

# kubectl rollout history ds daemon-set-primeiro --revision=1


daemonsets "daemon-set-primeiro" with revision #1
Pod Template:
Labels: system=DaemonOne
Containers:
nginx:
Image: nginx:1.7.9
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>

# kubectl rollout history ds daemon-set-primeiro --revision=2


daemonsets "daemon-set-primeiro" with revision #2
Pod Template:
Labels: system=DaemonOne
Containers:
nginx:
Image: nginx:1.15.0
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>

Now, let’s roll back our DaemonSet to revision 1:

# kubectl rollout undo ds daemon-set-primeiro --to-revision=1


daemonset.extensions/daemon-set-primeiro rolled back

# kubectl get pods


NAME READY STATUS RESTARTS AGE
daemon-set-primeiro-c2jjk 1/1 Running 0 19s
daemon-set-primeiro-hrn48 1/1 Running 0 19s
daemon-set-primeiro-t6mr9 1/1 Running 0 19s

# kubectl describe pod daemon-set-primeiro-c2jjk | grep -i image:


Image: nginx:1.7.9

Awesome, right?

If something goes wrong, just revert to the other configuration:

# kubectl rollout undo ds daemon-set-primeiro --to-revision=2


daemonset.extensions/daemon-set-primeiro rolled back

# kubectl rollout status ds daemon-set-primeiro


daemon set "daemon-set-primeiro" successfully rolled out

# kubectl get pods


NAME READY STATUS RESTARTS AGE
daemon-set-primeiro-jzck9 1/1 Running 0 32s
daemon-set-primeiro-td7h5 1/1 Running 0 29s
daemon-set-primeiro-v5c86 1/1 Running 0 40s

# kubectl describe pod daemon-set-primeiro-jzck9 | grep -i image:


Image: nginx:1.15.0

Now, let’s delete our DaemonSet:

# kubectl delete ds daemon-set-primeiro


daemonset.extensions "daemon-set-primeiro" deleted

Volumes

EmptyDir

An EmptyDir volume is created whenever a pod is assigned to a node. The volume starts empty, and all containers in the pod can read and write files in it. It is not persistent: when a pod is removed from a node, the data in the EmptyDir is permanently deleted. Note, however, that the data is not deleted in case of container crashes; it survives container restarts.
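Because every container in the pod sees the same directory, an EmptyDir also works as a scratch area for containers to exchange data. A minimal sketch, with hypothetical names, where one container writes a file the other can read:

apiVersion: v1
kind: Pod
metadata:
  name: compartilhado
spec:
  containers:
  - name: escritor
    image: busybox
    command: ["sh", "-c", "date > /dados/quando && sleep 3600"]
    volumeMounts:
    - mountPath: /dados
      name: cache
  - name: leitor
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /dados
      name: cache
  volumes:
  - name: cache
    emptyDir: {}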

Let’s create a pod to test this volume:

# vim pod-emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    name: busy
    command:
    - sleep
    - "3600"
    volumeMounts:
    - mountPath: /giropops
      name: giropops-dir
  volumes:
  - name: giropops-dir
    emptyDir: {}

# kubectl create -f pod-emptydir.yaml


pod/busybox created

# kubectl get pod


NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 12s

Our pod is up. Now, let’s add a file to the /giropops path directly in the
created pod:

# kubectl exec -ti busybox -c busy -- touch /giropops/funciona

Now, let’s list this directory:

# kubectl exec -ti busybox -c busy -- ls -l /giropops/


total 0
-rw-r--r-- 1 root root 0 Jul 7 17:37 funciona

As we can see, our file was created correctly. Let’s check if this file was also
created in the volume managed by the kubelet. First, we need to find out
which node the pod is allocated to:

# kubectl get pod -o wide


NAME READY STATUS RESTARTS AGE IP NODE
busybox 1/1 Running 0 1m 10.40.0.6 elliot-02

Now, let’s search for our volume on node elliot-02:


# find /var/lib/kubelet/pods/ -iname giropops-dir
/var/lib/kubelet/pods/7d33810f-8215-11e8-b889-42010a8a0002/volumes/kubernetes.io~empty-dir/giropops-dir

Let’s list this path:

# ls /var/lib/kubelet/pods/7d...kubernetes.io~empty-dir/giropops-dir
funciona

The file we created inside the container is listed.

Now, let’s delete the pod and list the directory again:

# kubectl delete -f pod-emptydir.yaml


pod "busybox" deleted

# ls /var/lib/kubelet/pods/7d...kubernetes.io~empty-dir/giropops-dir
No such file or directory

As expected, the directory cannot be found, because EmptyDir volumes do not persist data.

Persistent Volume

The PersistentVolume subsystem provides an API for users and administrators that abstracts the details of how storage is provided and consumed by pods. Two API resources were introduced for better control: PersistentVolume and PersistentVolumeClaim.

A PersistentVolume (PV) is a cluster resource, similar to a node, but representing storage: a piece of storage in the cluster provisioned by an administrator. PVs have a lifecycle independent of any pod that uses them. This API supports storage types such as NFS, iSCSI, and cloud provider-specific storage.

A PersistentVolumeClaim (PVC) is analogous to a pod: pods consume node resources, and PVCs consume PV resources. A PVC is a storage request created by a user.

Let’s create a PersistentVolume of type NFS. First, we’ll install the necessary
packages to set up an NFS server on Linux.

Install the packages on node elliot-01:

For Debian-based systems:

# apt-get install nfs-kernel-server -y

For RedHat-based systems:

# sudo yum install nfs-utils -y

Now, install the nfs-common package on the other nodes for Debian-based
systems:

# apt-get install -y nfs-common

Now, let’s create a directory and set the necessary permissions to test
everything:

# mkdir /opt/giropops
# chmod 1777 /opt/giropops/

Add this directory to the NFS server and activate it:

# vim /etc/exports
/opt/giropops *(rw,sync,no_root_squash,subtree_check)

# exportfs -ra
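To confirm the directory is being exported (assuming elliot-01's IP is 10.138.0.2, the same address we use in the PV below), you can query the server with showmount, which ships with the NFS client utilities:

# showmount -e 10.138.0.2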

Create a file in this directory for our test:

# touch /opt/giropops/FUNCIONA
Now, let’s create the YAML manifest for our PersistentVolume. Make sure to
update the server field’s IP address to the IP of node elliot-01:

# vim primeiro-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: primeiro-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/giropops
    server: 10.138.0.2
    readOnly: false

Create the PersistentVolume:

# kubectl create -f primeiro-pv.yaml


persistentvolume/primeiro-pv created

# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY AGE
primeiro-pv 1Gi RWX Retain 22s

# kubectl describe pv primeiro-pv


Name: primeiro-pv
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pv-protection]
StorageClass:
Status: Available
Claim:
Reclaim Policy: Retain
Access Modes: RWX
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 10.138.0.2
Path: /opt/giropops
ReadOnly: false
Events: <none>

Now, we need to create a PersistentVolumeClaim so pods can request read and write access to our PersistentVolume:

# vim primeiro-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: primeiro-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 400Mi

# kubectl create -f primeiro-pvc.yaml


persistentvolumeclaim/primeiro-pvc created

List our PersistentVolume and PersistentVolumeClaim:

# kubectl get pv
NAME CAPACITY ACCESS MODES CLAIM AGE
primeiro-pv 1Gi RWX default/primeiro-pvc 8m

# kubectl get pvc


NAME STATUS VOLUME CAPACITY ACCESS MODES AGE
primeiro-pvc Bound primeiro-pv 1Gi RWX 3m
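Note that the claim requested only 400Mi but shows a capacity of 1Gi: a PVC binds to a PV that satisfies at least the requested size and access modes, and it reports the capacity of the volume it bound to.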

Now that we have a PersistentVolume and a PersistentVolumeClaim, let's create a Deployment that consumes this volume:

# vim nfs-pv.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - name: nfs-pv
          mountPath: /giropops
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      volumes:
      - name: nfs-pv
        persistentVolumeClaim:
          claimName: primeiro-pvc
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

# kubectl create -f nfs-pv.yaml


deployment.extensions/nginx created

# kubectl describe deployment nginx


Name: nginx
Namespace: default
CreationTimestamp: ...
Labels: run=nginx
Annotations: ...
Selector: run=nginx
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: run=nginx
Containers:
nginx:
Image: nginx
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/giropops from nfs-pv (rw)
Volumes:
nfs-pv:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: primeiro-pvc
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-b4bd77674 (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7s deployment-controller Scaled up replica set nginx-b4bd77674 to 1

As we can see, our pod was created with the NFS volume using the
ClaimName primeiro-pvc.

Let’s detail our pod to verify everything is correct:

# kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE NODE
nginx-b4bd77674-gwc9k 1/1 Running 0 28s elliot-02

# kubectl describe pod nginx-b4bd77674-gwc9k


Name: nginx-b4bd77674-gwc9k
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: elliot-02/10.138.0.3
...
Mounts:
/giropops from nfs-pv (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-np77m (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nfs-pv:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: primeiro-pvc
ReadOnly: false
default-token-np77m:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-np77m
Optional: false

Everything looks good—the pod mounted the /giropops volume using the
NFS volume.

Now, let’s list the files in the path inside the container, which is located on
node elliot-02:

# kubectl exec -ti nginx-b4bd77674-gwc9k -- ls /giropops/


FUNCIONA

We can see the file we listed earlier in the directory.

Now, let's create another file from inside the container itself, using kubectl exec with touch:

# kubectl exec -ti nginx-b4bd77674-gwc9k -- touch /giropops/STRIGUS

# kubectl exec -ti nginx-b4bd77674-gwc9k -- ls -la /giropops/


total 4
drwxr-xr-x. 2 root root 4096 Jul 7 23:13 .
drwxr-xr-x. 1 root root 44 Jul 7 22:53 ..
-rw-r--r--. 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r--. 1 root root 0 Jul 7 23:13 STRIGUS

Listing inside the container, we see the file was created. But what about on
our NFS server? Let’s list the directory on the NFS server on elliot-01:
# ls -la /opt/giropops/
-rw-r--r-- 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r-- 1 root root 0 Jul 7 23:13 STRIGUS

Our NFS server is working correctly!

Now, let’s delete the deployment to see what happens to the volume:

# kubectl get deployment


NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 28m

# kubectl delete deployment nginx


deployment.extensions "nginx" deleted

Now, let’s list the directory on the NFS server:

# ls -la /opt/giropops/
-rw-r--r-- 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r-- 1 root root 0 Jul 7 23:13 STRIGUS

As expected, the files remain and were not deleted with the deployment’s
deletion. This is one of many ways to maintain persistent files for pods to
consume.
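When you no longer need them, you can also remove the claim and the volume. Since our reclaim policy is Retain, the PV is only released (not scrubbed), and the files stay on the NFS server:

# kubectl delete pvc primeiro-pvc
# kubectl delete pv primeiro-pv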
