Day 3
When you use kubectl run, you are deploying an object called a
Deployment. Like other objects, a Deployment can also be created from a
YAML or JSON file, known as a spec file.
If you want to modify the configuration of your objects, such as a pod, you
can use kubectl apply with a spec file or kubectl edit.
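For example (the object and file names here are only illustrative):
# kubectl run meu-nginx --image=nginx
# kubectl apply -f meu-deployment.yaml
# kubectl edit deployment meu-nginx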
Labels are crucial for cluster management: they let you search for or
select resources in your cluster, helping you organize resources into
smaller categories and making it easier to find and manage your pods and
other cluster resources. Labels are not API server resources; they are
stored in the object's metadata as key-value pairs.
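You can also add or change a label on a running resource with kubectl label; the pod name and label below are just examples:
# kubectl label pod meu-pod dc=UK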
# vim primeiro-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
    app: giropops
  name: primeiro-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
        dc: UK
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx2
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
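To create the Deployment from this manifest and check it, a typical sequence would be:
# kubectl create -f primeiro-deployment.yaml
# kubectl get deployments
# kubectl get pods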
Filtering by Labels
labels:
  run: nginx
  dc: UK

labels:
  run: nginx
  dc: Netherlands
Labels are used for cluster organization. Let’s list our pods by searching for
these labels.
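For example, using the -l (label selector) flag:
# kubectl get pods -l dc=UK
# kubectl get pods -l dc=Netherlands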
Node Selector
The nodeSelector is a way to schedule pods onto specific nodes based on node
labels. For example, our node elliot-02 has an SSD disk and is located in the
UK Data Center, while elliot-03 has an HDD disk and is located in the
Netherlands Data Center.
Now that we have this information, let’s create these labels on our nodes to
use the nodeSelector:
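Something along these lines would apply the labels described above (node names and values follow the example):
# kubectl label node elliot-02 disk=SSD dc=UK
# kubectl label node elliot-03 disk=HDD dc=Netherlands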
Now, let’s redeploy, adding two new options to the YAML, and see the magic
happen. Our pod will be created on node elliot-02, which has the label
disk=SSD:
# vim terceiro-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: terceiro-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
        dc: Netherlands
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx2
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      nodeSelector:
        disk: SSD
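Let's apply it and confirm where the pod landed:
# kubectl create -f terceiro-deployment.yaml
# kubectl get pods -o wide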
Imagine the endless possibilities this provides! You could categorize based
on whether it’s production, high CPU or RAM usage, specific rack placement,
and more.
kubectl Edit
Now, let’s use the kubectl edit command to modify our first Deployment
“live” with the pod still running:
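# kubectl edit deployment primeiro-deployment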
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        dc: Netherlands
        app: giropops
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx2
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      nodeSelector:
        dc: Netherlands
...
deployment.extensions/primeiro-deployment edited
ReplicaSet
The ReplicaSet ensures that the requested number of pod replicas for a
Deployment is running. Once a Deployment is created, it is the ReplicaSet
that controls the number of running pods. If a pod terminates, the ReplicaSet
detects it and requests a new pod to maintain the desired replica count.
# vim primeiro-replicaset.yaml
apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: replica-set-primeiro
spec:
  replicas: 3
  template:
    metadata:
      labels:
        system: Giropops
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
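Now let's create the ReplicaSet and check its pods:
# kubectl create -f primeiro-replicaset.yaml
# kubectl get replicasets
# kubectl get pods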
We can get more details about our ReplicaSet using the describe command:
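# kubectl describe replicaset replica-set-primeiro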
DaemonSet
A DaemonSet ensures that a copy of a pod runs on every node of the cluster
(or on a selected set of nodes). Let's create our first DaemonSet:
# vim primeiro-daemonset.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: daemon-set-primeiro
spec:
  template:
    metadata:
      labels:
        system: DaemonOne
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  updateStrategy:
    type: RollingUpdate
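Let's create it and confirm that a pod is running on each node:
# kubectl create -f primeiro-daemonset.yaml
# kubectl get daemonsets
# kubectl get pods -o wide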
Now, with our DaemonSet configured, let’s change the nginx image and see
what happens:
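A command along these lines would do it; the target image tag (1.15.0) is only an example:
# kubectl set image daemonset daemon-set-primeiro nginx=nginx:1.15.0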
The RollingUpdate terminated the old pods and recreated them with the new
image we set using kubectl set.
Awesome, right?
Volumes
EmptyDir
An emptyDir volume is created when a pod is assigned to a node and starts out
empty; it lives as long as the pod runs on that node and is removed when the
pod is deleted. Let's create a pod that uses one:
# vim pod-emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    name: busy
    command:
      - sleep
      - "3600"
    volumeMounts:
    - mountPath: /giropops
      name: giropops-dir
  volumes:
  - name: giropops-dir
    emptyDir: {}
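Let's create the pod and make sure it's running:
# kubectl create -f pod-emptydir.yaml
# kubectl get pods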
Our pod is up. Now, let’s add a file to the /giropops path directly in the
created pod:
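For example, using kubectl exec with touch, then listing the directory:
# kubectl exec -ti busybox -- touch /giropops/funciona
# kubectl exec -ti busybox -- ls -la /giropops/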
As we can see, our file was created correctly. Let’s check if this file was also
created in the volume managed by the kubelet. First, we need to find out
which node the pod is allocated to:
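# kubectl get pods -o wide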
# ls /var/lib/kubelet/pods/7d...kubernetes.io~empty-dir/giropops-dir
funciona
Now, let’s delete the pod and list the directory again:
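# kubectl delete pod busybox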
# ls /var/lib/kubelet/pods/7d...kubernetes.io~empty-dir/giropops-dir
No such file or directory
Persistent Volume
Let’s create a PersistentVolume of type NFS. First, we’ll install the necessary
packages to set up an NFS server on Linux.
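On a Debian-based distribution, for example (the package name may differ on other distributions):
# apt-get install -y nfs-kernel-server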
Now, install the nfs-common package on the other nodes for Debian-based
systems:
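# apt-get install -y nfs-common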
Now, let’s create a directory and set the necessary permissions to test
everything:
# mkdir /opt/giropops
# chmod 1777 /opt/giropops/
# vim /etc/exports
/opt/giropops *(rw,sync,no_root_squash,subtree_check)
# exportfs -ra
# touch /opt/giropops/FUNCIONA
Now, let’s create the YAML manifest for our PersistentVolume. Make sure to
update the server field’s IP address to the IP of node elliot-01:
# vim primeiro-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: primeiro-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/giropops
    server: 10.138.0.2
    readOnly: false
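Now let's create it and list the PersistentVolumes:
# kubectl create -f primeiro-pv.yaml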
# kubectl get pv
NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   AGE
primeiro-pv   1Gi        RWX            Retain           22s
# vim primeiro-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: primeiro-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 400Mi
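Let's create the PVC and confirm it binds to our PV:
# kubectl create -f primeiro-pvc.yaml
# kubectl get pvc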
# kubectl get pv
NAME          CAPACITY   ACCESS MODES   CLAIM                  AGE
primeiro-pv   1Gi        RWX            default/primeiro-pvc   8m
# vim nfs-pv.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        volumeMounts:
        - name: nfs-pv
          mountPath: /giropops
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      volumes:
      - name: nfs-pv
        persistentVolumeClaim:
          claimName: primeiro-pvc
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
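Let's create the Deployment and inspect it to see the mounted volume:
# kubectl create -f nfs-pv.yaml
# kubectl get pods
# kubectl describe deployment nginx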
As we can see, our pod was created with the NFS volume, using the claimName
primeiro-pvc. Everything looks good: the pod mounted /giropops as an NFS
volume.
Now, let's list the files in that path inside the container, which is running
on node elliot-02:
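The pod name below is a placeholder; use the name shown by kubectl get pods:
# kubectl exec -ti nginx-<pod-id> -- ls -la /giropops/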
Now, let's create another file from inside the container, using the exec
command with touch:
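Again with a placeholder pod name:
# kubectl exec -ti nginx-<pod-id> -- touch /giropops/STRIGUS
# kubectl exec -ti nginx-<pod-id> -- ls -la /giropops/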
Listing inside the container, we see the file was created. But what about on
our NFS server? Let’s list the directory on the NFS server on elliot-01:
# ls -la /opt/giropops/
-rw-r--r-- 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r-- 1 root root 0 Jul 7 23:13 STRIGUS
Now, let’s delete the deployment to see what happens to the volume:
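# kubectl delete deployment nginx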
# ls -la /opt/giropops/
-rw-r--r-- 1 root root 0 Jul 7 22:07 FUNCIONA
-rw-r--r-- 1 root root 0 Jul 7 23:13 STRIGUS
As expected, the files remain and were not deleted with the deployment’s
deletion. This is one of many ways to maintain persistent files for pods to
consume.