Deployment
A Deployment manages a set of Pods that do not maintain state (i.e., stateless applications).
A Deployment manages updates for Pods and ReplicaSets automatically.
It ensures your application runs smoothly by keeping the desired number of Pods running.
You tell Kubernetes the desired state, and the system adjusts the current state to match it step by step.
It rolls out changes gradually without downtime and automatically replaces failed Pods.
Deployment - Use Case
Create a Deployment to rollout a ReplicaSet. The ReplicaSet creates Pods in the background. Check the
status of the rollout to see if it succeeds or not.
Declare the new state of the Pods by updating the PodTemplateSpec of the Deployment. A new
ReplicaSet is created and the Deployment manages moving the Pods from the old ReplicaSet to the
new one at a controlled rate. Each new ReplicaSet updates the revision of the Deployment.
Rollback to an earlier Deployment revision if the current state of the Deployment is not stable. Each
rollback updates the revision of the Deployment.
Scale up the Deployment to facilitate more load.
Pause the rollout of a Deployment to apply multiple fixes to its PodTemplateSpec and then resume it to
start a new rollout.
Use the status of the Deployment as an indicator that a rollout is stuck.
Clean up older ReplicaSets that you don't need anymore.
1- Creating a Deployment
$ kubectl apply -f nginx-deployment.yaml
$ kubectl get deployments
$ kubectl rollout status deployment/nginx-deployment
$ kubectl get rs
$ kubectl get pods --show-labels
Run $ kubectl get deployments to check if the Deployment was created.
NAME lists the names of the Deployments in the namespace.
READY displays how many replicas of the application are available to your users. It follows the pattern
ready/desired.
UP-TO-DATE displays the number of replicas that have been updated to achieve the desired state.
AVAILABLE displays how many replicas of the application are available to your users.
AGE displays the amount of time that the application has been running.
To see the ReplicaSet (rs) created by the Deployment, run $ kubectl get rs
NAME lists the names of the ReplicaSets in the namespace.
DESIRED displays the desired number of replicas of the application, which you define when you
create the Deployment. This is the desired state.
CURRENT displays how many replicas are currently running.
READY displays how many replicas of the application are available to your users.
AGE displays the amount of time that the application has been running.
Notice that the name of the ReplicaSet is always formatted as [DEPLOYMENT-NAME]-[HASH]. This
name will become the basis for the Pods which are created.
The HASH string is the same as the pod-template-hash label on the ReplicaSet.
Pod-template-hash label: Do not change this label.
The pod-template-hash label is added by the Deployment controller to every ReplicaSet that a
Deployment creates or adopts.
This label ensures that child ReplicaSets of a Deployment do not overlap.
Pod-template-hash is generated by hashing the PodTemplate of the ReplicaSet.
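To see this label on the ReplicaSet and its Pods, the --show-labels flag used earlier works here as well:
$ kubectl get rs --show-labels
$ kubectl get pods --show-labels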
2- Updating a Deployment
A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is,
.spec.template) is changed.
For example, if the labels or container images of the template are updated.
Other updates, such as scaling the Deployment, do not trigger a rollout.
$ kubectl apply -f nginx-deployment.yaml
$ kubectl get deployments
$ kubectl get rs
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
or
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
$ kubectl get deployments
$ kubectl get rs
Update the nginx Pods to use the nginx:1.16.1 image instead of the nginx:1.14.2 image.
$ kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1
or
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
The output is similar to:
deployment.apps/nginx-deployment image updated
Deployment ensures that only a certain number of Pods are down while they are being updated.
By default, it ensures that at least 75% of the desired number of Pods are up (25% max unavailable).
Deployment also ensures that only a certain number of Pods are created above the desired number
of Pods.
By default, it ensures that at most 125% of the desired number of Pods are up (25% max surge).
For example, if you look at the above Deployment closely, you will see that it first creates a new Pod,
then deletes an old Pod, and creates another new one. It does not kill old Pods until a sufficient
number of new Pods have come up, and does not create new Pods until a sufficient number of old
Pods have been killed. It makes sure that at least 3 Pods are available and that at max 4 Pods in total
are available. In case of a Deployment with 4 replicas, the number of Pods would be between 3 and
5.
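These percentages come from the rolling update strategy settings, which can also be set explicitly in the Deployment spec. A minimal sketch (the values shown are the defaults described above):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at most 25% of the desired Pods may be unavailable
      maxSurge: 25%         # at most 25% extra Pods may be created above the desired count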
Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-
deployment-d556bf558) and scaled it up to 3 replicas directly. When you updated the Deployment,
it created a new ReplicaSet (nginx-deployment-7dbfbc79cf) and scaled it up to 1 and waited for it to
come up. Then it scaled down the old ReplicaSet to 2 and scaled up the new ReplicaSet to 2 so that
at least 3 Pods were available and at most 4 Pods were created at all times. It then continued
scaling up and down the new and the old ReplicaSet, with the same rolling update strategy. Finally,
you'll have 3 available replicas in the new ReplicaSet, and the old ReplicaSet is scaled down to 0.
Rollover: If you update a Deployment while an existing rollout is in progress, the Deployment
creates a new ReplicaSet as per the update and starts scaling that up, and rolls over the ReplicaSet
that it was scaling up previously -- it adds it to its list of old ReplicaSets and starts scaling it down.
For example, suppose you create a Deployment to create 5 replicas of nginx:1.14.2, but then update
the Deployment to create 5 replicas of nginx:1.16.1, when only 3 replicas of nginx:1.14.2 had been
created. In that case, the Deployment immediately starts killing the 3 nginx:1.14.2 Pods that it had
created, and starts creating nginx:1.16.1 Pods. It does not wait for the 5 replicas of nginx:1.14.2 to
be created before changing course.
3- Rolling Back a Deployment
A Deployment may need to be rolled back, for instance, when it becomes unstable or experiences
crash looping.
By default, the system retains the entire rollout history of the Deployment, allowing rollbacks to be
performed at any time.
Suppose that you made a typo while updating the Deployment, by putting the image name as
nginx:1.161 instead of nginx:1.16.1.
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.161
$ kubectl rollout status deployment/nginx-deployment
$ kubectl get rs
$ kubectl get pods
$ kubectl describe deployment
$ kubectl apply -f nginx-deployment.yaml --record=true
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record=true
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.161 --record=true
$ kubectl rollout history deployment/nginx-deployment
$ kubectl rollout history deployment/nginx-deployment --revision=2
$ kubectl rollout history deployment/nginx-deployment
$ kubectl rollout undo deployment/nginx-deployment
$ kubectl rollout undo deployment/nginx-deployment --to-revision=2
4- Scaling a Deployment
Scaling a Deployment works similarly to scaling a ReplicaSet.
$ kubectl scale deployment/nginx-deployment --replicas=10
Assuming horizontal Pod autoscaling is enabled in the cluster, an autoscaler can be set up for the
Deployment, specifying the minimum and maximum number of Pods to run based on the CPU
utilization of existing Pods.
$ kubectl autoscale deployment/nginx-deployment --min=5 --max=15 --cpu-percent=80
Proportional scaling: RollingUpdate Deployments support running multiple versions of an application
at the same time. When you or an autoscaler scales a RollingUpdate Deployment that is in the middle
of a rollout (either in progress or paused), the Deployment controller balances the additional replicas in
the existing active ReplicaSets (ReplicaSets with Pods) in order to mitigate risk. This is called
proportional scaling.
For example, you are running a Deployment with 10 replicas, maxSurge=3, and maxUnavailable=2.
A new scaling request for the Deployment comes along. The autoscaler increments the Deployment
replicas to 15. The Deployment controller needs to decide where to add these new 5 replicas. If you
weren't using proportional scaling, all 5 of them would be added in the new ReplicaSet. With
proportional scaling, you spread the additional replicas across all ReplicaSets. Bigger proportions go to
the ReplicaSets with the most replicas and lower proportions go to ReplicaSets with fewer replicas. Any
leftovers are added to the ReplicaSet with the most replicas. ReplicaSets with zero replicas are not
scaled up.
In our example above, 3 replicas are added to the old ReplicaSet and 2 replicas are added to the new
ReplicaSet, since the old ReplicaSet still holds the larger share of the running Pods.
5- Pausing and Resuming a rollout of a Deployment
When a Deployment needs to be updated, rollouts can be paused before making any changes. Once
the updates are ready, rollouts can be resumed.
This allows multiple changes to be made during the pause without triggering unnecessary rollouts.
$ kubectl apply -f nginx-deployment.yaml --record=true
$ kubectl get deploy
$ kubectl rollout pause deployment/nginx-deployment
$ kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
$ kubectl rollout history deployment/nginx-deployment
$ kubectl set resources deployment/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
$ kubectl rollout resume deployment/nginx-deployment
$ kubectl rollout history deployment/nginx-deployment
$ kubectl get rs
6- Deployment status
A Deployment enters various states during its lifecycle. It can be progressing while rolling out a new
ReplicaSet, it can be complete, or it can fail to progress.
Progressing Deployment:
• The Deployment creates a new ReplicaSet.
• The Deployment is scaling up its newest ReplicaSet.
• The Deployment is scaling down its older ReplicaSet(s).
• New Pods become ready or available.
Complete Deployment:
• All of the replicas associated with the Deployment have been updated to the latest version
you've specified, meaning any updates you've requested have been completed.
• All of the replicas associated with the Deployment are available.
• No old replicas for the Deployment are running.
Failed Deployment:
A Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing, due to factors such as:
• Insufficient quota
• Readiness probe failures
• Image pull errors
• Insufficient permissions
• Limit ranges
• Application runtime misconfiguration
$ kubectl describe deployment nginx-deployment
7- Clean up Policy
The .spec.revisionHistoryLimit field in a Deployment can be set to specify the number of old
ReplicaSets to retain.
The remaining ReplicaSets will be garbage-collected in the background. By default, this value is 10.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
HorizontalPodAutoscaler
Some commands:
$ minikube addons enable metrics-server
$ kubectl get pods -n kube-system | grep metrics-server
$ kubectl top nodes
$ kubectl top pods
$ kubectl create -f nginx-hpa.yaml
$ kubectl top pods
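The nginx-hpa.yaml file referenced above is not reproduced in these notes. A minimal sketch of what it might contain, assuming it targets the nginx-deployment from the earlier examples and uses the autoscaling/v2 API (the name, replica bounds, and CPU target are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment   # assumed target Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU utilization exceeds 50%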
Lecture#12
Labels and Selectors
➢ Unlike names, labels do not ensure uniqueness, as multiple objects can share the same label.
➢ Once labels are attached to an object, filters are needed to narrow down the selection. These filters are called label selectors.
➢ Currently two types of selectors are supported:
• Equality based
• Set based
➢ A label selector can be made of multiple requirements which are comma separated.
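For example (the environment and tier labels are illustrative, not taken from the lecture files):
• Equality based: $ kubectl get pods -l environment=production,tier=frontend
• Set based: $ kubectl get pods -l 'environment in (production, qa),tier notin (frontend)'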
Node Selectors
➢ One use case for selecting labels is to constrain the set of nodes on which a pod can be scheduled. This allows a pod to run only on specific nodes.
➢ Generally, such constraints are not necessary, as the scheduler automatically handles reasonable placement. However, in certain circumstances, these constraints may be required.
➢ Labels can be used to tag nodes.
➢ Once the nodes are tagged, label selectors can be used to specify that pods run only on specific nodes.
➢ $ kubectl apply -f pod2.yaml
➢ $ kubectl get pods
➢ $ kubectl get nodes
➢ $ kubectl label nodes minikube hardware=ssd
➢ $ kubectl get pods
➢ $ kubectl label nodes minikube hardware-   (the trailing dash removes the label)
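pod2.yaml itself is not included in these notes; a minimal sketch of what it might contain, assuming it uses a nodeSelector matching the hardware=ssd label applied above (the pod name, container name, and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod2              # assumed name
spec:
  containers:
  - name: c00             # illustrative container
    image: nginx
  nodeSelector:
    hardware: ssd         # schedule only on nodes carrying this label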
ReplicaSet
➢ It is a Kubernetes object that ensures a specific number of identical Pods are always running.
➢ It ensures availability of applications and helps with scaling the application in or out.
➢ It continuously monitors the Pods. If a Pod is deleted or crashes, it creates a new one to match the desired state.
➢ The replicas field specifies how many Pods should be running. When creating new Pods, it uses the Pod template.
ReplicaSet – Equality based
➢ $ kubectl apply -f myreplica.yaml
➢ $ kubectl get rs
➢ $ kubectl get pods
➢ For scaling a specific ReplicaSet:
• $ kubectl scale rs <replicaset-name> --replicas=8
➢ For scaling using a label selector:
• $ kubectl scale rs -l name=rs_label --replicas=2
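myreplica.yaml is not reproduced in these notes; a minimal sketch of an equality-based ReplicaSet consistent with the name=rs_label selector used in the scale command above (replica count, container name, and image are illustrative):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myreplica
  labels:
    name: rs_label        # lets `kubectl scale rs -l name=rs_label` find this ReplicaSet
spec:
  replicas: 3
  selector:
    matchLabels:
      name: rs_label      # equality-based selector
  template:
    metadata:
      labels:
        name: rs_label
    spec:
      containers:
      - name: c00
        image: nginx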
ReplicaSet – Set based
Pick objects where the name is one of the given options (CR, Rushdah, or Hadia) and the section is not alpha.
➢ $ kubectl apply -f myreplica1.yaml
➢ $ kubectl get rs
➢ $ kubectl get pods --show-labels
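myreplica1.yaml is not reproduced here; a sketch of how the selector described above might be written with set-based requirements (the rest of the ReplicaSet is as in the previous sketch, and the Pod template's labels must satisfy these requirements, e.g. name: CR):

  selector:
    matchExpressions:
    - {key: name, operator: In, values: [CR, Rushdah, Hadia]}
    - {key: section, operator: NotIn, values: [alpha]}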
ReplicaSet – Scaling down
➢ You can create Pods without using a ReplicaSet, but avoid matching labels with a ReplicaSet's selector.
➢ If a Pod has labels that match the selector of a ReplicaSet, the ReplicaSet will automatically acquire those Pods.
➢ The acquired Pods are then managed by the ReplicaSet, even though they were not originally created by it.
ReplicaSet
➢ $ kubectl get pods --show-labels
➢ $ kubectl apply -f myreplica.yaml
➢ $ kubectl get rs
➢ $ kubectl get pods --show-labels
➢ $ kubectl delete -f myreplica.yaml
➢ $ kubectl get pods --show-labels
Scenario - ReplicaSet Overcounting Pods
➢ If you create new Pods after the ReplicaSet is deployed and it has already created its initial replicas, the ReplicaSet may acquire these new Pods.
➢ When the new Pods are acquired, the ReplicaSet will immediately terminate them if the total Pod count exceeds the desired number of replicas.
Lecture#11
Pod
kind: Pod
apiVersion: v1
metadata:
  name: testpod
spec:
  containers:
  - name: c00
    image: ubuntu
    command: ["/bin/bash", "-c", "while true; do echo Hello VA; sleep 5; done"]
  restartPolicy: Never  # Defaults to Always
Multi Container Pod
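The slide content for this topic is not captured in these notes; a minimal sketch of a two-container Pod in the same style as the single-container example above (the Pod name, second container, and nginx image are illustrative):

kind: Pod
apiVersion: v1
metadata:
  name: multipod            # illustrative name
spec:
  containers:
  - name: c00
    image: ubuntu
    command: ["/bin/bash", "-c", "while true; do echo Hello VA; sleep 5; done"]
  - name: c01               # second container in the same Pod
    image: nginx
    ports:
    - containerPort: 80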
Cloud Computing # 8
Docker Volume
• A Docker volume is simply a directory inside our container
• First, we have to declare this directory as a volume and then share the volume
• Even if we stop the container, we can still access the volume
• A volume is created in one container
• You can declare a directory as a volume only while creating a container
• You can't create a volume from an existing container
• You can share one volume across any number of containers
• A volume will not be included when you update an image
• You can map a volume in two ways:
• Container to Container
• Host to Container
Benefits of volume
• Decoupling container from storage
• Share a volume among different containers
• Attach a volume to containers
• On deleting a container, the volume is not deleted, i.e. it is persistent
Creating volume from Dockerfile
• Create a Dockerfile and write
• FROM ubuntu
• VOLUME ["/myvolume1"]
• Then create the image from this Dockerfile
• $ docker build -t myimage .
• Now create a container from this image and run it
• $ docker run -it --name container1 myimage /bin/bash
• Now do ls, and you can see myvolume1
Volume sharing between containers
• Now, share the volume with another container
• Container1 → Container2
• $ docker run -it --name container2 --privileged=true --volumes-from container1 ubuntu /bin/bash
• Now after creating container2, myvolume1 is visible; whatever you do in one volume can be seen from the other container.
• $ touch /myvolume1/samplefile
• $ docker start container1
• $ docker attach container1
• $ ls ./myvolume1
Create anonymous volume using docker command
• Now, try to create volume by using command
• $ docker run -it --name container3 -v /volume2 ubuntu /bin/bash
• $ cd volume2
• $ ls
• $ touch file1 file2 file3
Create volume using docker command
• Now, create one more volume and share volume2
• $ docker run -it --name container4 --privileged=true --volumes-from container3 ubuntu /bin/bash
• $ cd volume2
• $ ls
Create named volume using docker command
• Now, try to create volume by using command
• $ docker run -it --name container5 -v my_volume:/volume2 ubuntu /bin/bash
• $ cd volume2
• $ ls
• $ touch file1 file2 file3
Create volume using docker command
• Now, create one more volume and share volume2
• $ docker run -it --name container6 --privileged=true -v my_volume:/volume2 ubuntu /bin/bash
• $ cd volume2
• $ ls
Volume sharing from host to container
• Create a directory in C:/ drive
• $ docker run -it --name hostcontainer -v C:/host:/volume3 --privileged=true ubuntu /bin/bash
• $ cd volume3
• $ ls
• $ touch file1 file2 file3
• $ exit
• Check host machine you will see file1 file2 and file3 on C:/host
docker volume create <volume_name>
• Creates a new Docker volume with the specified name.
• Example:
• $ docker volume create my_volume
• Output:
• Indicates successful creation of the volume with its name.
docker volume rm <volume_name>
• Removes the specified Docker volume.
• Example:
• $ docker volume rm my_volume
• Output:
• Confirms the volume is removed if it's not in use by any container.
docker volume prune
• Removes all unused Docker volumes from the host machine.
• Example:
• $ docker volume prune
• Output:
• Lists each removed volume name and provides a summary of freed space.
docker volume inspect <volume_name>
• Displays detailed information about a specified Docker volume.
• Example:
• $ docker volume inspect my_volume
• Output:
• Shows metadata, mount paths, and other details about the volume.
docker container inspect <container_name>
• Displays detailed information about a specific container, including volume mounts and other
configuration.
• Example:
• $ docker container inspect my_container
• Output:
• Shows comprehensive JSON output including volume mounts, network settings, etc.
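• To extract only the volume mounts from that JSON, the --format (-f) flag with a Go template can be used, for example:
• $ docker container inspect -f '{{ json .Mounts }}' my_container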
DotNet Application
Publish .NET app
• dotnet publish -c Release
• dir .\bin\Release\net8.0\publish
Cloud Computing # 9
Overview of Docker Compose Files
• What is Docker Compose?
• A tool for defining and running multi-container Docker applications using a Compose file.
• Default Compose File Names
• Preferred
• compose.yaml
• compose.yml
• Legacy Formats Supported
• docker-compose.yaml
• docker-compose.yml
• If multiple files exist in the working directory:
• First Priority: compose.yaml
• Fallbacks: compose.yml, docker-compose.yaml, or docker-compose.yml
Architecture Overview: Frontend & Backend Services
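The architecture itself is not detailed in these notes; a minimal compose.yaml sketch with one frontend and one backend service (service names, images, and ports are illustrative):

services:
  frontend:
    image: nginx:latest            # illustrative frontend image
    ports:
      - "8080:80"                  # host:container
    depends_on:
      - backend
  backend:
    image: my-backend-api:latest   # illustrative backend image
    expose:
      - "5000"                     # reachable by other services on the Compose network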