Kubernetes Notes
Class Notes
Date: 12/11/2023
1st GRADE
KUBERNETES
● It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
● Kubernetes is about two things:
  ○ Containers and
  ○ Orchestration.
Orchestration
➢ What if the number of users increases and the application needs to scale up?
➢ You would also like to scale down when the load decreases.
Orchestration:
This whole process of automatically deploying and managing containers is
known as Container Orchestration.
Orchestration Technologies
➢ There are multiple such technologies available today – Docker has its own tool
called Docker Swarm.
➢ Kubernetes – arguably the most popular of them all – is a bit difficult to set up and get started with, but provides a lot of options to customize deployments and supports deployment of complex architectures.
➢ Kubernetes is now supported on all public cloud service providers like GCP, Azure, and AWS, and the Kubernetes project is one of the top-ranked projects on GitHub.
Kubernetes Advantage
➢ The application is now highly available: hardware failures do not bring it down, because multiple instances of the application run on different nodes.
➢ The user traffic is load balanced across the various containers.
➢ When demand increases, deploy more instances of the application seamlessly, within a matter of seconds, and do it at a service level.
➢ When running out of hardware resources, scale the number of nodes up/down without having to take down the application.
➢ And do all of this easily with a set of Declarative Object Configuration Files.
It is a Container Orchestration Technology used to orchestrate the deployment and management of
100s and 1000s of containers in a clustered environment.
Architecture
Nodes (Minions)
➢ A node is a machine, physical or virtual, on which Kubernetes is installed.
➢ A node was also known as a Minion in the past (the terms are used interchangeably).
➢ Consider: what if the node on which our application is running fails? Then obviously the application goes down, so we need to have more than one node.
Cluster
➢ A cluster is a set of nodes grouped together.
➢ This way, even if one node fails, the application is still accessible from the other nodes.
➢ Moreover having multiple nodes helps in sharing load as well.
Master
➢ The master is another node with Kubernetes installed on it, configured as a master.
➢ The master watches over the nodes in the cluster and is responsible for the actual
orchestration of containers on the worker nodes.
Components
➢ Installing Kubernetes on a system actually means installing the following components:
  ○ An API Server
  ○ An etcd service
  ○ A kubelet service
  ○ A Container Runtime
  ○ Controllers and Schedulers
➢ The users, management devices, and command line interfaces all talk to the API server to interact with the Kubernetes cluster.
➢ The controllers are responsible for noticing when nodes, containers, or endpoints go down, and they make decisions to bring up new containers in such cases.
➢ The Container Runtime is the underlying software used to run containers (here, consider Docker).
➢ Kubelet is the agent that runs on each node in the cluster.
○ The agent is responsible for making sure that the containers are running on
the nodes as expected.
Master vs Worker Nodes
How are these components distributed across the different types of servers? In other words, what makes one server a master and another a worker?
➢ The worker node (or minion) as it is also known, is where the containers are hosted.
➢ For example, to run Docker containers on a system, a container runtime must be installed, and that is where the container runtime (here, Docker) fits in.
➢ Alternative container runtimes are also available, such as rkt (Rocket) or CRI-O.
➢ The master server has the kube-apiserver, and that is what makes it the master.
➢ The worker nodes have the kubelet agent, which is responsible for interacting with the master to provide health information about the worker node and to carry out actions requested by the master on the worker nodes.
➢ All the information gathered is stored in a key-value store on the master.
➢ The key-value store is based on the popular etcd framework, as discussed.
➢ The master also has the controller manager and the scheduler.
Note: There are other components as well.
➢ Knowing this helps in installing and configuring the right components on different systems when we set up the infrastructure.
Kubectl
The kube Command Line Tool, or kubectl ("kube control")
➢ One of the command line utilities is the kube command line tool, or kubectl.
  ○ It is used to get the status of nodes in the cluster, and for many other things.
Command                                        Use
kubectl run                                    Deploy an application on the cluster
kubectl get pods                               List the pods in the cluster
kubectl get nodes                              List the nodes that are part of the cluster
kubectl version (or kubectl version --short)   Show the version of Kubernetes
kubectl get nodes -o wide                      Show the flavor and version of the OS on which the Kubernetes nodes are running
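As a quick illustration of these commands (assuming a working cluster; the nginx image is just an example):

    kubectl run nginx --image=nginx   # deploy an application (in recent kubectl versions, creates a pod named nginx)
    kubectl get pods                  # list pods; the nginx pod should appear
    kubectl get nodes -o wide         # list nodes with extra columns such as OS image and kernel version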
Setup
● There are lots of ways to set up Kubernetes.
● Locally on our laptops or virtual machines, using solutions like Minikube and kubeadm.
● Minikube is a tool used to set up a single instance of Kubernetes in an all-in-one setup.
● kubeadm is a tool used to configure Kubernetes in a multi-node setup.
NOTE: Choose whichever best suits your needs based on available time and resources; see the sketch below.
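A minimal local-setup sketch with Minikube (assuming Minikube and kubectl are already installed):

    minikube start       # provisions a single-node Kubernetes cluster locally
    kubectl get nodes    # the single minikube node should show as Ready
    minikube stop        # shut the cluster down when finished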
Pod
➢ At this point, assume that the following have been set up already:
  ○ The application has been developed and built into Docker images, and
  ○ The Docker image is available on a Docker repository like Docker Hub, so Kubernetes can pull it down.
  ○ The Kubernetes cluster has already been set up and is working, whether as a single-node or a multi-node setup.
➢ Kubernetes does not deploy containers directly on the worker nodes; the containers are encapsulated into a Kubernetes object known as a POD. A POD is a single instance of an application and the smallest object you can create in Kubernetes.
➢ We have a single-node Kubernetes cluster with a single instance of our application running in a single Docker container encapsulated in a POD.
➢ What if the number of users increases and you need to scale the application? Create a new POD with a new instance of the same application.
➢ And if the user base FURTHER increases and the current node does not have sufficient capacity?
  ○ THEN you can always deploy additional PODs on a new node in the cluster.
➢ You will have a new node added to the cluster to expand the cluster's physical capacity.
➢ SO, PODs usually have a one-to-one relationship with the containers running an application:
  ○ To scale UP: create new PODs, and
  ○ To scale DOWN: delete PODs.
➢ There is no need to add additional containers to an existing POD to scale the application.
Note: Think about how to implement all of this and how load balancing between containers is achieved.
Multi-Container PODs
➢ But users are not restricted to having a single container in a single POD.
➢ A single POD CAN have multiple containers; in fact, they are usually not multiple containers of the same kind.
➢ To scale the application, additional PODs need to be created.
➢ But sometimes there might be a scenario where a helper container does some kind of supporting task for the web application, such as processing user-entered data, processing a file uploaded by the user, etc.
➢ And users want these helper containers to live alongside their application container.
➢ In that case, users CAN have both of these containers part of the same POD, so that
when a new application container is created, the helper is also created and when it
dies the helper also dies since they are part of the same POD.
➢ The two containers can also communicate with each other directly by referring to
each other as ‘localhost’ since they share the same network namespace and the
same storage space as well.
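A sketch of such a POD with two containers (the image names my-web-app and my-helper are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
    spec:
      containers:
        - name: web-app
          image: my-web-app   # hypothetical application image
        - name: helper
          image: my-helper    # hypothetical helper/sidecar image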
To understand PODs from a different angle.
● Keep Kubernetes aside for a moment, and consider deploying the application with plain Docker on a Docker host.
● Users would first simply deploy the application using a simple docker run python-app command; the application runs fine and is accessible.
● As & When the load increases they deploy more instances of their application by
running the docker run commands many more times.
● But with helper containers they would also need to, by themselves:
  ○ Establish network connectivity between the containers using links and custom networks,
  ○ Create shareable volumes, share them among the containers, and maintain a map of that as well,
  ○ And most importantly, monitor the state of the application container, and when it dies, manually kill the helper container as well, since it is no longer required.
● When a new container is deployed they would need to deploy the new helper
container as well.
● With PODs, Kubernetes does all of this for users automatically.
● We just need to define what containers a POD consists of, and the containers in a POD by default will have access to the same storage, the same network namespace, and the same fate: they will be created together and destroyed together.
● Even if our application didn't happen to be so complex and we could live with a single container, Kubernetes still requires us to create PODs; this is good practice, since it prepares the application for architectural changes and scaling in the future.
YAML:
A YAML file is used to represent data; here, configuration data.
Data can be represented in different formats such as XML, JSON, and YAML; for example, a list of servers and their associated information.
Data in its simplest forms: a key-value pair, an array (list), and a dictionary (map).
➢ Key Value Pair
➢ An Array / List
➢ A Dictionary
  ○ A dictionary is a set of properties grouped together under an item.
  ○ Here, try to represent the nutrition information of two fruits.
  ○ The calories, fat, and carbs are different for each fruit.
  ○ Notice the blank space before each item.
  ○ Remember to provide an equal number of blank spaces before the properties of a single item, so they are all aligned together. (See the sketch below.)
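A minimal sketch of all three forms in YAML (the nutrition values are illustrative):

    # Key-value pairs
    fruit: apple
    vegetable: carrot

    # An array / list
    fruits:
      - apple
      - banana

    # A dictionary: nutrition information of two fruits
    banana:
      calories: 105
      fat: 0.4
      carbs: 27
    grape:
      calories: 62
      fat: 0.3
      carbs: 16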
SPACES:
◆ Notice the number of spaces before each property; that indicates these key-value pairs fall within banana.
◆ If extra spaces were added before fat and carbs,
  ● then they would fall under calories and thus become properties of calories, which doesn't make any sense.
  ● Result: a syntax error telling you that mapping values are not allowed here, because calories already has a value set, which is 105.
◆ Users can either set a direct value or a hash map, but cannot have both.
◆ The number of spaces before each property is key in YAML.
◆ Always ensure the indentation is right, to represent the data correctly.
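For instance, this indentation is invalid, because calories already holds the scalar value 105:

    banana:
      calories: 105
        fat: 0.4    # error: mapping values are not allowed here
        carbs: 27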
Consider storing data about an automobile manufacturing company: all of its cars and their details. (It could be anything.)
Consider the example of a car.
A car is a single object.
It has properties such as color, model, transmission, and price.
To store different information or properties of a single object, we use a dictionary.
In this simple dictionary, the properties of the car are defined in a key-value format.
For example, if we need to split the model further into the model name and make year, we could represent this as a dictionary within another dictionary.
In this case, the single value of model is now replaced by a small dictionary with two
properties, name and year.
This is a dictionary within another dictionary.
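A sketch of this dictionary within a dictionary (the values are illustrative):

    color: blue
    transmission: manual
    price: 20000
    model:
      name: corvette
      year: 1995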
List of strings
Let's say we would like to store the name of six cars.
The names are formed by the color and the model of the car.
To store this, we would use a list or an array, as it is multiple items of the same type of object.
Since we are only storing the names, we have a simple list of strings.
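A sketch of such a list (the car names are illustrative):

    cars:
      - blue corvette
      - grey corvette
      - red corvette
      - white corvette
      - green corvette
      - black corvette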
What if we want to store more than just the names: everything that we listed before, such as the color, model, transmission, and price?
We will then modify the array from a list of strings to a list of dictionaries.
We expand each item in the array and replace the name with the dictionary we built earlier.
This way, we are able to represent all information about multiple cars in a single YAML file.
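Combining the list and the dictionary (again with illustrative values):

    cars:
      - name: blue corvette
        color: blue
        model:
          name: corvette
          year: 1995
        transmission: manual
        price: 20000
      - name: grey corvette
        color: grey
        model:
          name: corvette
          year: 1995
        transmission: automatic
        price: 22000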
YAML: Notes
Let's take a look at some key notes. A dictionary is an unordered collection, whereas a list is an ordered collection.
What does that mean?
The two dictionaries that you see here have the same properties for banana.
However, you can see that the order of the properties fat and carbs does not match.
In the first dictionary, fat is defined before carbs.
In the second dictionary, carbs comes first followed by fat.
That doesn't really matter.
The properties can be defined in any order.
The two dictionaries will still be the same as long as the values of each property match.
This is not the same for lists or arrays. Arrays are ordered collections, so the order of items matters.
The two lists shown are not the same, because apple and banana are at different positions.
This is something to keep in mind while working with data structures.
Also remember: any line beginning with a hash (#) is automatically ignored and treated as a comment.
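A compact sketch using YAML flow style (the _a/_b key suffixes are only there to show the pairs side by side):

    # Equal dictionaries (key order does not matter):
    banana_a: {calories: 105, fat: 0.4, carbs: 27}
    banana_b: {calories: 105, carbs: 27, fat: 0.4}

    # Unequal lists (item order matters):
    fruits_a: [apple, banana]
    fruits_b: [banana, apple]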
➢ A K8s definition file contains 4 top-level fields:
  1. apiVersion,
  2. kind,
  3. metadata, and
  4. spec.
➢ These are top-level or root-level properties.
➢ They act as siblings: children of the same parent.
➢ All are REQUIRED fields, so they MUST be available in the configuration file.
➢ All definition files follow a similar structure.
1. apiVersion
This is the version of the Kubernetes API used to create the object. For a POD this is v1; a few other possible values for this field are apps/v1beta1, extensions/v1beta1, etc.
2. kind
The kind refers to the type of object being created (here, set to Pod).
Some other possible values here could be ReplicaSet, Deployment, or Service.
3. metadata
The metadata holds data about the object, such as its name and labels.
The number of spaces before the two properties name and labels doesn't matter, but it should be the same for both, as they are siblings.
If labels had more spaces on the left than name, it would become a child of the name property instead of a sibling, which is incorrect.
Also, the two properties must have MORE spaces than their parent, metadata, so that they are indented to the right a little bit.
If they all had the same number of spaces, they would all be siblings, which is not correct.
Under metadata, the name is a string value – so name the POD myapp-pod – and labels is a dictionary.
So labels is a dictionary within the metadata dictionary, and it can have any key-value pairs users wish.
For now, a label app with the value myapp has been added.
Other labels can be added to help identify these objects later.
Example: with 100s of PODs running a front-end application and 100s of them running a backend application or a database, it will be DIFFICULT to group these objects later unless they are labeled.
So far we have only mentioned the type and name of the object to create – a POD with the name myapp-pod – but haven't really specified the container or image needed in the pod.
4. spec
The spec section provides additional information to Kubernetes about the object; it differs for different objects, so refer to the documentation section to get the right format for each.
Here, to create a pod with a single container:
spec is a dictionary, so add a property under it called containers, which is a list or an array.
The reason this property is a list is that PODs can have multiple containers within them (as learned earlier).
Here, only a single item is added to the list, since the plan is to have only a single container in the POD.
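Putting the four top-level properties together, a pod-definition.yml consistent with this walkthrough would be (the container name nginx-container and image nginx are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
        - name: nginx-container
          image: nginx

Create the pod with: kubectl create -f pod-definition.yml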
Summary: Remember the 4 top-level properties: apiVersion, kind, metadata, and spec. Then add values to them depending on the object to be created.
Display information
To display detailed information about a POD – creation time, labels assigned, the containers it is made of, and the events associated with it – use the kubectl describe pod command.
DEMO:
pod.yml File:
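The demo file is the pod definition shown above, saved as pod.yml. The commands used in such a demo would be along these lines:

    kubectl create -f pod.yml         # create the pod from the file
    kubectl get pods                  # list pods and their status
    kubectl describe pod myapp-pod    # detailed information and events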
YAML – Tips:
❖ Use IDE tools and plugins for creating .yml files, to get help with the syntax.
In Case of Multiple PODs:
➢ If users have a single POD running the application, what happens when the app crashes and the POD fails?
➢ Users will no longer be able to access the application.
➢ To prevent users from losing access, we need more than one instance or POD running at the same time.
➢ That way, if one fails, users still have the application running on the other one.
➢ The Replication Controller helps us run multiple instances of a single POD in the Kubernetes cluster, thus providing High Availability.
Reason 2:
➢ Another reason we need the Replication Controller is to create multiple PODs to share the load across them.
➢ For example,
  ○ In this simple scenario, we have a single POD serving a set of users.
  ○ When the number of users increases, we deploy an additional POD to balance the load across the two PODs.
  ○ If the demand further increases and we were to run out of resources on the first node, we could deploy additional PODs across other nodes in the cluster.
  ○ As you can see, the Replication Controller spans multiple nodes in the cluster.
➢ It helps us balance the load across multiple PODs on different nodes, as well as scale the application when the demand increases.
➢ Replication Controller and ReplicaSet are two similar terms/technologies.
➢ There are minor differences in the way each works, and we will look at that in a bit.
➢ As such, we will try to stick to ReplicaSets in all of our demos and implementations going forward.
Replication Controller
➔ For any Kubernetes definition file, the spec section defines what's inside the object we are creating.
➔ In this case, we know that the Replication Controller creates multiple instances of a POD.
➔ But what POD?
➔ We create a "template" section under spec to provide a POD template to be used by the Replication Controller to create replicas.
➔ Now how do we DEFINE the POD template?
➔ It's not that hard, because we have already done that in the previous exercise.
➔ Remember, we created a pod-definition file in the previous exercise.
➔ We could re-use the contents of the same file to populate the template section.
➔ Move all the contents of the pod-definition file into the template section of the Replication Controller, except for the first two lines – apiVersion and kind.
➔ So we now have two metadata sections – one for the Replication Controller and another for the POD – and we have two spec sections, one for each.
➔ We have nested two definition files together.
➔ The Replication Controller is the parent and the pod-definition is the child.
➔ We still need to say how many replicas are required. For that, add another property to the spec called replicas, and input the number of replicas you need under it.
➔ Remember that template and replicas are direct children of the spec section.
➔ So they are siblings and must be on the same vertical line, having an equal number of spaces before them.
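Assembled from the description above, an rc-definition.yml would look like this (the container name and nginx image are illustrative):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: myapp-rc
      labels:
        app: myapp
        type: front-end
    spec:
      template:
        metadata:
          name: myapp-pod
          labels:
            app: myapp
            type: front-end
        spec:
          containers:
            - name: nginx-container
              image: nginx
      replicas: 3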
➔ Once the file is ready, run the kubectl create command and input the file using the -f parameter.
➔ The Replication Controller is created. When it is created, it first creates the PODs using the pod-definition template, as many as required – 3 in this case.
➔ To view the list of created replication controllers, run the kubectl get replicationcontroller command and you will see the replication controller listed.
➔ We can also see the desired number of replicas or PODs, the current number of replicas, and how many of them are ready.
➔ If you would like to see the PODs that were created by the replication controller, run the kubectl get pods command and you will see 3 PODs running.
➔ Note that all of them start with the name of the replication controller, myapp-rc, indicating that they were all created automatically by it.
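In command form:

    kubectl create -f rc-definition.yml
    kubectl get replicationcontroller
    kubectl get pods                   # 3 pods, names prefixed with myapp-rc-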
Replica Set
It is very similar to replication controller.
The definition file has an apiVersion of apps/v1, kind ReplicaSet, the usual metadata, and a spec containing a template, a selector, and the number of replicas set to 3.
However, there is one major difference between replication controller and replica set.
Replica set requires a “selector definition”.
The selector section helps the ReplicaSet identify what PODs fall under it.
But why specify what PODs fall under it, if the contents of the pod-definition file are already provided in the template?
BECAUSE a ReplicaSet can ALSO manage PODs that were not created as part of the ReplicaSet creation.
For example, if there were PODs created BEFORE the creation of the ReplicaSet that match the labels specified in the selector, the ReplicaSet will also take THOSE PODs into consideration when creating the replicas.
But before we get into that: the selector is one of the major differences between Replication Controller and ReplicaSet.
The selector is not a REQUIRED field for a Replication Controller, but it is still available.
When users skip it, it is assumed to be the same as the labels provided in the pod-definition file.
In the case of a ReplicaSet, user input IS required for this property.
The matchLabels selector simply matches the labels specified under it to the labels on the
PODs.
The ReplicaSet selector also provides many other options for matching labels that were not available with the Replication Controller.
As always, to create a ReplicaSet, run the kubectl create command providing the definition file as input; to see the created ReplicaSets, run the kubectl get replicaset command.
To get the list of PODs, simply run the kubectl get pods command.
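A replicaset-definition.yml following this structure (container name and image illustrative):

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: myapp-replicaset
      labels:
        app: myapp
    spec:
      template:
        metadata:
          name: myapp-pod
          labels:
            app: myapp
        spec:
          containers:
            - name: nginx-container
              image: nginx
      replicas: 3
      selector:
        matchLabels:
          app: myapp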
Labels and Selectors
So what is the deal with labels and selectors? Why do we label our PODs and objects in Kubernetes?
Consider a simple scenario: say we deployed 3 instances of our frontend web application as 3 PODs.
Users need to create a replication controller or ReplicaSet to ensure that 3 active PODs are running at all times.
Users CAN use it to monitor existing PODs if they have already been created, as is the case in this example.
In case they were not created, the ReplicaSet will create them for users.
The role of the ReplicaSet is to monitor the PODs, and if any of them were to fail, deploy new ones.
The ReplicaSet is in FACT a "process" that monitors the PODs.
But how would the ReplicaSet know which PODs to monitor, when there could be 100s of other PODs in the cluster running different applications?
This is where labeling our PODs during creation comes in handy.
Users could now provide these labels as a filter for the ReplicaSet: under the selector section, use the matchLabels filter and provide the same label that was used while creating the PODs.
This way the ReplicaSet knows which PODs to monitor.
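For instance (the tier: front-end label is illustrative):

    # pod-definition.yml
    metadata:
      name: myapp-pod
      labels:
        tier: front-end

    # replicaset-definition.yml
    selector:
      matchLabels:
        tier: front-end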
Now, a question along the same lines.
In the ReplicaSet specification section we learned that there are 3 sections: template, replicas, and the selector.
We need 3 replicas and we have updated our selector based on the discussion above.
Say, for instance, we have the same scenario as before, where 3 existing PODs were already created, and we need to create a ReplicaSet to monitor the PODs and ensure there are a minimum of 3 running at all times.
When the ReplicaSet is created, it is NOT going to deploy a new instance of the POD, as 3 of them with matching labels already exist.
In that case, do we really need to provide a template section in the ReplicaSet specification, since we are not expecting the ReplicaSet to create a new POD on deployment?
Yes we do, BECAUSE if one of the PODs were to fail in the future, the ReplicaSet needs to create a new one to maintain the desired number of PODs.
And for the ReplicaSet to create a new POD, the template definition section IS required.
SCALE
How do we scale the ReplicaSet? Consider that users started with 3 replicas and in the future decide to scale to 6. How do we update the ReplicaSet to 6 replicas? There are multiple ways to do it.
The first is to update the number of replicas in the definition file to 6, then run the kubectl replace command specifying the same file using the -f parameter; that will update the ReplicaSet to have 6 replicas.
The second way is to run the kubectl scale command: use the --replicas parameter to provide the new number of replicas, and specify the same file as input.
Users may either input the definition file or provide the ReplicaSet name in the TYPE/NAME format.
However, remember that using the file name as input will not result in the number of replicas being updated automatically in the file.
In other words, the number of replicas in the replicaset-definition file will still be 3, even though users scaled their ReplicaSet to 6 replicas using the kubectl scale command and the file as input.
There are also options available for automatically scaling the ReplicaSet based on load, but that is an advanced topic for later.
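The two approaches in command form:

    # 1. Edit replicas: 6 in the file, then:
    kubectl replace -f replicaset-definition.yml

    # 2. Scale directly from the command line:
    kubectl scale --replicas=6 -f replicaset-definition.yml
    kubectl scale --replicas=6 replicaset myapp-replicaset   # TYPE/NAME form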
Commands:
The kubectl create command is used to create a ReplicaSet; provide the input file using the -f parameter.
Use the kubectl get replicaset command to see the list of ReplicaSets created.
Use the kubectl delete replicaset command followed by the name of the ReplicaSet to delete it. (This also deletes all underlying PODs.)
And then we have the kubectl replace command to replace or update the ReplicaSet, and the kubectl scale command to scale the replicas simply from the command line without modifying the file.
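Summarized:

    kubectl create -f replicaset-definition.yml
    kubectl get replicaset
    kubectl delete replicaset myapp-replicaset   # also deletes the underlying pods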
Get All details:
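This likely refers to listing everything at once, e.g.:

    kubectl get all   # lists pods, replicasets, deployments, services, etc. in the namespace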
TASK: Create a POD outside the ReplicaSet with the same label as the ReplicaSet, and check what happens.
To change the number of replicas to, say, four instead of the current three:
For this, we make use of a new command, kubectl edit replicaset, and provide the name of the ReplicaSet, which is myapp-replicaset.
When we run this command, it opens the running configuration of the ReplicaSet in a text editor, in a text format; in this case, it opens in vim.
Note that this is not the actual file that we created at the beginning.
It is a temporary file created by Kubernetes in memory to allow us to edit the configuration in a text format.
That's why you'll see a lot of additional fields in this file besides the details that users provided.
Changes made to this file are directly applied to the running configuration on the cluster as soon as the file is saved, so be very careful with the changes made here.
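In command form (the replica-count change is the example from above):

    kubectl edit replicaset myapp-replicaset
    # in the editor, change spec.replicas from 3 to 4, then save and quit
    kubectl get pods   # a fourth pod is spun up within seconds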
A new POD was spun up a few seconds ago, and users can use the same approach to scale down as well.
Alternatively, use the kubectl scale command: provide the name of the ReplicaSet and set the number of replicas for it to scale to two. You can specify a number that is greater or less than the current number of replicas.
Take note of the double dashes before replicas.
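That is:

    kubectl scale replicaset myapp-replicaset --replicas=2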
PODs go into the Terminating state, because 2 replicas were specified in the previous command.
After some time, only 2 PODs remain.
Deployment
For a minute, let us forget about PODs and ReplicaSets and other Kubernetes concepts, and talk about how you might want to deploy your application in a production environment.
Say, for example, you have a web server that needs to be deployed in a production environment.
You need not ONE, but many such instances of the web server running, for obvious reasons.
Secondly, when newer versions of application builds become available on the Docker registry, you would like to upgrade your instances seamlessly. However, upgrading all of them at once may impact users accessing the application, so you may want to upgrade them one after the other (a rolling update).
Suppose one of the upgrades you performed resulted in an unexpected error and you are asked to undo the recent change; you would like to be able to roll back the changes that were recently carried out.
Finally, say users would like to make multiple changes to the environment, such as upgrading the underlying web server versions, scaling the environment, and modifying the resource allocations, etc.
You do not want to apply each change immediately after the command is run; instead, you would like to pause the environment, make the changes, and then resume, so that all changes are rolled out together.
All of these capabilities are available with Kubernetes Deployments.
So far in this course we discussed PODs, which deploy single instances of our application, such as the web application in this case.
Each container is encapsulated in a POD.
Multiple such PODs are deployed using Replication Controllers or ReplicaSets.
And then comes the Deployment, a Kubernetes object that sits higher in the hierarchy.
The Deployment provides us with capabilities to upgrade the underlying instances seamlessly using rolling updates, undo changes, and pause and resume changes to deployments.
So how do we create a Deployment?
As with the previous components, we first create a deployment-definition file.
The contents of the deployment-definition file are very similar to the replicaset-definition file, except for the kind, which is now Deployment.
Walking through the contents of the file: it has an apiVersion of apps/v1, metadata with name and labels, and a spec that has template, replicas, and selector.
The template has a POD definition inside it.
Once the file is ready, run the kubectl create command and specify the deployment-definition file.
Then run the kubectl get deployments command to see the newly created Deployment.
The Deployment automatically creates a ReplicaSet; so if you run the kubectl get replicaset command, you will see a new ReplicaSet with the name of the Deployment.
The ReplicaSets ultimately create PODs, so if you run the kubectl get pods command, you will see the PODs with the name of the Deployment and the ReplicaSet.
So far there hasn't been much of a difference between ReplicaSets and Deployments, except that the Deployment creates a new Kubernetes object called a Deployment.
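Following that description, a deployment-definition.yml would look like this (the name and nginx image are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
      labels:
        app: myapp
    spec:
      template:
        metadata:
          name: myapp-pod
          labels:
            app: myapp
        spec:
          containers:
            - name: nginx-container
              image: nginx
      replicas: 3
      selector:
        matchLabels:
          app: myapp

    # then:
    kubectl create -f deployment-definition.yml
    kubectl get deployments
    kubectl get replicaset
    kubectl get pods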
=================================
Docker vs ContainerD
=================================