CSE Semester 06 22CS910 DevOps - Unit IV
22CS910 - DevOps
Department : Computer Science & Engineering
Batch / Year : 2022 - 2026 / III
Created by : Mr. S Vijayakumar, Associate Professor / CSE
Date : 25.01.2025
1. Contents
S. No. Contents
1 Contents
2 Course Objectives
3 Prerequisites
4 Syllabus
5 Course Outcomes
6 CO-PO Mapping
7 Lecture Plan
9 Lecture Notes
10 Assignments
12 Part-B Questions
16 Assessment Schedule
9 Revision
2. Course Objectives
OBJECTIVES:
The course will enable learners to:
❖ Bridge the gap between development and operations for faster, more reliable software releases.
❖ Automate software delivery with CI/CD pipelines.
❖ Package and deploy apps efficiently using Docker containers.
❖ Automate infrastructure with Infrastructure as Code (IaC).
❖ Monitor and troubleshoot applications in production.
3. Prerequisites
22CS202 Java Programming → 22CS301 Advanced Java Programming → 22CS402 Web Development Frameworks → 22IT910 Rest Application Development Using Spring Boot and JPA → 22CS910 DevOps
4. Syllabus
UNIT I Introduction to DevOps (9 periods)
6. CO-PO Mapping
COs  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3
CO1 3 3 3 2 2 2 1 1
CO2 3 3 3 2 2 2 1 1
CO3 3 3 3 2 2 2 1 1
CO4 3 3 3 2 2 2 1 1
CO5 3 3 3 2 2 2 1 1
7. Lecture Plan - Unit IV
S. No. | Topic | No. of Periods | Proposed Date | Actual Lecture Date | Pertaining CO | Taxonomy Level | Mode of Delivery
1 | Introduction to containerization and its benefits | 1 | | | CO4 | K2 | Chalk & Talk
2 | Understanding Docker concepts: images, containers, registries | 1 | | | CO4 | K3 | Chalk & Talk
3 | Understanding Docker concepts: images, containers, registries | 1 | | | CO4 | K3 | Chalk & Talk
4 | Building and managing Docker containers | 1 | | | CO4 | K2 | Chalk & Talk
7 | Introduction to container orchestration with Docker Swarm or Kubernetes | 1 | | | CO4 | K2 | Chalk & Talk
8 | Introduction to container orchestration with Docker Swarm or Kubernetes | 1 | | | CO4 | K3 | Chalk & Talk
9 | Revision | 1 | | | CO4 | K2 | Chalk & Talk
8. Activity Based Learning
Learning Method Activity
9. Lecture Notes
Docker architecture
Containers
A container is a runnable instance of an image. You can create,
start, stop, move, or delete a container using the Docker API or CLI. You can
connect a container to one or more networks, attach storage to it, or even
create a new image based on its current state.
By default, a container is relatively well isolated from other
containers and its host machine. You can control how isolated a container's
network, storage, or other underlying subsystems are from other containers
or from the host machine.
A container is defined by its image as well as any configuration
options you provide to it when you create or start it. When a container is
removed, any changes to its state that aren't stored in persistent storage
disappear.
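To keep data beyond the life of a single container, it is normally written to persistent storage such as a named volume. A minimal sketch (the names mydata and /data are only illustrative):
# Create a named volume that outlives any individual container
docker volume create mydata
# Mount the volume at /data; files written there survive container removal
docker run -i -t --rm -v mydata:/data ubuntu /bin/bash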
Example docker run command
The following command runs an ubuntu container, attaches interactively to
your local command-line session, and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using the default registry configuration):
1. If you don't have the ubuntu image locally, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command manually.
3. Docker allocates a read-write filesystem to the container as its final layer, which allows the running container to create or modify files and directories in its local filesystem.
4. Docker creates a network interface to connect the container to the default network and assigns it an IP address.
5. Docker starts the container and executes /bin/bash. Because the container is running interactively and attached to your terminal (the -i and -t flags), you can type input and see the output.
6. When you type exit to terminate /bin/bash, the container stops but isn't removed; you can start it again or remove it.
Registry
A Docker registry is a centralized system for storing and distributing Docker images. It offers both public and private repositories, depending on whether you want an image to be publicly accessible or not. It is an essential component of the containerization workflow, streamlining the deployment and management of applications.
A Docker registry stores Docker images under specific names. There may be several versions of the same image, each with its own set of tags. A registry is organized into Docker repositories, where each repository holds all the versions of a particular image. Docker users can pull images from the registry to their local machine and push new images to it (given adequate access permissions where applicable). The registry itself is a server-side application that stores and distributes Docker images; it is stateless and highly scalable.
Different Types of Docker Registries
The following are the different types of docker registries:
1. DockerHub
2. Amazon Elastic Container Registry (ECR)
3. Google Container Registry (GCR)
4. Azure Container Registry (ACR)
Basic commands for Docker registry
The following are the basic commands for Docker registry:
1. Starting your registry
This command effectively starts a Docker registry on your local machine or
server, accessible via port 5000.
docker run -d -p 5000:5000 --restart=always --name registry registry:2
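Once the registry is running, an image can be tagged against it and pushed or pulled. A brief sketch, assuming the ubuntu image from earlier is already present locally:
# Tag the local ubuntu image so it points at the local registry
docker tag ubuntu localhost:5000/my-ubuntu
# Push the tagged image to the registry listening on port 5000
docker push localhost:5000/my-ubuntu
# Pull it back from the local registry later
docker pull localhost:5000/my-ubuntu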
Building an image from a Dockerfile
The hello-world example that follows is built from a Dockerfile; let's run through what each of the instructions in that Dockerfile means.
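The Dockerfile itself is not reproduced in these notes; a minimal sketch consistent with the explanation below (the file name hello-world.py follows the text, everything else is an assumption) would be:
# Use the official Python 3.9 image as the base
FROM python:3.9
# Work inside /app for all subsequent instructions
WORKDIR /app
# Copy everything from the build context into the working directory
COPY . .
# Run the hello-world script when a container is created from the image
CMD ["python", "hello-world.py"]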
FROM
Use the FROM command to specify the base image that you want your image
to derive from. Here, we’re using the Python version 3.9 base image as our
app is running in Python.
WORKDIR
Sets the current working directory inside the container. Like a cd in Linux.
Any subsequent commands in the Dockerfile will happen inside this directory.
COPY
The COPY instruction has the following format: COPY <source> <dest>.
It copies files from <source> (in the host) into <dest> (in the container).
So the above copy instruction is copying all files from the current working
directory on my local machine to the current working directory in the
container.
CMD
This is the command instruction, it specifies what to run when the container
is created. Here we’re telling Docker to tell Python to run our hello-world.py
app.
Now that we have the Dockerfile, we can build an image from it using the docker build command, which takes a directory and an optional -t flag that lets you name and tag the image.
docker build -t lgmooney98/hello-world:v1 .
The convention for naming Docker images is to use your Docker username
followed by a name, and the tag is usually used to specify the version of the
image.
docker build -t username/image-name:tag directory
So, the above command is telling Docker to look in the current directory for
the Dockerfile and build the image from it.
If you don’t already have the python 3.9 image, the build might take a few
minutes as Docker has to pull that image from the Docker registry.
We can check that our image has been created by using the docker images
command, which lists all images available locally.
Building the container
The next thing to do is run the image, which creates an instance of
the image – i.e. builds a container from the image. This is done with the
docker run command.
docker run lgmooney98/hello-world
If you attach to the container with an interactive shell (for example with the -i and -t flags, as in the ubuntu example earlier), you can run regular Linux commands from inside it, such as running our hello-world Python script.
Use the docker stop command to stop a running container. You need to provide the container ID; the first few characters are enough, since Docker only needs to distinguish it from the other containers.
docker stop a2c9
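To find the container ID in the first place, docker ps lists the running containers:
# List running containers with their IDs, images, and names
docker ps
# Include stopped containers as well
docker ps -a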
Running a website in a container locally
To show something slightly more interesting, I have created a basic web app using Flask. It shows "Hello world" on a web page, and the colour changes each time the page is refreshed. To get this running in a container, the process is the same as with the hello-world example above; we only need to change the Dockerfile.
First, we need to change the CMD instruction so that Docker tells Python to run app.py instead of hello-world.py. Secondly, we need to tell Docker to expose a port in the container so that we can later connect to the web page being served inside the container; this is done with the EXPOSE instruction. I have exposed port 5000 on the container here, as that is the port the Flask app is set to run on. Finally, since we're using the Flask package to build the app, we need to install that package. This can be done with the RUN instruction, which runs whatever you tell it inside a shell while the image is being built; it's similar to CMD, except that RUN executes at build time and only one CMD instruction takes effect. Here, we're telling Docker to use the package manager pip to install everything in requirements.txt, which contains flask. The updated Dockerfile is sketched below.
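A minimal sketch of the updated Dockerfile, following the description above (the file names app.py and requirements.txt come from the text; the rest is assumed):
FROM python:3.9
WORKDIR /app
# Copy the app files, including app.py and requirements.txt
COPY . .
# Install Flask (and anything else in requirements.txt) at build time
RUN pip install -r requirements.txt
# The Flask app listens on port 5000 inside the container
EXPOSE 5000
# Start the Flask app when the container is created
CMD ["python", "app.py"]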
We have a different Dockerfile now, so we need to build a new image from it. Once the image is built, we can create a container from it using the docker run command. However, this time we need to map the port the app is listening on inside the container (5000) to a port on the outside, which in this case is my local machine. When we then point our browser at the external port, the request is forwarded to the port inside the container, and the container can serve the web app back to the browser. This is done using the -p flag; here, I'm mapping port 5000 on the inside to port 8888 on the outside.
docker run -p 8888:5000 lgmooney98/hello-world-app
Now open your browser and go to localhost:8888; you should see the hello world web page.
4.5 Docker Compose for multi-container applications
Docker Compose is a tool for defining and running multi-container Docker applications with single commands. It reads the definitions of the multiple containers from a single YAML configuration file and orchestrates them with one-line commands, which makes it easy for developers to use.
It simplifies the management of complex applications by letting users specify the services, networks, and Docker volumes the application requires. In the configuration YAML file, developers can specify the parameters for each service, including the Docker image, environment variables, ports, and dependencies. By encapsulating the entire application setup in the Compose file, Docker Compose provides consistency across different development environments and makes it easy to share and reproduce the application reliably.
Reference: https://docs.docker.com/get-started/docker-concepts/running-containers/multi-container-applications/
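As an illustration only (the service names, images, and ports below are assumptions, not taken from the reference), a Compose file for a simple two-container application might look like this:
# compose.yaml - hypothetical web front end backed by Redis
services:
  web:
    build: .              # build the web service from the Dockerfile in this directory
    ports:
      - "8888:5000"       # map host port 8888 to container port 5000
    depends_on:
      - redis             # start the redis service before the web service
  redis:
    image: redis:7        # use the official Redis image
The whole stack can then be started with a single command such as docker compose up -d and torn down again with docker compose down.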
4.6 Introduction to container orchestration with Docker Swarm or
Kubernetes.
Introduction
Docker Swarm is a native clustering and orchestration tool for
Docker containers. It allows developers and IT administrators to create and
manage a cluster of Docker nodes as a single virtual system. This capability
is essential for deploying, managing, and scaling containerized applications in
production environments. Docker Swarm was introduced by Docker Inc. to
provide a simple yet powerful way to orchestrate containers, offering an
alternative to more complex orchestration solutions like Kubernetes.
The significance of Docker Swarm in container orchestration lies
in its seamless integration with the Docker ecosystem, its simplicity, and its
ability to efficiently manage resources. As organizations increasingly adopt
containerization for their applications, the need for effective orchestration
tools becomes paramount. Docker Swarm fulfills this need by offering a
robust and easy-to-use platform for managing containerized applications at
scale.
Core Concepts
To understand Docker Swarm, it’s essential to grasp its core
concepts. At its heart, Docker Swarm operates in “Swarm Mode,” which is
activated by initializing a swarm cluster. This mode transforms Docker nodes
into a coordinated cluster capable of running services and balancing
workloads.
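Swarm mode is enabled on the first node with docker swarm init, which prints a join token that other nodes use to join the cluster (the IP address below is only a placeholder):
# On the first (manager) node: initialise the swarm
docker swarm init --advertise-addr 192.168.1.10
# On each additional node: join using the token printed by the init command
docker swarm join --token <worker-token> 192.168.1.10:2377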
Node
Nodes in a Docker Swarm are categorized into two
types: Manager nodes and Worker nodes. Manager nodes are responsible for
the overall management of the swarm, including maintaining the cluster
state, scheduling tasks, and managing membership and orchestration.
Worker nodes, on the other hand, execute the tasks assigned to them by the
managers. This division of labor ensures that the cluster operates efficiently
and scales effectively.
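From a manager node, the nodes in the swarm and their roles can be inspected with:
# List every node in the swarm along with its availability and manager status
docker node ls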
Services
Services in Docker Swarm define the desired state of the
application, including the number of replicas and the network configurations.
Each service consists of multiple tasks, which are individual instances of the
Docker container running the application. The swarm manager is responsible
for distributing these tasks across the available nodes, ensuring that the
desired state is maintained even in the face of node failures or changes in
demand.
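For example, a replicated service can be created, inspected, and scaled with commands along these lines (the service name web and the nginx image are placeholders):
# Create a service with three replicas, publishing port 80 on the swarm
docker service create --name web --replicas 3 -p 80:80 nginx
# List the services running in the swarm and their replica counts
docker service ls
# Scale the service to five replicas; the manager reschedules tasks across nodes
docker service scale web=5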
Networking
Online Courses
1. https://www.practical-devsecops.com/containerization-certification/
2. https://www.coursera.org/learn/aws-containerization
3. https://www.linkedin.com/pulse/detailed-guide-containers-kubernetes-certifications-atul-kumar-ei4qc/
4. https://www.shiksha.com/online-courses/container-certification
14. Real Time Applications
Examples of containerized Real Time applications
16. Assessment Schedule
S. No. | Name of the Assessment | Start Date | End Date | Portion
TEXTBOOKS:
1. Deepak Gaikwad, Viral Thakkar, "DevOps Tools: from Practitioner's
Point of View", Wiley, 2019.
2. Jennifer Davis, Ryn Daniels, "Effective DevOps", O'Reilly Media, 2016.
REFERENCES:
1. Gene Kim, Jez Humble, Patrick Debois, "The DevOps Handbook: How
to Create World-Class Agility, Reliability, and Security in Technology
Organizations", IT Revolution Press, 2016.
2. Jez Humble, Gene Kim, "Continuous Delivery: Reliable Software
Releases Through Build, Test, and Deployment Automation", Addison-
Wesley, 2010.
3. Yevgeniy Brikman, "Terraform: Up & Running: Writing Infrastructure
as Code", O'Reilly Media, 2019.
4. Joseph Muli, "Beginning DevOps with Docker", Packt Publishing, 2018.
18. Mini Project Suggestions
Disclaimer:
This document is confidential and intended solely for the educational purpose of RMK Group of
Educational Institutions. If you have received this document through email in error, please notify the
system manager. This document contains proprietary information and is intended only to the
respective group / learning community as intended. If you are not the addressee you should not
disseminate, distribute or copy through e-mail. Please notify the sender immediately by e-mail if you
have received this document by mistake and delete this document from your system. If you are not
the intended recipient you are notified that disclosing, copying, distributing or taking any action in
reliance on the contents of this information is strictly prohibited.