CSE Semester 06 22CS910 DevOps - Unit IV

Please read this disclaimer before proceeding:

This document is confidential and intended solely for the educational purpose of RMK
Group of Educational Institutions. If you have received this document through email in
error, please notify the system manager. This document contains proprietary information
and is intended only to the respective group / learning community as intended. If you
are not the addressee you should not disseminate, distribute or copy through e-mail.
Please notify the sender immediately by e-mail if you have received this document by
mistake and delete this document from your system. If you are not the intended
recipient you are notified that disclosing, copying, distributing or taking any action in
reliance on the contents of this information is strictly prohibited.
22CS910 - DevOps
Department : Computer Science & Engineering
Batch / Year : 2022 - 2026 / III
Created by : Mr. S Vijayakumar, Associate Professor / CSE
Date : 25.01.2025
1. Contents
S. No. Contents

1 Contents

2 Course Objectives

3 Prerequisites

4 Syllabus

5 Course Outcomes

6 CO-PO Mapping

7 Lecture Plan

8 Activity Based Learning

9 Lecture Notes

10 Assignments

11 Part- A Questions & Answers

12 Part-B Questions

13 Supportive Online Courses

14 Real Time Applications

15 Content beyond the Syllabus

16 Assessment Schedule

17 Prescribed Text books & Reference Books

18 Mini Project Suggestions


Content – Unit IV
S.No. Contents

1 Introduction to containerization and its benefits

2 Understanding Docker concepts: images, containers, registries

3 Understanding Docker concepts: images, containers, registries

4 Building and managing Docker containers

5 Docker Compose for multi-container applications

6 Docker Compose for multi-container applications

7 Introduction to container orchestration with Docker Swarm or Kubernetes

8 Introduction to container orchestration with Docker Swarm or Kubernetes

9 Revision
2. Course Objectives

The Course will enable learners to:

❖ Bridge the gap between development and operations for


faster, more reliable software releases.

❖ Automate software delivery with CI/CD pipelines.

❖ Package and deploy apps efficiently using Docker containers.

❖ Automate infrastructure with Infrastructure as Code (IaC).

❖ Monitor and troubleshoot applications in production.


3. Prerequisites

22CS910 - DevOps

22IT910 - REST Application Development Using Spring Boot and JPA

22CS402 - Web Development Frameworks

22CS301 - Advanced Java Programming

22CS202 - Java Programming

22CS101 - Problem Solving Using C++

22CS102 - Software Development Practices
4. Syllabus
22CS910 DevOps (L T P C : 3 0 0 3)

OBJECTIVES:
The Course will enable learners to:
❖ Bridge the gap between development and operations for faster, more reliable
software releases.
❖ Automate software delivery with CI/CD pipelines.
❖ Package and deploy apps efficiently using Docker containers.
❖ Automate infrastructure with Infrastructure as Code (IaC).
❖ Monitor and troubleshoot applications in production.

UNIT I Introduction to DevOps 9
Software Development Methodologies - Operations Methodologies - Systems
Methodologies - Development, Release, and Deployment Concepts - Infrastructure
Concepts. What is DevOps? - DevOps importance and benefits - DevOps principles and
practices - 7 C's of DevOps lifecycle for business agility - DevOps and continuous
testing. How to choose the right DevOps tools? - Challenges with DevOps implementation.

UNIT II Version Control with Git 9
Build production-grade applications - MySQL - mapping Java classes to a relational
database - Introduction to the Git version control system - Git commands for basic
operations (clone, commit, push, pull) - Branching and merging strategies -
Collaboration using Git workflows.

UNIT III Continuous Integration and Delivery (CI/CD) 9
Introduction to CI/CD pipelines - Benefits of CI/CD for faster deployments - Setting up
a CI/CD pipeline with Jenkins - Automating builds, tests, and deployments.

UNIT IV Containerization with Docker 9
Introduction to containerization and its benefits - Understanding Docker concepts:
images, containers, registries - Building and managing Docker containers - Docker
Compose for multi-container applications - Introduction to container orchestration with
Docker Swarm or Kubernetes.

UNIT V Infrastructure as Code (IaC) and Monitoring 9
Introduction to Infrastructure as Code (IaC) - Benefits of using IaC for repeatable
infrastructure provisioning - Learning IaC with Terraform - Setting up infrastructure
configurations with Terraform - Introduction to monitoring and logging tools for
applications - Alerting and troubleshooting techniques.

TOTAL: 45 PERIODS
4. Syllabus Contd...
OUTCOMES:
Upon completion of the course, the students will be able to:
CO1: Understand the core principles and philosophies of DevOps.
CO2: Implement version control systems for code management and collaboration.
CO3: Automate software delivery pipelines using CI/CD tools.
CO4: Utilize containerization technologies for packaging and deploying applications.
CO5: Configure infrastructure as code (IaC) for repeatable deployments.
CO6: Monitor and maintain applications in a production environment.
TEXTBOOKS:
1. Deepak Gaikwad, Viral Thakkar, "DevOps Tools: from Practitioner's Point of View",
Wiley, 2019.
2. Jennifer Davis, Ryn Daniels, "Effective DevOps", O'Reilly Media, 2016.
REFERENCES:
1. Gene Kim, Jez Humble, Patrick Debois, "The DevOps Handbook: How to Create
World-Class Agility, Reliability, and Security in Technology Organizations", IT
Revolution Press, 2016.
2. Jez Humble, Gene Kim, "Continuous Delivery: Reliable Software Releases Through
Build, Test, and Deployment Automation", Addison-Wesley, 2010.
3. Yevgeniy Brikman, "Terraform: Up & Running: Writing Infrastructure as Code",
O'Reilly Media, 2019.
4. Joseph Muli, "Beginning DevOps with Docker", Packt Publishing, 2018.
5. Course Outcomes
Upon completion of the course, the students will be able to:

CO1: Understand the core principles and philosophies of DevOps.

CO2: Implement version control systems for code management and collaboration.

CO3: Automate software delivery pipelines using CI/CD tools.

CO4: Utilize containerization technologies for packaging and deploying applications.

CO5: Configure infrastructure as code (IaC) for repeatable deployments.

CO6: Monitor and maintain applications in a production environment.
6. CO - PO Mapping
POs and PSOs

COs | PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2 PSO3

CO1 | 3 3 3 2 2 2 1 1
CO2 | 3 3 3 2 2 2 1 1
CO3 | 3 3 3 2 2 2 1 1
CO4 | 3 3 3 2 2 2 1 1
CO5 | 3 3 3 2 2 2 1 1
7. Lecture Plan - Unit IV
S.No. | Topic | No. of Periods | Proposed Date | Actual Lecture Date | Pertaining CO | Taxonomy Level | Mode of Delivery

1 | Introduction to containerization and its benefits | 1 | - | - | CO4 | K2 | Chalk & Talk
2 | Understanding Docker concepts: images, containers, registries | 1 | - | - | CO4 | K3 | Chalk & Talk
3 | Understanding Docker concepts: images, containers, registries | 1 | - | - | CO4 | K3 | Chalk & Talk
4 | Building and managing Docker containers | 1 | - | - | CO4 | K2 | Chalk & Talk
5 | Docker Compose for multi-container applications | 1 | - | - | CO4 | K3 | Chalk & Talk
6 | Docker Compose for multi-container applications | 1 | - | - | CO4 | K3 | Chalk & Talk
7 | Introduction to container orchestration with Docker Swarm or Kubernetes | 1 | - | - | CO4 | K2 | Chalk & Talk
8 | Introduction to container orchestration with Docker Swarm or Kubernetes | 1 | - | - | CO4 | K3 | Chalk & Talk
9 | Revision | 1 | - | - | CO4 | K2 | Chalk & Talk
8. Activity Based Learning
Learning Method | Activity

Learn by Solving Problems | Tutorial sessions available in RMK Nextgen
Learn by Questioning | Quiz / MCQ using the RMK Nextgen App
Learn by doing Hands-on | Practice available in RMK Nextgen
9. Lecture Notes
Containerization with Docker
Introduction to containerization and its benefits - Understanding
Docker concepts: images, containers, registries - Building and managing
Docker containers - Docker Compose for multi- container applications -
Introduction to container orchestration with Docker Swarm or Kubernetes.
4.1 Introduction to containerization
Containerization is a technology that has revolutionized the way
applications are developed and deployed. Containers are essentially
lightweight, standalone executable packages that contain all the necessary
components to run an application, including code, dependencies,
configuration files, and system libraries. Although the underlying ideas have
existed for some time, containerization has become increasingly popular in
recent years with the rise of cloud computing and microservices
architecture.
In simple terms, containerization is a way of packaging an
application and its dependencies into a single, portable unit called a
container. Containers are lightweight, fast, and easy to deploy, making them
an ideal solution for deploying applications in various environments.
Containers can be thought of as virtual machines (VMs) with a smaller
footprint and faster startup times.
Working of containerization
Containerization is achieved through a combination of
technologies, including kernel namespaces, control groups, and file system
isolation.
Kernel namespaces provide each container with its own view of
the operating system, including its own file system, network stack, and
process tree. This isolation ensures that containers are completely isolated
from each other and the host operating system.
Control groups, also known as cgroups, enable administrators to
allocate resources to each container, such as CPU time, memory, and
network bandwidth. This ensures that containers do not interfere with each
other or with the host operating system and that each container has the
resources it needs to run efficiently.
File system isolation ensures that each container has its own file
system, separate from the host operating system and other containers. This
ensures that applications running inside containers are not affected by
changes made to the host file system or by changes made by other
containers.
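As a rough sketch of these mechanisms in practice, the docker run flags below
set cgroup limits and docker exec shows the container's private process tree;
the image and limit values are only illustrative:

# Limit the container to 256 MB of memory and half a CPU core (cgroups):
docker run -d --name limited --memory=256m --cpus=0.5 ubuntu sleep infinity
# The container sees only its own process tree (PID namespace):
docker exec limited ps -ef
# Inspect the limits Docker recorded for this container:
docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' limited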
Containerization works by separating an application from the host
environment. The container provides a layer of abstraction between the
application and the host operating system. The container contains all the
dependencies needed to run the application, including libraries, frameworks,
and other software components. The container also includes the application
code and its configuration files.
Containers are created from images. An image is a file that
contains all the necessary components needed to run an application. An
image is created by defining a set of instructions that are used to build the
image. These instructions are stored in a file called a Dockerfile. A Dockerfile
contains a list of commands that are used to install dependencies, copy files,
and configure the environment. Once the Dockerfile is created, it is used to
build an image. The image can then be used to create one or more
containers.
Each container runs in its own isolated environment. This means
that the container cannot access resources outside of its own environment,
ensuring that it does not interfere with other containers on the same host.
The container also shares the same kernel as the host operating system. This
means that the container can run on any operating system that supports the
Docker runtime.
Container

[Figure: an executing container. Once the container image is created, it
cannot be changed.]
A software developer can create an image of tested software that
can be moved easily from environment to environment without having to
install and configure the dependencies specifically for each environment. This
makes it efficient to migrate applications from one environment to the other,
be it from development to QA to production, or from in-house to cloud-based
environments while ensuring traceability via the immutable nature of the
container.
Popular Containerization Technologies
Containerization tools are software platforms that enable the
creation, management, and deployment of containerized applications. Some
of the most popular containerization tools and their key features are
described below.
Docker
Docker is one of the most popular containerization tools used by developers.
Docker provides a complete containerization platform that enables
developers to build, package, and deploy applications in a consistent and
scalable manner. Key features of Docker include:
Docker Engine: The core component of the Docker platform, which allows
users to create and manage containerized applications.
Docker Hub: A public registry of container images that can be used to store
and share container images with other users.
Docker Compose: A tool for defining and running multi-container Docker
applications.
Docker Swarm: A native clustering and orchestration tool for managing
Docker containers at scale.
Kubernetes
Kubernetes is an open-source container orchestration platform that
automates the deployment, scaling, and management of containerized
applications. Kubernetes provides a flexible and scalable platform for
managing containerized applications in production environments. Key
features of Kubernetes include:
Container orchestration: Kubernetes automates the deployment and
management of containerized applications across multiple nodes.
Scaling: Kubernetes enables users to scale containerized applications up or
down based on demand.
Fault tolerance: Kubernetes provides self-healing capabilities that
automatically replace failed containers with new ones.
Service discovery: Kubernetes enables users to easily discover and connect
to services running inside containers.
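As a hedged sketch of how these features are expressed in practice, the
minimal Deployment manifest below asks Kubernetes to keep three replicas of a
container running; the names, image, and counts are illustrative, not part of
the syllabus material:

# deployment.yaml - a minimal sketch; names, image, and counts are illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # Kubernetes restarts or reschedules Pods to keep 3 running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80

Applying it with kubectl apply -f deployment.yaml and later running
kubectl scale deployment web --replicas=5 exercises the orchestration and
scaling features described above.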
Limitations of Containerization Technologies

While containerization offers many benefits, there are also some
potential drawbacks to consider. Some of the key drawbacks include:
Complexity
Containerization can be complex, especially for organizations that
are new to the technology. There is a learning curve associated with
containerization, and it can take time to develop the necessary expertise and
tools to manage containers effectively.
Security
Containers are designed to be portable and self-contained, but
this can also make them more vulnerable to security threats. Therefore,
containers need to be properly secured to prevent unauthorized access and
protect against potential attacks.
Compatibility
Containers are designed to be portable, but there can be
compatibility issues between different container platforms and versions. This
can make it difficult to move containers between different environments,
especially if they were created using different tools or technologies.
Performance
While containers are designed to be lightweight and efficient,
there can be performance issues if too many containers are run on a single
server or cluster. Therefore, careful management is required to ensure that
containers are properly balanced and resources are allocated appropriately.
4.2 Benefits of containerization
Portability
One of the main benefits of containerization is portability.
Containers can be run on any infrastructure that supports the Docker
runtime. This means that an application can be developed and tested on a
developer's machine, then deployed to a production environment without
modification. This makes it easier to move applications between different
environments, reducing the time and effort required to deploy applications.
Containers can be easily moved between different environments, such as
development, testing, and production, without any changes to the application
or its configuration. This enables developers to build, test, and deploy
applications faster and more efficiently than ever before.
Consistency
Containers provide a consistent runtime environment. This means
that an application will behave the same way regardless of the environment
it is running on. This consistency is achieved by packaging all the
dependencies needed to run the application into a single container. This
eliminates the need to worry about the underlying infrastructure, ensuring
that the application will always run as expected. Containers ensure that
applications run consistently across different environments, as they are
packaged with all the dependencies and configurations they need to run.
Scalability
Containers are designed to be scalable. Multiple containers can
be deployed to a single host or across multiple hosts to increase the capacity
of an application. This allows applications to be scaled up or down quickly
and easily in response to changing demand. In addition, containers are
designed to be easily scalable, meaning they can be quickly and easily
replicated to meet the demands of high-traffic applications.
Efficiency
Containerization can improve the efficiency of application
development and deployment. Containers provide a lightweight runtime
environment, reducing the resources needed to run an application. This
means that applications can be deployed more quickly and with less
overhead. Containers also make it easier to manage application
dependencies, reducing the time and effort required to configure the
environment. Containers are much more efficient than traditional virtual
machines, as they do not require the emulation of an entire operating
system. This makes them much lighter and faster, enabling developers to run
more containers on a single host machine.
Security
Containers provide a level of isolation between applications,
improving security. Each container runs in its own isolated environment,
which means that if one container is compromised, it will not affect other
containers on the same host. Containers provide an extra layer of security, as
they are completely isolated from each other and from the host operating
system. This ensures that applications running inside containers cannot
interfere with each other or with the host operating system.
Isolation
Containers provide a high degree of isolation between
applications, which ensures that applications do not interfere with each other.
This makes it easier to manage and secure applications in production
environments.
Faster Deployment Times
Because containers are self-contained and portable, they can be
deployed much more quickly than traditional software packages. This makes
it easier to deploy new features and updates and reduces the time required
to get new applications up and running.
4.3. Understanding Docker concepts: images, containers, registries
Docker is an open-source platform that allows developers to
package applications and their dependencies into containers. Containers are
lightweight, portable units that can be used to develop, ship, and run
applications. Docker enables you to separate your applications from your
infrastructure so you can deliver software quickly.

The Docker platform

Docker provides the ability to package and run an application in a
loosely isolated environment called a container. The isolation and security
let you run many containers simultaneously on a given host. Containers are
lightweight and contain everything needed to run the application.

Docker architecture

Docker uses a client-server architecture. The Docker client talks
to the Docker daemon, which does the heavy lifting of building, running, and
distributing your Docker containers. The Docker client and daemon can run
on the same system, or you can connect a Docker client to a remote Docker
daemon. The Docker client and daemon communicate using a REST API,
over UNIX sockets or a network interface. Another Docker client is Docker
Compose, which lets you work with applications consisting of a set of
containers.
Docker Objects
Images
An image is a read-only template with instructions for creating a
Docker container. Often, an image is based on another image, with some
additional customization. For example, you may build an image which is
based on the ubuntu image, but installs the Apache web server and your
application, as well as the configuration details needed to make your
application run.
You might create your own images or you might only use those
created by others and published in a registry. To build your own image, you
create a Dockerfile with a simple syntax for defining the steps needed to
create the image and run it. Each instruction in a Dockerfile creates a layer in
the image. When you change the Dockerfile and rebuild the image, only
those layers which have changed are rebuilt. This is part of what makes
images so lightweight, small, and fast, when compared to other virtualization
technologies.
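The following sketch (file and package names are hypothetical) shows why
instruction order matters for layer caching: copying requirements.txt and
installing dependencies before copying the application code means a code-only
change reuses the cached dependency layers on rebuild.

# Each instruction creates a layer; unchanged layers are reused from cache.
FROM python:3.9
WORKDIR /app
# requirements.txt changes rarely, so these two layers are usually cached:
COPY requirements.txt .
RUN pip install -r requirements.txt
# Application code changes often; only the layers below are rebuilt:
COPY . .
CMD ["python", "app.py"]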
Images in a nutshell
 Considered a "blueprint" or "snapshot" of an application environment.
 Immutable, meaning once created, it cannot be directly modified.
 Used to create containers.
 Can be shared and distributed across different environments.

Containers
A container is a runnable instance of an image. You can create,
start, stop, move, or delete a container using the Docker API or CLI. You can
connect a container to one or more networks, attach storage to it, or even
create a new image based on its current state.
By default, a container is relatively well isolated from other
containers and its host machine. You can control how isolated a container's
network, storage, or other underlying subsystems are from other containers
or from the host machine.
A container is defined by its image as well as any configuration
options you provide to it when you create or start it. When a container is
removed, any changes to its state that aren't stored in persistent storage
disappear.
Example docker run command
The following command runs an ubuntu container, attaches interactively to
your local command-line session, and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using
the default registry configuration):
1. If you don't have the ubuntu image locally, Docker pulls it from your
configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker
container create command manually.
3. Docker allocates a read-write filesystem to the container, as its
final layer. This allows a running container to create or modify files and
directories in its local filesystem.
4. Docker creates a network interface to connect the container to
the default network, since you didn't specify any networking options. This
includes assigning an IP address to the container. By default, containers can
connect to external networks using the host machine's network connection.
5. Docker starts the container and executes /bin/bash. Because
the container is running interactively and attached to your terminal (due to
the -i and -t flags), you can provide input using your keyboard while
Docker logs the output to your terminal.
When you run exit to terminate the /bin/bash command, the
container stops but isn't removed. You can start it again or remove it.
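A few follow-up commands, sketched here with a placeholder <container_id>,
show that lifecycle:

docker ps -a                      # the exited ubuntu container is still listed
docker start -ai <container_id>   # start it again and reattach to it
docker rm <container_id>          # or remove it for good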
Containers in a nutshell
 A running instance of a Docker image.
 Can be started, stopped, and interacted with.
 Represents a live, isolated environment for running an application.
 Changes made within a container are not reflected in the original image.

Registry
Docker Registry is a centralized storage and distribution system
for collecting and managing Docker images. It provides both public and
private repositories, depending on whether you choose to make an image
publicly accessible or not. It is an essential component in the
containerization workflow for streamlining the deployment and management of
applications.
A Docker registry is a system for storing and distributing Docker
images with specific names. There may be several versions of the same
image, each with its own set of tags. A Docker registry is separated into
Docker repositories, each of which holds all image modifications. The registry
may be used by Docker users to fetch images locally and to push new
images to the registry (given adequate access permissions when applicable).
The registry is a server-side application that stores and distributes Docker
images. It is stateless and extremely scalable.
Different Types of Docker Registries
The following are the different types of docker registries:
1. DockerHub
2. Amazon Elastic Container Registry (ECR)
3. Google Container Registry (GCR)
4. Azure Container Registry (ACR)
Basic commands for Docker registry
The following are the basic commands for Docker registry:
1. Starting your registry
This command effectively starts a Docker registry on your local machine or
server, accessible via port 5000.
docker run -d -p 5000:5000 --restart=always --name registry registry:2

It instructs Docker to start a container named registry from the registry:2
image in detached mode, map the registry's port 5000 to local port 5000, and
restart it automatically if it dies.
2. Pulling some images from the hub
The following command pulls an image from the public Docker registry
(Docker Hub); here we pull the ubuntu image:
docker pull ubuntu:latest
3. Tagging the image and pointing it to your registry
The following command tags the image so that it points to your registry:
docker image tag ubuntu:latest localhost:5000/gfg-image
4. Pushing the image
The following command pushes the tagged image to your local registry:
docker push localhost:5000/gfg-image
5. Pulling that image back
This command instructs Docker to pull the image named gfg-image from the
local registry running on localhost at port 5000.
docker pull localhost:5000/gfg-image
6. Stop the registry
This command instructs Docker to stop the container named registry:
docker container stop registry
7. Stop the registry and remove the data
The following command stops the registry and removes the associated data:
docker container stop registry && docker container rm -v registry

Key Features of Docker Registry

The following are the key features of a Docker registry:
Centralized Image Storage: Stores and manages Docker images in a central
repository.
Access Control: Provides strong authentication and authorization mechanisms
to secure access to images.
Integration with CI/CD: Integrates seamlessly with continuous integration
and deployment pipelines for automated workflows.
Version Control: Supports versioning and tagging of images, enabling easy
tracking and rollback of image versions.

Benefits of Docker Registry

The following are the benefits of a Docker registry:
Centralized Storage: Provides a centralized repository for storing and
managing Docker images.
Enhanced Security: Offers access control and security features to ensure
safe and authorized usage of images.
Integration with CI/CD: Integrates with continuous integration and delivery
pipelines for automated workflows.
Efficient Image Management: Provides features such as version control for
efficient management of Docker images.

Uses of Docker Registry

The following are some of the uses of a Docker registry:
1. Our images can be stored in the Docker registry.
2. We can automate the development workflow.
3. With the help of a private Docker registry, we can secure our images.
4. The version-control feature of Docker registries allows teams to track
changes and roll back to previous versions.
4.4 Building and managing Docker containers
Managing Docker Containers
Managing Docker containers can feel a bit overwhelming at first, but with the
right approach and tools, it becomes a breeze. By using the following steps
you can easily manage the docker containers:
Understanding Basic Commands
 Starting and Stopping Containers: Use commands like docker
start <container_name> to start a container and docker stop
<container_name> to stop it. These are your go-to commands for
managing container states.
 Viewing Running Containers: The command docker ps shows you a
list of all running containers. Add -a (i.e., docker ps -a) to see
all containers, including those that are stopped.
Container Lifecycle Management
 Creating Containers: You can create a new container from an image
using docker run <image_name>. This will pull the image if you don’t
have it already.
 Removing Containers: When you’re done with a container, you can
remove it using docker rm <container_name>. Use -f to force
remove running containers.
Managing Resources
 Limiting Resources: Docker allows you to limit resources for each
container. Use flags like --memory and --cpus when you run a
container to prevent it from using too many resources.
 Volumes for Persistence: Use Docker volumes to keep data persistent,
even when containers are removed. This is helpful for databases or any
application that requires saving data (see the sketch after this list).
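A minimal sketch of volume-backed persistence; the MySQL image and the
password are illustrative choices, not part of the original notes:

docker volume create app-data                    # a named volume managed by Docker
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret \
  -v app-data:/var/lib/mysql mysql:8             # database files live in the volume
docker rm -f db                                  # remove the container...
docker run -d --name db2 -e MYSQL_ROOT_PASSWORD=secret \
  -v app-data:/var/lib/mysql mysql:8             # ...the data is still there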
Monitoring Container Health
 Health Checks: Set up health checks in your Dockerfile or through
Docker Compose. Docker then tracks whether your containers are running
properly, and an orchestrator or restart policy can replace containers that
become unhealthy (a sketch follows this list).
 Logging: Use Docker’s logging options to capture container output. The
command docker logs <container_name> lets you view logs for
troubleshooting.
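For example, a Dockerfile health check might look like the sketch below; it
assumes curl is available in the image and that the application answers on
port 5000:

# In the Dockerfile:
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:5000/ || exit 1

# On the host, check the recorded health state:
docker inspect -f '{{.State.Health.Status}}' <container_name>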
Using Management Tools
 Portainer: This is a popular web-based GUI for managing Docker
containers. It’s user-friendly and lets you manage your containers
visually, making it easier to see what’s running and adjust settings.
 Docker Compose: Use Docker Compose for managing multi-container
applications. You can define your containers in a docker-compose.yml
file and manage them all with a single command (docker-compose
up).
Automating with Scripts
 Bash Scripts: Create scripts to automate routine tasks. For example,
you could write a script to start up multiple containers at once or back up
data from your volumes (a sketch follows this list).
 Dockerfiles for Custom Images: Use Dockerfiles to automate the
creation of custom images tailored to your needs. This makes it easy to
deploy the same environment consistently.
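For instance, a small startup script of the kind described might look like
this; the container names and images are illustrative:

#!/bin/bash
# start-stack.sh - start related containers in order (names/images illustrative)
set -e
docker start db    2>/dev/null || docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:8
docker start cache 2>/dev/null || docker run -d --name cache redis:7
docker start web   2>/dev/null || docker run -d --name web -p 8080:80 nginx
docker ps --format '{{.Names}}: {{.Status}}'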
Regular Maintenance
 Clean Up Unused Resources: Over time, you’ll accumulate unused
images, containers, and networks. Use docker system prune to
clean up unused resources and free up space.
 Update Images Regularly: Keep your images up to date to benefit
from security patches and performance improvements. Use docker
pull <image_name> to fetch the latest version.
Backup and Restore
 Backing Up Data: Ensure you back up important data stored in
volumes. For example, the following command archives a container's /data
directory into the current host directory:
docker run --rm --volumes-from <container_name> -v $(pwd):/backup busybox tar czf /backup/backup.tar.gz /data
 Restoring Data: To restore data, you can create a new container and
use a similar command to untar the backup, as sketched below.
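A matching restore might look like the following sketch; since tar stores the
archived paths relative to /, extracting at / recreates /data:

docker run --rm --volumes-from <container_name> -v $(pwd):/backup busybox \
  tar xzf /backup/backup.tar.gz -C /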
Example: creating an image and building a container – hello world
Creating an image
Start with a Python script that simply prints “Hello world” when run. In order
to run this app inside a Docker container, you need to first create a
Dockerfile. A Dockerfile is a text file that contains the instructions for how to
create an image, from which a container can be built – a container is an
instance of an image. Remember, an image is like a blueprint; the Dockerfile
defines that blueprint. I have created a Dockerfile in the same directory as
where my hello-world.py file lives; it is shown below.
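The original figure is not reproduced here; the following is a reconstruction
of that Dockerfile based on the instructions described next:

# Dockerfile (reconstructed; not copied from the original figure)
FROM python:3.9
WORKDIR /app
COPY . .
CMD ["python", "./hello-world.py"]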

Now, let’s run through what each of those commands in the Dockerfile mean.
FROM
Use the FROM command to specify the base image that you want your image
to derive from. Here, we’re using the Python version 3.9 base image as our
app is running in Python.
WORKDIR
Sets the current working directory inside the container. Like a cd in Linux.
Any subsequent commands in the Dockerfile will happen inside this directory.
COPY
The COPY instruction has the following format: COPY <source> <dest>.
It copies files from <source> (in the host) into <dest> (in the container).
So the above copy instruction is copying all files from the current working
directory on my local machine to the current working directory in the
container.
CMD
This is the command instruction, it specifies what to run when the container
is created. Here we’re telling Docker to tell Python to run our hello-world.py
app.
Now that we have the Dockerfile we can build an image from it. Do this using
the docker build command, which takes a directory and optional -t, which
allows you to add a tag to the image name.
docker build -t lgmooney98/hello-world:v1 .
The convention for naming Docker images is to use your Docker username
followed by a name, and the tag is usually used to specify the version of the
image.
docker build -t username/image-name:tag directory
So, the above command is telling Docker to look in the current directory for
the Dockerfile and build the image from it.
If you don’t already have the python 3.9 image, the build might take a few
minutes as Docker has to pull that image from the Docker registry.
We can check that our image has been created by using the docker images
command, which lists all images available locally.
Building the container
The next thing to do is run the image, which creates an instance of
the image – i.e. builds a container from the image. This is done with the
docker run command.
docker run lgmooney98/hello-world

Since we instructed in the Dockerfile for hello-world.py to be run, we expect
"Hello world" to be printed in the command prompt when this container is
created. Notice, if you do this, that the container closes after it has done
its thing; containers are ephemeral - they run as long as the command inside
the container takes to complete, then close.
We can change this behaviour by telling Docker to run the container
interactively, this is done with the -i flag. You also need the -t flag to
create a pseudo terminal, without it you won’t be able to issue commands in
the terminal inside the container. You can also specify a shell at the end of
the command, such as bash.
docker run -it lgmooney98/hello-world bash

From here I can run regular Linux commands, such as running our hello-world
Python script.

Use the docker stop command to stop a running container, you need to
provide the container ID – the first few characters should suffice, Docker just
needs to be able to distinguish it from the others.
docker stop a2c9
Running a website in a container locally
To show something slightly more interesting, I have created a
basic web app using flask, it shows “Hello world” in a web page, which
changes colour upon refreshing the page. To get this running in a container,
it’s the same process as with the hello world example above, only now we
need to change the Dockerfile.
First we need to change the CMD command to make Docker tell
Python to run app.py instead of hello-world.py. Secondly, we need to
tell Docker to expose a port in the container so that we can connect to the
web page being run inside the container later on - this is done with the
EXPOSE command. I have exposed port 5000 on the container here, as that
is the port the flask app is set to run on. And finally, since we're using the
flask package to build the app, we need to install that package. This can be
done with the RUN command, which runs whatever you tell it inside a shell
in the container while the image is being built; it's similar to CMD, except
there can only be one CMD command. Here, we're telling Docker to use the
package manager pip to install everything in requirements.txt, which
contains flask. A sketch of the resulting Dockerfile is shown below.
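A reconstruction of that Dockerfile, based only on the changes just described
(the original figure is not reproduced):

# Dockerfile for the flask app (reconstructed from the description above)
FROM python:3.9
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
EXPOSE 5000
CMD ["python", "./app.py"]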
We have a different Dockerfile now, and so we need to build a new image
from it. Once the image is built, we can create a container from it using the
docker run command. However, this time we need to map the port running
on the container (5000) to a port on the outside, which in this case is my
local machine, so that when we try to connect to the external port using our
browser, that port is connected internally to the port on the container; the
container can then serve the web app to our browser. This is done using the
-p flag; here, I’m mapping port 5000 on the inside to port 8888 on the
outside.
docker run -p 8888:5000 lgmooney98/hello-world-app
Now, go to your browser and go to localhost:8888, you should get the hello
world web page.
4.5 Docker Compose for multi-container applications
Docker Compose is a tool for defining and running multi-container
Docker applications. It reads the definitions of the multiple containers
from a single YAML configuration file and orchestrates them with single-line
commands, which makes it easy for developers to use.
It facilitates the management of complex applications by allowing
users to specify the services, networks, and Docker volumes required for
the application. In the configuration YAML file, developers can specify the
parameters for each service, including the Docker images, environment
variables, ports, and dependencies. By encapsulating the entire application
setup in the Compose file, Docker Compose provides consistency across
different development environments, making it easy to share and reproduce
the application reliably.

Starting up a single-container application is easy. For example, a
Python script that performs a specific data processing task runs within a
container with all its dependencies. Similarly, a Node.js application serving a
static website with a small API endpoint can be effectively containerized with
all its necessary libraries and dependencies. However, as applications grow in
size, managing them as individual containers becomes more difficult.
Imagine the data processing Python script needs to connect to a
database. Suddenly, you're now managing not just the script but also a
database server within the same container. If the script requires user logins,
you'll need an authentication mechanism, further bloating the container size.
One best practice for containers is that each container should do
one thing and do it well. While there are exceptions to this rule, avoid the
tendency to have one container do multiple things.
Now you might ask, "Do I need to run these containers
separately? If I run them separately, how shall I connect them all together?"
While docker run is a convenient tool for launching containers,
it becomes difficult to manage a growing application stack with it. Here's
why:
Imagine running several docker run commands (frontend,
backend, and database) with different configurations for development,
testing, and production environments. It's error-prone and time-consuming.
Applications often rely on each other. Manually starting containers in a
specific order and managing network connections become difficult as the
stack expands.
Each application needs its docker run command, making it
difficult to scale individual services. Scaling the entire application means
potentially wasting resources on components that don't need a boost.
Persisting data for each application requires separate volume mounts or
configurations within each docker run command. This creates a scattered
data management approach.
Setting environment variables for each application through
separate docker run commands is tedious and error-prone.
That's where Docker Compose comes to the rescue.
Docker Compose defines your entire multi-container application in a single
YAML file called compose.yml. This file specifies configurations for all your
containers, their dependencies, environment variables, and even volumes
and networks. With Docker Compose:
 You don't need to run multiple docker run commands. All you need to do
is define your entire multi-container application in a single YAML file. This
centralizes configuration and simplifies management.
 You can run containers in a specific order and manage network
connections easily.
 You can simply scale individual services up or down within the
multi-container setup. This allows for efficient allocation based on real-time
needs.
 You can implement persistent volumes with ease.
 It's easy to set environment variables once in your Docker Compose file.
By leveraging Docker Compose for running multi-container setups, you can
build complex applications with modularity, scalability, and consistency at
their core.
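As a concrete, minimal sketch, a compose.yml for a web app with a database
might look like the following; the service names, images, ports, and password
are illustrative:

services:
  web:
    build: .                     # built from the Dockerfile in this directory
    ports:
      - "8888:5000"              # host:container, as with docker run -p
    depends_on:
      - db                       # start the database first
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql   # named volume for persistent data
volumes:
  db-data:

With this file in place, docker compose up -d (docker-compose up on older
installations) starts the whole stack, and docker compose down stops it.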

Reference: https://docs.docker.com/get-started/docker-concepts/running-containers/multi-container-applications/
4.6 Introduction to container orchestration with Docker Swarm or
Kubernetes.
Introduction
Docker Swarm is a native clustering and orchestration tool for
Docker containers. It allows developers and IT administrators to create and
manage a cluster of Docker nodes as a single virtual system. This capability
is essential for deploying, managing, and scaling containerized applications in
production environments. Docker Swarm was introduced by Docker Inc. to
provide a simple yet powerful way to orchestrate containers, offering an
alternative to more complex orchestration solutions like Kubernetes.
The significance of Docker Swarm in container orchestration lies
in its seamless integration with the Docker ecosystem, its simplicity, and its
ability to efficiently manage resources. As organizations increasingly adopt
containerization for their applications, the need for effective orchestration
tools becomes paramount. Docker Swarm fulfills this need by offering a
robust and easy-to-use platform for managing containerized applications at
scale.
Core Concepts
To understand Docker Swarm, it’s essential to grasp its core
concepts. At its heart, Docker Swarm operates in “Swarm Mode,” which is
activated by initializing a swarm cluster. This mode transforms Docker nodes
into a coordinated cluster capable of running services and balancing
workloads.
Nodes
Nodes in a Docker Swarm are categorized into two
types: Manager nodes and Worker nodes. Manager nodes are responsible for
the overall management of the swarm, including maintaining the cluster
state, scheduling tasks, and managing membership and orchestration.
Worker nodes, on the other hand, execute the tasks assigned to them by the
managers. This division of labor ensures that the cluster operates efficiently
and scales effectively.

Services
Services in Docker Swarm define the desired state of the
application, including the number of replicas and the network configurations.
Each service consists of multiple tasks, which are individual instances of the
Docker container running the application. The swarm manager is responsible
for distributing these tasks across the available nodes, ensuring that the
desired state is maintained even in the face of node failures or changes in
demand.
Networking
Networking within a Docker Swarm is facilitated by overlay
networks, which enable communication between containers across different
nodes. This network abstraction allows services to discover and communicate
with each other seamlessly, regardless of their physical location within the
cluster. Additionally, Docker Swarm provides built-in load balancing,
distributing incoming requests across the various service replicas to ensure
optimal performance and resource utilization.
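A brief sketch, assuming a swarm has already been initialized (the service
name, network name, and ports are illustrative):

docker network create -d overlay app-net      # network spanning all swarm nodes
docker service create --name web --replicas 3 \
  --network app-net -p 8080:80 nginx          # 3 tasks spread across the nodes
docker service scale web=5                    # scale out; built-in load balancing
docker service ps web                         # show task placement across nodes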
Architecture
The architecture of Docker Swarm is designed to provide a robust
and scalable platform for container orchestration. The process begins with
the formation of a swarm cluster. This involves initializing a Docker node as a
manager and joining additional nodes as either managers or workers. The
manager nodes form a quorum, which is essential for maintaining the cluster
state and making decisions.
In a Docker Swarm cluster, the manager nodes play a crucial role
in ensuring data consistency and fault tolerance. They use the Raft
consensus algorithm to replicate the cluster state across all managers,
ensuring that changes are consistently applied even in the presence of node
failures. This high availability model allows the swarm to continue operating
smoothly even if some manager nodes go offline.
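Forming such a cluster takes only a few commands; the IP address below is
illustrative:

# On the first (manager) node:
docker swarm init --advertise-addr 192.168.1.10
# docker swarm init prints a join token; run the join on each worker node:
docker swarm join --token <worker-token> 192.168.1.10:2377
# Back on a manager, verify membership and manager status:
docker node ls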
Service scheduling in Docker Swarm involves the placement of
tasks across the worker nodes based on various strategies, such
as spread, binpack, and random. These strategies help optimize resource
utilization and ensure that the workload is evenly distributed. Docker Swarm
also supports constraints and affinities, allowing users to specify where tasks
should or should not run, based on node attributes.
Security is a critical aspect of Docker Swarm’s architecture. The
platform includes several built-in security features, such as mutual TLS for
node communication, role-based access control, and encrypted network
traffic. These features help protect the integrity and confidentiality of the
cluster, ensuring that only authorized nodes and users can interact with the
swarm.
Benefits of Docker Swarm
One of the primary benefits of Docker Swarm is its simplicity and
ease of use. Unlike some other orchestration tools, Docker Swarm is
designed to be straightforward and intuitive, making it accessible to
developers and IT administrators with varying levels of expertise. The tight
integration with Docker also means that users can leverage their existing
Docker knowledge and tools without needing to learn a new platform from
scratch.
Docker Swarm is also known for its efficient resource utilization.
By distributing tasks across the available nodes and balancing the workload,
Docker Swarm ensures that resources are used optimally, reducing waste
and improving performance. This efficiency is particularly valuable in
production environments where resource constraints are a common concern.
Security is another significant advantage of Docker Swarm. The
platform’s built-in security features, such as mutual TLS and encrypted
networks, provide robust protection against unauthorized access and data
breaches. This security model helps organizations maintain compliance with
industry standards and best practices, safeguarding their applications and
data.
Performance and scalability are key strengths of Docker Swarm.
The platform can handle large-scale deployments with ease, allowing
organizations to scale their applications horizontally by adding more nodes to
the swarm. The built-in load balancing and service discovery mechanisms
ensure that the system remains responsive and performant even under heavy
load.
Comparison with Kubernetes

Kubernetes is another popular container orchestration tool that
often comes up in discussions alongside Docker Swarm. While both tools aim
to solve similar problems, they have different approaches and feature sets
that make them suitable for different use cases.
Kubernetes is known for its comprehensive feature set and
flexibility. It provides a rich set of APIs and tools for managing complex
applications, including advanced scheduling, self-healing capabilities, and
extensive networking options. However, this complexity can also make
Kubernetes more challenging to set up and manage, particularly for users
who are new to container orchestration.
In contrast, Docker Swarm is designed to be simpler and more
straightforward. Its tight integration with Docker and user-friendly interface
make it an attractive option for small to medium-sized deployments and for
users who prioritize ease of use over advanced features. While Docker
Swarm may not offer the same level of flexibility and customization as
Kubernetes, it provides a more accessible entry point for organizations
looking to adopt container orchestration.
The community and ecosystem support for Kubernetes is also
more extensive than for Docker Swarm. Kubernetes has a larger user base,
more third-party integrations, and a broader range of community-contributed
tools and extensions. This ecosystem can be a significant advantage for
organizations that require a wide array of functionalities and integrations.
Best Use Cases
Docker Swarm is best suited for small to medium-sized
deployments, where its simplicity and ease of use can be fully leveraged. It is
an excellent choice for development and testing environments, where quick
setup and straightforward management are priorities. Docker Swarm is also
well-suited for applications that do not require the advanced features and
flexibility of Kubernetes.
For organizations that are already heavily invested in the Docker
ecosystem, Docker Swarm provides a seamless extension of their existing
workflows and tools. This integration can simplify the transition to container
orchestration and reduce the learning curve for teams.
10. Assignment Questions
S.No. | Topic | K-Level | COs

1 | Develop a project to demonstrate containerizing an application.
Refer: https://docs.docker.com/get-started/workshop/02_our_app/ | K3 | CO4

2 | Develop an application using Docker Compose.
Refer: https://docs.docker.com/get-started/workshop/08_using_compose/ | K3 | CO4
11. Part A - Questions & Answers
1. What is Containerization? (CO4, K1)
Containerization is a software deployment process that
packages an application's code and dependencies into a single
container. This container can then run on any operating system and
infrastructure.
2. List out the Benefits of containerization (CO4, K1)
Portability : Containers can run on any infrastructure, including bare
metal, VMs, and the cloud.
Efficiency : Containers are lightweight and resource-efficient, and they
share the same operating system kernel.
Consistency : Containers ensure that applications run consistently across
different environments.
Speed : Containerization allows developers to create and deploy
applications faster.
3. How does containerization work? (CO4, K3)
Isolation : Containers are isolated from the host operating system, and
they only have limited access to underlying resources.
Sharing : Containers share the same operating system kernel with other
containers on the same computing system.
Automation : Containerization is highly conducive to automation, which
can help streamline development, testing, and deployment.
4. List out the Tools for containerization (CO4, K2)
Docker :
A popular containerization technology that is easy to use but can be less
secure than other options
Rkt : A newer containerization technology that is designed to be more
secure and easier to use than Docker
Kubernetes : A container orchestration tool that can help manage the
complexity of containerized environments
5. What is a Docker and why is it used? (CO4, K1)
Docker is an operating system for containers. Similar to how a
virtual machine virtualizes (removes the need to directly manage) server
hardware, containers virtualize the operating system of a server. Docker is
installed on each server and provides simple commands you can use to
build, start, or stop containers.
6. What is Docker most used for? (CO4, K1)
An open-source platform, Docker is used by developers to help
them automate the deployment of applications inside containers. A
consistent environment is provided by docker so that the software can run
across multiple computing environments. It gets easier to efficiently ship,
build and run applications due to docker.
7. What is a Docker machine used for? (CO4, K1)
Docker Machine is a tool used to install and manage Docker Engine on
various virtual hosts or older versions of macOS and Windows. When
Docker Machine is installed on the local system, executing a command
through Docker Machine not only creates virtual hosts, but also installs
Docker and configures its clients.
8. Why is Docker so famous? (CO4, K2)
Efficient Resource Usage. Unlike traditional virtual machines (VMs),
containers share the host OS kernel and run as isolated processes, leading
to much lower overhead. This makes Docker more resource-efficient
compared to VMs, allowing you to pack more containers on a single host
machine.
9. What is the purpose of docker run? (CO4, K1)
The docker run command is used to create a running container from a Docker
image. It is used with options, images, commands, and arguments, and is
central to developing, shipping, and running applications in containers.
10. Why run Python in Docker? (CO4, K2)
It simplifies the process of managing Python environments and makes it
easier to collaborate with others. Personally, I have found Docker to be a
game-changer in my development work
11. How to check the Docker version? (CO4, K3)
The following command from a terminal prompt can be used to check the
Docker version:
docker --version
The Compose version can be checked similarly:
docker-compose --version
(Commands such as sudo systemctl start docker, sudo systemctl enable docker,
and sudo usermod -a -G docker <username> manage the Docker service and user
access rather than report the version.)
12. What is a Docker image? (CO4, K1)
Docker is a software platform that packages software into containers.
Docker images are read-only templates that contain instructions for
creating a container. A Docker image is a snapshot or blueprint of the
libraries and dependencies required inside a container for an application to
run.
13. Is Docker Engine free? (CO4, K2)
Docker Engine (Community Edition) is free to use. Docker also offers a paid
Enterprise Edition with additional features and support for business use.
Docker Desktop is also free for personal use, but there are commercial
licenses available for enterprises that include additional features and
support.
14. What should you name your Dockerfile? (CO4, K1)
The default filename for a Dockerfile is Dockerfile, without a file
extension. Using the default name allows you to run the docker build
command without having to specify additional command flags. Some projects
may need distinct Dockerfiles for specific purposes.
15. What is the CLI in Docker? (CO4, K1)
The Docker command-line interface (Docker CLI) is a robust tool that
empowers you to interact with Docker containers and manage different
aspects of the container ecosystem directly from the command line.
16. What is the T flag in Docker? (CO4, K1)
From the official documentation, Docker states that the -t option will
“allocate a pseudo-TTY” to the process inside the container. TTY stands for
Teletype and can be interpreted as a device that offers basic input-output.
17. What is the P flag in docker? (CO4, K1)
To publish a port for your container, you'll use the --publish flag ( -p for
short) on the docker run command. The format of the --publish command
is [host_port]:[container_port] .
18. What is the IP range of Docker? (CO4, K1)
By default, Docker uses 172.17.0.0/16 for the bridge network (also called
docker0); additional networks are allocated from subsequent ranges such as
172.18.0.0/16.
19. How to run a container in Docker? (CO4, K3)
Use the following instructions to run a container.
1. Open Docker Desktop and select the Search field on the top navigation
bar.
2. Specify welcome-to-docker in the search input and then select the Pull
button.
3. Once the image is successfully pulled, select the Run button.
4. Expand the Optional settings.
5. In the Container name, specify welcome-to-docker.
6. In the Host port, specify 8080.
7. Select Run to start your container.
20. What’s the difference between virtualization and
containerization? (CO4, K1)
Virtualization is an abstract version of a physical machine, while
containerization is the abstract version of an application.
21. Describe a Docker container’s lifecycle. (CO4, K2)
Although there are several different ways of describing the steps in a
Docker container’s lifecycle, the following is the most common:
Create container
Run container
Pause container
Unpause container
Start container
Stop container
Restart container
Kill container
Destroy container
22. Name the essential Docker commands and what they do.
(CO4, K1)
The most critical Docker commands are:
Build. Builds a Docker image file
Commit. Creates a new image from container changes
Create. Creates a new container
Dockerd. Launches Docker daemon
Kill. Kills a container
23. List some of the more advanced Docker commands and what
they do. (CO4, K1)
Some advanced commands include:
Docker info. Displays system-wide information regarding the Docker
installation
Docker pull. Downloads an image
Docker stats. Provides you with container information
Docker images. Lists downloaded images
24. Can you implement continuous development (CD) and
continuous integration (CI) in Docker? (CO4, K2)
Yes, you can. You can run Jenkins on Docker and use Docker Compose to
run integration tests.
25. How do you create a Docker swarm? (CO4, K3)
Use the following command: docker swarm init --advertise-addr <manager IP>
26. What are your go-to tools for creating and managing
containerization environments? (CO4, K1)
When it comes to creating and managing containerization environments,
my go-to tools are Docker and Kubernetes.
Docker is a powerful containerization platform that enables me to easily
package, deploy, and scale applications. With Docker, I am able to create
lightweight, portable containers that can run on any platform, making it
easy to move my applications from development to production
environments.
Kubernetes, on the other hand, is a powerful container orchestration tool
that allows me to manage and scale my containerized applications with
ease. With Kubernetes, I am able to automate deployment, scaling, and
management of my applications, reducing the time and effort required to
manage a large number of containers.
Using Docker and Kubernetes together, I am able to create highly resilient
and scalable environments that allow an application to scale from a few
hundred users to millions of users without having to worry about
infrastructure management costs. With the help of these tools, I was able
to reduce the deployment time of a recent project from a week to just a
few hours, leading to a significant increase in productivity and efficiency for
the team.
12.PART – B Questions
1. Explain in detail about Containerization. (CO4, K2)
2. Discuss the benefits of Containerization. (CO4, K2)
3. Explain Docker concepts in details : images, containers, registries.
(CO4, K2)
4. Develop an application to demonstrate Building and managing Docker
containers . (CO4, K3)
5. How is Docker Compose used in building multi-container applications?
(CO4, K3)
6. Describe with an example application to illustrate container
orchestration with Docker Swarm or Kubernetes. (CO4, K3)
Note: All the above questions require diagrams to illustrate the concepts,
wherever necessary.
13. Supportive online courses

Online Courses
1. https://www.practical-devsecops.com/containerization-certification/

2. https://www.coursera.org/learn/aws-containerization

External Links for Additional Resources

1. https://www.linkedin.com/pulse/detailed-guide-containers-kubernetes-certifications-atul-kumar-ei4qc/

2. https://www.shiksha.com/online-courses/container-certification
14. Real Time Applications
Examples of containerized real-time applications:

Docker: An open-source container runtime that allows developers to build,
deploy, and test containerized applications.
Kubernetes: An open-source platform for managing containerized workloads
and services.
Amazon Elastic Container Service (ECS): A container orchestration service
that runs containerized applications on Amazon Web Services.
Google Kubernetes Engine (GKE): A managed Kubernetes service from Google
that automates application container deployment.
Microsoft Azure Container Instances (ACI): A container service for running
containers on Azure.
15. Content Beyond Syllabus

Containerization use cases for startups


 Building consistency in environments

 Infrastructure and environment isolation

 Implementation of different strategies


16. Assessment Schedule

Tentative schedule for the assessments during the 2024-2025 even semester:

S.No. | Name of the Assessment | Start Date | End Date | Portion

1 | Unit Test 1 | January 2025 | January 2025 | Unit 1
2 | IAT 1 | 25th January 2025 | 03rd February 2025 | Units 1 & 2
3 | Unit Test 2 | February 2025 | February 2025 | Unit 3
4 | IAT 2 | 10th March 2025 | 15th March 2025 | Units 3 & 4
5 | Revision 1 | April 2025 | April 2025 | Units 5, 1 & 2
6 | Revision 2 | April 2025 | April 2025 | Units 3 & 4
7 | Model | 03rd April 2025 | 17th April 2025 | All 5 Units

17. Text Books & References

TEXTBOOKS:
1. Deepak Gaikwad, Viral Thakkar, "DevOps Tools: from Practitioner's
Point of View", Wiley, 2019.
2. Jennifer Davis, Ryn Daniels, "Effective DevOps", O'Reilly Media, 2016.

REFERENCES:
1. Gene Kim, Jez Humble, Patrick Debois, "The DevOps Handbook: How
to Create World-Class Agility, Reliability, and Security in Technology
Organizations", IT Revolution Press, 2016.
2. Jez Humble, Gene Kim, "Continuous Delivery: Reliable Software
Releases Through Build, Test, and Deployment Automation", Addison-
Wesley, 2010.
3. Yevgeniy Brikman, "Terraform: Up & Running: Writing Infrastructure
as Code", O'Reilly Media, 2019.
4. Joseph Muli, "Beginning DevOps with Docker", Packt Publishing, 2018.
18. Mini Project Suggestions

Top 10 Docker Project Ideas


1. Build a Personal Portfolio Website with Docker
2. Dockerized To-Do List Application
3. Simple Weather App in a Container
4. Set Up a Dockerized WordPress Website
5. Create a Python API and Deploy with Docker
6. Build a Chat Application with Docker
7. Create a Custom Docker Image
8. E-commerce Application in Docker
9. Develop a Dockerized CI/CD Pipeline
10. Containerize a Machine Learning Model
Thank you
