
DOCKER DOCUMENT WITH HANDS-ON

Introduction to Docker
Docker is an open-source platform designed to automate the deployment, scaling, and
management of applications using containers. A container is a lightweight, portable, and self-sufficient
unit that packages up an application and all its dependencies (including libraries, frameworks,
configurations, etc.) so that the application can run consistently across any environment—whether it's
on a developer's laptop, a test server, or in production on the cloud.

Benefits of Docker:

1. Portability: Docker containers can run anywhere—whether on a developer's local machine, in the cloud, or on a virtual server. As long as Docker is supported, you can run your application without worrying about the underlying environment.

2. Efficiency: Containers are lightweight because they share the host system's kernel rather than
having their own operating system (like VMs). This reduces overhead and allows for faster
startup times.

3. Isolation: Containers run applications in isolated environments. This isolation ensures that
dependencies and configurations for one application don’t interfere with another.

4. Version Control: Docker images can be versioned, so you can have different versions of your
app, dependencies, or configurations, and easily switch between them.
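For example, an application can be kept under several image tags and a specific version selected at run time (a minimal sketch; the image names and tags below are illustrative):

docker pull nginx:1.25
docker pull nginx:1.27
docker run -d --name web nginx:1.25                      # pin the container to a specific image version
docker tag nginx:1.27 myregistry.example.com/web:1.27    # re-tag a version for a (hypothetical) private registry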

Common Use Cases:

• Microservices: Docker allows developers to break down complex applications into smaller,
manageable services (microservices) that can run independently in containers.

• DevOps & CI/CD: Docker is widely used in continuous integration/continuous deployment (CI/CD) pipelines to ensure that the application runs consistently across different stages of development, testing, and production.

• Environment Consistency: It eliminates the "it works on my machine" problem, as Docker ensures the same environment is used across all stages of development.

Docker Terminology
1. Container: A lightweight, stand-alone, executable package that includes everything needed to
run a piece of software (e.g., code, libraries, system tools, etc.). Containers run from Docker
images.

2. Image: A read-only template used to create containers. It consists of a series of layers (files and
commands). You can think of it as a snapshot of a file system at a particular point.

3. Dockerfile: A script containing a series of instructions on how to build a Docker image. It defines
the environment in which the application runs.

4. Docker Engine: The core component of Docker that executes containers.

5. Registry: A storage and distribution system for Docker images. The default registry is Docker
Hub, but you can also set up private registries.

6. Volume: A persistent data storage mechanism. Volumes are used to store and manage data
that should not be lost when containers are removed.

7. Network: A virtual network allowing containers to communicate with each other and with
external services.
Basic Docker Commands
Install Docker - For Linux (Ubuntu):

sudo apt-get update

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

sudo apt-get update

sudo apt-get install docker-ce

For macOS and Windows:

• Download Docker Desktop from Docker’s website.

• Install it by following the setup instructions.

1. Check Docker version

docker --version

2. Run a container

docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Example:

docker run hello-world

3. List running containers

docker ps

4. List all containers (including stopped)

docker ps -a

5. Stop a running container

docker stop <container_id>

6. Start a stopped container

docker start <container_id>

7. Remove a container

docker rm <container_id>

8. Remove a container forcefully

docker rm -f <container_id>

9. Remove an image

docker rmi <image_id>


10. Pull an image from Docker Hub

docker pull <image_name>

11. Build an image from a Dockerfile

docker build -t <image_name> <path_to_dockerfile>

12. List all images

docker images

13. View container logs

docker logs <container_id>

14. Attach to a running container

docker attach <container_id>

15. Execute a command inside a running container

docker exec -it <container_id> <command>

16. View detailed information about a container

docker inspect <container_id>
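Putting several of these commands together, a typical quick check of a container might look like this (a sketch using the public nginx image; the container name is arbitrary):

docker run -d --name web nginx       # start a container in the background
docker ps                            # confirm it is running
docker logs web                      # view its output
docker exec -it web /bin/sh          # open a shell inside it (type "exit" to leave)
docker stop web && docker rm web     # stop and remove it when finished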

Docker Networking

1. List Docker networks

docker network ls

2. Create a custom network

docker network create <network_name>

3. Connect a container to a network

docker network connect <network_name> <container_id>

4. Disconnect a container from a network

docker network disconnect <network_name> <container_id>
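Containers can also join a network directly at start time instead of being connected afterwards (names below are illustrative):

docker network create app_net
docker run -d --name api --network app_net nginx    # starts attached to app_net rather than the default bridge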

Docker Volumes

1. Create a volume

docker volume create <volume_name>

2. List volumes

docker volume ls

3. Inspect a volume

docker volume inspect <volume_name>

4. Remove a volume

docker volume rm <volume_name>
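Volumes are normally consumed by mounting them into a container with the -v flag (a small sketch; names are illustrative):

docker volume create web_data
docker run -d --name web -v web_data:/usr/share/nginx/html nginx   # files under this path now persist in web_data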


Docker History

Aspect | Before Docker | After Docker
Environment Consistency | Inconsistent between development, staging, and production | Consistent across all environments with containers
Dependency Management | Manual management and version conflicts | All dependencies bundled inside Docker containers
Resource Usage | Virtual machines, resource-heavy | Lightweight containers, more efficient resource usage
Scaling | Manual, complex configurations | Automated scaling and orchestration with Kubernetes
CI/CD Pipelines | Complex, error-prone environment replication and deployments | Simple, fast, and repeatable builds
Microservices | Difficult to manage and deploy | Easy to deploy and scale independent microservices
Security & Isolation | VMs or bare-metal isolation | Process-level isolation with Docker containers
Portability | Machine-specific configurations | Highly portable across different environments

Monolithic Architecture vs. Microservices Architecture


In the context of software architecture, monolithic and microservices are two different approaches to
building and deploying applications. They differ in how the application is structured, how components
interact, and how they are maintained and scaled.

Monolithic Architecture
A monolithic architecture is a traditional approach where an entire application is built as a single,
unified unit. All the different components (like the user interface, business logic, and data access layers)
are tightly integrated into a single codebase and are deployed together as one unit.

Key Characteristics of Monolithic Architecture:

1. Single Codebase:

o The entire application’s functionality (user interface, business logic, data management)
resides in a single codebase.

2. Tightly Coupled:

o All components of the application are interdependent, meaning changes in one part can
affect the entire application.

3. Single Deployment:

o The application is packaged and deployed as a single executable or web application. When deploying updates, the entire application must be redeployed.
4. Scaling:

o Scaling a monolithic application typically involves scaling the entire application by replicating instances, which can be inefficient if only specific parts of the application need scaling.

5. Development:

o Monolithic applications are often easier to develop initially, as everything is in one place,
and there’s no need for complex communication between different services or
components.

6. Maintenance:

o As the application grows, it can become harder to maintain and update because changes
in one area might impact other areas, and testing can become more complex.

Pros of Monolithic Architecture:

• Simpler to develop initially: A unified codebase makes it easier to get started and build the
application.

• Easier to test: Since everything is in one place, integration testing is straightforward.

• Performance: No inter-service communication overhead, as everything runs in the same process.

Cons of Monolithic Architecture:

• Scalability issues: Difficult to scale specific parts of the application independently.

• Lack of flexibility: Difficult to update or add new features without affecting the entire application.

• Risk of tight coupling: Changes in one area of the application can unintentionally break other
parts.

Example Use Case for Monolithic:

• A small e-commerce website where all the components (product catalog, user management,
checkout, etc.) are tightly coupled together.

Microservices Architecture
Microservices architecture breaks down an application into small, loosely coupled, and independently
deployable services. Each microservice is responsible for a specific business function and
communicates with other microservices via APIs (often HTTP or message queues).

Key Characteristics of Microservices Architecture:

1. Decoupled Services:

o Each microservice is a self-contained unit that encapsulates a specific business function. These services are developed, deployed, and scaled independently.

2. Independent Deployment:

o Each microservice can be developed, deployed, and scaled independently of the others.
This allows for easier updates and better fault isolation.
3. Technology Agnostic:

o Microservices can be written in different programming languages and technologies. Each service is independent and doesn't require the entire application to be homogeneous.

4. Communication via APIs:

o Microservices communicate with each other via lightweight protocols (e.g., REST APIs,
gRPC, or messaging queues). This allows services to interact in a decoupled manner.

5. Scaling:

o Each microservice can be scaled independently based on demand, which makes it more
efficient for resource allocation and improves performance for specific features of the
application.

6. Fault Isolation:

o If one microservice fails, it doesn’t necessarily bring down the entire system. This
isolation improves system resilience and robustness.

Pros of Microservices Architecture:

• Scalability: Each microservice can be scaled independently based on its own requirements.

• Flexibility: Development teams can use different technologies for different services, based on
specific needs.

• Independent Deployment: Individual services can be updated or deployed without affecting the entire application, speeding up development and delivery cycles.

• Resilience: Failures in one service won’t necessarily affect other services.

Cons of Microservices Architecture:

• Complexity: Microservices require careful management of inter-service communication, API gateways, service discovery, and security. It can be harder to monitor and manage many small services.

• Distributed systems: Because services are distributed, issues like network latency, failure
handling, and data consistency become more complicated.

• Data management: Managing data consistency and transactions across multiple services can
be challenging.

Example Use Case for Microservices:

• A large e-commerce platform where different services (product catalog, order management,
payment, inventory) are managed by separate teams and need to be scaled independently.
Monolithic vs. Microservices: Key Differences

Feature | Monolithic Architecture | Microservices Architecture
Structure | Single unified application | Multiple independent services
Development | Easier to develop initially, but becomes harder as the app grows | More complex due to the need for managing multiple services
Deployment | Single deployment unit | Independent deployment of services
Scaling | Scale the entire application as a unit | Scale individual services independently
Technology Stack | Single technology stack for the whole app | Different technology stacks for each service
Fault Tolerance | Failure in one part of the app can impact the entire system | Failures are isolated to specific services
Maintenance | Harder to maintain as the app grows in size | Easier to maintain with smaller, focused services
Communication | Direct function calls within the same process | Communication via APIs between services
Examples | Small to medium-sized applications | Large-scale, complex systems with many services

Docker Architecture
Docker follows a client-server architecture and relies on several components that work together to
enable the creation, management, and running of containers. The architecture is designed to provide a
consistent environment across different stages of development, testing, and production.

Here’s a breakdown of Docker’s architecture and its key components:

Key Components of Docker Architecture:

1. Docker Client

o What it is: The Docker client is the primary interface through which users interact with
Docker. It can be a command-line interface (CLI) or a graphical user interface (GUI).

o Functionality:

▪ The Docker client sends requests to the Docker daemon (server) to perform
actions like building containers, running containers, and pulling images.

▪ Common commands include docker run, docker build, docker pull, and docker
push.

o How it works: The client communicates with the Docker daemon via the Docker API
over a network (either locally or remotely).
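On a machine with a standard local installation (default Unix socket), you can see this split by comparing the CLI output with a raw Engine API call (the curl call assumes the default socket path):

docker version                                                      # prints separate Client and Server (Engine) sections
curl --unix-socket /var/run/docker.sock http://localhost/version    # the same information, fetched directly from the API the client uses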
2. Docker Daemon (dockerd)

o What it is: The Docker daemon (also called dockerd) is a background process that
manages the creation, running, and monitoring of Docker containers.

o Functionality:

▪ The daemon listens for Docker API requests from the Docker client.

▪ It builds Docker images, runs containers from those images, manages containers, and interacts with Docker registries to pull and push images.

▪ The daemon is responsible for managing the Docker containers, images, networks, and volumes.

o How it works: The daemon can run on the same machine as the client or on a remote
machine.

3. Docker Images

o What it is: A Docker image is a read-only template used to create containers. It includes
the application code, libraries, dependencies, and runtime needed for the containerized
application.

o Functionality:

▪ Images are the foundation of Docker containers. They can be created from a
Dockerfile, which contains instructions for how to build the image.

▪ Docker images are stored in a registry and can be pulled from a registry like
Docker Hub or a private Docker registry.

o How it works: When you run a Docker container, you are essentially creating a running
instance of an image.

4. Docker Containers

o What it is: A Docker container is a lightweight, standalone, executable package that contains everything needed to run an application—code, runtime, libraries, environment variables, and configuration files.

o Functionality:

▪ Containers are isolated from each other and the host system but share the host
system's kernel.

▪ They run based on the image created from a Dockerfile.

▪ Containers are ephemeral—once they are stopped, they can be easily removed.

o How it works: Containers are created from Docker images and can run in isolation or
interact with other containers and services depending on the network configurations.

5. Docker Registry

o What it is: A Docker registry is a storage and distribution system for Docker images.
It allows Docker images to be stored, versioned, and shared.

o Functionality:
▪ The Docker registry is where you store your images. The default public registry
is Docker Hub. There are also private registries that organizations may use for
storing proprietary images.

▪ The Docker client interacts with the registry to pull (download) and push (upload)
images.

o How it works: When you use the command docker pull <image>, it fetches the image
from the Docker registry, and when you use docker push, it uploads the image to the
registry.

6. Docker Network

o What it is: Docker networking provides the ability to connect Docker containers to each
other and to external systems.

o Functionality:

▪ Containers can be connected to different types of networks, such as bridge networks, host networks, and overlay networks.

▪ Docker supports multiple network modes to ensure isolated or shared networking between containers.

o How it works: Docker containers can be connected to virtual networks to communicate with each other, either on the same host or across multiple hosts.

7. Docker Volumes

o What it is: A Docker volume is a persistent data storage mechanism for Docker
containers.

o Functionality:

▪ Volumes are used to store data that needs to persist across container restarts
(e.g., database data, logs).

▪ Volumes are independent of containers, which means the data persists even if
the container is deleted.

o How it works: Volumes can be shared between containers or used by a single container,
and they can be mounted into a container at a specific location within the container’s
filesystem.

Dockerfile Components and Their Working


A Dockerfile is a script that contains instructions on how to build a Docker image. Docker uses this file
to automate the process of building a container image. Each instruction in a Dockerfile creates a layer
in the image, and once the image is built, it can be used to create and run containers.

In this guide, we’ll break down the key components of a Dockerfile, show you how each one works,
and provide practical examples of how to use them.

Common Dockerfile Components

Here are the most commonly used instructions in a Dockerfile:

1. FROM – Specifies the base image for the container.


2. LABEL – Adds metadata to an image.

3. RUN – Executes commands to install software inside the container.

4. COPY – Copies files from the host machine into the container.

5. ADD – Similar to COPY but with additional features like extracting tar files and pulling files from
URLs.

6. WORKDIR – Sets the working directory for any following instructions.

7. CMD – Specifies the default command to run when the container starts.

8. ENTRYPOINT – Defines the main command that will always be executed.

9. EXPOSE – Informs Docker that the container will listen on the specified network ports.

10. ENV – Sets environment variables.

11. VOLUME – Creates a mount point with a volume for persistent data storage.

12. USER – Specifies which user to run the commands as.

13. ARG – Defines build-time variables.

14. HEALTHCHECK – Defines a command to check the health of the container.
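Instructions 11–14 get less attention in the worked examples below, so here is a minimal, illustrative Dockerfile sketch that combines ARG, USER, and HEALTHCHECK (the base image, user name, and health check are assumptions for demonstration only, not part of the examples that follow):

dockerfile

FROM alpine:3.19

# Build-time variable; override with: docker build --build-arg APP_VERSION=2.0 .
ARG APP_VERSION=1.0
LABEL version=$APP_VERSION

# Create a non-root user and run everything that follows as that user
RUN adduser -D appuser
USER appuser
WORKDIR /home/appuser

# Mark the container unhealthy if this check keeps failing (a placeholder check for demonstration)
HEALTHCHECK --interval=30s --timeout=3s CMD test -f /tmp/healthy || exit 1

CMD ["sh", "-c", "touch /tmp/healthy && sleep 3600"]

After running a container from such an image, docker ps shows the health status reported by the HEALTHCHECK.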

1. FROM

The FROM instruction is used to specify the base image for the Docker image you are building. It’s
usually the first line in a Dockerfile.

Syntax:

dockerfile

FROM <image_name>:<tag>

Example:

dockerfile

FROM ubuntu:20.04

This uses the ubuntu image with the tag 20.04 as the base image for the new image.

Working Example:

1. Create a simple Dockerfile to start from a basic Ubuntu image:

dockerfile

# Use Ubuntu 20.04 as base image

FROM ubuntu:20.04

# Print Hello World

RUN echo "Hello, Docker!"

2. Build the Docker image:

docker build -t ubuntu-hello-world .

3. Run the Docker container:


docker run ubuntu-hello-world

2. LABEL

The LABEL instruction allows you to add metadata to your image, such as version, author, or
description.

Syntax:

dockerfile

LABEL <key>=<value>

Example:

dockerfile

LABEL maintainer="[email protected]"

Working Example:

1. Add a LABEL to the Dockerfile:

dockerfile

FROM ubuntu:20.04

LABEL maintainer="[email protected]"

RUN echo "This is a labeled image."

2. Build and run the image:

docker build -t ubuntu-labeled .

docker run ubuntu-labeled

3. To inspect the image and see the label:

docker inspect ubuntu-labeled

3. RUN

The RUN instruction executes commands in the container. It is often used to install packages or perform
other setup tasks.

Syntax:

dockerfile

RUN <command>

Example:

RUN apt-get update && apt-get install -y curl

Working Example:

1. Modify the Dockerfile to install curl:

dockerfile

FROM ubuntu:20.04

RUN apt-get update && apt-get install -y curl


RUN curl --version

2. Build and run the image:

docker build -t ubuntu-curl .

docker run ubuntu-curl

4. COPY and ADD

• COPY: Copies files from the host machine to the container.

• ADD: Similar to COPY, but with additional features like extracting .tar files and supporting remote
URLs.

Syntax:

dockerfile

COPY <src> <dest>

ADD <src> <dest>

Example:

COPY ./localfile.txt /app/localfile.txt

ADD ./archive.tar /app/

Working Example:

1. Modify the Dockerfile to copy a file:

dockerfile

FROM ubuntu:20.04

COPY ./localfile.txt /app/localfile.txt

RUN cat /app/localfile.txt

2. Create a file localfile.txt in your project directory.

3. Build and run the image:

docker build -t ubuntu-copy .

docker run ubuntu-copy

5. WORKDIR

The WORKDIR instruction sets the working directory inside the container for any subsequent
instructions.

Syntax:

dockerfile

WORKDIR /path/to/directory

Example:

dockerfile

WORKDIR /app
RUN touch file.txt

Working Example:

1. Add WORKDIR to the Dockerfile:

dockerfile

FROM ubuntu:20.04

WORKDIR /app

RUN echo "Hello from /app directory" > hello.txt

2. Build and run the image:

docker build -t ubuntu-workdir .

docker run ubuntu-workdir cat /app/hello.txt

6. CMD

The CMD instruction specifies the default command that will run when the container starts. It can be
overridden by providing a command when running the container.

Syntax:

CMD ["executable", "param1", "param2"]

CMD ["param1", "param2"]

CMD ["executable param1 param2"]

Example:

CMD ["echo", "Hello, World!"]

Working Example:

1. Modify the Dockerfile to print a message:

dockerfile

FROM ubuntu:20.04

CMD ["echo", "This is the default command."]

2. Build and run the image:

docker build -t ubuntu-cmd .

docker run ubuntu-cmd

7. ENTRYPOINT

The ENTRYPOINT instruction is similar to CMD but provides more control over the container's default
behavior. It defines the executable that will always run when the container starts.

Syntax:

dockerfile

ENTRYPOINT ["executable", "param1", "param2"]

Example:
ENTRYPOINT ["echo"]

CMD ["Hello from ENTRYPOINT"]

Working Example:

1. Modify the Dockerfile to use ENTRYPOINT:

dockerfile

FROM ubuntu:20.04

ENTRYPOINT ["echo"]

CMD ["Hello from ENTRYPOINT"]

2. Build and run the image:

docker build -t ubuntu-entrypoint .

docker run ubuntu-entrypoint

3. The echo defined by ENTRYPOINT always runs; the default text supplied by CMD can be overridden by passing additional arguments at run time.

8. EXPOSE

The EXPOSE instruction tells Docker that the container will listen on specific network ports at runtime.
This is for informational purposes and does not actually publish the port.

Syntax:

dockerfile

EXPOSE <port>

Example:

dockerfile

EXPOSE 8080

Working Example:

1. Modify the Dockerfile to expose a port:

dockerfile

FROM ubuntu:20.04

EXPOSE 8080

RUN echo "Exposing port 8080"

2. Build and run the image:

docker build -t ubuntu-expose .

docker run -p 8080:8080 ubuntu-expose
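Because EXPOSE by itself publishes nothing, the related -P flag is worth knowing: it maps every exposed port to a random host port. A quick illustration with the public nginx image, which exposes and actually listens on port 80:

docker run -d -P --name web nginx
docker port web      # shows which host port was assigned to the container's exposed port 80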

9. ENV

The ENV instruction sets environment variables in the container.

Syntax:
dockerfile

ENV <key>=<value>

Example:

dockerfile

ENV MY_VAR="Some value"

Working Example:

1. Modify the Dockerfile to set an environment variable:

dockerfile

FROM ubuntu:20.04

ENV MY_VAR="Hello, Docker"

RUN echo $MY_VAR

2. Build and run the image:

docker build -t ubuntu-env .

docker run ubuntu-env
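To confirm the variable is also present at run time (not only during the build), override the default command and print it:

docker run ubuntu-env printenv MY_VAR      # prints: Hello, Docker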

10. VOLUME

The VOLUME instruction creates a mount point with a volume to persist data.

Syntax:

dockerfile

VOLUME ["/data"]

Working Example:

1. Modify the Dockerfile to create a volume:

FROM ubuntu:20.04

VOLUME ["/data"]

RUN echo "Volume created at /data"

2. Build and run the image:

docker build -t ubuntu-volume .

docker run -v /path/on/host:/data ubuntu-volume

Difference between the Components


ADD and COPY:

• COPY: Copies files from the host machine (build context) into the container's filesystem. It is a
simple and reliable method to copy files and directories.

Use case: Use COPY when you want to copy local files or directories into your Docker image without
additional functionalities.
Syntax:

COPY <src> <dest>

• ADD: Similar to COPY, but more powerful. It can:

o Copy files and directories like COPY.

o Automatically unpack tar archives (both .tar and .tar.gz).

o Copy files from remote URLs.

Use case: Use ADD when you need to extract compressed files or copy remote URLs to the container.

Syntax:

ADD <src> <dest>

Key Difference:

o COPY is preferred for most scenarios because it’s simpler and more predictable.

o ADD should only be used when you need the additional features, such as auto-extraction
of tar files or copying from a URL.

Example:

dockerfile

# COPY example: Copies local files into container

COPY ./localfile.txt /app/

# ADD example: Unzips the tar archive into the container

ADD ./archive.tar /app/

RUN vs CMD vs ENTRYPOINT

• RUN: Executes commands during the image build process and creates a new image layer with
the result of the command.

Use case: Use RUN to install software, create directories, or execute other setup tasks during the
image build process.

Syntax:

dockerfile

RUN <command>

• CMD: Specifies the default command to run when the container starts. This command can be
overridden at runtime.

Use case: Use CMD for default behavior that can be overridden by the user when running the container.

Syntax:

dockerfile

CMD ["executable", "param1", "param2"]

• ENTRYPOINT: Specifies the main executable that runs when the container starts. This is a fixed
command and is not easily overridden by the user.
Use case: Use ENTRYPOINT when you want to define a fixed command that always runs, and
optionally pass arguments through CMD.

Syntax:

dockerfile

ENTRYPOINT ["executable", "param1", "param2"]

Key Difference:

o RUN runs commands at image build time.

o CMD and ENTRYPOINT define commands to be run when the container starts, but CMD
is more flexible and can be overridden by the user, whereas ENTRYPOINT is fixed.

Example:

dockerfile

# RUN example: Install curl during image build

RUN apt-get update && apt-get install -y curl

# CMD example: Default behavior when container starts

CMD ["echo", "This is CMD"]

# ENTRYPOINT example: Fixed behavior when container starts

ENTRYPOINT ["echo"]

CMD ["This is CMD"]

CMD vs ENTRYPOINT

Both CMD and ENTRYPOINT specify the command that runs when the container starts, but they
behave differently.

• CMD: Specifies the default command to run. If you provide arguments when running the
container, they will override the CMD instruction. CMD is used to provide default arguments to
ENTRYPOINT or to define the default command when no command is specified.

Use case: Use CMD to set default behavior that can be overridden by the user when running the
container.

Syntax:

dockerfile

CMD ["executable", "param1", "param2"]

• ENTRYPOINT: Specifies the executable that will always run when the container starts, and
cannot be easily overridden. If the ENTRYPOINT is defined, you can pass arguments to it using
CMD or during container runtime.

Use case: Use ENTRYPOINT when you want to specify the main application or process that will run in
the container, and ensure that it’s always executed.

Syntax:

dockerfile
ENTRYPOINT ["executable", "param1", "param2"]

Key Difference:

o CMD is the default command that can be replaced by the user at runtime.

o ENTRYPOINT defines the fixed command to be run, and any arguments passed during
runtime are appended to the ENTRYPOINT.

Example:

dockerfile

# CMD example: The default behavior can be overridden at runtime

CMD ["echo", "Hello, World!"]

# ENTRYPOINT example: Fixed executable

ENTRYPOINT ["echo"]

CMD ["Hello, World!"]

When you run a container based on the above Dockerfile, the output will be the same (Hello, World!),
but ENTRYPOINT is fixed and will always run, while CMD is configurable at runtime.
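To see this behaviour yourself, build the Dockerfile above under an illustrative tag and run it with and without an argument:

docker build -t entrypoint-demo .
docker run entrypoint-demo               # prints: Hello, World!  (echo from ENTRYPOINT, argument from CMD)
docker run entrypoint-demo "Hi there"    # prints: Hi there       (the runtime argument replaces CMD; echo still runs)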

ARG vs ENV

• ARG: Defines build-time variables that can be passed during the image build process using the
--build-arg flag.

Use case: Use ARG to define variables that will only be used during the image build process and are
not available after the image is built.

Syntax:

ARG <variable_name>[=<default_value>]

• ENV: Sets environment variables inside the container that are available to the running container
and any processes within it.

Use case: Use ENV to define variables that will be available during runtime.

Syntax:

ENV <key>=<value>

Key Difference:

o ARG is available only during the image build process.

o ENV is available during both the build process and runtime.

Example:

# ARG example: Build-time variable

ARG VERSION=1.0

RUN echo "Building version $VERSION"

# ENV example: Runtime variable

ENV APP_NAME="MyApp"
RUN echo "Running $APP_NAME"

Implementation of Docker components with Examples


Objective:

We will create a simple web application using Docker, exploring key Dockerfile components such as
FROM, RUN, CMD, ENTRYPOINT, COPY, ADD, WORKDIR, EXPOSE, ENV, VOLUME, etc.

Pre-requisites:

1. Docker should be installed on your system.

2. Basic understanding of Docker and its components.

3. A basic web app (e.g., Node.js, Python, or even a simple HTML file) will be used for this example.

Step 1: Create a Simple Web App

For this example, let's create a simple index.html file for our web app.

Create a folder structure as follows:


my-docker-app/

├── Dockerfile

└── index.html

index.html:

html

<!DOCTYPE html>

<html lang="en">

<head>

<meta charset="UTF-8">

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<title>My Docker Web App</title>

</head>

<body>

<h1>Hello, Docker!</h1>

<p>This is a simple web application running in a Docker container.</p>

</body>

</html>

Step 2: Create the Dockerfile

Now let’s create the Dockerfile to build the container image.


1. FROM: The first instruction, which specifies the base image for the container.
2. LABEL: Adds metadata to the Docker image.
3. COPY: Copies files from the host to the container.
4. WORKDIR: Sets the working directory for subsequent instructions.
5. EXPOSE: Specifies which ports the container will expose.
6. CMD: Defines the command that will run when the container starts.

Create the following Dockerfile inside the my-docker-app/ folder:

# Step 1: Set the base image to nginx (a web server)

FROM nginx:alpine

# Step 2: Add a label to the image for metadata

LABEL maintainer="[email protected]"

LABEL version="1.0"

# Step 3: Copy the local index.html file to the nginx server's web directory

COPY ./index.html /usr/share/nginx/html/

# Step 4: Set the working directory (optional but can be useful for context)

WORKDIR /usr/share/nginx/html/

# Step 5: Expose port 80 for HTTP traffic

EXPOSE 80

# Step 6: CMD to keep the container running NGINX

CMD ["nginx", "-g", "daemon off;"

Step 3: Build the Docker Image

Now, let’s build the Docker image from the Dockerfile:

1. Open a terminal in the my-docker-app/ directory.

2. Run the following command to build the Docker image:

docker build -t my-docker-webapp .

This will create an image named my-docker-webapp based on the nginx base image, and it will copy
your index.html file into the appropriate directory for NGINX to serve.

Step 4: Run the Docker Container

Once the image is built, let’s run the container:

docker run -d -p 8080:80 my-docker-webapp

Explanation of the command:

• -d: Runs the container in detached mode (in the background).

• -p 8080:80: Maps port 8080 on your host machine to port 80 inside the container (NGINX
default).

• my-docker-webapp: The image we just built.

Now, you can open a browser and visit http://localhost:8080. You should see the "Hello, Docker!"
message from the index.html file.
Step 5: Explanation of Dockerfile Components

1. FROM

dockerfile

FROM nginx:alpine

• Specifies the base image (NGINX in this case) to build the image upon.

• We are using the alpine version of NGINX, which is a lightweight version of the NGINX image.

2. LABEL

dockerfile

LABEL maintainer="[email protected]"

LABEL version="1.0"

• Adds metadata to the image, like the maintainer's contact info and the image version. This
information is helpful for future reference or when sharing the image.

3. COPY

dockerfile

COPY ./index.html /usr/share/nginx/html/

• Copies the index.html file from your local machine into the Docker image.

• It places the file in the NGINX default location (/usr/share/nginx/html/) where it expects web files
to be.

4. WORKDIR

WORKDIR /usr/share/nginx/html/

• Sets the working directory inside the container.

• Any subsequent instructions (such as RUN, CMD, or ENTRYPOINT) will execute from this
directory.

5. EXPOSE

EXPOSE 80

• Informs Docker that the container will listen on port 80 (the default HTTP port). However, this
doesn’t actually map the port to the host system; it’s just for documentation and networking
purposes.

6. CMD

CMD ["nginx", "-g", "daemon off;"]

• The default command to run when the container starts. It tells NGINX to run in the foreground
(daemon off) and not exit after starting.

Step 6: Modifying the Dockerfile for More Hands-On Practice

Example 1: Use RUN to Install a Package

Let's modify the Dockerfile to install additional packages using the RUN instruction.
dockerfile

# Step 1: Set the base image to nginx (a web server)

FROM nginx:alpine

# Step 2: Install curl (for demonstration purposes)

RUN apk add --no-cache curl

# Step 3: Copy the local index.html file to the nginx server's web directory

COPY ./index.html /usr/share/nginx/html/

# Step 4: Set the working directory

WORKDIR /usr/share/nginx/html/

# Step 5: Expose port 80 for HTTP traffic

EXPOSE 80

# Step 6: CMD to run NGINX in the foreground

CMD ["nginx", "-g", "daemon off;"]

Explanation:

• RUN: Installs curl in the image. This can be useful if you need additional tools or dependencies
in your container.

• You can verify this by connecting to the running container and executing curl.

To build and run the container:

docker build -t my-docker-webapp-curl .

docker run -d -p 8081:80 my-docker-webapp-curl
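For example, once the container is running you can check that curl is available inside it (use the container ID or name from docker ps):

docker exec -it <container_id> curl --version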

Example 2: Use ENV to Set Environment Variables

We can use ENV to define environment variables that can be accessed within the container.

# Step 1: Set the base image to nginx (a web server)

FROM nginx:alpine

# Step 2: Set an environment variable for the app name

ENV APP_NAME="Docker Web App"

# Step 3: Copy the local index.html file to the nginx server's web directory

COPY ./index.html /usr/share/nginx/html/

# Step 4: Set the working directory

WORKDIR /usr/share/nginx/html/

# Step 5: Expose port 80 for HTTP traffic

EXPOSE 80
# Step 6: CMD to run NGINX in the foreground

CMD ["nginx", "-g", "daemon off;"]

Now, the APP_NAME variable is available to the running container. You can verify it with docker exec <container_id> printenv APP_NAME, or use it from a startup script or template (for example with envsubst) to render the value into index.html.

To build and run the container:

docker build -t my-docker-webapp-env .

docker run -d -p 8082:80 my-docker-webapp-env

Docker Registry Types and Use Cases


A Docker Registry is a storage and distribution system for Docker images. It stores images that can
be pulled and pushed between different environments. Docker Hub is the default registry, but you can
also create your own private registry.

Types of Docker Registries

1. Docker Hub (Public Registry)

o Description: Docker Hub is the default public registry maintained by Docker. It hosts a
large number of official and community-contributed images.

o Use Cases:

▪ Public sharing of Docker images.

▪ Easy access to official base images (e.g., nginx, alpine, ubuntu).

▪ Simple and fast distribution of images in public applications.

2. Private Docker Registry (Self-hosted)

o Description: A private Docker registry is a registry you host on your own infrastructure.
It’s useful for storing proprietary or sensitive images that you don’t want to share publicly.

o Use Cases:

▪ Storing custom or private images within an organization.

▪ Keeping production-ready images that should not be made public.

▪ Ensuring compliance with company-specific policies or security requirements.

3. Third-Party Registries

o Description: These are Docker registries provided by cloud providers and other services
such as Google Container Registry (GCR), Amazon Elastic Container Registry
(ECR), and Azure Container Registry (ACR).

o Use Cases:

▪ Hosting private Docker images on cloud services.

▪ Tight integration with the respective cloud platform (e.g., GCR with Google Cloud,
ACR with Azure).
Hands-On: Using Docker Registries

Step 1: Using Docker Hub (Public Registry)

1. Pull an Image from Docker Hub

o Let's pull an image from Docker Hub (e.g., nginx).

docker pull nginx

This command pulls the nginx image from Docker Hub to your local machine.

2. Run the Image

o Once the image is pulled, you can run it with the following command:

docker run -d -p 8080:80 nginx

• This will run the nginx container in detached mode (-d) and map port 8080 on the host to port
80 inside the container.

• Visit http://localhost:8080 in your browser. You should see the default NGINX page.

Step 2: Using a Private Docker Registry (Self-Hosted)

To set up a private Docker registry, you can use the official registry image provided by Docker.

1. Run a Private Registry Locally

First, let's start a private registry container using the official Docker Registry image.

docker run -d -p 5000:5000 --name registry registry:2

• This will start a private Docker registry container on port 5000 locally (on your machine).

• The registry:2 image refers to version 2 of the official Docker registry.

2. Tag an Image and Push it to Your Private Registry

o First, pull any image (e.g., nginx) to push to your private registry.

docker pull nginx

• Tag the image for your private registry (running on localhost:5000):

docker tag nginx localhost:5000/my-nginx

• Push the tagged image to your private registry:

docker push localhost:5000/my-nginx

• After the push is complete, you’ve stored the image on your private registry.

3. Verify the Image is Pushed

o You can verify the image is in your private registry by visiting http://localhost:5000/v2/_catalog from your browser or using curl:

curl http://localhost:5000/v2/_catalog

This should list the image my-nginx you just pushed.
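You can also list the tags stored for that repository through the registry's HTTP API:

curl http://localhost:5000/v2/my-nginx/tags/list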

4. Pull the Image from Your Private Registry


o To verify that the image was pushed successfully, pull the image back from your private
registry:

docker pull localhost:5000/my-nginx

Step 3: Using Cloud-Based Docker Registries

Cloud-based Docker registries like Amazon Elastic Container Registry (ECR), Google Container
Registry (GCR), and Azure Container Registry (ACR) are similar in many ways to private registries,
but they provide additional benefits for integration with cloud services.

Let’s walk through an example of using Amazon Elastic Container Registry (ECR).

Using Amazon Elastic Container Registry (ECR)

1. Create a Repository in ECR

o First, log into the AWS Management Console.

o Navigate to ECR under the Services tab.

o Create a new repository (e.g., my-app-repo).

2. Authenticate Docker to Your AWS ECR

o Install the AWS CLI and configure it with your credentials.

o Authenticate your Docker client to ECR using the following command:

aws ecr get-login-password --region <your-region> | docker login --username AWS --password-stdin
<aws_account_id>.dkr.ecr.<your-region>.amazonaws.com

• Replace <your-region> with your AWS region and <aws_account_id> with your AWS account
ID.

3. Tag an Image for ECR

o Tag your local image (e.g., nginx) for your ECR repository:

docker tag nginx:latest <aws_account_id>.dkr.ecr.<your-region>.amazonaws.com/my-app-repo:latest

4. Push the Image to ECR

o Now push the tagged image to your ECR repository:

docker push <aws_account_id>.dkr.ecr.<your-region>.amazonaws.com/my-app-repo:latest

Docker Volumes
A Docker Volume is a persistent storage location for Docker containers. Volumes are managed by
Docker and are stored outside the container’s filesystem. Volumes can be shared and mounted across
containers, making them useful for storing database files, application logs, or configuration data that
needs to persist.

Types of Volumes in Docker

1. Named Volumes: Managed by Docker, with a specific name assigned.

2. Anonymous Volumes: These are created by Docker with random names and are used when
no specific name is provided.
3. Host Volumes (Bind Mounts): These link a file or directory on the host system to a container.

Why Use Volumes?

• Data Persistence: Data inside containers is ephemeral, so using volumes allows data to survive
container restarts and removals.

• Sharing Data: Volumes can be shared between containers, enabling them to access and modify
the same data.

• Backup and Restore: You can easily back up, restore, and migrate volumes.

Basic Commands for Docker Volumes

• Create a Volume:

docker volume create my-volume

• List Volumes:

docker volume ls

• Inspect a Volume:

docker volume inspect my-volume

• Remove a Volume:

docker volume rm my-volume

• Prune unused Volumes:

docker volume prune

Hands-On Examples of Docker Volumes

1. Using a Named Volume

Let's create a named volume and use it with a container to persist data.

1. Create a Volume: Create a named volume my-volume.

docker volume create my-volume

2. Run a Container Using the Volume: Start a container (e.g., a busybox container) and mount
the volume at /data:

docker run -it --name my-container -v my-volume:/data busybox

Here:

o -v my-volume:/data: Mounts the volume my-volume to the /data directory in the container.

o busybox: A lightweight container image.

3. Write Data to the Volume: Once inside the container, create a file in /data:

echo "Hello from Docker Volume!" > /data/hello.txt

4. Exit the Container: Exit the container after creating the file:

exit
5. Verify Data Persistence: Run another container and mount the same volume to check if the
data persists.

docker run -it --rm -v my-volume:/data busybox cat /data/hello.txt

You should see:


Hello from Docker Volume!

This confirms that the data stored in the volume persists across container restarts.

2. Using Anonymous Volumes

Docker can automatically create an anonymous volume when you don’t specify a name. Let’s see how
to work with anonymous volumes.

1. Run a Container with an Anonymous Volume:

docker run -it --name my-container -v /data busybox

Here, Docker will automatically create an anonymous volume and mount it to /data inside the container.

2. Write Data to the Anonymous Volume: Inside the container, create a file in /data:

echo "This is an anonymous volume." > /data/anonymous.txt

3. Exit the Container: Exit the container:

exit

4. List Volumes: You can check the created anonymous volume by listing the volumes:

docker volume ls

You’ll notice a volume with a random name (something like 6f8b7a7d9b4e).

5. Inspect the Volume: If you want to check the details of the anonymous volume, run:

docker volume inspect <volume_id>

Replace <volume_id> with the actual volume ID you got from the previous step.

3. Using Host Volumes (Bind Mounts)

Bind mounts allow you to link a directory or file on the host system to a container.

1. Run a Container with a Host Volume:

Suppose you have a directory on your host at /home/user/data. You can mount it into the container like
this:

docker run -it --name my-container -v /home/user/data:/data busybox

Here:

o /home/user/data: A directory on the host machine.

o /data: A directory inside the container.

2. Modify the Data: Inside the container, modify the /data directory, which will reflect changes on
the host:
echo "This is a bind mount!" > /data/host_data.txt

3. Check the File on the Host: On the host system, check if the file host_data.txt is created in
/home/user/data:

cat /home/user/data/host_data.txt

You should see:

This is a bind mount!

This demonstrates that changes made inside the container are reflected in the host directory.

4. Backup and Restore Volumes

You can back up and restore data from a Docker volume.

1. Create a Backup of a Volume:

First, create a backup of the my-volume to a .tar file.

docker run --rm -v my-volume:/volume -v $(pwd):/backup busybox tar czf /backup/volume-backup.tar.gz -C /volume .

Here:

o -v my-volume:/volume: Mounts the Docker volume.

o -v $(pwd):/backup: Mounts the current directory on your host to /backup in the container.

o tar czf: Compresses the contents of the /volume directory into a .tar.gz file.

2. Restore the Backup to a New Volume:

Create a new volume and restore the backup to it.

docker volume create restore-volume

docker run --rm -v restore-volume:/volume -v $(pwd):/backup busybox tar xzf /backup/volume-backup.tar.gz -C /volume

3. Verify the Data: Run a container with the restored volume and check the contents:

docker run -it --rm -v restore-volume:/data busybox cat /data/hello.txt

You should see the original file data, indicating the volume has been restored.

Prune Unused Volumes

Over time, unused volumes can accumulate, taking up unnecessary disk space. Docker provides the
docker volume prune command to remove all unused volumes.

Step 1: Prune Unused Volumes

To remove unused volumes (those that are not being used by any container), run:

docker volume prune

You will be prompted to confirm the deletion:


Are you sure you want to continue? [y/N] y


After pruning, you can run docker volume ls again to see the remaining volumes.

Scenario: Creating a Simple Node.js Application with Docker Volumes

We will create a simple Node.js application that will:

1. Use a Docker volume to persist data generated by the application.

2. Mount the volume to a specific directory in the container to save logs or other data.

3. Show how the volume can be used to persist data between container runs.

Step 1: Set Up Your Node.js Application

1. Create a Project Directory

Create a directory to store your application:

mkdir my-docker-app

cd my-docker-app

2. Create a Simple app.js File

Create a file named app.js with the following content:

javascript

const fs = require('fs');

const path = require('path');

const http = require('http');

// Path to save logs inside the volume

const logFilePath = path.join(__dirname, 'logs', 'app.log');

const requestHandler = (req, res) => {

const logMessage = `Request received at ${new Date().toISOString()}\n`;

// Write the log message to a file inside the volume

fs.appendFileSync(logFilePath, logMessage);

res.write('Request logged!');

res.end();

};

const server = http.createServer(requestHandler);

server.listen(3000, () => {

console.log('Server running on http://localhost:3000');

});
3. Create a package.json File

Create a package.json file to define dependencies for the Node.js application:

json

"name": "my-docker-app",

"version": "1.0.0",

"description": "A simple app to demonstrate Docker volumes",

"main": "app.js",

"scripts": {

"start": "node app.js"

},

"dependencies": {}

4. Install Node.js Dependencies

Run the following command to initialize the Node.js project:

npm init -y

Since this is a simple app and we are not using external dependencies, you don’t need to install anything
specific.

Step 2: Create a Dockerfile

In the project directory, create a Dockerfile to containerize the application.

1. Dockerfile Contents

dockerfile

# Use official Node.js image as a base image

FROM node:14

# Set the working directory in the container

WORKDIR /usr/src/app

# Copy the package.json and app.js files into the container

COPY package.json ./

COPY app.js ./

# Install dependencies

RUN npm install

# Create a directory for logs (to be used as a volume)

RUN mkdir -p /usr/src/app/logs


# Expose port 3000 to access the app

EXPOSE 3000

# Define the volume for storing logs

VOLUME ["/usr/src/app/logs"]

# Start the Node.js application

CMD ["npm", "start"]

Explanation of Dockerfile Steps:

• FROM node:14: Use the official Node.js 14 image as the base image.

• WORKDIR /usr/src/app: Set the working directory inside the container.

• COPY package.json ./: Copy the package.json file into the container.

• COPY app.js ./: Copy the app.js file into the container.

• RUN npm install: Install Node.js dependencies (none in this case, but it's useful for larger
apps).

• RUN mkdir -p /usr/src/app/logs: Create a logs directory inside the container, where logs will
be stored.

• VOLUME ["/usr/src/app/logs"]: This line defines a volume at /usr/src/app/logs inside the


container, where logs will be stored. This will persist data even if the container is removed.

• EXPOSE 3000: Expose port 3000 for the application.

• CMD ["npm", "start"]: Start the application using the npm start command, which runs the app.js
file.

Step 3: Build and Run the Docker Container

1. Build the Docker Image

In the terminal, inside your project directory (where the Dockerfile is located), build the Docker image:

docker build -t my-docker-app .

2. Run the Docker Container with Volume

Run the Docker container with the -v flag to mount the volume. This will persist data (logs in this case)
in the logs directory on your local system.

docker run -d -p 3000:3000 -v my-app-logs:/usr/src/app/logs my-docker-app

• -d: Run the container in detached mode.

• -p 3000:3000: Bind port 3000 on the host to port 3000 in the container.

• -v my-app-logs:/usr/src/app/logs: Mount the volume my-app-logs to the /usr/src/app/logs directory in the container.

• my-docker-app: The name of the image you just built.

This command will run the application, and any logs generated will be stored in the my-app-logs volume.

3. Check if the Application is Running


You can check if the application is running by accessing it in your browser:

http://localhost:3000

Each time you refresh the page, a log entry will be added to the logs directory inside the container, and
this will be saved in the Docker volume.

Step 4: Inspect the Volume

After the container is running and logs are being generated, you can inspect the volume to verify that
the data is stored.

1. List Docker Volumes:

docker volume ls

This will list all the Docker volumes, and you should see my-app-logs in the list.

2. Inspect the Volume:

docker volume inspect my-app-logs

This will provide detailed information about the volume, including the mount point where the volume
data is stored on the host machine.

Step 5: Access Logs Outside the Container

To access the logs stored in the volume, you can run a temporary container to examine the contents of
the volume. For example:

docker run -it --rm -v my-app-logs:/logs busybox cat /logs/app.log

• This will mount the my-app-logs volume to /logs in a busybox container.

• The cat /logs/app.log command will display the contents of the app.log file where logs are being
stored.

Step 6: Clean Up

After you're done with the container and volume, you can remove them:

1. Stop the Running Container:

docker stop <container_id>

Replace <container_id> with the actual container ID or name.

2. Remove the Container:

docker rm <container_id>

3. Remove the Volume (if you no longer need the logs):

docker volume rm my-app-logs


Docker Networking
Docker Networking is a vital concept in containerized applications, allowing containers to communicate
with each other, the host system, or external networks. By default, Docker provides a network bridge
for container-to-container communication, but it also supports multiple networking modes to suit
different use cases.

In this guide, we will walk through the basics of Docker networking, various network types, and practical
hands-on examples with commands and Dockerfiles.

Docker Networking Basics

Docker provides several network modes, each suited for different use cases:

1. Bridge Network (Default Mode)

o This is the default network mode when no network is specified. It creates a private internal network on your host system; containers connected to this network can communicate with each other and can reach external networks through NAT, but they are not reachable from outside the host unless ports are published.

o Commonly used for containers that need to communicate within the same host.

2. Host Network

o In this mode, containers share the host’s networking namespace, meaning they can
access the host’s IP directly. Containers use the host’s IP address for external
communication.

o Typically used for performance-critical applications that need direct access to the host’s
networking stack.

3. Overlay Network

o This is used for multi-host networking, often in Docker Swarm or Kubernetes clusters. It
allows containers across multiple hosts to communicate with each other as if they were
on the same network.

o Requires a Docker Swarm or Kubernetes setup for orchestration.

4. None Network

o This mode disables all networking for the container. The container cannot communicate
with other containers or the host system.

o Used when the container doesn’t need network access.

Creating and Managing Docker Networks

1. Create a Bridge Network (Custom Network)

You can create a custom bridge network to manage your containers more easily. A custom bridge
network allows containers to communicate with each other more effectively than the default bridge
network.

docker network create --driver bridge my_custom_network

• --driver bridge: Specifies that we want to use the bridge network driver.

• my_custom_network: The name of the network.

Verify the network creation:


docker network ls

You should see my_custom_network listed.

2. Run Containers on the Custom Network

Now let’s run two containers that will communicate with each other over the custom bridge network.

docker run -d --name container1 --network my_custom_network busybox sleep 3600

docker run -d --name container2 --network my_custom_network busybox sleep 3600

• --network my_custom_network: This attaches both containers to the custom network.

3. Test Communication Between Containers

You can enter one of the containers and ping the other container to verify communication:

docker exec -it container1 sh

Inside container1, ping container2:

ping container2

You should see responses, confirming that the containers are able to communicate within the same
custom bridge network.
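Name resolution works here because user-defined bridge networks provide built-in DNS for container names; a non-interactive version of the same check is:

docker exec container1 ping -c 3 container2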

Host Networking Mode

The Host Network mode makes the container share the host’s network stack directly. In this mode, the
container uses the host’s IP address.

Run a Container in Host Networking Mode

docker run -d --name host_container --network host busybox sleep 3600

• --network host: This attaches the container to the host network.

You can check the networking mode by inspecting the container:

docker inspect host_container | grep NetworkMode

This should output:

"NetworkMode": "host",

In this case, the container will share the host’s networking interface, and it won’t have its own private
IP address.

Overlay Network (For Multi-Host Communication)

The Overlay Network is designed for communication between containers across different hosts, often
used in Docker Swarm mode. Here, we'll set up an overlay network in a multi-host environment.

1. Initialize Docker Swarm

Overlay networks require Docker Swarm. Start by initializing Docker Swarm:

docker swarm init

This will start the Swarm manager and provide a command to join worker nodes (if necessary).

2. Create an Overlay Network


Create an overlay network that allows communication across different Docker hosts:

docker network create --driver overlay my_overlay_network

3. Deploy Services on the Overlay Network

With Swarm mode active, you can deploy services that use the overlay network.

docker service create --name service1 --replicas 1 --network my_overlay_network busybox sleep 3600

docker service create --name service2 --replicas 1 --network my_overlay_network busybox sleep 3600

The services service1 and service2 can now communicate with each other using the overlay network,
even if they are deployed on different Docker hosts.
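
To verify that the two services can resolve each other, you can exec into the task of service1 on the node where it is running and query Swarm's built-in DNS. A sketch (assumes the service1 task runs on the current node; the busybox image provides nslookup):

docker exec -it $(docker ps -q -f name=service1) nslookup service2        # resolves to the service's virtual IP

docker exec -it $(docker ps -q -f name=service1) nslookup tasks.service2  # lists the individual task IPs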

None Network Mode

In the None network mode, containers cannot access any network resources. This is useful when a
container needs to be isolated from the network completely.

Run a Container with None Network Mode

docker run -d --name no_network_container --network none busybox sleep 3600

This container has no access to any networking resources. You cannot ping or communicate with it
from other containers or the host.
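
You can confirm the isolation from inside the container; only the loopback interface should be present. A quick check (busybox ships with the ip applet):

docker exec no_network_container ip addr   # shows only the lo interface, no eth0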

Dockerfile Networking Configuration

Dockerfiles don't directly control networking, but you can specify exposed ports and set up environment
variables that may affect how containers use networks.

Create a Simple Node.js App and Dockerfile with Network Exposure

We’ll create a Node.js application and expose a port in the Dockerfile for network communication.

1. Create a Simple Node.js App (app.js):

javascript

const http = require('http');

const requestHandler = (req, res) => {

res.write('Hello, Docker Networking!');

res.end();

};

const server = http.createServer(requestHandler);

server.listen(3000, () => {

console.log('Server is running on port 3000');

});

2. Create a Dockerfile (this app has no dependencies, so either add a minimal package.json next to app.js or drop the package.json COPY and npm install lines below):

# Use official Node.js image

FROM node:14
# Set working directory

WORKDIR /app

# Copy application files

COPY package.json ./

COPY app.js ./

# Install dependencies

RUN npm install

# Expose port 3000 for networking

EXPOSE 3000

# Run the application

CMD ["node", "app.js"]

3. Build the Docker Image:

docker build -t node-networking-app .

4. Run the Container on a Custom Network:

docker network create --driver bridge my_custom_network

docker run -d --name node_app --network my_custom_network -p 3000:3000 node-networking-app

• -p 3000:3000: Expose the app on port 3000.

• --network my_custom_network: Attach the container to the custom network.

5. Test Communication with the Container:

You can now access the app in your browser:

http://localhost:3000

It will display Hello, Docker Networking!.

Inspect Networks and Container Communication

To inspect networks and containers, Docker provides useful commands.

List Networks

docker network ls

Inspect a Network

docker network inspect my_custom_network

Inspect a Container's Network Configuration

docker inspect node_app

This will provide detailed network information, including IP addresses and attached networks.
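
If you only need the container's IP address on the custom network, a Go template keeps the output short. A convenience sketch, not required for the exercise:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' node_app

docker network inspect my_custom_network -f '{{json .Containers}}'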
Clean Up

After testing, you can clean up resources like containers, networks, and images.

1. Stop and Remove Containers:

docker stop node_app

docker rm node_app

2. Remove Custom Network:

docker network rm my_custom_network

3. Remove the Docker Image:

docker rmi node-networking-app

Free Tier Application of Docker: Step-by-Step with Real-Time Examples


In this guide, we'll walk through a real-time application of Docker in a free-tier environment like AWS,
GCP, or Azure. We'll cover the process of deploying a simple application using Docker on a cloud server
in the free-tier environment, giving you a clear example of how Docker can be integrated into real-world
cloud setups.

We'll be using AWS EC2 (Elastic Compute Cloud) in this example, which offers a free-tier for new
users. This process can also be adapted for other cloud platforms like Google Cloud Platform (GCP)
or Microsoft Azure, which also offer free-tier resources for deploying Docker applications.

Pre-requisites

• AWS account or any cloud provider account with a free-tier eligible instance.

• Basic knowledge of Docker, Docker Compose, and cloud platforms.

• A running instance with a public IP or DNS name.

Step 1: Setting Up an EC2 Instance (Free-Tier)

1.1 Launch an EC2 Instance

1. Login to AWS Console:

o Navigate to the AWS Management Console.

o Sign in to your AWS account.

2. Create EC2 Instance:

o In the Search Bar, type EC2 and click on EC2.

o Click Launch Instance to start a new virtual server.

o Select an Amazon Machine Image (AMI): Choose Ubuntu Server 20.04 LTS (or any
Linux distribution you prefer).

o Select Instance Type: Choose t2.micro (free-tier eligible).

o Configure instance details and storage as per your preference (default options are
typically fine for free-tier usage).
o In Security Group, select Create a new security group and configure it to allow HTTP
(port 80), HTTPS (port 443), and SSH (port 22) connections.

o Launch the instance and download the key pair (.pem file) to access the instance.

3. Access your Instance:

o Once the instance is running, you can access it through SSH. Open a terminal and use
the following command (replace <your-instance-public-ip> with the actual public IP of
your EC2 instance):

ssh -i "path-to-your-key.pem" ubuntu@<your-instance-public-ip>
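
If SSH rejects the key with a permissions warning, tighten the key file's permissions first (SSH refuses private keys that are world-readable):

chmod 400 "path-to-your-key.pem"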

1.2 Install Docker on EC2 Instance

Once logged into the EC2 instance, the next step is to install Docker.

# Update packages

sudo apt-get update

# Install dependencies

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

# Add Docker’s official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add Docker repository

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# Update package database again

sudo apt-get update

# Install Docker CE (Community Edition)

sudo apt-get install docker-ce

# Verify Docker installation

sudo docker --version

1.3 Allow Docker to Start on Boot

sudo systemctl enable docker

sudo systemctl start docker

Now Docker should be installed and running on your EC2 instance.
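
Optionally, add the default user to the docker group so you can run Docker commands without sudo (this assumes the default ubuntu user on an Ubuntu AMI; log out and back in for the group change to take effect):

sudo usermod -aG docker ubuntu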

Step 2: Dockerizing the Application

Let’s set up a simple Node.js application and Dockerize it.

2.1 Create a Simple Node.js Application

1. Install Node.js and npm:

sudo apt-get install nodejs npm


2. Create Application Directory:

mkdir my-docker-app

cd my-docker-app

3. Create app.js File:

Create a simple Node.js application that listens on port 3000 and serves a message:

javascript

// app.js

const http = require('http');

const requestHandler = (req, res) => {

res.write('Hello from Dockerized App!');

res.end();

};

const server = http.createServer(requestHandler);

server.listen(3000, () => {

console.log('Server is listening on port 3000');

});

4. Create package.json File:

json

"name": "docker-app",

"version": "1.0.0",

"main": "app.js",

"scripts": {

"start": "node app.js"

},

"dependencies": {}

5. Install Dependencies:

npm install

2.2 Create a Dockerfile

Now, let's create a Dockerfile to containerize the Node.js application.

dockerfile
# Use Node.js official image from Docker Hub

FROM node:14

# Set the working directory in the container

WORKDIR /usr/src/app

# Copy package.json and app.js into the container

COPY package.json ./

COPY app.js ./

# Install the app dependencies

RUN npm install

# Expose port 3000 to allow communication with the outside world

EXPOSE 3000

# Command to run the application

CMD ["npm", "start"]

This Dockerfile sets up the app in a containerized environment.

Step 3: Build and Run Docker Container

3.1 Build Docker Image

Run the following command to build the Docker image from the Dockerfile:

sudo docker build -t node-docker-app .

This will create an image called node-docker-app.

3.2 Run the Docker Container

Run the container with the following command:

sudo docker run -d -p 80:3000 node-docker-app

• -d: Run the container in detached mode.

• -p 80:3000: Map port 3000 inside the container to port 80 on the EC2 instance (so you can
access the app from the web).

Now, your Node.js application should be running on port 80 of your EC2 instance.

Step 4: Access the Application

1. Open a browser and go to the Public IP of your EC2 instance (found in the EC2 dashboard). If
everything is set up correctly, you should see the message:

Hello from Dockerized App!

This means your Node.js app is successfully running inside a Docker container on AWS EC2.

Step 5: Set Up Persistent Storage Using Docker Volumes (Optional)

In this step, we'll show how to persist data using Docker volumes, which can be helpful if your application needs to store files or logs.

5.1 Create a Volume

Create a Docker volume to persist data:

docker volume create my_app_data

5.2 Run the Container with Volume

Run the container again but this time mount the volume to a specific directory in your container.

sudo docker run -d -p 80:3000 -v my_app_data:/usr/src/app/data node-docker-app

• -v my_app_data:/usr/src/app/data: This creates a volume mount, which maps the


my_app_data volume to /usr/src/app/data inside the container.

With this setup, any data written to /usr/src/app/data inside the container will be persisted in the
my_app_data volume, even if the container is stopped or removed.
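
A quick way to confirm the persistence (a sketch; <container_id> and <new_container_id> are placeholders for the IDs shown by docker ps):

docker exec <container_id> sh -c 'echo hello > /usr/src/app/data/test.txt'

docker rm -f <container_id>

sudo docker run -d -p 80:3000 -v my_app_data:/usr/src/app/data node-docker-app

docker exec <new_container_id> cat /usr/src/app/data/test.txt   # prints "hello" because the volume outlived the first container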

Step 6: Push the Docker Image to Docker Hub (Optional)

To make your Docker image available for deployment elsewhere, you can push it to Docker Hub. Here's
how:

6.1 Create a Docker Hub Account

• Go to Docker Hub and create a free account.

6.2 Log in to Docker Hub from the EC2 Instance

docker login

Enter your Docker Hub credentials.

6.3 Tag the Image for Docker Hub

docker tag node-docker-app your-dockerhub-username/node-docker-app:latest

6.4 Push the Image to Docker Hub

docker push your-dockerhub-username/node-docker-app:latest

Now, your Docker image is available on Docker Hub and can be pulled from anywhere.
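
From any other machine with Docker installed, the image can then be pulled and run directly (using the same placeholder username as above):

docker pull your-dockerhub-username/node-docker-app:latest

docker run -d -p 80:3000 your-dockerhub-username/node-docker-app:latest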

Step 7: Clean Up Resources

To avoid unnecessary charges and resource usage, don’t forget to clean up your resources once you’re
done.

7.1 Stop and Remove Docker Container

docker stop <container_id>

docker rm <container_id>

Replace <container_id> with the actual container ID you want to remove.

7.2 Remove Docker Image


docker rmi node-docker-app

7.3 Terminate EC2 Instance

In the AWS EC2 dashboard, select the EC2 instance and click Terminate.

This will stop the instance and delete it, preventing any further charges from accumulating.

Free Tier Application Deployment Using Tomcat in Docker


In this guide, we will deploy a Tomcat server in a Docker container on a free-tier cloud server such
as AWS, Google Cloud Platform (GCP), or Azure. Tomcat is a popular open-source web server and
servlet container that serves Java applications.

We will cover the following steps:

1. Setting up the free-tier cloud server (AWS EC2 / GCP Compute Engine / Azure VM)

2. Installing Docker on the cloud instance

3. Dockerizing a Tomcat application (using the official Tomcat Docker image)

4. Running the Tomcat server in Docker

5. Deploying a simple Java web application (WAR file)

6. Exposing the Tomcat server through the cloud server’s public IP

7. Scaling and managing Tomcat with Docker Compose (Optional)

8. Cleaning up resources after deployment

Step 1: Set Up Free-Tier Cloud Server

1.1 AWS EC2 Setup (Free-Tier)

1. Login to AWS Console:

o Go to AWS Management Console and sign in.

2. Launch EC2 Instance:

o Navigate to EC2 and click Launch Instance.

o Choose Ubuntu Server 20.04 LTS (or any Linux version).

o Choose the t2.micro instance type (free-tier eligible).

o Configure the instance and allow ports 22 (SSH), 80 (HTTP), and 443 (HTTPS).

o Click Launch, and download the key pair ( .pem file) for SSH access.

3. SSH Access to EC2:

o Once the instance is running, access it using SSH:

ssh -i "path-to-your-key.pem" ubuntu@<your-instance-public-ip>

1.2 GCP Compute Engine Setup (Free-Tier)

1. Login to Google Cloud Console:

o Visit Google Cloud Console.


2. Create a Compute Engine VM Instance:

o Go to Compute Engine and click Create Instance.

o Choose Ubuntu for the OS and f1-micro for the machine type (eligible for the free tier).

o Open ports 22 (SSH), 80 (HTTP), and 443 (HTTPS).

o Click Create, and note the external IP address of the VM.

3. SSH into VM Instance:

o Use the SSH command to connect to the VM:

gcloud compute ssh --zone "<your-zone>" <instance-name>

1.3 Azure VM Setup (Free-Tier)

1. Login to Azure Portal:

o Visit Azure Portal.

2. Create a Virtual Machine:

o Go to Virtual Machines and click Create.

o Select Ubuntu 20.04 and a B1S instance (free-tier eligible).

o Configure ports for SSH (22), HTTP (80), and HTTPS (443).

o Click Create and note the public IP of your VM.

3. SSH into VM:

ssh username@<your-vm-ip>

Step 2: Install Docker on the Cloud Server

Once you have access to your cloud instance (EC2, GCP, or Azure), install Docker.

For Ubuntu (All Cloud Providers)

1. Update package list:

sudo apt-get update

2. Install dependencies:

sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

3. Add Docker’s official GPG key:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

4. Add Docker repository:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

5. Install Docker:

sudo apt-get update

sudo apt-get install docker-ce


6. Enable Docker to start on boot:

sudo systemctl enable docker

sudo systemctl start docker

7. Verify installation: sudo docker --version

Step 3: Dockerizing Tomcat Application

We will use the official Tomcat Docker image to run the Tomcat server in a container.

3.1 Pull the Tomcat Docker Image

Use the official Tomcat image from Docker Hub.

docker pull tomcat:latest

3.2 Run the Tomcat Container

Run the Tomcat server in a Docker container. We'll expose port 8080 from the container to port 80 on
the host so it’s accessible via the web.

docker run -d -p 80:8080 --name tomcat-server tomcat:latest

• -d: Run the container in detached mode (background).

• -p 80:8080: Expose container’s port 8080 to host’s port 80 (so you can access it from the
browser).

• --name tomcat-server: Name the container tomcat-server.

Now, Tomcat is running in Docker on your cloud instance.

Step 4: Deploying a Java Web Application (WAR File)

We will now deploy a simple Java web application (WAR file) to Tomcat in Docker.

4.1 Create a Simple Java Web Application (WAR)

If you don’t have a WAR file ready, you can create one using a simple Java Servlet or download a
sample from GitHub.

For demonstration, we’ll use a basic "Hello World" servlet or a simple sample.war file.

4.2 Copy the WAR File to the Tomcat Container

Use the docker cp command to copy the WAR file into the Tomcat container:

docker cp sample.war tomcat-server:/usr/local/tomcat/webapps/

This command copies sample.war to the /webapps/ directory inside the Tomcat container.

4.3 Restart the Tomcat Container (Optional)

After copying the WAR file, you may need to restart the container for Tomcat to deploy the application:

docker restart tomcat-server
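
As an alternative to docker cp, you can bake the WAR file into a custom image so every container starts with the application already deployed. A minimal sketch, assuming sample.war sits in the build context:

# Dockerfile

FROM tomcat:latest

COPY sample.war /usr/local/tomcat/webapps/

Build and run it just like the stock image:

docker build -t tomcat-sample .

docker run -d -p 80:8080 tomcat-sample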

4.4 Access the Application

You can now access your deployed application by navigating to the public IP of your cloud server (AWS,
GCP, or Azure) in a web browser.

http://<your-server-ip>/sample

This will open your Java web application deployed in the Tomcat container.

Step 5: Exposing the Tomcat Application

If the firewall on your cloud server is blocking HTTP (port 80), ensure that the required ports are open
in the security settings of your cloud provider.

• AWS EC2: Go to Security Groups in the EC2 dashboard and add inbound rules for HTTP (port
80).

• GCP Compute Engine: In Firewall rules, ensure HTTP traffic (port 80) is allowed.

• Azure: Open Network Security Group for the VM and ensure HTTP is allowed.

After confirming the firewall settings, try accessing the Tomcat server again using the public IP.

Step 6: Scaling and Managing with Docker Compose (Optional)

To scale your application, you can use Docker Compose, which allows you to define multi-container
applications.

6.1 Create a docker-compose.yml File

Create a docker-compose.yml file to define the services (Tomcat, database, etc.).

yaml

version: '3'

services:
  tomcat:
    image: tomcat:latest
    ports:
      - "80:8080"
    volumes:
      - ./webapps:/usr/local/tomcat/webapps
    environment:
      - TZ=America/New_York

6.2 Start Docker Compose

Use the following command to start the application with Docker Compose:

docker-compose up -d

This command will start the Tomcat server and deploy the application defined in the webapps folder.
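
You can verify what Compose started and follow the Tomcat logs with:

docker-compose ps

docker-compose logs -f tomcat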

Step 7: Clean Up Resources

After completing the deployment, make sure to clean up any resources to avoid unnecessary charges.

7.1 Stop and Remove Docker Containers


docker stop tomcat-server

docker rm tomcat-server

7.2 Remove Docker Images

docker rmi tomcat:latest

7.3 Terminate Cloud Instance

• For AWS: Go to EC2 Dashboard, select your instance, and click Terminate.

• For GCP: Go to Compute Engine, and delete the VM instance.

• For Azure: Go to Virtual Machines, and delete the VM.

Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you
to define all your services, networks, and volumes in a single file (docker-compose.yml) and spin up
your entire application with a single command.

• Docker Compose is primarily used for managing multi-container applications. It lets you define
all the configurations for multiple containers (like databases, web servers, etc.) in one YAML file
(docker-compose.yml).

• With Docker Compose, you can manage dependencies, network configurations, volumes, and
environment variables for all services in one place.

• It simplifies the process of managing complex applications by handling communication between


containers.

In this guide, we will:

1. Understand the basics of Docker Compose.

2. Set up a multi-container application using Docker Compose.

3. Deploy a simple web app and database using Docker Compose.

4. Scale services and use Docker Compose for management.

Docker Compose Architecture

Docker Compose is built around a few key components:

1. Service: A service is a container in the application. You can think of it as the running component
of your app, such as a database or web server.

2. Network: Docker Compose automatically creates a network for your containers to communicate
with each other.

3. Volume: Volumes are used to persist data generated by and used by Docker containers.

4. Configuration File: The docker-compose.yml file describes the services, networks, and
volumes for your application.

Step 1: Install Docker Compose

Before getting started, ensure Docker Compose is installed on your machine.

For Ubuntu (Linux):


1. Install Docker Compose:

sudo apt-get install docker-compose

2. Verify installation:

docker-compose --version

For macOS / Windows: Docker Compose is bundled with Docker Desktop, so you don’t need to install
it separately.

Step 2: Create a Multi-Container Application

Let’s create a simple web application with a database. We’ll use the following technologies:

1. Web Application: A simple Node.js application.

2. Database: MySQL as the database for our app.

2.1 Define the Application Structure

Create the following directory structure:


my-docker-compose-app/

├── docker-compose.yml

├── app/

│ ├── Dockerfile

│ ├── app.js

│ ├── package.json

└── db/

└── init.sql

2.2 Create the Node.js Application

1. app/package.json (for Node.js project configuration):

json

"name": "docker-web-app",

"version": "1.0.0",

"description": "Node.js web app with MySQL database",

"main": "app.js",

"dependencies": {

"mysql": "^2.18.1",

"express": "^4.17.1"

},
"scripts": {

"start": "node app.js"

2. app/app.js (The main application file):

javascript

const express = require('express');

const mysql = require('mysql');

const app = express();

const port = 3000;

const db = mysql.createConnection({

host: 'db', // Use the service name defined in docker-compose.yml

user: 'root',

password: 'example',

database: 'mydb'

});

db.connect((err) => {

if (err) throw err;

console.log('Connected to the database!');

});

app.get('/', (req, res) => {

db.query('SELECT * FROM users', (err, result) => {

if (err) throw err;

res.send(result);

});

});

app.listen(port, () => {

console.log(`App is listening at http://localhost:${port}`);

});

2.3 Create the Dockerfile for Node.js Application

Create a Dockerfile inside the app/ directory to build the image for the web application.

dockerfile

# Use official Node.js image


FROM node:14

# Create and set the working directory

WORKDIR /usr/src/app

# Copy package.json and install dependencies

COPY package.json ./

RUN npm install

# Copy the rest of the application code

COPY . .

# Expose port 3000 for the web app

EXPOSE 3000

# Run the Node.js app

CMD ["node", "app.js"]

2.4 Create SQL Initialization Script

Create an init.sql file to initialize the MySQL database with some sample data.

sql

CREATE DATABASE mydb;

USE mydb;

CREATE TABLE users (

id INT AUTO_INCREMENT PRIMARY KEY,

name VARCHAR(255) NOT NULL

);

INSERT INTO users (name) VALUES ('Alice'), ('Bob'), ('Charlie');

Step 3: Docker Compose Configuration

Now, create the docker-compose.yml file to define how to run the app and the database together.

yaml

version: '3'

services:
  app:
    build: ./app
    ports:
      - "3000:3000"
    environment:
      - DB_HOST=db
      - DB_USER=root
      - DB_PASSWORD=example
      - DB_NAME=mydb
    depends_on:
      - db
    networks:
      - app-network

  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: mydb
    ports:
      - "3306:3306"
    volumes:
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

Explanation:

• app service: Defines the Node.js application. It will be built from the Dockerfile in the app/
directory.

• db service: Defines the MySQL database. It uses the official MySQL image and initializes the
database with the init.sql script.

• depends_on: Ensures that the app container is started after the db container. Note that this only controls start order; it does not wait for MySQL to be ready to accept connections (see the healthcheck sketch after this list).

• networks: Defines a custom bridge network app-network to allow the containers to


communicate with each other.
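
If you want the app to start only once MySQL is actually accepting connections, a common pattern is to give the db service a healthcheck and use the long-form depends_on condition. A sketch (supported by newer Compose versions; the long-form condition syntax is not honoured by every Compose file version):

services:
  db:
    image: mysql:5.7
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
  app:
    build: ./app
    depends_on:
      db:
        condition: service_healthy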

Step 4: Build and Run the Application

1. Navigate to the Project Directory:

cd my-docker-compose-app

2. Start the Application:

Run Docker Compose to build and start the application:

docker-compose up --build

o The --build flag ensures Docker Compose rebuilds the images if necessary.
3. Access the Application:

Once the application is up and running, open your browser and visit:

http://localhost:3000

You should see the result of the query to the MySQL database, which will show all the users in the users
table.

Step 5: Scaling Services with Docker Compose

Docker Compose makes it easy to scale your services. For example, you can scale the app service to
run multiple instances of the web application for better load balancing.

1. Scale the app Service:

Run the following command to scale the app service to 3 instances:

docker-compose up --scale app=3

2. Check Running Containers:

To check the status of your containers:

docker ps

You should see multiple instances of the app container running.
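
Note that this particular compose file pins the host port with "3000:3000", so only one replica can bind port 3000 on the host and additional replicas will fail to publish. A common workaround (a sketch) is to publish only the container port and let Docker assign ephemeral host ports, typically with a reverse proxy in front:

    ports:
      - "3000"

You can then find the host port assigned to a given replica with docker-compose port --index=2 app 3000.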

Step 6: Stopping and Cleaning Up Resources

1. Stop the Services:

To stop all running services:

docker-compose down

2. Remove Volumes and Networks:

If you want to remove volumes and networks along with containers, use the following:

docker-compose down -v

This will stop and remove the containers, networks, and volumes defined in the docker-compose.yml
file.

Docker Swarm
Docker Swarm is a container orchestration tool built into Docker that enables you to manage a cluster
of Docker engines. It allows you to create and manage a cluster of Docker nodes, known as a "swarm,"
and deploy services across multiple Docker hosts. Swarm provides high availability, load balancing,
scaling, and self-healing capabilities to applications running in containers.

In this guide, we will:

1. Explain the concepts of Docker Swarm.

2. Set up a simple Docker Swarm cluster.


3. Deploy a multi-container service across the swarm.

4. Scale services and demonstrate swarm features.

Docker Swarm provides the following key features:

• Swarm Mode: Allows Docker containers to run on multiple hosts in a coordinated manner.

• Service Management: You can define services that run on the swarm, which Docker will
manage, monitor, and scale.

• Scaling: Easily scale the services (increase/decrease the number of replicas).

• High Availability: If a node in the swarm fails, Docker Swarm automatically reschedules
services to another available node.

• Load Balancing: Swarm load balances incoming traffic to the services across nodes.

• Rolling Updates: Perform updates to services with no downtime by rolling out changes to
replicas.

Step 1: Set Up Docker Swarm Cluster

In this tutorial, we will use 3 nodes: one manager node and two worker nodes.

1.1 Install Docker on All Nodes

First, ensure that Docker is installed on all the machines (manager and worker nodes). Refer to the
steps in the previous sections on how to install Docker on a machine.

1.2 Initialize Swarm Mode on Manager Node

The first step is to initialize the Docker Swarm mode on the manager node.

1. SSH into the manager node.

ssh user@manager-node-ip

2. Run the following command to initialize the Docker Swarm:

sudo docker swarm init --advertise-addr <manager-node-ip>

o --advertise-addr: This is the IP address on which the manager node will advertise itself
to other nodes in the swarm.

o The command will return a token that you will need to join the worker nodes to the swarm.

Example output:

Swarm initialized: current node is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token <token> <manager-node-ip>:2377

1.3 Join Worker Nodes to the Swarm

Next, you need to add worker nodes to the swarm. On each worker node:

1. SSH into the worker node.

2. Run the docker swarm join command with the token provided from the manager node.

sudo docker swarm join --token <token> <manager-node-ip>:2377


This will connect the worker nodes to the swarm.

3. On the manager node, verify the worker nodes have successfully joined the swarm:

sudo docker node ls

You should see output like:

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS

z9m61bs71f7g4z82hdz9q5ovw * manager-node Ready Active Leader

6c1bda2g09w1cbf0e54q1pp0c worker-node1 Ready Active

9r42dbshf01w2hdo9mjb5j0jf worker-node2 Ready Active

Now, you have a Swarm cluster with 1 manager node and 2 worker nodes.

Step 2: Deploy a Service in the Swarm

With the swarm initialized and the nodes added, you can now deploy services across the swarm.

2.1 Create a Simple Web Service

Let’s deploy a simple Nginx web server as a service in Docker Swarm.

1. On the manager node, run the following command to deploy the Nginx service:

sudo docker service create --name web-server --replicas 3 -p 80:80 nginx

o --name: Specifies the name of the service (in this case, web-server).

o --replicas: Defines the number of instances of the service (replicas) you want to run
across the swarm.

o -p: Exposes the service on port 80 on all nodes.

2. Check the status of the service:

sudo docker service ls

Example output:

ID NAME MODE REPLICAS IMAGE

h3r4xj4k0jdy web-server replicated 3/3 nginx:latest

3. To view the tasks (containers) that Docker Swarm has created for this service, run:

sudo docker service ps web-server

You should see a list of tasks across the manager and worker nodes.

4. Now, you can access the Nginx web service by navigating to the manager node's IP in a
browser:


http://<manager-node-ip>:80

You should see the default Nginx welcome page. Docker Swarm has automatically load-balanced the
service across the available nodes.

Step 3: Scale Services


You can scale the service up or down easily with Docker Swarm. This means you can change the
number of replicas of the service to handle more traffic or reduce the load.

3.1 Scale Up the Service

To scale the web-server service to 5 replicas:

sudo docker service scale web-server=5

Verify the scaling:

sudo docker service ps web-server

You should see 5 tasks (containers) running across the nodes in the swarm.

3.2 Scale Down the Service

Similarly, to scale down the service to 2 replicas:

sudo docker service scale web-server=2

Again, verify the scaling:

sudo docker service ps web-server

Step 4: Update Services with Rolling Updates

Docker Swarm provides the ability to update services in a rolling manner without downtime.

4.1 Update Service

Let’s update the web-server service to use a different version of the Nginx image.

sudo docker service update --image nginx:alpine web-server

This will update the web-server service to use the nginx:alpine image while keeping the rolling update
process.

4.2 Monitor the Update

Check the update progress:

sudo docker service ps web-server

Docker Swarm will gradually replace the old containers with new ones, ensuring no downtime during
the update.
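
You can also control how aggressively Swarm rolls out the change, for example one task at a time with a 10-second pause between tasks (standard docker service update flags):

sudo docker service update --update-parallelism 1 --update-delay 10s --image nginx:alpine web-server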

Step 5: Self-Healing in Docker Swarm

One of the great features of Docker Swarm is self-healing. If a container or node fails, Docker Swarm
automatically re-schedules the service on available nodes.

5.1 Simulate a Failure

You can simulate a failure by stopping a worker node. For example, on a worker node:

sudo docker node update --availability drain <worker-node-id>

This will mark the worker node as unavailable. Docker Swarm will automatically reschedule the tasks
(containers) that were running on that node to other available nodes.

5.2 Verify the Task Re-Scheduling

On the manager node, check the status of the service:


sudo docker service ps web-server

Docker Swarm should have re-scheduled the tasks to the remaining available worker nodes.
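
When the node is healthy again, return it to the scheduling pool so it can receive tasks:

sudo docker node update --availability active <worker-node-id>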

Step 6: Remove Services and Clean Up

Once you are done with the setup, you can remove the service and clean up the resources.

1. Remove the service:

sudo docker service rm web-server

2. Leave the Swarm (on worker nodes):

On each worker node:

sudo docker swarm leave

3. Remove the swarm (on manager node):

On the manager node, to completely leave the swarm:

sudo docker swarm leave --force

4. Remove Docker Swarm Nodes (optional):

On any node, you can remove Docker if desired:

sudo apt-get purge docker-ce

sudo apt-get autoremove

Manager and Worker Concept in Docker Swarm


In Docker Swarm mode, a Docker cluster consists of two types of nodes: Manager nodes and Worker
nodes. Both types of nodes have different roles and responsibilities within the Docker Swarm cluster.
Understanding the roles and differences between manager and worker nodes is crucial for effective
management of a Docker Swarm cluster.

Let's go through the concepts and roles of Manager and Worker nodes in Docker Swarm, followed by
a practical hands-on demonstration of setting up and managing them.

1. Manager Nodes

Role and Responsibilities:

1. Orchestrating the Swarm Cluster: Manager nodes are responsible for managing the state and
behavior of the Swarm cluster. They handle tasks such as service creation, scaling, and task
scheduling.

2. Leader Election: Only one manager node acts as the leader at any given time. The leader
manager is responsible for maintaining the overall state of the Swarm, while other manager
nodes act as followers that replicate the state of the leader.

3. Service Management: Managers maintain the desired state of services, ensuring that the
services are running with the right number of replicas.

4. Task Scheduling: Managers are responsible for scheduling tasks on worker nodes. They
decide which worker node should run a specific container for a service.
5. Cluster Management: Managers also handle administrative tasks such as adding/removing
nodes, updating services, and handling network configurations.

6. Fault Tolerance: If the leader manager node fails, another manager node can be elected to
take over as the new leader.

High Availability:

A Docker Swarm cluster should ideally have an odd number of manager nodes (e.g., 3, 5, etc.) to
ensure fault tolerance. This ensures that in case of failure, there’s always a majority of managers to
perform a leader election and keep the cluster functional.

2. Worker Nodes

Role and Responsibilities:

1. Running Containers: Worker nodes are responsible for executing the tasks (containers)
assigned by the manager nodes. These tasks are the running instances of the services defined
in the Docker Swarm.

2. Task Execution: Worker nodes don't manage the cluster state, but they handle the
containerized application and run the actual workloads (services) that are part of the application.

3. Scaling Services: While managers can scale services (increase or decrease the number of
replicas), the actual work of running the containers happens on the worker nodes.

4. Communication with Managers: Worker nodes communicate with the manager nodes to get
updates and instructions on which tasks (containers) they need to run.

5. No Administrative Functions: Unlike manager nodes, worker nodes do not perform


administrative tasks such as adding/removing nodes, updating services, or managing the state
of the swarm.

3. Key Differences Between Manager and Worker Nodes

• Role: Manager nodes manage and orchestrate the swarm cluster; worker nodes execute the containers and tasks assigned by managers.

• State Management: Managers maintain the desired state of the swarm and its services; workers do not manage swarm state.

• Task Scheduling: Managers schedule tasks (services) and assign them to worker nodes; workers execute the tasks assigned by the manager.

• Leadership: One manager acts as the leader and the others are followers; workers do not participate in leader election.

• Fault Tolerance: If the leader manager fails, another manager can be elected; a worker can fail without affecting the swarm's state.

• Cluster Management: Managers handle node additions/removals and service updates; workers have no cluster management role.

• Services: Managers are responsible for scaling services and updating them; workers run the containers associated with services.

• Number of Nodes in the Swarm: At least 1 manager is required, but typically 3 or more for high availability; worker nodes can scale horizontally to any number.

4. Hands-On Example: Setting Up Manager and Worker Nodes in Docker Swarm

Let's walk through the steps of setting up Docker Swarm with 1 manager node and 2 worker nodes.

4.1 Pre-requisite: Install Docker on All Nodes

Make sure Docker is installed on all nodes (Manager and Worker). You can follow the installation steps
from previous tutorials if necessary.

4.2 Step 1: Initialize Swarm on Manager Node

1. SSH into Manager Node (The first node that will be the manager):

ssh user@manager-node-ip

2. Initialize the Docker Swarm:

Run the following command to initialize the Docker Swarm. Replace <manager-node-ip> with the actual
IP address of the manager node.

sudo docker swarm init --advertise-addr <manager-node-ip>

This will return a join token that worker nodes will use to join the swarm. You will see output like:

Swarm initialized: current node is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token <worker-join-token> <manager-node-ip>:2377

Copy the join token from the output.

4.3 Step 2: Add Worker Nodes to the Swarm

1. SSH into Worker Node 1:

ssh user@worker-node1-ip

2. Join the Worker Node to the Swarm:

Run the docker swarm join command with the join token you copied earlier:

sudo docker swarm join --token <worker-join-token> <manager-node-ip>:2377

Repeat the same steps for Worker Node 2.

4.4 Step 3: Verify the Cluster

1. Go back to the Manager Node:

ssh user@manager-node-ip

2. Verify the Worker Nodes are Added to the Swarm:

Run the following command to see the list of nodes in the swarm:

sudo docker node ls


You should see output similar to the following:

ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS

x2a9bqw20jvw4mjz2rbsgz1l7 * manager-node Ready Active Leader

e9r4fchz1hpqovlgkm0dfofzm worker-node1 Ready Active

f8v8rd2d0wpk4hzgsjehgqf8e worker-node2 Ready Active

The manager-node is listed as the leader, and both worker nodes (worker-node1 and worker-node2)
are listed as Ready.

4.5 Step 4: Deploy a Service on the Swarm

Let’s deploy a simple Nginx web service across the swarm and verify that it runs on all available nodes.

1. On the Manager Node, create a service with 3 replicas (which will be distributed across the
manager and worker nodes):

sudo docker service create --name web-server --replicas 3 -p 80:80 nginx

2. Verify the Service is Running:

To check the status of the service:

sudo docker service ps web-server

You should see the Nginx web service running on the manager and worker nodes:

ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR

9ik0zqv8s7oi web-server.1 nginx:latest manager-node Running Running 2 seconds ago

2e64fjzff0ou web-server.2 nginx:latest worker-node1 Running Running 2 seconds ago

m0g9h8vwt2ga web-server.3 nginx:latest worker-node2 Running Running 2 seconds ago

3. Access the Web Service:

Open a web browser and navigate to the manager node's IP:


http://<manager-node-ip>:80

You should see the default Nginx welcome page. Docker Swarm has automatically distributed the
replicas across the available manager and worker nodes.

4.6 Step 5: Scaling the Service

To scale the service, you can increase or decrease the number of replicas. For example, to scale to 5
replicas:

sudo docker service scale web-server=5

Verify the scaling:

sudo docker service ps web-server


This will show you the updated number of replicas running across the nodes.

What is Docker Stack?


A Docker Stack is a feature within Docker Swarm (Docker’s native clustering and orchestration tool)
that allows you to deploy and manage multi-container applications in a simple and scalable way. A stack
is a group of related services, networks, and volumes that you can deploy, scale, and manage as a
single unit.

In Docker, a stack refers to the application and all the associated services and infrastructure required
to run it. Using Docker Stack, you can define all the components of an application, such as web servers,
databases, and networking, in a Docker Compose file and then deploy them to a Docker Swarm
cluster.

Key Features of Docker Stack

1. Multi-container Application Management: A stack allows you to manage multiple services


(containers) together as one application. Each service runs in its own container but can be
interlinked with other services via networks.

2. Declarative Application Deployment: You define the entire application in a configuration file
(docker-compose.yml), and Docker handles deploying and managing it across the Swarm
cluster.

3. Scaling Services: Docker Stack supports scaling services up or down, i.e., adjusting the
number of container instances for a service depending on the load or demand.

4. Service Discovery and Load Balancing: Docker Stack provides automatic service discovery
and load balancing across the containers within a service. This ensures that the application is
resilient and highly available.

5. Simplified Networking: Docker Stack automatically sets up networking between services


within the stack, ensuring that they can communicate with each other.

6. Persistent Volumes: Docker Stack allows you to define persistent storage for your containers
using Docker volumes, ensuring that data is retained even when containers are stopped or
removed.

How Docker Stack Works

A Docker Stack is defined using a Docker Compose file (typically named docker-compose.yml). This
file describes the services, networks, and volumes required for the application. Once the file is created,
the stack is deployed to a Docker Swarm cluster, where Docker manages the deployment, scaling, and
orchestration of the containers.

Basic Structure of a Docker Stack

Here is an example of a simple docker-compose.yml file used for defining a Docker Stack:

yaml

version: '3.8'

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    networks:
      - webnet

  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    networks:
      - webnet
    volumes:
      - db_data:/var/lib/mysql

networks:
  webnet:

volumes:
  db_data:

• services: This section defines the containers (services) that will be part of the stack. In this
example, we have two services: web (a simple Nginx container) and db (a MySQL container).

• networks: The containers defined in the stack can communicate with each other over defined
networks. Here, webnet is the network connecting both services.

• volumes: Volumes are defined for persistent storage. In this case, db_data stores the MySQL
database data to persist even if the container is recreated.

Deploying a Docker Stack

To deploy a Docker Stack, you use the following command:

docker stack deploy -c docker-compose.yml my_stack

• -c docker-compose.yml: Specifies the Docker Compose file to use for the stack.

• my_stack: The name of the stack being deployed.

After deployment, Docker Swarm will launch the necessary containers based on the docker-compose.yml file.

Managing Docker Stack

Once the stack is deployed, you can use various Docker commands to manage the stack:

1. List the stacks:

docker stack ls

2. List the services in a stack:


docker stack services my_stack

3. Scale a service: You can scale a service (i.e., change the number of replicas):

docker service scale my_stack_web=3

4. View logs for a service:

docker service logs my_stack_web

5. Remove a stack: To remove the stack and all its services, networks, and volumes:

docker stack rm my_stack

Docker Stack vs Docker Compose

• Docker Compose: It is used for defining and running multi-container Docker applications on a
single host. Docker Compose is typically used for local development or testing.

• Docker Stack: It is used to deploy multi-container applications in a Docker Swarm cluster,


which can span across multiple nodes. Docker Stack is typically used in production
environments where you need to manage clusters of Docker containers.

Benefits of Docker Stack

1. Simplifies Multi-Container Deployment: Docker Stack abstracts away the complexities of


managing multi-container applications, making deployment and orchestration easier.

2. Scalable and Resilient: With Docker Swarm, you can scale services based on demand. Docker
Stack ensures that the application is resilient, and services can be automatically rescheduled in
case of failure.

3. Declarative Management: Define the application in a single configuration file (docker-compose.yml), and Docker takes care of the deployment, scaling, and service management.

4. Integrated Networking and Volumes: Docker Stack handles networking between services and
provides a way to create and manage persistent storage.

5. Service Discovery: Docker Stack provides built-in service discovery, making it easy for services
to find and communicate with each other.

Reduce image size


1. Use a Minimal Base Image

The choice of base image plays a significant role in the overall image size. Instead of using large base
images, opt for smaller ones:

• Alpine Linux: It's a minimal Docker image, usually around 5 MB. If you're using a language like
Python, Node.js, or Java, there are Alpine versions available for those languages (e.g.,
python:3.8-alpine or node:14-alpine).

• Distroless Images: These images only contain your application and its runtime dependencies,
with no package manager, shell, or other unnecessary tools. For example,
gcr.io/distroless/base.

dockerfile

FROM python:3.8-alpine

2. Leverage Multi-stage Builds

Multi-stage builds allow you to use multiple FROM statements in your Dockerfile. The first stage is used
to build your application, and the second stage copies only the necessary artifacts into a smaller image.
This avoids including unnecessary build tools and dependencies in the final image.

Example:

dockerfile


# Stage 1: Build stage

FROM node:14 AS build

WORKDIR /app

COPY . .

RUN npm install

RUN npm run build

# Stage 2: Production stage (smaller image)

FROM node:14-alpine

WORKDIR /app

COPY --from=build /app/build /app

CMD ["node", "server.js"]

Here, the build stage installs dependencies and builds the app, but only the necessary build artifacts
are copied to the final smaller image.

3. Minimize Layers

Each instruction in a Dockerfile creates a new layer in the image. To reduce the image size, you should
combine related instructions into fewer layers. For example:

• Instead of using separate RUN commands for each package installation, combine them into a
single RUN statement.

dockerfile

RUN apt-get update && apt-get install -y \

package1 \

package2 \

package3

4. Remove Unnecessary Files and Dependencies

After installing packages or copying files, delete unnecessary files to reduce the image size:

• Clean up cache: For package managers like apt, npm, or pip, delete cache files to reduce size.

Example for apt (for Ubuntu-based images):


RUN apt-get update && apt-get install -y package1 && \

apt-get clean && rm -rf /var/lib/apt/lists/*

For npm:

RUN npm install && npm cache clean --force

For pip:

RUN pip install -r requirements.txt && \

rm -rf /root/.cache
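
Alternatively, pip can skip writing the cache in the first place, which avoids a separate cleanup step:

RUN pip install --no-cache-dir -r requirements.txt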

5. Minimize the Number of Dependencies

Only install the dependencies you need. For example, in a Node.js or Python app, you might be
installing unnecessary development dependencies. Make sure you're only installing the production
dependencies in your Dockerfile.

For Node.js, you can install only the production dependencies by using the --production flag:

RUN npm install --production

For Python, ensure you use a requirements.txt that includes only the necessary packages.

6. Use .dockerignore

Like .gitignore, the .dockerignore file specifies which files should not be copied into the Docker image.
This can significantly reduce the size by avoiding unnecessary files like build artifacts, temporary files,
and development tools from being included in the image.

Example .dockerignore:

node_modules

*.log

*.md

tests/

.git

7. Optimize for Specific Architecture

If you're working in an environment where certain architectures (e.g., ARM, AMD64) are required, you
can build an image optimized for that architecture. This ensures you're not adding unnecessary
architecture-specific dependencies.
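
With BuildKit's buildx (bundled with current Docker releases), you can build for a specific target platform explicitly. A sketch (myapp:arm64 is a placeholder tag):

docker buildx build --platform linux/arm64 -t myapp:arm64 .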

8. Use .env Files Wisely

Sometimes, a .env file or configuration file can be very large and be accidentally copied into your Docker
image. Use .dockerignore to make sure they aren't included, or move configuration to environment
variables or Docker Secrets.

9. Avoid Installing Unnecessary Tools

Avoid installing debugging or build tools like compilers, curl, git, and text editors unless absolutely
necessary. These tools can add a lot of bloat to the image.

10. Use Scratch for Bare Minimum Images


If your application doesn't need any base operating system, you can build it directly on a blank base
image called scratch. This is the smallest image you can have, but it requires your app to be statically
linked (no dependencies or libraries that need the OS).

Example:

dockerfile


FROM scratch

COPY myapp /myapp

CMD ["/myapp"]

This method is useful for things like Go or statically compiled C/C++ binaries.
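
For example, a statically linked Go binary can be built in one stage and copied into scratch. A sketch, assuming a main.go in the build context (the Go version tag is illustrative):

# Stage 1: build a static binary

FROM golang:1.21 AS build

WORKDIR /src

COPY . .

RUN CGO_ENABLED=0 go build -o /myapp .

# Stage 2: empty base image containing only the binary

FROM scratch

COPY --from=build /myapp /myapp

CMD ["/myapp"]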

Example: Optimized Dockerfile

Here’s an example Dockerfile that applies many of the techniques above:

dockerfile

# Stage 1: Build Stage

FROM node:14-alpine AS build

WORKDIR /app

COPY . .

RUN npm install --production && npm run build

# Stage 2: Final Stage (Production)

FROM node:14-alpine

WORKDIR /app

COPY --from=build /app/build /app

COPY --from=build /app/package*.json /app

RUN npm install --production

# Clean up unnecessary files and caches

RUN rm -rf /var/cache/apk/* /root/.npm

# Run the application

CMD ["node", "server.js"]

In this Dockerfile:

• We're using node:14-alpine for a minimal base image.

• Multi-stage build is used to separate build-time dependencies from runtime dependencies.

• We combine COPY and RUN steps efficiently.

• Cache and unnecessary files are cleaned up.


REFERENCES

1. Official Docker Documentation

• Docker Docs – The official documentation is the best place to start, covering everything from
installation to advanced topics like Docker Swarm and Docker Compose.

2. Books

• "Docker Deep Dive" by Nigel Poulton – Great for beginners and intermediate users alike. It offers
detailed explanations of Docker concepts and real-world use cases.

• "The Docker Book" by James Turnbull – A comprehensive guide for learning Docker, covering
everything from the basics to advanced features.

• "Docker in Action" by Jeffrey Nickoloff – Focuses on practical Docker usage with clear examples
and explanations.

3. Online Courses

• Udemy:

o "Docker for Beginners" – A great course for those just getting started.

o "Docker Mastery: with Kubernetes +Swarm from a Docker Captain" – An in-depth,


beginner-to-advanced course that covers Docker, Swarm, and Kubernetes.

• LinkedIn Learning:

o "Learning Docker" – A short and sweet introduction to Docker.

4. YouTube Channels

• Docker’s Official YouTube Channel – Features tutorials, webinars, and conference sessions.

• TechWorld with Nana – Offers clear and easy-to-follow tutorials on Docker and Kubernetes.

5. Blogs & Tutorials

• Docker Blog: https://www.docker.com/blog/ – Official blog with use cases, updates, and best practices.

• Medium: Search for Docker tutorials and articles on Medium; lots of experienced developers
share their insights.

• DigitalOcean Tutorials: Docker Tutorials on DigitalOcean – A great collection of beginner-friendly tutorials.

6. Hands-On Practice Platforms

• Play with Docker: https://labs.play-with-docker.com/ – A free, online environment where you can experiment with Docker without needing to set it up locally.

• Katacoda: https://www.katacoda.com/courses/docker – Interactive tutorials for Docker and other


DevOps tools.

7. Communities and Forums

• Stack Overflow: Look up Docker-related questions or ask your own.

• Docker Community Slack: Engage with other Docker users for questions, discussions, and help.

• Reddit (r/docker): A very active community where you can discuss Docker or ask for help.
