Getting Started with Docker
Introduction to Docker
Docker is an open-source platform designed to automate the deployment, scaling, and
management of applications using containers. A container is a lightweight, portable, and self-sufficient
unit that packages up an application and all its dependencies (including libraries, frameworks,
configurations, etc.) so that the application can run consistently across any environment—whether it's
on a developer's laptop, a test server, or in production on the cloud.
Benefits of Docker:
1. Portability: Because a container packages an application together with its dependencies, the same container runs consistently across environments, from a developer's laptop to a test server to production.
2. Efficiency: Containers are lightweight because they share the host system's kernel rather than
having their own operating system (like VMs). This reduces overhead and allows for faster
startup times.
3. Isolation: Containers run applications in isolated environments. This isolation ensures that
dependencies and configurations for one application don’t interfere with another.
4. Version Control: Docker images can be versioned, so you can have different versions of your
app, dependencies, or configurations, and easily switch between them.
• Microservices: Docker allows developers to break down complex applications into smaller,
manageable services (microservices) that can run independently in containers.
Docker Terminology
1. Container: A lightweight, stand-alone, executable package that includes everything needed to
run a piece of software (e.g., code, libraries, system tools, etc.). Containers run from Docker
images.
2. Image: A read-only template used to create containers. It consists of a series of layers (files and
commands). You can think of it as a snapshot of a file system at a particular point.
3. Dockerfile: A script containing a series of instructions on how to build a Docker image. It defines
the environment in which the application runs.
4. Registry: A storage and distribution system for Docker images. The default registry is Docker
Hub, but you can also set up private registries.
5. Volume: A persistent data storage mechanism. Volumes are used to store and manage data
that should not be lost when containers are removed.
6. Network: A virtual network allowing containers to communicate with each other and with
external services.
Basic Docker Commands
1. Check the Docker installation (after installing Docker for your platform, e.g., Docker Engine on Ubuntu):
docker --version
2. Run a container:
docker run <image_name>
3. List running containers:
docker ps
4. List all containers, including stopped ones:
docker ps -a
5. Remove a container:
docker rm <container_id>
docker rm -f <container_id>
6. List images:
docker images
7. Remove an image:
docker rmi <image_id>
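To tie these commands together, here is a short worked example (the image and container names are only illustrations): pull and run an NGINX container, then clean everything up.
docker pull nginx
docker run -d --name web -p 8080:80 nginx
docker ps
docker stop web
docker rm web
docker rmi nginx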
Docker Networking
docker network ls
Docker Volumes
1. Create a volume:
docker volume create <volume_name>
2. List volumes:
docker volume ls
3. Inspect a volume:
docker volume inspect <volume_name>
4. Remove a volume:
docker volume rm <volume_name>
Monolithic Architecture
A monolithic architecture is a traditional approach where an entire application is built as a single,
unified unit. All the different components (like the user interface, business logic, and data access layers)
are tightly integrated into a single codebase and are deployed together as one unit.
1. Single Codebase:
o The entire application’s functionality (user interface, business logic, data management)
resides in a single codebase.
2. Tightly Coupled:
o All components of the application are interdependent, meaning changes in one part can
affect the entire application.
3. Single Deployment:
o The entire application is built and deployed together as one unit, so even a small change requires redeploying the whole application.
4. Development:
o Monolithic applications are often easier to develop initially, as everything is in one place,
and there’s no need for complex communication between different services or
components.
5. Maintenance:
o As the application grows, it can become harder to maintain and update because changes
in one area might impact other areas, and testing can become more complex.
Advantages:
• Simpler to develop initially: A unified codebase makes it easier to get started and build the application.
Disadvantages:
• Lack of flexibility: Difficult to update or add new features without affecting the entire application.
• Risk of tight coupling: Changes in one area of the application can unintentionally break other parts.
Example:
• A small e-commerce website where all the components (product catalog, user management, checkout, etc.) are tightly coupled together.
Microservices Architecture
Microservices architecture breaks down an application into small, loosely coupled, and independently
deployable services. Each microservice is responsible for a specific business function and
communicates with other microservices via APIs (often HTTP or message queues).
1. Decoupled Services:
o The application is split into small, independent services, each responsible for a specific business function and owning its own codebase.
2. Independent Deployment:
o Each microservice can be developed, deployed, and scaled independently of the others.
This allows for easier updates and better fault isolation.
3. Technology Agnostic:
o Each service can be built with the technology stack that best fits its needs, since services interact only through their APIs.
4. Communication:
o Microservices communicate with each other via lightweight protocols (e.g., REST APIs, gRPC, or messaging queues). This allows services to interact in a decoupled manner.
5. Scaling:
o Each microservice can be scaled independently based on demand, which makes it more
efficient for resource allocation and improves performance for specific features of the
application.
6. Fault Isolation:
o If one microservice fails, it doesn’t necessarily bring down the entire system. This
isolation improves system resilience and robustness.
Advantages:
• Scalability: Each microservice can be scaled independently based on its own requirements.
• Flexibility: Development teams can use different technologies for different services, based on specific needs.
Challenges:
• Distributed systems: Because services are distributed, issues like network latency, failure handling, and data consistency become more complicated.
• Data management: Managing data consistency and transactions across multiple services can be challenging.
Example:
• A large e-commerce platform where different services (product catalog, order management, payment, inventory) are managed by separate teams and need to be scaled independently.
Monolithic vs. Microservices: Key Differences
• Development: A monolith is easier to develop initially but becomes harder as the app grows; microservices are more complex because multiple services must be managed.
• Scaling: A monolith is scaled as a single unit; microservices let you scale individual services independently.
• Maintenance: A monolith becomes harder to maintain as it grows in size; microservices are easier to maintain as smaller, focused services.
• Communication: A monolith uses direct function calls within the same process; microservices communicate via APIs between services.
Docker Architecture
Docker follows a client-server architecture and relies on several components that work together to
enable the creation, management, and running of containers. The architecture is designed to provide a
consistent environment across different stages of development, testing, and production.
1. Docker Client
o What it is: The Docker client is the primary interface through which users interact with
Docker. It can be a command-line interface (CLI) or a graphical user interface (GUI).
o Functionality:
▪ The Docker client sends requests to the Docker daemon (server) to perform
actions like building containers, running containers, and pulling images.
▪ Common commands include docker run, docker build, docker pull, and docker
push.
o How it works: The client communicates with the Docker daemon via the Docker API
over a network (either locally or remotely).
2. Docker Daemon (dockerd)
o What it is: The Docker daemon (also called dockerd) is a background process that
manages the creation, running, and monitoring of Docker containers.
o Functionality:
▪ The daemon listens for Docker API requests from the Docker client.
o How it works: The daemon can run on the same machine as the client or on a remote
machine.
3. Docker Images
o What it is: A Docker image is a read-only template used to create containers. It includes
the application code, libraries, dependencies, and runtime needed for the containerized
application.
o Functionality:
▪ Images are the foundation of Docker containers. They can be created from a
Dockerfile, which contains instructions for how to build the image.
▪ Docker images are stored in a registry and can be pulled from a registry like
Docker Hub or a private Docker registry.
o How it works: When you run a Docker container, you are essentially creating a running
instance of an image.
4. Docker Containers
o What it is: A Docker container is a running instance of a Docker image: a lightweight, isolated environment in which the application executes.
o Functionality:
▪ Containers are isolated from each other and the host system but share the host
system's kernel.
▪ Containers are ephemeral—once they are stopped, they can be easily removed.
o How it works: Containers are created from Docker images and can run in isolation or
interact with other containers and services depending on the network configurations.
5. Docker Registry
o What it is: A Docker registry is a storage and distribution system for Docker images.
It allows Docker images to be stored, versioned, and shared.
o Functionality:
▪ The Docker registry is where you store your images. The default public registry
is Docker Hub. There are also private registries that organizations may use for
storing proprietary images.
▪ The Docker client interacts with the registry to pull (download) and push (upload)
images.
o How it works: When you use the command docker pull <image>, it fetches the image
from the Docker registry, and when you use docker push, it uploads the image to the
registry.
6. Docker Network
o What it is: Docker networking provides the ability to connect Docker containers to each
other and to external systems.
o Functionality:
▪ Docker provides several network drivers (bridge, host, overlay, none) that determine how containers connect to each other, to the host, and to external systems.
7. Docker Volumes
o What it is: A Docker volume is a persistent data storage mechanism for Docker
containers.
o Functionality:
▪ Volumes are used to store data that needs to persist across container restarts
(e.g., database data, logs).
▪ Volumes are independent of containers, which means the data persists even if
the container is deleted.
o How it works: Volumes can be shared between containers or used by a single container,
and they can be mounted into a container at a specific location within the container’s
filesystem.
In this guide, we’ll break down the key components of a Dockerfile, show you how each one works,
and provide practical examples of how to use them.
The most common Dockerfile instructions are:
1. FROM – Specifies the base image to build upon.
2. LABEL – Adds metadata (such as maintainer or version) to the image.
3. RUN – Executes commands while the image is being built (e.g., installing packages).
4. COPY – Copies files from the host machine into the container.
5. ADD – Similar to COPY but with additional features like extracting tar files and pulling files from URLs.
6. WORKDIR – Sets the working directory for subsequent instructions.
7. CMD – Specifies the default command to run when the container starts.
8. ENTRYPOINT – Defines the executable that always runs when the container starts.
9. EXPOSE – Informs Docker that the container will listen on the specified network ports.
10. ENV – Sets environment variables inside the image.
11. VOLUME – Creates a mount point with a volume for persistent data storage.
1. FROM
The FROM instruction is used to specify the base image for the Docker image you are building. It’s
usually the first line in a Dockerfile.
Syntax:
dockerfile
FROM <image_name>:<tag>
Example:
dockerfile
FROM ubuntu:20.04
This uses the ubuntu image with the tag 20.04 as the base image for the new image.
Working Example:
dockerfile
FROM ubuntu:20.04
2. LABEL
The LABEL instruction allows you to add metadata to your image, such as version, author, or
description.
Syntax:
dockerfile
LABEL <key>=<value>
Example:
dockerfile
LABEL maintainer="[email protected]"
Working Example:
dockerfile
FROM ubuntu:20.04
LABEL maintainer="[email protected]"
3. RUN
The RUN instruction executes commands in the container. It is often used to install packages or perform
other setup tasks.
Syntax:
dockerfile
RUN <command>
Example:
dockerfile
RUN apt-get update && apt-get install -y curl
Working Example:
dockerfile
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y curl
• ADD: Similar to COPY, but with additional features like extracting .tar files and supporting remote
URLs.
Syntax:
dockerfile
Example:
Working Example:
dockerfile
FROM ubuntu:20.04
5. WORKDIR
The WORKDIR instruction sets the working directory inside the container for any subsequent
instructions.
Syntax:
dockerfile
WORKDIR /path/to/directory
Example:
dockerfile
WORKDIR /app
RUN touch file.txt
Working Example:
dockerfile
FROM ubuntu:20.04
WORKDIR /app
6. CMD
The CMD instruction specifies the default command that will run when the container starts. It can be
overridden by providing a command when running the container.
Syntax:
dockerfile
CMD ["executable", "param1", "param2"]
Example:
dockerfile
CMD ["echo", "Hello, World!"]
Working Example:
dockerfile
FROM ubuntu:20.04
CMD ["echo", "Hello, World!"]
7. ENTRYPOINT
The ENTRYPOINT instruction is similar to CMD but provides more control over the container's default
behavior. It defines the executable that will always run when the container starts.
Syntax:
dockerfile
ENTRYPOINT ["executable", "param1", "param2"]
Example:
dockerfile
ENTRYPOINT ["echo"]
Working Example:
dockerfile
FROM ubuntu:20.04
ENTRYPOINT ["echo"]
The ENTRYPOINT executable always runs when the container starts; any additional arguments you pass to docker run are appended to it as parameters.
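A quick sketch of that behavior, assuming the image built from the working example above is tagged entrypoint-demo (a hypothetical name):
docker build -t entrypoint-demo .
docker run entrypoint-demo "Hello from ENTRYPOINT"
Because the entrypoint is echo, the second command prints Hello from ENTRYPOINT.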
8. EXPOSE
The EXPOSE instruction tells Docker that the container will listen on specific network ports at runtime.
This is for informational purposes and does not actually publish the port.
Syntax:
dockerfile
EXPOSE <port>
Example:
dockerfile
EXPOSE 8080
Working Example:
dockerfile
FROM ubuntu:20.04
EXPOSE 8080
9. ENV
The ENV instruction sets environment variables in the image, which are available to the running container and any processes within it.
Syntax:
dockerfile
ENV <key>=<value>
Example:
dockerfile
ENV APP_NAME="MyApp"
Working Example:
dockerfile
FROM ubuntu:20.04
ENV APP_NAME="MyApp"
10. VOLUME
The VOLUME instruction creates a mount point with a volume to persist data.
Syntax:
dockerfile
VOLUME ["/data"]
Working Example:
FROM ubuntu:20.04
VOLUME ["/data"]
COPY vs ADD
• COPY: Copies files from the host machine (build context) into the container's filesystem. It is a simple and reliable method to copy files and directories.
Use case: Use COPY when you want to copy local files or directories into your Docker image without additional functionalities.
Syntax:
dockerfile
COPY <src> <dest>
• ADD: Similar to COPY, but it can also extract compressed files and copy from remote URLs.
Use case: Use ADD when you need to extract compressed files or copy remote URLs to the container.
Syntax:
dockerfile
ADD <src> <dest>
Key Difference:
o COPY is preferred for most scenarios because it’s simpler and more predictable.
o ADD should only be used when you need the additional features, such as auto-extraction of tar files or copying from a URL.
Example:
dockerfile
COPY config.json /app/
ADD app.tar.gz /app/
RUN vs CMD vs ENTRYPOINT
• RUN: Executes commands during the image build process and creates a new image layer with
the result of the command.
Use case: Use RUN to install software, create directories, or execute other setup tasks during the
image build process.
Syntax:
dockerfile
RUN <command>
• CMD: Specifies the default command to run when the container starts. This command can be
overridden at runtime.
Use case: Use CMD for default behavior that can be overridden by the user when running the container.
Syntax:
dockerfile
CMD ["executable", "param1", "param2"]
• ENTRYPOINT: Specifies the main executable that runs when the container starts. This is a fixed
command and is not easily overridden by the user.
Use case: Use ENTRYPOINT when you want to define a fixed command that always runs, and
optionally pass arguments through CMD.
Syntax:
dockerfile
ENTRYPOINT ["executable", "param1", "param2"]
Key Difference:
o CMD and ENTRYPOINT define commands to be run when the container starts, but CMD
is more flexible and can be overridden by the user, whereas ENTRYPOINT is fixed.
Example:
dockerfile
ENTRYPOINT ["echo"]
CMD ["Hello, World!"]
CMD vs ENTRYPOINT
Both CMD and ENTRYPOINT specify the command that runs when the container starts, but they
behave differently.
• CMD: Specifies the default command to run. If you provide arguments when running the
container, they will override the CMD instruction. CMD is used to provide default arguments to
ENTRYPOINT or to define the default command when no command is specified.
Use case: Use CMD to set default behavior that can be overridden by the user when running the
container.
Syntax:
dockerfile
CMD ["executable", "param1", "param2"]
• ENTRYPOINT: Specifies the executable that will always run when the container starts, and
cannot be easily overridden. If the ENTRYPOINT is defined, you can pass arguments to it using
CMD or during container runtime.
Use case: Use ENTRYPOINT when you want to specify the main application or process that will run in
the container, and ensure that it’s always executed.
Syntax:
dockerfile
ENTRYPOINT ["executable", "param1", "param2"]
Key Difference:
o CMD is the default command that can be replaced by the user at runtime.
o ENTRYPOINT defines the fixed command to be run, and any arguments passed during
runtime are appended to the ENTRYPOINT.
Example:
dockerfile
ENTRYPOINT ["echo"]
CMD ["Hello, World!"]
When you run a container built from the Dockerfile above, the output is Hello, World!: the ENTRYPOINT (echo) always runs, while the CMD value is only a default argument that can be replaced at runtime.
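As a sketch, assuming the image built from that Dockerfile is tagged cmd-vs-entrypoint (a hypothetical name):
docker run cmd-vs-entrypoint
docker run cmd-vs-entrypoint "Hi there"
The first command prints Hello, World! (the default CMD argument); the second prints Hi there, because the runtime arguments replace CMD while echo still runs as the entrypoint.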
ARG vs ENV
• ARG: Defines build-time variables that can be passed during the image build process using the
--build-arg flag.
Use case: Use ARG to define variables that will only be used during the image build process and are
not available after the image is built.
Syntax:
ARG <variable_name>[=<default_value>]
• ENV: Sets environment variables inside the container that are available to the running container
and any processes within it.
Use case: Use ENV to define variables that will be available during runtime.
Syntax:
ENV <key>=<value>
Key Difference:
o ARG values exist only while the image is being built and are not available in the running container.
o ENV values are baked into the image and remain available at runtime.
Example:
ARG VERSION=1.0
ENV APP_NAME="MyApp"
RUN echo "Running $APP_NAME"
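To pass a different build-time value you use the --build-arg flag mentioned above; a sketch, assuming the snippet above is in the current directory and the image tag arg-demo is hypothetical:
docker build --build-arg VERSION=2.0 -t arg-demo .
Here VERSION is only visible while the image is being built, while APP_NAME remains visible inside containers created from the image.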
We will create a simple web application using Docker, exploring key Dockerfile components such as
FROM, RUN, CMD, ENTRYPOINT, COPY, ADD, WORKDIR, EXPOSE, ENV, VOLUME, etc.
Pre-requisites:
1. Docker installed on your machine.
2. A basic web app (e.g., Node.js, Python, or even a simple HTML file) will be used for this example.
For this example, let's create a simple index.html file for our web app.
my-docker-app/
├── Dockerfile
└── index.html
index.html:
html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
</head>
<body>
<h1>Hello, Docker!</h1>
</body>
</html>
FROM nginx:alpine
LABEL maintainer="[email protected]"
LABEL version="1.0"
# Step 3: Copy the local index.html file to the nginx server's web directory
COPY index.html /usr/share/nginx/html/
# Step 4: Set the working directory (optional but can be useful for context)
WORKDIR /usr/share/nginx/html/
EXPOSE 80
# Step 6: CMD to run NGINX in the foreground
CMD ["nginx", "-g", "daemon off;"]
This will create an image named my-docker-webapp based on the nginx base image, and it will copy
your index.html file into the appropriate directory for NGINX to serve.
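The build and run commands for this setup would look roughly like the following (run from the my-docker-app directory; the exact commands were not preserved in this copy, so treat them as a sketch):
docker build -t my-docker-webapp .
docker run -d -p 8080:80 my-docker-webapp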
• -p 8080:80: Maps port 8080 on your host machine to port 80 inside the container (NGINX
default).
Now, you can open a browser and visit http://localhost:8080. You should see the “Hello, Docker!”
message from the index.html file.
Step 5: Explanation of Dockerfile Components
1. FROM
dockerfile
FROM nginx:alpine
• Specifies the base image (NGINX in this case) to build the image upon.
• We are using the alpine version of NGINX, which is a lightweight version of the NGINX image.
2. LABEL
dockerfile
LABEL maintainer="[email protected]"
LABEL version="1.0"
• Adds metadata to the image, like the maintainer's contact info and the image version. This
information is helpful for future reference or when sharing the image.
3. COPY
dockerfile
COPY index.html /usr/share/nginx/html/
• Copies the index.html file from your local machine into the Docker image.
• It places the file in the NGINX default location (/usr/share/nginx/html/) where it expects web files
to be.
4. WORKDIR
WORKDIR /usr/share/nginx/html/
• Any subsequent instructions (such as RUN, CMD, or ENTRYPOINT) will execute from this
directory.
5. EXPOSE
EXPOSE 80
• Informs Docker that the container will listen on port 80 (the default HTTP port). However, this
doesn’t actually map the port to the host system; it’s just for documentation and networking
purposes.
6. CMD
dockerfile
CMD ["nginx", "-g", "daemon off;"]
• The default command to run when the container starts. It tells NGINX to run in the foreground
(daemon off) and not exit after starting.
Let's modify the Dockerfile to install additional packages using the RUN instruction.
dockerfile
FROM nginx:alpine
# Install an additional package (curl) in the image
RUN apk add --no-cache curl
# Step 3: Copy the local index.html file to the nginx server's web directory
COPY index.html /usr/share/nginx/html/
WORKDIR /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Explanation:
• RUN: Installs curl in the image. This can be useful if you need additional tools or dependencies
in your container.
• You can verify this by connecting to the running container and executing curl.
We can use ENV to define environment variables that can be accessed within the container.
FROM nginx:alpine
# Define an environment variable available inside the container (the value is illustrative)
ENV APP_NAME="MyDockerApp"
# Step 3: Copy the local index.html file to the nginx server's web directory
COPY index.html /usr/share/nginx/html/
WORKDIR /usr/share/nginx/html/
EXPOSE 80
# Step 6: CMD to run NGINX in the foreground
CMD ["nginx", "-g", "daemon off;"]
Now, the APP_NAME variable is available to the running container. You can modify the index.html to
display this value dynamically.
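One way to confirm the variable is set is to run a command inside the running container with docker exec (a sketch):
docker exec -it <container_id> sh -c 'echo $APP_NAME'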
Docker images are stored in and distributed from registries. The main types are:
1. Docker Hub
o Description: Docker Hub is the default public registry maintained by Docker. It hosts a large number of official and community-contributed images.
o Use Cases:
▪ Pulling official images (such as nginx, mysql, or node) and sharing public images with the community.
2. Private Registry
o Description: A private Docker registry is a registry you host on your own infrastructure. It’s useful for storing proprietary or sensitive images that you don’t want to share publicly.
o Use Cases:
▪ Storing proprietary or internal images that should not leave your own infrastructure.
3. Third-Party Registries
o Description: These are Docker registries provided by cloud providers and other services
such as Google Container Registry (GCR), Amazon Elastic Container Registry
(ECR), and Azure Container Registry (ACR).
o Use Cases:
▪ Tight integration with the respective cloud platform (e.g., GCR with Google Cloud,
ACR with Azure).
Hands-On: Using Docker Registries
1. Pull an image from Docker Hub:
docker pull nginx
This command pulls the nginx image from Docker Hub to your local machine.
2. Once the image is pulled, you can run it with the following command:
docker run -d -p 8080:80 nginx
• This will run the nginx container in detached mode (-d) and map port 8080 on the host to port 80 inside the container.
• Visit http://localhost:8080 in your browser. You should see the default NGINX page.
To set up a private Docker registry, you can use the official registry image provided by Docker.
First, start a private registry container using the official Docker Registry image:
docker run -d -p 5000:5000 registry:2
• This will start a private Docker registry container on port 5000 locally (on your machine).
• registry:2 is the current major version of the official Docker registry image.
o First, pull any image (e.g., nginx) to push to your private registry.
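A sketch of the pull, tag, and push commands for a registry running on localhost:5000 (the repository name localhost:5000/nginx is an assumption):
docker pull nginx
docker tag nginx localhost:5000/nginx
docker push localhost:5000/nginx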
• After the push is complete, you’ve stored the image on your private registry.
curl http://localhost:5000/v2/_catalog
Cloud-based Docker registries like Amazon Elastic Container Registry (ECR), Google Container
Registry (GCR), and Azure Container Registry (ACR) are similar in many ways to private registries,
but they provide additional benefits for integration with cloud services.
Let’s walk through an example of using Amazon Elastic Container Registry (ECR).
aws ecr get-login-password --region <your-region> | docker login --username AWS --password-stdin
<aws_account_id>.dkr.ecr.<your-region>.amazonaws.com
• Replace <your-region> with your AWS region and <aws_account_id> with your AWS account
ID.
o Tag your local image (e.g., nginx) for your ECR repository:
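A sketch of those tag and push commands, reusing the placeholders above (the repository name nginx is an assumption):
docker tag nginx <aws_account_id>.dkr.ecr.<your-region>.amazonaws.com/nginx:latest
docker push <aws_account_id>.dkr.ecr.<your-region>.amazonaws.com/nginx:latest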
Docker Volumes
A Docker Volume is a persistent storage location for Docker containers. Volumes are managed by
Docker and are stored outside the container’s filesystem. Volumes can be shared and mounted across
containers, making them useful for storing database files, application logs, or configuration data that
needs to persist.
1. Named Volumes: These are created and managed by Docker under a name you choose and can be reused and shared across containers.
2. Anonymous Volumes: These are created by Docker with random names and are used when
no specific name is provided.
3. Host Volumes (Bind Mounts): These link a file or directory on the host system to a container.
• Data Persistence: Data inside containers is ephemeral, so using volumes allows data to survive
container restarts and removals.
• Sharing Data: Volumes can be shared between containers, enabling them to access and modify
the same data.
• Backup and Restore: You can easily back up, restore, and migrate volumes.
• Create a Volume:
docker volume create <volume_name>
• List Volumes:
docker volume ls
• Inspect a Volume:
docker volume inspect <volume_name>
• Remove a Volume:
docker volume rm <volume_name>
Let's create a named volume and use it with a container to persist data (a consolidated command sketch follows this walkthrough).
1. Create a Named Volume: Create a volume using the docker volume create command.
2. Run a Container Using the Volume: Start a container (e.g., a busybox container) and mount the volume at /data.
3. Write Data to the Volume: Once inside the container, create a file in /data:
4. Exit the Container: Exit the container after creating the file:
exit
5. Verify Data Persistence: Run another container and mount the same volume to check if the
data persists.
This confirms that the data stored in the volume persists across container restarts.
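Here is the consolidated command sketch referred to above, assuming the volume is named my-volume and busybox is used as the test image (both are assumptions):
docker volume create my-volume
docker run -it -v my-volume:/data busybox sh
# inside the container:
echo "Hello from the volume" > /data/test.txt
exit
docker run -it -v my-volume:/data busybox cat /data/test.txt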
Docker can automatically create an anonymous volume when you don’t specify a name. Let’s see how
to work with anonymous volumes.
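For example, running a container with -v and only a container path (no volume name) creates an anonymous volume; a sketch using busybox as the test image:
docker run -it -v /data busybox sh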
Here, Docker will automatically create an anonymous volume and mount it to /data inside the container.
2. Write Data to the Anonymous Volume: Inside the container, create a file in /data:
echo "hello from an anonymous volume" > /data/test.txt
3. Exit the Container:
exit
4. List Volumes: You can check the created anonymous volume by listing the volumes:
docker volume ls
5. Inspect the Volume: If you want to check the details of the anonymous volume, run:
docker volume inspect <volume_id>
Replace <volume_id> with the actual volume ID you got from the previous step.
Bind mounts allow you to link a directory or file on the host system to a container.
Suppose you have a directory on your host at /home/user/data. You can mount it into the container like
this:
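A sketch of that command, using busybox as the test image (an assumption):
docker run -it -v /home/user/data:/data busybox sh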
Here, -v /home/user/data:/data mounts the host directory /home/user/data into the container at /data.
2. Modify the Data: Inside the container, modify the /data directory, which will reflect changes on
the host:
echo "This is a bind mount!" > /data/host_data.txt
3. Check the File on the Host: On the host system, check if the file host_data.txt is created in
/home/user/data:
cat /home/user/data/host_data.txt
This demonstrates that changes made inside the container are reflected in the host directory.
To back up a volume, you run a temporary container that mounts both the volume and a host directory, and archives the volume's contents. In that command:
o -v $(pwd):/backup: Mounts the current directory on your host to /backup in the container.
o tar czf: Compresses the contents of the /volume directory into a .tar.gz file.
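A sketch of the backup and restore commands these flags come from, assuming the volume is named my-volume and the archive is backup.tar.gz (both assumptions):
# Back up the volume's contents into the current directory
docker run --rm -v my-volume:/volume -v $(pwd):/backup busybox tar czf /backup/backup.tar.gz -C /volume .
# Restore the archive into a (new or existing) volume
docker run --rm -v my-volume:/volume -v $(pwd):/backup busybox tar xzf /backup/backup.tar.gz -C /volume
# Verify: list the restored contents
docker run --rm -v my-volume:/volume busybox ls /volume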
3. Verify the Data: Run a container with the restored volume and check the contents:
You should see the original file data, indicating the volume has been restored.
Over time, unused volumes can accumulate, taking up unnecessary disk space. Docker provides the
docker volume prune command to remove all unused volumes.
To remove unused volumes (those that are not being used by any container), run:
docker volume prune
1. Build a simple Node.js application that writes a log entry for every request.
2. Mount the volume to a specific directory in the container to save logs or other data.
3. Show how the volume can be used to persist data between container runs.
mkdir my-docker-app
cd my-docker-app
javascript
// app.js: a minimal HTTP server that appends a log entry for every request
const http = require('http');
const fs = require('fs');
const logFilePath = '/usr/src/app/logs/app.log';
const server = http.createServer((req, res) => {
  const logMessage = `Request received at ${new Date().toISOString()}\n`;
  fs.appendFileSync(logFilePath, logMessage);
  res.write('Request logged!');
  res.end();
});
server.listen(3000, () => {
  console.log('Server listening on port 3000');
});
3. Create a package.json File
json
{
  "name": "my-docker-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {}
}
(Alternatively, you can generate a starter package.json with npm init -y.)
Since this is a simple app and we are not using external dependencies, you don’t need to install anything
specific.
1. Dockerfile Contents
dockerfile
FROM node:14
WORKDIR /usr/src/app
COPY package.json ./
COPY app.js ./
# Install dependencies
RUN npm install
# Create the logs directory where app.log will be written
RUN mkdir -p /usr/src/app/logs
EXPOSE 3000
VOLUME ["/usr/src/app/logs"]
CMD ["npm", "start"]
• FROM node:14: Use the official Node.js 14 image as the base image.
• COPY package.json ./: Copy the package.json file into the container.
• COPY app.js ./: Copy the app.js file into the container.
• RUN npm install: Install Node.js dependencies (none in this case, but it's useful for larger
apps).
• RUN mkdir -p /usr/src/app/logs: Create a logs directory inside the container, where logs will
be stored.
• CMD ["npm", "start"]: Start the application using the npm start command, which runs the app.js
file.
In the terminal, inside your project directory (where the Dockerfile is located), build the Docker image:
Run the Docker container with the -v flag to mount the volume. This will persist data (logs in this case)
in the logs directory on your local system.
• -p 3000:3000: Bind port 3000 on the host to port 3000 in the container.
This command will run the application, and any logs generated will be stored in the my-app-logs volume.
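A sketch of those build and run commands (the image tag my-docker-app is an assumption; the port mapping and volume name come from the description above):
docker build -t my-docker-app .
docker run -d -p 3000:3000 -v my-app-logs:/usr/src/app/logs my-docker-app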
http://localhost:3000
Each time you refresh the page, a log entry will be added to the logs directory inside the container, and
this will be saved in the Docker volume.
After the container is running and logs are being generated, you can inspect the volume to verify that
the data is stored.
docker volume ls
This will list all the Docker volumes, and you should see my-app-logs in the list.
docker volume inspect my-app-logs
This will provide detailed information about the volume, including the mount point where the volume
data is stored on the host machine.
To access the logs stored in the volume, you can run a temporary container to examine the contents of
the volume. For example:
• The cat /logs/app.log command will display the contents of the app.log file where logs are being
stored.
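For example (a sketch using busybox as the temporary image):
docker run --rm -v my-app-logs:/logs busybox cat /logs/app.log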
Step 6: Clean Up
After you're done with the container and volume, you can stop and remove them:
docker stop <container_id>
docker rm <container_id>
docker volume rm my-app-logs
In this guide, we will walk through the basics of Docker networking, various network types, and practical
hands-on examples with commands and Dockerfiles.
Docker provides several network modes, each suited for different use cases:
1. Bridge Network (default)
o This is the default network mode when no network is specified. It creates a private internal network on your host system; containers attached to it can communicate with each other and reach the outside world through NAT, while ports must be published (e.g., -p 8080:80) to expose a container to the host.
o Commonly used for containers that need to communicate within the same host.
2. Host Network
o In this mode, containers share the host’s networking namespace, meaning they can
access the host’s IP directly. Containers use the host’s IP address for external
communication.
o Typically used for performance-critical applications that need direct access to the host’s
networking stack.
3. Overlay Network
o This is used for multi-host networking, often in Docker Swarm or Kubernetes clusters. It
allows containers across multiple hosts to communicate with each other as if they were
on the same network.
4. None Network
o This mode disables all networking for the container. The container cannot communicate
with other containers or the host system.
You can create a custom bridge network to manage your containers more easily. A custom bridge
network allows containers to communicate with each other more effectively than the default bridge
network.
• --driver bridge: Specifies that we want to use the bridge network driver.
Now let's run two containers that will communicate with each other over the custom bridge network (the full command sequence is sketched at the end of this walkthrough).
You can enter one of the containers and ping the other container to verify communication:
ping container2
You should see responses, confirming that the containers are able to communicate within the same
custom bridge network.
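Putting this walkthrough together, the commands would look roughly like this (the network name my_bridge_network and the busybox test image are assumptions):
docker network create --driver bridge my_bridge_network
docker run -dit --name container1 --network my_bridge_network busybox
docker run -dit --name container2 --network my_bridge_network busybox
docker exec -it container1 ping -c 3 container2
A user-defined bridge also provides DNS-based name resolution, which is why container1 can reach container2 by name.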
The Host Network mode makes the container share the host’s network stack directly. In this mode, the
container uses the host’s IP address.
"NetworkMode": "host",
In this case, the container will share the host’s networking interface, and it won’t have its own private
IP address.
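A sketch of running a container in host mode and checking its network settings (the container name is an assumption; host networking in this form applies to Linux hosts):
docker run -d --name host_nginx --network host nginx
docker inspect host_nginx --format '{{.HostConfig.NetworkMode}}'
The second command prints host, matching the NetworkMode value shown above.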
The Overlay Network is designed for communication between containers across different hosts, often
used in Docker Swarm mode. Here, we'll set up an overlay network in a multi-host environment.
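First, initialize Swarm mode on the manager (a sketch; the overlay network name matches the service commands below):
docker swarm init
# create the overlay network used by the services below
docker network create -d overlay my_overlay_network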
This will start the Swarm manager and provide a command to join worker nodes (if necessary).
With Swarm mode active, you can deploy services that use the overlay network.
docker service create --name service1 --replicas 1 --network my_overlay_network busybox sleep 3600
docker service create --name service2 --replicas 1 --network my_overlay_network busybox sleep 3600
The services service1 and service2 can now communicate with each other using the overlay network,
even if they are deployed on different Docker hosts.
In the None network mode, containers cannot access any network resources. This is useful when a
container needs to be isolated from the network completely.
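For example (a sketch using busybox):
docker run -it --network none busybox sh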
This container has no access to any networking resources. You cannot ping or communicate with it
from other containers or the host.
Dockerfiles don't directly control networking, but you can specify exposed ports and set up environment
variables that may affect how containers use networks.
We’ll create a Node.js application and expose a port in the Dockerfile for network communication.
javascript
// app.js: a minimal HTTP server used for the networking example
const http = require('http');
const server = http.createServer((req, res) => {
  res.write('Hello from Docker networking!');
  res.end();
});
server.listen(3000, () => console.log('Listening on port 3000'));
2. Create a Dockerfile:
FROM node:14
# Set working directory
WORKDIR /app
COPY package.json ./
COPY app.js ./
# Install dependencies
RUN npm install
EXPOSE 3000
# Start the app
CMD ["node", "app.js"]
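A sketch of the build and run commands for this app (the image and container name node_app matches the cleanup commands below):
docker build -t node_app .
docker run -d --name node_app -p 3000:3000 node_app
Once the container is running, open: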
http://localhost:3000
List Networks
docker network ls
Inspect a Network
docker network inspect <network_name>
This will provide detailed network information, including IP addresses and attached containers.
Clean Up
After testing, you can clean up resources like containers, networks, and images.
docker stop node_app
docker rm node_app
docker network rm <network_name>
We'll be using AWS EC2 (Elastic Compute Cloud) in this example, which offers a free-tier for new
users. This process can also be adapted for other cloud platforms like Google Cloud Platform (GCP)
or Microsoft Azure, which also offer free-tier resources for deploying Docker applications.
Pre-requisites
• AWS account or any cloud provider account with a free-tier eligible instance.
o Select an Amazon Machine Image (AMI): Choose Ubuntu Server 20.04 LTS (or any
Linux distribution you prefer).
o Configure instance details and storage as per your preference (default options are
typically fine for free-tier usage).
o In Security Group, select Create a new security group and configure it to allow HTTP
(port 80), HTTPS (port 443), and SSH (port 22) connections.
o Launch the instance and download the key pair (.pem file) to access the instance.
o Once the instance is running, you can access it through SSH. Open a terminal and use
the following command (replace <your-instance-public-ip> with the actual public IP of
your EC2 instance):
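A sketch of that SSH command, assuming the downloaded key pair is named my-key.pem and the default ubuntu user for Ubuntu AMIs:
chmod 400 my-key.pem
ssh -i my-key.pem ubuntu@<your-instance-public-ip>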
Once logged into the EC2 instance, the next step is to install Docker.
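One way to do that on Ubuntu is to install the docker.io package from the distribution repositories (a sketch; the original steps may have used Docker's official apt repository instead):
# Update packages
sudo apt-get update
# Install Docker
sudo apt-get install -y docker.io
# Start Docker on boot
sudo systemctl enable --now docker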
mkdir my-docker-app
cd my-docker-app
Create a simple Node.js application that listens on port 3000 and serves a message:
javascript
// app.js
const http = require('http');
const server = http.createServer((req, res) => {
  res.write('Hello');
  res.end();
});
server.listen(3000, () => {
  console.log('App listening on port 3000');
});
json
{
  "name": "docker-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {}
}
5. Install Dependencies:
npm install
dockerfile
# Use Node.js official image from Docker Hub
FROM node:14
WORKDIR /usr/src/app
COPY package.json ./
COPY app.js ./
# Install dependencies
RUN npm install
EXPOSE 3000
CMD ["npm", "start"]
Run the following command to build the Docker image from the Dockerfile:
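A sketch of the build and run commands (the image tag docker-app is an assumption; the port mapping is explained below):
docker build -t docker-app .
docker run -d -p 80:3000 docker-app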
• -p 80:3000: Map port 3000 inside the container to port 80 on the EC2 instance (so you can
access the app from the web).
Now, your Node.js application should be running on port 80 of your EC2 instance.
1. Open a browser and go to the Public IP of your EC2 instance (found in the EC2 dashboard). If
everything is set up correctly, you should see the message:
Hello
This means your Node.js app is successfully running inside a Docker container on AWS EC2.
Step 5: Set Up Persistent Storage Using Docker Volumes (Optional)
In this step, we'll show how to persist data using Docker volumes, which can be helpful if your
application needs to store files or logs.
Run the container again but this time mount the volume to a specific directory in your container.
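A sketch of those commands, reusing the docker-app image tag from the earlier build step (an assumption) and the volume name described below:
docker volume create my_app_data
docker run -d -p 80:3000 -v my_app_data:/usr/src/app/data docker-app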
With this setup, any data written to /usr/src/app/data inside the container will be persisted in the
my_app_data volume, even if the container is stopped or removed.
To make your Docker image available for deployment elsewhere, you can push it to Docker Hub. Here's
how:
docker login
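After logging in, you tag the image with your Docker Hub username and push it; a sketch (replace <your-dockerhub-username> with your own account name):
docker tag docker-app <your-dockerhub-username>/docker-app:latest
docker push <your-dockerhub-username>/docker-app:latest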
Now, your Docker image is available on Docker Hub and can be pulled from anywhere.
To avoid unnecessary charges and resource usage, don’t forget to clean up your resources once you’re
done.
docker stop <container_id>
docker rm <container_id>
In the AWS EC2 dashboard, select the EC2 instance and click Terminate.
This will stop the instance and delete it, preventing any further charges from accumulating.
1. Setting up the free-tier cloud server (AWS EC2 / GCP Compute Engine / Azure VM)
o Configure the instance and allow ports 22 (SSH), 80 (HTTP), and 443 (HTTPS).
o Click Launch, and download the key pair ( .pem file) for SSH access.
o Choose Ubuntu for the OS and f1-micro for the machine type (eligible for the free tier).
o Configure ports for SSH (22), HTTP (80), and HTTPS (443).
ssh username@<your-vm-ip>
Once you have access to your cloud instance (EC2, GCP, or Azure), install Docker.
2. Update the package index and install any required dependencies.
3. Install Docker (for example, the docker.io package on Ubuntu) and verify it with docker --version.
We will use the official Tomcat Docker image to run the Tomcat server in a container.
Run the Tomcat server in a Docker container. We'll expose port 8080 from the container to port 80 on
the host so it’s accessible via the web.
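A sketch of that command (the container name tomcat-server matches the cleanup step later in this section):
docker run -d --name tomcat-server -p 80:8080 tomcat:latest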
• -p 80:8080: Expose container’s port 8080 to host’s port 80 (so you can access it from the
browser).
We will now deploy a simple Java web application (WAR file) to Tomcat in Docker.
If you don’t have a WAR file ready, you can create one using a simple Java Servlet or download a
sample from GitHub.
For demonstration, we’ll use a basic "Hello World" servlet or a simple sample.war file.
Use the docker cp command to copy the WAR file into the Tomcat container:
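A sketch of that command, assuming the WAR file is named sample.war and the container is named tomcat-server as above:
docker cp sample.war tomcat-server:/usr/local/tomcat/webapps/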
This command copies sample.war into Tomcat's webapps directory inside the container, where Tomcat automatically deploys it.
After copying the WAR file, you may need to restart the container for Tomcat to deploy the application:
docker restart tomcat-server
You can now access your deployed application by navigating to the public IP of your cloud server (AWS,
GCP, or Azure) in a web browser.
arduino
http://<your-server-ip>/sample
This will open your Java web application deployed in the Tomcat container.
If the firewall on your cloud server is blocking HTTP (port 80), ensure that the required ports are open
in the security settings of your cloud provider.
• AWS EC2: Go to Security Groups in the EC2 dashboard and add inbound rules for HTTP (port
80).
• GCP Compute Engine: In Firewall rules, ensure HTTP traffic (port 80) is allowed.
• Azure: Open Network Security Group for the VM and ensure HTTP is allowed.
After confirming the firewall settings, try accessing the Tomcat server again using the public IP.
To scale your application, you can use Docker Compose, which allows you to define multi-container
applications.
yaml
version: '3'
services:
tomcat:
image: tomcat:latest
ports:
- "80:8080"
volumes:
- ./webapps:/usr/local/tomcat/webapps
environment:
- TZ=America/New_York
Use the following command to start the application with Docker Compose:
docker-compose up -d
This command will start the Tomcat server and deploy the application defined in the webapps folder.
After completing the deployment, make sure to clean up any resources to avoid unnecessary charges.
docker stop tomcat-server
docker rm tomcat-server
• For AWS: Go to EC2 Dashboard, select your instance, and click Terminate.
Docker Compose
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you
to define all your services, networks, and volumes in a single file (docker-compose.yml) and spin up
your entire application with a single command.
• Docker Compose is primarily used for managing multi-container applications. It lets you define
all the configurations for multiple containers (like databases, web servers, etc.) in one YAML file
(docker-compose.yml).
• With Docker Compose, you can manage dependencies, network configurations, volumes, and
environment variables for all services in one place.
1. Service: A service is a container in the application. You can think of it as the running component
of your app, such as a database or web server.
2. Network: Docker Compose automatically creates a network for your containers to communicate
with each other.
3. Volume: Volumes are used to persist data generated by and used by Docker containers.
4. Configuration File: The docker-compose.yml file describes the services, networks, and
volumes for your application.
1. Install Docker Compose (on Linux, download the binary from the docker/compose GitHub releases page and make it executable, or install the docker-compose-plugin package).
2. Verify the installation:
docker-compose --version
For macOS / Windows: Docker Compose is bundled with Docker Desktop, so you don’t need to install
it separately.
Let's create a simple web application with a database. We'll use Node.js with Express for the web application and MySQL for the database. The project structure looks like this:
my-docker-compose-app/
├── docker-compose.yml
├── app/
│ ├── Dockerfile
│ ├── app.js
│ ├── package.json
└── db/
└── init.sql
json
{
  "name": "docker-web-app",
  "version": "1.0.0",
  "main": "app.js",
  "dependencies": {
    "mysql": "^2.18.1",
    "express": "^4.17.1"
  },
  "scripts": {
    "start": "node app.js"
  }
}
javascript
// app.js: an Express app that queries MySQL and returns the users table
const express = require('express');
const mysql = require('mysql');
const app = express();
const port = 3000;
const db = mysql.createConnection({
  host: process.env.DB_HOST || 'db',
  user: 'root',
  password: 'example',
  database: 'mydb'
});
db.connect((err) => {
  if (err) throw err;
  console.log('Connected to MySQL');
});
app.get('/', (req, res) => {
  db.query('SELECT * FROM users', (err, result) => {
    if (err) throw err;
    res.send(result);
  });
});
app.listen(port, () => {
  console.log(`App listening on port ${port}`);
});
Create a Dockerfile inside the app/ directory to build the image for the web application.
dockerfile
# Node.js base image (any recent Node image works)
FROM node:14
WORKDIR /usr/src/app
COPY package.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
Create an init.sql file to initialize the MySQL database with some sample data.
sql
USE mydb;
CREATE TABLE IF NOT EXISTS users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL
);
INSERT INTO users (name) VALUES ('Alice'), ('Bob');
Now, create the docker-compose.yml file to define how to run the app and the database together.
yaml
version: '3'
services:
app:
build: ./app
ports:
- "3000:3000"
environment:
- DB_HOST=db
- DB_USER=root
- DB_PASSWORD=example
- DB_NAME=mydb
depends_on:
- db
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: example
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
networks:
- app-network
networks:
app-network:
driver: bridge
Explanation:
• app service: Defines the Node.js application. It will be built from the Dockerfile in the app/
directory.
• db service: Defines the MySQL database. It uses the official MySQL image and initializes the
database with the init.sql script.
• depends_on: Ensures that the app container starts only after the db service is ready.
cd my-docker-compose-app
docker-compose up --build
o The --build flag ensures Docker Compose rebuilds the images if necessary.
3. Access the Application:
Once the application is up and running, open your browser and visit:
arduino
http://localhost:3000
You should see the result of the query to the MySQL database, which will show all the users in the users
table.
Docker Compose makes it easy to scale your services. For example, you can scale the app service to
run multiple instances of the web application for better load balancing.
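A sketch of that scaling command (3 instances is just an example):
docker-compose up -d --scale app=3
Note that with a fixed host port mapping like 3000:3000 only one instance can bind the host port, so for real load balancing you would put a proxy in front or drop the fixed host port.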
docker ps
docker-compose down
If you want to remove volumes and networks along with containers, use the following:
docker-compose down -v
This will stop and remove the containers, networks, and volumes defined in the docker-compose.yml
file.
Docker Swarm
Docker Swarm is a container orchestration tool built into Docker that enables you to manage a cluster
of Docker engines. It allows you to create and manage a cluster of Docker nodes, known as a "swarm,"
and deploy services across multiple Docker hosts. Swarm provides high availability, load balancing,
scaling, and self-healing capabilities to applications running in containers.
• Swarm Mode: Allows Docker containers to run on multiple hosts in a coordinated manner.
• Service Management: You can define services that run on the swarm, which Docker will
manage, monitor, and scale.
• High Availability: If a node in the swarm fails, Docker Swarm automatically reschedules
services to another available node.
• Load Balancing: Swarm load balances incoming traffic to the services across nodes.
• Rolling Updates: Perform updates to services with no downtime by rolling out changes to
replicas.
In this tutorial, we will use 3 nodes: one manager node and two worker nodes.
First, ensure that Docker is installed on all the machines (manager and worker nodes). Refer to the
steps in the previous sections on how to install Docker on a machine.
The first step is to initialize the Docker Swarm mode on the manager node.
ssh user@manager-node-ip
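Then initialize the swarm (a sketch; the flag is explained below):
docker swarm init --advertise-addr <manager-node-ip>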
o --advertise-addr: This is the IP address on which the manager node will advertise itself
to other nodes in the swarm.
o The command will return a token that you will need to join the worker nodes to the swarm.
Next, you need to add worker nodes to the swarm. On each worker node:
2. Run the docker swarm join command with the token provided from the manager node.
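That join command looks roughly like this (use the token and manager address printed by docker swarm init; the values here are placeholders):
docker swarm join --token <worker-join-token> <manager-node-ip>:2377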
3. On the manager node, verify the worker nodes have successfully joined the swarm:
docker node ls
Now, you have a Swarm cluster with 1 manager node and 2 worker nodes.
With the swarm initialized and the nodes added, you can now deploy services across the swarm.
1. On the manager node, run the following command to deploy the Nginx service:
o --name: Specifies the name of the service (in this case, web-server).
o --replicas: Defines the number of instances of the service (replicas) you want to run
across the swarm.
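A sketch of that service-create command, assuming 3 replicas and publishing port 80 (the replica count is an assumption):
docker service create --name web-server --replicas 3 -p 80:80 nginx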
2. Docker prints the ID of the new service once it has been created.
3. To view the tasks (containers) that Docker Swarm has created for this service, run:
docker service ps web-server
You should see a list of tasks across the manager and worker nodes.
4. Now, you can access the Nginx web service by navigating to the manager node's IP in a
browser:
arduino
http://<manager-node-ip>:80
You should see the default Nginx welcome page. Docker Swarm has automatically load-balanced the
service across the available nodes.
Scale the service to 5 replicas:
docker service scale web-server=5
You should then see 5 tasks (containers) running across the nodes in the swarm.
Docker Swarm provides the ability to update services in a rolling manner without downtime.
Let’s update the web-server service to use a different version of the Nginx image.
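A sketch of that update command:
docker service update --image nginx:alpine web-server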
This will update the web-server service to use the nginx:alpine image while keeping the rolling update
process.
Docker Swarm will gradually replace the old containers with new ones, ensuring no downtime during
the update.
One of the great features of Docker Swarm is self-healing. If a container or node fails, Docker Swarm
automatically re-schedules the service on available nodes.
You can simulate a failure by stopping a worker node. For example, on a worker node:
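The exact command used here was not preserved; one way to simulate the failure (an assumption) is to stop the Docker service on that worker:
sudo systemctl stop docker
Alternatively, from the manager you can drain the node with docker node update --availability drain <node-name>.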
This will mark the worker node as unavailable. Docker Swarm will automatically reschedule the tasks
(containers) that were running on that node to other available nodes.
Run docker service ps web-server again; Docker Swarm should have re-scheduled the tasks to the remaining available nodes.
Once you are done with the setup, you can remove the service with docker service rm web-server and clean up the remaining resources (for example, have each node leave the swarm with docker swarm leave --force).
Let's go through the concepts and roles of Manager and Worker nodes in Docker Swarm, followed by
a practical hands-on demonstration of setting up and managing them.
1. Manager Nodes
1. Orchestrating the Swarm Cluster: Manager nodes are responsible for managing the state and
behavior of the Swarm cluster. They handle tasks such as service creation, scaling, and task
scheduling.
2. Leader Election: Only one manager node acts as the leader at any given time. The leader
manager is responsible for maintaining the overall state of the Swarm, while other manager
nodes act as followers that replicate the state of the leader.
3. Service Management: Managers maintain the desired state of services, ensuring that the
services are running with the right number of replicas.
4. Task Scheduling: Managers are responsible for scheduling tasks on worker nodes. They
decide which worker node should run a specific container for a service.
5. Cluster Management: Managers also handle administrative tasks such as adding/removing
nodes, updating services, and handling network configurations.
6. Fault Tolerance: If the leader manager node fails, another manager node can be elected to
take over as the new leader.
High Availability:
A Docker Swarm cluster should ideally have an odd number of manager nodes (e.g., 3, 5, etc.) to
ensure fault tolerance. This ensures that in case of failure, there’s always a majority of managers to
perform a leader election and keep the cluster functional.
2. Worker Nodes
1. Running Containers: Worker nodes are responsible for executing the tasks (containers)
assigned by the manager nodes. These tasks are the running instances of the services defined
in the Docker Swarm.
2. Task Execution: Worker nodes don't manage the cluster state, but they handle the
containerized application and run the actual workloads (services) that are part of the application.
3. Scaling Services: While managers can scale services (increase or decrease the number of
replicas), the actual work of running the containers happens on the worker nodes.
4. Communication with Managers: Worker nodes communicate with the manager nodes to get
updates and instructions on which tasks (containers) they need to run.
Key differences between manager and worker nodes:
• Leadership: One manager acts as the leader and the other managers are followers; worker nodes do not participate in leader election.
• Fault Tolerance: If the leader manager fails, another manager can be elected; a worker node can fail without affecting the swarm's state.
• Number of Nodes in the Swarm: There must be at least 1 manager, but typically 3 or more for high availability; worker nodes can scale horizontally, with as many as needed.
Let's walk through the steps of setting up Docker Swarm with 1 manager node and 2 worker nodes.
Make sure Docker is installed on all nodes (Manager and Worker). You can follow the installation steps
from previous tutorials if necessary.
1. SSH into Manager Node (The first node that will be the manager):
ssh user@manager-node-ip
Run the following command to initialize the Docker Swarm. Replace <manager-node-ip> with the actual
IP address of the manager node.
docker swarm init --advertise-addr <manager-node-ip>
This will return a docker swarm join command containing a token that worker nodes will use to join the swarm; copy it for the next step.
ssh user@worker-node1-ip
Run the docker swarm join command with the join token you copied earlier:
ssh user@manager-node-ip
Run the following command to see the list of nodes in the swarm:
docker node ls
The manager-node is listed as the Leader, and both worker nodes (worker-node1 and worker-node2) are listed as Ready.
Let’s deploy a simple Nginx web service across the swarm and verify that it runs on all available nodes.
1. On the Manager Node, create a service with 3 replicas (which will be distributed across the
manager and worker nodes):
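A sketch of that command (the service name web-server is an assumption, reused from the earlier example):
docker service create --name web-server --replicas 3 -p 80:80 nginx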
2. Check where the replicas are running with docker service ps; you should see the Nginx web service running on the manager and worker nodes. Then open the service in a browser:
http://<manager-node-ip>:80
You should see the default Nginx welcome page. Docker Swarm has automatically distributed the
replicas across the available manager and worker nodes.
To scale the service, you can increase or decrease the number of replicas. For example, to scale to 5
replicas:
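A sketch, reusing the assumed service name from above:
docker service scale web-server=5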
In Docker, a stack refers to the application and all the associated services and infrastructure required
to run it. Using Docker Stack, you can define all the components of an application, such as web servers,
databases, and networking, in a Docker Compose file and then deploy them to a Docker Swarm
cluster.
1. Declarative Application Deployment: You define the entire application in a configuration file
(docker-compose.yml), and Docker handles deploying and managing it across the Swarm
cluster.
2. Scaling Services: Docker Stack supports scaling services up or down, i.e., adjusting the
number of container instances for a service depending on the load or demand.
3. Service Discovery and Load Balancing: Docker Stack provides automatic service discovery
and load balancing across the containers within a service. This ensures that the application is
resilient and highly available.
4. Persistent Volumes: Docker Stack allows you to define persistent storage for your containers
using Docker volumes, ensuring that data is retained even when containers are stopped or
removed.
A Docker Stack is defined using a Docker Compose file (typically named docker-compose.yml). This
file describes the services, networks, and volumes required for the application. Once the file is created,
the stack is deployed to a Docker Swarm cluster, where Docker manages the deployment, scaling, and
orchestration of the containers.
Here is an example of a simple docker-compose.yml file used for defining a Docker Stack:
yaml
version: '3.8'
services:
web:
image: nginx:alpine
ports:
- "8080:80"
networks:
- webnet
db:
image: mysql:5.7
environment:
MYSQL_ROOT_PASSWORD: example
networks:
- webnet
volumes:
- db_data:/var/lib/mysql
networks:
webnet:
volumes:
db_data:
• services: This section defines the containers (services) that will be part of the stack. In this
example, we have two services: web (a simple Nginx container) and db (a MySQL container).
• networks: The containers defined in the stack can communicate with each other over defined
networks. Here, webnet is the network connecting both services.
• volumes: Volumes are defined for persistent storage. In this case, db_data stores the MySQL
database data to persist even if the container is recreated.
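To deploy this file as a stack, you run docker stack deploy from a Swarm manager; a sketch (the stack name mystack is an assumption):
docker stack deploy -c docker-compose.yml mystack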
• -c docker-compose.yml: Specifies the Docker Compose file to use for the stack.
After deployment, Docker Swarm will launch the necessary containers based on the docker-
compose.yml file.
Once the stack is deployed, you can use various Docker commands to manage it:
1. List stacks:
docker stack ls
2. List the services in a stack:
docker stack services <stack_name>
3. Scale a service (i.e., change the number of replicas):
docker service scale <stack_name>_<service_name>=<replicas>
4. Remove a stack, together with all of its services, networks, and volumes:
docker stack rm <stack_name>
Docker Compose vs. Docker Stack:
• Docker Compose: It is used for defining and running multi-container Docker applications on a single host. Docker Compose is typically used for local development or testing.
• Docker Stack: It uses the same Compose file format but deploys the application to a Docker Swarm cluster, where services can run across multiple nodes.
Benefits of Docker Stack:
1. Scalable and Resilient: With Docker Swarm, you can scale services based on demand. Docker Stack ensures that the application is resilient, and services can be automatically rescheduled in case of failure.
2. Integrated Networking and Volumes: Docker Stack handles networking between services and provides a way to create and manage persistent storage.
3. Service Discovery: Docker Stack provides built-in service discovery, making it easy for services to find and communicate with each other.
The choice of base image plays a significant role in the overall image size. Instead of using large base
images, opt for smaller ones:
• Alpine Linux: It's a minimal Docker image, usually around 5 MB. If you're using a language like
Python, Node.js, or Java, there are Alpine versions available for those languages (e.g.,
python:3.8-alpine or node:14-alpine).
• Distroless Images: These images only contain your application and its runtime dependencies,
with no package manager, shell, or other unnecessary tools. For example,
gcr.io/distroless/base.
dockerfile
FROM python:3.8-alpine
Multi-stage builds allow you to use multiple FROM statements in your Dockerfile. The first stage is used
to build your application, and the second stage copies only the necessary artifacts into a smaller image.
This avoids including unnecessary build tools and dependencies in the final image.
Example:
dockerfile
# Build stage (the build script and output path are illustrative)
FROM node:14 AS build
WORKDIR /app
COPY . .
RUN npm install && npm run build
# Final stage: copy only the build output into a smaller image
FROM node:14-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
Here, the build stage installs dependencies and builds the app, but only the necessary build artifacts
are copied to the final smaller image.
3. Minimize Layers
Each instruction in a Dockerfile creates a new layer in the image. To reduce the image size, you should
combine related instructions into fewer layers. For example:
• Instead of using separate RUN commands for each package installation, combine them into a
single RUN statement.
dockerfile
RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    package3
After installing packages or copying files, delete unnecessary files to reduce the image size:
• Clean up cache: For package managers like apt, npm, or pip, delete cache files to reduce size.
For npm:
npm cache clean --force
For pip:
rm -rf /root/.cache
Only install the dependencies you need. For example, in a Node.js or Python app, you might be
installing unnecessary development dependencies. Make sure you're only installing the production
dependencies in your Dockerfile.
For Node.js, you can install only the production dependencies by using the --production flag:
npm install --production
For Python, ensure you use a requirements.txt that includes only the necessary packages.
6. Use .dockerignore
Like .gitignore, the .dockerignore file specifies which files should not be copied into the Docker image.
This can significantly reduce the size by avoiding unnecessary files like build artifacts, temporary files,
and development tools from being included in the image.
Example .dockerignore:
node_modules
*.log
*.md
tests/
.git
If you're working in an environment where certain architectures (e.g., ARM, AMD64) are required, you
can build an image optimized for that architecture. This ensures you're not adding unnecessary
architecture-specific dependencies.
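A sketch of a multi-architecture build using Docker Buildx (the image name and platform list are illustrative):
docker buildx build --platform linux/amd64,linux/arm64 -t <your-registry>/myapp:latest --push .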
Sometimes, a .env file or configuration file can be very large and be accidentally copied into your Docker
image. Use .dockerignore to make sure they aren't included, or move configuration to environment
variables or Docker Secrets.
Avoid installing debugging or build tools like compilers, curl, git, and text editors unless absolutely
necessary. These tools can add a lot of bloat to the image.
Example:
dockerfile
FROM scratch
# Copy the statically compiled binary into the empty image
COPY myapp /myapp
CMD ["/myapp"]
This method is useful for things like Go or statically compiled C/C++ binaries.
dockerfile
# Build stage (paths are illustrative)
FROM node:14 AS build
WORKDIR /app
COPY . .
RUN npm install && npm run build
# Final stage
FROM node:14-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
In this Dockerfile, the first stage installs dependencies and builds the application, while the final image is based on the much smaller node:14-alpine image and contains only the build output.
1. Official Documentation
• Docker Docs – The official documentation is the best place to start, covering everything from
installation to advanced topics like Docker Swarm and Docker Compose.
2. Books
• "Docker Deep Dive" by Nigel Poulton – Great for beginners and intermediate users alike. It offers
detailed explanations of Docker concepts and real-world use cases.
• "The Docker Book" by James Turnbull – A comprehensive guide for learning Docker, covering
everything from the basics to advanced features.
• "Docker in Action" by Jeffrey Nickoloff – Focuses on practical Docker usage with clear examples
and explanations.
3. Online Courses
• Udemy:
o "Docker for Beginners" – A great course for those just getting started.
• LinkedIn Learning:
4. YouTube Channels
• Docker’s Official YouTube Channel – Features tutorials, webinars, and conference sessions.
• TechWorld with Nana – Offers clear and easy-to-follow tutorials on Docker and Kubernetes.
• Docker Blog: https://www.docker.com/blog/ – Official blog with use cases, updates, and best
practices.
• Medium: Search for Docker tutorials and articles on Medium; lots of experienced developers
share their insights.
• Play with Docker: https://labs.play-with-docker.com/ – A free, online environment where you can
experiment with Docker without needing to set it up locally.
• Docker Community Slack: Engage with other Docker users for questions, discussions, and help.
• Reddit (r/docker): A very active community where you can discuss Docker or ask for help.