Learn Docker in Depth
Day 11
What is Docker?
Using Docker, developers can build, test, and deploy applications quickly
and consistently, regardless of the underlying infrastructure or hosting
environment. This makes it an essential tool for modern software
development and deployment.
@Sandip Das

Docker Architecture
The Docker architecture is designed as a client-server architecture. The client, known as the
Docker client, is a command-line tool that allows users to interact with the Docker daemon, which
is the Docker server responsible for managing Docker images, containers, networks, and volumes.
Here's a breakdown of the Docker architecture:
1. Docker Client: The Docker client is a command-line interface (CLI) tool that users use to
interact with Docker. The client sends commands to the Docker daemon and receives
responses back. The Docker client can be installed on the same machine as the Docker
daemon, or it can be installed on a remote machine and connected to the Docker daemon over
the network.
2. Docker Daemon: The Docker daemon is the server component of Docker that manages Docker
images, containers, networks, and volumes. The daemon runs in the background and listens for
API requests from the Docker client. It's responsible for starting, stopping, and managing
Docker containers, creating and managing Docker images, and managing Docker networks and
volumes.
3. Docker Registry: A Docker registry is a repository that stores Docker images. Docker Hub is
the default public registry provided by Docker, where users can store, share, and download
Docker images. However, users can also set up their own private Docker registry for storing
and sharing images within their organization.
4. Docker Images: A Docker image is a read-only template that contains the application and its
dependencies. Docker images are built using a Dockerfile, which is a text file that contains
instructions for building the image. Once built, Docker images can be stored in a registry and
used to create Docker containers.
5. Docker Containers: A Docker container is a running instance of a Docker image. Containers are
isolated from each other and from the host system, which provides a secure environment for
running applications. Docker containers can be started, stopped, and managed using the
Docker daemon.
6. Docker Networks: Docker networks are used to connect Docker containers to each other and
to the outside world. Docker provides several types of network drivers, including bridge, host,
overlay, and macvlan, to support different networking scenarios.
7. Docker Volumes: Docker volumes are used to persist data generated by Docker containers.
Docker volumes can be mounted into a container at runtime, allowing data to be shared and
persisted across multiple containers.
Docker Client
The Docker client is a command-line interface (CLI) tool that allows users to interact with the Docker daemon, which is the Docker server
responsible for managing Docker images, containers, networks, and volumes.
The Docker client communicates with the Docker daemon through the Docker API, which allows users to execute Docker commands from the
command line or from scripts. The Docker client can be installed on the same machine as the Docker daemon, or it can be installed on a remote
machine and connected to the Docker daemon over the network.
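As a sketch of the remote-client setup described above (the host name here is a placeholder, and 2376 is the conventional TLS port for the Docker daemon), the client is pointed at a remote daemon through environment variables:

```shell
# Point the local Docker CLI at a remote daemon.
# The host name below is hypothetical; 2376 is the conventional TLS port.
export DOCKER_HOST=tcp://docker-host.example.com:2376
export DOCKER_TLS_VERIFY=1   # require TLS when talking to the daemon

# Every subsequent `docker` command in this shell now goes to the remote daemon.
echo "Docker CLI will talk to: $DOCKER_HOST"
```

Unsetting these variables returns the client to the local daemon's Unix socket.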
The Docker client provides a wide range of commands for managing Docker images, containers, networks, and volumes. Here are some of the
most common commands:
docker run: Creates and starts a new Docker container from a Docker image.
docker build: Builds a new Docker image from a Dockerfile.
docker pull: Downloads a Docker image from a Docker registry.
docker push: Uploads a Docker image to a Docker registry.
docker ps: Lists all running Docker containers.
docker stop: Stops a running Docker container.
docker rm: Removes a stopped Docker container.
docker network: Manages Docker networks.
docker volume: Manages Docker volumes.
In addition to the command-line interface, Docker also provides a graphical user interface (GUI) called Docker Desktop, which provides a more
user-friendly interface for managing Docker resources on a local machine. The Docker Desktop GUI is available for Windows and macOS.
Docker daemon
The Docker daemon is the server component of Docker that manages Docker images, containers, networks, and volumes. The daemon runs
in the background and listens for API requests from the Docker client. It's responsible for starting, stopping, and managing Docker
containers, creating and managing Docker images, and managing Docker networks and volumes.
The Docker daemon runs as a background process on the Docker host and listens on a Unix socket or a network port for API requests from
the Docker client. The Docker daemon can be configured using various options, including storage drivers, network drivers, and logging
options, to customize its behavior.
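Those options are typically set in the daemon's configuration file, usually /etc/docker/daemon.json. A minimal sketch (the values below are illustrative, not recommendations):

```json
{
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "dns": ["8.8.8.8", "1.1.1.1"]
}
```

After editing this file, the daemon must be restarted (for example, with systemctl restart docker on systemd hosts) for the changes to take effect.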
Docker Registry
A Docker registry is a repository that stores Docker images. Docker Hub is the default public registry provided by Docker, where users can
store, share, and download Docker images. However, users can also set up their own private Docker registry for storing and sharing images
within their organization.
1. Image storage: The Docker registry stores Docker images, which can be downloaded and used to create Docker containers.
2. Image management: The Docker registry allows users to manage Docker images, including tagging, pushing, and pulling images from
the registry.
3. Access control: The Docker registry allows users to control access to Docker images by configuring authentication and authorization
policies.
4. Replication: The Docker registry supports replication of Docker images across multiple servers, allowing users to distribute images to
different geographic locations for faster downloads.
5. Search: The Docker registry provides a search functionality that allows users to search for Docker images based on keywords and tags.
Users can use the Docker CLI to interact with Docker registries. For example, the "docker pull" command downloads a Docker image from a
registry, and the "docker push" command uploads a Docker image to a registry. Docker also provides an open-source registry
implementation called Docker Distribution, which can be used to set up a private Docker registry. Additionally, users can use third-party
registry solutions, such as JFrog Artifactory and Google Container Registry, to host their Docker images.
Docker Images

Docker images are the building blocks of Docker containers. An image is a read-only template that contains the application and its dependencies. Docker images can be built using a Dockerfile, which is a text file that contains instructions for building the image. Once built, Docker images can be stored in a registry and used to create Docker containers.

Here are some key features of Docker images:
1. Layered architecture: Docker images are built using a layered architecture. Each layer in the image represents a change or modification to the previous layer. This layered architecture allows Docker to reuse layers across multiple images, reducing the size of images and improving build times.
2. Versioning: Docker images can be versioned using tags. A tag is a label that is applied to an image, indicating the version of the image. Users can use tags to identify and manage different versions of an image.
3. Caching: Docker images are cached locally on the Docker host, which allows Docker to reuse images that have already been built. This caching mechanism can speed up build times and reduce network traffic.
4. Portability: Docker images are portable, which means they can be easily moved between different Docker hosts and environments. This portability is achieved through the use of a standardized image format and the Docker registry.
5. Security: Docker images can be scanned for vulnerabilities using tools like Docker Security Scanning. This allows users to identify and address security issues in their Docker images.

Users can use the Docker CLI to interact with Docker images. For example, the "docker build" command builds a Docker image from a Dockerfile, and the "docker push" command uploads a Docker image to a Docker registry. Additionally, Docker provides a public registry called Docker Hub, where users can store, share, and download Docker images. Users can also set up their own private Docker registry to store and share images within their organization.

Here are some examples of Docker CLI commands that can be used to manage Docker images:

List local images:
docker images
This command lists all the Docker images that are currently stored locally on the Docker host.

Pull an image from a registry:
docker pull <image-name>
This command downloads a Docker image from a Docker registry, such as Docker Hub, and stores it locally on the Docker host.

Build an image from a Dockerfile:
docker build -t <image-name> <path-to-dockerfile>
This command builds a Docker image from the Dockerfile located at the specified path and gives it the specified name (-t option).

Tag an image:
docker tag <image-id> <new-image-name>:<tag>
This command applies a new tag to an existing Docker image. The tag is used to identify different versions of the same image.

Push an image to a registry:
docker push <image-name>
This command uploads a Docker image to a Docker registry, such as Docker Hub.

Remove an image:
docker rmi <image-name>
This command removes a Docker image from the local Docker host.

Search for an image:
docker search <keyword>
This command searches for Docker images on Docker Hub based on the specified keyword.

Show detailed information about an image:
docker inspect <image-name>
This command shows detailed information about a Docker image, including its metadata, configuration, and layers.
Dockerfile

A Dockerfile is a text file that contains a set of instructions for building a Docker image. The instructions in a Dockerfile are executed in order to create a Docker image that can be used to run a containerized application.

Here are some key components of a Dockerfile:
1. Base image: The base image is the starting point for the Docker image. It provides the operating system and basic set of tools and libraries that the application needs to run. For example, to create a Docker image for a Python application, the base image might be the official Python image from Docker Hub.
2. Environment variables: Environment variables can be set in the Dockerfile to configure the application environment. For example, environment variables can be used to set the port number that the application listens on or to specify the database connection string.
3. Copying files: Files can be copied from the host machine to the Docker image using the "COPY" or "ADD" commands. This is used to include the application code and any necessary configuration files in the Docker image.
4. Running commands: Commands can be executed in the Dockerfile to install dependencies, compile code, and configure the application. For example, to install Python dependencies, the "RUN pip install" command can be used.
5. Exposing ports: Ports can be declared in the Dockerfile using the "EXPOSE" command. This documents which port the containerized application listens on.
6. Entrypoint: The entrypoint is the command that is executed when the container is run. This can be specified in the Dockerfile using the "CMD" or "ENTRYPOINT" commands. For example, to run a Python application, the entrypoint might be the command "python app.py".

Example:
FROM python:3.8
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "app.py" ]

Here are the most commonly used Dockerfile instructions and their purposes:
1. FROM: Specifies the base image for the Docker image being built. FROM must be the first instruction in the Dockerfile. For example, FROM python:3.8 specifies that the base image is the official Python 3.8 image.
2. RUN: Runs a command in the Docker image. This can be used to install packages, update the system, or build the application. For example, RUN apt-get update && apt-get install -y some-package installs the "some-package" package using the "apt-get" package manager.
3. COPY and ADD: Copy files from the host machine to the Docker image. COPY is preferred over ADD because it has fewer side effects. For example, COPY app.py /app/ copies the "app.py" file from the host machine to the "/app" directory in the Docker image.
4. WORKDIR: Sets the working directory for subsequent instructions in the Dockerfile. For example, WORKDIR /app sets the working directory to "/app".
5. ENV: Sets environment variables in the Docker image. For example, ENV PORT 8000 sets the "PORT" environment variable to 8000.
6. EXPOSE: Documents the port that the container listens on at runtime; publishing the port to the host is done separately (for example, with "docker run -p"). For example, EXPOSE 8000 declares that the container listens on port 8000.
7. CMD and ENTRYPOINT: Specify the command that is run when the Docker image is started as a container. ENTRYPOINT specifies the main command, while CMD provides the default command or the default arguments to the entrypoint. For example, CMD ["python", "app.py"] specifies that the "app.py" file should be run with the Python interpreter.
8. LABEL: Adds metadata to the Docker image. This metadata can be used to provide information about the image, such as the version, maintainer, or source code repository. For example, LABEL version="1.0" adds a "version" label to the Docker image with the value "1.0".
9. USER: Sets the user that the Docker image should run as. For example, USER appuser sets the user to "appuser".

These are just some of the most commonly used Dockerfile instructions. There are many others available, such as ARG, VOLUME, HEALTHCHECK, and STOPSIGNAL, that can be used to further customize the Docker image.
Docker Containers

Docker containers are running instances of Docker images. A container is a lightweight, portable, and isolated environment that can run an application and its dependencies consistently across different systems, including development, testing, and production environments.

Here are some key features of Docker containers:
1. Isolation: Docker containers are isolated from each other and from the host system. Each container has its own file system, network, and process namespace, which provides a secure environment for running applications.
2. Lightweight: Docker containers are lightweight because they share the same kernel as the host system. This means that Docker containers require fewer resources than traditional virtual machines.
3. Portability: Docker containers are portable, which means they can be easily moved between different Docker hosts and environments. This portability is achieved through the use of a standardized container format and the Docker runtime.
4. Reproducibility: Docker containers provide reproducible builds because they are built from a Docker image, which contains all the application's dependencies. This means that Docker containers can be reliably built and deployed across different systems.
5. Scalability: Docker containers can be scaled horizontally by running multiple instances of the same container across different systems. This allows applications to handle increased traffic and load.

Users can use the Docker CLI to interact with Docker containers. Here are some examples of Docker CLI commands that can be used to manage Docker containers:

List running containers:
docker ps
This command lists all the Docker containers that are currently running on the Docker host.

Start a container:
docker run <image-name>
This command starts a new Docker container from the specified Docker image.

Stop a container:
docker stop <container-id>
This command stops a running Docker container.

Remove a container:
docker rm <container-id>
This command removes a stopped Docker container from the Docker host.

View logs from a container:
docker logs <container-id>
This command shows the logs generated by a Docker container.

Connect to a running container:
docker exec -it <container-id> /bin/bash
This command connects to a running Docker container and opens a terminal session inside the container.
Docker Networking

Docker networks are used to connect Docker containers to each other and to the outside world. Docker provides several types of network drivers, including bridge, host, overlay, and macvlan, to support different networking scenarios.

Here are some key features of Docker networks:
1. Isolation: Docker networks provide isolation between different Docker containers. Each container can be assigned to one or more Docker networks, and communication between containers is only allowed within the same network.
2. Flexibility: Docker networks provide flexibility in how containers communicate with each other. For example, containers can be connected to multiple networks, and different types of network drivers can be used to support different network topologies.
3. Security: Docker networks provide security by allowing users to control which containers can communicate with each other. This can be achieved by using network policies and access controls.
4. Scalability: Docker networks provide scalability by allowing users to create multiple containers that can communicate with each other. This allows applications to handle increased traffic and load.

Users can use the Docker CLI to interact with Docker networks. Here are some examples of Docker CLI commands that can be used to manage Docker networks:

List Docker networks:
docker network ls
This command lists all the Docker networks that are currently available on the Docker host.

Create a new Docker network:
docker network create <network-name>
This command creates a new Docker network with the specified name.

Connect a container to a Docker network:
docker network connect <network-name> <container-name>
This command connects a Docker container to the specified Docker network.

Disconnect a container from a Docker network:
docker network disconnect <network-name> <container-name>
This command disconnects a Docker container from the specified Docker network.

Inspect a Docker network:
docker network inspect <network-name>
This command shows detailed information about the specified Docker network, including its configuration and connected containers.

Remove a Docker network:
docker network rm <network-name>
This command removes the specified Docker network from the Docker host.
Docker Networking Modes
Docker provides several network modes that can be used to connect Docker containers to each other and to the outside world. Each network mode
provides a different level of isolation and network connectivity.
Users can specify the network mode when running a Docker container using the "docker run" command. For example, to run a container in bridge network mode, you can use the following command:
docker run --network bridge <image-name>
Additionally, users can create custom Docker networks with specific configurations, such as IP address range and subnet mask, to provide more control over container networking.
Docker Networking Tools
Docker provides several networking tools that can be used to manage Docker networks and troubleshoot network connectivity issues. Here are some
of the most commonly used Docker networking tools:
1. Docker Compose: Docker Compose is a tool for defining and running multi-container Docker applications. It allows users to define the network
configuration for their containers, including networks, IP addresses, and port mappings, in a single YAML file. Docker Compose can be used to
start, stop, and manage multiple containers at once.
2. Docker Network CLI: The Docker Network CLI is a command-line tool for managing Docker networks. It allows users to create, list, inspect, and
delete Docker networks. Additionally, it can be used to connect and disconnect Docker containers from networks, and to configure network
policies and access controls.
3. Docker Network Inspector: The Docker Network Inspector is a command-line tool for troubleshooting network connectivity issues in Docker
containers. It allows users to inspect the network configuration of a Docker container, including IP addresses, network interfaces, and routing
tables. Additionally, it can be used to perform network diagnostics, such as pinging or tracing network routes.
4. Weave Scope: Weave Scope is a tool for visualizing and monitoring Docker networks. It provides a real-time map of the Docker network
topology, including containers, hosts, and network connections. Additionally, it can be used to monitor network traffic, troubleshoot network
issues, and identify security threats.
5. Calico: Calico is a network plugin for Docker that provides advanced networking features, such as policy-based network segmentation, network
isolation, and network encryption. Calico can be used to deploy and manage large-scale Docker environments, such as Kubernetes clusters, and
to secure containerized applications.
These tools can be used in conjunction with each other to manage and troubleshoot Docker networks. Additionally, there are many third-party tools
available that can be used to extend the functionality of Docker networking, such as CNI plugins and network security solutions.
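As an illustration of the Docker Compose approach from the list above, a docker-compose.yml can declare services and the networks that connect them (the service, image, and network names here are assumptions for the sketch):

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"          # publish host port 8080 -> container port 80
    networks:
      - frontend
  api:
    image: my-api          # hypothetical application image
    networks:
      - frontend
      - backend            # api bridges both networks
  db:
    image: postgres:13
    networks:
      - backend            # reachable only by services on "backend"
networks:
  frontend:
  backend:
```

With this file, `docker compose up` creates both networks and attaches each service only to the networks it lists, so the database is isolated from the web tier.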
Docker DNS
Docker DNS (Domain Name System) is used to resolve domain names to IP addresses within Docker containers. By default, Docker containers use
the DNS server provided by the Docker host. The DNS server is responsible for resolving domain names to IP addresses, and it can be configured to
use different DNS servers or to forward DNS requests to another DNS server.
Additionally, Docker provides a built-in (embedded) DNS server for Docker containers. It is automatically configured when a user-defined Docker network is created, and it resolves container names to container IP addresses, so containers on the same network can reach each other by name.
Docker Volumes

Docker volumes are used to persist data generated by Docker containers. Docker volumes can be mounted into a container at runtime, allowing data to be shared and persisted across multiple containers.

Here are some key features of Docker volumes:
1. Persistence: Docker volumes provide persistence by allowing data to be stored outside of the container's file system. This means that data can be preserved even if the container is deleted or recreated.
2. Isolation: Docker volumes provide isolation by allowing data to be shared between containers without requiring the containers to be on the same Docker network or Docker host.
3. Flexibility: Docker volumes provide flexibility by allowing users to choose where the data is stored and how it is accessed. For example, volumes can be stored locally on the Docker host, or they can be stored on a remote storage system like NFS or AWS EBS.
4. Security: Docker volumes provide security by allowing users to control access to the data stored in the volume. This can be achieved by using file system permissions or access controls.
5. Scalability: Docker volumes provide scalability by allowing data to be shared across multiple containers. This allows applications to handle increased traffic and load.

Users can use the Docker CLI to interact with Docker volumes. Here are some examples of Docker CLI commands that can be used to manage Docker volumes:

List Docker volumes:
docker volume ls
This command lists all the Docker volumes that are currently available on the Docker host.

Create a new Docker volume:
docker volume create <volume-name>
This command creates a new Docker volume with the specified name.

Attach a Docker volume to a container:
docker run -v <volume-name>:<mount-point> <image-name>
This command starts a new Docker container from the specified Docker image and attaches the specified Docker volume to the container.

Inspect a Docker volume:
docker volume inspect <volume-name>
This command shows detailed information about the specified Docker volume, including its configuration and attached containers.

Remove a Docker volume:
docker volume rm <volume-name>
This command removes the specified Docker volume from the Docker host.
Docker orchestration
Docker orchestration is the process of managing and scaling Docker containers in a distributed environment. Orchestration provides a way to automate the deployment, scaling, and management of containerized applications. The most widely used orchestration tools are Docker Swarm, which is built into Docker, and Kubernetes.
Here are some key features of Docker orchestration:
1. Container deployment: Docker orchestration allows users to deploy containerized applications to multiple Docker hosts. This provides a way to distribute application
workloads across multiple systems, improving performance and availability.
2. Container scaling: Docker orchestration provides a way to scale containerized applications automatically based on application workload. This allows applications to
handle increased traffic and load without manual intervention.
3. Service discovery: Docker orchestration provides service discovery, which allows applications to locate and communicate with each other within the same cluster. This
can be achieved through the use of a service discovery tool, such as Consul or etcd.
4. Load balancing: Docker orchestration provides load balancing, which allows network traffic to be distributed across multiple containers to improve performance and
reliability. This can be achieved through the use of a load balancer, such as HAProxy or NGINX.
5. Rolling updates: Docker orchestration allows users to perform rolling updates of containerized applications, applying updates to containers incrementally without taking the entire application offline. Both Docker Swarm and Kubernetes support rolling updates natively.
6. High availability: Docker orchestration provides high availability, which ensures that containerized applications are always available, even in the event of a failure or
outage. This can be achieved through the use of a high availability tool, such as Docker Swarm or Kubernetes.
Docker Swarm and Kubernetes are the most popular Docker orchestration tools. Docker Swarm is a built-in Docker tool that provides simple orchestration for Docker
containers, while Kubernetes is a more advanced orchestration tool that provides a wide range of features and functionality for containerized applications.
Both Docker Swarm and Kubernetes provide a command-line interface and a graphical user interface for managing Docker containers and orchestration. Users can also use
third-party tools and plugins to extend the functionality of Docker orchestration, such as monitoring and logging tools, automation tools, and security tools.
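As a sketch of declarative orchestration with Docker Swarm (the service name, image, and ports are assumptions), a Compose file with a deploy section can express replicas and rolling-update behavior:

```yaml
version: "3.8"
services:
  web:
    image: my-web-app:1.0     # hypothetical application image
    ports:
      - "80:8080"
    deploy:
      replicas: 3             # run three instances of the service
      update_config:
        parallelism: 1        # rolling update: replace one task at a time
        delay: 10s            # wait between batches
      restart_policy:
        condition: on-failure # reschedule failed tasks automatically
```

Deployed with `docker stack deploy -c docker-compose.yml mystack`, the Swarm manager keeps three replicas running and performs rolling updates when the image tag changes.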
Advanced Docker Concepts
Docker Volumes and Bind Mounts

Docker provides two ways to persist data generated by Docker containers: volumes and bind mounts. Both volumes and bind mounts allow data to be shared and persisted across multiple containers, but they work in slightly different ways.

Here are the key differences between Docker volumes and bind mounts:
1. Storage location: Docker volumes are stored in a Docker-managed volume storage area, which can be located on the Docker host or in a remote storage system, such as AWS EBS or NFS. Bind mounts, on the other hand, can be located anywhere on the Docker host file system, and they are not managed by Docker.
2. Persistence: Docker volumes are managed independently of any container, so their data is preserved even if the container is deleted or recreated. Bind mounts are tied to a path on the underlying host file system: Docker does not manage their lifecycle, and the data is lost if that host path is deleted.
3. Sharing: Docker volumes can be shared across multiple containers, allowing data to be accessed and modified by multiple containers at the same time. Bind mounts can also be shared across multiple containers, but they can cause conflicts if multiple containers write to the same file at the same time.
4. Access control: Docker volumes can be managed with access control lists (ACLs) to provide fine-grained control over who can access the data stored in the volume. Bind mounts do not have built-in access control features and rely on the underlying file system permissions for access control.
5. Ease of use: Docker volumes are easy to use and manage, and they can be created, deleted, and backed up using simple Docker commands. Bind mounts require more manual configuration, and they can be difficult to manage in large-scale deployments.

Here are some examples of how to use Docker volumes and bind mounts:

Create a Docker volume:
docker volume create <volume-name>
This command creates a new Docker volume with the specified name.

Create a Docker container with a volume:
docker run -v <volume-name>:<mount-point> <image-name>
This command starts a new Docker container from the specified Docker image and attaches the specified Docker volume to the container.

Create a bind mount:
docker run -v /host/folder:/container/folder <image-name>
This command starts a new Docker container from the specified Docker image and mounts the specified host folder into the container's file system.

Remove a Docker volume:
docker volume rm <volume-name>
This command removes the specified Docker volume from the Docker host.

Remove a bind mount:
To remove a bind mount, simply delete the file or folder on the host that was mounted into the container.
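The same distinction can be written declaratively in a Compose file (the service, image, and path names here are assumptions for the sketch):

```yaml
services:
  app:
    image: my-app                     # hypothetical application image
    volumes:
      - app-data:/var/lib/app         # named volume, managed by Docker
      - ./config:/etc/app/config:ro   # bind mount from the host, read-only
volumes:
  app-data:                           # declares the Docker-managed volume
```

Here the application's data lives in the Docker-managed volume "app-data", while its configuration is bind-mounted read-only from a folder next to the Compose file.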
Docker BuildKit
Docker BuildKit is a tool for building Docker images that provides improved performance and security over the traditional Docker build process. BuildKit uses a new build
engine that is designed to be more efficient and flexible than the old engine.
Here are some key features of Docker BuildKit:
1. Parallelism: BuildKit allows for parallel building of Docker images, which can significantly improve build times for large images. This is achieved through the use of a
concurrent build graph, which enables multiple dependencies to be built simultaneously.
2. Cache management: BuildKit provides improved cache management for Docker builds, which reduces the amount of time required to rebuild images. BuildKit can cache
individual layers of an image, allowing for incremental builds that only rebuild what has changed.
3. Security: BuildKit provides improved security for Docker builds, by isolating the build process from the host system. This is achieved through the use of user
namespaces, which create a separate user and group ID space for each build, preventing privilege escalation attacks.
4. Extensibility: BuildKit is highly extensible, with a modular architecture that allows for the addition of new build components and customization of the build process.
BuildKit also supports the use of external tools and plugins, which can be used to add new functionality to the build process.
5. Dockerfile syntax: BuildKit supports a new Dockerfile syntax that provides additional functionality and flexibility over the traditional Dockerfile syntax. The new syntax
includes support for multi-stage builds, build-time variables, and build-time secrets.
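As a sketch of that newer syntax (the package name and secret id below are assumptions), a Dockerfile can opt in with a syntax directive and use BuildKit cache and secret mounts:

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.8-slim

# Cache pip downloads across builds (BuildKit cache mount)
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install requests

# Read a secret at build time without baking it into an image layer
RUN --mount=type=secret,id=apitoken \
    cat /run/secrets/apitoken > /dev/null
```

The secret would be supplied at build time, for example with `docker build --secret id=apitoken,src=token.txt .` (BuildKit must be enabled).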
To use BuildKit, users must first enable it by setting the DOCKER_BUILDKIT environment variable to 1. This can be done using the following command:
export DOCKER_BUILDKIT=1
Once BuildKit is enabled, users can use the standard Docker build command to build their images. However, to take full advantage of BuildKit's features, users should use the new Dockerfile syntax and enable caching and parallelism. Here is an example command for building a Docker image with BuildKit:
DOCKER_BUILDKIT=1 docker build -t my-image .
Docker Build Target Architecture
Docker allows users to target specific architectures when building Docker images. This is useful for creating images that can run on different platforms, such as ARM or x86, without the need to
maintain multiple images.
To build Docker images for different architectures, users can use the "docker buildx" command, which is part of the Docker CLI. The "--platform" option can be used to specify the target
architecture when building the Docker image. Here are some example commands for building Docker images for different architectures:
AMD64/x86_64: docker buildx build --platform linux/amd64 -t my-image .
ARMv6: docker buildx build --platform linux/arm/v6 -t my-image .
ARMv7: docker buildx build --platform linux/arm/v7 -t my-image .
ARMv8/AArch64: docker buildx build --platform linux/arm64 -t my-image .
PowerPC: docker buildx build --platform linux/ppc64le -t my-image .
IBM Z: docker buildx build --platform linux/s390x -t my-image .
Multi-architecture image: docker buildx build --platform linux/amd64,linux/arm64 -t my-image .
Multi-Stage Docker Builds
Multi-stage Docker builds allow users to create Docker images that are optimized for production environments while still using a single Dockerfile. Multi-stage builds allow users to compile code,
install dependencies, and generate artifacts in one stage and then copy only the necessary files to the final stage. This results in smaller and more efficient Docker images that are optimized for
production use.
Here are the steps to create a multi-stage Docker build:
Define multiple stages in the Dockerfile: To create a multi-stage Docker build, users must define multiple stages in the Dockerfile. Each stage can include its own set of commands and
dependencies. For example, here is a Dockerfile that uses two stages: one for compiling code and generating artifacts and another for running the application:
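A minimal sketch of such a two-stage Dockerfile (the build tooling, package name, and module name are assumptions):

```dockerfile
# Stage 1: install dependencies and build the application into a wheel file
FROM python:3.8 AS builder
WORKDIR /app
COPY . .
RUN pip install build && python -m build --wheel

# Stage 2: install the wheel file and run the application
FROM python:3.8-slim
WORKDIR /app
COPY --from=builder /app/dist/*.whl ./
RUN pip install ./*.whl
CMD ["python", "-m", "myapp"]   # "myapp" is a hypothetical module name
```

Only the built wheel reaches the final stage, so the build toolchain and source tree never appear in the production image.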
In this example, the first stage installs the Python dependencies and builds the application into a wheel file. The second stage installs the wheel file and runs the application.
Build the Docker image: Once the Dockerfile has been defined, users can build the Docker image using the standard "docker build" command. For example, to build the Docker image with the name "my-image", users can use the following command:
docker build -t my-image .
GO Dockerfile Example
# Use an official Golang runtime as a parent image
FROM golang:1.17.2-alpine3.14 AS builder

# Copy the source code into the container
COPY . .

# Build the application
RUN go build -o myapp

# Use a smaller base image for the final image
FROM alpine:3.14

# (the final-stage instructions below are assumptions based on the description)
# Copy the binary from the builder stage
COPY --from=builder /go/myapp /usr/local/bin/myapp

# Set environment variables and expose the application's port
ENV PORT=8080
EXPOSE 8080

# Run the application
CMD ["myapp"]

This example uses the official golang:1.17.2-alpine3.14 base image and builds the application inside the container. It then uses a smaller base image for the final image and copies the binary from the builder stage. It also sets environment variables and exposes the application's port (8080 in this case). Finally, it uses the CMD instruction to run the application.
Node.js Dockerfile Example
# Use an official Node.js runtime as a parent image
FROM node:16.13.0-alpine3.14 AS builder

# Set the working directory to /app
WORKDIR /app

# Copy the source code into the container
COPY . .

# (the remaining instructions are assumptions based on the description)
# Install the application's dependencies
RUN npm install

# Use a smaller base image for the final image
FROM node:16.13.0-alpine3.14
WORKDIR /app

# Copy the necessary files from the builder stage
COPY --from=builder /app .

# Set environment variables and expose the application's port
ENV PORT=8080
EXPOSE 8080

# Run the application ("app.js" is an assumed entry file)
CMD ["node", "app.js"]

This example uses the official node:16.13.0-alpine3.14 base image and builds the application inside the container. It then uses a smaller base image for the final image and copies the necessary files from the builder stage. It also sets environment variables and exposes the application's port (8080 in this case). Finally, it uses the CMD instruction to run the application.
@LearnTechWithSandip