Docker Guide
DevOps Shack
Docker Comprehensive Guide
Table of Contents
1. Introduction to Docker
1.1 What is Docker?
1.2 Benefits of Docker
1.3 Docker vs Virtual Machines
1.4 Overview of Docker Architecture
1.5 Installation and Setup (Windows, macOS, Linux)
2. Docker Basics
2.1 Understanding Docker Images and Containers
2.2 Key Docker Components: Images, Containers, Volumes, Networks
2.3 Docker CLI Commands: run, pull, push, build, stop, rm
2.4 Creating and Managing Containers
2.5 Understanding Docker Registry (Docker Hub and Private Registries)
3. Working with Docker Images
3.1 Pulling Images from Docker Hub
3.2 Building Custom Images with Dockerfiles
3.3 Docker Image Layers and Caching
3.4 Managing and Tagging Docker Images
3.5 Publishing Images to Docker Hub or Private Registries
4. Docker Networking
4.1 Overview of Docker Networking
4.2 Network Drivers: Bridge, Host, None, Overlay, Macvlan
4.3 Exposing and Mapping Ports
4.4 Communication Between Containers
4.5 Custom Networks and DNS Configuration
5. Docker Volumes and Storage
5.1 Introduction to Docker Volumes
5.2 Creating and Mounting Volumes
5.3 Persistent Data Storage with Bind Mounts
5.4 Managing Volumes with Docker CLI
5.5 Best Practices for Data Management
6. Docker Compose
6.1 Introduction to Docker Compose
6.2 YAML File Structure
6.3 Defining Multi-Container Applications
6.4 Running and Managing Applications with docker-compose
6.5 Networking and Scaling with Docker Compose
7. Docker Swarm and Orchestration
7.1 Overview of Container Orchestration
7.2 Getting Started with Docker Swarm
7.3 Creating and Managing Services
7.4 Scaling and Updating Services
7.5 Docker Swarm Networking
8. Docker in Development
8.1 Setting Up a Development Environment with Docker
8.2 Debugging and Logs
8.3 Hot Reloading with Docker for Developers
8.4 Multi-Stage Builds for Optimized Images
8.5 Best Practices for Development with Docker
9. Docker in Production
9.1 Optimizing Docker Images for Production
9.2 Monitoring and Logging (ELK Stack, Prometheus)
9.3 Load Balancing and Service Discovery
9.4 Security Best Practices for Docker in Production
9.5 Deploying Applications with Docker in Production
10. Docker and Kubernetes
10.1 Introduction to Kubernetes and Docker
10.2 Differences Between Docker Compose and Kubernetes
10.3 Deploying Docker Containers to Kubernetes
10.4 Docker CLI to Kubernetes Integration
11. Advanced Docker Topics
11.1 Docker Plugins
11.2 Docker API and Automation
11.3 Customizing Docker Daemon Configurations
11.4 Debugging Docker Containers with exec
11.5 Creating a CI/CD Pipeline
Introduction: Docker - Revolutionizing Software Development
Docker is a powerful platform that has transformed the way applications are
developed, shipped, and deployed. By using lightweight, portable units called
containers, Docker allows developers to package applications along with their
dependencies, ensuring consistency across development, testing, and
production environments.
Containers solve the common problem of "it works on my machine" by creating
isolated environments that work identically regardless of where they are
deployed. Unlike traditional virtual machines, Docker containers share the host
OS kernel, making them faster and more efficient.
Since its launch in 2013, Docker has become an essential tool for modern
software development, enabling practices like microservices architecture,
DevOps automation, and cloud-native applications. It is widely used by
developers to simplify application deployment, by DevOps engineers to
streamline CI/CD pipelines, and by IT professionals to manage scalable and
secure infrastructure.
This guide is designed to take you from the basics of Docker—understanding
images, containers, volumes, and networks—to advanced topics like multi-
stage builds, security best practices, and running Docker in production.
Whether you're a beginner or a seasoned professional, this guide will equip
you with the knowledge and skills to leverage Docker effectively in your
workflows.
Dive in and discover how Docker can revolutionize the way you build, deploy,
and scale applications!
Section 1: Introduction to Docker
Docker has revolutionized the way software is developed, shipped, and
deployed, offering lightweight, portable, and consistent environments. This
section provides a comprehensive introduction to Docker, its architecture,
benefits, and how it differs from traditional virtual machines.
3. Faster Development and Deployment
With Docker, developers can create standardized development
environments, speeding up the development cycle. Pre-built Docker
images and reusable containers also reduce setup times.
4. Scalability
Docker works seamlessly with orchestration tools like Docker Swarm and
Kubernetes, enabling easy scaling of applications based on demand.
5. Portability
Docker containers can run anywhere: on-premises, in the cloud, or on
hybrid infrastructure, making it a favorite for multi-cloud strategies.
1. Docker Engine
The core of Docker, responsible for creating and managing containers. It
includes:
o Docker Daemon (dockerd): Runs in the background and manages
Docker objects.
o Docker CLI (docker): Command-line tool to interact with the
Docker Daemon.
o REST API: Provides programmatic access to Docker.
2. Docker Images
Immutable templates used to create containers. Images are built from
Dockerfiles, which define the application environment, dependencies,
and configurations.
3. Docker Containers
Running instances of Docker images, encapsulating everything needed
for the application to run.
4. Docker Registry
A repository to store and share Docker images. Popular registries
include:
o Docker Hub: Public registry provided by Docker.
o Private Registries: Self-hosted or cloud-based solutions for
proprietary images.
5. Docker Storage
Handles data persistence for containers using volumes, bind mounts, and
tmpfs mounts.
6. Docker Networking
Allows containers to communicate with each other and the outside
world via network drivers.
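As a quick illustration of the Engine components above, the same version information can be fetched through the Docker CLI or directly from the REST API over the daemon's local socket (the socket path assumes a default Linux installation):
docker version
curl --unix-socket /var/run/docker.sock http://localhost/version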
1.5.1 Prerequisites
• A 64-bit OS.
• A supported version of the Linux kernel (for Linux users).
• Virtualization enabled (for Windows and macOS users).
1.5.2 Installation on Linux
1. Update the package index:
sudo apt-get update
2. Install required packages:
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
3. Add Docker's official GPG key and repository:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
4. Install Docker Engine:
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
5. Verify the installation:
docker --version
1.5.3 Installation on Windows/Mac
1. Download Docker Desktop from Docker's official website.
2. Run the installer and follow the setup wizard.
3. Restart your system if prompted.
4. Verify the installation:
docker --version
1.5.4 Post-Installation Steps
• Add your user to the docker group to avoid using sudo for Docker commands (log out and back in for the group change to take effect):
sudo usermod -aG docker $USER
• Enable and start the Docker service:
sudo systemctl enable docker
sudo systemctl start docker
1.5.5 Testing Docker
Run a test container to verify the installation:
docker run hello-world
If successful, you’ll see a message confirming Docker is installed and
functioning.
Section 2: Docker Basics
This section delves into the foundational elements of Docker, including Docker
images, containers, CLI commands, and the Docker Registry. By understanding
these basics, you'll gain the knowledge necessary to start building and
managing Dockerized applications.
2.2 Key Docker Components
1. Images
o Pre-built images available on Docker Hub (e.g., nginx, mysql).
o Custom images can be created using Dockerfiles.
2. Containers
o Created from images and run applications.
o Support operations like start, stop, pause, and remove.
3. Volumes
o Used for data persistence.
o Allow data sharing between containers or with the host system.
4. Networks
o Enable container communication internally and externally.
o Include built-in drivers like bridge, host, and none.
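The following short sequence ties these four components together (the names app_net and app_data are purely illustrative):
docker pull nginx                 # image: downloaded from Docker Hub
docker network create app_net     # network: user-defined bridge
docker volume create app_data     # volume: persistent storage
docker run -d --name web --network app_net -v app_data:/usr/share/nginx/html -p 8080:80 nginx   # container: a running instance of the image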
2. Tag an image for the local registry:
docker tag <image_id> localhost:5000/<image_name>
3. Push the image to the registry:
docker push localhost:5000/<image_name>
Section 3: Building and Managing Docker Images
Docker images are at the core of the containerization process. This section
focuses on understanding images, building custom images with Dockerfiles,
tagging and managing images, and best practices for optimizing image size and
performance.
2. Maintainer: (Optional) Define the author.
3. Commands: Instructions to install dependencies, copy files, and set up
the environment.
4. Entrypoint/Command: Define the default executable for the container.
Sample Dockerfile
# Step 1: Specify the base image
FROM node:14
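A Dockerfile following this structure typically continues along these lines (the file names and start command are illustrative):
# Step 2: Set the working directory
WORKDIR /app
# Step 3: Install dependencies and copy the application code
COPY package*.json ./
RUN npm install
COPY . .
# Step 4: Define the default command
CMD ["node", "server.js"]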
3.3 Managing Docker Images
Viewing Image Details
To inspect an image's metadata:
docker inspect <image_id>
Removing Images
Delete unnecessary images to free up space:
docker rmi <image_id>
Remove all unused images:
docker image prune
Pushing Images to Docker Hub
1. Login to Docker Hub:
docker login
2. Tag the image for your Docker Hub repository:
docker tag <image_id> <dockerhub_username>/<repo_name>:<tag>
3. Push the image:
docker push <dockerhub_username>/<repo_name>:<tag>
RUN apt-get update && apt-get install -y python3
2. Place frequently changing instructions (e.g., COPY) at the end of the
Dockerfile.
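For instance, a layout along these lines keeps the dependency-install layer cached and only rebuilds the final COPY when source files change (the Node.js file names are illustrative):
FROM node:14
WORKDIR /app
# Rarely changes: dependency manifest and install step come first
COPY package*.json ./
RUN npm install
# Changes often: application source is copied last
COPY . .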
FROM node:14 AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build
# Stage 2: Runtime
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
This approach ensures that only the production-ready files are included in the
final image, keeping it lightweight.
This section provides a detailed guide on creating and managing Docker images
effectively. The next section will explore Docker Networking, explaining how
containers communicate with each other and external systems.
Section 4: Docker Networking
Docker networking enables containers to communicate with each other, the
host system, and the external world. This section explores Docker's networking
capabilities, its various network modes, and how to configure and manage
networks effectively.
• Offers better performance but lacks isolation.
• Example:
docker run --network host nginx
3. None Network
• Disables networking for the container.
• Suitable for highly isolated applications.
• Example:
docker run --network none nginx
4. Overlay Network
• Used in Docker Swarm for multi-host networking.
• Enables containers on different hosts to communicate securely.
• Example:
docker network create --driver overlay my_overlay
5. Macvlan Network
• Assigns a unique MAC address to each container.
• Containers appear as physical devices on the local network.
• Example:
docker network create -d macvlan --subnet=192.168.1.0/24 my_macvlan
• Use the -p or --publish flag to bind container ports to host ports:
docker run -d -p 8080:80 nginx
• Here, port 8080 on the host maps to port 80 inside the container.
Dynamic Port Mapping
• Let Docker assign an available port on the host:
docker run -d -P nginx
• Use docker port to view the mapping:
docker port <container_id>
4.5 Creating Custom Networks
Custom networks offer enhanced control over container connectivity.
Bridge Network Example
1. Create a custom network:
docker network create my_custom_network
2. Run containers on the network:
docker run --network my_custom_network --name app1 nginx
docker run --network my_custom_network --name app2 busybox ping app1
Inspecting Networks
• View all networks:
docker network ls
• Inspect a specific network:
docker network inspect my_custom_network
Removing Networks
• To remove unused networks:
docker network prune
Encrypted Networks
• Use the overlay driver with encryption for secure communication
between containers:
docker network create --driver overlay --opt encrypted my_secure_network
Section 5: Docker Volumes and Storage
Data persistence is critical for modern applications, especially when containers
are ephemeral by design. Docker provides multiple ways to manage and persist
data through volumes, bind mounts, and tmpfs mounts. This section explores
these options and their best practices.
• Stores data in the host system's memory.
• Data is not persisted after the container stops.
• Example: Sensitive data or temporary files.
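A quick example of a tmpfs mount (the mount path and size limit are illustrative):
docker run -d --tmpfs /tmp:rw,size=64m nginx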
• Development scenarios where real-time updates are needed.
• Testing configurations and logs stored on the host system.
docker run -d --name container1 -v shared_volume:/data busybox
2. Create another container using the same volume:
docker run -it --name container2 --volumes-from container1 busybox
3. Data written by container1 will be accessible to container2.
5.10 Advanced Volume Usage
Driver Plugins
• Docker supports external volume drivers (e.g., AWS EFS, Azure File
Storage).
• Specify the driver when creating a volume:
docker volume create --driver local my_custom_volume
Mount Options
• Control how volumes are mounted using options like ro (read-only):
docker run -v my_volume:/data:ro nginx
This section covered Docker's storage options, including volumes, bind mounts,
and tmpfs mounts, along with best practices for managing data. The next
section will explore Docker Compose, which simplifies managing multi-
container applications.
Section 6: Docker Compose
Docker Compose is a tool for defining and running multi-container Docker
applications. By using a simple YAML configuration file, you can orchestrate
multiple services, volumes, and networks, making it ideal for complex
application stacks.
The docker-compose.yml file defines the services, networks, and volumes for
your application.
Basic Example
version: "3.9"
services:
web:
image: nginx
ports:
- "8080:80"
db:
image: mysql
environment:
MYSQL_ROOT_PASSWORD: password
3. Networks
Define custom networks for inter-service communication.
networks:
  my_network:
    driver: bridge
4. Volumes
Configure shared storage for services.
volumes:
  data_volume:
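Services then reference these definitions by name, for example (the mount path is illustrative):
services:
  web:
    image: nginx
    networks:
      - my_network
    volumes:
      - data_volume:/usr/share/nginx/html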
6.6 Managing Multi-Container Applications
Linking Services
Compose automatically links services defined in the same file. Containers can communicate using their service names as hostnames.
Example
services:
  web:
    image: nginx
    depends_on:
      - app
  app:
    image: node:14
In this example, web can communicate with app using app as the hostname.
Environment Variables
Pass environment variables to containers:
services:
  app:
    image: node:14
    environment:
      - NODE_ENV=production
      - PORT=3000
Load Balancing
Docker Compose does not provide built-in load balancing. Use tools like NGINX or a dedicated load balancer to distribute traffic across scaled containers.
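Scaling the containers themselves is handled by Compose, for example (the service name app is illustrative):
docker-compose up -d --scale app=3
The scaled replicas are all reachable under the service name app on the Compose network, which is where a fronting NGINX instance would send traffic.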
  db:
    image: mysql
    networks:
      - backend
networks:
  frontend:
  backend:
retries: 3
Section 7: Docker Swarm and Orchestration
Docker Swarm is Docker’s native clustering and orchestration tool, enabling you
to manage and scale containerized applications across multiple nodes. This
section covers setting up a Swarm cluster, deploying services, scaling, and
managing updates in a distributed environment.
7.3 Deploying Services in Swarm Mode
In Swarm, containers are deployed as services. A service defines the desired
state, including the number of replicas, network configurations, and update
strategies.
Example: Deploying a Service
docker service create --name web --replicas 3 -p 8080:80 nginx
• --name web: Names the service.
• --replicas 3: Specifies three replicas.
• -p 8080:80: Maps port 8080 on the host to port 80 in the containers.
Viewing Services
List all running services:
docker service ls
Inspecting a Service
View details of a specific service:
docker service inspect <service_name>
Rolling Update Example
Deploy a new image version with minimal impact:
docker service update --image nginx:latest web
Custom Update Configurations
Control the update strategy:
docker service update --update-delay 10s --update-parallelism 2 --image nginx:latest web
• --update-delay: Time between updating service replicas.
• --update-parallelism: Number of replicas updated simultaneously.
Rolling Back Updates
Revert to the previous service version:
docker service rollback web
Preferences
Distribute services based on node attributes:
docker service create --replicas 5 --placement-pref 'spread=node.labels.zone' nginx
Section 8: Docker Security
Securing Docker environments is critical for protecting containerized
applications and the infrastructure they run on. This section covers best
practices, managing secrets, configuring access controls, and using tools to
identify vulnerabilities.
1. Create a Secret
Store sensitive data as a secret:
echo "my-secret-password" | docker secret create db_password -
2. Use Secrets in Services
Reference the secret in a service:
docker service create --name db --secret db_password mysql
3. Access Secrets in Containers
Secrets are mounted as files in /run/secrets/:
cat /run/secrets/db_password
docker network create --driver bridge secure_network
docker run --network secure_network myapp
Restrict Container Ports
• Limit exposed ports using -p or EXPOSE only when necessary:
docker run -d -p 8080:80 nginx
Enable Firewall Rules
• Use host-level firewalls to restrict external access:
ufw allow 8080
Inspect Container Activity
Check running containers and their resource usage:
docker stats
docker inspect <container_id>
4. Open Policy Agent (OPA): Policy enforcement tool for containerized
systems.
Section 9: Advanced Docker Topics
This section delves into advanced Docker concepts, including multi-stage
builds, BuildKit for efficient image creation, debugging containers, optimizing
performance, and integrating Docker into CI/CD pipelines. Mastering these
topics will help you elevate your containerization skills to a professional level.
# Stage 2: Runtime
FROM nginx:alpine
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Benefits of Multi-Stage Builds:
• Smaller image size.
• Improved security by excluding build tools and unnecessary
dependencies.
9.2 Using BuildKit for Enhanced Builds
BuildKit is an advanced image builder for Docker that improves performance
and flexibility during the build process.
Enabling BuildKit
Set the DOCKER_BUILDKIT environment variable to 1:
export DOCKER_BUILDKIT=1
docker build .
Features of BuildKit:
1. Parallel Build Stages: Speeds up multi-stage builds.
2. Secret Management: Securely passes secrets during builds.
3. Cache Export: Shares build cache between builds for faster rebuilds.
Example: Using Secrets in Builds
# Use BuildKit secrets
RUN --mount=type=secret,id=mysecret echo "Secret is $(cat /run/secrets/mysecret)"
Build with secrets:
DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=mysecret.txt .
docker logs <container_id>
Debugging Network Issues
Use tools like ping or curl to test connectivity:
docker exec <container_id> ping <hostname>
docker exec <container_id> curl http://service:port
Using Debug Images
Use lightweight debugging images like busybox or alpine:
docker run -it busybox sh
pipeline {
  // Run all stages inside a node:14 container
  agent {
    docker {
      image 'node:14'
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'npm install'
      }
    }
    stage('Test') {
      steps {
        sh 'npm test'
      }
    }
  }
}
docker run --network host myapp
3. Monitor Resource Usage
• Check real-time resource consumption:
docker stats
4. Improve Build Performance
• Leverage BuildKit’s caching mechanism for faster builds.
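One way to leverage that cache is a BuildKit cache mount, which persists a package cache between builds (this sketch assumes an npm-based image and the default npm cache path):
# syntax=docker/dockerfile:1
FROM node:14
WORKDIR /app
COPY package*.json ./
# Reuse the npm cache across builds instead of re-downloading every package
RUN --mount=type=cache,target=/root/.npm npm install
COPY . .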
Section 10: Docker in Production
Deploying Docker containers in production requires careful planning and
execution to ensure reliability, scalability, security, and performance. This
section covers best practices, monitoring, logging, scaling strategies, and
disaster recovery for running Dockerized applications in production
environments.
o Use --memory and --cpus flags to limit container resource
consumption:
docker run --memory=512m --cpus=1 myapp
3. Implement Health Checks
o Define health checks in Dockerfiles or Compose files to monitor
container health:
HEALTHCHECK --interval=30s CMD curl -f http://localhost || exit 1
4. Use Rolling Updates
o Gradually roll out updates to minimize downtime and risk:
docker service update --image myapp:v2 web
5. Externalize Configurations
o Use environment variables, Docker Compose, or secrets
management for application configurations.
6. Automate Backups
o Regularly back up data volumes and configurations.
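A common backup pattern is to archive a named volume from a short-lived container (the volume and archive names are illustrative):
docker run --rm -v app_data:/data -v "$(pwd)":/backup busybox tar czf /backup/app_data.tar.gz -C /data .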
• Configure alerts for anomalies (e.g., high memory usage or failed health
checks).
10.6 Security in Production
1. Use Private Registries
• Store images in private registries like Docker Trusted Registry or AWS
ECR.
2. Enable Image Scanning
• Use tools like Trivy or Docker Scan to detect vulnerabilities in images.
3. Limit Container Privileges
• Avoid running containers as the root user (see the sketch after this list).
• Use security profiles like AppArmor or SELinux to restrict container actions.
4. Secure Networks
• Isolate containers using private networks.
• Use VPNs or firewalls to control external access.
5. Encrypt Sensitive Data
• Use Docker Secrets or external tools like HashiCorp Vault for secure
secret management.
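As referenced above, a minimal sketch of dropping root privileges, assuming a Debian-based image (the user name and UID are illustrative):
# In the Dockerfile: create and switch to an unprivileged user
FROM node:14
RUN useradd --create-home appuser
USER appuser
Alternatively, override the user at run time:
docker run --user 1000:1000 myapp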
10.8 Using CI/CD Pipelines for Production
1. Continuous Integration
• Automate image builds and testing in CI pipelines using Jenkins, GitHub
Actions, or GitLab CI/CD.
2. Continuous Deployment
• Automate container deployments to staging and production
environments:
o Example: A GitHub Actions pipeline that builds, tests, and deploys an image to production (see the sketch after this list).
3. Rollback Strategies
• Use orchestration tools for version control and rollback capabilities:
docker service rollback my_service
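The GitHub Actions pipeline mentioned in step 2 might look roughly like this (the image name, branch, test command, and DOCKERHUB_* secret names are assumptions):
name: build-and-push
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and test the image
        run: |
          docker build -t myuser/myapp:${{ github.sha }} .
          docker run --rm myuser/myapp:${{ github.sha }} npm test
      - name: Push to Docker Hub
        run: |
          echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
          docker push myuser/myapp:${{ github.sha }}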
2. Use Spot Instances (Cloud Providers)
• Use cost-effective spot or preemptible instances for non-critical
workloads.
3. Monitor Resource Usage
• Identify and eliminate unused containers and volumes:
docker system prune
This section concludes the Docker Comprehensive Guide, providing all the
essential knowledge for deploying and managing Dockerized applications in
production environments. From basic concepts to advanced techniques, you
are now equipped to utilize Docker effectively across development and
production workflows.
Conclusion: Mastering Docker
Docker has revolutionized the way applications are developed, shipped, and
deployed, offering a unified platform for building, managing, and scaling
containerized applications. This comprehensive guide has walked you through
Docker's lifecycle, from understanding its core concepts to mastering its
advanced capabilities for production environments.
Here are the key takeaways:
1. Foundations of Docker
o Docker provides a lightweight, portable, and efficient way to
package and run applications.
o Core components like images, containers, volumes, and networks
form the building blocks of Dockerized environments.
2. Practical Usage
o Managing containers, building custom images, and leveraging
Docker Compose streamline workflows for developers and DevOps
engineers.
o Docker networking and storage options ensure seamless
communication and data persistence for applications.
3. Advanced Topics
o Multi-stage builds, BuildKit, and performance tuning optimize the
efficiency and security of containerized applications.
o Integration with CI/CD pipelines enables rapid deployment and
scaling in modern software development.
4. Production Readiness
o Docker Swarm and orchestration tools facilitate high availability
and scalability.
o Best practices for security, monitoring, logging, and disaster
recovery ensure a robust production setup.
5. Future Potential
o Docker's ecosystem is constantly evolving, with growing
integrations in cloud-native tools like Kubernetes, Prometheus,
and HashiCorp Vault.
o As organizations adopt microservices and DevOps practices,
Docker remains a cornerstone technology for modern application
development and deployment.
By mastering Docker, you are equipped to tackle complex software challenges,
streamline workflows, and drive efficiency in both development and
operations. Whether you’re building a simple app or managing a distributed
system, Docker provides the tools and flexibility to succeed in today’s fast-
paced tech landscape.
What’s Next?
To continue growing your Docker expertise:
• Explore Kubernetes for advanced orchestration and scaling.
• Dive deeper into Docker security to safeguard containerized
environments.
• Experiment with multi-cloud deployments and hybrid cloud strategies.
With Docker as a fundamental skill, you’re well-prepared to thrive in the world
of containerization and cloud-native technologies. Happy Dockerizing!