DevOps Chapter-5
Docker
Docker is an open platform for developing, shipping, and running applications. Docker enables you to
separate your applications from your infrastructure so you can deliver software quickly. With Docker, you
can manage your infrastructure in the same ways you manage your applications. By taking advantage of
Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce
the delay between writing code and running it in production.
The Docker platform
Docker provides the ability to package and run an application in a loosely isolated environment called a
container. The isolation and security allows you to run many containers simultaneously on a given host.
Containers are lightweight and contain everything needed to run the application, so you do not need to
rely on what is currently installed on the host. You can easily share containers while you work, and be sure
that everyone you share with gets the same container that works in the same way.
Docker provides tooling and a platform to manage the lifecycle of your containers:
1. Develop your application and its supporting components using containers.
2. The container becomes the unit for distributing and testing your application.
3. When you’re ready, deploy your application into your production environment, as a container or an
orchestrated service. This works the same whether your production environment is a local data
center, a cloud provider, or a hybrid of the two.
What can I use Docker for?
Fast, consistent delivery of your applications
Docker streamlines the development lifecycle by allowing developers to work in standardized
environments using local containers which provide your applications and services. Containers are great for
continuous integration and continuous delivery (CI/CD) workflows.
Consider the following scenario:
1. Developers write code locally and share their work with their colleagues using Docker containers.
2. They use Docker to push their applications into a test environment and execute automated and
manual tests.
3. When developers find bugs, they can fix them in the development environment and redeploy them
to the test environment for testing and validation.
4. When testing is complete, getting the fix to the customer is as simple as pushing the updated image
to the production environment.
Docker is more than containers, though. It is a suite of tools for developers to build, share, run and
orchestrate containerized apps.
● Developer tools for building container images: Docker Build creates a container image, the
blueprint for a container, including everything needed to run an application – the application code,
binaries, scripts, dependencies, configuration, environment variables, and so on. Docker
Compose is a tool for defining and running multi-container applications. These tools integrate
tightly with code repositories (such as GitHub) and continuous integration and continuous delivery
(CI/CD) pipeline tools (such as Jenkins).
● Sharing images: Docker Hub is a registry service provided by Docker for finding and sharing
container images with your team or the public. Docker Hub is similar in functionality to GitHub.
● Running containers: Docker Engine is a container runtime that runs in almost any environment:
Mac and Windows PCs, Linux and Windows servers, the cloud, and on edge devices. Docker
Engine is built on top of containerd, the leading open-source container runtime, a project of the Cloud
Native Computing Foundation (CNCF).
● Built-in container orchestration: Docker Swarm manages a cluster of Docker Engines (typically on
different nodes) called a swarm. Here the overlap with Kubernetes begins.
What is Kubernetes?
Kubernetes is an open-source container orchestration platform for managing, automating, and scaling
containerized applications. Although Docker Swarm is also an orchestration tool, Kubernetes is the de
facto standard for container orchestration because of its greater flexibility and capacity to scale.
Organizations use Kubernetes to automate the deployment and management of containerized applications.
Rather than individually managing each container in a cluster, a DevOps team can instead tell Kubernetes
how to allocate the necessary resources in advance.
Where Kubernetes and the Docker suite intersect is at container orchestration. So when people talk about
Kubernetes vs. Docker, what they really mean is Kubernetes vs. Docker Swarm.
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as
images, containers, networks, and volumes. A daemon can also communicate with other daemons to
manage Docker services.
The Docker client
The Docker client (docker) is the primary way that many Docker users interact with Docker. When you
use commands such as docker run, the client sends these commands to dockerd, which carries them out.
The docker command uses the Docker API. The Docker client can communicate with more than one
daemon.
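For illustration, here is a rough sketch of how the same client can target different daemons; the remote host name is only a placeholder:
$ docker info # talks to the local daemon (dockerd)
$ docker -H tcp://remote-host:2375 info # talks to a daemon listening on another machine
$ DOCKER_HOST=ssh://user@remote-host docker ps # same idea, using an environment variable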
Docker Desktop
Docker Desktop is an easy-to-install application for your Mac or Windows environment that enables you
to build and share containerized applications and microservices. Docker Desktop includes the Docker
daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and
Credential Helper.
Docker registries
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker
is configured to look for images on Docker Hub by default. You can even run your own private registry.
When you use the docker pull or docker run commands, the required images are pulled from your
configured registry. When you use the docker push command, your image is pushed to your configured
registry.
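As a small illustrative sketch (the private registry address registry.example.com is just a placeholder), pulling from the default registry and pushing to another one looks like this:
$ docker pull ubuntu:22.04 # pulled from Docker Hub, the default registry
$ docker tag ubuntu:22.04 registry.example.com/ubuntu:22.04 # rename the image for a private registry
$ docker push registry.example.com/ubuntu:22.04 # pushed to that registry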
Docker objects
When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and
other objects. Here we describe some of those objects.
Images
An image is a read-only template with instructions for creating a Docker container. Often, an image
is based on another image, with some additional customization. For example, you may build an image
which is based on the ubuntu image, but installs the Apache web server and your application, as well as
the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published in a
registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps
needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When
you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is
part of what makes images so lightweight, small, and fast, when compared to other virtualization
technologies.
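As a minimal sketch of the ubuntu-plus-Apache example above (the image tag, application path, and start command are assumptions, not taken from this chapter), each instruction below produces one image layer:
FROM ubuntu:22.04 # start from the ubuntu base image
RUN apt-get update && apt-get install -y apache2 # layer that installs the Apache web server
COPY ./my-app/ /var/www/html/ # layer that adds your application files (placeholder path)
EXPOSE 80 # document the port the web server listens on
CMD ["apache2ctl", "-D", "FOREGROUND"] # how a container built from this image starts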
Containers
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container
using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or
even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can
control how isolated a container’s network, storage, or other underlying subsystems are from other
containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create
or start it. When a container is removed, any changes to its state that are not stored in persistent storage
disappear.
Example docker run command
The following command runs an ubuntu container, attaches interactively to your local command-line session, and
runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming that you use the default registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as
though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command
manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running
container to create or modify files and directories in its local filesystem.
4. Docker creates a network interface to connect the container to the default network, since you did
not specify any networking options. This includes assigning an IP address to the container. By
default, containers can connect to external networks using the host machine’s network connection.
5. Docker starts the container and executes /bin/bash. Because the container is running interactively
and attached to your terminal (due to the -i and -t flags), you can provide input using your
keyboard while the output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is not removed.
You can start it again or remove it.
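As a quick follow-up sketch, the stopped container can be listed, restarted, or removed; <container id> comes from the docker ps -a output:
$ docker ps -a # the exited ubuntu container is still listed here
$ docker start -a -i <container id> # start it again, attached and interactive
$ docker rm <container id> # or remove it permanently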
The underlying technology
Docker is written in the Go Programming Language and takes advantage of several features of the Linux
kernel to deliver its functionality. Docker uses a technology called namespaces to provide the isolated
workspace called the container. When you run a container, Docker creates a set of namespaces for that
container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace
and its access is limited to that namespace.
Install Docker Desktop on Windows
Update to the Docker Desktop terms
Commercial use of Docker Desktop in larger enterprises (more than 250 employees OR more than $10
million USD in annual revenue) now requires a paid subscription.
Welcome to Docker Desktop for Windows.
This page contains information about Docker Desktop for Windows system requirements, the download URL,
and instructions to install and update Docker Desktop for Windows.
Download Docker Desktop for Windows
System requirements
Your Windows machine must meet the following requirements to successfully install Docker Desktop.
● WSL 2 backend
● Hyper-V backend and Windows containers
WSL 2 backend
● Windows 11 64-bit / Windows 10 64-bit
● Enable the WSL 2 feature on Windows
● The following hardware prerequisites are required to successfully run WSL 2 on Windows 10 or Windows 11:
o 64-bit processor
o 4GB RAM
o BIOS-level hardware virtualization support must be enabled in the BIOS settings.
Download and install the Linux kernel update package.
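On recent Windows 10/11 builds, the WSL 2 feature can typically be enabled from an administrator PowerShell prompt with the commands below; the exact steps may vary with your Windows build, so treat this as a sketch:
wsl --install # installs WSL and a default Linux distribution
wsl --set-default-version 2 # makes WSL 2 the default for new distributions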
Docker only supports Docker Desktop on Windows for those versions of Windows 10 that are still
within Microsoft’s servicing timeline.
Containers and images created with Docker Desktop are shared between all user accounts on machines
where it is installed. This is because all Windows accounts use the same VM to build and run containers.
Note that it is not possible to share containers and images between user accounts when using the Docker
Desktop WSL 2 backend.
Nested virtualization scenarios, such as running Docker Desktop on a VMWare or Parallels instance might
work, but there are no guarantees.
About Windows containers
● Switch between Windows and Linux containers describes how you can toggle between Linux and
Windows containers in Docker Desktop.
● Getting Started with Windows Containers (Lab) provides a tutorial on how to set up and run
Windows containers on Windows 10. It shows how to use a MusicStore application with Windows
containers.
● Docker Container Platform for Windows articles and blog posts on the Docker website.
Note
To run Windows containers, you need Windows 10 or Windows 11 Professional or Enterprise edition.
Windows Home or Education editions will only allow you to run Linux containers.
Install Docker Desktop on Windows
Install interactively
1. Double-click Docker Desktop Installer.exe to run the installer.
If you haven’t already downloaded the installer (Docker Desktop Installer.exe), you can get it
from Docker Hub. It downloads to your Downloads folder.
When prompted, ensure the Use WSL 2 instead of Hyper-V option on the Configuration page is selected or
not depending on your choice of backend.
If your system only supports one of the two options, you will not be able to select which backend to use.
2. Follow the instructions on the installation wizard to authorize the installer and proceed with the
install.
If your admin account is different to your user account, you must add the user to the docker-users group:
net localgroup docker-users <user> /add
Start Docker Desktop
Docker Desktop does not start automatically after installation. To start Docker Desktop:
1. Search for Docker and select Docker Desktop in the search results.
2. The Docker menu displays the Docker Subscription Service Agreement window. It includes
a change to the terms of use for Docker Desktop.
Here’s a summary of the key changes:
o Our Docker Subscription Service Agreement includes a change to the terms of use for
Docker Desktop
o It remains free for small businesses (fewer than 250 employees AND less than $10 million
in annual revenue), personal use, education, and non-commercial open source projects.
o It requires a paid subscription for professional use in larger enterprises.
o The effective date of these terms is August 31, 2021.
o The existing Docker Free subscription has been renamed Docker Personal, and we have
introduced a Docker Business subscription.
o The Docker Pro, Team, and Business subscriptions include commercial use of Docker
Desktop.
3. Click the checkbox to indicate that you accept the updated terms and then click Accept to
continue. Docker Desktop starts after you accept the terms.
Install Docker Desktop on Linux
Welcome to Docker Desktop for Linux. This page contains information about system requirements,
download URLs, and instructions on how to install and update Docker Desktop for Linux.
Download Docker Desktop for Linux packages
System requirements
To install Docker Desktop, your Linux host must meet the following requirements:
● 64-bit kernel and CPU support for virtualization
● KVM virtualization support. See the KVM virtualization support section below for how to check whether
the KVM kernel modules are enabled and how to provide access to the kvm device.
● QEMU must be version 5.2 or newer.
● systemd init system.
● Gnome or KDE Desktop environment - For many Linux distros, the Gnome environment does not
support tray icons. To add support for tray icons, you need to install a Gnome extension (for
example, AppIndicator).
● At least 4 GB of RAM.
Docker Desktop for Linux runs a Virtual Machine (VM).
Supported platforms
Docker provides .deb and .rpm packages for the Ubuntu, Debian, and Fedora Linux distributions.
Note:
An experimental package is available for Arch-based distributions. Docker has not tested or verified the
installation.
Docker supports Docker Desktop on the current LTS release of the aforementioned distributions and the
most recent version. As new versions are made available, Docker stops supporting the oldest version and
supports the newest version.
KVM virtualization support
Docker Desktop runs a VM that requires KVM support.
The kvm module should load automatically if the host has virtualization support.
To load the module manually, run:
$ modprobe kvm
Depending on the processor of the host machine, the corresponding module must be loaded:
$ modprobe kvm_intel # Intel processors
$ modprobe kvm_amd # AMD processors
If the above commands fail, you can view the diagnostics by running:
$ kvm-ok
To check if the KVM modules are enabled, run:
$ lsmod | grep kvm
kvm_amd 167936 0
ccp 126976 1 kvm_amd
kvm 1089536 1 kvm_amd
irqbypass 16384 1 kvm
Docker Commands
1. docker --version
This command is used to get the currently installed version of Docker.
2. docker pull
Usage: docker pull <image name>
This command is used to pull images from the Docker repository (hub.docker.com).
3. docker run
Usage: docker run -it -d <image name>
This command is used to create a container from an image
4. docker ps
This command is used to list the running containers
5. docker ps -a
This command is used to show all the running and exited containers
6. docker exec
Usage: docker exec -it <container id> bash
This command is used to access the running container
7. docker stop
Usage: docker stop <container id>
This command stops a running container
8. docker kill
Usage: docker kill <container id>
This command kills the container by stopping its execution immediately. The difference between ‘docker kill’ and
‘docker stop’ is that ‘docker stop’ gives the container time to shut down gracefully; in situations where the container is
taking too long to stop, you can opt to kill it instead.
9. docker commit
Usage: docker commit <container id> <username/image name>
This command creates a new image of an edited container on the local system
10. docker login
This command is used to log in to the Docker Hub registry.
11. docker push
Usage: docker push <username/image name>
This command is used to push an image to the Docker Hub repository.
12. docker images
This command lists all the locally stored docker images
13. docker rm
Usage: docker rm <container id>
This command is used to delete a stopped container
14. docker rmi
Usage: docker rmi <image-id>
This command is used to delete an image from local storage
15. docker build
Usage: docker build <path to build context (the directory containing the Dockerfile)>
This command is used to build an image from a specified Dockerfile.
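As a short sketch that ties several of the commands above together (the image and container names are placeholders):
$ docker build -t my-app . # build an image from the Dockerfile in the current directory
$ docker images # confirm the image is stored locally
$ docker run -it -d --name my-app-container my-app # create and start a container from it
$ docker ps # list the running container
$ docker stop my-app-container # stop it
$ docker rm my-app-container # remove the stopped container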
Docker Hub is a hosted repository service provided by Docker for finding and sharing container
images with your team.
Docker Hub is a cloud-based repository in which Docker users and partners create, test, store and
distribute container images.
Key features include:
1. Private Repositories: Push and pull container images.
2. Automated Builds: Automatically build container images from GitHub and Bitbucket and push
them to Docker Hub.
Repositories
Docker Hub repositories allow you to share container images with your team, customers, or the Docker
community at large.
Docker images are pushed to Docker Hub through the docker push command. A single Docker Hub
repository can hold many Docker images (stored as tags).
Creating repositories
To create a repository, sign into Docker Hub, click on Repositories then Create Repository:
● You can choose to put it in your Docker ID namespace or in any organization where you are
an owner.
● The repository name needs to be unique in that namespace, can be two to 255 characters, and can
only contain lowercase letters, numbers, hyphens (-), and underscores (_).
Note:
You cannot rename a Docker Hub repository once it has been created.
● The description can be up to 100 characters and is used in the search result.
● You can link a GitHub or Bitbucket account now, or choose to do it later in the repository settings.
After you hit the Create button, you can start using docker push to push images to this repository.
Deleting a repository
1. Sign into Docker Hub and click Repositories.
2. Select a repository from the list, click Settings and then Delete Repository.
Note:
Deleting a repository deletes all the images it contains and its build settings. This action cannot be undone.
3. Enter the name of the repository to confirm the deletion and click Delete.
Pushing images
You can push an image to the repository designated by its name or tag:
$ docker push <hub-user>/<repo-name>:<tag>
The image is then uploaded and available for use by your teammates and/or the community.
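For example, a local image can be named after the repository and then pushed; the image and repository names here are placeholders:
$ docker tag my-app <hub-user>/<repo-name>:latest # name the local image after the Docker Hub repository
$ docker login # authenticate with Docker Hub
$ docker push <hub-user>/<repo-name>:latest # upload the image to the repository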
Private repositories
Private repositories let you keep container images private, either to your own account or within an
organization or team.
To create a private repository, select Private when creating a repository:
You get one private repository for free with your Docker Hub user account (not usable for organizations
you’re a member of). If you need more private repositories for your user account, upgrade your Docker
Hub plan from your Billing Information page.
Once the private repository is created, you can push and pull images to and from it using Docker.
Note: You need to be signed in and have access to work with a private repository.
Note: Private repositories are not currently available to search through the top-level search or docker
search.
You can designate collaborators and manage their access to a private repository from that
repository’s Settings page. You can also toggle the repository’s status between public and private, if you
have an available repository slot open. Otherwise, you can upgrade your Docker Hub plan.
Collaborators and their role
A collaborator is someone you want to give access to a private repository. Once designated, they
can push and pull to your repositories. They are not allowed to perform any administrative tasks such as
deleting the repository or changing its status from private to public.
Note
A collaborator cannot add other collaborators. Only the owner of the repository has administrative access.
You can also assign more granular collaborator rights (“Read”, “Write”, or “Admin”) on Docker Hub by
using organizations and teams. For more information see the organizations documentation.
Viewing repository tags
Docker Hub’s individual repositories view shows you the available tags and the size of the associated
image. Go to the Repositories view and click on a repository to see its tags.
Image sizes are the cumulative space taken up by the image and all its parent images. This is also the disk
space used by the contents of the .tar file created when you docker save an image.
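As a small sketch of that relationship (the image name is just an example), you can export an image and inspect the archive on disk:
$ docker save -o ubuntu.tar ubuntu:22.04 # export the image and all its layers to a tar file
$ du -h ubuntu.tar # inspect the size of the exported archive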
To view individual tags, click on the Tags tab.
An image is considered stale if there has been no push/pull activity for more than one month. A
multi-architecture image is considered stale if all the single-architecture images that are part of its manifest are
stale.
To delete a tag, select the corresponding checkbox and select Delete from the Action drop-down list.
Note
Only a user with administrative access (owner or team member with Admin permission) over the
repository can delete tags.
Select a tag’s digest to view details.
Amazon Elastic Container Service (Amazon ECS)
Amazon ECS provides the following features:
1. A serverless option with AWS Fargate. With AWS Fargate, you don't need to manage servers,
handle capacity planning, or isolate container workloads for security. Fargate handles the
infrastructure management aspects of your workload for you. You can schedule the placement of
your containers across your cluster based on your resource needs, isolation policies, and
availability requirements.
2. Integration with AWS Identity and Access Management (IAM). You can assign granular
permissions for each of your containers. This allows for a high level of isolation when building
your applications. In other words, you can launch your containers with the security and compliance
levels that you've come to expect from AWS.
3. AWS managed container orchestration. As a fully managed service, Amazon ECS comes with
AWS configuration and operational best practices built-in. This also means that you don't need to
manage the control plane, nodes, or add-ons. It's integrated with both AWS
and third-party tools, such as Amazon Elastic Container Registry and Docker. This integration
makes it easier for teams to focus on building the applications, not the environment.
4. Continuous integration and continuous deployment (CI/CD). This is a common process for
microservice architectures that are based on Docker containers. You can create a CI/CD pipeline
that takes the following actions:
o Monitors changes to a source code repository
o Builds a new Docker image from that source
o Pushes the image to an image repository such as Amazon ECR or Docker Hub
o Updates your Amazon ECS services to use the new image in your application
5. Support for service discovery. This is a key component of most distributed systems and
service-oriented architectures. With service discovery, your microservice components are
automatically discovered as they're created and terminated on a given infrastructure.
6. Support for sending your container instance log information to CloudWatch Logs. After you send
this information to Amazon CloudWatch, you can view the logs from your container instances in
one convenient location. This prevents your container logs from taking up disk space on your
container instances.
The AWS container services team maintains a public roadmap on GitHub. The roadmap contains
information about what the teams are working on and enables AWS customers to provide direct feedback.
Launch types
There are two models that you can use to run your containers:
● Fargate launch type - This is a serverless pay-as-you-go option. You can run containers without
needing to manage your infrastructure.
● EC2 launch type - Configure and deploy EC2 instances in your cluster to run your containers. This launch type is suitable for the following workloads:
● Workloads that require consistently high CPU core and memory usage
● Large workloads that need to be optimized for price
● Your applications need to access persistent storage
● You must directly manage your infrastructure
You can create, access, and manage your Amazon ECS resources using any of the following interfaces:
● AWS Management Console — provides a web interface that you can use to access your Amazon
ECS resources.
● AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services,
including Amazon ECS. It's supported on Windows, Mac, and Linux (see the example after this list).
● AWS SDKs — Provides language-specific APIs and takes care of many of the connection details.
These include calculating signatures, handling request retries, and error handling. For more
information, see AWS SDKs.
● AWS Copilot — Provides an open-source tool for developers to build, release, and operate production
ready containerized applications on Amazon ECS. For more information, see AWS Copilot on the
GitHub website.
● Amazon ECS CLI — Provides a command line interface for you to run your applications on Amazon
ECS and AWS Fargate using the Docker Compose file format. You can quickly provision resources,
push and pull images using Amazon Elastic Container Registry, and monitor running applications on
Amazon ECS or Fargate. You can also test containers that are running locally along with containers in
the Cloud within the CLI. For more information, see Amazon ECS CLI on the GitHub website.
● AWS CDK — Provides an open-source software development framework that you can use to model
and provision your cloud application resources using familiar programming languages. The AWS
CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. For more
information, see Getting started with Amazon ECS using the AWS CDK.
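As a minimal sketch of the AWS CLI route (the cluster name demo-cluster is a placeholder, and your credentials and default region must already be configured):
$ aws ecs create-cluster --cluster-name demo-cluster # create an Amazon ECS cluster
$ aws ecs list-clusters # list the clusters in your account and region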
Pricing
Amazon ECS pricing is dependent on whether you use AWS Fargate or Amazon EC2 infrastructure to
host your containerized workloads. When using Amazon ECS on AWS Outposts, the pricing follows the
same model that's used when you use Amazon EC2 directly.
Amazon ECS and Fargate also offer Savings Plans that provide significant savings based on your AWS
usage.
To view your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost
Management console. Your bill contains links to usage reports that provide additional details about your
bill.
Trusted Advisor is a service that you can use to help optimize the costs, security, and performance of your
AWS environment.
Container
Use containers to build, share, and run your applications, packaging software into standardized units
for development, shipment, and deployment.
A container is a standard unit of software that packages up code and all its dependencies so the application
runs quickly and reliably from one computing environment to another. A Docker container image is a
lightweight, standalone, executable package of software that includes everything needed to run an
application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime; in the case of Docker containers, images become
containers when they run on Docker Engine. Available for both Linux and Windows-based applications,
containerized software will always run the same, regardless of the infrastructure. Containers isolate
software from its environment and ensure that it works uniformly despite differences for instance between
development and staging.
Docker containers that run on Docker Engine:
● Standard: Docker created the industry standard for containers, so they could be portable anywhere.
● Lightweight: Containers share the machine’s OS kernel and therefore do not require an OS per
application, driving higher server efficiency and reducing server and licensing costs.
● Secure: Applications are safer in containers, and Docker provides the strongest default isolation
capabilities in the industry.
CONTAINERS
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple
containers can run on the same machine and share the OS kernel with other containers, each running as
isolated processes in user space. Containers take up less space than VMs (container images are typically
tens of MBs in size), can handle more applications, and require fewer VMs and operating systems.
VIRTUAL MACHINES
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The
hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating
system, the application, necessary binaries and libraries – taking up tens of GBs. VMs can also be slow to
boot.
Containers and Virtual Machines Together
Containers and VMs used together provide a great deal of flexibility in deploying and managing applications.
Container Standards and Industry Leadership
The launch of Docker in 2013 jump started a revolution in application development – by democratizing
software containers. Docker developed a Linux container technology – one that is portable, flexible and
easy to deploy. Docker open sourced libcontainer and partnered with a worldwide community of
contributors to further its development. In June 2015, Docker donated the container image specification
and runtime code now known as runc, to the Open Container Initiative (OCI) to help establish
standardization as the container ecosystem grows and matures.
Following this evolution, Docker continues to give back with the containerd project, which Docker
donated to the Cloud Native Computing Foundation (CNCF) in 2017. containerd is an industry-standard
container runtime that leverages runc and was created with an emphasis on simplicity, robustness and
portability. containerd is the core container runtime of the Docker Engine.
A Docker container is an isolated environment that usually contains a single application with all required
dependencies. Many times we need to run commands inside a Docker container. There are several
ways in which we can execute a command inside a container and get the required output.
Dockerfile Way
This is not exactly a way to run commands inside a running Docker container, though it may be helpful in
development situations or for initial deployment debugging. We can use the RUN instruction inside a
Dockerfile. Here’s our sample Dockerfile:
FROM nginx:latest
RUN nginx -V
It simply pulls the latest nginx image from the registry and then runs the nginx -V command to display the
Nginx version when you build the image.
$ docker build -t nginx-test .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM nginx:latest
---> 7ce4f91ef623
Step 2/2 : RUN nginx -V
---> Running in 43918bbbeaa5
nginx version: nginx/1.19.9
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx
--group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module
--with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module
--with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module
--with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module
--with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module
--with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2
-fdebug-prefix-map=/data/builder/debuild/nginx-1.19.9/debian/debuild-base/nginx-1.19.9=. -fstack-protector-strong -Wformat
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'
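Another way, using the docker exec command described earlier, is to run the command inside an already running container; the container name below is just a placeholder:
$ docker run -d --name nginx-test-run nginx:latest # start an Nginx container in the background
$ docker exec nginx-test-run nginx -V # run the command inside the running container
$ docker stop nginx-test-run # stop the container when done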
How to run multiple containers of the same image and expose different ports?
I have built a droplet with Ubuntu 18.04 and Docker CE. For teaching purposes, I want to run the same
container 18 times, based on a Docker image hosted on
Docker Hub: https://hub.docker.com/repository/docker/scienceparkstudygroup/master-gls
You would just need to adjust the -p argument and change the first part from 8787 to the new port that you
would like to use.
For example:
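A sketch of what that looks like, assuming the image listens on port 8787 inside the container (any required image tag is omitted here and may need to be added):
$ docker run -d --name gls-01 -p 8787:8787 scienceparkstudygroup/master-gls
$ docker run -d --name gls-02 -p 8788:8787 scienceparkstudygroup/master-gls
$ docker run -d --name gls-03 -p 8789:8787 scienceparkstudygroup/master-gls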
Note: Make sure to use different container names as well; otherwise it will not work.