
IT-31 DevOps

Docker - Containers & Build Tool - Maven


4.1. Introduction: What is Docker, Use cases of Docker, Platforms for Docker, Docker vs. Virtualization
4.2. Architecture: Docker architecture, Understanding the Docker components
4.3. Installation: Installing Docker on Linux, Understanding installation of Docker on Windows, Some
Docker commands, Provisioning
4.4. Docker Hub: Downloading Docker images, Uploading images to the Docker Registry and AWS
ECS, Understanding containers, Running commands in a container, Running multiple containers
4.5. Custom images: Creating a custom image, Running a container from the custom image, Publishing
the custom image
4.6. Docker Networking: Accessing containers, Linking containers, Exposing container ports, Container
routing

Docker
Docker is an open platform for developing, shipping, and running applications. Docker enables you to
separate your applications from your infrastructure so you can deliver software quickly. With Docker, you
can manage your infrastructure in the same ways you manage your applications. By taking advantage of
Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce
the delay between writing code and running it in production.
The Docker platform
Docker provides the ability to package and run an application in a loosely isolated environment called a
container. The isolation and security allow you to run many containers simultaneously on a given host.
Containers are lightweight and contain everything needed to run the application, so you do not need to
rely on what is currently installed on the host. You can easily share containers while you work, and be sure
that everyone you share with gets the same container that works in the same way.
Docker provides tooling and a platform to manage the lifecycle of your containers:
1. Develop your application and its supporting components using containers.
2. The container becomes the unit for distributing and testing your application.
3. When you’re ready, deploy your application into your production environment, as a container or an
orchestrated service. This works the same whether your production environment is a local data
center, a cloud provider, or a hybrid of the two.

What can I use Docker for?

Fast, consistent delivery of your applications
Docker streamlines the development lifecycle by allowing developers to work in standardized
environments using local containers which provide your applications and services. Containers are great for
continuous integration and continuous delivery (CI/CD) workflows.
Consider the following scenario:
1. Developers write code locally and share their work with their colleagues using Docker containers.
2. They use Docker to push their applications into a test environment and execute automated and
manual tests.
3. When developers find bugs, they can fix them in the development environment and redeploy them
to the test environment for testing and validation.
4. When testing is complete, getting the fix to the customer is as simple as pushing the updated image
to the production environment.
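As a minimal sketch of that scenario (the image name myorg/webapp, the tag, and the ports are illustrative assumptions, not part of any standard):

# Build the fix locally and tag it for the team's registry
$ docker build -t myorg/webapp:1.2.1 .

# Share it so the test environment pulls the exact same image
$ docker push myorg/webapp:1.2.1

# In the test environment: pull and run the image for validation
$ docker pull myorg/webapp:1.2.1
$ docker run -d --name webapp-test -p 8080:80 myorg/webapp:1.2.1

# Once testing passes, promote the same image to production
$ docker tag myorg/webapp:1.2.1 myorg/webapp:stable
$ docker push myorg/webapp:stable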

Responsive deployment and scaling


Docker’s container-based platform allows for highly portable workloads. Docker containers can run on a
developer’s local laptop, on physical or virtual machines in a data center, on cloud providers, or in a
mixture of environments.
Docker’s portability and lightweight nature also make it easy to dynamically manage workloads, scaling
up or tearing down applications and services as business needs dictate, in near real time.
Running more workloads on the same hardware
Docker is lightweight and fast. It provides a viable, cost-effective alternative to hypervisor-based virtual
machines, so you can use more of your compute capacity to achieve your business goals. Docker is perfect
for high density environments and for small and medium deployments where you need to do more with
fewer resources.
Docker architecture
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the
heavy lifting of building, running, and distributing your Docker containers. The Docker client and
daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The
Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
Another Docker client is Docker Compose, which lets you work with applications consisting of a set of
containers.
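For example, the same docker client binary can target either a local or a remote daemon; the remote host below is hypothetical:

# Talk to the local daemon over the default UNIX socket
$ docker ps

# Point a single command at a remote daemon over SSH
$ docker -H ssh://user@build-server.example.com ps

# Or target the remote daemon for the whole shell session
$ export DOCKER_HOST=ssh://user@build-server.example.com
$ docker images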

Docker is more than containers, though. It is a suite of tools for developers to build, share, run and
orchestrate containerized apps.
● Developer tools for building container images: Docker Build creates a container image, the
blueprint for a container, including everything needed to run an application – the application code,
binaries, scripts, dependencies, configuration, environment variables, and so on. Docker
Compose is a tool for defining and running multi-container applications. These tools integrate
tightly with code repositories (such as GitHub) and continuous integration and continuous delivery
(CI/CD) pipeline tools (such as Jenkins).
● Sharing images: Docker Hub is a registry service provided by Docker for finding and sharing
container images with your team or the public. Docker Hub is similar in functionality to GitHub.
● Running containers: Docker Engine is a container runtime that runs in almost any environment:
Mac and Windows PCs, Linux and Windows servers, the cloud, and edge devices. Docker
Engine is built on top of containerd, the leading open-source container runtime and a project of the
Cloud Native Computing Foundation (CNCF).
● Built-in container orchestration: Docker Swarm manages a cluster of Docker Engines (typically on
different nodes) called a swarm. Here the overlap with Kubernetes begins.
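As a minimal single-node sketch of Swarm in action (the service name and ports are illustrative):

# Turn this Docker Engine into a one-node swarm
$ docker swarm init

# Run a service with three replicas; Swarm schedules the containers
$ docker service create --name web --replicas 3 -p 8080:80 nginx

# See where the replicas were placed
$ docker service ps web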

What is Kubernetes?
Kubernetes is an open-source container orchestration platform for managing, automating, and scaling
containerized applications. Although Docker Swarm is also an orchestration tool, Kubernetes is the de
facto standard for container orchestration because of its greater flexibility and capacity to scale.

Organizations use Kubernetes to automate the deployment and management of containerized applications.
Rather than individually managing each container in a cluster, a DevOps team can instead tell Kubernetes
how to allocate the necessary resources in advance.

Where Kubernetes and the Docker suite intersect is at container orchestration. So when people talk about
Kubernetes vs. Docker, what they really mean is Kubernetes vs. Docker Swarm.

What is Docker in Devops?


Docker is often compared to a virtual machine, but unlike virtual machines, which create a completely
separate operating system, Docker allows applications to share the Linux kernel of the machine on which
it is installed.

The Docker daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as
images, containers, networks, and volumes. A daemon can also communicate with other daemons to
manage Docker services.
The Docker client

The Docker client (docker) is the primary way that many Docker users interact with Docker. When you
use commands such as docker run, the client sends these commands to dockerd, which carries them out.
The docker command uses the Docker API. The Docker client can communicate with more than one
daemon.

Docker Desktop

Docker Desktop is an easy-to-install application for your Mac or Windows environment that enables you
to build and share containerized applications and microservices. Docker Desktop includes the Docker
daemon (dockerd), the Docker client (docker), Docker Compose, Docker Content Trust, Kubernetes, and
Credential Helper.

Docker registries

A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker
is configured to look for images on Docker Hub by default. You can even run your own private registry.

When you use the docker pull or docker run commands, the required images are pulled from your
configured registry. When you use the docker push command, your image is pushed to your configured
registry.
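For instance, running your own private registry is one docker run away, using the official registry image (the localhost port and image names are illustrative):

# Start a private registry listening on localhost:5000
$ docker run -d -p 5000:5000 --name registry registry:2

# Re-tag a local image so its name points at the private registry
$ docker tag ubuntu localhost:5000/my-ubuntu

# Push to, and later pull from, the private registry instead of Docker Hub
$ docker push localhost:5000/my-ubuntu
$ docker pull localhost:5000/my-ubuntu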

Docker objects

When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and
other objects. Here we describe some of those objects.

Images

An image is a read-only template with instructions for creating a Docker container. Often, an image
is based on another image, with some additional customization. For example, you may build an image
which is based on the ubuntu image, but installs the Apache web server and your application, as well as
the configuration details needed to make your application run.

You might create your own images or you might only use those created by others and published in a
registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps
needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When
you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is
part of what makes images so lightweight, small, and fast, when compared to other virtualization
technologies.
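A minimal sketch of such a Dockerfile, along the lines of the ubuntu-plus-Apache example above (the my-app directory is illustrative); each instruction below produces one layer:

# Base image layer
FROM ubuntu:22.04
# One layer: install the Apache web server
RUN apt-get update && apt-get install -y apache2
# One layer: copy your application files into Apache's web root
COPY ./my-app /var/www/html
# Metadata: the port the application listens on
EXPOSE 80
# Default command executed when a container starts from this image
CMD ["apachectl", "-D", "FOREGROUND"]

If you later change only the files in my-app and rebuild, the cached ubuntu and apache2 layers are reused; only the COPY layer and the ones after it are rebuilt.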

Containers

A container is a runnable instance of an image. You can create, start, stop, move, or delete a container
using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or
even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can
control how isolated a container’s network, storage, or other underlying subsystems are from other
containers or from the host machine.

A container is defined by its image as well as any configuration options you provide to it when you create
or start it. When a container is removed, any changes to its state that are not stored in persistent storage
disappear.
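A short sketch of keeping state in persistent storage with a named volume (the volume name, the password, and the mariadb image's MARIADB_ROOT_PASSWORD variable are assumptions for illustration):

# Create a named volume and mount it at the database's data directory
$ docker volume create db-data
$ docker run -d --name db -e MARIADB_ROOT_PASSWORD=secret \
    -v db-data:/var/lib/mysql mariadb

# Removing the container does not remove the volume or the data in it
$ docker rm -f db
$ docker volume ls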
Example docker run command
The following command runs an ubuntu container, attaches interactively to your local command-line
session, and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming the default registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your configured registry, as
though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container create command
manually.
3. Docker allocates a read-write filesystem to the container, as its final layer. This allows a running
container to create or modify files and directories in its local filesystem.
4. Docker creates a network interface to connect the container to the default network, since you did
not specify any networking options. This includes assigning an IP address to the container. By
default, containers can connect to external networks using the host machine’s network connection.
5. Docker starts the container and executes /bin/bash. Because the container is running interactively
and attached to your terminal (due to the -i and -t flags), you can provide input using your
keyboard while the output is logged to your terminal.
6. When you type exit to terminate the /bin/bash command, the container stops but is not removed.
You can start it again or remove it.
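For example, after exiting you can find the stopped container, start it again, or remove it:

# The container still exists after exit; -a lists stopped containers too
$ docker ps -a

# Start it again and re-attach your terminal to the interactive session
$ docker start -ai <container-id>

# Or remove it once you no longer need it
$ docker rm <container-id>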
The underlying technology
Docker is written in the Go Programming Language and takes advantage of several features of the Linux
kernel to deliver its functionality. Docker uses a technology called namespaces to provide the isolated
workspace called the container. When you run a container, Docker creates a set of namespaces for that
container.
These namespaces provide a layer of isolation. Each aspect of a container runs in a separate namespace
and its access is limited to that namespace.
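A quick way to see this isolation is to run a command in a fresh container; the busybox image is used here for its small size, and the output shown is illustrative:

# The container's PID namespace contains only its own processes
$ docker run --rm busybox ps
PID   USER     TIME  COMMAND
    1 root      0:00 ps

# The UTS namespace gives the container its own hostname
$ docker run --rm busybox hostname
0f3d8c2a1b4e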
Install Docker Desktop on Windows
Update to the Docker Desktop terms
Commercial use of Docker Desktop in larger enterprises (more than 250 employees OR more than $10
million USD in annual revenue) now requires a paid subscription.
Welcome to Docker Desktop for Windows.
This page contains information about Docker Desktop for Windows system requirements, the download
URL, and instructions to install and update Docker Desktop for Windows.
Download Docker Desktop for Windows
System requirements
Your Windows machine must meet the following requirements to successfully install Docker Desktop.

● WSL 2 backend
● Hyper-V backend and Windows containers
WSL 2 backend
● Windows 11 64-bit / Windows 10 64-bit
● Enable the WSL 2 feature on Windows
● The hardware prerequisites to successfully run WSL 2 on Windows 10/Windows 11:
o 64-bit processor
o 4GB RAM
o BIOS-level hardware virtualization support must be enabled in the BIOS settings.
Download and install the Linux kernel update package.
Docker only supports Docker Desktop on Windows for those versions of Windows 10 that are still
within Microsoft’s servicing timeline.
Containers and images created with Docker Desktop are shared between all user accounts on machines
where it is installed, because all Windows accounts use the same VM to build and run containers. Note
that it is not possible to share containers and images between user accounts when using the Docker
Desktop WSL 2 backend.

Nested virtualization scenarios, such as running Docker Desktop on a VMware or Parallels instance,
might work, but there are no guarantees.
About Windows containers
● Switch between Windows and Linux containers describes how you can toggle between Linux and
Windows containers in Docker Desktop.
● Getting Started with Windows Containers (Lab) provides a tutorial on how to set up and run
Windows containers on Windows 10. It shows how to use a MusicStore application with Windows
containers.
● Docker Container Platform for Windows articles and blog posts on the Docker website.
Note
To run Windows containers, you need Windows 10 or Windows 11 Professional or Enterprise edition.
Windows Home or Education editions will only allow you to run Linux containers.
Install Docker Desktop on Windows
Install interactively
1. Double-click Docker Desktop Installer.exe to run the installer.
If you haven’t already downloaded the installer (Docker Desktop Installer.exe), you can get it
from Docker Hub. It downloads to your Downloads folder.
When prompted, ensure that the Use WSL 2 instead of Hyper-V option on the Configuration page is
selected or deselected, depending on your choice of backend.

If your system only supports one of the two options, you will not be able to select which backend to use.
2. Follow the instructions on the installation wizard to authorize the installer and proceed with the
install.

3. When the installation is successful, click Close to complete the installation process.


4. If your admin account is different to your user account, you must add the user to
the docker-users group. Run Computer Management as an administrator and navigate to Local
Users and Groups > Groups > docker-users. Right-click to add the user to the group. Log out
and log back in for the changes to take effect.
Install from the command line
After downloading Docker Desktop Installer.exe, run the following command in a terminal to install Docker
Desktop:

⮚ "Docker Desktop Installer.exe" install

⮚ If you’re using PowerShell you should run it as:

⮚ Start-Process '.\win\build\Docker Desktop Installer.exe' -Wait install

⮚ If using the Windows Command Prompt:

⮚ start /w "" "Docker Desktop Installer.exe" install

The install command accepts the following flags:

● --quiet: suppresses information output when running the installer


● --accept-license: accepts the Docker Subscription Service Agreement now, rather than requiring it to be
accepted when the application is first run
● --allowed-org=<org name>: requires the user to sign in and be part of the specified Docker Hub
organization when running the application
● --backend=<backend name>: selects the backend to use for Docker Desktop, hyper-v or wsl-2 (default)

If your admin account is different to your user account, you must add the user to the docker-users group:
net localgroup docker-users <user> /add
Start Docker Desktop
Docker Desktop does not start automatically after installation. To start Docker Desktop:
1. Search for Docker and select Docker Desktop in the search results.
2. The Docker menu displays the Docker Subscription Service Agreement window. It includes
a change to the terms of use for Docker Desktop.
Here’s a summary of the key changes:
o Our Docker Subscription Service Agreement includes a change to the terms of use for
Docker Desktop
o It remains free for small businesses (fewer than 250 employees AND less than $10 million
in annual revenue), personal use, education, and non-commercial open source projects.
o It requires a paid subscription for professional use in larger enterprises.
o The effective date of these terms is August 31, 2021.
o The existing Docker Free subscription has been renamed Docker Personal, and we have
introduced a Docker Business subscription.
o The Docker Pro, Team, and Business subscriptions include commercial use of Docker
Desktop.
3. Click the checkbox to indicate that you accept the updated terms and then click Accept to
continue. Docker Desktop starts after you accept the terms.
Install Docker Desktop on Linux
Welcome to Docker Desktop for Linux. This page contains information about system requirements,
download URLs, and instructions on how to install and update Docker Desktop for Linux.
Download Docker Desktop for Linux packages
System requirements
To install Docker Desktop, your Linux host must meet the following requirements:
● 64-bit kernel and CPU support for virtualization
● KVM virtualization support (see below for how to check whether the KVM kernel modules are
enabled and how to provide access to the kvm device).
● QEMU must be version 5.2 or newer.
● systemd init system.
● Gnome or KDE Desktop environment. For many Linux distros, the Gnome environment does not
support tray icons; to add support for tray icons, you need to install a Gnome extension (for
example, AppIndicator).
● At least 4 GB of RAM.
Docker Desktop for Linux runs a Virtual Machine (VM).
Supported platforms
Docker provides .deb and .rpm packages for the Ubuntu, Debian, and Fedora Linux distributions.

Note:
An experimental package is available for Arch-based distributions. Docker has not tested or verified the
installation.
Docker supports Docker Desktop on the current LTS release of the aforementioned distributions and the
most recent version. As new versions are made available, Docker stops supporting the oldest version and
supports the newest version.
KVM virtualization support
Docker Desktop runs a VM that requires KVM support.
The kvm module should load automatically if the host has virtualization support.
To load the module manually, run:
$ modprobe kvm
Depending on the processor of the host machine, the corresponding module must be loaded:
$ modprobe kvm_intel # Intel processors
$ modprobe kvm_amd # AMD processors
If the above commands fail, you can view the diagnostics by running:
$ kvm-ok
To check if the KVM modules are enabled, run:
$ lsmod | grep kvm
kvm_amd 167936 0
ccp 126976 1 kvm_amd
kvm 1089536 1 kvm_amd
irqbypass 16384 1 kvm

Set up KVM device user permissions


To check ownership of /dev/kvm, run:
$ ls -al /dev/kvm
Add your user to the kvm group in order to access the kvm device:
$ sudo usermod -aG kvm $USER
Log out and log back in so that your group membership is re-evaluated.
Generic installation steps
1. Download the correct package for your Linux distribution and install it with the corresponding
package manager.
o Install on Debian
o Install on Fedora
o Install on Ubuntu
o Install on Arch
2. Open your Applications menu in Gnome/KDE Desktop and search for Docker Desktop.
3. Select Docker Desktop to start Docker.
The Docker menu displays the Docker Subscription Service Agreement window.
4. Select the checkbox to accept the updated terms and then click Accept to continue. Docker
Desktop starts after you accept the terms.
Important
If you do not agree to the terms, the Docker Desktop application will close and you can no longer
run Docker Desktop on your machine. You can choose to accept the terms at a later date by
opening Docker Desktop.

Differences between Docker Desktop for Linux and Docker Engine


Docker Desktop for Linux and Docker Engine can be installed side-by-side on the same machine.
Docker Desktop for Linux stores containers and images in an isolated storage location within a VM and
offers controls to restrict its resources. Using a dedicated storage location for Docker Desktop prevents it
from interfering with a Docker Engine installation on the same machine.
While it’s possible to run both Docker Desktop and Docker Engine simultaneously, there may be
situations where running both at the same time can cause issues.
For example, when mapping network ports (-p / --publish) for containers, both Docker Desktop and
Docker Engine may attempt to reserve the same port on your machine, which can lead to conflicts (“port
already in use”).
We generally recommend stopping the Docker Engine while you’re using Docker Desktop to prevent the
Docker Engine from consuming resources and to prevent conflicts as described above.
Use the following command to stop the Docker Engine service:
$ sudo systemctl stop docker docker.socket containerd
Depending on your installation, the Docker Engine may be configured to automatically start as a system
service when your machine starts. Use the following command to disable the Docker Engine service, and
to prevent it from starting automatically:
$ sudo systemctl disable docker docker.socket containerd
Switch between Docker Desktop and Docker Engine
The Docker CLI can be used to interact with multiple Docker Engines.
For example, you can use the same Docker CLI to control a local Docker Engine and to control a remote
Docker Engine instance running in the cloud. Docker Contexts allow you to switch between Docker
Engine instances.
When installing Docker Desktop, a dedicated “desktop-linux” context is created to interact with Docker
Desktop. On startup, Docker Desktop automatically sets its own context (desktop-linux) as the current
context. This means that subsequent Docker CLI commands target Docker Desktop. On shutdown, Docker
Desktop resets the current context to the default context.
Use the docker context ls command to view what contexts are available on your machine. The current
context is indicated with an asterisk (*):
$ docker context ls
NAME DESCRIPTION DOCKER ENDPOINT
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock ...
desktop-linux unix:///home/<user>/.docker/desktop/docker.sock ...
If you have both Docker Desktop and Docker Engine installed on the same machine, you can run
the docker context use command to switch between the Docker Desktop and Docker Engine contexts.
For example, use the "default" context to interact with the Docker Engine:
$ docker context use default
default
Current context is now "default"
And use the desktop-linux context to interact with Docker Desktop:
$ docker context use desktop-linux
desktop-linux
Current context is now "desktop-linux"
Refer to the Docker Context documentation for more details.
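You can also create a context of your own for a remote engine; the SSH host below is hypothetical:

# Create a context pointing at a remote Docker Engine over SSH
$ docker context create remote-engine \
    --docker "host=ssh://user@build-server.example.com"

# Switch to it; subsequent docker commands now target the remote engine
$ docker context use remote-engine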
The following are 15 Docker commands that you will use frequently while working with Docker.
Container adoption has been growing rapidly, and organizations actively look for professionals with
Docker skills; a sound knowledge of these commands will give you the needed expertise.

Following are the commands which are being covered:

1. docker --version
2. docker pull
3. docker run
4. docker ps
5. docker ps -a
6. docker exec
7. docker stop
8. docker kill
9. docker commit
10. docker login
11. docker push
12. docker images
13. docker rm
14. docker rmi
15. docker build

Docker Commands
1. docker --version
This command is used to get the currently installed version of Docker.

2. docker pull
Usage: docker pull <image name>
This command is used to pull images from the Docker repository (hub.docker.com).

3. docker run
Usage: docker run -it -d <image name>
This command is used to create a container from an image

4. docker ps
This command is used to list the running containers
5. docker ps -a
This command is used to show all the running and exited containers

6. docker exec
Usage: docker exec -it <container id> bash
This command is used to access the running container

7. docker stop
Usage: docker stop <container id>
This command stops a running container

8. docker kill
Usage: docker kill <container id>
This command kills the container by stopping its execution immediately. The difference between 'docker kill' and
'docker stop' is that 'docker stop' gives the container time to shut down gracefully; when stopping a container is
taking too long, you can opt to kill it.
9. docker commit
Usage: docker commit <container id> <username/imagename>
This command creates a new image of an edited container on the local system.

10. docker login
This command is used to log in to Docker Hub.

11. docker push
Usage: docker push <username/image name>
This command is used to push an image to the Docker Hub repository.

12. docker images
This command lists all the locally stored docker images
13. docker rm
Usage: docker rm <container id>
This command is used to delete a stopped container

14. docker rmi
Usage: docker rmi <image-id>
This command is used to delete an image from local storage

15. docker build
Usage: docker build <path to Dockerfile>
This command is used to build an image from a specified Dockerfile.
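Putting several of these commands together, a typical build-run-push workflow might look like this sketch (the image and repository names are illustrative):

# Build an image from the Dockerfile in the current directory
$ docker build -t myapp .

# Run it in the background, check that it is up, then stop it
$ docker run -it -d --name myapp-test myapp
$ docker ps
$ docker stop myapp-test

# Log in, tag the image with your Docker Hub username, and push it
$ docker login
$ docker tag myapp myuser/myapp:1.0
$ docker push myuser/myapp:1.0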

What is Docker Hub?

Docker Hub is a hosted repository service provided by Docker for finding and sharing container
images with your team.

Docker Hub is a cloud-based repository in which Docker users and partners create, test, store and
distribute container images.
Key features include:
● Private repositories: push and pull container images.
● Automated builds: automatically build container images from GitHub and Bitbucket and push
them to Docker Hub.

Repositories
Docker Hub repositories allow you to share container images with your team, customers, or the Docker
community at large.
Docker images are pushed to Docker Hub through the docker push command. A single Docker Hub
repository can hold many Docker images (stored as tags).
Creating repositories
To create a repository, sign into Docker Hub, click on Repositories then Create Repository:

When creating a new repository:

● You can choose to put it in your Docker ID namespace or in any organization where you are
an owner.
● The repository name needs to be unique in that namespace, can be two to 255 characters, and can
only contain lowercase letters, numbers, hyphens (-), and underscores (_).
Note:
You cannot rename a Docker Hub repository once it has been created.

● The description can be up to 100 characters and is used in the search results.
● You can link a GitHub or Bitbucket account now, or choose to do it later in the repository settings.
After you hit the Create button, you can start using docker push to push images to this repository.
Deleting a repository
1. Sign into Docker Hub and click Repositories.
2. Select a repository from the list, click Settings and then Delete Repository.
Note:
Deleting a repository deletes all the images it contains and its build settings. This action cannot be undone.

1. Enter the name of the repository to confirm the deletion and click Delete.

Pushing a Docker container image to Docker Hub


To push an image to Docker Hub, you must first name your local image using your Docker Hub username
and the repository name that you created through Docker Hub on the web.
You can add multiple images to a repository by adding a specific :<tag> to them (for
example docs/base:testing). If it’s not specified, the tag defaults to latest.
Name your local images using one of these methods:

● When you build them, using docker build -t <hub-user>/<repo-name>[:<tag>]


● By re-tagging an existing local image: docker tag <existing-image>
<hub-user>/<repo-name>[:<tag>]
● By using docker commit <existing-container> <hub-user>/<repo-name>[:<tag>] to commit
changes

Now you can push this repository to the registry designated by its name or tag.
$ docker push <hub-user>/<repo-name>:<tag>
The image is then uploaded and available for use by your teammates and/or the community.
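For example, using the docs/base:testing naming from above (the local image name my-local-image is illustrative):

# Re-tag a local image with the Docker Hub namespace and repository
$ docker tag my-local-image docs/base:testing

# Push the tagged image to the repository on Docker Hub
$ docker push docs/base:testing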
Private repositories
Private repositories let you keep container images private, either to your own account or within an
organization or team.
To create a private repository, select Private when creating a repository:

You can also make an existing repository private by going to its Settings tab:

You get one private repository for free with your Docker Hub user account (not usable for organizations
you’re a member of). If you need more private repositories for your user account, upgrade your Docker
Hub plan from your Billing Information page.
Once the private repository is created, you can push and pull images to and from it using Docker.
Note: You need to be signed in and have access to work with a private repository.
Note: Private repositories are not currently available to search through the top-level search or docker
search.
You can designate collaborators and manage their access to a private repository from that
repository’s Settings page. You can also toggle the repository’s status between public and private, if you
have an available repository slot open. Otherwise, you can upgrade your Docker Hub plan.
Collaborators and their role
A collaborator is someone you want to give access to a private repository. Once designated, they
can push and pull to your repositories. They are not allowed to perform any administrative tasks such as
deleting the repository or changing its status from private to public.
Note
A collaborator cannot add other collaborators. Only the owner of the repository has administrative access.
You can also assign more granular collaborator rights (“Read”, “Write”, or “Admin”) on Docker Hub by
using organizations and teams. For more information see the organizations documentation.
Viewing repository tags
Docker Hub’s individual repositories view shows you the available tags and the size of the associated
image. Go to the Repositories view and click on a repository to see its tags.

Image sizes are the cumulative space taken up by the image and all its parent images. This is also the disk
space used by the contents of the .tar file created when you docker save an image.
To view individual tags, click on the Tags tab.
An image is considered stale if there has been no push/pull activity for more than 1 month, i.e.:

● It has not been pulled for more than 1 month


● And it has not been pushed for more than 1 month

A multi-architecture image is considered stale if all single-architecture images part of its manifest are
stale.
To delete a tag, select the corresponding checkbox and select Delete from the Action drop-down list.
Note
Only a user with administrative access (owner or team member with Admin permission) over the
repository can delete tags.
Select a tag’s digest to view details.

Searching for Repositories


You can search the Docker Hub registry through its search interface or by using the command line
interface. Searching can find images by image name, username, or description:
$ docker search centos
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
centos The official build of CentOS. 1034 [OK]
ansible/centos7-ansible Ansible on Centos7 43 [OK]
tutum/centos Centos image with SSH access. For the root... 13 [OK]
...
There you can see two example results: centos and ansible/centos7-ansible. The second result shows that it
comes from the public repository of a user, named ansible/, while the first result, centos, doesn’t explicitly
list a repository which means that it comes from the top-level namespace for Docker Official Images.
The / character separates a user’s repository from the image name.
Once you’ve found the image you want, you can download it with docker pull <imagename>:
$ docker pull centos
latest: Pulling from centos
6941bfcbbfca: Pull complete
41459f052977: Pull complete
fd44297e2ddb: Already exists
centos:latest: The image you are pulling has been verified. Important: image verification is a tech preview
feature and should not be relied on to provide security.
Digest: sha256:d601d3b928eb2954653c59e65862aabb31edefa868bd5148a41fa45004c12288
Status: Downloaded newer image for centos:latest
You now have an image from which you can run containers.
Starring Repositories
Your repositories can be starred and you can star repositories in return. Stars are a way to show that you
like a repository. They are also an easy way of bookmarking your favorites.

What is Amazon Elastic Container Service?


Amazon Elastic Container Service (Amazon ECS) is a highly scalable and fast container management
service. You can use it to run, stop, and manage containers on a cluster. With Amazon ECS, your
containers are defined in a task definition that you use to run an individual task or tasks within a service. In
this context, a service is a configuration that you can use to run and maintain a specified number of tasks
simultaneously in a cluster. You can run your tasks and services on a serverless infrastructure that's
managed by AWS Fargate. Alternatively, for more control over your infrastructure, you can run your tasks
and services on a cluster of Amazon EC2 instances that you manage.

Amazon ECS provides the following features:

1. A serverless option with AWS Fargate. With AWS Fargate, you don't need to manage servers,
handle capacity planning, or isolate container workloads for security. Fargate handles the
infrastructure management aspects of your workload for you. You can schedule the placement of
your containers across your cluster based on your resource needs, isolation policies, and
availability requirements.
2. Integration with AWS Identity and Access Management (IAM). You can assign granular
permissions for each of your containers. This allows for a high level of isolation when building
your applications. In other words, you can launch your containers with the security and compliance
levels that you've come to expect from AWS.
3. AWS managed container orchestration. As a fully managed service, Amazon ECS comes with
AWS configuration and operational best practices built in. This also means that you don't need to
manage the control plane, nodes, or add-ons. It's integrated with both AWS and third-party tools,
such as Amazon Elastic Container Registry and Docker. This integration makes it easier for teams
to focus on building the applications, not the environment.
4. Continuous integration and continuous deployment (CI/CD). This is a common process for
microservice architectures that are based on Docker containers. You can create a CI/CD pipeline
that takes the following actions:
o Monitors changes to a source code repository
o Builds a new Docker image from that source
o Pushes the image to an image repository such as Amazon ECR or Docker Hub
o Updates your Amazon ECS services to use the new image in your application (a sketch of
these steps appears after this list)
5. Support for service discovery. This is a key component of most distributed systems and
service-oriented architectures. With service discovery, your microservice components are
automatically discovered as they're created and terminated on a given infrastructure.
6. Support for sending your container instance log information to CloudWatch Logs. After you send
this information to Amazon CloudWatch, you can view the logs from your container instances in
one convenient location. This prevents your container logs from taking up disk space on your
container instances.
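A sketch of the pipeline's push-and-deploy steps using the AWS CLI (the account ID, region, and cluster/service names are placeholders):

# Build the image and tag it for an Amazon ECR repository
$ docker build -t myapp .
$ docker tag myapp 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

# Authenticate Docker to ECR, then push the image
$ aws ecr get-login-password --region us-east-1 | \
    docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
$ docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest

# Tell the ECS service to redeploy so tasks pick up the new image
$ aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment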

The AWS container services team maintains a public roadmap on GitHub. The roadmap contains
information about what the teams are working on and enables AWS customers to provide direct feedback.

Launch types

There are two models that you can use to run your containers:

● Fargate launch type - This is a serverless pay-as-you-go option. You can run containers without
needing to manage your infrastructure.
● EC2 launch type - Configure and deploy EC2 instances in your cluster to run your containers.

The Fargate launch type is suitable for the following workloads:

● Large workloads that need to be optimized for low overhead


● Small workloads that have occasional bursts
● Tiny workloads
● Batch workloads
The EC2 launch type is suitable for the following workloads:

● Workloads that require consistently high CPU core and memory usage
● Large workloads that need to be optimized for price
● Applications that need to access persistent storage
● Workloads where you must directly manage your infrastructure

Access Amazon ECS

You can create, access, and manage your Amazon ECS resources using any of the following interfaces:

● AWS Management Console — provides a web interface that you can use to access your Amazon
ECS resources.
● AWS Command Line Interface (AWS CLI) — Provides commands for a broad set of AWS services,
including Amazon ECS. It's supported on Windows, Mac, and Linux.
● AWS SDKs — Provides language-specific APIs and takes care of many of the connection details.
These include calculating signatures, handling request retries, and error handling. For more
information, see AWS SDKs.
● AWS Copilot — Provides an open-source tool for developers to build, release, and operate production
ready containerized applications on Amazon ECS. For more information, see AWS Copilot on the
GitHub website.
● Amazon ECS CLI — Provides a command line interface for you to run your applications on Amazon
ECS and AWS Fargate using the Docker Compose file format. You can quickly provision resources,
push and pull images using Amazon Elastic Container Registry, and monitor running applications on
Amazon ECS or Fargate. You can also test containers that are running locally along with containers in
the Cloud within the CLI. For more information, see Amazon ECS CLI on the GitHub website.
● AWS CDK — Provides an open-source software development framework that you can use to model
and provision your cloud application resources using familiar programming languages. The AWS
CDK provisions your resources in a safe, repeatable manner through AWS CloudFormation. For more
information, see Getting started with Amazon ECS using the AWS CDK.

Pricing

Amazon ECS pricing is dependent on whether you use AWS Fargate or Amazon EC2 infrastructure to
host your containerized workloads. When using Amazon ECS on AWS Outposts, the pricing follows the
same model that's used when you use Amazon EC2 directly.

Amazon ECS and Fargate also offer Savings Plans that provide significant savings based on your AWS
usage.

To view your bill, go to the Billing and Cost Management Dashboard in the AWS Billing and Cost
Management console. Your bill contains links to usage reports that provide additional details about your
bill.
Trusted Advisor is a service that you can use to help optimize the costs, security, and performance of your
AWS environment.

Container
Use containers to build, share, and run your applications: package software into standardized units
for development, shipment, and deployment.
A container is a standard unit of software that packages up code and all its dependencies so the application
runs quickly and reliably from one computing environment to another. A Docker container image is a
lightweight, standalone, executable package of software that includes everything needed to run an
application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and in the case of Docker containers – images become
containers when they run on Docker Engine. Available for both Linux and Windows-based applications,
containerized software will always run the same, regardless of the infrastructure. Containers isolate
software from its environment and ensure that it works uniformly despite differences for instance between
development and staging.
Docker containers that run on Docker Engine:
● Standard: Docker created the industry standard for containers, so they could be portable anywhere
● Lightweight: Containers share the machine’s OS system kernel and therefore do not require an OS per
application, driving higher server efficiencies and reducing server and licensing costs.
● Secure: Applications are safer in containers and Docker provides the strongest default isolation
capabilities in the industry

Docker Containers Are Everywhere:


Linux, Windows, Data center, Cloud, Serverless, etc. Docker container technology was launched in 2013
as an open source Docker Engine. It leveraged existing computing concepts around containers and
specifically in the Linux world, primitives known as cgroups and namespaces. Docker’s technology is
unique because it focuses on the requirements of developers and systems operators to separate application
dependencies from infrastructure. Success in the Linux world drove a partnership with Microsoft that
brought Docker containers and its functionality to Windows Server.
Technology available from Docker and its open source project Moby has been leveraged by all major data
center vendors and cloud providers. Many of these providers are leveraging Docker for their
container-native IaaS offerings. Additionally, the leading open source serverless frameworks utilize
Docker container technology.
Comparing Containers and Virtual Machines
Containers and virtual machines have similar resource isolation and allocation benefits, but function
differently because containers virtualize the operating system instead of hardware. Containers are more
portable and efficient.

CONTAINERS
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple
containers can run on the same machine and share the OS kernel with other containers, each running as
isolated processes in user space. Containers take up less space than VMs (container images are typically
tens of MBs in size), can handle more applications, and require fewer VMs and operating systems.

VIRTUAL MACHINES
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The
hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating
system, the application, necessary binaries and libraries – taking up tens of GBs. VMs can also be slow to
boot.
Containers and Virtual Machines Together
Containers and VMs used together provide a great deal of flexibility in deploying and managing apps.
Container Standards and Industry Leadership
The launch of Docker in 2013 jump started a revolution in application development – by democratizing
software containers. Docker developed a Linux container technology – one that is portable, flexible and
easy to deploy. Docker open sourced libcontainer and partnered with a worldwide community of
contributors to further its development. In June 2015, Docker donated the container image specification
and runtime code now known as runc, to the Open Container Initiative (OCI) to help establish
standardization as the container ecosystem grows and matures.
Following this evolution, Docker continues to give back with the containerd project, which Docker
donated to the Cloud Native Computing Foundation (CNCF) in 2017. containerd is an industry-standard
container runtime that leverages runc and was created with an emphasis on simplicity, robustness and
portability. containerd is the core container runtime of the Docker Engine.

Running Commands in a Container

To run a command in a particular directory of your container, use the --workdir flag to specify the
directory:
$ docker exec --workdir /tmp container-name pwd

Running commands inside a Docker container is straightforward.

A docker container is an isolated environment that usually contains a single application with all required
dependencies. Many times we need to run some commands inside a docker container. There are several
ways in which we can execute a command inside a container and get the required output.

Using Interactive Shell


We can directly access the shell of a container and execute our commands as in a normal Linux
terminal. To start a new container and land directly in an interactive shell, you can use:
$ docker run -it ubuntu bash
root@c520631f652d:/#
As you can see, we landed directly inside a new Ubuntu container where we can run our commands. If a
container is already running, you can use exec command as below.
First, let’s find out the container ID.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c2d969adde7a nginx "/docker-entrypoint.…" 2 hours ago Up 2 hours 127.0.0.1:80->80/tcp nginx
0960560bc24b mariadb "docker-entrypoint.s…" 2 hours ago Up 2 hours 127.0.0.1:3306->3306/tcp mariadb
And, then get inside container ID c2d969adde7a

$ docker exec -it c2d969adde7a bash


root@c2d969adde7a:/#
In the above output, you can observe that we started a bash session of nginx container which was in
running state. Here we can execute any supported command and get the output.
Note: your container may not have bash; if so, you can use sh instead.
For example:
docker exec -it c2d969adde7a sh
Direct Output
Often we just need the output of one or two commands and do not require a full-blown interactive session
for our task. You can run the required command inside a container and get its output directly without
opening a new shell session using exec command without -it flag.
Its syntax would be:
$ docker exec <container-id or name> <command>
Here’s an example:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c2d969adde7a nginx "/docker-entrypoint.…" 2 hours ago Up 2 hours 127.0.0.1:80->80/tcp nginx
0960560bc24b mariadb "docker-entrypoint.s…" 2 hours ago Up 2 hours 127.0.0.1:3306->3306/tcp mariadb

$ docker exec 0960560bc24b ps -ef | grep mysql


mysql 1 0 0 13:35 ? 00:00:02 mysqld
$
We executed the ps -ef | grep mysql command against the running MariaDB container and got the output directly. (Strictly speaking, ps -ef runs inside the container, while your host shell performs the grep on its output.)
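If you want the whole pipeline to run inside the container instead, wrap it in a shell there:

# sh -c makes the container's own shell interpret the pipe
$ docker exec 0960560bc24b sh -c "ps -ef | grep mysql"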

Dockerfile Way
This is not exactly a way of running commands inside a running container, but it can be helpful during
development or for initial deployment debugging. We can use the RUN instruction inside a
Dockerfile. Here's our sample Dockerfile:
FROM nginx:latest
RUN nginx -V
It simply pulls the latest nginx image from the registry and then runs the nginx -V command to display the
Nginx version when you build the image.
$ docker build -t nginx-test .
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM nginx:latest
---> 7ce4f91ef623
Step 2/2 : RUN nginx -V
---> Running in 43918bbbeaa5
nginx version: nginx/1.19.9
built by gcc 8.3.0 (Debian 8.3.0-6)
built with OpenSSL 1.1.1d 10 Sep 2019
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules
--conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx
--group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module
--with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module
--with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module
--with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module
--with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module
--with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2
-fdebug-prefix-map=/data/builder/debuild/nginx-1.19.9/debian/debuild-base/nginx-1.19.9=. -fstack-protector-strong -Wformat
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

How do you run multiple containers of the same image and expose different ports?
Suppose you have built a droplet with Ubuntu 18.04 and Docker CE and, for teaching purposes, you want
to run 18 instances of the same container based on a Docker image hosted on
Docker Hub: https://hub.docker.com/repository/docker/scienceparkstudygroup/master-gls

You would just need to adjust the -p argument and change the first part from 8787 to the new port that you
would like to use.

For example:

● Starting the first container on port 8787:

$ docker run --rm --name rstudio-1 -e PASSWORD=mypwd -p 8787:8787 scienceparkstudygroup/master-gls:openr-latest

● Starting the second container on port 8788:

$ docker run --rm --name rstudio-2 -e PASSWORD=mypwd -p 8788:8787 scienceparkstudygroup/master-gls:openr-latest

Note: Make sure to use different container names as well; otherwise it will not work.
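Rather than typing 18 commands, a small shell loop can start them all; this is a sketch in which the host ports 8787 to 8804 and the password are illustrative:

# Launch rstudio-1 .. rstudio-18 on host ports 8787..8804
$ for i in $(seq 1 18); do
    docker run --rm -d --name "rstudio-$i" -e PASSWORD=mypwd \
      -p "$((8786 + i)):8787" \
      scienceparkstudygroup/master-gls:openr-latest
  done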
