Curso de Docker 1

Contents

First contact with Docker
Installing Docker (for Ubuntu/Debian)
Starting our first containers
Working with Docker images and repositories
Building images from a Dockerfile
Introduction to Docker networking
Getting started with Docker (docs.docker.com)
Containers
Services
Swarms
Stacks (distributed applications as microservices)
Deploying the application: Docker EE vs CE
Networking in depth (docs.docker.com)
Network driver types
Using bridge networks
Differences between user-defined bridges and the default bridge
Manage a user-defined bridge
Connect a container to a user-defined bridge
Enable forwarding from Docker containers to the outside world
Use the default bridge network
Connect a container to the default bridge network
Using overlay networks
Create an overlay network
Encrypt traffic on an overlay network
Customize the default ingress network
Customize the docker_gwbridge interface
Publish ports on an overlay network
Bypass the routing mesh for a swarm service
Separate control and data traffic
Attach a standalone container to an overlay network
Container discovery
Using macvlan networks
Create a macvlan network
Bridge mode
802.1q trunk bridge mode
Use an ipvlan instead of macvlan
Use IPv6
Networking tutorials (docs.docker.com)
Bridge networks tutorial
Overlay networks tutorial
Use the default overlay network
Use an overlay network for standalone containers
Configure the daemon for IPv6
Docker and iptables
Add iptables policies before Docker's rules
Restrict connections to the Docker daemon
Prevent Docker from manipulating iptables
Container networking
Published ports
IP address and hostname
DNS services
Configure the Docker client
Use environment variables
Set the environment variables manually
Storage overview
Choose the right type of mount
More details about mount types
Good use cases for volumes
Good use cases for bind mounts
Tips for using bind mounts or volumes
Volumes
Choose the -v or --mount flag
Differences between -v and --mount behavior
Create and manage volumes
Start a container with a volume
Start a service with volumes
Use a read-only volume
Share data among machines
Use a volume driver
Initial set-up
Create a volume using a volume driver
Start a container which creates a volume using a volume driver
Create a service which creates an NFS volume
Backup, restore, or migrate data volumes
Backup a container
Restore container from backup
Remove volumes
Remove anonymous volumes
Remove all volumes
Bind mounts
Choose the -v or --mount flag
Differences between -v and --mount behavior
Start a container with a bind mount
Mount into a non-empty directory on the container
Use a read-only bind mount
Configure bind propagation
tmpfs mounts
Choose the --tmpfs or --mount flag
Differences between --tmpfs and --mount behavior
Use a tmpfs mount in a container
Specify tmpfs options
Storage driver types
Images and layers
Container and layers
Container size on disk
The copy-on-write (CoW) strategy
Sharing promotes smaller images
Copying makes containers efficient
Selecting a storage driver
Docker Engine - Enterprise and Docker Enterprise
Docker Engine - Community
Shared storage systems and the storage driver
Stability
Check your current storage driver
Using the AUFS storage driver
Using the BTRFS storage driver
Using the Devicemapper storage driver

First contact with Docker


Installing Docker (for Ubuntu/Debian)

Check the Linux kernel version.

uname -a

Install the linux-image-extra and linux-image-extra-virtual packages, which contain the aufs storage driver.

sudo apt-get install linux-image-extra-$(uname -r) \
    linux-image-extra-virtual

This failed for me because the package for kernel version 4.15.0-47-generic doesn't exist. I installed the highest available version, 4.15.0-15-generic.
Reboot.

sudo reboot

Add the prerequisite packages.

sudo apt-get install apt-transport-https ca-certificates curl \
    software-properties-common

Add Docker's GPG key.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add Docker's APT repository.

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Update the package sources.

sudo apt-get update

Install the Docker Community Edition.

sudo apt-get install docker-ce

Check that Docker is working.

sudo docker info

Enable forwarding by editing the "Uncomplicated Firewall (ufw)" configuration in /etc/default/ufw:

DEFAULT_FORWARD_POLICY="ACCEPT"

Reload the firewall configuration.

sudo ufw reload

Optional: change the Docker daemon's network configuration to bind to all host interfaces.

sudo dockerd -H tcp://0.0.0.0:2375

Optional: specify an alternative Unix socket path.

sudo dockerd -H unix:///home/docker/docker.sock
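
If you want the daemon to listen on both its default Unix socket and a TCP socket at the same time, several -H flags can be combined; a minimal sketch (the addresses and port are just examples):

# daemon side: listen on the default Unix socket and on TCP
sudo dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375

# client side: point the docker client at the TCP socket
docker -H tcp://127.0.0.1:2375 info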

Check that the Docker daemon is running.

sudo systemctl status docker

If it fails (upstart is not installed), do it via systemctl:

sudo systemctl restart docker

Update Docker.

sudo apt-get update

sudo apt-get install docker-ce
Starting our first containers

Run a container. -i keeps the container's STDIN open. -t allocates a pseudo-tty for the container being created. ubuntu is the container's base image. /bin/bash is the command we run in the container.

sudo docker run -i -t ubuntu /bin/bash

Create a container with a name.

sudo docker run --name bob_the_container -i -t ubuntu /bin/bash

Start a container.

sudo docker start bob_the_container

sudo docker start aa3f365f0f4e

List containers (including those that are not running).

sudo docker ps -a

Attach to a container.

sudo docker attach bob_the_container

Create a container as a daemon, i.e., detached.

sudo docker run --name daemon_dave -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

View the daemon's log.

sudo docker logs daemon_dave

Follow the log, like tail -f.

sudo docker logs -f daemon_dave

Enable syslog at the container level.

sudo docker run --log-driver="syslog" --name daemon_dwayne -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

Inspect the processes of a daemonized container.

sudo docker top daemon_dave

Show container statistics.

sudo docker stats daemon_dave daemon_dwayne

Run a background task inside a container.

sudo docker exec -d daemon_dave touch /etc/new_config_file

Run an interactive command inside a container.

sudo docker exec -t -i daemon_dave /bin/bash

Stop a daemonized container.

sudo docker stop daemon_dave


Automatically restart a container.

sudo docker run --restart=always --name daemon_alice -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"

Inspect a container.

sudo docker inspect daemon_alice

Selectively inspect a container.

sudo docker inspect --format='{{ .State.Running }}' daemon_alice

Inspect the container's IP address.

sudo docker inspect --format '{{.NetworkSettings.IPAddress}}' daemon_alice

Delete a container.

sudo docker rm 80430f8d0921

Delete all containers.

sudo docker rm -f `sudo docker ps -a -q`


Working with Docker images and repositories

List Docker images.

sudo docker images

Download an ubuntu image.

sudo docker pull ubuntu:16.04

Run a tagged Docker image.

sudo docker run -t -i --name new_container ubuntu:16.04 /bin/bash

By default the latest tag is used.

sudo docker run -t -i --name next_container ubuntu /bin/bash

Download a fedora image.

sudo docker pull fedora:21

To view specific images.

sudo docker images fedora

Search for images in the Docker repository.

sudo docker search puppet

Download the jamtur01/puppetmaster image.


sudo docker pull jamtur01/puppetmaster

Remove an image.

sudo docker rmi 371a2f0f080c

Create a container from the previous image.

sudo docker run -i -t jamtur01/puppetmaster /bin/bash

Log in to Docker Hub.

sudo docker login

Log out.

sudo docker logout

Create a container that we will modify later.

sudo docker run -i -t ubuntu /bin/bash

Add the Apache package.

apt-get -yqq update

apt-get -y install apache2

service apache2 start

exit

Commit the container (later on, a Dockerfile is the better approach).

sudo docker commit 4aab3ce3cb76 rbcost/apache2


Another commit, with more information.

sudo docker commit -m "A new custom image" -a "Antonio Salazar" 3567d143b61d rbcost/apache2:webserver

sudo docker inspect rbcost/apache2:webserver

Run a container from our new image.

sudo docker run -t -i rbcost/apache2:webserver /bin/bash

service apache2 status


Building images from a Dockerfile

Create a sample directory. This directory is our build environment, which is what Docker calls a context or build context. Docker will upload the build context, as well as any files and directories contained in it, to our Docker daemon when the build is run. This provides the Docker daemon with direct access to any code, files or other data you might want to include in the image.

Each instruction adds a new layer to the image and then commits the image. When executing the instructions, Docker roughly follows this workflow:

• Docker runs a container from the image.

• An instruction executes and makes a change to the container.

• Docker runs the equivalent of docker commit to commit a new layer.

• Docker then runs a new container from this new image.

• The next instruction in the file is executed, and the process repeats until all instructions have
been executed.

sudo mkdir static_web

cd static_web

sudo touch Dockerfile

Our first Dockerfile (file contents):

# Version: 0.0.1

FROM ubuntu:16.04

RUN apt-get update; apt-get install -y nginx

RUN echo 'Hi, I am in your container'> /var/www/html/index.html

EXPOSE 80

Build from the Dockerfile.

sudo docker build -t="rbcost/static_web" .

The build can be tagged.


sudo docker build -t="rbcost/static_web:v1" .

Build from a Git repository.

sudo docker build -t="rbcost/static_web:fromGitv1" github.com/turnbullpress/docker-static_web

Handling a failure in an instruction. Edit the Dockerfile and change "nginx" to "ngin".

cd static_web

sudo nano Dockerfile

# Make the change and save.

sudo docker build -t="rbcost/static_web" .

# An error occurs because the package is not found.

Suppose we want to debug this failure. We can use docker run to create a container from the last step in the build that worked, using the ID of the image that was created last.

sudo docker run -t -i 997485f46ec4 /bin/bash

# We can try running apt-get install with the correct package
# name, or do some other debugging to determine what went
# wrong. Exit the container, fix the Dockerfile, and retry
# the build.

Bypassing the Dockerfile build cache. Useful to ensure everything is built from scratch.

sudo docker build --no-cache -t="rbcost/static_web" .


FROM ubuntu:16.04

MAINTAINER Antonio Salazar [email protected]

ENV REFRESHED_AT 2016-07-01

RUN apt-get -qq update

I’ve specified the ENV instruction to set an environment variable called REFRESHED_AT,
showing when the template was last updated. Lastly, I’ve specified the apt-get -qq update
command in a RUN instruction. This refreshes the APT package cache when it’s run, ensuring
that the latest packages are available to install.

With my template, when I want to refresh the build, I change the date in my ENV instruction. Docker then resets the cache when it hits that ENV instruction and runs every subsequent instruction anew, without relying on the cache.

Using the docker history command on an image.


sudo docker history 22d47c8cb6e5

Docker has two methods of assigning ports on the Docker host: it can randomly assign a high port from the range 32768 to 61000 that maps to port 80 on the container, or you can specify a specific port on the Docker host that maps to port 80 on the container.

The docker run command will open a random port on the Docker host that will connect to port
80 on the Docker container.

sudo docker run -d -p 80 --name static_web rbcost/static_web nginx -g "daemon off;"

To see the assigned port we can use the docker ps command.

sudo docker ps -l

sudo docker port 6751b94bb5c0 80

Set the port mapping when running the container.


sudo docker run -d -p 80:80 --name static_web_80 rbcost/static_web \

nginx -g "daemon off;"

Binding to another port.

sudo docker run -d -p 8080:80 --name static_web_8080 rbcost/static_web \

nginx -g "daemon off;"

Binding to a specific interface.

sudo docker run -d -p 127.0.0.1:80:80 --name static_web_lb \

rbcost/static_web nginx -g "daemon off;"

CMD and ENTRYPOINT.

# Version: 0.0.1

FROM ubuntu:16.04

ENV REFRESHED_AT 2016-07-01

RUN apt-get -qq update

RUN apt-get install -y nginx

RUN echo 'Hi, I am in your container' > /var/www/html/index.html

EXPOSE 80

ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]

Build the image.

sudo docker build -t="rbcost/static_web" .

Launch the container.

sudo docker run -d -p 80:80 --name static_web_80_ENTRY rbcost/static_web


Now mixing ENTRYPOINT with arguments passed when launching the container.

# Version: 0.0.1

FROM ubuntu:16.04

ENV REFRESHED_AT 2016-07-01

RUN apt-get -qq update

RUN apt-get install -y nginx

RUN echo 'Hi, I am in your container' > /var/www/html/index.html

EXPOSE 80

ENTRYPOINT ["/usr/sbin/nginx"]

sudo docker build -t="rbcost/static_web" .

sudo docker run -i -t -p 80:80 --name static_web_80_ENTRY_2 \

rbcost/static_web "-g daemon off;"

Now with ENTRYPOINT and CMD as default parameters.

# Version: 0.0.1

FROM ubuntu:16.04

ENV REFRESHED_AT 2016-07-01

RUN apt-get -qq update

RUN apt-get install -y nginx

RUN echo 'Hi, I am in your container' > /var/www/html/index.html

EXPOSE 80

ENTRYPOINT ["/usr/sbin/nginx"]

CMD ["-h"]

sudo docker build -t="rbcost/static_web" .

Now when we launch a container, any option we specify will be passed to the Nginx daemon; for example, we could specify -g "daemon off;" as we did above to run the daemon in the foreground.
If we don't specify anything to pass to the container, then the -h is passed by the CMD instruction and returns the Nginx help.

sudo docker run -i -t -p 80:80 --name static_web_80_ENTRY_2 rbcost/static_web

The WORKDIR instruction provides a way to set the working directory for the container and
the ENTRYPOINT and/or CMD to be executed when a container is launched from the image.

WORKDIR /opt/webapp/db

RUN bundle install

WORKDIR /opt/webapp

ENTRYPOINT [ "backup" ]

Using an environment variable in other Dockerfile instructions.

ENV TARGET_DIR /opt/app

WORKDIR $TARGET_DIR

The ENV instruction is used to set environment variables during the image build process. For
example:

ENV RVM_PATH /home/rvm/

These environment variables will also be persisted into any containers created from your
image. So, if we were to run the env command in a container built with the ENV RVM_PATH
/home/rvm/ instruction we’d see:

root@bf42aadc7f09:~# env

. . .
RVM_PATH=/home/rvm/

. . .

You can also pass environment variables on the docker run command line using the -e flag.
These variables will only apply at runtime, for example:

sudo docker run -ti -e "WEB_PORT=8080" ubuntu env

HOME=/

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

HOSTNAME=792b171c5e9f

TERM=xterm

WEB_PORT=8080

The USER instruction specifies a user that the image should be run as; for example:

USER nginx

This will cause containers created from the image to be run by the nginx user.

You can also override this at runtime by specifying the -u flag with the docker run command.

The default user if you don’t specify the USER instruction is root.
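
A quick sketch of that runtime override, using the stock ubuntu image (the nobody user already exists in that image):

# run the container's command as nobody instead of the default root
sudo docker run --rm -u nobody ubuntu whoami

# prints: nobody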

The VOLUME instruction adds volumes to any container created from the image. A volume is a
specially designated directory within one or more containers that bypasses the Union File
System to provide several useful features for persistent or shared data:

• Volumes can be shared and reused between containers.

• A container doesn’t have to be running to share its volumes.

• Changes to a volume are made directly.

• Changes to a volume will not be included when you update an image.

• Volumes persist until no containers use them.


You can use the VOLUME instruction like so:

VOLUME ["/opt/project"]
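
To see the volume Docker creates for such an image, you can query the container's Mounts data; a sketch, assuming a hypothetical image called rbcost/volume_test built with the VOLUME instruction above:

# start a throwaway container from the hypothetical image
sudo docker run -d --name vol_test rbcost/volume_test sleep 60

# show where Docker placed the /opt/project volume on the host
sudo docker inspect --format '{{ json .Mounts }}' vol_test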

The ADD instruction adds files and directories from our build environment into our image, like
so:

ADD software.lic /opt/application/software.lic

This ADD instruction will copy the file software.lic from the build directory to
/opt/application/software.lic in the image.

ADD http://wordpress.org/latest.zip /root/wordpress.zip

ADD latest.tar.gz /var/www/wordpress/

If a tar archive (valid archive types include gzip, bzip2, xz) is specified as the source file, then
Docker will automatically unpack it for you.

The COPY instruction is closely related to the ADD instruction. The key difference is that the
COPY instruction is purely focused on copying local files from the build context and does not
have any extraction or decompression capabilities.

COPY conf.d/ /etc/apache2/

The LABEL instruction adds metadata to a Docker image. The metadata is in the form of
key/value pairs. We recommend combining all your metadata in a single LABEL instruction to
save creating multiple layers with each piece of metadata.

You can inspect the labels on an image using the docker inspect command.

LABEL version="1.0"

LABEL location="New York" type="Data Center" role="Web Server"
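
For instance, to read back only the labels of an image (the image name here is just an example), the inspect output can be filtered with a format string:

# print only the labels map of the image's configuration
sudo docker inspect --format '{{ json .Config.Labels }}' rbcost/apache2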


The STOPSIGNAL instruction sets the system call signal that will be sent to the container when
you tell it to stop. This signal can be a valid number from the kernel syscall table, for instance
9, or a signal name in the format SIGNAME, for instance SIGKILL.
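
For example, a minimal usage (SIGTERM is the usual choice for a graceful shutdown):

STOPSIGNAL SIGTERM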

The ARG instruction defines variables that can be passed at build-time via the docker build
command. This is done using the --build-arg flag. You can only specify build-time arguments
that have been defined in the Dockerfile.

ARG build

ARG webapp_user=user

The second ARG instruction sets a default; if no value is specified for the argument at build time, then the default is used.

docker build --build-arg build=1234 -t jamtur01/webapp .

Docker has a set of predefined ARG variables that you can use at build-time without a
corresponding ARG instruction in the Dockerfile:

HTTP_PROXY

http_proxy

HTTPS_PROXY

https_proxy

FTP_PROXY

ftp_proxy

NO_PROXY

no_proxy

To use these predefined variables, pass them using the --build-arg <variable>=<value> flag to the docker build command.
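
For example, to build behind a proxy without touching the Dockerfile (the proxy address is hypothetical):

docker build --build-arg HTTP_PROXY=http://proxy.example.com:3128 -t jamtur01/webapp .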

The SHELL instruction allows the default shell used for the shell form of commands to be
overridden. The default shell on Linux is ["/bin/sh", "-c"] and on Windows is ["cmd", "/S",
"/C"].
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working correctly. It contains options followed by the command you wish to run, separated by the CMD keyword.

HEALTHCHECK --interval=10s --timeout=1m --retries=5 \

CMD curl http://localhost || exit 1

We can see the state of the health check using the docker inspect command.

sudo docker inspect --format '{{.State.Health.Status}}' static_web

healthy

The ONBUILD instruction adds triggers to images. A trigger is executed when the image is used
as the basis of another image (e.g., if you have an image that needs source code added from a
specific location that might not yet be available, or if you need to execute a build script that is
specific to the environment in which the image is built).

The trigger inserts a new instruction in the build process, as if it were specified right after the
FROM instruction. The trigger can be any build instruction. For example:

ONBUILD ADD . /app/src

ONBUILD RUN cd /app/src; make

This would add an ONBUILD trigger to the image being created, which we see when we run
docker inspect on the image:

sudo docker inspect 508efa4e4bf8

...

"OnBuild": [

"ADD . /app/src",

"RUN cd /app/src/; make"

]
...

For example, we’ll build a new Dockerfile for an Apache2 image that we’ll call rbcost/apache2.

FROM ubuntu:16.04

RUN apt-get update; apt-get install -y apache2

ENV APACHE_RUN_USER www-data

ENV APACHE_RUN_GROUP www-data

ENV APACHE_LOG_DIR /var/log/apache2

ONBUILD ADD . /var/www/

EXPOSE 80

ENTRYPOINT ["/usr/sbin/apache2"]

CMD ["-D", "FOREGROUND"]

sudo docker build -t="rbcost/apache2" .

The image adds (copies) the files from the context directory when the build runs.

Now we create another Dockerfile. Note the FROM.

FROM rbcost/apache2

ENV APPLICATION_NAME webapp

ENV ENVIRONMENT development

sudo docker build -t="rbcost/webapp" .

Note the message showing that a trigger fired during the build. This lets all local files be copied into the container's www directory, and the image could be used as a template.

Delete all images.


sudo docker rmi `sudo docker images -a -q`

Push a Docker image.

# Version: 0.0.1

FROM ubuntu:16.04

ENV REFRESHED_AT 2016-07-01

RUN apt-get -qq update

RUN apt-get install -y nginx

RUN echo 'Hi, I am in your container' > /var/www/html/index.html

EXPOSE 80

ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]

sudo docker build -t="rbcost/static_web" .

sudo docker push rbcost/static_web:latest

Check the repository on Docker Hub.

Creating our own (local) registry

docker run -d -p 5000:5000 --name registry registry:2

List the images and copy the ID of one of them. Now we'll tag it for a repository.

sudo docker tag 4ad57fb7a110 localhost:5000/rbcost/static_web

Replace localhost with the DNS name of the repository server.

sudo docker push localhost:5000/rbcost/static_web

Try launching the container from the new repository.


sudo docker run -d -p 80:80 --name static_web \
    localhost:5000/rbcost/static_web
Introduction to Docker networking

Create a directory for our sample website Dockerfile.

mkdir sample

cd sample

Download some nginx configuration files.

wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/nginx/global.conf

wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/nginx/nginx.conf

Dockerfile for the website.

FROM ubuntu:16.04

ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update; apt-get -yqq install nginx

RUN mkdir -p /var/www/html/website

ADD global.conf /etc/nginx/conf.d/

ADD nginx.conf /etc/nginx/nginx.conf

EXPOSE 80

In nginx.conf, the daemon off; option stops Nginx from going into the background and forces it to run in the foreground. This is because Docker containers rely on the running process inside them to remain active.

By default, Nginx daemonizes itself when started, which would cause the container to run briefly and then stop once the daemon was forked and launched and the original process that forked it exited.
Build the nginx image.

sudo docker build -t rbcost/nginx .

Show the history of the Nginx image. The output shows the layers, most recent first.

sudo docker history rbcost/nginx

Download the sample website (inside the sample directory).

mkdir website; cd website

wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/website/index.html

cd ..

Run the container.

sudo docker run -d -p 80 --name website -v $PWD/website:/var/www/html/website:ro rbcost/nginx nginx

You can see we’ve passed the nginx command to docker run. Normally this wouldn’t make
Nginx run interactively. In the configuration we supplied to Docker, though, we’ve added the
directive daemon off. This directive causes Nginx to run interactively in the foreground when
launched.

The -v option is new. It allows us to create a volume in our container from a directory on the host.

Use Volumes when:

• We want to work on the code and test it simultaneously.

• It changes frequently, and we don’t want to rebuild the image during our development
process.

• We want to share the code between multiple containers.


The -v option works by specifying a directory or mount on the local host, separated from the directory on the container by a ':'.

If the container directory doesn’t exist Docker will create it. We can also specify the read/write
status of the container directory by adding either rw or ro after that directory.

Check the container's service port and connect with a browser.

Edit index.html and change the text.

cd sample

sudo nano website/index.html

Refresh the browser.

We're going to create a Sinatra application. Sinatra is a Ruby-based web application framework; it contains a web application library and a simple Domain Specific Language, or DSL, for creating web applications.

(from the sample directory)

mkdir -p sinatra

cd sinatra

Create the following Dockerfile.

FROM ubuntu:16.04

ENV REFRESHED_AT 2016-06-01

RUN apt-get update -yqq; apt-get -yqq install ruby ruby-dev build-essential redis-tools

RUN gem install --no-rdoc --no-ri sinatra json redis

RUN mkdir -p /opt/webapp

EXPOSE 4567

CMD ["/opt/webapp/bin/webapp"]
We have used the gem binary to install the sinatra, json, and redis gems. The sinatra and json
gems contain Ruby’s Sinatra library and support for JSON.

We'll use the redis gem a little later to integrate with a Redis database.

We’ve also created a directory to hold our new web application and exposed the default
WEBrick port of 4567.

Finally, we’ve specified a CMD of /opt/webapp/bin/webapp, which will be the binary that
launches our web application.

Build the Sinatra image.

sudo docker build -t rbcost/sinatra .

Download the Sinatra web application.

cd sinatra

There is an erratum in the book here, because the following command fails with a Not Found error.

wget --cut-dirs=3 -nH -r --reject Dockerfile,index.html --no-parent \
    http://dockerbook.com/code/5/sinatra/webapp/

What I did instead was download the book's examples from GitHub (https://github.com/turnbullpress/dockerbook-code) into the Downloads folder and copy the files manually.

cd ~/Downloads

unzip dockerbook-code-master.zip

cd dockerbook-code-master

cd code

cd 5

cd sinatra

cd webapp

sudo cp -r bin/ ~/sample/sinatra/webapp/bin/

sudo cp -r lib/ ~/sample/sinatra/webapp/lib/

Let’s quickly look at the core of the webapp source code contained in the
sinatra/webapp/lib/app.rb file.

This is a simple application that converts any parameters posted to the /json endpoint to JSON
and displays them.

Now let’s launch a new container from our image using the docker run command. To launch
we should be inside the sinatra directory because we’re going to mount our source code into
the container using a volume.

sudo docker run -d -p 4567 --name webapp -v $PWD/webapp:/opt/webapp rbcost/sinatra

We have mounted $PWD/webapp into the /opt/webapp directory that was created in the Dockerfile.

We’ve not provided a command to run on the command line; instead, we’re using the
command we specified via the CMD instruction in the Dockerfile of the image (CMD
["/opt/webapp/bin/webapp"])

Check the sinatra container's logs (and check the ports).

sudo docker logs webapp

sudo docker ps -a

sudo docker top webapp

sudo docker port webapp

Right now, our basic Sinatra application doesn't do much. It just takes incoming parameters, turns them into JSON, and then outputs them. We can now use the curl command to test our application (mind the port).

curl -i -H 'Accept: application/json' -d 'name=Foo&status=Bar' http://localhost:49160/json

We're going to extend our Sinatra application now by adding a Redis back end and storing our incoming URL parameters in a Redis database. We need to download a new version of the Sinatra app.

cd ~/Downloads

cd dockerbook-code-master

cd code

cd 5

cd sinatra

cd webapp_redis

sudo cp -r bin/ ~/sample/sinatra/webapp_redis/bin/

sudo cp -r lib/ ~/sample/sinatra/webapp_redis/lib/

Let’s look at its core code in lib/app.rb now (Editarlo con nano). We now create a connection
to a Redis database on a host called db on port 6379. We also post our parameters to that
Redis database and then get them back from it when required.

We’re going to create a new image. Let’s create a directory, redis inside our sinatra directory,
to hold any associated files we’ll need for the Redis container build.

mkdir redis

cd redis

Create the following Dockerfile.

FROM ubuntu:16.04

ENV REFRESHED_AT 2016-06-01

RUN apt-get -yqq update; apt-get -yqq install redis-server redis-tools

EXPOSE 6379

ENTRYPOINT ["/usr/bin/redis-server"]

CMD []

Build the image.
sudo docker build -t rbcost/redis .

Create the container. Note that we've specified the -p flag to publish port 6379.

sudo docker run -d -p 6379 --name redis rbcost/redis

See which host port it is mapped to.

sudo docker port redis 6379

It's published on 32773. Let's try connecting to the instance.

sudo apt-get -y install redis-tools

Connect with the client.

redis-cli -h 127.0.0.1 -p 32773

quit

Let’s now update our Sinatra application to connect to Redis and store our incoming
parameters. We’re going to need to be able to talk to the Redis server. There are two ways we
could do this using:

• Docker’s own internal network.

• From Docker 1.9 and later, using Docker Networking and the docker network command.

So which method should I choose? Well the first method, Docker’s internal network, is not an
overly flexible or powerful solution. We don’t recommend it as a solution for connecting
containers.

The more realistic method for connecting containers is Docker Networking.

• Docker Networking can connect containers to each other across different hosts.
• Containers connected via Docker Networking can be stopped, started or restarted without
needing to update connections.

• With Docker Networking you don’t need to create a container before you can connect to it.
You also don’t need to worry about the order in which you run containers and you get
internal container name resolution and discovery inside the network.

The first method involves Docker’s own network stack. So far, we’ve seen Docker containers
exposing ports and binding interfaces so that container services are published on the local
Docker host’s external network (e.g., binding port 80 inside a container to a high port on the
local host). In addition to this capability, Docker has a facet we haven’t yet seen: internal
networking.

Every Docker container is assigned an IP address, provided through an interface created when
we installed Docker. That interface is called docker0. Let’s look at that interface on our Docker
host now.

ifconfig docker0

The docker0 interface is a virtual Ethernet bridge that connects our containers and the local host network. If we look further at the other interfaces on our Docker host (run the ifconfig command), we'll find a series of interfaces starting with veth.

Every time Docker creates a container, it creates a pair of peer interfaces that are like opposite
ends of a pipe (i.e., a packet sent on one will be received on the other).

It gives one of the peers to the container to become its eth0 interface and keeps the other
peer, with a unique name like vethec6a, out on the host machine.

You can think of a veth interface as one end of a virtual network cable. One end is plugged into
the docker0 bridge, and the other end is plugged into the container.

By binding every veth* interface to the docker0 bridge, Docker creates a virtual subnet shared
between the host machine and every Docker container.

Let’s look inside a container now and see the other end of this pipe.

sudo docker run -t -i ubuntu /bin/bash

Inside the container


apt-get update

apt-get install -y net-tools

ifconfig

We see that Docker has assigned an IP address, 172.17.0.29, to our container, which is peered with a virtual interface on the host side.

From inside the container, run a traceroute to see the routing.

apt-get -yqq update; apt-get install -yqq traceroute

traceroute www.google.es

But there’s one other piece of Docker networking that enables this connectivity: firewall rules
and NAT configuration allow Docker to route between containers and the host network.

Exit out of our container and let’s look at the IPTables NAT configuration on our Docker host.

sudo iptables -t nat -L -n

We can also get network information from the docker inspect command by looking for NetworkSettings.

sudo docker inspect redis

So, while this initially looks like it might be a good solution for connecting our containers together, sadly, this approach has two big rough edges: firstly, we'd need to hard-code the IP address of our Redis container into our applications; secondly, if we restart the container, Docker may change its IP address.

Let’s see this now using the docker restart command (we’ll get the same result if we kill our
container using the docker kill command).

sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis


Note the IP address returned.

sudo docker restart redis

sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' redis

The IP may be different, and the sinatra container will no longer be able to connect to its redis database.

Docker Networking allows you to set up your own networks through which containers can communicate. Essentially this supplements the existing docker0 network with new, user-managed networks. Importantly, containers can now communicate with each other across hosts, and your networking configuration can be highly customizable.

To use Docker networks we first need to create a network and then launch a container inside
that network.

sudo docker network create app

This uses the docker network command to create a bridge network called app. A network ID is
returned for the network.

We can then inspect this network using the docker network inspect command.

sudo docker network inspect app

Our new network is a local, bridged network, much like our docker0 network, and currently no containers are running inside it.

In addition to bridge networks, which exist on a single host, we can also create overlay
networks, which allow us to span multiple hosts.

You can list all current networks using the docker network ls command (networks are removed with the docker network rm command).

sudo docker network ls

Create a redis container in our new network.


sudo docker run -d --net=app --name db rbcost/redis

sudo docker network inspect app

Now let’s add a container to the network we’ve created. To do this we need to be back in the
sinatra directory.

sudo docker run -p 4567 --net=app --name network_test -t -i rbcost/sinatra /bin/bash

We’ve launched it interactively so we can peek inside to see what’s happening. As the
container has been started inside the app network, Docker will have taken note of all other
containers running inside that network and populated their addresses in local DNS. Let’s see
this now in the network_test container.

apt-get install -y dnsutils iputils-ping

nslookup db

We see that using the nslookup command to resolve the db container it returns the IP
address: 172.18.0.2.

A Docker network will also add the app network as a domain suffix for the network, any host
in the app network can be resolved by hostname.app, here db.app.

ping db.app

In our case we just need the db entry to make our application function. To make that work our
webapp’s Redis connection code already uses the db hostname.

redis = Redis.new(:host => 'db', :port => '6379')

We could now start our application and have our Sinatra application write its variables into
Redis via the connection between the db and webapp containers that we’ve established via
the app network.
Let’s try it now by exiting the network_test container and starting up a new container running
our Redis-enabled web application.

sudo docker run -d -p 4567 --net=app --name webapp_redis -v $PWD/webapp_redis:/opt/webapp rbcost/sinatra

sudo docker port webapp_redis 4567

curl -i -H 'Accept: application/json' -d 'name=Foo&status=Bar' \
    http://localhost:32776/json

curl -i http://localhost:32776/json

"[{\"name\":\"Foo\",\"status\":\"Bar\"}]"

You can also add already running containers to existing networks using the docker network
connect command. So we can add an existing container to our app network. Let’s say we have
an existing container called db2 that also runs Redis.

sudo docker run -d --name db2 rbcost/redis

Let’s add that to the app network (we could have also used the --net flag to automatically add
the container to the network at runtime).

sudo docker network connect app db2

sudo docker network inspect app

We can also disconnect a container from a network using the docker network disconnect
command.

sudo docker network disconnect app db2

Containers can belong to multiple networks at once so you can create quite complex
networking models.
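
As a quick sketch of that, we could attach the db2 container to a second, hypothetical network called app2 and then list every network it belongs to:

sudo docker network create app2

sudo docker network connect app2 db2

# print the names of all networks the container is attached to
sudo docker inspect --format '{{ range $net, $conf := .NetworkSettings.Networks }}{{ $net }} {{ end }}' db2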
Getting started with Docker (docs.docker.com)
Containers

sudo nano /etc/hostname, sudo nano /etc/hosts → change the machine's name…

docker ps → list running containers.

docker --version → Docker version.

docker info or docker version (without --) → more details about the version.

docker image ls → list the images downloaded on the machine.

docker container ls → show running containers.

docker container ls --all → show all containers, including finished ones.

("requirements.txt" and "app.py" are two files that are copied to the context directory)

# Use an official Python runtime as a parent image
FROM python:2.7-slim
# Set the working directory to /app (inside the container)
WORKDIR /app
# Copy the current directory contents (on the host) into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container. Exposes port 80
# of the container (the port must be mapped when the container is run)
EXPOSE 80
# Define environment variable
ENV NAME World
# Run app.py when the container launches
CMD ["python", "app.py"]

docker build --tag=friendlyhello . → build an image from a Dockerfile


docker image ls → show images.

docker images → the same.

docker run -p 4000:80 friendlyhello → run a container. The port inside the container is 80; on the host it is 4000.

docker run -d -p 4000:80 friendlyhello → run the container as a service (daemon).

docker container ls → show running containers.

docker container ps → the same.

docker container stop 1fa4ab2cf395 → stop a running container.

docker login → log in to the registry (Docker Hub).

The notation for associating a local image with a repository on a registry is username/repository:tag

docker tag friendlyhello gordon/get-started:part2 → tag a local image in the format above.

docker push username/repository:tag → upload the tagged image to the registry repository.
Services

Installing Docker Compose:

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

sudo chmod +x /usr/local/bin/docker-compose
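
Check that the binary is installed and on the PATH:

docker-compose --version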

docker-compose.yml

version: "3"

services:

web:

# replace username/repo:tag with your name and image details

image: rbcost/get-started:part2

deploy:

replicas: 5

resources:

limits:

cpus: "0.1"

memory: 50M

restart_policy:

condition: on-failure

ports:

- "4000:80"

networks:

- webnet

networks:

webnet:

docker swarm init → initialize the swarm

docker stack deploy -c docker-compose.yml getstartedlab → start the application with 5 containers in the service, which are called tasks. ("getstartedlab" is the name of the "stack")

docker container ls → shows the service's 5 containers running, plus any others

docker stack services getstartedlab → another way to see it.


A single container running in a service is called a task. Tasks are given unique IDs that
numerically increment, up to the number of replicas you defined in docker-compose.yml.

docker service ps getstartedlab_web → list the service's tasks. Task names have the form "stackOrAppName_serviceName.x", e.g. "getstartedlab_web.4"

You can run curl -4 http://localhost:4000 several times in a row, or go to that URL in your browser and hit refresh a few times. Either way, the container ID changes, demonstrating the load-balancing; with each request, one of the 5 tasks is chosen, in a round-robin fashion, to respond.

docker stack ls → list the "stacks"

docker stack ps getstartedlab → see all the "tasks" of our "stack".

Edit docker-compose.yml and change the number of replicas.

docker stack deploy -c docker-compose.yml getstartedlab → scale the stack to the new replicas.

docker stack rm getstartedlab → stop the application

docker swarm leave --force → stop the "swarm" (--force on the last manager destroys the swarm)

Summary

docker stack ls # List stacks or apps

docker stack deploy -c <composefile> <appname> # Run the specified Compose file

docker service ls # List running services associated with an app

docker service ps <service> # List tasks associated with an app

docker inspect <task or container> # Inspect task or container

docker container ls -q # List container IDs

docker stack rm <appname> # Tear down an application

docker swarm leave --force # Take down a single node swarm from the manager
Swarms

A swarm is a group of machines that are running Docker and joined into a cluster. After
that has happened, you continue to run the Docker commands you’re used to, but now they
are executed on a cluster by a swarm manager. The machines in a swarm can be physical
or virtual. After joining a swarm, they are referred to as nodes.

Swarm managers can use several strategies to run containers, such as “emptiest node” --
which fills the least utilized machines with containers. Or “global”, which ensures that each
machine gets exactly one instance of the specified container. You instruct the swarm
manager to use these strategies in the Compose file, just like the one you have already
been using.

Swarm managers are the only machines in a swarm that can execute your commands or
authorize other machines to join the swarm as workers. Workers are just there to provide
capacity and do not have the authority to tell any other machine what it can and cannot do.

We should use the docker-machine command to manage the hypervisors, but they need to be Windows Server 2016 or Windows 10, so I do this by hand.

Docker-01 is the first node; Docker-02 the second.

(on Docker-01)
docker swarm init --advertise-addr 192.168.1.111 → initialize the swarm (the default port is 2377; it can be changed)

(on Docker-02)
docker swarm join --token SWMTKN-1-2ka6mwrya7e4uy9m370mz3vxlepdjgv7gfxi7fwcjggeu2iyk2-dliwpzlirieikrcnl4pycklao 192.168.1.111:2377 → turn the node into a worker

(on Docker-01)
docker node ls → list the swarm's nodes
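
If the worker token is lost, it can be printed again on the manager with the standard join-token subcommand:

(on Docker-01)
docker swarm join-token worker → reprints the complete docker swarm join command, token included, for adding workers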

Deploy the app on the swarm manager (without docker-machine)

(on Docker-01)
docker stack deploy -c docker-compose.yml getstartedlab → deploy the stack.
The Compose file deploys 5 containers in the service.
docker stack ps getstartedlab → see how the service's tasks are spread across the two Docker nodes.

Test that it works by connecting with a browser to Docker-01's IP and to Docker-02's IP.

The reason both IP addresses work is that nodes in a swarm participate in an ingress routing mesh. This ensures that a service deployed at a certain port within your swarm always has that port reserved to itself, no matter what node is actually running the container. Here's a diagram of how a routing mesh for a service called my-web published at port 8080 on a three-node swarm would look:

docker stack rm getstartedlab → tear down (remove) the stack

I RECREATE THE SWARM SCENARIO ON MY WINDOWS 10 MACHINE.
(The reason is that with Avante's Hyper-V hosts I can't use docker-machine, because WS 2016 is required)

Download Docker for Windows:

When installing Docker for Windows on Hyper-V, a virtual machine called MobyLinuxVM is created, which is the Docker engine.

There's no need to install Compose, because it comes with the Docker for Windows installation.
Run from PowerShell as administrator.
docker-machine create -d hyperv --hyperv-virtual-switch "Red Local" myvm1 → this command deploys a 1 GB RAM VM in Hyper-V. Here's the output.

docker-machine create -d hyperv --hyperv-virtual-switch "Red Local" myvm2 → create another VM.

docker-machine ls → list the VMs with their IP addresses.

The first machine acts as the manager, which executes management commands and
authenticates workers to join the swarm, and the second is a worker.

docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.1.25" → initialize the swarm

Add VM2 as a worker node to the swarm.

docker-machine ssh myvm2 "docker swarm join --token SWMTKN-1-0pajwir5kk4ias0w1f4lpcp0z7mkpr6h9w6mfnirhonsnzc0lz-iwomxaqy4v951nq93skrj933 192.168.1.25:2377"

Run on the manager to list the nodes.

docker-machine ssh myvm1 "docker node ls"

Now configure a docker-machine shell against the VM that manages the swarm.

docker-machine env myvm1 → shows the commands to run to achieve this,

which boils down to this:

& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env myvm1 | Invoke-Expression

From now on, the shell is connected to myvm1, the swarm manager, and commands run directly against it.

docker-machine ls

We recreate docker-compose.yml in a folder. These would be the commands:

cd C:\Users\Antonio\Desktop

mkdir compose-test

cd .\compose-test

notepad .\docker-compose.yml

Paste the text of docker-compose.yml and save the file.

docker login -u rbcost -p XXXXXXXXX → log in to the registry

docker stack deploy -c docker-compose.yml getstartedlab → deploy the stack

docker stack ps getstartedlab → see the stack's tasks


You can access your app from the IP address of either myvm1 or myvm2.

docker stack rm getstartedlab → tear down the stack

To disconnect from the docker-machine session…

& "C:\Program Files\Docker\Docker\Resources\bin\docker-machine.exe" env -u | Invoke-Expression

Managing the VMs:

docker-machine stop myvm2 → shut down the VM

docker-machine start myvm2 → power it on.

FROM NOW ON you can use docker-machine with the shell connected to a manager node, or use a Hyper-V virtual machine connection to that VM.
Stacks (distributed applications as microservices)

Edit docker-compose.yml and replace it with this one.

version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: rbcost/get-started:part2
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
The only thing new here is the peer service to web, named visualizer. Notice two new things here: a volumes key, giving the visualizer access to the host's socket file for Docker, and a placement key, ensuring that this service only ever runs on a swarm manager -- never a worker. That's because this container, built from an open source project, displays Docker services running on a swarm in a diagram.

docker stack deploy -c docker-compose.yml getstartedlab  Redesplegamos el stack, que en


este caso tiene dos servicios.

Opcion de gestion swarmpit

Nos conectamos con un navegador a la ip del manager: 192.167.1.111:8080 para ver el


servicio “Visualizer”

Editamos el yml para añadir un servicio redis para persistencia.

version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: rbcost/get-started:part2
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- "/home/docker/data:/data"
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:

Redis has an official image in the Docker library and has been granted the
short image name of just redis, so no username/repo notation here. The Redis port, 6379,
has been pre-configured by Redis to be exposed from the container to the host, and here in
our Compose file we expose it from the host to the world, so you can actually enter the IP
for any of your nodes into Redis Desktop Manager and manage this Redis instance, if you
so choose.
Most importantly, there are a couple of things in the redis specification that make data
persist between deployments of this stack:

 • redis always runs on the manager, so it's always using the same filesystem.
 • redis accesses an arbitrary directory in the host's file system as /data inside the container, which is where Redis stores data.

Together, this is creating a “source of truth” in your host’s physical filesystem for the Redis
data. Without this, Redis would store its data in /data inside the container’s filesystem,
which would get wiped out if that container were ever redeployed.

This source of truth has two components:

 • The placement constraint you put on the Redis service, ensuring that it always uses the same host.
 • The volume you created that lets the container access ./data (on the host) as /data (inside the Redis container). While containers come and go, the files stored on ./data on the specified host persist, enabling continuity.

mkdir ./data → create the data directory in /home/docker, otherwise the mount point fails.

docker stack deploy -c docker-compose.yml getstartedlab → redeploy the stack, which in this case has three services.

docker service ls → shows the three services

Now connect to one of the IPs on port 80 to see the counter working.

If we connect to the manager's IP on port 8080, the visualizer will show the redis service.
Deploying the application: Docker EE vs CE

Customers of Docker Enterprise Edition run a stable, commercially-supported version of Docker Engine, and as an add-on they get our first-class management software, Docker Datacenter. You can manage every aspect of your application through the interface using Universal Control Plane (exam questions), run a private image registry with Docker Trusted Registry, integrate with your LDAP provider, sign production images with Docker Content Trust, and many other features.

Since it is a paid solution, I try a 24-hour trial:

(it takes 20 minutes to deploy)

Log in to my repository.

docker login -u rbcost -p elmio

In UCP (Universal Control Plane) go to shared resources/stacks and create one.

For the application name → getstartedlab; leave swarm and compose file.

Paste the docker-compose.yml, commenting out the two volume-mount lines; on AWS, access to the host's filesystem is not allowed. Create.
Networking in depth (docs.docker.com)
Network driver types

Docker's networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:

 • bridge: The default network driver. If you don't specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
 • host: For standalone containers, remove network isolation between the container and the Docker host, and use the host's networking directly. host is only available for swarm services on Docker 17.06 and higher.
 • overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
 • macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host's network stack.
 • none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.
 • Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor's documentation for installing and using a given network plugin.
Using bridge networks

In terms of networking, a bridge network is a Link Layer device which forwards traffic
between network segments.

In terms of Docker, a bridge network uses a software bridge which allows containers
connected to the same bridge network to communicate, while providing isolation from
containers which are not connected to that bridge network. The Docker bridge driver
automatically installs rules in the host machine so that containers on different bridge
networks cannot communicate directly with each other.

Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can use an overlay network.

When you start Docker, a default bridge network (also called bridge) is created
automatically, and newly-started containers connect to it unless otherwise specified. You
can also create user-defined custom bridge networks. User-defined bridge networks are
superior to the default bridge network.

Differences between user-defined bridges and the default bridge

Note: user-created bridge networks provide name resolution while the default one does not; they also improve isolation and can be attached to containers on the fly.
 User-defined bridges provide better isolation and interoperability between
containerized applications.

Containers connected to the same user-defined bridge network automatically expose all ports to each other, and no ports to the outside world. This allows containerized applications to communicate with each other easily, without accidentally opening access to the outside world.

Imagine an application with a web front-end and a database back-end. The outside
world needs access to the web front-end (perhaps on port 80), but only the back-
end itself needs access to the database host and port. Using a user-defined bridge,
only the web port needs to be opened, and the database application doesn’t need
any ports open, since the web front-end can reach it over the user-defined bridge.

If you run the same application stack on the default bridge network, you need to open both the web port and the database port, using the -p or --publish flag for each. This means the Docker host needs to block access to the database port by other means. (A concrete sketch follows this list.)

 User-defined bridges provide automatic DNS resolution between containers.

Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.

Imagine the same application as in the previous point, with a web front-end and a
database back-end. If you call your containers web and db, the web container can
connect to the db container at db, no matter which Docker host the application stack
is running on.

If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy --link flag; if this option comes up, treat it as incorrect, because it is obsolete). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the /etc/hosts files within the containers, but this creates problems that are difficult to debug.

 Containers can be attached and detached from user-defined networks on the fly.

During a container’s lifetime, you can connect or disconnect it from user-defined networks on the fly. To remove a container from the default bridge network, you need to stop the container and recreate it with different network options.

 Each user-defined network creates a configurable bridge.

If your containers use the default bridge network, you can configure it, but all the
containers use the same settings, such as MTU and iptables rules. In addition,
configuring the default bridge network happens outside of Docker itself, and
requires a restart of Docker.
User-defined bridge networks are created and configured using docker network
create. If different groups of applications have different network requirements, you
can configure each user-defined bridge separately, as you create it.

 Linked containers on the default bridge network share environment variables.

Originally, the only way to share environment variables between two containers was to link them using the --link flag. This type of variable sharing is not possible with user-defined networks. However, there are superior ways to share environment variables. A few ideas:

o Multiple containers can mount a file or directory containing the shared information, using a Docker volume.

o Multiple containers can be started together using docker-compose and the compose file can define the shared variables.

o You can use swarm services instead of standalone containers, and take advantage of shared secrets and configs.

Containers connected to the same user-defined bridge network effectively expose all ports
to each other. For a port to be accessible to containers or non-Docker hosts on different
networks, that port must be published using the -p or --publish flag.
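As a concrete sketch of the web front-end / database back-end scenario above (image names and network name are illustrative):

docker network create app-net
docker run -d --name db --network app-net redis  no ports published; reachable as “db” from app-net only
docker run -d --name web --network app-net -p 80:80 nginx  only the web port is published to the outside
docker network connect bridge web  a container can also be attached to additional networks on the fly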

Manage a user-defined bridge

docker network create my-net  Creates a user-defined network. With the --help option you can see all the options: IP range, gateway, etc.
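For instance, a user-defined bridge with a custom subnet and gateway (the values here are illustrative, not required):

docker network create --driver bridge --subnet 172.25.0.0/16 --gateway 172.25.0.1 my-custom-net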

docker network rm my-net  Removes it. If there are containers attached, disconnect them first.

Connect a container to a user-defined bridge

When you create a new container, you can specify one or more --network flags. This
example connects a Nginx container to the my-net network. It also publishes port 80 in the
container to port 8080 on the Docker host, so external clients can access that port. Any
other container connected to the my-net network has access to all ports on the my-
nginx container, and vice versa.

docker network create my-net


docker create --name my-nginx --network my-net --publish 8080:80 nginx:latest  the container is created, but not running yet.

docker start my-nginx  Press the sequence CTRL + p, CTRL + q to detach back to the host.

docker network disconnect my-net my-nginx  We disconnect the container from the network.

You can see that the containers section of the network is now empty.

docker container inspect my-nginx  Look at the “NetworkSettings” part; it is also empty:
"NetworkSettings": {
"Bridge": "",
"SandboxID": "1ffa899287653161c2f1ade1b800c0550bf7ad3acca0520701bf28cb2feb6add",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {},
"SandboxKey": "/var/run/docker/netns/1ffa89928765",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {}
}

docker network rm my-net  We delete the network.

docker container stop my-nginx  we stop the container.

docker container prune --force  We delete all stopped containers.

Enable forwarding from Docker containers to the outside world

By default, traffic from containers connected to the default bridge network is not forwarded to the outside world. To enable forwarding, you need to change two settings. These are not Docker commands and they affect the Docker host’s kernel.

1. Configure the Linux kernel to allow IP forwarding (you can check the current value with sysctl -a | grep net.ipv4.ip_forward):

$ sysctl net.ipv4.conf.all.forwarding=1

2. Change the policy for the iptables FORWARD chain from DROP to ACCEPT:

$ sudo iptables -P FORWARD ACCEPT

These settings do not persist across a reboot, so you may need to add them to a start-up script.
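One common way to persist the kernel setting across reboots, assuming a distribution that reads /etc/sysctl.d/ (the file name is illustrative):

echo "net.ipv4.conf.all.forwarding = 1" | sudo tee /etc/sysctl.d/99-docker-forwarding.conf

The iptables FORWARD policy can likewise be restored at boot, for example with the iptables-persistent package on Debian/Ubuntu or from your own start-up script.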
Use the default bridge network
The default bridge network is considered a legacy detail of Docker and is not
recommended for production use.
Connect a container to the default bridge network
If you do not specify a network using the --network flag, your container is connected to the default bridge network by default.

Containers connected to the default bridge network can communicate, but only by IP address, unless they are linked using the legacy --link flag.
Using “overlay” networks

The overlay network driver creates a distributed network among multiple Docker daemon
hosts. This network sits on top of (overlays) the host-specific networks, allowing containers
connected to it (including swarm service containers) to communicate securely. Docker
transparently handles routing of each packet to and from the correct Docker daemon host
and the correct destination container.

When you initialize a swarm or join a Docker host to an existing swarm, two new networks
are created on that Docker host:
 an overlay network called ingress, which handles control and data traffic related to
swarm services. When you create a swarm service and do not connect it to a user-
defined overlay network, it connects to the ingress network by default.
 a bridge network called docker_gwbridge, which connects the individual Docker
daemon to the other daemons participating in the swarm.

You can create user-defined overlay networks using docker network create, in the same
way that you can create user-defined bridge networks. Services or containers can be
connected to more than one network at a time. Services or containers can only
communicate across networks they are each connected to.

Although you can connect both swarm services and standalone containers to an overlay
network, the default behaviors and configuration concerns are different. For that reason,
the rest of this topic is divided into operations that apply to all overlay networks, those that
apply to swarm service networks, and those that apply to overlay networks used by
standalone containers.

Create an overlay network


docker network create -d overlay my-overlay  Creates an overlay network.

docker network create -d overlay --attachable my-attachable-overlay  Creates an overlay network that can be used by swarm services or by standalone containers to communicate with containers running on other Docker daemons.

Encrypt traffic on an overlay network


All swarm service management traffic is encrypted by default, using the AES algorithm in GCM mode. Manager nodes in the swarm rotate the key used to encrypt gossip data every 12 hours.
To encrypt application data as well, add --opt encrypted when creating the overlay
network. This enables IPSEC encryption at the level of the vxlan. This encryption imposes
a non-negligible performance penalty, so you should test this option before using it in
production.

When you enable overlay encryption, Docker creates IPSEC tunnels between all the nodes
where tasks are scheduled for services attached to the overlay network. These tunnels also
use the AES algorithm in GCM mode and manager nodes automatically rotate the keys
every 12 hours.
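For example, to create an encrypted overlay network (the network name is illustrative):

docker network create --opt encrypted --driver overlay my-encrypted-net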

Customize the default ingress network


Most users never need to configure the ingress network, but Docker 17.05 and higher
allow you to do so. This can be useful if the automatically-chosen subnet conflicts with one
that already exists on your network, or you need to customize other low-level network
settings such as the MTU.

Customizing the ingress network involves removing and recreating it. This is usually done before you create any services in the swarm. If you have existing services which publish ports, those services need to be removed before you can remove the ingress network. During the time that no ingress network exists, existing services which do not publish ports continue to function but are not load-balanced. This affects services which publish ports, such as a WordPress service which publishes port 80.

1. Inspect the ingress network using docker network inspect ingress, and remove any services whose containers are connected to it. These are services that publish ports, such as a WordPress service which publishes port 80. If all such services are not stopped, the next step fails.

2. Remove the existing ingress network:

$ docker network rm ingress

WARNING! Before removing the routing-mesh network, make sure all the nodes
in your swarm run the same docker engine version. Otherwise, removal may not
be effective and functionality of newly created ingress networks will be
impaired.
Are you sure you want to continue? [y/N]

3. Create a new overlay network using the --ingress flag, along with the custom options you want to set. This example sets the MTU to 1200, sets the subnet to 10.11.0.0/16, and sets the gateway to 10.11.0.2.

$ docker network create \
  --driver overlay \
  --ingress \
  --subnet=10.11.0.0/16 \
  --gateway=10.11.0.2 \
  --opt com.docker.network.driver.mtu=1200 \
  my-ingress

Note: You can name your ingress network something other than ingress, but you can only have one. An attempt to create a second one fails.

4. Restart the services that you stopped in the first step.

Customize the docker_gwbridge interface

The docker_gwbridge is a virtual bridge that connects the overlay networks (including
the ingress network) to an individual Docker daemon’s physical network. Docker creates it
automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a
Docker device. It exists in the kernel of the Docker host. If you need to customize its
settings, you must do so before joining the Docker host to the swarm, or after temporarily
removing the host from the swarm.

1. Stop Docker.

2. Delete the existing docker_gwbridge interface:

$ sudo ip link set docker_gwbridge down
$ sudo ip link del dev docker_gwbridge

3. Start Docker. Do not join or initialize the swarm.

4. Create or re-create the docker_gwbridge bridge manually with your custom settings, using the docker network create command. This example uses the subnet 10.11.0.0/16. For a full list of customizable options, see Bridge driver options.

$ docker network create \
  --subnet 10.11.0.0/16 \
  --opt com.docker.network.bridge.name=docker_gwbridge \
  --opt com.docker.network.bridge.enable_icc=false \
  --opt com.docker.network.bridge.enable_ip_masquerade=true \
  docker_gwbridge

5. Initialize or join the swarm. Since the bridge already exists, Docker does not create it with automatic settings.

Publish ports on an overlay network

Swarm services connected to the same overlay network effectively expose all ports to each other. For a port to be accessible outside of the service, that port must be published using the -p or --publish flag on docker service create or docker service update. Both the legacy colon-separated syntax and the newer comma-separated value syntax are supported. The longer syntax is preferred because it is somewhat self-documenting.

Flag value  Description

-p 8080:80 or
-p published=8080,target=80  Map TCP port 80 on the service to port 8080 on the routing mesh.

-p 8080:80/udp or
-p published=8080,target=80,protocol=udp  Map UDP port 80 on the service to port 8080 on the routing mesh.

-p 8080:80/tcp -p 8080:80/udp or
-p published=8080,target=80,protocol=tcp -p published=8080,target=80,protocol=udp  Map TCP port 80 on the service to TCP port 8080 on the routing mesh, and map UDP port 80 on the service to UDP port 8080 on the routing mesh.
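For instance, publishing a service port with the long syntax (the service name is illustrative):

docker service create --name my-web --publish published=8080,target=80 nginx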
Bypass the routing mesh for a swarm service

By default, swarm services which publish ports do so using the routing mesh. When you
connect to a published port on any swarm node (whether it is running a given service or
not), you are redirected to a worker which is running that service, transparently. Effectively,
Docker acts as a load balancer for your swarm services. Services using the routing mesh
are running in virtual IP (VIP) mode. Even a service running on each node (by means of
the --mode global flag) uses the routing mesh. When using the routing mesh, there is no
guarantee about which Docker node services client requests.
To bypass the routing mesh, you can start a service using DNS Round Robin (DNSRR)
mode, by setting the --endpoint-mode flag to dnsrr. You must run your own load balancer
in front of the service. A DNS query for the service name on the Docker host returns a list
of IP addresses for the nodes running the service. Configure your load balancer to
consume this list and balance the traffic across the nodes.
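A minimal sketch of DNSRR mode (the service name is illustrative; note that DNSRR cannot be combined with ports published in the default ingress mode):

docker service create --name my-dnsrr --endpoint-mode dnsrr --replicas 3 nginx

From a container attached to the same network, a DNS query for my-dnsrr then returns one IP per task instead of a single virtual IP.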

Separate control and data traffic

By default, control traffic relating to swarm management and traffic to and from your applications runs over the same network, though the swarm control traffic is encrypted. You can configure Docker to use separate network interfaces for handling the two different types of traffic. When you initialize or join the swarm, specify --advertise-addr and --datapath-addr separately. You must do this for each node joining the swarm.
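For example, on a node with a management interface and a separate data interface (the addresses are illustrative):

docker swarm init --advertise-addr 10.0.0.1 --datapath-addr 10.0.1.1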

Attach a standalone container to an overlay network

The ingress network is created without the --attachable flag, which means that only swarm services can use it, and not standalone containers. You can connect standalone containers to user-defined overlay networks which are created with the --attachable flag. This gives standalone containers running on different Docker daemons the ability to communicate without the need to set up routing on the individual Docker daemon hosts.
Publish ports

Flag value  Description

-p 8080:80  Map TCP port 80 in the container to port 8080 on the overlay network.

-p 8080:80/udp  Map UDP port 80 in the container to port 8080 on the overlay network.

-p 8080:80/sctp  Map SCTP port 80 in the container to port 8080 on the overlay network.

-p 8080:80/tcp -p 8080:80/udp  Map TCP port 80 in the container to TCP port 8080 on the overlay network, and map UDP port 80 in the container to UDP port 8080 on the overlay network.

Container discovery

For most situations, you should connect to the service name, which is load-balanced and
handled by all containers (“tasks”) backing the service. To get a list of all tasks backing the
service, do a DNS lookup for tasks.<service-name>.
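For example, from inside a container attached to the same overlay network (the service name is illustrative):

nslookup my-web  returns the service's single virtual IP (VIP)
nslookup tasks.my-web  returns one A record per task backing the service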
Using “host” networks

If you use the host network driver for a container, that container’s network stack is not
isolated from the Docker host. For instance, if you run a container which binds to port 80
and you use host networking, the container’s application will be available on port 80 on the
host’s IP address.

The host networking driver only works on Linux hosts, and is not supported on Docker
Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

In Docker 17.06 and higher, you can also use a host network for a swarm service, by
passing --network host to the docker service create command. In this case, control
traffic (traffic related to managing the swarm and the service) is still sent across an overlay
network, but the individual swarm service containers send data using the Docker daemon’s
host network and ports. This creates some extra limitations. For instance, if a service
container binds to port 80, only one service container can run on a given swarm node.

If your container or service publishes no ports, host networking has no effect.


Using “macvlan” networks

Some applications, especially legacy applications or applications which monitor network traffic, expect to be directly connected to the physical network. In this type of situation, you can use the macvlan network driver to assign a MAC address to each container’s virtual network interface, making it appear to be a physical network interface directly connected to the physical network. In this case, you need to designate a physical interface on your Docker host to use for the macvlan, as well as the subnet and gateway of the macvlan. You can even isolate your macvlan networks using different physical network interfaces. Keep the following things in mind:

 It is very easy to unintentionally damage your network due to IP address exhaustion or to “VLAN spread”, which is a situation in which you have an inappropriately large number of unique MAC addresses in your network.

 Your networking equipment needs to be able to handle “promiscuous mode”, where one physical interface can be assigned multiple MAC addresses.

 If your application can work using a bridge (on a single Docker host) or overlay (to communicate across multiple Docker hosts), these solutions may be better in the long term.

Create a macvlan network

When you create a macvlan network, it can either be in bridge mode or 802.1q trunk bridge
mode.
 In bridge mode, macvlan traffic goes through a physical device on the host.

 In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface which
Docker creates on the fly. This allows you to control routing and filtering at a more
granular level.

Bridge mode

To create a macvlan network which bridges with a given physical network interface, use --
driver macvlan with the docker network create command. You also need to specify
the parent, which is the interface the traffic will physically go through on the Docker host.
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net

If you need to exclude IP addresses from being used in the macvlan network, such as when
a given IP address is already in use, use --aux-addresses:

$ docker network create -d macvlan \


--subnet=192.168.32.0/24 \
--ip-range=192.168.32.128/25 \
--gateway=192.168.32.254 \
--aux-address="my-router=192.168.32.129" \
-o parent=eth0 macnet32

802.1q trunk bridge mode

If you specify a parent interface name with a dot included, such as eth0.50, Docker interprets that as a sub-interface of eth0 and creates the sub-interface automatically.

$ docker network create -d macvlan \


--subnet=192.168.50.0/24 \
--gateway=192.168.50.1 \
-o parent=eth0.50 macvlan50
Use an ipvlan instead of macvlan

In the above example, you are still using a L3 bridge. You can use ipvlan instead, and get
an L2 bridge. Specify -o ipvlan_mode=l2.

$ docker network create -d ipvlan \


--subnet=192.168.210.0/24 \
--subnet=192.168.212.0/24 \
--gateway=192.168.210.254 \
--gateway=192.168.212.254 \
-o ipvlan_mode=l2 ipvlan210

Use IPv6

If you have configured the Docker daemon to allow IPv6, you can use dual-stack IPv4/IPv6 macvlan networks.

$ docker network create -d macvlan \


--subnet=192.168.216.0/24 --subnet=192.168.218.0/24 \
--gateway=192.168.216.1 --gateway=192.168.218.1 \
--subnet=2001:db8:abc8::/64 --gateway=2001:db8:abc8::10 \
-o parent=eth0.218 \
-o macvlan_mode=bridge macvlan216
Disabling networking for a container

If you want to completely disable the networking stack on a container, you can use the --network none flag when starting the container. Within the container, only the loopback device is created. The following example illustrates this.

1. Create the container:

$ docker run --rm -dit \
  --network none \
  --name no-net-alpine \
  alpine:latest \
  ash

2. Check the container’s network stack, by executing some common networking commands within the container. Notice that no eth0 was created.

$ docker exec no-net-alpine ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
3: ip6tnl0@NONE: <NOARP> mtu 1452 qdisc noop state DOWN qlen 1
    link/tunnel6 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 brd 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00

$ docker exec no-net-alpine ip route

The second command returns empty because there is no routing table.

3. Stop the container. It is removed automatically because it was created with the --rm flag:

$ docker container stop no-net-alpine
Networking tutorials (docs.docker.com)
“Bridge” networks tutorial

Use the default bridge network demonstrates how to use the default bridge network that Docker sets up for you automatically. This network is not the best choice for production systems.

In this example, you start two different alpine containers on the same Docker host.

docker network ls  Shows the networks.

This tutorial connects two containers to the “bridge” network.

We start two “alpine” containers:

docker run -dit --name alpine1 alpine ash  -d (detached), -i (interactive, keeps STDIN open), -t (allocates a pseudo-TTY), --name (container name), ash (the Almquist shell).

docker run -dit --name alpine2 alpine ash  we start the second one.

docker container ls  We list the containers.

docker network inspect bridge  We inspect the “bridge” network.


[
{
"Name": "bridge",
"Id": "6d52b69510f1edb7752ef4ac8438119ed8eabbad1a0e94ff1d629e511d062434",
"Created": "2019-06-08T10:06:36.8184197+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"69b9f888ac26fec08d747844bb896e5f6f90212c06f33473c16f7617a8e20273": {
"Name": "alpine1",
"EndpointID": "8d3387760cf6503a586d798e81095c9800dcec8286b349f40c7b8b7759f2f371",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
},
"af6e779afd6b3659637b4adec17afa9bc47c3e05dc03da3dc05fc839722d32c1": {
"Name": "alpine2",
"EndpointID": "335f02135493f52db92cc0d7507727feac598b24eddb30101489bf3fd87d4335",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
}
},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]

docker attach alpine1  We attach to the container.

ip addr show  IP info.

We test Internet connectivity and the connection to the other container (by IP).

However, pinging the other container by name fails.

We detach from the containers with the sequence CTRL + p, CTRL + q.

docker container stop alpine1 alpine2  we stop the containers.

docker container rm alpine1 alpine2  We delete them.


Use user-defined bridge networks shows how to create and use your own custom bridge
networks, to connect containers running on the same Docker host. This is recommended for
standalone containers running in production.

We create the “alpine-net” network:

docker network create --driver bridge alpine-net  “--driver bridge” is not strictly necessary, since bridge is the default.

docker network ls  We list the networks.

docker network inspect alpine-net  We inspect the network.

[
{
"Name": "alpine-net",
"Id": "1bf63af0d2e5f2b47b48a1d6d1fe4059aaca83b35117bae179b72638c6549739",
"Created": "2019-06-08T17:01:10.82959791+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]

We create 4 containers. With “docker run” you can only connect to a single network; afterwards we can connect to more with “docker network connect”.

docker run -dit --name alpine1 --network alpine-net alpine ash  We create a container and connect it to “alpine-net”.

docker run -dit --name alpine2 --network alpine-net alpine ash  We create a container and connect it to “alpine-net”.

docker run -dit --name alpine3 alpine ash  We create a container connected to the default network (bridge).

docker run -dit --name alpine4 --network alpine-net alpine ash  We create a container and connect it to “alpine-net”.

docker network connect bridge alpine4  We connect “alpine4” to the “bridge” network as well.

docker network inspect bridge  We inspect the “bridge” network and verify that alpine3 and alpine4 are connected to it.

"Containers": {
"867e3cb9a48b3c351b00bb7979253175a5e9a11810e7d74a208f9ba26c89d3a1": {
"Name": "alpine4",
"EndpointID": "4c90d3f8dd8e4099ece8e434c77df4672f299f1243796370b979592cc918871f",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"9b5f89b2ca6c2e04aa2dbd7e03fc7d2c0ca3cdebdbede0c388137becb2911e5c": {
"Name": "alpine3",
"EndpointID": "244ff2245ccbaf4deb7f46fbe5dc4dd458ea97b8be0fdabd25a5c03af85faa51",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},

docker network inspect alpine-net  alpine1, alpine2 and alpine4 are connected to this network.

"Containers": {
"16181f194b77fd4218e2f1a0d7057d09669a8b6f5ea3ad6b6069904b58436282": {
"Name": "alpine2",
"EndpointID": "c71fba3b70d3c1a51060d4ae6e48fc7228418c111d8886ab671ac877fdaa2fe7",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
},
"29a5ce36b9c73ca98ec452bc239026f15402c52aaed408511de85067c3a7689b": {
"Name": "alpine1",
"EndpointID": "7f853c6f64edbff6d1236072b8b69c5fd5b9863eba507ba04ae76896f90a4c5c",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"867e3cb9a48b3c351b00bb7979253175a5e9a11810e7d74a208f9ba26c89d3a1": {
"Name": "alpine4",
"EndpointID": "b3507be7846b6cfbaef18bccac6650306ab9ca63e62a3dace6038e79d5e2c77b",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/16",
"IPv6Address": ""
}
},

On user-defined networks like alpine-net, containers can not only communicate by IP address, but can also resolve a container name to an IP address. This capability is called automatic service discovery. Let’s connect to alpine1 and test this out. alpine1 should be able to resolve alpine2 and alpine4 (and alpine1, itself) to IP addresses.

docker container attach alpine1  We attach to container 1.

ping -c 2 alpine2  It works.

We try other things. Name resolution for “alpine3” does not work.

ping -c 2 172.17.0.2  This does not work either, because alpine1 is on a different network.

We detach with CTRL + p, CTRL + q.

Remember that alpine4 is connected to both the default bridge network and alpine-net. It should be able to reach all of the other containers. However, you will need to address alpine3 by its IP address. (This is because alpine3 is not connected to a user-defined network, which is what provides automatic service discovery and lets a container name be resolved to an IP.) Attach to it and run the tests.

The “traceroute” command shows which gateways are used. For “alpine4” it goes through gateway 172.20.0.1 (of the alpine-net network) and then 192.168.1.1 (the Avante one).

We exit the container with CTRL + p, CTRL + q.


docker container stop alpine1 alpine2 alpine3 alpine4  We stop the containers.

docker container rm alpine1 alpine2 alpine3 alpine4  We delete them.

docker network rm alpine-net  We delete the network.


“Host” networks tutorial

The goal of this tutorial is to start a nginx container which binds directly to port 80 on the Docker host. From a networking point of view, this is the same level of isolation as if the nginx process were running directly on the Docker host and not in a container. However, in all other ways, such as storage, process namespace, and user namespace, the nginx process is isolated from the host.

Prerequisites

 This procedure requires port 80 to be available on the Docker host. To make Nginx listen on a different port, see the documentation for the nginx image.
 The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.

docker run --rm -d --network host --name my_nginx nginx  --rm means the container is removed when it stops or exits.

We browse to http://localhost:80/ and verify that nginx is running.

ip addr show  We check that no new interfaces have been created on the host. The “docker0” interface belongs to the “bridge” network.

netstat -tulpn | grep :80  we verify that the process is bound to port 80 of the host.

docker container stop my_nginx  when we stop the container it is removed, because we used the --rm option.
“Overlay” networks tutorial

This topic includes four different tutorials. You can run each of them on Linux, Windows, or a
Mac, but for the last two, you need a second Docker host running elsewhere.
 Use the default overlay network demonstrates how to use the default overlay network
that Docker sets up for you automatically when you initialize or join a swarm. This
network is not the best choice for production systems.
 Use user-defined overlay networks shows how to create and use your own custom
overlay networks, to connect services. This is recommended for services running in
production.
 Use an overlay network for standalone containers shows how to communicate
between standalone containers on different Docker daemons using an overlay
network.
 Communicate between a container and a swarm service sets up communication
between a standalone container and a swarm service, using an attachable overlay
network. This is supported in Docker 17.06 and higher.

Each of these requires you to have at least a single-node swarm, which means that you have started Docker and run docker swarm init on the host. You can run the examples on a multi-node swarm as well.

The last example requires Docker 17.06 or higher.

Use the default overlay network


In this example, you start an alpine service and examine the characteristics of the network
from the point of view of the individual service containers.

Prerequisites

This tutorial requires three physical or virtual Docker hosts which can all communicate with
one another, all running new installations of Docker 17.03 or higher.

This tutorial assumes that the three hosts are running on the same network with no firewall
involved. These hosts will be referred to as manager, worker-1, and worker-2.

The manager host will function as both a manager and a worker, which means it can both run service tasks and manage the swarm. worker-1 and worker-2 will function as workers only.

Deploy the VMs, change the VM names and the hosts files, and delete any containers they may have.

We create the swarm.

(On docker-01, the manager)
docker swarm init --advertise-addr=192.168.1.109  We initialize the swarm.

(On docker-02, worker-1)

docker swarm join --token SWMTKN-1-3ijolct45amguqmxixgbg49fkww06jessy2gvpe4yiptf4o3wz-9bomckyr9gmlgk8lfnili3hhn 192.168.1.109:2377  We add the node as a worker.

(On docker-03, worker-2)

docker swarm join --token SWMTKN-1-3ijolct45amguqmxixgbg49fkww06jessy2gvpe4yiptf4o3wz-9bomckyr9gmlgk8lfnili3hhn 192.168.1.109:2377  We add the node as a worker.

(On docker-01, the manager)

docker node ls  We list the swarm nodes.

(On docker-01, the manager)

docker node ls --filter role=manager  We list the swarm manager nodes.
docker node ls --filter role=worker  We list the swarm worker nodes.

We list the networks on the manager and on the workers and note that each of them has an overlay network called “ingress”.

(On docker-01, the manager)

docker network ls  We list the networks configured on docker-01.

The docker_gwbridge connects the ingress network to the Docker host’s network interface so that traffic can flow to and from swarm managers and workers. If you create swarm services and do not specify a network, they are connected to the ingress network. It is recommended that you use separate overlay networks for each application or group of applications which will work together.

We create the service.

We are going to create an overlay network called “nginx-net”:

docker network create -d overlay nginx-net  -d is the same as --driver.

You don’t need to create the overlay network on the other nodes, because it will be automatically created when one of those nodes starts running a service task which requires it.

(On docker-01, the manager)

docker service create --name my-nginx \
  --publish target=80,published=80 \
  --replicas=5 \
  --network nginx-net \
  nginx  We create an nginx service with 5 replicas connected to the “nginx-net” network.

The default publish mode of ingress, which is used when you do not specify a mode for the --publish flag, means that if you browse to port 80 on manager, worker-1, or worker-2, you will be connected to port 80 on one of the 5 service tasks, even if no tasks are currently running on the node you browse to. If you want to publish the port using host mode, you can add mode=host to the --publish flag. However, you should also use --mode global instead of --replicas=5 in this case, since only one service task can bind a given port on a given node.

Let’s see how it’s going.
(On the manager, docker-01)
docker network inspect nginx-net  I inspect nginx-net and look at the containers attached to this network. The load balancer’s endpoint is also there.

"Containers": {
"1ebd92275be3cb9c26afee929bfa819033adacd9e6d399d0626ccc5582467c72": {
"Name": "my-nginx.5.znv2c6v9pqokjd1f8ol4plzuj",
"EndpointID": "98785a9000825ae6ae5418088a6b4465ed2a6b5b7188e6cdbfca858497488e95",
"MacAddress": "02:42:0a:00:00:11",
"IPv4Address": "10.0.0.17/24",
"IPv6Address": ""
},
"5f813331af637e281b9e298d80cfb03689c77392cbd3ad0b4312ce0785f7d574": {
"Name": "my-nginx.2.w8fqpbv3fha0e3hyq6jhg54tu",
"EndpointID": "3e9e7a5f47b71ac537896d438ecf9886214471a9692ad467c822af2f3bfa6816",
"MacAddress": "02:42:0a:00:00:0e",
"IPv4Address": "10.0.0.14/24",
"IPv6Address": ""
},
"lb-nginx-net": {
"Name": "nginx-net-endpoint",
"EndpointID": "7cec9ee2f7a9d158a454a4056d79919eaa0aad465a8ed253e25918bc211fa3ef",
"MacAddress": "02:42:0a:00:00:14",
"IPv4Address": "10.0.0.20/24",
"IPv6Address": ""
}

(On worker-1, docker-02)

docker network inspect nginx-net  I inspect nginx-net and look at the containers attached to this network. As you can see, the “nginx-net” network was created automatically, as announced earlier. The same happens on node docker-03 (worker-2).

"Containers": {
"fe84f867fce3f0bfee34d325c2a4155447c324bdf2673d9bb446c2073a8c84b5": {
"Name": "my-nginx.3.30uh39wu8dsa9uiydpgpxdhx9",
"EndpointID": "7d828e9c579c2b95e7311d24eb71197778e409ee3204fc1b81c69b1b7f17413c",
"MacAddress": "02:42:0a:00:00:0f",
"IPv4Address": "10.0.0.15/24",
"IPv6Address": ""
},
"lb-nginx-net": {
"Name": "nginx-net-endpoint",
"EndpointID": "d3c808e16b0161e5cd9ae543b3136c5281c0dfa39636e65e44b455318c8a6c4f",
"MacAddress": "02:42:0a:00:00:12",
"IPv4Address": "10.0.0.18/24",
"IPv6Address": ""
}
},

docker network create -d overlay nginx-net-2  (-d = --driver) We create a new overlay network called “nginx-net-2”.

docker service update --network-add nginx-net-2 --network-rm nginx-net my-nginx  we update the “my-nginx” service to use the “nginx-net-2” overlay network instead of “nginx-net”.

docker network inspect nginx-net  We verify that there are no containers connected to the “nginx-net” network (except “lb-nginx-net”).

"Containers": {
"lb-nginx-net": {
"Name": "nginx-net-endpoint",
"EndpointID": "7cec9ee2f7a9d158a454a4056d79919eaa0aad465a8ed253e25918bc211fa3ef",
"MacAddress": "02:42:0a:00:00:14",
"IPv4Address": "10.0.0.20/24",
"IPv6Address": ""
}
},

docker network inspect nginx-net-2  We verify that there are containers connected to the “nginx-net-2” network.

"Containers": {
"7005dc5ab77e9be0eb7b1e4affaf03113588be9ffb23ef9f723fa2fd92992f3e": {
"Name": "my-nginx.3.z6cyugn2hfw4ieb8rx3w47bll",
"EndpointID": "932f06e63b1de05cbc3a5eca5bcd56ef57063570b5d3a15bf3e838788ad5c0e3",
"MacAddress": "02:42:0a:00:01:0c",
"IPv4Address": "10.0.1.12/24",
"IPv6Address": ""
},
"9e1905a87f519d68a3b108ff852253969bae481ca168b9bf20e02aa7e56fb860": {
"Name": "my-nginx.1.7k5q66axtyq8q2zbi91vu2onx",
"EndpointID": "61da071abc0c079a26f8d29d445d936d12633954c65390fcaffbc87a4892f49b",
"MacAddress": "02:42:0a:00:01:0b",
"IPv4Address": "10.0.1.11/24",
"IPv6Address": ""
},
"be7fae9347697fab385f00b4a1ac1f10cefd89107cf895c698e2f77a409c55e6": {
"Name": "my-nginx.5.0ybrb7269p2b6pfnj4x9bs7ua",
"EndpointID": "7fc8b12a39aa6527dd522a2a7910b55561d0bd1a117ac0868eac524efaf31ed0",
"MacAddress": "02:42:0a:00:01:09",
"IPv4Address": "10.0.1.9/24",
"IPv6Address": ""
},
"lb-nginx-net-2": {
"Name": "nginx-net-2-endpoint",
"EndpointID": "ba8535794ffd29123ba9274cdd41a27e68d2194fe29bc9972a5b4dd2a8d7f931",
"MacAddress": "02:42:0a:00:01:0a",
"IPv4Address": "10.0.1.10/24",
"IPv6Address": ""
}
},

We have seen that overlay networks are created automatically on the worker nodes, but removing them must be done manually.

docker service rm my-nginx  We remove the service.

docker network rm nginx-net nginx-net-2  We remove the networks.
Use an overlay network for standalone containers

This example demonstrates DNS container discovery -- specifically, how to communicate between standalone containers on different Docker daemons using an overlay network. Steps are:

 On host1, initialize the node as a swarm (manager).
 On host2, join the node to the swarm (worker).
 On host1, create an attachable overlay network (test-net).
 On host1, run an interactive container (alpine1) on test-net.
 On host2, run an interactive, and detached, container (alpine2) on test-net.
 On host1, from within a session of alpine1, ping alpine2.

Prerequisites

For this test, you need two different Docker hosts that can communicate with each other.
Each host must have Docker 17.06 or higher with the following ports open between the two
Docker hosts:

 TCP port 2377
 TCP and UDP port 7946
 UDP port 4789

(On docker-01)
docker swarm init  We initialize the swarm.

(On docker-02)
docker swarm join --token SWMTKN-1-1xu1nb215dwpsd2xc6fkbmqi3lyetk2k3ob366thq8vpmy4d1u-8u6y5wkuw96m2vildzphkzlhj 192.168.1.109:2377  We add the worker.

(On docker-01)
docker network create --driver=overlay --attachable test-net  We create an attachable overlay network (attachable = allows manual attachment of containers) called “test-net”.

(On docker-01)
docker run -it --name alpine1 --network test-net alpine  We start an interactive alpine container (-it) and connect it to the “test-net” overlay network.

(On docker-02)
docker network ls  note that the “test-net” network does not exist yet.

(On docker-02)
docker run -dit --name alpine2 --network test-net alpine  We create a detached, interactive container called “alpine2” connected to “test-net”.

(On docker-02)
docker network ls  We verify that the “test-net” network has now been created and that it has the same NETWORK ID as on docker-01.

(On docker-01, inside the container)

ping -c 2 alpine2  We ping to check that the name “alpine2” resolves.

The two containers communicate with the overlay network connecting the two hosts. If you run another alpine container on host2 that is not detached, you can ping alpine1 from host2 (and here we add the remove option for automatic container cleanup):

(On docker-02)
docker run -it --rm --name alpine3 --network test-net alpine  We create an interactive container that is deleted when it stops or exits.

(On docker-02, from inside the container)

ping -c 2 alpine1  we ping using the overlay network, with DNS resolution of the container name.

We exit the containers with “exit”. alpine3 is removed automatically because it was created with the “--rm” option.

docker container stop alpine2  On docker-02, “alpine2” is still running, since we created it detached. We stop it.

docker network ls  We list the networks.

docker container rm alpine2  we delete the container.

(On docker-01)
docker container rm alpine1  We delete the container.

docker network rm test-net  We delete the overlay network.


“Macvlan” networks tutorial

This series of tutorials deals with networking standalone containers which connect to macvlan networks. In this type of network, the Docker host accepts requests for multiple MAC addresses at its IP address, and routes those requests to the appropriate container. The goal of these tutorials is to set up a bridged macvlan network and attach a container to it, then set up an 802.1q trunked macvlan network and attach a container to it.

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my-macvlan-net  Creates a macvlan network called “my-macvlan-net”. The “subnet” and “gateway” values must make sense on the physical network.

docker run --rm -itd --network my-macvlan-net --name my-macvlan-alpine alpine:latest ash  We start an alpine container connected to “my-macvlan-net”.

docker container inspect my-macvlan-alpine  Let’s look at the “Networks” part:

"Networks": {
"my-macvlan-net": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"62d172ba8926"
],
"NetworkID": "933f41f89bedd49127f751ba9449e4b5036695470160af9101c39f5565ed8d90",
"EndpointID": "600243c3cb9ac1fe247a359b1e6b564bd6b4b70784ea2dd749f9c417811fd615",
"Gateway": "192.168.1.1",
"IPAddress": "192.168.1.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:01:02",
"DriverOpts": null
}

docker exec my-macvlan-alpine ip addr show eth0  Let’s see how the container sees its network configuration. For all intents and purposes, it looks like a physical machine on the physical network.

docker exec my-macvlan-alpine ip route  the default gateway.

docker container stop my-macvlan-alpine  We stop it (it is deleted automatically because of the --rm option).

docker network rm my-macvlan-net  We delete the network.


Configuring the daemon and containers (docs.docker.com)
Configuring the daemon for IPv6

Before you can use IPv6 in Docker containers or swarm services, you need to enable IPv6 support in the Docker daemon. Afterward, you can choose to use either IPv4 or IPv6 (or both) with any container, service, or network.

Note: IPv6 networking is only supported on Docker daemons running on Linux hosts.

1. Edit /etc/docker/daemon.json and set the ipv6 key to true. (The file does not exist on a fresh install, but you can create it; it is the standard place for daemon configuration.)

{
  "ipv6": true
}

2. Save the file.

3. Reload the Docker configuration file:

$ systemctl reload docker

You can now create networks with the --ipv6 flag and assign containers IPv6 addresses using the --ip6 flag.
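A minimal sketch of both flags; the 2001:db8::/32 prefix is reserved for documentation, so these values are illustrative:

docker network create --ipv6 --subnet 2001:db8:1::/64 v6-net
docker run -it --network v6-net --ip6 2001:db8:1::10 alpine ip -6 addr show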
Docker and iptables

On Linux, Docker manipulates iptables rules to provide network isolation. This is an implementation detail, and you should not modify the rules Docker inserts into your iptables policies.

Add iptables policies before Docker’s rules

All of Docker’s iptables rules are added to the DOCKER chain. Do not manipulate this chain manually. If you need to add rules which load before Docker’s rules, add them to the DOCKER-USER chain. These rules are loaded before any rules Docker creates automatically.

Restrict connections to the Docker daemon

By default, all external source IPs are allowed to connect to the Docker daemon. To allow only a specific IP or network to access the containers, insert a negated rule at the top of the DOCKER-USER filter chain. For example, the following rule restricts external access to all IP addresses except 192.168.1.1:

$ iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.1 -j DROP

You could instead allow connections from a source subnet. The following rule only allows access from the subnet 192.168.1.0/24:

$ iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.0/24 -j DROP

Finally, you can specify a range of IP addresses to accept using --src-range (remember to also add -m iprange when using --src-range or --dst-range):

$ iptables -I DOCKER-USER -m iprange -i ext_if ! --src-range 192.168.1.1-192.168.1.3 -j DROP

You can combine -s or --src-range with -d or --dst-range to control both the source and destination. For instance, if the Docker daemon listens on both 192.168.1.99 and 10.1.2.3, you can make rules specific to 10.1.2.3 and leave 192.168.1.99 open.

iptables is complicated and more complicated rules are out of scope for this topic. See the Netfilter.org HOWTO for a lot more information.
Prevent Docker from manipulating iptables

To prevent Docker from manipulating the iptables policies at all, set the iptables key
to false in /etc/docker/daemon.json. This is inappropriate for most users, because
the iptables policies then need to be managed by hand.
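A minimal /etc/docker/daemon.json sketch for this (merge the key into any existing configuration rather than replacing the file):

{
  "iptables": false
}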
Container networking

The type of network a container uses, whether it is a bridge, an overlay, a macvlan network, or a custom network plugin, is transparent from within the container. From the container’s point of view, it has a network interface with an IP address, a gateway, a routing table, DNS services, and other networking details (assuming the container is not using the none network driver). This topic is about networking concerns from the point of view of the container.

Published ports

By default, when you create a container, it does not publish any of its ports to the outside
world. To make a port available to services outside of Docker, or to Docker containers
which are not connected to the container’s network, use the --publish or -p flag. This
creates a firewall rule which maps a container port to a port on the Docker host. Here are
some examples.

Flag value  Description

-p 8080:80  Map TCP port 80 in the container to port 8080 on the Docker host.

-p 192.168.1.100:8080:80  Map TCP port 80 in the container to port 8080 on the Docker host for connections to host IP 192.168.1.100.

-p 8080:80/udp  Map UDP port 80 in the container to port 8080 on the Docker host.

-p 8080:80/tcp -p 8080:80/udp  Map TCP port 80 in the container to TCP port 8080 on the Docker host, and map UDP port 80 in the container to UDP port 8080 on the Docker host.

IP address and hostname

By default, the container is assigned an IP address for every Docker network it connects to. The IP address is assigned from the pool assigned to the network, so the Docker daemon effectively acts as a DHCP server for each container. Each network also has a default subnet mask and gateway.

When the container starts, it can only be connected to a single network, using --network. However, you can connect a running container to multiple networks using docker network connect. When you start a container using the --network flag, you can specify the IP address assigned to the container on that network using the --ip or --ip6 flags.

When you connect an existing container to a different network using docker network connect, you can use the --ip or --ip6 flags on that command to specify the container’s IP address on the additional network.

In the same way, a container’s hostname defaults to be the container’s ID in Docker. You can override the hostname using --hostname. When connecting to an existing network using docker network connect, you can use the --alias flag to specify an additional network alias for the container on that network.
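For example (network names, addresses, alias and container name are all illustrative, and the networks must already exist with matching subnets):

docker run -dit --name c1 --network my-net --ip 172.25.0.10 --hostname web1 alpine ash
docker network connect --ip 172.26.0.10 --alias backend other-net c1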

DNS services

By default, a container inherits the DNS settings of the Docker daemon, including the /etc/hosts and /etc/resolv.conf. You can override these settings on a per-container basis.

Flag  Description

--dns  The IP address of a DNS server. To specify multiple DNS servers, use multiple --dns flags. If the container cannot reach any of the IP addresses you specify, Google’s public DNS server 8.8.8.8 is added, so that your container can resolve internet domains.

--dns-search  A DNS search domain to search non-fully-qualified hostnames. To specify multiple DNS search prefixes, use multiple --dns-search flags.

--dns-opt  A key-value pair representing a DNS option and its value. See your operating system’s documentation for resolv.conf for valid options.

--hostname  The hostname a container uses for itself. Defaults to the container’s ID if not specified.
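For example (the DNS server and search domain are illustrative):

docker run --rm --dns 8.8.8.8 --dns-search example.internal alpine cat /etc/resolv.conf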
Configuring Docker to use a proxy server

If your container needs to use an HTTP, HTTPS, or FTP proxy server, you can configure it
in different ways:

 In Docker 17.07 and higher, you can configure the Docker client to pass proxy
information to containers automatically.

 In Docker 17.06 and lower, you must set appropriate environment variables within
the container. You can do this when you build the image (which makes the image
less portable) or when you create or run the container.

Configure the Docker client


1. On the Docker client, create or edit the file ~/.docker/config.json in the home
directory of the user which starts containers. Add JSON such as the following,
substituting the type of proxy with httpsProxy or ftpProxy if necessary, and
substituting the address and port of the proxy server. You can configure multiple
proxy servers at the same time.
You can optionally exclude hosts or ranges from going through the proxy server by
setting a noProxy key to one or more comma-separated IP addresses or hosts.
Using the * character as a wildcard is supported, as shown in this example.
{
"proxies":
{
"default":
{
"httpProxy": "http://127.0.0.1:3001",
"httpsProxy": "http://127.0.0.1:3001",
"noProxy": "*.test.example.com,.example2.com"
}
}
}

Save the file.

2. When you create or start new containers, the environment variables are set
automatically within the container.
Use environment variables

Set the environment variables manually

When you build the image, or using the --env flag when you create or run the container, you can set one or more of the following variables to the appropriate value. This method makes the image less portable, so if you have Docker 17.07 or higher, you should configure the Docker client instead.

Variable  Dockerfile example  docker run example

HTTP_PROXY  ENV HTTP_PROXY "http://127.0.0.1:3001"  --env HTTP_PROXY="http://127.0.0.1:3001"

HTTPS_PROXY  ENV HTTPS_PROXY "https://127.0.0.1:3001"  --env HTTPS_PROXY="https://127.0.0.1:3001"

FTP_PROXY  ENV FTP_PROXY "ftp://127.0.0.1:3001"  --env FTP_PROXY="ftp://127.0.0.1:3001"

NO_PROXY  ENV NO_PROXY "*.test.example.com,.example2.com"  --env NO_PROXY="*.test.example.com,.example2.com"
Managing application data (docs.docker.com)

Storage overview

By default all files created inside a container are stored on a writable container layer. This
means that:

 The data doesn’t persist when that container no longer exists, and it can be difficult
to get the data out of the container if another process needs it.
 A container’s writable layer is tightly coupled to the host machine where the
container is running. You can’t easily move the data somewhere else.
 Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem.

Docker has two options for containers to store files in the host machine, so that the files are
persisted even after the container stops: volumes, and bind mounts. If you’re running
Docker on Linux you can also use a tmpfs mount.

Keep reading for more information about these two ways of persisting data.

Choose the right type of mount


No matter which type of mount you choose to use, the data looks the same from within the
container. It is exposed as either a directory or an individual file in the container’s
filesystem.

An easy way to visualize the difference among volumes, bind mounts, and tmpfs mounts is to think about where the data lives on the Docker host.

 Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem. Volumes are the best way to persist data in Docker.

 Bind mounts may be stored anywhere on the host system. They may even be
important system files or directories. Non-Docker processes on the Docker host or
a Docker container can modify them at any time.

 tmpfs mounts are stored in the host system’s memory only, and are never written
to the host system’s filesystem.
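To make the distinction concrete, a minimal sketch of the three mount types (names and paths are illustrative):

docker run -d -v myvol:/app/data nginx  volume, managed by Docker under /var/lib/docker/volumes/
docker run -d -v /srv/config:/app/config nginx  bind mount, any path on the host
docker run -d --tmpfs /app/cache nginx  tmpfs mount, in memory only (Linux)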

More details about mount types

 Volumes: Created and managed by Docker. You can create a volume explicitly using the docker volume create command, or Docker can create a volume during container or service creation.
When you create a volume, it is stored within a directory on the Docker host. When
you mount the volume into a container, this directory is what is mounted into the
container. This is similar to the way that bind mounts work, except that volumes are
managed by Docker and are isolated from the core functionality of the host
machine.

A given volume can be mounted into multiple containers simultaneously. When no running container is using a volume, the volume is still available to Docker and is not removed automatically. You can remove unused volumes using docker volume prune.
When you mount a volume, it may be named or anonymous. Anonymous volumes
are not given an explicit name when they are first mounted into a container, so
Docker gives them a random name that is guaranteed to be unique within a given
Docker host. Besides the name, named and anonymous volumes behave in the
same ways.

Volumes also support the use of volume drivers, which allow you to store your data
on remote hosts or cloud providers, among other possibilities.

 Bind mounts: Available since the early days of Docker. Bind mounts have limited functionality compared to volumes. When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full path on the host machine. The file or directory does not need to exist on the Docker host already. It is created on demand if it does not yet exist. Bind mounts are very performant, but they rely on the host machine’s filesystem having a specific directory structure available. If you are developing new Docker applications, consider using named volumes instead. You can’t use Docker CLI commands to directly manage bind mounts.

Bind mounts allow access to sensitive files

One side effect of using bind mounts, for better or for worse, is that you can change
the host filesystem via processes running in a container, including creating,
modifying, or deleting important system files or directories. This is a powerful ability
which can have security implications, including impacting non-Docker processes on
the host system.

 tmpfs mounts: A tmpfs mount is not persisted on disk, either on the Docker host or within a
container. It can be used by a container during the lifetime of the container, to store
non-persistent state or sensitive information. For instance, internally, swarm
services use tmpfs mounts to mount into a service’s containers.

Bind mounts and volumes can both be mounted into containers using the -v or --volume
flag, but the syntax for each is slightly different. For tmpfs mounts, you can use
the --tmpfs flag. However, in Docker 17.06 and higher, we recommend using the
--mount flag for both containers and services, for bind mounts, volumes, or tmpfs mounts, as
the syntax is more clear.
Good use cases for volumes
Volumes are the preferred way to persist data in Docker containers and services. Some
use cases for volumes include:

 Sharing data among multiple running containers. If you don’t explicitly create it, a
volume is created the first time it is mounted into a container. When that container
stops or is removed, the volume still exists. Multiple containers can mount the same
volume simultaneously, either read-write or read-only. Volumes are only removed
when you explicitly remove them.

 When the Docker host is not guaranteed to have a given directory or file structure.
Volumes help you decouple the configuration of the Docker host from the container
runtime.

 When you want to store your container’s data on a remote host or a cloud provider,
rather than locally.
 When you need to back up, restore, or migrate data from one Docker host to
another, volumes are a better choice. You can stop containers using the volume,
then back up the volume’s directory (such as /var/lib/docker/volumes/<volume-
name>).

Good use cases for bind mounts


In general, you should use volumes where possible. Bind mounts are appropriate for the
following types of use case:

 Sharing configuration files from the host machine to containers. This is how Docker
provides DNS resolution to containers by default, by
mounting /etc/resolv.conf from the host machine into each container.
 Sharing source code or build artifacts between a development environment on the
Docker host and a container. For instance, you may mount a
Maven target/directory into a container, and each time you build the Maven project
on the Docker host, the container gets access to the rebuilt artifacts.

If you use Docker for development this way, your production Dockerfile would copy
the production-ready artifacts directly into the image, rather than relying on a bind
mount.

 When the file or directory structure of the Docker host is guaranteed to be
consistent with the bind mounts the containers require.
Tips for using bind mounts or volumes
If you use either bind mounts or volumes, keep the following in mind:

 If you mount an empty volume into a directory in the container in which files or
directories exist, these files or directories are propagated (copied) into the volume.
Similarly, if you start a container and specify a volume which does not already exist,
an empty volume is created for you. This is a good way to pre-populate data that
another container needs.
 If you mount a bind mount or non-empty volume into a directory in the container
in which some files or directories exist, these files or directories are obscured by the
mount, just as if you saved files into /mnt on a Linux host and then mounted a USB
drive into /mnt. The contents of /mnt would be obscured by the contents of the USB
drive until the USB drive were unmounted. The obscured files are not removed or
altered, but are not accessible while the bind mount or volume is mounted.
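
A minimal sketch of both behaviors, assuming a hypothetical volume webdata and an empty host directory /tmp/empty:

$ docker run -d --name pretest -v webdata:/usr/share/nginx/html nginx:latest
$ docker run --rm -v webdata:/data alpine ls /data → the image's default HTML files were copied into the new volume
50x.html
index.html

$ mkdir /tmp/empty
$ docker run --rm -v /tmp/empty:/usr/share/nginx/html nginx:latest ls /usr/share/nginx/html → prints nothing: the bind mount obscures the image's files until it is unmounted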
Volumes

Volumes are the preferred mechanism for persisting data generated by and used by
Docker containers. While bind mounts are dependent on the directory structure of the host machine,
volumes are completely managed by Docker. Volumes have several advantages over bind
mounts:

 Volumes are easier to back up or migrate than bind mounts.


 You can manage volumes using Docker CLI commands or the Docker API.
 Volumes work on both Linux and Windows containers.
 Volumes can be more safely shared among multiple containers.
 Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt
the contents of volumes, or to add other functionality.
 New volumes can have their content pre-populated by a container.

In addition, volumes are often a better choice than persisting data in a container’s writable
layer, because a volume does not increase the size of the containers using it, and the
volume’s contents exist outside the lifecycle of a given container.

If your container generates non-persistent state data, consider using a tmpfs mount to avoid storing the
data anywhere permanently, and to increase the container’s performance by avoiding
writing into the container’s writable layer.

Choose the -v or --mount flag


Originally, the -v or --volume flag was used for standalone containers and the --mount flag
was used for swarm services. However, starting with Docker 17.06, you can also use
--mount with standalone containers. In general, --mount is more explicit and verbose. The
biggest difference is that the -v syntax combines all the options together in one field, while
the --mount syntax separates them. Here is a comparison of the syntax for each flag.
New users should try --mount syntax which is simpler than --volume syntax.

If you need to specify volume driver options, you must use --mount.

 -v or --volume: Consists of three fields, separated by colon characters (:). The
fields must be in the correct order, and the meaning of each field is not immediately
obvious.
o In the case of named volumes, the first field is the name of the volume, and
is unique on a given host machine. For anonymous volumes, the first field is
omitted.
o The second field is the path where the file or directory are mounted in the
container.
o The third field is optional, and is a comma-separated list of options, such
as ro. These options are discussed below.

 --mount: Consists of multiple key-value pairs, separated by commas and each
consisting of a <key>=<value> tuple. The --mount syntax is more verbose than
-v or --volume, but the order of the keys is not significant, and the value of the flag is
easier to understand.
o The type of the mount, which can be bind, volume, or tmpfs. This topic discusses
volumes, so the type is always volume.
o The source of the mount. For named volumes, this is the name of the
volume. For anonymous volumes, this field is omitted. May be specified
as source or src.
o The destination takes as its value the path where the file or directory is
mounted in the container. May be specified as destination, dst, or target.
o The readonly option, if present, causes the volume to be mounted into
the container as read-only.
o The volume-opt option, which can be specified more than once, takes a
key-value pair consisting of the option name and its value.
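
To make the mapping between the two syntaxes concrete, these two commands are equivalent (the names web and myvol are just examples; run one or the other, not both, since they share a container name):

docker run -d --name web -v myvol:/app:ro nginx:latest

docker run -d --name web --mount source=myvol,destination=/app,readonly nginx:latest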

Escape values from outer CSV parser

If your volume driver accepts a comma-separated list as an option, you must escape the
value from the outer CSV parser. To escape a volume-opt, surround it with double quotes
(") and surround the entire mount parameter with single quotes (').
For example, the local driver accepts mount options as a comma-separated list in
the o parameter. This example shows the correct way to escape the list.
$ docker service create \
--mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"' \
--name myservice \
<IMAGE>
Differences between -v and --mount behavior

As opposed to bind mounts, all options for volumes are available for both --mount and -
v flags.

When using volumes with services, only --mount is supported.

Create and manage volumes


Unlike a bind mount, you can create and manage volumes outside the scope of any
container.

docker volume create my-vol → Creates a volume

docker volume rm my-vol → Removes the volume

Start a container with a volume

If you start a container with a volume that does not yet exist, Docker creates the volume for
you. The following example mounts the volume myvol2 into /app/ in the container.

(Example with --mount)


docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest →
We create a container, which in turn creates the volume. "source" is the name of the
folder in /var/lib/docker/volumes/myvol2/_data.

Now, with the -v format it would look like this.


docker run -d --name devtest -v myvol2:/app nginx:latest

docker volume inspect myvol2 → We inspect the volume.

[
    {
        "CreatedAt": "2019-06-09T16:23:20+02:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/myvol2/_data",
        "Name": "myvol2",
        "Options": null,
        "Scope": "local"
    }
]

docker container stop devtest → We stop it.

docker container rm devtest → We remove it.
docker volume rm myvol2 → We remove the volume.

A test of persistence.

I create the container and check (from the host) that the volume has no files.

Now I enter the container, go to the /app folder, and create a file. I exit the container.
From the host I check that the volume now contains the file.

I stop the container.

I start it again.

I enter the container again and confirm the file persisted.

CTRL + p, CTRL + q to exit the container.

docker container stop alpinetest → I stop the container.
docker container prune --force → I remove all stopped containers.
docker volume rm alpinevol → I remove the volume.
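
A minimal sketch of those steps as commands, using the alpinetest container and alpinevol volume referenced above (the file name persist.txt is just an example):

$ docker run -dit --name alpinetest -v alpinevol:/app alpine
$ sudo ls /var/lib/docker/volumes/alpinevol/_data → empty for now
$ docker attach alpinetest
/ # touch /app/persist.txt (then detach with CTRL + p, CTRL + q)
$ sudo ls /var/lib/docker/volumes/alpinevol/_data
persist.txt
$ docker container stop alpinetest && docker container start alpinetest
$ docker exec alpinetest ls /app → the file survived the restart
persist.txt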

Start a service with volumes

When you start a service and define a volume, each service container uses its own local
volume. None of the containers can share this data if you use the local volume driver, but
some volume drivers do support shared storage. Docker for AWS and Docker for Azure
both support persistent storage using the Cloudstor plugin.

The following example starts a nginx service with four replicas, each of which uses a local
volume called myvol2.

docker service create -d --replicas=4 --name devtest-service --mount source=myvol2,target=/app nginx:latest →
We start a service that launches 4 nginx replicas, each of which mounts a local volume named "myvol2".

docker service rm devtest-service → Removes the service.

docker volume rm myvol2 → Removes the volume.
Populate a volume using a container

If you start a container which creates a new volume, as above, and the container has files
or directories in the directory to be mounted (such as /app/ above), the directory’s contents
are copied into the volume. The container then mounts and uses the volume, and other
containers which use the volume also have access to the pre-populated content.

To illustrate this, this example starts an nginx container and populates the new
volume nginx-vol with the contents of the container’s /usr/share/nginx/html directory,
which is where Nginx stores its default HTML content.

docker run -d --name=nginxtest --mount source=nginx-vol,destination=/usr/share/nginx/html nginx:latest →
We create the container and the volume.

docker container stop nginxtest → Stop the container.

docker container rm nginxtest → Remove it.

docker volume rm nginx-vol → Remove the volume.

Use a read-only volume


For some development applications, the container needs to write into the bind mount so
that changes are propagated back to the Docker host. At other times, the container only
needs read access to the data. Remember that multiple containers can mount the same
volume, and it can be mounted read-write for some of them and read-only for others, at the
same time.

This example modifies the one above but mounts the directory as a read-only volume, by
adding ro to the (empty by default) list of options, after the mount point within the container.
Where multiple options are present, separate them by commas.

docker run -d --name=nginxtest --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly nginx:latest →
Creates the volume as read-only (--mount form)

docker run -d --name=nginxtest -v nginx-vol:/usr/share/nginx/html:ro nginx:latest → (the same, but with -v)

docker container inspect nginxtest → We check whether it is read-only.


"Mounts": [
{
"Type": "volume",
"Name": "nginx-vol",
"Source": "/var/lib/docker/volumes/nginx-vol/_data",
"Destination": "/usr/share/nginx/html",
"Driver": "local",
"Mode": "ro",
"RW": false,
"Propagation": ""
}

docker container stop nginxtest → We stop the container.

docker container rm nginxtest → We remove it.
docker volume rm nginx-vol → We remove the volume.

Share data among machines


When building fault-tolerant applications, you might need to configure multiple replicas of
the same service to have access to the same files.

There are several ways to achieve this when developing your applications. One is to add
logic to your application to store files on a cloud object storage system like Amazon S3.
Another is to create volumes with a driver that supports writing files to an external storage
system like NFS or Amazon S3.

Volume drivers allow you to abstract the underlying storage system from the application
logic. For example, if your services use a volume with an NFS driver, you can update the
services to use a different driver, as an example to store data in the cloud, without
changing the application logic.

Use a volume driver


When you create a volume using docker volume create, or when you start a container
which uses a not-yet-created volume, you can specify a volume driver. The following
examples use the vieux/sshfs volume driver, first when creating a standalone volume, and
then when starting a container which creates a new volume.

Initial set-up
This example assumes that you have two nodes, the first of which is a Docker host and can
connect to the second using SSH.

On the Docker host, install the vieux/sshfs plugin:


$ docker plugin install --grant-all-permissions vieux/sshfs

Create a volume using a volume driver

This example specifies a SSH password, but if the two hosts have shared keys configured,
you can omit the password. Each volume driver may have zero or more configurable
options, each of which is specified using an -o flag.
$ docker volume create --driver vieux/sshfs \
-o sshcmd=test@node2:/home/test \
-o password=testpassword \
sshvolume

Start a container which creates a volume using a volume driver
This example specifies a SSH password, but if the two hosts have shared keys configured,
you can omit the password. Each volume driver may have zero or more configurable
options. If the volume driver requires you to pass options, you must use the --mount
flag to mount the volume, rather than -v.

$ docker run -d \
--name sshfs-container \
--volume-driver vieux/sshfs \
--mount src=sshvolume,target=/app,volume-opt=sshcmd=test@node2:/home/test,volume-opt=password=testpassword \
nginx:latest
Create a service which creates an NFS volume
This example shows how you can create an NFS volume when creating a service. This
example uses 10.0.0.10 as the NFS server and /var/docker-nfs as the exported directory
on the NFS server. Note that the volume driver specified is local.

NFSV3

$ docker service create -d \
--name nfs-service \
--mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,volume-opt=o=addr=10.0.0.10' \
nginx:latest

NFSV4

docker service create -d \
--name nfs-service \
--mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/,"volume-opt=o=addr=10.0.0.10,rw,nfsvers=4,async"' \
nginx:latest

Backup, restore, or migrate data volumes


Volumes are useful for backups, restores, and migrations. Use the --volumes-from flag to
create a new container that mounts that volume.

Backup a container
For example, in the next command, we:

 Launch a new container and mount the volume from the dbstore container
 Mount a local host directory as /backup
 Pass a command that tars the contents of the dbdata volume to a backup.tar file
inside our /backup directory.

$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata

When the command completes and the container stops, we are left with a backup of
our dbdata volume.
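
To double-check the archive, list its contents from the host; backup.tar ends up in the current working directory because of the -v $(pwd):/backup mount:

$ tar tvf backup.tar → lists every file that was archived from /dbdata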
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you
made elsewhere.

For example, create a new container named dbstore2:


$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash

Then un-tar the backup file in the new container's data volume:

$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"

You can use the techniques above to automate backup, migration and restore testing using
your preferred tools.

Remove volumes
A Docker data volume persists after a container is deleted. There are two types of volumes
to consider:

 Named volumes have a specific source from outside the container, for
example awesome:/bar.
 Anonymous volumes have no specific source, so when the container is deleted, you can
instruct the Docker Engine daemon to remove them.

Remove anonymous volumes

To automatically remove anonymous volumes, use the --rm option. For example, this
command creates an anonymous /foo volume. When the container is removed, the Docker
Engine removes the /foo volume but not the awesome volume.
$ docker run --rm -v /foo -v awesome:/bar busybox top

Remove all volumes

To remove all unused volumes and free up space:

$ docker volume prune


Bind mounts

Bind mounts have been around since the early days of Docker. Bind mounts have limited
functionality compared to volumes. When you use a bind mount, a file or directory on the host
machine is mounted into a container. The file or directory is referenced by its full or relative
path on the host machine. By contrast, when you use a volume, a new directory is created
within Docker’s storage directory on the host machine, and Docker manages that
directory’s contents.

The file or directory does not need to exist on the Docker host already. It is created on
demand if it does not yet exist. Bind mounts are very performant, but they rely on the host
machine’s filesystem having a specific directory structure available. If you are developing
new Docker applications, consider using named volumes instead. You can’t use Docker CLI commands to
directly manage bind mounts.

Choose the -v or --mount flag

Originally, the -v or --volume flag was used for standalone containers and the --mount flag
was used for swarm services. However, starting with Docker 17.06, you can also use
--mount with standalone containers. In general, --mount is more explicit and verbose. The
biggest difference is that the -v syntax combines all the options together in one field, while
the --mount syntax separates them. Here is a comparison of the syntax for each flag.

Tip: New users should use the --mount syntax. Experienced users may be more familiar
with the -v or --volume syntax, but are encouraged to use --mount, because research has
shown it to be easier to use.
 -v or --volume: Consists of three fields, separated by colon characters (:). The
fields must be in the correct order, and the meaning of each field is not immediately
obvious.
o In the case of bind mounts, the first field is the path to the file or directory on
the host machine.
o The second field is the path where the file or directory is mounted in the
container.
o The third field is optional, and is a comma-separated list of options, such
as ro, consistent, delegated, cached, z, and Z. These options are
discussed below.

 --mount: Consists of multiple key-value pairs, separated by commas and each
consisting of a <key>=<value> tuple. The --mount syntax is more verbose than
-v or --volume, but the order of the keys is not significant, and the value of the flag is
easier to understand.
o The type of the mount, which can be bind, volume, or tmpfs. This topic
discusses bind mounts, so the type is always bind.
o The source of the mount. For bind mounts, this is the path to the file or
directory on the Docker daemon host. May be specified as source or src.
o The destination takes as its value the path where the file or directory is
mounted in the container. May be specified as destination, dst, or target.
o The readonly option, if present, causes the bind mount to be mounted into
the container as read-only.
o The bind-propagation option, if present, changes the bind propagation.
May be one of rprivate, private, rshared, shared, rslave, slave.
o The consistency option, if present, may be one of consistent, delegated,
or cached. This setting only applies to Docker Desktop for Mac, and is
ignored on all other platforms.
o The --mount flag does not support z or Z options for modifying selinux
labels.

Differences between -v and --mount behavior

Because the -v and --volume flags have been a part of Docker for a long time, their
behavior cannot be changed. This means that there is one behavior that is different
between -v and --mount.

If you use -v or --volume to bind-mount a file or directory that does not yet exist on the
Docker host, -v creates the endpoint for you. It is always created as a directory.
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker
host, Docker does not automatically create it for you, but generates an error.
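
As a quick illustration of that difference (the path /tmp/missing-dir is hypothetical), --mount refuses a non-existent bind source with an error along these lines, while -v silently creates the directory:

$ docker run -d --name bindtest --mount type=bind,source=/tmp/missing-dir,target=/app nginx:latest
docker: Error response from daemon: invalid mount config for type "bind": bind source path does not exist: /tmp/missing-dir.

$ docker run -d --name bindtest -v /tmp/missing-dir:/app nginx:latest → works; /tmp/missing-dir is created as an empty directory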
Start a container with a bind mount

Consider a case where you have a directory source and that when you build the source
code, the artifacts are saved into another directory, source/target/. You want the artifacts
to be available to the container at /app/, and you want the container to get access to a new
build each time you build the source on your development host. Use the following
command to bind-mount the target/ directory into your container at /app/. Run the
command from within the source directory. The $(pwd) sub-command expands to the
current working directory on Linux or macOS hosts.

The --mount and -v examples below produce the same result. You can’t run them both
unless you remove the devtest container after running the first one.

docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest →
With --mount (create the "target" directory first)

docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest → (with -v it would look like this)

docker container inspect devtest → We check the configuration

"Mounts": [
{
"Type": "bind",
"Source": "/home/docker/target",
"Destination": "/app",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],

This shows that the mount is a bind mount, it shows the correct source and destination, it
shows that the mount is read-write, and that the propagation is set to rprivate.

Stop the container:

$ docker container stop devtest

$ docker container rm devtest


Mount into a non-empty directory on the container
If you bind-mount into a non-empty directory on the container, the directory’s existing
contents (on the container) are obscured by the bind mount. This can be beneficial, such as
when you want to test a new version of your application without building a new image.
However, it can also be surprising and this behavior differs from that of volumes.

This example is contrived to be extreme, but replaces the contents of the
container’s /usr/ directory with the /tmp/ directory on the host machine. In most cases, this
would result in a non-functioning container.

docker run -d -it --name broken-container --mount type=bind,source=/tmp,target=/usr nginx:latest → We create the container
docker run -d -it --name broken-container -v /tmp:/usr nginx:latest → With -v it would look like this.

The container is created but does not start. Remove it:

$ docker container rm broken-container

Use a read-only bind mount


For some development applications, the container needs to write into the bind mount, so
changes are propagated back to the Docker host. At other times, the container only needs
read access.

This example modifies the one above but mounts the directory as a read-only bind mount,
by adding ro to the (empty by default) list of options, after the mount point within the
container. Where multiple options are present, separate them by commas.
The --mount and -v examples have the same result.
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app,readonly nginx:latest →
We create the container.

docker container inspect devtest → We check the mount configuration.

Use docker inspect devtest to verify that the bind mount was created correctly. Look for
the Mounts section:

"Mounts": [
{
"Type": "bind",
"Source": "/home/docker/target",
"Destination": "/app",
"Mode": "",
"RW": false,
"Propagation": "rprivate"
}
],

Stop the container:

$ docker container stop devtest

$ docker container rm devtest

Configure bind propagation

Bind propagation defaults to rprivate for both bind mounts and volumes. It is only
configurable for bind mounts, and only on Linux host machines. Bind propagation is an
advanced topic and many users never need to configure it.

Bind propagation refers to whether or not mounts created within a given bind-mount or
named volume can be propagated to replicas of that mount. Consider a mount point /mnt,
which is also mounted on /tmp. The propagation settings control whether a mount
on /tmp/a would also be available on /mnt/a. Each propagation setting has a recursive
counterpoint. In the case of recursion, consider that /tmp/a is also mounted as /foo. The
propagation settings control whether /mnt/a and/or /tmp/a would exist.

Propagation setting | Description

shared | Sub-mounts of the original mount are exposed to replica mounts, and sub-mounts of replica mounts are also propagated to the original mount.

slave | Similar to a shared mount, but only in one direction. If the original mount exposes a sub-mount, the replica mount can see it. However, if the replica mount exposes a sub-mount, the original mount cannot see it.

private | The mount is private. Sub-mounts within it are not exposed to replica mounts, and sub-mounts of replica mounts are not exposed to the original mount.

rshared | The same as shared, but the propagation also extends to and from mount points nested within any of the original or replica mount points.

rslave | The same as slave, but the propagation also extends to and from mount points nested within any of the original or replica mount points.

rprivate | The default. The same as private, meaning that no mount points anywhere within the original or replica mount points propagate in either direction.
tmpfs mounts

Volumes and bind mounts let you share files between the host machine and container so that you can persist
data even after the container is stopped.

If you’re running Docker on Linux, you have a third option: tmpfs mounts. When you create
a container with a tmpfs mount, the container can create files outside the container’s
writable layer.
As opposed to volumes and bind mounts, a tmpfs mount is temporary, and only persisted
in the host memory. When the container stops, the tmpfs mount is removed, and files
written there won’t be persisted.

This is useful to temporarily store sensitive files that you don’t want to persist in either the
host or the container writable layer.

Limitations of tmpfs mounts


 Unlike volumes and bind mounts, you can’t share tmpfs mounts between
containers.
 This functionality is only available if you’re running Docker on Linux.

Choose the --tmpfs or --mount flag

Originally, the --tmpfs flag was used for standalone containers and the --mount flag was
used for swarm services. However, starting with Docker 17.06, you can also use
--mount with standalone containers. In general, --mount is more explicit and verbose. The
biggest difference is that the --tmpfs flag does not support any configurable options.
 --tmpfs: Mounts a tmpfs mount without allowing you to specify any configurable
options, and can only be used with standalone containers.
 --mount: Consists of multiple key-value pairs, separated by commas and each
consisting of a <key>=<value> tuple. The --mount syntax is more verbose than
--tmpfs:
o The type of the mount, which can be bind, volume, or tmpfs. This topic
discusses tmpfs, so the type is always tmpfs.
o The destination takes as its value the path where the tmpfs mount is
mounted in the container. May be specified as destination, dst, or target.
o The tmpfs-type and tmpfs-mode options. See tmpfs options.

The examples below show both the --mount and --tmpfs syntax where possible, and --
mount is presented first.

Differences between --tmpfs and --mount behavior


 The --tmpfs flag does not allow you to specify any configurable options.
 The --tmpfs flag cannot be used with swarm services. You must use --mount.

Use a tmpfs mount in a container


To use a tmpfs mount in a container, use the --tmpfs flag, or use the --mount flag
with type=tmpfs and destination options. There is no source for tmpfs mounts. The
following example creates a tmpfs mount at /app in a Nginx container. The first example
uses the --mount flag and the second uses the --tmpfs flag.

docker run -d -it --name tmptest --mount type=tmpfs,destination=/app nginx:latest →
Mounts the tmpfs in the container

docker run -d -it --name tmptest --tmpfs /app nginx:latest → Syntax with --tmpfs

docker container inspect tmptest → We check the configuration.

"Mounts": [
{
"Type": "tmpfs",
"Source": "",
"Destination": "/app",
"Mode": "",
"RW": true,
"Propagation": ""
}
],

Remove the container:

$ docker container stop tmptest

$ docker container rm tmptest

Specify tmpfs options

tmpfs mounts allow for two configuration options, neither of which is required. If you need
to specify these options, you must use the --mount flag, as the --tmpfs flag does not
support them.

Option | Description

tmpfs-size | Size of the tmpfs mount in bytes. Unlimited by default.

tmpfs-mode | File mode of the tmpfs in octal. For instance, 700 or 0770. Defaults to 1777, or world-writable.

The following example sets the tmpfs-mode to 1770, so that it is not world-readable within
the container.

docker run -d \
-it \
--name tmptest \
--mount type=tmpfs,destination=/app,tmpfs-mode=1770 \
nginx:latest
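
To verify the mode from inside the container, one quick check (the timestamp and size in the output are illustrative):

$ docker exec tmptest ls -ld /app
drwxrwx--T 2 root root 40 Jun 9 16:25 /app

The trailing T reflects the sticky bit from the 1770 mode, and the missing permissions for "others" show that the mount is no longer world-readable.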
Storage driver types

To use storage drivers effectively, it’s important to know how Docker builds and stores
images, and how these images are used by containers. You can use this information to
make informed choices about the best way to persist data from your applications and avoid
performance problems along the way.

Storage drivers allow you to create data in the writable layer of your container. The files
won’t be persisted after the container is deleted, and both read and write speeds are lower
than native file system performance.

Note: Operations that are known to be problematic include write-intensive database
storage, particularly when pre-existing data exists in the read-only layer. More details are
provided in this document.

Use Docker volumes to persist data and improve performance.

Images and layers


A Docker image is built up from a series of layers. Each layer represents an instruction in
the image’s Dockerfile. Each layer except the very last one is read-only. Consider the
following Dockerfile:

FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py

This Dockerfile contains four commands, each of which creates a layer.


The FROM statement starts out by creating a layer from the ubuntu:18.04 image.
The COPY command adds some files from your Docker client’s current directory.
The RUN command builds your application using the make command. Finally, the last layer
specifies what command to run within the container.

Each layer is only a set of differences from the layer before it. The layers are stacked on
top of each other. When you create a new container, you add a new writable layer on top of
the underlying layers. This layer is often called the “container layer”. All changes made to
the running container, such as writing new files, modifying existing files, and deleting files,
are written to this thin writable container layer. The diagram below shows a container based
on the Ubuntu 18.04 image.
A storage driver handles the details about the way these layers interact with each other.
Different storage drivers are available, which have advantages and disadvantages in
different situations.

Container and layers


The major difference between a container and an image is the top writable layer. All writes
to the container that add new or modify existing data are stored in this writable layer. When
the container is deleted, the writable layer is also deleted. The underlying image remains
unchanged.

Because each container has its own writable container layer, and all changes are stored in
this container layer, multiple containers can share access to the same underlying image
and yet have their own data state. The diagram below shows multiple containers sharing
the same Ubuntu 18.04 image.
Note: If you need multiple images to have shared access to the exact same data, store this
data in a Docker volume and mount it into your containers.

Docker uses storage drivers to manage the contents of the image layers and the writable
container layer. Each storage driver handles the implementation differently, but all drivers
use stackable image layers and the copy-on-write (CoW) strategy.

Container size on disk

To view the approximate size of a running container, you can use the docker ps -
s command. Two different columns relate to size.
 size: the amount of data (on disk) that is used for the writable layer of each
container.
 virtual size: the amount of data used for the read-only image data used by the
container plus the container’s writable layer size. Multiple containers may share
some or all read-only image data. Two containers started from the same image
share 100% of the read-only data, while two containers with different images which
have layers in common share those common layers. Therefore, you can’t just total
the virtual sizes. This over-estimates the total disk usage by a potentially non-trivial
amount.
The total disk space used by all of the running containers on disk is some combination of
each container’s size and the virtual size values. If multiple containers started from the
same exact image, the total size on disk for these containers would be SUM (size of
containers) plus one image size (virtual size- size).
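
For example (container IDs and sizes here are illustrative), two containers started from the same nginx image might report:

$ docker ps -s
CONTAINER ID IMAGE ... SIZE
a1b2c3d4e5f6 nginx:latest ... 2B (virtual 133MB)
f6e5d4c3b2a1 nginx:latest ... 2B (virtual 133MB)

Following the formula above, the total disk usage is roughly 2B + 2B + 133MB, not 2 × 133MB, because both containers share the same read-only image layers.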

This also does not count the following additional ways a container can take up disk space:

 Disk space used for log files if you use the json-file logging driver. This can be
non-trivial if your container generates a large amount of logging data and log
rotation is not configured.
 Volumes and bind mounts used by the container.
 Disk space used for the container’s configuration files, which are typically small.
 Memory written to disk (if swapping is enabled).
 Checkpoints, if you’re using the experimental checkpoint/restore feature.

The copy-on-write (CoW) strategy


Copy-on-write is a strategy of sharing and copying files for maximum efficiency. If a file or
directory exists in a lower layer within the image, and another layer (including the writable
layer) needs read access to it, it just uses the existing file. The first time another layer
needs to modify the file (when building the image or running the container), the file is
copied into that layer and modified. This minimizes I/O and the size of each of the
subsequent layers. These advantages are explained in more depth below.

Sharing promotes smaller images

When you use docker pull to pull down an image from a repository, or when you create a
container from an image that does not yet exist locally, each layer is pulled down
separately, and stored in Docker’s local storage area, which is usually /var/lib/docker/ on
Linux hosts. You can see these layers being pulled in this example:

(On docker-01, first delete all downloaded images.)

docker pull ubuntu:18.04 → Downloads the ubuntu image, which has 3 layers.

Each of these layers is stored in its own directory inside the Docker host’s local storage
area. To examine the layers on the filesystem, list the contents
of /var/lib/docker/<storage-driver>. This example uses the overlay2 storage driver:
ls /var/lib/docker/overlay2 → We look at the layers that were downloaded (ignore the "l"
directory)

Now imagine that you have two different Dockerfiles. You use the first one to create an
image called rbcost/my-base-image:1.0.

mkdir image1 → I create a directory for the Dockerfile context.

cd image1

nano Dockerfile → Create the Dockerfile with the following content.

FROM ubuntu:18.04

COPY . /app

(Save the Dockerfile)

We go back to the previous directory.

mkdir image2 → I create a directory for the Dockerfile context.

cd image2

nano Dockerfile → Create the Dockerfile with the following content.

FROM rbcost/my-base-image:1.0

CMD /app/hello.sh

(Save the Dockerfile)

The second image contains all the layers from the first image, plus a new layer with
the CMD instruction, and a read-write container layer. Docker already has all the layers from
the first image, so it does not need to pull them again. The two images share any layers
they have in common.

We go back to the HOME directory.

mkdir cow-test → We create a new directory.

cd cow-test → We enter it.

(We create a file with nano)

nano hello.sh → Copy the following content into it.

#!/bin/sh

echo "Hello world"

(Save and exit nano)

chmod +x hello.sh → We make it executable.

cp ../image1/Dockerfile Dockerfile.base → (We copy the first Dockerfile to the current folder, renaming it)

cp ../image2/Dockerfile Dockerfile → (We copy the second Dockerfile to the current folder, renaming it)

(It should end up looking like this)

docker build -t rbcost/my-base-image:1.0 -f Dockerfile.base . → We build the first image. -t sets the tag.

docker image ls → We list the images.

docker build -t rbcost/my-final-image:1.0 -f Dockerfile . → We build the second image.

sudo docker image ls → We list the images.

sudo docker history 42a09360380e → We look at the layers that make up the base image.

sudo docker history fd8ce85fdda3 → We look at the layers that make up the final image.

Notice that all the layers are identical except the top layer of the second image. All the
other layers are shared between the two images, and are only stored once
in /var/lib/docker/. The new layer actually doesn’t take any room at all, because it is not
changing any files, but only running a command.

Note: The <missing> lines in the docker history output indicate that those layers were
built on another system and are not available locally. This can be ignored.

Copying makes containers efficient


When you start a container, a thin writable container layer is added on top of the other
layers. Any changes the container makes to the filesystem are stored here. Any files the
container does not change do not get copied to this writable layer. This means that the
writable layer is as small as possible.

When an existing file in a container is modified, the storage driver performs a copy-on-write
operation. The specific steps involved depend on the specific storage driver. For
the aufs, overlay, and overlay2 drivers, the copy-on-write operation follows this rough
sequence:

 Search through the image layers for the file to update. The process starts at the
newest layer and works down to the base layer one layer at a time. When results
are found, they are added to a cache to speed future operations.
 Perform a copy_up operation on the first copy of the file that is found, to copy the file
to the container’s writable layer.

 Any modifications are made to this copy of the file, and the container cannot see
the read-only copy of the file that exists in the lower layer.

Btrfs, ZFS, and other drivers handle the copy-on-write differently. You can read more about
the methods of these drivers later in their detailed descriptions.

Containers that write a lot of data consume more space than containers that do not. This is
because most write operations consume new space in the container’s thin writable top
layer.

Note: for write-heavy applications, you should not store the data in the container. Instead,
use Docker volumes, which are independent of the running container and are designed to
be efficient for I/O. In addition, volumes can be shared among containers and do not
increase the size of your container’s writable layer.

A copy_up operation can incur a noticeable performance overhead. This overhead is
different depending on which storage driver is in use. Large files, lots of layers, and deep
directory trees can make the impact more noticeable. This is mitigated by the fact that
each copy_up operation only occurs the first time a given file is modified.

To verify the way that copy-on-write works, the following procedures spins up 5 containers
based on the rbcost/my-final-image:1.0 image we built earlier and examines how much
room they take up.

Note: This procedure doesn’t work on Docker Desktop for Mac or Docker Desktop for
Windows.

docker run -dit --name my_container_1 rbcost/my-final-image:1.0 bash \
&& docker run -dit --name my_container_2 rbcost/my-final-image:1.0 bash \
&& docker run -dit --name my_container_3 rbcost/my-final-image:1.0 bash \
&& docker run -dit --name my_container_4 rbcost/my-final-image:1.0 bash \
&& docker run -dit --name my_container_5 rbcost/my-final-image:1.0 bash →
We start all of these containers.

sudo docker container ps → We check that they are running.

ls -l /var/lib/docker/containers → We list the contents of the container storage area.

du -h /var/lib/docker/containers → We check their sizes. Each of these containers takes up only 36 KB.

Not only does copy-on-write save space, but it also reduces start-up time. When you start a
container (or multiple containers from the same image), Docker only needs to create the
thin writable container layer.

If Docker had to make an entire copy of the underlying image stack each time it started a
new container, container start times and disk space used would be significantly increased.
This would be similar to the way that virtual machines work, with one or more virtual disks
per virtual machine.
Select a storage driver

Ideally, very little data is written to a container’s writable layer, and you use Docker
volumes to write data. However, some workloads require you to be able to write to the
container’s writable layer. This is where storage drivers come in.

Docker supports several different storage drivers, using a pluggable architecture. The
storage driver controls how images and containers are stored and managed on your
Docker host.

After you have read the storage driver overview, the next step is to choose the best storage driver for your
workloads. In making this decision, there are three high-level factors to consider:

If multiple storage drivers are supported in your kernel, Docker has a prioritized list of which
storage driver to use if no storage driver is explicitly configured, assuming that the storage
driver meets the prerequisites.

Use the storage driver with the best overall performance and stability in the most usual
scenarios.

Docker supports the following storage drivers:

 overlay2 is the preferred storage driver, for all currently supported Linux
distributions, and requires no extra configuration.
 aufs is the preferred storage driver for Docker 18.06 and older, when running on
Ubuntu 14.04 on kernel 3.13 which has no support for overlay2.
 devicemapper is supported, but requires direct-lvm for production environments,
because loopback-lvm, while zero-configuration, has very poor
performance. devicemapper was the recommended storage driver for CentOS and
RHEL, as their kernel version did not support overlay2. However, current versions
of CentOS and RHEL now have support for overlay2, which is now the
recommended driver.
 The btrfs and zfs storage drivers are used if they are the backing filesystem (the
filesystem of the host on which Docker is installed). These filesystems allow for
advanced options, such as creating “snapshots”, but require more maintenance and
setup. Each of these relies on the backing filesystem being configured correctly.
 The vfs storage driver is intended for testing purposes, and for situations where no
copy-on-write filesystem can be used. Performance of this storage driver is poor,
and is not generally recommended for production use.

Docker Engine - Enterprise and Docker Enterprise


For Docker Engine - Enterprise and Docker Enterprise, the definitive resource for which
storage drivers are supported is the Product compatibility matrix. To get commercial support from Docker, you must
use a supported configuration.
Docker Engine - Community
For Docker Engine - Community, only some configurations are tested, and your operating
system’s kernel may not support every storage driver. In general, the following
configurations work on recent versions of the Linux distribution:

Linux distribution | Recommended storage drivers | Alternative drivers

Docker Engine - Community on Ubuntu | overlay2 or aufs (for Ubuntu 14.04 running on kernel 3.13) | overlay¹, devicemapper², zfs, vfs

Docker Engine - Community on Debian | overlay2 (Debian Stretch), aufs or devicemapper (older versions) | overlay¹, vfs

Docker Engine - Community on CentOS | overlay2 | overlay¹, devicemapper², zfs, vfs

Docker Engine - Community on Fedora | overlay2 | overlay¹, devicemapper², zfs, vfs

Supported backing filesystems

With regard to Docker, the backing filesystem is the filesystem where /var/lib/docker/ is
located. Some storage drivers only work with specific backing filesystems.
Storage driver | Supported backing filesystems

overlay2, overlay | xfs with ftype=1, ext4
aufs | xfs, ext4
devicemapper | direct-lvm
btrfs | btrfs
zfs | zfs
vfs | any filesystem

Suitability for your workload


Among other things, each storage driver has its own performance characteristics that make
it more or less suitable for different workloads. Consider the following generalizations:

 overlay2, aufs, and overlay all operate at the file level rather than the block level.
This uses memory more efficiently, but the container’s writable layer may grow
quite large in write-heavy workloads.
 Block-level storage drivers such as devicemapper, btrfs, and zfs perform better for
write-heavy workloads (though not as well as Docker volumes).
 For lots of small writes or containers with many layers or deep
filesystems, overlay may perform better than overlay2, but consumes more inodes,
which can lead to inode exhaustion.
 btrfs and zfs require a lot of memory.
 zfs is a good choice for high-density workloads such as PaaS.

More information about performance, suitability, and best practices is available in the
documentation for each storage driver.

Shared storage systems and the storage driver


If your enterprise uses SAN, NAS, hardware RAID, or other shared storage systems, they
may provide high availability, increased performance, thin provisioning, deduplication, and
compression. In many cases, Docker can work on top of these storage systems, but
Docker does not closely integrate with them.
Each Docker storage driver is based on a Linux filesystem or volume manager. Be sure to
follow existing best practices for operating your storage driver (filesystem or volume
manager) on top of your shared storage system. For example, if using the ZFS storage
driver on top of a shared storage system, be sure to follow best practices for operating ZFS
filesystems on top of that specific shared storage system.

Stability

For some users, stability is more important than performance. Though Docker considers all
of the storage drivers mentioned here to be stable, some are newer and are still under
active development. In general, overlay2, aufs, overlay, and devicemapper are the choices
with the highest stability.

Check your current storage driver


The detailed documentation for each individual storage driver details all of the set-up steps
to use a given storage driver.

To see what storage driver Docker is currently using, use docker info and look for
the Storage Driver line:

docker info → Info about the Docker engine.

Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 3
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file

To change the storage driver, see the specific instructions for the new storage driver. Some
drivers require additional configuration, including configuration to physical or logical disks
on the Docker host.
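
On Linux, the storage driver is typically set in /etc/docker/daemon.json, followed by a daemon restart. A minimal sketch (overlay2 shown as the driver):

{
    "storage-driver": "overlay2"
}

sudo systemctl restart docker → restart the daemon
docker info | grep "Storage Driver" → should now report the new driver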

Important: When you change the storage driver, any existing images and containers
become inaccessible. This is because their layers cannot be used by the new storage
driver. If you revert your changes, you can access the old images and containers again, but
any that you pulled or created using the new driver are then inaccessible.
Using the AUFS storage driver

AUFS is a union filesystem. The aufs storage driver was previously the default storage
driver used for managing images and layers on Docker for Ubuntu, and for Debian versions
prior to Stretch. If your Linux kernel is version 4.0 or higher, and you use Docker CE,
consider using the newer overlay2 storage driver, which has potential performance advantages over
the aufs storage driver.

The configuration steps are explained on the website.

Using the BTRFS storage driver

Btrfs is a next generation copy-on-write filesystem that supports many advanced storage
technologies that make it a good fit for Docker. Btrfs is included in the mainline Linux
kernel.

Docker’s btrfs storage driver leverages many Btrfs features for image and container
management. Among these features are block-level operations, thin provisioning, copy-on-
write snapshots, and ease of administration. You can easily combine multiple physical
block devices into a single Btrfs filesystem.

The configuration steps are explained on the website.

Using the Devicemapper storage driver

Device Mapper is a kernel-based framework that underpins many advanced volume
management technologies on Linux. Docker’s devicemapper storage driver leverages
the thin provisioning and snapshotting capabilities of this framework for image and
container management.

The configuration steps are explained on the website.


Using the OverlayFS storage driver

OverlayFS is a modern union filesystem that is similar to AUFS, but faster and with a
simpler implementation. Docker provides two storage drivers for OverlayFS: the
original overlay, and the newer and more stable overlay2.

The configuration steps are explained on the website.

Using the ZFS storage driver

ZFS is a next generation filesystem that supports many advanced storage technologies
such as volume management, snapshots, checksumming, compression and deduplication,
replication and more.

It was created by Sun Microsystems (now Oracle Corporation) and is open sourced under
the CDDL license. Due to licensing incompatibilities between the CDDL and GPL, ZFS
cannot be shipped as part of the mainline Linux kernel. However, the ZFS On Linux (ZoL)
project provides an out-of-tree kernel module and userspace tools which can be installed
separately.

The configuration steps are explained on the website.

Using the VFS storage driver

The VFS storage driver is not a union filesystem; instead, each layer is a directory on disk,
and there is no copy-on-write support. To create a new layer, a “deep copy” is done of the
previous layer. This leads to lower performance and more space used on disk than other
storage drivers. However, it is robust, stable, and works in every environment. It can also
be used as a mechanism to verify other storage back-ends against, in a testing
environment.
The configuration steps are explained on the website.
