Docker Course

Introduction to networking in Docker
Swarms
Networking in depth (docs.docker.com)
  Types of network drivers
  Using bridge networks
    Differences between user-defined bridges and the default bridge
    Connect a container to a user-defined bridge
    Enable forwarding from Docker containers to the outside world
    Use the default bridge network
    Connect a container to the default bridge network
  Using overlay networks
    Create an overlay network
    Encrypt traffic on an overlay network
    Customize the default ingress network
    Customize the docker_gwbridge interface
    Publish ports on an overlay network
    Bypass the routing mesh for a swarm service
    Separate control and data traffic
    Attach a standalone container to an overlay network
    Container discovery
  Using macvlan networks
    Create a macvlan network
    Bridge mode
    802.1q trunk bridge mode
    Use an ipvlan instead of macvlan
    Use IPv6
Networking tutorials (docs.docker.com)
  Bridge networks tutorial
  Overlay networks tutorial
    Use the default overlay network
    Use an overlay network for standalone containers
  Configure the daemon for IPv6
  Docker and iptables
    Add iptables policies before Docker's rules
    Restrict connections to the Docker daemon
    Prevent Docker from manipulating iptables
  Container networking
    Published ports
    IP address and hostname
    DNS services
  Configure the Docker client
    Use environment variables
    Set the environment variables manually
Storage overview
  Choose the right type of mount
  More details about mount types
  Good use cases for volumes
  Good use cases for bind mounts
  Tips for using bind mounts or volumes
  Volumes
    Choose the -v or --mount flag
    Differences between -v and --mount behavior
    Create and manage volumes
    Start a container with a volume
    Start a service with volumes
    Use a read-only volume
    Share data among machines
    Use a volume driver
      Initial set-up
      Create a volume using a volume driver
      Start a container which creates a volume using a volume driver
      Create a service which creates an NFS volume
    Backup, restore, or migrate data volumes
      Backup a container
      Restore container from backup
    Remove volumes
      Remove anonymous volumes
      Remove all volumes
  Bind mounts
    Choose the -v or --mount flag
    Differences between -v and --mount behavior
    Start a container with a bind mount
    Mount into a non-empty directory on the container
    Use a read-only bind mount
    Configure bind propagation
  tmpfs mounts
    Choose the --tmpfs or --mount flag
    Differences between --tmpfs and --mount behavior
    Use a tmpfs mount in a container
    Specify tmpfs options
Types of storage drivers
  Images and layers
  Container and layers
  Container size on disk
  The copy-on-write (CoW) strategy
  Sharing promotes smaller images
  Copying makes containers efficient
Selecting a storage driver
  Docker Engine - Enterprise and Docker Enterprise
  Docker Engine - Community
  Shared storage systems and the storage driver
  Stability
  Check your current storage driver
Using the AUFS storage driver
Using the Btrfs storage driver
Using the devicemapper storage driver
uname -a
It failed because there is no package for version 4.15.0-47-generic. I install the highest one available, which is 4.15.0-15-generic.
Reboot.
sudo reboot
DEFAULT_FORWARD_POLICY="ACCEPT"
Optional: change the Docker daemon's network configuration, binding to all of the host's interfaces.
sudo dockerd -H tcp://0.0.0.0:2375
Update Docker.
Run a container. -i keeps the container's STDIN open. -t allocates a pseudo-TTY for the container that is about to be created. ubuntu is the container's base image. /bin/bash is the command we run inside the container.
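The command described above would be:
sudo docker run -i -t ubuntu /bin/bash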
Start a container.
sudo docker ps -a
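The start command itself might look like this (the container name is a placeholder):
sudo docker start <container-name>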
Attach to a container.
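For example (container name is a placeholder):
sudo docker attach <container-name>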
Inspect a container.
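For example (container name is a placeholder):
sudo docker inspect <container-name>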
Delete a container.
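For example (container name is a placeholder):
sudo docker rm <container-name>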
Log out.
exit
Create an example directory. This directory is our build environment, which is what Docker
calls a context or build context. Docker will upload the build context, as well as any files and
directories contained in it, to our Docker daemon when the build is run. This provides the
Docker daemon with direct access to any code, files or other data you might want to include in
the image.
Each instruction adds a new layer to the image and then commits the image. Docker, when executing instructions, roughly follows this workflow:
• Docker runs a container from the base image.
• An instruction executes and makes a change to the container.
• The result is committed as a new image layer, and a new container is run from that image.
• The next instruction in the file is executed, and the process repeats until all instructions have been executed.
cd static_web
# Version: 0.0.1
FROM ubuntu:16.04
# assumed steps, omitted in these notes: install nginx and create a sample page
RUN apt-get update; apt-get install -y nginx
RUN echo 'Hi, I am in your container' > /usr/share/nginx/html/index.html
EXPOSE 80
Build from the Dockerfile.
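A build command consistent with the tags used later in these notes (tag assumed):
sudo docker build -t rbcost/static_web .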
Handle a failure in an instruction. Edit the Dockerfile and change "nginx" to "ngin".
cd static_web
Suppose we want to debug this failure. We can use docker run to create a container from the last step in the build that worked. Use the ID of the image that was created last.
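A hedged example (the image ID is a placeholder for the last successfully built layer):
sudo docker run -t -i <last-image-id> /bin/bash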
Skip the Dockerfile build cache (--no-cache flag). Useful to ensure everything is built from scratch.
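For example:
sudo docker build --no-cache -t rbcost/static_web .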
I’ve specified the ENV instruction to set an environment variable called REFRESHED_AT,
showing when the template was last updated. Lastly, I’ve specified the apt-get -qq update
command in a RUN instruction. This refreshes the APT package cache when it’s run, ensuring
that the latest packages are available to install.
With my template, when I want to refresh the build, I change the date in my ENV instruction. Docker then resets the cache when it hits that ENV instruction and runs every subsequent instruction anew, without relying on the cache.
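A minimal sketch of the template described above (the date value is illustrative):
FROM ubuntu:16.04
ENV REFRESHED_AT 2016-07-01
RUN apt-get -qq update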
Docker has two methods of assigning ports on the Docker host: Docker can randomly assign a
high port from the range 32768 to 61000 on the Docker host that maps to port 80 on the
container. You can specify a specific port on the Docker host that maps to port 80 on the
container.
The docker run command will open a random port on the Docker host that will connect to port
80 on the Docker container.
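Hedged examples of both approaches (the image tag is assumed from earlier in these notes):
sudo docker run -d -p 80 --name static_web rbcost/static_web nginx -g "daemon off;"
# maps a random high host port to container port 80
sudo docker run -d -p 8080:80 --name static_web_8080 rbcost/static_web nginx -g "daemon off;"
# maps host port 8080 to container port 80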
sudo docker ps -l
CMD and ENTRYPOINT.
# Version: 0.0.1
FROM ubuntu:16.04
RUN apt-get update; apt-get install -y nginx
EXPOSE 80
# assumed CMD for this example (the notes omitted it); the ENTRYPOINT variants follow below
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
We build the image.
We launch the container.
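Hedged build-and-run commands for this example (tag and name assumed):
sudo docker build -t rbcost/static_web .
sudo docker run -d -p 80 --name static_web_cmd rbcost/static_web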
# Version: 0.0.1
FROM ubuntu:16.04
RUN apt-get update; apt-get install -y nginx
# assumed install step, omitted in the notes
EXPOSE 80
ENTRYPOINT ["/usr/sbin/nginx"]
# Version: 0.0.1
FROM ubuntu:16.04
RUN apt-get update; apt-get install -y nginx
# assumed install step, omitted in the notes
EXPOSE 80
ENTRYPOINT ["/usr/sbin/nginx"]
CMD ["-h"]
Now when we launch a container, any option we specify will be passed to the Nginx daemon; for example, we could specify -g "daemon off;" as we did above to run the daemon in the foreground.
If we don't specify anything to pass to the container, then the -h is passed by the CMD instruction and returns the Nginx help.
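A sketch of both behaviours with the image above:
sudo docker run -d -p 80 --name static_web_ep rbcost/static_web -g "daemon off;"
# -g "daemon off;" is appended to the ENTRYPOINT, running Nginx in the foreground
sudo docker run -t -i rbcost/static_web
# no arguments, so CMD supplies -h and Nginx prints its help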
The WORKDIR instruction provides a way to set the working directory for the container and the ENTRYPOINT and/or CMD to be executed when a container is launched from the image.
WORKDIR /opt/webapp/db
RUN bundle install
# assumed step between the two WORKDIR lines, following the book's example
WORKDIR /opt/webapp
ENTRYPOINT [ "rackup" ]
A WORKDIR instruction can also use a variable set with ENV, for example:
ENV TARGET_DIR /opt/app
# assumed definition; the notes only kept the WORKDIR line
WORKDIR $TARGET_DIR
The ENV instruction is used to set environment variables during the image build process. For example:
ENV RVM_PATH /home/rvm/
These environment variables will also be persisted into any containers created from your image. So, if we were to run the env command in a container built with the ENV RVM_PATH /home/rvm/ instruction, we'd see:
root@bf42aadc7f09:~# env
. . .
RVM_PATH=/home/rvm/
. . .
You can also pass environment variables on the docker run command line using the -e flag. These variables will only apply at runtime, for example:
sudo docker run -ti -e "WEB_PORT=8080" ubuntu env
HOME=/
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=792b171c5e9f
TERM=xterm
WEB_PORT=8080
The USER instruction specifies a user that the image should be run as; for example:
USER nginx
This will cause containers created from the image to be run by the nginx user.
You can also override this at runtime by specifying the -u flag with the docker run command.
The default user if you don’t specify the USER instruction is root.
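A hedged runtime override (image and user are illustrative):
sudo docker run -u root -t -i ubuntu:16.04 /bin/bash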
The VOLUME instruction adds volumes to any container created from the image. A volume is a
specially designated directory within one or more containers that bypasses the Union File
System to provide several useful features for persistent or shared data:
VOLUME ["/opt/project"]
The ADD instruction adds files and directories from our build environment into our image, like so:
ADD software.lic /opt/application/software.lic
This ADD instruction will copy the file software.lic from the build directory to /opt/application/software.lic in the image.
If a tar archive (valid archive types include gzip, bzip2, xz) is specified as the source file, then
Docker will automatically unpack it for you.
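A hedged illustration (archive name and destination assumed):
ADD latest.tar.gz /var/www/wordpress/
# the archive is unpacked into /var/www/wordpress/ at build time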
The COPY instruction is closely related to the ADD instruction. The key difference is that the
COPY instruction is purely focused on copying local files from the build context and does not
have any extraction or decompression capabilities.
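A hedged illustration (paths assumed):
COPY conf.d/ /etc/apache2/
# copies the local conf.d directory from the build context into /etc/apache2/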
The LABEL instruction adds metadata to a Docker image. The metadata is in the form of
key/value pairs. We recommend combining all your metadata in a single LABEL instruction to
save creating multiple layers with each piece of metadata.
You can inspect the labels on an image using the docker inspect command.
LABEL version="1.0"
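A sketch of combining several labels in one instruction (the values are illustrative):
LABEL version="1.0" location="New York" type="Data Center" role="Web Server"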
The ARG instruction defines variables that can be passed at build-time via the docker build
command. This is done using the --build-arg flag. You can only specify build-time arguments
that have been defined in the Dockerfile.
ARG build
ARG webapp_user=user
The second ARG instruction sets a default: if no value is specified for the argument at build time, the default is used.
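A hedged usage sketch for the two ARG instructions above (tag assumed):
docker build --build-arg build=1234 -t rbcost/webapp .
# build gets 1234; webapp_user keeps its default of "user"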
Docker has a set of predefined ARG variables that you can use at build-time without a
corresponding ARG instruction in the Dockerfile:
HTTP_PROXY
http_proxy
HTTPS_PROXY
https_proxy
FTP_PROXY
ftp_proxy
NO_PROXY
no_proxy
To use these predefined variables, pass them using the --build-arg <variable>=<value> flag to the docker build command.
The SHELL instruction allows the default shell used for the shell form of commands to be
overridden. The default shell on Linux is ["/bin/sh", "-c"] and on Windows is ["cmd", "/S",
"/C"].
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working correctly. It contains options followed by the command you wish to run, separated by a CMD keyword.
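A sketch using curl (the endpoint is assumed):
HEALTHCHECK --interval=10s --timeout=1m --retries=5 CMD curl http://localhost || exit 1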
We can see the state of the health check using the docker inspect command.
Healthy
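One way to query just the health state (container name assumed):
sudo docker inspect --format '{{.State.Health.Status}}' static_web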
The ONBUILD instruction adds triggers to images. A trigger is executed when the image is used
as the basis of another image (e.g., if you have an image that needs source code added from a
specific location that might not yet be available, or if you need to execute a build script that is
specific to the environment in which the image is built).
The trigger inserts a new instruction in the build process, as if it were specified right after the FROM instruction. The trigger can be any build instruction. For example:
ONBUILD ADD . /app/src
This would add an ONBUILD trigger to the image being created, which we see when we run
docker inspect on the image:
...
"OnBuild": [
"ADD . /app/src",
]
...
For example, we'll build a new Dockerfile for an Apache2 image that we'll call rbcost/apache2.
FROM ubuntu:16.04
RUN apt-get update; apt-get install -y apache2
# assumed install step, omitted in the notes
ONBUILD ADD . /var/www/
# assumed trigger; the next line of the notes describes this behavior
EXPOSE 80
ENTRYPOINT ["/usr/sbin/apache2"]
The image adds (copies) the files from the context directory when the build runs.
FROM rbcost/apache2
# Version: 0.0.1
FROM ubuntu:16.04
EXPOSE 80
List the images and copy the ID of one of them. Now we are going to tag it for a repository.
localhost:5000/rbcost/static_web
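The tagging command itself might look like this (the image ID is a placeholder):
sudo docker tag <image-id> localhost:5000/rbcost/static_web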
Introduction to networking in Docker
mkdir sample
cd sample
wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/nginx/global.conf
wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/nginx/nginx.conf
FROM ubuntu:16.04
EXPOSE 80
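The fragment above is incomplete; a fuller sketch, assuming nginx is installed and the two downloaded conf files are ADDed, might be:
FROM ubuntu:16.04
RUN apt-get -yqq update; apt-get -yqq install nginx
RUN mkdir -p /var/www/html/website
ADD global.conf /etc/nginx/conf.d/
ADD nginx.conf /etc/nginx/nginx.conf
EXPOSE 80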
In nginx.conf, the daemon off; option stops Nginx from going into the background and forces
it to run in the foreground. This is because Docker containers rely on the running process
inside them to remain active.
By default, Nginx daemonizes itself when started, which would cause the container to run briefly and then stop when the daemon forked and launched and the original process that forked it exited.
We build the nginx image.
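A build command consistent with these notes (tag assumed):
sudo docker build -t rbcost/nginx .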
We show the history of the Nginx image. The output shows the layers, most recent first.
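The history command might be (tag assumed from the build above):
sudo docker history rbcost/nginx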
wget https://raw.githubusercontent.com/jamtur01/dockerbook-code/master/code/5/sample/website/index.html
cd ..
You can see we’ve passed the nginx command to docker run. Normally this wouldn’t make
Nginx run interactively. In the configuration we supplied to Docker, though, we’ve added the
directive daemon off. This directive causes Nginx to run interactively in the foreground when
launched.
The -v option is new. It allows us to create a volume in our container from a directory on the host.
• It changes frequently, and we don’t want to rebuild the image during our development
process.
If the container directory doesn’t exist Docker will create it. We can also specify the read/write
status of the container directory by adding either rw or ro after that directory.
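The run command described here might look like this (names and paths assumed; run from the sample directory):
sudo docker run -d -p 80 --name website -v $PWD/website:/var/www/html/website rbcost/nginx nginx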
cd sample
Refresh the browser.
We are going to create a Sinatra application. Sinatra is a Ruby-based web application framework. It contains a web application library and a simple Domain Specific Language, or DSL, for creating web applications.
mkdir -p sinatra
cd sinatra
FROM ubuntu:16.04
RUN apt-get update -yqq; apt-get -yqq install ruby ruby-dev build-essential redis-tools
RUN gem install --no-rdoc --no-ri sinatra json redis
# assumed step, described in the text below
RUN mkdir -p /opt/webapp
# assumed step, described in the text below
EXPOSE 4567
CMD ["/opt/webapp/bin/webapp"]
We have used the gem binary to install the sinatra, json, and redis gems. The sinatra and json
gems contain Ruby’s Sinatra library and support for JSON.
The redis gem we’re going to use a little later on to provide integration to a Redis database.
We’ve also created a directory to hold our new web application and exposed the default
WEBrick port of 4567.
Finally, we’ve specified a CMD of /opt/webapp/bin/webapp, which will be the binary that
launches our web application.
cd sinatra
There is an erratum in the book: the following instruction will fail with a Not Found error.
http://dockerbook.com/code/5/sinatra/webapp/
cd ~/Downloads
unzip dockerbook-code-master.zip
cd dockerbook-code-master
cd code
cd 5
cd sinatra
cd webapp
Let’s quickly look at the core of the webapp source code contained in the
sinatra/webapp/lib/app.rb file.
This is a simple application that converts any parameters posted to the /json endpoint to JSON
and displays them.
Now let’s launch a new container from our image using the docker run command. To launch
we should be inside the sinatra directory because we’re going to mount our source code into
the container using a volume.
We’ve not provided a command to run on the command line; instead, we’re using the
command we specified via the CMD instruction in the Dockerfile of the image (CMD
["/opt/webapp/bin/webapp"])
sudo docker ps -a
Right now, our basic Sinatra application doesn't do much. It just takes incoming parameters, turns them into JSON, and then outputs them. We can now use the curl command to test our application. (Mind the port number.)
curl -i -H 'Accept: application/json' -d 'name=Foo&status=Bar' http://localhost:49160/json
We're going to extend our Sinatra application now by adding a Redis back end and storing our incoming URL parameters in a Redis database. We need to download a new version of the Sinatra app.
cd ~/Downloads
cd dockerbook-code-master
cd code
cd 5
cd sinatra
cd webapp_redis
Let's look at its core code in lib/app.rb now (edit it with nano). We now create a connection to a Redis database on a host called db on port 6379. We also post our parameters to that Redis database and then get them back from it when required.
We’re going to create a new image. Let’s create a directory, redis inside our sinatra directory,
to hold any associated files we’ll need for the Redis container build.
mkdir redis
cd redis
FROM ubuntu:16.04
RUN apt-get -yqq update; apt-get -yqq install redis-server redis-tools
# assumed install step, omitted in the notes
EXPOSE 6379
ENTRYPOINT ["/usr/bin/redis-server"]
CMD []
We build the image.
sudo docker build -t rbcost/redis .
We create the container. Note that we've specified the -p flag to publish port 6379.
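A sketch of the run command and a quick test with redis-cli (the mapped host port is a placeholder):
sudo docker run -d -p 6379 --name redis rbcost/redis
redis-cli -h 127.0.0.1 -p <mapped-port>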
quit
Let's now update our Sinatra application to connect to Redis and store our incoming parameters. We're going to need to be able to talk to the Redis server. There are two ways we could do this:
• Docker's own internal network.
• From Docker 1.9 and later, using Docker Networking and the docker network command.
So which method should I choose? Well the first method, Docker’s internal network, is not an
overly flexible or powerful solution. We don’t recommend it as a solution for connecting
containers.
• Docker Networking can connect containers to each other across different hosts.
• Containers connected via Docker Networking can be stopped, started or restarted without
needing to update connections.
• With Docker Networking you don’t need to create a container before you can connect to it.
You also don’t need to worry about the order in which you run containers and you get
internal container name resolution and discovery inside the network.
The first method involves Docker’s own network stack. So far, we’ve seen Docker containers
exposing ports and binding interfaces so that container services are published on the local
Docker host’s external network (e.g., binding port 80 inside a container to a high port on the
local host). In addition to this capability, Docker has a facet we haven’t yet seen: internal
networking.
Every Docker container is assigned an IP address, provided through an interface created when
we installed Docker. That interface is called docker0. Let’s look at that interface on our Docker
host now.
ifconfig docker0
The docker0 interface is a virtual Ethernet bridge that connects our containers and the local host network. If we look further at the other interfaces on our Docker host (running the ifconfig command), we'll find a series of interfaces starting with veth.
Every time Docker creates a container, it creates a pair of peer interfaces that are like opposite
ends of a pipe (i.e., a packet sent on one will be received on the other).
It gives one of the peers to the container to become its eth0 interface and keeps the other
peer, with a unique name like vethec6a, out on the host machine.
You can think of a veth interface as one end of a virtual network cable. One end is plugged into
the docker0 bridge, and the other end is plugged into the container.
By binding every veth* interface to the docker0 bridge, Docker creates a virtual subnet shared
between the host machine and every Docker container.
Let’s look inside a container now and see the other end of this pipe.
ifconfig
We see that Docker has assigned an IP address, 172.17.0.29, for our container that will be
peered with a virtual interface on the host side
traceroute www.google.es
But there’s one other piece of Docker networking that enables this connectivity: firewall rules
and NAT configuration allow Docker to route between containers and the host network.
Exit out of our container and let's look at the iptables NAT configuration on our Docker host.
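The command to list those rules:
sudo iptables -t nat -L -n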
We can also get network information from the docker inspect command, looking for NetworkSettings.
So, while this initially looks like it might be a good solution for connecting our containers together, sadly, this approach has two big rough edges: firstly, we'd need to hard-code the IP address of our Redis container into our applications; secondly, if we restart the container, Docker may change its IP address.
Let's see this now using the docker restart command (we'll get the same result if we kill our container using the docker kill command).
The IP could be different, and the sinatra container would no longer be able to connect to its redis database.
Docker Networking allows you to set up your own networks through which containers can communicate. Essentially this supplements the existing docker0 network with new, user-managed networks. Importantly, containers can now communicate with each other across hosts, and your networking configuration can be highly customizable.
To use Docker networks we first need to create a network and then launch a container inside
that network.
This uses the docker network command to create a bridge network called app. A network ID is
returned for the network.
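The command described here, as a sketch:
sudo docker network create app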
We can then inspect this network using the docker network inspect command.
Our new network is a local, bridged network much like the docker0 network, and currently no containers are running inside it.
In addition to bridge networks, which exist on a single host, we can also create overlay
networks, which allow us to span multiple hosts.
You can list all current networks using the docker network ls command (they are removed with the docker network rm command).
Now let’s add a container to the network we’ve created. To do this we need to be back in the
sinatra directory.
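Hedged commands for this step (names taken from the text below; image tags assumed):
sudo docker run -d --net=app --name db rbcost/redis
sudo docker run -p 4567 --net=app --name network_test -t -i rbcost/sinatra /bin/bash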
We’ve launched it interactively so we can peek inside to see what’s happening. As the
container has been started inside the app network, Docker will have taken note of all other
containers running inside that network and populated their addresses in local DNS. Let’s see
this now in the network_test container.
nslookup db
We see that using the nslookup command to resolve the db container it returns the IP
address: 172.18.0.2.
A Docker network also adds the app network as a domain suffix for the network; any host in the app network can be resolved by hostname.app, here db.app.
ping db.app
In our case we just need the db entry to make our application function. To make that work our
webapp’s Redis connection code already uses the db hostname.
We could now start our application and have our Sinatra application write its variables into
Redis via the connection between the db and webapp containers that we’ve established via
the app network.
Let’s try it now by exiting the network_test container and starting up a new container running
our Redis-enabled web application.
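A sketch of that command (paths and tag assumed; run from the directory holding webapp_redis):
sudo docker run -d -p 4567 --net=app --name webapp -v $PWD/webapp_redis:/opt/webapp rbcost/sinatra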
http://localhost:32776/json
curl -i http://localhost:32776/json
"[{\"name\":\"Foo\",\"status\":\"Bar\"}]"
You can also add already running containers to existing networks using the docker network
connect command. So we can add an existing container to our app network. Let’s say we have
an existing container called db2 that also runs Redis.
Let’s add that to the app network (we could have also used the --net flag to automatically add
the container to the network at runtime).
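The connect command, assuming the db2 container from the text:
sudo docker network connect app db2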
We can also disconnect a container from a network using the docker network disconnect
command.
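And, correspondingly (container name assumed):
sudo docker network disconnect app db2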
Containers can belong to multiple networks at once so you can create quite complex
networking models.
Getting started with Docker (docs.docker.com)
Containers
docker info or docker version (without --) shows more details about the version.
docker container ls --all Shows all containers, including the ones that have exited.
docker run -p 4000:80 friendlyhello Runs a container. The port inside the container is 80 and on the host it is 4000.
docker-compose.yml
version: "3"
services:
  web:
    image: rbcost/get-started:part2
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
networks:
  webnet:
docker container ls Shows the 5 containers of the service running, plus the rest.
docker service ps getstartedlab_web Lists the tasks of the service. Task names have the form "appOrStackName_serviceName.x", e.g. "getstartedlab_web.4".
You can run curl -4 http://localhost:4000 several times in a row, or go to that URL in
your browser and hit refresh a few times. Either way, the container ID changes,
demonstrating the load-balancing; with each request, one of the 5 tasks is chosen, in a
round-robin fashion, to respond.
docker swarm leave --force Stops the swarm (--force on the last manager destroys the swarm).
Summary
docker stack deploy -c <composefile> <appname> # Run the specified Compose file
docker swarm leave --force # Take down a single node swarm from the manager
Swarms
A swarm is a group of machines that are running Docker and joined into a cluster. After
that has happened, you continue to run the Docker commands you’re used to, but now they
are executed on a cluster by a swarm manager. The machines in a swarm can be physical
or virtual. After joining a swarm, they are referred to as nodes.
Swarm managers can use several strategies to run containers, such as “emptiest node” --
which fills the least utilized machines with containers. Or “global”, which ensures that each
machine gets exactly one instance of the specified container. You instruct the swarm
manager to use these strategies in the Compose file, just like the one you have already
been using.
Swarm managers are the only machines in a swarm that can execute your commands or
authorize other machines to join the swarm as workers. Workers are just there to provide
capacity and do not have the authority to tell any other machine what it can and cannot do.
We should use the docker-machine command to manage the hypervisors, but those need Windows Server 2016 or Windows 10, so I do it by hand.
(On Docker-01)
docker swarm init --advertise-addr 192.168.1.111 We initialize the swarm (the default port is 2377; it could be changed).
(On Docker-02)
docker swarm join --token SWMTKN-1-2ka6mwrya7e4uy9m370mz3vxlepdjgv7gfxi7fwcjggeu2iyk2-dliwpzlirieikrcnl4pycklao 192.168.1.111:2377 Turns the node into a worker.
(On Docker-01)
docker node ls Lists the nodes in the swarm.
The reason both IP addresses work is that nodes in a swarm participate in an ingress
routing mesh. This ensures that a service deployed at a certain port within your swarm
always has that port reserved to itself, no matter what node is actually running the
container. Here’s a diagram of how a routing mesh for a service called my-web published
at port 8080 on a three-node swarm would look:
When installing Docker for Windows on Hyper-V, a virtual machine called MobyLinuxVM is created; it is the Docker engine.
There is no need to install Compose, because it comes with the Docker for Windows installation.
Run from PowerShell as administrator.
docker-machine create -d hyperv --hyperv-virtual-switch "Red Local" myvm1 This command deploys a VM with 1 GB of RAM on Hyper-V. Here is the output.
The first machine acts as the manager, which executes management commands and
authenticates workers to join the swarm, and the second is a worker.
Now configure a docker-machine shell pointed at the VM that is the swarm manager.
docker-machine env myvm1 It shows us the commands to run to achieve this.
From now on, the shell is connected to myvm1, the swarm manager, and commands run directly on it.
docker-machine ls
cd C:\Users\Antonio\Desktop
mkdir compose-test
cd .\compose-test
notepad .\Docker-compose.yml
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: rbcost/get-started:part2
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
networks:
webnet:
The only thing new here is the peer service to web, named visualizer. Notice two new things: a volumes key, giving the visualizer access to the host's socket file for Docker, and a placement key, ensuring that this service only ever runs on a swarm manager, never a worker. That's because this container, built from an open source project created by Docker, displays Docker services running on a swarm in a diagram.
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: rbcost/get-started:part2
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- "/home/docker/data:/data"
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:
Redis has an official image in the Docker library and has been granted the
short image name of just redis, so no username/repo notation here. The Redis port, 6379,
has been pre-configured by Redis to be exposed from the container to the host, and here in
our Compose file we expose it from the host to the world, so you can actually enter the IP
for any of your nodes into Redis Desktop Manager and manage this Redis instance, if you
so choose.
Most importantly, there are a couple of things in the redis specification that make data persist between deployments of this stack:
• redis always runs on the manager, so it's always using the same filesystem.
• redis accesses an arbitrary directory in the host's file system as /data inside the container, which is where Redis stores data.
Together, this is creating a "source of truth" in your host's physical filesystem for the Redis data. Without this, Redis would store its data in /data inside the container's filesystem, which would get wiped out if that container were ever redeployed. This source of truth has two components:
• The placement constraint you put on the Redis service, ensuring that it always uses the same host.
• The volume you created that lets the container access ./data (on the host) as /data (inside the Redis container). While containers come and go, the files stored on ./data on the specified host persist, enabling continuity.
mkdir ./data We create the data directory in /home/docker, otherwise the mount point fails.
Now we connect to one of the IPs, port 80, to see the counter working.
If we connect to the manager's IP, port 8080, the visualizer shows us the redis service.
Deploying the application: Docker EE vs CE
Paste the docker-compose.yml and comment out the two volume-mount lines; AWS does not allow access to the host's filesystem. Create.
Networking in depth (docs.docker.com)
Types of network drivers
Docker's networking subsystem is pluggable, using drivers. Several drivers exist by default, and provide core networking functionality:
• bridge: The default network driver. If you don't specify a driver, this is the type of network you are creating. Bridge networks are usually used when your applications run in standalone containers that need to communicate.
• host: For standalone containers, remove network isolation between the container and the Docker host, and use the host's networking directly. host is only available for swarm services on Docker 17.06 and higher.
• overlay: Overlay networks connect multiple Docker daemons together and enable swarm services to communicate with each other. You can also use overlay networks to facilitate communication between a swarm service and a standalone container, or between two standalone containers on different Docker daemons. This strategy removes the need to do OS-level routing between these containers.
• macvlan: Macvlan networks allow you to assign a MAC address to a container, making it appear as a physical device on your network. The Docker daemon routes traffic to containers by their MAC addresses. Using the macvlan driver is sometimes the best choice when dealing with legacy applications that expect to be directly connected to the physical network, rather than routed through the Docker host's network stack.
• none: For this container, disable all networking. Usually used in conjunction with a custom network driver. none is not available for swarm services.
• Network plugins: You can install and use third-party network plugins with Docker. These plugins are available from Docker Hub or from third-party vendors. See the vendor's documentation for installing and using a given network plugin.
Using bridge networks
In terms of networking, a bridge network is a Link Layer device which forwards traffic
between network segments.
In terms of Docker, a bridge network uses a software bridge which allows containers
connected to the same bridge network to communicate, while providing isolation from
containers which are not connected to that bridge network. The Docker bridge driver
automatically installs rules in the host machine so that containers on different bridge
networks cannot communicate directly with each other.
Bridge networks apply to containers running on the same Docker daemon host. For communication among containers running on different Docker daemon hosts, you can use an overlay network.
When you start Docker, a default bridge network (also called bridge) is created
automatically, and newly-started containers connect to it unless otherwise specified. You
can also create user-defined custom bridge networks. User-defined bridge networks are
superior to the default bridge network.
Imagine an application with a web front-end and a database back-end. The outside
world needs access to the web front-end (perhaps on port 80), but only the back-
end itself needs access to the database host and port. Using a user-defined bridge,
only the web port needs to be opened, and the database application doesn’t need
any ports open, since the web front-end can reach it over the user-defined bridge.
If you run the same application stack on the default bridge network, you need to
open both the web port and the database port, using the -p or --publish flag for
each. This means the Docker host needs to block access to the database port by
other means.
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
Imagine the same application as in the previous point, with a web front-end and a
database back-end. If you call your containers web and db, the web container can
connect to the db container at db, no matter which Docker host the application stack
is running on.
If you run the same application stack on the default bridge network, you need to manually create links between the containers (using the legacy --link flag). These links need to be created in both directions, so you can see this gets complex with more than two containers which need to communicate. Alternatively, you can manipulate the /etc/hosts files within the containers, but this creates problems that are difficult to debug.
If your containers use the default bridge network, you can configure it, but all the
containers use the same settings, such as MTU and iptables rules. In addition,
configuring the default bridge network happens outside of Docker itself, and
requires a restart of Docker.
User-defined bridge networks are created and configured using docker network
create. If different groups of applications have different network requirements, you
can configure each user-defined bridge separately, as you create it.
Originally, the only way to share environment variables between two containers was to link them using the --link flag. This type of variable sharing is not possible with user-defined networks. However, there are superior ways to share environment variables. A few ideas:
o You can use swarm services instead of standalone containers, and take advantage of shared secrets and configs.
Containers connected to the same user-defined bridge network effectively expose all ports
to each other. For a port to be accessible to containers or non-Docker hosts on different
networks, that port must be published using the -p or --publish flag.
docker network create my-net Creates a user-defined network. With the --help option we can see all the options: IP range, gateway, etc.
When you create a new container, you can specify one or more --network flags. This
example connects a Nginx container to the my-net network. It also publishes port 80 in the
container to port 8080 on the Docker host, so external clients can access that port. Any
other container connected to the my-net network has access to all ports on the my-
nginx container, and vice versa.
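A sketch of such a container, mirroring the scenario just described:
docker create --name my-nginx --network my-net --publish 8080:80 nginx:latest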
docker start my-nginx Press CTRL+p then CTRL+q to detach and return to the host.
Configure the Linux kernel to allow IP forwarding:
$ sysctl net.ipv4.conf.all.forwarding=1
Change the policy for the iptables FORWARD policy from DROP to ACCEPT:
$ sudo iptables -P FORWARD ACCEPT
These settings do not persist across a reboot, so you may need to add them to a start-up script.
Use the default bridge network
The default bridge network is considered a legacy detail of Docker and is not
recommended for production use.
Connect a container to the default bridge network
If you do not specify a network using the --network flag, and you do specify a network
driver, your container is connected to the default bridge network by default.
Containers connected to the default bridge network can communicate, but only by IP
address, unless they are linked using the --link flag.
Using overlay networks
The overlay network driver creates a distributed network among multiple Docker daemon
hosts. This network sits on top of (overlays) the host-specific networks, allowing containers
connected to it (including swarm service containers) to communicate securely. Docker
transparently handles routing of each packet to and from the correct Docker daemon host
and the correct destination container.
When you initialize a swarm or join a Docker host to an existing swarm, two new networks
are created on that Docker host:
an overlay network called ingress, which handles control and data traffic related to
swarm services. When you create a swarm service and do not connect it to a user-
defined overlay network, it connects to the ingress network by default.
a bridge network called docker_gwbridge, which connects the individual Docker
daemon to the other daemons participating in the swarm.
You can create user-defined overlay networks using docker network create, in the same
way that you can create user-defined bridge networks. Services or containers can be
connected to more than one network at a time. Services or containers can only
communicate across networks they are each connected to.
Although you can connect both swarm services and standalone containers to an overlay
network, the default behaviors and configuration concerns are different. For that reason,
the rest of this topic is divided into operations that apply to all overlay networks, those that
apply to swarm service networks, and those that apply to overlay networks used by
standalone containers.
When you enable overlay encryption, Docker creates IPSEC tunnels between all the nodes where tasks are scheduled for services attached to the overlay network. These tunnels use the AES algorithm in GCM mode, and manager nodes automatically rotate the keys every 12 hours.
Customizing the ingress network involves removing and recreating it. This is usually done before you create any services in the swarm. If you have existing services which publish ports, those services need to be removed before you can remove the ingress network. During the time that no ingress network exists, existing services which do not publish ports continue to function but are not load-balanced. This affects services which publish ports, such as a WordPress service which publishes port 80.
1. Inspect the ingress network using docker network inspect ingress, and remove any services whose containers are connected to it. These are services that publish ports, such as a WordPress service which publishes port 80. If all such services are not stopped, the next step fails.
2. Remove the existing ingress network:
$ docker network rm ingress
WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly created ingress networks will be impaired.
Are you sure you want to continue? [y/N]
3. Create a new overlay network using the --ingress flag, along with the custom options you want to set. This example sets the MTU to 1200, sets the subnet to 10.11.0.0/16, and sets the gateway to 10.11.0.2.
$ docker network create \
  --driver overlay \
  --ingress \
  --subnet=10.11.0.0/16 \
  --gateway=10.11.0.2 \
  --opt com.docker.network.driver.mtu=1200 \
  my-ingress
Note: You can name your ingress network something other than ingress, but you can only have one. An attempt to create a second one fails.
4. Restart the services that you stopped in the first step.
The docker_gwbridge is a virtual bridge that connects the overlay networks (including
the ingress network) to an individual Docker daemon’s physical network. Docker creates it
automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a
Docker device. It exists in the kernel of the Docker host. If you need to customize its
settings, you must do so before joining the Docker host to the swarm, or after temporarily
removing the host from the swarm.
1. Stop Docker.
2. Delete the existing docker_gwbridge interface:
$ sudo ip link set docker_gwbridge down
$ sudo ip link del dev docker_gwbridge
3. Start Docker. Do not join or initialize the swarm.
4. Create or re-create the docker_gwbridge bridge manually with your custom settings, using the docker network create command. This example uses the subnet 10.11.0.0/16. For a full list of customizable options, see Bridge driver options.
$ docker network create \
  --subnet 10.11.0.0/16 \
  --opt com.docker.network.bridge.name=docker_gwbridge \
  --opt com.docker.network.bridge.enable_icc=false \
  --opt com.docker.network.bridge.enable_ip_masquerade=true \
  docker_gwbridge
5. Initialize or join the swarm. Since the bridge already exists, Docker does not create it with automatic settings.
Swarm services connected to the same overlay network effectively expose all ports to each other. For a port to be accessible outside of the service, that port must be published using the -p or --publish flag on docker service create or docker service update. Both the legacy colon-separated syntax and the newer comma-separated value syntax are supported. The longer syntax is preferred because it is somewhat self-documenting.
By default, swarm services which publish ports do so using the routing mesh. When you
connect to a published port on any swarm node (whether it is running a given service or
not), you are redirected to a worker which is running that service, transparently. Effectively,
Docker acts as a load balancer for your swarm services. Services using the routing mesh
are running in virtual IP (VIP) mode. Even a service running on each node (by means of
the --mode global flag) uses the routing mesh. When using the routing mesh, there is no
guarantee about which Docker node services client requests.
To bypass the routing mesh, you can start a service using DNS Round Robin (DNSRR)
mode, by setting the --endpoint-mode flag to dnsrr. You must run your own load balancer
in front of the service. A DNS query for the service name on the Docker host returns a list
of IP addresses for the nodes running the service. Configure your load balancer to
consume this list and balance the traffic across the nodes.
By default, control traffic relating to swarm management and traffic to and from your applications runs over the same network, though the swarm control traffic is encrypted. You can configure Docker to use separate network interfaces for handling the two different types of traffic. When you initialize or join the swarm, specify --advertise-addr and --datapath-addr separately. You must do this for each node joining the swarm.
The ingress network is created without the --attachable flag, which means that only swarm services can use it, and not standalone containers. You can connect standalone containers to user-defined overlay networks which are created with the --attachable flag. This gives standalone containers running on different Docker daemons the ability to communicate without the need to set up routing on the individual Docker daemon hosts.
Publish ports
Flag value: -p 8080:80/tcp -p 8080:80/udp
Meaning: Map TCP port 80 in the container to TCP port 8080 on the overlay network, and map UDP port 80 in the container to UDP port 8080 on the overlay network.
Container discovery
For most situations, you should connect to the service name, which is load-balanced and
handled by all containers (“tasks”) backing the service. To get a list of all tasks backing the
service, do a DNS lookup for tasks.<service-name>.
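For example, from inside a container attached to the network (the service name my-service is hypothetical):
nslookup tasks.my-service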
Using host networks
If you use the host network driver for a container, that container’s network stack is not
isolated from the Docker host. For instance, if you run a container which binds to port 80
and you use host networking, the container’s application will be available on port 80 on the
host’s IP address.
The host networking driver only works on Linux hosts, and is not supported on Docker
Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
In Docker 17.06 and higher, you can also use a host network for a swarm service, by passing --network host to the docker service create command. In this case, control traffic (traffic related to managing the swarm and the service) is still sent across an overlay network, but the individual swarm service containers send data using the Docker daemon's host network and ports. This creates some extra limitations. For instance, if a service container binds to port 80, only one service container can run on a given swarm node.
If your application can work using a bridge (on a single Docker host) or overlay (to
communicate across multiple Docker hosts), these solutions may be better in the
long term.
When you create a macvlan network, it can either be in bridge mode or 802.1q trunk bridge
mode.
In bridge mode, macvlan traffic goes through a physical device on the host.
In 802.1q trunk bridge mode, traffic goes through an 802.1q sub-interface which
Docker creates on the fly. This allows you to control routing and filtering at a more
granular level.
Bridge mode
To create a macvlan network which bridges with a given physical network interface, use --
driver macvlan with the docker network create command. You also need to specify
the parent, which is the interface the traffic will physically go through on the Docker host.
$ docker network create -d macvlan \
--subnet=172.16.86.0/24 \
--gateway=172.16.86.1 \
-o parent=eth0 pub_net
If you need to exclude IP addresses from being used in the macvlan network, such as when
a given IP address is already in use, use --aux-addresses:
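A hedged example excluding one address (the values are illustrative):
$ docker network create -d macvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  --aux-address="my-router=172.16.86.5" \
  -o parent=eth0 pub_net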
If you specify a parent interface name with a dot included, such as eth0.50, Docker interprets that as a sub-interface of eth0 and creates the sub-interface automatically.
In the above example, you are still using a L3 bridge. You can use ipvlan instead, and get
an L2 bridge. Specify -o ipvlan_mode=l2.
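A sketch of the ipvlan variant (subnet, parent interface, and network name assumed):
$ docker network create -d ipvlan \
  --subnet=172.16.86.0/24 \
  --gateway=172.16.86.1 \
  -o parent=eth0 -o ipvlan_mode=l2 ipvlan_net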
Use IPv6
If you want to completely disable the networking stack on a container, you can use the --network none flag when starting the container. Within the container, only the loopback device is created. The following example illustrates this.
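A hedged illustration (container name assumed):
docker run --rm -dit --network none --name no-net-alpine alpine:latest ash
docker exec no-net-alpine ip link show
# shows only the loopback device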
Use the default bridge network demonstrates how to use the default bridge network that
Docker sets up for you automatically. This network is not the best choice for production
systems.
In this example, you start two different alpine containers on the same Docker host.
docker run -dit --name alpine1 alpine ash -d (detached), -i (interactive, keeps STDIN open), -t (allocates a pseudo-TTY), --name (the container name), ash (the Almquist shell).
docker network create --driver bridge alpine-net "--driver bridge" does not need to be specified, since bridge is the default.
docker network inspect alpine-net We inspect the new network; the output follows.
[
{
"Name": "alpine-net",
"Id": "1bf63af0d2e5f2b47b48a1d6d1fe4059aaca83b35117bae179b72638c6549739",
"Created": "2019-06-08T17:01:10.82959791+02:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {},
"Options": {},
"Labels": {}
}
]
We create 4 containers. With "docker run" a container can only be connected to a single network; afterwards we can connect it to more with "docker network connect".
docker run -dit --name alpine1 --network alpine-net alpine ash We create a container and connect it to "alpine-net".
docker run -dit --name alpine2 --network alpine-net alpine ash We create a container and connect it to "alpine-net".
docker run -dit --name alpine3 alpine ash We create a container and connect it to the default network (bridge).
docker run -dit --name alpine4 --network alpine-net alpine ash We create a container and connect it to "alpine-net".
docker network connect bridge alpine4 We connect alpine4 to the default bridge network as well, as the inspect output below shows.
"Containers": {
"867e3cb9a48b3c351b00bb7979253175a5e9a11810e7d74a208f9ba26c89d3a1": {
"Name": "alpine4",
"EndpointID": "4c90d3f8dd8e4099ece8e434c77df4672f299f1243796370b979592cc918871f",
"MacAddress": "02:42:ac:11:00:03",
"IPv4Address": "172.17.0.3/16",
"IPv6Address": ""
},
"9b5f89b2ca6c2e04aa2dbd7e03fc7d2c0ca3cdebdbede0c388137becb2911e5c": {
"Name": "alpine3",
"EndpointID": "244ff2245ccbaf4deb7f46fbe5dc4dd458ea97b8be0fdabd25a5c03af85faa51",
"MacAddress": "02:42:ac:11:00:02",
"IPv4Address": "172.17.0.2/16",
"IPv6Address": ""
}
},
"Containers": {
"16181f194b77fd4218e2f1a0d7057d09669a8b6f5ea3ad6b6069904b58436282": {
"Name": "alpine2",
"EndpointID": "c71fba3b70d3c1a51060d4ae6e48fc7228418c111d8886ab671ac877fdaa2fe7",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
},
"29a5ce36b9c73ca98ec452bc239026f15402c52aaed408511de85067c3a7689b": {
"Name": "alpine1",
"EndpointID": "7f853c6f64edbff6d1236072b8b69c5fd5b9863eba507ba04ae76896f90a4c5c",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"867e3cb9a48b3c351b00bb7979253175a5e9a11810e7d74a208f9ba26c89d3a1": {
"Name": "alpine4",
"EndpointID": "b3507be7846b6cfbaef18bccac6650306ab9ca63e62a3dace6038e79d5e2c77b",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/16",
"IPv6Address": ""
}
},
Remember that alpine4 is connected to both the default bridge network and alpine-net. It should be able to reach all of the other containers. However, you will need to address alpine3 by its IP address. (That is because alpine3 is not connected to a user-defined network, which is what provides automatic service discovery and lets a container name resolve to an IP.) Attach to it and run the tests.
The traceroute command shows which gateways are used. For alpine4 it goes through gateway 172.20.0.1 (the alpine-net network) and then 192.168.1.1 (the Avante one).
The goal of this tutorial is to start a nginx container which binds directly to port 80 on the Docker host. From a networking point of view, this is the same level of isolation as if the nginx process were running directly on the Docker host and not in a container. However, in all other ways, such as storage, process namespace, and user namespace, the nginx process is isolated from the host.
Prerequisites
This procedure requires port 80 to be available on the Docker host. To make Nginx listen on a different port, see the documentation for the nginx image.
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
docker run --rm -d --network host --name my_nginx nginx --rm means the container is removed when it stops or exits.
ip add show We check that no new interfaces have been created on the host. The "docker0" interface belongs to the bridge network.
netstat -tulpn | grep :80 We verify that the process is bound to port 80 on the host.
docker container stop my_nginx When the container stops it is removed, because we used the --rm option.
Overlay networks tutorial
This topic includes four different tutorials. You can run each of them on Linux, Windows, or a
Mac, but for the last two, you need a second Docker host running elsewhere.
Use the default overlay network demonstrates how to use the default overlay network
that Docker sets up for you automatically when you initialize or join a swarm. This
network is not the best choice for production systems.
Use user-defined overlay networks shows how to create and use your own custom
overlay networks, to connect services. This is recommended for services running in
production.
Use an overlay network for standalone containers shows how to communicate
between standalone containers on different Docker daemons using an overlay
network.
Communicate between a container and a swarm service sets up communication
between a standalone container and a swarm service, using an attachable overlay
network. This is supported in Docker 17.06 and higher.
These require you to have at least a single-node swarm, which means that you have started Docker and run docker swarm init on the host. You can run the examples on a multi-node swarm as well.
Prerequisites
This tutorial requires three physical or virtual Docker hosts which can all communicate with
one another, all running new installations of Docker 17.03 or higher.
This tutorial assumes that the three hosts are running on the same network with no firewall
involved. These hosts will be referred to as manager, worker-1, and worker-2.
The manager host will function as both a manager and a worker, which means it can both run
service tasks and manage the swarm. worker-1 and worker-2 will function as workers only
Deploy the VMs, change the VM names and the hosts files, and delete any containers they might have.
We create the swarm.
(On docker-01, the manager)
docker swarm init --advertise-addr=192.168.1.109 We initialize the swarm.
We list the networks on the manager and on the workers and note that each of them has an overlay network called "ingress".
We create the service.
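The network and service creation steps, reconstructed as a sketch from the names used below:
docker network create -d overlay nginx-net
docker service create --name my-nginx --publish target=80,published=80 --replicas=5 --network nginx-net nginx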
You don't need to create the overlay network on the other nodes, because it will be automatically created when one of those nodes starts running a service task which requires it.
The default publish mode of ingress, which is used when you do not specify a mode for the --publish flag, means that if you browse to port 80 on manager, worker-1, or worker-2, you will be connected to port 80 on one of the 5 service tasks, even if no tasks are currently running on the node you browse to. If you want to publish the port using host mode, you can add mode=host to the --publish output. However, you should also use --mode global instead of --replicas=5 in this case, since only one service task can bind a given port on a given node.
Let's see how it looks.
(On the manager, docker-01)
docker network inspect nginx-net I inspect nginx-net and look at the containers that have this network configured. The load balancer's container is there too.
"Containers": {
"1ebd92275be3cb9c26afee929bfa819033adacd9e6d399d0626ccc5582467c72": {
"Name": "my-nginx.5.znv2c6v9pqokjd1f8ol4plzuj",
"EndpointID": "98785a9000825ae6ae5418088a6b4465ed2a6b5b7188e6cdbfca858497488e95",
"MacAddress": "02:42:0a:00:00:11",
"IPv4Address": "10.0.0.17/24",
"IPv6Address": ""
},
"5f813331af637e281b9e298d80cfb03689c77392cbd3ad0b4312ce0785f7d574": {
"Name": "my-nginx.2.w8fqpbv3fha0e3hyq6jhg54tu",
"EndpointID": "3e9e7a5f47b71ac537896d438ecf9886214471a9692ad467c822af2f3bfa6816",
"MacAddress": "02:42:0a:00:00:0e",
"IPv4Address": "10.0.0.14/24",
"IPv6Address": ""
},
"lb-nginx-net": {
"Name": "nginx-net-endpoint",
"EndpointID": "7cec9ee2f7a9d158a454a4056d79919eaa0aad465a8ed253e25918bc211fa3ef",
"MacAddress": "02:42:0a:00:00:14",
"IPv4Address": "10.0.0.20/24",
"IPv6Address": ""
}
"Containers": {
"fe84f867fce3f0bfee34d325c2a4155447c324bdf2673d9bb446c2073a8c84b5": {
"Name": "my-nginx.3.30uh39wu8dsa9uiydpgpxdhx9",
"EndpointID": "7d828e9c579c2b95e7311d24eb71197778e409ee3204fc1b81c69b1b7f17413c",
"MacAddress": "02:42:0a:00:00:0f",
"IPv4Address": "10.0.0.15/24",
"IPv6Address": ""
},
"lb-nginx-net": {
"Name": "nginx-net-endpoint",
"EndpointID": "d3c808e16b0161e5cd9ae543b3136c5281c0dfa39636e65e44b455318c8a6c4f",
"MacAddress": "02:42:0a:00:00:12",
"IPv4Address": "10.0.0.18/24",
"IPv6Address": ""
}
},
docker network create -d overlay nginx-net-2 (-d = driver) We create a new overlay network called "nginx-net-2".
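To move the service to the new network, the docs use a service update along these lines (a sketch):
docker service update --network-add nginx-net-2 --network-rm nginx-net my-nginx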
"Containers": {
"lb-nginx-net": {
"Name": "nginx-net-endpoint",
"EndpointID": "7cec9ee2f7a9d158a454a4056d79919eaa0aad465a8ed253e25918bc211fa3ef",
"MacAddress": "02:42:0a:00:00:14",
"IPv4Address": "10.0.0.20/24",
"IPv6Address": ""
}
},
"Containers": {
"7005dc5ab77e9be0eb7b1e4affaf03113588be9ffb23ef9f723fa2fd92992f3e": {
"Name": "my-nginx.3.z6cyugn2hfw4ieb8rx3w47bll",
"EndpointID": "932f06e63b1de05cbc3a5eca5bcd56ef57063570b5d3a15bf3e838788ad5c0e3",
"MacAddress": "02:42:0a:00:01:0c",
"IPv4Address": "10.0.1.12/24",
"IPv6Address": ""
},
"9e1905a87f519d68a3b108ff852253969bae481ca168b9bf20e02aa7e56fb860": {
"Name": "my-nginx.1.7k5q66axtyq8q2zbi91vu2onx",
"EndpointID": "61da071abc0c079a26f8d29d445d936d12633954c65390fcaffbc87a4892f49b",
"MacAddress": "02:42:0a:00:01:0b",
"IPv4Address": "10.0.1.11/24",
"IPv6Address": ""
},
"be7fae9347697fab385f00b4a1ac1f10cefd89107cf895c698e2f77a409c55e6": {
"Name": "my-nginx.5.0ybrb7269p2b6pfnj4x9bs7ua",
"EndpointID": "7fc8b12a39aa6527dd522a2a7910b55561d0bd1a117ac0868eac524efaf31ed0",
"MacAddress": "02:42:0a:00:01:09",
"IPv4Address": "10.0.1.9/24",
"IPv6Address": ""
},
"lb-nginx-net-2": {
"Name": "nginx-net-2-endpoint",
"EndpointID": "ba8535794ffd29123ba9274cdd41a27e68d2194fe29bc9972a5b4dd2a8d7f931",
"MacAddress": "02:42:0a:00:01:0a",
"IPv4Address": "10.0.1.10/24",
"IPv6Address": ""
}
},
We have seen that overlay networks are created automatically on the worker nodes, but
removing them must be done manually.
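A cleanup sketch from the docs tutorial, using the names above:

(On the manager)
docker service rm my-nginx
docker network rm nginx-net nginx-net-2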
Prerequisites
For this test, you need two different Docker hosts that can communicate with each other.
Each host must have Docker 17.06 or higher with the following ports open between the two
Docker hosts:
TCP port 2377 (cluster management communications)
TCP and UDP port 7946 (communication among nodes)
UDP port 4789 (overlay network traffic)
(On docker-01)
docker swarm init   We initialize the swarm.
(On docker-02)
docker swarm join --token SWMTKN-1-1xu1nb215dwpsd2xc6fkbmqi3lyetk2k3ob366thq8vpmy4d1u-8u6y5wkuw96m2vildzphkzlhj 192.168.1.109:2377   We add the worker.
(On docker-01)
docker network create --driver=overlay --attachable test-net   We create an attachable
overlay network (attachable = allows manually attaching containers) called "test-net".
(On docker-01)
docker run -it --name alpine1 --network test-net alpine   We start an interactive (-it)
alpine container and connect it to the "test-net" overlay network.
(On docker-02)
docker network ls   Note that the "test-net" network does not exist yet.
(On docker-02)
docker run -dit --name alpine2 --network test-net alpine   We create a detached, interactive
container called "alpine2" connected to "test-net".
(On docker-02)
docker network ls   We verify that the "test-net" network has been created and that it has
the same NETWORK ID as on docker-01.
The two containers communicate over the overlay network connecting the two hosts. If you
run another alpine container on docker-02 that is not detached, you can
ping alpine1 from docker-02 (and here we add the --rm option for automatic container
cleanup):
(On docker-02)
docker run -it --rm --name alpine3 --network test-net alpine   We create an interactive
container that is removed when it stops or exits.
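Inside alpine3, containers on the overlay network resolve each other by name; the docs tutorial verifies it like this:

ping -c 2 alpine1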
We leave the containers with "exit". alpine3 is removed automatically because it was
created with the "--rm" option.
docker container stop alpine2   On docker-02, "alpine2" is still running because we created
it detached. We stop it.
(On docker-01)
docker container rm alpine1   We remove the container.
This series of tutorials deals with networking standalone containers which connect
to macvlan networks. In this type of network, the Docker host accepts requests for multiple
MAC addresses at its IP address, and routes those requests to the appropriate container.
For other networking topics, see the networking overview on docs.docker.com.
The goal of these tutorials is to set up a bridged macvlan network and attach a container to
it, then set up an 802.1q trunked macvlan network and attach a container to it.
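The inspect output below corresponds to a network created roughly as follows (a sketch: the parent interface, here assumed to be eth0, and the subnet/gateway must match your physical network):

docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 my-macvlan-net
docker run --rm -dit --network my-macvlan-net --name my-macvlan-alpine alpine ash
docker container inspect my-macvlan-alpine   (look at the "Networks" section)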
"Networks": {
"my-macvlan-net": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"62d172ba8926"
],
"NetworkID": "933f41f89bedd49127f751ba9449e4b5036695470160af9101c39f5565ed8d90",
"EndpointID": "600243c3cb9ac1fe247a359b1e6b564bd6b4b70784ea2dd749f9c417811fd615",
"Gateway": "192.168.1.1",
"IPAddress": "192.168.1.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:c0:a8:01:02",
"DriverOpts": null
}
Before you can use IPv6 in Docker containers or swarm services, you need to enable IPv6
support in the Docker daemon. Afterward, you can choose to use either IPv4 or IPv6 (or
both) with any container, service, or network.
Note: IPv6 networking is only supported on Docker daemons running on Linux hosts.
1. Edit /etc/docker/daemon.json and set the ipv6 key to true. (The file does not exist by
default, but you can create it; it is the standard place to add daemon configuration.)

{
  "ipv6": true
}

2. Save the file.
3. Reload the Docker configuration file:

$ systemctl reload docker
You can now create networks with the --ipv6 flag and assign containers IPv6 addresses
using the --ip6 flag.
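A minimal sketch (the subnet is an example value):

docker network create --ipv6 --subnet="2001:db8:1::/64" ip6net
docker run --rm -it --network ip6net --ip6 2001:db8:1::10 alpine ip -6 addr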
Docker and iptables
Add iptables policies before Docker's rules
All of Docker's iptables rules are added to the DOCKER chain. Do not manipulate this table
manually. If you need to add rules which load before Docker’s rules, add them to
the DOCKER-USER chain. These rules are loaded before any rules Docker creates
automatically.
Restrict connections to the Docker daemon
By default, all external source IPs are allowed to connect to the Docker host. To allow only
a specific IP or network, insert a negated rule at the top of the DOCKER-USER chain; in the
docs example, ext_if must be changed to your host's actual external interface. You could,
for instance, allow connections from a source subnet. The following rule only allows
access from the subnet 192.168.1.0/24:
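(From the docs, with ext_if as the external interface:)

$ iptables -I DOCKER-USER -i ext_if ! -s 192.168.1.0/24 -j DROP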
Finally, you can specify a range of IP addresses to accept using --src-range (remember
to also add -m iprange when using --src-range or --dst-range):

$ iptables -I DOCKER-USER -m iprange -i ext_if ! --src-range 192.168.1.1-192.168.1.3 -j DROP
You can combine -s or --src-range with -d or --dst-range to control both the source and
destination. For instance, if the Docker daemon listens on both 192.168.1.99 and 10.1.2.3,
you can make rules specific to 10.1.2.3 and leave 192.168.1.99 open.
iptables is complicated, and more complicated rules are out of scope for this topic. See
the Netfilter.org documentation for a lot more information.
Prevent Docker from manipulating iptables
To prevent Docker from manipulating the iptables policies at all, set the iptables key
to false in /etc/docker/daemon.json. This is inappropriate for most users, because
the iptables policies then need to be managed by hand.
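A minimal /etc/docker/daemon.json sketch for this setting:

{
  "iptables": false
}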
Container networks
Published ports
By default, when you create a container, it does not publish any of its ports to the outside
world. To make a port available to services outside of Docker, or to Docker containers
which are not connected to the container’s network, use the --publish or -p flag. This
creates a firewall rule which maps a container port to a port on the Docker host. Here are
some examples.
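For instance, the following maps TCP port 80 in the container to port 8080 on the Docker host (port numbers are illustrative):

docker run -d -p 8080:80 nginx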
IP address and hostname
When the container starts, it can only be connected to a single network, using --network.
However, you can connect a running container to multiple networks using docker network
connect. When you start a container using the --network flag, you can specify the IP
address assigned to the container on that network using the --ip or --ip6 flags.
When you connect an existing container to a different network using docker network
connect, you can use the --ip or --ip6flags on that command to specify the container’s IP
address on the additional network.
In the same way, a container’s hostname defaults to be the container’s ID in Docker. You
can override the hostname using --hostname. When connecting to an existing network
using docker network connect, you can use the --alias flag to specify an additional
network alias for the container on that network.
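A sketch combining these flags (network names, subnet, and addresses are illustrative):

docker network create --subnet 172.25.0.0/16 my-net
docker run -dit --name web --network my-net --ip 172.25.0.10 --hostname webhost nginx
docker network connect --alias web-alias my-net2 web   (assumes a second network my-net2 exists)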
DNS services
By default, a container inherits the DNS settings of the Docker daemon, including
the /etc/hosts and /etc/resolv.conf.You can override these settings on a per-container
basis.
Flag         Description
--dns        The IP address of a DNS server. To specify multiple DNS servers, use multiple
             --dns flags. If the container cannot reach any of the IP addresses you specify,
             Google's public DNS server 8.8.8.8 is added, so that your container can resolve
             internet domains.
--dns-opt    A key-value pair representing a DNS option and its value. See your operating
             system's documentation for resolv.conf for valid options.
--hostname   The hostname a container uses for itself. Defaults to the container's ID if not
             specified.
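A quick way to see the effect of these flags (image choice is illustrative):

docker run --rm --dns 8.8.8.8 --hostname myhost alpine cat /etc/resolv.conf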
Configure Docker to use a proxy server
If your container needs to use an HTTP, HTTPS, or FTP proxy server, you can configure it
in different ways:
In Docker 17.07 and higher, you can configure the Docker client to pass proxy
information to containers automatically.
In Docker 17.06 and lower, you must set appropriate environment variables within
the container. You can do this when you build the image (which makes the image
less portable) or when you create or run the container.
Configure the Docker client
1. On the Docker client, create or edit the file ~/.docker/config.json in the home directory
of the user which starts containers, and add the proxy configuration (see the sketch below).
2. When you create or start new containers, the environment variables are set
automatically within the container.
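A sketch of that file, with placeholder proxy addresses, following the documented format:

{
  "proxies": {
    "default": {
      "httpProxy": "http://127.0.0.1:3001",
      "httpsProxy": "https://127.0.0.1:3001",
      "noProxy": "*.test.example.com,.example2.com"
    }
  }
}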
Use environment variables
Set the environment variables manually
When you build the image, or using the --env flag when you create or run the container,
you can set one or more of the following variables to the appropriate value. This method
makes the image less portable, so if you have Docker 17.07 or higher, you
should configure the Docker client instead.
Variable      Dockerfile example                          docker run example
HTTPS_PROXY   ENV HTTPS_PROXY="https://127.0.0.1:3001"   --env HTTPS_PROXY="https://127.0.0.1:3001"
Storage overview
By default all files created inside a container are stored on a writable container layer. This
means that:
The data doesn’t persist when that container no longer exists, and it can be difficult
to get the data out of the container if another process needs it.
A container’s writable layer is tightly coupled to the host machine where the
container is running. You can’t easily move the data somewhere else.
Writing into a container's writable layer requires a storage driver to manage the
filesystem. The storage driver provides a union filesystem, using the Linux kernel. This
extra abstraction reduces performance as compared to using data volumes, which write
directly to the host filesystem.
Docker has two options for containers to store files in the host machine, so that the files are
persisted even after the container stops: volumes, and bind mounts. If you’re running
Docker on Linux you can also use a tmpfs mount.
Keep reading for more information about these two ways of persisting data.
Choose the right type of mount
An easy way to visualize the difference among volumes, bind mounts, and tmpfs mounts is
to think about where the data lives on the Docker host.
Volumes are stored in a part of the host filesystem which is managed by
Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not
modify this part of the filesystem. Volumes are the best way to persist data in
Docker.
Bind mounts may be stored anywhere on the host system. They may even be
important system files or directories. Non-Docker processes on the Docker host or
a Docker container can modify them at any time.
tmpfs mounts are stored in the host system’s memory only, and are never written
to the host system’s filesystem.
More details about mount types
Volumes: When you create a volume, it is stored within a directory on the Docker host. When
you mount the volume into a container, this directory is what is mounted into the
container. This is similar to the way that bind mounts work, except that volumes are
managed by Docker and are isolated from the core functionality of the host
machine.
Volumes also support the use of volume drivers, which allow you to store your data
on remote hosts or cloud providers, among other possibilities.
Bind mounts: Available since the early days of Docker. Bind mounts have limited functionality
compared to volumes. When you use a bind mount, a file or directory on the host
machine is mounted into a container. The file or directory is referenced by its full
path on the host machine. The file or directory does not need to exist on the Docker
host already. It is created on demand if it does not yet exist. Bind mounts are very
performant, but they rely on the host machine’s filesystem having a specific
directory structure available. If you are developing new Docker applications,
consider using named volumes instead. You can’t use Docker CLI commands to
directly manage bind mounts.
One side effect of using bind mounts, for better or for worse, is that you can change
the host filesystem via processes running in a container, including creating,
modifying, or deleting important system files or directories. This is a powerful ability
which can have security implications, including impacting non-Docker processes on
the host system.
tmpfs mounts: A tmpfs mount is not persisted on disk, either on the Docker host or within a
container. It can be used by a container during the lifetime of the container, to store
non-persistent state or sensitive information. For instance, internally, swarm
services use tmpfs mounts to mount secrets into a service's containers.
Bind mounts and volumes can both be mounted into containers using the -v or
--volume flag, but the syntax for each is slightly different. For tmpfs mounts, you can use
the --tmpfs flag. However, in Docker 17.06 and higher, we recommend using the
--mount flag for both containers and services, for bind mounts, volumes, or tmpfs mounts, as
the syntax is more clear.
Good use cases for volumes
Volumes are the preferred way to persist data in Docker containers and services. Some
use cases for volumes include:
Sharing data among multiple running containers. If you don’t explicitly create it, a
volume is created the first time it is mounted into a container. When that container
stops or is removed, the volume still exists. Multiple containers can mount the same
volume simultaneously, either read-write or read-only. Volumes are only removed
when you explicitly remove them.
When the Docker host is not guaranteed to have a given directory or file structure.
Volumes help you decouple the configuration of the Docker host from the container
runtime.
When you want to store your container’s data on a remote host or a cloud provider,
rather than locally.
When you need to back up, restore, or migrate data from one Docker host to
another, volumes are a better choice. You can stop containers using the volume,
then back up the volume’s directory (such as /var/lib/docker/volumes/<volume-
name>).
Good use cases for bind mounts
Sharing configuration files from the host machine to containers. This is how Docker
provides DNS resolution to containers by default, by
mounting /etc/resolv.conf from the host machine into each container.
Sharing source code or build artifacts between a development environment on the
Docker host and a container. For instance, you may mount a
Maven target/directory into a container, and each time you build the Maven project
on the Docker host, the container gets access to the rebuilt artifacts.
If you use Docker for development this way, your production Dockerfile would copy
the production-ready artifacts directly into the image, rather than relying on a bind
mount.
Tips for using bind mounts or volumes
If you mount an empty volume into a directory in the container in which files or
directories exist, these files or directories are propagated (copied) into the volume.
Similarly, if you start a container and specify a volume which does not already exist,
an empty volume is created for you. This is a good way to pre-populate data that
another container needs.
If you mount a bind mount or non-empty volume into a directory in the container
in which some files or directories exist, these files or directories are obscured by the
mount, just as if you saved files into /mnt on a Linux host and then mounted a USB
drive into /mnt. The contents of /mnt would be obscured by the contents of the USB
drive until the USB drive were unmounted. The obscured files are not removed or
altered, but are not accessible while the bind mount or volume is mounted.
Volumes
Volumes are the preferred mechanism for persisting data generated by and used by
Docker containers. While bind mounts are dependent on the directory structure of the host
machine, volumes are completely managed by Docker. Volumes have several advantages over bind
mounts:
Volumes are easier to back up or migrate than bind mounts.
You can manage volumes using Docker CLI commands or the Docker API.
Volumes work on both Linux and Windows containers.
Volumes can be more safely shared among multiple containers.
Volume drivers let you store volumes on remote hosts or cloud providers, to encrypt
the contents of volumes, or to add other functionality.
New volumes can have their content pre-populated by a container.
In addition, volumes are often a better choice than persisting data in a container’s writable
layer, because a volume does not increase the size of the containers using it, and the
volume’s contents exist outside the lifecycle of a given container.
If your container generates non-persistent state data, consider using a tmpfs mount to
avoid storing the data anywhere permanently, and to increase the container's performance
by avoiding writing into the container's writable layer.
If you need to specify volume driver options, you must use --mount.
If your volume driver accepts a comma-separated list as an option, you must escape the
value from the outer CSV parser. To escape a volume-opt, surround it with double quotes
(") and surround the entire mount parameter with single quotes (').
For example, the local driver accepts mount options as a comma-separated list in
the o parameter. This example shows the correct way to escape the list.
$ docker service create \
    --mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs,volume-opt=device=<nfs-server>:<nfs-path>,"volume-opt=o=addr=<nfs-address>,vers=4,soft,timeo=180,bg,tcp,rw"' \
    --name myservice \
    <IMAGE>
Differences between -v and --mount behavior
As opposed to bind mounts, all options for volumes are available for both --mount and -
v flags.
Start a container with a volume
If you start a container with a volume that does not yet exist, Docker creates the volume for
you. The following example mounts the volume myvol2 into /app/ in the container.
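The docs command for this step, followed by the verification whose output appears below:

docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
docker volume inspect myvol2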
[
{
"CreatedAt": "2019-06-09T16:23:20+02:00",
"Driver": "local",
"Labels": null,
"Mountpoint": "/var/lib/docker/volumes/myvol2/_data",
"Name": "myvol2",
"Options": null,
"Scope": "local"
}
]
Now I enter the container, go to the /app folder and create a file. I exit the container.
From the host I check that the volume contains the file.
I stop the container.
I start it again.
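A sketch of those steps (the file name is illustrative):

docker exec -it devtest sh
/ # touch /app/test.txt
/ # exit
sudo ls /var/lib/docker/volumes/myvol2/_data
docker stop devtest
docker start devtest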
Start a service with volumes
When you start a service and define a volume, each service container uses its own local
volume. None of the containers can share this data if you use the local volume driver, but
some volume drivers do support shared storage. Docker for AWS and Docker for Azure
both support persistent storage using the Cloudstor plugin.
The following example starts a nginx service with four replicas, each of which uses a local
volume called myvol2.
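The docs command for this example:

docker service create -d --replicas=4 --name devtest-service --mount source=myvol2,target=/app nginx:latest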
Populate a volume using a container
If you start a container which creates a new volume, as above, and the container has files
or directories in the directory to be mounted (such as /app/ above), the directory's contents
are copied into the volume. The container then mounts and uses the volume, and other
containers which use the volume also have access to the pre-populated content.
To illustrate this, this example starts an nginx container and populates the new
volume nginx-vol with the contents of the container’s /usr/share/nginx/html directory,
which is where Nginx stores its default HTML content.
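The docs command:

docker run -d --name=nginxtest --mount source=nginx-vol,destination=/usr/share/nginx/html nginx:latest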
Use a read-only volume
This example modifies the one above but mounts the directory as a read-only volume, by
adding ro to the (empty by default) list of options, after the mount point within the container.
Where multiple options are present, separate them by commas.
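The docs command (remove the previous nginxtest container first; docker inspect nginxtest then shows "RW": false for this mount):

docker run -d --name=nginxtest --mount source=nginx-vol,destination=/usr/share/nginx/html,readonly nginx:latest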
Share data among machines
When building fault-tolerant applications, you might need to configure multiple replicas of
the same service to have access to the same files.
There are several ways to achieve this when developing your applications. One is to add
logic to your application to store files on a cloud object storage system like Amazon S3.
Another is to create volumes with a driver that supports writing files to an external storage
system like NFS or Amazon S3.
Use a volume driver
Volume drivers allow you to abstract the underlying storage system from the application
logic. For example, if your services use a volume with an NFS driver, you can update the
services to use a different driver, for example to store data in the cloud, without
changing the application logic.
Initial set-up
This example assumes that you have two nodes, the first of which is a Docker host and can
connect to the second using SSH.
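The docs tutorial first installs the plugin on the Docker host:

docker plugin install --grant-all-permissions vieux/sshfs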
Create a volume using a volume driver
This example specifies an SSH password, but if the two hosts have shared keys configured,
you can omit the password. Each volume driver may have zero or more configurable
options, each of which is specified using an -o flag.
$ docker volume create --driver vieux/sshfs \
-o sshcmd=test@node2:/home/test \
-o password=testpassword \
sshvolume
Start a container which creates a volume using a volume driver
$ docker run -d \
    --name sshfs-container \
    --volume-driver vieux/sshfs \
    --mount src=sshvolume,target=/app,volume-opt=sshcmd=test@node2:/home/test,volume-opt=password=testpassword \
    nginx:latest
Create a service which creates an NFS volume
This example shows how you can create an NFS volume when creating a service. This
example uses 10.0.0.10 as the NFS server and /var/docker-nfs as the exported directory
on the NFS server. Note that the volume driver specified is local.
Both the NFSv3 and the NFSv4 variants are shown below.
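Sketches of both variants, following the docs (NFS server address and export path as stated above):

NFSv3:
$ docker service create -d \
    --name nfs-service \
    --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,volume-opt=o=addr=10.0.0.10' \
    nginx:latest

NFSv4:
$ docker service create -d \
    --name nfs-service \
    --mount 'type=volume,source=nfsvolume,target=/app,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/var/docker-nfs,"volume-opt=o=addr=10.0.0.10,rw,nfsvers=4,async"' \
    nginx:latest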
Backup, restore, or migrate data volumes
Backup a container
For example, in the next command, we:
Launch a new container and mount the volume from the dbstore container
Mount a local host directory as /backup
Pass a command that tars the contents of the dbdata volume to a backup.tar file
inside our /backup directory.
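The docs command, assuming a dbstore container that was created with -v /dbdata:

$ docker run --rm --volumes-from dbstore -v $(pwd):/backup ubuntu tar cvf /backup/backup.tar /dbdata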
When the command completes and the container stops, we are left with a backup of
our dbdata volume.
Restore container from backup
With the backup just created, you can restore it to the same container, or another that you
made elsewhere.
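For example, following the docs, create a new container named dbstore2:

$ docker run -v /dbdata --name dbstore2 ubuntu /bin/bash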
Then un-tar the backup file in the new container's data volume:
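The docs command:

$ docker run --rm --volumes-from dbstore2 -v $(pwd):/backup ubuntu bash -c "cd /dbdata && tar xvf /backup/backup.tar --strip 1"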
You can use the techniques above to automate backup, migration and restore testing using
your preferred tools.
Remove volumes
A Docker data volume persists after a container is deleted. There are two types of volumes
to consider:
Named volumes have a specific source from outside the container, for
example awesome:/bar.
Anonymous volumes have no specific source, so when the container is deleted, you can
instruct the Docker Engine daemon to remove them.
Remove anonymous volumes
To automatically remove anonymous volumes, use the --rm option. For example, this
command creates an anonymous /foo volume. When the container is removed, the Docker
Engine removes the /foo volume but not the awesome volume.
$ docker run --rm -v /foo -v awesome:/bar busybox top
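Remove all volumes
To remove all unused volumes and free up space:

$ docker volume prune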
Bind mounts
Bind mounts have been around since the early days of Docker. Bind mounts have limited
functionality compared to volumes. When you use a bind mount, a file or directory on the host
machine is mounted into a container. The file or directory is referenced by its full or relative
path on the host machine. By contrast, when you use a volume, a new directory is created
within Docker’s storage directory on the host machine, and Docker manages that
directory’s contents.
The file or directory does not need to exist on the Docker host already. It is created on
demand if it does not yet exist. Bind mounts are very performant, but they rely on the host
machine’s filesystem having a specific directory structure available. If you are developing
new Docker applications, consider using named volumes instead. You can't use Docker CLI commands to
directly manage bind mounts.
Choose the -v or --mount flag
Originally, the -v or --volume flag was used for standalone containers and the --mount flag
was used for swarm services. However, starting with Docker 17.06, you can also use
--mount with standalone containers. In general, --mount is more explicit and verbose. The
biggest difference is that the -v syntax combines all the options together in one field, while
the --mount syntax separates them. Here is a comparison of the syntax for each flag.
Tip: New users should use the --mount syntax. Experienced users may be more familiar
with the -v or --volume syntax, but are encouraged to use --mount, because research has
shown it to be easier to use.
-v or --volume: Consists of three fields, separated by colon characters (:). The
fields must be in the correct order, and the meaning of each field is not immediately
obvious.
o In the case of bind mounts, the first field is the path to the file or directory on
the host machine.
o The second field is the path where the file or directory is mounted in the
container.
o The third field is optional, and is a comma-separated list of options, such
as ro, consistent, delegated, cached, z, and Z. These options are
discussed below.
Differences between -v and --mount behavior
Because the -v and --volume flags have been a part of Docker for a long time, their
behavior cannot be changed. This means that there is one behavior that is different
between -v and --mount.
If you use -v or --volume to bind-mount a file or directory that does not yet exist on the
Docker host, -v creates the endpoint for you. It is always created as a directory.
If you use --mount to bind-mount a file or directory that does not yet exist on the Docker
host, Docker does not automatically create it for you, but generates an error.
Start a container with a bind mount
Consider a case where you have a directory source and that when you build the source
code, the artifacts are saved into another directory, source/target/. You want the artifacts
to be available to the container at /app/, and you want the container to get access to a new
build each time you build the source on your development host. Use the following
command to bind-mount the target/ directory into your container at /app/. Run the
command from within the source directory. The $(pwd) sub-command expands to the
current working directory on Linux or macOS hosts.
The --mount and -v examples below produce the same result. You can’t run them both
unless you remove the devtest container after running the first one.
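The --mount version from the docs:

$ docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app nginx:latest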
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest   (with -v it would look like this)
"Mounts": [
{
"Type": "bind",
"Source": "/home/docker/target",
"Destination": "/app",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
This shows that the mount is a bind mount, it shows the correct source and destination, it
shows that the mount is read-write, and that the propagation is set to rprivate.
Use a read-only bind mount
This example modifies the one above but mounts the directory as a read-only bind mount,
by adding ro to the (empty by default) list of options, after the mount point within the
container. Where multiple options are present, separate them by commas.
The --mount and -v examples have the same result.
docker run -d -it --name devtest --mount type=bind,source="$(pwd)"/target,target=/app,readonly nginx:latest   We create the container.
Use docker inspect devtest to verify that the bind mount was created correctly. Look for
the Mounts section:
"Mounts": [
{
"Type": "bind",
"Source": "/home/docker/target",
"Destination": "/app",
"Mode": "",
"RW": false,
"Propagation": "rprivate"
}
],
Configure bind propagation
Bind propagation defaults to rprivate for both bind mounts and volumes. It is only
configurable for bind mounts, and only on Linux host machines. Bind propagation is an
advanced topic and many users never need to configure it.
Bind propagation refers to whether or not mounts created within a given bind-mount or
named volume can be propagated to replicas of that mount. Consider a mount point /mnt,
which is also mounted on /tmp. The propagation settings control whether a mount
on /tmp/a would also be available on /mnt/a. Each propagation setting has a recursive
counterpoint. In the case of recursion, consider that /tmp/a is also mounted as /foo. The
propagation settings control whether /mnt/a and/or /tmp/a would exist.
Propagation setting   Description
rslave                The same as slave, but the propagation also extends to and from
                      mount points nested within any of the original or replica mount
                      points.
(The docs also describe the shared, slave, private, rshared, and rprivate settings.)
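A docs-style sketch that sets propagation on a second, read-only bind mount of the same directory (paths as in the devtest example above):

$ docker run -d -it --name devtest \
    --mount type=bind,source="$(pwd)"/target,target=/app \
    --mount type=bind,source="$(pwd)"/target,target=/app2,readonly,bind-propagation=rslave \
    nginx:latest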
tmpfs mounts
Volumes and bind mounts let you share files between the host machine and container so
that you can persist data even after the container is stopped.
If you’re running Docker on Linux, you have a third option: tmpfs mounts. When you create
a container with a tmpfs mount, the container can create files outside the container’s
writable layer.
As opposed to volumes and bind mounts, a tmpfs mount is temporary, and only persisted
in the host memory. When the container stops, the tmpfs mount is removed, and files
written there won’t be persisted.
This is useful to temporarily store sensitive files that you don’t want to persist in either the
host or the container writable layer.
Choose the --tmpfs or --mount flag
Originally, the --tmpfs flag was used for standalone containers and the --mount flag was
used for swarm services. However, starting with Docker 17.06, you can also use
--mount with standalone containers. In general, --mount is more explicit and verbose. The
biggest difference is that the --tmpfs flag does not support any configurable options.
--tmpfs: Mounts a tmpfs mount without allowing you to specify any configurable
options, and can only be used with standalone containers.
--mount: Consists of multiple key-value pairs, separated by commas and each
consisting of a <key>=<value> tuple. The --mount syntax is more verbose than --
tmpfs:
o The type of the mount, which can be bind, volume, or tmpfs. This topic
discusses tmpfs, so the type is always tmpfs.
o The destination takes as its value the path where the tmpfs mount is
mounted in the container. May be specified as destination, dst, or target.
o The tmpfs-type and tmpfs-mode options. See tmpfs options.
Use a tmpfs mount in a container
The examples below show both the --mount and --tmpfs syntax where possible, and
--mount is presented first.
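docker run -d -it --name tmptest --mount type=tmpfs,destination=/app nginx:latest   Syntax with "--mount" (from the docs; remove the tmptest container before re-running the --tmpfs variant below)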
docker run -d -it --name tmptest --tmpfs /app nginx:latest   Syntax with "--tmpfs"
"Mounts": [
{
"Type": "tmpfs",
"Source": "",
"Destination": "/app",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
Specify tmpfs options
tmpfs mounts allow for two configuration options, neither of which is required. If you need
to specify these options, you must use the --mount flag, as the --tmpfs flag does not
support them.

Option       Description
tmpfs-size   Size of the tmpfs mount in bytes. Unlimited by default.
tmpfs-mode   File mode of the tmpfs in octal. For instance, 700 or 0770. Defaults to 1777,
             or world-writable.
The following example sets the tmpfs-mode to 1770, so that it is not world-readable within
the container.
docker run -d \
-it \
--name tmptest \
--mount type=tmpfs,destination=/app,tmpfs-mode=1770 \
nginx:latest
Types of storage drivers
To use storage drivers effectively, it’s important to know how Docker builds and stores
images, and how these images are used by containers. You can use this information to
make informed choices about the best way to persist data from your applications and avoid
performance problems along the way.
Storage drivers allow you to create data in the writable layer of your container. The files
won’t be persisted after the container is deleted, and both read and write speeds are lower
than native file system performance.
Images and layers
A Docker image is built up from a series of layers; each layer represents an instruction in
the image's Dockerfile. For example:

FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py
Each layer is only a set of differences from the layer before it. The layers are stacked on
top of each other. When you create a new container, you add a new writable layer on top of
the underlying layers. This layer is often called the “container layer”. All changes made to
the running container, such as writing new files, modifying existing files, and deleting files,
are written to this thin writable container layer. The diagram below shows a container based
on the Ubuntu 18.04 image.
A storage driver handles the details about the way these layers interact with each other.
Different storage drivers are available, which have advantages and disadvantages in
different situations.
Because each container has its own writable container layer, and all changes are stored in
this container layer, multiple containers can share access to the same underlying image
and yet have their own data state. The diagram below shows multiple containers sharing
the same Ubuntu 18.04 image.
Note: If you need multiple images to have shared access to the exact same data, store this
data in a Docker volume and mount it into your containers.
Docker uses storage drivers to manage the contents of the image layers and the writable
container layer. Each storage driver handles the implementation differently, but all drivers
use stackable image layers and the copy-on-write (CoW) strategy.
Container size on disk
To view the approximate size of a running container, you can use the docker ps -s
command. Two different columns relate to size.
size: the amount of data (on disk) that is used for the writable layer of each
container.
virtual size: the amount of data used for the read-only image data used by the
container plus the container’s writable layer size. Multiple containers may share
some or all read-only image data. Two containers started from the same image
share 100% of the read-only data, while two containers with different images which
have layers in common share those common layers. Therefore, you can’t just total
the virtual sizes. This over-estimates the total disk usage by a potentially non-trivial
amount.
The total disk space used by all of the running containers on disk is some combination of
each container's size and the virtual size values. If multiple containers started from the
same exact image, the total size on disk for these containers would be SUM (size of
containers) plus one image size (virtual size - size).
This also does not count the following additional ways a container can take up disk space:
Disk space used for log files if you use the json-file logging driver. This can be
non-trivial if your container generates a large amount of logging data and log
rotation is not configured.
Volumes and bind mounts used by the container.
Disk space used for the container’s configuration files, which are typically small.
Memory written to disk (if swapping is enabled).
Checkpoints, if you’re using the experimental checkpoint/restore feature.
The copy-on-write (CoW) strategy
Copy-on-write is a strategy of sharing and copying files for maximum efficiency.
Sharing promotes smaller images
When you use docker pull to pull down an image from a repository, or when you create a
container from an image that does not yet exist locally, each layer is pulled down
separately, and stored in Docker's local storage area, which is usually /var/lib/docker/ on
Linux hosts. You can see these layers being pulled in this example:
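docker pull ubuntu:18.04   (each layer is listed with its own ID and "Pull complete" as it is downloaded)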
Each of these layers is stored in its own directory inside the Docker host’s local storage
area. To examine the layers on the filesystem, list the contents
of /var/lib/docker/<storage-driver>. This example uses the overlay2 storage driver:
ls /var/lib/docker/overlay2   We look at the layers that have been downloaded (ignore the
"l" directory).
Now imagine that you have two different Dockerfiles. You use the first one to create an
image called rbcost/my-base-image:1.0.
cd image1
FROM ubuntu:18.04
COPY . /app
(Save the Dockerfile)
cd image2
FROM rbcost/my-base-image:1.0
CMD /app/hello.sh
(Save the Dockerfile)
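A sketch of building both images (tags taken from the text; assumes hello.sh exists in image2's build context):

docker build -t rbcost/my-base-image:1.0 image1/
docker build -t rbcost/my-final-image:1.0 image2/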
The second image contains all the layers from the first image, plus a new layer with
the CMD instruction, and a read-write container layer. Docker already has all the layers from
the first image, so it does not need to pull them again. The two images share any layers
they have in common.
The second image's CMD runs a script, hello.sh, which must exist in the image; for example:
#!/bin/sh
echo "Hello world"
sudo docker history 42a09360380e   We look at the layers that make up the base image.
sudo docker history fd8ce85fdda3   We look at the layers that make up the final image.
Notice that all the layers are identical except the top layer of the second image. All the
other layers are shared between the two images, and are only stored once
in /var/lib/docker/. The new layer actually doesn’t take any room at all, because it is not
changing any files, but only running a command.
Note: The <missing> lines in the docker history output indicate that those layers were
built on another system and are not available locally. This can be ignored.
Copying makes containers efficient
When an existing file in a container is modified, the storage driver performs a copy-on-write
operation. The specific steps involved depend on the specific storage driver. For
the aufs, overlay, and overlay2 drivers, the copy-on-write operation follows this rough
sequence:
Search through the image layers for the file to update. The process starts at the
newest layer and works down to the base layer one layer at a time. When results
are found, they are added to a cache to speed future operations.
Perform a copy_up operation on the first copy of the file that is found, to copy the file
to the container’s writable layer.
Any modifications are made to this copy of the file, and the container cannot see
the read-only copy of the file that exists in the lower layer.
Btrfs, ZFS, and other drivers handle the copy-on-write differently. You can read more about
the methods of these drivers later in their detailed descriptions.
Containers that write a lot of data consume more space than containers that do not. This is
because most write operations consume new space in the container’s thin writable top
layer.
Note: for write-heavy applications, you should not store the data in the container. Instead,
use Docker volumes, which are independent of the running container and are designed to
be efficient for I/O. In addition, volumes can be shared among containers and do not
increase the size of your container’s writable layer.
To verify the way that copy-on-write works, the following procedure spins up 5 containers
based on the rbcost/my-final-image:1.0 image we built earlier and examines how much
room they take up.
Note: This procedure doesn’t work on Docker Desktop for Mac or Docker Desktop for
Windows.
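A sketch of the procedure (container names are illustrative):

for i in 1 2 3 4 5; do docker run -dit --name my_container_$i rbcost/my-final-image:1.0 sh; done
docker ps --size   (each container's "size" column is tiny compared to its virtual size)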
Not only does copy-on-write save space, but it also reduces start-up time. When you start a
container (or multiple containers from the same image), Docker only needs to create the
thin writable container layer.
If Docker had to make an entire copy of the underlying image stack each time it started a
new container, container start times and disk space used would be significantly increased.
This would be similar to the way that virtual machines work, with one or more virtual disks
per virtual machine.
Selecting a storage driver
Ideally, very little data is written to a container’s writable layer, and you use Docker
volumes to write data. However, some workloads require you to be able to write to the
container’s writable layer. This is where storage drivers come in.
Docker supports several different storage drivers, using a pluggable architecture. The
storage driver controls how images and containers are stored and managed on your
Docker host.
After you have read the storage driver overview, the next step is to choose the best
storage driver for your workloads. In making this decision, there are three high-level
factors to consider:
If multiple storage drivers are supported in your kernel, Docker has a prioritized list of which
storage driver to use if no storage driver is explicitly configured, assuming that the storage
driver meets the prerequisites.
Use the storage driver with the best overall performance and stability in the most usual
scenarios.
overlay2 is the preferred storage driver, for all currently supported Linux
distributions, and requires no extra configuration.
aufs is the preferred storage driver for Docker 18.06 and older, when running on
Ubuntu 14.04 on kernel 3.13 which has no support for overlay2.
devicemapper is supported, but requires direct-lvm for production environments,
because loopback-lvm, while zero-configuration, has very poor
performance. devicemapper was the recommended storage driver for CentOS and
RHEL, as their kernel version did not support overlay2. However, current versions
of CentOS and RHEL now have support for overlay2, which is now the
recommended driver.
The btrfs and zfs storage drivers are used if they are the backing filesystem (the
filesystem of the host on which Docker is installed). These filesystems allow for
advanced options, such as creating “snapshots”, but require more maintenance and
setup. Each of these relies on the backing filesystem being configured correctly.
The vfs storage driver is intended for testing purposes, and for situations where no
copy-on-write filesystem can be used. Performance of this storage driver is poor,
and is not generally recommended for production use.
Linux distribution                     Recommended storage drivers                                        Alternative drivers
Docker Engine - Community on Ubuntu    overlay2 or aufs (for Ubuntu 14.04 running on kernel 3.13)         overlay¹, devicemapper², zfs, vfs
Docker Engine - Community on Debian    overlay2 (Debian Stretch), aufs or devicemapper (older versions)   overlay¹, vfs
Docker Engine - Community on CentOS    overlay2                                                           overlay¹, devicemapper², zfs, vfs
Docker Engine - Community on Fedora    overlay2                                                           overlay¹, devicemapper², zfs, vfs
With regard to Docker, the backing filesystem is the filesystem where /var/lib/docker/ is
located. Some storage drivers only work with specific backing filesystems.

Storage driver            Supported backing filesystems
overlay2, aufs, overlay   xfs with ftype=1, ext4
devicemapper              direct-lvm
btrfs                     btrfs
zfs                       zfs
vfs                       any filesystem
overlay2, aufs, and overlay all operate at the file level rather than the block level.
This uses memory more efficiently, but the container’s writable layer may grow
quite large in write-heavy workloads.
Block-level storage drivers such as devicemapper, btrfs, and zfs perform better for
write-heavy workloads (though not as well as Docker volumes).
For lots of small writes or containers with many layers or deep
filesystems, overlay may perform better than overlay2, but consumes more inodes,
which can lead to inode exhaustion.
btrfs and zfs require a lot of memory.
zfs is a good choice for high-density workloads such as PaaS.
More information about performance, suitability, and best practices is available in the
documentation for each storage driver.
Stability
For some users, stability is more important than performance. Though Docker considers all
of the storage drivers mentioned here to be stable, some are newer and are still under
active development. In general, overlay2, aufs, overlay, and devicemapper are the choices
with the highest stability.
Check your current storage driver
To see what storage driver Docker is currently using, use docker info and look for
the Storage Driver line:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 3
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
…
To change the storage driver, see the specific instructions for the new storage driver. Some
drivers require additional configuration, including configuration to physical or logical disks
on the Docker host.
Important: When you change the storage driver, any existing images and containers
become inaccessible. This is because their layers cannot be used by the new storage
driver. If you revert your changes, you can access the old images and containers again, but
any that you pulled or created using the new driver are then inaccessible.
Using the AUFS storage driver
AUFS is a union filesystem. The aufs storage driver was previously the default storage
driver used for managing images and layers on Docker for Ubuntu, and for Debian versions
prior to Stretch. If your Linux kernel is version 4.0 or higher, and you use Docker CE,
consider using the newer overlay2 storage driver, which has potential performance
advantages over the aufs storage driver.
Using the Btrfs storage driver
Btrfs is a next generation copy-on-write filesystem that supports many advanced storage
technologies that make it a good fit for Docker. Btrfs is included in the mainline Linux
kernel.
Docker’s btrfs storage driver leverages many Btrfs features for image and container
management. Among these features are block-level operations, thin provisioning, copy-on-
write snapshots, and ease of administration. You can easily combine multiple physical
block devices into a single Btrfs filesystem.
Using the OverlayFS storage driver
OverlayFS is a modern union filesystem that is similar to AUFS, but faster and with a
simpler implementation. Docker provides two storage drivers for OverlayFS: the
original overlay, and the newer and more stable overlay2.
Using the ZFS storage driver
ZFS is a next generation filesystem that supports many advanced storage technologies
such as volume management, snapshots, checksumming, compression and deduplication,
replication and more.
It was created by Sun Microsystems (now Oracle Corporation) and is open sourced under
the CDDL license. Due to licensing incompatibilities between the CDDL and GPL, ZFS
cannot be shipped as part of the mainline Linux kernel. However, the ZFS On Linux (ZoL)
project provides an out-of-tree kernel module and userspace tools which can be installed
separately.
Using the VFS storage driver
The VFS storage driver is not a union filesystem; instead, each layer is a directory on disk,
and there is no copy-on-write support. To create a new layer, a “deep copy” is done of the
previous layer. This leads to lower performance and more space used on disk than other
storage drivers. However, it is robust, stable, and works in every environment. It can also
be used as a mechanism to verify other storage back-ends against, in a testing
environment.
The docs.docker.com pages explain how to configure each of these drivers.