Docker Cheat Sheet
Table of Contents
• Why Docker
• Prerequisites
• Installation
• Containers
• Images
• Networks
• Registry and Repository
• Dockerfile
• Layers
• Links
• Volumes
• Exposing Ports
• Best Practices
• Docker-Compose
• Security
• Tips
• Contributing
Why Docker
"With Docker, developers can build any app in any language using any toolchain.
“Dockerized” apps are completely portable and can run anywhere - colleagues’ OS X
and Windows laptops, QA servers running Ubuntu in the cloud, and production data
center VMs running Red Hat.
Developers can get going quickly by starting with one of the 13,000+ apps available on
Docker Hub. Docker manages and tracks changes and dependencies, making it easier
for sysadmins to understand how the apps that developers build work. And with Docker
Hub, developers can automate their build pipeline and share artifacts with collaborators
through public or private repositories.
Docker helps developers build and ship higher-quality applications, faster." -- What is
Docker
Prerequisites
I use Oh My Zsh with the Docker plugin for autocompletion of docker commands.
YMMV.
Linux
The 3.10.x kernel is the minimum requirement for Docker.
MacOS
10.8 “Mountain Lion” or newer is required.
Installation
Linux
Quick and easy install script provided by Docker:
curl -sSL https://get.docker.com/ | sh
If you're not willing to run a random shell script, please see the installation instructions
for your distribution.
If you are a complete Docker newbie, you should follow the series of tutorials now.
macOS
Download and install Docker Community Edition. If you have Homebrew-Cask, just
type brew cask install docker. Or download and install Docker Toolbox. Docker for
Mac is nice, but it's not quite as finished as the VirtualBox install. See the comparison.
NOTE: Docker Toolbox is legacy. You should use Docker Community Edition.
See Docker Toolbox.
Once you've installed Docker Community Edition, click the docker icon in Launchpad.
Then start up a container:
If you are a complete Docker newbie, you should probably follow the series of
tutorials now.
Check Version
Knowing which version of Docker you are running tells you which features are
available and which containers from the Docker Store will work with your
installation. To check the server version (using Docker's --format Go-template option):
docker version --format '{{.Server.Version}}'
1.8.0
You can also dump raw JSON data:
docker version --format '{{json .}}'
{"Client":{"Version":"1.8.0","ApiVersion":"1.20","GitCommit":"f5bae0a","GoVersion":"go1.4.2","Os":"linux","Arch":"amd64"}}
Containers
Your basic isolated Docker process. Containers are to Virtual Machines as threads are to
processes. Or you can think of them as chroots on steroids.
Lifecycle
Normally, if you run a container without options, it will start and stop immediately.
If you want to keep it running, use docker run -td container_id: the -t option
allocates a pseudo-TTY session and -d detaches the container automatically (runs
the container in the background and prints the container ID).
If you want a transient container, docker run --rm will remove the container after it
stops.
If you want to map a directory on the host to a docker container, docker run -v
$HOSTDIR:$DOCKERDIR. Also see Volumes.
If you also want to remove the volumes associated with the container, include the
-v switch when deleting it, as in docker rm -v.
There's also a logging driver available for individual containers in docker 1.10. To run
docker with a custom log driver (i.e., to syslog), use docker run --log-driver=syslog.
Another useful option is docker run --name yourname docker_image: when you
specify --name in the run command, you can start and stop the container by calling
it with the name you specified when you created it.
If you want to detach from a running container, use Ctrl + p, Ctrl + q. If you want to
integrate a container with a host process manager, start the daemon with -r=false then
use docker start -a.
If you want to expose container ports through the host, see the exposing ports section.
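Putting the options above together, a typical container lifecycle looks like this sketch (the container name mycontainer and the ubuntu image are placeholders):

```shell
# start a long-running container in the background with a pseudo-TTY and a name
docker run -td --name mycontainer ubuntu
# stop and restart it by name
docker stop mycontainer
docker start mycontainer
# remove the container along with its associated volumes
docker rm -v mycontainer
```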
CPU Constraints
You can limit CPU, either using a percentage of all CPUs, or by using specific cores.
For example, you can use the cpu-shares setting. The setting is a bit strange -- 1024
means 100% of the CPU, so if you want the container to take 50% of all CPU cores, you
should specify 512. See https://goldmann.pl/blog/2014/09/11/resource-management-
in-docker/#_cpu for more:
docker run -it -c 512 agileek/cpuset-test
You can also only use some CPU cores using cpuset-cpus.
See https://agileek.github.io/docker/2014/08/06/docker-cpuset/ for details and some
nice videos:
docker run -it --cpuset-cpus=0,4,6 agileek/cpuset-test
Note that Docker can still see all of the CPUs inside the container -- it just isn't using all
of them. See https://github.com/docker/docker/issues/20770 for more details.
Memory Constraints
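Memory can be capped in the same spirit as the CPU constraints above, using the -m / --memory flag. A minimal sketch (the image and the 300 MB limit are illustrative):

```shell
# cap the container's memory at 300 MB
docker run -it -m 300M ubuntu /bin/bash
```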
Capabilities
Info
Import / Export
• docker cp copies files or folders between a container and the local filesystem.
• docker export turns container filesystem into tarball archive stream to STDOUT.
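For example (the container name foo is a placeholder):

```shell
# copy a file out of the container to the local filesystem
docker cp foo:/etc/hosts ./hosts
# stream the container's filesystem to a tarball
docker export foo > foo.tar
```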
Executing Commands
To enter a running container, attach a new shell process to it. For a running
container called foo, use: docker exec -it foo /bin/bash.
Images
Images are just templates for docker containers.
Lifecycle
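The usual image lifecycle commands, sketched with a placeholder image name (myorg/myimage):

```shell
# build an image from the Dockerfile in the current directory
docker build -t myorg/myimage .
# create an image from a container's changes
docker commit mycontainer myorg/myimage
# tag an image into a repository
docker tag myorg/myimage myorg/myimage:v1
# remove an image
docker rmi myorg/myimage
```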
Info
Load/Save image
Load an image from file:
docker load < my_image.tar.gz
Save an existing image:
docker save my_image:my_tag | gzip > my_image.tar.gz
Import/Export container
Import a container as an image from file:
cat my_container.tar.gz | docker import - my_image:my_tag
Export an existing container:
docker export my_container | gzip > my_container.tar.gz
Networks
Docker has a networks feature. Docker automatically creates three networks when
you install it (bridge, host, none). A new container is launched into the bridge network by
default. To enable communication between multiple containers, you can create a new
network and launch containers in it. This enables containers to communicate with each
other while being isolated from containers that are not connected to the network.
Furthermore, it allows mapping container names to their IP addresses. See working with
networks for more details.
Lifecycle
• docker network create NAME Create a new network (default type: bridge).
• docker network rm NAME Remove one or more networks by name or identifier.
No containers can be connected to the network when deleting it.
Info
Connection
# create a new bridge network with your subnet and gateway for your ip block
docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic

# run a nginx container with a specific ip in that block
docker run --rm -it --net iptastic --ip 203.0.113.2 nginx

# curl the ip from any other place (assuming this is a public ip block duh)
curl 203.0.113.2
Registry and Repository
A repository is a hosted collection of tagged images that together create the file
system for a container.
A registry is a host -- a server that stores repositories and provides an HTTP API
for managing the uploading and downloading of repositories.
Docker.com hosts its own index to a central registry which contains a large number of
repositories. Having said that, the central docker registry does not do a good job of
verifying images and should be avoided if you're worried about security.
Dockerfile
The configuration file. Builds a Docker image when you run docker build on it.
Vastly preferable to docker commit.
Here are some common text editors and their syntax highlighting modules you could
use to create Dockerfiles:
• If you use jEdit, I've put up a syntax highlighting module for Dockerfile you can
use.
• Sublime Text 2
• Atom
• Vim
• Emacs
• TextMate
• VS Code
• Also see Docker meets the IDE
Instructions
• .dockerignore
• FROM Sets the Base Image for subsequent instructions.
• MAINTAINER (deprecated - use LABEL instead) Sets the Author field of the
generated images.
• RUN executes any commands in a new layer on top of the current image and
commits the results.
• CMD provides defaults for an executing container.
• EXPOSE informs Docker that the container listens on the specified network ports
at runtime. NOTE: does not actually make ports accessible.
• ENV sets environment variables.
• ADD copies new files, directories, or remote file URLs to the container.
Invalidates caches. Avoid ADD and use COPY instead.
• COPY copies new files or directories to the container. By default this copies as root
regardless of the USER/WORKDIR settings. Use --chown=<user>:<group> to give
ownership to another user/group. (Same for ADD.)
• ENTRYPOINT configures a container that will run as an executable.
• VOLUME creates a mount point for externally mounted volumes or other
containers.
• USER sets the user name for following RUN / CMD / ENTRYPOINT commands.
• WORKDIR sets the working directory.
• ARG defines a build-time variable.
• ONBUILD adds a trigger instruction when the image is used as the base for
another build.
• STOPSIGNAL sets the system call signal that will be sent to the container to exit.
• LABEL applies key/value metadata to your images, containers, or daemons.
• SHELL overrides the default shell used by Docker to run commands.
• HEALTHCHECK tells Docker how to test a container to check that it is still working.
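The instructions above combine into a Dockerfile along these lines. This is a minimal sketch, not a production recipe: the base image tag, paths, port, and the run.sh entrypoint are all illustrative.

```dockerfile
FROM ubuntu:16.04
LABEL maintainer="you@example.com"
ENV APP_HOME=/app
WORKDIR $APP_HOME
COPY . $APP_HOME
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
EXPOSE 8080
CMD ["./run.sh"]
```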
Tutorial
Examples
• Examples
• Best practices for writing Dockerfiles
• Michael Crosby has some more Dockerfiles best practices / take 2.
• Building Good Docker Images / Building Better Docker Images
• Managing Container Configuration with Metadata
• How to write excellent Dockerfiles
Layers
The versioned filesystem in Docker is based on layers. They're like git commits or
changesets for filesystems.
Links
Links are how Docker containers talk to each other through TCP/IP ports. Atlassian show
worked examples. You can also resolve links by hostname.
NOTE: If you want containers to ONLY communicate with each other through links, start
the docker daemon with -icc=false to disable inter-container communication.
If you have a container with the name CONTAINER (specified by docker run --name
CONTAINER) and in the Dockerfile, it has an exposed port:
EXPOSE 1337
Then if we create another container called LINKED like so:
docker run -d --link CONTAINER:ALIAS --name LINKED user/wordpress
Then the exposed ports and aliases of CONTAINER will show up in LINKED with the
following environment variables:
$ALIAS_PORT_1337_TCP_PORT
$ALIAS_PORT_1337_TCP_ADDR
And you can connect to it that way.
Volumes
Docker volumes are free-floating filesystems. They don't have to be connected to a
particular container. You can use volumes mounted from data-only containers for
portability. As of Docker 1.9.0, Docker has named volumes, which replace data-only
containers. Consider using named volumes instead of data containers.
Lifecycle
• docker volume create
• docker volume rm
Info
• docker volume ls
• docker volume inspect
Volumes are useful in situations where you can't use links (which are TCP/IP only). For
instance, if you need to have two docker instances communicate by leaving stuff on the
filesystem.
You can mount them in several docker containers at once, using docker run --volumes-
from.
Because volumes are isolated filesystems, they are often used to store state from
computations between transient containers. That is, you can have a stateless and
transient container run from a recipe, blow it away, and then have a second instance of
the transient container pick up from where the last one left off.
You may also consider running data-only containers as described here to provide some
data portability.
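A minimal sketch of the named-volume approach described above (the volume name mydata and the container names are placeholders):

```shell
# create a named volume and mount it into a container at /data
docker volume create mydata
docker run -d --name dbstore -v mydata:/data ubuntu sleep 1000
# a second container can mount the same volume, or reuse dbstore's mounts
docker run --rm --volumes-from dbstore ubuntu ls /data
```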
Exposing ports
Exposing incoming ports through the host is fiddly but doable.
This is done by mapping the container port to the host port (only using the localhost
interface) using -p:
docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
You can tell Docker that the container listens on the specified network ports at runtime
by using EXPOSE:
EXPOSE <CONTAINERPORT>
Note that EXPOSE does not expose the port itself -- only -p will do that. To expose the
container's port on your localhost's port:
iptables -t nat -A DOCKER -p tcp --dport <LOCALHOSTPORT> -j DNAT --to-destination <CONTAINERIP>:<PORT>
If you're running Docker in Virtualbox, you then need to forward the port there as well,
using forwarded_port. Define a range of ports in your Vagrantfile like this so you can
dynamically map them:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  ...
  (49000..49900).each do |port|
    config.vm.network :forwarded_port, :host => port, :guest => port
  end
  ...
end
If you forget what you mapped the port to on the host, use docker port to
show it:
docker port CONTAINER $CONTAINERPORT
Best Practices
This is where general Docker best practices and war stories go:
Docker-Compose
Compose is a tool for defining and running multi-container Docker applications. With
Compose, you use a YAML file to configure your application’s services. Then, with a
single command, you create and start all the services from your configuration. To learn
more about all the features of Compose, see the list of features.
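A minimal docker-compose.yml sketch in the spirit of the official quickstart (the service names, build context, ports, and the redis image choice are illustrative):

```yaml
version: "2"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - redis
  redis:
    image: redis
```

Bring everything up with docker-compose up (add -d to run in the background).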
Security
This is where security tips about Docker go. The Docker security page goes into more
detail.
First things first: Docker runs as root. If you are in the docker group, you effectively have
root access. If you expose the docker unix socket to a container, you are giving the
container root access to the host.
Docker should not be your only defense. You should secure and harden it.
For an understanding of what containers leave exposed, you should read Understanding
and Hardening Linux Containers by Aaron Grattafiori. This is a complete and
comprehensive guide to the issues involved with containers, with a plethora of links and
footnotes leading on to yet more useful content. The security tips following are useful if
you've already hardened containers in the past, but are not a substitute for
understanding.
Security Tips
For greatest security, you want to run Docker inside a virtual machine. This is straight
from the Docker Security Team Lead -- slides / notes. Then, run with AppArmor /
seccomp / SELinux / grsec etc to limit the container permissions. See the Docker 1.10
security features for more details.
Docker image ids are sensitive information and should not be exposed to the outside
world. Treat them like passwords.
See the Docker Security Cheat Sheet by Thomas Sjögren: some good stuff about
container hardening in there.
Check out the docker bench security script, download the white papers.
You should start off by using a kernel with unstable patches for grsecurity / pax
compiled in, such as Alpine Linux. If you are using grsecurity in production, you should
spring for commercial support for the stable patches, same as you would do for RedHat.
It's $200 a month, which is nothing to your devops budget.
Since docker 1.11 you can easily limit the number of active processes running inside a
container to prevent fork bombs. This requires a linux kernel >= 4.3 with
CGROUP_PIDS=y to be in the kernel configuration.
docker run --pids-limit=64
Also available since docker 1.11 is the ability to prevent processes from gaining new
privileges, via docker run --security-opt=no-new-privileges. This feature has been in
the linux kernel since version 3.5. You can read more about it in this blog post.
User Namespaces
There's also work on user namespaces -- it is in 1.10 but is not enabled by default.
To enable user namespaces ("remap the userns") in Ubuntu 15.10, follow the blog
example.
Security Videos
Security Roadmap
The Docker roadmap talks about seccomp support. There is an AppArmor policy
generator called bane, and they're working on security profiles.
Tips
Sources:
Prune
The new Data Management Commands have landed as of Docker 1.13:
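A sketch of the prune subcommands among them, which clean up unused objects:

```shell
# remove stopped containers, dangling images, and unused networks
docker system prune
# or prune one object type at a time
docker container prune
docker image prune
docker network prune
docker volume prune
```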
df
docker system df presents a summary of the space currently used by different docker
objects.
Get IP address
docker inspect $(dl) | grep -wm1 IPAddress | cut -d '"' -f 4
or with jq installed:
docker inspect $(dl) | jq -r '.[0].NetworkSettings.IPAddress'
Cleaning up the apt package cache should be done in the same layer as other apt
commands. Otherwise, the previous layers still persist the original information and
your images will still be fat.
RUN {apt commands} \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
• Flatten an image
• For backup