
ROS and Docker. Chapter in Studies in Computational Intelligence, May 2017. DOI: 10.1007/978-3-319-54927-9_9


ROS & Docker

Ruffin White, Henrik Christensen

Contextual Robotics Institute,
University of California, San Diego
http://jacobsschool.ucsd.edu/contextualrobotics

Abstract. In this tutorial chapter we'll cover the growing intersection
between ROS & Docker, showcasing new development tools and strategies
to advance robotic software design and deployment within a ROS/Gazebo
context by utilizing advances in Linux containers. Tutorial examples here
will focus on robotics software development for education, research, and
industry, specifically: constructing repeatable & reproducible environments,
leveraging software defined networking, as well as running and shipping
portable ROS applications.

Keywords: ROS, Docker, Repeatability, Reproducibility, Node Networking,
Portable Deployment, Distributed Computing

1 Introduction

The ROS ecosystem builds from a fast growing, open source, continuously evolv-
ing community of newly released distros, updated dependencies, and deprecated
packages. This can prove troublesome for teaching, developing or even publish-
ing while using any ROS code-base. Additionally, practitioners in the multidis-
ciplinary field of robotics are certainly not solely composed of trained software
engineers. Building, running and shipping complex ROS apps and services can
be a daunting endeavor for non-experts, presenting a formidable learning curve
encountered by those proceeding beyond beginner tutorials.
Robotics still lacks a suitable work flow with respect to continuous integra-
tion and test verification [3,4]. Issues with repeatable and reproducible envi-
ronmental setups can make developing robotics software with collaborators non
trivial, discouraging code-reuse thus prompting much unnecessary reinvention.
Many however are finding the use of Docker to be a helpful tool to tackle these
challenges [2,8].
One of the largest robotic planning projects, MoveIt! [7], now uses ROS with
Docker as a tool for continuous integration and collaboration between main-
tainers. Containers have enabled the MoveIt! community to perform faster and
more frequent tests with the same CI resources, as well as enabling maintain-
ers to build patches and review pull requests for various releases and branches
without cross contaminating their own development environments.
Additionally, new projects such as Secure ROS (SROS) have also found uses
for containers. SROS is an addition to the ROS API and ecosystem to support
modern cryptography and security measures to improve the state of security
for future robotics subsystems [9]. SROS currently uses Docker to distribute
portable run-time and development environments, inviting the community to
quickly interact with and contribute to the latest progress of the project1.
In work with maritime autonomy [6], authors utilize containers as a means
to deploy experimental algorithms to autonomous robotic systems. The methods
presented enable the Naval Surface Warfare Center to minimize the level of effort
and lead time required to repeatedly re-baseline unmanned underwater vehicles,
high cost equipment that is also time-shared among many other research groups.
Authors show results of a quickly-deployable, easily reconfigurable, and vehicle-
agnostic autonomy solution that helps maximize the usage of limited resources.
The topics and examples covered to address the listed challenges are presented
in order of steadily advancing complexity, and are intended to guide the reader
from novice to well informed. However, to best comprehend the material presented,
readers should have prior developer-level experience with ROS, e.g. building
packages from source. A basic user-level acquaintance with Docker, e.g. familiarity
with the most common Docker CLI commands such as docker pull, build, and run,
may also be helpful but is not required.

1.1 Overview

– Background: What is Docker? What are Linux Containers vs. Virtual Ma-
chines? What is the dependency matrix from hell? What are the official ROS
and Gazebo DockerHub repos? What role can Docker play in robotics and
how does its development mirror what is occurring in the web development
community?
– Setup: What minimum requirements are needed? Is my OS and hardware
supported? How can I download various releases of ROS using Docker?
– Examples:
• Education — Container and Image Basics: How can Docker soften
the learning curve for ROS and Gazebo by simplifying the setup process?
How can the community share working examples or reproduce broken
ones for collaborative debugging? How can containers provide fail-fast
learn-fast disposable work-spaces, encouraging experimentation without
hesitation?
• Industry — Networking and Deployment: How can nodes be dis-
tributed across machines without local access or VPNs? How can com-
pose files encapsulate the start-up of complex launch procedures?
• Research — Using Devices and Peripherals: How can GPUs and
hardware peripherals be mounted for deep learning and perception tasks?
How can images serve to archive code alongside publications? How can we
quickly collaborate with others by sharing complex compilation setups?
– Notes: What are some caveats, best practices, and suggested third party
tools to watch out for?
1 http://wiki.ros.org/SROS/Installation/Docker

2 Background

Over the last few years a resurgence of Linux containers has taken root in
the world of software development. Linux containers themselves have existed for
some time, but until recently creating and managing them has not always been
simple or straightforward.
However, thanks in part to improved tooling and a simplified user experience
offered by growing open source projects such as Docker, this method of building
and distributing software is beginning to change how we work and collaborate.
And with the establishment of the Open Container Initiative, an open governance
structure formed under the auspices of the Linux Foundation and backed by
much of the web industry, open industry standards around container formats
and runtimes will inevitably continue to mature.
A Linux container is basically an operating-system-level virtualization method
for running multiple isolated Linux systems on a control host using a single Linux
kernel, offering an environment similar to a Virtual Machine, but without the
overhead that comes with running a separate kernel and simulating all the hard-
ware and networking. Simply speaking, containers sit on the spectrum between
chroots and VMs, being slightly closer to the former.
Docker implements a client-server system where the Docker daemon (or en-
gine) runs on a host and it is accessed via a client. The client, which may or
may not be on the same host, can control an engine (or even a swarm of en-
gines on multiple hosts) to spawn, manage, and network multiple containers.
Containers run from a thin writable layer on top of a specified image, where an
image is a list of read-only layers that represent filesystem differences. A con-
tainer’s writable layer can be committed to construct a new read-only image,
while common read-only layers can be shared across images.
During the ROS Jade Turtle release in 2015, the Open Source Robotics Foundation
(OSRF) and the authors collaborated to publish official Docker Hub repositories
for ROS² and Gazebo³ [8]. These Dockerized images are intended to provide
a simplified and consistent foundation to build and deploy robotic applications.
Built from the official Ubuntu image and OSRF’s official Debian packages, the
images serve as a quick and secure vector for releases.
Developing such complex robotic systems with cutting edge implementations
of newly published algorithms remains challenging, as repeatability and repro-
ducibility of robotic software can fall by the wayside in the race to innovate
and satisfy the ever-growing dependency matrix from hell: the various permuta-
tions of architectures, peripherals, and libraries that exist in robotics. With the
added difficulty in coding, tuning and deploying multiple software components
that span many engineering disciplines, a more collaborative approach becomes
attractive. However, the technical difficulties in sharing and maintaining a collec-
tion of software over multiple robots and platforms has for some time exceeded
the time and effort that many smaller labs and businesses could afford.
2 https://hub.docker.com/_/ros/
3 https://hub.docker.com/_/gazebo/

With the advance and standardization of software containers, roboticists are


primed to acquire a host of improved developer tooling for building and shipping
software. To help alleviate the growing pains and technical challenges of adopting
new practices, we have focused on providing an official resource for using ROS
with these new technologies.

3 Setup

3.1 Requirements

For this tutorial we’ll be leveraging many of the modern features found in recent
releases of the Docker engine, as well as additional tools surrounding the larger
Docker ecosystem. The exact versions used while authoring these examples are
shown below, however later versions should also function since many of these
features are now quite stable and have matured.
1 $ docker -v
2 Docker version 1.11.1, build 5604cbe
3 $ docker-compose -v
4 docker-compose version 1.6.2, build 4d72027
Optional requirements include a local installation of ROS on the same machine
you plan to host your Docker engine on, since all of the examples encountered
here will be "containerized". Since a demonstration interconnecting ROS from
the host will be provided, a matching ROS installation may be useful for using
GUIs and visualizations. Note however that any mention of a required host
operating system has been omitted because, unlike ROS, Docker can be easily
installed on many distributions supporting any modern Linux kernel (current
minimum 3.10). macOS and Windows are also supported by Docker, but require a
VM to provide a running Linux kernel; this, and the general 64-bit requirement,
may change. A host installation of a recent LTS such as Ubuntu 14.04 or 16.04
is advised, especially if you wish to install the only other optional
requirement: the Nvidia driver and nvidia-docker4 plugin for improved performance
of the ros_caffe example using CUDA enabled hardware.

3.2 Installation

You can follow the prescribed and up-to-date installation instructions5 for your
OS from Docker's documentation website, or if your distribution supports deb/rpm,
you may use the script provided by Docker to install the engine (compose is
installed separately6):

curl -fsSL https://get.docker.com/ | sh
sudo service docker restart
4 https://github.com/NVIDIA/nvidia-docker
5 https://docs.docker.com/linux/step_one/
6 https://docs.docker.com/compose/
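As noted in the next paragraph, you may also allow non-root use of the Docker CLI by joining the docker group; a minimal sketch of the standard group setup (commands assume a typical Ubuntu host) is:

$ sudo groupadd docker          # the group usually already exists after installation
$ sudo usermod -aG docker $USER # takes effect after logging out and back in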

Additionally, to avoid sudo while commanding Docker, you may choose to add
your user to the docker Unix group. Take care however, as the Docker group
is equivalent to root. After installing Docker, you’ll want to install Docker-
compose from the same documentation website, permitting you to succinctly
describe and launch our later examples using short yaml compose files. To test
Docker, and download some useful images for later, you needn’t run more than
this command from line 1:
1 $ docker run -it --rm ros roscore
2 Unable to find image ’ros:latest’ locally
3 latest: Pulling from library/ros
4 943c334059c7: Pull complete
5 ...
6 f9b3f610dc9c: Pull complete
7 Digest: sha256:e1c7...c2b1
8 Status: Downloaded newer image for ros:latest
9 ... logging to /root/.ros/log/ab44...0002/roslaunch-c6619a48f368-1.log
10 Checking log directory for disk usage. This may take awhile.
11 Press Ctrl-C to interrupt
12 Done checking log file disk usage. Usage is <1GB.
13 started roslaunch server http://c6619a48f368:35403/
14 ros_comm version 1.11.19
15 SUMMARY
16 ========
17 PARAMETERS
18 * /rosdistro: indigo
19 ...
You should see output very similar to the above, where Docker simply runs
roscore from the ros:latest image. If the Docker engine cannot find this image
locally, it will automatically pull what it needs from DockerHub. Because we've
specified this session to be interactive and to clean up afterward, once roscore
is running you can kill it with Ctrl-C, thereby stopping and removing the
originating container.

3.3 Building

Before we proceed with any examples, let’s take a moment to walk through a
Dockerfile used to build the tagged images you just downloaded. The hierarchy of
available official tags is keyed to the most common ROS meta-packages, designed
to have small disk footprints and simple configurations:

– ros-core: minimal bare-bones ROS install
– ros-base: basic libraries (tagged with distro name, newest LTS as latest)
– robot: basic install for robot platforms
– perception: basic install for perception tasks

The rest of the common meta-packages such as desktop and desktop-full
are hosted on automatic build repos under OSRF's DockerHub profile. These
last meta-packages include graphical libraries and hook to a host of other large
dependencies such as X11, X server, etc. In the interest of keeping the official
library images lean and secure, the desktop images are just hosted by OSRF.
1 # This is an auto generated Dockerfile for ros:indigo-ros-core
2 # generated from templates/docker_images/create_ros_core_image.Dockerfile.em
3 # generated on 2016-04-26 21:55:54 +0000
4 FROM ubuntu:trusty
5 MAINTAINER Tully Foote [email protected]
The first thing you'll notice is the auto-generated comments that specify the date
the Dockerfile was generated as well as the template used to derive it. There are
many ROS Dockerfiles, one for each tagged image under the official DockerHub
repo, so a template engine is used to generate and maintain all the Dockerfiles
within the osrf/docker_images7 GitHub repo. The template engine itself is made
available at osrf/docker_templates8, providing a means to programmatically
generate custom ROS related Dockerfiles, which is simultaneously leveraged within
the second generation ROS build farm using Docker.
The final two lines, 4 and 5, define the originating parent image and main-
tainer contact information. Currently all ROS images are built from LTS Ubuntu
images also provided by the Official DockerHub Library. Note here that the OS
minor version number is not necessarily specified. This permits the Official ROS
images to quickly rebuild with the latest minor release updates to the Ubuntu
image. It is wise to consider this compromise of abstraction vs. specificity, e.g.
for a ROS-version-dependent code-base; defining only FROM ros may break the
application upon the next LTS release, when the implicit latest tag is bumped.
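For example, pulling or building FROM an explicit distro tag avoids this; the following sketch contrasts the two (the ros:indigo-ros-core tag is taken from the Dockerfile above):

$ docker pull ros:indigo-ros-core   # stays on Indigo regardless of future LTS releases
$ docker pull ros                   # implicitly ros:latest, which moves with each new LTS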
7 # setup environment
8 RUN locale-gen en_US.UTF-8
9 ENV LANG en_US.UTF-8
10

11 # setup keys
12 RUN apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 421C365BD9FF1F717815A3895523BAEEB01FA116
13
14 # setup sources.list
15 RUN echo "deb http://packages.ros.org/ros/ubuntu trusty main" > /etc/apt/sources.list.d/ros-latest.list
Above we see the basic ROS installation setup for Ubuntu. The base image
used derives from a modified version of Ubuntu’s Cloud Images, rather than the
full desktop install, so we need to configure some locales and expected variables
for ROS environments. Here we also use a key-server with a high-availability
server pool to add the ROS repository credentials. Note that a Dockerfile should
be written to help mitigate any man-in-the-middle attacks during the build pro-
cess: using https; importing PGP keys' full fingerprints to check package signing;
and embedding checksums directly in the Dockerfile if PGP signing is not provided.
7 https://github.com/osrf/docker_images
8 https://github.com/osrf/docker_templates
17 # install bootstrap tools
18 RUN apt-get update && apt-get install --no-install-recommends -y \
19 python-rosdep \
20 python-rosinstall \
21 python-vcstools \
22 && rm -rf /var/lib/apt/lists/*
23

24 # bootstrap rosdep
25 RUN rosdep init \
26 && rosdep update

Next, some Python dependencies are bootstrapped for the rosdep tool. Note
the style that every apt-get update/install is written on the same line and
followed by rm -rf /var/lib/apt/lists/*. This effectively undoes the apt-get
update, ensuring that the resulting layer doesn't include the extra ~8 MB of
APT package list data. It also enforces appropriate apt-get update usage,
preventing images from containing stale package listing data.

28 # install ros packages
29 ENV ROS_DISTRO indigo
30 RUN apt-get update && apt-get install -y \
31 ros-indigo-ros-core=1.1.4-0* \
32 && rm -rf /var/lib/apt/lists/*

Now we finally install the particular ROS meta-package that the deriving
image is tagged for. Here we intend that rebuilding the same Dockerfile should
result in the same version of the image being packaged. An official repo
Dockerfile also serves as a base image for all those that derive from it, so being
version explicit is valuable to the repeatability and transparency of the builds.
If an installation version cannot be satisfied, the build should fail outright,
preventing an inadvertent rebuild of a Dockerfile containing something other
than what is given by its tag. For dependent packages installed by apt there's
usually no need to pin them to a version, but this is something you may want
to consider. An additional benefit of pinning the version is that it provides the
maintainer a chance to preserve or break the build cache where needed, for
instance when updating a package with a version bump. Updating environment
variables within Dockerfiles is also sometimes used for the same purpose.

34 # setup entrypoint
35 COPY ./ros_entrypoint.sh /
36

37 ENTRYPOINT ["/ros_entrypoint.sh"]
38 CMD ["bash"]

Lastly, the default entrypoint is configured and then the default run command
defined. Here, the entrypoint simply sources ROS’s own setup script, as shown
below. The entrypoint can be amended to source your own ROS workspace,
enabling brief Docker run commands to launch your own ROS package.

1 #!/bin/bash
2 set -e
3
4 # setup ros environment
5 source "/opt/ros/$ROS_DISTRO/setup.bash"
6 exec "$@"
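As a sketch of such an amendment, a hypothetical entrypoint that also overlays a user workspace (the workspace path here is only an example) might read:

#!/bin/bash
set -e
# setup ros environment, then overlay a user catkin workspace if present
source "/opt/ros/$ROS_DISTRO/setup.bash"
if [ -f "$HOME/catkin_ws/devel/setup.bash" ]; then
    source "$HOME/catkin_ws/devel/setup.bash"
fi
exec "$@"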

4 Examples
Now let’s cover some example use cases for using Docker with ROS. All example
code and detailed tutorials will be made freely available in the corresponding
public GitHub repo9 .

4.1 Education
Let’s take the scenario that you are the instructor for a robotics course utilizing
ROS. It’s just the beginning of the course, but you would like to give the students
a working ROS tutorial to keep them engaged. However you’d also like to prevent
the first half of the lecture and following office hours from diverging into an
arduous ROS install-fest. We’ll make an optimistic assumption your students
already have a working Linux or VM install with Docker, but not necessarily
a homogeneous set of releases or distributions. However, we’d also like to avoid
breaking anything in any way, due in part to their other coursework dependencies
and setups.
Let’s begin by giving the students a small Dockerfile to build our example:
1 FROM ros:indigo
2 RUN apt-get update && apt-get install -y \
3 build-essential \
4 && rm -rf /var/lib/apt/lists/*
5 ENV CATKIN_WS=/root/catkin_ws
6 RUN rm /bin/sh \
7 && ln -s /bin/bash /bin/sh
Here we’ll start from an official ROS image and install the dependencies we
know the students will need. The official images cater to runtime deployments,
but can be easily extended for our build requirements. We’ll also define our
catkin workspace directory, as well as switch to bash for sourcing the environment
needed with catkin. One could instead COPY and RUN an executable bash script
in the Docker build context, that being the same directory as the Dockerfile, but
we’ll swap the shell to keep everything self-contained in the Dockerfile.
8 RUN source /ros_entrypoint.sh \
9 && mkdir -p $CATKIN_WS/src \
10 && cd $CATKIN_WS/src \
11 && catkin_init_workspace \
12 && git clone https://github.com/ros/ros_tutorials.git \
13 && touch ros_tutorials/turtlesim/CATKIN_IGNORE
9 https://github.com/ruffsl/ros_docker_demos

Next we'll show the students how to create a catkin workspace and clone the
source for the example. Note that we're ignoring one package, as we've chosen
not to download and install any large GUI desktop dependencies such as Qt.
14 RUN source /ros_entrypoint.sh \
15 && cd $CATKIN_WS \
16 && catkin_make
17 RUN sed -i \
18 '/source "\/opt\/ros\/$ROS_DISTRO\/setup.bash"/a source "\$CATKIN_WS\/devel\/setup.bash"' \
19 /ros_entrypoint.sh
Finally we’ll build the tutorial package and include the setup of our catkin
workspace into the original entrypoint. Students can then be instructed to start
with the following two commands in the same path they’ve saved the Dockerfile:
1 $ docker build --tag=ros:tutorials .
2 Sending build context to Docker daemon 2.56 kB
3 Step 1 : FROM ros:indigo
4 ---> e7ccb7b11eeb
5 ...
6 Successfully built f2cc5810fb94
7 $ docker run -it ros:tutorials bash -c "roscore & rosrun roscpp_tutorials listener & rosrun roscpp_tutorials talker"
8 ...
9 [ INFO] [1462299420.261297314]: hello world 5
10 [ INFO] [1462299420.261495662]: I heard: [hello world 5]
11 [ INFO] [1462299420.361333784]: hello world 6
12 ^C[ INFO] [1462299420.361548617]: I heard: [hello world 6]
13 [rosout-1] killing on exit
From here, students can swap out the URLs for their own repositories and
append additional dependencies. Should students encounter any build or run-
time errors, Dockerfiles and/or images could be shared (from GitHub and/or
Docker Hub) with the instructor or other peers on say answers.ros.org to serve
as a minimal example, capable of quickly replicating the errors encountered for
further collaborative debugging.
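A minimal sketch of sharing such an image (the repository name is a placeholder for your own Docker Hub account) could be:

$ docker tag ros:tutorials <username>/ros_tutorials:debug
$ docker push <username>/ros_tutorials:debug
$ # a peer can then reproduce the issue exactly:
$ docker run -it <username>/ros_tutorials:debug rosrun roscpp_tutorials talker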
What we’ve shown so far has been a rather structured work-flow from build
to runtime, however containers also offer a more interactive and dynamic work-
flow as well. As shown from this tutorial video10 , we can interact with containers
directly. A container can persist beyond the life cycle of its starting process, and
is not removed until the docker daemon is directed to do so. Naming or keeping
track of your containers affords you the use of isolated ephemeral work-spaces
in which to experiment or test, stopping and restarting them as needed.
Note that you should avoid using containers to store a system state or files you
wish to preserve. Instead, a developer may work within a container iteratively,
progressively building the larger application in increments and taking periodic
respites to commit the state of their container/progress to a newly tagged image
layer. This could be seen as a form of system-wide revision control, with save points
10 https://youtu.be/9xqekKwzmV8
allowing the developer to reverse changes by simply spawning a new container
from a previous layer. All the while, the developer can also consolidate their
progress by noting the setup procedure within a new Dockerfile, testing and
comparing it against the lineage of working scratchwork images.
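As a sketch of this interactive work-flow (container and tag names are arbitrary examples):

$ docker run -it --name scratch_ws ros:indigo bash
  # ...experiment inside the container: apt-get install, edit, build, test...
$ docker commit scratch_ws ros:scratch_checkpoint1   # save progress as a new image layer
$ docker start -ai scratch_ws                        # resume the same work-space later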

4.2 Industry

In our previous education example, it was evident how we simply spawned all the
tutorial nodes from a single bash process. When this process (PID 1) is killed, the
container is also killed. This explains the popular convention of keeping to one
process per container, as is indicative of the modern microservices architecture
paradigm. This is handy should we desire the life-cycles of certain deployed
ROS nodes to be longer than others. Let’s revisit the previous example utilizing
software defined networking to interlink the same ROS nodes and services, only
now, running from separate containers.
Within a new directory, foo, we’ll create a file named docker-compose.yml:
1 version: ’2’
2

3 services:
4 master:
5 image: ros:indigo
6 environment:
7 - "ROS_HOSTNAME=master.foo_default"
8 command: roscore
9

10 talker:
11 build: talker/.
12 environment:
13 - "ROS_HOSTNAME=talker.foo_default"
14 - "ROS_MASTER_URI=http://master.foo_default:11311"
15 command: rosrun roscpp_tutorials talker
16

17 listener:
18 build: listener/.
19 environment:
20 - "ROS_HOSTNAME=listener.foo_default"
21 - "ROS_MASTER_URI=http://master.foo_default:11311"
22 command: rosrun roscpp_tutorials listener
With this compose file, we have encapsulated the entire setup and structure
of our simple set of ROS 'microservices'. Here, each service (master, talker,
listener) will spawn a new container named appropriately, originating from the
designated image or the Dockerfile specified in the build field. Notice that the
environment fields configure the ROS network variables to match each service's
domain name under the foo_default network, named after our project's directory.
The foo_default name-space can be omitted, as the default DNS resolution
within the foo_default network will resolve the local service or container names.
Still, remaining explicit helps avoid collisions and is useful when later adding
host-enabled DNS resolution over multiple Docker networks.
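To see the network that compose creates for the project, a quick sketch using the engine's network commands:

~/foo$ docker network ls                   # lists bridge, host, none, and foo_default
~/foo$ docker network inspect foo_default  # shows the attached containers and their addresses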
Before starting up the project, we’ll also copy the same Dockerfile from the
previous example into the project’s talker and listener sub-directories. With
this, we can start up the project detached, and then monitor the logs as below:
1 ~/foo$ docker-compose up -d
2 Creating foo_master_1
3 Creating foo_listener_1
4 Creating foo_talker_1
5
6 ~/foo$ docker-compose logs
7 Attaching to foo_talker_1, foo_master_1, foo_listener_1
8 ...
9 talker_1 | [ INFO] [1462307688.323794165]: hello world 42
10 listener_1 | [ INFO] [1462307688.324099631]: I heard: [hello world 42]
Now let’s consider the example where we’d like to upgrade the ROS distro
release used for just our talker service, leaving the rest of our ROS nodes running
and uninterrupted. We’ll use Docker-compose to recreate our new talker service:
12 ~/foo$ docker exec -it foo_talker_1 printenv ROS_DISTRO
13 indigo
14
15 ~/foo$ sed -i -- 's/indigo/jade/g' talker/Dockerfile
16
17 ~/foo$ docker-compose up -d --build --force-recreate talker
18 Building talker
19 Step 1 : FROM ros:jade
20 ...
21 Successfully built 3608a3e9e788
22 Recreating foo_talker_1
23
24 ~/foo$ docker exec -it foo_talker_1 printenv ROS_DISTRO
25 jade
Here we first check the ROS release used in the container, and change the
version used in the originating Dockerfile for the talker service. Next we use
some shorthand flags to inform Docker-compose to bring the talker service back
up by rebuilding the talker image and recreating the talker container. We then
check the ROS distro again and see the reflected update. You may also go back
to docker-compose logs and find that the counter in the published message has
been reset.
From here on we can abstract our interaction with the docker engine, and
instead point our client towards a Docker Swarm11 , a method for one client
to spin up containers from a cluster of Docker engines. Normally a tool such
as Docker Machine12 can be used to bootstrap a swarm and define a swarm
master. This entails provisioning and networking engines from multiple hosts
11 https://docs.docker.com/swarm
12 https://docs.docker.com/machine
together, such that requested containers can be load balanced across the swarm,
and containers running from different hosts can securely communicate.
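As a sketch, pointing the client at a remote engine or swarm manager only requires redirecting the standard client environment variables (the address below is purely an example):

$ export DOCKER_HOST=tcp://swarm-manager.example.com:2376
$ export DOCKER_TLS_VERIFY=1
$ docker info          # now reports the swarm's pooled nodes and resources
$ docker-compose up -d # the same compose file, scheduled across the swarm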

4.3 Research

Up to this point, we’ve considered relatively benign Docker enabled ROS projects
where our build dependencies were fairly shallow, simply those accrued through
default apt-get, and run time dependencies without any external hardware. How-
ever, this is not always the case when an original project builds from fairly new
and evolving research. Let’s assume for the moment you’re a computer vision
researcher, and a component of your experiment utilises ROS for image acquisition
and transport, culminating in live published classification probabilities
from a trained deep convolutional neural network (CNN). Your bleeding edge
CNN relies on a specific release of parallel programming libraries, not to mention
the supporting GPU and image-capture peripheral hardware.
Here we'll demonstrate the reuse of existing public Dockerfiles to quickly
obtain a running setup, stringing together the latest published images with
preconfigured installations of CUDA/cuDNN, and meticulous source build
configurations for Caffe [5]. Specifically we'll use a Caffe image from Docker Hub
provided by the community. This image in turn builds from a CUDA/cuDNN
image from NVIDIA, which in turn uses the official Ubuntu image on Docker
Hub. All necessary Dockerfiles are made available through the respective Docker
Hub repos, so that you may build the stack locally if you choose.

Fig. 1. A visual of the base image inheritance for the ros_caffe:gpu image: ubuntu:trusty → nvidia/cuda:7.5-cudnn5-devel → kaixhin/cuda-caffe → ruffsl/ros_caffe:gpu.

However, in the interest of build time and demonstration, we literally build
FROM those before us. This involves a small modification and addition to the
Dockerfile for ros-core. By simply redirecting the parent image structure of
the Dockerfile to point to the end-chain image, with each image in the prior
chain encompassing a component of our requirements, we can easily customize
and concatenate the lot to describe and construct an environment that contains
just what we need.
For brevity, detailed and updated documentation/Dockerfiles are kept in the
same repository as the ros-caffe project13. A link to a video demonstration14
13 https://github.com/ruffsl/ros_caffe
14 https://youtu.be/T8ZnnTpriC0
can also be found at the project repo. Shown here are the notable key points
in pulling/running the image classification node from your own Docker machine.
First we'll modify the ros-core Dockerfile to build from an image with Caffe
built using CUDA/cuDNN; in this case we'll use a popular set of maintained
automated build repos from Kai Arulkumaran [1]:
1 FROM kaixhin/cuda-caffe
Next we’ll amend the RUN command that installs ROS packages to include
the additional ROS dependencies for our ros_caffe example package:
24 # install ros packages
25 RUN apt-get update && apt-get install -y \
26 ros-${ROS_DISTRO}-ros-core \
27 ros-${ROS_DISTRO}-usb-cam \
28 ros-${ROS_DISTRO}-rosbridge-server \
29 ros-${ROS_DISTRO}-roswww \
30 ros-${ROS_DISTRO}-mjpeg-server \
31 ros-${ROS_DISTRO}-dynamic-reconfigure \
32 python-twisted \
33 python-catkin-tools && \
34 rm -rf /var/lib/apt/lists/*
Note the reuse of the ROS_DISTRO variable within the Dockerfile. When building
from the official ROS image, this helps make your Dockerfile more adaptable,
allowing for easy reuse and migration to the next ROS release just by updating
the base image reference.
40 # setup catkin workspace
41 ENV CATKIN_WS=/root/catkin_ws
42 RUN mkdir -p $CATKIN_WS/src
43 WORKDIR $CATKIN_WS/src
44

45 # clone ros-caffe project
46 RUN git clone https://github.com/ruffsl/ros_caffe.git
47

48 # Replacing shell with bash for later source, catkin build commands
49 RUN mv /bin/sh /bin/sh-old && \
50 ln -s /bin/bash /bin/sh
51

52 # build ros-caffe ros wrapper
53 WORKDIR $CATKIN_WS
54 ENV TERM xterm
55 ENV PYTHONIOENCODING UTF-8
56 RUN source "/opt/ros/$ROS_DISTRO/setup.bash" && \
57 catkin build --no-status && \
58 ldconfig
Finally, we can simply clone and build the catkin package. Note the use of
WORKDIR to execute RUN commands from the proper directories, avoiding the need
to hard-code the paths in the command. The optional variables and arguments
around the catkin build command are used to clear a few warnings and printing
behaviors the catkin tool has while running from a basic terminal session.
14 Ruffin White, Henrik Christensen

Now that we know how this is all built, let’s skip ahead to running the
example. You’ll first need to clone the project’s git repo and then download
the caffe model to acquire the necessary files to run the example network, as
explained in the project README on the GitHub repo. We can then launch the
node by using the run command to pull the necessary images from the project’s
automated build repo on Docker Hub. The run script within the docker folder
shows an example of using the GPU version:
1 nvidia-docker run \
2 -it \
3 --publish 8080:8080 \
4 --publish 8085:8085 \
5 --publish 9090:9090 \
6 --volume="${PWD}/../ros_caffe/data:/root/catkin_ws/src/ros_caffe/ros_caffe/data" \
7 --device /dev/video0:/dev/video0 \
8 ruffsl/ros_caffe:gpu roslaunch ros_caffe_web ros_caffe_web.launch
The Nvidia Docker plug-in is a simple wrapper function around Docker’s
own run call, injecting additional arguments that include mounting the device
hardware and driver directories. This permits our CUDA code to function easily
within the container without necessarily baking the version specific Nvidia driver
within the image itself. You can easily see all the implicit properties affected by
using the docker inspect command with the name of the generated container,
and notice devices such as /dev/nvidia0 and a mounted volume named after
your graphics driver version. Be sure you have enough available VRAM, about
500MB, to load this network. You can check your memory usage using
nvidia-smi. If you don’t have a GPU, then you may simply alter the above
command by changing nvidia-docker to just docker, as well as swapping the
:gpu image tag with :cpu.
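For reference, a sketch of the equivalent CPU-only invocation (simply the same command with plain docker and the :cpu tag):

docker run -it \
  --publish 8080:8080 --publish 8085:8085 --publish 9090:9090 \
  --volume="${PWD}/../ros_caffe/data:/root/catkin_ws/src/ros_caffe/ros_caffe/data" \
  --device /dev/video0:/dev/video0 \
  ruffsl/ros_caffe:cpu roslaunch ros_caffe_web ros_caffe_web.launch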
The rest of the command is relatively clear: specifying the port mapping for
the container to expose the web ROS interface through localhost, as well as
mounting the volume containing our downloaded Caffe model. The device argument here is
used to provide the container with a video capture device; one can just as easily
mount /dev/bus/usb or /dev/joy0 for other such peripherals. Lastly we specify
the image name and roslaunch command. Note that we can use this command as
is since we’ve modified the image’s entrypoint to source our workspace as well.
Once the ros-caffe node is running, we can redirect our browser to the local
URL15 to see a live video feed of the current published image and label prediction
from the neural network as shown in Fig. 2.

4.4 Graphical Interfaces

One particular aspect routinely utilized by the ROS community includes all the
tools used to introspect and debug robotic software through the use of graphical
interfaces, such as rqt, Rviz, or gzviewer. Although using graphical interfaces is
15 http://127.0.0.1:8085/ros_caffe_web/index.html

Fig. 2. A simple ros-caffe web interface with live video stream and current predicted
labels, published from containerized nodes with GPU and webcamera device access.

perhaps outside of the original use case of Docker, it is perfectly possible and
in-fact relatively viable for many applications. Thanks to Linux’s pervasive use
of the file system for everything, including video and audio devices, we can
expose what we need from the host system to the container.
Although the easiest means of permitting the use of a GUI may be to simply
use the host’s installation of ROS or Gazebo, as demonstrated in this video16 ,
and thus set the master URI or server address to connect to the containers via
virtual networking and DNS containers described earlier, it may be necessary to
run a GUI from within the container, be it custom dependencies or accelerated
graphics. There are of course a plethora of solutions for various requirements and
containers, ranging from display tunneling over SSH and VNC client/server sessions,
to directly mounting X server unix sockets and forwarding alsa or pulseaudio
connections. Each method of course comes with its own pros and cons, and in
light of this evolving frontier, the reader is encouraged to read on ROS Wiki’s
Docker page17 in order to follow the latest tutorials and resources.
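For instance, a minimal sketch of the first approach, using the host's own RViz against a containerized master (the container address here is an assumption; substitute its bridge IP or a resolvable name):

$ export ROS_MASTER_URI=http://172.17.0.2:11311   # address of the roscore container
$ rosrun rviz rviz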
Below is a brief example of the turtlebot demo using Gazebo and RVIZ GUIs
via X Server sockets and graphical acceleration from within a container. First
we’ll build from OSRF’s ROS image using the desktop-full tag, as this will
have Gazebo and RViz pre-installed. Then we'll add the turtlebot packages,
the necessary world models, and custom launch file.
16 https://youtu.be/P__phnA57LM
17 http://wiki.ros.org/docker

Fig. 3. An example of containerized GUI windows rendered from within the host’s
desktop environment.

1 FROM osrf/ros:kinetic-desktop-full
2
3 # install turtlebot simulator
4 RUN apt-get update && apt-get install -y \
5 ros-${ROS_DISTRO}-turtlebot* \
6 && rm -rf /var/lib/apt/lists/*
7
8 # Getting models from [http://gazebosim.org/models/]. This may take a few seconds.
9 RUN gzserver --verbose --iters 1 /opt/ros/${ROS_DISTRO}/share/turtlebot_gazebo/worlds/playground.world
10
11 # install custom launchfile
12 ADD my_turtlebot_simulator.launch /

Note the single iteration of gzserver with the default turtlebot world used to
prefetch the model from the web and into the image. This helps cut Gazebo's
start-up time, saving each deriving container from downloading and initializing
the needed model database at runtime. The launchfile here is relatively basic,
launching the simulation, the visualisation, and a user control interface:
1 <launch>
2 <include file="$(find turtlebot_gazebo)/launch/turtlebot_world.launch" />
3 <include file="$(find turtlebot_teleop)/launch/keyboard_teleop.launch" />
4 <include file="$(find turtlebot_rviz_launchers)/launch/view_robot.launch" />
5 </launch>

For hardware acceleration using integrated graphics from Intel, we'll also need
to add some common Mesa libraries:
14 # Add Intel display support by installing Mesa libraries
15 RUN apt-get update && apt-get install -y \
16 libgl1-mesa-glx \
17 libgl1-mesa-dri \
18 && rm -rf /var/lib/apt/lists/*

For hardware acceleration using dedicated graphics for Nvidia, we’ll need to
add some hooks and variables instead for the nvidia-docker plugin:
14 # Add Nvidia display support by including nvidia-docker hooks
15 LABEL com.nvidia.volumes.needed="nvidia_driver"
16 ENV PATH /usr/local/nvidia/bin:${PATH}
17 ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}

Note how any deviations between the two setups were left to the last few
lines of the Dockerfile, specifically any layers of the image that will no longer be
hardware agnostic. This enables you to share as much of the common previous
layers between the two tags as possible, saving disk space, and shortening build
times by reusing the cache. Finally we can launch GUI containers by permitting
access to the X Server, then mounting the Direct Rendering Infrastructure and
unix socket:
1 xhost +local:root
2

3 # Run container with necessary Xorg and DRI mounts
4 docker run -it \
5 --env="DISPLAY" \
6 --env="QT_X11_NO_MITSHM=1" \
7 --device=/dev/dri:/dev/dri \
8 --volume=/tmp/.X11-unix:/tmp/.X11-unix \
9 ros:turtlebot-intel \
10 roslaunch my_turtlebot_simulator.launch
11

12 xhost -local:root

The environment variables are used to inform GUIs of the display to use, as
well as fix a subtle QT rendering issue. For Nvidia, things look much the same,
except for use of the nvidia-docker plugin to add the needed device and volume
arguments:
1 xhost +local:root
2

3 # Run container with necessary Xorg and GPU mounts
4 nvidia-docker run -it \
5 --env="DISPLAY" \
6 --env="QT_X11_NO_MITSHM=1" \
7 --volume=/tmp/.X11-unix:/tmp/.X11-unix \
8 ros:turtlebot-nvidia \
9 roslaunch my_turtlebot_simulator.launch
10

11 xhost -local:root
You can view an example using this method in the previously linked demo
video for ros-caffe, or an older GUI demo video18, now made simpler via the
nvidia-docker plugin, for qualitative evaluation.

5 Notes

As you take further advantage of the growing Docker ecosystem for your robotics
applications, you may find certain methods and third-party tools useful in
further simplifying or becoming more efficient at common development tasks
while using Docker. Here we'll cover just a few helpful practices and tools most
relevant for ROS users.

5.1 Best Practices & Caveats

There are many best practices to consider while using Docker, and as with
any new technology or paradigm, we need to know the gotchas. While much is
revealed within Docker’s own tutorial documentation and helpful posts within
the community 19 , there are a few subjects that are more pertinent to ROS users
than others.
ROS is a relatively large ’stack’ as compared to other commonly used code-
bases with Docker, such as smaller lightweight web stacks. If the objective is
to distribute and share robotics-based images using ROS, it's worthwhile to be
mindful of the size of the images you generate, so as to be bandwidth-considerate. There
are many ways to mitigate bloat from an image through careful thought while
structuring the Dockerfile. Some of this was described while going over the official
ROS Dockerfile, such as always removing temporary files before completion of a
layer generated from each Docker command.
However there are a few other caveats to consider concerning how a layer
is constructed. One is to never change the permissions of a file inside a
Dockerfile unless unavoidable; consider using the entrypoint script to make the
changes if necessary for runtime. Although a git/Docker comparison could be
18 https://youtu.be/djLKmDMsdxM
19 https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
made, Docker only notes what files have changed, not necessarily how the files
have been modified inside the layer. This causes Docker to replicate/replace the
files while creating a new layer, potentially doubling the size if you’re modifying
large files, or potentially worse, every file.
Another way to keep disk size down can be to flatten the image, or certain
spans of layers. This however prevents the sharing of intermediate layers among
commonly derived images, a method Docker uses to minimize the overall disk
usage. Flattening images also only helps in squashing large modifications to
image files, but does nothing if the squashed file system is just inherently large.
When building Dockerfiles, you’ll want to be considerate of the build context,
i.e. the parent folder of the Dockerfile itself. For example, it’s best to build a
Dockerfile from a folder that includes just the files and folders you’d like to ADD
or COPY into the image. This is because the docker client will tar/compress
the directory (and all subdirectories) where you executed the build and send
it to the docker daemon. Although files that are not referenced will not be
included in the image, building a Dockerfile from, say, your /root/, /home/ or
/tmp/ directory would be unwise, as the amount of unnecessary data sent to
the daemon would slow or kill the build. A .dockerignore file could also be used
to avoid this side effect.
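For instance, a minimal .dockerignore sketch (the entries are typical examples, not a prescription) placed next to the Dockerfile:

$ cat > .dockerignore <<'EOF'
.git
*.bag
build/
devel/
EOF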
Finally, a docker container should not necessarily be thought of as a complete
virtual environment. As opposed to VMs with their own virtualized kernel and
start-up processes, the only process that runs within the container is the one
you command. This means that there is no init system (such as upstart or
systemd) starting syslog, cron jobs, and daemons, or even reaping orphaned
zombie processes. This is usually OK, as a container's life cycle is quite short
and we normally only want to execute what we specify. However, if you intend
to use containers as a more full-fledged system requiring, say, proper signal
handling, consider using a minimal init system for Linux containers such as
dumb-init20. For most ROS users, roslaunch does a rather good job of signalling
child processes and thus serves as a fine anchor for a container's PID 1, so
simply running multiple ROS nodes per container is reasonable. For those more
concerned with using custom launch alternatives, a relevant post21 expands on
this subject further.
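For example, a sketch of roslaunch serving as PID 1, assuming the ros:tutorials image built in the education example and the talker_listener.launch file shipped with roscpp_tutorials:

$ docker run -it --rm ros:tutorials roslaunch roscpp_tutorials talker_listener.launch
  # Ctrl-C sends SIGINT to roslaunch (PID 1), which cleanly tears down all child nodes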

5.2 Transparent Proxy

One task you may find yourself performing frequently while building and tweaking
images, especially if debugging, say, minimum dependency sets, is downloading
and installing packages. This is sometimes a painful endeavor, made even
more so if your network bandwidth is less than extraordinary, or your corporation
works behind a custom proxy and time is short. One way around this is to
leverage Docker's shared networking and utilize a proxy container.
20 https://github.com/Yelp/dumb-init
21 https://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem
Squid-in-a-can22 is one such example of a simple transparent squid proxy within
a Docker container.
This provides every other Docker container, including containers used during
the build process while generating image layers, with a local cache of any
frequent HTTP traffic. By simply changing the configuration file, you can
leverage any of the more advanced squid proxy features, while avoiding the
tedious install and setup of a proper squid server on each host's distribution.
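As a rough sketch, once such a proxy container is running on the host, a build can be pointed through it using Docker's predefined proxy build argument (the address assumes the default bridge gateway and standard squid port, and may differ for your setup):

$ docker build --build-arg http_proxy=http://172.17.0.1:3128 -t ros:tutorials .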

5.3 Docker DNS resolver


We’ve shown before how ROS nodes running from separate containers within
a common software defined network can communicate utilising domain names
given to containers and resolved by Docker’s internal DNS. Communicating to
the same containers from the host through the default bridge network is also
possible, although not as straightforward without the host having similar access
to the software defined network’s local DNS. We can quickly circumvent this
issue as we did with the proxy, by running the required service from another
container within the same network. In this case we can use a simple DNS server
such as Resolvable23 to help the local host resolve container domain names within
the virtual network.
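Before wiring up host-side resolution, you can confirm that names already resolve inside the project network itself; a quick sketch re-using the industry example's network:

$ docker run -it --rm --net=foo_default ros:indigo getent hosts master.foo_default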
One word of caution: one should avoid using domain names that could collide,
as in the case of running two instances of the industry networking example on
the same Docker engine, e.g. two sets of roscores and nodes on different project
networks, say foo and bar. If we were to then include a Resolvable container
into each project, the use of local domain names such as master or talker could
then collide for the host, whereas explicit domain naming including the project’s
network post-fix such as foo_default would still properly resolve.

References
1. Arulkumaran, K.: Kaixhin/dockerfiles (2015), https://github.com/Kaixhin/dockerfiles
2. Boettiger, C.: An introduction to Docker for reproducible research. SIGOPS Oper. Syst. Rev. 49(1), 71–79 (Jan 2015), http://doi.acm.org/10.1145/2723872.2723882
3. Bonsignorio, F., del Pobil, A.P.: Toward Replicable and Measurable Robotics Research. IEEE Robotics & Automation Magazine 22(3), 32–35 (2015), http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7254310
4. Guglielmelli, E.: Research Reproducibility and Performance Evaluation for Dependable Robots. IEEE Robotics & Automation Magazine 22(3), 4–4 (2015), http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7254300
5. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
22 https://github.com/jpetazzo/squid-in-a-can
23 https://github.com/gliderlabs/resolvable
6. Mabry, R., Ardonne, J., Weaver, J., Lucas, D., Bays, M.: Maritime autonomy in a box: Building a quickly-deployable autonomy solution using the Docker container environment. IEEE Oceans (2016)
7. Sucan, I.A., Chitta, S.: MoveIt!, http://moveit.ros.org
8. White, R.: ROS + Docker: Enabling repeatable, reproducible and deployable robotic software via containers (2015), https://vimeo.com/142150815, ROSCon, Hamburg, Germany
9. White, R., Quigley, M., Christensen, H.: SROS: Securing ROS over the wire, in the graph, and through the kernel. In: Humanoids Workshop: Towards Humanoid Robots OS. Cancun, Mexico (2016)

Biography

Ruffin White is a Ph.D. student in the Contextual Robotics
Institute at UC San Diego, under the direction of Dr. Henrik
Christensen. Having earned his Masters of Computer Science
at the Institute for Robotics & Intelligent Machines, Georgia
Tech, he remains an active contributor to ROS and collabora-
tor with the Open Source Robotics Foundation. His research
interests include mobile robotic mapping, with a focus on se-
mantic understanding for SLAM and navigation, as well as
advancing repeatable and reproducible research in the field of
robotics by improving development tools for robotic software.

Dr. Henrik I. Christensen is a Professor of Computer Science
in the Dept. of Computer Science and Engineering at UC San
Diego. He is also the director of the Institute for Contextual
Robotics. Prior to UC San Diego he was the founding director
of the Institute for Robotics and Intelligent machines (IRIM)
at Georgia Institute of Technology (2006-2016). Dr. Chris-
tensen does research on systems integration, human-robot in-
teraction, mapping and robot vision. He has published more
than 300 contributions across AI, robotics and vision. His re-
search has a strong emphasis on ”real problems with real so-
lutions.” A problem needs a theoretical model, implementation, evaluation, and
translation to the real world.
