Docker for Java Developers
Arun Gupta
First Edition
June 2016: First Release
The O'Reilly logo is a registered trademark of O'Reilly Media, Inc. Docker for Java Developers, the cover image, and related trade dress are trademarks of O'Reilly Media, Inc.
While the publisher and the author have used good faith efforts to ensure that the information and instructions contained in this work are accurate, the publisher and the author disclaim all responsibility for errors or omissions, including without limitation responsibility for damages resulting from the use of or reliance on this work. Use of the information and instructions contained in this work is at your own risk. If any code samples or other technology this work contains or describes is subject to open source licenses or the intellectual property rights of others, it is your responsibility to ensure that your use thereof complies with such licenses and/or rights.
978-1-491-95756-1
Table of Contents
Preface
1. Introduction to Docker
    Docker Concepts
    Docker Images and Containers
    Docker Toolbox
    Kubernetes
    Other Platforms
4. Conclusion
    Getting Started with Java and Docker
Preface
Acknowledgments
I would like to express gratitude to the people who made writing this book a fun experience. First and foremost, many thanks to O'Reilly for providing an opportunity to write this book. The team provided excellent support throughout the editing, reviewing, proofreading, and publishing processes. At O'Reilly, Brian Foster believed in the idea and helped launch the project. Nan Barber was thorough and timely with her editing, which made the book fluent and consistent. Thanks also to the rest of the O'Reilly team, some of whom we may not have interacted with directly, but who helped in many other ways. Daniel Bryant (@danielbryantuk) and Roland Hu (@ro14nd) did an excellent technical review of the book. This ensured that the book stayed true to its purpose and explained the concepts in the simplest possible ways. A vast amount of information in this book is the result of delivering the Docker for Java Developers workshop all around the world. A huge thanks goes to all the attendees of these workshops whose questions helped clarify my thoughts. Last, but not least, I seek forgiveness from all those who have helped us over the past few months and whose names we have failed to mention.
CHAPTER 1
Introduction to Docker
Docker Concepts
Docker simplifies software delivery of distributed applications in
three ways:
Build
Provides tools you can use to create containerized applications. Developers package the application, its dependencies, and infrastructure as read-only templates. These are called Docker images.
Ship
Provides a way to share those images. Images are pushed to and pulled from a registry, such as Docker Hub, so they can be distributed across teams, environments, and hosts.
Run
Provides the runtime for starting containers from images. Because the image carries the application and its dependencies, the container behaves the same on a laptop, in a data center, or in the cloud.
Docker Toolbox
Docker Toolbox is the fastest way to get up and running with
Docker in development. It provides different tools required to get
started with Docker.
The complete set of installation instructions is available from the Docker website as well.
Docker Engine
Docker Engine is the central piece of Docker. It is a lightweight runtime that builds and runs your Docker containers. The runtime consists of a daemon that communicates with the Docker client and executes commands to build, ship, and run containers.
Docker Engine uses Linux kernel features like cgroups, kernel
namespaces, and a union-capable filesystem. These features allow
the containers to share a kernel and run in isolation with their own
process ID space, filesystem structure, and network interfaces.
Docker Engine is supported on Linux, Windows, and OS X.
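As a quick sketch of how the client and daemon interact, the following commands ask the daemon for its version and configuration details; the exact output depends on your installation:

docker version   # show client and daemon (server) versions
docker info      # show daemon details such as storage driver and number of containers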
Docker Machine
Docker Machine allows you to create Docker hosts on your computer, on cloud providers, and inside your own data center. It creates
servers, installs Docker on them, and then configures the Docker
client to talk to them. The docker-machine CLI comes with Docker
Toolbox and allows you to create and manage machines.
Once your Docker host has been created, it then has a number of
commands for managing containers:
Start, stop, restart container
Upgrade Docker
Configure the Docker client to talk to a host
Commonly used commands for Docker Machine are listed in
Table 1-1.
Table 1-1. Common commands for Docker Machine
Command  Purpose
create   Create a machine
ls       List machines
env      Display the commands to set up the environment for the Docker client
stop     Stop a machine
rm       Remove a machine
ip       Get the IP address of the machine
Once the environment is configured using the env command, any commands from the docker CLI will run on this Docker Machine.
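As a minimal sketch, a local Docker host can be created with the VirtualBox driver and the client pointed at it (the machine name my-host is illustrative):

# Create a Docker host named my-host using the VirtualBox driver
docker-machine create -d virtualbox my-host

# Configure the Docker client to talk to the new host
eval "$(docker-machine env my-host)"

# Any docker command now runs against my-host
docker ps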
Docker Compose
Docker Compose is a tool that allows you to define and run applications with one or more Docker containers.
would consist of multiple containers such as one for the web server,
another for the application server, and another one for the database.
With Compose, a multi-container application can be easily defined
in a single file. All the containers required for the application can be
then started and managed with a single command.
With Docker Compose, there is no need to write scripts or use any additional tools to start your containers. All the containers are defined in a configuration file using services, and then the docker-compose script is used to start, stop, restart, and scale the application, all the services in that application, and all the containers within those services.
Commonly used commands for Docker Compose are listed in
Table 1-2.
Table 1-2. Common commands for Docker Compose
Command  Purpose
scale    Set the number of containers for a service
stop     Stop services
kill     Kill containers
logs     View output from containers
ps       List containers
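As a minimal sketch, a two-service application can be described in a docker-compose.yml file; the service names and images below are illustrative (both images are available on Docker Hub):

version: "2"
services:
  web:
    image: jboss/wildfly
    ports:
      - "8080:8080"
  db:
    image: couchbase
    ports:
      - "8091:8091"

The application can then be managed with single commands:

docker-compose up -d   # create and start all services in the background
docker-compose ps      # list the containers in the application
docker-compose stop    # stop all services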
Docker Swarm
An application typically consists of multiple containers. Running all
containers on a single Docker host makes that host a single point of
failure (SPOF). This is undesirable in any system because the entire
system will stop working, and thus your application will not be
accessible.
Docker Swarm allows you to run a multi-container application on
multiple hosts. It allows you to create and access a pool of Docker
hosts using the full suite of Docker tools. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts. This means an application that consists of multiple containers can now be seamlessly deployed to multiple hosts.
Figure 1-4 shows the main concepts of Docker Swarm.
Kubernetes
Kubernetes is an open source orchestration system for managing
containerized applications. These can be deployed across multiple
hosts. Kubernetes provides basic mechanisms for deployment, maintenance, and scaling of applications. An application's desired state, such as 3 instances of WildFly or 2 instances of Couchbase, can be specified declaratively, and Kubernetes ensures that this state is maintained.
Replication controller
A replication controller ensures that a specified number of pod
replicas are running on worker nodes at all times. It allows both
up- and down-scaling of the number of replicas. Pods inside a
replication controller are re-created when the worker node
reboots or otherwise fails.
A replication controller that creates two instances of a Couchbase pod can be defined as shown here:
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-controller
spec:
  # Two replicas of the pod to be created
  replicas: 2
  # Identifies the label key and value on the Pod
  # that this replication controller is responsible
  # for managing
  selector:
    app: couchbase-rc-pod
  # "cookie cutter" used for creating new pods when
  # necessary
  template:
    metadata:
      labels:
        # label key and value on the pod.
        # These must match the selector above.
        app: couchbase-rc-pod
    spec:
      containers:
      - name: couchbase
        image: couchbase
        ports:
        - containerPort: 8091
Service
Each pod is assigned a unique IP address. If a pod that is inside a replication controller dies, it is re-created, but it may be given a different IP address. This makes it difficult for an application server such as WildFly to access a database such as Couchbase using its IP address.
A service defines a logical set of pods and a policy by which to
access them. The IP address assigned to a service does not
change over time, and thus can be relied upon by other pods.
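As a minimal sketch, a service for the Couchbase pods created by the replication controller above might be defined as follows; the service name is illustrative, and the selector must match the pod labels:

apiVersion: v1
kind: Service
metadata:
  name: couchbase-service
spec:
  selector:
    # Must match the labels on the pods created by
    # the replication controller
    app: couchbase-rc-pod
  ports:
  - port: 8091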
Note that the labels used in selector must match the metadata
used for the pods created by the replication controller.
So far, we have learned the application concepts of Kubernetes. Let's
look at some of the system concepts of Kubernetes, as shown in
Figure 1-5.
Cluster
A Kubernetes cluster is a set of physical or virtual machines and other infrastructure resources that are used to run your applications. The machines that manage the cluster are called masters, and the machines that run the containers are called workers.
Node
A node is a physical or virtual machine. It has the necessary
services to run application containers.
A master node is the central control point that provides a unified view of the cluster. Multiple masters can be set up to create a high-availability cluster.
A worker node runs tasks as delegated by the master; it can run
one or more pods.
Kubelet
This is a service running on each node that allows you to run
containers and be managed from the master. This service reads
container manifests as YAML or JSON files that describe the
application. A typical way to provide this manifest is using the
configuration file.
A Kubernetes cluster can be started easily on a local machine for
development purposes. It can also be started on hosted solutions,
turn-key cloud solutions, or custom solutions.
Kubernetes can be easily started on Google Cloud using the following command:
curl -sS https://get.k8s.io | bash
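Once the cluster is up, a quick check with the kubectl client shows whether the nodes are ready; the node names and count will vary with your setup:

kubectl get nodes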
Other Platforms
Docker Swarm allows multiple containers to run on multiple hosts. Kubernetes provides an alternative way to run multi-container applications across multiple hosts. This section looks at other platforms that provide similar capabilities.
Apache Mesos
Apache Mesos provides high-level building blocks by abstracting CPU, memory, storage, and other resources from machines (physical or virtual). Multiple applications that use these building blocks can run on Mesos, with resource isolation and sharing across distributed applications.
Marathon is one such framework that provides container orchestration. Docker containers can be easily managed in Marathon. Kubernetes can also be started as a framework on Mesos.
Rancher Labs
Rancher Labs develops software that makes it easy to deploy and manage containers in production. They have two main offerings: Rancher and RancherOS.
Rancher is a container management platform that natively supports and manages your Docker Swarm and Kubernetes clusters. Rancher takes a Linux host, either a physical machine or a virtual machine, and makes its CPU, memory, local disk storage, and network connectivity available on the platform. Users can then choose between Kubernetes and Swarm for orchestrating their containers.
Joyent Triton
Triton by Joyent virtualizes the entire data center as a single, elastic Docker host. The Triton data center is built using Solaris Zones but offers an endpoint that serves the Docker remote API. This allows the Docker CLI and other tools that can talk to this API to run containers easily.
Triton can be used as a service from Joyent or installed on premises. You can also download the open source version and run it yourself.
CHAPTER 2
This chapter explains how to build and run your first Docker container using Java.
You'll learn the syntax needed to create Docker images using Dockerfiles and run them as containers. Sharing these images using Docker Hub is explained. Deploying a sample Java EE application using prebuilt Docker images is then covered. This application will consist of an application server and a database container on a single host. The application will be deployed using Docker Compose and Docker Swarm. The same application will also be deployed using Kubernetes.
Dockerfile
Docker builds images by reading instructions from a text document,
usually called a Dockerfile. This file contains all the commands a
user can usually call on the command line to assemble an image.
The docker build command uses this file and executes all the
instructions in this file to create an image.
The build command is also passed a context that is used during
image creation. This context can be a path on your local filesystem
or a URL to a Git repository. The context is processed recursively,
which means any subdirectories on the local filesystem path and any
submodules of the repository are included.
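As a minimal sketch, both kinds of context can be passed to docker build; the image name hello-java and the repository URL are illustrative:

# Build an image from the Dockerfile in the current directory;
# "." is the build context
docker build -t hello-java .

# Build an image using a Git repository as the context (illustrative URL)
docker build -t hello-java https://github.com/example/hello-java.git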
Table 2-1. Common Dockerfile instructions
Command  Purpose                                         Example
FROM     Sets the base image                             FROM ubuntu
COPY     Copies files into the image                     COPY .bash_profile /home
ENV      Sets an environment variable                    ENV HOSTNAME=test
RUN      Executes a command
CMD      Provides defaults for executing the container   CMD ["/bin/echo", "hello world"]
EXPOSE   Informs Docker of the ports the container listens on   EXPOSE 8093
Other images can use this new image in the FROM instruction. It allows you to create a chain where multipurpose base images are used and additional software is installed; for example, the Dockerfile for WildFly. The complete chain for this image is shown here:
jboss/wildfly -> jboss/base-jdk:7 -> jboss/base-jdk ->
jboss/base -> centos
Often the base image for your application will be a base image that already has some software included in it; for example, the base image for Java. So your application's Dockerfile will have an instruction as shown here:
FROM java
Each image has a tag associated with it that defines multiple versions
of the image. For example, java:8 is the JDK that has OpenJDK 8
already included in it. Similarly, the java:9-jre image has the
OpenJDK 9 Java runtime environment (JRE) included in it.
The Dockerfile can also contain the CMD instruction. CMD provides defaults for executing the container. If multiple CMD instructions are listed, then only the last CMD will take effect. This ensures that the Docker container can run one command, and only one.
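As a minimal sketch, a Dockerfile that uses the Java base image and defines only a default command might look like this; the command itself is illustrative:

FROM java
# Default command executed when the container starts
CMD ["java", "-version"]

Such an image can be built with docker build -t hello-java . from the directory containing the Dockerfile and started with docker run hello-java.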
The current directory (.) is the build context, the directory for copying any files to the image. No files are copied in this case.
The java image is downloaded from Docker Hub. It also downloads the complete chain of base images.
The CMD instruction is added as a new layer to the image.
REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
hello-java   latest   2547fe6782bd   3 minutes ago   642.9 MB
java         latest   97d87da6866e   9 days ago      642.9 MB
Our image hello-java and the base image java are both shown in
this list.
Each image can optionally be tagged using the name:tag format.
This allows multiple versions of the image to be created. By default,
an image is assigned the latest tag. The total size of the image is
shown in the last column.
The docker run command has multiple options that can be speci
fied to customize the container. Multiple options can be combined
together. Some of the common options are listed in Table 2-2.
Table 2-2. Common options for the docker run command
Option   Purpose
-i       Keep STDIN open even if not attached
-t       Allocate a pseudo-TTY
-d       Run the container in the background and print the container ID
--name   Assign a name to the container
--rm     Automatically remove the container when it exits
-e       Set an environment variable
-P       Publish all exposed ports to random ports on the host
-p       Publish a container's port(s) to the host
-m       Set a memory limit for the container
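As a minimal sketch that combines some of these options, the following starts a WildFly container in the background with a name and a published port; the container name is illustrative:

# Run WildFly in the background and map the application server port to the host
docker run -d --name mywildfly -p 8080:8080 jboss/wildfly

# Follow the server logs
docker logs -f mywildfly

# Stop and remove the container when done
docker stop mywildfly
docker rm mywildfly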
Table 2-3. Common commands for interacting with a Docker registry
Command  Purpose
login    Register or log in to a Docker registry
search   Search a registry for images
pull     Pull an image or a repository from a registry
push     Push an image or a repository to a registry
logout   Log out from a Docker registry
tag      Tag an image into a repository
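As a minimal sketch of sharing an image on Docker Hub, the local image is tagged into a user's repository and pushed; the username is illustrative:

# Log in to Docker Hub
docker login

# Tag the local image into the user's repository
docker tag hello-java username/hello-java

# Push the tagged image to Docker Hub
docker push username/hello-java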
TAG      IMAGE ID       CREATED         SIZE
latest   2547fe6782bd   4 minutes ago   642.9 MB
latest   2547fe6782bd   4 minutes ago   642.9 MB
latest   97d87da6866e   9 days ago      642.9 MB
917c0fc99b35: Pushed
latest: digest: sha256:3155410e3950ebec18ecf5fcda3e1caf7c68fd \
29ab4a34bc19fb56841a630924 size: 9334
Anyone can download this image using the docker pull command.
Alternatively, a container using this image can be started using the
docker run command. This command downloads the image as
well, if it does not already exist on your Docker Host.
4. Run application.
Use the Docker Compose file to start the multi-container application on the Docker Swarm cluster.
A sample script to create a three-node Docker Swarm cluster is in
Example 2-5.
Example 2-5. Three-node Swarm cluster
echo "Creating Docker Machine for Consul ..."
docker-machine \
create \
-d virtualbox \
consul-machine
echo "Starting Consul ..."
docker $(docker-machine config consul-machine) run -d \
--restart=always \
-p "8500:8500" \
-h "consul" \
progrium/consul -server -bootstrap
echo "Creating Docker Swarm master ..."
docker-machine \
create \
-d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery="consul://$(docker-machine ip \
consul-machine):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip \
consul-machine):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
swarm-master
echo "Creating Docker Swarm worker node 1 ..."
docker-machine \
create \
-d virtualbox \
--swarm \
29
--swarm-discovery="consul://$(docker-machine ip \
consul-machine):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip \
consul-machine):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
swarm-node-01
echo "Creating Docker Swarm worker node 2 ..."
docker-machine \
create \
-d virtualbox \
--virtualbox-disk-size "5000" \
--swarm \
--swarm-discovery="consul://$(docker-machine ip \
consul-machine):8500" \
--engine-opt="cluster-store=consul://$(docker-machine ip \
consul-machine):8500" \
--engine-opt="cluster-advertise=eth1:2376" \
swarm-node-02
echo "Configure to use Docker Swarm cluster ..."
eval "$(docker-machine env --swarm swarm-master)"
More details about using Docker Compose with Docker Swarm are
available on the Docker website.
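As a minimal sketch of that final step, once the client is configured against the Swarm cluster, the application described in docker-compose.yml can be started and inspected; the file name is the Compose default:

# Start all services defined in docker-compose.yml on the Swarm cluster
docker-compose up -d

# Show the containers and the Swarm nodes they were scheduled on
docker-compose ps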
apiVersion: v1
kind: ReplicationController
metadata:
  name: couchbase-rc
  labels:
    name: couchbase-rc
    context: oreilly-docker-book
spec:
  replicas: 1
  template:
    metadata:
      name: couchbase-rc-pod
      labels:
        name: couchbase-rc-pod
        context: oreilly-docker-book
    spec:
      containers:
      - name: couchbase-rc-pod
        image: arungupta/oreilly-couchbase:latest
        ports:
        - containerPort: 8091
        - containerPort: 8092
        - containerPort: 8093
        - containerPort: 11210
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
  labels:
    name: wildfly-rc
    context: oreilly-docker-book
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: wildfly
        context: oreilly-docker-book
    spec:
      containers:
      - name: wildfly-rc-pod
        image: arungupta/oreilly-wildfly:latest
        ports:
        - containerPort: 8080
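Assuming this configuration is saved in a file such as app.yml (an illustrative name), it can be applied to the cluster with kubectl:

kubectl create -f app.yml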
The list of created replication controllers can be seen using the command kubectl get rc, which gives the output shown here:
NAME           DESIRED   CURRENT   AGE
couchbase-rc   1         1         6m
wildfly-rc     1         1         6m
The list of services can be seen using the command kubectl get svc, which gives this output:
NAME                CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
couchbase-service   10.0.54.170   <none>        8091/TCP,...   6m
kubernetes          10.0.0.1      <none>        443/TCP        2h
The list of ports is truncated here, but the command will show a
complete output. Similarly, the list of pods can be seen using the
command kubectl get po.
Replication controllers, pods, and services are accessible within the
Kubernetes cluster. The WildFly replication controller needs to be
exposed as an external service outside the Kubernetes cluster as
shown in the following code. This allows us to perform CRUD operations on the application deployed in WildFly:
kubectl expose rc wildfly-rc --target-port=8080 --port=8080
--type=LoadBalancer
The complete service description can now be seen using the command kubectl describe svc wildfly-rc, which produces this output:
Name:                   wildfly-rc
Namespace:              default
Labels:                 context=oreilly-docker-book,name=wildfly-rc
Selector:               context=oreilly-docker-book,name=wildfly
Type:                   LoadBalancer
IP:                     10.0.242.62
LoadBalancer Ingress:   aca5ae155f86011e5aa71025a2ab0be1-658723113.us-west-2.elb.amazonaws.com
Port:                   <unset> 8080/TCP
NodePort:               <unset> 30591/TCP
Endpoints:              10.244.2.3:8080
Session Affinity:       None
Events:
  FirstSeen  LastSeen  Count  From  SubobjectPath  Type...
The output is truncated, but executing the command will show the
complete output. Wait about three minutes for the load balancer to
settle. The value of attribute LoadBalancer Ingress shows the
externally accessible address of the service. The application is now
fully deployed and ready to be accessed.
The application expects a JSON document as input that defines a
book entity, which can be added as shown in Example 2-7. Make
sure to change the IP address obtained from the previous step. Executing this command will display the exact output shown in Example 2-8.
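As a minimal sketch of such a request, the resource path and JSON fields below are illustrative and must match the deployed application; replace the host with the LoadBalancer Ingress address obtained above:

# Illustrative resource path and payload
curl -v \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{"isbn":"1","name":"Docker for Java Developers"}' \
  http://<LoadBalancer-Ingress>:8080/books/resources/book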
The complete list of entities can be queried as shown in
Example 2-9. Make sure to change the IP address obtained in the
previous step. Executing the command will display the output
shown in Example 2-9.
For more information, check out the Kubernetes documentation.
CHAPTER 3
NetBeans
NetBeans is adding integrated support for Docker in their next
release. At the time of writing, this feature is currently available in
the nightly build, which can be downloaded from the NetBeans
website.
Pull an Image
A new image can be downloaded by right-clicking the newly created
connection and selecting Pull as shown in Figure 3-3.
Build an Image
A new image can be created by selecting Build as shown in Figure 3-3 and specifying the name of the directory that contains the Dockerfile. This directory serves as the build context for creating the image.
The Dockerfile editor provides syntax coloring that distinguishes instructions from comments.
This image can then be pushed to a registry (e.g., Docker Hub).
Run a Container
Once an image is downloaded, a container can be started by right-clicking on the image and selecting Run as shown in Figure 3-4.
A new tag can be attached to this image by using the Tag menu.
More details about NetBeans and Docker are available on the NetBeans website.
Eclipse
Docker support in Eclipse is available by installing a plug-in.
Eclipse introduces a new perspective as shown in Figure 3-5. It contains Docker Explorer, which allows you to manage images and containers.
Properties
Detailed information about connections, images, and containers.
Pull an Image
A new image can be downloaded by right-clicking on the newly created connection and selecting Pull as shown in Figure 3-7.
Build an Image
A new image can be created by clicking the Build Image wizard as
shown in Figure 3-8.
Run a Container
Once an image is downloaded, a container can be started by right-clicking on the image and selecting Run as shown in Figure 3-9.
IntelliJ IDEA
Docker support in IntelliJ IDEA is available by installing a plug-in
from the plug-in repository. Install the plug-in as shown in
Figure 3-10.
Pull an Image
A new image can be downloaded by right-clicking on the newly created connection and selecting Pull image as shown in Figure 3-12.
Build an Image
Creating a new image requires us to create a new directory, typically called docker-dir. This directory will contain the Dockerfile, the application archive such as the .war file, and a file that has settings for running the container. This directory serves as the build context for creating the image.
The application needs to be configured so the archive is generated in this directory. A new Docker deployment configuration can be created so this archive is built before the image, as shown in Figure 3-14.
Run a Container
Once an image is downloaded, a container can be started by right-clicking on the image and selecting Create container, as shown in Figure 3-15.
More details about Docker and IntelliJ are available from the IntelliJ
IDEA Docker help page.
Maven
Maven is a build tool for Java applications that makes the build process easy by providing a uniform build system. Maven projects are built using a project object model (POM) defined in a file in the main directory of the project. This file, typically called pom.xml, contains a set of plug-ins that builds the project.
Each Maven project has a well-defined location for source, tests, resources, and other related files of the project. The process for building and distributing a project is clearly defined. There are predefined phases that map to the lifecycle of a project. For example, the compile phase will compile the code and the package phase will take the compiled code and package it into a distributable format, such as a JAR.
Each plug-in in pom.xml offers multiple goals, and the plug-in associates a goal with one or more phases. For example, maven-compiler-plugin is the plug-in that compiles your source. One of the goals offered by this plug-in is compiler:compile, and it's tied to the compile phase. So the developer can invoke the mvn compile command, which invokes the associated goal from the plug-in. Read more about the Maven lifecycle at the Apache website.
There are a few Maven plug-ins that provide goals to manage
Docker images and containers:
fabric8io/docker-maven-plugin
spotify/docker-maven-plugin
wouterd/docker-maven-plugin
alexec/docker-maven-plugin
All plug-ins offer goals that allow Docker lifecycle commands to be
tied to a Maven phase. For example, the standard Maven package phase packages the compiled code into a JAR or WAR archive. Associating a goal from the plug-in to this phase can take the created archive and package it as a Docker image. Similarly, the standard Maven install phase installs the archive in the local repository. Attaching a plug-in's goal to this phase can run the container as well.
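For example, a goal can be bound to a phase with an execution. A minimal sketch using the fabric8io plug-in might look like the following; the image name and base image are illustrative, and the plug-in version is omitted:

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <!-- Illustrative image name -->
        <name>hello-java</name>
        <build>
          <!-- Illustrative base image -->
          <from>java</from>
        </build>
      </image>
    </images>
  </configuration>
  <executions>
    <execution>
      <!-- Build the Docker image whenever mvn package runs -->
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>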
                    <log>Hello</log>
                  </wait>
                </run>
              </image>
            </images>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
The plug-in used here offers the goals listed in Table 3-1.
Table 3-1. Docker Maven plug-in goals
Goal          Description
docker:start  Create and start containers
docker:stop   Stop and destroy containers
docker:build  Build images
docker:watch  Watch for changes and automatically rebuild images or restart containers
docker:push   Push images to a registry
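Assuming the plug-in is configured as described above, the image can then be built as part of the normal Maven build:

mvn package docker:build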
TAG      IMAGE ID       CREATED          SIZE
latest   3432332a5c80   32 seconds ago   642.9 MB
CHAPTER 4
Conclusion