Learning Docker Networking - Sample Chapter
Become a proficient Linux administrator by learning the art of
container networking with elevated efficiency using Docker
Rajdeep Dua
Vaibhav Kohli
Rajdeep Dua has worked in R&D and developer relations roles at Microsoft, Google, VMware, and Salesforce.com. He has experience with multiple cloud platforms, such as Google App Engine, Heroku, Force.com, vSphere, and Google Compute Engine. Rajdeep has been working on Docker and related container technologies for more than two years. He completed his MBA in IT from IIM Lucknow in 2000.
Preface
This book helps the reader learn about, create, deploy, and administer Docker networking. Docker is a Linux container implementation that enables the creation of lightweight, portable development and production-quality environments, which can be updated incrementally. Docker achieves this by leveraging containment principles, such as cgroups and Linux namespaces, along with overlay-filesystem-based portable images.
Docker provides the networking primitives that allow administrators to specify how different containers network with each other, connect each of their components, distribute them across a large number of servers, and ensure coordination between them irrespective of the host or the VM they are running on. This book aggregates the latest Docker networking technology and provides in-depth explanations with setup details.
Chapter 6, Next Generation Networking Stack for Docker: libnetwork, will look into some of the deeper, conceptual aspects of Docker networking. One of these is libnetwork, the future of the Docker network model, which is already taking shape with the release of Docker 1.9. While explaining the libnetwork concept, we will also study the CNM model, its various objects and components, along with its implementation code snippets. Next, we will look into the drivers of CNM in detail, the prime one being the overlay driver, with a deployment as part of a Vagrant setup. We will look at standalone integrations of containers with an overlay network, with Docker Swarm and Docker Machine, as well. In the next section, we explain the CNI interface and its executable plugins, and give a tutorial to configure Docker networking with the CNI plugin. In the last section, Project Calico is explained in detail; it provides scalable networking solutions that are based on libnetwork and integrates primarily with Docker, Kubernetes, Mesos, bare metal, and VMs.
Autonomy, decentralization, parallelism, and isolation
Chapter 1
Linux bridges
These are L2/MAC learning switches built into the kernel that are used for forwarding.
Open vSwitch
This is an advanced bridge that is programmable and supports tunneling.
NAT
Network address translators are intermediate entities that translate IP addresses and ports (SNAT, DNAT, and so on).
iptables
This is a policy engine in the kernel used for managing packet forwarding, firewalls, and NAT.
AppArmor/SELinux
Firewall policies for each application can be defined with these.
Various networking components can be used to work with Docker, providing new
ways to access and use Docker-based services. As a result, we see a lot of libraries
that follow a different approach to networking. Some of the prominent ones are
Docker Compose, Weave, Kubernetes, Pipework, libnetwork, and so on. The
following figure depicts the root ideas of Docker networking:
[Figure: the root ideas of Docker networking — Containers A and B communicating over Unix-domain sockets and other IPC; Containers C and D attached to the docker0 Linux bridge, with port mapping done by the Docker proxy using iptables; Container E attached to Open vSwitch; Container F using the host network directly.]
A container's networking mode is selected at docker run time with the --net flag:
--net default: attaches the container to the default docker0 bridge
--net=none: no external networking; the container gets only a loopback interface
--net=container:$container2: reuses the network namespace of another container
--net=host: shares the host's network stack with the container
If we create two containers called Container1 and Container2, both of them are
assigned an IP address from a private IP address space and also connected to the
docker0 bridge, as shown in the following figure:
Both the preceding containers will be able to ping each other as well as reach the external world. For external access, their ports will be mapped to host ports.
As mentioned in the previous section, containers use network namespaces. When
the first container is created, a new network namespace is created for the container.
A vEthernet link is created between the container and the Linux bridge. Traffic sent
from eth0 of the container reaches the bridge through the vEthernet interface and
gets switched thereafter. The following code can be used to show a list of Linux
bridges:
# show linux bridges
$ sudo brctl show
The output will be similar to the one shown as follows, with a bridge name and the
veth interfaces on the containers it is mapped to:
bridge name     bridge id           STP enabled     interfaces
docker0         8000.56847afe9799   no              veth44cb727
                                                    veth98c3700
How does the container connect to the external world? The iptables nat table on
the host is used to masquerade all external connections, as shown here:
$ sudo iptables -t nat -L -n
...
Chain POSTROUTING (policy ACCEPT)
target       prot opt  source           destination
MASQUERADE   all  --   172.17.0.0/16    !172.17.0.0/16
...
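The matching logic of that MASQUERADE rule can be sketched in a few lines of Python. This is illustrative only, not Docker code; 172.17.0.0/16 is docker0's default address range:

```python
# Mirror the iptables rule above: masquerade packets whose source is in
# docker0's subnet (172.17.0.0/16) and whose destination is NOT in it,
# that is, traffic leaving the bridge for the outside world.
import ipaddress

DOCKER_SUBNET = ipaddress.ip_network("172.17.0.0/16")

def is_masqueraded(src: str, dst: str) -> bool:
    """True if the rule's source/destination match would fire."""
    return (ipaddress.ip_address(src) in DOCKER_SUBNET
            and ipaddress.ip_address(dst) not in DOCKER_SUBNET)

# A container reaching the Internet: source gets rewritten to the host's IP.
print(is_masqueraded("172.17.0.2", "8.8.8.8"))      # True
# Container-to-container traffic on the bridge: left untouched.
print(is_masqueraded("172.17.0.2", "172.17.0.3"))   # False
```

Container-to-container traffic stays on the bridge unmodified; only outbound traffic is source-NATed to the host's address.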
How are containers reached from the outside world? The port mapping is again done using the iptables nat option on the host machine.
[Figure: Container1 and Container2 attached to docker0 through the veth pair interfaces vethAQI2T and veth01qe; Container2's eth0 port 500 is mapped to the host address 172.17.42.1:49514 via the -P flag.]
Docker OVS
Open vSwitch is a powerful network abstraction. The following figure shows how
OVS interacts with the VMs, Hypervisor, and the Physical Switch. Every VM has
a vNIC associated with it. Every vNIC is connected through a VIF (also called a
virtual interface) with the Virtual Switch:
OVS uses tunnelling mechanisms such as GRE, VXLAN, or STT to create virtual
overlays instead of using physical networking topologies and Ethernet components.
The following figure shows how OVS can be configured for the containers to
communicate between multiple hosts using GRE tunnels:
[Figure: host1 and host2, each running Docker with Container1 and Container2 attached to a local docker0 bridge; docker0 on each host connects to an Open vSwitch bridge (br0) with a gre0 port, and the hosts' eth0 interfaces carry the GRE tunnel between them.]
$ docker run /bin/bash
Apps on c1 and c2 can communicate over the following Unix socket address:
struct sockaddr_un address;
address.sun_family = AF_UNIX;
snprintf(address.sun_path, UNIX_PATH_MAX, "/var/run/foo/bar");
The server (Server.c on c1) binds to this address and reads from it; the client (Client.c on c2) connects and writes to it:
connect(socket_fd, (struct sockaddr *) &address,
        sizeof(struct sockaddr_un));
write(socket_fd, buffer,
      nbytes);
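As a runnable counterpart to the C fragments above, here is the same server/client exchange in Python; the socket path is an arbitrary temporary file chosen for the example:

```python
# A server and a client exchanging bytes over a Unix-domain socket,
# mirroring the bind/connect/write pattern of Server.c and Client.c.
import os
import socket
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "foo.sock")

def server(ready: threading.Event, received: list):
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)                  # like bind() on sockaddr_un in Server.c
    srv.listen(1)
    ready.set()                     # signal that the server is accepting
    conn, _ = srv.accept()
    received.append(conn.recv(1024))  # like read() in Server.c
    conn.close()
    srv.close()

ready, received = threading.Event(), []
t = threading.Thread(target=server, args=(ready, received))
t.start()
ready.wait()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(path)                   # like connect() in Client.c
cli.sendall(b"hello from c2")       # like write() in Client.c
cli.close()
t.join()
print(received[0].decode())         # prints: hello from c2
```

Because the socket lives on the filesystem, two containers can use this channel only if the directory holding the socket is shared between them (for example, via a shared volume).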
aed84ee21bde
...
172.17.0.2 c1alaias 6e5cdeb2d300 c1
The first is an entry for the container c2 that uses the Docker container ID as a host name. The second entry, 172.17.0.2 c1alaias 6e5cdeb2d300 c1, uses the link alias to reference the IP address of the c1 container.
The following figure shows two containers, Container1 and Container2, connected by veth pairs to the docker0 bridge with --icc=true; this means the two containers can access each other through the bridge:
[Figure: with --icc=true, Container1 and Container2 are attached to docker0 through the veth interfaces veth01qe and vethAQI2T and can reach each other across the bridge.]
Links
Links provide service discovery for Docker. They allow containers to discover and securely communicate with each other by using the flag --link name:alias. Inter-container communication can be disabled with the daemon flag --icc=false. With this flag set to false, Container1 cannot access Container2 unless explicitly allowed via a link. This is a huge advantage for securing your containers. When two containers are linked together, Docker creates a parent-child relationship between them, as shown in the following figure:
[Figure: a web container linked to a db container over the docker0 bridge via the veth interfaces vethAQI2T and veth01qe, forming a secure tunnel between parent and child.]
Sandbox
A sandbox contains the configuration of a container's network stack. This includes
management of the container's interfaces, routing table, and DNS settings. An
implementation of a sandbox could be a Linux network namespace, a FreeBSD
jail, or other similar concept. A sandbox may contain many endpoints from
multiple networks.
Endpoint
An endpoint connects a sandbox to a network. An implementation of an endpoint could be a veth pair, an Open vSwitch internal port, or something similar. An endpoint can belong to only one network and only one sandbox.
Network
A network is a group of endpoints that are able to communicate with each other
directly. An implementation of a network could be a Linux bridge, a VLAN, and
so on. Networks consist of many endpoints, as shown in the following diagram:
[Figure: three Docker containers, each with its own network sandbox; the sandboxes attach to networks through endpoints, and a single sandbox may hold endpoints on multiple networks.]
All containers on the same network can communicate freely with each other.
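To make the three CNM objects concrete, here is a small illustrative Python model; the class and method names are assumptions for teaching purposes, not libnetwork's actual API:

```python
# A toy model of the CNM objects: a Sandbox (one per container) holds
# endpoints, each Endpoint joins exactly one Network, and two containers
# can reach each other when their sandboxes share a network.
from dataclasses import dataclass, field

@dataclass
class Network:
    name: str
    endpoints: list = field(default_factory=list)

@dataclass
class Endpoint:
    name: str
    network: Network            # an endpoint belongs to exactly one network

@dataclass
class Sandbox:                  # the container's interfaces, routes, DNS
    container: str
    endpoints: list = field(default_factory=list)

    def attach(self, ep: Endpoint):
        self.endpoints.append(ep)          # may hold endpoints from
        ep.network.endpoints.append(ep)    # multiple networks

    def can_reach(self, other: "Sandbox") -> bool:
        mine = {ep.network.name for ep in self.endpoints}
        theirs = {ep.network.name for ep in other.endpoints}
        return bool(mine & theirs)         # share at least one network

backend = Network("backend")
c1, c2 = Sandbox("container1"), Sandbox("container2")
c1.attach(Endpoint("ep1", backend))
c2.attach(Endpoint("ep2", backend))
print(c1.can_reach(c2))  # True: both have an endpoint on "backend"
```

A container not attached to "backend" would get False from can_reach, matching the definition of a network as a group of endpoints that can communicate directly.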
We will discuss the details of how CNM is implemented in Chapter 6, Next Generation
Networking Stack for Docker: libnetwork.
Summary
In this chapter, we learned about the essential components of Docker networking,
which have evolved from coupling simple Docker abstractions and powerful
network components such as Linux bridges and Open vSwitch.
We learned how Docker containers can be created with various network modes. In the default mode, port mapping, implemented through iptables NAT rules, allows traffic arriving at the host to reach the containers. Later in the chapter, we covered the basic linking of containers. We also talked about the next generation of Docker networking, which is called libnetwork.