Containers With Docker
A humble introduction
Foreword
Kernel & Operating System
The Operating System (OS) provides an interface between the User and the Machine.
The kernel itself is usually located in /boot.
1.
What is Docker ?
Real-world analogy: Intermodal Containers
Before the 1960s:
- A multiplicity of methods for storing/transporting goods
- Problems of interaction between goods
= It was a mess… so shipping was expensive and not very reliable
2.
What is Docker, really ?
Docker is a tool for running applications in
isolated environments called Containers.
[Diagram: the Docker Engine]
3.
Is it just another Virtual Machine ?
Nope ^^
Virtual Machines:
● Hardware-level virtualization
● Each VM runs its own OS (and kernel)
● Fully isolated, hence more secure
● Heavyweight
● Startup time in minutes
● Allocates fixed memory

Containers:
● OS-level virtualization
● Share the host OS (and kernel)
● Process-level isolation, possibly less secure
● Lightweight
● Startup time in milliseconds
● Requires less memory
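Since Containers share the host kernel (unlike VMs), this can be verified directly on a Docker host; a small sketch, with the Docker-dependent command commented out:

```shell
# Containers share the host kernel: on a Docker host, a container
# reports the host's kernel version (Docker-dependent line commented out).
uname -r                            # kernel version of the host
# docker run --rm ubuntu uname -r  # would print the exact same version
```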
4.
Great ! Is it ready for production ?
Of course !!!
The first release of Docker was in 2013
https://docs.docker.com/engine/install/
7.
What Kind of Sorcery Is This ?
[Diagram: an IMAGE is BUILT from a Dockerfile; a CONTAINER is CREATED from an IMAGE; START and STOP switch a Container between RUNNING and STOPPED]
Main concepts:
You can build “Images” from a “Dockerfile”.
// You can easily write your own Dockerfile, it’s a simple text file (seen later).
To create a new running “Container”, you have to run an “Image”.
// You can do the same in 2 steps: “docker create” + “docker start”
Next you can stop or start your existing “Containers”.
There are tons of ready-to-use Images available on DockerHub
(a hosted repository service provided by Docker), but you can also
host your own repo; we call that a Registry.
When we ran the Image jgreat/2048:
as the Image was missing on the host, Docker automatically
pulled it from the Registry, created the Container and started it.
You can use any distribution, as long as it's based on the Linux kernel.
Of course, it’s perfect for web applications, databases, etc…
But also for a Minecraft server or a good old CS 1.6 ^^
docker run -d -p 27015:27015 -p 27015:27015/udp --name cs16-server ggoulart/cs1.6-server-more-maps
8.
OK, let's do it !
Create a directory named “test” and, inside it, a file named “Dockerfile”.
Put these lines in it:

# build an Image on top of the “ubuntu” Image
FROM ubuntu
# execute the command: echo “Hello World”
CMD ["echo", "Hello World"]

From inside the “test” directory, build the Image and name (tag) it hello_img
(don't forget the final dot, it's the build context):
docker build -t hello_img .
To create a new container from the Image hello_img and start it:
docker run hello_img
CONTAINERS
To see all the existing containers on the host (-a option to see all the containers, stopped too):
docker ps -a
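The steps above can be condensed into a small script (a sketch: the docker invocations are commented out because they need a running Docker Engine):

```shell
# Create the “test” directory and its Dockerfile, then (on a Docker
# host) build and run it. Docker invocations are commented out.
set -e
mkdir -p test
cat > test/Dockerfile <<'EOF'
FROM ubuntu
CMD ["echo", "Hello World"]
EOF

# docker build -t hello_img test   # build the Image from test/Dockerfile
# docker run hello_img             # create + start a Container
# docker ps -a                     # list all Containers, stopped ones too
```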
Understand this well: each time you RUN an Image, it creates a NEW Container !!!
10.
How do we specify this “main” process ?
In short: if an ENTRYPOINT instruction is present, it defines the main process;
otherwise the CMD instruction does.

FROM ubuntu
ENTRYPOINT ["echo", "Hello World"]

docker run hello_img echo bye → output: Hello World echo bye
(the run arguments are appended to ENTRYPOINT; with CMD instead,
they would replace it entirely and the output would be just: bye)
11.
Why do CMD and ENTRYPOINT both exist ?
Because the power comes when you combine them !
FROM ubuntu
ENTRYPOINT ["echo", "Hello"]
CMD ["World"]

→ In this form, CMD adds overridable parameters to the ENTRYPOINT instruction.
→ Without parameters it displays: Hello World
At the moment, maybe you don't see the “real” power of this.
But it lets you create Images with a default command and/or arguments
that can be overridden from the command line when creating Containers.
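The combination rule can be simulated in plain shell, with two variables standing in for the two instructions (an illustration of the semantics only, not Docker itself):

```shell
# Simulation of how docker run combines ENTRYPOINT, CMD and the
# command-line arguments (no Docker involved).
ENTRYPOINT='echo Hello'   # stands in for: ENTRYPOINT ["echo", "Hello"]
CMD='World'               # stands in for: CMD ["World"]

run() {
  if [ "$#" -gt 0 ]; then
    $ENTRYPOINT "$@"      # run arguments override CMD…
  else
    $ENTRYPOINT $CMD      # …otherwise CMD is appended to ENTRYPOINT
  fi
}

run           # -> Hello World
run Docker    # -> Hello Docker
```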
12.
It’s time to Share our image
You need to push your Image to a Registry.
Of course you can run your own private Registry,
but here we will use the public DockerHub.
First tag the Image with your DockerHub username, then push it:
docker tag hello_img ludk/hello_img
docker push ludk/hello_img
It's now available on DockerHub:
And/or you can create a new Container from this Image like this:
docker run ludk/hello_img
13.
Wait… does it download everything every time ?
Each Container is an Image with a readable/writable layer on top of
a bunch of read-only layers.
These layers (also called intermediate images) are generated when the
commands in the Dockerfile are executed during the Docker image build.

FROM debian
RUN apt-get update && apt-get install -y nano
RUN apt-get install -y apache2 && apt-get clean
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]

[Diagram: the Image layers built from this Dockerfile are READ ONLY; each Container adds its own READABLE/WRITABLE layer on top]
Each instruction in the Dockerfile creates a new Image layer.
And each Image layer has its own ID (like a commit in a Git project).
FROM debian
RUN apt-get update && apt-get install -y nano
RUN apt-get install -y apache2 && apt-get clean
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
14.
How to interact with the container ?
The main kind of process we usually run
inside a Container is what we call a “server”.
If you need a shell inside this container (to look at log files, for example):
docker exec -it websrv /bin/bash
(You can also follow the container's main output with: docker logs -f websrv)
15.
How to work inside a container ?
You're right: until now we just played with Containers.
To really work with them, you have to
share data between the Container and the Host.
=> Let me introduce Volumes.
Inside your test directory, create myproject/index.html
<html>
<head><title>Hello</title></head>
<body>Volumes are great !!!</body>
</html>
We need to delete our container and create a new one with a Volume:
docker rm -f websrv
docker run -d -p 80:80 --name websrv -v /path/to/myproject:/var/www/html debian_nano_apache
As this path must be an absolute path on the Host, the command is tied to one machine :/
A more explicit way to mount Volumes is the --mount syntax, described in the Docker documentation.
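To avoid hard-coding the absolute path, you can compute it at run time; this sketch only prints the resulting command instead of executing it (no Docker Engine needed):

```shell
# Compute the absolute host path and print the corresponding
# docker run command (printed, not executed: illustration only).
mkdir -p myproject
printf '<html><body>Volumes are great !!!</body></html>\n' > myproject/index.html

SRC="$(pwd)/myproject"
echo docker run -d -p 80:80 --name websrv \
  --mount "type=bind,source=${SRC},target=/var/www/html" \
  debian_nano_apache
```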
16.
And if I want to copy stuff inside my container ?
You just need to use the COPY instruction, example:
FROM debian
RUN apt-get update && apt-get install -y nano
RUN apt-get install -y apache2 && apt-get clean
COPY myproject /var/www/html
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
We need to build a new Image from the new version of the Dockerfile.
And next, we can delete our Container and create a new one.
docker build -t debian_nano_apache .
docker rm -f websrv
docker run -d -p 80:80 --name websrv debian_nano_apache
It's perfect if you want to distribute your project with the source code inside,
but it's not the best way when developing: for every change in your code
you need to rebuild the Image and create a new Container for testing.
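The full rebuild cycle above can be scripted as well (a sketch; the docker commands are commented out since they need a Docker host):

```shell
# Recreate the Dockerfile with the COPY instruction and (on a Docker
# host) rebuild the Image and recreate the Container.
set -e
mkdir -p myproject
cat > Dockerfile <<'EOF'
FROM debian
RUN apt-get update && apt-get install -y nano
RUN apt-get install -y apache2 && apt-get clean
COPY myproject /var/www/html
ENTRYPOINT ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
EOF

# docker build -t debian_nano_apache .
# docker rm -f websrv
# docker run -d -p 80:80 --name websrv debian_nano_apache
```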
17.
containers working together
A really basic LAMP architecture could look like this:

[Diagram: on the Docker Host, a Docker Network connects two Containers:
a MySQL Server Container (EXPOSE 3306) with a Volume for its DB folder (mysql),
and an Apache+PHP Container (ports 8000:80) with a Volume for the www folder]
./db/Dockerfile (MySQL Server):
FROM mysql:5.6

./web/Dockerfile (Apache + PHP 7.4):
FROM webdevops/php-apache-dev:7.4

And a minimal PHP page to test the database connection (the mysqli constructor
call is a reconstruction; host and credentials match the environment variables
of the compose file):

<?php
$conn = new mysqli("db_container", "devuser", "devpass", "test_docker");
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}
echo "Connected successfully";
$conn->close();
18.
Docker compose
https://jstobigdata.com/docker-compose-cheatsheet/
With Compose, you use a YAML file to configure your application’s services.
docker-compose up Starts the project (-d for detached mode, --build to rebuild images).
=> builds images, (re)creates containers and starts containers.
docker-compose stop Stops running containers without removing them.
docker-compose down Stops containers and removes containers and networks
(add -v to also remove volumes, --rmi to also remove images).
In fact, as we just set environment variables, we don't need to define
custom Dockerfiles; we can simply do this:

version: "3"
services:
  db_container:           # MySQL Server
    image: mysql:5.6
    environment:
      MYSQL_ROOT_PASSWORD: my_secret_pw
      MYSQL_DATABASE: test_docker
      MYSQL_USER: devuser
      MYSQL_PASSWORD: devpass
    networks:
      - myNetwork
  web_container:          # Apache + PHP 7.4
    image: webdevops/php-apache-dev:7.4
    environment:
      WEB_DOCUMENT_ROOT: /var/www/html
    depends_on:
      - db_container
    volumes:
      - /path/to/myproject:/var/www/html
    ports:
      - 8000:80
    networks:
      - myNetwork
networks:
  myNetwork:
19.
dockerfile instructions SUMMARY
20.
other docker commands
21.
Where To go ?
Container Orchestration
= managing the life cycles of containers,
especially in large, dynamic environments.
https://docs.docker.com/engine/swarm/ https://kubernetes.io/