Introduction - Docker & Containerization


Introduction

Back in the Days


Back in the 90s, deploying a web application was a very different process.
You would get a physical machine, connect it to the internet, and load your program /
application onto it.
People using your service would send requests to your domain, and that machine would
serve them.

This meant buying physical computer servers / machines to run the service.

If high usage called for multiple machines, setting up networking and other
infrastructure between them became a necessity.

Docker is just one of the tools that help us create containers.
The most common problem is an application that works on one machine but doesn't work
on another.
There are many reasons why this happens:
1. Missing tools
2. Different configuration
3. Hardware dependencies

To solve this, there are multiple approaches:


1. Configuration management tools such as Chef, Puppet, and Ansible.
But these require knowledge about the hardware and OS.

2. Virtual machines as code (Vagrant).
Heavy, slow, and requiring inconvenient configuration.

3. Docker, which takes a simpler approach.

Docker uses images and containers to allow apps to run anywhere consistently.
Images are created from lightweight configuration files that describe everything
your app needs to run.
When Virtual Machines Came
As computers / machines got better, the dedicated servers of the old days could be
squeezed into virtual machines inside a single physical machine.
Software allows multiple virtual machines to run inside a single computer.

Each virtual machine can serve as a dedicated machine with enough resources to
handle requests.

So it basically simulates actual machines / servers, but these now live inside a single
computer.

Tech companies could therefore buy a bunch of these machines, set up virtual machines
on them, and rent those virtual machines out.
Each virtual machine gets dedicated resources: CPU, RAM, storage, and so on.

This model of rented virtual infrastructure is what powered Web 2.0.

Issues with VMs


Virtual machines made software development and deployment much, much better, but
there were still some flaws and limitations.

Issues with dependencies.


Software might work on Windows but fail to run on another OS such as macOS.

Deploying the software on someone else's computer might not make the software work.
With a virtual machine, one still has to install programs and an operating system, which
takes time.
One also needs to make sure the right ports are open and the software and other
dependencies are up to date.
Containerization
Containerization can be considered the next step of evolution after the virtual machine.
A container packages everything that your web application / application needs.
So, in a way, one does not need to install a full guest OS.

By putting everything you need inside a container, we can put that same container
on another machine and it will work flawlessly and exactly the same.

This resolves the hassle of dependencies and other issues.

Containers vs VMs
Virtual machines virtualize the hardware, while containers virtualize at the level of the
OS kernel itself.
Anatomy of a Container
A container is composed of two things:
1. Linux namespaces
Namespaces provide different views of your system.
They allow admins to restrict what processes can see on a system, providing a
layer of isolation.

2. Linux control groups (cgroups)


Another Linux feature, which limits how much of any resource a process can use.
This allows the Docker engine to share the available hardware resources among
containers and optionally enforce limits and constraints.

Namespaces and control groups are unique to Linux. Hence, Docker containers are
designed to run only on Linux (on Windows and macOS, Docker runs them inside a
lightweight Linux virtual machine).
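These two building blocks can be inspected directly on any Linux host, without Docker. A minimal sketch:

```shell
# Every process's namespace memberships are listed under /proc/<pid>/ns.
# These entries (mnt, pid, net, uts, ipc, user, ...) are the isolation
# boundaries that container runtimes build on.
ls /proc/$$/ns

# The control groups the current shell belongs to, which is where
# resource limits (CPU, memory, ...) would be enforced.
cat /proc/self/cgroup
```

A container runtime creates fresh namespaces and cgroups for each container instead of reusing the host's.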
Docker
In the Docker repository (Docker Hub), there are many different images that one can pull.
It is a public repository of images.
Using these images from the public repository, we can build a container.
Using the container, one can create a service.

Containers can be considered actual working apps.


An application can be made up of multiple containers, each created from its own image.

The docker-compose.yml file is where one outlines all the different services
or containers that the application needs to run / work.
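As a sketch, a docker-compose.yml for a hypothetical app with a web service and a database might look like this (the service names, ports, and password are illustrative):

```yaml
version: "3"
services:
  web:
    build: .                     # build the app image from our own Dockerfile
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine   # pulled from the public repository
    environment:
      POSTGRES_PASSWORD: example
```

Running docker-compose up would then start both containers together.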

There is no image of the application written by us on the public repository.


For this, we create our own Dockerfile and basically tell Docker how to create this
image.

Containers are made up of different services, which are basically lightweight,
standalone, and executable software components that make up an application.
Each service has an image associated with it.

A Docker container is its own environment.


If there are multiple services in the setup, we have to open up different ports.
With these port numbers, the services can communicate with each other using the
internal ports.

Virtual machines have their own kernel, whereas Docker containers share the host's
Linux kernel.
The reason a container boots up so fast is that it only requires the one kernel that is
already running.
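This shared kernel is easy to observe: a container reports the same kernel release as its host (the alpine image here is just an illustration, and Docker must be installed):

```shell
# Kernel release on the host:
uname -r

# Inside a container: identical, because there is only one kernel
# shared by the host and all containers.
docker run --rm alpine uname -r
```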
Docker Image
An image can be considered a template for the container.
Container images are compressed, prepackaged files that contain the application,
along with its configuration and environment, and the instructions on how to start your
application.
That instruction is called the entry point.

Images don't have to be made from scratch.


They can be stacked on top of each other easily, allowing us to take advantage of other
images as well.
A Docker image is similar to a VM image (e.g. a CentOS image), while a Docker
container is similar to a running VM instance (e.g. a running CentOS or Windows 10 VM).

A Docker image is similar to a CD (with an OS image loaded), while a container is similar
to a physical machine (which consists of CPUs, memory, storage, and network).
You can use one CD to install an OS on any number of machines. In the Docker world,
you can use one Docker image to create many containers.

Dockerfile
Images are defined using a Dockerfile.
It is just a text file with a list of steps and configurations used to create a Docker image.
Building the Dockerfile produces an image.

Fortunately, we don't need to make images from scratch, as many images are already
available on Docker Hub.
We can use those images and build upon them.

Example of a Dockerfile


FROM php:7.0-apache
LABEL maintainer="Test test [email protected]"
USER root
COPY src/ /var/www/html
EXPOSE 80

1. The FROM keyword specifies the base image.
This states which image to base your own image on.

2. The LABEL keyword adds additional metadata, such as the maintainer of this
image, which can be the person who is maintaining the application image.

3. The USER keyword specifies which user to use for any subsequent commands.
By default, Docker executes commands as the root user.

4. RUN statements are commands that further customize the image, such as
installing software or configuring files.

5. EXPOSE 80 means that when the container is running, it will listen on port
number 80.

6. COPY copies contents from a source on the host to a destination inside the
image, depending on the use case and instructions mentioned in the Dockerfile.

To build this image, we use the command


docker build -t [name] [location-of-dockerfile]
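For the example Dockerfile above, assuming it sits in the current directory, the build and a quick test run might look like this (the image name my-php-app is illustrative):

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t my-php-app .

# Run it in the background, publishing container port 80 on host port 8080.
docker run -d --name my-php-app -p 8080:80 my-php-app
```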
Docker Port Binding
No ports are needed if the data the app needs is already inside the container.
However, it is more useful when we are able to communicate with the container from
the local machine and import data that way.

Port binding allows one to take a port on the machine where the container is
running and map it to a port in the container.

The port mapping format is


outside:inside

Outside means outside the container, on the host machine.


Inside means inside the container.
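For example, to map host port 8080 (outside) to container port 80 (inside) — the nginx image here is just an illustration:

```shell
# Requests to localhost:8080 on the host are forwarded to port 80
# inside the container.
docker run -d --name web -p 8080:80 nginx:alpine

# From the host machine:
curl http://localhost:8080
```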
Docker Saving Data
Containers are meant to be disposable.
When they are deleted, they are gone for good and cannot be recovered.
But what happens to the data inside them?
That data is also gone for good.

Everything you create inside the container stays within that container.
Once the container is removed, the data gets deleted with it, which can be really
inconvenient for applications that need to save data.

Fortunately, we have a volume mounting feature to overcome this issue.


Similar to the port binding feature, this allows us to map a folder / directory tree on the
host machine to one inside the container.

This can be done with the -v or --volume flag:


-v /tmp/container:/tmp/test

This maps the directory /tmp/container/ on the host machine to /tmp/test/
inside the container.

This keeps the data in sync.
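A quick sketch of the mapping above in action (the alpine image is just an illustration):

```shell
# Write a file to /tmp/test inside a throwaway container...
mkdir -p /tmp/container
docker run --rm -v /tmp/container:/tmp/test alpine \
    sh -c 'echo hello > /tmp/test/data.txt'

# ...and it shows up in the mapped host directory, outliving the container.
cat /tmp/container/data.txt
```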


Docker Commands
1. To check the status of the Docker daemon:
sudo systemctl status docker

2. To start the Docker daemon:


sudo systemctl start docker

3. To pull an image:
docker pull <image-name>

4. To create a volume:
docker volume create <volume-name>

5. To see the list of all containers created:


docker container ls -a
To see the list of containers filtered by name:
docker container ls -a -f name=jrvs-psql

6. docker ps displays a list of running containers along with their essential
details.

7. To stop and start a container:


docker container stop jrvs-psql
docker container start jrvs-psql

8. To remove a container:


docker container rm jrvs-psql
As an example, the following command creates and runs a PostgreSQL container:

docker run --name jrvs-psql -e POSTGRES_PASSWORD= -d \
  -v pgdata:/var/lib/postgresql/data -p 5432:5432 postgres:9.6-alpine

1. docker run creates and starts the Docker container.

2. --name jrvs-psql specifies the name of the container.

3. -e POSTGRES_PASSWORD= sets an environment variable named
POSTGRES_PASSWORD to an empty value.
This is used to set the password for the PostgreSQL database.

4. -d Runs the container in detached mode, meaning it runs in the background.

5. -v pgdata:/var/lib/postgresql/data creates a volume named pgdata and
mounts it at the /var/lib/postgresql/data directory inside the container.

6. -p 5432:5432 maps port 5432 of the host machine (left side) to port 5432 of
the container (right side).
This allows access to the PostgreSQL database running inside the container
through port 5432 on the host machine.

7. postgres:9.6-alpine specifies the Docker image to use for creating the
container. In this case, it's using the "postgres" image with the tag "9.6-alpine",
which refers to PostgreSQL version 9.6 running on Alpine Linux (a lightweight
distribution).
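Assuming the jrvs-psql container above is up, the published port can then be used from the host (a local psql client is required for the first command; it will prompt for the password set via POSTGRES_PASSWORD):

```shell
# Connect to the containerized PostgreSQL through the published host port.
psql -h localhost -p 5432 -U postgres

# Or run the psql client that ships inside the container itself.
docker exec -it jrvs-psql psql -U postgres
```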
Misc
Volumes and Images
In the above command, when the container's volume is created with the -v flag, the
volume named 'pgdata' is mounted to the location specified inside the container.

So, when the PostgreSQL container (postgres:9.6-alpine) is started using this
command, Docker creates a volume named pgdata and mounts it at the
/var/lib/postgresql/data directory within the container.

This volume helps persist the PostgreSQL database data even if the container is
stopped or removed, allowing the data to outlive the container's lifecycle.

postgres:9.6-alpine refers to the Docker image.


In Docker, images are the blueprints used to create containers.
postgres:9.6-alpine is the Docker image for PostgreSQL version 9.6 using the Alpine
Linux distribution as the base image.

When you run the docker run command with postgres:9.6-alpine, Docker pulls
the postgres:9.6-alpine image from the Docker Hub repository (if it's not already
downloaded) and starts a container based on that image.

This container then runs an instance of PostgreSQL version 9.6 within the Alpine Linux
environment.
Ports
When two identical port numbers are specified (5432:5432), it simply maps the same
port number on the host and in the container.

The first 5432 is the host port, meaning you can connect to the PostgreSQL service
from the host machine using localhost:5432 or <host_machine_IP>:5432, while the
second 5432 is the container port, the port on which the PostgreSQL service inside
the container is listening.

Mapping these ports allows external connections to reach the PostgreSQL service
running within the container and provides a consistent and familiar way to access the
service from the host machine.

Accessing the Container Terminal


The docker exec command will let you run arbitrary commands inside an existing
container.
docker exec -it <mycontainer> bash

1. docker exec: the command used to execute a command inside a Docker
container.

2. -it: these flags, used together, allocate a pseudo-TTY (terminal) and keep
STDIN open, allowing you to interact with the container's command line.

3. <mycontainer>: a placeholder for the name or ID of the Docker container in
which you want to execute the command.

4. bash: the command to be executed inside the container. In this case, it starts
an interactive Bash shell within the specified container.

The container shares the host operating system's kernel but operates as a separate
entity, with its own filesystem, processes, and networking.

Docker containers share the same underlying kernel as the host operating
system.
They use features like namespaces and control groups to provide isolation and
resource constraints, allowing multiple containers to run on the same host while
maintaining separation.
Different shells, but the same kernel.

Here, docker0 is an interface used by Docker to facilitate communication between
Docker containers and the host's network.

While docker0 is not a physical network interface, it functions as a virtual bridge
facilitating communication among Docker containers and between containers and the
host system.
Therefore, it appears when listing network interfaces using commands like ifconfig or ip
addr, alongside physical and other logical interfaces on the system.

Environment Variables
The environment variables declared inside a container can be seen with this command:
docker exec jrvs-psql env
Docker Troubleshooting
1. How can I fix the Error of Docker-compose up exited with code 1 - Stack
Overflow
