Introduction - Docker & Containerization
Docker is one of several tools that help us create and manage containers.
The most common problem it solves is an application that works on one machine but fails on another.
There are many reasons why this happens:
1. Missing tools
2. Different configuration
3. Hardware dependencies
Docker uses images and containers to allow apps to run anywhere consistently.
Images are created from lightweight configuration files that describe everything your app needs to run.
When Virtual Machines came
As physical machines became more powerful, the servers of those days could be consolidated into virtual machines running inside a single physical machine.
Software (a hypervisor) allows multiple virtual machines to run inside a single computer.
Each virtual machine can serve as a dedicated machine with enough resources to handle requests.
In effect, it simulates actual machines / servers, but they now live inside a single computer.
So tech companies can buy a fleet of these machines, set up virtual machines on them, and rent those virtual machines out.
Each virtual machine gets dedicated resources such as CPU, RAM, and storage.
Deploying software on someone else’s computer does not guarantee that it will work.
With a virtual machine, one still has to install an operating system and programs, which can take time.
One also needs to make sure the right ports are open and that the software and other dependencies are up to date.
Containerization
Containerization can be considered the next step in the evolution from virtual machines.
A container packages everything that your web application / application needs to run.
So, in a way, one does not need to install a full operating system.
By putting everything you need inside a container, we can move that same container to another machine and it will work flawlessly and exactly the same.
Containers vs VMs
Virtual machines virtualize hardware and containers virtualize the OS kernel itself.
Anatomy of a Container
A container is composed of two things:
1. Linux namespaces
Namespaces provide different views of your system.
They allow admins to restrict what processes can see on a system, providing a layer of isolation.
2. Control groups (cgroups)
Control groups constrain the resources (CPU, memory, I/O) that a group of processes can use.
Namespaces and control groups are Linux-only features, which is why Docker containers natively run only on Linux (on other operating systems, Docker runs them inside a lightweight Linux virtual machine).
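As a rough illustration of both mechanisms (assuming a Linux host with the unshare tool and Docker installed; the alpine image and the 256 MB limit are arbitrary choices):
# New PID namespace: ps only sees the processes started inside it
sudo unshare --pid --fork --mount-proc ps aux
# Control group limit applied through Docker: cap the container's memory at 256 MB
docker run --memory=256m alpine sleep 60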
Docker
On Docker Hub, Docker's public image repository, there are many different images that one can pull.
Using these images from the public repository, we can build a container.
Using the container, one can create a service.
The docker-compose.yml file is where one outlines all the different services, or containers, that the application needs in order to run / work.
These services are lightweight, standalone, executable software components that together make up an application.
Each service has an image associated with it.
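A minimal docker-compose.yml might look like the sketch below; the service names, the nginx image, the password, and the published port are illustrative placeholders rather than values from these notes:
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_PASSWORD: password
Running docker-compose up then starts both services as containers.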
Virtual machines each have their own kernel, whereas Docker containers share the host's Linux kernel.
This is why a container boots up so fast: only one kernel is required.
Docker Image
An image can be considered a template for a container.
Container images are compressed, prepackaged files that contain the application along with its configuration and environment, plus the instructions on how to start the application.
That starting instruction is called the entry point.
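For an existing image, the entry point can be inspected with the command below (the image name is a placeholder):
docker inspect --format '{{.Config.Entrypoint}}' <image-name>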
Dockerfile
Images are defined using a Dockerfile.
It is just a text file with a list of steps and configurations used to create a Docker image.
Building the Dockerfile produces an image.
Fortunately, we don’t need to make images from scratch, as many images are already available on Docker Hub.
We can use those images and build upon them.
3. The USER instruction specifies which user runs the subsequent commands.
By default, Docker executes commands as the root user.
4. RUN statements are commands that further customize the image, such as installing software or configuring files.
5. EXPOSE 80 means that when the container is running, it will listen on port 80.
6. COPY copies contents from one destination to another, typically from the build context on the host into the image, depending on the use case and the instructions in the Dockerfile.
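Putting these instructions together, a minimal illustrative Dockerfile might look like the sketch below; the base image, the nginx package, and the copied file are placeholders, not values from these notes:
# Start from an existing image pulled from Docker Hub
FROM ubuntu:22.04
# User that runs the following commands (root is the default anyway)
USER root
# Customize the image, e.g. install software
RUN apt-get update && apt-get install -y nginx
# Copy a file from the build context on the host into the image
COPY index.html /var/www/html/index.html
# Declare that the container listens on port 80 at runtime
EXPOSE 80
# Entry point: the instruction that starts the application
ENTRYPOINT ["nginx", "-g", "daemon off;"]
The image is then built with a command such as:
docker build -t <image-name> .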
Port binding allows one to take a port on the host machine (where the container is running) and map it to a port in the container.
Everything you create inside the container stays within that container.
Once the container is removed, its data gets deleted with it, which can be really inconvenient for applications that need to save data.
This maps the directory /tmp/container/ on the host machine to /tmp/test/ inside the container.
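Such a bind mount is typically expressed with the -v flag, for example (the image name is a placeholder):
docker run -v /tmp/container:/tmp/test <image-name>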
3. To pull an image…
docker pull <image-name>
4. Create a volume
docker volume create <volume-name>
6. When you execute docker ps, it displays a list of active containers along with
their essential details.
6. -p 5432:5432 maps port 5432 of the host machine (left side) to port 5432 of
the container (right side).
This allows access to the PostgreSQL database running inside the container
through port 5432 on the host machine.
This volume persists the PostgreSQL database data even if the container is stopped or removed; each new container mounts the volume at the same location inside the container, so the data outlives any single container's lifecycle.
When you run the docker run command with postgres:9.6-alpine, Docker pulls the postgres:9.6-alpine image from the Docker Hub repository (if it's not already downloaded) and starts a container based on that image.
This container then runs an instance of PostgreSQL version 9.6 within the Alpine Linux
environment.
Ports
When two identical port numbers are specified (5432:5432), it simply maps the same port number on the host and in the container.
The first 5432 is the host port, meaning you can connect to the PostgreSQL service
from the host machine using localhost:5432 or <host_machine_IP>:5432, while the
second 5432 is the container port, which is the port where the PostgreSQL service
inside the container is running.
By mapping these ports, it allows external connections to reach the PostgreSQL service
running within the container and provides a consistent and familiar way to access the
service from the host machine.
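Putting the pieces above together, the run command would look roughly like this; the volume name and password are placeholders, and jrvs-psql matches the container name used later in these notes:
docker run --name jrvs-psql \
  -e POSTGRES_PASSWORD=password \
  -d \
  -v pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:9.6-alpine
Here /var/lib/postgresql/data is where the postgres image stores its database files, so mounting the volume at that path is what makes the data persist.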
2. -it: These are flags used together to allocate a pseudo-TTY (terminal) and keep
STDIN open, allowing you to interact with the container's command line.
4. bash: This is the command to be executed inside the container. In this case, it's
starting an interactive Bash shell within the specified container.
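Put together, the interactive shell command described above looks like this (using the container name from these notes):
docker exec -it jrvs-psql bash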
The container shares the host operating system's kernel but operates as a separate
entity, with its own filesystem, processes, and networking.
Docker containers share the same underlying kernel as the host operating
system.
They use features like namespaces and control groups to provide isolation and
resource constraints, allowing multiple containers to run on the same host while
maintaining separation.
Different shells, but the same kernel.
Environment Variables
The environment variables declared inside the container can be listed with this command:
docker exec jrvs-psql env
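Environment variables are usually set when the container is started, via the -e flag; the variable name below is just an illustrative placeholder:
docker run -e MY_VAR=value <image-name>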
Docker Troubleshooting
1. How can I fix the Error of Docker-compose up exited with code 1 - Stack
Overflow
References
1. Quick Docker Tutorial