
How to create a container using Podman?
These days we hear a lot about containers and virtual machines (VMs), and if you are a beginner, it can be hard to tell these technologies apart, since they look similar and solve much the same problem.
In this tutorial, we will explain what a container is and what a container engine is, and then install and use Podman as our container engine to run a first container.
What is a Container?
To understand what a container is, let's take an example: we have an application that uses PHP version 8, and that version pulls in its own libraries and dependencies. We download and install all of that on our machine with no problem. Next, we have a second application that needs a different PHP version, with its own libraries and binaries. If we try to install those packages too, we get an error, because the first application already installed the same packages at different versions. This kind of dependency conflict, often called a version mismatch, leads to a lot of struggling.
To fix this, we could keep each application in its own folder with its own libraries and dependencies, but that quickly becomes hard to manage.
The second solution is to use a VM (virtual machine): we set up each application in a separate VM, first installing an operating system and then PHP and the other dependencies. We do the same for the second application, so running two applications requires two full VMs, each with its own resources. This is tedious and expensive, but it was how things were done for quite some time, until what we now call containers arrived.
So, the solution containers give us here is that we take the first application with its own libraries and dependencies, put it in a container (box) itself, give this box a name, and save it in a location. We can take it to any operating system and run it without any problem because it's isolated from the operating system; we just need to set the environment for it.
This environment that we need to set up is what we call a container runtime environment.
The following illustration shows where we are at this point:
This runtime environment is the layer that gives the container everything it needs to function correctly. As long as this layer exists, the container (our box) will run the same way anywhere, no matter which operating system we use. Even the hardware doesn't matter, except for the CPU architecture.
For example, we cannot take a container image built for the x86_64 architecture and run it on a Raspberry Pi, because the architecture is different. Beyond that, we can run it anywhere we want; the only requirement is the same architecture.
Now the question is: what is this runtime environment actually? Recall that when we set up a new application in a VM, we first install the operating system and then all the dependencies the application needs. The runtime environment plays the same role for a container: when we run the application, it behaves as if it were running inside its own operating system.
OK, now we understand what this layer does, but how is it built in the first place?
How Do Containers Work Behind the Scenes?
In Linux, we have a feature called namespaces, which you can think of as a prison: anything inside the "prison" cannot see or reach what is outside. The same goes for an application; if we run it inside a set of namespaces, it is isolated from the rest of the system.
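To see this isolation in action, Linux ships an `unshare` command (from util-linux) that starts a process inside new namespaces. This is a minimal sketch of what a container engine does for us automatically; it assumes unprivileged user namespaces are enabled on your kernel:

```shell
# Start `ps` inside new user, PID, and mount namespaces.
# --map-root-user maps our user to root inside the namespace,
# --fork makes the new process PID 1 of the namespace, and
# --mount-proc remounts /proc so the process listing reflects
# the new PID namespace rather than the host's.
unshare --user --map-root-user --pid --fork --mount-proc ps ax
```

Inside the new PID namespace, `ps` sees itself as PID 1, exactly like the first process inside a container; every other process on the host is invisible to it.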
Containers also use cgroups (control groups), another Linux kernel feature that lets us limit the resources the process inside a container can use. With cgroups, we can set CPU, memory, and other resource limits for a container.
A third important piece is SELinux, which enforces access-control policies that confine what a container is allowed to touch on the host.
There are other parts, but for now, we can stick with just these three parts (Linux namespaces, cgroups, SELinux) and understand what each does.
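A container engine wires these pieces together for us behind simple flags. As a hedged sketch (the image name and host path below are example values, not from this tutorial), this is how cgroup limits and SELinux labeling look with Podman, the engine we use later on; the namespaces are set up automatically:

```shell
# cgroups: cap the container at 512 MiB of RAM and 1.5 CPU cores.
# SELinux: the :Z suffix relabels the mounted volume so the
# container's confined process is allowed to read it.
# (Image name and host path are placeholders for this example.)
podman run -d --name web \
  --memory=512m --cpus=1.5 \
  -v ./site:/usr/share/nginx/html:Z \
  docker.io/library/nginx:alpine
```

Each flag maps directly onto one of the three kernel features above, which is why we rarely have to touch namespaces, cgroups, or SELinux by hand.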
Containers and Microservices
Let's take another example: an application made up of many parts and services. Before containers, we would put all these parts into one single VM and run them together. If something goes wrong, we have to test the entire application to find the problem. This operation is called troubleshooting, and here it is hard, because we don't know exactly where the problem is, so we end up checking the whole application.
Another painful operation is making a change: if we modify one part, we have to retest the entire application to make sure the change works, because everything is interconnected.
Now, to solve this, what containers do is allow us to take the application and separate each part of it into a single container to run independently. This makes troubleshooting easier because if a container (in this case, a service) has a problem, we can go directly to it and fix it without relying on other containers. And if we made a change to a part of the application, we only need to test the container responsible for that part, not the entire application.
This approach of splitting an application into services, each running in its own container, is called a microservices architecture. The following illustration shows how it works:
In this example, we have two different applications running inside the runtime, each with its own service.
Containers and Scaling
Let's take another scenario: our application is running, and at some point we launch a promotion that brings many users to it. This can cause a problem, because the current setup cannot serve them all. To solve it, we create more VMs to handle the high traffic, and each of these VMs contains all the binaries and the code. The following illustration demonstrates this:
In this example, we have two VMs running our application, and because we're getting a lot of traffic, we scale by adding a third one.
This consumes a lot of resources: the application itself may be small, but each copy still needs a full VM. And as traffic keeps growing, we may run out of resources on the machine, at which point we need yet another machine, and so on.
You may notice a layer called the Hypervisor in the illustration that we haven't explained. It is the layer that allows us to run virtual machines on top of our main machine.
Notice that in this example, to scale the application we have to scale the entire application, which is wasteful. With a microservices architecture, if the application has many components, we may only need to scale the part that is getting more traffic, not the whole app.
Containers make this possible because, as we said before, every component lives in a separate container; we simply take the component that receives more traffic and scale it.
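With a container engine, scaling one component is just a matter of starting more instances of its image. As a sketch (the image name, container names, and ports here are hypothetical):

```shell
# Start two extra replicas of the busy service, each mapped to its
# own host port; the rest of the application is left untouched.
podman run -d --name orders-1 -p 8081:8080 localhost/orders-service:latest
podman run -d --name orders-2 -p 8082:8080 localhost/orders-service:latest
```

A load balancer in front would then spread the incoming traffic across the replicas.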
Container Engine
Now that you understand how containers work and why we need them, let's see how to use container technology on our own machines. For that, you need what we call a container engine.
There are many container engines that can run containers. One you may have heard of is Docker, a popular and widely used platform. In this tutorial, however, we're going to use another one called Podman, which is compatible with Docker's command-line interface but runs without a central daemon and can run containers rootless.
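To follow along, install Podman from your distribution's package manager and run a first container. The commands below are a sketch for common distributions; the exact invocation may differ on yours:

```shell
# Install Podman (the package is named "podman" on most distributions).
sudo dnf install -y podman        # Fedora / RHEL / CentOS
# sudo apt-get install -y podman  # Debian / Ubuntu

# Run a first container: pull a small Alpine image and print a message.
# --rm removes the container again once the command finishes.
podman run --rm docker.io/library/alpine:latest echo "Hello from a container"

# List all containers (running and stopped) to confirm Podman works.
podman ps -a
```

The first `podman run` downloads the image if it is not already present, starts it in its own namespaces, runs the command, and cleans up.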
Conclusion
This article was an introduction to container technology: how containers work on Linux and what you should know to be comfortable working with them.