Deploy Container Applications Using Kubernetes
Implementations with microk8s and AWS EKS
Shiva Subramanian
Georgia, GA, USA
About the Author
Shiva Subramanian is a servant leader with a focus on business software engineering, gained through 20+ years of progressive roles spanning Atari (Pac-Man champ!), Basic/Pascal/Fortran, dBase/FoxPro, Visual Basic/Visual C++, infrastructure (Windows NT/Linux), software development, information security, architecture, team leadership, management, business partnerships, contributing to P&L, launching new businesses, and creating and leading global software development teams, through to containers, Docker, cgroups, Kubernetes (K8S), Jenkins, cloud (AWS/GCP/Azure), Java, Spring Boot, Redis, MongoDB, JSON, and Scala.
He has 25 years' experience in the FinTech and BFSI sector in areas such as core banking solutions, payment networks, electronic bill pay/bill presentment solutions, anti-money laundering solutions, loan origination platforms, credit union platforms, teller/customer/branch management systems, investment banking platforms (APL), mobile commerce (SMS banking), and bank intelligence platforms (BI/BW), just to name a few knowledge domains.
Introduction
Google launches several billion containers per week into Google Cloud. Mercedes-Benz
runs nearly 1000 Kubernetes clusters with 6000+ Kubernetes nodes. Datadog runs tens of
clusters with 10,000+ nodes and 100,000+ pods across multiple cloud providers.
The world’s leading companies are switching fast to Kubernetes clusters as a means
to deploy their applications at scale. Kubernetes is a technology enabler and a valuable
skill to gain.
Are you a computer science student, a system administrator, or perhaps even a systems engineer working primarily with physical/virtual machines? Does the call of modern technologies, such as Kubernetes, the cloud, AWS EKS, etc., that purport to solve all your problems allure you – but you do not know where to start? Then this book is for you.
After reading this book and doing the homework exercises, you will have a firm
understanding of the concepts of Kubernetes; you will also be able to stand up your own
Kubernetes cluster in two forms, inside of a physical or a virtual machine and in Amazon
Web Services (AWS) with proper RBAC (Role-Based Access Control) setup. While this
book isn’t a certification preparation–focused book, skills gained here will also help you
obtain Kubernetes certifications.
We will begin the journey by setting up a Kubernetes cluster from scratch in your
own virtual machine. Yes, you heard that right; we will set up a fully functioning
Kubernetes cluster in your virtual machine first. We will learn the basic concepts of
Kubernetes, what’s a pod, what’s a deployment, what’s a node, etc.
We will then progress to intermediate Kubernetes concepts, such as where does the
Kubernetes cluster get its storage from? How does the cluster scale the application up
and down? How do we scale the underlying compute infrastructure? What is a container
repo? How does this play into the CI/CD process?
We will then progress to switching our setup to Amazon Web Services (AWS), where
we will set up the same Kubernetes cluster using AWS’s Elastic Kubernetes Services
(EKS). We will set up the cluster, deploy sample applications, and learn about scaling the
underlying compute and how cloud computing really supports the Kubernetes platform.
Then we will finish with advanced concepts like utilizing AWS's Elastic File System (EFS) inside our Kubernetes cluster for persistent storage and how to enable ingress to expose our application to the Internet as we would for a typical web-based application in a production setting. You can then build on these skills to become a Kubernetes expert, whether you are deploying on bare metal, in AWS, in Azure AKS, or in GCP's GKE – the core concepts remain the same.
I wrote this book based on my own experience learning about Kubernetes – how to stand up a Kubernetes cluster from scratch for little to no cost, how to deploy an application, how to build my own containers and where I would host those container artifacts, what impact this has on the CI/CD process, how I would scale my cluster, and how this works on a public cloud such as AWS EKS – those experiments, results, and knowledge are what is captured in this book.
This book assumes some familiarity with computers and virtual machines; Linux knowledge and some cloud knowledge will greatly speed up your understanding of the concepts of Kubernetes. With that said, let us begin our journey by understanding the root of the problem that containers and Kubernetes are trying to solve in Chapter 1.
CHAPTER 1
Dependency Hell
Anyone who has installed Microsoft runtime libraries on a Windows VM running multiple applications or had to upgrade system packages on a Linux machine can tell you how complex this can be.
Ever since computers made it into the business applications’ world, supporting
real-world applications and problem solving, there has always been the problem of
dependencies among the various components, both hardware and software, that
comprise the application stack.
The technical stack the application software is written on, for example, Java, has
versions; this Java version is designed to run on top of a specified set of runtime libraries,
which in turn run on top of an operating system; this specified operating system runs
on top of a specified hardware device. Any changes and updates for security and feature
enhancements must take into account the various interconnects, and when one breaks
compatibility, we enter the dependency hell.
This task of updating system components was even more complicated when IT
systems were vertically scaled, where there was one underlying operating system and
many applications that ran on top of it. Each application might come with its own runtime
library requirements; an OS upgrade might not be compatible with the dependencies of
some applications running on top, creating technical debt and operational complexity.
The VM Way
Task: Suppose the developers have developed a static website which they have asked
you to host in your production environment.
As a seasoned systems engineer or administrator, you know how to do this. "Easy!" you say. "I'll create a VM, deploy a web server, and voilà!" But what's the fun in that? Let us still deploy the static website both the traditional way and the container way; this way, we will learn the similarities and differences between the two approaches.
Note There are several virtual machine technologies available, such as the cloud,
ESXi host, VMware Workstation, Parallels, KVM, etc. Instructions for installing the
Linux operating system will vary widely depending on the virtualization software in
use; thus, we assume that the systems engineer/administrator is familiar with their
workstation setup.
Note The author has chosen to list plain command(s) only at the top of the listings
and the executed output and results right below it, making it easy to differentiate
between the command and the author’s illustrated outputs as shown in Listing 1-1.
lsb_release -s -d
shiva@wks01:~$ lsb_release -s -d
Ubuntu 22.04 LTS
shiva@wks01:~$
The package name for nginx is just nginx; thus, we will install
nginx using the standard package manager command as shown in
Listing 1-3.
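On Ubuntu, the standard package manager is apt; a typical invocation (assuming sudo privileges and up-to-date package lists) would be:
sudo apt install -y nginx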
Note <SNIP> in the output section indicates the sections of the output snipped
out of the listing to maintain brevity; it does not impact the concepts we are
learning.
Please note that your mileage may vary with respect to the
output messages due to variations in installed editions of the
operating system.
This marks the end of step 2, which is installing the web server;
now on to step 3.
Listing 1-5. Browsing to the nginx default website using the command line
curl localhost
Enter Containers
Welcome to the wonderful world of containers.
Containers solve both of the major problems associated with the VM way: the dependency conflicts that arise when many applications share a single operating system, and the heavy resource footprint of dedicating a full VM to each application.
Here, the term image refers to the binary artifact of the container technology.
1. https://cloud.google.com/containers
It might be tempting to bundle multiple applications into a single container rather than running each application in its own container, but the resources required to run a container are very minimal; you will not be conserving any resources, and you will be reintroducing the dependency management complexity that got us here in the first place!
Summary
In this chapter, we learned about two of the major problems associated with deploying and managing an application and how containers promise to solve both of these problems, enabling systems engineers to reduce complexity while increasing the efficiency, scalability, and reliability of hosting applications via containers.
In the next chapter, we will deploy a simple static application via container technology, seeing for ourselves how easy it is to deploy and manage containerized applications.
CHAPTER 2
Container Hello-World
Continuing from where we left off in the VM world, the goal of this chapter is to set up container technology on our workstation and run our first containers, hello-world and the nginx web server, using docker, with the intent of learning the basics of containers.
Docker Technology
When we say containers, for many, Docker comes to mind, and with good reason. Docker is a popular container technology that allows users and developers to build and run containers. Since it is a good starting point and allows us to understand containers better, let us build and run a few containers based on docker technology before branching out into the world of Kubernetes.
The astute reader will notice and ask: Why do we need a VM still? I thought we were
going with containers. The short answer is, yes, we still need a host machine to provide
compute, that is, the CPU, memory, and disk, for the containers to run; however, the
VM can be replaced with special-purpose host OSes that can be stripped down to a bare
minimum; we will learn more about compute nodes and observe these in later chapters.
First, let us install the docker package onto our workstation as shown in Listing 2-1.
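On Ubuntu, docker is available as the docker.io package; a typical installation (assuming the apt package manager and sudo privileges) looks like this:
sudo apt update
sudo apt install -y docker.io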
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
shiva@wks01:~$
The Docker package is installed; to confirm the same, run the command shown in
Listing 2-2.
dpkg -l docker.io
Note Highlights in the output show the key elements we are looking for in the
output section.
Notice docker is in active (running) state; this is good. We can now set this user up
for using docker.
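The usual way to do this is to add the regular user to the docker group; a command along these lines (the group name docker comes with the package, and $USER stands for your login) would typically be used:
sudo usermod -aG docker $USER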
Note Unless noted otherwise, remarks starting with # are NOT part of the system output; they are used by the author to highlight something being present or not present as a form of explanation.
For this to take effect, you can log out and log back in or add the newly added group
to the current session by using the command shown in Listing 2-5.
Listing 2-5. Changing the acting primary group for the regular user
newgrp docker
Before proceeding, we need to verify the group docker shows up in our session; we
can do that by executing the command shown in Listing 2-6.
id
shiva@wks01:~$ id
uid=1000(shiva) gid=112(docker) groups=112(docker),4(adm),24(cdrom),
27(sudo),30(dip),46(plugdev),110(lxd),1000(shiva)
shiva@wks01:~$
At this point, we have met the first prerequisite of installing and running the container runtime (CRI).
Now that we have installed the container runtime and the service is running, we need an image to run. A running instance of an image under a container runtime is generally known as a container, and the process of packaging an application to run inside a container is generally known as containerizing an application. More on that later.
A simple hello-world application exists for the container world also; let us run that to
make sure our setup is working properly before moving ahead.
Container Hello-World
Run the command to start our first container as shown in Listing 2-7. What this
command does is it instructs docker to find and run a container named hello-world; not
to worry, if this image is not present, docker is smart enough to download this image
from its default container repository and then run it. More on container repositories in
later chapters.
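Using docker's stock hello-world image, the command is simply:
docker run hello-world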
4. The Docker daemon streamed that output to the Docker client, which sent
it to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
shiva@wks01:~$
Notice, right below the status: line, the output says “Hello from Docker!”. This
confirms our docker setup is functioning correctly. The description from the output
further elaborates what happened in the background. Docker service pulled the image
from Docker Hub and ran it. Container images are stored in container repos and are
“pulled” by the runtime as necessary. More on container repos later.
Congratulations! You just ran your first container!
Now that we have our workstation set up, we can run the nginx container using the command shown in Listing 2-8. In this command, we first stop the nginx that we started on the Linux workstation in Chapter 1; then we ask docker to run the nginx container and map localhost port 80 to container port 80. If the container image is not found locally, docker automatically downloads it and then runs it.
The port 80 mapping is so we can access container port 80 from the localhost, meaning our Linux workstation, on port 80.
Currently, we are just using all defaults; we can change the mapped ports, etc.; more on this later.
Have two terminal windows open; run the sudo and docker commands shown in Listing 2-8 in one window and the curl command shown in Listing 2-9 in the other.
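The two commands look roughly like this (using the stock nginx image from Docker Hub; the container is run in the foreground so it can be stopped later with CTRL+C):
sudo systemctl stop nginx
docker run -p 80:80 nginx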
Notice how docker launched the nginx container; the container then started the nginx worker processes, and docker mapped port 80 so that we can access the default website. We can do that using curl on terminal 2, which we opened earlier, as shown in Listing 2-9.
Notice how easy that was; in one command, we were able to deploy the static website, a task that required a full VM install in the previous chapter. The memory and disk footprint is also very small when compared to a full VM. You can now press CTRL+C on the terminal running the docker run command.
Summary
In this chapter, you learned how to set up your workstation to begin working with
containers, installed Docker, and ran your first two container applications – the hello-
world container and a web server – using the stock nginx container image, as well as
how to map a port on the container to your local workstation so that you can access the
services the container is providing.
In the next chapter, we'll build on what you have learned so far, covering more basic container commands and concepts using Docker.
Your Turn
Whatever your workstation flavor is, ensure docker is set up and running as we have done here; then run the two sample containers and confirm you receive the expected results. The next chapters build on what we have set up here.
CHAPTER 3
Container Basics Using Docker
Note You are welcome to sign up for a free account (free for personal use).
Type in nginx at the top-left search bar; select the first result “nginx” under verified
content, which will bring you to the dialog shown in Figure 3-2.
Security Note As with any public repository, anyone can publish images;
please exercise caution when downloading and running images from unknown
sources. In the docker repository, known good images have the “DOCKER OFFICIAL
IMAGE” badge.
Notice the badge next to the nginx name field; this is the docker official image and thus is safer than an unofficial image. Select the "Tags" tab, as shown in Figure 3-3.
You will see all the different flavors this image is available in; for example, you can see the various OS/ARCH values in the middle column. To pull a particular image, the command shortcut is shown next to the container image; you can copy that shortcut to the clipboard and paste it into your terminal when we are ready to download. Right now, we do not have to download this image yet, as we are only identifying the various nginx container images available at hub.docker.com.
Notice the naming convention of the container images, such as stable-<something>; this is typical in the container world and indicates whether the image is experimental, from a beta branch, stable, etc., and the alpine suffix indicates the container was built with Alpine Linux as the base OS.
Let us now look at another container image that has the nginx application built in. This time it is from the Ubuntu software vendor. Search again by typing "ubuntu/nginx" in the search bar at the top left of the page, and select the result under "Verified Content" as shown in Figure 3-4.
Select the ubuntu/nginx result; we land on the official ubuntu/nginx image page as shown in Figure 3-5.
This is another container image built and published by Ubuntu. We will use this
image since this nginx container is built on top of the familiar Ubuntu operating system.
Similar to what you did previously, select the “Tags” tab, as shown in Figure 3-6, to
check out the container flavors available to us.
Figure 3-6. Image showing all the available tags for ubuntu/nginx
Notice the middle column, OS/ARCH; we see the architecture we need, which is linux/amd64. Recall from Chapter 1 that our processor architecture is x86_64. This is the one we will be using.
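To download this image, we use docker pull; assuming the latest tag, the command would be:
docker pull ubuntu/nginx:latest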
The output indicates docker found the image and pulled it in chunks (layers); when all the chunks are "Pull complete," docker computes the Digest to confirm the image has not been corrupted during the download process, then confirms the status with us and exits without errors.
All we have done at this point is download the container image to our workstation. We can verify the image is present on our local workstation by running the command shown in Listing 3-2, which gives us pertinent information about the container images stored on this workstation.
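That is the standard image-listing command:
docker images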
Note Both nginx images are different; the plain nginx image is built and maintained by the nginx vendor, while the ubuntu/nginx image is built and maintained by Ubuntu. While in the previous chapter we used the nginx image by nginx.com, in this chapter we are using the ubuntu/nginx image. There is no preference of one over the other; as a systems engineer, you get to choose which image you'd like to use; we have opted to continue with ubuntu/nginx. As mentioned in the earlier SECURITY NOTE, care must be taken as to the source of the image; here, both nginx.com and Ubuntu are well-established software vendors.
Pay attention to the REPOSITORY column – it indicates the name of the container image; the TAG column indicates the version, either a number or latest; IMAGE ID is the unique identifier (hash); CREATED indicates when the container image was originally created (not when it was downloaded to our workstation); and SIZE indicates the size of the container image.
Notice that the entire nginx container image based on the Ubuntu OS is ONLY 140MB. Compare that with the VM that we created in Chapter 1, which was several GB in size – see the compactness of containers!
An astute reader will have noticed that we cannot simply run this container the way we ran the hello-world container, because hello-world did not provide any running service or port. Here, as is typical with a web server, we need to expose a port on which the web server can be reached. As shown in Figure 3-7, the "Usage" section on the "Overview" page provides the information on how to pass the port number to the container.
Figure 3-7. Usage instructions on how to expose the port on the container
docker run
Before running this command, let us check and ensure that port 8080 is free on the local machine; the command in Listing 3-3 lists all the listening ports on the Linux machine. If we find 8080 here, that means some other process is using that port; if we don't see 8080, then it is available for us to use. Since we do not see 8080 in the output in Listing 3-3, we are free to use it for our container.
Listing 3-3. Confirming port 8080 is not in use using the ss command
ss -ntl
shiva@wks01:~$ ss -ntl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
Process
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
lsof is another popular way to see if any process is using port 8080; run it as shown
in Listing 3-4. The lack of output from the command indicates that no other process is
using port 8080, so we can utilize this port.
Listing 3-4. Confirming port 8080 is not in use via the lsof command
lsof -i:8080
Finally, when we launch our container, we will be connecting to port 8080, where our
nginx will be running; before making any changes, let us document the before picture by
connecting to port 8080 with the curl command as shown in Listing 3-5. The expected
output is an error, since nothing should be running on 8080 at this time.
curl localhost:8080
We have confirmed nothing is running on 8080, and thus it is free to use; we also
confirmed curl is not able to connect to 8080 yet.
Now we are ready to launch the container and expose the ports; run the command
we just learned including all the command-line options to name the container and pass
the port to be exposed, the image to be run, and any optional parameters like timezone
as shown in Listing 3-6.
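A command along these lines accomplishes this (the container name nginx01 and the TZ value are illustrative; the port mapping and image follow the usage instructions above):
docker run -d --name nginx01 -e TZ=UTC -p 8080:80 ubuntu/nginx:latest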
The command in Listing 3-6 exited showing a hash value (this is the CONTAINER ID) and without any other errors, meaning it completed successfully. We can now confirm the container is running as expected using the docker ps command, as shown in Listing 3-7, which shows information about all the running containers.
docker ps
shiva@wks01:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
b51b6f4a5934 ubuntu/nginx:latest "/docker-entrypoint...." 26 seconds
ago Up 25 seconds 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx01
shiva@wks01:~$
Here, we can see the container we launched has been UP for 25 seconds, and we have mapped 8080 on the localhost to 80 on the container, where the web server is running. We can also confirm this is the container we launched by comparing the hash value displayed when we launched the container with the CONTAINER ID column in the docker ps output; though the CONTAINER ID column is truncated due to display space constraints, the characters that are displayed match.
Notice that this time around, we do see something listening on port 8080, which is
what we asked docker to do – map localhost port 8080 to the container port 80; that’s
what is shown in the output of the docker ps command:
0.0.0.0:8080->80/tcp
We have confirmed the container process is listening on port 8080; now, we can
conduct an end-to-end test by connecting to port 8080 via CURL. If the web server is
running on the container, then we should see the website’s default landing page, which
we can do using our familiar curl command as shown in Listing 3-9.
Listing 3-9. Accessing the nginx container web server via curl
curl localhost:8080
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
shiva@wks01:~$
curl connected to port 8080, which docker redirected to port 80 on the nginx container it is running; the container in turn served the web page back to curl, and that is the output we see in Listing 3-9!
Want to launch another web server instance? Easy. Let us name our container instance nginx02, map port 8081 to 80, and launch; we only need to change the name and the port number so they are unique, and the command then becomes
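something along these lines (again, the container name and TZ value are illustrative):
docker run -d --name nginx02 -e TZ=UTC -p 8081:80 ubuntu/nginx:latest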
We can run it on our system as shown in Listing 3-10; notice how easy it is to launch
another instance of the nginx web server, one command.
The command completed successfully; let us confirm with docker ps that the second
container is running as shown in Listing 3-11.
docker ps
shiva@wks01:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
5e25f3c059ec ubuntu/nginx:latest "/docker-entrypoint...." 18 seconds
ago Up 17 seconds 0.0.0.0:8081->80/tcp, :::8081->80/tcp nginx02
b51b6f4a5934 ubuntu/nginx:latest "/docker-entrypoint...." 5 minutes
ago Up 5 minutes 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx01
shiva@wks01:~$
We now see two instances running; both are UP and docker is exposing the ports we
requested for it to map.
Let us confirm both ports 8080 and 8081 are listening on the localhost with the
command shown in Listing 3-12.
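As before, the listening sockets can be listed with:
ss -ntl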
Listing 3-13. Confirming we can access a website on both service ports via curl
curl localhost:8080
curl localhost:8081
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
[SNIP]
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Both web servers are running; no OS reinstall or anything else – we just ran another container. The astute reader may be wondering: I can do this on a plain OS too, launching nginx as separate processes on various ports and scaling out that way. While that is possible, we will read more about the benefits of containers as we go further along in the chapters. For one, if one container crashes, it is not going to impact the other containers. For another, if you would like to patch the underlying OS and need to reboot, and you run all the nginx processes in a single VM, you will have to reboot all the web servers at once; with containers, you can restart them individually on a rolling basis, keeping downtime low – just to name a few benefits.
Summary
In this chapter, we have learned how to find prebuilt containers we would like to run on hub.docker.com, how to identify the correct architecture, and how to formulate runtime parameters for an image and run it under docker.
We also met our goal of launching the static website under docker, in a containerized fashion.
In the next chapter, I'll show you how to build your own container image and run it.
Your Turn
Another popular web server is Apache HTTPD – very similar to nginx. The image name
is ubuntu/apache2. Try running this inside your docker setup.
CHAPTER 4
Building Our First Container Image
A Containerized Base OS
As we saw in the previous chapters, we go to https://hub.docker.com and search
for Ubuntu 22.04. Search for “Ubuntu” in the top-left search bar and hit enter
(Figure 4-1).
Figure 4-2. Landing page showing available images and related tags
Great, we see the familiar OS versions; 22.04 is what we will use, and the tag “Latest”
references that version. We can use this tag then: ubuntu/latest.
Download the ubuntu/latest container image onto your workstation and confirm it is
available for us to use in the docker image repo as shown in Listing 4-1; this is just to give
us a taste of how a typical Linux VM is stripped down for use in the container world.
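A typical sequence looks like this (assuming the latest tag):
docker pull ubuntu:latest
docker images
We can then try running the image and checking on it:
docker run ubuntu:latest
docker ps
docker ps -a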
shiva@wks01:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
5e25f3c059ec ubuntu/nginx:latest "/docker-entrypoint...." 51 minutes ago
Up 51 minutes 0.0.0.0:8081->80/tcp, :::8081->80/tcp nginx02
b51b6f4a5934 ubuntu/nginx:latest "/docker-entrypoint...." 56 minutes ago
Up 56 minutes 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx01
shiva@wks01:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
8875710cd050 ubuntu:latest "/bin/bash" 12 seconds ago
Exited (0) 11 seconds ago vibrant_
snyder
5e25f3c059ec ubuntu/nginx:latest "/docker-entrypoint...." 51 minutes ago
Up 51 minutes 0.0.0.0:8081->80/tcp, :::8081->80/tcp nginx02
b51b6f4a5934 ubuntu/nginx:latest "/docker-entrypoint...." 56 minutes ago
Up 56 minutes 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx01
febfee14ae9b hello-world "/hello" 2 hours ago
Exited (0) 2 hours ago hardcore
_lalande
shiva@wks01:~$
What just happened here? In the first command, we ran the ubuntu image, but it did not stay running because, unlike our nginx container where the nginx process kept running, the ubuntu container image does not contain any daemons that will keep the container running, so it gracefully exited, as seen from the output of the following command:
docker ps -a
Expert Tip We can get a command line on a running container if it has a shell of some sort; this is not always true for all containers. The Ubuntu container has bash preinstalled; thus, we can make use of it by executing the command shown in Listing 4-2. The -i option indicates running this container interactively, -t allocates a pseudo-TTY so our keyboard input can reach the container, and -e passes environment variables; the final argument is the command we'd like to execute once the container launches, /bin/bash in our case.
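A command along these lines would do it (the TZ environment variable is illustrative):
docker run -it -e TZ=UTC ubuntu:latest /bin/bash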
Listing 4-3. Confirming the hash signature is the same across inside and outside
the container
docker ps
shiva@wks01:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
53ba8a318efa ubuntu:latest "/bin/bash" 2 minutes ago
Up 2 minutes practical_
roentgen
Notice the container ID 53ba8a318efa is running /bin/bash, and that ID also happens to be the hostname inside this container. Back on the previous terminal, we also noticed that the nginx package is not preinstalled. We need to download it first before we can bake it into the image.
To download the nginx-light package and its dependent packages, use the command shown in Listing 4-4; these are the package files that we will need to satisfy all the dependencies of the nginx-light package.
curl -L -s http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/nginx-
light_1.18.0-6ubuntu14.3_amd64.deb -o nginx-light_1.18.0-6ubuntu14.3_
amd64.deb
shiva@wks01:~/container-static-website/ubuntupkgs$ ls
nginx-light_1.18.0-6ubuntu14.3_amd64.deb
shiva@wks01:~/container-static-website/ubuntupkgs$
Why do we need these? Because the nginx-light package, which is the nginx application, has these other dependencies, and we need to satisfy them, that is, provide a copy of each, for a successful installation of the nginx-light package. You can read about these dependencies in the package descriptions at the package links given earlier. A treatment of package management and/or deciphering package dependencies is beyond the scope of this book.
The minimal list of all the packages needed to satisfy the nginx-light package and
the command to download them all are shown in Listing 4-5; download them all and
have them ready. There should be a total of 14 files including the previously downloaded
nginx-light_1.18.0-6ubuntu14.3_amd64.deb.
Listing 4-5. All the dependent packages are downloaded and shown
curl -L -s http://mirrors.kernel.org/ubuntu/pool/main/i/iproute2/
iproute2_5.15.0-1ubuntu2_amd64.deb -o iproute2_5.15.0-1ubuntu2_amd64.deb
curl -L -s http://security.ubuntu.com/ubuntu/pool/main/libb/libbpf/
libbpf0_0.5.0-1ubuntu22.04.1_amd64.deb -o libbpf0_0.5.0-1ubuntu22.04.1_
amd64.deb
curl -L -s http://mirrors.kernel.org/ubuntu/pool/main/libb/libbsd/
libbsd0_0.11.5-1_amd64.deb -o libbsd0_0.11.5-1_amd64.deb
curl -L -s http://security.ubuntu.com/ubuntu/pool/main/libc/libcap2/
libcap2_2.44-1ubuntu0.22.04.1_amd64.deb -o libcap2_2.44-1ubuntu0.22.04.1_
amd64.deb
curl -L -s http://mirrors.kernel.org/ubuntu/pool/main/e/elfutils/
libelf1_0.186-1build1_amd64.deb -o libelf1_0.186-1build1_amd64.deb
curl -L -s http://mirrors.kernel.org/ubuntu/pool/main/libm/libmaxminddb/
libmaxminddb0_1.5.2-1build2_amd64.deb -o libmaxminddb0_1.5.2-1build2_
amd64.deb
curl -L -s http://mirrors.kernel.org/ubuntu/pool/main/libm/libmd/
libmd0_1.0.4-1build1_amd64.deb -o libmd0_1.0.4-1build1_amd64.deb
curl -L -s http://mirrors.kernel.org/ubuntu/pool/main/libm/libmnl/
libmnl0_1.0.4-3build2_amd64.deb -o libmnl0_1.0.4-3build2_amd64.deb
curl -L -s http://security.ubuntu.com/ubuntu/pool/universe/n/nginx/
libnginx-mod-http-echo_1.18.0-6ubuntu14.3_amd64.deb -o libnginx-mod-http-
echo_1.18.0-6ubuntu14.3_amd64.deb
curl -L -s http://security.ubuntu.com/ubuntu/pool/main/n/nginx/libnginx-
mod-http-geoip2_1.18.0-6ubuntu14.3_amd64.deb -o libnginx-mod-http-
geoip2_1.18.0-6ubuntu14.3_amd64.deb
curl -L -s http://security.ubuntu.com/ubuntu/pool/main/n/
nginx/nginx-common_1.18.0-6ubuntu14.3_all.deb -o nginx-
common_1.18.0-6ubuntu14.3_all.deb
curl -L -s http://mirrors.kernel.org/ubuntu/pool/main/i/iptables/
libxtables12_1.8.7-1ubuntu5_amd64.deb -o libxtables12_1.8.7-1ubuntu5_amd64.deb
curl -L -s http://security.ubuntu.com/ubuntu/pool/main/libc/
libcap2/libcap2-bin_2.44-1ubuntu0.22.04.1_amd64.deb -o libcap2-
bin_2.44-1ubuntu0.22.04.1_amd64.deb
cd ..
shiva@wks01:~/container-static-website/ubuntupkgs$ ls -1
iproute2_5.15.0-1ubuntu2_amd64.deb
libbpf0_0.5.0-1ubuntu22.04.1_amd64.deb
libbsd0_0.11.5-1_amd64.deb
libcap2_2.44-1ubuntu0.22.04.1_amd64.deb
libcap2-bin_2.44-1ubuntu0.22.04.1_amd64.deb
libelf1_0.186-1build1_amd64.deb
libmaxminddb0_1.5.2-1build2_amd64.deb
libmd0_1.0.4-1build1_amd64.deb
libmnl0_1.0.4-3build2_amd64.deb
libnginx-mod-http-echo_1.18.0-6ubuntu14.3_amd64.deb
libnginx-mod-http-geoip2_1.18.0-6ubuntu14.3_amd64.deb
libxtables12_1.8.7-1ubuntu5_amd64.deb
nginx-common_1.18.0-6ubuntu14.3_all.deb
nginx-light_1.18.0-6ubuntu14.3_amd64.deb
shiva@wks01:~/container-static-website/ubuntupkgs$
shiva@wks01:~/container-static-website/ubuntupkgs$ cd ..
shiva@wks01:~/container-static-website$
<html>
<title>My own container image </title>
<body>
Hello world!<br>
This is my index file embedded in my first container image!!<br>
</body>
</html>
This completes assembling the index.html we want published. We can now move to
building our container image.
shiva@wks01:~/container-static-website$ pwd
/home/shiva/container-static-website
shiva@wks01:~/container-static-website$
Using your favorite editor, create a file named Dockerfile with the contents as shown
in Listing 4-8.
FROM ubuntu:22.04
COPY ubuntupkgs/*.deb /tmp/
RUN dpkg -i /tmp/*.deb
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]
EXPOSE 80/tcp
The FROM statement tells which base container image to start with; in our case, it will be the minimal ubuntu:22.04. Recall that though the name implies an Ubuntu 22.04 OS, this is not the typical full-blown OS we install in a VM; the container image is much, much smaller, with all the unneeded components stripped out, and it is up to us to put ONLY the things we need back into the image to keep resource consumption to a minimum.
The COPY statement tells the docker client to copy the files from the ubuntupkgs/ directory on the local VM, from where we are building the image, to inside the container image we are building.
The RUN statement tells docker what to execute inside the container; since we want
our packages installed before the container image is created, that’s exactly what we are
doing here.
ENTRYPOINT configures the default command the container will run when launched, /usr/sbin/nginx with those parameters in our case, just like how we would launch nginx on a traditional VM – except for the "daemon off;" part: nginx stays in the foreground because the container itself acts as the daemon.
EXPOSE tells docker which port inside the container to expose to the outside world,
80 in our case, since that’s the nginx default port.
The astute reader will notice we are not doing anything with our index.html file yet.
We will incorporate that file later in this chapter. First, let us build and run the container
with “stock” nginx. Let us note the list of images available on the system prior to building
our container image as shown in Listing 4-9.
docker images
We notice that the images from all the previous containers we ran are present here, which is typical.
We can now proceed to put all these ingredients, such as the Ubuntu packages and input files such as the Dockerfile, together and build the actual container image that includes all our customizations.
docker build .
<SNIP>
invoke-rc.d: policy-rc.d denied execution of start.
Processing triggers for libc-bin (2.35-0ubuntu3.1) ...
Removing intermediate container e605f0b8f80a
---> 0feb1cd39cf6
Step 4/5 : ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]
---> Running in cffe2f1c97ef
Removing intermediate container cffe2f1c97ef
---> b3e15d67f40a
Step 5/5 : EXPOSE 80/tcp
---> Running in 50d3a0c75018
Removing intermediate container 50d3a0c75018
---> aed4ca1695a0
Successfully built aed4ca1695a0
shiva@wks01:~/container-static-website$
What just happened here? Docker followed the instructions given in the Dockerfile:
Step 1: It used the ubuntu:22.04 image we already had.
Step 2: It copied all the *.deb package files to the running container in its /tmp directory.
Step 3: It ran the command to install the packages from inside the container [the readline errors can be ignored since this isn't an interactive installation].
Step 4: It recorded which binary/service to run when the container is launched, given as the full path along with its parameters.
Step 5: It exposed port 80, since that's the default port used by the nginx web server.
Finally, it built the container image with the hash aed4ca1695a0.
We should now be able to see the newly built container image in our workstation, as
shown in Listing 4-11, using the command docker images.
Listing 4-11. Confirming the built container image is present in the system
docker images
We notice that our new image is present in the docker container repo on our local
machine. We have not tagged it yet; we’ll do that in just a minute. First, let us ensure the
container works as expected by running it.
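A launch command along these lines would do it (the container name mynginx01 is illustrative; the image is referenced by the ID produced by the build above):
docker run -d --name mynginx01 -p 8082:80 aed4ca1695a0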
Remember that 8080 and 8081 are already in use on the local machine, and 80 is
what we exposed on the container. Now that we have launched our container, we can use
the docker ps command to confirm it is UP and running, as shown in Listing 4-13.
docker ps
shiva@wks01:~/container-static-website$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
10686bdb60fb aed4ca1695a0 "/usr/sbin/nginx -g ..." 2 minutes ago
Up 2 minutes 0.0.0.0:8082->80/tcp, :::8082->80/tcp mynginx01
5e25f3c059ec ubuntu/nginx:latest "/docker-entrypoint...." 3 hours ago
Up 3 hours 0.0.0.0:8081->80/tcp, :::8081->80/tcp nginx02
b51b6f4a5934 ubuntu/nginx:latest "/docker-entrypoint...." 3 hours ago
Up 3 hours 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx01
shiva@wks01:~/container-static-website$
Listing 4-14. Output showing we can connect to the nginx web server
curl localhost:8082
Recall that this is still the stock index.html file; we will replace this stock index.html
with the one that we created shortly.
We are at the basecamp of a process known as continuous integration, for we are
going to codify the process of building and maintaining a container image, which then
lends itself to a continuous build of images as new base images or application versions
are available. Welcome to the CI part of the CI/CD pipeline.
The updated Dockerfile contents are shown in Listing 4-15. We have added an rm command to delete the stock index.nginx-debian.html file; we then copy the index.html file we created in Listing 4-6 onto the docker image using the COPY command; that's it.
FROM ubuntu:22.04
COPY ubuntupkgs/*.deb /tmp/
RUN dpkg -i /tmp/*.deb
RUN rm -rf /var/www/html/index.nginx-debian.html
COPY index.html /var/www/html/
ENTRYPOINT ["/usr/sbin/nginx", "-g", "daemon off;"]
EXPOSE 80/tcp
The only thing we have added is the removal of the default index.nginx-debian.html file that gets installed by the nginx-light package, and we replace that file with our index.html in the default location where nginx will look for it.
We can then rebuild the container image using the familiar docker build command
as shown in Listing 4-16.
docker build .
Note down the new image ID. We then use that to launch yet another container to test this new image as shown in Listing 4-17; only this time, we use local port 8083 since that's the next free local port.
Listing 4-17. Launching the updated container image and exposing the web
server via an unused port
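A command along these lines would be used (the container name mynginx02 is illustrative; 390c89ba092d stands for the image ID noted from the rebuild):
docker run -d --name mynginx02 -p 8083:80 390c89ba092d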
docker images
It is very unfriendly to be using IMAGE ID; let us tag those to be more user-friendly.
The command to tag an image is simple; it has the format:
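In docker's standard syntax:
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]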
In our case, we would like to name the IMAGE ID 390c89ba092d as mynginx:01 since
this image has the nginx web server we built as shown in Listing 4-19.
The container image tag allows for a friendly TARGET_IMAGE[:TAG] format; it is normal for the TARGET_IMAGE to stay unchanged, while the [:TAG] field is typically used as a version field with incremental numbers, since during the life of the container image it is expected that we make changes, apply updates, and so on; thus, the versioning will come in handy! Since this is our first version, we go with 01 for the [:TAG] value and mynginx as the TARGET_IMAGE (friendly) name for the image.
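Putting it together for our image, the tagging command would look like this (using the image ID from the docker images output above):
docker tag 390c89ba092d mynginx:01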
Then verify using the docker images command as shown in Listing 4-20.
Listing 4-20. Output showing our container image along with the new tag
we applied
docker images
Notice how the IMAGE ID 390c89ba092d now has the name mynginx with tag 01; we
can now launch our containers with the friendly name as shown in Listing 4-21.
Listing 4-21. Launching our container image and exposing the web server via an
unused port
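A command along these lines would be used (the container name mynginx04 is illustrative):
docker run -d --name mynginx04 -p 8084:80 mynginx:01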
The container launches; we can then verify it using the familiar docker ps command
as shown in Listing 4-22.
docker ps
shiva@wks01:~/container-static-website$ docker ps
CONTAINER ID IMAGE COMMAND CREATED
STATUS PORTS NAMES
20d136c66141 mynginx:01 "/usr/sbin/nginx -g ..." 18 seconds ago
Up 17 seconds 0.0.0.0:8084->80/tcp, :::8084->80/tcp mynginx04
9da0cfa8be22 390c89ba092d "/usr/sbin/nginx -g ..." 15 minutes ago
Up 15 minutes 0.0.0.0:8083->80/tcp, :::8083->80/tcp mynginx02
50231cd5dd48 aed4ca1695a0 "/usr/sbin/nginx -g ..." 22 minutes ago
Up 22 minutes 0.0.0.0:8082->80/tcp, :::8082->80/tcp mynginx01
5e25f3c059ec ubuntu/nginx:latest "/docker-entrypoint...." 3 hours ago
Up 3 hours 0.0.0.0:8081->80/tcp, :::8081->80/tcp nginx02
b51b6f4a5934 ubuntu/nginx:latest "/docker-entrypoint...." 3 hours ago
Up 3 hours 0.0.0.0:8080->80/tcp, :::8080->80/tcp nginx01
shiva@wks01:~/container-static-website$
Notice how the image name is reflected across the docker ecosystem.
Summary
In this chapter, we learned about creating our own container images. This process is
critical to master since enterprise applications typically have several dependencies;
this might take some trial and error to get it right. Spend the time to learn this in detail
if most of your job depends on building containers for your homegrown or enterprise
applications. Sometimes, it may be easier to build your own containers instead of using
prebuilt containers since your own containers free you up from the constraints that the
prebuilt containers come with.
Since this process is automatable using your favorite CI tool, for example, Jenkins, your containers can eventually be built in a reliable fashion: manual intervention is minimal, and the build process itself is code-enabled, leading to maturity in the DevOps or DevSecOps paradigm of operating.
Your Turn
Similar to the previous exercise, try building your own container with the apache2/httpd server and your own index.html; build, publish, and run it.
CHAPTER 5
Introduction to Kubernetes
Kubernetes, also known as K8s, is an open source system for automating deployment,
scaling, and management of containerized applications. In this chapter, we will learn
about the relationship between Docker and Kubernetes and set up a Kubernetes
workstation.
We’ll then further explore our Kubernetes cluster by deploying a stock nginx
application as well as the mynginx image we built. This will introduce us to the various
capabilities of Kubernetes and get us more familiar with it.
Docker was born to solve this problem in the early 2010s; by providing the entire
ecosystem, build tools, runtime tools, and repo management tools, it became a
dominant player in this space. This is why we spent the previous chapters learning a bit
of docker while also getting familiar with containers.
Recall we deployed a single static-website application in a single container; while that is fine for development purposes, a production deployment of the application needs a few more things, such as container management, that is, monitoring the status of the container, restarting it if it fails, sending an alert, etc. We also need scale-up and scale-down capabilities in an automated fashion; you wouldn't want to wake up every day at 4 AM to scale up your container ecosystem in anticipation of morning traffic, would you?
K8S solves almost all of the problems described earlier and does much more. In this
book, we shall get introduced to the K8S ecosystem and its various functionality and test
out its core capabilities.
The official Kubernetes (K8S) website is here: https://kubernetes.io/.
Distributions
K8S is distributed in many forms, all the way from micro/small implementations such as
K3S (https://k3s.io/) and MicroK8s (MicroK8s – Zero-ops Kubernetes for developers,
edge and IoT - https://microk8s.io/) to production-grade K8S in public clouds such
as AWS EKS (Managed Kubernetes Service – Amazon EKS – Amazon Web Services -
https://aws.amazon.com/eks/) and GKE from Google (Kubernetes – Google Kubernetes
Engine (GKE) | Google Cloud - https://cloud.google.com/kubernetes-engine).
Basic Concepts
For our purposes, let us start with our familiar Ubuntu-based MicroK8S implementation
where we will deploy our static-website application. Let’s start with some basic concepts.
Nodes
Nodes provide the compute to the K8S cluster upon which the containers run. Recall
that there are no more VMs in the K8S world, so where is the underlying compute
coming from? It comes in the form of nodes. It is important to note that while nodes
have a base OS such as Ubuntu or CentOS, we should not treat them as regular VMs or computers. Their only purpose is to provide compute (CPU, memory) and the container runtime to the containers that will run on top of them.
Pods
A pod is a collection of one or more containers (your application) that Kubernetes manages for you. Each pod gets its own IP address.
Namespaces
A namespace is a logical way to segment your workloads in a way that's meaningful to you; for example, you can create namespaces based on your lines of business and align your workload deployments with those namespaces. This allows you to do things like showback or chargeback and to build access controls based on namespace boundaries.
Service
Recall that when we launched our containers, we exposed a port, the port upon which
the application is listening, then we mapped an external (host) port so that external
entities could access the service. The concept is the same here; we define a service,
which tells K8S to map the port defined in the service definition to the ports on the pods
upon which the application is listening.
CRI
Container Runtime Interface, as the name implies, is the protocol or standard language
that the kubelet and container runtime use to exchange information.
Kubelet
Kubelet can be thought of as the agent software that runs on each node, allowing for
node management such as registering the node status with the API server and reporting
the health of the running containers with the API server. If the container has failed for
some reason, this status is reported to the API server via the kubelet; the API server can
then take corrective action based on the spec in the deployment.
1. MicroK8s.
Start with an Ubuntu 22.04 VM with at least 2vCPUs, 4GB of
memory, and 40GB of HDD space.
cd ~
sudo snap install microk8s --classic --channel=1.27/stable
shiva@wks01:~/container-static-website$ cd ~
shiva@wks01:~$
shiva@wks01:~$ sudo snap install microk8s --classic --channel=1.27/stable
microk8s (1.27/stable) v1.27.4 from Canonical✓ installed
shiva@wks01:~$
microk8s start
microk8s status
6. (Optional, only if you did Step 5 also) Source the profile file for it
to take effect, as shown in Listing 5-5, or log out and log back in.
source ~/.bash_profile
alias
Summary
In this chapter, we learned the similarities and differences between Docker and
Kubernetes, as well as set up our workstation with the minimal distribution of
Kubernetes so that we could run containers as we would run in any other production-
ready Kubernetes distribution, and along the way, we also learned the basic concepts
used in the Kubernetes world.
While we are running our Kubernetes cluster in a single node format, which includes
both the control and data planes (the nodes), the beauty of the cloud and Kubernetes is
that node management is almost minimal; most of what we need to do is in the control
plane management, which we will cover in future chapters.
In the next chapter, we will learn how to deploy our first containerized application
onto this cluster and experiment with it to learn the various functional capabilities of
Kubernetes.
Your Turn
Ensure that you have a working microk8s at this stage; the following chapters rely on this
instance of Kubernetes for further learning.
CHAPTER 6
Deploying Our First App in Kubernetes
In Kubernetes, most operations via the CLI are driven through the kubectl command, so it is well worth getting familiar with it. The kubectl run is the first command and takes the form
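shown below, in its simplest form for our microk8s setup (the pod name and image are placeholders):
microk8s kubectl run <pod-name> --image=<image-name>
The running pods can then be listed with:
microk8s kubectl get pods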
Self-explanatory: we asked kubectl to get a list of all the pods running at the point in time when the command runs.
The explanation for the output in Listing 6-1 is as follows:
READY is the number of pods that are running; 1/1 is the default, since we didn't ask it to launch multiple pods.
RESTARTS indicates the number of times K8S had to restart our pod, regardless of reason; since the desired state is 1, if K8S notices the pod has failed, it automatically restarts the container to reach the desired state, while keeping count of the restarts.
There is more we can do to inspect this running pod, which we will get to later; our
goal was to run a pod to ensure the cluster is functioning properly, which it is. We can now
move on to the next stage of this chapter, which is to run our custom-built container image.
The overall process looks something like this:
• Set up a local (container) repo; this will be where the K8S will pull the
containers from to run (https://microk8s.io/docs/registry-images).1
• Create the deployment.
1. MicroK8s – How to use a local registry
Let us remove the old tag to keep it clean as shown in Listing 6-4.
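Assuming the image was retagged as local/mynginx:01 in the previous step, removing the old tag is a docker rmi on the old name:
docker rmi mynginx:01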
A final confirmation that the retag took effect is by listing the docker images as
shown in Listing 6-5.
docker images
Listing 6-6. Exporting the docker image as a tar (Tape Archive) file
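With docker save, the export would look like this (assuming the local/mynginx:01 tag and the file name seen below):
docker save -o mynginx_1.0.tar local/mynginx:01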
Let us confirm the tar file got created successfully as shown in Listing 6-7.
shiva@wks01:~$ ls -l *.tar
-rw-rw-r-- 1 shiva shiva 88867840 Apr 7 22:56 mynginx_1.0.tar
shiva@wks01:~$
Next, we import this image, which is in tar file format, into the microk8s
environment, with one simple command as shown in Listing 6-8.
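Per the microk8s documentation referenced earlier, the import command takes this form:
microk8s ctr image import mynginx_1.0.tar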
For now, we are using microk8s's built-in repo system. In real life, you would use an external or dedicated artifact repository such as JFrog Artifactory or Sonatype Nexus, AWS ECR, Google Cloud's GCR, or something similar – more on this later.
Now that the import is completed, we can list the available images to confirm the
image has been successfully imported and is available for use as shown in Listing 6-9.
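The available images can be listed with ctr as well; for example:
microk8s ctr images ls | grep mynginx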
As expected, our local/mynginx:01 is listed as available for use; now it's time to run a pod with this image, as shown in Listing 6-10. The only option we have added to the kubectl run command is --port 80; this is so port 80 from the pod is exposed to the Kubernetes cluster, which we will in turn expose to end users – more on this later in the chapter.
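A run command along these lines would be used (the pod name mynginx is illustrative):
microk8s kubectl run mynginx --image=local/mynginx:01 --port 80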
Verify the pod is healthy and running using the kubectl get pods command as shown
in Listing 6-11.
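To check the pod and then reach it from the workstation, we can list the pods and forward a local port to the pod; assuming the illustrative pod name above and local port 4567:
microk8s kubectl get pods
microk8s kubectl port-forward pod/mynginx 4567:80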
curl localhost:4567
We can see our website! This confirms that our kubernetes infrastructure is running well, is able to launch pods, is able to port-forward, and is ready for more. Kill the port-forward command and delete the pod in preparation for the next exercise as shown in Listing 6-14.
Another way to create and deploy a pod is to use the create deployment command
and verify the deployment status as shown in Listing 6-15; the benefits of using a
deployment vs. a straight pod run command are described in the next section.
Thus, all we did was instruct K8S, through kubectl, to create a new deployment
called dep-webserver using the image local/mynginx:01.
The benefit of deploying the application via the deployment spec is that Kubernetes
will ensure that the deployment is always healthy; supposing the node that's running our
container goes unhealthy, Kubernetes will automatically restore our pod/application on
another healthy node. Launching the pod directly does not come with these benefits; if
the node goes down, our pod dies with it. Since Kubernetes was not aware it needed to
maintain that container, it will not attempt to relaunch that pod elsewhere.
The syntax is microk8s kubectl create deployment <name of your deployment>
--image <location of the image to be deployed>.
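Based on the deployment name and image described above, the command we ran corresponds roughly to:

microk8s kubectl create deployment dep-webserver --image=local/mynginx:01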
The preceding output shows that our deployment is successful and is in status
READY. More on replicas, etc., later.
Now that we have deployed our container, we need a way to access the service. Recall
that the pods are running inside their own IP space; we have to map an external (host)
port to the container port. To do that, within K8S we have to define a service, akin to how
we did that with the -P option in straight docker, as shown in Listing 6-16.
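The listing body is not reproduced here; exposing the deployment as a LoadBalancer service on port 80, as described next, takes roughly the form:

microk8s kubectl expose deployment dep-webserver --type=LoadBalancer --port=80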
Now that Kubernetes created a LoadBalancer service and mapped ports, let us find
out which external port K8S used to provide this service, by looking at the details of the
service it created as shown in Listing 6-17.
The external port is 32491 as seen in the PORTS column. Let us try and access our
service using the familiar CURL command as shown in Listing 6-18.
curl localhost:32491
Voilà! Our application is deployed in K8S and is accessible from the outside world,
well, from our host machine at least; we'll learn to expose these services via an ingress
controller, AWS Elastic Load Balancer (ELB), etc., in future chapters.
For now, great job!!
Summary
As planned, we imported our custom container image onto our kubernetes setup,
launched a few containers, and tested them successfully. In the next chapter, we
will learn about deployment files, which are a declarative way of defining what the
deployment should look like, and how they are used to deploy applications.
Your Turn
If you have the apache2/httpd image, please import that image onto your kubernetes
setup, expose the port, and access it from the workstation. This way, we know the cluster
is working well and ready for more.
CHAPTER 7
Deployment Files and Automation
In the previous chapter, we looked at how to deploy our website as a container using the
command line.
In this chapter, we will explore how to write deployment files to externalize various
configuration options and help with automation. Using deployment and service files is
the preferred way to configure your services, deployments, and other K8S components
because it enables automation – for example, letting teams scale up or down faster to
meet the transactional needs of the application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  selector:
    matchLabels:
      app: label-nginx
  template:
    metadata:
      labels:
        app: label-nginx
    spec:
      containers:
      - ports:
        - containerPort: 80
        name: name-nginx
        image: local/mynginx:01
The following is a brief description of each line contained in the deployment file:
spec: This section starts to describe the end state we desire with
this deployment.
selector: This tells K8S which pods belong to this deployment, by
matching labels; services can later match on the same labels.
ports: This tells K8S which port the application provides its service
on inside the container, 80 in our case.
Keep in mind that manifest files are:
• YAML based
• Case sensitive
• Space/tab sensitive
You might want to use a syntax checker for YAML and/or use
https://monokle.kubeshop.io/ for managing manifest files for your K8S cluster.
Let us assume the name of the file is mydep01.yaml – we are now ready to apply
this deployment.
Let us list the current deployment, before applying our new deployment, as shown in
Listing 7-2, using the get deployments command.
1 Labels and Selectors | Kubernetes
The only one we see is the previous deployment we created. Good, the name of the
deployment as given in the mydep01.yaml file will not conflict with this. Now we’ll add
another deployment of the same nature with the manifest file we just created and verify
the deployment using the kubectl apply and get deployments command as shown in
Listing 7-3.
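The listing body is not reproduced in this extract; the apply and verify commands take the form:

microk8s kubectl apply -f mydep01.yaml
microk8s kubectl get deployments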
You can see that the deployment “mydeployment” is READY, UP-TO-DATE, and
AVAILABLE to use. It’s that simple.
We can also inspect the deployment a bit in detail using the kubectl describe
command as shown in Listing 7-4; this shows additional information about the
deployment.
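The listing body is not reproduced here; the describe command takes the form:

microk8s kubectl describe deployment mydeployment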
Notice all the pertinent information that we provided in the manifest file showing
up here:
All other fields are defaults and/or added during the deployment creation, for
example, Namespace = default; since we did not specify a namespace to deploy to, K8S
took the default value of “default.” Same for Replicas, since we did not specify how many
replicas we needed this deployment to run, the default value of 1 was used; other values
such as CreationTimestamp and NewReplicaSet values are derived as the K8S sets up the
deployment.
Underlying this deployment are the pods. Let's also take a look at them as shown
in Listing 7-5 using the familiar get pods command.
Notice that the NewReplicaSet name matches the pod name with a suffix here.
Just like we described the deployment, using the same describe command, we can
also get more details on the pod(s). Let us describe the mydeployment pod as shown in
Listing 7-6.
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-5dx7z:
Type: Projected (a volume that contains injected
data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute
op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute
op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m11s  default-scheduler  Successfully assigned default/mydeployment-756d77bc56-trgjx to wks01
  Normal  Pulled     2m11s  kubelet            Container image "local/mynginx:01" already present on machine
  Normal  Created    2m11s  kubelet            Created container name-nginx
  Normal  Started    2m11s  kubelet            Started container name-nginx
All the details we expect about the pod, and more, are shown when we "describe"
something in Kubernetes; for example, the Image and Image SHA are the same as we'd expect,
because that is what we requested and what is available on the system. Recall that you can list the
images using microk8s ctr images list | grep [m]ynginx; the name and the SHA
hash from both commands would match.
Pay close attention to the "IP: 10.1.166.11" value, as this is the IP of the container
endpoint upon which the application running inside the container is exposed, that is,
the nginx website in our case. We will see this IP used when we map the service to this
deployment/pod.
The deployment is complete; the underlying pods are running. We still don’t have a
way to get to it as the 10.xx.xx.xx is internal to the K8S networking plane; we cannot get to
it directly from our host machine.
For that, we need another manifest file of type service to define the service we want
to expose to the world, just like how we did that in the command line earlier in the
chapter. Before we create the service deployment file, let us first observe the existing
services in the cluster using the get service command as shown in Listing 7-7.
Notice that kubernetes is the default service, and the only other one is "dep-webserver,"
which we created via the command line previously; now we will deploy a new service
using the manifest file method to expose the deployment we created in Listing 7-3.
apiVersion: v1
kind: Service
metadata:
  name: "myservice"
spec:
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
  selector:
    app: "label-nginx"
• The first two lines are required for all manifest files, the apiVersion
and kind.
• spec: This section describes the desired state of our service; here we
specify the external port, which we want to be 80, and the internal
deployment port it needs to map to, which is also 80 in our case.
Let us create the service using the apply command and verify the service using the
get service command options as shown in Listing 7-9.
Listing 7-9. Creating the kubernetes service using the service definition file
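The listing body is not reproduced here; assuming the service manifest was saved to its own file, the commands take the form:

microk8s kubectl apply -f <service manifest file>
microk8s kubectl get service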
Notice the myservice service is up and running, and the type is LoadBalancer as we
had requested. Ignore the EXTERNAL-IP field for now, more on that later. Notice the
PORT where the service is mapped to; in our case, 30331 is the port where the service is
mapped externally.
Let us now describe the service as shown in Listing 7-10 to obtain more details about
the service we just created using the describe service command.
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30331/TCP
Endpoints: 10.1.166.11:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
shiva@wks01:~$
Same as with the deployment, there are fields here, such as the Name, Selector, Type,
and Ports, that all match what we requested in our manifest file; some fields such as
Namespace and Endpoints are automatically determined by the K8S itself.
Most important of these is the "Endpoints: 10.1.166.11:80" value (recall that
we noted this IP when we described the pod that our mydeployment had created
earlier!). How did the service know this is the endpoint IP?
It knows through the Selector field. Recall that we asked K8S to map the service to the
container deployment labeled "label-nginx", and K8S did just that; it matched on the
descriptive label, looked up the endpoint, and mapped it automatically for us. As
deployments are rolled over, destroyed, updated, etc., we don't have to worry
about specific IP addresses changing; K8S takes all that pain away from us. This is one of
the reasons why K8S is so popular as a container orchestration system.
Now, we can test if our website is accessible from our host machine, using the CURL
command as we have done previously and as shown in Listing 7-11.
curl localhost:30331
Listing 7-12. Confirming the pod status before deleting the pod
Listing 7-13. Deleting the pod and confirming the service is restored
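The listing bodies are not reproduced here; using the pod name from the earlier describe output, the sequence takes roughly the form:

microk8s kubectl get pods
microk8s kubectl delete pod mydeployment-756d77bc56-trgjx
microk8s kubectl get pods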
A few moments later, we run the get pods command, and we can see kubernetes has
launched a new pod to ensure the operating state matches what we declared in the
deployment file. We know this is a new pod because the unique identifier for
the new pod is mydeployment-756d77bc56-j6gpq, which is different from the old pod.
Summary
In this chapter, we learned how to write kubernetes manifest files, also known as
deployment files, for various kubernetes resources, such as deployments and services, so that we
can inform kubernetes what the state of the cluster should be at any given time; kubernetes
takes care of the rest.
Your Turn
Write manifest files for your apache2/httpd container and deploy. The end result should
be a running kubernetes service based on your apache2/httpd image.
CHAPTER 8
A Closer Look at Kubernetes
In this chapter, we will look at various kubernetes concepts in detail, which were
introduced in the previous chapter. All we did in the previous chapter was deploy a
container to our microk8s kubernetes cluster using various system defaults. In this
chapter, we’ll answer the following questions. What are the defaults? Why do they
matter? And how do we customize the defaults for an optimal setup? We’ll learn that and
more, so get ready.
Clusters
A kubernetes cluster is a collection of capabilities, including the management plane
that provides us with various APIs to configure and control the cluster itself and the data
plane that consists of nodes and similar constructs where the workloads run.
We first have to authenticate ourselves to the cluster before we can perform any
functions on it. But in the previous chapter(s), where we deployed our
microk8s-based cluster, we immediately started interacting with it by listing nodes,
deploying containers, etc. How does the K8S cluster know who we are and whether we
are authorized to perform those functions? Who are we – in relation to the Kubernetes
cluster?
We set it up and started using it right away. Unlike most other software such as
databases, where we have to authenticate first, we did not do so in the kubernetes
ecosystem. Is this true? Let us see which user and groups we have set up as shown in
Listing 8-1.
whoami
id
shiva@wks01:~$ whoami
shiva
shiva@wks01:~$ id
uid=1000(shiva) gid=1000(shiva) groups=1000(shiva),4(adm),24(cdrom),27(sudo),
30(dip),46(plugdev),116(lxd),117(docker),998(microk8s)
shiva@wks01:~$
This tells us that from an operating system perspective, we are a Linux user with
some rights, namely, sudo and the group microk8s.
Who are we connected to and/or authenticated as from the Kubernetes cluster's perspective?
To see that information, we first need to know which cluster we are connected to,
since kubectl is capable of working with multiple Kubernetes clusters, each with its own
connection information. For now, we only have one microk8s-based Kubernetes cluster, and thus we
expect to see only one cluster; we can check that using the config get-clusters command as
shown in Listing 8-2.
This shows the cluster we are connected to. When microk8s was installed, it created
the first cluster for us and aptly named it “microk8s-cluster”. This tells us our command
line is aware of this cluster.
Next, we need to find out which context we are using, via the config current-context
command, as shown in Listing 8-3.
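Neither listing body is reproduced in this extract; the two commands are:

microk8s kubectl config get-clusters
microk8s kubectl config current-context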
This tells us that the current context is set to microk8s; we'll get to what a context is
in just a moment.
For kubernetes, there isn't an interactive interface like in the case of an operating
system or a database, where you log in with a username/password and the system
puts you into some kind of terminal with an interactive shell so you can interact with
the system.
Kubernetes is composed of various microservices that are accessed through, and defined by,
their API interfaces.
The command-line kubectl is merely translating our familiar command-line
interface into API client calls and executing them against the kubernetes API services.
So how do we authenticate and send a password?
Generally, any API service authenticates the requestor by the use of usernames and
tokens. Okay, which username and token am I using to access our microk8s?
To find out, we run the microk8s config command, as shown in Listing 8-4, and
examine its output.
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: YktrYUZEMUYwT0h4NTlweVVjY21EQ3c0WXpialEzTFltMmVEZ1FPTGhKST0K
shiva@wks01:~$
Session-ID-ctx:
<SNIP>---
read R BLOCK
closed
shiva@wks01:~$
Focus on this portion in the middle; we are using TLSv1.3 as the protocol version and
an AES 256–based Cipher for encryption as seen here:
SSL-Session:
Protocol : TLSv1.3
Cipher : TLS_AES_256_GCM_SHA384
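The command that produced the preceding TLS output is not reproduced in this extract; one way to inspect the API server's TLS session is with openssl s_client, assuming the default microk8s API server port 16443 referenced later in this book:

openssl s_client -connect 127.0.0.1:16443 </dev/null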
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: YktrYUZEMUYwT0h4NTlweVVjY21EQ3c0WXpialEzTFltMmVEZ1FPTGhKST0K
shiva@wks01:~$
Contexts
In this section, we will examine the output from Listing 8-6 line by line in detail.
This output shows all the contexts available for use. In this case, there is only
one, named
name: microk8s
user: admin
cluster: microk8s-cluster
The next line shows which context is the current or the active context in use by the
kubectl client program:
current-context: microk8s
Thus, we know that we are using the username of "admin" to authenticate to the
microk8s-cluster and have named this combination or context microk8s.
Where is the password? As we mentioned before, we use tokens, and these too are
present in the config output:
users:
- name: admin
user:
token: YktrYUZEMUYwT0h4NTlweVVjY21EQ3c0WXpialEzTFltMmVEZ1FPTGhKST0K
The context name is arbitrary; we can change the name of it to whatever we like, and
when we are managing multiple K8S clusters, we will need to switch between contexts,
etc. More on that in later chapters.
In summary, we are authenticating to our cluster as “admin” with the token shown
earlier to the cluster “microk8s-cluster” and have named this combination of things
(context) “microk8s,” and this is the only context available to us for now.
You can always know the K8S context you are operating in by executing the config
current-context command as shown in Listing 8-7.
You can list all the available contexts by executing the config get-contexts command,
as shown in Listing 8-8; this is useful when dealing with multiple clusters.
Notice that the * indicates the current context and the only context available.
Suppose we want to rename our context – microk8s isn't very exciting. We can easily
do this using the config rename-context command as shown in Listing 8-9.
Just like the Linux mv or Windows ren command, you give the old name and then the new name, and it's
done. Let us confirm the rename is successful using the command option config get-contexts
as shown in Listing 8-10.
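The rename listing itself is not reproduced here; the command takes the form below, where the new context name is purely illustrative:

microk8s kubectl config rename-context microk8s my-microk8s
microk8s kubectl config get-contexts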
Only the name of the context has changed, nothing else; the cluster we connect to,
the authinfo (username), etc., remain the same.
Let us now put it back to the old value just as an exercise as shown in Listing 8-11.
Other important aspects of the Kubernetes cluster are the Cluster API version and
the API client utility (kubectl) version; they have to match, or closely match, so they can
work together. To get the version information, we can use the version option as shown in
Listing 8-13.
This is too much information; we can get just the short version by adding the
--short option to this command as shown in Listing 8-14.
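The command behind this listing takes the form:

microk8s kubectl version --short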
Generally speaking, we need to ensure the client version is within one minor version
of the server version.1
Since both client and server versions were installed by microk8s, they both match
perfectly.
Nodes
Recall that the K8S cluster is composed of a “control plane” and “nodes” upon which the
workloads run. We took a tour of the control plane so far; let us now dig a little deeper
into the other components of the cluster, including “nodes.” To get information about
nodes, we can use the command option get nodes as shown in Listing 8-15.
There is only one node available for us to deploy our workloads to, and we
know that, since this is a single-instance management plane + worker node setup.
AGE: How long the node has been up within this cluster.
You can manually tag a node to be cordoned off, for example, in preparation for a
maintenance window. Let us do just that, that is, cordon off our node and validate the
status change, by using the command option cordon followed by get nodes as shown in
Listing 8-16.
1 https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
Listing 8-16. Command and output showing a node being taken out of service
microk8s kubectl cordon wks01
microk8s kubectl get nodes
You can see that the STATUS changed to Ready,SchedulingDisabled – this means
that the node will not accept any new pod creation; the pods running on that node will
not be disturbed until they are drained off or killed. We can confirm this by attempting to
launch a new pod using the run command as we have done previously and as shown in
Listing 8-17; the expected result is that the pod creation goes into a Pending status.
Listing 8-17. Command and output showing a pod stuck in Pending status
microk8s kubectl run mynginx02 --image=local/mynginx:01
microk8s kubectl get pods
Notice that though the pod creation command succeeded, the actual creation of the
POD is in PENDING status and will not succeed, because there are no other nodes in the
cluster that can take up the workload. In the next chapters, we will discuss adding more
nodes to the cluster and revisit this. Let us give the pod creation action five to ten minutes to see
if the pod has somehow managed to launch; after our wait time (patience) has elapsed,
we can run the get pods command for a status update as shown in Listing 8-18.
We gave it five minutes, and still the pod is in pending creation. Has the node status
changed? Let us verify it by getting details on the nodes with the get nodes command as
shown in Listing 8-19.
It makes sense because the node is still unavailable for scheduling. Unless we change
the status of the node, the node by itself will not change its status.
Let us make the node available for scheduling; we do that using the uncordon
command option as shown in Listing 8-20.
Listing 8-20. Command and output showing the node being brought back
to service
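The listing body is not reproduced here; based on the cordon command shown earlier, it takes the form:

microk8s kubectl uncordon wks01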
Then we can confirm if the node status has changed using the get nodes command
as shown in Listing 8-21.
Notice that the STATUS of the node has been updated to Ready, meaning it is
available to take workloads again.
Draining a Node
When we need to perform maintenance on a node, we need to move the
workloads running on it to a different node to maintain availability. We do that by
first draining the node using the drain command option; it takes the form
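microk8s kubectl drain <node name>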
Let us attempt to drain our node, as shown in Listing 8-22, and observe what
happens.
More on daemonsets later. Since we are running a single-node cluster, we are unable
to drain it successfully. What is the current status of the node then? Let us use the get
nodes command to inspect, as shown in Listing 8-23.
Notice that the STATUS has been automatically updated by the DRAIN command to
“SchedulingDisabled” since our intention is to drain this node for maintenance. Drain
= cordon + move workloads to other nodes. Cordon was successful, but the cluster is
unable to move workloads because there are no other nodes available to accept the
workload (recall: we are running a single-node cluster).
Let us put the node back to normal status by uncordoning it and validate its status as
shown in Listing 8-24.
Listing 8-24. Command and output showing the node being brought back
to service
The node is back the way it was and ready to accept workloads again. We can also
TAINT the node – more on that later.
So far, we have learned about the cluster, NODES, etc. What about other concepts
like NAMESPACES, logging, debugging, etc.?
Namespaces
A namespace is a boundary which we can arbitrarily define to segment workloads.
For example, let us assume that there are three business units (BU): BU01, BU02, and
BU03. We wish to segment the workloads based upon the ownership of the applications;
we can do that using namespaces.
Before we create these three BU-owned namespaces, let's first check out the existing
namespaces on the cluster using the get namespaces command, as shown in Listing 8-25,
and inspect them.
Though the default output doesn’t show the namespace, we can get that in one of
three ways.
The first is to use the --output json option to our command and filter out to show
only the fields we need to inspect. This can be accomplished by executing the command
as shown in Listing 8-27.
Listing 8-27. Command and output showing only selected fields from the
deployment description
sudo apt install jq -y
microk8s kubectl get deployments --output json | jq '.items[] | {name:
.metadata.name, namespace: .metadata.namespace}'
The second and easy way is to use the -A flag on the get deployments command as
shown in Listing 8-28.
Listing 8-28. Command and output showing another way to include namespace
in output
You can also describe the deployment and see the namespace there. Although adding
the -A flag is easy, there will be times when the output field we require is embedded deeply
in the JSON output and there are no command-line options such as -A to get ONLY the
required output; this is the reason learning a bit of JSON and jq will come in handy.
The third way is to describe the deployment; as mentioned earlier, the describe
command provides detailed information about the object we are asking Kubernetes to
describe, in this case our deployment, which also includes the namespace details.
Issue the describe deployment command as shown in Listing 8-29.
Notice we had to look closely for the only field, namespace, that we were interested
in, which is buried in a whole bunch of other details that we weren’t interested in at
this time.
So two deployments are in the “default” namespace; now let us go ahead and create
the additional namespaces we set out to do and confirm they got created as shown in
Listing 8-30.
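The listing body is not reproduced in this extract; creating and confirming the namespaces takes the form:

microk8s kubectl create namespace bu01
microk8s kubectl create namespace bu02
microk8s kubectl create namespace bu03
microk8s kubectl get namespaces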
It is as simple as that; kubernetes has set up the new namespace, which is available
for us to use now.
Deleting a Namespace
Deleting a namespace is also simple; let us create a new namespace named bu04 and
then proceed to delete the namespace and validate it as shown in Listing 8-31.
Listing 8-31. Command and output creating, deleting, and listing a namespace
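The listing body is not reproduced here; the create/delete/list sequence takes the form:

microk8s kubectl create namespace bu04
microk8s kubectl delete namespace bu04
microk8s kubectl get namespaces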
What if we have deployments inside that namespace? Let us create the scenario and
attempt to delete a namespace which has pods running inside it.
First, we need a namespace that has a deployment/pod running inside it; let
us launch a pod using the run command as we have done before and as shown in
Listing 8-33.
Listing 8-34. Command and output showing pods from a specific namespace
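Neither listing body is reproduced in this extract; based on the pod and namespace names appearing in the following output, the commands take roughly the form:

microk8s kubectl run mynginx-different-namespace-01 --image=local/mynginx:01 --namespace bu01
microk8s kubectl get pods --namespace bu01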
This way, you can deploy the pods in any arbitrary namespace that exists; if you
attempt to deploy to a namespace that doesn’t already exist, kubernetes throws an error
as shown in Listing 8-35, where we are intentionally attempting to launch a pod in a
nonexistent namespace.
Notice the namespace has a typo in it. Like we have done before, we can now list ALL
the pods and their namespaces using the get pods command with the -A option; since
we now have pods running in multiple namespaces, this is a good time to start including
the -A option when we get pods to know which pod is running in which namespace as
shown in Listing 8-36.
Listing 8-36. Command and output showing pod and associated namespaces
default   mydeployment-55bb4df494-9w5mp    1/1   Running   0   56m
default   mynginx02                        1/1   Running   0   118s
bu01      mynginx-different-namespace-01   1/1   Running   0   41s
shiva@wks01:~$
Using the -A option, you can see the column named NAMESPACE has been
added, and it tells us that the pod mynginx-different-namespace-01 is deployed in the
NAMESPACE bu01.
Let us also create a proper deployment just in case; create a deployment file named
deployment-bu01.yaml with the content shown in Listing 8-37.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
  namespace: bu01
spec:
  selector:
    matchLabels:
      app: label-nginx
  template:
    metadata:
      labels:
        app: label-nginx
    spec:
      containers:
      - image: local/mynginx:01
        name: name-nginx
        ports:
        - containerPort: 80
Listing 8-38. Deploying the pods using the deployment file we just created
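The apply command for this listing is not reproduced here; it takes the form:

microk8s kubectl apply -f deployment-bu01.yaml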
Notice that both the deployment and the pod are destroyed along the way.
Caution Notice you did not receive any warning saying PODS and
DEPLOYMENTS exist inside the namespace you are trying to delete. BEWARE of
what's running inside the namespace before issuing the delete command.
You already know how to describe a kubernetes resource using the describe
command:
For example, let us list our mydeployment in detail as shown in Listing 8-41.
You can use the describe command to describe various types of Kubernetes objects
and resources to obtain detailed information about them.
Pods
Running Commands Inside a Pod
Kubernetes allows us to run commands inside a container. We do that using the
command format
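The exact invocations are shown in the listings that follow; the general form is:

microk8s kubectl exec <pod name> -- <command to run inside the container>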
To illustrate this, let us run a simple ps -efl command inside of a container as shown
in Listing 8-42.
We have just executed the command as if we were inside the container running the
ps -efl command. Notice how the container only has the workload process – nginx in
this case – running; there are none of the other typical processes you see on a Linux VM, such as
init, cups, dbus, etc. This is the beauty of containers – process virtualization.
Can you execute arbitrary Linux commands inside the container? The short answer
is yes, as long as the said binary is present inside the container; let us check to see if our
container has /usr/bin/bash by listing the file in that directory as shown in Listing 8-43.
It looks like bash is present on the container disk image; let's try launching it as
shown in Listing 8-44. This is the same as executing a command as we have done
previously, except this time we will get an interactive shell on the container.
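The two listing bodies are not reproduced in this extract; using the pod name that appears in the following output, they take roughly the form:

microk8s kubectl exec dep-webserver-7d7459d5d7-6m26d -- ls -l /usr/bin/bash
microk8s kubectl exec -it dep-webserver-7d7459d5d7-6m26d -- /usr/bin/bash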
Voilà! We have an interactive shell from inside the container. Can we run a typical Linux
command? Let us run “hostname” inside the interactive shell as shown in Listing 8-45.
Listing 8-45. Output showing the shell prompt from inside the pod
hostname
root@dep-webserver-7d7459d5d7-6m26d:/# hostname
dep-webserver-7d7459d5d7-6m26d
root@dep-webserver-7d7459d5d7-6m26d:/#
uptime
netstat -tan
root@dep-webserver-7d7459d5d7-6m26d:/# uptime
01:40:13 up 2:50, 0 users, load average: 0.67, 0.55, 0.53
Notice that some commands such as netstat are not found in the container image.
Expert Advice Although you are able to launch a bash like you would
on a normal Linux system, you should not do this unless it is for extreme
troubleshooting. We should treat containers as IMMUTABLE infrastructure and as
if you are not able to log in to the system – as many images do not even have a
bash (or other shells) in their system images, the only way in and out is through
the kubernetes cluster management tools. This is best practice. Do not fall into the
anti-pattern of treating containers like VMs.
We are done inspecting what we can do inside the container; we can exit out of it.
Pod Logs
Kubernetes gives you the logs command, which shows information logged by
the resource. This can be very useful in the process of troubleshooting. To get the logs
from a deployment, we use the logs <type>/<resource name> command as shown in
Listing 8-47.
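The listing body is not reproduced here; for the deployment used in this chapter, the command takes the form:

microk8s kubectl logs deployment/dep-webserver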
When multiple pods are found for a given deployment, Kubernetes tells you which
pod it is displaying the logs from, as shown in the aforementioned output. To obtain logs
from a specific pod, we can simply address it directly as shown in Listing 8-48.
Notice the typo in the image name. Although the pod got created, it won’t find the
image, so it will create some log entries for us to inspect. Let us get the pod status first as
shown in Listing 8-49.
As expected, the pod’s STATUS shows ErrImagePull – meaning the pod creation
process is erroring out in the step when it is attempting to pull the image that will be
used to launch the container.
What do the logs show now? Let us get the logs for this deployment as shown in
Listing 8-50.
A bit more descriptive but we already knew that. This example illustrates how to
obtain pod logs to aid in normal operations as well as troubleshooting.
Let us deploy an application that will produce some normal logs that will aid in
observing application logs.
For this, we'll use a publicly available image named gitshiva/primeornot. It is a
simple Java/Spring Boot application that runs in a container and provides a simple web
service; given a number, it determines whether or not it is prime and outputs the result.
Let us launch this application as a pod in our cluster using the kubectl run command,
as we have done previously and as shown in Listing 8-51.
The image, gitshiva/primeornot, is the author’s example Spring Boot application that
will show some errors. You can also deploy it since it is a publicly available image.
Now that the pod is deployed, let us examine the logs using the kubectl logs
command as shown in Listing 8-52.
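The run and logs commands behind these listings are not reproduced here; using the pod name that appears later in this chapter, they take roughly the form:

microk8s kubectl run primeornot --image=gitshiva/primeornot
microk8s kubectl logs primeornot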
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.6.RELEASE)
You can see the logs from the container that’s running inside this pod. It shows
that the Spring Boot application has started successfully (the last message), among
other things.
/determineprime?number=<some number>
but we need to execute this inside the container, since the port is :8080 inside of the
container and we have not exposed it to the outside world.
Fortunately, we have our exec command, so first let us run the CURL command from
inside the container and see if it executes successfully, which we can do as shown in
Listing 8-53.
Listing 8-53. Command and output of accessing the web service running
inside the pod
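The listing body is not reproduced here; the exec/curl combination, shown again later in this chapter with a different number, takes the form:

microk8s kubectl exec primeornot -- curl -s localhost:8080/determineprime?number=77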
You can see that the container has produced a bit more logs, as each incoming
request into the app is logged, which is then seen via the “logs” command.
Can we tail the logs?
Sure! The --tail=<number of entries to show> option is perfect for that; --tail=-1 shows
everything, as shown in Listing 8-56.
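The listing bodies are not reproduced here; the tail and follow variants take the form:

microk8s kubectl logs primeornot --tail=10    # show the last 10 log entries
microk8s kubectl logs primeornot --tail=-1    # show everything
microk8s kubectl logs primeornot -f           # follow the log as new entries arrive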
77 is NOT prime
2022-05-14 18:34:18.855  INFO 6 --- [nio-8080-exec-3] us.subbu.PrimeOrNot01 : We are about to return (in next line of code) 77 is NOT prime
got the request ..
73
2022-05-14 18:34:25.840  INFO 6 --- [nio-8080-exec-4] us.subbu.PrimeOrNot01 : number to check for prime is: 73
73 is PRIME
2022-05-14 18:34:25.842  INFO 6 --- [nio-8080-exec-4] us.subbu.PrimeOrNot01 : We are about to return (in next line of code) 73 is PRIME
Notice that in one window, we had the log tail open (the prompt hadn’t returned
since we used the -f option).
In another terminal window, run microk8s kubectl exec primeornot -- curl -s
localhost:8080/determineprime?number=1097 and watch the pod logs get updated.
This is a neat technique to have at hand as you are troubleshooting pods.
We are attached after executing the attach command, but there is no shell prompt
because the running primary process is nginx and it is not programmed to give the user
a shell.
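The attach command referenced here is not reproduced in this extract; it takes the general form:

microk8s kubectl attach <pod name>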
In another window, we ran the command in Listing 8-59.
The web application produced the logs for this access request, which are displayed in
the window where the "microk8s kubectl attach" command is running; this is one use case
for attaching to a container – debugging.
Press CTRL+C to exit out of the attached state.
Port-Forward
Earlier, we wanted to expose the container port :8080; until a port-forward (or expose)
happened, there was no way to access it from our workstation; thus, we ended up using
the exec command instead. Let us proceed with exposing the web service
outside the cluster, as we have done in the past; first, let us get the details of the pod as
shown in Listing 8-60.
<SNIP>
    Image:          gitshiva/primeornot
    Image ID:       docker.io/gitshiva/primeornot@sha256:1a67bfc989b98198c91823dec0ec24f25f1c8a7e78aa773a4a2e47afe240bd4b
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 28 Aug 2023 01:59:14 +0000
<SNIP>
  Normal  Started  72s  kubelet  Started container primeornot01
shiva@wks01:~$
We notice that the container exposes port 8080 to the cluster. What if we wanted to map this
port outside the cluster, so that users outside the cluster can access this web server/service?
Port-forward allows us to do that, at least temporarily, since the proper setup would
be to define a service to expose that port.
Port-forward is simple; we issue the command:
Listing 8-61. Command and output for port-forward to expose the service the
pod is providing
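The listing body is not reproduced here; based on the arbitrary local port noted next, the command takes roughly the form below, where omitting the local port lets kubectl pick one at random:

microk8s kubectl port-forward pod/primeornot01 :8080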
Kubernetes arbitrarily selected port 34651 on the workstation where we are and
mapped it to 8080 of the container.
We can confirm this in another window on the workstation using our familiar CURL
command as shown in Listing 8-62.
Listing 8-62. Accessing web service via the kubernetes service port
curl -s localhost:34651/determineprime?number=3753
Notice that we did not issue the exec command; instead, we just reached our service
via the localhost on port 34651, which is what kubernetes gave us earlier.
Summary
In this chapter, we learned about how to investigate the K8S cluster we are working
with, its endpoints, the contexts, how to switch between contexts, and how to work with
nodes and namespaces. We also learned how to obtain log information of a pod, how
to execute a command inside a running pod and attach to it, and how to utilize port-
forwarding techniques to access services the pod provides.
Your Turn
Using the commands in this chapter, investigate your own K8S cluster. Do you happen
to have a test cluster in any of the cloud providers? Feel free to investigate and learn
about them.
Another task could be to install and run the popular 2048 game. A public container
image is available here: https://github.com/bin2bin-applications/2048-game/pkgs/container/2048-game
– deploy it, check out the container and the logs, and attach to and inspect the pod.
CHAPTER 9
Scaling the Deployment
ReplicaSets
One way to scale up pods is to use the ReplicaSet concept of Kubernetes. As per
the Kubernetes documentation,1 "ReplicaSet's purpose is to maintain a stable set of
replica Pods running at any given time." The documentation also recommends using the
deployment methodology we learned earlier. Still, ReplicaSet gives us another way to scale
pods despite its limited application. Let us use this concept to scale our pods.
1 https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  selector:
    matchLabels:
      app: label-nginx
  template:
    metadata:
      labels:
        app: label-nginx
    spec:
      containers:
      - ports:
        - containerPort: 80
        name: name-nginx
        image: local/mynginx:01
  replicas: 3
This file is the same as the one we used in Chapter 7 named mydep01.yaml. The only
additional declaration we have made is replicas: 3; kubernetes takes care of the rest.
We can now apply or deploy this to our cluster and confirm the deployment scaled up as
shown in Listing 9-2.
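The apply/verify commands for this listing are not reproduced here; using the file name referenced later in this chapter, they take the form:

microk8s kubectl apply -f mydep-atscale.yaml
microk8s kubectl get deployments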
Notice that the READY, UP-TO-DATE, and AVAILABLE columns all reflect the three
replicas we wanted and asked K8S to scale out to. Did the pods increase in numbers? It
sure must have; we can confirm the same using the get pods command we have used
previously and as shown in Listing 9-3.
NAME READY STATUS RESTARTS AGE
dep-webserver-7d7459d5d7-6m26d 1/1 Running 0 124m
mynginx02 1/1 Running 0 58m
primeornot 1/1 Running 0 27m
primeornot01 1/1 Running 0 13m
this-pod-will-not-start 0/1 ImagePullBackOff 0 29m
mydeployment-55bb4df494-45qkl 1/1 Running 0 2m17s
mydeployment-55bb4df494-kdr7r 1/1 Running 0 2m17s
mydeployment-55bb4df494-tnlhd 1/1 Running 0 2m17s
shiva@wks01:~$
That said, it is important to note that all these pods are running in a single VM which
is hosting our K8S management plane (microk8s) and its compute nodes; this is not
good for availability since the loss of this VM will result in both the management plane
and the compute nodes being unavailable.
In the real world, we'll place the worker nodes on separate
physical servers or, in the cloud world, across different availability zones and/or regions.
More on this later.
Just to stretch our node, we updated the replicas to 90 and updated our deployment.
You can see that the K8S scheduler is attempting to keep up with our request in
Listing 9-5.
Your Turn: Update the replicas: 3 field in your deployment file to 90 and apply it.
Thirty of the requested ninety are READY, UP-TO-DATE, and AVAILABLE; the K8S
scheduler is still working on the remaining pods. After a while, the VM was very sluggish;
thus, the author ended up rebooting the VM.
Note that the available compute capacity on the compute nodes will limit how
much we can scale the pods/containers that run on top of it; since we are only running
a single worker node on our VM, it is easy to saturate it; as in the case earlier, the cluster
was never able to reach the requested 90 as the resources on the compute node were
exhausted. The author had to reboot the VM to restore stability, then scale down the
deployment back to the original three replicas. Note that there are measures to limit the
number of pods deployed in a node, based on the estimated CPU/memory the pods will
consume; in real life, node stability is important. Thus, there are preventative measures
available; in our case, we want to simulate resource saturation and/or exhaustion, in
order to learn the node behavior.
The scalability factor lies in the underlying compute nodes; we have to monitor
their utilization and add when resources are low. This is the beauty of the public clouds,
where we can keep adding compute nodes as needed, across various availability zones
and regions for fault tolerance.
Here on a VM with 2GB, we were able to scale up the pods to 30. Imagine replicating
the same with VMs; even with 0.5GB of RAM and 1GB of HDD, the setup would have
taken 15GB of RAM and 30GB of HDD in total, whereas this entire setup was run with
2GB of memory and 8GB of HDD space. This is one of the beauties of running containers
and kubernetes where resource requirements are very minimal for a given workload as
we shed the bulk of the thick OS provisioning – and on top of it, if we need to patch, we
just update the image and do a rolling update of the deployment, and voilà, we have the
latest running across the farm!
Let us bring back the deployment to one pod, by updating the field replicas: to 1 on
the mydep-atscale.yaml file and reapplying it. Note: If the system is slow, you might have
to give it some time for the deployment to scale down. Reboot the system if necessary.
So we begin our experiment with scaling the pods. Our goal is to observe how many
pods we can successfully run in our node with only 2GB of memory before it starts to
suffer resource exhaustion.
Recall that we are back to a one-pod deployment; let us confirm the starting state of
replicas as shown in Listing 9-6.
In order to scale the pods in a methodical fashion, allowing us to observe the cluster
status along the way, we launch a pod every 30 seconds. We can do that using a simple
bash script; the contents of the bash script are shown in Listing 9-7; create a file named
scale-deployment.sh with this content, save, and have it ready.
#!/bin/bash
# Increase the deployment's replica count by one every 30 seconds
num=1
while [ $num -gt 0 ]
do
  microk8s kubectl scale deployment --replicas $num mydeployment
  echo "deployment scale = $num"
  date
  echo "sleeping for 30 seconds, on another terminal watch deployment being scaled"
  sleep 30
  num=`expr $num + 1`
done
On another terminal window, get the status of our pods using the command microk8s
kubectl get deployments. On yet another window, describe the node that provides the
underlying compute, as shown in Listing 9-8; this is to observe that the node has no
disk/memory pressure at the beginning of this exercise.
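The describe command for the node takes the form:

microk8s kubectl describe node wks01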
                     kubernetes.io/hostname=wks01
                     kubernetes.io/os=linux
                     microk8s.io/cluster=true
                     node.kubernetes.io/microk8s-controlplane=microk8s-controlplane
Annotations:         node.alpha.kubernetes.io/ttl: 0
                     projectcalico.org/IPv4Address: 192.168.0.209/24
                     projectcalico.org/IPv4VXLANTunnelAddr: 10.1.166.0
                     volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 31 Aug 2023 16:19:53 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: wks01
AcquireTime: <unset>
RenewTime: Thu, 31 Aug 2023 20:59:41 +0000
Conditions:
  Type                Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----                ------  -----------------                 ------------------                ------                      -------
  NetworkUnavailable  False   Thu, 31 Aug 2023 16:54:01 +0000   Thu, 31 Aug 2023 16:54:01 +0000   CalicoIsUp                  Calico is running on this node
  MemoryPressure      False   Thu, 31 Aug 2023 20:59:30 +0000   Thu, 31 Aug 2023 16:19:53 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure        False   Thu, 31 Aug 2023 20:59:30 +0000   Thu, 31 Aug 2023 16:19:53 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure         False   Thu, 31 Aug 2023 20:59:30 +0000   Thu, 31 Aug 2023 16:19:53 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Type     Reason                   Age   From        Message
  ----     ------                   ----  ----        -------
  Normal   Starting                 14s   kube-proxy
  Normal   Starting                 15s   kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      15s   kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  15s   kubelet     Node wks01 status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    15s   kubelet     Node wks01 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     15s   kubelet     Node wks01 status is now: NodeHasSufficientPID
  Warning  Rebooted                 15s   kubelet     Node wks01 has been rebooted, boot id: 629217a2-75e5-4ac7-b062-ea8fd36e5691
  Normal   NodeAllocatableEnforced  15s   kubelet     Updated Node Allocatable limit across pods
shiva@wks01:~$
Notice that Kubernetes reports the node has no memory or disk pressure (also
shown in Figure 9-1); the node is ready to take on new workloads, and under the
Allocatable section, it describes the CPU units, 2, and memory, 1896932Ki, which is
roughly equivalent to 1.89GB available for workloads.
Meanwhile, our scale script is running in the background and has scaled up
continuously, one pod at a time, as shown in Figure 9-2.
After about 16 mins, we have scaled up to 38 replicas as shown in Listing 9-9. Note
that when we ask the kubernetes cluster to "scale," we are effectively telling kubernetes
our desired number of replicas. The Kubernetes cluster will "eventually" deploy these
replicas, provided the underlying compute nodes have the available capacity to run
the desired number of pods.
In the meantime, let us also get the details of the deployment to see how things are
going as shown in Listing 9-10.
In the window where the scale-deployment.sh script is running, we are seeing signs
of the K8S cluster slowing down as shown in Listing 9-11.
Listing 9-11. Output from the terminal running the scale-deployment.sh script
deployment.apps/mydeployment scaled
deployment scale = 270
Thu Aug 31 09:10:58 PM UTC 2023
sleeping for 30 seconds, on another terminal watch deployment being scaled
deployment.apps/mydeployment scaled
deployment scale = 271
We are seeing signs of the system slowing down, and the pods are not getting
created; rather, they are getting queued up as follows:
If the system slows down enough, you might even see error messages:
"The connection to the server 127.0.0.1:16443 was refused - did you specify
the right host or port?"
This is most likely because the underlying node is struggling to keep up with the
scale activity; however, it is still able to launch pods, and the system is still functioning.
Let us keep it running for now.
At approximately 103 replicas, the system is getting slower and slower, and it has
stopped scheduling further pods on this node because, as the description of
the node says, the Allocatable pods on this node are 110, as shown in Listing 9-12.
Listing 9-12. Output describing the node showing running and allocatable
number of pods
Let us stop the scale-deployment.sh script that is scaling up the deployment to see if
the system will get relief – on the terminal where our scale-up script is running, press ^C
to break its execution as shown in Listing 9-13.
Notice that our desired number of pods has been upped to 533, while kubernetes
was only able to launch 103 pods for this deployment + 7 other pods already on the
system = 110 pods total allocatable, after which the underlying compute node ran out of
allocatable resources; thus, the cluster would be unable to scale the number of pods to
match our desired state.
It is IMPORTANT to note that loading up a node with an unlimited number of pods
is an anti-pattern; kubernetes has built-in controls to prevent system instability
due to resource exhaustion; the cluster will scale up to our "desired" state only if
resources permit such an activity.
This exercise is to demonstrate two things:
1. The scale of pods vs. the scale of VMs: Imagine, 46 VMs with 2GB
each would have needed at least 46 x 2 = 92GB of memory; however,
we were able to scale up the pods with just 2GB of memory – that's
the benefit of containerization.
There are two options to scale the underlying compute:
1. Vertical scaling
a. By increasing the size of the underlying compute VM from, say, 2GB to 8GB,
since memory seems to be the primary bottleneck in our case.
2. Horizontal scaling
a. By adding more compute nodes to the cluster, which is what we will do next.
Let's try horizontal scaling. Here at our labs, we are going to find another physical
machine to act as a node in this cluster; if you have plenty of horsepower in your test
machine, there's no harm in launching a new VM for that, keeping in mind that you are still
competing for and sharing the underlying CPU.
Recall that this is the beauty of the public cloud in that we can add a NODE readily
from the public cloud capacity and instantly scale up.
Bring the deployment back to one replica by applying the mydep-atscale.yaml file,
ensuring mydep-atscale.yaml replicas are set to one as shown in Listing 9-15.
Summary
In this chapter, we learned about how to scale the deployment, how the resource
availability of the underlying node affects the ability to scale the deployment, and what
are some of the options available to scale the underlying compute, namely, vertical
scaling and horizontal scaling.
In the next chapter, we will, as described earlier, add additional compute nodes to
learn about how to scale the kubernetes compute cluster in order to expand the resource
availability and redundancy of the nodes and accommodate more workloads.
These techniques will come in handy when we roll out our cluster on a public cloud.
CHAPTER 10
Scaling Compute Nodes
Node Management
So far, we have been working on a cluster with only a single node. What if we wanted to
add more nodes to the cluster?
We can do that in microk8s itself. First, we need to prepare another machine whether
physical or virtual, then we add the second machine to the cluster by performing the
following steps.
First, create two new Ubuntu 22.04 LTS VMs to act as worker nodes. In my case, I’ve
created two new VMs and named them node02 with IP 192.168.0.191 and node03 with IP
192.168.0.149. Node01 is the wks01 VM where the cluster is running.
Note Make sure that all these VMs are in the same subnet so that we do not
have to worry about routing, firewalls, etc.
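The installation steps on the new VMs are not reproduced in this extract; microk8s is typically installed on each worker VM the same way as on the original workstation, via snap (pin the channel if you need to match a specific version):

sudo snap install microk8s --classic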
microk8s status
Ensure the SAME version of microk8s is installed on both the parent node and the
worker nodes (node02 and node03 in our case).
On the parent node, run the kubectl version command to obtain cluster client and
server versions as shown in Listing 10-2.
Repeat the same on the worker node, node02, as shown in Listing 10-3, as well as
on node03.
Now we can proceed to join the worker nodes to the cluster; first, we need to add a
host entry for node02 so the hostname resolves and then proceed to use the microk8s
add-node command, which generates the tokens required for the parent-node cluster
relationship; execute this command as shown in Listing 10-4.
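The commands in Listing 10-4 are not reproduced in full here; based on node02's IP address given earlier, they take the form:

sudo bash -c "echo '192.168.0.191 node02' >> /etc/hosts"
microk8s add-node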
Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.0.81:25000/421b127345456318a5aeed0685e5aaf2/81c1ddb1e5cb --worker
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.0.81:25000/421b127345456318a5aeed0685e5aaf2/81c1ddb1e5cb
microk8s join 172.17.0.1:25000/421b127345456318a5aeed0685e5aaf2/81c1ddb1e5cb
shiva@wks01:~$
The join command is displayed in the output; copy the join command, and back
on the node02 terminal, execute it as shown in Listing 10-5. Notice that we added the
--worker flag at the end, since we only want to add a worker node, not make the
cluster HA, which we will deal with in advanced topics.
The node has joined the cluster and will appear in the nodes list in a few seconds.
This worker node gets automatically configured with the API server endpoints.
If the API servers are behind a loadbalancer please set the '--refresh-interval' to '0s' in:
    /var/snap/microk8s/current/args/apiserver-proxy
and replace the API server endpoints with the one provided by the loadbalancer in:
    /var/snap/microk8s/current/args/traefik/provider.yaml
shiva@node02:~$
Repeat the same for node03; first, on the parent node, add a host entry for node03,
then create a new add-node token, then use the token on node03 to join the cluster.
On the parent node, obtain a new join token as shown in Listing 10-6.
Listing 10-6. Obtaining a new add-node token from the parent node
sudo bash -c "echo '192.168.0.149 node03' >> /etc/hosts"
microk8s add-node
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.0.26:25000/dbf59cf2b0d45fa59060a597b73cebb1/4c64e74803d0
microk8s join 172.17.0.1:25000/dbf59cf2b0d45fa59060a597b73cebb1/4c64e74803d0
shiva@ubuntu2004-02:~$
The node has joined the cluster and will appear in the nodes list in a few seconds.
This worker node gets automatically configured with the API server endpoints.
If the API servers are behind a loadbalancer please set the '--refresh-interval' to '0s' in:
    /var/snap/microk8s/current/args/apiserver-proxy
and replace the API server endpoints with the one provided by the loadbalancer in:
    /var/snap/microk8s/current/args/traefik/provider.yaml
shiva@node03:~$
Now back on the parent node, we can check the status by querying the nodes on the
cluster as shown in Listing 10-8.
As expected, the new nodes we added to the cluster are listed as valid nodes in the
cluster and are ready to accept incoming workloads. If we attempt to run any kubectl
commands on the worker nodes, we get a friendly reminder as shown in Listing 10-9.
Listing 10-9. Output of get nodes from the worker node – node02 and node03
Let us repeat the cordon exercise from the previous chapter, only this time we have
more nodes to move the workloads to.
Let us cordon off the parent node first as shown in Listing 10-10.
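The commands in Listing 10-10 boil down to cordoning the parent node and confirming its status; a minimal sketch, assuming the parent node is named wks01:
microk8s kubectl cordon wks01
microk8s kubectl get nodes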
Listing 10-10. Cordoning off the parent node and confirming status
Let us see where the workloads are running now, as shown in Listing 10-11.
You can also run the microk8s kubectl get pods -o wide command to get a similar
output, which includes the node upon which the pods are running as shown in
Figure 10-1.
Figure 10-1. Output showing workloads still running on the parent node
You can see that the nodeName wks01 is the one running all the pods, since that was
the only node available to use when we started this exercise. Now, we’ll drain this node
in an effort to kick off a workload redistribution as shown in Listing 10-12.
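A minimal sketch of the drain command used here, assuming the parent node is wks01 (the exact form in Listing 10-12 may differ):
microk8s kubectl drain wks01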
This is because these pods were started using the "run" command, that is, they are not proper deployments; run is more of a shortcut (generator) for launching a pod for testing and similar purposes. Since they were not created declaratively (using deployment files), kubernetes reports that and fails the command.
It also reports a warning about the DaemonSet-managed pods it cannot delete.
This is safe to ignore for now; we will use the --force and --ignore-daemonsets options to get past these errors.
But notice that the drain command already moved two pods to the other nodes, and
it load-balanced the pods beautifully – it placed each of the pods in different nodes as
you can see from the preceding output, that is, it put some pods on node02 and some on
node03, utilizing all the available nodes.
Let us try to drain the pods from the parent node again with the --force and --ignore-
daemonsets options as shown in Listing 10-13.
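A sketch of the same drain with the additional options, again assuming wks01 is the parent node:
microk8s kubectl drain wks01 --force --ignore-daemonsets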
pod/coredns-7745f9f87f-5wfv8 evicted
pod/primeornot evicted
pod/primeornot01 evicted
node/wks01 drained
shiva@wks01:~$
This time, you can see that kubernetes is indeed evicting the pods as we have asked it to and is providing the status along the way. So what is the current status of the pods, that is, which pod is running on which node? Let's get the pods with the -A option and selected fields only as shown in Listing 10-14.
Notice the NODENAME column – nice! Barring that one pod named calico-node-
5q6x6, all the other pods have been evenly distributed among the other nodes.
Let us now bring back the parent node online as shown in Listing 10-15.
Listing 10-15. Putting the parent node online and getting the status
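Listing 10-15 is not reproduced in full here; a minimal sketch of bringing the node back online and querying pod placement, assuming the node name wks01 and the same custom-columns query shown later in Listing 10-23:
microk8s kubectl uncordon wks01
microk8s kubectl get pods -A -o custom-columns=NAME:metadata.name,NAMESPACE:metadata.namespace,NODENAME:spec.nodeName,HOSTIP:status.hostIP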
mydeployment-55bb4df494-7c8zz default node03
192.168.0.149
dep-webserver-7d7459d5d7-5z8qb default node02
192.168.0.191
shiva@wks01:~$
Notice the NODENAME column in the output from Listing 10-15; though the parent node wks01 is back online, kubernetes is not putting any pods back on the parent node, because it has no reason to load-balance again; there have been no failures, alerts, or system pressure, so it has left the workloads where they are.
Let us now drain node03 as shown in Listing 10-16. Recall that we now have two
other nodes wks01 and node02 ready to pick up the workloads; thus, this time we expect
these nodes to pick up the workloads, and the cluster will still be stable and running…
calico-node-xwwbc kube-system node02
192.168.0.191
calico-node-tkzbg kube-system node03
192.168.0.149
calico-kube-controllers-6c99c8747f-7xb6p kube-system node02
192.168.0.191
dep-webserver-7d7459d5d7-5z8qb default node02
192.168.0.191
mydeployment-55bb4df494-tp4vw default wks01
192.168.0.81
coredns-7745f9f87f-cjlpv kube-system wks01
192.168.0.81
shiva@wks01:~$
Notice that all the pods except that one calico-node-tkzbg have moved to
other nodes.
This is because calico-node is managed by a DaemonSet, which wants one pod on every node; since there are three nodes, kubernetes keeps one calico-node pod on each node, thus maintaining the desired state as shown in Listing 10-17.
Even if we take down node03, the DaemonSet will survive because kubernetes will bring its pod back up once the node is healthy again; to illustrate that, let us shut down the node03 VM by first getting the current status as shown in Listing 10-18.
Let us power down node03 as shown in Listing 10-19 and observe its effects on the
cluster.
Did the cluster recognize that node03 is now unavailable for running workloads?
How did the pods get distributed across the remaining nodes? Let us get the pod status
from wks01 node as shown in Listing 10-20.
calico-node-xwwbc kube-system node02
192.168.0.191
calico-kube-controllers-6c99c8747f-7xb6p kube-system node02
192.168.0.191
mydeployment-55bb4df494-tp4vw default wks01
192.168.0.81
coredns-7745f9f87f-cjlpv kube-system wks01
192.168.0.81
dep-webserver-7d7459d5d7-5z8qb default node02
192.168.0.191
calico-node-tkzbg kube-system node03
192.168.0.149
shiva@wks01:~$
Let us power node03 back on. What impact does it have on the cluster? Will the
calico-node or other pods go back to this node? And what happened to that calico pod
that was running on node03? Checking on the node and pod status, we realize that
the calico pod on node03 was restarted automatically after the cluster had recognized
node03 was back online as shown in Listing 10-21.
After a while, without us doing anything (since node03 came up and microk8s brought up its services automatically on that node), the cluster recognizes that and moves the node back to Ready status. Since the node is still cordoned (draining a node also cordons it), scheduling remains disabled; let us uncordon it as shown in Listing 10-22.
Listing 10-23. Obtaining the list of pods and the name of the node they run on
microk8s kubectl get pods -A -o custom-columns=NAME:metadata.name,
NAMESPACE:metadata.namespace,NODENAME:spec.nodeName,HOSTIP:status.hostIP
microk8s kubectl get daemonsets
Summary
In this chapter, we learned about node management, how to add and schedule nodes,
the impact on the pods, how the pods get distributed across the nodes, what happens if a
node goes down, and how kubernetes automatically detects such conditions and moves
pods to other healthy nodes to keep the cluster running and keep deployments in their
desired state.
In the next chapter, we will start a new concept, Role-Based Access Control. So far,
we have been connecting to the cluster as the admin user. RBAC is an important concept
that allows us to give cluster access based on least privileges for the job roles; for that, we
have to learn about the RBAC capabilities of Kubernetes, which is the topic of our next
chapter.
CHAPTER 11
Kubernetes RBAC
In this chapter, we will learn about the Role-Based Access Control capabilities that
Kubernetes offers so that we can grant access to various types of business and technical
users to the clusters with permissions based on the concept of least privilege to get their
job done.
Let us assume we have three classes of users, namely, (1) K8S administrators; (2)
DevOps users that will need some but not admin access to the cluster, perhaps to deploy
applications and monitor them; and (3) read-only users that will only need to check on
the cluster once in a while.
For this setup, we will start by creating the corresponding OS users as shown in
Listing 11-1.
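Listing 11-1 is not reproduced here; a sketch of the kind of commands it contains (user names taken from the text; only the two admin users are added to the microk8s group):
sudo adduser k8sadmin01     # repeat for k8sadmin02, k8sdevops01, k8sdevops02, k8sreadonly01, k8sreadonly02
sudo usermod -aG microk8s k8sadmin01
sudo usermod -aG microk8s k8sadmin02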
Notice that as we added the users, we added the microk8s group to users k8sadmin01 and k8sadmin02 only, which gives these two users admin access to the microk8s cluster. We did not do that for the k8sdevops01, k8sdevops02, k8sreadonly01, and k8sreadonly02 users; this is by design, since we are going to scope down the access for the DevOps and read-only users.
Next, we’ll need to set up the users for kubernetes, but, first, let us confirm the
user has the proper OS groups set up. You can do the same for all the other users as a
quality check.
On a new terminal, log in as k8sadmin01, either via sudo su - or set a password for
the user and log in as shown in Listing 11-2.
id
k8sadmin01@wks01:~$ id
uid=1001(k8sadmin01) gid=1002(k8sadmin01)
groups=1002(k8sadmin01),1001(microk8s)
k8sadmin01@wks01:~$
Notice that we have added the user to the microk8s group, which allows this user to interact with the cluster, as well as to a group named k8sadmin01, which we will use later in the chapter when granting RBAC access to this group for performing functions inside the cluster.
The next important step is to enable the RBAC capability in the cluster; at least in
microk8s, it isn’t turned on by default, so we have to do this step as an administrator of
the cluster; use a terminal where you are logged in as the cluster admin – shiva, in my
case – as shown in Listing 11-3.
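In microk8s, RBAC is an add-on; the command behind Listing 11-3 is essentially the following (run as the cluster admin):
microk8s enable rbac
microk8s status | grep rbac     # confirm the rbac add-on now shows as enabled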
Roles
As previously described, the Unix user shiva has the admin role on the kubernetes
cluster we set up; being the initial/first user that set up the cluster, the initial admin role
is automatically assigned to the user that instantiated the cluster.
We know and can confirm that by checking the kubectl config get-contexts command
as shown in Listing 11-4.
What other roles are available/predefined in the cluster? We can list all the available
cluster roles using the command shown in Listing 11-5.
Note We have switched admin actions to user k8sadmin01, since this user is equivalent to user shiva by virtue of both being in the microk8s Linux group, which gives them cluster-level access. In the next output, we have switched the user to k8sadmin01, and from here on, admin commands are executed as either shiva, k8sadmin01, or k8sadmin02, since they are all equivalent from the microk8s cluster perspective.
system:discovery 2023-08-29T21:38:00Z
system:monitoring 2023-08-29T21:38:00Z
system:basic-user 2023-08-29T21:38:00Z
system:public-info-viewer 2023-08-29T21:38:00Z
<SNIP>
system:controller:ttl-after-finished-controller 2023-08-29T21:38:00Z
system:controller:root-ca-cert-publisher 2023-08-29T21:38:00Z
view 2023-08-29T21:38:00Z
edit 2023-08-29T21:38:00Z
admin 2023-08-29T21:38:00Z
k8sadmin01@wks01:~$
There are plenty of default roles available. Notice the view role at the bottom of Listing 11-5; this role grants read-only rights to the cluster, and thus our goal is to map the Linux users k8sreadonly01 and k8sreadonly02 to the view role on the cluster. As is typical, this requires setting up both authentication and authorization: authentication involves setting up a token on both the server and the client side, and authorization involves mapping the user to a role that's available in the cluster. This setup process is as follows:
1. Generate an authentication token for the user #authentication.
2. Add the auth token to the static token1 file on the kubernetes api-server #authentication.
3. Set up the Linux user's .kube/config file to use the preceding token #authentication.
4. Update the rolebinding in the cluster #authorization.
5. Test.
1 https://kubernetes.io/docs/reference/access-authn-authz/rbac/ and www.oreilly.com/library/view/docker-and-kubernetes/9781786468390/8363a45f-029e-4c84-ba48-da2a29d50d93.xhtml
One way is to simply let the computer generate this for us, using the command
shown in Listing 11-6.
Another way is for the user to generate this string just by typing in random characters
on the echo string as shown in Listing 11-7.
Regardless of the method you choose to generate the token, it is important to make a
note of this token.
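Listings 11-6 and 11-7 are not reproduced here; two possible ways to produce such a token (these exact commands are assumptions; any sufficiently random base64 string will do):
# Option 1: let the computer generate the random string
head -c 32 /dev/urandom | base64 > mytoken.txt
# Option 2: type random characters yourself and base64-encode them
echo "some-random-characters-you-typed" | base64 > mytoken.txt
cat mytoken.txt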
Step 2: Add the auth token to the static token file on the kubernetes api-server.
First, we need to find the configuration file of the microk8s kubernetes cluster; we
can do that by looking at the launch parameters of the command used to launch the
cluster as shown in Listing 11-9.
Listing 11-9. Finding the location of the configuration directory and file
The static token file for the stock microk8s kubernetes cluster is located at the
directory highlighted in the output in Listing 11-9:
/var/snap/microk8s/5643/credentials/
k8sadmin01@wks01:/var/snap/microk8s/5643/credentials$ pwd
/var/snap/microk8s/5643/credentials
k8sadmin01@wks01:/var/snap/microk8s/5643/credentials$ ll
total 36
drwxrwx--- 2 root microk8s 4096 Aug 28 03:09 ./
drwxr-xr-x 9 root root 4096 Aug 27 23:56 ../
-rw------- 1 root root 65 Aug 28 03:09 callback-token.txt
-rw-rw---- 1 root microk8s 0 Aug 28 03:18 certs-request-tokens.txt
-rw-rw---- 1 root microk8s 1870 Aug 28 00:38 client.config
Edit the known_tokens.csv file and add a line at the bottom of the file as described in Figure 11-1:
No6sy49awQnHRLmrBmn/kt0ucSLvFYNnPpKcFCpxZ6Yw1Ye2QNop9OybP6Iac,k8sreadonly01,k8sreadonly02,view
This means that for the users k8sreadonly01 and k8sreadonly02, we want to grant the clusterrole "view," provided they authenticate with the token in the first field. Add this line at the bottom of the file, save, and quit.
The only way to make the cluster aware of this new token entry is to stop the cluster
and restart it.
So stop the cluster as shown in Listing 11-11.
Stopping and starting microk8s require root privileges, typically granted via sudo;
if you haven't granted sudo for users k8sadmin01 and k8sadmin02, you will have
to switch to another user that has sudo – shiva, in my case. This means user
“shiva” has both OS-level and microk8s cluster–level admin privileges, while users
k8sadmin01 and k8sadmin02 only have admin privileges on the microk8s cluster;
they are regular users from the OS perspective.
microk8s stop
Make sure there are no lingering microk8s processes as shown in Listing 11-12.
# no output since there are no lingering processes; if any are shown, wait
until they shut down on their own or kill the process
Then start the microk8s cluster back and verify RBAC is enabled as shown in
Listing 11-13.
Listing 11-13. Starting the microk8s cluster back and checking status
microk8s start
microk8s status
high-availability: no
datastore master nodes: 192.168.0.81:19001
datastore standby nodes: none
addons:
enabled:
dns # (core) CoreDNS
ha-cluster # (core) Configure high availability on the
current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for
Kubernetes
rbac # (core) Role-Based Access Control for
authorisation
disabled:
cert-manager # (core) Cloud native certificate management
<SNIP>
registry # (core) Private image registry exposed on
localhost:32000
storage # (core) Alias to hostpath-storage add-on,
deprecated
shiva@wks01:~$
id
microk8s kubectl config current-context
k8sreadonly01@wks01:~$ id
uid=1005(k8sreadonly01) gid=1006(k8sreadonly01) groups=1006(k8sreadonly01)
k8sreadonly01@wks01:~$
k8sreadonly01@wks01:~$ microk8s kubectl config current-context
Insufficient permissions to access MicroK8s.
You can either try again with sudo or add the user k8sreadonly01 to the
'microk8s' group:
After this, reload the user groups either via a reboot or by running
'newgrp microk8s'.
k8sreadonly01@wks01:~$
kubectl
k8sreadonly01@wks01:~$ kubectl
Command 'kubectl' not found, but can be installed with:
snap install kubectl
Please ask your administrator.
k8sreadonly01@wks01:~$
Let us not use the snap package; rather, install the client directly from this location, where you can also find detailed instructions:
https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
The command to download and install the kubectl binary is shown in Listing 11-16.
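Listing 11-16 follows the instructions at the preceding URL; a representative sketch (the version resolved by stable.txt will differ over time):
cd ~
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod 755 kubectl
./kubectl version --client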
The kubectl is working well. Notice the ./ in the ./kubectl command since we are
invoking it from this specific installation; let us run our first cluster command get pods
using this kubectl binary as shown in Listing 11-17.
Listing 11-17. Getting pods as a k8sreadonly01 user with the newly installed
kubectl command
Note The error message output by the kubectl binary might be slightly different
based on the version of the kubectl binary as it is constantly changing.
The kubectl doesn’t know about our microk8s cluster yet; this is why it is attempting
to connect to localhost:8080 – the astute reader will recall that our microk8s Kubernetes
API server is running on https://localhost:16443 – we pass this URL to the command
line as an option and attempt to obtain the pod information that way as shown in
Listing 11-18.
That doesn’t seem to work either; although this time we successfully connected to
the Kubernetes API server, we were unable to successfully authenticate to the server.
This is because though we have set up the known_tokens.csv on the server side, on the
client side, where we are running the kubectl command, we have not configured it yet,
and that’s exactly what we will do in the next few steps.
We need a proper kube config file for use by this kubectl; we start by creating the
.kube directory on our home directory, so continuing as user k8sreadonly01, create one
using the mkdir command:
mkdir ~/.kube
preferences: {}
users:
- name: admin
user:
token: UUtu<SNIP>Zz0K
k8sadmin01@wks01:~$
Notice that the microk8s config gave us the information for the admin user; we will
change these fields and customize them for the k8sreadonly01 user in the upcoming steps.
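The /tmp/config file used below was presumably produced by exporting the cluster's kubeconfig as an admin user; a sketch of that step (run as shiva or k8sadmin01):
microk8s config > /tmp/config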
Switch back to the window with Linux user k8sreadonly01; now we can copy this
/tmp/config file as the starting point for our .kube/config file, as user k8sreadonly01, as
shown in Listing 11-20.
Listing 11-20. Copying the kube config file to the target user and directory
cp /tmp/config .kube/config
For fields 1 and 2, the name will change from admin to k8sreadonly01; since the
original file was copied from the admin user, the config file will reference the admin, which
we are changing to k8sreadonly01 since that’s the user that will be using this config file.
And for field 3, the token is the token we generated for this particular user (recall the base64 token we saved in the file mytoken.txt); replace the token value in the original file, which belongs to the admin user, with the token value that we generated for the k8sreadonly01 user. This updated file is given in Listing 11-22 for reference. That's it! The kube config file will now be ready for use by the k8sreadonly01 user.
BEFORE CHANGES (your file will be different):
shiva@wks01:~$
server: https://192.168.0.81:16443
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: k8sreadonly01
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: k8sreadonly01
user:
token: Dw<k8sreadonly01 user's token here>rf
k8sreadonly01@wks01:~$
Another way to look at this is by using Linux's diff -y command; the ONLY three differences are highlighted in Figure 11-2.
Figure 11-2. Screenshot highlighting the fields that were updated for the
k8sreadonly01 user
That's it; the ~/.kube/config will look very similar for both shiva/admin and k8sreadonly01/k8sreadonly01 (Linux user/k8s user); the primary difference is the k8s username and the token they use. It is time to test this file by conducting an operation on the cluster; let us attempt to get the pods as shown in Listing 11-23.
Expert Tip The Linux username and the K8S username do not have to match.
You can see that in the case of shiva/admin where shiva is the Linux username,
while admin is the K8S username; in the case of k8sreadonly01, we have made
the Linux username and the k8s username the same, but this doesn't have to be.
This time, we get a different error because we are able to authenticate to the K8S API
server, but we do not have any permissions yet. Those permissions are going to come
from the next step, which is RoleBinding; execute the clusterrolebinding step as any
admin user such as shiva or k8sadmin01/02, as shown in Listing 11-24.
Step 4: Update rolebinding in the cluster.
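The command in Listing 11-24 is not reproduced here; a sketch of a clusterrolebinding that maps both read-only users to the built-in view role (the binding name is an arbitrary choice):
microk8s kubectl create clusterrolebinding k8sreadonly-view \
  --clusterrole=view \
  --user=k8sreadonly01 --user=k8sreadonly02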
What we have done here is, for users k8sreadonly01 and k8sreadonly02, we have let
the user assume the “view” role at the cluster level; this “view” role allows for objects to
be read but cannot create/update/delete anything.
This completes the user setup; we’ve set up the Linux/OS user, installed the
kubectl binary, configured the kubectl client package to pick up the configuration and
authentication information from the user’s .kube/config file, and finally we have granted
this user read-only rights inside the cluster. It’s time to test.
Step 5: Test.
As usual, as user k8sreadonly01/02, we'll run a read-only operation, namely, getting pod information as shown in Listing 11-25.
Now let us try to perform a destructive operation, which needs a write-type privilege in this cluster; this user does not have it, so the expected result is that the operation will fail, as shown in Listing 11-26.
Your Turn You can set the same up for user k8sreadonly02 and try it out.
Note that we have already granted binding rights to the k8sreadonly02 user when we created the clusterrolebinding (we included both users in it), so you don't need to repeat that step.
Our next goal is to do the same, but this time for the DevOps user. Let us assume that
the k8sdevops01 user belongs to bu01 and the k8sdevops02 user belongs to bu02.
Thus, we'd like the k8sdevops01 user to have full rights within the namespace bu01 and the k8sdevops02 user to have full rights within the namespace bu02, since that is their operating boundary.
Let us do that. All the steps to set up the authentication token remain the same, except the rolebinding; in the case of a namespace-level admin, we will use the "edit" cluster role, which gives these users create/read/update/delete rights within a given namespace.
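A sketch of what that namespace-scoped setup might look like for the k8sdevops01 user (the namespace name comes from the text; the binding name is an arbitrary choice, and k8sdevops02/bu02 is analogous):
microk8s kubectl create namespace bu01
microk8s kubectl create rolebinding k8sdevops01-edit \
  --clusterrole=edit \
  --user=k8sdevops01 \
  --namespace=bu01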
1. Create the authentication token files for the k8sdevops01 and k8sdevops02 users as shown in Listing 11-27.
Listing 11-27. Creating token files for k8sdevops01 and k8sdevops02 users
2. Add the token to the static token file on the kubernetes api-server as shown in Listing 11-28.
Note Ensure you use the correct token values you generated; here, to maintain
security, we have truncated all the token values since they are a sensitive piece of
information and should be treated as such.
3. Now, as user shiva, who has sudo rights, we can stop and start the
microk8s Kubernetes cluster for the new users and tokens to take
effect as shown in Listing 11-29.
microk8s stop; sleep 30; microk8s start; sleep 30; microk8s status
4. Install the kubectl binary and set up the ~/.kube/config file for each DevOps user, just as we did for the read-only user, then verify as shown in Listing 11-31.
Listing 11-31. Granting execute permissions for kubectl and testing progress
./kubectl version
./kubectl get pods
5. Update rolebinding.
6. Test.
Listing 11-34. Access denied on the namespace the user does not have access to
We also launched a container within the "bu01" namespace as a test, and that was successful; we then attempted to list the containers/pods in bu02, which was promptly denied.
All good so far.
Your Task Set up the k8sdevops02 user such that they only have "edit" access
to the bu02 namespace.
We have accomplished what we set out to do; we have created three classes of K8S
users, namely, k8s-admins, k8s-devops, and k8s-readonly users, and granted them
cluster rights based on the concept of least privilege required to accomplish their tasks in
the Kubernetes cluster. This is not the end though; one can also create custom roles and
use them in the cluster; that’s an advanced topic for another day.
Summary
In this chapter, we learned about the Role-Based Access Control capabilities that
microk8s offers and that of Kubernetes in general. It is important to understand the
various built-in roles that kubernetes offers so that as admins you can utilize them as you
delegate various tasks to your peers and other user groups of the cluster in a safe fashion.
Now that we have mastered the basics of kubernetes via the microk8s implementation, it's time for us to start looking at the public cloud. One of the key ingredients for running pods is container images.
In the next chapter, we will explore some of the Container Image Repository software and/or services available to us, as container image management is an important part of maintaining and managing Kubernetes-based applications.
CHAPTER 12
Artifact Repository
and Container Registry
In this chapter, we will look at some of the artifact repository and Container Registry software and/or services available to us. This is important because one of the key ingredients needed to run workloads in a kubernetes cluster is container images. We need a central place to store and manage those images.
As we mentioned earlier, the value-added capabilities are beyond the scope of this book; we'll only learn about the base use case of storing and retrieving container images from these services.
Docker Hub
Let us start with Docker Hub.
Prerequisites
First, let us log in to Docker from the VM where our container image resides, as shown in Listing 12-1. Recall that in an earlier chapter, we created a Docker Hub login, which allows us to push the image to any repo where we want to store it.
docker login
Login Succeeded
shiva@wks01:~$
The first thing we need to do is to create a new tag to match the repo name we have
on our Docker Hub account, as shown in Figure 12-1.
For me, it’s gitshiva; use your Docker Hub repo name. We can then retag the image to
match the Docker Hub repo name to maintain consistency as shown in Listing 12-3.
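Listing 12-3 is essentially a docker tag; a sketch using the names from the text (local/mynginx is the locally built image, gitshiva is the author's Docker Hub account, yours will differ):
docker tag local/mynginx gitshiva/mynginx
docker images | grep mynginx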
Here, we can see that gitshiva/mynginx is added as a tag to the same Image ID; notice that the Image IDs for gitshiva/mynginx and local/mynginx are the same.
Now we are ready to upload. Let us upload using the docker push command as
shown in Listing 12-5.
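A sketch of the push in Listing 12-5, again using the author's repo name:
docker push gitshiva/mynginx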
You can see that the push is successful and docker has given us a SHA256 signature
for the uploaded image.
We confirm this on the Docker Hub portal also as shown in Figure 12-2.
Notice that the image was only uploaded 15 mins ago and it is public.
Note Since this image is public, the image will appear in Docker Hub search
results, and users of Docker Hub will be able to download the container; thus,
exercise caution.
Also, notice that the image status on Docker Hub says “Not Scanned”; this is a paid
feature of Docker Hub where it will scan the container for security vulnerabilities and
display them. As mentioned before, we will not be exploring this at this time.
Now this image is ready for consumption by any kubernetes cluster; with proper
references, the cluster can download and use this image as part of its workload.
Next, we will do the same on AWS ECR.
An AWS account
Please note that running or leaving running services or workloads in a public cloud
may incur charges. We will try to stay within the free-tier limits; your mileage
may vary.
Search for ECR in the top services search bar as shown in Figure 12-3.
Once you are at the Elastic Container Registry landing page, create a Repo by clicking
the “Get Started” button as shown in Figure 12-4.
Once the Create repository screen shows up, we need to fill in the details as shown in
Figure 12-5.
Enter your repo name; I’ll use “mynginx,” maintaining consistency with my previous
repo name.
Leave all other options to defaults. Click “Create repository” at the bottom right of
the screen as shown in Figure 12-6.
Figure 12-7. The Repositories page shows the repo we just created
As you can see, the repo is created and we can see the URI.
Now we need to authenticate to the AWS Service first, before we can upload to
the repo.
For that, we need the AWS CLI v2 installed on our Linux VM. Let us do that. Detailed
instructions for various operating systems are available at https://docs.aws.amazon.
com/cli/latest/userguide/getting-started-install.html.
We will use the Linux install instructions, since Linux has been our workstation
OS. Install the AWS CLI v2 as a user with sudo rights – “shiva” in our case – as shown
in Listing 12-6. Figure 12-8 shows the awscli binary package being downloaded,
Figure 12-9 shows the package being unzipped and Figure 12-10 shows the awscli
package being installed.
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o
"awscliv2.zip"
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install
Let us confirm the AWS CLI is installed properly by running a simple AWS CLI
command as shown in Figure 12-11.
/usr/local/bin/aws --version
We now need to go back to the AWS Console and obtain an access key ID and a
secret access key so that they can be used by the AWS CLI to authenticate against the
AWS Services.
Log in to your AWS account as your IAM user, in our case that’s shiva, then go to the
following URL (you can cut and paste this URL):
https://us-east-1.console.aws.amazon.com/iam/home#/security_credentials
or visit your profile and then choose “Security credentials” as shown in Figure 12-12.
Toward the middle of the page is the section “Access keys.” Click “Create access key”
to create your access key as shown in Figure 12-13.
A three-step wizard appears; on Step 1, choose the options as shown in Figure 12-14.
Now, let’s go to the terminal and configure the AWS CLI to use this access key, using
the aws configure command as shown in Figure 12-16.
Figure 12-16. Configuring the AWS CLI to use the access key
You can validate it worked by issuing the sts get-caller-identity as shown in Listing 12-7;
if everything is set up and working correctly, it should return the AWS account number,
along with the ARN of the user you are logged in as, indicating all is well.
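Both the configuration step (Figure 12-16) and the validation call (Listing 12-7) boil down to the following; aws configure prompts for the access key ID, secret access key, default region, and output format:
aws configure
aws sts get-caller-identity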
Listing 12-7. Confirming the AWS CLI is fully configured and working
We can validate that the AWS CLI was able to call the AWS services, which returned who we are, telling us we are now fully authenticated. If we had not passed the correct set of access keys, we would have received an error message instead.
Good, now we can use the AWS CLI to push our container image to AWS Elastic
Container Registry service.
First, we log in to the ECR service by executing the command shown in Listing 12-8.
Remember to update the URI of your public repository:
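One common form of this login for a private ECR registry is shown below (a sketch with placeholders; if you created a public ECR repository, the ecr-public variant of the command is used instead):
aws ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com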
Provided the login is successful, you should receive a Login Succeeded message as
shown in Figure 12-17.
Now we need to tag the container image, then push it to AWS ECR as shown in
Listing 12-9. Figure 12-18 shows the container image being pushed to AWS ECR.
Listing 12-9. Tagging the container image and pushing to AWS ECR
Figure 12-18. Tagging the container image and pushing to AWS ECR succeeded
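A sketch of the tag-and-push pair; replace the placeholder with the repository URI shown on your ECR Repositories page:
docker tag local/mynginx <aws-account-id>.dkr.ecr.<region>.amazonaws.com/mynginx:latest
docker push <aws-account-id>.dkr.ecr.<region>.amazonaws.com/mynginx:latest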
Now, we can verify from within the AWS Elastic Container Registry page as shown in
Figure 12-19.
Figure 12-19. Uploaded container image visible in AWS ECR ready for use
Expert Note The Docker Image ID is NOT the same as the SHA256 the AWS
Console Digest is showing.
You can verify that the correct image got uploaded by comparing the SHA256
signatures by copying the SHA256 value from the AWS ECR page and grepping it on the
local image as shown in Listing 12-10.
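One way to do that comparison locally (an assumed form; Listing 12-10 may differ):
docker images --digests | grep <sha256-digest-copied-from-ecr>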
https://jfrog.com/container-registry/cloud-registration/
Click Proceed to go to the next screen, “Set up your JFrog Platform Environment,” as
shown in Figure 12-21.
Select AWS for now, and choose a repo prefix. I’m using gitshiva in my case; you can
choose your own.
An activation email is sent; activate it. Then we can start using the repo.
Log in to the JFrog web console using your login credentials, then choose “Docker”
as the container repo as shown in Figure 12-22.
Click “Set Me Up”; the setup progress screen appears as shown in Figure 12-23.
Once the setup is completed, the last step is to set up your password; set it up as
shown in Figure 12-24.
Once you’ve set up the password and logged in, the repo creation process begins as
shown in Figure 12-25.
Choose “I’ll Do It Later,” then choose Repositories on the left navigation bar as
shown in Figure 12-26.
Click "Set Up Client" on the right side of the screen as shown in Figure 12-26, which pops up the "What Would You Like To Connect To?" screen as shown in Figure 12-27. Click "Docker Client"; instructions to pull and push a hello-world container image are displayed on the screen, as shown in Listing 12-11; follow them to test the setup.
Listing 12-11. Logging in to the JFrog Artifact Repository and testing with
hello-world
Like before, we first tag it to something that we can easily identify as shown in
Listing 12-13.
Then we push the container image to the remote repo with the command shown in
Listing 12-14, as we have done before.
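A sketch of the pair, assuming a JFrog Cloud registry host of the form <prefix>.jfrog.io and a Docker repository created during setup (your host and repository names will differ):
docker tag local/mynginx gitshiva.jfrog.io/<docker-repo>/mynginx
docker push gitshiva.jfrog.io/<docker-repo>/mynginx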
We can confirm the push action was successful by viewing the container image in the
JFrog repository as shown in Figure 12-28.
This confirms the correct container image has been uploaded to JFrog’s Container
Registry service and that container image is now ready for use by any kubernetes cluster.
Summary
In this chapter, we learned about three different Container Repository solutions and how
to tag our images and upload them to these various Container Repository solutions for use
by any kubernetes cluster. There are plenty of Container Repository solutions available in
the market; these are just examples of some popular solutions and how to utilize them.
Now that we have mastered the basics of kubernetes via the microk8s implementation, and learned about containers, Container Registries, RBAC, and node management, to name a few concepts, we are ready to take it to the next level.
In the next chapter, we will begin our journey into the AWS Elastic Kubernetes Service, which is a popular solution as it simplifies much of the management overhead we experienced in the past chapters, such as node management and workload management.
CHAPTER 13
Elastic Kubernetes
Service from AWS
AWS Elastic Kubernetes Service has been generally available since 2018.1 It is one of the major implementations of Kubernetes-as-a-Service on a public cloud. It has since grown and offers tailored solutions for running container-based workloads in AWS, as we will see in this chapter.
This service from AWS takes away the management plane that systems engineers have to maintain. Much of the work that we went through during the initial chapters, setting up Kubernetes clusters, nodes, storage, networking, etc., is either provided as a service or has plug-ins to make management much easier, so we can focus more on the application workloads we have to deploy and on their availability, security, etc., without having to worry about the minutiae of running the cluster.
The goal of this chapter is to set up a fully configured AWS EKS cluster on a given
AWS account. Let us assume you have a brand-new AWS account with root (email)
credentials.
Getting Started
In order for us to get EKS to a production-grade ready state, we have to configure a few things first, the primary one being identity management. Recall that kubernetes provides an authorization service (RBAC), but little or no identity service. The astute reader might recall that in the previous chapter on microk8s, we created OS users for various roles and tied these users back to microk8s using known_tokens.csv; this is not sufficient for a production-grade environment, hence the importance of setting up the identity side of things.
1 https://aws.amazon.com/blogs/aws/amazon-eks-now-generally-available/
This chapter assumes you have an AWS account with admin access via AWS root
credentials or with an admin role to your AWS user. Specifically, we’ll be covering the
following topics:
Note AWS is constantly changing its web console screens; thus, the screenshots
shown here might slightly vary depending on your location, the browser being
used, and the version of the web console in use, for example, beta screens. Though
the screens might look slightly different, the functionality remains the same. One
has to just look around to find the option/selection that is not found in the usual
location.
After logging in, launch the IAM Service; the landing page appears as shown in
Figure 13-2.
On the landing page, in the search bar, type in IAM; the IAM Service appears in the
search results as shown in Figure 13-3.
Click the IAM Service from the results section, which takes you to the IAM Service
landing page as shown in Figure 13-4.
On this Users page, click the “Create user” button, which kicks off a three-step
wizard, starting with Step 1, “Specify user details,” as shown in Figure 13-5.
Enter a name – shiva, in my case – then select “I want to create an IAM user”;
leave everything else to defaults and click “Next,” which takes you to the Step 2, “Set
permissions,” screen as shown in Figure 13-6.
Select “Attach policies directly,” type in “PowerUserAccess” on the search bar, and
select the check mark on the search results which is the PowerUserAccess policy, then
click “Next” (not visible in the figure, but located on the bottom right of the web page),
which takes you to Step 3, “Review and create,” as shown in Figure 13-7.
Click “Create user” which creates the user and takes you to the final screen, Step 4,
“Retrieve password,” as shown in Figure 13-8.
Make a note of your login information; optionally, you can also download the .csv
file that contains this login information and/or email it to yourself. Click “Return to users
list,” which takes you to the IAM ➤ Users screen as shown in Figure 13-9.
We now need to generate API keys for this user “shiva” so that the user can
programmatically access the AWS APIs, which we’ll need for later. To do that, click the
user “shiva,” which takes you to the user details page as shown in Figure 13-10.
Click the “Security credentials” tab, then the “Create access key” button at the
bottom, which starts a three-step wizard starting with Step 1, “Access key best practices &
alternatives,” as shown in Figure 13-11.
Note The intention of this book is not to teach AWS, but to keep the focus on
Kubernetes and the AWS EKS services – thus, we have tried to keep the AWS
configuration to a minimum; advanced topics such as configuring the AWS CLI v2
via IAM Identity Center are out of scope for this book. Please treat ALL security
credentials as very sensitive information and follow best practices to protect your
information.
Select “Command Line Interface (CLI),” and check the “Confirmation” check
box, then click “Next” which takes you to Step 2, “Set description tag,” as shown in
Figure 13-12.
The tag value is optional; click “Create access key” which takes you to the “Retrieve
access keys” screen as shown in Figure 13-13.
Copy your access key and secret access key and save it in a safe location.
Alternatively, you can also download the .csv file for future reference; after saving this
information, click “Done.”
I’ve selected the us-east-2 region, and you can see the IAM user is named shiva. You
can use any regions that you’d like to use.
Type in eks in the search bar seen in Figure 13-15 to get the output as shown in
Figure 13-16.
Select the “Elastic Kubernetes Service,” a.k.a. EKS in AWS parlance. You can also click
the “star” next to the name to have this service bookmarked on your home screen. Once
you click this link, the EKS landing page shows up as shown in Figure 13-17.
Choose “Create”; the “Create EKS cluster” ➤ Configure cluster screen pops up.
We have chosen the name myeks01 for the cluster and left the default (1.27)
Kubernetes version for the control plane, then for the “Cluster service role” there are no
roles found, as you can see in Figure 13-19.
Why is this needed? In order to automate cluster operations without a human operator at the console all the time, there has to be a role that can manage the cluster on your behalf, for example, to scale up/down and add/delete nodes as they become healthy/unhealthy. This is the role that we are saying is able to perform those functions on your behalf while you are not there.
Recall that AWS is made up of a bunch of API endpoints; this role provides the
authentication and authorization required to make use of those API endpoints that
provide EKS with various services, for example, node management.
1. Launch the IAM Service in a new browser tab as shown in Figure 13-20.
Figure 13-20. Launching IAM Service (you can right-click and open in a new tab)
2. On the IAM Service landing page, on the left-hand-side navigation bar, choose Roles as shown in Figure 13-21.
3. Then choose the “Create role” button on the far right as shown in Step 2.
As we continue working in AWS, we’ll be using the same breadcrumb format to
indicate where we need to get to.
The Create role wizard starts as shown in Figure 13-22. Choose “AWS service” ➤ Use
cases for other AWS services; click the drop-down, search/type EKS, and select it.
Once we have selected the trusted entity, the next step is to select the use case; in this
case, this will be EKS, so we can type eks as shown in Figure 13-23.
There are various options available for the EKS; since we plan to use this role for
cluster operations, we choose EKS – Cluster in this screen as shown in Figure 13-24.
Figure 13-24. Choosing the correct use case for eks for the role creation
Select “EKS – Cluster/Allows access to other AWS service resources that are required
to operate clusters managed by EKS.”
Then click Next. The next screen is the permissions needed by this role; we can add
an AWS managed policy named AmazonEKSClusterPolicy as shown in Figure 13-25.
Type eksClusterRole for the Role name field and click the “Create role” button. You
can choose any name for the role here; it just needs to be user-friendly so that it makes
sense for you when you are reviewing this setup in the future.
All looks good based on the options we selected, so we click “Create role”; the role is
created, as we can see in the IAM ➤ Roles section as shown in Figure 13-27.
Figure 13-27. The newly created IAM role for EKS is visible
Now, we can go to the other browser tab, where we were in the middle of creating the eks cluster, and click the refresh button next to the "Select role" drop-down; we can now see the eksClusterRole role. Select it, leave the other settings at their defaults, and click Next as shown in Figure 13-28.
After we click “Next”, the next screen that shows up is the Networking screen where
we have to select the VPC and subnet this cluster should be set up with. We have chosen
to keep things simple by choosing the three public subnets available to us as shown in
Figure 13-29.
Figure 13-29. Selecting the VPC, subnets, and security group for the cluster
We can leave the VPC to its defaults and select the default security group as shown
in Figure 13-29. You can also create dedicated VPCs and security groups if you like. Here,
we are leaving it to defaults.
At the bottom half of the screen is the “Cluster endpoint access” selection; choose
“Public,” as shown in Figure 13-30.
The Cluster endpoint access defines how you will access the cluster management plane. Recall that in our VM, the cluster was running on localhost on port 16443, so we were only able to access it locally. Here, unless you have the default VPC extended to your local LAN, we should choose Public.
Let us choose to log the three options "API server," "Audit," and "Authenticator" as shown in Figure 13-31 and click "Next," which takes us to Step 4, "Select add-ons," as shown in Figure 13-32.
Leave the three add-ons installed by default and click “Next,” which takes us to
Step 5, “Configure selected add-ons settings,” as shown in Figure 13-33.
Leave everything to default; there is no need to change it unless we know about any
version compatibility issues. Right now, our cluster is fresh, and we do not have any
version compatibility issues; thus, we can leave everything to defaults and click “Next,”
which takes us to the final step of the wizard, which is Step 6, “Review and create,” as
shown in Figure 13-34.
The Review screen appears; confirm everything looks good and click “Create.”
This kicks off the AWS workflow that sets up the EKS cluster, which we can see in
Figure 13-35.
All we have created now is the management plane. There is no underlying compute
to support workloads yet, which we will do shortly.
Notice that the management plane version is 1.27, and the blue ribbon at the top
reminds us that only the management plane is running and no compute is available.
Recall that in microk8s, during the default install, the system created both the
management plane and the first worker node automatically; here, in AWS, they are two
different steps.
From the AWS home page, select IAM ➤ User, then click user “shiva”; the user details
show up, as shown in Figure 13-37.
Choose the Permissions tab; in the middle of the screen appears the Permissions
policies section; click “Add permissions” ➤ “Create inline policy” as shown in
Figure 13-38.
Select Create inline policy, after which the visual editor appears as shown in
Figure 13-39.
In the field “Select a service,” type IAM as shown in Figure 13-40. IAM is shown in the
search results; click it.
Once IAM is selected, Actions appears; expand Write and select “PassRole” as shown
in Figure 13-41.
Then in the Resources section, select “All resources” as shown in Figure 13-42.
Then click “Next”; the Review policy screen appears as shown in Figure 13-43.
Name it “eksPassRoleAccess” and click the “Create policy” button at the bottom right
as shown in Figure 13-44.
It takes five to ten minutes for the policy to take effect; now is a good time to get
some coffee or a drink of your choice.
We have verified we are logged in using the sts get-caller-identity command and then
listed the clusters, which shows our cluster.
Let us describe the cluster as shown in Listing 13-2.
Listing 13-2. Describing the EKS cluster via the AWS CLI
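A sketch of the CLI calls used in this section (the cluster name comes from earlier in the chapter; the tabular output that follows suggests a non-JSON output format was used):
aws sts get-caller-identity
aws eks list-clusters
aws eks describe-cluster --name myeks01 --output text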
RESOURCESVPCCONFIG sg-0007be804cf34a8d2 False True
vpc-0170477d07283a976
PUBLICACCESSCIDRS 0.0.0.0/0
SUBNETIDS subnet-046606aaf23a20416
SUBNETIDS subnet-068ac163d2cbd074b
SUBNETIDS subnet-0f03119b46d02a63a
shiva@wks01:~$
Back on the AWS Console, we can also see similar details as shown in Figure 13-45.
The cluster information is populated now, showing several details about the cluster
itself. Next, we can select and explore the various tabs showing more details about this
cluster, starting with the “Resources” tab as shown in Figure 13-46.
The Resources tab shows all the resources inside this cluster; notice that the two pods shown are coredns pods, which come from the default add-on selection during the cluster creation process, which is the default for AWS (recall that our microk8s cluster also ran CoreDNS, via its dns add-on).
However, it is important to note, as we said earlier, that since there is no underlying compute yet, these pods are not running; they are scheduled and awaiting resources. This is not explicitly shown in this picture; if you click the pod details, those details become available.
On the left navigation pane, choose Cluster ➤ Namespaces to see all the namespaces
available in this kubernetes cluster as shown in Figure 13-47.
You will see a familiar output in that the namespaces closely match what we had
in our microk8s instance. Some namespaces such as the default and kube-system are
special in that they are required for the cluster to function properly and cannot be
deleted.
Of course, later on we will add/delete more namespaces just like how we did in our
microk8s instance.
Next, we can go to the “Compute” tab as shown in Figure 13-48.
As mentioned before, the Compute tab shows the worker nodes; there aren’t any yet.
AWS is also reminding us of the same on the blue info bar at the top.
Nodes: The details of the node itself, when added and available.
The difference between regular nodes and Fargate nodes is that, at a basic level, regular nodes are EC2 instances that the user/administrator needs to manage themselves in terms of patching, etc., whereas Fargate nodes are AWS managed, in that regular updates and management of the underlying OS are taken care of by AWS, thus placing a much lighter load on the cluster administrators.
See https://docs.aws.amazon.com/eks/latest/userguide/
managed-node-groups.html for all the options available as shown in Figure 13-49.
We noted the VPC and the subnet the cluster is in, as well as the API server endpoint
access which is noted as “public”; this comes from the choice we made earlier during
cluster creation.
Next, we move on to the “Add-ons” tab as shown in Figure 13-51.
We notice that coredns is in status "Degraded." Why is that? Click coredns, which takes us to its detail screen as shown in Figure 13-52.
Nothing specific here; we will configure an OIDC provider at a later stage. We can
choose the next tab in line, which is the “Logging” tab, as shown in Figure 13-54.
The API server and Audit logging are enabled; again, this comes from the choices we
selected when creating the cluster. Next, we can select the “Update History” tab as shown
in Figure 13-55.
No updates have been made to the cluster at this time. This is the place to look for
information about updates made to the cluster’s management plane. Over time as we
apply updates to our cluster, that information is captured and displayed in this tab; since
this is a freshly minted cluster, there aren’t any updates applied yet.
The Step 1, “Configure node group,” screen pops up as shown in Figure 13-57.
Give it a name, for example, myEKSNodeGroup01, indicating this is our first node
group. We can use this with any cluster; since we only have one cluster for now, we can
assume we’ll use this node group with the cluster myeks01.
Click “Next.” The Step 2, “Add permissions,” screen shows up; type EKS
in the search bar, press enter, then select the AmazonEKS_CNI_Policy and
AmazonEKSWorkerNodePolicy as shown in Figure 13-59.
While we are at it, click “Clear filter,” then type “container”; from the results, also
select AmazonEC2ContainerRegistryReadOnly, as shown in Figure 13-60, then click
Next. In summary, we have selected three policies to go with this role.
Click Next, which takes us to the final step of this wizard as shown in Figure 13-61.
In the middle of the review screen, we can see the three policies we selected as a way
of confirmation, as shown in Figure 13-62.
Click “Create Role” at the bottom right of the screen (not visible in Figure 13-61
or 13-62), which creates the role. The AWS Console confirms that action as shown in
Figure 13-63.
On the other tab where the node group creation was in progress, click the refresh
button; the role should show up there as shown in Figure 13-64.
Click the “Next” button at the bottom right of the screen (not visible in Figure 13-64),
which takes you to the compute and scaling configuration screen as shown in
Figure 13-65. Select t3a.medium or an instance type of your choice; you can leave others
to defaults.
On the same page, in the middle, is the Node group scaling configuration; here you set the Desired, Minimum, and Maximum size for the node group. In our case, we have chosen two for the desired size of the node group, one as the minimum size since we want at least one worker node to be active at any given time, and two as the maximum size of the node group, as shown in Figure 13-66.
Toward the bottom of the screen is the Node group update configuration section as
shown in Figure 13-67.
Then click “Next,” which takes you to the final Review and create step, as shown in
Figure 13-69.
Validate all your selections in this screen and click “Create” shown at the bottom
right of the screen, also shown in Figure 13-70.
AWS confirms the node group creation status as shown in Figure 13-71.
Once the node group is created, the compute will be available to the cluster, and
Kubernetes will schedule the system pods in this node group. Recall earlier in the
chapter we noticed that the coredns pod, for example, was not running due to lack of
compute; this node group satisfies that compute requirement. If you are curious, you can
check out the pod details and confirm they are indeed scheduled and running!
Congratulations, your first EKS cluster, myeks01, is up; it has both the management
plane provided by EKS itself and the compute plane serviced by the node group we
created.
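If you would like to double-check this from the command line once the AWS CLI is configured (covered in the next chapter), a minimal sketch using the AWS CLI, assuming the cluster name myeks01 and the region us-east-2 used in this chapter, would be:

# Confirm the management plane is ACTIVE
aws eks describe-cluster --name myeks01 --region us-east-2 --query "cluster.status" --output text
# List the node groups attached to the cluster; myEKSNodeGroup01 should appear
aws eks list-nodegroups --cluster-name myeks01 --region us-east-2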
Summary
In this chapter, we created IAM groups and users and attached the policies required to set up the EKS cluster to those IAM groups. We then launched the EKS cluster and explored the various details of the cluster, how it compares with the microk8s Kubernetes cluster, the similarities and the differences, as well as how to add compute capacity to the cluster. This is where Kubernetes implementations on the public cloud excel; the compute capacity can be scaled up or down on demand, always optimizing for cost and the needs of the business. As a bonus, technologies such as Fargate make it easy so that we do not have to do any node management such as patching and hardening; AWS takes care of it.
In the next chapter, we will launch some pods and workloads onto our newly minted
Kubernetes cluster and learn how such workloads behave in a public cloud setup and
how to configure and manage them to meet our business needs.
CHAPTER 14
Operating the EKS Cluster
You can set a password for this user or place your ssh-key in the ~/.ssh/authorized_keys file to enable login as this user. The author chooses to place his ssh-key in the new user's ~/.ssh/authorized_keys to log in. Once you have logged in as this new user and verified access, we can proceed to the next step.
Listing 14-1. Downloading, installing, and verifying the AWS CLI package
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o
"awscliv2.zip"
sudo apt install unzip -y # Optional, if unzip needs to be installed
unzip awscliv2.zip
sudo ./aws/install
aws --version
The successful execution of the command aws --version indicates that the AWS CLI is installed and working as expected.
Note The AWS CLI package is continuously updated; thus, your version might be
newer than the one shown in our screenshot and/or text.
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.27.4/2023-08-16/bin/linux/amd64/kubectl
mkdir bin
chmod 755 kubectl; mv kubectl bin/
export PATH=~/bin:$PATH
Now that the kubectl binary is downloaded and made executable and the location
of this binary is added to the PATH variable, the next step is to validate that the binary is
working as expected, which can be done with the instruction shown in Listing 14-3.
which kubectl
kubectl version --short
Note kubectl needs to be configured to talk to the remote AWS-hosted k8s API endpoints. This hasn't been done yet; that's why the output complains that the connection to the server was refused – ignore this for now. We will add the .kube/config file later in this chapter.
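eksctl can be installed in a similar fashion; a minimal sketch of one common way to install it on Linux, assuming the official release tarball location, is shown below:

# Download the latest eksctl release tarball and place the binary in ~/bin (URL assumed)
curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" -o eksctl.tar.gz
tar -xzf eksctl.tar.gz -C ~/bin    # ~/bin is already on our PATH from the kubectl step
eksctl version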
The installation of the AWS CLI, kubectl, and eksctl is now complete at this point.
Listing 14-5. Updating the kubeconfig file via the AWS CLI
aws configure
aws sts get-caller-identity
aws eks update-kubeconfig --region us-east-2 --name myeks01
kubectl version
Now that we have updated the kubeconfig file, let’s verify by running the code in
Listing 14-6.
The output shows valid information about our cluster; thus, we can conclude that
the kubectl is set up and working as intended.
We can now start executing the usual kubectl commands. For example, we can verify that the list of pods we saw deployed on the AWS Console is the same information returned via the kubectl command line, which we can check with the command in Listing 14-7.
This information is consistent with what we saw on the AWS Console EKS service page, indicating that we can now access the cluster information both via the AWS Web Console and via the command-line tool kubectl, which allows us to programmatically manage our cluster.
We can check out the node information from the command line too via the kubectl
get nodes command as shown in Listing 14-8. Note the output contains very similar
information about the nodes as we saw in our microk8s node setup; it gives information
about the node's memory pressure, number of CPUs, and available memory for
workloads, among others.
Also, note that the node was set up for us by AWS; we did not have to find a
physical or virtual machine, install the OS, connect to the cluster, and such. All those
tasks were taken care of by the AWS EKS service; all we had to do was to define the
fargate configuration, and the rest is automatic. This is the power of the AWS EKS
managed service.
Now that we have everything set up the way we need it, I will walk you through
deploying your first pod.
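The deployment command follows the same pattern we have used before; a minimal sketch, assuming a hypothetical image name for the prime-number application used in earlier chapters, might look like this:

# Create a deployment named primeornot from a container image (image name is a placeholder)
kubectl create deployment primeornot --image=<your-registry>/<your-image>:<tag>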
It's that simple. Now we can check the status of our deployment, the usual way by
using the get deployment command as shown in Listing 14-10.
The deployment is successful, and we can see the pod is in READY status. We can get
the pod information just to be sure as shown in Listing 14-11.
Now, we can get some more details about the application that's running by obtaining
the pod logs, just as we did in our microk8s cluster; the commands are no different, as
shown in Listing 14-12.
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.2.6.RELEASE)
You might have to CTRL+C out of the kubectl logs -f command since we asked it to
tail the logs.
We can see from the highlighted output that the application running inside the pod
started successfully and that the process, tomcat, is running/listening on port tcp/8080.
As before, we need to expose this port as a K8S service so that users external to the cluster can access this application; we can do this by executing Listing 14-13.
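A minimal sketch of exposing the deployment as a NodePort service, assuming the deployment name primeornot and the container port 8080 noted above:

# Expose the deployment on a NodePort; Kubernetes picks a port in the 30000-32767 range
kubectl expose deployment primeornot --type=NodePort --port=8080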
The preceding output shows that the service is created. We can obtain the service
port by getting details about the service as shown in Listing 14-14.
From the output, we can see that our NodePort is mapped to the 32187 port on
the Node, meaning we should be able to access this port via the Node with the format
http://<NODE External IP>:<NodePort for Application>, which so far is http://<NODE
External IP>:32187. We still need to know the node's external IP. What is our node's
external IP address?
We can find out the node's external IP with the command in Listing 14-15.
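A quick way to see the nodes along with their external IPs (a sketch, not necessarily the exact command in the listing):

# The EXTERNAL-IP column shows each node's public IP
kubectl get nodes -o wide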
Here it is. Thus, the full URL for our application is http://3.129.14.199:32187.
Figure 14-1. Security group information shown in the EKS cluster detail page
We'll edit the "Cluster security group" and add an ingress rule there, by clicking the
hyperlink under the "Cluster security group," which opens in a new tab, as shown in
Figure 14-2.
By default, the Cluster Security Group does not allow any traffic from external IP addresses, so we need to edit the Inbound rules and allow traffic from any Internet source (0.0.0.0/0) to target port :32187, which is where our application is running.
So, select the "Inbound rules" tab, then click "Edit inbound rules" as shown in Figure 14-3 to add the ingress rule.
Click "Add rule," then select or enter the following values for the new rule:
Type: Custom TCP
Port range: 32187 [we determined this to be our application node port previously]
Source: Anywhere-IPv4, 0.0.0.0/0 [for access from anywhere in the world, you can
also choose to restrict this if you wish]
Description (Optional): PrimeorNot-App
Then click "Save rules." AWS confirms the rule is updated as shown in Figure 14-4.
Now, we can try to access the URL http://3.129.14.199:32187; recall that the
full application URL was determined earlier by the formula http://<NODE External
IP>:<NodePort for Application>.
And voilà! Just like that, our application is out on the Internet, served via an AWS EKS
cluster. We can now access our application via any browser as shown in Figure 14-5.
Now that we have successfully launched a pod running our application on EKS, we
can branch into other capabilities of the Kubernetes cluster on EKS. One of the main
responsibilities of a systems engineer/administrator is to manage access to the EKS
cluster. We will explore this in the next section.
Then add the following six users. Do not attach any permissions directly to the users; we'll grant them via the groups. Do grant API keys and console access for all users.
• k8sadmin01 # our k8s admin user 01, add to IAM Group grp-
k8s-admins
• k8sdevops01 # our DevOps/SRE team user 01, add to IAM Group grp-
k8s-devops
The reason we are adding the 01 users to the groups is to show how the cluster responds when the user brings appropriate privileges to a cluster action; since we will be granting privileges to the group, all 01 users will be able to perform actions that their privileges allow for, while the 02 users, despite their names, will NOT have privileges, since they are not yet part of the privilege-granting group. We'll execute the same commands the 01 users execute and observe the error conditions as a learning experience. Later on, we can add the 02 users to their appropriate groups.
In the Specify permissions screen, under "Select a service," type EKS. EKS shows up
in the results; select EKS and then click "Next" as shown in Figure 14-7.
Figure 14-7. Selecting All EKS options for the proposed EKS admins
Figure 14-8. Selecting "All resources" for the proposed EKS admins
Then click the "Review policy" button; the Review Policy screen shows up next.
In the Review Policy screen, select the following option as shown in Figure 14-9:
Name: k8sadmins
Click "Create policy."
After clicking "Create Policy," the policy is created, and the permissions will be
visible in the group description, which we can validate under the "Permissions" tab in
the group Summary screen as shown in Figure 14-10.
Give it about ten minutes or so for the policy to take effect; now would be a good
time to refill the coffee cup.
Now we will need to assume the persona of the k8sadmin01 user; for this, you can set up a new native user in your operating system to keep things clean, or you can simulate it by having multiple profiles in your ~/.aws/credentials file. In this book, we'll choose to set up a native OS user to match k8sadmin01 – it keeps things clean.
As a k8sadmin01 OS user, set up the AWS CLI, kubectl, and eksctl as needed, as well
as configuring the aws cli.
Do the same for k8sadmin02.
Here, we have set it up successfully, meaning the OS user k8sadmin01 is set up to
use the IAM user k8sadmin01 credentials, thus keeping the identity logically connected.
The OS user k8sadmin01 is set up with AWS CLI credentials from the AWS IAM user
k8sadmin01 and verified as shown in Listing 14-16.
Now this k8sadmin IAM user has no privileges on the cluster yet, because we have
not granted this user/group anything inside the K8S cluster we have set up, which we
will do in the next section.
The output indicates that the command is successful, as shown in the highlighted
entry in Listing 14-17.
As indicated before, the kubectl command does not know the Cluster's endpoint,
which we can update using the update-kubeconfig command as we have done earlier
and also shown in Listing 14-19.
We can see that as user k8sadmin01, we can describe the service and manage the
cluster. Let us test the management permissions given to this user k8sadmin01.
A simple example might be that of creating a namespace, which is a privileged action.
Let us try this first with user k8sadmin02 as shown in Listing 14-21; despite the admin
name, recall that this user is not in k8s group system:masters; thus, the expected result is
for the action to fail with an unauthorized message, while trying with user k8sadmin01
should succeed since this user is part of the k8s admin group system:masters.
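A minimal sketch of this test, assuming each OS user has its own AWS credentials and kubeconfig set up as described earlier:

# As k8sadmin02 (not yet in system:masters) - expected to fail with an authorization error
kubectl create namespace test

# As k8sadmin01 (mapped to system:masters) - expected to succeed
kubectl create namespace test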
Success! As expected.
We can also perform additional checks, such as listing the namespaces to ensure the
namespace is visible and is available to use as shown in Listing 14-23.
We can see that the "test" namespace we created is visible, active, and ready for use.
YOUR TURN: Add user k8sadmin02 to the EKS system:masters group, then retry the
preceding action; creating and listing namespaces should be successful!
Provided the previous setup is correct, when listing the iamidentitymapping as an admin on the cluster (either shiva or k8sadmin01), our output would be as shown in Listing 14-25.
Note The preceding output would match yours closely provided you finished the
"YOUR TURN" action to add the user k8sadmin02 to the system:masters role. If
this was not successful for some reason, then you would most likely not see the
k8sadmin02 user in the preceding output.
Now let us move to the next role, DevOps/SRE, where we would like the user to be able to deploy, roll back, and do similar actions on existing namespaces, but we do not want them to be full-scale admins of the cluster itself. Thus, we are scoping the role down one level, giving just enough privileges to operate on existing resources at the namespace level, but not above it. What does K8S RBAC give us? It gives us the admin role; recall that the k8sadmins were granted cluster-wide admin rights via system:masters, whereas the admin role is scoped to the namespace level. Let us utilize that!
We use the same eksctl iamidentitymapping command to do it.
Listing 14-26. Mapping k8sdevops01 IAM user to the cluster admin role
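The listing body is not reproduced here; a minimal sketch of such a mapping, assuming a placeholder AWS account ID and the region used earlier, might be:

# Map the IAM user k8sdevops01 to the Kubernetes group "admin" (account ID is a placeholder)
eksctl create iamidentitymapping --cluster myeks01 --region us-east-2 \
  --arn arn:aws:iam::111122223333:user/k8sdevops01 \
  --username k8sdevops01 --group admin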
Now, we can verify the preceding command executed successfully by listing the
iamidentitymapping entries as shown in Listing 14-27.
Listing 14-27. Verifying the k8sdevops01 user is mapped to the admin role
eksctl get iamidentitymapping --cluster myeks01
Notice in the output that the k8sdevops01 has been authorized to use the role admin,
which gives them most rights inside the cluster, enough to manage the workloads and
more, but not the full system:masters role.
Why do we get this error? The astute reader would notice that even though the user
has admin privileges in the K8S cluster, they do not have access to call the relevant APIs
on the AWS IAM side; thus, an AWS IAM policy granting this access must be attached
either directly to this IAM user or to the IAM groups this user belongs to. Let us do that.
On the AWS Console page, IAM Service, logged in as a root user or user with IAM
privileges, go to User Groups ➤ grp-k8s-devops ➤ Permissions as shown in Figure 14-11.
Since we would like for this DevOps/SRE group to have all EKS privileges, let us add
a policy by clicking "Add permissions" seen on the far right and choosing "Create inline
policy" as shown in Figure 14-12.
In the "Create Policy" screen, choose the following option, as shown in Figure 14-13:
Name: IAM-Policy-for-k8s-devopsrole
Then click the "Create Policy" button to finish creating the policy. We can then confirm that the policy is successfully attached to this group by reviewing its details as shown in Figure 14-14.
You can see that our group grp-k8s-devops now has an inline policy that grants all
List, Read, and Tagging privileges to the cluster.
Since we already added the IAM user k8sdevops01 to the K8S cluster admin role, let
us try the action again as shown in Listing 14-29.
Voilà! We now can list the cluster, as expected. What else can we do? We can describe
the cluster to obtain detailed information about the cluster with the command as shown
in Listing 14-30.
Okay, we can describe the cluster. Can we go inside the cluster and look at the workloads,
which is the intended purpose of this role? Let us give it a try as shown in Listing 14-31.
But we have the admin role, right? We can check that using the get iamidentitymapping command as shown in Listing 14-32.
Indeed, we do, but why are we not able to perform any action within the cluster itself, even though we have admin rights as the user k8sdevops01? This is because we have not done the rolebinding yet; that is, we have not bound the admin role to a given namespace yet. Since K8S has multiple namespaces and it doesn't know which namespace to grant admin rights to for this user, it has denied this request. Let us grant our DevOps users access to the default namespace by binding the admin role to the default namespace.
We do that by using the rolebinding command: we indicate which clusterrole we are interested in with the option --clusterrole (admin in our case), to which group with the option --group (admin in our case), and finally to which namespace with the option --namespace (default in our case). Thus, the entire command looks as shown in Listing 14-33.
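The listing itself is not shown here; a minimal sketch of such a rolebinding, following the options just described (the rolebinding name is arbitrary), could look like:

# Bind the built-in clusterrole "admin" to the group "admin", scoped to the default namespace
kubectl create rolebinding devops-admin-default --clusterrole=admin --group=admin --namespace=default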
Now that the role has rights into the default namespace, can we get some
information about the services in this namespace? We can test that now as a
k8sdevops01 user, as shown in Listing 14-34.
Now this makes sense; user k8sdevops01 is in the K8S admin group, which is bound
to the clusterRole admin, and the IAM group this user belongs to has IAM policies
granting access to the EKS service, so the user is now able to do actions on the cluster.
We can test in another way, by getting the pod information in the default namespace
as shown in Listing 14-35.
The negative test is to attempt to list the pods in a namespace other than default, to which user k8sdevops01 does not have any rights and which thus should fail; we can confirm that by attempting to list the pods in the kube-public namespace, as shown in Listing 14-36.
Notice that the user is limited to using the "default" namespace, since that's how we did
the roleBinding. They are able to do actions within default, as the first command shows,
but the second command to show pods on a different namespace is denied, as expected.
Since user k8sdevops01 has admin rights to the namespace, they can do normal
activities a kubernetes admin can do within that namespace, such as creating a
deployment, which we can test as shown in Listing 14-37.
k8sdevops01@wks01:~$
error: failed to create deployment: deployments.apps is forbidden: User ""
cannot create resource "deployments" in API group "apps" in the namespace
"kube-public"
k8sdevops01@wks01:~$
Similarly, a deployment action also succeeds on the default namespace, but fails on
the namespace to which we do not have access as expected.
You can utilize this structure to create multiple namespaces and groups based on
your business needs and segment the administrators to have powers only within the
namespace created/allocated for their use.
This time, we also remove the tagging rights, since if we are using tags for billing constructs, etc., we do not want read-only users messing them up; just List and Read should be fine. Click the "Review Policy" button to go to the next screen, "Review policy":
Name: IAM-Policy-for-k8sreadonly
Click "Create Policy."
Name the policy, review, and finish creating the policy as shown in Figure 14-16.
Once the policy is created, you can confirm the inline policy is correctly applied by
going to the "Permissions" tab on the Group description as shown in Figure 14-17.
Figure 14-17. Confirming the permissions policy after attaching the read-
only rights
Now, grant rights on the K8S cluster itself; this time, we will use the cluster role
named "view" as shown in Listing 14-39.
We can see that the user k8sreadonly01 is correctly mapped to the read-only cluster role, view. Now, let us go ahead and also create the clusterrolebinding so that the user has rights inside the cluster as shown in Listing 14-41.
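A minimal sketch of such a clusterrolebinding, assuming the read-only IAM users were mapped to a Kubernetes group named read-only (the binding and group names here are placeholders):

# Bind the built-in "view" clusterrole to the read-only group across the cluster
kubectl create clusterrolebinding readonly-view --clusterrole=view --group=read-only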
Since the get operation is a read-only operation, and we have the required privileges,
we are successful in executing this command.
A deployment is a write operation; since it creates resources within the cluster, a
deployment command by the user k8sreadonly01 would fail, as expected and shown in
the following output:
Summary
In this chapter, we learned about various concepts such as setting up IAM users for various business purposes and granting them access to the cluster based on the concept of least privilege. We also deployed our applications to the EKS cluster and accessed the application services from the Internet.
In the next chapter, we will learn about data persistence; since the container images
are immutable, any information stored during the lifetime of the container exists only
when it is running. If the container should be destroyed, the container that replaces it is
started from the container image, which does not contain any runtime information the
previous container may have held in its memory; thus, data persistence is an important
concept in Kubernetes in general, and in AWS this is implemented by using EBS add-
ons – this is the core of our next chapter.
CHAPTER 15
Data Persistence in EKS
The first thing we have to do for this setup is to create a stock nginx deployment, which we can do by creating a mynginx.yaml file – the contents of the file are shown in Listing 15-1. Create this file using your favorite editor; save it and have it ready for deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment-ch15
spec:
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - ports:
        - containerPort: 80
        name: mynginx
        image: nginx
Now that the deployment is created, we can use the myservice01.yaml file to expose
the web service, as shown in the deployment file in Listing 15-2.
apiVersion: v1
kind: Service
metadata:
  name: "myservice"
spec:
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
  selector:
    app: "mynginx"
Enter the contents in Listing 15-2 using your favorite editor, name it myservice01.
yaml, then apply it using the commands shown in Listing 15-3.
As noted before, port 30439 is the service port; thus, we will then be able to access
the web service using port 30439 as shown in Listing 15-4.
Listing 15-4. Confirming connection to the web server using the service port
curl -L localhost:30439
Listing 15-5. Editing the stock index.html file inside the container
After executing the preceding command, we would have updated the index.html at
the appropriate location; now let’s log out from inside the container and repeat CURL
from the host terminal (or use a new terminal) as shown in Listing 15-7.
Listing 15-7. Accessing the web server – expecting the updated content
root@mydeployment-ch15-74d9d4bc84-cx22h:/# exit
exit
We notice that our content has changed as expected. Now, let us kill the pod and
observe the results as shown in Listing 15-8.
We can see that a new pod has been created, as evidenced by the new pod name;
now what does the index.html look like? Check it out as shown in Listing 15-9.
As expected, the content has been replaced; our changes were discarded along with the pod when we killed it, and when the pod was recreated from the container image, the original index.html was restored, as that's what is stored in the image. The persistence is lost.
But we need the ability to keep our changes to index.html; somehow we need to externalize the /usr/share/nginx/html directory onto persistent storage, so our changes are permanent.
Let us see how we can accomplish this.
Creating a PV
Continuing along, first, let us delete the previous deployment named mynginx, then
proceed to create a folder on the host (node) named /nginxdata, as shown in Listing 15-10,
which would act as the persistent volume, where we would like to store all our web content.
Let us ensure we have more than 2GB available in the host filesystem first, since we
would like to define a PV of size 2GB; otherwise, the operation would fail due to lack of
available disk space. We can do that with the Linux df command as shown in Listing 15-11.
df -h /
shiva@wks01:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 38G 12G 25G 32% /
shiva@wks01:~$
Avail shows 25G, well above the 2GB we need, so we should be okay. The manifest file for creating the PV is shown in Listing 15-12; create the file, name it pv.yaml, with the contents shown in Listing 15-12 using your favorite editor, save it, and then we can apply it.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginxdata
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: /nginxdata
  storageClassName: webcontent
• spec
• accessModes → ReadWriteOnce
• Since we are only dealing with one node in our microk8s case,
we are okay to use ReadWriteOnce; other types are given for
your reference.
• capacity -> 2GB, that is, we are indicating to the cluster that this
PV is of size 2GB.
• storageClassName
Now, let’s create the PV by applying the deployment file as shown in Listing 15-13.
Listing 15-13. Applying the deployment file to create the PV and confirmation
You can see that the PV is STATUS Available, but what about the RECLAIM POLICY?
More on that later. You can also see that the ACCESS MODES state RWO, short for
ReadWriteOnce.
As per the K8S docs, in the CLI, the access modes are abbreviated to
• RWO – ReadWriteOnce
• ROX – ReadOnlyMany
• RWX – ReadWriteMany
• RWOP – ReadWriteOncePod
Creating a PVC
Now that we have the PV created and it is available for the cluster to use, it is time to
make a claim from the pod, so that the pod can use this storage space. Listing 15-14
shows the contents of the deployment file for the persistent volume claim (PVC); create
the file using your favorite editor and save it named pvc.yaml as shown in Listing 15-14.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webdata
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: webdata
Note that the storage request here should be less than the available space in the
PV; otherwise, the claim will NOT be satisfied by the cluster, since resources won’t be
available.
Apply the pvc.yaml file, then check the status of the PVC as shown in Listing 15-15.
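A sketch of applying the claim and checking its status (standard kubectl commands; only the file name follows our example):

kubectl apply -f pvc.yaml
kubectl get pvc   # STATUS stays Pending until a matching PV/storage class is found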
This has been pending for a while. The astute reader would have noticed that the
storageclass available as per PV (webcontent) doesn’t match what is being requested
here (webdata) – let us check on that as shown in Listing 15-16.
After we fixed the error and reapplied, we can see that the PVC creation is successful, and the status shows Bound. This means this PVC can be utilized by the pods.
Now, it's time to actually use this PVC in a pod. How do we do that? We need to add a few additional parameters to our deployment file. Create a new mynginx-pvc.yaml file, which includes the deployment along with the PVC to be used, as shown in Listing 15-18; create this file using your favorite editor, save it, and have it ready for deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment-ch15pvc
spec:
  selector:
    matchLabels:
      app: mynginx
  template:
    metadata:
      labels:
        app: mynginx
    spec:
      containers:
      - ports:
        - containerPort: 80
        name: mynginx
        image: nginx
        volumeMounts:
        - name: nginxdata
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginxdata
        persistentVolumeClaim:
          claimName: webcontent
The preceding nginx deployment file is very much similar to the ones we have used
in previous chapters. Notice that we added a few lines to that deployment file indicating
the volumeMounts and information about the volume itself, so the pod could utilize
durable storage; let us examine them.
volumeMounts is the section, name is the name of the volume, and mountPath
indicates that this volume needs to be mounted in the path given inside of the pods.
Thus, essentially what we have done is externalized the /usr/share/nginx/html directory
to the PVC-based storage; thus, even if the pod is destroyed, the contents of the PVC are
saved. When Kubernetes relaunches the pod to meet the deployment desired state, the
contents of the /usr/share/nginx/html/index.html will be read from the PVC, which is
persistent.
volumes is the section, name is the name of the volume, and persistentVolumeClaim
indicates the name of the pvc that we created where we intended to store the webdata.
Note If you have the previous deployment of mynginx that we created previously,
please delete both the deployment and the service before proceeding, so the
system is clean.
Now that we have the required updates in the mynginx-pvc.yaml file to include the PVC, let us proceed with deploying using this updated file as shown in Listing 15-19.
Let us recreate the LoadBalancer service. No changes to the manifest file, so just
rerun the command as shown in Listing 15-20.
Listing 15-21. Accessing the web server with contents served from the PVC
curl -L localhost:30439
Alas! 403 Forbidden!! Why so? Web admins would recall that the said mount point
doesn’t have an index.html file yet; nginx, not seeing the index.html, is throwing this
error. So we need to create the index.html file in the host directory of /nginxdata as
shown in Listing 15-22.
Listing 15-22. Creating the index.html file in the persistent storage layer
Now that we have created the index.html file in the persistent storage, which is
then shared with the pod via the PVC, this file can now be read by the nginx pod, which
should then serve it; we can test it as shown in Listing 15-23.
Listing 15-23. Accessing the web server with contents served from the PVC
curl localhost:30439
<body>
new content
</body>
</html>
shiva@wks01:~$
It works! nginx is returning the new index.html from our mount point as opposed to
the default file from inside the container.
We can conduct more testing by scaling up the pods and using the same PVC; all
the pods should return the same content. Let us first check how many pods are in this
deployment as shown in Listing 15-24.
There’s just one pod; now, let’s scale up the pods to three replicas as shown in
Listing 15-25.
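A minimal sketch of the scale-up, assuming the deployment name mydeployment-ch15pvc from our manifest:

kubectl scale deployment mydeployment-ch15pvc --replicas=3
kubectl get pods   # three pods should now appear for this deployment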
We can also confirm the same by describing the service and counting the number
of Endpoints, which should match the required number of replicas as shown in
Listing 15-27.
Notice that the Endpoints have increased to three, matching the number of replicas
we asked for.
Let us hit the web server and see what content it is serving. We can do that using a
simple bash script running on the command line as shown in Listing 15-28.
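A sketch of such a loop (the URL and the iteration count are illustrative):

# Hit the service repeatedly and print a counter after each response
for i in $(seq 1 14); do curl -s -L localhost:30439; echo "count: $i"; done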
Listing 15-28. Accessing the web server via the command line
<html>
<title>
new title
</title>
<body>
new content
</body>
</html>
count: 2
<SNIP>
<html>
<title>
new title
</title>
<body>
new content
</body>
</html>
count: 14
shiva@wks01:~$
Now that we have hit the web server some 14 times, the content remains the same, but did we hit all the web servers/pods? We can check that by examining the access log of each of the pods as shown in Listing 15-29. Note that your pod names might be different; please use your pod names if you are following along on your system.
Listing 15-29. Reviewing nginx logs via pod logs to confirm we accessed all
the pods
You can see that all the pods have been hit with the web requests, and all of them
returned the same content, since all the pods are using the same mount point being
served by the same underlying PVC, recalling that we just scaled the deployment using
the same manifest file.
Next, we can delete the PVC itself, since no pod is using this claim, as shown in
Listing 15-31.
Next, we can delete the PV itself, since no PVC is present against this PV. Delete the
PV as shown in Listing 15-32.
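A sketch of the cleanup, using the PVC and PV names from our manifests:

kubectl delete pvc webdata
kubectl delete pv nginxdata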
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  selector:
    matchLabels:
      app: label-nginx
  template:
    metadata:
      labels:
        app: label-nginx
    spec:
      containers:
      - ports:
        - containerPort: 80
        name: name-nginx
        image: nginx
With the deployment file ready, deploying is straightforward as we have done earlier;
run the deployment command as “shiva,” “shiva-eks,” or another admin user who has
admin rights on the AWS EKS cluster as shown in Listing 15-34 – note the switch to using
kubectl commands, which is already aware of the EKS cluster.
As before, we have to create a service to access the web server; thus, we create a
service via the command line and get the details of the port as shown in Listing 15-35.
We can see that the service is created and exposed on port :31747; now the only thing remaining is to access the website. For that, we also need the external IP of the node, and as before, we can find the external IP of the node via the command shown in Listing 15-36.
shiva@wks01:~/git/myeks01$
ExternalIP: 3.141.29.3
shiva@wks01:~/git/myeks01$
Now that we have the node’s external IP, as before the format for accessing the
web server is http://<Node’s External IP>:<NodePort where the service is exposed>;
in our case, that would be http://3.141.29.3:31747. We can access it as shown in
Listing 15-37.
Remember that the security group needs to be updated to allow incoming traffic
on port 31747, so let us update that as shown in Figure 15-1. Go to EKS ➤ Clusters ➤
myeks01 ➤ Networking tab, then click Security Group under “Cluster security group,”
which opens a new browser tab; there, select the “Inbound rules” tab.
Figure 15-1. Editing the inbound rules of the Cluster Security Group
Notice only 30969 is allowed, which we did earlier; since this new service is now
running on port :31747, we need to add :31747 to the open port list, by clicking the “Edit
inbound rules,” as shown in Figure 15-2.
Once this security group is updated, we can retry our curl command as shown in
Listing 15-38.
curl 3.141.29.3:31747
We can also do the same in a browser, which should yield identical results, albeit the
browser one being a little user-friendly as shown in Figure 15-4.
Figure 15-4. Accessing nginx running on the AWS EKS cluster via a browser
Now that we have a deployed nginx container and can access it from the Internet, we observe that the index.html is the stock version, which we now need to externalize from the container, so we can make updates to the index.html without having to burn the HTML files into the container image and replace it every time we need to change the content.
The question is: where is the underlying storage for the persistent volumes going to come from?
created a directory on the host machine to act as our durable storage; that isn’t going to
work in the AWS EKS instance, since we typically do not have access to the underlying
compute nodes.
Kubernetes supports multiple underlying storage mechanisms such as NFS, iSCSI, or
a cloud provider–specific storage system.
AWS also provides us with a couple of different options.
You can also edit the example policy and/or create it yourself, but following the DRY
(Don’t Repeat Yourself ) principle, we will just use the existing policy which fits our use
case nicely. Then we create the IAM policy with the aforementioned input file using the
aws iam create-policy command as shown in Listing 15-40.
Listing 15-40. Creating the IAM policy required for the CSI driver
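The listing body is not reproduced here; a minimal sketch, assuming the example policy document from the aws-efs-csi-driver repository has been saved locally as iam-policy-example.json and using a placeholder policy name:

aws iam create-policy \
  --policy-name AmazonEKS_EFS_CSI_Driver_Policy \
  --policy-document file://iam-policy-example.json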
The policy is created, and the policy ID and ARN are given as outputs to that
command. Note them both; we will need them later. Now we will proceed with creating
the iamservice account, which takes the format shown in Listing 15-41.
The parameters my-cluster, policy-arn, and region need to be updated to match our
instance of Kubernetes cluster; thus, the command becomes as shown in Listing 15-42,
where we also associate our EKS cluster with the AWS OIDC provider first.
Listing 15-42. Creating the iamserviceaccount for use by the CSI driver
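The listing body is not reproduced here; a minimal sketch of the two steps, assuming the service account name efs-csi-controller-sa, a placeholder account ID, and the policy name created above:

# Associate the cluster with an IAM OIDC provider (one-time step per cluster)
eksctl utils associate-iam-oidc-provider --cluster myeks01 --region us-east-2 --approve

# Create the IAM service account used by the EFS CSI controller
eksctl create iamserviceaccount \
  --cluster myeks01 --region us-east-2 \
  --namespace kube-system --name efs-csi-controller-sa \
  --attach-policy-arn arn:aws:iam::111122223333:policy/AmazonEKS_EFS_CSI_Driver_Policy \
  --approve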
From the output section in Listing 15-42, we can observe that the IAM Service Account was created via a CloudFormation Stack and that the command completed. Most operations we perform via eksctl result in a CloudFormation Stack being created and executed, so you can always go into the CloudFormation Stack and read the execution logs if you need to in the future.
We can verify that the IAM Service Account got created in kube-system by the
command shown in Listing 15-43.
We can also confirm the annotation is completed by describing the service account
as shown in Listing 15-44.
Note The annotation contains the role ARN, not the policy ARN that we created in
step 1; when we ran the eksctl command, it created the role, took the ARN of that
role, and annotated it for us.
The next step is to install the Amazon EFS driver; we have three choices: (1) via a
helm chart, (2) manifest using a private registry, and (3) manifest using a public registry.
We will use option 3 in our example.
Download the manifest file and save it as a public-ecr-driver.yaml file using the
command shown in Listing 15-45.
kubectl kustomize \
"github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.4" > public-ecr-driver.yaml
We then apply the file to install the Amazon EFS driver onto the cluster as shown in
Listing 15-46.
clusterrole.rbac.authorization.k8s.io/efs-csi-external-provisioner-
role created
clusterrolebinding.rbac.authorization.k8s.io/efs-csi-provisioner-
binding created
deployment.apps/efs-csi-controller created
Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/os]: deprecated
since v1.14; use "kubernetes.io/os" instead
daemonset.apps/efs-csi-node created
csidriver.storage.k8s.io/efs.csi.aws.com unchanged
shiva@wks01:~/git/myeks01$
Delete the EBS driver pods, if any, by using the command shown in Listing 15-47.
Now, we can go ahead and start setting up the EFS filesystem and test it out. First, we need to create a security group that grants NFS access, since EFS operates over NFS v4.1 (see https://docs.aws.amazon.com/efs/latest/ug/how-it-works.html). Let us get the VPC ID for the VPC where our cluster is set up, since the EFS should also be set up within the same VPC, as shown in Listing 15-48.
Now we need the CIDR block in use within that VPC; we can obtain that by the
command shown in Listing 15-49.
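A sketch of both lookups using the AWS CLI (the JMESPath queries are one way to extract just the values we need; the VPC ID value is a placeholder):

# VPC ID of the cluster
aws eks describe-cluster --name myeks01 --region us-east-2 \
  --query "cluster.resourcesVpcConfig.vpcId" --output text

# CIDR block of that VPC
aws ec2 describe-vpcs --vpc-ids vpc-0123456789abcdef0 \
  --query "Vpcs[].CidrBlock" --output text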
Now create the security group which requires for us to know the VPC ID, which we
obtained earlier, using the command in Listing 15-50.
The security group is created; note down the security group ID, which will be
required for the next command where we grant NFS access through that group as shown
in Listing 15-51.
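A sketch of creating the security group and opening NFS (tcp/2049) from the VPC CIDR; the group name, VPC ID, security group ID, and CIDR shown here are placeholders:

# Create a security group in the cluster's VPC
aws ec2 create-security-group --group-name efs-nfs-sg \
  --description "Allow NFS for EFS" --vpc-id vpc-0123456789abcdef0

# Allow inbound NFS from the VPC CIDR to that group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 2049 --cidr 192.168.0.0/16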
Notice the system output, where the Return field is true – access granted! Now, on to
creating the actual EFS filesystem as shown in Listing 15-52.
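A minimal sketch of creating the filesystem; the creation token is arbitrary, and the region matches our cluster:

aws efs create-file-system --region us-east-2 \
  --performance-mode generalPurpose \
  --creation-token myeks01-efs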
While the EFS filesystem is being created, we can quickly do another task. We need to add the AmazonEFSCSIDriverPolicy to the role myAWSEKSNodeRole; this is needed so that the nodes in the node group can access the EFS-based filesystem. Go to IAM ➤ Roles ➤ myAWSEKSNodeRole ➤ Attach policies.
In the Filter Policies search box, type in EFSCSI; the AmazonEFSCSIDriverPolicy will
show in the results, select it, and click “Add permissions.”
If the filesystem is still “creating” as per the LifeCycleState field value, it is a good
time to get some coffee.
A few moments later… our filesystem is ready to use! We can confirm the EFS
filesystem is ready to use by executing the command shown in Listing 15-53.
Listing 15-53. Describing and confirming the EFS filesystem is ready for use
Note down the FileSystemId from the output shown in Listing 15-53; we’ll need
it later.
The examples are inside the folder shown in Listing 15-55; switch to that folder.
cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/
shiva@wks01:~/git/myeks01$ cd aws-efs-csi-driver/examples/kubernetes/
multiple_pods/
shiva@wks01:~/git/myeks01/aws-efs-csi-driver/examples/kubernetes/
multiple_pods$
Using your favorite editor, edit the specs/pv.yaml file and update the spec.csi.volumeHandle field (the last line) with the FileSystemId value we obtained for the EFS filesystem we created earlier, as shown in Listing 15-56.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-013a347478dfa307c
Note The AWS example directory specs/ has multiple files, and all of them need to be applied; rather than deploying one file at a time, we ask Kubernetes to apply all the files in that directory by pointing it at the directory rather than a single file inside it, as shown in Listing 15-57.
Listing 15-57. Applying the deployment files from the AWS example
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$ kubectl apply -f specs/
persistentvolumeclaim/efs-claim created
pod/app1 created
pod/app2 created
persistentvolume/efs-pv created
storageclass.storage.k8s.io/efs-sc created
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$
We can see that the storageclass, PV, PVC, and the pods are created all in one go!
Now, we can check out the /data/out1.txt file from either pod after about a minute or
so, giving the pod some time to write the timestamps as shown in Listing 15-58.
Listing 15-58. Tailing a log file from pod/app1 located in /data/ which is
stored in EFS
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$ kubectl exec -it app1 -- tail /data/out1.txt
Listing 15-59. Tailing a log file from pod/app2 located in /data/ which is
stored in EFS
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$ kubectl exec -it app2 -- tail /data/out1.txt
Both pods are writing timestamps to the file /data/out1.txt, which is not very helpful, since it's hard to distinguish which pod is writing which entries.
Since this is EFS, meaning NFS, anything we write to /data/ from pod/app1 should be visible in pod/app2's /data folder, since they both point back to the same EFS. Let us now write something else to this file from pod/app1 to confirm we can see this output from pod/app2 as shown in Listing 15-60.
kubectl exec -it app1 -- /bin/sh -c 'echo "this is pod1/app1 writing" >>
/data/out1.txt'
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$ kubectl exec -it app1 -- /bin/sh -c 'echo "this is pod1/app1
writing" >> /data/out1.txt'
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$
Let us give it 10–20 seconds and write the same message updating it for pod2 as
shown in Listing 15-61.
kubectl exec -it app2 -- /bin/sh -c 'echo "this is pod2/app2 writing" >>
/data/out1.txt'
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$ kubectl exec -it app2 -- /bin/sh -c 'echo "this is pod2/app2
writing" >> /data/out1.txt'
Now, we can tail the file from both the pods and validate the content shows up as
expected using the same command as before and as shown in Listings 15-62 and 15-63.
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$ kubectl exec -it app1 -- tail /data/out1.txt
Tue Dec 27 00:51:25 UTC 2022
Tue Dec 27 00:51:30 UTC 2022
Tue Dec 27 00:51:35 UTC 2022
Tue Dec 27 00:51:40 UTC 2022
this is pod1/app1 writing
Tue Dec 27 00:51:45 UTC 2022
Tue Dec 27 00:51:50 UTC 2022
Tue Dec 27 00:51:55 UTC 2022
this is pod2/app2 writing
Tue Dec 27 00:52:00 UTC 2022
shiva@wks01:~/git/myeks01/aws-efs-csi-driver.orig/examples/kubernetes/
multiple_pods$ kubectl exec -it app2 -- tail /data/out1.txt
Tue Dec 27 00:51:35 UTC 2022
Tue Dec 27 00:51:40 UTC 2022
this is pod1/app1 writing
Tue Dec 27 00:51:45 UTC 2022
From both pods, we can see the messages we wrote, showing that the EFS filesystem
is mounted across both pods and is shared among them! Now, it is time to reconfigure
this PV for our nginx pod!
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  selector:
    matchLabels:
      app: label-nginx
  template:
    metadata:
      labels:
        app: label-nginx
    spec:
      containers:
      - ports:
        - containerPort: 80
        name: name-nginx
        image: nginx
        volumeMounts:
        - name: persistent-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: efs-claim
Notice the mountPath and volumes this deployment will use; we are just reusing the volume from our previous example. Recall that the EFS is still mounted via pod/app1; we'll just use that to write our index.html file, as given in Listing 15-66. Take note of all those pesky apostrophes and double quotes; they have to be in the exact places for this to work, as shown in Listing 15-66.
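The exact listing is not reproduced here; a sketch of writing an index.html through pod/app1, with the quoting placed so that the redirection happens inside the pod rather than on the workstation (the HTML content is illustrative):

kubectl exec -it app1 -- /bin/sh -c 'echo "<html><body>hello from EFS</body></html>" > /data/index.html'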
Recall the external IP of the node port; we can access our nginx site using the same
URL of http://3.141.29.3:31747 in my case, and voilà! The website is serving the
content files from the EFS index.html file as shown in Figure 15-5.
Figure 15-5. Browser showing the index.html content we created inside of EFS
To confirm this even further, we can make an edit to the index.html and make no other changes; we should see the refreshed content in the browser. So let us make a change to the index.html file and refresh; once again, we'll use the existing pod to update this file, using the command shown in Listing 15-68.
Nice! To test this even further, we can delete the pods and let the deployment
recreate the pods. This time, due to persistence, since index.html is coming from EFS,
which is mounted to the pods upon creation, the information should not be lost; let us
test it out.
Let us first delete the pod and let it recreate on its own (testing persistence) as shown
in Listing 15-69.
Listing 15-69. Obtaining pod names and deleting to test for persistence
shiva@wks01:~/git/myeks01/aws-efs-csi-driver/examples/kubernetes/multiple_
pods/specs$ sleep 30; kubectl get pods
NAME READY STATUS RESTARTS AGE
app1 1/1 Running 0 37m
app2 1/1 Running 0 37m
mydeployment-559c5c446b-jbcpk 1/1 Running 0 50s
primeornot-79f775bb8c-7vc5n 1/1 Running 0 42d
shiva@wks01:~/git/myeks01/aws-efs-csi-driver/examples/kubernetes/
multiple_pods/specs$
As we can see from the output in Listing 15-69, the pod has been recreated on its
own; now let us check our browser, the output of which is shown in Figure 15-7.
Figure 15-7. Browser showing the updated index.html content even after pod
recreation
Summary
In this chapter, we learned how persistence works in AWS EKS by creating an EFS filesystem, mounting that EFS to our EKS-based cluster, and then utilizing that filesystem in our workloads. This concept can be extended to production workloads, where you will need to store transactional data on the persistent layer. Whether the durable/persistent storage is backed by EFS or EBS, the concept remains the same: EKS needs add-ons to utilize the storage layer; once they are provisioned, that storage can be utilized by the workloads using the regular PV and PVC concepts of Kubernetes.
The beauty of AWS is that the elastic nature of EFS and EBS still applies; you can scale your workload without having to worry about running out of storage space. These EBS and/or EFS volumes can be extended on demand, thereby providing relief for the Kubernetes engineers.
In the next chapter, we will learn about ingress and ingress controllers; these are important topics, as production-ready workloads will seek to utilize load-balancing techniques to provide fault tolerance.
CHAPTER 16
Networking and Ingress
--subnets is the list of three public subnet IDs in our VPC, since in our exercise we are only using public subnets.
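The full eksctl command is not reproduced here; a minimal sketch of creating such a node group, following the --subnets option described above, where the node type, sizes, and subnet IDs are placeholders chosen for illustration:

eksctl create nodegroup --cluster myeks01 --region us-east-2 \
  --name myng02 --node-type t3a.medium \
  --nodes 1 --nodes-min 1 --nodes-max 2 \
  --subnets subnet-aaa111,subnet-bbb222,subnet-ccc333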
"updateConfig": {
"maxUnavailable": 1
},
"tags": {}
}
}
shiva-eks@wks01:~/eks$
The output in Listing 16-2 shows the nodegroup is DELETING in status; we can
confirm this also from the AWS Console as shown in Figure 16-1.
After the myng02 nodegroup has been created, we can find out the external IP of the
node by describing it, as shown in Listing 16-3.
Since Kubernetes will redeploy our deployment onto this new node, our service
primeornot should be running on this new IP on port 30969, which we can access via
the old formula http://<Node’s External IP>:<Application’s NodePort>; thus, it would be
http://52.14.201.94:30969 in our case; this is confirmed as shown in Figure 16-2.
Figure 16-2. Accessing the application using the new external IP address of
the node
This poses an issue for the application though; every time we update the nodegroup/nodes, the external IP of the nodes changes, and we need to find that new IP to access our services, which is not good for production-type applications that need a consistent DNS name/IP address.
Additionally, the nodes may not even have external-facing IP addresses if they are launched on private subnets. So how do we address that?
Thus, in its simplest form, Ingress relies on an ingress controller and a set of routing
rules to allow external access onto services running inside the cluster.
Unlike the previous chapters, in this chapter, we’ll expose our service using an
Ingress Controller. We’ll start by doing that in our microk8s kubernetes setup.
Recall that we deleted all the deployments in microk8s as we cleaned up the PVC
and PVs. We can quickly recreate them so that we have our nginx web server back online
to try out the access method using the LoadBalancer service. We do that using the
commands given in Listing 16-5.
Let us describe the services running on this cluster as shown in Listing 16-6.
Annotations: <none>
Selector: app=mynginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.43
IPs: 10.152.183.43
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 32614/TCP
Endpoints: 10.1.139.86:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
shiva@wks01:~$
Listing 16-7. Accessing the nginx web server via the NodePort
As expected, we are able to access the web server via the NodePort.
Now let’s redeploy the service using ingress as the front end; to do that, we first need
to delete the myservice and confirm CURL has stopped working as shown in Listing 16-8.
Create a new myservice02.yaml using your favorite editor, save it, and keep it ready; the contents of myservice02.yaml are shown in Listing 16-9.
apiVersion: v1
kind: Service
metadata:
  name: "myservice"
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: "mynginx"
Note that we don’t have the type LoadBalancer defined in this service; we then apply
this deployment as shown in Listing 16-10.
Even though the service is available within the cluster at port 80, we cannot access it
from outside the cluster.
Listing 16-11. Accessing the web server from inside the pod
<body>
new content
</body>
</html>
shiva@wks01:~$
This is successful, as expected. Next, we try to access the web server using the cluster
service IP as shown in Listing 16-12.
Listing 16-12. Accessing the web server via the ClusterIP from inside the pod
This also works as expected as the ClusterIP is accessible from within the containers
running within the cluster.
Let us extend the testing further by executing the same command from the other
pod that’s running currently; this should also work, since the other pod is also inside
the cluster. Find any pod in our current deployment and use that pod, as shown in
Listing 16-13.
Listing 16-13. Accessing the web server via the ClusterIP from inside
another pod
Ingress on microk8s
Before we can use the ingress capability on the microk8s Kubernetes, we first need
to enable the ingress add-on within microk8s. Let us check whether it is enabled or
disabled currently by executing the command as shown in Listing 16-14.
microk8s status
addons:
enabled:
ha-cluster # (core) Configure high availability on the
current node
helm # (core) Helm - the package manager for Kubernetes
helm3 # (core) Helm 3 - the package manager for
Kubernetes
disabled:
<SNIP>
ingress # (core) Ingress controller for external access
kube-ovn # (core) An advanced network fabric for Kubernetes
mayastor # (core) OpenEBS MayaStor
<SNIP>
shiva@mk8s-01:~$
We can see that ingress service is available, but not enabled, so let us enable that by
executing the command as shown in Listing 16-15.
configmap/nginx-ingress-udp-microk8s-conf created
daemonset.apps/nginx-ingress-microk8s-controller created
Ingress is enabled
shiva@wks01:~$
Notice that this single command does two things: it enables the ingress service and
deploys the ingress controller, making it ready for us. Now we can get an update on the
services running on this cluster, as shown in Listing 16-16.
No changes yet, because we have not created an Ingress resource. Let us now
create the Ingress manifest: using your favorite editor, create a file with the contents
shown in Listing 16-17 and save it as myingress01.yaml.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress01
spec:
  defaultBackend:
    service:
      name: myservice
      port:
        number: 80
kind: This indicates the type of resource we would like the cluster to create, Ingress in
this case.
spec: This describes the desired state of the Ingress; here, the defaultBackend routes all
incoming traffic to the myservice service we created earlier.
port: This is the service port on which external clients will reach the back end.
Now we are ready to deploy this Ingress resource; we do so as shown in Listing 16-18.
Listing 16-18. Deploying the myingress01.yaml file and obtaining the status
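A sketch of the deployment and status check:

microk8s kubectl apply -f myingress01.yaml
microk8s kubectl get ingress
microk8s kubectl get services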
Notice that our services haven't changed, but we now have an Ingress that proxies
traffic to the cluster service. It's time to test via a simple curl command on
the localhost, as shown in Listing 16-19.
Listing 16-19. Accessing the web server via the Ingress port
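Because the microk8s ingress controller listens on the workstation's ports 80 and 443, a simple check is:

curl http://localhost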
Perfect; after deploying the Ingress, we can access the service directly from our Linux
workstation.
Does this mean we can access this Ingress from some other machine within the same
network?
To test, let us first find the IP of our workstation; it is on the ens160 interface, as shown in
Listing 16-20.
ip -br -p -4 address
ip -br -p -4 address
Notice that the other machine has a different IP address but is on the same network; let's
try to access our nginx web server from it, as shown in Listing 16-22.
Nice! Our Ingress is working well. We have established how to publish internal
cluster resources to clients outside the cluster via an Ingress. We can now do the same in
AWS EKS.
Listing 16-23. Obtaining the current list of services running on our AWS
EKS cluster
It is in the same state as we left it; that is, it has two services, label-nginx and
primeornot. Let us confirm that we can access these two services via the node's
external IP and their NodePorts before switching to Ingress. First, let us make sure our
mydeployment is still healthy by describing it, as shown in Listing 16-24.
name-nginx:
Image: public.ecr.aws/nginx/nginx:stable
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/usr/share/nginx/html from persistent-storage (rw)
Volumes:
persistent-storage:
Type: PersistentVolumeClaim (a reference to a
PersistentVolumeClaim in the same namespace)
ClaimName: efs-claim
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: mydeployment-64845ffdff (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 36m deployment-controller Scaled up replica
set mydeployment-64845ffdff to 1
shiva@wks01:~$
Replicas are available and still healthy! Since both services are exposed via NodePort,
we should be able to access them through the node's external IP; let us
find the external IP of the node, as shown in Listing 16-25.
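One way to list the nodes along with their external IPs is:

kubectl get nodes -o wide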
Combining the node's external IP with the two NodePorts gives us the following URLs:
http://18.218.183.7:31747
http://18.218.183.7:30969
We can access them through curl, as shown in Listing 16-26, or via a browser; we will do both.
curl 18.218.183.7:30969
Accessing the same web server via the browser is shown in Figure 16-3.
We will repeat the same for the other application, the primeornot service, as shown in
Listing 16-27.
curl 18.218.183.7:31747
<p>
This RESTful service will determine whether a given number is prime or
not. <br>
<SNIP>
</script>
</p>
</body>
</html>shiva@wks01:~$
Let us now create an Ingress resource for our nginx service so that we can expose
the nginx server to the outside world. The manifest contents are shown in
Listing 16-28; using your favorite editor, create a file with these contents, save it as
eks-myingress01.yaml, and have it ready.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress01
spec:
  defaultBackend:
    service:
      name: label-nginx
      port:
        number: 80
Now let us create the Ingress resource by deploying it as shown in Listing 16-29.
The Ingress resource is created and working; we can confirm this as shown in
Listing 16-30.
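A sketch of the apply and the confirmation:

kubectl apply -f eks-myingress01.yaml
kubectl get ingress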
Now let us try to connect to the web server as shown in Listing 16-31.
curl localhost
It did not work. This is because the worker node is inside the AWS VPC network,
while our workstation is on our local network; there is no network path between
the two machines. The similar setup worked in microk8s because the ingress
controller and our workstation were on the same network; with AWS EKS, the networks
are different, which is why this does not work.
One way we can confirm the service is running as defined is by executing
the same command from inside a pod; let us get the pod names so we can pick one to
execute the command from, as shown in Listing 16-32.
Running the curl command from any pod will do; we will choose the
mydeployment-64845ffdff-c2677 pod and execute the curl command from inside it as shown in
Listing 16-33.
Listing 16-33. Accessing the nginx web server from inside the pod
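A rough equivalent of this check, assuming curl is available in the container image and that the service is reachable by its DNS name from inside the pod:

kubectl exec mydeployment-64845ffdff-c2677 -- curl -s http://label-nginx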
Recall that when we enabled the ingress add-on in microk8s, it did two things: it
enabled the ingress service and it deployed the ingress controller for us. In AWS EKS,
this is a two-step process. We have only defined the Ingress resource, which is why
creating it succeeded, but the ingress controller is not deployed yet; that is why there is
no external address for our Ingress. Let us go ahead and deploy an ingress controller.
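One common way to install Contour, consistent with the output that follows, is to apply the project's quick-start manifest; verify the current URL and version on the Contour site before using it:

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml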
customresourcedefinition.apiextensions.k8s.io/tlscertificatedelegations.
projectcontour.io created
serviceaccount/contour-certgen created
rolebinding.rbac.authorization.k8s.io/contour created
role.rbac.authorization.k8s.io/contour-certgen created
job.batch/contour-certgen-v1.23.2 created
clusterrolebinding.rbac.authorization.k8s.io/contour created
rolebinding.rbac.authorization.k8s.io/contour-rolebinding created
clusterrole.rbac.authorization.k8s.io/contour created
role.rbac.authorization.k8s.io/contour created
service/contour created
service/envoy created
deployment.apps/contour created
daemonset.apps/envoy created
shiva@wks01:~$
This created a namespace called projectcontour with the Contour pods running; we
can confirm this as shown in Listing 16-35.
Notice that two contour pods and an envoy pod are running; the certgen pod is
expected to show as Completed, since it is used only during initialization and is not
needed after that.
Notice that immediately after deploying the contour Ingress controller, we get an
external address for our Ingress service that we can access from the outside world, as
shown in Listing 16-36.
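Quick sketches of both checks:

kubectl get pods -n projectcontour
kubectl get ingress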
curl aa1555cf26850457789cb252f01a8e1b-851197561.us-east-2.elb.amazonaws.com
Good; we have provided a way for services inside the cluster to be exposed outside
the cluster via the Ingress and Ingress Controller resources. Let us now describe
the Ingress to obtain additional information about it, as shown in Listing 16-38.
We can see that all paths lead to the back end where our nginx service is running.
What if we also wanted to expose our primeornot service? Let us try. Using your favorite
editor, create the eks-myingress02.yaml file with the contents shown in Listing 16-39, then
save and apply it as shown in Listing 16-40.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress02
spec:
  defaultBackend:
    service:
      name: primeornot
      port:
        number: 8080
Since we have two virtual hosts to expose through a single load balancer, we
will use the HTTPProxy construct, which works much like the virtual host construct in
HTTP servers such as nginx or Apache httpd.
Let us expose both services via the HTTPProxy construct; the manifest is
shown in Listing 16-41. Create the file and apply it, making sure to update the first fqdn to
match your output from Listing 16-38.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: myingress01
spec:
  virtualhost:
    fqdn: <YOUR Ingress address HERE>
  routes:
  - conditions:
    - prefix: /
    services:
    - name: primeornot
      port: 80
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: myingress02
spec:
  virtualhost:
    fqdn: myeksnginx.example.org
  routes:
  - conditions:
    - prefix: /
    services:
    - name: label-nginx
      port: 80
Delete the old Ingress resources, apply this new manifest, and obtain the ingress addresses, as
shown in Listing 16-42.
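Assuming the HTTPProxy manifest above was saved as eks-httpproxy01.yaml (a hypothetical file name), the sequence could look like this:

kubectl delete ingress myingress01 myingress02
kubectl apply -f eks-httpproxy01.yaml
kubectl get httpproxy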
myingress02 nginx.labs.subbu.us
valid Valid HTTPProxy
shiva@wks01:~$
Note that we now have two virtual hosts defined, one for each service: myingress01
points to the primeornot service, and myingress02 points to the nginx web server.
While aa1555cf26850457789cb252f01a8e1b-851197561.us-east-2.elb.amazonaws.com is
publicly resolvable, nginx.labs.subbu.us is not, so we either have to create a DNS entry
for nginx.labs.subbu.us or add a hosts file entry for it; in our case, let us just create a
hosts entry to save time. The DNS approach works equally well.
On your Linux workstation, execute the command shown in Listing 16-43 to add the
hosts entry, using one of the IPs from the DNS resolution of the load balancer.
Listing 16-43. Adding the host entry for the second service
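A sketch of that step; the IP is a placeholder for one of the addresses returned for the load balancer name:

# Resolve the ELB hostname, then map one of its IPs to the virtual host name
nslookup aa1555cf26850457789cb252f01a8e1b-851197561.us-east-2.elb.amazonaws.com
echo "<one-of-the-returned-IPs>  nginx.labs.subbu.us" | sudo tee -a /etc/hosts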
Now we are ready to test; we should be able to access the primeornot
service using the amazonaws.com DNS name and the nginx web server using the
nginx.labs.subbu.us DNS name, as shown in Listing 16-44.
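The two checks might look like this:

curl http://aa1555cf26850457789cb252f01a8e1b-851197561.us-east-2.elb.amazonaws.com
curl http://nginx.labs.subbu.us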
Voilà! We now have two services from our cluster exposed to Internet traffic via
the HTTPProxy/Ingress setup!
In this way, you can publish multiple service back ends to the Internet to suit your
application needs.
Summary
In this chapter, we learned how to work with LoadBalancers, Ingress Controllers,
and Services to provide a path for in-cluster services to be exposed to users outside the
cluster. Along the way, we deployed a popular open source Ingress Controller, Contour.
The features of Contour extend far beyond what we have explored in this chapter;
please read through the Contour project's website for additional information.
In the next chapter, we will look at some of the popular tools available to manage
Kubernetes, giving a Kubernetes engineer a boost to productivity and simplifying
mundane operational tasks.
CHAPTER 17
Kubernetes Tools
In this chapter, we will review a few popular and commonly used tools to administer and
operate a Kubernetes cluster, making your life as a Kubernetes engineer a bit easier.
K9S
The first of these is K9S, which can be used to administer a Kubernetes cluster. It
can be downloaded from https://k9scli.io/; the binary releases are available at
https://github.com/derailed/k9s/releases.
Let us install the Linux x86_64 version on our Linux workstation, the same
workstation where we have microk8s installed.
(Credit: https://frontside.com/blog/2021-01-29-kubernetes-wet-your-toes/.)
To install the Linux x86_64 version of K9S, run the command shown in Listing 17-1.
curl -L https://github.com/derailed/k9s/releases/download/v0.26.7/k9s_Linux_x86_64.tar.gz -o k9s_Linux_x86_64.tar.gz
After extracting the archive, notice the k9s file; this is the K9S binary, which you can
move to the /usr/local/bin directory by running the command shown in Listing 17-3 so
that it is in your PATH and easy to execute.
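A sketch of extracting the archive and placing the binary on your PATH:

tar -xzf k9s_Linux_x86_64.tar.gz
sudo mv k9s /usr/local/bin/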
Just like before, K9S needs to connect to our cluster, so we need to set the context.
Since microk8s is self-contained, there isn't a default ~/.kube/config written; we can
generate one by executing the command in Listing 17-4, which prints the cluster
information that K9S needs in order to connect.
microk8s config
RVJUSUZJQ0FURS0tLS0tCg==
server: https://192.168.235.224:16443
name: microk8s-cluster
contexts:
- context:
cluster: microk8s-cluster
user: admin
name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
user:
token: WE<SNIP>0K
shiva@mk8s-01:~$
mkdir ~/.kube
microk8s config > ~/.kube/config
k9s
Now launch K9S by executing /usr/local/bin/k9s, or simply k9s, since /usr/local/bin is
typically in the system's PATH. The tool launches and shows its landing page, as seen
in Figure 17-1.
Here, in one screen, you can see all the pods that are running. Let us launch some
more pods from another session and watch the display update: open another terminal window
or SSH to your microk8s server, then launch a stock nginx as shown in Listing 17-6.
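A quick way to do that (a sketch; your listing may differ):

microk8s kubectl create deployment nginx --image=nginx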
Back on the k9s screen, you should see the new pod show up on the display as shown
in Figure 17-2.
We can press “d” to describe this pod; the describe details pop up as shown in
Figure 17-3.
Press <ESC> to go back to the original pod listing screen, then press <l> for logs and
then <0> for tail; you can then see the pod logs as shown in Figure 17-4.
Now we can see the entire log stream, which makes it easy to obtain details about a
pod without typing a whole series of commands one after another, making your life,
the Kubernetes engineer's life, easier!
This tool comes in handy for command-line-driven engineers who want a simple,
terminal-based panel for managing their K8S cluster.
Open Lens
The next tool is Open Lens, a desktop GUI for managing Kubernetes clusters. Before you
click the "+" sign to add a cluster, ensure that the workstation on which the Lens
application is running has a valid .kube/config configured for the cluster you'd like to add.
In our case, we will continue with the .kube/config file we are using to manage our AWS EKS
cluster; thus, the workstation already has all the required information in its .kube/config file.
Click the "+" sign at the bottom right as shown in Figure 17-6.
Then choose "Sync kubeconfig(s)" and select the location of your .kube/config file,
which is typically under your user's home directory; this adds the cluster. In our
case, the AWS EKS cluster shows up, since Lens picked up the context for that cluster
from the .kube/config file, as shown in Figure 17-7.
Click the name of the cluster to connect to it. Lens connects to the cluster as shown
in Figure 17-8.
Once Lens is connected, we can visually view the cluster information, similar to what
we saw in the AWS Web Console; node and pod details are all visible. If you are managing
multiple clusters, you can easily connect to and switch between them. For working with
nodes in the node group, the Nodes screen comes in handy; you can see detailed
information about each node and even perform operations such as cordoning using the
GUI, as shown in Figure 17-9.
Figure 17-9. Lens screen showing details of the nodes in the cluster
Close the node details and select the Workloads ➤ Pods option on the left-hand-side
navigation bar as shown in Figure 17-10. All the pods and their information are shown in
one place.
All the storage-related information is in one place, including the PVC that we set up
in an earlier chapter, as shown in Figure 17-11.
Feel free to explore the tool further to get the most out of it and make your life easier.
Next, we move on to HELM, a popular package manager for Kubernetes.
HELM
Another popular tool in the Kubernetes world is HELM, a package manager for
Kubernetes. Just as you have Debian or Red Hat packages, HELM charts are packages
for Kubernetes that you can use to install software as containers; you can read more
about HELM at its official website: https://helm.sh.
HELM3 Basics
To list the helm3 chart (package) repositories, execute the repo list command as shown in
Listing 17-7.
To list any helm packages already installed, use the command shown in Listing 17-8.
To add a repo, use the repo add command as shown in Listing 17-9; one popular
helm chart source is Bitnami, available at https://charts.bitnami.com/bitnami.
Add it to your helm repos as shown in Listing 17-9.
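Sketches of these three commands on a microk8s cluster, where the helm3 add-on is invoked through the microk8s wrapper:

microk8s helm3 repo list
microk8s helm3 list
microk8s helm3 repo add bitnami https://charts.bitnami.com/bitnami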
Note Please always use your discretion when adding third-party repos, as there
is always the risk of pulling in unknown software that may carry cyber risks.
To search HELM charts available in this repository, use the command as shown in
Listing 17-10.
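For example, to look for PostgreSQL charts (a sketch):

microk8s helm3 search repo postgresql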
Since we are going to deploy a PostgreSQL server, it is prudent to also install the psql
client software on our workstation, since we'll need it to connect to the PostgreSQL
server as a test. Install it as shown in Listing 17-12.
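On an Ubuntu workstation, this could be as simple as:

sudo apt install -y postgresql-client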
Now, on to deploying the PostgreSQL server via a helm chart, as shown in Listing 17-13.
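A sketch of installing the chart, using a hypothetical release name mypgsql:

microk8s helm3 install mypgsql bitnami/postgresql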
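The step that gets us to this point is a port-forward from the workstation to the PostgreSQL service; a sketch, assuming the release name mypgsql used above:

microk8s kubectl port-forward svc/mypgsql-postgresql 5432:5432 &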
With the port-forward running, we can now obtain the password for the database user
so that we can connect to the database, following the instructions shown in
Listing 17-15.
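A sketch of those instructions; the secret and key names depend on the chart version, so treat these as assumptions and follow the notes printed by your helm install:

# Read the generated password from the chart's secret, then connect over the port-forward
export POSTGRES_PASSWORD=$(microk8s kubectl get secret mypgsql-postgresql -o jsonpath="{.data.postgres-password}" | base64 -d)
PGPASSWORD="$POSTGRES_PASSWORD" psql --host 127.0.0.1 -U postgres -d postgres -p 5432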
postgres=# \db
postgres=# \q
What happened under the hood when we deployed this helm chart? It provisioned
a PVC to use as the permanent, durable storage for the PostgreSQL pod, as shown in
Listing 17-17.
Listing 17-17. PVC setup by the helm chart for PostgreSQL install
microk8s kubectl get pvc
The helm chart also deployed a service to expose the postgresql port as shown in
Listing 17-18.
All in one go, making it easy to deploy commonly used software in a standardized
format. You can find more charts on the Internet and use them as you see fit; again,
exercise caution when using unknown repositories.
CNCF
Finally, CNCF (the Cloud Native Computing Foundation) is not a tool by itself; it is a
community where developers of cloud-native projects, including Kubernetes, come
together to learn, exchange ideas, and find new and innovative ways to deploy, manage,
and utilize Kubernetes. You can access its website at https://cncf.io.
One of the most impressive views of the ecosystem is the CNCF Cloud Native
Interactive Landscape page at https://landscape.cncf.io/?project=graduated,member,
which appears similar to the screenshot shown in Figure 17-12 and highlights several of
the tools we have come across in this book. There are more tools, vendors, services, and
software waiting to be discovered; go check them out!
More Tools
More tools for you to explore
https://monokle.io/
https://collabnix.github.io/kubetools/
www.rancher.com/
Summary
In this chapter, we looked at a few tools available to create, maintain, and manage
Kubernetes clusters. There are plenty more tools out there as the Kubernetes ecosystem
is in an active development phase, and new tools are being developed and introduced
each and every day. Please explore the tools in the “More Tools” section to explore
further, and utilize the tools that will make your life easier maintaining and managing
your Kubernetes cluster.
Index

A
Access key, 208–211, 232, 235
AmazonEC2ContainerRegistryReadOnly, 277, 278
Amazon EFS, 359, 371–376
AmazonEFSCSIDriverPolicy, 368
AmazonEKSClusterPolicy, 243
AmazonEKS_CNI_Policy, 277
AmazonEKSServiceRolePolicy, 244
AmazonEKSWorkerNodePolicy, 277
API Client Utility, 104
apiVersion, 90
Artifact repository, 73
    CI/CD parlance, 197
    Docker Hub, 199–202
    ECR, 202–213
    JFrog container registry, 214–223
    storage and retrieval of containers, 198
Automation, 81
AWS CLI, 208–210, 261, 384–386
AWS CLI v2, 288, 289
AWS Console, 263
AWS Elastic Container Registry (ECR), 73, 202–213
AWS Elastic Kubernetes Service (EKS), 235, 400–407, 424
    clusters, 223, 273 (see also Clusters)
    deployment, 352, 353
    EFS, nginx content, 376–380
    EFS testing, 371–376
    file system, 370, 371
    Filter Policies, 369
    IAM role, 359, 360
    IAM user, 224–235
    inbound rules, 357
    Kubernetes cluster, 361, 362
    NFS access, 366
    nginx container, 358
    nginx deployment, 356
    nodes, 354, 355
    OS users, 223
    See also Elastic Kubernetes Service (EKS)
AWS IAM user
    access key wizard, 233
    API keys, 232
    create user, 230
    IAM Service, 226, 227
    landing page, 225, 226
    login details, 231
    PowerUserAccess policy, 229
    power user permissions, 224
    retrieving access keys, 235
    root user credentials, 224
    user wizard, 228
AWS managed node groups, 268
Azure Container Registry, 198

B
Business units (BU), 110–112

C
calico-node-tkzbg, 165
CenOS, 62

H
Hello-World command, 16–19, 29, 219
HELM3 (packages), 429–431
    connectivity testing to PGSQL, 435–437
    deploying postgresql, 431–434
    testing PGSQL deployment, 434, 435
Horizontal scaling, 151
HTTPProxy construct, 411, 412

I
IAM PassRole, 255–260
IAM users
    classes, 300, 301
IMAGE ID, 29–32, 57, 200
Ingress
    on AWS EKS, 400–407
    connecting
        pods, 393–395
        via NodePort, 391–393
    connectivity testing, 389–391
    deploying, EKS, 407–414
    on microk8s, 395–400
Inline policy, 256, 302

J
Java, 1, 127
JFrog Artifactory, 73, 198
JFrog Container Registry
    cloud provider, 214

K
k8sadmin01 user testing, 307–310
k8sdevops01 user testing, 313–322
k8sreadonly01 user, 185, 188, 327
k8sreadonly02 user, 189
K9S, 417–423
Kubelet, 63
Kubernetes (K8S), 7, 81, 82, 161, 162, 165, 169, 197, 198, 202, 213, 223
    api-server, 177–180, 190
    clusters, 95–100, 146, 224
    command-line, 97
    contexts, 101–106
    distributions, 62
    Docker relationship, 61, 62
    documentation, 137
    draining a node, 108–110
    Kubelet, 63
    microk8s, 389
    nodes, 105–108
    pod-public nginx, 69–72
    pods (see Pods)
    and PV (see Persistent volume (PV))
    RBAC (see Role-Based Access Control (RBAC))
    redeploy, 387
    resource availability, 152
    resource requirements, 141
    service & testing, 74–78
    storage volumes, 329–335

O
Open Lens, 423–429
Open source system, 61
Operating system (OS), 1, 26, 305
    ARCH, 24
    containerized base, 40–44
    package repository, 21
    perspective, 96
    users, 223
    VMs, 3, 4

P, Q
Parent node, 154–156, 158–161
PassRole permissions, 259–264
Persistent volume claim (PVC)
    creation, 339–344
    nginx content, 344–350
    and PV, 351–353
Persistent volume (PV)
    AWS EKS, 352, 353
    creation, 335–339
    storage volumes
        microk8s, 335
        pods, 329–335
Pods, 31–34, 63, 81, 86–88, 137, 140, 141, 148, 151, 160, 162, 349, 375–377, 420, 422
    command line, 292
    connection, 393–395
    container for troubleshooting, 133, 134
    creation, 106, 107
    deployment, 294–301
    EKS, 271–273
    information, 189
    kubectl binary, 182
    logs, 126–134
    mydeployment, 92
    option, 427
    port-forward, 134–136
    postgresql, 436
    PV, 329–335
    self-explanatory, 70
    web server service, 74–78
Port 4567, 74
Port 8080, 32–35, 135
Port 8083, 56
Power node03, 168
PowerUsers, 256
Projectcontour, 408
Public subnets, 283

R
ReadOnlyMany, 337
Read-only user setup, 322–327
ReadWriteMany, 337
ReadWriteOnce, 337
ReadWriteOncePod, 337
ReplicaSets
    availability and scalability, 139
    container resources, 151
    deployment, 146
    disk pressure, 145
    horizontal scaling, 151
    K8S management plan, 140
    Kubernetes cluster, 146
    mydep-atscale.yaml, 137–139
    node, 142–145
    one-pod deployment, 141
    pods command, 139
    scalability factor, 141
    scale-deployment.sh script, 149
    vertical scaling, 151
    VMs, 150
    worker node, 140