
The Fundamentals of Kubernetes

From managing cluster capacity to container health to storage volumes, build your foundational knowledge of Kubernetes with these five fundamentals.

Table of Contents

Introduction
Chapter 1: How to Manage Cluster Capacity with Requests and Limits
Chapter 2: How to Use Health Checks
Chapter 3: How to Use Kubernetes Secrets
Chapter 4: How to Organize Clusters
Chapter 5: Working With Kubernetes Volumes



Introduction

A recent survey from the Cloud Native Computing Foundation (CNCF) found that 83% of respondents are using Kubernetes in production, up from 58% in 2018. When it comes to automating the deployment, scaling, and management of containerized applications, teams are clearly finding significant value in Kubernetes. But that doesn't mean the technology is simple to comprehend. As teams move to adopt Kubernetes, they may find that the sheer breadth of the platform can be overwhelming.

The five chapters in this ebook explore intro-level Kubernetes fundamentals—including managing cluster capacity, using health checks, organizing clusters, and beyond. If you're a Kubernetes novice, this information is essential, but we suspect even some pros will find value in reviewing these basic elements for successful Kubernetes management.


Chapter 1: How to Manage Cluster Capacity with Requests and Limits

While Kubernetes manages the nodes in your cluster, you have to first define the resource requirements for your applications. Understanding how Kubernetes manages resources, especially during peak times, is important to keep your containers running smoothly.

This chapter looks at how Kubernetes manages CPU and memory using requests and limits.

How requests and limits work

Every node in a Kubernetes cluster has an allocated amount of memory (RAM) and computational power (CPU) that can be used to run containers.

Kubernetes defines a logical grouping of one or more containers into Pods. Pods, in turn, can be deployed and managed on top of the nodes. When you create a Pod, you normally specify the storage and networking that containers share within that Pod. The Kubernetes scheduler will then look for nodes that have the resources required to run the Pod.

To help the scheduler, you can specify lower and upper RAM and CPU bounds for each container using requests and limits. These two keywords enable you to specify the following:

• By specifying a request on a container, you are setting the minimum amount of RAM or CPU required for that container. Kubernetes will roll all container requests into a total Pod request. The scheduler will use this total request to ensure the Pod can be deployed on a node with enough resources.

• By specifying a limit on a container, you are setting the maximum amount of RAM or CPU that the container can consume. Kubernetes translates the limits to the container service (Docker, for instance) that enforces the limit. If a container exceeds its memory limit, it may be terminated and restarted, if possible. CPU limits are less strict and can generally be exceeded for extended periods of time.

Let's see how requests and limits are used.

Setting CPU requests and limits

Requests and limits on CPU are measured in CPU units. In Kubernetes, a single CPU unit equals a virtual CPU (vCPU) or core for cloud providers, or a single thread on bare metal processors.

Under certain circumstances, one full CPU unit can still be considered a lot of resources for a container, particularly in regard to microservices. This is why Kubernetes supports CPU fractions.


While you can enter fractions of the CPU as decimals—for example, 0.5 of a CPU—Kubernetes uses the "millicpu" notation, where 1,000 millicpu (or 1,000m) equals 1 CPU unit.

When you submit a request for a CPU unit, or a fraction of it, the Kubernetes scheduler will use this value to find a node within a cluster that the Pod can run on. For instance, if a Pod contains a single container with a CPU request of 1 CPU, the scheduler will ensure the node it places this Pod on has 1 CPU resource free. For a Docker container, Kubernetes uses the CPU share constraint to proportion the CPU.

If you specify a limit, Kubernetes will try to set the container's upper CPU usage limit. As mentioned earlier, this is not a hard limit, and a container may or may not exceed it depending on the containerization technology. For a Docker container, Kubernetes uses the CPU period constraint to set the upper bounds of CPU usage. This allows Docker to restrict the percentage of runtime over 100 milliseconds that the container can use.

Below is a simple example of a Pod configuration YAML file with a CPU request of 0.5 units and a CPU limit of 1.5 units.

apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-limit-example
spec:
  containers:
  - name: cpu-request-limit-container
    image: images.example/app-image
    resources:
      requests:
        cpu: "500m"
      limits:
        cpu: "1500m"

This configuration defines a single container called "cpu-request-limit-container" with its requests and limits specified in the resources section. In this case, you are requesting 500 millicpu (0.5, or 50% of a CPU unit) and limiting the container to 1,500 millicpu (1.5, or 150% of a CPU unit).

Setting memory requests and limits

Memory requests and limits are measured in bytes, with some standard short codes to specify larger amounts, such as kilobytes (K, or 1,000 bytes), megabytes (M, or 1,000,000 bytes), and gigabytes (G, or 1,000,000,000 bytes). There are also power-of-two equivalents for these shortcuts—for example, Ki (1,024 bytes), Mi, and Gi. Unlike CPU units, there are no fractions for memory, as the smallest unit is a byte.

The Kubernetes scheduler uses memory requests to find a node within your cluster that has enough memory for the Pod to run on. Memory limits work in a similar way to CPU limits, except they are enforced more strictly. If a container exceeds a memory limit, it might be terminated and potentially restarted with an "out of memory" error.

The simple example of a Pod configuration YAML file below contains a memory request of 256 megabytes and a memory limit of 512 megabytes.

apiVersion: v1
kind: Pod
metadata:
  name: memory-request-limit-example
spec:
  containers:
  - name: memory-request-limit-container
    image: images.example/app-image
    resources:
      requests:
        memory: "256M"
      limits:
        memory: "512M"

This configuration defines a single container called "memory-request-limit-container" with its requests and limits specified in the resources section. You have requested 256M of memory and limited the container to 512M.

Setting limits via namespaces

If you have several developers, or teams of developers, working within the same large Kubernetes cluster, a good practice is to set common resource requirements to ensure resources are not consumed inadvertently. With Kubernetes, you can define different namespaces for teams and use resource quotas to enforce quotas on these namespaces.

For instance, you may have a Kubernetes cluster that has 64 CPU units and 256 gigabytes of RAM spread over eight nodes. You might create three namespaces—one for each of your development teams—with a resource quota of 10 CPU units and 80 gigabytes of memory. This would allow each development team to create any number of Pods up to that limit, with some CPU and memory left in reserve.

For more information on specifying resource quotas for namespaces, refer to the Resource Quotas section of the Kubernetes documentation.
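As a rough sketch of what such a quota might look like—the namespace and object names here are illustrative assumptions, not from the original text—a ResourceQuota for one team's namespace could be defined like this:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # hypothetical team namespace
spec:
  hard:
    requests.cpu: "10"       # total CPU all Pods in the namespace may request
    requests.memory: 80Gi    # total memory all Pods may request
    limits.cpu: "10"         # total CPU limit across all Pods
    limits.memory: 80Gi      # total memory limit across all Pods

Applying this with kubectl apply -f quota.yaml caps the combined requests and limits of every Pod created in the team-a namespace.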

The importance of monitoring cluster capacity

Setting requests and limits on both containers and namespaces can go a long way toward ensuring your Kubernetes cluster does not run out of resources. Monitoring, however, still plays an important role in maintaining the health of individual services, as well as the overall health of your cluster.

When you have large clusters with many services running within Kubernetes Pods, health and error monitoring can be difficult. Observability tools, such as New Relic, offer an easy way to monitor your Kubernetes cluster and the services running within it, and to make sure that the requests and limits you are setting at the container level and across the cluster are appropriate.

Having a good understanding of how Kubernetes handles CPU and memory resources, as well as enabling configuration to manage these resources, is critical to ensure your Kubernetes clusters have enough capacity at all times. As we've seen, setting CPU and memory requests and limits is easy—and now you know how to do it. By adding a layer of monitoring, you will go a long way toward ensuring that Pods are not fighting for resources on your cluster.


Chapter 2: How to Use Health Checks

To manage containers effectively, Kubernetes needs a way to check container health to ensure that containers are working correctly and receiving traffic. Kubernetes uses health checks—also known as probes—to determine if instances of your app are running and responsive.

This chapter discusses the different probe types and the various ways to use them.

Why probes are important

Distributed systems can be hard to manage. Because the separate components work independently, each part will keep running even after other components have failed. At some point, an application might crash. Or an application might still be in the initialization stage and not yet ready to receive and process requests.

You can only assert the system's health if all of its components are working. Using probes, you can determine whether a container is dead or alive, and decide if Kubernetes should temporarily prevent other containers from accessing it. Kubernetes verifies individual containers' health to determine the overall Pod health.

Types of probes

As you deploy and operate distributed applications, containers are created, started, run, and terminated. To check a container's health in the different stages of its life cycle, Kubernetes uses different types of probes:

Liveness probes allow Kubernetes to check if your app is alive. The kubelet agent that runs on each node uses liveness probes to ensure that the containers are running as expected. If a container app is no longer serving requests, kubelet will intervene and restart the container.

For example, if an application is not responding and cannot make progress because of a deadlock, the liveness probe detects that it is faulty. Kubelet then terminates and restarts the container. Even if the application carries defects that cause further deadlocks, the restart will increase the container's availability. It also gives your developers time to identify the defects and resolve them later.

Readiness probes run during the entire life cycle of the container. Kubernetes uses this probe to know when the container is ready to start accepting traffic.

If a readiness probe fails, Kubernetes will stop routing traffic to the Pod until the probe passes again.

For example, a container may need to perform initialization tasks, including unzipping and indexing files and populating database tables. Until the startup process is completed, the container will not be able to receive or serve traffic. During this time, the readiness probe will fail, so Kubernetes will route requests to other containers.

A Pod is considered ready when all of its containers are ready. That helps Kubernetes control which Pods are used as backends for services. If it's not ready, a Pod is removed from service load balancers.

Startup probes are used to determine when a container application has been initialized successfully. If a startup probe fails, the Pod is restarted.

When Pod containers take too long to become ready, readiness probes might fail repeatedly. In this case, containers risk being terminated by kubelet before they are up and running. This is where the startup probe comes to the rescue. The startup probe forces liveness and readiness checks to wait until it succeeds, so that the application startup is not compromised. That is especially beneficial for slow-starting legacy applications.

Creating probes

To create health check probes, you must issue requests against a container. There are three ways of implementing Kubernetes liveness, readiness, and startup probes:

1. Sending an HTTP request
2. Running a command
3. Opening a TCP socket

HTTP REQUESTS

An HTTP request is a common and straightforward mechanism for creating a liveness probe. To expose an HTTP endpoint, you can implement any lightweight HTTP server in your container.

A Kubernetes probe will perform an HTTP GET request against your endpoint at the container's IP to verify whether your service is alive. If your endpoint returns a success code, kubelet will consider the container alive and healthy. Otherwise, kubelet will terminate and restart the container.

Suppose you have a container based on an image named k8s.gcr.io/liveness. In that case, if you define a liveness probe that uses an HTTP GET request, your YAML configuration file would look similar to this snippet:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    args:
    - /server
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
        httpHeaders:
        - name: Custom-Header
          value: Awesome
      initialDelaySeconds: 3
      periodSeconds: 3

The configuration defines a single-container Pod whose initialDelaySeconds and periodSeconds properties tell kubelet to wait three seconds before performing the first probe and then to execute a liveness probe every three seconds. Kubelet will check whether the container is alive and healthy by sending requests to the /healthz path on port 8080, expecting a success result code.

COMMANDS

When HTTP requests are not suitable, you can use command probes.

With the command probe configured as shown below, kubelet executes the cat /tmp/healthy command in the target container. Kubelet considers your container alive and healthy if the command succeeds. Otherwise, Kubernetes terminates and restarts the container.

This is how your YAML configuration would look for a new Pod that runs a container based on the k8s.gcr.io/busybox image:

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c


    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5

The above configuration defines a single-container Pod whose initialDelaySeconds and periodSeconds keys tell kubelet to wait five seconds before the first probe and then to perform a liveness probe every five seconds. Kubelet will run the cat /tmp/healthy command in the container to execute a probe.

TCP CONNECTIONS

When a TCP socket probe is defined, Kubernetes tries to open a TCP connection on your container's specified port. If Kubernetes succeeds, the container is considered healthy. TCP probes are helpful when HTTP or command probes are not adequate. Scenarios in which containers can benefit from TCP probes include gRPC and FTP services, where the TCP protocol infrastructure already exists.

With the following configuration, kubelet will try to open a socket to your container on the specified port.

apiVersion: v1
kind: Pod
metadata:
  name: goproxy
  labels:
    app: goproxy
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20


The above configuration is similar to the HTTP check. It defines both a readiness and a liveness probe. When the container starts, kubelet will wait five seconds before sending the first readiness probe, and after that it will keep checking the container's readiness every 10 seconds. The liveness probe, likewise, starts after 15 seconds and then runs every 20 seconds.
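Startup probes, covered earlier in this chapter, use these same mechanisms but are not shown in the examples above. As a minimal sketch—the Pod name, image, endpoint, and thresholds here are illustrative assumptions, not from the original text—a startup probe that protects a slow-starting application might look like this:

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-example
spec:
  containers:
  - name: legacy-app
    image: images.example/legacy-app   # hypothetical slow-starting application
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30   # tolerate up to 30 failed checks...
      periodSeconds: 10      # ...every 10 seconds (up to 300 seconds to start)
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10      # takes over only after the startup probe succeeds

Until the startup probe succeeds, kubelet holds off on the liveness and readiness checks, so a long initialization does not trigger a restart.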

Monitoring Kubernetes health

Probes tell Kubernetes whether your containers are healthy, but they don't tell you anything. When you have many services running in Kubernetes Pods deployed across many nodes, health and error monitoring can be difficult.

As a developer or a DevOps specialist working with the Kubernetes platform, you might find New Relic an excellent tool for checking Kubernetes' health, gathering insights, and troubleshooting container issues.

Health checks via probes are essential to ensure that your containers are good citizens in a cluster. Kubernetes uses liveness, readiness, and startup probes to decide when a container needs to be restarted, or a Pod needs to be removed from service. That helps you keep your distributed system services reliable and available.


Chapter 3: How to Use Kubernetes Secrets

Containerized applications running in Kubernetes frequently need access to external resources that often require secrets—passwords, keys, or tokens—to gain access. Kubernetes Secrets lets you securely store these items, removing the need to store them in Pod definitions or container images.

This chapter addresses various ways to create and use secrets in Kubernetes, with an aim to help you select the best approach for your environment.

Creating Kubernetes Secrets

Kubernetes Secrets offers three methods to create and store secrets:

• Via the command line
• In a configuration file
• With a generator

Let's take a look at creating some secrets with these methods.

Creating Kubernetes Secrets from the command line

You can create a secret via the Kubernetes administrator command line tool, kubectl. This tool allows you to use files or pass in literal strings from your local machine, package them into secrets, and create objects on the cluster server using an API. It's important to note that secret object names must be in the form of a DNS subdomain name.

For username and password secrets, use this command line pattern:

kubectl create secret generic <secret-object-name> <flags>

For secrets using Transport Layer Security (TLS) from a given public/private key pair, use this command line pattern:

kubectl create secret tls <secret-object-name> --cert=<cert-path> --key=<key-file-path>
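Once a secret exists, you can verify it from the same command line. A quick sketch, reusing the placeholder name from the patterns above: kubectl describe secret lists the keys and the size of each value without printing the values themselves, while -o yaml reveals the full object with base64-encoded data:

# Show the keys and value sizes, but not the values
kubectl describe secret <secret-object-name>

# Show the full object, including base64-encoded values
kubectl get secret <secret-object-name> -o yaml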


You can also create a generic secret using a username and password combination for a database. This example applies the literal flag to specify the username and password at the command prompt:

kubectl create secret generic sample-db-secret --from-literal=username=admin --from-literal=password='7f3,F9D^LJz37]!W'

The command creates a new secret called sample-db-secret, with a username value of admin and a password value of 7f3,F9D^LJz37]!W. It is worth noting that strong, complex passwords often have characters that need to be escaped. To avoid this, you can put all your usernames and passwords in text files and use the following flags:

kubectl create secret generic sample-db-secret --from-file=username.txt --from-file=password.txt

This command drops the explicit username and password keys, because each file name becomes the key for that file's contents. If the key should differ from the file name, you can add it back into the --from-file switch the same way as with the --from-literal switch:

kubectl create secret generic sample-db-secret --from-file=username=123.txt --from-file=password=xyz.txt

Setting Kubernetes Secrets in a configuration file

Another option is to create your secret using a JSON or YAML configuration file. A secret created in a configuration file has two data maps: data and stringData. The former requires your values to be base64-encoded, whereas the latter allows you to provide values as unencoded strings.

The following template is used for secrets in YAML files:

apiVersion: v1
kind: Secret
metadata:
  name: <secret name>
type: Opaque
data:
  <key>: <base64 value>
stringData:
  <key>: <string value>

You apply the template using the kubectl apply -f ./<file-name>.yaml command. As an example, here is a YAML file for an application that requires a number of secret values:

apiVersion: v1
kind: Secret
metadata:
  name: my-example-app
type: Opaque
data:
  app-user: YWRtaW5pc3RyYXRvcg==
  app-password: cGFzc3dvcmQ=
stringData:
  dbconnection: Server=tcp:myserver.database.net,1433;Database=myDB;User ID=mylogin@myserver;Password=myPassword;Trusted_Connection=False;Encrypt=True;
  config.yaml: |-
    LogLevel: Warning
    API_TOKEN: NcNIMcMYMAMg.MGwjPnPfEBgqMl8Q
    API_URI: https://www.myapp.com/api

The above YAML file contains a number of values, including:

• The secret name (my-example-app)
• An app-user ("administrator," base64-encoded)
• An app-password ("password," base64-encoded)
• A dbconnection string
• A config.yaml file with data

This is a good way to package many secrets and potentially sensitive configuration information into a single configuration file.

Creating Kubernetes Secrets with a generator

The third option for creating secrets is to use Kustomize, a standalone tool for customizing Kubernetes objects using a configuration file called kustomization.yaml.

Kustomize allows you to generate secrets in a fashion similar to the command line by specifying secrets in files (with a key/value pair on each line), or as literals within the configuration file.

For secrets, the following structure is used within a kustomization.yaml file:

secretGenerator:
- name: <secret-name>
  files:
  - <filename>
  literals:
  - <key>=<value>

When you have created the kustomization.yaml file and included all the linked files in a directory, you can use the kubectl kustomize <directory> command, then apply the configuration using the kubectl apply -k <directory> command.

The following example kustomization.yaml file creates a secret from a file (passwords.txt), as well as two literal key/values (API_TOKEN and API_URI):


secretGenerator:
- name: example-app-secrets
  files:
  - passwords.txt
  literals:
  - API_TOKEN=NcNIMcMYMAMg.MGwjPnPfEBgqMl8Q
  - API_URI=https://www.myapp.com/api

A file referenced this way—like passwords.txt here—could be, for instance, the config file for an application.

Which method for creating Kubernetes Secrets is the best?

Each of the methods we've discussed is "the best" under specific circumstances.

The command line is most useful when you have one or two secrets you want to add to a Pod—for instance a username and password—and you have them in a local file or want to pass them in as literals.

Configuration files are great when handling a handful of secrets that you want to include in a Pod all at one time.

The Kustomize configuration file is the preferred option when you have one or more configuration files and secrets you want to deploy to multiple Pods.

Once you have created your secrets, you can access them in two main ways:

• Via files (volume-mounted)
• Via environment variables

The first option is similar to accessing configuration files as part of the application process. The second option loads the secrets as environment variables for the application to access. We are going to explore both methods.

Accessing volume-mounted Kubernetes Secrets

To access secrets loaded in a volume, first you need to add the secret to the Pod under spec.volumes[].secret.secretName. You then add a volume to each container under spec.containers[].volumeMounts, where the name of the volume is the same as that of the secret, and where readOnly is set to "true".

There are also a number of additional options you can specify. Let's have a look at an example Pod with a single container:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myapp
    image: ubuntu
    volumeMounts:
    - name: secrets
      mountPath: "/etc/secrets"
      readOnly: true
  volumes:
  - name: secrets
    secret:
      secretName: mysecret
      defaultMode: 0400
      items:
      - key: username
        path: my-username

This configuration file specifies a single Pod (mypod) with a single container (myapp). In the volumes section for the Pod, you have a volume named secrets, which is shared by all containers. This volume is of type secret, and it loads the secret called mysecret. It loads the volume using the Unix file permissions 0400, which gives read access to the owner (root), and no access to other users.

The secret also contains the items list, which casts only specific secret keys (in this case, username) and an appended path (my-username). In your container (myapp), you map the volume in volumeMounts (name is secrets) to the mountPath as read only.

Because you're casting the username to my-username, the directory /etc/secrets in the container will look something like this:

lrwxrwxrwx 1 root root 2 September 20 19:18 my-username -> ..data/username

Because you're casting a single value and changing the path, the key has changed. If you follow the symlink, you will see that the permissions are set correctly:

-r-------- 1 root root 2 September 20 19:18 username

All files that reside on the secret volume will contain the base64-decoded values of the secrets.

The advantage of loading your secrets into a volume is that, when the secrets are updated or modified, the volume is eventually updated as well, allowing your applications to re-read the secrets. It also makes parsing secret files (such as sensitive configuration files), as well as referencing multiple secrets, a lot easier.
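The example above assumes a secret named mysecret already exists. As a sketch of how you might create it and then confirm the mount from outside the Pod—the literal values here are illustrative—you could run:

# Create the secret the Pod above refers to
kubectl create secret generic mysecret --from-literal=username=admin --from-literal=password='s3cr3t!'

# Read the mounted, decoded value from inside the running container
kubectl exec mypod -c myapp -- cat /etc/secrets/my-username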

Accessing Kubernetes Secrets as environment variables

You can project your secrets into a container using environment variables. To do this, you add an environment variable for every secret key you wish to add, using env[].valueFrom.secretKeyRef. Let's have a look at an example Pod specification:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myapp
    image: ubuntu
    env:
    - name: USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password

This configuration file passes two keys (username and password) from the mysecret secret to the container as environment variables. These values are the base64-decoded values of the secrets. If you logged into the container and ran the echo $USERNAME command, you would get the decoded value of the username secret (for example, "admin").

One of the biggest advantages of casting secrets this way is that you can be very specific with the secret value. Besides, for some applications, reading environment variables is easier than parsing configuration files.

Alternatives to Kubernetes Secrets

While secret management in a Kubernetes cluster is relatively simple, fairly secure, and can meet most requirements, it does have some downsides. In particular, secrets are namespaced, just like Pods, so if secrets and Pods are in the same namespace, all Pods can read the secrets. The other major downside is that keys are not rotated automatically; you need to rotate secrets manually.

To address these issues and provide more centralized secret management, you can use an alternative configuration, such as:

• Integrating a cloud vendor secrets management tool, such as Hashicorp Vault or AWS Secrets Manager. These tools typically use Kubernetes service accounts to grant access to the vault for secrets, and mutating webhooks to mount the secrets into the Pod.

• Integrating a cloud vendor Identity and Access Management (IAM) tool, such as AWS Identity and Access Manager. This type of integration uses a method similar to OpenID Connect for web applications, which allows Kubernetes to utilize tokens from a Secure Token Service.

• Running a third-party secrets manager, such as Conjur, loaded into Pods as a sidecar.

Secure storage of secrets is critical to running containers in Kubernetes because almost all applications require access to external resources—databases, services, and so on. Using Kubernetes Secrets allows you to manage sensitive application information across your cluster, minimizing the risks of maintaining secrets in a non-centralized fashion.


Chapter 4: How to Organize Clusters

Kubernetes was designed to scale. A team can start a small cluster and progressively expand its installation. After a while, the cluster may be running dozens of Pods and hundreds of containers, or even more. However, without organization, the number of deployed services and objects can quickly get out of control, leading to performance, security, and other issues.

This chapter examines three key tools you can use to help keep your Kubernetes cluster organized: namespaces, labels, and annotations.

How namespaces work

By default, Kubernetes provides just one workable namespace on top of a physical cluster in which you create all Kubernetes objects. Eventually, though, your project is likely to grow to a point where a single namespace will become a limitation.

Luckily, you can think of a namespace as a virtual cluster, and Kubernetes supports multiple virtual clusters. Configuring multiple namespaces creates a sophisticated façade, freeing your teams from working with a single namespace and improving manageability, security, and performance.

Different companies will adopt different namespace strategies, depending on factors such as team size, structure, and the complexity of their projects. A small team working with a few microservices can easily deploy all services into the default namespace. But for a rapidly growing company, with far more services, a single namespace could make it hard to coordinate the team's work. In this case, the company could create sub-teams, with a separate namespace for each.

In larger companies, too, teams may be widely dispersed, often working on projects that other teams aren't aware of, making it hard to keep up with frequent changes. Third-party companies might also contribute to the platform, further increasing complexity. Coordinating so many resources is an administrative challenge, and for developers it becomes impossible to run the entire stack on the local machine. In addition to technologies such as service mesh and multi-cloud continuous delivery, multiple namespaces are essential to managing large-scale scenarios.


When different teams deploy projects to the same namespace, they risk affecting one another's work. By providing isolation and team-based access security, separate namespaces help ensure teams work on their own without disrupting others. You can also set a resource quota per namespace, so that a resource-hungry application doesn't exhaust the cluster capacity, impacting other teams' resources.

Using namespaces

When you create a cluster, Kubernetes provides three namespaces out of the box. To list the namespaces that come with the cluster, run the following command:

$ kubectl get namespaces
NAME          STATUS   AGE
default       Active   2d
kube-system   Active   2d
kube-public   Active   2d

The kube-system namespace is reserved for the Kubernetes engine and is not meant for your use. The kube-public namespace is where public access data is stored, such as cluster information.

The default namespace is where you create apps and services. Whenever you create a component and don't specify a namespace, Kubernetes creates it in the default namespace. But using the default namespace is suitable only when you're working on small systems.

You can create a Kubernetes namespace with a single kubectl command:

kubectl create namespace test

Alternatively, you can create namespaces with a YAML configuration file, which might be preferable if you want to leave a history in your configuration file repository of the objects that have been created in a cluster. The following demo.yaml file shows how to create a namespace with a configuration file:

kind: Namespace
apiVersion: v1
metadata:
  name: demo
  labels:
    name: demo

kubectl apply -f demo.yaml

Imagine having three projects—sales, billing, and shipping. You wouldn't want to deploy all of them into a single default namespace, for the reasons presented earlier, so you'd start by creating one namespace per project. The problem is, each project has its own life cycle, and you don't want to mix development and production resources.

So, as your projects get more complicated, your cluster needs a more sophisticated namespace solution. You could further split your cluster into development, staging, and production environments:

kind: Namespace
apiVersion: v1
metadata:
  name: dev
  labels:
    name: dev

kubectl apply -f dev.yaml

kind: Namespace
apiVersion: v1
metadata:
  name: staging
  labels:
    name: staging

kubectl apply -f staging.yaml

kind: Namespace
apiVersion: v1
metadata:
  name: prod
  labels:
    name: prod

kubectl apply -f prod.yaml

Here's a list of potential namespaces you might employ, depending on the needs of your projects:

• sales-dev
• sales-staging
• sales-prod
• billing-dev
• billing-staging
• billing-prod
• shipping-dev
• shipping-staging
• shipping-prod

There are two ways to explicitly tell Kubernetes in which namespace you want to create your resources.

You can specify the namespace flag when creating the resource with the kubectl apply command:

kubectl apply -f pod.yaml --namespace=demo

You can also modify the YAML configuration file metadata to include the destination namespace attribute:

apiVersion: v1
kind: Pod
metadata:
  name: testpod
  namespace: demo
  labels:
    name: testpod
spec:
  containers:
  - name: testpod
    image: nginx

If you predefine the namespace in the YAML declaration, the resource will always be created in that namespace. If you try to run the kubectl apply command with the namespace flag to set a different namespace for this resource, the command will fail.

Using labels

As the number of objects in your cluster grows, it can be hard to find and organize them. The increased complexity of the projects means their multidimensional components may cross boundaries and challenge rigid cluster structures. Labels let you attach meaningful and relevant metadata to cluster objects so they can be categorized, found, and operated on in bulk.

Labels are key/value structures assigned to objects. An object can be assigned one or more label pairs or no label at all. Labels can be useful for:

• Determining whether a Pod is part of a production or a canary deployment
• Differentiating between stable and alpha releases
• Specifying to which layer (e.g., UI, business logic, database) an object belongs
• Identifying whether a Pod is frontend or backend
• Specifying an object's release version (e.g., V1.0, V2.0, V2.1)

For example, the following configuration file defines a Pod that has two labels: layer: interface and version: stable.

apiVersion: v1
kind: Pod
metadata:
  name: app-gateway
  labels:
    layer: interface
    version: stable

Once the labels are in place, you can use label selectors to select Kubernetes objects according to criteria you define. Let's say you have some Pods in a cluster and they've been assigned labels. The following command will get you all Pods and their labels:

kubectl get pods --show-labels

NAME          READY   STATUS    RESTARTS   AGE   LABELS
app-gateway   1/1     Running   0          1m    layer=interface,version=stable
micro-svc1    1/1     Running   0          1m    layer=business,version=stable
micro-svc2    1/1     Running   0          1m    layer=business,version=alpha

Kubernetes lets you run the same kubectl get pods command and work with just the labels you care about. The following -L flag allows you to display only the layer label:

kubectl get pods -L=layer

NAME          READY   STATUS    RESTARTS   AGE   LAYER
app-gateway   1/1     Running   0          1m    interface
micro-svc1    1/1     Running   0          1m    business
micro-svc2    1/1     Running   0          1m    business

If you want to filter the results and retrieve just the interface Pods, you can use the -l flag and specify this condition:

kubectl get pods -l=layer=interface --show-labels

NAME          READY   STATUS    RESTARTS   AGE   LABELS
app-gateway   1/1     Running   0          1m    layer=interface,version=stable

For more on labels and label selectors, refer to the Kubernetes Labels and Selectors page.
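Label selectors are not just a query tool for humans; Kubernetes objects use them to find other objects. As a minimal sketch—the Service name and ports are illustrative assumptions, not from the original text—a Service could use the layer: interface label to route traffic to the app-gateway Pod above, and to any other Pod carrying that label:

apiVersion: v1
kind: Service
metadata:
  name: gateway-svc
spec:
  selector:
    layer: interface   # matches every Pod labeled layer=interface
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 8080   # port the selected Pods listen on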

Using annotations

Annotations are similar to labels. They're also structured as key/value pairs, but unlike labels, they're not intended to be used in searches.

So why should you bother to use annotations, when you can use labels? Imagine having to organize a warehouse where boxes are stored. These boxes have small labels attached outside that include important data to help you identify, group, and arrange them. Now imagine that each of these boxes contains information. Think of these contents as annotations that you can retrieve when you open the box, but that don't need to be visible from the outside.

Unlike labels, annotations can't be used to select or identify Kubernetes objects, so you can't use selectors to query them. You could store this kind of metadata in external databases, but that would be complicated. Instead, you can conveniently attach annotations to the object itself. Once you access the object, you can read the annotations.

Annotations can be helpful for a number of use cases, including:

• The Docker registry where a Pod's containers are stored
• The git repo from which a container is built
• Pointers to logging, monitoring, analytics, or audit repositories
• Debugging-related information, such as name, version, and build information
• System information, such as URLs of related objects from other ecosystem components
• Rollout metadata, such as config or checkpoints
• Phone numbers or email addresses of people in charge of the component's project
• The team's website URL

In the following example, a Pod configuration file has the information on the git repo from which a container is built, as well as the team manager's phone number:

apiVersion: v1
kind: Pod
metadata:
  name: annotations-test
  annotations:
    repo: "https://git.your-big-company.com.br/lms/new-proj"
    phone: 800-555-1212
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

You can also add annotations to an existing Pod with the annotate command:

kubectl annotate pod annotations-test phone=800-555-1212

To access Pod annotations, you can use:

kubectl describe pod your-pod-name

This will give you a full description of your Pod. Or, you can use the kubectl get pods command with a JSONPath template to read just the annotation data from Pods:

kubectl get pods -o=jsonpath="{.items[*]['metadata.annotations']}"

Organizing your cluster, organizing your journey

Namespaces, labels, and annotations are handy tools for keeping your Kubernetes cluster organized and manageable.

[Figure: A New Relic One dashboard showing Kubernetes information and a breakdown of Pods by namespace]

None of these tools are hard to use. As with most things in Kubernetes, the individual concepts are easy to learn—there are just a lot of them to learn. But now you're further along your Kubernetes journey, because you understand namespaces, labels, and annotations—and how to use them.


Chapter 5: Working With Kubernetes Volumes

There are many advantages to using containers to run applications. However, ease of storage is certainly not one of them. To do its job, a container must have a temporary file system. But when a container shuts down, any changes made to its file system are lost. A side effect of easily fungible containers is that they lack an inherent concept of persistence.

While Docker has solved this issue with mount points from the host, on Kubernetes you face more difficulties along the way. As you've learned, the smallest deployable unit of computing in Kubernetes is a Pod. Multiple instances of a Pod may be hosted on multiple physical machines. Even worse, different containers might run in the same Pod but access the same storage.

This chapter covers two tools Kubernetes offers to help solve storage issues: volumes and persistent volumes. We'll cover how and why you'd use each.

About Kubernetes volumes

Volumes offer storage shared between all containers in a Pod. This allows you to reliably use the same mounted file system with multiple services running in the same Pod. This is, however, not automatic. Containers that need to use a volume have to specify which volume they want to use, and where to mount it in the container's file system.

Additionally, volumes come with a clearly defined life cycle. They are bound to the life cycle of the Pod they belong to. As long as the Pod is active, the volume is there, too. However, when you restart the Pod, the volume gets reset. If this is not what you want, you should either use persistent volumes (discussed in the next section) or change your application's logic to accommodate this behavior appropriately.

While Kubernetes cares about only the formal definition of a volume, you also need to have a real (physical) file system allocated somewhere. This is where Kubernetes goes beyond what Docker offers.

While Docker only maps a path from the host to the container, Kubernetes allows essentially anything, as long as there is a proper provider for the storage. You could use cloud options such as Amazon Elastic Block Store (EBS) or Microsoft Azure Blob Storage, or an open source solution such as Ceph. Using something as simple and generic as NFS is possible, too. If you want to use something similar to Docker's mount path, you can fall back to the hostPath volume type.

So how do you create these volumes? You do so in the Pod definition.

Working with volumes

For example, consider creating a new Pod called sharedvolumeexample using two containers—both just sleeping. Using the volumes key, you can describe your volumes to be used within the containers.

kind: Pod
apiVersion: v1
metadata:
  name: sharedvolumeexample
spec:
  containers:
  - name: c1
    image: centos:7
    command:
    - "/bin/bash"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: xchange
      mountPath: "/tmp/xchange"
  - name: c2
    image: centos:7
    command:
    - "/bin/bash"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: xchange
      mountPath: "/tmp/data"
  volumes:
  - name: xchange
    emptyDir: {}

To use a volume in a container, you need to specify volumeMounts as shown above. The mountPath key describes the volume access path.

To demonstrate how this shares the volume between the two containers, let's run a little test. First, you should create the Pod from the spec (for example, sharedvolumeexample.yml):

kubectl apply -f sharedvolumeexample.yml

Then, you can access the terminal on the first container, c1, using kubectl:

kubectl exec -it sharedvolumeexample -c c1 -- bash

Next, write some data into a file under the /tmp/xchange mount point:

echo 'some data' > /tmp/xchange/file.txt

Let's open another terminal, connecting to the container called c2:

kubectl exec -it sharedvolumeexample -c c2 -- bash

The difference is that this time you read from its mounted storage at /tmp/data:

cat /tmp/data/file.txt

This yields "some data," as expected. Now you can remove the Pod:

kubectl delete pod/sharedvolumeexample

Working with persistent volumes

When (regular) volumes don't meet your needs, you can switch to a persistent volume. A persistent volume is a storage object that lives at the cluster level. As a result, its life cycle isn't tied to that of a single Pod, but rather to the cluster itself.

One advantage of a persistent volume is that it can be shared not only among containers of a single Pod, but also among multiple Pods. Persistent volumes can also be scaled by expanding their size. Reducing their size, however, is not possible.

A persistent volume offers the same options for selecting the physical provider as a regular volume. Provisioning, however, is a bit different. There are two ways to provision a persistent volume:

• Statically: You already allocated everything on the storage side, so there's nothing to be done. The physical storage behind the volume will always be the same.

• Dynamically: You might want to extend the available storage space when demand grows. The demand is settled via a volume claim resource, which we'll discuss in a bit. To enable dynamic storage provisioning, you have to enable the DefaultStorageClass admission controller on the Kubernetes API server.

For growing systems whose increasing demand is backed by scalable resources, dynamic provisioning makes more sense. Otherwise, we recommend staying with the simpler static provisioning.

Let's try to create a persistent volume for hostPath-backed storage. Note that instead of configuring kind as Pod, you configure it as PersistentVolume:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: persvolumeexample
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/data"

As with Pods, these resources are created using the kubectl tool:

kubectl apply -f persvolumeexample.yml

In this example, you created a new persistent volume named persvolumeexample, with a maximum storage capacity of 10GB. As for the different access modes, you could specify ReadWriteOnce, ReadOnlyMany, or ReadWriteMany, although not all of these modes are available for every storage provider. For instance, AWS EBS only supports ReadWriteOnce.

You can use the created persistent volume via another resource: PersistentVolumeClaim. The claim ensures that there is enough space available. This might fail even if, during dynamic provisioning, Kubernetes actively tries to allocate more space.

Let's create a claim for provisioning 3GB:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

The provisioning requires the use of kubectl:

kubectl apply -f myclaim-1.yml

When you run this command, Kubernetes looks for a persistent volume that matches the claim. Using the claim is simple:

kind: Pod
apiVersion: v1
metadata:
  name: volumeexample
spec:
  containers:
  - name: c1
    image: centos:7
    command:
    - "/bin/bash"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: xchange
      mountPath: "/tmp/xchange"
  - name: c2
    image: centos:7
    command:
    - "/bin/bash"
    - "-c"
    - "sleep 10000"
    volumeMounts:
    - name: xchange
      mountPath: "/tmp/data"
  volumes:
  - name: xchange
    persistentVolumeClaim:
      claimName: myclaim-1

If you compare this example with the previous one, you'll see that only the volumes section has changed, nothing else.

The claim manages only a fraction of the volume. To free this fraction, you'd have to delete the claim. The reclaim policy for a persistent volume tells Kubernetes what to do with the volume after it has been released from its claim. The options are Retain, Recycle (deprecated in favor of dynamic provisioning), and Delete.

To set the reclaim policy, you need to define the persistentVolumeReclaimPolicy option in the spec section of the PersistentVolume config. For instance, in the previous config this would look like:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: persvolumeexample
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/tmp/data"
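To confirm that the claim found and bound the volume, you can inspect both objects with standard kubectl commands; the STATUS column should read Bound for each:

# Check the persistent volume and the claim that binds to it
kubectl get pv persvolumeexample
kubectl get pvc myclaim-1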

Choose your volume wisely

Both volumes and persistent volumes allow you to add data storage that survives container restarts. While volumes are bound to the life cycle of the Pod, persistent volumes can be defined independently of a specific Pod and can then be used in any Pod.

The one you choose depends on your needs. A volume is deleted when the containing Pod shuts down, yet it is perfect when you need to share data between containers running in a Pod. Because persistent volumes outlive individual Pods, they're ideal when you have data that must survive Pod restarts or has to be shared among Pods. Both types of storage are easy to set up and use in a cluster.

Hopefully, the approaches we've documented help you gain some control over your Kubernetes environment, so you can get busy shipping more perfect software. When you're ready for a deep dive into Kubernetes monitoring, check out A Complete Introduction to Monitoring Kubernetes with New Relic.
