Kubernetes
[Trivia]
Kubernetes was originally developed by Google, based on
their internal project called Borg. The name "Kubernetes"
comes from the Greek word for "helmsman" or "pilot,"
reflecting its role in guiding and managing containers.
2
Pods as the Smallest Deployable
Units in Kubernetes
In Kubernetes, Pods are the smallest deployable units. A Pod
encapsulates one or more containers, along with their
storage resources, a unique network IP, and options that
govern how the containers should run.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: nginx:latest
    ports:
    - containerPort: 80
kubectl apply -f example-pod.yaml
[Result]
pod/example-pod created
A Pod in Kubernetes represents a single instance of a
running process in your cluster. It can run multiple
containers, but in most cases, a Pod runs a single container.
Pods are ephemeral; they can be deleted and recreated as
needed, often by Kubernetes controllers that manage the
desired state of your applications. When you deploy a Pod,
Kubernetes assigns it an IP address, which allows it to
communicate with other Pods in the cluster. Pods also have
associated storage volumes that persist data even when the
containers are restarted. The YAML configuration file defines
the Pod's specifications, including the container image to
use, ports to expose, and other settings. Understanding Pods
is foundational to working with Kubernetes, as they are the
basic units that Kubernetes manages. Higher-level objects
like Deployments, Services, and StatefulSets are built
around Pods to manage their lifecycle, scaling, and
networking.
[Trivia]
Pods in Kubernetes are designed to support a pattern called
"sidecar," where a secondary container runs alongside the
main application container to enhance its functionality. This
can include logging, monitoring, or proxy services.
3
Pods Can Contain Multiple
Containers Sharing a Network
Namespace
In Kubernetes, a Pod is the smallest deployable unit and can
contain one or more containers. These containers share the
same network namespace, which means they can
communicate with each other using localhost and share the
same IP address and port space.
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
  - name: busybox-container
    image: busybox
    command: ['sh', '-c', 'while true; do sleep 3600; done']
[Result]
[Trivia]
Pods are ephemeral, meaning they are not meant to be
durable entities. If a Pod dies, it won't be resurrected;
instead, a new Pod with a new network identity will be
created. This is why relying on Pod IPs for communication is
not recommended; instead, Kubernetes Services should be
used to provide stable network endpoints.
4
Kubernetes Services Provide
Stable Network Endpoints
A Kubernetes Service is an abstraction that defines a logical
set of Pods and a policy by which to access them. Services
provide a stable IP address and DNS name to connect to
Pods, ensuring that even if Pods are recreated or
rescheduled, the service's IP remains constant.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
[Result]
[Trivia]
Kubernetes uses label selectors to associate Pods with
Services. The Service selects Pods based on their labels,
meaning it can automatically direct traffic to all Pods that
match the specified labels. This dynamic association allows
for easy scaling and updating of applications, as new Pods
matching the labels will automatically be included in the
Service's endpoints.
5
Kubernetes Configuration Files
Use YAML
Kubernetes utilizes YAML (YAML Ain't Markup Language) for
its configuration files, which makes it accessible and easy to
read. Unlike proprietary formats, YAML is a human-readable
data serialization format that is widely used for
configuration files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
[Result]
[Trivia]
YAML is not only used in Kubernetes but is also popular in
other configuration management tools like Ansible and
Docker Compose. Its simplicity and readability make it a
preferred choice for developers and system administrators
alike. Additionally, YAML supports comments, which can be
added using the # symbol, allowing for better
documentation within the configuration files.
6
Understanding Kubernetes
Deployments
In Kubernetes, a Deployment is a resource that manages the
desired state of Pods. It ensures that the specified number
of Pods are running at all times, allowing for easy scaling
and updates.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: my-webapp:1.0
        ports:
        - containerPort: 8080
[Result]
[Trivia]
Kubernetes Deployments are a fundamental concept in
Kubernetes and are often used in conjunction with other
resources like Services and ConfigMaps. Understanding how
Deployments work is essential for managing applications
effectively in a Kubernetes environment. Additionally, the
kubectl rollout command can be used to manage the rollout
of updates to your Deployments, providing further control
over application deployment processes.
7
Understanding ReplicaSets in
Kubernetes
A ReplicaSet is a Kubernetes resource that ensures a
specified number of pod replicas are running at any given
time. It helps maintain the desired state of your application
by automatically replacing any pods that fail or are deleted.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
[Result]
[Trivia]
ReplicaSets are often used indirectly through Deployments,
which provide additional features like rolling updates and
easy rollback.
The kubectl get rs command can be used to check the
status of ReplicaSets in your cluster.
Understanding how ReplicaSets work is crucial for managing
application availability and scaling in Kubernetes.
8
Utilizing Namespaces for
Organization in Kubernetes
Namespaces in Kubernetes provide a mechanism for
isolating resources within a single cluster. They allow you to
organize and manage your applications and services
effectively, especially in environments where multiple teams
or projects coexist.
apiVersion: v1
kind: Namespace
metadata:
  name: dev-environment
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  namespace: dev-environment
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
[Result]
[Trivia]
Kubernetes comes with several default namespaces,
including default, kube-system, and kube-public.
You can switch between namespaces using the kubectl config set-context --current --namespace=<namespace-name> command.
Namespaces can also be used to enforce security policies,
as Role-Based Access Control (RBAC) can be applied at the
namespace level.
9
Understanding Labels and
Selectors for Kubernetes
Resource Management
Labels and selectors are fundamental concepts in
Kubernetes that help you organize and manage your
resources effectively. Labels are key-value pairs attached to
objects, while selectors are queries used to filter those
objects based on their labels.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
    env: production
spec:
  containers:
  - name: my-container
    image: nginx
To list pods with a specific label, you can use the following
command:
kubectl get pods -l app=my-app
[Result]
[Trivia]
Labels can be used for various purposes, such as versioning,
environment identification, or grouping related components.
You can combine multiple selectors using logical operators. For example, kubectl get pods -l app=my-app,env=production will list pods that match both labels.
Labels are not unique; multiple resources can have the
same label.
10
Managing Configuration with
ConfigMaps
ConfigMaps are a Kubernetes resource that allows you to
manage configuration data separately from your application
code. This separation makes your applications more flexible
and easier to manage.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  database-url: "mysql://user:password@hostname:3306/dbname"
  log-level: "debug"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: my-app-container
    image: my-app-image
    env:
    - name: DATABASE_URL
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: database-url
[Result]
pod/my-app-pod created
[Trivia]
ConfigMaps can store data in key-value pairs, and the data
can be referenced in various ways, such as environment
variables or command-line arguments.
You can update a ConfigMap without restarting the pods
that use it, although the pods will need to be configured to
reload the configuration dynamically.
ConfigMaps are particularly useful for separating
configuration from code in microservices architectures,
enhancing the maintainability and scalability of applications.
11
Managing Sensitive Information
with Kubernetes Secrets
Kubernetes Secrets are a way to store and manage sensitive
information such as passwords, OAuth tokens, and SSH
keys. By using Secrets, you can keep sensitive data
separate from your application code, enhancing security
and simplifying configuration management.
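As a minimal sketch (the names and the password value here are illustrative): a Secret can be created from a literal value and consumed by a Pod as an environment variable.
# Create a Secret from a literal value
kubectl create secret generic my-secret --from-literal=db-password=S3cr3tPass

# Reference the Secret from a Pod
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-secret
          key: db-password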
[Result]
[Trivia]
Secrets are namespace-scoped, meaning they are only
accessible within the namespace they are created in.
By default, Secrets are stored unencrypted in etcd,
Kubernetes' backing store, so it is important to secure etcd
and consider using encryption features.
You can create Secrets from literal values or files, making it
easy to manage sensitive data.
12
Persistent Storage in Kubernetes
with Volumes
In Kubernetes, Volumes provide a way to manage persistent
storage for containers. Unlike ephemeral storage, which is
lost when a pod is terminated, Volumes allow data to persist
beyond the lifecycle of individual containers, enabling
stateful applications to function correctly.
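As a sketch of the common pattern (names and the storage size are illustrative), persistent storage is requested through a PersistentVolumeClaim and then mounted into a Pod:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: data-claim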
[Result]
[Trivia]
The hostPath volume type is primarily used for development
and testing, as it ties the pod to a specific node.
Other volume types include NFS, AWS EBS, and GCE
Persistent Disks, which provide more robust solutions for
production environments.
It is essential to consider the access modes of volumes,
such as ReadWriteOnce, ReadOnlyMany, and
ReadWriteMany, as they dictate how many pods can access
the volume simultaneously.
13
Understanding Kubernetes
Volume Types
Kubernetes supports various types of volumes that allow
containers to store data persistently or temporarily. Key
volume types include hostPath, emptyDir, and
persistentVolumeClaim, each serving different use cases.
apiVersion: v1
kind: Pod
metadata:
  name: volume-example
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: my-empty-dir
    - mountPath: /host-data
      name: my-host-path
  volumes:
  - name: my-empty-dir
    emptyDir: {}
  - name: my-host-path
    hostPath:
      path: /tmp/host-data
[Result]
[Trivia]
Volume Lifecycle: Volumes in Kubernetes have a lifecycle
independent of the containers that use them. This means
that data can persist even if the Pod is restarted.
Persistent Volumes: For long-term data storage, consider
using PersistentVolume and PersistentVolumeClaim, which
allow you to manage storage resources more effectively
across different environments.
14
Exploring DaemonSets in
Kubernetes
DaemonSets are a powerful Kubernetes resource that
ensures a specific Pod runs on all (or a subset of) nodes in a
cluster. This is particularly useful for services like logging or
monitoring agents that need to operate on every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent
spec:
  selector:
    matchLabels:
      app: logging-agent
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
      - name: log-agent
        image: fluentd:latest
[Result]
[Trivia]
Use Cases: Common use cases for DaemonSets include log
collection, monitoring, and network proxying.
Node Selector: You can use node selectors or affinity rules
within a DaemonSet to control on which nodes the Pods
should run, allowing for greater flexibility and resource
management.
15
Understanding Kubernetes Jobs
for Batch Processing
Kubernetes Jobs are designed to manage batch processes,
allowing you to run tasks that need to complete
successfully. They ensure that a specified number of pods
successfully terminate, making them ideal for tasks that
require completion, such as data processing or backups.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  template:
    spec:
      containers:
      - name: job-container
        image: busybox
        command: ["echo", "Hello, Kubernetes Jobs!"]
      restartPolicy: Never
  backoffLimit: 4
[Result]
Hello, Kubernetes Jobs!
[Trivia]
Jobs are particularly useful for tasks that are not continuous
and need to run to completion.
You can specify parallelism in Jobs to run multiple pods
simultaneously, which can speed up the processing of large
datasets.
Jobs can be monitored using kubectl get jobs to check their
status and completion.
16
Scheduled Tasks with Kubernetes
CronJobs
Kubernetes CronJobs allow you to run jobs on a scheduled
basis, similar to how the cron utility works in Linux. This is
useful for automating repetitive tasks like backups, report
generation, or system maintenance.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cronjob-container
            image: busybox
            command: ["echo", "Hello from Kubernetes CronJob!"]
          restartPolicy: Never
[Result]
[Trivia]
CronJobs can be scheduled using standard cron syntax,
allowing for complex timing configurations.
You can specify concurrency policies to control how jobs are
managed if previous jobs are still running when a new one is
scheduled.
Monitoring CronJobs is similar to monitoring Jobs, and you
can use kubectl get cronjobs to view their schedules and
statuses.
17
Horizontal Pod Autoscaling in
Kubernetes
Horizontal Pod Autoscaling (HPA) is a feature in Kubernetes
that automatically adjusts the number of Pods in a
deployment based on observed CPU and memory usage.
This ensures that your application can handle varying loads
efficiently without manual intervention.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
[Result]
[Trivia]
HPA can also scale based on custom metrics, not just CPU
and memory.
The HPA controller continuously monitors the metrics and adjusts the number of Pods periodically (every 15 seconds by default).
To use HPA effectively, ensure that metrics-server is
installed in your cluster, as it collects resource usage data.
18
Understanding Network Policies
in Kubernetes
Network Policies in Kubernetes are used to control the
communication between Pods. They define rules that specify
which Pods can communicate with each other and under
what conditions, enhancing the security of your
applications.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-pods
spec:
  podSelector:
    matchLabels:
      role: my-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
[Result]
[Trivia]
Network Policies are implemented by the container network
interface (CNI) plugin, so ensure that your CNI supports
them.
You can define multiple policies for different types of traffic
(Ingress and Egress).
Network Policies can be combined to create complex
communication patterns between Pods, allowing for fine-
grained control over network traffic.
19
Managing External Access with
Ingress
Ingress is a powerful Kubernetes resource that manages
external access to services, primarily through HTTP. It allows
you to define rules for routing external traffic to your
internal services based on the request's hostname or path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /service1
        pathType: Prefix
        backend:
          service:
            name: service1
            port:
              number: 80
      - path: /service2
        pathType: Prefix
        backend:
          service:
            name: service2
            port:
              number: 80
[Result]
[Trivia]
Ingress can also handle SSL termination, allowing you to
manage HTTPS traffic easily.
You can use annotations in the Ingress resource to
customize the behavior of the Ingress controller, such as
enabling rate limiting or configuring authentication.
20
Central Management with the
API Server
The API server is the core component of the Kubernetes
control plane. It serves as the central management entity
that exposes the Kubernetes API and handles all
communication between the various components of the
system.
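Every kubectl command is translated into an HTTP request against this API. You can issue such requests directly with kubectl get --raw (a sketch; the paths shown are standard Kubernetes API endpoints):
# List the top-level API paths exposed by the API server
kubectl get --raw /

# The same REST endpoint kubectl uses to list Pods in the default namespace
kubectl get --raw /api/v1/namespaces/default/pods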
[Trivia]
The API server is stateless, meaning it does not store any
data itself; it relies on etcd for persistence.
All Kubernetes components communicate with the API
server, making it a crucial part of the architecture.
You can extend the API server with custom resources and
controllers to manage your specific workloads.
21
Understanding etcd: The Data
Store for Kubernetes
etcd is a distributed key-value store that is used by
Kubernetes to store all of its cluster data, including
configuration details and the current state of the cluster.
[Result]
/kubernetes.io/cluster/my-cluster
/kubernetes.io/cluster/my-cluster=true
/pods/...
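Cluster state can also be inspected directly with etcdctl, though this is rarely needed in day-to-day work. A sketch, assuming the v3 API and that the correct endpoints and TLS certificates are supplied (Kubernetes stores its objects under the /registry prefix):
ETCDCTL_API=3 etcdctl get /registry --prefix --keys-only
This lists the keys etcd holds; the values are stored in a binary (protobuf) encoding rather than plain JSON.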
[Trivia]
etcd is built on the Raft consensus algorithm, which ensures
that the data is consistent across all nodes.
The default data storage for Kubernetes, etcd, is often run
as a separate component in a Kubernetes cluster, but it can
also be managed externally.
etcd supports watch functionality, allowing clients to receive
notifications on changes to keys, which is useful for dynamic
configurations.
22
The Role of the Controller
Manager in Kubernetes
The Controller Manager in Kubernetes is responsible for
ensuring that the desired state of the cluster matches its
actual state by monitoring and managing various resources.
[Result]
[Trivia]
The Controller Manager can run multiple instances for high
availability, but only one instance of each controller should
be active at a time to avoid conflicts.
Controllers use a loop to continuously monitor the state of
the cluster and make adjustments as needed, which is a
core principle of Kubernetes' reconciliation loop.
Common controllers include the Replication Controller,
Deployment Controller, and Node Controller, each serving a
unique purpose in managing the cluster's resources.
23
Understanding Kubernetes
Scheduler and Pod Placement
The Kubernetes Scheduler is a critical component that
determines where Pods should run within a cluster based on
resource availability and defined policies.
kubectl get events --field-selector involvedObject.kind=Pod --sort-by=.metadata.creationTimestamp
[Result]
LAST SEEN   TYPE     REASON      OBJECT       MESSAGE
1m          Normal   Scheduled   pod/my-pod   Successfully assigned default/my-pod to node-1
1m          Normal   Pulling     pod/my-pod   Pulling image "nginx:latest"
1m          Normal   Pulled      pod/my-pod   Successfully pulled image "nginx:latest"
1m          Normal   Created     pod/my-pod   Created container my-container
1m          Normal   Started     pod/my-pod   Started container my-container
The command above retrieves events related to Pods in the
Kubernetes cluster, specifically focusing on their scheduling.
The --field-selector flag filters events to only show those
involving Pods, while --sort-by organizes them by creation
timestamp. The output shows the sequence of events,
including when a Pod was scheduled to a node, when the
image was pulled, and when the container was started. This
helps in understanding how the Scheduler operates in
placing Pods based on available resources and policies.
[Trivia]
The Scheduler uses various algorithms to make placement
decisions, considering factors such as resource requests
(CPU and memory), node affinity/anti-affinity rules, taints
and tolerations, and custom scheduling policies.
Understanding these concepts is crucial for optimizing
resource utilization and application performance in a
Kubernetes environment.
24
Introduction to Kubernetes
Nodes
Nodes are the worker machines in a Kubernetes cluster
where Pods are deployed and run. They can be either
physical or virtual machines.
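A quick way to see the nodes in a cluster and their status (a sketch; the exact output depends on your environment):
kubectl get nodes -o wide

# Capacity, conditions, and allocated resources for a single node
kubectl describe node <node-name>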
[Trivia]
Nodes can be categorized into two types: master nodes,
which manage the Kubernetes control plane, and worker
nodes, where the actual application workloads run. Each
node in a Kubernetes cluster runs a container runtime (like
Docker), a kubelet (which communicates with the
Kubernetes API server), and a kube-proxy (which manages
network routing). Knowing how to manage nodes and their
resources is fundamental for maintaining a healthy
Kubernetes environment.
25
Understanding Kubelet: The
Node Agent in Kubernetes
Kubelet is a crucial agent that runs on every node within a
Kubernetes cluster. Its primary responsibility is to ensure
that the containers in the pods are running as expected by
continuously monitoring the state of the node and
interacting with the container runtime.
[Result]
[Trivia]
Kubelet uses a process called "node heartbeat" to regularly
report to the Kubernetes API server that the node is healthy
and functioning correctly. This heartbeat is sent every few
seconds. If the Kubernetes control plane does not receive a
heartbeat from a node for a certain period (default is 40
seconds), it marks the node as "NotReady," and any pods on
that node may be rescheduled to other nodes.
26
Understanding Container
Runtime in Kubernetes
The container runtime in Kubernetes, such as Docker or
containerd, is responsible for running the containers on a
node. It works closely with the Kubelet to ensure containers
are launched and maintained according to the pod
specifications provided by Kubernetes.
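The output below can be produced by listing the runtime's containers on a node, assuming Docker is the container runtime in use:
docker ps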
[Result]
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS                  PORTS    NAMES
d9b100f2d815   nginx:latest   "nginx -g 'daemon of…"   5 minutes ago   Up 5 minutes            80/tcp   k8s_nginx_default_example-pod-12345_67890
f1234567890b   pause:3.2      "/pause"                 5 minutes ago   Up 5 minutes (Paused)            k8s_POD_default_example-pod-12345_67890
[Trivia]
Kubernetes introduced the Container Runtime Interface
(CRI) to allow different container runtimes to be plugged
into Kubernetes. This makes Kubernetes flexible and allows
it to support various runtimes beyond Docker and
containerd, such as CRI-O. This modularity ensures
Kubernetes can evolve with new container technologies as
they emerge.
27
Understanding Kube-proxy:
Managing Network Rules for
Pods
Kube-proxy is a crucial component in Kubernetes that
manages network rules on nodes, enabling communication
between services and Pods. It ensures that network traffic is
routed correctly to the appropriate Pod based on defined
services.
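Because kube-proxy runs on every node, it is usually deployed as a DaemonSet in the kube-system namespace. A sketch for inspecting it (the k8s-app=kube-proxy label is typical of kubeadm-based clusters and may differ elsewhere):
kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-system get pods -l k8s-app=kube-proxy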
[Trivia]
Kube-proxy is essential for service discovery in Kubernetes,
allowing Pods to communicate without needing to know the
specific IP addresses of other Pods.
It can be configured to operate in different modes based on
performance needs and cluster architecture.
Kube-proxy is typically deployed as a DaemonSet, ensuring
that there is one instance running on each node in the
cluster.
28
ServiceAccounts: Identity for
Pods in Kubernetes
ServiceAccounts in Kubernetes provide a way for processes
running in Pods to authenticate and interact with the
Kubernetes API. They are essential for managing
permissions and security within the cluster.
# Create a ServiceAccount
kubectl create serviceaccount my-service-account

# Bind the ServiceAccount to a role (e.g., view role)
kubectl create clusterrolebinding my-service-account-binding --clusterrole=view --serviceaccount=default:my-service-account
[Result]
serviceaccount/my-service-account created
clusterrolebinding.rbac.authorization.k8s.io/my-service-account-binding created
[Trivia]
Each ServiceAccount is associated with a set of permissions
defined by Role-Based Access Control (RBAC), allowing for
fine-grained access to resources.
By default, Kubernetes creates a ServiceAccount named
"default" in each namespace, which can be used if no other
ServiceAccount is specified.
ServiceAccounts can be used in conjunction with Kubernetes
Secrets to manage sensitive information securely.
29
Understanding RBAC in
Kubernetes
RBAC (Role-Based Access Control) is a method used in
Kubernetes to manage permissions within a cluster. It allows
administrators to define roles and assign them to users or
groups, controlling what actions they can perform on
resources.
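A minimal sketch (the names are illustrative): a Role that grants read access to Pods, and a RoleBinding that assigns it to a user:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io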
[Result]
[Trivia]
RBAC is one of the key security features in Kubernetes,
allowing for fine-grained access control.
It is essential to regularly review roles and bindings to
ensure they align with the principle of least privilege.
RBAC can be used in conjunction with other authentication
methods, such as OpenID Connect or LDAP, for enhanced
security.
30
Resource Quotas in Kubernetes
Resource quotas are a way to limit the resource usage of a
namespace in Kubernetes. They help prevent a single team
or application from consuming all the resources in a cluster,
ensuring fair distribution among multiple users or
applications.
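A sketch of a ResourceQuota capping the total Pods, CPU, and memory a namespace may consume (the namespace name and values are illustrative):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi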
[Result]
[Trivia]
Resource quotas can be applied to various resources,
including CPU, memory, persistent volume claims, and
more.
They help maintain cluster stability and performance,
especially in environments with multiple teams or
applications.
Resource quotas can be monitored using Kubernetes
metrics, allowing administrators to track resource usage and
adjust quotas as necessary.
31
Setting Resource Limits with
LimitRanges
LimitRanges in Kubernetes allow you to define minimum and
maximum resource usage for Pods or containers. This helps
ensure that your applications do not consume more
resources than intended, which is crucial for maintaining
cluster stability and performance.
apiVersion: v1
kind: LimitRange
metadata:
  name: example-limit-range
  namespace: my-namespace
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:
      cpu: "250m"
      memory: "256Mi"
    type: Container
[Result]
[Trivia]
Resource Requests and Limits: Requests are the minimum
resources required for a Pod to run, while limits are the
maximum resources it can consume. This distinction helps
Kubernetes manage resources efficiently.
Impact on Scheduling: If a Pod exceeds its limit, it may be
throttled or terminated, affecting application performance.
Conversely, if it does not meet its request, it may not be
scheduled if there are insufficient resources available.
Best Practices: Always define resource requests and limits to
avoid resource contention and ensure fair distribution of
resources among applications.
32
Controlling Pod Placement with
Affinity and Anti-Affinity Rules
Affinity and Anti-Affinity rules in Kubernetes are used to
control how Pods are placed on nodes within a cluster. These
rules help ensure that Pods are scheduled in a way that
meets specific requirements, such as co-locating related
services or spreading workloads across different nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: "kubernetes.io/hostname"
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - my-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: my-container
        image: my-image
[Result]
[Trivia]
Topology Keys: The topologyKey defines how the rules are applied based on node labels. Common keys include kubernetes.io/hostname for node-level rules and topology.kubernetes.io/zone for zone-level rules.
Impact on Scheduling: Affinity and Anti-Affinity rules can
make scheduling more complex and may lead to
unschedulable Pods if the cluster does not have enough
resources or suitable nodes.
Best Practices: Use Affinity and Anti-Affinity rules judiciously
to avoid over-constraining the scheduler, which can lead to
resource wastage or scheduling failures.
33
Controlling Pod Scheduling with
Taints and Tolerations
In Kubernetes, taints and tolerations are mechanisms used
to control which nodes can or cannot accept specific Pods.
This ensures that Pods are only scheduled on nodes that
meet certain conditions.
# Taint a node
kubectl taint nodes <node-name> key=value:NoSchedule

# Define a Pod with a toleration
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
[Result]
node/<node-name> tainted
pod/my-pod created
[Trivia]
Taints and tolerations do not guarantee exclusive access to
nodes. For stricter control, consider using Node Affinity or
Node Selectors in combination with taints and tolerations.
34
Understanding Kubernetes
Declarative API
The Kubernetes API is declarative, meaning that you define
the desired state of your resources, and Kubernetes works
to maintain that state.
# Define a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
[Result]
deployment.apps/nginx-deployment created
[Trivia]
The opposite of the declarative approach is the imperative
approach, where each action must be explicitly defined.
Kubernetes supports both methods, but the declarative
approach is preferred for its scalability, maintainability, and
ability to handle complex scenarios.
35
Understanding Reconciliation
Loops in Kubernetes
Kubernetes uses reconciliation loops to continuously ensure
that the current state of the system matches the desired
state defined by the user. This mechanism is fundamental to
Kubernetes' operation, allowing it to maintain the desired
configuration and automatically correct any discrepancies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example-container
        image: nginx:latest
[Result]
[Trivia]
The reconciliation loop is a key concept in Kubernetes that is
not only limited to Deployments but is also used in other
resources like StatefulSets, DaemonSets, and more.
The controller manager in Kubernetes is responsible for
managing these reconciliation loops, ensuring that the
desired state is achieved and maintained.
Understanding reconciliation is crucial for troubleshooting
and optimizing Kubernetes applications, as it helps you
grasp how Kubernetes reacts to changes and failures.
36
Imperative vs. Declarative
Configuration in Kubernetes
Kubernetes supports two main styles of configuration:
imperative and declarative. Understanding the difference
between these two approaches is essential for effectively
managing Kubernetes resources.
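The contrast can be sketched with two ways of reaching the same state (the names and file are illustrative):
# Imperative: each action is an explicit command
kubectl create deployment nginx-deployment --image=nginx
kubectl scale deployment nginx-deployment --replicas=3

# Declarative: describe the desired state in a file and apply it
kubectl apply -f nginx-deployment.yaml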
[Result]
[Trivia]
The declarative approach aligns well with GitOps practices,
where configurations are stored in Git repositories, allowing
for easier rollbacks and audits.
While the imperative approach can be quicker for simple
tasks, it lacks the reproducibility and scalability benefits of
the declarative method.
Kubernetes' kubectl command-line tool supports both
approaches, giving users flexibility based on their needs and
workflows.
37
Understanding kubectl: The
Command-Line Tool for
Kubernetes
kubectl is the command-line tool used to interact with
Kubernetes clusters. It allows users to manage and control
Kubernetes resources, such as deploying applications,
inspecting cluster resources, and troubleshooting issues.
Mastering kubectl is essential for anyone working with
Kubernetes.
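A few everyday commands, assuming kubectl is already configured against a cluster (the pod name is illustrative):

```shell
kubectl version --client        # show the client version
kubectl get pods                # list Pods in the current namespace
kubectl describe pod my-pod     # show detailed information about a Pod
```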
[Trivia]
kubectl is a versatile tool, with a wide range of commands
available. Some of the most commonly used commands
include kubectl describe (to get detailed information about a
resource), kubectl logs (to fetch logs from a pod), and
kubectl exec (to execute commands inside a container). It's
important to note that kubectl communicates with the
Kubernetes API server to perform these actions, making it a
direct interface to the cluster's state and configuration.
38
Declarative Configuration with
kubectl apply
kubectl apply is used to manage Kubernetes resources using
declarative configuration files. Instead of manually creating
and updating resources, you define the desired state in a
YAML file, and kubectl apply ensures that the cluster
matches this state.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: nginx:1.19.10
        ports:
        - containerPort: 80
bash
kubectl apply -f deployment.yaml
[Result]
deployment.apps/my-deployment created
[Trivia]
One key advantage of declarative configuration with kubectl
apply is its idempotency. This means you can apply the
same configuration file multiple times without changing the
result. Kubernetes will only make the necessary changes to
align the current state of the cluster with the desired state
defined in the YAML file. Moreover, kubectl apply allows for
easier version control and collaboration, as the configuration
files can be stored in a version control system like Git,
enabling teams to manage and review changes to the
cluster configuration systematically.
39
Understanding 'kubectl create'
for Imperative Configuration
The kubectl create command is used for creating
Kubernetes resources in an imperative manner. This means
that the command directly tells Kubernetes what to do,
without needing a predefined configuration file.
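Two common imperative examples (resource names and image are illustrative):

```shell
# Create a Deployment directly from the command line
kubectl create deployment my-deployment --image=nginx:1.21.0

# Create a namespace
kubectl create namespace staging
```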
[Trivia]
The kubectl create command can create various resources,
including deployments, services, namespaces, and more.
Each resource has its own set of parameters and flags that
can be used to customize its creation. This is particularly
powerful for developers and operators who need to interact
with Kubernetes clusters in a fast and flexible way.
40
Retrieving Kubernetes Resources
with 'kubectl get'
The kubectl get command is used to retrieve information
about Kubernetes resources such as pods, services,
deployments, and more. It provides a quick overview of the
status and configuration of these resources.
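Typical invocations look like this:

```shell
kubectl get pods                 # list Pods in the current namespace
kubectl get services             # list Services
kubectl get deployments -o wide  # extra columns such as images and selectors
```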
[Trivia]
kubectl get can retrieve a wide range of resources beyond
just pods, including nodes, services, deployments,
namespaces, and more. The versatility of this command
makes it a cornerstone of Kubernetes management, often
used in scripts and automation tools to check the status of
resources before proceeding with further actions.
41
Detailed Resource Information
with kubectl describe
kubectl describe is a command used in Kubernetes to
provide detailed information about various resources in your
cluster, such as Pods, Services, Deployments, and more.
This command is essential for understanding the current
state of your resources and diagnosing issues.
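The detailed output shown below can be produced with a command of this form (the pod name my-pod is illustrative):

```shell
kubectl describe pod my-pod
```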
Name:         my-pod
Namespace:    default
Priority:     0
Node:         node-1/192.168.1.1
Start Time:   Mon, 20 Aug 2024 10:00:00 +0900
Labels:       app=my-app
Annotations:  <none>
Status:       Running
IP:           10.244.0.1
Containers:
  my-container:
    Container ID:   docker://abcdef123456
    Image:          my-image:latest
    Image ID:       docker-pullable://my-image@sha256:abcdef...
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 20 Aug 2024 10:01:00 +0900
    Ready:          True
    Restart Count:  0
    Environment:
      MY_ENV_VAR:  my-value
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-abcde (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-abcde:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-abcde
    Optional:    false
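The [Trivia] notes that follow concern kubectl logs, the companion command for fetching container output; a minimal sketch (pod and container names are illustrative):

```shell
kubectl logs my-pod -c my-container   # fetch logs from a specific container
kubectl logs --follow my-pod          # stream logs in real time
```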
[Trivia]
The -c flag specifies the container name when a Pod has
multiple containers. If the Pod has only one container, this
flag can be omitted.
Logs are stored in the container's stdout and stderr streams,
making it easy to capture application output.
You can also use the --follow flag with kubectl logs to stream
logs in real-time, which is particularly useful for monitoring
live applications.
43
Executing Commands Inside a
Kubernetes Pod
kubectl exec allows you to run commands directly inside a
container that is part of a Kubernetes Pod. This is
particularly useful for debugging and managing applications
running in your Pods.
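A directory listing like the one in the [Result] below can be produced with a command of this form (pod name and path are illustrative):

```shell
kubectl exec my-pod -- ls /app
```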
[Result]
file1.txt
file2.txt
directory1
[Trivia]
kubectl exec can also be used to start an interactive shell
session inside the container. For example, using kubectl
exec -it my-pod -c my-container -- /bin/bash allows you to
interact with the container as if you were logged into it
directly.
You can execute commands in all containers of a Pod by
omitting the -c option, but specifying the container is a good
practice to avoid ambiguity.
44
Forwarding Local Ports to a
Kubernetes Pod
kubectl port-forward is a command that allows you to
forward one or more local ports to a Pod, making it easier to
test and debug applications that are running inside
Kubernetes.
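For example, to forward local port 8080 to port 80 of a Pod (the pod name is illustrative):

```shell
kubectl port-forward pod/my-pod 8080:80
```

While the command runs, requests to localhost:8080 are tunneled to the Pod.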
[Result]
[Trivia]
kubectl port-forward can also be used with Services,
allowing you to forward traffic to a Service instead of a
specific Pod. For example, kubectl port-forward service/my-
service 8080:80 would forward local port 8080 to port 80 of
the service.
This command is great for debugging issues with services
that are not exposed externally, as it allows you to interact
with them directly from your local machine.
45
Scaling Deployments with
kubectl scale
The kubectl scale command is used to adjust the number of
replicas in a Kubernetes Deployment or ReplicaSet. This
allows you to easily manage the number of pod instances
running in your cluster.
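The [Result] below corresponds to a command of this form (the deployment name is illustrative):

```shell
kubectl scale deployment my-deployment --replicas=5
```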
[Result]
deployment.apps/my-deployment scaled
[Result]
NAME: my-nginx
LAST DEPLOYED: Tue Aug 20 15:32:15 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NAME READY STATUS RESTARTS AGE
my-nginx-6dff6c64ff-8f7h7 1/1 Running 0 1m
[Trivia]
Helm was originally developed by DeisLabs, which was later
acquired by Microsoft. It has since become one of the most
widely used tools in the Kubernetes ecosystem. Helm is
often referred to as the "apt" or "yum" of Kubernetes,
drawing an analogy to Linux package managers.
48
Kustomize: Customizing
Kubernetes YAML Without
Templates
Kustomize is a tool for customizing Kubernetes YAML files
without the need for templates. It allows you to create
overlays that modify the base Kubernetes configurations in
a declarative manner.
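The [Result] below shows a Deployment rendered with 5 replicas; an overlay of this shape could produce it. This kustomization.yaml is a hypothetical sketch — the base file name deployment.yaml and the name my-app are assumptions:

```yaml
# kustomization.yaml (illustrative)
resources:
  - deployment.yaml   # base manifest
replicas:
  - name: my-app      # override the replica count declaratively
    count: 5
```

Running kubectl kustomize . renders the customized manifests, and kubectl apply -k . applies them.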
[Result]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.17.10
[Trivia]
Kustomize was the first Kubernetes-native configuration
tool, allowing for a more declarative management style
without using templates. It was integrated into kubectl in
version 1.14, making it accessible out of the box for
Kubernetes users.
49
Running Kubernetes Locally with
Minikube
Minikube is a tool that allows developers to run a local
Kubernetes cluster on their machine. This is extremely
useful for testing and development purposes without
needing a full-scale Kubernetes setup.
minikube start
[Result]
Output similar to the following:
😄 minikube v1.25.2 on Darwin 12.0.1
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
[Trivia]
Minikube also comes with several built-in add-ons, such as
the Kubernetes Dashboard, which you can enable to
visualize your cluster's resources. This is particularly helpful
for those new to Kubernetes who prefer a graphical
interface.
50
Lightweight Kubernetes with k3s
k3s is a lightweight, easy-to-install Kubernetes distribution
designed for edge computing, IoT, and resource-constrained
environments. It's perfect for development and small-scale
production workloads.
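k3s can be installed with the official script from get.k3s.io:

```shell
# Install k3s as a systemd service
curl -sfL https://get.k3s.io | sh -

# Verify the single-node cluster is up
sudo k3s kubectl get nodes
```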
[Trivia]
k3s was originally developed by Rancher Labs and is now
part of the CNCF (Cloud Native Computing Foundation).
Despite being lightweight, k3s is fully compliant with the
Kubernetes API, so any Kubernetes tool or application should
work seamlessly with it.
51
Running Kubernetes Clusters
with Kind
Kind is a tool that allows you to run Kubernetes clusters
using Docker containers as nodes, making it easier to set up
and experiment with Kubernetes on your local machine.
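Creating and inspecting a cluster takes two commands (the cluster name dev is illustrative; Kind prefixes contexts with kind-):

```shell
kind create cluster --name dev
kubectl cluster-info --context kind-dev
```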
[Result]
[Trivia]
Kind is a great tool for learning Kubernetes because it
doesn't require a cloud provider or expensive hardware to
run clusters. It supports multi-node clusters, networking,
and most Kubernetes features, making it a practical option
for local development. Additionally, since Kind uses Docker,
it's possible to easily integrate with other tools and
workflows that are container-based.
52
Applying Kubernetes
Configurations with kubectl
The kubectl apply -f <file> command applies or updates the
configuration of Kubernetes resources based on the YAML
file provided, managing their desired state in the cluster.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.0
        ports:
        - containerPort: 80
bash
# Apply the deployment using kubectl
kubectl apply -f deployment.yaml
[Result]
deployment.apps/nginx-deployment created
[Trivia]
The kubectl apply command is declarative, meaning that it
ensures the cluster's state matches the configuration
defined in the YAML file. This is different from imperative
commands, which directly instruct Kubernetes to perform
specific actions. The declarative approach is powerful
because it allows for versioning and continuous
management of infrastructure as code, making it easier to
track changes and maintain desired states over time.
53
Deleting Kubernetes Resources
Defined in a YAML File
In Kubernetes, you can delete resources defined in a YAML
file using the kubectl delete -f <file> command. This is
useful when you want to remove all the resources specified
in that file in one go.
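For example (the file name is illustrative):

```shell
# Delete every resource defined in the file
kubectl delete -f deployment.yaml
```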
[Trivia]
The kubectl delete -f <file> command is useful in
Continuous Deployment (CD) pipelines, where you may
need to clean up resources from previous deployments
before deploying new ones. It's also commonly used in
development environments to tear down resources after
testing.
54
Listing Pods in a Specific
Namespace
To list all Pods within a specific namespace in Kubernetes,
you can use the kubectl get pods --namespace=
<namespace> command. This allows you to focus on
resources within a particular context, such as a
development or production environment.
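For example, listing the Pods of a hypothetical dev namespace:

```shell
kubectl get pods --namespace=dev
kubectl get pods -n dev   # -n is the short form
```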
[Trivia]
You can view all available namespaces in a cluster by
running kubectl get namespaces. This command lists all the
namespaces, which can be particularly helpful when
managing large clusters with many projects or
environments.
55
Understanding Resource Usage
with kubectl top
kubectl top is a command used in Kubernetes to display
metrics about resource usage for nodes and pods in a
cluster. This command is crucial for monitoring the
performance and resource consumption of your
applications.
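With the Metrics Server installed, usage can be inspected like this:

```shell
kubectl top pods    # CPU and memory usage per Pod
kubectl top nodes   # CPU and memory usage per node
```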
[Trivia]
The kubectl top command requires the Metrics Server to be
installed in your cluster. The Metrics Server collects resource
metrics from Kubelets and exposes them via the Kubernetes
API.
You can also use kubectl top nodes to view resource usage
for each node in the cluster, which is helpful for monitoring
overall cluster health.
56
Editing Resources Directly with
kubectl edit
kubectl edit is a command that allows you to modify
Kubernetes resources directly in your cluster using your
default text editor. This is a powerful feature for making
quick changes without needing to reapply a configuration
file.
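For example, to open a Deployment in your default editor (the name is illustrative):

```shell
kubectl edit deployment my-deployment
```

Saving and closing the editor applies the changes to the cluster immediately.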
[Trivia]
You can specify a different editor by setting the EDITOR
environment variable. For example, export EDITOR=nano
will use Nano as the text editor.
The kubectl edit command works for various resource types,
including pods, services, deployments, and more, making it
a versatile tool for cluster management.
57
Managing Rollouts and Rollbacks
with kubectl rollout
The kubectl rollout command is essential for managing the
deployment of applications in Kubernetes. It allows you to
initiate, monitor, and revert deployments effectively.
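The [Result] below reflects commands of this form (the deployment name my-app is illustrative):

```shell
kubectl rollout status deployment/my-app    # watch a rollout progress
kubectl rollout history deployment/my-app   # inspect previous revisions
kubectl rollout undo deployment/my-app      # roll back to the previous revision
```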
[Result]
deployment.apps/my-app created
Waiting for rollout to finish: 0 out of 1 new replicas have
been updated...
deployment "my-app" successfully rolled out
Waiting for rollout to finish: 1 out of 1 new replicas have
been updated...
deployment "my-app" successfully rolled out
Rollback to deployment "my-app" completed
[Trivia]
Rollout History: Kubernetes maintains a history of your
deployments, allowing you to see previous versions and roll
back as needed.
Deployment Strategy: By default, Kubernetes uses a rolling
update strategy, which gradually replaces old pods with new
ones to minimize downtime.
Monitoring Rollouts: You can monitor the rollout process
through the Kubernetes dashboard or by using kubectl get
pods to see the status of individual pods.
58
Debugging with kubectl get
events
The kubectl get events command provides insights into the
events occurring in your Kubernetes cluster. This
information is crucial for troubleshooting issues within your
applications and infrastructure.
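A plain listing, and a commonly used sorted variant:

```shell
kubectl get events
kubectl get events --sort-by=.metadata.creationTimestamp
```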
[Result]
LAST SEEN   TYPE     REASON      OBJECT       MESSAGE
1m          Normal   Scheduled   pod/my-pod   Successfully assigned default/my-pod to node-1
1m          Normal   Pulling     pod/my-pod   Pulling image "nginx:latest"
1m          Normal   Pulled      pod/my-pod   Successfully pulled image "nginx:latest"
1m          Normal   Created     pod/my-pod   Created container my-container
1m          Normal   Started     pod/my-pod   Started container my-container
[Trivia]
Event Types: Events can be categorized into Normal and
Warning, helping you quickly identify issues that may
require attention.
Event TTL: Events are stored in etcd with a default time-to-
live (TTL) of one hour, after which they are automatically
deleted.
Custom Events: You can create custom events in your
applications to log specific occurrences, which can be
helpful for monitoring and debugging.
59
Viewing Current Kubeconfig
Settings with kubectl config view
The kubectl config view command is used to display the
current configuration settings for Kubernetes clusters. This
includes information about clusters, contexts, and users
defined in the kubeconfig file.
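The output below is produced by:

```shell
kubectl config view
```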
apiVersion: v1
clusters:
- cluster:
    server: https://example.com:6443
    certificate-authority: /path/to/ca.crt
  name: example-cluster
contexts:
- context:
    cluster: example-cluster
    user: example-user
  name: example-context
current-context: example-context
kind: Config
preferences: {}
users:
- name: example-user
  user:
    client-certificate: /path/to/client.crt
    client-key: /path/to/client.key
[Trivia]
The kubeconfig file is typically located at ~/.kube/config on
Unix-based systems.
You can specify a different kubeconfig file using the
KUBECONFIG environment variable or the --kubeconfig flag
with kubectl commands.
The kubectl config view --minify command can be used to
show only the details of the current context, which is useful
for quick checks.
60
Switching Between Contexts
with kubectl config set-context
The kubectl config set-context command creates or modifies
context entries in the kubeconfig file, while kubectl config
use-context switches the active context. Together they are
essential for managing multiple Kubernetes clusters or
environments.
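For example (context, cluster, and user names are illustrative):

```shell
# Define or modify a context entry
kubectl config set-context dev-context --cluster=dev-cluster --user=dev-user

# Switch the active context
kubectl config use-context dev-context
```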
[Trivia]
You can view all available contexts with the command
kubectl config get-contexts.
To set a context with specific cluster and user settings, you
can use the command kubectl config set-context <context-
name> --cluster=<cluster-name> --user=<user-name>.
Remember to ensure that your kubeconfig file is properly
configured with the necessary clusters and users before
switching contexts.
61
Preparing a Node for
Maintenance with kubectl drain
The kubectl drain command is used to safely prepare a
Kubernetes node for maintenance. This is done by evicting
all running Pods from the node, ensuring that no new Pods
are scheduled until maintenance is completed.
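The output below corresponds to a command of this form:

```shell
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
```

The --ignore-daemonsets flag is usually required because DaemonSet Pods cannot be evicted.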
node/<node-name> cordoned
evicting pod default/nginx-deployment-6f77f7f8d9-w9j6q
evicting pod default/nginx-deployment-6f77f7f8d9-4x9kl
pod/nginx-deployment-6f77f7f8d9-w9j6q evicted
pod/nginx-deployment-6f77f7f8d9-4x9kl evicted
[Trivia]
DaemonSets: These are special Pods that run on every node
and are not automatically evicted by kubectl drain.
Examples include logging and monitoring agents.
Cordoning: This process marks the node as unschedulable,
meaning no new Pods can be scheduled on it, but it doesn't
affect already running Pods. This is automatically done when
you run kubectl drain.
62
Marking a Node Unschedulable
with kubectl cordon
The kubectl cordon command is used to mark a node as
unschedulable, preventing new Pods from being scheduled
on it. However, it does not affect Pods that are already
running on the node.
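Cordoning and its reverse:

```shell
kubectl cordon <node-name>    # mark the node unschedulable
kubectl uncordon <node-name>  # make it schedulable again
```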
node/<node-name> cordoned
[Trivia]
Difference between Cordoning and Draining: Cordoning
simply prevents new Pods from being scheduled on the
node. Draining goes further by also evicting existing Pods.
Maintenance Planning: Cordoning is often the first step
in maintenance planning, giving administrators the
flexibility to manage when and how Pods are relocated
without immediate pressure.
63
Understanding kubectl taint:
Controlling Pod Scheduling on
Nodes
kubectl taint is a command used in Kubernetes to add taints
to nodes. Taints allow you to control which Pods can be
scheduled on a particular node, ensuring that only specific
Pods can run on nodes that have certain conditions.
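A taint has the form key=value:effect; the output below corresponds to a command like this (key and value are illustrative):

```shell
kubectl taint nodes node-name dedicated=special:NoSchedule

# Removing the taint later: append a minus sign to the key
kubectl taint nodes node-name dedicated-
```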
node/node-name tainted
[Trivia]
Taints are used in conjunction with tolerations. A toleration
is a way for a Pod to indicate that it can tolerate a specific
taint. If a Pod does not have a toleration for a taint on a
node, it will not be scheduled on that node. This mechanism
is essential for managing resources effectively in a
Kubernetes cluster, especially in environments where
certain nodes may be reserved for specific workloads.
64
Using kubectl label: Organizing
Resources with Labels
kubectl label is a command that allows you to add labels to
Kubernetes resources. Labels are key-value pairs that help
you organize and select resources based on specific criteria.
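The output below corresponds to a command of this form (label key and value are illustrative):

```shell
kubectl label pod pod-name environment=production
```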
pod/pod-name labeled
[Trivia]
Labels are fundamental in Kubernetes for managing and
organizing resources. They can be used for various
purposes, such as deployment strategies, monitoring, and
scaling. It’s essential to plan your labeling strategy carefully,
as it can significantly simplify the management of your
Kubernetes resources. Additionally, labels can be updated or
removed, allowing for flexibility as your application evolves.
65
Adding Annotations to
Kubernetes Resources with
kubectl annotate
Annotations in Kubernetes are key-value pairs that store
metadata about resources. They are useful for attaching
non-identifying information to objects, which can be used by
external tooling and libraries.
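The output below corresponds to a command of this form (the annotation is illustrative):

```shell
kubectl annotate pod my-pod description="Demo pod for testing"
```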
pod/my-pod annotated
[Trivia]
Annotations differ from labels. While labels are used for
selecting and grouping objects, annotations are intended for
storing arbitrary metadata. This distinction is crucial for
organizing and managing Kubernetes resources effectively.
Annotations can also be used by controllers or operators to
manage resources dynamically.
66
Starting a Local Proxy to the
Kubernetes API Server with
kubectl proxy
The kubectl proxy command creates a local proxy that
allows you to interact with the Kubernetes API server. This is
useful for accessing the API without needing to authenticate
directly.
kubectl proxy
[Result]
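With the proxy running (it listens on port 8001 by default), the API can be queried without credentials, for example:

```shell
curl http://localhost:8001/version          # cluster version info
curl http://localhost:8001/api/v1/pods      # list Pods across namespaces
```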
[Trivia]
Using kubectl proxy is a common practice for developers
who want to interact with the Kubernetes API without
dealing with authentication tokens or certificates. The proxy
handles these complexities, allowing for a more
straightforward approach to testing and debugging API calls.
Additionally, the proxy can be configured to limit access to
specific namespaces or resources, enhancing security
during development.
67
Copying Files Between a Pod and
the Local Filesystem Using
kubectl cp
The kubectl cp command allows you to copy files or
directories between a Kubernetes Pod and your local
machine’s filesystem, enabling file transfer operations
without needing direct access to the Pod’s container.
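Both directions follow the namespace/pod:path form (names and paths are illustrative):

```shell
# Copy a local file into a Pod
kubectl cp ./config.txt default/my-pod:/tmp/config.txt

# Copy a file from a Pod to the local machine
kubectl cp default/my-pod:/var/log/app.log ./app.log
```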
[Result]
If successful, the command will complete without errors,
and the file will appear at the specified destination on the
Pod or your local machine.
[Trivia]
If the file copy operation fails, it could be due to incorrect
file paths, insufficient permissions, or the Pod not being in a
Running state. Always ensure the Pod is running and paths
are correctly specified. Additionally, kubectl cp can only
copy files between a local system and a Pod, not between
Pods.
68
Understanding Kubernetes Flat
Networking Model
Kubernetes employs a flat networking model, meaning
every Pod within a cluster can communicate with every
other Pod by default, regardless of the node on which they
reside.
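Pod-to-Pod traffic can be restricted with a NetworkPolicy; a minimal sketch in which only Pods labeled app: frontend may reach Pods labeled app: backend (both labels are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend
spec:
  podSelector:
    matchLabels:
      app: backend      # the Pods this policy protects
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # the only Pods allowed in
```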
[Result]
[Trivia]
While the flat networking model is powerful, it’s essential to
secure inter-Pod communication using Network Policies,
especially in multi-tenant environments or when dealing
with sensitive data. Network Policies allow you to restrict
traffic between Pods, controlling which Pods can
communicate with others based on labels, namespaces, and
other criteria.
69
Understanding Pod Lifecycle
Events in Kubernetes
Pod lifecycle events in Kubernetes refer to the different
phases that a Pod goes through from its creation to its
termination. The key phases include Pending, Running,
Succeeded, Failed, and Unknown.
[Result]
Pending / Running / Succeeded / Failed / Unknown (one of these, depending on the Pod's phase)
Pending: The Pod has been accepted by the Kubernetes
system, but one or more of the containers inside the Pod
has not been created. This phase could involve image
pulling, container creation, or network setup.
Running: The Pod has been bound to a node, and all of the
containers have been created. At least one container is
still running, or is in the process of starting or restarting.
Succeeded: All containers in the Pod have terminated
successfully, and will not be restarted.
Failed: All containers in the Pod have terminated, and at
least one container has terminated in a failure (exited
with a non-zero status).
Unknown: The state of the Pod cannot be determined, usually
because the communication between the Pod and the node
has failed.
[Trivia]
Kubernetes uses a controller to ensure that the desired
number of Pods are running at any given time. For example,
a Deployment controller ensures that a specific number of
Pods are up and running, and if a Pod fails, it is
automatically replaced.
70
Ensuring Pod Health with
Liveness and Readiness Probes
Liveness and Readiness Probes in Kubernetes are
mechanisms to ensure that a Pod is healthy and ready to
receive traffic. Liveness Probes detect if a container is still
alive, while Readiness Probes determine if the container is
ready to start accepting traffic.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-readiness-pod
spec:
  containers:
  - name: myapp-container
    image: myapp:latest
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /readiness
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
[Result]
[Trivia]
Liveness Probes can be implemented using different
methods such as HTTP GET requests, TCP sockets, or
executing a command inside the container. Readiness
Probes can also be customized similarly, making them
versatile for different use cases.
71
Initialization Tasks with Init
Containers in Kubernetes
Init containers in Kubernetes are special containers that run
before the main application containers in a Pod. They are
used for setup tasks that must complete before the main
application containers can start.
apiVersion: v1
kind: Pod
metadata:
  name: init-container-example
spec:
  initContainers:
  - name: init-db-check
    image: busybox
    command: ['sh', '-c', 'until nslookup my-database; do echo waiting for database; sleep 2; done;']
  containers:
  - name: app-container
    image: my-app-image
    ports:
    - containerPort: 80
[Result]
[Trivia]
Init containers can access the same volumes as regular
containers, which is useful for setting up files or
configuration data needed by the main application.
If an init container fails to start due to an error,
Kubernetes will restart it according to the Pod's restart
policy until it succeeds or the Pod is deleted.
72
Networking and Storage in Multi-
container Pods
In Kubernetes, when multiple containers are defined within a
single Pod, they share the same network namespace. This
means they can communicate with each other using
localhost as if they were running within the same
environment. However, they can have separate storage
volumes, allowing for more flexible data management.
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod-example
spec:
  volumes:
  - name: shared-storage
    emptyDir: {}
  containers:
  - name: app-container
    image: nginx
    volumeMounts:
    - name: shared-storage
      mountPath: /usr/share/nginx/html
  - name: sidecar-container
    image: busybox
    command: ['sh', '-c', 'echo "Hello from sidecar" > /usr/share/nginx/html/index.html']
    volumeMounts:
    # The sidecar must also mount the shared volume so its write is visible to nginx
    - name: shared-storage
      mountPath: /usr/share/nginx/html
[Result]
[Trivia]
Kubernetes uses the concept of "sidecar" containers in
multi-container Pods, where an auxiliary container supports
the main application container by handling tasks such as
logging, monitoring, or data synchronization.
Pods, being the smallest deployable unit in Kubernetes,
ensure that all containers within them are tightly coupled
and meant to work together, sharing resources like CPU,
memory, and storage.
73
Kubernetes Supports OCI-
Compliant Container Images
Kubernetes allows you to pull and run container images
from any Open Container Initiative (OCI)-compliant registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80
To deploy this configuration, save it to a file named nginx-
deployment.yaml and run the following command:
kubectl apply -f nginx-deployment.yaml
[Result]
deployment.apps/nginx-deployment created
[Trivia]
OCI, or Open Container Initiative, was established in 2015 to
create open standards for container formats and runtimes.
This initiative ensures that container technologies from
different vendors can interoperate, fostering a more flexible
and versatile ecosystem for containerized applications.
74
New Features in the Latest
Kubernetes Stable Version
The latest stable version of Kubernetes includes various new
features and improvements that enhance the platform's
functionality and user experience.
[Result]
[Trivia]
Kubernetes follows a regular release cycle, typically every
three to four months. Each new release brings updates, new
features, performance improvements, and security patches.
It's essential for Kubernetes administrators to stay informed
about these updates to take full advantage of the platform's
evolving capabilities.
75
Difference Between StatefulSets
and Deployments in Kubernetes
StatefulSets and Deployments are two core Kubernetes
resources used to manage applications. Deployments are
ideal for stateless applications, ensuring that any pod can
be replaced by another without affecting the application. On
the other hand, StatefulSets are specifically designed for
stateful applications where the order and identity of pods
are crucial.
# Deployment Example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.4
        ports:
        - containerPort: 80
---
# StatefulSet Example
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.4
        ports:
        - containerPort: 80
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
[Result]
[Trivia]
StatefulSets are often used in scenarios where applications
require stable storage or are highly sensitive to the order of
operations, such as in databases (e.g., MySQL, Cassandra)
or other distributed systems requiring consistent network
identities.
In a StatefulSet, the pod's hostname is derived from the
StatefulSet name and the ordinal index of the pod
(e.g., web-0), allowing the pod to be consistently
addressable by other resources.
76
Introduction to Network Plugins
(CNI) in Kubernetes
Kubernetes uses the Container Network Interface (CNI) to
extend and manage network capabilities for pods. CNIs are
responsible for establishing network connectivity, assigning
IP addresses to pods, and ensuring consistent
communication between them.
[Result]
[Trivia]
The CNI specification was originally developed by CoreOS
and is now a CNCF project. The goal is to create a common
interface that different networking solutions can implement,
allowing Kubernetes to be flexible with various network
environments.
Some advanced CNIs like Cilium use eBPF (extended
Berkeley Packet Filter) for efficient networking and security
features directly within the Linux kernel, providing enhanced
performance and observability in large-scale environments.
77
Cluster Autoscaler: Dynamically
Managing Node Count
The Cluster Autoscaler is a powerful Kubernetes component
that automatically adjusts the number of nodes in your
cluster based on the resource usage of your applications. It
ensures that your cluster has enough resources to run your
workloads efficiently while minimizing costs by scaling down
when resources are underutilized.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/cluster-autoscaler:v1.21.0
        command:
        - ./cluster-autoscaler
        - --cloud-provider=gce
        - --nodes=1:10:YOUR_NODE_POOL_NAME
        - --v=4
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /etc/gce-service-account/key.json
        volumeMounts:
        - name: gce-service-account
          mountPath: /etc/gce-service-account
      volumes:
      - name: gce-service-account
        secret:
          secretName: gce-service-account
[Result]
[Trivia]
The Cluster Autoscaler can only scale up if the underlying
infrastructure supports it (e.g., if you're using a cloud
provider).
It requires specific IAM permissions to manage resources in
the cloud environment.
The Cluster Autoscaler operates independently of the
Horizontal Pod Autoscaler, which scales the number of pod
replicas based on CPU or memory usage.
78
Ingress Controllers: Essential for
Managing Ingress Resources
Ingress Controllers are crucial components in Kubernetes
that manage Ingress resources, allowing you to define how
external HTTP/S traffic should be routed to your services.
They act as a bridge between the external world and your
internal Kubernetes services.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --default-backend-service=$(POD_NAMESPACE)/default-backend
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
[Result]
[Trivia]
Ingress Controllers can handle SSL termination, allowing you
to secure your applications with HTTPS.
There are various Ingress Controllers available, including
NGINX, Traefik, and HAProxy, each with unique features and
configurations.
Ingress resources can also be used to set up load balancing
and path-based routing, enhancing the flexibility of your
Kubernetes applications.
79
Understanding
PodDisruptionBudgets in
Kubernetes
PodDisruptionBudgets (PDBs) are a critical feature in
Kubernetes that help maintain application availability during
maintenance events by limiting the number of Pods that can
be disrupted simultaneously.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app
[Result]
[Trivia]
PodDisruptionBudgets are particularly useful in scenarios
where applications require high availability, such as in
production environments. They work in conjunction with
other Kubernetes features like Eviction and Drain, which are
used during node maintenance. It's important to note that
PDBs only apply to voluntary disruptions, such as node
maintenance or scaling down, and do not prevent
involuntary disruptions like crashes or node failures.
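As a sketch of a voluntary disruption, draining a node evicts its pods while honoring any PodDisruptionBudgets (node-1 is a hypothetical node name):

```shell
# Drain evicts pods from the node but will not violate the PDB;
# it blocks or retries rather than dropping below minAvailable.
kubectl drain node-1 --ignore-daemonsets

# Inspect the budget's current allowed disruptions
kubectl get pdb my-app-pdb
```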
80
The "API First" Approach in
Kubernetes
Kubernetes employs an "API first" approach, meaning that
all interactions with the system are performed through API
calls, making it highly extensible and adaptable.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
    # apiextensions.k8s.io/v1 requires a structural schema per version
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              config:
                type: string
  scope: Namespaced
  names:
    plural: myapps
    singular: myapp
    kind: MyApp
    shortNames:
    - ma
---
apiVersion: example.com/v1
kind: MyApp
metadata:
  name: my-first-app
spec:
  config: "example configuration"
Apply this CRD:
kubectl apply -f myapp-crd.
[Result]
customresourcedefinition.apiextensions.k8s.io/myapps.exa
mple.com created
myapp.example.com/my-first-app created
You can check the custom resource by running:
kubectl get myapps
Expected output:
NAME           AGE
my-first-app   10s
[Trivia]
CRDs are part of Kubernetes’ API extension mechanism, introduced in Kubernetes 1.7. Over 70% of Kubernetes operators use CRDs to define the state of custom resources. With CRDs, you can implement advanced features like versioning, validation, and subresources (e.g., status and scale).
82
Managing Stateful Workloads
with Kubernetes Operators
Operators are a method of packaging, deploying, and
managing a Kubernetes application. Operators extend
Kubernetes’ capabilities to manage complex, stateful
workloads, providing operational knowledge that is encoded
into a custom controller.
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/manager/signals"
)

func main() {
	mgr, err := manager.New(config.GetConfigOrDie(), manager.Options{})
	if err != nil {
		fmt.Println("Error creating manager:", err)
		return
	}
	// The manager's cached client only works after Start, so use the
	// direct API reader here. The custom resource is read as unstructured
	// data because no generated Go types are registered for MyApp.
	myApp := &unstructured.Unstructured{}
	myApp.SetGroupVersionKind(schema.GroupVersionKind{
		Group: "example.com", Version: "v1", Kind: "MyApp",
	})
	key := client.ObjectKey{Name: "my-first-app", Namespace: "default"}
	if err := mgr.GetAPIReader().Get(context.Background(), key, myApp); err != nil {
		fmt.Println("Error getting MyApp resource:", err)
		return
	}
	cfg, _, _ := unstructured.NestedString(myApp.Object, "spec", "config")
	fmt.Printf("Found MyApp with config: %s\n", cfg)
	// A real operator would register a controller and reconciler with the
	// manager before starting it; Start blocks until a termination signal.
	if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
		fmt.Println("Error starting manager:", err)
	}
}
Build and deploy this operator within your Kubernetes
cluster.
[Result]
[Trivia]
The Operator pattern was introduced by CoreOS in 2016. Operators can be written in several languages, with Go being the most common due to its tight integration with Kubernetes. Advanced Operators often include features like leader election, Prometheus metrics, and reconcilers that continually monitor the desired state of resources.
83
Bootstrapping Kubernetes
Clusters with kubeadm
kubeadm is a powerful tool designed to simplify the process
of setting up and bootstrapping Kubernetes clusters. It
provides a straightforward way to initialize a master node,
join worker nodes, and manage the overall cluster setup
with minimal configuration.
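As a sketch, a typical bootstrap flow looks like this (the pod network CIDR, token, and addresses are placeholders; kubeadm init prints the exact join command to run on workers):

```shell
# Initialize the control plane node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl access for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker node, run the join command printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```

After initialization, a network plugin such as Calico or Flannel must be applied before pods can communicate.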
[Result]
[Trivia]
kubeadm was introduced as part of the Kubernetes 1.4 release to simplify cluster setup and has since become the de facto tool for bootstrapping clusters. The use of kubeadm is not limited to single control plane setups; it can also be used to set up high-availability (HA) clusters with multiple control plane nodes. The networking model in Kubernetes is unique because it requires all pods in the cluster to be able to communicate with each other without NAT, which is why deploying a network plugin like Calico or Flannel is mandatory after initialization.
84
Understanding Resource
Requests and Limits in
Kubernetes
Resource requests and limits in Kubernetes are mechanisms
that allow you to control how much CPU and memory a
container can use. Properly configuring these values
ensures efficient resource utilization and helps prevent any
single container from monopolizing resources, which could
degrade the performance of other containers in the same
node.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
[Result]
Resource requests and limits are vital for ensuring that your
Kubernetes cluster runs efficiently. The requests field
specifies the minimum amount of CPU and memory that the
container will get, while the limits field specifies the
maximum resources the container can use. If a node doesn’t
have enough resources to meet the request, the pod won’t
be scheduled on that node. The CPU request is measured in
CPU units, where 1 CPU equals one physical core or one
virtual core in the cloud. Memory is measured in bytes, with
common values being Mi for mebibytes and Gi for gibibytes.
If a container exceeds its memory limit, it might be
terminated by the kubelet, and if it exceeds its CPU limit, it
will be throttled. Ensuring proper requests and limits can
prevent resource contention and ensure that critical
applications have the necessary resources to function
correctly.
[Trivia]
CPU in Kubernetes is measured in millicores (m). For example, 500m equals 0.5 CPU, meaning half a CPU core. Pods without specified resource requests are considered to have zero requests, meaning they can be starved of resources if other pods have defined requests. The Kubernetes scheduler uses these requests to decide which nodes to place pods on, so defining them correctly is crucial for ensuring pods land on nodes with sufficient resources.
85
Kubernetes: Multi-Cloud and On-
Premises Support
Kubernetes is a powerful container orchestration platform
that supports deployment across multiple cloud providers
and can also be run on-premises. This flexibility allows
organizations to choose the best environment for their
applications based on their specific needs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app-container
        image: nginx:latest
        ports:
        - containerPort: 80
[Result]
[Trivia]
Kubernetes was originally developed by Google and is now
maintained by the Cloud Native Computing Foundation
(CNCF).
It uses a declarative configuration model, meaning you
describe the desired state of your application, and
Kubernetes works to maintain that state.
Kubernetes supports various networking models, enabling
communication between containers across different
environments.
86
Managed Kubernetes Services:
Simplifying Cluster Management
Kubernetes clusters can be easily managed through
managed services such as Google Kubernetes Engine (GKE),
Amazon Elastic Kubernetes Service (EKS), and Azure
Kubernetes Service (AKS). These services simplify the
deployment, management, and scaling of Kubernetes
clusters, allowing developers to focus on building
applications rather than managing infrastructure.
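For illustration, each provider exposes cluster creation through its own CLI (cluster and resource-group names below are placeholders):

```shell
# Google Kubernetes Engine
gcloud container clusters create my-cluster --num-nodes=3

# Amazon EKS, via the eksctl helper
eksctl create cluster --name my-cluster --nodes 3

# Azure Kubernetes Service
az aks create --resource-group my-rg --name my-cluster --node-count 3
```

Each command provisions the control plane and worker nodes and wires up kubectl credentials, which is exactly the work these managed services take off your hands.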
[Trivia]
Managed Kubernetes services often include built-in
integrations with other cloud services, such as storage and
networking, making it easier to build complex applications.
They typically provide automatic scaling features, allowing
clusters to adjust resources based on workload demands.
These services also enhance security by providing features
like automatic updates and built-in monitoring tools.
87
Introduction to Istio: Managing
Service-to-Service
Communication in Kubernetes
Istio is a service mesh that integrates with Kubernetes to
manage and control the communication between
microservices. It offers advanced traffic management,
security features, and observability for services running
within a Kubernetes cluster.
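A minimal sketch of installing Istio and enabling sidecar injection for a namespace (the demo profile is for evaluation, not production):

```shell
# Install Istio's control plane with the demo profile
istioctl install --set profile=demo -y

# Label a namespace so new pods get the Envoy sidecar injected
kubectl label namespace default istio-injection=enabled
```

Pods created in the labeled namespace after this point are automatically meshed; existing pods must be restarted to pick up the sidecar.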
[Result]
[Trivia]
Istio’s observability tools allow you to monitor and trace
service behavior across your entire application stack. This
includes automatic collection of metrics, logs, and traces.
The system integrates with popular tools like Prometheus,
Grafana, and Jaeger to provide deep insights into service
performance and issues.
88
Understanding Metrics Server in
Kubernetes for Resource
Monitoring
Metrics Server is a cluster-wide aggregator of resource
usage data in Kubernetes. It collects metrics like CPU and
memory usage from nodes and pods, providing the data
necessary for scaling operations and resource monitoring.
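The output below assumes Metrics Server was deployed and queried roughly as follows (the manifest URL points at the project's published release artifact):

```shell
# Deploy Metrics Server from its official release manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify the deployment
kubectl get deployment metrics-server -n kube-system

# Optional: inspect node and pod resource usage
kubectl top nodes
kubectl top pods
```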
[Result]
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1         1         1            1           30s
If you run the optional commands:
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node-1   250m         12%    500Mi           25%
node-2   300m         15%    550Mi           27%
[Trivia]
Metrics Server is not deployed by default in Kubernetes
clusters. It is also different from Heapster, an older tool that
was deprecated in favor of Metrics Server. Metrics Server
can be paired with Prometheus for more advanced
monitoring setups, where Prometheus can handle long-term
storage and complex querying of metrics data.
89
Monitoring Kubernetes Clusters
with Prometheus
Prometheus is a powerful open-source monitoring system
widely used to collect and store metrics from Kubernetes
clusters. It enables developers and operators to gain
insights into the performance and health of their
applications and infrastructure.
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  ports:
  - port: 9090
    targetPort: 9090
  selector:
    app: prometheus
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus
        ports:
        - containerPort: 9090
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--web.listen-address=:9090"
        volumeMounts:
        - name: config-volume
          mountPath: /etc/prometheus
        - name: data-volume
          mountPath: /prometheus
      volumes:
      - name: config-volume
        configMap:
          name: prometheus-config
      - name: data-volume
        emptyDir: {}
[Result]
[Trivia]
Prometheus uses a pull model to collect metrics, meaning it
scrapes data from targets at specified intervals. It supports
multi-dimensional data collection and querying, allowing
users to slice and dice metrics in various ways. Additionally,
Prometheus is designed to work seamlessly with
Kubernetes, automatically discovering services and
endpoints.
90
Visualizing Metrics with Grafana
Grafana is an open-source analytics and monitoring platform
that integrates well with Prometheus, allowing users to
create visually appealing dashboards to display metrics
collected from Kubernetes clusters.
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    app: grafana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana
        ports:
        - containerPort: 3000
        env:
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "admin" # Default admin password
[Result]
[Trivia]
Grafana supports a wide range of data sources beyond
Prometheus, including InfluxDB, Elasticsearch, and MySQL. It
provides powerful visualization capabilities, allowing users
to create graphs, heatmaps, and alerts. Grafana dashboards
can be shared and customized, making it a popular choice
for teams monitoring their applications and infrastructure.
91
Backing Up and Restoring
Kubernetes Clusters with Velero
Velero is a powerful open-source tool used for backing up,
restoring, and migrating Kubernetes clusters and persistent
volumes. It provides a reliable way to protect your
Kubernetes environment from accidental data loss or
disaster.
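The output shown below corresponds to commands along these lines (backup and restore names match the example output; Velero must already be installed in the cluster with a storage provider configured):

```shell
# Back up the entire cluster
velero backup create my-cluster-backup

# Restore from that backup
velero restore create restore-1 --from-backup my-cluster-backup
```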
[Result]
The result will display the progress and completion status of the backup and restore operations. For example, you may see output like:
Backup request "my-cluster-backup" submitted successfully.
Run `velero backup describe my-cluster-backup` or `velero backup logs my-cluster-backup` for details.
Restore request "restore-1" submitted successfully.
Run `velero restore describe restore-1` or `velero restore logs restore-1` for details.
[Trivia]
Velero was originally called "Heptio Ark" before being
renamed. It was created by Heptio, a company founded by
Kubernetes co-creators Joe Beda and Craig McLuckie, which
was later acquired by VMware. Velero is now maintained by
VMware and is one of the most widely used backup tools for
Kubernetes environments.
92
Running Security Checks on
Kubernetes with Kube-bench
Kube-bench is a tool that checks whether Kubernetes is
deployed securely by running checks defined in the CIS
(Center for Internet Security) Kubernetes Benchmark.
# Install Kube-bench
curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.5/kube-bench_0.6.5_linux_amd64.tar.gz | tar -xz
chmod +x kube-bench
sudo mv kube-bench /usr/local/bin/
# Run Kube-bench to check your Kubernetes cluster security
kube-bench --config-dir cfg --config cfg/config.
# View the detailed report
cat kube-bench-results.json | jq .
[Result]
[Trivia]
The CIS Kubernetes Benchmark is a comprehensive guide
with over 200 recommendations for securing Kubernetes. It
covers everything from API server configurations to network
policies. Adhering to these guidelines is essential for
maintaining a secure Kubernetes environment, particularly
in regulated industries where compliance is mandatory.
93
Understanding Kubernetes
RBAC: Roles and Bindings
Kubernetes Role-Based Access Control (RBAC) is a method
for regulating access to resources based on the roles of
individual users within an organization. It uses roles, role
bindings, cluster roles, and cluster role bindings to define
permissions.
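As a sketch, a namespaced read-only role and a binding for a hypothetical user alice can be created imperatively:

```shell
# Role allowing read access to pods in the default namespace
kubectl create role pod-reader --verb=get --verb=list --verb=watch \
  --resource=pods -n default

# Bind the role to the user alice
kubectl create rolebinding alice-pod-reader --role=pod-reader \
  --user=alice -n default

# Verify what alice is now allowed to do
kubectl auth can-i get pods --as alice -n default
```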
[Result]
[Trivia]
RBAC is a powerful feature in Kubernetes that helps manage
permissions effectively, especially in large teams or
organizations.
Cluster roles and cluster role bindings function similarly to
roles and role bindings but are applicable across all
namespaces, making them suitable for cluster-wide
permissions.
To check the permissions granted to a user, you can use the
command kubectl auth can-i get pods --as alice -n default.
94
Kubernetes Auditing: Tracking
Resource Access and
Modifications
Auditing in Kubernetes is a feature that logs requests made
to the API server, providing a way to track who accessed
what resources and when. This is essential for security and
compliance.
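A minimal sketch of enabling auditing (file paths are assumptions; the flags are passed to the kube-apiserver, typically via its static pod manifest):

```shell
# Write a minimal audit policy that logs request metadata for everything
sudo tee /etc/kubernetes/audit-policy.yaml <<'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
EOF

# kube-apiserver flags that enable auditing:
#   --audit-policy-file=/etc/kubernetes/audit-policy.yaml
#   --audit-log-path=/var/log/audit.log
```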
[Result]
No direct output is generated from the audit policy
configuration. However, once the API server is running with
auditing enabled, you can check the audit log by viewing
the specified log file:
cat /var/log/audit.log
[Trivia]
Kubernetes auditing can be configured to log different levels
of detail, including metadata, request bodies, and response
bodies, depending on the sensitivity of the data and the
needs of the organization.
You can analyze audit logs using various tools, including ELK
stack (Elasticsearch, Logstash, Kibana) for better
visualization and monitoring.
Auditing is a key component of security best practices in
Kubernetes, especially in production environments.
95
Kubernetes Secrets Are Base64-
Encoded, Not Encrypted
Kubernetes Secrets are a way to store sensitive information,
such as passwords, OAuth tokens, and SSH keys, in your
cluster. However, by default, these secrets are only base64-
encoded, which is not the same as encryption.
# Create a secret
kubectl create secret generic my-secret --from-literal=username=admin --from-literal=password=secret123
# Retrieve the secret
kubectl get secret my-secret -o yaml
[Result]
apiVersion: v1
data:
  password: c2VjcmV0MTIz
  username: YWRtaW4=
kind: Secret
metadata:
  name: my-secret
  namespace: default
type: Opaque
[Trivia]
Base64 encoding is often confused with encryption, but they
serve different purposes. Base64 is used for encoding binary
data into text, while encryption is a method of securing data
by converting it into a format that cannot be easily
understood without the proper decryption key.
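Because the values are only base64-encoded, anyone with read access to the Secret can trivially recover them:

```shell
# Decode the password value from the Secret above
echo 'c2VjcmV0MTIz' | base64 --decode
# prints: secret123
```

For real protection, enable encryption at rest for etcd or use an external secret manager.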
96
Dynamic Storage Provisioning in
Kubernetes with StorageClasses
Kubernetes supports dynamic provisioning of storage,
allowing persistent volumes to be created automatically
when needed using StorageClasses. This feature simplifies
the management of storage resources by removing the
need for administrators to manually provision storage.
# Define a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: none
---
# Create a PersistentVolumeClaim using the StorageClass
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast-storage
[Result]
[Trivia]
Dynamic provisioning allows for more efficient resource
utilization, as storage is allocated only when needed.
Additionally, by using different StorageClasses,
administrators can offer varying levels of performance and
redundancy, tailored to the specific needs of different
applications.
97
Understanding ClusterIP,
NodePort, and LoadBalancer in
Kubernetes
ClusterIP, NodePort, and LoadBalancer are service types in
Kubernetes that define how a service is exposed to the
network. They determine whether the service is accessible
only within the cluster, from outside, or through an external
load balancer.
# nginx-deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
# clusterip-service.
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
# nodeport-service.
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30001
  type: NodePort
---
# loadbalancer-service.
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
[Result]
[Trivia]
The ClusterIP service type is often used in conjunction with
an Ingress resource to manage external access to services
without exposing each one individually. An Ingress controller
can route traffic to different services based on the
requested hostnames or paths, offering more flexibility than
NodePort or LoadBalancer alone.
98
Advanced Networking with
Service Mesh: Using Linkerd
Service mesh tools like Linkerd enhance the networking
capabilities of Kubernetes by providing features such as
automatic retries, load balancing, and secure
communication between services. They operate at a layer
above the basic Kubernetes services.
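A minimal sketch of installing Linkerd and meshing a namespace (exact flags vary by Linkerd version):

```shell
# Validate the cluster, then install the control plane
linkerd check --pre
linkerd install | kubectl apply -f -

# Annotate a namespace so new pods get the Linkerd proxy injected
kubectl annotate namespace default linkerd.io/inject=enabled
```

Once meshed, features like mutual TLS and automatic retries apply to traffic between pods in that namespace without any application changes.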
[Result]
[Trivia]
Linkerd, originally developed by Buoyant, was the first
service mesh project to join the Cloud Native Computing
Foundation (CNCF). Unlike some other service meshes,
Linkerd focuses on simplicity, security, and speed, making it
a popular choice for those new to service meshes or running
smaller clusters.
99
Understanding Kubernetes
Versioning and Upgrade
Importance
Kubernetes uses a semantic versioning scheme (e.g., 1.x.y)
to manage its releases. Understanding this versioning
system is crucial for maintaining cluster stability and
ensuring compatibility with applications. Regular upgrades
are essential to benefit from new features, security patches,
and performance improvements.
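With kubeadm-managed clusters, an upgrade is typically checked and applied like this (the target version below is a placeholder):

```shell
# Show client and server versions
kubectl version

# Preview which versions the cluster can be upgraded to
sudo kubeadm upgrade plan

# Apply the upgrade to the control plane
sudo kubeadm upgrade apply v1.28.0
```

Note that kubeadm only upgrades one minor version at a time, so a cluster several versions behind must step through each release.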
[Result]
[Trivia]
Kubernetes follows a quarterly release cycle for minor
versions, and it is recommended to upgrade your cluster at
least once a year to stay current with the latest features and
security fixes. You can also use tools like kubeadm for
managing cluster upgrades, which provides a safe and
controlled upgrade path.
100
Kubernetes Controllers: The
Heart of Cluster Management
Kubernetes controllers are control loops that monitor the
state of your cluster and make necessary adjustments to
achieve the desired state defined by the user. They play a
vital role in maintaining the health and performance of
applications running in Kubernetes.
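You can watch a control loop at work: creating a Deployment makes its controller create a ReplicaSet, whose controller in turn creates pods and replaces any that disappear (the deployment name web is arbitrary):

```shell
# The Deployment controller creates a ReplicaSet for the desired state
kubectl create deployment web --image=nginx --replicas=3
kubectl get replicaset -l app=web

# Delete a pod; the ReplicaSet controller recreates it
kubectl delete pod -l app=web --wait=false
kubectl get pods -l app=web
```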
[Result]
[Trivia]
There are various types of controllers in Kubernetes,
including ReplicaSets, Deployments, StatefulSets, and
DaemonSets. Each type serves a specific purpose, such as
managing stateless applications, stateful applications, or
ensuring that a pod runs on every node in the cluster.
Understanding how these controllers work is essential for
effectively managing applications in Kubernetes.
101
Kubernetes Multi-Tenancy:
Securely Sharing Clusters
Kubernetes supports multi-tenancy, allowing multiple users
or teams to share the same cluster while maintaining
security and resource isolation. This is essential for
organizations that want to maximize resource utilization
without compromising security.
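A common sketch of namespace-based tenancy: give each team its own namespace and cap its aggregate resource usage with a quota (names and limits below are placeholders):

```shell
# Namespace per team
kubectl create namespace team-a

# Cap the team's total CPU, memory, and pod count
kubectl create quota team-a-quota --hard=cpu=4,memory=8Gi,pods=10 -n team-a
```

Combined with RBAC bindings scoped to the namespace, this keeps one team's workloads from consuming another team's share of the cluster.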
[Result]
[Trivia]
Multi-tenancy is crucial for cloud environments where
multiple customers share the same infrastructure.
Kubernetes uses Role-Based Access Control (RBAC) to
manage permissions for users and groups within
namespaces, enhancing security.
You can also use Network Policies to control traffic between
pods in different namespaces.
102
Transition from Pod Security
Policies to Pod Security
Admission
Pod Security Policies (PSPs) were used to control the
security settings of pods in Kubernetes, but they have been
deprecated in favor of the Pod Security Admission (PSA)
mechanism. PSA provides a simpler and more effective way
to enforce security standards for pods.
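Pod Security Admission is configured with namespace labels; as a sketch (my-namespace is a placeholder):

```shell
# Enforce the baseline standard; warn about violations of restricted
kubectl label namespace my-namespace \
  pod-security.kubernetes.io/enforce=baseline \
  pod-security.kubernetes.io/warn=restricted

# Inspect the namespace's security labels
kubectl get namespace my-namespace -o jsonpath='{.metadata.labels}'
```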
[Result]
map[pod-security.kubernetes.io/enforce:privileged]
[Trivia]
The PSA mechanism provides three levels of security:
Privileged, Baseline, and Restricted, allowing for flexible
security implementations.
The transition from PSP to PSA was made to simplify
security management and reduce the overhead associated
with defining complex policies.
It is recommended to adopt the Restricted level for most
workloads to minimize security risks.
Afterword
◆