CND Unit 2

1) Discuss in detail Kubernetes Deployments and Pods

Kubernetes Deployments and Pods

Kubernetes is an orchestration platform that manages containerized applications across a cluster of nodes. It
automates deployment, scaling, and operations of application containers, making it easier to manage
applications consistently in different environments. Two fundamental concepts in Kubernetes are Deployments
and Pods, which are critical to understanding how Kubernetes operates.

1. Deployments

A Deployment is a Kubernetes resource that provides declarative updates to applications. It serves as a higher-
level abstraction that allows you to manage a group of Pods. A Deployment ensures that a specified number of
Pods are running and in a healthy state at all times.

Key Features of Deployments:

• Automated Rollouts and Rollbacks: Deployments allow you to roll out changes to your applications
incrementally. If something goes wrong, you can roll back to a previous state.

• Scaling: Deployments enable horizontal scaling of applications. You can scale up or down the number of
Pods based on the load on your application.

• Self-Healing: Deployments automatically replace or reschedule Pods that fail, ensuring that the desired state
is maintained.

How Deployments Work:

When you create a Deployment, you specify a container image, the number of replicas (Pods), and other
configurations. The Deployment creates a ReplicaSet, which in turn ensures that the desired number of identical
Pods is running at all times. If a Pod crashes or is deleted, the ReplicaSet will automatically create a new Pod to
maintain the desired state.

For example, when you run the following command:

kubectl create deployment demo --image=cloudnatived/demo:hello

Kubernetes creates a Deployment named demo using the specified container image cloudnatived/demo:hello.

To view the status of your Deployment, you can use:

kubectl get deployments

This command shows you how many replicas are running, how many are up-to-date, and how many are
available.
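The rollout, rollback, and scaling features described earlier map onto simple kubectl commands. As a sketch, the replica count of 3 is purely illustrative, and the rollback only applies once at least one new rollout has happened:

kubectl scale deployment demo --replicas=3
kubectl rollout undo deployment demo
kubectl rollout status deployment demo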

You can also get detailed information about the Deployment with:

kubectl describe deployments/demo
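The same Deployment can also be expressed declaratively as a YAML manifest and applied with kubectl apply -f. The sketch below assumes the demo application listens on port 8888; the replica count and labels are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: cloudnatived/demo:hello
          ports:
            - containerPort: 8888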


2. Pods

A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your
cluster and is the basic unit of deployment in Kubernetes. A Pod can encapsulate one or more containers,
storage resources, a unique network IP, and options that govern how the container(s) should run.

Key Features of Pods:

• Single or Multiple Containers: While a Pod usually contains a single container, it can also contain multiple
containers that need to share resources.

• Shared Network and Storage: Containers within a Pod share the same network IP and storage resources,
making it easy for them to communicate with each other.

• Ephemeral Nature: Pods are ephemeral, meaning they can be created and destroyed dynamically. Kubernetes manages this through Deployments and ReplicaSets, which recreate Pods as needed to keep the cluster in its desired state.

How Pods Work:

When you create a Deployment, it doesn't manage Pods directly. Instead, it creates a ReplicaSet that manages
the Pods. The ReplicaSet ensures that the correct number of Pods is running at any given time. If a Pod fails or is
deleted, the ReplicaSet creates a new one to replace it.

You can view the Pods created by a Deployment using:

kubectl get pods --selector app=demo
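Although Pods are normally created indirectly through a Deployment, a standalone Pod manifest helps illustrate the shared network and storage model. The sketch below is illustrative: two containers in one Pod share an emptyDir volume (and the Pod's IP address); the names and images are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: app
      image: cloudnatived/demo:hello
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data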


3. ReplicaSets

A ReplicaSet is a Kubernetes resource that ensures a specified number of identical Pods are running at any given
time. It is responsible for maintaining the desired state of the Pod replicas defined in the Deployment.

Key Features of ReplicaSets:

• Replica Management: A ReplicaSet ensures that the desired number of Pods is always running. If there are
too few Pods, it will create new ones. If there are too many, it will terminate the excess Pods.

• Versioning: When you update a Deployment, a new ReplicaSet is created to manage the updated Pods. The
old ReplicaSet and its Pods are eventually terminated once the new Pods are up and running.

In most cases, you interact with Deployments rather than directly with ReplicaSets because Deployments
manage the creation and updating of ReplicaSets for you.
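You can still observe ReplicaSets at work. As a hedged sketch, assuming the demo Deployment from earlier (with its container also named demo; the :v2 image tag is hypothetical), updating the image creates a second ReplicaSet that gradually replaces the first:

kubectl get replicasets --selector app=demo
kubectl set image deployment/demo demo=cloudnatived/demo:v2
kubectl rollout status deployment demo
kubectl get replicasets --selector app=demo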
4. Maintaining Desired State

Kubernetes uses a concept called the reconciliation loop to ensure that the actual state of your cluster matches
the desired state defined in your resource manifests (such as Deployments). The reconciliation loop constantly
checks the state of your resources and makes adjustments as necessary.

For instance, if you manually delete a Pod managed by a Deployment, Kubernetes will notice that the desired
state (a specific number of replicas) does not match the actual state (one less Pod) and will create a new Pod to
replace the one that was deleted.
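You can watch this happen with a few commands. The placeholder below stands for an actual Pod name reported by the first command:

kubectl get pods --selector app=demo
kubectl delete pod <one-of-the-demo-pod-names>
kubectl get pods --selector app=demo

Within a few seconds the ReplicaSet notices the shortfall and starts a replacement Pod to restore the desired replica count.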

This self-healing feature is one of the core strengths of Kubernetes, as it ensures high availability and resilience
of applications running in the cluster.

Managing the Container Life Cycle in Kubernetes

Kubernetes, as a container orchestration platform, is responsible for ensuring that your containerized applications
are running smoothly and efficiently. It does so by managing the lifecycle of containers, which includes monitoring
their health and ensuring they are ready to serve requests. This management is facilitated by several types of
probes: Liveness Probes, Readiness Probes, and Startup Probes.

1. Liveness Probes

A Liveness Probe is a health check that Kubernetes uses to determine whether a container is still functioning as
expected. Containers may sometimes get into a state where they are not functioning properly but haven’t crashed.
For example, a web server might be running but stuck in a loop, unable to process new requests. Liveness probes
help Kubernetes detect such issues.

• HTTP Probe: The most common type of liveness probe is an HTTP probe, where Kubernetes sends an HTTP
request to a specified endpoint (e.g., /healthz) and port (e.g., 8888). If the application responds with an HTTP
status code in the 2xx or 3xx range, the container is considered healthy. If not, Kubernetes will restart the
container.
• TCP Socket Probe: For applications that do not use HTTP, a TCP socket probe can be used. Kubernetes
attempts to establish a TCP connection to the specified port, and if successful, the container is considered
alive.

• Exec Probe: This probe runs a command inside the container. If the command exits with a status of 0, the
container is considered healthy.

Probe Delay and Frequency

• initialDelaySeconds: This field defines how long Kubernetes should wait after the container starts before
performing the first liveness check. This prevents premature checks that could result in the container being
restarted unnecessarily.

• periodSeconds: This field defines how often Kubernetes should perform the liveness check.

• failureThreshold: This field specifies the number of consecutive failures required before Kubernetes
considers the container to be unhealthy and restarts it.
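Putting these fields together, a container spec might declare an HTTP liveness probe like the following sketch. The /healthz endpoint and port 8888 follow the example above; the timing values are illustrative:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8888
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3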

2. Readiness Probes

A Readiness Probe is used to determine if a container is ready to serve traffic. This is crucial for scenarios where a
container might take some time to initialize and should not receive traffic until it is fully ready.

• HTTP Probe: Similar to the liveness probe, the readiness probe often checks an HTTP endpoint to determine
if the container is ready.

When a container fails its readiness probe, it is temporarily removed from the Service’s endpoints, meaning it won’t
receive any traffic until it is ready again. Unlike a liveness probe, a failing readiness probe does not cause the
container to be restarted.
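A readiness probe is declared in the same way; only the field name changes. The /ready endpoint and timing values below are illustrative assumptions:

readinessProbe:
  httpGet:
    path: /ready
    port: 8888
  initialDelaySeconds: 5
  periodSeconds: 5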
3. Startup Probes

Startup Probes are designed for containers that take a long time to start up. They provide an additional mechanism
to ensure that Kubernetes does not prematurely kill a container that is still starting.

• Configuration: A startup probe uses the same configuration structure as liveness and readiness probes but is
specifically designed to ensure that the application has started up before liveness checks begin.

The failureThreshold is typically higher for a startup probe, giving the application more time to initialize.
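For example, the following sketch gives the application up to 30 × 10 = 300 seconds to start before liveness checks take over (the endpoint, port, and numbers are illustrative):

startupProbe:
  httpGet:
    path: /healthz
    port: 8888
  periodSeconds: 10
  failureThreshold: 30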

4. gRPC Probes

For applications using gRPC instead of HTTP, Kubernetes can use a gRPC health-checking protocol with an exec
probe. Since gRPC is a binary protocol, an HTTP-based probe will not work. Instead, you can use tools like grpc-
health-probe to integrate gRPC health checks with Kubernetes.
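As a sketch, assuming the grpc_health_probe binary has been copied into the container image at /bin/grpc_health_probe and the gRPC server listens on port 50051 (both assumptions), an exec-based liveness probe might look like this:

livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:50051"]
  initialDelaySeconds: 10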

5. File-Based Readiness Probes

Another alternative is to use file-based probes where the application creates or deletes a file to indicate its readiness
or unavailability.

• Configuration: This can be achieved using an exec probe that checks for the presence of a file.

If the file exists, the probe succeeds, and the container is considered ready. If the file is deleted (for debugging or
other reasons), the probe fails, and the container is removed from the Service endpoints.
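A minimal sketch, assuming the application touches the (illustrative) file /tmp/ready once it is able to serve traffic; cat exits with status 0 only if the file exists:

readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]
  periodSeconds: 5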
Chaos Testing in Kubernetes

Chaos testing, also known as chaos engineering, is the practice of intentionally introducing failures and
unpredictable conditions into a system to test its resilience and ability to recover. In a Kubernetes environment,
chaos testing helps ensure that your applications and services remain highly available and reliable even when faced
with unexpected disruptions, such as node failures or pod terminations.

The core idea behind chaos testing is to "trust, but verify." You can't simply assume that your systems are resilient;
you need to actively test and verify their behavior under stress. This is typically done by automating the process of
causing random failures and observing how the system responds.

Key Chaos Testing Tools

Several tools are available for chaos testing in Kubernetes, each with its unique features and use cases. Here’s a brief
overview of some popular ones:

1. chaoskube

• Overview: chaoskube is a simple and effective tool that randomly kills pods in your Kubernetes cluster. It
helps test the resilience of your applications by simulating pod failures.

• Features:

o Dry-Run Mode: By default, chaoskube operates in dry-run mode, where it shows you what it would
do without actually killing any pods. This allows you to test your configurations safely.

o Filtering: You can configure chaoskube to include or exclude specific pods based on labels,
annotations, or namespaces. It also allows you to avoid certain time periods or dates to prevent
disruptions during critical times.
• Use Case: chaoskube is ideal for getting started with chaos engineering due to its simplicity and ease of
setup. It’s useful for organizations looking to introduce basic chaos testing into their Kubernetes
environments.
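As a rough sketch of an invocation (the flag names are recalled from the chaoskube README and should be verified against the version you install; the label, namespace, and schedule values are illustrative):

chaoskube --interval=10m --labels 'app=demo' --namespaces 'staging' --excluded-weekdays=Sat,Sun

Adding --no-dry-run switches chaoskube out of its safe dry-run default so that it actually terminates the selected pods.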

2. kube-monkey

• Overview: kube-monkey is a chaos testing tool designed to randomly terminate Kubernetes pods based on a
predefined schedule. Unlike chaoskube, kube-monkey operates on an opt-in basis, meaning only pods with
specific annotations will be targeted.

• Features:

o Scheduled Testing: kube-monkey runs at preset times (e.g., 8 a.m. on weekdays) and schedules pod
terminations for later in the day.

o Opt-In Testing: Pods must be explicitly opted into kube-monkey's chaos testing by using
annotations. This allows for selective and controlled chaos testing.

o Customization: You can customize the frequency and aggressiveness of the chaos testing, such as
setting the mean time between failures (MTBF) and the percentage of pods to be terminated.

• Use Case: kube-monkey is suited for more controlled chaos testing scenarios, where you want to gradually
introduce chaos testing into your development and production environments.
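A hedged sketch of the opt-in markers, which kube-monkey's documentation defines as labels set on the Deployment and its pod template (the identifier and numeric values below are illustrative):

metadata:
  labels:
    kube-monkey/enabled: enabled
    kube-monkey/identifier: demo
    kube-monkey/mtbf: "2"
    kube-monkey/kill-mode: "fixed"
    kube-monkey/kill-value: "1"

Here mtbf is the mean time between failures (in days), and kill-mode together with kill-value control how many of the target's pods are terminated in a run.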

3. PowerfulSeal

• Overview: PowerfulSeal is a versatile chaos engineering tool that works in both interactive and autonomous
modes. It allows you to manually or automatically inject failures into your Kubernetes cluster.

• Features:

o Interactive Mode: This mode lets you manually explore and break things in your cluster, such as
terminating nodes, namespaces, deployments, and individual pods.

o Autonomous Mode: In this mode, PowerfulSeal runs automatically based on a set of user-defined
policies. These policies define which resources to target, when to run the tests, and how aggressive
the chaos testing should be.

o Flexibility: PowerfulSeal’s policy files offer extensive flexibility, allowing you to create custom chaos
engineering scenarios tailored to your specific needs.

• Use Case: PowerfulSeal is ideal for advanced chaos testing scenarios, where you need more control and
customization over how and when chaos is introduced into your cluster.

Using Kubernetes Namespaces

Namespaces in Kubernetes are a way of logically dividing a single Kubernetes cluster into multiple virtual clusters.
They provide a mechanism to isolate resources within the cluster, allowing teams, environments, or different stages
of application development (like production, staging, testing) to coexist without conflicts. This is especially useful for
large clusters that serve multiple applications or teams.

Why Use Namespaces? (3 Marks)

• Isolation: Namespaces provide a way to isolate different parts of your application or different teams in a
single cluster. This is akin to having separate directories on a computer for different projects.

• No Naming Conflicts: Names in one namespace are invisible to another. This means you can have multiple
resources with the same name in different namespaces without conflict.
• Resource Management: Namespaces allow you to apply resource limits, such as the number of Pods or total
CPU and memory usage, ensuring fair resource distribution across teams.

Working with Namespaces (3 Marks)

1. Listing Namespaces:
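You can list all namespaces in a cluster with:

kubectl get namespaces

Every cluster starts with a few built-in namespaces: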

• default: The default namespace where resources are placed unless another namespace is specified.

• kube-system: Contains Kubernetes system components.

• kube-public: Typically used for public, read-only data.

2. Creating a Namespace: You can create a namespace using a YAML manifest:
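A minimal sketch (the name staging is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: staging

Apply it with kubectl apply -f namespace.yaml (assuming that file name), or create one directly with kubectl create namespace staging.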


3. Using Namespaces: To operate within a specific namespace, you can specify it with the --namespace flag (or -n for short):
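For example, again using the illustrative staging namespace:

kubectl get pods --namespace staging
kubectl -n staging get pods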

4. Resource Quotas: ResourceQuotas are used to limit the resource usage within a namespace. This ensures that no single namespace consumes all the resources of the cluster.

Example of a ResourceQuota manifest:
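A sketch of such a manifest; the limits chosen here are arbitrary:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi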

Best Practices (4 Marks)

1. Avoid Using the Default Namespace: Always create and use separate namespaces for different applications,
teams, or environments. The default namespace is shared and can lead to accidental resource conflicts.

2. Namespace Naming Convention: Use clear and consistent naming conventions for namespaces, such as
team-app-env (e.g., finance-app-prod), to easily identify and manage them.

3. Resource Quotas: Apply ResourceQuotas to each namespace to enforce limits on resource usage, preventing
any single team or application from monopolizing cluster resources.

4. Network Policies: Use Network Policies to control the traffic in and out of namespaces. This adds another
layer of isolation, ensuring that only authorized communication occurs between namespaces.
