
Containerization and Micro Services
CCA3010
Module 4

Syllabus
Monitoring, Logging and Resource Management:
Inspecting a container, Monitoring in Kubernetes, Logging Events, Scheduling Workloads, Elastically Scaling, Managing Cluster Resources
Inspecting a container

• In Kubernetes, container inspection refers to the process of examining the attributes, configuration, status, and
behavior of containers running within pods. Container inspection allows administrators, developers, and
operators to gather detailed information about containers to debug issues, troubleshoot problems, monitor
performance, ensure security compliance, and manage resources effectively.

Container inspection in Kubernetes typically involves various tasks, including:

1. Viewing Container Logs: Inspecting logs generated by containers is essential for diagnosing issues and
monitoring application behavior. Kubernetes provides commands like kubectl logs to retrieve logs from
containers running within pods. These logs contain valuable information about application activity, errors,
warnings, and other events.

2. Executing Commands Inside Containers: Kubernetes allows administrators to execute commands inside
containers for debugging and troubleshooting purposes. The kubectl exec command enables users to open a shell
session or run specific commands within a container. This capability is particularly useful for inspecting the
container's filesystem, environment variables, network connections, and other runtime attributes.
Inspecting a container
3. Monitoring Resource Usage: Monitoring resource utilization by containers helps ensure optimal performance
and efficient resource allocation. Kubernetes provides commands like kubectl top to view resource usage
statistics for containers, pods, and nodes in the cluster. These statistics include CPU and memory usage, which
are critical for identifying resource bottlenecks and optimizing resource allocation.

4. Inspecting Container Configuration: Examining the configuration of containers provides insights into their
runtime parameters, such as environment variables, volume mounts, and container images. Kubernetes resources
like pods and deployments contain configuration specifications that define how containers are created and
managed. Inspecting these specifications helps ensure that containers are configured correctly and consistently
across different environments.

5. Verifying Security Compliance: Container inspection is crucial for verifying that containers adhere to security
best practices and compliance requirements. Administrators can examine container attributes, such as filesystem
permissions, network policies, environment variables, and image provenance, to ensure that containers are
running securely and comply with organizational security policies and regulatory standards.
Inspecting a container

6. Troubleshooting Networking Issues: Kubernetes networking is complex, involving multiple layers of
abstraction and network plugins. Inspecting container networking attributes, such as IP addresses, ports, DNS
settings, and network policies, helps diagnose networking issues and ensure that containers can communicate with
each other and external services correctly.

Overall, container inspection is an essential aspect of managing containerized workloads in Kubernetes. It
provides visibility into containers' runtime behavior, configuration, and resource usage, enabling administrators
and developers to maintain the health, performance, security, and reliability of applications running in Kubernetes
clusters.
Commands that are used in container inspection

Inspecting a container in Kubernetes involves gathering detailed information about its configuration, status, logs,
and resource usage. Here's how you can inspect a container in Kubernetes in detail:

1. kubectl describe: Use the kubectl describe command to get detailed information about a specific Kubernetes
resource, such as a pod, deployment, or service. For example, to describe a pod:

kubectl describe pod <pod_name>

This command will display information about the pod, including its status, events, labels, annotations, and more.

2. kubectl logs: Use the kubectl logs command to retrieve logs from a specific container within a pod. You can
specify the pod name and container name to fetch logs. For example:

kubectl logs <pod_name> -c <container_name>

This command will display logs from the specified container.
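
Two standard variations are worth noting: the -f flag streams logs as they are written, and --previous retrieves logs from the last terminated instance of a container, which is useful when a container is crash-looping:

kubectl logs -f <pod_name> -c <container_name>
kubectl logs <pod_name> -c <container_name> --previous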


Commands that are used in container inspection
3. kubectl exec: Use the kubectl exec command to execute commands inside a running container. This can be
helpful for troubleshooting or debugging purposes. For example:

kubectl exec -it <pod_name> -c <container_name> -- /bin/bash

This command opens a shell session inside the specified container, allowing you to run commands and inspect its
environment.
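
If the container image does not include bash, /bin/sh is a common fallback, and one-off commands can also be run without an interactive shell:

kubectl exec -it <pod_name> -c <container_name> -- /bin/sh
kubectl exec <pod_name> -c <container_name> -- env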

4. kubectl top: Use the kubectl top command to view resource usage statistics for pods, nodes, or containers in
the cluster. For example:
kubectl top pod <pod_name>

This command displays CPU and memory usage for the specified pod.
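
Note that kubectl top relies on the metrics server (or another Metrics API provider) being installed in the cluster. Per-container usage within a pod can be shown with the --containers flag:

kubectl top pod <pod_name> --containers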

5. kubectl get: Use the kubectl get command to list resources in the cluster, including pods, deployments,
services, and more. For example:
kubectl get pods
This command lists all pods in the current namespace along with their status and other relevant information.
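
To inspect a pod's full configuration (see item 4 of the inspection tasks above), the -o flag renders the complete resource specification, and -o wide adds columns such as the pod's IP address and node:

kubectl get pod <pod_name> -o yaml
kubectl get pods -o wide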
Commands that are used in container inspection

By utilizing these commands and tools, you can inspect containers in Kubernetes comprehensively, gaining
insights into their configuration, status, logs, and resource usage to diagnose issues and ensure smooth operation
of your applications.
Importance of Monitoring in Kubernetes
1. Ensuring Availability and Reliability: Kubernetes manages containerized workloads across a cluster of
nodes, automatically scheduling and scaling applications based on resource demands. Monitoring ensures that
all components of the Kubernetes cluster, including nodes, pods, and services, are available and functioning
correctly. Monitoring alerts operators to any issues or failures, allowing them to take corrective actions
promptly and minimize downtime.
2. Optimizing Resource Utilization: Kubernetes dynamically allocates compute resources, such as CPU and
memory, to pods based on resource requests and limits. Monitoring resource usage provides insights into how
effectively resources are utilized across the cluster. By analyzing resource metrics, operators can identify
inefficiencies, overprovisioned or underutilized resources, and optimize resource allocation to maximize
efficiency and cost-effectiveness.
3. Detecting and Diagnosing Issues: Containers running in Kubernetes are ephemeral and can be terminated and
replaced at any time. Monitoring helps detect and diagnose issues with containerized applications, such as
performance bottlenecks, errors, or failures. By monitoring application metrics, logs, and events, operators can
pinpoint the root cause of issues quickly and take corrective actions to restore normal operation.
4. Scaling Applications Dynamically: Kubernetes supports horizontal scaling of applications through
mechanisms like Horizontal Pod Autoscaler (HPA) based on CPU or custom metrics. Monitoring provides the
necessary data to make scaling decisions. By monitoring application metrics and performance indicators,
Kubernetes can automatically scale up or down the number of pod replicas to meet changing demand, ensuring
optimal performance and resource utilization.
Importance of Monitoring in Kubernetes
5. Ensuring Security and Compliance: Monitoring helps ensure the security and compliance of Kubernetes
clusters and containerized applications. By monitoring system and application logs, operators can detect security
incidents, unauthorized access attempts, or policy violations. Monitoring also provides visibility into the runtime
behavior of containers, helping verify that security controls, such as network policies, resource quotas, and access
controls, are enforced correctly.
6. Capacity Planning and Forecasting: Monitoring historical data allows operators to perform capacity planning
and forecasting for future resource requirements. By analyzing trends and patterns in resource usage over time,
operators can anticipate growth, plan capacity upgrades, and ensure that the Kubernetes cluster can accommodate
the evolving needs of applications and workloads.
7. Meeting Service Level Objectives (SLOs) and Service Level Agreements (SLAs): Monitoring enables
organizations to track key performance indicators (KPIs) and metrics related to service availability, performance,
and reliability. By monitoring service-level objectives (SLOs) and comparing them against service-level
agreements (SLAs), organizations can ensure that they meet their commitments to customers and stakeholders,
proactively identify and address issues that may impact service quality, and continuously improve the overall
reliability and performance of their Kubernetes-based applications and services.
In summary, monitoring in Kubernetes is essential for ensuring the availability, reliability, performance, security,
and cost-effectiveness of containerized applications and Kubernetes clusters. It provides visibility into system and
application health, helps detect and diagnose issues, enables proactive capacity planning and optimization, and
supports compliance with security and operational requirements.
Logging Events in Kubernetes
In Kubernetes, logging events provide valuable information about the state and activities of the cluster, including
its components, applications, and infrastructure. Logging events help administrators, developers, and operators
understand what is happening within the cluster, diagnose issues, troubleshoot problems, and monitor the health
and performance of Kubernetes resources.

1. Event Logging in Kubernetes:

   1. Kubernetes records events for various actions, state changes, and errors that occur within the cluster.
   These events are stored in an event log and can be accessed using the Kubernetes API or command-line
   tools (see the example commands below).

   2. Events are associated with specific Kubernetes resources, such as pods, nodes, deployments, services, and
   namespaces. Each event contains metadata, including the resource type, name, namespace, event type
   (Normal or Warning), timestamp, and a message describing the event.

   3. Events provide insights into various lifecycle events, such as pod scheduling, container creation, pod
   termination, node status changes, and service creation.
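
Events can also be listed directly; for example, to list recent events sorted by time, or to show only warnings:

kubectl get events --sort-by=.lastTimestamp
kubectl get events --field-selector type=Warning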
Logging Events in Kubernetes

• Commands that are used to interpret event messages in Kubernetes:

➢ To interpret event messages in Kubernetes, you can use the kubectl describe command followed by the
resource type and name.

kubectl describe <resource_type> <resource_name>

Replace <resource_type> with the type of Kubernetes resource you want to inspect (e.g., pod, deployment,
node, service) and <resource_name> with the name of the specific resource you're interested in.
Logging Events in Kubernetes

For example, to view events for a pod named my-pod, you would use the following command:

kubectl describe pod my-pod

This command will display detailed information about the specified pod, including any events associated with it.
You can interpret the event messages to understand the state changes, actions, or errors that occurred within the
cluster related to the specified resource.
Scheduling workloads

• Scheduling workloads in Kubernetes is a critical aspect of managing containerized applications efficiently.
Kubernetes uses a scheduler component to assign pods to nodes based on available resources, constraints, and other
factors. Here's an overview of how scheduling works in Kubernetes:
1. Pod Definition: To schedule a workload in Kubernetes, you define a Pod specification. This specification includes
details like the container image, resource requirements (CPU, memory), volume mounts, and labels (a sample manifest appears at the end of this overview).
2. Scheduler: The Kubernetes scheduler watches for newly created pods that have no node assigned and selects a
node for them to run on. The scheduler considers various factors when making this decision, including:
1. Resource requirements and availability: It ensures that the node has enough resources (CPU, memory) to
accommodate the pod's requirements.
2. Affinity and anti-affinity: Pods can be scheduled based on their affinity or anti-affinity with other pods or
nodes, ensuring co-location or spreading across different nodes.
3. Node selectors and node affinity: Pods can specify node selectors or affinity rules to indicate preferences
for nodes where they should be scheduled.
4. Pod priority and preemption: Kubernetes supports pod priorities, allowing higher priority pods to be
scheduled first. It also supports preemption, where lower priority pods may be evicted to make room for
higher priority ones.
Scheduling workloads

3. Scheduling Decisions: Once the scheduler evaluates all the constraints and considerations, it assigns the pod to
a suitable node. If no suitable node is available, the pod remains in the Pending state and the scheduler retries
until it can be placed.

4. Node Controller: Once a pod is scheduled, the kubelet on the assigned node runs it and keeps its containers
alive. If the node fails or becomes unreachable, the node controller marks the node unhealthy and evicts its pods;
workload controllers (such as a ReplicaSet) then create replacement pods, which the scheduler places onto healthy
nodes.

5. Custom Schedulers: Kubernetes also supports custom schedulers, allowing users to implement their own
scheduling logic based on specific requirements or policies.

Overall, Kubernetes' scheduling mechanism is highly flexible and can be customized to meet diverse workload
requirements and operational constraints.
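
As a minimal sketch of the scheduling inputs described above, the following Pod manifest declares resource requests and limits plus a node selector; the names, label, and image are illustrative, not taken from this module:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod               # illustrative name
  labels:
    app: demo
spec:
  nodeSelector:
    disktype: ssd              # schedule only onto nodes carrying this label (assumed label)
  containers:
  - name: app
    image: nginx:1.25          # example image
    resources:
      requests:                # guaranteed amount; used by the scheduler when choosing a node
        cpu: "250m"
        memory: "128Mi"
      limits:                  # hard cap the container may not exceed
        cpu: "500m"
        memory: "256Mi"

Applying this manifest with kubectl apply -f pod.yaml hands the Pod to the scheduler, which filters out nodes lacking the disktype=ssd label or sufficient free capacity before binding the Pod to a node.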
Elastically Scaling
• Elastic scaling in Kubernetes refers to the ability to automatically adjust the number of replicas or instances of
a workload (such as pods) based on demand or predefined metrics. Kubernetes provides several mechanisms to
achieve elastic scaling:
1. Horizontal Pod Autoscaler (HPA): HPA automatically scales the number of pods in a deployment, replica set,
or stateful set based on observed CPU utilization or custom metrics. You define the minimum and maximum
number of replicas, as well as the target CPU utilization or custom metric value. When the observed metrics
exceed or fall below the thresholds, HPA adjusts the number of replicas accordingly.
2. Vertical Pod Autoscaler (VPA): VPA adjusts the resource requests (CPU and memory) of pods dynamically
based on resource usage metrics. It helps to optimize resource utilization by adjusting the resource requests to
match the actual usage patterns of pods.
3. Cluster Autoscaler: Cluster Autoscaler automatically adjusts the size of the Kubernetes cluster by adding or
removing nodes based on the resource demands of pods and node utilization. When pods cannot be scheduled
due to insufficient resources, Cluster Autoscaler provisions additional nodes. Conversely, it removes nodes
when their resources are underutilized to save costs.
4. Custom Metrics and External Scaling: Besides CPU and memory metrics, Kubernetes supports scaling based
on custom metrics. You can use external monitoring systems like Prometheus to collect application-specific
metrics and configure HPAs to scale based on those metrics. For example, you could scale based on the number
of requests per second or queue length.
Elastically Scaling

Here's a basic workflow for setting up horizontal pod autoscaling:

1. Enable the Kubernetes metrics server, which collects resource usage metrics from nodes and pods.

2. Define an HPA resource for your deployment, specifying the minimum and maximum number of replicas, and
the target CPU utilization or custom metric (see the example manifest below).

3. Monitor the HPA behavior using kubectl get hpa.

4. As the workload's resource demands fluctuate, the HPA adjusts the number of replicas to maintain the desired
performance and resource utilization levels.
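
A minimal HPA manifest matching step 2 might look like the following; the Deployment name and thresholds are illustrative:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                  # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # assumes a Deployment named "web" exists
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU exceeds 70% of requests

A similar autoscaler can also be created imperatively with kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70.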

By leveraging these elastic scaling mechanisms, Kubernetes enables applications to automatically respond to
changing demands, ensuring optimal resource utilization, performance, and cost-efficiency.
Managing Cluster Resources

• Managing cluster resources in Kubernetes involves efficiently allocating and managing compute, storage, and
networking resources across nodes to support the workloads running on the cluster. Here are some key aspects of
managing cluster resources:
1. Resource Quotas: Kubernetes allows you to define resource quotas at the namespace level to limit the aggregate
resource consumption of pods, deployments, and other objects within that namespace. Resource quotas can be set
for CPU, memory, and storage resources, as well as the number of objects like pods, services, and persistent volume
claims (see the example manifest after this list).
2. Resource Requests and Limits: Pods can specify resource requests and limits for CPU and memory. Resource
requests are the amount of resources that a pod is guaranteed to receive, while limits are the maximum amount of
resources a pod can consume. Kubernetes uses these values for scheduling decisions and resource management.
Setting appropriate requests and limits helps prevent resource contention and ensures fair resource allocation.
3. Node Affinity and Anti-affinity: Node affinity and anti-affinity rules allow you to influence pod placement based
on node labels. Affinity rules ensure that pods are scheduled onto nodes that satisfy certain criteria, such as the
presence of specific labels or the absence of others. Anti-affinity rules allow you to spread pods across different
nodes to improve fault tolerance and availability.
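
As a sketch of items 1 and 2, a namespace-level ResourceQuota might look like this; the quota name, namespace, and values are illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota             # illustrative name
  namespace: team-a            # assumed namespace
spec:
  hard:
    requests.cpu: "4"          # total CPU requests allowed across the namespace
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                 # cap on the number of pods in the namespace

Once such a quota covers compute resources, pods in the namespace must declare requests and limits for those resources (as in the Pod example earlier) or they will be rejected at admission, unless a LimitRange supplies defaults.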
Managing Cluster Resources
4. Horizontal Pod Autoscaler (HPA): HPA automatically adjusts the number of pod replicas based on observed CPU
utilization or custom metrics. By dynamically scaling the number of replicas up or down, HPA ensures that the
workload can handle fluctuations in demand while optimizing resource utilization.
5. Cluster Autoscaler: Cluster Autoscaler automatically adjusts the size of the Kubernetes cluster by adding or
removing nodes based on the resource demands of pods and node utilization. This helps ensure that there are enough
resources available to support the workloads running on the cluster without over-provisioning.
6. Monitoring and Alerts: Monitoring cluster resource usage and setting up alerts for resource constraints or
abnormalities is essential for proactive management. Tools like Prometheus and Grafana can be used to collect and
visualize cluster metrics, allowing you to identify performance bottlenecks and optimize resource allocation.
• By effectively managing cluster resources, Kubernetes ensures that workloads can run reliably and efficiently,
maximizing resource utilization and minimizing operational overhead.
