
The Modern Guide to Container Monitoring and Orchestration

Since the introduction of the concept in 2013, containers have become the buzz of the IT world. It’s easy to see why: Application container technology is revolutionizing app development, bringing previously unimagined flexibility and efficiency to the development process.

Businesses are embracing containers in droves. According to Gartner, more than 85% of global enterprises will be running containerized applications in production by 2025, up from less than 35% in 2019. Mass adoption makes it clear that organizations need to adopt a container-based development approach to stay competitive.

Let’s look at what’s involved with containerization and how your organization can gain an edge.

What is a container?

The easiest way to understand the concept of a container is to consider its namesake. A physical container is a receptacle used to hold and transport goods from one location to another.

A software container performs a similar function. It allows you to package up an application’s code, configuration files, libraries, system tools and everything else needed to execute that app into a self-contained unit, so you can move and run it anywhere.

Containers are a key component of a “microservices” approach. This approach breaks applications down into single-function modules that are accessed only when they’re needed. A developer can modify and redeploy a particular service — not the whole application — whenever changes are required.
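To make this concrete, here’s a minimal sketch of how an app might be packaged into a container image with Docker. The base image, file names and start command are illustrative assumptions, not taken from any particular project:

# Write a minimal Dockerfile describing the self-contained unit
# (python:3.11-slim, requirements.txt and app.py are hypothetical)
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
EOF

# Build the image once...
docker build -t sample-app .

# ...then run it anywhere a container engine is available
docker run --rm sample-app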

Why are containers such a big deal?

Containers remedy an all-too-common problem in operations: getting software to run reliably and uniformly no matter where it is deployed. As an app moves from one computing environment to another — from staging to production, for example — it can run into problems if the operating system, network topology, security policies or other aspects of the environment are different. Containers isolate the app from its environment, abstracting away these environmental differences. Containers also prevent issues due to different components requiring different versions of the same shared library or other dependency, by including all the dependencies within the container.

Prior to containers, virtual machines (VMs) were the primary method for running many isolated applications on a single server. Like containers, VMs abstract away a machine’s underlying infrastructure so that hardware and software changes won’t affect app performance. But there are significant differences in how each does this.

A VM abstracts hardware to turn a physical server into several virtual ones. It does so by running on top of a hypervisor, which itself runs on a physical computer called the “host machine.” The hypervisor is essentially a coordination system that arbitrates access to the host machine’s resources — CPU, RAM, etc. — making them available to the VM or “guest machine.” The apps and everything required to run them, including libraries and system binaries, are contained in the guest machine. Each guest machine also includes a complete operating system of its own. So a server running four VMs, for example, would have four operating systems in addition to the hypervisor coordinating them all. That’s a lot of demand on one machine’s resources, and things can bog down in a hurry, ultimately limiting how many VMs a single server can operate.

Containers, on the other hand, abstract at the operating system level. A single host operating system runs on the host (this can be a physical server, VM or cloud host), and the containers — using a containerization engine like the Docker Engine — share that OS’s kernel with other containers, each with its own isolated user space. There’s much less overhead here than with a virtual machine, and as a result, containers are far more lightweight and resource-efficient than VMs — allowing for much greater utilization of server resources.
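You can see this OS-level sharing for yourself. The following sketch assumes a Linux host with Docker installed; both commands print the same kernel version, because the container brings its own user space but no kernel of its own:

# The host's kernel version
uname -r

# An Alpine container reports the same kernel, because it shares the host's
docker run --rm alpine uname -r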

5 benefits of deploying containers

A container-based infrastructure offers a host of benefits. Here are the five biggest.

1. Speed of delivery — Applications installed on a virtual machine typically take several minutes to launch. Containers don’t have to wait for an operating system boot, so they can start up in a fraction of a second. They also run faster since they use fewer host OS resources, and they only take a few seconds to create, clone or destroy. All of this has a dramatic impact on the development process, allowing organizations to more quickly get software to market, fix bugs and add new features.

2. DevOps-first approach — The speed, small footprint and resource efficiency of microservice-based containers make them ideal for a DevOps environment. A microservice-based infrastructure enables developers to own specific parts of the application end-to-end, making sure that they can fully understand how it works, optimize its performance, and troubleshoot any issues more efficiently than with monolithic applications.

3. Portability — Containers pack up the app and all of its dependencies. That makes it easy to move and reliably run on Windows, Linux or Mac hardware. Containers can run on bare metal or on virtual servers, and within public or private clouds. This also helps avoid vendor lock-in should you need to move your apps from one public cloud environment to another.

4. Increased scalability — Containers tend to be small because they don’t require a separate OS the way that VMs do. One container is typically sized on the order of tens of megabytes, whereas a single VM can be tens of gigabytes — roughly 1,000 times the size of a container. That efficiency allows you to run many more containers on a single host operating system, increasing scalability.

5. Consistency — Because containers retain all dependencies and configuration internally, they ensure developers are able to work in a consistent environment regardless of where the containers are deployed. That means developers won’t have to waste time troubleshooting environmental differences and can focus on addressing new app functionality. It also means you can take the same container from development to production when it’s time to go live. Finally, because containers are immutable once created, developers don’t have to worry about configuration differences across the deployment or other sources of troubleshooting difficulty.
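As a rough illustration of the startup-speed claim in benefit 1, you can time a full create-run-destroy cycle on any machine with Docker installed. Exact numbers vary by host, and the first run includes an image download:

# Run a trivial command in a container and remove it afterward;
# with the alpine image already cached locally, this typically
# completes in well under a second
time docker run --rm alpine true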

Orchestration 101: using Kubernetes

To get started with container orchestration, you need specialized software to deploy, manage and scale containerized applications. One of the most well-established and popular choices today is Kubernetes, an open-source automation platform developed by Google and now managed by the Cloud Native Computing Foundation.

Kubernetes can dramatically enhance the development process by simplifying container management, automating updates and scaling, and minimizing downtime so developers can focus on improving and adding new features to applications. To better understand how, let’s look at Kubernetes’ basic components and how they work together.

Kubernetes uses multiple layers of abstraction defined within its own unique language. There are many parts to Kubernetes. This list isn’t exhaustive, but it provides a simplified look at how hardware and software is represented in the system.

Nodes: In Kubernetes lingo, any single “worker machine” is a node. It can be a physical server or virtual machine on a cloud provider such as AWS or Microsoft Azure. Nodes were originally called “minions,” which gives you an idea of their purpose. They receive and perform tasks assigned from the master node and contain all the services required to manage and assign resources to containers.

Master node: This is the machine that orchestrates all the worker nodes and is your point of interaction with Kubernetes. All assigned tasks originate here.

Cluster: A cluster represents a master node and several worker nodes. Clusters consolidate all of these machines into a single, powerful unit. Containerized applications are deployed to a cluster, and the cluster distributes the workload to various nodes, shifting work around as nodes are added or removed.

Pods: A pod represents a collection of containers packaged together and deployed to a node. All containers within a pod share a local network and other resources. They can talk to each other as if they were on the same machine, but they remain isolated from one another. At the same time, pods isolate network and storage away from the underlying container.

A single worker node can contain multiple pods. If a node goes down, Kubernetes can deploy a replacement pod to a functioning node.

Despite a pod being able to hold many containers, it’s recommended they wrap up only as many as needed: a main process and its helper containers, which are called “sidecars.” Pods scale as a unit no matter what their individual needs are, and overstuffed pods can be a drain on resources.

Deployments: Instead of directly deploying pods to a cluster, Kubernetes uses an additional abstraction layer called a “deployment.” A deployment enables you to designate how many replicas of a pod you want running simultaneously. Once it deploys that number of pods to a cluster, it will continue to monitor them and automatically recreate and redeploy a pod if it fails.

Ingress: Kubernetes isolates pods from the outside world, so you need to open a communication channel to any service you want to expose. This is another abstraction layer called “ingress.” There are a few ways to add ingress to a cluster, including adding a LoadBalancer, NodePort or Ingress controller. Think of this as the internet-facing web server you may have used in traditional architecture.
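To tie these concepts together, here’s a minimal sketch of a deployment. The app name, image and replica count are illustrative assumptions; the manifest asks Kubernetes to keep three replicas of a single-container pod running and to replace any that fail:

# Declare the desired state; Kubernetes works to maintain it
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app        # hypothetical application name
spec:
  replicas: 3             # how many pod replicas to keep running
  selector:
    matchLabels:
      app: sample-app
  template:               # the pod template each replica is created from
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: web
        image: nginx:1.25 # illustrative container image
EOF

# Confirm the deployment is maintaining the desired replica count
kubectl get deployment sample-app
kubectl get pods -l app=sample-app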

What challenges do Kubernetes and containerization present for monitoring?

For all the benefits that containers and orchestration frameworks bring to organizations, they can also make cloud-based application management more complex. Some of the challenges they present include:

• Significant blind spots — Containers are designed to be disposable. Because of this, they introduce several layers of abstraction between the application and the underlying hardware to ensure portability and scalability. This all contributes to a significant blind spot when it comes to conventional monitoring. Traditional monitoring tools aren’t capable of understanding these abstractions.

• Increased volume of data — The easy portability of so many interdependent components creates an increased need to maintain telemetry data to ensure observability into the performance and reliability of the application, container and orchestration platform. Many microservice architectures are built to scale up microservices when needed and destroy them when not. This ephemerality also increases the need to have data streamed into an observability system. Additional components added to the system also increase how many things must be monitored and checked when things go wrong.

• The importance of visualizations — The scale and complexity introduced by microservices, containers and container orchestration require the ability both to visualize the environment to gain immediate insight into your infrastructure health and to determine how traffic is flowing within your environment. You also need to be able to zoom in and view the health and performance of containers, nodes and pods. The right monitoring solution should provide this workflow.

• Pacing for DevOps — Containers can be scaled and modified with lightning speed. This accelerated deployment pace makes it more challenging for DevOps teams to track how application performance is impacted across deployments, or even to understand when new service dependencies are added.

How to implement container monitoring

A good container monitoring solution will enable you to stay on top of your dynamic container-based environment by unifying container data with other infrastructure data to provide better contextualization and root cause analysis. Let’s look at ways you can provide several layers of monitoring for Docker, the most popular implementation.

Hosts: The physical and virtual machines in your clusters can be monitored for availability and performance. Key metrics to track include memory, CPU usage, swap space used and storage utilization. This should be a core capability of any container monitoring tool.

Containers: Visibility into your containers in aggregate and individually is critical. A monitoring tool can provide information on the number of currently running containers, the containers using the most memory and the most recently started container. It can also provide insight into each container’s CPU and memory utilization, and the health of its network I/O.

Orchestration framework: Kubernetes itself also needs to be monitored. How many available nodes are there? What’s the health of the master node? Are there any pods pending reassignment for long periods? What volume of traffic is moving through your ingress? You need to be able to answer all of these questions quickly to continue operating reliable services. Additionally, the nature of Kubernetes means that pods are scheduled to optimize resource utilization. This increases efficiency, but also adds unpredictability about where pods are deployed and run.

Application endpoints: Determining when your service is able to handle user requests, and the performance and latency of those requests, is also vital. Your monitoring solution must perform health checks on the application itself and determine latency and other performance metrics.
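For a quick, ad hoc view of several of these signals, the standard Docker and Kubernetes command-line tools can answer many of the questions above directly; a dedicated monitoring solution then adds retention, correlation and alerting on top:

# Per-container CPU, memory and network I/O, at a point in time
docker stats --no-stream

# Node availability across the cluster
kubectl get nodes

# Node and pod resource usage (requires the metrics-server add-on)
kubectl top nodes
kubectl top pods

# Pods waiting to be (re)scheduled
kubectl get pods --all-namespaces --field-selector=status.phase=Pending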

What features are necessary to monitor these applications?

As we’ve discussed, containerization of applications and the use of orchestration frameworks like Kubernetes create many benefits for modern development and deployment workflows. However, monitoring these environments and applications is far more complicated than using legacy tools. A monitoring solution that’s ready for containers and orchestration workflows must be able to provide these features.

Collection of key metrics

Pod metrics: Number of desired pods, number of available pods, pods by phase (failed, pending, running), desired pods per deployment.

Resource utilization metrics: Docker socket-collected metrics (container- and node-level resource metrics, e.g., CPU and memory usage).

Application metrics: RED metrics (rate, error, duration); application health; database availability and performance.

Consolidation, correlation, and analysis features

Easy deployment: Deployment of collectors must be easy and cloud-native. Ideally, a Helm chart is available. Deployment can be as easy as:

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart

helm repo update

helm install --set splunkAccessToken='xxx' --set clusterName='sample' \
  --set splunkRealm='us0' --set otelCollector.enabled='true' \
  --generate-name splunk-otel-collector-chart/splunk-otel-collector

Avoidance of lock-in: OpenTelemetry is the future of monitoring technology, and instrumentation of your environment must account for the fact that you may decide to change monitoring or observability providers. It’s essential that the solution you adopt supports OpenTelemetry so that you aren’t required to redo instrumentation work if you move.

Real-time, streaming platform: In a world where downtime can cost hundreds of thousands of dollars per hour, seconds count. A modern monitoring and observability platform must be able to provide alerts and analyze data in real time to minimize MTTD and to make sure that the conclusions you’re drawing from the data are based on the current state of the system.

Predictive analytics: Based on AI and ML, predictive analytics tools can tell you things are about to fail before they fail. Given the complexity of modern environments, this can prevent issues from even happening and help you detect potential performance and availability issues before they impact your customers.

Prebuilt dashboards: Getting value from your monitoring system shouldn’t require you to fully document your infrastructure first. Automated, built-in dashboards can make it easy to understand the complex interrelationships between nodes, pods, containers and applications, and to see the status of your environment at a glance.

Automated service maps: Understanding how traffic flows through ingress, containers and applications is extremely complicated in this new world. Your monitoring tool must be able to figure out the paths your requests are taking and show them to you, in addition to identifying errors and other issues in your environment.

Next steps
Containers are a powerful tool in your development arsenal, but it’s critical to understand
how and how well your container environments are working. Infrastructure monitoring and
application performance monitoring become more essential after deploying containers,
not less. The requirements of a container-ready monitoring solution include being built
to understand containers, easy deployment, a real-time streaming platform, predictive
analytics and an out-of-the-box experience that gives you meaningful data. If you’re ready
for an observability system that does all this and can operate natively with containers,
public clouds, private clouds and self-hosted environments, check out a demo of Splunk
Observability Cloud, or you can start a free trial today. To learn more about observability,
view our website or download our Beginner’s Guide to Observability.

Splunk, Splunk> and Turn Data Into Doing are trademarks and
registered trademarks of Splunk Inc. in the United States and
other countries. All other brand names, product names or
trademarks belong to their respective owners.
© 2021 Splunk Inc. All rights reserved.

21-14769-Splunk-ModernGuidetoContainerMonitoringandOrchestration-114-EB
