Modern Guide To Container Monitoring and Orchestration
Since the introduction of the concept in 2013, containers
have become the buzz of the IT world. It’s easy to see why:
Application container technology is revolutionizing app
development, bringing previously unimagined flexibility and
efficiency to the development process.
What is a container?
The easiest way to understand the concept of a container is to
consider its namesake. A physical container is a receptacle used to
hold and transport goods from one location to another.
Why are containers such a big deal?

Containers remedy an all-too-common problem in operations: getting software to run reliably and uniformly no matter where it is deployed. As an app moves from one computing environment to another — from staging to production, for example — it can run into problems if the operating system, network topology, security policies or other aspects of the environment are different. Containers isolate the app from its environment, abstracting away these environmental differences. Containers also prevent issues due to different components requiring different versions of the same shared library or other dependency, by including all the dependencies within the container.

Prior to containers, virtual machines (VMs) were the primary method for running many isolated applications on a single server. Like containers, VMs abstract away a machine’s underlying infrastructure so that hardware and software changes won’t affect app performance. But there are significant differences in how each does this.

A VM abstracts hardware to turn a physical server into several virtual ones. It does so by running on top of a hypervisor, which itself runs on a physical computer called the “host machine.” The hypervisor is essentially a coordination system that arbitrates access to the host machine’s resources — CPU, RAM, etc. — making them available to the VM or “guest machine.” The apps and everything required to run them, including libraries and system binaries, are contained in the guest machine. Each guest machine also includes a complete operating system of its own. So a server running four VMs, for example, would have four operating systems in addition to the hypervisor coordinating them all. That’s a lot of demand on one machine’s resources, and things can bog down in a hurry, ultimately limiting how many VMs a single server can operate.

Containers, on the other hand, abstract at the operating system level. A single host operating system runs on the host (this can be a physical server, VM or cloud host), and the containers — using a containerization engine like the Docker Engine — share that OS’s kernel with other containers, each with its own isolated user space. There’s much less overhead here than with a virtual machine, and as a result, containers are far more lightweight and resource-efficient than VMs — allowing for much greater utilization of server resources.
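Because an image bundles the app together with its libraries and system binaries, the dependency packaging described above is written down explicitly in the image's build recipe. A minimal sketch (the base image, dependency file and app filename are illustrative assumptions, not taken from this guide):

```dockerfile
# Start from a slim base image that supplies the userland the app needs;
# the kernel itself is shared with the host at runtime.
FROM python:3.11-slim

WORKDIR /app

# Bundle the app's dependencies inside the image, so the container
# behaves the same regardless of what the host has installed.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The process the container runs; app.py is a placeholder name.
CMD ["python", "app.py"]
```

Build it once, and the resulting image runs identically on a laptop, a staging VM or a production cluster, which is the whole point of the abstraction.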
5 benefits of containers

A container-based infrastructure offers a host of benefits. Here are the five biggest.

1. Speed of delivery — Applications installed on a virtual machine typically take several minutes to launch. Containers don’t have to wait for an operating system boot, so they can start up in a fraction of a second. They also run faster since they use fewer host OS resources, and they only take a few seconds to create, clone or destroy. All of this has a dramatic impact on the development process, allowing organizations to more quickly get software to market, fix bugs and add new features.

2. DevOps-first approach — The speed, small footprint and resource efficiency of microservice-based containers make them ideal for a DevOps environment. A microservice-based infrastructure enables developers to own specific parts of the application end-to-end, making sure that they can fully understand how it works, optimize its performance, and troubleshoot any issues more efficiently than with monolithic applications.

3. Portability — Containers pack up the app and all of its dependencies. That makes it easy to move and reliably run containers across different clouds. This also helps avoid vendor lock-in should you need to move your apps from one public cloud environment to another.

4. Increased scalability — Containers tend to be small because they don’t require a separate OS the way that VMs do. One container is typically sized on the order of tens of megabytes, whereas a single VM can be tens of gigabytes — roughly 1,000 times the size of a container. That efficiency allows you to run many more containers on a single host operating system, increasing scalability.

5. Consistency — Because containers retain all dependencies and configuration internally, they ensure developers are able to work in a consistent environment regardless of where the containers are deployed. That means developers won’t have to waste time troubleshooting environmental differences and can focus on addressing new app functionality. It also means you can take the same container from development to production when it’s time to go live. Finally, because containers are immutable once created, developers don’t have to worry about configuration differences across the deployment or other sources of troubleshooting difficulty.
Orchestration 101

Nodes: In Kubernetes lingo, any single “worker machine” is a node. It can be a physical server or virtual machine on a cloud provider such as AWS or Microsoft Azure. Nodes were originally called “minions,” which gives you an idea of their purpose. They receive and perform tasks assigned from the master node and contain all the services required to manage and assign resources to containers.

Master node: This is the machine that orchestrates all the worker nodes and is your point of interaction with Kubernetes. All assigned tasks originate here.

Cluster: A cluster represents a master node and several worker nodes. Clusters consolidate all of these machines into a single, more powerful unit.

Deployments: Instead of directly deploying pods to a cluster, Kubernetes uses an additional abstraction layer called a “deployment.” A deployment enables you to designate how many replicas of a pod you want running simultaneously. Once it deploys that number of pods to a cluster, it will continue to monitor them and automatically recreate and redeploy a pod if it fails.

Ingress: Kubernetes isolates pods from the outside world, so you need to open a communication channel to any service you want to expose. This is another abstraction layer called “ingress.” There are a few ways to add ingress to a cluster, including adding a LoadBalancer, NodePort or Ingress controller. Think of this as the internet-facing web server you may have used in traditional architecture.
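The deployment abstraction can be expressed as a short manifest. Here is a sketch of a Deployment asking Kubernetes to keep three replicas of a pod running (the names and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment        # illustrative name
spec:
  replicas: 3                 # desired number of identical pods
  selector:
    matchLabels:
      app: web
  template:                   # pod template the controller stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # any container image
          ports:
            - containerPort: 80
```

If a pod crashes or its node dies, the controller notices the replica count has dropped below three and recreates it, which is exactly the monitor-and-redeploy loop described above.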
What challenges do Kubernetes and containerization present for monitoring?

• Increased volume of data — The easy portability of so many interdependent components creates an increased need to maintain telemetry data to ensure observability into the performance and reliability of the application, container and orchestration platform. Many microservice architectures are built to scale up microservices when needed and destroy them when not. This ephemerality also increases the need to have data streamed into an observability system. Additional components added to the system also increase how many things must be monitored and checked when things go wrong.
How to implement container monitoring

A good container monitoring solution will enable you to stay on top of your dynamic container-based environment by unifying container data with other infrastructure data to provide better contextualization and root cause analysis. Let’s look at ways you can provide several layers of monitoring for Docker, the most popular implementation.

Hosts: The physical and virtual machines in your clusters can be monitored for availability and performance. Key metrics to track include memory, CPU usage, swap space used and storage utilization. This should be a core capability of any container monitoring tool.

Containers: Visibility into your containers in aggregate and individually is critical. A monitoring tool can provide information on the number of currently running containers, the containers using the most memory and the most recently started container. It can also provide insight into each container’s CPU and memory utilization, and the health of its network I/O.

Orchestration framework: Kubernetes itself also needs to be monitored. How many available nodes are there? What’s the health of the master node? Are there any pods pending reassignment for long periods? What volume of traffic is moving through your ingress? You need to be able to answer all of these questions quickly to continue operating reliable services. Additionally, the nature of Kubernetes means that pods are scheduled to optimize resource utilization. This increases efficiency, but also adds unpredictability about where pods are deployed and run.
What features are necessary to monitor containers?

Easy deployment: Deployment of collectors must be easy and cloud-native. Ideally, a helm chart is available, so deployment can be as easy as a single command.

Pod metrics: Number of desired pods, number of available pods, pods by phase (failed, pending, running), desired pods per deployment.

Resource utilization metrics: Docker Socket-collected metrics (container and node-level resource metrics, e.g., CPU and memory).

Consolidation, correlation, and analysis features

It’s essential that the solution you adopt supports OpenTelemetry so that you aren’t required to redo instrumentation work if you move between vendors or cloud providers.

Real-time, streaming platform: In a world where downtime can cost real money, your monitoring platform must process and alert on data as it streams in, not minutes later.
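The helm-based deployment mentioned above usually comes down to a few commands. The repository URL, chart name and values below are illustrative assumptions standing in for your vendor's chart, not commands taken from this guide:

```shell
# Register the chart repository and install the collector into the cluster.
helm repo add my-o11y https://charts.example.com
helm repo update
helm install my-collector my-o11y/otel-collector \
  --set cluster.name=prod-cluster
```

From there, the collector typically discovers pods and nodes on its own and begins streaming metrics without per-service configuration.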
Prebuilt dashboards: Getting value from your monitoring system shouldn’t rely on you needing to fully document your infrastructure. Automated, built-in dashboards can make it easy to understand the complex interrelationships between nodes, pods, containers and applications, and see the status of your environment at a glance.

Automated service maps: Understanding how traffic is flowing through ingress, containers and applications is extremely complicated in this new world. Your monitoring tool must be able to figure out the paths your requests are taking and show you these requests, in addition to identifying errors and other issues in your environment.
Next steps
Containers are a powerful tool in your development arsenal, but it’s critical to understand
how and how well your container environments are working. Infrastructure monitoring and
application performance monitoring become more essential after deploying containers,
not less. The requirements of a container-ready monitoring solution include being built
to understand containers, easy deployment, a real-time streaming platform, predictive
analytics and an out-of-the-box experience that gives you meaningful data. If you’re ready
for an observability system that does all this and can operate natively with containers,
public clouds, private clouds and self-hosted environments, check out a demo of Splunk
Observability Cloud, or you can start a free trial today. To learn more about observability,
view our website or download our Beginner’s Guide to Observability.
Splunk, Splunk> and Turn Data Into Doing are trademarks and
registered trademarks of Splunk Inc. in the United States and
other countries. All other brand names, product names or
trademarks belong to their respective owners.
© 2021 Splunk Inc. All rights reserved.