
Application

Performance
Management for
Microservice Applications
on Kubernetes
The Ultimate Guide to Managing
Performance of Business Applications
on Kubernetes

Table of Contents

• Kubernetes Basics (or the A-B-C’s of K-8-S)
• Kubernetes Application Monitoring Challenges
• Seeing Through the Complexity
• Should You Use Prometheus?
• Kubernetes Monitoring Tools and Strategies
• Conclusion
There’s a reason everyone is talking about Kubernetes these days. It has become the go-to
container orchestration solution for organizations of all sizes as they migrate to microservice
application stacks running in managed container environments.

Kubernetes is certainly worthy of the recent excitement it has garnered, but it doesn’t solve every management
problem, especially around performance. It’s important to understand what Kubernetes does and what it doesn’t do,
and what specific capabilities DevOps teams require from their tooling to fully manage orchestrated microservice
applications and achieve operational excellence.

This eBook examines Kubernetes, the operational issues it addresses, and those that it does not. It also examines
modern DevOps processes, with a discussion of the management tooling needed to achieve continuous delivery of
business services leading to excellent operational performance. The eBook concludes with a detailed analysis of the
capabilities needed from your tooling to successfully operate and manage the performance of microservice
applications running on Kubernetes.

Kubernetes Basics
(or the A-B-C’s of K-8-S)
Kubernetes (sometimes abbreviated K8s) is a container
orchestration tool for microservice application deployment.
It originated as an infrastructure orchestration tool built by
Google to help manage container deployment in their
hyper-scale environment. Google ultimately released K8s
as an open source solution through CNCF (the Cloud-Native
Computing Foundation).

Orchestration is just a fancy word that summarizes the basic Kubernetes features:

• Container deployment automation, relieving admins of the need to manually start them
• Instance management - balancing the number of instances of a given container running concurrently to meet application demand
• DNS management regarding microservice / container load balancing and clustering to help manage scaling due to increased request load
• Container distribution management across host servers to spread application load evenly across the host infrastructure (which can help maximize application availability)

Notice there is a critical aspect of operational management missing - application performance management. The whole discipline of application performance visibility and management is not part of the Kubernetes platform.
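Each of those orchestration features maps onto a handful of kubectl commands. The following is a minimal sketch, not a production recipe: the deployment name `web` and the image tag are placeholders, and it assumes a working cluster and kubectl context.

```shell
# Deployment automation: Kubernetes starts the containers for you
kubectl create deployment web --image=nginx:1.25

# Instance management: adjust the number of running replicas
kubectl scale deployment web --replicas=3

# DNS / load balancing: expose the deployment as a Service
# (cluster DNS makes it reachable as "web" from other pods)
kubectl expose deployment web --port=80

# Distribution across nodes: see which hosts the pods landed on
kubectl get pods -l app=web -o wide
```

Note what is absent from every command above: nothing reports request rates, errors, or latency for the application itself.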


Why Kubernetes is Important

Remember, the goal of DevOps is speed! Orchestration is all about enabling fast and easy changes to production environments so that business applications can rapidly evolve.

The message is clear: speeding up your application delivery cycles adds huge value to your business.

Automating container orchestration is a great complement to agile development methods and the microservice architecture. Modern CI/CD is automating the testing and delivery stages of development - containers and Kubernetes make it much easier to get your code into production and manage resources.

Kubernetes distributions

Many cloud providers have their own versions of Kubernetes (called a “Distribution”) that add unique enterprise capabilities to the open source Kubernetes version, which provide a few distinct advantages:

• Organizations concerned about enterprise readiness get a fully tested and supported version of K8s
• Additional enterprise functionality is included - for example, Red Hat’s OpenShift K8s distribution adds security features and build automation to the mix

For most enterprise use cases, it’s much faster and easier to use a cloud provider’s Kubernetes distribution than to set up the open source version. A wide variety of Kubernetes distributions are available, designed to run either on local infrastructure or as a hosted service in the cloud. You can get an updated list of distribution providers in the Kubernetes online docs.

Kubernetes Application
Monitoring Challenges

• Container Management is NOT Application Performance Management
• More Moving Parts - and Complexity
• Decoupling of Microservices from Physical Infrastructure
• Service Mapping - A New Layer of Abstraction
• Root-Cause Ambiguity

Container Management
is NOT Application
Performance
Management

Now that we’ve discussed what Kubernetes does, let’s explain what it does not do. Remember, Kubernetes orchestrates containers that are part of an application. It does not manage application performance or the availability of highly distributed applications. Similarly, Kubernetes doesn’t consider application performance when managing infrastructure.

Kubernetes effectively adds a layer of abstraction between the running application (containers) and the actual compute infrastructure. On its own, Kubernetes makes decisions about where containers run, and can move them around abruptly. Visibility of exactly how your technical stack is deployed, and how service requests are flowing across the microservices, is not easily available via Kubernetes; nor is performance data (request rate, errors and duration or latency) of services a native part of Kubernetes.

Operational production monitoring of application performance and health is absolutely not available via Kubernetes.

Let’s look at other aspects of orchestrated containerized application environments that further complicate monitoring.


More Moving Parts - and Complexity

Any microservice application creates a trio of issues:

• Exponentially more individual components
• Constant change in the infrastructure and applications (the application stack)
• Dynamic application components

In a Kubernetes environment, there are many more moving parts than there would be in a traditional application stack.

With the addition of containers - and then orchestration with Kubernetes - each of these management challenges becomes even more difficult. Every time there is a decoupling of physical deployment from the application functionality, it becomes more difficult to monitor application performance and solve problems. Instead of host servers connected with a physical network, Kubernetes utilizes a cluster of nodes and virtualizes the network, which can be distributed across a mixture of on-premise and cloud-based infrastructure, or even multiple clouds.

With so many different pieces of infrastructure and middleware, as well as the polyglot of application languages used to create the microservices, it’s difficult for monitoring tools to distinguish the different needs and behaviors of all these critical components in the application stack. For example, collecting and interpreting monitoring data from any one platform is different from all other platforms. What do you do when you have Python, Java, PHP, .NET, application proxies, 4 different databases and a multitude of middleware?


Decoupling of
Microservices from
Physical Infrastructure

Kubernetes takes control of running the containers that make up the microservices of your application, completely automating their lifecycle management and abstracting the hardware.

Kubernetes will run the requested workloads on any available host/node, using software-defined networks to ensure that those workloads are reachable and load balanced. Compute resources (memory and CPU) are also abstracted, with each workload having a configured limit for those resources. Because containers are ephemeral, any long-term storage is provisioned by Persistent Volume Claims provided by various storage drivers.

The already deep level of abstraction may be further compounded by the Kubernetes nodes running on external cloud computing services such as EC2, GCE or Azure.

The high level of disconnect from the application code to the hardware it’s running on makes traditional infrastructure monitoring less critical. It is considerably more important to understand how the microservices and overarching applications are performing and whether they are meeting their desired SLAs. An understanding of the overall health of the Kubernetes backplane is also essential to ensure the highest levels of service for your application.


Service Mapping - A New Layer of Abstraction

As noted earlier in this eBook, one of the main reasons for using an orchestrator like Kubernetes is that it automates most of the work required to deploy containers and establish communications between them. However, Kubernetes on its own can’t guarantee that microservices can communicate and integrate with each other effectively. To do that, you need to directly monitor the services and their interactions.

That is challenging because Kubernetes doesn’t offer a way to automatically map or visualize relationships between microservices.

Admins must manually determine which microservices are actually running, where within the cluster they exist, which services depend on other ones and how requests are flowing between services.

They must also be able to quickly determine how a service failure or performance regression could impact other services, while also looking for opportunities to optimize the performance of individual services and communications between services.


Root-Cause Ambiguity

APM tools exist because middleware-based business applications - first using Java and .NET, then using SOA principles, and now microservices and containers - make it difficult to monitor performance, trace user requests, and identify and solve problems.

The more complex the application environment, the harder it becomes for DevOps teams to get the performance visibility and component dependencies needed to effectively manage application performance.

In a Kubernetes environment, determining the root cause of a problem based on surface-level symptoms is even more difficult, because the relationships between different components of the environment are much harder to map and continuously change. For example, a problem in a Kubernetes application might be caused by an issue with physical infrastructure, but it could also result from a configuration mistake or coding problem. Or perhaps the problem lies within the virtual network that allows microservices to communicate with each other.

Of course, when the problem lies within the application code, it’s important to have the deep visibility required to debug actual code issues, even understanding when bad parameters or other inputs are causing application problems. Ultimately, there could be a myriad of root causes for the issue, ranging from configuration problems in Kubernetes, to an issue with data flows between containers, to a physical hardware failure.

To put it simply, tracing problems in a Kubernetes environment back to their root cause is not feasible in many cases without the help of tools that can automatically parse through the complex web of data and dependencies that compose your cluster and your microservice application’s structure.


Seeing Through the Complexity


By now, it should be clear that managing the performance and availability of Kubernetes
applications is challenging and scary! It’s not hopeless, though. With the right APM (Application
Performance Management) tool, you can manage your Kubernetes environment in a way that
maximizes uptime and optimizes performance, combining the benefits that K8s offers with
the goal of achieving DevOps excellence.

Let’s look at key types of visibility that your monitoring should support for applications running in a Kubernetes environment.

Application Service Identification and Mapping

As discussed earlier, Kubernetes injects a new level of application abstraction, making it difficult to know how well individual services are running, or the interdependencies between all the deployed services. Your APM tool must be able to see past Kubernetes and the container system to identify the application services - and how they are related to each other.

Kubernetes services are NOT the same as application services. The K8s documentation states:

“A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them”

There can be multiple application services within a Pod.

$ kubectl get svc

Lists out Kubernetes service definitions, but not their relationships.


Microservice Application Request Relationships Mapping and Tracing

You also need to know how your Kubernetes services map to application services, the microservices they are built upon and their physical infrastructure, in order to determine how the infrastructure impacts the services’ availability and performance. Kubernetes doesn’t easily reveal all of this information; you need to run multiple kubectl commands to manually build a mapping at a single point in time. Good luck doing that when there is a production issue that needs to be fixed immediately.

The microservices that comprise an application constantly send and receive requests from each other. Effective microservice application monitoring requires your APM tool to detect all the services, as well as the interdependencies between them - and visualize the dynamic relationships (i.e., map them) in real time. There is no kubectl command to provide this information.

Additionally, to solve problems, you will need exact traces from each individual application request across all the microservices it touches.
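To illustrate the point, a manual, point-in-time approximation of a service map might look like the following sketch. It assumes a reachable cluster, and even then it only shows where services and pods currently sit - not how requests flow between them.

```shell
# List Kubernetes Service definitions (no relationships shown)
kubectl get svc --all-namespaces

# For each Service, see which pod IPs currently back it
kubectl get endpoints --all-namespaces

# Cross-reference pods to nodes to tie services to infrastructure
kubectl get pods --all-namespaces -o wide

# Even combined, this is a static snapshot: nothing here reveals
# which service calls which, or how requests flow at runtime
```

By the time you have stitched the three outputs together by hand, the scheduler may already have moved the pods.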


Deployment Failures

If Kubernetes fails to deploy a pod as expected, you want to know why and how it happened. However, it’s more important to understand whether your application functionality has been negatively impacted by this deployment failure. Is your application slower and handling less workload, or is it throwing errors because a critical service is unavailable?

$ kubectl get events --field-selector involvedObject.name=my-deployment

The event stream will show where the deployment failed. But since you cannot see the performance of your application using kubectl commands, the only way to answer the question above is with an APM tool that understands Kubernetes.
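A typical triage sequence for a failed deployment might look like the sketch below; `my-deployment` is a placeholder, as in the command above, and a live cluster is assumed.

```shell
# Check whether the rollout completed or is stuck
kubectl rollout status deployment/my-deployment

# Inspect recent events associated with the deployment
kubectl get events --field-selector involvedObject.name=my-deployment

# Drill into the failing pods for container states and restart reasons
kubectl describe pods -l app=my-deployment

# None of this shows whether user-facing latency or error
# rates changed - that requires application-level monitoring
```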

Performance Regressions

If your application is responding slowly, it’s important to identify the issue and trace it to its root cause quickly. Since Kubernetes was not designed to help with this use case, there are no kubectl commands you can run to understand microservice or application performance.

Infrastructure metrics like CPU, memory, disk I/O and network I/O are good KPIs to reference while troubleshooting performance issues, but they are only a part of the information required to fully ascertain root cause. There might also be issues with the application code, or Kubernetes configuration issues that are causing resource contention. It’s quite common to over-allocate CPU and memory resources on Kubernetes nodes with improper configuration.

Troubleshooting microservice applications running on Kubernetes requires your APM tool to have the ability to correlate metrics up and down the full application stack: infrastructure, application code, Kubernetes system information, and the trace data between the services.
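For the resource-contention case mentioned above, a quick node-level check is possible with kubectl, sketched below. It assumes the metrics-server add-on is installed, `<node-name>` is a placeholder, and - as the section argues - it still says nothing about request latency.

```shell
# Current CPU/memory usage per node (requires metrics-server)
kubectl top nodes

# Compare usage against what pods have requested/limited on a node:
# over-committed requests show up under "Allocated resources"
kubectl describe node <node-name>

# Per-pod usage, to spot containers pressing against their limits
kubectl top pods --all-namespaces --sort-by=cpu
```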


Performance Optimization
Opportunities

In Agile development environments, developers often push new code into production on a daily basis. How do they know that their code is delivering good response time and not consuming too many resources?

To help with this, the APM solution must work at the speed of DevOps, automatically and immediately recognizing when new code has been deployed - or any changes to the structure of the environment (including infrastructure). It must also make it easy for developers to analyze the efficiency of their code.

This use case calls for granular visibility into user requests, host resources (K8s nodes), and workload patterns. It’s also critical that you have a robust analytics mechanism for all of this data. You cannot accomplish this use case with Kubernetes alone.


Should You Use Prometheus?

Prometheus has become the go-to monitoring tool for Kubernetes, but it’s missing some important functionality. Let’s begin our Prometheus exploration by discussing what Prometheus does well.

Time series metrics | Flexible API | High cardinality | Monitoring and Alerting

Prometheus is an open-source time series metrics monitoring and alerting tool. It is typically used to monitor KPIs, such as rates, counters, and gauges from infrastructure and application services. You can use Prometheus to monitor request response times, but this often requires that you modify your source code to add the Prometheus API calls. This can be useful to understand overall response times and request rates, but this approach lacks the detail required to troubleshoot or optimize application performance.

The Kubernetes ecosystem natively supports Prometheus, and when the Prometheus Helm package is installed, you’ll find several dashboards pre-configured for the purpose of basic health checks. You’ll also find a few predefined alerts configured on your cluster.

Prometheus is a good stand-alone tool for collecting time series metrics, but it is not capable of meeting the majority of use cases presented in this document. Here are the drawbacks of using Prometheus as your monitoring tool:

• No distributed tracing capability
• No correlation between service infrastructure and host
• No correlation between Kubernetes resources, request response times, and infrastructure metrics
• No analytics interface, roll-ups, or aggregates
• No automatic root cause analysis
• Alert rules must be manually defined and maintained
• Management and administrative costs

Ultimately, Prometheus is a good stand-alone metrics tool that cannot meet the challenges associated with running business-critical microservice workloads on Kubernetes.
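To be fair to what Prometheus does well: metrics it has scraped are easy to pull back out over its HTTP API. The sketch below assumes a Prometheus server reachable at `localhost:9090` and that a standard counter such as `http_requests_total` is being scraped - both are assumptions, not givens.

```shell
# Instant query: per-second request rate over the last 5 minutes
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=rate(http_requests_total[5m])'

# The response is JSON time series data - rates, counters and gauges
# come back easily, but there is no trace or per-request detail
```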

Kubernetes Monitoring
Tools and Strategies
Achieving the elements described above in a
Kubernetes application requires an APM tool
that includes special features absent within
traditional monitoring tools.

For Kubernetes, it’s not enough to just collect monitoring data and detect
anomalies that could signal problems. Let’s look at key capabilities you
need in an APM tool to help you with microservice applications running
on Kubernetes.

Root-Cause Analysis
Within the Application,
Containers and
Orchestration

One is the ability to identify the root cause of performance issues automatically. It’s not good enough to just be aware of problems within your Kubernetes environment.

You must be able to trace those problems to their exact root cause and fix them in minutes.

Given the extreme complexity of a Kubernetes-based application and the lack of visibility into the environment, identifying the root causes of availability or performance problems is exceptionally challenging to do manually.

When your APM tool understands the relationships between Kubernetes, application services, and infrastructure, it can automatically identify the root cause of issues anywhere within the system.


Integrated Service /
Infrastructure Mapping

Given that Kubernetes doesn’t offer full visibility into how services interact with each other, your monitoring tool must be able to map services automatically.

Equally important, it must have the ability to interpret the relationships and dependencies between those services in order to identify problems and understand how one service’s performance will impact that of others.

Dependencies between all services are continuously mapped and monitored to understand the performance of the system as a whole.


Upstream and downstream dependencies of individual services are automatically identified. Every application service is correlated to its Kubernetes service so that you can seamlessly navigate between data sets.

Kubernetes cluster data is collected and analyzed with the correlated application performance data to create a holistic understanding of the system.


Dynamic Baselining

With environment architectures and configurations changing constantly, your APM tool must make sense of highly dynamic monitoring data and distinguish true anomalies from normal changes.

Identification of application performance issues requires anomaly detection based upon machine learning. Alerts are raised when performance indicators deviate too far from normal behavior.

Remediation guidance

When something goes wrong in your enormously complex Kubernetes environment, you want to be able to resolve the problem quickly. That is difficult for human admins to do without help or guidance from their APM tool. There’s just too much data, and too many fast-changing variables, for humans to wade through to formulate an incident response plan on their own.


Conclusion
Kubernetes is rapidly becoming the standard orchestration platform in enterprises to augment and even complete the transition to DevOps, but it does not include application performance visibility or management. Furthermore, Kubernetes introduces a new layer of abstraction into the datacenter, creating observability challenges that make it more difficult to manage application availability and deliver the performance SLAs demanded by your business.

To properly manage business-critical applications on Kubernetes, Instana recommends an APM tool with these key capabilities:

• Full-stack visibility (including infrastructure, code, microservices, request traces, middleware, containers and Kubernetes) of all technology layers
• Continuous discovery of the full application stack to automatically adjust to changes in the environment
• Dependency mapping and correlation between the layers of technology
• Automatic root cause determination and assistance for the DevOps teams to troubleshoot application issues


About Instana

Instana is the only APM tool specifically built for microservice applications running on Kubernetes. The solution automatically discovers the full containerized application stack, automatically understands the performance of your microservices, and includes automatic determination of the root cause of performance issues. The solution is designed to empower the full DevOps team.

Start a Free Trial Today

Stan
Your Intelligent DevOps Assistant
