Kubernetes in the Enterprise: Redefining the Container Ecosystem
Welcome Letter
By Ray Elenteny, Solution Architect at SOLTECH, Inc.
The adoption of Kubernetes continues its march forward. Is it a perfect technology stack? No, but what technology stack is? Does it have a somewhat steep learning curve? Yes, it does. With these and other questions, you might ask:

"Why does the adoption of Kubernetes continue to grow, and do I want to jump into the pool? Isn't it just (and yet) another deployment abstraction layer like how virtual machines eventually became ubiquitous?"

To some extent, those are legitimate questions.

As someone who has literally spent decades in this industry delivering products and applications, the drive to deliver applications more efficiently and quickly has, at the very least, remained a constant, and one could argue that the drive to deliver continues to increase in intensity. I see Kubernetes as an opportunity to take a significant step forward in improving application delivery.

Many of us love researching and adopting new technologies. Kubernetes and containerization, in general, are "cool stuff." However, many of us also make a living in this field, and businesses don't really care about cool stuff. Stakeholders want to be shown how they can leverage technology to their advantage.

While Kubernetes is an evolution in technology, when applied with the right mindset, it encourages a cultural shift. To me, this is what makes Kubernetes exciting. I see it analogous to the Agile movement. Key to Agile methodologies is having representation from all aspects of an organization. Kubernetes, done well, is a collaborative effort across multiple organizational disciplines.

Kubernetes has allowed the replication of production-like environments to be available as early in the process as an engineer's workstation. This capability facilitates more communication between those creating an application and those who deploy and monitor it; all parties involved get to understand each other's requirements and challenges, and working as a team can resolve them. This requires cooperation, teamwork, empathy, and a host of other "soft skills." It can change the culture of an organization, and I would argue for the better.

So while you explore in the 2023 Kubernetes in the Enterprise Trend Report how Kubernetes can move your business forward, consider how Kubernetes can also impact the culture within your organization — for the better.

Seize the Opportunity,
With over 35 years of experience in the IT industry, Ray thoroughly enjoys sharing his experience by
helping organizations deliver high-quality applications that drive business value. Ray has a passion for
software engineering. Over the past ten years or so, Ray has taken a keen interest in the cultural and
technical dynamics of efficiently delivering applications.
By G. Ryan Spain, Freelance Software Engineer, former Engineer & Editor at DZone
Kubernetes: it is everywhere. Fully capturing the prevalence and far-reaching impacts of this monumental platform is no small task — from its initial aims to manage and orchestrate containers to the more nuanced techniques to scale deployments, leverage data and AI/ML capabilities, and manage observability and performance — so it's no wonder we, DZone, research and cover the Kubernetes ecosystem at great length each year.
In September 2023, DZone surveyed software developers, architects, and other IT professionals in order to understand the state
of Kubernetes across enterprises.
Methods: We created a survey and distributed it to a global audience of software professionals. Question formats included
multiple choice, free response, and ranking. Survey links were distributed via email to an opt-in subscriber list, popups on
DZone.com, the DZone Core Slack workspace, and various DZone social media channels. The survey was opened on August 31st
and ended on September 19th; it recorded 103 complete and partial responses.
Demographics: Due to the limited response rate for our 2023 Kubernetes survey, we've noted certain key audience details
below in order to establish a more solid impression of the sample from which results have been derived:
• Respondents described their primary role in their organization as "Technical Architect" (22%), "Developer/Engineer" (21%),
"Developer Team Lead" (13%), "Consultant/Solutions Architect" (11%), and "DevOps Lead" (11%).
• 80% of respondents said they are currently developing "Web applications/Services (SaaS)," 55% said "Enterprise business
applications," 22% said "Native mobile apps," and 21% said "High-risk software (bugs and failures can mean significant
financial loss or loss of life)."
• "Java" (74%) was the most popular language ecosystem used at respondents' companies, followed by "JavaScript (client-
side)" (56%), "Python" (52%), and "Node.js (server-side JavaScript)" (43%).
• Regarding responses on the primary language respondents use at work, the most popular by far was "Java" (43%),
followed distantly by "Python" (15%) and "Go" (11%).
• On average, respondents said they have 17.28 years' experience as an IT professional, with a median of 18 years' experience.
• 33% of respondents work at organizations with < 100 employees, 18% work at organizations with 100-999 employees, and
47% work at organizations with 1,000+ employees.
In this report, we review some of our key research findings. Many secondary findings of interest are not included here.
Research Target One: The Current State of Containers and Container Orchestration
Motivations:
1. Kubernetes and containers are inextricably linked, and one of the main reasons behind container orchestration demand
— and, by extension, the purpose of this very report — is the use of containers seemingly everywhere in contemporary
software development. We wanted to know how often containers are being used today compared to our surveys from
previous years, and whether organization/application size correlated with container usage.
2. Container usage can be so prevalent in modern software because of the availability and accessibility of robust container
management tools. From free and open-source tools to paid, enterprise-level solutions, there are more than a few options
for spinning up, managing, and orchestrating containers. We aimed to see which container tools were being used most often.
CONTAINER POPULARITY
In 2021's Kubernetes in the Enterprise "Key Research Findings," we speculated that containerization may have reached a point
of saturation based on the results of our survey. Last year, our data indicated that container usage may have even begun
declining, especially among smaller organizations.
To continue this analysis of the prevalence of containers, comparing with annual data from 2017 to the present, we asked:
Results (n=91):
Figure 1 (use of application containers): the vast majority of respondents answered "Yes"; 8.8% answered "No," and 3.3% didn't know.

Figure 2 (% of respondents using containers, 2017-2023): usage climbed from the low-to-mid 40s (42-45%) in the survey's earliest years to 70%, 83%, and 88% in recent years. A logarithmic trendline, f(x) = 0.2779 ln(x) + 0.3844 (R² = 0.8393), fits the series better than a linear trendline, f(x) = 0.0829x + 0.3914 (R² = 0.7424).
Observations:

1. The data also reinforces the hypothesis we stated in 2021 — that containerization has reached a saturation point. Further supporting the saturation hypothesis is the trendline for Figure 2, where a logarithmic model (R² = 0.839) fits the data much better than a linear one (R² = 0.742).

Table 1: Container usage by organization size

Uses containers    < 100    100-999    1,000+
Yes                71%      100%       98%
No                 21%      0%         3%
I don't know       7%       0%         0%
2. We found again this year that respondents at the smallest organizations (1-99) were the least likely to use containers. In
fact, nearly all respondents at organizations with > 100 employees said that they use application containers (99%), while
only 71% of respondents at organizations with < 100 employees said they use containers.
If we were to assume that larger organizations tend to have larger/more complex applications, this discrepancy may
indicate that relatively small or simple applications just don't have the same need for containerization — even in
development — as bigger applications. Alternatively, it could be that smaller organizations are less likely to be able to
expend the resources necessary to properly manage and orchestrate containers.
What tools/platforms are you using to manage containers in development and production environments?
Results (n=75):
Table 2

           Development      Production
Tool       %       n=       %       n=
Ansible    11%     8        8%      6
*Note: This table only displays options selected by > 5% of respondents in either the Development or Production category.
Observations:
1. Docker was the most commonly used container management tool in development environments, with 81% of
respondents saying they use Docker on the dev side of things; Kubernetes was not far behind at 71%. Other commonly
used container tools for development were Docker Compose (43%), Terraform (32%), AWS EKS (27%), and Azure AKS (24%).
KUBERNETES USAGE
As we saw from the data presented in the "Container Management Tools" section, Kubernetes is being used by a lot of software
professionals. We wanted to dive a little deeper into the frequency of its use, so we asked respondents:
Figure 3 (yes/no): 79.6% Yes, 15.9% No, 4.5% I don't know

Figure 4 (yes/no): 76.5% Yes, 23.5% No
Results (n=69):
Figure 5 (Kubernetes use cases, % of respondents): new cloud-native apps, modernizing existing apps, lift and shift, hybrid/multi-cloud, fast data pipelines, air-gapped, AI/ML, non-ML AI, edge/IoT, and other (write-in)
1. The most common Kubernetes use cases were "New cloud-native apps" (58%), "Hybrid/multi-cloud" (52%), and "Modernizing existing apps" (49%). The least common use cases were "Air-gapped" (12%), "Non-ML AI" (12%), and "Edge/IoT" (7%).

2. Respondents at organizations with < 100 employees were considerably less likely than respondents at larger organizations to use Kubernetes for "Hybrid/multi-cloud" (24% vs. 63%), "Lift and shift [for] existing applications" (18% vs. 35%), and "Non-ML AI" (0% vs. 17%). On the other hand, there was no significant difference between small and large organization response rates for "Modernizing existing applications" (47% vs. 50%) and "Edge/IoT" (6% vs. 8%) use cases.

3. On average, respondents at organizations with < 100 employees selected fewer Kubernetes use cases (1.76) than those at larger organizations (2.94).

Table 3: Kubernetes use cases by organization size

Use Cases                       < 100 (n=17)    > 100 (n=48)    Gap
Hybrid/multi-cloud              24%             63%             39%
Air-gapped                      6%              15%             9%
Fast data pipelines             6%              21%             15%
AI/ML                           18%             25%             7%
Non-ML AI                       0%              17%             17%
Edge/IoT                        6%              8%              2%
New cloud-native apps           53%             60%             7%
Lift and shift existing apps    18%             35%             18%

*Note: This question was only asked to respondents who answered "Yes" to the question, "Does your organization run any Kubernetes clusters?"
Results (n=65):

Figure 6 (% of respondents reporting each area improved, neither, or worsened with Kubernetes): CI/CD, deploys in general, autoscaling, architectural refactoring, building microservices, security, app modularity, reliability, cost, and overall system design
*Note: These questions were only asked to respondents who answered "Yes" to the question, "Does your organization run any Kubernetes
clusters?" Additionally, these results ignore any responses where both "Improved" and "Worsened" were selected; < 5% of responses were
ignored this way per response option.
What pain points have you encountered while working with Kubernetes?
Results (n=60):
Figure 7 (Kubernetes pain points, % of respondents): learning or using kubectl, maintaining YAML files, performance tuning with microservices, CLI tooling, learning or using Helm, visualizing what's happening at runtime, security, and other (write-in)
Observations:
1. The most common Kubernetes pain points were "Performance tuning" (60%), "Maintaining YAML files" (55%), "Learning or
using Helm" (45%), and "Security" (44%). On average, respondents selected 3.03 pain points — with a median value of 3 —
out of the seven options listed.
2. As mentioned in the previous section, most respondents reported that security at their organization was neither
improved nor worsened by Kubernetes, yet close to half of respondents selected "Security" as a pain point.
Comparing these results, we found that a large majority of respondents who said that Kubernetes worsened security
at their organization found security to be a pain point (89%) — which is to be expected. But over half of respondents
who said that Kubernetes improved security found it to be a pain point as well (57%). 34% of respondents who believed
security was neither improved nor diminished found security to be a pain point. We believe these results may imply
Figures 8 and 9 (where respondents' Kubernetes clusters run in development and production environments): bare metal, virtual machines, both, or I don't know
*Note: These questions were only asked to respondents who answered "Yes" to the question, "Does your organization run any Kubernetes clusters?"
2. Almost all respondents said their Kubernetes clusters run on "Virtual machines" in both development (91%) and
production (88%) environments. In development environments, 62% said Kubernetes runs on virtual machines only, and
29% said they have clusters running on both bare metal and VMs. In production, 64% said Kubernetes runs on VMs only,
and 23% said it runs on both bare metal and VMs.
This seems to imply that, more often than not, the ease of use afforded by VMs supersedes any performance gains or
orchestration layer simplification that bare metal provides, and that when bare metal is needed, a hybrid bare metal/
VM approach will often make more sense than bare metal alone.
Results (n=66):

Figure 10 (workload types run on Kubernetes, % of respondents): web apps, CPU-intensive, GPU-intensive, memory-intensive, storage/database-intensive, non-web general compute, and other (write-in)

A companion chart (autoscalers used, % of respondents): Vertical Pod Autoscaler, Horizontal Pod Autoscaler, Cluster Autoscaler, and other (write-in)
Table 4: GPU-intensive workloads by autoscaler used

                   Vertical Pod    Horizontal Pod    Cluster       All
Autoscalers        Autoscaler      Autoscaler        Autoscaler    respondents
GPU-intensive      7%              8%                15%           7%
n=                 14              39                20            56
*Note: % of columns (table). The two aforementioned questions were only asked to respondents who answered "Yes" to the question,
"Does your organization run any Kubernetes clusters?"
Observations:
1. The vast majority of respondents said that their organizations run "Web applications" on Kubernetes (94%), which is
unsurprising considering the sheer volume of web apps currently being created or maintained. "Storage/database-
intensive" (42%), "CPU-intensive" (41%), and "Non-web general compute" (38%) workloads were moderately popular
options. Very few respondents said their organization uses Kubernetes for "GPU-intensive" workloads (9%).
2. Respondents at small organizations (< 100 employees) were significantly less likely than respondents at larger
organizations to report that their company uses Kubernetes for "Memory-intensive" workloads (6% vs. 38%) and "CPU-
intensive" workloads (24% vs. 48%), though it seems likely that these correlations stem from a higher likelihood that larger
organizations' applications deal with these types of workloads in the first place.
3. Horizontal autoscaling (67%) was a far more popular autoscaling method than cluster (35%) or vertical (24%) — a pattern
we have seen for the past two years as well.
This reinforces the hypothesis from our 2021 Kubernetes in the Enterprise "Key Research Findings" that, because
horizontal autoscalers are "perhaps the most opinionated, least requiring of accurate guessing during cluster
configuration," their overwhelming popularity "may suggest that Kubernetes is being used to pluck relatively low-
hanging cluster management fruit."
Future Research
Our analysis here only touched the surface of the available data, and we will look to refine and expand our Kubernetes survey as
we produce further Trend Reports. Some of the topics we didn't get to in this report, but were incorporated in our survey, include:
Please contact [email protected] if you would like to discuss any of our findings or supplementary data.
G. Ryan Spain, Freelance Software Engineer, former Engineer & Editor at DZone
@grspain on DZone, GitHub, and GitLab | gryanspain.com
G. Ryan Spain lives on a beautiful two-acre farm in McCalla, Alabama with his lovely wife and adorable
dog. He is a polyglot software engineer with an MFA in poetry, a die-hard Emacs fan and Linux user, a
lover of The Legend of Zelda, a journeyman data scientist, and a home cooking enthusiast. When he isn't
programming, he can often be found playing Super Auto Pets with a glass of red wine or a cold beer.
This year, Kubernetes celebrates the ninth anniversary of its initial release, a significant milestone for a project that has revolutionized the container orchestration space. In that time, Kubernetes has become the de facto standard for managing containers at scale. Its influence can be found far and wide, evident from various architectural and infrastructure design patterns for many cloud-native applications.
As one of the most popular and successful open-source projects in the infrastructure space, Kubernetes offers a ton of choices
for users to provision, deploy, and manage Kubernetes clusters and applications that run on them. Today, users can quickly spin
up Kubernetes clusters from managed providers or go with an open-source solution to self-manage them. The sheer number
of these options can be daunting for engineering teams deciding what makes the most sense for them.
In this Trend Report article, we will take a look at the current state of the managed Kubernetes offerings as well as options for
self-managed clusters. With each option, we will discuss the pros and cons as well as recommendations for your team.
All of the managed Kubernetes platforms take care of the control plane components such as kube-apiserver, etcd, kube-scheduler, and kube-controller-manager. However, the degree to which other aspects of operating and maintaining a Kubernetes cluster are managed differs for each cloud vendor.
For example, Google offers a more fully-managed service with GKE Autopilot, where Google manages the cluster's underlying
compute, creating a serverless-like experience for the end user. They also provide the standard mode where Google takes
care of patching and upgrading of the nodes along with bundling autoscaler, load balancer controller, and observability
components, but the user has more control over the infrastructure components.
On the other end, Amazon's offering is more of a hands-off, opt-in approach where most of the operational burden is offloaded
to the end user. Some critical components like CSI driver, CoreDNS, VPC CNI, and kube-proxy are offered as managed add-ons
but not installed by default.
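As one illustration of this opt-in model, here is a minimal sketch of an eksctl cluster configuration that declares a few of those managed add-ons explicitly; the cluster name and region are hypothetical placeholders, not taken from the article:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster    # hypothetical cluster name
  region: us-east-1     # hypothetical region
addons:
  # Each entry asks EKS to install the add-on and manage its lifecycle
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
  - name: aws-ebs-csi-driver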
By offloading much of the maintenance and operational tasks to the cloud provider, managed Kubernetes platforms can offer
users a lower total cost of ownership (especially when using something like a per-Pod billing model with GKE Autopilot) and
increased development velocity. Also, by leaning into cloud providers' expertise, teams can reduce the risk of misconfiguring Kubernetes security or fault tolerance in ways that could lead to costly outages. Since Kubernetes is so complex and notorious for its steep learning curve, using a managed platform to start out can be a great option to fast-track Kubernetes adoption.
Finally, while Kubernetes lends itself to application portability, there is still some degree of vendor lock-in by going with a
managed option that you should be aware of.
The biggest advantage of going the self-managed route is that you have complete control over how you want your Kubernetes
cluster to work. You can opt to run a small cluster without a highly available control plane for less critical workloads and save
on cost. You can customize the CNI, storage, node types, and even mix and match across multiple cloud providers if need be.
Finally, self-managed options are more prevalent in non-cloud environments, namely edge or on-prem.
On the other hand, operating a self-managed cluster can be a huge burden for the infrastructure team. Even though open-
source tools have come a long way to lower the burden, it still requires a non-negligible amount of time and expertise to justify
the cost against going with a managed option.
There are a few use cases where going with a self-managed Kubernetes option makes sense:
• If you need to run on-prem or on the edge, you may decide that the on-prem offerings from the cloud providers do not fit your needs. Running on-prem usually means that either cost was a huge factor or there is a tangible need to be on-prem (i.e., applications must run close to where they are deployed). In these scenarios, you likely already have an infrastructure team with significant Kubernetes experience or the luxury of growing that team in house.
• Even if you are not running on-prem, you may consider going with a self-managed option if you are running on multiple
clouds or a SaaS provider that must offer a flexible Kubernetes-as-a-Service type of product. While you can run different
variants of Kubernetes across clouds, it may be desirable to use a solution like Cluster API to manage multiple Kubernetes
clusters in a consistent manner. Likewise, if you are offering Kubernetes as a Service, then you may need to support more
than the managed Kubernetes offerings.
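To make the Cluster API idea concrete, here is a minimal sketch of a CAPI Cluster object. The name and the Docker infrastructure provider are illustrative assumptions; a real setup would pair this with provider-specific control plane and machine resources:

apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: edge-cluster-01                # hypothetical cluster name
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:                     # delegates control plane management
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: edge-cluster-01-control-plane
  infrastructureRef:                   # provider-specific infrastructure (Docker here, for local testing)
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: edge-cluster-01

The same Cluster shape works across providers (AWS, vSphere, bare metal, etc.), which is what makes it attractive for managing fleets consistently.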
Conclusion
Managing and operating Kubernetes at scale is no easy task. Over the years, the community has continually innovated and
produced numerous solutions to make that process easier. On one hand, we have massive support from major hyperscalers for
production-ready, managed Kubernetes services. Also, we have more open-source tools to self-manage Kubernetes if need be.
In this article, we went through the pros and cons of each approach, breaking down the state of each option along the way.
While most users will benefit from going with a managed Kubernetes offering, opting for a self-managed option is not only
valid but sometimes necessary. Make sure your team either has the expertise or the resources required to build it in house
before going with the self-managed option.
Additional reading:
• CNCF Survey 2019: Deployments Are Getting Larger as Cloud Native Adoption Becomes Mainstream
• "101+ Cloud Computing Statistics That Will Blow Your Mind (Updated 2023)" by Cody Slingerland, Cloud Zero
Yitaek is a software engineer at NYDIG, applying new cryptographic protocols to improve the custody of
Bitcoin. He formerly worked at Axoni and Leverege, mainly building internal development platforms and
architecting cloud infrastructure. He writes about cloud, DevOps/SRE, and crypto topics on DZone.
Cloud-native architecture is a transformative approach to designing and managing applications. This type of architecture
embraces the concepts of modularity, scalability, and rapid deployment, making it highly suitable for modern software
development. Though the cloud-native ecosystem is vast, Kubernetes stands out as its beating heart. It serves as a container
orchestration platform that helps with automatic deployments and the scaling and management of microservices. Some of
these features are crucial for building true cloud-native applications.
In this article, we explore the world of containers and microservices in Kubernetes-based systems and how these technologies
come together to enable developers in building, deploying, and managing cloud-native applications at scale.
Here are a few ways in which containers and microservices turn cloud-native architectures into a reality:
• Containers encapsulate applications and their dependencies. This encourages the principle of modularity and results in
rapid development, testing, and deployment of application components.
• Containers also share the host OS, resulting in reduced overhead and a more efficient use of resources.
• Since containers provide isolation for applications, they are ideal for deploying microservices. Microservices help in
breaking down large monolithic applications into smaller, manageable services.
• With microservices and containers, we can scale individual components separately. This improves the overall fault
tolerance and resilience of the application as a whole.
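As a minimal sketch of the points above, the following Deployment packages one hypothetical microservice (the name and image are placeholders) so it can be versioned, scaled, and rolled out independently of its siblings:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                       # one independently deployable microservice
spec:
  replicas: 3                        # scaled separately from other services
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080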
Despite their usefulness, containers and microservices also come with their own set of challenges:
• Managing many containers and microservices can become overly complex and create a strain on operational resources.
• Monitoring and debugging numerous microservices can be daunting in the absence of a proper monitoring solution.
• Networking and communication between multiple services running on containers is challenging. It is imperative to
ensure a secure and reliable network between the various containers.
SELF-HEALING
Resilience and fault tolerance are key properties
of a cloud-native setup. Kubernetes excels in
this area by continuously monitoring the health of containers and Pods. In case of any Pod failures, Kubernetes takes remedial actions to ensure the desired state is maintained. This means that Kubernetes can automatically restart containers, reschedule them to healthy nodes, and even replace failed nodes when needed.
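A minimal sketch of how this self-healing is typically wired up: liveness and readiness probes on a container (the paths, ports, and image here are assumptions) tell Kubernetes when to restart a container or hold traffic back from it:

apiVersion: v1
kind: Pod
metadata:
  name: payments
spec:
  containers:
    - name: payments
      image: registry.example.com/payments:1.4.2   # hypothetical image
      livenessProbe:                # failing this probe triggers a container restart
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:               # failing this probe removes the Pod from Service endpoints
        httpGet:
          path: /ready
          port: 8080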
SERVICE DISCOVERY
Service discovery is an essential feature of a microservices-based cloud-native environment. Kubernetes offers a built-in service
discovery mechanism. Using this mechanism, we can create services and assign labels to them, making it easier for other
components to locate and communicate with them. This simplifies the complex task of managing communication between
microservices running on containers.
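For instance, a Service like the sketch below (names assumed) gives the orders microservice a stable virtual IP and DNS name (other Pods can simply call http://orders), while the label selector keeps the endpoint list in sync as Pods come and go:

apiVersion: v1
kind: Service
metadata:
  name: orders                # resolvable in-cluster as orders.<namespace>.svc.cluster.local
spec:
  selector:
    app: orders               # matches the labels on the orders Pods
  ports:
    - port: 80                # port clients call
      targetPort: 8080        # port the container listens on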
SECURITY
Security is paramount in cloud-native systems, and Kubernetes provides robust mechanisms to ensure it. Kubernetes allows for fine-grained access control through role-based access control (RBAC), which ensures that only authorized users can access the cluster. Kubernetes also supports the integration of security scanning and monitoring tools to detect vulnerabilities at an early stage.
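As a small example of RBAC in practice, the Role and RoleBinding below (the namespace and user are hypothetical) grant read-only access to Pods in a single namespace and nothing more:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev               # hypothetical namespace
rules:
  - apiGroups: [""]            # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane                 # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io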
Another key advantage is scalability to support fluctuating workloads based on user demand. Cloud-native applications deployed on Kubernetes are inherently elastic, thereby allowing organizations to scale resources up or down dynamically.
Lastly, low latency is a must-have feature for delivering responsive user experiences. Otherwise, there can be a tremendous loss
of revenue. Cloud-native design principles using microservices and containers deployed on Kubernetes enable the efficient use
of resources to reduce latency.
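A minimal sketch of that elasticity, using the standard HorizontalPodAutoscaler; the target name and thresholds are assumptions, not values from the article:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:              # the workload being scaled
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU tops 70%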
The use of Kubernetes operators is gaining prominence for stateful applications. Operators extend the capabilities of
Kubernetes by automating complex application-specific tasks, effectively turning Kubernetes into an application platform.
These operators are great for codifying operational knowledge, creating the path to automated deployment, scaling, and
management of stateful applications such as databases. In other words, Kubernetes operators simplify the process of running
applications on Kubernetes to a great extent.
Lastly, GitOps and Infrastructure as Code (IaC) have emerged as foundational practices for provisioning and managing cloud-
native systems on Kubernetes. GitOps leverages version control and declarative configurations to automate infrastructure
deployment and updates. IaC extends this approach by treating infrastructure as code.
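As one hedged illustration of GitOps (using Argo CD, a popular open-source GitOps tool; the repository URL and paths are placeholders), an Application resource declares the desired state in Git and lets the controller keep the cluster in sync:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-config.git   # hypothetical repo
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc    # deploy into the same cluster
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state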
• Observability is a key practice that must be followed. Implementing comprehensive monitoring, logging, and tracing
solutions gives us real-time visibility into our cluster's performance and the applications running on it. This data is
essential for troubleshooting, optimizing resource utilization, and ensuring high availability.
• Resource management is another critical practice that should be treated with importance. Setting resource requests and limits for containers helps prevent resource contention and ensures stable performance for all the applications deployed on a Kubernetes cluster (see the sketch after this list). Failure to manage resources properly can lead to downtime and cascading issues.
• Configuring proper security policies is equally vital as a best practice. Kubernetes offers robust security features like
role-based access control (RBAC) and Pod Security Admission that should be tailored to your organization's needs.
Implementing these policies helps protect against unauthorized access and potential vulnerabilities.
• Integrating a CI/CD pipeline into your Kubernetes cluster streamlines the development and deployment process. This
promotes automation and consistency in deployments along with the ability to support rapid application updates.
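A minimal sketch of the resource management practice above; the image and values are illustrative and should be tuned per workload:

apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: registry.example.com/api:2.1.0   # hypothetical image
      resources:
        requests:            # what the scheduler reserves for the container
          cpu: 250m
          memory: 256Mi
        limits:              # hard ceiling enforced at runtime
          cpu: 500m
          memory: 512Mi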
Conclusion
This article has highlighted the significant role of Kubernetes in shaping modern cloud-native architecture. We've explored
key elements such as observability, resource management, security policies, and CI/CD integration as essential building blocks
for success in building a cloud-native system. With its vast array of features, Kubernetes acts as the catalyst, providing the
orchestration and automation needed to meet the demands of dynamic, scalable, and resilient cloud-native applications.
As readers, it's crucial to recognize Kubernetes as the linchpin in achieving these objectives. Furthermore, the takeaway is to
remain curious about exploring emerging trends within this space. The cloud-native landscape continues to evolve rapidly, and
staying informed and adaptable will be key to harnessing the full potential of Kubernetes.
Additional reading:
• CNCF Annual Survey 2021
• CNCF Blog
• "Why Google Donated Knative to the CNCF" by Scott Carey
• Getting Started With Kubernetes Refcard by Alan Hohn
• "The Beginner's Guide to the CNCF Landscape" by Ayrat Khayretdinov
I'm a full-stack architect, tech writer, and guest author in various publications. I have expertise building
distributed systems across multiple business domains such as banking, autonomous driving, and retail.
Throughout my career, I have worked at several large organizations. I also run a tech blog on cloud,
microservices, and web development, where I have written hundreds of articles. Apart from work, I enjoy reading books and
playing video games.
Kubernetes Today
The Growing Role of Serverless in Modern Kubernetes Clusters
Kubernetes, a true game-changer in the domain of modern application development, has revolutionized the way we manage containerized applications. Some people tend to think that Kubernetes is an opposing approach to serverless. This is probably because of the management burden involved in deploying applications to Kubernetes — the node management, service configuration, load management, etc. Serverless computing, celebrated for its autoscaling power and cost-efficiency, is known for its easy application development and operation. Yet, the complexities Kubernetes introduces have led to a quest for a more automated approach — this is precisely where serverless computing steps into Kubernetes.
In this exploration, we'll delve into the serverless trend advantages and highlight key open-source solutions that bridge the gap
between serverless and Kubernetes, examining their place in the tech landscape.
• Extensibility – Kubernetes offers custom resource definitions (CRDs) that empower developers to define and
manage complex application architectures according to their requirements.
• Ecosystem – Kubernetes fosters a rich ecosystem of tools and services, enhancing its adaptability to various
cloud environments.
• Declarative configuration – Kubernetes empowers developers through declarative configuration, which allows
developers to define desired states and lets the system handle the rest.
Figure 1: HorizontalPodAutoscaler
Nonetheless, these solutions are not without their constraints. HPA primarily relies on resource utilization metrics (e.g., CPU
and memory) for scaling decisions. For applications with unique scaling requirements tied to specific business logic or external
events, HPA may not provide the flexibility needed.
Furthermore, consider the challenge HPA faces in scaling down to zero Pods. Scaling down to zero Pods can introduce
complexity and safety concerns. It requires careful handling of Pod termination to ensure that in-flight requests or processes
are not disrupted, which can be challenging to implement safely in all scenarios.
When you combine the event-driven approach with serverless platforms, the benefits are twofold: You not only save on costs
by paying only for what you need, but you also enhance your app's user experience and gain a competitive edge as it syncs
with real-world happenings.
# Enclosing KEDA ScaledObject shown for context; the trigger below is the original configuration
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-consumer-scaler     # hypothetical name
spec:
  scaleTargetRef:
    name: rabbitmq-consumer          # hypothetical Deployment to scale
  minReplicaCount: 0                 # allows scale to zero when the queue is idle
  triggers:
    - type: rabbitmq
      metadata:
        host: amqp://localhost:5672/vhost
        protocol: auto
        mode: QueueLength
        value: "100.50"
        activationValue: "10.5"
        queueName: testqueue
        unsafeSsl: true
Let's go over this configuration: It sets up a trigger that monitors activity on the RabbitMQ queue testqueue. The value of "100.50" is the target queue length per replica that the autoscaler scales toward, while the activationValue of "10.5" governs activation: The trigger activates once the queue length exceeds 10.5 and deactivates (permitting scale to zero) when it drops back below that mark. The configuration includes the RabbitMQ server's connection details, using auto protocol detection and potentially unsafe SSL settings. This setup enables automated scaling in response to queue length changes.
The architecture achieves an effortlessly deployable and intelligent solution, allowing the code to concentrate solely on
essential business logic without the distraction of scalability concerns. This was just an example; the producer-consumer
serverless architecture can be implemented through a variety of robust tools and platforms other than KEDA. Let's briefly
explore another solution using Knative.
• Message broker – Selected message queue that seamlessly integrates as a Knative Broker like Apache Kafka or RabbitMQ.
• Producer – The producer component is responsible for generating the tasks and dispatching them to a designated
message queue within the message broker, implemented as Knative Service.
• Trigger – The Knative trigger establishes the linkage between the message queue and the consumer, ensuring a
seamless flow of messages from the broker to the consumer service.
• Consumer – The consumer component is configured to efficiently capture these incoming messages from the queue
through the Knative trigger, implemented as Knative Service.
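Tying the pieces above together, here is a minimal sketch of the Knative Trigger that routes task events from the broker to the consumer Service; the broker, event type, and service names are assumptions:

apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: task-consumer-trigger
spec:
  broker: default                    # hypothetical Knative Broker backed by Kafka or RabbitMQ
  filter:
    attributes:
      type: com.example.task.created # only forward events of this (assumed) type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: task-consumer            # the consumer Knative Service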
All of this combined results in an event-driven data processing application that leverages Knative's scaling capabilities. The
application automatically scales and adapts to the ever-evolving production requirements of the real world.
Indeed, we've explored solutions that empower us to design and construct serverless systems within Kubernetes. However, the
question that naturally arises is: What's coming next for serverless within the Kubernetes ecosystem?
It's worth highlighting that the open-source communities behind projects like KEDA and Knative are the driving force
behind their success. These communities of contributors, developers, and users actively shape the projects' futures, fostering
innovation and continuous improvement. Their collective effort ensures that serverless in Kubernetes remains dynamic,
responsive, and aligned with the ever-evolving needs of modern application development.
In short, these open-source communities promise a bright and feature-rich future for serverless within Kubernetes, making it
more efficient, cost-effective, and agile.
Gal Cohen is a software engineer at Firefly, boasting years of experience in cloud and engineering. She's
dedicated to disseminating her DevOps expertise through technical articles, videos, and social media.
Her passion lies in DevOps practices and cloud-native technologies. Before joining Firefly, Gal served in
the Elite Intelligence unit 8200.
Kubernetes Is Everywhere
By Daniel Stori, Software Development Manager at AWS
Passionate about computing since writing my first lines of code in Basic on Apple 2, I share my time
raising my young daughter and working on AWS Cloud Quest and AWS Industry Quest, a fun learning
experience based on 3D games. In my (little) spare time, I like to make comics related to programming,
operating systems, and funny situations in the routine of an IT professional.
Kubernetes streamlines cloud operations by automating key tasks, specifically deploying, scaling, and managing containerized
applications. With Kubernetes, you have the ability to group hosts running containers into clusters, simplifying cluster
management across public, private, and hybrid cloud environments.
AI/ML and Kubernetes work together seamlessly, simplifying the deployment and management of AI/ML applications.
Kubernetes offers automatic scaling based on demand and efficient resource allocation, and it ensures high availability
and reliability through replication and failover features. As a result, AI/ML workloads can share cluster resources efficiently
with fine-grained control. Kubernetes' elasticity adapts to varying workloads and integrates well with CI/CD pipelines for
automated deployments. Monitoring and logging tools provide insights into AI/ML performance, while cost-efficient resource
management optimizes infrastructure expenses. This partnership streamlines the AI/ML development process, making it agile
and cost-effective.
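As a small illustration of how an AI/ML workload claims cluster resources, the Pod below requests a GPU via the standard device-plugin resource name; the image and job specifics are placeholders, and a GPU device plugin must be installed on the nodes:

apiVersion: v1
kind: Pod
metadata:
  name: model-training
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: registry.example.com/ml/trainer:latest   # hypothetical training image
      resources:
        limits:
          nvidia.com/gpu: 1     # schedules the Pod onto a node with a free GPU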
Let's look into some real-world use cases to better understand how companies and products can benefit from Kubernetes and AI/ML.
Table 1

Recommendation systems – Personalized content recommendations in streaming services, e-commerce, social media, and news apps
Image and video analysis – Automated image and video tagging, object detection, facial recognition, content moderation, and video summarization
Natural language processing (NLP) – Sentiment analysis, chatbots, language translation, text generation, voice recognition, and content summarization
Anomaly detection – Identifying unusual patterns in network traffic for cybersecurity, fraud detection, and quality control in manufacturing
Healthcare diagnostics – Disease detection through medical image analysis, patient data analysis, drug discovery, and personalized treatment plans
Autonomous vehicles – Self-driving cars use AI/ML for perception, decision-making, route optimization, and collision avoidance
Financial fraud detection – Detecting fraudulent transactions in real time to prevent financial losses and protect customer data
Energy management – Optimizing energy consumption in buildings and industrial facilities for cost savings and environmental sustainability
Customer support – AI-powered chatbots, virtual assistants, and sentiment analysis for automated customer support, inquiries, and feedback analysis
Supply chain optimization – Inventory management, demand forecasting, and route optimization for efficient logistics and supply chain operations
Agriculture and farming – Crop monitoring, precision agriculture, pest detection, and yield prediction for sustainable farming practices
Language understanding – Advanced language models for understanding and generating human-like text, enabling content generation and context-aware applications
Medical research – Drug discovery, genomics analysis, disease modeling, and clinical trial optimization to accelerate medical advancements
Now, let's examine security and how it lives in Kubernetes and AI/ML.
AI/ML workloads on Kubernetes require a strong security foundation, and robust security practices are essential to safeguard data in these environments. This includes data encryption at rest and in transit, access control mechanisms, regular
security audits, and monitoring for anomalies. Additionally, Kubernetes offers features like role-based access control (RBAC)
and network policies to restrict unauthorized access.
• Access control – Set RBAC for user permissions, create dedicated service accounts for ML workloads, and apply network policies to control communication.
• Image security – Only allow trusted container images, and keep container images regularly updated and patched.
• Secrets management – Securely store and manage sensitive data (Secrets), and implement regular Secret rotation (see the sketch after this list).
• Network security – Segment your network for isolation, and enforce network policies for Ingress and egress traffic.
• Vulnerability scanning – Regularly scan container images for vulnerabilities.
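A minimal sketch of the Secrets management item above; the names and values are placeholders, and in practice you would prefer an external Secret manager or encryption at rest:

apiVersion: v1
kind: Secret
metadata:
  name: model-registry-credentials   # hypothetical Secret for an ML workload
type: Opaque
stringData:                          # stringData is base64-encoded by the API server on write
  username: svc-ml
  password: replace-and-rotate-me
---
# Consuming the Secret as environment variables in a (hypothetical) training Pod:
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  containers:
    - name: trainer
      image: registry.example.com/ml/trainer:latest
      envFrom:
        - secretRef:
            name: model-registry-credentials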
• TensorFlow – An open-source ML framework that provides tf.distribute.Strategy for distributed training. Kubernetes
can manage TensorFlow tasks across a cluster of containers, enabling distributed training on extensive datasets.
• PyTorch – Another widely used ML framework that can be employed in a distributed manner within Kubernetes clusters.
It facilitates distributed training through tools like PyTorch Lightning and Horovod.
• Horovod – A distributed training framework, compatible with TensorFlow, PyTorch, and MXNet, that seamlessly integrates
with Kubernetes. It allows for the parallelization of training tasks across multiple containers.
These are just a few of the many great platforms available. Finally, let's summarize how we can benefit from using AI and
Kubernetes in the future.
Conclusion
In this article, we reviewed real-world use cases spanning various domains, including healthcare, recommendation systems,
and medical research. We also went into a practical example that illustrates the application of AI/ML and Kubernetes in a
medical research use case.
Kubernetes and AI/ML are essential together because Kubernetes provides a robust and flexible platform for deploying,
managing, and scaling AI/ML workloads. Kubernetes enables efficient resource utilization, automatic scaling, and fault
tolerance, which are critical for handling the resource-intensive and dynamic nature of AI/ML applications. It also promotes
containerization, simplifying the packaging and deployment of AI/ML models and ensuring consistent environments across all
stages of the development pipeline.
Overall, Kubernetes enhances the agility, scalability, and reliability of AI/ML deployments, making it a fundamental tool in
modern software infrastructure.
I'm a certified senior software and cloud architect with solid experience designing and developing
complex solutions based on the Azure, Google, and AWS clouds. I have expertise in building distributed
systems and frameworks based on Kubernetes and Azure Service Fabric. My areas of interest include
enterprise cloud solutions, edge computing, high-load applications, multi-tenant distributed systems, and IoT solutions.
Kubernetes security is essential in today's digital landscape. With the increasing adoption of containerization and
microservices, Kubernetes has become the go-to solution for orchestrating and managing containers. However, this also
means that it has become a target for attackers, making Kubernetes security a top priority. The dynamic and complex nature
of Kubernetes requires a proactive and comprehensive approach to security. This involves securing the Kubernetes cluster
itself, the workloads running on it, and the entire CI/CD pipeline. It's important to ensure secure configurations, enforce least
privilege access, isolate workloads, scan for vulnerabilities regularly, and encrypt sensitive data.
This article will serve as a comprehensive guide to Kubernetes security, aimed at helping developers protect their applications
and data.
Considerations
Before diving into key security considerations,
it's crucial to understand the architecture. In
Kubernetes, the control plane communicates
with nodes via the Kubernetes API, which the
API server exposes. Nodes use the kubelet
to report back to the control plane and
communicate with etcd to read configuration
details or write new values.
CONTROL PLANE
The control plane (formerly known as the
master node) is responsible for managing the
Kubernetes cluster. It is the entry point for
all administrative tasks. Components of the
control plane include the API server, controller
manager and scheduler, and etcd.
• API server (kube-apiserver) – Use role-based access control (RBAC) to limit who can access the API server and what actions they can perform. Enable audit logs to track and analyze every request made to the API server. Use transport layer security (TLS) for all API server traffic.
• Controller manager (kube-controller-manager) and scheduler (kube-scheduler) – These components should only be accessible by administrators. Use TLS for connections and ensure they are only accessible over the local network.
• etcd – This is one of the most critical components from a security perspective, as it stores all cluster data. It should be accessible only by the API server. Protect it with strong access controls and encryption, both in transit and at rest.
NODES
Nodes (formerly known as worker nodes) run the actual workloads. Each node contains the necessary services to manage
networking between containers, communicate with the control plane, and assign resources to containers. Components of a
node include the kubelet, kube-proxy, and container runtime.
• kubelet – The kubelet can be a potential attack surface. Limit API access to the kubelet and use TLS for connections.
• kube-proxy – This component should be secured by ensuring it can only be accessed by the control plane components.
• Container runtime – Ensure you're using secure, up-to-date container images. Regularly scan images for vulnerabilities.
Use Pod Security Admission to limit a container's access to resources.
PODS
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. A Pod encapsulates
an application's container (or multiple containers), storage resources, a unique network IP, and options that govern how the
container(s) should run.
KUBERNETES HARDENING
Kubernetes hardening involves implementing robust security measures — including access control, network policies, audit
logging, and regular updates — to enhance the resilience and protection of Kubernetes clusters against potential threats
and vulnerabilities.
• RBAC – Implement RBAC to regulate access to your Kubernetes API. Assign the least privilege necessary to users, groups,
and service accounts. Kubernetes itself provides RBAC as a built-in mechanism for access control.
• Network policies – Define network policies to dictate which Pods can communicate with each other. This acts as a basic firewall for your Pods (see the sketch after this list). You can use Project Calico or Cilium for implementing network policies.
• etcd security – Configure etcd peer-to-peer communication and client-to-server communication with mutual TLS.
Enable etcd's built-in authentication and RBAC support.
• Audit logging – Enable audit logging in the API server using the --audit-log-path flag. Define your audit policy to
record the necessary level of detail. Fluentd or Fluent Bit are often used for processing Kubernetes audit logs.
• Update and patch – Regularly apply patches and updates to your Kubernetes components to protect against known
vulnerabilities using the Kubernetes built-in mechanism.
• Admission controllers – Admission controllers are built-in plugins that help govern how the cluster is used. Enable specific admission controllers like AlwaysPullImages to ensure images are always pulled from the registry. (Older guidance also recommended DenyEscalatingExec to prevent granting a Pod more privileges than its parent, though it has since been deprecated in favor of policy-based controls such as Pod Security Admission.)
• Secure CI/CD pipelines – CI/CD pipelines are commonly used in Kubernetes deployments. A DevSecOps approach
ensures that these pipelines are secure and free of vulnerabilities by integrating security checks and tests at every step of
the pipeline. Use practices like static code analysis, dynamic analysis, and dependency checks at the coding and building
stages.
• Configuration management – Kubernetes configurations can be complex, and misconfigurations can lead to security
vulnerabilities. DevSecOps practices involve managing and reviewing these configurations continuously to ensure
security. Use automated configuration management tools, like Ansible or Terraform, to ensure consistent and secure
configurations. Regularly audit and update configurations as necessary.
• Image assurance – Ensuring that the container images you're using in your Kubernetes deployments are from trusted
sources, not tampered with, and free of known vulnerabilities is critical. Use Docker Content Trust or Notary to sign your
images and verify signatures before deployment. Use private registries like Harbor or Quay, and secure them using TLS
and access controls.
• Dependency management – Kubernetes applications will likely depend on external libraries and components. It's
important to ensure these dependencies are secure and up to date. Regularly audit your dependencies for vulnerabilities
using tools like OWASP Dependency-Check.
• Secure build processes – The tools and processes used to build your application and create your container images need
to be secure. This could involve securing your CI/CD pipelines and using signed images. Use DevOps tools like Jenkins or
CircleCI to ensure they are properly secured and updated.
• Secrets management – Safely manage sensitive information such as API keys, passwords, and certificates. Use
Kubernetes Secrets or external Secret management tools to store and distribute Secrets securely.
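A minimal sketch of the network policy hardening mentioned above; the namespace, labels, and port are assumptions. It allows only frontend Pods to reach the API Pods and implicitly denies all other Ingress to them:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: production        # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: api                 # policy applies to the API Pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only frontend Pods may connect
      ports:
        - protocol: TCP
          port: 8080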
GOVERNANCE
Governance in Kubernetes security ensures the implementation of policies, access controls, and best practices, fostering a
secure ecosystem for managing containerized applications and safeguarding sensitive data within Kubernetes clusters.
• Policy review – Regularly review and update your security policies to keep them aligned with the latest security best
practices and compliance requirements. Tools like kube-score or kube-bench (Go application that checks whether
Kubernetes is deployed securely) can be used to assess how well security policies are being followed.
• Documentation – Document all security procedures and ensure your team is aware of them. Use a centralized, version-
controlled repository like GitHub for your documentation.
• Compliance audit – Regularly audit your cluster for compliance with your security policies. Use tools like kube-bench or
kube-score for automated checks.
• Namespaces – Use Kubernetes' built-in namespaces to segregate different projects or teams. Apply RBAC and network
policies at the namespace level to enforce access and communication restrictions.
• Collaborative vendor security – For third-party services or vendors within your Kubernetes ecosystem, ensure they
adhere to robust security practices. Regularly review and validate security protocols to maintain a secure supply chain.
OTHER CONSIDERATIONS
In addition to fundamental security practices, several advanced considerations are vital for a robust Kubernetes security strategy:
• Monitoring – Use a comprehensive monitoring solution like Prometheus or Grafana to monitor your cluster. Set up
alerts for any signs of suspicious activity.
• Incident response – Have an incident response plan in place. This should include steps for identifying, isolating, and
mitigating security incidents. The ELK (Elasticsearch, Logstash, Kibana) or EFK (Elasticsearch, Fluentd, Kibana) stacks
can be used for log management and analysis during incident response.
• Backup – Regularly back up your etcd data using etcd's built-in snapshot feature.
• Resource quotas – Use resource quotas and limit ranges to prevent any one application from consuming too many
cluster resources.
• Service mesh – Consider using a service mesh for additional security and observability features. This can provide
mutual TLS, fine-grained traffic control, and detailed telemetry data. Istio and Linkerd are popular open-source service
mesh implementations.
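For example, with Istio, a single PeerAuthentication resource (namespace assumed) is enough to require mutual TLS for all workload-to-workload traffic in a namespace; a minimal sketch:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production    # hypothetical namespace
spec:
  mtls:
    mode: STRICT           # reject any plaintext (non-mTLS) traffic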
To summarize, Kubernetes security is essential and requires a continuous, proactive approach. By combining robust security
practices with a strong security culture, organizations can leverage Kubernetes' benefits while minimizing security risks.
Akanksha specializes in cloud and application security, TDR, and vulnerability management. As a senior
member of the corporate governance team, she oversees the third-party cybersecurity. Her expertise
lies in managing relationships while also architecting and analyzing application designs. Additionally,
she is an active participant in cybersecurity communities like GIAC Advisory Board and IEEE. View her professional website
to learn more.
The financial intricacies of Kubernetes deployments demand more than reactive measures alone. Organizations have a
choice: react to costs as they arise or employ FinOps (financial operations) practices to anticipate and manage expenditures
proactively. Yet the road to efficient Kubernetes FinOps is far from one-dimensional. It's an ever-evolving practice that must
be fine-tuned according to operational realities and architectural demands. If a certain cost model continues to yield returns
without overwhelming resources, perhaps it's due for scaling. Conversely, a recurring budgetary shortfall may signal the need
for an extensive financial overhaul.
In this article, we delve into the multifaceted complexities of a distributed Kubernetes ecosystem and cost implications. We also
discuss the recommended FinOps practices for Kubernetes that offer guidance on their seamless integration into overarching
financial and operational frameworks.
CLUSTER DISTRIBUTION
Traditional, centralized-data-center models are now less relevant. Instead, deploying Kubernetes clusters across multiple
regions and cloud providers is the standard approach. While this aids in high availability and fault tolerance, it introduces
financial nuances as regional variations in resource pricing can skew budget forecasts. The crux of the challenge lies in regional
resource pricing variances and the costs associated with data egress — often hidden fees that only surface when closely
scrutinized. Additionally, latency between clusters can result in performance issues, necessitating more robust — and costly —
solutions to maintain service levels.
MICROSERVICES ARCHITECTURE
Besides being an architectural pivot, microservices often result in a considerable shift in your expense structure. Disaggregating a monolith into microservices means each service needs its own set of resources and policies for autoscaling, resiliency, and network Ingress/egress. This disintegration amplifies the volume of Pods and containers, each becoming its own line item on your budget. Service meshes, such as Istio or Linkerd, which are used to facilitate inter-service communication, add an extra layer of complexity and ultimately lead to higher costs.
RESOURCE HETEROGENEITY
Kubernetes helps you orchestrate a variety of resource types, including VM-based workloads, serverless functions, or managed
databases. The diversity is considered great for performance; however, since each resource type comes with its own pricing
model, the heterogeneity complicates the precise correlation of resource usage and cost allocation. In addition, not all
resources are billed the same way — some might incur costs per request, others per minute or per GB of data transferred. This
fragmentation calls for advanced tagging and granular monitoring tools to demystify your operational expenses.
MULTI-TENANCY
As enterprises scale, the practice of sharing Kubernetes cluster resources among multiple teams or projects — known as
multi-tenancy — becomes more prevalent. While this strategy can be cost-efficient, it raises concerns around security and
isolation. Resource quotas and limits must be set to prevent a noisy neighbor problem, where one team's activities consume the resources of others. Isolated namespaces can help, but what about shared costs like cluster-level logging or
monitoring? This balancing act ultimately has its own cost implications, making it vital to monitor usage carefully to ensure
equitable distribution of costs among tenants.
Finance Governance
When resource heterogeneity and regional
pricing variations complicate the cost equation,
visibility becomes paramount. FinOps bridges the
gap between IT and finance, empowering teams
to derive more value from their cloud spend.
Adopt advanced tagging and cost allocation methods for attributing costs to specific projects, departments, or teams. Once
metrics are scoped, the next logical step is to delve into the tools built to track them effectively.
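Before looking at those tools, here is a minimal sketch of such tagging via Kubernetes labels; the team and cost-center values are hypothetical, and cost tools can then group spend by these labels:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  labels:
    team: payments          # hypothetical owning team
    cost-center: cc-1042    # hypothetical cost center for chargeback
    env: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
        team: payments      # repeat cost labels on Pods so per-Pod usage is attributable
        cost-center: cc-1042
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:3.2.0   # hypothetical image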
The following table lists some open-source FinOps tools. Each tool brings its own set of capabilities and focuses on distinct
metrics that are essential to measure both financial and operational benchmarks. A typical approach is to integrate them
together to form a robust, open-source stack for FinOps in Kubernetes environments.
Table 1
RESOURCE OPTIMIZATION
Resource optimization in FinOps parlance goes beyond simple cost cutting; it helps you extract maximum value from your deployments. Through predictive analytics and continuous performance monitoring, FinOps tools can identify
underutilized resources and suggest consolidation. Achieving optimal financial governance in Kubernetes demands a three-
pronged approach:
• Over-provisioning a container wastes compute cycles, just as under-provisioning can result in sluggish performance.
Consider right-sizing containers and Pods to strike the right balance between cost control and operational efficacy.
• Accumulated idle resources drain budgets without contributing to productivity. Effective management of these
dormant assets recaptures value, streamlining your financial operations.
• Scaling of resources should align with demand curves, ultimately ensuring that you pay only for what you actually use. Utilize solutions such as Kubernetes' native HorizontalPodAutoscaler or third-party offerings to dynamically adjust resource allocation, as in the sketch below.
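As a concrete illustration of that last point, here is a minimal HorizontalPodAutoscaler sketch; the Deployment name checkout-api, the replica bounds, and the 70% CPU target are assumptions for illustration:

# Scales a hypothetical Deployment between 2 and 10 replicas based on CPU.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-api        # hypothetical workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% average CPU

Because replicas scale back down when demand recedes, spend tracks the demand curve rather than its peak.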
BUDGET FORECASTING
When it comes to budget forecasting for a Kubernetes setup, the best approach aligns overarching key performance indicators (KPIs) with granular system metrics. This multi-layered approach enriches your financial strategy, adding depth and detail to fiscal planning. Kubernetes namespaces serve as effective categorization tools, breaking your costs down to project-level granularity. Metrics from tools like Prometheus or Grafana can further refine your budget models by providing insights into resource utilization. This facilitates agile budgeting practices, enabling dynamic allocation of funds to projects based on their real-time resource consumption.
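One hedged way to feed such a model is a Prometheus recording rule that pre-computes per-namespace usage. In this sketch, the rule group name is arbitrary, and the expression assumes the standard cAdvisor metric container_cpu_usage_seconds_total is already being scraped:

# Pre-computes hourly CPU consumption per namespace for cost reporting.
groups:
  - name: finops-recording-rules          # illustrative group name
    rules:
      - record: namespace:container_cpu_usage_seconds:rate1h
        expr: |
          sum by (namespace) (
            rate(container_cpu_usage_seconds_total{container!=""}[1h])
          )

Multiplying the recorded series by your per-core price yields a rough per-namespace cost curve that can be charted in Grafana alongside the corresponding business KPIs.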
Perhaps the most pivotal aspect of budget forecasting is the integration of system metrics with business KPIs. Metrics such as
CPU usage, memory allocation, and I/O operations not only indicate system performance but also translate into quantifiable
costs. This integration yields a multi-dimensional financial strategy that accommodates both operational realities and business
objectives. For instance, a KPI focused on maximizing application uptime would directly influence budget allocations toward
fault tolerance and high availability solutions.
A role-based approach is further strengthened by implementing resource constraints in Kubernetes, using features like
resource quotas, limit ranges, and network policies. These guardrails help implement "soft" and "hard" limits to prevent
resource overutilization.
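Complementing namespace-wide quotas, a LimitRange enforces per-container defaults and ceilings. This sketch again assumes a team-a namespace, and the values are placeholders:

# Per-container defaults and hard ceilings for one namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: team-a
spec:
  limits:
    - type: Container
      defaultRequest:       # applied when a container omits requests
        cpu: 250m
        memory: 256Mi
      default:              # applied when a container omits limits
        cpu: 500m
        memory: 512Mi
      max:                  # hard ceiling per container
        cpu: "2"
        memory: 2Gi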
Following the definition of roles and limits, FinOps policies are the pillars upon which everything else is built. These codified guidelines act as the governance playbook, aligning financial planning with operational strategy. From outlining minimum security standards to delineating the approval process for resource scaling, these policies are your rulebook for fiscal control.
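Such policies can themselves be expressed as code with an engine like Kyverno or Open Policy Agent (both appear in the directory below). The following Kyverno sketch, adapted from common community patterns, rejects Pods that omit CPU and memory requests and limits; treat the exact schema details as assumptions to verify against your Kyverno version:

# Hedged Kyverno policy sketch: require requests and limits on Pods.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits   # illustrative policy name
spec:
  validationFailureAction: Enforce
  rules:
    - name: validate-resources
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "CPU and memory requests and limits are required."
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    cpu: "?*"     # "?*" matches any non-empty value
                    memory: "?*"
                  limits:
                    cpu: "?*"
                    memory: "?*"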
Conclusion
The success of a FinOps practice in Kubernetes is shaped by various factors, from distributed services and multi-tenancy to
compliance and security. While these complexities bring challenges, they also offer opportunities for refined cost control and
performance optimization.
However, mastering these variables requires a continuous process of calibration and readjustment. This doesn't undermine
the significance of FinOps practices, though. On the contrary, it emphasizes the need to augment them with specialized tools,
granular analytics, and team collaboration. Such a comprehensive stance fosters a culture that prioritizes fiscal prudence,
maximizes efficiency, and innovates in the face of Kubernetes' financial complexities.
Sudip Sengupta is a TOGAF Certified Solutions Architect with more than 18 years of experience working
for global majors such as CSC, Hewlett Packard Enterprise, and DXC Technology. Sudip now works as a
full-time tech writer, focusing on Cloud, DevOps, SaaS, and cybersecurity. When not writing or reading,
he's likely on the squash court or playing chess.
Solutions Directory
This directory contains Kubernetes and cloud-native tools to assist with deployment, management,
monitoring, and cost optimization. It provides pricing data and product category information gathered from
vendor websites and project pages. Solutions are selected for inclusion based on several impartial criteria,
including solution maturity, technical innovativeness, relevance, and data availability.
Company | Product | Purpose | Availability | Website
Ambassador Labs | Edge Stack API Gateway | Kubernetes-native API gateway | Free tier | getambassador.io/products/edge-stack/api-gateway
Teleport | Teleport Access Platform | Secure infrastructure access | Open source | goteleport.com/kubernetes-access
Aqua Security | Aqua CNAPP | Cloud-native security platform | By request | aquasec.com/aqua-cloud-native-security-platform
Aqua Security | kube-bench | Kubernetes compliance check with CIS benchmark | Open source | github.com/aquasecurity/kube-bench
Chaos Mesh | Chaos Mesh | Cloud-native chaos engineering platform | Open source | chaos-mesh.org
Cilium | Cilium | eBPF-based networking, observability, security | Open source | cilium.io
Cilium | Hubble | Network, service, and security observability for Kubernetes | Open source | github.com/cilium/hubble
Circle Internet Services | CircleCI | CI/CD platform | Free tier | circleci.com
Couchbase | Autonomous Operator | Containerized Couchbase | Free | couchbase.com/products/cloud/kubernetes
CubeFS | CubeFS | Cloud-native unstructured data storage | Open source | cubefs.io
DigitalOcean | DigitalOcean Kubernetes | Managed Kubernetes clusters | By request | digitalocean.com/products/kubernetes
F5 | Aspen Service Mesh | Istio-based service mesh | By request | f5.com/products/aspen-service-mesh
F5 | NGINX Ingress Controller | Kubernetes-native API gateways, load balancers, and Ingress controllers | Trial period | nginx.com/products/nginx-ingress-controller
Fairwinds | Fairwinds Insights | Kubernetes security and governance | Free tier | fairwinds.com/insights
Fluent Bit | Fluent Bit | End-to-end observability pipeline | Open source | fluentbit.io
Flux | Flagger | Kubernetes progressive delivery operator | Open source | flagger.app
Flux | Flux | Kubernetes continuous delivery | Open source | fluxcd.io
Grafana Labs | Grafana Cloud | Analytics and monitoring tool | Free tier | grafana.com/products/cloud
harvesterhci.io | Harvester | Cloud-native hyperconverged infrastructure | Open source | harvesterhci.io
HashiCorp | Terraform | Automated configuration management tool | Open source | terraform.io
IBM | Cloud Kubernetes Service | Managed Kubernetes platform | Trial period | ibm.com/cloud/kubernetes-service
KEDA | KEDA | Kubernetes-based, event-driven autoscaler | Open source | keda.sh
Keptn | Keptn | Cloud-native application lifecycle orchestration | Open source | lifecycle.keptn.sh
KubeEdge | KubeEdge | Kubernetes-native edge computing framework | Open source | kubeedge.io
Kyverno | Kyverno | Kubernetes-native policy management | Open source | kyverno.io
Mirantis | Mirantis Secure Registry | Enterprise container registry | By request | mirantis.com/software/mirantis-secure-registry
New Relic | Pixie | Kubernetes observability | Free tier | newrelic.com/platform/kubernetes-pixie
Nutanix | Nutanix Kubernetes Engine | Kubernetes management platform | By request | nutanix.com/products/kubernetes-engine
Ondat | Ondat | Data mesh for block storage | Open source | docs.ondat.io
Open Policy Agent | Open Policy Agent | Policy engine | Open source | openpolicyagent.org
PlanetScale | Vitess Operator | Kubernetes operator for Vitess | Open source | github.com/planetscale/vitess-operator
Portworx | Portworx Data Services | Kubernetes DBaaS platform | By request | portworx.com/products/portworx-data-services
Portworx | Portworx Enterprise | Kubernetes storage platform | By request | portworx.com/products/portworx-enterprise
Red Hat | Ansible | Automated configuration management tool | Free tier | ansible.com
Release Technologies | Release Delivery | Container-based EaaS platform | By request | prod.releasehub.com/product/release-delivery
Snyk | Snyk Container | Container and Kubernetes security | Free tier | snyk.io/product/container-vulnerability-management
Sumo Logic | Sumo Logic | Cloud-native SaaS analytics | Free tier | sumologic.com
Tigera | Calico Enterprise | Zero trust security for Kubernetes | By request | tigera.io/tigera-products/calico-enterprise
Traefik Labs | Traefik Enterprise | Unified API gateway and Ingress | Trial period | traefik.io/traefik-enterprise
Weaveworks | Weave GitOps Enterprise | GitOps continuous delivery for Kubernetes | By request | weave.works/product/gitops-enterprise