Trend Report
Kubernetes in the Enterprise
Modernization at Scale
Brought to You in Partnership With
Highlights & Introduction
By Kara Phelps
TREND PREDICTIONS
▶▶ Kubernetes has seen staggering growth in the five years since its inception, and there are no signs of its popularity waning anytime soon.
▶▶ Developers with advanced skills in Kubernetes will be in increasingly high demand.
▶▶ As the enterprise world begins to modernize legacy systems with the help of Kubernetes, security will become an even more important part of the K8s development and deployment pipeline.

This year marks the fifth birthday for Kubernetes, today's de facto open source container orchestration system for much of the tech industry. It's now the largest proof of concept for open source development.

In just five years, believe it or not, Kubernetes has become virtually indispensable to many aspects of modern software. A thriving ecosystem of related tools has grown around it. Some of the largest modern tech companies now rely on Kubernetes (also known as K8s) to scale quickly and to provide stateless, flexible infrastructure.

Still, enterprise and legacy organizations frequently struggle with K8s adoption and long-term maintenance. Strong enough security protocols are an ongoing concern, as well. Teams often abandon Kubernetes due to its steep learning curve and the complexities of installation and daily operation. The tech world needs more developers to be highly skilled in K8s.
For all these reasons and more, the DZone team decided to publish
our first-ever Kubernetes Trend Report with a specific focus on
applications in the enterprise environment. We’re exploring the challenges of container orchestration at the enterprise level,
diving into a few ways to automate security in the Kubernetes pipeline, and examining enterprise Kubernetes deployment from
the ground floor up.
Kosmas Pouianou opens the report with an article covering the history of K8s and containers, as well as musings on their future, including example use cases from the enterprise world.
In “Automating Open Source Security in Kubernetes Throughout the DevOps Pipeline,” Shiri Ivtsan addresses one of the most
overlooked challenges encountered when integrating K8s into the software development lifecycle: open source security.
DZone Research
We also have the results from a recent reader survey on Kubernetes and containers. In the Key Research Findings later in this
report, we’re sharing what we’ve learned about how and why software organizations now use K8s. Usage rates have exploded
from 2018 to 2019, and we look at the usage rate through the lens of organization size. We also delve further into the data to
explore how many containers are typically being run in production, as well as the most common reasons for using container
orchestration tools.
Kubernetes has seen staggering growth in five short years, and its story is far from over. There are no signs of its popularity
waning anytime soon. Thank you for your interest in bringing K8s to the enterprise as part of the next phase of its evolution — and
thank you for downloading this report to explore further. Drop us a line and let us know what you think of it; we’re always glad to
hear your feedback.
Kubernetes in the Enterprise
By Kosmas Pouianou, Data Engineer at SAP
• Resource Efficiency and Speed: One of the key features of containers is that, unlike virtual machines (VMs), they don't virtualize the hardware; rather, they just virtualize the OS, allowing multiple containers to share OS resources. Essentially, this means many more containers can run simultaneously on the same machine, considerably lowering costs. At the same time, containers are very fast to start up. If you've ever heard of the "serverless" buzzword, this is what makes it at all possible.

[Figure: the container stack: containers share the host OS on top of the underlying infrastructure]
While containers on their own bring a lot to the table, the industry-changing benefits only become apparent when one takes the
next logical step: container orchestration. This is exactly where K8s comes in.
Container orchestration solves several challenges that arise from such an architecture, for example, scheduling containers across machines, scaling them up and down, rolling out updates, and automatically restarting failed containers (self-healing).
It’s obvious that these features are among the pillars of the modern cloud, and partly explain why K8s has become ubiquitous. It
is equally easy to see why this approach is a perfect fit for stateless apps in particular. But what about the needs of big
enterprise systems?
K8s tries to address this mainly through Volumes, Persistent Volumes, and StatefulSets. In practice, all these options are great to
have and cover many scenarios; but, for the time being, there are still many that they do not, and the sheer complexity of
containerizing stateful apps, in general, often outweighs the benefits in production scenarios. The question of managing storage
and containerizing stateful apps is a very hot topic, and there's a lot of effort being put in this direction in the industry (e.g., Ceph,
Rook, KubeDirector, KubeDB, Red Hat's Operator Framework).
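For illustration, here is a minimal sketch of the StatefulSet-plus-PersistentVolume pattern mentioned above (names, image, and sizes are hypothetical; a matching headless Service is assumed to exist):

```yaml
# A minimal StatefulSet sketch: each replica gets its own PersistentVolumeClaim
# via volumeClaimTemplates, so its state survives pod rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db                # hypothetical name
spec:
  serviceName: demo-db         # headless Service providing stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: postgres:11   # any stateful workload image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # one PVC per replica: data-demo-db-0, -1, -2
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```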
Security: This is a big deal in the enterprise world. Despite their many advantages, containers do not offer the same level of
isolation as VMs. Multi-tenancy, in particular, can be a challenge. Again, there is a lot of effort being put into making containers
more secure; a great example being Google open-sourcing gVisor in a bid to bring better isolation to containers — and it
integrates nicely with K8s.
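For illustration, gVisor plugs into K8s through a RuntimeClass, assuming the nodes' container runtime has already been configured with gVisor's runsc handler (older clusters use the node.k8s.io/v1beta1 API):

```yaml
# Opt individual pods into the stronger gVisor sandbox via a RuntimeClass.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc               # the gVisor runtime installed on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app        # hypothetical
spec:
  runtimeClassName: gvisor   # this pod's containers run inside gVisor
  containers:
    - name: app
      image: nginx
```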
Multi-cloud: Enabling hybrid/cloud deployments and avoiding vendor lock-in are key requirements for the modern enterprise.
This poses significant technical challenges which cannot be addressed by a simple tool, but typically require a combination of
technologies and architectural approaches. This is one of the reasons why we’ve seen a growth in enterprise K8s offerings, such
as OpenShift, Docker Enterprise, and Google’s Anthos.
Commercial Offerings

[Figure: the ecosystem around open source Kubernetes, with commercial offerings and enterprise applications built on top]

The vision for Kyma is to act as the glue between Mode 1 and Mode 2 environments, essentially allowing users to extend their Mode 1 environment with Mode 2 capabilities, without disrupting the existing Mode 1 systems.
Moving Forward
We've discussed the challenges specific to the enterprise and the options K8s and its ecosystem offer to address them. As a takeaway, there are a few points worth reiterating:
• Containers are not the answer to everything, but K8s is probably the default way to manage containerized systems at present.
• Container tech is constantly evolving and a lot of effort is being put into overcoming current challenges (e.g.,
containerizing databases).
• As the K8s ecosystem evolves, a positive feedback loop emerges, resulting in increasingly sophisticated technology.
• Most big tech players are directly involved in enriching the ecosystem and building commercial offerings on top of it.
Considering these points, it is clear we are currently at an exciting point in the evolution of cloud native technologies. As for
Kubernetes, it is increasingly making headway into the enterprise world. It will be interesting to see how this growth continues and
how the ecosystem will adapt and evolve in turn.
Autonomous Couchbase on
Kubernetes With Cloud 2.0
By Anil Kumar, Director of Product Management at Couchbase
TREND PREDICTIONS
▶▶ Kubernetes will become the operating system for Cloud 2.0.

Two years ago, there were many competing standards for container orchestration. Today, Kubernetes has emerged as the clear frontrunner and has become the de facto standard to power the next phase of cloud technology.
Cloud, especially Kubernetes, is fantastically successful for deploying and testing stateless applications. But underpinning the
system, you need a database to drive the applications and provide operational and analytical insight. Stateful applications have
the most resistance to change and are always the last to migrate to new technologies.
Couchbase prepares you for Cloud 2.0 with the Couchbase Autonomous Operator for Kubernetes. It runs as an internal database
as a service and reduces operational complexity by up to 95% by implementing best practices and running Couchbase as an
autonomous, fully managed stateful database application next to other microservices applications, all on the same Kubernetes
platform.
2. Deploy at will: Couchbase doesn’t force you to choose between on-premises, private cloud, or a specific public cloud deployment.
You can easily deploy Couchbase within a managed private or public cloud to maximize flexibility, customizability, and performance.
3. Use what you know: Couchbase has developed strategic partnerships with the most popular enterprise providers, including
Red Hat OpenShift Container Platform and cloud-managed Kubernetes services on AWS (Amazon EKS), Azure (AKS), and Google Cloud (GKE). As cloud vendors build more ways to integrate with their container platforms, Couchbase makes it easier to
take advantage of their latest advancements.
couchbase.com/kubernetes
Automating Open Source Security in Kubernetes Throughout the DevOps Pipeline
By Shiri Ivtsan

TREND PREDICTIONS
▶▶ Technologists will begin using a next-generation toolbox of automated solutions to secure their software projects from the earliest stages of development.
▶▶ Kubernetes will continue to play a prominent role in development because of its ability to easily integrate with automated security tools.

The Kubernetes orchestration framework helps teams to deploy and scale containerized applications at the speed of DevOps. However, as is often the case when attempting to speed up development and delivery, security is slow to join the party. Enthusiastic users tend to forget that integrating Kubernetes into the software development lifecycle also introduces a new set of security concerns, and one of the top issues that is often overlooked is open source security.
Considering the fact that Kubernetes and the containerized environments that rely on it are very much an open source
ecosystem, it’s important to understand the challenges that open source components with known vulnerabilities pose to
organizations using Kubernetes.
As open source usage has gone mainstream across organizations of all sizes and verticals over the past few years, the number of
known open source security vulnerabilities has risen exponentially from year to year. This requires software development
organizations to start paying attention to open source security and to do their best to address known open source security
vulnerabilities.
Open source vulnerability management requires a different set of processes and tools than securing proprietary or commercial code.
Tracking which open source components you are using is no easy feat considering the volume of open source code in today’s
software, and, to make matters even more complicated, the decentralized nature of the open source community means that
known open source vulnerabilities are published across a number of community issue trackers and advisories, rather than in one
centralized location. That makes keeping track of known open source vulnerabilities and remediating the vulnerable components
in your products an impossible feat to carry out manually, especially at scale.
If that’s not daunting enough, there is also the issue of the tangled web of dependencies in open source libraries. Most open
source components are dependent on other open-source components. Tracking those dependencies is critical to keeping your
entire codebase secure since a vulnerability in an underlying component can impact all other software that is built on top of it.
Covering All of the Layers: Integrating Security Into the Kubernetes Pipeline
Ensuring open source security in your Kubernetes usage requires integrating a DevSecOps approach throughout the software
development lifecycle — from the early stages of research and development, as well as understanding which architecture and
components to use, through the build stage, and all the way up to deployment. I’ve mapped out the main stages when known
open source vulnerabilities should be addressed to ensure a secure deployment.
As they continue to code away, developers will probably continue adding open source dependencies. Here, too, it’s important that
they know exactly what they are using as building blocks in their project, and make sure components — dependencies included —
are vulnerability free.
This is a great example of the shift left approach, when you address security vulnerabilities as early as possible — in this case,
even before they are added to the image registry. Keeping your Kubernetes processes secure can, and should, start before code
even gets to the containerized environment. This saves a lot of time and money that would otherwise be spent fixing issues later
on, closer to delivery dates, when remediation is a bigger and far more expensive task.
Integrating vulnerability scanning into your CI processes ensures that open source security vulnerabilities are blocked from making their way into your builds.
Deployment is your last security gate before your image or container starts getting production traffic from real users. This is your final
chance to stop projects with known vulnerabilities from being deployed.
At this stage, enforcing policies that block any vulnerable open source components in your code is of the utmost importance, and
automated policy enforcement will make the task that much easier. The Kubernetes admission controller, for example, is a good
tool that allows you to enforce such rules.
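As a sketch of how such a gate could be wired up (the scanner service and all names below are hypothetical, and the TLS/caBundle wiring is omitted):

```yaml
# Gate deployments at admission time: every pod CREATE is sent to a
# (hypothetical) image-scanning service, which can reject images that
# contain known vulnerable open source components.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: block-vulnerable-images
webhooks:
  - name: imagecheck.example.com        # hypothetical webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                 # when in doubt, block the deployment
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: security             # hypothetical scanner deployment
        name: image-scanner
        path: /validate
```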
The Future Will Be Automated: Baking Security Into the Entire Kubernetes Lifecycle
Currently, most of the security measures that we described to address known open source vulnerabilities when working with
Kubernetes orchestration are implemented manually, if at all.
Considering the speed of software development and scale of open source code being used in modern applications, manually
tracking and remediating vulnerable open source components in a project is extremely time consuming and downright impractical
at scale. Not only that, the margin for human error is far too wide and its implications can range from costly to disastrous.
The good news is that all of these processes can be easily automated, and, looking to the future, organizations are going to begin
bolstering their DevSecOps game with automation to gain an edge.
The scale of open source components’ usage combined with the adoption of shift left processes are putting more weight on the
shoulders of developers when it comes to security responsibilities. Keeping up with these new responsibilities while organizations
continue to speed up development processes is impossible without automating security.
Kubernetes is a prime example of an environment where automated security tools can be seamlessly integrated by all of the
relevant teams to ensure that development proceeds at the breakneck speed of DevOps, all without compromising on security.
• Complex attack vectors: Attackers may access compromised API access keys, leverage a vulnerability in base images
(used for containers, outsourced libraries, or in serverless function code), or take advantage of vulnerabilities inside the
orchestrator settings to reach services that contain sensitive information.
• Ephemeral services: The ephemeral nature of containers (95% are said to live less than a week) combined with their
opaque characteristics and the massive volume of data, makes wading through a plethora of security or performance issues
like finding a needle in a haystack.
• Security seen as the bottleneck: When security isn't included early and consulted often, the risk of exposure across the app dev infrastructure grows.
Containers demand a new approach to security, one that a niche, single-function tool can rarely deliver. Data is key to understanding what is
happening in the ephemeral container realm, and insight is key to securing these dynamic environments. Ultimately, visibility into
both security and performance data is critical to operate reliable containers at scale.
At Sysdig, we use data to solve Kubernetes visibility and security as a converged problem. We offer the only unified approach to
security, monitoring, and forensics in Kubernetes environments. Sysdig delivers:
Deploying Kubernetes in an Enterprise Environment
By Adi Polak, Senior Software Engineer and Developer Advocate at Microsoft, and Idan Levin, Chief Architect for MDATP at Microsoft
TREND PREDICTIONS
▶▶ Kubernetes changes every day. Hence, you will need a team in place. You can hire or create your own team of Kubernetes experts, but it takes time to train or hire the right people with the right skillsets and passion.
▶▶ Various managed Kubernetes solutions will continue to target solving security challenges, operability challenges, and much more, as they might save you time and effort on this long journey of deploying Kubernetes in the enterprise.

So, picture this: you can't stop talking about Kubernetes. After some initial discussions, your company's stakeholders have finally given you the green light to adopt it. You are driving a technological change in an enterprise environment, which is very different than running it in a startup or a small company. You are accountable for a huge product's success. How do you proceed? What do you do? We've got you covered. Just continue reading.

• If you are totally new to Kubernetes, we recommend going over the basic core concepts before reading the article.

Kubernetes is a great container orchestrator, with out-of-the-box luminary capabilities like resource management, deployment management, automatic updates, self-healing, and many more. Although Kubernetes brings all of this out of the box, running it in the enterprise raises questions of its own. In this article, we'll cover:

• What enterprise-grade Kubernetes product solution requirements are and how to target them.
• What the team should look like, e.g., should you share cluster resources with other teams?
Team of Experts
First things first, build your team. Kubernetes is far from a simple platform; with great power comes great complexity. To run it
properly, you’ll need a team (or a v-team) of experts, responsible for the “infrastructure.” Whether you hire experienced
Kubernetes engineers externally, or train existing employees, you should invest a significant amount of resources in building the
team's skillset. Team members should read Kubernetes books, participate in conferences (like KubeCon), and keep up to date with the fast-moving ecosystem.
Well, Kubernetes was built in a way that allows it to be shared by multiple teams. It has built-in Role-Based Access Control (RBAC), which allows you to give only the required permissions to each entity. It also has Namespaces, which create virtual clusters backed by the same physical one. Sharing a cluster enables better auto-scaling (saving COGS, or cost of goods), carries a smaller operational cost, offers better support for network policies, and, if you're not doing in-cluster TLS, actually results in better performance.
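To make this concrete, here's a minimal sketch (all names hypothetical) of carving a shared cluster into a team namespace with scoped RBAC:

```yaml
# Each team gets its own namespace; RBAC grants permissions only inside it.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-a-dev
  namespace: team-a
rules:
  - apiGroups: ["", "apps"]    # core resources plus deployments
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-dev-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers    # group as defined by your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-a-dev
  apiGroup: rbac.authorization.k8s.io
```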
• Export security and audit logs to a centralized server (outside of the cluster) for detection and response.
Kubernetes doesn’t (yet) provide an easy way to manage the underlying VMs. However, there are two approaches you can take to
implement the above:
One thing to consider when choosing between these alternatives is cluster auto-scaling. With Option 2, you don't have to do
anything; auto-scaling will just work. Option 1, however, requires special handling.
Access Management
No one should ever have standing access (always on) to production environments; you should implement just-in-time (JIT)
access and make sure you closely monitor those connections. Avoid using the certificate you received when creating the cluster (the kubeconfig file); instead, use an identity-based authentication solution like Azure Active Directory or any solution that implements OpenID Connect. A hands-on tutorial and more information are available here.
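For illustration, a kubeconfig user entry that authenticates through OpenID Connect rather than the cluster certificate might look like this (all identifiers below are placeholders):

```yaml
# Identity-based auth: kubectl obtains short-lived tokens from your IdP
# instead of relying on the long-lived cluster certificate.
apiVersion: v1
kind: Config
users:
  - name: jane@example.com
    user:
      auth-provider:
        name: oidc
        config:
          idp-issuer-url: https://login.example.com/tenant-id/v2.0
          client-id: <app-registration-id>
          id-token: <obtained-from-your-idp>
          refresh-token: <obtained-from-your-idp>
```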
Monitoring
Anything that runs on Kubernetes, including the underlying virtual machines, needs to be monitored; basic monitoring includes logs and metrics.
In Figure 1, all the containers write logs to STDOUT, then the container runtime persists these logs to files on the host. Fluentd is
used here as the logging layer (DaemonSet) that tails, aggregates, compresses, and sends the log files to a centralized database.
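A trimmed-down sketch of the DaemonSet pattern described in Figure 1 (image tag and namespace are illustrative):

```yaml
# One Fluentd pod per node tails the container log files that the
# runtime persists on the host and ships them to a central store.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.7     # version is illustrative
          volumeMounts:
            - name: varlog
              mountPath: /var/log        # where container logs land on the host
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```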
Start by making sure that individual clusters are set up correctly, meaning that there are at least three master nodes to survive failures (we need a quorum), and, if possible, use multiple Availability Zones. The fact that the masters are healthy doesn't mean your application is. Therefore, every service needs to expose its own health checks.
A good example of a service that should implement the above best practices is the Ingress Controller (a.k.a. the cluster gateway).
Since this is the main entry point to your application services, it has to be up and running at all costs.
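For illustration, here's a minimal sketch of such health checks using liveness and readiness probes (image, path, and port are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ingress-controller        # hypothetical
spec:
  containers:
    - name: controller
      image: example/ingress-controller:1.0   # hypothetical image
      livenessProbe:              # kubelet restarts the container if this fails
        httpGet:
          path: /healthz
          port: 10254
        initialDelaySeconds: 10
      readinessProbe:             # pod is pulled from Service endpoints until this passes
        httpGet:
          path: /healthz
          port: 10254
        periodSeconds: 5
```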
Even after strictly following all the recommendations above, there are many reasons an entire Kubernetes cluster can suddenly
break — anything from a bad cluster upgrade to an entire datacenter catching fire after a lightning strike. That’s why you should
have at least two Kubernetes clusters in different geographical locations, as this will help you to survive failures stemming from
outages and/or natural disasters.
Deployment
Now that the clusters are secured, monitored, and highly available, it’s time to deploy your services. Avoid running manual
commands on the cluster. Instead, adopt a model where a Git commit to the master branch automatically kick-starts the
production deployment. It is best to configure your cluster state in Git, just like your code.
Services running on multiple clusters in different regions might require different configurations to operate. One way to avoid many
YAML duplications is by templating the environment variables; this can be done using Helm.
Tip: You can create one shared Helm chart, managed by your Kubernetes experts, for the entire group. This will make your
developers’ lives easier.
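To illustrate the idea, a chart template might render region-specific environment variables from a per-region values file (all names hypothetical):

```yaml
# templates/deployment.yaml (excerpt): the env block is templated once,
# and each region supplies its own values file, e.g., values-eu.yaml with
#   region: eu-west
#   dbEndpoint: db.eu.example.com
env:
  - name: REGION
    value: {{ .Values.region | quote }}
  - name: DB_ENDPOINT
    value: {{ .Values.dbEndpoint | quote }}
```

Each region then installs with its own values file, e.g., helm install myapp ./chart -f values-eu.yaml, ideally triggered from the Git-driven pipeline described above.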
Most applications require secrets but, as a general rule, avoid using native Kubernetes Secrets — they are insecure. Instead, use fully managed identity solutions (like AAD Pod Identity) wherever possible; if you still need to use secrets, keep them safe in a key vault (see Azure Key Vault). Lastly, to retrieve them at runtime, we recommend using a Kubernetes InitContainer. Don't forget to rotate your secrets every couple of months (as defined in your security policy).
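As a sketch of the InitContainer approach (the vault-client image and its CLI below are hypothetical stand-ins for your key vault's tooling):

```yaml
# Before the app starts, an init container pulls the secret from an external
# vault into an in-memory emptyDir volume visible only to this pod.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-vault-secret       # hypothetical
spec:
  initContainers:
    - name: fetch-secret
      image: example/keyvault-client:1.0   # hypothetical vault client image
      command: ["sh", "-c", "fetch-secret --name db-password > /secrets/db-password"]
      volumeMounts:
        - name: secrets
          mountPath: /secrets
  containers:
    - name: app
      image: example/app:1.0
      volumeMounts:
        - name: secrets
          mountPath: /secrets
          readOnly: true
  volumes:
    - name: secrets
      emptyDir:
        medium: Memory              # keep the secret off the node's disk
```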
There are many topics to cover when discussing Kubernetes in the enterprise. In this article, we addressed the basics of getting started and formulating your draft solution. For a deeper, hands-on look at K8s and security-focused topics,
follow us on our blogs and social media. We are here for you, so let's discuss your questions, insights, ideas, and concerns.
Write us!
Building a Development-Ready Kubernetes Platform
By Anita Buehrle, Senior Content Lead at Weaveworks
Congratulations on starting your cloud-native journey. Your team has chosen the leading development and deployment framework
that provides application portability, agility, and scalability. You began your journey with containers and now you’re ready to deploy
your container-based application at scale with Kubernetes. But at this point you're faced with a bewildering array of software vendors, cloud providers, and open source projects that all promise painless, successful Kubernetes deployments.
The key to success is a flexible and reproducible cloud native platform that allows you to quickly adopt these new technologies in your infrastructure and to run workloads anywhere: on-premises, in public clouds, or even in a hybrid-cloud environment.
“Cloud-native applications increase business agility and speed. But this requires a new runtime platform and environment for
operating cloud-native applications reliably, securely, and at scale." (Steve George, COO, Weaveworks)
Weaveworks Enterprise Kubernetes Platform reduces this complexity through automated configuration management and operations tooling. With GitOps configuration management, teams can define a standard installation of Kubernetes and automate the deployment of new nodes following standard templates. Preconfigured cluster templates let developers and operators define apps and update cluster add-ons with security patches, minimizing the YAML mess.
When your entire cluster configuration is stored in Git and managed with GitOps, you can reproduce the cluster in a repeatable and predictable way. This brings advantages when you are building test environments and pipelines, producing clusters for different teams from the same base configuration, or improving your disaster recovery capability.
Key Research Findings

TREND PREDICTIONS
▶▶ Due to its open source nature, Kubernetes usage rates among organizations will continue to grow.
▶▶ More and more industry leaders will use Kubernetes for container orchestration.
▶▶ As developers become more familiar with Kubernetes, enthusiasm for the Kubernetes ecosystem will also continue to increase in the developer community.

For this article, we've drawn upon data from a survey conducted among the DZone member community. In the survey, we asked how respondents' organizations use containers and cloud technologies, and how Kubernetes fits into the picture.

Demographics

Before we dive into where and how Kubernetes is used, let's go over our respondents' basic demographic information.

▶▶ Respondents live in three main geographical areas:
• 35% live in Europe.
• 26% reside in the United States.
• 14% live in South Central Asia.

▶▶ Respondents typically fill one of three main roles for their organization:
• 29% work as developers.
• 25% are architects.
• 20% work as developer team leads.
Where Kubernetes Is Used

[Figure 1: Which container orchestration/management technologies does your organization use? (Cloud Foundry Diego: 7%)]

Kubernetes (K8s) is a technology on the rise.
Despite these impressive numbers for Kubernetes, there are situations in which Kubernetes is far more likely to be used. One such delimiting factor is organization size. When we compare the data on respondents' organization size given in the Demographics section with the data above on Kubernetes usage rates, we find that bigger organizations are more likely to use Kubernetes: 33% of organizations with 1-19 employees use K8s for container orchestration, while 64% of organizations sized 10,000+ do. The table below shows how Kubernetes usage rates vary with organization size.
Organization Size    Percentage
1-19                 25%
20-99                21%
100-999              28%
1,000-9,999          36%
10,000+              32%

Figure 2
We also found that, whatever an organization's reasons for adopting container orchestration tools, Kubernetes proved a popular solution to its needs. In Figure 4, we've compared the data we collected on the benefits of container orchestration tools with our data on K8s usage rates.
Another variable that had an effect on Kubernetes usage rates was the percentage of an organization's workload that is containerized. Among respondents whose organizations have containerized 1-25% of their workload, 51% use K8s. For respondents whose organizations have containerized either 26-50% or 51-75% of their workload, 68% use K8s. And among organizations with 76-100% of their workloads containerized, 58% use K8s. Based on this analysis, organizations that have containerized somewhere between 26% and 75% of their workloads are the most likely to use container orchestration tools, and specifically Kubernetes.
Workload Containerized    Using Kubernetes
1-25%                     51%
26-50%                    68%
51-75%                    68%
76-100%                   58%

Figure 5
Interestingly, while Kubernetes seems to be winning enterprise-level adherents across the software industry, developers are not
yet quite as sold on it. Among respondents who told us that containers make their job easier, 68% use K8s. Among respondents
who said that containers have made their job harder, however, 81% use K8s. Similarly, we found that 80% of respondents who
claim that containers have had no impact on their job’s difficulty use Kubernetes.
Kubernetes Monitoring in Dynamic and Hybrid Environments
By Daniella Pontes, Senior Manager Product Marketing, InfluxData
Kubernetes, a.k.a. K8s, is paving the way to modern dynamic application environments. Its orchestration logic takes IT operations to the next level of automation in container cluster deployment, continuous updating, and scaling. Visionaries have
foreseen Kubernetes’s ubiquitous adoption in enterprises and, most importantly, a critical contribution to the cloud-native
transformation. Kubernetes is not only changing the way software is architected, integrated, and delivered to production
environments, but also changing business models which now have the intrinsic potential of exponential growth and global
distribution. One can say that Kubernetes is one of the most transformational technologies towards cloud-native today.
Applications fragmented into microservices are running on ephemeral containers, continuously integrated and delivered, and often running on hybrid environments, making monitoring Kubernetes for performance and reliability mandatory. To address this pressing need, Kubernetes has integrated the Prometheus monitoring model into its architecture. However, Prometheus's endpoint "pull monitoring" does only part of the job of collecting metrics. What security and implementation concerns arise when pulling data in multi-domain cloud environments? What happens when you want to monitor events in real time, not at intervals? And what about applications that don't expose metrics in Prometheus format — that are better
suited to other monitoring methods, such as pushing and streaming? Furthermore, what about monitoring various data types,
numeric and non-numeric, with different retention policies (months, years… forever) and serving multiple customers and
audiences, such as in managed services?
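For context, a common convention-based way a pod opts into pull collection is via scrape annotations, which a Prometheus scrape configuration discovers through the Kubernetes API (all names below are illustrative):

```yaml
# The pod advertises its metrics endpoint; Prometheus's Kubernetes service
# discovery plus relabeling rules turn these annotations into scrape targets.
apiVersion: v1
kind: Pod
metadata:
  name: orders-service              # hypothetical
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
spec:
  containers:
    - name: app
      image: example/orders:1.0     # must expose metrics in Prometheus format
      ports:
        - containerPort: 8080
```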
Push and pull metric collection mechanisms, stream ingestion, real-time analytics, high availability, and cost-effective long-
term storage all matter when diving deeper into monitoring Kubernetes application environments of all sorts: cloud, hybrid,
multi-cloud, and multi-IT. The reality is that most production environments don’t have a singular approach to application
deployment and monitoring. Therefore, one should consider solutions such as the InfluxDB time series platform that can handle
variances, custom implementations, and unique business cases, while facilitating the need for evolution.
InfluxDB

Open Source: Yes

Multi-Region Kubernetes refers to when Operations teams in large companies with many distributed product teams need to provide Kubernetes-as-a-Service within their organization across multiple hosting regions and multiple hosting providers. Gravitational's search for a time series database suitable for configurable monitoring and alerting resulted in choosing InfluxData.

Using InfluxData for Kubernetes monitoring, Gravitational was able to make cloud applications portable for its clients and implement improvements that extended the power of Kubernetes and served their own need to scale the operational management of applications across many clusters. The results included:

• Transformation in their mindset and in how they manage infrastructure and deliver products
• Full transparency throughout the environment, enabling internal teams and customers to see how everything is running at any given time
• App metrics available the minute the app is "born" within one of their platforms

Strengths
• Built for developers
• Trusted by Ops
• Vital to business

Notable Users
• Capital One
• PayPal
• Comcast
• Wayfair
• Optum Health
• Gravitational

Website: influxdata.com
Blog: influxdata.com/blog
Twitter: twitter.com/influxdb
Refcards

• Monitoring Kubernetes: This Refcard outlines common challenges in monitoring Kubernetes, detailing the core components of the monitoring tool Prometheus.
• Advanced Kubernetes: This Refcard aims to deliver quickly accessible information for operators using any Kubernetes product.
• Securing Your Kubernetes Deployment: This Refcard will teach you the essentials of security in Kubernetes, addressing topics like container network access, user authorization, service token access, and more.

Podcasts

• PodCTL: Produced by Red Hat OpenShift, this podcast covers everything related to enterprise Kubernetes and OpenShift, from in-depth discussions on Operators to conference recaps.
• Deloitte on Cloud: This episode of the Deloitte on Cloud podcast dives into a few ways that organizations can use Kubernetes to standardize processes around cloud migration.
• Kubernetes Podcast from Google: Considering that Google produces it (and that Google also created Kubernetes in 2014), you might call this podcast a classic. Enjoy weekly interviews with prominent tech folks who work with K8s.
Executive Insights on the State of K8s in the Enterprise
By Tom Smith, Research Analyst at DZone
TREND PREDICTIONS
▶▶ The most common failures with Kubernetes deployments are around the lack of skills/knowledge, complexity, security, and "day two" operations.

The executives we spoke with focused on what it takes to move software into production — security, monitoring, and debugging.

1. Build your environment to be specific to a purpose, not to a location.
Have a plan driven by your goals. Start with people that have knowledge
of K8s that will work well together when services are divided among teams. The team needs to know what’s going on across the
landscape as well as to understand what’s required for “day two” operations — upgrades, patches, disaster recovery, and scale.
Think about how to handle state, whether that means StatefulSets leveraging your provider's block storage devices or a completely managed storage solution; implementing stateful services correctly the first time around is going to save you huge headaches.
2. Kubernetes has made it easier to scale and achieve speed to market in a vendor-agnostic way. We're seeing deployments in production at scale with thousands, tens of thousands, and hundreds of thousands of containers implementing microservices. K8s has provided infrastructure that's more stateless, self-healing, and flexible. K8s enables teams to scale production workloads and achieve fault tolerance not previously possible.
K8s is faster to scale and deploy, more reliable, and offers more options. It lets both the application and platform teams move more
quickly. Application teams don’t need to know all the details, and platform teams are free to change them.
3. K8s enhances the security of containers via role-based access control (RBAC), reduced exposure, automation, and network firewall policies. Regarding security, K8s solves more problems than it creates. RBAC enforces relationships between resources, like pod security policies that control the level of access pods have to each other. K8s provides the access and mechanisms to use other things to secure containers.
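As one concrete example of a "network firewall policy" in Kubernetes, a NetworkPolicy can restrict which pods may reach a database (labels and port are hypothetical):

```yaml
# Database pods accept ingress only from backend pods in the same namespace;
# all other traffic to them is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-backend-only
spec:
  podSelector:
    matchLabels:
      app: database
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 5432
```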
The security benefits of containers and K8s outweigh the risks because containers tend to be much smaller than a VM running NGINX, which will have a full operating system with many processes and servers. Containers have far less exposure and fewer attack surfaces.
By automating concepts and constructs of where things go using rules and a stabilized environment, you eliminate a lot of human
error that tends to occur in a manual configuration process. K8s standardizes container deployment — set it once and forget it.
Due to the increased autonomy of microservices deployed as pods in K8s, it's important to have a thorough vulnerability assessment on each service and to enforce change control on the security architecture. Strict security enforcement is critical to defend against security threats. It's important to attend to things like automated monitoring/auditing/alerting, OS hardening, and continuous system patching.
4. Better performance results in more cost savings, and K8s helps reduce infrastructure costs. K8s also helps reduce technical debt as organizations pursue legacy containerization/modernization. There's also automatic scale-in and scale-out to adjust quickly to application workload demands and to maintain integrity when scaling.
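For illustration, that automatic scale-in/scale-out is typically expressed as a HorizontalPodAutoscaler; a minimal sketch (names and thresholds hypothetical):

```yaml
# Grow and shrink a Deployment with CPU demand, within fixed bounds.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                 # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```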
5. The most common failures revolve around a general shortage of skills/knowledge, as well as around complexity, security, and
“day two” operations. K8s talent is very hard to find and retain. There is also a lack of understanding about how K8s functions.
A common challenge is how to get a team up to speed quickly. Recruit experts with the depth of knowledge of K8s and DevOps
required to build proper tools and implement application workflows in containerized environments.
People often give up on implementation because it’s too hard. You can expect significant complexity and a steep learning curve.
People underestimate the complexity of installing and operating K8s. It's easy to get started, but people are then surprised by the complexity when they put it into production with security and monitoring in place.
Enterprises can find it challenging to implement effective security solutions. We see security and operations failures arise when teams
don’t implement any policy around the creation of external load balancers/ingresses. We see failures around security, with new vulner-
abilities weekly that require patches. “Day two” operations like upgrades and patches need to be managed. There’s an ongoing need to
deploy persistent storage, to monitor and alert on failure events, and to deploy applications across multiple K8s clusters.
Security controls lag behind, and newcomers may adopt inadequate K8s security measures — allowing attackers with increasingly
sophisticated exploits to succeed. You cannot assume that managed K8s offerings are somehow inherently secure — or that by
limiting CI/CD access to just a few DevOps people, that any risk can be avoided.
People who try to implement K8s on their own have trouble maintaining their own platform. Organizations also tend to assume
they need K8s when they don’t. A lot of people are flying blind, running random containers with third parties without monitoring
them. People assume it’s self-healing, and ignore the details.
7. Driven by adoption of the cloud and IoT, K8s is destined to become the de facto platform for developers. It will become the
standard platform for running applications, similar to the initial excitement in the developer community around Java. The future is
in IoT, with K8s enabling communication and rollbacks. You’ll be able to make IoT device nodes in a larger K8s cluster for faster
updates and more services. The K8s cloud operating system will also extend to hybrid, multi-cloud operating systems.
There will be more externalization of the platform and enterprise hardening of K8s. It will become more stable at the core while
also becoming more extensible. The toolkit for K8s operators will capture more complicated lifecycle automation. Containers will
eventually replace virtual machines and will support other infrastructure further up the stack. The technology will be increasingly
standardized, stable, and portable going forward. The adoption of open-source strategy and K8s by businesses will continue to
rapidly grow, with an ecosystem backed by leading internet tech companies and an expanding K8s developer community.
8. When working with K8s, developers need to keep in mind security, architecture, and DevOps methodology. Developers and
DevOps engineers need to consider how best to secure their K8s environments. Developers should become familiar with the
Cloud Native Computing Foundation (CNCF) stack, with K8s as the centerpiece — along with technologies like service meshes, permissions, and runtime security.
Figure out the best architecture to build around, but architect so your application can run on a different platform. Understand
the need to be elastic on a cloud-native platform, but be explicit about the shape of your infrastructure and be explicit with your
manifests.
Finally, make the trip to KubeCon. If you experience a problem, reach out to the community.
Below is the list of executives who were kind enough to share their insights with us:
Cloud Zone
Container technologies have exploded in popularity, leading to diverse use cases
and new and unexpected challenges. Developers are seeking best practices for
container performance monitoring, data security, and more.