Containers in Production
Sysdig Special Edition
For general information on our other products and services, or how to create a custom For Dummies book for your business or organization, please contact our Business Development Department in the U.S. at 877-409-4177, contact [email protected], or visit www.wiley.com/go/custompub. For information about licensing the For Dummies brand for products or services, contact BrandedRights&[email protected].
ISBN 978-1-119-52110-5 (pbk); ISBN 978-1-119-52105-1 (ebk)
Manufactured in the United States of America
10 9 8 7 6 5 4 3 2 1
Publisher’s Acknowledgments
We’re proud of this book and of the people who worked on it. Some of the people who helped bring this book to market include the following:
Contributing Writer: Emily Freeman
Editorial Manager: Rev Mengle
Introduction
Many people consider containers a hot new technology that they need to jump on. In reality, however, containers aren’t new at all. They’re actually pretty old, with origins dating back as far as the introduction of the chroot system call in 1979.
production. It provides an overview of the container ecosystem as well as the development processes and architectures that complement it. This book covers
IN THIS CHAPTER
»» Moving to microservices
»» Comparing orchestration platforms

Chapter 1
Understanding Containers and Orchestration Platforms
Containers may be all the rage today, but they aren’t a new development. They’ve actually been around since the late 1970s. It wasn’t until Docker debuted its container platform in 2013 that users found the technology mature enough to run applications for production workloads.
WHAT DO CONTAINERS DO?
Containers are a type of operating system virtualization that isolates resources — CPU, memory, disk, or network — while allowing isolated workloads to run on the same host. They hold software binaries and libraries — everything required to run an application. You can think of containers as lightweight virtual machines without the overhead of a full operating system getting in your way, meaning that they can be far more agile and enable far more workload density than traditional approaches to virtualization.
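To make that resource isolation concrete, here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes Docker and the SDK are installed on the local machine; the image, limits, and container name are placeholders chosen for illustration.

import docker

client = docker.from_env()  # connects to the local Docker daemon

# Run an isolated workload with explicit CPU and memory limits.
container = client.containers.run(
    "nginx:alpine",          # placeholder image
    detach=True,
    name="isolated-nginx",
    mem_limit="256m",        # cap memory at 256 MiB
    cpu_period=100000,
    cpu_quota=50000,         # roughly half of one CPU core
)
print(container.short_id, container.status)

Each container gets only the slice of the host it was granted, which is what makes the density and agility gains possible.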
Moving to Microservices
Before containers came on the scene, most enterprise applications were unmoving monoliths that were being crushed under their own weight. These applications consisted of one massive code base that contained all of the functionality required to make the application do the company’s bidding.
Enter microservices.
No longer are you at the mercy of the monolith. Your shiny new microservices model enables a “scale-out” architecture where additional process instances can be started to keep pace with load — all of which interact with each other over the network.
Comparing Orchestration and Management Tools
Automating the operation of hundreds, if not thousands, of containers in a cluster across multiple hosts requires an orchestration tool. As the name suggests, orchestration tools assume the role of an orchestra conductor by managing and coordinating all of the services that comprise the environment. This means managing how hosts, containers, and services are created, started, stopped, upgraded, connected, and made available.
Kubernetes
Originally created by Google as a mechanism for deploying, maintaining, and scaling applications, Kubernetes — K8s or Kube, for short — was donated to the Cloud Native Computing Foundation in 2015 and is now available as an open source project.
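As a hedged illustration of what orchestration looks like in practice, the sketch below uses the official Kubernetes Python client to scale a Deployment. The deployment name web and the default namespace are assumptions made for this example, not something this book prescribes.

from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# Ask the orchestrator for three replicas of a (hypothetical) Deployment
# named "web"; Kubernetes then starts or stops containers to match.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 3}},
)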
OpenShift
OpenShift is not a stand-alone orchestration platform. Rather, OpenShift is Red Hat’s enterprise container application platform. It’s built around Kubernetes and extends that platform’s capabilities. OpenShift inherits all the upstream capabilities of Kubernetes but also enhances the enterprise user experience by adding features to enable rapid application development, easy deployment, and lifecycle maintenance.
ECS also supports Fargate, which allows you to run containers without having to manage the servers that comprise the cluster. With ECS, a built-in scheduler can be used to trigger container deployment based on resource availability and demand.
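To make the Fargate model concrete, here is a small, hedged sketch using boto3, the AWS SDK for Python. The cluster name, task definition, and subnet ID are placeholders that would come from your own AWS account.

import boto3

ecs = boto3.client("ecs")

# Launch a task on Fargate: no EC2 instances to provision or patch.
ecs.run_task(
    cluster="demo-cluster",                   # placeholder cluster name
    taskDefinition="web-app:1",               # placeholder task definition
    launchType="FARGATE",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],   # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)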
network, and more. With Mesos alone, you need to set up components individually.
IN THIS CHAPTER
»» Building containerized applications
»» Shipping container images
»» Implementing CI/CD/CS

Chapter 2
Building and Deploying Containers
Every DevOps organization strives to have a supply chain that can consistently develop, package, and get applications into production faster. The technology to accomplish these goals has not always been available. Today, however, containers make this possible.
packaging, shipping, deployment, and operation of your services.
your image and track components and versions included on each one.
Implementing CI/CD/CS
A complete container supply chain process typically covers integration (CI), deployment (CD), and security (CS) as a continuous, ongoing process. Here’s an overview of the steps necessary for a CI/CD/CS pipeline.
Continuous integration
Continuous integration is a process by which code is automatically tested each time a change is committed to version control. Implementing continuous integration involves a series of automated activities triggered by each commit, such as building the application’s container image and running its test suite.
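As one possible shape for such a CI step, the sketch below uses the Docker SDK for Python to build an image tagged with the current commit, run the test suite inside it, and push it only if the tests pass. The registry address and repository name are placeholders, and pytest is assumed to be the project’s test runner.

import subprocess
import docker

client = docker.from_env()

# Tag the image with the current commit so every build is traceable.
sha = subprocess.check_output(
    ["git", "rev-parse", "--short", "HEAD"], text=True
).strip()
repo = "registry.example.com/myapp"           # placeholder registry/repo
tag = f"{repo}:{sha}"

image, build_logs = client.images.build(path=".", tag=tag)

# Run the test suite inside the freshly built image; containers.run
# raises ContainerError on a non-zero exit code, failing the pipeline.
client.containers.run(image=tag, command="pytest", remove=True)

# Only a tested image gets shipped to the registry.
client.images.push(repo, tag=sha)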
Continuous delivery
Continuous delivery is a concept that enables organizations to automatically build, test, and otherwise prepare software for deployment into the production environment. There are several important considerations to keep in mind when implementing continuous delivery.
Continuous deployment
At first, continuous delivery and continuous deployment may appear to be the same thing, but there are some nuances. Continuous delivery’s end result is that a particular update is safely deployable; the update isn’t automatically deployed into production. That’s where continuous deployment comes in: it takes that last step and automatically releases each validated update into production.
Continuous security
Don’t miss security as part of the pipeline. A movement known as DevSecOps advocates making security part of this process.
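One common way to wire security into the pipeline is to scan each image for known vulnerabilities before it is promoted. The book doesn’t prescribe a tool, so the sketch below uses Trivy, an open source scanner, invoked from Python; the image tag is a placeholder.

import subprocess

# Fail the pipeline if the image contains HIGH or CRITICAL CVEs.
subprocess.run(
    [
        "trivy", "image",
        "--exit-code", "1",
        "--severity", "HIGH,CRITICAL",
        "registry.example.com/myapp:latest",   # placeholder image tag
    ],
    check=True,   # raises CalledProcessError when trivy exits non-zero
)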
IN THIS CHAPTER
»» Collecting and managing metrics
»» Monitoring the environment
»» Troubleshooting: Going beyond monitoring

Chapter 3
Monitoring Containers
Over the past few years, the evolution of infrastructure and its ecosystem toward microservices and containers has made many existing monitoring tools and techniques no longer relevant. Instead, developers need solutions that can adapt to the short-lived and isolated nature of containers and application services.
Understanding Container Visibility Challenges
On one hand, containers provide flexibility and portability, but on the other hand, they complicate monitoring and troubleshooting. Why is that? As isolated “black boxes,” containers make it difficult for traditional tools to penetrate their shells in an effort to observe processes and performance metrics.
Here’s why.
With containers, microservices come and go. They move around, scaling up and down as demand shifts. The dynamic nature of containers makes manual configuration to collect relevant metrics impossible. Instead, monitoring must focus a bit differently. You want to know how your service — the one composed of multiple containers — behaves overall but also how each contributing container individually performs in its role.
Collecting Metrics
Monitoring containers is not simply about visibility into container processes. To understand the whole picture, teams must monitor containers, services, and the infrastructure on which these run — and do so with minimal impact. A number of methods exist for instrumenting and collecting data.
»» Agent-per-pod: This approach attaches a monitoring agent to a group of containers like Kubernetes pods — containers that share a namespace. This method is easy to set up, but resource consumption is high per-agent because of the volume of metrics flowing through it.
This allows you to drill into metrics at different layers of a hierarchy. For example, in Kubernetes, you have namespaces, services, deployments, pods, and containers. Segmenting metrics by these layers lets you see aggregated performance, which is essential for logical troubleshooting: a drill-down process where you identify the application with an issue; the microservice where the issue comes from; the specific pod, container, and process with the issue; and the host where it is running at that moment.
Infrastructure
Infrastructure-level monitoring, from host resources to storage and networking, provides information that helps determine the root cause of certain container issues. For example, host CPU metrics are an important factor when trying to understand which containers are using the most computing resources.
Services
Service metrics provide views into the performance of each of the services, such as load balancing or web endpoints, that comprise your application. If there’s an application slowdown, being able to see the relative performance of each microservice helps you pinpoint problems.
Applications
Application metrics, such as the number of connections, current response time, and reported errors, are focused on the health and performance of your application as a whole from a user perspective. Having data at this level takes much of the guesswork out of understanding the user experience with your solution.
Custom metrics
Custom metrics are those that are uniquely defined within an application or by a developer for tracking specific information. Custom metrics are typically of high value, put in place to reveal important details about application behavior and events.
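As a small sketch of what a custom metric might look like, the example below uses the Prometheus Python client library to count checkouts and time a handler. The metric names and the simulated workload are invented for illustration.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

CHECKOUTS = Counter("shop_checkouts_total", "Completed checkouts")
LATENCY = Histogram("shop_checkout_seconds", "Checkout handler latency")

@LATENCY.time()               # records how long each call takes
def handle_checkout():
    time.sleep(random.uniform(0.05, 0.2))   # stand-in for real work
    CHECKOUTS.inc()

if __name__ == "__main__":
    start_http_server(8000)   # exposes metrics at http://localhost:8000/metrics
    while True:
        handle_checkout()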
PROMETHEUS MONITORING
Prometheus is an incubating project of the Cloud Native Computing Foundation (CNCF). The CNCF fosters a community around open source technologies that orchestrate containers as part of microservices architectures. Prometheus is one of the fastest growing projects, providing real-time monitoring, alerting, and time-series database capabilities for cloud native applications. It is used to generate and collect metrics from monitored targets and integrates with many popular open source and commercial tools for data import/export.
Prometheus client libraries enable developers to instrument application code. Its PromQL query language lets users select and aggregate time series data. Many of the orchestration and enterprise container application platforms, including Kubernetes, OpenShift, and Mesosphere DC/OS, have embraced the solution and export Prometheus metrics by default.
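To give a feel for PromQL, here is a hedged sketch that queries a Prometheus server’s HTTP API from Python for the per-service request rate over the last five minutes. The server address and metric name are placeholders.

import requests

resp = requests.get(
    "http://prometheus.example.com:9090/api/v1/query",   # placeholder server
    params={"query": "sum(rate(http_requests_total[5m])) by (service)"},
)
resp.raise_for_status()

# Each result pairs a label set with a [timestamp, value] sample.
for result in resp.json()["data"]["result"]:
    print(result["metric"], result["value"])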
Despite being widely used, StatsD is declining in popularity in favor of Prometheus metrics because it lacks support for metric labels.
you a better understanding of your application’s behavior.
Designing a Monitoring Process
Containerized applications running in production must be constantly monitored for availability, errors, and service response times. Achieving this requires collection of a wide range of telemetry and event data. Sources and approaches for collecting and presenting this information can be varied.
Metrics
By collecting and correlating container metrics with infrastructure and orchestration data, you can monitor the performance, health, and state of your containerized applications and maximize availability.
software like Grafana or AlertManager is required to build a complete monitoring system. Prometheus metrics is also the format used internally for Kubernetes orchestration state metrics.
Tracing
Tracing is designed to log and track transaction flows as requests propagate throughout an application. This allows administrators to observe latency for each microservice and identify bottlenecks that affect performance. Tracing is accomplished through two primary toolsets.
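The specific toolsets aren’t enumerated here, but as one hedged illustration, the sketch below emits nested spans with the OpenTelemetry Python SDK and prints them to the console; the service and span names are invented for the example.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")   # placeholder service name

# A parent span for the request and a child span for a downstream call;
# in a real system each hop would be a different microservice.
with tracer.start_as_current_span("checkout") as span:
    span.set_attribute("order.items", 3)
    with tracer.start_as_current_span("charge-card"):
        pass  # the payment call would happen here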
run-time behavior. Custom metric frameworks such as JMX, StatsD, and Prometheus provide the required information without performance drawbacks.
Dashboarding and Exploring
A key component of any container monitoring solution is the visual display of metrics and events. Graphical dashboards, as well as other more dynamic visualizations such as topology maps or hierarchical explore views, simplify the task of understanding your environment and identifying anomalies.
Alerting
Just like in all areas of IT, alerting in a container environment is a critical activity when it comes to identifying potential problems and events that can hinder application performance and availability. Keeping track of what’s happening, especially in a large, dynamic environment, requires automation.
Troubleshooting: Going Beyond Monitoring
A key challenge with troubleshooting containers is that they may no longer exist after a problem occurs.
Choosing a platform
Monitoring systems that collect, analyze, dashboard, and alert on containers come in different forms. Open source options that you need to build, maintain, and scale on your own are available. The industry also provides software-as-a-service (SaaS) solutions: fully managed, fully supported cloud services that require no maintenance on your part.
IN THIS CHAPTER
»» Identifying common container threats
»» Handling configuration and compliance
»» Ensuring run-time security

Chapter 4
Securing Containers
Maintaining secure practices is a never-ending cyclical battle. Even as you implement brand new security measures and install patches, you’ve arrived just in time for the next vulnerability to be exposed. And so it goes. . . .
Identifying Common Threats
A number of security issues can plague containers. The following sections describe some of the security considerations you need to account for in your container environment.
Outdated container images
The longer you run software without updating it, the more likely it is to eventually be exploited. Your service or application might pull a hardcoded image from a repository, or that image might not be rebuilt as the base image is updated with patches or other new software. This means you could easily have older, vulnerable software running in production.
Secrets management
Software needs sensitive information to run. This includes user password hashes, server-side certificates, encryption keys, and more. This sensitive information should be handled independently from application code, container images, and configuration, and should be stored in a secure location such as a secrets vault. Most container orchestration platforms have this functionality built in through a feature called secrets management.
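As a minimal sketch of that built-in functionality, the example below creates a Kubernetes Secret with the official Python client instead of baking the value into an image or configuration file. The secret name, namespace, and value are placeholders.

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),   # placeholder name
    string_data={"password": "change-me"},                 # placeholder value
)

# The platform stores the value and injects it into pods as an
# environment variable or mounted file at run time.
v1.create_namespaced_secret(namespace="default", body=secret)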
»» Do you have objective cryptographic proof that the author is actually that person?
sensitive information from the host or gained additional privileges. In order to prevent this, it’s important to reduce the default container privileges, which can be accomplished via various means.
Handling Configuration and Compliance
Half of the battle in security is making sure your teams follow secure configuration and compliance practices. Luckily, containers have many default run-time security features, some described in this section, and their portability eases the burden on developers.
Compliance checks
The Center for Internet Security (CIS) has published general security recommendations for Docker and Kubernetes. These recommendations can be tested against your infrastructure with benchmark scripts such as the docker-bench and kube-bench projects. Run these periodically against your infrastructure to see if it meets best practices.
»» Pod security policy: Using security policies, you can restrict the pods that will be allowed to run on your cluster. For example, you can configure resources, privileges, and sensitive configuration items.
and applications. Falco lets you drill down to details such as system, network, and file activity.
IN THIS CHAPTER
»» Reviewing key points about containers
»» Deploying containers effectively and securely

Chapter 5
Ten Container Takeaways
As containers continue to make their way into a dominant role for large-scale application deployments, you should keep these critical considerations in mind:
»» Trust and then verify via ongoing monitoring.
»» Implement a CI/CD/CS pipeline.
»» Embrace and manage increased complexity.
»» Understand your monitoring and instrumentation options and the pros and cons of each approach.
»» Monitor everything.
»» Make sure you can measure and correlate incident responses.