Running Containerized Microservices on AWS
November 2017
© 2017, Amazon Web Services, Inc. or its affiliates. All rights reserved.
Notices
This document is provided for informational purposes only. It represents AWS’s
current product offerings and practices as of the date of issue of this document,
which are subject to change without notice. Customers are responsible for
making their own independent assessment of the information in this document
and any use of AWS’s products or services, each of which is provided “as is”
without warranty of any kind, whether express or implied. This document does
not create any warranties, representations, contractual commitments,
conditions or assurances from AWS, its affiliates, suppliers or licensors. The
responsibilities and liabilities of AWS to its customers are controlled by AWS
agreements, and this document is not part of, nor does it modify, any agreement
between AWS and its customers.
Contents
Introduction
Componentization Via Services
Organized Around Business Capabilities
Products Not Projects
Smart Endpoints and Dumb Pipes
Decentralized Governance
Decentralized Data Management
Infrastructure Automation
Design for Failure
Evolutionary Design
Conclusion
Contributors
Abstract
This whitepaper is intended for architects and developers who want to run
containerized applications at scale in production on Amazon Web Services
(AWS). This document provides guidance for application lifecycle management,
security, and architectural software design patterns for container-based
applications on AWS.
Introduction
As modern, microservices-based applications gain popularity, containers are an
attractive building block for creating agile, scalable, and efficient microservices
architectures. Whether you are considering a legacy system or a greenfield
application for containers, there are well-known, proven software design
patterns that you can apply:
• Componentization via services
• Organized around business capabilities
• Products not projects
• Smart endpoints and dumb pipes
• Decentralized governance
• Decentralized data management
• Infrastructure automation
• Design for failure
• Evolutionary design
After reading this whitepaper, you will know how to map microservices design characteristics to twelve-factor app patterns, down to the specific design patterns to implement.
Decoupling increases agility by removing the need for one development team to
wait for another team to finish work that the first team depends on. When
containers are used, container images can be swapped for other container
images. These can be either different versions of the same image or different
images altogether—as long as the functionality and boundaries are conserved.
Here are the key factors from the twelve-factor app pattern methodology that
play a role in componentization:
Here are the key factors from the twelve-factor app pattern methodology that
play a role in organizing around capabilities:
• Build, release, run (strictly separate build and run stages) – Each
microservice has its own deployment pipeline and deployment
frequency. This allows the development teams to run microservices at
varying speeds so they can be responsive to customer needs.
• Singleton – This pattern is for an application that needs one, and only one, instance of an object.
Here are the key factors from the twelve-factor app pattern methodology that
play a role in adopting a product mindset for delivering software:
This means there are two primary forms of communication between services:
• Request/response – One service explicitly invokes another service and waits for the result.
• Publish/subscribe – A service emits events, and any interested services react to them asynchronously.
Avoid using an enterprise service bus for routing messages between microservices. It is much better to use a message broker such as Kafka, or Amazon Simple Notification Service (Amazon SNS) and Amazon Simple Queue Service (Amazon SQS).
Microservices architectures favor these tools because they enable a
decentralized approach in which the endpoints that produce and consume
messages are smart, but the pipe between the endpoints is dumb. In other
words, concentrate the logic in the containers and refrain from leveraging (and
coupling to) sophisticated buses and messaging services.
The core benefit of building smart endpoints and dumb pipes is the ability to
decentralize the architecture, particularly when it comes to how endpoints are
maintained, updated, and extended. One goal of microservices is to enable
parallel work on different edges of the architecture that will not conflict with
each other. Building dumb pipes enables each microservice to encapsulate its
own logic for formatting its outgoing responses or supplementing its incoming
requests.
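To make this concrete, here is a minimal sketch using the AWS SDK for Python (boto3) of a producing endpoint that publishes an event to an Amazon SNS topic and a consuming endpoint that polls an Amazon SQS queue subscribed to that topic. The topic ARN, queue URL, and message format are hypothetical, and the sketch assumes the subscription uses raw message delivery; the point is that all message logic lives in the endpoints while the pipe only transports messages.

    import json
    import boto3

    sns = boto3.client("sns")
    sqs = boto3.client("sqs")

    # Hypothetical identifiers; substitute your own topic and queue.
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:orders"
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue"

    def publish_order_created(order_id):
        # The producing endpoint owns the message format (smart endpoint).
        sns.publish(
            TopicArn=TOPIC_ARN,
            Message=json.dumps({"event": "order_created", "order_id": order_id}),
        )

    def poll_orders():
        # The consuming endpoint applies its own logic to each message;
        # the queue performs no routing or transformation (dumb pipe).
        response = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for message in response.get("Messages", []):
            event = json.loads(message["Body"])
            # ... handle the event here ...
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
            )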
Here are the key factors from the twelve-factor app pattern methodology that
play a role in building smart endpoints and dumb pipes:
Decentralized Governance
As your organization grows and establishes more code-driven business
processes, one challenge it could face is the necessity to scale the engineering
team and enable it to work efficiently in parallel on a large and diverse
codebase. Additionally, your engineering organization will want to solve
problems using the best available tools.
When a team kicks off its first greenfield project, it is generally just a small group of a few people working together on a common codebase. After the greenfield project has been completed, the business will quickly discover opportunities to expand on the first version. Customer feedback generates ideas for new features to add and ways to expand the functionality of existing features. During this phase, engineers start growing the codebase and the organization starts dividing itself into service-focused engineering teams.
Decentralized governance means that each team can use its expertise to choose
the best tools to solve their specific problem. Forcing all teams to use the same
database, or the same runtime language, isn’t reasonable because the problems
they’re solving aren’t uniform. However, decentralized governance is not without boundaries. It is helpful to use standards throughout the organization, such as a standard build and code review process, because this helps each team continue to function together.
Here are the key factors from the twelve-factor app pattern methodology that
play a role in enabling decentralized governance:
Centralized governance was favored in the past because it was hard to efficiently
deploy a polyglot application. Polyglot applications need different build
mechanisms for each language and an underlying infrastructure that can run
multiple languages and frameworks. Polyglot architectures had varying dependencies, which could sometimes conflict.
Containers solve these problems by allowing the deliverable for each individual
team to be a common format: a Docker image that contains their component.
The contents of the container can be any type of runtime written in any
language. However, the build process will be uniform because all containers are
built using the common Dockerfile format. In addition, all containers can be
deployed the same way and launched on any instance since they carry their own
runtime and dependencies with them.
Decentralized Data Management
Here are the key factors from the twelve-factor app pattern methodology that play a role in decentralized data management:
Infrastructure Automation
Contemporary architectures, whether monolithic or based on microservices,
greatly benefit from infrastructure-level automation. With the introduction of
virtual machines, IT teams were able to easily replicate environments and create
templates of operating system states that they wanted. The host operating
system became immutable and disposable. With cloud technology, the idea
bloomed and scale was added to the mix. There is no need to predict the future when you can simply provision what you need on demand and pay for what you use. If an environment isn’t needed anymore, you can shut down its resources.
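As a simple illustration, scaling a containerized service up for a load test and back down afterward can be a single API call each way. The sketch below uses boto3 with a hypothetical Amazon ECS cluster and service name; the same idea applies to any orchestrator that exposes a desired-count API.

    import boto3

    ecs = boto3.client("ecs")

    # Hypothetical cluster and service names.
    # Scale out on demand for the duration of the test.
    ecs.update_service(cluster="demo-cluster", service="web", desiredCount=10)

    # ... run the load test ...

    # Scale back down once the extra capacity is no longer needed.
    ecs.update_service(cluster="demo-cluster", service="web", desiredCount=2)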
Here are the key factors from the twelve-factor app pattern methodology that play a role in infrastructure automation:
• Build, release, run (strictly separate build and run stages) – One
environment for each purpose.
To wrap the application with a CI/CD pipeline, you should choose a code
repository, an integration pipeline, an artifact-building solution, and a
mechanism for deploying these artifacts. A microservice should do one thing
and do it well. This implies that when you build a full application, there will
potentially be a large number of services. Each of these needs its own
integration and deployment pipeline. Keeping infrastructure automation in
mind, architects who face this challenge of proliferating services will be able to
find common solutions and replicate pipelines that have made a particular
service successful.
Ultimately, the goal is to allow developers to push code updates and have the
updated application sent to multiple environments in minutes. There are many
ways to successfully deploy in phases, including the blue/green and canary
methods. With the blue/green deployment, two environments live side by side,
with one of them running a newer version of the application. Traffic is sent to
the older version until a switch happens that routes all traffic to the new
environment. You can see an example of this happening in this reference
architecture:4
In this case, we switch target groups behind a load balancer to redirect traffic from the old resources to the new ones. Another way to achieve this is to front the services with two load balancers and operate the switch at the DNS level.
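For example, the target group switch can be reduced to one API call against the load balancer listener. The following sketch uses boto3 with placeholder ARNs; it is one possible implementation of the switch, not the only one.

    import boto3

    elbv2 = boto3.client("elbv2")

    # Placeholder ARNs for the listener and the "green" target group.
    LISTENER_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def"
    GREEN_TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/green/123"

    # Point the listener's default action at the new (green) target group.
    # New connections now reach the new version; the blue target group stays
    # registered, so rolling back is the same call with the old ARN.
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": GREEN_TG_ARN}],
    )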
Design for Failure
As Werner Vogels famously put it, "Everything fails all the time." This adage is no less true in the container world than it is for the cloud.
Achieving high availability is a top priority for workloads, but remains an
arduous undertaking for development teams. Modern applications running in
containers should not be tasked with managing the underlying layers, from
physical infrastructure like electricity sources or environmental controls all the way up to the host operating system.
Designing for failure also means testing the design and watching services cope
with deteriorating conditions. Not all technology departments need to apply this
principle to the extent that Netflix does,6, 7 but we encourage you to test these
mechanisms often.
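A Chaos-Monkey-style experiment can be as small as the following sketch, which uses the Docker SDK for Python to kill one running container at random so you can watch the orchestrator replace it and dependent services degrade gracefully. This is an illustrative test harness, not a production tool, and should only ever be pointed at a test environment.

    import random
    import docker  # Docker SDK for Python: pip install docker

    def kill_random_container():
        # Pick one running container at random and terminate it, then
        # observe whether the scheduler replaces it and how callers cope.
        client = docker.from_env()
        containers = client.containers.list()  # running containers only
        if containers:
            victim = random.choice(containers)
            print("Killing container", victim.name)
            victim.kill()

    if __name__ == "__main__":
        kill_random_container()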
Designing for failure yields a self-healing infrastructure that acts with the maturity expected of modern workloads. Preventing emergency calls guarantees a base level of satisfaction for the service-owning team and removes a level of stress that can otherwise grow into accelerated attrition. Designing for failure will deliver greater uptime for your products and can shield a company from outages that could erode customer trust.
Here are the key factors from the twelve-factor app pattern methodology that
play a role in designing for failure:
One very useful container pattern for hardening an application’s resiliency is the
circuit breaker. In this approach, an application container is proxied by a
container in charge of monitoring connection attempts from the application
container. If connections are successful, the circuit breaker container remains in
closed status, letting communication happen. When connections start failing,
the circuit breaker logic triggers. If a pre-defined threshold for failure/success
ratio is breached, the container enters an open status that prevents more
connections. This mechanism offers a predictable and clean breaking point, a
departure from partially failing situations that can render recovery difficult. The
application container can move on and switch to a backup service or enter a
degraded state.
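The core logic that such a proxy container might run can be sketched in a few lines. This minimal Python version, with assumed threshold and timeout values, trips after a fixed number of consecutive failures (a simplification of the failure/success ratio described above) and allows a trial request after a cool-down period:

    import time

    class CircuitBreaker:
        """Minimal circuit breaker: fails fast after repeated errors."""

        def __init__(self, failure_threshold=5, reset_timeout=30.0):
            self.failure_threshold = failure_threshold
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None  # None means the circuit is closed

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.time() - self.opened_at < self.reset_timeout:
                    # Open: fail fast instead of letting requests pile up.
                    raise RuntimeError("circuit open; use a backup or degrade")
                # Cool-down elapsed: allow one trial request (half-open).
                self.opened_at = None
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.time()  # trip the breaker
                raise
            self.failures = 0  # a success closes the circuit again
            return result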
Docker supports a number of logging drivers, selected per container with the docker run --log-driver option:8

Driver     Description
none       No logs will be available for the container, and docker logs will not return any output.
json-file  The logs are formatted as JSON. This is the default logging driver for Docker.
syslog     Writes log messages to the syslog facility. The syslog daemon must be running on the host machine.
journald   Writes log messages to journald. The journald daemon must be running on the host machine.
gelf       Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash.
fluentd    Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine.
splunk     Writes log messages to Splunk using the HTTP Event Collector.
etwlogs    Writes log messages as Event Tracing for Windows (ETW) events. Only available on Windows platforms.
Evolutionary Design
In modern systems architecture design, you need to assume that you don’t have
all the requirements up-front. As a result, having a detailed design phase at the
beginning of a project becomes impractical. The services have to evolve through
various iterations of the software. As services are consumed, learnings from real-world usage help evolve their functionality.
As a result of the evolutionary design principle, a service team can build the
minimum viable set of features needed to stand up the stack and roll it out to
users. The development team doesn’t need to cover edge cases to roll out
features. Instead, the team can focus on the needed pieces and evolve the design
as customer feedback comes in. At a later stage, the team can decide to refactor
after they feel confident that they have enough feedback.
Here are the key factors from the twelve-factor app pattern methodology that play a role in evolutionary design:
• Build, release, run (strictly separate build and run stages) – Helps roll out new features using various deployment techniques. Each release has a specific ID, which makes it possible to gather user feedback and iterate on the design.
Containers provide additional tools to evolve a design at a faster rate through image layers. As the design evolves, new image layers can be added without affecting the integrity of the existing layers. In Docker, an image layer is a change to an image, or an intermediate image. Every command (FROM, RUN, COPY, etc.) in the Dockerfile causes the previous image to change, thus creating a new layer. Docker rebuilds only the layer that was changed and the ones after it. This is called layer caching, and using it can reduce deployment times.
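For example, in the hypothetical Dockerfile below for a small Python service, the base image and dependency layers come first because they change rarely, and the application code is copied last because it changes most often; an edit to the code invalidates only the final layers, so rebuilds stay fast.

    # Base image layer: changes rarely, so it stays cached.
    FROM python:3.6

    # Dependency layers: rebuilt only when requirements.txt changes.
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt

    # Application code layer: changes most often, so it comes last.
    COPY . /app

    CMD ["python", "/app/main.py"]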
Conclusion
Microservices can be designed using the twelve-factor app pattern methodology, and well-known software design patterns make this easier to achieve. If applied in the right context, they can enable the design benefits of microservices. AWS provides a wide range of primitives that can be used to build containerized microservices.
Contributors
The following individuals contributed to this document:
Notes
1 https://martinfowler.com/articles/microservices.html
2 https://12factor.net/
3 https://en.wikipedia.org/wiki/Conway's_law
4 https://github.com/awslabs/ecs-blue-green-deployment
5 https://docs.aws.amazon.com/general/latest/gr/api-retries.html
6 https://github.com/netflix/chaosmonkey
7 https://github.com/Netflix/SimianArmy
8 https://docs.docker.com/engine/admin/logging/overview/
9 Canary deployment is a technique to reduce the risk of introducing a new
software version in production by slowly rolling out the change to a small
subset of users before rolling it out to the entire infrastructure and making it
available to everybody. See
https://martinfowler.com/bliki/CanaryRelease.html