Understanding Distributed Systems What Every Developer Should Know About Large Distributed Applications
Version 1.0.4
Roberto Vitillo
February 2021
Copyright
Understanding Distributed Systems by Roberto Vitillo
While the author has used good faith efforts to ensure that the information
and instructions in this work are accurate, the author disclaims all
responsibility for errors or omissions, including without limitation
responsibility for damages resulting from the use of or reliance on this
work. The use of the information and instructions contained in this work is
at your own risk. If any code samples or other technology this work
contains or describes is subject to open source licenses or the intellectual
property rights of others, it is your responsibility to ensure that your use
thereof complies with such licenses and/or rights.
About the author
Authors generally write this page in the third person as if someone else is
writing about them. I like to do things a little bit differently.
Before that, I worked at Mozilla, where I set the direction of the data
platform from its very early days and built a large part of it, including the
team.
Finally, and above all, thanks to my family: Rachell and Leonardo. You
always believed in me. That made all the difference.
Preface
According to Stack Overflow’s 2020 developer survey, the best-paid
engineering roles require distributed systems expertise. That comes as no
surprise as modern applications are distributed systems.
I plan to update the book regularly, which is why it has a version number.
You can subscribe to receive updates from the book’s landing page. As no
book is ever perfect, I’m always happy to receive feedback. So if you find
an error, have an idea for improvement, or simply want to comment on
something, always feel free to write me1.
The book also makes for a great study companion for a system design
interview if you want to land a job at a company that runs large-scale
distributed systems, like Amazon, Google, Facebook, or Microsoft. If you
are interviewing for a senior role, you are expected to be able to design
complex networked services and dive deep into any vertical. You can be a
world champion at balancing trees, but if you fail the design round, you are
out. And if you just meet the bar, don’t be surprised when your offer is well
below what you expected, even if you aced everything else.
1. [email protected]
1 Introduction
A distributed system is one in which the failure of a computer you
didn’t even know existed can render your own computer unusable.
– Leslie Lamport
Some applications need to tackle workloads that are just too big to fit on a
single node, no matter how powerful. For example, Google receives
hundreds of thousands of search requests per second from all over the
globe. There is no way a single node could handle that.
This book will guide you through the fundamental challenges that need to
be solved to design, build and operate distributed systems: communication,
coordination, scalability, resiliency, and operations.
1.1 Communication
The first challenge comes from the fact that nodes need to communicate
over the network with each other. For example, when your browser wants to
load a website, it resolves the server’s address from the URL and sends an
HTTP request to it. In turn, the server returns a response with the content of
the page to the client.
How are request and response messages represented on the wire? What
happens when there is a temporary network outage, or some faulty network
switch flips a few bits in the messages? How can you guarantee that no
intermediary can snoop into the communication?
1.2 Coordination
Another hard challenge of building distributed systems is coordinating
nodes into a single coherent whole in the presence of failures. A fault is a
component that stopped working, and a system is fault-tolerant when it can
continue to operate despite one or more faults. The “two generals” problem
is a famous thought experiment that showcases why this is a challenging
problem.
Suppose there are two generals (nodes), each commanding its own army,
that need to agree on a time to jointly attack a city. There is some distance
between the armies, and the only way to communicate is by sending a
messenger (messages). Unfortunately, these messengers can be captured by
the enemy (network failure).
Is there a way for the generals to agree on a time? Well, general 1 could
send a message with a proposed time to general 2 and wait for a response.
What if no response arrives, though? Was one of the messengers captured?
Perhaps a messenger was injured, and it’s taking longer than expected to
arrive at the destination? Should the general send another messenger?
You can see that this problem is much harder than it originally appeared. As
it turns out, no matter how many messengers are dispatched, neither general
can be completely certain that the other army will attack the city at the same
time. Although sending more messengers increases the general’s
confidence, it never reaches absolute certainty.
Because coordination is such a key topic, the second part of this book is
dedicated to distributed algorithms used to implement coordination.
1.3 Scalability
The performance of a distributed system represents how efficiently it
handles load, and it’s generally measured with throughput and response
time. Throughput is the number of operations processed per second, and
response time is the total time between a client request and its response.
Load can be measured in different ways since it’s specific to the system’s
use cases. For example, number of concurrent users, number of
communication links, or ratio of writes to reads are all different forms of
load.
As the load increases, it will eventually reach the system’s capacity — the
maximum load the system can withstand. At that point, the system’s
performance either plateaus or worsens, as shown in Figure 1.1. If the load
on the system continues to grow, it will eventually hit a point where most
operations fail or time out.
Figure 1.1: The system throughput on the y axis is the subset of client
requests (x axis) that can be handled without errors and with low response
times, also referred to as its goodput.
A quick and easy way to increase the capacity is buying more expensive
hardware with better performance, which is referred to as scaling up. But
that will hit a brick wall sooner or later. When that option is no longer
available, the alternative is scaling out by adding more machines to the
system.
In the book’s third part, we will explore the main architectural patterns that
you can leverage to scale out applications: functional decomposition,
duplication, and partitioning.
1.4 Resiliency
A distributed system is resilient when it can continue to do its job even
when failures happen. And at scale, any failure that can happen will
eventually occur. Every component of a system has a probability of failing
— nodes can crash, network links can be severed, etc. No matter how small
that probability is, the more components there are, and the more operations
the system performs, the higher the absolute number of failures becomes.
And it gets worse: since failures typically are not independent, the failure of
a component can increase the probability that another one will fail.
Failures that are left unchecked can impact the system’s availability, which
is defined as the amount of time the application can serve requests divided
by the duration of the period measured. In other words, it’s the percentage
of time the system is capable of servicing requests and doing useful work.
1.5 Operations
Distributed systems need to be tested, deployed, and maintained. It used to
be that one team developed an application, and another was responsible for
operating it. The rise of microservices and DevOps has changed that. The
same team that designs a system is also responsible for its live-site
operation. That’s a good thing as there is no better way to find out where a
system falls short than experiencing it by being on-call for it.
Remote clients can’t just invoke an interface, which is why adapters are
required to hook up IPC mechanisms with the service’s interfaces. An
inbound adapter is part of the service’s Application Programming Interface
(API); it handles the requests received from an IPC mechanism, like HTTP,
by invoking operations defined in the inbound interfaces. In contrast,
outbound adapters implement the service’s outbound interfaces, granting
the business logic access to external services, like data stores. This is
illustrated in Figure 1.2.
Figure 1.2: The business logic uses the messaging interface implemented by
the Kafka producer to send messages and the repository interface to access
the SQL store. In contrast, the HTTP controller handles incoming requests
using the service interface.
Figure 1.3: The different architectural points of view used in this book.
(PART) Communication
Introduction
Communication between processes over the network, or inter-process
communication (IPC), is at the heart of distributed systems. Network
protocols are arranged in a stack, where each layer builds on the abstraction
provided by the layer below, and lower layers are closer to the hardware.
When a process sends data to another through the network, it moves
through the stack from the top layer to the bottom one and vice-versa on the
other end, as shown in Figure 1.4.
The link layer consists of network protocols that operate on local network
links, like Ethernet or Wi-Fi, and provides an interface to the underlying
network hardware. Switches operate at this layer and forward Ethernet
packets based on their destination MAC address.
The internet layer uses addresses to route packets from one machine to
another across the network. The Internet Protocol (IP) is the core protocol
of this layer, which delivers packets on a best-effort basis. Routers operate
at this layer and forward IP packets based on their destination IP address.
The transport layer transmits data between two processes using port
numbers to address the processes on either end. The most important
protocol in this layer is the Transmission Control Protocol (TCP).
The application layer defines high-level communication protocols, like
HTTP or DNS. Typically your code will target this level of abstraction.
Even though each protocol builds on top of the one below it, sometimes the
abstractions leak. If you don’t know how the bottom layers work, you will
have a hard time troubleshooting networking issues that will inevitably
arise.
Chapter 4 dives into how the phone book of the Internet (DNS) works,
which allows nodes to discover others using names. At its heart, DNS is a
distributed, hierarchical, and eventually consistent key-value store. By
studying it, we will get a first taste of eventual consistency.
Chapter 5 concludes this part by discussing how services can expose APIs
that other nodes can use to send commands or notifications to. Specifically,
we will dive into the implementation of a RESTful HTTP API.
2 Reliable links
TCP is a transport-layer protocol that exposes a reliable communication
channel between two processes on top of IP. TCP guarantees that a stream
of bytes arrives in order, without any gaps, duplication or corruption. TCP
also implements a set of stability patterns to avoid overwhelming the
network or the receiver.
2.1 Reliability
To create the illusion of a reliable channel, TCP partitions a byte stream into
discrete packets called segments. The segments are sequentially numbered,
which allows the receiver to detect holes and duplicates. Every segment
sent needs to be acknowledged by the receiver. When that doesn’t happen, a
timer fires on the sending side, and the segment is retransmitted. To ensure
that the data hasn’t been corrupted in transit, the receiver uses a checksum
to verify the integrity of a delivered segment.
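None of this machinery is visible to applications, which only ever see an ordered stream of bytes. The sketch below, which uses Python's standard socket API over the loopback interface purely for illustration, writes a single buffer on one end and reads it back in arbitrarily sized chunks on the other; segmentation, acknowledgments, retransmissions, and checksums all happen underneath:

import socket

# One process plays both roles over loopback, purely to illustrate the
# byte-stream abstraction; a real client and server would run on different nodes.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.create_connection(server.getsockname())
peer, _ = server.accept()

client.sendall(b"x" * 10_000)  # a single write on the sending side
client.close()

received = bytearray()
while True:
    chunk = peer.recv(1024)    # many reads on the receiving side
    if not chunk:
        break
    received.extend(chunk)

assert bytes(received) == b"x" * 10_000
peer.close()
server.close()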
This is a simplification, though, as there are more states than the three
above.
A server must be listening for connection requests from clients before a
connection is established. TCP uses a three-way handshake to create a new
connection, as shown in Figure 2.1:
The sequence numbers are used by TCP to ensure the data is delivered in
order and without holes.
Figure 2.2: The receive buffer stores data that hasn’t been processed yet by
the application.
The receiver also communicates back to the sender the size of the buffer
whenever it acknowledges a segment, as shown in Figure 2.3. The sender, if
it’s respecting the protocol, avoids sending more data than can fit in the
receiver’s buffer.
Figure 2.3: The size of the receive buffer is communicated in the headers of
acknowledgment segments.
Roughly speaking, the maximum bandwidth a connection can achieve is the
window size divided by the round trip time:

Bandwidth = WinSize / RTT

The equation shows that bandwidth is a function of latency. TCP will try
very hard to optimize the window size since it can’t do anything about the
round trip time. However, that doesn’t always yield the optimal
configuration. Due to the way congestion control works, the lower the
round trip time is, the better the underlying network’s bandwidth is utilized.
This is yet another reason to put servers geographically close to the clients.
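As a rough back-of-the-envelope illustration of that relationship (the window size and round trip times below are made-up numbers):

# Estimate the maximum throughput of a single TCP connection, assuming the
# window size is the limiting factor (Bandwidth = WinSize / RTT).
def max_throughput(window_bytes, rtt_seconds):
    return window_bytes / rtt_seconds

# The same 64 KiB window with a 10 ms vs. a 100 ms round trip time.
print(max_throughput(64 * 1024, 0.010))  # ~6.5 MB/s
print(max_throughput(64 * 1024, 0.100))  # ~0.65 MB/s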
Unlike TCP, UDP does not expose the abstraction of a byte stream to its
clients. Clients can only send discrete packets, called datagrams, with a
limited size. UDP doesn’t offer any reliability as datagrams don’t have
sequence numbers and are not acknowledged. UDP doesn’t implement flow
and congestion control either. Overall, UDP is a lean and barebone
protocol. It’s used to bootstrap custom protocols, which provide some, but
not all, of the stability and reliability guarantees that TCP does1.
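As a minimal sketch of the difference in programming model (again using Python's socket API over loopback purely for illustration), a datagram is sent as a single fire-and-forget packet:

import socket

# No handshake, no acknowledgments, no retransmissions, no ordering: the
# datagram either shows up at the destination or it doesn't.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

sender.sendto(b"ping", receiver.getsockname())
data, address = receiver.recvfrom(1024)  # on a real network, this may block forever
print(data, address)

sender.close()
receiver.close()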
3.1 Encryption
Encryption guarantees that the data transmitted between a client and a
server is obfuscated and can only be read by the communicating processes.
When the TLS connection is first opened, the client and the server negotiate
a shared encryption secret using asymmetric encryption. Both parties
generate a key-pair consisting of a private and public part. The processes
are then able to create a shared secret by exchanging their public keys. This
is possible thanks to some mathematical properties of the key-pairs. The
beauty of this approach is that the shared secret is never communicated over
the wire.
Encrypting in-flight data has a CPU penalty, but it’s negligible since
modern processors actually come with cryptographic instructions. Unless
you have a very good reason, you should use TLS for all communications,
even those that are not going through the public Internet.
3.2 Authentication
Although we have a way to obfuscate data transmitted across the wire, the
client still needs to authenticate the server to verify it’s who it claims to be.
Similarly, the server might want to authenticate the identity of the client.
The problem with this naive approach is that the client has no idea whether
the public key shared by the server is authentic, so we have certificates to
prove the ownership of a public key for a specific entity. A certificate
includes information about the owning entity, expiration date, public key,
and a digital signature of the third-party entity that issued the certificate.
The certificate’s issuing entity is called a certificate authority (CA), which
is also represented with a certificate. This creates a chain of certificates that
ends with a certificate issued by a root CA, as shown in Figure 3.1, which
self-signs its certificate.
When a TLS connection is opened, the server sends the full certificate chain
to the client, starting with the server’s certificate and ending with the root
CA. The client verifies the server’s certificate by scanning the certificate
chain until a certificate is found that it trusts. Then the certificates are
verified in the reverse order from that point in the chain. The verification
checks several things, like the certificate’s expiration date and whether the
digital signature was actually signed by the issuing CA. If the verification
reaches the last certificate in the path without errors, the path is verified,
and the server is authenticated.
One of the most common mistakes when using TLS is letting a certificate
expire. When that happens, the client won’t be able to verify the server’s
identity, and opening a connection to the remote process will fail. This can
bring an entire service down as clients are no longer able to connect with it.
Automation to monitor and auto-renew certificates close to expiration is
well worth the investment.
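As an illustration of the kind of check such automation might perform, the sketch below (using Python's standard ssl module and the hypothetical host example.com) opens a TLS connection, lets the library verify the certificate chain, and reports how many days are left before the server's certificate expires:

import socket
import ssl
import time

hostname = "example.com"  # hypothetical service to check
context = ssl.create_default_context()  # verifies the certificate chain and hostname

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        certificate = tls.getpeercert()

# 'notAfter' holds the certificate's expiration date.
expires_at = ssl.cert_time_to_seconds(certificate["notAfter"])
days_left = (expires_at - time.time()) / 86400
print(f"{hostname}: certificate expires in {days_left:.0f} days")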
3.3 Integrity
Even if the data is obfuscated, a middle man could still tamper with it; for
example, random bits within the messages could be swapped. To protect
against tampering, TLS verifies the integrity of the data by calculating a
message digest. A secure hash function is used to create a message
authentication code (HMAC). When a process receives a message, it
recomputes the digest of the message and checks whether it matches the
digest included in the message. If not, then the message has either been
corrupted during transmission or has been tampered with. In this case, the
message is dropped.
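The mechanism is conceptually similar to the sketch below, which uses Python's hmac module with a made-up shared key; TLS derives its keys during the handshake and wires the digest into the record protocol, so this is only an illustration of the idea:

import hashlib
import hmac

key = b"key-derived-during-the-handshake"  # illustrative value only
message = b"transfer 100 to account 42"

# The sender computes a digest over the message with the shared key and
# attaches it to the message.
digest = hmac.new(key, message, hashlib.sha256).digest()

# The receiver recomputes the digest and compares it; a mismatch means the
# message was corrupted or tampered with, so it's dropped.
def verify(message, digest):
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, digest)

assert verify(message, digest)
assert not verify(b"transfer 900 to account 42", digest)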
The TLS HMAC protects against data corruption as well, not just
tampering. You might be wondering how data can be corrupted if TCP is
supposed to guarantee its integrity. While TCP does use a checksum to
protect against data corruption, it’s not 100% reliable because it fails to
detect errors for roughly 1 in 16 million to 10 billion packets. With packets
of 1KB, this can happen every 16 GB to 10 TB transmitted.
3.4 Handshake
When a new TLS connection is established, a handshake between the client
and server occurs during which:
1. The parties agree on the cipher suite to use. A cipher suite specifies the
different algorithms that the client and the server intend to use to create
a secure channel, like the:
key exchange algorithm used to generate shared secrets;
signature algorithm used to sign certificates;
symmetric encryption algorithm used to encrypt the application
data;
HMAC algorithm used to guarantee the integrity and authenticity
of the application data.
2. The parties use the negotiated key exchange algorithm to create a
shared secret. The shared secret is used by the chosen symmetric
encryption algorithm to encrypt the communication of the secure
channel going forwards.
3. The client verifies the certificate provided by the server. The
verification process confirms that the server is who it says it is. If the
verification is successful, the client can start sending encrypted
application data to the server. The server can optionally also verify the
client certificate if one is available.
In this chapter, we will look at how DNS resolution works in a browser, but
the process is the same for any other client. When you enter a URL in your
browser, the first step is to resolve the hostname’s IP address, which is then
used to open a new TLS connection.
Concretely, let’s take a look at how the DNS resolution works when you
type www.example.com in your browser (see Figure 4.1).
1. The browser checks whether it has resolved the hostname before in its
local cache. If so, it returns the cached IP address; otherwise it routes
the request to a DNS resolver. The DNS resolver is typically a DNS
server hosted by your Internet Service Provider.
2. If the resolver doesn’t have the answer cached, it forwards the request to
a root name server.
3. The root name server maps the top-level domain (TLD) of an incoming
request, like .com, to the address of the name server responsible for it.
4. The resolver sends the resolution request to the TLD name server, in our
case the one responsible for .com.
5. The TLD name server maps the domain name of a request to the
address of the authoritative name server responsible for it. An
authoritative name server is responsible for a specific domain and
holds all records that map the hostnames to IP addresses within that
domain.
The resolution process involves several round trips in the worst case, but its
beauty is that the address of a root name server is all that’s needed to
resolve any hostname. Given the costs involved in resolving a hostname, it
comes as no surprise that the designers of DNS thought of ways to reduce
them.
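Applications rarely perform these steps themselves; they delegate to the operating system's resolver. A minimal sketch in Python, which resolves a hostname the same way a browser would before opening a connection:

import socket

# getaddrinfo consults the local cache and the configured DNS resolver and
# returns one entry per address the host resolves to (IPv4 and IPv6).
for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
    "www.example.com", 443, proto=socket.IPPROTO_TCP
):
    print(sockaddr[0])  # the resolved IP address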
DNS uses UDP to serve queries as it’s lean and has low overhead. At the time
DNS was designed, UDP was a great choice as there is no price to be paid to
open a new connection. That said, it’s not secure, as requests are sent in the clear
over the Internet, allowing third parties to snoop in. Hence, the industry is
pushing slowly towards running DNS on top of TLS.
How do these caches know when to expire a record? Every DNS record has
a time to live (TTL) that informs the cache how long the entry is valid. But,
there is no guarantee that the client plays nicely and enforces the TTL.
Don’t be surprised when you change a DNS entry and find out that a small
fraction of clients are still trying to connect to the old address days after the
change.
Setting a TTL requires making a tradeoff. If you use a long TTL, many
clients won’t see a change for a long time. But if you set it too short, you
increase the load on the name servers and the average response time of
requests because the clients will have to resolve the entry more often.
If your name server becomes unavailable for any reason, the smaller the
record’s TTL is, the higher the number of clients impacted will be. DNS
can easily become a single point of failure — if your DNS name server is
down and the clients can’t find the IP address of your service, they won’t
have a way to connect to it. This can lead to massive outages.
5 APIs
A service exposes operations to its consumers via a set of interfaces
implemented by its business logic. As remote clients can’t access these
directly, adapters — which make up the service’s application programming
interface (API) — translate messages received from IPC mechanisms to
interface calls, as shown in Figure 5.1.
When a client sends a request to a service, it can block and wait for the
response to arrive, making the communication synchronous. Alternatively,
it can ask the outbound adapter to invoke a callback when it receives the
response, making the communication asynchronous.
5.1 HTTP
HTTP is a request-response protocol used to encode and transport
information between a client and a server. In an HTTP transaction, the
client sends a request message to the server’s API endpoint, and the server
replies back with a response message, as shown in Figure 5.2.
In HTTP 1.1, a message is a textual block of data that contains a start line, a
set of headers, and an optional body:
In a request message, the start line indicates what the request is for,
and in a response message, it indicates what the response’s result is.
The headers are key-value pairs with meta-information that describe
the message.
The message’s body is a container for data.
Figure 5.2: An example HTTP transaction between a browser and a web
server.
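To make the format concrete, the sketch below sends a bare HTTP 1.1 request to the hypothetical host example.com over a plain TCP socket and prints the status line and headers of the response; in practice you would use an HTTP client library rather than hand-rolling the message:

import socket

request = (
    "GET / HTTP/1.1\r\n"       # start line: method, path and protocol version
    "Host: example.com\r\n"    # headers: key-value pairs of meta-information
    "Connection: close\r\n"
    "\r\n"                     # a blank line separates the headers from the (empty) body
)

with socket.create_connection(("example.com", 80)) as conn:
    conn.sendall(request.encode())
    response = b""
    while chunk := conn.recv(4096):
        response += chunk

# The response starts with a status line, followed by headers and the body.
print(response.split(b"\r\n\r\n")[0].decode())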
HTTP 2 was designed from the ground up to address the main limitations of
HTTP 1.1. It uses a binary protocol rather than a textual one, which allows
HTTP 2 to multiplex multiple concurrent request-response transactions on
the same connection. In early 2020 about half of the most-visited websites
on the Internet were using the new HTTP 2 standard. HTTP 3 is the latest
iteration of the HTTP standard, which is slowly being rolled out to browsers
as I write this — it’s based on UDP and implements its own transport
protocol to address some of TCP’s shortcomings.
Given that neither HTTP 2 nor HTTP 3 are ubiquitous yet, you still need to
be familiar with HTTP 1.1, which is the standard the book uses going
forward as its plain text format is easier to depict.
5.2 Resources
Suppose we are responsible for implementing a service to manage the
product catalog of an e-commerce application. The service must allow users
to browse the catalog and admins to create, update, or delete products.
Sounds simple enough; the interface of the service could be defined like
this:
interface CatalogService
{
    List<Product> GetProducts(...);
    Product GetProduct(...);
    void AddProduct(...);
    void DeleteProduct(...);
    void UpdateProduct(...);
}
External clients can’t invoke interface methods directly, which is where the
HTTP adapter comes in. It handles an HTTP request by invoking the
methods defined in the service interface and converts their return values
into HTTP responses. But to perform this mapping, we first need to
understand how to model the API with HTTP in the first place.
The URL without the query string is also referred to as the API’s /products
endpoint.
Now that we know how to refer to resources, let’s see how to represent
them on the wire when they are transmitted in the body of request and
response messages. A resource can be represented in different ways; for
example, a product can be represented either with an XML or a JSON
document. JSON is typically used to represent non-binary resources in
REST APIs:
{
    "id": 42,
    "category": "Laptop",
    "price": 999
}
The most commonly used methods are POST, GET, PUT, and DELETE.
For example, the API of our catalog service could be defined as follows:
POST /products — Create a new product and return the URI of the
new resource.
GET /products — Retrieve a list of products. The query string can be
used to filter, paginate, and sort the collection. Pagination should be
used to return a limited number of resources per call to prevent denial
of service attacks.
GET /products/42 — Retrieve product 42.
PUT /products/42 — Update product 42.
DELETE /products/42 — Delete product 42.
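To give a feel for what the inbound HTTP adapter could look like, here is a minimal sketch using Flask and an in-memory dictionary standing in for the real business logic; the routes mirror the endpoints above, while everything else is made up for illustration:

from flask import Flask, jsonify, request

app = Flask(__name__)

# A stand-in for the service's business logic and backing store.
products = {42: {"id": 42, "category": "Laptop", "price": 999}}

@app.route("/products", methods=["GET"])
def list_products():
    # The query string could drive filtering, sorting and pagination.
    return jsonify(list(products.values()))

@app.route("/products/<int:product_id>", methods=["GET"])
def get_product(product_id):
    product = products.get(product_id)
    return (jsonify(product), 200) if product else ("", 404)

@app.route("/products", methods=["POST"])
def create_product():
    product = request.get_json()
    products[product["id"]] = product
    # 201 Created, with the URI of the new resource in the Location header.
    return "", 201, {"Location": f"/products/{product['id']}"}

@app.route("/products/<int:product_id>", methods=["DELETE"])
def delete_product(product_id):
    products.pop(product_id, None)
    return "", 204

if __name__ == "__main__":
    app.run()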
Request methods can be classified depending on whether they are safe and
idempotent. A safe method should not have any visible side effects and can
be safely cached. An idempotent method can be executed multiple times,
and the end result should be the same as if it was executed just a single
time.
Status codes between 200 and 299 are used to communicate success. For
example, 200 (OK) means that the request succeeded, and the body of the
response contains the requested resource.
Status codes between 300 and 399 are used for redirection. For example,
301 (Moved Permanently) means that the requested resource has been
moved to a different URL, specified in the response message Location
header.
Status codes between 400 and 499 are reserved for client errors. A request
that fails with a client error will usually continue to return the same error if
it’s retried, as the error is caused by an issue with the client, not the server.
Because of that, it shouldn’t be retried. These client errors are common:
Status codes between 500 and 599 are reserved for server errors. A request
that fails with a server error can be retried as the issue that caused it to fail
might be fixed by the time the retry is processed by the server. These are
some typical server status codes:
5.5 OpenAPI
Now that we have learned how to map the operations defined by our
service’s interface onto RESTful HTTP endpoints, we can formally define
the API with an interface definition language (IDL), a language
independent description of it. The IDL definition can be used to generate
boilerplate code for the IPC adapter and client SDKs in your languages of
choice.
The OpenAPI specification, which evolved from the Swagger project, is
one of the most popular IDLs for RESTful APIs based on HTTP. With it, we
can formally describe our API in a YAML document, including the
available endpoints, supported request methods and response status codes
for each endpoint, and the schema of the resources’ JSON representation.
For example, this is how part of the /products endpoint of the catalog
service’s API could be defined:
openapi: 3.0.0
info:
  version: "1.0.0"
  title: Catalog Service API
paths:
  /products:
    get:
      summary: List products
      parameters:
        - in: query
          name: sort
          required: false
          schema:
            type: string
      responses:
        '200':
          description: list of products in catalog
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/ProductItem'
        '400':
          description: bad input
components:
  schemas:
    ProductItem:
      type: object
      required:
        - id
        - name
        - category
      properties:
        id:
          type: number
        name:
          type: string
        category:
          type: string
Although this is a very simple example and we won’t spend time describing
OpenAPI further as it’s mostly an implementation detail, it should give you
an idea of its expressiveness. With this definition, we can then run a tool to
generate the API’s documentation, boilerplate adapters, and client SDKs for
our languages of choice.
5.6 Evolution
APIs start out as beautifully-designed interfaces. Slowly, but surely, they
will need to change over time to adapt to new use cases. The last thing you
want to do when evolving your API is to introduce a breaking change that
requires modifying all the clients in unison, some of which you might have
no control over in the first place.
There are two types of changes that can break compatibility, one at the
endpoint level and another at the message level. For example, if you were to
change the /products endpoint to /fancy-products, it would obviously break
clients that haven’t been updated to support the new endpoint. The same
goes when making a previously optional query parameter mandatory.
Chapter 6 introduces formal models that encode our assumptions about the
behavior of nodes, communication links, and timing; think of them as
abstractions that allow us to reason about distributed systems by ignoring
the complexity of the actual technologies used to implement them.
Chapter 8 dives into the concept of time and order. In this chapter, we will
first learn why agreeing on the time an event happened in a distributed
system is much harder than it looks, and then propose a solution based on
clocks that don’t measure the passing of time.
Chapter 9 describes how a group of processes can elect a leader who can
perform operations that others can’t, like accessing a shared resource or
coordinating other processes’ actions.
The fair-loss link model assumes that messages may be lost and
duplicated. If the sender keeps retransmitting a message, eventually it
will be delivered to the destination.
The reliable link model assumes that a message is delivered exactly
once, without loss or duplication. A reliable link can be implemented
on top of a fair-loss one by de-duplicating messages at the receiving
side.
The authenticated reliable link model makes the same assumptions as
the reliable link, but additionally assumes that the receiver can
authenticate the message’s sender.
Even though these models are just abstractions of real communication links,
they are useful to verify the correctness of algorithms. As we have seen in
the previous chapters, it’s possible to build a reliable and authenticated
communication link on top of a fair-loss one. For example, TCP does
precisely that (and more), while TLS implements authentication (and more).
We can also model the different types of node failures we expect to happen:
The arbitrary-fault model assumes that a node can deviate from its
algorithm in arbitrary ways, leading to crashes or unexpected behavior
due to bugs or malicious activity. The arbitrary fault model is also
referred to as the “Byzantine” model for historical reasons.
Interestingly, it can be theoretically proven that a system with
Byzantine nodes can tolerate up to 1/3 of faulty nodes and still operate
correctly.
The crash-recovery model assumes that a node doesn’t deviate from its
algorithm, but can crash and restart at any time, losing its in-memory
state.
The crash-stop model assumes that a node doesn’t deviate from its
algorithm, but if it crashes it never comes back online.
In the worst case, the client will wait forever for a response that will never
arrive. The best it can do is make an educated guess on whether the server
is likely to be down or unreachable after some time has passed. To do that,
the client can configure a timeout to trigger if it hasn’t received a response
from the server after a certain amount of time. If and when the timeout
triggers, the client considers the server unavailable and throws an error.
The tricky part is deciding how long the timeout should be. If it’s too short
and the server is reachable, the client will
wrongly consider the server dead; if it’s too long and the server is not
reachable, the client will block waiting for a response. The bottom line is
that it’s not possible to build a perfect failure detector.
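A bare-bones version of such a failure detector, sketched in Python with an arbitrary 2-second timeout and the hypothetical host example.com, might look like this:

import socket

def is_reachable(host, port, timeout_seconds=2.0):
    # Consider the server unavailable if the connection attempt doesn't
    # complete within the timeout. The guess can be wrong either way: a slow
    # but healthy server looks dead, while a server that crashes right after
    # accepting the connection looks alive.
    try:
        with socket.create_connection((host, port), timeout=timeout_seconds):
            return True
    except OSError:
        return False

print(is_reachable("example.com", 443))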
Pings and heartbeats are typically used when specific processes frequently
interact with each other, and an action needs to be taken as soon as one of
them is no longer reachable. If that’s not the case, detecting failures just at
communication time is good enough.
8 Time
Time is an essential concept in any application, even more so in distributed
ones. We have already encountered some use for it when discussing the
network stack (e.g., DNS record TTL) and failure detection. Time also
plays an important role in reconstructing the order of operations by logging
their timestamps.
This creates a problem as measuring the elapsed time between two points in
time becomes error-prone. For example, an operation that is executed after
another could appear to have been executed before.
Luckily, most operating systems offer a different type of clock that is not
affected by time jumps: the monotonic clock. A monotonic clock measures
the number of seconds elapsed since an arbitrary point, like when the node
started up, and can only move forward in time. A monotonic clock is useful
to measure how much time elapsed between two timestamps on the same
node, but timestamps of different nodes can’t be compared with each other.
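In Python, for example, time.monotonic() exposes the monotonic clock, while time.time() reads the wall clock; only the former is safe for measuring durations:

import time

start = time.monotonic()  # arbitrary origin; only ever moves forward
time.sleep(0.1)           # stand-in for the operation being measured
elapsed = time.monotonic() - start

# time.time() reads the wall clock, which NTP can step forwards or backwards,
# so it's the wrong tool for measuring how long something took on a node.
print(f"operation took {elapsed:.3f} seconds")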
Imagine sending an email to a friend. Any actions you did before sending
that email, like drinking coffee, must have happened before the actions your
friend took after receiving the email. Similarly, when one process sends a
message to another, a so-called synchronization point is created. The
operations executed by the sender before the message was sent must have
happened before the operations that the receiver executed after receiving it.
A Lamport clock is a logical clock based on this idea. Every process in the
system has its own local logical clock implemented with a numerical
counter that follows specific rules:
You would think that the converse also applies — if the logical timestamp
of operation O3 is less than O4’s, then O3 happened-before O4. But that isn’t
guaranteed: two operations that are not related by happened-before can still
end up with ordered timestamps.
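To make this concrete, here is a purely illustrative sketch of the standard Lamport clock rules: increment the counter before each operation, attach it to outgoing messages, and on receipt jump ahead of both the local and the received value.

class LamportClock:
    def __init__(self):
        self.counter = 0

    def tick(self):
        # Increment the local counter before executing an operation.
        self.counter += 1
        return self.counter

    def send(self):
        # Attach the current counter to every outgoing message.
        return self.tick()

    def receive(self, message_timestamp):
        # On receipt, move past both the local and the received value.
        self.counter = max(self.counter, message_timestamp) + 1
        return self.counter

a, b = LamportClock(), LamportClock()
t_send = a.send()              # process A sends a message
t_receive = b.receive(t_send)  # process B receives it
assert t_send < t_receive      # the send happened-before the receive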
A process updates its local vector clock based on the following rules:
Figure 8.2: Each process has a vector clock represented with an array of
three counters.
The beauty of vector clock timestamps is that they can be partially ordered;
given two operations O1 and O2 with timestamps T1 and T2, if:

every counter in T1 is less than or equal to the corresponding counter in T2,
and there is at least one counter in T1 that is strictly less than the
corresponding counter in T2,

then O1 happened-before O2. For example, in Figure 8.2, B happened-before C.
If O1 didn’t happen-before O2 and O2 didn’t happen-before O1, then the two
operations are considered to be concurrent.
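The comparison rule can be expressed in a few lines of Python, with timestamps represented as plain lists of counters, one per process:

def happened_before(t1, t2):
    # T1 happened-before T2 if every counter in T1 is less than or equal to
    # the corresponding counter in T2 and at least one is strictly less.
    return all(a <= b for a, b in zip(t1, t2)) and any(a < b for a, b in zip(t1, t2))

def concurrent(t1, t2):
    # Neither operation happened-before the other.
    return not happened_before(t1, t2) and not happened_before(t2, t1)

assert happened_before([1, 0, 0], [2, 1, 0])
assert concurrent([1, 2, 0], [0, 1, 3])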
This discussion about logical clocks might feel quite abstract. Later in the
book, we will encounter some practical applications of logical clocks. Once
you learn to spot them, you will realize they are everywhere, as they can be
disguised under different names. What’s important to internalize at this
point is that generally, you can’t use physical clocks to accurately derive the
order of events that happened on different processes2.
2. That said, sometimes physical clocks are good enough. For example,
using physical clocks to timestamp logs is fine as they are mostly used
for debugging purposes.
9 Leader election
Sometimes a single process in the system needs to have special powers, like
being the only one that can access a shared resource or assign work to
others. To grant a process these powers, the system needs to elect a leader
among a set of candidate processes, which remains in charge until it crashes
or becomes otherwise unavailable. When that happens, the remaining
processes detect that the leader is no longer available and elect a new one.
the follower state, in which the process recognizes another one as the
leader;
the candidate state, in which the process starts a new election
proposing itself as a leader;
or the leader state, in which the process is the leader.
When the system starts up, all processes begin their journey as followers. A
follower expects to receive a periodic heartbeat from the leader containing
the election term the leader was elected in. If the follower doesn’t receive
any heartbeat within a certain time period, a timeout fires and the leader is
presumed dead. At that point, the follower starts a new election by
incrementing the current election term and transitioning to the candidate
state. It then votes for itself and sends a request to all the processes in the
system to vote for it, stamping the request with the current election term.
The process remains in the candidate state until one of three things happens:
it wins the election, another process wins the election, or some time goes by
with no winner:
The candidate wins the election — The candidate wins the election if
the majority of the processes in the system vote for it. Each process
can vote for at most one candidate in a term on a first-come-first-
served basis. This majority rule enforces that at most one candidate
can win a term. If the candidate wins the election, it transitions to the
leader state and starts sending out heartbeats to the other processes.
Another process wins the election — If the candidate receives a
heartbeat from a process that claims to be the leader with a term
greater than or equal to the candidate’s term, it accepts the new leader
and returns to the follower state. If not, it continues in the candidate
state. You might be wondering how that could happen; for example, if
the candidate process was to stop for any reason, like for a long GC
pause, by the time it resumes another process could have won the
election.
A period of time goes by with no winner — It’s unlikely but possible
that multiple followers become candidates simultaneously, and none
manages to receive a majority of votes; this is referred to as a split
vote. When that happens, the candidate will eventually time out and
start a new election. The election timeout is picked randomly from a
fixed interval to reduce the likelihood of another split vote in the next
election.
Figure 9.1: Raft’s leader election algorithm represented as a state machine.
The TTL expiry logic can also be implemented on the client-side, like this
locking library for DynamoDB does, but the implementation is more
complex, and it still requires the data store to offer a compare-and-swap
operation.
You might think that’s enough to guarantee there can’t be more than one
leader in your application. Unfortunately, that’s not the case.
To see why, suppose there are multiple processes that need to update a file
on a shared blob store, and you want to guarantee that only a single process
at a time can do so to avoid race conditions. To achieve that, you decide to
use a distributed mutex, a form of leader election. Each process tries to
acquire the lock, and the one that does so successfully reads the file,
updates it in memory, and writes it back to the store:
if lock.acquire():
    try:
        content = store.read(blob_name)
        new_content = update(content)
        store.write(blob_name, new_content)
    finally:
        lock.release()
The problem here is that by the time the process writes the content to the
store, it might no longer be the leader and a lot might have happened since
it was elected. For example, the operating system might have preempted
and stopped the process, and several seconds will have passed by the time
it’s running again. So how can the process ensure that it’s still the leader
then? It could check one more time before writing to the store, but that
doesn’t eliminate the race condition; it just makes it less likely.
To avoid this issue, the data store downstream needs to verify that the
request has been sent by the current leader. One way to do that is by using a
fencing token. A fencing token is a number that increases every time that a
distributed lock is acquired — in other words, it’s a logical clock. When the
leader writes to the store, it passes down the fencing token to it. The store
remembers the value of the last token and accepts only writes with a greater
value:
success, token = lock.acquire()
if success:
    try:
        content = store.read(blob_name)
        new_content = update(content)
        store.write(blob_name, new_content, token)
    finally:
        lock.release()
This approach adds complexity as the downstream consumer, the blob store,
needs to support fencing tokens. If it doesn’t, you are out of luck, and you
will have to design your system around the fact that occasionally there will
be more than one leader. For example, if there are momentarily two leaders
and they both perform the same idempotent operation, no harm is done.
Also, having a leader introduces a single point of failure with a large blast
radius; if the election process stops working or the leader isn’t working as
expected, it can bring down the entire system with it.
As a rule of thumb, if you must use leader election, you have to minimize
the work it performs and be prepared to occasionally have more than one
leader if you can’t support fencing tokens end-to-end.
10 Replication
Data replication is a fundamental building block of distributed systems. One
reason to replicate data is to increase availability. If some data is stored
exclusively on a single node, and that node goes down, the data won’t be
accessible anymore. But if the data is replicated instead, clients can
seamlessly switch to a replica. Another reason for replication is to increase
scalability and performance; the more replicas there are, the more clients
can access the data concurrently without hitting performance degradations.
Figure 10.1: The leader’s log is replicated to its followers. This figure
appears in Raft’s paper.
When the leader wants to apply an operation to its local state, it first
appends a new log entry for the operation into its log. At this point, the
operation hasn’t been applied to the local state just yet; it has only been
logged.
The leader then sends a so-called AppendEntries request to each follower
with the new entry to be added. This message is also sent out periodically,
even in the absence of new entries, as it acts as a heartbeat for the leader.
Because the leader needs to wait only for a majority of followers, it can
make progress even if some processes are down, i.e., if there are 2f + 1
followers, the system can tolerate up to f failures. The algorithm guarantees
that an entry that is committed is durable and will eventually be executed by
all the processes in the system, not just those that were part of the original
majority.
So far, we have assumed there are no failures, and the network is reliable.
Let’s relax these assumptions. If the leader fails, a follower is elected as the
new leader. But, there is a caveat: because the replication algorithm only
needs a majority of the processes to make progress, it’s possible that when a
leader fails, some processes are not up-to-date.
When the AppendEntries request is rejected, the leader retries sending the
message, this time including the last two log entries — this is why we
referred to the request as AppendEntries, and not as AppendEntry. This
dance continues until the follower finally accepts a list of log entries that
can be appended to its log without creating a hole. Although the number of
messages exchanged can be optimized, the idea behind it is the same: the
follower waits for a list of puzzle pieces that perfectly fit its version of the
puzzle.
10.2 Consensus
State machine replication can be used for much more than just replicating
data since it’s a solution to the consensus problem. Consensus is a
fundamental problem studied in distributed systems research, which
requires a set of processes to agree on a value in a fault-tolerant way so that:
Typically, when you have a problem that requires consensus, the last thing
you want to do is to solve it from scratch by implementing an algorithm like
Raft. While it’s important to understand what consensus is and how it can
be solved, many good open-source projects implement state machine
replication and expose simple APIs on top of it, like etcd and ZooKeeper.
But in reality, things are quite different — the request needs to reach the
leader, which then needs to process it and finally send back a response to
the client. As shown in Figure 10.3, all these actions take time and are not
instantaneous.
The best guarantee the system can provide is that the request executes
somewhere between its invocation and completion time. You might think
that this doesn’t look like a big deal; after all, it’s what you are used to
when writing single-threaded applications. If you assign 1 to x and read its
value right after, you expect to find 1 in there, assuming there is no other
thread writing to the same variable. But, once you start dealing with
systems that replicate their state on multiple nodes for high availability and
scalability, all bets are off. To understand why that’s the case, we will
explore different ways to implement reads in our replicated store.
In section 10.1, we looked at how Raft replicates the leader’s state to its
followers. Since only the leader can make changes to the state, any
operation that modifies it needs to necessarily go through the leader. But
what about reads? They don’t necessarily have to go through the leader as
they don’t affect the system’s state. Reads can be served by the leader, a
follower, or a combination of leader and followers. If all reads were to go
through the leader, the read throughput would be limited by that of a single
process. But, if reads can be served by any follower instead, then two
clients, or observers, can have a different view of the system’s state, since
followers can lag behind the leader.
If clients send writes and reads exclusively to the leader, then every request
appears to take place atomically at a very specific point in time as if there
was a single copy of the data. No matter how many replicas there are or
how far behind they are lagging, as long as the clients always query the
leader directly, from their point of view there is a single copy of the data.
What if the client sends a read request to the leader and by the time the
request gets there, the server assumes it’s the leader, but it actually was just
deposed? If the ex-leader was to process the request, the system would no
longer be strongly consistent. To guard against this case, the presumed
leader first needs to contact a majority of the replicas to confirm whether it
still is the leader. Only then it’s allowed to execute the request and send
back the response to the client. This considerably increases the time
required to serve a read.
Even though a follower can lag behind the leader, it will always receive
new updates in the same order as the leader. Suppose a client only ever
queries follower 1, and another only ever queries follower 2. In that case,
the two clients will see the state evolving at different times, as followers are
not entirely in sync (see Figure 10.5).
Figure 10.5: Although followers have a different view of the system’s state,
they process updates in the same order.
The consistency model in which operations occur in the same order for all
observers, but without any real-time guarantee about when an operation’s
side effects become visible to them, is called sequential consistency. The
lack of real-time guarantees is what differentiates
sequential consistency from linearizability.
Even though network partitions can happen, they are usually rare. But, there
is a trade-off between consistency and latency in the absence of a network
partition. The stronger the consistency guarantee is, the higher the latency
of individual operations must be. This relationship is expressed by the
PACELC theorem. It states that in case of network partitioning (P) in a
distributed computer system, one has to choose between availability (A)
and consistency (C), but else (E), even when the system is running normally
in the absence of partitions, one has to choose between latency (L) and
consistency (C).
11.1 ACID
Consider a money transfer from one bank account to another. If the
withdrawal succeeds, but the deposit doesn’t, the funds need to be deposited
back into the source account — money can’t just disappear into thin air. In
other words, the transfer needs to execute atomically; either both the
withdrawal and the deposit succeed, or neither do. To achieve that, the
withdrawal and deposit need to be wrapped in an inseparable unit: a
transaction.
Atomicity guarantees that partial failures aren’t possible; either all the
operations in the transactions complete successfully, or they are rolled
back as if they never happened.
Consistency guarantees that the application-level invariants, like a
column that can’t be null, must always be true. Confusingly, the “C” in
ACID has nothing to do with the consistency models we talked about
so far, and according to Joe Hellerstein, the “C” was tossed in to make
the acronym work. Therefore, we will safely ignore this property in the
rest of this chapter.
Isolation guarantees that the concurrent execution of transactions
doesn’t cause any race conditions.
Durability guarantees that once the data store commits the transaction,
the changes are persisted on durable storage. The use of a write-ahead
log (WAL) is the standard method used to ensure durability. When
using a WAL, the data store can update its state only after log entries
describing the changes have been flushed to permanent storage. Most
of the time, the database doesn’t read from this log at all. But if the
database crashes, the log can be used to recover its prior state.
11.2 Isolation
A set of concurrently running transactions that access the same data can run
into all sorts of race conditions, like dirty writes, dirty reads, fuzzy reads,
and phantom reads:
Transactions can have different types of isolation levels that are defined
based on the type of race conditions they forbid, as shown in Figure 11.1.
Figure 11.1: Isolation levels define which race conditions they forbid.
Serializability is the only isolation level that guards against all possible race
conditions. It guarantees that the side effects of executing a set of
transactions appear to be the same as if they had executed sequentially, one
after the other. But, we still have a problem — there are many possible
orders that the transactions can appear to be executed in, as serializability
doesn’t say anything about which one to pick.
There are more isolation levels and race conditions than the ones we
discussed here. Jepsen provides a good formal reference of the existing
isolation levels, how they relate to one another, and which guarantees they
offer. Although vendors typically document the isolation levels their
products offer, these specifications don’t always match the formal
definitions.
Now that we know what serializability is, let’s look at how it can be
implemented and why it’s so expensive in terms of performance.
Serializability can be achieved either with a pessimistic or an optimistic
concurrency control mechanism.
11.2.1 Concurrency control
There are two phases in 2PL, an expanding and a shrinking one. In the
expanding phase, the transaction is allowed only to acquire locks, but not to
release them. In the shrinking phase, the transaction is permitted only to
release locks, but not to acquire them. If these rules are obeyed, it can be
formally proven that the protocol guarantees serializability.
I have deliberately not spent much time describing 2PL and MVCC, as it’s
unlikely you will have to implement them in your systems. But, the
commercial data stores your systems depend on use one or the other
technique to isolate transactions, so you must have a basic grasp of the
tradeoffs.
11.3 Atomicity
Going back to our original example of sending money from one bank
account to another, suppose the two accounts belong to two different banks
that use separate data stores. How should we go about guaranteeing
atomicity across the two accounts? We can’t just run two separate
transactions to respectively withdraw and deposit the funds — if the second
transaction fails, then the system is left in an inconsistent state. We need
atomicity: the guarantee that either both transactions succeed and their
changes are committed, or that they fail without any side effects.
The other point of no return is when the coordinator decides to commit or abort
the transaction after receiving a response to its prepare message from all
participants. Once the coordinator makes the decision, it can’t change its
mind later and has to see to it that the transaction is committed or
aborted, no matter what. If a participant is temporarily down, the
coordinator will keep retrying until the request eventually succeeds.
Two-phase commit has a mixed reputation. It’s slow, as it requires multiple
round trips to complete a transaction and blocks when there is a failure. If
either the coordinator or a participant fails, then all processes part of the
transactions are blocked until the failing process comes back online. On top
of that, the participants need to implement the protocol; you can’t just take
PostgreSQL and Cassandra and expect them to play ball with each other.
But, some types of transactions can take hours to execute, in which case
blocking just isn’t an option. And some transactions don’t need isolation in
the first place. Suppose we were to drop the isolation requirement and the
assumption that the transactions are short-lived. Can we come up with an
asynchronous non-blocking solution that still provides atomicity?
To integrate with the search index, the catalog service needs to update both
the relational database and the search index when a new product is added or
an existing product is modified or deleted. The service could just update the
relational database first, and then the search index; but if the service crashes
before updating the search index, the system would be left in an
inconsistent state. As you can guess by now, we need to wrap the two
updates into a transaction somehow.
We could consider using 2PC, but while the relational database supports the
X/Open XA 2PC standard, the search index doesn’t, which means we
would have to implement that functionality from scratch. We also don’t
want the catalog service to block if the search index is temporarily
unavailable. Although we want the two data stores to be in sync, we can
accept some temporary inconsistencies. In other words, eventual
consistency is acceptable for our use case.
Now, when the catalog service receives a request from a client to create a
new product, rather than writing to the relational database, or the search
index, it appends a product creation message to the message log. The
append acts as the atomic commit step for the transaction. The relational
database and the search index are asynchronous consumers of the message
log, reading entries in the same order as they were appended and updating
their state at their own pace (see Figure 11.3). Because the message log is
ordered, it guarantees that the consumers see the entries in the same order.
Figure 11.3: The producer appends entries at the end of the log, while the
consumers read the entries at their own pace.
The consumers periodically checkpoint the index of the last message they
processed. If a consumer crashes and comes back online after some time, it
reads the last checkpoint and resumes reading messages from where it left
off. Doing so ensures there is no data loss even if the consumer was offline
for some time.
But, there is a problem as the consumer can potentially read the same
message multiple times. For example, the consumer could process a
message and crash before checkpointing its state. When it comes back
online, it will eventually re-read the same message. Therefore, messages
need to be idempotent so that no matter how many times they are read, the
effect should be the same as if they had been processed only once. One way
to do that is to decorate each message with a unique ID and ignore
messages with duplicate IDs at read time.
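The sketch below illustrates the idea with an in-memory list standing in for the message log; the duplicate entry is skipped because its ID has already been seen, and the checkpoint records how far the consumer has read:

# Illustration only: a real consumer would read from a durable log and persist
# its checkpoint and de-duplication state.
log = [
    {"id": "a1", "op": "create", "product": 42},
    {"id": "a1", "op": "create", "product": 42},  # duplicate delivery
    {"id": "b2", "op": "delete", "product": 42},
]

checkpoint = 0        # loaded from durable storage on startup
processed_ids = set()

for index in range(checkpoint, len(log)):
    message = log[index]
    if message["id"] in processed_ids:
        continue                   # already applied: skip the duplicate
    print("applying", message)     # stand-in for updating the data store
    processed_ids.add(message["id"])
    checkpoint = index + 1         # persisted periodically in practice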
A message channel acts as a temporary buffer for the receiver. Unlike the
direct request-response communication style we have been using so far,
messaging is inherently asynchronous as sending a message doesn’t require
the receiving service to be online.
11.4.2 Sagas
Suppose we own a travel booking service. To book a trip, the travel service
has to atomically book a flight through a dedicated service and a hotel
through another. However, either of these services can fail their respective
requests. If one booking succeeds, but the other fails, then the former needs
to be canceled to guarantee atomicity. Hence, booking a trip requires
multiple steps to complete, some of which are only required in case of
failure. Since appending a single message to a log is no longer sufficient to
commit the transaction, we can’t use the simple log-oriented solution
presented earlier.
The Saga pattern provides a solution to this problem. A saga is a distributed
transaction composed of a set of local transactions T1, T2, ..., Tn, where each Ti has a corresponding compensating local transaction Ci that undoes its changes. The Saga guarantees that either all local transactions succeed, or in
case of failure, that the compensating local transactions undo the partial
execution of the transaction altogether. This guarantees the atomicity of the
protocol; either all local transactions succeed, or none of them do. A Saga
can be implemented with an orchestrator, the transaction’s coordinator, that
manages the execution of the local transactions across the processes
involved, the transaction’s participants.
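To make the idea concrete, here is a minimal sketch of an orchestrator for the trip-booking example; the flight and hotel clients, and their cancel operations, are hypothetical stand-ins for the participants:

def book_trip(flight_client, hotel_client, trip):
    """Runs the local transactions in order and compensates on failure."""
    compensations = []
    try:
        flight_id = flight_client.book(trip)                             # T1
        compensations.append(lambda: flight_client.cancel(flight_id))    # C1
        hotel_id = hotel_client.book(trip)                               # T2
        compensations.append(lambda: hotel_client.cancel(hotel_id))      # C2
        return flight_id, hotel_id
    except Exception:
        # Undo the partial execution in reverse order, then surface the failure.
        for compensate in reversed(compensations):
            compensate()
        raise

A production orchestrator would also durably checkpoint its progress so that it can resume the compensating transactions if it crashes halfway through.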
At a high level, the Saga can be implemented with the workflow depicted in
Figure 11.5:
A scalable application can increase its capacity as its load increases. The
simplest way to do that is by scaling up and running the application on
more expensive hardware, but that only gets you so far since the
application will eventually reach a performance ceiling.
Functional decomposition
Section 12.3 discusses how to decouple an API’s read path from its write
path so that their respective implementations can use different technologies
that fit their specific use cases.
Section 12.4 dives into asynchronous messaging channels that decouple
producers on one end of a channel from consumers on the other end.
Thanks to channels, communication between two parties is possible even if
the destination is temporarily not available. Messaging provides several
other benefits, which we will explore in this section, along with best
practices and pitfalls you can run into.
Partitioning
Duplication
Section 14.1 introduces the concept of load balancing requests across nodes
and its implementation using commodity machines. We will start with DNS
load balancing and then dive into the implementation of load balancers that
operate at the transport and application layer of the network stack. Finally,
we will discuss geo load balancing that allows clients to communicate with
the geographically closest datacenter.
Section 14.2 describes how to replicate data across nodes and keep it in
sync. Although we have already discussed one way of doing that with Raft
in chapter 10, in this section, we will take a broader look at the topic and
explore different approaches with varying trade-offs (single-leader, multi-
leader, and leaderless).
Section 14.3 discusses the benefits and pitfalls of caching. We will start by
discussing in-process caches first, which are easy to implement but have
several pitfalls. Finally, we will look at the pros and cons of external
caches.
12 Functional decomposition
12.1 Microservices
An application typically starts its life as a monolith. Take a modern backend
of a single-page JavaScript application (SPA), for example. It might start
out as a single stateless web service that exposes a RESTful HTTP API and
uses a relational database as a backing store. The service is likely to be
composed of a number of components or libraries that implement different
business capabilities, as shown in Figure 12.1.
12.1.2 Costs
Development experience
Nothing forbids the use of different languages, libraries, and datastores in
each microservice, but doing so transforms the application into an
unmaintainable mess. For example, it becomes more challenging for a
developer to move from one team to another if the software stack is
completely different. And think of the sheer number of libraries, one for
each language adopted, that need to be supported to provide common
functionality that all services need, like logging.
Resource provisioning
Communication
Remote calls are expensive and come with all the caveats we discussed
earlier in the book. You will need defense mechanisms to protect against
failures and leverage asynchrony and batching to mitigate the performance
hit of communicating across the network. All of this increases the system’s
complexity.
Operations
Unlike with a monolith, it’s much more expensive to staff each team
responsible for a service with its own operations team. As a result, the team
that develops a service is typically also on-call for it. This creates friction
between adding new features and operating the service as the team needs to
decide what to prioritize during each sprint.
Eventual consistency
A side effect of splitting an application into separate services is that the data
model no longer resides in a single data store. As we have learned in
previous chapters, atomically updating records stored in different data
stores, and guaranteeing strong consistency, is slow, expensive, and hard to
get right. Hence, this type of architecture usually requires embracing
eventual consistency.
You should only start with a microservice-first approach if you already have
experience with it, and you either have built out a platform for it or have
accounted for the time it will take you to build one.
The API gateway provides multiple features, like routing, composition, and
translation.
12.2.1 Routing
The API gateway can route the requests it receives to the appropriate
backend service. It does so with the help of a routing map, which maps the
external APIs to the internal ones. For example, the map might have a 1:1
mapping between an external path and an internal one. If in the future the
internal path changes, the public API can continue to expose the old path to
guarantee backward compatibility.
12.2.2 Composition
Composition can be hard to get right. The availability of the composed API
decreases as the number of internal calls increases since each has a non-
zero probability of failure. Additionally, the data across the services might
be inconsistent as some updates might not have propagated to all services
yet; in that case, the gateway will have to somehow resolve this
discrepancy.
12.2.3 Translation
The API gateway can translate from one IPC mechanism to another. For
example, it can translate a RESTful HTTP request into an internal gRPC
call.
The gateway can also expose different APIs to different types of clients. For
example, a web API for a desktop application can potentially return more
data than the one for a mobile application, as the screen real estate is larger and
more information can be presented at once. Also, network calls are
expensive for mobile clients, and requests generally need to be batched to
reduce battery usage.
As the API gateway is a proxy, or middleman, for the services behind it, it
can also implement cross-cutting functionality that otherwise would have to
be re-implemented in each service. For example, the API gateway could
cache frequently accessed resources to improve the API’s performance
while reducing the bandwidth requirements on the services or rate-limit
requests to protect the services from being overwhelmed.
The application uses the session token to retrieve a session object from an
in-memory cache or an external data store. The object contains the
principal’s ID and the roles granted to it, which are used by the
application’s API handlers to decide whether to allow the principal to
perform an operation or not.
The most popular standard for transparent tokens is the JSON Web Token
(JWT). A JWT is a JSON payload that contains an expiration date, the
principal’s identity, roles, and other metadata. The payload is signed with a
certificate trusted by internal services. Hence, no external calls are needed
to validate the token.
OpenID Connect and OAuth 2 are security protocols that you can use to
implement token-based authentication and authorization. We have barely
scratched the surface on the topic, and there are entire books written on the
subject you can read to learn more about it.
12.2.5 Caveats
The other downside is that the API gateway is one more service that needs
to be developed, maintained, and operated. Also, it needs to be able to scale
to whatever the request rate is for all the services behind it. That said, if an
application has dozens of services and APIs, the upside is greater than the
downside and it’s generally a worthwhile investment.
So how do you go about implementing a gateway? You can roll your own
API gateway, using a proxy framework as a starting point, like NGINX. Or
better yet, you can use an off-the-shelf solution, like Azure API
Management.
12.3 CQRS
The API gateway’s ability to compose internal APIs is quite limited, and
querying data distributed across services can be very inefficient if the
composition requires large in-memory joins.
Accessing data can also be inefficient for reasons that have nothing to do
with using a microservice architecture:
The data store used might not be well suited for specific types of
queries. For example, a vanilla relational data store isn’t optimized for
geospatial queries.
The data store might not scale to handle the number of reads, which
could be several orders of magnitude higher than the number of writes.
In these cases, decoupling the read path from the write path can yield
substantial benefits. This approach is also referred to as the Command
Query Responsibility Segregation (CQRS) pattern.
The two paths can use different data models and data stores that fit their
specific use cases (see Figure 12.5). For example, the read path could use a
specialized data store tailored to a particular query pattern required by the
application, like geospatial or graph-based.
Figure 12.5: In this example, the read and write paths are separated out into
different services.
To keep the read and write data models synchronized, the write path pushes
updates to the read path whenever the data changes. External clients could
still use the write path for simple queries, but complex queries are routed to
the read path.
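As a rough sketch of the pattern, the write path below persists a product and publishes a change event, which the read path consumes to refresh a read-optimized store; the relational store, channel, and search index clients are hypothetical:

class WritePath:
    def __init__(self, relational_db, channel):
        self.db = relational_db        # hypothetical relational store
        self.channel = channel         # hypothetical message channel

    def update_product(self, product):
        self.db.upsert("products", product)
        self.channel.publish({"type": "product_updated", "product": product})


class ReadPath:
    def __init__(self, search_index):
        self.index = search_index      # hypothetical read-optimized store, e.g., a search index

    def on_event(self, event):
        if event["type"] == "product_updated":
            self.index.index_document(event["product"])   # keep the read model in sync

    def search(self, query):
        return self.index.query(query)                    # complex queries never touch the write model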
This separation adds more complexity to the system. For example, when the
data model changes, both paths might need to be updated. Similarly,
operational costs increase as there are more moving parts to maintain and
operate. Also, there is an inherent replication lag between the time a change
has been applied on the write path and the read path has received and
applied it, which makes the system sequentially consistent.
12.4 Messaging
When an application is decomposed into services, the number of network
calls increases, and with it, the probability that a request’s destination is
momentarily unavailable. So far, we have mostly assumed services
communicate using a direct request-response communication style, which
requires the destination to be available and respond promptly. Messaging —
a form of indirect communication — doesn’t have this requirement, though.
By decoupling the producer from the consumer, the former gains the ability
to communicate with the latter even if it’s temporarily unavailable.
Messaging provides several other benefits:
Because there is an additional hop between the producer and consumer, the
communication latency is necessarily going to be higher, more so if the
channel has a large backlog of messages waiting to be processed.
Additionally, the system’s complexity increases as there is one more
service, the message broker, that needs to be maintained and operated — as
always, it’s all about tradeoffs.
Any number of producers can write messages to a channel, and similarly,
multiple consumers can read from it. Depending on how the channel
delivers messages to consumers, it can be classified as either point-to-point
or publish-subscribe. In a point-to-point channel, a specific message is
delivered to exactly one consumer. Instead, in a publish-subscribe channel,
a copy of the same message is delivered to all consumers.
One-way messaging
Request-response messaging
Broadcast messaging
12.4.1 Guarantees
A message channel is implemented by a messaging service, like AWS SQS
or Kafka. The messaging service, or broker, acts as a buffer for messages. It
decouples producers from consumers so that they don’t need to know the
consumers’ addresses, how many of them there are, or whether they are
available.
Because a message broker needs to scale out just like the applications that
use it, its implementation is necessarily distributed. And when multiple
nodes are involved, guaranteeing order becomes challenging as some form
of coordination is required. Some brokers, like Kafka, partition a channel
into multiple sub-channels, each small enough to be handled entirely by a
single process. The idea is that if there is a single broker process
responsible for the messages of a sub-channel, then it should be trivial to
guarantee their order.
In this case, when messages are sent to the channel, they are partitioned into
sub-channels based on a partition key. To guarantee that the message order
is preserved end-to-end, only a single consumer process can be allowed to
read from a sub-channel2.
Now you see why not having to guarantee the order of messages makes the
implementation of a broker much simpler. Ordering is just one of the many
tradeoffs a broker needs to make, such as:
delivery guarantees, like at-most-once or at-least-once;
message durability guarantees;
latency;
messaging standards supported, like AMQP;
support for competing consumers;
broker limits, such as the maximum supported size of messages.
Because there are so many different ways to implement channels, in the rest
of this section we will make some assumptions for the sake of simplicity:
The above guarantees are very similar to what cloud services such as
Amazon’s SQS and Azure Storage Queues offer.
If the consumer deletes the message before processing it, there is a risk it
could crash after deleting the message and before processing it, causing the
message to be lost for good. On the other hand, if the consumer deletes the
message only after processing it, there is a risk that the consumer might
crash after processing the message but before deleting it, causing the same
message to be read again later on.
Because of that, there is no such thing as exactly-once message delivery.
The best a consumer can do is to simulate exactly-once message processing
by requiring messages to be idempotent.
12.4.3 Failures
Once you have a way to count the number of times a message has been
retried, you still have to decide what to do when the maximum is reached.
A consumer shouldn’t delete a message without processing it, as that would
cause data loss. But what it can do is remove the message from the channel
after writing it to a dead letter channel — a channel that acts as a buffer for
messages that have been retried too many times.
This way, messages that consistently fail are not lost forever but merely put
on the side so that they don’t pollute the main channel, wasting consumers’
processing resources. A human can then inspect these messages to debug
the failure, and once the root cause has been identified and fixed, move
them back to the main channel to be reprocessed.
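A minimal sketch of that policy, assuming a queue client with receive, delete, and send operations and messages that carry a delivery count (all hypothetical names):

MAX_RETRIES = 5

def consume(queue, dead_letter_queue, handler):
    """Processes messages and parks the ones that keep failing on a dead letter channel."""
    for message in queue.receive():
        try:
            handler(message.body)
            queue.delete(message)                        # delete only after successful processing
        except Exception:
            if message.delivery_count >= MAX_RETRIES:
                dead_letter_queue.send(message.body)     # park it for a human to inspect
                queue.delete(message)                    # remove it from the main channel
            # otherwise, leave it on the channel so it will be redelivered and retried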
12.4.4 Backlogs
One of the main advantages of using a messaging broker is that it makes the
system more robust to outages. Producers can continue to write messages to
a channel even if one or more consumers are not available or are degraded.
As long as the rate at which messages arrive is lower than or equal to the rate at which they are deleted from the channel, everything is great. When that is no
longer true, and consumers can’t keep up with producers, a backlog starts to
build up.
To detect backlogs, you should measure the average time a message waits
in the channel to be read for the first time. Typically, brokers attach to each message a timestamp of when it was first written to the channel. The consumer can
use that timestamp to compute how long the message has been waiting in
the channel by comparing it to the timestamp taken when the message was
read. Although the two timestamps have been generated by two physical
clocks that aren’t perfectly synchronized (see section 8.1), the measure still
provides a good indication of the backlog.
Transmitting a large binary object (blob) like images, audio files, or video
can be challenging or simply impossible, depending on the medium. For
example, message brokers limit the maximum size of messages that can be
written to a channel; Azure Storage queues limit messages to 64 KB, AWS
Kinesis to 1 MB, etc. So how do you transfer large blobs of hundreds of
MBs with these strict limits?
You can upload a blob to an object storage service, like AWS S3 or Azure
Blob Storage, and then send the URL of the blob via message (this pattern
is sometimes referred to as queue plus blob). The downside is that now you
have to deal with two services, the message broker and the object store,
rather than just the message broker, which increases the system’s
complexity.
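For example, with AWS this could look like the following sketch (the bucket name and queue URL are placeholders):

import json
import uuid
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

def send_large_payload(blob, bucket="my-blob-bucket", queue_url="https://my-queue-url"):
    # Upload the blob to the object store and send only a reference to it over the channel.
    key = str(uuid.uuid4())
    s3.put_object(Bucket=bucket, Key=key, Body=blob)
    sqs.send_message(QueueUrl=queue_url,
                     MessageBody=json.dumps({"bucket": bucket, "key": key}))
    return key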
Of course, the downside is that you lose the ability to transactionally update
the blob with its metadata and potentially other records in the data store.
For example, suppose a transaction inserts a new record in the data store
containing an image. In this case, the image won’t be visible until the
transaction completes; that won’t be the case if the image is stored in an
external store, though. Similarly, if the record is later deleted, the image is
automatically deleted as well; but if the image lives outside the store, it’s
your responsibility to delete it.
The mapping between keys and partitions, and other metadata, is typically
maintained in a strongly-consistent configuration store, like etcd or
Zookeeper. But how are keys mapped to partitions in the first place? At a
high level, there are two ways to implement the mapping using either range
partitioning or hash partitioning.
With range partitioning, the data is split into partitions by key range in
lexicographical order, and each partition holds a continuous range of keys,
as shown in Figure 13.1. The data can be stored in sorted order on disk
within each partition, making range scans fast.
Figure 13.1: A range partitioned dataset
Splitting the key-range evenly doesn’t make much sense though if the
distribution of keys is not uniform, like in the English dictionary. Doing so
creates unbalanced partitions, some of which contain significantly more entries than others.
Another issue with range partitioning is that some access patterns can lead
to hotspots. For example, if a dataset is range partitioned by date, all writes
for the current day end up in the same partition, which degrades the data
store’s performance.
The idea behind hash partitioning is to use a hash function to assign keys to
partitions, which shuffles — or uniformly distributes — keys across
partitions, as shown in Figure 13.2. Another way to think about it is that the
hash function maps a potentially non-uniformly distributed key space to a
uniformly distributed hash space.
Although this approach ensures that the partitions contain more or less the
same number of entries, it doesn’t eliminate hotspots if the access pattern is
not uniform. If there is a single key that is accessed significantly more often
than others, then all bets are off. In this case, the partition that contains the
hot key needs to be split further. Alternatively, the key needs to be
split into multiple sub-keys, for example, by adding an offset at the end of
it.
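To illustrate, a stable hash partitioner, together with the hypothetical salting of a known hot key into sub-keys, might look like this sketch:

import hashlib
import random

NUM_PARTITIONS = 16

def partition_for(key):
    # Use a stable hash; Python's built-in hash() is randomized per process.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

def salted_key(hot_key, sub_keys=8):
    # Spread a hot key across several sub-keys by appending a random offset to it.
    return "{}-{}".format(hot_key, random.randrange(sub_keys))

# Reads for a salted hot key have to query all of its sub-keys and merge the results.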
For example, with consistent hashing, both the partition identifiers and keys
are randomly distributed on a circle, and each key is assigned to the next
partition that appears on the circle in clockwise order (see Figure 13.3).
Figure 13.3: With consistent hashing, partition identifiers and keys are
randomly distributed on a circle, and each key is assigned to the next
partition that appears on the circle in clockwise order.
Now, when a new partition is added, only the keys mapped to it need to be
reassigned, as shown in Figure 13.4.
Figure 13.4: After partition P4 is added, key ‘for’ is reassigned to P4, but
the other keys are not reassigned.
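The scheme in the figure can be sketched with a sorted ring of partition hashes, where a binary search finds the next partition in clockwise order:

import bisect
import hashlib

def _point(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, partitions):
        # Place each partition identifier at a pseudo-random point on the circle.
        self.ring = sorted((_point(p), p) for p in partitions)

    def partition_for(self, key):
        index = bisect.bisect(self.ring, (_point(key), ""))
        # Wrap around the circle if the key's point lies past the last partition.
        return self.ring[index % len(self.ring)][1]

ring = ConsistentHashRing(["P1", "P2", "P3"])
print(ring.partition_for("for"))    # adding "P4" later only remaps the keys that now hash to it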
Here, the idea is to create way more partitions than necessary when the data
store is first initialized and assign multiple partitions per node. When a new
node joins, some partitions move from the existing nodes to the new one so
that the store is always in a balanced state.
The drawback of this approach is that the number of partitions is set when
the data store is first initialized and can’t be easily changed after that.
Getting the number of partitions wrong can be problematic — too many
partitions add overhead and decrease the data store’s performance, while
too few partitions limit the data store’s scalability.
We have merely scratched the surface on the topic; if you are interested to
learn more about it, I recommend reading Designing Data-Intensive
Applications by Martin Kleppmann.
14 Duplication
Now it’s time to change gears and dive into another tool you have at your
disposal to design horizontally scalable applications — duplication.
Creating more service instances can be a fast and cheap way to scale out a
stateless service, as long as you have taken into account the impact on its
dependencies. For example, if every service instance needs to access a
shared data store, eventually, the data store will become a bottleneck, and
adding more service instances to the system will only strain it further.
Distributing requests across servers has many benefits. Because clients are
decoupled from servers and don’t need to know their individual addresses,
the number of servers behind the LB can be increased or reduced
transparently. And since multiple redundant servers can interchangeably be
used to handle requests, a LB can detect faulty ones and take them out of
the pool, increasing the service’s availability.
The algorithms used for routing requests can vary from simple round-robin
to more complex ones that take into account the servers’ load and health.
There are several ways for a LB to infer the load of the servers. For
example, the LB could periodically hit a dedicated load endpoint of each
server that returns a measure of how busy the server is (e.g., CPU usage).
Hitting the servers constantly can be very costly though, so typically a LB
caches these measures for some time.
Actually, there is a way, but it requires combining load metrics with the
power of randomness. The idea is to randomly pick two servers from the
pool and route the request to the least-loaded one of the two. This approach
works remarkably well as it combines delayed load information with the
protection against herding that randomness provides.
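A sketch of this "power of two choices" policy, assuming each server object exposes a possibly stale load measure:

import random
from collections import namedtuple

Server = namedtuple("Server", ["address", "load"])   # load could be CPU usage, in-flight requests, etc.

def pick_server(pool):
    """Samples two servers at random and routes to the least loaded of the two."""
    first, second = random.sample(pool, 2)
    return first if first.load <= second.load else second

pool = [Server("10.0.0.1", 0.7), Server("10.0.0.2", 0.3), Server("10.0.0.3", 0.5)]
print(pick_server(pool).address)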
Service Discovery
Health checks
Health checks are used by the LB to detect when a server can no longer
serve requests and needs to be temporarily removed from the pool. There
are fundamentally two categories of health checks: passive and active.
Now that we know what a load balancer’s job is, let’s take a closer look at
how it can be implemented. While you probably won’t have to build your
own LB given the plethora of off-the-shelf solutions available, a basic
knowledge of how load balancing works is crucial. LB failures are very
visible to your services’ clients since they tend to manifest themselves as
timeouts and connection resets. Because the LB sits between your service
and its clients, it also contributes to the end-to-end latency of request-
response transactions.
The most basic form of load balancing can be implemented with DNS.
Suppose you have a couple of servers that you would like to load balance
requests over. If these servers have publicly-reachable IP addresses, you can
add those to the service’s DNS record and have the clients pick one when
resolving the DNS address, as shown in Figure 14.1.
Although this works, it doesn’t deal well with failures. If one of the two
servers goes down, the DNS server will happily continue serving its IP
address unaware of the failure. You can manually reconfigure the DNS
record to take out the problematic IP, but as we have learned in chapter 4,
changes are not applied immediately due to the nature of DNS caching.
When a client creates a new TCP connection with a LB’s VIP, the LB picks
a server from the pool and henceforth shuffles the packets back and forth
for that connection between the client and the server. How does the LB
assign connections to the servers, though?
Because the data going out of the servers usually has a greater volume than the data coming in, servers can bypass the LB and respond directly to the clients using a mechanism called direct server return, but this
is beyond the scope of this section.
There are two different TCP connections at play here, one between the
client and the L7 LB and another between the L7 LB and the server.
Because a L7 LB operates at the HTTP level, it can de-multiplex individual
HTTP requests sharing the same TCP connection. This is even more
important with HTTP 2, where multiple concurrent streams are multiplexed
on the same TCP connection, and some connections can be several orders
of magnitude more expensive to handle than others.
For example, the LB could use a specific cookie to identify which logical
session a specific request belongs to. Just like with a L4 LB, the session
identifier can be mapped to a server using consistent hashing. The caveat is
that sticky sessions can create hotspots as some sessions are more expensive
to handle than others.
The sidecar processes form the data plane of a service mesh, which is
configured by a corresponding control plane. This approach has been
gaining popularity with the rise of microservices in organizations that have
hundreds of services communicating with each other. Popular sidecar proxy
load balancers as of this writing are NGINX, HAProxy, and Envoy. The
advantage of using this approach is that it distributes the load-balancing
functionality to the clients, removing the need for a dedicated service that
needs to be scaled out and maintained. The con is a significant increase in
the system’s complexity.
14.2 Replication
If the servers behind a load balancer are stateless, scaling out is as simple as
adding more servers. But when there is state involved, some form of
coordination is required.
Replication and sharding are techniques that are often combined, but are
orthogonal to each other. For example, a distributed data store can divide its
data into N partitions and distribute them over K nodes. Then, a state-
machine replication algorithm like Raft can be used to replicate each
partition R times (see Figure 14.5).
Figure 14.5: A replicated and partitioned data store. A node can be the
replication leader for a partition while being a follower for another one.
We have already discussed one way of replicating data in chapter 10. This
section will take a broader, but less detailed, look at replication and explore
different approaches with varying trade-offs. To keep things simple, we will
assume that the dataset is small enough to fit on a single node, and therefore
no partitioning is needed.
The most common approach to replicate data is the single leader, multiple
followers/replicas approach (see Figure 14.6). In this approach, the clients
send writes exclusively to the leader, which updates its local state and
replicates the change to the followers. We have seen an implementation of
this when we discussed the Raft replication algorithm.
Figure 14.6: Single leader replication
At a high level, the replication can happen either fully synchronously, fully
asynchronously, or as a combination of the two.
Asynchronous replication
In this mode, when the leader receives a write request from a client, it
asynchronously sends out requests to the followers to replicate it and replies
to the client before the replication has been completed.
Although this approach is fast, it’s not fault-tolerant. What happens if the
leader crashes right after accepting a write, but before replicating it to the
followers? In this case, a new leader could be elected that doesn’t have the
latest updates, leading to data loss, which is one of the worst possible trade-
offs you can make.
Synchronous replication
In multi-leader replication, there is more than one node that can accept
writes. This approach is used when the write throughput is too high for a
single node to handle, or when a leader needs to be available in multiple
data centers to be geographically closer to its clients.
The simplest strategy is to design the system so that conflicts are not
possible; this can be achieved under some circumstances if the data has a
homing region. For example, if all the European customer requests are
always routed to the European data center, which has a single leader, there
won’t be any conflicting writes. There is still the possibility of a data center
going down, but that can be mitigated with a backup data center in the same
region, replicated with single-leader replication.
One way to deal with a conflict updating a record is to store the concurrent
writes and return them to the next client that reads the record. The client
will try to resolve the conflict and update the data store with the resolution.
In other words, the data store “pushes the can down the road” to the clients.
The data store could use the timestamps of the writes and let the most
recent one win. This is generally not reliable because the nodes’
physical clocks aren’t perfectly synchronized. Logical clocks are better
suited for the job in this case.
The data store could allow the client to upload a custom conflict
resolution procedure, which can be executed by the data store
whenever a conflict is detected.
Finally, the data store could leverage data structures that provide
automatic conflict resolution, like a conflict-free replicated data type
(CRDT). CRDTs are data structures that can be replicated across
multiple nodes, allowing each replica to update its local version
independently from others while resolving inconsistencies in a
mathematically sound way.
For this to work, a basic invariant needs to be satisfied. Suppose the data
store has N replicas. When a client sends a write request to the replicas, it
waits for at least W replicas to acknowledge it before moving on. And when
it reads an entry, it does so by querying R replicas and taking the most
recent one from the response set. Now, as long as W + R > N, the write set and the read set intersect, which guarantees that at least one record in the read set will reflect the latest write.
The writes are always sent to all N replicas in parallel; the W parameter
determines just the number of responses the client has to receive to
complete the request. The data store’s read and write throughput depend on
how large or small R and W are. For example, a workload with many reads
benefits from a smaller R, but in turn, that makes writes slower and less
available.
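As a sketch, with N = 3 replicas, W = 2 and R = 2 satisfy W + R > N; a quorum read queries R replicas and returns the freshest response, assuming each replica tags records with a version number:

N, W, R = 3, 2, 2
assert W + R > N    # the write set and the read set are guaranteed to intersect

def quorum_read(replicas, key):
    """Returns the record with the highest version among R responses."""
    # In practice, the client queries all replicas in parallel and waits for the first R responses.
    responses = [replica.get(key) for replica in replicas[:R]]   # hypothetical replica clients
    return max(responses, key=lambda record: record["version"])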
14.3 Caching
Let’s take a look now at a very specific type of replication that only offers
best effort guarantees: caching.
14.3.1 Policies
When a cache miss occurs3, the missing data item has to be requested from
the remote dependency, and the cache has to be updated with it. This can
happen in two ways:
A cache also has an expiration policy that dictates for how long to store an
entry. For example, a simple expiration policy defines the maximum time to
live (TTL) in seconds. When a data item has been in the cache for longer
than its TTL, it expires and can safely be evicted.
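A minimal in-process cache with a TTL-based expiration policy might look like this sketch:

import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = {}                          # key -> (value, expiration time)

    def get(self, key):
        value, expires_at = self.entries.get(key, (None, 0))
        if time.monotonic() < expires_at:
            return value                           # cache hit
        self.entries.pop(key, None)                # expired entries can safely be evicted
        return None                                # cache miss: the caller fetches from the dependency

    def put(self, key, value):
        self.entries[key] = (value, time.monotonic() + self.ttl)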
Because the external cache is shared among the service instances, there can
be only a single version of each data item at any given time. And although
the cached item can be out-of-date, every client accessing the cache will see
the same version, which reduces consistency issues. The load on the
dependency is also reduced since the number of times an entry is accessed
no longer grows as the number of clients increases.
Figure 14.9: Out-of-process cache
Maintaining an external cache comes with a price as it’s yet another service
that needs to be maintained and operated. Additionally, the latency to access
it is higher than accessing an in-process cache because a network call is
required.
If the external cache is down, how should the service react? You would
think it might be okay to temporarily bypass the cache and directly hit the
dependency. But in that case, the dependency might not be prepared to
withstand a surge of traffic since it’s usually shielded by the cache.
Consequently, the external cache becoming unavailable could easily cause a
cascading failure resulting in the dependency to become unavailable as
well.
3. A cache hit occurs when the requested data can be found in the cache,
while a cache miss occurs when it cannot.↩
This nasty behavior is caused by cruel math; given an operation that has a
certain probability of failing, the total number of failures increases with the
total number of operations performed. In other words, the more you scale
out your system to handle more load, and the more operations and moving
parts there are, the more failures your systems will experience.
Chapter 16 dives into resiliency patterns that help shield a service against
failures in downstream dependencies, like timeouts, retries, and circuit
breakers.
Chapter 17 discusses resiliency patterns that help protect a service against
upstream pressure, like load shedding, load leveling, and rate-limiting.
15 Common failure causes
In order to protect your systems against failures, you first need to have an
idea of what can go wrong. The most common failures you will encounter
are caused by single points of failure, the network being unreliable, slow
processes, and unexpected load. Let’s take a closer look at these.
Slow network calls are the silent killers of distributed systems. Because the
client doesn’t know whether the response is on its way or not, it can spend a
long time waiting before giving up, if it gives up at all. The wait can in turn
cause degradations that are extremely hard to debug. In chapter 16 we will
explore ways to protect clients from the unreliability of the network.
Memory is just one of the many resources that can leak. For example, if you
are using a thread pool, you can lose a thread when it blocks on a
synchronous call that never returns. If a thread makes a synchronous
blocking HTTP call without setting a timeout, and the call never returns, the
thread won’t be returned to the pool. Since the pool has a fixed size and
keeps losing threads, it will eventually run out of threads.
You might think that making asynchronous calls, rather than synchronous
ones, would help in the previous case. However, modern HTTP clients use socket pools to avoid recreating TCP connections and paying a hefty performance fee, as discussed in chapter 2. If a request is made without a
timeout, the connection is never returned to the pool. As the pool has a
limited size, eventually there won’t be any connections left.
On top of that, the code you write isn’t the only one accessing memory,
threads, and sockets. The libraries your application depends on access the
same resources, and they can do all kinds of shady things. Without digging
into their implementation, assuming it’s open in the first place, you can’t be
sure whether they can wreak havoc or not.
For example, suppose there are multiple clients querying two database
replicas A and B, which are behind a load balancer. Each replica is handling
about 50 transactions per second (see Figure 15.1).
Figure 15.1: Two replicas behind an LB; each is handling half the load.
Figure 15.2: When replica B becomes unavailable, A will be hit with more
load, which can strain it beyond its capacity.
As replica A starts to struggle to keep up with the incoming requests, the
clients experience more failures and timeouts. In turn, they retry the same
failing requests several times, adding insult to injury.
Cascading failures are very hard to get under control once they have started.
The best way to mitigate one is to not have it in the first place. The patterns
introduced in the next chapters will help you stop the cracks in the system
from spreading.
To address a failure, you can either find a way to reduce the probability of it
happening, or reduce its impact.
1. These techniques might look simple but are very effective. During the
COVID-19 outbreak, I have witnessed many of the systems I was
responsible for at the time doubling traffic nearly overnight without
causing any incidents.↩
16 Downstream resiliency
In this chapter, we will explore patterns that shield a service against failures
in its downstream dependencies.
16.1 Timeout
When you make a network call, you can configure a timeout to fail the
request if there is no response within a certain amount of time. If you make
the call without setting a timeout, you tell your code that you are 100%
confident that the call will succeed. Would you really take that bet?
Unfortunately, some network APIs don’t have a way to set a timeout in the
first place. When the default timeout is infinity, it’s all too easy for a client
to shoot itself in the foot. As mentioned earlier, network calls that don’t
return lead to resource leaks at best. Timeouts limit and isolate failures,
stopping them from cascading to the rest of the system. And they are useful
not just for network calls, but also for requesting a resource from a pool and
for synchronization primitives like mutexes.
To drive the point home on the importance of setting timeouts, let’s take a
look at some concrete examples. JavaScript’s XMLHttpRequest is the web
API to retrieve data from a server asynchronously. Its default timeout is
zero, which means there is no timeout:
var xhr = new XMLHttpRequest();
xhr.open('GET', '/api', true);
// No timeout by default!
xhr.timeout = 10000;
xhr.onload = function () {
// Request finished
};
xhr.ontimeout = function (e) {
// Request timed out
};
xhr.send(null);
Client-side timeouts are as crucial as server-side ones. There is a maximum
number of sockets your browser can open for a particular host. If you make
network requests that never return, you are going to exhaust the socket
pool. When the pool is exhausted, you are no longer able to connect to the
host.
The fetch web API is a modern replacement for XMLHttpRequest that uses
Promises. When the fetch API was initially introduced, there was no way to
set a timeout at all. Browsers have recently added experimental support for
the Abort API to support timeouts.
const controller = new AbortController();
const signal = controller.signal;
const fetchPromise = fetch(url, {signal});
// No timeout by default!
setTimeout(() => controller.abort(), 10000);
fetchPromise.then(response => {
  // Request finished
}).catch(error => {
  if (error.name === 'AbortError') {
    // Request timed out
  }
});
Things aren’t much rosier for Python. The popular requests library uses a
default timeout of infinity:
import requests

# No timeout by default!
response = requests.get('https://github.com', timeout=10)
Modern HTTP clients for Java and .NET do a much better job and usually
come with default timeouts. For example, .NET Core HttpClient has a
default timeout of 100 seconds. It’s lax but better than not setting a timeout
at all.
As a rule of thumb, always set timeouts when making network calls, and be
wary of third-party libraries that do network calls or use internal resource
pools but don’t expose settings for timeouts. And if you build libraries,
always set reasonable default timeouts and make them configurable for
your clients.
Ideally, you should set your timeouts based on the desired false timeout
rate. Say you want to have about 0.1% false timeouts; to achieve that, you
should set the timeout to the 99.9th percentile of the remote call’s response
time, which you can measure empirically.
You also want to have good monitoring in place to measure the entire
lifecycle of your network calls, like the duration of the call, the status code
received, and if a timeout was triggered. We will talk about monitoring later
in the book, but the point I want to make here is that you have to measure
what happens at the integration points of your systems, or you won’t be able
to debug production issues when they show up.
Ideally, you want to encapsulate a remote call within a library that sets
timeouts and monitors it for you so that you don’t have to remember to do
this every time you make a network call. No matter which language you
use, there is likely a library out there that implements some of the resiliency
and transient fault-handling patterns introduced in this chapter, which you
can use to encapsulate your system’s network calls.
Using a language-specific library is not the only way to wrap your network
calls; you can also leverage a reverse proxy co-located on the same machine
which intercepts all the remote calls that your process makes1. The proxy
enforces timeouts and also monitors the calls, relinquishing your process
from the responsibility to do so.
16.2 Retry
You know by now that a client should configure a timeout when making a
network request. But, what should it do when the request fails, or the
timeout fires? The client has two options at that point: it can either fail fast
or retry the request at a later time.
If the failure or timeout was caused by a short-lived connectivity issue, then
retrying after some backoff time has a high probability of succeeding.
However, if the downstream service is overwhelmed, retrying immediately
will only make matters worse. This is why retrying needs to be slowed
down with increasingly longer delays between the individual retries until
either a maximum number of retries is reached or a certain amount of time
has passed since the initial request.
To set the delay between retries, you can use a capped exponential function,
where the delay is derived by multiplying the initial backoff duration by a
constant after each attempt, up to some maximum value (the cap):
delay = min(cap, initial-backoff ⋅ 2^attempt)
For example, if the cap is set to 8 seconds, and the initial backoff duration is
2 seconds, then the first retry delay is 2 seconds, the second is 4 seconds,
the third is 8 seconds, and any further delay will be capped to 8 seconds.
To avoid this herding behavior, you can introduce random jitter in the delay
calculation. With it, the retries spread out over time, smoothing out the load
to the downstream service:
delay = random(0, min(cap, initial-backoff ⋅ 2^attempt))
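In Python, a retry helper implementing capped exponential backoff with jitter could be sketched as follows:

import random
import time

def retry(operation, max_attempts=5, initial_backoff=2.0, cap=8.0):
    """Retries an operation with capped exponential backoff and full jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise                               # give up and propagate the failure
            delay = random.uniform(0, min(cap, initial_backoff * 2 ** attempt))
            time.sleep(delay)

# e.g., retry(lambda: requests.get('https://github.com', timeout=10))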
Actively waiting and retrying failed network requests isn’t the only way to
implement retries. In batch applications that don’t have strict real-time
requirements, a process can park failed requests into a retry queue. The
same process, or possibly another, reads from the same queue later and
retries the requests.
Just because a network call can be retried doesn’t mean it should be. If the
error is not short-lived, for example, because the process is not authorized
to access the remote endpoint, then it makes no sense to retry the request
since it will fail again. In this case, the process should fail fast and cancel
the call right away.
You should also not retry a network call that isn’t idempotent, and whose
side effects can affect your application’s correctness. Suppose a process is
making a call to a payment provider service, and the call times out; should
it retry or not? The operation might have succeeded and retrying would
charge the account twice, unless the request is idempotent.
Having retries at multiple levels of the dependency chain can amplify the
number of retries; the deeper a service is in the chain, the higher the load it
will be exposed to due to the amplification (see Figure 16.2).
And if the pressure gets bad enough, this behavior can easily bring down
the whole system. That’s why when you have long dependency chains, you
should only retry at a single level of the chain, and fail fast in all the other
ones.
Unlike retries, circuit breakers prevent network calls entirely, which makes
the pattern particularly useful for long-term degradations. In other words,
retries are helpful when the expectation is that the next call will succeed,
while circuit breakers are helpful when the expectation is that the next call
will fail.
In the closed state, the circuit breaker is merely acting as a pass-through for
network calls. In this state, the circuit breaker tracks the number of failures,
like errors and timeouts. If the number goes over a certain threshold within
a predefined time-interval, the circuit breaker trips and opens the circuit.
When the circuit is open, network calls aren’t attempted and fail
immediately. As an open circuit breaker can have business implications,
you need to think carefully what should happen when a downstream
dependency is down. If the downstream dependency is non-critical, you
want your service to degrade gracefully, rather than to stop entirely.
That’s really all there is to understand how a circuit breaker works, but the
devil is in the details. How many failures are enough to consider a
downstream dependency down? How long should the circuit breaker wait to
transition from the open to the half-open state? It really depends on your
specific case; only by using data about past failures can you make an
informed decision.
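Here is a minimal sketch of a circuit breaker with the three states just described; the thresholds are arbitrary placeholders that you would tune using data about past failures:

import time

class CircuitBreaker:
    def __init__(self, failure_threshold=5, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0

    def call(self, operation):
        if self.state == "open":
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.state = "half-open"                  # let a trial call through
        try:
            result = operation()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"                   # trip the breaker
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"                         # the call succeeded, close the circuit
        return result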
The operating system has a connection queue per port with a limited
capacity that, when reached, causes new connection attempts to be rejected
immediately. But typically, under extreme load, the server grinds to a halt before that limit is reached as it runs out of resources like memory,
threads, sockets, or files. This causes the response time to increase to the
point the server becomes unavailable to the outside world.
The definition of overload depends on your system, but the general idea is
that it should be measurable and actionable. For example, the number of
concurrent requests being processed is a good candidate to measure a
server’s load; all you have to do is to increment a counter when a new
request comes in and decrease it when the server has processed it and sent
back a response to the client.
When the server detects that it’s overloaded, it can reject incoming requests
by failing fast and returning a 503 (Service Unavailable) status code in the
response. This technique is also referred to as load shedding. The server
doesn’t necessarily have to reject arbitrary requests though; for example, if
different requests have different priorities, the server could reject only the
lower-priority ones.
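A sketch of load shedding based on a concurrent request counter; the threshold is a placeholder, and the web framework wiring is left out:

import threading

MAX_CONCURRENT_REQUESTS = 100    # placeholder threshold, ideally derived from load tests
_in_flight = 0
_lock = threading.Lock()

def handle(request, process):
    global _in_flight
    with _lock:
        if _in_flight >= MAX_CONCURRENT_REQUESTS:
            return 503, "Service Unavailable"    # shed the load by failing fast
        _in_flight += 1
    try:
        return 200, process(request)
    finally:
        with _lock:
            _in_flight -= 1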
The idea is to introduce a messaging channel between the clients and the
service. The channel decouples the load directed to the service from its
capacity, allowing the service to process requests at its own pace — rather
than requests being pushed to the service by the clients, they are pulled by
the service from the channel. This pattern is referred to as load leveling and
it’s well suited to fend off short-lived spikes, which the channel smoothes
out (see Figure 17.1).
Figure 17.1: The channel smooths out the load for the consuming service.
17.3 Rate-limiting
Rate-limiting, or throttling, is a mechanism that rejects a request when a
specific quota is exceeded. A service can have multiple quotas, like for the
number of requests seen, or the number of bytes received within a time
interval. Quotas are typically applied to specific users, API keys, or IP
addresses.
For example, if a service with a quota of 10 requests per second, per API
key, receives on average 12 requests per second from a specific API key, it
will, on average, reject 2 requests per second tagged with that API key.
If the client application plays by the rules, it stops hammering the service
for some time, protecting it from non-malicious users monopolizing it by
mistake. This protects against bugs in the clients that, for one reason or
another, cause a client to repeatedly hit a downstream service for no good
reason.
Rate-limiting is also used to enforce pricing tiers; if a user wants to use
more resources, they also need to be prepared to pay more. This is how you
can offload your service’s cost to your users: have them pay proportionally
to their usage and enforce pricing tiers with quotas.
You would think that rate-limiting also offers strong protection against a
distributed denial-of-service (DDoS) attack, but it only partially protects a service from
it. Nothing forbids throttled clients from continuing to hammer a service
after getting 429s. And no, rate-limited requests aren’t free either — for
example, to rate-limit a request by API key, the service has to pay the price
to open a TLS connection and, at the very least, download part of the
request to read the key. Although rate-limiting doesn’t fully protect against
DDoS attacks, it does help reduce their impact.
Economies of scale are the only true protection against DDoS attacks. If
you run multiple services behind one large frontend service, no matter
which of the services behind it are attacked, the frontend service will be
able to withstand the attack by rejecting the traffic upstream. The beauty of
this approach is that the cost of running the frontend service is amortized
across all the services that are using it.
Suppose we want to enforce a quota of 2 requests per minute, per API key.
A naive approach would be to use a doubly-linked list per API key, where
each list stores the timestamps of the last N requests received. Every time a
new request comes in, an entry is appended to the list with its
corresponding timestamp. Then periodically, entries older than a minute are
purged from the list.
By keeping track of the list’s length, the process can rate-limit incoming requests by comparing it with the quota. The problem with this approach is
that it requires a list per API key, which becomes quickly expensive in
terms of memory as it grows with the number of requests received.
Figure 17.2: Buckets divide time into 1-minute intervals, which keep track
of the number of requests seen.
A bucket contains a numerical counter. When a new request comes in, its
timestamp is used to determine the bucket it belongs to. For example, if a
request arrives at 12.00.18, the counter of the bucket for minute “12.00” is
incremented by 1 (see Figure 17.3).
Figure 17.3: When a new request comes in, its timestamp is used to
determine the bucket it belongs to.
The sliding window represents the interval of time used to decide whether
to rate-limit or not. The window’s length depends on the time unit used to
define the quota, which in our case is 1 minute. But, there is a caveat: a
sliding window can overlap with multiple buckets. To derive the number of
requests under the sliding window, we have to compute a weighted sum of
the bucket’s counters, where each bucket’s weight is proportional to its
overlap with the sliding window (see Figure 17.4).
Figure 17.4: A bucket’s weight is proportional to its overlap with the sliding
window.
We only have to store as many buckets as the sliding window can overlap
with at any given time. For example, with a 1-minute window and a 1-
minute bucket length, the sliding window can touch at most 2 buckets. And
if it can touch at most two buckets, there is no point in storing the third oldest
bucket, the fourth oldest one, and so on.
To summarize, this approach requires two counters per API key, which is
much more efficient in terms of memory than the naive implementation
storing a list of requests per API key.
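The two-bucket scheme can be sketched as follows; is_allowed computes the weighted sum of the previous and current bucket for an API key and compares it with the quota:

import time
from collections import defaultdict

QUOTA = 2            # requests per minute, per API key
BUCKET_LENGTH = 60   # seconds

buckets = defaultdict(dict)   # api_key -> {bucket start time -> request count}

def is_allowed(api_key, now=None):
    now = time.time() if now is None else now
    current_start = int(now // BUCKET_LENGTH) * BUCKET_LENGTH
    previous_start = current_start - BUCKET_LENGTH
    counters = buckets[api_key]

    # The previous bucket's weight is proportional to its overlap with the sliding window.
    overlap = 1 - (now - current_start) / BUCKET_LENGTH
    weighted = counters.get(previous_start, 0) * overlap + counters.get(current_start, 0)
    if weighted >= QUOTA:
        return False

    counters[current_start] = counters.get(current_start, 0) + 1
    for start in list(counters):
        if start < previous_start:
            del counters[start]   # the sliding window can no longer touch older buckets
    return True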
When more than one process accepts requests, the local state no longer cuts
it, as the quota needs to be enforced on the total number of requests per API
key across all service instances. This requires a shared data store to keep
track of the number of requests seen.
As discussed earlier, we need to store two integers per API key, one for
each bucket. When a new request comes in, the process receiving it could
fetch the bucket, update it and write it back to the data store. But, that
wouldn’t work because two processes could update the same bucket
concurrently, which would result in a lost update. To avoid any race
conditions, the fetch, update, and write operations need to be packaged into
a single transaction.
Although this approach is functionally correct, it’s costly. There are two
issues here: transactions are slow, and executing one per request would be
crazy expensive as the database would have to scale linearly with the
number of requests. On top of that, for each request a process receives, it
needs to do an outgoing call to a remote data store. What should it do if it
fails?
Let’s address these issues. Rather than using transactions, we can use a
single atomic get-and-increment operation that most data stores provide.
Alternatively, the same can be emulated with a compare-and-swap. Atomic
operations have much better performance than transactions.
Now, rather than updating the database on each request, the process can
batch bucket updates in memory for some time, and flush them
asynchronously to the database at the end of it (see Figure 17.5). This
reduces the shared state’s accuracy, but it’s a good trade-off as it reduces the
load on the database and the number of requests sent to it.
Figure 17.5: Servers batch bucket updates in memory for some time, and
flush them asynchronously to the database at the end of it.
17.4 Bulkhead
The goal of the bulkhead pattern is to isolate a fault in one part of a service
from taking the entire service down with it. The pattern is named after the
partitions of a ship’s hull. If one partition is damaged and fills up with
water, the leak is isolated to that partition and doesn’t spread to the rest of
the ship.
Some clients can create much more load on a service than others. Without
any protections, a single greedy client can hammer the system and degrade
every other client. We have seen some patterns, like rate-limiting, that help
prevent a single client from using more resources than it should. But rate-
limiting is not bulletproof. You can rate-limit clients based on the number of
requests per second; but what if a client sends very heavy or poisonous
requests that cause the servers to degrade? In that case, rate-limiting
wouldn’t help much as the issue is intrinsic with the requests sent by that
client, which could eventually lead to degrading the service for every other
client.
When everything else fails, the bulkhead pattern provides guaranteed fault
isolation by design. The idea is to partition a shared resource, like a pool of
service instances behind a load balancer, and assign each user of the service
to a specific partition so that its requests can only utilize resources
belonging to the partition it’s assigned to.
Figure 17.7: Virtual partitions are far less likely to fully overlap with each
other.
You need to be careful when applying the bulkhead pattern; if you take it
too far and create too many partitions, you lose all the economy-of-scale
benefits of sharing costly resources across a set of users that are active at
different times.
You also introduce a scaling problem. Scaling is simple when there are no
partitions and every user can be served by any instance, as you can just add
more instances. It’s not that easy with a partitioned pool of instances as
some partitions are much hotter than others.
If the server is behind a load balancer and can communicate that it’s
overloaded, the balancer can stop sending requests to it. The process can
expose a health endpoint that, when queried, performs a health check and returns 200 (OK) if the process can serve requests, or an error code if it's overloaded and doesn't have any spare capacity to serve requests.
Let’s have a look at the different types of health checks that you can
leverage in your service.
A liveness health test is the most basic form of checking the health of a
process. The load balancer simply performs a basic HTTP request to see
whether the process replies with a 200 (OK) status code.
A local health test checks whether the process is degraded or in some faulty
state. The process’s performance typically degrades when a local resource,
like memory, CPU, or disk, is either close to being fully saturated or is completely saturated. To detect a degradation, the process compares one or
more local metrics, like memory available or remaining disk space, with
some fixed upper and lower-bound thresholds. When a metric is above an
upper-bound threshold, or below a lower-bound one, the process reports
itself as unhealthy.
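For example, a local health test could be sketched like this; the thresholds are placeholders:

import shutil

MIN_FREE_DISK_BYTES = 1 * 1024**3    # placeholder: 1 GB
MAX_IN_FLIGHT_REQUESTS = 500         # placeholder upper bound

def is_healthy(in_flight_requests):
    """Reports the process as unhealthy when a local metric crosses its threshold."""
    if shutil.disk_usage("/").free < MIN_FREE_DISK_BYTES:
        return False
    if in_flight_requests > MAX_IN_FLIGHT_REQUESTS:
        return False
    return True

# The health endpoint returns 200 (OK) when is_healthy(...) is True, or an error code otherwise.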
A more advanced, and also harder check to get right, is the dependency
health check. This type of health check detects a degradation caused by a
remote dependency, like a database, that needs to be accessed to handle
incoming requests. The process measures the response time, timeouts, and
errors of the remote calls directed to the dependency. If any measure breaks
a predefined threshold, the process reports itself as unhealthy to reduce the
load on the downstream dependency.
A smart load balancer, instead, detects when a large fraction of the service
instances is reported as unhealthy and concludes that the health check is no
longer reliable. Rather than continuing to remove processes from the pool, it
ignores the health checks altogether so that new requests can be sent to any
process in the pool.
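A sketch of that fail-open behavior, assuming the balancer tracks a health
flag per instance and using an arbitrary 50% threshold:

def routable_instances(instances, fail_open_threshold=0.5):
    # instances: list of dicts like {"address": "10.0.0.1", "healthy": True}.
    unhealthy = sum(1 for i in instances if not i["healthy"])
    if unhealthy / len(instances) >= fail_open_threshold:
        # The health signal is no longer trustworthy: ignore it and keep
        # routing to every instance rather than emptying the pool.
        return instances
    return [i for i in instances if i["healthy"]]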
17.6 Watchdog
One of the main reasons to build distributed services is to be able to
withstand single-process failures. Since you are designing your system
under the assumption that any process can crash at any time, your service
needs to be able to deal with that eventuality.
For a process’s crash to not affect your service’s health, you should ensure
ideally that:
there are other processes that are identical to the one that crashed that
can handle incoming requests;
requests are stateless and can be served by any process;
any non-volatile state is stored on a separate and dedicated data store
so that when the process crashes its state isn’t lost;
all shared resources are leased so that when the process crashes, the
leases expire and the resources can be accessed by other processes;
the service is always running slightly over-scaled to withstand the
occasional failure of individual processes.
Because crashes are inevitable and your service is prepared for them, you
don’t have to come up with complex recovery logic when a process gets
into some weird degraded state — you can just let it crash. A transient but
rare failure can be hard to diagnose and fix. Crashing and restarting the
affected process gives operators maintaining the service some breathing
room until the root-cause can be identified, giving the system a kind of self-
healing property.
Imagine that a latent memory leak causes the available memory to decrease
over time. When a process runs out of physical memory, it starts to swap
back and forth to the page file on disk. This swapping is extremely expensive
and degrades the process's performance dramatically. If left unchecked, the
memory leak would eventually bring all processes running the service to
their knees. Would you rather have the processes detect that they are
degraded and restart themselves, or try to debug the root cause of the
degradation at 3 AM?
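A watchdog along these lines can be sketched in a few lines of Python; the
memory threshold, the check interval, and the use of the third-party psutil
package are assumptions:

import os
import threading
import time

import psutil  # assumed available for reading memory metrics

MIN_AVAILABLE_MEMORY_BYTES = 256 * 1024**2  # illustrative threshold

def watchdog():
    while True:
        if psutil.virtual_memory().available < MIN_AVAILABLE_MEMORY_BYTES:
            # Let it crash: exit immediately and rely on the supervisor
            # (e.g., the orchestrator) to restart a fresh, healthy process.
            os._exit(1)
        time.sleep(10)

# Run the check periodically in a background thread.
threading.Thread(target=watchdog, daemon=True).start()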
Although tests can’t give you complete confidence that your code is bug-
free, they certainly do a good job at detecting failure scenarios you are
aware of and validating expected behaviors. As a rule of thumb, if you want
to be confident that your implementation behaves in a certain way, you have
to add a test for it.
18.1 Scope
Tests come in different shapes and sizes. To begin with, we need to
distinguish between code paths a test is actually testing (aka system under
test or SUT) from the ones that are being run. The SUT represents the scope
of the test, and depending on it, the test can be categorized as either a unit
test, an integration test, or an end-to-end test.
A unit test validates the behavior of a small part of the codebase, like an
individual class. A good unit test should be relatively static in time and
change only when the behavior of the SUT changes — refactoring, fixing a
bug, or adding a new feature shouldn’t require a unit test to change. To
achieve that, a unit test should:
An integration test has a larger scope than a unit test, since it verifies that a
service can interact with its external dependencies as expected. This
definition is not universal, though, because integration testing has different
meanings for different people.
Martin Fowler makes the distinction between narrow and broad integration
tests. A narrow integration test exercises only the code paths of a service
that communicate with an external dependency, like the adapters and their
supporting classes. In contrast, a broad integration test exercises code paths
across multiple live services.
In the rest of the chapter, we will refer to these broader integration tests as
end-to-end tests. An end-to-end test validates behavior that spans multiple
services in the system, like a user-facing scenario. These tests usually run in
shared environments, like staging or production. Because of their scope,
they are slow and more prone to intermittent failures.
End-to-end tests should not have any impact on other tests or users sharing
the same environment. Among other things, that requires services to have
good fault isolation mechanisms, like rate-limiting, to prevent buggy tests
from affecting the rest of the system.
As the scope of a test increases, it becomes more brittle, slow, and costly.
Intermittently failing tests are nearly as bad as no tests at all, as developers
stop having any confidence in them and eventually ignore their failures.
When possible, prefer tests with smaller scope as they tend to be more
reliable, faster, and cheaper. A good trade-off is to have a large number of
unit tests, a smaller fraction of integration tests, and even fewer end-to-end
tests (see Figure 18.1).
18.2 Size
The size of a test reflects how much computing resources it needs to run,
like the number of nodes. Generally, that depends on how realistic the
environment is where the test runs. Although the scope and size of a test
tend to be correlated, they are distinct concepts, and it helps to separate
them.
A small test runs in a single process and doesn’t perform any blocking calls
or I/O. It’s very fast, deterministic, and has a very small probability of
failing intermittently.
An intermediate test runs on a single node and performs local I/O, like
reads from disk or network calls to localhost. This introduces more room
for delays and non-determinism, increasing the likelihood of intermittent
failures.
A large test requires multiple nodes to run, introducing even more non-
determinism and longer delays.
Unsurprisingly, the larger a test is, the longer it takes to run and the flakier
it becomes. This is why you should write the smallest possible test for a
given behavior. But how do you reduce the size of a test, while not reducing
its scope?
You can use a test double in place of a real dependency to reduce the test’s
size, making it faster and less prone to intermittent failures. There are
different types of test doubles:
The problem with test doubles is that they don’t resemble how the real
implementation behaves with all its nuances. The less the resemblance is,
the less confidence you should have that the test using the double is actually
useful. Therefore, when the real implementation is fast, deterministic, and
has few dependencies, use that rather than a double. If that’s not the case,
you have to decide how realistic you want the test double to be, as there is a
tradeoff between its fidelity and the test’s size.
When using the real implementation is not an option, use a fake maintained
by the same developers of the dependency, if one is available. Stubbing, or
mocking, are last-resort options as they offer the least resemblance to the
actual implementation, which makes tests that use them brittle.
As it turns out, the endpoint doesn’t need to communicate with the internal
service, so we can safely use a mock in its place. The data store comes with
an in-memory implementation (a fake) that we can leverage to avoid issuing
network calls to a remote data store.
Finally, we can’t use the third-party billing API, as that would require the
test to issue real transactions. Fortunately, the API has a different endpoint
that offers a playground environment, which the test can use without
creating real transactions. If there was no playground environment available
and no fake either, we would have to resort to stubbing or mocking.
In this case, we have cut the test’s size considerably, while keeping its scope
mostly intact.
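To illustrate the difference between a mock and a fake, here is a
hypothetical unit-sized test written with Python's unittest, where a mock
stands in for a remote collaborator and an in-memory fake for the data store;
the OrderService class and its collaborators are invented for the example:

import unittest
from unittest.mock import Mock

class FakeDataStore:
    """In-memory fake standing in for a remote data store."""
    def __init__(self):
        self._items = {}
    def put(self, key, value):
        self._items[key] = value
    def get(self, key):
        return self._items.get(key)

class OrderService:
    """Hypothetical system under test."""
    def __init__(self, notifier, data_store):
        self._notifier = notifier
        self._data_store = data_store
    def place_order(self, order_id):
        self._data_store.put(order_id, {"status": "placed"})
        self._notifier.order_placed(order_id)

class OrderServiceTest(unittest.TestCase):
    def test_order_is_persisted(self):
        notifier = Mock()             # mock: only interactions are verified
        data_store = FakeDataStore()  # fake: realistic behavior, in memory
        service = OrderService(notifier, data_store)
        service.place_order("order-1")
        self.assertIsNotNone(data_store.get("order-1"))
        notifier.order_placed.assert_called_once_with("order-1")

if __name__ == "__main__":
    unittest.main()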
There is a balance between the safety of a rollout and the time it takes to
release a change to production. A good CD pipeline should strive to make a
good trade-off between the two. In this chapter, we will explore how.
19.1 Review and build
At a high level, a code change needs to go through a pipeline of four stages
to be released to production: review, build, pre-production rollout, and
production rollout.
It all starts with a pull request (PR) submitted for review by a developer to a
repository. When the PR is submitted for review, it needs to be compiled,
statically analyzed, and validated with a battery of tests, all of which
shouldn’t take longer than a few minutes. To increase the tests’ speed and
minimize intermittent failures, the tests that run at this stage should be small
enough to run on a single process or node (e.g., unit tests), with larger tests
only run later in the pipeline.
Code changes shouldn’t be the only ones going through this review process.
For example, cloud resource templates, static assets, end-to-end tests, and
configuration files should all be version-controlled in a repository (not
necessarily the same) and be treated just like code. The same service can
then have multiple CD pipelines, one for each repository, that can
potentially run in parallel.
Once the change has been merged into the repository’s main branch, the CD
pipeline moves to the build stage, in which the repository’s content is built
and packaged into a deployable release artifact.
19.2 Pre-production
During this stage, the artifact is deployed and released to a synthetic pre-
production environment. Although this environment lacks the realism of
production, it’s useful to verify that no hard failures are triggered (e.g., a
null pointer exception at startup due to a missing configuration setting) and
that end-to-end tests succeed. Because releasing a new version to pre-
production requires significantly less time than releasing it to production,
bugs can be detected earlier.
You can even have multiple pre-production environments, ranging from one
created from scratch for each artifact and used to run simple smoke tests, to
a persistent one similar to production that receives a small fraction of
mirrored requests from it. AWS, for example, uses multiple pre-production
environments (Alpha, Beta, and Gamma).
19.3 Production
Once an artifact has been rolled out to pre-production successfully, the CD
pipeline can proceed to the final stage and release the artifact to production.
It should start by releasing the artifact to a small number of production
instances2. The goal is to surface problems that haven't been detected so far as
quickly as possible before they have the chance to cause widespread
damage in production.
If that goes well and all the health checks pass, the artifact is incrementally
released to the rest of the fleet. While the rollout is in progress, a fraction of
the fleet can’t serve any traffic due to the ongoing deployment, and the
remaining instances need to pick up the slack. To avoid this causing any
performance degradation, there needs to be enough capacity left to sustain
the incremental release.
19.4 Rollbacks
After each step, the CD pipeline needs to assess whether the artifact
deployed is healthy, or else stop the release and roll it back. A variety of
health signals can be used to make that decision, such as:
Monitoring just the health signals of the service being rolled out is not
enough. The CD pipeline should also monitor the health of upstream and
downstream services to detect any indirect impact of the rollout. The
pipeline should also allow enough time to pass between one step and the next
(the bake time) to ensure the previous step was successful, as some issues
only appear after some time has passed. For example, a performance degradation could
be visible only at peak time.
The CD pipeline can further gate the bake time on the number of requests
seen for specific API endpoints to guarantee that the API surface has been
properly exercised. To speed up the release, the bake time can be reduced
after each step succeeds and confidence is built up.
Since rolling forward is much riskier than rolling back, any change
introduced should always be backward compatible as a rule of thumb. The
most common cause for backward-incompatibility is changing the
serialization format used either for persistence or IPC purposes.
A service should emit metrics about its load, internal state, and the
availability and performance of its downstream dependencies. Combined with
the metrics emitted by those downstream services, this allows operators to
identify problems quickly. Doing so requires explicit code changes and a
deliberate effort by developers to instrument their code.
For example, take a fictitious HTTP handler that returns a resource. There
is a whole range of questions you will want to be able to answer once it’s
running in production1:
def get_resource(self, id):
    resource = self._cache.get(id)  # in-process cache
    # Is the id valid?
    # Was there a cache hit?
    # How long has the resource been in the cache?
    if resource is None:
        resource = self._repository.get(id)
        # Did the remote call fail, and if so, why?
        # Did the remote call time out?
        # How long did the call take?
        self._cache[id] = resource
        # What's the size of the cache?
    return resource
    # How long did it take for the handler to run?
Now, suppose we want to record the number of requests our service failed
to handle. One way to do that is with an event-based approach — whenever
a service instance fails to handle a request, it reports a failure count of 1 in
an event2 to a local telemetry agent, e.g.:
{
"failureCount": 1,
"serviceRegion": "EastUs2",
"timestamp": 1614438079
}
The agent batches these events and emits them periodically to a remote
telemetry service, which persists them in a dedicated data store for event
logs. For example, this is the approach taken by Azure Monitor’s log-based
metrics.
As you can imagine, this is quite expensive since the load on the backend
increases with the number of events ingested. Events are also costly to
aggregate at query time — suppose you want to retrieve the number of
failures in North Europe over the past month; you would have to issue a
query that requires fetching, filtering, and aggregating potentially trillions
of events within that time period.
Is there a way to reduce costs at query time? Because metrics are time-
series, they can be modeled and manipulated with mathematical tools. The
samples of a time-series can be pre-aggregated over pre-specified time
periods (e.g., 1 second, 5 minutes, 1 hour, etc.) and represented with
summary statistics such as the sum, average, or percentiles.
For example, the telemetry backend can pre-aggregate metrics over one or
more time periods at ingestion time. Conceptually, if the aggregation (i.e.,
the sum in our example) were to happen with a period of one hour, we
would have one failureCount metric per serviceRegion, each containing one
sample per hour, e.g.:
timestamp    failureCount
"00:00"      561
"01:00"      42
"02:00"      61
...
We can take this idea one step further and also reduce ingestion costs by
having the local telemetry agents pre-aggregate metrics on the client side.
Because metrics are mainly used for alerting and visualization purposes,
they are usually persisted in a time-series data store in pre-aggregated
form, since querying pre-aggregated data can be several orders of magnitude
more efficient than the alternative.
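A minimal sketch of what client-side pre-aggregation could look like inside
a local telemetry agent, with an illustrative one-minute period and print
standing in for the emission to the telemetry service:

import time
from collections import defaultdict

class MetricsAggregator:
    """Accumulate sums in memory per (metric, region) and flush them once
    per aggregation period (a simplification: flushing only happens when
    a new sample arrives after the period has elapsed)."""
    def __init__(self, period_seconds=60, flush=print):
        self._period = period_seconds
        self._flush = flush
        self._sums = defaultdict(float)
        self._window_start = time.time()

    def increment(self, name, region, value=1):
        self._sums[(name, region)] += value
        if time.time() - self._window_start >= self._period:
            self._emit()

    def _emit(self):
        for (name, region), total in self._sums.items():
            # One pre-aggregated sample per metric and region per period.
            self._flush({"metric": name, "serviceRegion": region,
                         "sum": total, "timestamp": int(self._window_start)})
        self._sums.clear()
        self._window_start = time.time()

agg = MetricsAggregator(period_seconds=60)
agg.increment("failureCount", "EastUs2")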
SLIs are best defined with a ratio of two metrics, good events over total
number of events, since they are easy to interpret: 0 means the service is
broken and 1 that everything is working as expected (see Figure 20.1). As
we will see later in the chapter, ratios also simplify the configuration of
alerts.
Once you have decided what to measure, you need to decide where to
measure it. Take the response time, for example. Should you use the metric
reported by the service, load balancer, or clients? In general, you want to
use the one that best represents the experience of the users. And if that’s too
costly to collect, pick the next best candidate. In the previous example, the
client metric is the more meaningful one, as that accounts for delays in the
entire path of the request.
Also, long-tail behaviors left unchecked can quickly bring a service to its
knees. Suppose a service is using 2K threads to serve 10K requests per
second. By Little's Law, the average time to process a request is 200 ms.
Suddenly, a network switch becomes congested, and as it happens, 1% of
requests are being served from a node behind that switch. That 1% of
requests, or 100 requests per second out of the 10K, starts taking 20 seconds
to complete.
How many more threads does the service need to deal with the small
fraction of requests having a high response time? If 100 requests per second
take 20 seconds to process, then 2K additional threads are needed to deal
just with the slow requests. So the number of threads used by the service
needs to double to keep up with the load!
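The arithmetic behind the example follows directly from Little's Law, which
relates the average number of requests being processed concurrently (L), the
arrival rate (lambda), and the average time spent processing a request (W);
the numbers below are the ones from the example:

\[
W = \frac{L}{\lambda} = \frac{2000\ \text{threads}}{10000\ \text{requests/s}} = 200\ \text{ms}
\]
\[
L_{\text{slow}} = \lambda_{\text{slow}} \cdot W_{\text{slow}} = 100\ \text{requests/s} \times 20\ \text{s} = 2000\ \text{threads}
\]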
Measuring long-tail behavior and keeping it under check doesn’t just make
your users happy, but also drastically improves the resiliency of your
service and reduces operational costs. When you are forced to guard against
the worst-case long-tail behavior, you happen to improve the average case
as well.
For example, an SLO could define that 99% of API calls to endpoint X
should complete below 200 ms, as measured over a rolling window of 1
week. Another way to look at it, is that it’s acceptable for up to 1% of
requests within a rolling week to have a latency higher than 200 ms. That
1% is also called the error budget, which represents the number of failures
that can be tolerated.
Figure 20.2: An SLO defines the range of acceptable values for an SLI.
SLOs are helpful for alerting purposes and help the team prioritize repair
tasks with feature work. For example, the team can agree that when an error
budget has been exhausted, repair items will take precedence over new
features until the SLO is restored. Also, an incident’s importance can be
measured by how much of the error budget it has burned. An incident that
burned 20% of the error budget warrants more scrutiny than one that burned
only 1%.
Smaller time windows force the team to act quicker and prioritize bug fixes
and repair items, while longer windows are better suited to make long-term
decisions about which projects to invest in. Therefore it makes sense to
have multiple SLOs with different window sizes.
How strict should SLOs be? Choosing the right target range is harder than it
looks. If it’s too loose, you won’t detect user-facing issues; if it’s too strict,
you will waste engineering time micro-optimizing and get diminishing
returns. Even if you could guarantee 100% reliability for your system, you
can’t make guarantees for anything that your users depend on to access your
service that is outside your control, like their last-mile connection. Thus,
100% reliability doesn’t translate into a 100% reliable experience for users.
When setting the target range for your SLOs, start with comfortable ranges
and tighten them as you build up confidence. Don’t just pick targets that
your service meets today that might become unattainable in a year after the
load increases; work backward from what users care about. In general,
anything above 3 nines of availability is very costly to achieve and provides
diminishing returns.
How many SLOs should you have? You should strive to keep things simple
and have as few as possible that provide a good enough indication of the
desired service level. SLOs should also be documented and reviewed
periodically. For example, suppose you discover that a specific user-facing
issue generated lots of support tickets, but none of your SLOs showed any
degradations. In that case, they are either too relaxed, or you are not
measuring something that you should.
Users can become over-reliant on the actual behavior of your service rather
than the published SLO. To mitigate that, you can consider injecting
controlled failures in production — also known as chaos testing — to
“shake the tree” and ensure the dependencies can cope with the targeted
service level and are not making unrealistic assumptions. As an added
benefit, injecting faults helps validate that resiliency mechanisms work as
expected.
20.4 Alerts
Alerting is the part of a monitoring system that triggers an action when a
specific condition happens, like a metric crossing a threshold. Depending
on the severity and the type of the alert, the action triggered can range from
running some automation, like restarting a service instance, to ringing the
phone of a human operator who is on-call. In the rest of this section, we will
be mostly focusing on the latter case.
Suppose you have an availability SLO of 99% over 30 days, and you would
like to configure an alert for it. A naive way would be to trigger an alert
whenever the availability goes below 99% within a relatively short time
window, like an hour. But how much of the error budget has actually been
burned by the time the alert triggers?
Because the time window of the alert is one hour and the SLO error budget is
defined over 30 days, the fraction of the error budget that has been spent by
the time the alert triggers is 1 hour / 30 days = 1 / 720 ≈ 0.14%. Is it
really critical to be notified that 0.14% of the SLO's error budget has been
burned? Probably not. In this case, you have high recall, but low precision.
You can improve the alert’s precision by increasing the amount of time its
condition needs to be true. The problem with it is that now the alert will
take longer to trigger, which will be an issue when there is an actual outage.
The alternative is to alert based on how fast the error budget is burning, also
known as the burn rate, which lowers the detection time.
The burn rate is defined as the percentage of the error budget consumed
over the percentage of the SLO time window that has elapsed — it’s the
rate of increase of the error budget. Concretely, for our SLO example, a
burn rate of 1 means the error budget will be exhausted precisely in 30
days; if the rate is 2, then it will be 15 days; if the rate is 3, it will be 10
days, and so on.
By rearranging the burn rate’s equation, you can derive the alert threshold
that triggers when a specific percentage of the error budget has been
burned. For example, to have an alert trigger when an error budget of 2%
has been burned in a one-hour window, the threshold for the burn rate
should be set to 14.4:
time period elapsed = alert window / SLO period = 1 hour / 720 hours ≈ 0.14%
burn rate threshold = error budget consumed / time period elapsed = 2% / 0.14% = 14.4
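The same relationship can be expressed as a small Python sketch; the helper
names and the assumption of roughly uniform traffic over the SLO period are
mine:

def burn_rate(window_error_rate, slo_target):
    # Ratio between the error rate observed in the alert window and the
    # error rate the SLO allows (the error budget), e.g., 1% for a 99% SLO.
    return window_error_rate / (1 - slo_target)

def budget_consumed(rate, window_hours, slo_period_hours=720):
    # Fraction of the SLO period's error budget burned during the window,
    # assuming traffic is roughly uniform over the period.
    return rate * window_hours / slo_period_hours

# A burn rate of 14.4 sustained for one hour consumes 2% of a 30-day budget.
assert round(budget_consumed(14.4, window_hours=1), 3) == 0.02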
To improve recall, you can have multiple alerts with different thresholds.
For example, a burn rate below 2 could be a low-severity alert that sends an
e-mail and is investigated during working hours. The SRE workbook has
some great examples of how to configure alerts based on burn rates.
While you should define most of your alerts based on SLOs, some should
trigger for known hard-failure modes that you haven’t had the time to
design or debug away. For example, suppose you know your service suffers
from a memory leak that has led to an incident in the past, but you haven't
yet managed to track down the root cause or build a resiliency mechanism to
mitigate it. In this case, it could be useful to define an alert that triggers
an automated restart when a service instance is running out of memory.
20.5 Dashboards
After alerting, the other main use case for metrics is to power real-time
dashboards that display the overall health of a system.
The first decision you have to make when creating a dashboard is to decide
who the audience is and what they are looking for. Given the audience, you
can work backward to decide which charts, and therefore metrics, to
include.
SLO dashboard
Service dashboard
This dashboard offers a first entry point into the behavior of a service when
debugging. As we will later learn when discussing observability, this high-
level view is just the starting point. The operator typically drills down into
the metrics by segmenting them further, and eventually reaches for raw logs
and traces to get more detail.
As new metrics are added and old ones removed, charts and dashboards
need to be modified and kept in sync across multiple environments, like
staging and production. The most effective way to achieve that is by
defining dashboards and charts with a domain-specific language and
version-control them just like code. This allows updating dashboards from
the same pull request that contains related code changes without needing to
update dashboards manually, which is error-prone.
As dashboards render top to bottom, the most important charts should
always be located at the very top.
Charts should be rendered with a default timezone, like UTC, to ease the
communication between people located in different parts of the world when
looking at the same data.
Similarly, all charts in the same dashboard should use the same time
resolution (e.g., 1 min, 5 min, 1 hour, etc.) and range (24 hours, 7 days,
etc.). This makes it easy to correlate anomalies across charts in the same
dashboard visually. You should pick the default time range and resolution
based on the most common use case for a dashboard. For example, a 1-hour
range with a 1-min resolution is best to monitor an ongoing incident, while
a 1-year range with a 1-day resolution is best for capacity planning.
You should keep the number of data points and metrics on the same chart to
a minimum. Rendering too many points doesn't just slow down chart loading;
it also makes it harder to interpret the charts and spot anomalies.
A chart should contain only metrics with similar ranges (min and max
values); otherwise, the metric with the largest range can completely hide the
others with smaller ranges. For that reason, it makes sense to split related
statistics for the same metric into multiple charts. For example, the 10th
percentile, average and 90th percentile of a metric can be displayed in one
chart, while the 0.1th percentile, 99.9th percentile, minimum and maximum
in another.
Metrics that are only emitted when an error condition occurs can be hard to
interpret as charts will show wide gaps between the data points, leaving the
operator wondering whether the service stopped emitting that metric due to
a bug. To avoid this, emit a metric with a value of zero in the absence of an
error and a value of one when an error occurs.
20.6 On-call
A healthy on-call rotation is only possible when services are built from the
ground up with reliability and operability in mind. By making the
developers responsible for operating what they build, they are incentivized
to reduce the operational toll to a minimum. They are also in the best
position to be on-call since they are intimately familiar with the system’s
architecture, brick walls, and trade-offs.
Being on-call can be very stressful. Even when there are no call-outs, just
the thought of not having the same freedom usually enjoyed outside of
regular working hours can cause anxiety. This is why being on-call should
be compensated, and there shouldn’t be any expectations for the on-call
engineer to make any progress on feature work. Since they will be
interrupted by alerts, they should make the most out of it and be given free
rein to improve the on-call experience, for example, by revising dashboards
or improving resiliency mechanisms.
The first step to address an alert is to mitigate it, not fix the underlying root
cause that created it. A new artifact has been rolled out that degrades the
service? Roll it back. The service can’t cope with the load even though it
hasn’t increased? Scale it out.
Once the incident has been mitigated, the next step is to brainstorm ways to
prevent it from happening again. The more widespread the impact was, the
more time you should spend on this. Incidents that burned a significant
fraction of an SLO’s error budget require a postmortem.
2. We will talk more about event logs in section 21.1, for now assume an
event is just a dictionary.↩
4. For the same reason, you should automate what you can to minimize
manual actions that operators need to perform. Machines are good at
following instructions; use that to your advantage.↩
21 Observability
A distributed system is never 100% healthy at any given time as there can
always be something failing. A whole range of failure modes can be
tolerated, thanks to relaxed consistency models and resiliency mechanisms
like rate limiting, retries, and circuit breakers. Unfortunately, they also
increase the system’s complexity. And with more complexity, it becomes
increasingly harder to reason about the multitude of emergent behaviours
the system might experience.
When debugging, the operator makes a hypothesis and tries to validate it.
For example, the operator might get suspicious after noticing that the
variance of her service’s response time has increased slowly but steadily
over the past weeks, indicating that some requests take much longer than
others. After correlating the increase in variance with an increase in traffic,
the operator hypothesizes that the service is getting closer to hitting a
constraint, like a limit or a resource contention. Metrics and charts alone
won’t help to validate this hypothesis.
21.1 Logs
A log is an immutable list of time-stamped events that happened over time.
An event can have different formats. In its simplest form, it’s just free-form
text. It can also be structured and represented with a textual format like
JSON, or a binary one like Protobuf. When structured, an event is typically
represented with a bag of key-value pairs:
{
"failureCount": 1,
"serviceRegion": "EastUs2",
"timestamp": 1614438079
}
Logs can originate from your services and external dependencies, like
message brokers, proxies, databases, etc. Most languages offer libraries that
make it easy to emit structured logs. Logs are typically dumped to disk
files, which are rotated every so often, and forwarded by an agent to an
external log collector asynchronously, like an ELK stack or AWS
CloudWatch logs.
Logs are very simple to emit, particularly free-form textual ones. But that’s
pretty much the only advantage they have compared to metrics and other
instrumentation tools. Logging libraries can add overhead to your services
if misused, especially when they are not asynchronous and logging blocks
while writing to stdout or disk. Also, if the disk fills up due to excessive
logging, the service instance might get itself into a degraded state. At best,
you lose logging, and at worst, the service instance stops working if it
requires disk access to handle requests.
Finally, and no less important, logs have a low signal-to-noise ratio because
they are fine-grained and service-specific, which makes it challenging to
extract useful information from them.
Best Practices
To make the job of the engineer drilling into the logs less painful, all the
data about a specific work unit should be stored in a single event. A work
unit typically corresponds to a request or a message pulled from a queue. To
effectively implement this pattern, code paths handling work units need to
pass around a context object containing the event being built.
An event should contain useful information about the work unit, like who
created it, what it was for, and whether it succeeded or failed. It should
include measurements as well, like how long specific operations took.
Every network call performed within the work unit needs to be
instrumented to log its response status code and response time. Finally, data
logged to the event should be sanitized and stripped of potentially sensitive
properties that developers shouldn’t have access to, like user content.
Collating all data within a single event for a work unit minimizes the need
for joins but doesn’t completely eliminate it. For example, if a service calls
another downstream, you will have to perform a join to correlate the caller’s
event log with the callee’s one to understand why the remote call failed. To
make that possible, every event should include the id of the request or
message for the work unit.
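One possible shape for such a per-work-unit event is sketched below; the
field names and the use of print as a stand-in for the actual log emission
are illustrative:

import json
import time
import uuid

class RequestEvent:
    """Event for a single work unit, enriched as the request is handled
    and emitted once at the end."""
    def __init__(self, request_id=None):
        self.fields = {"requestId": request_id or str(uuid.uuid4()),
                       "timestamp": int(time.time())}
    def set(self, key, value):
        self.fields[key] = value
    def time_call(self, name):
        # Context manager that records the duration of a named operation.
        return _Timer(self, name)
    def emit(self):
        print(json.dumps(self.fields))  # stand-in for the logging library

class _Timer:
    def __init__(self, event, name):
        self._event, self._name = event, name
    def __enter__(self):
        self._start = time.monotonic()
    def __exit__(self, *exc):
        self._event.set(f"{self._name}DurationMs",
                        int((time.monotonic() - self._start) * 1000))

# Usage: pass the event down the code paths handling the work unit.
event = RequestEvent()
event.set("operation", "get_resource")
with event.time_call("repositoryGet"):
    time.sleep(0.01)  # placeholder for the actual remote call
event.set("statusCode", 200)
event.emit()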
Costs
There are various ways to keep the costs of logging under control. A simple
approach is to have different logging levels (e.g.: debug, info, warning,
error) controlled by a dynamic knob that determines which ones are
emitted. This allows operators to increase the logging verbosity for
investigation purposes and reduce costs when granular logs aren’t needed.
Sampling is another option to reduce verbosity. For example, a service could
log only every n-th event. Additionally, events can be prioritized based on
their expected signal-to-noise ratio; for example, failed requests should be
sampled at a higher frequency than successful ones.
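For instance, a priority-aware sampler could be as simple as this sketch,
with the sampling rates picked arbitrarily:

import random

def should_log(event, failure_rate=1.0, success_rate=0.01):
    # Keep every failed request, but only about one in a hundred successes.
    rate = failure_rate if event.get("failed") else success_rate
    return random.random() < rate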
The options discussed so far only reduce the logging verbosity on a single
node. As you scale out and add more nodes, logging volume will
necessarily increase. Even with the best intentions, someone could check-in
a bug that leads to excessive logging. To avoid costs soaring through the
roof or killing your logging pipeline entirely, log collectors need to be able
to rate-limit requests. If you use a third-party service to ingest, store, and
query your logs, there probably is a quota in place already.
Of course, you can always opt to create in-memory aggregates from the
measurements collected in events (e.g., metrics) and emit just those rather
than raw logs. By doing so, you trade-off the ability to drill down into the
aggregates if needed.
21.2 Traces
Tracing captures the entire lifespan of a request as it propagates throughout
the services of a distributed system. A trace is a list of causally-related
spans that represent the execution flow of a request in a system. A span
represents an interval of time that maps to a logical operation or work unit
and contains a bag of key-value pairs (see Figure 21.2).
Figure 21.2: An execution flow can be represented with spans.
When a request begins, it’s assigned a unique trace id. The trace id is
propagated from one stage to another at every fork in the local execution
flow from one thread to another and from caller to callee in a network call
(through HTTP headers, for example). Each stage is represented with a span
— an event containing the trace id.
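A hand-rolled sketch of that propagation might look as follows; the
X-Trace-Id header name is an assumption made for the example rather than a
standard (real systems typically rely on a tracing library and standardized
trace-context headers):

import uuid
import urllib.request

TRACE_HEADER = "X-Trace-Id"  # illustrative header name

def handle_request(headers, downstream_url):
    # Reuse the caller's trace id if present; otherwise start a new trace.
    trace_id = headers.get(TRACE_HEADER) or str(uuid.uuid4())
    span = {"traceId": trace_id, "operation": "handle_request"}
    call_downstream(downstream_url, trace_id)
    return span

def call_downstream(url, trace_id):
    # Propagate the trace id to the callee through an HTTP header.
    request = urllib.request.Request(url, headers={TRACE_HEADER: trace_id})
    return urllib.request.urlopen(request)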
When a single user request flows through a system, it can pass through
several services. A specific event only contains information for the work
unit of one specific service, so it can’t be of much use to debug the entire
request flow. Similarly, a single event doesn’t tell much about the health or
state of a specific service.
This is where metrics and traces come in. You can think of them as
abstractions, or derived views, built from event logs and tuned to specific
use cases. A metric is a time-series of summary statistics derived by
aggregating counters or observations over multiple work units or events.
You could emit counters in events and have the backend roll them up into
metrics as they are ingested. In fact, this is how some metrics collection
systems work.
Once you have digested that, I suggest reading “Azure Data Explorer: a big
data analytics cloud platform optimized for interactive, adhoc queries over
structured, semi-structured and unstructured data.” The paper discusses the
implementation of a cloud-native event store built on top of Azure’s cloud
storage — a great example of how these large scale systems compose on
top of each other3.
Finally, if you are preparing for the system design interview, check out Alex
Xu’s book “System Design Interview.” The book introduces a framework to
tackle design interviews and includes more than 10 case studies.