Transitioning from Monolith to Microservices Handbook
Converting monoliths to the microservice architecture
Semaphore
Contents

Preface
Who Is This Book for, and What Does It Cover?
Additional recommended reading
How to Contact Us
About the Authors
About the Editor
About the Reviewer
3.2 Domain-Driven Design for microservices
3.3 Strategic phase
3.3.1 Types of relationships
3.4 Tactical phase
3.5 Domain-Driven Design is iterative
3.6 Complementary design patterns
5.3 Which method is best to deploy microservices?
5.4 Release management for microservices
5.4.1 A common approach: one microservice, one repository
5.5 Maintaining multiple microservices releases
5.6 Managing microservices releases with monorepos
5.7 Never too far away from safety
5.8 When in doubt, try monorepos
Parting Words
6.1 Share This Book With The World
6.2 Tell Us What You Think
6.3 About Semaphore
© 2022 Rendered Text. All rights reserved.
This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0
This book is open source: https://github.com/semaphoreci/book-microservices
Published on the Semaphore website: https://semaphoreci.com/resources/microservices
Sep 2022: First edition v1.0 (revision e932d9d)
Share this book:
I’ve just started reading “Transitioning from Monolith to Microservices Handbook”, a free ebook by @semaphoreci: https://bit.ly/3eWMTA0 (Tweet this!)
Preface
Microservices are the most scalable way of developing software. As projects grow in size
and complexity, one of the possible ways forward is to break the system into autonomous
microservices and hand them out to different teams.
Given the advantages, one would be forgiven for thinking that microservices are the superior
architecture. But there are some caveats that, if ignored, can lead to development hell. This
book aims to help you decide when migrating your monolith to the microservice architecture is a good idea and, if so, how to navigate the choppy waters ahead.
How to Contact Us
We would very much love to hear your feedback after reading this book. What did you like
and learn? What could be improved? Is there something we could explain further?
A benefit of publishing electronically is that we can continuously improve it. And that’s
exactly what we intend to do based on your feedback.
You can send us feedback by email at [email protected].
Find us on Twitter: https://twitter.com/semaphoreci
Find us on Facebook: https://facebook.com/SemaphoreCI
Find us on LinkedIn: https://www.linkedin.com/company/rendered-text
Chapter 1 — What Are Microservices?
Beloved by tech giants like Netflix and Amazon, microservices have become the darlings of modern software development. But, despite the benefits, this is a paradigm that is easy to
get wrong. So, let’s explore what microservices are and, more importantly, what they are not.
1.3 Benefits of microservices
Microservices allow companies to keep teams small and agile. The idea is to decompose the application into small services that can be autonomously developed and deployed by tight-knit teams.
1.3.1 Scalability
The main reason that companies adopt microservices is scalability. Services can be developed
and released independently without arranging large-scale coordination efforts within the
organization.
However, most firms that have succeeded with microservices did not begin with them. Consider
the examples of Airbnb and Twitter, which went the microservice route after outgrowing
their monoliths and are now battling the added complexity. Even successful companies that use
microservices appear to still be figuring out the best way to make them work. It is evident
that microservices come with their share of tradeoffs.
Figure 2: The key properties of microservice architecture
A similar argument can be made when working on greenfield projects, which are unconstrained
by earlier work and hence have nothing upon which to base decisions. Sam Newman, author
of Building Microservices: Designing Fine-Grained Systems, stated that it is very difficult to
build a greenfield project with microservices:
I remain convinced that it is much easier to partition an existing “brownfield”
system than to do so upfront with a new, greenfield system. You have more to
work with. You have code you can examine, you can speak to people who use
and maintain the system. You also know what ‘good’ looks like – you have a
working system to change, making it easier for you to know when you may have
got something wrong or been too aggressive in your decision-making process.
Figure 3: Microservices are initially the less productive architecture due to maintenance
overhead. As the monolith grows, it gets more complex, and it’s harder to add new features.
Microservices only pay off after the lines cross.
Figure 4: Brooks' Law, applied to complex software development, states that adding more developers to a late software project only makes it take longer.
Microservices are one method of reducing the impact of Brooks' Law. You get smaller, more agile, and more communicative teams. Before deciding on microservices, however, you must determine whether Brooks' Law is actually affecting your team. Switching to microservices too soon would not be a wise investment.
1.7 Is it the right time for the switch?
Microservices are the most scalable way we have to develop software, no doubt about that. But they are not a free lunch: they come with risks that are easy to stumble into if you're not cautious. They are great when the team is growing and you need to stay fast and agile, but you need a good understanding of the problem you are solving, or you can end up with a distributed monolith.
We can summarize this whole discussion about transitioning to microservices in one sentence:
don’t do it unless you have a good reason. Companies that embark on the journey to
microservices unprepared and without a solid design will have a very tough time. You need
to achieve a critical mass of engineering culture and scaling know-how before microservices
should be considered as an option.
Instead of rewriting their entire monolith as microservices, Shopify chose modularization as
the solution.
Figure 5: Modularization helps design better monoliths and microservices. Without carefully
defined modules, we either fall into the traditional layered monolith (the big ball of mud) or, even worse, a distributed monolith, which combines the worst features of monoliths and
microservices.
Modularization is a lot of work, that’s true. But it also adds a ton of value because it
makes development more straightforward. New developers do not have to know the whole
application before they can start making changes. They only need to be familiar with one
module at a time. Modularity makes a large monolith feel small.
Modularization is a required step before transitioning to microservices, and for some, it may
be a better solution than microservices. The modular monolith, like microservices, solves the tangled codebase problem by splitting the code into independent modules. Unlike microservices, where communication happens over a network, the modules in a monolith communicate through internal, in-process API calls.
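To make the idea concrete, here is a minimal Python sketch (the billing module and all names in it are hypothetical, not from the book) of a module that exposes a small public API and keeps everything else private by convention:

```python
# billing.py -- one module of the modular monolith.
# Only names listed in __all__ form the module's public API; everything
# else is private by convention and must not be imported elsewhere.
from dataclasses import dataclass

__all__ = ["charge_order", "Receipt"]


@dataclass
class Receipt:
    order_id: str
    amount_cents: int
    status: str


def charge_order(order_id: str, amount_cents: int) -> Receipt:
    """Public entry point: other modules call this in-process."""
    _validate_amount(amount_cents)
    return Receipt(order_id=order_id, amount_cents=amount_cents, status="paid")


def _validate_amount(amount_cents: int) -> None:
    """Private helper: the leading underscore marks it as internal."""
    if amount_cents <= 0:
        raise ValueError("amount must be positive")
```

If this module later needs to become a microservice, `charge_order` is the natural candidate for its network-facing API.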
Figure 6: Layered vs modular monoliths. Modularized monoliths share many of the characteristics of microservice architecture sans the most difficult challenges.
Figure 7: Shopify bragging about their Black Friday stats
The architecture and technology stack will determine how the monolith can be optimized;
a process that almost invariably will start with modularization and can leverage cloud
technologies for scaling:
• Deploying multiple instances of the monolith and using load balancing to distribute
the traffic.
• Distributing static assets and frontend code using CDNs.
• Using caching to reduce the load on the database.
• Implementing high-demand features with edge computing or serverless functions.
Chapter 2 — How to Restructure Your Organization for
Microservices
When companies think about how to restructure their organizations, they often focus on
the new roles that must be filled and the skills that employees need to learn. However,
restructuring your organization to support microservice-based applications goes beyond a
few roles and job titles. A company restructuring for microservices requires an entire culture shift and a new way of working.
Figure 8: Traditional hierarchical organization structure.
Everyone has their own defined job function. Everyone has their assigned roles. The problem
is, nobody is responsible for the product as a whole. Nobody owns the application. Organizationally, you have to go all the way to the highest level of engineering management—such as
the VP of engineering or CTO/CPO—before you find someone who owns and manages the
product as a whole. This type of structure leads to finger-pointing and a “not-my-problem”
mentality.
Everyone has a role to fill, but no one has responsibility.
When you build your application using microservices, one of the advantages is the ability
to define and manage smaller chunks of the application as a whole. This advantage isn’t as
useful when you keep the traditional organizational structure. You have just moved from
having one large application with no owner, to hundreds of smaller applications with no
owners.
To fully take advantage of the structural benefits of a microservice application architecture,
you must modify your organizational structure to match that model. Most importantly, you must move from a roles and job functions assignment model to an ownership and responsibility assignment model.
2.2 The pod model
In the pod model, your organization is not split by job functions; instead, it’s split into small,
cross-functional teams, called pods. Each team has the capabilities, the resources, and the
support required to be completely responsible for a small portion of the overall application—a
service or two in the microservice architecture model.
A pod that owns microservices within the application typically consists of 6-10 people with
the following types of job skills:
• Team management
• Software development
• Software validation
• Service operation
• Service scaling and maintaining availability
• Security
• Operational support and maintenance
• Infrastructure operational maintenance (servers, etc.)
It’s important that the team has the necessary skills to perform these jobs. But, in a pod
model, the pod as a whole has responsibilities, and no single person is assigned specific job
functions. In other words, there is no “security person,” or “DevOps person,” or “QA person”
in the pod. Instead, everyone in the pod shares the entire pod’s responsibilities.
Figure 9 shows the same organization using a pod model. The pods are each independent
and peers of one another, and each pod provides cross-functional responsibilities.
2.3 Ownership is the key
The key to successfully operating the pod model is to create pods with responsibilities
that aren’t specific job functions. Rather, their responsibilities are ownership. A pod owns
a service or set of services. Ownership means they have complete responsibility for the
architecture, design, creation, testing, operation, maintenance, and security of the service. No
one else has ownership, only the assigned pod. If anything happens to that service, it is the
pod's responsibility to manage and fix. This completely removes the ability to point fingers at another team when a service fails. The service's owning pod is the one responsible. This is
illustrated in Figure 10, where interconnected services are represented in blue, and the pods
that own those services are shown in red.
Every service has exactly one owner, and if something is failing in a service, it is completely
clear which pod is responsible for resolving the issue.
resides. Because the two services are owned by different pods, which pod owns the problem?
The answer may be difficult and complex to determine. Finger pointing between Pod 2 and
Pod 4 is definitely a possibility.
If you have successfully set up a pod model and have ingrained a strong ownership mindset
into the members of the pods, the likelihood of finger-pointing in this case should be low.
What should happen in a high-quality team organization is both Pod 2 and Pod 4 work
together to resolve the issue.
Although this is the way things should work, that's not sufficient. The model must help resolve these ownership issues quickly and decisively in order to keep your application working at scale and with high availability. This is where two characteristics of your microservice architecture are critical: well-designed and documented APIs, and solid, maintainable SLAs.
SLAs. Not everyone who promotes moving to microservice architectures drives these two
characteristics; but in my mind, they are the two most important characteristics of a solid
microservice architecture, and they are critical to the ownership organizational model. Let’s
look at these two microservice characteristics:
• Well-designed and documented APIs. Each and every service in your application
must have a well-designed API describing how the service should be used and how to
talk to it, and this API must be well-documented across the organization. We are used
to well-designed and documented APIs when we are talking about APIs exposed to
customers. But it’s equally important to design quality APIs among internal services as
well. No service should talk to any other service without using a well-defined
and documented API to that service. This makes sure that expectations about what each service does and does not do are clear, and those expectations drive higher-quality interactions and hence fewer application issues.
• Solid, maintainable SLAs. Besides having APIs, a set of performance expectations
around those APIs must be established. If Service C is calling Service E's API, it's critical that Service C understands the performance it can expect from Service E. What will the latency be for the API calls it makes? What happens to
latency if the call rate increases? Are there limits on how often a service can be called?
What happens when those limits are reached?
APIs are about understanding, and SLAs are about expectations. APIs help us know what
other services do and what they are responsible for. SLAs help us know what we can expect
from a performance standpoint from the service.
If Service E in Figure 10 has a well-defined and documented API, and has well-defined SLAs
on how it should be used and it keeps those SLAs, then as long as Service C is using the
service in accordance with the documented API and keeping within the defined SLAs, Service
C should be able to expect reasonable performance from Service E.
Now, in the hypothetical example above, Service E was causing problems for Service C. In
this case, it should be obvious in the measured performance compared with the documented
SLAs that Service E has the problem and not Service C. With monitoring and API/SLA management in place, diagnosing problems becomes far easier.
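As an illustration, here is a minimal sketch, assuming a hypothetical endpoint and a published p99 latency of 300 ms, of how the team owning Service C might encode Service E's documented SLA directly in its client code:

```python
import time

import requests

# Hypothetical values taken from Service E's documented API and published SLA.
SERVICE_E_ITEMS_URL = "http://service-e.internal/api/v1/items"
SERVICE_E_P99_LATENCY_SECONDS = 0.3


def fetch_items(order_id: str) -> list:
    started = time.monotonic()
    # Hard cutoff at twice the latency budget: fail fast instead of letting a
    # slow upstream quietly degrade Service C.
    response = requests.get(
        SERVICE_E_ITEMS_URL,
        params={"order_id": order_id},
        timeout=SERVICE_E_P99_LATENCY_SECONDS * 2,
    )
    response.raise_for_status()
    elapsed = time.monotonic() - started
    if elapsed > SERVICE_E_P99_LATENCY_SECONDS:
        # Record the SLA breach so monitoring points at the right owner.
        print(f"SLA breach: call took {elapsed:.3f}s, "
              f"budget is {SERVICE_E_P99_LATENCY_SECONDS}s")
    return response.json()
```

Failing fast and recording the breach keeps the ownership question answerable with data rather than finger-pointing.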
2.5 Pods need support
In the pod model, pods have a lot of responsibility and a lot of authority. There is no way a small team of 6-10 people can handle the full breadth and depth of service ownership without support.
To give them support, horizontal service teams are created to provide tools and support to
the service-owning pods. These teams can handle common pod-independent problems such
as creating CI/CD pipelines, understanding global security issues, creating tooling to manage
infrastructures, and maintaining vendor relationships. The pods can then lean on these teams for extra capacity and support. This is illustrated in Figure 11.
It’s important to note that these support teams are supporting the pods, and do not—can
not—take ownership responsibility away from the pods. If a security issue exists in a service,
responsibility for the issue lies with the pod that owns the service—not with the security
support team. The pods have ultimate control and decision-making responsibilities—and
hence ultimate responsibility—for all aspects of the operation of the services they own.
The pod ownership model is part of the STOSA framework. STOSA stands for Single Team
Oriented Service Architecture. It defines a model where service teams — pods — own all
aspects of building and operating individual services.
The model was developed and introduced in Lee Atchison’s book Architecting for Scale. It’s
now available as a standalone model documented at stosa.org. We recommend checking it
out.
Chapter 3 — Design Principles for Microservices
How do you know if you’re doing proper microservice design? If your team can deploy
an update at any time without coordinating with other teams, and if other teams can similarly deploy their changes without affecting you, congratulations, you've got the hang of microservices.
The surest way of losing the benefits microservices offer is by not respecting the decoupling
rule. If we look closely, we see that microservices are all about autonomy. When this autonomy
is lost, teams must coordinate during development and deployment. Perfect integration
testing is required to make sure all microservices work together.
Figure 12: Tight service dependencies create team dependencies and communication bottlenecks.
These are all problems that come with distributed computing. If you’ve ever used a cloud
service you’ll know that spreading services or machines over many geographical locations
is not the same as running everything on the same site. A distributed system has a higher
latency, can have synchronization issues, and is a lot harder to manage and debug. This
highly-coupled service architecture is really, deep down, a distributed monolith, with the worst
of both worlds and none of the benefits microservices should bring.
If you cannot deploy without coordinating with another team or relying on specific versions
of other microservices to deploy yours, you’re only distributing your monolith.
Domain-Driven Design allows us to plan a microservice architecture by decomposing
the larger system into self-contained units, understanding the responsibilities of each, and
identifying their relationships.
3.1 What is Domain-Driven Design?
Domain-Driven Design (DDD) is a software design method wherein developers construct
models to understand the business requirements of a domain. These models serve as the
conceptual foundation for developing software.
According to Eric Evans, author of Domain-Driven Design: Tackling Complexity in the Heart
of Software, a domain is:
A sphere of knowledge, influence, or activity. The subject area to which the user
applies a program is the domain of the software.
How well one can solve a problem is determined by one’s capacity to understand the domain.
Developers are smart, but they can’t be specialists in all fields. They need to collaborate with
domain experts to guarantee that the code is aligned with business rules and client needs.
Figure 13: Developers and domain experts use a unified language to share knowledge,
document, plan, and code.
The two most important DDD concepts for microservice architecture are: bounded contexts
and context maps.
3.1.1 Bounded Context (BC)
The setting in which a word appears determines its meaning. Depending on the context,
“book” may refer to a written piece of work, or it may mean “to reserve a room”. A bounded
context (BC) is the space in which a term has a definite and unambiguous meaning.
Before DDD it was common practice to attempt to find a model that spanned the complete
domain. The problem is that the larger the domain, the more difficult it is to find a consistent
and unified model. DDD’s solution is to break down the domain into more manageable
subdomains.
Figure 14: The relevant properties of the “book” change from context to context.
In software, we need to be exact. That is why defining BCs is critical: it gives us a precise
vocabulary, called ubiquitous language, that can be used in conversations between developers
and domain experts. The ubiquitous language is present throughout the design process,
project documentation, and code.
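A short, hypothetical Python sketch (the class and field names are illustrative, not from the book) shows how the same word maps to different models in two bounded contexts:

```python
from dataclasses import dataclass


# Catalog context: a "book" is something customers browse and buy.
@dataclass
class Book:
    isbn: str
    title: str
    price_cents: int


# Shipping context: the same "book" is just a physical item with weight and size.
@dataclass
class ShippableItem:
    isbn: str
    weight_grams: int
    height_mm: int
    width_mm: int
```

Each context keeps only the properties it cares about, which is exactly what Figure 14 illustrates.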
Figure 15: Bounded context communication used to achieve a high-level task.
Figure 16: An Event Storming session, where domain events are used as the catalyst for sharing knowledge and identifying business requirements.
Figure 17: Strategic Domain-Driven Design helps us identify the logical boundaries of
individual microservices.
The boundaries act as natural barriers, protecting the models inside. As a result, every BC
represents an opportunity to implement at least one microservice.
Figure 18: Bounded context relationships
Figure 19: ACL is implemented downstream to mitigate the impact of upstream changes.
OHS does the opposite. It’s implemented upstream to offer a stable interface for services
downstream.
or functionality. A domain service can span multiple entities.
• Domain events: essential for microservice design, domain events notify other services when something happens; for instance, when a customer buys a book, a payment is rejected, or a user logs in. Microservices can simultaneously produce and consume events from the network.
• Repositories: repositories are persistent containers for aggregates, typically taking
the form of a database.
• Factories: factories are responsible for creating new aggregates.
Figure 20: The shipping aggregate consists of a package containing books shipped to an
address.
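As a minimal sketch of these building blocks, loosely following the shipping aggregate from Figure 20 (all names are illustrative), the tactical patterns might look like this in Python:

```python
from dataclasses import dataclass, field
from typing import Protocol
import uuid


@dataclass(frozen=True)
class PackageShipped:
    """Domain event published when the aggregate changes state."""
    package_id: str
    address: str


@dataclass
class Package:
    """Aggregate root: every change to the shipment goes through it."""
    address: str
    package_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    book_isbns: list = field(default_factory=list)
    shipped: bool = False

    def add_book(self, isbn: str) -> None:
        if self.shipped:
            raise ValueError("cannot add books to a shipped package")
        self.book_isbns.append(isbn)

    def ship(self) -> PackageShipped:
        self.shipped = True
        return PackageShipped(package_id=self.package_id, address=self.address)


class PackageRepository(Protocol):
    """Repository: a persistent container for Package aggregates."""
    def save(self, package: Package) -> None: ...
    def get(self, package_id: str) -> Package: ...
```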
Other methods such as Test-Driven Development (TDD) or Behavior-Driven Development
(BDD) may be enough for smaller, simpler systems. TDD is the fastest to start with and works best on single microservices or on applications consisting of only a few services.
On a bigger scale, we can use BDD, which forces us to validate the overall behavior with integration and acceptance tests. BDD may work well for low to medium-complexity designs.
You can also combine these three patterns, choosing the best one for each stage of development.
For example:
1. Identify microservices and their relationships with strategic DDD.
2. Model each microservice with tactical DDD.
3. Since each team is autonomous, they can choose to adopt BDD or TDD (or a mix of
both) for developing a microservice or a cluster of microservices.
DDD can feel daunting to learn and implement, but its value for developing a microservice
architecture is well worth the effort. If you’re interested in learning more, we recommend
picking up the relevant books by Eric Evans and Vaughn Vernon.
Chapter 4 — From Monolith to Microservices
In the previous chapters, we discussed the downsides of microservices and examined ways of
making a monolith remain viable despite growing pressures. The goal was never to dissuade you from microservices, only to encourage you to consider all options before taking action. In this chapter, we'll talk about the warning signs that crumbling monoliths show.
Overweight monoliths exhibit two classes of problems: degrading system performance and
stability, and slow development cycles. So, whatever we do next comes from the desire to
escape these technical and social challenges.
4.2 Slow development cycles
The second big problem is when making any change happen begins to take too much time.
There are some technical factors that are not difficult to measure. A good question to consider
is how much time it takes your team to ship a hotfix to production. Not having a fast delivery
pipeline is painfully obvious to your users in the case of an outage.
What’s less obvious is how much the slow development cycles are affecting your company over
a longer period of time. How long does it take your team to get from an idea to something
that customers can use in production? If the answer is weeks or months, then your company
is vulnerable to being outplayed by competition.
Nobody wants that, but that's where the compound effects of monolithic, complex code bases lead.
• Slow CI builds: anything longer than a few minutes leads to too much unproductive
time and task switching. As a standard for web apps we recommend setting the bar at
10 minutes. Slow CI builds are one of the first symptoms of an overweight monolith, but
the good news is that a good CI tool can help you fix it. For example, on Semaphore
you can split your test suite into parallel jobs.
• Slow deployment: this issue is typical for monoliths that have accumulated many
dependencies and assets. There are often multiple app instances, and we need to replace
each one without having downtime. Moving to container-based deployment can make
things even worse, by adding the time needed to build and copy the container image.
• Heavy reliance on the old guard, long onboarding for the newcomers: it takes months for someone new to become comfortable making a non-trivial contribution in a large code base. And yet, all new code is just a small fraction of the code that has already been written. The idiosyncrasies of old code affect and constrain all new code that is layered on top of the old one. This leaves those who have watched the
app grow with an ever-expanding responsibility. For example, having five developers
that are waiting for a single person to review their pull requests is an indicator of this
problem.
• Emergency-driven context switching: we may have begun working on a new
feature, but an outage has just exposed a vulnerability in our system. So, healing it
becomes a top priority, and the team needs to react and switch to solving that issue. By
the time they return to the initial project, internal or external circumstances can change
and reduce its impact, perhaps even make it obsolete. A badly designed distributed
system can make this even worse — hence one of the requirements for making one is
having solid design skills. However, if all code is part of a single runtime hitting one
database, our options for avoiding contention and downtime are very limited.
• Change of technology is difficult: our current framework and tooling might not be
the best match for the new use cases and the problems we face. It’s also common for
monoliths to depend on outdated software. For example, GitHub upgraded to Rails 3
four years after it was released. Such latency can either limit our design choices, or
generate additional maintenance work. For example, when the library version that
you're using is no longer receiving security updates, you need to find a way to patch it yourself.
Figure 21: A monorepo is a shared repository containing the monolith and the new microservices.
Figure 22: The testing pyramid.
Aim to run the tests as often on your local development machine as you do in your continuous
integration pipeline.
Figure 23: Use API gateways and HTTP reverse proxies to route requests to the appropriate
endpoint. You can toggle between the monolith and microservices on a very fine-grained
level.
Once the migration is complete, the gateways and proxies will remain – they are a standard
component of any microservice application since they offer forwarding and load balancing.
They can also function as circuit breakers if a service goes down.
Figure 24: Containerizing your monolith is a way of standardizing deployment, and it is an
excellent first step in learning Kubernetes.
Figure 25: Pick something easy to start, like a simple edge service.
A typical workflow for a feature-flag-enabled migration is:
1. Identify a piece of the monolith’s functionality to migrate to a microservice.
2. Wrap the functionality with a feature flag. Re-deploy the monolith.
3. Build and deploy the microservice.
4. Test the microservice.
5. Once satisfied, switch the flag off to disable the functionality in the monolith.
6. Repeat until the migration is complete.
Because feature flags allow us to deploy inactive code to production and toggle it at any
time, we can decouple feature releases from actual deployment. This gives developers an
enormous degree of flexibility and control.
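A minimal sketch of steps 2 and 5, using a hypothetical payments example (the flag name, URL, and functions are not from the book), could look like this:

```python
import os

import requests


def monolith_payments_enabled() -> bool:
    # Step 2 wraps the old code path in this flag. In practice the value would
    # come from a feature-flag service or config store; an environment variable
    # keeps the sketch simple.
    return os.getenv("MONOLITH_PAYMENTS_ENABLED", "true") == "true"


def legacy_charge(order_id: str, amount_cents: int) -> dict:
    # The monolith's original, in-process implementation.
    return {"order_id": order_id, "status": "paid", "handled_by": "monolith"}


def process_payment(order_id: str, amount_cents: int) -> dict:
    if monolith_payments_enabled():
        return legacy_charge(order_id, amount_cents)
    # Step 5: with the flag switched off, traffic goes to the new microservice.
    response = requests.post(
        "http://payments.internal/api/v1/charges",
        json={"order_id": order_id, "amount_cents": amount_cents},
        timeout=2,
    )
    response.raise_for_status()
    return response.json()
```

Because the flag is read at request time, switching it off reroutes traffic without another deployment, and switching it back on is the escape hatch if the microservice misbehaves.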
Layered monoliths are hard to disentangle – code tends to have too many dependencies
(sometimes circular), making changes difficult to implement.
A modular monolith is the next best thing to microservices and a stepping stone towards
them. The rule is that modules can only communicate over public APIs and everything is
private by default. As a result, the code is less intertwined, relationships are easy to identify,
and dependencies are clear-cut.
Figure 27: This Java monolith has been split into independent modules.
Two patterns can help you refactor a monolith: the Strangler Fig and the Anticorruption
Layer.
Figure 28: The monolith is modularized one piece at a time. Eventually, the old monolith is
gone and is replaced by a new one.
4.4.10 The anticorruption layer pattern
You will find that, in some cases, changes in one module propagate into others as you refactor
the monolith. To combat this, you can create a translation layer between rapidly-changing
modules. This anticorruption layer prevents changes in one module from impacting the rest.
Figure 29: The anticorruption layer prevents changes from propagating by translating calls
between modules and the monolith.
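A minimal sketch, with hypothetical model and field names, of what such a translation layer can look like:

```python
from dataclasses import dataclass


@dataclass
class LegacyCustomerRecord:
    """Shape used inside the old monolith (denormalized, stringly-typed)."""
    CUST_NAME: str
    CUST_MAIL: str
    STATUS_FLAG: str  # "A" = active, "I" = inactive


@dataclass
class Customer:
    """Clean model used by the new, refactored module."""
    name: str
    email: str
    active: bool


class CustomerAntiCorruptionLayer:
    """Translates the legacy model into the new module's model."""

    def to_new_model(self, record: LegacyCustomerRecord) -> Customer:
        return Customer(
            name=record.CUST_NAME.strip().title(),
            email=record.CUST_MAIL.lower(),
            active=record.STATUS_FLAG == "A",
        )
```

When the legacy representation changes, only the translator needs updating; the new module's model stays untouched.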
Figure 30: Decoupling data into separate and independent databases.
After decoupling, you'll have to put mechanisms in place to keep the old and new data in sync while the transition is in progress. You can, for example, set up a data-mirroring service or change the code so that transactions are written to both databases.
Figure 31: Use data duplication to keep tables in sync during development.
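As a rough sketch (the repository objects and table name are hypothetical), the dual-write variant can be as simple as wrapping every write:

```python
import logging

logger = logging.getLogger("order-writes")


def save_order(order: dict, legacy_db, new_orders_db) -> None:
    # The monolith's database stays the system of record during the transition.
    legacy_db.insert("orders", order)
    try:
        # Mirror the same write into the new, decoupled database.
        new_orders_db.insert("orders", order)
    except Exception:
        # Don't fail the user's request if the mirror write fails; record the
        # drift so a reconciliation job can repair it later.
        logger.exception("mirror write failed for order %s", order.get("id"))
```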
4.4.12 Add observability
The new system must be faster, more performant, and more scalable than the old one.
Otherwise, why bother with microservices?
You need a baseline to compare the old with the new. Before starting the migration, ensure
you have good metrics and logs available. It may be a good idea to set up a centralized logging and monitoring service, since it's a key component of observability in any microservice application.
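For example, a minimal structured-logging sketch (the field names, header convention, and service name are illustrative, not requirements) that makes logs easy to correlate once they are centralized:

```python
import json
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("orders-service")


def log_event(event: str, correlation_id: str, **fields) -> None:
    """Emit one JSON line per event; log aggregators can index the fields."""
    logger.info(json.dumps({
        "event": event,
        "correlation_id": correlation_id,
        "service": "orders-service",
        **fields,
    }))


# Example: reuse the caller's X-Request-ID header if present, or mint a new one.
correlation_id = str(uuid.uuid4())
log_event("order_created", correlation_id, order_id="o-123", total_cents=4200)
```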
• Production is a moving target: because microservices are independently deployable and built by autonomous teams, extra checks and boundaries are required to ensure they all still function correctly together when deployed.
All these characteristics force us to think of new testing strategies.
• Solitary unit tests: these should be used when we need the test result to always be
deterministic. We use mocking or stubbing to isolate the code under test from external
dependencies.
• Sociable unit tests: sociable tests are allowed to call other services. In this mode, we
push the complexity of the test into the test or staging environment. Sociable tests are
not deterministic, but we can be more confident in their results when they pass.
Figure 34: We can run unit tests in isolation using test doubles. Alternatively, we can allow
tested code to call other microservices, in which case we’re talking about sociable tests.
As you’ll see, balancing confidence vs. stability will be a running theme throughout the entire
chapter. Mocking makes things faster and reduces uncertainty, but the more you mock, the
less you can trust the results. Sociable tests, despite their downsides, are more realistic. So,
you’ll likely need to strike a good balance of both types.
For examples of solitary vs. sociable tests, check out this nice post from Dylan Watson on dev.to.
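Here is a minimal sketch of the two styles side by side (the function and URL are hypothetical); the solitary test patches the network call, while the sociable one talks to a real instance in the test environment:

```python
from unittest import mock

import requests


def get_book_price(isbn: str) -> int:
    response = requests.get(f"http://catalog.test/api/v1/books/{isbn}", timeout=2)
    response.raise_for_status()
    return response.json()["price_cents"]


def test_get_book_price_solitary():
    # Solitary: the catalog service is replaced by a mock, so the result is
    # deterministic and the test runs without any network.
    fake_response = mock.Mock(status_code=200)
    fake_response.json.return_value = {"price_cents": 1999}
    fake_response.raise_for_status.return_value = None
    with mock.patch("requests.get", return_value=fake_response):
        assert get_book_price("978-0") == 1999


def test_get_book_price_sociable():
    # Sociable: the call really reaches the catalog service deployed in the
    # test environment; slower and less deterministic, but more realistic.
    assert get_book_price("978-0") > 0
```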
• Consumer-side contract tests are written and executed by the downstream team.
During the test, the microservice connects to a fake or mocked version of the producer
service to check if it can consume its API.
• Producer-side contract tests are run in the upstream service. This type of test
emulates the various API requests clients can make, verifying that the producer matches
the contract. Producer-side tests let the developers know when they are about to break
compatibility for their consumers.
Figure 35: Contract tests can run on the upstream or downstream. Producer tests check
that the service doesn’t implement changes that would break depending services. Consumer
tests run the consumer-side component against a mocked version of the upstream producer
(not the real producer service) to verify that the consumer can make requests and consume
the expected responses from the producer. We can use tools such as Wiremock to reproduce
HTTP requests.
If both sides of the contract tests pass, the producers and consumers are compatible and
should be able to communicate. Contract tests should always run in continuous integration
to detect incompatibilities before deployment.
You can play with contract testing online in the Pact 5-minute getting started guide. Pact is an HTTP-based testing tool for writing and running consumer- and producer-side contract tests.
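The sketch below is a hand-rolled, Pact-less approximation of a consumer-side contract test (names, paths, and fields are hypothetical): the consumer code runs against a fake producer, and the test pins down the request/response pair both teams agreed on:

```python
from unittest import mock

import requests

# The contract both teams agreed on: request shape and minimum response fields.
CONTRACT = {
    "request": {"method": "GET", "path": "/api/v1/books/978-0"},
    "response": {"isbn": "978-0", "title": "DDD", "price_cents": 1999},
}


def fetch_book(isbn: str) -> dict:
    """Consumer-side client code under test."""
    response = requests.get(f"http://catalog.internal/api/v1/books/{isbn}", timeout=2)
    response.raise_for_status()
    return response.json()


def test_consumer_honours_contract():
    fake = mock.Mock(status_code=200)
    fake.json.return_value = CONTRACT["response"]
    fake.raise_for_status.return_value = None
    with mock.patch("requests.get", return_value=fake) as fake_get:
        book = fetch_book("978-0")
    # The consumer called the producer the way the contract says it should...
    called_url = fake_get.call_args.args[0]
    assert called_url.endswith(CONTRACT["request"]["path"])
    # ...and it only relies on fields the contract guarantees.
    assert book["price_cents"] == 1999
```

With Pact, the same interaction would be captured in a pact file and replayed against the real producer in its own pipeline.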
Integration tests are not concerned with evaluating the behavior or business logic of a service. Instead, we want to make sure that the microservices can communicate with one another and with their own databases. We're looking for things like missing HTTP headers and mismatched request/response pairings. As a result, integration tests are typically implemented at the interface level.
Figure 36: Using integration tests to check that the microservices can communicate with
other services, databases, and third party endpoints.
Check out Vitaly Baum’s post on stubbing microservices to see integration code tests in
action.
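As a self-contained sketch, with SQLite standing in for the service's own database and hypothetical table and function names, an interface-level integration test can be as small as this:

```python
import sqlite3


def save_book(conn: sqlite3.Connection, isbn: str, title: str) -> None:
    conn.execute("INSERT INTO books (isbn, title) VALUES (?, ?)", (isbn, title))


def find_book(conn: sqlite3.Connection, isbn: str):
    return conn.execute(
        "SELECT isbn, title FROM books WHERE isbn = ?", (isbn,)
    ).fetchone()


def test_books_roundtrip_against_real_database():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE books (isbn TEXT PRIMARY KEY, title TEXT NOT NULL)")
    # No mocks here: a real query against a real schema surfaces mismatched
    # column names or types that a unit test with stubs would never catch.
    save_book(conn, "978-0", "Building Microservices")
    assert find_book(conn, "978-0") == ("978-0", "Building Microservices")
```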
Figure 37: Component testing performs end-to-end testing on a group of microservices. Services outside the scope of the component are mocked.
There are two ways of performing component testing: in-process and out-of-process.
Figure 38: Component test running in the same process as the microservice. The test injects
a mocked service in the adapter to simulate interactions with other components.
In-process testing only works when the component is a single microservice. At first glance, component tests look very similar to end-to-end or acceptance tests. The only difference is
that component tests pick one part of the system (the component) and isolate it from the
rest. The component is thoroughly tested to verify that it performs the functions its users or
consumers need.
Figure 39: Component and end-to-end testing may look similar. The difference is that end-to-end testing exercises the complete system (all the microservices) in a production-like environment, whereas a component test targets an isolated piece of the whole system. Both types of tests check the behavior of the system from the user (or consumer) perspective, following the journeys a user would perform.
We can write component tests with any language or framework, but the most popular ones
are probably Cucumber and Capybara.
4.6.6 Out-of-process component testing
Out-of-process tests are appropriate for components of any size, including those made up of
many microservices. In this type of testing, the component is deployed — unaltered — in a
test environment where all external dependencies are mocked or stubbed out.
Figure 40: In this type of component tests the complexity is pushed out into the test
environment, which should replicate the rest of the system.
To round out the concept of contract testing you may explore example code for contract
testing on Java Spring. Also, if you are a Java developer, this post has code samples for
testing Java microservices at every level.
Figure 41: End-to-end tests are automated tests that simulate user interaction. Only external third-party services might be mocked.
As depicted by the testing pyramid, E2E tests are the least numerous, which is good because
they are usually the hardest to run and maintain. As long as we focus on the user’s journeys
and their needs, we can extract a lot of value with only a few E2E tests.
Chapter 5 — Running Microservices
A microservice application is a group of distributed programs that communicate over networks,
occasionally interfacing with third-party services and databases. Microservices, by their
networked nature, present more points of failure than a traditional monolith. As a result, we need a different, broader approach to running them.
Figure 42: Two paths ahead: one goes from processes, to containers, and ultimately, to Kubernetes. The other goes the serverless route.
Figure 43: The most basic form of microservice deployment uses a single machine. The
application is a group of processes coupled with load balancing.
Figure 44: Custom scripts are required to deploy the executables built in the CI pipeline.
This is the best option for learning the basics of microservices. You can run a small-scale microservice application to get familiar with the moving parts. A single server will take you far until you need to expand, at which time you can upgrade to the next option.
Figure 45: The load balancer is still a single point of failure. To avoid this, multiple balancers can run in parallel.
Horizontal scaling is not without its problems, however. Going past one machine introduces a few critical issues that make troubleshooting much more complex, and the typical problems that come with the microservice architecture begin to emerge:
• How do we correlate log files distributed among many servers?
• How do we collect sensible metrics?
• How do we handle upgrades and downtime?
• How do we handle spikes and drops in traffic?
These are all problems inherent to distributed computing, and are something that you will
experience (and have to deal with) as soon as more than one machine is involved.
This option is excellent if you have a few spare machines and want to improve your application’s
availability. As long as you keep things simple, with services that are more or less uniform
(same language, similar frameworks), you will be fine. Once you pass a certain complexity
threshold, you’ll need containers to provide more flexibility.
• A runaway process can consume all the memory or CPU.
• Deploying and monitoring the microservices is a brittle process.
All these shortcomings can be mitigated with containers. Containers are packages that
contain everything a program needs to run. A container image is a self-contained unit that
can run on any server without having to install any dependencies or tools first (other than
the container runtime itself).
Containers provide just enough virtualization to run software in isolation. With them, we get
the following benefits:
• Isolation: contained processes are isolated from one another and the OS. Each container
has a private filesystem, so dependency conflicts are impossible (as long as you are not
abusing volumes).
• Concurrency: we can run multiple instances of the same container image without
conflicts.
• Less overhead: since there is no need to boot an entire OS, containers are much more
lightweight than VMs.
• No-install deployments: installing a container is just a matter of downloading and
running the image. There is no installation step required.
• Resource control: we can put CPU and memory limits on containers so they don’t
destabilize the server.
Figure 46: Containerized workloads require an image build stage on the CI/CD.
We can run containers in two ways: directly on servers or via a managed service.
5.2.4 Containers on servers
This approach replaces processes with containers since they give us greater flexibility and
control. As with option 2, we can distribute the load across any number of machines.
Figure 47: Wrapping microservice processes in containers makes them more portable and flexible.
Figure 48: Elastic Container Service (ECS) with Fargate allows us to run containers without
having to rent servers. They are maintained by the cloud provider.
Either container option will suit small to medium-sized microservice applications. If you’re
comfortable with your vendor, a managed container service is easier, as it takes care of a lot
of the details for you.
For large-scale deployments, needless to say, both options will fall short. Once you get
to a certain size, you’re more likely to have team members that have experience with (or
willingness to learn about) tools such as Kubernetes, which completely change the way
containers are managed.
5.2.6 Orchestrators
Orchestrators are platforms specialized in distributing container workloads over a group of
servers. The most well-known orchestrator is Kubernetes, a Google-created open-source project maintained by the Cloud Native Computing Foundation.
Orchestrators provide, in addition to container management, extensive network features like
routing, security, load balancing, and centralized logs — everything you may need to run a
microservice application.
Figure 49: Kubernetes uses pods as the scheduling unit. A pod is a group of one or more
containers that share a network address.
With Kubernetes, we step away from custom deployment scripts. Instead, we codify the
desired state with a manifest and let the cluster take care of the rest.
Figure 50: The continuous deployment pipeline sends a manifest to the cluster, which takes
the steps required to fulfill it.
Kubernetes is supported by all cloud providers and is the de facto platform for microservice
deployment. As such, you might think this is the absolute best way to run microservices. For many companies it is, but there are also a few things to keep in mind:
• Complexity: orchestrators are known for their steep learning curve. It’s not uncommon
to shoot oneself in the foot if not cautious. For simple applications, an orchestrator is
overkill.
• Administrative burden: maintaining a Kubernetes installation requires significant
expertise. Fortunately, every decent cloud vendor offers managed clusters that take
away all the administration work.
• Skillset: Kubernetes development requires a specialized skillset. It can take weeks to
understand all the moving parts and learn how to troubleshoot a failed deployment.
Transitioning into Kubernetes can be slow and decrease productivity until the team is
familiar with the tools.
Check out these tutorials on deploying applications with Kubernetes:
• A Step-by-Step Guide to Continuous Deployment on Kubernetes
• CI/CD for Microservices on DigitalOcean Kubernetes
• Kubernetes vs. Docker: Understanding Containers in 2022
• Continuous Blue-Green Deployments With Kubernetes
Kubernetes is the most popular option for companies making heavy use of containers. If
that’s you, choosing an orchestrator might be the only way forward. Before making the
jump, however, be aware that a recent survey revealed that the greatest challenge for most
companies when migrating to Kubernetes is finding skilled engineers. So if you’re worried
about finding skilled developers, the next option might be your best bet.
Figure 51: Serverless functions scale automatically and have per-usage billing.
It’s an entirely different paradigm with different pros and cons. On the plus side, we get:
• Ease of use: we can deploy functions on the fly without compiling or building container
images, which is great for trying things out and prototyping.
• Easy to scale: you get (basically) infinite scalability. The cloud will provide enough
resources to match demand.
• Pay per use: you pay based on usage. If there is no demand, there’s no charge.
The downsides, nevertheless, can be considerable, making serverless unsuitable for some types
of microservices:
• Vendor lock-in: as with managed containers, you’re buying into the provider’s
ecosystem. Migrating away from a vendor can be demanding.
• Cold starts: infrequently-used functions might take a long time to start. This happens
because the cloud provider spins down the resources attached to unused functions.
• Limited resources: each function has a memory and time limit–they cannot be
long-running processes.
• Limited runtimes: only a few languages and frameworks are supported. You might
be forced to use a language that you’re not comfortable with.
• Unpredictable bills: since the cost is usage-based, it's hard to predict the size of the invoice at the end of the month. A usage spike can result in a nasty surprise.
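For orientation, here is a minimal function in the AWS Lambda style (the handler signature follows Lambda's convention; the event fields shown are hypothetical):

```python
import json


def handler(event, context):
    """Invoked once per request; the platform scales instances up and down."""
    body = json.loads(event.get("body") or "{}")
    isbn = body.get("isbn", "unknown")
    # Keep the work short: functions have memory and execution-time limits.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"isbn": isbn, "status": "reserved"}),
    }
```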
Learn more about serverless below:
• AWS Serverless With Monorepos
• A CI/CD Pipeline for Serverless Cloudflare Workers
Serverless provides a hands-off solution for scalability. Compared with Kubernetes, it doesn’t
give you as much control, but it's easier to work with as you don't need specialized skills for serverless. Serverless is an excellent option for small companies that are rapidly growing,
provided they can live with its downsides and limitations.
Figure 52: Each microservice has a separate CI/CD pipeline.
Micro-deployment is a side effect of organizing the code into multirepos. For reference, this
is how we currently deploy microservices for Semaphore CI/CD.
Figure 53: Micro-deployments to the hosted version of the application combined with releases
for the on-premise instances of the product.
The steps needed to release an application organized into multirepos usually go like this:
1. In each repo, tag the versions of microservices that will go into the release.
2. For each microservice, build a Docker image and map the microservice version to the
image tag.
3. Test the release candidate in a separate test environment. This usually involves a mix
of integration testing, acceptance testing, and perhaps some manual testing.
4. Go over every repository and compile a list of changes for the release changelog before
updating the documentation.
5. Identify hotfixes required for older releases.
6. Publish the release.
Considering that an application can consist of dozens of microservices (and repositories), it’s
easy to see how releasing this way could entail a lot of repeated admin overhead.
Figure 54: A monorepo contains all the microservices and a unified CI/CD deployment
pipeline.
The monorepo strategy makes microservices feel more like a monolith, but in a good way:
• Creating a release is as simple as creating branches and using tags.
• A single CI/CD process standardizes testing and deployment.
• Integration and acceptance testing are a lot easier to implement.
• A single Git history is much clearer to understand, simplifying the process of writing a
changelog and updating documentation.
• Git has no built-in code protection features. So, if trust is a factor, we should use a feature like Bitbucket's or GitHub's CODEOWNERS.
• Finding errors in the CI build can feel overwhelming when the test suite spans many
separate services. Features like test reports can help you identify and analyze problems
at a glance.
• A monorepo CI/CD configuration can have a lot of repetitive parts. We can use
environment variables or parametrized pipelines to reduce boilerplate.
Figure 56: Multirepos make it challenging to find the root cause of a failure. It’s difficult to
find the “last working microservice configuration”.
Monorepos don’t suffer from this. A monorepo captures the complete snapshot of the system.
We have all the details needed to return to any point in the project’s history. So, we’ll always
be able to find an appropriate place to retreat to when there’s a problem.
Figure 57: A monorepo has all the microservice relationship details needed to go back to any
point in the project’s history.
Parting Words
Hopefully, by now you have a much better grasp of microservices, what they are, what they
aren’t, and what a migration from a monolith would entail. This may be the end of the
handbook but certainly not the end of the road. We wish you the best of luck on your
microservice journey!