Microservices and Containerization
Table of Contents

HIGHLIGHTS AND INTRODUCTION
Welcome Letter
Guillaume Hugot, Director of Engineering at DZone

DZONE RESEARCH
Microservices Orchestration: Getting the Most Out of Your Services
Christian Posta, VP, Global Field CTO at Solo.io

ADDITIONAL RESOURCES
Scalability has always been one of the greatest challenges in software development. The computer science industry has grown at an unprecedentedly rapid rate in its short 50 years. Software products have become more complex over time, interacting with more heterogeneous and massive amounts of data and being developed by teams with varied expertise at the same time. Different approaches toward this complexity have been explored.

Microservices and containerization are two trending words that have gained traction over the past several years. They are complementary attempts to address the challenges of complexity with the same idea: reducing everything into the smallest and simplest possible pieces — microservices for software architecture and containers for its deployment.

Despite the terms' recent gains in popularity, microservices architecture is nothing new and has been part of the ecosystem for a couple of decades. The first research in this direction — toward making code less brittle and easier to scale — was done in 1999 at HP Labs, but this approach started to become popular only six years later, when the term "Micro-Web-Services" was introduced by Peter Rodgers. Then, in 2006, Google began working on Linux control groups, which isolate the resource usage of a collection of processes — a first step toward the virtualization technology that would eventually become containerization.

Where microservices architecture promises that every class could become a service, containerization offers to deploy each one independently with exactly the requirements it needs.

Today, we are in the first decade in which the coupled microservices/containerization stack is mature and widely used in industry, and its adoption continues to grow every year. Industry analysts predict that the global microservices architecture market will grow at a compound annual rate, reaching more than $2 billion within a few years.

As a reader of this year's "Microservices and Containerization" Trend Report, you are probably familiar with the benefits of this architecture — maybe you have even used it in one of your projects already. Experts from the DZone community will share their diverse opinions, analyses, and perhaps some unexpected insights that, we hope, will give you a better understanding of the state of microservices and containers in 2022.

Sincerely,

Guillaume Hugot
Guillaume Hugot is an engineer with 15 years of experience specializing in web technologies and the media industry, and he serves as DZone's head of engineering. At DZone, Guillaume leads project development and ensures we always deliver the best experience to our site visitors and our contributors.
DZone Publications

Meet the DZone Publications team! Publishing Refcards and Trend Reports year-round, this team can often be found reviewing and editing contributor pieces, working with authors and sponsors, and coordinating with designers. Part of their everyday work includes collaborating across DZone's Production team to deliver high-quality content to the DZone community.

DZone Mission Statement
At DZone, we foster a collaborative environment that empowers developers and tech professionals to share knowledge, build skills, and solve problems through content, code, and community. We thoughtfully — and with intention — challenge the status quo and value diverse perspectives so that, as one, we can inspire positive change through technology.

Melissa Habit
Publications Manager at DZone
@dzone_melissah on DZone
@melissahabit on LinkedIn

Melissa co-leads the publication lifecycles for Trend Reports and Refcards, from managing schedules and workflows to conducting editorial reviews with DZone authors and facilitating layout processes. She also provides content support for Sponsors alongside her Production teammates. At home, Melissa passes the days reading, knitting, sewing, and (most importantly) adoring her cats, Bean and Whitney.

Lucy Marcum
Publications Coordinator at DZone
@LucyMarcum on DZone
@lucy-marcum on LinkedIn

As a Publications Coordinator, Lucy spends much of her time working with authors, from sourcing new contributors to setting them up to write for DZone. She also edits publications and creates different components of Trend Reports. Outside of work, Lucy spends her time reading, writing, running, and trying to keep her cat, Olive, out of trouble.

Lindsay oversees the Publication lifecycles end to end, delivering impactful content to DZone's global developer audience. Assessing Publications strategies across Trend Report and Refcard topics, contributor content, and sponsored materials, she works with both DZone authors and Sponsors. In her free time, Lindsay enjoys reading, biking, and walking her dog, Scout.

John works as a technical architect, teaches undergrads whenever they will listen, and moonlights as a research analyst at DZone. He wrote his first C in junior high and is finally starting to understand JavaScript NaN%. When he isn't annoyed at code written by his past self, John hangs out with his wife and cats, Gilgamesh and Behemoth, who look and act like their names.
From May–June 2022, DZone surveyed software developers, architects, and other IT professionals in order to understand how
microservices are being developed and deployed.
Methods: We created and distributed a survey to a global audience of software professionals. Question formats included
multiple choice, free response, and ranking. Survey links were distributed via email to an opt-in subscriber list, popups on
DZone.com, the DZone Core Slack Workspace, and LinkedIn. The survey was open June 14–29, 2022 and recorded 346 responses.
In this report, we review some of our key research findings. Many secondary findings of interest are not included here.
Research Target One: Expected vs. Actual Benefits and Pains of Adopting Microservices
Motivations:
1. The buzz around any buzzword exerts non-technical social pressure toward doing the thing signified by the buzzword.
Moreover, in engineering, deciding whether to do a thing always involves tradeoffs. Deciding to do a new thing involves
imagination; deciding to reject an old thing involves arguments to keep it. Microservices still generate buzz, so software
professionals will feel pressure to consider using them. We wanted to know how people who have implemented at least
one microservice feel before and after implementation.
2. The hard problems posed by distributed systems — and therefore by microservices — are now increasingly handled by
mature support mechanisms (Kubernetes, API gateways, programmable infrastructure, etc.). We imagine that the hard
problems posed by microservices will be increasingly abstracted away, and we wanted to begin studying whether this
turns out to be the case over time in practice.
3. Nothing ignites an architecture-graph-theoretic flame war more than "fat nodes or fat edges?" We wanted to get a
current picture of software professionals' experience with the latest edge- or system-thickening architectural fashion (à la
service-oriented architecture [SOA] a decade ago).
Rate the following benefits of microservices by how much benefit you expected microservices to provide (left column) vs. how
much benefit microservices actually provided (right column):
• Actual benefits were equal to expected benefits of reusability, fault isolation, and runtime independence.
• Actual benefits were less than expected benefits of bounded context facilitation, cloud-native functionality,
decentralized governance, fault tolerance, flexibility, resource isolation, loose coupling, CI/CD facilitation, single
Observations:
1. We would be surprised if results were much otherwise. Any buzzword is likely to be worse than the buzz, and even experienced engineers' (experience-crumbled) hopes regularly exceed grim reality. That the differences were consistently small suggests an impressive perceived success rate of microservices across a wide range of possible benefits. Moreover, it is worth noting that respondents' non-disappointment with respect to reusability, fault isolation, and runtime independence clusters around (in the object-oriented metaphor) the "phospholipid bilayer" of the microserviced system's "cell membranes": While fault isolation expectations did not exceed actuals, loose coupling (the "membrane channels") did.
2. Microservices consistently benefited security slightly more than expected. The two explicitly security-related purported benefits of microservices available as predefined survey answers were both reported to have higher actual than expected benefit levels. This was not the case for any other domain.
We hypothesize two reasons for this. First, access to each microservice is less likely to grant the intruder free access to the
rest of the system because subsystem boundaries within monoliths are less likely to be "hard" than boundaries between
microservices. Second, because microservices coordinate over narrow (usually HTTP) channels, the points of interaction (i.e.,
the API contracts) are more likely to receive sustained design thought in a microservices vs. monolith architecture.
We imagine these two security-related attributes mutually reinforce: A system of microservices has more bulkheads,
and the bulkheads themselves are built with more attention and expertise (vs. a monolith).
3. The largest bucket of free-response benefits reported (12 of 34) relates to release/DevOps. In future surveys, we will include more granular DevOps-related purported benefits of microservices (or whatever the latest flavor of service-oriented architecture is by then) than the four we included.
Rate the following pains of microservices by how much pain you expected microservices to cause (left column) vs. how much
pain microservices actually caused (right column):
• Actual pains were equal to expected pains of heavyweight service contracts, API versioning, complexity,
decentralized access control, uncoordinated CI/CD, endpoint proliferation, hidden complexity, data consistency,
performance overhead, query complexity, performance testing, distributed storage heterogeneity, cascading
failures, design complexity, service coordination, logging, debugging, and source repository/package complexity.
• Actual pains were less than expected pains of communication heterogeneity, integration testing, distributed
transactions, and source repository/package complexity.
Observations:
1. From the greater overlap between actual and expected pains of microservices (vs. the overlap between actual and expected benefits), we conclude that the pains of microservices are better understood by software professionals, and that the positive buzz may still have space to do some good.
2. The simpler problems appear to be better understood, while the deeper/fuzzier problems appear to inspire
some unwarranted fear. Any developer who has tried to implement an e-commerce system over the web — let
alone wrestled with CAP at the level of a database management system — is likely to react strongly to greater
distributedness of a proposed architecture, and (in our experience) masters of source/package management are many
fewer than skilled developers.
We imagine that fear of source/package and integration testing complexity will decrease as source and test
management tools mature, but we do not expect that trepidation over distributed transactions will ever be cleanly
abstracted away (just ask the sub-cerebral complexities of spinal motor control).
2. Performance, because:
• Performance bottlenecks are theoretically harder to isolate in more monolithic architecture (an opportunity
for microservices to improve performance).
• Service-orientation seems to encourage higher-level, less procedural, more modular design thinking.
• Well-defined interfaces between services should make meaningful integration tests easier to write.
• Delayed refactoring (facilitated by heavy decoupling) may result in lower software quality over time.
4. Technical debt, because:
• The small size of each microservice makes technical debt more likely to decrease patently (because any given part
of a system need not drag down other parts, like an individualist Californian) but increase latently (because the
independent growth of each microservice may result in emergent fractioning of the system, like Dutch mathematics).
The first, they say, is your brain on monoliths. The second, the story goes, is your brain on microservices. That is, decoupling
means that microservices should, in theory, result in increased feature velocity. We wanted to find out if this is what actually
happens. So we asked:
In your experience, adopting a microservices architecture has resulted in: {Higher feature velocity, Lower feature velocity, No
change in feature velocity, I don't know}
Results (n=336):
[Figure 1: distribution of responses; only a 5.1% slice label survives this extraction]
Of course, one might eyebrow-cock a quiz: But what if those features were released prematurely? One disadvantage
of radical decoupling is that nobody takes a system-wide view, which makes side effects more likely. This is possible,
but the not-quite-as-but-still-impressively-strong perceived positive effect of microservices on software quality
enervates such an objection.
2. Moreover, when microservices increase feature velocity, post-deployment incidents tend to decrease. Only 15.9% of
respondents who reported that microservices increase feature velocity reported incidents or rollbacks almost every
deployment vs. 25.6% of respondents who reported that microservices decrease feature velocity.
This vindicates microservices and continuous delivery together: Microservices facilitate a technique (rapid releases) that, as shown in other research published here and elsewhere, tends to decrease incidents, without incidentally taking away that technique's power.
3. Affirmation of software ↔ organizational isomorphism (Conway's Law) correlates with increase in feature velocity as
a result of adopting microservices. Specifically, 39.7% of respondents who reported that microservices result in higher
feature velocity agree with Conway's Law, and 26.2% agree strongly, while 34.9% of respondents who reported that
microservices result in lower feature velocity agree, and 23.3% agree strongly.
The overlap suggests, loosely and in proportion to respondents' omniscience, that the impact of microservices on
feature velocity is related to organizational interactions of microservices.
4. Experience personally designing microservices also correlates with increase in feature velocity as a result of adopting
microservices. 77.3% of respondents who reported having designed microservices personally also reported higher feature
velocity as a result of microservices adoption vs. only 51.7% of respondents who haven't personally designed microservices.
Since deeper technical experience generally implies greater understanding, we are more inclined to accept the higher
feature velocity correlation claimed by the personally experienced microservices developers.
5. The impact of microservices on feature velocity can perhaps be bottlenecked by reliance on an external database to
maintain application state — a common "we-cannot-really-afford-to-trust-consensus" pattern that we have observed
in enterprise software development. That is, 37.1% of respondents who reported no change in feature velocity as a result
of microservices adoption reported that they also rely on an external database to maintain application state a little too
often vs. 19.6% of respondents who reported higher feature velocity (and lower for other velocity changes). Respondents
who reported higher feature velocity from microservices are also significantly more likely to report reliance on an
external database for application state just the right amount (40% vs. only 8.6% of respondents who reported no
change in feature velocity).
We hypothesize that a fuller, if more technically challenging, commitment to distributed design might multiply the
feature-velocity-increasing benefits of microservices.
In a monolith, one can look at one metric of time spent querying and compare that with a second metric of time spent doing
arithmetic. However, in a set of interacting microservices, each service has its own time spent querying and time spent doing
arithmetic. And the harder boundaries between each distinct service make simple performance-analytical abstraction over all
services (e.g., summing database query time over all services – something that is tricky enough even over a cluster of identical
containers) slouch toward the worst kind of iffiness. So prima facie, we might guess that microservices performance is harder
to tune. To test our hypothesis against the broader software development world's experience, we asked:
Agree/disagree: Microservices make performance engineering, tuning, and monitoring more difficult. {Strongly agree, Agree,
Neutral, Disagree, Strongly disagree, Not applicable}
[Figure 2: distribution of responses (Strongly agree, Agree, Neutral, Disagree, Strongly disagree, Not applicable); the slice percentages 0.6%, 7.6%, 15.5%, 16.7%, 26.7%, and 33.0% could not be reliably matched to labels in this extraction]
Observations:
1. Just over half of respondents (56.1%) think that microservices make performance engineering, tuning, and monitoring
more difficult; only about a quarter (26.7%) disagree. The general opinion of software professionals is clearly on the "fear
performance tuning microservices" side.
2. This risk-skewed picture is reinforced by segmenting respondents into those who have personally designed microservices
and those who have not. 33.8% of respondents who reported having personally designed microservices disagree or
strongly disagree that microservices make performance engineering difficult vs. 38.6% of respondents who have not.
The difference is small but significant: Experience with microservices tends to strengthen the impression that
optimizing microservices performance is hard.
3. Again, affirmation of software ↔ organizational isomorphism correlates with affirmation of the difficulty that microservices pose for performance tuning. Of those who agree with Conway's Law, 35.7% agree that microservices impact performance engineering and 20.8% strongly agree, vs. only 20.6% and 2.9%, respectively, of those who disagree with Conway's Law.
Because Conway's Law has only very high-level technical teeth, we suppose that the orthogonality of performance
tuning to microservices nodes — one does not have a dedicated "performance microservice" except perhaps for
monitoring — suggests some organizational impedance mismatch (apart from the technical reasons for performance
worries outlined in the introduction to this question above). Perhaps the domain-specific specialization permitted by
microservices does not map well to the specialization of performance engineering.
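The aggregation difficulty described in the introduction to this question, summing a metric such as database query time over heterogeneous services, can be sketched as follows. This is an illustrative sketch, not from the report; the service names and the metric key are hypothetical, and it shows how the sum silently under-counts when a service does not expose the metric at all:

```python
def total_query_time(per_service_metrics: dict[str, dict[str, float]]) -> float:
    """Sum a hypothetical 'db_query_seconds' metric across services,
    tolerating services that do not expose it (a common source of
    iffiness: the total silently under-counts)."""
    return sum(
        metrics.get("db_query_seconds", 0.0)
        for metrics in per_service_metrics.values()
    )

# In a monolith there is one process to ask; here every service reports
# its own number, possibly with different collection windows.
scraped = {
    "orders":    {"db_query_seconds": 12.4, "cpu_seconds": 3.1},
    "inventory": {"db_query_seconds": 7.9},
    "gateway":   {},  # exposes no DB metric: it has no database
}
print(total_query_time(scraped))  # ≈ 20.3
```

Nothing in the sum distinguishes a service that spent zero time querying from one that never reported, which is one concrete reason cross-service performance analysis is harder than monolith profiling.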
Surely modularity makes software better. – The Cell That Invented Organelles
Separation of concerns is a luxury, and in the long run, luxury is bad design. – Also Every Biological System
Everything that does not absolutely need to be called by some other class absolutely must be declared private. Also, have you ever heard of our lord and savior, Dependency Injection? – Some Annoying PR Reviewer
We suppose that microservices might have ambivalent effects on software quality. On the one hand, separation of concerns (to
our OO-indoctrinated brains anyway) seems like generally a good thing. Again, clean API design results in better integration tests
— which should, theoretically, make more accessible the sweet middle ground between trivial "does this concat(a,b) method
in fact return ab" overly unit-y unit tests and cosmic "if I click this button does every document in this multinational corporation
get TF-IDFed after 52 hours" overly integration-y integration tests. On the other hand, in our experience, high-level architectural
paradigms do less to improve software quality than a good craftsperson's commitment to excellence and attention to detail.
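The "sweet middle ground" between overly unit-y and overly integration-y tests can be sketched as a contract-level test at a service boundary. This is a hedged illustration, not from the report; the endpoint handler and response schema are hypothetical:

```python
def get_order(order_id: str) -> dict:
    """Stand-in for a microservice endpoint handler with its own
    private (here, in-memory) data store."""
    orders = {"o1": {"id": "o1", "total_cents": 950, "currency": "USD"}}
    if order_id not in orders:
        return {"error": "not_found"}
    return orders[order_id]

def test_contract():
    # The contract: a found order carries exactly these fields, with
    # money as integer cents — the details integration partners rely on.
    body = get_order("o1")
    assert set(body) == {"id", "total_cents", "currency"}
    assert isinstance(body["total_cents"], int)
    # Unknown IDs yield a structured error, not an exception.
    assert get_order("nope") == {"error": "not_found"}

test_contract()
print("contract tests passed")
```

A test like this is neither a trivial unit test nor a whole-system exercise: it pins down only the API shape that other services depend on, which is exactly what clean service boundaries make cheap to verify.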
In your experience, adopting a microservices architecture has resulted in: {Higher software quality, Lower software quality,
No change in software quality, I don't know}
Results (n=322):
[Figure 3: distribution of responses; only a 3.1% slice label survives this extraction]
Observations:
1. Nearly two thirds of respondents (65.8%) reported that adopting microservices resulted in higher software quality. This
is quite an endorsement. In a field as complex and volatile as software engineering, any paradigm shift is likely to have
diffuse effects on something as broad as "quality." In future surveys, we will attempt to dive deeper into causes.
2. Experience designing microservices correlates positively with the relation of microservices adoption and software quality:
69.4% of respondents who have personally designed microservices reported that microservices adoption resulted in
higher quality vs. 53.7% of respondents who have not personally designed microservices. Similarly, 24.1% of respondents
who have not personally designed microservices reported that microservices adoption resulted in lower software quality
vs. 11% of respondents who have personally designed microservices.
Here, however, we must be extra careful: We imagine it might be psychologically more difficult for someone who has
designed something to suppose that it resulted in lower software quality — a general and highly charged metric for
anyone interested in excellence, as any good engineer is — than for that person to suppose that what they designed
resulted in (for instance) lower feature velocity, which is not intrinsically tied to technical excellence.
We suppose microservices, in principle, are likely to have ambivalent effects on technical debt: lower up front (because one
microservice's internal decisions affect other services' decisions minimally), but perhaps easier to spiral out of control in the
long run (because the abstraction leakage from each microservice is invisible on its own but compounds in the aggregate).
In your experience, adopting a microservices architecture has resulted in: {More technical debt, Less technical debt, No
change in technical debt, I don't know}
Results (n=320):
[Figure 4: distribution of responses; only a 3.1% slice label survives this extraction]
Observations:
1. Our hypothesis about the ambivalence of microservices' effect on technical debt was confirmed: The difference between
"more technical debt" and "less technical debt" responses was minimal (39.1% vs. 41.3%, respectively). Together, however,
these responses make up a large majority (80.4%), so it seems highly likely that microservices have some effect on
technical debt. In future surveys, we intend to ask separate questions about up-front vs. long-term technical debt.
2. Interestingly, no major differences appeared when we segmented answers to the technical debt question by whether
respondents have mostly built microservices in greenfield environments or split existing systems into microservices.
Mostly-brownfield respondents were slightly more likely to report an increase in technical debt as a result of
microservices adoption, but the difference is only 3.9%.
This lack of coupling encourages the hope that refactoring into microservices can be successful, while also denying
that microservices adoption by itself magically washes away technical debt.
3. Junior respondents (≤ five years' experience as a software professional) were somewhat more likely to report that
microservices resulted in less technical debt (48.2% vs. 40.6% of senior respondents), and the difference is balanced by
a greater percent of senior respondents reporting no change in technical debt as a result of microservices adoption
(19.3% vs. 8.9%).
We are not sure how to interpret these results. It seems improbable that junior respondents simply understand microservices better than senior respondents, many of whom are likely to have built software when SOA was already cliché. Junior respondents were also more sanguine about microservices' effect on other software engineering desiderata, reporting higher software quality post-microservices as well. But we can also imagine that junior respondents have a bias toward refactoring: As the size of the world's existing codebase grows, the amount of refactoring required grows over time, possibly at geometric rates, though the point stands even if growth is linear. We intend to address this broader question in future research (on refactoring in general).
2. Service orientation is a network-level expression of the general principle of "careful boundary definition" that is
exemplified especially by object-oriented and domain-driven design paradigms. We wanted to see if microservices
manifest some kind of "fractal" thinking about software design from class to microservices-aggregate level.
3. The "degrees of atomicity" problem is, in principle, greatly complicated by microservices. For instance, simple database snapshots-plus-rollbacks are enough to enforce transactional atomicity in a monolith — allowing the database engine to handle the physical difficulties of juggling any "undo" queue — but not in a microservice, which has no built-in integrity-defining algebra. We wanted to understand how software professionals approach this problem.
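In the absence of a database engine that spans services, atomicity is commonly approximated with explicit compensating actions (the "saga" pattern, one of the problems workflow and orchestration engines exist to manage). The following is a minimal sketch of that idea, not taken from the report; the step names are hypothetical:

```python
def run_saga(steps):
    """Each step is a (do, undo) pair. Run each do() in order; if one
    raises, run the undo() of every completed step in reverse order
    (best effort) and re-raise."""
    done = []
    try:
        for do, undo in steps:
            do()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()  # compensations may themselves fail in real systems
        raise

def fail():
    raise RuntimeError("shipping service unavailable")

log = []
steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (lambda: log.append("charge card"),   lambda: log.append("refund card")),
    (fail,                                lambda: None),
]
try:
    run_saga(steps)
except RuntimeError:
    pass
print(log)  # ['reserve stock', 'charge card', 'refund card', 'release stock']
```

Unlike a database rollback, the compensations here are ordinary application code: nothing guarantees they succeed, which is precisely the "no in-built integrity-defining algebra" problem described above.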
Does your organization use a workflow or orchestration engine for state handling, monitoring, and reporting?
{Yes, No, I don't know}
Results (n=316):
Table 1

              Percent   n=
Yes           69.0%     218
No            24.7%     78
I don't know  6.3%      20
Observation: Broadly speaking, software professionals have not hand-waved the "operating system" role to "emergent
properties of the microservices system": 69% of respondents' organizations use a workflow or orchestration engine for state
handling, monitoring, and reporting.
We asked:
How often do you take the following approaches to software development and design? {Test-driven development (TDD),
Behavior-driven development (BDD), Domain-driven design (DDD), Object-oriented analysis and design (OOAD)}
Results (n=332):
[Figure 5: frequency of each approach. Only the OOAD row survives this extraction: Object-oriented analysis and design (OOAD): 5.3%, 10.9%, 25.2%, 34.2%, 24.5% (n=332)]
We used these results as a baseline for segmentation against answers to questions regarding microservices-specific experiences.
We record only the most interesting results here but welcome interest in other correlations at [email protected].
[Figure 6: "Have you personally designed microservices?" (Yes / No / The question has no meaning because "microservices" is undefined), segmented by respondents who often vs. rarely or never use OO; bar values are not recoverable from this extraction]
This result is as we anticipated, according to the OO-microservices homology: Significantly more respondents who often use
OO have personally designed microservices (84.5% vs. 68.6% of those who rarely or never use OO).
[Figure 7: feature velocity resulting from microservices adoption (Higher / Lower / No change / I don't know), segmented by respondents who often vs. rarely or never use OO; bar values are not recoverable from this extraction]
We take this to suggest that object-oriented thinkers are more likely to deliver features faster after adopting microservices
because microservices are objects writ large, so technical facility with object decoupling is likely to transfer to technical facility
with service decoupling.
[Figure 8: software quality resulting from microservices adoption (Higher / Lower / No change / I don't know), segmented by respondents who often vs. rarely or never use OO; bar values are not recoverable from this extraction]
The apparent impact of heavy OO experience on the software quality increase attributed to microservices (68.5% vs. 49% of rarely-or-never-OO respondents) is even stronger than the other OO-microservices correlations. We read this effect on something as broad and vague as software quality as evidence of the OO-microservices homology operating at the paradigm level (while the impact on feature velocity might be a more specific technical effect of, say, SOLID design "trickling up").
[Figure 9: "Have you personally designed microservices?" (Yes / No / The question has no meaning because "microservices" is undefined), segmented by respondents who often vs. rarely or never use DDD; bar values are not recoverable from this extraction]
[Figure 10: feature velocity resulting from microservices adoption (Higher / Lower / No change / I don't know), segmented by respondents who often vs. rarely or never use DDD; bar values are not recoverable from this extraction]
[Figure 11: software quality resulting from microservices adoption (Higher / Lower / No change / I don't know), segmented by respondents who often vs. rarely or never use DDD; bar values are not recoverable from this extraction]
In one case, however, the relation of OO vs. DDD to microservices adoption diverged:
[Figure 12: technical debt resulting from microservices adoption (More / Less / No change / I don't know), segmented by often vs. rarely-or-never OO respondents and often vs. rarely-or-never DDD respondents; bar values are not recoverable from this extraction]
As a result of adopting microservices, DDD-experienced respondents were significantly more likely to report more technical debt than OO-experienced respondents, and DDD-inexperienced respondents were significantly more likely to report less technical debt than OO-inexperienced respondents. We imagine the reason is that many of microservices' benefits were already available to DDD experts, so the distributed complexity of microservices relatively overwhelms the comparatively smaller benefit squeezed by microservices out of the DDD-pre-squeezed system.
Further Research
Although many of our previous surveys have touched on microservices, this was our first survey since 2018 to focus on
microservices in particular. In this survey, we focused on higher-level correlations; in future research, we aim to extend our
analysis to a lower, intra-service level. Our survey included material not published in this report, much of which is of interest at
that lower intra-service level in relation to higher-level effects of microservices adoption, including use of distributed design
patterns, twelve-factor app principles, and SOLID OO design principles; container design principles; organizational attitudes
toward Docker in particular; and implementation of consensus protocols.
Please contact [email protected] if you would like to discuss any of our findings or supplementary data.
In 2005, Dr. Peter Rodgers addressed micro web services during a presentation at the Web Services Edge conference. The first generation of micro web services was based on service-oriented architecture (SOA), in which self-contained software modules each perform a task and communicate with one another, typically via SOAP (Simple Object Access Protocol). The guiding idea is, "Do the simplest thing possible."

Nowadays, there is no avoiding microservices in architecture discussions, especially if you want to design cloud or multi-cloud, modular, scalable, multi-user applications. In this article, I will explain microservices and how to design applications for a multi-cloud scenario. I will walk you through microservice design patterns and wrap this information into an architectural example.
• Each module or microservice has its own data and therefore should have an independent database.
• Each microservice should be developed by its own team. This doesn't mean that microservices-based applications can't be developed by one team; however, having one team per microservice shows how independent microservices can be.
• Microservices should have an independent deployment process.
• Microservices should give better control over resources and compute power so that you can scale each service independently, according to each service's needs.
• Microservices can be written in different languages, but they should communicate over a common protocol such as REST.
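The principles above, independent data stores and a REST contract as the only coupling, can be sketched in miniature. This is an illustrative toy, not from the article: two in-process "services", each owning a hypothetical in-memory data store, communicating only over HTTP/JSON on OS-assigned ports.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def start_service(data):
    """Start a tiny read-only JSON service over its own private data
    on an OS-assigned port; return (server, port)."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            key = self.path.strip("/")
            body = json.dumps(data.get(key)).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, *args):  # keep the example quiet
            pass
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

# Each service owns its data; REST/JSON is the only contract between them,
# so either could be rescaled or rewritten in another language independently.
_, users_port = start_service({"u1": {"name": "Ada"}})
_, orders_port = start_service({"o1": {"user": "u1", "total": 9.5}})

def fetch(port, key):
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/{key}") as r:
        return json.load(r)

order = fetch(orders_port, "o1")         # ask the orders service...
user = fetch(users_port, order["user"])  # ...then the users service
print(user["name"])  # Ada
```

In a real deployment each service would run in its own container with its own database and deployment pipeline; the point of the sketch is only that the HTTP contract is the sole point of coupling.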
Multi-cloud means spreading an application across multiple cloud providers, and there are two different approaches to doing so. In the first, we build the core application in AWS and deploy some parts to Azure and Google Cloud. In the second, an app designed for one cloud can be migrated to another with only minor changes. (Multi-cloud is sometimes conflated with hybrid cloud, which combines on-premises infrastructure with public cloud.)
Table 1: Microservices

Pros:
• Scalability – As all services are separate, we can scale each service independently.
• Isolation – A large project may be unaffected when one service goes down.
• Flexibility – You can use different languages and technologies as all services are independent. We can separate the whole project into microservices where each microservice will be developed and supported by a separate team.
• DevOps independence – All microservices are independent; therefore, we can implement an independent deployment process for each microservice.

Cons:
• Complexity – Microservices architecture can be a good choice for huge, enterprise-level companies and platforms. Take Netflix, for example: there you can separate domains and subdomains into different services and teams. However, for a small company, separating may add redundant complexity. Moreover, it can be impossible to separate small projects into microservices.
• Testing – Testing microservices applications can be a difficult task, as you may need to have other services running.
• Debugging – Debugging microservices can be a painful task; it may include running several microservices and constantly investigating logs. This issue can be partially resolved by integrating a monitoring platform.
• During design and implementation, we can already see if our application can be moved to microservices.
• We can identify monolithic applications that can be moved to microservices using a step-by-step approach.
Table 2: Monolith

Pros:
• Simplicity – Monolithic architecture is relatively simple and can be used as a base architecture or the first step toward microservices.
• Simple DevOps – The DevOps process can be simple, as we may only need to automate one application.

Cons:
• Vendor lock-in – With monolithic architecture, we may be locked in with one vendor/cloud provider. All modules in a monolithic architecture are closely tied to each other, and it's hard to spread them across different vendors.
• Inflexible DevOps – The DevOps process for one enterprise-level monolithic app can take a lot of time, as we need to wait until all modules are built and tested.
• Stick with one programming language/technology – Monolithic architecture is not very flexible; you need to stick with one technology/programming language. Otherwise, you must rewrite the whole application to change the core technology.
In Figure 1, you can see an example of a typical modular monolithic architecture for a travel system. It allows passengers to
find drivers, book a trip, and join it.
Figure 1: Travel booking application
• UI/front end
• API
• SQL adapter
• Stripe adapter to process payments
• SendGrid to manage emails
• Adapter for Twilio to communicate over the phone (calls, SMS, video)
CONTAINER ENGINE
Container engines are essential to building microservices as they allow for separation, orchestration, and management of
the microservices within various cloud providers. Docker is a widely used container engine that lets you wrap each
microservice into a container and run it in a cloud-based container orchestration system like Kubernetes (AKS, EKS) or
spin up the application directly. containerd serves a similar role but is a lower-level container runtime — Docker itself is built on top of it — with a lighter, more minimal design.
ORCHESTRATOR
Kubernetes is a popular open-source system for orchestrating containerized applications — automating their deployment,
scaling, and management. Azure, AWS, and Google Cloud each offer a managed orchestration service that
already includes load balancing, auto scaling, workload management, monitoring, and a service mesh.
MESSAGE BUS
A queue is a service based on the FIFO (first in, first out) principle; all message bus systems build on it. For
example, Azure offers Queue Storage: if you have a simple architecture that needs centralized
message storage, you can rely on a queue. AWS and Google Cloud offer Simple Queue Service (SQS) and Pub/Sub,
respectively. These allow you to send, store, and receive messages between microservices and software components.
A service bus (or message bus) is based on the same approach as a queue but adds more features on top — a
dead-letter queue, scheduled delivery, message deferral, transactions, duplicate detection, and more.
For example, Azure Service Bus and Amazon Managed Streaming for Apache Kafka (MSK) are highly available message brokers for enterprise-level
applications that can deal with thousands of messages.
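Two of those broker features — duplicate detection and a dead-letter queue — can be sketched in a few lines of Python. The class below is an invented, in-memory toy for illustration only; real brokers such as Azure Service Bus implement these behaviors server-side:

```python
from collections import deque

class MiniBus:
    """In-memory sketch of a message bus with duplicate detection
    and a dead-letter queue (DLQ). Illustration only."""

    def __init__(self, max_deliveries=3):
        self.queue = deque()        # FIFO, like a plain queue service
        self.dead_letter = deque()  # messages that repeatedly failed
        self.seen_ids = set()       # for duplicate detection
        self.max_deliveries = max_deliveries

    def publish(self, msg_id, body):
        if msg_id in self.seen_ids:  # duplicate detection: drop repeats
            return False
        self.seen_ids.add(msg_id)
        self.queue.append({"id": msg_id, "body": body, "deliveries": 0})
        return True

    def consume(self, handler):
        msg = self.queue.popleft()
        msg["deliveries"] += 1
        try:
            handler(msg["body"])
        except Exception:
            if msg["deliveries"] >= self.max_deliveries:
                self.dead_letter.append(msg)  # give up: dead-letter it
            else:
                self.queue.append(msg)        # requeue for a later retry

def failing_handler(body):
    raise RuntimeError("consumer is down")

bus = MiniBus(max_deliveries=2)
bus.publish("m1", "charge customer")
bus.publish("m1", "charge customer")  # duplicate: silently ignored
bus.consume(failing_handler)          # delivery 1 fails -> requeued
bus.consume(failing_handler)          # delivery 2 fails -> dead-lettered
print(len(bus.queue), len(bus.dead_letter))  # 0 1
```

The dead-letter queue keeps a poisoned message from blocking the rest of the stream while still preserving it for later inspection.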
SERVERLESS
Serverless allows us to build a microservices architecture with a purely event-driven approach. A serverless function is a single
cloud service unit, available on demand, that lets us build microservices directly in the cloud without thinking
about which container engine, orchestrator, or cloud service to use. AWS and Azure offer Lambda and Azure Functions,
respectively, and Google Cloud Functions follows the same principle.
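Across these platforms the programming model is the same: you supply a function that receives an event, and the platform runs it on demand. A hypothetical AWS Lambda-style handler in Python — the event shape and field names here are invented for illustration:

```python
import json

# A Lambda-style handler: the platform invokes this function once per
# event; there is no server process for us to provision or manage.
def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can invoke it directly, exactly as the platform would.
result = handler({"name": "DZone"})
print(result["statusCode"], json.loads(result["body"]))
# 200 {'message': 'Hello, DZone!'}
```

Because the unit of deployment is a single function, scaling and routing become the platform's problem rather than ours.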
DOMAIN MICROSERVICES
The microservices domain model (part of domain-driven design) is more than a pattern — it is a set of principles for designing
and scoping a microservice. A microservices domain should be designed using the following rules:
• A single microservice should be an independent business function. Therefore, the overall service
should be scoped to a single business concept.
ANTI-CORRUPTION LAYER
A legacy system may have unmaintainable code and an overall poor design, but we still rely on the data that comes from
this module. An anti-corruption layer provides the façade, or bridge, between new microservices architecture and legacy
architecture. Therefore, it allows us to stay away from manipulating legacy code and focus on feature development.
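As a sketch, the façade can be a thin translation class: new services speak only the clean domain model, and knowledge of the legacy field names lives in the layer and nowhere else. All names below (the legacy payload, the Customer model) are hypothetical:

```python
from dataclasses import dataclass

# The new, clean domain model used by the microservices.
@dataclass
class Customer:
    customer_id: str
    full_name: str

# Stand-in for a legacy system returning a payload with cryptic fields.
def legacy_fetch_customer(cust_no):
    return {"CUST_NO": cust_no, "NM_FIRST": "Ada", "NM_LAST": "Lovelace"}

class CustomerAntiCorruptionLayer:
    """Facade between the new microservices and the legacy system:
    all translation logic is contained here."""

    def get_customer(self, customer_id):
        raw = legacy_fetch_customer(customer_id)
        return Customer(
            customer_id=str(raw["CUST_NO"]),
            full_name=f'{raw["NM_FIRST"]} {raw["NM_LAST"]}',
        )

acl = CustomerAntiCorruptionLayer()
print(acl.get_customer("42"))
# Customer(customer_id='42', full_name='Ada Lovelace')
```

If the legacy schema ever changes, only this one class needs to be updated — the rest of the new architecture never sees it.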
However, dependencies may occur during refactoring from monolith to microservices. In this case, you need to implement a
circuit breaker to prevent cascading failures. A circuit breaker acts as a state machine: it monitors the number of recent failures and
decides what to do next — allow calls through (closed state), fail fast by returning an exception (open state), or let a limited number of test requests pass (half-open state).
SERVICE MESH
A service mesh implements the communication layer for a microservices architecture. It ensures the delivery of service requests,
provides load balancing, encrypts data in transit (with mutual TLS), and provides discovery of other services. A service mesh also helps with:
• Collecting logs
• Managing configuration
• Controlling connections to services
It allows you to not only manage services but also collect telemetry data and send it to the control plane. To implement a
service mesh, we can use Istio — the most popular service mesh framework for managing microservices in Kubernetes.
Microservices in Action
To demonstrate the power of microservices, we will migrate our monolithic
travel application (see Figure 1) to the microservices architecture.
Conclusion
Building highly available, scalable, and performant applications can be challenging. Microservices architecture gives us the
option not only to build independent services but also to create a team per service and introduce a DevOps
approach. All popular cloud providers allow us to build multi-cloud microservices architectures, which can save
money, as services have different pricing strategies — but be sure to choose the service best suited to your
specific microservices domains. For example, we can use AKS with an integrated service mesh or a serverless approach based
on AWS Lambda. Multi-cloud allows us to apply cloud-native DevOps to deliver services independently.
I'm a Certified Software and Cloud Architect who has solid experience designing and developing complex
solutions based on the Azure, Google, and AWS clouds. I have expertise in building distributed systems
and frameworks based on Kubernetes and Azure Service Fabric. My areas of interest include enterprise
cloud solutions, edge computing, high load applications, multitenant distributed systems, and IoT solutions.
The SV Group team enlisted 3ap to build an automated digital guest journey. 3ap selected Camunda Platform 8 (C8) for
end-to-end process monitoring and orchestration.

COMPANY: 3ap
PRODUCTS USED: Camunda Platform 8
PRIMARY OUTCOME: Enhanced customer experience by orchestrating multiple cloud services into a single consumer mobile app, faster time-to-market, and improved scalability.

Guests would be able to book through a variety of platforms, such as Airbnb or Expedia. Once they arrive at the hotel, they
would use their mobile app to effortlessly check in and complete legal requirements like passport checks. Next, they'd unlock
their room using a digital key via their mobile device. From there, guests would enjoy their stay, easily requesting guest
services via the same mobile experience. When their stay is over, guests would quickly check out via mobile.
Results
Today, C8 continues to monitor critical points in the guest journey, ensuring
that processes are automated and operating correctly within predefined
time periods. In the event of a failure or delay, C8 automatically alerts the SV
Group IT team of the issue via Slack and an incident response tool.
While 3ap started with passive process monitoring on C8, they plan to move
to active process orchestration so they can easily and seamlessly integrate
new cloud services into the platform's automated BPMN workflow.
Service meshes and observability are hot topics within the microservices community. In this Trend Report, we’ll explore in
detail how a service mesh, along with a good observability stack, can help us overcome some of the most pressing challenges
that we face when working with microservices.
In other words, we’ll have to follow the entire network trace to figure out which microservice is the root cause of the problem.
This is an extremely time-consuming process.
When tested individually, each microservice may seem to be performant. But in a real-world scenario, the load on each service
may differ drastically. There could be certain core microservices that a bunch of other microservices depend on. Such scenarios
can be extremely difficult to replicate in an isolated testing environment.
These events become difficult to avoid when there is no clear dependency tree between microservices. A dependency tree
makes it easier to inform the appropriate teams and plan releases better.
Luckily for us, there are some powerful open-source tools to help simplify the process of setting up an observability stack.
Ideally, we want our developers to write application code and nothing else. The complications of microservices networking
need to be pushed down to the underlying platform. A better way to achieve this decoupling would be to use a service mesh
like Istio, Linkerd, or Consul Connect.
A service mesh is an architectural pattern to control and monitor microservices networking and communication.
Let’s take the example of Istio to understand how a service mesh works.
The service mesh architecture, as illustrated in Figure 2, helps you abstract away all the complexities we spoke about earlier.
The best part is that we can start using a service mesh without having to write a single line of code. A service mesh helps us
with multiple aspects of managing a microservices-based architecture. Some of the notable advantages include:
CONTROLLING NETWORK
A service mesh isn’t just a silent spectator. It can actively take part in shaping all network traffic. The Envoy proxies used as
sidecars are HTTP aware, and since all the requests are flowing through these proxies, they can be configured to achieve
several useful features like:
A service mesh can assist in access control as well, by controlling which service is allowed to talk to which. All this can
help us mitigate a whole class of security vulnerabilities, such as man-in-the-middle attacks.
DISTRIBUTED TRACING
We've discussed how difficult it is to debug microservices. One way to solve this debuggability problem is by means of
distributed tracing — the process of capturing the lifecycle of a request. One graph alone can make it so much easier to figure
out the root cause of the problem.
Most service meshes automatically collect and ship network traces to tools like Jaeger. All you need to do is forward a few HTTP
headers in your application code. That’s it!
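In practice, "forwarding a few headers" means copying the trace context from the incoming request onto every outgoing one. A sketch using the B3 header set commonly propagated in Istio/Zipkin setups (the header names are the standard B3 set; the request dictionaries are illustrative):

```python
# Trace-context headers commonly propagated in Istio/Zipkin setups.
B3_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
]

def forward_trace_headers(incoming_headers):
    """Copy whichever trace headers are present on the incoming request,
    so the application can attach them to its outgoing requests."""
    return {h: incoming_headers[h] for h in B3_HEADERS if h in incoming_headers}

incoming = {
    "x-b3-traceid": "80f198ee56343ba864fe8b2a57d3eff7",
    "x-b3-spanid": "e457b5a2e4d86bd1",
    "content-type": "application/json",  # not a trace header: not forwarded
}
print(forward_trace_headers(incoming))
```

With the context propagated, the sidecar proxies can stitch each hop into one end-to-end trace without any other application changes.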
There are many more metrics that a service mesh collects, but these are by far the most important ones. These metrics can be
used to power several interesting use cases. Some of them include:
NETWORK TOPOLOGY
A service mesh can help us automatically construct a network topology, which can be built by combining tracing data with
traffic flow metrics. If you ask me, this is a real lifesaver. A network topology can help us visualize the entire microservice
dependency tree in a single glance. Moreover, it can also highlight the network health of our cluster. This can be great for
identifying bottlenecks in our application.
As next steps, you can check out the following guides to dive deeper into the world of service meshes and observability.
Noorain is a die-hard techie and a profound open source enthusiast. He is an AWS Certified Solutions
Architect with 5+ years of experience in designing and developing cloud-native systems and architectures.
He has demonstrated mastery in architecting solutions on top of AWS, GCP, and Kubernetes.
PARTNER CASE STUDY
With over $2 trillion in assets, this Fortune 100 financial services provider is one of the largest asset holders in the world.

COMPANY: Fortune 100 Bank*
PRODUCTS USED: vFunction Assessment Hub, vFunction Modernization Hub
PRIMARY OUTCOME: The customer was able to refactor and migrate hundreds of legacy Java applications to the AWS cloud faster and less expensively than ever imagined.

With a mandate to become cloud-ready, they needed to ensure their future application architecture was architected to work
in a complementary way to take advantage of cloud-native services. This meant refactoring, which is extremely difficult and
time consuming for humans, but made much easier with automation, AI, and data science.

Solution
The client turned to vFunction to automate the analysis of the company's legacy Java applications, assessing the complexity
of selected apps to determine readiness for modernization. This included deep tracking of call stacks, memory, and object
behaviors from actual user activity, events, and tests. This analysis uses patented methods of static analysis, dynamic
analysis, and dead code detection.

This enabled them to better preview refactorability, stack rank applications, estimate schedules, and manage the
modernization process to accelerate cloud-native migrations. Fortune 100 Bank used vFunction to accelerate Java
monolith decomposition.

"In a matter of weeks, vFunction gave us visibility and insights into some of our most complex applications — saving us
significant time and reducing costs. After years of ultimately unsuccessful efforts trying to refactor and migrate our legacy
application portfolio manually, we now have a repeatable, AI-driven model to refactor and modernize for the AWS cloud."
— Senior Architect, Fortune 100 Bank

Results
• 25x reduction in time to market – Within just a few weeks of installing vFunction, the company was able to unlock and take action on never-before-seen insights about their largest Java monolith — after years of efforts.
• 3x reduction in cost of modernization – AI-driven insights and actionable recommendations reduced the cost of modernization by 3x compared to manual decomposition and refactoring efforts.

CREATED IN PARTNERSHIP WITH
Applications have been built with monolithic architectures for decades; however, many are now moving to a microservices
architecture. Microservices architectures give us faster development speed, scalability, reliability, the flexibility to develop each
component with the most suitable tech stack, and much more. Microservices architectures rely on independently deployable
microservices. Each microservice has its own business logic and database consisting of a specific domain context. The testing,
enhancing, and scaling of each service is independent of other microservices.
However, a microservices architecture is also prone to its own challenges and complexity. To solve the most common
challenges and problems, some design patterns have evolved. In this article, we will look at a few of them.
• How do you handle cross-cutting concerns such as authorization, rate limiting, load balancing, retry policies, service
discovery, and so on?
• How do you avoid too many round trips and tight coupling due to direct client-to-microservice communication?
• Who will do the data filtering and mapping in case a subset of data is required by a client?
• Who will do the data aggregation in case a client requires calling multiple microservices to get data?
To address these concerns, an API gateway sits between the client applications and microservices. It brings features like reverse
proxy, request aggregation, gateway offloading, service discovery, and many more. It can expose a different API for each client.
A UI team should create a page skeleton that builds screens by composing multiple UI components. Each team develops a
client-side UI component that is service-specific. This skeleton is also known as a single-page application (SPA). Frameworks
like AngularJS (directives) and React (components) support this. This also allows users to refresh a specific region of the screen
when any data changes, providing a better user experience.
• A typical business transaction may involve queries, joins, or data persistence actions from multiple services owned by
different teams.
• In polyglot microservices architectures, where each microservice may have different data storage requirements, consider
unstructured data (NoSQL database), structured data (relational database), and/or graph data (Neo4j).
A microservice transaction must be limited to its own database. Any other service requiring that data must use a service API. If
you are using a relational database, then a schema per service is the best option to make the data private to the microservice.
To create a barrier, assign a different database user ID to each service. This ensures developers are not tempted to bypass a
microservice’s API and access the database directly.
This enables each microservice to use the type of database best suited for its requirements. For example, use Neo4j for social
graph data and Elasticsearch for text search.
SAGA PATTERN
When we use one database per service, it creates a problem with implementing transactions that span multiple microservices.
How do we maintain data consistency in this case? Local ACID transactions don’t help here. The solution is the saga pattern. A saga
is a chain of local transactions where each transaction updates the database and publishes an event to trigger the next local
transaction. The saga pattern mandates compensating transactions in case any local transaction fails.
• Orchestration – An orchestrator (object) coordinates with all the services to do local transactions, get updates, and
execute the next event. In case of failure, it holds the responsibility to trigger the compensation events.
• Choreography – Each microservice is responsible for listening to and publishing events, and it enables the compensation
events in case of failure.
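An orchestration-style saga can be sketched as a list of (transaction, compensation) pairs: the orchestrator runs each local transaction in order and, on failure, replays the compensations for everything already committed, in reverse. All step names below are invented for illustration:

```python
log = []

# Hypothetical local transactions and their compensations.
def reserve_seat():
    log.append("seat reserved")

def charge_card():
    log.append("card charged")

def notify_driver():
    raise RuntimeError("driver service is down")  # simulated failure

def release_seat():
    log.append("seat released")

def refund_card():
    log.append("card refunded")

def run_saga(steps):
    """steps: list of (transaction, compensation) pairs."""
    done = []
    for transaction, compensation in steps:
        try:
            transaction()
            done.append(compensation)
        except Exception:
            # A step failed: undo the committed steps in reverse order.
            for comp in reversed(done):
                comp()
            return False
    return True

ok = run_saga([
    (reserve_seat, release_seat),
    (charge_card, refund_card),
    (notify_driver, lambda: None),
])
print(ok, log)
# False ['seat reserved', 'card charged', 'card refunded', 'seat released']
```

The compensations do not roll back a shared transaction — each one is itself a local transaction that semantically undoes an earlier step, which is exactly the trade-off sagas make for giving up cross-service ACID.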
Electric circuit breaker functionality inspired the circuit breaker pattern — the solution to this issue. A proxy sits between a
client and a microservice and tracks the number of consecutive failures. If the count crosses a threshold, the breaker trips the
connection, and subsequent calls fail immediately. After a timeout period, the circuit breaker allows a limited number of test
requests through to check whether normal operation can resume; otherwise, the timeout period restarts.
Source: Diagram adapted from "Circuit Breaker Implementation in Resilience4j," Bohdan Storozhuk
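A minimal version of this pattern can be sketched as a small state machine. The threshold and timeout values below are arbitrary; production code would reach for a hardened library such as Resilience4j instead:

```python
import time

class CircuitBreaker:
    """Tiny circuit breaker sketch: closed -> open after `threshold`
    consecutive failures; half-open after `timeout` seconds."""

    def __init__(self, threshold=3, timeout=30.0):
        self.threshold = threshold
        self.timeout = timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def state(self):
        if self.opened_at is None:
            return "closed"
        if time.monotonic() - self.opened_at >= self.timeout:
            return "half-open"  # allow a limited trial request
        return "open"

    def call(self, fn):
        if self.state() == "open":
            raise RuntimeError("circuit open: failing fast")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the circuit
            raise
        else:
            self.failures = 0
            self.opened_at = None  # success closes the circuit again
            return result

breaker = CircuitBreaker(threshold=2, timeout=30.0)
for _ in range(2):
    try:
        breaker.call(lambda: 1 / 0)  # a downstream call that keeps failing
    except ZeroDivisionError:
        pass
print(breaker.state())  # open
```

Once tripped, the breaker answers instantly instead of letting callers pile up on a service that is already struggling.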
There are two ways to decompose a greenfield application — by business capability or subdomain:
• A business capability is something that generates value. For example, in an airline company, business capabilities can be
bookings, sales, payment, seat allocation, and so on.
• The subdomain concept comes from domain-driven design (DDD). A domain consists of multiple subdomains, such as
product catalog, order management, delivery management, and so on.
Conclusion
Microservices architectures provide a lot of flexibility for developers but also bring many challenges as the number of
components to manage increases. In this article, we have talked about the most important design patterns that are essential to
building and developing a microservices application.
Rajesh Bhojwani is a development architect with 18+ years of experience. He is currently responsible
for the design, development, and implementation of cloud-native technologies around Spring Boot, Cloud
Foundry, and AWS. Beyond his roles as developer and architect, he has served as an on-site consulting
coordinator and understands customer issues very well. His hobby is educating and mentoring
developers on cloud-native technology skills.
Have you encountered challenges in how to manage data in a microservices architecture? In this article, we examine
traditional approaches and introduce the data API gateway (also sometimes known as a "data gateway"), a new type of
data infrastructure. We explore the features of a data API gateway, why you should implement it, and how to apply it to
your architecture.
This microservices architecture includes a layer of data services which manage specific data types including hotels, rates,
inventory, reservations, and guests, and a layer of business services which implement specific processes such as shopping
and booking reservations. The business services provide a primary interface to web and mobile applications and delegate the
storage and retrieval of data to the data services. The data services are responsible for performing CRUD operations on an
underlying database.
While there are many ways of integrating and orchestrating interactions between these services, the basic pattern of
separating services responsible for data and business logic has been around since the early days of service-oriented
architecture (SOA).
• Identifying services to manage specific data types in the domain using a technique such as domain-driven design.
For more on the interaction between domain-driven design, service identification, and data modeling, see Chapter 7 of
Cassandra, The Definitive Guide: 3rd Edition.
• Implementing services using a selected language and framework. In the Java world, frameworks like Spring Boot make
it easy to build services with an embedded HTTP server that are then packaged into VMs or containers. Quarkus is a more
recent framework which can build, test, and containerize services in a single CI workflow.
Figure 2: Polyglot persistence approach for microservices architectures

As the move toward large-scale microservices architectures in the cloud began in the 2010s, large-scale innovators, including
Netflix, advocated strongly for independent services managing their own data types. One consequence of this was that
individual data services were free to select their own databases, a pattern known as polyglot persistence. An example of what
this might look like in our hypothetical hotel application is shown in Figure 2.
API styles like gRPC provide more structured data representations, which can lead to better performance. GraphQL
and REST APIs provide more flexibility in how data is represented at the cost of additional latency. The maximum flexibility
is provided by document-style APIs, which can store and search JSON in whatever format the client chooses, at the cost of
potentially lower performance for more complex queries.
Conclusion
The data API gateway is a new type of data infrastructure which can help eliminate layers of CRUD-style microservices that
you have to develop and maintain. While there are multiple styles of gateway, they have a common set of features that benefit
both developers and operators. Data API gateways enable developer productivity by providing a variety of API styles over a
single supporting database. From an operations perspective, data API gateways and their supporting databases can be run in
containers alongside other applications to simplify your overall deployment process. In summary, adopting a data API gateway
is a great way to reduce development and maintenance cost for microservices architectures.
Jeff has worked as a software engineer and architect in multiple industries and as a developer advocate
helping engineers get up to speed on Apache Cassandra. He's involved in multiple open-source projects
in the Cassandra and Kubernetes ecosystems and is co-writing the O'Reilly book, "Managing Cloud Native
Data on Kubernetes," scheduled for publication in late 2022.
Approaches to Cloud-Native
Application Security
Securing Microservices and Containers
Securing cloud-native applications requires proper understanding of the interfaces (boundaries) being exposed by your
microservices to various consumers. Proper tools and mechanisms need to be applied on each boundary to achieve the
right level of security. Properly securing the infrastructure on which your application runs is also very important. This
includes securing container images, securely running container runtimes, and properly configuring and using the container
orchestration system (Kubernetes).
A monolithic application typically has one entry point. Beyond that, everything happens within a single process, except for
database calls or similar interactions. Comparatively, the surface of exposure in a cloud-native application is much higher. As
shown in Figure 1, a cloud-native application typically has multiple components (services) that communicate over the network.
Each entry point into any given component needs to be appropriately secured.
As shown in Figure 2, an API gateway protects the cloud-native application at the north-south channel as well as the inter-
domain east-west channel.
The standard way to model this on an API is to say the product list update operation requires a special "scope." Unless the token
being used to access this API bears this scope, the request will not be permitted. The OpenAPI specification allows binding
scopes to an operation.
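Conceptually, the gateway's check is a set-membership test on the token's granted scopes. A hypothetical sketch — the scope name and token claim shapes are illustrative, not taken from any specific product:

```python
# Hypothetical required scope for the product-list update operation,
# as it might be declared in an OpenAPI security requirement.
REQUIRED_SCOPE = "products:update"

def is_authorized(token_claims, required_scope):
    """OAuth 2.0 access tokens typically carry granted scopes as a
    space-separated string in the `scope` claim."""
    granted = set(token_claims.get("scope", "").split())
    return required_scope in granted

admin_token = {"sub": "alice", "scope": "products:read products:update"}
viewer_token = {"sub": "bob", "scope": "products:read"}

print(is_authorized(admin_token, REQUIRED_SCOPE))   # True
print(is_authorized(viewer_token, REQUIRED_SCOPE))  # False
```

In a real deployment the token's signature and expiry would be validated first; the scope check is only the final authorization step.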
Only privileged users get access to server configuration files. But in the microservices world, it is common for developers to
store this information in property files along with the code of the microservice. When a developer builds such a container and
pushes it to the container registry, this information becomes available for anyone who can access the container image!
To prevent this from happening, we need to externalize application secrets from code. Let's take a look at a sample Dockerfile
for a Java program that does this:
FROM openjdk:17-jdk-alpine
ADD builds/sample-java-program.jar \
sample-java-program.jar
ENV CONFIG_FILE=/opt/configs/service.properties
ENTRYPOINT ["java", "-jar", "sample-java-program.jar"]
The third line in this Dockerfile instructs Docker to create an environment variable named CONFIG_FILE and point it to the
/opt/configs/service.properties location. Instead of having secrets hard coded in source code or the code read from a fixed
file location, the microservice's code should be written so that it looks up the value of this environment variable to determine
the configuration file location and load its contents to memory. With this, we have successfully avoided secrets within the
code. If we build a Docker container with this file, it will not contain any sensitive information. Next, let's look at how we can
externalize the values we need.
Before running a Docker image built from the above Dockerfile, we need to mount the actual configuration file with the
proper values into the container at that location. This can be done with the following Docker run command:
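A representative invocation looks like this — the host-side path and image tag are illustrative; only the target path is fixed by the Dockerfile's ENV setting:

```shell
docker run \
  --mount type=bind,source=/host/configs/service.properties,target=/opt/configs/service.properties \
  sample-java-program:latest
```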
The source section contains a path of the filesystem on the container's host machine. The target section contains a path on
the container filesystem. The --mount command instructs the Docker runtime to mount the source onto the target, meaning
that the service.properties file can now be securely maintained on the host machine's filesystem and mounted to container
runtimes before starting the containers. This way, we externalize sensitive information from the microservice itself on Docker.
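On the application side, the lookup this enables — read CONFIG_FILE, then load the file it points to — takes only a few lines. A sketch in Python (the article's service is Java, but the pattern is identical; the property names are invented):

```python
import os
import tempfile

def load_config():
    """Resolve the config file from the CONFIG_FILE environment
    variable -- never from secrets baked into the image."""
    path = os.environ["CONFIG_FILE"]
    config = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    return config

# Simulate the mounted properties file for this demo.
with tempfile.NamedTemporaryFile("w", suffix=".properties", delete=False) as f:
    f.write("db.user=app\ndb.password=s3cret\n")
os.environ["CONFIG_FILE"] = f.name

config = load_config()
print(sorted(config))  # ['db.password', 'db.user']
```

The container image now carries no secrets at all; the sensitive values exist only on the host and enter the container at run time through the mount.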
Developers need to set an environment variable named DOCKER_CONTENT_TRUST to 1 to enforce DCT in all the
environments where Docker is used — for example: export DOCKER_CONTENT_TRUST=1. Once this environment variable
is set, it affects the following Docker commands: push, build, create, pull, and run. This means that if you try to issue a
docker run command on an unverified image, your command will fail.
DOCKER PRIVILEGES
Any operating system has a super user known as root, and by default, all Docker containers run as the root user. This is not
necessarily bad, thanks to the namespace partitions in the Linux kernel. However, if you are using file mounts in your
containers, an attacker gaining access to the container runtime can be very harmful. Another problem with running containers
as root is that an attacker who gains access to the container can install additional tools into it — tools that can harm the
application in various ways, such as scanning for open ports.
Docker provides a way to run containers as non-privileged users. The root user ID in Linux is 0. Docker allows us to run Docker
containers by passing in a user ID and group ID. The following command would start the Docker container under user ID 900
and group ID 300. Since this is a non-root user, the actions it can perform on the container are limited.
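Assuming the image is tagged sample-java-program (an illustrative name), that command is:

```shell
docker run --user 900:300 sample-java-program:latest
```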
Conclusion
Securing a cloud-native application properly is not trivial. API gateways and mutual trust are key to ensuring our
communication channels are kept safe and we have a zero-trust architecture. OAuth 2.0, scopes, and OPA (or similar) are
fundamental to ensuring APIs are properly authenticated and authorized.
Going beyond this scope, we also need to be concerned about using proper security best practices on Kubernetes, properly
handling secrets (passwords), securing event-driven APIs, and more. APIs, microservices, and containers are fundamental to
cloud-native applications. Every developer needs to keep themselves up to date with the latest security advancements and
best practices.
Nuwan is an API enthusiast who works on making cloud-native application development simpler. He
is the author of Microservices Security in Action. He speaks at conferences to share his knowledge on the
topics of APIs, microservices, and security. He spends most of his spare time with friends and family and is
a big fan of rugby.
Microservices Orchestration
Getting the Most Out of Your Services
Does your organization use a microservices-style architecture to implement its business functionality? What approaches
to microservices communication and orchestration do you use? Microservices have been a fairly dominant application
architecture for the last few years and are usually coupled with the adoption of a cloud platform (e.g., containers, Kubernetes,
FaaS, ephemeral cloud services). Communication patterns between these types of services vary quite a bit.
Microservices architectures stress independence and the ability to change frequently, but these services often need to share
data and initiate complex interactions between themselves to accomplish their functionality. In this article, we'll take a look at
patterns and strategies for microservices communication.
But what happens if that connection takes too long to open? What if it times out and cannot be opened? What if the
connection succeeds but is later shut down after a request has been processed, but before the response is returned?
We need a way to quickly detect these connection and request failures.
Figure 1: Simple example of service A calling service B
These three patterns can be used individually or together to improve communication reliability (though each has its own drawbacks):
1. Retry/backoff retry – if a call fails, send the request again, possibly waiting a period of time before trying again
2. Idempotent request handling – the ability to handle a request multiple times with the same result (can involve
de-duplication handling for write operations)
3. Asynchronous request handling – eliminating the temporal coupling between two services for request passing to succeed
One quick note about retries: We cannot just retry forever, and we cannot configure every service to retry the same number
of times. Retries can contribute to "retry storm" events, where services are degrading and the calling services retry so
aggressively that they put pressure on, and eventually take down, the degraded service (or keep it down as it tries to come
back up). A good starting point is to use a small, fixed number of retries (say, two) higher up in the call chain and to avoid
retrying the deeper you get into the call chain.
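As a sketch of the retry/backoff pattern (not code from this article; the function names are illustrative), a bounded retry with exponential backoff and jitter might look like this:

```python
import random
import time

def call_with_retry(request_fn, max_attempts=3, base_delay=0.1):
    """Call request_fn, retrying on connection failure with exponential backoff.

    max_attempts is kept small and fixed to avoid contributing to retry storms.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts; propagate the failure to the caller
            # Exponential backoff with jitter: ~0.1s, ~0.2s, ~0.4s, ...
            delay = base_delay * (2 ** (attempt - 1))
            time.sleep(delay + random.uniform(0, delay))
```

The jitter spreads retries out in time so that many callers failing at once do not all hammer the degraded service on the same schedule.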
The client may retry the request, but this would then increment the count by 5 again, and this may not be the desired state.
What we want is the service to know that it’s seen a particular request already and either disregard it or apply a "no-op." If a
service is built to handle requests idempotently, a client can feel safe to retry the failed request with the service able to filter out
those duplicate requests.
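One common way to get idempotent request handling (a minimal sketch, not the article's code) is to have clients attach a unique request ID and have the service de-duplicate on it, turning a retried write into a no-op:

```python
class CounterService:
    """Applies increments idempotently by de-duplicating on request ID."""

    def __init__(self):
        self.count = 0
        self._seen = set()  # request IDs that have already been applied

    def increment(self, request_id, amount):
        if request_id in self._seen:
            return self.count  # duplicate request: no-op, return current state
        self._seen.add(request_id)
        self.count += amount
        return self.count
```

With this in place, a client that never saw a response can safely resend the same request ID without incrementing the count by 5 twice.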
We can trust the message log or queue to persist and deliver the message at some point in the future. Retry and idempotent
request handling are also applicable in the asynchronous scenario. If a message consumer can correctly apply changes that
may occur in an "at-least once delivery" guarantee, then we don't need more complicated transaction coordination.
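To illustrate the asynchronous case (a minimal in-memory sketch, not a real broker), an at-least-once queue may deliver the same message more than once; a consumer that tracks processed message IDs applies each change exactly once:

```python
import queue

def consume(q, apply_fn, processed_ids):
    """Drain the queue, applying each message's payload at most once.

    The broker may redeliver messages (at-least-once delivery), so we
    track message IDs and skip duplicates instead of coordinating a
    distributed transaction.
    """
    while True:
        try:
            msg = q.get_nowait()
        except queue.Empty:
            return
        if msg["id"] in processed_ids:
            continue  # redelivered duplicate: skip it
        processed_ids.add(msg["id"])
        apply_fn(msg["payload"])
```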
Figure 4: Orchestrating service calls across multiple services with a GraphQL engine
Since you can externally control and configure the behavior, these behaviors can be applied to any/all of your applications,
regardless of the programming language they've been written in. Additionally, changes can be made quickly to these
resilience policies without forcing code changes.
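For example (a hypothetical sketch assuming an Istio service mesh; the `service-b` host name is a placeholder), a retry policy can be declared on a VirtualService and changed without touching application code:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-b
spec:
  hosts:
    - service-b
  http:
    - route:
        - destination:
            host: service-b
      retries:
        attempts: 2              # small, fixed retry budget
        perTryTimeout: 2s
        retryOn: connect-failure,5xx
```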
GraphQL can also be combined with API Gateway technology or even service mesh technology, as described above. Combining
these provides a common and consistent resilience policy layer — regardless of what protocols are being used to communicate
between services (REST, gRPC, GraphQL, etc.).
Conclusion
Most teams expect a cloud infrastructure and microservices architecture to deliver on big promises around service delivery
and scale. We can set up CI/CD, container platforms, and a strong service architecture, but if we don't account for runtime
microservices orchestration and the resilience challenges that come with it, then microservices become an overly complex
deployment architecture with all of the drawbacks and none of the benefits. If you're going down the microservices path (or
are already well along it), make sure service communication, orchestration, security, and observability are top of mind and
consistently implemented across your services.
Christian Posta is the author of Istio in Action and many other books on cloud-native architecture. He is
well known as a speaker, blogger, and contributor to various open-source projects in the service mesh and
cloud-native ecosystem. Christian has spent time at government and commercial enterprises and web-
scale companies. He now helps organizations create and deploy large-scale, cloud-native, resilient, distributed architectures.
He enjoys mentoring, training, and leading teams to be successful with distributed systems concepts, microservices,
DevOps, and cloud-native app design.
Solutions Directory
This directory contains cloud platforms, container platforms, orchestration tools, service meshes,
distributed tracing tools, and other products that aim to simplify building with microservices and
containers. It provides free trial data and product category information gathered from vendor websites
and project pages. Solutions are selected for inclusion based on several impartial criteria, including
solution maturity, technical innovativeness, relevance, and data availability.
| Company | Product | Purpose | Availability | Website |
|---|---|---|---|---|
| Apache Software Foundation | HTrace | Distributed tracing | Open source | incubator.apache.org/projects/htrace.html |
| Apache Software Foundation | Kafka | Distributed event streaming platform | Open source | kafka.apache.org |
| Aspen Mesh | Aspen Mesh | Containerized apps management system | Trial period | aspenmesh.io |
| Broadcom | Automic Automation | Service orchestration | By request | broadcom.com/products/software/automation/automic-automation |
| Broadcom | Docker Monitoring | Container monitoring | By request | broadcom.com/info/aiops/docker-monitoring |
| Broadcom | Layer7 | API management, API gateway | By request | broadcom.com/products/software/api-management |
| Couchbase | Autonomous Operator | Cloud-native database | Open source | couchbase.com/products/cloud/kubernetes |
| D2iQ | DKP Essential | Single-cluster Kubernetes solution | | d2iq.com/products/essential |
| Grails Foundation | Grails | Web app framework | Open source | grails.org |
| Lightbend | Akka Platform | Reactive microservices frameworks | Open source | lightbend.com/akka-platform |
| Lightbend | Lagom | Microservices development platform | Open source | lagomframework.com |
| Linux Foundation | LFX Platform | Open-source project hosting | Open source | lfx.linuxfoundation.org |
| Micro Focus | Artix | ESB | Trial period | microfocus.com/en-us/products/artix/overview |
| Microsoft Azure | API Management | API gateway, API management | Free tier | azure.microsoft.com/en-us/services/api-management |
| Microsoft Azure | Kubernetes Service | Containers-as-a-Service, Orchestration-as-a-Service | By request | azure.microsoft.com/en-us/services/kubernetes-service |
| Microsoft Azure | Service Bus | ESB | Free tier | azure.microsoft.com/en-us/services/service-bus |
| Microsoft Azure | Service Fabric | Microservices development platform | Free tier | azure.microsoft.com/en-us/services/service-fabric |
| MuleSoft | Anypoint Platform | API and integration platform | Trial period | mulesoft.com |
| F5 | NGINX Service Mesh | Service mesh | Open source | nginx.com/products/nginx-service-mesh |
| Okta | API Access Management | API management | Trial period | okta.com/products/api-access-management |
| Particular Software | NServiceBus | ESB | Free tier | particular.net/nservicebus |
| Peregrine Connect | Neuron ESB | ESB | Trial period | peregrineconnect.com |
| Platform9 | Platform9 Managed Kubernetes | Kubernetes-as-a-Service | Free tier | platform9.com/managed-kubernetes |
| Red Hat | OpenShift Container Platform | PaaS, container platform | | redhat.com/en/technologies/cloud-computing/openshift/container-platform |
| Redis Labs | Redis Enterprise Software | Self-managed data platform | Trial period | redis.com/redis-enterprise-software/overview |
| Solo.io | Gloo Mesh | Service mesh and control plane | By request | solo.io/products/gloo-mesh |
| Splunk | Splunk Cloud Platform | PaaS | Trial period | splunk.com/en_us/products/splunk-cloud.html |
| Spring | Spring Cloud Data Flow | Orchestration service | Open source | spring.io/projects/spring-cloud-dataflow |
| Spring | Spring Cloud Sleuth | Distributed tracing | Open source | spring.io/projects/spring-cloud-sleuth |
| Spring | Spring Cloud Stream | Event-driven microservices framework | Open source | spring.io/projects/spring-cloud-stream |
| Spring | Spring Cloud Task | Short-lived microservices framework | Open source | spring.io/projects/spring-cloud-task |
| Sumo Logic | Sumo Logic | Cloud-native SaaS analytics | Trial period | sumologic.com |
| VMware | Tanzu Build Service | Container development, management, and governance | By request | tanzu.vmware.com/build-service |
| VMware | Tanzu Observability | Multi-cloud environment observability | Trial period | tanzu.vmware.com/observability |