
Building Event-Driven Microservices
Chapter 1. Why Event-Driven Microservices
The medium is the message.

—Marshall McLuhan

Introduction
Microservices and microservice-style architectures have existed for many years,
in many different forms, and under many different names. Service-oriented
architectures (SOA) are often built as a number of microservices communicating
synchronously and directly with one another. Message-passing architectures use
consumable events to communicate asynchronously. Event-based communication is
certainly not new, but the need to handle big data sets, at scale and in real
time, is a new requirement that necessitates a change from the old
architectural styles.

Marshall McLuhan realized that it is not the content of media that makes its
impact on humankind, but the changes introduced to society by our engagement
with the medium. Newspapers, radio, television, the internet, instant
messaging, and social media have all changed human interaction and social
structures simply through our engagement with these forms of media.

The same holds true for computer system architectures. We need only look back
at the history of computing inventions to see how the mediums of network
communications, relational databases, big-data developments, and cloud
computing have reshaped the medium of computer architectures. Each of these
inventions not only changed the way that technology was used within various
software projects, but drastically changed the way that organizations, teams,
and people communicated with one another. From centralized mainframes to
distributed mobile applications, each new medium has fundamentally changed
the relationship of people with computing.

The medium of the asynchronously produced and consumed event has been
fundamentally shifted by modern technology. These events can now be persisted
indefinitely, at extremely large scale, and be consumed by any service as many
times as necessary. Compute resources can be easily acquired and released on
demand, enabling the easy creation and management of microservices.
Microservices can materialize and manage their data according to their own
needs, and do so at a scale that was previously solely the domain of
batch-based big-data solutions. These improvements to the humble and simple
event-driven medium have far-reaching impacts that not only change computer
architectures, but completely reshape how teams, people, and organizations
create systems and businesses.

What are event-driven microservices?


In a modern event-driven microservices architecture, systems communicate by
issuing and consuming events. These events are not destroyed upon consumption,
as happens in message-passing systems, but instead remain readily available
for other consumers to read as they require. This is an important distinction,
for it allows some truly powerful patterns that we will examine later in this
book.
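The difference from a destructive message queue can be sketched in a few lines of Python. This is a toy model, not any particular broker's API; the `EventStream` class and the event shapes are purely illustrative:

```python
# A toy model (not any particular broker's API) of the key distinction:
# events appended to a stream are not destroyed when read. Each consumer
# tracks its own offset and reads the same events independently.

class EventStream:
    """An append-only log; reading never removes events."""

    def __init__(self):
        self._log = []

    def append(self, event):
        self._log.append(event)

    def read_from(self, offset):
        # Return every event at or after `offset`; the log is unchanged.
        return self._log[offset:]

stream = EventStream()
stream.append({"type": "OrderPlaced", "order_id": 1})
stream.append({"type": "OrderShipped", "order_id": 1})

# Two independent consumers, each starting from the beginning of the stream.
billing_events = stream.read_from(0)
analytics_events = stream.read_from(0)

# Both see the same two events: consumption is non-destructive.
assert billing_events == analytics_events
```

In a destructive queue, the first `read` would have removed the events and the second consumer would see nothing; here any number of consumers can replay the same history.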

The services themselves are small and purpose-built, created to fulfill the
necessary business goals of the organization. By small, a typical estimate is
something that takes no more than two weeks to write. These services consume
events from input event streams, apply their specific business logic, and may
emit their own output events, provide data for request-response access,
communicate with a third-party API, or perform other required actions. They
can be stateful or stateless, complex or simple, and implemented either as
long-running, standalone applications or executed as functions using
Functions-as-a-Service. We will examine these options in greater detail in
the following chapters of this book.
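The consume/process/emit cycle described above can be sketched as follows. The stream contents, event shapes, and the `handle_order_placed` handler are hypothetical, chosen only to illustrate the pattern:

```python
# A minimal sketch of the consume/process/emit cycle of an event-driven
# microservice. Event shapes and the handler below are hypothetical.

def handle_order_placed(event):
    """Apply this service's business logic and build an output event."""
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {
        "type": "OrderTotalCalculated",
        "order_id": event["order_id"],
        "total": total,
    }

# Stand-ins for the input and output event streams.
input_stream = [
    {"type": "OrderPlaced", "order_id": 1,
     "items": [{"price": 10.0, "qty": 2}, {"price": 5.0, "qty": 1}]},
]
output_stream = []

for event in input_stream:                            # consume input events
    output_stream.append(handle_order_placed(event))  # emit an output event

# output_stream now holds one OrderTotalCalculated event with total 25.0
```

The same shape applies whether the service is a long-running application or a function invocation: input events in, business logic applied, output events out.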

This combination of event streams and microservices forms an interconnected
graph of activity across a business organization. Traditional computer
architectures, composed of monoliths and inter-monolith communications, also
have a similar graph structure. Both of these are shown below in Figure 1-1.

Figure 1-1. The graph structures of microservices and monoliths
Identifying how to make this graph structure operate efficiently involves looking
at the two major components, the nodes and the connections. First, we will take a
look at the nodes by examining the concept of the bounded context. Following
that, we will take a look at the connections between services, which are heavily
related to and influenced by an organization’s communication structures.

Introduction to Domain-Driven Design and Bounded Contexts
Domain-Driven Design, as coined by Eric Evans in his book of the same title,
introduces some of the necessary concepts for building event-driven
microservices. Given the wealth of articles, books, and blogs-of-the-month
readily available to talk about this subject, I will keep this section brief.

The following concepts underpin Domain-Driven Design.

Domain

The problem space that an organization occupies and provides solutions to.
This domain encompasses everything that the business must do, including rules,
processes, ideas, business-specific terminology, and all things related to
that business problem space, regardless of whether the business concerns
itself with them or not. The domain exists regardless of the existence of the
business.

Sub-domain

A decomposition of the main domain into multiple sub-domains. Each sub-domain
focuses on a specific subset of responsibilities, and typically reflects some
of the organizational structure of the organization (such as Warehouse, Sales,
Engineering). A sub-domain can be seen as a domain in its own right.
Sub-domains, like the domain itself, belong to the problem space.

Domain (and Sub-Domain) Models

A model is an abstraction of the actual domain useful for business purposes. The
pieces and properties of the domain that are most important to the organization
are used to generate the model. The main domain model of an organization is
discernible through the products the organization provides its customers, the
interfaces by which customers interact with the products and the various other
processes and functions by which the organization fulfils its stated goals. Models
often need to be refined as the domain changes and as other existing
characteristics of the domain become important to the business. A domain model
is part of the solution space as it is a construct used by the business to solve
problems.

Bounded Context

A bounded context is aligned with a sub-domain model. It defines the logical
boundaries, including the inputs, outputs, events, requirements, processes,
and data models relevant to the sub-domain. While ideally a bounded context
and a sub-domain will be in complete alignment, legacy systems, technical
debt, and third-party integrations often form exceptions. Bounded contexts are
also a property of the solution space, and have a significant impact on how
microservices interact with one another.

Bounded contexts should be highly cohesive. The internal operations of the
context should be intensive and highly related, with the vast majority of
communication occurring internally rather than cross-boundary. The
co-existence of highly cohesive responsibilities allows for reduced design
scope and simpler implementations.

Connections between bounded contexts should be loosely coupled, as changes
made within one bounded context should minimize or eliminate the impact on
neighbouring contexts. Loose coupling can ensure that requirement changes in
one context do not propagate a surge of dependent changes to neighbouring
contexts.

Leveraging Domain Models and Bounded Contexts

Every organization forms a single domain between itself and the outside world.
Everyone working within the organization is operating to support the needs of the
domain of the organization.

This is broken down into sub-domains within the organization - perhaps, for a
technology-centric company, into an engineering department, a sales department
and customer support department. Each sub-domain has its own requirements
and duties, and in turn may be sub-divided again. This repeats until the sub-
domain models are granular and actionable, and can be formed into small and
independent services by the implementing teams. Bounded contexts are
established around these sub-domains, which form the basis of the creation of our
microservices.

Aligning Bounded Contexts with Business Requirements

It is very common for the business requirements of a product to change during its
lifetime, due to organizational change, new feature requests, and other domain
model changes. These changes to requirements occur far more frequently than
the need to change the underlying technological implementation. We draw the
bounded contexts based on the business requirements of our domain models
precisely because these are the areas which are subject to the most change. This
allows the subsequent changes made to the microservice implementations to be
performed in a loosely coupled and highly cohesive way. Aligning bounded
contexts on business requirements provides a team with the autonomy to design
and implement a solution for the specific business needs. This autonomy and
singular focus of responsibilities greatly reduces inter-team dependencies and
allows each team to focus strictly on their own requirements. The requirements
of other teams are left up to those teams to handle on their own.

Conversely, aligning microservices on technical requirements is problematic.
This pattern is more often seen in improperly designed synchronous
point-to-point microservices, and in traditional monolith-style computing
systems where teams own specific technical layers of the application. The main
issue with technological alignment is that multiple bounded contexts must be
crossed to fulfill a given business function. This may involve multiple teams
with differing schedules and responsibilities, and it distributes the
responsibility of fulfilling the business function across multiple bounded
contexts, none of which is then solely responsible for ensuring business
functionality. Each of the services becomes coupled to another across both
team and API boundaries, making changes difficult and expensive. A seemingly
innocent change, a bug, or a failed service can have serious ripple effects on
the business-serving capabilities of all services requiring the use of the
technical system.

Technical alignment is seldom used in event-driven microservice architectures
and should be avoided whenever possible. The sensitivity of systems to change
is reduced by the elimination of cross-cutting technological and team
dependencies. Figure 1-2 shows two scenarios: sole ownership on the left and
cross-cutting ownership on the right. With sole ownership, the team is fully
organized around the two independent business requirements (bounded contexts),
and has complete control over both its application code and the database
layer. On the right, the teams have been organized by technical requirements,
where the application layer is managed separately from the data layer. This
creates explicit dependencies between the teams, as well as implicit
dependencies between the business requirements.

Figure 1-2. Alignment of business contexts vs alignment on technological contexts
While event-driven microservices architectures are very favourable to modelling
around business requirements, there are tradeoffs with this approach. Code may
be replicated a number of times and very similar data-access patterns may be
used by many services. There may be a desire to share a common data source, or
couple on a technical boundary. In these cases it is important to recognize that the
subsequent tight coupling may be far more costly in the long run than repeating
logic or rereading data. These fundamental tradeoffs of event-driven
microservice design will be examined in greater detail throughout the book.

Additionally, each vertical team is required to have full-stack expertise,
which can be complicated by the need for specialized skillsets and access
permissions. The organization should operationalize the most common
requirements such that these vertical teams can support themselves, while more
specialized skillsets can be provided on a cross-team, as-needed basis. These
best practices are covered in more detail in chapter TODO on tooling.
Communication Structures
Communication structures form the basis of how an organization operates, as
teams, systems and people all must communicate with one another to fulfil their
goals. These communications form an interconnected topology of dependencies
called a communication structure. In this section we’ll look at three main
communication structures, their roles and how they affect the way businesses
operate.

Business Communication Structures

The business communication structure is how a business communicates between
teams and departments. A major part of this is the requirements and
responsibilities that each team must fulfill: engineering produces the
software products, sales sells to customers, support ensures that customers
and clients are satisfied, and so on. The organization of teams and the
provisioning of their goals, from the major business units down to the work of
the individual contributor, fall under this structure. Business requirements,
their assignment to teams, and the compositions of teams all change over time,
which can greatly impact the relationship between the business communication
structure and the implementation communication structures.

Figure 1-3. Sample Business Communications Structure

Implementation Communication Structures

The implementation communication structure is the data and logic which serves
the requirements of the product as dictated by the business. It is a hardening of
business processing, data structures, and system design to perform business
operations quickly and efficiently. This results in a trade-off in flexibility for the
business communication structure, as redefining the business requirements
satisfied by the implementation requires a rewrite of the logic. The
implementation communication structure is the realization of the data pertaining
to the sub-domain model.

The quintessential example of an implementation communication structure in
software engineering is the monolithic database application. The application
uses the database as its primary (or sole) means of communicating with other
parts of the application. Non-engineering teams use established processes and
tools to optimize and quicken their workflows. In both cases, the requirements
are always dictated by the business communication structure.

Figure 1-4. Sample Implementation Communications Structure

Data Communication Structures

The data communication structure is the mechanism through which data is
communicated across the business and between implementations. Email, instant
messaging, and meetings may be used for communicating business changes, but
data communication structures have largely been neglected for software
implementations. This role has usually been fulfilled ad hoc, from system to
system, with the implementation communication structure playing double-duty
both for its own requirements and as the provider of data for other
implementations. This has caused many problems in how companies grow and
change over time, and we will take a much closer look at the impact in the
next section.

Figure 1-5. Sample Ad-Hoc Data Communications Structure

Summary of Communication Structures

The communication structures within your organization affect the way in which
products are built. You may be familiar with this quote by Melvin Conway
regarding this relationship, as it often appears in microservice-based books,
articles, and blog posts. This is known as Conway’s law.

Organizations which design systems … are constrained to produce designs
which are copies of the communication structures of these organizations.

—Melvin Conway, How Do Committees Invent? (April 1968)

Conway’s law implies that a team will build products according to the
communication structures of their organization. Business communication
structures organize people into teams, and these teams will typically produce
products that are delimited by their team boundaries. Implementation
communication structures provide access to the sub-domain data models for a
given product, but also restrict access to other products due to the weak data-
communication capabilities.

Domain data models are often needed by other bounded contexts within an
organization, since these concepts span the domain of the business.
Implementation communication structures are generally poor at providing this
mechanism, but are quite good at providing access to the data within the
implementation itself. The implementation communication structure influences
the design of products in two ways. First, by discouraging the creation of
new, logically separate products, due to the inefficiencies of communicating
the necessary domain data across the organization. Second, by providing easy
access to existing domain data, at the expense of expanding the domain to
encompass the new business requirements. This latter pattern is embodied by
monolithic designs.

Data communication structures play a pivotal role in how an organization
designs and builds products. Unfortunately for many organizations, this is
precisely the structure that has long been missing. Implementation
communication structures frequently play double-duty, serving the needs
internal to their bounded context while also acting as the means of
communicating data across the organization.

Organizations often attempt to mitigate the inability to access domain data
from other implementations. Shared databases are common, though these promote
unhealthy design patterns and often cannot scale sufficiently. Databases may
provide read-only replicas, though this can unnecessarily expose their inner
data models, and performance issues are always a concern. Batch processes can
dump data to a file store to be read by other processes, but this can create
issues around data consistency and multiple sources of truth. Lastly, all of
these forms result in a strong coupling between implementations, and further
harden an architecture into direct point-to-point relationships.

If you feel that it is too hard to access data, or that your products are scope-
creeping because a single implementation is where all the data is, you’re likely
experiencing the effects of poor data-communication structures. A weak or non-
existent data-communication structure can be a serious problem for an
organization, especially one that is trying to rapidly scale up. In the next section
we will take a look at how these communication structures influence decision
making.

Communication Systems in Traditional Computing


A business’s communication structures have great influence on how engineering
implementations are created. The focus of a team on specific business
requirements encourages solutions built upon the communication structures of
that team. Let’s see how this works in practice.

Consider the following scenario. A single team has a single service backed by a
single data store. They are happily providing their business function and all is
well in the world. One day the team lead comes in with a new business
requirement. It’s somewhat related to what they’re already doing and could
possibly just be added on to their existing service. However, it’s also different
enough that it could also go into its own new service.

The team is at a crossroads: do they implement this new business functionality
in the same service, or do they create a new service and add it there? Let's
take a look at their options in a bit more detail.

Option 1: Make a New Service

The business functionality is different enough that it could make sense to put
it into a new service. But what about data? This new business function needs
some of the old data, but that data is currently locked up in their original
service. Additionally, the team doesn't really have a process for bringing up
fully new, independent services. On the other hand, their team is getting to
be a bit big and the company is growing quickly… if they had to divide their
team in the future, having modular and independent systems would make dividing
up ownership much easier.

There are risks associated with this approach. The team must figure out a way
to source data from their original data store and sink it to their new data
store. They need to ensure that they don't expose the inner workings of the
original service, and they need to know whether the changes they make to their
data structures will affect any other teams copying their data. Additionally,
the copied data will always be somewhat stale, as they can only afford to
query production data every 30 minutes so as not to over-saturate the
datastore. This connection will need to be monitored and maintained to ensure
that it is running correctly.

There is also a risk in spinning up and running a new service. They will need to
manage two datastores, two services, and ensure they both have established
logging, monitoring, testing, deployment and rollback processes. They must also
ensure that they synchronize any data structure changes so as not to affect the
dependent system.

Option 2: Add it to the existing service

The other option is to create the new data structures and business logic within the
existing service. The required data is already in the datastore, and the logging,
monitoring, testing, deployment and rollback processes are already defined and
used. The team is familiar with the system and they can get right to work on
implementing the logic, and their monolithic patterns support this approach to
service design.

There are also risks associated with this approach, though they are subtler
and less obvious up front. Boundaries within the implementation can blur as
changes are made, especially since modules are often bundled together in the
same code base. It is far too easy to quickly add features by crossing those
boundaries and coupling directly across modules. This is a major boon to
moving quickly, but it comes at the cost of tight couplings, reduced cohesion,
and a lack of modularity. Though this can be guarded against, it requires
excellent planning and strict adherence to boundaries, which often falls by
the wayside under tight schedules, inexperience, and shifting service
ownership.

Choices

Most teams would be inclined to choose the second option, adding the
functionality to the existing system. There is nothing wrong with this choice:
monolithic architectures are very useful and powerful structures, and can
provide exceptional value to a business. The first option runs headfirst into
the two problems associated with traditional computing. One, accessing another
system's data is difficult to do reliably, especially at scale and in
realtime. Two, creating and managing a new service has substantial overhead
and risk, especially if there is no established way of doing so within the
organization.

But Why?

Accessing local data is always easier than accessing data stored in another
datastore. Any data encapsulated in another team’s datastore is difficult to obtain,
for both implementation and business communication boundaries must be
crossed. This becomes increasingly difficult to maintain and scale as data,
connection count and performance requirements scale up.

Though the idea of copying the necessary data over is worthy, it is not a
foolproof approach. This model encourages many direct point-to-point
couplings, which become problematic to maintain as an organization grows, as
business units and ownership change, and as products mature and phase out. A
strict technical dependency is created between the implementation
communication structures of both teams, requiring them to work in
synchronicity whenever a data change is made. Special care must be taken to
ensure that the internal data model of an implementation is not unduly
exposed, lest the sink couple tightly to it. Scalability, performance, and
system availability are often issues for both systems, as the data
replication query may place an unsustainable load on the source system.
Failed sync processes may not be noticed until an emergency occurs. And
tribal knowledge may result in a team copying a copy of data, thinking that
it is the original source of truth.

Data copied in batch will always be somewhat stale by the time the query is
complete and the data is transferred. The larger the dataset and the more
complex its sourcing, the more likely a copy will be out of sync with the
original. This is problematic when systems expect one another to have perfect,
up-to-date copies. For instance, a reporting service may report different
values than a billing service due to data of varying staleness. This can cause
serious downstream effects on service quality, reporting, analytics, and
monetary-based decision making.

This inability to correctly disseminate data throughout a company is not due
to a fundamental flaw in the idea of doing so. Quite the contrary: this
inability is due to a weak or non-existent data communication structure. In
the scenario above, our imaginary team is committing their implementation
communication structure to double-duty, acting also as an extremely limited
data communication structure.

One of the core tenets of event-driven microservices is that core business data
should be easy to obtain and be reusable by any service that requires it. This
replaces the ad-hoc data-communication structure in our scenario with a
formalized data communication structure. For our imaginary team, this data
communication structure could eliminate most of the difficulties of obtaining
data from other systems.

Team, Continued:

Fast-forward a year. The team decided to go with option two and co-locate the
new features within the same service. It was quick, it was easy, and they have
implemented a number of new features since then. The company has grown, the
team has grown, and now it is time for it to be reorganized into smaller, more
focused teams.

Each new team must now be assigned certain business functions from the
previous service. The business requirements of each team are neatly divided
based on the areas of the business that need the most attention. The division
of the implementation communication structure, however, is not proving to be
easy. Just as before, it seems that the teams each require large amounts of
the same data to fulfil their requirements. New questions arise: Which team
should own which data? Where should the data reside? And what about the data
that both teams need to modify?

The team leads decide that it may be best to just share the service instead,
with each team working on different parts. This will require a lot more
cross-team communication and synchronization of efforts, which may be a drag
on productivity. And what about the future, if the teams double in size again?
Or if the business requirements change enough that they are no longer able to
fulfil everything with the same data structure?

Conflicting Pressures

There are two conflicting pressures on the team. The team was influenced to
keep all of its data local to one service to make adding new business
functions quicker and easier, at the cost of increased complexity in the
implementation communication structure. Eventually the growth of the team
necessitated that the business communication structure be split up, a
requirement that was followed by the reassignment of business requirements to
the new teams. The implementation communication structure, however, cannot
support the reassignments in its current form, and needs to be decomposed into
suitable components. Neither approach is scalable, and both point to a need to
do things differently. These problems all stem from the same root cause: a
weak, ill-defined means of communicating data between implementation
communication structures.

Event-Driven Communication Structures

The event-driven approach offers a different choice for how implementation and
data communication structures operate. Event-based communications are not a
drop-in replacement for request-response communications, but rather a
completely different way of communicating between services. An event-streaming
data communication structure decouples the production and ownership of data
from access to it. Services no longer couple on a point-to-point API, but
instead couple on the definition of the event data within the event streams.
Producers limit their responsibilities to producing well-defined data into
their respective event streams.

Events are the basis of communication

All shareable data is published to a set of event streams, forming a
continuous, canonical narrative detailing everything that has happened in the
organization. This becomes the backbone through which systems communicate with
one another. The flexibility of event definitions allows nearly anything to be
communicated as an event, from simple occurrences to complex, stateful
records. It must be noted that the events are the data. They are not merely
signals indicating that data is ready elsewhere, nor are they used just as a
means of direct data transfer from one implementation to another. Rather,
they act both as a store of meaning and as a means of asynchronous
communication between services.
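The two kinds of events mentioned above, a simple occurrence and a stateful record, might be modelled as plain data classes. This is a minimal sketch; all type and field names here are hypothetical:

```python
# Hypothetical event definitions illustrating that the event carries the
# data itself. All field names are invented for this example.

from dataclasses import dataclass

@dataclass(frozen=True)
class UserLoggedIn:
    """A simple occurrence: the fact itself is the payload."""
    user_id: str
    timestamp: int

@dataclass(frozen=True)
class UserAccount:
    """A stateful record: the full current state of the entity."""
    user_id: str
    email: str
    plan: str

login = UserLoggedIn(user_id="u1", timestamp=1586400000)
account = UserAccount(user_id="u1", email="a@example.com", plan="pro")

# By contrast, an event like {"user_id": "u1", "see": "users_db"} would be a
# mere signal that the real data lives elsewhere, the anti-pattern described
# in the text above.
```

Because each event is complete in itself, a consumer reading the stream needs nothing beyond the event to act on it.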

Event Streams provide the Single Source of Truth

Each event in a stream is a statement of fact, and together these events form
the single source of truth. This definitive source of truth is the basis of
communication for all systems within the organization. Treating the event
stream narrative as the single source of truth is a necessary convention that
must be adopted by the organization, for a communication structure is only as
good as the veracity of its information. Should the event streams fail to
form the canonical narrative because some teams choose to put conflicting
data in other locations, their usefulness as the data communications backbone
of the organization is essentially eliminated.

Consumers perform their own Modelling and Querying

The event-based data-communications structure differs from an over-extended
implementation communication structure in that it is incapable of providing
any querying or data-lookup functionality. All business and application logic
must be encapsulated within the producer and consumer of the events.

Data access and modelling requirements are completely shifted down to the
consumer, with each consumer obtaining their own copy of events from the
source event streams. Any querying complexity is also shifted from the data
owner’s implementation communication structure to the consumer’s
implementation communication structure. The consumer remains fully
responsible for any mixing of data from multiple event streams, special query
functionality, or other business-specific implementation logic. Both producers and
consumers are otherwise relieved of their duty to provide querying mechanisms,
data transfer mechanisms, APIs and cross-team services for the means of
communicating data. They are now limited in their responsibility to only solving
the needs of their immediate bounded context.
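The shift of modelling to the consumer can be sketched as a fold over the stream: the consumer reads every event and builds whatever local state its bounded context needs. The event shapes and the "open orders" model below are hypothetical; the technique is the point.

```python
# Hypothetical event stream, consumed from the beginning.
order_events = [
    {"type": "OrderPlaced",    "order_id": "o-1", "total_cents": 1999},
    {"type": "OrderPlaced",    "order_id": "o-2", "total_cents": 500},
    {"type": "OrderCancelled", "order_id": "o-2"},
]

def materialize(events):
    """Fold the stream into this consumer's own model: open orders only.
    Which fields to keep, and the shape of the state, is entirely the
    consumer's choice; the producer is not involved."""
    state = {}
    for e in events:
        if e["type"] == "OrderPlaced":
            state[e["order_id"]] = e["total_cents"]
        elif e["type"] == "OrderCancelled":
            state.pop(e["order_id"], None)
    return state

open_orders = materialize(order_events)
```

Any query a consumer needs to serve is answered from `open_orders`, its own local materialization, rather than from the producer's API.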

Improved Data Communication across the Organization

The usage of a data-communications structure is an inversion, with all
shareable data being exposed outside of the implementation communication
structure. Not
all data is required to be shared, and as such not all of it needs to be published to
the set of event streams. However, any data which is of interest to any other team
or service must be published to the common set of event streams, such that the
production and ownership of data becomes fully decoupled. This provides the
formalized data-communication structure that has long been missing from system
architectures, allowing us to better meet the bounded context principles of loose
coupling and high cohesiveness.

Applications may now access data that would otherwise have been laborious to
obtain via point-to-point connections. New services can simply acquire any
needed data from the canonical event streams, create their own models and state,
and perform any necessary business functions without depending on direct point-
to-point connections or APIs with any other service. This unlocks the potential
for an organization to use the vast amounts of data it has more effectively in any
product, and even mix data from multiple products across its organization in
unique and powerful ways.
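Mixing data from multiple products reduces, in the simplest case, to a local join over materialized streams. The two streams and their fields below are invented for illustration; note that the consuming service never calls either owning service directly.

```python
# Hypothetical streams owned by two different teams, already consumed locally.
customers = [{"customer_id": "c-1", "name": "Ada"}]
orders    = [{"order_id": "o-1", "customer_id": "c-1", "total_cents": 1999}]

# The consuming service joins the streams into its own model;
# no point-to-point API call to either owning service is needed.
names_by_id = {c["customer_id"]: c["name"] for c in customers}
report = [{"name": names_by_id[o["customer_id"]],
           "total_cents": o["total_cents"]}
          for o in orders]
```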

Supports Business Communication Changes
Event streams contain core domain events that are central to the operation of the
business. Though teams may restructure and projects may come and go, the
important core domain data remains readily available to any new product that
requires it, independent of any specific implementation communication structure.
This gives the business unparalleled flexibility, as access to core domain events
no longer relies upon any particular implementation.

Asynchronous Event-Driven Microservices


Event-driven microservices enable the business logic transformations and
operations necessary to meet the requirements of the bounded context. These
applications are tasked with fulfilling these requirements and emitting any of
their own necessary events to other downstream subscribers. Here are a few of
the primary benefits of using event-driven microservices. This is not an
exhaustive list, but it’s sufficient to show the main themes that will be visited in
this book.

Granular - Map neatly to bounded contexts, and can be easily rewritten when
business requirements change.

Scalable - Individual services can be scaled up and down as needed.

Technological Flexibility - Can select the most appropriate languages and
technologies for solving the problem at hand. Allows for easy prototyping
using pioneering technology.

Business Requirement Flexibility - Ownership of granular microservices is easy
to reorganize. Cross-team dependencies are reduced when compared to large
services, and the organization can react more quickly to changes in business
requirements that would otherwise be impeded by barriers to data access.

Loosely Coupled - Event-driven microservices are coupled on domain data and
not on a specific implementation API. Data schemas can be used to greatly
improve the management of data changes, as will be discussed in Chapter 3.

Supports Continuous Delivery - Easy to ship a small, modular microservice, and
to roll it back if needed.

Highly Testable - Microservices tend to have fewer dependencies than large
monoliths, making it easier to mock out the required testing endpoints and
ensure proper code coverage.
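The loose-coupling point about schemas can be illustrated with a versioned event definition. One common compatibility technique (the event name and field are hypothetical here) is to add new fields with defaults, so that consumers decoding older events are not broken by the schema change.

```python
from dataclasses import dataclass

@dataclass
class OrderPlacedV2:
    order_id: str
    total_cents: int
    currency: str = "USD"  # New in v2; the default keeps old events decodable.

old_event = {"order_id": "o-1", "total_cents": 1999}  # produced before v2
new_event = {"order_id": "o-2", "total_cents": 500, "currency": "EUR"}

decoded_old = OrderPlacedV2(**old_event)   # default fills the missing field
decoded_new = OrderPlacedV2(**new_event)
```

Schema registries and formats such as Avro formalize exactly this kind of evolution rule; Chapter 3 covers schema management in depth.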

Example Team, Revisited, Using Event-Driven Microservices

Let’s revisit the team again but with an event-driven data communication
structure.

A new business requirement is introduced to the team. It's somewhat related to
what their current products do, but it's also different enough that it could go into
its own service. Does adding it to an existing service violate the single
responsibility principle and overextend the currently defined bounded context?
Or is it a simple extension, perhaps the addition of some new related data or
functionality, of an existing service?

Previous technical issues, such as figuring out where to source the data and how
to sink it, handling batch syncing issues and implementing synchronous APIs are
largely removed now. The team can spin up a new microservice and ingest the
necessary data from the event streams, all the way back to the beginning of time
if necessary. It is entirely possible that the team mixes in common data used in
their other services, so long as that data is used solely to fulfil the needs of the
new bounded context. The materialization and structure of this data is left
entirely up to the team, which can choose which fields to keep and which to
discard.

Business risks are also alleviated, as small, finer-grained services permit
single-team ownership, letting teams scale and reorganize as necessary.
When their team scales up to be too large to manage under a single business
owner, they can split it up as required and reassign the microservice ownership.
By following the single writer principle, it can be easily determined which
services own which data, and organizational decisions can be made to reduce the
amount of cross-team communication required to perform future work.

The microservice nature prevents spaghetti code and expansive monoliths from
taking hold, provided that the overhead for creating new services and obtaining
the necessary data is minimal. Scaling concerns are now focused on individual
event-processing services, which can scale their CPU, memory, disk and instance
count as required. The remaining scaling requirements are offloaded onto the
data communication structure, which must ensure that it can handle the various
loads of services reading and writing to its event streams.

To do all of this, however, the team needs to ensure that the data is indeed present
in the data communication structure, as well as have the means for easily
spinning up and managing a fleet of microservices. Solving these issues
requires organizational adoption of an event-driven microservice architecture,
but once solved, the solutions can be reused by every team within the organization.

Synchronous Microservices
Microservices can be implemented asynchronously using events, such as what I
am advocating in this book, or synchronously, which is common in service-
oriented architectures. Synchronous microservices tend to be fulfilled using a
request-response approach, where services communicate directly through APIs to
fulfill business requirements.

Why Synchronous Doesn’t Scale

There are a number of issues with synchronous microservices that make them
difficult to use at large scale. This is not to say that a company cannot find
success by using synchronous microservices, as evidenced by the successes
of companies such as Netflix, Lyft, Uber, and Facebook, to name a few. But many
companies have also made fortunes using archaic and horribly tangled spaghetti-
code monoliths, so do not confuse the financial success of a company with the
quality of the underlying architecture. There are a number of books that describe
how to implement synchronous microservices (example 1, example 2), and I will
refer you to read those to get a better understanding of the synchronous
approaches.

Furthermore, note that there is no real consensus in the microservice field about
point-to-point synchronous microservices or asynchronous event-driven
microservices being strictly better. Both have their place in an organization, and
some tasks are better suited to one over the other. I will leave it up to you to
make up your own mind on how you’d like to proceed with your design
selections. I will, however, note the biggest impediments to synchronous point-
to-point microservices below, and leave it to you to consider if these are deal-
breakers for you or not.

Point-to-point couplings

Synchronous microservices rely on other services to help them perform their
business tasks. Those services, in turn, have their own dependent services,
which have their own dependent services, and so on. This can lead to excessive
fanout and difficulty in tracing which services are responsible for fulfilling specific
parts of the business logic. The number of connections between services can
become staggeringly high, which further entrenches the existing communication
structures and makes future changes more difficult.

Dependent Scaling

The ability to scale up your own service depends on the ability of all
dependent services to scale up as well, and is directly related to the degree
of communications fanout. Implementation technologies can be a bottleneck on
scalability. This is further complicated by highly variable loads and surging
request patterns, which all need to be handled synchronously across the entire
architecture.

Handling Service Failures

If a dependent service is down, then decisions must be made about how to handle
the exception. Deciding how to handle the outages, when to retry, when to fail,
and how to recover to ensure data consistency becomes increasingly difficult the
more services that exist within the ecosystem.
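A retry loop with exponential backoff is one common answer to these questions, and it is a policy decision every synchronous caller must make per dependency. The sketch below is illustrative only: the function names are invented, and a production version would also need timeouts, jitter, and likely a circuit breaker.

```python
import time

def call_with_retry(request_fn, attempts=3, base_delay=0.01):
    """Retry a synchronous call with exponential backoff.
    Each caller must decide when to retry, when to fail, and how to
    recover; these choices compound as the service graph grows."""
    for attempt in range(attempts):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # Retries exhausted: surface the failure to the caller.
            time.sleep(base_delay * (2 ** attempt))

# A fake dependency that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("dependency unavailable")
    return "ok"

result = call_with_retry(flaky)
```

In an event-driven design, by contrast, the broker buffers events while a consumer is down, so much of this per-call policy simply disappears.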

API versioning and dependency management

Multiple API definitions and service versions will often need to exist at the same
time. It is not always possible or desirable to force clients to upgrade to the
newest API. This can add much complexity in orchestrating API change requests
across multiple services, especially if they are accompanied by changes to the
underlying data structures.

Data Access tied to the Implementation

Synchronous microservices have all the same problems as traditional services
when it comes to accessing external data. Although there are service design
strategies for mitigating the need to access external data, microservices will
often still need to access commonly used data from other services. This puts the onus
of data access and scalability back on the implementation communication
structure.

Distributed Monoliths

Services may be composed in such a way that they act as a distributed
monolith, with many intertwining calls being made between the services. This often arises when
a team is decomposing a monolith and decides to use synchronous point-to-point
calls to mimic the existing boundaries within their monolith. Point-to-point
services make it easy to blur the lines between the bounded contexts as the
function calls to remote systems can slot in line-for-line with existing monolith
code.

Testing

Integration testing can be difficult, as each service requires fully
operational dependencies, which in turn require their own as well. Stubbing them out may work
for unit tests, but seldom proves sufficient for more extensive testing
requirements.

The Benefits of Synchronous Microservices

There are a number of undeniable benefits provided by synchronous
microservices. Certain data access patterns are very favourable to direct
request-response couplings, such as authenticating a user or registering and
reporting on an A/B test. Integrations with company-external, third-party
solutions almost always use a synchronous mechanism, and generally provide a
flexible, language-agnostic communication mechanism over HTTP.

Tracing operations across multiple systems can be easier in a synchronous
environment than in an asynchronous one. Detailed logs can show which
functions were called on which systems, allowing for high debuggability and visibility into
were called on which systems, allowing for high debuggability and visibility into
business operations.

Front-end services hosting web and mobile experiences are by and large powered
by request-response designs, regardless of their synchronous or asynchronous
nature. Clients receive a timely response dedicated entirely to their needs.

The experience factor is also quite important, especially as many developers
in today's market tend to be much more experienced with synchronous,
monolithic-style coding. This makes acquiring talent for synchronous systems
easier, in general, than acquiring talent for asynchronous event-driven development.

A company's architecture could only rarely, if ever, be based entirely on
event-driven microservices. Hybrid architectures will certainly be the norm,
where synchronous and asynchronous solutions are deployed side-by-side as the
synchronous and asynchronous solutions are deployed side-by-side as the
problem space requires. With all that being said though, the vast majority of
business functionality can be implemented more effectively using event-driven
microservices.

Summary

Event-driven microservices communicate through durable event streams that form
the single source of truth for an organization's data. This decouples the
production and ownership of data from access to it, shifting modelling and
querying down to consumers and freeing services to focus on their own bounded
contexts. Synchronous, point-to-point microservices remain a good fit for
certain patterns, but their couplings, dependent scaling, and testing
difficulties make the event-driven approach more effective for the majority of
business functionality.