30-Book-Microservices Best Practices For Java

1. Overview
 Application architecture patterns are changing in the era of cloud computing.
 A convergence of factors has led to the concept of “cloud native” applications:
 › The general availability of cloud computing platforms
 › Advancements in virtualization technologies
› The emergence of agile and DevOps practices as organizations looked to streamline and
shorten their release cycles

 To best take advantage of the flexibility of cloud platforms, cloud native applications are composed of
smaller, independent, self-contained pieces that are called microservices.
 This chapter provides a brief overview of the concepts and motivations that surround cloud native applications
and microservice architectures:
 Cloud native applications
 Twelve factors
 Microservices
 Philosophy and team structure
 Examples

1.1 Cloud native applications


 Cloud computing environments => dynamic, on-demand allocation of resources from a virtualized, shared pool.
 According to the Cloud Native Computing Foundation, cloud native systems have the following properties:
 Applications or processes are run in software containers as isolated units.
 Processes are managed by using central orchestration processes to improve resource utilization and
reduce maintenance costs.
 Applications or services (microservices) are loosely coupled with explicitly described dependencies.
 The eight fallacies of distributed computing (the first seven were drafted in 1994; the eighth was added later) deserve a mention:
• The network is reliable.
• Latency is zero.
• Bandwidth is infinite.
• The network is secure.
• Topology doesn’t change.
• There is one administrator.
• Transport cost is zero.
• The network is homogeneous.
1.2 Twelve factors
 The twelve factors are a set of guidelines or best practices for building portable, resilient applications that
thrive in cloud environments (specifically, software-as-a-service applications):
1. Codebase: There should be a one-to-one association between a versioned
codebase (for example, a Git repository) and a deployed service. The same codebase is used
for many deployments.
2. Dependencies: Services should explicitly declare all of their dependencies, and not rely on
dependencies leaking in from other microservices.
3. Configuration: Configuration that varies between deployment environments should be stored in
the environment (specifically, in environment variables).
4. Backing services: All backing services are treated as attached resources, which are
managed (attached and detached) by the execution environment.
5. Build, release, run: The delivery pipeline should have strictly separate stages: build, release,
and run.
6. Processes: Applications should be deployed as one or more stateless processes. Specifically,
transient processes must be stateless and share nothing. Persisted data should be stored in
an appropriate backing service.
7. Port binding: Self-contained services should make themselves available to other services
by listening on a specified port.
8. Concurrency: Concurrency is achieved by scaling individual processes (horizontal scaling).
9. Disposability: Processes must be disposable: fast startup and graceful shutdown
behaviors lead to a more robust and resilient system.
10. Dev/prod parity: All environments, from local development to production, should be as
similar as possible.
11. Logs: Applications should produce logs as event streams (writing to stdout and stderr) and trust
the execution environment to aggregate the streams.
12. Admin processes: If admin tasks are needed, they should be kept in source control and packaged
alongside the application to ensure that they are run with the same environment as the
application.
These factors do not have to be strictly followed to achieve a good microservice environment.
However, keeping them in mind allows you to build portable applications or services that can be
built and maintained in continuous delivery environments.
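Factor 3 (configuration in the environment) can be sketched in plain Java. The variable name DB_URL and the default value are illustrative assumptions, not from the source:

```java
// Sketch: read configuration from environment variables with a fallback,
// instead of baking environment-specific values into the codebase.
public class EnvConfig {
    static String get(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? defaultValue : value;
    }

    public static void main(String[] args) {
        // In production, the execution environment supplies DB_URL;
        // locally, the default keeps the service runnable.
        String dbUrl = get("DB_URL", "jdbc:postgresql://localhost:5432/dev");
        System.out.println("dbUrl=" + dbUrl);
    }
}
```

The same codebase then deploys unchanged to every environment; only the environment variables differ.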

1.3 Microservices
 Microservices are small, independent (autonomous), replaceable processes that communicate by using lightweight,
language-agnostic APIs.

1.3.1 The meaning of “small”


 Small = in terms of scope, not size
 Single Responsibility Principle => a microservice should do one thing, i.e., be focused in purpose
1.3.2 Independence and autonomy
 The monolith vs. microservices debate
 Downsides of a monolith:
 Infrequent, expensive, all-or-nothing maintenance windows
 Constrained technology choice due to well-intentioned, but heavy-handed, centralized governance
 A small change brings risk because everything is interrelated
 The system is not agile
 Agility = independent, autonomous, loosely coupled services that can be started and stopped independently
 Microservice advantages
 Build agile system
 Defined boundaries
 Act as an abstraction (encapsulation, a bounded context) that hides implementation details
 React and refactor swiftly as problems are identified
 Microservice advantages - Polyglot/ Language Agnostic
 Main benefit of microservice-based architectures.
 Choose appropriate language or data store independently
 Ensure that non-functional or regulatory requirements like maintainability, auditability, and data security can
be satisfied, while preserving agility and supporting innovation through experimentation.
 Polyglot using REST
 Polyglot applications rely on language-agnostic protocols such as REST
 RESTful architectures require the request state to be maintained by the client, allowing the server to be
stateless.
 REST uses HTTP
 Downside of REST
 REST/HTTP interactions are inherently synchronous (request/response) in nature.
 Asynchronous communications allow maintaining loose coupling between microservices.
 JSON has emerged as the wire format in microservices architectures, displacing XML
 Binary serialization frameworks do exist, such as Apache Avro, Apache Thrift, and Google Protocol Buffers.
However, use of these protocols should be limited to internal services and deferred until the interactions
between services are understood.

1.3.3 Resilience and Fault tolerance


 Services respond gracefully to the unexpected, such as receiving bad data or concurrent updates.
 API principle
 Be liberal in what you accept, and conservative in what you send.
 E.g., if an API expects two attributes, it should not break on receiving a third one; rather, it should log and ignore it
 No cascading failures
 Patterns like circuit breakers
 Patterns like bulkheads

1.3.4 Automated environment


 There are many services in the environment
 Managing them, and creating visualizations of the interaction patterns between microservices, is impractical by hand
 Automation is required:
 Continuous integration and continuous deployment practices and tools
 Automated monitoring and alerting
 Logs

1.4 Philosophy and team structure


 Relationship between microservices and team organization
 Conway’s law
 Conway's Law suggests that the structure of the software that a team or organization creates will mirror the
communication patterns and organizational structure of the team itself.
 E.g., DBA, front end, back-end team
 When teams are organized in a way that aligns with their specific expertise but doesn't prioritize seamless
integration, it can lead to integration difficulties.
 If a company has separate teams of DBAs, developers, and architects, Conway's Law suggests that their
software may reflect this division. The database architecture, application code, and system design might align
with these team boundaries, potentially leading to challenges in integration and collaboration between these
groups.
 Conway's law specifically focuses on communication and coordination between disparate groups of people.
 Creating an agile ecosystem that is capable of fast and frequent change requires streamlined communications,
and lots of automation.

1.5 Examples
-> Throughout this book, we use two example applications to help explain concepts and
provide perspective:
› Online retail store
› Game On!

1.5.1 Online retail store


 The online retail store is an example that is often used when discussing application architectures. This book
considers a set of microservices that together provide an application where users can browse for products and place
orders.
The architecture includes the following services:
› Catalog service
Contains information about what can be sold in the store.
› Account service
Manages the data for each account with the store.
› Payment service
Provides payment processing.
› Order service
Manages the list of current orders and performs the tasks that are required in the lifecycle of an
order.

 There are other ways to structure services for an online retail store. This set of services was chosen to illustrate
specific concepts that are described in this book. We leave the construction of alternate services as an exercise for the
reader.
1.5.2 Game On!
 The game is essentially a boundless two-dimensional network of interconnected rooms. Rooms are provided by third
parties (for example, developers at conferences or in workshops). A set of core services provides a consistent user
experience. The user logs in and uses simple text commands to interact with and travel between rooms. Both
synchronous and asynchronous communication patterns are used. Interactions between players, core services, and
third-party services are also secured.
2. Creating Microservices in Java
2.1 Java platforms and programming models
 New Java applications should embrace new language features like lambdas, parallel operations, and streams
 Try to establish a balance between code cleanliness and maintainability.
2.1.1 Spring Boot
 Spring Boot provides mechanisms for creating microservices
 Spring Cloud is a collection of integrations between third-party cloud technologies and the Spring programming
model.
 How to connect Spring Boot with AWS/Azure?
- Spring Cloud for Amazon Web Services eases the integration with hosted Amazon Web Services
- Spring Cloud Azure is an open-source project that provides integration between your Spring applications
and Azure services
- Alternatively, use the AWS SDKs or Azure client libraries directly to integrate your Spring applications with these services

2.1.2 Dropwizard
 Dropwizard is a Java framework that provides mechanisms for creating microservices
 It offers an integrated set of tools like Jersey, Jetty, and Metrics for creating RESTful services with built-in
features for configuration, logging, and database integration. It's known for its opinionated, streamlined
approach, making it popular for developing production-ready applications.

2.1.3 Java Platform, Enterprise Edition


 Java EE (Java Platform, Enterprise Edition) can be used to create microservices.
 While Java EE was originally designed for monolithic enterprise applications, it has evolved to support the
development of microservices as well.
 Key Java EE technologies and specifications that are particularly well-suited for building microservices include:
- Java API for RESTful Web Services (JAX-RS): JAX-RS is a Java EE specification that simplifies the creation
of RESTful APIs. RESTful endpoints are commonly used in microservices architectures for communication
between services.
- Modern Application Servers: Modern Java EE application servers (e.g., WebSphere Liberty, Wildfly) have
become more lightweight and container-friendly, making them suitable for deploying microservices.
- CDI (Contexts and Dependency Injection) is a Java EE and Java SE specification for managing
dependencies and promoting modularity in applications. It enables dependency injection, defines
lifecycles for managed beans, supports events, qualifiers, and custom scopes, making it a powerful tool
for building loosely coupled and dynamic Java applications.
- Java Persistence API (JPA): JPA is often used in microservices to interact with databases and manage
data persistence.

2.2 Versioned dependencies


 Explicitly declared, versioned dependencies are one of the twelve factors (see 1.2)
 Build tools like Apache Maven/Gradle provide mechanisms for defining and managing dependency versions.

2.3 Identifying services


 A microservice should have a small, focused scope
 The question is how to build an application as a composition of independent pieces

2.3.1 Applying domain-driven design principles (DDD)


 Monolithic systems => Single unified domain model exists for the entire application
- A domain is a particular area of knowledge or activity.
- A model is an abstraction of important aspects of the domain that emerges over time, as the
understanding of the domain changes. This model is then used to build the solution, for cross-team
communications.
 What is DDD?
- Software design approach focusing on Domain Model
- Clean architecture => Separate domain logic from outside world/infrastructure
- Model Software to match Domain (area of activity)
- Stable Business Domain
 Build systems of abstractions; separate business logic from infrastructure
 Each service has its own domain model; there is no single application-wide domain model
 The core of the application is the domain, which is stable; the external infrastructure around it changes
 Microservices => multiple domain models

2.3.2 Ubiquitous language in Game On!


 Shared language and consistent understanding of microservices' components are crucial to avoid
misinterpretations and maintain effective collaboration among development teams.
 Ubiquitous language is a shared and precise vocabulary used in software development teams, ensuring clear
communication and alignment on domain concepts.
 It bridges the gap between technical and non-technical members, enhancing collaboration and facilitating
accurate representation of the application's business domain.

2.3.3 Translating domain elements into services


 Domain models contain a few well-defined types:
- Entity => object with a fixed identity and a well-defined “thread of continuity” or lifecycle. E.g., person
entity, most systems need to track a Person uniquely, regardless of name, address, or other attribute
changes.
- Value Objects => do not have a well-defined identity but are instead defined only by their attributes.
They are typically immutable so that two equal value objects remain equal over time. An address could
be a value object that is associated with a Person.
- Aggregate => cluster of related domain objects that are treated as a unit. Must be in a consistent
state. It has a specific entity as its root and defines a clear boundary for encapsulation. It is not just a
list.
- Services => used to represent operations or activities that are not natural parts of entities or value
objects.
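The entity/value object distinction above can be sketched in Java. The Person and Address classes and their fields are illustrative, echoing the examples in the text:

```java
import java.util.Objects;

// Value object: immutable, no identity; equal when its attributes are equal.
final class Address {
    final String street, city;
    Address(String street, String city) { this.street = street; this.city = city; }
    @Override public boolean equals(Object o) {
        return o instanceof Address a && street.equals(a.street) && city.equals(a.city);
    }
    @Override public int hashCode() { return Objects.hash(street, city); }
}

// Entity: fixed identity; attributes may change over its lifecycle.
final class Person {
    final String id; String name; Address address;
    Person(String id, String name, Address address) {
        this.id = id; this.name = name; this.address = address;
    }
    @Override public boolean equals(Object o) { return o instanceof Person p && id.equals(p.id); }
    @Override public int hashCode() { return id.hashCode(); }
}

public class DomainTypes {
    public static void main(String[] args) {
        Person p1 = new Person("42", "Ada", new Address("1 Main St", "Springfield"));
        Person p2 = new Person("42", "Ada Lovelace", new Address("9 Oak Ave", "Shelbyville"));
        // Same identity => same entity, even though every attribute differs.
        System.out.println("same entity: " + p1.equals(p2));
        // Same attributes => equal value objects.
        System.out.println("equal values: " + new Address("1 Main St", "Springfield")
                .equals(new Address("1 Main St", "Springfield")));
    }
}
```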
Domain elements in Game On!
 Player - Entity or value object?
- Players have a fixed ID and exist for as long as the player’s account does.
- The player entity has a simple set of value objects, including the player’s username, favorite color, and
location.
 Rooms - Entity or value object?
- Value Object as room attributes can change in various ways over time.
- Changeable attributes like the room’s name, descriptions of its entrances, and its public endpoint.
 Further, the placement of rooms in the map can change over time. If you consider each possible room location
as a Site, you end up with the notion of a Map as an Aggregate of unique Site entities, where each Site is
associated with changeable room attributes.

Converting domain elements into microservices


- Convert Aggregates and Entities into independent microservices, by using representations of Value
Objects as parameters and return values.
- Align Domain services (those not attached to an Aggregate or Entity) with independent microservices.
- Each microservice should handle a single complete business function.
 Applying these guidelines to the domain elements listed above for Game On!
- Player service provides the resource APIs (CRUD) to work with Player entities. Player API provides
additional operations for generating usernames and favorite colors, and for updating the player’s
location.
- Map service provides the resource APIs to work with Site entities, allowing developers to manage the
registration of room services with the game. The Map provides create, retrieve, update, and delete
operations for individual Sites within a bounded context. Data representations inside the Map service
differ from those shared outside the service, as some values are calculated based on the site’s position in
the map.
 Some additional services from the game have not been discussed:
- Mediator: The Mediator is a service that splintered out of the first implementation of the Player service.
Its sole responsibility is mediating between these WebSocket connections:
- One long running connection between the client device and the Mediator.
- Another connection between the Mediator and the target independent room service. From a domain
modeling point of view, the Mediator is a domain service.
- Rooms: What about individual room services? Developers stand up their own instances of rooms. For
modeling purposes, treat rooms, even your own, as outside actors.

2.3.4 Application and service structure


 Monolithic Java EE applications hosted as web archives (WAR files), enterprise archives (EAR files)
Internal structure
 Clear separation of concerns
 Separate the domain logic from any code that interacts with or understands external services.
 The architecture is like the Ports and Adapters (Hexagonal) architecture

Note how the domain logic is kept contained and separate from external entities.


 The example internal structure considers the following elements:
- Resources
Expose JAX-RS resources to external clients. This layer handles basic validation of requests and then
passes the information into the domain logic layer.
- Domain logic
Usually represents the entity itself, with parameter validation, state change logic, and so on.
- Repositories
Optional. An abstraction between the domain logic and data storage. A repository allows the backing
data store to be replaced without requiring extensive changes to the domain logic.
- Service connectors
Like repositories, service connectors provide an abstraction that encapsulates communications with
other services: a facade or “anti-corruption” layer that protects the domain logic from changes to
external resource APIs, and converts between API wire formats and internal domain model constructs.
 Each class in a microservice should perform one of these tasks:
- Perform domain logic
- Expose resources
- Make external calls to other services
- Make external calls to a data store
 These are general recommendations for code structure that do not have to be followed strictly. The important
characteristic is to reduce the risk of making changes.
DDD
 Refer 01-Domain Driven Design.

2.4 Creating REST APIs


 Good microservices API => Well Designed and documented
 Secured, Easy to manage.
2.4.1 Top down or bottom up?
 It is better to design the API first and then implement the code, rather than vice versa
 “Everything should be built from the top down, except the first time.”
 When doing a proof of concept, it can be unclear what APIs you will provide, so you can start with the
implementation and define the API afterward
2.4.2 Documenting APIs
 Open API Initiative => standardizing how RESTful APIs are described.
 The OpenAPI Specification is based on Swagger, which defines the structure and format of the metadata used to
create a Swagger representation of a RESTful API.
2.4.3 Use the correct HTTP verb
 REST APIs should use the standard HTTP verbs for CRUD operations.
 Check whether the operation is idempotent (safe to repeat multiple times).
 C
- Create using POST. POST is not idempotent: if a POST request is submitted multiple times, a
new, unique resource should be created as a result of each invocation.
 R
- Retrieve using GET. GET must be idempotent and should not be used for any update.
 U
- Update using PUT. PUT operations are usually idempotent: the request carries a complete copy of
the resource to be updated, so repeating it produces the same result.
 D
- Delete using DELETE. DELETE is usually idempotent.
2.4.4 Create machine-friendly, descriptive results
 The HTTP status code should be relevant and useful.
- Use 200 (OK) when everything is fine.
- Use 204 (NO CONTENT) when there is no response data
- Use 201 (CREATED) for POST requests that result in the creation of a resource
- Use 409 (CONFLICT) when concurrent changes conflict
- Use 400 (BAD REQUEST) when parameters are malformed
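The status-code guidance above can be exercised with the JDK's built-in HTTP server and client. The /accounts resource and its in-memory behavior are illustrative assumptions, not from the source:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StatusCodes {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/accounts", exchange -> {
            String method = exchange.getRequestMethod();
            if ("POST".equals(method)) {                       // 201: resource created
                byte[] body = "{\"id\":16}".getBytes();
                exchange.getResponseHeaders().add("Location", "/accounts/16");
                exchange.sendResponseHeaders(201, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            } else if ("DELETE".equals(method)) {              // 204: no response data
                exchange.sendResponseHeaders(204, -1);
            } else {                                           // 200: everything is fine
                byte[] body = "[]".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            }
            exchange.close();
        });
        server.start();

        String base = "http://localhost:" + server.getAddress().getPort() + "/accounts";
        HttpClient client = HttpClient.newHttpClient();
        for (String[] call : new String[][] {{"POST", base}, {"GET", base}, {"DELETE", base + "/16"}}) {
            HttpRequest req = HttpRequest.newBuilder(URI.create(call[1]))
                    .method(call[0], HttpRequest.BodyPublishers.noBody()).build();
            int status = client.send(req, HttpResponse.BodyHandlers.ofString()).statusCode();
            System.out.println(call[0] + " -> " + status);
        }
        server.stop(0);
    }
}
```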
2.4.5 Resource URIs and versioning
 Resources => nouns, not verbs
 Endpoints => plural
POST /accounts Create a new item
GET /accounts/16 Retrieve a specific item
PUT /accounts/16 Update a specific item
DELETE /accounts/16 Delete a specific item
GET /players/accounts Retrieve a list of all players
POST /players/accounts Create a new player
GET /players/accounts/{id} Retrieve a specific player
PUT /players/accounts/{id}/location Update a specific player’s location
Versioning
 Versioning allows an API to change without breaking existing clients.
 There are generally three ways to handle versioning a REST resource:
- › Put the version in the URI e.g., /api/v1/accounts
- › Use a custom request header
- › Put the version in the HTTP Accept header and rely on content negotiation
3. Locating Services
 Microservices need to scale, which raises the question of how services are located:
- › Service registry
- › Service invocation
- › API Gateway
3.1 Service registry
 A service registry is a persistent store that holds a list of all the available microservices at any time and the
routes that they can be reached on. There are four reasons why a microservice might communicate with a
service registry:
 › Registration
- After deployment, MS1 must register with the service registry.
 › Heartbeats
- MS1 should send regular heartbeats to the registry to show that it is ready to receive requests.
 › Service discovery
- MS1 must identify MS2 via Service registry
 › De-registration
- When MS1 shuts down, it must be deregistered from the service registry.
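The four interactions above can be sketched as a minimal, in-memory registry. A real registry (Consul, Eureka) adds persistence, replication, and heartbeat-timeout eviction; all names and endpoints here are illustrative:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ServiceRegistry {
    // service name -> (endpoint -> last-heartbeat timestamp)
    private final Map<String, Map<String, Long>> services = new ConcurrentHashMap<>();

    void register(String name, String endpoint) {
        services.computeIfAbsent(name, k -> new ConcurrentHashMap<>())
                .put(endpoint, System.currentTimeMillis());
    }

    // A heartbeat simply refreshes the registration timestamp.
    void heartbeat(String name, String endpoint) { register(name, endpoint); }

    Set<String> discover(String name) {
        return services.getOrDefault(name, Map.of()).keySet();
    }

    void deregister(String name, String endpoint) {
        services.getOrDefault(name, new ConcurrentHashMap<>()).remove(endpoint);
    }

    public static void main(String[] args) {
        ServiceRegistry registry = new ServiceRegistry();
        registry.register("accounts", "http://10.0.0.1:8080");   // MS1 registers after deployment
        registry.heartbeat("accounts", "http://10.0.0.1:8080");  // MS1 signals it is ready
        System.out.println("discovered: " + registry.discover("accounts")); // MS2 finds MS1
        registry.deregister("accounts", "http://10.0.0.1:8080"); // MS1 shuts down
        System.out.println("after deregister: " + registry.discover("accounts"));
    }
}
```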
3.1.1 Third-party registration versus self-registration
 Registration of a microservice with the service registry (SR) can be done by the microservice itself or by a third party
 Third party inspects => MS; third party updates => SR
 The third party handles both registration and deregistration
- (+) Keeps business logic and heartbeat logic separate
- (-) An additional component to operate
 Examples:
› Consul
› Eureka
› Amalgam8
3.1.2 Availability versus consistency
 Per the CAP theorem, most service registries provide partition tolerance plus either consistency or availability:
- Eureka provides availability
- Consul and Apache Zookeeper provide consistency.
3.2 Service invocation
 How does MS1 invoke MS2?
 Information about MS1 and MS2 is stored in the service registry (SR).
 The routing of a call from MS1 to MS2 can be handled on either the server side or the client side.
3.2.1 Server side
 Server-side communication with the microservice is performed by using a service proxy.
 Service proxy is either part of the service registry or a separate service.
 MS => SP has well-known endpoint => SR
 LB is handled completely on the server side.
 Server-side invocation advantages:
- Simple requests: MS request => well-known endpoint.
- Easier testing: Routing/LB taken out of microservices => handled by SP, testing using Mocked Proxy
 Server-side invocation disadvantages:
- Greater number of hops
3.2.2 Client side
 MS1 => MS2: the services call each other directly after finding the location from the SR

 Advantage: fewer network hops.


 How does the MS1 client handle the invocation itself?
- The request and any load balancing that is required can be handled by one of two mechanisms:
› Client library – runs within the microservice
› Sidecar – deployed with the microservice, but run as a separate process
- Both mechanisms make requests on the client side.

Client library
- MS1 invokes MS2 by using a client library
- Client libraries provided by registries such as Consul or Netflix Eureka handle service registration and heartbeating
- Client libraries such as Netflix Ribbon provide client-side load balancing
- (+) Use existing libraries where possible, rather than re-engineering and overcomplicating the microservice
- (-) Introduces complex code into the application
Sidecar
- A good middle ground between the service proxy and a client library is a sidecar.
- Separate process that is deployed with your microservice
- Sidecars can do service registration with the service registry
- Sidecars can do client-side load balancing for outbound calls to other microservices
- Netflix Prana is an open-source sidecar that provides the capabilities of the Java-based Netflix libraries
(such as Ribbon) to non-Java applications
- Kubernetes is an orchestration engine for containers that provides similar service discovery and load
balancing capabilities out of the box

3.3 API Gateway


 An API Gateway performs a similar role to a service proxy:
 Client => API gateway, which looks up the service in the registry => makes the request => returns the response.
 Service Proxy vs. API Gateway Difference
- Service proxies make requests using the exact API provided by the end service.
- API gateways, on the other hand, use the API provided by the gateway itself.
- This setup allows API gateways to offer different APIs compared to the microservices they connect to.
 All External Access Through API Gateway
- In this architecture, all external clients access the application exclusively through the API Gateway.
- The gateway serves as a central point for external communication, offering APIs optimized for external
client requirements and abstracting the underlying microservices.
4. Microservice communication
 In a distributed system, inter-service communication is vital
 This description includes synchronous and asynchronous communication, and how to achieve resilience.
- › Synchronous and asynchronous
- › Fault tolerance

4.1 Synchronous and asynchronous


 Synchronous communication => requires a response
 Asynchronous communication => no response is required
4.1.1 Synchronous messaging (REST)
 Where MS1 needs to trigger a specific behavior in MS2 and wait for the result, synchronous APIs should be used.
 In a Java based microservice, having the applications pass JSON data is usually the best choice.
Async support for JAX-RS
 JAX-RS requests are typically synchronous but offer asynchronous support in JAX-RS 2.0. This feature lets
threads handle other tasks while awaiting HTTP responses. In stateless EJB beans, an AsyncResponse object,
linked with the request via @Suspended, enables offloading work from the active thread using
@Asynchronous, enhancing responsiveness. Example 4-1 demonstrates response handling.
 Example
- Client => Server
- Server's main thread (T1) receives the request; Offloads time-consuming I/O operation to another
thread (T2).
- T1 continues processing other incoming requests.
- After 1 minute, when the I/O operation in T2 is completed, it forms the response.
- T2, which handled the asynchronous task, responds back to the client. T1 is not directly involved in this
part of the process.
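The thread hand-off described above can be sketched with a plain CompletableFuture (JAX-RS @Suspended/@Asynchronous requires a container to run; the delay and payload here are illustrative stand-ins for the slow I/O):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncOffload {
    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();

        // T1 (the request thread) offloads the slow I/O to T2 (the worker)...
        CompletableFuture<String> response = CompletableFuture.supplyAsync(() -> {
            try { TimeUnit.MILLISECONDS.sleep(100); }      // simulated slow I/O
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "payload from " + Thread.currentThread().getName();
        }, worker);

        // ...and is immediately free to process other incoming requests.
        System.out.println("request thread is free");

        // When the I/O completes, T2 forms the response; T1 is not involved.
        System.out.println("completed by worker: " + response.get().startsWith("payload"));
        worker.shutdown();
    }
}
```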
 Another option is using reactive libraries such as RxJava, a Java implementation of ReactiveX, a library
that is designed to aggregate and filter responses from multiple parallel outbound requests.

4.1.2 Asynchronous messaging (events)


 Asynchronous messaging => decoupled coordination.
 Examples include AMQP brokers such as RabbitMQ, as well as Apache Kafka and MQTT
4.1.3 Examples
 Where to use synchronous and where to use asynchronous communication?
- The user initiates authentication (1), which is a SYNC request (2)
- Once the user is verified, details can be created and published ASYNC (3, 4)
Online retail store
- A user submits an order on an online store, triggering a SYNC request for confirmation.
- Upon submission, the application begins order processing.
- The service handling the order sends ASYNC events subscribed to by various services (inventory,
shipping, payment).
- ASYNC is preferable when the work is known to take time, is heavy, or can run in the background

4.2 Fault tolerance


 Microservice architecture => fault tolerant and resilient application
 Near zero downtime
 MS1 should continue functioning even if MS2 is down.

4.2.1 Resilient against change


 In a microservice architecture, synchronous requests between services rely on specific APIs with defined input
and output attributes.
 These attributes are subject to change due to evolving requirements. To maintain resilience, microservice
producers should carefully design and version their APIs
Consuming APIs
 When consuming an API, it's vital to validate the response for the required data without unnecessary checks on
irrelevant attributes. In the case of JSON data, parsing should be performed before any Java transformations.
This involves:
- Targeted Validation: Validate only against the specific variables or attributes needed for the task at
hand. Don't validate against all provided variables if they aren't used in your request.
- Accept Unknown Attributes: Don't trigger exceptions for unexpected variables in the response. Focus on
the information you require, disregard additional attributes.
 Additionally, select a JSON parsing tool that offers configuration options. For instance, the Jackson Project
provides annotations like @JsonInclude and @JsonIgnoreProperties to control how the parser handles
serialization and unknown properties, ensuring efficient and flexible data handling.
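A hand-rolled sketch of the consuming-side rules, using a pre-parsed payload represented here as a Map (with Jackson, annotating the target class with @JsonIgnoreProperties(ignoreUnknown = true) achieves the same tolerant binding for real JSON; the `name` attribute is an illustrative assumption):

```java
import java.util.Map;

public class TolerantConsumer {
    // Targeted validation: check only the attribute this task needs.
    static String extractName(Map<String, Object> payload) {
        Object name = payload.get("name");
        if (!(name instanceof String s) || s.isEmpty())
            throw new IllegalArgumentException("name is required");
        return s;   // every other attribute is simply ignored, not rejected
    }

    public static void main(String[] args) {
        Map<String, Object> response = Map.of(
                "name", "ada",
                "newAttribute", "added by a newer producer");  // unexpected, tolerated
        System.out.println(extractName(response));
    }
}
```

Because the unknown attribute is ignored rather than rejected, the producer can evolve its API without breaking this consumer.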

Producing APIs
 When providing an API to external clients, it is essential to adhere to two key principles:
- Accept Unknown Attributes: When processing requests, allow for the inclusion of unknown attributes
without generating errors. Discard unnecessary attributes sent by a service calling your API rather than
returning an error, preventing unnecessary failures.
- Only Return Relevant Attributes: In response, provide only the attributes relevant to the specific API
being invoked. Avoid sharing unnecessary implementation details to allow room for future changes and
adaptability.
 These principles constitute the Robustness Principle, ensuring your API remains flexible and resilient in the
face of evolving requirements. They promote compatibility and smooth transitions for both current and future
consumers.

4.2.2 Timeouts
 Whether SYNC or ASYNC, every outbound request must have a timeout
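A minimal sketch of giving an outbound call a deadline, modeled here with CompletableFuture.orTimeout (java.net.http.HttpClient offers the same idea via HttpRequest.Builder.timeout(Duration)):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class WithTimeout {
    public static void main(String[] args) {
        // Models a remote call that never responds.
        CompletableFuture<String> slowCall = new CompletableFuture<>();
        try {
            // The deadline converts an indefinite wait into a prompt failure.
            slowCall.orTimeout(100, TimeUnit.MILLISECONDS).join();
        } catch (CompletionException e) {
            System.out.println("timed out: " + (e.getCause() instanceof TimeoutException));
        }
    }
}
```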

4.2.3 Circuit breakers


 Timeout => prevents a request from waiting indefinitely
 Circuit breaker => avoids repeating timeouts. If two requests have already failed, do not attempt a third.
 Timeout vs. circuit breaker
- While a timeout is based on a single request-response, a circuit breaker is based on the state of the service
- If a service is experiencing issues, avoid the service temporarily

 How to implement Circuit Breaker?


- Wrap the HTTP GET request to "/api/accountLists" inside the CircuitBreaker.decorateCheckedSupplier
method (Resilience4j), which ensures that the HTTP request code is executed with circuit breaker
protection (makeHttpRequest is a placeholder for your HTTP call):
CheckedFunction0<String> decoratedHttpRequest = CircuitBreaker
    .decorateCheckedSupplier(circuitBreaker, () -> makeHttpRequest("/api/accountLists"));
Try<String> result = Try.of(decoratedHttpRequest);
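For illustration, the state machine behind a circuit breaker can be hand-rolled in a few lines. Libraries such as Resilience4j add half-open probes, sliding windows, and metrics; the threshold and messages here are illustrative assumptions:

```java
import java.util.function.Supplier;

public class MiniCircuitBreaker {
    enum State { CLOSED, OPEN }
    private State state = State.CLOSED;
    private int failures = 0;
    private final int threshold;

    MiniCircuitBreaker(int threshold) { this.threshold = threshold; }

    String call(Supplier<String> request) {
        // When open, fail fast without touching the remote service.
        if (state == State.OPEN) return "fail-fast: circuit open";
        try {
            String result = request.get();
            failures = 0;                        // a success resets the count
            return result;
        } catch (RuntimeException e) {
            if (++failures >= threshold) state = State.OPEN;
            return "failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        MiniCircuitBreaker breaker = new MiniCircuitBreaker(2);
        Supplier<String> flakyService = () -> { throw new RuntimeException("timeout"); };
        System.out.println(breaker.call(flakyService)); // first failure
        System.out.println(breaker.call(flakyService)); // second failure opens the circuit
        System.out.println(breaker.call(flakyService)); // no third attempt is made
    }
}
```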

4.2.4 Bulkheads
 Compartmentalize failures
 A failure in one part doesn’t take down the whole system.
 How to implement bulkhead?
- Provide fallbacks.
- Fallback allows the application to continue functioning when non-vital services are down
- For example, chained fallbacks, where a failed request for personalized content falls back to a request
for more general content, which would in turn fall back to returning cached (and possibly stale) content
in preference to an error.
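The chained fallback described above can be sketched as a list of content sources tried in order; the failure messages and cached content are illustrative:

```java
import java.util.List;
import java.util.function.Supplier;

public class FallbackChain {
    // Try each source in order, falling back to the next on failure.
    static String firstAvailable(List<Supplier<String>> chain) {
        for (Supplier<String> source : chain) {
            try { return source.get(); }
            catch (RuntimeException e) { /* fall through to a less specific source */ }
        }
        return "error";   // only if every fallback failed
    }

    public static void main(String[] args) {
        String content = firstAvailable(List.of(
            () -> { throw new RuntimeException("personalization down"); }, // personalized
            () -> { throw new RuntimeException("catalog down"); },         // general
            () -> "cached (possibly stale) content"));                     // cached
        System.out.println(content);
    }
}
```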

4.2.5 Queues
 Limit outbound requests so that one constrained resource does not impact all systems.
 Queues and Semaphores are common tools for this. Queues have a maximum pending work limit, providing
fast failure when full, while Semaphores use permits for request control. Semaphores skip requests if permits
are exhausted, while queues offer some buffering. These mechanisms are also useful for handling rate-limited
remote resources.
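The semaphore variant described above can be sketched as follows. The permit count and paths are illustrative, and a real implementation would release the permit only when the outbound request actually completes:

```java
import java.util.concurrent.Semaphore;

public class OutboundLimiter {
    private final Semaphore permits;
    OutboundLimiter(int maxConcurrent) { permits = new Semaphore(maxConcurrent); }

    String request(String path) {
        // No permit available => skip the request (fast failure, no queuing).
        if (!permits.tryAcquire()) return "skipped: no permits";
        try { return "called " + path; }
        finally { permits.release(); }
    }

    public static void main(String[] args) {
        OutboundLimiter limiter = new OutboundLimiter(2);
        limiter.permits.tryAcquire(2);                    // simulate two in-flight requests
        System.out.println(limiter.request("/api/accounts"));
        limiter.permits.release(2);                       // in-flight requests complete
        System.out.println(limiter.request("/api/accounts"));
    }
}
```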
5. Handling data
5.1 Data-specific characteristics of a microservice
 Which data must be stored in the data store of your microservice?
 Top-down approach. Start at the business level to model your data.
 The following sections show how to identify this data, how to handle the data, and how it can be shared with
other data stores of other microservices.

5.1.1 Domain-driven design leads to entities


 From the approach in domain-driven design, you get the following objects, among others:
 › Entity
- An entity is defined not by its attributes but by its identity (primary key), which, unlike the attributes, does not change
 › Value Objects
- No primary key/identity; represented only by its attributes. It is immutable.
- E.g., two money objects of 10 USD have no identity; they are equal, immutable value objects
 › Aggregate
- Aggregate groups related objects under an aggregate root entity, ensuring transactional consistency,
encapsulation, and integrity.
- The root manages the aggregate's state and acts as the primary entry point for interactions.
Library (Aggregate)
|
+-- Book (Aggregate Root)
| |
| +-- Title
| +-- Author
| +-- ISBN
| +-- Availability
|
+-- Checkout (Entity)
- To interact with Checkout, first go through the Book root, which is the primary point of access
- The Book/aggregate root is responsible for maintaining the consistency and integrity of related objects.
- The Book/aggregate root internally manages the "Checkout" entities and ensures that these operations are
consistent and adhere to any business rules associated with book borrowing and returning.
- This encapsulation and centralization of operations within the aggregate root help maintain data
integrity and make it easier to reason about the behavior of the domain model.
 › Repository
- Repository => storage for domain objects (such as entities, value objects)
- The repository is responsible for retrieving, saving, updating, and deleting these objects, making it a
central point of interaction between the application's domain logic and the underlying data storage
mechanisms (e.g., databases, APIs).
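The entity vs. value object distinction can be shown in a few lines of Java; the Book and Money classes below are an illustrative sketch, not from the book:

```java
import java.math.BigDecimal;
import java.util.Objects;

class Ddd {
    // Entity: equality is based on the identity (the ISBN), not on attributes.
    static final class Book {
        final String isbn;      // unchanging identity
        String title;           // mutable attribute
        Book(String isbn, String title) { this.isbn = isbn; this.title = title; }
        @Override public boolean equals(Object o) {
            return o instanceof Book && ((Book) o).isbn.equals(isbn);
        }
        @Override public int hashCode() { return isbn.hashCode(); }
    }

    // Value object: immutable, equality based purely on its attributes.
    static final class Money {
        final BigDecimal amount;
        final String currency;
        Money(BigDecimal amount, String currency) { this.amount = amount; this.currency = currency; }
        @Override public boolean equals(Object o) {
            return o instanceof Money && ((Money) o).amount.equals(amount)
                && ((Money) o).currency.equals(currency);
        }
        @Override public int hashCode() { return Objects.hash(amount, currency); }
    }

    public static void main(String[] args) {
        Book b1 = new Book("978-0", "Old Title");
        Book b2 = new Book("978-0", "New Title");
        System.out.println(b1.equals(b2)); // same identity despite different attributes
        Money m1 = new Money(new BigDecimal("10"), "USD");
        Money m2 = new Money(new BigDecimal("10"), "USD");
        System.out.println(m1.equals(m2)); // same attributes => equal value objects
    }
}
```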

5.1.2 Separate data store per microservice


 The MS1 database should be separate from the MS2 database
 Reasons for separate database:
- Loose coupling
- Choose DB separately as per individual MS need
- Performance, scaling is easier
 Polyglot persistence
- Architectural strategy that embraces the use of multiple databases and storage technologies to handle
diverse data types and optimize system performance based on specific use cases.
- It aims to strike a balance between data management needs and database capabilities.
5.1.3 Polyglot persistence
 Every MS is free to choose its own data store; e.g., MS1 uses a relational DB while MS2 uses NoSQL
 Having a range of data storage technologies in one application is known as polyglot persistence.

5.1.4 Data sharing across microservices


 The client wants to see all payments made from his different accounts
 But performing a JOIN across 2 MS is against microservices principles.
 So, create a new adapter microservice which fetches data from both services and returns the transformed result

 Updating records is challenging: 2 microservices each have their own data store, so how do we ensure
consistency so that neither ends up in a corrupt state? Maintain the business transaction.
 E.g., retail store = Order + Payment
 If a customer orders something, the order transaction is completed and the payment is made. Hence, a business
transaction spans two or more microservices, and two or more data stores are involved in this business transaction.

Event-Driven architecture
 ASYNC. For a distributed business transaction, don’t go SYNC, where one service blocks waiting for another
 Decoupling required, independence, containment
 Avoid Distributed Transaction
 The best way to span a business transaction across microservices is to use an event-driven architecture.
 MS1 changes data, publishes event.
 MS2 subscribes to event, receives it and does the data change accordingly on its data.
 Using a publish/subscribe communication model, the two MS are loosely coupled. The coupling only exists on
the messages that they exchange. This technique enables a system of microservices to maintain data
consistency across all microservices without using a distributed transaction.
 Adapter services doing complex updates, which span more than one service, can also be implemented by using
events.
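The publish/subscribe flow above can be sketched with an in-memory event bus; a real system would use a message broker such as Kafka or RabbitMQ, and all names here are illustrative:

```java
import java.util.*;
import java.util.function.Consumer;

// In-memory pub/sub sketch: MS1 publishes an event after changing its data;
// MS2 subscribes and updates its own store. Coupling exists only on the message.
class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String payload) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        Map<String, String> paymentStore = new HashMap<>();   // MS2's own data store

        // MS2 (Payment) subscribes; it never shares a database with MS1 (Order).
        bus.subscribe("OrderCreated", orderId -> paymentStore.put(orderId, "PendingPayment"));

        // MS1 commits its local change, then publishes the event.
        bus.publish("OrderCreated", "order-42");
        System.out.println(paymentStore.get("order-42"));
    }
}
```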
Eventual Consistency
 EC => Data may briefly differ but eventually converges to a consistent state due to replication delays or other
factors.
 Limitation/Outcome of Event Driven Architecture
- Client initiates order => MS1 updates data => publishes events “OrderCreated” + “PendingPayment”
- MS2 subscribes to event “PendingPayment” => makes payment => emits “SuccessPayment”
- During the time when MS2/Payment is processing, if the client fetches the MS1/Order status, it will see
PendingPayment; some moments later, SuccessPayment. Hence there are 2 observable states.
 Unlike strict consistency, where the order is marked Created only when SuccessPayment is done, so the client
never sees PendingPayment

Data replication
 Using database replication mechanisms like triggers or stored procedures to share data across microservices
can lead to tight coupling and issues with schema changes. Event-based processing and data transformation
can decouple data stores and improve flexibility.
5.1.5 Event Sourcing and Command Query Responsibility Segregation
 In an event-driven architecture, combining Command Query Responsibility Segregation (CQRS) and Event
Sourcing is a powerful approach.
 CQRS separates data store access into Reads (returns state, no changes) and Writes (commands, change state).
Event Sourcing stores these events, capturing data changes, and enabling a flexible and event-driven
architecture. Events are stored sequentially in the event store.
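A toy event-sourcing sketch: instead of storing current state, append every change to a sequential event log and derive the read-side state by replaying it. The event names follow the order example in this chapter and are illustrative:

```java
import java.util.*;

class OrderEventStore {
    private final List<String> events = new ArrayList<>(); // sequential event log

    // Write side (commands): every state change is appended as an event.
    void append(String event) { events.add(event); }

    // Read side (queries): current state is derived by replaying events in order.
    String currentStatus() {
        String status = "None";
        for (String e : events) {
            switch (e) {
                case "OrderCreated":     status = "PendingPayment"; break;
                case "PaymentSucceeded": status = "Paid"; break;
                case "OrderShipped":     status = "Shipped"; break;
            }
        }
        return status;
    }

    public static void main(String[] args) {
        OrderEventStore store = new OrderEventStore();
        store.append("OrderCreated");
        store.append("PaymentSucceeded");
        // The derived state is "Paid", and the full history is retained for audit/replay.
        System.out.println(store.currentStatus());
    }
}
```

In a real CQRS system the read side would usually maintain a separate, query-optimized projection updated from these events rather than replaying on every query.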
5.1.6 Messaging systems
 Messaging systems or message-oriented middleware (MOM) are crucial components in supporting event-
driven architectures.
 MOM acts as infrastructure for sending and receiving messages between distributed systems, enabling
applications to work across diverse platforms and networks seamlessly.
 Examples: AMQP, Apache Kafka, RabbitMQ, etc.

5.1.7 Distributed transactions


 Most messaging systems support transactions. So, we can use 2PC within a single service, i.e., between MS1 and
its own DB1 (a local transaction).
 But it is not a good idea to use a distributed transaction between MS1 and MS2.
 For interactions spanning services, compensation or reconciliation logic must be added to ensure consistency is
maintained.

5.1.8 Java Persistence API


 The Java Persistence API (JPA) is a Java standard for managing data between Java objects and relational
databases. It replaced the EJB 2 CMP Entity Beans in EJB 3.0. JPA is a specification, not a product, consisting of
interfaces that require an implementation.
 Hibernate is an example of a JPA implementation
 Spring is not a JPA implementation itself but a framework that provides integration with JPA and various JPA
implementations
 When using Spring Data JPA, you typically choose an underlying JPA implementation like Hibernate, EclipseLink,
or others, and Spring provides the integration and configuration support to work seamlessly with that JPA
provider.
6. Application Security
6.1 Securing microservice architectures
 Dynamic nature of microservice architectures changes how security should be approached.

 Monolithic architecture: Requests => Gateway/LB => BL in middleware => data tier.
 Microservices => more services with each db
 Microservices instances come and go based on load. Microservices are continuously updated independently
 Secure not only the perimeter but also the rapidly changing infrastructure itself, in an organic way

6.1.1 Network segmentation


 Place different systems in different subnets.
 Firewall or gateway to guard resources that require more levels of protection

6.1.2 Ensuring data privacy


 In terms of how data is accessed, how it is transmitted, and how it is stored.
- Do not transmit plain text passwords.
- Protect private keys.
- Use known data encryption technologies rather than inventing your own.
- Securely store passwords by hashing them with a salt
- Sensitive data should be encrypted as early as possible and decrypted as late as possible.
Backing services
 Backing services like MQ/DB are an additional source of network traffic and must be secured
 These backing services should use encryption of data at rest and HTTPS in transit.

Log data
 Logs should help diagnose problems while also complying with regulatory and privacy requirements

6.1.3 Automation
 Automate as much as possible, especially repeatable tasks
 Automate applying security policies, distributing credentials, and managing SSL certificates and keys across
segmented environments to help avoid human error.
6.2 Identity and Trust
 In microservices architectures, maintaining user identity without causing latency or centralized service
contention is challenging.
 End-to-end SSL encrypts data but doesn't guarantee trust and requires key management.
 The following sections detail authentication, authorization, and identity propagation techniques to establish
trust in inter-service communication.

6.2.1 Authentication and authorization


 Managing auth/authz differs between monolith and microservice environments.
 A monolith has fine-grained roles
- Monolithic fine-grained roles (like Content Reader, Application Manager, etc.) live in a central user repository.
- This is an anti-pattern for microservices, which need independent lifecycles
- Microservices prefer coarse-grained roles like Admin, User, etc.
 Authentication is better handled by a centralized service or API gateway
 For authorization, use coarse-grained group or role definitions in common services while allowing individual
services to manage fine-grained controls, promoting independence.
Java EE security
 Java EE security, which often relies on centralized configurations and assumptions that may not align with
microservices' requirements, may not be the most suitable choice.
 E.g., in a monolith we have Java EE security realms, which allow configuring auth/authz at the application level
rather than the service level, so each microservice depends on this central configuration.
 Microservices favor a decentralized approach to security where each service manages its auth/authz
independently or relies on token-based authentication such as OAuth/JWT. This provides more flexibility
than being coupled to a central security realm

6.2.2 Delegated Authorization with OAuth 2.0


 OAuth provides an open framework for delegating authorization to a third party. OAuth defines interaction
patterns for authorization.
 OpenID Connect (OIDC) provides an identity layer on top of OAuth 2.0
 For public pages, we may only require OIDC-authenticated users and skip OAuth authorization checks
 For sensitive pages, we may require both OIDC + OAuth
 Two main steps (Authorization Code: user => auth server; Access Token: MS => auth server)
A. Client requests an Authorization Code (consent)
1. Client => Auth service
2. MS => sends redirect URL => Client
3. Client initiates the OAuth2 flow by sending an authorization code request to the AuthServer;
the client sends its client identity, the requested scope, and the redirect URI
4. AuthServer (OAuth provider) validates the client, issues an authorization code, and redirects back to the client
5. The authorization code represents the consent of the user; it is not an access token
B. MS uses the Authorization Code (consent) to request an Access Token
1. The client/browser automatically handles the redirect to invoke a callback on the Auth service with the
authorization code.
2. The MS then contacts the OAuth provider for an access token. The AuthServer verifies that it issued the
same authorization code/consent earlier.
3. The MS then converts data from that token into a signed JSON Web Token (JWT), which allows you
to verify the identity of the user over subsequent inter-service calls without going back to the
OAuth provider.
 Some application servers, like WebSphere Liberty, have OpenID Connect features to facilitate communication
with OAuth providers.
 A simpler approach might be to perform authentication with an OAuth provider by using a gateway. If you take
this approach, it is still a good idea to convert provider-specific tokens into an internal-facing token that
contains the information that you need.
6.2.3 JSON Web Tokens
 Identity propagation between MS is a challenge
 Frequent callback to central auth service adds Latency
 Who carries identity/role information?
- JWTs can be used to carry along a representation of information about the user like
Who initiated request
What roles the user has
JWT contains some standard attributes (called claims), such as issuer, subject (the user’s identity), and
expiration time
 Advantage
- JWTs are compact and URL friendly
- JWT contains expiry time i.e., doesn’t give access indefinitely
- JWT claims can be customized, allowing additional information to be passed along.
- Signed JWTs: helps establish trust between services, as the receiver can then verify the identity of the
signer. JWTs can be signed by using shared secrets, or a public/private key pair (SSL certificates work
well with a well-known public key).
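To make the claims/signature structure concrete, here is a JDK-only sketch of an HS256-signed token. Production code should use a vetted JWT library (e.g., jjwt or Nimbus); the claim values and secret are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// A JWT is header.payload.signature; the payload carries the claims.
class TinyJwt {
    private static String b64(byte[] in) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(in);
    }

    private static byte[] hmac(String data, String secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    static String sign(String claimsJson, String secret) throws Exception {
        String header = b64("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64(claimsJson.getBytes(StandardCharsets.UTF_8));
        return header + "." + payload + "." + b64(hmac(header + "." + payload, secret));
    }

    // Receiving service recomputes the signature; only a holder of the secret can forge it.
    static boolean verify(String jwt, String secret) throws Exception {
        int lastDot = jwt.lastIndexOf('.');
        byte[] expected = hmac(jwt.substring(0, lastDot), secret);
        byte[] actual = Base64.getUrlDecoder().decode(jwt.substring(lastDot + 1));
        return MessageDigest.isEqual(expected, actual); // constant-time comparison
    }

    public static void main(String[] args) throws Exception {
        // Claims: who initiated the request (sub), their role, and expiry (exp).
        String jwt = sign("{\"sub\":\"alice\",\"role\":\"User\",\"exp\":1700000000}", "shared-secret");
        System.out.println(verify(jwt, "shared-secret"));
        System.out.println(verify(jwt, "wrong-secret"));
    }
}
```

Note that this sketch omits what a real library also does: checking the `exp` claim against the clock and validating the header's algorithm.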
6.2.4 Hash-Based Message Authentication Code
 Authentication via a Hash-Based Message Authentication Code (HMAC) signature provides superior security
compared to HTTP Basic authentication.
- Client sends request with HMAC signature to Server
- Server receives request and extracts HMAC signature
- Server recalculates HMAC signature using shared secret and request attributes
- If signatures match = AUTH success
 HMAC validation can be performed by an API Gateway.
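The sign/recalculate/compare flow above can be sketched with the JDK's javax.crypto.Mac; the request-attribute string and secret are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class HmacAuth {
    // Client side: sign the canonical request attributes with the shared secret.
    static byte[] sign(String requestAttributes, String sharedSecret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedSecret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return mac.doFinal(requestAttributes.getBytes(StandardCharsets.UTF_8));
    }

    // Server side: recompute the signature and compare in constant time.
    static boolean verify(String requestAttributes, byte[] clientSignature, String sharedSecret)
            throws Exception {
        return MessageDigest.isEqual(sign(requestAttributes, sharedSecret), clientSignature);
    }

    public static void main(String[] args) throws Exception {
        // Including a timestamp in the signed attributes helps prevent replay attacks.
        String attrs = "GET /api/accountLists ts=1700000000";
        byte[] sig = sign(attrs, "shared-secret");                     // client
        System.out.println(verify(attrs, sig, "shared-secret"));       // server: match
        System.out.println(verify("GET /api/other", sig, "shared-secret")); // tampered request
    }
}
```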

6.2.5 API keys and shared secrets


 API key = long, randomly generated string to identify the origin of a request made to an API
- API provider creates API /getAccount and creates API key AK1
- The client application retrieves API key AK1, or the API provider shares the key with authorized clients.
- Client application in future requests to /getAccount includes AK1
- API provider/server finds AK1 is valid API key for /getAccount.
 API keys = simple, separate from user account credentials; they can be temporarily created, revoked, and
managed by the service provider.
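An illustrative sketch of issuing, validating, and revoking API keys; the provider class and names are hypothetical:

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;

class ApiKeys {
    private final Map<String, String> keyToClient = new HashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Provider side: generate a long random key and remember which client owns it.
    String issue(String clientId) {
        byte[] raw = new byte[32];
        random.nextBytes(raw);
        String key = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        keyToClient.put(key, clientId);
        return key;
    }

    // Keys are managed by the provider and can be revoked at any time.
    void revoke(String key) { keyToClient.remove(key); }

    // Request side: identify the origin of a request from the key it carries.
    String clientFor(String key) { return keyToClient.get(key); }

    public static void main(String[] args) {
        ApiKeys provider = new ApiKeys();
        String ak1 = provider.issue("client-app");
        System.out.println(provider.clientFor(ak1)); // the key identifies the client
        provider.revoke(ak1);
        System.out.println(provider.clientFor(ak1)); // revoked keys no longer resolve
    }
}
```

A production gateway would store only a hash of each key and check it with a constant-time comparison.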
7. Testing
7.1 Types of tests
 This chapter focuses on the Java-specific testing considerations, assuming some prior knowledge of
testing microservices. The following types of tests are covered:
 Unit
- Tests a single class or a set of closely coupled classes.
 Component
- Tests the full function of a single microservice. Calls to external services are mocked in some way.
 Integration
- Communication between services.
- Integration tests are used to verify the communication across network boundaries.
- Test basic success and error paths over a network boundary.
 Contract
- Test the agreed contract for APIs and other resources that are provided by the microservice.
 End-to-end
- Tests a complete flow through the application or microservice.

7.2 Application architecture


 Internal structure of the microservice affects how easy it is to test

 The different sections of the application have different testing requirements.


 For example, domain logic does not require integration testing, whereas resources do.
How do you test fault tolerance and resilience?
 Test how fault tolerant your system is as a whole.
 Tests for fault tolerance should be performed with all the services deployed. Do not use any mocks.
 Intentionally take down some microservices and redeploy individual services and backend data stores.
- Monitor response times and adjust timeout settings or circuit breakers if needed.
- Tools like Netflix Chaos Monkey and Gremlin automate this process for fault tolerance testing.
 Injecting bad data
- As in the integration tests, tools such as Amalgam8 can be used to automate the injection of bad data
during testing.
- If bulkhead pattern is working, then bad data in MS1 should not propagate to other MS2.
 Stress testing
- An MS should be able to handle unexpected loads. Stress testing should be used to test the bulkheads in
your microservices. If a particular microservice is holding up requests, look at the configured bulkheads.
 Recoverability testing
- Once there is an issue, how soon can new instances be brought up?
- Orchestration tools such as Amalgam8 or Kubernetes will spin up new instances of the MS
7.3 Production environment
 Robust monitoring and analytics to warn of any bad behavior.
 Run some tests at off-peak hours
 Repeat these specific tests from the staging environment in the production environment:
- › Injecting test or bad data
- › Taking down services, both to test the fault tolerance of other services and the recoverability of the
service you took down
- › Security verification

7.3.1 Canary testing


 Early detection
 Canary tests are a monitoring and testing practice where a small subset of users or traffic is exposed to a new
version of a software or service.
 If this "canary" group experiences issues or failures, it signals potential problems, allowing for early detection
and mitigation before a broader release or deployment.
- E.g., attempt to deploy new version of website
- Not to release for all users at once, use canary testing and release to small group say 5%
- If the canary users experience any problems, such as slow page loading or errors, it serves as an early
warning sign.
- You can quickly identify and fix issues before rolling out the new version to all users. This approach helps
ensure a smoother and more reliable deployment, reducing the impact of potential problems on your
entire user base.
8. From development to production
 How individual microservices are built and deployed.
- › Independently deployable.
- › Deployable in minutes rather than in hours.
- › Fault tolerant and should prevent cascading failures.
- › Should not require any code changes as it is deployed across target environments.
 Microservice architectures require automated deployment tools to help manage the deployment, testing, and
promotion of services across target environments.

8.1 Deployment patterns


 Monolith is built and deployed as one unit.
 A microservice system has many components that are deployed independently.
 This configuration provides more flexibility in how consumers are exposed to service changes:
 Blue/Green Deployment
- Blue/Green deployments are a common pattern for monoliths.
- Maintain 2 working versions of your environment; if there is any issue, roll back
 Existing Blue system is active and running.
 New Green system, test it, If OK, switch traffic to the Green system.
 The Blue system is left alone unless there is an issue
 After the Green system is stable, update Blue system to latest version as well
 Canary Releasing
- Aim: Early detection of issue, if any by releasing the feature to small subset of users.
- Aim: Minimize risks by releasing in phases
- An important criterion is subset selection
- Select subset of users/servers often referred as “canaries” from total deployment population
- In this minor subset of server or for selective users deploy v2.
- Majority still using v1.
- Monitor the performance and results. If all OK, deploy v2 in majority servers
 A/B Testing
- Split testing
- Aim: Focus on optimizing a specific part of an application. E.g., create 2 versions of the same feature and
present them to users to see which one people prefer.
- User is given 2 variants of application – V1 (original feature), V2 (proposed feature)
- Users randomly divided into two groups, with each group exposed to only 1 of the variations.
- Data Collection: Metrics like click-through rates, or user engagement are collected and analyzed
- Comparison of performance of V1 and V2 to determine which version yields better results.

8.2 Deployment pipelines and tools


 Microservice offer agility, quick feature implementation, testing, deployment, due to independent lifecycles.
 Automation/Pipeline/DevOps reduces manual deployment steps
 Effective source code management, build tools, and configuration management tools are essential.
 Containerization and cloud services help in dynamic provisioning.
 Monitoring and logging ensure operational visibility and security.
 Version management using build tools maintains software immutability.
 Templates can standardize common concerns, balancing developer flexibility.
8.3 Packaging options
 Create immutable artifacts that are executed without change across different environments.
 More configuration changes per environment are likely to cause issues due to misconfiguration.
 The following packaging options are available, among others:
- WAR/EAR file deployment on preinstalled middleware
- Executable JAR file
- Containers
- Every microservice on its own virtualized server
8.3.1 JAR, WAR, and EAR file deployment on preinstalled middleware
 Traditionally, WAR/EAR files are deployed on a dedicated server, with long deployment cycles.
 As applications move through different environments, inconsistencies in configuration may arise.
 In microservices, it's advised to allocate one middleware server per MS for scalability and independence.
Despite deployment risks, this mirrors traditional monolithic systems and simplifies initial microservices
development.

8.3.2 Executable JAR file


 To avoid dependency issues across deployment environments, use self-contained executable JAR files, known
as "fat" JARs.
 These files include the service class files and the required shared libraries, ensuring consistent and
reproducible environments for both DEV and Prod deployment.
Spring Boot
 Spring Boot's executable JARs package a Spring application, dependencies, and a web runtime (Tomcat, Jetty,
or Undertow). Spring Boot handles configuration and adds production support features like metrics and health
checks.
 Microservices using Spring Boot benefit from dependency management with Maven or Gradle and follow
"Convention over Configuration."
 Auto-configuration simplifies setup but can be customized. Leveraging Spring ecosystem frameworks
maximizes Spring Boot's potential. Spring Cloud extends this support for cloud environments, aiding in
integrating third-party technologies and facilitating microservice development.
Dropwizard
 Dropwizard also produces stand-alone executable JAR files by using a preselected stack that enhances
an embedded Jetty servlet container with extra capabilities.
WebSphere Liberty
 IBM WebSphere Liberty can also produce immutable executable JAR files containing only the technologies the
application needs. The packaging tool of Liberty can combine the following elements:
- A customized (minified) Liberty run time
- Java EE applications packaged as a WAR file or EAR file (usually only one)
- Supporting configuration like shared libraries, JNDI values, or data source configurations
 IBM WebSphere Liberty is a lightweight, modular, and cloud-native application server developed by IBM.
 Unlike traditional WAS, Liberty has a smaller footprint and does not load comprehensive Java EE support by
default: it is lightweight and modular, with only the features you enable.
 Unlike WAS, it does not take long to start up, because it loads only those features.
 Unlike WAS, Liberty is designed with cloud-native principles in mind, well suited for containerized MS deployment

8.3.3 Containerization
 To ensure consistent OS configurations across stages, MS can be packaged as containers using Docker.
 Containers encapsulate applications and their dependencies, providing isolation from both peer containers and
the host OS. This lightweight approach allows multiple containers to run on a single OS, reducing the need for
virtualized servers.
 Docker images are immutable artifacts, enabling consistent deployment across environments. Container
configuration defines startup, making packaging format (e.g., JAR, WAR) irrelevant. Configuration is via
environment variables or mounted volumes for microservices within containers.
 MS should keep data stores and other backends as separate services, aligning with the 12-factor principle. Data
stores can also be managed within Docker containers.
 For failover redundancy, avoid hosting multiple instances of the same microservice on a single host.

8.3.4 Every microservice on its own server


 To maintain identical OS across stages, create a template server with the required OS and tools, making it an
immutable server.
 Use this template as the foundation for all servers hosting your microservices.
 Manage these servers and templates with tools from virtualization or cloud providers, or third-party options
like Vagrant, VMware, or VirtualBox.
 Combining these with configuration and automation tools like Salt or Ansible offers flexibility and agility.
 This approach results in 2 configurations:
- one microservice per virtualized server
- multiple microservices on a single virtualized server, optimizing resource utilization.

8.4 Configuration of the applications across stages


 Configuring the system of microservices can be done in three different ways:
- › Environment variables
- › Configuration files
- › Configuration systems like Zookeeper/Etcd/Consul
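The environment-variable option can be sketched in a few lines, 12-factor style; the variable name and default value are illustrative:

```java
// Read stage-specific settings from environment variables so the same
// immutable artifact runs unchanged in dev, test, and production.
class Config {
    static String get(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value != null) ? value : defaultValue;  // fallback for local development
    }

    public static void main(String[] args) {
        // e.g., DB_URL is set per stage by the platform (container env, orchestrator, etc.)
        System.out.println(get("DB_URL", "jdbc:h2:mem:dev"));
    }
}
```

Configuration files and systems like Zookeeper/etcd/Consul serve the same goal when settings must change at runtime or be shared across many services.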
9. Management and Operations
 In Microservices => single application becomes a distributed system of several MS communicating with each
other.
 This chapter covers the following topics:
- Metrics and health checks
- Logging
- Templating

9.1 Metrics and health checks


 Metrics and health checks are necessary to get information about the state of a running service.

9.1.1 Dynamic provisioning


 Dynamic provisioning => automatically keep the system of microservices in its optimum state.
 Scaling service instances up or down based on workload demands, ensuring low latency and high throughput.
 The scaling member monitors the use of the following resources within the server process:
- CPU
- Heap
- Memory
9.1.2 Health check
 Recognizing problematic instances in a microservice application is vital for maintaining system optimization.
 Users can define health policies, specifying conditions to monitor and corresponding actions to take.
 This includes predefined health conditions like excessive request/response times and memory issues, with
actions such as server restarts or memory dumps for diagnostics.
 The monitor feature further provides performance monitoring data, accessible through AdminCenter.
 Nagios = open-source alerting/monitoring system to track the health and performance of IT infrastructure components

9.2 Logging
 Logging is essential for applications.
 Track the request using Correlation Id
- As soon as request arrives, create a correlation ID in MS1
- If MS1 invokes MS2, send same correlation ID
- Send it back to client in HTTP header using a special field (X-header fields, for example)
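The correlation-ID steps above can be sketched as follows; the header name and helper methods are illustrative (real code would typically use a servlet filter plus MDC logging):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Correlation-ID propagation: MS1 creates the ID on entry (if the client did
// not send one) and forwards the same ID on every downstream call.
class Correlation {
    static final String HEADER = "X-Correlation-Id"; // conventional custom X-header

    // On an incoming request: reuse the ID if present, otherwise mint one.
    static String ensureCorrelationId(Map<String, String> requestHeaders) {
        return requestHeaders.computeIfAbsent(HEADER, h -> UUID.randomUUID().toString());
    }

    // On an outgoing call to MS2: copy the same ID so logs can be joined later.
    static Map<String, String> outboundHeaders(String correlationId) {
        Map<String, String> headers = new HashMap<>();
        headers.put(HEADER, correlationId);
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> incoming = new HashMap<>();   // client sent no ID
        String id = ensureCorrelationId(incoming);
        Map<String, String> toMs2 = outboundHeaders(id);
        System.out.println(id.equals(toMs2.get(HEADER))); // the same ID flows through
    }
}
```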
 The preferred approach for a logging infrastructure in a microservice application is the ELK stack or Elastic
Stack, which is a combination of three tools:
- › Elasticsearch (search and analytics engine)
- › Logstash (data collection and transportation pipeline)
- › Kibana (visualizing collected data)

9.3 Templating
 Templating in microservices involves providing a code framework that includes common capabilities like
service registration, communication, messaging, logging, and security.
 This approach streamlines development, ensures consistency across teams, and offers generators like WildFly
Swarm, Spring Initializr, and the Liberty app accelerator. For Java-centric environments, a single adaptable
template suffices.
