30-Book-Microservices Best Practices For Java
Overview
Application architecture patterns are changing in the era of cloud computing.
A convergence of factors has led to the concept of “cloud native” applications:
› The general availability of cloud computing platforms
› Advancements in virtualization technologies
› The emergence of agile and DevOps practices as organizations looked to streamline and
shorten their release cycles
To best take advantage of the flexibility of cloud platforms, cloud native applications are composed of
smaller, independent, self-contained pieces that are called microservices.
This chapter provides a brief overview of the concepts and motivations that surround cloud native applications
and microservice architectures:
Cloud native applications
Twelve factors
Microservices
Philosophy and team structure
Examples
1.3 Microservices
Small, independent (autonomous), replaceable processes that communicate by using lightweight APIs that do not
depend on language.
1.5 Examples
Throughout this book, we use two example applications to help explain concepts and
provide perspective:
› Online retail store
› Game On!
There are other ways to structure services for an online retail store. This set of services was chosen to illustrate
specific concepts that are described in this book. We leave the construction of alternate services as an exercise for the
reader.
1.5.2 Game On!
The game is essentially a boundless two-dimensional network of interconnected rooms. Rooms are provided by third
parties (for example, developers at conferences or in workshops). A set of core services provide a consistent user
experience. The user logs in and uses simple text commands to interact with and travel between rooms. Both
synchronous and asynchronous communication patterns are used. Interactions between players, core services, and
third-party services are also secured.
2. Creating Microservices in Java
2.1 Java platforms and programming models
New Java applications should embrace new language features like lambdas, parallel operations, and streams.
Try to establish a balance between code cleanliness and maintainability.
2.1.1 Spring Boot
Spring Boot provides mechanisms for creating microservices
Spring Cloud is a collection of integrations between third-party cloud technologies and the Spring programming
model.
How to connect Spring Boot with AWS/Azure?
- Spring Cloud for Amazon Web Services eases integration with hosted Amazon Web Services.
- Spring Cloud Azure is an open-source project that provides integration between your Spring applications
and Azure services.
- Alternatively, use the AWS SDKs or Azure client libraries directly to integrate your Spring applications with those services.
2.1.2 Dropwizard
Dropwizard is a Java framework that provides mechanisms for creating microservices.
It offers an integrated set of tools like Jersey, Jetty, and Metrics for creating RESTful services with built-in
features for configuration, logging, and database integration. It's known for its opinionated, streamlined
approach, making it popular for developing production-ready applications.
Client library
- MS1 invokes MS2 by using a client library.
- Client libraries provided by registries such as Consul or Netflix Eureka handle service registration and heartbeating.
- Client libraries such as Netflix Ribbon provide client-side load balancing.
- (+) Better to use existing libraries where required rather than reengineering and overcomplicating the microservice.
- (-) Introduces complex code into the application.
Sidecar
- A good middle ground between the service proxy and a client library is a sidecar.
- Separate process that is deployed with your microservice
- Sidecars can do service registration with the service registry
- Sidecars can do client-side load balancing for outbound calls to other microservices
- Netflix Prana is an open-source sidecar that provides the capabilities of the Java-based Ribbon library to non-Java applications.
- Kubernetes is an orchestration engine for containers that provides sidecar-like service discovery and load-balancing
capabilities.
Producing APIs
When providing an API to external clients, it's essential to adhere to 2 key principles:
- Accept Unknown Attributes: When processing requests, allow for the inclusion of unknown attributes
without generating errors. Discard unnecessary attributes sent by a service calling your API rather than
returning an error, preventing unnecessary failures.
- Only Return Relevant Attributes: In response, provide only the attributes relevant to the specific API
being invoked. Avoid sharing unnecessary implementation details to allow room for future changes and
adaptability.
These principles constitute the Robustness Principle, ensuring your API remains flexible and resilient in the
face of evolving requirements. They promote compatibility and smooth transitions for both current and future
consumers.
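The two principles above can be sketched in plain Java. This is a minimal, framework-free illustration; the attribute names and the hypothetical "create order" API are made up for the example, and a real service would apply the same idea at the JSON (de)serialization layer, e.g. via its mapper's ignore-unknown setting.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Sketch of the Robustness Principle for a hypothetical "create order" API:
// tolerate unknown attributes on input, return only relevant attributes on output.
public class TolerantOrderApi {

    // The attributes this API version actually understands.
    private static final Set<String> KNOWN = Set.of("customerId", "itemId", "quantity");

    // Accept the request leniently: silently discard unknown attributes
    // instead of rejecting the whole request.
    public static Map<String, Object> acceptRequest(Map<String, Object> request) {
        Map<String, Object> accepted = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : request.entrySet()) {
            if (KNOWN.contains(e.getKey())) {
                accepted.put(e.getKey(), e.getValue());
            }
        }
        return accepted;
    }

    // Respond narrowly: expose only the fields relevant to this API,
    // not internal implementation details.
    public static Map<String, Object> buildResponse(String orderId, String status) {
        Map<String, Object> response = new LinkedHashMap<>();
        response.put("orderId", orderId);
        response.put("status", status);
        return response;
    }

    public static void main(String[] args) {
        Map<String, Object> request = Map.of(
                "customerId", "c-42", "itemId", "i-7", "quantity", 2,
                "newFieldFromFutureClient", true); // unknown attribute: ignored, not an error
        System.out.println(acceptRequest(request));
        System.out.println(buildResponse("o-1001", "CREATED"));
    }
}
```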
4.2.2 Timeouts
Whether SYNC or ASYNC, every request must have a timeout.
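A minimal sketch of the rule, using only `java.util.concurrent`. The slow remote call is simulated with a sleep, and the timeout values and fallback string are illustrative; a real client would set the timeout on its HTTP library instead.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: every outbound call, synchronous or asynchronous, gets an explicit timeout.
public class TimeoutExample {

    static String slowRemoteCall() {
        try {
            Thread.sleep(500); // simulate a remote service that responds slowly
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "real response";
    }

    public static String callWithTimeout(long timeoutMillis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> future = pool.submit(TimeoutExample::slowRemoteCall);
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "fallback response"; // fail fast instead of waiting forever
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(callWithTimeout(100));  // too slow: times out, fallback used
        System.out.println(callWithTimeout(2000)); // completes within the budget
    }
}
```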
4.2.4 Bulkheads
Compartmentalize errors: a failure in one part of the system must not take down the whole thing.
How to implement a bulkhead?
- Provide fallbacks.
- Fallback allows the application to continue functioning when non-vital services are down
- For example, chained fallbacks, where a failed request for personalized content falls back to a request
for more general content, which would in turn fall back to returning cached (and possibly stale) content
in preference to an error.
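The chained fallback above can be sketched as a plain-Java chain of content sources; the "personalized / general / cached" sources are simulated with `Optional`-returning suppliers, and all names are illustrative.

```java
import java.util.Optional;
import java.util.function.Supplier;

// Sketch of a chained fallback: personalized content -> general content ->
// cached (possibly stale) content, in preference to an error.
public class ChainedFallback {

    // Try each content source in order; the first one that succeeds wins.
    @SafeVarargs
    public static String firstAvailable(Supplier<Optional<String>>... sources) {
        for (Supplier<Optional<String>> source : sources) {
            Optional<String> result = source.get();
            if (result.isPresent()) {
                return result.get();
            }
        }
        throw new IllegalStateException("all content sources failed");
    }

    public static void main(String[] args) {
        String content = firstAvailable(
                () -> Optional.empty(),                  // personalization service is down
                () -> Optional.empty(),                  // general content service is down
                () -> Optional.of("cached home page"));  // stale, but better than an error
        System.out.println(content);
    }
}
```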
4.2.5 Queues
Limit outbound requests at the source rather than letting one overloaded resource impact all systems.
Queues and Semaphores are common tools for this. Queues have a maximum pending work limit, providing
fast failure when full, while Semaphores use permits for request control. Semaphores skip requests if permits
are exhausted, while queues offer some buffering. These mechanisms are also useful for handling rate-limited
remote resources.
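Both mechanisms come straight out of `java.util.concurrent`; the capacities below are illustrative. The queue buffers a little work and fails fast when full, while the semaphore skips requests outright once its permits are exhausted.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Semaphore;

// Sketch of the two throttling mechanisms: a bounded queue and a semaphore.
public class OutboundThrottle {

    // Bounded queue: offer() is non-blocking and returns false when the
    // queue is full, giving fast failure plus a small buffer.
    public static int acceptIntoQueue(int capacity, int offered) {
        BlockingQueue<Integer> pending = new ArrayBlockingQueue<>(capacity);
        int accepted = 0;
        for (int i = 0; i < offered; i++) {
            if (pending.offer(i)) {
                accepted++;
            }
        }
        return accepted;
    }

    // Semaphore: tryAcquire() skips the request when no permit is available.
    public static int acquirePermits(int permits, int requests) {
        Semaphore gate = new Semaphore(permits);
        int allowed = 0;
        for (int i = 0; i < requests; i++) {
            if (gate.tryAcquire()) {
                allowed++;
            }
            // In a real client, gate.release() runs when the outbound call completes.
        }
        return allowed;
    }

    public static void main(String[] args) {
        System.out.println(acceptIntoQueue(3, 10)); // only 3 fit in the queue
        System.out.println(acquirePermits(2, 10));  // only 2 permits exist
    }
}
```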
5. Handling data
5.1 Data-specific characteristics of a microservice
Which data must be stored in the data store of your microservice?
Top-down approach. Start at the business level to model your data.
The following sections show how to identify this data, how to handle the data, and how it can be shared with
other data stores of other microservices.
Updating records is challenging: when two microservices each have their own data store, how do you ensure
consistency so that neither of them ends up in a corrupt state? The business transaction must be maintained.
E.g., retail store = Order + Payment
If a customer orders something, the order transaction is completed only when the payment is done. Hence, the business
transaction spans two or more microservices, and two or more data stores are involved in this business transaction.
Event-Driven architecture
ASYNC. This is a distributed transaction; you cannot go SYNC, with one service waiting for another.
Decoupling, independence, and containment are required.
Avoid Distributed Transaction
The best way to span a business transaction across microservices is to use an event-driven architecture.
MS1 changes data, publishes event.
MS2 subscribes to event, receives it and does the data change accordingly on its data.
Using a publish/subscribe communication model, the two MS are loosely coupled. The coupling only exists on
the messages that they exchange. This technique enables a system of microservices to maintain data
consistency across all microservices without using a distributed transaction.
Adapter services doing complex updates, which span more than one service, can also be implemented by using
events.
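The publish/subscribe flow above can be sketched with an in-process event bus. This is only an illustration of the coupling model; a real system would use a message broker, and the topic and service names here are made up.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-process sketch of publish/subscribe between two "microservices".
public class EventBusSketch {

    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String payload) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBusSketch bus = new EventBusSketch();
        List<String> paymentServiceLog = new ArrayList<>();

        // MS2 (payment) subscribes; it is coupled only to the event, not to MS1.
        bus.subscribe("OrderCreated", orderId ->
                paymentServiceLog.add("payment started for " + orderId));

        // MS1 (order) changes its own data, then publishes the event.
        bus.publish("OrderCreated", "order-17");

        System.out.println(paymentServiceLog);
    }
}
```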
Eventual Consistency
EC => Data may briefly differ but eventually converges to a consistent state due to replication delays or other
factors.
Limitation/Outcome of Event Driven Architecture
- Client initiates order => MS1 updates its data => publishes events “OrderCreated” + “PendingPayment”
- MS2 subscribes to “PendingPayment” => makes the payment => emits “SuccessPayment”
- During the time that MS2/Payment is processing, if the client fetches the MS1/Order status, it sees
PendingPayment; some moments later, SuccessPayment. Hence there are two observable states.
This is unlike strict consistency, where the order is Created only when SuccessPayment is done, so the client never sees
PendingPayment.
Data replication
Using database replication mechanisms like triggers or stored procedures to share data across microservices
can lead to tight coupling and issues with schema changes. Event-based processing and data transformation
can decouple data stores and improve flexibility.
5.1.5 Event Sourcing and Command Query Responsibility Segregation
In an event-driven architecture, combining Command Query Responsibility Segregation (CQRS) and Event
Sourcing is a powerful approach.
CQRS separates data store access into Reads (returns state, no changes) and Writes (commands, change state).
Event Sourcing stores these events, capturing data changes, and enabling a flexible and event-driven
architecture. Events are stored sequentially in the event store
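A minimal sketch of the write/read split: the command side appends events to a sequential store and never overwrites state, while the query side derives state by replaying the stream. The account domain and event names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of Event Sourcing for a hypothetical account balance.
public class EventSourcedAccount {

    record Event(String type, int amount) {}

    private final List<Event> eventStore = new ArrayList<>(); // append-only, sequential

    // Command side (writes): record what happened; state is never mutated in place.
    public void deposit(int amount)  { eventStore.add(new Event("DEPOSITED", amount)); }
    public void withdraw(int amount) { eventStore.add(new Event("WITHDRAWN", amount)); }

    // Query side (reads): derive current state by replaying the event stream.
    public int balance() {
        int balance = 0;
        for (Event e : eventStore) {
            balance += e.type().equals("DEPOSITED") ? e.amount() : -e.amount();
        }
        return balance;
    }

    public static void main(String[] args) {
        EventSourcedAccount account = new EventSourcedAccount();
        account.deposit(100);
        account.withdraw(30);
        System.out.println(account.balance()); // replay yields 70
    }
}
```

Because every change is kept as an event, new read models (projections) can be built later by replaying the same store.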
5.1.6 Messaging systems
Messaging systems or message-oriented middleware (MOM) are crucial components in supporting event-
driven architectures.
MOM acts as infrastructure for sending and receiving messages between distributed systems, enabling
applications to work across diverse platforms and networks seamlessly.
AMQP, Kafka, RabbitMQ, etc.
6. Security
Monolithic architecture: requests => gateway/LB => business logic in middleware => data tier.
Microservices => more services, each with its own database.
Microservice instances come and go based on load, and microservices are continuously and independently updated.
Secure not only the perimeter but also the rapidly changing infrastructure, in an organic way.
Log data
Logs should help to diagnose problems and to comply with regulatory and privacy requirements (protecting data accordingly).
6.1.3 Automation
Automate as much as possible, especially repeatable tasks: applying security policies, distributing credentials,
and managing SSL certificates and keys across segmented environments, to help avoid human error.
6.2 Identity and Trust
In microservices architectures, maintaining user identity without causing latency or centralized service
contention is challenging.
End-to-end SSL encrypts data but doesn't guarantee trust and requires key management.
The following sections detail authentication, authorization, and identity propagation techniques to establish
trust in inter-service communication.
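One common building block for inter-service trust is a signed token, the idea behind JWT signatures. The sketch below uses a plain HMAC over an illustrative payload; the shared secret is hard-coded only for the example, since real deployments manage keys outside the code.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch: a service signs a token with a shared secret; the receiving
// service verifies the signature before trusting the propagated identity.
public class SignedToken {

    public static String sign(String payload, byte[] secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return payload + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean verify(String token, byte[] secret) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        String expected = sign(token.substring(0, dot), secret);
        // Constant-time comparison to avoid timing attacks.
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                token.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        byte[] secret = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String token = sign("user=alice;svc=orders", secret);
        System.out.println(verify(token, secret));               // genuine token accepted
        System.out.println(verify(token + "tampered", secret));  // tampered token rejected
    }
}
```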
8.3.3 Containerization
To ensure consistent OS configurations across stages, MS can be packaged as containers using Docker.
Containers encapsulate applications and their dependencies, providing isolation from both peer containers and
the host OS. This lightweight approach allows multiple containers to run on a single OS, reducing the need for
virtualized servers.
Docker images are immutable artifacts, enabling consistent deployment across environments. Container
configuration defines startup, making packaging format (e.g., JAR, WAR) irrelevant. Configuration is via
environment variables or mounted volumes for microservices within containers.
MS should keep data stores and other backends as separate services, aligning with the 12-factor principle. Data
stores can also be managed within Docker containers.
For failover redundancy, avoid hosting multiple instances of the same microservice on a single host.
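The environment-variable configuration mentioned above can be sketched as follows; the variable names and defaults are illustrative, and in a container they would be set by `docker run -e` or a Kubernetes manifest.

```java
import java.util.Optional;

// Sketch of 12-factor style configuration for a containerized microservice:
// read settings from the environment, with sensible defaults for local runs.
public class ContainerConfig {

    public static String get(String name, String defaultValue) {
        return Optional.ofNullable(System.getenv(name)).orElse(defaultValue);
    }

    public static void main(String[] args) {
        String dbUrl = get("DB_URL", "jdbc:postgresql://localhost:5432/app");
        int httpPort = Integer.parseInt(get("HTTP_PORT", "8080"));
        System.out.println("db=" + dbUrl + " port=" + httpPort);
    }
}
```

Keeping configuration out of the image is what lets the same immutable Docker image run unchanged in every environment.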
9.2 Logging
Logging is essential for applications.
Track the request using Correlation Id
- As soon as request arrives, create a correlation ID in MS1
- If MS1 invokes MS2, send same correlation ID
- Send it back to client in HTTP header using a special field (X-header fields, for example)
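The three steps above can be sketched in plain Java. `X-Correlation-ID` is a common header-name convention, and the two "services" here are just methods for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of correlation-ID creation and propagation across two services.
public class CorrelationId {

    static final String HEADER = "X-Correlation-ID";

    // MS1: reuse an incoming correlation ID, or mint one at the edge.
    public static String ensureCorrelationId(Map<String, String> headers) {
        return headers.computeIfAbsent(HEADER, h -> UUID.randomUUID().toString());
    }

    // MS1 -> MS2: the same ID is forwarded on the outbound call, so log
    // entries from both services can be joined on it.
    public static String handleInMs2(Map<String, String> forwardedHeaders) {
        String id = forwardedHeaders.get(HEADER);
        System.out.println("[MS2] correlationId=" + id + " processing request");
        return id;
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        String id = ensureCorrelationId(headers);  // created in MS1
        String seenByMs2 = handleInMs2(headers);   // propagated to MS2
        System.out.println(id.equals(seenByMs2));  // same ID end to end
    }
}
```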
The preferred approach for a logging infrastructure in a microservice application is the ELK stack or Elastic Stack,
which is a combination of three tools:
- Elasticsearch (search and analytics engine)
- Logstash (data collection and transportation pipeline)
- Kibana (visualizing collected data)
9.3 Templating
Templating in microservices involves providing a code framework that includes common capabilities like
service registration, communication, messaging, logging, and security.
This approach streamlines development, ensures consistency across teams, and offers generators like WildFly
Swarm, Spring Initializr, and the Liberty app accelerator. For Java-centric environments, a single adaptable
template suffices.