Microservice Design Patterns
When designing microservices, it's important to leverage design patterns that help achieve
scalability, resilience, maintainability, and flexibility. Below are some key microservice design
patterns used in real-world applications:
1. API Gateway Pattern
The API Gateway pattern provides a single entry point for client requests and routes them to
the appropriate microservices. This pattern helps to simplify client-side logic and reduce the
complexity of communication between the client and microservices.
Use Cases:
● Mobile apps that require data from multiple services, where the API Gateway aggregates
the data.
● Authentication and authorization for all services.
2. Database per Service Pattern
In a microservices architecture, each service is usually responsible for its own database. This
means that each microservice can choose the appropriate database technology based on its
needs (e.g., relational, NoSQL).
Key Points:
● Ensures data isolation between microservices, meaning that changes in one service’s
database do not affect others.
● Avoids distributed transactions by keeping each service’s data management
self-contained.
● Promotes autonomy, allowing each service to scale and evolve independently.
Use Cases:
● A user service with a relational database and an orders service with a NoSQL database.
● Services that handle data in different formats (structured vs. unstructured) use different
database types.
3. Event Sourcing Pattern
In the Event Sourcing pattern, the state of a service is persisted as a sequence of events
rather than the current state. Each event represents a change in the state and is stored in an
event log. The service’s state can be rebuilt by replaying the events.
Key Points:
● Ensures that all changes to the system are stored as a series of immutable events.
● Useful for scenarios where auditing and versioning of data are crucial.
● The event log can be used to rebuild the state at any point in time.
4. CQRS (Command Query Responsibility Segregation) Pattern
The CQRS pattern suggests splitting the data access logic into two parts: one for commands
(writes) and one for queries (reads). This separation allows you to optimize read and write
operations independently.
Use Cases:
● Systems with high read/write load, such as social media platforms, e-commerce sites,
and messaging apps.
● Complex domains where the write side and read side have different models.
5. Circuit Breaker Pattern
The Circuit Breaker pattern is used to prevent a failure in one microservice from cascading to
others. When a service fails or experiences delays, the circuit breaker opens, allowing the
system to fall back to a default behavior and preventing further failures.
Key Points:
● Monitors the health of service calls and opens the circuit when failures reach a threshold.
● Allows the system to continue functioning even if some services are unavailable, by
invoking fallback logic.
● Helps to prevent a "domino effect" where a failure in one service causes failures in many
others.
6. Service Discovery Pattern
The Service Discovery pattern enables microservices to find and communicate with each other
dynamically. Instead of hard-coding the network locations (e.g., IP addresses) of services,
microservices register themselves with a service registry, and other services query this registry
to discover available services.
Key Points:
● Service instances register themselves with a central registry when they start.
● Clients or other services query the registry to find service endpoints.
● Helps manage dynamic scaling, where services may be added or removed frequently.
Use Cases:
● Large-scale systems where services can scale up or down automatically (e.g., using
Kubernetes).
● Microservices that need to locate each other at runtime (e.g., in cloud-native
applications).
7. Strangler Fig Pattern
The Strangler Fig pattern involves incrementally replacing an existing monolithic system with
microservices. The idea is to “strangle” the monolith by gradually replacing pieces of its
functionality with microservices, while the old system continues to run.
Key Points:
● Involves gradually migrating to a new architecture, avoiding the need for a complete
rewrite.
● Can be done by adding new functionality as microservices and redirecting traffic to these
services.
● Helps mitigate risks associated with a full migration.
8. Saga Pattern
The Saga pattern handles distributed transactions in a microservice architecture by dividing the
transaction into a series of smaller, isolated steps (each step being a local transaction in a
single service). If one step fails, compensating actions are taken to undo the previous steps.
Key Points:
● Each step is a local transaction in a single service; there is no global ACID transaction.
● If a step fails, compensating transactions undo the work of the preceding steps.
● Can be coordinated through choreography (events) or orchestration (a central coordinator).
9. Backends for Frontends (BFF) Pattern
The Backends for Frontends (BFF) pattern involves creating a dedicated backend service for
each frontend (e.g., web, mobile) to simplify the user experience and optimize API calls.
Key Points:
● Each client (web, mobile, etc.) has a specific backend tailored to its needs.
● Helps avoid having a one-size-fits-all API, improving performance and simplifying logic.
● Reduces the burden on the frontend by offloading complex operations to the BFF.
Use Cases:
● Mobile apps and web apps with different user interface needs that require different data
or behavior.
● Optimizing APIs for specific client platforms (e.g., mobile clients need fewer resources
than web clients).
10. Sidecar Pattern
The Sidecar pattern involves deploying a helper service alongside a microservice to handle
cross-cutting concerns like logging, monitoring, or security. The sidecar service runs in the same
environment as the microservice but is responsible for auxiliary tasks.
Use Cases:
● Adding a proxy service for handling security (e.g., authentication and authorization).
● Integrating logging and monitoring without modifying the core microservice code.
Conclusion
By leveraging these patterns, you can ensure that your microservices are robust, resilient, and
capable of handling a wide range of use cases in a distributed environment.
API Gateway Pattern
The API Gateway pattern is a design pattern used in microservices architecture that provides a
single entry point for all client requests. It acts as a reverse proxy, routing requests from
clients to the appropriate microservices. The API Gateway pattern helps manage and simplify
communication between clients and backend services by centralizing various cross-cutting
concerns, such as authentication, logging, rate limiting, response aggregation, and caching.
Key Concepts:
1. Single Entry Point: The API Gateway provides a single entry point to the client, which
interacts with multiple microservices behind the gateway.
2. Request Routing: The gateway forwards client requests to the appropriate backend
service.
3. Cross-Cutting Concerns: The API Gateway can handle tasks like authentication,
authorization, caching, logging, rate-limiting, and response transformations.
4. Response Aggregation: It can aggregate responses from multiple microservices into a
single response to the client.
5. Simplified Client: Clients interact with the API Gateway instead of directly interacting
with individual services, making the client-side code simpler.
Benefits:
● Simplified Client Interaction: Clients don’t need to know the details of the
microservices behind the scenes, reducing the complexity on the client side.
● Centralized Cross-Cutting Concerns: Handling logging, authentication, rate limiting,
and other concerns in one place reduces duplication across microservices.
● Reduced Number of Requests: By aggregating responses from multiple services, the
API Gateway reduces the number of calls the client needs to make.
● Flexibility and Security: The API Gateway can enforce security policies (e.g.,
authentication and authorization) centrally, ensuring a consistent security mechanism for
all services.
In a typical microservices architecture, the client would need to communicate with each of these
services directly, which can become complex. The API Gateway acts as a single point of entry,
routing requests to the appropriate microservice.
Instead of the client making separate HTTP requests to each service, the API Gateway handles
the routing, aggregation, and orchestration of these requests.
High-Level Flow:
1. Client Request: The client sends an HTTP request to the API Gateway to place an
order.
2. API Gateway: The gateway routes the request to the appropriate microservices:
○ It calls User Service to authenticate and retrieve user data.
○ It calls Inventory Service to check if the product is in stock.
○ It calls Payment Service to process the payment.
○ It calls Shipping Service to arrange the shipping of the product.
3. Aggregation: After receiving responses from the microservices, the API Gateway
aggregates the results into a single response (e.g., success message with order ID).
4. Client Response: The client receives the aggregated response from the API Gateway.
Let’s say that the application requires all requests to be authenticated using a JWT token.
Instead of each microservice individually handling authentication, the API Gateway can handle
this concern centrally.
High-Level Flow:
1. Client Request: The client sends a request to the API Gateway, including the JWT
token in the request headers.
2. API Gateway: Before forwarding the request to any service, the gateway:
○ Verifies the JWT token.
○ Checks if the token has the necessary permissions to access the requested
resource.
3. Service Routing: If authentication and authorization are successful, the gateway routes
the request to the appropriate service (e.g., Order Service).
4. Response: The requested data is retrieved from the service, and the API Gateway
sends the response back to the client.
By handling authentication and authorization centrally at the API Gateway, the individual
services can focus on their core functionality without needing to duplicate security logic.
Let’s assume we are implementing the API Gateway in a Spring Boot application with the
Spring Cloud Gateway library, which is a popular choice for building API Gateways in the
Spring ecosystem.
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-gateway</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
</dependencies>
(Note: the reactive Spring Cloud Gateway runs on WebFlux, so spring-boot-starter-web should generally be omitted or the application configured as reactive.)
spring:
cloud:
gateway:
routes:
- id: user-service
uri: lb://user-service
predicates:
- Path=/api/user/**
- id: order-service
uri: lb://order-service
predicates:
- Path=/api/order/**
- id: payment-service
uri: lb://payment-service
predicates:
- Path=/api/payment/**
- id: inventory-service
uri: lb://inventory-service
predicates:
- Path=/api/inventory/**
- id: shipping-service
uri: lb://shipping-service
predicates:
- Path=/api/shipping/**
Explanation: Each route maps a path prefix (e.g., /api/user/**) to a target service. The lb:// scheme tells Spring Cloud Gateway to resolve the service name through service discovery and load-balance across the available instances.
You can also add filters for common operations like authentication, logging, and rate limiting.
@Component
public class AuthenticationFilter implements GatewayFilter {
    private static final Logger logger = LoggerFactory.getLogger(AuthenticationFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        String authToken = exchange.getRequest().getHeaders().getFirst(HttpHeaders.AUTHORIZATION);
        // Reject the request if no bearer token is present
        if (authToken == null || !authToken.startsWith("Bearer ")) {
            logger.warn("Missing or invalid Authorization header");
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
        // Token present: verify it (verification logic omitted) and forward downstream
        return chain.filter(exchange);
    }
}
In this case, the API Gateway will intercept incoming requests and check if the request includes
a valid JWT token before forwarding the request to the microservices.
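As a brief illustration, here is one way such a filter could be attached to a route programmatically. This is a minimal sketch assuming the AuthenticationFilter bean above; the route id and service name are illustrative:
@Configuration
public class GatewayRoutesConfig {
    // Attach the authentication filter to the order-service route
    @Bean
    public RouteLocator securedRoutes(RouteLocatorBuilder builder, AuthenticationFilter authFilter) {
        return builder.routes()
                .route("order-service", r -> r.path("/api/order/**")
                        .filters(f -> f.filter(authFilter))
                        .uri("lb://order-service"))
                .build();
    }
}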
Imagine the client wants a summary of order status, including information from multiple
microservices, such as Order, Payment, and Shipping. The API Gateway can aggregate these
responses into a single response.
@Component
public class AggregatingFilter implements GatewayFilter {
    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        // Call multiple services in parallel (Order and Payment shown;
        // Shipping would follow the same pattern)
        Mono<Order> orderMono = WebClient.create("http://order-service/api/order")
                .get()
                .retrieve()
                .bodyToMono(Order.class);
        Mono<Payment> paymentMono = WebClient.create("http://payment-service/api/payment")
                .get()
                .retrieve()
                .bodyToMono(Payment.class);

        // Wait for both responses, combine them, and write a single JSON payload
        return Mono.zip(orderMono, paymentMono).flatMap(tuple -> {
            // OrderSummary (not shown) is a simple DTO combining both results
            // and exposing the toJson() helper used below
            OrderSummary response = new OrderSummary(tuple.getT1(), tuple.getT2());
            exchange.getResponse().getHeaders().setContentType(MediaType.APPLICATION_JSON);
            return exchange.getResponse().writeWith(Mono.just(
                    exchange.getResponse().bufferFactory().wrap(response.toJson().getBytes())));
        });
    }
}
Conclusion
In the example of the E-commerce System, the API Gateway routes requests to various
microservices, ensuring that clients don't need to directly interact with each service. Additionally,
the API Gateway can enforce security policies (authentication) and aggregate responses from
multiple services into a single unified response.
By using an API Gateway, you gain flexibility, security, and simplification in the architecture of your microservices.
Database per Service Pattern
The Database per Service pattern is a key design pattern in microservices architecture. It
suggests that each microservice should have its own dedicated database (or storage). This
ensures that each service is independent, has control over its data, and avoids coupling
between services through a shared database.
Key Concepts:
1. Service Independence: Each microservice manages its own data and is responsible
for its own database schema.
2. Decentralized Data Storage: There is no direct access to another service's database.
Microservices communicate with each other via APIs (e.g., REST or messaging
systems) rather than sharing a common database.
3. Loose Coupling: Each service is loosely coupled to others because it doesn't rely on a
shared database, making it easier to change or scale individual services independently.
4. Consistency: Since microservices often have separate databases, ensuring consistency
(ACID properties) across services can be challenging. This pattern generally relies on
eventual consistency rather than strict consistency.
Benefits:
● Autonomy: Each service can choose the database that best fits its needs (e.g.,
relational databases, NoSQL databases, key-value stores).
● Scalability: Microservices can scale independently because each service manages its
own database.
● Resilience: One service’s database failure does not affect other services.
● Technology Flexibility: Different services can use different database technologies
depending on their needs (e.g., a user service might use a relational database like
MySQL, while an order service might use a NoSQL database like MongoDB).
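As a brief illustration of this technology flexibility, each service can expose its own repository over its own store. This is a hedged sketch assuming Spring Data JPA and Spring Data MongoDB are on the respective services' classpaths; the entity types are placeholders (persistence annotations such as @Entity and @Document are omitted for brevity):
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.mongodb.repository.MongoRepository;

// Minimal placeholder types for illustration
class UserEntity { Long id; String name; }
class OrderDocument { String id; }

// User Service: structured, relational data via Spring Data JPA
interface UserRepository extends JpaRepository<UserEntity, Long> {}

// Order Service: document-oriented data via Spring Data MongoDB
interface OrderRepository extends MongoRepository<OrderDocument, String> {}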
Challenges:
● Data Duplication: Each service may have its own copy of certain data, leading to
duplication across services.
● Distributed Transactions: Managing distributed transactions (transactions that span
multiple microservices) becomes harder since each service controls its own database.
● Eventual Consistency: It becomes more difficult to ensure consistency of data across
multiple services, and developers often have to deal with eventual consistency (with
compensating actions or event-driven architecture).
Each service is responsible for its own database, so they have their own isolated data stores.
In this example, a customer wants to place an order, which involves interacting with multiple
services:
Each service is backed by its own data store:
1. User Service Database:
○ The User Service stores user-related data such as user profile, preferences, login credentials, etc. This database can be a relational database (e.g., PostgreSQL, MySQL), as user data is often structured and relational.
2. Order Service Database:
○ The Order Service stores order data (items, totals, status). A document database (e.g., MongoDB) or a relational database can be used, depending on how structured the order data is.
3. Inventory Service Database:
○ The Inventory Service is responsible for tracking product availability and stock levels. This service can use a key-value store (e.g., Redis) or a relational database to store product information.
4. Payment Service Database:
○ The Payment Service records payment transactions in its own database; a relational database is a common choice given the transactional nature of payments.
Step-by-Step Breakdown:
1. Customer places an order: The client (e.g., a web or mobile app) sends a request to
the Order Service.
2. Order Service interacts with User Service: The Order Service first checks the User
Service to ensure that the customer is authenticated and retrieves the user’s details
from the User Service database.
○ User Service: Verifies user’s credentials and provides information like the
shipping address.
3. Inventory Service checks stock: The Order Service then calls the Inventory Service
to check if the ordered items are available in stock.
4. Payment Service processes payment: The Order Service then calls the Payment Service to process the payment for the order.
○ Payment Service: Processes the payment (e.g., via credit card or PayPal) and records the transaction in its own database.
5. Inventory Service updates stock: Once the payment is successful, the Inventory Service updates the stock levels in its own database (reduces the stock of ordered items).
6. Shipping Service schedules shipment: Finally, the Order Service communicates with the Shipping Service to schedule the shipment and update the shipping status in the order.
7. Response to Client: Once the order is successfully placed, the Order Service sends a response back to the client with the order details.
Benefits in this example:
● Independence: Each service (Order, Inventory, Payment) is responsible for its own data.
The Order Service is free to evolve without affecting other services.
● Scalability: If the Order Service is experiencing high traffic, it can scale its database
independently without affecting the Inventory Service or Payment Service.
● Autonomy: The Inventory Service can choose its preferred database technology (e.g.,
NoSQL for fast lookups of product availability), while the Order Service can use a
relational database (e.g., MySQL) to store structured order data.
Challenges in this example:
1. Data Duplication:
○ Data such as user details might be duplicated across services (e.g., in both the Order Service and the Payment Service). This duplication is inevitable in a distributed system but can be managed by ensuring consistency across services using asynchronous communication (e.g., event-driven architecture with Kafka or RabbitMQ).
2. Distributed Transactions:
○ A single business operation (placing an order) spans several services and databases, so a single ACID transaction is not possible; patterns such as the Saga pattern (covered later) are typically used to maintain consistency.
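As a hedged illustration of the event-driven approach mentioned above, a service can publish an event after committing a local change so that other services can refresh their own copies. This minimal sketch assumes Spring Kafka; the topic name "user-events" is illustrative:
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class UserEventPublisher {
    private final KafkaTemplate<String, String> kafkaTemplate;

    public UserEventPublisher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Called after the User Service commits a change to its own database;
    // other services (e.g., Order, Payment) consume this to update their copies.
    public void publishUserUpdated(String userId, String payloadJson) {
        kafkaTemplate.send("user-events", userId, payloadJson);
    }
}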
Conclusion:
The Database per Service pattern is a powerful way to ensure that microservices in a
distributed system remain independent and loosely coupled. Each microservice manages its
own database and can choose the most appropriate database technology for its needs. While
this approach brings significant flexibility, it also introduces challenges such as data
duplication, eventual consistency, and the need for distributed transactions.
In the E-commerce System example, this pattern allows each microservice (Order, Inventory,
Payment) to independently scale and evolve without being tightly coupled to the others, leading
to a more modular, flexible, and maintainable system.
Event Sourcing Pattern
The Event Sourcing pattern is an architectural pattern where state changes in an application
are persisted as a sequence of events rather than by directly storing the current state. In this
pattern, each change in the application’s state (e.g., an update to a database) is captured as an
immutable event. These events are stored in an event store, and the current state of the
application is derived by replaying the events in sequence.
Key Concepts:
1. Event Store: A specialized database or storage system that keeps a log of all events.
2. Immutable Events: Each event is immutable (cannot be changed once stored). It
represents a specific change in the state of the system (e.g., a user placed an order, an
item was added to the inventory).
3. State Reconstruction: The current state of an entity or system is not stored directly.
Instead, it is reconstructed by replaying the series of events that have occurred.
4. Eventual Consistency: Since events are processed asynchronously, the system might
exhibit eventual consistency rather than immediate consistency.
Benefits:
● Auditability: The entire history of an entity’s changes is stored as events, allowing you
to trace the lifecycle of data changes.
● Scalability: By using event logs and event-driven systems, you can achieve high
scalability and flexibility.
● Decoupling: Services can be decoupled as they can listen to events and act upon them
asynchronously.
● Temporal Queries: You can query the system for the state at any point in time by
replaying events up to that point.
Challenges:
● Complexity: Event Sourcing adds complexity to the system since you need to manage
the event store, handle eventual consistency, and design for replays of events.
● Event Storage Size: The event store can grow large over time, as it stores all events.
● Event Versioning: Over time, the event schema might evolve. Managing and
maintaining backward compatibility between versions of events can be tricky.
Let’s take an example of an E-commerce Order Management System to explain how Event
Sourcing works.
1. Order Placed: The customer places an order, which starts the process.
2. Order Payment Processed: The payment for the order is processed.
3. Order Shipped: The order is shipped after the payment is processed.
Each of these events represents a state change in the Order entity. Instead of storing just the
current state (e.g., Order Status: Shipped), the system stores a sequence of events
leading to the current state.
When a customer places an order, an event is created to represent this state change.
{
"eventId": "12345",
"eventType": "OrderPlaced",
"orderId": "1001",
"customerId": "5678",
"timestamp": "2025-01-16T10:00:00",
"orderDetails": {
"items": [
{ "productId": "P001", "quantity": 2 },
{ "productId": "P002", "quantity": 1 }
],
"totalAmount": 150.00
}
}
This event is stored in the Event Store. The event captures the information about the order
placement, including customer details and the items ordered.
After the customer’s payment is processed, a new event is generated to represent this state
change.
{
"eventId": "12346",
"eventType": "PaymentProcessed",
"orderId": "1001",
"paymentStatus": "Success",
"paymentAmount": 150.00,
"timestamp": "2025-01-16T10:05:00"
}
This event is stored in the event store as well. It includes details of the payment status and the
amount paid. Now, the system knows the payment has been processed successfully for order
1001.
{
"eventId": "12347",
"eventType": "OrderShipped",
"orderId": "1001",
"shippingStatus": "Shipped",
"timestamp": "2025-01-16T10:15:00",
"trackingNumber": "XYZ123456"
}
This event indicates that the order has been shipped and includes shipping details like the
tracking number. This event is also saved to the event store.
Event Store
All these events are stored in an Event Store. It could be a simple append-only log (like Kafka
or EventStoreDB) or a custom database designed for storing events.
The Event Store holds all these events in the following sequence:
● Event 1: OrderPlaced
● Event 2: PaymentProcessed
● Event 3: OrderShipped
The current state of the Order can be rebuilt by replaying the events stored in the Event Store.
1. Retrieve all events related to the order (in this case, 1001).
2. Replay the events in the correct order:
○ First, an order was placed.
○ Then, the payment was processed.
○ Finally, the order was shipped.
By replaying these events, we can determine the current state of the order: Shipped.
{
"orderId": "1001",
"status": "Shipped",
"items": [
{ "productId": "P001", "quantity": 2 },
{ "productId": "P002", "quantity": 1 }
],
"totalAmount": 150.00,
"shippingStatus": "Shipped",
"trackingNumber": "XYZ123456"
}
Note: The system doesn’t store the current state of the order directly, but it derives it by
replaying the sequence of events.
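A minimal, framework-free sketch of this replay logic follows; the event and state types are illustrative, not a prescribed implementation:
import java.util.List;

// Simplified event type for illustration
record OrderEvent(String eventType, String orderId) {}

class OrderState {
    String status = "New";

    // Apply a single event to evolve the state
    void apply(OrderEvent event) {
        switch (event.eventType()) {
            case "OrderPlaced" -> status = "Placed";
            case "PaymentProcessed" -> status = "Paid";
            case "OrderShipped" -> status = "Shipped";
            default -> { /* ignore unknown event types */ }
        }
    }

    // Rebuild the current state by replaying all events in order
    static OrderState replay(List<OrderEvent> events) {
        OrderState state = new OrderState();
        events.forEach(state::apply);
        return state;
    }
}
Replaying the three events for order 1001 (OrderPlaced, PaymentProcessed, OrderShipped) through replay() would yield status "Shipped", matching the derived state shown above.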
Benefits of Event Sourcing in this example:
1. Auditability: Every change in the order state is captured as an event, which allows you
to trace the complete history of the order, from placement to shipment.
2. Flexibility: If you want to change the business logic (e.g., adding a new rule for order
processing), you don’t need to modify the current state directly. You can handle the
change by processing events differently or adding new events.
3. Event Replay: If the system needs to calculate the state at any point in time, you can
replay the events up to that point. For example, if you need to see the status of an order
on 2025-01-16T10:10:00, you can replay all events up to that timestamp.
4. Decoupling: Since each service or component works with events rather than directly
modifying the database, this pattern promotes loose coupling. Services can react to
events asynchronously.
5. CQRS (Command Query Responsibility Segregation): Event Sourcing works well
with CQRS, a pattern that separates the handling of commands (actions that change
state) from queries (retrieving state). Events can be used for commands, and the
reconstructed state can be optimized for queries.
How the challenges can be addressed:
1. Event Storage Size: As events accumulate over time, the event store can grow large. To address this, periodic snapshots of the current state can be stored so that a replay only needs the events since the last snapshot.
2. Event Versioning: Over time, the event schema might evolve. To maintain compatibility:
○ Event Schema Evolution: Use versioning for events or add new fields with defaults to maintain backward compatibility.
○ Event Normalization: Use a versioning system or a transformation layer to convert old events into a format understood by the current system.
3. Eventual Consistency: Because event processing is asynchronous, there can be a
delay between when an event occurs and when the state reflects that change. This is an
inherent part of Event Sourcing and is addressed through eventual consistency
mechanisms like retries and compensating actions.
Conclusion
The Event Sourcing pattern enables systems to capture and store every state change as an
immutable event. Instead of storing the current state, you store the history of all events and
reconstruct the state by replaying those events. This pattern provides benefits like auditability,
scalability, and flexibility in complex systems.
In the E-commerce Order Management System example, Event Sourcing allows for detailed
traceability of the entire lifecycle of an order, from placement to shipment, by storing and
replaying events. While it adds some complexity, it provides significant benefits in terms of state
management, scalability, and decoupling of services.
CQRS (Command Query Responsibility Segregation) Pattern
The CQRS pattern separates an application's operations into two categories:
1. Commands: Operations that modify state (e.g., create, update, delete).
2. Queries: Operations that retrieve data without modifying it.
Key Concepts:
1. Separation of Concerns: The read and write operations are completely separated,
which can help optimize each for its specific purpose (e.g., reads are optimized for
performance and queries, writes are optimized for consistency).
2. Scalability: By separating the read and write sides, each side can be scaled
independently. This is particularly useful when read operations outnumber write
operations.
3. Eventual Consistency: Since the read model is usually updated asynchronously, it can
result in eventual consistency, meaning the read side may not always immediately
reflect changes made on the write side.
Benefits:
● Performance Optimization: The read and write sides can be optimized independently.
For example, the read side can be denormalized (faster querying) while the write side
can be kept normalized (for consistency).
● Independent Scaling: The read and write sides can be scaled independently. In a
typical application, read operations are more frequent than writes, so scaling the read
side can lead to significant performance improvements.
● Simplified Domain Logic: The write side (command side) often has a simpler, more
explicit representation of business logic. This is because commands are typically used
for modifying state and have specific validation rules, while queries are often about
retrieving data and can be optimized separately.
Example: E-commerce Order System
Consider an order management system where the operations include:
● Commands:
○ Place an order
○ Update an order (e.g., change the shipping address)
○ Cancel an order
● Queries:
○ Get order details (by order ID)
○ Get orders for a specific customer
In this system, CQRS would separate the concerns of reading and writing orders. We will use
two models: a Write Model (to handle commands like placing, updating, and canceling orders)
and a Read Model (to efficiently retrieve order details and customer orders).
The Write Model contains the business logic and entities related to order creation, updates, and
deletion. It handles commands like PlaceOrder, UpdateOrder, and CancelOrder.
● PlaceOrder Command: When a customer places an order, the command handler will
validate the order, check inventory, and update the state of the system (e.g., create an
order, subtract stock).
The PlaceOrderCommandHandler will handle the command, interacting with the write model
to persist the order in the database.
● UpdateOrder Command: If the customer updates their order (e.g., changes shipping
address), a command will trigger that updates the corresponding data in the database.
The UpdateOrderCommandHandler will apply the changes to the order in the database.
The Read Model is optimized for querying and retrieving data. In this model, data may be
denormalized to make it easier to retrieve specific information (e.g., customer’s order history).
● GetOrderDetails Query: This query is used to get detailed information about a specific
order.
The Read Model may store data in a denormalized form to facilitate fast queries. For instance, it
may store order summaries or a list of orders for each customer.
This denormalized data allows the system to quickly serve queries without needing to join
multiple tables or perform complex calculations.
How CQRS Works in This Example:
1. A command (e.g., PlaceOrder) is sent to the write model, where a command handler validates it and persists the change.
2. The write model publishes an event describing the change (e.g., OrderPlaced).
3. The read model consumes the event and updates its denormalized views (e.g., the customer's order history).
4. Queries (e.g., GetOrderDetails) are served directly from the read model, without touching the write model.
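A minimal, framework-free sketch of these steps; all type names are illustrative, and the event bus and read store are stand-ins for whatever messaging and storage the system actually uses:
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Command side: validates and applies state changes, then emits an event
record PlaceOrderCommand(String orderId, String customerId, double amount) {}
record OrderPlacedEvent(String orderId, String customerId, double amount) {}

class PlaceOrderCommandHandler {
    private final Consumer<OrderPlacedEvent> eventBus;

    PlaceOrderCommandHandler(Consumer<OrderPlacedEvent> eventBus) {
        this.eventBus = eventBus;
    }

    void handle(PlaceOrderCommand cmd) {
        if (cmd.amount() <= 0) throw new IllegalArgumentException("Invalid amount");
        // ... persist the order in the write store (omitted) ...
        eventBus.accept(new OrderPlacedEvent(cmd.orderId(), cmd.customerId(), cmd.amount()));
    }
}

// Query side: maintains a denormalized view optimized for reads
class OrderSummaryProjection {
    private final Map<String, String> summariesByOrderId = new HashMap<>();

    void on(OrderPlacedEvent event) {
        summariesByOrderId.put(event.orderId(),
                "Order " + event.orderId() + " for customer " + event.customerId());
    }

    String getOrderSummary(String orderId) {
        return summariesByOrderId.get(orderId); // served without touching the write model
    }
}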
Benefits of CQRS:
1. Performance: By separating read and write concerns, you can optimize each side
independently. The read side can be denormalized and optimized for fast queries, while
the write side can focus on business logic and consistency.
2. Scalability: You can scale the read and write models independently. In systems where
read operations significantly outnumber write operations, you can scale the read side
(often a cache or read-optimized store) without impacting the write side.
3. Flexibility: Different technologies can be used for the read and write models. For
example, the write model might use a relational database for transactional consistency,
while the read model might use a NoSQL database like MongoDB or a caching layer like
Redis for fast lookups.
4. Complexity Management: Complex domain logic related to commands can be isolated
in the write model, while the read model focuses purely on fast and efficient queries.
Challenges of CQRS:
1. Complexity: Implementing and maintaining two separate models (read and write)
increases the complexity of the system. You'll need to manage the synchronization of
data between the models (often using event-driven mechanisms).
2. Eventual Consistency: The read model is typically updated asynchronously, meaning
there might be a delay in reflecting changes made to the write model. This leads to
eventual consistency, which might not be acceptable in all scenarios.
3. Data Duplication: The read model may store data that is duplicated from the write
model, leading to additional storage requirements and complexity in keeping the data in
sync.
Conclusion
The CQRS (Command Query Responsibility Segregation) pattern helps optimize systems by
separating the write (command) and read (query) operations. It allows you to scale and
optimize each side independently, making the system more efficient and flexible. In the
E-commerce System example, CQRS helps optimize the performance of queries (such as
retrieving customer orders) while still supporting complex business logic for commands (such as
placing or updating orders). While CQRS brings significant benefits in performance and
scalability, it also introduces complexities in terms of managing data consistency and
synchronization.
Circuit Breaker Pattern
The Circuit Breaker pattern is a software design pattern that is used to detect failures in a
system and prevent further attempts to perform an operation that is likely to fail. The pattern is
inspired by the electrical circuit breaker, which detects faults and stops the flow of electricity to
prevent further damage.
In a software system, the Circuit Breaker pattern is used to handle failures in remote services,
microservices, or other external systems. When an operation (e.g., a call to an external API or
microservice) fails repeatedly, the circuit breaker "trips" (or opens), and the system stops trying
to execute the failing operation. This allows the system to recover more gracefully and avoid
cascading failures that might overwhelm other parts of the system.
Key Concepts:
1. Closed State: The default state of the circuit breaker where operations are allowed to
execute. If the service works as expected, requests will proceed normally.
2. Open State: When the circuit breaker detects too many failures within a certain time
window, it enters the open state. In this state, the system will immediately fail all
requests without trying to call the external service, thus preventing the system from
repeatedly trying and failing.
3. Half-Open State: After the circuit breaker has been in the open state for a
predetermined period, it enters the half-open state. In this state, a limited number of
requests are allowed to test if the external service has recovered. If these requests
succeed, the circuit breaker transitions back to the closed state. If they fail, it returns to
the open state.
Benefits:
1. Prevents System Overload: When a service is failing repeatedly, trying to call it can
lead to unnecessary load, causing further strain on the system. The circuit breaker
prevents this overload by blocking further calls.
2. Improved System Resilience: By preventing cascading failures and giving failing
services time to recover, the circuit breaker helps maintain the overall system stability.
3. Graceful Degradation: Instead of a complete failure, the circuit breaker allows for
graceful degradation by blocking faulty operations while continuing to serve other parts
of the system.
4. Fail Fast: The circuit breaker prevents the system from wasting resources on operations
that are likely to fail, enabling faster error detection and recovery.
Example: Consider an e-commerce system in which the Order Service calls an external Payment Service to process payments. Here's how the Circuit Breaker pattern would work in this scenario:
In the closed state, the circuit breaker allows requests to pass through and attempts to process
payments via the external payment service.
public class PaymentService {
    private final CircuitBreaker circuitBreaker;

    public PaymentService() {
        this.circuitBreaker = new CircuitBreaker();
    }
}
In this case, the processPayment method calls the external payment service. If the payment
service fails, the circuit breaker records the failure, and the system transitions into the open
state after a certain threshold of failures.
When the payment service fails repeatedly (e.g., more than 5 failures within a short time), the
circuit breaker transitions into the open state. In the open state, any new attempts to call the
payment service are blocked immediately, preventing unnecessary load on the payment service.
In the circuit breaker's logic (a sketch follows below):
● The circuit breaker will keep track of how many failures occurred (failureCount).
● If the number of failures exceeds the threshold (e.g., 5), it enters the open state.
● After a certain period (OPEN_TIME_DURATION), the circuit breaker resets and enters the
half-open state to test if the service has recovered.
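The snippet being described is not shown above; as a hedged reconstruction, a minimal circuit breaker consistent with that description might look like this. The failureCount field, the threshold of 5, and OPEN_TIME_DURATION come from the description; everything else (state names, the 30-second value, method signatures) is assumed:
import java.time.Duration;
import java.time.Instant;

public class CircuitBreaker {
    private enum State { CLOSED, OPEN, HALF_OPEN }

    private static final int FAILURE_THRESHOLD = 5;
    private static final Duration OPEN_TIME_DURATION = Duration.ofSeconds(30); // assumed value

    private State state = State.CLOSED;
    private int failureCount = 0;
    private Instant openedAt;

    // Returns true if the call may proceed; false means "fail fast"
    public synchronized boolean allowRequest() {
        if (state == State.OPEN) {
            if (Duration.between(openedAt, Instant.now()).compareTo(OPEN_TIME_DURATION) >= 0) {
                state = State.HALF_OPEN; // open period elapsed: allow a trial request
                return true;
            }
            return false;
        }
        return true; // CLOSED and HALF_OPEN allow calls through
    }

    public synchronized void recordSuccess() {
        failureCount = 0;
        state = State.CLOSED;
    }

    public synchronized void recordFailure() {
        failureCount++;
        if (state == State.HALF_OPEN || failureCount >= FAILURE_THRESHOLD) {
            state = State.OPEN;
            openedAt = Instant.now();
        }
    }
}
The processPayment method would then call allowRequest() before invoking the external payment service and report the outcome via recordSuccess() or recordFailure().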
In the half-open state, the system allows a few requests to pass through to test whether the
external payment service has recovered. If these requests succeed, the circuit breaker
transitions back to the closed state. If they fail, the circuit breaker returns to the open state.
Here’s how the flow would look for the circuit breaker:
1. The first few requests fail, and the circuit breaker enters the open state.
2. After a timeout (OPEN_TIME_DURATION), the circuit breaker enters the half-open state
and allows a few requests to pass.
3. If these requests succeed, the circuit breaker returns to the closed state and normal
operation resumes.
4. If they fail, the circuit breaker goes back to the open state.
Benefits of the Circuit Breaker Pattern:
1. Prevents Overloading a Failed Service: The circuit breaker prevents further requests
to a service that is likely to fail, helping to avoid overwhelming it and giving it time to
recover.
2. Graceful Degradation: By isolating failures, the system can still function partially,
ensuring other operations can continue while the problematic service recovers.
3. Improved Resilience: The circuit breaker helps build a more fault-tolerant and resilient
system by managing failure scenarios in a controlled manner.
4. Faster Recovery: The system can quickly detect failures and stop wasting resources on
failed operations, reducing the time required to recover.
Example in Action
Suppose the payment service starts to fail due to some network issues or an outage. With the Circuit Breaker pattern, instead of continuously retrying failed requests (which could strain both the system and the external service), the system will:
1. Trip the circuit once the failure threshold is reached and stop calling the payment service.
2. Fail fast (or invoke fallback logic) for new payment requests while the circuit is open.
3. Periodically allow trial requests in the half-open state and resume normal operation once the service recovers.
Conclusion
The Circuit Breaker Pattern is a powerful resilience pattern that helps systems handle failures
gracefully by detecting failures and preventing repetitive, unnecessary operations that might
overload services. This pattern increases the robustness and fault tolerance of a system by
introducing a mechanism to stop calling failing services and allows them to recover. It is
particularly useful in microservices and distributed systems, where services depend on
external systems or services that might become temporarily unavailable.
Service Discovery Pattern
In a traditional monolithic application, all services are typically known and their locations are
fixed. However, in a microservices architecture, services are often distributed and can
dynamically scale. As services are created or destroyed frequently (e.g., in a containerized
environment), service discovery provides a way for services to automatically find and
communicate with each other.
Key Concepts:
1. Client-Side Service Discovery: In this approach, the client (a service) is responsible for
knowing how to find other services. The client queries a service registry to get the
location of a service and then communicates directly with the service.
2. Server-Side Service Discovery: In this approach, the client sends a request to a load
balancer or API Gateway, which is responsible for discovering the location of the
appropriate service and forwarding the request.
The main objective of service discovery is to decouple service instances and clients. It helps handle issues like dynamically changing network locations, frequent scaling up and down of instances, and instance failures. There are two main approaches:
1. Static Discovery: Involves maintaining a fixed list of service endpoints in a configuration file or DNS, which services use to find each other. However, this method doesn't scale well and is less flexible.
2. Dynamic Discovery: Involves using a service registry where services register
themselves upon startup and de-register when they stop. Clients can query the registry
to discover available instances of a service.
Key Components:
1. Service Registry: A central repository that keeps track of the network locations
(addresses) of available service instances.
2. Service Providers: Services that register themselves in the service registry. They report
their network location (e.g., IP address and port) to the registry.
3. Service Consumers: Clients or other services that need to discover and communicate
with the service providers.
Popular Service Discovery Tools:
● Consul: A tool that provides a service registry and supports both client-side and
server-side service discovery.
● Eureka: A REST-based service registry provided by Netflix, primarily used in Spring
Cloud applications.
● Zookeeper: An open-source project that provides centralized configuration management
and service discovery.
● Kubernetes: Kubernetes provides built-in service discovery with its internal DNS
system, where services can discover each other via DNS names.
Example: Consider an e-commerce system, using Consul as the service registry, with three services:
● Order Service
● Payment Service
● Inventory Service
Each service needs to be able to discover the others to make API calls.
1. Service Registration
When a service (e.g., Order Service) starts up, it registers itself with the Consul service
registry. It provides its address and metadata such as service name, health status, and version.
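As an illustrative sketch, registration can be done with a PUT to Consul's /v1/agent/service/register endpoint; the service ID, address, and port in the payload are assumptions:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulRegistration {
    public static void main(String[] args) throws Exception {
        // Illustrative registration payload for the Order Service
        String payload = """
                {"ID": "order-service-1", "Name": "order-service",
                 "Address": "192.168.1.2", "Port": 8080}
                """;
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/agent/service/register"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        // Consul responds with 200 OK on successful registration
        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("Registration status: " + response.statusCode());
    }
}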
2. Service Lookup
Now, the Order Service wants to call the Payment Service. Instead of hard-coding the IP
address or hostname of the Payment Service, it queries Consul to find available instances.
Example: A client (Order Service) queries Consul for the Payment Service address:
curl http://localhost:8500/v1/catalog/service/payment-service
[
{
"Node": "payment-service-1",
"Address": "192.168.1.3",
"ServiceID": "payment-service-1",
"ServiceName": "payment-service",
"ServiceAddress": "192.168.1.3",
"ServicePort": 8081
}
]
● The Order Service now knows that the Payment Service is running on
192.168.1.3:8081.
3. Load Balancing and Failover
Consul can return multiple instances of the service if they are available. In this case, the Order
Service can choose one of the available Payment Service instances. If one instance is down
or unavailable, it can retry with another instance.
Alternatively, the Order Service can use a load balancer to distribute the requests across
multiple instances of the Payment Service, improving scalability and fault tolerance.
Service Discovery in Kubernetes
In Kubernetes, service discovery is built-in and is based on DNS. When a service is created in
Kubernetes, it is automatically assigned a DNS name that can be used by other services to
discover it.
Example:
● You have a Payment Service running in Kubernetes as a service called
payment-service.
● The Order Service can discover and communicate with the Payment Service by calling
payment-service:8080 in its requests. Kubernetes will resolve the service name
(payment-service) to the appropriate IP address of the available pod(s).
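As a brief sketch, the call from the Order Service could then look like this, assuming Spring WebFlux's WebClient; the Payment response type and endpoint path are illustrative:
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class PaymentClient {
    // Kubernetes DNS resolves "payment-service" to the service's cluster IP
    private final WebClient client = WebClient.create("http://payment-service:8080");

    record Payment(String orderId, String status) {} // illustrative response type

    public Mono<Payment> getPayment(String orderId) {
        return client.get()
                .uri("/api/payment/{orderId}", orderId)
                .retrieve()
                .bodyToMono(Payment.class);
    }
}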
Benefits of Service Discovery:
1. Dynamic Service Scaling: As services come and go (scale up or down), the service
discovery mechanism ensures that all service instances are registered and discoverable.
2. Decoupling: Services are decoupled from each other in terms of knowledge of their
locations. This enables flexibility and scalability in distributed systems.
3. Fault Tolerance: Service discovery allows systems to dynamically choose available,
healthy instances of a service, enabling high availability and failover.
4. Load Balancing: Service discovery can work with load balancers to distribute traffic
evenly across multiple instances of a service.
Conclusion
The Service Discovery pattern lets services locate one another dynamically through a registry (or built-in DNS, as in Kubernetes) instead of hard-coded addresses, enabling dynamic scaling, failover, and loose coupling in distributed systems.
Strangler Fig Pattern
The Strangler Fig Pattern is a software design pattern often used to migrate legacy systems
to new systems or architectures, such as when transitioning from monolithic applications to
microservices or when refactoring a legacy codebase. The main idea behind the Strangler Fig
Pattern is to incrementally replace parts of a legacy system with new components, without
having to completely rewrite or replace the entire system at once.
The name of the pattern comes from the strangler fig tree, which grows around a host tree and
slowly replaces it over time without killing it. Similarly, in software, the pattern allows you to
"strangle" the legacy system by incrementally replacing parts of it while keeping the existing
system operational until the migration is complete.
Key Concepts:
1. Incremental Replacement: The legacy system is replaced gradually, one piece at a
time, ensuring that the application continues to function as the transition occurs.
2. Coexistence: During the migration process, the new system (e.g., microservices or
refactored components) and the legacy system run in parallel, with both systems
collaborating to ensure that business operations continue without interruption.
3. Risk Mitigation: By replacing components incrementally, the risk of introducing bugs or
downtime is minimized. If a problem arises, it can be isolated to the newly replaced parts
of the system.
Let's consider a monolithic e-commerce application that handles all aspects of an online
store: user authentication, product catalog, order processing, and payment processing. Over
time, the application has become difficult to scale and maintain. The company wants to refactor
the monolith into a microservices architecture but doesn't want to shut down the application
while doing so. Instead, they decide to use the Strangler Fig Pattern.
Step-by-Step Example:
The monolithic e-commerce system can be broken down into the following major components: user authentication, product catalog, order processing, and payment processing.
To ensure that the monolith and the new microservices can coexist, a gateway or API proxy is
introduced. This layer routes requests to either the monolithic application or the new
microservices, depending on which parts of the system have already been migrated.
For example:
● Initially, when the user requests the product catalog, the gateway routes the request to
the monolithic system.
● After the product catalog service has been migrated to a microservice, the gateway will
route product catalog-related requests to the new service, while still handling requests
for other components (like order processing) by forwarding them to the monolithic
system.
● Step 1: Create a new Product Catalog Microservice that performs the same
functionality as the product catalog module in the monolith.
● Step 2: Update the API gateway to route requests for product catalog data to the new
product catalog microservice.
● Step 3: As the microservice for the product catalog becomes operational and stable, the
product catalog functionality in the monolith can be gradually retired.
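A sketch of what the routing layer might look like mid-migration, assuming Spring Cloud Gateway is used as the proxy; the route ids, paths, and service names are illustrative:
@Configuration
public class StranglerRoutesConfig {
    @Bean
    public RouteLocator stranglerRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Already-migrated functionality goes to the new microservice;
                // this more specific route is defined first so it matches
                // before the catch-all below
                .route("catalog", r -> r.path("/api/catalog/**")
                        .uri("lb://product-catalog-service"))
                // Everything else still goes to the legacy monolith
                .route("monolith", r -> r.path("/**")
                        .uri("http://legacy-monolith:8080"))
                .build();
    }
}
As more functionality migrates, new specific routes are added above the catch-all until the monolith route can be removed entirely.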
Example Workflow:
1. Before the Migration:
○ A customer visits the e-commerce site and requests the product catalog.
○ The gateway routes the request to the monolithic system (Product Catalog is still part of the monolith).
2. After the Product Catalog Microservice is Implemented:
○ The gateway now routes product catalog requests to the new microservice, while all other requests continue to go to the monolith.
3. Continuing the Migration:
○ The company continues this process by gradually replacing other parts of the monolith with microservices (e.g., moving the Order Management and Payment Processing to their respective microservices).
○ As each part is replaced, the gateway adjusts to route traffic to the appropriate service.
4. Final State (Microservices):
○ Eventually all functionality has been migrated, the gateway routes every request to a microservice, and the monolith can be retired.
Benefits:
1. Incremental Migration: The Strangler Fig Pattern allows you to gradually migrate away
from legacy systems, reducing the risk of disruptions and allowing the new system to be
developed and tested incrementally.
2. Reduced Risk: Since you are only replacing small, isolated components, the risk of
introducing errors into the entire system is minimized. The legacy system remains
operational during the migration.
3. Minimal Disruption: The business can continue to operate normally during the
migration process. Since the legacy system and the new system coexist, there is no
need for downtime.
4. Maintain Flexibility: The Strangler Fig Pattern allows you to test and experiment with
new architectures and technologies without having to commit to a full rewrite all at once.
You can also adapt your approach as the migration progresses.
Conclusion
The Strangler Fig Pattern is an effective strategy for migrating legacy systems incrementally,
allowing businesses to modernize their applications without disrupting operations. By gradually
replacing parts of a monolithic application with new services or components (such as
microservices), companies can reduce the risks associated with large-scale rewrites and
achieve a smoother transition to a new architecture. This pattern is especially useful when
moving from monolithic systems to microservices or when upgrading outdated technologies,
enabling a gradual, low-risk migration that ensures business continuity.
Saga Pattern
The Saga Pattern is a design pattern for managing distributed transactions in microservices
architectures. In traditional monolithic systems, transactions are typically handled with ACID
(Atomicity, Consistency, Isolation, Durability) properties, ensuring that all changes within a
transaction are committed or rolled back together. However, in a microservices architecture,
where services are distributed and autonomous, achieving this level of consistency across
services is more complex and costly.
The Saga Pattern solves this problem by breaking a distributed transaction into a series of
smaller, isolated transactions that each correspond to a single service. Each service in a saga
performs its part of the transaction and, if successful, passes the transaction on to the next
service in the sequence. If any service fails, compensating transactions are executed to revert
the changes made by the successful transactions.
Key Concepts:
● Saga Coordination: A saga can be coordinated in one of two ways:
1. Choreography: Each service involved in the saga knows about the other
services and handles its own logic for triggering the next service or rolling back in
case of failure.
2. Orchestration: A central service (or orchestrator) coordinates the flow of the
saga, telling each service when to execute its transaction and when to perform
compensating actions.
● Compensating Transactions: If a service in the saga fails, compensating actions (also
known as compensating transactions) are triggered to undo the successful actions
that have already been committed by the previous services in the saga.
Suppose a customer places an order. The order must go through the following steps:
1. Create the order (Order Service)
2. Reserve inventory (Inventory Service)
3. Charge the payment (Payment Service)
4. Ship the product (Shipping Service)
5. Notify the customer (Notification Service)
This is a typical distributed transaction that spans multiple services. If any step fails, we need
to ensure that all previous steps are undone to maintain consistency.
1. Choreographed Saga
In a choreographed saga, each service knows about the other services and coordinates the flow
based on events.
Step 1: Create Order (Order Service)
● The customer places an order. The Order Service creates a new order in the database
and emits an event: OrderPlaced.
{
"event": "OrderPlaced",
"orderId": 123,
"amount": 100.00
}
Step 2: Reserve Inventory (Inventory Service)
● The Inventory Service listens for the OrderPlaced event. When it receives the event,
it reserves inventory for the order and emits an event: InventoryReserved.
{
"event": "InventoryReserved",
"orderId": 123
}
Step 3: Charge Payment (Payment Service)
● The Payment Service listens for the InventoryReserved event. It attempts to charge
the customer's credit card and emits an event: PaymentSucceeded or
PaymentFailed.
{
"event": "PaymentSucceeded",
"orderId": 123,
"transactionId": "txn789"
}
Step 4: Ship Product (Shipping Service)
● The Shipping Service listens for the PaymentSucceeded event, ships the product, and emits an event: ProductShipped.
{
"event": "ProductShipped",
"orderId": 123,
"trackingNumber": "track987"
}
Step 5: Notify Customer (Notification Service)
● The Notification Service listens for the ProductShipped event and sends a
notification to the customer.
{
"event": "NotificationSent",
"orderId": 123,
"status": "Your order has been shipped!"
}
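As a hedged sketch of how one hop in this choreography might be implemented with Spring Kafka, here the Inventory Service reacts to OrderPlaced and emits InventoryReserved; the topic names and JSON handling are illustrative:
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class InventoryEventHandler {
    private final KafkaTemplate<String, String> kafka;

    public InventoryEventHandler(KafkaTemplate<String, String> kafka) {
        this.kafka = kafka;
    }

    // React to OrderPlaced events published by the Order Service
    @KafkaListener(topics = "orders")
    public void onOrderPlaced(String eventJson) {
        // ... parse the event and reserve stock in the local database (omitted) ...
        // Emit the next event in the saga
        kafka.send("inventory", "{\"event\":\"InventoryReserved\",\"orderId\":123}");
    }
}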
Handling Failures (Compensating Transactions)
● If the Payment Service fails (PaymentFailed event), the Inventory Service must
release the reserved inventory by emitting an event: InventoryReleased.
● If the Shipping Service fails to ship the product, the Payment Service must refund the
payment and emit an event: PaymentRefunded.
Example of Compensation Flow (Payment Failure):
1. The Inventory Service listens for the PaymentFailed event and releases the reserved inventory by emitting InventoryReleased:
{
"event": "InventoryReleased",
"orderId": 123
}
2. Shipping Service and Notification Service are never triggered since the payment
failed.
3. Had the failure occurred during shipping instead, the Payment Service would refund the payment and emit PaymentRefunded:
{
"event": "PaymentRefunded",
"orderId": 123,
"transactionId": "txn789"
}
Benefits of the Saga Pattern:
1. Scalability: Each service in the saga is independent, and the pattern allows the system
to scale horizontally by adding more instances of individual services.
2. Fault Tolerance: Since each step in the saga is a local transaction, the failure of one
service does not require the entire transaction to be rolled back. Instead, compensating
actions can be taken to maintain consistency.
3. Flexibility: The saga pattern can be implemented using either choreography or
orchestration, depending on your preference and the complexity of your system.
4. Decoupling: The services involved in a saga are loosely coupled, meaning that changes
in one service (e.g., adding a new payment gateway) do not require changes in other
services.
Challenges and Considerations:
● Complexity: Every step needs a corresponding compensating transaction, which adds design and testing effort.
● Eventual Consistency: While a saga is in progress, the system is only eventually consistent, which must be acceptable for the business process.
Conclusion
The Saga pattern manages distributed transactions by splitting them into local transactions with compensating actions, trading strict ACID guarantees for scalability, fault tolerance, and loose coupling.
Orchestrated Saga Pattern
Let's dive deeper into the Saga Pattern by exploring an alternative orchestrated saga example. This time, instead of having the services manage their own transitions via events (choreography), a central orchestrator controls the flow of the saga. This is often referred to as the Orchestrated Saga Pattern.
Let's continue with the Order Processing System example used above, but this time we'll use an Orchestrator to control the saga.
Steps Involved:
1. Create the order (Order Service)
2. Reserve inventory (Inventory Service)
3. Charge the payment (Payment Service)
4. Ship the product (Shipping Service)
5. Notify the customer (Notification Service)
We'll assume the central orchestrator is a dedicated Saga Orchestrator Service, which
coordinates the flow and handles compensation in case of failure.
1. Saga Orchestrator
The Saga Orchestrator is responsible for initiating, coordinating, and ensuring the success of
the saga across the microservices. It will send commands to the respective services, receive
responses, and decide whether to proceed to the next service or trigger compensating actions if
any step fails.
Step 1: Create Order
● The Saga Orchestrator sends a CreateOrder command to the Order Service:
{
"command": "CreateOrder",
"orderId": 123,
"customerId": 456,
"amount": 100.00
}
● The Order Service successfully creates the order and returns an acknowledgment to
the Saga Orchestrator.
{
"status": "OrderCreated",
"orderId": 123
}
Step 2: Reserve Inventory
● The Saga Orchestrator sends a ReserveInventory command to the Inventory Service:
{
"command": "ReserveInventory",
"orderId": 123,
"productId": 789,
"quantity": 1
}
● The Inventory Service reserves the inventory and sends a response back to the Saga
Orchestrator.
{
"status": "InventoryReserved",
"orderId": 123
}
Step 3: Charge Payment
● The Saga Orchestrator now sends a command to the Payment Service to charge the
customer's credit card.
{
"command": "ChargePayment",
"orderId": 123,
"customerId": 456,
"amount": 100.00
}
● The Payment Service processes the payment and responds back to the Saga
Orchestrator with the result.
{
"status": "PaymentSucceeded",
"transactionId": "txn789",
"orderId": 123
}
Step 4: Ship Product
● The Saga Orchestrator sends a command to the Shipping Service to ship the product.
{
"command": "ShipProduct",
"orderId": 123,
"shippingAddress": "123 Main St, SomeCity, SomeCountry"
}
● The Shipping Service ships the product and sends a confirmation to the Saga
Orchestrator.
{
"status": "ProductShipped",
"orderId": 123,
"trackingNumber": "track987"
}
Step 5: Send Notification
● Finally, the Saga Orchestrator sends a command to the Notification Service to notify
the customer that the product has been shipped.
{
"command": "SendNotification",
"orderId": 123,
"status": "Your order has been shipped!"
}
● The Notification Service confirms:
{
"status": "NotificationSent",
"orderId": 123
}
Handling Failures with Compensating Actions
In the Orchestrated Saga Pattern, if any step fails, the Saga Orchestrator is responsible for
invoking compensating actions to undo the operations performed by the services up to that
point. For instance, if a step fails after payment has been successfully processed, the
orchestrator would initiate a compensating transaction.
Let's assume the Payment Service fails, and the payment is not processed successfully. Here’s
how the orchestrator handles it:
1. The Saga Orchestrator sends the ChargePayment command, but the Payment Service responds with a failure (PaymentFailed).
Payment Service Response (Failure):
{
"status": "PaymentFailed",
"orderId": 123,
"reason": "Insufficient funds"
}
2. Since the Payment Service failed, the Saga Orchestrator triggers compensating
actions to undo the successful steps up to that point.
Cancel Order: The Saga Orchestrator sends a command to the Order Service to cancel the
order.
{
"command": "CancelOrder",
"orderId": 123
}
Order Service Response:
{
"status": "OrderCancelled",
"orderId": 123
}
Release Inventory: The Saga Orchestrator sends a command to the Inventory Service to
release the reserved inventory.
{
"command": "ReleaseInventory",
"orderId": 123
}
Inventory Service Response:
{
"status": "InventoryReleased",
"orderId": 123
}
3. The saga ends, and the Saga Orchestrator can notify the customer that the order was
not successful.
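A minimal sketch of the orchestrator's control flow, expressed as plain Java; all client interfaces are hypothetical placeholders for however the orchestrator actually invokes each service (HTTP, messaging, etc.):
// Hypothetical service clients the orchestrator calls
interface OrderClient { void create(long orderId); void cancel(long orderId); }
interface InventoryClient { void reserve(long orderId); void release(long orderId); }
interface PaymentClient { void charge(long orderId); }
interface ShippingClient { void ship(long orderId); }

class SagaOrchestrator {
    private final OrderClient orders;
    private final InventoryClient inventory;
    private final PaymentClient payments;
    private final ShippingClient shipping;

    SagaOrchestrator(OrderClient o, InventoryClient i, PaymentClient p, ShippingClient s) {
        this.orders = o; this.inventory = i; this.payments = p; this.shipping = s;
    }

    void placeOrder(long orderId) {
        orders.create(orderId);
        try {
            inventory.reserve(orderId);
        } catch (RuntimeException e) {
            orders.cancel(orderId); // compensate step 1
            throw e;
        }
        try {
            payments.charge(orderId);
        } catch (RuntimeException e) {
            inventory.release(orderId); // compensate step 2
            orders.cancel(orderId);     // compensate step 1
            throw e;
        }
        shipping.ship(orderId); // further steps and compensations omitted for brevity
    }
}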
Benefits of the Orchestrated Saga Pattern:
1. Centralized Control: The Saga Orchestrator gives you a centralized point of control,
which simplifies tracking the saga's state and making decisions about compensating
actions.
2. Consistency and Control: Since the orchestrator handles the flow and compensation,
you have more control over the transaction. It can ensure that all steps are completed
successfully or that appropriate rollback actions are taken in case of failure.
3. Error Handling: The orchestrator is responsible for ensuring that errors are handled
gracefully by triggering compensating transactions, reducing the risk of inconsistent
states across services.
Conclusion
The Orchestrated Saga Pattern is a central coordination mechanism for handling distributed
transactions in microservices architectures. By using a central orchestrator, this approach
ensures that the saga's flow is controlled, compensating actions can be triggered in case of
failure, and consistency is maintained across distributed services. While it offers better control
over transaction flow compared to choreography, it introduces complexity in terms of
maintaining the orchestrator and managing failure scenarios.
Backends for Frontends (BFF) Pattern
The Backends for Frontends (BFF) pattern is a microservices architecture pattern used to
optimize the way data is presented to different types of front-end clients (e.g., mobile apps, web
applications, and even desktop clients). The BFF pattern introduces a dedicated backend
service that is tailored to the specific needs of each type of frontend.
In traditional systems, the front-end client directly communicates with the backend services
(e.g., through APIs or microservices). However, different front-end clients may have vastly
different requirements in terms of data formatting, aggregation, and the number of calls required
to render a page or screen. The BFF pattern solves this issue by having a separate backend
service for each type of frontend, which optimizes the communication between the backend and
the client.
Key Points:
1. Tailored Backend for Each Frontend: Instead of having one generic backend for all
clients, each type of frontend (e.g., mobile, web) has a backend service optimized for it.
2. Client-Specific API Layer: The BFF acts as an intermediary between the frontend and
the backend services, providing client-specific APIs.
3. API Aggregation: The BFF can aggregate responses from multiple backend services
and deliver them as a single response to the frontend, reducing the need for multiple
round trips from the frontend to the backend.
4. Decoupling: The pattern decouples the frontends from the backend, enabling changes
in the frontend without affecting backend services and vice versa.
Example Use Case:
Consider an e-commerce application with three types of clients: a Mobile App, a Web App,
and a Desktop App. Each of these frontends may require different sets of data from the
backend services, in different formats or structures, and with different performance
requirements.
Without BFF:
● Mobile App and Web App might both communicate directly with the same backend
services (e.g., product service, order service, user service).
● The data required for these two frontends might differ significantly. For example, the
Mobile App may need optimized data for low-bandwidth environments, while the Web
App may need more detailed data (e.g., images, product recommendations).
● The frontend has to deal with multiple API calls to different services and manage
complex logic for adapting the data to its needs.
With BFF:
The BFF pattern introduces a backend service layer between the frontend and the backend
services. Each frontend gets its own Backend for Frontend (BFF) that optimizes data retrieval
and aggregation for that frontend's needs.
1. Mobile BFF:
● The Mobile BFF can aggregate and optimize the data for mobile use, making fewer API
calls to backend services and compressing the data to minimize the payload size. It can
also provide tailored data such as a limited set of product images, user information, or
summaries for better performance on mobile devices with limited bandwidth.
2. Web BFF:
● The Web BFF can aggregate data with more details, like displaying full-sized images,
product recommendations, user data, and other information that is more appropriate for
the richer screen and higher bandwidth available on a web platform.
3. Desktop BFF:
● Similarly, the Desktop BFF could optimize the data for desktop users, who might need
more complex features and richer data but have the resources (screen size, bandwidth)
to handle it.
How It Works:
Each BFF will:
1. Aggregate data from various backend services (e.g., product service, payment service,
order service).
2. Tailor the response to meet the specific requirements of each frontend (e.g., mobile,
web).
3. Provide client-specific APIs that are simple, fast, and easy for the frontend to consume.
4. Act as a single point of contact for the frontend, meaning that the frontend doesn't have
to manage multiple API calls and complex data transformations.
Let’s use the e-commerce example again to illustrate the workflow of the BFF pattern for
different frontends:
1. User Opens Web App:
○ The Web Frontend (Web App) makes a request to the Web BFF (backend
service).
○ The Web BFF calls multiple backend services to aggregate the required data
(e.g., product details, pricing, user data).
○ The Web BFF aggregates all the data and formats it according to the needs of
the web application (e.g., full-size product images, detailed pricing,
recommended products).
○ The Web BFF sends the aggregated data back to the Web Frontend.
2. User Opens Mobile App:
○ The Mobile Frontend (Mobile App) makes a request to the Mobile BFF.
○ The Mobile BFF calls multiple backend services but fetches optimized data, such
as smaller image sizes and condensed product details, and reduces the number
of API calls (e.g., via caching).
○ The Mobile BFF sends the aggregated and optimized data to the Mobile
Frontend for a faster user experience.
As you can see, the Mobile BFF might return fewer details, optimized for mobile performance
(e.g., compressed image sizes), while the Web BFF returns more detailed data suited for a
desktop or web environment.
Benefits of the BFF Pattern:
1. Tailored Experience: The BFF pattern enables you to tailor the backend specifically for
each type of frontend, optimizing both the performance and the data payload.
2. Simplified Frontend Logic: The frontend doesn't need to manage multiple backend
service calls or complex data transformations. It can simply call the appropriate BFF and
receive exactly the data it needs.
3. Optimized for Each Client: Each BFF is optimized for the specific frontend, ensuring
that the mobile app and web app perform efficiently, even with different data and user
interaction models.
4. Decouples Frontend and Backend: The frontends are decoupled from the backend
services. Changes in the backend don’t necessarily require changes in the frontend, as
long as the BFF interfaces remain the same.
5. Simplified API Management: The BFF pattern reduces the need for different frontend
apps to directly deal with multiple backend services. The BFF acts as a single API
gateway for each frontend, which can simplify API management.
Challenges and Considerations:
1. Increased Complexity: While the BFF pattern decouples the frontend from the
backend, it introduces an additional layer (BFF services), which may increase the
complexity of the system.
2. Duplication of Business Logic: Since each BFF is designed for a specific frontend,
there may be some duplication of logic between the BFFs (e.g., aggregating data from
backend services), which might lead to maintenance challenges.
3. Overhead of Managing Multiple BFFs: You need to maintain separate BFF services for
each frontend (e.g., mobile, web, desktop), which can increase the operational
overhead.
Conclusion:
The Backends for Frontends (BFF) pattern is an effective way to optimize communication
between multiple types of front-end clients and backend services. By creating a dedicated
backend for each frontend (mobile, web, desktop), the BFF pattern ensures that each client
receives only the data it needs in the most optimized form. This approach simplifies frontend
logic, reduces redundant calls to backend services, and improves performance for end-users.
However, the BFF pattern also introduces complexity in terms of managing and maintaining
multiple backend services, and developers must carefully consider the trade-offs before
adopting this pattern.
Sidecar Pattern
A sidecar is a helper component deployed alongside the main service, for example as a
second container in the same Kubernetes pod. It extends and enhances the functionality of
the main service without modifying its core logic, and it is typically responsible for tasks that
are common to many microservices but not directly related to the business functionality of
the service itself.
Key Points:
1. Transparency: The main service may not be aware of the sidecar. The sidecar operates
transparently, often intercepting communication (such as HTTP requests or network
traffic) to handle cross-cutting concerns.
2. Reusable and Scalable: Multiple instances of sidecars can be deployed alongside the
main services, enabling shared functionality across multiple services. They are highly
reusable and can be independently scaled based on need.
3. Language Agnostic: The sidecar can be written in any language, as it interacts with the
main service via standardized protocols (e.g., HTTP, gRPC).
Common cross-cutting concerns handled by sidecars include:
● Logging: Aggregating logs from different services and forwarding them to a centralized
logging system.
● Metrics and Monitoring: Collecting service-level metrics and sending them to a
monitoring system (like Prometheus).
● Security: Handling authentication, encryption, and authorization. For example, a sidecar
could manage SSL termination or implement mutual TLS between services.
● Communication: Managing inter-service communication, such as API gateway routing
or service discovery.
● Configuration Management: Handling dynamic configuration for the service, such as
syncing configuration files or environment variables.
Let's take the example of an E-commerce Application with two microservices: Order Service
and Inventory Service. We want to apply the Sidecar Pattern for logging and monitoring.
1. Order Service:
This is the main microservice that handles order creation, updates, and status tracking. It does
not need to concern itself with cross-cutting concerns like logging or metrics. These concerns
are delegated to the sidecar.
2. Inventory Service:
This microservice handles inventory management, including stock level updates and product
availability.
For both the Order Service and the Inventory Service, we can use a logging sidecar to capture
logs and forward them to a centralized logging system (e.g., via Fluentd or Logstash to
Elasticsearch).
○ A logging sidecar can run alongside both the Order Service and Inventory
Service.
○ It collects logs from each service, processes them (e.g., adding timestamps and
the service name), and forwards the logs to an external logging system.
For example, in a Kubernetes deployment, you can define two containers in the same pod:
apiVersion: v1
kind: Pod
metadata:
  name: order-service-pod
spec:
  containers:
    - name: order-service
      image: order-service-image
      ports:
        - containerPort: 8080
      volumeMounts:
        # The main container writes its log files to the shared volume
        # so the sidecar can read and forward them.
        - mountPath: /var/log
          name: logs
    - name: logging-sidecar
      image: fluentd-image
      volumeMounts:
        - mountPath: /var/log
          name: logs
  volumes:
    - name: logs
      emptyDir: {}
The order-service container generates logs, and the logging-sidecar container forwards
them to a logging system (e.g., Fluentd aggregates the logs and sends them to
Elasticsearch). Similarly, the Inventory Service can have its own logging sidecar.
Prometheus Sidecar: the main service exposes service-specific metrics such as request count,
response times, and error rates on a /metrics endpoint; a Prometheus sidecar scrapes and
aggregates them.
In Kubernetes, the Order Service pod might include a Prometheus sidecar container
alongside the Order Service container:
apiVersion: v1
kind: Pod
metadata:
  name: order-service-pod
spec:
  containers:
    - name: order-service
      image: order-service-image
      ports:
        - containerPort: 8080
    - name: prometheus-sidecar
      image: prom/prometheus
      ports:
        - containerPort: 9090
      args:
        # Prometheus has no --scrape-uri flag; scrape targets live in a
        # config file. Containers in a pod share localhost, so the config
        # (from the ConfigMap below, contents not shown) would target
        # localhost:8080/metrics.
        - "--config.file=/etc/prometheus/prometheus.yml"
      volumeMounts:
        - mountPath: /etc/prometheus
          name: prometheus-config
  volumes:
    - name: prometheus-config
      configMap:
        name: order-service-prometheus-config
This sidecar container (running Prometheus) scrapes metrics from the Order Service via its
/metrics endpoint and aggregates them; a dashboard tool such as Grafana can then visualize
the data.
Benefits of the Sidecar Pattern:
1. Separation of Concerns: The sidecar keeps the main service's codebase clean by
delegating non-business concerns (such as logging, monitoring, and security) to a
separate service.
2. Reusability: A single sidecar can be reused across multiple services. For instance, the
same logging or monitoring sidecar can be used with many services.
3. Language Agnostic: The sidecar can be implemented in a different language from the
main service. It can interact with the main service via standardized protocols (e.g., HTTP,
TCP).
4. Scalability and Flexibility: The sidecar can be scaled independently of the main
service. If more logging capacity is needed, you can scale the sidecar up without
affecting the main service.
5. Easy Integration: Sidecars can be added incrementally to existing services, helping to
introduce capabilities like logging, monitoring, or security in a non-disruptive way.
Challenges and Considerations:
1. Overhead: Running a sidecar container alongside each microservice introduces some
overhead in terms of resource consumption (CPU, memory, etc.). This might be a
concern in resource-constrained environments.
2. Complexity: Managing sidecars across multiple services can increase the complexity of
the overall system. You need to ensure that sidecar containers are correctly configured
and maintained.
3. Dependency on Sidecar: The main service often depends on the sidecar for critical
functions (e.g., logging, monitoring). If the sidecar fails, it could affect the operation of the
main service, especially in cases like logging or metrics collection.
4. Networking Overhead: Some sidecars (e.g., proxy sidecars) can introduce networking
overhead due to the additional hop in communication between services.
Example Scenario:
A Proxy Sidecar is a common use case for sidecar patterns. It acts as a reverse proxy that
handles traffic management, such as load balancing, retries, and circuit-breaking, and operates
alongside the main service.
apiVersion: v1
kind: Pod
metadata:
  name: order-service-with-proxy
spec:
  containers:
    - name: order-service
      image: order-service-image
      ports:
        - containerPort: 8080
    - name: envoy-proxy
      image: envoyproxy/envoy
      ports:
        - containerPort: 10000
      # In practice Envoy also needs a bootstrap configuration,
      # typically mounted from a ConfigMap (not shown here).
In this case, the envoy-proxy sidecar provides load balancing, circuit breaking, and
observability for the Order Service without modifying its core logic.
Conclusion:
The Sidecar Pattern is a powerful architectural pattern that simplifies the management of
cross-cutting concerns in microservices architectures. By offloading common functionality (such
as logging, monitoring, security, or communication) to sidecars, the main services can remain
focused on their business logic. This pattern promotes separation of concerns, reusability, and
scalability, while also enabling flexibility in managing auxiliary functionalities independently of
the main application logic. However, it introduces some overhead and complexity, especially in
terms of resource consumption and managing sidecars across multiple services.