Master Microservices With Spring, Docker, Kubernetes

This document provides an overview of microservices architecture using Spring, Docker, and Kubernetes. It discusses the evolution from monolithic to SOA to microservices architectures. Key topics covered include deep dives into microservices architecture and patterns, containerization with Docker, and container orchestration with Kubernetes. Distributed tracing, logging, monitoring, and resilience patterns for microservices are also examined.


MICROSERVICES

USING SPRING, DOCKER, KUBERNETES


MICROSERVICES WITH SPRING, DOCKER, KUBERNETES
WHAT WE COVER IN THIS COURSE

• Cloud native apps
• Role of Spring Cloud, Eureka & Config Server
• Deep dive on microservices architecture & patterns
• Resilience4j
• Container orchestration using Kubernetes
• Routing & handling cross-cutting concerns
• Containerization using Docker
• Distributed tracing, log aggregation & monitoring
MONOLITHIC VS SOA VS MICROSERVICES
EVOLUTION OF MICROSERVICES

[Diagram] MONOLITHIC: UI, business logic and data access layer on a single server with one DB. SOA: UI connected through an Enterprise Service Bus to Service 1 and Service 2, sharing a DB. MICROSERVICES: multiple microservices deployed in separate servers/containers, each with its own DB.
MONOLITHIC ARCHITECTURE
SAMPLE BANK APPLICATION

[Diagram] The Accounts, Loans, Cards and UI/UX dev teams all commit to a single code repo; continuous integration produces a single WAR/EAR (UI, Spring MVC, Spring JPA) deployed to one application/web server (Tomcat, JBoss, WebLogic, WebSphere) backed by a single database.

Monolithic architectures are synonymous with n-tier applications. All of the software's parts are unified and all of its functions are managed in one server.

MONOLITHIC ARCHITECTURE
PROS & CONS

MONOLITHIC

Pros

• Simpler development and deployment for smaller teams and applications
• Fewer cross-cutting concerns
• Better performance due to no network latency

Cons

• Difficult to adopt new technologies
• Limited agility
• Single code base that is difficult to maintain
• Not fault tolerant
• Tiny updates and feature development always need a full deployment

[Diagram] UI, business logic and data access layer on a single server with one DB.
SOA ARCHITECTURE
SAMPLE BANK APPLICATION

[Diagram] The UI/UX dev team maintains a separate UI code repo, while the Accounts, Loans and Cards dev teams share a backend services repo. REST/SOAP services run on an app/web server (Spring JPA) behind an Enterprise Service Bus, backed by a shared database.

SOA services are exposed with a standard protocol, such as SOAP, and are consumed/reused by other services, leveraging messaging middleware.
SOA ARCHITECTURE
PROS & CONS
SOA

Pros

• Reusability of services
• Better maintainability
• Higher reliability
• Parallel development

Cons

• Complex management
• High investment costs
• Extra overhead

[Diagram] UI connected through an Enterprise Service Bus to Service 1 and Service 2, sharing a DB.
MICROSERVICES ARCHITECTURE
SAMPLE BANK APPLICATION

[Diagram] The UI web app (maintained by the UI/UX dev team in its own code repo) invokes all the backend logic through REST API calls. The Accounts, Loans and Cards dev teams each own a separate microservice, code repo and database (Accounts DB, Loans DB, Cards DB).

MICROSERVICES ARCHITECTURE
PROS & CONS

MICROSERVICES

Pros

• Easy to develop, test, and deploy
• Increased agility
• Ability to scale horizontally
• Parallel development

Cons

• Complexity
• Infrastructure overhead
• Security concerns

[Diagram] Multiple microservices deployed in separate servers/containers, each with its own DB.
MONOLITHIC VS SOA VS MICROSERVICES
COMPARISON

MONOLITHIC: single unit · SOA: coarse-grained · MICROSERVICES: fine-grained


MONOLITHIC VS SOA VS MICROSERVICES
COMPARISON

[Table] Columns: MONOLITHIC, SOA, MICROSERVICES. Rows: Parallel development · Agility · Scalability · Usability · Complexity & operational overhead · Security concerns & performance. (The original slide marks which architecture exhibits each feature; those marks are not recoverable from the extracted text.)


WHAT ARE MICROSERVICES?
DEFINITION OF MICROSERVICES

“Microservices is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, built around business capabilities and independently deployable by fully automated deployment machinery.”

- From the article by James Lewis and Martin Fowler


WHY SPRING FOR MICROSERVICES?
WHY SPRING IS THE BEST FRAMEWORK TO BUILD MICROSERVICES

Spring is the most popular development framework for building Java-based web applications & services. From day one, Spring has encouraged building code on principles like loose coupling via dependency injection. Over the years, the Spring framework has kept evolving to stay relevant in the market.

1. Building small services using Spring Boot is super easy & fast
2. Spring Cloud provides tools for developers to quickly build some of the common patterns in microservices
3. Provides production-ready features like metrics, security, embedded servers
4. Spring Cloud makes deployment of microservices to the cloud easy
5. There is a large community of Spring developers who can help & adapt easily
WHAT IS SPRING BOOT?
USING SPRING BOOT FOR MICROSERVICES DEVELOPMENT

Spring Boot makes it easy to create stand-alone, production-grade Spring based applications that you can "just run".

• STARTER PROJECTS: starters are a set of convenient dependency descriptors that you can use to bootstrap your Spring apps.
• AUTO CONFIGURATION: automatically configures Spring and 3rd-party libraries/beans whenever possible.
• NO NEED TO DEPLOY INTO A SERVER: embedded Tomcat, Jetty or Undertow servers are available, and deployment happens directly.
• PROD READY FEATURES: inbuilt support for production-ready features such as metrics, health checks, and externalized configuration.
• STAND ALONE SPRING APPS: creating standalone Spring applications/REST services is super quick & easy.
• SIMPLE CONFIGURATIONS: provides many annotations to do simple configurations, with no requirement for XML configuration.
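The "auto configuration" point above can be illustrated with a plain-Java sketch of the underlying idea: a default bean is applied only when the user has not supplied their own, so auto-configuration always backs off to user code. This is not Spring's actual implementation, just the conditional-on-missing-bean pattern in miniature.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Toy illustration of Spring Boot's "auto-configuration backs off" idea.
// Not Spring code: a sketch of the conditional-on-missing-bean pattern.
public class AutoConfigSketch {
    private final Map<String, Object> beans = new HashMap<>();

    // User-defined bean, analogous to a @Bean method in application code.
    public void register(String name, Object bean) {
        beans.put(name, bean);
    }

    // Mirrors @ConditionalOnMissingBean: apply the default only if absent.
    public void autoConfigure(String name, Supplier<Object> defaultBean) {
        beans.computeIfAbsent(name, k -> defaultBean.get());
    }

    public Object get(String name) {
        return beans.get(name);
    }

    public static void main(String[] args) {
        AutoConfigSketch ctx = new AutoConfigSketch();
        ctx.register("dataSource", "user-defined-datasource");
        ctx.autoConfigure("dataSource", () -> "default-embedded-datasource");
        ctx.autoConfigure("objectMapper", () -> "default-object-mapper");
        System.out.println(ctx.get("dataSource"));   // user bean wins
        System.out.println(ctx.get("objectMapper")); // default applied
    }
}
```

In real Spring Boot the same decision is driven by conditions evaluated against the application context at startup; the sketch only shows the ordering guarantee.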
WHAT IS SPRING CLOUD?
USING SPRING CLOUD FOR MICROSERVICES DEVELOPMENT

Spring Cloud provides tools for developers to quickly build some of the common patterns of microservices:

• LOAD BALANCING: efficiently distributes network traffic to multiple backend servers or a server pool.
• ROUTING & TRACING: makes sure that all calls to your microservices go through a single "front door" before the targeted service is invoked, and those calls are traced.
• SERVICE REGISTRATION & DISCOVERY: new services are registered, and consumers can later invoke them through a logical name rather than a physical location.
• SPRING CLOUD SECURITY: provides features related to token-based security in Spring Boot applications/microservices.
• SPRING CLOUD CONFIG: ensures that no matter how many microservice instances you bring up, they'll always have the same configuration.
• SPRING CLOUD NETFLIX: incorporates battle-tested Netflix components including Service Discovery (Eureka), Circuit Breaker (Hystrix), Intelligent Routing (Zuul) and client-side load balancing (Ribbon).
CHALLENGE 1 WITH MICROSERVICES
RIGHT SIZING & IDENTIFYING SERVICE BOUNDARIES

• One of the most challenging aspects of building a successful microservices system is identifying proper microservice boundaries and defining the size of each microservice.

• Below are the most commonly followed approaches in the industry:

 Domain-Driven Sizing - Since many of our modifications or enhancements are driven by business needs, we can size/define the boundaries of our microservices so that they are closely aligned with Domain-Driven Design & business capabilities. But this process takes a lot of time and needs good domain knowledge.

 Event Storming Sizing - Conduct an interactive, fun session among the various stakeholders to identify the list of important events in the system, like 'Completed Payment', 'Search for a Product' etc. Based on the events we can identify 'Commands' and 'Reactions' and try to group them into domain-driven services.

Reference for an Event Storming session: https://www.lucidchart.com/blog/ddd-event-storming


RIGHT SIZING MICROSERVICES
IDENTIFYING SERVICE BOUNDARIES

Now let's take the example of a bank application that needs to be migrated/built based on a microservices architecture, and try to size the services.

[Diagram] Three candidate sizings:
• Saving Account & Trading Account together, Cards & Loans together - NOT correct sizing, as we can see independent modules like Cards & Loans clubbed together.
• Separate services for Saving Account, Trading Account, Debit Card, Credit Card, Home Loan, Vehicle Loan, Personal Loan - NOT correct sizing, as we can see too many services under Loans & Cards.
• Saving Account, Trading Account, Cards, Loans as separate services - THIS MIGHT BE THE MOST REASONABLE CORRECT SIZING, as all independent modules have a separate service while staying loosely coupled & highly cohesive.
MONOLITHIC TO MICROSERVICES
MIGRATION USE CASE

Now let's take a scenario where an e-commerce startup follows a monolithic architecture, and try to understand the challenges with it.

[Diagram] Client apps (mobile, web app, website) call the APIs of a single monolithic server process containing the Identity, Catalog, Orders, Invoices, Sales and Marketing modules, all backed by one relational database.
MONOLITHIC TO MICROSERVICES
MIGRATION USE CASE

Problems the e-commerce team is facing due to the traditional monolithic design:

Initial days

• It is straightforward to build, test, deploy, troubleshoot and scale during the launch and while the team size is small.

Later, the app/site is a super hit and starts evolving a lot. Now the team has the problems below:

• The app has become so overwhelmingly complicated that no single person understands it.
• You fear making changes - each change has unintended and costly side effects.
• New features/fixes become tricky, time-consuming, and expensive to implement.
• Each release, however small, requires a full deployment of the entire application.
• One unstable component can crash the entire system.
• New technologies and frameworks aren't an option.
• It's difficult to maintain small isolated teams and implement agile delivery methodologies.
MONOLITHIC TO MICROSERVICES
MIGRATION USE CASE

So the e-commerce company adopted the cloud-native design below, leveraging a microservices architecture to make life easier and reduce the risk of continuous changes.

[Diagram] Client apps (mobile, website with Angular/React etc.) call an API gateway on a Docker host, which routes to the Identity, Catalog, Order, Invoices, Sales and Marketing microservices. Identity and Catalog each use their own RDBMS, Sales and Marketing use Redis caches, and the services communicate asynchronously over an event bus.
CHALLENGE 2 WITH MICROSERVICES
DEPLOYMENT, PORTABILITY & SCALABILITY

DEPLOYMENT
How do we deploy hundreds of tiny microservices with less effort & cost?

PORTABILITY
How do we move our hundreds of microservices across environments with less effort, configuration & cost?

SCALABILITY
How do we scale our applications on the fly based on demand, with minimum effort & cost?
CONTAINERIZATION TECHNOLOGY
USING DOCKER
[Diagram] VIRTUAL MACHINES: each VM (VM1-VM3) runs an Accounts/Loans/Cards service with its own bins/libs and guest OS on top of a hypervisor and the server's physical hardware. CONTAINERS: each container runs a service with its bins/libs directly on a container/Docker engine over the host operating system and physical hardware.

Main difference between virtual machines and containers: containers need neither a guest OS nor a hypervisor to assign resources; instead, they use the container engine.
INTRO TO DOCKER
WHAT ARE CONTAINERS & DOCKER?

What is a container?

A container is a loosely isolated environment that allows us to build and run software packages. These software packages include the code and all dependencies needed to run applications quickly and reliably in any computing environment. We call these packages container images.

What is software containerization?

Software containerization is an OS virtualization method used to deploy and run containers without using a virtual machine (VM). Containers can run on physical hardware, in the cloud, on VMs, and across multiple OSs.

What is Docker?

Docker is a tool that builds on the idea of isolated resources to let applications be packaged with all their dependencies installed and run wherever wanted.
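As a minimal sketch of what such packaging looks like for a Spring Boot service, a Dockerfile can copy the fat JAR into an image with a Java runtime. The jar name and base image below are illustrative assumptions, not from the course:

```dockerfile
# Illustrative Dockerfile for a Spring Boot fat JAR (names are examples)
FROM eclipse-temurin:17-jre
COPY target/accounts-service.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Building with `docker build -t accounts-service .` and running with `docker run -p 8080:8080 accounts-service` would then start the service identically on any host with a Docker engine.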
INTRO TO DOCKER
DOCKER ARCHITECTURE
[Diagram] DOCKER CLIENT: we issue commands to the Docker daemon using either the CLI or the remote APIs. DOCKER HOST/SERVER: the Docker daemon creates and manages images (Image of App1, Image of App2) and containers (Container 1-3). DOCKER REGISTRY: images are maintained in and pulled from Docker Hub or private registries.
CLOUD-NATIVE APPLICATIONS
INTRODUCTION

• Cloud-native applications are a collection of small, independent, and loosely coupled services. They are
designed to deliver well-recognized business value, like the ability to rapidly incorporate user feedback for
continuous improvement. Its goal is to deliver apps users want at the pace a business needs.

• If an app is "cloud-native," it’s specifically designed to provide a consistent development and automated management experience across private, public, and hybrid clouds. So it’s about how applications are created and deployed, not where.

• When creating cloud-native applications, the developers divide the functions into microservices, with
scalable components such as containers in order to be able to run on several servers. These services are
managed by virtual infrastructures through DevOps processes with continuous delivery workflows. It's
important to understand that these types of applications do not require any change or conversion to work
in the cloud and are designed to deal with the unavailability of downstream components.
CLOUD-NATIVE APPLICATIONS
PRINCIPLES OF CLOUD-NATIVE APPLICATIONS

• MICROSERVICES: a microservices architecture breaks apps down into their smallest components, independent from each other.
• CONTAINERS: containers allow apps to be packaged and isolated with their entire runtime environment, making it easy to move them between environments while retaining full functionality.
• DEVOPS: DevOps is an approach to culture, automation, and platform design intended to deliver increased business value and responsiveness.
• CONTINUOUS DELIVERY: a software development practice in which the process of delivering software is automated to allow short-cycle deliveries into a production environment.

To build and develop cloud-native applications (microservices), we need to follow the best practices mentioned in the twelve-factor app methodology (https://12factor.net/)
CLOUD-NATIVE APPLICATIONS
DIFFERENCE B/W CLOUD-NATIVE & TRADITIONAL APPS

CLOUD NATIVE APPLICATIONS vs TRADITIONAL ENTERPRISE APPLICATIONS

• Predictable behavior vs unpredictable behavior
• OS abstraction vs OS dependent
• Right-sized capacity & independent vs oversized capacity & dependent
• Continuous delivery vs waterfall development
• Rapid recovery & automated scalability vs slow recovery

TWELVE FACTOR APP
BEST PRACTICES

[Diagram] The twelve factors: 1. Codebase · 2. Dependencies · 3. Config · 4. Backing Services · 5. Build, Release, Run · 6. Processes · 7. Port Binding · 8. Concurrency · 9. Disposability · 10. Dev/Prod Parity · 11. Logs · 12. Admin Processes
TWELVE FACTOR APP
1. CODEBASE

• Each microservice should have a single codebase, managed in source control. The codebase can be deployed to multiple environments such as development, testing, staging, production, and more, but is not shared with any other microservice.

[Diagram] A single codebase deployed to the Development, Testing and Production environments.
TWELVE FACTOR APP
2. DEPENDENCIES

• Explicitly declare the dependencies your application uses through build tools such as Maven or Gradle (Java). Third-party JAR dependencies should be declared with their specific version numbers. This allows your microservice to always be built using the same versions of libraries.

• A twelve-factor app never relies on the implicit existence of system-wide packages.

[Diagram] Maven flow: 1) Maven reads and builds the pom.xml file; 2) checks if the dependent JAR/library is in the local .m2 repository; 3) if not, searches the Maven central repository; 4) downloads the JAR; 5) puts the downloaded JAR in the local repository; 6) copies the JAR files to the target folder.
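The "declare with a specific version" bullet can be sketched as a Maven coordinate in pom.xml. The artifact and version below are only an example of pinning, not part of the course material:

```xml
<!-- Illustrative only: an explicit version pinned in pom.xml so every
     build resolves exactly the same third-party artifact -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.14.0</version>
</dependency>
```

With the version stated explicitly, every machine that runs the Maven flow above resolves the identical JAR, which is the reproducibility this factor asks for.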
TWELVE FACTOR APP
3. CONFIG

• Store environment-specific configuration independently from your code. Never embed configuration in your source code; instead, keep your configuration completely separated from your deployable microservice. If we keep the configuration packaged within the microservice, we’ll need to redeploy each of the hundred instances to make a change.

[Diagram] One codebase, with separate Development, Testing and Production configs applied in the corresponding environments.
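The separation above can be sketched in plain Java: the same build resolves its settings from the environment at startup instead of carrying them in code. The `DB_URL` key and the JDBC values here are invented for illustration:

```java
import java.util.Map;

// Twelve-factor config sketch: one artifact, environment-specific values
// supplied at startup. The DB_URL key and URLs below are illustrative only.
public class EnvConfig {
    // Resolve a key from an environment map, falling back to a default.
    public static String resolve(Map<String, String> env, String key, String fallback) {
        String v = env.get(key);
        return (v == null || v.isBlank()) ? fallback : v;
    }

    public static void main(String[] args) {
        // In a real service this map would be System.getenv().
        Map<String, String> prodEnv = Map.of("DB_URL", "jdbc:postgresql://prod-db:5432/bank");
        System.out.println(resolve(prodEnv, "DB_URL", "jdbc:h2:mem:devdb")); // prod value
        System.out.println(resolve(Map.of(), "DB_URL", "jdbc:h2:mem:devdb")); // dev fallback
    }
}
```

Because only the environment changes between deployments, the same Docker image can move from development to production untouched.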


TWELVE FACTOR APP
4. BACKING SERVICES

• The backing-services best practice states that a microservice deployment should be able to swap a local connection for a third-party one without any changes to the application code.

• In the example below, a local DB can be swapped easily for a third-party DB (here an AWS DB) without any code changes.

[Diagram] The microservice deployment points at the local DB, an AWS DB or AWS S3 purely via a URL change.
TWELVE FACTOR APP
5. BUILD, RELEASE, RUN

• Keep the build, release, and run stages of deploying your application completely separated. We should be able to build microservices that are independent of the environment in which they run.

[Diagram] The codebase feeds the build stage; the build output plus configuration feeds the release stage, which is then run.
TWELVE FACTOR APP
6. PROCESSES

• Execute the app as one or more stateless processes. Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.

• Microservices can be killed and replaced at any time without the fear that the loss of a service instance will result in data loss.

[Diagram] The loans microservice stores its data in a SQL or NoSQL DB rather than inside the process.
TWELVE FACTOR APP
7. PORT BINDING

• Web apps are sometimes executed inside a webserver container. For example, PHP apps might run as a module inside Apache HTTPD, or Java apps might run inside Tomcat. But each microservice should be self-contained, with its interfaces and functionality exposed on its own port. Doing so provides isolation from other microservices.

• We will develop applications using Spring Boot. Spring Boot, apart from many other benefits, provides us with a default embedded application server. Hence, the JAR we generate using Maven is fully capable of executing in any environment just by having a compatible Java runtime.
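The self-contained port-binding idea can be sketched with only the JDK's built-in HTTP server, as a stand-in for Spring Boot's embedded Tomcat. The `/health` path and the use of an OS-assigned port are illustrative choices, not from the course:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Port-binding sketch: the process embeds its own HTTP server and exposes
// itself on its own port, needing no external server to be deployed into.
public class PortBindingSketch {
    // Start a tiny service; port 0 asks the OS for any free port.
    public static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "UP".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start();
        int port = server.getAddress().getPort();
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/health")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // prints UP
        server.stop(0);
    }
}
```

A Spring Boot fat JAR does the same thing at a larger scale: `java -jar app.jar` binds the embedded server to `server.port` with no Tomcat installation required.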
TWELVE FACTOR APP
8. CONCURRENCY

• Services scale out across a large number of small identical processes (copies), as opposed to scaling up a single large instance on the most powerful machine available.

• Vertical scaling (scale up) means increasing the hardware (CPU, RAM). Horizontal scaling (scale out) means adding more instances of the application. When you need to scale, launch more microservice instances: scale out, not up.

[Diagram] Scale up: one instance grows from 1 CPU/1 GB RAM to 2 CPU/2 GB RAM to 4 CPU/4 GB RAM. Scale out: add more 1 CPU/1 GB RAM instances.

TWELVE FACTOR APP
9. DISPOSABILITY

• Service instances should be disposable, favoring fast startups to increase scalability opportunities and graceful shutdowns to leave the system in a correct state. Docker containers along with an orchestrator inherently satisfy this requirement.

• For example, if one of the instances of a microservice is failing because of a failure in the underlying hardware, we can shut down that instance without affecting other microservices and start another one somewhere else if needed.
TWELVE FACTOR APP
10. DEV/PROD PARITY

• Keep environments across the application lifecycle as similar as possible, avoiding costly shortcuts. Here, the adoption of containers can contribute greatly by promoting the same execution environment.

• As soon as code is committed, it should be tested and then promoted as quickly as possible from development all the way to production. This guideline is essential if we want to avoid deployment errors. Having similar development and production environments lets us control all the possible scenarios we might face while deploying and executing our application.
TWELVE FACTOR APP
11. LOGS

• Treat logs generated by microservices as event streams. As logs are written out, they should be managed by tools such as Logstash (https://www.elastic.co/products/logstash) that collect the logs and write them to a central location.

• A microservice should never be concerned about the mechanics of how this happens; it only needs to focus on writing log entries to stdout. We will discuss how to provide autoconfiguration for sending these logs to the ELK stack (Elasticsearch, Logstash and Kibana) in the coming sections.

[Diagram] Each microservice writes a log file; Logstash ships the logs to Elasticsearch, and Kibana visualizes them.

TWELVE FACTOR APP
12. ADMIN PROCESSES

• Run administrative/management tasks as one-off processes. Tasks can include data cleanup and pulling analytics for a report. Tools executing these tasks should be invoked from the production environment, but separately from the application.

• Developers often have to run administrative tasks for their microservices, such as data migrations and clean-up activities. These tasks should never be ad hoc; instead, they should be done via scripts that are managed and maintained through the source code repository. These scripts should be repeatable and unchanging across each environment they are run against. It is important to define up front the types of tasks our microservices need; if multiple microservices carry such scripts, we can then execute all of the administrative tasks without doing them manually.
CHALLENGE 3 WITH MICROSERVICES
CONFIGURATION MANAGEMENT

SEPARATION OF CONFIGS/PROPERTIES
How do we separate configurations/properties from the microservices so that the same Docker image can be deployed in multiple environments?

INJECT CONFIGS/PROPERTIES
How do we inject the configurations/properties a microservice needs during startup of the service?

MAINTAIN CONFIGS/PROPERTIES
How do we maintain configurations/properties in a centralized repository, along with versioning of them?
CONFIGURATION MANAGEMENT
ARCHITECTURE INSIDE MICROSERVICES

[Diagram] The Accounts, Loans and Cards microservices load their configurations during startup by connecting to a configuration management service; the configuration service loads all configurations by connecting to a central repository. The most commonly used central repositories are a database, Git, or the file system.
SPRING CLOUD CONFIG
FOR CONFIGURATION MANAGEMENT IN MICROSERVICES

• Spring Cloud Config provides server- and client-side support for externalized configuration in a distributed system. With the Config Server, you have a central place to manage external properties for applications across all environments.

[Diagram] The configuration management service supplies the Development, Testing and Production configs for one codebase deployed across those environments.


SPRING CLOUD CONFIG
FOR CONFIGURATION MANAGEMENT IN MICROSERVICES

Spring Cloud Config Server features:

• HTTP, resource-based API for external configuration (name-value pairs, or equivalent YAML content)
• Encrypt and decrypt property values
• Easily embeddable in a Spring Boot application using @EnableConfigServer

Config Client features (for microservices):

• Bind to the Config Server and initialize the Spring Environment with remote property sources
• Encrypt and decrypt property values
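As a hedged sketch of how these two sides are typically wired in recent Spring Boot/Spring Cloud versions, the server points at a versioned config repository and each client imports from the server at startup. The application names, repo URL and port below are placeholders:

```yaml
# Config Server application.yml (the main class carries @EnableConfigServer)
spring:
  application:
    name: configserver
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo   # placeholder repo

---
# Config client application.yml (a microservice), Spring Boot 2.4+ style
spring:
  application:
    name: accounts
  config:
    import: "optional:configserver:http://localhost:8888"
```

The `optional:` prefix lets the client start even when the Config Server is temporarily unreachable, falling back to local defaults.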


CHALLENGE 4 WITH MICROSERVICES
SERVICE DISCOVERY & REGISTRATION

HOW DO SERVICES LOCATE EACH OTHER INSIDE A NETWORK?
Each instance of a microservice exposes a remote API with its own host and port. How do other microservices & clients learn about these dynamic endpoint URLs in order to invoke them? In other words: where is my service?

HOW DO NEW SERVICE INSTANCES ENTER THE NETWORK?
If a microservice instance fails, new instances are brought online to ensure constant availability. This means the IP addresses of the instances can be constantly changing. So how do these new instances start serving clients?

LOAD BALANCING & INFO SHARING B/W MICROSERVICE INSTANCES
How do we make sure to properly load balance between multiple microservice instances, especially when one microservice is invoking another? How is information about a specific service shared across the network?
SERVICE DISCOVERY & REGISTRATION
INSIDE A MICROSERVICES NETWORK

• Service discovery & registration deals with how microservices talk to each other, i.e. perform API calls.

• In a traditional network topology, applications have static network locations. Hence the IP addresses of relevant external locations can be read from a configuration file, as these addresses rarely change.

• In a modern microservices architecture, knowing the right network location of an application is a much more complex problem for clients, as service instances might have dynamically assigned IP addresses. Moreover, the number of instances may vary due to autoscaling and failures.

• Microservices service discovery & registration is a way for applications and microservices to locate each other on a network. This includes:
 A central server (or servers) that maintains a global view of addresses.
 Microservices/clients that connect to the central server to register their address when they start and are ready.
 Microservices/clients that send heartbeats to the central server at regular intervals to report their health.
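The register/heartbeat/evict cycle described above can be sketched as a tiny in-memory registry. This is a toy stand-in for Eureka or Consul; the service names, addresses and eviction timeout are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy service registry: instances register/renew via heartbeats and are
// evicted once a heartbeat is overdue. Real registries (Eureka, Consul)
// add peer replication, self-preservation and much more.
public class RegistrySketch {
    // logical service name -> (instance address -> last heartbeat millis)
    private final Map<String, Map<String, Long>> services = new ConcurrentHashMap<>();
    private final long evictAfterMillis;

    public RegistrySketch(long evictAfterMillis) {
        this.evictAfterMillis = evictAfterMillis;
    }

    // Registration and renewal are the same operation: record "alive at now".
    public void heartbeat(String service, String address, long nowMillis) {
        services.computeIfAbsent(service, s -> new ConcurrentHashMap<>()).put(address, nowMillis);
    }

    // Look up live instances by logical name, evicting stale ones first.
    public List<String> lookup(String service, long nowMillis) {
        Map<String, Long> instances = services.get(service);
        if (instances == null) return List.of();
        instances.entrySet().removeIf(e -> nowMillis - e.getValue() > evictAfterMillis);
        return new ArrayList<>(instances.keySet());
    }

    public static void main(String[] args) {
        RegistrySketch registry = new RegistrySketch(90_000);
        registry.heartbeat("accounts", "10.0.0.1:8080", 0);
        registry.heartbeat("accounts", "10.0.0.2:8080", 60_000);
        System.out.println(registry.lookup("accounts", 70_000));  // both still live
        System.out.println(registry.lookup("accounts", 120_000)); // first one evicted
    }
}
```

Callers resolve the logical name "accounts" to whatever addresses are currently healthy, which is exactly the indirection that lets instance IPs change freely.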
WHY NOT TRADITIONAL LOAD BALANCERS
FOR SERVICE DISCOVERY & REGISTRATION

Applications like the UI or other services use a generic DNS name along with a service-specific path to invoke a specific service: services.eazybank.com/accounts, services.eazybank.com/cards, services.eazybank.com/loans.

[Diagram] The DNS name for the load balancers (services.eazybank.com) resolves to a primary load balancer (with a secondary for failover) that uses routing tables and health checks to forward requests to the Accounts, Loans and Cards services. This is the traditional service-location resolution architecture using DNS and a load balancer.
WHY NOT TRADITIONAL LOAD BALANCERS
FOR SERVICE DISCOVERY & REGISTRATION

• With the traditional approach, each instance of a service used to be deployed on one or more application servers. The number of these application servers was often static, and even in the case of restoration a server would be restored to the same state, with the same IP and other configuration.

• While this model works well for monolithic and SOA-based applications with a relatively small number of services running on a group of static servers, it doesn’t work well for cloud-based microservice applications, for the following reasons:

• Limited horizontal scalability & license costs
• Single point of failure & centralized chokepoints
• Manually managed updates to IPs and configurations
• Not container friendly
• Complex in nature
ARCHITECTURE OF SERVICE DISCOVERY
IN MICROSERVICES
Client applications never worry about the direct IP details of a microservice. They just invoke the service discovery layer with a logical service name.

[Diagram] Client applications (microservices) call services.eazybank.com/accounts, /cards and /loans against the service discovery layer (nodes 1-3). 1. A service's actual location is looked up by its logical name. 2. When a service comes online, it registers its IP address with a service discovery agent and signals that it is ready to take requests. 3. Service discovery nodes communicate with each other about new services, the health of services, etc. 4. Service instances send heartbeats to the service discovery agent; if an instance stops sending heartbeats, service discovery removes the dead instance's IP from the list. This is the server-side discovery/load-balancing pattern.

ARCHITECTURE OF SERVICE DISCOVERY
IN MICROSERVICES

• Service discovery tools and patterns were developed to overcome the challenges of traditional load balancers.

• Service discovery mainly consists of a key-value store (the service registry) and an API to read from and write to this store. New instances of applications are saved to this service registry and deleted when the service is down or unhealthy.

• Clients that want to communicate with a certain service interact with the service registry to learn the exact network location(s).

• Advantages of the service discovery approach:

• No limitations on availability
• Peer-to-peer communication b/w service discovery agents
• Dynamically managed IPs and configurations, with load balancing
• Fault-tolerant & resilient in nature
CLIENT-SIDE LOAD BALANCING
IN MICROSERVICES
When a microservice want to connect with other microservice, it will check the local cache for the service instances IPs. Load balancing also
happens at the service level itself w/o depending on the Service Discovery

[Diagram]
Accounts Service with client-side cache/load balancing: periodically the client-side cache will be refreshed with the service discovery layer.
Service Discovery Layer (Service Discovery Node 1, Node 2, Node 3): service discovery nodes communicate with each other about new services, health of the services etc.
Heartbeat: service instances send a heartbeat to the service discovery agent. If a service didn’t send a heartbeat, service discovery will remove the IP of the dead instance from the list.
Other microservices in the network (Loans Service, Cards Service): if the client finds a service IP in the cache, it will use it. Otherwise it goes to the service discovery.
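The client-side behaviour above can be sketched as a round-robin pick over a locally cached instance list (the names and addresses below are hypothetical; Spring Cloud Load Balancer implements this, plus the periodic cache refresh, for you):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of client-side load balancing: pick instances from a local cache
// without calling the registry on every request.
class CachedRoundRobinBalancer {
    private final List<String> cachedInstances; // refreshed from the discovery layer periodically
    private final AtomicInteger next = new AtomicInteger();

    CachedRoundRobinBalancer(List<String> cachedInstances) {
        this.cachedInstances = cachedInstances;
    }

    // Rotate through the cached instances in order.
    String choose() {
        int i = Math.floorMod(next.getAndIncrement(), cachedInstances.size());
        return cachedInstances.get(i);
    }
}
```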
SPRING CLOUD SUPPORT
FOR SERVICE DISCOVERY & REGISTRATION

• Spring Cloud project makes Service Discovery & Registration setup trivial to undertake with the help of the below components,

• Spring Cloud Netflix's Eureka service which will act as a service discovery agent*
• Spring Cloud Load Balancer library for client-side load balancing**
• Netflix Feign client to look up a service b/w microservices

* Though in this course we use Eureka since it is the most widely used, there are other good service registries such as etcd, Consul, and Apache ZooKeeper.

** Though the Netflix Ribbon client is also a good and stable product, we are going to use Spring Cloud Load Balancer for client-side load balancing. This is because Ribbon has entered maintenance mode and unfortunately it will not be developed anymore.
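For instance, registering a service with a Eureka server typically needs little more than the following application.properties entries (the URL and application name below are placeholders, not values from this course's code):

```properties
# Name under which this service registers itself (placeholder)
spring.application.name=accounts
# Location of the Eureka server (placeholder URL)
eureka.client.serviceUrl.defaultZone=http://localhost:8761/eureka/
```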
EUREKA SELF-PRESERVATION
TO AVOID TRAPS IN NETWORK

Instance 1 – UP
Instance 2 - UP
Peer to peer communication Instance 3 - UP
Instance 4 - UP
Instance 5 - UP
Eureka Server 1 Eureka Server 2

Heartbeat by all the instances every 30 secs

Instance 1 Instance 2 Instance 3 Instance 4 Instance 5

Accounts Service Instances

Healthy Microservices System with all 5 instances up before encountering network problems
EUREKA SELF-PRESERVATION
TO AVOID TRAPS IN NETWORK

Instance 1 – UP
Instance 2 - UP
Peer to peer communication Instance 3 - UP

Eureka Server 1 Eureka Server 2

Heartbeat by all the instances every 30 secs

Instance 1 Instance 2 Instance 3 Instance 4 Instance 5

Accounts Service Instances

2 of the instances are not sending heartbeats. Eureka enters self-preservation mode since the threshold percentage is met
EUREKA SELF-PRESERVATION
TO AVOID TRAPS IN NETWORK

Instance 1 – UP
Instance 2 - UP
Peer to peer communication Instance 3 - UP

Eureka Server 1 Eureka Server 2

Heartbeat by all the instances every 30 secs

Instance 1 Instance 2 Instance 3 Instance 4 Instance 5

Accounts Service Instances

During self-preservation, Eureka will stop expiring the instances even though it is not receiving a heartbeat from instance 3
EUREKA SELF-PRESERVATION
TO AVOID TRAPS IN NETWORK

• The reason behind self-preservation mode in eureka

 Servers not receiving heartbeats could be due to a poor network issue, which may be resolved soon, and does not necessarily mean the clients are down. So without self-preservation we would end up with zero instances registered in Eureka even though the instances might be up and running.
 Even though connectivity is lost between the servers and some clients, the clients might still have connectivity with each other. With their locally cached registration details they can keep communicating with each other.

• Self-preservation mode never expires until the down microservices are brought back or the network glitch is resolved. This is because Eureka will not expire the instances as long as the received heartbeats stay below the expected threshold.

• Self-preservation is a savior where network glitches are common and helps us handle false-positive alarms.
EUREKA SELF-PRESERVATION
TO AVOID TRAPS IN NETWORK

• Configurations which will directly or indirectly impact self-preservation behavior of eureka

 eureka.instance.lease-renewal-interval-in-seconds = 30
Indicates the frequency the client sends heartbeats to server to indicate that it is still alive.
 eureka.instance.lease-expiration-duration-in-seconds = 90
Indicates the duration the server waits since it received the last heartbeat before it can evict an instance
 eureka.server.eviction-interval-timer-in-ms = 60 * 1000
A scheduler (EvictionTask) is run at this frequency which will evict instances from the registry if the lease of the instances is expired, as configured by lease-expiration-duration-in-seconds. It will also check whether the system has reached self-preservation mode (by comparing actual and expected heartbeats) before evicting.
 eureka.server.renewal-percent-threshold = 0.85
This value is used to calculate the expected % of heartbeats per minute eureka is expecting.
 eureka.server.renewal-threshold-update-interval-ms = 15 * 60 * 1000
A scheduler is run at this frequency which calculates the expected heartbeats per minute
 eureka.server.enable-self-preservation = true
By default self-preservation mode is enabled but if you need to disable it you can change it to ‘false’
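Put together as an application.properties fragment, the defaults listed above look like this (an actual Eureka server would only set the ones it overrides):

```properties
eureka.instance.lease-renewal-interval-in-seconds=30
eureka.instance.lease-expiration-duration-in-seconds=90
eureka.server.eviction-interval-timer-in-ms=60000
eureka.server.renewal-percent-threshold=0.85
eureka.server.renewal-threshold-update-interval-ms=900000
eureka.server.enable-self-preservation=true
```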
CHALLENGE 5 WITH MICROSERVICES
RESILIENCY

HOW DO WE AVOID CASCADING FAILURES?
One failed or slow service should not have a ripple effect on the other microservices. In scenarios where multiple microservices are communicating, we need to make sure that the entire chain of microservices does not fail with the failure of a single microservice.

HOW DO WE HANDLE FAILURES GRACEFULLY WITH FALLBACKS?
In a chain of multiple microservices, how do we build a fallback mechanism if one of the microservices is not working? For example, returning a default value, returning values from a cache, or calling another service/DB to fetch the results.

HOW TO MAKE OUR SERVICES SELF-HEALING CAPABLE
In the case of slow performing services, how do we configure timeouts and retries, and give time for a failed service to recover itself?
SPRING SUPPORT
FOR RESILIENCY USING RESILIENCE4J

• Resilience4j is a lightweight, easy-to-use fault tolerance library inspired by Netflix Hystrix, but designed for Java 8 and functional
programming. Lightweight, because the library only uses Vavr, which does not have any other external library dependencies.
Netflix Hystrix, in contrast, has a compile dependency to Archaius which has many more external library dependencies such as
Guava and Apache Commons Configuration.

• Resilience4j offers the following patterns for increasing fault tolerance due to network problems or failure of any of the multiple
services:
 Circuit breaker - Used to stop making requests when a service invoked is failing.
 Fallback - Alternative paths to failing requests.
 Retry - Used to make retries when a service has temporarily failed.
 Rate limit - Limits the number of calls that a service receives in a time.
 Bulkhead - Limits the number of outgoing concurrent requests to a service to avoid overloading.

• Before Resilience4j, developers used Hystrix, one of the most common Java libraries for implementing resiliency patterns in microservices. But Hystrix is now in maintenance mode and no new features are being developed. For this reason everyone now uses Resilience4j, which has more features than Hystrix.
TYPICAL SCENARIO
IN MICROSERVICES
[Diagram]
App 1 which needs response from all 3 services
App 2 which needs response from Accounts
App 3 which needs response from Loans
App 4 which needs response from Cards
Eureka Server 1
Accounts Microservice (Accounts Database)
Loans Microservice (Loans Database)
Cards Microservice (Cards Database)
In this microservices network, both Accounts and Loans are working fine, but Cards is not responding properly due to DB connectivity issues
CIRCUIT BREAKER PATTERN
FOR RESILIENCY IN MICROSERVICES

• In a distributed environment, calls to remote resources and services can fail due to transient faults, such as slow network
connections, timeouts, or the resources being overcommitted or temporarily unavailable. These faults typically correct themselves
after a short period of time, and a robust cloud application should be prepared to handle them.

• The Circuit Breaker pattern, inspired by an electrical circuit breaker, will monitor the remote calls. If the calls take too long, the circuit breaker will intercede and kill the call. The circuit breaker also monitors all calls to a remote resource, and if enough calls fail, the circuit breaker implementation will trip, failing fast and preventing future calls to the failing remote resource.

• The Circuit Breaker pattern also enables an application to detect whether the fault has been resolved. If the problem appears to have been fixed, the application can try to invoke the operation.

• The advantages with the circuit breaker pattern are,

 Fail fast
 Fail gracefully
 Recover seamlessly

@CircuitBreaker(name="detailsForCustomerSupportApp", fallbackMethod="myCustomerDetailsFallBack")
CIRCUIT BREAKER PATTERN
FOR RESILIENCY IN MICROSERVICES

In Resilience4j the circuit breaker is implemented via a finite state machine with the following states (the circuit moves from CLOSED to OPEN when the failure rate is above the threshold):

1) CLOSED – Initially the circuit breaker starts in the CLOSED state and accepts client requests
2) OPEN – If the circuit breaker sees that a threshold of requests are failing, then it will OPEN the circuit, which will make requests fail fast
3) HALF_OPEN – Periodically the circuit breaker checks if the issue is resolved by allowing a few requests. Based on the results it will go to either CLOSED or OPEN.
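The three states above can be sketched as a small plain-Java state machine. This is an illustration of the pattern only, with invented names; it is not the Resilience4j implementation (which also tracks sliding windows, wait durations, and so on):

```java
// States of the breaker, as described on the slide.
enum State { CLOSED, OPEN, HALF_OPEN }

// Minimal CLOSED -> OPEN -> HALF_OPEN state machine (illustrative only).
class SimpleCircuitBreaker {
    private State state = State.CLOSED;
    private int failures = 0;
    private final int failureThreshold;

    SimpleCircuitBreaker(int failureThreshold) { this.failureThreshold = failureThreshold; }

    State state() { return state; }

    // While OPEN, callers fail fast instead of invoking the remote resource.
    boolean allowsRequest() { return state != State.OPEN; }

    // Record a call outcome and move between states accordingly.
    void record(boolean success) {
        switch (state) {
            case CLOSED:
                if (success) { failures = 0; }
                else if (++failures >= failureThreshold) { state = State.OPEN; }
                break;
            case HALF_OPEN:
                state = success ? State.CLOSED : State.OPEN;
                if (state == State.CLOSED) failures = 0;
                break;
            default: // OPEN: requests are rejected, nothing to record
        }
    }

    // After a wait interval, let a trial request through.
    void tryHalfOpen() { if (state == State.OPEN) state = State.HALF_OPEN; }
}
```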
CIRCUIT BREAKER PATTERN
FOR RESILIENCY IN MICROSERVICES

Scenario 1 – If the Cards microservice is responding slowly, then without a circuit breaker it will start eating up all the resources/threads on the Loans and Accounts microservices, which will eventually make them slow or bring them down as well.

Scenario 2 – If the Cards microservice is responding slowly, then with circuit breakers in between they will start acting and failing the calls fast via the OPEN, HALF_OPEN and CLOSED states. This way at least the Accounts and Loans services will not have any issues for the other apps.

Scenario 3 – If the Cards microservice is responding slowly, then with circuit breakers and a fallback mechanism we can make sure that we at least fail gracefully, with some default response being returned, while the other microservices are not impacted.
RETRY PATTERN
FOR RESILIENCY IN MICROSERVICES

• The retry pattern will make a configured number of retry attempts when a service has temporarily failed. This pattern is very helpful in scenarios like network disruption where the client request may succeed after a retry attempt.

• For retry pattern we can configure the following values,

 maxAttempts - The maximum number of attempts


 waitDuration - A fixed wait duration between retry attempts
 retryExceptions - Configures a list of Throwable classes that are recorded as a failure and thus are retried.
 ignoreExceptions - Configures a list of Throwable classes that are ignored and thus are not retried.

• We can also define a fallback mechanism if the service call fails even after multiple retry attempts. Below is a sample
configuration,

@Retry(name="detailsForCustomerSupportApp", fallbackMethod="myCustomerDetailsFallBack")
RATE LIMITER PATTERN
FOR RESILIENCY IN MICROSERVICES

• The rate limiter pattern will help stop overloading the service with more calls than it can consume in a given time. This is an imperative technique to prepare our API for high availability and reliability.

• This pattern protects APIs and service endpoints from harmful effects, such as denial of service and cascading failures.

• For rate limiter pattern we can configure the following values,

 timeoutDuration - The default wait time a thread waits for a permission
 limitForPeriod - The number of permissions available during one limit refresh period
 limitRefreshPeriod - The period of a limit refresh. After each period the rate limiter sets its permissions count back to the limitForPeriod value

• We can also define a fallback mechanism if the service call fails due to rate limiter configurations. Below is a sample configuration,

@RateLimiter(name="detailsForCustomerSupportApp", fallbackMethod="myCustomerDetailsFallBack")
BULKHEAD PATTERN
FOR RESILIENCY IN MICROSERVICES

• A ship is split into multiple small compartments using bulkheads. Bulkheads are used to seal parts of the ship to prevent the entire ship from sinking in case of a flood.

• Similarly, microservice resources should be isolated in such a way that the failure of one component does not affect the entire microservice.

• The Bulkhead pattern helps us to allocate and limit the resources which can be used by specific services, so that resource exhaustion can be reduced.

• For the Bulkhead pattern we can configure the following values,

 maxConcurrentCalls - Max amount of parallel executions allowed by the bulkhead
 maxWaitDuration - Max amount of time a thread should be blocked when attempting to enter a saturated bulkhead.

@Bulkhead(name="bulkheadAccounts", fallbackMethod="bulkheadAccountsFallBack")
BULKHEAD PATTERN
ARCHITECTURE INSIDE MICROSERVICES

[Two panels, each showing an ACCOUNTS MICROSERVICE receiving requests to /myAccount and /myCustomerDetails]

Without Bulkhead, /myCustomerDetails will start eating all the threads and resources available, which will affect the performance of /myAccount.

With Bulkhead, /myCustomerDetails and /myAccount will have their own resources and thread pools defined.
CHALLENGE 6 WITH MICROSERVICES
ROUTING, CROSS CUTTING CONCERNS

HOW DO WE ROUTE BASED ON CUSTOM REQUIREMENTS?
If we have custom requirements to route the incoming requests to the appropriate destination, both in a static and a dynamic way, how do we do that?

HOW DO WE HANDLE CROSS CUTTING CONCERNS?
In a distributed microservices architecture, how do we make sure to have consistently enforced cross cutting concerns like logging, auditing, tracing, security and metrics collection across multiple microservices?

HOW DO WE BUILD A SINGLE GATEKEEPER?
How do we build a single gatekeeper for all the inbound traffic to our
microservices which will act as a central Policy Enforcement Point (PEP)
for all service calls?
SPRING CLOUD SUPPORT
FOR ROUTING, CROSS CUTTING CONCERNS

• Spring Cloud Gateway is an API Gateway implementation by the Spring Cloud team on top of the Spring reactive ecosystem. It provides a simple and effective way to route incoming requests to the appropriate destination using Gateway Handler Mapping.

• The service gateway sits as the gatekeeper for all inbound traffic to microservice calls within our application. With a service
gateway in place, our service clients never directly call the URL of an individual service, but instead place all calls to the service
gateway.


69
The service gateway sits between all calls from the client to the individual services, it also acts as a central Policy Enforcement
Point (PEP) like below for service calls.

 Routing (Both Static & Dynamic)
 Security (Authentication & Authorization)
 Logging, Auditing and Metrics collection
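A static route in Spring Cloud Gateway is declared in configuration. The route id, service name and path below are placeholders chosen for illustration:

```yaml
spring:
  cloud:
    gateway:
      routes:
        - id: accounts-route          # placeholder route id
          uri: lb://ACCOUNTS          # 'lb://' resolves the service via the registry
          predicates:
            - Path=/accounts/**       # requests matching this path are routed here
```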
SPRING CLOUD SUPPORT
FOR ROUTING, CROSS CUTTING CONCERNS

• Spring Cloud Gateway is a library for building an API gateway, so it looks like any other Spring Boot application. If you’re a Spring
developer, you’ll find it’s very easy to get started with Spring Cloud Gateway with just a few lines of code.

• Spring Cloud Gateway is intended to sit between a requester and a resource that’s being requested, where it intercepts, analyzes,
and modifies every request. That means you can route requests based on their context. Did a request include a header indicating
an API version? We can route that request to the appropriately versioned backend. Does the request require sticky sessions? The
gateway can keep track of each user’s session.
• Spring Cloud Gateway is a replacement for Zuul, for the following reasons and advantages,

 Spring Cloud Gateway is the preferred API gateway implementation from the Spring Cloud Team. It’s built on Spring 5, Reactor, and Spring WebFlux. Not only that, it also includes circuit breaker integration and service discovery with Eureka.
 Spring Cloud Gateway is non-blocking in nature. Though Zuul 2 later also supported this, Spring Cloud Gateway still has an edge here.
 Spring Cloud Gateway has a superior performance compared to that of Zuul.
SPRING CLOUD GATEWAY
INTERNAL ARCHITECTURE

[Diagram: CLIENT → SPRING CLOUD GATEWAY → EUREKA SERVER/MICROSERVICES]

Inside the gateway the request flows through:
Predicates (to check if the requests fulfill a set of given conditions)
Pre Filters
Gateway Handler Mapping using routing configs
Post Filters (applied to the response on its way back to the client)

When the client makes a request to the Spring Cloud Gateway, the Gateway Handler Mapping first checks if the request matches a route. This matching is done using the predicates. If the request matches a predicate, it is sent to the filters. After the filters, it is sent on to the actual microservice or the Eureka Server.
CHALLENGE 7 WITH MICROSERVICES
DISTRIBUTED TRACING & LOG AGGREGATION

HOW DO WE DEBUG WHERE A PROBLEM IS IN MICROSERVICES?
How do we trace one or more transactions across multiple services, physical
machines, and different data stores, and try to find where exactly the
problem or bug is?

HOW DO WE AGGREGATE ALL APPLICATION LOGS?
How do we combine all the logs from multiple services into a central
location where they can be indexed, searched, filtered, and grouped
to find bugs that are contributing to a problem?

HOW DO WE MONITOR OUR CHAIN OF SERVICE CALLS?
For a specific chain of service calls, how do we understand the path it travelled inside our microservices network, the time it took at each microservice, etc.?
SPRING CLOUD SUPPORT
FOR DISTRIBUTED TRACING & LOG AGGREGATION

Spring Cloud Sleuth (https://spring.io/projects/spring-cloud-sleuth)


• Spring Cloud Sleuth provides Spring Boot auto-configuration for distributed tracing.

• It adds trace and span IDs to all the logs, so you can extract all the logs for a given trace or span in a log aggregator.

• It does this by adding filters and interacting with other Spring components to let the generated correlation IDs pass through to all the system calls.

Zipkin (https://zipkin.io/)
• Zipkin is an open-source data-visualization tool that helps aggregate all the logs and gather the timing data needed to troubleshoot latency problems in microservices architectures.

• It allows us to break a transaction down into its component pieces and visually identify where there might be performance hotspots, thus reducing triage time by contextualizing errors and delays.
SPRING CLOUD SLEUTH
TRACE FORMAT

• Spring Cloud Sleuth will add three pieces of information to all the logs written by a microservice.

[<App Name>,<Trace ID>, <Span ID>]

• Application name of the service: This is going to be the application name where the log entry is being made. Spring Cloud Sleuth gets this name from the ‘spring.application.name’ property.

• Trace ID: Trace ID is the equivalent term for correlation ID. It’s a unique number that represents an entire transaction.

• Span ID: A span ID is a unique ID that represents part of the overall transaction. Each service participating within the transaction
will have its own span ID. Span IDs are particularly relevant when you integrate with Zipkin to visualize your transactions.
SPRING CLOUD SLEUTH
TRACE FORMAT
App which needs response from all 3 services

Accounts Microservice
Eureka Server 1
[accounts,0b6aaf642574edd3,0b6aaf642574edd3] Accounts Database


Loans Microservice
Loans Database
[loans,0b6aaf642574edd3, f5f4c6eca1748e77]

Cards Microservice
[cards,0b6aaf642574edd3,c7ba3e43e99f761d] Cards Database
ZIPKIN
ARCHITECTURE OVERVIEW

[Diagram: Accounts Microservice, Loans Microservice and Cards Microservice report trace data to Zipkin, either synchronously (web) or asynchronously (Rabbit, Active MQ, ELK)]

ZIPKIN INTERNAL COMPONENTS

COLLECTOR
Once the trace data arrives at the Zipkin collector daemon, it is validated, stored, and indexed for lookups by the Zipkin collector.

STORAGE
Zipkin supports in-memory, MySQL, Cassandra and Elasticsearch for storing the logs and tracing information.

ZIPKIN QUERY SERVICE (API)
Once the data is stored and indexed, we need a way to extract it. The query daemon provides a simple JSON API for finding and retrieving traces. The primary consumer of this API is the Web UI.

WEB UI
The web UI provides a method for viewing traces based on service, time, and annotations.


CHALLENGE 8 WITH MICROSERVICES
MONITORING MICROSERVICES HEALTH & METRICS

HOW DO WE MONITOR SERVICES METRICS?
How do we monitor the metrics like CPU usage, JVM metrics etc. for all the
microservices applications we have inside our network easily and efficiently?

HOW DO WE MONITOR SERVICES HEALTH?
How do we monitor the status/health for all the microservices
applications we have inside our network in a single place?

HOW DO WE CREATE ALERTS BASED ON MONITORING?
How do we create alerts/notifications for any abnormal behavior of the
services?
DIFF APPROACHES TO MONITOR
MICROSERVICES HEALTH & METRICS

[Diagram: four tools around a CLOUD NATIVE APPLICATION]

01 ACTUATOR
Actuator is mainly used to expose operational information about the running application: health, metrics, info, dump, env, etc. It uses HTTP endpoints or JMX beans to enable us to interact with it.

02 MICROMETER
Micrometer automatically exposes /actuator/metrics data into something your monitoring system can understand. All you need to do is include that vendor-specific Micrometer dependency in your application. Think SLF4J, but for metrics.

03 PROMETHEUS
It is a time-series database that stores our metric data by pulling it (using a built-in data scraper) periodically over HTTP. It also has a simple user interface where we can visualize/query all of the collected metrics.

04 GRAFANA
Grafana can pull data from various data sources like Prometheus and offers a rich UI where you can build up custom graphs quickly and create a dashboard out of many graphs in no time. It also allows you to set rule-based alerts for notifications.
CHALLENGE 9 WITH MICROSERVICES
CONTAINER ORCHESTRATION

HOW DO WE AUTOMATE THE DEPLOYMENTS, ROLLOUTS & ROLLBACKS?
How do we automate deployment of the containers into a complex cluster environment and perform rollouts of new versions of the containers without downtime, along with an option of automatic rollback in case of any issues?

HOW DO WE MAKE SURE OUR SERVICES ARE SELF-HEALING?
How do we automatically restart containers that fail, replace containers, kill containers that don't respond to a user-defined health check, and not advertise them to clients until they are ready to serve?

HOW DO WE AUTO SCALE OUR SERVICES?
How do we monitor our services and scale them automatically based on metrics like CPU utilization etc.?
KUBERNETES (K8S)
FOR CONTAINER ORCHESTRATION

• Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It is the most famous orchestration platform and it is cloud neutral.

• The name Kubernetes originates from Greek, meaning helmsman or pilot. K8s as an abbreviation results from counting the eight
letters between the "K" and the "s".

• Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

• Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your
application, provides deployment patterns, and more. It provides you with:

• Service discovery and load balancing
• Storage orchestration
• Automated rollouts and rollbacks
• Automatic bin packing
• Self-healing
• Secret and configuration management
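Automated rollouts, self-healing and scaling are all driven by declarative manifests. A minimal illustrative Deployment for one of the services might look like this (name, labels, image and port are placeholders, not the course's actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: accounts                    # placeholder deployment name
spec:
  replicas: 3                       # Kubernetes keeps 3 pods running, replacing failed ones
  selector:
    matchLabels:
      app: accounts
  template:
    metadata:
      labels:
        app: accounts
    spec:
      containers:
        - name: accounts
          image: example/accounts:1.0   # placeholder container image
          ports:
            - containerPort: 8080
```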
KUBERNETES (K8S)
INTERNAL ARCHITECTURE
[Diagram: Kubernetes cluster]
MASTER NODE (CONTROL PLANE): API Server, scheduler, controller manager, etcd
WORKER NODE 1 and WORKER NODE 2: kubelet, kube-proxy and Docker on each node, running pods (POD 1, POD 2) that hold one or more containers
Users interact with the cluster through the UI or the CLI (kubectl), which talk to the API Server
KUBERNETES (K8S)
INTERNAL ARCHITECTURE

Master Node (Control Plane)

• The master node is responsible for managing an entire cluster. It monitors the health check of all the nodes in the cluster, stores
members’ information regarding different nodes, plans the containers that are scheduled to certain worker nodes, monitors
containers and nodes, etc. So, when a worker node fails, the master moves the workload from the failed node to another healthy
worker node.

• The Kubernetes master is responsible for scheduling, provisioning, configuring, and exposing APIs to the client. All of this is done by the master node using control plane components. Kubernetes takes care of service discovery, scaling, load balancing, self-healing, leader election, etc. Therefore, developers no longer have to build these services inside their applications.
KUBERNETES (K8S)
INTERNAL ARCHITECTURE

Master Node (Control Plane)

• Four basic components of the master node (control plane):

 API server - The API Server is the front-end of the control plane and the only component in the control plane that we interact
with directly. Internal system components, as well as external user components, all communicate via the same API.

 Scheduler - Control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on. Factors taken into account for scheduling decisions include: individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference, and deadlines.

 Controller manager - The controller manager maintains the cluster. It handles node failures, replicates components,
maintains the correct number of pods, etc. It constantly tries to keep the system in the desired state by comparing it with the
current state of the system.

 Etcd - Etcd is a data store that stores the cluster configuration. It is a distributed, reliable key-value store; all the configurations are stored in documents, and it is schema-less.
KUBERNETES (K8S)
INTERNAL ARCHITECTURE

Worker Node (Data plane)

• The worker node is nothing but a virtual machine (VM) running in the cloud or on-prem (a physical server running inside your data
center). So, any hardware capable of running container runtime can become a worker node. These nodes expose underlying
compute, storage, and networking to the applications.

• Worker nodes do the heavy lifting for the applications running inside the Kubernetes cluster. Together, these nodes form a cluster: a workload is assigned to them by the master node component, similar to how a manager would assign a task to a team member. This way, we will be able to achieve fault tolerance and replication.

• Pods are the smallest unit of deployment in Kubernetes just as a container is the smallest unit of deployment in Docker. To
understand in an easy way, we can say that pods are nothing but lightweight VMs in the virtual world. Each pod consists of one or
more containers. Pods are ephemeral in nature as they come and go, while containers are stateless in nature. Usually, we run a
single container inside a pod. There are some scenarios where we will run multiple containers that are dependent on each other
inside a single pod. Each time a pod spins up, it gets a new IP address with a virtual IP range assigned by the pod networking
solution.
KUBERNETES (K8S)
INTERNAL ARCHITECTURE

Worker Node (Data plane)

• There are three basic components of the worker node (data plane):

 Kubelet - An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. The kubelet
takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those
PodSpecs are running and healthy.

 Kube-proxy - kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

 Container runtime - Container runtime runs containers like Docker, or containerd. Once you have the specification that
describes the image for your application, the container runtime will pull the images and run the containers.
KUBERNETES (K8S)
SUPPORT BY CLOUD PROVIDERS

• Kubernetes is so modular, flexible, and extensible that it can be deployed on-prem, in a third-party data center, in any of the
popular cloud providers and even across multiple cloud providers.

• Creating and maintaining a K8s cluster on-prem can be very challenging. Due to that, many enterprises look to cloud providers to make it easy to maintain their microservices architecture using Kubernetes.

• Below are the different famous cloud providers and their support for Kubernetes under different names,

• GCP - GKE (Google Kubernetes Engine)
• AWS - EKS (Elastic Kubernetes Service)
• Azure - AKS (Azure Kubernetes Service)

THANK YOU & CONGRATULATIONS


YOU ARE NOW A MASTER OF MICROSERVICES USING SPRING, DOCKER AND KUBERNETES

