Micro Services
Evolution of Microservices
Before the advent of microservices, there were other flavors of architecture: Monolithic and SOA
a) In Monolithic, everything from the UI to the Business Logic to the DAO is deployed on one single server. The
artifact is a single war or ear file, typically built from a single Java project. If you want to increase
horizontal scalability, then the entire setup needs to be copied and deployed on other servers
with a Load Balancer in front.
b) In SOA-based architecture, the UI and Service Layer are on different servers and they interact via an
ESB. Typically, this is SOAP-based communication. With SOAP we have additional complexity
involved, like message size and structure.
c) In Microservices-based architecture we have a different microservice for each component,
typically deployed in a container and orchestrated through an orchestration service like K8s. In
this case, scaling an individual component up or down is easy. Moreover, each
microservice can have its own database.
Monolithic Architecture
a) A change in one module, say Loans, will require deployment and testing of the entire application.
Spring Support for Microservices
a) Spring supports modular architecture and loosely coupled modules, which is, in principle, the philosophy
of Microservices-based architecture
b) Spring Boot makes development of Microservices fairly simple.
c) Spring Cloud helps in overcoming challenges associated with Microservices development
d) Spring Boot provides production-ready features like metrics, security, and embedded servers
e) Spring Cloud makes deployment of Microservices to the cloud very easy
WHAT IS SPRING BOOT
- When you add the Spring Boot Actuator dependency to your project, you get the added
feature of monitoring metrics and health checks by simply hitting a url.
- Spring Initializr makes building Spring Boot projects easy: the pom.xml
and dependencies are created automatically
a) Go to start.spring.io (Spring Initializr)
b) Choose your build tool and programming language
c) Add any dependencies like Web, Actuator etc.
d) Download the project zip, unzip it and import it into Eclipse
e) The class annotated with @SpringBootApplication is the main class
f) When you execute the jar, the main method of the class annotated with @SpringBootApplication
will be called. The main method has the code:
SpringApplication.run(HelloSvcApplication.class, args);
The run method will check whether a web dependency is on the classpath; in this case Spring will initialize a
Spring Web Application Context. It will then wire up all dependencies automatically. (A minimal sketch of such a main class is shown below.)
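A minimal sketch of the generated main class (package and class names are illustrative):

package com.example.hellosvc;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Combines component scanning, auto-configuration and configuration in one annotation
@SpringBootApplication
public class HelloSvcApplication {
    public static void main(String[] args) {
        // Bootstraps the application context (and the embedded server for web apps)
        SpringApplication.run(HelloSvcApplication.class, args);
    }
}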
g) After running the main class you will see in the logs that the app is up on port 8080 by default. If you
want to change the port, provide the entry below in the application.properties file:
server.port=8082
h) Since we added the Spring Boot Actuator dependency, various metrics and monitoring stats will be
available for the Microservice. Access the url:
http://localhost:8080/actuator
When you want to design a new Microservices-based architecture or migrate an existing monolith,
we need to make sure that a Microservice is neither too big (the system will not be modularized) nor too small
(too many microservices, which can introduce latency)
a) Domain Driven Sizing: Deciding boundaries based on business domains and components:
- Needs discussion with leaders who have very good knowledge of the business domain
- For example, in a bank application we can have an Accounts department, a Loans department and a Stocks
department. So, we can have a microservice for each business component
- This process usually takes a lot of time (maybe months), as we need to make sure that a
microservice is neither too big nor too small, and it requires thorough discussion with business
leaders.
But there is no right or perfect approach; your sizing will evolve in due course of product
development.
Example:
1) While creating the project in Spring Initializr, choose the dependencies Web, Actuator, H2 and
Lombok
2) H2 is an in-memory database that Spring Boot auto-configures. It's used for POC purposes. When
you start your application, the Spring Boot Framework will automatically execute the data.sql file
under the resources folder. It will create tables and insert data as per the scripts provided in the data.sql
file. When you stop the application, all tables and data are deleted.
3) Project Lombok will generate Getters and Setters automatically for you with the proper annotations,
but you have to execute the downloaded jar and provide the Eclipse location during execution
of the Lombok jar
4) When you start the Spring Boot application, you will get a message like:
H2 console available at '/h2-console'. Database available at 'jdbc:h2:mem:testdb'
Access the url localhost:<port where web app is deployed>/h2-console
In the rendered screen provide jdbc:h2:mem:testdb as the JDBC url. Here you will be able to see
the in-memory DB.
5) Please note that in the model class, the class name should exactly match the corresponding table name;
if that's not the case, explicitly provide the table name using the @Table annotation.
For example (a fuller sketch with Lombok follows),
@Entity
@Table(name = "accounts")
public class Account {
}
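Combining this with Lombok from step 3, the entity might look like the sketch below (the field and column names are illustrative, assuming a Boot 2.x / javax.persistence project):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import lombok.Getter;
import lombok.Setter;
import lombok.ToString;

@Entity
@Table(name = "accounts")
@Getter @Setter @ToString   // Lombok generates getters, setters and toString at compile time
public class Account {
    @Id
    @Column(name = "account_number")
    private long accountNumber;   // hypothetical columns matching data.sql

    @Column(name = "customer_id")
    private int customerId;
}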
DOCKERIZING MICROSERVICES
- Launching a separate VM for each MS would be very expensive and time-consuming
- The VM-based approach in general takes time, as each VM has a Guest OS which always takes
time to boot when the VM is restarted
- Containers do not have a Guest OS; their boundaries and space are limited. Each container will
only occupy the space that is required to run, and it will only contain libraries related to that container.
For example, the Accounts Service container can be Java 11 based, Loans on Java 14 and Cards on
Python.
- Adding, removing, and starting containers is also very fast.
- In a container, a software package runs in isolation. The software package is the set of dependent
libraries and code required to run the software.
- Software containerization is a technology that is used to deploy and run containers without VMs
- Creating Docker containers out of Docker images is like creating instances of a Java class
- An image can have any number of containers depending on load
- The way Docker images are built makes them environment/platform independent. The Docker image which is
used in the developer env can be used in the test env or in the cloud.
Suppose your Docker Hub user id is hitesh791 and in your Docker Hub account you have a repository
with the name accounts. Then create the Docker image using the command:
docker build . -t hitesh791/accounts
The above command will create a Docker image with name (or repository name)
hitesh791/accounts and tag latest.
Then, to push the image to Docker Hub, use the command:
docker push hitesh791/accounts:latest
These are the steps you must follow to build and push an image to Docker Hub
When we create a Dockerfile, we pull the OpenJDK image using
FROM openjdk:8
So here openjdk is the repository name under which the OpenJDK image was pushed to Docker Hub.
If we do not specify a tag name, Docker will use the default tag latest. (A minimal Dockerfile sketch is shown below.)
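A minimal Dockerfile sketch for the accounts service (the jar name is an assumption based on a typical Maven build):

# Base image with a JDK (assumption: the service targets Java 8)
FROM openjdk:8

# Copy the fat jar produced by the Maven build into the image
COPY target/accounts-0.0.1-SNAPSHOT.jar accounts.jar

# Start the Spring Boot application when the container launches
ENTRYPOINT ["java", "-jar", "accounts.jar"]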
Running the image with a port mapping (the run commands are sketched below) will launch a new container whose port 8080 is exposed as 8080 to the outside world.
If we want to create another container instance, we can use the same port inside the Docker container, as
each container has its own file system, ports and network; but to the outside world we have to change
the port, as host port 8080 is already in use:
Now we have two container instances running for the image with id 1234
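The run commands themselves are not shown in these notes; a minimal sketch, assuming the hitesh791/accounts image:

# first instance: host port 8080 -> container port 8080
docker run -d -p 8080:8080 hitesh791/accounts:latest

# second instance: same container port, different host port
docker run -d -p 8081:8080 hitesh791/accounts:latest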
3) Docker compose:
Suppose we have 50 microservices. If we want to spawn a container for each one of them, then we
must individually execute the run command for each image and spawn a container for each one of
them. This is a time-consuming process.
With Docker Compose, a single command can spawn containers for all microservices.
services:
  accounts:
    image: eazybytes/accounts:latest
    mem_limit: 700m
    ports:
      - "8080:8080"
    networks:
      - eazybank-network
networks:
  eazybank-network:
➢ Navigate to the folder where the docker-compose file is present. Now execute the command below
to run the docker-compose file:
docker-compose up
To stop the containers, execute the command:
docker-compose stop
- A Docker image can be executed on Windows, Mac, or Linux. Docker takes care of OS
abstraction.
- CD (Continuous Deployment) means that when any change is made, already-running services are not affected;
downtime is negligible.
- Docker containers are right-sized; they do not take up entire VM space. In case more space is
needed, that can be achieved through orchestration services like K8s.
- Scalability is very easy. For example, if on Sat and Sun we need to spawn 10 extra instances of the
Accounts MS, this cannot be achieved easily in a traditional application.
3) Twelve Factor App Concepts:
The Twelve-Factor App lays down principles to be followed to build a cloud-native application.
a) Code Base: Each Microservice should have a separate code base in a separate repository.
It can be deployed in multiple envs like Dev, Test or Production, but each MS will have its
own repo. That way each MS can be developed and maintained separately.
b) Dependencies: Jar file dependencies need to be figured out and explicitly defined using
build tools like Maven or Gradle. That way your Docker image will contain all the required
libraries and you can then use that image anywhere to run.
c) Configuration: You should always store environment-specific configuration (for example db
details etc.) outside of source code. If we store such configuration inside microservice code,
then we might have to change the Docker image from env to env; this breaks the philosophy of a
cloud-native app.
Hence, we should always keep configuration outside of the deployable Microservice; that way if
configuration changes you are not required to redeploy the MS, as the service will
automatically refer to the updated configuration.
d) Backing Services: This principle indicates that microservices should be able to switch
connections when the deployment env changes, without a change in code. For example, if in a
given env the MS uses a local DB, then the same MS should be able to run in an AWS env where it will
use an AWS DB, without a change in code, container or image. This is possible if we keep
configuration externalized.
e) Build, Release, Run: We should keep the Build stage separate from the Release stage. For example,
we build our code, then based on the env to be deployed we choose a specific configuration and
create a Release.
In essence, again, we should build Microservices which run independent of the env in
which they are running.
f) Processes: In a typical MS communication, 1 can call 2, 2 will call 3, 3 will then send a
response to 2 and then 2 will send a response to 1. In this communication, never store
data in the session of a microservice instance (as that instance may be scaled down). This is
known as stateless communication. Only share request and response.
If something still needs to be stored, it should be saved in a database.
g) Port Binding: Each Microservice should have its own port and interface. For example, we
generate a Spring Boot web app using 8080 as the port. The same port can be used in all
instances of the same microservice; however, the outside port will be different for each instance.
There can be another MS, say a PHP application, running on port 8081; again, the same port
can be used in all its instances while the outside port differs per instance.
Thus, with the Spring Boot Framework and Docker commands we can control which port an MS
instance is running on.
h) Concurrency: This principle states that in situations of heavy load on the application we should
always go for horizontal scaling instead of vertical scaling. In other words, the number of container
instances should be increased or decreased instead of increasing or decreasing the
CPU/RAM etc. of the server altogether.
i) Disposability: This principle states that at any point in time microservice instances should be
able to be disposed of gracefully without affecting the application. For such situations,
orchestration services like K8s are helpful, as they ensure that instances are stopped and replaced gracefully.
j) Dev/Prod Parity: Dev/Prod/Test envs should be similar. This will make sure that the entire
application is tested correctly. Think of a scenario where the Dev configuration is different from
Testing, because we did some manual configuration in the Dev env. In such cases your application
will run in Dev but not in Test.
If you keep the same configuration, then you can promote code from Dev to Test fast, and this
will decrease testing time as the entire application setup has already been tested with a similar
configuration in the Dev env.
k) Logs: In a monolithic application it's very easy to troubleshoot issues using logs. But in an
MS-based architecture there are many microservice instances running on multiple
servers. In such cases there should be a centralized location to stream and analyze logs. For
this purpose, the ELK stack is used. The MS will push logs in the form of an event stream into Logstash;
then it's up to the ELK stack to search and analyze the logs.
If MS A calls B and B calls C, using the ELK stack we can figure out where exactly an issue occurred in the
entire request flow.
l) Admin Processes: Admin processes like data clean-up or pulling analytics reports should be
maintained separately from the application. Further, these scripts should be maintained in the
source code repository and should not change from env to env
1) As per the principle, in a cloud-native app, configuration details (db details, sftp folder location
etc.) should be maintained outside of the MS code base. Doing so means the MS code does not need to be
changed when configuration information changes
2) Challenges associated with configuration:
a) How to externalize configuration
b) How to inject it into the MS
c) How to maintain configuration information in a way that a change in configuration does
not require a restart of the MS.
A) Create a new Spring Boot microservice configserver, adding Spring Cloud Config Server and
Spring Boot Actuator as dependencies
B) In the Spring main class add the annotation @EnableConfigServer. This annotation indicates to the
Spring framework that this is a Config Server application which can read configuration from a
centralized repo like Git, Vault or the classpath, and expose the configuration through REST
endpoints (see the sketch below).
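A minimal sketch of the configserver main class (the class name is an assumption):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.config.server.EnableConfigServer;

@SpringBootApplication
@EnableConfigServer   // turns this Spring Boot app into a Spring Cloud Config Server
public class ConfigServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(ConfigServerApplication.class, args);
    }
}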
C) But we need to tell the Config Server the location from which config information should be
loaded. There are three possible locations: the local file system, GitHub, and the classpath.
D) The location of the config repo is given in the application.properties file
E) Loading configuration information from class path:
- Classpath: In this case our Config Server will load configuration information from the classpath. For
this, provide the entries below in the Config Server microservice's application.properties file:
spring.application.name=configserver
spring.profiles.active=native
spring.cloud.config.server.native.search-locations=classpath:/config
- Since we have provided the classpath location classpath:/config, create a folder config
under the src/main/resources folder. Inside the config folder, provide configuration for each
microservice for each environment. Since we have three microservices, we need configuration
properties for dev, prod and test for each of them. Hence we need a total of 9 configuration files.
Let's take the example of dev env configuration details for the accounts microservice. Create a file with the name
accounts-dev.properties (similarly create accounts-prod.properties; a file with the name accounts.properties will
refer to the default environment). Typical configuration information will look like the sketch below.
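The original notes show a screenshot here; a hypothetical accounts-dev.properties sketch (all property names and values are illustrative):

# the prefix 'accounts' matches spring.application.name of the client microservice
accounts.msg=Welcome to accounts dev application
accounts.build-version=1.0
# array-style property
accounts.active-branches[0]=Delhi
accounts.active-branches[1]=Mumbai
# map-style property
accounts.contact-details.name=Dev Support Team
accounts.contact-details.email=devsupport@example.com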
Notice that the prefix everywhere is accounts. We can provide properties in the form of array and map data
structures as well.
- Start the configuration management microservice. Once the microservice is up, it will expose
REST endpoints which client microservices can use to access configuration information.
- You can check the REST endpoint url using:
http://<ip>:<port>/<application-name>/<profile>
For example, if your microservice is running on localhost at port 8071 and you want to access the
configuration of accounts for the dev environment, then use the url:
http://localhost:8071/accounts/dev
You will see a JSON response listing the matching property sources.
Please note that the default configuration is also loaded
To access configuration information for the accounts microservice in the prod environment use the url:
http://localhost:8071/accounts/prod
To access configuration information for the accounts microservice in the default environment use the url:
http://localhost:8071/accounts/default
- Now here is the beauty: change the value of any property. You will see that the change is
reflected in the REST service response WITHOUT STOPPING THE MICROSERVICE.
F) Loading configuration information from the local file system: One may choose this option because we
do not want to keep configuration information as part of the configuration management service
itself; we need to keep it separate, outside of the microservice. We also may not want to keep it
in a GitHub location, to avoid versioning and changes from other developers:
- The file location can be on your local file system or in a cloud location, say an AWS S3 bucket.
- Copy the config folder from the classpath to the local file system, say to the C drive
- Keep the profile as native in the application.properties file
- Comment the entry:
spring.cloud.config.server.native.search-locations=classpath:/config
and add:
spring.cloud.config.server.native.search-locations=file:///C://config
- Again, change the value of any property. You will see that the change is
reflected in the REST service response WITHOUT STOPPING THE MICROSERVICE.
G) Loading configuration from a GitHub location: This is the more preferred and advisable
approach to follow (a sketch of the git entries follows this list):
- Create a repo in GitHub and load the configuration information into that repo
- Change the profile to git in the application.properties file
- Comment the entries below:
spring.cloud.config.server.native.search-locations=classpath:/config
and
spring.cloud.config.server.native.search-locations=file:///C://config
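The git entries themselves are not shown in these notes; a minimal sketch, assuming a public repo (the uri and branch name are assumptions):

spring.application.name=configserver
spring.profiles.active=git
spring.cloud.config.server.git.uri=https://github.com/<your-account>/<config-repo>.git
spring.cloud.config.server.git.default-label=main
spring.cloud.config.server.git.clone-on-start=true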
A) Change in pom.xml:
- Add a new entry for the Spring Cloud version under the properties tag:
<properties>
    <java.version>17</java.version>
    <spring-cloud.version>2021.0.4</spring-cloud.version>
</properties>
- Add dependency management for spring cloud:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
- We need to tell the microservice the location from which it can read configuration properties. For
this, add the three entries below in the microservice's application.properties file:
spring.application.name=accounts
spring.profiles.active=prod
spring.config.import=optional:configserver:http://localhost:8071/
- Write code in the client microservice to load configuration information at startup:
@Configuration
@ConfigurationProperties(prefix = "accounts")
@Getter @Setter @ToString
public class AccountsServiceConfig {
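    // Fields named after the externalized property keys go here (hypothetical examples):
    // private String msg;                         // maps to accounts.msg
    // private Map<String, String> contactDetails; // maps to accounts.contact-details.*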
}
This is how you get a version of the Microservice which reads configuration information on
service startup from an external resource. The beauty is that when you change configuration
information you are not required to change code or reboot either the client or the config
MS. Hence the same Dockerfile can still be used when configuration information changes.
Similarly, you can build a microservice/Docker image version for the dev and uat envs/profiles as well. But
this means there will be a different version of the Dockerfile and image as we move from one env to
another. There is a solution to this as well.
Using a docker-compose file, we can provide the profile information in the docker-compose file; thereby the
same Docker image can be used for different environments. Only the docker-compose file must
change.
Follow the steps below to implement this:
A) Regenerate the jar files for all business microservices and push the Docker images to Docker Hub.
Remember: do not change any entry in the application.properties file. Although we will
externalize profile-related information in the docker-compose file, even then do not remove the
entries below from the application.properties file:
spring.application.name=accounts
spring.profiles.active=prod
spring.config.import=optional:configserver:http://localhost:8071/
The reason is that our client microservices are still annotated to read the configuration file, so at
application build time these properties will be looked up.
accounts:
  image: eazybytes/accounts:latest
  mem_limit: 700m
  ports:
    - "8080:8080"
  networks:
    - eazybank
  depends_on:
    - configserver
  deploy:
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
      window: 120s
  environment:
    SPRING_PROFILES_ACTIVE: default
    SPRING_CONFIG_IMPORT: configserver:http://configserver:8071/
- Please note that all microservices will share the same network, creating a bridge.
- Provide entries for all other microservices in the docker-compose file similarly.
- At application start time, AccountsServiceConfig will load configuration information from the
config server using the information from the docker-compose file.
8) Refreshing properties: If we change a property specific to any env/profile, we need to make sure
that all microservice instances refer to the latest value without restarting them. For this, Spring
Cloud Config provides a special annotation, @RefreshScope. This annotation exposes a new
endpoint in Spring Actuator. When we invoke that url for an individual microservice, the
microservice reloads its configuration information without restarting.
Apart from this, also add the entry below in the microservice's application.properties file:
management.endpoints.web.exposure.include=*
This will expose all endpoint urls of Spring Actuator.
Here you will see the refresh url for your microservice.
Hit that url using a POST request (without providing any body; an example follows).
This will make your microservice re-fetch config information from the config server.
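A sketch of the refresh call for the accounts instance (the port is the one used earlier in these notes):

# empty-body POST triggers a re-fetch from the config server
curl -X POST http://localhost:8080/actuator/refresh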
G) Get the accounts properties again; you will now get the updated value.
H) Please note that properties like db details or email will still require a server restart, as such
properties require a re-connection to the db or SMTP server.
I) One can create a shell script or docker-compose file to hit the refresh urls of all microservices upon a
config information update, instead of manually calling the refresh urls for each microservice
To support encryption of sensitive properties, add an encryption key in the config server's application.properties file:
encrypt.key=hitesh791
B) Once you do this, the config server will expose the two POST urls below (usage sketch follows):
http://<ip>:<port>/encrypt
http://<ip>:<port>/decrypt
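A sketch of how these endpoints are typically used (the value and property name are illustrative):

# returns the cipher text for the given plain-text value
curl -X POST http://localhost:8071/encrypt -d "my-db-password"

# store the result in the config repo prefixed with {cipher}, e.g.
# accounts.datasource.password={cipher}<returned-cipher-text>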
All these challenges are solved by the Service Discovery and Registration pattern:
C) Client microservices: Here, we are talking about backend microservices invoking other
backend microservices, i.e. internal microservice communication.
(For microservice invocation from the UI we have a different concept: the API Gateway)
- A client microservice invokes any other microservice by calling the Service Discovery layer. For
example, a client microservice will invoke the accounts microservice by invoking the Service Discovery
layer using the url services.eazybank.com/accounts.
The Service Discovery layer will check the available instances of the accounts microservice. If
multiple instances are available, it will route the request to a specific instance using some
algorithm (for example round robin)
- We use client-side load balancing to cache exact endpoint url information for a service
- This also reduces the load on the Service Discovery layer
- Let's try to understand this with a flow:
➢ Accounts Service requests Loans Service for the first time
➢ There is no information for Loans Service in the client-side cache
➢ It invokes the Service Discovery layer, which returns an available instance
➢ The client-side cache synchronizes Loans Service instance info (all available instances) from the Service
Discovery layer
➢ Accounts Service requests Loans Service once again
➢ This time the information for Loans Service is found in the client cache
➢ The client-side load balancer then returns a specific instance based on some logic
(round-robin or proximity)
- The client-side cache will always refresh its information from the Service Discovery layer periodically
(the interval is configurable)
- But there may arise a scenario where the client-side cache returns an instance endpoint address
which is down. In that case, instead of failing, the client microservice will query the Service Discovery
layer to get the latest instance information, and the client-side cache will also refresh itself
- All this is achieved by the Spring Boot Framework with minimal configuration and coding.
5) Spring Cloud Support for Service Discovery and Registration:
Spring Cloud provides this support using three components:
A) Spring Cloud Netflix Eureka Server (Eureka Server): a Service Discovery agent which acts as the
Service Registry
B) Spring Cloud LoadBalancer: a library for client-side load balancing
C) Netflix Feign Client: for performing the operations of Service Discovery.
Provide the entries below for the Eureka Server:
eureka.instance.hostname=localhost
eureka.client.registerWithEureka=false
eureka.client.fetchRegistry=false
eureka.client.serviceUrl.defaultZone=http://${eureka.instance.hostname}:${server.port}/eureka/
Please note that the above information can also be provided in the application.properties file of the Eureka Server
microservice.
Once you start the Eureka Server application, it will check whether the spring.config.import property is available. If
yes, it will connect to the Config Server, read the relevant properties and start the Eureka Server.
(Otherwise it will read the information available in the application.properties file to get details of the port, endpoint
address etc.) A minimal main class sketch follows.
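A minimal sketch of the Eureka Server main class (the class name is an assumption):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer   // starts the embedded Eureka service registry
public class EurekaServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(EurekaServerApplication.class, args);
    }
}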
In each client microservice, provide the entries below so it registers with Eureka:
eureka.instance.preferIpAddress=true
eureka.client.registerWithEureka=true
eureka.client.fetchRegistry=true
eureka.client.serviceUrl.defaultZone=http://localhost:8070/eureka/
endpoints.shutdown.enabled=true
management.endpoint.shutdown.enabled=true
c) Start the config server, the Eureka Server and the client microservice application. Now hit the url
for the Eureka Server; you can see the information regarding the running microservice
instance.
Hit the highlighted url. It will take you to the actuator url, where information is displayed as
per the info.app.* property values in the application.properties file
Important Points:
➢ Sometimes it takes time for information to get reflected in the Eureka Server, so wait for some time
before the information about your instance gets displayed in the Eureka Server
➢ Right now we have only one Accounts microservice instance (the name comes from the
spring.application.name property value), hence the number of instances shown is 1. If we start multiple
instances, there will be multiple entries under the microservice with the logical name Accounts, where
each entry corresponds to one instance.
A user has to invoke the Eureka Server url with the logical name of the microservice to get details of the
available instances of that microservice.
For example, hit the url:
http://<eureka server ip>:<eureka server port>/eureka/apps/<service name>
In our case, hit the url:
http://localhost:8070/eureka/apps/accounts
Then the xml below will be displayed:
<application>
<name>ACCOUNTS</name>
<instance>
<instanceId>md603cxc.ad001.siemens.net:accounts:8081</instanceId>
<hostName>192.168.29.174</hostName>
<app>ACCOUNTS</app>
<ipAddr>192.168.29.174</ipAddr>
<status>UP</status>
<overriddenstatus>UNKNOWN</overriddenstatus>
<port enabled="true">8081</port>
<securePort enabled="false">443</securePort>
<countryId>1</countryId>
<dataCenterInfo class="com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo">
<name>MyOwn</name>
</dataCenterInfo>
<leaseInfo>
<renewalIntervalInSecs>30</renewalIntervalInSecs>
<durationInSecs>90</durationInSecs>
<registrationTimestamp>1671385333253</registrationTimestamp>
<lastRenewalTimestamp>1671385333253</lastRenewalTimestamp>
<evictionTimestamp>0</evictionTimestamp>
<serviceUpTimestamp>1671385333253</serviceUpTimestamp>
</leaseInfo>
<metadata>
<management.port>8081</management.port>
</metadata>
<homePageUrl>http://192.168.29.174:8081/</homePageUrl>
<statusPageUrl>http://md603cxc.ad001.siemens.net:8081/actuator/info</statusPageUrl>
<healthCheckUrl>http://md603cxc.ad001.siemens.net:8081/actuator/health</healthCheckUrl>
<vipAddress>accounts</vipAddress>
<secureVipAddress>accounts</secureVipAddress>
<isCoordinatingDiscoveryServer>false</isCoordinatingDiscoveryServer>
<lastUpdatedTimestamp>1671385333253</lastUpdatedTimestamp>
<lastDirtyTimestamp>1671385333203</lastDirtyTimestamp>
<actionType>ADDED</actionType>
</instance>
</application>
You get other useful information regarding the instance, like ip, port, health, actual url etc.
➢ The Eureka server exposes REST urls to access information of registered microservices.
➢ By default, all client microservices send a heartbeat signal to the Eureka Server every 30s
8) De-registering a microservice instance:
a) The actuator for each microservice will expose a shutdown url (because we enabled
this url in the application.properties file)
b) Hit the url: http://<ip address of ms>:<port of ms>/actuator. In our case hit the url for the accounts
microservice:
http://localhost:8081/actuator
c) If an instance is shut down forcefully, the Eureka Server will use the heartbeat mechanism to
deregister the instance.
9) Feign clients to invoke other microservices: Now that all client microservices are able to register
themselves in the Eureka Server, it's time to learn how individual microservices can discover
instances of other microservices using Feign clients:
Let's take a scenario where we expose a new REST API in the Accounts microservice which fetches the
account, loan and card details of a customer. In this case the Accounts microservice has to invoke the
Loans and Cards APIs to get the required information. Follow the steps below:
c) Just like with a JPA repository, where we only added an interface and the rest of the code was automatically
generated by the JPA framework, here also we will only create a client interface and the rest of the
code will be generated by the Feign client framework. For example, the accounts MS has to invoke the cards MS,
so we create a Feign client for the cards MS in the accounts MS. See below:
@FeignClient("cards")
d) Develop an API in the Accounts microservice to fetch details of accounts, loans and cards by
customer id
10) Make changes in the docker-compose file for the Eureka Server (here we have taken the example of the default
docker-compose file):
a) Make an entry for the Eureka Server registry:
eurekaserver:
  image: hitesh791/eurekaserver:latest
  ports:
    - "8070:8070"
  networks:
    - hitesh791-network
  depends_on:
    - configserver
  deploy:
    restart_policy:
      condition: on-failure
      delay: 15s
      max_attempts: 3
      window: 120s
    resources:
      limits:
        memory: 700m
  environment:
    SPRING_PROFILES_ACTIVE: default
    SPRING_CONFIG_IMPORT: configserver:http://configserver:8071
b) Make changes to the client service definitions in the docker-compose file (here we have taken
the example of the accounts microservice). Add the two entries below:
depends_on:
  - configserver
  - eurekaserver
……
EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eurekaserver:8070/eureka/
11) Running the Accounts microservice with two instances using the docker-compose file:
a) Add one more entry for the accounts microservice in the docker-compose file:
accounts1:
  image: hitesh791/accounts:latest
  ports:
    - "8081:8080"   # host port differs from the first instance, as 8080 is already in use
  networks:
    - hitesh791-network
  depends_on:
    - configserver
    - eurekaserver
  deploy:
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 3
      window: 120s
    resources:
      limits:
        memory: 700m
  environment:
    SPRING_PROFILES_ACTIVE: default
    SPRING_CONFIG_IMPORT: configserver:http://configserver:8071/
    EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eurekaserver:8070/eureka/
- Note that to run another instance of the same microservice we have changed the service name in the
docker-compose file, as a docker-compose file cannot have two services with the same name.
But both of them refer to the same Docker image; hence they both belong to the same Accounts
microservice (as per the value of the property spring.application.name). Hence once the two instances
are up, they are registered under the same microservice in the Eureka Server
- Once both of these instances are up you will see two instances under the Accounts microservice
in the Eureka dashboard.
The first two properties are configured at the client microservices; the remaining are configured at the Eureka
Server.
Resiliency is a feature which enables a system to come out of an erroneous or failure situation. In a
typical architecture there can be 100s of microservices running; how can we make sure that we get
the desired collective output in case of a failure?
A) Cascading Failure: one slow or failed microservice should not fail the entire chain of
microservices
B) Handling failed microservices gracefully: building a fallback mechanism for when one of the
microservices fails. For example, returning a default value, fetching data from the DB, calling
another microservice, or getting the value from a cache
C) Making services self-healing: in the case of slow-performing microservices, how we should
configure timeouts and retries, and give the slow microservice some time to recover (during this time
the slow microservice should not accept any other requests)
Spring provides support for the above issues through Resilience4j, which is a lightweight framework
inspired by Netflix Hystrix. Resilience4j provides fault-tolerance mechanisms in the event of a network or
microservice failure through the following patterns:
- Circuit Breaker
- Fallback
- Retry
- Rate Limiter
- Bulkhead
A) App1 needs customer details, hence it invokes the Accounts microservice API, which in turn
invokes the Loans and Cards microservices
B) App2 needs a response from Accounts
C) App3 needs a response from Loans
D) App4 needs a response from Cards
E) The Cards microservice starts responding slowly due to a DB connectivity issue with the Cards DB
F) Due to this, the response from App1 becomes slow
G) Now multiple customers invoke App1. Due to this, multiple threads are created, as previous
threads are still waiting for the response from Cards
H) This results in the creation of many threads, and resource utilization increases
I) Due to the increase in usage of memory, resources and CPU, the Accounts and Loans microservices
become slow.
J) The entire system hangs now.
How it works:
A) It is inspired by the electrical circuit breaker. In an electrical circuit breaker, when too much current
passes, the circuit is opened so that no more current can pass.
B) Similarly, in the software circuit breaker pattern, if a call takes too long then that call is killed.
Also, if there are too many calls to a service, it can prevent future calls from being
made to that service. Thereby we can manage the load in the system and avoid cascading
failures.
When the CB is open (meaning the circuit is open), the Accounts and Loans microservices will
respond immediately instead of waiting; they can also receive a default response if one is
configured. If a default response is not configured, they will receive an exceptional response
in a graceful manner
C) The CB pattern is smart enough to monitor the state of the erroneous service. In our case the circuit
to the Cards MS is opened because it was responding slowly. During the time when no new requests are sent
to the Cards MS, it can use this time to heal itself.
The CB will then periodically check the state of the Cards microservice. It will half-open the circuit;
if the problem is resolved it will close the circuit, otherwise it will open the
circuit again.
This is a continuous process in the CB pattern
D) Advantages of the CB pattern:
- Fail fast: fails fast instead of waiting
- Fail gracefully: a default response can also be configured
- Recover seamlessly: gives the erroneous service time to recover.
E) Implementing the CB pattern:
- Add the dependencies below in the pom.xml of the microservice where you want to apply the CB pattern:
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-spring-boot2</artifactId>
</dependency>
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-circuitbreaker</artifactId>
</dependency>
<dependency>
    <groupId>io.github.resilience4j</groupId>
    <artifactId>resilience4j-timelimiter</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
We want to make the Accounts microservice API myCustomerDetails (which gets details from
Accounts, Cards and Loans) resilient, hence the above entries in the pom.xml of the Accounts MS
- Add the annotation for the CB pattern. In our case, add the annotation below to the myCustomerDetails
API:
@PostMapping("/myCustomerDetails")
@CircuitBreaker(name = "detailsForCustomerSupportApp")
public CustomerDetails myCustomerDetails(@RequestBody Customer customer) {…}
Please note that we can add this annotation at the individual API (method) level or at the microservice
level. But never forget to provide a name for the CB so that you know which CB has the
issue
Configure the CB in the microservice's application.properties:
resilience4j.circuitbreaker.configs.default.registerHealthIndicator=true
resilience4j.circuitbreaker.instances.detailsForCustomerSupportApp.minimumNumberOfCalls=5
resilience4j.circuitbreaker.instances.detailsForCustomerSupportApp.failureRateThreshold=50
resilience4j.circuitbreaker.instances.detailsForCustomerSupportApp.waitDurationInOpenState=30000
resilience4j.circuitbreaker.instances.detailsForCustomerSupportApp.permittedNumberOfCallsInHalfOpenState=2
F) Testing the CB pattern:
- Start all microservices except Cards
- Hit the Eureka Server url to check that the required services are up
- Now call the myCustomerDetails API from Postman. (Remember, this API fetches data from the
Accounts, Loans and Cards MS, but the Cards MS is down)
- You will observe that upon calling, you receive an internal server error, as the Cards MS is down:
{
    "timestamp": "2023-01-17T15:23:55.504+00:00",
    "status": 500,
    "error": "Internal Server Error",
    "path": "/myCustomerDetails"
}
- In the console you will see a connection-related exception for the Cards MS.
- Invoke the same call from Postman 4 more times. After a total of 5 calls you will see that the response
from the server becomes fast and the exception in the console is different:
io.github.resilience4j.circuitbreaker.CallNotPermittedException: CircuitBreaker
'detailsForCustomerSupportApp' is OPEN and does not permit further calls
We can now see that after 5 calls, the CB has opened the circuit for the Cards microservice and the error
in the console has changed.
@CircuitBreaker(name = "detailsForCustomerSupportApp", fallbackMethod = "myCustomerDetailsFallBack")
There are certain rules to follow while writing a fallback method (see the sketch after this list):
➢ It should accept exactly the same input as the original method.
➢ An additional Throwable argument must be added after the original parameters (here the second
argument), so that you can provide exception-specific implementations
➢ In this example, we simply do not fetch data from the Cards API, but you can write the
fallback method logic as per your business requirement; for example, you can return a
default cards response or fetch data from a cards cache etc.
➢ Please note that the fallback method will be called only when there is an error from the Cards
microservice; otherwise we receive the reply from the original method.
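A sketch of such a fallback method, under the rules above (the body is illustrative; how the response is rebuilt depends on your domain classes):

// same parameter list as myCustomerDetails, plus a Throwable as the last argument
private CustomerDetails myCustomerDetailsFallBack(Customer customer, Throwable t) {
    // hypothetical logic: rebuild the response without the cards section,
    // e.g. populate accounts and loans data via the repository/Feign calls
    CustomerDetails customerDetails = new CustomerDetails();
    // cards data is skipped because the Cards MS call failed
    return customerDetails;
}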
3) Retry Pattern:
Sometimes a microservice call can fail due to some network issue. In such cases we can make use
of the Retry pattern, where the call is retried a configurable number of times (see the sketch below).
- With retry attempts set to 3, the call will be invoked 3 times before invoking the fallback method. Please
note that in this case the original invocation is what is called 3 times (i.e. the invocation on which the
annotation is applied)
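A sketch of how the Retry pattern is typically applied with Resilience4j (the instance name and wait duration are assumptions; the resilience4j-retry module must be on the classpath):

@Retry(name = "retryForCustomerDetails", fallbackMethod = "myCustomerDetailsFallBack")

with the matching entries in application.properties:

resilience4j.retry.instances.retryForCustomerDetails.maxAttempts=3
resilience4j.retry.instances.retryForCustomerDetails.waitDuration=2000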
4) Rate Limiter Pattern: limits the number of calls allowed in a given period. Example configuration
(here the sayHello API allows 1 call per 5-second refresh period; the matching annotation is sketched below):
resilience4j.ratelimiter.configs.default.registerHealthIndicator=true
resilience4j.ratelimiter.instances.sayHello.timeoutDuration=5000
resilience4j.ratelimiter.instances.sayHello.limitRefreshPeriod=5000
resilience4j.ratelimiter.instances.sayHello.limitForPeriod=1
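The annotation on the API might look like this sketch (the endpoint and fallback name are illustrative):

@GetMapping("/sayHello")
@RateLimiter(name = "sayHello", fallbackMethod = "sayHelloFallback")
public String sayHello() {
    return "Hello";
}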
5) Bulkhead Pattern: This is similar to the bulkhead design in a ship. In a ship, if one compartment is filled
with water, the entire ship does not sink, because we seal that compartment. We can use a similar pattern
in microservices, where we limit the number of resources to be used by a microservice.
That way, if a given microservice crosses the threshold number of resources, it has to wait,
so that it does not eat up all the resources. This will not block other API calls.
- For example, in our sample code we can apply this pattern on the myCustomerDetails API, as this is a
complex API which calls other APIs (see the sketch after this paragraph).
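A sketch of applying it with Resilience4j (the instance name and limits are assumptions; by default @Bulkhead uses the semaphore variant):

@Bulkhead(name = "bulkheadForCustomerDetails", fallbackMethod = "myCustomerDetailsFallBack")

with entries in application.properties such as:

resilience4j.bulkhead.instances.bulkheadForCustomerDetails.maxConcurrentCalls=10
resilience4j.bulkhead.instances.bulkheadForCustomerDetails.maxWaitDuration=10ms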
ROUTING AND CROSS CUTTING CONCERNS
A) Challenges:
- How to route a request to a given endpoint based on the url. For example, a client may want to
invoke the beta or stable version of a microservice
- How to handle cross-cutting concerns like logging, auditing and security in microservices. We can
give this responsibility to the individual microservices, but that would be a huge burden on
individual developers, and they could also break consistency. We could also use a common
library, but that would increase coupling.
Hence, we need a common service which can take care of these issues
- How to monitor the inbound and outbound traffic and make it policy-aware
- When a request arrives, the Gateway Handler Mapping matches it to a route. After this, the Pre Filters
are executed. This is the place where we can take care of cross-cutting
concerns like logging, tracing and auditing
- The request is then sent to the Eureka Server / microservice instance
- Once the microservice instance processes the request, the response is sent through the Post Filters. Here again
we can take care of cross-cutting concerns, like adding a header to the response or calculating the
actual request processing time
- The response is then finally sent back via the Gateway Handler Mapping
spring.application.name=gatewayserver
spring.config.import=optional:configserver:http://localhost:8071/
management.endpoints.web.exposure.include=*
## Configuring info endpoint
info.app.name=Gateway Server Microservice
info.app.description=Eazy Bank Gateway Server Application
info.app.version=1.0.0
management.info.env.enabled = true
management.endpoint.gateway.enabled=true
spring.cloud.gateway.discovery.locator.enabled=true
spring.cloud.gateway.discovery.locator.lowerCaseServiceId=true
logging.level.com.eaztbytes.gatewayserver: DEBUG
- Hit the url found in the previous step. Here you will find that the Gateway Server loads all registry
information from the Eureka Server on startup.
- Finally, invoke the microservice via the Gateway Server:
http://localhost:<gateway-server-port>/accounts/myCustomerDetails
Upon invoking the above url, the Gateway Server will locate the microservice instance from the Eureka registry and
invoke the service.
Please note that here we have not used any custom routing, pre-filter or post-filter
http://localhost:<gateway-server-port>/<company-name>accounts/myCustomerDetails
In above case we need to remove <company-name> from input url to locate actual micro service
from Eureka Server
@Bean
public RouteLocator myRoutes(RouteLocatorBuilder builder) {
    return builder.routes()
        .route(p -> p
            .path("/eazybank/accounts/**")
            .filters(f -> f.rewritePath("/eazybank/accounts/(?<segment>.*)", "/${segment}")
                .addResponseHeader("X-Response-Time", new Date().toString()))
            .uri("lb://ACCOUNTS"))
        .route(p -> p
            .path("/eazybank/loans/**")
            .filters(f -> f.rewritePath("/eazybank/loans/(?<segment>.*)", "/${segment}")
                .addResponseHeader("X-Response-Time", new Date().toString()))
            .uri("lb://LOANS"))
        .route(p -> p
            .path("/eazybank/cards/**")
            .filters(f -> f.rewritePath("/eazybank/cards/(?<segment>.*)", "/${segment}")
                .addResponseHeader("X-Response-Time", new Date().toString()))
            .uri("lb://CARDS"))
        .build();
}
A) Challenges:
- How to trace a specific request in a microservice-based distributed environment. For example,
a request initiated by a client may span multiple microservices distributed across multiple
containers/nodes/AZs. How to trace a single request in such cases
- How to aggregate logs? Take the previous example, where a request spans multiple microservices.
Each microservice generates its own log. How to aggregate logs from multiple microservices
in a single place?
When the above two problems are solved, we can also debug issues easily.
- Spring Cloud Sleuth adds a trace id and span id to each log statement, so we can search
by trace id or span id.
It does so by intercepting each incoming request, generating an id for each call and propagating
that id across the subsequent calls.
- Zipkin performs the task of aggregating logs from multiple microservice locations into a single
centralized location.
It also breaks down a transaction or a call into individual components to identify hot spots,
performance issues etc.
We can observe that the trace id remains the same, but the span id changes when the service changes. Please
note that for the first microservice the span id and trace id are the same.
- Each individual microservice will write log information to a centralized Zipkin server. For that, a
small configuration regarding the Zipkin server location is required; the rest is taken care of by the Spring
Cloud framework.
- Log information can be pushed to the Zipkin server in a synchronous manner, but that might be
slow
- Log information can also be pushed to the Zipkin server in an asynchronous manner, by pushing log
information to RabbitMQ or a JMS queue. Zipkin can then be configured to listen to that
queue.
- One could also write log information to ELK or Splunk, but in this case we will discuss the Zipkin
server.
- The Zipkin server has four components: Collector, Storage, Zipkin Query Service API, Web UI.
In this tutorial we will discuss in-memory storage of logs in Zipkin.
- Invoke Accounts' myCustomerDetails API, which also invokes the Loans and Cards APIs
Accounts Microservice:
2023-04-02 00:20:45.690 INFO [accounts,69ffeccad0ba7b9f,69ffeccad0ba7b9f] 19392 --- [nio-8081-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
Since accounts is the first API to get called, its span id and trace id are the same
Cards Microservice:
2023-04-02 00:20:46.837 INFO [cards,69ffeccad0ba7b9f,cae7fded484b9699] 8824 --- [nio-9001-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
Loans Microservice:
2023-04-02 00:20:46.420 INFO [loans,69ffeccad0ba7b9f,0dc363f6639ff543] 17004 --- [nio-8090-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
- Add the section below to the docker-compose file to start Zipkin in a Docker container:
zipkin:
  image: openzipkin/zipkin
  deploy:
    resources:
      limits:
        memory: 700m
  ports:
    - "9411:9411"
  networks:
    - hitesh791-network
Also add the entry below under the environment section of each microservice's service definition, so it can locate Zipkin:
SPRING_ZIPKIN_BASEURL: http://zipkin:9411/
- Now restart all microservices, re-create the Docker images and run the docker-compose file
MICROSERVICE MONITORING
We can have 100s of microservices running in a distributed environment. In such cases we need a
mechanism to monitor the health, statistics and metrics of individual microservices.
A) Challenge: monitoring the health and metrics of hundreds of microservice instances, as described above.
B) Approaches:
- Actuator:
➢ Actuator by default exposes endpoints to get information regarding a running microservice,
like health, metrics, thread dumps etc.
➢ This is a very basic approach.
➢ In case we have multiple instances of microservices, we have to visit the url of each
instance to get information from the actuator. This is a tedious task.
- Centralized Framework Approach:
➢ There should be a centralized app from which we can get all monitoring-related
information for all microservices/instances.
➢ This is where Micrometer, Prometheus and Grafana come into play.
➢ But it should not be the responsibility of individual microservice instances to send
information to Prometheus, as this would be a performance issue.
➢ Hence it's Prometheus's responsibility to collect information from the individual microservices
by calling the actuator endpoint urls exposed by the microservice instances.
➢ But the data coming from the actuator needs to be converted into a format that Prometheus
understands. This is done by Micrometer.
➢ Micrometer is a generic framework for data conversion. It can also convert data into ELK
format. You just need to add the relevant Micrometer plugin.
➢ The interval at which Prometheus pulls data from the actuator via Micrometer is
configurable.
➢ But Prometheus has limited functionality to offer in its UI. This is where Grafana comes
into play.
➢ Grafana pulls data from Prometheus to render a very rich monitoring UI. Alerts can
also be configured here. Grafana has a lot to offer in this context.
- One can also create custom metrics to be sent to Prometheus via Micrometer. For this, add the
dependency below in pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
- For a custom metric, add the @Bean entry below in the AccountsApplication class. This will inject
the bean into the Spring context.
@Bean
public TimedAspect timedAspect(MeterRegistry registry) {
    return new TimedAspect(registry);
}
Add the annotation below to the method whose execution time you want to measure:
@PostMapping("/myAccount")
@Timed(value = "getAccountsDetail.time", description = "Time taken to return account details")
public Account getAccountsDetail(@RequestBody Customer customer) {
    Account account = accountsRepository.findByCustomerId(customer.getCustomerId());
    if (account != null) {
        return account;
    }
    return null;
}
- Once you bring the microservices up, the actuator will expose a new url where actuator metric
data is available in Prometheus format. Prometheus will invoke this url at regular intervals
to collect and collate the data.
For example, for the accounts microservice, the url will be: http://localhost:8080/actuator/prometheus
D) Setting up Prometheus:
Now it's time to set up Prometheus so that it can collect data from the actuator.
- Note that we need to set up Prometheus in a way that it is able to pull data from each
microservice instance.
- Hence Prometheus should know the details of each instance.
- For this, create a file prometheus.yml with the content below:
global:
  scrape_interval: 5s     # Set the scrape interval to every 5 seconds.
  evaluation_interval: 5s # Evaluate rules every 5 seconds.
scrape_configs:
  - job_name: 'accounts'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['accounts:8080']
  - job_name: 'loans'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['loans:8090']
  - job_name: 'cards'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['cards:9000']
Under scrape_configs we define the microservices whose instance information we need to read via the
actuator url. metrics_path refers to the path exposed by the Actuator. static_configs defines the endpoints
where the microservice instances are running. In the above example we have specified that Prometheus
should collect actuator info for the accounts microservice instance running on port 8080. For other
instances we need to provide comma-separated values in the targets section. Please note that we
specify the target in the form accounts:8080, as in a docker-compose environment we identify the location
of a microservice via its service name rather than an actual ip.
Add the service below to the docker-compose file:
prometheus:
  image: prom/prometheus:latest
  ports:
    - "9090:9090"
  volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
  networks:
    - eazybank
Here we are pulling the Prometheus image from Docker Hub. In the volumes section we are
instructing Docker to mount the prometheus.yml file at /etc/prometheus/prometheus.yml in the
container. This is required because when the Prometheus container is launched and run, it looks
for its configuration in /etc/prometheus/prometheus.yml
- Keep the docker-compose file and the prometheus.yml file in the same location.
- Execute the command docker-compose up
- Navigate to the url: http://<ip-address>:<port>/actuator/prometheus
You will not see our custom metric yet, as we have not hit the getAccountsDetail API yet. Hit
the API and you will then see the metric at the above url
- Navigate to the Prometheus url: http://<ip>:9090.
Here you can see all metrics related to the microservice instances you configured in
prometheus.yml.
E) Setting up Grafana:
If the monitoring features offered by Prometheus are not enough, you can configure Grafana.
Grafana will pull data from Prometheus to create more interactive dashboards and UIs.
- Add the image entry below for Grafana in the docker-compose file:
grafana:
  image: "grafana/grafana:latest"
  ports:
    - "3000:3000"
  environment:
    - GF_SECURITY_ADMIN_USER=admin
    - GF_SECURITY_ADMIN_PASSWORD=password
  networks:
    - hitesh791-network
  depends_on:
    - prometheus
- Add Prometheus as a data source; if configured correctly, your connection test will be
successful
- You can create custom dashboards, but some inbuilt dashboards are also available. For that you
can visit the Grafana website and import dashboards from there using their url
CONTAINER ORCHESTRATION
A) Challenge:
- Auto Deployment
- Deployment of newer version without downtime
- Automatic Scale Up, Scale Down, Healing
- Management in a complex cluster
KUBERNETES
K8s is an orchestration framework used for the maintenance, scaling, self-healing and deployment of
containers.
It provides:
➢ Service Discovery and Load Balancing
➢ Storage Orchestration
➢ Automated rollouts and rollbacks
➢ Automatic bin packing
➢ Self-healing
➢ Secret and configuration management (no config server required)
➢ It is cloud neutral. It can be run and deployed on-prem; AWS, GCP and Azure also
provide support for K8s
A) Deep dive into the Kubernetes architecture:
- Just like any distributed computing environment, K8s follows a clustered architecture.
- It consists of a Master Node and Worker Nodes
- There can be any number of Worker Nodes, like 10, 50, 100 or 1000. You can have one or more
Master Nodes, depending on the number of Worker Nodes
- Worker Nodes are where containers are deployed and run.
- The Master Node makes sure that the Worker Nodes are working properly.
- The Master Node has 4 important components:
➢ kube API Server:
• It's like a set of REST services which represents all operations that can be done by the
master node.
• Anyone who wants to interact with the cluster does so through the kube
API Server.
• So the kube API Server is like a gateway to the cluster.
• It also acts like a gatekeeper for the cluster, so that only authenticated users
interact with the cluster.
• There are two ways to interact with the kube API Server: the UI and the CLI (kubectl).
• So, to interact with a K8s cluster you only tell the Master Node what to do. A request
can be, for example, adding a new container, scaling containers up or down, adding
more container instances, self-healing etc. Eventually that request is served by the
kube API Server.
➢ Scheduler:
• Upon receiving a request, the kube API Server forwards it to the
Scheduler.
• A request can be, for example, to deploy a new microservice. So, a
user can initiate a request to deploy the accounts microservice with 3 replicas
• The Scheduler will receive a request like: this user wants to deploy three containers
for this Docker image.
• The Scheduler will then perform internal calculations and check which Worker Node
has less load.
• Upon choosing a Worker Node, deployment of the microservice container is
scheduled on the chosen Worker Node.
• The Worker Node will then take instructions from the Scheduler to deploy containers in a
POD.
➢ Controller Manager:
• One can also give a deployment instruction like: I want three instances of the
Accounts microservice, always.
• So, it is the job of the Controller Manager to ensure that three instances (desired) of the
microservice are always available. It accomplishes this through health checks.
• So, there is a desired state and a current state. The Controller Manager ensures that the
desired state is the same as the current state.
➢ etcd:
• It's like the Namenode in a Hadoop cluster
• It stores metadata information
• It's the brain of your cluster, in the form of a database (where info is stored in key-value
form)
• It has information like the number of worker nodes, the location of worker nodes, the
number of PODs, and the number of microservice containers (desired)
• When the Scheduler receives a deployment instruction, it writes all relevant
information to etcd (like the number of desired containers for a microservice)
• The Controller Manager queries etcd to get the desired state and then compares it with the
current state.
➢ Docker:
• Since our containers run on Docker, each Worker Node will have
Docker installed on it.
➢ kube-proxy:
• Via kube-proxy you can expose your containers' endpoint urls
• You can make your services private or public
• It also helps in configuring firewall settings
• An end user willing to invoke the service of a container has to invoke it via kube-proxy.
➢ POD:
• A POD is the smallest deployment component used to deploy and run containers,
just like a container is the smallest unit of deployment in Docker.
• When a deployment instruction is sent via the kube API Server, that request is
forwarded to the Scheduler.
• The Scheduler will then choose a Worker Node based on load and other parameters.
• Once a Worker Node is chosen, the Scheduler will pass the instruction to the kubelet
of that Worker Node.
• The kubelet will then create a new POD where the container will be deployed and
run.
• A POD can be treated like a mini version of a Worker Node, having its own CPU and
memory. Inside the POD, containers run.
• Usually a single container is run in a POD; it is not recommended to run multiple
containers inside one POD.
• However, you can run helper containers along with the main container inside a
single POD.
• Once a POD is created it will get an IP address, assigned by the pod networking
solution, and a port to interact with the containers. This information is available
via kube-proxy and can be used by end users to invoke the container's
business logic.
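The POD IP addresses assigned by the pod networking solution can be inspected with the command below (output columns are indicative):
kubectl get pods -o wide   # shows each POD's name, status, IP address and the Worker Node it runs on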
- What we call Pseudo-Distributed mode in Hadoop is called minikube in K8s (a single-node cluster for local development).
- GCP's K8s service (GKE) comes with a free tier, unlike AWS and Azure.
B) Creating a K8s cluster in GCP:
- Create an account in GCP
- Create a new project
- Under the new project, search for Kubernetes Engine and create a cluster.
- Click on the node, and you will see the details given below:
➢ Some monitoring stats of the node
➢ Log information of the node
➢ Events happening inside the node
➢ Default PODs created by GKE to accomplish the above activities
So, in essence, by default we have some monitoring and logging enabled in our cluster.
- To enable logging and monitoring for any service deployed in the cluster we need to enable
the services given below:
➢ APIs and Services -> Cloud Logging API
➢ APIs and Services -> Stackdriver API
➢ APIs and Services -> Stackdriver Monitoring API
- Open your Kubernetes cluster. Click on the ellipsis and choose the Connect option.
- Upon clicking on Connect, you will get a command to connect to the cluster.
- Open the Google Cloud SDK CLI and execute that command.
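The connect command provided by the console typically has the shape below (cluster name, zone and project id are placeholders):
gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>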
If you face an error while connecting, resolve it as prompted and then once again establish the
connection with the cluster. Upon successful connection you will see a confirmation message.
- We will create yaml files to provide deployment information to K8s regarding which
docker image to use, the number of replicas etc.
- We will also create a yaml file by which we establish the connections between the given set
of microservices. For example, accounts needs to know the location of the config server and
the eureka server, and each microservice needs to know the location of Zipkin to publish its
trace details.
- Given below is an example of the zipkin yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zipkin-deployment
  labels:
    app: zipkin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zipkin
  template:
    metadata:
      labels:
        app: zipkin
    spec:
      containers:
        - name: zipkin
          image: openzipkin/zipkin
          ports:
            - containerPort: 9411
---
apiVersion: v1
kind: Service
metadata:
  name: zipkin-service
spec:
  selector:
    app: zipkin
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9411
      targetPort: 9411
Given below is the config server yaml file, on the same pattern:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configserver-deployment
  labels:
    app: configserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configserver
  template:
    metadata:
      labels:
        app: configserver
    spec:
      containers:
        - name: configserver
          image: eazybytes/configserver:latest
          ports:
            - containerPort: 8071
          env:
            - name: MANAGEMENT_ZIPKIN_TRACING_ENDPOINT
              valueFrom:
                configMapKeyRef:
                  name: eazybank-configmap
                  key: MANAGEMENT_ZIPKIN_TRACING_ENDPOINT
            - name: SPRING_PROFILES_ACTIVE
              valueFrom:
                configMapKeyRef:
                  name: eazybank-configmap
                  key: SPRING_PROFILES_ACTIVE
---
apiVersion: v1
kind: Service
metadata:
  name: configserver-service
spec:
  selector:
    app: configserver
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8071
      targetPort: 8071
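Once the yaml files are ready they can be applied to the cluster. A minimal sketch, assuming the file names zipkin.yaml and configserver.yaml:
kubectl apply -f zipkin.yaml          # creates the zipkin Deployment and Service
kubectl apply -f configserver.yaml    # creates the config server Deployment and Service
kubectl get pods                      # verify that the PODs reach Running state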
- Given below is the content of configmaps.yaml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: eazybank-configmap
data:
  MANAGEMENT_ZIPKIN_TRACING_ENDPOINT: http://zipkin-service:9411/api/v2/spans
  #SPRING_ZIPKIN_BASEURL: http://zipkin-service:9411/
  SPRING_PROFILES_ACTIVE: default
  SPRING_CONFIG_IMPORT: configserver:http://configserver-service:8071/
  EUREKA_CLIENT_SERVICEURL_DEFAULTZONE: http://eurekaserver-service:8070/eureka/
(Note: the ConfigMap name and keys must match the configMapKeyRef entries used in the deployment yaml files above.)
You can see that the information we provided in the docker-compose file is now in
configmaps.yaml.
However, our config server is still loading the configuration information via the GitHub config
repo.
Please note that you can also create config maps using the kubectl shell provided in the
GCP console. In the GCP console you can also view the YAML-based config map in the K8s
cluster. If you want to reuse the same config map then you can use this YAML.
Config Maps are a good replacement for Config Server, but if we have a large
amount of configuration information then Config Server is the better approach.
As we can see, the instances are up and running. In the READY value 1/1, the first 1
indicates the current replicas and the second 1 indicates the desired replicas.
We can also see that our microservice instances are now exposed as services with a
corresponding IP address, service name and port. This is actually done by kube-proxy.
We can use this information to invoke operations on a microservice
instance.
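These details can also be listed from the CLI (a sketch; output columns are indicative):
kubectl get pods       # READY column shows current/desired replicas
kubectl get services   # shows service name, type, cluster IP, external IP and ports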
➢ Let's revise the overall deployment procedure against the K8s architecture:
• When we want to deploy a new microservice in K8s, we pass that
command via the kubectl CLI. The command goes to the Kube API Server
running on the Master. The Kube API Server passes it to the
Scheduler. The Scheduler checks which node has the bandwidth to deploy a
new instance. Once it identifies a Worker Node, it passes the
command to the kubelet of that Worker Node. The kubelet launches a POD
where the microservice instance will run. Kube-proxy will then
create a service for the instance and expose a port to the external world.
• When we issue a command like get pods, the Kube API Server serves it
from etcd, as etcd is the database holding the entire cluster
information.
➢ Verifying that the microservices are deployed in K8s:
• Click on the Services & Ingress link in the left panel of the K8s cluster.
• On the rendered page, click on the link provided under Endpoints. This will
direct us to the microservice.
Note that we only changed the deployment configuration; we did not change the
service configuration.
➢ Execute the command kubectl get replicaset.
Here you will see 2 replicas for the accounts microservice.
Also, one more POD will be added for the new replica. But the number of services will
remain the same: since we defined the service type as LoadBalancer, the same
Load Balancer will serve via the two replicas.
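Instead of editing replicas in the deployment yaml, the same scale-up can be done from the CLI; a sketch using the deployment name from the earlier yaml:
kubectl scale deployment accounts-deployment --replicas=2   # raise the desired replica count to 2
kubectl get replicaset                                      # DESIRED and CURRENT should both show 2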
➢ Delete one of the PODs using the command kubectl delete pod <pod-name>.
You will see that as soon as we delete one POD, another POD takes its place,
triggered by the Controller Manager (self-healing).
➢ Click on the accounts service link and scroll down. You will see that the accounts
service load balancer is serving two PODs:
A) Create yaml files for the remaining microservices, i.e. cards and loans
B) Apply them to K8s using the Google Cloud CLI (see the sketch below)
C) Validate that they are all up and running
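As a sketch of steps B and C (the file names are assumed):
kubectl apply -f cards.yaml
kubectl apply -f loans.yaml
kubectl get pods,services   # validate that the new PODs and services are up and running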
HELM CHARTS
A) Introduction:
- Helm is a package manager for K8s.
- A package manager is a component used to install/update software modules.
- For example, pip is a package manager for Python and npm is a package manager for Node.
- Similarly, Helm is a package manager for K8s which facilitates deployment of multiple
microservices in K8s.
- A Helm chart will contain the definitions of all the microservice deployments in a single place.
- If we look at the yml files of our services, there is common content which is applicable in
each file and then there is some dynamic content. Helm takes advantage of this
common static content and facilitates creation of a common template yml file which is
applicable to all microservices:
In the yml files of our services we can observe that there is common/static content among
all of them.
B) Installing Helm:
- Navigate to the website: https://helm.sh/
- Click on Get Started and then click on Installation.
- We need the Chocolatey Windows package manager to install Helm:
➢ Navigate to the website https://chocolatey.org/
➢ Click on the Install option
➢ Choose Individual installation
➢ Follow the steps mentioned on the website and install the Chocolatey package
manager
- Now execute the command:
choco install kubernetes-helm
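The installation can be verified, and a chart skeleton scaffolded, with the commands below (the chart name accounts is illustrative; the steps that follow assume this folder structure already exists):
helm version           # confirm Helm is installed
helm create accounts   # scaffold a chart skeleton with Chart.yaml, values.yaml and a templates folder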
➢ Open the Chart.yaml file. There will be some default entries; do not change them.
Add the entry given below to import the common template chart files:
dependencies:
  - name: eazybank-common
    version: 0.1.0
    repository: file://../../eazybank-common
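After declaring the dependency, it has to be pulled into the chart; a sketch, run from the folder containing the accounts chart:
helm dependency build accounts   # resolves the eazybank-common dependency and packages it under charts/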
➢ Under the templates folder we will create templates for the deployment and the service of
the accounts microservice.
➢ Under the templates folder, create the file deployment.yaml with the content given
below:
{{- template "common.deployment" . -}}
The above instruction will import the common.deployment template.
➢ Under the templates folder, create the file service.yaml with the content given below:
{{- template "common.service" . -}}
The above instruction will import the common.service template.
➢ Now we have defined the common templates and imported them as well. Let's now
create values.yaml.
Under the accounts folder, provide the entries given below:
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
deploymentName: accounts-deployment
deploymentLabel: accounts
replicaCount: 1
image:
  repository: eazybytes/accounts
  tag: latest
containerPort: 8080
service:
  type: LoadBalancer
  port: 8080
  targetPort: 8080
config_enabled: true
zipkin_enabled: true
profile_enabled: true
eureka_enabled: true
As you can see, we have defined all the dynamic values for our accounts chart.
- Follow the above steps to create helm charts for the other microservices.
- We will also create one chart per environment, like Prod, Dev etc.
- These environment charts will contain entries for all the helm charts.
- Thus, by installing a single chart we can deploy all the microservices in K8s; see the
sketch below.
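A sketch of installing such an environment chart (the release name and chart path are illustrative):
helm install eazybank-dev ./environments/dev   # deploys every microservice referenced by the dev chart
helm ls                                        # list installed releases
helm uninstall eazybank-dev                    # remove all the deployed microservices in one command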
SECURING MICROSERVICES USING K8S SERVICES
A) With LoadBalancer as the service type, all our microservices in the K8s cluster are exposed to the
outside world. The set-up looks like the diagram below:
Even with the presence of a Gateway server, anyone can access a microservice directly, as the
Load Balancer exposes a public endpoint.
B) We can secure our microservices by giving them the ClusterIP service type (so they are reachable
only from inside the cluster, as shown in the sketch below) and exposing only the Spring Cloud
Gateway, but we still do not know whether the requesting user is an authenticated user.
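A minimal sketch of switching an internal service to ClusterIP (service and label names follow the earlier accounts example; only the gateway would keep type LoadBalancer):
apiVersion: v1
kind: Service
metadata:
  name: accounts-service
spec:
  selector:
    app: accounts
  type: ClusterIP   # no public IP is provisioned; only in-cluster callers can reach it
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080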
C) We can use the OAuth2 framework for this. We can use Keycloak as the authorization server.
D) Introduction to the OAuth2 Framework:
- Basic Authentication and drawbacks:
➢ In basic authentication you provide a user name and password in an HTML form.
➢ The request goes to the back-end server where the credentials are validated against the DB.
➢ If the user is valid then a session is generated; as long as the session is valid, the user
will be allowed access.
➢ Cookies are used to maintain the session.
➢ It is not a good solution when you have REST-based interfaces or in a microservice
ecosystem.
➢ It is also not a good solution when a third party needs to access your API.
- OAuth 2 Framework:
➢ OAuth2 keeps the authentication logic in a separate authorization server. Hence
authentication is decoupled from the business logic.
➢ It has 4 to 5 grant flows.
➢ Authorization Code Grant Flow: used when an end user is involved, for example a user
trying to access a microservice.
➢ Client Credentials Grant Flow: used when one application wants to access another,
for example internal microservice communication.
➢ Once we send user credentials or client credentials to the auth server, it
validates them and issues a token. With that token one can access all the applications in
a secured manner.
➢ SSO with OAuth2: OAuth2 supports SSO. In an enterprise or organization
there can be multiple apps: mobile apps, web apps. If all of them use or point
to the same auth server, then with a single token we can jump from one app to
another without authenticating every time. This is known as SSO, and the OAuth
framework provides it.
➢ OAuth also supports authentication via third-party applications like Facebook or
Gmail. For example, let's say you log into a web app. Instead of you
providing your name, surname, address and email id, the web app exposes a social
login like Gmail or Facebook, and fetches those details from the social app. In this
case you are not providing your social app user name and password to the third-party
web app. Once you log into Gmail, Gmail will issue a token; using that token the
third-party web app will fetch your name, surname, address and other details.
E) OAuth2 Flow:
K8S Ingress & Service Mesh
A) Ingress:
- An alternative to Spring Cloud Gateway in a K8s deployment
- With Ingress you won't require Spring Cloud Gateway
- It will act as an edge server, routing external traffic to services inside the cluster based on
host or path rules (see the sketch below)
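A minimal sketch of an Ingress resource, assuming an nginx ingress controller is installed in the cluster and the accounts-service from earlier (the host name is illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eazybank-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: eazybank.example.com
      http:
        paths:
          - path: /accounts       # requests to this path are routed to the accounts service
            pathType: Prefix
            backend:
              service:
                name: accounts-service
                port:
                  number: 8080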
B) Service Mesh:
- With a Service Mesh (for example Istio), you won't require Sleuth, Zipkin or a per-service
OAuth2 implementation to monitor and secure microservices; the mesh handles these
cross-cutting concerns at the infrastructure level.