The document provides an extensive overview of Spring Boot, including its advantages, architecture, and features such as embedded servers, auto-configuration, and dependency injection. It also contrasts Spring Boot with Spring MVC, discusses various resources for learning, and outlines common annotations and tools like Spring Initializr and Actuator. Additionally, it touches on topics like scheduling, email sending, and transaction management within Spring Boot applications.


Resources
●​ https://www.geeksforgeeks.org/spring-boot/?ref=lbp#testing
●​ https://www.tutorialspoint.com/spring_boot/index.htm
●​ Mockito
○​ https://www.youtube.com/watch?v=W-wiUSu_zeY&list=PL6Zs6LgrJj3vy7yWpH9xb3Y0I_pAPrvCU&index=8
○​ https://www.youtube.com/watch?v=0ZtU3X9n6tI
○​ https://youtu.be/eILy4p99ac8?list=PLsyeobzWxl7po1i2mSjNg5AnkE21mgjb5
○​ https://www.youtube.com/watch?v=RfErIPo94bc
●​ Java streams
○​ https://www.youtube.com/watch?v=dAjvmN2k7kM&list=PLab_if3UBk9_FAq7e2GPWP60okAgMKDwh
●​ Java Kafka
○​ https://www.youtube.com/watch?v=SqVfCyfCJqw
○​ https://www.youtube.com/watch?v=tU_37niRh4U
○​ https://www.youtube.com/watch?v=c7LPlWvxZcQ
●​ Microservices architecture
○​ https://www.youtube.com/watch?v=aOen1-pQLZg&list=PLVz2XdJiJQxw1H3JVhclHc__WYDaiS1uL
●​ Interview FAQ
○​ https://www.youtube.com/watch?v=fFnuer3AD8Q&list=PLVz2XdJiJQxwS8FyWnWyKyfILxHPLsiro&index=7
○​ https://www.youtube.com/watch?v=UHAW7v3f9SU&list=PL6Zs6LgrJj3uetS3eCoIj5-MxYAIgnz7C&pp=iAQB
○​ https://www.youtube.com/watch?v=Ct0VwX7Mtts&list=PL6Zs6LgrJj3tHw5TAeLRKjk3qQ83BGRNP&pp=iAQB
○​ https://www.youtube.com/watch?v=jJUiWel2nQw&list=PL6Zs6LgrJj3ugFhUGKKEixrtUybK3J11h&pp=iAQB
●​ Data structures
○​ https://www.youtube.com/watch?v=kFZ2lUsXY3w&list=PL6Zs6LgrJj3v0AhGdKwvkP_6IzGX-0alk
○​ https://www.youtube.com/watch?v=-0L81p6rZ4E&list=PL6Zs6LgrJj3vlemu_CwjdERRf3FR2MRiM
○​ https://www.youtube.com/watch?v=fF1aJL-4Yzk&list=PL6Zs6LgrJj3u57thS7K7yLPQb5nA23iVu&index=3
●​ Kafka
○​ https://www.javaguides.net/2022/06/spring-boot-apache-kafka-tutorial.html
○​ https://www.javaguides.net/2022/05/spring-boot-kafka-jsonserializer-and-Jsondeserializer-example.html
○​ https://javatechonline.com/how-to-work-with-apache-kafka-in-spring-boot/
○​ https://www.javaguides.net/2024/06/apache-kafka-interview-questions.html
○​ https://www.geeksforgeeks.org/microservices-communication-with-apache-kafka-in-spring-boot/
○​ https://medium.com/simform-engineering/kafka-integration-made-easy-with-spring-boot-b7aaf44d8889
●​ Lombok
●​ Flyway
●​ Security (JWT, OAuth)
●​ Reactive programming
●​ WebFlux

========================================================================

●​ Disadvantages of Spring Boot
○​ Configuration can be really time-consuming.
○​ Can be a bit overwhelming for new developers.
●​ Spring Boot
○​ Spring Boot is built on top of Spring and contains all the features of Spring.
○​ Convention over configuration

Spring: a powerful, flexible framework for Java applications that requires a lot of manual setup and configuration.

Spring Boot: an opinionated extension of Spring that simplifies development by providing auto-configuration, embedded servers, and production-ready features.

Spring Boot – Architecture

●​ Spring Boot follows a layered architecture in which each layer communicates with the layers above or below it in hierarchical order.

Major Reasons to Choose Spring Boot for Microservices Development


●​ Embedded server
○​ In a microservice architecture, there may be hundreds of microservice instances deployed at a given time, so we want to automate the development and deployment of microservices as much as possible. An embedded server is shipped as part of the deployable application; for a Java application, that is the JAR itself. The benefit is that we don’t require a server to be pre-installed in the deployment environment. So the first reason to choose Spring Boot is the presence of the embedded server.
●​ Supports load balancing
○​ We can build a microservice application that uses Spring Cloud LoadBalancer to provide client-side load balancing when calling other microservices.
●​ Auto-configuration
○​ In Spring Boot everything is auto-configured, unlike in a plain Spring MVC project.
○​ Say you want to build an application fast, because in microservices you have to build fast. If you want database connectivity, there is a starter dependency that configures your SessionFactory, ConnectionFactory, DataSource, and so on, so you don’t have to create those beans yourself. The same goes for security.
●​ Minimal code using annotations
●​ Loose coupling
○​ Inversion of Control
○​ Dependency Injection
●​ Open source
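As a sketch of how a starter pulls in that auto-configuration (the artifact ID below is the standard Spring Boot data-JPA starter; the version is omitted because it is managed by the Spring Boot parent POM):

```xml
<!-- Adding this single starter is enough for Spring Boot to auto-configure
     a DataSource, an EntityManagerFactory, and transaction management. -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
```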

Difference between Spring MVC and Spring Boot:


IOC:
●​ Full form: Spring IoC (Inversion of Control)
●​ What
○​ A container which creates objects, configures and assembles their dependencies, and manages their entire life cycle.
○​ It gets the information about the objects from a configuration file (XML), Java code, or Java annotations and Java POJO classes.
○​ These objects are called beans.
○​ Since control of the Java objects and their lifecycle is handled by the container rather than by the developers, it is called Inversion of Control.
○​ The container uses Dependency Injection (DI) to manage the components that make up the application.
●​ There are 2 types of IoC containers:
○​ BeanFactory
○​ ApplicationContext
○​ That means to use an IoC container in Spring, we need to use a BeanFactory or an ApplicationContext.

○​ The BeanFactory is the most basic version of IoC containers, and the
ApplicationContext extends the features of BeanFactory.
●​ Spring – BeanFactory
○​ The BeanFactory interface is the simplest container, providing an advanced configuration mechanism to instantiate, configure, and manage the life cycle of beans.
○​ Beans are Java objects that are configured at run-time by the Spring IoC container.
○​ BeanFactory represents a basic IoC container and is a parent interface of ApplicationContext.
○​ BeanFactory uses the beans’ dependency metadata to create and configure them at run-time.
○​ BeanFactory loads the bean definitions and the dependencies amongst the beans based on a configuration file (XML), or the beans can be returned directly when required using Java configuration.
○​ BeanFactory is an interface that defines a mechanism to retrieve and manage beans, supporting dependency injection.
○​ It acts as a container that manages the lifecycle of beans, ensuring their creation, configuration, and injection.
○​ It uses a factory pattern to create beans when requested, ensuring lazy initialization (i.e., beans are created only when they are needed).
○​ BeanFactory does not support annotation-based configuration, whereas ApplicationContext does.
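The lazy-initialization behaviour described above can be illustrated with a hand-rolled sketch. This is plain Java, not Spring’s actual implementation: beans are registered as factories and only instantiated on the first getBean call, then cached (singleton behaviour).

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A toy bean factory illustrating lazy initialization: the Supplier is
// invoked only the first time a bean is requested, and the created
// instance is cached for subsequent lookups.
public class LazyBeanFactory {
    private final Map<String, Supplier<?>> definitions = new HashMap<>();
    private final Map<String, Object> singletons = new HashMap<>();

    public void register(String name, Supplier<?> factory) {
        definitions.put(name, factory);
    }

    public Object getBean(String name) {
        // Create on first request only (lazy), then cache.
        return singletons.computeIfAbsent(name, n -> definitions.get(n).get());
    }

    public boolean isInstantiated(String name) {
        return singletons.containsKey(name);
    }

    public static void main(String[] args) {
        LazyBeanFactory factory = new LazyBeanFactory();
        factory.register("greeter", () -> new StringBuilder("hello"));
        System.out.println(factory.isInstantiated("greeter")); // false: not created yet
        Object first = factory.getBean("greeter");
        Object second = factory.getBean("greeter");
        System.out.println(first == second); // true: same cached instance
    }
}
```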


Spring Dependency Injection with Example


Spring – Injecting Objects By Constructor Injection
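A plain-Java sketch of what constructor injection achieves (this is not Spring code; in a real application the IoC container performs the wiring that main does by hand here): the dependency is supplied from outside through the constructor, so the class never constructs its own collaborator.

```java
// The contract the controller depends on.
interface MessageService {
    String send();
}

class EmailService implements MessageService {
    public String send() { return "email sent"; }
}

class NotificationController {
    private final MessageService service; // injected, never 'new'-ed here

    // The container (or caller) passes the dependency in.
    NotificationController(MessageService service) {
        this.service = service;
    }

    String notifyUser() { return this.service.send(); }
}

public class ConstructorInjectionDemo {
    public static void main(String[] args) {
        // In Spring, the IoC container performs this wiring; here we do it by hand.
        NotificationController controller = new NotificationController(new EmailService());
        System.out.println(controller.notifyUser()); // prints "email sent"
    }
}
```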



●​ Spring – Dependency Injection by Setter Method



Spring – Injecting Literal Values By Setter Injection



Spring – Injecting Literal Values By Constructor Injection


Bean life cycle in Java Spring
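The phases of the bean life cycle can be sketched in plain Java (a simplification; a real Spring container also runs BeanPostProcessors, aware interfaces, etc.): instantiate, populate properties, init callback, in use, destroy callback.

```java
import java.util.ArrayList;
import java.util.List;

// Toy bean recording its life-cycle events in order.
class LifecycleBean {
    final List<String> events = new ArrayList<>();

    LifecycleBean() { events.add("instantiated"); }
    void setDependency(String dep) { events.add("properties set: " + dep); }
    void init() { events.add("init callback"); }       // like @PostConstruct / afterPropertiesSet()
    void destroy() { events.add("destroy callback"); } // like @PreDestroy / DisposableBean
}

public class BeanLifecycleDemo {
    public static void main(String[] args) {
        // The container drives these phases in this order; we simulate it by hand.
        LifecycleBean bean = new LifecycleBean();
        bean.setDependency("dataSource");
        bean.init();
        bean.destroy(); // invoked on container shutdown
        System.out.println(bean.events);
    }
}
```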


Custom Bean Scope in Spring

●​ We can’t override/modify the standard bean scopes of Spring, i.e. singleton and prototype, and it’s generally considered bad practice to override the web-aware scopes. But sometimes an application demands something beyond the capabilities found in the provided scopes.
●​ As of Spring 2.0, we can define custom Spring bean scopes as well as modify existing Spring bean scopes (except the singleton and prototype scopes).
●​ To integrate your custom scope(s) into the Spring container, you need to implement the org.springframework.beans.factory.config.Scope interface. This Scope interface contains four methods:
○​ Object get(String name, ObjectFactory objectFactory)
○​ Object remove(String name)
○​ void registerDestructionCallback(String name, Runnable destructionCallback)
○​ String getConversationId()
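The four methods above can be sketched as a hand-rolled, plain-Java stand-in (ObjectFactory is modelled by java.util.function.Supplier here; a real custom scope would implement Spring’s interface and be registered via the bean factory):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Map-backed sketch of the Scope contract: beans live for as long as
// this scope instance does, and are created at most once per name.
public class ThreadLocalLikeScope {
    private final Map<String, Object> scopedObjects = new HashMap<>();
    private final Map<String, Runnable> destructionCallbacks = new HashMap<>();

    // Object get(String name, ObjectFactory objectFactory)
    public Object get(String name, Supplier<?> objectFactory) {
        return scopedObjects.computeIfAbsent(name, n -> objectFactory.get());
    }

    // Object remove(String name): drop the callback and evict the bean.
    public Object remove(String name) {
        destructionCallbacks.remove(name);
        return scopedObjects.remove(name);
    }

    // void registerDestructionCallback(String name, Runnable destructionCallback)
    public void registerDestructionCallback(String name, Runnable callback) {
        destructionCallbacks.put(name, callback);
    }

    // String getConversationId()
    public String getConversationId() {
        return "demo-scope";
    }

    public static void main(String[] args) {
        ThreadLocalLikeScope scope = new ThreadLocalLikeScope();
        Object first = scope.get("userPrefs", Object::new);
        System.out.println(first == scope.get("userPrefs", Object::new)); // true: cached in scope
        scope.remove("userPrefs");
        System.out.println(first == scope.get("userPrefs", Object::new)); // false: re-created after removal
    }
}
```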

How to Create a Spring Bean in 3 Different Ways?


●​ Difference between the @Component and @Bean annotations when creating beans


Spring – Autowiring
●​ Autowiring lets the Spring container resolve and inject collaborating beans into a bean automatically (for example with @Autowired), instead of wiring them explicitly in configuration.

Singleton and Prototype Bean Scopes in Java Spring
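The difference between the two scopes can be sketched in plain Java (a simplification, not Spring internals): a singleton-scoped bean is created once and cached, while a prototype-scoped bean is created fresh on every request.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Minimal factory mimicking the two standard scopes.
class ScopedFactory {
    private final Map<String, Object> singletonCache = new HashMap<>();

    public Object getSingleton(String name, Supplier<?> factory) {
        // Same cached instance on every call for the same name.
        return singletonCache.computeIfAbsent(name, n -> factory.get());
    }

    public Object getPrototype(Supplier<?> factory) {
        return factory.get(); // new instance every time
    }
}

public class ScopeDemo {
    public static void main(String[] args) {
        ScopedFactory f = new ScopedFactory();
        System.out.println(f.getSingleton("a", Object::new) == f.getSingleton("a", Object::new)); // true
        System.out.println(f.getPrototype(Object::new) == f.getPrototype(Object::new)); // false
    }
}
```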


How to Configure Dispatcher Servlet in web.xml File?


56

●​ So you might be wondering how to create a front controller in a Spring MVC application. The good news is that the front controller is already created by the Spring Framework developers: it is called DispatcherServlet. You are not required to create a front controller yourself; you can reuse the one provided by the framework.
●​ In Spring, the /WEB-INF/web.xml file is the Web Application Deployment Descriptor of
the application. This file is an XML document that defines everything about your
application that a server needs to know (except the context path, which is assigned by
the Application Deployer and Administrator when the application is deployed), servlets,
and other components like filters or listeners, initialization parameters,
container-managed security constraints, resources, welcome pages, etc.
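A minimal web.xml registering the DispatcherServlet might look like the sketch below (the servlet name "dispatcher" and the URL pattern are illustrative choices, not fixed names):

```xml
<web-app>
    <servlet>
        <servlet-name>dispatcher</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <!-- Route all requests through the front controller. -->
    <servlet-mapping>
        <servlet-name>dispatcher</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>
</web-app>
```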

======================== Spring boot =============================

Spring Boot – Annotations

●​ Spring Boot Annotations are a form of metadata that provides information about a
spring application.
●​ Here are some common Spring Boot annotations and their uses:

Spring Boot Actuator

●​ Spring Boot Actuator is a powerful module that can be used to monitor and manage your Spring Boot application.
●​ Actuator comes with a set of built-in endpoints that allow you to gather metrics, health
checks, and information about the application in real time.
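As a sketch, the standard Actuator properties below expose a few built-in endpoints over HTTP (the endpoint IDs health, info, and metrics are standard; which ones you expose is an application choice):

```properties
# Expose selected Actuator endpoints over the web.
management.endpoints.web.exposure.include=health,info,metrics
# Show full health details instead of just UP/DOWN.
management.endpoint.health.show-details=always
```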


●​ Common Actuator Endpoints



Spring Initializr

●​ Spring Initializr is a web-based tool provided by Spring that allows developers to quickly
generate Spring Boot project templates.
●​ It helps bootstrap a new Spring Boot application by providing a customized project
structure with dependencies, packaging options, and configurations.
●​ This tool can be accessed through a web interface or directly integrated into IDEs like
IntelliJ IDEA, Eclipse, and Visual Studio Code.
●​ Group ID and Artifact ID
○​ In a Spring Boot project (or any Maven/Gradle project), the Group ID and
Artifact ID are important identifiers used to uniquely identify your project,
especially when it's built into a JAR or published to a repository.


Spring Boot – Code Structure

●​ Let us discuss two approaches that are typically used by most developers to structure
their spring boot projects.
○​ Structure by Feature
○​ Structure by Layer

Spring – RestTemplate

●​ ‘RestTemplate’ is a synchronous REST client provided by the core Spring Framework.


●​ Why
○​ To interact with REST, the client needs to create a client instance and request
object, execute the request, interpret the response, map the response to domain
objects, and also handle the exceptions

How to Change the Default Port in Spring Boot?

●​ Set server.port in application.properties (for example, server.port=8081) or the equivalent key in application.yml.

Spring Boot – Scheduling

●​ In Spring Boot, scheduling is a powerful feature that allows you to run tasks at specific
intervals or points in time. Spring provides the @Scheduled annotation to schedule
tasks with various options such as fixed rate, fixed delay, or cron expressions.
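Under the hood, fixed-rate scheduling behaves much like the JDK’s ScheduledExecutorService. This plain-Java sketch (stdlib only, not Spring’s @Scheduled machinery) shows a task firing at a fixed rate and being stopped after three runs:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedRateDemo {
    public static void main(String[] args) throws InterruptedException {
        // Run the task every 50 ms, starting immediately; stop after 3 firings.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch threeRuns = new CountDownLatch(3);
        scheduler.scheduleAtFixedRate(threeRuns::countDown, 0, 50, TimeUnit.MILLISECONDS);
        boolean completed = threeRuns.await(2, TimeUnit.SECONDS);
        scheduler.shutdownNow();
        System.out.println("task ran 3 times: " + completed);
    }
}
```

With Spring, the equivalent is a method annotated @Scheduled(fixedRate = 50) in a bean, enabled by @EnableScheduling on a configuration class.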

●​ Start/Stop Tasks Dynamically: Programmatically control scheduling using
TaskScheduler.

Spring Boot – Sending Email via SMTP

●​ Spring Boot provides the ability to send emails via SMTP using the JavaMail Library.


Spring Boot – Transaction Management Using @Transactional Annotation


●​ A transaction is a sequence of actions performed by the application that together form a single logical operation. For example, booking a flight ticket is a transaction in which the end user enters their information and then makes a payment to book the ticket.
●​ The @Transactional annotation is the metadata used for managing transactions in a Spring Boot application.
●​ To configure Spring transactions, this annotation can be applied at the class level or the method level.
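The guarantee that @Transactional provides can be sketched in plain Java (this is not Spring’s proxy machinery; the snapshot/restore here stands in for a database rollback): either every step of the operation is applied, or none are.

```java
import java.util.ArrayList;
import java.util.List;

public class TransactionSketch {
    // Applies two ledger entries atomically: on any failure, the ledger
    // is restored to its pre-transaction state (the "rollback").
    public static boolean transfer(List<Integer> ledger, int amount, boolean failMidway) {
        List<Integer> snapshot = new ArrayList<>(ledger); // remember pre-transaction state
        try {
            ledger.add(-amount);                    // step 1: debit
            if (failMidway) throw new RuntimeException("payment gateway down");
            ledger.add(amount);                     // step 2: credit
            return true;                            // commit
        } catch (RuntimeException e) {
            ledger.clear();                         // rollback: restore snapshot
            ledger.addAll(snapshot);
            return false;
        }
    }

    public static void main(String[] args) {
        List<Integer> ledger = new ArrayList<>(List.of(100));
        transfer(ledger, 30, true);                 // fails mid-way -> rolled back
        System.out.println(ledger);                 // partial debit was undone
    }
}
```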

Spring Boot – Map Entity to DTO using ModelMapper

●​ DTO
○​ Stands for Data Transfer Object; these are the objects that move from one layer to another.
○​ Why
■​ A DTO can be used to hide the implementation details of database-layer objects. Exposing entities to the web layer without handling the response properly can become a security issue.
●​ For example, suppose we have a GET endpoint that exposes the details of an entity class called User. If the response is not handled properly, the endpoint can return all the fields of the User class, even the password, which is not good practice for RESTful services. DTOs overcome this problem: with a DTO we can choose which fields to expose to the web layer.
■​ In Spring Boot, mapping entities to Data Transfer Objects (DTOs) is a common pattern to decouple your service layer from the structure of your persistence layer.

●​ The ModelMapper library is a powerful tool for automating this mapping process,
reducing boilerplate code.
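A sketch of the mapping done by hand (ModelMapper automates exactly this kind of field copying; the User/UserDto classes below are illustrative). Note the DTO deliberately omits the password field:

```java
// Persistence-layer entity: contains fields we must NOT expose.
class User {
    String name; String email; String password;
    User(String n, String e, String p) { name = n; email = e; password = p; }
}

// What the web layer is allowed to see.
class UserDto {
    String name; String email;
}

public class DtoMappingDemo {
    static UserDto toDto(User user) {
        UserDto dto = new UserDto();
        dto.name = user.name;
        dto.email = user.email;
        // password is intentionally NOT copied
        return dto;
    }

    public static void main(String[] args) {
        User entity = new User("asha", "asha@example.com", "s3cret");
        UserDto dto = toDto(entity);
        System.out.println(dto.name + " " + dto.email); // no password exposed
    }
}
```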


Spring Boot – Validation using Hibernate Validator

●​ Hibernate Validator offers validation annotations for Spring Boot that can be applied to the data fields within your entity class, allowing you to enforce specific rules and conditions on those fields to meet your custom constraints.

Spring Boot – Cache Provider


●​ Spring Boot supports several cache providers, allowing you to integrate caching
mechanisms into your applications.
●​ cache providers such as EhCache, Redis, Guava, Caffeine, etc.
●​ In Spring Boot, the cache abstraction is based on two interfaces: org.springframework.cache.CacheManager and org.springframework.cache.Cache.


Spring Boot – Logging

●​ Spring Boot supports various logging frameworks and, by default, uses Logback for its logging.


Spring Boot – Auto-configuration

●​ Auto-configuration automatically configures Spring beans based on the dependencies present on the classpath, so an application can run with minimal explicit configuration.

Spring Boot – EhCaching


Spring Boot – File Handling


Spring Boot – Create a Custom Auto-Configuration



Exception Handling in Spring Boot



Spring Boot – Embedded Tomcat


●​ Tomcat is a very popular Java servlet container and the default embedded server in Spring Boot. Because the server is embedded in the application itself, you avoid a separate server setup for each application you deploy.

Spring Boot – Packaging

Packaging refers to the way an application is built and bundled for deployment, usually as either a JAR or a WAR file:


Spring Boot – Thymeleaf with Example



Multi-Module Project With Spring Boot


Spring Boot – DevTools

●​ Note: It is important to understand that the ‘DevTools’ is not an IDE plugin, nor
does it require that you use a specific IDE.

H2 Database
●​ The H2 database in Spring Boot is an embedded, open-source, in-memory database: a relational database management system written in Java. In in-memory mode it stores data in memory rather than persisting it to disk. Here we discuss how to configure H2 and perform some basic operations in Spring Boot.
●​ H2 is a lightweight and fast SQL database written in Java. It can run in two modes: in-memory and embedded. The in-memory mode is particularly useful for testing and development because it creates a temporary database that is automatically destroyed when the application stops. The embedded mode is used for applications that need a small, self-contained database.
●​ Features of the H2 Database:
○​ Very fast, open-source, JDBC API
○​ Embedded and server modes; disk-based or in-memory databases.
○​ Transaction support, multi-version concurrency
○​ Browser-based Console application
○​ Encrypted databases

○​ Fulltext search
○​ Pure Java with a small footprint: around 2.5 MB jar file size
○​ ODBC driver
●​ Configure H2 Database in Spring Boot Application
○​ Step 1: Adding the dependency
■​ <dependency>
■​ <groupId>com.h2database</groupId>
■​ <artifactId>h2</artifactId>
■​ <scope>runtime</scope>
■​ </dependency>
○​ Step 2: Configure Application Properties
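A typical sketch of the properties (the database name testdb and the credentials are illustrative; adjust to your application):

```properties
# In-memory H2 datasource; data is lost when the application stops.
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
# Enable the browser-based H2 console.
spring.h2.console.enabled=true
```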


Spring Boot – Dependency Management



Spring Boot – Caching



Spring Boot – Starter Web



Spring Boot – application.yml/application.yaml File
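An application.yml expresses the same keys as application.properties, but hierarchically. A small sketch (the port and datasource values are illustrative):

```yaml
server:
  port: 8081
spring:
  datasource:
    url: jdbc:h2:mem:testdb
    username: sa
```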



Spring Boot – Starter Parent



Spring Boot – Customize the Jackson ObjectMapper



Spring Boot – Difference Between @Service Annotation and @Repository Annotation



Spring Boot – Starters



How to Implement Simple Authentication in Spring Boot?


Validation in Spring Boot


●​ Example: validating a path variable.

@GetMapping("/validatePathVariable/{id}")
ResponseEntity<String> validatePathVariable(@PathVariable("id") @Min(5) int id) {
    return ResponseEntity.ok("valid");
}


What is the Command Line Runner Interface in Spring Boot?

●​ The CommandLineRunner interface, in the org.springframework.boot package, is used to run a block of code once the Spring Boot application has started. It contains a run() method that is executed after application startup.
●​ It should not be confused with the Spring Boot CLI (Command Line Interface), a command-line tool provided by the Spring framework for quickly developing and testing Spring Boot applications from the command prompt.

Difference Between Spring Boot Starter Web and Spring Boot Starter Tomcat

How to encrypt passwords in a Spring Boot project using Jasypt



Spring Boot JDBC



Spring Boot CrudRepository


●​ CrudRepository is a part of the Spring Data JPA framework and provides a
convenient way to perform CRUD (Create, Read, Update, Delete) operations in Spring
Boot. By extending the CrudRepository interface, you can quickly build repositories
with minimal boilerplate code.
●​ To use CrudRepository in Spring Boot, make sure you have included the Spring Data
JPA and a database connector (like MySQL, PostgreSQL, or H2) in your dependencies.
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
runtimeOnly 'mysql:mysql-connector-java'

Spring Boot JpaRepository

●​ Difference between CrudRepository and JpaRepository

Spring Boot – Integrating Hibernate and JPA


Spring Boot – MongoRepository with Example



How to Connect MongoDB with Spring Boot?



Spring Boot – Spring Data JPA


●​ The Java Persistence API (JPA) provides a specification for persisting, reading, and managing data from your Java objects to relational tables in the database. JPA specifies a set of rules and guidelines for developing interfaces that follow standards. Straight to the point: JPA is just a set of guidelines for implementing ORM; there is no underlying implementation code. Spring Data JPA is part of the Spring framework. The goal of the Spring Data repository abstraction is to significantly reduce the amount of boilerplate code required to implement a data access layer for various persistence stores. Spring Data JPA is not a JPA provider; it is a library/framework that adds an extra layer of abstraction on top of a JPA provider like Hibernate.

Spring Boot with Kafka

●​ Kafka core is a publish-subscribe messaging system that’s often used for collecting and analyzing large amounts of data.

○​ Kafka Core allows applications to publish (produce) and subscribe (consume) to streams of records, as well as store them in a durable, fault-tolerant, and distributed manner.
○​ Replication: Kafka stores copies of data across multiple brokers for fault tolerance.
○​ Offset Management: Kafka tracks the position (offset) of consumers within a topic.

●​ Kafka Storage Architecture:



●​ Topic
○​ A few points about topics to keep in mind:
■​ Producers
●​ Producers write records to topics. Each topic can receive data from multiple producers, which are typically applications, microservices, or data sources.
■​ Consumers
●​ Consumers read records from topics. They subscribe to topics, receive new data as it’s added, and can read the data either in real time or from any specified point in history based on offsets.
■​ Retention Policy
●​ Kafka allows setting how long data is retained in a topic. Records can be retained for a specific amount of time (e.g., 7 days) or until a certain storage size is reached.
■​ Replication
●​ Kafka supports replicating topics across multiple brokers for fault tolerance. If a broker fails, a replica can take over, ensuring data availability. More replicas provide greater fault tolerance.
■​ Durability
●​ Topics store records durably: they are persisted to disk, ensuring they remain available even if the system restarts.
■​ Partitions
●​ When creating a topic, you specify the number of partitions and replicas.
○​ More partitions allow for better parallelism;
○​ more replicas provide greater fault tolerance.
■​ Offset
●​ The unique number assigned to each record in a partition. It enables consumers to keep track of where they left off in the stream.
■​ Partitioned Log
●​ Each topic is divided into partitions. Each partition is an ordered, immutable sequence of records, and each record within a partition has a unique offset (a sequential ID). Partitions allow Kafka to scale horizontally, as multiple consumers can read from different partitions in parallel.
■​ Cleanup Policy
●​ Topics can have different cleanup policies, such as:
○​ Delete: the default option, where records are removed after the retention period.
○​ Compact: keeps only the most recent record for a particular key, useful for maintaining the latest state.
●​ Important considerations during topic creation:
○​ Number of Partitions
○​ Replication Factor
○​ Retention Policy
○​ Partition Key Selection
○​ Compression
○​ Access Control and Security
○​ Resource Allocation (Quota)
○​ Schema Management (Optional)
●​ Number of Partitions:
○​ There are no hard limits on the number of partitions in a Kafka cluster, but there are some general guidelines and best practices:
■​ The maximum number of partitions per broker is 4,000.
■​ The maximum number of partitions per Kafka cluster is 200,000.
■​ A common rule of thumb is to have 10 partitions per topic.
○​ Note
■​ The number of partitions in a Kafka cluster can impact throughput, availability, and latency.
■​ Exceeding the partition limits can lead to higher memory, CPU, and network usage.
■​ You can increase the number of partitions for a topic after it has been created, but you cannot decrease them once they have been created.
●​ Producers in detail
○​ https://medium.com/@javatechpro22/kafka-core-producer-understanding-and-overview-6505a2758d63
○​ What
■​ Producers write data to Kafka topics.
■​ Producers push data to Kafka brokers, which then store the messages and distribute them across partitions.
○​ Event format
○​ The Producer Buffer
■​ A memory area in the producer client used to hold records before they are sent to the Kafka broker.
■​ It helps to optimise the producer’s performance by batching messages together and reducing the number of requests made to the Kafka broker.
■​ Partition-Level Buffers
●​ A Kafka producer always sends all messages in a batch to the same partition within a specific topic.
●​ The default batch size of a Kafka producer is 16 KB (16,384 bytes). This is controlled by the batch.size configuration parameter.
■​ Producer Decides Partitions:
●​ The producer decides which message or record
should go in which partitions.
●​ The producer always decides the partitions
based on round-robin or events key or any
custom logic.
●​ All the messages with the same key go to the
same partition.
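The key-based choice can be sketched in plain Java (the real Kafka client uses murmur2 hashing; String.hashCode() here is a stand-in to show the idea that the same key always maps to the same partition):

```java
public class PartitionerSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is non-negative, then take the modulo.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int p1 = partitionFor("order-42", 10);
        int p2 = partitionFor("order-42", 10);
        System.out.println(p1 == p2); // true: same key, same partition
    }
}
```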
■​ Producer Sync/Async:
●​ The producer can send messages synchronously, waiting for an acknowledgement from the broker.
●​ The producer can send messages asynchronously, without waiting for the acknowledgement from the broker.
●​ By default, a Kafka producer sends messages asynchronously. When you call the send() method, the producer sends the message to Kafka in the background without blocking the main thread, and it doesn’t wait for an acknowledgement from Kafka before continuing.
○​ Producer retry
■​ The purpose is to handle transient failures during message transmission (e.g., the broker is temporarily unavailable or there are network issues).
■​ When the producer sends messages to a broker, the broker can return either a success or an error code. Those error codes belong to two categories:
●​ Retriable errors: errors that can be resolved after retrying. For example, if the broker returns the exception NotEnoughReplicasException, the producer can try sending the message again; replica brokers may come back online and the second attempt may succeed.
●​ Nonretriable errors: errors that won’t be resolved by retrying. For example, if the broker returns an INVALID_CONFIG exception, trying the same producer request again will not change the outcome.
●​ Producer Timeouts:
○​ In Kafka, producer timeouts are a key mechanism for managing how long the producer waits for an acknowledgment from brokers, how it handles retries, and how long it waits before throwing errors.
○​ Delivery Timeout: delivery.timeout.ms
○​ Request Timeout: request.timeout.ms
○​ Batch Timeout: linger.ms
○​ Acks Configuration
●​ Producer Serializer:
○​ Serialization is the process of converting data objects into a binary format suitable for transmission or storage.
○​ This is useful because Kafka brokers only work with bytes, so data is stored and transmitted in binary format for efficiency.
●​ Consumers
○​ https://medium.com/@javatechpro22/kafka-core-consumer-fe7872f34e01
○​ A consumer in Apache Kafka is a client application that reads and processes data from a broker (from topics in Kafka).
○​ Consumers can subscribe to specific topics or partitions and retrieve messages from them in real time.
○​ Consumers can specify a log offset when making a request, which gives them control over what they consume.
○​ Offset in detail
■​ https://medium.com/@javatechpro22/kafka-core-consumer-offset-tracking-9f4d693d9791

●​ What is Kafka
○​ Apache Kafka is an open-source, distributed publish-subscribe messaging queue used for real-time streams of data.
○​ Used for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.
●​ Kafka cluster
○​ Set/Group of brokers. A cluster has a minimum of 3 brokers.


●​ Kafka broker
○​ Kafka servers that store and serve data​

○​ The broker is the Kafka server. And this name makes sense as well because all
that Kafka does is act as a message broker between producer and consumer.
○​ The producer and consumer don't interact directly. They use the Kafka server as
an agent or a broker to exchange messages.
○​ The following diagram shows a Kafka broker, it acts as an agent or broker to
exchange messages between Producer and Consumer:

●​ Kafka producer
○​ Writes data to Kafka topics.
○​ A producer is an application that sends messages. It does not send messages directly to the recipient; it sends messages only to the Kafka server.
○​ The following diagram shows the producer sending messages to the Kafka broker:
●​ Kafka consumer
○​ Reads data from Kafka topics.
○​ A consumer is an application that reads messages from the Kafka server.
○​ The following diagram shows the producer sending messages to the Kafka broker and the consumer reading messages from the Kafka broker:
●​ Kafka topic
○​ Logical channels to which messages are written and from which they are read.
○​ A Kafka topic is like a category or channel where messages are sent and stored for a certain period of time.
○​ Producers send messages to a specific topic.
○​ Consumers read messages from a specific topic.
○​ Topics can have partitions, allowing for scalable and distributed message handling.
○​ Advantage
■​ This design allows Kafka to handle high-throughput and real-time data streams effectively.
●​ Kafka partitions
○​ Kafka topics are divided into a number of partitions, which contain records in an unchangeable sequence.
○​ The following diagram shows a Kafka topic further divided into a number of partitions:
●​ Kafka offset
○​ Offset is a sequence of ids given to messages as they arrive at a partition. Once
the offset is assigned it will never be changed. The first message gets an offset
zero. The next message receives an offset one and so on.
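Offset assignment and consumer tracking in a single partition can be sketched as follows (a plain-Java simplification; a real partition is an append-only log on disk):

```java
import java.util.ArrayList;
import java.util.List;

// One partition: each appended record gets the next sequential offset,
// and a consumer resumes reading from its last committed offset.
public class OffsetSketch {
    final List<String> partition = new ArrayList<>();

    int append(String record) {
        partition.add(record);
        return partition.size() - 1;       // the record's offset
    }

    List<String> readFrom(int committedOffset) {
        return partition.subList(committedOffset, partition.size());
    }

    public static void main(String[] args) {
        OffsetSketch p = new OffsetSketch();
        p.append("a");                     // offset 0
        p.append("b");                     // offset 1
        int last = p.append("c");          // offset 2
        System.out.println(last);          // 2
        System.out.println(p.readFrom(1)); // consumer committed offset 1 resumes at "b"
    }
}
```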

●​ Kafka consumer group
○​ A consumer group contains one or more consumers working together to process
the messages.
●​ Spring Boot Kafka Producer and Consumer Example
○​ Kafka templates
■​ KafkaTemplate is a Spring Kafka utility that helps you send messages
to Kafka topics in an easier and more convenient way.
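A minimal sketch of sending a message with KafkaTemplate, assuming spring-kafka is on the classpath and a KafkaTemplate bean is configured; the OrderProducer class name and the "orders" topic are illustrative assumptions:

```java
// Sketch only: assumes spring-kafka and a configured KafkaTemplate<String, String> bean.
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderProducer {
    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void send(String orderId) {
        // "orders" is a hypothetical topic; using orderId as the key keeps
        // all messages for one order in the same partition (ordered)
        kafkaTemplate.send("orders", orderId, "order-created:" + orderId);
    }
}
```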
●​ The Kafka Streams API
○​ is a powerful library provided by Apache Kafka to build real-time, stream
processing applications. It allows you to process data directly from Kafka
topics, perform complex operations like filtering, aggregating, or joining streams,
and then output the results back to Kafka topics or other systems.
●​ Kafka Connect API
○​ is a part of Apache Kafka that provides a framework for connecting Kafka with
external data sources and sinks. It is used to stream data into and out of
Kafka topics without having to write custom integration code. Kafka Connect
simplifies integration with databases, cloud storage, message queues, and other
systems by using connectors.
●​ Advantage of kafka
○​ 1. High Throughput and Low Latency
■​ Kafka can handle large volumes of data with minimal latency and can
process millions of messages per second.
○​ 2. Scalability
■​ Kafka's architecture is highly scalable. It uses a partitioned log model,
allowing data to be distributed across multiple servers (brokers). You can
add more brokers to a Kafka cluster to increase its capacity seamlessly.
○​ 3. Durability and Fault Tolerance
■​ Kafka ensures durability by replicating data across multiple brokers. It
uses a distributed commit log, making it highly fault-tolerant. If a broker
fails, other brokers in the cluster can take over seamlessly.
○​ 4. Decoupling of Systems
■​ Kafka acts as a buffer between producers (data sources) and consumers
(data processors), decoupling them. This allows different systems to
evolve independently without direct integration, simplifying the
architecture.
○​ 5. High Availability
■​ Kafka’s architecture is designed for high availability. With replication and
leader election mechanisms, it ensures that the system remains available
even when individual nodes fail.
○​ 6. Multi-client Support
■​ Kafka has a wide range of client libraries, including Java, Python, Go,
.NET, and more, making it easy to integrate with various applications and
programming languages.
●​ ZooKeeper
○​ It acts as a central coordinator that helps all parts of Kafka stay in sync.
○​ It helps manage servers, elect leaders, and keep everything organized.
●​ Kafka Streams
●​ Kafka vs RabbitMQ
○​ Apache Kafka is best suited for high-throughput, real-time streaming and
analytics. It excels in scenarios where data durability and long-term storage are
critical, like event sourcing, log aggregation, and real-time data pipelines.
○​ RabbitMQ is more suitable for traditional message queuing with a focus on
complex routing, pub/sub patterns, and lower latency use cases such as task
scheduling, microservices communication, and RPC (Remote Procedure Call).
■​ Consumer Model
●​ K: Pull-based (consumers fetch data)
●​ R: Push-based (broker pushes messages to consumers)
■​ Architecture
●​ Distributed, log-based
●​ Broker-centric (with queues and exchanges)
■​ Data Storage
●​ Persistent log (data retained for a specified period)
●​ In-memory or disk-based queues (FIFO order)
■​ Message Ordering
●​ Maintains order within a partition
●​ Maintains order within a single queue
■​ Delivery Guarantees
●​ At-least-once, at-most-once, exactly-once (with config)
●​ At-least-once or at-most-once
■​ Message Retention
●​ Configurable (even if consumed, data can be retained)
●​ Messages are removed after consumption
■​ Transactions
●​ Supports transactions (atomic reads/writes)
●​ Limited support, transactions are complex
○​ Choose based on the use case and specific requirements:
■​ If you need high throughput and streaming analytics, go with Kafka.
■​ If you need complex message routing and low latency with simpler setup,
go with RabbitMQ.
●​ Spring Boot Kafka JsonSerializer and JsonDeserializer Example
○​ 5. Create Simple POJO to Serialize / Deserialize
AOP (Aspect-Oriented Programming)
●​ Aspect-Oriented Programming (AOP) is a programming paradigm that allows you to
separate cross-cutting concerns (like logging, security, or transaction management)
from your main business logic.
●​ Why
○​ It helps make your code cleaner and easier to maintain
●​ Aspects
○​ The reusable module contains cross-cutting concerns.
○​ Example : logging, security
●​ Advice:
○​ The code you want to inject (e.g., logging code).
●​ Join point:
○​ The specific points in your code where advice should run (e.g., before a method
executes).
●​ Point cut:
○​ A set of rules to determine which join points the advice applies to.
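The terms above fit together in a small Spring AOP sketch; this assumes spring-boot-starter-aop is on the classpath, and the package name `com.example.service` is a hypothetical placeholder:

```java
// Sketch only: assumes spring-boot-starter-aop; package and class names are illustrative.
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect       // this class is an Aspect: a reusable module for a cross-cutting concern
@Component
public class LoggingAspect {
    // The pointcut expression selects join points: every method in the
    // (hypothetical) com.example.service package
    @Before("execution(* com.example.service.*.*(..))")
    public void logBefore(JoinPoint joinPoint) {
        // The advice: code injected before each matched method runs
        System.out.println("Entering: " + joinPoint.getSignature().getName());
    }
}
```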
Spring Boot – Difference Between AOP and OOP
Here’s a simpler comparison between AOP and AspectJ:

●​ AOP is the general idea of separating cross-cutting concerns.
●​ AspectJ is a specific implementation of AOP in Java with advanced features like
different weaving types.
●​ https://www.javaguides.net/2019/05/understanding-spring-aop-concepts-and-terminology
-with-example.html
●​ LoggingAspect.java
Unit Testing : Mockito
○​ Fake test example without mockito
○​ Class
■​ https://github.com/dinesh-varyani/mockito/tree/master/src/main/java/com/
hubberspot/mockito/test_doubles/fake
○​ Test
■​ https://github.com/dinesh-varyani/mockito/tree/master/src/test/java/com/h
ubberspot/mockito/test_doubles/fake
●​ Dummy
○​ Example
■​ Test
●​ https://github.com/dinesh-varyani/mockito/tree/master/src/test/java
/com/hubberspot/mockito/test_doubles/dummy
■​ Class
255

●​ https://github.com/dinesh-varyani/mockito/tree/master/src/main/jav
a/com/hubberspot/mockito/test_doubles/dummy
●​ Stub
○​ Class
■​ https://github.com/dinesh-varyani/mockito/tree/master/src/main/java/com/
hubberspot/mockito/test_doubles/stub
○​ Test
■​ https://github.com/dinesh-varyani/mockito/tree/master/src/test/java/com/h
ubberspot/mockito/test_doubles/stub
●​ Spy
○​ Keeps an eye on the real dependency and its interactions
○​ Class
■​ https://github.com/dinesh-varyani/mockito/tree/master/src/main/java/com/
hubberspot/mockito/test_doubles/spy
○​ Test
■​ https://github.com/dinesh-varyani/mockito/tree/master/src/test/java/com/h
ubberspot/mockito/test_doubles/spy
●​ Mock
●​ Class
○​ https://github.com/dinesh-varyani/mockito/tree/master/src/main/java/com/hubbers
pot/mockito/test_doubles/mock
●​ Test
○​ https://github.com/dinesh-varyani/mockito/tree/master/src/test/java/com/hubbersp
ot/mockito/test_doubles/mock

Mocking
●​ Mocking is a technique used in unit testing to simulate the behavior of complex objects,
systems, or dependencies.
●​ Instead of interacting with actual objects, which might be difficult or time-consuming to
set up, you create mock objects that mimic the behavior of the real ones.
●​ Benefits of Mocking:
○​ Isolation: Allows you to test components in isolation, ensuring that failures are
due to the component being tested and not its dependencies.
○​ Performance: Mocking can significantly speed up tests, as mock objects are
generally faster than real objects.
○​ Simplicity: Simplifies the setup and teardown of tests by eliminating the need to
configure complex dependencies.

Mockito
●​ Mockito is a popular Java-based framework for mocking objects in unit tests.
●​ It allows developers to create mock objects, define their behaviors, and verify
interactions, making it easier to write isolated unit tests for classes and methods.
●​ It uses Java reflection to create mock objects
●​ Advantages of Mockito:
○​ Ease of Use: Mockito provides a simple and intuitive API that makes it easy to
create and use mock objects. This reduces the complexity of writing unit tests.
○​ Readable and Maintainable Tests: The syntax and structure of Mockito-based
tests are straightforward, making the tests easy to read and maintain.
○​ Isolation of Tests: Mockito allows you to isolate the code under test by mocking
dependencies. This ensures that tests focus on the functionality of the class
being tested without being affected by external dependencies.
○​ Annotations for Simplification: Mockito's annotations (@Mock,
@InjectMocks, @Captor, etc.) help simplify the creation and injection of
mocks, making tests cleaner and reducing boilerplate code.
●​ Annotation
○​ ExtendWith
■​ One of the most common uses of @ExtendWith is to integrate third-party
libraries like Mockito into JUnit 5 tests.
■​ When using Mockito with JUnit 5, you can use the MockitoExtension
to enable Mockito’s annotations like @Mock and @InjectMocks.
○​ Mock Vs Inject Mock
●​ @RunWith is an annotation in JUnit 4 that allows you to customize the test runner used
to execute your tests.
●​ initMocks
○​ Another way to enable Mockito annotations in JUnit 4
●​ Rule
○​ Another way to enable Mockito annotations in JUnit 4
○​ The rule field must be public
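The annotations discussed above can be combined in a sketch like the following; it assumes JUnit 5 and Mockito on the classpath, and `OrderService`/`OrderRepository` are hypothetical names, not from the original notes:

```java
// Sketch only: assumes JUnit 5 + Mockito; service/repository names are illustrative.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)  // enables @Mock and @InjectMocks
class OrderServiceTest {
    @Mock
    OrderRepository repository;      // mocked dependency

    @InjectMocks
    OrderService service;            // class under test, with the mock injected

    @Test
    void returnsOrderCount() {
        when(repository.count()).thenReturn(3L);   // stub the dependency
        assertEquals(3L, service.orderCount());
        verify(repository, times(1)).count();      // behavior verification
    }
}
```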
●​ Stubbing in mockito
●​ When then return
○​ doReturn when
●​ Stubbing Multiple Calls to the Same Method
●​ Stubbing void method
●​ Verify number of interactions with mock
●​ Verify an interaction has not occurred
●​ To verify whether any interactions were made with any method of the mock object
●​ Verify there are no unexpected interactions
○​ Used to verify that, after certain calls, no further methods were called on the mock
●​ Verify Order of interactions
○​ Used to check the order of execution of methods, i.e., whether certain methods
are called in the proper order
●​ Verify an interaction has occurred at least certain number of times
●​ Exception handling with Non Void methods
●​ Exception handling with Void methods
●​ ArgumentCaptor without using annotations
○​ Captures the arguments passed to a mocked method so assertions can be performed on them
○​ ArgumentCaptor using annotations
●​ What is a Spy in Mockito ?
●​ Creating a Spy using annotations
○​ Two ways to create
○​ First way
○​ Second way
■​ Using annotation
○​ When stubbing a spy, use doReturn-when: it stubs the method without invoking it.
With when-thenReturn, the real method would be called during stubbing.
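A sketch of the difference, using a spied `ArrayList` purely for illustration:

```java
// Sketch only: assumes Mockito on the classpath; ArrayList is used as the spied class.
import static org.mockito.Mockito.*;
import java.util.ArrayList;
import java.util.List;

class SpyStubbingSketch {
    void demo() {
        List<String> spyList = spy(new ArrayList<>());

        // Safe: doReturn-when does NOT call the real get(0),
        // so no IndexOutOfBoundsException on the empty list
        doReturn("stubbed").when(spyList).get(0);

        // Risky: when-thenReturn evaluates spyList.get(0) first, which invokes
        // the real method on the empty list and throws IndexOutOfBoundsException
        // when(spyList.get(0)).thenReturn("stubbed");
    }
}
```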
●​ Behavior Verification in Spy
○​ It is the same as with a mock

Testing website
https://www.geeksforgeeks.org/spring-boot-mockmvc-testing-with-example-project/

—------------------------------------------- Flyway --------------------------------------------------------------------

Flyway:
●​ A database migration tool that simplifies DB schema management, making it easier to
handle schema changes, upgrades, and version control across different environments
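A minimal sketch of how Flyway is typically used in a Spring Boot project; the table and column names are illustrative. Versioned migrations live under `src/main/resources/db/migration` and follow the naming convention `V<version>__<description>.sql`:

```sql
-- src/main/resources/db/migration/V1__create_users_table.sql
-- Flyway applies pending versioned migrations in order at startup
CREATE TABLE users (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);
```

With the flyway-core dependency on the classpath, Spring Boot runs pending migrations automatically and records applied versions in Flyway's schema history table.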
—------------------------------------- Lombok —-------------------------------------------
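A small hedged sketch of common Lombok annotations; it assumes the Lombok dependency and annotation processing are enabled, and the `User` class is an illustrative example:

```java
// Sketch only: assumes Lombok on the classpath with annotation processing enabled.
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data               // generates getters, setters, equals, hashCode, toString
@Builder            // generates a fluent builder API
@NoArgsConstructor
@AllArgsConstructor
public class User {
    private Long id;
    private String name;
}
```

Usage would then look like `User u = User.builder().id(1L).name("alice").build();`, with no hand-written boilerplate.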
—-------------------------- spring boot database relationship mapping —---------------------
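A sketch of a one-to-many mapping with JPA annotations; it assumes Spring Data JPA (jakarta.persistence), and the `Author`/`Book` entities are illustrative assumptions:

```java
// Sketch only: assumes spring-boot-starter-data-jpa; entity names are illustrative.
import jakarta.persistence.*;
import java.util.ArrayList;
import java.util.List;

@Entity
public class Author {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    // One author has many books; "author" refers to the owning field in Book
    @OneToMany(mappedBy = "author", cascade = CascadeType.ALL)
    private List<Book> books = new ArrayList<>();
}

@Entity
class Book {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @ManyToOne
    @JoinColumn(name = "author_id") // foreign key column on the book table
    private Author author;
}
```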
—--------------------------------------------- spring boot reactive programming —--------------------------
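A minimal sketch of the reactive types used throughout this section, assuming reactor-core (pulled in by spring-boot-starter-webflux); the values are illustrative:

```java
// Sketch only: assumes reactor-core on the classpath.
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactiveSketch {
    public static void main(String[] args) {
        Mono<String> one = Mono.just("hello");       // a publisher of 0..1 elements
        Flux<Integer> many = Flux.just(1, 2, 3, 4)   // a publisher of 0..N elements
                .filter(n -> n % 2 == 0)
                .map(n -> n * 10);

        // Nothing is emitted until a subscriber is attached
        one.subscribe(System.out::println);
        many.subscribe(System.out::println);         // 20, then 40
    }
}
```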
●​ Spring Data R2DBC
○​
■​ implementation 'org.springframework.boot:spring-boot-starter-data-r2dbc'
implementation 'io.r2dbc:r2dbc-postgresql' // Replace with your database
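With those dependencies, a reactive repository can be sketched like this; the `UserRepository`/`User` names and the derived query are illustrative assumptions:

```java
// Sketch only: assumes Spring Data R2DBC; entity and method names are illustrative.
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

public interface UserRepository extends ReactiveCrudRepository<User, Long> {
    // Queries return reactive types (Flux/Mono) instead of blocking results
    Flux<User> findByName(String name);
}
```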
Spring web client
●​ Adding dependency​
○​ implementation
'org.springframework.boot:spring-boot-starter-webflux'
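With that dependency, a non-blocking HTTP call can be sketched as follows; the base URL and `UserClient` class are hypothetical:

```java
// Sketch only: assumes spring-boot-starter-webflux; the URL is illustrative.
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

public class UserClient {
    private final WebClient webClient = WebClient.builder()
            .baseUrl("https://api.example.com")   // hypothetical base URL
            .build();

    public Mono<String> fetchUser(long id) {
        return webClient.get()
                .uri("/users/{id}", id)
                .retrieve()
                .bodyToMono(String.class);  // non-blocking: returns a Mono, not the body
    }
}
```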
spring webflux webtestclient
spring boot functional programming
spring boot WebFlux.fn rest services
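A sketch of a functional (router/handler) REST endpoint with WebFlux.fn; the path and response are illustrative assumptions:

```java
// Sketch only: assumes spring-boot-starter-webflux; paths and payloads are illustrative.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

@Configuration
public class RouterConfig {
    @Bean
    public RouterFunction<ServerResponse> routes() {
        // Routes are declared as functions instead of @RequestMapping annotations
        return route(GET("/hello"),
                request -> ServerResponse.ok().bodyValue("Hello, WebFlux.fn!"));
    }
}
```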
Java Streams
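A small self-contained example of the stream pipeline style (filter, map, collect, reduce); the numbers are illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamDemo {
    // Filter even numbers, square them, and collect the results into a list
    static List<Integer> evenSquares(List<Integer> nums) {
        return nums.stream()
                .filter(n -> n % 2 == 0)   // keep even numbers
                .map(n -> n * n)           // square each one
                .collect(Collectors.toList());
    }

    // Sum all elements using a reduction
    static int sum(List<Integer> nums) {
        return nums.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        List<Integer> nums = Arrays.asList(1, 2, 3, 4, 5, 6);
        System.out.println(evenSquares(nums)); // [4, 16, 36]
        System.out.println(sum(nums));         // 21
    }
}
```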
VAVR library
●​ VAVR is a functional programming library for Java that provides immutable data types
and functional control structures
●​ It is inspired by Scala and enhances Java with powerful tools to write cleaner, more
functional code.
●​ When integrating VAVR in a Spring Boot application, you can leverage its data structures
(like Option, Either, Try) and functional constructs for improved error handling,
immutability, and overall cleaner code.
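A brief sketch of `Option` and `Try`, assuming the io.vavr:vavr dependency; the values are illustrative:

```java
// Sketch only: assumes io.vavr:vavr on the classpath.
import io.vavr.control.Option;
import io.vavr.control.Try;

public class VavrSketch {
    public static void main(String[] args) {
        // Option: an explicit alternative to null
        Option<String> name = Option.of("alice");
        System.out.println(name.map(String::toUpperCase).getOrElse("unknown")); // ALICE

        // Try: captures exceptions as values instead of throwing them
        Try<Integer> parsed = Try.of(() -> Integer.parseInt("not-a-number"));
        System.out.println(parsed.getOrElse(-1)); // -1 (the failure fallback)
    }
}
```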
================================================================
Multi Thread
●​ Thread
○​ A thread is a lightweight process, the smallest unit of processing.
●​ Thread Class: Java provides a Thread class to create and manage threads.
●​ Multi Thread
○​ Java multithreading allows concurrent execution of two or more threads
○​ Multithreading enables you to write programs in which multiple activities can
proceed concurrently within the same program.
●​ Why
○​ For developing high-performance applications
○​ Fully utilize the resources
●​ Creating a Thread:
●​ Thread Lifecycle
○​ New: A thread is in this state when it is created but not yet started.
○​ Runnable: The thread is ready to run and waiting for CPU allocation.
○​ Running: The thread is executing.
○​ Blocked/Waiting: The thread is blocked or waiting for some resource.
○​ Terminated: The thread has completed its execution or exited.
●​ Create a Thread by Implementing a Runnable Interface
●​ Create a Thread by Extending a Thread Class
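Both approaches above can be sketched together in one runnable example (class names are illustrative); a shared counter confirms both threads actually ran:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadCreationDemo {
    static final AtomicInteger counter = new AtomicInteger();

    // Approach 1: implement Runnable (leaves the class free to extend something else)
    static class MyTask implements Runnable {
        @Override public void run() { counter.incrementAndGet(); }
    }

    // Approach 2: extend Thread directly and override run()
    static class MyThread extends Thread {
        @Override public void run() { counter.incrementAndGet(); }
    }

    // Start both threads, wait for them to finish, and return how many ran
    static int runBoth() {
        counter.set(0);
        Thread t1 = new Thread(new MyTask(), "runnable-thread");
        Thread t2 = new MyThread();
        t1.start();
        t2.start();
        try {
            t1.join();  // wait for both threads to terminate
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println("threads completed: " + runBoth()); // 2
    }
}
```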
●​ Sleep()
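A minimal sleep example; `Thread.sleep` pauses the current thread for at least the given time:

```java
public class SleepDemo {
    // Pause the current thread for the given milliseconds and report elapsed time
    static long pause(long millis) {
        long start = System.currentTimeMillis();
        try {
            Thread.sleep(millis);  // current thread yields the CPU for at least 'millis' ms
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // restore the interrupt flag
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        System.out.println("slept for about " + pause(100) + " ms");
    }
}
```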
●​ Java - Naming a Thread with real world Examples
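A small example of naming threads; the names are illustrative, in the spirit of giving worker threads descriptive real-world names:

```java
public class ThreadNamingDemo {
    public static void main(String[] args) {
        // Name can be set via the constructor...
        Thread worker = new Thread(() -> {}, "payment-processor");
        System.out.println(worker.getName()); // payment-processor

        // ...or changed later with setName()
        worker.setName("order-processor");
        System.out.println(worker.getName()); // order-processor
    }
}
```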
●​ Executor services
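A self-contained ExecutorService example using a fixed-size pool; tasks are submitted as Callables and results collected via Futures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {
    // Submit n square-computing tasks to a pool of 3 reusable threads and sum the results
    static int sumSquares(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int x = i;
                futures.add(pool.submit(() -> x * x)); // Callable runs on a pool thread
            }
            int total = 0;
            for (Future<Integer> f : futures) {
                total += f.get(); // blocks until each result is ready
            }
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown(); // no new tasks accepted; running tasks finish
        }
    }

    public static void main(String[] args) {
        System.out.println(sumSquares(4)); // 1 + 4 + 9 + 16 = 30
    }
}
```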

Java - Scheduling Threads with Examples
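A minimal scheduling example using `ScheduledExecutorService` to run a task after a delay; the message is illustrative:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class SchedulerDemo {
    // Run a task once after the given delay and return its result
    static String runDelayed(String message, long delayMillis) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        try {
            ScheduledFuture<String> future = scheduler.schedule(
                    () -> "done: " + message,       // Callable executed after the delay
                    delayMillis, TimeUnit.MILLISECONDS);
            return future.get();                    // wait for the scheduled task
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            scheduler.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runDelayed("report generated", 100));
    }
}
```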

Thread Pools

●​ Why Use Thread Pools in Java?
○​ It saves time because there is no need to create a new thread for each task.
○​ It is used in Servlet and JSP containers, where the container creates a thread pool to
process incoming requests.
Main Thread in Java
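A small example inspecting the main thread, which the JVM starts automatically to run `main()`:

```java
public class MainThreadDemo {
    // Describe any thread as "name/priority/daemon"
    static String describe(Thread t) {
        return t.getName() + "/" + t.getPriority() + "/" + t.isDaemon();
    }

    public static void main(String[] args) {
        // Program execution starts on a thread named "main"
        Thread main = Thread.currentThread();
        System.out.println(describe(main)); // typically main/5/false
    }
}
```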
Priority of a Thread in Java


●​ Thread Priorities
○​ Every Java thread has a priority that helps the operating system determine the
order in which threads are scheduled.
○​ Java thread priorities are in the range between MIN_PRIORITY (a constant of 1)
and MAX_PRIORITY (a constant of 10). By default, every thread is given priority
NORM_PRIORITY (a constant of 5).
○​ Threads with higher priority are more important to a program and should be
allocated processor time before lower-priority threads. However, thread priorities
cannot guarantee the order in which threads execute and are very much platform
dependent.
●​ Thread class provides methods and constants for working with the priorities of a Thread.
○​ MIN_PRIORITY: Specifies the minimum priority that a thread can have.
○​ NORM_PRIORITY: Specifies the default priority that a thread is assigned.
○​ MAX_PRIORITY: Specifies the maximum priority that a thread can have.
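The constants above can be demonstrated directly; note that a priority is only a scheduling hint, not a guarantee:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> {});
        // A new thread inherits the priority of the thread that created it
        System.out.println("initial priority: " + t.getPriority());

        t.setPriority(Thread.MAX_PRIORITY); // a hint to the scheduler, not a guarantee
        System.out.println("after setPriority: " + t.getPriority()); // 10

        System.out.println(Thread.MIN_PRIORITY + " "
                + Thread.NORM_PRIORITY + " "
                + Thread.MAX_PRIORITY); // 1 5 10
    }
}
```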
Daemon Thread in Java


●​ A daemon thread is created to support the user threads. It generally works in the
background and is terminated once all user threads have finished. The garbage
collector is one example of a daemon thread.
●​ Characteristics of a Daemon Thread in Java
○​ A daemon thread is a low-priority thread.
○​ A daemon thread is a service-provider thread and should not be used as a user
thread.
○​ The JVM exits once all user threads have finished; any remaining daemon threads
are terminated automatically.
○​ A daemon thread cannot prevent the JVM from exiting when all user threads are done.
●​ In this example, we've created a ThreadDemo class which extends Thread class. In
main method, we've created three threads. As we're setting one thread as daemon
thread, one thread will be printed as daemon thread.
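The example described above can be sketched as follows (a minimal version written from the description, since the original listing is an image):

```java
public class ThreadDemo extends Thread {
    @Override
    public void run() {
        if (isDaemon()) {
            System.out.println(getName() + " is a daemon thread");
        } else {
            System.out.println(getName() + " is a user thread");
        }
    }

    public static void main(String[] args) {
        ThreadDemo t1 = new ThreadDemo();
        ThreadDemo t2 = new ThreadDemo();
        ThreadDemo t3 = new ThreadDemo();

        t1.setDaemon(true); // must be called before start(), else IllegalThreadStateException
        t1.start();
        t2.start();
        t3.start();
    }
}
```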
ThreadGroup Class
●​ The Java ThreadGroup class represents a set of threads. It can also include other
thread groups. The thread groups form a tree in which every thread group except the
initial thread group has a parent.
●​ Example in the site
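A small illustrative sketch of the tree structure (group names are hypothetical):

```java
public class ThreadGroupDemo {
    public static void main(String[] args) {
        ThreadGroup parent = new ThreadGroup("parent-group");
        // Groups form a tree: child's parent is parent-group
        ThreadGroup child = new ThreadGroup(parent, "child-group");

        Thread worker = new Thread(child, () -> {}, "worker-1");
        System.out.println(worker.getThreadGroup().getName()); // child-group
        System.out.println(child.getParent().getName());       // parent-group
    }
}
```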

Java - Collections Framework

●​ The Java Collections Framework provides a set of classes and interfaces to handle
collections of objects in a systematic way.
●​ It offers a unified architecture for manipulating and storing groups of objects, allowing
developers to work with collections of data more efficiently and effectively.
●​ Here's an overview of the key components and concepts in the Java Collections
Framework:
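The three core collection types can be shown in one short example; the values are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class CollectionsDemo {
    public static void main(String[] args) {
        // List: ordered, allows duplicates
        List<String> list = new ArrayList<>(List.of("b", "a", "c"));
        Collections.sort(list);
        System.out.println(list); // [a, b, c]

        // Set: no duplicates
        Set<String> set = new HashSet<>(list);
        set.add("a"); // ignored, "a" already present
        System.out.println(set.size()); // 3

        // Map: key-value pairs with unique keys
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        System.out.println(map.get("a")); // 1
    }
}
```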