Advance Java all

The document is a comprehensive SQL cheat sheet that covers basic to advanced SQL concepts, including installation, data types, commands, and constraints. It provides detailed explanations and examples of various SQL functionalities such as creating tables, executing CRUD operations, and using SQL commands like DDL, DML, DCL, and TCL. Additionally, it outlines the importance of SQL in database management and data manipulation.


Rishi Singh

Advance Java Interview Q


SQL, MySQL, JDBC
J2EE Question
JDBC Interview Q
Servlet Question
JSP Question
Spring AOP, JDBC, Hibernate
Hibernate
Spring JPA
Spring Core
Spring Boot Question
Spring MVC
Spring Security
Docker
Microservices
DSA, Algorithm

2024

Written by: Rishi Singh

SQL Cheat Sheet


© Copyright by Interviewbit
Contents

Learn SQL: Basic to Advanced Concepts


1. Installation
2. Tables
3. SQL DataTypes
4. SQL Commands
5. SQL Constraints
6. Crud Operations in SQL
7. Important SQL Keywords
8. Clauses in SQL
9. SQL Operators
10. Keys in SQL
11. Functions in SQL
12. Joins in SQL
13. Triggers in SQL
14. SQL Stored Procedures
15. SQL Injection



Let's get Started

Introduction: What is SQL?

To get introduced to SQL, we first need to know about Databases and Database
Management Systems(DBMS).
Data is basically a collection of facts related to some object. A Database is a
collection of small units of data arranged in a systematic manner. A Relational
Database Management System is a collection of tools that allows the users to
manipulate, organize and visualize the contents of a database while following some
standard rules that facilitate fast response between the database and the user side.
After getting introduced to the concept of data, databases and DBMS/RDBMS, we can
finally learn about SQL. SQL or Structured Query Language is basically the language
that we (the user) use to communicate with the Databases and get our required
interpretation of data out of it. It is used for storing, manipulating and retrieving data
out of a database.


SQL Features

SQL allows us to interact with the databases and bring out/manipulate data within
them. Using SQL, we can create our own databases and then add data into these
databases in the form of tables.
The following functionalities can be performed on a database using SQL:
Create or Delete a Database.
Create or Alter or Delete some tables in a Database.
SELECT data from tables.
INSERT data into tables.
UPDATE data in tables.
DELETE data from tables.
Create Views in the database.
Execute various aggregate functions.

Learn SQL: Basic to Advanced Concepts


1. Installation


To get started with using SQL, we first need to install some Database Management
System server. After installing the RDBMS, it will provide all the
required tools to perform operations on the database and its contents through SQL.
Some common RDBMSs that are widely used are:
Oracle
MySQL
PostgreSQL
HeidiSQL (a GUI client for MySQL/MariaDB and SQL Server rather than an RDBMS itself)
To install any RDBMS, we just need to visit their official website and install the setup
file from there, by following the instructions available there. With the server setup,
we can set up a Query Editor, on which we can type our SQL Queries.
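For instance, once the server and query editor are ready, the first commands typically run look like the following (a minimal sketch assuming a MySQL-style server; the database name school is illustrative only):

CREATE DATABASE school;   /* create a new database */
USE school;               /* select it as the current database */
SHOW DATABASES;           /* list all databases available on the server */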

2. Tables
All data in the database are organized efficiently in the form of tables. A database can
be formed from a collection of multiple tables, where each table would be used for
storing a particular kind of data, and the tables themselves would be linked with
each other by using some relations.
Example:

ID Name Phone Class

INTEGER VARCHAR(25) VARCHAR(12) INTEGER

The above example is for a table of students and stores their Name, Phone, and Class
as data. The ID is assigned to each student to uniquely identify each student and
using this ID, we can relate data from this table to other tables.

SQL-Create Table:


We use the CREATE command to create a table. The table in the above example can
be created with the following code:

CREATE TABLE student(


ID INT NOT NULL,
Name varchar(25),
Phone varchar(12),
Class INT
);

SQL-Delete Table:

To delete a table from a database, we use the DROP command.

DROP TABLE student;

3. SQL DataTypes
To allow the users to work with tables effectively, SQL provides us with various
datatypes each of which can be useful based on the type of data we handle.


The above image is a chart that shows all the datatypes available in SQL along with
some of their examples.
The next section describes the most popular SQL datatypes, categorized under each
major division.

String Datatypes:

The table below lists all the String type datatypes available in SQL, along with their
descriptions:


Datatype - Description

CHAR(size) - A fixed-length string containing numbers, letters or special characters. Length may vary from 0 to 255.
VARCHAR(size) - A variable-length string where the length may vary from 0 to 65535. Similar to CHAR.
TEXT(size) - Can contain a string of up to 65535 bytes.
TINYTEXT - Can contain a string of up to 255 characters.
MEDIUMTEXT - Can contain a string of up to 16777215 characters.
LONGTEXT - Can contain a string of up to 4294967295 characters.
BINARY(size) - Similar to CHAR() but stores binary byte strings.
VARBINARY(size) - Similar to VARCHAR() but stores binary byte strings.
BLOB(size) - Holds Binary Large Objects (BLOBs) of up to 65535 bytes.
TINYBLOB - Holds BLOBs with a maximum size of 255 bytes.
MEDIUMBLOB - Holds BLOBs of up to 16777215 bytes.
LONGBLOB - Holds BLOBs of up to 4294967295 bytes.
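As an illustration, a hypothetical table mixing several of these string datatypes (a minimal sketch assuming MySQL; the table and column names are made up for this example):

CREATE TABLE article (
    code        CHAR(8),        /* fixed-length string */
    title       VARCHAR(255),   /* variable-length string */
    body        TEXT,           /* long text, up to 65535 bytes */
    cover_image MEDIUMBLOB      /* binary data such as an image */
);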



Numeric Datatypes:

The table below lists all the Numeric Datatypes in SQL along with their descriptions:


Datatype - Description

BIT(size) - A bit-value type, where size varies from 1 to 64. Default value: 1.
INT(size) - Integer with values in the signed range of -2147483648 to 2147483647 and in the unsigned range of 0 to 4294967295.
TINYINT(size) - Integer with values in the signed range of -128 to 127 and in the unsigned range of 0 to 255.
SMALLINT(size) - Integer with values in the signed range of -32768 to 32767 and in the unsigned range of 0 to 65535.
MEDIUMINT(size) - Integer with values in the signed range of -8388608 to 8388607 and in the unsigned range of 0 to 16777215.
BIGINT(size) - Integer with values in the signed range of -9223372036854775808 to 9223372036854775807 and in the unsigned range of 0 to 18446744073709551615.
BOOLEAN - Boolean values, where 0 is considered FALSE and non-zero values are considered TRUE.
FLOAT(p) - A floating-point number. If the precision parameter p is between 0 and 24, the type is FLOAT(); if it lies between 25 and 53, the datatype is DOUBLE().
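For example, a small sketch (assuming MySQL; the account table and its columns are illustrative) showing several numeric datatypes in use:

CREATE TABLE account (
    id        BIGINT,       /* large integer range */
    age       TINYINT,      /* small range: -128 to 127 */
    is_active BOOLEAN,      /* 0 = FALSE, non-zero = TRUE */
    balance   FLOAT(10)     /* precision <= 24, so stored as FLOAT */
);
INSERT INTO account VALUES (1, 25, TRUE, 1050.75);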

Date/Time Datatypes:

The datatypes available in SQL to handle Date/Time operations effectively are called
the Date/Time datatypes. The below table lists all the Date/Time variables in SQL
along with their description:

Datatype - Description

DATE - Stores a date in YYYY-MM-DD format, with dates in the range of '1000-01-01' to '9999-12-31'.
TIME(fsp) - Stores a time in hh:mm:ss format, with times in the range of '-838:59:59' to '838:59:59'.
DATETIME(fsp) - Stores a combination of date and time in YYYY-MM-DD hh:mm:ss format, with values in the range of '1000-01-01 00:00:00' to '9999-12-31 23:59:59'.
TIMESTAMP(fsp) - Stores values relative to the Unix Epoch (a Unix timestamp). Values lie in the range of '1970-01-01 00:00:01' UTC to '2038-01-09 03:14:07' UTC.
YEAR - Stores a year as a 4-digit number, with a range of 1901 to 2155.
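A minimal sketch (assuming MySQL; the login_log table is hypothetical) showing how these datatypes appear in a table definition and an insert:

CREATE TABLE login_log (
    user_id    INT,
    login_date DATE,        /* YYYY-MM-DD */
    login_time TIME,        /* hh:mm:ss */
    created_at TIMESTAMP    /* Unix-epoch based */
);
INSERT INTO login_log VALUES (1, '2024-01-15', '09:30:00', '2024-01-15 09:30:00');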


4. SQL Commands
SQL Commands are instructions that are used by the user to communicate with the
database, to perform specific tasks, functions and queries of data.
Types of SQL Commands:

The above image broadly shows the different types of SQL commands available in
SQL in the form of a chart.
1. Data Definition Language (DDL): It defines or changes a table's structure, by creating,
altering or deleting tables and their columns. Its changes are auto-committed (all changes
are automatically and permanently saved in the database). Some commands that are a part
of DDL are:
CREATE: Used to create a new table in the database.
Example:

CREATE TABLE STUDENT(Name VARCHAR2(20), Email VARCHAR2(100), DOB DATE);

ALTER: Used to alter the structure of a table by adding some new column or
attribute, or changing some existing attribute.


Example:

ALTER TABLE STUDENT ADD(ADDRESS VARCHAR2(20));


ALTER TABLE STUDENT MODIFY (ADDRESS VARCHAR2(20));

DROP: Used to delete the structure and record stored in the table.
Example:

DROP TABLE STUDENT;

TRUNCATE: Used to delete all the rows from the table, and free up the space in
the table.
Example:

TRUNCATE TABLE STUDENT;

2. Data Manipulation Language(DML): It is used for modifying a database, and is


responsible for any form of change in a database. These commands are not
auto-committed, i.e., changes are not automatically saved permanently in the database.
Some commands that are a part of DML are:
INSERT: Used to insert data in the row of a table.
Example:

INSERT INTO STUDENT (Name, Subject) VALUES ("Scaler", "DSA");

In the above example, we insert the values “Scaler” and “DSA” in the columns Name
and Subject in the STUDENT table.
UPDATE: Used to update value of a table’s column.
Example:


UPDATE STUDENT
SET User_Name = 'Interviewbit'
WHERE Student_Id = '2'

In the above example, we update the name of the student, whose Student_ID is 2, to
the User_Name = “Interviewbit”.
DELETE: Used to delete one or more rows in a table.
Example:

DELETE FROM STUDENT


WHERE Name = "Scaler";

In the above example, the query deletes the row where the Name of the student is
“Scaler” from the STUDENT table.
3. Data Control Language(DCL): These commands are used to grant and take back
access/authority (revoke) from any database user. Some commands that are a part of
DCL are:
Grant: Used to grant a user access privileges to a database.
Example:

GRANT SELECT, UPDATE ON TABLE_1 TO USER_1, USER_2;

In the above example, we grant the rights to SELECT and UPDATE data from the table
TABLE_1 to users - USER_1 and USER_2.
Revoke: Used to revoke the permissions from a user.
Example:

REVOKE SELECT, UPDATE ON TABLE_1 FROM USER_1, USER_2;

In the above example we revoke the rights to SELECT and UPDATE data from the
table TABLE_1 from the users- USER_1 and USER_2.


4. Transaction Control Language (TCL): These commands are used in conjunction with
DML commands to manage transactions, since DML changes are not committed
automatically. Some commands that are a part of TCL are:
COMMIT: Saves all the transactions made on a database.
Example:

DELETE FROM STUDENTS


WHERE AGE = 16;
COMMIT;

In the above example, we delete the rows where the AGE of the students is 16, and then
save this change to the database using COMMIT.
ROLLBACK: It is used to undo transactions that have not yet been saved.
Example:

DELETE FROM STUDENTS


WHERE AGE = 16;
ROLLBACK;

By using ROLLBACK in the above example, we can undo the deletion we performed in
the previous line of code, because the changes are not committed yet.
SAVEPOINT: Used to roll a transaction back to a certain point without having to
roll back the entirety of the transaction.
Example:

SAVEPOINT SAVED;
DELETE FROM STUDENTS
WHERE AGE = 16;
ROLLBACK TO SAVED;

In the above example, we have created a savepoint just before performing the delete
operation in the table, and then we can return to that savepoint using the ROLLBACK
TO command.


5. Data Query Language: It is used to fetch some data from a database. The
command belonging to this category is:
SELECT: It is used to retrieve selected data based on some conditions which are
described using the WHERE clause. It is to be noted that the WHERE clause is
also optional to be used here and can be used depending on the user’s needs.
Example: With WHERE clause,

SELECT Name
FROM Student
WHERE age >= 18;

Example: Without WHERE clause,

SELECT Name
FROM Student

In the first example, we will only select those names in the Student table, whose
corresponding age is greater than 17. In the 2nd example, we will select all the names
from the Student table.

5. SQL Constraints
Constraints are rules which are applied on a table. For example, specifying valid limits
or ranges on data in the table etc.
The valid constraints in SQL are:
1. NOT NULL: Specifies that this column cannot store a NULL value.
Example:

CREATE TABLE Student


(
ID int(8) NOT NULL,
NAME varchar(30) NOT NULL,
ADDRESS varchar(50)
);


In the above example, we create a table STUDENT, which has some attributes it has
to store. Among these attributes we declare that the columns ID and NAME cannot
have NULL values in their fields using NOT NULL constraint.
2. UNIQUE: Specifies that this column can have only Unique values, i.e the values
cannot be repeated in the column.
Example:

CREATE TABLE Student


(
ID int(8) UNIQUE,
NAME varchar(10) NOT NULL,
ADDRESS varchar(20)
);

In the above example, we create a table Student and declare the ID column to be
unique using the UNIQUE constraint.
3. Primary Key: It is a field using which it is possible to uniquely identify each row in
a table. We will get to know about this in detail in the upcoming section.
4. Foreign Key: It is a field that references the primary key of some other table, linking
rows across tables. We will get to know about this in detail in the upcoming section.
5. CHECK: It validates if all values in a column satisfy some particular condition or
not.
Example:

CREATE TABLE Student


(
ID int(6) NOT NULL,
NAME varchar(10),
AGE int CHECK (AGE < 20)
);

Here, in the above query, we add the CHECK constraint into the table. By adding the
constraint, we can only insert entries that satisfy the condition AGE < 20 into the
table.


6. DEFAULT: It specifies a default value for a column when no value is specified for
that field.
Example:

CREATE TABLE Student


(
ID int(8) NOT NULL,
NAME varchar(50) NOT NULL,
CLASS int DEFAULT 2
);

In the above query, we set a default value of 2 for the CLASS attribute. While inserting
records into the table, if the column has no value specified, then 2 is assigned to that
column as the default value.

6. Crud Operations in SQL


CRUD is an abbreviation for Create, Read, Update and Delete. These 4 operations
comprise the most basic database operations. The relevant commands for these 4
operations in SQL are:
Create: INSERT
Read: SELECT
Update: UPDATE
Delete: DELETE


The above image shows the pillars of SQL CRUD operations.


INSERT: To insert any new data ( create operation - C ) into a database, we use
the INSERT INTO statement.
SQL Syntax:

INSERT INTO name_of_table(column1, column2, ....)


VALUES(value1, value2, ....)

Example:

INSERT INTO student(ID, name, phone, class)


VALUES(1, 'Scaler', '+1234-4527', 12)

For multiple rows,


SQL Syntax:


INSERT INTO name_of_table(column1, column2, ....)


VALUES(value1, value2, ....),
(new_value1, new_value2, ...),
(....), ... ;

Example:

INSERT INTO student(ID, name, phone, class)


VALUES(1, 'Scaler', '+1234-4527', 12),
(2, 'Interviewbit', '+4321-7654', 11);

The first example inserts a row into the student table with the values 1, Scaler,
+1234-4527 and 12 in the ID, name, phone and class columns; the second form inserts
multiple rows in a single statement.
SELECT: We use the select statement to perform the Read ( R ) operation of
CRUD.
SQL Syntax:

SELECT column1,column2,.. FROM name_of_table;

Example:

SELECT name,class FROM student;

The above example allows the user to read the data in the name and class columns
from the student table.
UPDATE: Update is the ‘U’ component of CRUD. The Update command is used to
update the contents of specific columns of specific rows.
SQL Syntax:

UPDATE name_of_table
SET column1=value1,column2=value2,...
WHERE conditions...;


Example:

UPDATE customers
SET phone = '+1234-9876'
WHERE ID = 2;

The above SQL example code will update the table ‘customers’ whose ID is 2 with the
new given phone number.
DELETE:
The Delete command is used to delete or remove some rows from a table. It is the ‘D’
component of CRUD.
SQL Syntax:

DELETE FROM name_of_table


WHERE condition1, condition2, ...;

Example:

DELETE FROM student


WHERE class = 11;

The above SQL example code will delete the rows from the table student where the
class = 11 condition is true.

7. Important SQL Keywords


The below table lists some important keywords used in SQL, along with their
description and example.


Keyword - Description - Example

ADD - Adds a new column to an existing table. - ALTER TABLE student ADD email_address VARCHAR(255);
ALTER TABLE - Adds, edits or deletes columns in a table. - ALTER TABLE student DROP COLUMN email_address;
ALTER COLUMN - Changes the datatype of a table's column. - ALTER TABLE student ALTER COLUMN phone VARCHAR(15);
AS - Renames a table/column with an alias that exists only for the duration of the query. - SELECT name AS student_name, phone FROM student;
ASC - Used in conjunction with ORDER BY to sort data in ascending order. - SELECT column1, column2, ... FROM table_name ORDER BY column1, column2, ... ASC;
DESC - Used in conjunction with ORDER BY to sort data in descending order. - SELECT column1, column2, ... FROM table_name ORDER BY column1, column2, ... DESC;

8. Clauses in SQL
Clauses are in-built functions available in SQL and are used for filtering and analysing
data quickly allowing the user to efficiently extract the required information from the
database.
The below table lists some of the important SQL clauses and their description with
examples:


Name - Description - Example

WHERE - Used to select data from the database based on some conditions. - SELECT * FROM Employee WHERE age >= 18;
AND - Used to combine 2 or more conditions; returns true only if all the conditions are true. - SELECT * FROM Employee WHERE age >= 18 AND salary >= 45000;
OR - Similar to AND, but returns true if any of the conditions is true. - SELECT * FROM Employee WHERE salary >= 45000 OR age >= 18;
LIKE - Used to search for a specified pattern in a column. - SELECT * FROM Students WHERE Name LIKE 'a%';
LIMIT - Puts a restriction on how many rows are returned from a query. - SELECT * FROM table1 LIMIT 3;
ORDER BY - Used to sort given data in ascending or descending order. - SELECT * FROM student ORDER BY age ASC;
GROUP BY - Groups rows that have the same values into summary rows. - SELECT COUNT(StudentID), State FROM Students GROUP BY State;
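Several of these clauses are commonly combined in one query. A small sketch, assuming a Students table with StudentID, Name and State columns as in the examples above:

SELECT State, COUNT(StudentID) AS total_students
FROM Students
WHERE Name LIKE 'a%'          /* filter rows first */
GROUP BY State                /* then group the remaining rows */
ORDER BY total_students DESC  /* sort the summary rows */
LIMIT 3;                      /* return only the top 3 */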


9. SQL Operators
Operators are used in SQL to form complex expressions which can be evaluated to
code more intricate queries and extract more precise data from a database.
The main types of operators are Arithmetic, Bitwise, Relational (Comparison), Compound
and Logical operators, each of which is described below.

Arithmetic Operators:
Arithmetic operators allow the user to perform arithmetic operations in SQL. The
table below shows the list of arithmetic operators available in SQL:


Operator Description

+ Addition

- Subtraction

* Multiplication

/ Division

% Modulo
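For instance, a short sketch using a hypothetical Employee table with a salary column to show the arithmetic operators inside a SELECT:

SELECT salary + 500 AS with_bonus,
       salary - 500 AS after_deduction,
       salary * 2   AS doubled,
       salary / 12  AS monthly,
       salary % 2   AS remainder
FROM Employee;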

Bitwise Operators:
Bitwise operators are used to perform bit manipulation operations in SQL. The
table below shows the list of bitwise operators available in SQL:

Operator Description

& Bitwise AND

| Bitwise OR

^ Bitwise XOR

Relational Operators:
Relational operators are used to form relational expressions in SQL, i.e., those
expressions whose value results in either true or false. The table below shows the list
of relational operators available in SQL:


Operator Description

= Equal to

> Greater than

< Less than

>= Greater than or equal to

<= Less than or equal to

<> Not equal to

Compound Operators:
Compound operators combine an arithmetic or bitwise operator with assignment, and
can be used as shorthand while writing code. The table
below shows the list of compound operators available in SQL:


Operator Description

+= Add equals

-= Subtract equals

*= Multiply equals

/= Divide equals

%= Modulo equals

&= AND equals

|= OR equals

^= XOR equals
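Compound assignment is SQL Server (T-SQL) syntax; a minimal sketch using a local variable to show how the shorthand expands:

DECLARE @total INT = 10;
SET @total += 5;    /* same as SET @total = @total + 5 */
SET @total *= 2;    /* same as SET @total = @total * 2 */
SELECT @total;      /* returns 30 */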

Logical Operators:
Logical operators are used to combine 2 or more relational statements into 1
compound statement whose truth value is evaluated as a whole. The table below
shows the SQL logical operators with their description:


Operator - Description

ALL - Returns True if all subqueries meet the given condition.
AND - Returns True if all the conditions are true.
ANY - Returns True if any of the subqueries meets the given condition.
BETWEEN - Returns True if the operand lies within the given range.
EXISTS - Returns True if the subquery returns one or more records.
IN - Returns True if the operand is equal to at least one of the values in a given list of expressions.
LIKE - Returns True if the operand matches a given pattern.
NOT - Displays a record if the given set of conditions is False.
OR - Returns True if any of the conditions turns out to be True.
SOME - Returns True if any of the subqueries meets the given condition.
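A short sketch of a few of these operators in use, assuming hypothetical Employee, Customers and Orders tables with the columns shown:

SELECT * FROM Employee
WHERE age BETWEEN 25 AND 40
  AND department IN ('Sales', 'HR');

SELECT name FROM Customers C
WHERE EXISTS (SELECT 1 FROM Orders O WHERE O.customer_id = C.id);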


10. Keys in SQL


A database consists of multiple tables and these tables and their contents are related
to each other by some relations/conditions. To identify each row of these tables
uniquely, we make use of SQL keys. A SQL key can be a single column or a group of
columns used to uniquely identify the rows of a table. SQL keys are a means to
ensure that no row will have duplicate values. They are also a means to establish
relations between multiple tables in a database.
Types of Keys:
1. Primary Key: They uniquely identify a row in a table.
Properties:
Only a single primary key for a table. (A special case is a composite key, which
can be formed by the composition of 2 or more columns, and act as a single
candidate key.)
The primary key column cannot have any NULL values.
The primary key must be unique for each row.
Example:

CREATE TABLE Student (


ID int NOT NULL,
LastName varchar(255) NOT NULL,
FirstName varchar(255),
Class int,
PRIMARY KEY (ID)
);

The above example creates a table called STUDENT with some given
properties(columns) and assigns the ID column as the primary key of the table. Using
the value of ID column, we can uniquely identify its corresponding row.
2. Foreign Key: Foreign keys are keys that reference the primary keys of some other
table. They establish a relationship between 2 tables and link them up.


Example: In the below example, a table called Orders is created with some given
attributes; its Primary Key is declared to be OrderID, and its Foreign Key is declared
to be PersonID, which references the Persons table. The Persons table is assumed to
have been created beforehand.

CREATE TABLE Orders (


OrderID int NOT NULL,
OrderNumber int NOT NULL,
PersonID int,
PRIMARY KEY (OrderID),
FOREIGN KEY (PersonID) REFERENCES Persons(PersonID)
);

Super Key: It is a group of single or multiple keys which identifies rows of a table.
Candidate Key: It is a collection of unique attributes that can uniquely identify
tuples in a table.
Alternate Key: It is a candidate key (a column or group of columns) that was not
chosen as the primary key but can still identify every row in a table uniquely.
Compound Key: It is a collection of more than one field that can be used to
uniquely identify a specific record.
Composite Key: A collection of more than one column that can uniquely identify
rows in a table.
Surrogate Key: It is an artificial key that aims to uniquely identify each record.
Amongst these, the Primary and Foreign keys are most commonly used.
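As an illustration of a composite/compound key, a minimal sketch (the Enrollment table and its columns are hypothetical) where no single column is unique but the pair of columns is:

CREATE TABLE Enrollment (
    StudentID INT NOT NULL,
    CourseID  INT NOT NULL,
    Grade     VARCHAR(2),
    PRIMARY KEY (StudentID, CourseID)   /* composite key: both columns together identify a row */
);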

11. Functions in SQL


SQL Server has many built-in functions, some of which are listed below:
SQL Server String Functions:
The table below lists some of the String functions in SQL with their description:


Name - Description

ASCII - Returns the ASCII value for a specific character.
CHAR - Returns a character based on the ASCII code.
CONCAT - Concatenates 2 strings together.
SOUNDEX - Returns the similarity of 2 strings in terms of a 4-character code.
DIFFERENCE - Compares 2 SOUNDEX values and returns the result as an integer.
SUBSTRING - Extracts a substring from a given string.
TRIM - Removes leading and trailing whitespace from a string.
UPPER - Converts a string to upper-case.

SQL Server Numeric Functions:


The table below lists some of the Numeric functions in SQL with their description:


Name - Description

ABS - Returns the absolute value of a number.
ASIN - Returns the arc sine value of a number.
AVG - Returns the average value of an expression.
COUNT - Counts the number of records returned by a SELECT query.
EXP - Returns e raised to the power of a number.
FLOOR - Returns the greatest integer <= the number.
RAND - Returns a random number.
SIGN - Returns the sign of a number.
SQRT - Returns the square root of a number.
SUM - Returns the sum of a set of values.

SQL Server Date Functions:


The table below lists some of the Date functions in SQL with their description:


Name - Description

CURRENT_TIMESTAMP - Returns the current date and time.
DATEADD - Adds a date/time interval to a date and returns the new date.
DATENAME - Returns a specified part of a date (as a string).
DATEPART - Returns a specified part of a date (as an integer).
DAY - Returns the day of the month for a specified date.
GETDATE - Returns the current date and time from the database.

SQL Server Advanced Functions:


The table below lists some of the Advanced functions in SQL with their description:


Name - Description

CAST - Typecasts a value into a specified datatype.
CONVERT - Converts a value into a specified datatype.
IIF - Returns one value if a condition evaluates to True, else some other value.
ISNULL - Returns a specified value if the expression is NULL, else returns the expression.
ISNUMERIC - Checks whether an expression is numeric or not.
SYSTEM_USER - Returns the login name for the current user.
USER_NAME - Returns the database user name based on the specified id.
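To see a few of these together, a small sketch assuming SQL Server syntax and a hypothetical Student table with Name and Phone columns:

SELECT UPPER(Name)           AS upper_name,      /* string function */
       SUBSTRING(Name, 1, 3) AS short_name,      /* string function */
       ABS(-15)              AS absolute_value,  /* numeric function */
       GETDATE()             AS today,           /* date function */
       ISNULL(Phone, 'N/A')  AS phone            /* advanced function */
FROM Student;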

12. Joins in SQL


Joins are a SQL concept that allows us to fetch data after combining multiple tables
of a database.
The following are the types of joins in SQL:
INNER JOIN: Returns any records which have matching values in both tables.


Example:
Consider the following tables,

Let us try to build the below table, using Joins,


The SQL code will be as follows,

SELECT orders.order_id, products.product_name, customers.customer_name, products.price
FROM orders
INNER JOIN products ON products.product_id = orders.product_id
INNER JOIN customers ON customers.customer_id = orders.customer_id;

NATURAL JOIN: It is a special type of inner join based on the fact that the
column names and datatypes are the same on both tables.
Syntax:

Select * from table1 Natural JOIN table2;

Example:

Select * from Customers Natural JOIN Orders;

In the above example, we are merging the Customers and Orders table shown above
using a NATURAL JOIN based on the common column customer_id.
RIGHT JOIN: Returns all of the records from the second table, along with any
matching records from the first.


Example:
Let us define an Orders table first,

Let us also define an Employee table,


Applying right join on these tables,

SELECT Orders.OrderID, Employees.LastName, Employees.FirstName


FROM Orders
RIGHT JOIN Employees
ON Orders.EmployeeID = Employees.EmployeeID
ORDER BY Orders.OrderID;

The resultant table will be,

LEFT JOIN: Returns all of the records from the first table, along with any
matching records from the second table.


Example:
Consider the below Customer and Orders table,


We will apply Left Join on the above tables, as follows,

SELECT Customers.CustomerName, Orders.OrderID


FROM Customers
LEFT JOIN Orders
ON Customers.CustomerID=Orders.CustomerID
ORDER BY Customers.CustomerName;

The top few entries of the resultant table will appear as shown in the below image.

FULL JOIN: Returns all records from both tables, combining the rows where there is a match in either table.

Example:


Consider the below tables, Customers and Orders,


Table Customers:

Table Orders:

Applying Outer Join on the above 2 tables, using the code:


SELECT ID, NAME, AMOUNT, DATE


FROM CUSTOMERS
FULL JOIN ORDERS
ON CUSTOMERS.ID = ORDERS.CUSTOMER_ID;

We will get the following table as the result of the outer join.

13. Triggers in SQL


SQL code that is automatically executed in response to a certain event occurring on a table
of a database is called a trigger. There cannot be more than 1 trigger with the same
action time and event for one table.
Syntax:

Create Trigger Trigger_Name


(Before | After) [ Insert | Update | Delete]
on [Table_Name]
[ for each row | for each column ]
[ trigger_body ]

Example:


CREATE TRIGGER trigger1


before INSERT
ON Student
FOR EACH ROW
SET new.total = (new.marks/ 10) * 100;

Here, we create a new Trigger called trigger1, just before we perform an INSERT
operation on the Student table, we calculate the percentage of the marks for each
row.
Some common operations that can be performed on triggers are:
DROP: This operation will drop an already existing trigger from the table.
Syntax:

DROP TRIGGER trigger_name;

SHOW: This will display all the triggers that are currently present in the table.
Syntax:

SHOW TRIGGERS IN database_name;

14. SQL Stored Procedures


A stored procedure is prepared SQL code that is saved in the database so that it can be
reused over and over again.
Syntax:

CREATE PROCEDURE procedure_name AS sql_statement


GO;

To execute a stored procedure,

EXEC procedure_name;


Example:

CREATE PROCEDURE SelectAllCustomers AS SELECT * FROM Customers;


GO;

The above example creates a stored procedure called ‘SelectAllCustomers’, which


selects all the records from the customer table.
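Stored procedures can also take parameters. A minimal sketch in the same SQL Server style, assuming a Customers table with a City column (the procedure and parameter names are illustrative):

CREATE PROCEDURE SelectCustomersByCity @City VARCHAR(30)
AS
SELECT * FROM Customers WHERE City = @City
GO

EXEC SelectCustomersByCity @City = 'London';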

15. SQL Injection


Insertion or ‘injection’ of some SQL query through the input data sent from the client to the
application is called SQL Injection. Injected queries can perform CRUD operations on the
database and can lead to vulnerabilities and loss of data.
It can occur in 2 ways:
Data is used to dynamically construct an SQL Query.
Unintended data from an untrusted source enters the application.
The consequences of SQL Injections can be Confidentiality issues, Authentication
breaches, Authorization vulnerabilities, and breaking the Integrity of the system.


The above image shows an example of SQL injections, through the use of 2 tables -
students and library.
Here the hacker is injecting SQL code -

UNION SELECT studentName, rollNo FROM students

into the Database server, where his query is used to JOIN the tables - students and
library. Joining the 2 tables, the result of the query is returned from the database,
using which the hacker gains access to the information he needs thereby taking
advantage of the system vulnerability. The arrows in the diagram show the flow of
how the SQL Injection causes the vulnerability in the database system, starting from
the hacker’s computer.
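A rough sketch of how the injected UNION above changes the executed query, assuming the application builds its SQL by concatenating user input (the library columns shown are hypothetical):

/* Intended query, where <input> comes from the user's form field: */
SELECT bookName, issueDate FROM library WHERE studentRollNo = '<input>';

/* If the attacker submits:  1' UNION SELECT studentName, rollNo FROM students --
   the database actually executes: */
SELECT bookName, issueDate FROM library WHERE studentRollNo = '1'
UNION SELECT studentName, rollNo FROM students -- ';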

Conclusion:
Databases are growing increasingly important in our modern industry where data is
considered to be a new wealth. Managing these large amounts of data, gaining
insights from them and storing them in a cost-effective manner makes database
management highly important in any modern software being made. To manage any
form of databases/RDBMS, we need to learn SQL which allows us to easily code and
manage data from these databases and create large scalable applications of the
future, which caters to the needs of millions.





SQL Interview Questions

Contents

SQL Interview Questions


1. What is Database?
2. What is DBMS?
3. What is RDBMS? How is it different from DBMS?
4. What is SQL?
5. What is the difference between SQL and MySQL?
6. What are Tables and Fields?
7. What are Constraints in SQL?
8. What is a Primary Key?
9. What is a UNIQUE constraint?
10. What is a Foreign Key?
11. What is a Join? List its different types.
12. What is a Self-Join?
13. What is a Cross-Join?
14. What is an Index? Explain its different types.
15. What is the difference between Clustered and Non-clustered index?
16. What is Data Integrity?
17. What is a Query?
18. What is a Subquery? What are its types?
19. What is the SELECT statement?
20. What are some common clauses used with SELECT query in SQL?


21. What are UNION, MINUS and INTERSECT commands?
22. What is Cursor? How to use a Cursor?
23. What are Entities and Relationships?
24. List the different types of relationships in SQL.
25. What is an Alias in SQL?
26. What is a View?
27. What is Normalization?
28. What is Denormalization?
29. What are the various forms of Normalization?
30. What are the TRUNCATE, DELETE and DROP statements?
31. What is the difference between DROP and TRUNCATE statements?
32. What is the difference between DELETE and TRUNCATE statements?
33. What are Aggregate and Scalar functions?
34. What is User-defined function? What are its various types?
35. What is OLTP?
36. What are the differences between OLTP and OLAP?
37. What is Collation? What are the different types of Collation Sensitivity?
38. What is a Stored Procedure?
39. What is a Recursive Stored Procedure?
40. How to create empty tables with the same structure as another table?

41. What is Pattern Matching in SQL?

PostgreSQL Interview Questions


42. What is PostgreSQL?
43. How do you define Indexes in PostgreSQL?
44. How will you change the datatype of a column?
45. What is the command used for creating a database in PostgreSQL?
46. How can we start, restart and stop the PostgreSQL server?
47. What are partitioned tables called in PostgreSQL?
48. Define tokens in PostgreSQL?
49. What is the importance of the TRUNCATE statement?
50. What is the capacity of a table in PostgreSQL?
51. Define sequence.
52. What are string constants in PostgreSQL?
53. How can you get a list of all databases in PostgreSQL?
54. How can you delete a database in PostgreSQL?
55. What are ACID properties? Is PostgreSQL compliant with ACID?
56. Can you explain the architecture of PostgreSQL?
57. What do you understand by multi-version concurrency control?
58. What do you understand by command enable-debug?


59. How do you check the rows affected as part of previous transactions?
60. What can you tell about WAL (Write Ahead Logging)?
61. What is the main disadvantage of deleting data from an existing table using the
DROP TABLE command?
62. How do you perform case-insensitive searches using regular expressions in
PostgreSQL?
63. How will you take backup of the database in PostgreSQL?
64. Does PostgreSQL support full text search?
65. What are parallel queries in PostgreSQL?
66. Differentiate between commit and checkpoint.



Let's get Started
Are you preparing for your SQL developer interview?
Then you have come to the right place.
This guide will help you to brush up on your SQL skills, regain your confidence and be
job-ready!
Here, you will find a collection of real-world interview questions asked in companies
like Google, Oracle, Amazon, and Microsoft. Each question comes with a perfectly
written answer inline, saving your interview preparation time.
It also covers practice problems to help you understand the basic concepts of SQL.
We've divided this article into the following sections:
SQL Interview Questions
PostgreSQL Interview Questions
In the end, multiple-choice questions are provided to test your understanding.



SQL Interview Questions


1. What is Database?
A database is an organized collection of data, stored and retrieved digitally from a
remote or local computer system. Databases can be vast and complex, and such
databases are developed using fixed design and modeling approaches.

2. What is DBMS?
DBMS stands for Database Management System. DBMS is system software
responsible for the creation, retrieval, updation, and management of the database. It
ensures that our data is consistent, organized, and easily accessible by serving as
an interface between the database and its end-users or application software.

3. What is RDBMS? How is it different from DBMS?


RDBMS stands for Relational Database Management System. The key difference here,
compared to DBMS, is that RDBMS stores data in the form of a collection of tables,
and relations can be defined between the common fields of these tables. Most
modern database management systems like MySQL, Microsoft SQL Server, Oracle,
IBM DB2, and Amazon Redshift are based on RDBMS.

4. What is SQL?
SQL stands for Structured Query Language. It is the standard language for relational
database management systems. It is especially useful in handling organized data
comprised of entities (variables) and relations between different entities of the data.


5. What is the difference between SQL and MySQL?


SQL is a standard language for retrieving and manipulating structured databases. On
the contrary, MySQL is a relational database management system, like SQL Server,
Oracle or IBM DB2, that is used to manage SQL databases.

6. What are Tables and Fields?


A table is an organized collection of data stored in the form of rows and columns.
Columns can be categorized as vertical and rows as horizontal. The columns in a
table are called fields while the rows can be referred to as records.

7. What are Constraints in SQL?


Constraints are used to specify the rules concerning data in the table. It can be
applied to single or multiple fields in an SQL table during the creation of the table or
after creation using the ALTER TABLE command. The constraints are:
NOT NULL - Restricts NULL value from being inserted into a column.
CHECK - Verifies that all values in a field satisfy a condition.
DEFAULT - Automatically assigns a default value if no value has been specified
for the field.
UNIQUE - Ensures unique values to be inserted into the field.
INDEX - Indexes a field providing faster retrieval of records.
PRIMARY KEY - Uniquely identifies each record in a table.
FOREIGN KEY - Ensures referential integrity for a record in another table.

8. What is a Primary Key?


The PRIMARY KEY constraint uniquely identifies each row in a table. It must contain
UNIQUE values and has an implicit NOT NULL constraint.
A table in SQL is strictly restricted to have one and only one primary key, which is
comprised of single or multiple fields (columns).

CREATE TABLE Students (   /* Create table with a single field as primary key */
   ID INT NOT NULL,
   Name VARCHAR(255),
   PRIMARY KEY (ID)
);

CREATE TABLE Students (   /* Create table with multiple fields as primary key */
   ID INT NOT NULL,
   LastName VARCHAR(255),
   FirstName VARCHAR(255) NOT NULL,
   CONSTRAINT PK_Student
   PRIMARY KEY (ID, FirstName)
);

ALTER TABLE Students   /* Set a column as primary key */
ADD PRIMARY KEY (ID);

ALTER TABLE Students   /* Set multiple columns as primary key */
ADD CONSTRAINT PK_Student   /* Naming a Primary Key */
PRIMARY KEY (ID, FirstName);

Write a SQL statement to add primary key 't_id' to the table 'teachers'.


Write a SQL statement to add primary key constraint 'pk_a' for table 'table_a'
and fields 'col_b, col_c'.

9. What is a UNIQUE constraint?
A UNIQUE constraint ensures that all values in a column are different. This provides
uniqueness for the column(s) and helps identify each row uniquely. Unlike primary
key, there can be multiple unique constraints defined per table. The code syntax for
UNIQUE is quite similar to that of PRIMARY KEY and can be used interchangeably.


CREATE TABLE Students (   /* Create table with a single field as unique */
   ID INT NOT NULL UNIQUE,
   Name VARCHAR(255)
);

CREATE TABLE Students (   /* Create table with multiple fields as unique */
   ID INT NOT NULL,
   LastName VARCHAR(255),
   FirstName VARCHAR(255) NOT NULL,
   CONSTRAINT PK_Student
   UNIQUE (ID, FirstName)
);

ALTER TABLE Students   /* Set a column as unique */
ADD UNIQUE (ID);

ALTER TABLE Students   /* Set multiple columns as unique */
ADD CONSTRAINT PK_Student   /* Naming a unique constraint */
UNIQUE (ID, FirstName);

10. What is a Foreign Key?


A FOREIGN KEY comprises of single or collection of fields in a table that essentially
refers to the PRIMARY KEY in another table. Foreign key constraint ensures referential
integrity in the relation between two tables.
The table with the foreign key constraint is labeled as the child table, and the table
containing the candidate key is labeled as the referenced or parent table.

CREATE TABLE Students (   /* Create table with foreign key - Way 1 */
   ID INT NOT NULL,
   Name VARCHAR(255),
   LibraryID INT,
   PRIMARY KEY (ID),
   FOREIGN KEY (LibraryID) REFERENCES Library(LibraryID)
);

CREATE TABLE Students (   /* Create table with foreign key - Way 2 */
   ID INT NOT NULL PRIMARY KEY,
   Name VARCHAR(255),
   LibraryID INT FOREIGN KEY REFERENCES Library(LibraryID)
);

ALTER TABLE Students   /* Add a new foreign key */
ADD FOREIGN KEY (LibraryID)
REFERENCES Library (LibraryID);


What type of integrity constraint does the foreign key ensure?


Write a SQL statement to add a FOREIGN KEY 'col_fk' in 'table_y' that references
'col_pk' in 'table_x'.

11. What is a Join? List its different types.
The SQL Join clause is used to combine records (rows) from two or more tables in a
SQL database based on a related column between the two.

There are four different types of JOINs in SQL:


(INNER) JOIN: Retrieves records that have matching values in both tables
involved in the join. This is the widely used join for queries.


SELECT *
FROM Table_A A
JOIN Table_B B
ON A.col = B.col;

SELECT *
FROM Table_A A
INNER JOIN Table_B B
ON A.col = B.col;

LEFT (OUTER) JOIN: Retrieves all the records/rows from the left table and the
matched records/rows from the right table.

SELECT *
FROM Table_A A
LEFT JOIN Table_B B
ON A.col = B.col;

RIGHT (OUTER) JOIN: Retrieves all the records/rows from the right table and the
matched records/rows from the left table.

SELECT *
FROM Table_A A
RIGHT JOIN Table_B B
ON A.col = B.col;

FULL (OUTER) JOIN: Retrieves all the records where there is a match in either
the left or the right table.

SELECT *
FROM Table_A A
FULL JOIN Table_B B
ON A.col = B.col;

12. What is a Self-Join?


A self JOIN is a case of regular join where a table is joined to itself based on some
relation between its own column(s). Self-join uses the INNER JOIN or LEFT JOIN
clause and a table alias is used to assign different names to the table within the
query.


SELECT A.emp_id AS "Emp_ID",A.emp_name AS "Employee",


B.emp_id AS "Sup_ID",B.emp_name AS "Supervisor"
FROM employee A, employee B
WHERE A.emp_sup = B.emp_id;

13. What is a Cross-Join?


Cross join can be defined as a cartesian product of the two tables included in the join.
The table after the join contains the same number of rows as the cross-product of the
number of rows in the two tables. If a WHERE clause is used in cross join then the
query will work like an INNER JOIN.

SELECT stu.name, sub.subject


FROM students AS stu
CROSS JOIN subjects AS sub;

Write a SQL statement to CROSS JOIN 'table_1' with 'table_2' and fetch 'col_1'
from table_1 & 'col_2' from table_2 respectively. Do not use alias.



Write a SQL statement to perform SELF JOIN for 'Table_X' with alias 'Table_1'
and 'Table_2', on columns 'Col_1' and 'Col_2' respectively.

14. What is an Index? Explain its different types.
A database index is a data structure that provides a quick lookup of data in a column
or columns of a table. It enhances the speed of operations accessing data from a
database table at the cost of additional writes and memory to maintain the index
data structure.

CREATE INDEX index_name /* Create Index */


ON table_name (column_1, column_2);
DROP INDEX index_name; /* Drop Index */

There are different types of indexes that can be created for different purposes:
Unique and Non-Unique Index:
Unique indexes are indexes that help maintain data integrity by ensuring that no two
rows of data in a table have identical key values. Once a unique index has been
defined for a table, uniqueness is enforced whenever keys are added or changed
within the index.

CREATE UNIQUE INDEX myIndex


ON students (enroll_no);

Non-unique indexes, on the other hand, are not used to enforce constraints on the
tables with which they are associated. Instead, non-unique indexes are used solely to
improve query performance by maintaining a sorted order of data values that are
used frequently.
Clustered and Non-Clustered Index:
Clustered indexes are indexes whose order of the rows in the database corresponds
to the order of the rows in the index. This is why only one clustered index can exist in
a given table, whereas, multiple non-clustered indexes can exist in the table.


The only difference between clustered and non-clustered indexes is that the
database manager attempts to keep the data in the database in the same order as
the corresponding keys appear in the clustered index.
Clustering indexes can improve the performance of most query operations because
they provide a linear-access path to data stored in the database.

Write a SQL statement to create a UNIQUE INDEX "my_index" on "my_table" for


fields "column_1" & "column_2".

15. What is the difference between Clustered and Non-clustered
index?
As explained above, the differences can be broken down into three small factors -
Clustered index modifies the way records are stored in a database based on the
indexed column. A non-clustered index creates a separate entity within the table
which references the original table.
Clustered index is used for easy and speedy retrieval of data from the database,
whereas, fetching records from the non-clustered index is relatively slower.
In SQL, a table can have a single clustered index whereas it can have multiple
non-clustered indexes.

16. What is Data Integrity?


Data Integrity is the assurance of accuracy and consistency of data over its entire life-
cycle and is a critical aspect of the design, implementation, and usage of any system
which stores, processes, or retrieves data. It also defines integrity constraints to
enforce business rules on the data when it is entered into an application or a
database.

17. What is a Query?


A query is a request for data or information from a database table or combination of
tables. A database query can be either a select query or an action query.


SELECT fname, lname /* select query */


FROM myDb.students
WHERE student_id = 1;

UPDATE myDB.students /* action query */


SET fname = 'Captain', lname = 'America'
WHERE student_id = 1;

18. What is a Subquery? What are its types?


A subquery is a query within another query, also known as a nested query or inner
query. It is used to restrict or enhance the data to be queried by the main query, thus
restricting or enhancing the output of the main query respectively. For example, here
we fetch the contact information for students who have enrolled for the maths
subject:

SELECT name, email, mob, address


FROM myDb.contacts
WHERE roll_no IN (
SELECT roll_no
FROM myDb.students
WHERE subject = 'Maths');

There are two types of subquery - Correlated and Non-Correlated.


A correlated subquery cannot be considered as an independent query, but it can
refer to the column in a table listed in the FROM of the main query.
A non-correlated subquery can be considered as an independent query and the
output of the subquery is substituted in the main query.

Write a SQL query to update the field "status" in table "applications" from 0 to
1.



Write a SQL query to select the field "app_id" in table "applications" where
"app_id" less than 1000.


Write a SQL query to fetch the field "app_name" from "apps" where "apps.id" is
equal to the above collection of "app_id".

19. What is the SELECT statement?
SELECT operator in SQL is used to select data from a database. The data returned is
stored in a result table, called the result-set.

SELECT * FROM myDB.students;

20. What are some common clauses used with SELECT query in
SQL?
Some common SQL clauses used in conjunction with a SELECT query are as follows:
WHERE clause in SQL is used to filter records that are necessary, based on
specific conditions.
ORDER BY clause in SQL is used to sort the records based on some field(s) in
ascending (ASC) or descending order (DESC).

SELECT *
FROM myDB.students
WHERE graduation_year = 2019
ORDER BY studentID DESC;


GROUP BY clause in SQL is used to group records with identical data and can be
used in conjunction with some aggregation functions to produce summarized
results from the database.
HAVING clause in SQL is used to filter records in combination with the GROUP BY
clause. It is different from WHERE, since the WHERE clause cannot filter
aggregated records.

SELECT COUNT(studentId), country


FROM myDB.students
WHERE country != "INDIA"
GROUP BY country
HAVING COUNT(studentID) > 5;

21. What are UNION, MINUS and INTERSECT commands?


The UNION operator combines and returns the result-set retrieved by two or more
SELECT statements.
The MINUS operator in SQL is used to remove duplicates from the result-set obtained
by the second SELECT query from the result-set obtained by the first SELECT query
and then return the filtered results from the first.
The INTERSECT clause in SQL combines the result-set fetched by the two SELECT
statements where records from one match the other and then returns this
intersection of result-sets.
Certain conditions need to be met before executing either of the above statements in
SQL -
Each SELECT statement within the clause must have the same number of
columns
The columns must also have similar data types
The columns in each SELECT statement should necessarily have the same order


SELECT name FROM Students /* Fetch the union of queries */


UNION
SELECT name FROM Contacts;
SELECT name FROM Students /* Fetch the union of queries with duplicates*/
UNION ALL
SELECT name FROM Contacts;

SELECT name FROM Students /* Fetch names from students */


MINUS /* that aren't present in contacts */
SELECT name FROM Contacts;

SELECT name FROM Students /* Fetch names from students */


INTERSECT /* that are present in contacts as well */
SELECT name FROM Contacts;

Write a SQL query to fetch "names" that are present in either table "accounts" or
in table "registry".


Write a SQL query to fetch "names" that are present in "accounts" but not in
table "registry".


Write a SQL query to fetch "names" from table "contacts" that are neither
present in "accounts.name" nor in "registry.name".

22. What is Cursor? How to use a Cursor?
A database cursor is a control structure that allows for the traversal of records in a
database. Cursors, in addition, facilitate processing after traversal, such as retrieval,
addition, and deletion of database records. They can be viewed as a pointer to one
row in a set of rows.
Working with SQL Cursor:


1. DECLARE a cursor after any variable declaration. The cursor declaration must always be associated with a SELECT statement.
2. Open cursor to initialize the result set. The OPEN statement must be called
before fetching rows from the result set.
3. FETCH statement to retrieve and move to the next row in the result set.
4. Call the CLOSE statement to deactivate the cursor.
5. Finally use the DEALLOCATE statement to delete the cursor definition and
release the associated resources.

DECLARE @name VARCHAR(50) /* Declare All Required Variables */
DECLARE db_cursor CURSOR FOR /* Declare Cursor Name*/
SELECT name
FROM myDB.students
WHERE parent_name IN ('Sara', 'Ansh')
OPEN db_cursor /* Open cursor and Fetch data into @name */
FETCH next
FROM db_cursor
INTO @name
CLOSE db_cursor /* Close the cursor and deallocate the resources */
DEALLOCATE db_cursor

23. What are Entities and Relationships?


Entity: An entity can be a real-world object, either tangible or intangible, that can be
easily identifiable. For example, in a college database, students, professors, workers,
departments, and projects can be referred to as entities. Each entity has some
associated properties that provide it an identity.
Relationships: Relations or links between entities that have something to do with
each other. For example - The employee's table in a company's database can be
associated with the salary table in the same database.


24. List the different types of relationships in SQL.


One-to-One - This can be defined as the relationship between two tables where
each record in one table is associated with the maximum of one record in the
other table.
One-to-Many & Many-to-One - This is the most commonly used relationship
where a record in a table is associated with multiple records in the other table.
Many-to-Many - This is used in cases when multiple instances on both sides are
needed for defining a relationship.
Self-Referencing Relationships - This is used when a table needs to define a
relationship with itself.

25. What is an Alias in SQL?


An alias is a feature of SQL that is supported by most, if not all, RDBMSs. It is a
temporary name assigned to the table or table column for the purpose of a particular
SQL query. In addition, aliasing can be employed as an obfuscation technique to
secure the real names of database fields. A table alias is also called a correlation
name.


An alias is represented explicitly by the AS keyword but in some cases, the same can
be performed without it as well. Nevertheless, using the AS keyword is always a good
practice.

SELECT A.emp_name AS "Employee", /* Alias using AS keyword */
       B.emp_name AS "Supervisor"
FROM employee A, employee B /* Alias without AS keyword */
WHERE A.emp_sup = B.emp_id;

Write an SQL statement to select all from table "Limited" with alias "Ltd".

26. What is a View?
A view in SQL is a virtual table based on the result-set of an SQL statement. A view
contains rows and columns, just like a real table. The fields in a view are fields from
one or more real tables in the database.
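
For illustration, a view can be created on top of the students table used in the earlier examples (the view name and the filter condition are made up for this sketch):

CREATE VIEW recent_graduates AS
SELECT studentID, graduation_year
FROM myDB.students
WHERE graduation_year >= 2019;

SELECT * FROM recent_graduates; /* the view is queried like a regular table */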

27. What is Normalization?


Normalization represents the way of organizing structured data in the database efficiently. It includes the creation of tables, establishing relationships between them, and defining rules for those relationships. Inconsistency and redundancy can be kept in check based on these rules, hence, adding flexibility to the database.

28. What is Denormalization?


Denormalization is the inverse process of normalization, where the normalized
schema is converted into a schema that has redundant information. The
performance is improved by using redundancy and keeping the redundant data
consistent. The reason for performing denormalization is the overheads produced in
the query processor by an over-normalized structure.

29. What are the various forms of Normalization?


Normal Forms are used to eliminate or reduce redundancy in database tables. The
different forms are as follows:
First Normal Form:
A relation is in first normal form if every attribute in that relation is a single-
valued attribute. If a relation contains a composite or multi-valued attribute, it
violates the first normal form. Let's consider the following students table. Each
student in the table, has a name, his/her address, and the books they issued
from the public library -
Students Table


Student | Address | Books Issued | Salutation
Sara | Amanora Park Town 94 | Until the Day I Die (Emily Carpenter), Inception (Christopher Nolan) | Ms.
Ansh | 62nd Sector A-10 | The Alchemist (Paulo Coelho), Inferno (Dan Brown) | Mr.
Sara | 24th Street Park Avenue | Beautiful Bad (Annie Ward), Woman 99 (Greer Macallister) | Mrs.
Ansh | Windsor Street 777 | Dracula (Bram Stoker) | Mr.

As we can observe, the Books Issued field has more than one value per record, and to
convert it into 1NF, this has to be resolved into separate individual records for each
book issued. Check the following table in 1NF form -
Students Table (1st Normal Form)


Student | Address | Books Issued | Salutation
Sara | Amanora Park Town 94 | Until the Day I Die (Emily Carpenter) | Ms.
Sara | Amanora Park Town 94 | Inception (Christopher Nolan) | Ms.
Ansh | 62nd Sector A-10 | The Alchemist (Paulo Coelho) | Mr.
Ansh | 62nd Sector A-10 | Inferno (Dan Brown) | Mr.
Sara | 24th Street Park Avenue | Beautiful Bad (Annie Ward) | Mrs.
Sara | 24th Street Park Avenue | Woman 99 (Greer Macallister) | Mrs.
Ansh | Windsor Street 777 | Dracula (Bram Stoker) | Mr.


Second Normal Form:


A relation is in second normal form if it satisfies the conditions for the first normal
form and does not contain any partial dependency. A relation in 2NF has no partial
dependency, i.e., it has no non-prime attribute that depends on any proper subset of
any candidate key of the table. Often, specifying a single column Primary Key is the
solution to the problem. Examples -
Example 1 - Consider the above example. As we can observe, the Students Table in
the 1NF form has a candidate key in the form of [Student, Address] that can uniquely
identify all records in the table. The field Books Issued (non-prime attribute) depends
partially on the Student field. Hence, the table is not in 2NF. To convert it into the 2nd
Normal Form, we will partition the tables into two while specifying a new Primary
Key attribute to identify the individual records in the Students table. The Foreign
Key constraint will be set on the other table to ensure referential integrity.
Students Table (2nd Normal Form)


Student_ID | Student | Address | Salutation
1 | Sara | Amanora Park Town 94 | Ms.
2 | Ansh | 62nd Sector A-10 | Mr.
3 | Sara | 24th Street Park Avenue | Mrs.
4 | Ansh | Windsor Street 777 | Mr.

Books Table (2nd Normal Form)


Student_ID | Book Issued
1 | Until the Day I Die (Emily Carpenter)
1 | Inception (Christopher Nolan)
2 | The Alchemist (Paulo Coelho)
2 | Inferno (Dan Brown)
3 | Beautiful Bad (Annie Ward)
3 | Woman 99 (Greer Macallister)
4 | Dracula (Bram Stoker)

Example 2 - Consider the following dependencies in relation to R(W,X,Y,Z)

WX -> Y [W and X together determine Y]
XY -> Z [X and Y together determine Z]

Here, WX is the only candidate key and there is no partial dependency, i.e., any proper
subset of WX doesn’t determine any non-prime attribute in the relation.
Third Normal Form
A relation is said to be in the third normal form, if it satisfies the conditions for the
second normal form and there is no transitive dependency between the non-prime
attributes, i.e., all non-prime attributes are determined only by the candidate keys of
the relation and not by any other non-prime attribute.


Example 1 - Consider the Students Table in the above example. As we can observe,
the Students Table in the 2NF form has a single candidate key Student_ID (primary
key) that can uniquely identify all records in the table. The field Salutation (non-
prime attribute), however, depends on the Student Field rather than the candidate
key. Hence, the table is not in 3NF. To convert it into the 3rd Normal Form, we will
once again partition the tables into two while specifying a new Foreign Key
constraint to identify the salutations for individual records in the Students table. The
Primary Key constraint for the same will be set on the Salutations table to identify
each record uniquely.
Students Table (3rd Normal Form)

Student_ID | Student | Address | Salutation_ID
1 | Sara | Amanora Park Town 94 | 1
2 | Ansh | 62nd Sector A-10 | 2
3 | Sara | 24th Street Park Avenue | 3
4 | Ansh | Windsor Street 777 | 1

Books Table (3rd Normal Form)


Student_ID | Book Issued
1 | Until the Day I Die (Emily Carpenter)
1 | Inception (Christopher Nolan)
2 | The Alchemist (Paulo Coelho)
2 | Inferno (Dan Brown)
3 | Beautiful Bad (Annie Ward)
3 | Woman 99 (Greer Macallister)
4 | Dracula (Bram Stoker)

Salutations Table (3rd Normal Form)

Salutation_ID | Salutation
1 | Ms.
2 | Mr.
3 | Mrs.

Example 2 - Consider the following dependencies in relation to R(P,Q,R,S,T)


P -> QR [P determines Q and R]
RS -> T [R and S together determine T]
Q -> S
T -> P

For the above relation to exist in 3NF, all possible candidate keys in the above
relation should be {P, RS, QR, T}.
Boyce-Codd Normal Form
A relation is in Boyce-Codd Normal Form if it satisfies the conditions for the third normal
form and, for every functional dependency, the Left-Hand-Side is a super key. In other
words, a relation in BCNF has non-trivial functional dependencies in form X –> Y, such
that X is always a super key. For example - In the above example, Student_ID serves as
the sole unique identifier for the Students Table and Salutation_ID for the
Salutations Table, thus these tables exist in BCNF. The same cannot be said for the
Books Table and there can be several books with common Book Names and the same
Student_ID.

30. What are the TRUNCATE, DELETE and DROP statements?


DELETE statement is used to delete rows from a table.

DELETE FROM Candidates
WHERE CandidateId > 1000;

TRUNCATE command is used to delete all the rows from the table and free the space
containing the table.

TRUNCATE TABLE Candidates;

DROP command is used to remove an object from the database. If you drop a table,
all the rows in the table are deleted and the table structure is removed from the
database.

DROP TABLE Candidates;


Write a SQL statement to wipe a table 'Temporary' from memory.


Write a SQL query to remove first 1000 records from table 'Temporary' based on
'id'.


Write a SQL statement to delete the table 'Temporary' while keeping its relations
intact.

31. What is the difference between DROP and TRUNCATE
statements?
If a table is dropped, all things associated with the tables are dropped as well. This
includes - the relationships defined on the table with other tables, the integrity
checks and constraints, access privileges and other grants that the table has. To
create and use the table again in its original form, all these relations, checks,
constraints, privileges and relationships need to be redefined. However, if a table is
truncated, none of the above problems exist and the table retains its original
structure.

32. What is the difference between DELETE and TRUNCATE statements?
The TRUNCATE command is used to delete all the rows from the table and free the
space containing the table.
The DELETE command deletes only the rows from the table based on the condition
given in the where clause or deletes all the rows from the table if no condition is
specified. But it does not free the space containing the table.

33. What are Aggregate and Scalar functions?


An aggregate function performs operations on a collection of values to return a single scalar value. Aggregate functions are often used with the GROUP BY and HAVING clauses of the SELECT statement. Following are the widely used SQL aggregate functions:
AVG() - Calculates the mean of a collection of values.
COUNT() - Counts the total number of records in a specific table or view.
MIN() - Calculates the minimum of a collection of values.
MAX() - Calculates the maximum of a collection of values.
SUM() - Calculates the sum of a collection of values.
FIRST() - Fetches the first element in a collection of values.
LAST() - Fetches the last element in a collection of values.
Note: All aggregate functions described above ignore NULL values except for the
COUNT function.
A scalar function returns a single value based on the input value. Following are the
widely used SQL scalar functions:
LEN() - Calculates the total length of the given field (column).
UCASE() - Converts a collection of string values to uppercase characters.
LCASE() - Converts a collection of string values to lowercase characters.
MID() - Extracts substrings from a collection of string values in a table.
CONCAT() - Concatenates two or more strings.
RAND() - Generates a random collection of numbers of a given length.
ROUND() - Calculates the round-off integer value for a numeric field (or decimal
point values).
NOW() - Returns the current date & time.
FORMAT() - Sets the format to display a collection of values.
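
As a quick illustration using the students table from the earlier examples (the country and first_name columns are assumed, and exact scalar function names such as UCASE and LEN vary between RDBMSs):

SELECT country, COUNT(studentID) AS total_students /* aggregate: one value per group */
FROM myDB.students
GROUP BY country;

SELECT UCASE(first_name) AS name_upper /* scalar: one value per input row */
FROM myDB.students;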

34. What is User-defined function? What are its various types?


The user-defined functions in SQL are like functions in any other programming
language that accept parameters, perform complex calculations, and return a value.
They are written to use the logic repetitively whenever required. There are two types
of SQL user-defined functions:


Scalar Function: As explained earlier, user-defined scalar functions return a single scalar value.
Table-Valued Functions: User-defined table-valued functions return a table as
output.
Inline: returns a table data type based on a single SELECT statement.
Multi-statement: returns a tabular result-set but, unlike inline, multiple
SELECT statements can be used inside the function body.
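
A minimal sketch of a scalar user-defined function in SQL Server (T-SQL) style; the function name, parameters, and the employees table are illustrative, and the exact syntax differs in MySQL and PostgreSQL:

CREATE FUNCTION dbo.FullName (@first VARCHAR(50), @last VARCHAR(50))
RETURNS VARCHAR(101)
AS
BEGIN
    RETURN CONCAT(@first, ' ', @last); /* returns a single scalar value */
END;

SELECT dbo.FullName(first_name, last_name) FROM employees;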

35. What is OLTP?


OLTP stands for Online Transaction Processing; it is a class of software applications capable of supporting transaction-oriented programs. An essential attribute of an OLTP system is its ability to maintain concurrency. To avoid single points of failure, OLTP systems are often decentralized. These systems are usually designed for a large number of users who conduct short transactions. Database queries are usually simple, require sub-second response times, and return relatively few records. [Figure illustrating the working of an OLTP system omitted - the figure is not important for interviews.]

36. What are the differences between OLTP and OLAP?


OLTP stands for Online Transaction Processing; it is a class of software applications capable of supporting transaction-oriented programs. An important attribute of an OLTP system is its ability to maintain concurrency. OLTP systems often follow a decentralized architecture to avoid single points of failure. These systems are generally designed for a large audience of end-users who conduct short transactions. Queries involved in such databases are generally simple, need fast response times, and return relatively few records. The number of transactions per second acts as an effective measure for such systems.
OLAP stands for Online Analytical Processing, a class of software programs characterized by a relatively low frequency of online transactions. Queries are often complex and involve a number of aggregations. For OLAP systems, the effectiveness measure relies highly on response time. Such systems are widely used for data mining or maintaining aggregated, historical data, usually in multi-dimensional schemas.

37. What is Collation? What are the different types of Collation Sensitivity?


Collation refers to a set of rules that determine how data is sorted and compared.
Rules defining the correct character sequence are used to sort the character data. It
incorporates options for specifying case sensitivity, accent marks, kana character
types, and character width. Below are the different types of collation sensitivity:
Case sensitivity: A and a are treated differently.
Accent sensitivity: a and á are treated differently.
Kana sensitivity: Japanese kana characters Hiragana and Katakana are treated
differently.
Width sensitivity: Same character represented in single-byte (half-width) and
double-byte (full-width) are treated differently.

38. What is a Stored Procedure?


A stored procedure is a subroutine available to applications that access a relational
database management system (RDBMS). Such procedures are stored in the database
data dictionary. The sole disadvantage of stored procedure is that it can be executed
nowhere except in the database and occupies more memory in the database server.
It also provides a sense of security and functionality as users who can't access the
data directly can be granted access via stored procedures.

DELIMITER $$
CREATE PROCEDURE FetchAllStudents()
BEGIN
SELECT * FROM myDB.students;
END $$
DELIMITER ;


39. What is a Recursive Stored Procedure?


A stored procedure that calls itself until a boundary condition is reached, is called a
recursive stored procedure. This recursive function helps the programmers to deploy
the same set of code several times as and when required. Some SQL programming
languages limit the recursion depth to prevent an infinite loop of procedure calls
from causing a stack overflow, which slows down the system and may lead to system
crashes.

DELIMITER $$ /* Set a new delimiter => $$ */
CREATE PROCEDURE calctotal( /* Create the procedure */
IN number INT, /* Set Input and Output variables */
OUT total INT
) BEGIN
DECLARE score INT DEFAULT NULL; /* Set the default value => "score" */
SELECT awards FROM achievements /* Update "score" via SELECT query */
WHERE id = number INTO score;
IF score IS NULL THEN SET total = 0; /* Termination condition */
ELSE
CALL calctotal(number+1); /* Recursive call */
SET total = total + score; /* Action after recursion */
END IF;
END $$ /* End of procedure */
DELIMITER ; /* Reset the delimiter */


40. How to create empty tables with the same structure as another table?
Creating empty tables with the same structure can be done smartly by fetching the
records of one table into a new table using the INTO operator while fixing a WHERE
clause to be false for all records. Hence, SQL prepares the new table with a duplicate
structure to accept the fetched records but since no records get fetched due to the
WHERE clause in action, nothing is inserted into the new table.

SELECT * INTO Students_copy
FROM Students WHERE 1 = 2;

41. What is Pattern Matching in SQL?


SQL pattern matching provides for pattern search in data if you have no clue as to
what that word should be. This kind of SQL query uses wildcards to match a string
pattern, rather than writing the exact word. The LIKE operator is used in conjunction
with SQL Wildcards to fetch the required information.
Using the % wildcard to perform a simple search
The % wildcard matches zero or more characters of any type and can be used to
define wildcards both before and after the pattern. Search a student in your database
with first name beginning with the letter K:

SELECT *
FROM students
WHERE first_name LIKE 'K%'

Omitting the patterns using the NOT keyword


Use the NOT keyword to select records that don't match the pattern. This query
returns all students whose first name does not begin with K.

SELECT *
FROM students
WHERE first_name NOT LIKE 'K%'


Matching a pattern anywhere using the % wildcard twice


Search for a student in the database where he/she has a K in his/her first name.

SELECT *
FROM students
WHERE first_name LIKE '%K%'

Using the _ wildcard to match pattern at a specific position


The _ wildcard matches exactly one character of any type. It can be used in
conjunction with % wildcard. This query fetches all students with letter K at the third
position in their first name.

SELECT *
FROM students
WHERE first_name LIKE '__K%'

Matching patterns for a specific length


The _ wildcard plays an important role as a limitation when it matches exactly one
character. It limits the length and position of the matched results. For example -

SELECT * /* Matches first names with three or more letters */
FROM students
WHERE first_name LIKE '___%'

SELECT * /* Matches first names with exactly four characters */
FROM students
WHERE first_name LIKE '____'

PostgreSQL Interview Questions


42. What is PostgreSQL?


PostgreSQL was first called Postgres and was developed by a team led by Computer
Science Professor Michael Stonebraker in 1986. It was developed to help developers
build enterprise-level applications by upholding data integrity by making systems
fault-tolerant. PostgreSQL is therefore an enterprise-level, flexible, robust, open-
source, and object-relational DBMS that supports flexible workloads along with
handling concurrent users. It has been consistently supported by the global
developer community. Due to its fault-tolerant nature, PostgreSQL has gained
widespread popularity among developers.

43. How do you define Indexes in PostgreSQL?


Indexes are built-in structures in PostgreSQL that queries use to search a table more efficiently. Consider a table with thousands of records and a query, like the one below, whose condition only a few records can satisfy. Without an index, the engine has to check the condition on every single row, so searching for and returning the matching rows takes a long time, which is inefficient for a system dealing with huge amounts of data. If the system had an index on the column used in the search condition, it could identify the matching rows by walking through only a few levels of the index. This is called indexing.

Select * from some_table where table_col=120
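
Assuming the hypothetical table and column above, an index for that search could be created as follows:

CREATE INDEX idx_some_table_col ON some_table (table_col);
-- the query above can now locate matching rows via the index instead of scanning every row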

44. How will you change the datatype of a column?


This can be done by using the ALTER TABLE statement as shown below:
Syntax:

ALTER TABLE tname
ALTER COLUMN col_name [SET DATA] TYPE new_data_type;
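
For example, to change a hypothetical price column to a numeric type (the table and column names are illustrative):

ALTER TABLE products
ALTER COLUMN price TYPE NUMERIC(10,2);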

45. What is the command used for creating a database in PostgreSQL?


The first step of using PostgreSQL is to create a database. This is done by using the createdb command as shown below:

createdb db_name

After running the above command, if the database creation was successful, then the below message is shown:

CREATE DATABASE

46. How can we start, restart and stop the PostgreSQL server?
To start the PostgreSQL server, we run:

service postgresql start

Once the server is successfully started, we get the below message:

Starting PostgreSQL: ok

To restart the PostgreSQL server, we run:

service postgresql restart

Once the server is successfully restarted, we get the message:

Restarting PostgreSQL: server stopped
ok

To stop the server, we run the command:

service postgresql stop

Once stopped successfully, we get the message:

Stopping PostgreSQL: server stopped
ok


47. What are partitioned tables called in PostgreSQL?


Partitioned tables are logical structures that are used for dividing large tables into
smaller structures that are called partitions. This approach is used for effectively
increasing the query performance while dealing with large database tables. To create
a partition, a key called partition key which is usually a table column or an
expression, and a partitioning method needs to be defined. There are three types of
inbuilt partitioning methods provided by Postgres:
Range Partitioning: This method is done by partitioning based on a range of
values. This method is most commonly used upon date fields to get monthly,
weekly or yearly data. For values at the boundary of a range, the lower bound is inclusive and the upper bound is exclusive; for example, if the range of partition 1 is 10-20 and the range of partition 2 is 20-30, then the value 20 belongs to the second partition and not the first.
List Partitioning: This method is used to partition based on a list of known
values. Most commonly used when we have a key with a categorical value. For
example, getting sales data based on regions divided as countries, cities, or
states.
Hash Partitioning: This method utilizes a hash function upon the partition key.
This is done when there are no specific requirements for data division and is
used to access data individually. For example, you want to access data based on
a specific product, then using hash partition would result in the dataset that we
require.
The choice of partition key and partitioning method determines how much the partitioned table gains in query performance and how manageable it remains.
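
A minimal sketch of range partitioning using PostgreSQL's declarative partitioning (available from version 10 onward; the sales table and its partitions are hypothetical):

CREATE TABLE sales (
    sale_id   BIGINT,
    sale_date DATE NOT NULL,
    amount    NUMERIC(10,2)
) PARTITION BY RANGE (sale_date);

CREATE TABLE sales_2023 PARTITION OF sales
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01'); /* lower bound inclusive, upper bound exclusive */

CREATE TABLE sales_2024 PARTITION OF sales
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');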

48. Define tokens in PostgreSQL?


A token in PostgreSQL is either a keyword, an identifier, a quoted identifier, a literal (constant), or a special character symbol. Tokens may or may not be separated by a space, newline, or tab. If the tokens are keywords, they are usually commands with useful meanings. Tokens are known as the building blocks of any PostgreSQL code.


49. What is the importance of the TRUNCATE statement?


TRUNCATE TABLE name_of_table statement removes the data efficiently and quickly
from the table.
The truncate statement can also be used to reset values of the identity columns
along with data cleanup as shown below:

TRUNCATE TABLE name_of_table
RESTART IDENTITY;

We can also use the statement for removing data from multiple tables all at once by
mentioning the table names separated by comma as shown below:

TRUNCATE TABLE
table_1,
table_2,
table_3;

50. What is the capacity of a table in PostgreSQL?


The maximum size of a table in PostgreSQL is 32 TB.

51. Define sequence.


A sequence is a schema-bound, user-defined object which aids to generate a
sequence of integers. This is most commonly used to generate values to identity
columns in a table. We can create a sequence by using the CREATE SEQUENCE
statement as shown below:

CREATE SEQUENCE serial_num START 100;

To get the next number 101 from the sequence, we use the nextval() method as
shown below:

SELECT nextval('serial_num');


We can also use this sequence while inserting new records using the INSERT
command:

INSERT INTO ib_table_name VALUES (nextval('serial_num'), 'interviewbit');

52. What are string constants in PostgreSQL?


String constants are character sequences enclosed within single quotes. They are used when inserting or updating character data in the database.
There are special string constants that are quoted in dollars. Syntax:
$tag$<string_constant>$tag$ The tag in the constant is optional and when we are
not specifying the tag, the constant is called a double-dollar string literal.
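
For example (the string values are made up; the second and third statements show dollar-quoted constants, with and without a tag):

SELECT 'InterviewBit''s guide';              /* single quote escaped by doubling it */
SELECT $$InterviewBit's guide$$;             /* double-dollar string literal */
SELECT $tag$String with $ and ' inside$tag$; /* dollar-quoted constant with a tag */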

53. How can you get a list of all databases in PostgreSQL?


This can be done by using the command \l (a backslash followed by the lowercase letter L).

54. How can you delete a database in PostgreSQL?


This can be done by using the DROP DATABASE command as shown in the syntax
below:

DROP DATABASE database_name;

If the database has been deleted successfully, then the following message would be
shown:

DROP DATABASE

55. What are ACID properties? Is PostgreSQL compliant with ACID?
ACID stands for Atomicity, Consistency, Isolation, Durability. They are database
transaction properties which are used for guaranteeing data validity in case of errors
and failures.


Atomicity: This property ensures that a transaction is completed in an all-or-nothing way.
Consistency: This ensures that updates made to the database are valid and follow the defined rules and restrictions.
Isolation: This property ensures that concurrent transactions do not interfere with each other; the intermediate state of a transaction is not visible to other transactions.
Durability: This property ensures that committed transactions are stored permanently in the database.
PostgreSQL is compliant with ACID properties.
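
A small sketch of how a transaction keeps a money transfer atomic (the accounts table and its columns are hypothetical):

BEGIN;

UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;

COMMIT;       /* both updates become permanent together */
-- ROLLBACK;  /* on an error, rolling back would undo both updates */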

56. Can you explain the architecture of PostgreSQL?


The architecture of PostgreSQL follows the client-server model.
The server side comprises a background process manager, a query processor, utilities, and shared memory space, which work together to build a PostgreSQL instance that has access to the data. The client application connects to this instance and requests data processing services. The client can either be a GUI (Graphical User Interface) or a web application. The most commonly used client for PostgreSQL is pgAdmin.


57. What do you understand by multi-version concurrency control?
MVCC or Multi-Version Concurrency Control is used to avoid unnecessary database locks when two or more requests try to access or modify the same data at the same time. This avoids the delay a user would otherwise face while waiting to access the database. The transactions are recorded whenever anyone tries to access the content.
For more information regarding this, you can refer here.

58. What do you understand by command enable-debug?


The command enable-debug is used for enabling the compilation of all libraries and
applications. When this is enabled, the system processes get hindered and generally
also increases the size of the binary file. Hence, it is not recommended to switch this
on in the production environment. This is most commonly used by developers to
debug the bugs in their scripts and help them spot the issues. For more information
regarding how to debug, you can refer here.

59. How do you check the rows affected as part of previous transactions?
SQL standards state that the following three phenomena should be prevented when transactions run concurrently. The SQL standards define 4 levels of transaction isolation to deal with these phenomena.


Dirty reads: If a transaction reads data written by a concurrent uncommitted transaction, these reads are called dirty reads.
Phantom reads: This occurs when two same queries when executed separately
return different rows. For example, if transaction A retrieves some set of rows
matching search criteria. Assume another transaction B retrieves new rows in
addition to the rows obtained earlier for the same search criteria. The results are
different.
Non-repeatable reads: This occurs when a transaction tries to read the same
row multiple times and gets different values each time due to concurrency. This
happens when another transaction updates that data and our current
transaction fetches that updated data, resulting in different values.
To tackle these, there are 4 standard isolation levels defined by SQL standards. They
are as follows:
Read Uncommitted – The lowest level of the isolations. Here, the transactions
are not isolated and can read data that are not committed by other transactions
resulting in dirty reads.
Read Committed – This level ensures that the data read is committed at any
instant of read time. Hence, dirty reads are avoided here. This level makes use of
read/write lock on the current rows which prevents read/write/update/delete of
that row when the current transaction is being operated on.
Repeatable Read – The most restrictive level of isolation. This holds read and
write locks for all rows it operates on. Due to this, non-repeatable reads are
avoided as other transactions cannot read, write, update or delete the rows.
Serializable – The highest of all isolation levels. This guarantees that the
execution is serializable where execution of any concurrent operations are
guaranteed to be appeared as executing serially.
The following table clearly explains which type of unwanted reads the levels avoid:


Isolation Level | Dirty Reads | Phantom Reads | Non-repeatable Reads
Read Uncommitted | Might occur | Might occur | Might occur
Read Committed | Won't occur | Might occur | Might occur
Repeatable Read | Won't occur | Might occur | Won't occur
Serializable | Won't occur | Won't occur | Won't occur
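
In PostgreSQL, the isolation level can be chosen per transaction; a small sketch (the accounts table is hypothetical):

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE account_id = 1;
/* ... further statements that should see a stable snapshot ... */
COMMIT;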

60. What can you tell about WAL (Write Ahead Logging)?
Write-Ahead Logging is a feature that increases database reliability by logging changes before any changes are written to the database. If a database crash occurs, the log provides enough information to pinpoint how far the work had been completed and gives a starting point from which it can be resumed.
For more information, you can refer here.

61. What is the main disadvantage of deleting data from an existing table using the DROP TABLE command?
The DROP TABLE command deletes all the data from the table and also removes the table structure. If the requirement is only to remove the data, the table would have to be recreated before it can store data again. In such cases, it is advised to use the TRUNCATE command instead.

62. How do you perform case-insensitive searches using regular expressions in PostgreSQL?


To perform case insensitive matches using a regular expression, we can use POSIX
(~*) expression from pattern matching operators. For example:

'interviewbit' ~* '.*INTervIewBit.*'

63. How will you take backup of the database in PostgreSQL?


We can achieve this by using the pg_dump tool for dumping all object contents in the
database into a single file. The steps are as follows:
Step 1: Navigate to the bin folder of the PostgreSQL installation path.

C:\>cd C:\Program Files\PostgreSQL\10.0\bin

Step 2: Execute pg_dump program to take the dump of data to a .tar folder as shown
below:

pg_dump -U postgres -W -F t sample_data > C:\Users\admin\pgbackup\sample_data.tar

The database dump will be stored in the sample_data.tar file on the location
specified.

64. Does PostgreSQL support full text search?


Full-Text Search is the method of searching single or collection of documents stored
on a computer in a full-text based database. This is mostly supported in advanced
database systems like SOLR or ElasticSearch. However, the feature is present but is
pretty basic in PostgreSQL.

65. What are parallel queries in PostgreSQL?


Parallel Queries support is a feature provided in PostgreSQL for devising query plans
capable of exploiting multiple CPU processors to execute the queries faster.


66. Differentiate between commit and checkpoint.


The commit action ends the current transaction and ensures that the data consistency of the transaction is maintained; it adds a new record to the log that describes the COMMIT. A checkpoint, on the other hand, writes all changes committed so far to the data files on disk and records that point (the SCN) in the datafile headers and control files.

Conclusion:


SQL is a language for the database. It has a vast scope and robust capability of
creating and manipulating a variety of database objects using commands like
CREATE, ALTER, DROP, etc, and also in loading the database objects using commands
like INSERT. It also provides options for Data Manipulation using commands like
DELETE, TRUNCATE and also does effective retrieval of data using cursor commands
like FETCH, SELECT, etc. There are many such commands which provide a large
amount of control to the programmer to interact with the database in an efficient
way without wasting many resources. The popularity of SQL has grown so much that
almost every programmer relies on this to implement their application's storage
functionalities thereby making it an exciting language to learn. Learning this provides
the developer a benefit of understanding the data structures used for storing the
organization's data and giving an additional level of control and in-depth
understanding of the application.
PostgreSQL being an open-source database system having extremely robust and
sophisticated ACID, Indexing, and Transaction supports has found widespread
popularity among the developer community.

SQL
(Notes by Apna College)

What is Database?
Database is a collection of interrelated data.

What is DBMS?
DBMS (Database Management System) is software used to create, manage, and organize
databases.

What is RDBMS?
● RDBMS (Relational Database Management System) - is a DBMS based on the
concept of tables (also called relations).
● Data is organized into tables (also known as relations) with rows (records) and
columns (attributes).
● Eg - MySQL, PostgreSQL, Oracle etc.

What is SQL?
SQL is Structured Query Language - used to store, manipulate and retrieve data from
RDBMS.
(It is not a database, it is a language used to interact with database)

We use SQL for CRUD Operations :


● CREATE - To create databases, tables, insert tuples in tables etc
● READ - To read data present in the database.
● UPDATE - Modify already inserted data.
● DELETE - Delete database, table or specific data point/tuple/row or multiple rows.

*Note - SQL keywords are NOT case sensitive. Eg: select is the same as SELECT in SQL.

SQL v/s MySQL


SQL is a language used to perform CRUD operations in Relational DB, while MySQL is a
RDBMS that uses SQL.
SQL Data Types
In SQL, data types define the kind of data that can be stored in a column or variable.

To see all data types of MySQL, visit:
https://dev.mysql.com/doc/refman/8.0/en/data-types.html

Here are the frequently used SQL data types:

DATATYPE | DESCRIPTION | USAGE
CHAR | string (0-255), can store characters of fixed length | CHAR(50)
VARCHAR | string (0-255), can store characters up to the given length | VARCHAR(50)
BLOB | string (0-65535), can store a binary large object | BLOB(1000)
INT | integer (-2,147,483,648 to 2,147,483,647) | INT
TINYINT | integer (-128 to 127) | TINYINT
BIGINT | integer (-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807) | BIGINT
BIT | can store x-bit values, where x ranges from 1 to 64 | BIT(2)
FLOAT | decimal number, with precision up to 23 digits | FLOAT
DOUBLE | decimal number, with 24 to 53 digits | DOUBLE
BOOLEAN | boolean values 0 or 1 | BOOLEAN
DATE | date in YYYY-MM-DD format, ranging from 1000-01-01 to 9999-12-31 | DATE
TIME | time in HH:MM:SS format | TIME
YEAR | year in 4-digit format, ranging from 1901 to 2155 | YEAR

*Note - CHAR is for fixed length & VARCHAR is for variable length strings. Generally,
VARCHAR is better as it only occupies necessary memory & works more efficiently.

We can also use UNSIGNED with datatypes when we only have positive values to add.
Eg - UNSIGNED INT
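
A small illustrative CREATE TABLE combining several of these types (the table and column names are made up for this sketch):

CREATE TABLE products (
    id        INT UNSIGNED,
    name      VARCHAR(100),
    price     DOUBLE,
    in_stock  BOOLEAN,
    added_on  DATE
);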

Types of SQL Commands:


1. DQL (Data Query Language) : Used to retrieve data from databases. (SELECT)

2. DDL (Data Definition Language) : Used to create, alter, and delete database objects
like tables, indexes, etc. (CREATE, DROP, ALTER, RENAME, TRUNCATE)

3. DML (Data Manipulation Language): Used to modify the database. (INSERT,


UPDATE, DELETE)

4. DCL (Data Control Language): Used to grant & revoke permissions. (GRANT,
REVOKE)

5. TCL (Transaction Control Language): Used to manage transactions. (COMMIT,


ROLLBACK, START TRANSACTIONS, SAVEPOINT)

1. Data Definition Language (DDL)

Data Definition Language (DDL) is a subset of SQL (Structured Query Language)


responsible for defining and managing the structure of databases and their objects.

DDL commands enable you to create, modify, and delete database objects like tables,
indexes, constraints, and more.

Key DDL Commands are:

● CREATE TABLE:

○ Used to create a new table in the database.


○ Specifies the table name, column names, data types, constraints, and more.
○ Example:
CREATE TABLE employees (id INT PRIMARY KEY, name VARCHAR(50),
salary DECIMAL(10, 2));

● ALTER TABLE:

○ Used to modify the structure of an existing table.


○ You can add, modify, or drop columns, constraints, and more.
○ Example: ALTER TABLE employees ADD COLUMN email VARCHAR(100);

● DROP TABLE:

○ Used to delete an existing table along with its data and structure.
○ Example: DROP TABLE employees;
● CREATE INDEX:

○ Used to create an index on one or more columns in a table.


○ Improves query performance by enabling faster data retrieval.
○ Example: CREATE INDEX idx_employee_name ON employees (name);

● DROP INDEX:

○ Used to remove an existing index from a table.


○ Example: DROP INDEX idx_employee_name;

● CREATE CONSTRAINT:

○ Used to define constraints that ensure data integrity.


○ Constraints include PRIMARY KEY, FOREIGN KEY, UNIQUE, NOT NULL,
and CHECK.
○ Example: ALTER TABLE orders ADD CONSTRAINT fk_customer FOREIGN
KEY (customer_id) REFERENCES customers(id);

● DROP CONSTRAINT:

○ Used to remove an existing constraint from a table.


○ Example: ALTER TABLE orders DROP CONSTRAINT fk_customer;

● TRUNCATE TABLE:

○ Used to delete the data inside a table, but not the table itself.
○ Syntax – TRUNCATE TABLE table_name

2. DATA QUERY/RETRIEVAL LANGUAGE (DQL or DRL)

DQL (Data Query Language) is a subset of SQL focused on retrieving data from databases.

The SELECT statement is the foundation of DQL and allows us to extract specific columns
from a table.

● SELECT:

The SELECT statement is used to select data from a database.


Syntax: SELECT column1, column2, ... FROM table_name;

Here, column1, column2, ... are the field names of the table.

If you want to select all the fields available in the table, use the following syntax:
SELECT * FROM table_name;

Ex: SELECT CustomerName, City FROM Customers;

● WHERE:

The WHERE clause is used to filter records.

Syntax: SELECT column1, column2, ... FROM table_name WHERE condition;

Ex: SELECT * FROM Customers WHERE Country='Mexico';

Operators used in WHERE are:

= : Equal
> : Greater than
< : Less than
>= : Greater than or equal
<= : Less than or equal
<> : Not equal.

Note: In some versions of SQL this operator may be written as !=

● AND, OR and NOT:

- The WHERE clause can be combined with AND, OR, and NOT operators.

- The AND and OR operators are used to filter records based on more than one
condition:

- The AND operator displays a record if all the conditions separated by AND are
TRUE.

- The OR operator displays a record if any of the conditions separated by OR is TRUE.

- The NOT operator displays a record if the condition(s) is NOT TRUE.

Syntax:
SELECT column1, column2, ... FROM table_name WHERE condition1 AND condition2 AND
condition3 ...;

SELECT column1, column2, ... FROM table_name WHERE condition1 OR condition2 OR


condition3 ...;

SELECT column1, column2, ... FROM table_name WHERE NOT condition;

Example:

SELECT * FROM Customers WHERE Country='India' AND City='Delhi';

SELECT * FROM Customers WHERE Country='USA' AND (City='New York' OR City='Chicago');

● DISTINCT:

Removes duplicate rows from query results.

Syntax: SELECT DISTINCT column1, column2 FROM table_name;
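
Example (using the Customers table from the other examples; returns each country only once):

SELECT DISTINCT Country FROM Customers;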

● LIKE:

The LIKE operator is used in a WHERE clause to search for a specified pattern in a column.

There are two wildcards often used in conjunction with the LIKE operator:

- The percent sign (%) represents zero, one, or multiple characters


- The underscore sign (_) represents one, single character

Example: SELECT * FROM employees WHERE first_name LIKE 'J%';

WHERE CustomerName LIKE 'a%'


- Finds any values that start with "a"

WHERE CustomerName LIKE '%a'


- Finds any values that end with "a"

WHERE CustomerName LIKE '%or%'


- Finds any values that have "or" in any position

WHERE CustomerName LIKE '_r%'


- Finds any values that have "r" in the second position
WHERE CustomerName LIKE 'a_%'
- Finds any values that start with "a" and are at least 2 characters in length

WHERE CustomerName LIKE 'a__%'


- Finds any values that start with "a" and are at least 3 characters in length

WHERE ContactName LIKE 'a%o'


- Finds any values that start with "a" and ends with "o"

● IN:

Filters results based on a list of values in the WHERE clause.

Example: SELECT * FROM products WHERE category_id IN (1, 2, 3);

● BETWEEN:

Filters results within a specified range in the WHERE clause.

Example: SELECT * FROM orders WHERE order_date BETWEEN '2023-01-01' AND


'2023-06-30';

● IS NULL:

Checks for NULL values in the WHERE clause.

Example: SELECT * FROM customers WHERE email IS NULL;

● AS:

Renames columns or expressions in query results.

Example: SELECT first_name AS "First Name", last_name AS "Last Name" FROM


employees;

● ORDER BY

The ORDER BY clause allows you to sort the result set of a query based on one or more
columns.

Basic Syntax:

- The ORDER BY clause is used after the SELECT statement to sort query results.
- Syntax: SELECT column1, column2 FROM table_name ORDER BY column1
[ASC|DESC];

Ascending and Descending Order:

- By default, the ORDER BY clause sorts in ascending order (smallest to largest).


- You can explicitly specify descending order using the DESC keyword.
- Example: SELECT product_name, price FROM products ORDER BY price DESC;

Sorting by Multiple Columns:

- You can sort by multiple columns by listing them sequentially in the ORDER BY
clause.
- Rows are first sorted based on the first column, and for rows with equal values,
subsequent columns are used for further sorting.
- Example: SELECT first_name, last_name FROM employees ORDER BY last_name,
first_name;

Sorting by Expressions:

- It's possible to sort by calculated expressions, not just column values.


- Example: SELECT product_name, price, price * 1.1 AS discounted_price FROM
products ORDER BY discounted_price;

Sorting NULL Values:

- By default, NULL values are considered the smallest in ascending order and the
largest in descending order.
- You can control the sorting behaviour of NULL values using the NULLS FIRST or
NULLS LAST options.
- Example: SELECT column_name FROM table_name ORDER BY column_name
NULLS LAST;

Sorting by Position:

- Instead of specifying column names, you can sort by column positions in the ORDER
BY clause.
- Example: SELECT product_name, price FROM products ORDER BY 2 DESC, 1
ASC;

● GROUP BY

The GROUP BY clause in SQL is used to group rows from a table based on one or more
columns.

Syntax:
- The GROUP BY clause follows the SELECT statement and is used to group rows
based on specified columns.

- Syntax: SELECT column1, aggregate_function(column2) FROM table_name


GROUP BY column1;

- Aggregation Functions:
○ Aggregation functions (e.g., COUNT, SUM, AVG, MAX, MIN) are often used
with GROUP BY to calculate values for each group.
○ Example: SELECT department, AVG(salary) FROM employees GROUP BY
department;
- Grouping by Multiple Columns:

○ You can group by multiple columns by listing them in the GROUP BY clause.
○ This creates a hierarchical grouping based on the specified columns.
○ Example: SELECT department, gender, AVG(salary) FROM employees
GROUP BY department, gender;

- HAVING Clause:

○ The HAVING clause is used with GROUP BY to filter groups based on


aggregate function results.
○ It's similar to the WHERE clause but operates on grouped data.
○ Example: SELECT department, AVG(salary) FROM employees GROUP BY
department HAVING AVG(salary) > 50000;

- Combining GROUP BY and ORDER BY:

○ You can use both GROUP BY and ORDER BY in the same query to control
the order of grouped results.
○ Example: SELECT department, COUNT(*) FROM employees GROUP BY
department ORDER BY COUNT(*) DESC;

● AGGREGATE FUNCTIONS

These are used to perform calculations on groups of rows or entire result sets. They provide
insights into data by summarising and processing information.

Common Aggregate Functions:

- COUNT():
Counts the number of rows in a group or result set.

- SUM():
Calculates the sum of numeric values in a group or result set.

- AVG():
Computes the average of numeric values in a group or result set.

- MAX():
Finds the maximum value in a group or result set.

- MIN():
Retrieves the minimum value in a group or result set.
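
A combined example over the employees table used earlier in these notes:

SELECT COUNT(*)    AS total_employees,
       SUM(salary) AS total_payroll,
       AVG(salary) AS average_salary,
       MAX(salary) AS highest_salary,
       MIN(salary) AS lowest_salary
FROM employees;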

3. DATA MANIPULATION LANGUAGE

Data Manipulation Language (DML) in SQL encompasses commands that manipulate data
within a database. DML allows you to insert, update, and delete records, ensuring the
accuracy and currency of your data.

● INSERT:

- The INSERT statement adds new records to a table.


- Syntax: INSERT INTO table_name (column1, column2, ...) VALUES (value1, value2,
...);

- Example: INSERT INTO employees (first_name, last_name, salary) VALUES ('John',


'Doe', 50000);

● UPDATE:

- The UPDATE statement modifies existing records in a table.


- Syntax: UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE
condition;
- Example: UPDATE employees SET salary = 55000 WHERE first_name = 'John';

● DELETE:

- The DELETE statement removes records from a table.


- Syntax: DELETE FROM table_name WHERE condition;
- Example: DELETE FROM employees WHERE last_name = 'Doe';

4. Data Control Language (DCL)


Data Control Language focuses on the management of access rights, permissions, and
security-related aspects of a database system.

DCL commands are used to control who can access the data, modify the data, or perform
administrative tasks within a database.

DCL is an important aspect of database security, ensuring that data remains protected and
only authorised users have the necessary privileges.

There are two main DCL commands in SQL: GRANT and REVOKE.

1. GRANT:

The GRANT command is used to provide specific privileges or permissions to users or roles.
Privileges can include the ability to perform various actions on tables, views, procedures,
and other database objects.

Syntax:

GRANT privilege_type
ON object_name
TO user_or_role;

In this syntax:

● privilege_type refers to the specific privilege or permission being granted (e.g.,


SELECT, INSERT, UPDATE, DELETE).
● object_name is the name of the database object (e.g., table, view) to which the
privilege is being granted.
● user_or_role is the name of the user or role that is being granted the privilege.

Example: Granting SELECT privilege on a table named "Employees" to a user named


"Analyst":

GRANT SELECT ON Employees TO Analyst;

2. REVOKE:

The REVOKE command is used to remove or revoke specific privileges or permissions that
have been previously granted to users or roles.

Syntax:

REVOKE privilege_type
ON object_name
FROM user_or_role;

In this syntax:

● privilege_type is the privilege or permission being revoked.


● object_name is the name of the database object from which the privilege is being
revoked.
● user_or_role is the name of the user or role from which the privilege is being
revoked.

Example: Revoking the SELECT privilege on the "Employees" table from the "Analyst" user:

REVOKE SELECT ON Employees FROM Analyst;

DCL and Database Security:

DCL plays a crucial role in ensuring the security and integrity of a database system.

By controlling access and permissions, DCL helps prevent unauthorised users from
tampering with or accessing sensitive data. Proper use of GRANT and REVOKE commands
ensures that only users who require specific privileges can perform certain actions on
database objects.

5. Transaction Control Language (TCL)

Transaction Control Language (TCL) deals with the management of transactions within a
database.
TCL commands are used to control the initiation, execution, and termination of transactions,
which are sequences of one or more SQL statements that are executed as a single unit of
work.
Transactions ensure data consistency, integrity, and reliability in a database by grouping
related operations together and either committing or rolling back changes based on the
success or failure of those operations.

There are three main TCL commands in SQL: COMMIT, ROLLBACK, and SAVEPOINT.

1. COMMIT:

The COMMIT command is used to permanently save the changes made during a
transaction.
It makes all the changes applied to the database since the last COMMIT or ROLLBACK
command permanent.
Once a COMMIT is executed, the transaction is considered successful, and the changes are
made permanent.

Example: Committing changes made during a transaction:

UPDATE Employees
SET Salary = Salary * 1.10
WHERE Department = 'Sales';

COMMIT;

2. ROLLBACK:

The ROLLBACK command is used to undo changes made during a transaction.


It reverts all the changes applied to the database since the transaction began.

ROLLBACK is typically used when an error occurs during the execution of a transaction,
ensuring that the database remains in a consistent state.

Example: Rolling back changes due to an error during a transaction:

BEGIN;

UPDATE Inventory
SET Quantity = Quantity - 10
WHERE ProductID = 101;

-- An error occurs here

ROLLBACK;

3. SAVEPOINT:

The SAVEPOINT command creates a named point within a transaction, allowing you to set a
point to which you can later ROLLBACK if needed.

SAVEPOINTs are useful when you want to undo part of a transaction while preserving other
changes.

Syntax: SAVEPOINT savepoint_name;

Example: Using SAVEPOINT to create a point within a transaction:

BEGIN;
UPDATE Accounts
SET Balance = Balance - 100
WHERE AccountID = 123;

SAVEPOINT after_withdrawal;

UPDATE Accounts
SET Balance = Balance + 100
WHERE AccountID = 456;

-- An error occurs here

ROLLBACK TO after_withdrawal;

-- The first update is still applied

COMMIT;

TCL and Transaction Management:

Transaction Control Language (TCL) commands are vital for managing the integrity and
consistency of a database's data.
They allow you to group related changes into transactions, and in the event of errors, either
commit those changes or roll them back to maintain data integrity.
TCL commands are used in combination with Data Manipulation Language (DML) and other
SQL commands to ensure that the database remains in a reliable state despite unforeseen
errors or issues.
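
Putting the three commands together, here is a minimal sketch of a typical transaction pattern. The Accounts table is the one from the SAVEPOINT example above, and whether you write BEGIN or START TRANSACTION depends on the database:

START TRANSACTION;

UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 123;  -- debit one account
UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 456;  -- credit the other

-- If both statements succeeded, make the transfer permanent:
COMMIT;

-- If either statement had failed, the application would instead issue:
-- ROLLBACK;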

JOINS

In a DBMS, a join is an operation that combines rows from two or more tables based on a
related column between them.
Joins are used to retrieve data from multiple tables by linking them together using a common
key or column.

Types of Joins:
1. Inner Join
2. Outer Join
3. Cross Join
4. Self Join

1) Inner Join

An inner join combines data from two or more tables based on a specified condition, known
as the join condition.
The result of an inner join includes only the rows where the join condition is met in all
participating tables.
It essentially filters out non-matching rows and returns only the rows that have matching
values in both tables.

Syntax:

SELECT columns
FROM table1
INNER JOIN table2
ON table1.column = table2.column;

Here:

● columns refers to the specific columns you want to retrieve from the tables.
● table1 and table2 are the names of the tables you are joining.
● column is the common column used to match rows between the tables.
● The ON clause specifies the join condition, where you define how the tables are
related.

Example: Consider two tables: Customers and Orders.

Customers Table:

CustomerID CustomerName

1 Alice

2 Bob

3 Carol

Orders Table:

OrderID CustomerID Product


101 1 Laptop

102 3 Smartphone

103 2 Headphones

Inner Join Query:

SELECT Customers.CustomerName, Orders.Product


FROM Customers
INNER JOIN Orders ON Customers.CustomerID = Orders.CustomerID;

Result:

CustomerName Product

Alice Laptop

Bob Headphones

Carol Smartphone

2) Outer Join

Outer joins combine data from two or more tables based on a specified condition, just like
inner joins. However, unlike inner joins, outer joins also include rows that do not have
matching values in both tables.
Outer joins are particularly useful when you want to include data from one table even if there
is no corresponding match in the other table.

Types:

There are three types of outer joins: left outer join, right outer join, and full outer join.

1. Left Outer Join (Left Join):

A left outer join returns all the rows from the left table and the matching rows from the right
table.
If there is no match in the right table, the result will still include the left table's row with NULL
values in the right table's columns.

Example:

SELECT Customers.CustomerName, Orders.Product


FROM Customers
LEFT JOIN Orders ON Customers.CustomerID = Orders.CustomerID;

Result:

CustomerName Product

Alice Laptop

Bob Headphones

Carol Smartphone

In this example, the left outer join includes all rows from the Customers table. With the
sample data above, every customer has a matching order, so no NULL values appear. If a
customer existed without any orders, that customer would still be listed, with NULL in the
Product column.

2. Right Outer Join (Right Join):

A right outer join is similar to a left outer join, but it returns all rows from the right table and
the matching rows from the left table.

If there is no match in the left table, the result will still include the right table's row with NULL
values in the left table's columns.

Example: Using the same Customers and Orders tables.

SELECT Customers.CustomerName, Orders.Product


FROM Customers
RIGHT JOIN Orders ON Customers.CustomerID = Orders.CustomerID;

Result:

CustomerName Product

Alice Laptop

Carol Smartphone

Bob Headphones

Here, the right outer join includes all rows from the Orders table. With the sample data,
every order has a matching customer, so no NULL values appear. If an order referenced a
CustomerID that does not exist in the Customers table, that order would still be listed, with
NULL in the CustomerName column.

3. Full Outer Join (Full Join):

A full outer join returns all rows from both the left and right tables, including matches and
non-matches.

If there's no match, NULL values appear in columns from the table where there's no
corresponding value.

Example: Using the same Customers and Orders tables.

SELECT Customers.CustomerName, Orders.Product


FROM Customers
FULL OUTER JOIN Orders ON Customers.CustomerID = Orders.CustomerID;

Result:

CustomerName Product

Alice Laptop

Bob Headphones

Carol Smartphone

In this full outer join example, all rows from both tables are included in the result. Because
every row in the sample data has a match, the output happens to equal the inner join result;
any customer without orders or any order without a matching customer would also appear,
with NULL values in the columns of the table that has no corresponding row.

3) Cross Join

A cross join, also known as a Cartesian product, is a type of join operation in a Database
Management System (DBMS) that combines every row from one table with every row from
another table.

Unlike other join types, a cross join does not require a specific condition to match rows
between the tables. Instead, it generates a result set that contains all possible combinations
of rows from both tables.

Cross joins can lead to a large result set, especially when the participating tables have many
rows.

Syntax:

SELECT columns
FROM table1
CROSS JOIN table2;

In this syntax:

● columns refers to the specific columns you want to retrieve from the cross-joined
tables.
● table1 and table2 are the names of the tables you want to combine using a cross
join.

Example: Consider two tables: Students and Courses.

Students Table:

StudentID StudentName

1 Alice
2 Bob

Courses Table:

CourseID CourseName

101 Maths

102 Science

Cross Join Query:

SELECT Students.StudentName, Courses.CourseName


FROM Students
CROSS JOIN Courses;

Result:

StudentName CourseName

Alice Maths

Alice Science

Bob Maths

Bob Science

In this example, the cross join between the Students and Courses tables generates all
possible combinations of rows from both tables. As a result, each student is paired with each
course, leading to a total of four rows in the result set.

4) Self Join

A self join involves joining a table with itself.

This technique is useful when a table contains hierarchical or related data and you need to
compare or analyse rows within the same table.
Self joins are commonly used to find relationships, hierarchies, or patterns within a single
table.

In a self join, you treat the table as if it were two separate tables, referring to them with
different aliases.

Syntax:

The syntax for performing a self join in SQL is as follows:

SELECT columns
FROM table1 AS alias1
JOIN table1 AS alias2 ON alias1.column = alias2.column;

In this syntax:
● columns refers to the specific columns you want to retrieve from the self-joined table.
● table1 is the name of the table you're joining with itself.
● alias1 and alias2 are aliases you assign to the table instances for differentiation.
● column is the column you use as the join condition to link rows from the same table.

Example: Consider an Employees table that contains information about employees and their
managers.

Employees Table:

EmployeeID EmployeeName ManagerID

1 Alice 3

2 Bob 3

3 Carol NULL

4 David 1

Self Join Query:

SELECT e1.EmployeeName AS Employee, e2.EmployeeName AS Manager


FROM Employees AS e1
JOIN Employees AS e2 ON e1.ManagerID = e2.EmployeeID;

Result:

Employee Manager
Alice Carol

Bob Carol

David Alice

In this example, the self join is performed on the Employees table to find the relationship
between employees and their managers. The join condition connects the ManagerID column
in the e1 alias (representing employees) with the EmployeeID column in the e2 alias
(representing managers).

SET OPERATIONS

Set operations in SQL are used to combine or manipulate the result sets of multiple SELECT
queries.
They allow you to perform operations similar to those in set theory, such as union,
intersection, and difference, on the data retrieved from different tables or queries.

Set operations provide powerful tools for managing and manipulating data, enabling you to
analyse and combine information in various ways.

There are four primary set operations in SQL:

● UNION
● INTERSECT
● EXCEPT (or MINUS)
● UNION ALL

1. UNION:

The UNION operator combines the result sets of two or more SELECT queries into a single
result set.
It removes duplicates by default, meaning that if there are identical rows in the result sets,
only one instance of each row will appear in the final result.

Example:

Assume we have two tables: Customers and Suppliers.


Customers Table:

CustomerID CustomerName

1 Alice

2 Bob

Suppliers Table:

SupplierID SupplierName

101 SupplierA

102 SupplierB

UNION Query:

SELECT CustomerName FROM Customers


UNION
SELECT SupplierName FROM Suppliers;

Result:

CustomerName

Alice

Bob

SupplierA

SupplierB

2. INTERSECT:

The INTERSECT operator returns the common rows that exist in the result sets of two or
more SELECT queries.

It only returns distinct rows that appear in all result sets.


Example: Using the same tables as before.

SELECT CustomerName FROM Customers


INTERSECT
SELECT SupplierName FROM Suppliers;

Result:

CustomerName

In this example, there are no common names between customers and suppliers, so the
result is an empty set.

3. EXCEPT (or MINUS):

The EXCEPT operator (also known as MINUS in some databases) returns the distinct rows
that are present in the result set of the first SELECT query but not in the result set of the
second SELECT query.

Example: Using the same tables as before.

SELECT CustomerName FROM Customers


EXCEPT
SELECT SupplierName FROM Suppliers;

Result:

CustomerName

Alice

Bob

In this example, the names "Alice" and "Bob" are customers but not suppliers, so they
appear in the result set.

4. UNION ALL:

The UNION ALL operator performs the same function as the UNION operator but does not
remove duplicates from the result set. It simply concatenates all rows from the different
result sets.
Example: Using the same tables as before.

SELECT CustomerName FROM Customers


UNION ALL
SELECT SupplierName FROM Suppliers;

Result:

CustomerName

Alice

Bob

SupplierA

SupplierB

Difference between Set Operations and Joins

● Purpose: Set operations manipulate result sets based on set theory principles, while joins
combine data from related tables based on specified conditions.
● Data source: Set operations work on the result sets of SELECT queries, while joins work on
tables that are related by common columns.
● Combining rows: Set operations combine rows from different result sets and may remove
duplicates, while joins combine rows from different tables based on specified conditions.
● Output columns: Set operations require the SELECT queries to have the same number of
output columns and compatible data types, while joins can combine columns from
different tables regardless of data types or column counts.
● Common operations: UNION, INTERSECT, EXCEPT (MINUS) for set operations; INNER JOIN,
LEFT JOIN, RIGHT JOIN, FULL JOIN for joins.
● Conditional requirements: Set operations require no specific join conditions, while joins
require specified join conditions for combining data.
● Handling duplicates: UNION removes duplicates by default, while joins do not inherently
handle duplicates; it depends on the join type and data.
● Usage scenarios: Set operations are useful for combining and analysing related data from
different queries or tables, while joins are used to retrieve and relate data from different
tables based on their relationships.
● Result set structure: With set operations the result sets may have different column names,
but data types and column counts must match; with joins the result sets can have
different column names, data types, and counts.
● Performance considerations: Set operations are generally faster and less complex than
joins; joins can be more complex and resource-intensive, especially for larger datasets.

SUB QUERIES

Subqueries, also known as nested queries or inner queries, allow you to use the result of
one query (the inner query) as the input for another query (the outer query).

Subqueries are often used to retrieve data that will be used for filtering, comparison, or
calculation within the context of a larger query.

They are a way to break down complex tasks into smaller, manageable steps.

Syntax:

SELECT columns
FROM table
WHERE column OPERATOR (SELECT column FROM table WHERE condition);

In this syntax:

● columns refers to the specific columns you want to retrieve from the outer query.
● table is the name of the table you're querying.
● column is the column you're applying the operator to in the outer query.
● OPERATOR is a comparison operator such as =, >, <, IN, NOT IN, etc.
● (SELECT column FROM table WHERE condition) is the subquery that provides the
input for the comparison.

Example: Consider two tables: Products and Orders.

Products Table:

ProductID ProductName Price

1 Laptop 1000

2 Smartphone 500

3 Headphones 50

Orders Table:

OrderID ProductID Quantity

101 1 2

102 3 1

For Example: Retrieve the product names and quantities for orders whose total cost is greater
than the average price of all products. Because Quantity lives in the Orders table, the outer
query joins the two tables, while the subquery supplies the average price:

SELECT Products.ProductName, Orders.Quantity
FROM Products
INNER JOIN Orders ON Products.ProductID = Orders.ProductID
WHERE Products.Price * Orders.Quantity > (SELECT AVG(Price) FROM Products);

Result:

ProductName Quantity
Laptop 2

Differences Between Subqueries and Joins:

● Purpose: Subqueries retrieve data for filtering, comparison, or calculation within the
context of a larger query, while joins combine data from related tables based on specified
conditions.
● Data source: A subquery uses the result of one query as input for another query, while a
join uses data from multiple related tables.
● Combining rows: Subqueries are not used for combining rows but for filtering or
evaluating data, while joins combine rows from different tables based on specified join
conditions.
● Result set structure: Subqueries return scalar values, single-column results, or small
result sets, while joins return multi-column result sets.
● Performance considerations: Subqueries can be slower and less efficient, especially when
dealing with large datasets, while joins can be more efficient for combining data from
multiple tables.
● Complexity: Subqueries can be easier to understand for simple tasks or smaller datasets,
while joins can become complex but are better suited to large-scale data retrieval and
combination tasks.
● Versatility: Subqueries can be used in various clauses (WHERE, FROM, HAVING, etc.), while
joins are primarily used in the FROM clause for combining tables.
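
To illustrate the versatility point above, here are two short sketches using the Products and
Orders tables from this section: a subquery with the IN operator in the WHERE clause, and a
subquery used as a derived table in the FROM clause (the alias t is illustrative):

-- Subquery with IN: products that appear in at least one order
SELECT ProductName
FROM Products
WHERE ProductID IN (SELECT ProductID FROM Orders);

-- Subquery in the FROM clause (a derived table): products ordered more than once
SELECT t.ProductID, t.TotalQuantity
FROM (SELECT ProductID, SUM(Quantity) AS TotalQuantity
      FROM Orders
      GROUP BY ProductID) AS t
WHERE t.TotalQuantity > 1;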
MySQL Interview Questions

Contents

Basic MySQL Interview Questions


1. What is MySQL?
2. What are some of the advantages of using MySQL?
3. What do you mean by ‘databases’?
4. What does SQL in MySQL stand for?
5. What does a MySQL database contain?
6. How can you interact with MySQL?
7. What are MySQL Database Queries?
8. What are some of the common MySQL commands?
9. How do you create a database in MySQL?
10. How do you create a table using MySQL?
11. How do you Insert Data Into MySQL?
12. How do you remove a column from a database?
13. How to create an Index in MySQL?
14. How to Delete Data From a MySQL Table?
15. How do you view a database in MySQL?
16. What are the Numeric Data Types in MySQL?
17. What are the String Data Types in MySQL?
18. What are the Temporal Data Types in MySQL?
19. What is BLOB in MySQL?
20. How to add users in MySQL?


Intermediate MySQL Interview Questions


21. What are MySQL “Views”?
22. How do you create and execute views in MySQL?
23. What are MySQL Triggers?
24. How many Triggers are possible in MySQL?
25. What is the MySQL server?
26. What are the MySQL clients and utilities?
27. What are the types of relationships used in MySQL?

Advanced MySQL Interview Questions


28. Can you explain the logical architecture of MySQL?
29. What is Scaling in MySQL?
30. What is Sharding in SQL?
31. What are Transaction Storage Engines in MySQL?

Conclusion
32. Conclusion



Let's get Started
Introduction to MySQL:
MySQL is an open-source relational database management system (RDBMS). It runs
on the web as well as on the server. MySQL is fast, reliable, and easy to use. It is open-
source software. MySQL uses standard SQL and compiles on a number of platforms. It
is a multithreaded, multi-user SQL database management system.
The data in a MySQL database is stored in the form of tables. A table is a collection of
related data, and it consists of columns and rows.
MySQL has stand-alone clients that allow users to interact directly with a MySQL
database using SQL, but more often MySQL is used with other programs to
implement applications that need relational database capability.
MySQL has more than 11 million installations.

Basic MySQL Interview Questions


1. What is MySQL?
MySQL is a database management system for web servers. It can grow with the
website as it is highly scalable. Most of the websites today are powered by MySQL.

2. What are some of the advantages of using MySQL?


Flexibility: MySQL runs on all operating systems


Power: MySQL focuses on performance
Enterprise-Level SQL Features: MySQL had for some time been lacking in
advanced features such as subqueries, views, and stored procedures, but these
have since been added.
Full-Text Indexing and Searching
Query Caching: This helps enhance the speed of MySQL greatly
Replication: One MySQL server can be duplicated on another, providing
numerous advantages
Configuration and Security

3. What do you mean by ‘databases’?


A database is a structured collection of data stored in a computer system and
organized in a way to be quickly searched. With databases, information can be
rapidly retrieved.

4. What does SQL in MySQL stand for?


The SQL in MySQL stands for Structured Query Language. This language is also used
in other databases such as Oracle and Microso SQL Server. One can use commands
such as the following to send requests from a database:
SELECT title FROM publications WHERE author = 'J. K. Rowling';

Note that SQL is not case sensitive. However, it is a good practice to write the SQL keywords in uppercase.

5. What does a MySQL database contain?


A MySQL database contains one or more tables, each of which contains records or
rows. Within these rows are various columns or fields that contain the data itself.

6. How can you interact with MySQL?


There are three main ways you can interact with MySQL:
using a command line
via a web interface
through a programming language


7. What are MySQL Database Queries?


A query is a specific request or a question. One can query a database for specific
information and have a record returned.

8. What are some of the common MySQL commands?


Command Action

ALTER To alter a database or table

BACKUP To back-up a table

\c To cancel Input

CREATE To create a database

DELETE To delete a row from a table

DESCRIBE To describe a table's columns

DROP To delete a database or table

EXIT(ctrl+c) To exit

GRANT To change user privileges

HELP (\h, \?) Display help

INSERT Insert data

LOCK Lock table(s)

QUIT(\q) Same as EXIT

RENAME Rename a Table

SHOW List details about an object

SOURCE Execute a file

STATUS (\s) Display the current status

TRUNCATE Empty a table



9. How do you create a database in MySQL?


Use the following command to create a new database called ‘books’:
CREATE DATABASE books;

10. How do you create a table using MySQL?


Use the following to create a table using MySQL:

CREATE TABLE history (


author VARCHAR(128),
title VARCHAR(128),
type VARCHAR(16),
year CHAR(4)) ENGINE InnoDB;

11. How do you Insert Data Into MySQL?


The INSERT INTO statement is used to add new records to a MySQL table:

INSERT INTO table_name (column1, column2, column3,...)


VALUES (value1, value2, value3,...)

If we want to add values for all the columns of the table, we do not need to specify
the column names in the SQL query. However, the order of the values should be in
the same order as the columns in the table. The INSERT INTO syntax would be as
follows:

INSERT INTO table_name


VALUES (value1, value2, value3, ...);

12. How do you remove a column from a database?


You can remove a column by using the DROP keyword:
ALTER TABLE classics DROP pages;

13. How to create an Index in MySQL?


In MySQL, there are different index types, such as a regular INDEX, a PRIMARY KEY, or
a FULLTEXT index. You can achieve fast searches with the help of an index. Indexes
speed up performance by either ordering the data on disk so it's quicker to find your
result, or by telling the SQL engine where to go to find your data.
Example: Adding indexes to the history table:

ALTER TABLE history ADD INDEX(author(10));


ALTER TABLE history ADD INDEX(title(10));
ALTER TABLE history ADD INDEX(type(5));
ALTER TABLE history ADD INDEX(year);
DESCRIBE history;

14. How to Delete Data From a MySQL Table?


In MySQL, the DELETE statement is used to delete records from a table:

DELETE FROM table_name


WHERE column_name = value_name

15. How do you view a database in MySQL?


One can view all the databases on the MySQL server host using the following
command:
mysql> SHOW DATABASES;

16. What are the Numeric Data Types in MySQL?


MySQL has numeric data types for integer, fixed-point, floating-point, and bit values,
as shown in the table below. Numeric types can be signed or unsigned, except BIT. A
special attribute enables the automatic generation of sequential integer or floating-
point column values, which is useful for applications that require a series of unique
identification numbers.


Type Name Meaning

TINYINT Very Small Integer

SMALLINT Small Integer

MEDIUMINT Medium-sized Integer

INT Standard Integer

BIGINT Large Integer

DECIMAL Fixed-point number

FLOAT Single-precision floating-point number

DOUBLE Double-precision floating-point number

BIT Bit-field

17. What are the String Data Types in MySQL?


Type Name Meaning

CHAR fixed-length nonbinary(character) string

VARCHAR variable-length nonbinary string

BINARY fixed-length binary string

VARBINARY variable-length binary string

TINYBLOB Very small BLOB(binary large object)

BLOB Small BLOB

MEDIUMBLOB Medium-sized BLOB

LONGBLOB Large BLOB

TINYTEXT A very small nonbinary string

TEXT Small nonbinary string

MEDIUMTEXT Medium-sized nonbinary string

LONGTEXT Large nonbinary string

ENUM An enumeration; each column value is assigned one enumeration member

SET A set; each column value is assigned zero or more set members

NULL NULL in SQL is the term used to represent a missing value. A NULL value in a
table is a value in a field that appears to be blank. This value is different from a
zero value or a field that contains spaces.

18. What are the Temporal Data Types in MySQL?

Type Name Meaning

DATE A date value, in 'CCYY-MM-DD' format

TIME A time value, in 'hh:mm:ss' format

DATETIME A date and time value, in 'CCYY-MM-DD hh:mm:ss' format

TIMESTAMP A timestamp value, in 'CCYY-MM-DD hh:mm:ss' format

YEAR A year value, in CCYY or YY format

Example: To select the records with an Order Date of "2018-11-11" from a table:

SELECT * FROM Orders WHERE OrderDate='2018-11-11'

19. What is BLOB in MySQL?


BLOB is an acronym that stands for a binary large object. It is used to hold a variable
amount of data.
There are four types of BLOB:
TINYBLOB
BLOB
MEDIUMBLOB
LONGBLOB
A BLOB can hold a very large amount of data. For example - documents, images, and
even videos. You could store your complete novel as a file in a BLOB if needed.


20. How to add users in MySQL?


You can add a User by using the CREATE command and specifying the necessary
credentials. For example:

CREATE USER 'testuser' IDENTIFIED BY 'sample password';
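
A newly created user has no privileges by default. As a hedged follow-up (reusing the 'books'
database created earlier in this document), you would typically grant the required privileges
next and can inspect them with SHOW GRANTS:

GRANT SELECT, INSERT ON books.* TO 'testuser';
SHOW GRANTS FOR 'testuser';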

Intermediate MySQL Interview Questions


21. What are MySQL “Views”?
In MySQL, a view consists of a set of rows that is returned if a particular query is
executed. This is also known as a 'virtual table'. Views make it easy to reuse a query by
making its result set available under an alias, as if it were a table.
The advantages of views are:
Simplicity
Security
Maintainability

22. How do you create and execute views in MySQL?


Creating a view is accomplished with the CREATE VIEW statement. As an example:

CREATE
[OR REPLACE]
[ALGORITHM = {MERGE | TEMPTABLE | UNDEFINED }]
[DEFINER = { user | CURRENT_USER }]
[SQL SECURITY { DEFINER | INVOKER }]
VIEW view_name [(column_list)]
AS select_statement
[WITH [CASCADED | LOCAL] CHECK OPTION]
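
Most of the optional clauses above can be omitted in practice. As a minimal sketch (the view
name is illustrative; the history table is the one created earlier), creating and then querying a
view looks like this:

CREATE VIEW recent_history AS
    SELECT author, title FROM history WHERE year > '2000';

SELECT * FROM recent_history;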

23. What are MySQL Triggers?


A trigger is a task that executes in response to some predefined database event, such
as after a new row is added to a particular table. Specifically, this event involves
inserting, modifying, or deleting table data, and the task can occur either prior to or
immediately following any such event.
Triggers have many purposes, including:
Audit Trails
Validation
Referential integrity enforcement

24. How many Triggers are possible in MySQL?


There are six Triggers allowed to use in the MySQL database:
Before Insert
After Insert
Before Update
After Update
Before Delete
After Delete
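
As a brief sketch (the trigger name and the clean-up action are purely illustrative; the history
table is the one created earlier), a BEFORE INSERT trigger in MySQL could look like this:

CREATE TRIGGER before_history_insert
BEFORE INSERT ON history
FOR EACH ROW
SET NEW.title = TRIM(NEW.title);  -- tidy the title before the new row is stored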

25. What is the MySQL server?


The server, mysqld, is the hub of a MySQL installation; it performs all manipulation of
databases and tables.

26. What are the MySQL clients and utilities?


Several MySQL programs are available to help you communicate with the server. For
administrative tasks, some of the most important ones are listed here:
• mysql—An interactive program that enables you to send SQL statements to the
server and to view the results. You can also use mysql to execute batch scripts (text
files containing SQL statements).
• mysqladmin—An administrative program for performing tasks such as shutting
down the server, checking its configuration, or monitoring its status if it appears not
to be functioning properly.


• mysqldump—A tool for backing up your databases or copying databases to another


server.
• mysqlcheck and myisamchk—Programs that help you perform table checking,
analysis, and optimization, as well as repairs if tables become damaged. mysqlcheck
works with MyISAM tables and to some extent with tables for other storage engines.
myisamchk is for use only with MyISAM tables.

27. What are the types of relationships used in MySQL?


There are three categories of relationships in MySQL:
One-to-One: Usually, when two items have a one-to-one relationship, you just
include them as columns in the same table.
One-to-Many: One-to-many (or many-to-one) relationships occur when one row
in one table is linked to many rows in another table.
Many-to-Many: In a many-to-many relationship, many rows in one table are
linked to many rows in another table. To create this relationship, add a third
table containing the same key column from each of the other tables
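
For the many-to-many case, a minimal sketch of such a third (junction) table, with illustrative
table and column names, might look like this:

CREATE TABLE students (
    student_id INT PRIMARY KEY,
    name VARCHAR(50)
);

CREATE TABLE courses (
    course_id INT PRIMARY KEY,
    title VARCHAR(50)
);

-- The junction table holds the key column from each of the other tables
CREATE TABLE enrollments (
    student_id INT,
    course_id INT,
    PRIMARY KEY (student_id, course_id),
    FOREIGN KEY (student_id) REFERENCES students(student_id),
    FOREIGN KEY (course_id) REFERENCES courses(course_id)
);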

Advanced MySQL Interview Questions


28. Can you explain the logical architecture of MySQL?
The top layer contains the services most network-based client/server tools or servers
need such as connection handling, authentication, security, and so forth.
The second layer contains much of MySQL’s brains. This has the code for query
parsing, analysis, optimization, caching, and all the built-in functions.
The third layer contains the storage engines that are responsible for storing and
retrieving the data stored in MySQL.


29. What is Scaling in MySQL?


In MySQL, scaling capacity is actually the ability to handle the load, and it’s useful to
think of load from several different angles such as:
Quantity of data
Number of users
User activity
Size of related datasets

30. What is Sharding in SQL?


The process of breaking up large tables into smaller chunks (called shards) that are
spread across multiple servers is called Sharding.
The advantage of Sharding is that since the sharded database is generally much
smaller than the original; queries, maintenance, and all other tasks are much faster.

31. What are Transaction Storage Engines in MySQL?


To be able to use MySQL’s transaction facility, you have to be using MySQL’s InnoDB
storage engine (which is the default from version 5.5 onward). If you are not sure
which version of MySQL your code will be running on, rather than assuming InnoDB is
the default engine you can force its use when creating a table, as follows.
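
A minimal sketch of forcing the InnoDB engine when creating a table (the table and column
names here are illustrative):

CREATE TABLE accounts (
    number INT,
    balance DECIMAL(10,2)
) ENGINE = InnoDB;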


Conclusion
32. Conclusion
Several free or low-cost database management systems are available from which to
choose, such as MySQL, PostgreSQL, or SQLite.
When you compare MySQL with other database systems, think about what’s most
important to you. Performance, features (such as SQL conformance or extensions),
support, licensing conditions, and price all are factors to take into account.
MySQL is one of the best RDBMS being used for developing various web-based
software applications.
MySQL is offered under two different editions: the open-source MySQL Community
Server and the proprietary Enterprise Server.
Given these considerations, MySQL has many attractive qualities:
Speed
Ease of use
Query language support
Capability
Connectivity and security
Portability
Availability and cost
Open distribution and source code
Few MySQL References:
https://www.mysql.com
https://learning.oreilly.com/library/view/learning-mysql/0596008643/



DBMS Interview Questions

Contents

Basic DBMS Interview Questions


1. What is meant by DBMS and what is its utility? Explain RDBMS with examples.
2. What is meant by a database?
3. Mention the issues with traditional file-based systems that make DBMS a better
choice?
4. Explain a few advantages of a DBMS.
5. Explain different languages present in DBMS.
6. What is meant by ACID properties in DBMS?
7. Are NULL values in a database the same as that of blank space or zero?

Intermediate DBMS Interview Questions


8. What is meant by Data Warehousing?
9. Explain different levels of data abstraction in a DBMS.
10. What is meant by an entity-relationship (E-R) model? Explain the terms Entity,
Entity Type, and Entity Set in DBMS.
11. Explain different types of relationships amongst tables in a DBMS.
12. Explain the difference between intension and extension in a database.
13. Explain the difference between the DELETE and TRUNCATE command in a
DBMS.
14. What is a lock. Explain the major difference between a shared lock and an
exclusive lock during a transaction in a database.
15. What is meant by normalization and denormalization?

Advanced DBMS Interview Questions


16. Explain different types of Normalization forms in a DBMS.
17. Explain different types of keys in a database.


Advanced DBMS Interview Questions (.....Continued)

18. Explain the difference between a 2-tier and 3-tier architecture in a DBMS.



Let's get Started
To consolidate your knowledge and concepts in DBMS, here we've listed the most
commonly asked DBMS interview questions to help you ace your interview!

Basic DBMS Interview Questions


1. What is meant by DBMS and what is its utility? Explain RDBMS
with examples.
As the name suggests DBMS or Database Management System is a set of applications
or programs that enable users to create and maintain a database. DBMS provides a
tool or an interface for performing various operations such as inserting, deleting,
updating, etc. into a database. It is software that enables the storage of data more
compactly and securely as compared to a file-based system. A DBMS system helps a
user to overcome problems like data inconsistency, data redundancy, etc. in a
database and makes it more convenient and organized to use it.
Examples of popular DBMS systems are file systems, XML, Windows Registry, etc.


RDBMS stands for Relational Database Management System and was introduced in
the 1970s to access and store data more efficiently than DBMS. RDBMS stores data in
the form of tables as compared to DBMS which stores data as files. Storing data as
rows and columns makes it easier to locate specific values in the database and makes
it more efficient as compared to DBMS.
Examples of popular RDBMS systems are MySQL, Oracle DB, etc.

2. What is meant by a database?


A Database is an organized, consistent, and logical collection of data that can easily
be updated, accessed, and managed. Database mostly contains sets of tables or
objects (anything created using create command is a database object) which consist
of records and fields. A tuple or a row represents a single entry in a table. An attribute
or a column represents the basic units of data storage, which contain information
about a particular aspect of the table. DBMS extracts data from a database in the
form of queries given by the user.

3. Mention the issues with traditional file-based systems that


make DBMS a better choice?
The absence of indexing in a traditional file-based system leaves us with the only
option of scanning the full page and hence making the access of content tedious and
super slow. The other issue is redundancy and inconsistency as files have many
duplicate and redundant data and changing one of them makes all of them
inconsistent. Accessing data is harder in traditional file-based systems because data
is unorganized in them.
Another issue is the lack of concurrency control, which leads to one operation locking
the entire page, as compared to DBMS where multiple operations can work on a
single file simultaneously.
Integrity check, data isolation, atomicity, security, etc. are some other issues with
traditional file-based systems for which DBMSs have provided some good solutions.

4. Explain a few advantages of a DBMS.


Following are a few advantages of using a DBMS.


Data Sharing: Data from a single database can be simultaneously shared by


multiple users. Such sharing also enables end-users to react to changes quickly
in the database environment.
Integrity constraints: The existence of such constraints allows storing of data in
an organized and refined manner.
Controlling redundancy in a database: Eliminates redundancy in a database by
providing a mechanism that integrates all the data in a single database.
Data Independence: This allows changing the data structure without altering
the composition of any of the executing application programs.
Provides backup and recovery facility: It can be configured to automatically
create the backup of the data and restore the data in the database whenever
required.
Data Security: DBMS provides the necessary tools to make the storage and
transfer of data more reliable and secure. Authentication (the process of giving
restricted access to a user) and encryption (encrypting sensitive data such as
OTP, credit card information, etc.) are some popular tools used to secure data in
a DBMS.

5. Explain different languages present in DBMS.


Following are various languages present in DBMS:


DDL(Data Definition Language): It contains commands which are required to
define the database.
E.g., CREATE, ALTER, DROP, TRUNCATE, RENAME, etc.
DML(Data Manipulation Language): It contains commands which are required
to manipulate the data present in the database.
E.g., SELECT, UPDATE, INSERT, DELETE, etc.
DCL(Data Control Language): It contains commands which are required to
deal with the user permissions and controls of the database system.
E.g., GRANT and REVOKE.
TCL(Transaction Control Language): It contains commands which are required
to deal with the transaction of the database.
E.g., COMMIT, ROLLBACK, and SAVEPOINT.

6. What is meant by ACID properties in DBMS?


ACID stands for Atomicity, Consistency, Isolation, and Durability. In a DBMS, these are
the properties that ensure a safe and secure way of sharing data among multiple
users.


Atomicity: This property reflects the concept of either executing the whole
query or executing nothing at all, which implies that if an update occurs in a
database then that update should either be reflected in the whole database or
should not be reflected at all.


Consistency: This property ensures that the data remains consistent before and
after a transaction in a database.

Isolation: This property ensures that each transaction is occurring


independently of the others. This implies that the state of an ongoing
transaction doesn’t affect the state of another ongoing transaction.


Durability: This property ensures that the data is not lost in cases of a system
failure or restart and is present in the same state as it was before the system
failure or restart.

7. Are NULL values in a database the same as that of blank space


or zero?
No, a NULL value is very different from that of zero and blank space as it represents a
value that is assigned, unknown, unavailable, or not applicable as compared to blank
space which represents a character and zero represents a number.
Example: NULL value in “number_of_courses” taken by a student represents that its
value is unknown whereas 0 in it means that the student hasn’t taken any courses.

Intermediate DBMS Interview Questions


8. What is meant by Data Warehousing?


The process of collecting, extracting, transforming, and loading data from multiple
sources and storing them into one database is known as data warehousing. A data
warehouse can be considered as a central repository where data flows from
transactional systems and other relational databases and is used for data analytics. A
data warehouse comprises a wide variety of organization’s historical data that
supports the decision-making process in an organization.

9. Explain different levels of data abstraction in a DBMS.


The process of hiding irrelevant details from users is known as data abstraction. Data
abstraction can be divided into 3 levels:


Physical Level: it is the lowest level and is managed by DBMS. This level
consists of data storage descriptions and the details of this level are typically
hidden from system admins, developers, and users.
Conceptual or Logical level: it is the level on which developers and system
admins work and it determines what data is stored in the database and what is
the relationship between the data points.
External or View level: it is the level that describes only part of the database
and hides the details of the table schema and its physical storage from the users.
The result of a query is an example of View level data abstraction. A view is a
virtual table created by selecting fields from one or more tables present in the
database.

10. What is meant by an entity-relationship (E-R) model?


Explain the terms Entity, Entity Type, and Entity Set in
DBMS.
An entity-relationship model is a diagrammatic approach to a database design where
real-world objects are represented as entities and relationships between them are
mentioned.


Entity: An entity is defined as a real-world object having attributes that


represent characteristics of that particular object. For example, a student, an
employee, or a teacher represents an entity.
Entity Type: An entity type is defined as a collection of entities that have the
same attributes. One or more related tables in a database represent an entity
type. Entity type or attributes can be understood as a characteristic which
uniquely identifies the entity. For example, a student represents an entity that
has attributes such as student_id, student_name, etc.
Entity Set: An entity set can be defined as a set of all the entities present in a
specific entity type in a database. For example, a set of all the students,
employees, teachers, etc. represent an entity set.

11. Explain different types of relationships amongst tables in a


DBMS.
Following are different types of relationship amongst tables in a DBMS system:
One to One Relationship: This type of relationship is applied when a particular
row in table X is linked to a singular row in table Y.


One to Many Relationship: This type of relationship is applied when a single


row in table X is related to many rows in table Y.

Many to Many Relationship: This type of relationship is applied when multiple


rows in table X can be linked to multiple rows in table Y.

Self Referencing Relationship: This type of relationship is applied when a


particular row in table X is associated with the same table.


12. Explain the difference between intension and extension in a


database.
Following is the major difference between intension and extension in a database:
Intension: Intension or popularly known as database schema is used to define
the description of the database and is specified during the design of the
database and mostly remains unchanged.
Extension: Extension on the other hand is the measure of the number of tuples
present in the database at any given point in time. The extension of a database
is also referred to as the snapshot of the database and its value keeps changing
as and when the tuples are created, updated, or destroyed in a database.

13. Explain the difference between the DELETE and TRUNCATE


command in a DBMS.
DELETE command: this command is needed to delete rows from a table based on
the condition provided by the WHERE clause.


It deletes only the rows which are specified by the WHERE clause.
It can be rolled back if required.
It maintains a log to lock the row of the table before deleting it and hence it’s
slow.
TRUNCATE command: this command is needed to remove complete data from a
table in a database. It is like a DELETE command which has no WHERE clause.
It removes complete data from a table in a database.
It cannot be rolled back even if required.
It doesn’t maintain a log and removes all the rows at once, hence it’s fast.
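
A short illustration of the two commands (the Employees table and the condition are
illustrative):

-- Removes only the matching rows; can be rolled back inside a transaction
DELETE FROM Employees WHERE Department = 'Sales';

-- Removes every row from the table in one operation
TRUNCATE TABLE Employees;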

14. What is a lock. Explain the major difference between a


shared lock and an exclusive lock during a transaction in a
database.
A database lock is a mechanism to protect a shared piece of data from getting
updated by two or more database users at the same time. When a single database
user or session has acquired a lock then no other database user or session can modify
that data until the lock is released.
Shared Lock: A shared lock is required for reading a data item and many
transactions may hold a lock on the same data item in a shared lock. Multiple
transactions are allowed to read the data items in a shared lock.
Exclusive lock: An exclusive lock is a lock on any transaction that is about to
perform a write operation. This type of lock doesn’t allow more than one
transaction and hence prevents any inconsistency in the database.
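
As a MySQL/InnoDB-specific sketch (run inside a transaction; the Accounts table is
illustrative), a shared lock can be requested with LOCK IN SHARE MODE (FOR SHARE in
MySQL 8.0 and later) and an exclusive lock with FOR UPDATE:

START TRANSACTION;

-- Shared (read) lock: other transactions may still read the row
SELECT Balance FROM Accounts WHERE AccountID = 123 LOCK IN SHARE MODE;

-- Exclusive (write) lock: other transactions cannot modify the row until COMMIT
SELECT Balance FROM Accounts WHERE AccountID = 123 FOR UPDATE;

COMMIT;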

15. What is meant by normalization and denormalization?


Normalization is a process of reducing redundancy by organizing the data into
multiple tables. Normalization leads to better usage of disk spaces and makes it
easier to maintain the integrity of the database.


Denormalization is the reverse process of normalization as it combines the tables


which have been normalized into a single table so that data retrieval becomes faster.
JOIN operation allows us to create a denormalized form of the data by reversing the
normalization.

Advanced DBMS Interview Questions


16. Explain different types of Normalization forms in a DBMS.
Following are the major normalization forms in a DBMS:


Considering the above Table-1 as the reference example for understanding


different normalization forms.
1NF: It is known as the first normal form and is the simplest type of
normalization that you can implement in a database. A table to be in its first
normal form should satisfy the following conditions:
Every column must have a single value and should be atomic.
Duplicate columns from the same table should be removed.
Separate tables should be created for each group of related data and each
row should be identified with a unique column.


Table-1 converted to 1NF form


2NF: It is known as the second normal form. A table to be in its second normal
form should satisfy the following conditions:
The table should be in its 1NF i.e. satisfy all the conditions of 1NF.
Every non-prime attribute of the table should be fully functionally
dependent on the primary key, i.e. every non-key attribute should depend
on the whole primary key and not on just a part of it (there should be no
partial dependencies).


Breaking Table-1 into 2 different tables to move it to 2NF.


3NF: It is known as the third normal form. A table to be in its third normal
form should satisfy the following conditions:
The table should be in its 2NF i.e. satisfy all the conditions of 2NF.
There is no transitive functional dependency of one attribute on any
attribute in the same table.


Breaking Table-1 into 3 different tables to move it to 3NF.


BCNF: BCNF stands for Boyce-Codd Normal Form and is an advanced form of
3NF. It is also referred to as 3.5NF for the same reason. A table to be in its BCNF
normal form should satisfy the following conditions:
The table should be in its 3NF i.e. satisfy all the conditions of 3NF.
For every functional dependency of any attribute A on B
(A->B), A should be the super key of the table. It simply implies that A can’t
be a non-prime attribute if B is a prime attribute.


17. Explain different types of keys in a database.


There are mainly 7 types of keys in a database:
Candidate Key: The candidate key represents a set of properties that can
uniquely identify a table. Each table may have multiple candidate keys. One key
amongst all candidate keys can be chosen as a primary key. In the below
example since studentId and firstName can be considered as a Candidate Key
since they can uniquely identify every tuple.
Super Key: The super key defines a set of attributes that can uniquely identify a
tuple. Candidate key and primary key are subsets of the super key, in other
words, the super key is their superset.


Primary Key: The primary key defines a set of attributes that are used to
uniquely identify every tuple. In the below example studentId and firstName are
candidate keys and any one of them can be chosen as a Primary Key. In the given
example studentId is chosen as the primary key for the student table.
Unique Key: The unique key is very similar to the primary key except that
primary keys don’t allow NULL values in the column but unique keys allow them.
So essentially unique keys are primary keys with NULL values.
Alternate Key: All the candidate keys which are not chosen as primary keys are
considered as alternate Keys. In the below example, firstname and lastname are
alternate keys in the database.
Foreign Key: The foreign key defines an attribute that can only take the values
present in one table common to the attribute present in another table. In the
below example courseId from the Student table is a foreign key to the Course
table, as both, the tables contain courseId as one of their attributes.
Composite Key: A composite key refers to a combination of two or more
columns that can uniquely identify each tuple in a table. In the below example
the studentId and firstname can be grouped to uniquely identify every tuple in
the table.
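
A brief sketch tying several of these keys together, using the Student and Course attributes
mentioned above (the column types are illustrative):

CREATE TABLE Course (
    courseId INT PRIMARY KEY,                           -- primary key of Course
    courseName VARCHAR(50)
);

CREATE TABLE Student (
    studentId INT,
    firstName VARCHAR(50),
    lastName VARCHAR(50),
    courseId INT,
    PRIMARY KEY (studentId),                            -- primary key
    UNIQUE (firstName),                                 -- alternate/candidate key enforced as unique
    FOREIGN KEY (courseId) REFERENCES Course(courseId)  -- foreign key to the Course table
);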


18. Explain the difference between a 2-tier and 3-tier


architecture in a DBMS.
The 2-tier architecture refers to the client-server architecture in which applications
at the client end directly communicate with the database at the server end without
any middleware involved.
Example – Contact Management System created using MS-Access or Railway
Reservation System, etc.

The above picture represents a 2-tier architecture in a DBMS.


The 3-tier architecture contains another layer between the client and the server to
provide GUI to the users and make the system much more secure and accessible. In
this type of architecture, the application present on the client end interacts with an
application on the server end which further communicates with the database system.
Example – Designing registration form which contains a text box, label, button or a
large website on the Internet, etc.


The above picture represents a 3-tier architecture in a DBMS.


Recommended Tutorials:
SQL Interview Questions
SQL Server Interview Questions
MySQL Interview Questions
MongoDB Interview Questions
PL SQL Interview Questions





Hibernate Interview Questions

Contents

Hibernate Interview Questions For Freshers


1. What is ORM in Hibernate?
2. What are the advantages of Hibernate over JDBC?
3. What are some of the important interfaces of Hibernate framework?
4. What is a Session in Hibernate?
5. What is a SessionFactory?
6. What do you think about the statement - “session being a thread-safe object”?
7. Can you explain what is lazy loading in hibernate?
8. What is the difference between first level cache and second level cache?
9. What can you tell about Hibernate Configuration File?
10. How do you create an immutable class in hibernate?
11. Can you explain the concept behind Hibernate Inheritance Mapping?
12. Is hibernate prone to SQL injection attack?

Intermediate Interview Questions


13. Explain hibernate mapping file
14. What are the most commonly used annotations available to support hibernate
mapping?
15. Explain Hibernate architecture
16. Can you tell the difference between getCurrentSession and openSession
methods?
17. Differentiate between save() and saveOrUpdate() methods in hibernate session.
18. Differentiate between get() and load() in Hibernate session


Intermediate Interview Questions (.....Continued)

19. What is criteria API in hibernate?


20. What is HQL?
21. Can you tell something about one to many associations and how can we use
them in Hibernate?
22. What are Many to Many associations?
23. What does session.lock() method in hibernate do?
24. What is hibernate caching?
25. When is merge() method of the hibernate session useful?
26. Collection mapping can be done using One-to-One and Many-to-One
Associations. What do you think?
27. Can you tell the difference between setMaxResults() and setFetchSize() of
Query?
28. Does Hibernate support Native SQL Queries?

Hibernate Interview Questions For Experienced


29. What happens when the no-args constructor is absent in the Entity bean?
30. Can we declare the Entity class final?
31. What are the states of a persistent entity?
32. Explain Query Cache
33. Can you tell something about the N+1 SELECT problem in Hibernate?
34. How to solve N+1 SELECT problem in Hibernate?
35. What are the concurrency strategies available in hibernate?
36. What is Single Table Strategy?


Hibernate Interview Questions For


Experienced (.....Continued)

37. Can you tell something about Table Per Class Strategy.
38. Can you tell something about Named SQL Query
39. What are the benefits of NamedQuery?



Let's get Started
Hibernate is a Java-based persistence framework and an object-relational
mapping (ORM) framework that basically allows a developer to map POJO - plain old
Java objects - to relational database tables.
The aim of hibernate framework is to free the developer from the common data
persistence-related complex configurations and tasks. It does so by mapping the
POJO objects with the database tables efficiently and most importantly in an abstract
manner.
The developer need not know the underlying complications involved. Along with
abstraction, the queries can be executed in a very efficient manner. All these helps
developers to save a lot of time involved in development.
Top we will walk you through the top questions to get you ready for a Hibernate
interview. This article would cover basic, intermediate, and advanced questions.

Hibernate Interview Questions For Freshers


1. What is ORM in Hibernate?
Hibernate ORM stands for Object Relational Mapping. This is a mapping tool
pattern mainly used for converting data stored in a relational database to an object
used in object-oriented programming constructs. This tool also helps greatly in
simplifying data retrieval, creation, and manipulation.


Object Relational Mapping

2. What are the advantages of Hibernate over JDBC?


The advantages of Hibernate over JDBC are listed below:
Clean Readable Code: Using hibernate, helps in eliminating a lot of JDBC API-
based boiler-plate codes, thereby making the code look cleaner and readable.
HQL (Hibernate Query Language): Hibernate provides HQL which is closer to
Java and is object-oriented in nature. This helps in reducing the burden on
developers for writing database independent queries. In JDBC, this is not the
case. A developer has to know the database-specific codes.
Transaction Management: JDBC doesn't support implicit transaction
management. It is upon the developer to write transaction management code
using commit and rollback methods. Hibernate, on the other hand, provides this feature implicitly.
Exception Handling: Hibernate wraps the JDBC exceptions and throws
unchecked exceptions like JDBCException or HibernateException. This along
with the built-in transaction management system helps developers to avoid
writing multiple try-catch blocks to handle exceptions. In the case of JDBC, it
throws a checked exception called SQLException thereby mandating the
developer to write try-catch blocks to handle this exception at compile time.
Special Features: Hibernate supports OOPs features like inheritance,
associations and also supports collections. These are not available in JDBC.


3. What are some of the important interfaces of the Hibernate framework?
Hibernate core interfaces are:
Configuration
SessionFactory
Session
Criteria
Query
Transaction

4. What is a Session in Hibernate?


A session is an object that maintains the connection between the Java application objects and the database. The Session also has methods for storing, retrieving, modifying or deleting data from the database, using methods like persist(), load(), get(), update(), delete(), etc.
Additionally, It has factory methods to return Query, Criteria, and Transaction
objects.
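
For illustration, a minimal sketch of working with a Session might look like the following (this assumes a sessionFactory has already been built and uses the InterviewBitEmployee entity referenced elsewhere in this document; the values are illustrative):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
try {
    InterviewBitEmployee employee = new InterviewBitEmployee();
    employee.setFullName("Hibernate");

    Serializable id = session.save(employee);                      // store the object
    InterviewBitEmployee loaded =
            session.get(InterviewBitEmployee.class, id);           // retrieve it again
    tx.commit();
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();   // a Session is short-lived and must be closed
}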

5. What is a SessionFactory?
SessionFactory provides an instance of Session. It is a factory class that gives the
Session objects based on the configuration parameters in order to establish the
connection to the database.
As a good practice, the application generally has a single instance of SessionFactory.
The internal state of a SessionFactory which includes metadata about ORM is
immutable, i.e once the instance is created, it cannot be changed.
This also provides the facility to get information like statistics and metadata related
to a class, query executions, etc. It also holds second-level cache data if enabled.
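
As a rough sketch, a single SessionFactory is usually bootstrapped once at application start-up using Hibernate's classic Configuration API (file name and variable names are illustrative):

// Built once and reused for the whole application
SessionFactory sessionFactory = new Configuration()
        .configure("hibernate.cfg.xml")   // reads connection and mapping settings
        .buildSessionFactory();

Session session = sessionFactory.openSession();   // the factory hands out Session objects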

6. What do you think about the statement - “session being a thread-safe object”?
No, Session is not a thread-safe object, which means it should not be accessed by multiple threads simultaneously; doing so can lead to data inconsistency.

7. Can you explain lazy loading in hibernate?

Lazy loading is mainly used for improving the application performance by helping to
load the child objects on demand.
It is to be noted that, since Hibernate 3 version, this feature has been enabled by
default. This signifies that child objects are not loaded until the parent gets loaded.
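
For example, with a @OneToMany association the child collection is lazily fetched by default; the behaviour can also be stated explicitly via the fetch attribute. A minimal sketch (entity names are illustrative, not from the original text):

@Entity
public class Author {

    @Id
    @GeneratedValue
    private Long id;

    // Books are not loaded when an Author is loaded;
    // they are fetched only when the books collection is actually accessed.
    @OneToMany(mappedBy = "author", fetch = FetchType.LAZY)
    private List<Book> books;
}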

8. What is the difference between first level cache and second level cache?
Hibernate has 2 cache types. First level and second level cache for which the
difference is given below:

First Level Cache vs Second Level Cache:
The first level cache is local to the Session object and cannot be shared between multiple sessions, whereas the second level cache is maintained at the SessionFactory level and shared among all sessions in Hibernate.
The first level cache is enabled by default and there is no way to disable it, whereas the second level cache is disabled by default, but we can enable it through configuration.
The first level cache is available only until the session is open; once the session is closed, the first level cache is destroyed. The second-level cache is available throughout the application's life cycle; it is only destroyed and recreated when the application is restarted.


If an entity or object is loaded by calling the get() method, Hibernate first checks the first level cache. If it doesn't find the object there, it goes to the second level cache, if configured. If the object is still not found, it finally goes to the database and returns the object; if there is no corresponding row in the table, it returns null.
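
The lookup order can be summarised as: first level cache -> second level cache (if enabled) -> database -> null.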

9. What can you tell about Hibernate Configuration File?


Hibernate Configuration File or hibernate.cfg.xml is one of the most required
configuration files in Hibernate. By default, this file is placed under the
src/main/resource folder.
The file contains database related configurations and session-related configurations.
Hibernate facilitates providing the configuration either in an XML file (like
hibernate.cfg.xml) or a properties file (like hibernate.properties).
This file is used to define the below information:
Database connection details: Driver class, URL, username, and password.
There must be one configuration file for each database used in the application; for example, if we want to connect with 2 databases, we must create 2 configuration files with different names.
Hibernate properties: Dialect, show_sql, second_level_cache, and mapping file
names.
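
A minimal hibernate.cfg.xml might look like the following sketch (the connection values, dialect and mapping file name are placeholders, not taken from the original document):

<hibernate-configuration>
   <session-factory>
      <!-- Database connection details -->
      <property name="hibernate.connection.driver_class">com.mysql.cj.jdbc.Driver</property>
      <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/interviewbit</property>
      <property name="hibernate.connection.username">root</property>
      <property name="hibernate.connection.password">password</property>

      <!-- Hibernate properties -->
      <property name="hibernate.dialect">org.hibernate.dialect.MySQL8Dialect</property>
      <property name="hibernate.show_sql">true</property>

      <!-- Mapping file names -->
      <mapping resource="InterviewBitEmployee.hbm.xml"/>
   </session-factory>
</hibernate-configuration>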

10. How do you create an immutable class in hibernate?


An immutable class in hibernate can be created in the following way. If we are using the XML form of configuration, then a class can be made immutable by marking mutable=false. The default value is true, which indicates that the class is mutable by default.
In the case of using annotations, immutable classes in hibernate can also be created
by using @Immutable annotation.
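
A minimal sketch of the annotation-based approach (the entity name is illustrative):

import org.hibernate.annotations.Immutable;

@Entity
@Immutable   // Hibernate ignores updates to instances of this entity
public class Country {
    @Id
    private Long id;
    private String name;
}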

11. Can you explain the concept behind Hibernate Inheritance Mapping?


Java is an Object-Oriented Programming Language and Inheritance is one of the most important pillars of object-oriented principles. To represent models in Java, inheritance is most commonly used to simplify and express the relationships. But,
there is a catch. Relational databases do not support inheritance. They have a flat
structure.
Hibernate's Inheritance Mapping strategies deal with how Hibernate, being an ORM, maps Java's inheritance hierarchies onto the flat structure of relational databases.
Consider the example where we have to divide InterviewBitEmployee into Contract
and Permanent Employees represented by IBContractEmployee and
IBPermanentEmployee classes respectively. Now the task of hibernate is to represent
these 2 employee types by considering the below restrictions:
The general employee details are defined in the parent InterviewBitEmployee class.
Contract and Permanent employee-specific details are stored in IBContractEmployee
and IBPermanentEmployee classes respectively
The class diagram of this system is as shown below:

Hibernate’s Inheritance Mapping


There are different inheritance mapping strategies available:


Single Table Strategy
Table Per Class Strategy
Mapped Super Class Strategy
Joined Table Strategy

12. Is hibernate prone to SQL injection attack?


SQL injection attack is a serious vulnerability in terms of web security wherein an
attacker can interfere with the queries made by an application/website to its
database thereby allowing the attacker to view sensitive data which are generally
irretrievable. It can also allow the attacker to modify or remove the data, resulting in damage to the application behavior.
Hibernate does not provide immunity to SQL Injection. However, following good
practices avoids SQL injection attacks. It is always advisable to follow any of the
below options:
Incorporate Prepared Statements that use Parameterized Queries (as illustrated below).
Use Stored Procedures.
Ensure data sanity by doing input validation.
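
For instance, a parameterized HQL query keeps user input out of the query string. The sketch below assumes an InterviewBitEmployee entity with a fullName property, as used elsewhere in this document; the source of the user input is illustrative:

// Safe: the user-supplied value is bound as a parameter, never concatenated into the query
String userInput = request.getParameter("name");   // illustrative input source
Query query = session.createQuery(
        "from InterviewBitEmployee e where e.fullName = :fullName");
query.setParameter("fullName", userInput);
List results = query.list();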

Intermediate Interview Questions


13. Explain hibernate mapping file
Hibernate mapping file is an XML file that is used for defining the entity bean fields
and corresponding database column mappings.
These files are useful when the project uses third-party classes where the JPA annotations provided by Hibernate cannot be used.
In the previous example, we have defined the mapping resource as
“InterviewBitEmployee.hbm.xml” in the config file. Let us see what that sample
hbm.xml file looks like:


<?xml version = "1.0" encoding = "utf-8"?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD//EN"
"http://www.hibernate.org/dtd/hibernate-mapping-3.0.dtd">

<hibernate-mapping>
   <!-- What class is mapped to what database table-->
   <class name = "InterviewBitEmployee" table = "InterviewBitEmployee">
      <meta attribute = "class-description">
         This class contains the details of employees of InterviewBit.
      </meta>
      <id name = "id" type = "int" column = "employee_id">
         <generator class="native"/>
      </id>
      <property name = "fullName" column = "full_name" type = "string"/>
      <property name = "email" column = "email" type = "string"/>
   </class>
</hibernate-mapping>

14. What are the most commonly used annotations available to support hibernate mapping?
Hibernate framework provides support to JPA annotations and other useful
annotations in the org.hibernate.annotations package. Some of them are as follows:


javax.persistence.Entity: This annotation is used on the model classes by using “@Entity” and tells that the classes are entity beans.
javax.persistence.Table: This annotation is used on the model classes by using
“@Table” and tells that the class maps to the table name in the database.
javax.persistence.Access: This is used as “@Access” and is used for defining the
access type of either field or property. When nothing is specified, the default
value taken is “field”.
javax.persistence.Id: This is used as “@Id” and is used on the attribute in a class
to indicate that attribute is the primary key in the bean entity.
javax.persistence.EmbeddedId: Used as “@EmbeddedId” upon the attribute and
indicates it is a composite primary key of the bean entity.
javax.persistence.Column: “@Column” is used for defining the column name in
the database table.
javax.persistence.GeneratedValue: “@GeneratedValue” is used for defining the
strategy used for primary key generation. This annotation is used along with
javax.persistence.GenerationType enum.
javax.persistence.OneToOne: “@OneToOne” is used for defining the one-to-one
mapping between two bean entities. Similarly, hibernate provides OneToMany,
ManyToOne and ManyToMany annotations for defining different mapping types.
org.hibernate.annotations.Cascade: “@Cascade” annotation is used for defining
the cascading action between two bean entities. It is used with
org.hibernate.annotations.CascadeType enum to define the type of cascading.
Following is a sample class where we have used the above listed annotations:


package com.dev.interviewbit.model;

import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToOne;
import javax.persistence.Table;

import org.hibernate.annotations.Cascade;

@Entity
@Table(name = "InterviewBitEmployee")
@Access(value=AccessType.FIELD)
public class InterviewBitEmployee {

   @Id
   @GeneratedValue(strategy = GenerationType.IDENTITY)
   @Column(name = "employee_id")
   private long id;

   @Column(name = "full_name")
   private String fullName;

   @Column(name = "email")
   private String email;

   @OneToOne(mappedBy = "employee")
   @Cascade(value = org.hibernate.annotations.CascadeType.ALL)
   private Address address;

   //getters and setters methods
}

15. Explain Hibernate architecture


The Hibernate architecture consists of many objects such as a persistent object, session factory, session, query, transaction, etc. An application developed using Hibernate is mainly categorized into 4 parts:


Java Application
Hibernate framework - Configuration and Mapping Files
Internal API -
JDBC (Java Database Connectivity)
JTA (Java Transaction API)
JNDI (Java Naming Directory Interface).
Database - MySQL, PostGreSQL, Oracle, etc

Hibernate Architecture

The main elements of Hibernate framework are:


SessionFactory: This provides a factory method to get session objects and clients
of ConnectionProvider. It holds a second-level cache (optional) of data.
Session: This is a short-lived object that acts as an interface between the java
application objects and database data.
The session can be used to generate transaction, query, and criteria objects.
It also has a mandatory first-level cache of data.
Transaction: This object specifies the atomic unit of work and has methods
useful for transaction management. This is optional.
ConnectionProvider: This is a factory of JDBC connection objects and it provides
an abstraction to the application from the DriverManager. This is optional.
TransactionFactory: This is a factory of Transaction objects. It is optional.

Hibernate Objects

16. Can you tell the difference between getCurrentSession and openSession methods?
Both the methods are provided by the Session Factory. The main differences are
given below:


getCurrentSession() vs openSession():
getCurrentSession() returns the session bound to the context, whereas openSession() always opens a new session.
The getCurrentSession() session object's scope belongs to the hibernate context, and to make this work the hibernate configuration file has to be modified by adding <property name = "hibernate.current_session_context_class">thread</property>. If this property is not added, using the method would throw a HibernateException. With openSession(), a new session object has to be created for each request in a multi-threaded environment; hence, you need not configure any property to call this method.
The session returned by getCurrentSession() gets closed once the session factory is closed, whereas with openSession() it is the developer's responsibility to close the object once all the database operations are done.
In a single-threaded environment, getCurrentSession() is faster than openSession(); conversely, openSession() is slower than getCurrentSession() in a single-threaded environment.

Apart from these two methods, there is another method openStatelessSession() and
this method returns a stateless session object.
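
As a small illustration of the difference in responsibility, a session obtained from openSession() is typically closed by the caller, whereas getCurrentSession() relies on the configured context (a minimal sketch):

Session session = sessionFactory.openSession();   // always a new Session
try {
    // ... database operations ...
} finally {
    session.close();   // the developer must close it explicitly
}

// With getCurrentSession(), the Session is bound to the configured context
// and is closed for you when that context ends:
Session current = sessionFactory.getCurrentSession();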

17. Differentiate between save() and saveOrUpdate() methods in hibernate session.


Both the methods save records to the table in the database in case there are no
records with the primary key in the table. However, the main differences between
these two are listed below:

save() vs saveOrUpdate():
save() generates a new identifier and INSERTs the record into the database, whereas saveOrUpdate() can either INSERT or UPDATE based upon the existence of the record.
With save(), the insertion fails if the primary key already exists in the table; with saveOrUpdate(), if the primary key already exists, the record is updated.
The return type of save() is Serializable, which is the newly generated identifier value returned as a Serializable object; the return type of saveOrUpdate() is void.
save() is used to bring only a transient object to a persistent state; saveOrUpdate() can bring both transient (new) and detached (existing) objects into a persistent state, and it is often used to re-attach a detached object to a Session.

Clearly, saveOrUpdate() is more flexible in terms of use but it involves extra processing to find out whether a record already exists in the table or not.
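
A rough sketch of the difference in practice (entity and values are illustrative):

InterviewBitEmployee emp = new InterviewBitEmployee();
emp.setFullName("Hibernate");

Serializable generatedId = session.save(emp);   // always INSERTs and returns the new id

emp.setFullName("Hibernate ORM");
session.saveOrUpdate(emp);   // INSERTs if the row does not exist, otherwise UPDATEs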

18. Differentiate between get() and load() in Hibernate session


These are the methods to get data from the database. The primary differences
between get and load in Hibernate are given below:

get() vs load():
get() fetches the data from the database as soon as it is called, whereas load() returns a proxy object and loads the data only when it is actually required.
With get(), the database is hit every time the method is called; with load(), the database is hit only when it is really needed. This is called Lazy Loading, which makes load() the better-performing method.
get() returns null if the object is not found; load() throws ObjectNotFoundException if the object is not found.
get() should be used if we are unsure about the existence of data in the database; load() is to be used when we know for sure that the data is present in the database.
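
A short sketch of both calls (the id value is illustrative):

// get(): hits the database immediately; returns null when the row is missing
InterviewBitEmployee e1 = session.get(InterviewBitEmployee.class, 101L);

// load(): returns a proxy; the database is hit only when a field is accessed,
// and ObjectNotFoundException is thrown if the row does not exist
InterviewBitEmployee e2 = session.load(InterviewBitEmployee.class, 101L);
System.out.println(e2.getFullName());   // proxy initialised here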

19. What is criteria API in hibernate?


Criteria API in Hibernate helps developers to build dynamic criteria queries on the
persistence database. Criteria API is a more powerful and flexible alternative to HQL
(Hibernate Query Language) queries for creating dynamic queries.


This API allows us to programmatically build criteria query objects. The org.hibernate.Criteria interface is used for this purpose. The Session interface of the hibernate framework has a createCriteria() method that takes the persistent object's class or its entity name as the parameter and returns a Criteria instance; when the criteria query is executed, instances of that persistent class are returned.
It also makes it very easy to incorporate restrictions to selectively retrieve data from
the database. It can be achieved by using the add() method which accepts the
org.hibernate.criterion.Criterion object representing individual restriction.
Usage examples:
To return all the data of InterviewBitEmployee entity class.

Criteria criteria = session.createCriteria(InterviewBitEmployee.class);
List<InterviewBitEmployee> results = criteria.list();

To retrieve objects whose property has a value equal to the restriction, we use the Restrictions.eq() method. For example, to fetch all records with name
‘Hibernate’:

Criteria criteria= session.createCriteria(InterviewBitEmployee.class);
criteria.add(Restrictions.eq("fullName","Hibernate"));
List<InterviewBitEmployee> results = criteria.list();

To get objects whose property has the value “not equal to” the restriction, we
use Restrictions.ne() method. For example, to fetch all the records whose
employee’s name is not Hibernate:

Criteria criteria= session.createCriteria(InterviewBitEmployee.class);
criteria.add(Restrictions.ne("fullName","Hibernate"));
List<InterviewBitEmployee> results = criteria.list();

To retrieve all objects whose property matches a given pattern, we use Restrictions.like() (for case-sensitive matching) and Restrictions.ilike() (for case-insensitive matching):


Criteria criteria= session.createCriteria(InterviewBitEmployee.class);
criteria.add(Restrictions.like("fullName","Hib%",MatchMode.ANYWHERE));
List<InterviewBitEmployee> results = criteria.list();

Similarly, it also has other methods like isNull(), isNotNull(), gt(), ge(), lt(), le() etc. for adding more varieties of restrictions. It has to be noted that from Hibernate 5 onwards, the functions returning an object of type Criteria are deprecated. Hibernate 5 provides interfaces like CriteriaBuilder and CriteriaQuery to serve the purpose:

javax.persistence.criteria.CriteriaBuilder
javax.persistence.criteria.CriteriaQuery

// Create CriteriaBuilder
CriteriaBuilder builder = session.getCriteriaBuilder();

// Create CriteriaQuery
CriteriaQuery<YourClass> criteria = builder.createQuery(YourClass.class);

For introducing restrictions in CriteriaQuery, we can use the CriteriaQuery.where method which is analogous to using the WHERE clause in a JPQL query.

20. What is HQL?


Hibernate Query Language (HQL) is used as an extension of SQL. It is very simple,
efficient, and very flexible for performing complex operations on relational databases
without writing complicated queries. HQL is the object-oriented representation of
query language, i.e instead of using table name, we make use of the class name
which makes this language independent of any database.
This makes use of the Query interface provided by Hibernate. The Query object is
obtained by calling the createQuery() method of the hibernate Session interface.
Following are the most commonly used methods of query interface:


public int executeUpdate() : This method is used to run the update/delete query.
public List list(): This method returns the result as a list.
public Query setFirstResult(int rowNumber): This method accepts the row
number as the parameter using which the record of that row number would be
retrieved.
public Query setMaxResult(int rowsCount): This method returns a maximum up
to the specified rowCount while retrieving from the database.
public Query setParameter(int position, Object value): This method sets the
value to the attribute/column at a particular position. This method follows the
JDBC style of the query parameter.
public Query setParameter(String name, Object value): This method sets the
value to a named query parameter.
Example: To get a list of all records from InterviewBitEmployee Table:

Query query=session.createQuery("from InterviewBitEmployee");
List<InterviewBitEmployee> list=query.list();
System.out.println(list.get(0));

21. Can you tell something about one to many associations and
how can we use them in Hibernate?
The one-to-many association is the most commonly used which indicates that one
object is linked/associated with multiple objects.
For example, one person can own multiple cars.


Hibernate One To Many Mapping

In Hibernate, we can achieve this by using the @OneToMany JPA annotation in the model classes. Consider the above example of a person having multiple cars as shown below:

@Entity
@Table(name="Person")
public class Person {

   //...

   @OneToMany(mappedBy="owner")
   private Set<Car> cars;

   // getters and setters
}

In the Person class, we have defined the cars property to have a @OneToMany association. The Car class has an owner property that is referenced by the mappedBy attribute in the Person class. The Car class is as shown below:


@Entity
@Table(name="Car")
public class Car {

   // Other Properties

   @ManyToOne
   @JoinColumn(name="person_id", nullable=false)
   private Person owner;

   public Car() {}

   // getters and setters
}

The @ManyToOne annotation indicates that many instances of an entity are mapped to one instance of another entity – many cars of one person.

22. What are Many to Many associations?


Many-to-many association indicates that there are multiple relations between the
instances of two entities. We could take the example of multiple students taking part
in multiple courses and vice versa.
Since both the student and course entities refer to each other by means of foreign
keys, we represent this relationship technically by creating a separate table to hold
these foreign keys.


Many To Many Associations

Here, Student-Course Table is called the Join Table where the student_id and
course_id would form the composite primary key.
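
In Hibernate/JPA this can be expressed with @ManyToMany and a @JoinTable that describes the join table. A minimal sketch for the Student/Course example (class, table and column names are illustrative):

@Entity
public class Student {

    @Id
    @GeneratedValue
    private Long id;

    // The join table holds the two foreign keys (student_id, course_id)
    @ManyToMany
    @JoinTable(name = "student_course",
               joinColumns = @JoinColumn(name = "student_id"),
               inverseJoinColumns = @JoinColumn(name = "course_id"))
    private Set<Course> courses;
}

@Entity
public class Course {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToMany(mappedBy = "courses")   // inverse side of the association
    private Set<Student> students;
}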

23. What does session.lock() method in hibernate do?


session.lock() method is used to reattach a detached object to the session.
session.lock() method does not check for any data synchronization between the
database and the object in the persistence context and hence this reattachment
might lead to loss of data synchronization.

24. What is hibernate caching?


Hibernate caching is the strategy for improving the application performance by
pooling objects in the cache so that the queries are executed faster. Hibernate
caching is particularly useful when fetching the same data that is executed multiple
times. Rather than hitting the database, we can just access the data from the cache.
This results in reduced throughput time of the application.

Types of Hibernate Caching


First Level Cache:


This level is enabled by default.
The first level cache resides in the hibernate session object.
Since it belongs to the session object, the scope of the data stored here will not
be available to the entire application as an application can make use of multiple
session objects.

First Level Caching

Second Level Cache:


Second level cache resides in the SessionFactory object and due to this, the data
is accessible by the entire application.
This is not available by default. It has to be enabled explicitly.
EH (Easy Hibernate) Cache, Swarm Cache, OS Cache, JBoss Cache are some
example cache providers.


Second Level Caching
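
Enabling the second-level cache is done through configuration; a minimal sketch with EHCache as the provider might look like this (the exact property values depend on the Hibernate version and cache provider in use, so treat them as assumptions):

<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.region.factory_class">org.hibernate.cache.ehcache.EhCacheRegionFactory</property>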

25. When is merge() method of the hibernate session useful?


Merge() method can be used for updating existing values. The specialty of this
method is, once the existing values are updated, the method creates a copy from the
entity object and returns it. This result object goes into the persistent context and is
then tracked for any changes. The object that was initially used is not tracked.
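
A small sketch of the typical use, re-attaching changes made to a detached object (session1 and session2 are assumed to be two different open Sessions, and the id value is illustrative):

InterviewBitEmployee detached = session1.get(InterviewBitEmployee.class, 101L);
session1.close();                       // 'detached' is now outside any session

detached.setFullName("Updated name");   // modified while detached

Session session2 = sessionFactory.openSession();
InterviewBitEmployee managed =
        (InterviewBitEmployee) session2.merge(detached);   // a copy enters the persistence context
// 'managed' is tracked by session2; 'detached' itself remains untracked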

26. Collection mapping can be done using One-to-One and Many-to-One Associations. What do you think?
False, collection mapping is possible only with One-to-Many and Many-to-Many
associations.

27. Can you tell the difference between setMaxResults() and setFetchSize() of Query?
The setMaxResults() function works similarly to LIMIT in SQL. Here, we set the
maximum number of rows that we want to be returned. This method is implemented
by all database drivers.


setFetchSize() works for optimizing how Hibernate sends the result to the caller for
example: are the results buffered, are they sent in different size chunks, etc. This
method is not implemented by all the database drivers.
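
A short sketch showing both calls on a query (the values are illustrative):

Query query = session.createQuery("from InterviewBitEmployee");
query.setMaxResults(10);   // at most 10 rows are returned, like LIMIT 10
query.setFetchSize(50);    // JDBC hint: fetch rows from the driver in chunks of 50
List results = query.list();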

28. Does Hibernate support Native SQL Queries?


Yes, it does. Hibernate provides the createSQLQuery() method to let a developer call
the native SQL statement directly and returns a Query object.
Consider the example where you want to get employee data with the full name
“Hibernate”. We don’t want to use HQL-based features, instead, we want to write our
own SQL queries. In this case, the code would be:

Query query = session.createSQLQuery("select * from interviewbit_employee ibe where ibe.full_name = :fullName")
    .addEntity(InterviewBitEmployee.class)
    .setParameter("fullName", "Hibernate"); //named parameters
List result = query.list();

Alternatively, native queries can also be supported when using NamedQueries.

Hibernate Interview Questions For Experienced


29. What happens when the no-args constructor is absent in the
Entity bean?
Hibernate framework internally uses Reflection API for creating entity bean instances
when get() or load() methods are called. The method Class.newInstance() is used
which requires a no-args constructor to be present. When we don't have this
constructor in the entity beans, then hibernate fails to instantiate the bean and
hence it throws HibernateException.

30. Can we declare the Entity class final?


No, we should not define the entity class final because hibernate uses proxy classes
and objects for lazy loading of data and hits the database only when it is absolutely
needed. This is achieved by extending the entity bean. If the entity class (or bean) is
made final, then it can't be extended and hence lazy loading cannot be supported.

31. What are the states of a persistent entity?



A persistent entity can exist in any of the following states:


Transient:
This state is the initial state of any entity object.
Once the instance of the entity class is created, then the object is said to have
entered a transient state. These objects exist in heap memory.
In this state, the object is not linked to any session. Hence, it is not related to any
database due to which any changes in the data object don't affect the data in
the database.

InterviewBitEmployee employee=new InterviewBitEmployee(); //The object is in the transient state
employee.setId(101);
employee.setFullName("Hibernate");
employee.setEmail("[email protected]");

Persistent:
This state is entered whenever the object is linked or associated with the
session.
An object is said to be in a persistence state whenever we save or persist an
object in the database. Each object corresponds to the row in the database
table. Any modifications to the data in this state cause changes in the record in
the database.
Following methods can be used upon the persistence object:

session.save(record);
session.persist(record);
session.update(record);
session.saveOrUpdate(record);
session.lock(record);
session.merge(record);

Detached:


The object enters this state whenever the session is closed or the cache is
cleared.
Due to the object being no longer part of the session, any changes in the object
will not reflect in the corresponding row of the database. However, it would still
have its representation in the database.
In case the developer wants to persist changes of this object, it has to be
reattached to the hibernate session.
In order to achieve the reattachment, we can use the methods load(), merge(),
refresh(), update(), or save() methods on a new session by using the reference of
the detached object.
The object enters this state whenever any of the following methods are called:

session.close();
session.clear();
session.detach(record);
session.evict(record);

Persistent Entity

32. Explain Query Cache


Hibernate framework provides an optional feature called cache region for the
queries’ resultset. Additional configurations have to be done in code in order to
enable this. The query cache is useful for those queries which are most frequently
called with the same parameters. This increases the speed of the data retrieval and
greatly improves performance for commonly repetitive queries.
This does not cache the state of actual entities in the result set but it only stores the
identifier values and results of the value type. Hence, query cache should be always
used in association with second-level cache.
Configuration:
In the hibernate configuration XML file, set the use_query_cache property to true as
shown below:

<property name="hibernate.cache.use_query_cache">true</property>

In the code, we need to do the below changes for the query object:
Query query = session.createQuery("from InterviewBitEmployee");
query.setCacheable(true);
query.setCacheRegion("IB_EMP");

33. Can you tell something about the N+1 SELECT problem in
Hibernate?
The N+1 SELECT problem is a result of using lazy loading and an on-demand fetching strategy. Let's take an example. Suppose you have a list of N items and each item from the list has a dependency on a collection of another object, say bid. In order to find the highest bid for each item while using the lazy loading strategy, hibernate has to first fire 1 query to load all items and then subsequently fire N queries to load the bids of each item. Hence, hibernate actually ends up executing N+1 queries.

34. How to solve N+1 SELECT problem in Hibernate?


Some of the strategies followed for solving the N+1 SELECT problem are:


Pre-fetch the records in batches, which helps us to reduce the problem of N+1 to (N/K) + 1, where K refers to the size of the batch (a batch-fetching sketch is shown below).
Use the subselect fetching strategy.
As a last resort, try to avoid or disable lazy loading altogether.
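
As one example of the batching approach, Hibernate's @BatchSize annotation can be placed on the lazy collection so the bids of several items are loaded in a single query. A minimal sketch (class names are illustrative, not from the original text):

@Entity
public class Item {

    @Id
    @GeneratedValue
    private Long id;

    // Instead of one SELECT per item, bids are loaded for up to 10 items at a time,
    // reducing N+1 queries to roughly (N/10) + 1
    @OneToMany(mappedBy = "item")
    @org.hibernate.annotations.BatchSize(size = 10)
    private List<Bid> bids;
}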

35. What are the concurrency strategies available in hibernate?


Concurrency strategies are the mediators responsible for storing items in and retrieving items from the cache. While enabling the second-level cache, it is the responsibility of the developer to decide which strategy is to be used for each persistent class and collection.
Following are the concurrency strategies that are used:
Transactional: This is used in cases of updating data that most likely causes
stale data and this prevention is most critical to the application.
Read-Only: This is used when we don't want the data to be modified and can be
used for reference data only.
Read-Write: Here, data is mostly read and is used when the prevention of stale
data is of critical importance.
Non-strict-Read-Write: This strategy does not guarantee strict consistency between the database and the cache. It can be used when the data can be modified and stale data is not of critical concern (an example of declaring a strategy on an entity is sketched below).
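
A minimal sketch of declaring a strategy per entity, using Hibernate's @Cache annotation (the entity body is elided):

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)   // chosen concurrency strategy
public class InterviewBitEmployee {
    // ...
}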

36. What is Single Table Strategy?


Single Table Strategy is a hibernate’s strategy for performing inheritance mapping.
This strategy is considered to be the best among all the other existing ones. Here, the
inheritance data hierarchy is stored in the single table by making use of a
discriminator column which determines to what class the record belongs.
For the example defined in the Hibernate Inheritance Mapping question above, if we
follow this single table strategy, then all the permanent and contract employees’
details are stored in only one table called InterviewBitEmployee in the database and
the employees would be differentiated by making use of discriminator column
named employee_type.


Hibernate provides the @Inheritance annotation, which takes strategy as the parameter. This is used for defining which strategy we would be using. Giving it the value InheritanceType.SINGLE_TABLE signifies that we are using a single table strategy for mapping.
@DiscriminatorColumn is used for specifying what is the discriminator column
of the table in the database corresponding to the entity.
@DiscriminatorValue is used for specifying what value differentiates the records
of two types.
The code snippet would be like this:
InterviewBitEmployee class:

@Entity
@Table(name = "InterviewBitEmployee")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "employee_type")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitEmployee {
@Id
@Column(name = "employee_id")
private String employeeId;
private String fullName;
private String email;
}

InterviewBitContractEmployee class:

@Entity
@DiscriminatorValue("contract")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitContractEmployee extends InterviewBitEmployee {
private LocalDate contractStartDate;
private LocalDate contractEndDate;
private String agencyName;
}

InterviewBitPermanentEmployee class:


@Entity
@DiscriminatorValue("permanent")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitPermanentEmployee extends InterviewBitEmployee {
private LocalDate workStartDate;
private int numberOfLeaves;
}

37. Can you tell something about Table Per Class Strategy.
Table Per Class Strategy is another type of inheritance mapping strategy where each
class in the hierarchy has a corresponding mapping database table. For example, the
InterviewBitContractEmployee class details are stored in the
interviewbit_contract_employee table and InterviewBitPermanentEmployee class
details are stored in interviewbit_permanent_employee tables respectively. As the
data is stored in different tables, there will be no need for a discriminator column as
done in a single table strategy.
Hibernate provides the @Inheritance annotation, which takes strategy as the parameter. This is used for defining which strategy we would be using. Giving it the value InheritanceType.TABLE_PER_CLASS signifies that we are using a table per class strategy for mapping.
The code snippet will be as shown below:
InterviewBitEmployee class:

@Entity(name = "interviewbit_employee")
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitEmployee {
@Id
@Column(name = "employee_id")
private String employeeId;
private String fullName;
private String email;
}

InterviewBitContractEmployee class:


@Entity(name = "interviewbit_contract_employee")
@Table(name = "interviewbit_contract_employee")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitContractEmployee extends InterviewBitEmployee {
private LocalDate contractStartDate;
private LocalDate contractEndDate;
private String agencyName;
}

InterviewBitPermanentEmployee class:

@Entity(name = "interviewbit_permanent_employee")
@Table(name = "interviewbit_permanent_employee")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitPermanentEmployee extends InterviewBitEmployee {
private LocalDate workStartDate;
private int numberOfLeaves;
}

Disadvantages:
This type of strategy offers less performance due to the need for additional joins
to get the data.
This strategy is not supported by all JPA providers.
Ordering is tricky in some cases since it is done based on a class and later by the
ordering criteria.

38. Can you tell something about Named SQL Query


A named SQL query is an expression represented in the form of a table. Here, SQL
expressions to select/retrieve rows and columns from one or more tables in one or
more databases can be specified. This is like using aliases to the queries.
In hibernate, we can make use of the @NamedQueries and @NamedQuery annotations.
The @NamedQueries annotation is used for defining multiple named queries.
The @NamedQuery annotation is used for defining a single named query.
Code Snippet: We can define Named Query as shown below


@NamedQueries(
{
@NamedQuery(
name = "findIBEmployeeByFullName",
query = "from InterviewBitEmployee e where e.fullName = :fullName"
)
}
)

:fullName refers to the parameter that is programmer defined and can be set using
the query.setParameter method while using the named query.
Usage:

TypedQuery query = session.getNamedQuery("findIBEmployeeByFullName");
query.setParameter("fullName","Hibernate");
List<InterviewBitEmployee> ibEmployees = query.getResultList();

The getNamedQuery method takes the name of the named query and returns the
query instance.

39. What are the benefits of NamedQuery?


In order to understand the benefits of NamedQuery, let's first understand the
disadvantage of HQL and SQL. The main disadvantage of having HQL and SQL
scattered across data access objects is that it makes the code unreadable. Hence, as
good practice, it is recommended to group all HQL and SQL codes in one place and
use only their reference in the actual data access code. In order to achieve this,
Hibernate gives us named queries.
A named query is a statically defined query with a predefined unchangeable query
string. They are validated when the session factory is created, thus making the
application fail fast in case of an error.

Conclusion



LEARNCODEWITH DURGESH

PREREQUISITE
• Java Core: basic concepts are very important.
• JDBC: basic JDBC API.
• Database: basics of database tables, keys and queries.
Writing code is the best way to learn perfectly.


HIBERNATE FRAMEWORK
• Hibernate is a Java framework that simplifies the development of Java application to interact
with the database.
• Hibernate is ORM (Object Relational Mapping) tool.
• Hibernate is open source and lightweight.
• Hibernate is a non-invasive framework, meaning it doesn't force programmers to extend/implement any class/interface.
• It is invented by Gavin King in 2001.
• Any type of application can build with Hibernate Framework.

TRADITIONAL WAY TO SAVE DATA (JDBC)
(Diagram: Objects -> JDBC -> TABLE)
We write code manually to store objects (data) to the database using JDBC.
WHERE HIBERNATE PLAYS ITS ROLE
(Diagram: Objects -> Hibernate (ORM) -> TABLE)
Now it is done automatically by Hibernate.
ORM (Object Relational Mapping)


COMMONLY USED HIBERNATE ANNOTATIONS
• @Entity – used to mark any class as an entity.
• @Table – used to change the table details.
• @Id – used to mark a column as the id (primary key).
• @GeneratedValue – hibernate will automatically generate values for that column using an internal sequence, therefore we don't have to set it manually.
• @Column – can be used to specify column mappings, for example to change the column name in the associated table in the database.
• @Transient – tells hibernate not to save this field.
• @Temporal – @Temporal over a date field tells hibernate the format in which the date needs to be saved.
• @Lob – tells hibernate that this is a large object, not a simple object.
• @OneToOne, @OneToMany, @ManyToOne, @JoinColumn etc.

FETCH DATA: get() vs load()
• The get() method of Hibernate Session returns null if the object is not found in the cache as well as in the database. The load() method throws ObjectNotFoundException if the object is not found in the cache or in the database, but never returns null.
• get() involves a database hit if the object doesn't exist in the Session cache and returns a fully initialized object, which may involve several database calls. load() can return a proxy in place and only initializes the object or hits the database if any method other than getId() is called on the persistent/entity object. This lazy initialization increases performance.
• Use get() if you are not sure that the object exists in the DB; use load() if you are sure that the object exists.

ONE TO MANY MAPPING

Question table:
question_id | question
12          | What is Java?
13          | What is python?
123         | How networking works?

Answer table (q_id is a foreign key referencing Question):
answer_id | answer      | q_id
87        | Java is ….. | 12
3         | Hibernate…  | 12
13        | Python is … | 13
42        | ML…..       | 13
35        | Django….    | 13
MANY TO MANY MAPPING

EMP table:
eid | ename
12  | Ram
13  | Shyam
123 | Sunder

PROJECT table:
pid | project_name
2   | Library Management
3   | Chatbot
13  | Ecom website
42  | School management
35  | Online booking

EMP_PROJECT join table:
eid | pid
12  | 2
13  | 2
13  | 3

FETCH TYPE
(Diagram: one Question entity linked to multiple Answers A1, A2, A3, A4)
@Entity
public class Question {

@Id
@Column(name = "question_id")
private int questionId;

private String question;

@OneToMany(mappedBy = "question")
private List<Answer> answers;
}

FETCH TYPE: LAZY vs EAGER
• LAZY: in lazy loading, associated data loads only when we explicitly call a getter or size method.
• EAGER: a design pattern in which data loading occurs on the spot.
(Diagram: Hibernate object states (Transient, Persistent, Detached, Removed) shown relative to the Session object and the database)
HQL (Hibernate Query Language)
How do we get data in hibernate? get() and load() work for simple lookups; to load complex data, we use HQL.
HQL vs SQL:
• HQL is database independent; SQL is database dependent.
• HQL is easy to learn for a programmer; SQL is easy to learn for a DBA.
• HQL: from Student (uses the entity name). SQL: Select * from Student (uses the table name).
CACHING IN HIBERNATE
Caching is a mechanism to enhance the performance of an application. A cache is used to reduce the number of database queries.
(Diagram: without caching, the Java application hits the database for every query; with caching, a cache sits between the Java application and the database.)

HIBERNATE CACHING
• First level cache: kept in the Session object; provided by default.
• Second level cache: kept in the SessionFactory; has to be enabled manually.
HIBERNATE WITH SPRING
(Diagram: TodoDao, with methods like save(todo) and getAll(), uses a HibernateTemplate backed by a SessionFactory; the SessionFactory is built by a LocalSessionFactoryBean, which in turn uses a DataSource such as DriverManagerDataSource.)
ChatGPT Annotations
Saturday, July 13, 2024 1:50 AM

Chapter 1: Spring Core Annotations


1. @Autowired
Definition: Used to automatically inject a bean into another bean.
Example:
@Autowired
private UserService userService;
Uses:
• Automatically resolves dependencies.
• Reduces boilerplate code.

2. @Component
Definition: Indicates a class as a Spring component.
Example:
@Component
public class UserService {
// ...
}
Uses:
• Enables component scanning.
• Facilitates dependency management by the Spring container.

3. @Configuration
Definition: Indicates a class that declares one or more @Bean
methods.
Example:
@Configuration
public class AppConfig {
// ...
}
Uses:
• Provides a Java-based configuration.
• Groups related bean definitions together.

4. @Bean
Definition: Indicates that a method produces a bean to be managed
by the Spring container.
Example:
@Bean
public UserService userService() {
return new UserService();
}
Uses:
• Allows programmatic bean registration.
• Facilitates dependency injection.


5. @Scope
Definition: Specifies the scope of a bean (e.g., singleton, prototype).
Example:
@Scope("prototype")
@Bean
public UserService userService() {
return new UserService();
}
Uses:
• Controls the lifecycle of a bean.
• Supports different visibility for beans.

6. @PostConstruct
Definition: Marks a method to be executed after dependency
injection.
Example:
@PostConstruct
public void init() {
// Initialization logic
}
Uses:
• Initializes resources after bean properties are set.
• Ensures proper setup before bean use.

7. @PreDestroy
Definition: Marks a method to be executed before a bean is
destroyed.
Example:
@PreDestroy
public void cleanup() {
// Cleanup logic
}
Uses:
• Releases resources before bean destruction.
• Prevents memory leaks.

8. @Qualifier
Definition: Used to resolve ambiguity when multiple beans of the
same type exist.
Example:
@Autowired
@Qualifier("userService")
private UserService userService;
Uses:
• Differentiates between multiple beans.
• Ensures the correct bean is injected.

9. @Value
Definition: Injects values from properties files or expressions.
Example:
@Value("${app.name}")
private String appName;
Uses:
• Externalizes configuration.
• Supports dynamic value injection.

10. @Profile
Definition: Indicates that a bean is available only in certain profiles.
Example:
@Profile("dev")
@Bean
public DataSource devDataSource() {
return new HikariDataSource();
}
Uses:
• Supports environment-specific configurations.
• Enables profile-based bean management.

11. @Conditional
Definition: Specifies a condition under which a bean is registered.
Example:
@ConditionalOnProperty(name = "feature.enabled", havingValue =
"true")
@Bean
public MyFeature myFeature() {
return new MyFeature();
}
Uses:
• Provides flexible configuration options.
• Controls bean creation based on conditions.

12. @Import
Definition: Imports additional configuration classes.
Example:
@Import({DataConfig.class, SecurityConfig.class})
public class AppConfig {
// ...
}
Uses:
• Supports modular configuration.
• Facilitates class importing.

13. @ComponentScan
Definition: Specifies packages to scan for Spring components.
Example:

All Interview inone Page 3


Example:
@ComponentScan(basePackages = "com.example")
public class AppConfig {
// ...
}
Uses:
• Configures component discovery.
• Enhances package management.

14. @EventListener
Definition: Marks a method as an event listener.
Example:
@EventListener
public void handleEvent(MyEvent event) {
// Event handling logic
}
Uses:
• Supports event-driven architecture.
• Facilitates decoupling of components.

15. @Transactional
Definition: Declares a method or class as transactional.
Example:
@Transactional
public void save(User user) {
// ...
}
Uses:
• Manages database transactions.
• Ensures data integrity.

16. @EnableAspectJAutoProxy
Definition: Enables support for handling components marked with
@Aspect.
Example:
@EnableAspectJAutoProxy
@Configuration
public class AppConfig {
// ...
}
Uses:
• Activates AOP features in the application.
• Facilitates proxy-based AOP support.

17. @Aspect
Definition: Indicates that a class is an aspect in Spring AOP.
Example:
@Aspect
public class LoggingAspect {

// ...
}
Uses:
• Supports modularization of cross-cutting concerns.
• Enables declarative AOP.

18. @Pointcut
Definition: Declares a pointcut expression for AOP.
Example:
@Pointcut("execution(* com.example.service.*.*(..))")
public void serviceMethods() {
// Pointcut definition
}
Uses:
• Centralizes pointcut definitions for reuse.
• Enhances readability.

19. @Order
Definition: Specifies the order of execution for aspects.
Example:
@Aspect
@Order(1)
public class FirstAspect {
// ...
}
Uses:
• Controls execution order of multiple aspects.
• Supports complex AOP configurations.

20. @ConfigurationProperties
Definition: Binds properties to a configuration class.
Example:
@ConfigurationProperties(prefix = "app")
public class AppConfig {
private String name;
// ...
}
Uses:
• Supports type-safe configuration management.
• Simplifies property access.

Chapter 2: Spring JDBC/DAO Annotations


1. @Repository
Definition: Indicates a class as a DAO component.
Example:
@Repository
public class UserRepository {
// ...

All Interview inone Page 5


// ...
}
Uses:
• Marks a class as a repository.
• Facilitates exception translation for data access.

2. JdbcTemplate
Definition: JdbcTemplate is a Spring helper class (not an annotation) that simplifies JDBC operations; it is typically injected with @Autowired.
Example:
@Autowired
private JdbcTemplate jdbcTemplate;
Uses:
• Simplifies database operations.
• Reduces boilerplate code for JDBC.

3. @Transactional
Definition: Indicates that a method should run within a transaction.
Example:
@Transactional
public void saveUser(User user) {
// ...
}
Uses:
• Manages transaction boundaries.
• Ensures data consistency.

4. @Query
Definition: Specifies a query for a repository method.
Example:
@Query("SELECT u FROM User u WHERE u.id = ?1")
User findUserById(Long id);
Uses:
• Customizes SQL or JPQL queries in repositories.
• Supports complex queries.

5. @Modifying
Definition: Indicates a modifying query (insert, update, delete).
Example:
@Modifying
@Query("UPDATE User u SET u.name = ?1 WHERE u.id = ?2")
void updateUserName(String name, Long id);
Uses:
• Specifies data modification operations.
• Supports non-select queries.

6. @Transactional(readOnly = true)
Definition: Indicates a read-only transaction.
Example:
@Transactional(readOnly = true)
public List<User> findAll() {
// ...
}
Uses:
• Optimizes transaction management for read operations.
• Enhances performance for read-heavy applications.
7. @Entity
Definition: Indicates a class is a JPA entity.
Example:
@Entity
public class User {
// ...
}
Uses:
• Maps a class to a database table.
• Facilitates JPA entity management.

8. @Table
Definition: Specifies the table name for an entity.
Example:
@Entity
@Table(name = "users")
public class User {
// ...
}
Uses:
• Customizes the table mapping for entities.
• Supports database schema management.

9. @Id
Definition: Indicates the primary key of an entity.
Example:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
Uses:
• Marks the identifier property of an entity.
• Facilitates primary key generation.

10. @GeneratedValue
Definition: Specifies the strategy for primary key generation.
Example:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
Uses:
• Controls the primary key generation strategy.

• Supports various generation strategies (AUTO, SEQUENCE, etc.).

11. @Column
Definition: Specifies a column for the entity.
Example:
@Column(name = "user_name", nullable = false)
private String username;
Uses:
• Maps a field to a specific database column.
• Supports column constraints like nullability.

12. @ManyToOne
Definition: Defines a many-to-one relationship between entities.
Example:
@ManyToOne
@JoinColumn(name = "role_id")
private Role role;
Uses:
• Manages relationships in JPA entities.
• Facilitates foreign key mappings.

13. @OneToMany
Definition: Defines a one-to-many relationship between entities.
Example:
@OneToMany(mappedBy = "user")
private List<Order> orders;
Uses:
• Maps collections of related entities.
• Supports bidirectional relationships.

14. @JoinColumn
Definition: Specifies a column for joining two entities.
Example:
@ManyToOne
@JoinColumn(name = "user_id")
private User user;
Uses:
• Defines the join column for associations.
• Enhances relationship mapping.

15. @Transactional(propagation = Propagation.REQUIRES_NEW)


Definition: Defines the propagation behavior of a transaction.
Example:
@Transactional(propagation = Propagation.REQUIRES_NEW)
public void processOrder(Order order) {
// ...
}

Uses:
• Controls transaction boundaries and behaviors.
• Supports nested transactions.

16. @Embedded
Definition: Indicates that an entity contains an embedded object.
Example:
@Embedded
private Address address;
Uses:
• Allows for reusable, composite data types.
• Supports complex entity relationships.

17. @ManyToMany
Definition: Defines a many-to-many relationship between entities.
Example:
@ManyToMany
@JoinTable(name = "user_roles",
joinColumns = @JoinColumn(name = "user_id"),
inverseJoinColumns = @JoinColumn(name = "role_id"))
private Set<Role> roles;
Uses:
• Manages complex relationships in data models.
• Supports associative tables for many-to-many relationships.

18. @Fetch
Definition: Specifies the fetching strategy for collections.
Example:
@OneToMany(fetch = FetchType.LAZY)
private List<Order> orders;
Uses:
• Controls data fetching strategies (EAGER or LAZY).
• Optimizes performance based on data access patterns.

19. @Version
Definition: Used for optimistic locking.
Example:
@Version
private Long version;
Uses:
• Prevents concurrent modification issues.
• Supports optimistic concurrency control.

20. @Lock
Definition: Specifies the locking strategy for an entity.
Example:

java

All Interview inone Page 9


java
Copy code
@Lock(LockModeType.PESSIMISTIC_WRITE)
public User findById(Long id) {
// ...
}
Uses:
• Controls transaction isolation levels.
• Prevents conflicts in concurrent data access.

Chapter 3: Spring AOP Annotations


1. @Aspect
Definition: Indicates that a class is an aspect.
Example:
@Aspect
public class LoggingAspect {
// ...
}
Uses:
• Enables separation of cross-cutting concerns.
• Supports declarative AOP.

2. @Before
Definition: Indicates that a method runs before a join point.
Example:
@Before("execution(* com.example.service.*.*(..))")
public void logBefore(JoinPoint joinPoint) {
// Logging logic
}
Uses:
• Provides pre-processing logic for methods.
• Supports cross-cutting concerns such as logging.

3. @After
Definition: Indicates that a method runs after a join point.
Example:
@After("execution(* com.example.service.*.*(..))")
public void logAfter(JoinPoint joinPoint) {
// Logging logic
}
Uses:
• Provides post-processing logic for methods.
• Supports cleanup operations.

4. @Around
Definition: Indicates that a method runs both before and after a join
point.
Example:
@Around("execution(* com.example.service.*.*(..))")
public Object logAround(ProceedingJoinPoint joinPoint) throws
Throwable {
// Before logic
Object result = joinPoint.proceed();
// After logic
return result;
}
Uses:
• Combines pre- and post-processing.
• Allows for method execution control.

5. @AfterReturning
Definition: Indicates that a method runs after a join point returns
successfully.
Example:
@AfterReturning(pointcut = "execution(* com.example.service.*.
*(..))", returning = "result")
public void logAfterReturning(JoinPoint joinPoint, Object result) {
// Logging logic
}
Uses:
• Captures successful method return values.
• Supports logging or further processing.

6. @AfterThrowing
Definition: Indicates that a method runs after a join point throws an
exception.
Example:
@AfterThrowing(pointcut = "execution(* com.example.service.*.
*(..))", throwing = "error")
public void logAfterThrowing(JoinPoint joinPoint, Throwable error) {
// Logging logic
}
Uses:
• Captures exceptions thrown by methods.
• Supports error handling and logging.

7. @Pointcut
Definition: Declares a pointcut expression for AOP.
Example:
@Pointcut("execution(* com.example.service.*.*(..))")
public void serviceMethods() {
// Pointcut definition
}
Uses:
• Centralizes pointcut definitions for reuse.
• Enhances readability of aspect definitions.


8. @DeclareParents
Definition: Used to introduce a mixin interface to a target class.
Example:
@DeclareParents(value = "com.example.service.*", defaultImpl =
DefaultMixin.class)
public static MixinInterface mixin;
Uses:
• Supports mixin functionality in AOP.
• Enables flexible design patterns.

9. @Order
Definition: Specifies the order of execution for aspects.
Example:
@Aspect
@Order(1)
public class FirstAspect {
// ...
}
Uses:
• Controls execution order of multiple aspects.
• Supports complex AOP configurations.

10. @EnableAspectJAutoProxy
Definition: Enables support for handling components marked with
@Aspect.
Example:
@EnableAspectJAutoProxy
@Configuration
public class AppConfig {
// ...
}
Uses:
• Activates AOP features in the application.
• Facilitates proxy-based AOP support.

11. @Around(value = "execution(* com.example.service.*.*(..))")


Definition: Defines the specific join point for the aspect.
Example:
@Around("execution(* com.example.service.*.*(..))")
public Object aroundAdvice(ProceedingJoinPoint joinPoint) throws
Throwable {
// Before logic
Object result = joinPoint.proceed();
// After logic
return result;
}
Uses:

• Targets specific method executions.
• Enhances the aspect's precision.

12. @AspectJ
Definition: Indicates an aspect defined using AspectJ style.
Example:
@Aspect
public class TransactionAspect {
// ...
}
Uses:
• Provides an alternative AOP style.
• Supports AspectJ-specific features.

13. @ContextConfiguration
Definition: Specifies the context configuration for testing.
Example:
@ContextConfiguration(classes = AppConfig.class)
public class MyTest {
// ...
}
Uses:
• Integrates AOP with test contexts.
• Supports configuration management in tests.

14. @AspectJProxy
Definition: Enables AspectJ-style proxying.
Example:
@EnableAspectJAutoProxy
public class MyConfig {
// ...
}
Uses:
• Activates AspectJ proxy support.
• Enhances AOP features.

15. @Transactional
Definition: Applies transaction management to an aspect method.
Example:
@Transactional
@Around("execution(* com.example.service.*.*(..))")
public Object transactionAdvice(ProceedingJoinPoint joinPoint) throws Throwable {
    // Transactional logic around the proceeding call
    return joinPoint.proceed();
}
Uses:
• Combines AOP with transaction management.
• Ensures data integrity.



16. @Advice
Definition: Specifies the type of advice.
Example:
@Advice
public void adviceMethod() {
// ...
}
Uses:
• Centralizes advice behavior.
• Supports clear aspect definitions.

17. @Target
Definition: Specifies the target type for the aspect.
Example:
@Target(ElementType.METHOD)
@Pointcut
public void methodPointcut() {
// ...
}
Uses:
• Controls where the aspect applies.
• Enhances aspect precision.

18. @Annotation
Definition: Targets methods annotated with a specific annotation.
Example:
@Pointcut("@annotation(com.example.annotations.Loggable)")
public void loggableMethods() {
// ...
}
Uses:
• Enhances aspect targeting.
• Supports annotation-driven behavior.

19. @Weave
Definition: Used in AspectJ for weaving aspects.
Example:
@Weave
public class MyWeavedClass {
// ...
}
Uses:
• Supports AspectJ weaving.
• Integrates aspects into compiled code.

20. @Configuration
Definition: Indicates a class that contains Spring configuration.
Example:

@Configuration
@EnableAspectJAutoProxy
public class AopConfig {
// ...
}
Uses:
• Centralizes AOP configuration.
• Combines AOP with Spring configuration.

Chapter 4: Spring ORM/Hibernate/JPA Annotations


1. @Entity
Definition: Marks a class as a JPA entity, representing a table in the
database.
Example:
@Entity
public class User {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String username;
// getters and setters
}
Uses:
• Defines the database schema through Java classes.
• Supports ORM (Object-Relational Mapping).

2. @Table
Definition: Specifies the table name in the database that the entity
maps to.
Example:
@Entity
@Table(name = "users")
public class User {
// ...
}
Uses:
• Allows customization of table names.
• Supports different naming conventions.

3. @Id
Definition: Indicates the primary key of the entity.
Example:
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
Uses:

• Identifies the unique identifier for the entity.
• Essential for entity management.

4. @GeneratedValue
Definition: Specifies the strategy for generating primary key values.
Example:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
Uses:
• Simplifies the management of primary keys.
• Supports different ID generation strategies (e.g., AUTO,
SEQUENCE).

5. @Column
Definition: Maps a field to a database column.
Example:
@Column(name = "username", nullable = false, unique = true)
private String username;
Uses:
• Customizes column definitions and constraints.
• Enhances database schema management.

6. @OneToMany
Definition: Defines a one-to-many relationship between two
entities.
Example:
@OneToMany(mappedBy = "user")
private List<Order> orders;
Uses:
• Represents complex relationships in the data model.
• Facilitates data retrieval across related entities.

7. @ManyToOne
Definition: Indicates a many-to-one relationship between two
entities.
Example:
@ManyToOne
@JoinColumn(name = "user_id")
private User user;
Uses:
• Supports relationships between entities.
• Enhances data integrity and organization.

8. @ManyToMany
Definition: Defines a many-to-many relationship between two
entities.
Example:

@ManyToMany
@JoinTable(
name = "user_roles",
joinColumns = @JoinColumn(name = "user_id"),
inverseJoinColumns = @JoinColumn(name = "role_id")
)
private Set<Role> roles;
Uses:
• Models complex relationships effectively.
• Supports efficient data retrieval.

9. @JoinColumn
Definition: Specifies the foreign key column in a relationship.
Example:
@ManyToOne
@JoinColumn(name = "user_id", nullable = false)
private User user;
Uses:
• Customizes foreign key mapping.
• Ensures database integrity.

10. @Embedded
Definition: Indicates that a field is an embedded object that is part
of the entity.
Example:
@Embedded
private Address address;
Uses:
• Supports complex data structures within entities.
• Facilitates better data organization.

11. @Embeddable
Definition: Marks a class as an embeddable object that can be
included in an entity.
Example:
@Embeddable
public class Address {
private String street;
private String city;
// getters and setters
}
Uses:
• Defines reusable components for entity classes.
• Promotes better encapsulation.

12. @Transient
Definition: Indicates that a field should not be persisted in the
database.

Example:
@Transient
private String temporaryField;
Uses:
• Excludes fields from persistence.
• Useful for non-persistent data.

13. @Version
Definition: Specifies the version of the entity for optimistic locking.
Example:
@Version
private Long version;
Uses:
• Prevents concurrent modifications.
• Ensures data integrity during updates.

14. @Fetch
Definition: Specifies how to fetch associated entities.
Example:
@OneToMany(fetch = FetchType.LAZY)
private List<Order> orders;
Uses:
• Controls the fetching strategy for associations.
• Optimizes performance.

15. @OrderBy
Definition: Specifies the ordering of a collection of associated
entities.
Example:
@OneToMany
@OrderBy("createdDate DESC")
private List<Order> orders;
Uses:
• Provides default sorting for collections.
• Enhances query performance.

16. @Query
Definition: Defines a custom query for a repository method.
Example:
@Query("SELECT u FROM User u WHERE u.username = ?1")
User findByUsername(String username);
Uses:
• Supports custom JPQL or SQL queries.
• Enhances query flexibility.

17. @NamedQuery
Definition: Defines a static query that can be referenced by name.
Example:

@NamedQuery(name = "User.findByUsername", query = "SELECT u FROM User u WHERE u.username = :username")
Uses:
• Promotes reuse of queries.
• Enhances organization of query definitions.

18. @EntityListeners
Definition: Specifies classes that contain callback methods for entity
lifecycle events.
Example:
@EntityListeners(AuditListener.class)
public class User {
// ...
}
Uses:
• Implements entity lifecycle callbacks.
• Supports auditing and logging.

19. @PrePersist
Definition: Indicates a method that should be called before the
entity is persisted.
Example:
@PrePersist
public void onPrePersist() {
this.createdDate = new Date();
}
Uses:
• Supports pre-insert operations.
• Enhances data integrity.

20. @PostLoad
Definition: Indicates a method that should be called after the entity
is loaded from the database.
Example:
@PostLoad
public void onPostLoad() {
// Logic after loading
}
Uses:
• Implements actions post-load.
• Enhances data processing.

Summary of Annotations in Spring ORM/Hibernate/JPA


Annotation Definition/Use
@Entity Marks a class as a JPA entity.
@Table Specifies the table name for the entity.
@Id Indicates the primary key of the entity.
@GeneratedValue Specifies the strategy for generating primary keys.
@Column Maps a field to a database column.
@OneToMany Defines a one-to-many relationship.
@ManyToOne Indicates a many-to-one relationship.
@ManyToMany Defines a many-to-many relationship.
@JoinColumn Specifies the foreign key column in a relationship.
@Embedded Indicates an embedded object within an entity.
@Embeddable Marks a class as embeddable.
@Transient Excludes a field from persistence.
@Version Supports optimistic locking for entity versions.
@Fetch Specifies fetching strategy for associations.
@OrderBy Specifies ordering for collections.
@Query Defines a custom query for repository methods.
@NamedQuery Defines a static query referenced by name.
@EntityListeners Specifies listener classes for lifecycle events.
@PrePersist Called before an entity is persisted.
@PostLoad Called after an entity is loaded from the database.

Chapter 5: Spring MVC Annotations


1. @Controller
Definition: Indicates a class as a Spring MVC controller.
Example:
@Controller
public class UserController {
// ...
}
Uses:
• Marks a class as a web controller.
• Facilitates request handling.

2. @RequestMapping
Definition: Maps HTTP requests to handler methods.
Example:
@RequestMapping("/users")
public String getUsers() {
return "userList";
}
Uses:
• Configures URL mapping for requests.
• Supports multiple HTTP methods.

3. @GetMapping
Definition: A shortcut for @RequestMapping with the GET method.
Example:
@GetMapping("/users")
public List<User> getAllUsers() {
return userService.findAll();
}
Uses:
• Simplifies GET request mappings.
• Enhances readability.

4. @PostMapping
Definition: A shortcut for @RequestMapping with the POST method.
Example:
@PostMapping("/users")
public void addUser(@RequestBody User user) {
userService.save(user);
}
Uses:
• Simplifies POST request mappings.
• Supports resource creation.

5. @PutMapping
Definition: A shortcut for @RequestMapping with the PUT method.
Example:
@PutMapping("/users/{id}")
public void updateUser(@PathVariable Long id, @RequestBody User
user) {
userService.update(id, user);
}
Uses:
• Simplifies PUT request mappings.
• Supports resource updates.

6. @DeleteMapping
Definition: A shortcut for @RequestMapping with the DELETE
method.
Example:
@DeleteMapping("/users/{id}")
public void deleteUser(@PathVariable Long id) {
userService.delete(id);
}
Uses:
• Simplifies DELETE request mappings.
• Supports resource deletion.

7. @PathVariable
Definition: Binds a URI template variable to a method parameter.
Example:
@GetMapping("/users/{id}")
public User getUser(@PathVariable Long id) {
return userService.findById(id);
}
Uses:
• Extracts variables from the URI.
• Supports dynamic URL handling.

8. @RequestParam
Definition: Binds a request parameter to a method parameter.
Example:
@GetMapping("/users")
public List<User> getUsers(@RequestParam(required = false) String
name) {
return userService.findByName(name);
}
Uses:
• Handles query parameters in requests.
• Supports optional parameters.

9. @RequestBody
Definition: Binds the request body to a method parameter.
Example:
@PostMapping("/users")
public void addUser(@RequestBody User user) {
userService.save(user);
}
Uses:
• Supports deserialization of request bodies.
• Facilitates JSON/XML request handling.

10. @ResponseBody
Definition: Indicates that a method return value should be bound to
the web response body.
Example:
@GetMapping("/users/{id}")
@ResponseBody
public User getUser(@PathVariable Long id) {
return userService.findById(id);
}
Uses:
• Supports direct response body writing.
• Simplifies AJAX response handling.

11. @RestController
Definition: Combines @Controller and @ResponseBody.
Example:
@RestController
@RequestMapping("/api")
public class UserRestController {
// ...
}
Uses:
• Simplifies RESTful API development.
• Automatically serializes responses to JSON/XML.

12. @ExceptionHandler
Definition: Handles exceptions thrown by controller methods.
Example:
@ExceptionHandler(UserNotFoundException.class)
public ResponseEntity<String> handleUserNotFound(UserNotFoundException ex) {
    return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
}
Uses:
• Centralizes exception handling logic.
• Supports custom error responses.

13. @ModelAttribute
Definition: Binds a method parameter or method return value to a
model attribute.
Example:
@ModelAttribute("user")
public User populateUser() {
return new User();
}
Uses:
• Prepares model attributes for views.
• Supports data binding.

14. @SessionAttributes
Definition: Indicates which model attributes should be stored in the
session.
Example:
@SessionAttributes("user")
public class UserController {
// ...
}
Uses:
• Manages session data in controllers.
• Supports multi-step forms.

15. @RequestMapping(produces = "application/json")


Definition: Specifies the content type of the response.
Example:
@GetMapping(value = "/users", produces = "application/json")
public List<User> getUsers() {
return userService.findAll();
}
Uses:
• Controls response content negotiation.
• Supports different media types.

16. @RedirectAttributes
Definition: Allows attributes to be stored in the session for
redirection.
Example:
@PostMapping("/users")
public String addUser(@ModelAttribute User user,
RedirectAttributes redirectAttributes) {
userService.save(user);
redirectAttributes.addFlashAttribute("message", "User added!");
return "redirect:/users";
}
Uses:
• Supports flash attributes for redirection.
• Enhances user feedback after actions.

17. @ResponseStatus
Definition: Specifies the HTTP status code for a method.
Example:
@ResponseStatus(HttpStatus.CREATED)
@PostMapping("/users")
public void addUser(@RequestBody User user) {
userService.save(user);
}
Uses:
• Customizes response status codes.
• Enhances RESTful service design.

18. @InitBinder
Definition: Customizes the data binding process.
Example:
@InitBinder
public void initBinder(WebDataBinder binder) {
binder.registerCustomEditor(User.class, new UserEditor());
}
Uses:
• Configures data binding settings.
• Supports custom validation logic.

19. @CrossOrigin
Definition: Enables Cross-Origin Resource Sharing (CORS) on a
method.
Example:

@CrossOrigin(origins = "http://localhost:3000")
@GetMapping("/users")
public List<User> getUsers() {
return userService.findAll();
}
Uses:
• Supports cross-origin requests.
• Enhances API accessibility.

20. @RequestMapping(consumes = "application/json")


Definition: Specifies the content type of the request.
Example:
@PostMapping(value = "/users", consumes = "application/json")
public void addUser(@RequestBody User user) {
userService.save(user);
}
Uses:
• Controls request content negotiation.
• Supports different media types for input.

Chapter 5: Spring Boot Annotations


1. @SpringBootApplication
Definition: A convenience annotation that combines
@Configuration, @EnableAutoConfiguration, and
@ComponentScan.
Example:
@SpringBootApplication
public class Application {
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
Uses:
• Simplifies Spring Boot application setup.
• Facilitates auto-configuration.

2. @EnableAutoConfiguration
Definition: Tells Spring Boot to automatically configure the
application context.
Example:
@EnableAutoConfiguration
public class Application {
// ...
}
Uses:
• Simplifies configuration for different setups.
• Automatically configures beans based on dependencies.



3. @ComponentScan
Definition: Specifies the packages to scan for Spring components.
Example:
@ComponentScan("com.example")
public class Application {
// ...
}
Uses:
• Controls component discovery.
• Supports modular application design.

4. @Configuration
Definition: Indicates that a class declares one or more @Bean
methods.
Example:
@Configuration
public class AppConfig {
// ...
}
Uses:
• Centralizes configuration.
• Supports explicit bean definitions.

5. @Bean
Definition: Indicates that a method produces a bean to be managed
by the Spring container.
Example:
@Bean
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Explicitly defines beans in a configuration class.
• Supports customization of bean instantiation.

6. @Value
Definition: Injects values from property files.
Example:
@Value("${app.name}")
private String appName;
Uses:
• Supports external configuration.
• Enhances flexibility with environment variables.

7. @Autowired
Definition: Marks a constructor, field, or method for dependency
injection.
Example:

@Autowired
private UserService userService;
Uses:
• Facilitates automatic dependency injection.
• Simplifies bean wiring.

8. @Qualifier
Definition: Specifies which bean to inject when multiple candidates
are available.
Example:
@Autowired
@Qualifier("userServiceImpl")
private UserService userService;
Uses:
• Resolves ambiguity in dependency injection.
• Supports multiple bean implementations.

9. @PostConstruct
Definition: Indicates a method to be executed after dependency
injection is complete.
Example:
@PostConstruct
public void init() {
// Initialization logic
}
Uses:
• Supports initialization after bean creation.
• Ensures dependencies are fully injected.

10. @PreDestroy
Definition: Indicates a method to be executed just before bean
destruction.
Example:
@PreDestroy
public void cleanup() {
// Cleanup logic
}
Uses:
• Supports resource cleanup before bean destruction.
• Ensures proper shutdown of resources.

11. @Conditional
Definition: Specifies a condition for bean registration.
Example:
@Bean
@ConditionalOnProperty(name = "feature.enabled", havingValue =
"true")
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Supports conditional bean creation.
• Enhances configurability based on environment.

12. @ConfigurationProperties
Definition: Binds external configuration to a Java object.
Example:
@ConfigurationProperties(prefix = "app")
public class AppProperties {
private String name;
// getters and setters
}
Uses:
• Centralizes configuration management.
• Supports strong typing for configuration properties.

13. @EnableConfigurationProperties
Definition: Enables support for @ConfigurationProperties annotated
classes.
Example:
@EnableConfigurationProperties(AppProperties.class)
public class Application {
// ...
}
Uses:
• Supports type-safe configuration management.
• Integrates property binding with Spring context.

14. @RestControllerAdvice
Definition: Combines @ControllerAdvice and @ResponseBody for
global exception handling.
Example:
@RestControllerAdvice
public class GlobalExceptionHandler {
// ...
}
Uses:
• Centralizes exception handling for REST controllers.
• Simplifies response handling for errors.

15. @EnableScheduling
Definition: Enables Spring’s scheduled task execution capability.
Example:
@EnableScheduling
public class SchedulerConfig {
// ...
}
Uses:
• Supports scheduled tasks in Spring applications.
• Facilitates periodic job execution.

16. @EnableAsync
Definition: Enables Spring’s asynchronous method execution
capability.
Example:
@EnableAsync
public class AsyncConfig {
// ...
}
Uses:
• Supports asynchronous execution in Spring.
• Enhances application responsiveness.

17. @SpringBootTest
Definition: Indicates that a class is a Spring Boot test.
Example:
@SpringBootTest
public class ApplicationTests {
// ...
}
Uses:
• Simplifies testing of Spring Boot applications.
• Provides a comprehensive application context for tests.

18. @MockBean
Definition: Creates a mock instance of a bean for testing.
Example:
@MockBean
private UserService userService;
Uses:
• Supports mocking dependencies in tests.
• Simplifies unit testing with Spring context.

19. @Profile
Definition: Indicates that a component is eligible for registration
when a specified profile is active.
Example:
@Profile("dev")
@Bean
public DataSource dataSource() {
return new H2DataSource();
}
Uses:
• Supports different configurations for different environments.
• Enhances flexibility and modularity.


20. @Scheduled
Definition: Indicates that a method should be scheduled to run at
fixed intervals.
Example:
@Scheduled(fixedRate = 5000)
public void performTask() {
// Task logic
}
Uses:
• Facilitates scheduled task execution.
• Supports cron expressions and fixed delays.

Chapter 6: Spring Boot 3 Annotations


1. @SpringBootApplication
Definition: A convenience annotation that combines
@Configuration, @EnableAutoConfiguration, and
@ComponentScan.
Example:
@SpringBootApplication
public class MyApplication {
public static void main(String[] args) {
SpringApplication.run(MyApplication.class, args);
}
}
Uses:
• Simplifies Spring Boot application setup.
• Facilitates auto-configuration.

2. @EnableAutoConfiguration
Definition: Instructs Spring Boot to automatically configure the
application based on the dependencies present in the classpath.
Example:
@EnableAutoConfiguration
public class MyApplication {
// ...
}
Uses:
• Automatically configures common application components.
• Reduces the need for explicit configuration.

3. @ConfigurationProperties
Definition: Binds external configuration properties to a Java object.
Example:
@ConfigurationProperties(prefix = "app")
public class AppProperties {
private String name;
private String version;
// getters and setters
}
Uses:
• Centralizes configuration management.
• Supports strong typing for properties.

4. @EnableConfigurationProperties
Definition: Enables support for @ConfigurationProperties annotated
classes.
Example:
@EnableConfigurationProperties(AppProperties.class)
public class MyApplication {
// ...
}
Uses:
• Supports type-safe configuration management.
• Integrates property binding with Spring context.

5. @SpringBootTest
Definition: Indicates that a class is a Spring Boot test, providing a
comprehensive application context for testing.
Example:
@SpringBootTest
public class MyApplicationTests {
// ...
}
Uses:
• Simplifies integration testing for Spring Boot applications.
• Provides a full application context for tests.

6. @MockBean
Definition: Creates a mock instance of a bean for testing.
Example:
@MockBean
private UserService userService;
Uses:
• Supports mocking dependencies in tests.
• Simplifies unit testing with Spring context.

7. @Profile
Definition: Indicates that a component is eligible for registration
when a specified profile is active.
Example:
@Profile("dev")
@Bean
public DataSource dataSource() {
return new H2DataSource();
}
Uses:
• Supports different configurations for different environments.
• Enhances flexibility and modularity.

8. @Scheduled
Definition: Indicates that a method should be scheduled to run at
fixed intervals.
Example:
@Scheduled(fixedRate = 5000)
public void performTask() {
// Task logic
}
Uses:
• Facilitates scheduled task execution.
• Supports cron expressions and fixed delays.

9. @Async
Definition: Indicates that a method should run asynchronously.
Example:
@Async
public CompletableFuture<String> asyncMethod() {
return CompletableFuture.completedFuture("Hello");
}
Uses:
• Enhances application responsiveness.
• Supports non-blocking operations.

10. @RestControllerAdvice
Definition: Combines @ControllerAdvice and @ResponseBody for
global exception handling in REST APIs.
Example:
@RestControllerAdvice
public class GlobalExceptionHandler {
@ExceptionHandler(UserNotFoundException.class)
public ResponseEntity<String> handleUserNotFound(UserNotFoundException ex) {
    return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
}
}
Uses:
• Centralizes exception handling for REST controllers.
• Simplifies response handling for errors.

11. @EnableScheduling
Definition: Enables Spring's scheduled task execution capability.
Example:

@EnableScheduling
public class SchedulerConfig {
// ...
}
Uses:
• Supports scheduled tasks in Spring applications.
• Facilitates periodic job execution.

12. @Configuration
Definition: Indicates that a class declares one or more @Bean
methods.
Example:
@Configuration
public class AppConfig {
// ...
}
Uses:
• Centralizes configuration.
• Supports explicit bean definitions.

13. @Value
Definition: Injects values from property files into fields.
Example:
@Value("${app.name}")
private String appName;
Uses:
• Supports external configuration.
• Enhances flexibility with environment variables.

14. @CommandLineRunner
Definition: Indicates a method to be executed after the Spring Boot
application has started.
Example:
@Bean
public CommandLineRunner run() {
return args -> {
// Logic to run at startup
};
}
Uses:
• Supports executing initialization logic on startup.
• Facilitates running application-specific code.

15. @Conditional
Definition: Specifies a condition for bean registration.
Example:
@Bean
@ConditionalOnProperty(name = "feature.enabled", havingValue = "true")
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Supports conditional bean creation.
• Enhances configurability based on environment.

16. @Bean
Definition: Indicates that a method produces a bean to be managed
by the Spring container.
Example:
@Bean
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Explicitly defines beans in a configuration class.
• Supports customization of bean instantiation.

17. @Import
Definition: Allows importing additional configuration classes.
Example:
@Import({DatabaseConfig.class, SecurityConfig.class})
public class AppConfig {
// ...
}
Uses:
• Modularizes configuration.
• Supports importing configuration from multiple classes.

18. @EnableConfigurationPropertiesScan
Definition: Scans for @ConfigurationProperties beans in the
specified packages.
Example:
@EnableConfigurationPropertiesScan("com.example.config")
public class MyApplication {
// ...
}
Uses:
• Automatically detects and registers configuration properties
classes.
• Enhances organization of configuration classes.

19. @EntityScan
Definition: Specifies the packages to scan for JPA entities.
Example:
@EntityScan("com.example.model")
public class MyApplication {
// ...
}
Uses:
• Centralizes entity configuration.
• Supports modular entity organization.

20. @EnableJpaRepositories
Definition: Enables JPA repositories in the specified packages.
Example:
@EnableJpaRepositories("com.example.repository")
public class MyApplication {
// ...
}
Uses:
• Automatically registers JPA repository beans.
• Simplifies repository management.

Chapter 7: Spring Security Annotations


1. @EnableWebSecurity
Definition: Enables Spring Security’s web security support.
Example:
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
// ...
}
Uses:
• Configures web security settings.
• Enables method-level security.

2. @Configuration
Definition: Indicates that a class contains Spring security
configuration.
Example:
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {
// ...
}
Uses:
• Centralizes security configuration.
• Combines multiple security settings.

3. @EnableGlobalMethodSecurity
Definition: Enables method-level security in Spring Security.
Example:
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig extends WebSecurityConfigurerAdapter {
// ...
}
Uses:
• Supports security annotations on methods.
• Enhances security granularity.

4. @PreAuthorize
Definition: Indicates that a method can be invoked only if the user
has the specified authority.
Example:
@PreAuthorize("hasRole('ADMIN')")
public void adminOnlyMethod() {
// ...
}
Uses:
• Enforces method-level security.
• Supports fine-grained access control.

5. @PostAuthorize
Definition: Indicates that a method can be invoked but checks the
security after the method execution.
Example:
@PostAuthorize("returnObject.username == authentication.name")
public User getUser(Long id) {
return userService.findById(id);
}
Uses:
• Checks authorization after method execution.
• Supports dynamic security checks.

6. @Secured
Definition: Specifies the roles allowed to invoke a method.
Example:
@Secured("ROLE_USER")
public void userMethod() {
// ...
}
Uses:
• Provides simple role-based access control.
• Enhances method-level security.

7. @RolesAllowed
Definition: Indicates the roles permitted to execute a method.
Example:
@RolesAllowed({"ROLE_USER", "ROLE_ADMIN"})
public void userOrAdminMethod() {
// ...
}
Uses:

• Supports JSR-250 security annotations.
• Facilitates role-based access control.

8. @AuthenticationPrincipal
Definition: Indicates a method parameter should be bound to the
current authenticated user's details.
Example:
@GetMapping("/user")
public String getUser(@AuthenticationPrincipal UserDetails
userDetails) {
return userDetails.getUsername();
}
Uses:
• Simplifies access to authenticated user information.
• Enhances method readability.

9. @EnableWebSecurity
Definition: Enables Spring Security’s web security support.
Example:
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
// ...
}
Uses:
• Configures web security settings.
• Enables method-level security.

10. @WithMockUser
Definition: Creates a mock user for testing secured methods.
Example:
@WithMockUser(username = "admin", roles = {"ADMIN"})
public void testAdminAccess() {
// Test logic
}
Uses:
• Simplifies testing with mock security contexts.
• Supports integration testing of security features.

11. @EnableGlobalMethodSecurity
Definition: Enables method-level security using annotations.
Example:
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig {
// ...
}
Uses:
• Allows use of @PreAuthorize and @PostAuthorize.
• Enhances method security configuration.


12. @Secured
Definition: Provides role-based security at the method level.
Example:
@Secured("ROLE_ADMIN")
public void adminMethod() {
// ...
}
Uses:
• Simple way to restrict access to methods based on roles.
• Enhances method-level security.

13. @RequestMapping
Definition: Maps web requests to specific handler methods.
Example:
@RequestMapping("/api/users")
public List<User> getUsers() {
return userService.findAll();
}
Uses:
• Supports RESTful API design.
• Centralizes request mapping.

14. @PathVariable
Definition: Binds a method parameter to a URI template variable.
Example:
@GetMapping("/users/{id}")
public User getUser(@PathVariable Long id) {
return userService.findById(id);
}
Uses:
• Simplifies parameter binding from URL.
• Enhances API design.

15. @RequestBody
Definition: Binds the HTTP request body to a method parameter.
Example:
@PostMapping("/users")
public User createUser(@RequestBody User user) {
return userService.save(user);
}
Uses:
• Simplifies data binding from request body.
• Enhances API handling.

16. @RequestParam
Definition: Binds a method parameter to a web request parameter.
Example:

@GetMapping("/users")
public List<User> getUsers(@RequestParam(required = false) String
role) {
return userService.findByRole(role);
}
Uses:
• Supports query parameter binding.
• Enhances method flexibility.

17. @ResponseStatus
Definition: Marks a method or exception with a specific HTTP status
code.
Example:
@ResponseStatus(HttpStatus.NOT_FOUND)
public void handleUserNotFound() {
// Logic
}
Uses:
• Customizes response status for controllers.
• Enhances error handling.

18. @ControllerAdvice
Definition: Global handler for controller exceptions and binding.
Example:
@ControllerAdvice
public class GlobalExceptionHandler {
@ExceptionHandler(UserNotFoundException.class)
public ResponseEntity<String> handleUserNotFound() {
return ResponseEntity.status(HttpStatus.NOT_FOUND).body("User not found");
}
}
Uses:
• Centralizes exception handling.
• Simplifies response management.

19. @CrossOrigin
Definition: Allows cross-origin requests on the specified controller or
method.
Example:
@CrossOrigin(origins = "http://localhost:3000")
@GetMapping("/api/data")
public Data getData() {
return dataService.getData();
}
Uses:
• Supports CORS in REST APIs.

• Enhances security and flexibility in frontend-backend
communication.

20. @EnableCaching
Definition: Enables caching support in a Spring application.
Example:
@EnableCaching
public class AppConfig {
// ...
}
Uses:
• Improves application performance with caching.
• Simplifies caching configuration.



Contents

JDBC Interview Questions for Freshers


1. What is JDBC in Java?
2. What is ResultSet?
3. What is JDBC driver?
4. What is DriverManager in JDBC?
5. Which JDBC driver is fastest and used more commonly?
6. Which data types are used for storing the image and file in the database table?
7. What is stored procedure? What are the parameter types in stored procedure?
8. What do you mean by DatabaseMetaData and why we are using it?
9. What are the differences between ODBC and JDBC?
10. What is Rowset?

JDBC Interview Questions for Experienced


11. What are the different types of JDBC drivers in Java? Explain each with an
example.
12. What are the differences between ResultSet and RowSet?
13. Explain the types of ResultSet.
14. Explain JDBC API components.
15. What are the types of JDBC statements?
16. Explain JDBC Batch processing.
17. What is the difference between Statement and PreparedStatement?
18. What is DataSource in JDBC? What are its benefits?


19. Explain the difference between execute(), executeQuery() and executeUpdate() methods in JDBC.
20. Explain the types of RowSet available in JDBC.
21. Explain the usage of the getter and setter methods in ResultSet.
22. What is meant by a locking system in JDBC?
23. What is database connection pooling? What are the advantages of connection
pool?
24. What is “Dirty read” in terms of database?
25. What causes “No suitable driver” error?
26. What is JDBC Connection? Explain steps to get JDBC database connection in a
simple Java program.
27. How to use JDBC API to call Stored procedures?
28. What are the types of JDBC architecture?
29. What is JDBC Transaction Management and why is it needed?
30. Explain the benefits of PreparedStatement over Statement.
31. Explain the methods available for Transaction Management in JDBC.
32. Give few examples of most common exceptions in JDBC.
33. What is Two phase commit in JDBC?
34. What are the isolation levels of connections in JDBC?
35. How to create a table dynamically from a JDBC application?
36. Conclusion



Let's get Started

Introduction to JDBC:

JDBC is an Application Programming Interface(API) for Java, which is helpful for interaction with the database and for executing SQL queries. JDBC is an abbreviation used for Java Database Connectivity. It uses JDBC drivers for connecting with the database. JDBC API is used to access tabular data stored in relational databases like Oracle, MySQL, MS Access, etc.

Components of JDBC:

There are four major components of JDBC using which it can interact with a
database. They are:


1. JDBC API: It provides different methods and interfaces for easier communication with the database. By using this, applications are able to execute SQL statements, retrieve results and make updates to the database. It has the following two packages, which consist of Java SE and Java EE platforms to exhibit Write Once Run Anywhere(WORA) capabilities.
1. java.sql.*;
2. javax.sql.*;
Also, it provides a standard for connecting a database to a client
application.
2. JDBC DriverManager: It is the class in JDBC API. It loads the JDBC driver in a
Java application for establishing a connection with the database. It is useful in
making a database-specific call for processing the user request.
3. JDBC Test suite: It is used to test the operations like insertion, deletion,
updation etc., being performed by JDBC Drivers.
4. JDBC-ODBC bridge drivers: It will connect database drivers to the database.
JDBC-ODBC bridge interprets JDBC method call to the ODBC function call. It will
use sun.jdbc.odbc package, which consists of the native library to access
characteristics of ODBC.


Scope of JDBC:

Earlier, ODBC API was used as the database API to connect with the database and
execute the queries. But, ODBC API uses C language for ODBC drivers(i.e. platform-
dependent and unsecured). Hence, Java has defined its own JDBC API that uses JDBC
drivers, which offers a natural Java interface for communicating with the database
through SQL. JDBC is required to provide a “pure Java” solution for the development
of an application using Java programming.

JDBC Interview Questions for Freshers


1. What is JDBC in Java?
JDBC(Java Database Connectivity) is a Java API, which is helpful in interaction with
the database to retrieve, manipulate and process the data using SQL. It will make use
of JDBC drivers for connecting with the database. By using JDBC, we can access
tabular data stored in various types of relational databases such as Oracle, MySQL,
MS Access, etc.

2. What is ResultSet?


The java.sql.ResultSet interface represents the database result set, which is obtained after the execution of an SQL query using Statement objects.
An object of ResultSet maintains a cursor pointing to the current row of data in the result set. Initially, the cursor is located before the first row. The cursor is then moved to the next row by using the next() method. The next() method can be used to iterate through the result set with the help of a while loop. If there are no further rows, the next() method will return false.
Example for the creation of ResultSet is given below:
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery(sqlQuery);
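Continuing from the snippet above, a minimal sketch of iterating the rows with next() in a while loop (the column names are assumptions for illustration):
while (rs.next()) {                     // next() returns false once there are no further rows
    int id = rs.getInt("ID");           // read values from the current row
    String name = rs.getString("NAME");
    System.out.println(id + " " + name);
}
rs.close();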

3. What is JDBC driver?


JDBC driver is a software component having various classes and interfaces that enables the Java application to interact with a database.
To connect with individual databases, JDBC requires particular drivers for each
specific database. These drivers are provided by the database vendor in addition to
the database. For example:
MySQL Connector/J is the official JDBC driver for MySQL and we can locate the
mysql-connector-java-<version>-bin.jar file among the installed files. On
windows, this file can be obtained at

C:\Program Files (x86)\MySQL\MySQL Connector J\mysql-connector-java-5.1.30-bin.jar.

JDBC driver of Oracle 10G is ojdbc14.jar and it can be obtained in the installation
directory of an Oracle at …/Oracle/app/oracle/product/10.2.0/server/jdbc/lib .
JDBC driver provides the connection to the database. Also, it implements the
protocol for sending the query and result between client and database.


4. What is DriverManager in JDBC?


JDBC DriverManager is a static class in Java, through which we manage the set
of JDBC drivers that are available for an application to use.
Multiple JDBC drivers can be used concurrently by an application, if necessary.
By using a Uniform Resource Locator(URL), each application specifies a JDBC
driver.
When we load the JDBC Driver class into an application, it registers itself to the
DriverManager by using Class.forName() or DriverManager.registerDriver() .
To check this, you can have a look into the source code of JDBC Driver classes.
After this, when we call the DriverManager.getConnection() method by passing the
details regarding database configuration, DriverManager will make use of
registered drivers to obtain the connection and return it to the caller program.
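As a rough sketch of this flow using the MySQL driver, where the URL and credentials are placeholder assumptions:
// Loading the driver class registers it with DriverManager (optional with JDBC 4+ drivers)
Class.forName("com.mysql.cj.jdbc.Driver");

// DriverManager chooses a registered driver that understands this URL and returns a Connection
Connection con = DriverManager.getConnection(
        "jdbc:mysql://localhost:3306/testdb", "user", "password");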

5. Which JDBC driver is fastest and used more commonly?


The Native-Protocol pure Java driver (Type 4 driver, also called the Thin driver) is the fastest driver for localhost and remote connections because it directly interacts with the database by converting the JDBC calls into vendor-specific protocol calls.

6. Which data types are used for storing the image and file in
the database table?


BLOB data type is used to store the image in the database. We can also store
videos and audio by using the BLOB data type. It stores the binary type of data.
CLOB data type is used to store the file in the database. It stores the character
type of data.
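A minimal sketch of storing an image in a BLOB column using PreparedStatement; the table, column and file names here are assumptions for illustration:
String sql = "INSERT INTO photos (name, image) VALUES (?, ?)";
PreparedStatement ps = con.prepareStatement(sql);
ps.setString(1, "profile");
// Stream the binary file content into the BLOB column
FileInputStream fis = new FileInputStream("profile.jpg");
ps.setBinaryStream(2, fis);
ps.executeUpdate();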

7. What is stored procedure? What are the parameter types in stored procedure?
Stored procedure is a group of SQL queries that are executed as a single logical
unit to perform a specific task. Name of the procedure should be unique since
each procedure is represented by its name.
For example, operations on an employee database like obtaining information
about an employee could be coded as stored procedures that will be executed
by an application. Code for creating a stored procedure named
GET_EMP_DETAILS is given below:

DELIMITER $$
DROP PROCEDURE IF EXISTS `EMP`.`GET_EMP_DETAILS` $$
CREATE PROCEDURE `EMP`.`GET_EMP_DETAILS`
(IN EMP_ID INT, OUT EMP_DETAILS VARCHAR(255))
BEGIN
SELECT first INTO EMP_DETAILS
FROM Employees
WHERE ID = EMP_ID;
END $$
DELIMITER ;

Stored procedures are called using CallableStatement class available in JDBC API.
Below given code demonstrates this:

CallableStatement cs = con.prepareCall("{call GET_EMP_DETAILS(?,?)}");


ResultSet rs = cs.executeQuery();


Three types of parameters are provided in the stored procedures. They are:
IN: It is used for passing the input values to the procedure. With the help of
setXXX() methods, you can bind values to IN parameters.
OUT: It is used for getting the value from the procedure. With the help of
getXXX() methods, you can obtain values from OUT parameters.
IN/OUT: It is used for passing the input values and obtaining the value
to/from the procedure. You bind variable values with the setXXX() methods
and obtain values with the getXXX() methods.
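A short sketch of binding the IN and OUT parameters of the GET_EMP_DETAILS procedure shown above (the employee id value is illustrative):
CallableStatement cs = con.prepareCall("{call GET_EMP_DETAILS(?, ?)}");
cs.setInt(1, 101);                                    // IN parameter
cs.registerOutParameter(2, java.sql.Types.VARCHAR);   // OUT parameter must be registered before execution
cs.execute();
String empDetails = cs.getString(2);                  // read the OUT value after execution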

8. What do you mean by DatabaseMetaData and why we are using it?
DatabaseMetaData is an interface that provides methods to obtain information
about the database.
We can use this for getting database-related information, such as database
name, database version, driver name, the total number of tables or views, etc.
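For example, a small sketch of reading such details, assuming con is an open Connection:
DatabaseMetaData metaData = con.getMetaData();
System.out.println("Product: " + metaData.getDatabaseProductName());
System.out.println("Version: " + metaData.getDatabaseProductVersion());
System.out.println("Driver:  " + metaData.getDriverName());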

9. What are the differences between ODBC and JDBC?


ODBC(Open Database Connectivity) vs JDBC(Java Database Connectivity):
ODBC can be used for languages like C, C++, Java, etc., whereas JDBC is used only for the Java language.
We can use ODBC only on the Windows platform, thus it is platform-dependent, whereas we can use JDBC on any platform, thus it is platform-independent.
Most of the ODBC drivers are developed in native languages like C and C++, whereas JDBC drivers are developed using the Java language.
It is not recommended to use ODBC for Java applications because of low performance due to internal conversion, whereas it is highly recommended to use JDBC for Java applications because there are no performance issues.
ODBC is procedural, whereas JDBC is Object Oriented.

10. What is Rowset?


A RowSet is an object that encapsulates a row set from either JDBC result sets or
tabular data sources such as files or spreadsheets. It supports component-based
development models like JavaBeans, with the help of a standard set of
properties and event notifications.
The advantages of using RowSet are:
It is easier and flexible to use.
It is Scrollable and Updatable by default.
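A minimal sketch using a connected JdbcRowSet; the URL, credentials and query are assumptions for illustration:
JdbcRowSet rowSet = RowSetProvider.newFactory().createJdbcRowSet();
rowSet.setUrl("jdbc:mysql://localhost:3306/testdb");
rowSet.setUsername("user");
rowSet.setPassword("password");
rowSet.setCommand("SELECT id, name FROM users");
rowSet.execute();                                     // connects and runs the command
while (rowSet.next()) {
    System.out.println(rowSet.getInt("id") + " " + rowSet.getString("name"));
}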

JDBC Interview Questions for Experienced


11. What are the different types of JDBC drivers in Java? Explain
each with an example.
There are four types of JDBC drivers in Java. They are:
Type I: JDBC - ODBC bridge driver
In this, the JDBC–ODBC bridge acts as an interface between the client and
database server. When a user uses a Java application to send requests to
the database using JDBC–ODBC bridge, it converts the JDBC API into ODBC
API and then sends it to the database. When the result is received from the
database, it is sent to ODBC API and then to JDBC API.
It is platform-dependent because it uses ODBC which depends on the native
library of the operating system. In this, the JDBC–ODBC driver should be installed on every client system and the database must support the ODBC driver.
It is easier to use but it gives low performance because it involves the
conversion of JDBC method calls to the ODBC method calls.


Type II: Native API – Partially Java Driver:


It is almost similar to a Type I driver. Here, native code replaces the ODBC
part. This native code part is targeted at a particular database product. It
uses libraries of the client-side of the database. This Type II Driver converts
the JDBC method calls to native calls of the database native API.
When the database gets the requests from the user, the requests are processed and the results are sent back in the native format, which is then converted into JDBC format and passed to the Java application.
It was instantly adopted by the database vendors because it was quick and
cheaper to implement. This driver gives faster response and performance
compared to the Type I driver.


Type III: Network Protocol - Fully Java Driver:


The type III driver is completely written in Java. It is similar to the 3-tier
approach to access the database. It helps to send the JDBC method calls to
an intermediate server. The intermediate server communicates with the
database on behalf of JDBC. The application server converts the JDBC calls
either directly or indirectly to the database protocol which is vendor-
specific.
This approach does not increase the efficiency of the architecture and it is costlier; due to this, most of the database vendors don't choose this driver.
You need to have good knowledge about the application server for using
this approach since the application server is used here.


Type IV: Thin Driver - Fully Java Driver


Type IV driver is directly implemented and it directly converts JDBC calls
into vendor-specific database protocol. Most of the JDBC Drivers used
today are type IV drivers.
It is platform-independent since it is written fully in Java. It can be installed
inside the Java Virtual Machine(JVM) of the client, so there is no need of installing any software on the client or server side. This driver architecture has all the logic to communicate directly with the database in a single driver.
It provides better performance compared to other driver types. It permits
easy deployment. Nowadays, this driver is developed by the database
vendor itself so that programmers can use it directly without any
dependencies on other sources.

12. What are the differences between ResultSet and RowSet?


ResultSet vs RowSet:
ResultSet cannot be serialized as it handles the connection to the database, whereas RowSet is disconnected from the database so it can be serialized.
By default, a ResultSet object is non-scrollable and non-updatable, whereas the RowSet object is scrollable and updatable by default.
A ResultSet object is not a JavaBean object, whereas a RowSet object is a JavaBean object.
ResultSet is returned by the executeQuery() method of the Statement interface, whereas RowSet extends the ResultSet interface and is returned by calling the RowSetProvider.newFactory().createJdbcRowSet() method.
It is difficult to pass a ResultSet from one class to another class as it has a connection with the database, whereas it is easier to pass a RowSet from one class to another class as it has no connection with the database.


13. Explain the types of ResultSet.


ResultSet refers to the row and column data contained in a ResultSet object. The
object of ResultSet maintains a cursor pointing to the current row of data in the
result set.
There are three types of ResultSet which have constants to control the movement of
the cursor in backward, forward, and in a particular row. If we do not declare any
ResultSet, then by default TYPE_FORWARD_ONLY will be called.
ResultSet.TYPE_FORWARD_ONLY: Using this, the cursor can only move forward
from start to end in the result set.
ResultSet.TYPE_SCROLL_INSENSITIVE: Using this, the cursor can move in both
forward and backward directions. Here, the result set is insensitive to the
changes done in the database by others, that occur after the result set was
created.
ResultSet.TYPE_SCROLL_SENSITIVE: Using this, the cursor can move in
forward and backward direction, and the result set is sensitive to changes made
to the database by others, that occur after the result set was created.
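The type is chosen when the Statement is created. A small sketch, where the table and column names are assumptions:
Statement st = con.createStatement(
        ResultSet.TYPE_SCROLL_INSENSITIVE,   // scrollable cursor, insensitive to others' changes
        ResultSet.CONCUR_READ_ONLY);
ResultSet rs = st.executeQuery("SELECT id, name FROM users");
rs.last();                                   // jump to the last row
rs.beforeFirst();                            // move the cursor back before the first row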

14. Explain JDBC API components.


The java.sql package contains different interfaces and classes for JDBC API. They
are:

Interfaces:


Connection: The object of Connection is created by using getConnection() method of DriverManager class. DriverManager is the factory for connection.
Statement: The object of the Statement is created by using createStatement()
method of the Connection class. The Connection interface is the factory for
Statement.
PreparedStatement: The PreparedStatement object is created by using
prepareStatement() method of Connection class. It is used for executing the
parameterized query.
ResultSet: The ResultSet object maintains a cursor pointing to a table row. At
first, the cursor points before the first row. The executeQuery() method of the
Statement interface returns the object of ResultSet.
ResultSetMetaData: The ResultSetMetaData interface object contains the
details about the data(table) such as number of columns, name of the column,
column type etc. The getMetaData() method of ResultSet returns the
ResultSetMetaData object.
DatabaseMetaData: It is an interface that has methods to get metadata of a
database, like name of the database product, version of database product,
driver name, name of the total number of views, name of the total number of
tables, etc. The getMetaData() method that belongs to Connection interface
returns the DatabaseMetaData object.
CallableStatement: CallableStatement interface is useful for calling the stored
procedures and functions. We can have business logic on the database through
the usage of stored procedures and functions, which will be helpful for the
improvement in the performance as these are pre-compiled. The prepareCall()
method that belongs to the Connection interface returns the object of
CallableStatement.
Classes:


DriverManager: It acts as an interface between the user and drivers.
DriverManager keeps track of the available drivers and handles establishing a
connection between a database and the relevant driver. It contains various
methods to keep the interaction between the user and drivers.
BLOB: BLOB stands for Binary Large Object. It represents a collection of binary
data such as images, audio, and video files, etc., which is stored as a single entity
in the DBMS(Database Management System).
CLOB: CLOB stands for Character Large Object. This data type is used by multiple
database management systems to store character files. It is the same as BLOB
except for the difference, instead of binary data, CLOB represents character
stream data such as character files, etc.
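A short sketch tying several of these components together; the connection details and query are assumptions for illustration:
Connection con = DriverManager.getConnection(url, user, password);
Statement st = con.createStatement();
ResultSet rs = st.executeQuery("SELECT * FROM Employees");

ResultSetMetaData rsmd = rs.getMetaData();                // metadata about the returned table
System.out.println("Columns: " + rsmd.getColumnCount());

DatabaseMetaData dbmd = con.getMetaData();                // metadata about the database itself
System.out.println("Database: " + dbmd.getDatabaseProductName());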

15. What are the types of JDBC statements?


Statements are useful for sending SQL commands to the database and receiving data
from the database. There are three types of statements in JDBC. They are:
Statement: It is the factory for ResultSet. It is used for general-purpose access to
the database by executing the static SQL query at runtime. Example:


Statement st = conn.createStatement();
ResultSet rs = st.executeQuery(sqlQuery);

PreparedStatement: It is used when we need to give input data to the query at runtime and also if we want to execute SQL statements repeatedly. It is more
efficient than a statement because it involves the pre-compilation of SQL.
Example:

String SQL = "Update item SET limit = ? WHERE itemType = ?";


PreparedStatement ps = conn.prepareStatement(SQL);
ResultSet rs = ps.executeQuery();

CallableStatement: It is used to call stored procedures on the database. It is capable of accepting runtime parameters. Example:

CallableStatement cs = con.prepareCall("{call SHOW_CUSTOMERS}");


ResultSet rs = cs.executeQuery();

16. Explain JDBC Batch processing.


Batch processing is the process of executing multiple SQL statements in one transaction. For example, consider the case of loading data from CSV(Comma-Separated Values) files to relational database tables. Instead of using Statement
or PreparedStatement, we can use Batch processing which executes the bulk of
queries in a single go for a database.
Advantages of Batch processing:
It will reduce the communication time and improves performance.
Batch processing makes it easier to process a huge amount of data and
consistency of data is also maintained.
It is much faster than executing a single statement at a time because of the
fewer number of database calls.
How to perform Batch processing?
To perform Batch processing, addBatch() and executeBatch() methods are used.
These 2 methods are available in the Statement and PreparedStatement classes
of JDBC API.
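A minimal sketch of batching inserts with PreparedStatement; the table, columns and values are assumptions for illustration:
con.setAutoCommit(false);                        // run the whole batch in one transaction
PreparedStatement ps = con.prepareStatement("INSERT INTO item (name, price) VALUES (?, ?)");
ps.setString(1, "pen");
ps.setInt(2, 10);
ps.addBatch();                                   // queue the first statement
ps.setString(1, "book");
ps.setInt(2, 50);
ps.addBatch();                                   // queue the second statement
int[] counts = ps.executeBatch();                // send all queued statements in a single call
con.commit();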

17. What is the difference between Statement and PreparedStatement?


Statement vs PreparedStatement:
With Statement, the query is compiled every time we run the program, whereas with PreparedStatement the query is compiled only once.
Statement is used in the situation where we need to run the SQL query without providing parameters at runtime, whereas PreparedStatement is used when we want to give input parameters to the query at runtime.
Statement gives lower performance compared to PreparedStatement, whereas PreparedStatement provides better performance as it executes the pre-compiled SQL statements.
Statement is suitable for executing DDL statements such as CREATE, ALTER, DROP and TRUNCATE, whereas PreparedStatement is suitable for executing DML statements such as INSERT, UPDATE, and DELETE.
Statement cannot be used for storing/retrieving images and files in the database, whereas PreparedStatement can be used for storing/retrieving images and files in the database.
Statement executes static SQL statements, whereas PreparedStatement executes pre-compiled SQL statements.
Statement is less secure as it allows SQL injection, whereas PreparedStatement is more secure as it uses bind variables, which can prevent SQL injection.


18. What is DataSource in JDBC? What are its benefits?


DataSource is an interface defined in javax.sql package and is used for
obtaining the database connection. It can be used as a good alternative to the DriverManager class, as it keeps the details about the database outside your application program.
A driver that is accessed through a DataSource object, does not register itself
with the DriverManager. Instead, a DataSource object is retrieved through a
lookup operation and then it can be used to create a Connection object.
Benefits of DataSource:
Caching of PreparedStatement for faster processing
ResultSet maximum size threshold
Logging features
Connection timeout settings
Connection Pooling in servlet container using the support of JNDI registry.
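A rough sketch of the typical usage in a servlet container, where the DataSource is configured in the server and the JNDI name used here is an assumption:
// Look up the container-managed DataSource through JNDI
Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/MyDB");

// The connection comes from the DataSource (often from a pool) instead of DriverManager
Connection con = ds.getConnection();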

19. Explain the difference between execute(), executeQuery() and executeUpdate() methods in JDBC.


execute(): It can be used for any SQL statement. It returns the boolean value TRUE if the result is a ResultSet object and FALSE when there is no ResultSet object. It is used for executing both SELECT and non-SELECT queries.
executeQuery(): It is used to execute SQL SELECT queries. It returns the ResultSet object which contains the data retrieved by the SELECT statement. It is used for executing only SELECT queries.
executeUpdate(): It is used to execute SQL statements such as INSERT/UPDATE/DELETE which will update or modify the database data. It returns an integer value which represents the number of affected rows, where 0 indicates that the query returns nothing. It is used for executing only non-SELECT queries.


The execute() method is used in the situations when you are not sure about the type
of statement else you can use executeQuery() or executeUpdate() method.
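A short sketch of the three methods in use, assuming an open Statement st and a hypothetical employee table:

ResultSet rs = st.executeQuery("SELECT id, name FROM employee");      // SELECT queries only
int rows = st.executeUpdate("DELETE FROM employee WHERE id = 10");    // non-SELECT queries only
boolean hasResultSet = st.execute("SELECT * FROM employee");          // either kind of statement
if (hasResultSet) {
    ResultSet result = st.getResultSet();   // when execute() ran a SELECT
} else {
    int count = st.getUpdateCount();        // when execute() ran an update
}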

20. Explain the types of RowSet available in JDBC.


A RowSet is an object that encapsulates a set of rows from JDBC result sets or tabular
data sources.
There are five types of RowSet interfaces available in JDBC. They are:
JDBCRowSet: It is a connected RowSet, which is having a live connection to the
database, and all calls on this are percolated to the mapping call in the JDBC
connection, result set, or statement. The Oracle implementation of
JDBCRowSet is done by using oracle.jdbc.rowset.OracleJDBCRowSet .
CachedRowSet: It is a RowSet in which the rows are cached and RowSet is
disconnected, which means it does not maintain an active database connection.
The oracle.jdbc.rowset.OracleCachedRowSet class is used as the Oracle
implementation of CachedRowSet.
WebRowSet: It is an extension to CachedRowSet and it represents a set of
fetched rows of tabular data that can be passed between tiers and components
so that no active data source connections need to be maintained.
It provides support for the production and consumption of result sets and their
synchronization with the data source, both in XML(Extensible Markup Language)
format and in a disconnected fashion. This permits result sets to be transmitted
across tiers and over Internet protocols. The Oracle implementation of
WebRowSet is done by using oracle.jdbc.rowset.OracleWebRowSet .
FilteredRowSet: It’s an extension to WebRowSet and gives programmatic
support to filter its content. This enables you to avoid the difficulty of query
supply and processing involved. The Oracle implementation of FilteredRowSet is
done by using oracle.jdbc.rowset.OracleFilteredRowSet .
JoinRowSet: It’s an extension to WebRowSet and consists of related data from
various RowSets. There is no standard way to establish a SQL JOIN operation
between disconnected RowSets without a data source connection. A
JoinRowSet addresses this problem. The Oracle implementation of JoinRowSet
is done by using oracle.jdbc.rowset.OracleJoinRowSet class.
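As an illustration, a disconnected CachedRowSet can be created through the standard RowSetProvider factory available since Java 7 (the connection details and query below are hypothetical):

CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
crs.setUrl("jdbc:mysql://localhost:3306/test1");
crs.setUsername("system");
crs.setPassword("123");
crs.setCommand("select * from employee");
crs.execute();                              // fetches the rows and then releases the connection
while (crs.next())
    System.out.println(crs.getInt(1) + " " + crs.getString(2));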


21. Explain the usage of the getter and setter methods in


ResultSet.
Getter methods: These are used for retrieving the particular column values of
the table from ResultSet. As a parameter, either the column index value or
column name should be passed. Usually, the getter method is represented as
getXXX() methods.
Example:
int getInt(string Column_Name)

The above statement is used to retrieve the value of the specified column Index and
the return type is an int data type.
Setter Methods: These methods are used to set the value in the database. They are
almost similar to the getter methods, but here we need to pass the data/value for
the particular column to insert into the database, along with the column name or index
value of that column. Usually, a setter method is represented as a setXXX() method.
Example:
void setInt(int Column_Index, int Data_Value)

The above statement is used to insert the value of the specified column Index with an
int value.
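A short combined illustration (con is an open Connection and the employee table is hypothetical; note that the setXXX() methods shown here are exposed by PreparedStatement for binding values, while the getXXX() methods are called on the ResultSet):

PreparedStatement ps = con.prepareStatement("select id, name from employee where id = ?");
ps.setInt(1, 101);                          // setter: bind the value for the first placeholder
ResultSet rs = ps.executeQuery();
while (rs.next()) {
    int id = rs.getInt("id");               // getter by column name
    String name = rs.getString(2);          // getter by column index
    System.out.println(id + " " + name);
}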

22. What is meant by a locking system in JDBC?


If two users are viewing the same record, then there is no issue, and locking will
not be done. If one user is updating a record and the second user also wants to
update the same record, in this situation, we are going to use locking so that
there will be no lost update.
Two types of locking are available in JDBC by which we can handle multiple user
issues using the record. They are:
Optimistic Locking: It will lock the record only when an update takes
place. This type of locking will not make use of exclusive locks when reading
or selecting the record.
Pessimistic Locking: It will lock the record as soon as it selects the row to
update. The strategy of this locking system guarantees that the changes are
made safely and consistently.
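A minimal sketch of pessimistic locking from JDBC, assuming a database that supports SELECT ... FOR UPDATE and a hypothetical account table (con is an open Connection):

con.setAutoCommit(false);                                   // hold the lock for the whole transaction
PreparedStatement lockStmt = con.prepareStatement(
        "SELECT balance FROM account WHERE id = ? FOR UPDATE");
lockStmt.setInt(1, 42);
ResultSet rs = lockStmt.executeQuery();                     // the selected row is now locked
if (rs.next()) {
    PreparedStatement update = con.prepareStatement(
            "UPDATE account SET balance = ? WHERE id = ?");
    update.setDouble(1, rs.getDouble("balance") + 100);
    update.setInt(2, 42);
    update.executeUpdate();
}
con.commit();                                               // committing releases the lock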

23. What is database connection pooling? What are the


advantages of connection pool?
Connection pooling means database connections will be stored in the cache and
can be reused when future requests to the database are required. So in this
mechanism, the client need not make new connections every time for
interacting with the database. Instead of that, connection objects are stored in
the connection pool and the client will get the connection object from there.
Advantages of using a connection pool are:
It is faster.
Easier to diagnose and analyze database connections.
Increases the performance of executing commands on a database.
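As an illustration, a minimal sketch using HikariCP, one common third-party connection pool (this assumes the HikariCP library, with its com.zaxxer.hikari.HikariConfig and HikariDataSource classes, is on the classpath; the URL and credentials are hypothetical):

HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:mysql://localhost:3306/test1");
config.setUsername("system");
config.setPassword("123");
config.setMaximumPoolSize(10);                 // upper bound on pooled connections

HikariDataSource pool = new HikariDataSource(config);
Connection con = pool.getConnection();         // borrows a connection from the pool
// run queries as usual
con.close();                                   // returns the connection to the pool instead of closing it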

24. What is “Dirty read” in terms of database?


Dirty read implies the meaning “read the value which may or may not be
correct”. In the database, when a transaction is executing and changing some
field value, at the same time another transaction comes and reads the changed
field value before the first transaction could commit or rollback the value, which
may cause an invalid value for that particular field. This situation is known as a
dirty read.
Consider an example where Transaction 2 changes a row but does not commit the
change. Transaction 1 then reads the uncommitted data. Now, if Transaction 2 rolls
back its change (which has already been read by Transaction 1) or updates the row
further, the view of the data in Transaction 1 becomes wrong, because the value it
read never existed in a committed state.

25. What causes “No suitable driver” error?


“No suitable driver” error occurs during a call to the DriverManager.getConnection()
method, because of the following reasons:


Unable to load the appropriate JDBC drivers before calling the getConnection()
method.
An invalid or malformed JDBC URL is specified, which cannot be recognized by the
JDBC driver.
This error may occur when one or more shared libraries required by the bridge
cannot be loaded.

26. What is JDBC Connection? Explain steps to get JDBC


database connection in a simple Java program.
Loading the driver: At first, you need to load or register the driver before using it in
the program. Registration must be done once in your program. You can register a
driver by using any one of the two methods mentioned below:
Class.forName(): Using this, we load the driver's class file into memory at
runtime. It does not require creating an object of the driver class with the new keyword.
The below given example uses Class.forName() to load the Oracle driver:

Class.forName("oracle.jdbc.driver.OracleDriver");

The MySQL Connector/J version 8.0 library comes with a JDBC driver class:
com.mysql.jdbc.Driver. Before Java 6, we had to load the driver explicitly using the
statement given below:

Class.forName("com.mysql.jdbc.Driver");

However, this statement is no longer needed, because of a new update in JDBC 4.0
that comes from Java 6. As long as you place the MySQL JDBC driver JAR file into the
classpath of your program, the driver manager can find and load the driver.
DriverManager.registerDriver(): DriverManager is a built-in Java class with a
static member register. Here we will be calling the constructor of the driver class
during compile time.


The below given example uses DriverManager.registerDriver() to register the Oracle
driver:

DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());

For registering the MySQL driver, use the below-given code:

DriverManager.registerDriver(new com.mysql.jdbc.Driver());

Create the connections:


After loading the driver into the program, establish connections using the code
given below:

Connection con = DriverManager.getConnection(url,user,password);

Here,
con: Reference to a Connection interface.
url: Uniform Resource Locator.
user: Username from which SQL command prompt is accessed.
password: Password from which SQL command prompt is accessed.
Url in Oracle can be created as follows:

String url = "jdbc:oracle:thin:@localhost:1521:xe";

Where oracle represents the database used, thin is the driver used, @localhost is the
IP(Internet Protocol) address where the database is stored, 1521 is the port number
and xe represents the service provider.
All 3 parameters given above are of string type and are expected to be declared by
the programmer before the function call. Use of this can be referred from the final
code of an application.


Url in MySQL can be created as follows:

String url = "jdbc:mysql://localhost:3306/test1";

Where localhost represents hostname or IP address of the MySQL server, 3306 port
number of the server and by default, it is 3306, test1 is the name of the database on
the server.
Create a statement:
Once a connection establishment is done, you can interact with the database.
The Statement, PreparedStatement, and CallableStatement JDBC interfaces
will define the methods that permit you to send SQL commands and receive
data from the database.
We can use JDBC Statement as follows:

Statement st = con.createStatement();

Here, con is a reference to the Connection interface used in the earlier step.
Execute the query:
Here, query means an SQL query. We can have various types of queries. A few of
them are as follows:
Query for updating or inserting a table in a database.
Query for data retrieval.
The executeQuery() method that belongs to the Statement interface is used for
executing queries related to values retrieval from the database. This method
returns the ResultSet object which can be used to get all the table records.
The executeUpdate(sql_query) method of the Statement interface is used for
executing queries related to the update/insert operation.
Example:


int m = st.executeUpdate(sql);
if (m==1)
System.out.println("Data inserted successfully : "+sql);
else
System.out.println("Data insertion failed");

Here SQL is the SQL query of string type.


Close the connection:
So finally we have sent the data to the location specified and now we are at the
end of our task completion.
On connection closing, objects of Statement and ResultSet will be automatically
closed. The close() method of the Connection interface is used for closing the
connection.
Example:
con.close();

Implementation of JDBC Oracle database connection using a Java program:


import java.sql.*;
import java.util.*;
class OracleCon
{
public static void main(String a[])
{
//Creating the connection
String url = "jdbc:oracle:thin:@localhost:1521:xe";
String user = "system";
String password = "123";

//Entering the data


Scanner k = new Scanner(System.in);
System.out.println("Enter employee Id");
int empid = k.nextInt();
System.out.println("Enter employee name");
String empname = k.next();
System.out.println("Enter employee address");
String address = k.next();

//Inserting data using SQL query


String sql = "insert into employee values("+empid+",'"+empname+"','"+address+"')
Connection con=null;
try
{
DriverManager.registerDriver(new oracle.jdbc.OracleDriver());

//Reference to connection interface


con = DriverManager.getConnection(url,user,password);

Statement st = con.createStatement();
int m = st.executeUpdate(sql);
if (m == 1)
System.out.println("Data inserted successfully : "+sql);
else
System.out.println("Data insertion failed");
con.close();
}
catch(Exception ex)
{
System.err.println(ex);
}
}
}

Implementation of JDBC MySQL database connection using Java program:


import java.sql.*;
class MysqlCon
{
public static void main(String args[])
{
//Creating the connection
String url = "jdbc:mysql://localhost:3306/test1";
String user = "system";
String password = "123";
try
{
Class.forName("com.mysql.jdbc.Driver");

//Reference to connection interface


Connection con=DriverManager.getConnection(url,user,password);

Statement st = con.createStatement();

//Displaying all the records of employee table


ResultSet rs = st.executeQuery("select * from employee");
while(rs.next())
System.out.println(rs.getInt(1)+" "+rs.getString(2)+" "+rs.getString(3));
con.close();
}
catch(Exception e)
{
System.out.println(e);
}
}
}

27. How to use JDBC API to call Stored procedures?


Stored procedures are a set of SQL queries that are compiled in the database and will
be executed from JDBC API. For executing Stored procedures in the database, JDBC
CallableStatement can be used. The syntax for initializing a CallableStatement is:


CallableStatement cs = con.prepareCall("{call insertEmployee(?,?,?,?,?)}");

cs.setInt(1, id);
cs.setString(2, name);
cs.setString(3, role);
cs.setString(4, address);
cs.setString(5, salary);
//registering the OUT parameter before calling the stored procedure
cs.registerOutParameter(5, java.sql.Types.VARCHAR);

cs.executeUpdate();

We must register the OUT parameters before executing the CallableStatement.

28. What are the types of JDBC architecture?


JDBC supports 2 types of processing models to access the database. They are:
Two-tier Architecture: Here Java programs are explicitly connected with the
database. It doesn’t require any mediator such as an application server for
connecting with the database except the JDBC driver. It is also called client-
server architecture.


Three-tier Architecture: It is the complete opposite of two-tier architecture.


There will be no explicit communication between the JDBC driver or Java
application and the database. It will make use of an application server as a
mediator between them. Java code will send the request to an application
server, then the server will send it to the database and receive the response from
the database.

29. What is JDBC Transaction Management and why is it


needed?


The sequence of actions (SQL statements) served as a single unit that is called a
transaction. Transaction Management places an important role in RDBMS-
oriented applications to maintain data consistency and integrity.
Transaction Management can be described well – by using ACID properties. ACID
stands for Atomicity, Consistency, Isolation, and Durability.
Atomicity - If all queries are successfully executed, then only data will be
committed to the database.
Consistency - It ensures bringing the database into a consistent state after
any transaction.
Isolation - It ensures that the transaction is isolated from other
transactions.
Durability - If a transaction has been committed once, it will remain always
committed, even in the situation of errors, power loss, etc.
Need for Transaction Management:
When creating a connection to the database, the auto-commit mode will be
selected by default. This implies that every time when the request is executed, it
will be committed automatically upon completion.
We might want to commit the transaction only after the execution of a few more SQL
statements. In such a situation, we must set the auto-commit value to False. So
that data will not be able to commit before executing all the queries. In case if
we get an exception in the transaction, we can rollback() changes made and
make it like before.

30. Explain the benefits of PreparedStatement over Statement.


Benefits of PreparedStatement over Statement interface are:


It performs faster compared to the Statement because the Statement needs to be
compiled each time we run the code, whereas the PreparedStatement is compiled
once and then only executed at runtime.
It can execute parametrized queries. But Statement can only run static queries.
The query used in PreparedStatement looks similar each time, so the database
can reuse the previous access plan. Statement inline the parameters into the
string, so the query doesn’t look to be the same every time which prevents
reusage of cache.

31. Explain the methods available for Transaction Management


in JDBC.
The connection interface is having 5 methods for transaction management. They are
given below:


setAutoCommit() method:
The value of AutoCommit is set to TRUE by default. After the SQL statement execution, it will be committed
automatically. By using this method we can set the value for AutoCommit.
Syntax: conn.setAutoCommit(boolean_value)
Here, boolean_value is set to TRUE for enabling autocommit mode for the
connection, FALSE for disabling it.
Commit() method:
The commit() method is used for committing the data. After the SQL statement
execution, we can call the commit() method. It will commit the changes made
by the SQL statement.
Syntax: conn.commit();
Rollback() method:
The rollback() method is used to undo the changes made till the last commit has
occurred. If we face any problem or exception in the SQL statements execution
flow, we may roll back the transaction.
Syntax: conn.rollback();
setSavepoint() method:
If you have set a savepoint in the transaction (a group of SQL statements), you
can use the rollback() method to undo all the changes till the savepoint or after
the savepoint(), if something goes wrong within the current transaction. The
setSavepoint() method is used to create a new savepoint which refers to the
current state of the database within the transaction.
Syntax: Savepoint sp= conn.setSavepoint("MysavePoint")
releaseSavepoint() method:
It is used for deleting or releasing the created savepoint.
Syntax: conn.releaseSavepoint("MysavePoint");
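A minimal sketch tying these methods together (con is an open Connection; the account table, its values and the error handling shown in comments are hypothetical):

con.setAutoCommit(false);                          // take manual control of the transaction
Statement stmt = con.createStatement();
stmt.executeUpdate("insert into account values(1, 500)");
Savepoint sp = con.setSavepoint("MysavePoint");
stmt.executeUpdate("update account set balance = 400 where id = 1");
// if the second statement should not be kept, we could undo only the work done after the savepoint:
// con.rollback(sp);
con.commit();                                      // make the remaining changes permanent
// on an exception we would instead call con.rollback() to undo the whole transaction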

32. Give few examples of most common exceptions in JDBC.


Some of the most common JDBC exceptions are given below:


java.sql.SQLException- It is the base class for JDBC exceptions.


java.sql.BatchUpdateException – It occurs during a batch update operation.
Depending on the JDBC driver implementation, the base SQLException may be
thrown instead.
java.sql.SQLWarning – It is displayed as a warning message of various SQL
operations.
java.sql.DataTruncation – This exception occurs when data values are
unexpectedly truncated due to reasons independent of exceeding MaxFieldSize.

33. What is Two phase commit in JDBC?


Two-phase commit is useful for a distributed environment where numerous
processes take part in the distributed transaction process. In simpler words, we
can say that, if a transaction is executing and it is affecting multiple databases
then a two-phase commit will be used to make sure that all databases are
synchronized with each other.
In two-phase commit, commit or rollback is performed by two phases given
below:
Commit request phase: In this phase, the main (coordinator) process takes a
vote of all other processes to confirm that they have completed their work
successfully and are ready to commit. If all the votes are "yes", then they
continue to the next phase; if any vote is "no", then a rollback is
performed.
Commit phase: As per vote, if all the votes are “yes” then commit is done.
In the same way, when any transaction changes multiple databases a er
transaction execution, it will issue a pre-commit command on each database
and all databases will send an acknowledgment. Based on acknowledgment, if
all are positive transactions then it will issue the commit command otherwise
rollback will be done.

34. What are the isolation levels of connections in JDBC?


The transaction isolation level is a value that decides the level at which
inconsistent data is permitted in a transaction, which means it represents the
degree of isolation of one transaction from another. A higher level of isolation
will result in improvement of data accuracy, but it might decrease the number of
concurrent transactions. Similarly, a lower level of isolation permits for more
concurrent transactions, but it reduces the data accuracy.
To ensure data integrity during transactions in JDBC, the DBMS make use of
locks to prevent access to other accesses to the data which is involved in the
transaction. Such locks are necessary for preventing Dirty Read, Non-Repeatable
Read, and Phantom-Read in the database.
The isolation level is used by the DBMS for its locking mechanism and can be set
using the setTransactionIsolation() method. You can obtain the level of
isolation currently used by the connection using the getTransactionIsolation() method.
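For illustration, the isolation level of a connection con can be set and read back as follows (the levels actually supported depend on the underlying database and driver):

con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
int level = con.getTransactionIsolation();
if (level == Connection.TRANSACTION_READ_COMMITTED)
    System.out.println("Dirty reads are now prevented for this connection");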

35. How to create a table dynamically from a JDBC application?


We can dynamically create a table by using the following code:
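A minimal sketch of such a program is given below (the table name and its columns are hypothetical; any valid CREATE TABLE statement built at runtime could be passed to executeUpdate() in the same way):

import java.sql.*;
class CreateTableExample
{
public static void main(String args[])
{
String url = "jdbc:mysql://localhost:3306/test1";
String user = "system";
String password = "123";
try
{
Connection con = DriverManager.getConnection(url,user,password);
Statement st = con.createStatement();

//DDL statement built and executed at runtime
String sql = "create table employee(id int primary key, name varchar(50), address varchar(100))";
st.executeUpdate(sql);
System.out.println("Table created successfully");
con.close();
}
catch(Exception e)
{
System.out.println(e);
}
}
}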



Contents

J2EE Interview Questions for Freshers


1. What is J2EE?
2. What are the main advantages of J2EE?
3. What are some of the technologies provided by the J2EE platform?
4. What are the various components of J2EE application architecture?
5. How is JDK different from JIT?
6. How are PATH and CLASSPATH different from each other in terms of J2EE?
7. How is multi-tier client-server architectural model advantageous?
8. What do you understand by build file?
9. Why do we have JDBC and JNDI in J2EE? How are they different from each other?
10. What is an EJB? How can you use it in J2EE?
11. What are the J2EE applets? Why can we use it?
12. What is the architecture model of Struts?
13. What do you understand by ORM?
14. What constitutes web components?
15. What do you understand by JSF?
16. What factors should a J2EE application possess for operating in a global
economy?
17. What are the differences between JVM vs JIT vs JDK vs JRE?

J2EE Interview Questions for Experienced


18. What are the design goals of J2EE architecture?



19. What do you understand by Connectors? Can you describe the Connector
Architecture?
20. What do you understand by JRMP?
21. What happens if the database connected to the Java application via connection
pool suddenly goes down?
22. How is 32-bit JVM different from 64-bit JVM?
23. How is a webserver different from an application server?
24. What is the purpose of heap dumps and how do you analyze a heap dump?
25. How can we take a heap dump of a Java process?
26. How is J2EE different from Spring?
27. What are EAR, WAR, and JAR?
28. What do you know about Hibernate?
29. What are deployment descriptors used for?
30. Can you describe the phases of the servlet lifecycle?
31. How does a servlet application work?
32. What do you understand by Java Message Service (JMS)?



Let's get Started

Introduction to J2EE

J2EE (Java Enterprise Edition) standards were first proposed by Oracle (Sun
Microsystems) to help developers develop, build and deploy reusable, distributed,
reliable, scalable, portable and secure enterprise-level business applications. In
simple terms, J2EE constitutes a set of frameworks, a collection of APIs and various
J2EE technologies like JSP, Servlets etc that are used as standards for simplifying
the development and building of large scale applications.
It is aimed at easing the development, build and deployment process of enterprise-
level applications that can be run on different platforms which supports Java. J2EE
remains the most popular standard followed by the Java developers community
which is why it is important for developers to know about J2EE concepts and have
hands-on experience in them.
In this article, we will see the most commonly asked interview questions on J2EE for
both freshers and experienced professionals.

J2EE Interview Questions for Freshers


1. What is J2EE?
J2EE or Java Enterprise Edition is a Java-based platform that is a combination of
services protocols and APIs (Application Programming Interfaces) that provides
capabilities to develop multi-tier, secure, stable and fast enterprise-level
applications. J2EE provides web, enterprise, web service and various other
specifications for developing enterprise-level web applications.


2. What are the main advantages of J2EE?


Following are the advantages of the J2EE platform:


Support for Web Services: J2EE provides a platform to develop and deploy web
services. The JAX-RPC (Java API for XML based Remote Procedure Call) helps
developers develop SOAP-based portable and interoperable web services, clients
and endpoints.
Faster Time to Market: J2EE uses the concept of containers for simplifying the
development. This helps in business logic separation from lifecycle
management and resources which aids developers to focus on business logic
than on the infrastructure. For instance, the EJB (Enterprise JavaBeans)
container takes care of threading, distributed communication, transaction
management, scaling etc and provides a necessary abstraction to the
developers.
Compatibility: J2EE platform follows the principle of “Write Once, Run
Anywhere”. It provides comprehensive standards and APIs that ensures
compatibility among different application vendors resulting in the portability of
applications.
Simplified connectivity: J2EE helps in easier applications connectivity which
allows utilizing the capabilities of different devices. It also provides JMS (Java
Message Service) to integrate diverse applications in asynchronous and loosely
coupled ways. It also provides CORBA (Common Object Request Broker
Architecture) support for linking systems tightly via remote calls.
Due to all the above benefits packed in one technology, it helps the developers to
reduce the TCO (Total Cost of Ownership) and also focus more on actual business
logic implementation.

3. What are some of the technologies provided by the J2EE


platform?
Some of the important technologies provided by J2EE are:


Java API for XML-Based RPC (JAX-RPC): This is used to build web services and
clients that make use of XML and Remote Procedure Calls.
Java Server Pages (JSP): This is used for delivering XML and HTML documents.
Apart from these, we can make use of OutputStream for delivering other data
types as well.
Java Servlets: Servlets are classes used for extending the server capabilities
which hosts applications and can be accessed using the request-response
model.


Enterprise Java Beans (EJB): This is a server-side component that is used for
encapsulating the application’s business logic by providing runtime
environment, security, servlet lifecycle management, transaction management
and other services.
J2EE Connector Architecture: This defines standard architecture to connect
J2EE platforms to different EIS (Enterprise Information Systems) such as
mainframe processes, database systems and different legacy applications coded
in another language.
J2EE Deployment API: Provides specifications for web services deployment
Java Management Extensions (JMX): They are used for supplying tools for
monitoring and managing applications, objects, devices and networks.
J2EE Authorization Contract for Containers (JACC): This is used to define
security contracts between authorization policy modules and application
servers.
Java API for XML Registries (JAXR): This provides standard API to access
different XML Registries to enable infrastructure for the building and
deployment of web services.
Java Message Service (JMS): This is a messaging standard for allowing different
JEE components for creating, sending, receiving and reading messages by
enabling communication in a distributed, loosely coupled, asynchronous and
reliable manner.
Java Naming and Directory Interface (JNDI): This is an API that provides
naming and directory functionality for Java-based applications.
Java Transaction API (JTA): This is used for specifying Java standard interfaces
between transaction systems and managers.
Common Object Request Broker Architecture (CORBA): This provides a
standard for defining Object Management Group designed for facilitating system
communication deployed on diverse platforms.
JDBC data access API: This provides API for getting data from any data sources
like flat files, spreadsheets, relational databases etc.

4. What are the various components of J2EE application


architecture?


J2EE is made up of 3 main components (tiers) - Client tier, Middle tier, Enterprise
data tier as shown in the below image:

Client Tier: This tier has programs and applications which interact with the user
and they are generally located in different machines from the server. Here,
different inputs are taken from the user and these requests are forwarded to the
server for processing and the response will be sent back to the client.
Middle Tier: This tier comprises of Web components and EJB containers. The
web components are either servlet or JSP pages that process the request and
generate the response. At the time of the application’s assembly, the client’s
static HTML codes, programs and applets along with the server’s components
are bundled within the web components. The EJB components are present for
processing inputs from the user which are sent to the Enterprise Bean that is
running in the business tier.
Enterprise Data Tier: This tier includes database servers, resource planning
systems and various other data sources that are located on a separate machine
which are accessed by different components by the business tier. Technologies
like JPA, JDBC, Java transaction API, Java Connector Architecture etc are used in
this tier.


5. How is JDK different from JIT?


JDK (Java Development Kit) is a cross-platform software development
environment offering various collections of libraries and tools required for
developing Java applications and applets. It also consists of JRE that provides tools
and libraries which aids in byte code execution. JDK is needed for writing and
running programs in Java. Whereas JIT stands for Just In Time Compiler which is a
module inside JVM (which is inside JRE). JIT compiler is used for compiling some
parts of byte code having similar functionality at the same time to machine code for
optimising the compilation time and performance.

6. How are PATH and CLASSPATH different from each other in


terms of J2EE?
PATH and CLASSPATH are key environmental variables used by Java platforms.
The PATH variable points to JDK binaries or native libraries like java.exe.
Whereas The CLASSPATH variable points to the binaries of Java such as JAR files
that consist of bytecode.
PATH is a system-level variable independent of Java being present in the system
or not. Whereas CLASSPATH is purely Java-specific which is used by JVM for
loading classes required by Java applications while running.

7. How is multi-tier client-server architectural model


advantageous?
Multi-tier client-server architectural model consists of various components known as
tiers that interact with each other. The below image represents the three-tier
application model which has client/presentation tier, business logic tier and the
database tier which interact with each other to process a request and send a
response:


In the multi-tier system, we have the following advantages:


Any changes to the user interface or business logic can be done independently.
It introduces abstraction between the components. For example, the client tier
can access data without knowing from where and how the response comes
from, what is the server infrastructure available in the backend etc.
Each tier can be coded or developed independently. For example, the middle
tier can be coded in Java or python, the client tier can be coded in Angular or
React etc.
The database can have pooled connections for sharing data among multiple
users without the need for creating a new connection for every user.

8. What do you understand by build file?


A build file is used for automating various steps involved in software development.
Along with this, the build file also specifies libraries and their versions that need to be
included. It also includes the type of optimizations required for the project.
Whenever the project size increases, build provides a standard way to build the
project.

9. Why do we have JDBC and JNDI in J2EE? How are they


different from each other?


JDBC or Java Database Connectivity provides guidelines and APIs for connecting
databases from different vendors like MySQL, Oracle, PostgreSQL etc for getting
data. JNDI (Java Naming and Directory Interface) helps in providing logical structure
to retrieve a resource from the database, EJB beans, messaging queues etc without
knowing the actual host address or port. A resource can be registered with JNDI and
then those registered application components can be accessed using the JNDI name.

10. What is an EJB? How can you use it in J2EE?


EJB or Enterprise Java Beans is one of the most important parts of the J2EE platform
that helps to develop enterprise-level multi-tiered applications and deploy them by
keeping in mind performance, scalability and robustness. EJBs can be used when we
want to achieve the following:
Clustering: For deploying the application in a cluster environment, EJBs can be
used to achieve high availability and fault tolerance.
Concurrency without using Threads: EJBs can be used to achieve concurrency
without using actual threads since they are instantiated using the object pool
and are available in the EJB container. This helps in achieving performance
without involving complexities around Threads.


Transaction management: EJBs can be used for achieving transaction


management for databases by using annotations provided by EJB.
Database Connection Pool: EJB can access connection pools defined in the
J2EE server. The connection pools help in achieving abstraction of database
connectivity and operations.
Security: EJBs use JAAS (Java Authentication and Authorization Service) to
develop secure applications. EJB methods can be authenticated and authorised
with only configuration changes.
Scheduler Service: EJBs can be used in the Timer Service which enables task
implementation for further execution or repetitive execution.

11. What are the J2EE applets? Why can we use it?
Applets are J2EE client components that are written in Java and are executed in a
web browser or a variety of other devices which supports the applet programming
model. They are used for providing interactive features to web apps and help in
providing small, portable embedded Java programs in HTML pages which will be run
automatically when we view the pages.

12. What is the architecture model of Struts?


Struts is a combination of JSP, Java Servlets, messages and custom tags that together
form an application development framework for developing enterprise-level
applications. It is based on MVC (Model-View-Controller) architecture.


Model: This component defines the internal system state. It can either be a Java
Beans cluster or a single bean depending on the application architecture.
View: Struts make use of JSP technology for designing views of enterprise-level
applications.
Controller: It is a servlet and is used for managing user actions that process
requests and respond to them.

13. What do you understand by ORM?


ORM stands for Object-Relational Mapping that transforms objects of Java class to
tables in relational databases and vice versa using metadata describing the mapping
between the database and the objects.
This is represented as shown in the below image:


Consider an example where we have an Employee class having employeeId,


firstName, lastName, contactNo as attributes. Consider we also have an Employee
table that has ID, FNAME, LNAME and CONTACT_NO as columns. If we want to send
data from our Java application and save it in the database, we cannot do it
straightforwardly by simply saving the Java objects in the database directly. We need
some sort of a mapper that maps the Java objects to the records that are compatible
to be saved in the database table. This is where ORM comes into the picture. ORM
helps in this transformation while writing data to the database as described in the
below image:

The database records cannot be directly consumed by the Java applications as Java
only deals with objects. ORM again plays a major role in the transformation of
database records to Java objects.
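As an illustration, this is roughly how such a mapping can be declared on the Java side with standard JPA annotations (javax.persistence), one common way ORM metadata is expressed; the class mirrors the Employee example above:

import javax.persistence.*;

@Entity
@Table(name = "EMPLOYEE")
public class Employee {

    @Id
    @Column(name = "ID")
    private int employeeId;

    @Column(name = "FNAME")
    private String firstName;

    @Column(name = "LNAME")
    private String lastName;

    @Column(name = "CONTACT_NO")
    private String contactNo;

    // getters and setters omitted for brevity
}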

14. What constitutes web components?


Java Servlets and Java Server Pages (JSP) components together constitute web
components.


Java Servlets dynamically process requests and responses. JSP pages are used for
executing servlets that allow a natural approach to creating static content.

15. What do you understand by JSF?


JSF stands for Java Server Faces which is a web framework that is intended for
simplifying the development process for user interfaces. It is a standardized display
technology for Java-based web applications. It is based on MVC (Model-View-
Controller) pattern and provides reusable UI components.

16. What factors should a J2EE application possess for


operating in a global economy?
Following are the factors that a J2EE application should possess to operate globally:
Financial Considerations: Each country has its taxes, restrictions and tariffs
depending on the government. All these factors should be considered while
developing the J2EE application.
Language Requirements: An application developed should support regional
languages of the country for wider user coverage.
Legal Differences: Every government has their custom laws, privacy laws and
requirements for each country. An application developed should abide by all the
rules of the land.


17. What are the differences between JVM vs JIT vs JDK vs JRE?


JVM (Java Virtual Machine): Introduced for managing system memory and also to provide a portable execution environment for Java applications. It is used for compiling byte code to machine code completely.
JIT (Just In Time Compilation): This is a part of JVM and was developed for improving JVM performance. It is used for compiling only reusable byte code to machine code.
JDK (Java Development Kit): JDK is a cross-platform software development environment offering various collections of libraries and tools required for developing Java applications and applets. JDK is essential for writing and running programs in Java.
JRE (Java Runtime Environment): JRE is part of JDK that consists of JVM, core classes and supporting libraries. It is used for providing a runtime environment for running Java programs. JRE is a subset of JDK and is like a container that consists of JVM, supporting libraries and other files; it doesn't have development tools such as compilers and debuggers.


J2EE Interview Questions for Experienced


18. What are the design goals of J2EE architecture?
The design goals of J2EE architecture are as follows:
Service Availability: To ensure that the application is available 24*7 to achieve
required business goals.
Data Connectivity: The connection between a J2EE application and legacy
systems should remain compatible enough for ensuring business functions.
Ease of Accessibility: The user should be able to connect to applications using
any device and from anywhere.
User Interaction: The user interaction should be seamless and should be able to
connect to different devices like desktops, mobiles, laptops etc.
Abstraction and Flexibility: The developer should focus on business logic and
the configuration details should be handled by the server.

19. What do you understand by Connectors? Can you describe


the Connector Architecture?
Connectors are used for providing standard extension mechanisms to provide
connectivity to different enterprise information systems. A connector architecture
consists of resource adapters and system-level contracts, both of which are specific
to enterprise information systems. The resource adapters are plugged into the
container. The connector architecture defines certain contracts which a resource
adapter must support for plugging into J2EE applications like security, transaction,
resource management etc.

20. What do you understand by JRMP?


JRMP stands for Java Remote Method Protocol which is used for Remote Method
Invocation (RMI) for passing Java objects as arguments. It is an underlying protocol
used by RMI for marshalling objects as a stream during object serialization for
transferring objects from one JVM to other.

21. What happens if the database connected to the Java


application via connection pool suddenly goes down?

Since the Java application uses a connection pool, it has active connections that
would get disconnected if the database goes down. When the queries are executed to
retrieve or modify data, then we will get a Socket exception.

22. How is 32-bit JVM different from 64-bit JVM?


64-bit JVM is used in 64-bit operating systems whereas 32-bit JVM is used for 32-bit
operating systems. In 64-bit JVM, we can specify more heap size memory up to 100G
when compared to the 4G limit of 32-bit JVM. Java applications take more memory
while running in 64-bit JVM when compared to running the same application in 32-bit
JVM. This is because of the increased size of the Ordinary Object Pointer. However,
this can be bypassed by making use of the -XX:+UseCompressedOops option of the JVM for
telling it to use 32-bit compressed object pointers. Additionally, 64-bit JVM uses 12 bytes object header size
and a maximum of 8 bytes of internal references whereas the 32-bit JVM uses 8 bytes
headers and a maximum of 4 bytes of internal references.

23. How is a webserver different from an application server?


Web Server vs Application Server:

A web server is a computer program that accepts requests and returns responses based on that, whereas an application server hosts applications and exposes their business logic to the clients.
A web server constitutes only the web container, while an application server constitutes both a web container and an EJB container.
Web servers are useful for serving static content to the applications, whereas application servers can also serve dynamic content.
A web server consumes and utilizes fewer resources, while an application server consumes more resources.
A web server provides an environment for running web applications, while an application server provides an environment for running enterprise applications.
Web servers don't support multithreading, whereas application servers do.
A web server makes use of HTML and HTTP protocols, while an application server can also support other protocols such as RMI/RPC.

24. What is the purpose of heap dumps and how do you analyze
a heap dump?


Heap dumps consist of a snapshot of all live objects on Java heap memory that are
used by running Java applications. Detailed information for each object like type,
class name, address, size and references to other objects can be obtained in the heap
dump. Various tools help in analyzing heap dumps in Java. For instance, JDK itself
provides jhat tool for analysing heap dump. Heap dumps are also used for analysing
memory leaks which is a phenomenon that occurs when there are objects that are
not used by the application anymore and the garbage collection is not able to free
that memory as they are still shown as referenced objects. Following are the causes
that result in memory leaks:
Continuously instantiating objects without releasing them.
Unclosed connection objects (such as connections to the database) post the
required operation.
Static variables holding on to references of objects.
Adding objects in HashMap without overriding hashCode() equals() method. If
these methods are not included, then the hashmap will continuously grow
without ignoring the duplicates.
Unbounded caches.
Listener methods that are uninvoked.
Due to this, the application keeps consuming more and more memory and eventually
this leads to OutOfMemory Errors and can ultimately crash the application. We can
make use of the Eclipse Memory Analyzer or jvisualVM tool for analysing heap dump
to identify memory leaks.

25. How can we take a heap dump of a Java process?


There are multiple ways for taking heap dump of Java process. Tools like jCmd,
jVisualVM, jmap are available for this purpose. For example, if we are using jmap,
then heap dump can be taken by running the below command:

$ jmap -dump:live,format=b,file=/path/of/heap_dump.hprof PID


This heap dump contains live objects that are stored in heap_dump.hprof file.
Process ID (PID) of the Java process is needed to get the dump that can be obtained
by using ps or grep commands.
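jcmd, mentioned above, can produce the same kind of dump, for example:

$ jcmd <PID> GC.heap_dump /path/of/heap_dump.hprof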

26. How is J2EE different from Spring?

J2EE:
J2EE is a standard or specification defined by Sun/Oracle which is used for web development.
J2EE has an Oracle-based license.
J2EE is based on a 3-tier framework - Logical tier, Client tier, and Presentation tier.
J2EE makes use of high-level object-oriented languages like Java.
J2EE is faster.
It makes use of the JTA API for transaction execution.

Spring:
Spring is a framework used for designing templates for an application.
Spring is an open-source framework.
Spring is based on a layered architecture having many modules that are made on top of the core container.
Spring doesn't have a specific programming model.
Spring is slower than J2EE.
Spring provides a layer of abstraction to help JTA execution vendors.

27. What are EAR, WAR, and JAR?


EAR stands for Enterprise Archive file and it consists of web, EJB and client
components all compressed and packed into a file called .ear file. EAR files allow us
to deploy different modules onto the application server simultaneously.
WAR stands for Web Archive file and consists of all web components packed and
compressed in a .war file. This file allows testing and deploying web applications
easily in a single request.
JAR stands for Java Archive file. It consists of all libraries and class files that
constitute APIs. These are packed and compressed in a file called the .jar file. These
are used for deploying the entire application including classes and resources in a
single request.

28. What do you know about Hibernate?


Hibernate is an Object Relational Mapper framework in Java that provides a layer of
abstraction for retrieving or modifying data in the database. It handles all the
implementations internally and the developer need not worry about how the
connections to the databases are made, how the data translation from Java
application to Database and vice versa happens. Hibernate supports powerful object-
oriented concepts like inheritance, association, polymorphism, compositions,
collections that help in making queries using the Java approach by using HQL
(Hibernate Query Language).

29. What are deployment descriptors used for?


Deployment descriptors (such as the web.xml file packaged in a WAR) are XML configuration
files that tell the container how the components of an application - servlets, filters,
listeners, security constraints and so on - should be deployed, mapped to URLs and configured.
Servlets themselves are server-side components that aid in developing powerful server-side
applications. They are platform-independent and follow various protocols as per the
application design; the most commonly used protocol is HTTP. In Java, we can create servlets
by implementing the Servlet interface, which has 3 lifecycle methods - init, service and
destroy - and we can use the below classes for implementing servlets:
javax.servlet.http.HttpServletRequest
javax.servlet.http.HttpServletResponse
javax.servlet.http.HttpSession

30. Can you describe the phases of the servlet lifecycle?


The below image describes the different phases of the servlet lifecycle:

There are five phases, are as follows:


Classloading phase: The first step is to load the servlet class file (.class
extension) by the web container.
Instantiation phase: Next step is to instantiate the servlet by calling the default
constructor.
Initialize phase: In this phase, the init() method of the servlet is run where the
servlet configuration will be assigned to the servlet. This is a lifecycle method
provided by the Servlet interface which is run only once in the servlet lifetime.
Request Handling phase: Here, the servlets provide services to different
requests by making use of the service() method of the Servlet interface.
Removal phase: In this phase, the destroy() lifecycle method of the Servlet
interface will be called that is used for clearing the configuration and closing
resources before servlet destruction. Post this, the garbage collection will take
place.

31. How does a servlet application work?


A Java servlet is typically multithreaded. This means that multiple requests can be
sent to the same servlet and they can be executed at the same time. All the local
variables (not pointing to shared resources) inside the servlet are automatically
thread-safe and are request specific. Care has to be taken when the servlet is
accessing or modifying the global shared variable. The servlet instance lifecycle for
different requests are managed by the web container as follows:
User clicks on a link in a client that requests a response from a server. In this
instance, consider that the client performs GET request to the server as shown in
the image below:

The web container intercepts the request and identifies which servlet has to
serve the request by using the deployment descriptor file and then creates two
objects as shown below-
HttpServletRequest - to send servlet request
HttpServletResponse - to get the servlet response


The web container then creates and allocates a thread that inturn creates a
request that calls the service() lifecycle method of the servlet and passes the
request and response objects as parameters as shown below:

service() method decides which servlet method - doGet() or doPost() or doPut()


or doDelete()- depending on the HTTP requests received from the client. In our
case, we got the GET request from the client and hence the servlet will call the
doGet() method as described below.

Servlet makes use of the response object obtained from the servlet method for
writing the response to the client.


Once the request is served completely, the thread dies and the objects are made
ready for garbage collection.

32. What do you understand by Java Message Service (JMS)?


JMS is a Java-based API that is like a gateway to the message-oriented middleware
like SonicMQ, IBM MQSeries etc. It provides a mechanism for sending and receiving
messages by making use of the publishing/subscribe (1 message multiple receivers)
model or point-to-point (1 message 1 receiver) paradigm from one client to another.
The following image describes how 2 clients can communicate with each other
utilizing JMS providers.
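As a rough illustration, sending a message to a queue with the classic JMS API looks like this (the JNDI names jms/ConnectionFactory and jms/OrderQueue are hypothetical and depend on the JMS provider; the classes come from javax.jms and javax.naming):

InitialContext ctx = new InitialContext();
ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

Connection connection = factory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer producer = session.createProducer(queue);
producer.send(session.createTextMessage("Hello from client 1"));
connection.close();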


Conclusion:

J2EE defines standards and specifications for various components such as e-mailing,
database connectivity, security, XML parsing, CORBA communication etc that help in
developing complex, reliable, secure and distributed servlets and applications that
follow the client-server model. It provides various API interfaces that act as standards
between different vendor adapters and J2EE components. This ensures that the
application components are not dependent on vendor codes. Due to this, J2EE has
been very popular among Java developers in the field of software development.



Contents

Servlet Interview Questions for Freshers


1. What is a Servlet?
2. How do you write a servlet that is part of a web application?
3. What are some of the advantages of Servlets?
4. Explain the Servlet API.
5. What do you mean by server-side include (SSI) functionality in Servlets?
6. Explain the server-side include expansion.
7. Define ‘init’ and ‘destroy’ methods in servlets.
8. How is retrieving information different in Servlets as compared to CGI?
9. Compare CGI Environment Variables and the Corresponding Servlet Methods.
10. How does a servlet get access to its init parameters?
11. How does a servlet examine all its init parameters?

Servlet Interview Questions for Experienced


12. What do you mean by Servlet chaining?
13. What do you mean by ‘filtering’ in servlets?
14. What are the uses of Servlet chaining?
15. What are the advantages of Servlet chains?
16. Explain the Servlet Life Cycle.
17. What is the life cycle contract that a servlet engine must conform to?
18. What do you mean by Servlet Reloading?


19. What are the methods that a servlet can use to get information about the
server?
20. How can a servlet get the name of the server and the port number for a
particular request?
21. How can a servlet get information about the client machine?
22. Explain the Single-Thread Model in servlets.
23. How does Background Processing take place in servlets?
24. How does Servlet collaboration take place?
25. Explain Request parameters associated with servlets.
26. What are the three methods of inter-servlet communication?
27. What are the reasons we use inter-servlet communication?
28. What do you mean by Servlet Manipulation?
29. What is the javax.servlet package?



Let's get Started

Introduction to Servlet:

A servlet is an extension to a server. It is a Java class that is loaded to expand the


functionality of the server. It helps extend the capability of web servers by providing
support for dynamic response and data persistence. These are commonly used with
web servers, where they can take the place of CGI scripts. A servlet runs inside a Java
Virtual Machine (JVM) on the server, and hence it is safe and portable. Servlets
operate only within the domain of the server. These do not require support for Java
in the web browser.
The original servlet specification was created by Sun Microsystems. Sun packed Java
with Internet functionality and announced the servlet interface. The first version was
finalized in June 1997. The servlet specification was developed under the Java
Community Process starting with version 2.3. Servlets represent a more efficient
architecture as compared to the older CGI.

Servlet Interview Questions for Freshers


1. What is a Servlet?


A servlet is a small Java program that runs within a Web server. Servlets receive and
respond to requests from Web clients, usually across HTTP, the HyperText Transfer
Protocol. Servlets can also access a library of HTTP-specific calls and receive all the
benefits of the mature Java language, including portability, performance, reusability,
and crash protection. Servlets are often used to provide rich interaction functionality
within the browser for users (clicking link, form submission, etc.)

2. How do you write a servlet that is part of a web application?


To write a servlet that is part of a web application:
Create a Java class that extends javax.servlet.http.HttpServlet.
Import the classes from servlet.jar (or servlet-api.jar ).
These will be needed to compile the servlet.
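As a minimal sketch (the class name and output are illustrative, not from the original text), such a servlet might look like this:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class HelloServlet extends HttpServlet {
    // Handles HTTP GET requests with a short HTML response.
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        res.getWriter().println("<html><body>Hello from a servlet</body></html>");
    }
}

The servlet then has to be declared and mapped to a URL in the web application's deployment descriptor (web.xml) or, on Servlet 3.0 and later containers, with the @WebServlet annotation.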

3. What are some of the advantages of Servlets?


Servlets provide a number of advantages over the other approaches. These include
power, integration, efficiency, safety, portability, endurance, elegance, extensibility,
and also flexibility. Here are the advantages of servlets:
Servlets provide a convenient way to generate and modify HTML content.
Servlet code can also be embedded in JSP pages.
Servlets get all the benefits of the Java language, such as multithreading and exception handling.
Servlets keep the business logic in a separate layer of the application.
They make it easy for developers to process and display information.

4. Explain the Servlet API.


Unlike a regular Java program, and just like an applet, a servlet does not have a main()
method. Instead, a servlet defines methods that the server calls to handle requests:
every time the server receives a request for a servlet, it invokes the servlet's service()
method.


To handle the requests that are appropriate for it, a generic servlet must override its
service() method. The service() method accepts two parameters: a request object and a
response object. The request object tells the servlet about the request, while the
response object is used to return a response.
As opposed to this, an HTTP servlet typically does not override the service() method.
However, it actually overrides the doGet() to handle the GET requests and the
doPost() to handle POST requests. Depending on the type of requests it needs to
handle, an HTTP servlet can override either or both of these methods.
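A hedged sketch of an HTTP servlet overriding both methods (the class name and responses are made up for the example):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class OrderServlet extends HttpServlet {

    // service() dispatches HTTP GET requests here.
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/plain");
        res.getWriter().println("Showing the order form");
    }

    // service() dispatches HTTP POST requests here.
    public void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/plain");
        res.getWriter().println("Order submitted");
    }
}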

5. What do you mean by server-side include (SSI) functionality


in Servlets?
With server-side include (SSI) functionality, servlet output can be embedded in HTML
pages. On servers that support it, a page is preprocessed by the server so that the
output from servlets is inserted at specific points within the page.


<SERVLET CODE=ServletName CODEBASE=http://server:port/dir
         initParam1=initValue1
         initParam2=initValue2>
<PARAM NAME=param1 VALUE=val1>
<PARAM NAME=param2 VALUE=val2>
Text appearing here is shown only if the web server that provides this page does not
support the SERVLET tag.
</SERVLET>

6. Explain the server-side include expansion.


Server-side inclusion (SSI) is a feature of a server in which a placeholder <SERVLET>
tag is also returned. The <SERVLET> tag is then substituted by the corresponding
servlet code.
Rather than parsing and analyzing every page it returns, the server parses only
specially tagged pages; by default, the Java Web Server parses only pages with a .shtml
extension. In contrast to the APPLET tag, the client web browser never sees anything
between <SERVLET> and </SERVLET> unless the server does not support SSI, in which
case the fallback text is sent as-is.

7. Define ‘init’ and ‘destroy’ methods in servlets.


The init() method is used to initialize a servlet.


After the web container loads and instantiates the servlet class and before it delivers
requests from clients, the web container initializes the servlet. To customize this
process to allow the servlet to read persistent configuration data, initialize resources,
and perform any other one-time activities, you override the init method of the
Servlet interface.
Example:

public class CatalogServlet extends HttpServlet {

    private ArticleDBAO articleDB;

    public void init() throws ServletException {
        articleDB = (ArticleDBAO) getServletContext().getAttribute("articleDB");
        if (articleDB == null) {
            throw new UnavailableException("Database not loaded");
        }
    }
}

When a servlet container determines that a servlet should be removed from service
(for example, when a container wants to reclaim memory resources or when it is
being shut down), the container calls the destroy method of the Servlet interface.
The following destroy method releases the database object created in the init
method.

public void destroy() {
    articleDB = null;
}

8. How is retrieving information different in Servlets as


compared to CGI?
Servlets have a number of methods for accessing information, and each method
returns a specific piece of it. Compared with a CGI program, which gets its information
from passed environment variables, the servlet approach has several advantages.

Stronger type checking:
Stronger type checking means the compiler gives more help in finding errors of
syntax and type. A CGI program uses a single function to fetch all of its
environment variables, so many mistakes cannot be caught at compile time and
show up only as runtime problems.
Delayed calculation:
When a server starts a CGI program, the value of every environment variable has
to be precalculated and passed, whether the program uses it or not. In contrast, a
servlet can improve performance by delaying these calculations and performing
them only when the information is actually needed.
Interaction with the server:
Once a CGI program begins executing, it is cut off from its server; its only
communication path back is its standard output. A servlet, however, can work
together with the server, either running inside the server itself or as a connected
process alongside it.

9. Compare CGI Environment Variables and the Corresponding


Servlet Methods.


CGI Environment Variable HTTP Servlet Method

SERVER_NAME req.getServerName()

SERVER_SOFTWARE getServletContext().getServerInfo()

SERVER_PROTOCOL req.getProtocol()

SERVER_PORT req.getServerPort()

REQUEST_METHOD req.getMethod()

PATH_INFO req.getPathInfo()

PATH_TRANSLATED req.getPathTranslated()

SCRIPT_NAME req.getServletPath()

DOCUMENT_ROOT req.getRealPath("/")

QUERY_STRING req.getQueryString()

REMOTE_HOST req.getRemoteHost()

REMOTE_ADDR req.getRemoteAddr()

AUTH_TYPE req.getAuthType()

REMOTE_USER req.getRemoteUser()

CONTENT_TYPE req.getContentType()

CONTENT_LENGTH req.getContentLength()

HTTP_ACCEPT req.getHeader("Accept")

HTTP_USER_AGENT req.getHeader("User-Agent")

10. How does a servlet get access to its init parameters?


The getInitParameter() method is used by the servlet in order to get access to its init
parameters:

public String ServletConfig.getInitParameter(String name)

The above method returns the value of the named init parameter or if the named init
parameter does not exist it will return null. The value returned is always a single
string. The servlet then interprets the value.

11. How does a servlet examine all its init parameters?


We can make use of getInitParameterNames() function to examine all its init
parameters.

public Enumeration ServletConfig.getInitParameterNames()

This returns the names of the servlet's initialization parameters as an Enumeration of
String objects, or an empty Enumeration if the servlet has no initialization
parameters. This is often used for debugging.
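For illustration, a servlet might read its init parameters in init() like this (the parameter name "greeting" is made up for the example):

public void init() throws ServletException {
    // Value of a single named init parameter, or null if it is not defined.
    String greeting = getInitParameter("greeting");

    // Enumerate all init parameter names, e.g. for debugging.
    java.util.Enumeration names = getInitParameterNames();
    while (names.hasMoreElements()) {
        String name = (String) names.nextElement();
        log(name + " = " + getInitParameter(name));
    }
}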

Servlet Interview Questions for Experienced


12. What do you mean by Servlet chaining?
Servlet Chaining is a way where the output of one servlet is piped to the input of
another servlet, and the output of that servlet can be piped to the input of yet
another servlet and so on. Each servlet in the pipeline can either change or extend
the incoming request. The response is returned to the browser from the last servlet
within the servlet chain. In the middle, the output out of each servlet is passed as the
input to the next servlet, so every servlet within the chain has an option to either
change or extend the content. In this way, servlet chaining can be used to create or
transform content.


13. What do you mean by ‘filtering’ in servlets?


There are usually two ways to trigger a chain of servlets for an incoming request. The
first is to tell the server that certain URLs should be handled by a specified chain. The
other is to tell the server to redirect all output of a particular content type through a
specified servlet before it is returned to the client, which effectively creates a chain on
the fly. When a servlet transforms one type of content into another in this way, the
technique is called filtering.

14. What are the uses of Servlet chaining?


Given below are some of the use cases of servlet chaining:
Quickly change the appearance of a single page, a group of pages, or a particular
type of content.
Serve readers of another language by dynamically translating the text of the pages
into a language the client can read.
Censor words that you do not want others to read.
Display a core piece of content in special formats.


For instance, one can add custom tags within a page and have a servlet replace them
with HTML content.
Support esoteric data types.
For instance, for image types the browser does not support, one can provide a filter
that converts nonstandard image types to GIF or JPEG.

15. What are the advantages of Servlet chains?


Servlet chains have the following advantages:
A servlet chain can be undone easily, which helps in quickly reversing a change.
Servlet chains also handle dynamically created content, so one can trust that
restrictions are enforced, special tags are replaced, and dynamically converted
images are displayed properly, even in the output of another servlet.
Servlet chains can cache the transformed content, so the conversion does not have
to be executed again every time the content is requested.

16. Explain the Servlet Life Cycle.


One of the most striking features of servlets is the Servlet Life Cycle. This is a powerful
mixture of the life cycles used in CGI programming and lower-level NSAPI and ISAPI
programming.
The CGI has certain resource and performance problems. In low-level server API
programming, there are some security concerns as well. These are addressed by the
servlet engines by the servlet life cycle. A servlet engine might execute all of its
servlets in a single Java virtual machine (JVM). Servlets can efficiently share data with
each other as they share the same JVM. Still, they are prevented from accessing each
other’s private data by the Java language. Additionally, servlets can be permitted to
remain between requests as object instances. Thus they take up lesser memory than
the complete processes.

17. What is the life cycle contract that a servlet engine must
conform to?
The life cycle contract that a servlet engine must conform to is as follows:


Create and initialize the servlet.
Handle zero or more service calls from clients.
Destroy the servlet and then garbage-collect it.

18. What do you mean by Servlet Reloading?


Servlet reloading may appear to be a simple feature, but it’s actually quite a trick—
and requires quite a hack. The objects in ClassLoader are developed to load a class
just once. To solve this limitation and to load servlets multiple times, servers use
custom class loaders. These custom class loaders load servlets from the default
servlets directory.
When a server dispatches a request to a servlet, it first checks if the servlet’s class file
has changed on disk. If the change appears, then the server abandons the class that
the loader used to load the old version and then creates a new instance of the
custom class loader to load the new version. Old servlet versions can stay in memory
indefinitely, but the old versions are not used to handle any more requests.

19. What are the methods that a servlet can use to get
information about the server?
A servlet can be used to learn about its server using 4 different methods. Out of these,
two methods are called using the ServletRequest object. These are passed to the
servlet. The other two are called from the ServletContext object. In these, the servlet
is executing.

20. How can a servlet get the name of the server and the port
number for a particular request?
A servlet can get the name of the server and the port number for a particular request
with getServerName() and getServerPort() , respectively:

public String ServletRequest.getServerName()


public int ServletRequest.getServerPort()


These methods are attributes of ServletRequest because the values can change for
different requests if the server has more than one name (a technique called virtual
hosting).
The getServerInfo() and getAttribute() methods of ServletContext supply
information about the server so ware and its attributes:

public String ServletContext.getServerInfo()


public Object ServletContext.getAttribute(String name)

21. How can a servlet get information about the client


machine?
A servlet can use getRemoteAddr() and getRemoteHost() to retrieve the IP address
and hostname of the client machine, respectively:

public String ServletRequest.getRemoteAddr()


public String ServletRequest.getRemoteHost()

Both values are returned as String objects.

22. Explain the Single-Thread Model in servlets.


It is standard to have a single servlet instance for each registered name of the servlet.
However, instead of this, it is also possible for a servlet to choose to have a pool of
instances created for each of its names that all share the task of handling requests.
These servlets indicate this action by implementing the
javax.servlet.SingleThreadModel interface.

According to the Servlet API documentation, a server loading the SingleThreadModel


servlet should guarantee, “that no two threads will execute concurrently the service
method of that servlet.” Each thread uses a free servlet instance from the pool in
order to achieve this. Therefore, any servlet using the SingleThreadModel isn’t
needed to synchronize usage to its instance variables and is considered thread-safe.


23. How does Background Processing take place in servlets?


Servlets can do more than just persist between accesses; they can also execute
between accesses. A thread started by a servlet can continue to execute even after the
response has been sent. This ability proves most useful for long-running tasks whose
incremental results should be made available to multiple clients. A background thread
started in init() performs the continuous work, while request-handling threads display
the current status with doGet().
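A rough sketch of this pattern, using a made-up counter task (not from the original text):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CounterServlet extends HttpServlet implements Runnable {

    private volatile int count = 0;
    private Thread worker;

    public void init() throws ServletException {
        // Start the background work when the servlet is loaded.
        worker = new Thread(this);
        worker.start();
    }

    // Long-running background job; request threads only read its progress.
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            count++;
            try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
        }
    }

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws IOException {
        res.setContentType("text/plain");
        res.getWriter().println("Current count: " + count);
    }

    public void destroy() {
        worker.interrupt();  // stop the background thread on unload
    }
}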

24. How does Servlet collaboration take place?


Servlets running together in the same server have many ways to communicate with
one another. There are two main styles of servlet collaboration:
Sharing information: Sharing information involves two or more servlets
sharing the state or even resources. A special case of sharing information is
Session tracking.
Sharing control: Sharing control involves two or more servlets sharing control
of the request. For example, one servlet could receive the request but let
another servlet handle some or all of the request-handling responsibilities.


25. Explain Request parameters associated with servlets.


Each access to a servlet can have any number of request parameters associated with
it. These parameters are name/value pairs that give the servlet whatever extra
information it needs to handle the request.
An HTTP servlet gets its request parameters as part of its query string or as encoded
POST data. A servlet used as a server-side include receives its parameters via PARAM
tags.
Fortunately, although a servlet may receive its parameters in a number of different
ways, every servlet retrieves them the same way, using getParameter() and
getParameterValues():

public String ServletRequest.getParameter(String name)


public String[] ServletRequest.getParameterValues(String name)
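For example, a doGet() method inside an HttpServlet subclass might read a single-valued and a multi-valued parameter like this (the parameter names are illustrative):

public void doGet(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException {
    String name = req.getParameter("name");              // e.g. ?name=Rishi
    String[] hobbies = req.getParameterValues("hobby");  // e.g. ?hobby=java&hobby=chess

    res.setContentType("text/plain");
    res.getWriter().println("name = " + name);
    if (hobbies != null) {
        for (int i = 0; i < hobbies.length; i++) {
            res.getWriter().println("hobby = " + hobbies[i]);
        }
    }
}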

26. What are the three methods of inter-servlet


communication?
The three methods of inter servlet communication are:
Servlet manipulation: In Servlet manipulation, one servlet directly invokes the
methods of another. These servlets can get references to other servlets using
getServletNames() and getServlet(String name).
Servlet reuse: In Servlet reuse, one servlet uses another’s abilities for its own
purposes. In some cases, this requires forcing a servlet load using a manual
HTTP request.
Servlet collaboration: In Servlet collaboration, the cooperating servlets share
information. Servlets can share information using the system properties list,
using a shared object, or using inheritance.

27. What are the reasons we use inter-servlet communication?


There are three major reasons to use the inter servlet communication:


Direct servlet manipulation


Servlet reuse
Servlet collaboration

28. What do you mean by Servlet Manipulation?


When one servlet accesses the loaded servlets on its server, it is called Servlet
Manipulation. It also optionally performs some task on one or more of them. A
servlet gets information about other servlets through the ServletContext object. We
use getServlet() to get a particular servlet:

public Servlet ServletContext.getServlet(String name) throws ServletException

29. What is the javax.servlet package?


The core of the Servlet API is the javax.servlet package. It includes the basic Servlet
interface, which all servlets must implement in one form or another, and an abstract
GenericServlet class for developing basic servlets.
This package comprises the following:
Classes for communicating with the host server and the client (ServletRequest and
ServletResponse)
Classes for communicating with the client over streams (ServletInputStream and
ServletOutputStream)
In situations where the underlying protocol is unknown, servlets should confine
themselves to the classes within this package.


Conclusion:

Unlike CGI and FastCGI, which use many processes to handle separate programs and
separate requests, servlets are all handled by separate threads within the webserver
process. Thus, the servlets are efficient and scalable. As servlets run within the web
server, they can interact very closely with the server to do things that are not possible
with CGI scripts.
An advantage of servlets is that they are portable: both across operating systems like
with Java and also across web servers.
All of the major web servers support servlets.
References and Resources:
Java Servlet Programming, by Jason Hunter, Published by O'Reilly Media, Inc.
Java Servlet & JSP Cookbook, by Bruce W. Perry

JSP Interview Questions


© Copyright by Interviewbit
Contents

JSP Interview Questions for Freshers


1. What is JSP?
2. How does JSP work?
3. How does JSP Initialization take place?
4. What is the use of JSP?
5. What are some of the advantages of using JSP?
6. What is Java Server Template Engines?
7. What are Servlets?
8. Explain the Life Cycle of a servlet.
9. What are the types of elements with Java Server Pages (JSP)?
10. What is the difference between JSP and Javascript?
11. What is JSP Expression Language (EL)?
12. What are JSP Operators?
13. Explain the JSP for loop.
14. Explain the JSP while loop.

JSP Interview Questions for Experienced


15. What are Implicit JSP Objects?
16. What do you mean by JavaBeans?
17. What is J2EE?
18. What is JSTL?


19. What are JSTL Core tags used for?


20. Which methods are used for reading form data using JSP?
21. What is an Exception Object?
22. How does JSP processing take place?
23. Explain the anatomy of a JSP page?
24. What are the various action tags used in JSP?
25. What is the JSP Scriptlet?
26. What is MVC in JSP?
27. What is a JSP Declaration?

Conclusion
28. Conclusion

Let's get Started

Introduction to Java Server Pages (JSP):

Java Server Pages (JSP) is a server-side programming technology that enables a
dynamic, platform-independent way of building web-based applications. JSP is an
integral part of Java EE, a complete platform for enterprise-class applications, which
means it can be used in everything from the simplest applications to the most complex
and demanding ones.
JavaServer Pages often serve the same purpose as programs implemented using the
Common Gateway Interface (CGI), but JSP offers several advantages over CGI.

JSP Interview Questions for Freshers


1. What is JSP?
JSP stands for Java Server Pages. This technology is used to create dynamic web
pages in the form of HyperText Markup Language (HTML). They have embedded Java
code pieces in them. They are an extension to the Servlet Technology and generate
Servlet from a page. It is common to use both servlets and JSP pages in the same
web apps.

Page 3 © Copyright by Interviewbit


JSP Interview Questions

2. How does JSP work?


The JSP container has a special servlet called the page compiler. All HTTP requests
with URLs that match the .jsp file extension are forwarded to this page compiler by
the configuration of the servlet container. The servlet container is turned into a JSP
container with this page compiler. When a .jsp page is first called, the page compiler
parses and compiles the .jsp page into a servlet class. The JSP servlet class is loaded
into memory on the successful compilation. For the subsequent calls, the servlet
class for that .jsp page is already in memory. Hence, the page compiler servlet will
always compare the timestamp of the JSP servlet with the JSP page. If the .jsp page
is more current, recompilation is necessary. With this process, once deployed, JSP
pages only go through the time-consuming compilation process once.

3. How does JSP Initialization take place?


When a container loads a JSP, it invokes the jspInit() method before servicing any
requests.

public void jspInit(){


// Initialization code...
}

4. What is the use of JSP?


Earlier, the Common Gateway Interface (CGI) was the only tool for developing dynamic
web content, and it was not very efficient: for every request that comes in, the web
server has to create a new operating system process, load an interpreter and a script,
execute the script, and then tear it all down again. This is taxing for the server and
doesn't scale well as traffic increases.
Alternatives such as ISAPI from Microsoft and Java Servlets from Sun Microsystems
offer better performance and scalability, but they generate web pages by embedding
HTML directly in programming language code. JavaServer Pages (JSP) changes all of
that.

5. What are some of the advantages of using JSP?


Better performance and quality as JSP is a specification and not a product.


JSP pages can be used in combination with servlets.
JSP is an integral part of J2EE, a complete platform for Enterprise-class
applications.
JSP supports both scripting and element-based dynamic content.

6. What is Java Server Template Engines?


A Java server template engine is a technology for separating presentation from
processing. Template engines have been developed as open-source products to help
get HTML out of the servlets. They are intended to be used together with pure code
components (servlets), with web pages containing scripting code used only for the
presentation part.
Two popular template engines are WebMacro (http://www.webmacro.org) and
FreeMarker (http://freemarker.sourceforge.net).

7. What are Servlets?


JSP pages are often combined with servlets in the same application. The JSP
specification is based on the Java servlet specification. Simply put, a servlet is a piece
of code that adds new functionality to a web server, just like CGI and proprietary
server extensions such as NSAPI and ISAPI. Compared to other technologies, servlets
have a number of advantages:
Platform and vendor independence
Integration
Efficiency
Scalability
Robustness and security


8. Explain the Life Cycle of a servlet.


A Java class that uses the Servlet Application Programming Interface (API) is a
Servlet. The Servlet API consists of many classes and interfaces that define some
methods. These methods make it possible to process HTTP requests in a web server-
independent manner.
A servlet is loaded when a web server receives a request that should be handled by it.
Once a servlet has been loaded, the same servlet instance (object) is called to process
succeeding requests. Eventually, the webserver needs to shut down the servlet,
typically when the web server itself is shut down.
The 3 life cycle methods are:
public void init(ServletConfig config)
public void service(ServletRequest req, ServletResponse res)
public void destroy( )
These methods define the interactions between the web server and the servlet.
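As an illustrative sketch (the class name and behaviour are made up), a servlet built directly on these three methods might look like this:

import java.io.IOException;
import javax.servlet.GenericServlet;
import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class LifeCycleServlet extends GenericServlet {

    public void init(ServletConfig config) throws ServletException {
        super.init(config);   // one-time initialization when the servlet is loaded
    }

    public void service(ServletRequest req, ServletResponse res)
            throws ServletException, IOException {
        // Called once per request by the web server.
        res.setContentType("text/plain");
        res.getWriter().println("Handling a request");
    }

    public void destroy() {
        // Release resources when the server shuts the servlet down.
    }
}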


9. What are the types of elements with Java Server Pages


(JSP)?
The three types of elements with Java Server Pages (JSP) are directive, action, and
scripting elements.
Following are the Directive Elements:


<%@ page ... %> : Defines page-dependent attributes, such as scripting language, error page, and buffering requirements.
<%@ include ... %> : Includes a file during the translation phase.
<%@ taglib ... %> : Declares a tag library, containing custom actions, used on the page.

The Action elements are:


<jsp:useBean> : Makes a JavaBeans component available on a page.
<jsp:getProperty> : Gets a property value from a JavaBeans component and adds it to the response.
<jsp:setProperty> : Sets a value for a JavaBeans property.
<jsp:include> : Includes the response from a servlet or JSP page during the request processing phase.
<jsp:forward> : Forwards the processing of a request to a JSP page or servlet.
<jsp:param> : Adds a parameter value to a request handed off to another servlet or JSP page using <jsp:include> or <jsp:forward>.
<jsp:plugin> : Generates HTML that contains the proper client browser-dependent elements needed to execute an applet with the Java Plugin software.

And lastly, the Scripting elements are:


<% ... %> : Scriptlet, used to embed scripting code.
<%= ... %> : Expression, used to embed Java expressions whose result is added to the response; also used as runtime action attribute values.
<%! ... %> : Declaration, used to declare instance variables and methods in the JSP page implementation class.

10. What is the difference between JSP and Javascript?


JSP is a server-side scripting language as it runs on the server. Whereas, JavaScript
runs on the client. Commonly, JSP is more used to change the content of a webpage,
and JavaScript for the presentation. Both are quite commonly used on the same
page.

11. What is JSP Expression Language (EL)?


Expression Language (EL) was introduced in JSP 2.0. It is a mechanism that simplifies
the accessibility of the data stored in Javabean components and other objects like
request, session, and application, etc. There are many operators in JSP that are used
in EL like arithmetic and logical operators to perform an expression.
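For example, EL expressions can be placed directly in the page markup (the bean, parameter, and attribute names used here are illustrative):

<p>Total price: ${item.price * item.quantity}</p>
<p>Adult: ${param.age >= 18}</p>
<p>Hello, ${sessionScope.user.name}</p>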

12. What are JSP Operators?


JSP Operators support most of the arithmetic and logical operators that are
supported by java within expression language (EL) tags.
Following are the frequently used jsp operators:


.           Access a bean property or Map entry.
[]          Access an array or List element.
()          Group a subexpression to change the evaluation order.
+           Addition
-           Subtraction or negation of a value
*           Multiplication
/ or div    Division
% or mod    Modulo (remainder)
== or eq    Test for equality
!= or ne    Test for inequality
< or lt     Test for less than
> or gt     Test for greater than
<= or le    Test for less than or equal
>= or ge    Test for greater than or equal
&& or and   Test for logical AND
|| or or    Test for logical OR
! or not    Unary Boolean complement


13. Explain the JSP for loop.


The JSP for loop is used to iterate over elements while a certain condition holds, and it
has the following three parts:
Initialization of the counter variable
The condition under which the loop keeps executing
The increment of the counter
The for loop syntax is as follows:

for (int i = 0; i < n; i++)
{
    // block of statements
}
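As a small illustrative JSP snippet, a scriptlet for loop can emit repeated markup:

<html>
<body>
<% for (int i = 1; i <= 3; i++) { %>
    <p>Row number <%= i %></p>
<% } %>
</body>
</html>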

14. Explain the JSP while loop.


The JSP while loop is used to iterate over elements as long as a single condition
remains true.
Syntax of the while loop:

while (i < n)
{
    // block of statements
}

JSP Interview Questions for Experienced


15. What are Implicit JSP Objects?


The main implicit objects, with their Java types, are:
request (javax.servlet.http.HttpServletRequest): The request object is used to get request information such as request parameters, header information, server name, etc.
response (javax.servlet.http.HttpServletResponse): The response is an instance of a class that represents the response that can be given to the client.
pageContext (javax.servlet.jsp.PageContext): This is used to get, set, and remove attributes from a particular scope.
session (javax.servlet.http.HttpSession): This is used to get, set, and remove attributes in session scope and also to get session information.
Other implicit objects include application (ServletContext), out (JspWriter), config (ServletConfig), page (the page's servlet instance), and exception (Throwable, available only on error pages).

16. What do you mean by JavaBeans?


JavaBeans component is a Java class that complies with certain coding conventions.
JSP elements often work with JavaBeans. For information that describes application
entities, JavaBeans are typically used as containers.

17. What is J2EE?


J2EE is basically a compilation of different Java APIs that have previously been
offered as separate packages. J2EE Blueprints describe how they can all be
combined. J2EE vendors can use a test suite to test their products for compatibility.
J2EE comprises the following enterprise-specific APIs:
JavaServer Pages (JSP)
Java Servlets
Enterprise JavaBeans (EJB)
Java Database Connectivity (JDBC)
Java Transaction API (JTA) and Java Transaction Service (JTS)
Java Naming and Directory Interface (JNDI)
Java Message Service (JMS)
Java IDL and Remote Method Invocation (RMI)
Java XML


18. What is JSTL?


JSTL stands for Java server pages standard tag library. It is a collection of custom JSP
tag libraries that provide common functionality for web development.
Following are some of the properties of JSTL:
Code is Neat and Clean.
Being a Standard Tag, it provides a rich layer of the portable functionality of JSP
pages.
It has Automatic Javabeans Introspection Support. The JSTL Expression
language handles JavaBean code very easily. We don't need to downcast the
objects, which have been retrieved as scoped attributes.
Easier for humans to read and easier for computers to understand.

19. What are JSTL Core tags used for?


The JSTL Core tags are used for the following purposes:


Iteration
Conditional logic
Catch exception
URL forward
Redirect, etc.
Following is the syntax to include a tag library:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
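Using the taglib declaration above, a small illustrative snippet with two of the core tags (the attribute name userList is made up) could look like this:

<c:if test="${not empty userList}">
    <c:forEach var="user" items="${userList}">
        <p><c:out value="${user}"/></p>
    </c:forEach>
</c:if>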

20. Which methods are used for reading form data using JSP?
JSP handles form data parsing automatically. It does so by using the following
methods, depending on the situation:
getParameter() − To get the value of a form parameter, call the
request.getParameter() method.
getParameterValues() − Call this method if a parameter appears more than once
and returns multiple values.
getParameterNames() − Use this method if you want a complete list of all
parameters in the current request.
getInputStream() − This method is used for reading a binary data stream from
the client.
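For illustration, a JSP page might read submitted form data like this (the field names are made up):

<html>
<body>
<%
    String firstName = request.getParameter("first_name");
    String[] colors = request.getParameterValues("favorite_color");
%>
<p>First name: <%= firstName %></p>
<p>Colors chosen: <%= (colors == null) ? 0 : colors.length %></p>
</body>
</html>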

21. What is an Exception Object?


The exception object is an instance of a subclass of Throwable (e.g., java.lang.
NullPointerException). It is only available on the error pages. The following table lists
out the important methods available in the Throwable class:


1 public String getMessage()


Returns a detailed message about the exception that has
occurred. This message is initialized in the Throwable
constructor.

2 public Throwable getCause()


Returns the cause of the exception as represented by a
Throwable object.

3 public String toString()


Returns the name of the class concatenated with the
result of getMessage().

4 public void printStackTrace()


Prints the result of toString() along with the stack trace
to System.err, the error output stream.

5 public StackTraceElement [] getStackTrace()


Returns an array containing each element on the stack
trace. The element at index 0 represents the top of the
call stack, and the last element in the array represents
the method at the bottom of the call stack.

6 public Throwable fillInStackTrace()


Fills the stack trace of this Throwable object with the
current stack trace, adding to any previous information
in the stack trace.

22. How does JSP processing take place?


So that all the JSP elements can be processed by the server, the JSP page is first turned
into a servlet, and then that servlet is executed. The two containers involved, the
servlet container and the JSP container, are often combined into one package under
the name "web container".
In the translation phase, the JSP container is responsible for converting the JSP page
into a servlet and compiling that servlet; the translation phase for a page is typically
initiated automatically when the first request for the page is received.
In the request processing phase, the JSP container is responsible for invoking the JSP
page implementation class to process each request and generate the response.

23. Explain the anatomy of a JSP page?


Different JSP elements are used for generating the parts of the page that differ for
each request. A JSP page is a regular web page with different JSP elements. The
three types of elements with JavaServer Pages are directive, action, and scripting
elements. JSP elements are often used to work with JavaBeans.
The elements of the page that are not JSP elements are simply called the “template
text”. The template text is commonly HTML, but it could also be any other text.


When a page request of JSP is processed, the template text and the dynamic content
generated by the JSP elements are merged, and the result is sent as the response to
the browser.

24. What are the various action tags used in JSP?


Various action tags used in JSP are as follows:


jsp:forward: This action tag forwards the request and response to another
resource.
jsp:include: This action tag is used to include another resource.
jsp:useBean: This action tag is used to create or locate bean objects.
jsp:setProperty: This action tag is used to set the value of a property of the bean.
jsp:getProperty: This action tag is used to print the value of a property of the bean.
jsp:plugin: This action tag is used to embed another component, such as an applet.
jsp:param: This action tag is used to set a parameter value. It is used mostly inside
jsp:forward and jsp:include.
jsp:fallback: This action tag can be used to print a message if the plugin is not
working.

25. What is the JSP Scriptlet?


The JSP scriptlet tag allows you to write Java code in a JSP file. While generating the
servlet from the JSP, the JSP container places scriptlet statements inside the
_jspService() method. The service method of the JSP is invoked for each client request,
so the code inside a scriptlet executes for every request.
Syntax of the scriptlet tag:
<% java code %>

Here <% %> are the scriptlet tags, and the Java code is placed within them.

26. What is MVC in JSP?


In MVC,
M stands for Model
V stands for View
C stands for the controller.


It is an architecture that separates business logic, presentation, and data. In this, the
flow starts from the view layer, where the request is raised and processed in the
controller layer. This is then sent to the model layer to insert data and get back the
success or failure message.

27. What is a JSP Declaration?


The tags used in declaring variables are called JSP Declaration tags. These are used in
declaring functions and variables. They are enclosed in <%!%> tag. Following is the
syntax for JSP Declaration:

<%@ page contentType="text/html" %>
<html>
<body>
<%!
    int a = 0;
    private int getCount() {
        a++;
        return a;
    }
%>
<p>Values of a are:</p>
<p><%= getCount() %></p>
</body>
</html>

Conclusion
28. Conclusion


The Java 2 Enterprise Edition (J2EE) takes the task of building an Internet presence
and transforms it to the point where developers can use Java to efficiently create
multi-tier, server-side applications. In late 1999, Sun Microsystems added a new
element to the collection of Enterprise Java tools, called the JavaServer Pages (JSP).
The JSP, built on top of Java servlets, is designed to increase the efficiency in which
programmers, and even nonprogrammers, can create web content.
JavaServer Pages helps in developing web pages that include dynamic content. A JSP
page can change its content based on any number of variable items. A JSP page not
only contains standard markup language elements like a regular web page but also
contains special JSP elements that allow the server to insert dynamic content in the
page. This combination of standard elements and custom elements allows for the
creation of powerful web apps.
References:
JavaServer Pages, 3rd Edition, O'Reilly.
Web Development with JavaServer Pages, by Duane and Mark.

Spring Interview Questions

@RequestMapping:
This provides the routing information and informs Spring that any HTTP
request matching the URL must be mapped to the respective method.
org.springframework.web.bind.annotation.RequestMapping has to be imported
to use this annotation.
@RestController:
This is applied to a class to mark it as a request handler thereby creating
RESTful web services using Spring MVC. This annotation adds the
@ResponseBody and @Controller annotation to the class.
org.springframework.web.bind.annotation.RestController has to be imported
to use this annotation.
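An illustrative sketch of the two annotations in use (the class name, path, and message are made up):

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // Any HTTP request matching /greeting is routed to this method.
    @RequestMapping("/greeting")
    public String greeting() {
        return "Hello from Spring";
    }
}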
Check out more Interview Questions on Spring Boot here.

Spring AOP, Spring JDBC, Spring Hibernate


Interview Questions
27. What is Spring AOP?

Page 19 © Copyright by Interviewbit


Spring Interview Questions

Spring AOP (Aspect Oriented Programming) is similar to OOP (Object Oriented
Programming) in that it also provides modularity.
In AOP, the key unit of modularity is the aspect, or concern, which is a stand-alone
module in the application. Some concerns have centralized code, but others are
scattered or tangled through the code, as in the case of logging or transactions.
These scattered concerns are called cross-cutting concerns.
A cross-cutting concern such as transaction management, authentication,
logging, or security is a concern that can affect the whole application and
should be centralized in one location in the code as much as possible, for
security and modularity purposes.
AOP provides a platform to dynamically add these cross-cutting concerns before,
after, or around the actual logic using simple pluggable configurations.
This makes the code easy to maintain: concerns can be added or removed simply
by modifying configuration, without recompiling the complete source code.
There are two ways of implementing Spring AOP:
Using XML configuration files
Using the AspectJ annotation style

28. What is an advice? Explain its types in spring.


An advice is the implementation of a cross-cutting concern that can be applied to
other modules of the Spring application. There are mainly five types of advice (a small
annotation sketch follows the list):


Before:
This advice executes before a join point, but it does not have the ability to
prevent execution flow from proceeding to the join point (unless it throws
an exception).
To use this, use the @Before annotation.
AfterReturning:
This advice is executed after a join point completes normally, i.e. if a
method returns without throwing an exception.
To use this, use the @AfterReturning annotation.
AfterThrowing:
This advice is executed if a method exits by throwing an exception.
To use this, use the @AfterThrowing annotation.
After:
This advice is executed regardless of the means by which a join point
exits (normal return or exception).
To use this, use the @After annotation.
Around:
This is the most powerful kind of advice; it surrounds a join point such as a
method invocation.
To use this, use the @Around annotation.
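A minimal sketch of an AspectJ-style aspect, assuming a made-up com.example.service package:

import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class LoggingAspect {

    // Runs before every method in the (hypothetical) service package.
    @Before("execution(* com.example.service.*.*(..))")
    public void logBefore() {
        System.out.println("Entering a service method");
    }

    // Runs after the join point exits, whether normally or with an exception.
    @After("execution(* com.example.service.*.*(..))")
    public void logAfter() {
        System.out.println("Leaving a service method");
    }
}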

29. What is Spring AOP Proxy pattern?


A proxy pattern is a well-used design pattern where a proxy is an object that
looks like another object but adds special functionality to it behind the scenes.
Spring AOP follows proxy-based pattern and this is created by the AOP
framework to implement the aspect contracts in runtime.
The standard JDK dynamic proxies are default AOP proxies that enables any
interface(s) to be proxied. Spring AOP can also use CGLIB proxies that are
required to proxy classes, rather than interfaces. In case a business object does
not implement an interface, then CGLIB proxies are used by default.

30. What are some of the classes for Spring JDBC API?


Following are the classes:


JdbcTemplate
SimpleJdbcTemplate
NamedParameterJdbcTemplate
SimpleJdbcInsert
SimpleJdbcCall
The most commonly used one is JdbcTemplate. This internally uses the JDBC
API and has the advantage that we don’t need to create connection, statement,
start transaction, commit transaction, and close connection to execute different
queries. All these are handled by JdbcTemplate itself. The developer can focus
on executing the query directly.
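For illustration, typical JdbcTemplate calls look like this (a sketch only: the table and column names are made up, and a JdbcTemplate created from a configured DataSource is assumed):

int rows = jdbcTemplate.update(
        "INSERT INTO employee (name, dept) VALUES (?, ?)", "Rishi", "IT");

Integer count = jdbcTemplate.queryForObject(
        "SELECT COUNT(*) FROM employee WHERE dept = ?", Integer.class, "IT");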

31. How can you fetch records by Spring JdbcTemplate?


This can be done by using the query method of JdbcTemplate. There are two
interfaces that help to do this:
ResultSetExtractor:
It defines only one method extractData that accepts ResultSet
instance as a parameter and returns the list.
Syntax:

public T extractData(ResultSet rs) throws SQLException,DataAccessException;

RowMapper:
This is an enhanced version of ResultSetExtractor that saves a lot of code.
It allows mapping a single row of the relation to an instance of a user-defined
class.
It iterates over the ResultSet internally and adds each mapped object to the
result collection, thereby saving a lot of code needed to fetch records.
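A hedged sketch of a RowMapper-based query (the Employee class, its setters, and the column names are assumptions for the example; imports from org.springframework.jdbc.core and java.sql are needed):

List<Employee> employees = jdbcTemplate.query(
        "SELECT id, name FROM employee",
        new RowMapper<Employee>() {
            public Employee mapRow(ResultSet rs, int rowNum) throws SQLException {
                // Map one row of the result set to one Employee instance.
                Employee e = new Employee();
                e.setId(rs.getInt("id"));
                e.setName(rs.getString("name"));
                return e;
            }
        });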

32. What is Hibernate ORM Framework?


Object-relational mapping (ORM) is the phenomenon of mapping application


domain model objects to the relational database tables and vice versa.
Hibernate is the most commonly used java based ORM framework.

33. What are the two ways of accessing Hibernate by using


Spring?
Inversion of Control approach by using Hibernate Template and Callback.
Extending HibernateDAOSupport and Applying an AOP Interceptor node.

34. What is Hibernate Validator Framework?


Data validation is a crucial part of any application. We can find data validation in:
UI layer before sending objects to the server
At the server-side before processing it
Before persisting data into the database
Validation is a cross-cutting concern/task, so as good practice, we should try to
keep it apart from our business logic. JSR303 and JSR349 provide specifications
for bean validation by using annotations.
This framework provides the reference implementation for JSR303 and JSR349
specifications.

35. What is HibernateTemplate class?


Prior to Hibernate 3.0.1, Spring provided two helper classes:
HibernateDaoSupport, to get the Session from Hibernate, and
HibernateTemplate, for Spring transaction management purposes.
From Hibernate 3.0.1 onwards, however, we can instead use the SessionFactory
getCurrentSession() method to get the current session and still get the Spring
transaction management benefits.
HibernateTemplate has the benefit of exception translation, but that can be
achieved just as easily by using the @Repository annotation on the DAO classes.
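A minimal sketch of the getCurrentSession() approach (the DAO and entity names are made up):

import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class EmployeeDao {

    @Autowired
    private SessionFactory sessionFactory;

    @Transactional
    public void save(Employee employee) {
        // Uses the Session bound to the current Spring-managed transaction.
        sessionFactory.getCurrentSession().save(employee);
    }
}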

Spring MVC Interview Questions


36. What is the Spring MVC framework?

Spring MVC is Spring's web framework, built on the Servlet API and following the
Model-View-Controller pattern. A central DispatcherServlet acts as the front
controller: it receives each request, dispatches it to a handler method in a
@Controller class, and then renders the selected view (for example a JSP) with the
model data.
JPA Interview Questions


© Copyright by Interviewbit
Contents

JPA Interview Questions for Freshers


1. What is JPA?
2. What is ORM Framework and how is JPA related to that?
3. What are some benefits of using an ORM framework like JPA?
4. Can you tell the difference between JPA and Hibernate?
5. What are entities in JPA? Explain the concept in detail.
6. What is JPQL and how is it used in JPA?
7. What is a database transaction and how is it used in JPA?
8. What are the advantages of using JPA over JDBC?
9. Difference between JPA Repository and CRUD Repository? Explain with the help of
an example.
10. What is a Named Query in JPA? How is it used? And what are the benefits of
using this?
11. What are the various query methods in JPA to retrieve data from the database?
List some of the most used methods.
12. Describe in detail about the Persistence Unit in JPA?
13. What is the purpose of EntityManager in JPA?
14. What is the difference between EntityManager.find() and
EntityManager.getReference() methods in JPA?
15. What is the purpose of the @JoinColumn annotation in JPA?
16. What types of cascades does JPA support?
17. What is the difference between a detached and attached entity in JPA?
18. What is the purpose of the @Transactional annotation in JPA?
19. Difference between JpaRepository.save() and JpaRepository.saveAndFlush()
methods?
20. What is the purpose of the EntityManagerFactory in Spring Data JPA?

JPA Interview Questions for Experienced


21. Explain in detail the JPA application life cycle?
22. How does JPA handle optimistic locking? Can you give an example of how you
would implement optimistic locking in JPA?
23. What is the purpose of the @Version annotation in JPA? How is it used in
optimistic locking? Explain the concept in detail.
24. How can you use JPA to perform pagination of query results? What are the
advantages of using pagination over fetching all results at once?
25. How would you implement a custom JPA entity listener? Can you give an
example of when you might use a custom entity listener in your application?
26. How can you use JPA to handle optimistic concurrency control? Can you explain
how the EntityManager.lock() method works?
27. What is the purpose of the @OneToOne and @OneToMany annotations in JPA?
Explain in detail with examples.
28. What types of identifier generation does JPA support?
29. Can you explain how JPA handles entity state transitions (e.g. from new to
managed, managed to remove, etc.)? What are some best practices for
managing entity states in JPA?
30. Explain the difference between a shared cache mode and a local cache mode in
JPA? What are the advantages and disadvantages of each?
31. What is the difference between CascadeType.ALL and CascadeType.PERSIST in
JPA?

Let's get Started

Introduction
If you are a fresh graduate looking for a job in the software development industry, or
an experienced developer planning to switch to a new job, it is essential to prepare
for the interviews. One of the essential skills for any Java developer is understanding
Java Persistence API (JPA), which is the standard specification for mapping Java
objects to relational databases.
In this article, we have compiled a list of JPA Interview Questions that are essential
and frequently asked during the interviews. We have categorized these questions
into the following sections:
JPA interview questions for freshers
JPA interview questions for experienced developers
JPA MCQ Questions
The questions in the first section are designed to test the basic knowledge of JPA,
while the questions in the second section are more advanced and require a deeper
understanding of the framework.
So, let's get started and dive into the list of important questions for the JPA
interview.

JPA Interview Questions for Freshers


1. What is JPA?


Java Persistence API (JPA) is a specification for managing data persistence in Java
applications. JPA is used to simplify the process of writing code for data persistence
by providing a high-level abstraction layer over the underlying data storage
technology, such as relational databases. JPA helps in mapping Java objects to
relational database tables and allows developers to perform CRUD (create, read,
update, delete) operations on data. JPA is often used together with Hibernate,
a popular open-source ORM (object-relational mapping) framework. It is a part of the
Java EE platform and is commonly used in enterprise applications.

2. What is ORM Framework and how is JPA related to that?


An Object-Relational Mapping (ORM) framework is a software tool that allows
developers to map object-oriented programming language constructs to relational
database constructs. It provides a layer of abstraction between the application code
and the database, allowing developers to work with objects and classes rather than
SQL queries.
JPA (Java Persistence API) is a Java EE standard that provides an ORM framework for
mapping Java objects to relational databases. It defines a set of interfaces and
annotations that allow developers to create persistent entities, query data, and
manage relationships between entities.
JPA itself is the standard for managing persistence in Java applications: it provides a
set of standard interfaces and annotations that can be used with any JPA-compliant
ORM framework (Hibernate being the most common implementation).


Conceptually, the ORM framework provides a set of interfaces and annotations that
allow developers to map Java objects to relational databases, and the JPA
implementation uses this ORM layer to interact with the relational database and map
Java objects to it.

3. What are some benefits of using an ORM framework like JPA?


Using an Object-Relational Mapping (ORM) framework like JPA (Java Persistence API)
has several benefits. Some of them are:


Increased Productivity: JPA provides a high level of abstraction that allows


developers to focus on business logic instead of writing SQL queries. This can
lead to faster development cycles and fewer errors.
Portability: JPA abstracts away the details of the underlying database, which
makes it possible to switch databases without changing the application code.
This can save a lot of time and effort when porting applications between
different databases.
Scalability: JPA provides a caching mechanism that can help improve
application performance by reducing the number of database queries needed to
access data. This can help an application scale better as the number of users and
amount of data grows.
Maintainability: JPA provides a clear separation between application logic and
persistence logic. This makes it easier to maintain and modify an application
over time.
Standardization: JPA is a Java EE standard, which means that it is widely
adopted and supported by many different vendors. This helps ensure that the
application code is portable and compatible with a wide range of different
platforms.

4. Can you tell the difference between JPA and Hibernate?


JPA (Java Persistence API) is a specification for ORM (Object-Relational Mapping)
in Java, while Hibernate is an implementation of JPA.
In other words, JPA provides a standard set of interfaces and annotations for
ORM, while Hibernate is a concrete implementation of those interfaces and
annotations.
You can find More Differences listed here - JPA vs Hibernate

5. What are entities in JPA? Explain the concept in detail.


In JPA, an entity is a lightweight Java class that represents a persistent data object.
Entities are used to map Java objects to database tables, where each entity
corresponds to a row in the table.


Entities are defined using annotations, which provide metadata about how the entity
should be persisted and how it relates to other entities in the application. The most
commonly used annotation for defining entities is @Entity, which marks a Java class
as an entity. Entities typically have instance variables that correspond to columns in
the database table, and methods that provide access to these variables. JPA also
provides annotations for defining relationships between entities, such as
@OneToOne, @OneToMany, @ManyToOne, and @ManyToMany.
Entities can be persisted in the database using the JPA “EntityManager” interface,
which provides methods for creating, reading, updating, and deleting entities. When
an entity is persisted, JPA creates a corresponding row in the database table, and
when an entity is read from the database, JPA populates the entity's instance
variables with the corresponding column values.

An entity is defined using fields and methods: each field corresponds to a column in
the database table, and the methods provide access to these fields. The id field is
typically annotated with the @Id annotation to indicate that it is the primary key for
the entity.
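A minimal sketch of such an entity (the class and field names are made up for the example):

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Employee {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;          // primary key column

    private String name;      // maps to a "name" column by default

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}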

6. What is JPQL and how is it used in JPA?

Page 7 © Copyright by Interviewbit


JPA Interview Questions

JPQL stands for Java Persistence Query Language. It is a platform-independent


object-oriented query language that is used to retrieve data from a relational
database using Java Persistence API. JPQL is similar to SQL (Structured Query
Language) in terms of syntax, but instead of operating on tables and columns, it
operates on JPA entities and their corresponding attributes.
JPQL is used in JPA to create dynamic queries that can be executed against a
relational database. These queries are defined as strings and can be executed using
the JPA EntityManager interface. JPQL allows developers to write complex queries
that can retrieve data from multiple tables, perform aggregations, and filter results
based on conditions.
JPQL queries can be run against various databases without modification because it is
intended to be portable across various databases. Additionally, JPQL supports object-
oriented features like polymorphism and inheritance, enabling developers to create
queries that interact with object hierarchies as compared to just flat tables.
Let’s look at the sample code to understand it better.

String jpql = "SELECT e FROM Employee e WHERE e.department = :dept";

TypedQuery<Employee> query = entityManager.createQuery(jpql, Employee.class);


query.setParameter("dept", "IT");

List<Employee> results = query.getResultList();

In this example, the JPQL query selects all ‘Employee’ objects that belong to the ‘IT’
department. The query is executed using the ‘createQuery’ method of the
‘EntityManager’ interface, and the ‘setParameter’ method is used to bind the value of
the ‘dept’ parameter to the query

7. What is a database transaction and how is it used in JPA?


A database transaction is a sequence of database operations that are executed as a
single logical unit of work. A transaction is typically used to ensure data consistency
and integrity, by ensuring that either all of the operations in the transaction are
executed, or none of them is executed.


In JPA, transactions are used to manage the interactions between Java code and the
underlying relational database. JPA provides a transaction management system that
allows developers to define and control transactions in their applications.
JPA defines a ‘javax.persistence.EntityTransaction’ interface that represents a
transaction between a Java application and the database. A typical usage pattern for
a JPA transaction involves the following steps:
Obtain an instance of the ‘EntityManager’ interface.
Begin a transaction using the ‘EntityTransaction’ interface's ‘begin()’ method.
Perform one or more database operations using the ‘EntityManager’ interface's
persistence methods, such as ‘persist()’, ‘merge()’, or ‘remove()’.
Commit the transaction using the ‘EntityTransaction’ interface's ‘commit()’ method.
If any errors occur during the transaction, roll back the transaction using the
‘EntityTransaction’ interface's ‘rollback()’ method.
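Putting these steps together, a minimal sketch could look like the following (the Employee entity, the field value, and the entityManagerFactory variable are assumed for illustration):

EntityManager em = entityManagerFactory.createEntityManager();
EntityTransaction tx = em.getTransaction();
try {
    tx.begin();                          // begin the transaction
    Employee employee = new Employee();
    employee.setName("Alice");
    em.persist(employee);                // perform persistence operations
    tx.commit();                         // commit the transaction
} catch (RuntimeException e) {
    if (tx.isActive()) {
        tx.rollback();                   // roll back on error
    }
    throw e;
} finally {
    em.close();
}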

8. What are the advantages of using JPA over JDBC?


JPA is a higher-level abstraction of JDBC (Java Database Connectivity) that provides
several advantages over JDBC. Here are some of the key advantages of using JPA over
JDBC:


Object-Relational Mapping: It offers an Object-Relational Mapping (ORM) framework that enables developers to map Java objects to database tables
without having to create SQL queries. Developers will have to write less code as
a result, and the codebase will be simpler to maintain.
Portability: It is a standardized API that is independent of any specific database
implementation. This means that applications written using JPA can be easily
ported to different databases without having to rewrite the database access
code.
Increased Productivity: It offers a higher-level API that is simpler and easier to
use than JDBC. This reduces the amount of time that developers spend writing
and debugging database access code, and allows them to focus on other aspects
of the application.
Improved Performance: It uses a caching mechanism that can enhance performance by minimizing the number of database queries that are run. This may lead to quicker response times and improved scalability.
Transaction Management: It offers a transaction management system that
simplifies the process of managing database transactions.
Object-Oriented Features: It provides support for object-oriented features such
as inheritance and polymorphism. This allows developers to work with Java
objects instead of relational tables, which is easy to maintain.

9. Difference between JPA Repository and CRUD Repository? Explain with the help of an example.
JPA Repository is an interface provided by Spring Data JPA that extends the
JpaRepository interface. It provides a set of methods for performing common
operations on entities, such as save, delete, findAll, and findById. In addition to these
methods, it also allows you to define custom query methods using the @Query
annotation.
On the other hand, CRUD Repository is an interface provided by Spring Data that
provides a set of methods for performing CRUD (Create, Read, Update, Delete)
operations on entities. It provides basic functionality for working with data, such as
save, delete, findById, and findAll.


In short, JPA Repository extends the functionality of the CRUD Repository by providing additional methods and the ability to define custom queries. However, if
you only need basic CRUD functionality, then using CRUD Repository may be
sufficient.
Example -
Let's say we have an entity called "Product" with the following properties: id, name,
description, and price. We want to create a Spring Data repository to perform CRUD
operations on this entity.
First, let's create a repository using the CRUD Repository interface:

import org.springframework.data.repository.CrudRepository;
public interface ProductRepository extends CrudRepository<Product, Long> {
}

This interface provides basic CRUD functionality for the Product entity, such as
save(), delete(), findById(), and findAll().
Now let's create a repository using the JPA Repository interface:

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

public interface ProductRepository extends JpaRepository<Product, Long> {

    List<Product> findByPriceGreaterThan(double price);

    @Query("SELECT p FROM Product p WHERE p.name LIKE %?1%")
    List<Product> findByNameContaining(String keyword);
}

This interface extends the JpaRepository interface and provides additional methods,
such as findByPriceGreaterThan() and findByNameContaining(). These methods are
defined using Spring Data's method name query and the @Query annotation,
respectively.

10. What is a Named Query in JPA? How is it used? And what are
the benefits of using this?


In JPA, a named query is a pre-defined query that is given a name and can be used in
multiple places in an application. It is defined in the entity class using the
@NamedQuery annotation and can be used to retrieve entities based on specific
criteria.
Consider the below snippet to understand better about this -

@Entity
@NamedQuery(
name = "Product.findByPriceGreaterThan",
query = "SELECT p FROM Product p WHERE p.price > :price"
)
public class Product {
// ...
}

In this example, we can see a named query called "Product.findByPriceGreaterThan", which selects all products whose price is greater than a given value. The query is
defined using JPQL syntax and uses a named parameter ":price" to specify the price
value.
To use the named query in our code, we can retrieve it using EntityManager's
createNamedQuery() method and pass in the name of the query:

TypedQuery<Product> query = entityManager.createNamedQuery("Product.findByPriceGreaterThan", Product.class);


query.setParameter("price", 10.0);
List<Product> products = query.getResultList();

In this code snippet, we create a TypedQuery object using the named query
"Product.findByPriceGreaterThan" and pass in the Product class as the expected
result type. We then set the value of the named parameter ":price" to 10.0 and
execute the query using getResultList() to retrieve a list of products that match the
criteria.
Using named queries in JPA has several benefits, including:


Reusability: named queries can be defined once and used multiple times
throughout the application.
Performance: named queries are compiled and cached by the JPA provider,
which can improve performance for frequently used queries.
Maintenance: named queries can be easily modified or updated in a central
location, rather than scattered throughout the codebase.

11. What are the various query methods in JPA to retrieve data
from the database? List some of the most used methods.
In JPA, there are several query methods that can be used to retrieve data from the
database:
createQuery(): This method creates a JPQL (Java Persistence Query Language)
query that can be used to retrieve data from the database. JPQL queries are
similar to SQL queries, but they operate on JPA entities rather than database
tables.
createNamedQuery(): This method creates a named JPQL query that has been
defined in the entity class using the @NamedQuery annotation.
createNativeQuery(): This method creates a native SQL query that can be used
to retrieve data from the database using SQL syntax. Native SQL queries can be
used when JPQL is not sufficient for complex queries or for accessing database-
specific features.
find(): This method retrieves an entity from the database by its primary key.
getReference(): This method retrieves a reference to an entity from the
database by its primary key, without actually loading the entity data from the
database.
createQuery(criteriaQuery): This method creates a JPA Criteria API query that
can be used to retrieve data from the database. The Criteria API provides a type-
safe, object-oriented way to construct queries at runtime.
getSingleResult(): This method executes a query and returns a single result. If
the query returns more than one result or no results, an exception is thrown.
getResultList(): This method executes a query and returns a list of results. If the
query returns no results, an empty list is returned.
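A short sketch illustrating a few of these methods (the Employee entity, the id values, and the query strings below are assumed for illustration):

// Look up an entity by its primary key
Employee e1 = entityManager.find(Employee.class, 1L);

// Obtain a lazy reference without loading the row immediately
Employee e2 = entityManager.getReference(Employee.class, 2L);

// JPQL query returning a list of results
List<Employee> all = entityManager
        .createQuery("SELECT e FROM Employee e", Employee.class)
        .getResultList();

// Native SQL query, useful for database-specific features
List<?> rows = entityManager
        .createNativeQuery("SELECT * FROM employee", Employee.class)
        .getResultList();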


12. Describe in detail about the Persistence Unit in JPA?


A Persistence Unit in JPA is a set of one or more entity classes that are managed
together as a unit for the purpose of data persistence. It is a logical grouping of entity
classes and their associated metadata, including their mappings to database tables,
relationships between entities, and any other configuration information required to
persist and retrieve data. A Persistence Unit is defined in a persistence.xml file,
which is typically located in the META-INF directory of a Java project. This file
contains metadata that describes the properties and configuration of the Persistence
Unit, including the database connection details, the list of entity classes to be
managed, and any additional configuration options.
When an application is deployed, the JPA provider reads the persistence.xml file and
creates a Persistence Unit that is used to manage the entities within it. The
application can then use the entity classes and the ‘EntityManager’ API to perform
CRUD (Create, Read, Update, Delete) operations on the database.
JPA supports two types of Persistence Units:
Container-Managed Persistence Unit: In this type, the application server
manages the lifecycle of the Persistence Unit and its associated EntityManager
instances.
Application-Managed Persistence Unit: In this type, the application manages
the lifecycle of the Persistence Unit and its associated EntityManager instances.
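For illustration, assuming a persistence unit named "myUnit" is declared in META-INF/persistence.xml (the name is hypothetical), an application-managed EntityManager can be obtained like this:

// "myUnit" is a hypothetical persistence unit name from persistence.xml
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myUnit");
EntityManager em = emf.createEntityManager();

// ... perform CRUD operations with the EntityManager ...

em.close();
emf.close();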

13. What is the purpose of EntityManager in JPA?


The EntityManager in JPA is the primary interface through which an application
interacts with the Persistence Context, which is responsible for managing the
lifecycle of entity objects and their persistence in the database. The EntityManager
provides a set of APIs for performing CRUD (Create, Read, Update, Delete) operations
on the database using the entity objects.
The EntityManager is responsible for the following tasks:


Creating and removing entity objects.


Retrieving entity objects from the database.
Updating and persisting changes made to entity objects.
Managing the association between entities.
Managing the lifecycle of entity objects.
Executing queries on the database using JPQL (Java Persistence Query
Language).
Caching entity objects for improved performance
The EntityManager API provides several methods for performing these tasks, such as
’persist()’, ‘find()’, ‘merge()’, ‘remove()’, and ‘createQuery()’. The EntityManager is
typically obtained from a PersistenceContext, which is created and managed by the
JPA provider, either by injection or programmatically.

14. What is the difference between EntityManager.find() and EntityManager.getReference() methods in JPA?


EntityManager.find():
It returns the entity instance immediately if it exists in the persistence context or the database.
If the entity instance is not found in the persistence context or the database, it returns null.
It immediately loads the entity from the database and returns it as a fully initialized object.
It can be used to retrieve an entity in either a managed or detached state.
It throws an IllegalArgumentException if the argument passed to the method is not a valid entity type.

EntityManager.getReference():
It returns a "reference" to the entity, which may not actually be loaded from the database until it is accessed.
If the entity instance is not found in the persistence context or the database, an EntityNotFoundException will be thrown when any method other than getId() is called on the reference.
It returns a lightweight reference object that only contains the entity's primary key and does not actually load the entity from the database until a method other than getId() is called on the reference.
It only returns a managed entity if it already exists in the persistence context, otherwise it returns a "hollow" reference that is not managed by the persistence context.
It throws an EntityNotFoundException if the entity does not exist in the database.
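A brief sketch of the practical difference, assuming an Employee entity with a getName() accessor and an id (99) that does not exist in the database:

// find(): loads the row right away; returns null when it does not exist
Employee found = em.find(Employee.class, 99L);   // found is null here

// getReference(): returns a proxy holding only the id; no database access yet
Employee ref = em.getReference(Employee.class, 99L);
Long id = ref.getId();          // safe - only the identifier is needed
String name = ref.getName();    // triggers loading and throws EntityNotFoundException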


15. What is the purpose of the @JoinColumn annotation in JPA?


The @JoinColumn annotation in JPA is used to specify a join column for a
relationship mapping between two entities. It is used to define the columns in a table
that will be used to establish the association between two entities, where one entity
is the owner of the relationship (the one that has the foreign key column), and the
other is the inverse side.
The @JoinColumn annotation can be applied to a field or property that is mapped as
a foreign key column in the database. It allows you to specify the name of the
column, its type, its nullable attribute, and its foreign key constraints. You can also
use the @JoinColumn annotation to specify the name of the table that contains the
foreign key column.
The @JoinColumn annotation can be used with the @ManyToOne, @OneToOne,
@OneToMany, and @ManyToMany annotations to define the join columns for the
relationship mapping.
Here's an example of using the @JoinColumn annotation to define a join column in
JPA:

@Entity
public class Employee {
@Id
@GeneratedValue
private Long id;

@ManyToOne
@JoinColumn(name="department_id")
private Department department;

// other fields and methods


}

16. What types of cascades does JPA support?


JPA supports several types of cascading operations to propagate changes made to
entities across relationships. The cascading operations can be specified using the
“javax.persistence.CascadeType” enumeration. Here are the different types of
cascading operations supported by JPA:


CascadeType.ALL: This cascades all operations - including persist, merge, remove, refresh, and detach - from the parent entity to the associated child entities.
CascadeType.PERSIST: This cascades the persist operation from the parent entity to the associated child entities.
CascadeType.MERGE: This cascades the merge operation from the parent entity to the associated child entities.
CascadeType.REMOVE: This cascades the remove operation from the parent entity to the associated child entities.
CascadeType.REFRESH: This cascades the refresh operation from the parent entity to the associated child entities.
CascadeType.DETACH: This cascades the detach operation from the parent entity to the associated child entities.
(Cascading of the lock operation is not part of the standard javax.persistence.CascadeType enumeration; it is offered as a provider-specific extension, for example by Hibernate.)
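For example, a parent-child mapping (the Department and Employee names here are just for illustration) can declare cascading like this:

@Entity
public class Department {
    @Id
    @GeneratedValue
    private Long id;

    // persist, merge, remove, refresh and detach on a Department
    // are propagated to its employees
    @OneToMany(mappedBy = "department", cascade = CascadeType.ALL)
    private List<Employee> employees;

    // other fields and methods
}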

17. What is the difference between a detached and attached entity in JPA?


Attached Entity:
Definition: An entity that is currently being managed by the persistence context.
Persistence Context: The entity is associated with a persistence context.
Entity State: The entity is in a synchronized state with the database.
Automatic Updates: Any changes made to the entity's state are automatically synchronized with the database.
Persistence Operations: The entity can be used to perform CRUD operations using EntityManager without any additional steps.

Detached Entity:
Definition: An entity that was previously managed by the persistence context but is no longer attached.
Persistence Context: The entity is not associated with a persistence context.
Entity State: The entity is not in a synchronized state with the database.
Automatic Updates: Any changes made to the entity's state are not automatically synchronized with the database.
Persistence Operations: The entity needs to be reattached to a persistence context before any CRUD operations can be performed.
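A short sketch of the difference in behaviour, assuming an Employee entity and an open EntityManager em:

Employee emp = em.find(Employee.class, 1L);   // attached: changes are tracked

em.detach(emp);                               // now detached
emp.setName("Changed while detached");        // this change is NOT synchronized automatically

Employee reattached = em.merge(emp);          // re-attach; the returned instance is managed again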


18. What is the purpose of the @Transactional annotation in JPA?
The @Transactional annotation in JPA is used to indicate that a method should be
executed within a transaction. It is used to define the scope of a transaction, which
determines when changes made to the database will be committed. It can be applied
at the class or method level, and it is typically used with the Spring Framework's
declarative transaction management feature.
When a method annotated with @Transactional is called, a transaction will be
started before the method is executed, and any changes made to the database within
the method will be persisted to the database when the transaction is committed. If
an exception is thrown within the method, the transaction will be rolled back, and
any changes made to the database within the method will be discarded.
Example of using the @Transactional annotation in JPA:

@Service
@Transactional
public class EmployeeService {

    @Autowired
    private EntityManager entityManager;

    public void createEmployee(Employee employee) {
        entityManager.persist(employee);
    }

    public void updateEmployee(Employee employee) {
        entityManager.merge(employee);
    }

    public void deleteEmployee(Employee employee) {
        entityManager.remove(employee);
    }

    public Employee findEmployeeById(Long id) {
        return entityManager.find(Employee.class, id);
    }
}


In this example, the @Transactional annotation is applied to the EmployeeService class, which means that all public methods in the class will be executed within a
transaction. When any of the CRUD methods are called, a transaction will be started
before the method is executed, and any changes made to the database will be
persisted when the transaction is committed.

19. Difference between JpaRepository.save() and JpaRepository.saveAndFlush() methods?


JpaRepository.save():
Execution: It saves an entity to the database and returns the saved entity.
Return Value: It returns the saved entity.
Transaction: The changes made to the entity are not immediately persisted in the database; they are persisted when the current transaction is committed.
Use Case: Use this method when you want to save an entity and continue working with it in the same transaction.
Performance: This method is faster than saveAndFlush() because it does not immediately persist changes to the database.

JpaRepository.saveAndFlush():
Execution: It saves an entity to the database and immediately flushes the changes to the database.
Return Value: It returns the saved entity.
Transaction: The changes made to the entity are immediately flushed to the database, regardless of whether the current transaction is committed or not.
Use Case: Use this method when you want to save an entity and immediately see the changes reflected in the database.
Performance: This method is slower than save() because it immediately persists changes to the database, which can be a performance bottleneck if used excessively.
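For illustration, assuming the ProductRepository shown earlier, a productRepository instance, and a hypothetical Product constructor taking a name and a price:

// save(): the change reaches the database when the surrounding transaction commits
Product saved = productRepository.save(new Product("Phone", 499.0));

// saveAndFlush(): the change is flushed to the database immediately,
// although it can still be rolled back if the transaction later fails
Product flushed = productRepository.saveAndFlush(new Product("Laptop", 999.0));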


20. What is the purpose of the EntityManagerFactory in Spring Data JPA?
EntityManagerFactory in Spring Data JPA serves the following purposes:
The EntityManagerFactory in Spring Data JPA is responsible for creating
EntityManager instances.
It reads the persistence configuration and creates EntityManager instances
based on that configuration.
The EntityManagerFactory manages the lifecycle of EntityManager instances and
is thread-safe.
It is responsible for managing the connection to the database and can be
configured with various properties to control the behavior of the EntityManager
instances it creates.
In Spring Data JPA, the EntityManagerFactory is usually created automatically by
the framework and injected into the application's code.

JPA Interview Questions for Experienced


21. Explain in detail the JPA application life cycle?
The lifecycle of a JPA application can be divided into several stages, each with its own
set of actions and interactions between the various components involved. These
stages are:


Entity Class Creation: The first stage in the lifecycle of a JPA application is the
creation of entity classes. Entity classes are Java classes that represent database
tables and have properties that correspond to columns in those tables.
Entity Mapping: The next stage is entity mapping, which involves defining the
mapping between the entity classes and the database tables. This is typically
done using annotations or XML configuration files, and it specifies how the
properties of the entity classes correspond to the columns in the database
tables.
Persistence Unit Creation: The third stage is the creation of a Persistence Unit,
which is a logical grouping of one or more entity classes and their associated
metadata. This is typically done using a persistence.xml file, which specifies the
database connection details, the list of entity classes to be managed, and any
additional configuration options.
EntityManagerFactory Creation: The next stage is the creation of an
EntityManagerFactory, which is responsible for creating EntityManager
instances. The EntityManagerFactory is typically created once at the start of the
application and is used to create EntityManager instances throughout the
application.
EntityManager Creation: The next stage is the creation of an EntityManager,
which provides the primary interface for interacting with the Persistence
Context. The EntityManager is responsible for managing the lifecycle of entity
objects, executing queries, and performing CRUD operations on the database.
Transaction Management: The next stage is transaction management, which
involves defining the boundaries of transactions and managing their lifecycle.
Transactions are used to ensure data consistency and integrity, and they are
typically managed using annotations or programmatic APIs.
Entity Lifecycle Management: The next stage is entity lifecycle management,
which involves managing the lifecycle of entity objects within the Persistence
Context. Entity objects can be in one of several states, including New, Managed,
Detached, and Removed, and their state can be changed using the
EntityManager API.
Query Execution: The final stage is query execution, which involves executing
JPQL queries to retrieve data from the database. JPQL is a query language that
is similar to SQL but is specific to JPA.


Note: This is a simplified view of the JPA lifecycle and there may be additional
stages or variations depending on the specific implementation and
configuration of the application.

22. How does JPA handle optimistic locking? Can you give an
example of how you would implement optimistic locking in
JPA?
JPA (Java Persistence API) provides support for optimistic locking through the use of
version fields. Optimistic locking is a concurrency control mechanism that allows
multiple transactions to access the same data concurrently while ensuring data
consistency.
In JPA, optimistic locking is implemented by defining a version field on the entity
class. This version field is automatically incremented by the persistence provider
each time an entity is updated. When an entity is updated, JPA checks if the version
of the entity in the database matches the version of the entity in the persistence
context. If the versions do not match, it means that another transaction has modified
the entity in the meantime, and JPA throws an optimistic locking exception.
Consider the below snippets for the implementation of optimistic locking in JPA:

@Entity
public class Employee {
@Id
@GeneratedValue
private Long id;

private String name;

@Version
private int version;

// getters and setters


}

In this example, we have an Employee entity with an id, a name, and a version field
annotated with @Version. The version field is an integer that JPA uses for optimistic
locking.


Now let's say we want to update an employee's name:

EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

Employee employee = em.find(Employee.class, 1L);


employee.setName("John Doe");

em.getTransaction().commit();
em.close();

When we call em.getTransaction().commit(), JPA checks the version of the employee entity in the database against the version of the employee entity in the
persistence context. If the versions match, JPA updates the entity and increments the
version number. If the versions do not match, JPA throws an optimistic locking
exception.

23. What is the purpose of the @Version annotation in JPA? How is it used in optimistic locking? Explain the concept in detail.
The purpose of the @Version annotation in JPA is to define a version field on an
entity that can be used for optimistic locking.
When an entity is updated in JPA, the persistence provider checks whether the
version of the entity in the database matches the version of the entity in the
persistence context. If the versions match, JPA updates the entity and increments the
version number. If the versions do not match, it means that another transaction has
modified the entity in the meantime, and JPA throws an optimistic locking
exception.
The @Version annotation is used to mark a field on an entity as the version field. This
field should be an integer or a timestamp type. When an entity is updated, JPA
automatically increments the version number or updates the timestamp value.
Let’s understand this with the help of the below snippet:


@Entity
public class Book {
@Id
@GeneratedValue
private Long id;
private String title;
private String author;
private double price;

@Version
private int version;

// getters and setters


}

In this example, we have a Book entity with an id, a title, an author, a price, and a
version field annotated with @Version. The version field is an integer that JPA uses
for optimistic locking.
When an entity is updated, JPA checks whether the version of the entity in the
database matches the version of the entity in the persistence context. If the versions
match, JPA updates the entity and increments the version number. If the versions do
not match, JPA throws an optimistic locking exception.
The @Version annotation can be applied to only one field per entity class. If the entity
has more than one field that needs to be used for optimistic locking, you can create a
composite version field using an embedded object or a concatenated string.

24. How can you use JPA to perform pagination of query results? What are the advantages of using pagination over fetching all results at once?
JPA provides support for the pagination of query results through the use of the
setFirstResult and setMaxResults methods of the Query interface.
Consider the below code to understand how to use pagination with JPA:


EntityManager em = entityManagerFactory.createEntityManager();
Query query = em.createQuery("SELECT e FROM Employee e ORDER BY e.name");
query.setFirstResult(0); // Starting index of the results to return
query.setMaxResults(10); // Maximum number of results to return
List<Employee> employees = query.getResultList();
em.close();

In this example, we have created a query to select all employees and order them by
name. We then set the starting index of the results to 0 and the maximum number of
results to 10 using the setFirstResult() and setMaxResults() methods. Finally, we
execute the query and retrieve the results using getResultList().
The advantages of using pagination over fetching all results at once include:
1. Reduced memory usage: When fetching a large number of results, it can
consume a lot of memory to hold all the results in memory at once. Pagination
allows you to retrieve a smaller subset of results at a time, reducing memory
usage.
2. Faster response times: If a query returns a large number of results, it can take a
long time to return all the results. Pagination allows you to retrieve smaller
subsets of results, which can improve response times.
3. Improved user experience: If you're displaying query results to users, it can be
overwhelming to display a large number of results at once. Pagination allows
you to display a smaller subset of results at a time, making it easier for users to
navigate through the results.
4. Better performance: When using pagination, the database can use more
efficient algorithms to retrieve and sort smaller subsets of results. This can result
in better performance than fetching all results at once.

25. How would you implement a custom JPA entity listener? Can you give an example of when you might use a custom entity listener in your application?
To implement a custom JPA entity listener, we create a plain Java class whose callback methods are annotated with JPA lifecycle annotations such as @PrePersist, @PostPersist, @PreUpdate, @PreRemove, @PostLoad, or @PostRemove.


You can then register the listener by annotating the entity class with the @EntityListeners annotation.
Consider the below example of a custom JPA entity listener:

public class UserListener {


@PrePersist
public void prePersist(User user) {
user.setCreationDate(new Date());
}
}

In this example, we have a UserListener class with a prePersist method that sets the
creation date of a user before it's persisted to the database. We can then annotate
the User entity with the @EntityListeners annotation to register the listener:

@Entity
@Table(name = "users")
@EntityListeners(UserListener.class)
public class User {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;

private String username;

private String password;

@Temporal(TemporalType.TIMESTAMP)
private Date creationDate;

// getters and setters


}

In this example, we have a User entity with an id, a username, a password, and a
creation date field. We annotate the entity with @EntityListeners and specify the
UserListener class to register the listener.


We can use a custom entity listener in our application for various purposes, such as
auditing, validation, or processing entity lifecycle events. For example, we could use
a custom listener to calculate and update the average rating of a product when a new
review is added or to validate that a user has a unique email address before it's
persisted in the database, etc.

26. How can you use JPA to handle optimistic concurrency control? Can you explain how the EntityManager.lock() method works?
JPA provides a mechanism for optimistic concurrency control to handle situations
where multiple transactions are trying to modify the same entity concurrently. In
optimistic concurrency control, each transaction checks to see if any other
transaction has modified the entity since it was last read. This is achieved by using a
version field in the entity, which is incremented each time the entity is modified.
JPA provides the @Version annotation to indicate which field in the entity should be
used as the version field. When an entity is persisted or updated, JPA automatically
checks the current version of the entity in the database and compares it with the
version in the entity being persisted or updated. If the versions do not match, a
“javax.persistence.OptimisticLockException” is thrown, indicating that the entity has
been modified by another transaction.
In addition to the automatic optimistic locking provided by JPA, the EntityManager
interface provides a lock() method that allows you to manually acquire a lock on an
entity to prevent other transactions from modifying it until the lock is released.
The EntityManager.lock() method allows you to specify the lock mode to use and the
timeout for the lock. The lock mode can be either optimistic or pessimistic. With
optimistic locking, the lock is released immediately after the transaction completes.
With pessimistic locking, the lock is held until the transaction completes or the lock
timeout expires.
Here's an example of using the EntityManager.lock() method to acquire an optimistic
lock on an entity:


EntityManager em = ... // obtain the EntityManager

em.getTransaction().begin();

// find the entity to update
MyEntity entity = em.find(MyEntity.class, entityId);

// acquire an optimistic lock on the entity
em.lock(entity, LockModeType.OPTIMISTIC);

// modify the entity
entity.setSomeField(newValue);

em.getTransaction().commit();

In this example, the EntityManager finds the entity to update and acquires an
optimistic lock on it using the lock() method. The entity is then modified and the
transaction is committed. If another transaction has modified the entity in the
meantime, an OptimisticLockException will be thrown when the transaction tries to
commit the changes.

27. What is the purpose of the @OneToOne and @OneToMany annotations in JPA? Explain in detail with examples.
In JPA, @OneToOne and @OneToMany are two annotations used to specify the type
of relationship between two entities.
@OneToOne is used to specify a one-to-one relationship between two entities. It is
typically used when one entity has a single associated entity of another type and vice
versa.
For example, let's consider two entities, Employee and Address, where an employee
can have only one address and an address can be associated with only one employee.
In this scenario, the Employee entity would have a field annotated with @OneToOne
that references the Address entity, and the Address entity would have a field
annotated with @OneToOne that references the Employee entity.
Let's look at the snippets of this example:


@Entity
public class Employee {
@Id
@GeneratedValue
private Long id;

@OneToOne(mappedBy="employee")
private Address address;

// other fields and methods


}

@Entity
public class Address {
@Id
@GeneratedValue
private Long id;

@OneToOne
@JoinColumn(name="employee_id")
private Employee employee;

// other fields and methods


}

@OneToMany is used to specify a one-to-many relationship between two entities. It is typically used when one entity has multiple associated entities of another type, and the associated entities only have a single association back to the original entity.
For example, let's consider two entities, Department and Employee, where a
department can have multiple employees, but an employee can belong to only one
department. In this scenario, the Department entity would have a collection field
annotated with @OneToMany that references the Employee entity.
Let's look at the snippet for this example:


@Entity
public class Department {
@Id
@GeneratedValue
private Long id;

@OneToMany(mappedBy="department")
private List<Employee> employees;

// other fields and methods


}

@Entity
public class Employee {
@Id
@GeneratedValue
private Long id;

@ManyToOne
@JoinColumn(name="department_id")
private Department department;

// other fields and methods


}

In this example, the @OneToMany annotation is used to specify that the Department
entity has a one-to-many relationship with the Employee entity. The mappedBy
attribute is used to indicate that the relationship is mapped by the department field
in the Employee entity.

28. What types of identifier generation does JPA support?


JPA provides several strategies for generating unique identifiers for entity objects.
Here are the different types of identifier generation supported by JPA:


1. GenerationType.AUTO: This is the default strategy, and the choice of strategy is determined by the JPA provider. The strategy may be GenerationType.IDENTITY,
GenerationType.SEQUENCE, or GenerationType.TABLE.
2. GenerationType.IDENTITY: This strategy uses an auto-incremented database
column to generate unique identifier values. This is only supported for
databases that have auto-increment columns, such as MySQL, PostgreSQL, and
SQL Server.
3. GenerationType.SEQUENCE: This strategy uses a database sequence to
generate unique identifier values. The database must support sequences, and
the sequence must be defined in the database. This strategy is useful when the
database doesn't support auto-increment columns, or when you want to
generate unique identifiers before inserting the entity object into the database.
4. GenerationType.TABLE: This strategy uses a separate database table to
generate unique identifier values. Each time an identifier is needed, a row is
inserted into the table, and the identifier value is obtained from the inserted
row. This strategy is useful when the database doesn't support sequences or
auto-increment columns, or when you want to generate unique identifiers
outside of the database.
5. Custom generators: The four strategies above are the ones defined by the JPA specification itself. If you need a user-defined algorithm for generating identifier values, JPA providers offer extensions for this; for example, Hibernate lets you implement its org.hibernate.id.IdentifierGenerator interface and plug it in through the @GenericGenerator annotation.
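For illustration, a sequence-based identifier (the Invoice entity, the sequence, and the generator names below are hypothetical) can be configured like this:

@Entity
public class Invoice {
    @Id
    @SequenceGenerator(name = "invoice_seq", sequenceName = "invoice_sequence", allocationSize = 1)
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "invoice_seq")
    private Long id;

    // other fields and methods
}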

29. Can you explain how JPA handles entity state transitions
(e.g. from new to managed, managed to remove, etc.)?
What are some best practices for managing entity states in
JPA?
JPA manages the state of entities as they are created, modified, and deleted in the
application. The state of an entity can be one of the following:


1. New: When an entity is first created using the new operator, it's in a new state.
2. Managed: Once an entity is persisted using the EntityManager.persist() method,
it enters the managed state. Entities in this state are managed by the
persistence context, and any changes made to the entity are tracked and
automatically synchronized with the database.
3. Detached: Entities become detached when they are removed from the
persistence context or when the persistence context is closed. In this state,
changes made to the entity are not tracked or synchronized with the database.
However, they can be re-attached to the persistence context later using the
EntityManager.merge() method.
4. Removed: When an entity is removed using the EntityManager.remove()
method, it enters the removed state. Entities in this state are scheduled for
deletion from the database when the transaction is committed.
To manage entity states in JPA, it's important to follow some best practices:
1. Always use EntityManager to create, retrieve, update, and delete entities.
2. Use the EntityManager.persist() method to create new entities.
3. Use the EntityManager.merge() method to update existing entities or to re-
attach detached entities to the persistence context.
4. Use the EntityManager.remove() method to delete entities.
5. Avoid using the new operator to create entities once JPA is involved in your
application.
6. Be aware of the transaction boundaries and make sure that all database
operations are performed within a transaction.
7. Keep the persistence context as small as possible to avoid unnecessary memory
usage and performance issues.
By following these best practices, you can ensure that entity state transitions are properly managed in your application and that your data is consistent and up-to-date in the database.
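A condensed sketch of these transitions, using a hypothetical Employee entity and an EntityManager em obtained inside a transaction:

Employee emp = new Employee();   // New: not yet associated with the persistence context
em.persist(emp);                 // Managed: tracked and synchronized with the database on commit

em.detach(emp);                  // Detached: changes made now are no longer tracked
emp.setName("Edited offline");
emp = em.merge(emp);             // Managed again: the returned instance is re-attached

em.remove(emp);                  // Removed: scheduled for deletion when the transaction commits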

30. Explain the difference between a shared cache mode and a local cache mode in JPA? What are the advantages and disadvantages of each?


In JPA, there are two cache modes:


1. Shared cache mode allows multiple ‘EntityManager’ instances to share the
same cache. This means that if one ‘EntityManager’ instance loads an entity
from the database and stores it in the cache, another ‘EntityManager’ instance
can retrieve the same entity from the cache without having to hit the database.
The shared cache is managed by the JPA provider and is typically implemented
using a second-level cache. The advantage of shared cache mode is that it can
improve application performance by reducing the number of database queries.
However, the disadvantage is that it can lead to consistency issues if the cache is
not properly managed.
2. Local cache mode, on the other hand, is specific to a single ‘EntityManager’
instance. When an entity is loaded from the database using an ‘EntityManager’
in local cache mode, it is stored in the local cache of that ‘EntityManager’
instance. Subsequent requests for the same entity within that ‘EntityManager’
instance will be retrieved from the local cache instead of hitting the database.
The advantage of local cache mode is that it provides greater control over the
caching process and avoids potential consistency issues. However, the
disadvantage is that it can lead to increased memory usage and slower
performance if large numbers of entities are cached.
In general, the choice between shared and local cache mode depends on the specific
requirements of the application. If the application requires high performance and
can tolerate some consistency issues, shared cache mode may be a good choice. If
the application requires greater control over the caching process and cannot tolerate
consistency issues, local cache mode may be a better choice.

31. What is the difference between CascadeType.ALL and CascadeType.PERSIST in JPA?


CascadeType.ALL:
Cascades all operations: ‘persist’, ‘merge’, ‘remove’, and ‘refresh’.
If an entity is associated with another entity using CascadeType.ALL, any operation performed on the parent entity will be propagated to the child entity. For example, if we delete a parent entity, any child entities associated with it will also be deleted.
CascadeType.ALL should be used with caution as it can result in unintended consequences, such as deleting child entities that should not be deleted.

CascadeType.PERSIST:
Cascades only the ‘persist’ operation.
If an entity is associated with another entity using CascadeType.PERSIST, only the ‘persist’ operation will be propagated to the child entity. For example, if we persist a parent entity, any child entities associated with it will also be persisted, but any subsequent operations (e.g. remove or merge) will not be propagated.
CascadeType.PERSIST is less risky as it only propagates the ‘persist’ operation, but it may require additional operations (such as ‘remove’ or ‘merge’) to update or delete child entities.

Conclusion


In conclusion, JPA (Java Persistence API) is a powerful tool for developers working
with Java applications that need to interact with databases. As such, it's become an
increasingly popular topic in interviews for Java development positions.
We hope that this article has provided you with a solid foundation for preparing for
JPA interviews. By reviewing these questions and practising your answers, you can
increase your chances of success and impress potential employers with your JPA
expertise.
Remember, JPA is just one aspect of Java development, so be sure to also brush up
on other relevant technologies and concepts. With a well-rounded understanding of
Java development and JPA in particular, you'll be well-positioned to excel in your
career. Also, if you are a Java developer, you can find more content on related Java topics such as Spring and Hibernate here -
Technical Interview Questions
Interview Preparation Resources



Links to More Interview Questions

C Interview Questions
Php Interview Questions
C Sharp Interview Questions
Web Api Interview Questions
Hibernate Interview Questions
Node Js Interview Questions
Cpp Interview Questions
Oops Interview Questions
Devops Interview Questions
Machine Learning Interview Questions
Docker Interview Questions
Mysql Interview Questions
Css Interview Questions
Laravel Interview Questions
Asp Net Interview Questions
Django Interview Questions
Dot Net Interview Questions
Kubernetes Interview Questions
Operating System Interview Questions
React Native Interview Questions
Aws Interview Questions
Git Interview Questions
Java 8 Interview Questions
Mongodb Interview Questions
Dbms Interview Questions
Spring Boot Interview Questions
Power Bi Interview Questions
Pl Sql Interview Questions
Tableau Interview Questions
Linux Interview Questions
Ansible Interview Questions
Java Interview Questions
Jenkins Interview Questions



Contents

Spring, Spring Core, Spring IoC Interview Questions
1. What is Spring Framework?
2. What are the features of Spring Framework?
3. What is a Spring configuration file?
4. What do you mean by IoC (Inversion of Control) Container?
5. What do you understand by Dependency Injection?
6. Explain the difference between constructor and setter injection?
7. What are Spring Beans?
8. How is the configuration meta data provided to the spring container?
9. What are the bean scopes available in Spring?
10. Explain Bean life cycle in Spring Bean Factory Container.
11. What do you understand by Bean Wiring.
12. What is autowiring and name the different modes of it?
13. What are the limitations of autowiring?

Spring Boot Interview Questions


14. What do you understand by the term ‘Spring Boot’?
15. Explain the advantages of using Spring Boot for application development.
16. Differentiate between Spring and Spring Boot.
17. What are the features of Spring Boot?
18. What does @SpringBootApplication annotation do internally?


19. What are the effects of running Spring Boot Application as “Java Application”?
20. What is Spring Boot dependency management system?
21. What are the possible sources of external configuration?
22. Can we change the default port of the embedded Tomcat server in Spring boot?
23. Can you tell how to exclude any package without using the basePackages filter?
24. How to disable specific auto-configuration class?
25. Can the default web server in the Spring Boot application be disabled?
26. What are the uses of @RequestMapping and @RestController annotations in
Spring Boot?

Spring AOP, Spring JDBC, Spring Hibernate Interview Questions
27. What is Spring AOP?
28. What is an advice? Explain its types in spring.
29. What is Spring AOP Proxy pattern?
30. What are some of the classes for Spring JDBC API?
31. How can you fetch records by Spring JdbcTemplate?
32. What is Hibernate ORM Framework?
33. What are the two ways of accessing Hibernate by using Spring.
34. What is Hibernate Validator Framework?
35. What is HibernateTemplate class?

Spring MVC Interview Questions



36. What is the Spring MVC framework?
37. What are the benefits of Spring MVC framework over other MVC frameworks?
38. What is DispatcherServlet in Spring MVC?
39. What is a View Resolver pattern and explain its significance in Spring MVC?
40. What is the @Controller annotation used for?
41. Can you create a controller without using @Controller or @RestController
annotations?
42. What is ContextLoaderListener and what does it do?
43. What are the differences between @RequestParam and @PathVariable
annotations?
44. What is the Model in Spring MVC?
45. What is the use of @Autowired annotation?
46. What is the role of @ModelAttribute annotation?
47. What is the importance of the web.xml in Spring MVC?
48. What are the types of Spring MVC Dependency Injection?
49. What is the importance of session scope?
50. What is the importance of @Required annotation?
51. Differentiate between the @Autowired and the @Inject annotations.
52. Are singleton beans thread-safe?
53. How can you achieve thread-safety in beans?
54. What is the significance of @Repository annotation?
55. How is the dispatcher servlet instantiated?


56. How is the root application context in Spring MVC loaded?


57. How does the Spring MVC flow look like? In other words, How does a
DispatcherServlet know what Controller needs to be called when there is an
incoming request to the Spring MVC?
58. Where does the access to the model from the view come from?
59. Why do we need BindingResults?
60. What are Spring Interceptors?
61. Is there any need to keep spring-mvc.jar on the classpath or is it already present
as part of spring-core?
62. What are the differences between the vs tags?
63. How is the form data validation done in Spring Web MVC Framework?
64. How to get ServletConfig and ServletContext objects in spring bean?
65. Differentiate between a Bean Factory and an Application Context.
66. How are i18n and localization supported in Spring MVC?
67. What do you understand by MultipartResolver?
68. How is it possible to use the Tomcat JNDI DataSource in the Spring applications?
69. What will be the selection state of a checkbox input if the user first checks the
checkbox and gets validation errors in other fields and then unchecks the
checkbox after getting the errors?



Let's get Started
The Spring Framework was first developed by Rod Johnson in 2003. It was developed
to make the development of Java applications quicker, easier and safer for
developers. Spring Framework is an open-source, lightweight, easy-to-build
framework that can also be considered as a framework of frameworks as it houses
various frameworks like Dependency Injection, Spring MVC, Spring JDBC, Spring
Hibernate, Spring AOP, EJB, JSF, etc. A key element of Spring is the feature of the
application’s infrastructural support. Spring focuses on providing the “plumbing” of
enterprise-level applications and ensures that the developers can focus purely on
application-level business logic, without unnecessary ties to specific deployment
environments. Applications developed in Spring are more reliable, scalable, and very
simple to build and maintain. Spring was developed as means to help developers
manage the business objects of the application. Due to its vast features and
flexibilities, Spring became the most loved framework for developing enterprise-level
Java-based applications. In the following section, we will see what are the most
commonly asked interview questions and answers to prepare you for Spring-based
interviews.

Spring, Spring Core, Spring IoC Interview Questions
1. What is Spring Framework?


Spring is a powerful open-source, loosely coupled, lightweight, java framework meant for reducing the complexity of developing enterprise-level applications.
This framework is also called the “framework of frameworks” as spring provides
support to various other important frameworks like JSF, Hibernate, Structs,
EJB, etc.
There are around 20 modules which are generalized into the following types:
Core Container
Data Access/Integration
Web
AOP (Aspect Oriented Programming)
Instrumentation
Messaging
Test

Spring handles all the infrastructure-related aspects which lets the programmer
to focus mostly on application development.

2. What are the features of Spring Framework?


Spring framework follows a layered architecture pattern that helps in selecting the necessary components, while also providing a robust and cohesive
framework for J2EE applications development.
The AOP (Aspect Oriented Programming) part of Spring supports unified
development by ensuring separation of application’s business logic from
other system services.
Spring provides highly configurable MVC web application framework which has
the ability to switch to other frameworks easily.
Provides provision of creation and management of the configurations and
defining the lifecycle of application objects.
Spring follows a special design principle known as IoC (Inversion of Control), in which objects are given their dependencies rather than creating or looking up dependent objects themselves.
Spring is a lightweight, java based, loosely coupled framework.
Spring provides generic abstraction layer for transaction management that is
also very useful for container-less environments.
Spring provides a convenient API to translate technology-specific exceptions
(thrown by JDBC, Hibernate or other frameworks) into consistent, unchecked
exceptions. This introduces abstraction and greatly simplifies exception
handling.

3. What is a Spring configuration file?


A Spring configuration file is basically an XML file that mainly contains the classes
information and describes how those classes are configured and linked to each other.
The XML configuration files are verbose and cleaner.

4. What do you mean by IoC (Inversion of Control) Container?


Spring container forms the core of the Spring Framework. The Spring container uses
Dependency Injection (DI) for managing the application components by creating
objects, wiring them together along with configuring and managing their overall life
cycles. The instructions for the spring container to do the tasks can be provided
either by XML configuration, Java annotations, or Java code.


5. What do you understand by Dependency Injection?


The main idea in Dependency Injection is that you don’t have to create your objects
but you just have to describe how they should be created.
The components and services need not be connected by us in the code directly.
We have to describe which services are needed by which components in the
configuration file. The IoC container present in Spring will wire them up
together.


In Java, the 2 major ways of achieving dependency injection are:


Constructor injection: Here, the IoC container invokes the class constructor
with a number of arguments where each argument represents a
dependency on the other class.
Setter injection: Here, the spring container calls the setter methods on the
beans a er invoking a no-argument static factory method or default
constructor to instantiate the bean.
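A minimal sketch of both styles (the OrderService, PaymentService, and NotificationService classes here are hypothetical):

@Component
public class OrderService {

    private final PaymentService paymentService;
    private NotificationService notificationService;

    // Constructor injection: the container passes the dependency as a constructor argument
    @Autowired
    public OrderService(PaymentService paymentService) {
        this.paymentService = paymentService;
    }

    // Setter injection: the container calls the setter after instantiating the bean
    @Autowired
    public void setNotificationService(NotificationService notificationService) {
        this.notificationService = notificationService;
    }
}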

6. Explain the difference between constructor and setter injection?
In constructor injection, partial injection is not allowed whereas it is allowed in
setter injection.
Constructor injection cannot override setter-injected values, whereas setter injection can override values provided through the constructor.
Constructor injection creates a new instance if any modification is done. The
creation of a new instance is not possible in setter injection.
In case the bean has many properties, then constructor injection is preferred. If
it has few properties, then setter injection is preferred.


7. What are Spring Beans?


They are the objects forming the backbone of the user’s application and are
managed by the Spring IoC container.
Spring beans are instantiated, configured, wired, and managed by IoC container.
Beans are created with the configuration metadata that the users supply to the
container (by means of XML or java annotations configurations.)

8. How is the configuration meta data provided to the spring container?
There are 3 ways of providing the configuration metadata. They are as follows:
XML-Based configuration: The bean configurations and their dependencies are
specified in XML configuration files. This starts with a bean tag as shown below:

<bean id="interviewBitBean" class="org.interviewBit.firstSpring.InterviewBitBean">
    <property name="name" value="InterviewBit"></property>
</bean>

Annotation-Based configuration: Instead of the XML approach, the beans can


be configured into the component class itself by using annotations on the
relevant class, method, or field declaration.
Annotation wiring is not active in the Spring container by default. This has
to be enabled in the Spring XML configuration file as shown below

<beans>
<context:annotation-config/>
<!-- bean definitions go here -->
</beans>


Java-based configuration: Spring Framework introduced key features as part


of new Java configuration support. This makes use of the @Configuration
annotated classes and @Bean annotated methods. Note that:
@Bean annotation has the same role as the <bean/> element.
Classes annotated with @Configuration allow to define inter-bean
dependencies by simply calling other @Bean methods in the same class.
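
A minimal sketch of such a Java-based configuration, reusing the bean from the XML example above (the config class name is an assumption, and it assumes the bean exposes a name setter):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

    // Plays the same role as the <bean/> element in the XML configuration.
    @Bean
    public InterviewBitBean interviewBitBean() {
        InterviewBitBean bean = new InterviewBitBean();
        bean.setName("InterviewBit");
        return bean;
    }
}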

9. What are the bean scopes available in Spring?


The Spring Framework has five scope supports. They are:
Singleton: The scope of bean definition while using this would be a single
instance per IoC container.
Prototype: Here, the scope for a single bean definition can be any number of
object instances.
Request: The scope of the bean definition is an HTTP request.
Session: Here, the scope of the bean definition is HTTP-session.
Global-session: The scope of the bean definition here is a Global HTTP session.
Note: The last three scopes are available only if the users use web-aware
ApplicationContext containers.
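
As a brief illustration, with annotation-based configuration a scope can be declared using the @Scope annotation (the class name below is hypothetical):

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component
@Scope("prototype")   // a new instance is created every time the bean is requested
public class ReportGenerator {
}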

10. Explain Bean life cycle in Spring Bean Factory Container.


The Bean life cycle is as follows:


The IoC container instantiates the bean from the bean’s definition in the XML
file.
Spring then populates all of the properties using the dependency injection as
specified in the bean definition.
The bean factory container calls setBeanName(), which takes the bean ID; the
corresponding bean has to implement the BeanNameAware interface.
The factory then calls setBeanFactory() by passing an instance of itself (if the
BeanFactoryAware interface is implemented in the bean).
If any BeanPostProcessors are associated with the bean, then the
postProcessBeforeInitialization() methods are invoked.
If an init-method is specified, then it will be called.
Lastly, postProcessAfterInitialization() methods will be called if there are any
BeanPostProcessors associated with the bean that needs to be run post
creation.
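
As a small illustration of the init and destroy callbacks, a bean can hook into the lifecycle through the InitializingBean and DisposableBean interfaces (the class name is hypothetical):

import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;

public class ConnectionPoolBean implements InitializingBean, DisposableBean {

    // Called by the container once all properties have been set (the init step).
    @Override
    public void afterPropertiesSet() {
        System.out.println("Bean initialized, opening resources");
    }

    // Called by the container when it is shut down.
    @Override
    public void destroy() {
        System.out.println("Bean destroyed, releasing resources");
    }
}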

11. What do you understand by Bean Wiring?


When beans are combined together within the Spring container, they are said to
be wired or the phenomenon is called bean wiring.
The Spring container should know what beans are needed and how the beans
are dependent on each other while wiring beans. This is given by means of XML /
Annotations / Java code-based configuration.

12. What is autowiring and name the different modes of it?


The IoC container autowires relationships between the application beans. Spring lets
collaborators resolve which bean has to be wired automatically by inspecting the
contents of the BeanFactory.
Different modes of this process are:
no: This means no autowiring and is the default setting. An explicit bean
reference should be used for wiring.
byName: The bean dependency is injected according to the name of the bean.
This matches and wires its properties with the beans defined by the same names
as per the configuration.
byType: This injects the bean dependency based on type.
constructor: Here, it injects the bean dependency by calling the constructor of
the class that has the largest number of matching arguments.
autodetect: First the container tries to wire using autowire by the constructor, if
it isn't possible then it tries to autowire by byType.
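
For example, byName autowiring could be declared in XML roughly like this (the bean ids and classes below are hypothetical):

<!-- The container matches the "engine" property of Car with the bean whose id is "engine". -->
<bean id="engine" class="com.example.Engine"/>
<bean id="car" class="com.example.Car" autowire="byName"/>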

13. What are the limitations of autowiring?


Overriding possibility: Dependencies are specified using <constructor-arg>
and <property> settings that override autowiring.
Data types restriction: Primitive data types, Strings, and Classes can’t be
autowired.

Spring Boot Interview Questions




Contents

Spring Boot Interview Questions For Freshers


1. What are the advantages of using Spring Boot?
2. What are the Spring Boot key components?
3. Why Spring Boot over Spring?
4. What is the starter dependency of the Spring boot module?
5. How does Spring Boot works?
6. What does the @SpringBootApplication annotation do internally?
7. What is the purpose of using @ComponentScan in the class files?
8. How does a spring boot application get started?
9. What are starter dependencies?
10. What is Spring Initializer?
11. What is Spring Boot CLI and what are its benefits?
12. What are the most common Spring Boot CLI commands?

Advanced Spring Boot Questions


13. What Are the Basic Annotations that Spring Boot Offers?
14. What is Spring Boot dependency management?
15. Can we create a non-web application in Spring Boot?
16. Is it possible to change the port of the embedded Tomcat server in Spring Boot?
17. What is the default port of tomcat in spring boot?
18. Can we override or replace the Embedded tomcat server in Spring Boot?

19. Can we disable the default web server in the Spring boot application?
20. How to disable a specific auto-configuration class?
21. Explain @RestController annotation in Sprint boot?
22. What is the difference between @RestController and @Controller in Spring
Boot?
23. Describe the flow of HTTPS requests through the Spring Boot application?
24. What is the difference between RequestMapping and GetMapping?
25. What is the use of Profiles in spring boot?
26. What is Spring Actuator? What are its advantages?
27. How to enable Actuator in Spring boot application?
28. What are the actuator-provided endpoints used for monitoring the Spring boot
application?
29. How to get the list of all the beans in your Spring boot application?
30. How to check the environment properties in your Spring boot application?
31. How to enable debugging log in the spring boot application?
32. Where do we define properties in the Spring Boot application?
33. What is dependency Injection?
34. What is an IOC container?



Let's get Started
Spring Boot is one of the hottest topics of discussion in interviews when it comes to Java
application development. Because of its fast setup, low configuration, in-built server, and
monitoring features, it helps to build a stand-alone Java application from scratch that is
robust and maintainable.
The article will walk you through Spring Boot interview questions from basic to
advanced level.

What is Spring boot?


Spring Boot is a Java-based framework built on top of Spring and used for Rapid Application
Development (to build stand-alone microservices). It has extra support for auto-
configuration and embedded application servers like Tomcat, Jetty, etc.

Features of Spring Boot that make it different


Creates stand-alone Spring applications with minimal configuration needed.
It has embedded Tomcat and Jetty servers, so you can just code and run the application.
Provides production-ready features such as metrics, health checks, and
externalized configuration.
Absolutely no requirement for XML configuration.


Spring Boot Features

Spring Boot Interview Questions For Freshers


1. What are the advantages of using Spring Boot?
The advantages of Spring Boot are listed below:
Easy to understand and develop Spring applications.
Spring Boot is essentially the existing framework with the addition of an
embedded HTTP server and annotation-based configuration, which makes it easier to
understand and speeds up the development process.
Increases productivity and reduces development time.
Minimum configuration.
We don’t need to write any XML configuration; only a few annotations are
required to do the configuration.

2. What are the Spring Boot key components?


Below are the four key components of spring-boot:


Spring Boot auto-configuration.
Spring Boot CLI.
Spring Boot starter POMs.
Spring Boot Actuators.

3. Why Spring Boot over Spring?


Below are some key points which spring boot offers but spring doesn’t:
Starter POM.
Version Management.
Auto Configuration.
Component Scanning.
Embedded server.
InMemory DB.
Actuators
Spring Boot simplifies the spring feature for the user:

Spring vs Spring Boot

4. What is the starter dependency of the Spring boot module?


Spring Boot provides a number of starter dependencies; here are the most commonly
used:
Data JPA starter.
Test Starter.
Security starter.
Web starter.
Mail starter.
Thymeleaf starter.

5. How does Spring Boot work?


Spring Boot automatically configures your application based on the dependencies
you have added to the project by using annotation. The entry point of the spring boot
application is the class that contains @SpringBootApplication annotation and the
main method.
Spring Boot automatically scans all the components included in the project by using
@ComponentScan annotation.

6. What does the @SpringBootApplication annotation do


internally?
The @SpringBootApplication annotation is equivalent to using @Configuration,
@EnableAutoConfiguration, and @ComponentScan with their default attributes.
Spring Boot enables the developer to use a single annotation instead of using
multiple. But, as we know, Spring provided loosely coupled features that we can use
for each annotation as per our project needs.

7. What is the purpose of using @ComponentScan in the class


files?
Spring Boot application scans all the beans and package declarations when the
application initializes. You need to add the @ComponentScan annotation for your
class file to scan your components added to your project.

8. How does a spring boot application get started?


Just like any other Java program, a Spring Boot application must have a main
method. This method serves as an entry point, which invokes the
SpringApplication#run method to bootstrap the application.

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class);
        // other statements
    }
}

9. What are starter dependencies?


A Spring Boot starter is a Maven template that contains a collection of all the relevant
transitive dependencies that are needed to start a particular functionality.
For example, we need to import the spring-boot-starter-web dependency for creating a web
application.

<dependency>
<groupId> org.springframework.boot</groupId>
<artifactId> spring-boot-starter-web </artifactId>
</dependency>

10. What is Spring Initializer?


Spring Initializer is a web application that helps you to create an initial spring boot
project structure and provides a maven or gradle file to build your code. It solves the
problem of setting up a framework when you are starting a project from scratch.

11. What is Spring Boot CLI and what are its benefits?
Spring Boot CLI is a command-line interface that allows you to create a Spring-based
Java application using Groovy.
For example, you don’t need to write getter and setter methods, access modifiers, or
return statements, and if you use the JDBC template, it is automatically loaded for you.

12. What are the most common Spring Boot CLI commands?

run, test, grab, jar, war, install, uninstall, init, shell, help.
To check the description of each command, run spring --help from the terminal.

Spring Boot CLI Commands

Advanced Spring Boot Questions


13. What Are the Basic Annotations that Spring Boot Offers?
The primary annotations that Spring Boot offers reside in its
org.springframework.boot.autoconfigure and its sub-packages. Here are a couple of
basic ones:
@EnableAutoConfiguration – to make Spring Boot look for auto-configuration beans
on its classpath and automatically apply them.
@SpringBootApplication – used to denote the main class of a Boot Application. This
annotation combines @Configuration, @EnableAutoConfiguration, and
@ComponentScan annotations with their default attributes.

14. What is Spring Boot dependency management?


Spring Boot dependency management is used to manage dependencies and
configuration automatically, without you having to specify the version for any of those
dependencies.

15. Can we create a non-web application in Spring Boot?


Yes, we can create a non-web application by removing the web dependencies from
the classpath along with changing the way Spring Boot creates the application
context.

16. Is it possible to change the port of the embedded Tomcat


server in Spring Boot?
Yes, it is possible, by setting the server.port property in application.properties.

17. What is the default port of tomcat in spring boot?


The default port of the Tomcat server is 8080. It can be changed by adding the server.port
property in the application.properties file.
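
For example (the port value here is only an illustration):

# application.properties
server.port=8081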

18. Can we override or replace the Embedded tomcat server in


Spring Boot?
Yes, we can replace the embedded Tomcat server with any other server by using the starter
dependencies in the pom.xml file. For example, you can use spring-boot-starter-jetty as a
dependency for using a Jetty server in your project.

19. Can we disable the default web server in the Spring boot
application?
Yes, we can use application.properties to configure the web application type, i.e.,
spring.main.web-application-type=none.

20. How to disable a specific auto-configuration class?


You can use the exclude attribute of @EnableAutoConfiguration if you do not want the auto-
configuration to apply to a specific class.
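
A minimal sketch, assuming you want to exclude the JDBC DataSource auto-configuration (the same exclude attribute is also exposed through @SpringBootApplication):

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class MyApplication {
}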


21. Explain @RestController annotation in Spring Boot?


It is a combination of @Controller and @ResponseBody, used for creating a restful
controller. It converts the response to JSON or XML. It ensures that data returned by
each method will be written straight into the response body instead of returning a
template.
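
A small hypothetical example:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    // The returned value is written directly into the HTTP response body.
    @GetMapping("/greeting")
    public String greeting() {
        return "Hello from Spring Boot";
    }
}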

22. What is the difference between @RestController and


@Controller in Spring Boot?
@Controller maps the model object to a view or template and makes it human-
readable, whereas @RestController simply returns the object, and the object data is directly
written into the HTTP response as JSON or XML.

23. Describe the flow of HTTPS requests through the Spring


Boot application?
At a high level, a Spring Boot application follows the MVC pattern, which is depicted in
the flow diagram below.

Spring Boot Flow Architecture


24. What is the difference between RequestMapping and


GetMapping?
RequestMapping can be used with GET, POST, PUT, and many other request methods
using the method attribute on the annotation. GetMapping is only an
extension of RequestMapping that improves the clarity of the request mapping.

25. What is the use of Profiles in spring boot?


While developing the application we deal with multiple environments such as dev,
QA, and prod, and each environment requires a different configuration. For example, we might
be using an embedded H2 database for dev, but for prod we might have a proprietary
Oracle or DB2 database. Even if the DBMS is the same across environments, the URLs will be
different.
To make this easy and clean, Spring has the provision of Profiles to keep the
configuration of each environment separate.
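
For example, environment-specific values can be kept in separate files such as application-dev.properties and application-prod.properties, with the active profile selected in application.properties (all values below are hypothetical):

# application.properties
spring.profiles.active=dev

# application-dev.properties
spring.datasource.url=jdbc:h2:mem:devdb

# application-prod.properties
spring.datasource.url=jdbc:oracle:thin:@prod-host:1521/ORCL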

26. What is Spring Actuator? What are its advantages?


An actuator is an additional feature of Spring that helps you to monitor and manage
your application when you push it to production. These actuators include auditing,
health, CPU usage, HTTP hits, and metric gathering, and many more that are
automatically applied to your application.

27. How to enable Actuator in Spring boot application?


To enable the spring actuator feature, we need to add the dependency of “spring-
boot-starter-actuator” in pom.xml.

<dependency>
<groupId> org.springframework.boot</groupId>
<artifactId> spring-boot-starter-actuator </artifactId>
</dependency>

28. What are the actuator-provided endpoints used for


monitoring the Spring boot application?
Actuators provide the below pre-defined endpoints to monitor our application:

Health
Info
Beans
Mappings
Configprops
Httptrace
Heapdump
Threaddump
Shutdown

29. How to get the list of all the beans in your Spring boot
application?
The Spring Boot actuator endpoint “/beans” is used to get the list of all the Spring beans in your
application.

30. How to check the environment properties in your Spring


boot application?
Spring Boot actuator “/env” returns the list of all the environment properties of
running the spring boot application.

31. How to enable debugging log in the spring boot application?


Debugging logs can be enabled in three ways -
We can start the application with --debug switch.
We can set the logging.level.root=debug property in application.property file.
We can set the logging level of the root logger to debug in the supplied logging
configuration file.

32. Where do we define properties in the Spring Boot


application?


You can define both application and Spring Boot related properties in a file called
application.properties. You can create this file manually or use Spring Initializer to
create it. You don’t need to do any special configuration to instruct Spring Boot
to load this file: if it exists on the classpath, Spring Boot automatically loads it and
configures itself and the application code accordingly.

33. What is dependency Injection?


The process of injecting dependent bean objects into target bean objects is called
dependency injection.
Setter Injection: The IOC container will inject the dependent bean object into
the target bean object by calling the setter method.
Constructor Injection: The IOC container will inject the dependent bean object
into the target bean object by calling the target bean constructor.
Field Injection: The IOC container will inject the dependent bean object into the
target bean object by Reflection API.

34. What is an IOC container?


IoC Container is a framework for implementing automatic dependency injection. It
manages object creation and its life-time and also injects dependencies into the
class.



MVC Interview Questions

To view the live version of the


page, click here.

© Copyright by Interviewbit
Contents

MVC Interview Questions and Answers


1. Explain Model, View and Controller in Brief.
2. What are the different return types used by the controller action method in MVC?
3. Name the assembly in which the MVC framework is typically defined.
4. Explain the MVC Application life cycle.
5. What are the various steps to create the request object?
6. Explain some benefits of using MVC?
7. Explain in brief the role of different MVC components?
8. How will you maintain the sessions in MVC?
9. What do you mean by partial view of MVC?
10. Explain in brief the difference between adding routes in a webform application &
an MVC application?
11. How will you define the 3 logical layers of MVC?
12. What is the use of ActionFilters in MVC?
13. How to execute any MVC project? Explain its steps.
14. What is the concept of routing in MVC?
15. What are the 3 important segments for routing?
16. What are the different properties of MVC routes?
17. How is the routing carried out in MVC?
18. How will you navigate from one view to another view in MVC? Explain with a
hyperlink example.
19. Explain the 3 concepts in one line; Temp data, View, and Viewbag?
20. Mention & explain the different approaches you will use to implement Ajax in
MVC?


21. How will you differentiate between ActionResult and ViewResult?


22. What is Spring MVC?
23. Explain briefly what you understand by separation of concern.
24. What is TempData in MVC?
25. Define Output Caching in MVC?
26. Why are Minification and Bundling introduced in MVC?
27. Describe ASP.NET MVC?
28. Which class will you use for sending the result back in JSON format in MVC?
29. Make a differentiation between View and Partial View?
30. Define the concept of Filters in MVC?
31. Mention the significance of NonActionAttribute?
32. What is used to handle an error in MVC?
33. Define Scaffolding in MVC?
34. When multiple filters are used in MVC, how is the ordering of execution of the
filters done?
35. What is ViewStart?
36. Which type of filters are executed in the end while developing an MVC
application?
37. Mention the possible file extensions used for razor views?
38. Explain briefly the two approaches of adding constraints to an MVC route?
39. Point out the different stages a Page life cycle of MVC has?
40. Explain briefly the use of ViewModel in MVC?



41. Define Default Route in MVC?
42. Explain briefly the GET and POST Action types?
43. What are the rules of Razor syntax?
44. How can you implement the MVC forms authentication?
45. What are the areas of benefits in using MVC?
46. Point out the two instances where you cannot use routing or where routing is
not necessary
47. How will you explain the concept of RenderBody and RenderPage of MVC?



Let's get Started
MVC (full form Model View Controller) is a software architecture or application
design model containing 3 interconnected verticals or portions. These 3 portions are
the model (data associated with the application), the view (which is the user
interface of an MVC application), and the controller (the processes that are
responsible for handling the input).
The MVC model is normally used to develop modern applications with user
interfaces. It provides the central pieces for designing a desktop or mobile
application, as well as modern web applications.
In this article, you will find a collection of real-world MVC interview questions with
inline answers that are asked in top tech companies. So, here we go!

MVC Interview Questions and Answers


1. Explain Model, View and Controller in Brief.
A model can be defined as the data that will be used by the program. Commonly
used examples of models in MVC are the database, a simple object holding data
(such as any multimedia file or the character of a game), a file, etc.
A view is a way of displaying objects (user interfaces) within an application. This
is the particular vertical through which end users will communicate.
A controller is the third vertical which is responsible for updating both models
and views. It accepts input from users as well as performs the equivalent update.
In other words, it is the controller which is responsible for responding to user
actions.


2. What are the different return types used by the controller


action method in MVC?
The various return types of controller action methods in MVC are:
View Result
JSON Result
Content Result
Redirect Result
JavaScript Result

3. Name the assembly in which the MVC framework is typically


defined.
The MVC framework is usually defined in the System.Web.Mvc assembly.

4. Explain the MVC Application life cycle.


Web applications usually have 2 primary execution steps. These are:
Understanding the request.
Sending an appropriate response based on the type of request.


The same thing can be related to MVC applications also whose life cycle has 2
foremost phases:
For creating a request object.
For sending the response to any browser.

5. What are the various steps to create the request object?


In order to create a request object, we have to go through 4 different steps. These
are:
Step 1: Fill the route.
Step 2: Fetch the route.
Step 3: Create a request context.
Step 4: Create a controller instance.

6. Explain some benefits of using MVC?


Some common benefits of MVC are:
Support of multiple views: Since there is a separation of the model from its
view, the user interface (UI) gets the capability to implement multiple views of
the same data concurrently.
Faster development process: MVC has the ability to provide rapid and parallel
development. This means that while developing an application, it is more likely
that one programmer will perform some action on the view and in parallel,
another can work on creating the application’s business logic.
SEO-friendly development: The platform of MVC can support the SEO-friendly
development of web pages or web applications.
More Control: The MVC framework (of ASP.NET) offers additional control over
HTML, CSS and JavaScript than that of traditional WebForms.
Lightweight: MVC framework does not make use of View State which eventually
minimises the requested bandwidth to some extent.

7. Explain in brief the role of different MVC components?


The different MVC components have the following roles:


Presentation: This component takes care of the visual representation of a


particular abstraction in the application.
Control: This component takes care of the consistency and uniformity between
the abstraction within the system along with their presentation to the user. It is
also responsible for communicating with all other controls within the MVC
system.
Abstraction: This component deals with the functionality of the business
domain within the application.

8. How will you maintain the sessions in MVC?


The sessions of an MVC application can be maintained in 3 possible ways:
ViewData
TempData
ViewBag

9. What do you mean by partial view of MVC?


A partial view can be defined as a portion of HTML that is carefully injected into an
existing DOM. Partial views are commonly implemented for componentizing Razor
views, making them simpler to build and update. Controller methods can also
directly return the partial views.

10. Explain in brief the difference between adding routes in a


webform application & an MVC application?
We make use of the MapPageRoute() which is of the RouteCollection class for adding
routes in a webform application. Whereas, the MapRoute() method is used for adding
routes to an MVC application.

11. How will you define the 3 logical layers of MVC?


The 3 logical layers of MVC can be defined as follows:
Model logic acts as a business layer.
View logic acts as a display layer.
Controller logic acts as input control.

12. What is the use of ActionFilters in MVC?

ActionFilters are used for executing the logic while MVC action is executed.
Furthermore, action filters permit the implementation of pre and post-processing
logic and action methods.

13. How to execute any MVC project? Explain its steps.


For executing an MVC project, the steps followed are:
Receive the first request for the application.
Then, the routing is performed.
Then, the MVC request handler is created.
After that, the controller is created and executed.
Then, the action is invoked.
Then, the results are executed.

14. What is the concept of routing in MVC?


MVC routing can be defined as a pattern-matching scheme that is used for mapping
incoming requests of browsers to a definite MVC controller action.

15. What are the 3 important segments for routing?


The 3 important segments for routing are:
ControllerName.
ActionMethodName.
Parameter.

16. What are the different properties of MVC routes?


MVC routes are accountable for governing which controller method will be executed
for a given URL. Thus, the URL comprises the following properties:
Route Name: It is the URL pattern that is used for mapping the handler.
URL Pattern: It is another property containing the literal values as well as
variable placeholders (known as URL parameters).
Defaults: This is the default parameter value assigned at the time of parameter
creation.
Constraints: These are used for applying against the URL pattern for more
narrowly defining the URL matching it.


17. How is the routing carried out in MVC?


The RouteCollection contains a set of routes that are responsible for registering the
routes in the application. The RegisterRoutes method is used for recording the routes
in the collection. The URL patterns are defined by the routes and a handler is used
which checks the request matching the pattern. The MVC routing has 3 parameters.
The first parameter determines the name of the route. The second parameter
determines a specific pattern with which the URL matches. The third parameter is
responsible for providing default values for its placeholders.

18. How will you navigate from one view to another view in
MVC? Explain with a hyperlink example.
We will make use of the ActionLink method, which helps us navigate from one
view to another. Here is an example of navigating to the Home controller by invoking
the "GoTo Home" action. Here is how we can code it:

<%=Html.ActionLink("Home","GoTo Home")%>

19. Explain the 3 concepts in one line; Temp data, View, and
Viewbag?
We can briefly describe Temp data, View data, and View bag as:


Temp data: This is used for maintaining the data when there is a shift of work
from one controller to another.
View data: This is used for maintaining the data when we shift from a
controller to a view within an application.
View Bag: This acts as a view data’s dynamic wrapper.

20. Mention & explain the different approaches you will use to
implement Ajax in MVC?
There are 2 different approaches to implementing Ajax in MVC. These are:
jQuery: This is a library written using JavaScript for simplifying HTML-DOM
manipulation.
AJAX libraries: Asynchronous JavaScript and XML libraries are a set of web
development libraries written using JavaScript and are used to perform
common operations.

21. How will you differentiate between ActionResult and


ViewResult?
Some common differentiation between ActionResult and ViewResult is:


ActionResult:
It is an abstract class, meaning it has methods and variables without the
implementation body of instruction.
It is effective if you want to derive different types of views dynamically.
JsonResult, ViewResult, and FileStreamResult are some examples of its derived classes.

ViewResult:
This has been derived from the ActionResult class.
It is not so effective in deriving different types of views dynamically.
This class does not have its own derived class.

22. What is Spring MVC?


The Spring MVC or Spring Web MVC can be defined as a framework that provides a
“Model View Controller” (MVC) architecture in the application as well as ready
components implemented for developing adjustable and adaptable web
applications. It is actually a Java-based framework intended to build web
applications. It works on the Model-View-Controller design approach. This framework
also makes use of all the elementary traits of a core Spring Framework such as
dependency injection, lightweight, integration with other frameworks, inversion of
control, etc. Spring MVC has a dignified resolution for implementing MVC in Spring
Framework with the use of DispatcherServlet.
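
A minimal hypothetical sketch of a Spring MVC controller handled by the DispatcherServlet (class, mapping, and view names are assumptions):

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class HomeController {

    // DispatcherServlet routes GET /home here; the returned name is resolved to a view.
    @GetMapping("/home")
    public String home(Model model) {
        model.addAttribute("message", "Welcome to Spring MVC");
        return "home";   // logical view name, e.g. home.jsp or home.html
    }
}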

23. Explain briefly what you understand by separation of


concern.


Separation of Concerns can be defined as one of the core features as well as benefits
of using MVC and is supported by ASP.NET. Here, the MVC framework offers a distinct
detachment of the different concerns such as User Interface (UI), data and the
business logic.

24. What is TempData in MVC?


TempData can be defined as a dictionary object used for storing data for a short
period of time. This is the MVC’s TempDataDictionary class which acts as a Controller
base-class’s instance property. TempData has the ability to preserve data for an HTTP
request.

25. Define Output Caching in MVC?


Output Caching is an approach used for improving the performance of an MVC
application. It is used for enabling its users to cache the data sent back by the
controller method so that the data used earlier does not get generated each time
while invoking the same controller method. Output Caching has advantages:
it cuts down database server round trips, minimizes server round trips, and
reduces the network traffic.

26. Why are Minification and Bundling introduced in MVC?


Two new techniques have been included in MVC, known as Bundling and
Minification, whose primary function is to improve the request load time. They improve
the load time by reducing the number of requests sent to the server as well as
reducing the size of the requested assets (JavaScript and CSS).

27. Describe ASP.NET MVC?


The term ASP.NET MVC can be defined as a web application framework that is very
lightweight and has high testable features. ASP.NET supporting MVC uses 3 separate
components in its application. These are the Model, View, and Controller.

28. Which class will you use for sending the result back in JSON
format in MVC?
For sending back the result in JSON format in any MVC application, you have to
use the JsonResult class.

29. Make a differentiation between View and Partial View?


The major differentiation between View and Partial View is as follows:

View:
The view is not as lightweight as the partial view.
The view has its own layout page.
The ViewStart page is rendered just before rendering any view.
The view can have markup tags of HTML such as html, head, body, title, meta, etc.

Partial View:
A partial view, as the name suggests, is more lightweight than a view.
The partial view does not have its own layout page.
A partial view is designed particularly for rendering within the view.
The partial view does not contain any markup.

30. Define the concept of Filters in MVC?


There are situations where we want to implement some logic either prior to the
execution of the action method or right after it. In those scenarios, Action Filters are
used. Filters determine the logic to be executed before or after the
action method gets executed. Action methods make use of the action filters as an
attribute.
Different types of MVC action filters are:


Action filter (that implements the IActionFilter).


Exception filter (that implements the IExceptionFilter attribute).
Authorization filter (that implements the IAuthorizationFilter).
Result filter (that implements the IResultFilter).

31. Mention the significance of NonActionAttribute?


The various public methods that are associated with the controller class are
considered to be action methods. For preventing a public method from being treated as an
action method by default, you have to decorate it with the NonActionAttribute.

32. What is used to handle an error in MVC?


Error handling is usually done using exception handling, whether it’s a Windows
Forms application or a web application. The HandleError attribute is used, which
helps in providing built-in exception filters. The HandleError attribute of ASP.NET MVC can
be applied over the action method as well as the Controller, or at the global level.
Example of implementation:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new HandleErrorAttribute());
}

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
}

33. Define Scaffolding in MVC?


Scaffolding can be defined as an ASP.NET’s code-generation framework used in web
applications. Scaffolding is used in developing MVC applications when anyone wants
to rapidly enhance the code that intermingles with the application’s data model.
Scaffolding can also lower the quantity of time for developing a standard data
operation in the application.


34. When multiple filters are used in MVC, how is the ordering of
execution of the filters done?
The order in which filters are used:
First, the authorization filters are executed.
Followed by the Action filters.
Then, the response filters are executed.
Finally, the exception filters.

35. What is ViewStart?


A new layout page called _ViewStart is introduced by the Razor View Engine and is applied
to all views automatically. The _ViewStart page is executed at the very beginning, before the
start view and any other views are rendered.
Example:

@{
    Layout = "~/Views/Shared/_file.cshtml";
}
<html>
<head>
    <meta name="viewport" />
    <title> InitialView </title>
</head>
<body> ….
</body>
</html>

36. Which type of filters are executed in the end while


developing an MVC application?
In the end, while developing an MVC application, the “Exception Filters” are
executed.

37. Mention the possible file extensions used for razor views?
The different file extensions that are used by razor views are:
.cshtml: When your MVC application is using C# as the programming language.
.vbhtml: When your MVC application is using VB as the programming language.

38. Explain briefly the two approaches of adding constraints to


an MVC route?
For adding constraints to an MVC route, the 2 different approaches are:
By making use of regular expressions.
By making use of objects that implement the “IRouteConstraint” interface.

39. Point out the different stages a Page life cycle of MVC has?
The different steps or stages of the page life-cycle of MVC are:
Initialization of app.
Routing.
Instantiate the object followed by executing the controller.
Locate as well as invoke the controller action.
Instantiating and then rendering the view.

40. Explain briefly the use of ViewModel in MVC?


A ViewModel can be defined as a plain class having different properties. It is used for
binding a strongly typed view. A ViewModel can contain various validation rules
for its properties, defined using data annotations.

41. Define Default Route in MVC?


The default Route of project templates in MVC includes a generic route that makes
use of the given URL resolution for breaking the URL based on the request into 3
tagged segments. URL: “{controller} / {action} / {id}”

42. Explain briefly the GET and POST Action types?


The GET Action Type is implemented for requesting the data from a particular
resource. Using these GET requests, a developer can pass the URL (that is
compulsory).
The POST Action Type is implemented for submitting the data that needs to be
handled to a certain resource. Using these POST requests, a developer can move
with the URL, which is essential along with the data.


43. What are the rules of Razor syntax?


The primary rules for creating Razor are:


The block of Razor codes is enclosed within @{ ... }.
Variables and functions of inline expressions start with @ symbol.
The ‘var’ keyword is used for declaring variables.
Razor code statements are terminated with a semicolon.
C# files have .cshtml as file extension.

44. How can you implement the MVC forms authentication?


Authentication in forms is added in order to include a layer of security to access the
user for a specific service. This authentication is done by verifying the user’s identity
through the credentials such as username with password or email with a password.
The code snippet will look something like this:

<system.web>
    <authentication mode="Forms">
        <forms loginUrl="Login.aspx" protection="All" timeout="30" name=".ASPXAUTH" />
    </authentication>
</system.web>

45. What are the areas of benefits in using MVC?


The area of benefits of using MVC is:
Unit testing becomes much easier.
It permits its users to shape views, models, and controllers into 3 distinct
operational sections within an application.
It becomes easy to assimilate with other areas produced by another application.

46. Point out the two instances where you cannot use routing or
where routing is not necessary
The 2 situations where routing is not used or not necessary are:
When there is a physical file matching the URL pattern.
When any routing gets disabled in any particular URL pattern.


47. How will you explain the concept of RenderBody and


RenderPage of MVC?
RenderBody can be considered as a ContentPlaceHolder of web forms. It is available
on the layout page and will be responsible for rendering the child pages/views. On
the other hand, the layout page contains a single RenderBody() method. Multiple
RenderPage() can reside within the Layout page.

Additional Interview Resources:

C# Interview Questions and Answers


.NET Interview Questions and Answers
ASP.NET Interview Questions and Answers
Entity Framework Interview Questions and Answers



Spring Security Interview Questions

To view the live version of the


page, click here.

© Copyright by Interviewbit
Contents

Spring Security Interview Questions for Freshers


1. What are some essential features of Spring Security?
2. What is Spring security authentication and authorization?
3. What do you mean by basic authentication?
4. What do you mean by digest authentication?
5. What do you mean by session management in Spring Security?
6. Explain SecurityContext and SecurityContext Holder in Spring security.
7. Explain spring security OAuth2.
8. What do you mean by OAuth2 Authorization code grant type?
9. What is method security and why do we need it?
10. What do you mean by HASHING in spring security?
11. Explain salting and its usage.
12. What is PasswordEncoder?
13. Explain AbstractSecurityInterceptor in spring security?
14. Is security a cross-cutting concern?

Spring Security Interview Questions for Experienced
15. What is SpEL (Spring Expression Language)?
16. Name security annotations that are allowed to use SpEL.
17. Explain what is AuthenticationManager in Spring security.
18. Explain what is ProviderManager in Spring security.


19. What is JWT?


20. What is Spring Security Filter Chain?
21. Explain how the security filter chain works.
22. Name some predefined filters used in spring security and write their functions.
23. What do you mean by principal in Spring security?
24. Can you explain what is DelegatingFilterProxy in spring security?
25. Can you explain what is FilterChainProxy in spring security?
26. What is the intercept-url pattern and why do we need it?
27. Does order matter in the intercept-url pattern? If yes, then in which order should
we write it?
28. State the difference between ROLE_USER and ROLE_ANONYMOUS in a spring
intercept-url configuration.
29. State the difference between @PreAuthorize and @Secured in Spring security.
30. State the difference between @Secured and @RolesAllowed.



Let's get Started
Anything on the web, like web applications, is exposed to the open world of the
Internet and is therefore vulnerable to security threats. Only authorized personnel should
have access to web pages, files, and other classified resources. There are often
several layers of security, such as firewalls, proxy servers, JVM security, etc., but if
access is to be controlled, application-level security should also be applied. Hence,
Spring Security, a part of the Spring Framework, provides a means for applying a
layer of security to Java applications.

What is Spring Security?


Spring Security is essentially just a bunch of servlet filters that enable Java
applications to include authentication and authorization functionality. It is one of the
most powerful, and highly customizable access-control frameworks (security
framework) that provide authentication, authorization, and other security features
for Java EE (Enterprise edition) based enterprise applications. The real power of
Spring Security lies in its ability to be extended to meet custom needs. Its main
responsibility is to authenticate and authorize incoming requests for accessing any
resource, including rest API endpoints, MVC (Model-View-Controller) URLs, static
resources, etc.

Spring Security Interview Questions for Freshers


1. What are some essential features of Spring Security?
Some essential features of Spring Security include:
Supports authentication and authorization in a flexible and comprehensive
manner.
Detection and prevention of attacks including session fixation, clickjacking,
cross-site request forgery, etc.
Integrate with Servlet API.
Offers optional integration with Spring Web MVC (Model-View-Controller).
Java Authentication and Authorization Service (JAAS) is used for authentication
purposes.
Allows Single Sign-On so that users can access multiple applications with just
one account (username and password).

2. What is Spring security authentication and authorization?


Authentication: This refers to the process of verifying the identity of the user,
using the credentials provided when accessing certain restricted resources. Two
steps are involved in authenticating a user, namely identification and
verification. An example is logging into a website with a username and a
password. This is like answering the question Who are you?
Authorization: It is the ability to determine a user's authority to perform an
action or to view data, assuming they have successfully logged in. This ensures
that users can only access the parts of a resource that they are authorized to
access. It could be thought of as an answer to the question Can a user do/read
this?

3. What do you mean by basic authentication?


RESTful web services can be authenticated in many ways, but the most basic one is
basic authentication. For basic authentication, we send a username and password
using the HTTP [Authorization] header to enable us to access the resource.
Usernames and passwords are encoded using base64 encoding (not encryption) in
Basic Authentication. The encoding is not secure since it can be easily decoded.
Syntax:


Value = username:password
Encoded Value = base64(Value)
Authorization Value = Basic <Encoded Value>
//Example: Authorization: Basic VGVzdFVzZXI6dGVzdDEyMw==
//Decoding it will give back the original username:password TestUser:test123

4. What do you mean by digest authentication?


RESTful web services can be authenticated in many ways, but advanced
authentication methods include digest authentication. It applies a hash function to
username, password, HTTP method, and URI in order to send credentials in
encrypted form. It generates more complex cryptographic results by using the
hashing technique which is not easy to decode.
Syntax:

Hash1=MD5(username:realm:password)
Hash2=MD5(method:digestURI)
response=MD5(Hash1:nonce:nonceCount:cnonce:qop:Hash2)
//Example, this got generated by running this example
Authorization: Digest username="TestAdmin", realm="admin-digest-realm", nonce="MTYwMDEw

5. What do you mean by session management in Spring


Security?
As far as security is concerned, session management relates to securing and
managing multiple users' sessions against their request. It facilitates secure
interactions between a user and a service/application and pertains to a sequence of
requests and responses associated with a particular user. Session Management is one
of the most critical aspects of Spring security as if sessions are not managed properly,
the security of data will suffer. To control HTTP sessions, Spring security uses the
following options:
SessionManagementFilter
SessionAuthenticationStrategy
With these two, spring-security can manage the following security session options:


Session timeouts (amount of time a user can remain inactive on a website


before the site ends the session.)
Concurrent sessions (the number of sessions that an authenticated user can
have open at once).
Session-fixation (an attack that permits an attacker to hijack a valid user
session).

6. Explain SecurityContext and SecurityContext Holder in


Spring security.
There are two fundamental classes of Spring Security: SecurityContext and
SecurityContextHolder.
SecurityContext: In this, information/data about the currently authenticated
user (also known as the principal) is stored. So, in order to obtain a username or
any other information about the user, you must first obtain the SecurityContext.
SecurityContextHolder: Retrieving the currently authenticated principal is
easiest via a static call to the SecurityContextHolder. As a helper class, it
provides access to the security context. By default, it uses a ThreadLocal object
to store SecurityContext, so SecurityContext is always accessible to methods in
the same thread of execution, even if SecurityContext isn't passed around.
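
For example, the name of the currently authenticated user can be read like this (a small sketch; the helper class is hypothetical):

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class CurrentUserUtil {

    // Returns the username of the currently authenticated principal, or null if there is none.
    public static String currentUsername() {
        Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
        return (authentication != null) ? authentication.getName() : null;
    }
}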

7. Explain spring security OAuth2.


A simple authorization framework, OAuth 2.0, permits client applications to access
protected resources via an authorization server. Using it, a client application (third
party) can gain limited access to an HTTP service on behalf of the resource owner or
on its own behalf.


In OAuth2, four roles are available as shown below:


Resource Owner/User: The owner of a resource, i.e., the individual who holds
the rights to that resource.
Client: The application requests an access token (represents a user's permission
for the client to access their data/resources), then accesses the protected
resource server after receiving the access token.
Authorization Server: After successfully authenticating the resource owner and
obtaining authorization, the server issues access tokens to the client.
Resource Server: It provides access to requested resources. Initially, it validates
the access tokens, then it provides authorization.

8. What do you mean by OAuth2 Authorization code grant type?


The term "grant type" in OAuth 2.0 refers to the way an application gets an access
token. The authorization code flow is one of several types of grants defined by OAuth
2.0. This grant is used by both web applications and native applications to obtain an
access token after a user authorizes the application. As opposed to most other grant
types, it requires the application to first launch a browser to begin the process/flow.
The process involves the following steps:


The application opens a browser to direct the user to an OAuth server.


Upon seeing the authorization prompt, the user approves the application's
request.
Upon approval, the user is redirected back to the application with an
authorization code in the query string.
Application exchange authorization codes for access tokens.

9. What is method security and why do we need it?


Simply put, Spring method security lets us add or support authorization at the
method level. Spring security checks the authorization of the logged-in user in
addition to authentication. Upon login, the ROLE of the user is used to determine
which user is authorized to access the resource. When creating a new user in
WebSecurityConfig, we can specify his ROLE as well. A security measure applied to a
method prevents unauthorized users and only allows authentic users. The purpose of
method level security is not to facilitate users who have access but to prevent
unauthorized users from performing activities beyond their privileges and roles.
Method level security is implemented using AOP (Aspect-Oriented Programming).
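
A small sketch of method-level authorization with @PreAuthorize (the service class is hypothetical, and it assumes method security has been enabled in the security configuration):

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public class AccountService {

    // Only users holding the ADMIN role may invoke this method.
    @PreAuthorize("hasRole('ADMIN')")
    public void closeAccount(Long accountId) {
        // business logic
    }
}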

10. What do you mean by HASHING in spring security?


Databases often suffer from security problems when storing passwords. Plain text
passwords cannot be stored in your database because then anyone who has access to
the database would know the passwords of every user. The solution to this problem is
to store encrypted passwords in a database. This is called password hashing.


As part of a general security concept, hashing involves encoding a string according to
the hashing algorithm used. MD4, MD5, and SHA (Secure Hash Algorithm) variants like
SHA-256, SHA-128, etc., are some of the hashing algorithms that can be applied. The
hashing method should take the password as input and return a hashed string, which
should be stored in a database rather than the plain text.

11. Explain salting and its usage.


Spring Security automatically applies salting since version 3.1. Salting is the process
of combining random data with a password before password hashing. Salt improves
hashing by increasing its uniqueness and complexity without increasing the
requirements for users, thereby reducing password attacks. Hashed passwords are
then stored in a database, along with salt. Your application will be protected from
Dictionary-Attack by using salting. With Salt, you can add an extra string to the
password to make it more difficult for hackers to crack it.


12. What is PasswordEncoder?


Password encoding is provided by Spring Security using the PasswordEncoder
interface. This interface defines two methods:
encode(): It converts a plain password into an encoded form.
matches(): It compares an encoded password from the database with a plain
password (input by the user) that's been encoded using the same salting and
hashing algorithm as the encoded password.
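
As a small sketch, BCryptPasswordEncoder is one common implementation of this interface (the demo class name is hypothetical):

import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;

public class PasswordDemo {
    public static void main(String[] args) {
        PasswordEncoder encoder = new BCryptPasswordEncoder();

        // encode(): hashes (and salts) the raw password before it is stored.
        String hashed = encoder.encode("user123");

        // matches(): compares a raw password against the stored hash.
        boolean ok = encoder.matches("user123", hashed);
        System.out.println(ok);   // true
    }
}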

13. Explain AbstractSecurityInterceptor in spring security?


In Spring Security, the AbstractSecurityInterceptor handles the initial authorization
of incoming requests. AbstractSecurityInterceptor has two concrete
implementations:
FilterSecurityInterceptor: It will authorize all authenticated user requests.
MethodSecurityInterceptor: This is crucial for implementing method-level
security. It allows us to secure our program at the method level.

14. Is security a cross-cutting concern?


Spring Security is indeed a cross-cutting concern. Spring security is also using Spring
AOP (Aspect Oriented Programming) internally. A cross-cutting concern is one that
applies throughout the whole application and affects it all. Below are some cross-
cutting concerns related to the enterprise application.
Logging and tracing
Transaction management
Security
Caching
Error handling
Performance monitoring
Custom Business Rules

Spring Security Interview Questions for Experienced
15. What is SpEL (Spring Expression Language)?
Spring Framework 3.0 introduced Expression Language/ SpEL. In Spring Expression
Language (SpEL), queries and manipulations of object graphs are possible at
runtime. You can use it with XML and annotation-based Spring configurations. JSP
EL, OGNL, MVEL and JBoss EL are some of the expression languages available, but
SpEL provides additional features including string template functionality and
method invocation.
Example:


import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;

public class WelcomeTest
{
    public static void main(String[] args)
    {
        ExpressionParser parser = new SpelExpressionParser();
        Expression exp = parser.parseExpression("'WELCOMEtoSPEL'");
        String message = (String) exp.getValue();
        System.out.println(message);
        //OR
        //System.out.println(parser.parseExpression("'Hello SPEL'").getValue());
    }
}

Output:

WELCOMEtoSPEL

16. Name security annotations that are allowed to use SpEL.


Some security annotations that are allowed to use SpEL include:
@PreAuthorize
@PreFilter
@PostAuthorize
@PostFilter
These annotations provide expression-based access control. In Spring Security,
@PreAuthorize is one of the most powerful annotations and fully supports SpEL, whereas
the older @Secured annotation cannot use it: for example, you cannot write
@Secured("hasRole('ROLE_ADMIN')"), but you can write
@PreAuthorize("hasRole('ROLE_ADMIN')").

17. Explain what is AuthenticationManager in Spring security.


AuthenticationManager is the Spring Security component that tells "how
authentication will happen". Because the "how" depends on which authentication
provider we are using for our application, an AuthenticationManager contains
references to all the AuthenticationProviders. AuthenticationManager is the strategy
interface for authentication, which has only one method:

public interface AuthenticationManager {

    Authentication authenticate(Authentication authentication)
            throws AuthenticationException;
}

AuthenticationManagers can perform one of three actions in their authenticate() method:
If it can verify that the input represents a valid principal, it will return an
Authentication (normally authenticated=true).
If the input is believed to represent an invalid principal, it will throw an
AuthenticationException.
If it is unable to decide, it will return null.

18. Explain what is ProviderManager in Spring security.


The default implementation of AuthenticationManager is ProviderManager. It does
not handle the authentication request itself; instead, it delegates the authentication
process to a list of configured AuthenticationProviders. Each AuthenticationProvider
is queried in turn to see whether it can handle the authentication request.

19. What is JWT?


JWT (JSON Web Tokens) are tokens that are generated by a server upon user
authentication in a web application and are then sent to the client (normally a
browser). As a result, these tokens are sent on every HTTP request, allowing the
server to verify or authenticate the user's identity. This method is used for
authorizing transactions or requests between client and server. The use of JWT does
not intend to hide data, but rather ensure its authenticity. JWTs are signed and
encoded, instead of encrypted. A cryptographic algorithm is used to digitally sign
JWTs in order to ensure that they cannot be altered after they are issued. Information
contained in the token is signed by the server's private key in order to ensure
integrity.

Login credentials are sent by the user. When successful, JWT tokens (signed by
private key/secret key) are sent back by the server to the client.
The client takes JWT and inserts it in the Authorization header to make data
requests for the user.
Upon receiving the token from the client, the server simply needs to compare
the signature sent by the client to the one it generated with its private
key/secret key. The token will be valid once the signatures match.


Three parts make up JSON Web Tokens, separated by a dot (.). The first two (the
header and the payload) contain Base64-URL encoded JSON, while the third is a
cryptographic signature.
For example:

eyJhbGciOfefeiI1NiJ9.eyJuYW1lIjdgdfeENvZGVyIn0.5dlp7GmziL2dfecegse4mtaqv0_xX4oFUuTDh14K

Take a look at each of the sections:

eyJhbGciOfefeiI1NiJ9 #header
eyJuYW1lIjdgdfeENvZGVyIn0 #payload
5dlp7GmziL2dfecegse4mtaqv0_xX4oFUuTDh14KuF #signature

20. What is Spring Security Filter Chain?


Spring Security executes most of its security features using the filter chain. Spring
security is driven through servlet filters in web applications. A servlet filter intercepts
requests before they reach the protected resource (e.g., a Spring controller). As a
result, every request for a protected resource will be processed through a spring
security filter chain for completing authentication and authorization purposes.

21. Explain how the security filter chain works.


Here's how filters work in a web application:


Step 1: The client first sends a request for a resource (MVC controller). The
application container creates a filter chain for handling and processing incoming
requests.
Step 2: Each HttpServletRequest passes through the filter chain depending
upon the request URI. (We can configure whether the filter chains should be
applied to all requests or to the specific request URI).
Step 3: For most web applications, filters perform the following functions:
Modify or Change the HttpServletRequest/HttpServletResponse before it
reaches the Spring MVC controller.
Can stop the processing of the request and send a response to the client,
such as Servlets not allowing requests to specific URIs.

22. Name some predefined filters used in Spring Security and write their functions.


Filter chains in Spring Security are very complex and flexible. They use services such
as UserDetailsService and AuthenticationManager to accomplish their tasks. Their order
is also important, since a request typically needs to be authenticated before it can be
authorized. A few of the important security filters from Spring's filter chain are listed
below in the order in which they occur:
SecurityContextPersistenceFilter: Stores the SecurityContext contents
between HTTP requests. It also clears SecurityContextHolder when a request is
finished.
ConcurrentSessionFilter: It is responsible for handling concurrent sessions. Its
purpose is to refresh the last modified time of the request's session and to
ensure the session hasn't expired.
UsernamePasswordAuthenticationFilter: It's the most popular authentication
filter and is the one that's most often customized.
ExceptionTranslationFilter: This filter resides above FilterSecurityInterceptor in
the security filter stack. Although it doesn't perform actual security
enforcement, it handles exceptions thrown by the security interceptors and
returns valid and suitable HTTP responses.
FilterSecurityInterceptor: It is responsible for securing HTTP resources (web
URIs), and raising or throwing authentication and authorization exceptions
when access is denied.

23. What do you mean by principal in Spring security?


The principal is actually the currently logged in user that is using the application.
Information/data about the principal (currently authenticated user) is stored in the
SecurityContext of the application. As a helper class, SecurityContextHolder provides
access to the security context. By default, it uses a ThreadLocal object to store
SecurityContext, so SecurityContext is always accessible to methods in the same
thread of execution, even if SecurityContext isn't passed around explicitly.
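
A small sketch of reading the current principal through SecurityContextHolder (the helper
class shown is illustrative; the static getContext()/getAuthentication() calls are Spring
Security's standard API):

import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class CurrentUserUtil {

    // Returns the name of the currently authenticated user, or null if nobody is logged in.
    public static String currentUsername() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        if (auth == null || !auth.isAuthenticated()) {
            return null;
        }
        // getPrincipal() is often a UserDetails instance; getName() gives the username.
        return auth.getName();
    }
}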

24. Can you explain what is DelegatingFilterProxy in Spring Security?


A servlet filter must be declared in the web.xml file so that it can be invoked before
the request is passed on to the actual Servlet class. DelegatingFilterProxy is a servlet
filter embedded in the spring context. It acts as a bridge between web.xml (web
application) and the application context (Spring IoC Container).
DelegatingFilterProxy is a proxy that delegates incoming requests to filter beans that
are managed by the Spring application context. This gives those filters full access to
the Spring context's life cycle machinery and dependency injection.

Whenever a request reaches the web application, the proxy ensures that the request
is delegated to Spring Security, and, if everything goes smoothly, it will ensure that
the request is directed to the right resource within the web application. The following
example demonstrates how to configure the DelegatingProxyFilter in web.xml:


<?xml version="1.0" encoding="UTF-8"?>


<web-app>
<filter>
<filter-name>springSecurityFilterChain</filter-name>
<filter-class>
org.springframework.web.filter.DelegatingFilterProxy
</filter-class>
</filter>

<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
</web-app>

25. Can you explain what is FilterChainProxy in spring security?


FilterChainProxy is another servlet filter designed to invoke the appropriate filters
based on the path of the incoming request. It contains information about the
security filters that make up the security filter chain. It is not directly executed, but it
is started by the DelegatingFilterProxy.

26. What is the intercept-url pattern and why do we need it?


<intercept-url> is used to configure authorization or access control in a Spring
Security application. It is used to restrict access to a particular URL. The majority of
web applications using Spring Security have just a few intercept-url entries because
their security needs are fairly simple.
Example: Basic Spring security using intercept URL

<http realm="Example" use-expressions="false">


<intercept-url pattern="/index.jsp" access="IS_AUTHENTICATED_ANONYMOUSLY"/>
<intercept-url pattern="/login.jsp*" access="IS_AUTHENTICATED_ANONYMOUSLY"/>
<intercept-url pattern="/admin/*" access="ROLE_ADMIN"/>
<intercept-url pattern="/trade/*" access="ROLE_TRADER"/>
<intercept-url pattern="/**" access="ROLE_USER,ROLE_ADMIN,ROLE_TRADER"/>
<http-basic/>

In this case, index.jsp and login.jsp can be accessed without authentication.
Anything with admin in the URL requires ROLE_ADMIN access, and anything with
trade in the URL requires ROLE_TRADER access.

27. Does order matter in the intercept-url pattern? If yes, then in which order should we write it?
Yes, ordering is crucial when we have multiple intercept-URL patterns. Multiple
intercept URLs should be written from more specific to less specific. As intercept-URL
patterns are processed in the order they appear in a spring security configuration file,
the URL must match the right pattern.

28. State the difference between ROLE_USER and ROLE_ANONYMOUS in a spring intercept-url configuration.


ROLE_USER: It has no relevance unless you assign it to your users as soon as they are
authenticated. You are responsible for loading the roles (authorities) for each
authenticated user.
ROLE_ANONYMOUS: When a configuration uses Spring Security's "anonymous
authentication" filter, ROLE_ANONYMOUS is the default role assigned to an
anonymous (unauthenticated) user. ROLE_ANONYMOUS is enabled by default.
However, it would be better if you used the expression isAnonymous() instead,
which has the same meaning.

29. State the difference between @PreAuthorize and @Secured in Spring Security.
The Spring Framework offers a variety of tools and methods for securing applications.
For method-level security, @Secured and @PreAuthorize are the most commonly used
annotations. @PreAuthorize is newer than @Secured but has quickly become well known.
The two annotations serve the same purpose, but @PreAuthorize is considerably more
powerful than @Secured, as summarized below:


@PreAuthorize:
We can access the methods and properties of SecurityExpressionRoot while using @PreAuthorize.
It can work with Spring EL.
It supports multiple roles in conjunction with the AND operator. For example:
@PreAuthorize("hasRole('ROLE_role1') and hasRole('ROLE_role2')")
To enable the @PreAuthorize and @PostAuthorize annotations in your code, add the following line to spring-security.xml or Spring Boot:
XML: <global-method-security pre-post-annotations="enabled"/>
Spring boot: @EnableGlobalMethodSecurity(prePostEnabled = true)

@Secured:
We cannot access the methods and properties of SecurityExpressionRoot while using @Secured.
It cannot work with Spring EL.
It does not support the AND operator; if more than one role is specified, they are combined with the OR operator. For example:
@Secured({"ROLE_role1", "ROLE_role2"})
To enable the @Secured annotation in your code, add the following line to spring-security.xml or Spring Boot:
XML: <global-method-security secured-annotations="enabled"/>
Spring boot: @EnableGlobalMethodSecurity(securedEnabled = true)
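
A hedged sketch showing both annotations side by side (the class and method names are
illustrative; the configuration class simply switches both annotation styles on):

import org.springframework.context.annotation.Configuration;
import org.springframework.security.access.annotation.Secured;
import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.stereotype.Service;

@Configuration
@EnableGlobalMethodSecurity(prePostEnabled = true, securedEnabled = true)
class MethodSecurityConfig {
}

@Service
class ReportService {

    // SpEL: requires BOTH roles (AND) - something only @PreAuthorize can express.
    @PreAuthorize("hasRole('ROLE_ADMIN') and hasRole('ROLE_AUDITOR')")
    public String auditReport() {
        return "audit report";
    }

    // @Secured: multiple roles are combined with OR (either role grants access).
    @Secured({"ROLE_ADMIN", "ROLE_TRADER"})
    public String tradeReport() {
        return "trade report";
    }
}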

30. State the difference between @Secured and @RolesAllowed.


@RolesAllowed: It is a standard Java annotation (JSR-250), i.e., not specific to Spring
Security. Because this annotation only supports role-based security, it is more
limited than the @PreAuthorize annotation. To enable the @RolesAllowed annotation in
your code, add the following line to spring-security.xml or Spring Boot:

XML: <global-method-security jsr250-annotations="enabled"/>

Spring boot: @EnableGlobalMethodSecurity(jsr250Enabled = true)

@Secured: It is a Spring-specific annotation. There is more to it than just role-based
security: it secures methods implemented by beans (objects whose life cycle is
managed by the Spring IoC container). However, Spring Expression Language (SpEL) is not
supported for defining security constraints. To enable the @Secured annotation in
your code, add the following line to spring-security.xml or Spring Boot:

XML: <global-method-security secured-annotations="enabled"/>

Spring boot: @EnableGlobalMethodSecurity(securedEnabled=true)
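
A short sketch contrasting the two annotations on illustrative service methods (assuming
jsr250Enabled and securedEnabled have been switched on as shown above):

import javax.annotation.security.RolesAllowed;

import org.springframework.security.access.annotation.Secured;
import org.springframework.stereotype.Service;

@Service
public class OrderService {

    // JSR-250 standard annotation: purely role-based access.
    @RolesAllowed({"ROLE_USER", "ROLE_ADMIN"})
    public String placeOrder() {
        return "order placed";
    }

    // Spring-specific equivalent; also role-based and without SpEL support.
    @Secured("ROLE_ADMIN")
    public String cancelOrder() {
        return "order cancelled";
    }
}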

Conclusion
Spring Security is one of the most popular, powerful, and highly customizable access-
control frameworks (security framework) that provide authentication, authorization,
and other security features for enterprise applications. In this article, we have
compiled a comprehensive list of Spring Security Interview questions, which are
typically asked during interviews. In addition to checking your existing Spring
Security skills, these questions serve as a good resource for reviewing some
important concepts before you appear for an interview. It is suitable for both freshers
as well as experienced developers and tech leads.


Additional Useful Resources


Interview questions on Java
Interview questions on Spring Boot
Spring MVC vs Spring Boot
Spring vs Spring Boot





Contents

Docker Basic Interview Questions


1. Can you tell something about docker container?
2. What are docker images?
3. What is a DockerFile?
4. Can you tell what is the functionality of a hypervisor?
5. What can you tell about Docker Compose?
6. Can you tell something about docker namespace?
7. What is the docker command that lists the status of all docker containers?
8. On what circumstances will you lose data stored in a container?
9. What is docker image registry?
10. How many Docker components are there?
11. What is a Docker Hub?
12. What command can you run to export a docker image as an archive?
13. What command can be run to import a pre-exported Docker image into another
Docker host?
14. Can a paused container be removed from Docker?
15. What command is used to check for the version of docker client and server?

Docker Intermediate Interview Questions


16. Differentiate between virtualization and containerization.
17. Differentiate between COPY and ADD commands that are used in a Dockerfile?
18. Can a container restart by itself?

19. Can you tell the differences between a docker Image and Layer?
20. What is the purpose of the volume parameter in a docker run command?
21. Where are docker volumes stored in docker?
22. What does the docker info command do?
23. Can you tell the what are the purposes of up, run, and start commands of docker
compose?
24. What are the basic requirements for the docker to run on any system?
25. Can you tell the approach to login to the docker registry?
26. List the most commonly used instructions in Dockerfile?
27. Can you differentiate between Daemon Logging and Container Logging?
28. What is the way to establish communication between docker host and Linux
host?
29. What is the best way of deleting a container?
30. Can you tell the difference between CMD and ENTRYPOINT?

Docker Advanced Interview Questions


31. Can we use JSON instead of YAML while developing docker-compose file in
Docker?
32. How many containers you can run in docker and what are the factors influencing
this limit?
33. Describe the lifecycle of Docker Container?
34. How to use docker for multiple application environments?
35. How will you ensure that a container 1 runs before container 2 while using
docker compose?

Conclusion

36. Conclusion



Let's get Started
Introduction to Docker:
Docker is a very popular and powerful open-source containerization platform that is
used for building, deploying, and running applications. Docker allows you to
decouple the application/software from the underlying infrastructure.

What is a Container?
A container is a standard unit of software bundled with its dependencies so that
applications can be deployed quickly and reliably between different computing platforms.
Docker can be visualized as a big ship (docker) carrying huge boxes of products
(containers).
A Docker container doesn't require the installation of a separate operating
system. Docker relies on the kernel's functionality, using resource isolation for CPU
and memory and separate namespaces to isolate the application's view of the OS
(operating system).


Why Learn Docker?


Application development is a lot more than just writing code! It involves a lot of
behind-the-scenes work, such as the use of multiple frameworks and architectures at
every stage of the lifecycle, which makes the process complex and challenging.
Containerization helps developers simplify and accelerate the application workflow,
while giving them the liberty to develop using their own choice of technologies and
development environments.


All these aspects form the core of DevOps, which makes it all the more important for
any developer to know them in order to improve productivity and speed up development,
while keeping application scalability and efficient resource management in mind.
Imagine containers as very lightweight, pre-installed boxes with all the packages,
dependencies, and software required by your application; they can simply be deployed
to production with minimal configuration changes.
Lots of companies like PayPal, Spotify, Uber, etc., use Docker to simplify
operations and to bring infrastructure and security closer together, making
applications more secure.
Being portable, containers can be deployed on multiple platforms like bare-metal
instances, virtual machines, Kubernetes, etc., as per the requirements of scale or
the desired platform.

Docker Basic Interview Questions


1. Can you tell something about docker container?
In simplest terms, docker containers consist of applications and all their
dependencies.
They share the kernel and system resources with other containers and run as
isolated systems in the host operating system.
The main aim of docker containers is to get rid of the infrastructure dependency
while deploying and running applications. This means that any containerized
application can run on any platform irrespective of the infrastructure being used
beneath.
Technically, they are just the runtime instances of docker images.

2. What are docker images?


They are executable packages (bundled with application code & dependencies,
software packages, etc.) for the purpose of creating containers. Docker images can be
deployed to any docker environment and the containers can be spun up there to run
the application.

3. What is a DockerFile?


It is a text file that has all commands which need to be run for building a given
image.

4. Can you tell what is the functionality of a hypervisor?


A hypervisor is software that makes virtualization happen, which is why it is
sometimes referred to as the Virtual Machine Monitor. It divides the resources of
the host system and allocates them to each guest environment installed.


This means that multiple OS can be installed on a single host system.


Hypervisors are of 2 types:

1. Native Hypervisor: This type is also called a Bare-metal Hypervisor and runs
directly on the underlying host system which also ensures direct access to the
host hardware which is why it does not require base OS.
2. Hosted Hypervisor: This type makes use of the underlying host operating
system which has the existing OS installed.

5. What can you tell about Docker Compose?


It is a YAML file consisting of all the details regarding the various services, networks, and
volumes that are needed for setting up the Docker-based application. So, docker-compose
is used for creating multiple containers, hosting them, and establishing communication
between them. For the purpose of communication amongst the containers, ports are
exposed by each and every container.

6. Can you tell something about docker namespace?


A namespace is basically a Linux feature that ensures OS resources partition in a
mutually exclusive manner. This forms the core concept behind containerization as
namespaces introduce a layer of isolation amongst the containers. In docker, the
namespaces ensure that the containers are portable and they don't affect the
underlying host. Examples for namespace types that are currently being supported
by Docker – PID, Mount, User, Network, IPC.

7. What is the docker command that lists the status of all docker
containers?
In order to get the status of all the containers, we run the below command: docker
ps -a

8. Under what circumstances will you lose data stored in a container?
The data of a container remains in it until and unless you delete the container.

9. What is docker image registry?



A Docker image registry, in simple terms, is an area where the docker images are
stored. Instead of converting the applications to containers each and every time,
a developer can directly use the images stored in the registry.
This image registry can either be public or private and Docker hub is the most
popular and famous public registry available.

10. How many Docker components are there?


There are three docker components, they are - Docker Client, Docker Host, and
Docker Registry.
Docker Client: This component performs “build” and “run” operations for the
purpose of opening communication with the docker host.
Docker Host: This component has the main docker daemon and hosts
containers and their associated images. The daemon establishes a connection
with the docker registry.
Docker Registry: This component stores the docker images. There can be a
public registry or a private one. The most famous public registries are Docker
Hub and Docker Cloud.

11. What is a Docker Hub?


It is a public cloud-based registry provided by Docker for storing public images of


the containers along with the provision of finding and sharing them.
The images can be pushed to Docker Hub through the docker push command.

12. What command can you run to export a docker image as an archive?
This can be done using the docker save command and the syntax is: docker save -o
<exported_name>.tar <image_name>

13. What command can be run to import a pre-exported Docker image into another Docker host?
This can be done using the docker load command and the syntax is docker load -i
<export_image_name>.tar

14. Can a paused container be removed from Docker?


No, it is not possible! A container MUST be in the stopped state before we can
remove it.

15. What command is used to check for the version of docker client and server?

The command used to get all version information of the client and server is docker version.
To get only the server version details, we can run docker version --format
'{{.Server.Version}}'

Docker Intermediate Interview Questions


16. Differentiate between virtualization and containerization.
The question indirectly translates to explaining the difference between virtual
machines and Docker containers.


Virtualization:
It helps developers to run and host multiple OS on the hardware of a single physical server.
Hypervisors provide overall virtual machines to the guest operating systems.
These virtual machines form an abstraction of the system hardware layer, which means that each virtual machine on the host acts like a physical machine.

Containerization:
It helps developers to deploy multiple applications using the same operating system on a single virtual machine or server.
Containers ensure isolated environments/user spaces are provided for running the applications. Any changes done within the container do not reflect on the host or other containers of the same host.
Containers form an abstraction of the application layer, which means that each container constitutes a different application.

17. Differentiate between COPY and ADD commands that are used in a Dockerfile?
Both the commands have similar functionality, but COPY is more preferred
because of its higher transparency level than that of ADD .
COPY provides just the basic support of copying local files into the container
whereas ADD provides additional features like remote URL and tar extraction
support.


18. Can a container restart by itself?


Yes, it is possible only while using certain docker-defined policies while using the
docker run command. Following are the available policies:

1. Off: In this, the container won’t be restarted in case it's stopped or it fails.
2. On-failure: Here, the container restarts by itself only when it experiences
failures not associated with the user.
3. Unless-stopped: Using this policy, ensures that a container can restart only
when the command is executed to stop it by the user.
4. Always: Irrespective of the failure or stopping, the container always gets
restarted in this type of policy.

These policies can be used as:


docker run -dit --restart [restart-policy-value] [container_name]

19. Can you tell the differences between a docker Image and
Layer?
Image: This is built up from a series of read-only layers of instructions. An image
corresponds to the docker container and is used for speedy operation due to the
caching mechanism of each step.

Layer: Each layer corresponds to an instruction of the image’s Dockerfile. In simple


words, the layer is also an image but it is the image of the instructions run.

Consider the example Dockerfile below:

FROM ubuntu:18.04
COPY . /myapp
RUN make /myapp
CMD python /myapp/app.py

Importantly, each layer is only a set of differences from the layer before it.

- The result of building this docker file is an image. Whereas the instructions present
in this file add the layers to the image. The layers can be thought of as intermediate
images. In the example above, there are 4 instructions, hence 4 layers are added to
the resultant image.


20. What is the purpose of the volume parameter in a docker run command?
The syntax of docker run when using the volumes is: docker run -v
host_path:docker_path <container_name>
The volume parameter is used for syncing a directory of a container with any of
the host directories. Consider the below command as an example: docker run -
v /data/app:usr/src/app myapp
The above command mounts the directory /data/app in the host to the
usr/src/app directory. We can sync the container with the data files from the
host without having the need to restart it.
This also ensures data security in cases of container deletion. This ensures that
even if the container is deleted, the data of the container exists in the volume
mapped host location making it the easiest way to store the container data.

21. Where are docker volumes stored in docker?


Volumes are created and managed by Docker and cannot be accessed by non-docker
entities. They are stored in Docker host filesystem at /var/lib/docker/volumes/

22. What does the docker info command do?


The command gets detailed information about Docker installed on the host system.
The information can be like what is the number of containers or images and in what
state they are running and hardware specifications like total memory allocated,
speed of the processor, kernel version, etc.

23. Can you tell what are the purposes of up, run, and start commands of docker compose?


Using the up command for keeping a docker-compose up (ideally at all times),


we can start or restart all the networks, services, and drivers associated with the
app that are specified in the docker-compose.yml file. Now if we are running the
docker-compose up in the “attached” mode then all the logs from the
containers would be accessible to us. In case the docker-compose is run in the
“detached” mode, then once the containers are started, it just exits and shows
no logs.
Using the run command, the docker-compose can run one-off or ad-hoc tasks
based on the business requirements. Here, the service name has to be provided
and the docker starts only that specific service and also the other services to
which the target service is dependent (if any).
- This command is helpful for testing the containers and also performing tasks
such as adding or removing data to the container volumes etc.
Using the start command, only those containers can be restarted which were
already created and then stopped. This is not useful for creating new containers
on its own.

24. What are the basic requirements for the docker to run on
any system?
Docker can run on both Windows and Linux platforms.
For the Windows platform, Docker needs at least Windows 10 64-bit with 2GB of RAM.
For lower versions, Docker can be installed with the help of the Docker Toolbox.
Docker can be downloaded from the https://docs.docker.com/docker-for-windows/ website.
For Linux platforms, Docker can run on various Linux flavors such as Ubuntu
>=12.04, Fedora >=19, RHEL >=6.5, CentOS >=6 etc.

25. Can you tell the approach to login to the docker registry?
Using the docker login command, credentials can be entered to log in to and access
your own cloud repositories.

26. List the most commonly used instructions in Dockerfile?


FROM: This is used to set the base image for upcoming instructions. A docker file
is considered to be valid if it starts with the FROM instruction.
LABEL: This is used for the image organization based on projects, modules, or
licensing. It also helps in automation as we specify a key-value pair while
defining a label that can be later accessed and handled programmatically.
RUN: This command is used to execute instructions following it on the top of the
current image in a new layer. Note that with each RUN command execution, we
add layers on top of the image and then use that in subsequent steps.
CMD: This command is used to provide default values of an executing container.
In cases of multiple CMD commands the last instruction would be considered.

27. Can you differentiate between Daemon Logging and Container Logging?
In docker, logging is supported at 2 levels and they are logging at the Daemon
level or logging at the Container level.
Daemon Level: This kind of logging has four levels- Debug, Info, Error, and Fatal.
- Debug has all the data that happened during the execution of the daemon
process.
- Info carries all the information along with the error information during the
execution of the daemon process.
- Errors have those errors that occurred during the execution of the daemon
process.
- Fatal has the fatal errors that occurred during the execution.
Container Level:
- Container level logging can be done using the command: sudo docker run –it
<container_name> /bin/bash
- In order to check for the container level logs, we can run the command: sudo
docker logs <container_id>

28. What is the way to establish communication between docker host and Linux host?


This can be done using networking by identifying the “ipconfig” on the docker host.
This command ensures that an ethernet adapter is created as long as the docker is
present in the host.

29. What is the best way of deleting a container?


We need to follow the following two steps for deleting a container:
- docker stop <container_id>
- docker rm <container_id>

30. Can you tell the difference between CMD and ENTRYPOINT?
CMD command provides executable defaults for an executing container. In case
the executable has to be omitted then the usage of ENTRYPOINT instruction
along with the JSON array format has to be incorporated.
ENTRYPOINT specifies that the instruction within it will always be run when the
container starts.
This command provides an option to configure the parameters and the
executables. If the DockerFile does not have this command, then it would still
get inherited from the base image mentioned in the FROM instruction.
- The most commonly used ENTRYPOINT is /bin/sh or /bin/bash for most
of the base images.
As part of good practices, every DockerFile should have at least one of these two
commands.

Docker Advanced Interview Questions


31. Can we use JSON instead of YAML while developing docker-
compose file in Docker?
Yes! It can be used. In order to run docker-compose with JSON, docker-compose -f
docker-compose.json up can be used.

32. How many containers you can run in docker and what are
the factors influencing this limit?


There is no clearly defined limit to the number of containers that can be run within
docker. But it all depends on the limitations - more specifically hardware restrictions.
The size of the app and the CPU resources available are 2 important factors
influencing this limit. In case your application is not very big and you have abundant
CPU resources, then we can run a huge number of containers.

33. Describe the lifecycle of Docker Container?


The different stages of the docker container from the start of creating it to its end are
called the docker container life cycle.
The most important stages are:
Created: This is the state where the container has just been created new but not
started yet.
Running: In this state, the container would be running with all its associated
processes.
Paused: This state happens when the running container has been paused.
Stopped: This state happens when the running container has been stopped.
Deleted: In this, the container is in a dead state.

34. How to use docker for multiple application environments?


Docker-compose feature of docker will come to help here. In the docker-


compose file, we can define multiple services, networks, and containers along
with the volume mapping in a clean manner, and then we can just call the
command “docker-compose up”.
When there are multiple environments involved - it can be either dev, staging,
uat, or production servers, we would want to define the server-specific
dependencies and processes for running the application. In this case, we can go
ahead with creating environment-specific docker-compose files of the name
“docker-compose.{environment}.yml” and then based on the environment, we
can set up and run the application.

35. How will you ensure that a container 1 runs before container
2 while using docker compose?
Docker-compose does not wait for any container to be “ready” before going ahead
with the next containers. In order to achieve the order of execution, we can use:
The “depends_on” which got added in version 2 of docker-compose can be used
as shown in a sample docker-compose.yml file below:

version: "2.4"
services:
backend:
build: .
depends_on:
- db
db:
image: postgres

The introduction of service dependencies has various causes and effects:


The docker-compose up command starts and runs the services in the


dependency order specified. For the above example, the DB container is started
before the backend.
docker-compose up SERVICE_NAME by default includes the dependencies
associated with the service. In the given example, running docker-compose up
backend creates and starts DB (dependency of backend).
Finally, the command docker-compose stop also stops the services in the order
of the dependency specified. For the given example, the backend service is
stopped before the DB service.

Conclusion
36. Conclusion
DevOps technologies are growing at an exponential pace. As the systems are being
more and more distributed, developers have turned towards containerization
because of the need to develop software faster and maintain it better. They also
aid in easier and faster continuous integration and deployment process which is why
these technologies have experienced tremendous growth.
Docker is the most famous and popular tool for achieving the purpose of
containerization and continuous integration/development and also for continuous
deployment due to its great support for pipelines. With the growing ecosystem,
docker has proven itself to be useful to operate on multiple use cases thereby making
it all the more exciting to learn it!

To build a good Dockerfile: https://docs.docker.com/engine/reference/builder/



Microservices Interview Questions


© Copyright by Interviewbit
Contents

Microservices Interview Questions for Freshers


1. Write main features of Microservices.
2. Write main components of Microservices.
3. What are the benefits and drawbacks of Microservices?
4. Name three common tools mostly used for microservices.
5. Explain the working of Microservice Architecture.
6. Write difference between Monolithic, SOA and Microservices Architecture.
7. Explain spring cloud and spring boot.
8. What is the role of actuator in spring boot?
9. Explain how you can override the default properties of Spring boot projects.
10. What issues are generally solved by spring clouds?
11. What do you mean by Cohesion and Coupling?
12. What do you mean by Bounded Context?
13. Write the fundamental characteristics of Microservice Design.
14. What are the challenges that one has to face while using Microservices?
15. Explain PACT in microservices.
16. Explain how independent microservices communicate with each other.
17. What do you mean by client certificates?
18. Explain CDC.
19. Name some famous companies that use Microservice architecture.


Microservices Interview Questions for Experienced


20. What do you mean by Semantic Monitoring?
21. Explain continuous monitoring.
22. What do you mean by Domain driven design?
23. Explain OAuth.
24. What do you mean by Distributed Transaction?
25. Explain Idempotence and its usage.
26. What do you mean by end-to-end microservices testing?
27. Explain the term Eureka in Microservices.
28. Explain the way to implement service discovery in microservices architecture.
29. Explain the importance of reports and dashboards in microservices.
30. What are Reactive Extensions in Microservices?
31. Explain type of tests mostly used in Microservices.
32. What do you mean by Mike Cohn’s Test Pyramid?
33. Explain Container in Microservices.
34. What is the main role of docker in microservices?



Let's get Started

What do you mean by Microservice?

Microservices, also known as Microservices Architecture, is basically an SDLC


approach in which large applications are built as a collection of small functional
modules. It is one of the most widely adopted architectural concepts within software
development. In addition to helping in easy maintenance, this architecture also
makes development faster. Additionally, microservices are also a big asset for the
latest methods of software development such as DevOps and Agile. Furthermore, it
helps deliver large, complex applications promptly, frequently, and reliably.
Applications are modeled as collections of services, which are:
Maintainable and testable
Loosely coupled
Independently deployable
Designed or organized around business capabilities
Managed by a small team


Microservices Interview Questions for Freshers


1. Write main features of Microservices.
Some of the main features of Microservices include:


Decoupling: Within a system, services are largely decoupled. The application as


a whole can therefore be easily constructed, altered, and scalable
Componentization: Microservices are viewed as independent components that
can easily be exchanged or upgraded
Business Capabilities: Microservices are relatively simple and only focus on one
service
Team autonomy: Each developer works independently of each other, allowing
for a faster project timeline
Continuous Delivery: Enables frequent software releases through systematic
automation of software development, testing, and approval
Responsibility: Microservices are not focused on applications as projects.
Rather, they see applications as products they are responsible for
Decentralized Governance: Choosing the right tool according to the job is the
goal. Developers can choose the best tools to solve their problems
Agility: Microservices facilitate agile development. It is possible to create new
features quickly and discard them again at any time.

2. Write main components of Microservices.


Some of the main components of microservices include:
Containers, Clustering, and Orchestration
IaC [Infrastructure as Code Conception]
Cloud Infrastructure
API Gateway
Enterprise Service Bus
Service Delivery

3. What are the benefits and drawbacks of Microservices?


Benefits:


Self-contained, and independent deployment module.


Independently managed services.
In order to improve performance, the demand service can be deployed on
multiple servers.
It is easier to test and has fewer dependencies.
A greater degree of scalability and agility.
Simplicity in debugging & maintenance.
Better communication between developers and business users.
Development teams of a smaller size.
Drawbacks:
Due to the complexity of the architecture, testing and monitoring are more
difficult.
Lacks the proper corporate culture for it to work.
Pre-planning is essential.
Complex development.
Requires a cultural shi .
Expensive compared to monoliths.
Security implications.
Maintaining the network is more difficult.

4. Name three common tools mostly used for microservices.


Three common tools used for microservices include:
Wiremock
Docker
Hystrix

5. Explain the working of Microservice Architecture.


Microservice architectures consist of the following components:


Clients: Different users send requests from various devices.


Identity Provider: Validate a user's or client's identity and issue security tokens.
API Gateway: Handles the requests from clients.
Static Content: Contains all of the system's content.
Management: Services are balanced on nodes and failures are identified.
Service Discovery: A guide to discovering the routes of communication between
microservices.
Content Delivery Network: Includes distributed network of proxy servers and
their data centers.
Remote Service: Provides remote access to data or information that resides on
networked computers and devices.

6. Write difference between Monolithic, SOA and Microservices Architecture.


Monolithic Architecture: It is "like a big container" where all the so ware


components of an application are bundled together tightly. It is usually built as
one large system and is one code-base.
SOA (Service-Oriented Architecture): It is a group of services interacting or
communicating with each other. Depending on the nature of the
communication, it can be simple data exchange or it could involve several
services coordinating some activity.
Microservice Architecture: It involves structuring an application in the form of a
cluster of small, autonomous services modeled around a business domain. The
functional modules can be deployed independently, are scalable, are aimed at
achieving specific business goals, and communicate with each other over
standard protocols.

7. Explain spring cloud and spring boot.


Spring Cloud: In Microservices, the Spring cloud is a system that integrates with
external systems. This is a short-lived framework designed to build applications
quickly. It contributes significantly to microservice architecture due to its association
with finite amounts of data processing. Some of the key features of Spring Cloud include
service discovery, distributed/externalized configuration, intelligent routing, load
balancing, and circuit breakers.


Spring Boot: Spring Boot is an open-sourced, Java-based framework that provides


its developers with a platform on which they can create stand-alone, production-
grade Spring applications. In addition to reducing development time and increasing
productivity, it is easily understood.
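
A minimal sketch of a stand-alone Spring Boot application (the class name is illustrative):
the single @SpringBootApplication annotation enables auto-configuration, component
scanning, and the embedded server.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        // Boots the embedded server and the Spring application context.
        SpringApplication.run(DemoApplication.class, args);
    }
}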

8. What is the role of actuator in spring boot?


A spring boot actuator is a project that provides restful web services to access the
current state of an application that is running in production. In addition, you can
monitor and manage application usage in a production environment without having
to code or configure any of the applications.

9. Explain how you can override the default properties of Spring Boot projects.
By specifying properties in the application.properties file, it is possible to override the
default properties of a spring boot project.

Example:
In Spring MVC applications, you need to specify a suffix and prefix. You can do this by
adding the properties listed below in the application.properties file.
For suffix – spring.mvc.view.suffix: .jsp
For prefix – spring.mvc.view.prefix: /WEB-INF/

10. What issues are generally solved by spring clouds?


The following problems can be solved with spring cloud:
Complicated issues caused by distributed systems: This includes network
issues, latency problems, bandwidth problems, and security issues.
Service Discovery issues: Service discovery allows processes and services to
communicate and locate each other within a cluster.
Redundancy issues: Distributed systems can o en have redundancy issues.
Load balancing issues: Optimize the distribution of workloads among multiple
computing resources, including computer clusters, central processing units, and
network links.
Reduces performance issues: Reduces performance issues caused by various
operational overheads.

11. What do you mean by Cohesion and Coupling?


Coupling: It is defined as a relationship between software modules A and B, and how


much one module depends or interacts with another one. Couplings fall into three
major categories. Modules can be highly coupled (highly dependent), loosely
coupled, and uncoupled from each other. The best kind of coupling is loose coupling,
which is achieved through interfaces.

Cohesion: It is defined as a relationship between two or more parts/elements of a


module that serves the same purpose. Generally, a module with high cohesion can
perform a specific function efficiently without needing communication with any
other modules. High cohesion enhances the functionality of the module.

12. What do you mean by Bounded Context?


A Bounded Context is a central pattern in DDD (Domain-Driven Design), which deals
with collaboration across large models and teams. DDD breaks large models down
into multiple contexts to make them more manageable. Additionally, it explains their
relationship explicitly. The concept promotes an object-oriented approach to
developing services bound to a data model and is also responsible for ensuring the
integrity and mutability of said data model.


13. Write the fundamental characteristics of Microservice Design.
Based on Business Capabilities: Services are divided and organized around
business capabilities.
Products not projects: A product should belong to the team that handles it.
Essential messaging frameworks: Rely on lightweight messaging frameworks and
eliminate centralized service buses by embracing the concept of decentralization.
Decentralized Governance: The development teams are accountable for all aspects of
the software they produce.
Decentralized data management: Microservices allow each service to manage
its data separately.
Automated infrastructure: These systems are complete and can be deployed
independently.
Design for failure: Increase the tolerance for failure of services by focusing on
continuous monitoring of the applications.


14. What are the challenges that one has to face while using
Microservices?
The challenges that one has to face while using microservices can be both functional
and technical as given below:

Functional Challenges:
Require heavy infrastructure setup.
Need Heavy investment.
Require excessive planning to handle or manage operations overhead.
Technical Challenges:
Microservices are always interdependent. Therefore, they must communicate
with each other.
It is a heavily involved model because it is a distributed system.
You need to be prepared for operations overhead if you are using Microservice
architecture.
To support heterogeneously distributed microservices, you need skilled
professionals.
It is difficult to automate because of the number of smaller components. For
that reason, each component must be built, deployed, and monitored
separately.
It is difficult to manage configurations across different environments for all
components.
Challenges associated with deployment, debugging, and testing.

15. Explain PACT in microservices.


PACT is defined as an open-source tool that allows service providers and consumers
to test interactions in isolation against contracts that have been made to increase
the reliability of microservice integration. It also offers support for numerous
languages, such as Ruby, Java, Scala, .NET, JavaScript, and Swift/Objective-C.


16. Explain how independent microservices communicate with each other.
Communication between microservices can take place through:
HTTP/REST with JSON or binary protocol for request-response
Websockets for streaming.
A broker or server program that uses advanced routing algorithms.
RabbitMQ, Nats, Kafka, etc., can be used as message brokers; each is built to handle a
particular message semantic. You can also use Backend as a Service like Space Cloud
to automate your entire backend.
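
For instance, a minimal sketch of synchronous request-response communication over
HTTP/REST using Spring's RestTemplate (the service name, URL, and class below are
illustrative assumptions):

import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class OrderClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Calls a hypothetical "inventory" microservice and returns its response body.
    public String fetchInventoryStatus(String productId) {
        String url = "http://inventory-service/api/items/" + productId + "/status";
        return restTemplate.getForObject(url, String.class);
    }
}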

17. What do you mean by client certificates?


The client certificate is a type of digital certificate that generally allows client systems
to authenticate their requests to remote servers. In many mutual authentication
designs, it plays a key role in providing strong assurance of the requestor's identity.

18. Explain CDC.


As the name implies, CDC (Consumer-Driven Contract) basically ensures service
communication compatibility by establishing an agreement between consumers and
service providers regarding the format of the data exchanged between them. An
agreement like this is called a contract. Basically, it is a pattern used to develop
Microservices so that they can be efficiently used by external systems.

19. Name some famous companies that use Microservice architecture.
Microservices architecture has replaced monolithic architecture for most large-scale
websites like:
Twitter
Netflix
Amazon, etc.

Microservices Interview Questions for Experienced


20. What do you mean by Semantic Monitoring?

The semantic monitoring method, also called synthetic monitoring, uses automated
tests and monitoring of the application to identify errors in business processes. This
technology provides a deeper look into the transaction performance, service
availability, and overall application performance to identify performance issues of
microservices, catch bugs in transactions and provide an overall higher level of
performance.

21. Explain continuous monitoring.


Continuous monitoring involves identifying compliance and risk issues in a
company's financial and operational environment. It consists of people, processes,
and working systems that support efficient and effective operations.

22. What do you mean by Domain driven design?


DDD (Domain-Driven-Design) is basically an architectural style that is based on
Object-Oriented Analysis Design approaches and principles. In this approach, the
business domain is modeled carefully in software, without regard to how the system
actually works. By interconnecting related components of the software system into a
continuously evolving system, it facilitates the development of complex systems.
There are three fundamental principles underlying it as shown below:
Concentrate on the core domain and domain logic.
Analyze domain models to find complex designs.
Engage in regular collaboration with the domain experts to improve the
application model and address emerging domain issues.


23. Explain OAuth.


Generally speaking, OAuth (Open Authorization Protocol) enables users to
authenticate themselves with third-party service providers. With this protocol, you
can access client applications on HTTP for third-party providers such as GitHub,
Facebook, etc. Using it, you can also share resources on one site with another site
without requiring their credentials.

24. What do you mean by Distributed Transaction?


Distributed transactions are an outdated approach in today's microservice
architecture that leaves the developer with severe scalability issues. Transactions are
distributed to several services that are called to complete the transaction in
sequence. With so many moving parts, it is very complex and prone to failure.

25. Explain Idempotence and its usage.


The term 'idempotence' refers to performing a task repeatedly while always producing
the same outcome. In other words, it is a situation in which a task is performed
repeatedly with the end result remaining the same.

Usage: When the remote service or data source receives instructions more than
once, Idempotence ensures that it will process each request once.

26. What do you mean by end-to-end microservices testing?


Usually, end-to-end (E2E) microservice testing is an uncoordinated, high-cost
technique that is used to ensure that all components work together for a complete
user journey. Usually, it is done through the user interface, mimicking how it appears
to the user. It also ensures all processes in the workflow are working properly.

27. Explain the term Eureka in Microservices.


Eureka Server, also referred to as Netflix Service Discovery Server, is an application
that keeps track of all client-service applications. As every Microservice registers to
Eureka Server, Eureka Server knows all the client applications running on the
different ports and IP addresses. It generally uses Spring Cloud and is not heavy on
the application development process.

28. Explain the way to implement service discovery in microservices architecture.
There are many ways to set up service discovery, but Netflix's Eureka is the most
efficient. This is a hassle-free procedure that doesn't add much weight to the
application. It also supports a wide range of web applications. A number of
annotations are provided by Spring Cloud to make its use as simple as possible and
to hide complex concepts.
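As a rough illustration (an addition, not part of the original answer), service discovery with Netflix Eureka is usually set up along these lines in a Spring Boot application; the class name and configuration details are assumptions:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

// Eureka discovery server (assumes the spring-cloud-starter-netflix-eureka-server dependency).
@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}

A client service typically registers itself simply by adding the Eureka client starter and pointing its Eureka service URL configuration at this server; Spring Cloud's annotations hide the rest of the registration and lookup details.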

29. Explain the importance of reports and dashboards in microservices.
Monitoring a system usually involves the use of reports and dashboards. Using
reports and dashboards for microservices can help you:


Determine which microservices support which resources.


Determine which services are impacted whenever changes are made or occur to
components.
Make documentation easy to access whenever needed.
Review deployed component versions.
Determine the level of maturity and compliance from the components.

30. What are Reactive Extensions in Microservices?


A reactive extension, also known as Rx, is basically a design approach that calls
multiple services and then generates a single response by combining the results. The
calls can either be blocking or not blocking, synchronous or asynchronous. A popular
tool in distributed systems, Rx works exactly opposite to legacy flows.

31. Explain the types of tests mostly used in Microservices.


As there are multiple microservices working together, microservice testing becomes
quite complex when working with microservices. Consequently, tests are categorized
according to their level:


Bottom-level tests: The bottom-level tests are those that deal with technology, such
as unit tests and performance tests. This is a completely automated process.
Middle-level tests: In the middle, we have exploratory tests such as stress tests and
usability tests.
Top-level tests: In the top-level testing, we have a limited number of acceptance
tests. The acceptance tests help stakeholders understand and verify the software
features.

32. What do you mean by Mike Cohn’s Test Pyramid?


Mike Cohn's Test Pyramid explains the different types of automated tests needed for
software development. The test pyramid is basically used to maximize automation at
all levels of testing, including unit testing, service level testing, UI testing, etc. The
pyramid also states that unit tests are faster and more isolated, while UI tests, which
are at the top, are more time-consuming and are centered around integration.

In accordance with the pyramid, the number of tests should be highest at the first
layer. At the service layer, fewer tests should be performed than at the unit test level,
but greater than that at the end-to-end level.

33. Explain Container in Microservices.


Containers are useful technologies for allocating and sharing resources. They are
considered the most effective and easiest method for managing microservice-based
applications, allowing each service to be developed and deployed individually. Using
Docker, you may also encapsulate a microservice along with its dependencies in a
container image, which can then be used to roll out on-demand instances of the
microservice without any additional work.

34. What is the main role of docker in microservices?


Docker generally provides a container environment in which any application can be
hosted. This is accomplished by tightly packaging both the application and the
dependencies required to support it. These packaged products are referred to as
containers, and since Docker is used to create them, they are called Docker containers.
Docker, in essence, allows you to containerize your microservices and manage these
microservices more easily.

Conclusion:


Microservices architecture is a method of developing a large-scale application as a
collection of small autonomous services developed for a business domain. Since its
debut in 2011, microservices have become a popular technology, especially among
organizations building forward-thinking applications. This list of Microservices
interview questions was carefully constructed to assist the development community
in their interviews. Hope these Microservices Architect Interview Questions would be
helpful for your interview.



Links to More Interview Questions

C Interview Questions, PHP Interview Questions, C# Interview Questions, Web API Interview Questions, Hibernate Interview Questions, Node.js Interview Questions, C++ Interview Questions, OOPs Interview Questions, DevOps Interview Questions, Machine Learning Interview Questions, Docker Interview Questions, MySQL Interview Questions, CSS Interview Questions, Laravel Interview Questions, ASP.NET Interview Questions, Django Interview Questions, .NET Interview Questions, Kubernetes Interview Questions, Operating System Interview Questions, React Native Interview Questions, AWS Interview Questions, Git Interview Questions, Java 8 Interview Questions, MongoDB Interview Questions, DBMS Interview Questions, Spring Boot Interview Questions, Power BI Interview Questions, PL/SQL Interview Questions, Tableau Interview Questions, Linux Interview Questions, Ansible Interview Questions, Java Interview Questions, Jenkins Interview Questions


Contents

Basic Data Structure Interview Questions for Freshers
1. What are Data Structures?
2. Why Create Data Structures?
3. What are some applications of Data structures?
4. Explain the process behind storing a variable in memory.
5. Can you explain the difference between file structure and storage structure?
6. Describe the types of Data Structures?
7. What is a stack data structure? What are the applications of stack?
8. What are different operations available in stack data structure?
9. What is a queue data structure? What are the applications of queue?
10. What are different operations available in queue data structure?
11. Differentiate between stack and queue data structure.
12. How to implement a queue using stack?
13. How do you implement stack using queues?
14. What is array data structure? What are the applications of arrays?
15. Elaborate on different types of array data structure
16. What is a linked list data structure? What are the applications for the Linked list?
17. Elaborate on different types of Linked List data structures?
18. Difference between Array and Linked List.
19. What is an asymptotic analysis of an algorithm?
20. What is hashmap in data structure?


21. What is the requirement for an object to be used as key or value in HashMap?
22. How does HashMap handle collisions in Java?
23. What is the time complexity of basic operations get() and put() in HashMap
class?

Data Structure Interview Questions for Experienced
24. What is binary tree data structure? What are the applications for binary trees?
25. What is binary search tree data structure? What are the applications for binary
search trees?
26. What are tree traversals?
27. What is a deque data structure and its types? What are the applications for
deque?
28. What are some key operations performed on the Deque data structure?
29. What is a priority queue? What are the applications for priority queue?
30. Compare different implementations of priority queue
31. What is graph data structure and its representations? What are the applications
for graphs?
32. What is the difference between the Breadth First Search (BFS) and Depth First
Search (DFS)?
33. What is AVL tree data structure, its operations, and its rotations? What are the
applications for AVL trees?
34. What is a B-tree data structure? What are the applications for B-trees?
35. Define Segment Tree data structure and its applications.
36. Define Trie data structure and its applications
37. Define Red-Black Tree and its applications
38. Which data structures are used for implementing LRU cache?

39. What is a heap data structure?

Data Structure Coding Interview Questions


40. Write a program to remove duplicates from a sorted array in place?
41. Write a function for zigzag traversal in a binary tree
42. Write a function to sort a linked list of 0s, 1s and 2s
43. Write a function to detect cycle in an undirected graph
44. Write a function to convert an infix expression to postfix expression
45. Write a function to find the maximum for each and every contiguous subarray of
size k.
46. Write a function to merge two sorted binary search tree
47. Write a function to print all unique rows of the given matrix.
48. Write a function to find number of subarrays with product less than K
49. Find the subsequence of length 3 with the highest product from a sequence of
non-negative integers, with the elements in increasing order.
50. Write a function to implement Quicksort on Doubly Linked List
51. Write a function to connect nodes at the same level of a binary tree
52. Write a function to find number of structurally unique binary trees are possible
53. Implement LRU(Least Recently Used) Cache
54. Write a function to determine whether duplicate elements in a given array are
within a given distance of each other.
55. Write a recursive function to calculate the height of a binary tree in Java.
56. Write Java code to count number of nodes in a binary tree


57. Print Left view of any binary tree.


58. Given an m x n 2D grid map of '1’s which represents land and '0’s that represents
water return the number of islands (surrounded by water and formed by
connecting adjacent lands in 2 directions - vertically or horizontally).
59. What is topological sorting in a graph?



Let's get Started
Data structures are the building blocks of any computer program as they help in
organizing and manipulating data in an efficient manner. Without data structures,
the computer would be unable to understand how to follow a program's instructions
properly. It also defines their relationship with one another.
Arrays, Linked Lists, Stacks, Queues, and others are examples of Data Structure.
Data structures also provide clarity, organization and structure to the program's code
while also helping the programmer ensure that each line of code performs its
function correctly.

In this article, we've compiled the answers to the most frequently asked Data
Structure Interview Questions so that you may better prepare for your job
interview.


Basic Data Structure Interview Questions for Freshers
1. What are Data Structures?
A data structure is a mechanical or logical way that data is organized within a
program. The organization of data is what determines how a program performs.
There are many types of data structures, each with its own uses. When designing
code, we need to pay particular attention to the way data is structured. If data isn't
stored efficiently or correctly structured, then the overall performance of the code
will be reduced.

2. Why Create Data Structures?


Data structures serve a number of important functions in a program. They ensure
that each line of code performs its function correctly and efficiently, they help the
programmer identify and fix problems with his/her code, and they help to create a
clear and organized code base.

3. What are some applications of Data structures?


Following are some real-time applications of data structures:


Decision Making
Genetics
Image Processing
Blockchain
Numerical and Statistical Analysis
Compiler Design
Database Design and many more

4. Explain the process behind storing a variable in memory.


A variable is stored in memory based on the amount of memory that is needed.
Following are the steps followed to store a variable:
The required amount of memory is assigned first.
Then, it is stored based on the data structure being used.
Using concepts like dynamic allocation ensures high efficiency and that the
storage units can be accessed based on requirements in real-time.

5. Can you explain the difference between file structure and storage structure?


File Structure: Representation of data in secondary or auxiliary memory, i.e. on any
device such as a hard disk or pen drive that stores data which remains intact until
manually deleted, is known as a file structure representation.
Storage Structure: In this type, data is stored in the main memory i.e. RAM, and
is deleted once the function that uses this data gets completely executed.
The difference is that the storage structure has data stored in the memory of the
computer system, whereas the file structure has the data stored in the auxiliary
memory.

6. Describe the types of Data Structures?

Linear Data Structure: A data structure that includes data elements arranged
sequentially or linearly, where each element is connected to its previous and
next nearest elements, is referred to as a linear data structure. Arrays and linked
lists are two examples of linear data structures.
Non-Linear Data Structure: Non-linear data structures are data structures in
which data elements are not arranged linearly or sequentially. We cannot walk
through all elements in one pass in a non-linear data structure, as in a linear
data structure. Trees and graphs are two examples of non-linear data structures.

7. What is a stack data structure? What are the applications of stack?


A stack is a data structure that is used to represent the state of an application at a
particular point in time. The stack consists of a series of items that are added to the
top of the stack and then removed from the top. It is a linear data structure that
follows a particular order in which operations are performed. LIFO (Last In First Out)
or FILO (First In Last Out) are two possible orders. A stack consists of a sequence of
items. The element that's added last will come out first; a real-life example might be
a stack of clothes on top of each other. When we remove the cloth that was
previously on top, we can say that the cloth that was added last comes out first.

Following are some applications for stack data structure:


It acts as temporary storage during recursive operations
Redo and Undo operations in doc editors
Reversing a string
Parenthesis matching
Postfix to Infix Expressions
Function calls order

8. What are different operations available in stack data structure?


Some of the main operations provided in the stack data structure are:
push: This adds an item to the top of the stack. The overflow condition occurs if
the stack is full.
pop: This removes the top item of the stack. Underflow condition occurs if the
stack is empty.
top: This returns the top item from the stack.
isEmpty: This returns true if the stack is empty else false.
size: This returns the size of the stack.
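For illustration (not part of the original answer), these operations map directly onto java.util.ArrayDeque used as a stack:

import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(10);                       // push: add to the top
        stack.push(20);
        System.out.println(stack.peek());     // top: prints 20
        System.out.println(stack.pop());      // pop: removes and prints 20
        System.out.println(stack.isEmpty());  // false
        System.out.println(stack.size());     // 1
    }
}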

9. What is a queue data structure? What are the applications of queue?
A queue is a linear data structure that allows users to store items in a list in a
systematic manner. The items are added to the queue at the rear end until they are
full, at which point they are removed from the queue from the front. Queues are
commonly used in situations where the users want to hold items for a long period of
time, such as during a checkout process. A good example of a queue is any queue of
customers for a resource where the first consumer is served first.

Following are some applications of queue data structure:


Breadth-first search algorithm in graphs


Operating system: job scheduling operations, Disk scheduling, CPU scheduling
etc.
Call management in call centres

10. What are different operations available in queue data structure?
enqueue: This adds an element to the rear end of the queue. Overflow
conditions occur if the queue is full.
dequeue: This removes an element from the front end of the queue. Underflow
conditions occur if the queue is empty.
isEmpty: This returns true if the queue is empty or else false.
rear: This returns the rear end element without removing it.
front: This returns the front-end element without removing it.
size: This returns the size of the queue.
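Similarly, a short illustrative Java sketch (added here, not in the original answer) of these queue operations using java.util.ArrayDeque as a FIFO queue:

import java.util.ArrayDeque;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        queue.offer("a");                     // enqueue at the rear
        queue.offer("b");
        System.out.println(queue.peek());     // front: prints "a"
        System.out.println(queue.poll());     // dequeue: removes and prints "a"
        System.out.println(queue.isEmpty());  // false
        System.out.println(queue.size());     // 1
    }
}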

11. Differentiate between stack and queue data structure.


Stack | Queue
Stack is a linear data structure where data is added and removed from the top. | Queue is a linear data structure where data is added at the rear end and removed from the front.
Stack is based on the LIFO (Last In First Out) principle. | Queue is based on the FIFO (First In First Out) principle.
Insertion operation in a stack is known as push. | Insertion operation in a queue is known as enqueue.
Delete operation in a stack is known as pop. | Delete operation in a queue is known as dequeue.
Only one pointer is available for both addition and deletion: top(). | Two pointers are available for addition and deletion: front() and rear().
Used in solving recursion problems. | Used in solving sequential processing problems.

12. How to implement a queue using stack?


A queue can be implemented using two stacks. Let q be the queue and stack1
and stack2 be the 2 stacks for implementing q . We know that stack supports
push, pop, and peek operations and using these operations, we need to emulate the
operations of the queue - enqueue and dequeue. Hence, queue q can be
implemented in two methods (Both the methods use auxiliary space complexity of
O(n)):
1. By making enqueue operation costly:
Here, the oldest element is always at the top of stack1 which ensures
dequeue operation occurs in O(1) time complexity.
To place the element at top of stack1, stack2 is used.
Pseudocode:
Enqueue: Here time complexity will be O(n)

enqueue(q, data):
While stack1 is not empty:
Push everything from stack1 to stack2.
Push data to stack1
Push everything back to stack1.

Dequeue: Here time complexity will be O(1)

deQueue(q):
If stack1 is empty then error else
Pop an item from stack1 and return it

2. By making the dequeue operation costly:


Here, for enqueue operation, the new element is pushed at the top of stack1 .
Here, the enqueue operation time complexity is O(1).
In dequeue, if stack2 is empty, all elements from stack1 are moved to
stack2 and top of stack2 is the result. Basically, reversing the list by
pushing to a stack and returning the first enqueued element. This operation of
pushing all elements to a new stack takes O(n) complexity.
Pseudocode:
Enqueue: Time complexity: O(1)

enqueue(q, data):
Push data to stack1

Dequeue: Time complexity: O(n)

dequeue(q):
If both stacks are empty then raise error.
If stack2 is empty:
While stack1 is not empty:
push everything from stack1 to stack2.
Pop the element from stack2 and return it.
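The pseudocode above can be turned into Java roughly as follows; this sketch (an addition for illustration) implements the second approach, where the dequeue operation is made costly:

import java.util.ArrayDeque;
import java.util.Deque;

// Queue built from two stacks; enqueue is O(1), dequeue is amortized O(1).
class QueueUsingStacks {
    private final Deque<Integer> stack1 = new ArrayDeque<>(); // receives new elements
    private final Deque<Integer> stack2 = new ArrayDeque<>(); // serves dequeues

    void enqueue(int data) {
        stack1.push(data);
    }

    int dequeue() {
        if (stack2.isEmpty()) {
            if (stack1.isEmpty()) throw new IllegalStateException("queue is empty");
            // Reverse stack1 into stack2 so the oldest element ends up on top.
            while (!stack1.isEmpty()) stack2.push(stack1.pop());
        }
        return stack2.pop();
    }
}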

13. How do you implement stack using queues?


A stack can be implemented using two queues. We know that a queue supports
enqueue and dequeue operations. Using these operations, we need to develop
push, pop operations.
Let stack be ‘s’ and queues used to implement be ‘q1’ and ‘q2’. Then, stack ‘s’
can be implemented in two ways:
1. By making push operation costly:


This method ensures that the newly entered element is always at the front of
‘q1’ so that pop operation just dequeues from ‘q1’.
‘q2’ is used as an auxiliary queue to put every new element in front of ‘q1’ while
ensuring pop happens in O(1) complexity.
Pseudocode:
Push element to stack s: Here push takes O(n) time complexity.

push(s, data):
Enqueue data to q2
Dequeue elements one by one from q1 and enqueue to q2.
Swap the names of q1 and q2

Pop element from stack s: Takes O(1) time complexity.

pop(s):
dequeue from q1 and return it.

2. By making pop operation costly:


In push operation, the element is enqueued to q1.
In pop operation, all the elements from q1 except the last remaining element,
are pushed to q2 if it is empty. That last element remaining of q1 is dequeued
and returned.
Pseudocode:
Push element to stack s: Here push takes O(1) time complexity.

push(s,data):
Enqueue data to q1

Pop element from stack s: Takes O(n) time complexity.


pop(s):
Step1: Dequeue every elements except the last element from q1 and enqueue to q2.
Step2: Dequeue the last item of q1, the dequeued item is stored in result variable.
Step3: Swap the names of q1 and q2 (for getting updated data after dequeue)
Step4: Return the result.
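Likewise, a brief Java sketch (added for illustration) of the first approach, where the push operation is made costly using two queues:

import java.util.ArrayDeque;
import java.util.Queue;

// Stack built from two queues; push is O(n), pop is O(1).
class StackUsingQueues {
    private Queue<Integer> q1 = new ArrayDeque<>();
    private Queue<Integer> q2 = new ArrayDeque<>();

    void push(int data) {
        q2.offer(data);                              // enqueue the new element first
        while (!q1.isEmpty()) q2.offer(q1.poll());   // move older elements behind it
        Queue<Integer> tmp = q1; q1 = q2; q2 = tmp;  // swap the names of q1 and q2
    }

    int pop() {
        if (q1.isEmpty()) throw new IllegalStateException("stack is empty");
        return q1.poll();                            // newest element is at the front of q1
    }
}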

14. What is array data structure? What are the applications of arrays?
An array data structure is a data structure that is used to store data in a way that is
efficient and easy to access. It is similar to a list in that it stores data in a sequence.
However, an array differs from a general list in that it holds a fixed-size collection of
elements of the same type in contiguous memory locations, with each element identified
by an index, so any element can be accessed directly in constant time.

Array data structures are commonly used in databases and other computer systems
to store large amounts of data efficiently. They are also useful for storing information
that is frequently accessed, such as large amounts of text or images.

15. Elaborate on different types of array data structure


There are several different types of arrays:


One-dimensional array: A one-dimensional array stores its elements in contiguous
memory locations, accessing them using a single index value. It is a linear data
structure holding all the elements in a sequence.

Two-dimensional array: A two-dimensional array is a tabular array that includes rows
and columns and stores data. An M × N two-dimensional array is created by grouping
the elements into M rows and N columns.


Three-dimensional array: A three-dimensional array is a grid that has rows, columns,
and depth as a third dimension; it can be visualized as a cube of elements. A position
in a three-dimensional array is addressed with three subscripts: depth (dimension or
layer) is the first index, the row index is the second index, and the column index is
the third index.

16. What is a linked list data structure? What are the applications for the Linked list?
A linked list can be thought of as a series of linked nodes (or items) that are
connected by links (or paths). Each link represents an entry into the linked list, and
each entry points to the next node in the sequence. The order in which nodes are
added to the list is determined by the order in which they are created.


Following are some applications of linked list data structure:


Stack, Queue, binary trees, and graphs are implemented using linked lists.
Dynamic management for Operating System memory.
Round robin scheduling for operating system tasks.
Forward and backward operation in the browser.

17. Elaborate on different types of Linked List data structures?


Following are different types of linked lists:


1. Singly Linked List: A singly linked list is a data structure that is used to store
multiple items. The items are linked together using the key. The key is used to
identify the item and is usually a unique identifier. In a singly linked list, each item is
stored in a separate node. The node can be a single object or it can be a collection of
objects. When an item is added to the list, the node is updated and the new item is
added to the end of the list. When an item is removed from the list, the node that
contains the removed item is deleted and its place is taken by another node. The key
of a singly linked list can be any type of data structure that can be used to identify an
object. For example, it could be an integer, a string, or even another singly linked list.
Singly-linked lists are useful for storing many different types of data. For example,
they are commonly used to store lists of items such as grocery lists or patient records.
They are also useful for storing data that is time sensitive such as stock market prices
or flight schedules.

2. Doubly Linked List: A doubly linked list is a data structure that allows for two-way
data access such that each node in the list points to the next node in the list and also
points back to its previous node. In a doubly linked list, each node can be accessed by
its address, and the contents of the node can be accessed by its index. It's ideal for
applications that need to access large amounts of data in a fast manner. A
disadvantage of a doubly linked list is that it is more difficult to maintain than a
single-linked list. In addition, it is more difficult to add and remove nodes than in a
single-linked list.


3. Circular Linked List: A circular linked list is a unidirectional linked list where each
node points to its next node and the last node points back to the first node, which
makes it circular.

4. Doubly Circular Linked List: A doubly circular linked list is a linked list where each
node points to its next node and its previous node and the last node points back to
the first node and first node’s previous points to the last node.


5. Header List: A list that contains the header node at the beginning of the list, is
called the header-linked list. This is helpful in calculating some repetitive operations
like the number of elements in the list etc.

18. Difference between Array and Linked List.


Arrays | Linked Lists
An array is a collection of data elements of the same type. | A linked list is a collection of entities known as nodes. Each node is divided into two sections: data and address.
It keeps the data elements in a single contiguous block of memory. | It stores elements at random, anywhere in the memory.
The memory size of an array is fixed and cannot be changed during runtime. | The memory size of a linked list is allocated during runtime.
An array's elements are not dependent on one another. | Linked list elements are dependent on one another.
It is easier and faster to access an element in an array. | In the linked list, it takes time to access an element.
Memory utilization is ineffective in the case of an array. | Memory utilization is effective in the case of a linked list.
Operations like insertion and deletion take longer time in an array. | Operations like insertion and deletion are faster in a linked list.


19. What is an asymptotic analysis of an algorithm?


Asymptotic analysis of an algorithm defines its run-time performance in terms of
mathematical bounds. Asymptotic analysis helps us articulate the best case (Omega
notation, Ω), average case (Theta notation, θ), and worst case (Big O notation, Ο)
performance of an algorithm.

20. What is hashmap in data structure?


Hashmap is a data structure that uses an implementation of a hash table data
structure which allows access to data in constant time (O(1)) complexity if you have
the key.

21. What is the requirement for an object to be used as key or value in HashMap?
The key or value object that gets used in the hashmap must implement the
equals() and hashCode() methods.
The hash code is used when inserting the key object into the map and the equals
method is used when trying to retrieve a value from the map.
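As an illustrative sketch (not from the original answer), a small class that is safe to use as a HashMap key because it overrides both methods consistently; the Point class here is a made-up example:

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;   // equal coordinates mean equal keys
    }

    @Override public int hashCode() { return Objects.hash(x, y); } // consistent with equals
}

// Usage: Map<Point, String> map = new HashMap<>(); map.put(new Point(1, 2), "value");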

22. How does HashMap handle collisions in Java?


The java.util.HashMap class in Java uses the approach of chaining to handle
collisions. In chaining, if the new values with the same key are attempted to be
pushed, then these values are stored in a linked list stored in a bucket of the key
as a chain along with the existing value.
In the worst-case scenario, it can happen that all keys might have the same
hashcode, which will result in the hash table turning into a linked list. In this
case, searching a value will take O(n) complexity as opposed to O(1) time due to
the nature of the linked list. Hence, care has to be taken while selecting hashing
algorithm.

23. What is the time complexity of basic operations get() and put() in HashMap class?

The time complexity is O(1), assuming that the hash function used in the hash map
distributes the elements uniformly among the buckets.

Data Structure Interview Questions for Experienced
24. What is binary tree data structure? What are the
applications for binary trees?
A binary tree is a data structure that is used to organize data in a way that allows for
efficient retrieval and manipulation. It is a hierarchical structure made up of nodes,
where each node stores a value and has at most two children, referred to as the left
child and the right child. The topmost node is called the root, nodes with no children
are called leaves, and every node other than the root has exactly one parent. When a
node is deleted from the tree, the links from its parent and to its children must be
adjusted accordingly.
Following are some applications for binary tree data structure:

It's widely used in computer networks for storing routing table information.
Decision Trees.
Expression Evaluation.
Database indices.


25. What is binary search tree data structure? What are the
applications for binary search trees?
A binary search tree is a data structure that stores items in sorted order. In a binary
search tree, each node stores a key and a value. The key is used to access the item
and the value is used to determine whether the item is present or not. The key can be
any type of value such as an integer, floating point number, character string, or even
a combination of these types. The value can be any type of items such as an integer,
floating point number, character string, or even a combination of these types. When
a node is added to the tree, its key is used to access the item stored at that node.
When a node is removed from the tree, its key is used to access the item stored at
that node.
A binary search tree is a special type of binary tree that has a specific order of
elements in it. It has three basic qualities:
All elements in the left subtree of a node should have a value less than or equal
to the parent node's value, and
All elements in the right subtree of a node should have a value greater than or
equal to the parent node's value.
Both the left and right subtrees must be binary search trees too.


Following are some applications for the binary search tree data structure:


It is used for indexing and multi-level indexing.
It is used for implementing various search algorithms.
It is helpful in organizing a sorted stream of data.

26. What are tree traversals?


Tree traversal is the process of visiting all the nodes of a tree. Since the root (head) is
the first node and all nodes are connected via edges (or links) we always start with
that node. There are three ways which we use to traverse a tree −
1. Inorder Traversal:
Algorithm:
Step 1. Traverse the left subtree, i.e., call Inorder(root.left)
Step 2. Visit the root.
Step 3. Traverse the right subtree, i.e., call Inorder(root.right)
Inorder traversal in Java:


// Print inorder traversal of given tree.


void printInorderTraversal(Node root)
{
if (root == null)
return;
//first traverse to the left subtree
printInorderTraversal(root.left);
//then print the data of node
System.out.print(root.data + " ");
//then traverse to the right subtree
printInorderTraversal(root.right);
}

Uses: In binary search trees (BST), inorder traversal gives nodes in ascending
order.
2. Preorder Traversal:
Algorithm:
Step 1. Visit the root.
Step 2. Traverse the left subtree, i.e., call Preorder(root.left)
Step 3. Traverse the right subtree, i.e., call Preorder(root.right)
Preorder traversal in Java:

// Print preorder traversal of given tree.


void printPreorderTraversal(Node root)
{
if (root == null)
return;
//first print the data of node
System.out.print(root.data + " ");
//then traverse to the left subtree
printPreorderTraversal(root.left);
//then traverse to the right subtree
printPreorderTraversal(root.right);
}

Uses:
Preorder traversal is commonly used to create a copy of the tree.
It is also used to get prefix expression of an expression tree.


3. Postorder Traversal:
Algorithm:
Step 1. Traverse the left subtree, i.e., call Postorder(root.left)
Step 2. Traverse the right subtree, i.e., call Postorder(root.right)
Step 3. Visit the root.
Postorder traversal in Java:

// Print postorder traversal of given tree.


void printPostorderTraversal(Node root)
{
if (root == null)
return;
//first traverse to the left subtree
printPostorderTraversal(root.left);
//then traverse to the right subtree
printPostorderTraversal(root.right);
//then print the data of node
System.out.print(root.data + " ");
}

Uses:
Postorder traversal is commonly used to delete the tree.
It is also useful to get the postfix expression of an expression tree.
Consider the following tree as an example, then:


Inorder Traversal => Left, Root, Right : [4, 2, 5, 1, 3]

Preorder Traversal => Root, Left, Right : [1, 2, 4, 5, 3]
Postorder Traversal => Left, Right, Root : [4, 5, 2, 3, 1]

27. What is a deque data structure and its types? What are the
applications for deque?
A deque can be thought of as an array of items, but with one important difference:
Instead of pushing and popping items off the end to make room, deques are
designed to allow items to be inserted at either end. This property makes deques
well-suited for performing tasks such as keeping track of inventory, scheduling tasks,
or handling large amounts of data.


There are two types of deque:


Input Restricted Deque: Insertion operations are performed at only one end
while deletion is performed at both ends in the input restricted queue.

Output Restricted Deque: Deletion operations are performed at only one end
while insertion is performed at both ends in the output restricted queue.

Following are some real-time applications for deque data structure:


It can be used as both stack and queue, as it supports all the operations for both
data structures.
Web browser’s history can be stored in a deque.
Operating systems job scheduling algorithm.


28. What are some key operations performed on the Deque data
structure?
Following are the key operations available in a deque:
insertFront(): This adds an element to the front of the Deque.
insertLast(): This adds an element to the rear of the Deque.
deleteFront(): This deletes an element from the front of the Deque.
deleteLast(): This deletes an element from the rear of the Deque.
getFront(): This gets an element from the front of the Deque.
getRear(): This gets an element from the rear of the Deque.
isEmpty(): This checks whether Deque is empty or not.
isFull(): This checks whether Deque is full or not.

29. What is a priority queue? What are the applications for priority queue?
Priority Queue is an abstract data type that is similar to a queue in that each element
is assigned a priority value. The order in which elements in a priority queue are
served is determined by their priority (i.e., the order in which they are removed). If
the elements have the same priority, they are served in the order they appear in the
queue.


Following are some real-time applications for priority queue:


Used in graph algorithms like Dijkstra, Prim’s Minimum spanning tree etc.
Huffman code for data compression
Finding Kth Largest/Smallest element

30. Compare different implementations of priority queue


The following table contains an asymptotic analysis of different implementations of a
priority queue:

Operations | peek | insert | delete
Linked List | O(1) | O(n) | O(1)
Binary Heap | O(1) | O(log n) | O(log n)
Binary Search Tree | O(1) | O(log n) | O(log n)
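To make this concrete, a short Java sketch (an addition, not part of the original answer) that uses java.util.PriorityQueue, which is backed by a binary heap, to find the Kth largest element:

import java.util.PriorityQueue;

public class KthLargest {
    // Keeps a min-heap of size k; the root is the kth largest element seen so far.
    static int kthLargest(int[] nums, int k) {
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int x : nums) {
            minHeap.offer(x);
            if (minHeap.size() > k) minHeap.poll(); // discard the smallest element
        }
        return minHeap.peek();
    }

    public static void main(String[] args) {
        System.out.println(kthLargest(new int[]{3, 1, 5, 12, 2, 11}, 3)); // prints 5
    }
}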


31. What is graph data structure and its representations? What are the applications for graphs?
A graph is a type of non-linear data structure made up of nodes and edges. The nodes
are also known as vertices, and edges are lines or arcs that connect any two nodes in
the graph.

The following are the two most common graph representations:


1. Adjacency Matrix: Adjacency Matrix is a two-dimensional array with the
dimensions V x V, where V is the number of vertices in a graph. Representation is
simpler to implement and adhere to. It takes O(1) time to remove an edge. Queries
such as whether there is an edge from vertex 'u' to vertex 'v' are efficient and can be
completed in O(1).


One of the cons of this representation is that even if the graph is sparse (has fewer
edges), it takes up the same amount of space. Adding a vertex takes O(V^2). It also
takes O(V) time to compute all of a vertex's neighbours, which is not very efficient.
2. Adjacency List: In this method, each Node holds a list of Nodes that are directly
connected to that vertex. Each node at the end of the list is connected with null
values to indicate that it is the last node in the list. This saves space O(|V|+|E|). In the
worst-case scenario, a graph can have C(V, 2) edges, consuming O(V^2) space. It is
simpler to add a vertex. It takes the least amount of time to compute all of a vertex's
neighbours.


One of the cons of this representation is that queries such as "is there an edge from
vertex u to vertex v?" are inefficient and take O (V) in the worst case.
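A minimal Java sketch of an adjacency-list representation for an undirected graph, added here for illustration (the Graph class and its method names are made up):

import java.util.ArrayList;
import java.util.List;

class Graph {
    private final List<List<Integer>> adj = new ArrayList<>();

    Graph(int vertices) {
        for (int i = 0; i < vertices; i++) adj.add(new ArrayList<>());
    }

    // Adds an undirected edge between u and v.
    void addEdge(int u, int v) {
        adj.get(u).add(v);
        adj.get(v).add(u);
    }

    // Returns all vertices directly connected to u.
    List<Integer> neighbours(int u) {
        return adj.get(u);
    }
}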

32. What is the difference between the Breadth First Search (BFS) and Depth First Search (DFS)?


Breadth First Search (BFS) | Depth First Search (DFS)
It stands for "Breadth First Search". | It stands for "Depth First Search".
BFS finds the shortest path (in an unweighted graph) using the Queue data structure. | DFS traverses the graph using the Stack data structure.
In BFS, we walk through all nodes on the same level before passing to the next level. | DFS begins at the root node and proceeds as far as possible through the nodes until we reach a node with no unvisited neighbouring nodes.
When compared to DFS, BFS is slower. | When compared to BFS, DFS is faster.
BFS performs better when the target is close to the source. | DFS performs better when the target is far from the source.
BFS necessitates more memory. | DFS necessitates less memory.
Nodes that have been traversed multiple times are removed from the queue. | When there are no more nodes to visit, the visited nodes are added to the stack and then removed.
Backtracking is not an option in BFS. | The DFS algorithm is a recursive algorithm that employs the concept of backtracking.
It is based on the FIFO (First In First Out) principle. | It is based on the LIFO (Last In First Out) principle.



33. What is AVL tree data structure, its operations, and its
rotations? What are the applications for AVL trees?
AVL trees are height-balancing binary search trees named after their inventors
Adelson-Velsky and Landis. The AVL tree compares the heights of the left and right
subtrees and ensures that the difference is not more than one. This difference is known
as the Balance Factor.
BalanceFactor = height(left subtree) − height(right subtree)

We can perform the following two operations on AVL tree:


Insertion: Insertion in an AVL tree is done in the same way that it is done in a
binary search tree. However, it may cause a violation in the AVL tree property,
requiring the tree to be balanced. Rotations can be used to balance the tree.
Deletion: Deletion can also be performed in the same manner as in a binary
search tree. Because deletion can disrupt the tree's balance, various types of
rotations are used to rebalance it.
An AVL tree can balance itself by performing the four rotations listed below:


Left rotation: When a node is inserted into the right subtree of the right subtree
and the tree becomes unbalanced, we perform a single left rotation.
Right rotation: If a node is inserted in the left subtree of the left subtree, the AVL
tree may become unbalanced. The tree then requires a right rotation.
Left-Right rotation: The RR rotation is performed first on the subtree, followed
by the LL rotation on the entire tree.
Right-Left rotation: The LL rotation is performed first on the subtree, followed
by the RR rotation on the entire tree.
Following are some real-time applications for AVL tree data structure:
AVL trees are typically used for in-memory sets and dictionaries.
AVL trees are also widely used in database applications where there are fewer
insertions and deletions but frequent data lookups are required.
Apart from database applications, it is used in applications that require
improved searching.
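As a rough sketch (not part of the original answer), a single left rotation can be written in Java as follows; the Node class and the height bookkeeping shown here are assumptions about how the tree is represented:

// Hypothetical AVL node with left, right and height fields.
class Node {
    int key, height;
    Node left, right;
    Node(int key) { this.key = key; this.height = 1; }
}

class AvlRotations {
    static int height(Node n) { return n == null ? 0 : n.height; }

    // Left rotation around x, used when the right subtree of the right subtree grows too tall.
    static Node rotateLeft(Node x) {
        Node y = x.right;
        x.right = y.left;      // y's left subtree becomes x's right subtree
        y.left = x;            // x becomes y's left child
        x.height = 1 + Math.max(height(x.left), height(x.right));
        y.height = 1 + Math.max(height(y.left), height(y.right));
        return y;              // y becomes the new subtree root
    }
}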

34. What is a B-tree data structure? What are the applications for B-trees?
The B Tree is a type of m-way tree that is commonly used for disc access. A B-Tree
of order m can have at most m-1 keys and m children. One of the primary reasons for
using a B tree is its ability to store a large number of keys in a single node as well as
large key values while keeping the tree's height relatively small.
A B-tree of order 4 is shown below in the image:

Following are the key properties of a B-tree data structure:


All of the leaves are at the same height.


The term minimum degree 't' describes a B-Tree. The value of t is determined by
the size of the disc block.
Except for root, every node must have at least t-1 keys. The root must contain at
least one key.
All nodes (including root) can have no more than 2*t - 1 keys.
The number of children of a node is equal to its key count plus one.
A node's keys are sorted in ascending order. The child of two keys k1 and k2
contains all keys between k1 and k2.
In contrast to Binary Search Tree, B-Tree grows and shrinks from the root.
Following are real-time applications of a B-Tree data structure:
It is used to access data stored on discs in large databases.
Using a B tree, you can search for data in a data set in significantly less time.
The indexing feature allows for multilevel indexing.
The B-tree approach is also used by the majority of servers.

35. Define Segment Tree data structure and its applications.


A segment Tree is a binary tree that is used to store intervals or segments. The
Segment Tree is made up of nodes that represent intervals. Segment Tree is used
when there are multiple range queries on an array and changes to array elements.
The segment tree of array A[7] will look like this:


Following are key operations performed on the Segment tree data structure:
Building Tree: In this step, we create the structure and initialize the segment
tree variable.
Updating the Tree: In this step, we change the tree by updating the array value at
a point or over an interval.
Querying Tree: This operation can be used to run a range query on the array.
Following are real-time applications for Segment Tree:
Used to efficiently list all pairs of intersecting rectangles from a list of rectangles
in the plane.
The segment tree has become popular for use in pattern recognition and image
processing.
Finding range sum/product, range max/min, prefix sum/product, etc
Computational geometry
Geographic information systems
Static and Dynamic RMQ (Range Minimum Query)
Storing segments in an arbitrary manner

36. Define Trie data structure and its applications


The word "Trie" is an abbreviation for "retrieval." Trie is a data structure that stores a
set of strings as a sorted tree. Each node has the same number of pointers as the
number of alphabet characters. It can look up a word in the dictionary by using its
prefix. Assuming that all strings are formed from the letters 'a' to 'z' in the English
alphabet, each trie node can have a maximum of 26 pointers.
Trie is also referred to as the digital tree or the prefix tree. The key to which a node is
connected is determined by its position in the Trie. Trie allows us to insert and find
strings in O(L) time, where L is the length of a single word. This is clearly faster than
BST. Because of how it is implemented, this is also faster than Hashing. There is no
need to compute a hash function. There is no need to handle collisions (like we do in
open addressing and separate chaining)
Another benefit of Trie is that we can easily print all words in alphabetical order,
which is not easy with hashing. Trie can also perform prefix search (or auto-complete)
efficiently.

The main disadvantage of tries is that they require a large amount of memory to
store the strings, since each node keeps a pointer slot for every possible character.
Following are some real-time applications for Trie data structure:


Auto-Complete and Search for Search Engines


Genome Analysis
Data Analytics
Browser History
Spell Checker
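For illustration (an addition to the original answer), a compact Java sketch of a trie over lowercase letters with insert and prefix search:

class Trie {
    private final Trie[] children = new Trie[26]; // one slot per letter 'a'..'z'
    private boolean isEndOfWord;

    // Inserts a lowercase word in O(L) time, where L is the word length.
    void insert(String word) {
        Trie node = this;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) node.children[i] = new Trie();
            node = node.children[i];
        }
        node.isEndOfWord = true;
    }

    // Returns true if any inserted word starts with the given prefix.
    boolean startsWith(String prefix) {
        Trie node = this;
        for (char c : prefix.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) return false;
            node = node.children[i];
        }
        return true;
    }
}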

37. Define Red-Black Tree and its applications


Red Black Trees are a type of self-balancing binary search tree. Rudolf Bayer invented
it in 1972 and dubbed it "symmetric binary B-trees."
A red-black tree is a Binary tree in which each node has a colour attribute, either red
or black. By comparing the node colours on any simple path from the root to a leaf,
red-black trees ensure that no path is more than twice as long as any other, ensuring
that the tree is generally balanced.
Red-black trees are similar to binary search trees in the way they store and order
their data. However, red-black trees have one important advantage over plain binary
trees: because they stay roughly balanced, they remain fast to access even in the worst
case. Because red-black trees are so fast to access, they are often used to store large
amounts of data.
Red-black trees can be used to store any type of data that can be represented as a set
of values.


Every Red-Black Tree Obeys the Following Rules:


Every node is either red or black.
The tree's root is always black.
There are no two red nodes that are adjacent.
There is the same number of black nodes on every path from a node to any of its
descendant's NULL nodes.
All of the leaf nodes are black.
Following are some real-time applications for the Red-Black Tree data structure:
The majority of self-balancing BST library functions in C++ or Java use Red-Black
Trees.
It is used to implement Linux CPU Scheduling.
It is also used to reduce time complexity in the K-mean clustering algorithm in
machine learning.
MySQL also employs the Red-Black tree for table indexes in order to reduce
searching and insertion time.

38. Which data structures are used for implementing LRU cache?


LRU cache or Least Recently Used cache allows quick identification of an element
that hasn’t been put to use for the longest time by organizing items in order of use. In
order to achieve this, two data structures are used:
Queue – This is implemented using a doubly-linked list. The maximum size of
the queue is determined by the cache size, i.e by the total number of available
frames. The least recently used pages will be near the front end of the queue
whereas the most recently used pages will be towards the rear end of the queue.
Hashmap – Hashmap stores the page number as the key along with the address
of the corresponding queue node as the value.
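As an illustrative sketch (not part of the original answer), Java's LinkedHashMap can combine the hash map and the recency-ordered list described above into a tiny LRU cache:

import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache backed by LinkedHashMap in access order; evicts the least recently used entry.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true keeps entries ordered by recency of use
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict when the cache grows beyond its capacity
    }
}

A hand-rolled implementation, as described in the answer, would pair a HashMap with an explicit doubly-linked list; LinkedHashMap simply packages that same combination.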

39. What is a heap data structure?


Heap is a special tree-based non-linear data structure in which the tree is a complete
binary tree. A binary tree is said to be complete if all levels are completely filled
except possibly the last level, and the last level has all its elements as far left as possible.
Heaps are of two types:


Max-Heap:
In a Max-Heap the data element present at the root node must be the
greatest among all the data elements present in the tree.
This property should be recursively true for all sub-trees of that binary tree.
Min-Heap:
In a Min-Heap the data element present at the root node must be the
smallest (or minimum) among all the data elements present in the tree.
This property should be recursively true for all sub-trees of that binary tree.

Data Structure Coding Interview Questions


40. Write a program to remove duplicates from a sorted array in
place?
Input: {1, 1, 1, 2, 3, 3, 6, 6, 7}
Output: {1, 2, 3, 6, 7}
Explanation: The given input has only 1,2,3,6, and 7 as unique elements, hence
the output only lists them out.


#include <bits/stdc++.h>
using namespace std;

class Solution{
public:
//function that takes an array and its size as arguments
int removeDuplicates(int a[],int n){
int index=0;
for(int i=1;i<n;i++) {

if(a[i]!=a[index]) { //change index


index++; //swap next line
a[index]=a[i];
}
}
return index+1;
}
};

int main()
{
int T;
//taking the number of test cases from user
cin>>T;
//running the loop for all test cases
while(T--)
{
int N;
//taking size input from user
cin>>N;
int a[N];
//taking array input from user
for(int i=0;i<N;i++)
{
cin>>a[i];
}
Solution ob;
//calling the removeDuplicates in the Solution class
int n = ob.removeDuplicates(a,N);
//printing the array after removing duplicates
for(int i=0;i<n;i++)
cout<<a[i]<<" ";
cout<<endl;
}
}


Time Complexity: O(n)


Space Complexity: O(1)

41. Write a function for zigzag traversal in a binary tree


Input:
Output: [1, 3, 2, 4, 5, 6, 8, 7]
Explanation: Zigzag traversal first iterates the given level of the tree from left to
right and then the next level from right to left.


// Tree Node
struct Node {
int data;
Node* left;
Node* right;
};

//Function to store the zigzag order traversal of a tree in a list.


vector <int> zigZagTraversal(Node* root)
{
//creating two stacks for level traversals in both order
stack<Node*> st1;
stack<Node*> st2;
//vector to store the zigzag traversal
vector<int> result;

//Initialize the first stack with the root element


st1.push(root);

//Iterate until either of the stack is not empty


while(!st1.empty() || !st2.empty()){
//iterate until the first stack is not empty
while(!st1.empty()){
Node* temp=st1.top();
st1.pop();
result.push_back(temp->data);

if(temp->left)
st2.push(temp->left);
if(temp->right)
st2.push(temp->right);
}
//Iterate until the second stack is not empty
while(!st2.empty()){
Node* temp=st2.top();
st2.pop();
result.push_back(temp->data);

if(temp->right)
st1.push(temp->right);
if(temp->left)
st1.push(temp->left);

}
}
return result;
}


Time Complexity: O(n)


Space Complexity: O(n)

42. Write a function to sort a linked list of 0s, 1s and 2s


Input: 0->1->0->2->1->0->2->1
Output: 0->0->0->1->1->1->2->2
Explanation: All 0’s will come first then 1s and then 2s. This can be done in O(n)
time by counting the occurrences of all three and rearranging them in the linked
list.


//structure of the linked list node

struct Node {
int data;
Node *next;
};
//function take the head of the linked list as a parameter
void sortList(Node *head)
{
//if linked list is empty then return back
if(head==NULL)
return;
else
{
Node *temp=head;
Node *temp1=head;
//to store count of 0s, 1s, and 2s
int count0=0,count1=0,count2=0;
//calculating the count of 0s, 1s, and 2s
while(temp!=NULL)
{
if(temp->data==0)
count0++;
else if(temp->data==1)
count1++;
else
count2++;
temp=temp->next;
}
//iterating over count of 0s and filling the linked list
while(count0!=0)
{
temp1->data=0;
temp1=temp1->next;
count0--;
}
//iterating over count of 1s and filling the linked list
while(count1!=0)
{
temp1->data=1;
temp1=temp1->next;
count1--;
}
//iterating over count of 2s and filling the linked list
while(count2!=0)
{
temp1->data=2;
temp1=temp1->next;
count2--;
}
}
}


Time Complexity: O(n)


Space Complexity: O(1)

43. Write a function to detect cycle in an undirected graph


Input: n = 4, e = 4 , 0 1, 1 2, 2 3, 3 1
Output: Yes
Explanation: The graph is represented as follows in adjacency list
representation:
0->1
1->2
2->3
3->1
From the above representation, we can see that there exists a cycle: 1→2→3→1


//function to run dfs for a given node in the graph


int dfs(int v,vector<int> adj[],vector<int> &visited,vector<int> &rec,int i,int parent){
int ans=0;
visited[i]=1;
rec[i]=1;
for(auto x : adj[i]){
if(x!=parent) {
if(rec[x])
return 1;
ans=dfs(v,adj,visited,rec,x,i);
if(ans)
return 1;
}
}
rec[i]=0;
return 0;
}
// Function to detect cycle in an undirected graph.
// it takes adjacency list representation as an argument
bool isCycle(int v, vector<int> adj[]) {
vector<int> visited(v,0),rec(v,0);
int ans=0;
for(int i=0;i<v;i++){
if(visited[i]==0)
ans=dfs(v,adj,visited,rec,i,-1);
if(ans)
return 1;
}
return 0;
}

Time Complexity: O(V+E)


Space Complexity: O(V)

44. Write a function to convert an infix expression to postfix expression
Input: a+b*(c^d)
Output: abcd^*+


int prec(char c)
{
if (c == '^')
return 3;
else if (c == '/' || c == '*')
return 2;
else if (c == '+' || c == '-')
return 1;
else
return -1;
}
// Function to convert an infix expression to a postfix expression.
string infixToPostfix(string s) {
stack<char> st; // For stack operations, we are using C++ built in stack
string result;

for (int i = 0; i < s.length(); i++) {


char c = s[i];

// If the scanned character is


// an operand, add it to the output string.
if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
|| (c >= '0' && c <= '9'))
result += c;

// If the scanned character is an


// '(', push it to the stack.
else if (c == '(')
st.push('(');

// If the scanned character is an ')',


// pop and to output string from the stack
// until an '(' is encountered.
else if (c == ')') {
while (st.top() != '(') {
result += st.top();
st.pop();
}
st.pop();
}

// If an operator is scanned
else {
while (!st.empty()
&& prec(s[i]) <= prec(st.top())) {
if (c == '^' && st.top() == '^')
break;
else {
result += st.top();
st.pop();
}
}
st.push(c);
}
}

    // Pop all the remaining elements from the stack
    while (!st.empty()) {
        result += st.top();
        st.pop();
    }

    return result;
}

Time Complexity: O(n)


Space Complexity: O(n)

45. Write a function to find the maximum for each and every
contiguous subarray of size k.
Input: N = 9, K = 3 arr[] = {1, 2, 3, 1, 4, 5, 2, 3, 6}
Output: {3, 3, 4, 5, 5, 5, 6}
Explanation: In the first subarray of size 3: {1,2,3}, the value 3 is maximum,
similarly for all such subarrays for size 3.

//function to find maximum in each subarray using sliding window approach


vector<int> max_of_subarrays(vector<int> arr, int n, int k){
int i=0,j=0;
deque<int> dq;
dq.push_front(i++);
while(i<k)
{
while(!dq.empty()&&arr[dq.back()]<=arr[i])
dq.pop_back();
dq.push_back(i++);
}
vector<int> ans;
while(i<n)
{
ans.push_back(arr[dq.front()]);
while(!dq.empty()&&j>=dq.front())
{
dq.pop_front();

}
j++;
while(!dq.empty()&&arr[dq.back()]<=arr[i])
dq.pop_back();
dq.push_back(i++);
}
ans.push_back(arr[dq.front()]);
return ans;
}

Time Complexity: O(n)


Space Complexity: O(k)


46. Write a function to merge two sorted binary search trees


Input:
First BST
7
/ \
5 9
Second BST
4
/ \
3 12
Output: 3 4 5 7 9 12


//Function to return a list of integers denoting the node


//values of both the BST in a sorted order.
void inorder(Node*root,vector<int>&v){
if(root==NULL)
return;
inorder(root->left,v);
v.push_back(root->data);
inorder(root->right,v);
}
vector<int> merge(vector<int>v1,vector<int>v2){
vector<int>v;
int n1=v1.size(),n2=v2.size(),i=0,j=0;
while(i<n1&&j<n2){
if(v1[i]>v2[j]){
v.push_back(v2[j]);
j++;
}
else{
v.push_back(v1[i]);
i++;
}
}
while(i<n1){
v.push_back(v1[i]);
i++;
}
while(j<n2){
v.push_back(v2[j]);
j++;
}
return v;
}
vector<int> merge(Node *root1, Node *root2)
{
vector<int>v1,v2;
inorder(root1,v1);
inorder(root2,v2);
return merge(v1,v2);
}

Time Complexity: O(m+n)


Space Complexity: O(height of the first tree + height of the second tree)

47. Write a function to print all unique rows of the given matrix.
Input:


{{1, 1, 1, 0, 0},
{0, 1, 0, 0, 1},
{1, 0, 1, 1, 0},
{0, 1, 0, 0, 1},
{1, 1, 1, 0, 0}}

Output:
{{1, 1, 1, 0, 0},
{0, 1, 0, 0, 1},
{1, 0, 1, 1, 0}}

#define MAX 1000 //assumed upper bound on the matrix dimensions, since MAX is not defined in the snippet
vector<vector<int>> uniqueRow(int M[MAX][MAX],int row,int col)


{
set<vector<int>> st;
vector<vector<int>> v;

for(int i=0; i<row; i++) {


vector<int> v1;
for(int j=0; j<col; j++) {
v1.push_back(M[i][j]);
}
if(st.count(v1) == 0) {
v.push_back(v1);
st.insert(v1);
}
}

return v;
}

Time Complexity: O( ROW x COL )


Space Complexity: O( ROW x COL ), since each stored row has COL elements

48. Write a function to find the number of subarrays with product less than K


Input: arr = [1, 6, 2, 3, 2, 1], k = 12


Output: 11

int numSubarrayProductLessThanK(vector<int>& nums, int k) {


int ans=0;
int pdt=1;
int left=0,right=0;
while(right<=nums.size()-1){

pdt*=nums[right];
while(pdt>=k and left<nums.size()){
pdt/=nums[left];
left++;

}
if(right-left>=0)
ans+=right-left+1; //on adding a new element at index right, the number of new subarrays formed is right-left+1
right++;

}
return ans;
}

Time Complexity: O(n)


Space Complexity: O(1)

49. Find the subsequence of length 3 with the highest product from a sequence of non-negative integers, with the elements in increasing order.
Input: n = 8 arr[ ] = {6, 7, 10, 1, 2, 3, 11, 12}
Output: {10, 11, 12}
The three increasing elements of the given array are 10, 11, and 12, which form a
three-size subsequence with the highest product.


vector<int> maxProductSubsequence(int *a , int n)


{
set<int> s;
long long largestOnLeft[n];
for(int i=0;i<n;i++)
{
s.insert(a[i]);
auto it=s.lower_bound(a[i]);
if(it==s.begin())
{
largestOnLeft[i]=-1;
continue;
}
it--;
largestOnLeft[i]=*it;
}
int m=0;
long long p=INT_MIN;
vector<int> result(3);
result[0]=-1;
for(int i=n-1;i>=0;i--)
{
if(a[i]>=m){
m=a[i];}
else
{
if(largestOnLeft[i] !=-1)
{
if(largestOnLeft[i]*a[i]*m >p)
{
p=largestOnLeft[i]*a[i]*m;
result[0]=largestOnLeft[i];
result[1]=a[i];
result[2]=m;
}
}
}
}
return result;
}

Time Complexity: O(nlog(n))


Space Complexity: O(n)

50. Write a function to implement Quicksort on a Doubly Linked List


Input: 8<->10<->1<->7<->6
Output: 1<->6<->7<->8<->10

class Solution{
public:
Node* partition(Node *l, Node *h){
//Your code goes here
Node*temp = h;
Node*tt = l;
Node*first = l;

while(tt != h){
if(tt->data <= temp->data){
swap(first->data, tt->data);
first = first->next;
}
tt = tt -> next;
}
swap(first-> data, h->data);
return first;

}
};

void _quickSort(struct Node* l, struct Node *h)


{
if (h != NULL && l != h && l != h->next)
{
Solution ob;
struct Node *p = ob.partition(l, h);
_quickSort(l, p->prev);
_quickSort(p->next, h);
}
}

//helper to find the last node of the doubly linked list
struct Node *lastNode(struct Node *head)
{
    while (head != NULL && head->next != NULL)
        head = head->next;
    return head;
}

void quickSort(struct Node *head)
{
    struct Node *h = lastNode(head);
    _quickSort(head, h);
}

Time Complexity: O(n^2) in the worst case when the list is already sorted.
O(nlog(n)) in the best and average case.
Space Complexity: O(n)


51. Write a function to connect nodes at the same level of a binary tree
Input: 100
/ \
13 15
/\ \
14 1 20

Output: 100-> NULL


/ \
13 -> 15 -> NULL
/ \ \
14 -> 1 -> 20 -> NULL


class Solution
{
public:
//Function to connect nodes at the same level.
void connect(Node *p)
{
map<int,vector<Node *> > m;
queue<Node *> q;
queue<int> l;
q.push(p);
l.push(0);
while(!q.empty())
{
Node *temp=q.front();
int level=l.front();
q.pop();
l.pop();
m[level].push_back(temp);
if(temp->left!=NULL)
{
q.push(temp->left);
l.push(level+1);
}
if(temp->right!=NULL)
{
q.push(temp->right);
l.push(level+1);
}
}
for(map<int,vector<Node *> > ::iterator it=m.begin();it!=m.end();it++)
{
vector<Node *> temp1=it->second;
for(int i=0;i<temp1.size()-1;i++)
{
temp1[i]->nextRight=temp1[i+1];
}
temp1[temp1.size()-1]->nextRight=NULL;
}
}
};

Time Complexity: O(n)


Space Complexity: O(n)

52. Write a function to find the number of structurally unique binary trees that are possible


Input: N = 3
Output: 5
Explanation: For N = 3, there are 5 possible BSTs:
 1        3     3      2      1
  \      /     /      / \      \
   3    2     1      1   3      2
  /    /       \                 \
 2    1         2                 3


class Solution
{
public:
//function to calculate binomial coefficient C(n,k)
long long int binomialCoefficient(long long int n, long long int k)
{
long long int res = 1;
if (k > n - k)
k = n - k;

for (long long int i = 0; i < k; ++i)


{
res *= (n - i);
res /= (i + 1);
}

return res;
}

//function to calculate Nth Catalan Number


long long int catalanNumber(long long int n)
{
// Calculate value of 2nCn
long long int C = binomialCoefficient(2*n, n);

// return 2nCn/(n+1)
return C/(n+1);
}

//Function to return the total number of possible unique BST.


long long int numOfUniqueBinarySearchTrees(int n)
{
// find nth catalan number
long long int countOfUniqueBinarySearchTrees = catalanNumber(n);

// return nth catalan number


return countOfUniqueBinarySearchTrees;
}
};

Time Complexity: O(n)


Space Complexity: O(1)

53. Implement LRU(Least Recently Used) Cache


class LRUCache
{
private:
class node_t {
public:
int key;
int value;
node_t * next;
node_t * prev;
};

int cap;
node_t head;
unordered_map<int, node_t*> tbl;

void remove_node(node_t * node) {


node->next->prev = node->prev;
node->prev->next = node->next;
}
void add_node(node_t * node) {
node->next = head.next;
node->prev = &head;
head.next = node;
node->next->prev = node;
}
public:
//Constructor for initializing the cache capacity with the given value.
LRUCache(int cap): cap(cap)
{
// code here
head.prev = &head;
head.next = &head;
}

//Function to return value corresponding to the key.


int get(int key)
{
// your code here
unordered_map<int, node_t*>::iterator it = tbl.find(key);
if(it==tbl.end())
return -1;
remove_node(it->second);
add_node(it->second);
return it->second->value;
}

//Function for storing key-value pair.


void set(int key, int value)
{
// your code here
unordered_map<int, node_t*>::iterator it = tbl.find(key);
if(it!=tbl.end())
{
remove_node(it->second);
add_node(it->second);
it->second->value = value;
}
        else
        {
            //key not present: if the cache is full, evict the least recently used node (the tail)
            if ((int)tbl.size() == cap)
            {
                node_t *lru = head.prev;
                remove_node(lru);
                tbl.erase(lru->key);
                delete lru;
            }
            //create the new node and place it at the front (most recently used position)
            node_t *node = new node_t;
            node->key = key;
            node->value = value;
            add_node(node);
            tbl[key] = node;
        }
    }
};

Time Complexity: O(1) to get an element


Space Complexity: O(n)

54. Write a function to determine whether duplicate elements in a given array are within a given distance of each other.
Input: arr[] = {1, 2, 3, 4, 2, 1, 2} range=3
Output: True

class Solution {
public:

bool checkDuplicatesWithinRange(vector<int> arr, int range)


{
// Creating an empty hashset
unordered_set<int> myset;

// Traversing the input array


for (int i = 0; i < arr.size(); i++)
{
// If already present in hashset, then we found a duplicate within range distance
if (myset.find(arr[i]) != myset.end())
return true;

// Add this item to hashset


myset.insert(arr[i]);

// Remove the range+1 distant item from the hashset


if (i >= range)
myset.erase(arr[i-range]);
}
return false;
}
};

Time Complexity: O(n)


Space Complexity: O(n)

55. Write a recursive function to calculate the height of a binary tree in Java.
Consider that every node of a tree represents a class called Node as given below:


public class Node{


int data;
Node left;
Node right;
}

Then the height of the binary tree can be found as follows:

int heightOfBinaryTree(Node node)


{
if (node == null)
return 0; // If node is null then height is 0 for that node.
else
{
// compute the height of each subtree
int leftHeight = heightOfBinaryTree(node.left);
int rightHeight = heightOfBinaryTree(node.right);
//use the larger among the left and right height and plus 1 (for the root)
return Math.max(leftHeight, rightHeight) + 1;
}
}

56. Write Java code to count number of nodes in a binary tree

int countNodes(Node root)


{
int count = 1; //Root itself should be counted
if (root ==null)
return 0;
else
{
count += countNodes(root.left);
count += countNodes(root.right);
return count;
}
}

57. Print Left view of any binary tree.


The main idea to solve this problem is to traverse the tree in a pre-order manner
and pass the level information along with it. If the level is visited for the first
time, then we store the information of the current node and the current level in
the hashmap. Basically, we are getting the left view by noting the first node of
every level.
At the end of the traversal, we can get the solution by just traversing the map.
Consider the tree constructed in the main method of the code below as an example for finding the left view.
Left view of a binary tree in Java:


import java.util.HashMap;

//to store a Binary Tree node


class Node
{
int data;
Node left = null, right = null;

Node(int data) {
this.data = data;
}
}
public class InterviewBit
{
// traverse nodes in pre-order way
public static void leftViewUtil(Node root, int level, HashMap<Integer, Integer> m
{
if (root == null) {
return;
}

// if you are visiting the level for the first time


// insert the current node and level info to the map
if (!map.containsKey(level)) {
map.put(level, root.data);
}

leftViewUtil(root.left, level + 1, map);


leftViewUtil(root.right, level + 1, map);
}

// to print left view of binary tree


public static void leftView(Node root)
{
// create an empty HashMap to store first node of each level
HashMap<Integer, Integer> map = new HashMap<>();

// traverse the tree and find out the first nodes of each level
leftViewUtil(root, 1, map);

// iterate through the HashMap and print the left view


for (int i = 1; i <= map.size(); i++) {
System.out.print(map.get(i) + " ");
}
}

public static void main(String[] args)


{
Node root = new Node(4);
root.left = new Node(2);
root.right = new Node(6);
root.left.left = new Node(1);
root.left.right = new Node(3);
root.right.left = new Node(5);
root.right.right = new Node(7);
root.right.left.left = new Node(9);

        leftView(root);
    }
}

58. Given an m x n 2D grid map of '1's which represent land and '0's which represent water, return the number of islands (surrounded by water and formed by connecting adjacent lands in 2 directions - vertically or horizontally).

Assume that the boundary cases - which are all four edges of the grid
are surrounded by water.

Constraints are:

m == grid.length
n == grid[i].length
1 <= m, n <= 300
grid[i][j] can only be ‘0’ or ‘1’.

Example:

Input: grid = [
[“1” , “1” , “1” , “0” , “0”],
[“1” , “1” , “0” , “0” , “0”],
[“0” , “0” , “1” , “0” , “1”],
[“0” , “0” , “0” , “1” , “1”]
]


Output: 3

class InterviewBit {
public int numberOfIslands(char[][] grid) {
if(grid==null || grid.length==0||grid[0].length==0)
return 0;

int m = grid.length;
int n = grid[0].length;

int count=0;
for(int i=0; i<m; i++){
for(int j=0; j<n; j++){
if(grid[i][j]=='1'){
count++;
mergeIslands(grid, i, j);
}
}
}

return count;
}

public void mergeIslands(char[][] grid, int i, int j){


int m=grid.length;
int n=grid[0].length;

if(i<0||i>=m||j<0||j>=n||grid[i][j]!='1')
return;

grid[i][j]='X';

mergeIslands(grid, i-1, j);


mergeIslands(grid, i+1, j);
mergeIslands(grid, i, j-1);
mergeIslands(grid, i, j+1);
}
}

59. What is topological sorting in a graph?


Topological sorting is a linear ordering of vertices such that for every directed
edge i → j, vertex i comes before vertex j in the ordering.
Topological sorting is only possible for a Directed Acyclic Graph (DAG).
Applications:
1. Job scheduling from the given dependencies among jobs.
2. Ordering of formula cell evaluation in spreadsheets.
3. Ordering of compilation tasks to be performed in makefiles.
4. Data serialization.
5. Resolving symbol dependencies in linkers.
Topological Sort Code in Java:


// V - total vertices
// visited - boolean array to keep track of visited nodes
// graph - adjacency list.
// Main Topological Sort Function.
void topologicalSort()
{
Stack<Integer> stack = new Stack<Integer>();

// Mark all the vertices as not visited


boolean visited[] = new boolean[V];
for (int j = 0; j < V; j++){
visited[j] = false;
}
// Call the util function starting from all vertices one by one
for (int i = 0; i < V; i++)
if (visited[i] == false)
topologicalSortUtil(i, visited, stack);

// Print contents of stack -> result of topological sort


while (stack.empty() == false)
System.out.print(stack.pop() + " ");
}

// A helper function used by topologicalSort


void topologicalSortUtil(int v, boolean visited[],
Stack<Integer> stack)
{
// Mark the current node as visited.
visited[v] = true;
Integer i;

// Recur for all the vertices adjacent to the current vertex


Iterator<Integer> it = graph.get(v).iterator();
while (it.hasNext()) {
i = it.next();
if (!visited[i])
topologicalSortUtil(i, visited, stack);
}

// Push current vertex to stack that saves result


stack.push(new Integer(v));
}

Conclusion



Algorithm Interview Questions


© Copyright by Interviewbit
Contents

Algorithm Interview Questions for Freshers


1. How can we compare between two algorithms written for the same problem?
2. What do you understand by the best case, worst case and average case scenario of
an algorithm?
3. What do you understand by the Asymptotic Notations?
4. Write an algorithm to swap two given numbers in Java without using a temporary
variable.
5. Explain the Divide and Conquer Algorithmic Paradigm. Also list a few algorithms
which use this paradigm.
6. What do you understand about greedy algorithms? List a few examples of greedy
algorithms.
7. What do you understand by a searching algorithm? List a few types of searching
algorithms.
8. Describe the Linear Search Algorithm.
9. Describe the Binary Search Algorithm.
10. Write down an algorithm for adding a node to a linked list sorted in ascending
order(maintaining the sorting property).
11. Write an algorithm for counting the number of leaf nodes in a binary tree.
12. What do you understand about the Dynamic Programming (DP) Algorithmic
Paradigm? List a few problems which can be solved using the same.
13. Write down a string reversal algorithm. If the given string is "kitiR," for example,
the output should be "Ritik".
14. What do you understand about the BFS (Breadth First Search) algorithm.
15. What do you understand about the DFS (Depth First Search) algorithm.

Algorithm Interview Questions for Experienced


16. How do the encryption algorithms work?
17. What are a few of the most widely used cryptographic algorithms?
18. Describe the merge sort algorithm.

19. Describe the quick sort algorithm.


20. Describe the bubble sort algorithm with the help of an example.
21. Write an algorithm to find the maximum subarray sum for a given array. In other
words, find the maximum sum that can be achieved by taking contiguous
elements from a given array of integers.
22. Explain the Dijkstra's Algorithm to find the shortest path between a given node
in a graph to any other node in the graph.
23. Can we use the binary search algorithm for linked lists? Justify your answer.
24. What are recursive algorithms? State the important rules which every recursive
algorithm must follow.
25. Devise an algorithm to insert a node in a Binary Search Tree.
26. Define insertion sort and selection sort.
27. Define tree traversal and list some of the algorithms to traverse a binary tree.
28. Describe the heap sort algorithm.
29. What is the space complexity of the insertion sort algorithm?
30. What is the space complexity of the selection sort algorithm?



Let's get Started

What is an Algorithm?

Algorithms and Data Structures are a crucial component of any technical coding
interview. It does not matter if you are a C++ programmer, a Java programmer, or a
Web developer using JavaScript, Angular, React, JQuery, or any other programming
language.

A programmer should have a thorough understanding of both basic data structures, such as arrays, linked lists, trees, hash tables, stacks, and queues, etc. and conventional algorithms, such as Binary Search, Dynamic Programming, and so on.
Therefore, in this article we would be mostly focussing on Algorithms - an
introduction to algorithms, a lot of algorithms interview questions which are being
asked in the coding interviews of various companies and also a few Algorithms MCQs
which we think everyone should practice in order to have a better understanding of
algorithms.


Now, the very first question which must be popping in your head must be "What is an
algorithm?" Well, the answer to this question is: An algorithm is a finite sequence of
well-defined instructions used to solve a class of problems or conduct a computation
in mathematics and computer science.
Algorithms are used to specify how calculations, data processing, automated
reasoning, automated decision making, and other tasks should be done. An
algorithm is a method for calculating a function that can be represented in a finite
amount of space and time and in a well defined formal language. The instructions
describe a computation that, when run, continues through a finite number of well
defined subsequent stages, finally creating "output" and terminating at a final
ending state, starting from an initial state and initial input (possibly empty). The shift
from one state to the next is not always predictable; some algorithms, known as
randomised algorithms, take random input into account.

The Need For Algorithms (Advantages of Algorithms):

Before diving deep into algorithm interview questions, let us first understand the
need for Algorithms in today's world. The following are some of the benefits of using
algorithms in real-world problems.


Algorithms boost the effectiveness of an existing method.


It is easy to compare an algorithm's performance to those of other approaches
using various methods (Time Complexity, Space Complexity, etc.).
Algorithms provide the designers with a detailed description of the criteria and
goals of the problems.
They also enable a reasonable comprehension of the program's flow.
Algorithms evaluate how well the approaches work in various scenarios (Best
cases, worst cases, average cases).
An algorithm also determines which resources (input/output, memory) cycles
are necessary.
We can quantify and assess the problem's complexity in terms of time and space
using an algorithm.
The cost of design is also reduced if proper algorithms are used.

Algorithm Interview Questions for Freshers


1. How can we compare between two algorithms written for the
same problem?


The complexity of an algorithm is a technique that is used to categorise how efficient it is in comparison to other algorithms. It focuses on how the size of the data set to
be processed affects execution time. In computing, the algorithm's computational
complexity is critical. It is a good idea to categorise algorithms according to how
much time or space they take up and to describe how much time or space they take
up as a function of input size.
Complexity of Time: The running time of a program as a function of the size of
the input is known as time complexity.
Complexity of Space: Space complexity examines algorithms based on how
much space they require to fulfil their tasks. In the early days of computers,
space complexity analysis was crucial (when storage space on the computer was
limited).
Note: Nowadays, a lack of space is rarely an issue because computer storage is
plentiful. Therefore, it is mostly the Time Complexity that is given more
importance while evaluating an Algorithm.

2. What do you understand by the best case, worst case and average case scenario of an algorithm?
The mathematical foundation/framing of an algorithm's run time performance is
defined by asymptotic analysis. We can easily determine the best case, average case,
and worst-case scenarios of an algorithm using asymptotic analysis.


Best Case Scenario of an Algorithm: The best-case scenario for an algorithm is defined as the data arrangement in which the algorithm performs the best. Take
a binary search, for example, where the best-case scenario is if the target value is
in the very centre of the data we are looking for. The best-case scenario for
binary search would have a time complexity of O(1) or constant time complexity.
Worst Case Scenario of an Algorithm: The worst collection of input for a given
algorithm is referred to as the worst-case scenario of an Algorithm. For example,
quicksort can perform poorly if the pivot value is set to the largest or smallest
element of a sublist. Quicksort will degenerate into an algorithm with a time
complexity of O(n^2), where n is the size of the list to be sorted.
Average Case Scenario of an Algorithm: The average-case complexity of an
algorithm is the amount of some computational resource (usually time) used by
the process, averaged over all possible inputs, according to computational
complexity theory. For example, the average-case complexity of the randomised
quicksort algorithm is O(n*log(n)), where n is the size of the list to be sorted.

3. What do you understand by the Asymptotic Notations?


Asymptotic analysis is a technique that is used for determining the efficiency of an
algorithm that does not rely on machine-specific constants and avoids the algorithm
from comparing itself to the time-consuming approach. For asymptotic analysis,
asymptotic notation is a mathematical technique that is used to indicate the
temporal complexity of algorithms.
The following are the three most common asymptotic notations.
Big Theta Notation: (θ Notation)
The exact asymptotic behaviour is defined using the theta (θ) Notation. It binds
functions from above and below to define behaviour. Dropping low order terms
and ignoring leading constants is a convenient approach to get Theta notation
for an expression.


Big O Notation:
The Big O notation defines an upper bound for an algorithm by bounding a
function from above. Consider the situation of insertion sort: in the best case
scenario, it takes linear time, and in the worst case, it takes quadratic time.
Insertion sort has a time complexity O(n^2). It is useful when we just have an
upper constraint on an algorithm's time complexity.


Big Omega (Ω) Notation:


The Ω Notation provides an asymptotic lower bound on a function, just like Big
O notation does. It is useful when we have a lower bound on an algorithm's time
complexity.


4. Write an algorithm to swap two given numbers in Java without using a temporary variable.
It is a trick question that is frequently asked in the interviews of various companies.
This problem can be solved in a variety of ways. However, while solving the problem,
we must solve it without using a temporary variable, which is an essential condition.
For this problem, if we can consider the possibility of integer overflow in our solution
while coming up with an approach to solving it, we can make a great impression on
interviewers.
Let us say that we have two integers a and b, with a's value equal to 5 and b's value
equal to 6, and we want to swap them without needing a third variable. We will need
to use Java programming constructs to solve this problem. Mathematical procedures
such as addition, subtraction, multiplication, and division can be used to swap
numbers. However, it is possible that it will cause an integer overflow problem.
Let us take a look at two approaches to solve this problem:
Using Addition and subtraction:

a = a + b;
b = a - b; // this will act like (a+b) - b, and now b equals a.
a = a - b; // this will act like (a+b) - a, and now a equals b.

It is a clever trick. However, if the addition exceeds the maximum value of the int
primitive type as defined by Integer.MAX_VALUE in Java, or if the subtraction is less
than the minimum value of the int primitive type as defined by Integer.MIN_VALUE in
Java, there will be an integer overflow.
Using the XOR method:
Another way to swap two integers without needing a third variable (temporary
variable) is using the XOR method. This is o en regarded as the best approach
because it works in languages that do not handle integer overflows, such as Java, C,
and C++. Java has a number of bitwise operators. XOR (denoted by ^) is one of them.


x = x ^ y;
y = x ^ y;
x = x ^ y;

5. Explain the Divide and Conquer Algorithmic Paradigm. Also list a few algorithms which use this paradigm.
Divide and Conquer is an algorithm paradigm, not an algorithm itself. It is set up in
such a way that it can handle a large amount of data, split it down into smaller
chunks, and determine the solution to the problem for each of the smaller chunks. It
combines all of the piecewise solutions of the smaller chunks to form a single global
solution. This is known as the divide and conquer technique. The Divide and Conquer
algorithmic paradigm employ the steps given below:
Divide: The algorithm separates the original problem into a set of subproblems
in this step.
Conquer: The algorithm solves each subproblem individually in this step.
Combine: In this step, the algorithm combines the solutions to the subproblems
to obtain the overall solution.


Some of the algorithms which use the Divide and Conquer Algorithmic paradigm are
as follows:
Binary Search
Merge Sort
Strassen's Matrix Multiplication
Quick Sort
Closest pair of points.

6. What do you understand about greedy algorithms? List a few examples of greedy algorithms.
A greedy algorithm is an algorithmic method that aims to choose the best optimal
decision at each sub-step, eventually leading to a globally optimal solution. This
means that the algorithm chooses the best answer available at the time, regardless
of the consequences. In other words, when looking for an answer, an algorithm
always selects the best immediate, or local, option. Greedy algorithms may identify
less than perfect answers for some cases of other problems while finding the overall,
ideal solution for some idealistic problems.
The Greedy algorithm is used in the following algorithms to find their solutions:
Prim's Minimal Spanning Tree Algorithm
Kruskal's Minimal Spanning Tree Algorithm
Travelling Salesman Problem
Fractional Knapsack Problem
Dijkstra's Algorithm
Job Scheduling Problem
Graph Map Coloring
Graph Vertex Cover.

7. What do you understand by a searching algorithm? List a few types of searching algorithms.


Searching Algorithms are used to look for an element or get it from a data structure
(usually a list of elements). These algorithms are divided into two categories based
on the type of search operation:
Sequential Search: This method traverses the list of elements consecutively,
checking each element and reporting if the element to be searched is found.
Linear Search is an example of a Sequential Search Algorithm.
Interval Search: These algorithms were created specifically for searching sorted
data structures. Because they continually target the centre of the search
structure and divide the search space in half, these types of search algorithms
are far more efficient than Sequential Search algorithms. Binary Search is an
example of an Interval Search Algorithm.

8. Describe the Linear Search Algorithm.


To find an element in a group of elements, the linear search can be used. It works by
traversing the list of elements from the beginning to the end and inspecting the
properties of all the elements encountered along the way. Let us consider the case of
an array containing some integer elements. We want to find out and print all of the
elements' positions that match a particular value (also known as the "key" for the
linear search). The linear search works in a flow here, matching each element with
the number from the beginning to the end of the list, and then printing the element's
location if the element at that position is equal to the key.
Given below is an algorithm describing Linear Search:
Step 1: Using a loop, traverse the list of elements given.
Step 2: In each iteration, compare the target value (or key-value) to the list's
current value.
Step 3: If the values match, print the array's current index.
Step 4: Move on to the next array element if the values do not match.
Step 5: Repeat Steps 1 to 4 till the end of the list of elements is reached.


The time complexity of the Linear Search Algorithm is O(n) where n is the size of the
list of elements and its space complexity is constant, that is, O(1).
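
The steps above translate directly into code. A minimal C++ sketch is given below; the function and parameter names are illustrative (it returns the first matching index, but the same loop can print every matching position instead):

#include <vector>
using namespace std;

//returns the index of the first element equal to key, or -1 if the key is absent
int linearSearch(const vector<int> &arr, int key)
{
    for (int i = 0; i < (int)arr.size(); i++)
    {
        //compare the current value with the target (key) value
        if (arr[i] == key)
            return i;
    }
    //end of the list reached without finding a match
    return -1;
}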

9. Describe the Binary Search Algorithm.


To apply binary search on a list of elements, the prerequisite is that the list of
elements should be sorted. It is based on the Divide and Conquers Algorithmic
paradigm. In the Binary Search Algorithm, we divide the search interval in half
periodically to search the sorted list. We begin by creating an interval that spans the
entire list. If the search key's value is less than the item in the interval's midpoint, the
interval should be narrowed to the lower half. Otherwise, we limit it to the upper half
of the page. We check for the value until it is discovered or the interval is empty.
Given below is an algorithm describing Binary Search: (Let us assume that the
element to be searched is x and the array of elements is sorted in ascending order)


Step 1: x should be firstly compared to the middle element.


Step 2: We return the middle element's index if x matches the middle element.
Step 3: Else If x is greater than the middle element, x can only be found after the
middle element in the right half subarray since the array is sorted in the
ascending order. As a result, we repeat the process for the right half.
Step 4: Otherwise, we repeat for the left half (x is smaller).
Step 5: If the interval is empty, we terminate the binary search.
The time complexity of the Binary Search Algorithm is O(log(n)) where n is the size of
the list of elements and its space complexity is constant, that is, O(1).
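
A possible iterative C++ sketch of the same idea, assuming the input vector is already sorted in ascending order (names are illustrative):

#include <vector>
using namespace std;

int binarySearch(const vector<int> &arr, int x)
{
    int low = 0, high = (int)arr.size() - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;   //middle of the current interval
        if (arr[mid] == x)
            return mid;                     //Step 2: x found at index mid
        else if (arr[mid] < x)
            low = mid + 1;                  //Step 3: continue in the right half
        else
            high = mid - 1;                 //Step 4: continue in the left half
    }
    return -1;                              //Step 5: the interval became empty, x is not present
}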

10. Write down an algorithm for adding a node to a linked list sorted in ascending order (maintaining the sorting property).
An algorithm for adding a node to a linked list sorted in ascending order (maintaining
the sorting property) is given below:
Step 1: Check if the linked list has no value (or is empty). If yes, then set the new
node as the head and return it.
Step 2: Check if the value of the node to be inserted is smaller than the value of
the head node. If yes, place it at the beginning and make it the head node.
Step 3: Find the suitable node after which the input node should be added in a
loop. To discover the required node, begin at the head and work your way
forward until you reach a node whose value exceeds the input node. The
preceding node is the correct node.
Step 4: After the correct node is found in step 3, insert the node.
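
A rough C++ sketch of these steps; the node structure and the function name are assumed here purely for illustration:

struct ListNode {
    int data;
    ListNode *next;
};

ListNode* sortedInsert(ListNode *head, ListNode *newNode)
{
    //Step 1: empty list - the new node becomes the head
    if (head == nullptr)
    {
        newNode->next = nullptr;
        return newNode;
    }
    //Step 2: new value is smaller than the head value - insert at the beginning
    if (newNode->data < head->data)
    {
        newNode->next = head;
        return newNode;
    }
    //Step 3: walk forward to the last node whose value does not exceed the new value
    ListNode *curr = head;
    while (curr->next != nullptr && curr->next->data <= newNode->data)
        curr = curr->next;
    //Step 4: insert the new node after that node
    newNode->next = curr->next;
    curr->next = newNode;
    return head;
}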

11. Write an algorithm for counting the number of leaf nodes in a binary tree.
An algorithm for counting the number of leaf nodes in a binary tree is given below:


Step 1: If the current node is null, return a value 0.


Step 2: If a leaf node is encountered, that is, if the current node's left and right
nodes are both null, then return 1.
Step 3: Calculate the number of leaf nodes recursively by adding the number of
leaf nodes in the left subtree to the number of leaf nodes in the right subtree.
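
A short C++ sketch of this recursion (the binary tree node type is assumed for illustration):

struct TreeNode {
    int data;
    TreeNode *left;
    TreeNode *right;
};

int countLeafNodes(TreeNode *node)
{
    //Step 1: an empty subtree contains no leaf nodes
    if (node == nullptr)
        return 0;
    //Step 2: a node with no children is a leaf
    if (node->left == nullptr && node->right == nullptr)
        return 1;
    //Step 3: add the leaf counts of the left and right subtrees
    return countLeafNodes(node->left) + countLeafNodes(node->right);
}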

12. What do you understand about the Dynamic Programming (DP) Algorithmic Paradigm? List a few problems which can be solved using the same.
Dynamic Programming is primarily a recursion optimization. We can use Dynamic
Programming to optimise any recursive solution that involves repeated calls for the
same inputs. The goal is to simply save the results of subproblems so that we do not
have to recalculate them later. The time complexity of this simple optimization is
reduced from exponential to polynomial. For example, if we create a simple recursive
solution for Fibonacci Numbers, the time complexity is exponential, but if we
optimise it by storing subproblem answers using Dynamic Programming, the time
complexity is linear.

The following codes illustrate the same:


With Recursion (no DP): The time complexity of the given code will be exponential.

/*Sample C++ code for finding nth fibonacci number without DP*/
int nFibonacci(int n){
if(n == 0 || n == 1) return n;
else return nFibonacci(n - 1) + nFibonacci(n - 2);
}

With DP: The time complexity of the given code will be linear because of Dynamic
Programming.

/*Sample C++ code for finding nth fibonacci number with DP*/
int nFibonacci(int n){
vector<int> fib(n + 1);
fib[0] = 0;
fib[1] = 1;
for(int i = 2;i <= n;i ++){
fib[i] = fib[i - 1] + fib[i - 2];
}
return fib[n];
}

A few problems which can be solved using the Dynamic Programming (DP)
Algorithmic Paradigm are as follows:
Finding the nth Fibonacci number
Finding the Longest Common Subsequence between two strings.
Finding the Longest Palindromic Substring in a string.
The discrete (or 0-1) Knapsack Problem.
Shortest Path between any two nodes in a graph (Floyd Warshall Algorithm)

13. Write down a string reversal algorithm. If the given string is "kitiR," for example, the output should be "Ritik".
An algorithm for string reversal is as follows:


Step 1: Start.
Step 2: We take two variables l and r.
Step 3: We set the values of l as 0 and r as (length of the string - 1).
Step 4: We interchange the values of the characters at positions l and r in the
string.
Step 5: We increment the value of l by one.
Step 6: We decrement the value of r by one.
Step 7: If the value of r is greater than the value of l, we go to step 4
Step 8: Stop.
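
A minimal C++ sketch of this two-pointer reversal (the function name is illustrative):

#include <string>
using namespace std;

string reverseString(string s)
{
    int l = 0;                        //Step 3: l starts at position 0
    int r = (int)s.length() - 1;      //and r at the last position
    while (l < r)                     //Step 7: continue while r is to the right of l
    {
        char temp = s[l];             //Step 4: interchange the characters at l and r
        s[l] = s[r];
        s[r] = temp;
        l++;                          //Step 5
        r--;                          //Step 6
    }
    return s;                         //"kitiR" becomes "Ritik"
}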

14. What do you understand about the BFS (Breadth First Search) algorithm.
BFS or Breadth-First Search is a graph traversal technique. It begins by traversing the
graph from the root node and explores all of the nodes in the immediate vicinity. It
chooses the closest node and then visits all of the nodes that have yet to be visited.
Until it reaches the objective node, the algorithm repeats the same method for each
of the closest nodes.
The BFS Algorithm is given below:
Step 1: Set status = 1 as the first step for all the nodes(ready state).
Step 2: Set the status of the initial node A to 2, that is, waiting state.
Step 3: Repeat steps 4 and 5 until the queue is not empty.
Step 4: Dequeue and process node N from the queue, setting its status to 3, that
is, the processed state.
Step 5: Put all of N's neighbours in the ready state (status = 1) in the queue and
set their status to 2 (waiting state)
Step 6: Exit.
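
The status-array description above is one way to phrase it; a common queue-based C++ sketch over an adjacency-list graph (representation assumed for illustration) looks like this:

#include <vector>
#include <queue>
using namespace std;

//breadth-first traversal starting from source; adj is an adjacency list
vector<int> bfs(int source, const vector<vector<int>> &adj)
{
    vector<int> order;                        //nodes in the order they are processed
    vector<bool> visited(adj.size(), false);
    queue<int> q;
    visited[source] = true;                   //the initial node enters the waiting state
    q.push(source);
    while (!q.empty())
    {
        int node = q.front();                 //Step 4: dequeue and process node
        q.pop();
        order.push_back(node);
        for (int next : adj[node])            //Step 5: enqueue all unvisited neighbours
        {
            if (!visited[next])
            {
                visited[next] = true;
                q.push(next);
            }
        }
    }
    return order;
}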

15. What do you understand about the DFS (Depth First Search)
algorithm.


Depth First Search or DFS is a technique for traversing or exploring data structures
such as trees and graphs. The algorithm starts at the root node (in the case of a
graph, any random node can be used as the root node) and examines each branch as
far as feasible before retracing. So the basic idea is to start at the root or any arbitrary
node and mark it, then advance to the next unmarked node and repeat until there
are no more unmarked nodes. After that, go back and check for any more unmarked
nodes to cross. Finally, print the path's nodes. The DFS algorithm is given below:
Step1: Create a recursive function that takes the node's index and a visited array
as input.
Step 2: Make the current node a visited node and print it.
Step 3: Call the recursive function with the index of the adjacent node after
traversing all nearby and unmarked nodes.
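
A compact recursive C++ sketch of DFS over an adjacency-list graph (representation assumed for illustration):

#include <vector>
using namespace std;

void dfs(int node, const vector<vector<int>> &adj, vector<bool> &visited, vector<int> &order)
{
    visited[node] = true;                     //Step 2: mark the current node as visited
    order.push_back(node);                    //and record ("print") it
    for (int next : adj[node])                //Step 3: recurse on every unvisited adjacent node
    {
        if (!visited[next])
            dfs(next, adj, visited, order);
    }
}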

Algorithm Interview Questions for Experienced


16. How do the encryption algorithms work?
The process of transforming plaintext into a secret code format known as "Ciphertext"
is known as encryption. For calculations, this technique uses a string of bits known as
"keys" to convert the text. The larger the key, the more potential patterns for
producing ciphertext there are. The majority of encryption algorithms use fixed
blocks of input with lengths ranging from 64 to 128 bits, while others use the stream
technique.

17. What are a few of the most widely used cryptographic algorithms?
A few of the most widely used cryptographic algorithms are as follows:


IDEA
CAST
CMEA
3-way
Blowfish
GOST
LOKI
DES and Triple DES.

18. Describe the merge sort algorithm.


Merge sort (also known as mergesort) is a general-purpose, comparison-based
sorting algorithm developed in computer science. The majority of its
implementations result in a stable sort, which indicates that the order of equal
elements in the input and output is the same. In 1945, John von Neumann devised
the merge sort method, which is a divide and conquer algorithm. The following is
how a merge sort works conceptually:
Separate the unsorted list into n sublists, each with one element (a list of one
element is considered sorted).
Merge sublists repeatedly to create new sorted sublists until only one sublist
remains. The sorted list will be displayed then.
The time complexity of the Merge Sort Algorithm is O(nlog(n)) where n is the size of
the list of the elements to be sorted while the space complexity of the Merge Sort
Algorithm is O(n), that is, linear space complexity.
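
A possible C++ sketch of the two steps above, shown on a vector of integers (helper and function names are illustrative):

#include <vector>
using namespace std;

//merge the two sorted halves arr[l..m] and arr[m+1..r] into one sorted range
void mergeHalves(vector<int> &arr, int l, int m, int r)
{
    vector<int> left(arr.begin() + l, arr.begin() + m + 1);
    vector<int> right(arr.begin() + m + 1, arr.begin() + r + 1);
    int i = 0, j = 0, k = l;
    while (i < (int)left.size() && j < (int)right.size())
    {
        if (left[i] <= right[j])              //"<=" keeps equal elements in order (stable sort)
            arr[k++] = left[i++];
        else
            arr[k++] = right[j++];
    }
    while (i < (int)left.size())  arr[k++] = left[i++];
    while (j < (int)right.size()) arr[k++] = right[j++];
}

//split the range, sort both halves recursively, then merge them
void mergeSort(vector<int> &arr, int l, int r)
{
    if (l >= r)
        return;                               //a single element (or empty range) is already sorted
    int m = l + (r - l) / 2;
    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);
    mergeHalves(arr, l, m, r);
}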


19. Describe the quick sort algorithm.


Quicksort is a sorting algorithm that is in place (in-place algorithm is an algorithm
that transforms input using no auxiliary data structure). It was created by the British
computer scientist Tony Hoare in 1959 and was published in 1961, and it is still a
popular sorting algorithm. It can be somewhat quicker than merge sort and two or
three times faster than heapsort when properly done.
Quicksort is based on the divide and conquer algorithmic paradigm. It operates by
picking a 'pivot' element from the array and separating the other elements into two
subarrays based on whether they are greater or less than the pivot. As a result, it is
also known as partition exchange sort. The subarrays are then recursively sorted. This
can be done in place, with only a little amount of additional RAM (Random Access
Memory) required for sorting.


Quicksort is a comparison sorting algorithm, which means it can sort objects of any
type that have a "less-than" relation (technically, a total order) declared for them.
Quicksort is not a stable sort, which means that the relative order of equal sort items
is not retained in efficient implementations. Quicksort (like the partition method)
must be written in such a way that it can be called for a range within a bigger array,
even if the end purpose is to sort the entire array, due to its recursive nature.
The following are the steps for in-place quicksort:
If there are less than two elements in the range, return immediately because
there is nothing else to do. A special-purpose sorting algorithm may be used for
other very small lengths, and the rest of these stages may be avoided.
Otherwise, choose a pivot value, which is a value that occurs in the range (the
precise manner of choice depends on the partition routine, and can involve
randomness).
Partition the range by reordering its elements while determining a point of
division so that all elements with values less than the pivot appear before the
division and all elements with values greater than the pivot appear after it;
elements with values equal to the pivot can appear in either direction. Most
partition procedures ensure that the value that ends up at the point of division is
equal to the pivot, and is now in its ultimate location because at least one
instance of the pivot is present (but termination of quicksort does not depend
on this, as long as sub-ranges strictly smaller than the original are produced).
Apply the quicksort recursively to the sub-range up to the point of division and
the sub-range after it, optionally removing the element equal to the pivot at the
point of division from both ranges. (If the partition creates a potentially bigger
sub-range near the boundary with all elements known to be equal to the pivot,
these can also be omitted.)
Quicksort's mathematical analysis reveals that, on average, it takes O(n log(n)) time
complexity to sort n items. In the worst-case scenario, it performs in time complexity
of O(n^2).


Note: The algorithm's performance can be influenced by the partition routine


(including the pivot selection) and other details not fully defined above, possibly
to a large extent for specific input arrays. It is therefore crucial to define these
alternatives before discussing quicksort's efficiency.
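
As a concrete illustration of the steps above, here is one possible in-place C++ sketch that chooses the last element of the range as the pivot (a Lomuto-style partition; as the note says, other pivot selections and partition routines are equally valid):

#include <vector>
#include <utility>
using namespace std;

//partition arr[low..high] around the last element and return the point of division
int partitionRange(vector<int> &arr, int low, int high)
{
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++)
    {
        if (arr[j] < pivot)                   //smaller values go before the division point
            swap(arr[++i], arr[j]);
    }
    swap(arr[i + 1], arr[high]);              //place the pivot at its final position
    return i + 1;
}

void quickSortRange(vector<int> &arr, int low, int high)
{
    if (low < high)
    {
        int p = partitionRange(arr, low, high);
        quickSortRange(arr, low, p - 1);      //sort the sub-range before the pivot
        quickSortRange(arr, p + 1, high);     //sort the sub-range after the pivot
    }
}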

20. Describe the bubble sort algorithm with the help of an example.
Bubble sort, also known as sinking sort, is a basic sorting algorithm that iterates
through a list, comparing neighbouring elements and swapping them if they are out
of order. The list is sent through again and again until it is sorted. The comparison
sort method is named from the manner that smaller or larger components "bubble"
to the top of the list. This simplistic method performs badly in real-world situations
and is mostly used as a teaching aid. Let us take an example to understand how
bubble sort works:
Let us assume that the array to be sorted is (50 10 40 20 80). The various passes or
rounds of bubble sort are given below:


First Pass:
(50 10 40 20 80) –> (10 50 40 20 80), Since 50 > 10, the algorithm compares
the first two elements and swaps them.
(10 50 40 20 80) –> (10 40 50 20 80), Since 50 > 40, the algorithm swaps the
values at the second and third positions.
(10 40 50 20 80) –> (10 40 20 50 80), Since 50 > 20, the algorithm swaps the
third and fourth elements.
(10 40 20 50 80) –> (10 40 20 50 80), The algorithm does not swap the fourth
and fifth elements because they are already in order (80 > 50).
Second Pass:
(10 40 20 50 80) –> (10 40 20 50 80), Elements at the first and second position
are in order, so no swapping.
(10 40 20 50 80) –> (10 20 40 50 80), Since 40 > 20, the algorithm swaps the
values at the second and third positions.
(10 20 40 50 80) –> (10 20 40 50 80), Elements at the third and fourth
position are in order, so no swapping.
(10 20 40 50 80) –> (10 20 40 50 80), Elements at the fourth and fifth position
are in order, so no swapping.
The array is now sorted, but our algorithm is unsure whether it is complete. To know
that the array is sorted, the algorithm must complete one full pass without any swaps.
Third Pass:
(10 20 40 50 80) –> (10 20 40 50 80), Elements at the first and second
position are in order, so no swapping.
(10 20 40 50 80) –> (10 20 40 50 80), Elements at the second and third
position are in order, so no swapping.
(10 20 40 50 80) –> (10 20 40 50 80), Elements at the third and fourth
position are in order, so no swapping.
(10 20 40 50 80) –> (10 20 40 50 80), Elements at the fourth and fifth
position are in order, so no swapping.
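
The passes traced above correspond to a short C++ sketch like the following (the array is passed as a vector; names are illustrative):

#include <vector>
#include <utility>
using namespace std;

void bubbleSort(vector<int> &arr)
{
    int n = (int)arr.size();
    bool swapped = true;
    while (swapped)                           //stop after a complete pass with no swaps
    {
        swapped = false;
        for (int i = 0; i + 1 < n; i++)
        {
            if (arr[i] > arr[i + 1])          //neighbouring elements out of order
            {
                swap(arr[i], arr[i + 1]);     //the larger value "bubbles" towards the end
                swapped = true;
            }
        }
        n--;                                  //the largest element of this pass is now in place
    }
}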


21. Write an algorithm to find the maximum subarray sum for a given array. In other words, find the maximum sum that can be achieved by taking contiguous elements from a given array of integers.
Kadane's algorithm can be used to find the maximum subarray sum for a given array.
From le to right, Kadane's algorithm searches the provided array. It then computes
the subarray with the largest sum ending at position j in the jth step, and this sum is
stored in the variable "currentSum". Furthermore, it computes the subarray with the
biggest sum anywhere in the subarray starting from the first position to the jth
position, that is, in A[1...j], and stores it in the variable "bestSum". This is done by
taking the maximum value of the variable "currentSum" till now and then storing it
in the variable "bestSum". In the end, the value of "bestSum" is returned as the final
answer to our problem.
Formally, Kadane's algorithm can be stated as follows:
Step 1: Initialize the following variables:
bestSum = INT_MIN
currentSum = 0 // for empty subarray, it is initialized as value 0
Step 2: Loop for each element of the array A
(a) currentSum = currentSum + A[i]
(b) if(bestSum < currentSum)
bestSum = currentSum
(c) if(currentSum < 0)
currentSum = 0
Step 3: return bestSum
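
A direct C++ translation of these steps (an illustrative sketch, not taken from the original text):

#include <vector>
#include <climits>
using namespace std;

int maxSubarraySum(const vector<int> &A)
{
    int bestSum = INT_MIN;                    //Step 1
    int currentSum = 0;
    for (int i = 0; i < (int)A.size(); i++)   //Step 2: loop over every element
    {
        currentSum = currentSum + A[i];       //(a) extend the subarray ending at position i
        if (bestSum < currentSum)             //(b) remember the best sum seen so far
            bestSum = currentSum;
        if (currentSum < 0)                   //(c) a negative running sum never helps, so restart
            currentSum = 0;
    }
    return bestSum;                           //Step 3
}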

22. Explain the Dijkstra's Algorithm to find the shortest path between a given node in a graph to any other node in the graph.


Dijkstra's algorithm is a method for determining the shortest pathways between nodes in a graph, which might be used to depict road networks. Edsger W. Dijkstra, a
computer scientist, conceived it in 1956 and published it three years later. There are
numerous variations of the algorithm. The original Dijkstra algorithm discovered the
shortest path between two nodes, but a more frequent form fixes a single node as
the "source" node and finds the shortest pathways from the source to all other nodes
in the network, resulting in a shortest-path tree. Let us take a look at Dijkstra's
Algorithm to find the shortest path between a given node in a graph to any other
node in the graph:
Let us call the node where we are starting the process as the initial node. Let the
distance from the initial node to Y be the distance of node Y. Dijkstra's algorithm will
begin with unlimited distances and attempt to improve them incrementally.


Step 1: Mark all nodes that have not been visited yet. The unvisited set is a
collection of all the nodes that have not been visited yet.
Step 2: Assign a tentative distance value to each node: set it to zero for our first
node and infinity for all others. The length of the shortest path discovered so far
between the node v and the initial node is the tentative distance of a node v.
Because no other vertex other than the source (which is a path of length zero) is
known at the start, all other tentative distances are set to infinity. Set the
current node to the beginning node.
Step 3: Consider all of the current node's unvisited neighbours and determine
their approximate distances through the current node. Compare the newly
calculated tentative distance to the current assigned value and choose the one
that is less. If the present node A has a distance of 5 and the edge linking it to a
neighbour B has a length of 3, the distance to B through A will be 5 +3 = 8.
Change B to 8 if it was previously marked with a distance greater than 8. If this is
not the case, the current value will be retained.
Step 4: Mark the current node as visited and remove it from the unvisited set
once we have considered all of the current node's unvisited neighbours. A node
that has been visited will never be checked again.
Stop if the destination node has been marked visited (when planning a route
between two specific nodes) or if the smallest tentative distance between the
nodes in the unvisited set is infinity (when planning a complete traversal; occurs
when there is no connection between the initial node and the remaining
unvisited nodes). The algorithm is now complete.
Step 5: Otherwise, return to step 3 and select the unvisited node indicated with
the shortest tentative distance as the new current node.


It is not required to wait until the target node is "visited" as described above while
constructing a route: the algorithm can end once the destination node has the least
tentative distance among all "unvisited" nodes (and thus could be selected as the
next "current"). For arbitrary directed graphs with unbounded non-negative weights,
Dijkstra's algorithm is asymptotically the fastest known single-source shortest path
algorithm with time complexity of O(|E| + |V|log(|V|)), where |V| is the number of
nodes and |E| is the number of edges in the graph.
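
One common way to implement this in C++ is with a min-priority queue keyed on the tentative distance. The sketch below assumes the graph is given as an adjacency list of (neighbour, weight) pairs with non-negative weights; names are illustrative:

#include <vector>
#include <queue>
#include <utility>
#include <functional>
#include <climits>
using namespace std;

vector<int> dijkstra(int src, const vector<vector<pair<int,int>>> &adj)
{
    int n = (int)adj.size();
    vector<int> dist(n, INT_MAX);             //Step 2: tentative distances start at infinity
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    dist[src] = 0;                            //the initial node has distance zero
    pq.push(make_pair(0, src));
    while (!pq.empty())
    {
        int d = pq.top().first;               //unvisited node with the smallest tentative distance
        int u = pq.top().second;
        pq.pop();
        if (d > dist[u])
            continue;                         //stale queue entry, u has already been finalised
        for (int k = 0; k < (int)adj[u].size(); k++)   //Step 3: relax every edge leaving u
        {
            int v = adj[u][k].first;
            int w = adj[u][k].second;
            if (dist[u] + w < dist[v])
            {
                dist[v] = dist[u] + w;        //a shorter tentative distance was found
                pq.push(make_pair(dist[v], v));
            }
        }
    }
    return dist;                              //shortest distance from src to every node
}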

23. Can we use the binary search algorithm for linked lists?
Justify your answer.
No, we cannot use the binary search algorithm for linked lists.
Explanation: Because random access is not allowed in linked lists, reaching the
middle element in constant or O(1) time is impossible. As a result, the usage of a
binary search algorithm on a linked list is not possible.

24. What are recursive algorithms? State the important rules which every recursive algorithm must follow.
Recursive algorithm is a way of tackling a difficult problem by breaking it down into
smaller and smaller subproblems until the problem is small enough to be solved
quickly. It usually involves a function that calls itself (property of recursive functions).
The three laws which must be followed by all recursive algorithms are as follows:


There should be a base case.


It is necessary for a recursive algorithm to call itself.
The state of a recursive algorithm must be changed in order for it to return to
the base case.
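
A tiny C++ illustration of these three rules (factorial is chosen here only as an example):

//factorial of n, written recursively
long long factorial(int n)
{
    if (n <= 1)
        return 1;                      //rule 1: the base case stops the recursion
    //rules 2 and 3: the function calls itself on a smaller input,
    //so every call moves the state closer to the base case
    return (long long)n * factorial(n - 1);
}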

25. Devise an algorithm to insert a node in a Binary Search Tree.


An algorithm to insert a node in a Binary Search Tree is given below:
Assign the current node to the root node.
If the root node's value is greater than the value that has to be added:
If the root node has a left child, go to the left.
Insert node here if it does not have a left child.
If the root node's value is less than the value that has to be added:
If the root node has a right child, go to the right.
Insert node here if it does not have the right child.
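
A recursive C++ sketch of this insertion (the node structure is assumed here for illustration):

struct BSTNode {
    int data;
    BSTNode *left;
    BSTNode *right;
    BSTNode(int value) : data(value), left(nullptr), right(nullptr) {}
};

BSTNode* insertNode(BSTNode *root, int value)
{
    if (root == nullptr)
        return new BSTNode(value);                    //empty spot found - insert the node here
    if (value < root->data)
        root->left = insertNode(root->left, value);   //root value is greater, go to the left
    else
        root->right = insertNode(root->right, value); //otherwise, go to the right
    return root;
}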

26. Define insertion sort and selection sort.


Insertion sort: Insertion sort separates the list into sorted and unsorted sub-
lists. It inserts one element at a time into the proper spot in the sorted sub-list.
After insertion, the output is a sorted sub-list. It iteratively works on all the
elements of an unsorted sub-list and inserts them into a sorted sub-list in order.
Selection sort: Selection sort is an in-place sorting technique. It separates the
data collection into sorted and unsorted sub-lists. The minimum element from
the unsorted sub-list is then selected and placed in the sorted list. This loops
until all of the elements in the unsorted sub-list have been consumed by the
sorted sub-list.
Note: Both sorting strategies keep two sub-lists, sorted and unsorted, and place
one element at a time into the sorted sub-list. Insertion sort takes the currently
selected element and places it in the sorted array at the right point while
keeping the insertion sort attributes. Selection sort, on the other hand, looks for
the smallest element in an unsorted sub-list and replaces it with the current
element.
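
Short C++ sketches of both techniques, matching the descriptions above (names are illustrative):

#include <vector>
#include <utility>
using namespace std;

void insertionSort(vector<int> &a)
{
    for (int i = 1; i < (int)a.size(); i++)
    {
        int key = a[i];                      //single element held outside the array
        int j = i - 1;
        while (j >= 0 && a[j] > key)
        {
            a[j + 1] = a[j];                 //shift larger elements of the sorted sub-list right
            j--;
        }
        a[j + 1] = key;                      //place the element at its proper spot
    }
}

void selectionSort(vector<int> &a)
{
    for (int i = 0; i < (int)a.size(); i++)
    {
        int minIndex = i;
        for (int j = i + 1; j < (int)a.size(); j++)
        {
            if (a[j] < a[minIndex])          //smallest element of the unsorted sub-list
                minIndex = j;
        }
        swap(a[i], a[minIndex]);             //move it to the end of the sorted sub-list
    }
}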


27. Define tree traversal and list some of the algorithms to traverse a binary tree.
The process of visiting all the nodes of a tree is known as tree traversal.
Some of the algorithms to traverse a binary tree are as follows:
Pre-order Traversal.
In order Traversal.
Post order Traversal.
Breadth First Search
ZigZag Traversal.

28. Describe the heap sort algorithm.


Heap sort is a comparison-based sorting algorithm. Heapsort is similar to selection
sort in that it separates its input into a sorted and an unsorted region, then
successively decreases the unsorted part by taking the largest element from it and
putting it into the sorted region. Unlike selection sort, heapsort does not waste time
scanning the unsorted region in linear time; instead, heap sort keeps the unsorted
region in a heap data structure to identify the largest element in each step more
rapidly. Let us take a look at the heap sort algorithm:
The Heapsort algorithm starts by converting the list to a max heap. The algorithm
then swaps the first and last values in the list, reducing the range of values
considered in the heap operation by one, and filters the new first value into its heap
place. This process is repeated until the range of values considered is only one value
long.
On the list, use the buildMaxHeap() function. This function, also known as
heapify(), creates a heap from a list in O(n) operations.
Change the order of the list's first and last elements. Reduce the list's considered
range by one.
To sift the new initial member to its appropriate index in the heap, use the
siftDown() function on the list.
Unless the list's considered range is one element, proceed to step 2.


Note: The buildMaxHeap() operation runs only one time with a linear time
complexity or O(n) time complexity. The siftDown() function works in O(log n)
time complexity, and is called n times. Therefore, the overall time complexity of
the heap sort algorithm is O(n + n log (n)) = O(n log n).
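
A C++ sketch following these steps, with an array-based siftDown; the function names mirror the description while the implementation details are illustrative:

#include <vector>
#include <utility>
using namespace std;

//restore the max-heap property for the subtree rooted at start, within a[0..end]
void siftDown(vector<int> &a, int start, int end)
{
    int root = start;
    while (2 * root + 1 <= end)
    {
        int child = 2 * root + 1;                //left child of root
        if (child + 1 <= end && a[child] < a[child + 1])
            child++;                             //pick the larger of the two children
        if (a[root] >= a[child])
            return;                              //heap property already holds
        swap(a[root], a[child]);
        root = child;                            //continue sifting down
    }
}

void heapSort(vector<int> &a)
{
    int n = (int)a.size();
    for (int start = n / 2 - 1; start >= 0; start--)
        siftDown(a, start, n - 1);               //the buildMaxHeap()/heapify() step, O(n) overall
    for (int end = n - 1; end > 0; end--)
    {
        swap(a[0], a[end]);                      //move the current maximum into the sorted region
        siftDown(a, 0, end - 1);                 //restore the heap on the reduced range
    }
}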

29. What is the space complexity of the insertion sort algorithm?
Insertion sort is an in-place sorting method, which implies it does not require any
additional or minimal data storage. In insertion sort, only a single list element must
be stored outside of the starting data, resulting in a constant space complexity or
O(1) space complexity.

30. What is the space complexity of the selection sort algorithm?
Selection sort is an in place sorting method, which implies it does not require any
additional or minimal data storage. Therefore, the selection sort algorithm has a
constant space complexity or O(1) space complexity.

Conclusion

So, in conclusion, we would like to convey to our readers that the Algorithm
Interviews are usually the most crucial and tough interviews of all in the Recruitment
process of a lot of Software Companies and a sound understanding of Algorithms
usually implies that the candidate is very good in logical thinking and has the ability
to think out of the box. Algorithm interview questions can be easily solved if one has
a sound understanding of Algorithms and has gone through a lot of Algorithm
Examples and Algorithm MCQs (which we will be covering in the next section of this
article). Therefore, we suggest to all the budding coders of today to develop a strong
grasp on the various Algorithms that have been discovered to date so that they can
ace their next Technical Interviews.
Useful Resources:


Data Structures and Algorithms


Data Structures Interview Questions

