SQL Cheat Sheet
© Copyright by Interviewbit
To get introduced to SQL, we first need to know about Databases and Database
Management Systems (DBMS).
Data is a collection of facts related to some object. A Database is a collection of
small units of data arranged in a systematic manner. A Relational Database
Management System (RDBMS) is a collection of tools that allows users to
manipulate, organize and visualize the contents of a database while following
standard rules that facilitate fast interaction between the database and the user.
After getting introduced to the concepts of data, databases and DBMS/RDBMS, we can
finally learn about SQL. SQL, or Structured Query Language, is the language
that we (the users) use to communicate with databases. It is used for storing,
manipulating and retrieving data in a database.
SQL Features
SQL allows us to interact with the databases and bring out/manipulate data within
them. Using SQL, we can create our own databases and then add data into these
databases in the form of tables.
The following functionalities can be performed on a database using SQL:
Create or Delete a Database.
Create or Alter or Delete some tables in a Database.
SELECT data from tables.
INSERT data into tables.
UPDATE data in tables.
DELETE data from tables.
Create Views in the database.
Execute various aggregate functions.
To get started with using SQL, we first need to install a Database Management
System server. After installing the RDBMS, the RDBMS itself provides all the
required tools to perform operations on the database and its contents through SQL.
Some common RDBMSs in wide use are:
Oracle
MySQL
PostgreSQL
HeidiSQL (strictly speaking, a client front-end for servers such as MySQL, rather than an RDBMS itself)
To install any RDBMS, we just need to visit its official website, download the
setup file and follow the installation instructions there. With the server set up,
we can use a query editor to type our SQL queries.
2. Tables
All data in the database is organized efficiently in the form of tables. A database
can be formed from a collection of multiple tables, where each table stores a
particular kind of data, and the tables themselves are linked with each other
using relations.
Example:
Consider a table of students that stores their Name, Phone, and Class as data. An
ID is assigned to each student to uniquely identify that student, and using this
ID, we can relate data from this table to other tables.
SQL-Create Table:
We use the CREATE command to create a table. The table in the above example can
be created with the following code:
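A sketch of such a statement (the column types are assumptions, since the text only names the columns):

```sql
-- Create the Students table; types are illustrative
CREATE TABLE Students (
    ID INT PRIMARY KEY,   -- uniquely identifies each student
    Name VARCHAR(255),
    Phone VARCHAR(20),
    Class INT
);
```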
SQL-Delete Table:
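We use the DROP command to delete a table; for example:

```sql
-- Removes the table structure along with all its rows
DROP TABLE Students;
```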
3. SQL DataTypes
To allow the users to work with tables effectively, SQL provides us with various
datatypes each of which can be useful based on the type of data we handle.
SQL datatypes fall into a few major divisions — string, numeric, and date/time.
The next section describes the most popular SQL Server datatypes categorised
under each major division.
String Datatypes:
The table below lists common string datatypes available in SQL, along with their
descriptions:
Datatype       Description
CHAR(n)        Fixed-length character string of length n
VARCHAR(n)     Variable-length character string of up to n characters
TEXT           Variable-length string for large bodies of text
Numeric Datatypes:
The table below lists common numeric datatypes in SQL along with their descriptions:
Datatype       Description
INT            Standard integer value
BIGINT         Large-range integer value
FLOAT          Approximate floating-point number
DECIMAL(p,s)   Exact fixed-point number with precision p and scale s
Date/Time Datatypes:
The datatypes available in SQL to handle Date/Time operations effectively are
called the Date/Time datatypes. The table below lists common Date/Time datatypes
in SQL along with their descriptions:
Datatype       Description
DATE           A calendar date (YYYY-MM-DD)
TIME           A time of day (hh:mm:ss)
DATETIME       A combined date and time value
TIMESTAMP      A date and time value, typically tracked in UTC
4. SQL Commands
SQL Commands are instructions that are used by the user to communicate with the
database, to perform specific tasks, functions and queries of data.
Types of SQL Commands:
SQL commands are broadly classified into five types — DDL, DML, DCL, TCL and
DQL — each of which is described below.
1. Data Definition Language (DDL): It defines or changes a table’s structure by
creating, altering and deleting database objects. Its changes are auto-committed
(all changes are automatically and permanently saved in the database). Some
commands that are a part of DDL are:
CREATE: Used to create a new table in the database.
Example:
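For instance (the columns are illustrative, chosen to match the STUDENT table used in later examples):

```sql
CREATE TABLE STUDENT (
    Student_Id INT,
    Name VARCHAR(255),
    Subject VARCHAR(255)
);
```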
ALTER: Used to modify an existing table, for example by adding, renaming or
dropping a column.
Example:
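For instance, a column can be added to an existing table with ALTER (the column name here is illustrative):

```sql
ALTER TABLE STUDENT ADD Email VARCHAR(255);
```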
DROP: Used to delete the structure and record stored in the table.
Example:
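For instance:

```sql
-- Deletes the STUDENT table's structure and all its records
DROP TABLE STUDENT;
```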
TRUNCATE: Used to delete all the rows from the table, and free up the space in
the table.
Example:
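For instance:

```sql
-- Removes every row from STUDENT but keeps the table structure
TRUNCATE TABLE STUDENT;
```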
2. Data Manipulation Language (DML): These commands are used to modify the data
present in the database. Unlike DDL, DML changes are not auto-committed. Some
commands that are a part of DML are:
INSERT: Used to insert data into a table.
Example:
INSERT INTO STUDENT (Name, Subject)
VALUES ('Scaler', 'DSA');
In the above example, we insert the values “Scaler” and “DSA” into the columns Name
and Subject in the STUDENT table.
UPDATE: Used to update the values of a table’s columns.
Example:
UPDATE STUDENT
SET User_Name = 'Interviewbit'
WHERE Student_Id = '2'
In the above example, we update the name of the student, whose Student_ID is 2, to
the User_Name = “Interviewbit”.
DELETE: Used to delete one or more rows from a table.
Example:
DELETE FROM STUDENT
WHERE Name = 'Scaler';
In the above example, the query deletes the row where the Name of the student is
“Scaler” from the STUDENT table.
3. Data Control Language(DCL): These commands are used to grant and take back
access/authority (revoke) from any database user. Some commands that are a part of
DCL are:
GRANT: Used to grant a user access privileges to a database.
Example:
GRANT SELECT, UPDATE ON TABLE_1 TO USER_1, USER_2;
In the above example, we grant the rights to SELECT and UPDATE data from the table
TABLE_1 to the users USER_1 and USER_2.
REVOKE: Used to take back permissions from a user.
Example:
REVOKE SELECT, UPDATE ON TABLE_1 FROM USER_1, USER_2;
In the above example, we revoke the rights to SELECT and UPDATE data from the
table TABLE_1 from the users USER_1 and USER_2.
4. Transaction Control Language (TCL): These commands are used in conjunction
with DML commands to manage transactions; DML changes are not permanent until
they are committed. Some commands that are a part of TCL are:
COMMIT: Saves all the transactions made on a database.
Example:
DELETE FROM STUDENTS
WHERE AGE = 16;
COMMIT;
In the above example, we delete the rows where the AGE of the students is 16, and
then save this change to the database using COMMIT.
ROLLBACK: Used to undo transactions that have not yet been saved.
Example:
DELETE FROM STUDENTS
WHERE AGE = 16;
ROLLBACK;
By using ROLLBACK in the above example, we can undo the deletion performed in
the previous line of code, because the changes have not been committed yet.
SAVEPOINT: Used to roll a transaction back to a certain point without having to
roll back the entirety of the transaction.
Example:
SAVEPOINT SAVED;
DELETE FROM STUDENTS
WHERE AGE = 16;
ROLLBACK TO SAVED;
In the above example, we have created a savepoint just before performing the delete
operation in the table, and then we can return to that savepoint using the ROLLBACK
TO command.
5. Data Query Language: It is used to fetch some data from a database. The
command belonging to this category is:
SELECT: It is used to retrieve data, optionally filtered by conditions described
using the WHERE clause. Note that the WHERE clause is optional and can be used
depending on the user’s needs.
Example: With WHERE clause,
SELECT Name
FROM Student
WHERE age >= 18;
Without WHERE clause,
SELECT Name
FROM Student;
In the first example, we select only those names in the Student table whose
corresponding age is 18 or greater. In the second example, we select all the names
from the Student table.
5. SQL Constraints
Constraints are rules applied to a table — for example, rules specifying valid
limits or ranges on the data in a column.
The valid constraints in SQL are:
1. NOT NULL: Specifies that this column cannot store a NULL value.
Example:
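A sketch of such a table (columns beyond ID and NAME are illustrative):

```sql
CREATE TABLE STUDENT (
    ID INT NOT NULL,            -- must always hold a value
    NAME VARCHAR(255) NOT NULL, -- must always hold a value
    AGE INT                     -- may be NULL
);
```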
In the above example, we create a table STUDENT, which has some attributes it has
to store. Among these attributes we declare that the columns ID and NAME cannot
have NULL values in their fields using NOT NULL constraint.
2. UNIQUE: Specifies that this column can have only Unique values, i.e the values
cannot be repeated in the column.
Example:
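A sketch of such a declaration:

```sql
CREATE TABLE Student (
    ID INT UNIQUE,      -- no two rows may share the same ID
    NAME VARCHAR(255)
);
```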
In the above example, we create a table Student and declare the ID column to be
unique using the UNIQUE constraint.
3. Primary Key: It is a field using which it is possible to uniquely identify each row in
a table. We will get to know about this in detail in the upcoming section.
4. Foreign Key: It is a field using which it is possible to uniquely identify each row in
some other table. We will get to know about this in detail in the upcoming section.
5. CHECK: It validates if all values in a column satisfy some particular condition or
not.
Example:
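A sketch of such a constraint (the other columns are illustrative):

```sql
CREATE TABLE STUDENT (
    ID INT,
    NAME VARCHAR(255),
    AGE INT CHECK (AGE < 20)  -- rows violating AGE < 20 are rejected
);
```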
Here, in the above query, we add the CHECK constraint into the table. By adding the
constraint, we can only insert entries that satisfy the condition AGE < 20 into the
table.
6. DEFAULT: It specifies a default value for a column when no value is specified for
that field.
Example:
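A sketch of such a declaration (the other columns are illustrative):

```sql
CREATE TABLE STUDENT (
    ID INT,
    NAME VARCHAR(255),
    CLASS INT DEFAULT 2   -- used when no CLASS value is supplied
);
```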
In the above query, we set a default value of 2 for the CLASS attribute. While inserting
records into the table, if the column has no value specified, then 2 is assigned to that
column as the default value.
6. CRUD Operations in SQL
INSERT: We use the INSERT statement to perform the Create ( C ) operation of
CRUD.
SQL Syntax:
INSERT INTO name_of_table (column1, column2, ...)
VALUES (value1, value2, ...);
Example:
INSERT INTO student (ID, name, phone, class)
VALUES (1, 'Scaler', '+1234-5678', 12);
The above example inserts a row into the student table with the values 1, Scaler,
+1234-5678 and 12 for the ID, name, phone and class columns.
SELECT: We use the select statement to perform the Read ( R ) operation of
CRUD.
SQL Syntax:
SELECT column1, column2, ...
FROM name_of_table;
Example:
SELECT name, class
FROM student;
The above example allows the user to read the data in the name and class columns
from the student table.
UPDATE: Update is the ‘U’ component of CRUD. The Update command is used to
update the contents of specific columns of specific rows.
SQL Syntax:
UPDATE name_of_table
SET column1=value1,column2=value2,...
WHERE conditions...;
Example:
UPDATE customers
SET phone = '+1234-9876'
WHERE ID = 2;
The above SQL example code updates the row of the ‘customers’ table whose ID is 2
with the new given phone number.
DELETE:
The Delete command is used to delete or remove some rows from a table. It is the ‘D’
component of CRUD.
SQL Syntax:
DELETE FROM name_of_table
WHERE conditions...;
Example:
DELETE FROM student
WHERE class = 11;
The above SQL example code deletes the rows from the table student where the
condition class = 11 is true.
7. SQL Keywords
The below table lists some important keywords used in SQL, along with their
description and example.
AS — Renames a table/column with an alias existing only for the query duration.
Example: SELECT name AS student_name, phone FROM student;
ASC — Used in conjunction with ORDER BY to sort data in ascending order.
Example: SELECT column1, column2, … FROM table_name ORDER BY column1, column2, … ASC;
DESC — Used in conjunction with ORDER BY to sort data in descending order.
Example: SELECT column1, column2, … FROM table_name ORDER BY column1, column2, … DESC;
8. Clauses in SQL
Clauses are built-in components of SQL queries used for filtering and analysing
data quickly, allowing the user to efficiently extract the required information
from the database.
The below table lists some of the important SQL clauses and their description with
examples:
GROUP BY — Groups rows that have the same values into summary rows.
Example: SELECT COUNT(StudentID), State FROM Students GROUP BY State;
9. SQL Operators
Operators are used in SQL to form complex expressions, allowing more intricate
queries that extract more precise data from a database.
There are three main types of operators — Arithmetic, Comparison and Logical
operators — each of which is described below.
Arithmetic Operators:
Arithmetic operators allow the user to perform arithmetic operations in SQL. The
table below shows the list of arithmetic operators available in SQL:
Operator Description
+ Addition
- Subtraction
* Multiplication
/ Division
% Modulo
Bitwise Operators:
Bitwise operators are used to perform bit-manipulation operations in SQL. The
table below shows the list of bitwise operators available in SQL:
Operator Description
& Bitwise AND
| Bitwise OR
^ Bitwise XOR
Relational Operators:
Relational operators are used to build relational expressions in SQL, i.e. those
expressions whose value is either true or false. The table below shows the list
of relational operators available in SQL:
Operator Description
= Equal to
<> (or !=) Not equal to
> Greater than
< Less than
>= Greater than or equal to
<= Less than or equal to
Compound Operators:
Compound operators combine an arithmetic or bitwise operator with assignment,
and can be used as a shorthand while writing code. The table below shows the
list of compound operators available in SQL:
Operator Description
+= Add equals
-= Subtract equals
*= Multiply equals
/= Divide equals
%= Modulo equals
&= AND equals
|= OR equals
^= XOR equals
Logical Operators:
Logical operators are used to combine two or more relational statements into one
compound statement whose truth value is evaluated as a whole. The table below
shows the SQL logical operators with their description:
Operator Description
AND True if all the conditions it separates are true
OR True if any of the conditions it separates is true
NOT Reverses the truth value of the condition that follows it
BETWEEN True if the operand lies within the given range
IN True if the operand equals one of a list of values
LIKE True if the operand matches a pattern
EXISTS True if a subquery returns any records
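A table with a primary key can be declared as follows (the non-key columns are illustrative):

```sql
CREATE TABLE STUDENT (
    ID INT NOT NULL,
    NAME VARCHAR(255),
    PHONE VARCHAR(20),
    PRIMARY KEY (ID)   -- ID uniquely identifies each row
);
```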
1. Primary Key: It is a field (or combination of fields) using which it is
possible to uniquely identify each row in a table. The example above creates a
table called STUDENT with some given properties (columns) and assigns the ID
column as the primary key of the table. Using the value of the ID column, we can
uniquely identify its corresponding row.
2. Foreign Key: Foreign keys are keys that reference the primary keys of some other
table. They establish a relationship between 2 tables and link them up.
Example: In the below example, a table called Orders is created with some given
attributes; its Primary Key is declared to be OrderID, and its Foreign Key is
declared to be PersonID, referencing the Persons table. The Persons table is
assumed to be created beforehand.
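A sketch of the Orders table described above (OrderNumber is an illustrative extra column):

```sql
CREATE TABLE Orders (
    OrderID INT NOT NULL,
    OrderNumber INT,                                     -- illustrative column
    PersonID INT,
    PRIMARY KEY (OrderID),
    FOREIGN KEY (PersonID) REFERENCES Persons(PersonID)  -- links to Persons
);
```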
Super Key: It is a group of one or more attributes that uniquely identifies rows
of a table.
Candidate Key: It is a minimal collection of unique attributes that can uniquely
identify tuples in a table.
Alternate Key: It is a candidate key that was not chosen as the primary key.
Compound Key: It is a collection of more than one field that can be used to
uniquely identify a specific record.
Composite Key: Collection of more than one column that can uniquely identify
rows in a table.
Surrogate Key: It is an artificial key that aims to uniquely identify each record.
Amongst these, the Primary and Foreign keys are most commonly used.
Example:
Consider two tables, Customers and Orders, that share a common customer_id column.
NATURAL JOIN: It is a special type of inner join based on the fact that the
column names and datatypes are the same on both tables.
Syntax:
Example:
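Using the Customers and Orders tables mentioned above, a natural join can be sketched as:

```sql
-- Joins on all columns with matching names (here, customer_id)
SELECT *
FROM Customers
NATURAL JOIN Orders;
```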
In the above example, we are merging the Customers and Orders table shown above
using a NATURAL JOIN based on the common column customer_id.
RIGHT JOIN: Returns all of the records from the second table, along with any
matching records from the first.
Example:
Let us define an Orders table first,
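A sketch over the illustrative Customers and Orders tables:

```sql
-- All rows from Orders, plus matching Customers rows (NULLs otherwise)
SELECT *
FROM Customers
RIGHT JOIN Orders
ON Customers.customer_id = Orders.customer_id;
```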
LEFT JOIN: Returns all of the records from the first table, along with any
matching records from the second table.
Example:
Consider the below Customer and Orders table,
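A sketch over the illustrative Customers and Orders tables:

```sql
-- All rows from Customers, plus matching Orders rows (NULLs otherwise)
SELECT *
FROM Customers
LEFT JOIN Orders
ON Customers.customer_id = Orders.customer_id;
```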
FULL JOIN: Returns all records from both tables, combining rows where there is a
match and filling NULLs where there is none.
Example:
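A sketch over the illustrative Customers and Orders tables (note that some engines, e.g. MySQL, lack FULL JOIN and emulate it with a UNION of LEFT and RIGHT joins):

```sql
SELECT *
FROM Customers
FULL JOIN Orders
ON Customers.customer_id = Orders.customer_id;
```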
Example:
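A sketch in MySQL syntax (the marks, total and percentage columns are assumptions):

```sql
CREATE TRIGGER trigger1
BEFORE INSERT ON Student    -- fires just before each INSERT
FOR EACH ROW
SET NEW.percentage = (NEW.marks / NEW.total) * 100;
```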
Here, we create a new Trigger called trigger1, just before we perform an INSERT
operation on the Student table, we calculate the percentage of the marks for each
row.
Some common operations that can be performed on triggers are:
DROP: This operation drops an existing trigger from the table.
Syntax:
DROP TRIGGER trigger_name;
SHOW: This displays all the triggers currently present in the table.
Syntax:
SHOW TRIGGERS;
A stored procedure can be executed using:
EXEC procedure_name;
Example:
Consider an example of SQL injection involving two tables — students and library.
Here the hacker is injecting SQL code -
into the database server, where the query is used to JOIN the tables students and
library. By joining the two tables, the result of the query is returned from the
database, through which the hacker gains access to the information he needs,
thereby taking advantage of the system's vulnerability. The flow of the SQL
injection starts from the hacker’s computer and ends at the database system.
Conclusion:
Databases are growing increasingly important in a modern industry where data is
considered the new wealth. Managing large amounts of data, gaining insights from
it, and storing it cost-effectively make database management highly important in
any modern software. To manage any form of database/RDBMS, we need to learn SQL,
which allows us to easily query and manage data from these databases and create
the large, scalable applications of the future that cater to the needs of millions.
2. What is DBMS?
DBMS stands for Database Management System. DBMS is system software responsible
for the creation, retrieval, updation, and management of the database. It
ensures that our data is consistent, organized, and easily accessible by serving
as an interface between the database and its end-users or application software.
4. What is SQL?
SQL stands for Structured Query Language. It is the standard language for relational
database management systems. It is especially useful in handling organized data
comprised of entities (variables) and relations between different entities of the data.
The PRIMARY KEY constraint uniquely identifies each row in a table. It must contain
UNIQUE values and has an implicit NOT NULL constraint.
A table in SQL is strictly restricted to have one and only one primary key, which is
comprised of single or multiple fields (columns).
CREATE TABLE Students ( /* Create table with a single field as primary key */
   ID INT NOT NULL,
   Name VARCHAR(255),
   PRIMARY KEY (ID)
);
CREATE TABLE Students ( /* Create table with multiple fields as primary key */
   ID INT NOT NULL,
   LastName VARCHAR(255),
   FirstName VARCHAR(255) NOT NULL,
   CONSTRAINT PK_Student
   PRIMARY KEY (ID, FirstName)
);
Write a SQL statement to add primary key 't_id' to the table 'teachers'.
Write a SQL statement to add primary key constraint 'pk_a' for table 'table_a'
and fields 'col_b, col_c'.
9. What is a UNIQUE constraint?
A UNIQUE constraint ensures that all values in a column are different. This provides
uniqueness for the column(s) and helps identify each row uniquely. Unlike primary
key, there can be multiple unique constraints defined per table. The code syntax for
UNIQUE is quite similar to that of PRIMARY KEY and can be used interchangeably.
Write a SQL statement to add a FOREIGN KEY 'col_fk' in 'table_y' that references
'col_pk' in 'table_x'.
11. What is a Join? List its different types.
The SQL Join clause is used to combine records (rows) from two or more tables in a
SQL database based on a related column between the two.
(INNER) JOIN: Retrieves records that have matching values in both tables
involved in the join.
SELECT *
FROM Table_A A
JOIN Table_B B
ON A.col = B.col;
SELECT *
FROM Table_A A
INNER JOIN Table_B B
ON A.col = B.col;
LEFT (OUTER) JOIN: Retrieves all the records/rows from the left table and the
matched records/rows from the right table.
SELECT *
FROM Table_A A
LEFT JOIN Table_B B
ON A.col = B.col;
RIGHT (OUTER) JOIN: Retrieves all the records/rows from the right table and the
matched records/rows from the left table.
SELECT *
FROM Table_A A
RIGHT JOIN Table_B B
ON A.col = B.col;
FULL (OUTER) JOIN: Retrieves all the records where there is a match in either
the left or right table.
SELECT *
FROM Table_A A
FULL JOIN Table_B B
ON A.col = B.col;
Write a SQL statement to CROSS JOIN 'table_1' with 'table_2' and fetch 'col_1'
from table_1 & 'col_2' from table_2 respectively. Do not use alias.
Write a SQL statement to perform SELF JOIN for 'Table_X' with alias 'Table_1'
and 'Table_2', on columns 'Col_1' and 'Col_2' respectively.
14. What is an Index? Explain its different types.
A database index is a data structure that provides a quick lookup of data in a column
or columns of a table. It enhances the speed of operations accessing data from a
database table at the cost of additional writes and memory to maintain the index
data structure.
There are different types of indexes that can be created for different purposes:
Unique and Non-Unique Index:
Unique indexes are indexes that help maintain data integrity by ensuring that no two
rows of data in a table have identical key values. Once a unique index has been
defined for a table, uniqueness is enforced whenever keys are added or changed
within the index.
Non-unique indexes, on the other hand, are not used to enforce constraints on the
tables with which they are associated. Instead, non-unique indexes are used solely to
improve query performance by maintaining a sorted order of data values that are
used frequently.
Clustered and Non-Clustered Index:
Clustered indexes are indexes whose order of the rows in the database corresponds
to the order of the rows in the index. This is why only one clustered index can exist in
a given table, whereas, multiple non-clustered indexes can exist in the table.
The only difference between clustered and non-clustered indexes is that the
database manager attempts to keep the data in the database in the same order as
the corresponding keys appear in the clustered index.
Clustering indexes can improve the performance of most query operations because
they provide a linear-access path to data stored in the database.
15. What is the difference between Clustered and Non-clustered
index?
As explained above, the differences can be broken down into three small factors -
Clustered index modifies the way records are stored in a database based on the
indexed column. A non-clustered index creates a separate entity within the table
which references the original table.
Clustered index is used for easy and speedy retrieval of data from the database,
whereas, fetching records from the non-clustered index is relatively slower.
In SQL, a table can have a single clustered index whereas it can have multiple
non-clustered indexes.
Write a SQL query to update the field "status" in table "applications" from 0 to 1.
Write a SQL query to select the field "app_id" in table "applications" where
"app_id" is less than 1000.
Write a SQL query to fetch the field "app_name" from "apps" where "apps.id" is
equal to the above collection of "app_id".
19. What is the SELECT statement?
SELECT operator in SQL is used to select data from a database. The data returned is
stored in a result table, called the result-set.
20. What are some common clauses used with SELECT query in
SQL?
Some common SQL clauses used in conjunction with a SELECT query are as follows:
WHERE clause in SQL is used to filter records that are necessary, based on
specific conditions.
ORDER BY clause in SQL is used to sort the records based on some field(s) in
ascending (ASC) or descending order (DESC).
SELECT *
FROM myDB.students
WHERE graduation_year = 2019
ORDER BY studentID DESC;
GROUP BY clause in SQL is used to group records with identical data and can be
used in conjunction with some aggregation functions to produce summarized
results from the database.
HAVING clause in SQL is used to filter records in combination with the GROUP BY
clause. It is different from WHERE, since the WHERE clause cannot filter
aggregated records.
Write a SQL query to fetch "names" that are present in either table "accounts" or
in table "registry".
Write a SQL query to fetch "names" that are present in "accounts" but not in
table "registry".
Write a SQL query to fetch "names" from table "contacts" that are neither
present in "accounts.name" nor in "registry.name".
22. What is a Cursor? How to use a Cursor?
A database cursor is a control structure that allows for the traversal of records
in a database. Cursors, in addition, facilitate processing after traversal, such
as retrieval, addition, and deletion of database records. They can be viewed as a
pointer to one row in a set of rows.
Working with SQL Cursor:
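The typical lifecycle can be sketched as follows (SQL Server-style syntax; table and column names are illustrative):

```sql
DECLARE db_cursor CURSOR FOR        -- 1. declare the cursor over a query
    SELECT name FROM myDB.students;
OPEN db_cursor;                     -- 2. open it to populate the result set
FETCH NEXT FROM db_cursor;          -- 3. fetch rows one at a time
CLOSE db_cursor;                    -- 4. close the cursor
DEALLOCATE db_cursor;               -- 5. release its resources
```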
An alias is represented explicitly by the AS keyword but in some cases, the same can
be performed without it as well. Nevertheless, using the AS keyword is always a good
practice.
Write an SQL statement to select all from table "Limited" with alias "Ltd".
26. What is a View?
A view in SQL is a virtual table based on the result-set of an SQL statement. A view
contains rows and columns, just like a real table. The fields in a view are fields from
one or more real tables in the database.
Students Table (Unnormalized Form)
Student   Address                   Books Issued                                               Salutation
Ansh      62nd Sector A-10          The Alchemist (Paulo Coelho), Inferno (Dan Brown)          Mr.
Sara      24th Street Park Avenue   Beautiful Bad (Annie Ward), Woman 99 (Greer Macallister)   Mrs.
Ansh      Windsor Street 777        Dracula (Bram Stoker)                                      Mr.
As we can observe, the Books Issued field has more than one value per record, and to
convert it into 1NF, this has to be resolved into separate individual records for each
book issued. Check the following table in 1NF form -
Students Table (1st Normal Form)
Student   Address                   Books Issued                     Salutation
Sara      Amanora Park Town 94      Inception (Christopher Nolan)    Ms.
Ansh      62nd Sector A-10          The Alchemist (Paulo Coelho)     Mr.
Ansh      62nd Sector A-10          Inferno (Dan Brown)              Mr.
Sara      24th Street Park Avenue   Beautiful Bad (Annie Ward)       Mrs.
Sara      24th Street Park Avenue   Woman 99 (Greer Macallister)     Mrs.
Ansh      Windsor Street 777        Dracula (Bram Stoker)            Mr.
Students Table (2nd Normal Form)
Student_ID   Student   Address                   Salutation
1            Sara      Amanora Park Town 94      Ms.
2            Ansh      62nd Sector A-10          Mr.
3            Sara      24th Street Park Avenue   Mrs.
4            Ansh      Windsor Street 777        Mr.
Here, WX is the only candidate key and there is no partial dependency, i.e., any proper
subset of WX doesn’t determine any non-prime attribute in the relation.
Third Normal Form
A relation is said to be in the third normal form, if it satisfies the conditions for the
second normal form and there is no transitive dependency between the non-prime
attributes, i.e., all non-prime attributes are determined only by the candidate keys of
the relation and not by any other non-prime attribute.
Example 1 - Consider the Students Table in the above example. As we can observe,
the Students Table in the 2NF form has a single candidate key Student_ID (primary
key) that can uniquely identify all records in the table. The field Salutation (non-
prime attribute), however, depends on the Student Field rather than the candidate
key. Hence, the table is not in 3NF. To convert it into the 3rd Normal Form, we will
once again partition the tables into two while specifying a new Foreign Key
constraint to identify the salutations for individual records in the Students table. The
Primary Key constraint for the same will be set on the Salutations table to identify
each record uniquely.
Students Table (3rd Normal Form)
Student_ID   Student   Address                   Salutation_ID
1            Sara      Amanora Park Town 94      1
2            Ansh      62nd Sector A-10          2
3            Sara      24th Street Park Avenue   3
4            Ansh      Windsor Street 777        1
Salutations Table (3rd Normal Form)
Salutation_ID Salutation
1 Ms.
2 Mr.
3 Mrs.
For the above relation to exist in 3NF, all possible candidate keys in the above
relation should be {P, RS, QR, T}.
Boyce-Codd Normal Form
A relation is in Boyce-Codd Normal Form if it satisfies the conditions for third
normal form and, for every functional dependency, the left-hand side is a super
key. In other words, a relation in BCNF has only non-trivial functional
dependencies of the form X –> Y such that X is always a super key. For example,
in the above example, Student_ID serves as the sole unique identifier for the
Students Table and Salutation_ID for the Salutations Table, so these tables exist
in BCNF. The same cannot be said for the Books Table, as there can be several
books with common Book Names and the same Student_ID.
TRUNCATE command is used to delete all the rows from the table and free the space
occupied by those rows; the table structure itself remains.
DROP command is used to remove an object from the database. If you drop a table,
all the rows in the table are deleted and the table structure is removed from the
database.
Write a SQL query to remove first 1000 records from table 'Temporary' based on 'id'.
Write a SQL statement to delete the table 'Temporary' while keeping its relations
intact.
31. What is the difference between DROP and TRUNCATE
statements?
If a table is dropped, all things associated with the tables are dropped as well. This
includes - the relationships defined on the table with other tables, the integrity
checks and constraints, access privileges and other grants that the table has. To
create and use the table again in its original form, all these relations, checks,
constraints, privileges and relationships need to be redefined. However, if a table is
truncated, none of the above problems exist and the table retains its original
structure.
Collation refers to a set of rules that determine how data is sorted and compared.
Rules defining the correct character sequence are used to sort the character data. It
incorporates options for specifying case sensitivity, accent marks, kana character
types, and character width. Below are the different types of collation sensitivity:
Case sensitivity: A and a are treated differently.
Accent sensitivity: a and á are treated differently.
Kana sensitivity: Japanese kana characters Hiragana and Katakana are treated
differently.
Width sensitivity: Same character represented in single-byte (half-width) and
double-byte (full-width) are treated differently.
DELIMITER $$
CREATE PROCEDURE FetchAllStudents()
BEGIN
SELECT * FROM myDB.students;
END $$
DELIMITER ;
Fetch students whose first name begins with 'K':
SELECT *
FROM students
WHERE first_name LIKE 'K%';
Fetch students whose first name does not begin with 'K':
SELECT *
FROM students
WHERE first_name NOT LIKE 'K%';
Fetch students whose first name contains 'Q':
SELECT *
FROM students
WHERE first_name LIKE '%Q%';
Fetch students whose first name has 'K' as the third character:
SELECT *
FROM students
WHERE first_name LIKE '__K%';
PostgreSQL was first called Postgres and was developed by a team led by Computer
Science Professor Michael Stonebraker in 1986. It was developed to help developers
build enterprise-level applications by upholding data integrity by making systems
fault-tolerant. PostgreSQL is therefore an enterprise-level, flexible, robust, open-
source, and object-relational DBMS that supports flexible workloads along with
handling concurrent users. It has been consistently supported by the global
developer community. Due to its fault-tolerant nature, PostgreSQL has gained
widespread popularity among developers.
The first step of using PostgreSQL is to create a database. This is done using
the createdb command as shown below:
createdb db_name
After running the above command, if the database creation was successful, then the
below message is shown:
CREATE DATABASE
46. How can we start, restart and stop the PostgreSQL server?
To start the PostgreSQL server, we run:
Starting PostgreSQL: ok
We can also use the statement for removing data from multiple tables all at once by
mentioning the table names separated by comma as shown below:
TRUNCATE TABLE
table_1,
table_2,
table_3;
To get the next number 101 from the sequence, we use the nextval() method as
shown below:
SELECT nextval('serial_num');
We can also use this sequence while inserting new records using the INSERT
command:
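A sketch of such an INSERT (the table and its columns are illustrative):

```sql
INSERT INTO orders (order_id, item)
VALUES (nextval('serial_num'), 'Notebook');  -- order_id receives the next sequence value
```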
If the database has been deleted successfully, then the following message would be
shown:
DROP DATABASE
Isolation Level    Dirty Read    Phantom Read   Non-repeatable Read
Read Uncommitted   Might occur   Might occur    Might occur
Read Committed     Won't occur   Might occur    Might occur
Repeatable Read    Won't occur   Might occur    Won't occur
Serializable       Won't occur   Won't occur    Won't occur
60. What can you tell about WAL (Write Ahead Logging)?
Write Ahead Logging is a feature that increases database reliability by logging
changes before any changes are made to the database itself. This ensures that
enough information is available when a database crash occurs: the log helps
pinpoint the point up to which work had been completed and gives a starting
point from where it was discontinued.
To perform case insensitive matches using a regular expression, we can use POSIX
(~*) expression from pattern matching operators. For example:
'interviewbit' ~* '.*INTervIewBit.*'
Step 2: Execute pg_dump program to take the dump of data to a .tar folder as shown
below:
The database dump will be stored in the sample_data.tar file on the location
specified.
Conclusion:
SQL is a language for the database. It has a vast scope and robust capability of
creating and manipulating a variety of database objects using commands like
CREATE, ALTER, DROP, etc, and also in loading the database objects using commands
like INSERT. It also provides options for Data Manipulation using commands like
DELETE, TRUNCATE and also does effective retrieval of data using cursor commands
like FETCH, SELECT, etc. There are many such commands which provide a large
amount of control to the programmer to interact with the database in an efficient
way without wasting many resources. The popularity of SQL has grown so much that
almost every programmer relies on this to implement their application's storage
functionalities thereby making it an exciting language to learn. Learning this provides
the developer a benefit of understanding the data structures used for storing the
organization's data and giving an additional level of control and in-depth
understanding of the application.
PostgreSQL being an open-source database system having extremely robust and
sophisticated ACID, Indexing, and Transaction supports has found widespread
popularity among the developer community.
References and Resources:
PostgreSQL Download
PostgreSQL Tutorial
SQL Guide
SQL Server Interview Questions
MySQL Interview Questions
DBMS Interview Questions
PL SQL Interview Questions
MongoDB Interview Questions
Database Testing Interview Questions
SQL Vs MySQL
PostgreSQL vs MySQL
Difference Between SQL and PLSQL
SQL Vs NoSQL
SQL IDE
SQL Projects
MySQL Commands
What is Database?
Database is a collection of interrelated data.
What is DBMS?
DBMS (Database Management System) is software used to create, manage, and organize
databases.
What is RDBMS?
● RDBMS (Relational Database Management System) - is a DBMS based on the
concept of tables (also called relations).
● Data is organized into tables (also known as relations) with rows (records) and
columns (attributes).
● Eg - MySQL, PostgreSQL, Oracle etc.
What is SQL?
SQL is Structured Query Language - used to store, manipulate and retrieve data from
RDBMS.
(It is not a database, it is a language used to interact with database)
*Note - SQL keywords are NOT case sensitive. Eg: select is the same as SELECT in SQL.
*Note - CHAR is for fixed length & VARCHAR is for variable length strings. Generally,
VARCHAR is better as it only occupies necessary memory & works more efficiently.
We can also use UNSIGNED with datatypes when we only have positive values to add.
Eg - UNSIGNED INT
2. DDL (Data Definition Language) : Used to create, alter, and delete database objects
like tables, indexes, etc. (CREATE, DROP, ALTER, RENAME, TRUNCATE)
4. DCL (Data Control Language): Used to grant & revoke permissions. (GRANT,
REVOKE)
DDL commands enable you to create, modify, and delete database objects like tables,
indexes, constraints, and more.
● CREATE TABLE:
● ALTER TABLE:
● DROP TABLE:
○ Used to delete an existing table along with its data and structure.
○ Example: DROP TABLE employees;
● CREATE INDEX:
● DROP INDEX:
● CREATE CONSTRAINT:
● DROP CONSTRAINT:
● TRUNCATE TABLE:
○ Used to delete the data inside a table, but not the table itself.
○ Syntax – TRUNCATE TABLE table_name
DQL (Data Query Language) is a subset of SQL focused on retrieving data from databases.
The SELECT statement is the foundation of DQL and allows us to extract specific columns
from a table.
● SELECT:
Here, column1, column2, ... are the field names of the table.
If you want to select all the fields available in the table, use the following syntax:
SELECT * FROM table_name;
● WHERE:
= : Equal
> : Greater than
< : Less than
>= : Greater than or equal
<= : Less than or equal
<> : Not equal.
- The WHERE clause can be combined with AND, OR, and NOT operators.
- The AND and OR operators are used to filter records based on more than one
condition:
- The AND operator displays a record if all the conditions separated by AND are
TRUE.
Syntax:
SELECT column1, column2, ... FROM table_name WHERE condition1 AND condition2 AND
condition3 ...;
Example:
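As a runnable sketch using Python's built-in sqlite3 module (the employees table and its sample rows are assumptions for illustration), a WHERE clause with AND returns only the rows that satisfy every condition:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Alice", "Sales", 5000), ("Bob", "Sales", 3000), ("Carol", "HR", 4000)])

# AND: a row is returned only when every condition is TRUE.
rows = conn.execute(
    "SELECT name FROM employees WHERE department = 'Sales' AND salary >= 4000"
).fetchall()
print(rows)  # [('Alice',)]
```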
● DISTINCT:
● LIKE:
The LIKE operator is used in a WHERE clause to search for a specified pattern in a column.
There are two wildcards often used in conjunction with the LIKE operator:
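A quick sketch of both wildcards, again using sqlite3 with assumed sample data: % matches any sequence of characters (including none), while _ matches exactly one character.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?)", [("Alice",), ("Alan",), ("Bob",)])

# '%' matches any sequence of characters.
prefix_rows = conn.execute("SELECT name FROM customers WHERE name LIKE 'Al%'").fetchall()
print(prefix_rows)  # [('Alice',), ('Alan',)]

# '_' matches exactly one character.
one_char_rows = conn.execute("SELECT name FROM customers WHERE name LIKE 'B_b'").fetchall()
print(one_char_rows)  # [('Bob',)]
```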
● IN:
● BETWEEN:
● IS NULL:
● AS:
● ORDER BY
The ORDER BY clause allows you to sort the result set of a query based on one or more
columns.
Basic Syntax:
- The ORDER BY clause is used after the SELECT statement to sort query results.
- Syntax: SELECT column1, column2 FROM table_name ORDER BY column1
[ASC|DESC];
- You can sort by multiple columns by listing them sequentially in the ORDER BY
clause.
- Rows are first sorted based on the first column, and for rows with equal values,
subsequent columns are used for further sorting.
- Example: SELECT first_name, last_name FROM employees ORDER BY last_name,
first_name;
Sorting NULL Values:
- By default, NULL values are considered the smallest in ascending order and the
largest in descending order.
- You can control the sorting behaviour of NULL values using the NULLS FIRST or
NULLS LAST options.
- Example: SELECT column_name FROM table_name ORDER BY column_name
NULLS LAST;
Sorting by Position:
- Instead of specifying column names, you can sort by column positions in the ORDER
BY clause.
- Example: SELECT product_name, price FROM products ORDER BY 2 DESC, 1
ASC;
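The positional-sort example above can be run end-to-end with sqlite3 (sample products are assumptions): the second selected column (price) sorts descending, and ties are broken by the first column (product_name) ascending.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (product_name TEXT, price INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("Mouse", 20), ("Keyboard", 50), ("Webcam", 50)])

# ORDER BY 2 DESC, 1 ASC: 2nd column (price) descending,
# ties broken by 1st column (product_name) ascending.
rows = conn.execute(
    "SELECT product_name, price FROM products ORDER BY 2 DESC, 1 ASC"
).fetchall()
print(rows)  # [('Keyboard', 50), ('Webcam', 50), ('Mouse', 20)]
```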
● GROUP BY
The GROUP BY clause in SQL is used to group rows from a table based on one or more
columns.
Syntax:
- The GROUP BY clause follows the SELECT statement and is used to group rows
based on specified columns.
- Aggregation Functions:
○ Aggregation functions (e.g., COUNT, SUM, AVG, MAX, MIN) are often used
with GROUP BY to calculate values for each group.
○ Example: SELECT department, AVG(salary) FROM employees GROUP BY
department;
- Grouping by Multiple Columns:
○ You can group by multiple columns by listing them in the GROUP BY clause.
○ This creates a hierarchical grouping based on the specified columns.
○ Example: SELECT department, gender, AVG(salary) FROM employees
GROUP BY department, gender;
- HAVING Clause:
○ The HAVING clause filters groups after aggregation, much as WHERE filters
individual rows before grouping.
○ Example: SELECT department, COUNT(*) FROM employees GROUP BY
department HAVING COUNT(*) > 5;
- Combining with ORDER BY:
○ You can use both GROUP BY and ORDER BY in the same query to control
the order of grouped results.
○ Example: SELECT department, COUNT(*) FROM employees GROUP BY
department ORDER BY COUNT(*) DESC;
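A runnable sketch of grouping with an aggregate, using sqlite3 (the employees rows below are assumptions): each department collapses into one row carrying its average salary.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Alice", "Sales", 5000), ("Bob", "Sales", 3000),
                  ("Carol", "HR", 4000), ("Dave", "HR", 6000), ("Eve", "IT", 7000)])

# One output row per department, with AVG computed over each group.
rows = conn.execute(
    "SELECT department, AVG(salary) FROM employees "
    "GROUP BY department ORDER BY department"
).fetchall()
print(rows)  # [('HR', 5000.0), ('IT', 7000.0), ('Sales', 4000.0)]
```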
● AGGREGATE FUNCTIONS
These are used to perform calculations on groups of rows or entire result sets. They provide
insights into data by summarising and processing information.
- COUNT():
Counts the number of rows in a group or result set.
- SUM():
Calculates the sum of numeric values in a group or result set.
- AVG():
Computes the average of numeric values in a group or result set.
- MAX():
Finds the maximum value in a group or result set.
- MIN():
Retrieves the minimum value in a group or result set.
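Without a GROUP BY, each aggregate collapses the whole result set into a single value; a small sqlite3 sketch (sample prices are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price INTEGER)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("Laptop", 1000), ("Smartphone", 500), ("Headphones", 50)])

# COUNT, SUM, MAX and MIN each return one value for the whole table.
row = conn.execute(
    "SELECT COUNT(*), SUM(price), MAX(price), MIN(price) FROM products"
).fetchone()
print(row)  # (3, 1550, 1000, 50)
```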
Data Manipulation Language (DML) in SQL encompasses commands that manipulate data
within a database. DML allows you to insert, update, and delete records, ensuring the
accuracy and currency of your data.
● INSERT:
● UPDATE:
● DELETE:
DCL commands are used to control who can access the data, modify the data, or perform
administrative tasks within a database.
DCL is an important aspect of database security, ensuring that data remains protected and
only authorised users have the necessary privileges.
There are two main DCL commands in SQL: GRANT and REVOKE.
1. GRANT:
The GRANT command is used to provide specific privileges or permissions to users or roles.
Privileges can include the ability to perform various actions on tables, views, procedures,
and other database objects.
Syntax:
GRANT privilege_type
ON object_name
TO user_or_role;
In this syntax:
2. REVOKE:
The REVOKE command is used to remove or revoke specific privileges or permissions that
have been previously granted to users or roles.
Syntax:
REVOKE privilege_type
ON object_name
FROM user_or_role;
In this syntax:
Example: Revoking the SELECT privilege on the "Employees" table from the "Analyst" user:
DCL plays a crucial role in ensuring the security and integrity of a database system.
By controlling access and permissions, DCL helps prevent unauthorised users from
tampering with or accessing sensitive data. Proper use of GRANT and REVOKE commands
ensures that only users who require specific privileges can perform certain actions on
database objects.
Transaction Control Language (TCL) deals with the management of transactions within a
database.
TCL commands are used to control the initiation, execution, and termination of transactions,
which are sequences of one or more SQL statements that are executed as a single unit of
work.
Transactions ensure data consistency, integrity, and reliability in a database by grouping
related operations together and either committing or rolling back changes based on the
success or failure of those operations.
There are three main TCL commands in SQL: COMMIT, ROLLBACK, and SAVEPOINT.
1. COMMIT:
The COMMIT command is used to permanently save the changes made during a
transaction.
It makes all the changes applied to the database since the last COMMIT or ROLLBACK
command permanent.
Once a COMMIT is executed, the transaction is considered successful, and the changes are
made permanent.
UPDATE Employees
SET Salary = Salary * 1.10
WHERE Department = 'Sales';
COMMIT;
2. ROLLBACK:
ROLLBACK is typically used when an error occurs during the execution of a transaction,
ensuring that the database remains in a consistent state.
BEGIN;
UPDATE Inventory
SET Quantity = Quantity - 10
WHERE ProductID = 101;
ROLLBACK;
3. SAVEPOINT:
The SAVEPOINT command creates a named point within a transaction, allowing you to set a
point to which you can later ROLLBACK if needed.
SAVEPOINTs are useful when you want to undo part of a transaction while preserving other
changes.
BEGIN;
UPDATE Accounts
SET Balance = Balance - 100
WHERE AccountID = 123;
SAVEPOINT before_withdrawal;
UPDATE Accounts
SET Balance = Balance + 100
WHERE AccountID = 456;
ROLLBACK TO before_withdrawal;
COMMIT;
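The SAVEPOINT example above can be run end-to-end with Python's sqlite3, since SQLite also supports SAVEPOINT and ROLLBACK TO (the account rows are assumptions): only the change made after the savepoint is undone, and the earlier withdrawal survives the COMMIT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(123, 500), (456, 200)])
conn.commit()

cur = conn.cursor()
cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 123")
cur.execute("SAVEPOINT before_withdrawal")    # named point inside the transaction
cur.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 456")
cur.execute("ROLLBACK TO before_withdrawal")  # undo only the second UPDATE
conn.commit()                                 # the first UPDATE is made permanent

balances = conn.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
print(balances)  # [(123, 400), (456, 200)]
```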
Transaction Control Language (TCL) commands are vital for managing the integrity and
consistency of a database's data.
They allow you to group related changes into transactions, and in the event of errors, either
commit those changes or roll them back to maintain data integrity.
TCL commands are used in combination with Data Manipulation Language (DML) and other
SQL commands to ensure that the database remains in a reliable state despite unforeseen
errors or issues.
JOINS
In a DBMS, a join is an operation that combines rows from two or more tables based on a
related column between them.
Joins are used to retrieve data from multiple tables by linking them together using a common
key or column.
Types of Joins:
1. Inner Join
2. Outer Join
3. Cross Join
4. Self Join
1) Inner Join
An inner join combines data from two or more tables based on a specified condition, known
as the join condition.
The result of an inner join includes only the rows where the join condition is met in all
participating tables.
It essentially filters out non-matching rows and returns only the rows that have matching
values in both tables.
Syntax:
SELECT columns
FROM table1
INNER JOIN table2
ON table1.column = table2.column;
Here:
● columns refers to the specific columns you want to retrieve from the tables.
● table1 and table2 are the names of the tables you are joining.
● column is the common column used to match rows between the tables.
● The ON clause specifies the join condition, where you define how the tables are
related.
Customers Table:
CustomerID CustomerName
1 Alice
2 Bob
3 Carol
Orders Table:
OrderID CustomerID Product
101 1 Laptop
102 3 Smartphone
103 2 Headphones
Result:
CustomerName Product
Alice Laptop
Bob Headphones
Carol Smartphone
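The Customers/Orders example above runs as-is in sqlite3; only rows whose CustomerID matches on both sides survive the inner join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER, CustomerName TEXT);
CREATE TABLE Orders (OrderID INTEGER, CustomerID INTEGER, Product TEXT);
INSERT INTO Customers VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');
INSERT INTO Orders VALUES (101, 1, 'Laptop'), (102, 3, 'Smartphone'), (103, 2, 'Headphones');
""")

rows = conn.execute("""
    SELECT c.CustomerName, o.Product
    FROM Customers c
    INNER JOIN Orders o ON c.CustomerID = o.CustomerID
    ORDER BY o.OrderID
""").fetchall()
print(rows)  # [('Alice', 'Laptop'), ('Carol', 'Smartphone'), ('Bob', 'Headphones')]
```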
2) Outer Join
Outer joins combine data from two or more tables based on a specified condition, just like
inner joins. However, unlike inner joins, outer joins also include rows that do not have
matching values in both tables.
Outer joins are particularly useful when you want to include data from one table even if there
is no corresponding match in the other table.
Types:
There are three types of outer joins: left outer join, right outer join, and full outer join.
A left outer join returns all the rows from the left table and the matching rows from the right
table.
If there is no match in the right table, the result will still include the left table's row with NULL
values in the right table's columns.
Example:
Result:
CustomerName Product
Alice Laptop
Bob Headphones
Carol Smartphone
NULL Monitor
In this example, the left outer join includes all rows from the Customers table.
Since there is no matching customer for the order with OrderID 103 (Monitor), the result
includes a row with NULL values in the CustomerName column.
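A minimal left-outer-join sketch in sqlite3 (smaller sample data than above, chosen so one customer has no order): the unmatched left-side row still appears, with NULL (None in Python) in the right table's column.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (CustomerID INTEGER, CustomerName TEXT);
CREATE TABLE Orders (OrderID INTEGER, CustomerID INTEGER, Product TEXT);
INSERT INTO Customers VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO Orders VALUES (101, 1, 'Laptop');
""")

# Bob has no orders, so the right side's columns come back as NULL (None).
rows = conn.execute("""
    SELECT c.CustomerName, o.Product
    FROM Customers c
    LEFT OUTER JOIN Orders o ON c.CustomerID = o.CustomerID
    ORDER BY c.CustomerID
""").fetchall()
print(rows)  # [('Alice', 'Laptop'), ('Bob', None)]
```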
A right outer join is similar to a left outer join, but it returns all rows from the right table and
the matching rows from the left table.
If there is no match in the left table, the result will still include the right table's row with NULL
values in the left table's columns.
Result:
CustomerName Product
Alice Laptop
Carol Smartphone
Bob Headphones
NULL Keyboard
Here, the right outer join includes all rows from the Orders table. Since there is no matching
order for the customer with CustomerID 4, the result includes a row with NULL values in the
CustomerName column.
A full outer join returns all rows from both the left and right tables, including matches and
non-matches.
If there's no match, NULL values appear in columns from the table where there's no
corresponding value.
Result:
CustomerName Product
Alice Laptop
Bob Headphones
Carol Smartphone
NULL Monitor
NULL Keyboard
In this full outer join example, all rows from both tables are included in the result. Both
non-matching rows from the Customers and Orders tables are represented with NULL
values.
3) Cross Join
A cross join, also known as a Cartesian product, is a type of join operation in a Database
Management System (DBMS) that combines every row from one table with every row from
another table.
Unlike other join types, a cross join does not require a specific condition to match rows
between the tables. Instead, it generates a result set that contains all possible combinations
of rows from both tables.
Cross joins can lead to a large result set, especially when the participating tables have many
rows.
Syntax:
SELECT columns
FROM table1
CROSS JOIN table2;
In this syntax:
● columns refers to the specific columns you want to retrieve from the cross-joined
tables.
● table1 and table2 are the names of the tables you want to combine using a cross
join.
Students Table:
StudentID StudentName
1 Alice
2 Bob
Courses Table:
CourseID CourseName
101 Maths
102 Science
Result:
StudentName CourseName
Alice Maths
Alice Science
Bob Maths
Bob Science
In this example, the cross join between the Students and Courses tables generates all
possible combinations of rows from both tables. As a result, each student is paired with each
course, leading to a total of four rows in the result set.
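The Students/Courses cross join above runs directly in sqlite3, producing the full Cartesian product (2 × 2 = 4 rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Students (StudentID INTEGER, StudentName TEXT);
CREATE TABLE Courses (CourseID INTEGER, CourseName TEXT);
INSERT INTO Students VALUES (1, 'Alice'), (2, 'Bob');
INSERT INTO Courses VALUES (101, 'Maths'), (102, 'Science');
""")

# No join condition: every student is paired with every course.
rows = conn.execute("""
    SELECT s.StudentName, c.CourseName
    FROM Students s CROSS JOIN Courses c
    ORDER BY s.StudentID, c.CourseID
""").fetchall()
print(rows)  # [('Alice', 'Maths'), ('Alice', 'Science'), ('Bob', 'Maths'), ('Bob', 'Science')]
```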
4) Self Join
This technique is useful when a table contains hierarchical or related data and you need to
compare or analyse rows within the same table.
Self joins are commonly used to find relationships, hierarchies, or patterns within a single
table.
In a self join, you treat the table as if it were two separate tables, referring to them with
different aliases.
Syntax:
SELECT columns
FROM table1 AS alias1
JOIN table1 AS alias2 ON alias1.column = alias2.column;
In this syntax:
● columns refers to the specific columns you want to retrieve from the self-joined table.
● table1 is the name of the table you're joining with itself.
● alias1 and alias2 are aliases you assign to the table instances for differentiation.
● column is the column you use as the join condition to link rows from the same table.
Example: Consider an Employees table that contains information about employees and their
managers.
Employees Table:
EmployeeID EmployeeName ManagerID
1 Alice 3
2 Bob 3
3 Carol NULL
4 David 1
Result:
Employee Manager
Alice Carol
Bob Carol
David Alice
In this example, the self join is performed on the Employees table to find the relationship
between employees and their managers. The join condition connects the ManagerID column
in the e1 alias (representing employees) with the EmployeeID column in the e2 alias
(representing managers).
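The self join on Employees runs as-is in sqlite3; the e1/e2 aliases let the same table play both the employee and the manager role (Carol, who has no manager, is filtered out by the join condition):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employees (EmployeeID INTEGER, EmployeeName TEXT, ManagerID INTEGER);
INSERT INTO Employees VALUES (1, 'Alice', 3), (2, 'Bob', 3), (3, 'Carol', NULL), (4, 'David', 1);
""")

# e1 plays the employee role, e2 the manager role, over the same table.
rows = conn.execute("""
    SELECT e1.EmployeeName AS Employee, e2.EmployeeName AS Manager
    FROM Employees e1
    JOIN Employees e2 ON e1.ManagerID = e2.EmployeeID
    ORDER BY e1.EmployeeID
""").fetchall()
print(rows)  # [('Alice', 'Carol'), ('Bob', 'Carol'), ('David', 'Alice')]
```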
SET OPERATIONS
Set operations in SQL are used to combine or manipulate the result sets of multiple SELECT
queries.
They allow you to perform operations similar to those in set theory, such as union,
intersection, and difference, on the data retrieved from different tables or queries.
Set operations provide powerful tools for managing and manipulating data, enabling you to
analyse and combine information in various ways.
● UNION
● INTERSECT
● EXCEPT (or MINUS)
● UNION ALL
1. UNION:
The UNION operator combines the result sets of two or more SELECT queries into a single
result set.
It removes duplicates by default, meaning that if there are identical rows in the result sets,
only one instance of each row will appear in the final result.
Example:
CustomerID CustomerName
1 Alice
2 Bob
Suppliers Table:
SupplierID SupplierName
101 SupplierA
102 SupplierB
UNION Query:
Result:
CustomerName
Alice
Bob
SupplierA
SupplierB
2. INTERSECT:
The INTERSECT operator returns the common rows that exist in the result sets of two or
more SELECT queries.
Result:
CustomerName
In this example, there are no common names between customers and suppliers, so the
result is an empty set.
3. EXCEPT (or MINUS):
The EXCEPT operator (also known as MINUS in some databases) returns the distinct rows
that are present in the result set of the first SELECT query but not in the result set of the
second SELECT query.
Result:
CustomerName
Alice
Bob
In this example, the names "Alice" and "Bob" are customers but not suppliers, so they
appear in the result set.
4. UNION ALL:
The UNION ALL operator performs the same function as the UNION operator but does not
remove duplicates from the result set. It simply concatenates all rows from the different
result sets.
Example: Using the same tables as before.
Result:
CustomerName
Alice
Bob
SupplierA
SupplierB
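All four set operators can be compared in one sqlite3 sketch; unlike the tables above, the sample rows here deliberately overlap on 'Bob' so that each operator's behaviour is visible (results are sorted for a deterministic display):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customers (CustomerName TEXT);
CREATE TABLE Suppliers (SupplierName TEXT);
INSERT INTO Customers VALUES ('Alice'), ('Bob');
INSERT INTO Suppliers VALUES ('SupplierA'), ('Bob');
""")

base = "SELECT CustomerName FROM Customers {op} SELECT SupplierName FROM Suppliers"
union_rows     = sorted(conn.execute(base.format(op="UNION")).fetchall())
intersect_rows = sorted(conn.execute(base.format(op="INTERSECT")).fetchall())
except_rows    = sorted(conn.execute(base.format(op="EXCEPT")).fetchall())
union_all_rows = sorted(conn.execute(base.format(op="UNION ALL")).fetchall())

print(union_rows)      # [('Alice',), ('Bob',), ('SupplierA',)] -- duplicates removed
print(intersect_rows)  # [('Bob',)]                             -- common to both
print(except_rows)     # [('Alice',)]                           -- in Customers only
print(union_all_rows)  # [('Alice',), ('Bob',), ('Bob',), ('SupplierA',)]
```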
Set Operations vs Joins:
- Usage Scenarios: set operations are used for combining and analysing related data
from multiple queries; joins combine data from different tables based on join
conditions.
- Result Set Structure: with set operations, result sets may have different column
names, but data types and column counts must match; with joins, result sets can
have different column names, data types, and counts.
SUB QUERIES
Subqueries, also known as nested queries or inner queries, allow you to use the result of
one query (the inner query) as the input for another query (the outer query).
Subqueries are often used to retrieve data that will be used for filtering, comparison, or
calculation within the context of a larger query.
They are a way to break down complex tasks into smaller, manageable steps.
Syntax:
SELECT columns
FROM table
WHERE column OPERATOR (SELECT column FROM table WHERE condition);
In this syntax:
● columns refers to the specific columns you want to retrieve from the outer query.
● table is the name of the table you're querying.
● column is the column you're applying the operator to in the outer query.
● OPERATOR is a comparison operator such as =, >, <, IN, NOT IN, etc.
● (SELECT column FROM table WHERE condition) is the subquery that provides the
input for the comparison.
Products Table:
ProductID ProductName Price
1 Laptop 1000
2 Smartphone 500
3 Headphones 50
Orders Table:
OrderID ProductID Quantity
101 1 2
102 3 1
For Example: Retrieve the product names and quantities for orders with a total cost greater
than the average price of all products.
Result:
ProductName Quantity
Laptop 2
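That example runs end-to-end in sqlite3: the inner query computes the average product price, and the outer query keeps only orders whose total cost (price × quantity) exceeds it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (ProductID INTEGER, ProductName TEXT, Price INTEGER);
CREATE TABLE Orders (OrderID INTEGER, ProductID INTEGER, Quantity INTEGER);
INSERT INTO Products VALUES (1, 'Laptop', 1000), (2, 'Smartphone', 500), (3, 'Headphones', 50);
INSERT INTO Orders VALUES (101, 1, 2), (102, 3, 1);
""")

# The subquery supplies the average price (about 516.67) to the outer WHERE.
rows = conn.execute("""
    SELECT p.ProductName, o.Quantity
    FROM Orders o JOIN Products p ON o.ProductID = p.ProductID
    WHERE p.Price * o.Quantity > (SELECT AVG(Price) FROM Products)
""").fetchall()
print(rows)  # [('Laptop', 2)]
```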
Subqueries vs Joins:
- Combining Rows: subqueries are not used for combining rows; they are used to
filter or evaluate data. Joins combine rows from different tables based on
specified join conditions.
Note that SQL is not case sensitive. However, it is a good practice to write the SQL keywords in uppercase.
Command Action
\c Cancel input
EXIT (Ctrl+C) Exit the client
If we want to add values for all the columns of the table, we do not need to specify
the column names in the SQL query. However, the order of the values should be in
the same order as the columns in the table. The INSERT INTO syntax would be as
follows:
In MySQL, there are different index types, such as a regular INDEX, a PRIMARY KEY, or
a FULLTEXT index. You can achieve fast searches with the help of an index. Indexes
speed up performance by either ordering the data on disk so it's quicker to find your
result or by telling the SQL engine where to find your data.
Example: Adding indexes to the history table:
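The history table's columns aren't shown above, so the sketch below assumes a plausible shape; it uses SQLite (via Python's sqlite3) to show that once a regular index exists, the query planner reports using it for a lookup on the indexed column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed shape for the history table -- the original columns aren't shown.
conn.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, author TEXT, title TEXT)")

# A regular index on author speeds up queries filtering on that column.
conn.execute("CREATE INDEX author_index ON history(author)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM history WHERE author = 'Knuth'"
).fetchall()
print(plan)  # the plan text mentions: USING INDEX author_index
```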
Example: To select the records with an Order Date of "2018-11-11" from a table:
CREATE
[OR REPLACE]
[ALGORITHM = {MERGE | TEMPTABLE | UNDEFINED }]
[DEFINER = { user | CURRENT_USER }]
[SQL SECURITY { DEFINER | INVOKER }]
VIEW view_name [(column_list)]
AS select_statement
[WITH [CASCADED | LOCAL] CHECK OPTION]
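The ALGORITHM/DEFINER/SQL SECURITY clauses above are MySQL-specific, but the core idea of a view — a stored SELECT that can be queried like a table — can be sketched in sqlite3 (sample table and rows are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Alice", "Sales", 5000), ("Bob", "HR", 4000)])

# A view is a named, stored SELECT; querying it re-runs the SELECT.
conn.execute("""
    CREATE VIEW sales_staff AS
    SELECT name, salary FROM employees WHERE department = 'Sales'
""")
rows = conn.execute("SELECT * FROM sales_staff").fetchall()
print(rows)  # [('Alice', 5000)]
```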
A trigger is a task that executes in response to some predefined database event, such
as after a new row is added to a particular table. Specifically, this event involves
inserting, modifying, or deleting table data, and the task can occur either prior to or
immediately following any such event.
Triggers have many purposes, including:
Audit Trails
Validation
Referential integrity enforcement
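A minimal audit-trail trigger can be sketched in sqlite3 (the orders/audit_log tables are assumptions): the trigger fires automatically after each INSERT into orders and records the event.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER);
CREATE TABLE audit_log (order_id INTEGER, note TEXT);

-- Fires immediately after each new row is added to orders.
CREATE TRIGGER order_audit AFTER INSERT ON orders
BEGIN
    INSERT INTO audit_log VALUES (NEW.id, 'order created');
END;
""")

conn.execute("INSERT INTO orders VALUES (1, 250)")
log = conn.execute("SELECT * FROM audit_log").fetchall()
print(log)  # [(1, 'order created')]
```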
32. Conclusion
Several free or low-cost database management systems are available from which to
choose, such as MySQL, PostgreSQL, or SQLite.
When you compare MySQL with other database systems, think about what’s most
important to you. Performance, features (such as SQL conformance or extensions),
support, licensing conditions, and price all are factors to take into account.
MySQL is one of the best RDBMS being used for developing various web-based
software applications.
MySQL is offered under two different editions: the open-source MySQL Community
Server and the proprietary Enterprise Server.
Given these considerations, MySQL has many attractive qualities:
Speed
Ease of use
Query language support
Capability
Connectivity and security
Portability
Availability and cost
Open distribution and source code
Few MySQL References:
https://www.mysql.com
https://learning.oreilly.com/library/view/learning-mysql/0596008643/
18. Explain the difference between a 2-tier and 3-tier architecture in a DBMS.
RDBMS stands for Relational Database Management System and was introduced in
the 1970s to access and store data more efficiently than DBMS. RDBMS stores data in
the form of tables as compared to DBMS which stores data as files. Storing data as
rows and columns makes it easier to locate specific values in the database and makes
it more efficient as compared to DBMS.
Examples of popular RDBMS systems are MySQL, Oracle DB, etc.
Atomicity: This property reflects the concept of either executing the whole
query or executing nothing at all, which implies that if an update occurs in a
database then that update should either be reflected in the whole database or
should not be reflected at all.
Consistency: This property ensures that the data remains consistent before and
after a transaction in a database.
Isolation: This property ensures that concurrent transactions execute
independently of one another, so the intermediate state of one transaction is
not visible to the others.
Durability: This property ensures that the data is not lost in cases of a system
failure or restart and is present in the same state as it was before the system
failure or restart.
The process of collecting, extracting, transforming, and loading data from multiple
sources and storing them into one database is known as data warehousing. A data
warehouse can be considered as a central repository where data flows from
transactional systems and other relational databases and is used for data analytics. A
data warehouse comprises a wide variety of organization’s historical data that
supports the decision-making process in an organization.
Physical Level: it is the lowest level and is managed by DBMS. This level
consists of data storage descriptions and the details of this level are typically
hidden from system admins, developers, and users.
Conceptual or Logical level: it is the level on which developers and system
admins work and it determines what data is stored in the database and what is
the relationship between the data points.
External or View level: it is the level that describes only part of the database
and hides the details of the table schema and its physical storage from the users.
The result of a query is an example of View level data abstraction. A view is a
virtual table created by selecting fields from one or more tables present in the
database.
It deletes only the rows which are specified by the WHERE clause.
It can be rolled back if required.
It maintains a log to lock the row of the table before deleting it and hence it’s
slow.
TRUNCATE command: this command is needed to remove complete data from a
table in a database. It is like a DELETE command which has no WHERE clause.
It removes complete data from a table in a database.
It cannot be rolled back once executed.
It doesn't maintain a log and deletes the whole table's data at once and hence it's fast.
Primary Key: The primary key defines a set of attributes that are used to
uniquely identify every tuple. In the below example studentId and firstName are
candidate keys and any one of them can be chosen as a Primary Key. In the given
example studentId is chosen as the primary key for the student table.
Unique Key: The unique key is very similar to the primary key except that
primary keys don't allow NULL values in the column but unique keys allow them.
So essentially a unique key behaves like a primary key that also permits NULL values.
Alternate Key: All the candidate keys which are not chosen as primary keys are
considered as alternate Keys. In the below example, firstname and lastname are
alternate keys in the database.
Foreign Key: The foreign key defines an attribute that can only take the values
present in one table common to the attribute present in another table. In the
below example courseId from the Student table is a foreign key to the Course
table, as both, the tables contain courseId as one of their attributes.
Composite Key: A composite key refers to a combination of two or more
columns that can uniquely identify each tuple in a table. In the below example
the studentId and firstname can be grouped to uniquely identify every tuple in
the table.
37. Can you tell something about Table Per Class Strategy.
38. Can you tell something about Named SQL Query
39. What are the benefits of NamedQuery?
5. What is a SessionFactory?
SessionFactory provides an instance of Session. It is a factory class that gives the
Session objects based on the configuration parameters in order to establish the
connection to the database.
As a good practice, the application generally has a single instance of SessionFactory.
The internal state of a SessionFactory which includes metadata about ORM is
immutable, i.e once the instance is created, it cannot be changed.
This also provides the facility to get information like statistics and metadata related
to a class, query executions, etc. It also holds second-level cache data if enabled.
Hibernate Interview Questions
7. Can you explain the concept of lazy loading in Hibernate?
Lazy loading is mainly used for improving the application performance by helping to
load the child objects on demand.
It is to be noted that, since Hibernate 3 version, this feature has been enabled by
default. This signifies that child objects are not loaded until the parent gets loaded.
If an entity or object is loaded by calling the get() method, then Hibernate first
checks the first-level cache; if it doesn't find the object there, it goes to the second-level
cache, if configured. If the object is still not found, it finally goes to the database
and returns the object; if there is no corresponding row in the table, it returns
null.
<hibernate-mapping>
<!-- What class is mapped to what database table-->
<class name = "InterviewBitEmployee" table = "InterviewBitEmployee">
</class>
</hibernate-mapping>
package com.dev.interviewbit.model;
import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.OneToOne;
import javax.persistence.Table;
import org.hibernate.annotations.Cascade;
@Entity
@Table(name = "InterviewBitEmployee")
@Access(value=AccessType.FIELD)
public class InterviewBitEmployee {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name = "employee_id")
private long id;
@Column(name = "full_name")
private String fullName;
@Column(name = "email")
private String email;
@OneToOne(mappedBy = "employee")
@Cascade(value = org.hibernate.annotations.CascadeType.ALL)
private Address address;
Java Application
Hibernate framework - Configuration and Mapping Files
Internal API -
JDBC (Java Database Connectivity)
JTA (Java Transaction API)
JNDI (Java Naming Directory Interface).
Database - MySQL, PostGreSQL, Oracle, etc
Hibernate Architecture
SessionFactory: This provides a factory method to get session objects and clients
of ConnectionProvider. It holds a second-level cache (optional) of data.
Session: This is a short-lived object that acts as an interface between the java
application objects and database data.
The session can be used to generate transaction, query, and criteria objects.
It also has a mandatory first-level cache of data.
Transaction: This object specifies the atomic unit of work and has methods
useful for transaction management. This is optional.
ConnectionProvider: This is a factory of JDBC connection objects and it provides
an abstraction to the application from the DriverManager. This is optional.
TransactionFactory: This is a factory of Transaction objects. It is optional.
Hibernate Objects
getCurrentSession() vs openSession():
getCurrentSession() returns the session bound to the context, whereas
openSession() always opens a new session.
In a single-threaded environment, getCurrentSession() is faster than
openSession().
Apart from these two methods, there is another method openStatelessSession() and
this method returns a stateless session object.
Both the methods save records to the table in the database in case there are no
records with the primary key in the table. However, the main differences between
these two are listed below:
save() vs saveOrUpdate():
save() generates a new identifier and always INSERTs the record into the
database.
saveOrUpdate() can either INSERT or UPDATE, based upon the existence of the
record.
These are the methods to get data from the database. The primary differences
between get and load in Hibernate are given below:
get() vs load():
To retrieve objects whose property has a value equal to the restriction, we use
the Restrictions.eq() method. For example, to fetch all records with name
'Hibernate':
To get objects whose property has the value “not equal to” the restriction, we
use Restrictions.ne() method. For example, to fetch all the records whose
employee’s name is not Hibernate:
Similarly, it also has other methods like isNull(), isNotNull(), gt(), ge(), lt(), le() etc. for
adding more varieties of restrictions. It has to be noted that from Hibernate 5 onwards,
the functions returning an object of type Criteria are deprecated. Hibernate 5
has provided interfaces like CriteriaBuilder and CriteriaQuery to serve the purpose:
javax.persistence.criteria.CriteriaBuilder
javax.persistence.criteria.CriteriaQuery
// Create CriteriaBuilder
CriteriaBuilder builder = session.getCriteriaBuilder();
// Create CriteriaQuery
CriteriaQuery<YourClass> criteria = builder.createQuery(YourClass.class);
public int executeUpdate() : This method is used to run the update/delete query.
public List list(): This method returns the result as a list.
public Query setFirstResult(int rowNumber): This method accepts a row number as
the parameter; results are retrieved starting from that row.
public Query setMaxResults(int rowsCount): This method limits the number of
results retrieved from the database to the specified rowsCount.
public Query setParameter(int position, Object value): This method sets the
value to the attribute/column at a particular position. This method follows the
JDBC style of the query parameter.
public Query setParameter(String name, Object value): This method sets the
value to a named query parameter.
Example: To get a list of all records from InterviewBitEmployee Table:
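A minimal sketch of such a query (assuming InterviewBitEmployee is a mapped entity and session is an open Session):

```java
// HQL uses the entity name, not the table name
Query query = session.createQuery("from InterviewBitEmployee");
List<InterviewBitEmployee> employees = query.list();
```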
21. Can you tell something about one to many associations and
how can we use them in Hibernate?
The one-to-many association is the most commonly used which indicates that one
object is linked/associated with multiple objects.
For example, one person can own multiple cars.
@Entity
@Table(name="Person")
public class Person {
//...
@OneToMany(mappedBy="owner")
private Set<Car> cars;
}
In the Person class, we have defined the cars property with the @OneToMany
association. The Car class has an owner property, which is referenced by the mappedBy
attribute in the Person class. The Car class is as shown below:
@Entity
@Table(name="Car")
public class Car {
// Other Properties
@ManyToOne
@JoinColumn(name="person_id", nullable=false)
private Person owner;
public Car() {}
}
Here, Student-Course Table is called the Join Table where the student_id and
course_id would form the composite primary key.
setFetchSize() works as an optimization hint for how Hibernate fetches the result set
from the database, for example whether the results are buffered or sent in chunks of
different sizes. This method is not implemented by all database drivers.
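A short sketch of passing this hint (assuming an open session; whether the driver honors it is driver-specific):

```java
Query query = session.createQuery("from InterviewBitEmployee");
query.setFetchSize(50); // hint: fetch rows from the JDBC driver in chunks of 50
List results = query.list();
```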
Persistent:
This state is entered whenever the object is linked or associated with the
session.
An object is said to be in a persistence state whenever we save or persist an
object in the database. Each object corresponds to the row in the database
table. Any modifications to the data in this state cause changes in the record in
the database.
Following methods can be used upon the persistence object:
session.save(record);
session.persist(record);
session.update(record);
session.saveOrUpdate(record);
session.lock(record);
session.merge(record);
Detached:
The object enters this state whenever the session is closed or the cache is
cleared.
Due to the object being no longer part of the session, any changes in the object
will not reflect in the corresponding row of the database. However, it would still
have its representation in the database.
In case the developer wants to persist changes of this object, it has to be
reattached to the hibernate session.
In order to achieve the reattachment, we can use the load(), merge(),
refresh(), update(), or save() methods on a new session by using the reference of
the detached object.
The object enters this state whenever any of the following methods are called:
session.close();
session.clear();
session.detach(record);
session.evict(record);
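A minimal sketch of reattaching a detached object (assuming record was loaded in an earlier, now-closed session):

```java
Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
// record is detached; update() reattaches it and schedules an UPDATE
session.update(record);
tx.commit();
session.close();
```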
Hibernate framework provides an optional feature called cache region for the
queries’ resultset. Additional configurations have to be done in code in order to
enable this. The query cache is useful for those queries which are most frequently
called with the same parameters. This increases the speed of the data retrieval and
greatly improves performance for commonly repetitive queries.
This does not cache the state of actual entities in the result set but it only stores the
identifier values and results of the value type. Hence, query cache should be always
used in association with second-level cache.
Configuration:
In the hibernate configuration XML file, set the use_query_cache property to true as
shown below:
<property name="hibernate.cache.use_query_cache">true</property>
In the code, we need to do the below changes for the query object:
Query query = session.createQuery("from InterviewBitEmployee");
query.setCacheable(true);
query.setCacheRegion("IB_EMP");
33. Can you tell something about the N+1 SELECT problem in
Hibernate?
The N+1 SELECT problem is a result of using lazy loading and an on-demand
fetching strategy. Let's take an example. Suppose you have a list of N items and each item
from the list has a dependency on a collection of another object, say Bid. In order to
find the highest bid for each item while using the lazy loading strategy, Hibernate has
to first fire 1 query to load all items and then subsequently fire N queries to load the bids
of each item. Hence, Hibernate actually ends up executing N+1 queries.
Pre-fetch the records in batches, which reduces the problem from N+1 to
(N/K) + 1, where K refers to the size of the batch.
Use the subselect fetching strategy.
As a last resort, try to avoid or disable lazy loading altogether.
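The batch-fetching mitigation above can be sketched as follows (the Item/Bid entities and the batch size of 10 are illustrative assumptions):

```java
@Entity
public class Item {
    @Id
    private Long id;

    // Hibernate's @BatchSize loads the bids of up to 10 items per SELECT,
    // reducing N+1 queries to roughly (N/10) + 1
    @OneToMany(mappedBy = "item")
    @BatchSize(size = 10)
    private Set<Bid> bids;
}
```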
@Entity
@Table(name = "InterviewBitEmployee")
@Inheritance(strategy = InheritanceType.SINGLE_TABLE)
@DiscriminatorColumn(name = "employee_type")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitEmployee {
@Id
@Column(name = "employee_id")
private String employeeId;
private String fullName;
private String email;
}
InterviewBitContractEmployee class:
@Entity
@DiscriminatorValue("contract")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitContractEmployee extends InterviewBitEmployee {
private LocalDate contractStartDate;
private LocalDate contractEndDate;
private String agencyName;
}
InterviewBitPermanentEmployee class:
@Entity
@DiscriminatorValue("permanent")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitPermanentEmployee extends InterviewBitEmployee {
private LocalDate workStartDate;
private int numberOfLeaves;
}
37. Can you tell something about the Table Per Class Strategy?
Table Per Class Strategy is another type of inheritance mapping strategy where each
class in the hierarchy has a corresponding mapping database table. For example, the
InterviewBitContractEmployee class details are stored in the
interviewbit_contract_employee table and InterviewBitPermanentEmployee class
details are stored in interviewbit_permanent_employee tables respectively. As the
data is stored in different tables, there will be no need for a discriminator column as
done in a single table strategy.
Hibernate provides the @Inheritance annotation, which takes the strategy as a parameter
and defines which strategy we are using. Giving it the value
InheritanceType.TABLE_PER_CLASS signifies that we are using the table per class
strategy for mapping.
The code snippet will be as shown below:
InterviewBitEmployee class:
@Entity(name = "interviewbit_employee")
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitEmployee {
@Id
@Column(name = "employee_id")
private String employeeId;
private String fullName;
private String email;
}
InterviewBitContractEmployee class:
@Entity(name = "interviewbit_contract_employee")
@Table(name = "interviewbit_contract_employee")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitContractEmployee extends InterviewBitEmployee {
private LocalDate contractStartDate;
private LocalDate contractEndDate;
private String agencyName;
}
InterviewBitPermanentEmployee class:
@Entity(name = "interviewbit_permanent_employee")
@Table(name = "interviewbit_permanent_employee")
@NoArgsConstructor
@AllArgsConstructor
public class InterviewBitPermanentEmployee extends InterviewBitEmployee {
private LocalDate workStartDate;
private int numberOfLeaves;
}
Disadvantages:
This strategy offers lower performance due to the additional joins needed
to fetch the data.
This strategy is not supported by all JPA providers.
Ordering results is tricky in some cases since it is first done per class and only
then by the ordering criteria.
@NamedQueries(
{
@NamedQuery(
name = "findIBEmployeeByFullName",
query = "from InterviewBitEmployee e where e.fullName = :fullName"
)
}
)
:fullName refers to the parameter that is programmer defined and can be set using
the query.setParameter method while using the named query.
Usage:
The getNamedQuery method takes the name of the named query and returns the
query instance.
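A minimal usage sketch for the named query defined above (the parameter name fullName matches the definition; the employee name is illustrative):

```java
Query query = session.getNamedQuery("findIBEmployeeByFullName");
query.setParameter("fullName", "John Doe");
List<InterviewBitEmployee> employees = query.list();
```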
LEARNCODEWITH DURGESH
HIBERNATE FRAMEWORK
• Hibernate is a Java framework that simplifies the development of Java applications that interact
with a database.
• Hibernate is an ORM (Object Relational Mapping) tool.
• Hibernate is open source and lightweight.
• Hibernate is a non-invasive framework, meaning it doesn't force programmers to
extend/implement any class/interface.
• It was created by Gavin King in 2001.
• Any type of application can be built with the Hibernate framework.
[Diagram: Java objects mapped to database tables, via plain JDBC versus Hibernate]
FETCH DATA
get()
• The get method of the Hibernate Session returns null if the object is not found in the cache or in the database.
• Involves a database hit if the object doesn't exist in the session cache, and returns a fully initialized object, which may involve several database calls.
• Use it if you are not sure that the object exists in the database.

load()
• Throws ObjectNotFoundException if the object is not found in the cache or in the database; it never returns null.
• Can return a proxy instead and only initializes the object (hits the database) if any method other than getId() is called on the persistent or entity object. This lazy initialization increases performance.
• Use it if you are sure that the object exists.
MANY TO MANY MAPPING

Example join table EMP_PROJECT:

Eid | pid
12  | 2
13  | 2
13  | 3
FETCH TYPE

[Diagram: one Question entity associated with multiple Answer entities (A1, A2, A3, A4)]
@Entity
public class Question {
@Id
@Column(name = "question_id")
private int questionId;
@OneToMany(mappedBy = "question")
private List<Answer> answers;
}
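The owning side of this association can be sketched as follows (the Answer entity and its answer_id column are assumptions mirroring the Question class above):

```java
@Entity
public class Answer {
    @Id
    @Column(name = "answer_id")
    private int answerId;

    // Foreign key back to the owning Question; "question" is the
    // property named by mappedBy in the Question entity
    @ManyToOne
    @JoinColumn(name = "question_id")
    private Question question;
}
```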
LAZY: associated data is loaded only when we explicitly call a getter or the size method on it.
EAGER: a design pattern in which data loading occurs on the spot, together with the owning entity.
[Diagram: Hibernate object states — Transient, Persistent, Detached and Removed — in relation to the Session object and the database]
HQL
Hibernate Query Language
How to load complex data?
HQL
• Database independent
• Easy to learn for programmers

SQL
• Database dependent
• Easy to learn for DBAs
HQL queries use the entity name, whereas SQL queries use the table name.
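The difference can be sketched with an equivalent pair of queries (assuming an Employee entity mapped to a table named employee_tbl; both names are illustrative):

```java
// HQL: refers to the Employee *entity* and its fields
List hqlResult = session
    .createQuery("from Employee e where e.name = :name")
    .setParameter("name", "Hibernate")
    .list();

// Native SQL: refers to the employee_tbl *table* and its columns
List sqlResult = session
    .createSQLQuery("select * from employee_tbl where name = :name")
    .setParameter("name", "Hibernate")
    .list();
```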
CACHING IN HIBERNATE
Caching is a mechanism to enhance the performance of an application.
USE CASE

[Diagram: without caching, every request from the Java application goes to the database; with caching, repeated requests are served from the cache instead]
HIBERNATE CACHING
[Diagram: TodoDao calls save(todo)/getAll() on HibernateTemplate, which uses a SessionFactory created by LocalSessionFactoryBean from a DataSource (DriverManagerDataSource)]
ChatGPT Annotations
Saturday, July 13, 2024 1:50 AM
2. @Component
Definition: Indicates a class as a Spring component.
Example:
@Component
public class UserService {
// ...
}
Uses:
• Enables component scanning.
• Facilitates dependency management by the Spring container.
3. @Configuration
Definition: Indicates a class that declares one or more @Bean
methods.
Example:
@Configuration
public class AppConfig {
// ...
}
Uses:
• Provides a Java-based configuration.
• Groups related bean definitions together.
4. @Bean
Definition: Indicates that a method produces a bean to be managed
by the Spring container.
Example:
@Bean
public UserService userService() {
return new UserService();
}
Uses:
• Allows programmatic bean registration.
• Facilitates dependency injection.
5. @Scope
Definition: Specifies the scope of a bean (e.g., singleton, prototype).
Example:
@Scope("prototype")
@Bean
public UserService userService() {
return new UserService();
}
Uses:
• Controls the lifecycle of a bean.
• Supports different visibility for beans.
6. @PostConstruct
Definition: Marks a method to be executed after dependency
injection.
Example:
@PostConstruct
public void init() {
// Initialization logic
}
Uses:
• Initializes resources after bean properties are set.
• Ensures proper setup before bean use.
7. @PreDestroy
Definition: Marks a method to be executed before a bean is
destroyed.
Example:
@PreDestroy
public void cleanup() {
// Cleanup logic
}
Uses:
• Releases resources before bean destruction.
• Prevents memory leaks.
8. @Qualifier
Definition: Used to resolve ambiguity when multiple beans of the
same type exist.
Example:
@Autowired
@Qualifier("userService")
private UserService userService;
Uses:
• Differentiates between multiple beans.
• Ensures the correct bean is injected.
10. @Profile
Definition: Indicates that a bean is available only in certain profiles.
Example:
@Profile("dev")
@Bean
public DataSource devDataSource() {
return new HikariDataSource();
}
Uses:
• Supports environment-specific configurations.
• Enables profile-based bean management.
11. @Conditional
Definition: Specifies a condition under which a bean is registered.
Example:
@ConditionalOnProperty(name = "feature.enabled", havingValue = "true")
@Bean
public MyFeature myFeature() {
return new MyFeature();
}
Uses:
• Provides flexible configuration options.
• Controls bean creation based on conditions.
12. @Import
Definition: Imports additional configuration classes.
Example:
@Import({DataConfig.class, SecurityConfig.class})
public class AppConfig {
// ...
}
Uses:
• Supports modular configuration.
• Facilitates class importing.
13. @ComponentScan
Definition: Specifies packages to scan for Spring components.
Example:
@ComponentScan(basePackages = "com.example")
@Configuration
public class AppConfig {
// ...
}
Uses:
• Automates discovery of annotated components.
• Reduces manual bean registration.
14. @EventListener
Definition: Marks a method as an event listener.
Example:
@EventListener
public void handleEvent(MyEvent event) {
// Event handling logic
}
Uses:
• Supports event-driven architecture.
• Facilitates decoupling of components.
15. @Transactional
Definition: Declares a method or class as transactional.
Example:
@Transactional
public void save(User user) {
// ...
}
Uses:
• Manages database transactions.
• Ensures data integrity.
16. @EnableAspectJAutoProxy
Definition: Enables support for handling components marked with
@Aspect.
Example:
@EnableAspectJAutoProxy
@Configuration
public class AppConfig {
// ...
}
Uses:
• Activates AOP features in the application.
• Facilitates proxy-based AOP support.
17. @Aspect
Definition: Indicates that a class is an aspect in Spring AOP.
Example:
@Aspect
public class LoggingAspect {
// ...
}
18. @Pointcut
Definition: Declares a pointcut expression for AOP.
Example:
@Pointcut("execution(* com.example.service.*.*(..))")
public void serviceMethods() {
// Pointcut definition
}
Uses:
• Centralizes pointcut definitions for reuse.
• Enhances readability.
19. @Order
Definition: Specifies the order of execution for aspects.
Example:
@Aspect
@Order(1)
public class FirstAspect {
// ...
}
Uses:
• Controls execution order of multiple aspects.
• Supports complex AOP configurations.
20. @ConfigurationProperties
Definition: Binds properties to a configuration class.
Example:
@ConfigurationProperties(prefix = "app")
public class AppConfig {
private String name;
// ...
}
Uses:
• Supports type-safe configuration management.
• Simplifies property access.
2. JdbcTemplate
Definition: Not an annotation but a Spring class that simplifies JDBC operations; it is typically injected with @Autowired.
Example:
@Autowired
private JdbcTemplate jdbcTemplate;
Uses:
• Simplifies database operations.
• Reduces boilerplate code for JDBC.
3. @Transactional
Definition: Indicates that a method should run within a transaction.
Example:
@Transactional
public void saveUser(User user) {
// ...
}
Uses:
• Manages transaction boundaries.
• Ensures data consistency.
4. @Query
Definition: Specifies a query for a repository method.
Example:
@Query("SELECT u FROM User u WHERE u.id = ?1")
User findUserById(Long id);
Uses:
• Customizes SQL or JPQL queries in repositories.
• Supports complex queries.
5. @Modifying
Definition: Indicates a modifying query (insert, update, delete).
Example:
@Modifying
@Query("UPDATE User u SET u.name = ?1 WHERE u.id = ?2")
void updateUserName(String name, Long id);
Uses:
• Specifies data modification operations.
• Supports non-select queries.
6. @Transactional(readOnly = true)
Definition: Indicates a read-only transaction.
Example:
@Transactional(readOnly = true)
public List<User> findAllUsers() {
// ...
}
8. @Table
Definition: Specifies the table name for an entity.
Example:
@Entity
@Table(name = "users")
public class User {
// ...
}
Uses:
• Customizes the table mapping for entities.
• Supports database schema management.
9. @Id
Definition: Indicates the primary key of an entity.
Example:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
Uses:
• Marks the identifier property of an entity.
• Facilitates primary key generation.
10. @GeneratedValue
Definition: Specifies the strategy for primary key generation.
Example:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
Uses:
• Controls the primary key generation strategy.
11. @Column
Definition: Specifies a column for the entity.
Example:
@Column(name = "user_name", nullable = false)
private String username;
Uses:
• Maps a field to a specific database column.
• Supports column constraints like nullability.
12. @ManyToOne
Definition: Defines a many-to-one relationship between entities.
Example:
@ManyToOne
@JoinColumn(name = "role_id")
private Role role;
Uses:
• Manages relationships in JPA entities.
• Facilitates foreign key mappings.
13. @OneToMany
Definition: Defines a one-to-many relationship between entities.
Example:
@OneToMany(mappedBy = "user")
private List<Order> orders;
Uses:
• Maps collections of related entities.
• Supports bidirectional relationships.
14. @JoinColumn
Definition: Specifies a column for joining two entities.
Example:
@ManyToOne
@JoinColumn(name = "user_id")
private User user;
Uses:
• Defines the join column for associations.
• Enhances relationship mapping.
16. @Embedded
Definition: Indicates that an entity contains an embedded object.
Example:
@Embedded
private Address address;
Uses:
• Allows for reusable, composite data types.
• Supports complex entity relationships.
17. @ManyToMany
Definition: Defines a many-to-many relationship between entities.
Example:
@ManyToMany
@JoinTable(name = "user_roles",
joinColumns = @JoinColumn(name = "user_id"),
inverseJoinColumns = @JoinColumn(name = "role_id"))
private Set<Role> roles;
Uses:
• Manages complex relationships in data models.
• Supports associative tables for many-to-many relationships.
18. @Fetch
Definition: Specifies the fetching strategy for collections (the example below uses the standard JPA fetch attribute; Hibernate also provides a separate @Fetch annotation).
Example:
@OneToMany(fetch = FetchType.LAZY)
private List<Order> orders;
Uses:
• Controls data fetching strategies (EAGER or LAZY).
• Optimizes performance based on data access patterns.
19. @Version
Definition: Used for optimistic locking.
Example:
@Version
private Long version;
Uses:
• Prevents concurrent modification issues.
• Supports optimistic concurrency control.
20. @Lock
Definition: Specifies the locking strategy for an entity.
Example:
@Lock(LockModeType.PESSIMISTIC_WRITE)
List<User> findByUsername(String username);
2. @Before
Definition: Indicates that a method runs before a join point.
Example:
@Before("execution(* com.example.service.*.*(..))")
public void logBefore(JoinPoint joinPoint) {
// Logging logic
}
Uses:
• Provides pre-processing logic for methods.
• Supports cross-cutting concerns such as logging.
3. @After
Definition: Indicates that a method runs after a join point.
Example:
@After("execution(* com.example.service.*.*(..))")
public void logAfter(JoinPoint joinPoint) {
// Logging logic
}
Uses:
• Provides post-processing logic for methods.
• Supports cleanup operations.
4. @Around
Definition: Indicates that a method runs both before and after a join
point.
Example:
@Around("execution(* com.example.service.*.*(..))")
5. @AfterReturning
Definition: Indicates that a method runs after a join point returns
successfully.
Example:
@AfterReturning(pointcut = "execution(* com.example.service.*.*(..))", returning = "result")
public void logAfterReturning(JoinPoint joinPoint, Object result) {
// Logging logic
}
Uses:
• Captures successful method return values.
• Supports logging or further processing.
6. @AfterThrowing
Definition: Indicates that a method runs after a join point throws an
exception.
Example:
@AfterThrowing(pointcut = "execution(* com.example.service.*.*(..))", throwing = "error")
public void logAfterThrowing(JoinPoint joinPoint, Throwable error) {
// Logging logic
}
Uses:
• Captures exceptions thrown by methods.
• Supports error handling and logging.
7. @Pointcut
Definition: Declares a pointcut expression for AOP.
Example:
@Pointcut("execution(* com.example.service.*.*(..))")
public void serviceMethods() {
// Pointcut definition
}
Uses:
• Centralizes pointcut definitions for reuse.
• Enhances readability of aspect definitions.
8. @DeclareParents
Definition: Used to introduce a mixin interface to a target class.
Example:
@DeclareParents(value = "com.example.service.*", defaultImpl = DefaultMixin.class)
public static MixinInterface mixin;
Uses:
• Supports mixin functionality in AOP.
• Enables flexible design patterns.
9. @Order
Definition: Specifies the order of execution for aspects.
Example:
@Aspect
@Order(1)
public class FirstAspect {
// ...
}
Uses:
• Controls execution order of multiple aspects.
• Supports complex AOP configurations.
10. @EnableAspectJAutoProxy
Definition: Enables support for handling components marked with
@Aspect.
Example:
@EnableAspectJAutoProxy
@Configuration
public class AppConfig {
// ...
}
Uses:
• Activates AOP features in the application.
• Facilitates proxy-based AOP support.
12. AspectJ-style aspects
Definition: An aspect defined using the @AspectJ annotation style.
Example:
@Aspect
public class TransactionAspect {
// ...
}
Uses:
• Provides an alternative AOP style.
• Supports AspectJ-specific features.
13. @ContextConfiguration
Definition: Specifies the context configuration for testing.
Example:
@ContextConfiguration(classes = AppConfig.class)
public class MyTest {
// ...
}
Uses:
• Integrates AOP with test contexts.
• Supports configuration management in tests.
14. AspectJ proxying
Definition: AspectJ-style proxying is enabled with @EnableAspectJAutoProxy (there is no separate @AspectJProxy annotation).
Example:
@EnableAspectJAutoProxy
public class MyConfig {
// ...
}
Uses:
• Activates AspectJ proxy support.
• Enhances AOP features.
15. @Transactional
Definition: Applies transaction management to an aspect method.
Example:
@Transactional
@Around("execution(* com.example.service.*.*(..))")
public Object transactionAdvice(ProceedingJoinPoint joinPoint)
throws Throwable {
// Transactional logic
}
Uses:
• Combines AOP with transaction management.
• Ensures data integrity.
17. @Target
Definition: A standard Java meta-annotation that specifies where a custom annotation (for example, one matched by an aspect) can be applied.
Example:
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Loggable {
}
Uses:
• Controls where the annotation (and thus the aspect) applies.
• Enhances aspect precision.
Uses:
• Controls where the aspect applies.
• Enhances aspect precision.
18. @annotation (pointcut designator)
Definition: Matches methods annotated with a specific annotation.
Example:
@Pointcut("@annotation(com.example.annotations.Loggable)")
public void loggableMethods() {
// ...
}
Uses:
• Enhances aspect targeting.
• Supports annotation-driven behavior.
19. Weaving
Definition: In AspectJ, weaving is the process of integrating aspects into target classes; it can happen at compile time, post-compile time, or load time, and is configured through the AspectJ compiler or a load-time weaving agent rather than through a @Weave annotation.
Uses:
• Integrates aspects into compiled code.
• Supports compile-time and load-time weaving.
20. @Configuration
Definition: Indicates a class that contains Spring configuration.
Example:
@Configuration
public class AppConfig {
// ...
}
2. @Table
Definition: Specifies the table name in the database that the entity
maps to.
Example:
@Entity
@Table(name = "users")
public class User {
// ...
}
Uses:
• Allows customization of table names.
• Supports different naming conventions.
3. @Id
Definition: Indicates the primary key of the entity.
Example:
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
private Long id;
Uses:
• Marks the identifier property of an entity.
• Required for every JPA entity.
4. @GeneratedValue
Definition: Specifies the strategy for generating primary key values.
Example:
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
Uses:
• Simplifies the management of primary keys.
• Supports different ID generation strategies (e.g., AUTO,
SEQUENCE).
5. @Column
Definition: Maps a field to a database column.
Example:
@Column(name = "username", nullable = false, unique = true)
private String username;
Uses:
• Customizes column definitions and constraints.
• Enhances database schema management.
6. @OneToMany
Definition: Defines a one-to-many relationship between two
entities.
Example:
@OneToMany(mappedBy = "user")
private List<Order> orders;
Uses:
• Represents complex relationships in the data model.
• Facilitates data retrieval across related entities.
7. @ManyToOne
Definition: Indicates a many-to-one relationship between two
entities.
Example:
@ManyToOne
@JoinColumn(name = "user_id")
private User user;
Uses:
• Supports relationships between entities.
• Enhances data integrity and organization.
8. @ManyToMany
Definition: Defines a many-to-many relationship between two
entities.
Example:
@ManyToMany
@JoinTable(name = "user_roles",
joinColumns = @JoinColumn(name = "user_id"),
inverseJoinColumns = @JoinColumn(name = "role_id"))
private Set<Role> roles;
9. @JoinColumn
Definition: Specifies the foreign key column in a relationship.
Example:
@ManyToOne
@JoinColumn(name = "user_id", nullable = false)
private User user;
Uses:
• Customizes foreign key mapping.
• Ensures database integrity.
10. @Embedded
Definition: Indicates that a field is an embedded object that is part
of the entity.
Example:
@Embedded
private Address address;
Uses:
• Supports complex data structures within entities.
• Facilitates better data organization.
11. @Embeddable
Definition: Marks a class as an embeddable object that can be
included in an entity.
Example:
@Embeddable
public class Address {
private String street;
private String city;
// getters and setters
}
Uses:
• Defines reusable components for entity classes.
• Promotes better encapsulation.
12. @Transient
Definition: Indicates that a field should not be persisted in the database.
Example:
@Transient
private String sessionToken;
Uses:
• Excludes derived or temporary fields from persistence.
13. @Version
Definition: Specifies the version of the entity for optimistic locking.
Example:
@Version
private Long version;
Uses:
• Prevents concurrent modifications.
• Ensures data integrity during updates.
14. @Fetch
Definition: Specifies how associated entities are fetched (the example below uses the standard JPA fetch attribute; Hibernate also provides a separate @Fetch annotation).
Example:
@OneToMany(fetch = FetchType.LAZY)
private List<Order> orders;
Uses:
• Controls the fetching strategy for associations.
• Optimizes performance.
15. @OrderBy
Definition: Specifies the ordering of a collection of associated
entities.
Example:
@OneToMany
@OrderBy("createdDate DESC")
private List<Order> orders;
Uses:
• Provides default sorting for collections.
• Enhances query performance.
16. @Query
Definition: Defines a custom query for a repository method.
Example:
@Query("SELECT u FROM User u WHERE u.username = ?1")
User findByUsername(String username);
Uses:
• Supports custom JPQL or SQL queries.
• Enhances query flexibility.
17. @NamedQuery
Definition: Defines a static query that can be referenced by name.
Example:
@NamedQuery(name = "User.findByUsername",
query = "SELECT u FROM User u WHERE u.username = :username")
@Entity
public class User {
// ...
}
18. @EntityListeners
Definition: Specifies classes that contain callback methods for entity
lifecycle events.
Example:
@EntityListeners(AuditListener.class)
public class User {
// ...
}
Uses:
• Implements entity lifecycle callbacks.
• Supports auditing and logging.
19. @PrePersist
Definition: Indicates a method that should be called before the
entity is persisted.
Example:
@PrePersist
public void onPrePersist() {
this.createdDate = new Date();
}
Uses:
• Supports pre-insert operations.
• Enhances data integrity.
20. @PostLoad
Definition: Indicates a method that should be called after the entity
is loaded from the database.
Example:
@PostLoad
public void onPostLoad() {
// Logic after loading
}
Uses:
• Implements actions post-load.
• Enhances data processing.
2. @RequestMapping
Definition: Maps HTTP requests to handler methods.
Example:
@RequestMapping("/users")
public String getUsers() {
return "userList";
}
Uses:
• Configures URL mapping for requests.
• Supports multiple HTTP methods.
3. @GetMapping
Definition: A shortcut for @RequestMapping with the GET method.
Example:
@GetMapping("/users")
public List<User> getAllUsers() {
return userService.findAll();
}
Uses:
• Simplifies GET request mappings.
• Enhances readability.
4. @PostMapping
Definition: A shortcut for @RequestMapping with the POST method.
Example:
@PostMapping("/users")
public void addUser(@RequestBody User user) {
userService.save(user);
}
Uses:
• Simplifies POST request mappings.
• Supports resource creation.
5. @PutMapping
Definition: A shortcut for @RequestMapping with the PUT method.
Example:
@PutMapping("/users/{id}")
public void updateUser(@PathVariable Long id, @RequestBody User user) {
userService.update(id, user);
}
Uses:
• Simplifies PUT request mappings.
• Supports resource updates.
6. @DeleteMapping
Definition: A shortcut for @RequestMapping with the DELETE
method.
Example:
@DeleteMapping("/users/{id}")
public void deleteUser(@PathVariable Long id) {
userService.delete(id);
}
Uses:
• Simplifies DELETE request mappings.
• Supports resource deletion.
7. @PathVariable
Definition: Binds a URI template variable to a method parameter.
Example:
@GetMapping("/users/{id}")
public User getUser(@PathVariable Long id) {
return userService.findById(id);
}
8. @RequestParam
Definition: Binds a request parameter to a method parameter.
Example:
@GetMapping("/users")
public List<User> getUsers(@RequestParam(required = false) String name) {
return userService.findByName(name);
}
Uses:
• Handles query parameters in requests.
• Supports optional parameters.
9. @RequestBody
Definition: Binds the request body to a method parameter.
Example:
@PostMapping("/users")
public void addUser(@RequestBody User user) {
userService.save(user);
}
Uses:
• Supports deserialization of request bodies.
• Facilitates JSON/XML request handling.
10. @ResponseBody
Definition: Indicates that a method return value should be bound to
the web response body.
Example:
@GetMapping("/users/{id}")
@ResponseBody
public User getUser(@PathVariable Long id) {
return userService.findById(id);
}
Uses:
• Supports direct response body writing.
• Simplifies AJAX response handling.
11. @RestController
Definition: Combines @Controller and @ResponseBody.
Example:
@RestController
@RequestMapping("/api")
public class UserRestController {
// ...
}
12. @ExceptionHandler
Definition: Handles exceptions thrown by controller methods.
Example:
@ExceptionHandler(UserNotFoundException.class)
public ResponseEntity<String> handleUserNotFound(UserNotFoundException ex) {
return ResponseEntity.status(HttpStatus.NOT_FOUND)
.body(ex.getMessage());
}
Uses:
• Centralizes exception handling logic.
• Supports custom error responses.
13. @ModelAttribute
Definition: Binds a method parameter or method return value to a
model attribute.
Example:
@ModelAttribute("user")
public User populateUser() {
return new User();
}
Uses:
• Prepares model attributes for views.
• Supports data binding.
14. @SessionAttributes
Definition: Indicates which model attributes should be stored in the
session.
Example:
@SessionAttributes("user")
public class UserController {
// ...
}
Uses:
• Manages session data in controllers.
• Supports multi-step forms.
16. @RedirectAttributes
Definition: Allows attributes to be stored in the session for
redirection.
Example:
@PostMapping("/users")
public String addUser(@ModelAttribute User user, RedirectAttributes redirectAttributes) {
userService.save(user);
redirectAttributes.addFlashAttribute("message", "User added!");
return "redirect:/users";
}
Uses:
• Supports flash attributes for redirection.
• Enhances user feedback after actions.
17. @ResponseStatus
Definition: Specifies the HTTP status code for a method.
Example:
@ResponseStatus(HttpStatus.CREATED)
@PostMapping("/users")
public void addUser(@RequestBody User user) {
userService.save(user);
}
Uses:
• Customizes response status codes.
• Enhances RESTful service design.
18. @InitBinder
Definition: Customizes the data binding process.
Example:
@InitBinder
public void initBinder(WebDataBinder binder) {
binder.registerCustomEditor(User.class, new UserEditor());
}
Uses:
• Configures data binding settings.
• Supports custom validation logic.
19. @CrossOrigin
Definition: Enables Cross-Origin Resource Sharing (CORS) on a
method.
Example:
@CrossOrigin(origins = "http://localhost:3000")
@GetMapping("/users")
public List<User> getUsers() {
return userService.findAll();
}
2. @EnableAutoConfiguration
Definition: Tells Spring Boot to automatically configure the
application context.
Example:
@EnableAutoConfiguration
public class Application {
// ...
}
Uses:
• Simplifies configuration for different setups.
• Automatically configures beans based on dependencies.
4. @Configuration
Definition: Indicates that a class declares one or more @Bean
methods.
Example:
@Configuration
public class AppConfig {
// ...
}
Uses:
• Centralizes configuration.
• Supports explicit bean definitions.
5. @Bean
Definition: Indicates that a method produces a bean to be managed
by the Spring container.
Example:
@Bean
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Explicitly defines beans in a configuration class.
• Supports customization of bean instantiation.
6. @Value
Definition: Injects values from property files.
Example:
@Value("${app.name}")
private String appName;
Uses:
• Supports external configuration.
• Enhances flexibility with environment variables.
7. @Autowired
Definition: Marks a constructor, field, or method for dependency
injection.
Example:
@Autowired
private UserService userService;
Uses:
• Enables automatic dependency injection.
• Reduces manual bean wiring.
8. @Qualifier
Definition: Specifies which bean to inject when multiple candidates
are available.
Example:
@Autowired
@Qualifier("userServiceImpl")
private UserService userService;
Uses:
• Resolves ambiguity in dependency injection.
• Supports multiple bean implementations.
9. @PostConstruct
Definition: Indicates a method to be executed after dependency
injection is complete.
Example:
@PostConstruct
public void init() {
// Initialization logic
}
Uses:
• Supports initialization after bean creation.
• Ensures dependencies are fully injected.
10. @PreDestroy
Definition: Indicates a method to be executed just before bean
destruction.
Example:
@PreDestroy
public void cleanup() {
// Cleanup logic
}
Uses:
• Supports resource cleanup before bean destruction.
• Ensures proper shutdown of resources.
11. @Conditional
Definition: Specifies a condition for bean registration.
Example:
@Bean
@ConditionalOnProperty(name = "feature.enabled", havingValue = "true")
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Registers a bean only when the given condition is met.
• Supports environment-specific bean configuration.
12. @ConfigurationProperties
Definition: Binds external configuration to a Java object.
Example:
@ConfigurationProperties(prefix = "app")
public class AppProperties {
private String name;
// getters and setters
}
Uses:
• Centralizes configuration management.
• Supports strong typing for configuration properties.
13. @EnableConfigurationProperties
Definition: Enables support for @ConfigurationProperties annotated
classes.
Example:
@EnableConfigurationProperties(AppProperties.class)
public class Application {
// ...
}
Uses:
• Supports type-safe configuration management.
• Integrates property binding with Spring context.
14. @RestControllerAdvice
Definition: Combines @ControllerAdvice and @ResponseBody for
global exception handling.
Example:
@RestControllerAdvice
public class GlobalExceptionHandler {
// ...
}
Uses:
• Centralizes exception handling for REST controllers.
• Simplifies response handling for errors.
15. @EnableScheduling
Definition: Enables Spring’s scheduled task execution capability.
Example:
@EnableScheduling
public class SchedulerConfig {
// ...
}
Uses:
• Activates @Scheduled methods in the application.
• Centralizes scheduling configuration.
16. @EnableAsync
Definition: Enables Spring’s asynchronous method execution
capability.
Example:
@EnableAsync
public class AsyncConfig {
// ...
}
Uses:
• Supports asynchronous execution in Spring.
• Enhances application responsiveness.
17. @SpringBootTest
Definition: Indicates that a class is a Spring Boot test.
Example:
@SpringBootTest
public class ApplicationTests {
// ...
}
Uses:
• Simplifies testing of Spring Boot applications.
• Provides a comprehensive application context for tests.
18. @MockBean
Definition: Creates a mock instance of a bean for testing.
Example:
@MockBean
private UserService userService;
Uses:
• Supports mocking dependencies in tests.
• Simplifies unit testing with Spring context.
19. @Profile
Definition: Indicates that a component is eligible for registration
when a specified profile is active.
Example:
@Profile("dev")
@Bean
public DataSource dataSource() {
return new H2DataSource();
}
Uses:
• Supports different configurations for different environments.
• Enhances flexibility and modularity.
20. @Scheduled
Definition: Indicates that a method should be scheduled to run at
fixed intervals.
Example:
@Scheduled(fixedRate = 5000)
public void performTask() {
// Task logic
}
Uses:
• Facilitates scheduled task execution.
• Supports cron expressions and fixed delays.
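Outside Spring, the fixed-rate behaviour that @Scheduled provides can be sketched with the JDK's own ScheduledExecutorService. This is a plain-Java illustration of the concept, not Spring's scheduler; the 50 ms rate and the counting task are arbitrary choices for the demo:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedRateDemo {
    // Schedules a task at a fixed rate and reports whether it fired three times
    static boolean runThreeTimes() {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch threeRuns = new CountDownLatch(3);
        // Fire every 50 ms starting immediately (analogous to fixedRate = 5000 above)
        scheduler.scheduleAtFixedRate(threeRuns::countDown, 0, 50, TimeUnit.MILLISECONDS);
        boolean finished;
        try {
            finished = threeRuns.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            finished = false;
        }
        scheduler.shutdownNow();
        return finished;
    }

    public static void main(String[] args) {
        System.out.println(runThreeTimes() ? "task ran 3 times" : "timed out");
    }
}
```

Spring's scheduler adds cron expressions and container lifecycle management on top of this same executor idea.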
2. @EnableAutoConfiguration
Definition: Instructs Spring Boot to automatically configure the
application based on the dependencies present in the classpath.
Example:
@EnableAutoConfiguration
public class MyApplication {
// ...
}
Uses:
• Automatically configures common application components.
• Reduces the need for explicit configuration.
3. @ConfigurationProperties
Definition: Binds external configuration properties to a Java object.
Example:
@ConfigurationProperties(prefix = "app")
public class AppProperties {
private String name;
private String version;
// getters and setters
}
4. @EnableConfigurationProperties
Definition: Enables support for @ConfigurationProperties annotated
classes.
Example:
@EnableConfigurationProperties(AppProperties.class)
public class MyApplication {
// ...
}
Uses:
• Supports type-safe configuration management.
• Integrates property binding with Spring context.
5. @SpringBootTest
Definition: Indicates that a class is a Spring Boot test, providing a
comprehensive application context for testing.
Example:
@SpringBootTest
public class MyApplicationTests {
// ...
}
Uses:
• Simplifies integration testing for Spring Boot applications.
• Provides a full application context for tests.
6. @MockBean
Definition: Creates a mock instance of a bean for testing.
Example:
@MockBean
private UserService userService;
Uses:
• Supports mocking dependencies in tests.
• Simplifies unit testing with Spring context.
7. @Profile
Definition: Indicates that a component is eligible for registration
when a specified profile is active.
Example:
@Profile("dev")
@Bean
public DataSource dataSource() {
return new H2DataSource();
}
Uses:
• Supports different configurations for different environments.
• Enhances flexibility and modularity.
8. @Scheduled
Definition: Indicates that a method should be scheduled to run at
fixed intervals.
Example:
@Scheduled(fixedRate = 5000)
public void performTask() {
// Task logic
}
Uses:
• Facilitates scheduled task execution.
• Supports cron expressions and fixed delays.
9. @Async
Definition: Indicates that a method should run asynchronously.
Example:
@Async
public CompletableFuture<String> asyncMethod() {
return CompletableFuture.completedFuture("Hello");
}
Uses:
• Enhances application responsiveness.
• Supports non-blocking operations.
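The non-blocking idea behind @Async can be seen with the JDK's CompletableFuture, which @Async methods commonly return. A plain-Java sketch mirroring the asyncMethod() example above (no Spring container involved):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // Mirrors asyncMethod() above: the work runs on a common-pool thread
    static CompletableFuture<String> asyncMethod() {
        return CompletableFuture.supplyAsync(() -> "Hello");
    }

    public static void main(String[] args) {
        CompletableFuture<String> future = asyncMethod();
        // The caller is free to do other work here; join() blocks only
        // at the point where the result is actually needed
        System.out.println(future.join()); // prints "Hello"
    }
}
```

With @Async, Spring performs the supplyAsync-style handoff for you via a proxy and a configured task executor.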
10. @RestControllerAdvice
Definition: Combines @ControllerAdvice and @ResponseBody for
global exception handling in REST APIs.
Example:
@RestControllerAdvice
public class GlobalExceptionHandler {
@ExceptionHandler(UserNotFoundException.class)
public ResponseEntity<String> handleUserNotFound(UserNotFoundException ex) {
return ResponseEntity.status(HttpStatus.NOT_FOUND).body(ex.getMessage());
}
}
Uses:
• Centralizes exception handling for REST controllers.
• Simplifies response handling for errors.
11. @EnableScheduling
Definition: Enables Spring's scheduled task execution capability.
Example:
@EnableScheduling
public class SchedulerConfig {
// ...
}
12. @Configuration
Definition: Indicates that a class declares one or more @Bean
methods.
Example:
@Configuration
public class AppConfig {
// ...
}
Uses:
• Centralizes configuration.
• Supports explicit bean definitions.
13. @Value
Definition: Injects values from property files into fields.
Example:
@Value("${app.name}")
private String appName;
Uses:
• Supports external configuration.
• Enhances flexibility with environment variables.
14. @CommandLineRunner
Definition: Indicates a method to be executed after the Spring Boot
application has started.
Example:
@Bean
public CommandLineRunner run() {
return args -> {
// Logic to run at startup
};
}
Uses:
• Supports executing initialization logic on startup.
• Facilitates running application-specific code.
15. @Conditional
Definition: Specifies a condition for bean registration.
Example:
@Bean
@ConditionalOnProperty(name = "feature.enabled", havingValue = "true")
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Registers a bean only when the given condition is met.
16. @Bean
Definition: Indicates that a method produces a bean to be managed
by the Spring container.
Example:
@Bean
public UserService userService() {
return new UserServiceImpl();
}
Uses:
• Explicitly defines beans in a configuration class.
• Supports customization of bean instantiation.
17. @Import
Definition: Allows importing additional configuration classes.
Example:
@Import({DatabaseConfig.class, SecurityConfig.class})
public class AppConfig {
// ...
}
Uses:
• Modularizes configuration.
• Supports importing configuration from multiple classes.
18. @EnableConfigurationPropertiesScan
Definition: Scans for @ConfigurationProperties beans in the
specified packages.
Example:
@EnableConfigurationPropertiesScan("com.example.config")
public class MyApplication {
// ...
}
Uses:
• Automatically detects and registers configuration properties
classes.
• Enhances organization of configuration classes.
19. @EntityScan
Definition: Specifies the packages to scan for JPA entities.
Example:
@EntityScan("com.example.model")
public class MyApplication {
// ...
}
Uses:
• Locates JPA entity classes outside the default package scan.
20. @EnableJpaRepositories
Definition: Enables JPA repositories in the specified packages.
Example:
@EnableJpaRepositories("com.example.repository")
public class MyApplication {
// ...
}
Uses:
• Automatically registers JPA repository beans.
• Simplifies repository management.
2. @Configuration
Definition: Indicates that a class contains Spring security
configuration.
Example:
@Configuration
public class SecurityConfig extends WebSecurityConfigurerAdapter {
// ...
}
Uses:
• Centralizes security configuration.
• Combines multiple security settings.
3. @EnableGlobalMethodSecurity
Definition: Enables method-level security in Spring Security.
Example:
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig extends WebSecurityConfigurerAdapter {
// ...
}
Uses:
• Supports security annotations on methods.
• Enhances security granularity.
4. @PreAuthorize
Definition: Indicates that a method can be invoked only if the user
has the specified authority.
Example:
@PreAuthorize("hasRole('ADMIN')")
public void adminOnlyMethod() {
// ...
}
Uses:
• Enforces method-level security.
• Supports fine-grained access control.
5. @PostAuthorize
Definition: Indicates that a method can be invoked but checks the
security after the method execution.
Example:
@PostAuthorize("returnObject.username == authentication.name")
public User getUser(Long id) {
return userService.findById(id);
}
Uses:
• Checks authorization after method execution.
• Supports dynamic security checks.
6. @Secured
Definition: Specifies the roles allowed to invoke a method.
Example:
@Secured("ROLE_USER")
public void userMethod() {
// ...
}
Uses:
• Provides simple role-based access control.
• Enhances method-level security.
7. @RolesAllowed
Definition: Indicates the roles permitted to execute a method.
Example:
@RolesAllowed({"ROLE_USER", "ROLE_ADMIN"})
public void userOrAdminMethod() {
// ...
}
Uses:
• Provides JSR-250 style role-based access control.
• Permits multiple roles for a single method.
8. @AuthenticationPrincipal
Definition: Indicates a method parameter should be bound to the
current authenticated user's details.
Example:
@GetMapping("/user")
public String getUser(@AuthenticationPrincipal UserDetails
userDetails) {
return userDetails.getUsername();
}
Uses:
• Simplifies access to authenticated user information.
• Enhances method readability.
9. @EnableWebSecurity
Definition: Enables Spring Security’s web security support.
Example:
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
// ...
}
Uses:
• Configures web security settings.
• Enables method-level security.
10. @WithMockUser
Definition: Creates a mock user for testing secured methods.
Example:
@WithMockUser(username = "admin", roles = {"ADMIN"})
public void testAdminAccess() {
// Test logic
}
Uses:
• Simplifies testing with mock security contexts.
• Supports integration testing of security features.
11. @EnableGlobalMethodSecurity
Definition: Enables method-level security using annotations.
Example:
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig {
// ...
}
Uses:
• Allows use of @PreAuthorize and @PostAuthorize.
• Enhances method security configuration.
12. @Secured
Definition: Provides role-based security at the method level.
Example:
@Secured("ROLE_ADMIN")
public void adminMethod() {
// ...
}
Uses:
• Simple way to restrict access to methods based on roles.
• Enhances method-level security.
13. @RequestMapping
Definition: Maps web requests to specific handler methods.
Example:
@RequestMapping("/api/users")
public List<User> getUsers() {
return userService.findAll();
}
Uses:
• Supports RESTful API design.
• Centralizes request mapping.
14. @PathVariable
Definition: Binds a method parameter to a URI template variable.
Example:
@GetMapping("/users/{id}")
public User getUser(@PathVariable Long id) {
return userService.findById(id);
}
Uses:
• Simplifies parameter binding from URL.
• Enhances API design.
15. @RequestBody
Definition: Binds the HTTP request body to a method parameter.
Example:
@PostMapping("/users")
public User createUser(@RequestBody User user) {
return userService.save(user);
}
Uses:
• Simplifies data binding from request body.
• Enhances API handling.
16. @RequestParam
Definition: Binds a method parameter to a web request parameter.
Example:
@GetMapping("/users")
public List<User> getUsers(@RequestParam String name) {
return userService.findByName(name);
}
Uses:
• Simplifies binding of query parameters.
• Supports default and optional values.
17. @ResponseStatus
Definition: Marks a method or exception with a specific HTTP status
code.
Example:
@ResponseStatus(HttpStatus.NOT_FOUND)
public void handleUserNotFound() {
// Logic
}
Uses:
• Customizes response status for controllers.
• Enhances error handling.
18. @ControllerAdvice
Definition: Global handler for controller exceptions and binding.
Example:
@ControllerAdvice
public class GlobalExceptionHandler {
@ExceptionHandler(UserNotFoundException.class)
public ResponseEntity<String> handleUserNotFound() {
return
ResponseEntity.status(HttpStatus.NOT_FOUND).body("User not
found");
}
}
Uses:
• Centralizes exception handling.
• Simplifies response management.
19. @CrossOrigin
Definition: Allows cross-origin requests on the specified controller or
method.
Example:
@CrossOrigin(origins = "http://localhost:3000")
@GetMapping("/api/data")
public Data getData() {
return dataService.getData();
}
Uses:
• Supports CORS in REST APIs.
20. @EnableCaching
Definition: Enables caching support in a Spring application.
Example:
@EnableCaching
public class AppConfig {
// ...
}
Uses:
• Improves application performance with caching.
• Simplifies caching configuration.
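The core idea behind caching, compute once and reuse, can be sketched in plain Java with ConcurrentHashMap.computeIfAbsent; Spring's @Cacheable applies the same pattern declaratively through a proxy. The square() computation here is a stand-in for any expensive call:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheDemo {
    static final ConcurrentHashMap<Integer, Integer> cache = new ConcurrentHashMap<>();
    static final AtomicInteger computations = new AtomicInteger();

    // The "expensive" work inside the lambda runs only on a cache miss
    static int square(int n) {
        return cache.computeIfAbsent(n, k -> {
            computations.incrementAndGet();
            return k * k;
        });
    }

    public static void main(String[] args) {
        square(4);
        square(4); // second call is served from the cache
        System.out.println("result=" + square(4) + ", computations=" + computations.get());
    }
}
```

Spring's cache abstraction adds eviction, TTL, and pluggable stores (Caffeine, Redis, etc.) on top of this lookup-or-compute pattern.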
Introduction to JDBC:
Components of JDBC:
There are four major components of JDBC using which it can interact with a
database. They are: the JDBC API, the JDBC DriverManager, the JDBC Test Suite
and the JDBC-ODBC Bridge drivers.
Scope of JDBC:
Earlier, the ODBC API was used as the database API to connect with the database and
execute queries. But the ODBC API uses the C language for its ODBC drivers (i.e., it is
platform-dependent and unsecured). Hence, Java has defined its own JDBC API that uses
JDBC drivers, which offer a natural Java interface for communicating with the database
through SQL. JDBC is required to provide a "pure Java" solution for the development
of applications using Java programming.
2. What is ResultSet?
JDBC driver of Oracle 10G is ojdbc14.jar and it can be obtained in the installation
directory of an Oracle at …/Oracle/app/oracle/product/10.2.0/server/jdbc/lib .
JDBC driver provides the connection to the database. Also, it implements the
protocol for sending the query and result between client and database.
6. Which data types are used for storing the image and file in
the database table?
BLOB data type is used to store the image in the database. We can also store
videos and audio by using the BLOB data type. It stores the binary type of data.
CLOB data type is used to store the file in the database. It stores the character
type of data.
DELIMITER $$
DROP PROCEDURE IF EXISTS `EMP`.`GET_EMP_DETAILS` $$
CREATE PROCEDURE `EMP`.`GET_EMP_DETAILS`
(IN EMP_ID INT, OUT EMP_DETAILS VARCHAR(255))
BEGIN
SELECT first INTO EMP_DETAILS
FROM Employees
WHERE ID = EMP_ID;
END $$
DELIMITER ;
Stored procedures are called using CallableStatement class available in JDBC API.
Below given code demonstrates this:
Three types of parameters are provided in the stored procedures. They are:
IN: It is used for passing the input values to the procedure. With the help of
setXXX() methods, you can bind values to IN parameters.
OUT: It is used for getting the value from the procedure. With the help of
getXXX() methods, you can obtain values from OUT parameters.
IN/OUT: It is used for passing the input values and obtaining the value
to/from the procedure. You bind variable values with the setXXX() methods
and obtain values with the getXXX() methods.
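The CallableStatement snippet the text refers to is missing from the source, so here is a hedged sketch of how the GET_EMP_DETAILS procedure above would typically be invoked. The JDBC URL, credentials and employee id are placeholders, and fetchEmpDetails() only works against a live database server:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CallProcDemo {
    // Builds the JDBC escape syntax for calling a stored procedure, e.g. {call P(?, ?)}
    static String callSyntax(String procName, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procName).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")}").toString();
    }

    // Runs only against a real database; url/user/pass are placeholder values
    static String fetchEmpDetails(String url, String user, String pass, int empId) throws Exception {
        try (Connection con = DriverManager.getConnection(url, user, pass);
             CallableStatement cs = con.prepareCall(callSyntax("GET_EMP_DETAILS", 2))) {
            cs.setInt(1, empId);                        // bind the IN parameter
            cs.registerOutParameter(2, Types.VARCHAR);  // declare the OUT parameter
            cs.execute();
            return cs.getString(2);                     // read the OUT value
        }
    }

    public static void main(String[] args) {
        System.out.println(callSyntax("GET_EMP_DETAILS", 2));
    }
}
```

The setInt()/registerOutParameter() pair corresponds exactly to the IN and OUT parameter kinds described above.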
ODBC vs JDBC:
• ODBC can be used with languages like C, C++, Java, etc., whereas JDBC is used only with the Java language.
• Most ODBC drivers are developed in native languages like C and C++, whereas JDBC drivers are developed using the Java language.
11. What are the different types of JDBC drivers in Java? Explain
each with an example.
There are four types of JDBC drivers in Java. They are:
Type I: JDBC - ODBC bridge driver
In this, the JDBC–ODBC bridge acts as an interface between the client and
database server. When a user uses a Java application to send requests to
the database using JDBC–ODBC bridge, it converts the JDBC API into ODBC
API and then sends it to the database. When the result is received from the
database, it is sent to ODBC API and then to JDBC API.
It is platform-dependent because it uses ODBC, which depends on the native
library of the operating system. In this, the JDBC-ODBC driver should be
installed in every client system and the database must support the ODBC driver.
It is easier to use but it gives low performance because it involves the
conversion of JDBC method calls to ODBC method calls.
ResultSet vs RowSet:
• ResultSet cannot be serialized as it handles the connection to the database, whereas RowSet is disconnected from the database and so can be serialized.
• By default, a ResultSet object is non-scrollable and non-updatable, whereas a RowSet object is scrollable and updatable.
• A ResultSet object is not a JavaBean object, whereas a RowSet object is a JavaBean object.
• ResultSet is returned by the executeQuery() method of the Statement interface, whereas RowSet extends the ResultSet interface and is returned by calling the RowSetProvider.newFactory().createJdbcRowSet() method.
• It is difficult to pass a ResultSet from one class to another as it has a connection with the database, whereas it is easier to pass a RowSet as it has no connection with the database.
Interfaces:
Statement st = conn.createStatement();
ResultSet rs = st.executeQuery(sql);
Statement vs PreparedStatement:
• Statement: performance is less compared to PreparedStatement.
• PreparedStatement: provides better performance than Statement, as it executes pre-compiled SQL statements.
execute() vs executeQuery() vs executeUpdate():
• execute(): It can be used for any SQL statement. It returns the boolean value TRUE if the result is a ResultSet object and FALSE when there is no ResultSet object. Used for executing both SELECT and non-SELECT queries.
• executeQuery(): It is used to execute SQL SELECT queries. It returns the ResultSet object which contains the data retrieved by the SELECT statement. Used for executing only the SELECT query.
• executeUpdate(): It is used to execute SQL statements such as Insert/Update/Delete which will update or modify the database data. It returns an integer value which represents the number of affected rows, where 0 indicates that the query returned nothing. Used for executing only a non-SELECT query.
The execute() method is used in the situations when you are not sure about the type
of statement else you can use executeQuery() or executeUpdate() method.
Getter Methods: These methods are used to retrieve values from a ResultSet.
Usually, a getter method is represented as getXXX() methods. Example:
int getInt(int Column_Index)
The above statement is used to retrieve the value of the specified column index,
and the return type is an int data type.
Setter Methods: These methods are used to set values in the database. They are
almost similar to getter methods, but here you need to pass the data/values for
the particular column to insert into the database, along with the column name or
index value of that column. Usually, a setter method is represented as setXXX() methods.
Example:
void setInt(int Column_Index, int Data_Value)
The above statement is used to insert the value of the specified column Index with an
int value.
If two users are viewing the same record, then there is no issue, and locking will
not be done. If one user is updating a record and the second user also wants to
update the same record, in this situation, we are going to use locking so that
there will be no lost update.
Two types of locking are available in JDBC by which we can handle multiple user
issues using the record. They are:
Optimistic Locking: It will lock the record only when an update takes
place. This type of locking will not make use of exclusive locks when reading
or selecting the record.
Pessimistic Locking: It will lock the record as soon as it selects the row to
update. The strategy of this locking system guarantees that the changes are
made safely and consistently.
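Optimistic locking is usually implemented with a version column: an update succeeds only if the version read earlier is still current. A minimal in-memory sketch of that check follows; the field names and values are illustrative and not tied to any particular database:

```java
public class OptimisticLockDemo {
    // Stands in for a database row with a value column and a version column
    static int storedValue = 100;
    static int storedVersion = 1;

    // Mimics: UPDATE t SET value = ?, version = version + 1 WHERE version = ?
    static synchronized boolean update(int newValue, int expectedVersion) {
        if (storedVersion != expectedVersion) {
            return false; // another user updated first -> caller must re-read and retry
        }
        storedValue = newValue;
        storedVersion++;
        return true;
    }

    public static void main(String[] args) {
        int versionSeenByA = storedVersion;
        int versionSeenByB = storedVersion; // both users read the same record
        System.out.println("A updates: " + update(200, versionSeenByA));
        System.out.println("B updates: " + update(300, versionSeenByB)); // stale version, lost update avoided
    }
}
```

Pessimistic locking would instead take an exclusive lock at read time (e.g. SELECT ... FOR UPDATE), so the second user blocks rather than failing the version check.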
A dirty read means "reading a value that may or may not be correct". In the
database, when a transaction is executing and changing some
field value, at the same time another transaction comes and reads the changed
field value before the first transaction could commit or rollback the value, which
may cause an invalid value for that particular field. This situation is known as a
dirty read.
Consider an example given below, where Transaction 2 changes a row but does
not commit the changes made. Then Transaction 1 reads the uncommitted
data. Now, if Transaction 2 rolls back its changes (which are already
read by Transaction 1) or updates any changes to the database, then the view of
the data may be wrong in the records related to Transaction 1. But in this case,
no row exists that has an id of 100 and an age of 25.
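The scenario above can be simulated deterministically in plain Java: one transaction writes an uncommitted value, the other reads it, and the first then rolls back. This is an illustrative sketch of the interleaving, not real database code; the age values mirror the example in the text:

```java
public class DirtyReadDemo {
    // "Committed" state and an uncommitted working copy, standing in for a table row
    static int committedAge = 21;
    static Integer uncommittedAge = null;

    static void write(int newAge) { uncommittedAge = newAge; }  // Transaction 2 changes the row
    static int dirtyRead() {                                    // Transaction 1 reads without isolation
        return uncommittedAge != null ? uncommittedAge : committedAge;
    }
    static void rollback() { uncommittedAge = null; }           // Transaction 2 undoes its change

    public static void main(String[] args) {
        write(25);                 // T2 updates the age but has not committed
        int seen = dirtyRead();    // T1 reads 25 -- a dirty read
        rollback();                // T2 rolls back; the value 25 officially never existed
        System.out.println("T1 saw " + seen + ", committed value is " + committedAge);
    }
}
```

Raising the isolation level to READ_COMMITTED or above makes the database hide the uncommitted value from Transaction 1, eliminating this anomaly.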
Unable to load the appropriate JDBC drivers before calling the getConnection()
method.
It can specify an invalid or wrong JDBC URL, which cannot be recognized by the
JDBC driver.
This error may occur when one or more shared libraries required by the bridge
cannot be loaded.
Class.forName("oracle.jdbc.driver.OracleDriver");
The MySQL Connector/J version 8.0 library comes with a JDBC driver class:
com.mysql.jdbc.Driver. Before Java 6, we had to load the driver explicitly using the
statement given below:
Class.forName("com.mysql.jdbc.Driver");
However, this statement is no longer needed, because of a new update in JDBC 4.0
that comes from Java 6. As long as you place the MySQL JDBC driver JAR file into the
classpath of your program, the driver manager can find and load the driver.
DriverManager.registerDriver(): DriverManager is a built-in Java class with a
static register member. Here we explicitly call the constructor of the driver
class and register the driver at run time.
DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
Here,
con: Reference to a Connection interface.
url: Uniform Resource Locator.
user: Username from which SQL command prompt is accessed.
password: Password from which SQL command prompt is accessed.
The URL in Oracle can be created as follows:
jdbc:oracle:thin:@localhost:1521:xe
Where oracle represents the database used, thin is the driver used, @localhost is the
IP(Internet Protocol) address where the database is stored, 1521 is the port number
and xe represents the service provider.
All 3 parameters given above are of string type and are expected to be declared by
the programmer before the function call. Use of this can be referred from the final
code of an application.
The URL in MySQL can be created as follows:
jdbc:mysql://localhost:3306/test1
Where localhost represents the hostname or IP address of the MySQL server, 3306 is
the port number of the server (3306 by default), and test1 is the name of the
database on the server.
Create a statement:
Once a connection establishment is done, you can interact with the database.
The Statement, PreparedStatement, and CallableStatement JDBC interfaces
will define the methods that permit you to send SQL commands and receive
data from the database.
We can use JDBC Statement as follows:
Statement st = con.createStatement();
Here, con is a reference to the Connection interface used in the earlier step.
Execute the query:
Here, query means an SQL query. We can have various types of queries. A few of
them are as follows:
Query for updating or inserting a table in a database.
Query for data retrieval.
The executeQuery() method that belongs to the Statement interface is used for
executing queries related to values retrieval from the database. This method
returns the ResultSet object which can be used to get all the table records.
The executeUpdate(sql_query) method of the Statement interface is used for
executing queries related to the update/insert operation.
Example:
int m = st.executeUpdate(sql);
if (m==1)
System.out.println("Data inserted successfully : "+sql);
else
System.out.println("Data insertion failed");
import java.sql.*;
class OracleCon
{
public static void main(String a[])
{
//Creating the connection
String url = "jdbc:oracle:thin:@localhost:1521:xe";
String user = "system";
String password = "123";
try
{
Class.forName("oracle.jdbc.driver.OracleDriver");
Connection con = DriverManager.getConnection(url, user, password);
String sql = "INSERT INTO Employees VALUES (1, 'John')"; //sample query (the original is not shown)
Statement st = con.createStatement();
int m = st.executeUpdate(sql);
if (m == 1)
System.out.println("Data inserted successfully : "+sql);
else
System.out.println("Data insertion failed");
con.close();
}
catch(Exception ex)
{
System.err.println(ex);
}
}
}
import java.sql.*;
class MysqlCon
{
public static void main(String args[])
{
//Creating the connection
String url = "jdbc:mysql://localhost:3306/test1";
String user = "system";
String password = "123";
try
{
Class.forName("com.mysql.jdbc.Driver");
Connection con = DriverManager.getConnection(url, user, password);
String sql = "INSERT INTO Employees VALUES (1, 'John')"; //sample query (the original is not shown)
Statement st = con.createStatement();
int m = st.executeUpdate(sql);
if (m == 1)
System.out.println("Data inserted successfully : "+sql);
else
System.out.println("Data insertion failed");
con.close();
}
catch(Exception ex)
{
System.err.println(ex);
}
}
}
A sequence of actions (SQL statements) executed as a single unit is called a
transaction. Transaction management plays an important role in RDBMS-
oriented applications to maintain data consistency and integrity.
Transaction Management can be described well – by using ACID properties. ACID
stands for Atomicity, Consistency, Isolation, and Durability.
Atomicity - If all queries are successfully executed, then only data will be
committed to the database.
Consistency - It ensures bringing the database into a consistent state after
any transaction.
Isolation - It ensures that the transaction is isolated from other
transactions.
Durability - If a transaction has been committed once, it will remain always
committed, even in the situation of errors, power loss, etc.
Need for Transaction Management:
When creating a connection to the database, the auto-commit mode will be
selected by default. This implies that every time when the request is executed, it
will be committed automatically upon completion.
We might want to commit the transaction after the execution of a few more SQL
statements. In such a situation, we must set the auto-commit value to False, so
that the data will not be committed before executing all the queries. If we get
an exception in the transaction, we can roll back the changes made and restore
the state as before.
setAutoCommit() method:
The value of AutoCommit is set to TRUE by default. After the SQL statement
execution, it will be committed automatically. By using this method we can set
the value for AutoCommit.
Syntax: conn.setAutoCommit(boolean_value)
Here, boolean_value is set to TRUE for enabling autocommit mode for the
connection, FALSE for disabling it.
Commit() method:
The commit() method is used for committing the data. After the SQL statement
execution, we can call the commit() method. It will commit the changes made
by the SQL statement.
Syntax: conn.commit();
Rollback() method:
The rollback() method is used to undo the changes made since the last commit. If
we face any problem or exception in the SQL statement execution flow, we may
roll back the transaction.
Syntax: conn.rollback();
setSavepoint() method:
If you have set a savepoint in the transaction (a group of SQL statements), you
can use the rollback(savepoint) method to undo all the changes made after that
savepoint if something goes wrong within the current transaction. The
setSavepoint() method is used to create a new savepoint which refers to the
current state of the database within the transaction.
Syntax: Savepoint sp= conn.setSavepoint("MysavePoint")
releaseSavepoint() method:
It is used for deleting or releasing the created savepoint.
Syntax: conn.releaseSavepoint("MysavePoint");
The transaction isolation level is a value that decides the level at which
inconsistent data is permitted in a transaction, which means it represents the
degree of isolation of one transaction from another. A higher level of isolation
will result in improvement of data accuracy, but it might decrease the number of
concurrent transactions. Similarly, a lower level of isolation permits for more
concurrent transactions, but it reduces the data accuracy.
To ensure data integrity during transactions in JDBC, the DBMS makes use of
locks to prevent other accesses to the data involved in the transaction. Such
locks are necessary for preventing dirty reads, non-repeatable reads, and
phantom reads in the database.
The isolation level governs the locking mechanism used by the DBMS and can be
set using the setTransactionIsolation() method. You can obtain details about the
level of isolation used by the connection using the getTransactionIsolation() method.
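The standard levels are exposed as integer constants on java.sql.Connection; the following prints them from least to most isolated. The commented setTransactionIsolation() line shows typical usage on a live connection:

```java
import java.sql.Connection;

public class IsolationLevelsDemo {
    public static void main(String[] args) {
        // Constants defined by java.sql.Connection; higher values are stricter
        System.out.println("TRANSACTION_NONE             = " + Connection.TRANSACTION_NONE);
        System.out.println("TRANSACTION_READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("TRANSACTION_READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("TRANSACTION_REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("TRANSACTION_SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);
        // Typical usage on a live connection (not run here):
        // con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}
```

READ_UNCOMMITTED permits dirty reads; READ_COMMITTED prevents them; REPEATABLE_READ also prevents non-repeatable reads; SERIALIZABLE additionally prevents phantom reads.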
Introduction to J2EE
J2EE (Java Enterprise Edition) standards were first proposed by Oracle (Sun
Microsystems) to help developers develop, build and deploy reusable, distributed,
reliable, scalable, portable and secure enterprise-level business applications. In
simple terms, J2EE constitutes a set of frameworks, a collection of APIs and various
J2EE technologies like JSP, Servlets etc that are used as standards for simplifying
the development and building of large scale applications.
It is aimed at easing the development, build and deployment process of enterprise-
level applications that can run on different platforms supporting Java. J2EE
remains the most popular standard followed by the Java developer community,
which is why it is important for developers to know J2EE concepts and have
hands-on experience with them.
In this article, we will see the most commonly asked interview questions on J2EE for
both freshers and experienced professionals.
Support for Web Services: J2EE provides a platform to develop and deploy web
services. The JAX-RPC (Java API for XML based Remote Procedure Call) helps
developers develop SOAP-based portable and interoperable web services, clients
and endpoints.
Faster Time to Market: J2EE uses the concept of containers for simplifying the
development. This helps in business logic separation from lifecycle
management and resources which aids developers to focus on business logic
than on the infrastructure. For instance, the EJB (Enterprise JavaBeans)
container takes care of threading, distributed communication, transaction
management, scaling etc and provides a necessary abstraction to the
developers.
Compatibility: J2EE platform follows the principle of “Write Once, Run
Anywhere”. It provides comprehensive standards and APIs that ensures
compatibility among different application vendors resulting in the portability of
applications.
Simplified connectivity: J2EE helps in easier applications connectivity which
allows utilizing the capabilities of different devices. It also provides JMS (Java
Message Service) to integrate diverse applications in asynchronous and loosely
coupled ways. It also provides CORBA (Common Object Request Broker
Architecture) support for linking systems tightly via remote calls.
Due to all the above benefits packed in one technology, it helps the developers to
reduce the TCO (Total Cost of Ownership) and also focus more on actual business
logic implementation.
Java API for XML-Based RPC (JAX-RPC): This is used to build web services and
clients that make use of XML and Remote Procedure Calls.
Java Server Pages (JSP): This is used for delivering XML and HTML documents.
Apart from these, we can make use of OutputStream for delivering other data
types as well.
Java Servlets: Servlets are classes used for extending the capabilities of
servers that host applications and can be accessed using the request-response
model.
Enterprise Java Beans (EJB): This is a server-side component that is used for
encapsulating the application’s business logic by providing a runtime
environment, security, lifecycle management, transaction management
and other services.
J2EE Connector Architecture: This defines standard architecture to connect
J2EE platforms to different EIS (Enterprise Information Systems) such as
mainframe processes, database systems and different legacy applications coded
in another language.
J2EE Deployment API: Provides specifications for web services deployment.
Java Management Extensions (JMX): They are used for supplying tools for
monitoring and managing applications, objects, devices and networks.
J2EE Authorization Contract for Containers (JACC): This is used to define
security contracts between authorization policy modules and application
servers.
Java API for XML Registries (JAXR): This provides standard API to access
different XML Registries to enable infrastructure for the building and
deployment of web services.
Java Message Service (JMS): This is a messaging standard that allows different
J2EE components to create, send, receive and read messages, enabling
communication in a distributed, loosely coupled, asynchronous and reliable
manner.
Java Naming and Directory Interface (JNDI): This is an API that provides
naming and directory functionality for Java-based applications.
Java Transaction API (JTA): This is used for specifying Java standard interfaces
between transaction systems and managers.
Common Object Request Broker Architecture (CORBA): This is a standard
defined by the Object Management Group (OMG), designed to facilitate
communication between systems deployed on diverse platforms.
JDBC data access API: This provides API for getting data from any data sources
like flat files, spreadsheets, relational databases etc.
J2EE is made up of 3 main components (tiers) - Client tier, Middle tier, Enterprise
data tier as shown in the below image:
Client Tier: This tier has programs and applications which interact with the user
and are generally located on machines different from the server. Here,
different inputs are taken from the user, these requests are forwarded to the
server for processing, and the response is sent back to the client.
Middle Tier: This tier comprises Web components and EJB containers. The
web components are either servlet or JSP pages that process the request and
generate the response. At the time of the application’s assembly, the client’s
static HTML codes, programs and applets along with the server’s components
are bundled within the web components. The EJB components are present for
processing inputs from the user which are sent to the Enterprise Bean that is
running in the business tier.
Enterprise Data Tier: This tier includes database servers, resource planning
systems and various other data sources located on separate machines, which
are accessed by components of the business tier. Technologies like JPA, JDBC,
the Java Transaction API, the Java Connector Architecture etc. are used in this
tier.
JDBC or Java Database Connectivity provides guidelines and APIs for connecting
databases from different vendors like MySQL, Oracle, PostgreSQL etc for getting
data. JNDI (Java Naming and Directory Interface) helps in providing logical structure
to retrieve a resource from the database, EJB beans, messaging queues etc without
knowing the actual host address or port. A resource can be registered with JNDI and
then those registered application components can be accessed using the JNDI name.
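The lookup pattern described above can be sketched in plain Java. This is a hedged illustration: the JNDI name java:comp/env/jdbc/InventoryDS is an invented example binding, and outside a container the lookup fails because no JNDI provider is configured.

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiLookupSketch {
    // Tries a JNDI lookup; the name passed in is a hypothetical example binding
    public static String lookupOrExplain(String name) {
        try {
            Object resource = new InitialContext().lookup(name);
            // Inside a J2EE container this would typically be a javax.sql.DataSource
            return "Found: " + resource;
        } catch (NamingException e) {
            // A standalone JVM has no JNDI provider configured, so the lookup fails
            return "No JNDI provider available: " + e.getClass().getSimpleName();
        }
    }

    public static void main(String[] args) {
        System.out.println(lookupOrExplain("java:comp/env/jdbc/InventoryDS"));
    }
}
```

In a real container, the same lookup would return whatever resource the deployment descriptor registered under that name, without the application knowing the host or port.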
11. What are J2EE applets? Why do we use them?
Applets are J2EE client components that are written in Java and are executed in a
web browser or a variety of other devices which supports the applet programming
model. They are used for providing interactive features to web apps and help in
providing small, portable embedded Java programs in HTML pages which will be run
automatically when we view the pages.
Model: This component defines the internal system state. It can either be a Java
Beans cluster or a single bean depending on the application architecture.
View: Struts make use of JSP technology for designing views of enterprise-level
applications.
Controller: It is a servlet and is used for managing user actions that process
requests and respond to them.
The database records cannot be directly consumed by the Java applications as Java
only deals with objects. ORM again plays a major role in the transformation of
database records to Java objects.
Java Servlets dynamically process requests and responses. JSP pages are text-based
documents that are executed as servlets but allow a more natural approach to
creating static content.
17. What are the differences between JVM, JIT, JDK and JRE?
Java Virtual Machine (JVM): Introduced for managing system memory and to
provide a portable execution environment for Java applications. It is used for
compiling byte code to machine code completely.
Just in Time Compilation (JIT): This is a part of the JVM and was developed for
improving JVM performance. It is used for compiling only reusable byte code to
machine code.
Java Development Kit (JDK): JDK is a cross-platform software development
environment offering various collections of libraries and tools required for
developing Java applications and applets. JDK is essential for writing and
running programs in Java.
Java Runtime Environment (JRE): JRE is the part of JDK that consists of the JVM,
core classes and support libraries. It is used for providing a runtime environment
for running Java programs. JRE is a subset of JDK and is like a container that
consists of the JVM, supporting libraries and other files; it doesn't have
development tools such as compilers and debuggers.
Since the Java application uses a connection pool, it has active connections that
would get disconnected if the database goes down. When the queries are executed to
retrieve or modify data, then we will get a Socket exception.
Web Server:
Web servers are computer programs that accept requests and return responses
based on them. These are useful for serving static content for applications. This
server makes use of HTML and HTTP protocols.
24. What is the purpose of heap dumps and how do you analyze
a heap dump?
Heap dumps consist of a snapshot of all live objects on Java heap memory that are
used by running Java applications. Detailed information for each object like type,
class name, address, size and references to other objects can be obtained in the heap
dump. Various tools help in analyzing heap dumps in Java. For instance, JDK itself
provides jhat tool for analysing heap dump. Heap dumps are also used for analysing
memory leaks which is a phenomenon that occurs when there are objects that are
not used by the application anymore and the garbage collection is not able to free
that memory as they are still shown as referenced objects. Following are the causes
that result in memory leaks:
Continuously instantiating objects without releasing them.
Unclosed connection objects (such as connections to the database) post the
required operation.
Static variables holding on to references of objects.
Adding objects to a HashMap without overriding the hashCode() and equals()
methods. If these methods are not overridden, the HashMap can grow
continuously because duplicates are not recognized.
Unbounded caches.
Listener methods that are uninvoked.
Due to this, the application keeps consuming more and more memory and eventually
this leads to OutOfMemory Errors and can ultimately crash the application. We can
make use of the Eclipse Memory Analyzer or jvisualVM tool for analysing heap dump
to identify memory leaks.
This heap dump contains live objects that are stored in heap_dump.hprof file.
Process ID (PID) of the Java process is needed to get the dump that can be obtained
by using ps or grep commands.
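On HotSpot-based JDKs, a heap dump can also be triggered programmatically through the HotSpotDiagnosticMXBean, which is roughly equivalent to running jmap -dump:live,format=b,file=heap_dump.hprof <pid>. A minimal sketch, assuming a HotSpot JVM:

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumpSketch {
    // Writes a heap dump of the current JVM to 'path' (the file must not already exist)
    public static File dumpHeap(String path) {
        try {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            bean.dumpHeap(path, true); // true = dump only live (reachable) objects
            return new File(path);
        } catch (Exception e) {
            throw new RuntimeException("heap dump failed", e);
        }
    }

    public static void main(String[] args) {
        File dump = dumpHeap("heap_dump.hprof");
        System.out.println("wrote " + dump.length() + " bytes");
    }
}
```

The resulting .hprof file can then be opened in jhat, jvisualVM or the Eclipse Memory Analyzer as described above.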
J2EE: J2EE is a standard or specification defined by Sun/Oracle which is used for
web development.
Spring: Spring is a framework used for designing templates for an application.
EAR stands for Enterprise Archive file and it consists of web, EJB and client
components all compressed and packed into a file called .ear file. EAR files allow us
to deploy different modules onto the application server simultaneously.
WAR stands for Web Archive file and consists of all web components packed and
compressed in a .war file. This file makes it easy to test and deploy web applications
in a single step.
JAR stands for Java Archive file. It consists of all the libraries and class files that
constitute an API. These are packed and compressed into a file called a .jar file and
are used for deploying an entire application, including classes and resources, in a
single step.
The below image describes the different phases of the servlet lifecycle:
A Java servlet is typically multithreaded. This means that multiple requests can be
sent to the same servlet and they can be executed at the same time. All the local
variables (not pointing to shared resources) inside the servlet are automatically
thread-safe and are request specific. Care has to be taken when the servlet is
accessing or modifying the global shared variable. The servlet instance lifecycle for
different requests are managed by the web container as follows:
User clicks on a link in a client that requests a response from a server. In this
instance, consider that the client performs GET request to the server as shown in
the image below:
The web container intercepts the request and identifies which servlet has to
serve the request by using the deployment descriptor file and then creates two
objects as shown below-
HttpServletRequest - to send servlet request
HttpServletResponse - to get the servlet response
The web container then creates and allocates a thread that in turn calls the
service() lifecycle method of the servlet, passing the request and response
objects as parameters as shown below:
Servlet makes use of the response object obtained from the servlet method for
writing the response to the client.
Once the request is served completely, the thread dies and the objects are made
ready for garbage collection.
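The thread-safety point above can be illustrated without the Servlet API. The sketch below (all names are illustrative stand-ins, not real servlet types) simulates concurrent "requests" against one shared instance: the local variable is naturally per-thread, while the shared counter must use atomic access.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ServletThreadingSketch {
    // Shared state, like an instance field of a servlet: all request threads see it
    static final AtomicInteger hits = new AtomicInteger();

    // Stand-in for service(): the local variable is per-thread and thread-safe,
    // while the shared counter needs atomic (or synchronized) access
    static int handleRequest(int payload) {
        int local = payload * 2;   // local: each thread gets its own copy
        hits.incrementAndGet();    // shared: safe only because AtomicInteger is used
        return local;
    }

    // Simulates n concurrent requests hitting the same "servlet" instance
    static int simulateRequests(int n) {
        int before = hits.get();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < n; i++) {
            pool.submit(() -> handleRequest(1));
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return hits.get() - before; // with AtomicInteger, no increments are lost
    }

    public static void main(String[] args) {
        System.out.println("handled: " + simulateRequests(100)); // prints "handled: 100"
    }
}
```

With a plain int instead of AtomicInteger, concurrent increments could be lost, which is exactly the care the text asks for when a servlet modifies a global shared variable.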
Conclusion:
J2EE defines standards and specifications for various components such as e-mailing,
database connectivity, security, XML parsing, CORBA communication etc that help in
developing complex, reliable, secure and distributed servlets and applications that
follow the client-server model. It provides various API interfaces that act as standards
between different vendor adapters and J2EE components. This ensures that the
application components are not dependent on vendor codes. Due to this, J2EE has
been very popular among Java developers in the field of software development.
19. What are the methods that a servlet can use to get information about the
server?
20. How can a servlet get the name of the server and the port number for a
particular request?
21. How can a servlet get information about the client machine?
22. Explain the Single-Thread Model in servlets.
23. How does Background Processing take place in servlets?
24. How does Servlet collaboration take place?
25. Explain Request parameters associated with servlets.
26. What are the three methods of inter-servlet communication?
27. What are the reasons we use inter-servlet communication?
28. What do you mean by Servlet Manipulation?
29. What is the javax.servlet package?
Introduction to Servlet:
A servlet is a small Java program that runs within a Web server. Servlets receive and
respond to requests from Web clients, usually across HTTP, the HyperText Transfer
Protocol. Servlets can also access a library of HTTP-specific calls and receive all the
benefits of the mature Java language, including portability, performance, reusability,
and crash protection. Servlets are often used to provide rich interaction functionality
within the browser for users (clicking links, form submission, etc.).
To handle requests that are appropriate for the servlet, a typical servlet must
override its service() method. The service() method accepts two parameters: the
request object and the response object. The request object informs the servlet
about the request, whereas the response object is used to return the response.
As opposed to this, an HTTP servlet typically does not override the service() method.
However, it actually overrides the doGet() to handle the GET requests and the
doPost() to handle POST requests. Depending on the type of requests it needs to
handle, an HTTP servlet can override either or both of these methods.
After the web container loads and instantiates the servlet class and before it delivers
requests from clients, the web container initializes the servlet. To customize this
process to allow the servlet to read persistent configuration data, initialize resources,
and perform any other one-time activities, you override the init method of the
Servlet interface.
Example:
When a servlet container determines that a servlet should be removed from service
(for example, when a container wants to reclaim memory resources or when it is
being shut down), the container calls the destroy method of the Servlet interface.
The following destroy method releases the database object created in the init
method.
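A hedged sketch of the init/destroy contract described above, using a simplified stand-in for ServletConfig rather than the real javax.servlet types (the jdbcUrl parameter name is hypothetical):

```java
import java.util.Map;

public class LifecycleServletSketch {
    // Simplified stand-in for javax.servlet.ServletConfig (not the real API)
    public interface Config {
        String getInitParameter(String name);
    }

    private String dbUrl;       // "resource" acquired once in init()
    private boolean connected;

    // Called once by the container before any request is delivered
    public void init(Config config) {
        dbUrl = config.getInitParameter("jdbcUrl"); // hypothetical parameter name
        connected = (dbUrl != null);                // stand-in for opening a connection
    }

    // Called once when the container takes the servlet out of service
    public void destroy() {
        connected = false;                          // stand-in for releasing the resource
    }

    public boolean isConnected() {
        return connected;
    }

    public static void main(String[] args) {
        Map<String, String> params = Map.of("jdbcUrl", "jdbc:h2:mem:demo");
        LifecycleServletSketch servlet = new LifecycleServletSketch();
        servlet.init(params::get);
        System.out.println("connected: " + servlet.isConnected()); // prints "connected: true"
        servlet.destroy();
    }
}
```

The point of the pattern is that acquisition happens exactly once before any request and release happens exactly once at shutdown, mirroring the init() and destroy() calls the container makes on a real servlet.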
SERVER_NAME req.getServerName()
SERVER_SOFTWARE getServletContext().getServerInfo()
SERVER_PROTOCOL req.getProtocol()
SERVER_PORT req.getServerPort()
REQUEST_METHOD req.getMethod()
PATH_INFO req.getPathInfo()
PATH_TRANSLATED req.getPathTranslated()
SCRIPT_NAME req.getServletPath()
DOCUMENT_ROOT req.getRealPath("/")
QUERY_STRING req.getQueryString()
REMOTE_HOST req.getRemoteHost()
REMOTE_ADDR req.getRemoteAddr()
AUTH_TYPE req.getAuthType()
REMOTE_USER req.getRemoteUser()
CONTENT_TYPE req.getContentType()
CONTENT_LENGTH req.getContentLength()
HTTP_ACCEPT req.getHeader("Accept")
HTTP_USER_AGENT req.getHeader("User-Agent")
Page 9 © Copyright by Interviewbit
Servlet Interview Questions
The above method returns the value of the named init parameter or if the named init
parameter does not exist it will return null. The value returned is always a single
string. The servlet then interprets the value.
For instance, one can add custom tags within a page, and then a servlet can replace
these with HTML content.
Support for esoteric data types
For instance, one can provide a filter that converts nonstandard image types to GIF or
JPEG for the unsupported image types.
17. What is the life cycle contract that a servlet engine must
conform to?
The life cycle contract that a servlet engine must conform to is as follows:
19. What are the methods that a servlet can use to get
information about the server?
A servlet can learn about its server using four different methods. Two of these
methods are called on the ServletRequest object passed to the servlet. The other
two are called on the ServletContext object in which the servlet is executing.
20. How can a servlet get the name of the server and the port
number for a particular request?
A servlet can get the name of the server and the port number for a particular request
with getServerName() and getServerPort() , respectively:
These methods are attributes of ServletRequest because the values can change for
different requests if the server has more than one name (a technique called virtual
hosting).
The getServerInfo() and getAttribute() methods of ServletContext supply
information about the server software and its attributes:
Conclusion:
Unlike CGI and FastCGI, which use many processes to handle separate programs and
separate requests, servlets are all handled by separate threads within the webserver
process. Thus, the servlets are efficient and scalable. As servlets run within the web
server, they can interact very closely with the server to do things that are not possible
with CGI scripts.
An advantage of servlets is that they are portable: across operating systems (as
with all Java programs) and also across web servers.
All of the major web servers support servlets.
28. Conclusion
Java Server Pages (JSP) is a server-side programming technology that enables a
dynamic, platform-independent way of building web-based applications. JSP is an
integral part of Java EE, a complete platform for enterprise-class applications. This
means that JSP can be used for everything from the simplest applications to the
most complex and demanding ones.
JavaServer Pages (JSP) often serve the same purpose as programs implemented
using the Common Gateway Interface (CGI). However, in addition, JSP offers several
advantages as compared to CGI.
<%@ include ... %>: Includes a file during the translation phase.
<%@ taglib ... %>: Declares a tag library, containing custom actions, used on the
page.
<% ... %>: Scriptlet used to embed scripting code.
+ : Addition
* : Multiplication
/ or div : Division
% or mod : Modulo (remainder)
&& or and : Test for logical AND
for (int i = 0; i < n; i++) {
    // block of statements
}

while (i < n) {
    // block of statements
}
request (javax.servlet.http.HttpServletRequest): The request object is used to get
request information such as a request parameter, header information, server name,
etc.
response (javax.servlet.http.HttpServletResponse): The response object is an
instance of a class that represents the response that can be given to the client.
pageContext (javax.servlet.jsp.PageContext): This is used to get, set, and remove
attributes from a particular scope.
session (javax.servlet.http.HttpSession): This is used to get, set, and remove
attributes in session scope and also to get session information.
Iteration
Conditional logic
Catch exception
URL forward
Redirect, etc.
Following is the syntax to include a tag library:
20. Which methods are used for reading form data using JSP?
JSP handles form data parsing automatically. It does so by using the following
methods depending on the situation:
getParameter() − To get the value of a form parameter, call the
request.getParameter() method.
getParameterValues() − If a parameter appears more than once and it returns
multiple values, call this method.
getParameterNames() − This method is used if, in the current request, you want
a complete list of all parameters.
getInputStream() − This method is used for reading binary data streams from
the client.
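The difference between getParameter() and getParameterValues() can be sketched with a plain map standing in for the container's parsed form data. This is an illustration only, not the real Servlet API; real code would call these methods on an HttpServletRequest.

```java
import java.util.Map;

public class FormParamsSketch {
    // Stand-in for request.getParameter(name): first value or null
    public static String getParameter(Map<String, String[]> form, String name) {
        String[] values = form.get(name);
        return (values == null || values.length == 0) ? null : values[0];
    }

    // Stand-in for request.getParameterValues(name): all values of a repeated field
    public static String[] getParameterValues(Map<String, String[]> form, String name) {
        return form.get(name);
    }

    public static void main(String[] args) {
        Map<String, String[]> form = Map.of("hobby", new String[] {"chess", "golf"});
        System.out.println(getParameter(form, "hobby"));              // prints "chess"
        System.out.println(getParameterValues(form, "hobby").length); // prints "2"
    }
}
```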
The JSP page is turned into a servlet so that all the JSP elements can be processed by
the server. Then the servlet is executed. The servlet container and the JSP container
are often combined into one package under the name “web container”.
In the translation phase, the JSP container is responsible for converting the JSP page
into a servlet and compiling the servlet. The translation phase for a page is
automatically initiated when the first request for the page is received.
In the “request processing” phase, the JSP container is also responsible for invoking
the JSP page implementation class to process each request and generate the
response.
When a page request of JSP is processed, the template text and the dynamic content
generated by the JSP elements are merged, and the result is sent as the response to
the browser.
jsp:forward: This action tag forwards the request and response to another
resource.
jsp:include: This action tag is used to include another resource.
jsp:useBean: This action tag is used to create and locate bean objects.
jsp:setProperty: This action tag is used to set the value of the property of the
bean.
jsp:getProperty: This action tag is used to print the value of the property of the
bean.
jsp:plugin: This action tag is used to embed another component such as the
applet.
jsp:param: This action tag is used to set the parameter value. It is used in
forward and includes mostly.
jsp:fallback: This action tag can be used to print a message if the plugin is not
working.
Here the <% %> tags are scriptlet tags, and within them we can place Java code.
It is an architecture that separates business logic, presentation, and data. In this, the
flow starts from the view layer, where the request is raised and processed in the
controller layer. This is then sent to the model layer to insert data and get back the
success or failure message.
Conclusion
28. Conclusion
The Java 2 Enterprise Edition (J2EE) takes the task of building an Internet presence
and transforms it to the point where developers can use Java to efficiently create
multi-tier, server-side applications. In late 1999, Sun Microsystems added a new
element to the collection of Enterprise Java tools, called the JavaServer Pages (JSP).
The JSP, built on top of Java servlets, is designed to increase the efficiency with which
programmers, and even nonprogrammers, can create web content.
JavaServer Pages helps in developing web pages that include dynamic content. A JSP
page can change its content based on any number of variable items. A JSP page not
only contains standard markup language elements like a regular web page but also
contains special JSP elements that allow the server to insert dynamic content in the
page. This combination of standard elements and custom elements allows for the
creation of powerful web apps.
@RequestMapping:
This provides the routing information and informs Spring that any HTTP
request matching the URL must be mapped to the respective method.
org.springframework.web.bind.annotation.RequestMapping has to be imported
to use this annotation.
@RestController:
This is applied to a class to mark it as a request handler thereby creating
RESTful web services using Spring MVC. This annotation adds the
@ResponseBody and @Controller annotation to the class.
org.springframework.web.bind.annotation.RestController has to be imported
to use this annotation.
Check out more Interview Questions on Spring Boot here.
Before:
This advice executes before a join point, but it does not have the ability to
prevent execution flow from proceeding to the join point (unless it throws
an exception).
To use this, use @Before annotation.
AfterReturning:
This advice is to be executed after a join point completes normally i.e. if a
method returns without throwing an exception.
To use this, use the @AfterReturning annotation.
AfterThrowing:
This advice is to be executed if a method exits by throwing an exception.
To use this, use the @AfterThrowing annotation.
After:
This advice is to be executed regardless of the means by which a join point
exits (normal return or exception encountered).
To use this, use the @After annotation.
Around:
This is the most powerful advice; it surrounds a join point such as a method
invocation.
To use this, use the @Around annotation.
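The control flow of these advice types can be mimicked in plain Java to show what each one sees. This is a stand-in for Spring AOP, not the real framework: around() plays the role of an @Around advice, and joinPoint.get() plays the role of ProceedingJoinPoint.proceed().

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class AroundAdviceSketch {
    static final List<String> log = new ArrayList<>();

    // Plain-Java stand-in for an @Around advice: code runs before and after the join point
    static <T> T around(String name, Supplier<T> joinPoint) {
        log.add("before " + name);
        try {
            T result = joinPoint.get();          // "proceed()" to the advised method
            log.add("afterReturning " + name);   // only on normal completion
            return result;
        } catch (RuntimeException e) {
            log.add("afterThrowing " + name);    // only on exceptional exit
            throw e;
        } finally {
            log.add("after " + name);            // on both normal and exceptional exit
        }
    }

    public static void main(String[] args) {
        int result = around("calc", () -> 21 * 2);
        System.out.println(result + " " + log);
    }
}
```

The try/catch/finally placement mirrors the semantics listed above: AfterReturning fires only on normal return, AfterThrowing only on an exception, and After in both cases.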
30. What are some of the classes for Spring JDBC API?
RowMapper:
This is an enhanced version of ResultSetExtractor that saves a lot of code.
It allows mapping a row of the relation to an instance of a user-defined
class.
It iterates the ResultSet internally and adds it into the result collection
thereby saving a lot of code to fetch records.
Introduction
If you are a fresh graduate looking for a job in the software development industry, or
an experienced developer planning to switch to a new job, it is essential to prepare
for the interviews. One of the essential skills for any Java developer is understanding
Java Persistence API (JPA), which is the standard specification for mapping Java
objects to relational databases.
In this article, we have compiled a list of JPA Interview Questions that are essential
and frequently asked during the interviews. We have categorized these questions
into the following sections:
JPA interview questions for freshers
JPA interview questions for experienced developers
JPA MCQ Questions
The questions in the first section are designed to test the basic knowledge of JPA,
while the questions in the second section are more advanced and require a deeper
understanding of the framework.
So, let's get started and dive into the list of important questions for the JPA
interview.
Java Persistence API (JPA) is a specification for managing data persistence in Java
applications. JPA is used to simplify the process of writing code for data persistence
by providing a high-level abstraction layer over the underlying data storage
technology, such as relational databases. JPA helps in mapping Java objects to
relational database tables and allows developers to perform CRUD (create, read,
update, delete) operations on data. JPA is often used together with Hibernate,
a popular open-source ORM (object-relational mapping) framework. It is a part of the
Java EE platform and is commonly used in enterprise applications.
In this diagram, the ORM Framework provides a set of interfaces and annotations
that allow developers to map Java objects to relational databases. JPA
implementation interacts with the Relational DB and uses ORM Framework to map
Java objects to the database.
Entities are defined using annotations, which provide metadata about how the entity
should be persisted and how it relates to other entities in the application. The most
commonly used annotation for defining entities is @Entity, which marks a Java class
as an entity. Entities typically have instance variables that correspond to columns in
the database table, and methods that provide access to these variables. JPA also
provides annotations for defining relationships between entities, such as
@OneToOne, @OneToMany, @ManyToOne, and @ManyToMany.
Entities can be persisted in the database using the JPA “EntityManager” interface,
which provides methods for creating, reading, updating, and deleting entities. When
an entity is persisted, JPA creates a corresponding row in the database table, and
when an entity is read from the database, JPA populates the entity's instance
variables with the corresponding column values.
In this diagram, the Entity represents a persistent data object, which is defined using
fields and methods. Each field corresponds to a column in the database table, and
each method provides access to these fields. The Id field is typically annotated with
@Id annotation to indicate that it is the primary key for the entity.
In this example, the JPQL query selects all ‘Employee’ objects that belong to the ‘IT’
department. The query is executed using the ‘createQuery’ method of the
‘EntityManager’ interface, and the ‘setParameter’ method is used to bind the value of
the ‘dept’ parameter to the query
In JPA, transactions are used to manage the interactions between Java code and the
underlying relational database. JPA provides a transaction management system that
allows developers to define and control transactions in their applications.
JPA defines a ‘javax.persistence.EntityTransaction’ interface that represents a
transaction between a Java application and the database. A typical usage pattern for
a JPA transaction involves the following steps:
Obtain an instance of the ‘EntityManager’ interface.
Begin a transaction using the ‘EntityTransaction’ interface's ‘begin()’ method.
Perform one or more database operations using the ‘EntityManager’ interface's
persistence methods, such as ‘persist()’, ‘merge()’, or ‘remove()’.
Commit the transaction using the ‘EntityTransaction’ interface's ‘commit()’ method.
If any errors occur during the transaction, roll back the transaction using the
‘EntityTransaction’ interface's ‘rollback()’ method.
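The transaction steps above can be sketched as plain-Java control flow. Tx is a simplified stand-in for javax.persistence.EntityTransaction, not the real interface; the point is the begin / work / commit-or-rollback shape.

```java
public class JpaTxSketch {
    // Simplified stand-in for javax.persistence.EntityTransaction (not the real API)
    static class Tx {
        String state = "none";
        void begin()    { state = "active"; }
        void commit()   { state = "committed"; }
        void rollback() { state = "rolled back"; }
    }

    // The begin / work / commit-or-rollback pattern from the steps above
    static String runInTransaction(Runnable work) {
        Tx tx = new Tx();
        tx.begin();
        try {
            work.run();      // persist()/merge()/remove() calls would go here
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();   // undo partial changes if anything failed
        }
        return tx.state;
    }

    public static void main(String[] args) {
        System.out.println(runInTransaction(() -> {}));  // prints "committed"
        System.out.println(runInTransaction(() -> { throw new RuntimeException("boom"); })); // prints "rolled back"
    }
}
```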
import org.springframework.data.repository.CrudRepository;
public interface ProductRepository extends CrudRepository<Product, Long> {
}
This interface provides basic CRUD functionality for the Product entity, such as
save(), delete(), findById(), and findAll().
Now let's create a repository using the JPA Repository interface:
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
This interface extends the JpaRepository interface and provides additional methods,
such as findByPriceGreaterThan() and findByNameContaining(). These methods are
defined using Spring Data's derived query method names and the @Query annotation,
respectively.
10. What is a Named Query in JPA? How is it used? And what are
the benefits of using this?
In JPA, a named query is a pre-defined query that is given a name and can be used in
multiple places in an application. It is defined in the entity class using the
@NamedQuery annotation and can be used to retrieve entities based on specific
criteria.
Consider the below snippet to understand better about this -
@Entity
@NamedQuery(
name = "Product.findByPriceGreaterThan",
query = "SELECT p FROM Product p WHERE p.price > :price"
)
public class Product {
// ...
}
In this code snippet, we create a TypedQuery object using the named query
"Product.findByPriceGreaterThan" and pass in the Product class as the expected
result type. We then set the value of the named parameter ":price" to 10.0 and
execute the query using getResultList() to retrieve a list of products that match the
criteria.
Using named queries in JPA has several benefits, including:
Reusability: named queries can be defined once and used multiple times
throughout the application.
Performance: named queries are compiled and cached by the JPA provider,
which can improve performance for frequently used queries.
Maintenance: named queries can be easily modified or updated in a central
location, rather than scattered throughout the codebase.
11. What are the various query methods in JPA to retrieve data
from the database? List some of the most used methods.
In JPA, there are several query methods that can be used to retrieve data from the
database:
createQuery(): This method creates a JPQL (Java Persistence Query Language)
query that can be used to retrieve data from the database. JPQL queries are
similar to SQL queries, but they operate on JPA entities rather than database
tables.
createNamedQuery(): This method creates a named JPQL query that has been
defined in the entity class using the @NamedQuery annotation.
createNativeQuery(): This method creates a native SQL query that can be used
to retrieve data from the database using SQL syntax. Native SQL queries can be
used when JPQL is not sufficient for complex queries or for accessing database-
specific features.
find(): This method retrieves an entity from the database by its primary key.
getReference(): This method retrieves a reference to an entity from the
database by its primary key, without actually loading the entity data from the
database.
createQuery(criteriaQuery): This method creates a JPA Criteria API query that
can be used to retrieve data from the database. The Criteria API provides a type-
safe, object-oriented way to construct queries at runtime.
getSingleResult(): This method executes a query and returns a single result. If
the query returns more than one result or no results, an exception is thrown.
getResultList(): This method executes a query and returns a list of results. If the
query returns no results, an empty list is returned.
EntityManager.find(): It immediately loads the entity from the database and returns
it as a fully initialized object. It throws an IllegalArgumentException if the argument
passed to the method is not a valid entity type.
EntityManager.getReference(): It returns a lightweight reference object that only
contains the entity's primary key and does not actually load the entity from the
database until a method other than getId() is called on the reference. It throws an
EntityNotFoundException if the entity does not exist in the database.
@Entity
public class Employee {
    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    @JoinColumn(name="department_id")
    private Department department;
}

@Service
@Transactional
public class EmployeeService {
    @Autowired
    private EntityManager entityManager;
}
JpaRepository.save() vs JpaRepository.saveAndFlush(): save() persists the entity but leaves it to the persistence provider to flush the change to the database (typically at transaction commit), whereas saveAndFlush() persists the entity and flushes the pending changes to the database immediately.
Entity Class Creation: The first stage in the lifecycle of a JPA application is the
creation of entity classes. Entity classes are Java classes that represent database
tables and have properties that correspond to columns in those tables.
Entity Mapping: The next stage is entity mapping, which involves defining the
mapping between the entity classes and the database tables. This is typically
done using annotations or XML configuration files, and it specifies how the
properties of the entity classes correspond to the columns in the database
tables.
Persistence Unit Creation: The third stage is the creation of a Persistence Unit,
which is a logical grouping of one or more entity classes and their associated
metadata. This is typically done using a persistence.xml file, which specifies the
database connection details, the list of entity classes to be managed, and any
additional configuration options.
EntityManagerFactory Creation: The next stage is the creation of an
EntityManagerFactory, which is responsible for creating EntityManager
instances. The EntityManagerFactory is typically created once at the start of the
application and is used to create EntityManager instances throughout the
application.
EntityManager Creation: The next stage is the creation of an EntityManager,
which provides the primary interface for interacting with the Persistence
Context. The EntityManager is responsible for managing the lifecycle of entity
objects, executing queries, and performing CRUD operations on the database.
Transaction Management: The next stage is transaction management, which
involves defining the boundaries of transactions and managing their lifecycle.
Transactions are used to ensure data consistency and integrity, and they are
typically managed using annotations or programmatic APIs.
Entity Lifecycle Management: The next stage is entity lifecycle management,
which involves managing the lifecycle of entity objects within the Persistence
Context. Entity objects can be in one of several states, including New, Managed,
Detached, and Removed, and their state can be changed using the
EntityManager API.
Query Execution: The final stage is query execution, which involves executing
JPQL queries to retrieve data from the database. JPQL is a query language that
is similar to SQL but is specific to JPA.
Note: This is a simplified view of the JPA lifecycle and there may be additional
stages or variations depending on the specific implementation and
configuration of the application.
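The Persistence Unit stage above is typically declared in a META-INF/persistence.xml file. A minimal sketch, where the unit name, entity class, and connection values are placeholders:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="my-unit">
        <!-- entity classes managed by this unit -->
        <class>com.example.Employee</class>
        <properties>
            <!-- connection details for the persistence provider -->
            <property name="javax.persistence.jdbc.url" value="jdbc:mysql://localhost:3306/mydb"/>
            <property name="javax.persistence.jdbc.user" value="root"/>
        </properties>
    </persistence-unit>
</persistence>
```

The EntityManagerFactory is then typically created once with Persistence.createEntityManagerFactory("my-unit").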
22. How does JPA handle optimistic locking? Can you give an
example of how you would implement optimistic locking in
JPA?
JPA (Java Persistence API) provides support for optimistic locking through the use of
version fields. Optimistic locking is a concurrency control mechanism that allows
multiple transactions to access the same data concurrently while ensuring data
consistency.
In JPA, optimistic locking is implemented by defining a version field on the entity
class. This version field is automatically incremented by the persistence provider
each time an entity is updated. When an entity is updated, JPA checks if the version
of the entity in the database matches the version of the entity in the persistence
context. If the versions do not match, it means that another transaction has modified
the entity in the meantime, and JPA throws an optimistic locking exception.
Consider the below snippets for the implementation of optimistic locking in JPA:
@Entity
public class Employee {
    @Id
    @GeneratedValue
    private Long id;

    private String name;

    @Version
    private int version;
}
In this example, we have an Employee entity with an id, a name, and a version field
annotated with @Version. The version field is an integer that JPA uses for optimistic
locking.
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();
// load the entity and modify it; the provider checks the version on commit
Employee employee = em.find(Employee.class, 1L);
employee.setName("Updated name");
em.getTransaction().commit(); // throws an optimistic locking exception if the version changed
em.close();
@Entity
public class Book {
    @Id
    @GeneratedValue
    private Long id;

    private String title;
    private String author;
    private double price;

    @Version
    private int version;
}
In this example, we have a Book entity with an id, a title, an author, a price, and a
version field annotated with @Version. The version field is an integer that JPA uses
for optimistic locking.
When an entity is updated, JPA checks whether the version of the entity in the
database matches the version of the entity in the persistence context. If the versions
match, JPA updates the entity and increments the version number. If the versions do
not match, JPA throws an optimistic locking exception.
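The check-and-increment behaviour described above can be sketched in plain Java, with a hypothetical StaleObjectException standing in for JPA's OptimisticLockException (in real JPA the provider issues "UPDATE ... WHERE id = ? AND version = ?" and inspects the affected row count):

```java
// Plain-Java sketch of the optimistic-locking check JPA performs on update.
public class OptimisticLockDemo {

    static class StaleObjectException extends RuntimeException {
        StaleObjectException(String msg) { super(msg); }
    }

    // The "database row": the current version of the entity.
    private int dbVersion = 0;

    // Attempt an update carrying the version the caller last read.
    public int update(int versionSeenByCaller) {
        if (versionSeenByCaller != dbVersion) {
            throw new StaleObjectException("Row was changed by another transaction");
        }
        dbVersion++; // a successful update increments the version
        return dbVersion;
    }

    public static void main(String[] args) {
        OptimisticLockDemo row = new OptimisticLockDemo();
        System.out.println("new version = " + row.update(0)); // first writer succeeds
        try {
            row.update(0); // second writer still holds version 0
        } catch (StaleObjectException e) {
            System.out.println("conflict detected");
        }
    }
}
```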
The @Version annotation can be applied to only one field per entity class. If the entity
has more than one field that needs to be used for optimistic locking, you can create a
composite version field using an embedded object or a concatenated string.
EntityManager em = entityManagerFactory.createEntityManager();
Query query = em.createQuery("SELECT e FROM Employee e ORDER BY e.name");
query.setFirstResult(0); // Starting index of the results to return
query.setMaxResults(10); // Maximum number of results to return
List<Employee> employees = query.getResultList();
em.close();
In this example, we have created a query to select all employees and order them by
name. We then set the starting index of the results to 0 and the maximum number of
results to 10 using the setFirstResult() and setMaxResults() methods. Finally, we
execute the query and retrieve the results using getResultList().
The advantages of using pagination over fetching all results at once include:
1. Reduced memory usage: When fetching a large number of results, it can
consume a lot of memory to hold all the results in memory at once. Pagination
allows you to retrieve a smaller subset of results at a time, reducing memory
usage.
2. Faster response times: If a query returns a large number of results, it can take a
long time to return all the results. Pagination allows you to retrieve smaller
subsets of results, which can improve response times.
3. Improved user experience: If you're displaying query results to users, it can be
overwhelming to display a large number of results at once. Pagination allows
you to display a smaller subset of results at a time, making it easier for users to
navigate through the results.
4. Better performance: When using pagination, the database can use more
efficient algorithms to retrieve and sort smaller subsets of results. This can result
in better performance than fetching all results at once.
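The setFirstResult() value used above is just a page offset. A tiny helper (the PageUtils name is ours, not a JPA API) shows the arithmetic:

```java
// Hypothetical helper translating a zero-based page number into the
// argument for Query.setFirstResult(); setMaxResults() takes the page size.
public class PageUtils {

    public static int firstResult(int pageNumber, int pageSize) {
        if (pageNumber < 0 || pageSize <= 0) {
            throw new IllegalArgumentException("invalid page arguments");
        }
        return pageNumber * pageSize; // row offset of the page's first result
    }

    public static void main(String[] args) {
        System.out.println(firstResult(0, 10)); // 0  (page 0 starts at row 0)
        System.out.println(firstResult(3, 10)); // 30 (page 3 starts at row 30)
    }
}
```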
You can then annotate the entity class or the entity listener class with the
@EntityListeners annotation to register the listener.
Consider the below example of a custom JPA entity listener:
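A minimal sketch of such a listener, assuming (per the description below) a User entity exposing a setCreationDate method:

```java
import javax.persistence.PrePersist;
import java.util.Date;

// Sketch of a custom entity listener: sets the creation date
// just before the entity is first persisted.
public class UserListener {

    @PrePersist
    public void prePersist(User user) {
        user.setCreationDate(new Date());
    }
}
```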
In this example, we have a UserListener class with a prePersist method that sets the
creation date of a user before it's persisted to the database. We can then annotate
the User entity with the @EntityListeners annotation to register the listener:
@Entity
@Table(name = "users")
@EntityListeners(UserListener.class)
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    private String username;
    private String password;

    @Temporal(TemporalType.TIMESTAMP)
    private Date creationDate;
}
In this example, we have a User entity with an id, a username, a password, and a
creation date field. We annotate the entity with @EntityListeners and specify the
UserListener class to register the listener.
We can use a custom entity listener in our application for various purposes, such as
auditing, validation, or processing entity lifecycle events. For example, we could use
a custom listener to calculate and update the average rating of a product when a new
review is added or to validate that a user has a unique email address before it's
persisted in the database, etc.
em.getTransaction().begin();
Employee employee = em.find(Employee.class, 1L);
em.lock(employee, LockModeType.OPTIMISTIC); // acquire an optimistic lock on the entity
employee.setName("Updated name");
em.getTransaction().commit();
In this example, the EntityManager finds the entity to update and acquires an
optimistic lock on it using the lock() method. The entity is then modified and the
transaction is committed. If another transaction has modified the entity in the
meantime, an OptimisticLockException will be thrown when the transaction tries to
commit the changes.
@Entity
public class Employee {
    @Id
    @GeneratedValue
    private Long id;

    @OneToOne(mappedBy="employee")
    private Address address;
}

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @OneToOne
    @JoinColumn(name="employee_id")
    private Employee employee;
}
@Entity
public class Department {
    @Id
    @GeneratedValue
    private Long id;

    @OneToMany(mappedBy="department")
    private List<Employee> employees;
}

@Entity
public class Employee {
    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    @JoinColumn(name="department_id")
    private Department department;
}
In this example, the @OneToMany annotation is used to specify that the Department
entity has a one-to-many relationship with the Employee entity. The mappedBy
attribute is used to indicate that the relationship is mapped by the department field
in the Employee entity.
29. Can you explain how JPA handles entity state transitions
(e.g. from new to managed, managed to remove, etc.)?
What are some best practices for managing entity states in
JPA?
JPA manages the state of entities as they are created, modified, and deleted in the
application. The state of an entity can be one of the following:
1. New: When an entity is first created using the new operator, it's in a new state.
2. Managed: Once an entity is persisted using the EntityManager.persist() method,
it enters the managed state. Entities in this state are managed by the
persistence context, and any changes made to the entity are tracked and
automatically synchronized with the database.
3. Detached: Entities become detached when they are removed from the
persistence context or when the persistence context is closed. In this state,
changes made to the entity are not tracked or synchronized with the database.
However, they can be re-attached to the persistence context later using the
EntityManager.merge() method.
4. Removed: When an entity is removed using the EntityManager.remove()
method, it enters the removed state. Entities in this state are scheduled for
deletion from the database when the transaction is committed.
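The states and transitions listed above can be sketched as a small, purely illustrative state machine (the method names mirror, but are not, the EntityManager API; JPA itself tracks this inside the persistence context):

```java
// Illustrative model of JPA entity states and the EntityManager
// calls that move an entity between them.
public class EntityStateDemo {

    enum State { NEW, MANAGED, DETACHED, REMOVED }

    private State state = State.NEW;   // created with `new`

    public void persist() {            // EntityManager.persist()
        if (state == State.NEW) state = State.MANAGED;
    }

    public void detach() {             // context closed / EntityManager.detach()
        if (state == State.MANAGED) state = State.DETACHED;
    }

    public void merge() {              // EntityManager.merge() re-attaches
        if (state == State.DETACHED) state = State.MANAGED;
    }

    public void remove() {             // EntityManager.remove()
        if (state == State.MANAGED) state = State.REMOVED;
    }

    public State state() { return state; }

    public static void main(String[] args) {
        EntityStateDemo e = new EntityStateDemo();
        e.persist();   // NEW -> MANAGED
        e.detach();    // MANAGED -> DETACHED
        e.merge();     // DETACHED -> MANAGED
        e.remove();    // MANAGED -> REMOVED
        System.out.println(e.state()); // REMOVED
    }
}
```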
To manage entity states in JPA, it's important to follow some best practices:
1. Always use EntityManager to create, retrieve, update, and delete entities.
2. Use the EntityManager.persist() method to create new entities.
3. Use the EntityManager.merge() method to update existing entities or to re-
attach detached entities to the persistence context.
4. Use the EntityManager.remove() method to delete entities.
5. Avoid using the new operator to create entities once JPA is involved in your
application.
6. Be aware of the transaction boundaries and make sure that all database
operations are performed within a transaction.
7. Keep the persistence context as small as possible to avoid unnecessary memory
usage and performance issues.
8. By following these best practices, you can ensure that entity state transitions are
properly managed in your application and that your data is consistent and up-
to-date in the database.
CascadeType.ALL vs CascadeType.PERSIST:

CascadeType.ALL: If an entity is associated with another entity using CascadeType.ALL, any operation performed on the parent entity will be propagated to the child entity. For example, if we delete a parent entity, any child entities associated with it will also be deleted. CascadeType.ALL should be used with caution as it can result in unintended consequences, such as deleting child entities that should not be deleted.

CascadeType.PERSIST: If an entity is associated with another entity using CascadeType.PERSIST, only the 'persist' operation will be propagated to the child entity. For example, if we persist a parent entity, any child entities associated with it will also be persisted, but any subsequent operations (e.g. remove or merge) will not be propagated. CascadeType.PERSIST is less risky as it only propagates the 'persist' operation, but it may require additional operations (such as 'remove' or 'merge') to update or delete child entities.
Conclusion
In conclusion, JPA (Java Persistence API) is a powerful tool for developers working
with Java applications that need to interact with databases. As such, it's become an
increasingly popular topic in interviews for Java development positions.
We hope that this article has provided you with a solid foundation for preparing for
JPA interviews. By reviewing these questions and practising your answers, you can
increase your chances of success and impress potential employers with your JPA
expertise.
Remember, JPA is just one aspect of Java development, so be sure to also brush up
on other relevant technologies and concepts. With a well-rounded understanding of
Java development and JPA in particular, you'll be well-positioned to excel in your
career. Also, if you are a java developer then other things related to java like Spring,
Hibernate, etc content you can find here -
Spring handles all the infrastructure-related aspects, which lets the programmer focus mostly on application development.
<beans>
<context:annotation-config/>
<!-- bean definitions go here -->
</beans>
The IoC container instantiates the bean from the bean’s definition in the XML
file.
Spring then populates all of the properties using the dependency injection as
specified in the bean definition.
The bean factory container calls setBeanName(), which takes the bean ID; the corresponding bean has to implement the BeanNameAware interface.
The factory then calls setBeanFactory() by passing an instance of itself (if
BeanFactoryAware interface is implemented in the bean).
If any BeanPostProcessors are associated with the bean, then the
postProcessBeforeInitialization() methods are invoked.
If an init-method is specified, then it will be called.
Lastly, postProcessAfterInitialization() methods will be called if there are any
BeanPostProcessors associated with the bean that needs to be run post
creation.
When beans are combined together within the Spring container, they are said to
be wired or the phenomenon is called bean wiring.
The Spring container should know what beans are needed and how the beans
are dependent on each other while wiring beans. This is given by means of XML /
Annotations / Java code-based configuration.
Spring Boot provides a number of starter dependencies; here are the most commonly
used:
Data JPA starter.
Test Starter.
Security starter.
Web starter.
Mail starter.
Thymeleaf starter.
Just like any other Java program, a Spring Boot application must have a main
method. This method serves as an entry point, which invokes the
SpringApplication#run method to bootstrap the application.
@SpringBootApplication
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
        // other statements
    }
}
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
11. What is Spring Boot CLI and what are its benefits?
Spring Boot CLI is a command-line interface that allows you to create Spring-based
Java applications using Groovy.
Example: You don't need to write getter and setter methods, access modifiers, or
return statements. If you use the JDBC template, it is automatically loaded for you.
12. What are the most common Spring Boot CLI commands?
run, test, grab, jar, war, install, uninstall, init, shell, help.
To check the description, run spring --help from the terminal.
19. Can we disable the default web server in the Spring boot
application?
Yes, we can use application.properties to configure the web application type, i.e.,
spring.main.web-application-type=none.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
Health
Info
Beans
Mappings
Configprops
Httptrace
Heapdump
Threaddump
Shutdown
29. How to get the list of all the beans in your Spring boot
application?
The Spring Boot actuator endpoint "/beans" is used to get the list of all the Spring beans in your
application.
You can define both application and Spring boot-related properties into a file called
application.properties. You can create this file manually or use Spring Initializer to
create this file. You don’t need to do any special configuration to instruct Spring Boot
to load this file, If it exists in classpath then spring boot automatically loads it and
configure itself and the application code accordingly.
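A hedged sketch of such a file (the keys are standard Spring Boot property names; the values are placeholders):

```properties
# application.properties is picked up automatically from the classpath
server.port=8081
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=root
```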
The same thing can be related to MVC applications also whose life cycle has 2
foremost phases:
For creating a request object.
For sending the response to any browser.
17. What are Action Filters in MVC?
ActionFilters are used for executing the logic while MVC action is executed.
Furthermore, action filters permit the implementation of pre and post-processing
logic and action methods.
18. How will you navigate from one view to another view in
MVC? Explain with a hyperlink example.
We will make use of the ActionLink method, which will help us navigate from one
view to another. Here is an example of navigating to the Home controller by invoking
the Go to Home action. Here is how we can code it:
<%=Html.ActionLink("Home","GoTo Home")%>
19. Explain the 3 concepts in one line; Temp data, View, and
Viewbag?
We can briefly describe Temp data, View, and Viewbag as:
Temp data: This is used for maintaining the data when there is a shift of work
from one controller to another.
View data: This is used for maintaining the data when we shift from a
controller to a view within an application.
View Bag: This acts as a view data’s dynamic wrapper.
20. Mention & explain the different approaches you will use to
implement Ajax in MVC?
There are 2 different approaches to implementing Ajax in MVC. These are:
jQuery: This is a library written using JavaScript for simplifying HTML-DOM
manipulation.
AJAX libraries: Asynchronous JavaScript and XML libraries are a set of web
development libraries written using JavaScript and are used to perform
common operations.
ActionResult vs ViewResult:

ActionResult: It becomes effective if you want to derive different types of views dynamically.
ViewResult: It is not so effective in deriving different types of views dynamically.
Separation of Concerns can be defined as one of the core features as well as benefits
of using MVC and is supported by ASP.NET. Here, the MVC framework offers a distinct
detachment of the different concerns such as User Interface (UI), data and the
business logic.
28. Which class will you use for sending the result back in JSON
format in MVC?
For sending back the result in JSON format in any MVC application, you have to
make use of the JsonResult class.
The view has its own layout page, whereas the partial view does not have its own layout page.
34. When multiple filters are used in MVC, how is the ordering of
execution of the filters done?
The order in which filters are used:
First, the authorization filters are executed.
Followed by the Action filters.
Then, the response filters are executed.
Finally, the exception filters.
@{
    Layout = "~/Views/Shared/_file.cshtml";
}
<html>
<head>
    <meta name="viewport" />
    <title>InitialView</title>
</head>
<body>
    ….
</body>
</html>
37. Mention the possible file extensions used for razor views?
The different file extensions that are used by razor views are:
.cshtml: When your MVC application is using C# as the programming language.
.vbhtml: When your MVC application is using VB as the programming language.
39. Point out the different stages a Page life cycle of MVC has?
The different steps or stages of the page life-cycle of MVC are:
Initialization of app.
Routing.
Instantiate the object followed by executing the controller.
Locate as well as invoke the controller action.
Instantiating and then rendering the view.
<system.web>
    <authentication mode="Forms">
        <forms loginUrl="Login.aspx" protection="All" timeout="30" name=".ASPXAU
    </authentication>
</system.web>
46. Point out the two instances where you cannot use routing or
where routing is not necessary
The 2 situations where routing is not used or not necessary are:
When there is a physical file matching the URL pattern.
When any routing gets disabled in any particular URL pattern.
Spring Security is essentially just a bunch of servlet filters that enable Java
applications to include authentication and authorization functionality. It is one of the
most powerful, and highly customizable access-control frameworks (security
framework) that provide authentication, authorization, and other security features
for Java EE (Enterprise edition) based enterprise applications. The real power of
Spring Security lies in its ability to be extended to meet custom needs. Its main
responsibility is to authenticate and authorize incoming requests for accessing any
resource, including rest API endpoints, MVC (Model-View-Controller) URLs, static
resources, etc.
Authentication: This refers to the process of verifying the identity of the user,
using the credentials provided when accessing certain restricted resources. Two
steps are involved in authenticating a user, namely identification and
verification. An example is logging into a website with a username and a
password. This is like answering the question Who are you?
Authorization: It is the ability to determine a user's authority to perform an
action or to view data, assuming they have successfully logged in. This ensures
that users can only access the parts of a resource that they are authorized to
access. It could be thought of as an answer to the question Can a user do/read
this?
Value = username:password
Encoded Value = base64(Value)
Authorization Value = Basic <Encoded Value>
//Example: Authorization: Basic VGVzdFVzZXI6dGVzdDEyMw==
//Decoding it will give back the original username:password TestUser:test123
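The encoding scheme above can be reproduced with the JDK's Base64 support; this sketch builds and decodes the header value:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Builds (and decodes) a Basic Authentication header value
// from a username:password pair, as described above.
public class BasicAuthDemo {

    public static String headerValue(String username, String password) {
        String value = username + ":" + password;
        String encoded = Base64.getEncoder()
                .encodeToString(value.getBytes(StandardCharsets.UTF_8));
        return "Basic " + encoded;
    }

    public static String decode(String headerValue) {
        String encoded = headerValue.substring("Basic ".length());
        return new String(Base64.getDecoder().decode(encoded), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String header = headerValue("TestUser", "test123");
        System.out.println(header);         // Basic VGVzdFVzZXI6dGVzdDEyMw==
        System.out.println(decode(header)); // TestUser:test123
    }
}
```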
Hash1=MD5(username:realm:password)
Hash2=MD5(method:digestURI)
response=MD5(Hash1:nonce:nonceCount:cnonce:qop:Hash2)
//Example, this got generated by running this example
Authorization: Digest username="TestAdmin", realm="admin-digest-realm", nonce="MTYwMDEw
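The hash chain above can be computed with the JDK's MessageDigest; all the input values in this sketch are made-up placeholders, not taken from a real exchange:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Computes the Digest-auth response value following the formulas above:
// response = MD5(Hash1:nonce:nonceCount:cnonce:qop:Hash2)
public class DigestAuthDemo {

    static String md5Hex(String input) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b & 0xff));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static String response(String username, String realm, String password,
                                  String method, String digestUri,
                                  String nonce, String nc, String cnonce, String qop) {
        String hash1 = md5Hex(username + ":" + realm + ":" + password);
        String hash2 = md5Hex(method + ":" + digestUri);
        return md5Hex(hash1 + ":" + nonce + ":" + nc + ":" + cnonce + ":" + qop + ":" + hash2);
    }

    public static void main(String[] args) {
        // All values below are illustrative placeholders.
        System.out.println(response("TestAdmin", "admin-digest-realm", "secret",
                "GET", "/api/users", "abc123", "00000001", "xyz", "auth"));
    }
}
```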
Spring Security is indeed a cross-cutting concern. Spring security is also using Spring
AOP (Aspect Oriented Programming) internally. A cross-cutting concern is one that
applies throughout the whole application and affects it all. Below are some cross-
cutting concerns related to the enterprise application.
Logging and tracing
Transaction management
Security
Caching
Error handling
Performance monitoring
Custom Business Rules
import org.springframework.expression.Expression;
import org.springframework.expression.ExpressionParser;
import org.springframework.expression.spel.standard.SpelExpressionParser;
public class WelcomeTest
{
public static void main(String[] args)
{
ExpressionParser parser = new SpelExpressionParser();
Expression exp = parser.parseExpression("'WELCOMEtoSPEL'");
String message = (String) exp.getValue();
System.out.println(message);
//OR
//System.out.println(parser.parseExpression("'Hello SPEL'").getValue());
}
}
Output:
WELCOMEtoSPEL
JWT (JSON Web Tokens) are tokens that are generated by a server upon user
authentication in a web application and are then sent to the client (normally a
browser). As a result, these tokens are sent on every HTTP request, allowing the
server to verify or authenticate the user's identity. This method is used for
authorizing transactions or requests between client and server. The use of JWT does
not intend to hide data, but rather ensure its authenticity. JWTs are signed and
encoded, instead of encrypted. A cryptographic algorithm is used to digitally sign
JWTs in order to ensure that they cannot be altered after they are issued. Information
contained in the token is signed by the server's private key in order to ensure
integrity.
Login credentials are sent by the user. When successful, JWT tokens (signed by
private key/secret key) are sent back by the server to the client.
The client takes JWT and inserts it in the Authorization header to make data
requests for the user.
Upon receiving the token from the client, the server simply needs to compare
the signature sent by the client to the one it generated with its private
key/secret key. The token will be valid once the signatures match.
Three parts make up JSON Web Tokens, separated by a dot (.). The first two (the
header and the payload) contain Base64-URL encoded JSON, while the third is a
cryptographic signature.
For example:
eyJhbGciOfefeiI1NiJ9.eyJuYW1lIjdgdfeENvZGVyIn0.5dlp7GmziL2dfecegse4mtaqv0_xX4oFUuTDh14K
eyJhbGciOfefeiI1NiJ9 #header
eyJuYW1lIjdgdfeENvZGVyIn0 #payload
5dlp7GmziL2dfecegse4mtaqv0_xX4oFUuTDh14KuF #signature
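Splitting and decoding the first two parts needs nothing beyond the JDK. This sketch builds its own sample token; the signature here is a dummy and is not verified (verification requires the server's key):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Splits a JWT into its three dot-separated parts and Base64-URL-decodes
// the header and payload.
public class JwtPartsDemo {

    public static String[] split(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length != 3) throw new IllegalArgumentException("not a JWT");
        return parts;
    }

    public static String decodePart(String part) {
        return new String(Base64.getUrlDecoder().decode(part), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header  = enc.encodeToString("{\"alg\":\"HS256\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString("{\"name\":\"Coder\"}".getBytes(StandardCharsets.UTF_8));
        String jwt = header + "." + payload + ".dummysignature";

        String[] parts = split(jwt);
        System.out.println(decodePart(parts[0])); // {"alg":"HS256"}
        System.out.println(decodePart(parts[1])); // {"name":"Coder"}
    }
}
```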
Step 1: The client first sends a request for a resource (MVC controller). The
application container creates a filter chain for handling and processing incoming
requests.
Step 2: Each HttpServletRequest passes through the filter chain depending
upon the request URI. (We can configure whether the filter chains should be
applied to all requests or to the specific request URI).
Step 3: For most web applications, filters perform the following functions:
Modify or Change the HttpServletRequest/HttpServletResponse before it
reaches the Spring MVC controller.
Can stop the processing of the request and send a response to the client,
such as Servlets not allowing requests to specific URI's.
Filter chains in Spring Security are very complex and flexible. They use services such
as UserDetailsService and AuthenticationManager to accomplish their tasks. It is also
important to consider their orders since you might want to verify their authenticity
before authorizing them. A few of the important security filters from Spring's filter
chain are listed below in the order they occur:
SecurityContextPersistenceFilter: Stores the SecurityContext contents
between HTTP requests. It also clears SecurityContextHolder when a request is
finished.
ConcurrentSessionFilter: It is responsible for handling concurrent sessions. Its
purpose is to refresh the last modified time of the request's session and to
ensure the session hasn't expired.
UsernamePasswordAuthenticationFilter: It's the most popular authentication
filter and is the one that's most often customized.
ExceptionTranslationFilter: This filter resides above FilterSecurityInterceptor in
the security filter stack. Although it doesn't perform actual security
enforcement, it handles exceptions thrown by the security interceptors and
returns valid and suitable HTTP responses.
FilterSecurityInterceptor: It is responsible for securing HTTP resources (web
URIs), and raising or throwing authentication and authorization exceptions
when access is denied.
A servlet filter must be declared in the web.xml file so that it can be invoked before
the request is passed on to the actual Servlet class. DelegatingFilterProxy is a servlet
filter embedded in the spring context. It acts as a bridge between web.xml (web
application) and the application context (Spring IoC Container).
DelegatingFilterProxy is a proxy that delegates incoming requests to filters that are
managed as Spring beans in the application context. It provides full access to the
Spring context's life cycle machinery and dependency injection.
Whenever a request reaches the web application, the proxy ensures that the request
is delegated to Spring Security, and, if everything goes smoothly, it will ensure that
the request is directed to the right resource within the web application. The following
example demonstrates how to configure the DelegatingFilterProxy in web.xml:
<web-app>
    <filter>
        <filter-name>springSecurityFilterChain</filter-name>
        <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
    </filter>
    <filter-mapping>
        <filter-name>springSecurityFilterChain</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>
</web-app>
@PreAuthorize vs @Secured:

@PreAuthorize: It supports multiple roles in conjunction with the AND operator.
For example:
@PreAuthorize("hasRole('ROLE_role1') and hasRole('ROLE_role2')")
In Spring Boot it is enabled with:
@EnableGlobalMethodSecurity(prePostEnabled = true)

@Secured: It does not support the AND operator. If more than one role is specified,
the OR operator is applied. For example:
@Secured({"ROLE_role1", "ROLE_role2"})
In Spring Boot it is enabled with:
@EnableGlobalMethodSecurity(securedEnabled = true)
Conclusion
Spring Security is one of the most popular, powerful, and highly customizable access-
control frameworks (security framework) that provide authentication, authorization,
and other security features for enterprise applications. In this article, we have
compiled a comprehensive list of Spring Security Interview questions, which are
typically asked during interviews. In addition to checking your existing Spring
Security skills, these questions serve as a good resource for reviewing some
important concepts before you appear for an interview. It is suitable for both freshers
as well as experienced developers and tech leads.
What is a Container?
A container is a standard unit of software bundled with dependencies so that
applications can be deployed fast and reliably between different computing platforms.
Docker can be visualized as a big ship (docker) carrying huge boxes of products
(containers).
A Docker container doesn't require the installation of a separate operating
system. Docker relies on the kernel's functionality, using resource isolation for
CPU and memory, and separate namespaces to isolate the application's view of
the OS (operating system).
All these aspects form the core part of DevOps, which makes them all the more
important for any developer to know in order to improve productivity and speed
up development, while keeping in mind application scalability and more efficient
resource management.
Imagine containers as a very lightweight pre-installed box with all the packages,
dependencies, and software required by your application; just deploy it to production
with minimal configuration changes.
Lots of companies like PayPal, Spotify, Uber, etc. use Docker to simplify
operations and to bring infrastructure and security closer together in order to
build more secure applications.
Being portable, Containers can be deployed on multiple platforms like bare
instances, virtual machines, Kubernetes platform etc. as per requirements of
scale or desired platform.
3. What is a Dockerfile?
It is a text file that contains all the commands which need to be run for building a
given image.
1. Native Hypervisor: This type is also called a Bare-metal Hypervisor and runs
directly on the underlying host system which also ensures direct access to the
host hardware which is why it does not require base OS.
2. Hosted Hypervisor: This type makes use of the underlying host operating
system which has the existing OS installed.
7. What is the docker command that lists the status of all docker
containers?
In order to get the status of all the containers, we run the below command:
docker ps -a
A Docker image registry, in simple terms, is an area where the docker images are
stored. Instead of converting the applications to containers each and every time,
a developer can directly use the images stored in the registry.
This image registry can either be public or private and Docker hub is the most
popular and famous public registry available.
1. Off: In this policy, the container won’t be restarted in case it is stopped or it fails.
2. On-failure: Here, the container restarts by itself only when it experiences
failures not associated with the user.
3. Unless-stopped: Under this policy, the container is always restarted unless it
has been explicitly stopped by the user.
4. Always: Irrespective of the failure or stopping, the container always gets
restarted in this type of policy.
19. Can you tell the differences between a docker Image and
Layer?
Image: This is built up from a series of read-only layers of instructions. An image
is the template from which a docker container is created, and the caching
mechanism of each build step makes rebuilding images speedy.
Layer: The result of building a Dockerfile is an image, whereas the instructions
present in this file add the layers to the image. The layers can be thought of as
intermediate images. For example, if a Dockerfile contains 4 instructions, then 4
layers are added to the resultant image.
23. What are the purposes of the up, run, and start commands of
docker compose?
24. What are the basic requirements for the docker to run on
any system?
Docker can run on both Windows and Linux platforms.
For the Windows platform, docker needs at least Windows 10 64-bit with 2GB of
RAM. For lower versions, docker can be installed by taking the help of the
toolbox. Docker can be downloaded from the
https://docs.docker.com/docker-for-windows/ website.
For Linux platforms, Docker can run on various Linux flavors such as Ubuntu
>=12.04, Fedora >=19, RHEL >=6.5, CentOS >=6 etc.
25. Can you tell the approach to log in to the docker registry?
Using the docker login command, credentials for your own cloud repositories can
be entered, after which the repositories can be accessed.
FROM: This is used to set the base image for upcoming instructions. A docker file
is considered to be valid if it starts with the FROM instruction.
LABEL: This is used for the image organization based on projects, modules, or
licensing. It also helps in automation as we specify a key-value pair while
defining a label that can be later accessed and handled programmatically.
RUN: This command is used to execute instructions following it on the top of the
current image in a new layer. Note that with each RUN command execution, we
add layers on top of the image and then use that in subsequent steps.
CMD: This command is used to provide default values for an executing container.
In case of multiple CMD commands, only the last instruction is considered.
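A minimal Dockerfile sketch tying these instructions together; the base image, label values, and command below are illustrative examples, not from the original text:

```dockerfile
# FROM sets the base image for all subsequent instructions
FROM ubuntu:20.04

# LABEL attaches key-value metadata for organization and automation
LABEL maintainer="[email protected]" project="demo"

# Each RUN executes in a new layer on top of the current image
RUN apt-get update && apt-get install -y curl

# CMD provides the default command; only the last CMD takes effect
CMD ["curl", "--version"]
```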
This can be done using networking, by identifying the “ipconfig” output on the
docker host. This command ensures that an ethernet adapter is created as long as
docker is present in the host.
30. Can you tell the difference between CMD and ENTRYPOINT?
CMD provides executable defaults for an executing container. In case the
executable has to be omitted from CMD, an ENTRYPOINT instruction must also be
present, and both should use the JSON array format.
ENTRYPOINT specifies that the instruction within it will always be run when the
container starts.
This command provides an option to configure the parameters and the
executables. If the Dockerfile does not have this command, then it would still
get inherited from the base image mentioned in the FROM instruction.
The most commonly used ENTRYPOINT is /bin/sh or /bin/bash for most
of the base images.
As part of good practices, every DockerFile should have at least one of these two
commands.
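A sketch of how the two combine (the image and values here are illustrative): ENTRYPOINT fixes the executable, while CMD supplies default arguments that can be overridden at run time.

```dockerfile
FROM alpine:3.18

# The executable that always runs when the container starts
ENTRYPOINT ["echo"]

# Default arguments; `docker run <image> other text` replaces them
CMD ["hello", "world"]
```

Running the resulting image with no arguments would print "hello world"; any arguments passed to docker run replace only the CMD part.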
32. How many containers can you run in docker, and what are
the factors influencing this limit?
There is no clearly defined limit to the number of containers that can be run within
docker, but it depends on limitations, more specifically hardware restrictions.
The size of the app and the CPU resources available are 2 important factors
influencing this limit. If your application is not very big and you have abundant
CPU resources, you can run a huge number of containers.
35. How will you ensure that a container 1 runs before container
2 while using docker compose?
Docker-compose does not wait for any container to be “ready” before going ahead
with the next containers. In order to achieve the order of execution, we can use:
The “depends_on” which got added in version 2 of docker-compose can be used
as shown in a sample docker-compose.yml file below:
version: "2.4"
services:
  backend:
    build: .
    depends_on:
      - db
  db:
    image: postgres
36. Conclusion
DevOps technologies are growing at an exponential pace. As systems become
more and more distributed, developers have turned towards containerization
because of the need to develop software faster and maintain it better. Containers
also aid in an easier and faster continuous integration and deployment process,
which is why these technologies have experienced tremendous growth.
Docker is the most famous and popular tool for achieving the purpose of
containerization and continuous integration/development and also for continuous
deployment due to its great support for pipelines. With the growing ecosystem,
docker has proven itself to be useful to operate on multiple use cases thereby making
it all the more exciting to learn it!
Spring Boot Actuator is a project that provides RESTful web services to access the
current state of an application that is running in production. In addition, you can
monitor and manage application usage in a production environment without having
to code or configure any of the applications.
Example:
In Spring MVC applications, you need to specify a suffix and prefix. You can do this by
adding the properties listed below in the application.properties file.
For suffix – spring.mvc.view.suffix: .jsp
For prefix – spring.mvc.view.prefix: /WEB-INF/
14. What are the challenges that one has to face while using
Microservices?
The challenges that one has to face while using microservices can be both functional
and technical as given below:
Functional Challenges:
Require heavy infrastructure setup.
Need Heavy investment.
Require excessive planning to handle or manage operations overhead.
Technical Challenges:
Microservices are always interdependent. Therefore, they must communicate
with each other.
It is a heavily involved model because it is a distributed system.
You need to be prepared for operations overhead if you are using Microservice
architecture.
To support heterogeneously distributed microservices, you need skilled
professionals.
It is difficult to automate because of the number of smaller components. For
that reason, each component must be built, deployed, and monitored
separately.
It is difficult to manage configurations across different environments for all
components.
Challenges associated with deployment, debugging, and testing.
The semantic monitoring method, also called synthetic monitoring, uses automated
tests and monitoring of the application to identify errors in business processes. This
technology provides a deeper look into the transaction performance, service
availability, and overall application performance to identify performance issues of
microservices, catch bugs in transactions and provide an overall higher level of
performance.
The term 'idempotence' refers to the property of an operation whereby performing
it repeatedly yields the same outcome as performing it once. In other words, it is a
situation in which a task is performed repeatedly with the end result remaining the same.
Usage: When the remote service or data source receives the same instruction more
than once, idempotence ensures that the request is processed only once.
Bottom-level tests: The bottom-level tests are those that deal with technology, such
as unit tests and performance tests. This is a completely automated process.
Middle-level tests: In the middle, we have exploratory tests such as stress tests and
usability tests.
Top-level tests: In the top-level testing, we have a limited number of acceptance
tests. The acceptance tests help stakeholders understand and verify the software
features.
In accordance with the pyramid, the number of tests should be highest at the bottom
layer. At the service layer, fewer tests should be performed than at the unit test level,
but more than at the end-to-end level.
Conclusion:
In this article, we've compiled the answers to the most frequently asked Data
Structure Interview Questions so that you may better prepare for your job
interview.
Decision Making
Genetics
Image Processing
Blockchain
Numerical and Statistical Analysis
Compiler Design
Database Design and many more
Linear Data Structure: A data structure that includes data elements arranged
sequentially or linearly, where each element is connected to its previous and
next nearest elements, is referred to as a linear data structure. Arrays and linked
lists are two examples of linear data structures.
Non-Linear Data Structure: Non-linear data structures are data structures in
which data elements are not arranged linearly or sequentially. We cannot walk
through all elements in one pass in a non-linear data structure, as in a linear
data structure. Trees and graphs are two examples of non-linear data structures.
Some of the main operations provided in the stack data structure are:
push: This adds an item to the top of the stack. The overflow condition occurs if
the stack is full.
pop: This removes the top item of the stack. Underflow condition occurs if the
stack is empty.
top: This returns the top item from the stack.
isEmpty: This returns true if the stack is empty else false.
size: This returns the size of the stack.
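These operations map directly onto java.util.ArrayDeque, which the Java library documentation recommends for stack use; a minimal sketch (the class name StackDemo is just for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    // exercises push, pop, peek (top), size and isEmpty on a stack
    public static int[] demo() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(10);            // stack: 10
        stack.push(20);            // stack: 20 10  (20 is now the top)
        int top = stack.peek();    // returns 20 without removing it
        int popped = stack.pop();  // removes and returns 20
        return new int[]{top, popped, stack.size(), stack.isEmpty() ? 1 : 0};
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]); // 20 20 1 0
    }
}
```

Pushing onto a full ArrayDeque simply grows it, so the overflow condition above only applies to fixed-capacity stack implementations.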
Stack: based on the LIFO (Last In First Out) principle.
Queue: based on the FIFO (First In First Out) principle.
A queue can be implemented using two stacks. Let q be the queue, and stack1
and stack2 be the 2 stacks for implementing q. We know that a stack supports
push, pop, and peek operations, and using these operations, we need to emulate the
operations of the queue - enqueue and dequeue. Hence, queue q can be
implemented in two methods (both methods use auxiliary space complexity of
O(n)):
1. By making enqueue operation costly:
Here, the oldest element is always at the top of stack1 which ensures
dequeue operation occurs in O(1) time complexity.
To place the element at top of stack1, stack2 is used.
Pseudocode:
Enqueue: Here time complexity will be O(n)

enqueue(q, data):
    While stack1 is not empty:
        push everything from stack1 to stack2
    Push data to stack1
    Push everything from stack2 back to stack1

Dequeue: Here time complexity will be O(1)

deQueue(q):
    If stack1 is empty then raise an error
    Else pop an item from stack1 and return it
Here, for enqueue operation, the new element is pushed at the top of stack1 .
Here, the enqueue operation time complexity is O(1).
In dequeue, if stack2 is empty, all elements from stack1 are moved to
stack2 and top of stack2 is the result. Basically, reversing the list by
pushing to a stack and returning the first enqueued element. This operation of
pushing all elements to a new stack takes O(n) complexity.
Pseudocode:
Enqueue: Time complexity: O(1)

enqueue(q, data):
    Push data to stack1

Dequeue: Time complexity: O(n) in the worst case, O(1) amortized

dequeue(q):
    If both stacks are empty then raise an error
    If stack2 is empty:
        While stack1 is not empty:
            push everything from stack1 to stack2
    Pop the element from stack2 and return it
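The second (amortized O(1)) method can be sketched in Java as follows; the class name TwoStackQueue is illustrative, not part of any standard API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class TwoStackQueue {
    private final Deque<Integer> stack1 = new ArrayDeque<>(); // receives new elements
    private final Deque<Integer> stack2 = new ArrayDeque<>(); // serves dequeues in FIFO order

    // O(1): just push onto stack1
    public void enqueue(int data) {
        stack1.push(data);
    }

    // amortized O(1): refill stack2 from stack1 only when stack2 is empty
    public int dequeue() {
        if (stack1.isEmpty() && stack2.isEmpty())
            throw new IllegalStateException("queue is empty");
        if (stack2.isEmpty())
            while (!stack1.isEmpty())
                stack2.push(stack1.pop()); // reversing the stack restores FIFO order
        return stack2.pop();
    }
}
```

Each element is moved between the stacks at most once, which is why the total cost over n operations stays linear.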
This method ensures that the newly entered element is always at the front of
‘q1’ so that the pop operation just dequeues from ‘q1’.
‘q2’ is used as an auxiliary queue to put every new element in front of ‘q1’ while
ensuring that pop happens in O(1) complexity.
Pseudocode:
Push element to stack s: Here push takes O(n) time complexity.

push(s, data):
    Enqueue data to q2
    Dequeue elements one by one from q1 and enqueue to q2
    Swap the names of q1 and q2

pop(s):
    Dequeue from q1 and return it
push(s, data):
    Enqueue data to q1

pop(s):
    Step 1: Dequeue every element except the last one from q1 and enqueue to q2.
    Step 2: Dequeue the last item of q1; the dequeued item is stored in a result variable.
    Step 3: Swap the names of q1 and q2 (to get the updated data after dequeue).
    Step 4: Return the result.
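The pop-costly variant above can be sketched in Java as follows (TwoQueueStack is an illustrative name):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class TwoQueueStack {
    private Queue<Integer> q1 = new ArrayDeque<>();
    private Queue<Integer> q2 = new ArrayDeque<>();

    // O(1): new elements simply go to the back of q1
    public void push(int data) {
        q1.add(data);
    }

    // O(n): drain all but the last element of q1 into q2,
    // return that last element, then swap the queue names
    public int pop() {
        if (q1.isEmpty())
            throw new IllegalStateException("stack is empty");
        while (q1.size() > 1)
            q2.add(q1.remove());
        int result = q1.remove();
        Queue<Integer> tmp = q1;
        q1 = q2;
        q2 = tmp;
        return result;
    }
}
```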
Array data structures are commonly used in databases and other computer systems
to store large amounts of data efficiently. They are also useful for storing information
that is frequently accessed, such as large amounts of text or images.
1. Singly Linked List: A singly linked list is a data structure that is used to store
multiple items, in which each item is stored in a separate node and each node holds
a reference to the next node in the list. When an item is added to the list, a new
node is created and linked at the end of the list. When an item is removed from the
list, the node that contains the removed item is deleted and its predecessor is linked
directly to its successor. The data held in a node can be of any type - for example, an
integer, a string, or even another singly linked list.
Singly-linked lists are useful for storing many different types of data. For example,
they are commonly used to store lists of items such as grocery lists or patient records.
They are also useful for storing data that is time sensitive such as stock market prices
or flight schedules.
2. Doubly Linked List: A doubly linked list is a data structure that allows for two-way
data access such that each node in the list points to the next node in the list and also
points back to its previous node. In a doubly linked list, each node can be accessed by
its address, and the contents of the node can be accessed by its index. It's ideal for
applications that need to access large amounts of data in a fast manner. A
disadvantage of a doubly linked list is that it is more difficult to maintain than a
single-linked list. In addition, it is more difficult to add and remove nodes than in a
single-linked list.
3. Circular Linked List: A circular linked list is a unidirectional linked list where each
node points to its next node and the last node points back to the first node, which
makes it circular.
4. Doubly Circular Linked List: A doubly circular linked list is a linked list where each
node points to its next node and its previous node and the last node points back to
the first node and first node’s previous points to the last node.
5. Header List: A list that contains the header node at the beginning of the list, is
called the header-linked list. This is helpful in calculating some repetitive operations
like the number of elements in the list etc.
Array:
An array's elements are not dependent on one another.
Memory utilization is ineffective in the case of an array.
Operations like insertion and deletion take longer time in an array.

Linked List:
Linked list elements are dependent on one another.
Memory utilization is effective in the case of a linked list.
Operations like insertion and deletion are faster in the linked list.
It's widely used in computer networks for storing routing table information.
Decision Trees.
Expression Evaluation.
Database indices.
25. What is binary search tree data structure? What are the
applications for binary search trees?
A binary search tree is a data structure that stores items in sorted order. In a binary
search tree, each node stores a key and a value. The key is used to locate the item
within the tree, and it can be any comparable type of value such as an integer, a
floating point number, or a character string; the value can likewise be of any type.
When a node is added to the tree, its key determines where in the tree the node is
placed, and when a node is removed, its key is used to find the node to delete.
A binary search tree is a special type of binary tree that has a specific order of
elements in it. It has three basic qualities:
All elements in the left subtree of a node should have a value less than or equal
to the parent node's value, and
All elements in the right subtree of a node should have a value greater than or
equal to the parent node's value.
Both the left and right subtrees must be binary search trees too.
Uses: In binary search trees (BST), inorder traversal gives nodes in ascending
order.
2. Preorder Traversal:
Algorithm:
Step 1. Visit the root.
Step 2. Traverse the left subtree, i.e., call Preorder(root.left)
Step 3. Traverse the right subtree, i.e., call Preorder(root.right)
Preorder traversal in Java:
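A minimal recursive sketch, assuming a simple Node class with data, left, and right fields (names chosen for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class PreorderSketch {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // root -> left subtree -> right subtree
    public static void preorder(Node root, List<Integer> out) {
        if (root == null) return;
        out.add(root.data);          // Step 1: visit the root
        preorder(root.left, out);    // Step 2: traverse the left subtree
        preorder(root.right, out);   // Step 3: traverse the right subtree
    }

    public static void main(String[] args) {
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        List<Integer> out = new ArrayList<>();
        preorder(root, out);
        System.out.println(out); // [1, 2, 3]
    }
}
```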
Uses:
Preorder traversal is commonly used to create a copy of the tree.
It is also used to get prefix expression of an expression tree.
3. Postorder Traversal:
Algorithm:
Step 1. Traverse the left subtree, i.e., call Postorder(root.left)
Step 2. Traverse the right subtree, i.e., call Postorder(root.right)
Step 3. Visit the root.
Postorder traversal in Java:
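A minimal recursive sketch, again assuming a simple Node class with data, left, and right fields:

```java
import java.util.ArrayList;
import java.util.List;

public class PostorderSketch {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // left subtree -> right subtree -> root
    public static void postorder(Node root, List<Integer> out) {
        if (root == null) return;
        postorder(root.left, out);   // Step 1: traverse the left subtree
        postorder(root.right, out);  // Step 2: traverse the right subtree
        out.add(root.data);          // Step 3: visit the root
    }

    public static void main(String[] args) {
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        List<Integer> out = new ArrayList<>();
        postorder(root, out);
        System.out.println(out); // [2, 3, 1]
    }
}
```

Because children are emitted before their parent, a node is only reached after its whole subtree, which is exactly why this order is safe for deleting a tree.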
Uses:
Postorder traversal is commonly used to delete the tree.
It is also useful to get the postfix expression of an expression tree.
Consider the following tree as an example, then:
27. What is a deque data structure and its types? What are the
applications for deque?
A deque can be thought of as an array of items, but with one important difference:
Instead of pushing and popping items off the end to make room, deques are
designed to allow items to be inserted at either end. This property makes deques
well-suited for performing tasks such as keeping track of inventory, scheduling tasks,
or handling large amounts of data.
Output Restricted Deque: Deletion operations are performed at only one end
while insertion is performed at both ends in the output restricted queue.
28. What are some key operations performed on the Deque data
structure?
Following are the key operations available on a deque:
insertFront(): This adds an element to the front of the Deque.
insertLast(): This adds an element to the rear of the Deque.
deleteFront(): This deletes an element from the front of the Deque.
deleteLast(): This deletes an element from the rear of the Deque.
getFront(): This gets the element at the front of the Deque.
getRear(): This gets the element at the rear of the Deque.
isEmpty(): This checks whether the Deque is empty or not.
isFull(): This checks whether the Deque is full or not.
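java.util.ArrayDeque provides all of these operations under slightly different names (offerFirst/offerLast, pollFirst/pollLast, peekFirst/peekLast); a small sketch mapping them:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    public static int[] demo() {
        Deque<Integer> dq = new ArrayDeque<>();
        dq.offerFirst(2);            // insertFront -> [2]
        dq.offerLast(3);             // insertLast  -> [2, 3]
        dq.offerFirst(1);            // insertFront -> [1, 2, 3]
        int front = dq.peekFirst();  // getFront -> 1
        int rear = dq.peekLast();    // getRear  -> 3
        dq.pollFirst();              // deleteFront -> [2, 3]
        dq.pollLast();               // deleteLast  -> [2]
        return new int[]{front, rear, dq.peekFirst(), dq.isEmpty() ? 1 : 0};
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]); // 1 3 2 0
    }
}
```

ArrayDeque grows on demand, so there is no isFull() equivalent; that operation only applies to fixed-capacity deques.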
One of the cons of this representation is that even if the graph is sparse (has fewer
edges), it takes up the same amount of space. Adding a vertex takes O(V^2). It also
takes O(V) time to compute all of a vertex's neighbours, which is not very efficient.
2. Adjacency List: In this method, each Node holds a list of Nodes that are directly
connected to that vertex. Each node at the end of the list is connected with null
values to indicate that it is the last node in the list. This saves space O(|V|+|E|). In the
worst-case scenario, a graph can have C(V, 2) edges, consuming O(V^2) space. It is
simpler to add a vertex. It takes the least amount of time to compute all of a vertex's
neighbours.
One of the cons of this representation is that queries such as "is there an edge from
vertex u to vertex v?" are inefficient and take O(V) in the worst case.
33. What is AVL tree data structure, its operations, and its
rotations? What are the applications for AVL trees?
AVL trees are height-balancing binary search trees named after their inventors
Adelson-Velsky and Landis. The AVL tree compares the heights of the left and right
subtrees and ensures that the difference is not more than one. This difference is
known as the Balance Factor.
BalanceFactor = height(left-subtree) − height(right-subtree)
Left rotation: When a node is inserted into the right subtree of the right subtree
and the tree becomes unbalanced, we perform a single left rotation.
Right rotation: If a node is inserted in the left subtree of the left subtree, the AVL
tree may become unbalanced. The tree then requires a right rotation.
Left-Right rotation: The RR rotation is performed first on the subtree, followed
by the LL rotation on the entire tree.
Right-Left rotation: The LL rotation is performed first on the subtree, followed
by the RR rotation on the entire tree.
Following are some real-time applications for AVL tree data structure:
AVL trees are typically used for in-memory sets and dictionaries.
AVL trees are also widely used in database applications where there are fewer
insertions and deletions but frequent data lookups are required.
Apart from database applications, it is used in applications that require
improved searching.
Following are key operations performed on the Segment tree data structure:
Building Tree: In this step, we create the structure and initialize the segment
tree variable.
Updating the Tree: In this step, we change the tree by updating the array value at
a point or over an interval.
Querying Tree: This operation can be used to run a range query on the array.
Following are real-time applications for Segment Tree:
Used to efficiently list all pairs of intersecting rectangles from a list of rectangles
in the plane.
The segment tree has become popular for use in pattern recognition and image
processing.
Finding range sum/product, range max/min, prefix sum/product, etc
Computational geometry
Geographic information systems
Static and Dynamic RMQ (Range Minimum Query)
Storing segments in an arbitrary manner
The word "Trie" is an abbreviation for "retrieval". Trie is a data structure that stores a
set of strings as a sorted tree. Each node has the same number of pointers as the
number of alphabet characters. It can look up a word in the dictionary by using its
prefix. Assuming that all strings are formed from the letters 'a' to 'z' of the English
alphabet, each trie node can have a maximum of 26 pointers.
Trie is also referred to as the digital tree or the prefix tree. The key to which a node is
connected is determined by its position in the Trie. Trie allows us to insert and find
strings in O(L) time, where L is the length of a single word. This is clearly faster than
BST. Because of how it is implemented, this is also faster than hashing: there is no
need to compute a hash function, and there is no need to handle collisions (like we
do in open addressing and separate chaining).
Another benefit of Trie is that we can easily print all words in alphabetical order,
which is not easy with hashing. Trie can also perform prefix search (or auto-complete)
efficiently.
The main disadvantage of tries is that they require a large amount of memory to
store the strings, since we have an excessive number of node pointers for each node.
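A minimal sketch of insert and prefix search over lowercase 'a' to 'z', with 26 child pointers per node (class and method names are illustrative):

```java
public class TrieSketch {
    static class TrieNode {
        TrieNode[] children = new TrieNode[26]; // one pointer per letter 'a'..'z'
        boolean isWord;                         // marks the end of a stored string
    }

    private final TrieNode root = new TrieNode();

    // O(L) insert, where L is the length of the word
    public void insert(String word) {
        TrieNode cur = root;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (cur.children[i] == null) cur.children[i] = new TrieNode();
            cur = cur.children[i];
        }
        cur.isWord = true;
    }

    // O(L) whole-word lookup
    public boolean search(String word) {
        TrieNode n = walk(word);
        return n != null && n.isWord;
    }

    // O(L) prefix query, the basis of auto-complete
    public boolean startsWith(String prefix) {
        return walk(prefix) != null;
    }

    private TrieNode walk(String s) {
        TrieNode cur = root;
        for (char c : s.toCharArray()) {
            cur = cur.children[c - 'a'];
            if (cur == null) return null;
        }
        return cur;
    }
}
```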
Following are some real-time applications for Trie data structure:
LRU cache or Least Recently Used cache allows quick identification of an element
that hasn’t been put to use for the longest time by organizing items in order of use. In
order to achieve this, two data structures are used:
Queue – This is implemented using a doubly-linked list. The maximum size of
the queue is determined by the cache size, i.e by the total number of available
frames. The least recently used pages will be near the front end of the queue
whereas the most recently used pages will be towards the rear end of the queue.
Hashmap – Hashmap stores the page number as the key along with the address
of the corresponding queue node as the value.
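The queue-plus-hashmap combination above is essentially what java.util.LinkedHashMap provides in access-order mode; a compact sketch (the class name LruSketch and the capacity are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruSketch(int capacity) {
        // accessOrder = true: iteration order is least recently used first
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    // evict the eldest (least recently used) entry once capacity is exceeded
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

Accessing a key with get() moves it to the most-recently-used position, so the entry evicted on overflow is always the least recently used one.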
Max-Heap:
In a Max-Heap the data element present at the root node must be the
greatest among all the data elements present in the tree.
This property should be recursively true for all sub-trees of that binary tree.
Min-Heap:
In a Min-Heap the data element present at the root node must be the
smallest (or minimum) among all the data elements present in the tree.
This property should be recursively true for all sub-trees of that binary tree.
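Both variants are available via java.util.PriorityQueue, which is a min-heap by default and becomes a max-heap with a reversed comparator; a small sketch:

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class HeapDemo {
    // returns {min-heap root, max-heap root} for the given values
    public static int[] roots(int[] values) {
        // min-heap: the smallest element is always at the root
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        // max-heap: reverse ordering keeps the greatest element at the root
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
        for (int x : values) {
            minHeap.add(x);
            maxHeap.add(x);
        }
        return new int[]{minHeap.peek(), maxHeap.peek()};
    }

    public static void main(String[] args) {
        int[] r = roots(new int[]{5, 1, 9, 3});
        System.out.println(r[0] + " " + r[1]); // 1 9
    }
}
```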
#include <bits/stdc++.h>
using namespace std;
class Solution{
public:
//function that takes a sorted array and its size as arguments
int removeDuplicates(int a[], int n){
    int index = 0;
    for(int i = 1; i < n; i++) {
        //keep a[i] only if it differs from the last element kept
        if(a[i] != a[index]) {
            index++;
            a[index] = a[i];
        }
    }
    //new length of the array with duplicates removed
    return index + 1;
}
};
int main()
{
int T;
//taking the number of test cases from user
cin>>T;
//running the loop for all test cases
while(T--)
{
int N;
//taking size input from user
cin>>N;
int a[N];
//taking array input from user
for(int i=0;i<N;i++)
{
cin>>a[i];
}
Solution ob;
//calling the removeDuplicates in the Solution class
int n = ob.removeDuplicates(a,N);
//printing the array after removing duplicates
for(int i=0;i<n;i++)
cout<<a[i]<<" ";
cout<<endl;
}
}
// Tree Node
struct Node {
int data;
Node* left;
Node* right;
};
if(temp->left)
st2.push(temp->left);
if(temp->right)
st2.push(temp->right);
}
//Iterate until the second stack is not empty
while(!st2.empty()){
Node* temp=st2.top();
st2.pop();
result.push_back(temp->data);
if(temp->right)
st1.push(temp->right);
if(temp->left)
st1.push(temp->left);
}
}
return result;
}
int prec(char c)
{
if (c == '^')
return 3;
else if (c == '/' || c == '*')
return 2;
else if (c == '+' || c == '-')
return 1;
else
return -1;
}
public:
// Function to convert an infix expression to a postfix expression.
string infixToPostfix(string s) {
stack<char> st; // For stack operations, we are using C++ built in stack
string result;
// If an operator is scanned
else {
    while (!st.empty()
           && prec(c) <= prec(st.top())) {
        // '^' is right-associative, so an equal-precedence '^' is not popped
        if (c == '^' && st.top() == '^')
            break;
        else {
            result += st.top();
            st.pop();
        }
    }
    st.push(c);
}
}
45. Write a function to find the maximum for each and every
contiguous subarray of size k.
Input: N = 9, K = 3 arr[] = {1, 2, 3, 1, 4, 5, 2, 3, 6}
Output: {3, 3, 4, 5, 5, 5, 6}
Explanation: In the first subarray of size 3: {1,2,3}, the value 3 is maximum,
similarly for all such subarrays for size 3.
}
j++;
while(!dq.empty()&&arr[dq.back()]<=arr[i])
dq.pop_back();
dq.push_back(i++);
}
ans.push_back(arr[dq.front()]);
return ans;
47. Write a function to print all unique rows of the given matrix.
Input:
{{1, 1, 1, 0, 0},
{0, 1, 0, 0, 1},
{1, 0, 1, 1, 0},
{0, 1, 0, 0, 1},
{1, 1, 1, 0, 0}}
Output:
{{1, 1, 1, 0, 0},
{0, 1, 0, 0, 1},
{1, 0, 1, 1, 0}}
return v;
}
pdt*=nums[right];
while(pdt>=k and left<nums.size()){
pdt/=nums[left];
left++;
}
if(right-left>=0)
    ans+=right-left+1; //since on adding a new element, the number of new subarrays formed is right-left+1
right++;
}
return ans;
}
Input: 8<->10<->1<->7<->6
Output: 1<->6<->7<->8<->10
class Solution{
public:
Node* partition(Node *l, Node *h){
//Your code goes here
Node*temp = h;
Node*tt = l;
Node*first = l;
while(tt != h){
if(tt->data <= temp->data){
swap(first->data, tt->data);
first = first->next;
}
tt = tt -> next;
}
swap(first-> data, h->data);
return first;
}
};
Time Complexity: O(n^2) in the worst case when the list is already sorted.
O(nlog(n)) in the best and average case.
Space Complexity: O(n)
class Solution
{
public:
//Function to connect nodes at the same level.
void connect(Node *p)
{
map<int,vector<Node *> > m;
queue<Node *> q;
queue<int> l;
q.push(p);
l.push(0);
while(!q.empty())
{
Node *temp=q.front();
int level=l.front();
q.pop();
l.pop();
m[level].push_back(temp);
if(temp->left!=NULL)
{
q.push(temp->left);
l.push(level+1);
}
if(temp->right!=NULL)
{
q.push(temp->right);
l.push(level+1);
}
}
for(map<int,vector<Node *> > ::iterator it=m.begin();it!=m.end();it++)
{
vector<Node *> temp1=it->second;
for(int i=0;i<temp1.size()-1;i++)
{
temp1[i]->nextRight=temp1[i+1];
}
temp1[temp1.size()-1]->nextRight=NULL;
}
}
};
Input: N = 3
Output: 5 for N = 3, there are 5 possible BSTs:
  1        3      3       2       1
   \      /      /       / \       \
    3    2      1       1   3       2
   /    /        \                   \
  2    1          2                   3
class Solution
{
public:
//function to calculate binomial coefficient C(n,k)
long long int binomialCoefficient(long long int n, long long int k)
{
    long long int res = 1;
    if (k > n - k)
        k = n - k;
    //res = n*(n-1)*...*(n-k+1) / (1*2*...*k)
    for (long long int i = 0; i < k; ++i)
    {
        res *= (n - i);
        res /= (i + 1);
    }
    return res;
}
//function to count unique BSTs using the Catalan number
long long int numTrees(long long int n)
{
    long long int C = binomialCoefficient(2 * n, n);
    // return 2nCn/(n+1)
    return C / (n + 1);
}
};
class LRUCache
{
private:
class node_t {
public:
int key;
int value;
node_t * next;
node_t * prev;
};
int cap;
node_t head;
unordered_map<int, node_t*> tbl;
class Solution {
public:
The main idea to solve this problem is to traverse the tree in preorder manner
and pass the level information along with it. If the level is visited for the first
time, then we store the information of the current node and the current level in
the hashmap. Basically, we are getting the left view by noting the first node of
every level.
At the end of the traversal, we can get the solution by just traversing the map.
Consider the following tree as an example for finding the left view:
Left view of a binary tree in Java:
import java.util.HashMap;
Node(int data) {
this.data = data;
}
}
public class InterviewBit
{
// traverse nodes in pre-order way
public static void leftViewUtil(Node root, int level, HashMap<Integer, Integer> map)
{
if (root == null) {
return;
}
// traverse the tree and find out the first nodes of each level
leftViewUtil(root, 1, map);
Assume that the boundary cases, i.e. all four edges of the grid,
are surrounded by water.
Constraints are:
m == grid.length
n == grid[i].length
1 <= m, n <= 300
grid[i][j] can only be ‘0’ or ‘1’.
Example:
Input: grid = [
[“1” , “1” , “1” , “0” , “0”],
[“1” , “1” , “0” , “0” , “0”],
[“0” , “0” , “1” , “0” , “1”],
[“0” , “0” , “0” , “1” , “1”]
]
Output: 3
class InterviewBit {
public int numberOfIslands(char[][] grid) {
if(grid==null || grid.length==0||grid[0].length==0)
return 0;
int m = grid.length;
int n = grid[0].length;
int count=0;
for(int i=0; i<m; i++){
for(int j=0; j<n; j++){
if(grid[i][j]=='1'){
count++;
mergeIslands(grid, i, j);
}
}
}
return count;
}
    private void mergeIslands(char[][] grid, int i, int j){
        if(i<0||i>=grid.length||j<0||j>=grid[0].length||grid[i][j]!='1')
            return;
        grid[i][j]='X'; // mark the cell as visited
        // recursively merge the four neighbouring cells
        mergeIslands(grid, i-1, j);
        mergeIslands(grid, i+1, j);
        mergeIslands(grid, i, j-1);
        mergeIslands(grid, i, j+1);
    }
}
Topological sorting is a linear ordering of vertices such that for every directed
edge (i, j), vertex i comes before vertex j in the ordering.
Topological sorting is only possible for Directed Acyclic Graph (DAG).
Applications:
1. jobs scheduling from the given dependencies among jobs.
2. ordering of formula cell evaluation in spreadsheets
3. ordering of compilation tasks to be performed in make files,
4. data serialization
5. resolving symbol dependencies in linkers.
Topological Sort Code in Java:
// V - total vertices
// visited - boolean array to keep track of visited nodes
// graph - adjacency list.
// Main Topological Sort Function.
void topologicalSort()
{
    Stack<Integer> stack = new Stack<Integer>();
    // run DFS from every unvisited vertex
    for (int i = 0; i < V; i++)
        if (!visited[i])
            topologicalSortUtil(i, stack);
    // the stack now holds a valid topological ordering
    while (!stack.empty())
        System.out.print(stack.pop() + " ");
}

// DFS helper: a vertex is pushed only after all its descendants
void topologicalSortUtil(int v, Stack<Integer> stack)
{
    visited[v] = true;
    for (int u : graph.get(v))
        if (!visited[u])
            topologicalSortUtil(u, stack);
    stack.push(v);
}
Conclusion
What is an Algorithm?
Algorithms and Data Structures are a crucial component of any technical coding
interview. It does not matter if you are a C++ programmer, a Java programmer, or a
Web developer using JavaScript, Angular, React, JQuery, or any other programming
language.
Now, the very first question that must be popping up in your head is: "What is an
algorithm?" Well, the answer to this question is: An algorithm is a finite sequence of
well-defined instructions used to solve a class of problems or conduct a computation
in mathematics and computer science.
Algorithms are used to specify how calculations, data processing, automated
reasoning, automated decision making, and other tasks should be done. An
algorithm is a method for calculating a function that can be represented in a finite
amount of space and time and in a well defined formal language. The instructions
describe a computation that, when run, continues through a finite number of well
defined subsequent stages, finally creating "output" and terminating at a final
ending state, starting from an initial state and initial input (possibly empty). The shift
from one state to the next is not always predictable; some algorithms, known as
randomised algorithms, take random input into account.
Before diving deep into algorithm interview questions, let us first understand the
need for Algorithms in today's world. The following are some of the benefits of using
algorithms in real-world problems.
Big O Notation:
The Big O notation defines an upper bound for an algorithm by bounding a
function from above. Consider insertion sort: in the best case it takes
linear time, and in the worst case it takes quadratic time, so we say its
time complexity is O(n^2). Big O is useful when we only have an
upper bound on an algorithm's time complexity.
a = a + b;
b = a - b; // this will act like (a+b) - b, and now b equals a.
a = a - b; // this will act like (a+b) - a, and now a equals b.
It is a clever trick. However, if the addition exceeds the maximum value of the int
primitive type as defined by Integer.MAX_VALUE in Java, or if the subtraction is less
than the minimum value of the int primitive type as defined by Integer.MIN_VALUE in
Java, there will be an integer overflow.
Using the XOR method:
Another way to swap two integers without needing a third variable (temporary
variable) is using the XOR method. This is often regarded as the better approach
because it avoids integer overflow entirely, which matters in languages such as Java, C,
and C++. Java has a number of bitwise operators, and XOR (denoted by ^) is one of them.
x = x ^ y; // x now holds x XOR y
y = x ^ y; // (x ^ y) ^ y = x, so y now holds the original x
x = x ^ y; // (x ^ y) ^ x = y, so x now holds the original y
Some of the algorithms which use the Divide and Conquer Algorithmic paradigm are
as follows:
Binary Search
Merge Sort
Strassen's Matrix Multiplication
Quick Sort
Closest pair of points.
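As an illustration of the divide-and-conquer paradigm, here is a minimal merge sort sketch in Java; the class and method names are illustrative, not from any particular library:

```java
class MergeSortDemo {
    // Sorts arr[lo..hi) by dividing it in half, sorting each half
    // recursively, and merging the two sorted halves.
    static void mergeSort(int[] arr, int lo, int hi) {
        if (hi - lo < 2) return;           // 0 or 1 elements: already sorted
        int mid = (lo + hi) / 2;
        mergeSort(arr, lo, mid);           // divide: left half
        mergeSort(arr, mid, hi);           // divide: right half
        merge(arr, lo, mid, hi);           // combine: merge sorted halves
    }

    // Merges the sorted ranges arr[lo..mid) and arr[mid..hi).
    static void merge(int[] arr, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi)
            tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i < mid) tmp[k++] = arr[i++];
        while (j < hi)  tmp[k++] = arr[j++];
        System.arraycopy(tmp, 0, arr, lo, tmp.length);
    }
}
```

Each level of recursion does O(n) merging work and there are O(log n) levels, giving the characteristic O(n log n) running time.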
Searching Algorithms are used to look for an element or get it from a data structure
(usually a list of elements). These algorithms are divided into two categories based
on the type of search operation:
Sequential Search: This method traverses the list of elements consecutively,
checking each element and reporting if the element to be searched is found.
Linear Search is an example of a Sequential Search Algorithm.
Interval Search: These algorithms were created specifically for searching sorted
data structures. Because they continually target the centre of the search
structure and divide the search space in half, these types of search algorithms
are far more efficient than Sequential Search algorithms. Binary Search is an
example of an Interval Search Algorithm.
The time complexity of the Linear Search Algorithm is O(n) where n is the size of the
list of elements and its space complexity is constant, that is, O(1).
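The two categories can be sketched side by side; this assumes a sorted int array for the interval search, and the class name is illustrative:

```java
class SearchDemo {
    // Sequential search: scan every element; O(n) time, O(1) space.
    static int linearSearch(int[] a, int key) {
        for (int i = 0; i < a.length; i++)
            if (a[i] == key) return i;
        return -1; // not found
    }

    // Interval search: halve the sorted range each step; O(log n) time.
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
            if (a[mid] == key) return mid;
            else if (a[mid] < key) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1; // not found
    }
}
```

Note that binarySearch is only correct when the input array is sorted; on unsorted data only the sequential scan applies.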
With Recursion (no DP): The time complexity of the given code will be exponential.
/* Sample C++ code for finding the nth Fibonacci number without DP */
int nFibonacci(int n){
    if(n == 0 || n == 1) return n;
    else return nFibonacci(n - 1) + nFibonacci(n - 2);
}
With DP: The time complexity of the given code will be linear because of Dynamic
Programming.
/* Sample C++ code for finding the nth Fibonacci number with DP */
int nFibonacci(int n){
    if(n == 0) return 0; // guard: fib[1] below would be out of bounds for n == 0
    vector<int> fib(n + 1);
    fib[0] = 0;
    fib[1] = 1;
    for(int i = 2; i <= n; i++){
        fib[i] = fib[i - 1] + fib[i - 2];
    }
    return fib[n];
}
A few problems which can be solved using the Dynamic Programming (DP)
Algorithmic Paradigm are as follows:
Finding the nth Fibonacci number
Finding the Longest Common Subsequence between two strings.
Finding the Longest Palindromic Substring in a string.
The discrete (or 0-1) Knapsack Problem.
Shortest Path between any two nodes in a graph (Floyd Warshall Algorithm)
Step 1: Start.
Step 2: We take two variables l and r.
Step 3: We set the values of l as 0 and r as (length of the string - 1).
Step 4: We interchange the values of the characters at positions l and r in the
string.
Step 5: We increment the value of l by one.
Step 6: We decrement the value of r by one.
Step 7: If the value of r is greater than the value of l, we go to step 4.
Step 8: Stop.
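The steps above translate directly into a two-pointer loop; a minimal Java sketch (class and method names are illustrative):

```java
class ReverseDemo {
    // Reverses a string using two indices l and r that move towards
    // each other, swapping characters as they go.
    static String reverse(String s) {
        char[] ch = s.toCharArray();
        int l = 0, r = ch.length - 1;     // Steps 2-3: initialise l and r
        while (l < r) {                   // Step 7: repeat while r > l
            char t = ch[l];               // Step 4: swap characters at l and r
            ch[l] = ch[r];
            ch[r] = t;
            l++;                          // Step 5: advance l
            r--;                          // Step 6: retreat r
        }
        return new String(ch);
    }
}
```

The loop touches each character once, so the reversal runs in O(n) time with O(1) extra index state.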
15. What do you understand about the DFS (Depth First Search)
algorithm?
Depth First Search or DFS is a technique for traversing or exploring data structures
such as trees and graphs. The algorithm starts at the root node (in the case of a
graph, any random node can be used as the root node) and examines each branch as
far as feasible before retracing. So the basic idea is to start at the root or any arbitrary
node and mark it, then advance to the next unmarked node and repeat until there
are no more unmarked nodes. After that, go back and check for any more unmarked
nodes to cross. Finally, print the path's nodes. The DFS algorithm is given below:
Step 1: Create a recursive function that takes the node's index and a visited array
as input.
Step 2: Mark the current node as visited and print it.
Step 3: Traverse all adjacent and unmarked nodes, calling the recursive function
with the index of each adjacent node.
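The recursive procedure above can be sketched as follows, assuming the graph is stored as an adjacency list (field and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

class DfsDemo {
    List<List<Integer>> adj;              // adjacency list, one list per node

    DfsDemo(int n) {
        adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
    }

    void addEdge(int u, int v) { adj.get(u).add(v); }

    // Step 1: recursive function taking the node's index and a visited array.
    void dfs(int node, boolean[] visited, List<Integer> order) {
        visited[node] = true;             // Step 2: mark the current node
        order.add(node);                  // record ("print") the node
        for (int next : adj.get(node))    // Step 3: recurse into unmarked neighbours
            if (!visited[next])
                dfs(next, visited, order);
    }
}
```

Starting from node 0 in a graph with edges 0→1, 0→2, 1→3, the traversal goes as deep as possible first, producing the order 0, 1, 3, 2.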
IDEA
CAST
CMEA
3-way
Blowfish
GOST
LOKI
DES and Triple DES.
Quicksort is a comparison sorting algorithm, which means it can sort objects of any
type that have a "less-than" relation (technically, a total order) declared for them.
Quicksort is not a stable sort, which means that the relative order of equal sort items
is not retained in efficient implementations. Quicksort (like the partition method)
must be written in such a way that it can be called for a range within a bigger array,
even if the end purpose is to sort the entire array, due to its recursive nature.
The following are the steps for in-place quicksort:
If there are less than two elements in the range, return immediately because
there is nothing else to do. A special-purpose sorting algorithm may be used for
other very small lengths, and the rest of these stages may be avoided.
Otherwise, choose a pivot value, which is a value that occurs in the range (the
precise manner of choice depends on the partition routine, and can involve
randomness).
Partition the range by reordering its elements while determining a point of
division so that all elements with values less than the pivot appear before the
division and all elements with values greater than the pivot appear a er it;
elements with values equal to the pivot can appear in either direction. Most
partition procedures ensure that the value that ends up at the point of division is
equal to the pivot, and is now in its ultimate location because at least one
instance of the pivot is present (but termination of quicksort does not depend
on this, as long as sub-ranges strictly smaller than the original are produced).
Apply the quicksort recursively to the sub-range up to the point of division and
the sub-range a er it, optionally removing the element equal to the pivot at the
point of division from both ranges. (If the partition creates a potentially bigger
sub-range near the boundary with all elements known to be equal to the pivot,
these can also be omitted.)
Quicksort's mathematical analysis reveals that, on average, it takes O(n log n) time
complexity to sort n items. In the worst-case scenario, it performs in time complexity
of O(n^2).
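A compact in-place version following these steps, using the Lomuto partition scheme with the last element as the pivot (a common textbook choice, not the only valid one):

```java
class QuickSortDemo {
    static void quickSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;             // fewer than two elements: nothing to do
        int p = partition(a, lo, hi);     // point of division
        quickSort(a, lo, p - 1);          // recurse on the left sub-range
        quickSort(a, p + 1, hi);          // recurse on the right sub-range
    }

    // Lomuto partition: pivot is a[hi]; smaller values move to the front,
    // and the pivot ends up at its final sorted position.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }
}
```

Choosing the last element as the pivot keeps the sketch short; a randomised pivot choice avoids the O(n^2) worst case on already-sorted input.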
First Pass:
(50 10 40 20 80) –> ( 10 50 40 20 80 ), Since 50 > 10, the algorithm compares
the first two elements and swaps them.
( 10 50 40 20 80 ) –> ( 10 40 50 20 80 ), Since 50 > 40, the algorithm swaps the
values at the second and third positions.
(10 40 50 20 80) –> (10 40 20 50 80), Since 50 > 20, the algorithm swaps the
third and fourth elements.
(10 40 20 50 80) -> ( 10 40 20 50 80 ), The method does not swap the fourth
and fifth elements because they are already in order (80 > 50).
Second Pass:
( 10 40 20 50 80 ) –> ( 10 40 20 50 80 ), Elements at the first and second
positions are in order, so no swapping.
( 10 40 20 50 80 ) –> ( 10 20 40 50 80 ), Since 40 > 20, the algorithm swaps the
values at the second and third positions.
( 10 20 40 50 80 ) –> ( 10 20 40 50 80 ), Elements at the third and fourth
positions are in order, so no swapping.
( 10 20 40 50 80 ) –> ( 10 20 40 50 80 ), Elements at the fourth and fifth
positions are in order, so no swapping.
The array is now sorted, but our algorithm is unsure whether it is complete. To know
if the algorithm is sorted, it must complete one complete pass without any swaps.
Third Pass:
( 10 20 40 50 80 ) –> ( 10 20 40 50 80 ), Elements at the first and second
positions are in order, so no swapping.
( 10 20 40 50 80 ) –> ( 10 20 40 50 80 ), Elements at the second and third
positions are in order, so no swapping.
( 10 20 40 50 80 ) –> ( 10 20 40 50 80 ), Elements at the third and fourth
positions are in order, so no swapping.
( 10 20 40 50 80 ) –> ( 10 20 40 50 80 ), Elements at the fourth and fifth
positions are in order, so no swapping.
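The passes above come from the standard bubble sort double loop; here is a sketch with the early-exit check the text describes, where a full pass with no swaps means the array is sorted:

```java
class BubbleSortDemo {
    static void bubbleSort(int[] a) {
        for (int pass = 0; pass < a.length - 1; pass++) {
            boolean swapped = false;
            // After each pass the largest remaining element has bubbled
            // to the end, so the inner loop shrinks by one each time.
            for (int i = 0; i < a.length - 1 - pass; i++) {
                if (a[i] > a[i + 1]) {    // adjacent pair out of order: swap
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                    swapped = true;
                }
            }
            if (!swapped) break;          // no swaps in a full pass: sorted
        }
    }
}
```

On the example array (50 10 40 20 80) the second pass performs the last swap and the third pass performs none, which is what lets the algorithm terminate.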
Step 1: Mark all nodes that have not been visited yet. The unvisited set is a
collection of all the nodes that have not been visited yet.
Step 2: Assign a tentative distance value to each node: set it to zero for our first
node and infinity for all others. The length of the shortest path discovered so far
between the node v and the initial node is the tentative distance of a node v.
Because no other vertex other than the source (which is a path of length zero) is
known at the start, all other tentative distances are set to infinity. Set the
current node to the beginning node.
Step 3: Consider all of the current node's unvisited neighbours and determine
their approximate distances through the current node. Compare the newly
calculated tentative distance to the current assigned value and choose the one
that is less. If the present node A has a distance of 5 and the edge linking it to a
neighbour B has a length of 3, the distance to B through A will be 5 + 3 = 8.
Change B to 8 if it was previously marked with a distance greater than 8. If this is
not the case, the current value will be retained.
Step 4: Mark the current node as visited and remove it from the unvisited set
once we have considered all of the current node's unvisited neighbours. A node
that has been visited will never be checked again.
Stop if the destination node has been marked visited (when planning a route
between two specific nodes) or if the smallest tentative distance between the
nodes in the unvisited set is infinity (when planning a complete traversal; occurs
when there is no connection between the initial node and the remaining
unvisited nodes). The algorithm is now complete.
Step 5: Otherwise, return to step 3 and select the unvisited node indicated with
the shortest tentative distance as the new current node.
It is not required to wait until the target node is "visited" as described above while
constructing a route: the algorithm can end once the destination node has the least
tentative distance among all "unvisited" nodes (and thus could be selected as the
next "current"). For arbitrary directed graphs with unbounded non-negative weights,
Dijkstra's algorithm is asymptotically the fastest known single-source shortest path
algorithm with time complexity of O(|E| + |V|log(|V|)), where |V| is the number of
nodes and |E| is the number of edges in the graph.
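The procedure above maps naturally onto a priority queue keyed by tentative distance; a minimal sketch for a graph stored as per-node lists of (neighbour, weight) pairs (all names are illustrative):

```java
import java.util.Arrays;
import java.util.PriorityQueue;

class DijkstraDemo {
    // edges[u] lists {v, w} pairs: an edge u -> v of non-negative weight w.
    static int[] shortestDistances(int[][][] edges, int src) {
        int n = edges.length;
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);   // Step 2: all tentative distances
        dist[src] = 0;                          // are infinite except the source
        // Queue entries are {distance, node}, smallest distance polled first.
        PriorityQueue<int[]> pq =
            new PriorityQueue<>((x, y) -> Integer.compare(x[0], y[0]));
        pq.add(new int[]{0, src});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int d = cur[0], u = cur[1];
            if (d > dist[u]) continue;          // stale entry: u already settled
            for (int[] e : edges[u]) {          // Step 3: relax each neighbour
                int v = e[0], w = e[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    pq.add(new int[]{dist[v], v});
                }
            }
        }
        return dist;
    }
}
```

Skipping stale queue entries plays the role of the visited set: once a node is polled with its final distance, later (larger) entries for it are ignored.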
23. Can we use the binary search algorithm for linked lists?
Justify your answer.
No, we cannot use the binary search algorithm for linked lists.
Explanation: Because random access is not allowed in linked lists, reaching the
middle element in constant or O(1) time is impossible. As a result, the usage of a
binary search algorithm on a linked list is not possible.
Note: The buildMaxHeap() operation runs only once, in linear, O(n), time. The
siftDown() function works in O(log n) time and is called n times. Therefore,
the overall time complexity of the heap sort algorithm is
O(n + n log(n)) = O(n log n).
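A sketch of the two phases the note refers to, buildMaxHeap followed by repeated siftDown; the method names follow the text, and this is the usual array-based heap sort:

```java
class HeapSortDemo {
    static void heapSort(int[] a) {
        int n = a.length;
        // buildMaxHeap: sift down every internal node once, O(n) overall.
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n);
        // Repeatedly move the current max to the end of the shrinking
        // heap and restore the heap property, n calls of O(log n) each.
        for (int end = n - 1; end > 0; end--) {
            int t = a[0]; a[0] = a[end]; a[end] = t;
            siftDown(a, 0, end);
        }
    }

    // Restores the max-heap property for the subtree rooted at i,
    // considering only the first n elements of the array.
    static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;
            if (child + 1 < n && a[child + 1] > a[child]) child++;
            if (a[i] >= a[child]) return;   // heap property already holds
            int t = a[i]; a[i] = a[child]; a[child] = t;
            i = child;                      // continue down the larger branch
        }
    }
}
```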
Conclusion
So, in conclusion, we would like to convey to our readers that algorithm interviews
are usually among the most crucial and toughest interviews in the recruitment
process of a lot of software companies, and a sound understanding of algorithms
usually implies that the candidate is good at logical thinking and can think out of
the box. Algorithm interview questions can be solved easily if one has a sound
understanding of algorithms and has gone through a lot of algorithm examples and
algorithm MCQs (which we will be covering in the next section of this article).
Therefore, we suggest that all the budding coders of today develop a strong grasp
of the various algorithms that have been discovered to date so that they can ace
their next technical interviews.
Useful Resources: