JNTUH Dbms Unit5
A DBMS stores data on external storage because the amount of data is very
large, must persist across program executions, and has to be fetched into main
memory when the DBMS processes it.
The unit of information for reading data from disk, or writing data to disk, is a
page. The size of a page is typically 4KB or 8KB.
Each record in a file has a unique identifier called a record id, or rid for short. A
rid has the property that we can identify the disk address of the page containing the
record by using the rid.
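As a sketch of this property, assume (purely for illustration) that a rid packs a page id into its high bits and a slot number into its low bits; real systems choose their own encodings:

SLOT_BITS = 16  # hypothetical layout: low 16 bits of a rid hold the slot number

def encode_rid(page_id: int, slot: int) -> int:
    return (page_id << SLOT_BITS) | slot

def decode_rid(rid: int) -> tuple[int, int]:
    # The page id recovered here is what locates the disk page holding the record.
    return rid >> SLOT_BITS, rid & ((1 << SLOT_BITS) - 1)

rid = encode_rid(page_id=42, slot=7)
assert decode_rid(rid) == (42, 7)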
Buffer Manager:
Data is read into memory for processing, and written to disk for persistent storage,
by a layer of software called the buffer manager. When the files and access
methods layer (which we often refer to as just the file layer) needs to process a
page, it asks the buffer manager to fetch the page, specifying the page's rid. The
buffer manager fetches the page from disk if it is not already in memory.
Space on disk is managed by the disk space manager. When the files and access
methods layer needs additional space to hold new records in a file, it asks the disk
space manager to allocate an additional disk page for the file.
Magnetic Disks:
Magnetic disks support direct access (transferring blocks of data between
memory and the system's peripheral devices without the participation of the
processor) to a desired location and are widely used for database applications. A
DBMS provides seamless access to data on disk; applications need not worry about
whether data is in main memory or disk.
Platter: A platter is a circular magnetic plate that is used for storing data in a hard
disk. It is often made of aluminum, glass substrate or ceramic. A hard disk drive
contains several platters. Each platter has two working surfaces. These surfaces of
the platters hold the recorded data.
Spindle: A typical HDD design consists of a spindle, which is a motor that holds
the platters.
Tracks: Each working surface of a platter is divided into thousands of tightly
packed concentric rings called tracks, which resemble the annual rings of a tree.
All the information stored on the hard disk is recorded in tracks. Track numbering
starts at zero at the outer edge of the platter and increases toward the inner side.
Each track can hold a large amount of data, counting to thousands of bytes.
Cylinder: The collection of all the tracks that are the same distance from the
edge of the platter is called a cylinder.
Read/Write Head: The data on a platter is read by read/write heads mounted on
a read/write arm. The read/write arm is also known as an actuator.
Arm assembly: The heads, each on a separate read/write arm, are controlled by a
common arm assembly which moves all heads simultaneously from one cylinder
to another.
Sectors: Each track is further broken down into smaller units called sectors. A
sector is the basic unit of data storage on a hard disk. Traditionally, each track has
the same number of sectors, which means that the sectors are packed much closer
together on tracks near the center of the disk. A single track can typically hold
thousands of sectors. The data size of a sector is always a power of two, and is
almost always either 512 or 4096 bytes.
Clusters: Sectors are often grouped together to form clusters. A cluster is the
smallest unit of storage the file system can allocate on a hard disk. If contiguous
clusters (clusters that are next to each other on the disk) are not available, the data
is written elsewhere on the disk, and the file is considered to be fragmented.
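To make the geometry concrete, here is a small sketch of classical disk addressing that maps a logical block address to a (cylinder, head, sector) triple; the geometry constants below are invented for illustration, not taken from any particular drive:

HEADS = 4                # working surfaces (e.g., 2 platters x 2 surfaces)
SECTORS_PER_TRACK = 63   # same sector count on every track (older drives)
SECTOR_SIZE = 512        # bytes; modern drives often use 4096

def lba_to_chs(lba: int) -> tuple[int, int, int]:
    # Map a logical block address to (cylinder, head, sector).
    # Sectors are conventionally numbered starting from 1.
    cylinder, rem = divmod(lba, HEADS * SECTORS_PER_TRACK)
    head, sector0 = divmod(rem, SECTORS_PER_TRACK)
    return cylinder, head, sector0 + 1

def byte_offset(lba: int) -> int:
    return lba * SECTOR_SIZE

print(lba_to_chs(300))   # -> (1, 0, 49): cylinder 1, head 0, sector 49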
File Organization and Indexing
A database consists of a huge amount of data. The data is grouped within tables in
an RDBMS, and each table holds related records. A user sees the data stored in the
form of tables, but this huge amount of data is actually stored on physical storage
in the form of files.
File Organization:
File Organization refers to the logical relationships among various records that
constitute the file, particularly with respect to the means of identification and
access to any specific record. In simple terms, storing the files in a certain order is
called file organization.
Indexing:
The main goal of designing a database is faster access to any data in the database
and quicker insert/delete/update of any data. When a database is very large, even
the smallest transaction takes time to perform. To reduce the time spent in
transactions, indexes are used. Indexes are similar to a book catalogue in a library,
or to the index in a book.
Indexing is a data structure technique which allows you to quickly retrieve records
from a database file. An index is a small table having only two columns. The first
column comprises a copy of the primary or candidate key of a table. The second
column contains a set of pointers holding the addresses of the disk blocks where
the specific key values are stored.
An index takes a search key as input and efficiently returns a collection of
matching records.
Types of Indexing:
Primary Index:
A primary index is an ordered file of fixed-length records with two fields. The
first field is the same as the primary key of the data file, and the second field
points to the corresponding data block. In a primary index, there is always a
one-to-one relationship between the entries in the index table.
Primary indexing in DBMS is further divided into two types:
Dense Index
Sparse Index
Dense Index: In a dense index, an index record is created for every search key
value in the database. This makes searching faster but needs more space to store
the index records. In this indexing method, the index records contain the search
key value and a pointer to the actual record on the disk.
[Figure: Dense index]
Sparse Index
An index record appears for only some of the values in the file. A sparse index
helps resolve the issues of dense indexing in DBMS. In this indexing technique, a
range of index columns stores the same data block address, and when data needs
to be retrieved, that block address is fetched. Because a sparse index stores index
records for only some search key values, it needs less space and less maintenance
overhead for insertions and deletions, but it is slower than a dense index for
locating records.
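A minimal sketch contrasting the two lookups, assuming a sorted data file laid out as a Python list of blocks (the block layout and data are invented for illustration):

import bisect

# Sorted data file: blocks of (search key, record) pairs.
blocks = [[(5, 'a'), (10, 'b')], [(15, 'c'), (20, 'd')], [(25, 'e'), (30, 'f')]]

# Dense index: one entry per search key value -> block number.
dense = {key: b for b, blk in enumerate(blocks) for key, _ in blk}

# Sparse index: one entry per block, holding the block's first key.
sparse_keys = [blk[0][0] for blk in blocks]          # [5, 15, 25]

def lookup_dense(key):
    b = dense[key]                                   # direct hit on the block
    return next(rec for k, rec in blocks[b] if k == key)

def lookup_sparse(key):
    b = bisect.bisect_right(sparse_keys, key) - 1    # last block starting <= key
    return next(rec for k, rec in blocks[b] if k == key)

assert lookup_dense(20) == lookup_sparse(20) == 'd'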
Clustered Indexing
A clustering index is defined on an ordered data file, where the data file is ordered
on a non-key field. In some cases, the index is created on non-primary-key
columns, which may not be unique for each record. In such cases, to identify the
records faster, we group two or more columns together to obtain unique values
and create an index out of them. This method is known as a clustering index.
Basically, records with similar characteristics are grouped together and indexes
are created for these groups.
For example, students studying in each semester are grouped together: 1st-semester
students, 2nd-semester students, 3rd-semester students, etc.
Example:
[Figure: Clustering index on the dno column — index entries with keys 1–6 and
pointers into data blocks B1–B4 of the ordered data file]
INDEX DATA STRUCTURES
Hashing is a technique in which the database management system finds the
location of a specific record on disk directly, without using an index structure.
Data is stored in blocks whose addresses are produced by a hash function. The
memory location where these records are stored is called a data bucket or data
block.
Key: A key in the Database Management system (DBMS) is a field or set of fields
that helps the relational database users to uniquely identify the row/records of the
database table.
Hash function: This mapping function maps the set of search keys to the
addresses where the actual records are located. It is a simple mathematical
function.
Linear Probing: A collision-resolution method in which the next available data
block is used to insert the new record, instead of overwriting the older data block.
Quadratic Probing: A collision-resolution method that determines the address of
a new data bucket, with the interval between probes growing quadratically.
Bucket Overflow: When a record is inserted and the address generated by the
hash function is not empty (data already exists at that address), the situation is
called bucket overflow.
Hashing is of two types:
1. Static Hashing
2. Dynamic Hashing
Static Hashing: In static hashing, the resultant data bucket address always
remains the same. Therefore, if you generate an address for, say, Student_ID = 10
using the hash function mod 3, the resultant bucket address will always be 1, so
you will not see any change in the bucket address. In this static hashing method,
the number of data buckets in memory always remains constant.
Searching: When a record needs to be retrieved, the same hash function is used to
derive the address of the bucket where the data is stored.
Delete a record: Using the hash function, first fetch the record that is to be
deleted, then remove the record from that address in memory.
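A small sketch of static hashing with the mod-3 hash function from the Student_ID example above (the bucket layout is illustrative):

N_BUCKETS = 3
buckets = [[] for _ in range(N_BUCKETS)]   # each bucket: list of (key, record)

def h(key):
    return key % N_BUCKETS                 # bucket address never changes

def insert(key, record):
    buckets[h(key)].append((key, record))

def search(key):
    return [rec for k, rec in buckets[h(key)] if k == key]

def delete(key):
    b = h(key)
    buckets[b] = [(k, r) for k, r in buckets[b] if k != key]

insert(10, 'student-10')
print(h(10), search(10))   # -> 1 ['student-10']: Student_ID 10 always in bucket 1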
DYNAMIC HASHING TECHNIQUES
EXTENDIBLE HASHING
In static hashing, performance degrades with overflow pages. This problem,
however, can be overcome by a simple idea: use a directory of pointers to buckets,
double the number of buckets by doubling just the directory, and split only the
bucket that overflowed. This is the central concept of extendible hashing.
It is a dynamic hashing method wherein directories and buckets are used to hash
data. It is an aggressively flexible method in which the hash function also
experiences dynamic changes.
Basic structure of extendible hashing:
[Figure: a directory of ids, indexed by the last global-depth bits of the hashed key,
pointing to buckets]
Global Depth: associated with the directories; it denotes the number of bits of the
hashed key used to index the directory (global depth = number of bits in a
directory id).
Local Depth: associated with the buckets rather than the directories; the local
depth, in accordance with the global depth, is used to decide the action to be
performed in case an overflow occurs. Local depth is always less than or equal
to the global depth.
Bucket Splitting: When the number of elements in a bucket exceeds a
particular size, the bucket is split into two parts.
Directory Expansion: Takes place when a bucket overflows and the local depth
of the overflowing bucket is equal to the global depth.
Consider inserting the following keys (binary forms in parentheses): 16 (10000),
4 (00100), 6 (00110), 22 (10110), 24 (11000), 10 (01010), 31 (11111), 7 (00111),
9 (01001), 20 (10100), 26 (11010).
Initially, the global depth and the local depth are 1. Thus, the hashing frame starts
with directory ids 0 and 1, each pointing to an empty bucket.
Inserting 16: The binary format of 16 is 10000 and the global depth is 1. The hash
function returns the 1 LSB of 10000, which is 0. Hence, 16 is mapped to the
directory with id = 0.
Inserting 4 and 6: Both 4 (100) and 6 (110) have 0 as their LSB, so they are
hashed into the same bucket as 16.
Inserting 22: The binary form of 22 is 10110; its LSB is 0. The bucket pointed to
by directory 0 is already full, so an overflow occurs.
Since local depth = global depth, the bucket splits and directory expansion takes
place. The numbers in the overflowing bucket are rehashed after the split, and
since the global depth is incremented by 1, the global depth is now 2. Hence,
16, 4, 6, 22 are rehashed with respect to their 2 LSBs [16 (10000), 4 (100),
6 (110), 22 (10110)].
Notice that the bucket that did not overflow remains untouched. But since the
number of directories has doubled, we now have two directories, 01 and 11,
pointing to the same bucket, because the local depth of that bucket is still 1.
Any bucket having a local depth less than the global depth is pointed to by
more than one directory.
Inserting 24 and 10: 24 (11000) and 10 (01010) hash to the directories with ids
00 and 10; no overflow occurs.
Inserting 31, 7, 9: All of these elements [31 (11111), 7 (111), 9 (1001)] have
either 01 or 11 as their 2 LSBs. Hence, they are mapped to the bucket pointed to
by 01 and 11. We do not encounter any overflow condition here.
Inserting 20: Insertion of data element 20 (10100) again causes an overflow.
20 hashes to the bucket pointed to by 00. Since the local depth of that bucket
equals the global depth, directory expansion (doubling) takes place along with
bucket splitting, and the elements in the overflowing bucket are rehashed with the
new global depth, which is now 3.
Inserting 26: The global depth is 3, so the 3 LSBs of 26 (11010) are considered;
26 therefore maps to the bucket pointed to by directory 010.
That bucket overflows, and since the local depth of the bucket < global depth
(2 < 3), the directories are not doubled; only the bucket is split and its elements
are rehashed.
Finally, the output of hashing the given list of numbers is obtained.
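A compact sketch of the mechanics walked through above — directory indexing by LSBs, directory doubling, and bucket splitting — assuming a bucket capacity of 3 as in the example:

BUCKET_SIZE = 3          # bucket capacity assumed in the worked example

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.keys = []

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]   # directory ids 0 and 1

    def _dir_index(self, key):
        # The last global_depth bits of the key pick the directory entry.
        return key & ((1 << self.global_depth) - 1)

    def insert(self, key):
        bucket = self.directory[self._dir_index(key)]
        if len(bucket.keys) < BUCKET_SIZE:
            bucket.keys.append(key)
            return
        # Overflow: expand the directory only if local depth == global depth.
        if bucket.local_depth == self.global_depth:
            self.directory += self.directory      # double the pointers only
            self.global_depth += 1
        # Split the overflowing bucket; rehash its keys w.r.t. one more bit.
        bucket.local_depth += 1
        image = Bucket(bucket.local_depth)
        for i, b in enumerate(self.directory):
            if b is bucket and (i >> (bucket.local_depth - 1)) & 1:
                self.directory[i] = image
        pending, bucket.keys = bucket.keys + [key], []
        for k in pending:
            self.insert(k)

h = ExtendibleHash()
for k in [16, 4, 6, 22, 24, 10, 31, 7, 9, 20, 26]:
    h.insert(k)
print(h.global_depth)   # -> 3, as in the worked example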
LINEAR HASHING
The scheme utilizes a family of hash functions h0, h1, h2, ..., with the property
that each function's range is twice that of its predecessor. That is, if hi maps a data
entry into one of M buckets, hi+1 maps a data entry into one of 2M buckets. Such
a family is typically obtained by choosing a hash function h and an initial number
N of buckets, and defining hi(value) = h(value) mod (2^i * N).
The idea is best understood in terms of rounds of splitting. During round number
Level, only hash functions hLevel and hLevel+1 are in use. The buckets in the file
at the beginning of the round are split, one by one from the first to the last bucket,
thereby doubling the number of buckets. At any given point within a round,
therefore, we have buckets that have been split, buckets that are yet to be split, and
buckets created by splits in this round.
Consider how we search for a data entry with a given search key value. We apply
hash function hLevel, and if this leads us to one of the unsplit buckets, we simply
look there. If it leads us to one of the split buckets, the entry may be there or it may
have been moved to the new bucket created earlier in this round by splitting this
bucket; to determine which of the two buckets contains the entry, we apply
hLevel+1.
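A sketch of this search rule, assuming hi(value) = value mod (2^i * N); the initial bucket count N and the round-bookkeeping variables below are illustrative:

N = 4               # initial number of buckets (assumed)
level = 0           # current round of splitting
next_to_split = 0   # buckets [0, next_to_split) were already split this round

def h(i, value):
    return value % (2**i * N)

def bucket_for(value):
    b = h(level, value)
    if b < next_to_split:        # this bucket was already split this round:
        b = h(level + 1, value)  # the entry may live in the split image instead
    return b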
Tree Based Indexing
1. ISAM (Indexed sequential access method)
- Static Index Structure
2. B+ Trees
- Dynamic Index Structure
B+ Tree: A B+ tree is a dynamic index structure, i.e., the height of the tree grows
and contracts as records are added and deleted.
A B+ tree is also known as a balanced tree, in which every path from the root of
the tree to a leaf is of the same length.
The leaf nodes of a B+ tree are linked, so a linear scan of all keys requires just
one pass through the leaf nodes.
A B+ tree combines features of ISAM and B-trees. It contains index pages and
data pages. The data pages always appear as leaf nodes in the tree, and the root
node and intermediate nodes are always index pages. These features are similar
to ISAM, but unlike ISAM, overflow pages are not used in B+ trees.
For order M:
Maximum number of keys per node = M - 1
Minimum number of keys per node = ceil(M/2) - 1
Maximum number of pointers/children per node = M
Minimum number of pointers/children per node = ceil(M/2)
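A quick check of these bounds for any order M (for example, order 3, which is used in the insertion problem below):

from math import ceil

def bplus_bounds(M):
    return {
        'max_keys': M - 1,
        'min_keys': ceil(M / 2) - 1,
        'max_children': M,
        'min_children': ceil(M / 2),
    }

print(bplus_bounds(3))
# -> {'max_keys': 2, 'min_keys': 1, 'max_children': 3, 'min_children': 2}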
Insertion in a B+ tree
Case 1: Overflow in leaf node
1. Split the leaf node into two nodes.
2. The first node contains ceil((m-1)/2) values.
3. The second node contains the remaining values.
4. Copy the smallest search key value from the second node to the parent
node (right biased).
Case 2: Overflow in non-leaf node
1. Split the non-leaf node into two nodes.
2. The first node contains ceil(m/2) - 1 values.
3. Move the smallest among the remaining values to the parent node.
4. The second node contains the remaining keys.
Example:
Problem: Insert the key values 6, 16, 26, 36, 46 into a B+ tree of order 3.
Solution:
Step 1: The order is 3, so a node can hold at most 2 search key values. Insertion in
a B+ tree happens at a leaf node, so insert search key values 6 and 16 into the
node in increasing order.
Step 2: We cannot insert 26 into the same node, as it causes an overflow in the
leaf node, so we split the leaf node according to the rules above. The first part
contains ceil((3-1)/2) = 1 value, i.e., only 6. The second node contains the
remaining values, 16 and 26. Then copy the smallest search key value from the
second node, 16, up to the parent node.
Step 3: The next value, 36, is to be inserted after 26, but it causes an overflow in
that leaf node again. Follow the same steps to split the node: the first part contains
ceil((3-1)/2) = 1 value, i.e., only 16. The second node contains the remaining
values, 26 and 36. Then copy the smallest search key value from the second
node, 26, up to the parent node.
Step 4: Inserting the last value, 46, into the leaf containing 26 and 36 causes one
more leaf overflow: the leaf splits into [26] and [36, 46] and 36 is copied up. The
parent now holds 16, 26, 36, which overflows a non-leaf node, so by Case 2 it
splits into [16] and [36], and the middle key 26 moves up to become the new root.
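A sketch of the right-biased leaf split applied in steps 2 and 3, showing how the keys divide and which key is copied up (the helper function is hypothetical, not part of any library):

from math import ceil

def split_leaf(keys, m):
    # First node keeps ceil((m-1)/2) keys; the rest move to the second node.
    cut = ceil((m - 1) / 2)
    left, right = keys[:cut], keys[cut:]
    copied_up = right[0]     # copied (not moved) to the parent; stays in the leaf
    return left, right, copied_up

print(split_leaf([6, 16, 26], 3))    # -> ([6], [16, 26], 16), as in step 2
print(split_leaf([16, 26, 36], 3))   # -> ([16], [26, 36], 26), as in step 3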
Deletion in B+ tree:
Before going through the steps below, one must know these facts about a B+ tree
of degree m.
1. A node can have a maximum of m children.
2. A node can contain a maximum of m - 1 keys.
3. A node should have a minimum of ⌈m/2⌉ children.
4. A node (except root node) should contain a minimum of ⌈m/2⌉ - 1 keys.
While deleting a key, we have to take care of the keys present in the internal
nodes (i.e. indexes) as well because the values are redundant in a B+ tree.
Search for the key to be deleted, then follow the steps below.
Case I
The key to be deleted is present only at a leaf node, not in the indexes (internal
nodes). There are two cases:
1. There are more than the minimum number of keys in the node. Simply delete
the key.
(The figures that follow use m = 3: max. children = 3, min. children =
ceil(3/2) = 2, max. keys = m - 1 = 2, min. keys = ceil(3/2) - 1 = 1.)
2. There are exactly the minimum number of keys in the node. Delete the key and
borrow a key from the immediate sibling. Add the median key of the sibling
node to the parent.
Case II
The key to be deleted is also present in the internal nodes. Then we have to
remove it from the internal nodes as well. The following cases arise in this
situation.
1. If there are more than the minimum number of keys in the node, simply delete
the key from the leaf node and delete the key from the internal node as well.
Fill the empty space in the internal node with the inorder successor.
[Figure: deleting 45]
2. If there are exactly the minimum number of keys in the node, delete the key
and borrow a key from its immediate sibling (through the parent). Fill the
empty space created in the index (internal node) with the borrowed key.
[Figure: deleting 35]
3. This case is similar to Case II(1), but here the empty space is generated above
the immediate parent node. After deleting the key, merge the empty space with
that of its sibling, and fill the empty space in the grandparent node with the
inorder successor.
Case III
In this case, the height of the tree shrinks. It is a little complicated: deleting 55
from the tree leads to this condition, as the illustrations show.
[Figure: deleting 55 reduces the height of the tree]
ISAM Trees
Indexed Sequential Access Method (ISAM) trees are static index structures.
[Figure: an ISAM tree. Root page with entry 40; non-leaf pages with entries
20, 33 and 51, 63; leaf pages 10* 15* | 20* 27* | 33* 37* | 40* 46* | 51* 55* |
63* 97*. The file consists of non-leaf pages, primary leaf pages, and overflow
pages chained to the primary pages.]
ISAM File Creation
How is an ISAM file created? All leaf pages are allocated sequentially and sorted
on the search key value; the non-leaf index pages are then allocated on top of
them.
[Figure: index entries 20, 33, 51, 63 over the sorted leaf pages 10* 15* | 20* 27* |
33* 37* | 40* 46* | 51* 55* | 63* 97*]
ISAM: Inserting Entries
The appropriate page is determined as for a search, and the entry is inserted,
with overflow pages added if necessary.
Insert 23*: [Figure: the leaf page holding 20* and 27* is full, so 23* goes into an
overflow page chained to it]
Insert 48*: [Figure: the leaf page holding 40* and 46* is full, so 48* goes into an
overflow page chained to it]
Insert 41*: [Figure: 41* is added to the overflow page of the leaf holding 40* and
46*, alongside 48*]
Insert 42*: [Figure: the overflow page holding 48* and 41* is full, so 42* goes
into a second overflow page chained to the first]
ISAM: Deleting Entries
The appropriate page is determined as for a search, and the entry is deleted, with
overflow pages removed ONLY when they become empty.
Delete 42*: [Figure: 42* is removed; the overflow page that held it becomes
empty and is removed from the chain]
Delete 51*: [Figure: 51* is removed from its primary leaf page]
Note that 51 still appears in an index entry, but not in the leaf!
Delete 55*: [Figure: 55* is removed; the primary leaf page that held 51* and 55*
is now empty, but primary pages are never removed]
If the data distribution and size are relatively static, ISAM might be a good
choice to pursue!
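A minimal sketch of an ISAM-style lookup over the example tree, flattening the static index into a sorted separator list and scanning the overflow chain of the selected leaf (page contents come from the figures; the flattened layout is a simplification for illustration):

import bisect

index_keys = [20, 33, 40, 51, 63]        # static separator keys, flattened
primary = [[10, 15], [20, 27], [33, 37], [40, 46], [51, 55], [63, 97]]
overflow = {1: [23], 3: [48, 41, 42]}    # overflow chains after the inserts above

def search(key):
    page = bisect.bisect_right(index_keys, key)     # pick the primary leaf page
    return key in primary[page] or key in overflow.get(page, [])

assert search(42)      # found on the overflow chain of the 40*/46* leaf
assert search(23)      # found on the overflow chain of the 20*/27* leaf
assert not search(99)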
File Organization
A file is a collection of records. Using the primary key, we can access the
records. The type and frequency of access are determined by the type of file
organization used for a given set of records.
File organization is a logical relationship among the various records; it defines
how file records are mapped onto disk blocks.
Sequential File Organization (Pile File Method)
Insertion of a new record: Suppose we have records R1, R3, and so on up to R9
and R8 in a sequence (records are nothing but rows in a table). If we want to
insert a new record R2 into the sequence, it is simply placed at the end of the file.
Heap File Organization
It is the simplest and most basic type of organization. It works with data blocks.
In heap file organization, records are inserted at the end of the file. When records
are inserted, no sorting or ordering of the records is required.
When a data block is full, the new record is stored in some other block. The new
data block need not be the very next data block: the DBMS can select any data
block in memory to store the new record. A heap file is also known as an
unordered file.
In the file, every record has a unique id, and every page in the file is of the same
size. It is the DBMS's responsibility to store and manage the new records.
Insertion of a new record: Suppose we have five records R1, R3, R6, R4, and R5
in a heap and we want to insert a new record R2. If data block 3 is full, R2 is
inserted into whichever data block the DBMS selects, say data block 1.
If we want to search, update, or delete data in a heap file organization, we must
traverse the data from the start of the file until we reach the requested record.
If the database is very large, searching, updating, or deleting a record is
time-consuming, because there is no sorting or ordering of the records; we need
to check all the data until we find the requested record.
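A sketch of why heap-file search is linear, using invented record ids:

heap = [('R1', 'row 1'), ('R3', 'row 3'), ('R6', 'row 6'),
        ('R4', 'row 4'), ('R5', 'row 5')]   # no ordering among records

def find(record_id):
    # O(n): must scan from the start of the file until the record turns up.
    for position, (rid, data) in enumerate(heap):
        if rid == record_id:
            return position, data
    return None

print(find('R4'))   # -> (3, 'row 4'): found only after passing R1, R3, R6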
Hash File Organization
When a record has to be retrieved using the hash key columns, the address is
generated by the hash function and the whole record is fetched from that address.
In the same way, when a new record has to be inserted, the address is generated
using the hash key and the record is inserted there directly. The same process
applies to delete and update operations.
In this method, no effort is spent searching and sorting the entire file; each record
is stored randomly in memory.
B+ Tree File Organization
A B+ tree is a data structure that uses a tree-like structure for storing and
accessing records from a file.
It is an enhanced version of ISAM (Indexed Sequential Access Method). This file
organization can store data that does not fit in the system's main memory.
This file organization uses the concept of a key index: the primary key is used to
sort the records, and the index value of a database record is the record's address
in the file.
A B+ tree is similar to a binary search tree, but a node can have more than two
children. In this file organization, all the records are stored at the leaf nodes, and
the intermediate nodes act as pointers to the leaf nodes that hold the records;
intermediate nodes do not contain any record data. The following figure shows
how values are stored in a B+ tree file organization:
[Figure: a B+ tree with root 30, an intermediary layer of index nodes, and sorted
records in the leaf nodes]
In this B+ tree, 30 is the only root node of the tree, also known as the main node
of the B+ tree. There is an intermediary layer of nodes that stores the addresses of
the leaf nodes, not the actual records. Only the leaf nodes contain the records, in
sorted order.
In the above B+ tree, the leaf nodes hold the values 10, 15, 22, 26, 28, 33, 34, 38,
40. Since all the leaf nodes of the tree are sorted, the records can be searched
easily.
Clustered File Organization
A cluster is defined as "two or more related tables or records stored within the
same file". The related column of the tables in the cluster is called the cluster key,
and the cluster key is used to map the tables together. This method minimizes the
cost of accessing and searching the various records, because they are combined
and available in a single cluster.
Example:
Suppose we have two tables whose names are Student and Subject. Both of the
following given tables are related to each other.
Student
[Table: Student — Student_ID, Student_Name, Student_Age; its rows appear in
the combined cluster table below]
Subject
Subject_ID | Subject_Name
C01 | Math
C02 | Java
C03 | C
C04 | DBMS
Therefore, both these tables, Student and Subject, can be combined using a join
operation and seen in the cluster file as follows.
Student + Subject
Cluster Key
Subject_ID | Subject_Name | Student_ID | Student_Name | Student_Age
C01 | Math | 101 | Raju | 20
C01 | Math | 103 | Ravi | 21
C02 | Java | 104 | Rajesh | 22
C03 | C | 105 | Ranjith | 21
C03 | C | 107 | Rahul | 20
C04 | DBMS | 102 | Ramesh | 20
C04 | DBMS | 106 | Ravinder | 20
C04 | DBMS | 108 | Rudra | 21
If we have to perform insert, update, or delete operations on the records, we can
perform them directly, because the data is sorted on the key used for searching
and accessing. In the given table (Student + Subject), the cluster key is
Subject_ID.
Comparison of file organizations:
Sequential: suitable for bulk data access, report generation, statistical
calculations, etc.
Heap: supports multiple transactions; suitable for online transactions.
Hash: inefficient for large tables; searching a range of data or partial data is
inefficient; having multiple hash keys, or a frequently updated column as the
hash key, is inefficient.
B+ tree: any of the columns can be used as the key column; searching a range of
data and partial data is efficient; no performance degradation on
insert/delete/update; grows and shrinks with the data; works well on secondary
storage devices, reducing disk I/O; since all data is at the leaf nodes, searching is
easy; the data at the leaf nodes forms a sorted, sequentially linked list.
Cluster: suitable for 1:M mappings.
Indexes and Performance Tuning
An index is a copy of selected columns of data from a table that can be searched
very efficiently. Although indexing adds some overhead in the form of additional
writes and storage space to maintain the index data structure, the key purpose of
implementing an index, in the various available ways, is to improve the lookup
mechanism: it must improve the performance of data matching by reducing the
time taken to match the query value.
Now, let's understand how an index is actually programmed and stored, and what
makes lookups fast when an index is created.
Index entries are also "rows", containing the indexed column(s) and some sort of
pointer into the base table data. When an index is used to fetch a row, the index is
walked until it finds the row(s) of interest, and the base table is then looked up to
fetch the actual row data.
When data is inserted, a corresponding row is written to the index, and when a
row is deleted, its index row is taken out. This keeps the data and the search index
always in sync, making lookups very fast and read-time efficient.
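A minimal sketch of this mechanism, modeling the index as a sorted list of (key, pointer) rows kept in sync with a base table (the names and layout are invented for illustration):

import bisect

table = {}        # base table: row_id -> row data
index = []        # sorted list of (key, row_id) "index rows"

def insert(row_id, key, row):
    table[row_id] = row
    bisect.insort(index, (key, row_id))      # index row written alongside

def delete(row_id, key):
    del table[row_id]
    index.remove((key, row_id))              # index row taken out: stays in sync

def lookup(key):
    i = bisect.bisect_left(index, (key,))    # walk the index to the key...
    rows = []
    while i < len(index) and index[i][0] == key:
        rows.append(table[index[i][1]])      # ...then fetch the base-table rows
        i += 1
    return rows

insert(1, 'alice', {'name': 'alice', 'age': 30})
insert(2, 'bob', {'name': 'bob', 'age': 25})
print(lookup('bob'))   # -> [{'name': 'bob', 'age': 25}]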
Indexing is implemented using specific architectures and approaches. Some
architectures worth mentioning are:
Clustered index:
1. A clustered index is similar to a telephone directory, where the data itself is
arranged by the search key.
2. A table can have only one clustered index; however, one clustered index can
include multiple columns, just as a telephone directory is arranged by first
name and last name.
Non-clustered index:
1. A non-clustered index is similar to the index of a textbook, where the data is
stored in one place and the index in another place.
2. The index has pointers to the storage location of the data.
3. Since non-clustered indexes are stored separately, a table can have more than
one non-clustered index.
4. In the index itself, the data is stored in ascending or descending order of the
index key.