Layers of A DBMS

The document outlines the architecture and components of a Database Management System (DBMS), including query processing, buffer management, and disk space management. It discusses the memory hierarchy, file organizations, sorting algorithms, and indexing techniques, emphasizing the importance of efficient data retrieval and storage. Additionally, it presents cost models for various operations and the classification of indexes, highlighting the use of B+ trees as a widely adopted indexing method.


Layers of a DBMS

A query enters the Query Processor, where query optimization produces a
query execution plan that the execution engine runs. Below the query
processor sit three layers:

Files and access methods

Buffer management

Disk space management


The Memory Hierarchy
Main Memory:
• Volatile; limited address spaces; expensive
• Average access time: 10-100 nanoseconds

Disk:
• 5-10 MB/s transmission rates; 2-10 GB storage capacity
• Average time to access a block: 10-15 msecs
• Need to consider seek, rotation, and transfer times
• Keep records "close" to each other

Tape:
• 1.5 MB/s transfer rate; 280 GB typical capacity
• Only sequential access; not for operational data
Disk Space Manager
Task: manage the location of pages on disk (page = block).

Provides commands for:
• allocating and deallocating a page on disk
• reading and writing pages

Why not use the operating system for this task?
• Portability
• Limited size of address space
• May need to span several disk devices

[Figure: disk anatomy — spindle, platters, tracks, sectors, disk head,
arm assembly, arm movement.]
Buffer Management in a DBMS
[Figure: page requests from higher levels are served from the BUFFER
POOL in main memory; each frame holds one disk page, free frames are
filled from the DB on disk, and the choice of frame to evict is dictated
by the replacement policy.]

• Data must be in RAM for the DBMS to operate on it!
• A table of <frame#, pageid> pairs is maintained.
Buffer Manager
Manages the buffer pool: the pool provides space for a limited number
of pages from disk.

Needs to decide on a page replacement policy.

Enables the higher levels of the DBMS to assume that the needed data is
in main memory.

Why not use the Operating System for the task?
- DBMS may be able to anticipate access patterns
- Hence, may also be able to perform prefetching
- DBMS needs the ability to force pages to disk.
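The frame-replacement decision above can be sketched in a few lines. This is a minimal toy buffer pool using an LRU policy, which the slides do not prescribe (a real DBMS also tracks pin counts and dirty bits, and may prefer other policies); `BufferPool` and the `read_page` callback are illustrative names, not from the source.

```python
from collections import OrderedDict

class BufferPool:
    """Toy buffer pool with an LRU replacement policy.

    Illustration only: real buffer managers also track pin counts,
    dirty bits, and may force specific pages to disk.
    """
    def __init__(self, num_frames, read_page):
        self.num_frames = num_frames
        self.read_page = read_page      # callback simulating a disk read
        self.frames = OrderedDict()     # page_id -> page contents (LRU order)
        self.disk_reads = 0

    def get_page(self, page_id):
        if page_id in self.frames:            # hit: refresh LRU position
            self.frames.move_to_end(page_id)
            return self.frames[page_id]
        if len(self.frames) >= self.num_frames:
            self.frames.popitem(last=False)   # evict least-recently-used frame
        self.disk_reads += 1                  # miss: fetch page from "disk"
        page = self.read_page(page_id)
        self.frames[page_id] = page
        return page
```

With 2 frames and the request sequence 1, 2, 1, 3, the repeated request for page 1 is a hit, so only three disk reads occur and page 2 is the one evicted.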
Record Formats: Fixed Length
[Figure: a record with fields F1 F2 F3 F4 of lengths L1 L2 L3 L4; given
base address B, the address of F3 is B + L1 + L2.]

• Information about field types is the same for all records
  in a file; stored in system catalogs.
• Finding the i'th field does not require a scan of the record; its
  offset is computed from the field lengths.
• Note the importance of schema information!
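The offset arithmetic above can be written directly. A minimal sketch; `field_offset` is a hypothetical helper name, and fields are 0-indexed here:

```python
def field_offset(base, lengths, i):
    """Offset of the i-th field (0-based) in a fixed-length record.

    Address = B + L1 + ... + Li: a constant-time sum over the schema's
    field lengths, with no scan of the record itself.
    """
    return base + sum(lengths[:i])
```

For example, with base address 0 and field lengths [4, 8, 2, 4], the third field (index 2) starts at offset 4 + 8 = 12.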
Files of Records
• Page or block is OK when doing I/O, but higher
levels of DBMS operate on records, and files of
records.
• FILE: A collection of pages, each containing a
collection of records. Must support:
– insert/delete/modify record
– read a particular record (specified using record id)
– scan all records (possibly with some conditions on
the records to be retrieved)
File Organizations
– Heap files: Suitable when typical access is a file
scan retrieving all records.
– Sorted Files: Best if records must be retrieved in
some order, or only a `range’ of records is needed.
– Hashed Files: Good for equality selections.
• File is a collection of buckets. Bucket = primary
page plus zero or more overflow pages.
• Hashing function h: h(r) = bucket in which
record r belongs. h looks at only some of the
fields of r, called the search fields.
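The hashing function h described above can be sketched as follows. This is an illustrative stand-in, not the slides' definition: `bucket_of` is a hypothetical name, records are modeled as dicts, and CRC32 is used only to get a deterministic hash of the search fields:

```python
import zlib

def bucket_of(record, search_fields, num_buckets):
    """h(r): hash only the search fields of record r to choose a bucket.

    Two records that agree on the search fields always land in the
    same bucket, which is what makes equality selections cheap.
    """
    key = "|".join(str(record[f]) for f in search_fields)
    return zlib.crc32(key.encode()) % num_buckets
```

Records with equal search-field values hash to the same bucket, so an equality selection touches only that bucket's primary page (plus any overflow pages).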
Cost Model for Our Analysis
• As a good approximation, we ignore CPU
costs:
– B: The number of data pages
– R: Number of records per page
– D: (Average) time to read or write disk page
– Measuring number of page I/O’s ignores gains of
pre-fetching blocks of pages; thus, even I/O cost
is only approximated.
Sorting
• Illustrates the difference in algorithm design when
your data is not in main memory:
– Problem: sort 1 GB of data with 1 MB of RAM.
• Arises in many places in database systems:
– Data requested in sorted order (ORDER BY)
– Needed for grouping operations
– First step in sort-merge join algorithm
– Duplicate removal
– Bulk loading of B+-tree indexes.
2-Way Sort: Requires 3 Buffers
• Pass 1: Read a page, sort it, write it.
– only one buffer page is used
• Pass 2, 3, …, etc.:
– three buffer pages used.

[Figure: INPUT 1 and INPUT 2 buffers in main memory feed an OUTPUT
buffer; runs are read from and written back to disk.]
Two-Way External Merge Sort
• Each pass we read + write each page in the file.
• N pages in the file => the number of passes is  log 2 N  + 1.
• So total cost is: 2N ( log 2 N  + 1).
• Idea: Divide and conquer: sort subfiles and merge.

Example (each page shown as a group of records):
  Input file:           3,4 | 6,2 | 9,4 | 8,7 | 5,6 | 3,1 | 2
  Pass 0 (1-page runs): 3,4 | 2,6 | 4,9 | 7,8 | 5,6 | 1,3 | 2
  Pass 1 (2-page runs): 2,3,4,6 | 4,7,8,9 | 1,3,5,6 | 2
  Pass 2 (4-page runs): 2,3,4,4,6,7,8,9 | 1,2,3,5,6
  Pass 3 (8-page runs): 1,2,2,3,3,4,4,5,6,6,7,8,9
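The passes above can be simulated in a few lines. A minimal in-memory sketch, assuming each "page" is a small Python list and using `heapq.merge` as a stand-in for the buffered two-way merge (a real external sort streams pages through the three buffers rather than holding runs in memory):

```python
import heapq

def two_way_external_sort(pages):
    """Simulate two-way external merge sort over a list of 'pages'.

    Pass 0 sorts each page individually; each later pass merges runs
    pairwise, doubling the run length until one sorted run remains.
    """
    runs = [sorted(p) for p in pages]          # Pass 0: 1-page runs
    while len(runs) > 1:                       # Passes 1, 2, ...
        merged = []
        for i in range(0, len(runs), 2):
            pair = runs[i:i + 2]               # merge two adjacent runs
            merged.append(list(heapq.merge(*pair)))
        runs = merged
    return runs[0] if runs else []
```

Running it on the slide's 7-page example file reproduces the final 8-page run shown above.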
General External Merge Sort
More than 3 buffer pages: how can we utilize them?
• To sort a file with N pages using B buffer pages:
– Pass 0: use B buffer pages. Produce  N / B  sorted runs
of B pages each.
– Pass 1, 2, …, etc.: merge B-1 runs at a time.

[Figure: B-1 INPUT buffers feed one OUTPUT buffer in main memory (B
buffers total); runs stream in from disk and the merged output streams
back to disk.]
Cost of External Merge Sort
• Number of passes: 1 +  log B-1  N / B  
• Cost = 2N * (# of passes)
• E.g., with 5 buffer pages, to sort a 108-page file:
– Pass 0:  108 / 5  = 22 sorted runs of 5 pages each
(last run is only 3 pages)
– Pass 1:  22 / 4  = 6 sorted runs of 20 pages each
(last run is only 8 pages)
– Pass 2: 2 sorted runs, 80 pages and 28 pages
– Pass 3: sorted file of 108 pages
Number of Passes of External
Sort
N B=3 B=5 B=9 B=17 B=129 B=257
100 7 4 3 2 1 1
1,000 10 5 4 3 2 2
10,000 13 7 5 4 2 2
100,000 17 9 6 5 3 3
1,000,000 20 10 7 5 3 3
10,000,000 23 12 8 6 4 3
100,000,000 26 14 9 7 4 4
1,000,000,000 30 15 10 8 5 4
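The pass-count formula behind this table is easy to check mechanically. A short sketch (`external_sort_passes` is an illustrative name): Pass 0 produces ceil(N/B) runs, and each merge pass reduces the run count by a factor of B-1:

```python
import math

def external_sort_passes(n_pages, b_buffers):
    """Number of passes of external merge sort: 1 + ceil(log_{B-1} ceil(N/B)).

    Pass 0 makes ceil(N/B) sorted runs of B pages each; every later
    pass merges B-1 runs at a time until a single run remains.
    """
    runs = math.ceil(n_pages / b_buffers)
    merge_passes = math.ceil(math.log(runs, b_buffers - 1)) if runs > 1 else 0
    return 1 + merge_passes
```

This reproduces the worked example (108 pages, 5 buffers → 4 passes) and the table entries, e.g. N = 100, B = 3 → 7 passes and N = 10,000,000, B = 129 → 4 passes.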
Cost Model for Our Analysis
• As a good approximation, we ignore CPU costs:
– B: The number of data pages
– R: Number of records per page
– D: (Average) time to read or write disk page
– Measuring number of page I/O’s ignores gains of
pre-fetching blocks of pages; thus, even I/O cost is
only approximated.
– Average-case analysis; based on several simplistic
assumptions.
Assumptions in Our Analysis
• Single record insert and delete.
• Heap Files:
– Equality selection on key; exactly one match.
– Insert always at end of file.
• Sorted Files:
– Files compacted after deletions.
– Selections on sort field(s).
• Hashed Files:
– No overflow buckets, 80% page occupancy.
Cost of Operations

                   Heap File    Sorted File                        Hashed File
Scan all recs      BD           BD                                 1.25 BD
Equality Search    0.5 BD       D log2 B                           D
Range Search       BD           D (log2 B + # pages with matches)  1.25 BD
Insert             2D           Search + BD                        2D
Delete             Search + D   Search + BD                        2D
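The heap and sorted columns of this table can be written as small cost functions (the hashed column is analogous). A sketch under the slides' assumptions — equality on a key with exactly one match, D the average page I/O time; `heap_costs` and `sorted_costs` are illustrative names:

```python
import math

def heap_costs(B, D):
    """I/O cost of operations on a heap file of B pages (per the table).

    Equality search scans half the file on average (one match);
    delete = average search (0.5 BD) plus one write (D).
    """
    return {"scan": B * D,
            "equality": 0.5 * B * D,
            "range": B * D,
            "insert": 2 * D,            # read last page, write it back
            "delete": 0.5 * B * D + D}

def sorted_costs(B, D, match_pages=1):
    """I/O cost on a sorted file: binary search reads log2(B) pages.

    Insert/delete must shift the rest of the file: search + BD.
    """
    search = D * math.log2(B)
    return {"scan": B * D,
            "equality": search,
            "range": search + match_pages * D,
            "insert": search + B * D,
            "delete": search + B * D}
```

For example, with B = 100 and D = 1, a heap equality search costs 50 page I/Os on average while the sorted file's binary search costs only about 6.6.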
Indexes
• An index on a file speeds up selections on the
search key fields for the index.
– Any subset of the fields of a relation can be the
search key for an index on the relation.
– Search key is not the same as key (minimal set of
fields that uniquely identify a record in a relation).
• An index contains a collection of data entries,
and supports efficient retrieval of all data entries
k* with a given key value k.
Alternatives for Data Entry k* in
Index
• Three alternatives:
1. Data record with key value k
2. <k, rid of data record with search key value k>
3. <k, list of rids of data records with search key k>
• Choice of alternative for data entries is
orthogonal to the indexing technique used to
locate data entries with a given key value k.
– Examples of indexing techniques: B+ trees, hash-
based structures
Alternatives for Data Entries (2)
• Alternative 1:
– If this is used, index structure is a file organization for
data records (like Heap files or sorted files).
– At most one index on a given collection of data records
can use Alternative 1. (Otherwise, data records
duplicated, leading to redundant storage and potential
inconsistency.)
– If data records very large, # of pages containing data
entries is high. Implies size of auxiliary information in
the index is also large, typically.
Alternatives for Data Entries (3)
• Alternatives 2 and 3:
– Data entries typically much smaller than data
records. So, better than Alternative 1 with large
data records, especially if search keys are small.
– If more than one index is required on a given file,
at most one index can use Alternative 1; rest
must use Alternatives 2 or 3.
– Alternative 3 more compact than Alternative 2,
but leads to variable sized data entries even if
search keys are of fixed length.
Index Classification
• Primary vs. secondary: If search key contains
primary key, then called primary index.
• Clustered vs. unclustered: If order of data records
is the same as, or `close to’, order of data entries,
then called clustered index.
– Alternative 1 implies clustered, but not vice-versa.
– A file can be clustered on at most one search key.
– Cost of retrieving data records through index varies
greatly based on whether index is clustered or not!
Clustered vs. Unclustered Index

[Figure: in a CLUSTERED index, the order of data entries in the index
file matches the order of data records in the data file; in an
UNCLUSTERED index, the pointers from data entries to data records cross
arbitrarily.]
Index Classification (Contd.)
• Dense vs. Sparse: If there is at least one data
entry per search key value (in some data
record), then dense.
– Alternative 1 always leads to a dense index.
– Every sparse index is clustered!
– Sparse indexes are smaller.

[Figure: a sparse index on Name (entries Ashby, Cass, Smith) and a
dense index on Age (entries 22, 25, 30, 33, 40, 44, 44, 50) over a data
file with records Ashby,25,3000; Basu,33,4003; Bristow,30,2007;
Cass,50,5004; Daniels,22,6003; Jones,40,6003; Smith,44,3000;
Tracy,44,5004.]
Index Classification (Contd.)
• Composite Search Keys: search on a combination of fields.
– Equality query: every field value is equal to a constant
value. E.g., wrt <sal,age> index:
• age=20 and sal=75
– Range query: some field value is not a constant. E.g.:
• age=20; or age=20 and sal > 10

[Figure: examples of composite-key indexes using lexicographic order,
over data records bob,12,10; cal,11,80; joe,12,20; sue,13,75 (sorted by
name). <age,sal> entries: 11,80; 12,10; 12,20; 13,75. <age> entries:
11; 12; 12; 13. <sal,age> entries: 10,12; 20,12; 75,13; 80,11. <sal>
entries: 10; 20; 75; 80.]
Tree-Based Indexes
• ``Find all students with gpa > 3.0''
– If data is in a sorted file, do binary search to find the
first such student, then scan to find the others.
– Cost of binary search can be quite high.
• Simple idea: Create an `index' file.

[Figure: an index file with entries k1, k2, …, kN, one per page of the
data file (Page 1, Page 2, …, Page N).]

• Can do binary search on the (smaller) index file!
Tree-Based Indexes (2)
[Figure: an index entry is <P0, K1, P1, K2, P2, …, Km, Pm>: key Ki
separates the subtrees reached via pointers Pi-1 and Pi. Example tree:
root 40; second-level nodes 20,33 and 51,63; leaves 10* 15* | 20* 27* |
33* 37* | 40* 46* | 51* 55* | 63* 97*.]
B+ Tree: The Most Widely Used
Index
• Insert/delete at log_F N cost; keep tree height-
balanced. (F = fanout, N = # leaf pages)
• Minimum 50% occupancy (except for root).
Each node contains d <= m <= 2d entries.
The parameter d is called the order of the tree.

[Figure: the root and index entries form the upper levels; the data
entries sit in the leaves.]
Example B+ Tree
• Search begins at root, and key comparisons
direct it to a leaf.
• Search for 5*, 15*, all data entries >= 24* ...
13 17 24 30

2* 3* 5* 7* 14* 16* 19* 20* 22* 24* 27* 29* 33* 34* 38* 39*
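The key comparisons at each node amount to one binary search over the node's keys. A minimal sketch using `bisect` (`child_index` is an illustrative name), assuming the common convention that pointer Pi leads to keys in [Ki, Ki+1):

```python
from bisect import bisect_right

def child_index(node_keys, search_key):
    """Which child pointer to follow in a B+ tree node.

    Keys k1..km partition the key space: follow pointer i, where
    pointer i covers keys in [k_i, k_{i+1}) (pointer 0 covers keys
    below k1, pointer m covers keys >= k_m).
    """
    return bisect_right(node_keys, search_key)
```

Against the root [13, 17, 24, 30] above: searching for 5* follows child 0, 15* follows child 1, and 24* follows child 3 (the subtree covering [24, 30)).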
B+ Trees in Practice
• Typical order: 100. Typical fill-factor: 67%.
– average fanout = 133
• Typical capacities:
– Height 4: 133^4 = 312,900,721 records
– Height 3: 133^3 = 2,352,637 records
• Can often hold top levels in buffer pool:
– Level 1 = 1 page = 8 KBytes
– Level 2 = 133 pages = 1 MByte
– Level 3 = 17,689 pages = 133 MBytes
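These capacities are just powers of the fanout. A one-line sketch (`bplus_capacity` is an illustrative name; height counts levels below the root, as in the slide):

```python
def bplus_capacity(fanout, height):
    """Records reachable from the root of a B+ tree.

    With average fanout F, a tree of the given height reaches
    F**height leaf-level entries.
    """
    return fanout ** height
```

With fanout 133 this gives 2,352,637 records at height 3 and 312,900,721 at height 4, matching the figures above.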
Inserting a Data Entry into a B+
Tree
• Find correct leaf L.
• Put data entry onto L.
– If L has enough space, done!
– Else, must split L (into L and a new node L2)
• Redistribute entries evenly, copy up middle key.
• Insert index entry pointing to L2 into parent of L.
• This can happen recursively.
– To split index node, redistribute entries evenly, but
push up middle key. (Contrast with leaf splits.)
Inserting 8* into Example B+ Tree
• Observe how minimum occupancy is guaranteed in both leaf and index
page splits.
• Note the difference between copy-up and push-up.

[Figure: the leaf split sends 5 up — entry to be inserted in the parent
node; note that 5 is copied up and continues to appear in the leaf
(leaf becomes 2* 3* | 5* 7* 8*). The index-node split sends 17 up —
note that 17 is pushed up and only appears once in the index; contrast
this with a leaf split (node becomes 5 13 | 24 30).]
Example B+ Tree After Inserting 8*

[Figure: root 17; second-level nodes 5,13 and 24,30; leaves 2* 3* |
5* 7* 8* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]

• Notice that the root was split, leading to an increase in height.
• In this example, we could avoid the split by re-distributing
entries; however, this is usually not done in practice.
Deleting a Data Entry from a B+ Tree
• Start at root, find leaf L where entry belongs.
• Remove the entry.
– If L is at least half-full, done!
– If L has only d-1 entries,
• Try to re-distribute, borrowing from sibling (adjacent node with
same parent as L).
• If re-distribution fails, merge L and sibling.
• If merge occurred, must delete entry (pointing to L or sibling)
from parent of L.
• Merge could propagate to root, decreasing height.
Example Tree After (Inserting 8*, Then) Deleting 19* and 20*

[Figure: root 17; second-level nodes 5,13 and 27,30; leaves 2* 3* |
5* 7* 8* | 14* 16* | 22* 24* | 27* 29* | 33* 34* 38* 39*.]

• Deleting 19* is easy.
• Deleting 20* is done with re-distribution.
Notice how the middle key is copied up.
... And Then Deleting 24*
• Must merge.
• Observe the `toss' of the index entry 27 (the underfull leaf merges
with its sibling into 22* 27* 29*), and the `pull down' of the index
entry 17 when the underfull index node merges with its sibling.

[Figure: after the merge, the tree loses a level — the root becomes
5,13,17,30 with leaves 2* 3* | 5* 7* 8* | 14* 16* | 22* 27* 29* |
33* 34* 38* 39*.]