Query Processing: Sorting

The document discusses sorting large datasets that exceed available memory by using external merge sort. It describes how external merge sort works by writing sorted runs to disk and then merging them. The document also covers using B+ trees to retrieve sorted data if the tree is clustered on the sorting columns.

Uploaded by

Palak Rathore


Query Processing 2:

Sorting & Joins


R&G, Chapter 13
Lecture 14

One of the advantages of being disorderly is that one is
constantly making exciting discoveries.
– A. A. Milne
Questions
• We learned that the same query can be written many ways.
  – How does the DBMS decide which is best?
• We learned about tree & hash indexes.
  – How does the DBMS know when to use them?
• Sometimes we need to sort data.
  – How do we sort more data than will fit in memory?
Review: Query Processing
• Queries start out as SQL
• Database translates SQL to one or more Relational Algebra plans
• Plan is a tree of operations, with an access path for each
• Access path is how each operator gets tuples
  – If working directly on a table, can use scan or index
  – Some operators, like sort-merge join or group-by, need tuples sorted
  – Often, operators are pipelined, getting tuples that are output from earlier operators in the tree
• Database estimates cost for various plans, chooses the least expensive
Review: Operator Costs
• Selection – either scan all tuples, or use an index
• Projection – expensive only if duplicates are eliminated; requires sorting or hashing
• Join – several algorithms:
  – Nested Loops
  – Indexed Nested Loops
  – Sort-Merge
  – Hash Join

[Figure: example plan tree – π_sname (on-the-fly) over σ_bid=100 ∧ rating>5 (on-the-fly) over Reserves ⋈_sid=sid Sailors (simple nested loops)]
Review: Cost Estimation
• To compute operator costs, DBMS needs:
  – Sizes of relations
  – Info on indexes (type, size)
  – Info on attribute values (high, low, distribution, etc.)
  – This info is stored in catalogs
• Cost often related to input size
  – Need to compute the reduction factor: how big the output will be compared to how big it could be
• Query costs can vary hugely between plans
What we’ll cover:
• Efficient sorting
• Different join algorithms
• Later – developing query plans, computing costs
Why Sort?
• A classic problem in computer science!
• Databases need it for many operations:
  – e.g., find students in increasing gpa order
  – first step in bulk loading a B+ tree index
  – eliminating duplicates
  – aggregating related groups of tuples
  – the sort-merge join algorithm involves sorting
• Problem: sort 1 GB of data with 1 MB of RAM.
  – why not virtual memory?
Streaming Data Through RAM
• An important detail for sorting & other DB operations
• Simple case: compute f(x) for each record, write out the result
  – Read a page from INPUT to the Input Buffer
  – Write f(x) for each item to the Output Buffer
  – When the Input Buffer is consumed, read another page
  – When the Output Buffer fills, write it to OUTPUT
• Reads and writes are not coordinated
  – E.g., if f() is Compress(), you read many pages per write.
  – E.g., if f() is DeCompress(), you write many pages per read.

[Figure: INPUT → Input Buffer → f(x) → Output Buffer → OUTPUT, with both buffers inside RAM]
2-Way Sort
• Pass 0: Read a page, sort it, write it.
  – only one buffer page is used (as in previous slide)
• Pass 1, 2, …, etc.:
  – merge pairs of runs into runs twice as long
  – requires 3 buffer pages: INPUT 1, INPUT 2, OUTPUT

[Figure: two runs on disk streamed through INPUT 1 and INPUT 2 buffers, merged into the OUTPUT buffer, written back to disk]
Two-Way External Merge Sort
• Each pass we read + write each page in the file.
• N pages in the file ⇒ the number of passes = ⌈log₂ N⌉ + 1
• So total cost is: 2N (⌈log₂ N⌉ + 1)
• Idea: divide and conquer: sort subfiles and merge

[Example: input file of 7 pages: 3,4 | 6,2 | 9,4 | 8,7 | 5,6 | 3,1 | 2
  Pass 0 → 1-page runs: 3,4 | 2,6 | 4,9 | 7,8 | 5,6 | 1,3 | 2
  Pass 1 → 2-page runs: 2,3,4,6 | 4,7,8,9 | 1,3,5,6 | 2
  Pass 2 → 4-page runs: 2,3,4,4,6,7,8,9 | 1,2,3,5,6
  Pass 3 → one 8-page run: 1,2,2,3,3,4,4,5,6,6,7,8,9]
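As an illustrative sketch (not part of the slides), the passes of the two-way algorithm can be simulated in a few lines of Python; `two_way_sort_passes` is a name introduced here, and `pages` is the 7-page example file above:

```python
import math

def two_way_sort_passes(pages):
    """Simulate two-way external merge sort on a list of pages
    (each page is a list of keys). Returns (final run, # of passes)."""
    # Pass 0: sort each page individually -> 1-page runs.
    runs = [sorted(p) for p in pages]
    passes = 1
    # Passes 1, 2, ...: merge runs pairwise until one run remains.
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs), 2):
            pair = runs[i] + (runs[i + 1] if i + 1 < len(runs) else [])
            merged.append(sorted(pair))  # stands in for a streaming 2-way merge
        runs = merged
        passes += 1
    return runs[0], passes

# The slide's 7-page example: passes = ceil(log2(7)) + 1 = 4
pages = [[3, 4], [6, 2], [9, 4], [8, 7], [5, 6], [3, 1], [2]]
run, passes = two_way_sort_passes(pages)
assert passes == math.ceil(math.log2(len(pages))) + 1 == 4
assert run == [1, 2, 2, 3, 3, 4, 4, 5, 6, 6, 7, 8, 9]
```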
General External Merge Sort
• More than 3 buffer pages. How can we utilize them?
• To sort a file with N pages using B buffer pages:
  – Pass 0: use B buffer pages. Produce ⌈N/B⌉ sorted runs of B pages each.
  – Pass 1, 2, …, etc.: merge B−1 runs at a time.

[Figure: sorted runs on disk streamed through INPUT 1 … INPUT B−1 buffers, merged into the OUTPUT buffer, written back to disk; B main memory buffers]
Cost of External Merge Sort
• Number of passes: 1 + ⌈log_{B−1} ⌈N/B⌉⌉
• Cost = 2N * (# of passes)
• E.g., with 5 buffer pages, to sort a 108-page file:
  – Pass 0: ⌈108/5⌉ = 22 sorted runs of 5 pages each (last run is only 3 pages)
  – Now, do four-way (B−1) merges
  – Pass 1: ⌈22/4⌉ = 6 sorted runs of 20 pages each (last run is only 8 pages)
  – Pass 2: 2 sorted runs, 80 pages and 28 pages
  – Pass 3: sorted file of 108 pages
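The pass-count formula can be checked mechanically; this is an illustrative helper (names `sort_passes` and `sort_cost` are introduced here, not from the slides), using integer arithmetic to avoid floating-point logarithms:

```python
def sort_passes(N, B):
    """Passes of external merge sort: 1 + ceil(log_{B-1}(ceil(N/B)))."""
    runs = -(-N // B)            # ceil(N/B) sorted runs after pass 0
    passes = 1
    while runs > 1:              # each later pass does (B-1)-way merges
        runs = -(-runs // (B - 1))
        passes += 1
    return passes

def sort_cost(N, B):
    """Total I/O: read + write all N pages once per pass."""
    return 2 * N * sort_passes(N, B)

# Slide example: 108-page file with B = 5 buffers: 22 -> 6 -> 2 -> 1 runs
assert sort_passes(108, 5) == 4
assert sort_cost(108, 5) == 2 * 108 * 4
# Agrees with the pass-count table, e.g. N = 1,000,000 pages with B = 129
assert sort_passes(1_000_000, 129) == 3
```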
Number of Passes of External Sort

( I/O cost is 2N times number of passes)

N B=3 B=5 B=9 B=17 B=129 B=257


100 7 4 3 2 1 1
1,000 10 5 4 3 2 2
10,000 13 7 5 4 2 2
100,000 17 9 6 5 3 3
1,000,000 20 10 7 5 3 3
10,000,000 23 12 8 6 4 3
100,000,000 26 14 9 7 4 4
1,000,000,000 30 15 10 8 5 4
Internal Sort Algorithm
• Quicksort is a fast way to sort in memory.
• Alternative: “tournament sort” (a.k.a. “heapsort”, “replacement selection”)
• Keep two heaps in memory, H1 and H2:

  read B-2 pages of records, inserting into H1;
  while (records left) {
    m = H1.removemin(); put m in output buffer;
    if (H1 is empty)
      H1 = H2; H2.reset(); start new output run;
    else
      read in a new record r (use 1 buffer for input pages);
      if (r < m) H2.insert(r);
      else H1.insert(r);
  }
  H1.output(); start new run; H2.output();
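The pseudocode above can be turned into a runnable sketch with `heapq`; this is an illustration of replacement selection (the function name and `heap_size` parameter, standing in for the B−2 in-memory pages, are introduced here):

```python
import heapq

def replacement_selection(records, heap_size):
    """Yield sorted runs from an input stream; heap_size plays
    the role of the B-2 pages of records held in memory (H1)."""
    it = iter(records)
    h1 = []                          # heap for the current run
    for r in it:
        h1.append(r)
        if len(h1) == heap_size:
            break
    heapq.heapify(h1)
    h2 = []                          # records held over for the next run
    run = []
    for r in it:
        m = heapq.heappop(h1)        # smallest key eligible for this run
        run.append(m)
        if r < m:
            h2.append(r)             # r cannot extend the current run
        else:
            heapq.heappush(h1, r)
        if not h1:                   # current run exhausted: start the next
            yield run
            run = []
            h1, h2 = h2, []
            heapq.heapify(h1)
    # Drain: finish the current run, then emit the held-over records.
    run.extend(heapq.nsmallest(len(h1), h1))
    if run:
        yield run
    if h2:
        yield sorted(h2)

data = [5, 1, 8, 2, 9, 3, 0, 7, 4, 6]
runs = list(replacement_selection(data, 3))
assert all(r == sorted(r) for r in runs)
assert sorted(x for r in runs for x in r) == sorted(data)
```

On random input the first run here comes out 5 records long with a 3-record heap, in line with the 2(B−2) average-run-length claim on the next slide.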
More on Heapsort
• Fact: average length of a run is 2(B−2)
  – The “snowplow” analogy
• Worst-case:
  – What is the min length of a run?
  – How does this arise?
• Best-case:
  – What is the max length of a run?
  – How does this arise?
• Quicksort is faster, but … longer runs often mean fewer passes!
I/O for External Merge Sort
• Actually, we do I/O a page at a time
  – Not an I/O per record
• In fact, read a block (chunk) of pages sequentially!
• Suggests we should make each buffer (input/output) be a chunk of pages.
  – But this will reduce fan-out during merge passes!
  – In practice, most files still sorted in 2-3 passes.
Number of Passes of Optimized Sort

N B=1,000 B=5,000 B=10,000


100 1 1 1
1,000 1 1 1
10,000 2 2 1
100,000 3 2 2
1,000,000 3 2 2
10,000,000 4 3 3
100,000,000 5 3 3
1,000,000,000 5 4 3
 Block size = 32, initial pass produces runs of size 2B.
Sorting Records!
• Sorting has become a competition between labs, universities, DBMS vendors
  – Parallel sorting is the name of the game ...
• Minute Sort: how many 100-byte records can you sort in a minute?
  – Typical DBMS: 10MB (~100,000 records)
  – Current world record: 116 GB
    • 80 Itanium2s running Linux
• Terabyte Sort: how fast can you sort 1TB?
  – Current record: 7.25 minutes (80 Itanium2s)
• See http://research.microsoft.com/barc/SortBenchmark/
Using B+ Trees for Sorting
• Scenario: table to be sorted has a B+ tree index on the sorting column(s).
• Idea: can retrieve records in order by traversing the leaf pages.
• Is this a good idea?
• Cases to consider:
  – B+ tree is clustered → Good idea!
  – B+ tree is not clustered → Could be a very bad idea!
Clustered B+ Tree Used for Sorting
• Cost: traverse from the root to the left-most leaf, then retrieve all leaf pages (Alternative 1)
• If Alternative 2 is used? Additional cost of retrieving data records: each page fetched just once.
• Better than external sorting!

[Figure: index (directs search) over the data entries ("sequence set"), which lead to the data records in sorted order]
Unclustered B+ Tree Used for Sorting
• Alternative (2) for data entries; each data entry contains the rid of a data record. In general, one I/O per data record!

[Figure: index (directs search) over the data entries ("sequence set"); the rids point to data records scattered across pages]
External Sorting vs. Unclustered Index

N Sorting p=1 p=10 p=100


100 200 100 1,000 10,000
1,000 2,000 1,000 10,000 100,000
10,000 40,000 10,000 100,000 1,000,000
100,000 600,000 100,000 1,000,000 10,000,000
1,000,000 8,000,000 1,000,000 10,000,000 100,000,000
10,000,000 80,000,000 10,000,000 100,000,000 1,000,000,000

 p: # of records per page


 B=1,000 and block size=32 for sorting
 p=100 is the more realistic value.
Sorting - Review
• External sorting is important; DBMS may dedicate
part of buffer pool for sorting!
• External merge sort minimizes disk I/O cost:
– Pass 0: Produces sorted runs of size B (# buffer pages).
Later passes: merge runs.
– # of runs merged at a time depends on B, and block
size.
– Larger block size means less I/O cost per page.
– Larger block size means smaller # runs merged.
– In practice, # of passes rarely more than 2 or 3.
Sorting – Review (cont)
• Choice of internal sort algorithm may matter:
– Quicksort: Quick!
– Heap/tournament sort: slower (2x), longer runs
• The best sorts are wildly fast:
– Despite 40+ years of research, we’re still
improving!
• Clustered B+ tree is good for sorting;
unclustered tree is usually very bad.
Joins
• How does DBMS join two tables?

• Sorting is one way...

• Database must choose best way for each query


Schema for Examples

Sailors (sid: integer, sname: string, rating: integer, age: real)


Reserves (sid: integer, bid: integer, day: dates, rname: string)

• Similar to old schema; rname added for variations.


• Reserves:
– Each tuple is 40 bytes long,
– 100 tuples per page,
– M = 1000 pages total.
• Sailors:
– Each tuple is 50 bytes long,
– 80 tuples per page,
– N = 500 pages total.
Equality Joins With One Join Column

SELECT *
FROM Reserves R1, Sailors S1
WHERE R1.sid=S1.sid

• In algebra: R ⋈ S. Common! Must be carefully optimized. R × S is large; so, R × S followed by a selection is inefficient.
• Assume: M pages in R, pR tuples per page; N pages in S, pS tuples per page.
  – In our examples, R is Reserves and S is Sailors.
• We will consider more complex join conditions later.
• Cost metric: # of I/Os. We will ignore output costs.
Simple Nested Loops Join

foreach tuple r in R do
  foreach tuple s in S do
    if ri == sj then add <r, s> to result

• For each tuple in the outer relation R, we scan the entire inner relation S.
  – Cost: M + pR * M * N = 1000 + 100*1000*500 I/Os.
• Page-oriented Nested Loops join: for each page of R, get each page of S, and write out matching pairs of tuples <r, s>, where r is in the R-page and s is in the S-page.
  – Cost: M + M*N = 1000 + 1000*500
  – If the smaller relation (S) is the outer, cost = 500 + 500*1000
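The two cost formulas above can be checked with a small helper (illustrative; the function names are introduced here, using the Reserves/Sailors numbers M=1000, pR=100, N=500):

```python
def simple_nl_cost(M, p_R, N):
    """Tuple-at-a-time simple nested loops: M + p_R * M * N I/Os."""
    return M + p_R * M * N

def page_nl_cost(outer_pages, inner_pages):
    """Page-oriented nested loops: read outer once, inner once per outer page."""
    return outer_pages + outer_pages * inner_pages

assert simple_nl_cost(1000, 100, 500) == 50_001_000
assert page_nl_cost(1000, 500) == 501_000   # Reserves as outer
assert page_nl_cost(500, 1000) == 500_500   # smaller Sailors as outer is cheaper
```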
Block Nested Loops Join
• Use one page as an input buffer for scanning the inner S, one page as the output buffer, and use all remaining pages to hold a “block” of the outer R.
  – For each matching tuple r in R-block, s in S-page, add <r, s> to result. Then read the next R-block, scan S, etc.

[Figure: hash table for a block of R (k < B−1 pages) in memory, plus an input buffer for S and an output buffer; matches written to the join result]
Examples of Block Nested Loops
• Cost: scan of outer + #outer blocks * scan of inner
  – #outer blocks = ⌈# of pages of outer / blocksize⌉
• With Reserves (R) as outer, and 100-page blocks of R:
  – Cost of scanning R is 1000 I/Os; a total of 10 blocks.
  – Per block of R, we scan Sailors (S); 10*500 I/Os.
  – If space for just 90 pages of R, we would scan S 12 times.
• With 100-page blocks of Sailors as outer:
  – Cost of scanning S is 500 I/Os; a total of 5 blocks.
  – Per block of S, we scan Reserves; 5*1000 I/Os.
• With sequential reads considered, the analysis changes: it may be best to divide buffers evenly between R and S.
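As a sketch of the algorithm itself (not from the slides; `block_nl_join` is a name introduced here), treating lists of tuples as relations and joining on the first field, with a dict standing in for the per-block hash table:

```python
def block_nl_join(R, S, block_size):
    """Block nested loops equijoin on the first field: R is read in blocks,
    each block is joined against one full scan of S via an in-memory hash table."""
    out = []
    for i in range(0, len(R), block_size):
        block = {}                       # hash table over this R-block's join keys
        for r in R[i:i + block_size]:
            block.setdefault(r[0], []).append(r)
        for s in S:                      # one scan of S per R-block
            for r in block.get(s[0], []):
                out.append((r, s))
    return out

R = [(1, 'a'), (2, 'b'), (2, 'c'), (3, 'd')]
S = [(2, 'x'), (3, 'y'), (4, 'z')]
result = block_nl_join(R, S, block_size=2)
assert sorted(result) == [((2, 'b'), (2, 'x')),
                          ((2, 'c'), (2, 'x')),
                          ((3, 'd'), (3, 'y'))]
```

Note the I/O pattern this models: S is scanned once per R-block, so a bigger `block_size` (more buffer pages for R) means fewer scans of S.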
Index Nested Loops Join

foreach tuple r in R do
  foreach tuple s in S where ri == sj do
    add <r, s> to result

• If there is an index on the join column of one relation (say S), can make it the inner and exploit the index.
  – Cost: M + ( (M*pR) * cost of finding matching S tuples)
• For each R tuple, the cost of probing the S index is about 1.2 for a hash index, 2-4 for a B+ tree. The cost of then finding S tuples (assuming Alt. (2) or (3) for data entries) depends on clustering.
  – Clustered index: 1 I/O (typical); unclustered: up to 1 I/O per matching S tuple.
Examples of Index Nested Loops
• Hash-index (Alt. 2) on sid of Sailors (as inner):
  – Scan Reserves: 1000 page I/Os, 100*1000 tuples.
  – For each Reserves tuple: 1.2 I/Os to get the data entry in the index, plus 1 I/O to get the (exactly one) matching Sailors tuple: 100,000 * 2.2 = 220,000 I/Os, for a grand total of 221,000.
• Hash-index (Alt. 2) on sid of Reserves (as inner):
  – Scan Sailors: 500 page I/Os, 80*500 tuples.
  – For each Sailors tuple: 1.2 I/Os to find the index page with data entries, plus the cost of retrieving matching Reserves tuples. Assuming uniform distribution, 2.5 reservations per sailor (100,000 / 40,000). The cost of retrieving them is 1 or 2.5 I/Os depending on whether the index is clustered.
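The arithmetic above can be packaged in a tiny helper (illustrative; `inl_cost` is a name introduced here). It includes the outer scan, so it gives the grand total rather than just the per-probe portion:

```python
def inl_cost(M_pages, p_R, probe_io, match_io):
    """Index nested loops: scan outer (M pages) + one index probe and
    match retrieval per outer tuple (M * p_R tuples in total)."""
    return M_pages + M_pages * p_R * (probe_io + match_io)

# Hash index (Alt. 2) on Sailors.sid as inner: 1.2 I/Os per probe,
# 1 I/O for the single matching Sailors tuple, 1000-page scan of Reserves.
cost = inl_cost(1000, 100, 1.2, 1.0)
assert round(cost) == 221_000
```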
Sort-Merge Join (R ⋈_{i=j} S)
• Sort R and S on the join column, then scan them to do a “merge” (on the join column), and output result tuples.
  – Advance the scan of R until the current R-tuple >= the current S-tuple, then advance the scan of S until the current S-tuple >= the current R-tuple; do this until the current R-tuple = the current S-tuple.
  – At this point, all R tuples with the same value in Ri (current R group) and all S tuples with the same value in Sj (current S group) match; output <r, s> for all pairs of such tuples.
  – Then resume scanning R and S.
• R is scanned once; each S group is scanned once per matching R tuple. (Multiple scans of an S group are likely to find the needed pages in the buffer.)
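The merge step above can be sketched in Python (illustrative, not from the slides; `sort_merge_join` is a name introduced here). Inputs are lists of tuples already sorted on the first field, and matching groups produce their full cross product:

```python
def sort_merge_join(R, S):
    """Equijoin on the first field of pre-sorted inputs; emits the
    cross product of each matching R-group and S-group."""
    out = []
    i = j = 0
    while i < len(R) and j < len(S):
        if R[i][0] < S[j][0]:
            i += 1
        elif R[i][0] > S[j][0]:
            j += 1
        else:
            key = R[i][0]
            i2 = i                      # find the extent of each group
            while i2 < len(R) and R[i2][0] == key:
                i2 += 1
            j2 = j
            while j2 < len(S) and S[j2][0] == key:
                j2 += 1
            for r in R[i:i2]:           # cross product of the two groups
                for s in S[j:j2]:
                    out.append((r, s))
            i, j = i2, j2
    return out

# Sailors and Reserves rows from the example slide, keyed on sid
R = [(22, 'dustin'), (28, 'yuppy'), (31, 'lubber'), (44, 'guppy'), (58, 'rusty')]
S = [(28, 103), (28, 103), (31, 101), (31, 102), (31, 101), (58, 103)]
assert len(sort_merge_join(R, S)) == 2 + 3 + 1   # sids 28, 31, 58
```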
Example of Sort-Merge Join

Sailors:                         Reserves:
sid  sname   rating  age         sid  bid  day       rname
22   dustin  7       45.0        28   103  12/4/96   guppy
28   yuppy   9       35.0        28   103  11/3/96   yuppy
31   lubber  8       55.5        31   101  10/10/96  dustin
44   guppy   5       35.0        31   102  10/12/96  lubber
58   rusty   10      35.0        31   101  10/11/96  lubber
                                 58   103  11/12/96  dustin

• Cost: M log M + N log N + (M+N)
  – The cost of scanning, M+N, could be M*N (very unlikely!)
• With 35, 100 or 300 buffer pages, both Reserves and Sailors can be sorted in 2 passes; total join cost: 7500. (BNL cost: 2500 to 15000 I/Os)
Cost of Sorting for Sort-Merge Join
• Optimization 1: use more buffer pages B
• Cost of sorting both tables when B buffer pages are used:
  2 * (M + N) * (# of passes to sort)
• # of passes to sort the bigger table (M pages) = 1 + ⌈log_{B−1} ⌈M/B⌉⌉
• Total = 2 * (M + N) * (1 + ⌈log_{B−1} ⌈M/B⌉⌉)
Refinement of Sort-Merge Join
• We can combine the merging phases in the sorting of R and S with the merging required for the join.
  – With B > √L, where L is the size of the larger relation, using the sorting refinement that produces runs of length 2B in Pass 0, the # of runs of each relation is < B/2.
  – Allocate 1 page per run of each relation, and “merge” while checking the join condition.
  – Cost: read+write each relation in Pass 0 + read each relation in the (only) merging pass (+ writing of result tuples).
  – In our example, the cost goes down from 7500 to 4500 I/Os.
• In practice, the cost of sort-merge join, like the cost of external sorting, is linear.
Optimized Sort-Merge Join Cost
• Optimization 2:
  If the buffer satisfies B² > size of the bigger table (i.e., B > √M), sorting both tables (M+N pages) can be done in 2 passes:
  1) 2 * (M + N) * 2 – to sort both tables
  2) (M + N) – to merge the two sorted tables
  Total = 5 * (M + N)
• Optimization 3:
  If we combine the final merging phase of the sort (in the second pass) with the merging phase of the join:
  – No need to write back the two fully sorted tables → save (M+N)
  – No need to re-read the two sorted tables to find matches → save (M+N)
  Total = 3 * (M + N)
Best Case Cost of Sort-Merge Join
• Best case: when both tables are already sorted
• Very efficient: no sorting cost; the only cost is the merging phase
  Total = M + N
Hash-Join
• Partition both relations using hash function h: R tuples in partition i will only match S tuples in partition i.

[Figure: the original relation on disk is read through an input buffer and hashed by h into B−1 partitions via B−1 output buffers; the partitions of R & S are written back to disk]

• Read in a partition of R and build a hash table for it using h2 (<> h!). Scan the matching partition of S, probing for matches.

[Figure: hash table for partition Ri (k < B−1 pages) built with h2; input buffer for Si and an output buffer; matches written to the join result]
Observations on Hash-Join
• # partitions k < B-1 (why?), and B-2 > size of the largest partition to be held in memory. Assuming uniformly sized partitions, and maximizing k, we get:
  – k = B-1 for the partitioning phase, so each partition is M/(B-1) pages, and
  – M/(B-1) < B-2 for probing, i.e., B must be > √M
• If we build an in-memory hash table to speed up the matching of tuples, a little more memory is needed.
• If the hash function does not partition uniformly, one or more R partitions may not fit in memory. Can apply the hash-join technique recursively to join this R-partition with the corresponding S-partition.
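The two phases can be sketched as follows (illustrative, not from the slides; `hash_join` and `num_partitions` are names introduced here). Lists of tuples stand in for relations, `h` is the partitioning hash, and a dict plays the role of the in-memory table built with h2:

```python
def hash_join(R, S, num_partitions=4):
    """Grace-style hash join on the first field: partition both inputs
    with h, then build + probe an in-memory table per partition."""
    h = lambda key: hash(key) % num_partitions
    R_parts = [[] for _ in range(num_partitions)]
    S_parts = [[] for _ in range(num_partitions)]
    for r in R:                        # partitioning phase
        R_parts[h(r[0])].append(r)
    for s in S:
        S_parts[h(s[0])].append(s)
    out = []
    for Ri, Si in zip(R_parts, S_parts):   # matching phase
        table = {}                     # dict hashing stands in for h2 (<> h)
        for r in Ri:
            table.setdefault(r[0], []).append(r)
        for s in Si:                   # probe with the S side of the partition
            for r in table.get(s[0], []):
                out.append((r, s))
    return out

R = [(1, 'a'), (2, 'b'), (2, 'c')]
S = [(2, 'x'), (3, 'y')]
assert sorted(hash_join(R, S)) == [((2, 'b'), (2, 'x')), ((2, 'c'), (2, 'x'))]
```

Because tuples with equal keys land in the same partition, each partition pair can be joined independently, which is also why the algorithm parallelizes well.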
Cost of Hash-Join
• In the partitioning phase, read+write both relations: 2(M+N). In the matching phase, read both relations: (M+N) I/Os.
  Total = 3 (M + N)
• In our running example, this is a total of 4500 I/Os.
• With recursive partitioning (buffer B):
  – Cost of recursive partitioning: 2 * (M + N) * (# of partitioning passes)
  – # of passes to partition the bigger table (M pages, B-1 partitions per pass) = 1 + ⌈log_{B−1} (M/(B−1))⌉
  – Total cost = 2 * (M + N) * (1 + ⌈log_{B−1} (M/(B−1))⌉) + (M+N)
Sort-Merge Join vs. Hash Join
• Given a minimum amount of memory (what is this, for each?) both have a cost of 3(M+N) I/Os.
• Hash join is superior on this count if the relation sizes differ greatly.
• Hash join has been shown to be highly parallelizable.
• Hash join is bad when the data is heavily skewed (some big partitions cause recursive partitioning).
• Sort-merge is less sensitive to data skew than hash join.
• The result of sort-merge is sorted!
General Join Conditions
• Equalities over several attributes (e.g., R.sid=S.sid AND R.rname=S.sname):
  – For Index NL, build an index on <sid, sname> (if S is inner); or use existing indexes on sid or sname.
  – For Sort-Merge and Hash Join, sort/partition on the combination of the two join columns.
• Inequality conditions (e.g., R.rname < S.sname):
  – For Index NL, need a (clustered!) B+ tree index.
    • Range probes on inner; # matches likely to be much higher than for equality joins.
  – Hash Join, Sort-Merge Join not applicable.
  – Block NL quite likely to be the best join method here.
Conclusions
• Database needs to run queries fast

• Sorting efficiently is one factor

• Choosing the right join another factor

• Next time: optimizing all parts of a query
