Query Processing Concepts


Query Processing (Concepts)

Objectives

 Query processing and optimization.
 Static versus dynamic query optimization.
 How a query is decomposed and semantically analyzed.
 How to create a relational algebra tree to represent a query.
 The rules of equivalence for the relational algebra operations.

2
(Cont.)

 Heuristic transformation rules.
 The different strategies for implementing the relational algebra operations.
 The difference between materialization and pipelining.
 The advantages of left-deep trees.

3
Query Processing

 The activities involved in retrieving data from the database.
 The aims of query processing:
(1) to transform a query written in a high-level language into a low-level language;
(2) to execute the strategy to retrieve the required data.

4
(Cont.)

 Query processing can be divided into four main phases: decomposition, optimization, code generation, and execution.
Query Decomposition

 The aims of query decomposition:
(1) to transform a high-level query into a relational algebra query;
(2) to check that the query is syntactically and semantically correct.

6
(Cont.)

 The typical stages of query decomposition are analysis, normalization, semantic analysis, simplification, and query restructuring.

7
Analysis

 The query is lexically and syntactically analyzed using the techniques of programming language compilers.
 Verifies that the relations and attributes specified in the query are defined in the system catalog.
 Verifies that any operations applied to database objects are appropriate for the object type.
8
(Cont.)

 On completion of the analysis, the high-level query has been transformed into some internal representation (query tree) that is more suitable for processing.
[Figure: a query tree, with a root, intermediate operations, and leaves.]

9
Normalization

 Converts the query into a normalized form that can be more easily manipulated.
 There are two different normal forms: conjunctive normal form and disjunctive normal form.

10
Conjunctive normal form

 A sequence of conjuncts that are connected with the ∧ (and) operator. Each conjunct contains one or more terms connected by the ∨ (or) operator.
For example:
(position = ‘Manager’ ∨ salary > 20000) ∧ branchNo = ‘B003’

11
Disjunctive normal form

 A sequence of disjuncts that are connected with the ∨ (or) operator. Each disjunct contains one or more terms connected by the ∧ (and) operator.
For example:
(position = ‘Manager’ ∧ branchNo = ‘B003’) ∨ (salary > 20000 ∧ branchNo = ‘B003’)

12
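The equivalence of the two normal forms above can be checked mechanically. Below is a minimal sketch using boolean placeholders for the slide's three predicate terms (the function and variable names are illustrative, not part of any DBMS):

```python
from itertools import product

# The slide's two normalized forms of the same predicate, written over
# boolean placeholders:
#   p = (position = 'Manager'), s = (salary > 20000), b = (branchNo = 'B003')
def cnf(p, s, b):          # conjunctive normal form
    return (p or s) and b

def dnf(p, s, b):          # disjunctive normal form
    return (p and b) or (s and b)

# Exhaustive truth-table check: the two forms agree on every assignment,
# so they are equivalent ways of writing the same qualification.
assert all(cnf(p, s, b) == dnf(p, s, b)
           for p, s, b in product([False, True], repeat=3))
print("CNF and DNF forms are equivalent")
```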
Semantic analysis

 The objective is to reject normalized queries that are incorrectly formulated or contradictory.

13
Simplification

 To detect redundant qualifications, eliminate common subexpressions, and transform the query to a semantically equivalent but more easily and efficiently computed form.
 Access restrictions, view definitions, and integrity constraints are considered at this stage.

14
Query restructuring

 The final stage of query decomposition.
 The query is restructured to provide a more efficient implementation.

15
Query optimization

 The activity of choosing an efficient execution strategy for processing a query.
 An important aspect of query processing is query optimization.
 The aim of query optimization is to choose, from among the many equivalent execution strategies, the one that minimizes resource usage.

16
(Cont.)

 Every method of query optimization depends on database statistics.
 The statistics cover information about relations, attributes, and indexes.
 Keeping the statistics current can be problematic.
 If the DBMS updates the statistics every time a tuple is inserted, updated, or deleted, this would have a significant impact on performance during peak periods.

17
(Cont.)

 An alternative approach is to update the statistics on a periodic basis, for example nightly, or whenever the system is idle.

18
Dynamic query optimization

 Advantage: all information required to select an optimum strategy is up to date.
 Disadvantage: the performance of the query is affected because the query has to be parsed, validated, and optimized before it can be executed.

19
Static query optimization

 The query is parsed, validated, and optimized once, in a manner similar to the approach taken by a compiler for a programming language.
 Advantages:
1) The runtime overhead is removed.
2) More time is available to evaluate a larger number of execution strategies.

20
(cont.)

 Disadvantage: the execution strategy that is chosen as being optimal when the query is compiled may no longer be optimal when the query is run.

21
Transformation Rules for the
Relational Algebra Operations

 By applying transformation rules, we can transform one relational algebra expression into an equivalent expression that is more efficient.
 There are twelve rules that can be used to restructure the relational algebra tree generated during query decomposition.

22
Heuristics rules

 Many DBMSs use heuristics to determine strategies for query processing.
 Heuristic rules include:
– performing Selections and Projections as early as possible;
– combining a Cartesian product with a subsequent Selection whose predicate represents a join condition into a Join operation.

23
(Cont.)

– using associativity of binary operations to rearrange leaf nodes so that the leaf nodes with the most restrictive Selections are executed first.

24
Cost estimation

 Depends on statistical information held in the system catalog.
 Typical statistics include the cardinality of each base relation, the number of blocks required to store a relation, the number of distinct values for each attribute, the selection cardinality of each attribute, and the number of levels in each multilevel index.

25
Join operation

 Block nested loop join
 Indexed nested loop join
 Sort-merge join
 Hash join

26
Pipelining

 In materialization, the output of one operation is stored in a temporary relation for processing by the next operation.
 An alternative approach is to pipeline the results of one operation to another operation without creating a temporary relation to hold the intermediate result.
 By using pipelining, we can save on the cost of creating temporary relations and reading the results back in again.

27
Left-deep trees

 A relational algebra tree where the right-hand relation is always a base relation or the result of a select or project operation (but never the result of another join).
 Advantages: reducing the search space for the optimum strategy, and allowing the query optimizer to be based on dynamic processing techniques.

28
Query Processing Tutorial
Basic Steps in Query
Processing
Optimization – finding the cheapest evaluation plan for a query.
 A given relational algebra expression may have many equivalent expressions.
E.g. σbalance<2500(Πbalance(account)) is equivalent to Πbalance(σbalance<2500(account))
 Any relational-algebra expression can be evaluated in many ways. An annotated expression specifying a detailed evaluation strategy is called an evaluation plan.
E.g. we can use an index on balance to find accounts with balance < 2500, or we can perform a complete relation scan and discard accounts with balance ≥ 2500.
 Amongst all equivalent expressions, try to choose the one with the cheapest possible evaluation plan. The cost estimate of a plan is based on statistical information in the DBMS catalog.

30
Catalog Information for Cost
Estimation

 nr: number of tuples in relation r.
 br: number of blocks containing tuples of r.
 sr: size of a tuple of r in bytes.
 fr: blocking factor of r – i.e., the number of tuples of r that fit into one block.
 V(A, r): number of distinct values that appear in r for attribute A; same as the size of ΠA(r).
 SC(A, r): selection cardinality of attribute A of relation r; the average number of records that satisfy equality on A.
 If tuples of r are stored together physically in a file, then:
br = ⌈nr / fr⌉
31
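These statistics are easy to sketch in code. The class below is illustrative only (not a real catalog API); it just applies the br formula, using the account relation figures that appear in the later examples:

```python
import math
from dataclasses import dataclass

# A minimal sketch of the per-relation catalog statistics listed above.
# Names are illustrative, not from any DBMS.
@dataclass
class RelationStats:
    n: int   # nr: number of tuples in the relation
    f: int   # fr: blocking factor (tuples that fit in one block)

    @property
    def b(self) -> int:
        # br = ceil(nr / fr), assuming tuples are stored together physically
        return math.ceil(self.n / self.f)

# The account relation from the later examples: 10,000 tuples, 20 per block
account = RelationStats(n=10_000, f=20)
print(account.b)   # 500 blocks
```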
Catalog Information about
Indices
 fi: average fan-out of internal nodes of index i, for tree-structured indices such as B+-trees.
 HTi: number of levels in index i – i.e., the height of i.
– For a balanced tree index (such as a B+-tree) on attribute A of relation r, HTi = ⌈logfi(V(A, r))⌉.
– For a hash index, HTi is 1.
 LBi: number of lowest-level index blocks in i – i.e., the number of blocks at the leaf level of the index.

32
Measures of Query Cost

 Many possible ways to estimate cost, for instance disk accesses, CPU time, or even communication overhead in a distributed or parallel system.
 Typically disk access is the predominant cost, and is also relatively easy to estimate. Therefore the number of block transfers from disk is used as a measure of the actual cost of evaluation. It is assumed that all block transfers have the same cost.
 The cost of an algorithm depends on the size of the buffer in main memory, as having more memory reduces the need for disk accesses. Thus memory size should be a parameter while estimating cost; we often use worst-case estimates.
 We refer to the cost estimate of algorithm A as EA. We do not include the cost of writing output to disk.

33
Selection Operation

 File scan – search algorithms that locate and retrieve records that fulfill a selection condition.
 Algorithm A1 (linear search). Scan each file block and test all records to see whether they satisfy the selection condition.
– Cost estimate (number of disk blocks scanned): EA1 = br
– If the selection is on a key attribute, EA1 = br / 2 on average (stop on finding the record)
– Linear search can be applied regardless of
* selection condition, or
* ordering of records in the file, or
* availability of indices

34
Selection Operation (Cont.)

 A2 (binary search). Applicable if the selection is an equality comparison on the attribute on which the file is ordered.
– Assume that the blocks of a relation are stored contiguously.
– Cost estimate (number of disk blocks to be scanned):
EA2 = ⌈log2(br)⌉ + ⌈SC(A, r)/fr⌉ − 1
* ⌈log2(br)⌉ — cost of locating the first tuple by a binary search on the blocks
* SC(A, r) — number of records that will satisfy the selection
* ⌈SC(A, r)/fr⌉ — number of blocks that these records will occupy
– Equality condition on a key attribute: SC(A, r) = 1, so the estimate reduces to EA2 = ⌈log2(br)⌉.

35
Statistical Information - Examples

 faccount = 20 (20 tuples of account fit in one block)
 V(branch-name, account) = 50 (50 branches)
 V(balance, account) = 500 (500 different balance values)
 naccount = 10,000 (account has 10,000 tuples)
 Assume the following indices exist on account:
– A primary, B+-tree index for attribute branch-name
– A secondary, B+-tree index for attribute balance

36
Selection Cost Estimate Example

σbranch-name = “Perryridge”(account)
 Number of blocks is baccount = 500: 10,000 tuples in the relation; each block holds 20 tuples.
 Assume account is sorted on branch-name.
– V(branch-name, account) is 50
– 10000/50 = 200 tuples of the account relation pertain to the Perryridge branch
– 200/20 = 10 blocks are needed for these tuples
– A binary search to find the first record would take ⌈log2(500)⌉ = 9 block accesses
 Total cost of the binary search is 9 + 10 − 1 = 18 block accesses (versus 500 for a linear scan)

37
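The arithmetic in the binary-search example above can be reproduced in a few lines (a sketch, using the statistics given for account):

```python
import math

# Reproducing the slide's A2 (binary search) cost estimate for
# sigma[branch-name = "Perryridge"](account).
b_account = 500          # blocks in account
f_account = 20           # tuples per block
sc = 10_000 // 50        # SC(branch-name, account) = nr / V(A, r) = 200

locate_first = math.ceil(math.log2(b_account))   # 9 block accesses
blocks_with_matches = sc // f_account            # 200 / 20 = 10 blocks
cost = locate_first + blocks_with_matches - 1    # EA2 = 9 + 10 - 1
print(cost)   # 18 block accesses, versus 500 for a linear scan
```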
Selections Using Indices

 Index scan – search algorithms that use an index; the selection condition is on the search-key of the index.
 A3 (primary index on candidate key, equality). Retrieve a single record that satisfies the corresponding equality condition. EA3 = HTi + 1
 A4 (primary index on nonkey, equality). Retrieve multiple records. Let the search-key attribute be A.
EA4 = HTi + ⌈SC(A, r)/fr⌉
 A5 (equality on search-key of secondary index).
– Retrieve a single record if the search-key is a candidate key: EA5 = HTi + 1
– Retrieve multiple records (each may be on a different block) if the search-key is not a candidate key: EA5 = HTi + SC(A, r)

38
Cost Estimate Example
(Indices)
Consider the query σbranch-name = “Perryridge”(account), with the primary index on branch-name.
 Since V(branch-name, account) = 50, we expect that 10000/50 = 200 tuples of the account relation pertain to the Perryridge branch.
 Since the index is a clustering index, 200/20 = 10 block reads are required to read the account tuples.
 Several index blocks must also be read. If the B+-tree index stores 20 pointers per node, then it must have between 3 and 5 leaf nodes and the entire tree has a depth of 2. Therefore, 2 index blocks must be read, for a total cost of 12 block reads.
39

Selections Involving Comparisons

Implement selections of the form σA≤v(r) or σA>v(r) by using a linear file scan or binary search, or by using indices in the following ways:
 A6 (primary index, comparison). The cost estimate is:
EA6 = HTi + ⌈c/fr⌉
where c is the estimated number of tuples satisfying the condition. In the absence of statistical information, c is assumed to be nr/2.
 A7 (secondary index, comparison). The cost estimate is:
EA7 = HTi + ⌈LBi × c / nr⌉ + c
where c is defined as before. (A linear file scan may be cheaper if c is large!)
40
Implementation of Complex
Selections

 The selectivity of a condition θi is the probability that a tuple in the relation r satisfies θi. If si is the number of satisfying tuples in r, θi's selectivity is given by si/nr.
 Conjunction: σθ1∧θ2∧…∧θn(r). The estimated number of tuples in the result is:
nr × (s1 × s2 × … × sn) / nr^n
 Disjunction: σθ1∨θ2∨…∨θn(r). Estimated number of tuples:
nr × (1 − (1 − s1/nr) × (1 − s2/nr) × … × (1 − sn/nr))
 Negation: σ¬θ(r). Estimated number of tuples:
nr − size(σθ(r))

41
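The three estimation formulas above translate directly into code. A hedged sketch (function names are illustrative); s is the list of per-condition satisfying-tuple counts si and nr the relation cardinality:

```python
from functools import reduce

def conjunction_estimate(nr, s):
    # nr * (s1 * s2 * ... * sn) / nr**n
    return nr * reduce(lambda acc, si: acc * si, s, 1) / nr ** len(s)

def disjunction_estimate(nr, s):
    # nr * (1 - (1 - s1/nr)(1 - s2/nr)...(1 - sn/nr))
    prod = reduce(lambda acc, si: acc * (1 - si / nr), s, 1.0)
    return nr * (1 - prod)

def negation_estimate(nr, s_theta):
    # nr - size(sigma_theta(r))
    return nr - s_theta

# Example with assumed counts: two conditions satisfied by 200 and 20
# tuples out of 10,000.
print(conjunction_estimate(10_000, [200, 20]))   # 0.4 tuples expected
```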
Algorithms for Complex
Selections

 A8 (conjunctive selection using one index). Select a combination of θi and algorithms A1 through A7 that results in the least cost for σθi(r). Test the other conditions in the memory buffer.
 A9 (conjunctive selection using multiple-key index). Use an appropriate composite (multiple-key) index if available.
 A10 (conjunctive selection by intersection of identifiers). Requires indices with record pointers. Use the corresponding index for each condition, and take the intersection of all the obtained sets of record pointers. Then read the file. If some conditions did not have appropriate indices, apply the test in memory.
 A11 (disjunctive selection by union of identifiers). Applicable if all conditions have available indices; otherwise use a linear scan.

42
Example of Cost Estimate for Complex
Selection

 Consider a selection on account with the following condition:
branch-name = “Perryridge” ∧ balance = 1200
 Consider using algorithm A8:
– The branch-name index is clustering, and if we use it the cost estimate is 12 block reads (as we saw before).
– The balance index is non-clustering, and V(balance, account) = 500, so the selection would retrieve 10,000/500 = 20 accounts. Adding the index block reads gives a cost estimate of 22 block reads.
– Thus using the branch-name index is preferable, even though its condition is less selective.
– If both indices were non-clustering, it would be preferable to use the balance index.
43
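The A8 comparison above is a few lines of arithmetic. A sketch under the example's assumptions (2 index levels read per lookup, statistics as given earlier):

```python
# Reproducing the slide's A8 comparison for
# branch-name = "Perryridge" AND balance = 1200.
f_account = 20
index_blocks = 2                  # index blocks read per lookup (assumed)

# Clustering index on branch-name: 200 matching tuples on 200/20 = 10
# contiguous blocks, plus the index blocks.
cost_branch = 200 // f_account + index_blocks     # 12 block reads

# Non-clustering index on balance: 10,000 / V(balance, account) = 20
# matching tuples, each potentially on a different block.
matches = 10_000 // 500
cost_balance = index_blocks + matches             # 22 block reads

print(cost_branch, cost_balance)   # 12 22 -> prefer the branch-name index
```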
Example (cont.)

 Consider using algorithm A10:
– Use the index on balance to retrieve the set S1 of pointers to records with balance = 1200.
– Use the index on branch-name to retrieve the set S2 of pointers to records with branch-name = “Perryridge”.
– S1 ∩ S2 = set of pointers to records with branch-name = “Perryridge” and balance = 1200.
– The numbers of pointers retrieved (20 and 200) each fit into a single leaf page; we read four index blocks to retrieve the two sets of pointers and compute their intersection.
– Estimate that one tuple in 50 × 500 meets both conditions. Since naccount = 10000, conservatively overestimate that S1 ∩ S2 contains one pointer.
– The total estimated cost of this strategy is five block reads.

44
Sorting
 We may build an index on the relation, and then use
the index to read the relation in sorted order. May
lead to one disk block access for each tuple.
 For relations that fit in memory, techniques like
quicksort can be used. For relations that don’t fit in
memory, external sort-merge is a good choice.

45
External Sort-Merge

 Let M denote the memory size (in pages).
 1. Create sorted runs as follows. Let i be 0 initially. Repeatedly do the following till the end of the relation:
(a) Read M blocks of the relation into memory
(b) Sort the in-memory blocks
(c) Write the sorted data to run Ri; increment i.
 2. Merge the runs; suppose for now that i < M. In a single merge step, use i blocks of memory to buffer input runs, and 1 block to buffer output. Repeatedly do the following until all input buffer pages are empty:
(a) Select the first record in sort order from each of the buffers
(b) Write the record to the output
(c) Delete the record from the buffer page; if the buffer page is empty, read the next block (if any) of the run into the buffer.

46
Example: External Sorting Using Sort-
Merge
[Figure: external sorting using sort-merge — the initial relation is split into sorted runs during run creation, the runs are combined in merge pass 1 and merge pass 2, and the final pass produces the fully sorted output.]

47
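The two phases above (run creation, then repeated (M − 1)-way merges) can be sketched in a few lines. This is an in-memory illustration only: the memory of M pages is simulated by a run length of M items, where a real implementation reads and writes disk blocks:

```python
import heapq

def external_sort(items, m=3):
    # Pass 0: create sorted runs of at most m items each
    runs = [sorted(items[i:i + m]) for i in range(0, len(items), m)]
    # Merge passes: merge up to m - 1 runs at a time until one run remains
    while len(runs) > 1:
        runs = [list(heapq.merge(*runs[i:i + m - 1]))
                for i in range(0, len(runs), m - 1)]
    return runs[0] if runs else []

data = [19, 24, 14, 33, 31, 16, 21, 3, 7, 2]
print(external_sort(data, m=3))   # fully sorted output
```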
External Sort-Merge (Cont.)
 If i ≥ M, several merge passes are required.
– In each pass, contiguous groups of M − 1 runs are merged.
– A pass reduces the number of runs by a factor of M − 1, and creates runs longer by the same factor.
– Repeated passes are performed till all runs have been merged into one.
 Cost analysis:
– Disk accesses for initial run creation as well as in each pass is 2br (except for the final pass, which doesn't write out results)
– Total number of merge passes required: ⌈logM−1(br/M)⌉
– Thus the total number of disk accesses for external sorting is:
br(2⌈logM−1(br/M)⌉ + 1)

48
Join Operation
 Several different algorithms to implement joins
– Nested-loop join
– Block nested-loop join
– Indexed nested-loop join
– Merge-join
– Hash-join
 Choice based on cost estimate
 Join size estimates required, particularly for cost
estimates for outer-level operations in a relational-
algebra expression.
49
Join Operation: Running
Example
Running example:
depositor ⋈ customer
Catalog information for the join examples:
 ncustomer = 10,000.
 fcustomer = 25, which implies that bcustomer = 10000/25 = 400.
 ndepositor = 5000, and fdepositor = 50, which implies that bdepositor = 5000/50 = 100.
 V(customer-name, depositor) = 2500, which implies that, on average, each customer has two accounts.
Also assume that customer-name in depositor is a foreign key on customer.

50
Estimation of the Size of Joins
 The Cartesian product r × s contains nr × ns tuples; each tuple occupies sr + ss bytes.
 If R ∩ S = ø, then r ⋈ s is the same as r × s.
 If R ∩ S is a key for R, then a tuple of s will join with at most one tuple from r; therefore, the number of tuples in r ⋈ s is no greater than the number of tuples in s.
 If R ∩ S is a foreign key in S referencing R, then the number of tuples in r ⋈ s is exactly the same as the number of tuples in s. The case for R ∩ S being a foreign key referencing S is symmetric.
 In the example query depositor ⋈ customer, customer-name in depositor is a foreign key of customer; hence, the result has exactly ndepositor tuples, which is 5000.
51
Estimation of the Size of Joins
(Cont.)

 If R  S = {A} is not a key for R or S.


If we assume that every tuple t in R produces tuples in
R S, number of tuples in R S is estimated to be:
nr  ns
V ( A, s )
 If the reverse is true, the estimate obtained will be:
nr  ns
V ( A, r )

 The lower of these two estimates is probably the more


52 accurate one.
Estimation of the Size of Joins
(Cont.)

 Compute the size estimates for depositor ⋈ customer without using information about foreign keys:
– V(customer-name, depositor) = 2500, and
– V(customer-name, customer) = 10000
– The two estimates are 5000 × 10000/2500 = 20,000 and 5000 × 10000/10000 = 5000
– We choose the lower estimate, which, in this case, is the same as our earlier computation using foreign keys.

53
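The estimate above amounts to taking the minimum of the two formulas. A minimal sketch (names are illustrative):

```python
# Join-size estimate when the join attribute A is not a key:
# take the lower of nr*ns/V(A,s) and nr*ns/V(A,r).
def join_size_estimate(nr, ns, v_a_r, v_a_s):
    return min(nr * ns // v_a_s, nr * ns // v_a_r)

# depositor (5000 tuples) joined with customer (10,000 tuples):
# V(customer-name, depositor) = 2500, V(customer-name, customer) = 10000
print(join_size_estimate(5000, 10_000, 2500, 10_000))   # 5000 tuples
```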
Nested-Loop Join
 Compute the theta join, r ⋈θ s:
for each tuple tr in r do begin
for each tuple ts in s do begin
test pair (tr, ts) to see if they satisfy the join condition θ
if they do, add tr · ts to the result.
end
end
 r is called the outer relation and s the inner relation of the join.
 Requires no indices and can be used with any kind of join condition.
 Expensive since it examines every pair of tuples in the two relations. If the smaller relation fits entirely in main memory, use that relation as the inner relation.

54
Nested-Loop Join (Cont.)

 In the worst case, if there is enough memory only to hold one block of each relation, the estimated cost is nr × bs + br disk accesses.
 If the smaller relation fits entirely in memory, use that as the inner relation. This reduces the cost estimate to br + bs disk accesses.
 Assuming the worst-case memory availability scenario, the cost estimate will be 5000 × 400 + 100 = 2,000,100 disk accesses with depositor as the outer relation, and 10,000 × 100 + 400 = 1,000,400 disk accesses with customer as the outer relation.
 If the smaller relation (depositor) fits entirely in memory, the cost estimate will be 500 disk accesses.
 The block nested-loops algorithm is preferable.
55
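The three cost figures above follow directly from the formulas. A small sketch with the running example's statistics:

```python
# Worst-case nested-loop join cost (one buffer block per relation):
# n_outer * b_inner + b_outer disk accesses.
def nlj_worst(n_outer, b_inner, b_outer):
    return n_outer * b_inner + b_outer

b_dep, n_dep = 100, 5000        # depositor
b_cust, n_cust = 400, 10_000    # customer

print(nlj_worst(n_dep, b_cust, b_dep))    # 2,000,100 with depositor outer
print(nlj_worst(n_cust, b_dep, b_cust))   # 1,000,400 with customer outer
print(b_dep + b_cust)                     # 500 if depositor fits in memory
```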
Block Nested-Loop Join
 Variant of nested-loop join in which every block of the inner relation is paired with every block of the outer relation.
for each block Br of r do begin
for each block Bs of s do begin
for each tuple tr in Br do begin
for each tuple ts in Bs do begin
test pair (tr, ts) for satisfying the join condition
if they do, add tr · ts to the result.
end
end
end
end
 Worst case: each block in the inner relation s is read only once for each block in the outer relation (instead of once for each tuple in the outer relation).

56
Block Nested-Loop Join
(Cont.)
 Worst case estimate: br × bs + br block accesses. Best case: br + bs block accesses.
 Improvements to nested-loop and block nested-loop algorithms:
– If the equi-join attribute forms a key on the inner relation, stop the inner loop with the first match.
– In block nested-loop, use M − 2 disk blocks as the blocking unit for the outer relation, where M = memory size in blocks; use the remaining two blocks to buffer the inner relation and the output. This greatly reduces the number of scans of the inner relation.
– Scan the inner loop forward and backward alternately, to make use of the blocks remaining in the buffer (with LRU replacement).
– Use an index on the inner relation if available.

57
Indexed Nested-Loop Join
 If an index is available on the inner loop's join attribute and the join is an equi-join or natural join, more efficient index lookups can replace file scans.
 Can construct an index just to compute a join.
 For each tuple tr in the outer relation r, use the index to look up tuples in s that satisfy the join condition with tuple tr.
 Worst case: the buffer has space for only one page of r and one page of the index.
– br disk accesses are needed to read relation r, and, for each tuple in r, we perform an index lookup on s.
– Cost of the join: br + nr × c, where c is the cost of a single selection on s using the join condition.
 If indices are available on both r and s, use the one with fewer tuples as the outer relation.

58
Example of Index Nested-Loop
Join

 Compute depositor ⋈ customer, with depositor as the outer relation.
 Let customer have a primary B+-tree index on the join attribute customer-name, which contains 20 entries in each index node.
 Since customer has 10,000 tuples, the height of the tree is 4, and one more access is needed to find the actual data.
 Since ndepositor is 5000, the total cost is 100 + 5000 × 5 = 25,100 disk accesses.
 This cost is lower than the 40,100 accesses needed for a block nested-loop join.

59
Merge-Join
 First sort both relations on their join attribute (if not already sorted on the join attributes).
 The join step is similar to the merge stage of the sort-merge algorithm. The main difference is the handling of duplicate values in the join attribute — every pair with the same value on the join attribute must be matched.
[Figure: merge-join of relations r(a1, a2) and s(a1, a3), with pointers pr and ps advancing through the two relations sorted on the join attribute a1.]

60
Merge-Join (Cont.)
 Each tuple needs to be read only once, and as a result,
each block is also read only once. Thus number of
block accesses is br + bs, plus the cost of sorting if
relations are unsorted.
 Can be used only for equi-joins and natural joins
 If one relation is sorted, and the other has a secondary B+-tree index on the join attribute, hybrid merge-joins are possible. The sorted relation is merged with the leaf entries of the B+-tree. The result is sorted on the addresses of the unsorted relation's tuples, and then the addresses can be replaced by the actual tuples efficiently.

61
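The merge step above can be sketched for an equi-join on sorted inputs. This simplified version assumes the join values in s are distinct (the general case must back up over duplicate runs); names and sample data are illustrative, echoing the figure:

```python
def merge_join(r, s, key_r, key_s):
    # r and s must already be sorted on their join attributes.
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        kr, ks = key_r(r[i]), key_s(s[j])
        if kr < ks:
            i += 1                      # advance the pointer on the smaller side
        elif kr > ks:
            j += 1
        else:
            out.append(r[i] + s[j])     # matching pair: emit the concatenation
            i += 1                      # s keys are distinct, so keep probing r
    return out

r = [("a", 3), ("b", 1), ("d", 8), ("d", 13), ("m", 5)]
s = [("a", "A"), ("b", "G"), ("c", "L"), ("d", "N"), ("m", "B")]
print(merge_join(r, s, lambda t: t[0], lambda t: t[0]))
```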
Hash-Join
 Applicable for equi-joins and natural joins.
 A hash function h is used to partition tuples of both relations into sets that have the same hash value on the join attributes, as follows:
– h maps JoinAttrs values to {0, 1, …, max}, where JoinAttrs denotes the common attributes of r and s used in the natural join.
– Hr0, Hr1, …, Hrmax denote partitions of r tuples, each initially empty. Each tuple tr ∈ r is put in partition Hri, where i = h(tr[JoinAttrs]).
– Hs0, Hs1, …, Hsmax denote partitions of s tuples, each initially empty. Each tuple ts ∈ s is put in partition Hsi, where i = h(ts[JoinAttrs]).

62
Hash-Join (Cont.)

 r tuples in Hri need only be compared with s tuples in Hsi; they do not need to be compared with s tuples in any other partition, since:
– an r tuple and an s tuple that satisfy the join condition will have the same value for the join attributes;
– if that value is hashed to some value i, the r tuple has to be in Hri and the s tuple in Hsi.

63
Hash-Join (Cont.)

[Figure: hash-join partitioning — r and s are each split into partitions 0…4 by the hash function; tuples of r in partition i need be compared only with tuples of s in partition i.]

64
Hash-Join algorithm

 The hash-join of r and s is computed as follows.
1. Partition the relation s using the hashing function h. When partitioning a relation, one block of memory is reserved as the output buffer for each partition.
2. Partition r similarly.
3. For each i:
(a) Load Hsi into memory and build an in-memory hash index on it using the join attribute. This hash index uses a different hash function than the earlier one, h.
(b) Read the tuples in Hri from disk one by one. For each tuple tr, locate each matching tuple ts in Hsi using the in-memory hash index. Output the concatenation of their attributes.
 Relation s is called the build input and r is called the probe input.
65
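The partition/build/probe steps above can be sketched in memory. A simplified illustration of the technique (a Python dict stands in for the second, in-memory hash function; real hash-joins write partitions to disk):

```python
from collections import defaultdict

def hash_join(r, s, key_r, key_s, n_parts=4):
    h = lambda k: hash(k) % n_parts
    parts_r, parts_s = defaultdict(list), defaultdict(list)
    for t in r:
        parts_r[h(key_r(t))].append(t)      # partition the probe input r
    for t in s:
        parts_s[h(key_s(t))].append(t)      # partition the build input s
    out = []
    for i in range(n_parts):
        index = defaultdict(list)           # in-memory hash index on Hs_i
        for ts in parts_s[i]:
            index[key_s(ts)].append(ts)
        for tr in parts_r[i]:               # probe with the tuples of Hr_i
            for ts in index[key_r(tr)]:
                out.append(tr + ts)         # emit the concatenation
    return out

r = [("a", 1), ("b", 2), ("a", 3)]
s = [("a", "X"), ("c", "Y")]
print(hash_join(r, s, lambda t: t[0], lambda t: t[0]))
```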
Hash-Join algorithm (Cont.)
 The value max and the hash function h are chosen such that each Hsi fits in memory.
 Recursive partitioning is required if the number of partitions max is greater than the number of pages M of memory.
– Instead of partitioning max ways, partition s M − 1 ways;
– Further partition the M − 1 partitions using a different hash function.
– Use the same partitioning method on r.
– Rarely required: e.g., recursive partitioning is not needed for relations of 1 GB or less with a memory size of 2 MB and a block size of 4 KB.
 Hash-table overflow occurs in partition Hsi if Hsi does not fit in memory. It can be resolved by further partitioning Hsi using a different hash function. Hri must be similarly partitioned.

66
Cost of Hash-Join

 If recursive partitioning is not required: 3(br + bs) + 2 × max
 If recursive partitioning is required, the number of passes required for partitioning s is ⌈logM−1(bs)⌉ − 1. This is because each final partition of s should fit in memory.
 The number of partitions of the probe relation r is the same as that for the build relation s; the number of passes for partitioning of r is also the same as for s. Therefore it is best to choose the smaller relation as the build relation.
 Total cost estimate:
2(br + bs)⌈logM−1(bs) − 1⌉ + br + bs
 If the entire build input can be kept in main memory, max can be set to 0 and the algorithm does not partition the relations into temporary files. The cost estimate goes down to br + bs.
67
Example of Cost of Hash-Join

customer ⋈ depositor
 Assume that the memory size is 20 blocks.
 bdepositor = 100 and bcustomer = 400.
 depositor is to be used as the build input. Partition it into five partitions, each of size 20 blocks. This partitioning can be done in one pass.
 Similarly, partition customer into five partitions, each of size 80. This is also done in one pass.
 Therefore total cost: 3(100 + 400) = 1500 block transfers (ignoring the cost of writing partially filled blocks).

68
Hybrid Hash-Join

 Useful when memory sizes are relatively large, and the build input is bigger than memory.
 With a memory size of 25 blocks, depositor can be partitioned into five partitions, each of size 20 blocks.
 Keep the first of the partitions of the build relation in memory. It occupies 20 blocks; one block is used for input, and one block each is used for buffering the other four partitions.
 customer is similarly partitioned into five partitions, each of size 80; the first is used right away for probing, instead of being written out and read back in.
 Ignoring the cost of writing partially filled blocks, the cost is 3(80 + 320) + 20 + 80 = 1300 block transfers with hybrid hash-join, instead of 1500 with plain hash-join.
 Hybrid hash-join is most useful if M ≫ √bs.
69
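The plain versus hybrid comparison above can be checked with a few lines of arithmetic (block counts as given in the examples):

```python
# Plain hash-join: 3(br + bs) block transfers, ignoring partially filled
# blocks and the 2 x max term.
b_dep, b_cust = 100, 400
plain = 3 * (b_dep + b_cust)

# Hybrid hash-join: the first partition of each relation (20 and 80
# blocks) is neither written out nor read back in, so only the other
# four partitions (80 and 320 blocks) incur the factor of 3.
hybrid = 3 * (80 + 320) + 20 + 80

print(plain, hybrid)   # 1500 1300
```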
Complex Joins
 Join with a conjunctive condition:
r ⋈θ1∧θ2∧…∧θn s
– Compute the result of one of the simpler joins r ⋈θi s
– The final result comprises those tuples in the intermediate result that satisfy the remaining conditions θ1 ∧ … ∧ θi−1 ∧ θi+1 ∧ … ∧ θn
– Test these conditions as tuples in r ⋈θi s are generated.
 Join with a disjunctive condition:
r ⋈θ1∨θ2∨…∨θn s
Compute as the union of the records in the individual joins r ⋈θi s.

70
Complex Joins (Cont.)

 Join involving three relations: loan ⋈ depositor ⋈ customer
 Strategy 1. Compute depositor ⋈ customer; use the result to compute loan ⋈ (depositor ⋈ customer).
 Strategy 2. Compute loan ⋈ depositor first, and then join the result with customer.
 Strategy 3. Perform the pair of joins at once. Build an index on loan for loan-number, and on customer for customer-name.
– For each tuple t in depositor, look up the corresponding tuples in customer and the corresponding tuples in loan.
– Each tuple of depositor is examined exactly once.
 Strategy 3 combines two operations into one special-purpose operation that is more efficient than implementing two joins of two relations.

71
Other Operations

 Duplicate elimination can be implemented via hashing or sorting.
– On sorting, duplicates will come adjacent to each other, and all but one of a set of duplicates can be deleted.
– Optimization: duplicates can be deleted during run generation as well as at intermediate merge steps in external sort-merge.
– Hashing is similar – duplicates will come into the same bucket.
 Projection is implemented by performing projection on each tuple followed by duplicate elimination.
72
Other Operations (Cont.)

 Aggregation can be implemented in a manner similar to duplicate elimination.
– Sorting or hashing can be used to bring tuples in the same group together, and then the aggregate functions can be applied to each group.
– Optimization: combine tuples in the same group during run generation and intermediate merges, by computing partial aggregate values.
 Set operations (∪, ∩, and −): can either use a variant of merge-join after sorting, or a variant of hash-join.

73
Other Operations (Cont.)
 E.g., set operations using hashing:
1. Partition both relations using the same hash function, thereby creating Hr0, …, Hrmax and Hs0, …, Hsmax.
2. Process each partition i as follows. Using a different hashing function, build an in-memory hash index on Hri after it is brought into memory.
3. r ∪ s: Add tuples in Hsi to the hash index if they are not already in it. Then add the tuples in the hash index to the result.
 r ∩ s: Output tuples in Hsi to the result if they are already there in the hash index.
 r − s: For each tuple in Hsi, if it is there in the hash index, delete it from the index. Then add the remaining tuples in the hash index to the result.

74
Other Operations (Cont.)
 Outer join can be computed either as
– a join followed by addition of null-padded non-participating
tuples, or
– by modifying the join algorithms.
 Example:
– In r ⟕ s, the non-participating tuples are those in r − ΠR(r ⋈ s).
– Modify merge-join to compute r ⟕ s: during merging, for every
tuple tr from r that does not match any tuple in s, output tr
padded with nulls.
– Right outer join and full outer join can be computed similarly.
75
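A sketch of the modified merge-join for the left outer join, assuming both inputs are already sorted on the (first) join attribute and joining natural-join style; the relation names and widths are illustrative assumptions:

```python
def merge_left_outer_join(r, s, s_width):
    """Merge-join adapted for left outer join on the first attribute.
    Both inputs must be sorted on that attribute; r-tuples with no
    matching s-tuple are emitted padded with nulls (None)."""
    result, j = [], 0
    for tr in r:
        while j < len(s) and s[j][0] < tr[0]:
            j += 1                         # advance s past smaller keys
        k, matched = j, False
        while k < len(s) and s[k][0] == tr[0]:
            result.append(tr + s[k][1:])   # drop the duplicated join key
            matched, k = True, k + 1
        if not matched:
            result.append(tr + (None,) * (s_width - 1))  # null-pad tr
        # j stays at the first possible match for the next r tuple
    return result

loan = [(1, 'Downtown'), (2, 'Redwood'), (3, 'Perryridge')]
borrower = [(1, 'Jones'), (3, 'Hayes'), (3, 'Smith')]
print(merge_left_outer_join(loan, borrower, s_width=2))
# [(1, 'Downtown', 'Jones'), (2, 'Redwood', None),
#  (3, 'Perryridge', 'Hayes'), (3, 'Perryridge', 'Smith')]
```

The only change versus plain merge-join is the `matched` flag and the null-padded fallback, which is why the modification adds essentially no cost.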
Evaluation of Expressions
 Materialization: evaluate one operation at a time, starting at the
lowest-level. Use intermediate results materialized into
temporary relations to evaluate next-level operations.
 E.g., in the expression below, compute and store
σbalance<2500(account); then compute and store its join with
customer, and finally compute the projection on customer-name.
Πcustomer-name(σbalance<2500(account) ⋈ customer)
76
Evaluation of Expressions (Cont.)
 Pipelining: evaluate several operations simultaneously, passing
the results of one operation on to the next.
 E.g., in the expression on the previous slide, don’t store the result of
σbalance<2500(account) – instead, pass tuples directly to the join.
Similarly, don’t store the result of the join; pass tuples directly to the
projection.
 Much cheaper than materialization: no need to store a temporary
relation to disk.
 Pipelining may not always be possible — e.g., sort, hash-join.
 For pipelining to be effective, use evaluation algorithms that
generate output tuples even as tuples are received for inputs to the
operation.
 Pipelines can be executed in two ways: demand driven and
producer driven.
77
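A demand-driven pipeline maps naturally onto generators: each operator pulls tuples from its input only as needed, so no intermediate relation is materialized. This is a minimal sketch with hypothetical miniature relations; note the hash join must still fully consume (materialize) its build input, illustrating why pipelining is not always possible end to end.

```python
def scan(relation):
    for t in relation:
        yield t

def select(pred, child):
    for t in child:
        if pred(t):
            yield t

def hash_join(left, right, lkey, rkey):
    # Build on the right input (blocking), then probe with pipelined left tuples.
    index = {}
    for t in right:
        index.setdefault(rkey(t), []).append(t)
    for t in left:
        for m in index.get(lkey(t), []):
            yield t + m

def project(cols, child):
    for t in child:
        yield tuple(t[c] for c in cols)

# Hypothetical data: account = (account-number, balance),
# customer link = (account-number, customer-name).
account = [('A-101', 500), ('A-102', 9000), ('A-103', 1200)]
customer = [('A-101', 'Johnson'), ('A-103', 'Hayes')]

plan = project([3], hash_join(select(lambda t: t[1] < 2500, scan(account)),
                              scan(customer), lambda t: t[0], lambda t: t[0]))
print(list(plan))  # [('Johnson',), ('Hayes',)]
```

Pulling from the outermost `project` drives the whole tree: each call propagates a demand for one more tuple down to the scans, which is exactly the demand-driven execution model.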
Transformation of Relational
Expressions
 Generation of query-evaluation plans for an
expression involves two steps:
1. generating logically equivalent expressions
2. annotating the resultant expressions to get alternative
query plans
 Use equivalence rules to transform an expression
into an equivalent one.
 Based on estimated cost, the cheapest plan is
selected. This process is called cost-based
optimization.
78
Equivalence of Expressions
 Relations generated by two equivalent expressions have the
same set of attributes and contain the same set of tuples,
although their attributes may be ordered differently.
[Figure: two equivalent expression trees.
(a) Initial expression tree:
Πcustomer-name(σbranch-city = “Brooklyn”(branch ⋈ (account ⋈ depositor))).
(b) Transformed expression tree:
Πcustomer-name((σbranch-city = “Brooklyn”(branch)) ⋈ (account ⋈ depositor)).]
79
Equivalence Rules
1. Conjunctive selection operations can be
deconstructed into a sequence of individual
selections.
σθ1∧θ2(E) = σθ1(σθ2(E))
2. Selection operations are commutative.
σθ1(σθ2(E)) = σθ2(σθ1(E))
3. Only the last in a sequence of projection operations is
needed; the others can be omitted.
ΠL1(ΠL2(…(ΠLn(E))…)) = ΠL1(E)
4. Selections can be combined with Cartesian products
and theta joins.
(a) σθ(E1 × E2) = E1 ⋈θ E2
80
Equivalence Rules (Cont.)
5. Theta-join operations (and natural joins) are
commutative.
E1 ⋈θ E2 = E2 ⋈θ E1
6. (a) Natural join operations are associative:
(E1 ⋈ E2) ⋈ E3 = E1 ⋈ (E2 ⋈ E3)
(b) Theta joins are associative in the following
manner:
(E1 ⋈θ1 E2) ⋈θ2∧θ3 E3 = E1 ⋈θ1∧θ3 (E2 ⋈θ2 E3)
where θ2 involves attributes from only E2 and E3.
81
Equivalence Rules (Cont.)
7. The selection operation distributes over the theta-join
operation under the following two conditions:
(a) When all the attributes in θ0 involve only the
attributes of one of the expressions (E1) being joined:
σθ0(E1 ⋈θ E2) = (σθ0(E1)) ⋈θ E2
(b) When θ1 involves only the attributes of E1 and θ2
involves only the attributes of E2:
σθ1∧θ2(E1 ⋈θ E2) = (σθ1(E1)) ⋈θ (σθ2(E2))
82
Equivalence Rules (Cont.)
8. The projection operation distributes over the theta-join operation
as follows:
(a) if θ involves only attributes from L1 ∪ L2:
ΠL1∪L2(E1 ⋈θ E2) = (ΠL1(E1)) ⋈θ (ΠL2(E2))
(b) Consider a join E1 ⋈θ E2. Let L1 and L2 be sets of
attributes from E1 and E2, respectively. Let L3 be
attributes of E1 that are involved in join condition θ
but are not in L1 ∪ L2, and let L4 be attributes of E2 that
are involved in join condition θ but are not in L1 ∪ L2.
ΠL1∪L2(E1 ⋈θ E2) = ΠL1∪L2((ΠL1∪L3(E1)) ⋈θ (ΠL2∪L4(E2)))
83
Equivalence Rules (Cont.)
9. The set operations union and intersection are commutative (set
difference is not commutative).
E1 ∪ E2 = E2 ∪ E1
E1 ∩ E2 = E2 ∩ E1
10. Set union and intersection are associative.
11. The selection operation distributes over ∪, ∩, and −. E.g.:
σP(E1 − E2) = σP(E1) − σP(E2)
For difference and intersection, we also have:
σP(E1 − E2) = σP(E1) − E2
12. The projection operation distributes over the union operation.
ΠL(E1 ∪ E2) = (ΠL(E1)) ∪ (ΠL(E2))
84
Selection Operation Example
 Query: Find the names of all customers who have an
account at some branch located in Brooklyn.
Πcustomer-name(σbranch-city = “Brooklyn”
(branch ⋈ (account ⋈ depositor)))
 Transformation using rule 7a:
Πcustomer-name
((σbranch-city = “Brooklyn”(branch))
⋈ (account ⋈ depositor))
 Performing the selection as early as possible reduces
the size of the relation to be joined.
85
Selection Operation
Example (Cont.)
 Query: Find the names of all customers with an account at a
Brooklyn branch whose account balance is over $1000.
Πcustomer-name(σbranch-city = “Brooklyn” ∧ balance > 1000
(branch ⋈ (account ⋈ depositor)))
 Transformation using join associativity (Rule 6a):
Πcustomer-name(σbranch-city = “Brooklyn” ∧ balance > 1000
((branch ⋈ account) ⋈ depositor))
 The second form provides an opportunity to apply the “perform
selections early” rule, resulting in the subexpression
σbranch-city = “Brooklyn”(branch) ⋈ σbalance > 1000(account)
 Thus a sequence of transformations can be useful.
86
Projection Operation Example
Πcustomer-name(((σbranch-city = “Brooklyn”(branch))
⋈ account) ⋈ depositor)
 When we compute
(σbranch-city = “Brooklyn”(branch)) ⋈ account
we obtain a relation whose schema is
(branch-name, branch-city, assets, account-number, balance).
 Push projections using equivalence rules 8a and 8b;
eliminate unneeded attributes from intermediate results
to get:
Πcustomer-name((Πaccount-number((σbranch-city = “Brooklyn”(branch))
⋈ account)) ⋈ depositor)
87
Join Ordering Example
 For all relations r1, r2, and r3,
(r1 ⋈ r2) ⋈ r3 = r1 ⋈ (r2 ⋈ r3)
 If r2 ⋈ r3 is quite large and r1 ⋈ r2 is small, we choose
(r1 ⋈ r2) ⋈ r3
so that we compute and store a smaller temporary
relation.
88
Join Ordering Example (Cont.)
 Consider the expression
Πcustomer-name((σbranch-city = “Brooklyn”(branch))
⋈ account ⋈ depositor)
 Could compute account ⋈ depositor first, and join the result
with
σbranch-city = “Brooklyn”(branch),
but account ⋈ depositor is likely to be a large relation.
 Since it is likely that only a small fraction of the
bank’s customers have accounts in branches located in
Brooklyn, it is better to compute
σbranch-city = “Brooklyn”(branch) ⋈ account
first.
89
Evaluation Plan
 An evaluation plan defines exactly what algorithm is
used for each operation, and how the execution of the
operations is coordinated.
[Figure: an evaluation plan. At the top, Πcustomer-name (sort to remove
duplicates); below it, a hash join with depositor; the hash join’s other
input is a merge join, pipelined on both inputs, of
σbranch-city = Brooklyn (use index 1) on branch with
σbalance < 1000 (use linear scan) on account.]
90
Choice of Evaluation Plans
 Must consider the interaction of evaluation techniques when
choosing evaluation plans: choosing the cheapest algorithm for
each operation independently may not yield the best overall
algorithm. E.g.:
– Merge-join may be costlier than hash-join, but may provide a
sorted output which reduces the cost for an outer level
aggregation.
– Nested-loop join may provide opportunity for pipelining
 Practical query optimizers incorporate elements of the following
two broad approaches:
1. Search all the plans and choose the best plan in a cost-
based fashion.
2. Use heuristics to choose a plan.
91
Cost-Based Optimization
 Consider finding the best join order for r1 ⋈ r2 ⋈ … ⋈ rn.
 There are (2(n−1))!/(n−1)! different join orders for this
expression. With n = 7, the number is 665280; with n = 10,
the number is more than 17.6 billion!
 No need to generate all the join orders. Using dynamic
programming, the least-cost join order for any subset of
{r1, r2, …, rn} is computed only once and stored for
future use.
 This reduces the time complexity to around O(3^n). With n =
10, this number is about 59,000.
92
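The count formula can be checked directly; the function name is an illustrative choice. For n = 7 this reproduces 665280, and for n = 10 it gives 17,643,225,600 (about 17.6 billion).

```python
import math

def num_join_orders(n):
    """Number of different join orders of n relations: (2(n-1))! / (n-1)!.
    (Equivalently: the number of complete binary trees with n leaves times
    the number of ways to assign the n relations to the leaves.)"""
    return math.factorial(2 * (n - 1)) // math.factorial(n - 1)

print(num_join_orders(7))   # 665280
print(num_join_orders(10))  # 17643225600
```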
Cost-Based Optimization
(Cont.)
 In left-deep join trees, the right-hand-side input for each join is a
relation, not the result of an intermediate join.
 If only left-deep join trees are considered, the cost of finding the
best join order becomes O(2^n).
[Figure: (a) a left-deep join tree joining r1, r2, r3, r4, r5 one
relation at a time; (b) a non-left-deep join tree in which some join
inputs are themselves intermediate join results.]
93
Dynamic Programming in
Optimization
 To find the best left-deep join tree for a set of n relations:
– Consider n alternatives with one relation as right-hand-side
input and the other relations as left-hand-side input.
– Using the (recursively computed and stored) least-cost join
order for each alternative on the left-hand side, choose the
cheapest of the n alternatives.
 To find the best join tree for a set of n relations:
– To find the best plan for a set S of n relations, consider all
possible plans of the form S1 ⋈ (S − S1), where S1 is any
non-empty subset of S.
– As before, use recursively computed and stored costs for
subsets of S to find the cost of each plan. Choose the
cheapest of the 2^n − 1 alternatives.
94
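The left-deep procedure above can be sketched as memoized recursion over subsets. The cost model here is a deliberate toy assumption (join cost = product of input sizes, result size scaled by a fixed selectivity), and the relation sizes are made up; a real optimizer would use catalog statistics.

```python
from functools import lru_cache

# Toy cost model (an assumption for illustration only).
SIZES = {'r1': 100, 'r2': 1000, 'r3': 10, 'r4': 500}
SELECTIVITY = 0.01

def best_left_deep(relations):
    """Dynamic programming over subsets: for each subset, try every relation
    as the right-hand-side input, recursing on the rest for the left side.
    Returns (cost, estimated result size, plan string)."""
    @lru_cache(maxsize=None)
    def solve(subset):                     # subset: frozenset of relation names
        if len(subset) == 1:
            (r,) = subset
            return 0, SIZES[r], r          # a base relation costs nothing
        best = None
        for rhs in subset:                 # n alternatives for the RHS input
            lcost, lsize, lplan = solve(subset - {rhs})
            cost = lcost + lsize * SIZES[rhs]
            size = lsize * SIZES[rhs] * SELECTIVITY
            if best is None or cost < best[0]:
                best = (cost, size, f'({lplan} JOIN {rhs})')
        return best                        # memoized: each subset solved once
    return solve(frozenset(relations))

cost, size, plan = best_left_deep(['r1', 'r2', 'r3', 'r4'])
print(plan, cost)
```

Because `solve` is cached, each of the 2^n subsets is optimized only once, which is exactly the saving dynamic programming buys over enumerating all join orders.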
Interesting Orders in Cost-Based
Optimization
 Consider the expression (r1 ⋈ r2 ⋈ r3) ⋈ r4 ⋈ r5.
 An interesting sort order is a particular sort order of tuples that
could be useful for a later operation.
– Generating the result of r1 ⋈ r2 ⋈ r3 sorted on the attributes
common with r4 or r5 may be useful, but generating it sorted on the
attributes common to only r1 and r2 is not.
– Using merge-join to compute r1 ⋈ r2 ⋈ r3 may be costlier, but may
provide an output sorted in an interesting order.
 It is not sufficient to find the best join order for each subset of
the set of n given relations; we must find the best join order for
each subset, for each interesting sort order of the join result for
that subset. This is a simple extension of the earlier dynamic
programming algorithms.
95
Heuristic Optimization
 Cost-based optimization is expensive, even with
dynamic programming.
 Systems may use heuristics to reduce the number of
choices that must be made in a cost-based fashion.
 Heuristic optimization transforms the query tree by
using a set of rules that typically (but not in all cases)
improve execution performance:
– Perform selection early (reduces the number of tuples).
– Perform projection early (reduces the number of attributes).
– Perform the most restrictive selection and join operations
before other similar operations.
 Some systems use only heuristics; others combine
heuristics with partial cost-based optimization.
Steps in Typical Heuristic
Optimization
1. Deconstruct conjunctive selections into a sequence of single
selection operations (Equiv. Rule 1).
2. Move selection operations down the query tree for the earliest
possible execution (Equiv. Rules 2, 7a, 7b, 11).
3. Execute first those selection and join operations that will
produce the smallest relations (Equiv. rule 6).
4. Replace Cartesian product operations that are followed by a
selection condition by join operations (Equiv. Rule 4a).
5. Deconstruct and move as far down the tree as possible lists
of projection attributes, creating new projections where
needed (Equiv. rules 3, 8a, 8b, 12).
6. Identify those subtrees whose operations can be pipelined,
and execute them using pipelining.
97
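Step 2 above can be illustrated with a tiny rewrite pass, not tied to any particular system: query trees are nested tuples, and a selection whose attribute comes from only one join input is pushed below the join (equivalence rule 7a). All names and the tree encoding are assumptions for the sketch.

```python
# Trees: ('select', attr, child), ('join', left, right), or ('rel', name, attrs).

def attrs_of(tree):
    """Set of attributes produced by a subtree."""
    kind = tree[0]
    if kind == 'rel':
        return tree[2]
    if kind == 'select':
        return attrs_of(tree[2])
    return attrs_of(tree[1]) | attrs_of(tree[2])

def push_selections(tree):
    """Push each selection below a join when its attribute belongs to
    exactly one join input (equivalence rule 7a)."""
    kind = tree[0]
    if kind == 'rel':
        return tree
    if kind == 'select':
        attr, child = tree[1], push_selections(tree[2])
        if child[0] == 'join':
            left, right = child[1], child[2]
            if attr in attrs_of(left):
                return ('join', push_selections(('select', attr, left)), right)
            if attr in attrs_of(right):
                return ('join', left, push_selections(('select', attr, right)))
        return ('select', attr, child)
    return ('join', push_selections(tree[1]), push_selections(tree[2]))

branch = ('rel', 'branch', {'branch-name', 'branch-city', 'assets'})
account = ('rel', 'account', {'account-number', 'branch-name', 'balance'})
query = ('select', 'branch-city', ('join', branch, account))
print(push_selections(query))
# The selection on branch-city moves inside the join, onto branch only.
```

A production rewriter would also handle conjunctive predicates, projections (rules 8a/8b), and Cartesian products, but the fixed shape is the same: match a rule's pattern, rewrite, and recurse.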
Structure of Query Optimizers
 The System R optimizer considers only left-deep join
orders. This reduces optimization complexity and
generates plans amenable to pipelined evaluation.
 System R also uses heuristics to push selections
and projections down the query tree.
 For scans using secondary indices, the Sybase
optimizer takes into account the probability that the
page containing the tuple is in the buffer.
98
Structure of Query Optimizers
(Cont.)
 Some query optimizers integrate heuristic selection and the
generation of alternative access plans.
– System R and Starburst use a hierarchical procedure based
on the nested-block concept of SQL: heuristic rewriting
followed by cost-based join-order optimization.
– The Oracle7 optimizer supports a heuristic based on
available access paths.
 Even with the use of heuristics, cost-based query optimization
imposes a substantial overhead.
 This expense is usually more than offset by savings at query-
execution time, particularly by reducing the number of slow disk
accesses.
99