Chapter 14: Query Optimization
Introduction
Catalog Information for Cost Estimation
Estimation of Statistics
Transformation of Relational Expressions
Dynamic Programming for Choosing Evaluation Plans
Introduction
Alternative ways of evaluating a given query
Equivalent expressions
Different algorithms for each operation (Chapter 13)
Cost difference between a good and a bad way of evaluating a
query can be enormous
Example: computing r × s followed by the selection σ_r.A = s.B is
much slower than computing the join r ⋈_r.A = s.B s directly
Need to estimate the cost of operations
Depends critically on statistical information about relations which the
database must maintain
E.g. number of tuples, number of distinct values for join attributes,
etc.
Need to estimate statistics for intermediate results to compute cost of
complex expressions
Introduction (Cont.)
Relations generated by two equivalent expressions have the
same set of attributes and contain the same set of tuples,
although their attributes may be ordered differently.
Introduction (Cont.)
Overview of chapter
Statistical information for cost estimation
Equivalence rules
Cost-based optimization algorithm
Optimizing nested subqueries
Materialized views and view maintenance
Statistical Information for Cost Estimation
nr: number of tuples in a relation r.
br: number of blocks containing tuples of r.
sr: size of a tuple of r.
fr: blocking factor of r — i.e., the number of tuples of r that
fit into one block.
V(A, r): number of distinct values that appear in r for
attribute A; same as the size of Π_A(r).
SC(A, r): selection cardinality of attribute A of relation r;
average number of records that satisfy equality on A.
If tuples of r are stored together physically in a file, then:
b_r = ⌈n_r / f_r⌉
Catalog Information about Indices
Measures of Query Cost
Recall that
Typically disk access is the predominant cost, and is also
relatively easy to estimate.
The number of block transfers from disk is used as a
measure of the actual cost of evaluation.
It is assumed that all transfers of blocks have the same
cost.
Real life optimizers do not make this assumption, and
distinguish between sequential and random disk access
We do not include the cost of writing output to disk.
We refer to the cost estimate of algorithm A as E_A
Selection Size Estimation
A2 (binary search). Cost estimate:
E_A2 = ⌈log2(b_r)⌉ + ⌈SC(A, r) / f_r⌉ − 1
Statistical Information for Examples
f_account = 20 (20 tuples of account fit in one block)
V(branch-name, account) = 50 (50 branches)
V(balance, account) = 500 (500 different balance values)
n_account = 10,000 (account has 10,000 tuples)
Assume the following indices exist on account:
A primary, B+-tree index for attribute branch-name
A secondary, B+-tree index for attribute balance
Selections Involving Comparisons
Selections of the form σ_A≤v(r) (the case σ_A≥v(r) is symmetric)
Let c denote the estimated number of tuples satisfying the
condition.
If min(A,r) and max(A,r) are available in the catalog:
c = 0 if v < min(A,r)
c = n_r · (v − min(A,r)) / (max(A,r) − min(A,r)) otherwise
In the absence of statistical information, c is assumed to be n_r / 2.
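As a quick sketch, the estimate above can be written as a small function; the name and interface are illustrative, not from the text:

```python
# Illustrative sketch of the comparison selectivity estimate above.
def estimate_leq(v, n_r, min_a=None, max_a=None):
    """Estimated number of tuples satisfying A <= v."""
    if min_a is None or max_a is None:
        return n_r / 2              # no catalog statistics: assume n_r / 2
    if v < min_a:
        return 0                    # v below every A value: empty result
    if v >= max_a:
        return n_r                  # v at or above every A value: whole relation
    return n_r * (v - min_a) / (max_a - min_a)
```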
Implementation of Complex Selections
The selectivity of a condition θ_i is the probability that a tuple in
the relation r satisfies θ_i. If s_i is the number of satisfying tuples
in r, the selectivity of θ_i is given by s_i / n_r.
Conjunction: σ_θ1 ∧ θ2 ∧ … ∧ θn(r). Assuming independence, the
estimate for the number of tuples in the result is:
n_r · (s_1 · s_2 · … · s_n) / n_r^n
Disjunction: σ_θ1 ∨ θ2 ∨ … ∨ θn(r). Estimated number of tuples:
n_r · (1 − (1 − s_1/n_r) · (1 − s_2/n_r) · … · (1 − s_n/n_r))
Negation: σ_¬θ(r). Estimated number of tuples:
n_r − size(σ_θ(r))
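The conjunction and disjunction estimates can be sketched as follows (function names are illustrative):

```python
# Conjunction: n_r * (s1 * ... * sk) / n_r**k, assuming independent conditions.
def conjunction_estimate(n_r, sizes):
    est = n_r
    for s in sizes:
        est *= s / n_r          # multiply in each condition's selectivity
    return est

# Disjunction: n_r * (1 - prod(1 - si/n_r)), i.e. at least one condition holds.
def disjunction_estimate(n_r, sizes):
    p_none = 1.0
    for s in sizes:
        p_none *= 1 - s / n_r   # probability no condition is satisfied
    return n_r * (1 - p_none)
```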
Join Operation: Running Example
Running example:
depositor ⋈ customer
Catalog information for join examples:
n_customer = 10,000.
f_customer = 25, which implies that b_customer = 10000/25 = 400.
n_depositor = 5000.
f_depositor = 50, which implies that b_depositor = 5000/50 = 100.
V(customer-name, depositor) = 2500, which implies that, on
average, each customer has two accounts.
Also assume that customer-name in depositor is a foreign key
on customer.
Estimation of the Size of Joins
The Cartesian product r × s contains n_r · n_s tuples; each tuple
occupies s_r + s_s bytes.
If R ∩ S = ∅, then r ⋈ s is the same as r × s.
If R ∩ S is a key for R, then a tuple of s will join with at most
one tuple from r
therefore, the number of tuples in r ⋈ s is no greater than the
number of tuples in s.
If R ∩ S is a foreign key in S referencing R, then the
number of tuples in r ⋈ s is exactly the same as the number of
tuples in s.
The case for R ∩ S being a foreign key referencing S is
symmetric.
In the example query depositor ⋈ customer, customer-name in
depositor is a foreign key of customer
hence, the result has exactly n_depositor tuples, which is 5000
Estimation of the Size of Joins (Cont.)
If R ∩ S = {A} is not a key for R or S:
If we assume that every tuple t in R produces tuples in R ⋈ S, the
number of tuples in R ⋈ S is estimated to be:
n_r · n_s / V(A, s)
If the reverse is true, the estimate is n_r · n_s / V(A, r);
the lower of these two estimates is probably the more accurate one.
Estimation of the Size of Joins (Cont.)
Compute the size estimates for depositor ⋈ customer without
using information about foreign keys:
V(customer-name, depositor) = 2500, and
V(customer-name, customer) = 10000
The two estimates are 5000 * 10000/2500 = 20,000 and 5000 *
10000/10000 = 5000
We choose the lower estimate, which in this case is the same as
our earlier computation using foreign keys.
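A sketch of this estimate (illustrative names), applied to the depositor/customer numbers above:

```python
# Join-size estimate: take the lower of the two estimates
# n_r*n_s/V(A,s) and n_r*n_s/V(A,r).
def join_size_estimate(n_r, n_s, v_a_r, v_a_s):
    return min(n_r * n_s / v_a_s, n_r * n_s / v_a_r)

# depositor ⋈ customer with the catalog numbers from the running example
estimate = join_size_estimate(5000, 10000, 2500, 10000)
```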
Size Estimation for Other Operations
Projection: estimated size of Π_A(r) = V(A, r)
Size Estimation (Cont.)
Outer join:
Estimated size of r ⟕ s = size of r ⋈ s + size of r
Case of right outer join is symmetric
Estimated size of r ⟗ s = size of r ⋈ s + size of r + size of s
Estimation of Number of Distinct Values
Selections: σ_θ(r)
If θ forces A to take a specified value: V(A, σ_θ(r)) = 1.
e.g., θ is A = 3
Estimation of Distinct Values (Cont.)
Joins: r ⋈ s
If all attributes in A are from r:
estimated V(A, r ⋈ s) = min(V(A, r), n_r ⋈ s)
If A contains attributes A1 from r and A2 from s, then estimated
V(A, r ⋈ s) =
min(V(A1, r) * V(A2 – A1, s), V(A1 – A2, r) * V(A2, s), n_r ⋈ s)
A more accurate estimate can be obtained using probability theory, but this
one works fine generally
Estimation of Distinct Values (Cont.)
Estimation of distinct values is straightforward for projections:
they are the same in Π_A(r) as in r.
The same holds for grouping attributes of aggregation.
For aggregated values
For min(A) and max(A), the number of distinct values can be
estimated as min(V(A,r), V(G,r)) where G denotes grouping attributes
For other aggregates, assume all values are distinct, and use V(G,r)
Transformation of Relational Expressions
Two relational algebra expressions are said to be equivalent if
on every legal database instance the two expressions generate
the same set of tuples
Note: order of tuples is irrelevant
In SQL, inputs and outputs are multisets of tuples
Two expressions in the multiset version of the relational algebra are
said to be equivalent if on every legal database instance the two
expressions generate the same multiset of tuples
An equivalence rule says that expressions of two forms are
equivalent
Can replace expression of first form by second, or vice versa
Equivalence Rules
1. Conjunctive selection operations can be deconstructed into a
sequence of individual selections.
σ_θ1 ∧ θ2(E) = σ_θ1(σ_θ2(E))
2. Selection operations are commutative.
σ_θ1(σ_θ2(E)) = σ_θ2(σ_θ1(E))
Pictorial Depiction of Equivalence Rules
Equivalence Rules (Cont.)
5. Theta-join operations (and natural joins) are commutative:
E1 ⋈_θ E2 = E2 ⋈_θ E1
6. (a) Natural join operations are associative:
(E1 ⋈ E2) ⋈ E3 = E1 ⋈ (E2 ⋈ E3)
Equivalence Rules (Cont.)
7. The selection operation distributes over the theta join operation
under the following two conditions:
(a) When all the attributes in θ0 involve only the attributes of one
of the expressions (say, E1) being joined:
σ_θ0(E1 ⋈_θ E2) = (σ_θ0(E1)) ⋈_θ E2
Equivalence Rules (Cont.)
Equivalence Rules (Cont.)
9. The set operations union and intersection are commutative:
E1 ∪ E2 = E2 ∪ E1
E1 ∩ E2 = E2 ∩ E1
(set difference is not commutative).
10. Set union and intersection are associative:
(E1 ∪ E2) ∪ E3 = E1 ∪ (E2 ∪ E3)
(E1 ∩ E2) ∩ E3 = E1 ∩ (E2 ∩ E3)
11. The selection operation distributes over ∪, ∩ and –:
σ_θ(E1 – E2) = σ_θ(E1) – σ_θ(E2)
and similarly for ∪ and ∩ in place of –
Also: σ_θ(E1 – E2) = σ_θ(E1) – E2
and similarly for ∩ in place of –, but not for ∪
12. The projection operation distributes over union:
Π_L(E1 ∪ E2) = (Π_L(E1)) ∪ (Π_L(E2))
Transformation Example
Query: Find the names of all customers who have an account at
some branch located in Brooklyn.
Π_customer-name(σ_branch-city = “Brooklyn”
(branch ⋈ (account ⋈ depositor)))
Transformation using rule 7a.
Π_customer-name
((σ_branch-city = “Brooklyn”(branch))
⋈ (account ⋈ depositor))
Performing the selection as early as possible reduces the size of
the relation to be joined.
Example with Multiple Transformations
Query: Find the names of all customers with an account at
a Brooklyn branch whose account balance is over $1000.
Π_customer-name(σ_branch-city = “Brooklyn” ∧ balance > 1000
(branch ⋈ (account ⋈ depositor)))
Transformation using join associativity (Rule 6a):
Π_customer-name(σ_branch-city = “Brooklyn” ∧ balance > 1000
((branch ⋈ account) ⋈ depositor))
Multiple Transformations (Cont.)
Projection Operation Example
Join Ordering Example
For all relations r1, r2, and r3:
(r1 ⋈ r2) ⋈ r3 = r1 ⋈ (r2 ⋈ r3)
If r2 ⋈ r3 is quite large and r1 ⋈ r2 is small, we choose
(r1 ⋈ r2) ⋈ r3
so that we compute and store a smaller temporary relation.
Join Ordering Example (Cont.)
Enumeration of Equivalent Expressions
Query optimizers use equivalence rules to systematically generate
expressions equivalent to the given expression
Conceptually, generate all equivalent expressions by repeatedly
executing the following step until no more expressions can be
found:
for each expression found so far, use all applicable equivalence rules,
and add newly generated expressions to the set of expressions found
so far
The above approach is very expensive in space and time
Space requirements reduced by sharing common subexpressions:
when E1 is generated from E2 by an equivalence rule, usually only the
top level of the two are different, subtrees below are the same and
can be shared
E.g. when applying join associativity
Time requirements are reduced by not generating all expressions
More details shortly
Evaluation Plan
An evaluation plan defines exactly what algorithm is used for each
operation, and how the execution of the operations is coordinated.
Choice of Evaluation Plans
Must consider the interaction of evaluation techniques when
choosing evaluation plans: choosing the cheapest algorithm for
each operation independently may not yield the best overall
plan. E.g.:
merge join may be costlier than hash join, but may provide a sorted
output which reduces the cost for an outer-level aggregation.
nested-loop join may provide opportunity for pipelining
Practical query optimizers incorporate elements of the following
two broad approaches:
1. Search all the plans and choose the best plan in a
cost-based fashion.
2. Use heuristics to choose a plan.
Cost-Based Optimization
Consider finding the best join order for r1 ⋈ r2 ⋈ … ⋈ rn.
There are (2(n – 1))!/(n – 1)! different join orders for the above
expression. With n = 7, the number is 665,280; with n = 10, it is
greater than 17.6 billion!
No need to generate all the join orders. Using dynamic
programming, the least-cost join order for any subset of
{r1, r2, …, rn} is computed only once and stored for future use.
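The join-order count can be checked directly (a quick sketch; the function name is mine):

```python
# Number of different join orders for r1 ⋈ r2 ⋈ ... ⋈ rn,
# using the formula (2(n-1))! / (n-1)!.
from math import factorial

def num_join_orders(n):
    return factorial(2 * (n - 1)) // factorial(n - 1)
```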
Dynamic Programming in Optimization
Join Order Optimization Algorithm
procedure findbestplan(S)
if (bestplan[S].cost ≠ ∞)
return bestplan[S]
// else bestplan[S] has not been computed earlier, compute it now
for each non-empty subset S1 of S such that S1 ≠ S
P1= findbestplan(S1)
P2= findbestplan(S - S1)
A = best algorithm for joining results of P1 and P2
cost = P1.cost + P2.cost + cost of A
if cost < bestplan[S].cost
bestplan[S].cost = cost
bestplan[S].plan = “execute P1.plan; execute P2.plan;
join results of P1 and P2 using A”
return bestplan[S]
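A minimal Python rendering of the procedure above. The cost callbacks scan_cost and join_cost are stand-ins for a real cost model (assumed interfaces, not from the text); only the dynamic-programming structure is faithful to the pseudocode:

```python
# Dynamic-programming join-order search mirroring findbestplan.
def find_best_plan(relations, scan_cost, join_cost):
    best = {}  # frozenset of relation names -> (cost, plan)

    def solve(S):
        if S in best:                      # bestplan[S] computed earlier
            return best[S]
        if len(S) == 1:                    # base case: scan a single relation
            (r,) = S
            best[S] = (scan_cost(r), r)
            return best[S]
        best_cost, best_plan = float("inf"), None
        items = sorted(S)
        # enumerate every non-empty proper subset S1 of S
        for mask in range(1, 2 ** len(items) - 1):
            S1 = frozenset(x for i, x in enumerate(items) if mask >> i & 1)
            c1, p1 = solve(S1)
            c2, p2 = solve(S - S1)
            cost = c1 + c2 + join_cost(p1, p2)
            if cost < best_cost:
                best_cost, best_plan = cost, (p1, p2)
        best[S] = (best_cost, best_plan)
        return best[S]

    return solve(frozenset(relations))
```

With unit scan and join costs, for example, the best plan for three relations costs 5 (three scans plus two joins).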
Left Deep Join Trees
In left-deep join trees, the right-hand-side input for each
join is a relation, not the result of an intermediate join.
Cost of Optimization
With dynamic programming, the time complexity of optimization
with bushy trees is O(3^n).
With n = 10, this number is about 59,000 instead of 17.6 billion!
Space complexity is O(2^n)
To find the best left-deep join tree for a set of n relations:
Consider n alternatives with one relation as right-hand-side input
and the other relations as left-hand-side input.
Using (recursively computed and stored) least-cost join order for
each alternative on the left-hand side, choose the cheapest of the n
alternatives.
If only left-deep trees are considered, time complexity of finding
the best join order is O(n 2^n)
Space complexity remains at O(2^n)
Cost-based optimization is expensive, but worthwhile for queries
on large datasets (typical queries have small n, generally < 10)
Interesting Orders in Cost-Based Optimization
Heuristic Optimization
Cost-based optimization is expensive, even with dynamic
programming.
Systems may use heuristics to reduce the number of
choices that must be made in a cost-based fashion.
Heuristic optimization transforms the query-tree by using a
set of rules that typically (but not in all cases) improve
execution performance:
Perform selection early (reduces the number of tuples)
Perform projection early (reduces the number of attributes)
Perform most restrictive selection and join operations before
other similar operations.
Some systems use only heuristics, others combine heuristics
with partial cost-based optimization.
Steps in Typical Heuristic Optimization
1. Deconstruct conjunctive selections into a sequence of single
selection operations (Equiv. rule 1.).
2. Move selection operations down the query tree for the
earliest possible execution (Equiv. rules 2, 7a, 7b, 11).
3. Execute first those selection and join operations that will
produce the smallest relations (Equiv. rule 6).
4. Replace Cartesian product operations that are followed by a
selection condition by join operations (Equiv. rule 4a).
5. Deconstruct and move as far down the tree as possible lists
of projection attributes, creating new projections where
needed (Equiv. rules 3, 8a, 8b, 12).
6. Identify those subtrees whose operations can be pipelined,
and execute them using pipelining.
Structure of Query Optimizers
The System R/Starburst optimizer considers only left-deep join
orders. This reduces optimization complexity and generates
plans amenable to pipelined evaluation.
System R/Starburst also uses heuristics to push selections and
projections down the query tree.
Heuristic optimization used in some versions of Oracle:
Repeatedly pick “best” relation to join next
Starting from each of n starting points. Pick best among these.
Structure of Query Optimizers (Cont.)
Optimizing Nested Subqueries**
SQL conceptually treats nested subqueries in the where clause as
functions that take parameters and return a single value or set of
values
Parameters are variables from outer level query that are used in the
nested subquery; such variables are called correlation variables
E.g.
select customer-name
from borrower
where exists (select *
from depositor
where depositor.customer-name =
borrower.customer-name)
Conceptually, nested subquery is executed once for each tuple in
the cross-product generated by the outer level from clause
Such evaluation is called correlated evaluation
Note: other conditions in where clause may be used to compute a join
(instead of a cross-product) before executing the nested subquery
Optimizing Nested Subqueries (Cont.)
Correlated evaluation may be quite inefficient since
a large number of calls may be made to the nested query
there may be unnecessary random I/O as a result
SQL optimizers attempt to transform nested subqueries to joins
where possible, enabling use of efficient join techniques
E.g.: earlier nested query can be rewritten as
select customer-name
from borrower, depositor
where depositor.customer-name = borrower.customer-name
Note: above query doesn’t correctly deal with duplicates, can be
modified to do so as we will see
In general, it is not possible/straightforward to move the entire
nested subquery's from clause into the outer-level query's from clause
A temporary relation is created instead, and used in the body of the
outer-level query
Optimizing Nested Subqueries (Cont.)
In general, SQL queries of the form below can be rewritten as shown
Rewrite: select …
from L1
where P1 and exists (select *
from L2
where P2)
To: create table t1 as
select distinct V
from L2
where P21
select …
from L1, t1
where P1 and P22
P21 contains predicates in P2 that do not involve any correlation variables
P22 reintroduces predicates involving correlation variables, with
relations renamed appropriately
V contains all attributes used in predicates with correlation variables
Optimizing Nested Subqueries (Cont.)
In our example, the original nested query would be transformed to
create table t1 as
select distinct customer-name
from depositor
select customer-name
from borrower, t1
where t1.customer-name = borrower.customer-name
The process of replacing a nested query by a query with a join
(possibly with a temporary relation) is called decorrelation.
Decorrelation is more complicated when
the nested subquery uses aggregation, or
the result of the nested subquery is used to test for equality, or
the condition linking the nested subquery to the outer query
is not exists.
Materialized Views**
A materialized view is a view whose contents are computed
and stored.
Consider the view
create view branch-total-loan(branch-name, total-loan) as
select branch-name, sum(amount)
from loan
group by branch-name
Materializing the above view would be very useful if the total loan
amount is required frequently
Saves the effort of finding multiple tuples and adding up their
amounts
Materialized View Maintenance
The task of keeping a materialized view up-to-date with the
underlying data is known as materialized view maintenance
Materialized views can be maintained by recomputation on every
update
A better option is to use incremental view maintenance
Changes to database relations are used to compute changes to
materialized view, which is then updated
View maintenance can be done by
Manually defining triggers on insert, delete, and update of each
relation in the view definition
Manually written code to update the view whenever database
relations are updated
Supported directly by the database
Incremental View Maintenance
The changes (inserts and deletes) to a relation or expressions
are referred to as its differential
Set of tuples inserted to and deleted from r are denoted ir and dr
To simplify our description, we only consider inserts and deletes
We replace updates to a tuple by deletion of the tuple followed by
insertion of the update tuple
We describe how to compute the change to the result of each
relational operation, given changes to its inputs
We then outline how to handle relational algebra expressions
Join Operation
Consider the materialized view v = r ⋈ s and an update to r
Let r_old and r_new denote the old and new states of relation r
Consider the case of an insert to r:
We can write r_new ⋈ s as (r_old ∪ i_r) ⋈ s
And rewrite the above to (r_old ⋈ s) ∪ (i_r ⋈ s)
But (r_old ⋈ s) is simply the old value of the materialized view, so the
incremental change to the view is just i_r ⋈ s
Thus, for inserts v_new = v_old ∪ (i_r ⋈ s)
Similarly for deletes v_new = v_old – (d_r ⋈ s)
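A toy sketch of incremental join maintenance, with relations as lists of dicts joined on a single shared attribute (the layout and names are my own, not from the text):

```python
# v = r ⋈ s, restricted to an equi-join on one attribute for simplicity.
def join(r, s, attr):
    return [{**t1, **t2} for t1 in r for t2 in s if t1[attr] == t2[attr]]

def apply_insert_to_r(view, i_r, s, attr):
    """v_new = v_old ∪ (i_r ⋈ s): only the inserted tuples are joined."""
    view.extend(join(i_r, s, attr))
    return view
```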
Selection and Projection Operations
Selection: Consider a view v = σ_θ(r).
v_new = v_old ∪ σ_θ(i_r)
v_new = v_old − σ_θ(d_r)
Projection is a more difficult operation
R = (A, B), and r(R) = {(a, 2), (a, 3)}
Π_A(r) has a single tuple (a).
If we delete the tuple (a, 2) from r, we should not delete the tuple (a) from
Π_A(r); but if we then delete (a, 3) as well, we should delete the tuple.
For each tuple in a projection Π_A(r), we will keep a count of how
many times it was derived
On insert of a tuple to r, if the resultant tuple is already in Π_A(r) we
increment its count, else we add a new tuple with count = 1
On delete of a tuple from r, we decrement the count of the corresponding
tuple in Π_A(r)
if the count becomes 0, we delete the tuple from Π_A(r)
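The count-based scheme above as a small sketch (the class name and interface are illustrative):

```python
# Count how many base tuples derive each projected tuple; a projected
# tuple disappears only when its last derivation is deleted.
from collections import Counter

class ProjectionView:
    def __init__(self):
        self.counts = Counter()

    def insert(self, projected):            # tuple already projected on A
        self.counts[projected] += 1

    def delete(self, projected):
        self.counts[projected] -= 1
        if self.counts[projected] == 0:
            del self.counts[projected]      # last derivation gone: drop tuple

    def tuples(self):
        return set(self.counts)
```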
Aggregation Operations
count: v = _A g_count(B)(r).
When a set of tuples i_r is inserted
For each tuple t in i_r, if the corresponding group t.A is already present in v,
we increment its count, else we add a new tuple with count = 1
When a set of tuples d_r is deleted
for each tuple t in d_r we look for the group t.A in v, and subtract 1 from the
count for the group.
– If the count becomes 0, we delete from v the tuple for the group t.A
sum: v = _A g_sum(B)(r)
We maintain the sum in a manner similar to count, except we add/subtract
the B value instead of adding/subtracting 1 for the count
Additionally we maintain the count in order to detect groups with no tuples.
Such groups are deleted from v
Cannot simply test for sum = 0 (why?)
To handle the case of avg, we maintain the sum and count
aggregate values separately, and divide at the end
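A sketch of sum maintenance with the auxiliary count (plain dicts; names illustrative). The test case below deliberately drives the sum to 0 while the group is still non-empty, which is why the count, not the sum, must detect empty groups:

```python
# Incremental maintenance of per-group sums, with a count per group
# so empty groups can be detected and removed.
class SumView:
    def __init__(self):
        self.sums, self.counts = {}, {}

    def insert(self, group, b):
        self.sums[group] = self.sums.get(group, 0) + b
        self.counts[group] = self.counts.get(group, 0) + 1

    def delete(self, group, b):
        self.sums[group] -= b
        self.counts[group] -= 1
        if self.counts[group] == 0:         # count, not sum, detects emptiness
            del self.sums[group], self.counts[group]
```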
Aggregate Operations (Cont.)
min, max: v = _A g_min(B)(r).
Handling insertions on r is straightforward.
Maintaining the aggregate values min and max on deletions may be
more expensive. We have to look at the other tuples of r that are in
the same group to find the new minimum
Other Operations
Set intersection: v = r ∩ s
when a tuple is inserted in r we check if it is present in s, and if so
we add it to v.
If the tuple is deleted from r, we delete it from the intersection if it is
present.
Updates to s are symmetric
The other set operations, union and set difference are handled in a
similar fashion.
Outer joins are handled in much the same way as joins but with
some extra work
Handling Expressions
To handle an entire expression, we derive expressions for
computing the incremental change to the result of each
sub-expression, starting from the smallest sub-expressions.
E.g. consider E1 ⋈ E2 where each of E1 and E2 may be a
complex expression
Suppose the set of tuples to be inserted into E1 is given by D1
Computed earlier, since smaller sub-expressions are handled
first
Then the set of tuples to be inserted into E1 ⋈ E2 is given by
D1 ⋈ E2
This is just the usual way of maintaining joins
Query Optimization and Materialized Views
Rewriting queries to use materialized views:
A materialized view v = r ⋈ s is available
A user submits a query r ⋈ s ⋈ t
We can rewrite the query as v ⋈ t
Whether to do so depends on cost estimates for the two alternatives
Materialized View Selection
Materialized view selection: “What is the best set of views to
materialize?”.
This decision must be made on the basis of the system workload
Indices are just like materialized views; the problem of index
selection is closely related to that of materialized view
selection, although it is simpler.
Some database systems provide tools to help the database
administrator with index and materialized view selection.
End of Chapter
Selections Using Indices
Index scan – search algorithms that use an index; condition is on the
search-key of the index.
A3 (primary index on candidate key, equality). Retrieve a single
record that satisfies the corresponding equality condition. E_A3 = HT_i + 1
A4 (primary index on nonkey, equality). Retrieve multiple records. Let
the search-key attribute be A. E_A4 = HT_i + ⌈SC(A, r)/f_r⌉
A5 (equality on search-key of secondary index).
Retrieve a single record if the search-key is a candidate key.
E_A5 = HT_i + 1
Retrieve multiple records (each may be on a different block) if the
search-key is not a candidate key. E_A5 = HT_i + SC(A, r)
Cost Estimate Example (Indices)
Consider the query σ_branch-name = “Perryridge”(account), with the
primary index on branch-name.
Selections Involving Comparisons
Can implement selections of the form σ_A≤v(r) or σ_A≥v(r) by using a
linear file scan or binary search, or by using indices in the following
ways:
A6 (primary index, comparison). The cost estimate is:
E_A6 = HT_i + ⌈c / f_r⌉
where c is the estimated number of tuples satisfying
the condition. In the absence of statistical information, c is
assumed to be n_r/2.
A7 (secondary index, comparison). The cost estimate is:
E_A7 = HT_i + ⌈LB_i · c / n_r⌉ + c
where c is defined as before, and LB_i is the number of leaf
blocks of the index. (Linear file scan may be cheaper if c is large!)
Example of Cost Estimate for Complex Selection
Consider a selection on account with the following condition:
branch-name = “Perryridge” and balance = 1200
Consider using algorithm A8 (conjunctive selection using one index):
The branch-name index is clustering, and if we use it the cost
estimate is 12 block reads (as we saw before).
The balance index is non-clustering, and
V(balance, account) = 500, so the selection would retrieve
10,000/500 = 20 accounts. Adding the index block reads
gives a cost estimate of 22 block reads.
Thus using branch-name index is preferable, even though its
condition is less selective.
If both indices were non-clustering, it would be preferable to
use the balance index.
Example (Cont.)
Consider using algorithm A10:
Use the index on balance to retrieve the set S1 of pointers to records
with balance = 1200.
Use the index on branch-name to retrieve the set S2 of pointers to records
with branch-name = “Perryridge”.
S1 ∩ S2 = set of pointers to records with branch-name =
“Perryridge” and balance = 1200.
The numbers of pointers retrieved (20 and 200) each fit into a single leaf
page; we read four index blocks to retrieve the two sets of pointers
and compute their intersection.
Estimate that one tuple in 50 * 500 meets both conditions. Since
n_account = 10000, conservatively overestimate that
S1 ∩ S2 contains one pointer.
The total estimated cost of this strategy is five block reads.
The total estimated cost of this strategy is five block reads.