Outline
■ Introduction
■ Background
■ Distributed DBMS Architecture
■ Distributed Database Design
■ Semantic Data Control
❏ Distributed Query Processing
➠ Query Processing Methodology
➠ Distributed Query Optimization
❏ Distributed Transaction Management
❏ Parallel Database Systems
❏ Distributed Object DBMS
❏ Database Interoperability
❏ Current Issues
Query Processing
[Figure: the query processor translates a high-level user query on distributed data into low-level data manipulation operations.]
Query Processing Components
■ Query optimization
➠ How do we determine the “best” execution plan?
Selecting Alternatives
SELECT ENAME
FROM EMP,ASG
WHERE EMP.ENO = ASG.ENO
AND DUR > 37
Strategy 1
ΠENAME(σDUR>37 ∧ EMP.ENO=ASG.ENO(EMP × ASG))
Strategy 2
ΠENAME(EMP ⋈ENO σDUR>37(ASG))
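The two strategies compute the same answer. As a quick sanity check, a minimal Python sketch over hypothetical toy EMP and ASG tuples (the sample data is ours, not from the slides):

EMP = [("E1", "J. Doe"), ("E2", "M. Smith")]
ASG = [("E1", "P1", 12), ("E2", "P1", 48)]

# Strategy 1: Cartesian product, then select, then project
s1 = {ename for (eno, ename) in EMP for (aeno, pno, dur) in ASG
      if dur > 37 and eno == aeno}

# Strategy 2: select ASG first, then join on ENO, then project
asg_sel = [(aeno, pno, dur) for (aeno, pno, dur) in ASG if dur > 37]
s2 = {ename for (eno, ename) in EMP for (aeno, pno, dur) in asg_sel
      if eno == aeno}

assert s1 == s2 == {"M. Smith"}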
What is the Problem?
Fragments (EMP and ASG are horizontally fragmented and distributed):
Site 1: ASG1 = σENO≤“E3”(ASG)      Site 2: ASG2 = σENO>“E3”(ASG)
Site 3: EMP1 = σENO≤“E3”(EMP)      Site 4: EMP2 = σENO>“E3”(EMP)
Site 5: result

Strategy 1 (exploit the fragmentation):
Site 1: ASG1' = σDUR>37(ASG1)      Site 2: ASG2' = σDUR>37(ASG2)
Site 3: EMP1' = EMP1 ⋈ENO ASG1'    Site 4: EMP2' = EMP2 ⋈ENO ASG2'
Site 5: result = EMP1' ∪ EMP2'

Strategy 2 (ship all fragments to the result site):
Site 5: result = (EMP1 ∪ EMP2) ⋈ENO σDUR>37(ASG1 ∪ ASG2)
Cost of Alternatives
■ Assume:
➠ size(EMP) = 400, size(ASG) = 1000
➠ tuple access cost = 1 unit; tuple transfer cost = 10 units
■ Strategy 1
❶ produce ASG': (10+10) ∗ tuple access cost = 20
❷ transfer ASG' to the sites of EMP: (10+10) ∗ tuple transfer cost = 200
❸ produce EMP': (10+10) ∗ tuple access cost ∗ 2 = 40
❹ transfer EMP' to result site: (10+10) ∗ tuple transfer cost = 200
Total cost = 460
■ Strategy 2
❶ transfer EMP to Site 5: 400 ∗ tuple transfer cost = 4,000
❷ transfer ASG to Site 5: 1,000 ∗ tuple transfer cost = 10,000
❸ produce ASG': 1,000 ∗ tuple access cost = 1,000
❹ join EMP and ASG': 400 ∗ 20 ∗ tuple access cost = 8,000
Total cost = 23,000
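The arithmetic above translates directly into code; a small Python transcription of the slide's cost model:

ACCESS, TRANSFER = 1, 10  # per-tuple costs from the assumptions above

strategy1 = ((10 + 10) * ACCESS        # produce ASG'
             + (10 + 10) * TRANSFER    # ship ASG' to the EMP sites
             + (10 + 10) * ACCESS * 2  # produce EMP'
             + (10 + 10) * TRANSFER)   # ship EMP' to the result site

strategy2 = (400 * TRANSFER        # ship EMP to Site 5
             + 1000 * TRANSFER     # ship ASG to Site 5
             + 1000 * ACCESS       # produce ASG'
             + 400 * 20 * ACCESS)  # join EMP with ASG'

assert strategy1 == 460 and strategy2 == 23000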
Query Optimization Objectives
Minimize a cost function
I/O cost + CPU cost + communication cost
These might have different weights in different distributed
environments
Wide area networks
➠ communication cost will dominate
◆ low bandwidth
◆ low speed
◆ high protocol overhead
➠ most algorithms ignore all other cost components
Local area networks
➠ communication cost not that dominant
➠ total cost function should be considered
Can also maximize throughput
Complexity of Relational
Operations
■ Assume
➠ relations of cardinality n
➠ sequential scan

Operation                                          Complexity
Select; Project (without duplicate elimination)    O(n)
Project (with duplicate elimination); Group        O(n log n)
Join; Semi-join; Division; Set Operators           O(n log n)
Query Optimization Issues –
Types of Optimizers
■ Exhaustive search
➠ cost-based
➠ optimal
➠ combinatorial complexity in the number of relations
■ Heuristics
➠ not optimal
➠ regroup common sub-expressions
➠ perform selection, projection first
➠ replace a join by a series of semijoins
➠ reorder operations to reduce intermediate relation size
➠ optimize individual operations
Query Optimization Issues –
Optimization Granularity
Query Optimization Issues –
Optimization Timing
■ Static
➠ optimize at compile time, prior to execution
➠ difficult to estimate the size of intermediate results ⇒ error propagation
➠ can amortize over many executions
➠ R*
■ Dynamic
➠ run time optimization
➠ exact information on the intermediate relation sizes
➠ have to reoptimize for multiple executions
➠ Distributed INGRES
■ Hybrid
➠ compile using a static algorithm
➠ if the error in estimated sizes exceeds a threshold, reoptimize at run time
➠ MERMAID
Query Optimization Issues –
Statistics
■ Relation
➠ cardinality
➠ size of a tuple
➠ fraction of tuples participating in a join with
another relation
■ Attribute
➠ cardinality of domain
➠ actual number of distinct values
■ Common assumptions
➠ independence between different attribute values
➠ uniform distribution of attribute values within their
domain
Query Optimization Issues –
Decision Sites
■ Centralized
➠ single site determines the “best” schedule
➠ simple
➠ need knowledge about the entire distributed
database
■ Distributed
➠ cooperation among sites to determine the schedule
➠ need only local information
➠ cost of cooperation
■ Hybrid
➠ one site determines the global schedule
➠ each site optimizes the local subqueries
Query Optimization Issues –
Network Topology
■ Wide area networks (WAN) – point-to-point
➠ characteristics
◆ low bandwidth
◆ low speed
◆ high protocol overhead
➠ communication cost will dominate; ignore all other
cost factors
➠ global schedule to minimize communication cost
➠ local schedules according to centralized query
optimization
■ Local area networks (LAN)
➠ communication cost not that dominant
➠ total cost function should be considered
➠ broadcasting can be exploited (joins)
➠ special algorithms exist for star networks
Distributed Query
Processing Methodology
Calculus Query on Distributed Relations
↓ Query Decomposition (using the GLOBAL SCHEMA)
Algebraic Query on Distributed Relations
↓ Data Localization (using the FRAGMENT SCHEMA)
Fragment Query
↓ Global Optimization (using STATS ON FRAGMENTS)
Optimized Fragment Query with Communication Operations
↓ Local Optimization (using the LOCAL SCHEMAS)
Optimized Local Queries
Step 1 – Query Decomposition
Input : Calculus query on global relations
■ Normalization
➠ manipulate query quantifiers and qualification
■ Analysis
➠ detect and reject “incorrect” queries
➠ possible for only a subset of relational calculus
■ Simplification
➠ eliminate redundant predicates
■ Restructuring
➠ calculus query → algebraic query
➠ more than one translation is possible
➠ use transformation rules
Normalization
■ Lexical and syntactic analysis
➠ check validity (similar to compilers)
➠ check for attributes and relations
➠ type checking on the qualification
■ Put into normal form
➠ Conjunctive normal form
(p11∨p12∨…∨p1n) ∧…∧ (pm1∨pm2∨…∨pmn)
➠ Disjunctive normal form
(p11∧p12∧…∧p1n) ∨…∨ (pm1∧pm2∧…∧pmn)
➠ OR's mapped into union
➠ AND's mapped into join or selection
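To make the normalization step concrete, here is a small Python sketch (the tuple-based predicate encoding is an assumption for illustration) that pushes negations inward via De Morgan and distributes OR over AND to obtain conjunctive normal form:

# Predicates: atoms are strings; compound nodes are tuples
# ("not", p), ("and", p, q), ("or", p, q)

def nnf(p):
    """Push negations down to the atoms (De Morgan)."""
    if isinstance(p, str):
        return p
    if p[0] == "not":
        q = p[1]
        if isinstance(q, str):
            return p
        if q[0] == "not":
            return nnf(q[1])
        inner = "or" if q[0] == "and" else "and"
        return (inner, nnf(("not", q[1])), nnf(("not", q[2])))
    return (p[0], nnf(p[1]), nnf(p[2]))

def cnf(p):
    """Distribute OR over AND, assuming the input is in negation normal form."""
    if isinstance(p, str) or p[0] == "not":
        return p
    a, b = cnf(p[1]), cnf(p[2])
    if p[0] == "and":
        return ("and", a, b)
    for x, y in ((a, b), (b, a)):
        if not isinstance(x, str) and x[0] == "and":
            return ("and", cnf(("or", x[1], y)), cnf(("or", x[2], y)))
    return ("or", a, b)

# e.g. NOT(p1 AND p2) OR p3 becomes the single clause (¬p1 ∨ ¬p2 ∨ p3)
print(cnf(nnf(("or", ("not", ("and", "p1", "p2")), "p3"))))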
Analysis
■ Refute incorrect queries
■ Type incorrect
➠ If any of its attribute or relation names are not defined in
the global schema
➠ If operations are applied to attributes of the wrong type
■ Semantically incorrect
➠ Components do not contribute in any way to the
generation of the result
➠ Only a subset of relational calculus queries can be tested
for correctness
➠ Those that do not contain disjunction and negation
➠ To detect
◆ connection graph (query graph)
◆ join graph
Analysis – Example
SELECT ENAME,RESP
FROM EMP, ASG, PROJ
WHERE EMP.ENO = ASG.ENO
AND ASG.PNO = PROJ.PNO
AND PNAME = "CAD/CAM"
AND DUR ≥ 36
AND TITLE = "Programmer"
[Query graph: EMP –EMP.ENO=ASG.ENO– ASG –ASG.PNO=PROJ.PNO– PROJ, with selections TITLE=“Programmer” on EMP, DUR≥36 on ASG, PNAME=“CAD/CAM” on PROJ, and result edges ENAME and RESP into RESULT.
Join graph: only the join edges EMP.ENO=ASG.ENO and ASG.PNO=PROJ.PNO.]
Analysis
If the query graph is not connected, the query is
wrong.
SELECT ENAME,RESP
FROM EMP, ASG, PROJ
WHERE EMP.ENO = ASG.ENO
AND PNAME = "CAD/CAM"
AND DUR ≥ 36
AND TITLE = "Programmer"
[Query graph: the join predicate ASG.PNO = PROJ.PNO is missing, so PROJ is disconnected from EMP and ASG; the query is semantically incorrect.]
Simplification
■ Why simplify?
➠ Remember the example
■ How? Use transformation rules
➠ elimination of redundancy
◆ idempotency rules
p1 ∧ ¬( p1) ⇔ false
p1 ∧ (p1 ∨ p2) ⇔ p1
p1 ∨ false ⇔ p1
…
➠ application of transitivity
➠ use of integrity rules
Simplification – Example
SELECT TITLE
FROM EMP
WHERE EMP.ENAME = “J. Doe”
OR (NOT(EMP.TITLE = “Programmer”)
AND (EMP.TITLE = “Programmer”
OR EMP.TITLE = “Elect. Eng.”)
AND NOT(EMP.TITLE = “Elect. Eng.”))
⇓
SELECT TITLE
FROM EMP
WHERE EMP.ENAME = “J. Doe”
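The reduction can be checked mechanically. A short Python truth-table sweep, with p1 standing for ENAME=“J. Doe”, p2 for TITLE=“Programmer” and p3 for TITLE=“Elect. Eng.”, confirms the qualification is equivalent to p1 alone:

from itertools import product

for p1, p2, p3 in product([True, False], repeat=3):
    original = p1 or ((not p2) and (p2 or p3) and (not p3))
    assert original == p1  # the second disjunct is a contradiction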
Restructuring
■ Convert relational calculus to relational algebra
■ Make use of query trees
■ Example: Find the names of employees other than J. Doe who worked on the CAD/CAM project for either one or two years.
SELECT ENAME
FROM EMP, ASG, PROJ
WHERE EMP.ENO = ASG.ENO
AND ASG.PNO = PROJ.PNO
AND ENAME ≠ “J. Doe”
AND PNAME = “CAD/CAM”
AND (DUR = 12 OR DUR = 24)
[Query tree: ΠENAME(σDUR=12 ∨ DUR=24(σPNAME=“CAD/CAM”(σENAME≠“J. Doe”(PROJ ⋈PNO (EMP ⋈ENO ASG)))))]
Restructuring –
Transformation Rules
■ Commutativity of binary operations
➠ R × S ⇔ S × R
➠ R ⋈ S ⇔ S ⋈ R
➠ R ∪ S ⇔ S ∪ R
■ Associativity of binary operations
➠ (R × S) × T ⇔ R × (S × T)
➠ (R ⋈ S) ⋈ T ⇔ R ⋈ (S ⋈ T)
■ Idempotence of unary operations
➠ ΠA'(ΠA''(R)) ⇔ ΠA'(R)
➠ σp1(A1)(σp2(A2)(R)) = σp1(A1) ∧ p2(A2)(R)
where R[A], A' ⊆ A, A'' ⊆ A and A' ⊆ A''
■ Commuting selection with projection
Restructuring –
Transformation Rules
■ Commuting selection with binary operations
➠ σp(A)(R × S) ⇔ (σp(A)(R)) × S
➠ σp(Ai)(R ⋈(Aj,Bk) S) ⇔ (σp(Ai)(R)) ⋈(Aj,Bk) S
➠ σp(Ai)(R ∪ T) ⇔ σp(Ai)(R) ∪ σp(Ai)(T)
where Ai belongs to both R and T
Example
Recall the previous example: Find the names of employees other than J. Doe who worked on the CAD/CAM project for either one or two years.
[The query tree of the Restructuring slide is rewritten by applying the transformation rules above, pushing selections toward the leaves.]
Equivalent Query
[Equivalent query tree: ΠENAME applied to a single multi-attribute join ⋈PNO∧ENO whose operands are the selections pushed down onto ASG, PROJ and EMP.]
Restructuring
[Restructured tree: selections are pushed to the leaves and projections such as ΠPNO,ENAME are inserted below the joins ⋈ENO and ⋈PNO, so that only the needed attributes flow up to the root ΠENAME.]
Step 2 – Data Localization
Input: Algebraic query on distributed relations
Example
Assume
➠ EMP is fragmented into EMP1, EMP2, EMP3 and ASG into ASG1, ASG2 as follows:
◆ EMP1 = σENO≤“E3”(EMP)
◆ EMP2 = σ“E3”<ENO≤“E6”(EMP)
◆ EMP3 = σENO>“E6”(EMP)
◆ ASG1 = σENO≤“E3”(ASG)
◆ ASG2 = σENO>“E3”(ASG)
Replace EMP by (EMP1 ∪ EMP2 ∪ EMP3) and ASG by (ASG1 ∪ ASG2) in any query.
[Localized query tree: the previous example's tree (ΠENAME, σDUR=12 ∨ DUR=24, σPNAME=“CAD/CAM”, ⋈PNO, ⋈ENO) with PROJ joined to the unions of the fragments.]
Eliminates Unnecessary Work
Reduction for PHF
■ Reduction with selection
➠ Given relation R and FR = {R1, R2, …, Rw} where Rj = σpj(R), a selection σpi(Rj) produces an empty relation whenever pi contradicts the fragment qualification pj, so such fragments can be eliminated.
[Example: σENO=“E5”(EMP1 ∪ EMP2 ∪ EMP3) reduces to σENO=“E5”(EMP2), since ENO=“E5” contradicts the qualifications of EMP1 and EMP3.]
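A minimal Python sketch of the contradiction test for simple range qualifications (the interval encoding of fragment predicates is a hypothetical illustration):

# Fragment qualifications as half-open ENO intervals (lo, hi], None = unbounded
FRAGMENTS = {
    "EMP1": (None, "E3"),   # ENO <= "E3"
    "EMP2": ("E3", "E6"),   # "E3" < ENO <= "E6"
    "EMP3": ("E6", None),   # ENO > "E6"
}

def may_contain(frag, value):
    """False means the selection contradicts the fragment qualification."""
    lo, hi = FRAGMENTS[frag]
    return (lo is None or value > lo) and (hi is None or value <= hi)

# sigma ENO="E5" survives only on EMP2
print([f for f in FRAGMENTS if may_contain(f, "E5")])  # ['EMP2']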
Reduction for PHF
■ Reduction with join - Example
➠ Assume EMP is fragmented as before and
ASG1: σENO ≤ "E3"(ASG)
ASG2: σENO > "E3"(ASG)
➠ Consider the query
SELECT*
FROM EMP, ASG
WHERE EMP.ENO=ASG.ENO
[Generic query: (EMP1 ∪ EMP2 ∪ EMP3) ⋈ENO (ASG1 ∪ ASG2). Distributing the join over the unions and removing the joins of fragments with contradicting qualifications leaves (EMP1 ⋈ENO ASG1) ∪ (EMP2 ⋈ENO ASG2) ∪ (EMP3 ⋈ENO ASG2).]
Reduction for VF
■ Find useless (not empty) intermediate relations
Relation R defined over attributes A = {A1, ..., An} vertically
fragmented as Ri = ΠA' (R) where A' ⊆ A:
ΠD,K(Ri) is useless if the set of projection attributes D is not in A'
Example: EMP1= ΠENO,ENAME (EMP); EMP2= ΠENO,TITLE (EMP)
SELECT ENAME
FROM EMP
ΠENAME(EMP1 ⋈ENO EMP2) ⇒ ΠENAME(EMP1)
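The uselessness test is just a subset check on attribute sets; a small Python sketch (the fragment catalog is an illustrative assumption):

# Vertical fragments and the attributes they carry (ENO is the key)
VF = {
    "EMP1": {"ENO", "ENAME"},
    "EMP2": {"ENO", "TITLE"},
}

def useful(fragment, projection_attrs, key=frozenset({"ENO"})):
    """A fragment is useful only if it carries some projected non-key attribute."""
    return bool((projection_attrs - key) & VF[fragment])

print([f for f in VF if useful(f, {"ENAME"})])  # ['EMP1']; EMP2 can be dropped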
Reduction for DHF
■ Rule :
➠ Distribute joins over unions
➠ Apply the join reduction for horizontal fragmentation
■ Example
ASG1 = ASG ⋉ENO EMP1
ASG2 = ASG ⋉ENO EMP2
EMP1 = σTITLE=“Programmer”(EMP)
EMP2 = σTITLE≠“Programmer”(EMP)
Query
SELECT *
FROM EMP, ASG
WHERE ASG.ENO = EMP.ENO
AND EMP.TITLE = “Mech. Eng.”
Reduction for DHF
Generic query:
σTITLE=“Mech. Eng.”((ASG1 ∪ ASG2) ⋈ENO (EMP1 ∪ EMP2))
Pushing the selection down to the EMP fragments eliminates EMP1, leaving:
(ASG1 ∪ ASG2) ⋈ENO σTITLE=“Mech. Eng.”(EMP2)
Reduction for DHF
Joins over unions: distributing the join over the remaining union gives
(ASG1 ⋈ENO σTITLE=“Mech. Eng.”(EMP2)) ∪ (ASG2 ⋈ENO σTITLE=“Mech. Eng.”(EMP2))
The left term is empty, since ASG1 is derived from the Programmer fragment EMP1; what remains is
ASG2 ⋈ENO σTITLE=“Mech. Eng.”(EMP2)
Reduction for HF
■ Combine the rules already specified:
➠ Remove empty relations generated by contradicting selections on horizontal fragments;
➠ Remove useless relations generated by projections on vertical fragments;
➠ Distribute joins over unions in order to isolate and remove useless joins.
Reduction for HF
Example
Consider the following hybrid fragmentation:
EMP1 = σENO≤“E4”(ΠENO,ENAME(EMP))
EMP2 = σENO>“E4”(ΠENO,ENAME(EMP))
EMP3 = ΠENO,TITLE(EMP)
and the query
SELECT ENAME
FROM EMP
WHERE ENO=“E5”
[Reduction: ΠENAME(σENO=“E5”((EMP1 ∪ EMP2) ⋈ENO EMP3)) reduces to ΠENAME(σENO=“E5”(EMP2)): the selection eliminates EMP1 and the projection makes EMP3 useless.]
Step 3 – Global Query Optimization
Input: Fragment query
■ Find the best (not necessarily optimal) global
schedule
➠ Minimize a cost function
➠ Distributed join processing
◆ Bushy vs. linear trees
◆ Which relation to ship where?
◆ Ship-whole vs ship-as-needed
➠ Decide on the use of semijoins
◆ Semijoin saves on communication at the expense of
more local processing.
➠ Join methods
◆ nested loop vs ordered joins (merge join or hash join)
Cost-Based Optimization
■ Solution space
➠ The set of equivalent algebra expressions (query trees).
■ Search algorithm
➠ How do we move inside the solution space?
➠ Exhaustive search, heuristic algorithms (iterative
improvement, simulated annealing, genetic,…)
Query Optimization Process
[Figure: the Input Query is expanded through transformation rules into a search space of equivalent QEPs (query execution plans); a search strategy guided by the cost model selects the Best QEP.]
Search Space
■ Search space characterized by alternative execution plans
■ Focus on join trees
[Example: three equivalent join trees for a query joining EMP, ASG and PROJ:
(EMP ⋈ENO ASG) ⋈PNO PROJ; (PROJ ⋈PNO ASG) ⋈ENO EMP; (PROJ × EMP) ⋈PNO,ENO ASG]
Search Space
■ Restrict by means of heuristics
➠ Perform unary operations before binary operations
➠ …
■ Restrict the shape of the join tree
➠ Consider only linear trees, ignore bushy ones
[Figure: a linear join tree (((R1 ⋈ R2) ⋈ R3) ⋈ R4) contrasted with a bushy join tree ((R1 ⋈ R2) ⋈ (R3 ⋈ R4)).]
Search Strategy
■ How to “move” in the search space.
■ Deterministic
➠ Start from base relations and build plans by adding one
relation at each step
➠ Dynamic programming: breadth-first
➠ Greedy: depth-first
■ Randomized
➠ Search for optima in the neighborhood of a particular starting point
➠ Trade optimization time for execution time
➠ Better when > 5-6 relations
➠ Simulated annealing
➠ Iterative improvement
Search Strategies
■ Deterministic
[Figure: a deterministic strategy grows one plan bottom-up: R1 ⋈ R2, then (R1 ⋈ R2) ⋈ R3, then ((R1 ⋈ R2) ⋈ R3) ⋈ R4.]
■ Randomized
[Figure: a randomized strategy moves between complete plans, e.g. from (R1 ⋈ R2) ⋈ R3 to (R1 ⋈ R3) ⋈ R2.]
Cost Functions
■ Total Time (or Total Cost)
➠ Reduce each cost (in terms of time) component
individually
➠ Do as little of each cost component as possible
➠ Optimizes the utilization of the resources
■ Response Time
➠ Do as many things in parallel as possible
➠ May increase total time because of increased total activity
Total Cost
Summation of all cost factors:
Total cost = CPU cost + I/O cost + communication cost
Total Cost Factors
Response Time
Example
[Figure: Site 1 transfers x units and Site 2 transfers y units to Site 3, where the result is assembled. Total time adds the two transfers; response time takes the larger of the two, since they can proceed in parallel.]

Join selectivity factor:
SF⋈(R,S) = card(R ⋈ S) / (card(R) ∗ card(S))
Intermediate Relation Sizes
Selection
size(R) = card(R) ∗ length(R)
card(σF(R)) = SFσ(F) ∗ card(R)
where
SFσ(A = value) = 1 / card(ΠA(R))
SFσ(A > value) = (max(A) − value) / (max(A) − min(A))
SFσ(A < value) = (value − min(A)) / (max(A) − min(A))
Intermediate Relation Sizes
Projection
card(ΠA(R))=card(R)
Cartesian Product
card(R × S) = card(R) ∗ card(S)
Union
upper bound: card(R ∪ S) = card(R) + card(S)
lower bound: card(R ∪ S) = max{card(R), card(S)}
Set Difference
upper bound: card(R–S) = card(R)
lower bound: 0
Intermediate Relation Size
Join
➠ Special case: A is a key of R and B is a foreign key of S:
card(R ⋈A=B S) = card(S)
➠ More general:
card(R ⋈ S) = SF⋈ ∗ card(R) ∗ card(S)
Semijoin
card(R ⋉A S) = SF⋉(S.A) ∗ card(R)
where
SF⋉(R ⋉A S) = SF⋉(S.A) = card(ΠA(S)) / card(dom[A])
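These estimators translate directly into code; a minimal Python sketch (the function names are ours):

def card_select_eq(card_R, distinct_A):
    return (1 / distinct_A) * card_R            # SFσ(A = value) ∗ card(R)

def card_select_gt(card_R, max_A, min_A, value):
    return (max_A - value) / (max_A - min_A) * card_R

def card_join(sf_join, card_R, card_S):
    return sf_join * card_R * card_S            # general join estimate

def card_semijoin(card_R, distinct_SA, card_dom_A):
    return (distinct_SA / card_dom_A) * card_R  # SF⋉(S.A) ∗ card(R)

# e.g. semijoin of R (1500 tuples) by S.A with 320 of 1000 domain values present
print(card_semijoin(1500, 320, 1000))  # 480.0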
Centralized Query Optimization
■ INGRES
➠ dynamic
➠ interpretive
■ System R
➠ static
➠ exhaustive search
INGRES Algorithm
❶ Decompose each multi-variable query into a
sequence of mono-variable queries with a
common variable
❷ Process each by a one-variable query processor
➠ Choose an initial execution plan (heuristics)
➠ Order the rest by considering intermediate relation
sizes
INGRES Algorithm–Decomposition
■ Replace an n variable query q by a series of
queries
q1 → q2 → … → qn
where qi uses the result of qi-1.
■ Detachment
➠ Query q decomposed into q' → q" where q' and q"
have a common variable which is the result of q'
■ Tuple substitution
➠ Replace one variable (relation) by its actual tuple values, one at a time, and simplify the query
q(V1, V2, …, Vn) → (q'(t1, V2, …, Vn), t1 ∈ R1)
Detachment
Detachment Example
Names of employees working on CAD/CAM project
q1: SELECT EMP.ENAME
FROM EMP, ASG, PROJ
WHERE EMP.ENO=ASG.ENO
AND ASG.PNO=PROJ.PNO
AND PROJ.PNAME="CAD/CAM"
Detachment Example (cont’d)
The selection on PROJ is detached first:
q11: SELECT PROJ.PNO INTO JVAR
FROM PROJ
WHERE PROJ.PNAME="CAD/CAM"
q': SELECT EMP.ENAME
FROM EMP, ASG, JVAR
WHERE EMP.ENO=ASG.ENO
AND ASG.PNO=JVAR.PNO
Tuple Substitution
q' is further detached into q12 (which computes GVAR, the ENO values of the ASG tuples matching JVAR) and q13:
q13: SELECT EMP.ENAME
FROM EMP, GVAR
WHERE EMP.ENO=GVAR.ENO
q11 is a mono-variable query; q12 and q13 are subject to tuple substitution.
Assume GVAR has only two tuples: <E1> and <E2>. Then q13 becomes
q131: SELECT EMP.ENAME
FROM EMP
WHERE EMP.ENO="E1"
q132: SELECT EMP.ENAME
FROM EMP
WHERE EMP.ENO="E2"
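Tuple substitution itself is mechanical; a toy Python sketch (the query template and GVAR contents are illustrative):

# Substitute each GVAR tuple into the mono-variable template for q13
TEMPLATE = 'SELECT EMP.ENAME FROM EMP WHERE EMP.ENO="{eno}"'
GVAR = ["E1", "E2"]  # result of q12

queries = [TEMPLATE.format(eno=t) for t in GVAR]
for q in queries:
    print(q)  # q131, q132: one mono-variable query per GVAR tuple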
System R Algorithm
❶ Determine the best access path to each individual relation
❷ Execute joins
➠ Determine the possible join orderings
➠ Evaluate the cost of each ordering and choose the cheapest
System R Algorithm
For joins, two alternative algorithms :
■ Nested loops
for each tuple of external relation (cardinality n1)
for each tuple of internal relation (cardinality n2)
join two tuples if the join predicate is true
end
end
➠ Complexity: n1∗n2
■ Merge join
sort relations
merge relations
➠ Complexity: n1+ n2 if relations are previously sorted and
equijoin
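For concreteness, minimal Python versions of both join methods (the tuple layout is an assumption):

def nested_loop_join(R, S, theta):
    """O(n1*n2): compare every pair of tuples."""
    return [(r, s) for r in R for s in S if theta(r, s)]

def merge_join(R, S, key):
    """O(n1+n2) after sorting; equijoin on key, with duplicate handling."""
    R, S = sorted(R, key=key), sorted(S, key=key)
    out, i, j = [], 0, 0
    while i < len(R) and j < len(S):
        if key(R[i]) < key(S[j]):
            i += 1
        elif key(R[i]) > key(S[j]):
            j += 1
        else:
            k = key(R[i])
            i2, j2 = i, j
            while i2 < len(R) and key(R[i2]) == k:
                i2 += 1
            while j2 < len(S) and key(S[j2]) == k:
                j2 += 1
            out += [(r, s) for r in R[i:i2] for s in S[j:j2]]
            i, j = i2, j2
    return out

EMP = [("E1", "J. Doe"), ("E2", "M. Smith")]
ASG = [("E1", "P1"), ("E2", "P2"), ("E2", "P3")]
assert nested_loop_join(EMP, ASG, lambda r, s: r[0] == s[0]) \
       == merge_join(EMP, ASG, key=lambda t: t[0])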
System R Algorithm – Example
Names of employees working on the CAD/CAM project
Assume
➠ EMP has an index on ENO,
➠ ASG has an index on PNO,
➠ PROJ has an index on PNO and an index on PNAME
[Join graph: EMP ⋈ENO ASG ⋈PNO PROJ]
System R Example (cont’d)
❶ Choose the best access paths to each relation
➠ EMP: sequential scan (no selection on EMP)
➠ ASG: sequential scan (no selection on ASG)
➠ PROJ: index on PNAME (there is a selection on
PROJ based on PNAME)
❷ Determine the best join ordering
➠ EMP ⋈ ASG ⋈ PROJ
➠ ASG ⋈ PROJ ⋈ EMP
➠ PROJ ⋈ ASG ⋈ EMP
➠ ASG ⋈ EMP ⋈ PROJ
➠ (EMP × PROJ) ⋈ ASG
➠ (PROJ × EMP) ⋈ ASG
➠ Select the best ordering based on the join costs
evaluated according to the two methods
System R Algorithm
Alternatives
[Search tree: of the candidate first joins (EMP ⋈ ASG), (EMP × PROJ), (ASG ⋈ EMP), (PROJ ⋈ ASG), (PROJ × EMP), all but (PROJ ⋈ ASG) are pruned.]
System R Algorithm
■ Best ordering: ((PROJ ⋈ ASG) ⋈ EMP), since PROJ has a useful index on the select attribute (PNAME) and there is direct access to the join attributes of ASG and EMP
Join Ordering in Fragment Queries
■ Ordering joins
➠ Distributed INGRES
➠ System R*
■ Semijoin ordering
➠ SDD-1
Join Ordering
■ Consider two relations only: ship the smaller relation to the site of the larger one
■ For more than two relations
➠ Use heuristics
Join Ordering – Example
Consider
PROJ ⋈PNO ASG ⋈ENO EMP
[Join graph: EMP stored at Site 1, ASG at Site 2, PROJ at Site 3; join edges EMP ⋈ENO ASG and ASG ⋈PNO PROJ.]
Join Ordering – Example
Execution alternatives:
1. EMP → Site 2
   Site 2 computes EMP' = EMP ⋈ ASG
   EMP' → Site 3
   Site 3 computes EMP' ⋈ PROJ
2. ASG → Site 1
   Site 1 computes EMP' = EMP ⋈ ASG
   EMP' → Site 3
   Site 3 computes EMP' ⋈ PROJ
…
5. EMP → Site 2, PROJ → Site 2
   Site 2 computes EMP ⋈ PROJ ⋈ ASG
Semijoin Algorithms
■ Consider the join of two relations:
➠ R[A] (located at site 1)
➠ S[A] (located at site 2)
■ Alternatives:
1 Do the join R ⋈A S
2 Perform one of the semijoin equivalents
R ⋈A S ⇔ (R ⋉A S) ⋈A S
⇔ R ⋈A (S ⋉A R)
⇔ (R ⋉A S) ⋈A (S ⋉A R)
Semijoin Algorithms
■ Perform the join
➠ send R to Site 2
➠ Site 2 computes R ⋈A S
■ Consider semijoin (R ⋉A S) ⋈A S
➠ S' ← ΠA(S)
➠ S' → Site 1
➠ Site 1 computes R' = R ⋉A S'
➠ R' → Site 2
➠ Site 2 computes R' ⋈A S
Semijoin is better if
size(ΠA(S)) + size(R ⋉A S) < size(R)
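The rule compares the bytes shipped by each plan; a small Python sketch (sizes in arbitrary units, the sample values are ours):

def semijoin_wins(size_proj_A_of_S, size_R_reduced, size_R):
    """Semijoin ships ΠA(S) one way and the reduced R back;
    the plain join ships all of R."""
    return size_proj_A_of_S + size_R_reduced < size_R

# e.g. R is 1500 units; ΠA(S) is 320 units; only 480 units of R survive
print(semijoin_wins(320, 480, 1500))  # True: ship 800 units instead of 1500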
Distributed Query
Processing
[Comparison table of the distributed query processing algorithms (Distributed INGRES, R*, SDD-1). Statistics legend: 1: relation cardinality; 2: number of unique values per attribute; 3: join selectivity factor; 4: size of projection on each join attribute; 5: attribute size and tuple size]
Distributed INGRES Algorithm
Same as the centralized version, except that the cost of moving fragments between sites must also be taken into account.
R* Algorithm
■ Cost function includes local processing as well
as transmission
■ Considers only joins
■ Exhaustive search
■ Compilation
■ Published papers provide solutions to handling
horizontal and vertical fragmentations but the
implemented prototype does not
R* Algorithm
Performing joins
■ Ship whole
➠ larger data transfer
➠ smaller number of messages
➠ better if relations are small
■ Fetch as needed
➠ number of messages = O(cardinality of external
relation)
➠ data transfer per message is minimal
➠ better if relations are large and the selectivity is
good
R* Algorithm –
Vertical Partitioning & Joins
1. Move outer relation tuples to the site of the inner
relation
(a) Retrieve outer tuples
(b) Send them to the inner relation site
(c) Join them as they arrive
Total Cost = cost(retrieving qualified outer tuples)
+ no. of outer tuples fetched ∗ cost(retrieving qualified inner tuples)
+ msg. cost ∗ (no. of outer tuples fetched ∗ avg. outer tuple size) / msg. size
R* Algorithm –
Vertical Partitioning & Joins
2. Move the inner relation to the site of the outer relation
➠ inner tuples cannot be joined as they arrive; they must be stored in a temporary relation
R* Algorithm –
Vertical Partitioning & Joins
3. Move both inner and outer relations to another
site
Total cost = cost(retrieving qualified outer tuples)
+ cost(retrieving qualified inner tuples)
+ cost(storing inner tuples in storage)
+ msg. cost ∗ (no. of outer tuples fetched ∗
avg. outer tuple size) / msg. size
+ msg. cost ∗ (no. of inner tuples fetched ∗
avg. inner tuple size) / msg. size
+ no. of outer tuples fetched ∗
cost(retrieving inner tuples from
temporary storage)
R* Algorithm –
Vertical Partitioning & Joins
4. Fetch inner tuples as needed
(a) Retrieve qualified tuples at outer relation site
(b) Send request containing join column value(s) for outer
tuples to inner relation site
(c) Retrieve matching inner tuples at inner relation site
(d) Send the matching inner tuples to outer relation site
(e) Join as they arrive
Total Cost = cost(retrieving qualified outer tuples)
+ msg. cost ∗ (no. of outer tuples fetched)
+ no. of outer tuples fetched ∗ (no. of
inner tuples fetched ∗ avg. inner tuple
size ∗ msg. cost / msg. size)
+ no. of outer tuples fetched ∗
cost(retrieving matching inner tuples
for one outer value)
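The cost formulas are plain arithmetic; a Python sketch of strategies 1 and 4 (the parameter names are ours) shows how an optimizer would compare them:

def cost_move_outer(c_outer, n_outer, c_inner, msg_cost, outer_size, msg_size):
    """Strategy 1: ship outer tuples to the inner relation's site."""
    return (c_outer
            + n_outer * c_inner
            + msg_cost * (n_outer * outer_size) / msg_size)

def cost_fetch_as_needed(c_outer, n_outer, n_inner, inner_size,
                         msg_cost, msg_size, c_inner_per_value):
    """Strategy 4: one request message per outer tuple, matching inner tuples back."""
    return (c_outer
            + msg_cost * n_outer
            + n_outer * (n_inner * inner_size * msg_cost / msg_size)
            + n_outer * c_inner_per_value)

# Pick the cheaper strategy for some made-up workload numbers
s1 = cost_move_outer(100, 50, 2, 1, 40, 400)
s4 = cost_fetch_as_needed(100, 50, 3, 40, 1, 400, 2)
print("ship outer" if s1 < s4 else "fetch as needed", s1, s4)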
SDD-1 Algorithm
■ Based on the Hill Climbing Algorithm
➠ Semijoins
➠ No replication
➠ No fragmentation
Hill Climbing Algorithm
Assume join is between three relations.
Step 1: Do initial processing
Step 2: Select initial feasible solution (ES0)
2.1 Determine the candidate result sites - sites
where a relation referenced in the query exist
2.2 Compute the cost of transferring all the other
referenced relations to each candidate site
2.3 ES0 = candidate site with minimum cost
Step 3: Determine candidate splits of ES0 into
{ES1, ES2}
3.1 ES1 consists of sending one of the relations
to the other relation's site
3.2 ES2 consists of sending the join of the
relations to the final result site
Hill Climbing Algorithm
Step 4: Replace ES0 by the cheaper of the split schedules {ES1, ES2}, if one improves on it
Step 5: Recursively apply steps 3–4 until no further improvement is found
Step 6: Check the final schedule for redundant transmissions and eliminate them
Hill Climbing Algorithm –
Example
What are the salaries of engineers who work on the
CAD/CAM project?
ΠSAL(PAY ⋈TITLE (EMP ⋈ENO (ASG ⋈PNO σPNAME=“CAD/CAM”(PROJ))))
Hill Climbing Algorithm –
Example
Step 1:
Selection on PROJ; the result has cardinality 1
[Assume: EMP (size 8) at Site 1, PAY (size 4) at Site 2, PROJ (size 1 after the selection) at Site 3, ASG (size 10) at Site 4; transfer cost is proportional to relation size.]
Hill Climbing Algorithm –
Example
Step 2: Initial feasible solution
Alternative 1: Result site is Site 1
Total cost = cost(PAY→Site 1) + cost(ASG→Site 1) + cost(PROJ→Site 1)
= 4 + 10 + 1 = 15
Alternative 2: Result site is Site 2
Total cost = 8 + 10 + 1 = 19
Alternative 3: Result site is Site 3
Total cost = 8 + 4 + 10 = 22
Alternative 4: Result site is Site 4
Total cost = 8 + 4 + 1 = 13
ES0: move EMP, PAY and PROJ to Site 4 (cost 13)
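Selecting the initial feasible solution is a minimum over candidate sites; a Python sketch using the sizes and placements assumed in this example:

SIZE = {"EMP": 8, "PAY": 4, "PROJ": 1, "ASG": 10}   # PROJ after selection
SITE = {"EMP": 1, "PAY": 2, "PROJ": 3, "ASG": 4}

def cost_to(site):
    # ship every relation not already stored at the candidate result site
    return sum(SIZE[r] for r in SIZE if SITE[r] != site)

es0 = min({1, 2, 3, 4}, key=cost_to)
print(es0, cost_to(es0))  # Site 4 with total cost 13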
Hill Climbing Algorithm –
Example
Step 3: Determine candidate splits
Alternative 1: {ES1, ES2, ES3} where
ES1: EMP → Site 2
ES2: (EMP ⋈ PAY) → Site 4
ES3: PROJ → Site 4
Alternative 2: {ES1, ES2, ES3} where
ES1: PAY → Site 1
ES2: (PAY ⋈ EMP) → Site 4
ES3: PROJ → Site 4
Hill Climbing Algorithm –
Example
Step 4: Determine costs of each split alternative
cost(Alternative 1) = cost(EMP→Site 2) + cost((EMP ⋈ PAY)→Site 4) + cost(PROJ→Site 4)
= 8 + 8 + 1 = 17
cost(Alternative 2) = cost(PAY→Site 1) + cost((PAY ⋈ EMP)→Site 4) + cost(PROJ→Site 4)
= 4 + 8 + 1 = 13
Decision: DO NOT SPLIT (neither alternative beats the cost of ES0, which is 13)
Step 5: ES0 is the “best”.
Step 6: No redundant transmissions.
Hill Climbing Algorithm
Problems :
❶ Greedy algorithm → determines an initial feasible solution and iteratively tries to improve it
❷ If there are local minima, it may not find the global minimum
❸ If the optimal schedule has a high initial cost, it won't find it, since it won't be chosen as the initial feasible solution
Example: a better schedule is
PROJ → Site 4
ASG' = (PROJ ⋈ ASG) → Site 1
(ASG' ⋈ EMP) → Site 2
Total cost = 1 + 2 + 2 = 5
SDD-1 Algorithm
Initialization
Step 1: In the execution strategy (call it ES),
include all the local processing
Step 2: Reflect the effects of local processing on
the database profile
Step 3: Construct a set of beneficial semijoin
operations (BS) as follows :
BS = Ø
For each semijoin SJi
BS ← BS ∪ SJi if cost(SJi ) < benefit(SJi)
SDD-1 Algorithm – Example
Consider the following query
SELECT R3.C
FROM R1, R2, R3
WHERE R1.A = R2.A
AND R2.B = R3.B
which has the following query graph and statistics:
[Query graph: R1 at Site 1 –A– R2 at Site 2 –B– R3 at Site 3]

relation   card   tuple size   relation size
R1         30     50           1500
R2         100    30           3000
R3         50     40           2000

attribute   SF⋉    size(Πattribute)
R1.A        0.3    36
R2.A        0.8    320
R2.B        1.0    400
R3.B        0.4    80
SDD-1 Algorithm – Example
■ Beneficial semijoins:
➠ SJ1 = R2 ⋉ R1, whose benefit is 2100 = (1 − 0.3) ∗ 3000 and cost is 36
➠ SJ2 = R2 ⋉ R3, whose benefit is 1800 = (1 − 0.4) ∗ 3000 and cost is 80
■ Nonbeneficial semijoins:
➠ SJ3 = R1 ⋉ R2, whose benefit is 300 = (1 − 0.8) ∗ 1500 and cost is 320
➠ SJ4 = R3 ⋉ R2, whose benefit is 0 and cost is 400
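The benefit/cost test is direct to code; a Python sketch over the example statistics reproduces these four numbers (the "<|" in the labels stands for the semijoin operator):

# size of the relation being reduced, selectivity of the reducing attribute,
# and size of the projection that must be shipped
SEMIJOINS = {
    "SJ1: R2 <| R1": (3000, 0.3, 36),
    "SJ2: R2 <| R3": (3000, 0.4, 80),
    "SJ3: R1 <| R2": (1500, 0.8, 320),
    "SJ4: R3 <| R2": (2000, 1.0, 400),
}

for name, (size_R, sf, ship) in SEMIJOINS.items():
    benefit = (1 - sf) * size_R   # bytes of R no longer shipped later
    beneficial = ship < benefit   # cost(SJ) < benefit(SJ)
    print(name, benefit, ship, "beneficial" if beneficial else "not beneficial")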
SDD-1 Algorithm
Iterative Process
Step 4: Remove the most beneficial SJi from BS
and append it to ES
Step 5: Modify the database profile accordingly
Step 6: Modify BS appropriately
➠ compute new benefit/cost values
SDD-1 Algorithm – Example
■ Iteration 1:
➠ Remove SJ1 from BS and add it to ES.
➠ Update statistics
size(R2) = 900 (= 3000 ∗ 0.3)
SF⋉(R2.A) ≈ 0.8 ∗ 0.3 = 0.24
■ Iteration 2:
➠ Two beneficial semijoins:
SJ2 = R2' ⋉ R3, whose benefit is 540 = (1 − 0.4) ∗ 900 and cost is 200
SJ3 = R1 ⋉ R2', whose benefit is 1140 = (1 − 0.24) ∗ 1500 and cost is 96
➠ Add SJ3 to ES
➠ Update statistics
size(R1) = 360 (= 1500 ∗ 0.24)
SF⋉(R1.A) ≈ 0.3 ∗ 0.24 = 0.072
SDD-1 Algorithm – Example
■ Iteration 3:
➠ No new beneficial semijoins.
➠ Remove remaining beneficial semijoin SJ2 from
BS and add it to ES.
➠ Update statistics
size(R2) = 360 (= 900*0.4)
Note: selectivity of R2 may also change, but not
important in this example.
SDD-1 Algorithm
Assembly Site Selection
Step 8: Find the site where the largest amount of data
resides and select it as the assembly site
Example:
Amount of data stored at sites:
Site 1: 360
Site 2: 360
Site 3: 2000
Therefore, Site 3 will be chosen as the assembly site.
SDD-1 Algorithm
Postprocessing
Step 9: For each Ri at the assembly site, find the semijoins of the type Ri ⋉ Rj where the total cost of ES without this semijoin is smaller than the cost with it, and remove the semijoin from ES.
Note : There might be indirect benefits.
➠ Example: No semijoins are removed.
Step 10: Permute the order of semijoins if doing so
would improve the total cost of ES.
➠ Example: Final strategy:
Send (R2 ⋉ R1) ⋉ R3 to Site 3
Send R1 ⋉ R2 to Site 3
Step 4 – Local Optimization
Input: Best global execution schedule
➠ Select the best access paths
➠ Use the centralized optimization techniques
Distributed Query Optimization
Problems
■ Cost model
➠ multiple query optimization
➠ heuristics to cut down on alternatives
■ Larger set of queries
➠ optimizers so far handle only select-project-join queries
➠ also need to handle complex queries (e.g., unions,
disjunctions, aggregations and sorting)
■ Optimization cost vs execution cost tradeoff
➠ heuristics to cut down on alternatives
➠ controllable search strategies
■ Optimization/reoptimization interval
➠ extent of changes in database profile before
reoptimization is necessary