Query Processing

This document outlines the key topics in distributed query processing, including query decomposition, localization, optimization, and adaptive processing. Distributed query optimization aims to minimize costs like I/O, CPU, and especially communication costs, and considers factors like network topology, statistics, and centralized vs distributed decision making. The goal is to efficiently process queries over distributed data sources.


Principles of Distributed Database

Systems

M. Tamer Özsu
Patrick Valduriez

© 2020, M.T. Özsu & P. Valduriez 1


Outline
• Introduction
• Distributed and parallel database design
• Distributed data control
• Distributed Query Processing
• Distributed Transaction Processing
• Data Replication
• Database Integration – Multidatabase Systems
• Parallel Database Systems
• Peer-to-Peer Data Management
• Big Data Processing
• NoSQL, NewSQL and Polystores
• Web Data Management
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/2
Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Introduction to QO
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

• Slides of the 3rd Edition of the textbook!

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/3


Query Processing in a DDBMS
High-level user query → query processor → low-level data manipulation commands for the distributed DBMS

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/4


Query Processing Components
• Query language that is used
➡ SQL: “intergalactic dataspeak”

• Query execution methodology


➡ The steps that one goes through in executing high-level (declarative) user
queries.

• Query optimization
➡ How do we determine the “best” execution plan?

• We assume a homogeneous D-DBMS

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/5


Selecting Alternatives
SELECT ENAME
FROM EMP,ASG
WHERE EMP.ENO = ASG.ENO
AND RESP = "Manager"

Strategy 1
ΠENAME(σRESP="Manager" ∧ EMP.ENO=ASG.ENO(EMP × ASG))
Strategy 2
ΠENAME(EMP ⋈ENO σRESP="Manager"(ASG))

Strategy 2 avoids Cartesian product, so may be “better”

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/6


What is the Problem?
EMP and ASG are horizontally fragmented and allocated as follows, with the result required at Site 5:
ASG1 = σENO≤"E3"(ASG) at Site 1, ASG2 = σENO>"E3"(ASG) at Site 2
EMP1 = σENO≤"E3"(EMP) at Site 3, EMP2 = σENO>"E3"(EMP) at Site 4

Strategy 1 (exploits the fragmentation):
Site 1: ASG1' = σRESP="Manager"(ASG1); Site 2: ASG2' = σRESP="Manager"(ASG2)
Transfer ASG1' to Site 3 and ASG2' to Site 4
Site 3: EMP1' = EMP1 ⋈ENO ASG1'; Site 4: EMP2' = EMP2 ⋈ENO ASG2'
Transfer EMP1' and EMP2' to Site 5; result = EMP1' ∪ EMP2'

Strategy 2 (ships all data to Site 5):
result = (EMP1 ∪ EMP2) ⋈ENO σRESP="Manager"(ASG1 ∪ ASG2)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/7


Cost of Alternatives
• Assume
➡ size(EMP) = 400, size(ASG) = 1000
➡ tuple access cost = 1 unit; tuple transfer cost = 10 units
• Strategy 1
➡ produce ASG': (10+10) × tuple access cost = 20
➡ transfer ASG' to the sites of EMP: (10+10) × tuple transfer cost = 200
➡ produce EMP': (10+10) × tuple access cost × 2 = 40
➡ transfer EMP' to result site: (10+10) × tuple transfer cost = 200
Total Cost 460
• Strategy 2
➡ transfer EMP to site 5: 400 × tuple transfer cost = 4,000
➡ transfer ASG to site 5: 1000 × tuple transfer cost = 10,000
➡ produce ASG': 1000 × tuple access cost = 1,000
➡ join EMP and ASG': 400 × 20 × tuple access cost = 8,000
Total Cost 23,000
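As a quick check of the arithmetic, the following small Python sketch recomputes both totals from the stated unit costs; the intermediate sizes (10 matching "Manager" tuples per ASG fragment, as implied by the (10+10) terms) are taken as given.

# Unit costs from the example
TUPLE_ACCESS = 1      # cost units per local tuple access
TUPLE_TRANSFER = 10   # cost units per tuple shipped between sites

# Each ASG fragment yields 10 "Manager" tuples (assumed from the example)
asg_matches = [10, 10]

# Strategy 1: select locally, ship reduced fragments, join at EMP sites, ship result
strategy1 = (sum(asg_matches) * TUPLE_ACCESS          # produce ASG'
             + sum(asg_matches) * TUPLE_TRANSFER      # transfer ASG' to EMP sites
             + sum(asg_matches) * TUPLE_ACCESS * 2    # produce EMP' at two sites
             + sum(asg_matches) * TUPLE_TRANSFER)     # transfer EMP' to result site
print("Strategy 1:", strategy1)                       # 460

# Strategy 2: ship whole relations to site 5, select and join there
size_emp, size_asg = 400, 1000
strategy2 = (size_emp * TUPLE_TRANSFER                # transfer EMP
             + size_asg * TUPLE_TRANSFER              # transfer ASG
             + size_asg * TUPLE_ACCESS                # produce ASG'
             + size_emp * 20 * TUPLE_ACCESS)          # join EMP with ASG'
print("Strategy 2:", strategy2)                       # 23000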

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/8


Query Optimization Objectives
• Minimize a cost function
I/O cost + CPU cost + communication cost
These might have different weights in different distributed environments
• Wide area networks
➡ communication cost may dominate or vary widely due to:
✦ low bandwidth
✦ low speed
✦ high protocol overhead
• Local area networks
➡ communication cost not that dominant
➡ total cost function should be considered
• Can also maximize throughput

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/9


Complexity of Relational
Operations
• Assume
➡ relations of cardinality n
➡ sequential scan

Operation                                          Complexity
Select; Project (without duplicate elimination)    O(n)
Project (with duplicate elimination); Group        O(n × log n)
Join; Semi-join; Division; Set Operators           O(n × log n)
Cartesian Product                                  O(n²)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/10


Query Optimization Issues –
Types Of Optimizers
• Exhaustive search
➡ Cost-based

➡ Optimal

➡ Combinatorial complexity in the number of relations

• Heuristics
➡ Not optimal

➡ Regroup common sub-expressions

➡ Perform selection, projection first

➡ Replace a join by a series of semijoins

➡ Reorder operations to reduce intermediate relation size

➡ Optimize individual operations

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/11


Query Optimization Issues –
Optimization Granularity
• Single query at a time

➡ Cannot use common intermediate results

• Multiple queries at a time

➡ Efficient if many similar queries

➡ Decision space is much larger

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/12


Query Optimization Issues –
Optimization Timing
• Static
➡ Compilation ⇒ optimize prior to the execution
➡ Difficult to estimate the size of the intermediate results ⇒ error propagation
➡Can amortize over many executions
➡R*
• Dynamic
➡Run time optimization
➡Exact information on the intermediate relation sizes
➡Have to reoptimize for multiple executions
➡Distributed INGRES
• Hybrid
➡Compile using a static algorithm
➡If the error in estimate sizes > threshold, reoptimize at run time
➡Mermaid

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/13


Query Optimization Issues –
Statistics
• Relation
➡Cardinality
➡Size of a tuple
➡Fraction of tuples participating in a join with another relation
• Attribute
➡Cardinality of domain
➡Actual number of distinct values
• Common assumptions
➡Independence between different attribute values
➡Uniform distribution of attribute values within their domain

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/14


Query Optimization Issues –
Decision Sites
• Centralized
➡ Single site determines the “best” schedule
➡ Simple
➡ Need knowledge about the entire distributed database
• Distributed
➡ Cooperation among sites to determine the schedule
➡ Need only local information
➡ Cost of cooperation
• Hybrid
➡ One site determines the global schedule
➡ Each site optimizes the local subqueries

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/15


Query Optimization Issues –
Network Topology
• Wide area networks (WAN) – point-to-point
➡ Characteristics
✦ Low bandwidth
✦ Low speed
✦ High protocol overhead
➡ Communication cost will dominate; ignore all other cost factors
➡ Global schedule to minimize communication cost
➡ Local schedules according to centralized query optimization
• Local area networks (LAN)
➡ Communication cost not that dominant
➡ Total cost function should be considered
➡ Broadcasting can be exploited (joins)
➡ Special algorithms exist for star networks
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/16
Distributed Query Processing
Methodology
Calculus Query on Distributed Relations

Query GLOBAL
Decomposition SCHEMA

Algebraic Query on Distributed


Relations
CONTROL
Data FRAGMENT
SITE Localization SCHEMA

Fragment Query

Global STATS ON
Optimization FRAGMENTS

Optimized Fragment Query


with Communication Operations

LOCAL Local LOCAL


Optimization SCHEMAS
SITES

Optimized Local Queries


Distributed DBMS © M. T. Özsu & P. Valduriez Ch.6/17
Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Introduction to query optimization
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/18


Query Decomposition
Input : Calculus query on global relations
• Normalization
➡ manipulate query quantifiers and qualification
• Analysis
➡ detect and reject “incorrect” queries
➡ possible for only a subset of relational calculus
• Simplification
➡ eliminate redundant predicates
• Restructuring
➡ calculus query → algebraic query
➡ more than one translation is possible
➡ use transformation rules

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/19


Normalization
• Lexical and syntactic analysis
➡ check validity (similar to compilers)
➡ check for attributes and relations
➡ type checking on the qualification
• Put into normal form
➡ Conjunctive normal form
(p11 ∨ p12 ∨ … ∨ p1n) ∧ … ∧ (pm1 ∨ pm2 ∨ … ∨ pmn)
➡ Disjunctive normal form
(p11 ∧ p12 ∧ … ∧ p1n) ∨ … ∨ (pm1 ∧ pm2 ∧ … ∧ pmn)
➡ OR's mapped into union
➡ AND's mapped into join or selection

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/20


Normalization - example

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/21


Analysis
• Refute incorrect queries
• Type incorrect
➡ If any of its attribute or relation names are not defined in the global
schema
➡ If operations are applied to attributes of the wrong type
• Semantically incorrect
➡ Components do not contribute in any way to the generation of the result
➡ Only a subset of relational calculus queries can be tested for correctness
➡ Those that do not contain disjunction and negation
➡ To detect
✦ connection graph (query graph)
✦ join graph

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/22


Analysis – Example
SELECT ENAME,RESP
FROM EMP, ASG, PROJ
WHERE EMP.ENO = ASG.ENO
AND ASG.PNO = PROJ.PNO
AND PNAME = "CAD/CAM"
AND DUR ≥ 36
AND TITLE = "Programmer"

Query graph: nodes EMP, ASG, PROJ and RESULT; join edges EMP.ENO=ASG.ENO and ASG.PNO=PROJ.PNO; selection edges DUR≥36 on ASG, TITLE="Programmer" on EMP and PNAME="CAD/CAM" on PROJ; projection edges ENAME (EMP to RESULT) and RESP (ASG to RESULT).

Join graph: nodes EMP, ASG, PROJ; edges EMP.ENO=ASG.ENO and ASG.PNO=PROJ.PNO.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/23


Analysis
If the query graph is not connected, the query may be
wrong or use Cartesian product
SELECT ENAME,RESP
FROM EMP, ASG, PROJ
WHERE EMP.ENO = ASG.ENO
AND PNAME = "CAD/CAM"
AND DUR > 36
AND TITLE = "Programmer"

Query graph (disconnected): EMP and ASG are linked by EMP.ENO=ASG.ENO and to RESULT by the projections ENAME and RESP; PROJ carries only the selection PNAME="CAD/CAM" and has no join edge to EMP or ASG.

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/24


Simplification
• Why simplify?
➡ Remember the example
• How? Use transformation rules
➡ Elimination of redundancy
idempotency rules

p1 ∧ ¬p1 ⇔ false
p1 ∧ (p1 ∨ p2) ⇔ p1
p1 ∨ false ⇔ p1

➡ Application of transitivity
➡ Use of integrity rules

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/25


Simplification – Example
SELECT TITLE
FROM EMP
WHERE EMP.ENAME = "J. Doe"
OR (NOT(EMP.TITLE = "Programmer")
AND (EMP.TITLE = "Programmer"
OR EMP.TITLE = "Elect. Eng.")
AND NOT(EMP.TITLE = "Elect. Eng."))


SELECT TITLE
FROM EMP
WHERE EMP.ENAME = "J. Doe"

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/26


Restructuring
• Convert relational calculus to relational algebra
• Make use of query trees
• Example
Find the names of employees other than J. Doe who worked on the CAD/CAM project for either 1 or 2 years.

SELECT ENAME
FROM EMP, ASG, PROJ
WHERE EMP.ENO = ASG.ENO
AND ASG.PNO = PROJ.PNO
AND ENAME ≠ "J. Doe"
AND PNAME = "CAD/CAM"
AND (DUR = 12 OR DUR = 24)

Query tree:
ΠENAME(σDUR=12 ∨ DUR=24(σPNAME="CAD/CAM"(σENAME≠"J. Doe"(PROJ ⋈PNO (ASG ⋈ENO EMP)))))
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/27
Restructuring –Transformation
Rules
• Commutativity of binary operations
➡ R × S ⇔ S × R
➡ R ⋈ S ⇔ S ⋈ R
➡ R ∪ S ⇔ S ∪ R
• Associativity of binary operations
➡ (R × S) × T ⇔ R × (S × T)
➡ (R ⋈ S) ⋈ T ⇔ R ⋈ (S ⋈ T)
• Idempotence of unary operations
➡ ΠA'(ΠA''(R)) ⇔ ΠA'(R)
➡ σp1(A1)(σp2(A2)(R)) ⇔ σp1(A1) ∧ p2(A2)(R)
where R[A] and A' ⊆ A, A'' ⊆ A and A' ⊆ A''
• Commuting selection with projection

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/28


Restructuring – Transformation
Rules
• Commuting selection with binary operations
➡ σp(Ai)(R × S) ⇔ σp(Ai)(R) × S
➡ σp(Ai)(R ⋈(Aj,Bk) S) ⇔ σp(Ai)(R) ⋈(Aj,Bk) S
➡ σp(Ai)(R ∪ T) ⇔ σp(Ai)(R) ∪ σp(Ai)(T)
where Ai belongs to R and T
• Commuting projection with binary operations
➡ ΠC(R × S) ⇔ ΠA'(R) × ΠB'(S)
➡ ΠC(R ⋈(Aj,Bk) S) ⇔ ΠA'(R) ⋈(Aj,Bk) ΠB'(S)
➡ ΠC(R ∪ S) ⇔ ΠC(R) ∪ ΠC(S)
where R[A] and S[B]; C = A' ∪ B' where A' ⊆ A, B' ⊆ B

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/29


Example
Recall the previous example: Find the names of employees other than J. Doe who worked on the CAD/CAM project for either one or two years.

SELECT ENAME
FROM PROJ, ASG, EMP
WHERE ASG.ENO=EMP.ENO
AND ASG.PNO=PROJ.PNO
AND ENAME ≠ "J. Doe"
AND PROJ.PNAME="CAD/CAM"
AND (DUR=12 OR DUR=24)

Query tree:
ΠENAME(σDUR=12 ∨ DUR=24(σPNAME="CAD/CAM"(σENAME≠"J. Doe"(PROJ ⋈PNO (ASG ⋈ENO EMP)))))
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/30
Equivalent Query
ΠENAME
 σPNAME="CAD/CAM" ∧ (DUR=12 ∨ DUR=24) ∧ ENAME≠"J. Doe"
  ⋈PNO,ENO
   EMP   PROJ   ASG


Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/31
Restructuring
ΠENAME
 ⋈PNO
  ΠPNO(σPNAME="CAD/CAM"(PROJ))
  ΠPNO,ENAME
   ⋈ENO
    ΠPNO,ENO(σDUR=12 ∨ DUR=24(ASG))
    ΠENO,ENAME(σENAME≠"J. Doe"(EMP))
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/32
Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Introduction to query optimization
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/33


Data Localization
Input: Algebraic query on distributed relations
• Determine which fragments are involved
• Localization program
➡ substitute for each global query its materialization program
➡ optimize

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/34


Example
Recall the previous example: Find the names of employees other than J. Doe who worked on the CAD/CAM project for either one or two years.

SELECT ENAME
FROM PROJ, ASG, EMP
WHERE ASG.ENO=EMP.ENO
AND ASG.PNO=PROJ.PNO
AND ENAME ≠ "J. Doe"
AND PROJ.PNAME="CAD/CAM"
AND (DUR=12 OR DUR=24)

Query tree:
ΠENAME(σDUR=12 ∨ DUR=24(σPNAME="CAD/CAM"(σENAME≠"J. Doe"(PROJ ⋈PNO (ASG ⋈ENO EMP)))))
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/35
Example
Assume
➡ EMP is fragmented into EMP1, EMP2, EMP3 as follows:
✦ EMP1 = σENO≤"E3"(EMP)
✦ EMP2 = σ"E3"<ENO≤"E6"(EMP)
✦ EMP3 = σENO>"E6"(EMP)
➡ ASG is fragmented into ASG1 and ASG2 as follows:
✦ ASG1 = σENO≤"E3"(ASG)
✦ ASG2 = σENO>"E3"(ASG)
➡ Conditions pi are defined on the common join key
Replace EMP by (EMP1 ∪ EMP2 ∪ EMP3) and ASG by (ASG1 ∪ ASG2) in any query.

Localized query tree:
ΠENAME(σDUR=12 ∨ DUR=24(σPNAME="CAD/CAM"(σENAME≠"J. Doe"(PROJ ⋈PNO ((ASG1 ∪ ASG2) ⋈ENO (EMP1 ∪ EMP2 ∪ EMP3))))))
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/36
Provides Parallelism

Distributing the join over the unions yields one subquery per fragment pair, which can be evaluated in parallel:
(EMP1 ⋈ENO ASG1) ∪ (EMP2 ⋈ENO ASG1) ∪ (EMP3 ⋈ENO ASG1) ∪ (EMP1 ⋈ENO ASG2) ∪ …
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/37


Eliminates Unnecessary Work

(EMP1 ⋈ENO ASG1) ∪ (EMP2 ⋈ENO ASG2) ∪ (EMP3 ⋈ENO ASG2)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/38


Reduction for PHF
• Reduction with selection
➡ Relation R and FR = {R1, R2, …, Rw} where Rj = σpj(R)
σpi(Rj) = ∅ if ∀x in R: ¬(pi(x) ∧ pj(x))
➡ Example
SELECT *
FROM EMP
WHERE ENO="E5"

σENO="E5"(EMP1 ∪ EMP2 ∪ EMP3) reduces to σENO="E5"(EMP2)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/39
Reduction for PHF
• Reduction with join
➡ Possible if fragmentation is done on join attribute
➡ Distribute join over union
(R1 ∪ R2) ⋈ S ⇔ (R1 ⋈ S) ∪ (R2 ⋈ S)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/40


Reduction for PHF
• Reduction with join
➡ Possible if fragmentation is done on join attribute
➡ Distribute join over union
(R1 ∪ R2) ⋈ S ⇔ (R1 ⋈ S) ∪ (R2 ⋈ S)
➡ Given Ri = σpi(R) and Rj = σpj(R)

Ri ⋈ Rj = ∅ if ∀x in Ri, ∀y in Rj: ¬(pi(x) ∧ pj(y))
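To make the rule above concrete, here is a small, hypothetical Python sketch: each horizontal fragment is represented by the (low, high] range of its defining predicate on the join key, and only fragment pairs whose predicates can be satisfied simultaneously are kept. Fragment names and ranges mirror the running example, but the representation is an illustration, not the book's algorithm.

# Each fragment is described by the (low, high] range of its defining
# predicate on the join attribute ENO; None means unbounded.
emp_frags = {"EMP1": (None, "E3"), "EMP2": ("E3", "E6"), "EMP3": ("E6", None)}
asg_frags = {"ASG1": (None, "E3"), "ASG2": ("E3", None)}

def ranges_overlap(r, s):
    """True if some ENO value can satisfy both range predicates."""
    (lo1, hi1), (lo2, hi2) = r, s
    lo = max(l for l in (lo1, lo2) if l is not None) if (lo1 or lo2) else None
    hi = min(h for h in (hi1, hi2) if h is not None) if (hi1 or hi2) else None
    return lo is None or hi is None or lo < hi

# Keep only the joins EMPi ⋈ ASGj whose defining predicates are compatible
useful_joins = [(e, a) for e, er in emp_frags.items()
                        for a, ar in asg_frags.items()
                        if ranges_overlap(er, ar)]
print(useful_joins)   # [('EMP1', 'ASG1'), ('EMP2', 'ASG2'), ('EMP3', 'ASG2')]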

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/41


Reduction for PHF
• Assume EMP is fragmented as before and
➡ ASG1 = σENO≤"E3"(ASG)
➡ ASG2 = σENO>"E3"(ASG)
• Consider the query
SELECT *
FROM EMP, ASG
WHERE EMP.ENO=ASG.ENO
• Generic localized query: (EMP1 ∪ EMP2 ∪ EMP3) ⋈ENO (ASG1 ∪ ASG2)
• Distribute join over unions
• Apply the reduction rule:
(EMP1 ⋈ENO ASG1) ∪ (EMP2 ⋈ENO ASG2) ∪ (EMP3 ⋈ENO ASG2)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/42


Reduction for VF
• Find useless (not empty) intermediate relations
Relation R defined over attributes A = {A1, ..., An} vertically fragmented as Ri = ΠA'(R) where A' ⊆ A:
ΠD,K(Ri) is useless if the set of projection attributes D is not in A'
Example: EMP1 = ΠENO,ENAME(EMP); EMP2 = ΠENO,TITLE(EMP)

SELECT ENAME
FROM EMP

ΠENAME(EMP1 ⋈ENO EMP2) reduces to ΠENAME(EMP1)


Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/43
Reduction for DHF
• Rule :
➡Distribute joins over unions
➡Apply the join reduction for horizontal fragmentation
• Example
ASG1: ASG ⋉ENO EMP1
ASG2: ASG ⋉ENO EMP2
EMP1: TITLE=“Programmer” (EMP)
EMP2: TITLE≠“Programmer” (EMP)
• Query
SELECT *
FROM EMP, ASG
WHERE ASG.ENO = EMP.ENO
AND EMP.TITLE = "Mech. Eng."

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/44


Reduction for DHF
Generic query:
(ASG1 ∪ ASG2) ⋈ENO σTITLE="Mech. Eng."(EMP1 ∪ EMP2)

Selections first:
(ASG1 ∪ ASG2) ⋈ENO σTITLE="Mech. Eng."(EMP2)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/45
Reduction for DHF

Joins over unions:
(ASG1 ⋈ENO σTITLE="Mech. Eng."(EMP2)) ∪ (ASG2 ⋈ENO σTITLE="Mech. Eng."(EMP2))

Elimination of the empty intermediate relation (left sub-tree):
ASG2 ⋈ENO σTITLE="Mech. Eng."(EMP2)
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/46
Reduction for Hybrid
Fragmentation
• Combine the rules already specified:
➡ Remove empty relations generated by contradicting selections on
horizontal fragments;
➡ Remove useless relations generated by projections on vertical fragments;
➡ Distribute joins over unions in order to isolate and remove useless joins.

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/47


Reduction for HF
Example
Consider the following hybrid fragmentation:
EMP1 = σENO≤"E4"(ΠENO,ENAME(EMP))
EMP2 = σENO>"E4"(ΠENO,ENAME(EMP))
EMP3 = ΠENO,TITLE(EMP)
and the query

SELECT ENAME
FROM EMP
WHERE ENO="E5"

ΠENAME(σENO="E5"((EMP1 ∪ EMP2) ⋈ENO EMP3)) reduces to ΠENAME(σENO="E5"(EMP2))

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/48


Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Introduction to QO
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/49


Global Query Optimization
Input: Fragment query
• Find the best (not necessarily optimal) global schedule
➡ Minimize a cost function
➡ Distributed join processing
✦ Bushy vs. linear trees
✦ Which relation to ship where?
✦ Ship-whole vs ship-as-needed
➡ Decide on the use of semijoins
✦ Semijoin saves on communication at the expense of more local processing.
➡ Join methods
✦ nested loop vs ordered joins (merge join or hash join)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/50


Search Space
• Search space characterized by alternative execution plans
• Focus on join trees
• For N relations, there are O(N!) equivalent join trees that can be obtained by applying commutativity and associativity rules

SELECT ENAME,RESP
FROM EMP, ASG, PROJ
WHERE EMP.ENO=ASG.ENO
AND ASG.PNO=PROJ.PNO

Three equivalent join trees for this query:
(EMP ⋈ENO ASG) ⋈PNO PROJ
(PROJ ⋈PNO ASG) ⋈ENO EMP
(PROJ × EMP) ⋈PNO,ENO ASG
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/51
Cost-Based Optimization
• Solution space
➡ The set of equivalent algebra expressions (query trees).
• Cost function (in terms of time)
➡ I/O cost + CPU cost + communication cost
➡ These might have different weights in different distributed
environments (LAN vs WAN).
➡ Can also maximize throughput
• Search algorithm
➡ How do we move inside the solution space?
➡ Exhaustive search, heuristic algorithms (iterative improvement,
simulated annealing, genetic,…)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/52


Query Optimization Process

Input Query

Search Space Transformation


Generation Rules

Equivalent QEP

Search Cost Model


Strategy

Best QEP

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/53


Search Space
• Restrict by means of heuristics
➡ Perform unary operations before binary operations
➡ …
• Restrict the shape of the join tree
➡ Consider only linear trees, ignore bushy ones

Linear join tree: ((R1 ⋈ R2) ⋈ R3) ⋈ R4
Bushy join tree: (R1 ⋈ R2) ⋈ (R3 ⋈ R4)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/54


Search Strategy
• How to “move” in the search space
• Deterministic
➡ Start from base relations and build plans by adding one relation at each step
➡ Dynamic programming: breadth-first
➡ Greedy: depth-first
• Randomized
➡ Search for optimalities around a particular starting point
➡ Trade optimization time for execution time
➡ Better when > 10 relations
➡ Simulated annealing
➡ Iterative improvement
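A minimal Python sketch of the deterministic, breadth-first (dynamic programming) strategy restricted to left-deep (linear) join trees; the cost model (sum of intermediate cardinalities) and the statistics below are invented purely for illustration.

from itertools import combinations

# Toy statistics (assumed, not from the textbook): base cardinalities and
# join selectivity factors; missing pairs imply a Cartesian product.
card = {"EMP": 400, "ASG": 1000, "PROJ": 200}
sf = {frozenset(("EMP", "ASG")): 1 / 400, frozenset(("ASG", "PROJ")): 1 / 200}

def join_card(inter_card, inter_rels, right):
    """Estimated cardinality of joining an intermediate result with `right`."""
    s = next((sf[frozenset((l, right))] for l in inter_rels
              if frozenset((l, right)) in sf), 1.0)
    return inter_card * card[right] * s

def best_linear_plan(relations):
    """Breadth-first dynamic programming over subsets, left-deep trees only."""
    best = {frozenset((r,)): (0.0, card[r], [r]) for r in relations}
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            options = []
            for last in subset:                       # relation joined last
                cost, crd, order = best[subset - {last}]
                out = join_card(crd, subset - {last}, last)
                options.append((cost + out, out, order + [last]))
            best[subset] = min(options)               # cheapest plan for this subset
    return best[frozenset(relations)]

cost, out_card, order = best_linear_plan(["EMP", "ASG", "PROJ"])
print(order, cost)   # e.g. ['ASG', 'EMP', 'PROJ'] 2000.0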

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/55


Search Strategies
• Deterministic: build plans bottom-up by adding one relation at a time
R1 ⋈ R2, then (R1 ⋈ R2) ⋈ R3, then ((R1 ⋈ R2) ⋈ R3) ⋈ R4

• Randomized: move between complete plans
(R1 ⋈ R2) ⋈ R3 → (R1 ⋈ R3) ⋈ R2

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/56


Cost Functions
• Total Time (or Total Cost)
➡ Reduce each cost (in terms of time) component individually
➡ Do as little of each cost component as possible
➡ Optimizes the utilization of the resources ⇒ increases system throughput

• Response Time
➡ Do as many things as possible in parallel
➡ May increase total time because of increased total activity

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/57


Total Cost
Summation of all cost factors

Total cost = CPU cost + I/O cost + communication cost

CPU cost = unit instruction cost * no.of instructions

I/O cost = unit disk I/O cost * no. of disk I/Os

communication cost = message initiation + transmission

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/58


Total Cost Factors
• Wide area network
➡Message initiation and transmission costs high
➡Local processing cost is low (fast mainframes or minicomputers)
➡Ratio of communication to I/O costs = 20:1
• Local area networks
➡Communication and local processing costs are more or less equal
➡Ratio = 1:1.6

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/59


Response Time
Elapsed time between the initiation and the completion of a query

Response time = CPU time + I/O time + communication time


CPU time = unit instruction time * no. of sequential instructions
I/O time = unit I/O time * no. of sequential I/Os
communication time = unit msg initiation time * no. of sequential msg
+ unit transmission time * no. of sequential bytes

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/60


Example
Site 1 sends x units of data to Site 3; Site 2 sends y units of data to Site 3.

Assume that only the communication cost is considered.

Total time = 2 × message initialization time + unit transmission time × (x + y)
Response time = max{time to send x from 1 to 3, time to send y from 2 to 3}
where
time to send x from 1 to 3 = message initialization time + unit transmission time × x
time to send y from 2 to 3 = message initialization time + unit transmission time × y
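A small sketch with made-up unit times and data volumes shows how the two measures differ for this example:

# Assumed unit times and data volumes (illustrative only)
msg_init = 5.0      # time to initiate one message
unit_tx = 0.1       # time to transmit one data unit
x, y = 1000, 300    # units sent from Site 1 and Site 2 to Site 3

total_time = 2 * msg_init + unit_tx * (x + y)
send_x = msg_init + unit_tx * x
send_y = msg_init + unit_tx * y
response_time = max(send_x, send_y)   # the two transfers proceed in parallel

print(total_time)      # 140.0
print(response_time)   # 105.0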
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/61
Optimization Statistics
• Primary cost factor: size of intermediate relations
➡ Need to estimate their sizes
• Making them precise ⇒ more costly to maintain
• Simplifying assumption: uniform distribution of attribute values in a
relation

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/62


Statistics
• For each relation R[A1, A2, …, An] fragmented as R1, …, Rr
➡ length of each attribute: length(Ai)
➡ the number of distinct values for each attribute in each fragment: card(ΠAi(Rj))
➡ maximum and minimum values in the domain of each attribute: min(Ai), max(Ai)
➡ the cardinalities of each domain: card(dom[Ai])
• The cardinalities of each fragment: card(Rj)
• Selectivity factor of each operation for relations
➡ For joins
SF⋈(R,S) = card(R ⋈ S) / (card(R) × card(S))
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/63


Intermediate Relation Sizes
Selection
size(R) = card(R) × length(R)
card(σF(R)) = SF(F) × card(R)
where

SF(A = value) = 1 / card(ΠA(R))
SF(A > value) = (max(A) − value) / (max(A) − min(A))
SF(A < value) = (value − min(A)) / (max(A) − min(A))
SF(p(Ai) ∧ p(Aj)) = SF(p(Ai)) × SF(p(Aj))
SF(p(Ai) ∨ p(Aj)) = SF(p(Ai)) + SF(p(Aj)) − (SF(p(Ai)) × SF(p(Aj)))
SF(A ∈ {values}) = SF(A = value) × card({values})

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/64


Intermediate Relation Sizes
Projection
card(ΠA(R)) = card(R)
Cartesian Product
card(R × S) = card(R) * card(S)
Union
upper bound: card(R  S) = card(R) + card(S)
lower bound: card(R  S) = max{card(R), card(S)}
Set Difference
upper bound: card(R–S) = card(R)
lower bound: 0

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/65


Intermediate Relation Size
Join
Special case: A is a key of R and B is a foreign key of S
card(R ⋈A=B S) = card(S)
More general:
card(R ⋈ S) = SF⋈ * card(R) × card(S)
Semijoin
card(R ⋉A S) = SF⋉(S.A) × card(R)
where
SF⋉(R ⋉A S) = SF⋉(S.A) = card(ΠA(S)) / card(dom[A])
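A short sketch of the join and semijoin estimates, again with assumed statistics:

# Assumed statistics (illustrative)
card_R, card_S = 400, 1000
distinct_S_A = 300          # card(ΠA(S))
dom_A = 500                 # card(dom[A])
sf_join = 1 / 400           # SF⋈, e.g. when A is a key of R

card_join_key_fk = card_S                           # A key of R, B foreign key of S
card_join_general = sf_join * card_R * card_S       # general case
card_semijoin = (distinct_S_A / dom_A) * card_R     # card(R ⋉A S)

print(card_join_key_fk, card_join_general, card_semijoin)   # 1000 1000.0 240.0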

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/66


Histograms for Selectivity
Estimation
• For skewed data, the uniform distribution assumption of attribute
values yields inaccurate estimations
• Use a histogram for each skewed attribute A
➡ Histogram = set of buckets
✦ Each bucket describes a range of values of A, with its average frequency f
(number of tuples with A in that range) and number of distinct values d
✦ Buckets can be adjusted to different ranges
• Examples
➡ Equality predicate
✦ With (value in Rangei), we have: SFσ(A = value) = 1/di
➡ Range predicate
✦ Requires identifying relevant buckets and summing up their frequencies

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/67


Histogram Example

For ASG.DUR = 18: SF = 1/12, so the cardinality of the selection is 50/12 ≈ 4 tuples.

For ASG.DUR ≤ 18: min(range3) = 12 and max(range3) = 24, so the cardinality of the selection is 100 + 75 + ((18 − 12)/(24 − 12)) × 50 = 200 tuples.
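The bucket arithmetic above can be mirrored in a few lines of Python. Only the third bucket's bounds ((12, 24], frequency 50, 12 distinct values) are given by the example; the first two buckets' bounds are assumed.

# Assumed histogram on ASG.DUR: (low, high, frequency, distinct_values) per bucket
buckets = [(0, 6, 100, 6), (6, 12, 75, 6), (12, 24, 50, 12)]

def card_eq(value):
    """Estimated cardinality of A = value."""
    for lo, hi, f, d in buckets:
        if lo < value <= hi:
            return f / d                        # SF = 1/d within the bucket
    return 0.0

def card_leq(value):
    """Estimated cardinality of A <= value: sum full buckets, interpolate the last."""
    total = 0.0
    for lo, hi, f, d in buckets:
        if hi <= value:
            total += f                          # bucket entirely below the bound
        elif lo < value:
            total += f * (value - lo) / (hi - lo)   # partial bucket, linear interpolation
    return total

print(card_eq(18))    # 50/12 ≈ 4.17
print(card_leq(18))   # 100 + 75 + 25 = 200.0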
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/68
Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Introduction to QO
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/69


Centralized Query Optimization
• Dynamic (Ingres project at UCB)
➡ Interpretive
• Static (System R project at IBM)
➡ Exhaustive search
• Hybrid (Volcano project at OGI)
➡ Choose node within plan

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/70


Dynamic Algorithm
• Decompose each multi-variable query into a sequence of mono-variable queries with a common variable
• Process each by a one-variable query processor
➡ Choose an initial execution plan (heuristics)
➡ Order the rest by considering intermediate relation sizes

No statistical information is maintained

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/71


Dynamic Algorithm–
Decomposition
• Replace an n variable query q by a series of queries
q1 → q2 → … → qn
where qi uses the result of qi-1.
• Detachment
➡ Query q decomposed into q'  q" where q' and q" have a common
variable which is the result of q'
• Tuple substitution
➡ Replace the value of each tuple with actual values and simplify the query

q(V1, V2, …, Vn) → { q'(t1, V2, …, Vn), t1 ∈ R1 }

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/72


Detachment
q: SELECT V2.A2,V3.A3, …,Vn.An
FROM R1 V1, …,Rn Vn
WHERE P1(V1.A1’)AND P2(V1.A1,V2.A2,…, Vn.An)


q': SELECT V1.A1 INTO R1'
FROM R1 V1
WHERE P1(V1.A1)

q": SELECT V2.A2, …,Vn.An


FROM R1' V1, R2 V2, …,Rn Vn
WHERE P2(V1.A1, V2.A2, …,Vn.An)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/73


Detachment Example
Names of employees working on CAD/CAM project
q1: SELECT EMP.ENAME
FROM EMP, ASG, PROJ
WHERE EMP.ENO=ASG.ENO
AND ASG.PNO=PROJ.PNO
AND PROJ.PNAME="CAD/CAM"

q11: SELECT PROJ.PNO INTO JVAR
FROM PROJ
WHERE PROJ.PNAME="CAD/CAM"

q': SELECT EMP.ENAME


FROM EMP,ASG,JVAR
WHERE EMP.ENO=ASG.ENO
AND ASG.PNO=JVAR.PNO

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/74


Detachment Example (cont’d)
q': SELECT EMP.ENAME
FROM EMP,ASG,JVAR
WHERE EMP.ENO=ASG.ENO
AND ASG.PNO=JVAR.PNO


q12: SELECT ASG.ENO INTO GVAR
FROM ASG,JVAR
WHERE ASG.PNO=JVAR.PNO

q13: SELECT EMP.ENAME


FROM EMP,GVAR
WHERE EMP.ENO=GVAR.ENO

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/75


Tuple Substitution
q11 is a mono-variable query
q12 and q13 are subject to tuple substitution
Assume GVAR has two tuples only: 〈 E1 〉 and 〈 E2 〉
Then q13 becomes
q131: SELECT EMP.ENAME
FROM EMP
WHERE EMP.ENO="E1"

q132: SELECT EMP.ENAME


FROM EMP
WHERE EMP.ENO="E2"

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/76


Static Algorithm
• Simple (i.e., mono-relation) queries are executed according to the best access path
• Execute joins
➡ Determine the possible ordering of joins
➡ Determine the cost of each ordering
➡ Choose the join ordering with minimal cost

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/77


Static Algorithm
For joins, two alternative algorithms :
• Nested loops
for each tuple of external relation (cardinality n1)
for each tuple of internal relation (cardinality n2)
join two tuples if the join predicate is true
end
end
➡ Complexity: n1* n2
• Merge join
sort relations
merge relations
➡ Complexity: n1+ n2 if relations are previously sorted and equijoin
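A compact Python sketch of the two join methods, on lists of dictionaries standing in for relations (the attribute names are just examples):

def nested_loop_join(outer, inner, attr):
    """O(n1 * n2): compare every outer tuple with every inner tuple."""
    return [{**r, **s} for r in outer for s in inner if r[attr] == s[attr]]

def merge_join(r, s, attr):
    """O(n1 + n2) merge phase, assuming both inputs are sorted on attr (equijoin)."""
    out, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        if r[i][attr] < s[j][attr]:
            i += 1
        elif r[i][attr] > s[j][attr]:
            j += 1
        else:
            k = j                      # emit all pairs sharing this join value
            while k < len(s) and s[k][attr] == r[i][attr]:
                out.append({**r[i], **s[k]})
                k += 1
            i += 1
    return out

emp = [{"ENO": "E1", "ENAME": "J. Doe"}, {"ENO": "E2", "ENAME": "M. Smith"}]
asg = [{"ENO": "E1", "PNO": "P1"}, {"ENO": "E2", "PNO": "P3"}]
print(nested_loop_join(emp, asg, "ENO") == merge_join(emp, asg, "ENO"))   # True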

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/78


Static Algorithm – Example
Names of employees working on the CAD/CAM project
Assume
➡ EMP has an index on ENO,
➡ ASG has an index on PNO,
➡ PROJ has an index on PNO and an index on PNAME

Join graph: EMP joins ASG on ENO; ASG joins PROJ on PNO

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/79


Example (cont’d)
• Choose the best access paths to each relation
➡ EMP: sequential scan (no selection on EMP)
➡ ASG: sequential scan (no selection on ASG)
➡ PROJ: index on PNAME (there is a selection on PROJ based on PNAME)
• Determine the best join ordering
➡ EMP ⋈ ASG ⋈ PROJ
➡ ASG ⋈ PROJ ⋈ EMP
➡ PROJ ⋈ ASG ⋈ EMP
➡ ASG ⋈ EMP ⋈ PROJ
➡ EMP × PROJ ⋈ ASG
➡ PROJ × EMP ⋈ ASG
➡ Select the best ordering based on the join costs evaluated according to the two methods

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/80


Static Algorithm
Alternatives

Search tree:
Level 0: EMP, ASG, PROJ
Level 1: EMP ⋈ ASG (pruned), EMP × PROJ (pruned), ASG ⋈ EMP, ASG ⋈ PROJ (pruned), PROJ ⋈ ASG, PROJ × EMP (pruned)
Level 2: (ASG ⋈ EMP) ⋈ PROJ, (PROJ ⋈ ASG) ⋈ EMP

Best total join order is one of


((ASG ⋈ EMP) ⋈ PROJ)
((PROJ ⋈ ASG) ⋈ EMP)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/81


Static Algorithm
• ((PROJ ⋈ ASG) ⋈ EMP) has a useful index on the select attribute and
direct access to the join attributes of ASG and EMP
• Therefore, choose it with the following access methods:
➡ select PROJ using index on PNAME
➡ then join with ASG using index on PNO
➡ then join with EMP using index on ENO

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/82


Hybrid optimization
• In general, static optimization is more efficient than dynamic
optimization
➡ Adopted by all commercial DBMS
• But even with a sophisticated cost model (with histograms), accurate
cost prediction is difficult
• Example
➡ Consider a parametric query with predicate
WHERE R.A = $a /* $a is a parameter */
➡ The only possible assumption at compile time is uniform distribution of
values
• Solution: Hybrid optimization
➡ Choose-plan done at runtime, based on the actual parameter binding

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/83


Hybrid Optimization Example

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/84


Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/85


Join Ordering in Fragment
Queries
• Ordering joins
➡ Distributed INGRES
➡ System R*
➡ Two-step
• Semijoin ordering
➡ SDD-1

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/86


Join Ordering
• Consider two relations only

➡ Ship the smaller relation to the site of the larger one:
R → S if size(R) < size(S); S → R if size(R) > size(S)

• Multiple relations more difficult because too many alternatives.


➡ Compute the cost of all alternatives and select the best one.
✦ Necessary to compute the size of intermediate relations which is difficult.
➡ Use heuristics

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/87


Join Ordering – Example

Consider PROJ ⋈PNO ASG ⋈ENO EMP, with EMP stored at Site 1, ASG at Site 2, and PROJ at Site 3.

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/88


Join Ordering – Example
Execution alternatives:
1. EMP → Site 2; Site 2 computes EMP' = EMP ⋈ ASG; EMP' → Site 3; Site 3 computes EMP' ⋈ PROJ
2. ASG → Site 1; Site 1 computes EMP' = EMP ⋈ ASG; EMP' → Site 3; Site 3 computes EMP' ⋈ PROJ
3. ASG → Site 3; Site 3 computes ASG' = ASG ⋈ PROJ; ASG' → Site 1; Site 1 computes ASG' ⋈ EMP
4. PROJ → Site 2; Site 2 computes PROJ' = PROJ ⋈ ASG; PROJ' → Site 1; Site 1 computes PROJ' ⋈ EMP
5. EMP → Site 2; PROJ → Site 2; Site 2 computes EMP ⋈ PROJ ⋈ ASG

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/89


Semijoin Algorithms
• General form of semijoin (derivation):
R ⋉F S = ΠA(R ⋈F S) = ΠA(R) ⋈F ΠA∩B(S) = R ⋈F ΠA∩B(S)
where R[A], S[B] are relations

• Consider the join of two relations:
➡ R[A] (located at site 1)
➡ S[A] (located at site 2)
• Alternatives:
1. Do the join R ⋈A S
2. Perform one of the semijoin equivalents
R ⋈A S ⇔ (R ⋉A S) ⋈A S
⇔ R ⋈A (S ⋉A R)
⇔ (R ⋉A S) ⋈A (S ⋉A R)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/90


Semijoin Algorithms
• Perform the join
➡ send R to Site 2
➡ Site 2 computes R ⋈A S
• Consider the semijoin (R ⋉A S) ⋈A S
➡ S' = ΠA(S)
➡ S' → Site 1
➡ Site 1 computes R' = R ⋉A S'
➡ R' → Site 2
➡ Site 2 computes R' ⋈A S
Semijoin is better if
size(ΠA(S)) + size(R ⋉A S) < size(R)
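A rough Python sketch of the decision rule above: estimate the bytes shipped by a direct join (send R whole) versus the semijoin program, using assumed sizes and selectivities.

# Assumed statistics (illustrative only)
card_R, width_R = 10_000, 100        # tuples and bytes/tuple of R at site 1
distinct_S_A, width_A = 2_000, 8     # |ΠA(S)| and bytes of the join attribute
semijoin_sf = 0.3                    # fraction of R tuples matching some S tuple

# Alternative 1: ship R whole to site 2 and join there
ship_whole = card_R * width_R

# Alternative 2: semijoin program (R ⋉A S) ⋈A S
ship_SA = distinct_S_A * width_A                  # send ΠA(S) from site 2 to site 1
ship_R_reduced = semijoin_sf * card_R * width_R   # send R ⋉A S back to site 2
semijoin_total = ship_SA + ship_R_reduced

print(ship_whole, semijoin_total)                 # 1000000 316000.0
print(semijoin_total < ship_whole)                # True: the semijoin wins here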

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/91


Semijoin Algorithms
• Semijoins are useful for multi-join queries
➡ Reduce the size of the operand relations involved in multiple join queries
➡ Optimization becomes more complex
• Example: a program to compute EMP ⋈ ASG ⋈ PROJ is
EMP' ⋈ ASG' ⋈ PROJ,
where EMP' = EMP ⋉ ASG and ASG' = ASG ⋉ PROJ
• We may further reduce the size of an operand relation
EMP'' = EMP ⋉ (ASG ⋉ PROJ)
Since size(ASG ⋉ PROJ) ≤ size(ASG), we have size(EMP'') ≤ size(EMP')
EMP ⋉ (ASG ⋉ PROJ) is a semijoin program for EMP
There exist several potential semijoin programs
There is one optimal semijoin program, called the full reducer

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/92


Semijoin Algorithms
• The problem is to find the full reducer
➡ Evaluate the size reduction of all possible semijoin programs
• Problems with the enumerative method
➡ Cyclic queries, which have cycles in their join graph and for which full reducers cannot be found
➡ Tree queries: full reducers exist, but the number of candidate semijoin programs is exponential in the number of relations, which makes the enumerative approach NP-hard
• For an important class of tree queries, called chained queries, a polynomial algorithm exists
➡ A chained query has a join graph where relations can be ordered and each relation joins only with the next relation in the order

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/93


Semijoin: Example
Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/95


Distributed Dynamic Algorithm
1. Execute all monorelation queries (e.g., selection, projection)
2. Reduce the multirelation query to produce irreducible subqueries q1 → q2 → … → qn such that there is only one relation in common between qi and qi+1
3. Choose qi involving the smallest fragments to execute (call MRQ')
4. Find the best execution strategy for MRQ'
a) Determine processing site
b) Determine fragments to move
5. Repeat 3 and 4

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/96


Distributed Dynamic Algorithm

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/97


Distributed Dynamic Algorithm
- Example

Let us consider the query PROJ ⋈ ASG, where PROJ and ASG are fragmented.

Assume that the allocation of fragments and their sizes are as follows (in kilobytes): (the allocation table is not reproduced here)

Discussion:
➡ Point-to-point network: the best strategy is to send each PROJi to site 3, which requires a transfer of 3000 kbytes, versus 6000 kbytes if ASG is sent to sites 1, 2, and 4.
➡ Broadcast network: the best strategy is to send ASG (in a single transfer) to sites 1, 2, and 4, which incurs a transfer of 2000 kbytes. The latter strategy is faster and minimizes response time because the joins can be done in parallel.
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/98


Distributed Static Algorithm
• Cost function includes local processing as well as transmission
• Considers only joins
• “Exhaustive” search
• Compilation
• Published papers provide solutions to handling horizontal and vertical
fragmentations but the implemented prototype does not

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/99


Distributed Static Algorithm

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/100


Static Approach – Performing
Joins
• Ship whole
➡Larger data transfer
➡Smaller number of messages
➡Better if relations are small
• Fetch as needed
➡Number of messages = O(cardinality of external relation)
➡Data transfer per message is minimal
➡Better if relations are large and the selectivity is good

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/101


Static Approach –
Vertical Partitioning & Joins
1. Move outer relation tuples to the site of the inner relation
(a) Retrieve outer tuples
(b) Send them to the inner relation site
(c) Join them as they arrive
Total Cost = cost(retrieving qualified outer tuples)
+ no. of outer tuples fetched * cost(retrieving qualified inner tuples)
+ msg. cost * (no. outer tuples fetched * avg. outer tuple size)/msg.
size

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/102


Static Approach –
Vertical Partitioning & Joins
2. Move inner relation to the site of outer relation
Cannot join as they arrive; they need to be stored
Total cost = cost(retrieving qualified outer tuples)
+ no. of outer tuples fetched * cost(retrieving matching inner tuples
from temporary storage)
+ cost(retrieving qualified inner tuples)
+ cost(storing all qualified inner tuples in temporary storage)
+ msg. cost * no. of inner tuples fetched * avg. inner tuple
size/msg. size

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/103


Static Approach –
Vertical Partitioning & Joins
3.Fetch inner tuples as needed
(a) Retrieve qualified tuples at outer relation site
(b) Send request containing join column value(s) for outer tuples to inner
relation site
(c) Retrieve matching inner tuples at inner relation site
(d) Send the matching inner tuples to outer relation site
(e) Join as they arrive
Total Cost = cost(retrieving qualified outer tuples)
+ msg. cost * (no. of outer tuples fetched)
+ no. of outer tuples fetched * no. of inner tuples fetched * avg.
inner tuple size * msg. cost / msg. size)
+ no. of outer tuples fetched * cost(retrieving matching inner tuples
for one outer value)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/104


Static Approach –
Vertical Partitioning & Joins
4. Move both inner and outer relations to another site
Total cost = cost(retrieving qualified outer tuples)
+ cost(retrieving qualified inner tuples)
+ cost(storing inner tuples in storage)
+ msg. cost × (no. of outer tuples fetched * avg. outer tuple
size)/msg. size
+ msg. cost * (no. of inner tuples fetched * avg. inner tuple
size)/msg. size
+ no. of outer tuples fetched * cost(retrieving inner tuples from
temporary storage)
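The four formulas above can be compared side by side in a short sketch; all the inputs (unit costs, cardinalities, tuple sizes, matches per outer tuple) are made-up placeholders, and the cost expressions are transcribed from the slides.

# Assumed inputs (placeholders only)
msg_cost, msg_size = 10.0, 1000      # cost per message; bytes per message
io = 0.1                             # cost of one local tuple retrieval/store
n_outer, n_inner = 1_000, 5_000      # qualified outer / qualified inner tuples
sz_outer, sz_inner = 100, 80         # average tuple sizes (bytes)
match = 2                            # matching inner tuples per outer tuple
probe = match * io                   # cost(retrieving matching inner tuples for one outer value)

# 1. Move outer tuples to the inner site, join as they arrive
c1 = n_outer * io + n_outer * probe + msg_cost * (n_outer * sz_outer) / msg_size

# 2. Move the inner relation to the outer site (store it, then probe it)
c2 = (n_outer * io + n_outer * probe + n_inner * io + n_inner * io
      + msg_cost * (n_inner * sz_inner) / msg_size)

# 3. Fetch inner tuples as needed (one request message per outer tuple)
c3 = (n_outer * io + msg_cost * n_outer
      + n_outer * match * sz_inner * msg_cost / msg_size
      + n_outer * probe)

# 4. Move both relations to a third site
c4 = (n_outer * io + n_inner * io + n_inner * io
      + msg_cost * (n_outer * sz_outer) / msg_size
      + msg_cost * (n_inner * sz_inner) / msg_size
      + n_outer * probe)

for name, c in (("1 move outer", c1), ("2 move inner", c2),
                ("3 fetch as needed", c3), ("4 move both", c4)):
    print(name, round(c, 1))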

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/105


Static Approach –
Example
• Join of relations PROJ, the external relation, and ASG, the internal relation, on attribute PNO: PROJ ⋈ ASG
• We assume that
➡ PROJ and ASG are stored at two different sites
➡ there is an index on attribute PNO for relation ASG
• The possible execution strategies for the query are:
1. Ship whole PROJ to the site of ASG
2. Ship whole ASG to the site of PROJ
3. Fetch ASG tuples as needed for each tuple of PROJ
4. Move ASG and PROJ to a third site
• Discussion
➡ Strategy 4 has the highest cost, since both relations must be transferred
➡ Strategy 2: if size(PROJ) >> size(ASG), it minimizes the communication time and is likely to be the best (if local processing time is not too high compared to strategies 1 and 3)

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/106


Static Approach –
Example
• Discussion (cont'd)
➡ The local processing time of strategies 1 and 3 is probably much better than that of strategy 2, since they exploit the index
➡ If strategy 2 is not the best, the choice is between strategies 1 and 3
➡ If PROJ is large and only a few tuples of ASG match, strategy 3 wins
➡ If PROJ is small or many tuples of ASG match, strategy 1 should be the best

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/107


Dynamic vs. Static vs Semijoin
• Semijoin
➡ SDD1 selects only locally optimal schedules
• Dynamic and static approaches have the same advantages and
drawbacks as in centralized case
➡ But the problems of accurate cost estimation at compile-time are more
severe
✦ More variations at runtime
✦ Relations may be replicated, making site and copy selection important
• Hybrid optimization
➡ Choose-plan approach can be used
➡ 2-step approach simpler

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/108


2-Step Optimization
1. At compile time, generate a static plan with operation ordering and
access methods only
2. At startup time, carry out site and copy selection and allocate
operations to sites

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/109


2-Step – Problem Definition
• Given
➡ A set of sites S = {s1, s2, …,sn} with the load of each site
➡ A query Q ={q1, q2, q3, q4} such that each subquery qi is the maximum
processing unit that accesses one relation and communicates with its
neighboring queries
➡ For each qi in Q, a feasible allocation set of sites Sq={s1, s2, …,sk} where
each site stores a copy of the relation in qi
• The objective is to find an optimal allocation of Q to S such that
➡ the load unbalance of S is minimized
➡ The total communication cost is minimized

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/110


2-Step – Problem Definition
• Each site si has a load, denoted by load(si), which reflects the number of queries currently submitted
• The load can be expressed in different ways, e.g., as the number of I/O-bound and CPU-bound queries at the site
• The average load of the system is defined as:
• The balance of the system for a given allocation of subqueries to sites can be measured using the following unbalance factor:
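With n sites, a standard way to define these two quantities (assumed here) is:

Load(S) = (1/n) × Σi load(si)
UF(S) = (1/n) × Σi (load(si) − Load(S))²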

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/111


2-Step – Problem Definition
• The problem addressed by the second step of two-step query
optimization can be formalized as the following subquery allocation
problem. Given
• 1. a set of sites S = {s1 , .., sn } with the load of each site;
• 2. a query Q = {q1 , .., qm }; and
• 3. for each subquery qi in Q, a feasible allocation set of sites
• Sq = {s1, ..., sk }
• where each site stores a copy of the relation involved in qi ;
• the objective is to find an optimal allocation on Q to S such that
• 1. UF(S) is minimized, and
• 2. the total communication cost is minimized.

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/112


2-Step – Algorithm
• The algorithm, which we describe for linear join trees, uses several heuristics:
1. Start by allocating subqueries with least allocation flexibility, i.e., with the smaller feasible allocation sets of sites
2. Consider the sites with least load and best benefit; the benefit of a site is defined as the number of subqueries already allocated to the site, and measures the communication cost savings from allocating the subquery there
3. Recompute the load information of any unallocated subquery that has a selected site in its feasible allocation set
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/113


2-Step Algorithm
• For each q in Q compute load (Sq)
• While Q not empty do
1. Select subquery a with least allocation flexibility
2. Select best site b for a (with least load and best benefit)
3. Remove a from Q and recompute loads if needed
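A sketch of this loop in Python; the load model, the benefit definition, and the example data are simplified assumptions rather than the book's exact algorithm.

def allocate(subqueries, feasible_sites, load):
    """Greedy 2nd-step allocation: least flexibility first, then least load / best benefit."""
    allocation, benefit = {}, {s: 0 for s in load}      # benefit = subqueries already at s
    remaining = dict(subqueries)                        # subquery -> estimated extra load
    while remaining:
        # 1. subquery with the smallest feasible allocation set
        q = min(remaining, key=lambda q: len(feasible_sites[q]))
        # 2. among its feasible sites, pick least load, then best benefit
        s = min(feasible_sites[q], key=lambda s: (load[s], -benefit[s]))
        allocation[q] = s
        load[s] += remaining.pop(q)                     # 3. recompute loads
        benefit[s] += 1
    return allocation

# Hypothetical example: 4 subqueries, 4 sites with current loads
sites = {"s1": 1, "s2": 2, "s3": 0, "s4": 0}
feasible = {"q1": ["s3", "s4"], "q2": ["s1", "s2"], "q3": ["s1"], "q4": ["s1"]}
print(allocate({"q1": 1, "q2": 1, "q3": 1, "q4": 1}, feasible, dict(sites)))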

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.8/


2-Step – Algorithm

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/115


2-Step Algorithm Example
• Let Q = {q1, q2, q3, q4} where q1 is associated with R1, q2 is associated with R2 joined with the result of q1, etc.
• Iteration 1: select q4, allocate to s1, set load(s1) = 2
• Iteration 2: select q2, allocate to s2, set load(s2) = 3
• Iteration 3: select q3, allocate to s1, set load(s1) = 3
• Iteration 4: select q1, allocate to s3 or s4
Note: if in iteration 2 q2 were allocated to s4, this would have produced a better plan; so hybrid optimization can still miss optimal plans
Distributed DBMS © M. T. Özsu & P. Valduriez Ch.8/
Outline
• Distributed Query Processing
➡ Introduction
➡ Query Decomposition and Localization
➡ Centralized query optimization
➡ Join Ordering
➡ Distributed Query Optimization
➡ Adaptive Query Processing

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.7/117


Adaptive Query Processing -
Motivations
• Assumptions underlying query optimization
➡ The optimizer has sufficient knowledge about runtime
✦ Cost information
➡ Runtime conditions remain stable during query execution
• Appropriate for systems with few data sources in a controlled environment
• Inappropriate for changing environments with large numbers of data sources and unpredictable runtime conditions

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.8/


Example: QEP with Blocked Operator

• Assume ASG, EMP, PROJ and PAY are each at a different site
• If the ASG site is down, the entire pipeline is blocked
• However, with some reorganization, the join of EMP and PAY could be done while waiting for ASG

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.8/


Adaptive Query Processing – Definition

• Query processing is adaptive if it receives information from the execution environment and determines its behavior accordingly
➡ Feedback loop between the optimizer and the runtime environment
➡ Communication of runtime information between DDBMS components
• Additional components
➡ Monitoring, assessment, reaction
➡ Embedded in control operators of the QEP
• Trade-off between reactiveness and the overhead of adaptation

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.8/


Adaptive Components

• Monitoring parameters (collected by sensors in the QEP)
➡ Memory size
➡ Data arrival rates
➡ Actual statistics
➡ Operator execution cost
➡ Network throughput
• Adaptive reactions
➡ Change schedule
➡ Replace an operator by an equivalent one
➡ Modify the behavior of an operator
➡ Data repartitioning

Distributed DBMS © M. T. Özsu & P. Valduriez Ch.8/
