3 Distribution Design
Outline
Introduction
Background
Distributed Database Design
Fragmentation
Data distribution
Database Integration
Semantic Data Control
Distributed Query Processing
Multidatabase Query Processing
Distributed Transaction Management
Data Replication
Parallel Database Systems
Distributed Object DBMS
Peer-to-Peer Data Management
Web Data Management
Current Issues
Design Problem
In the general setting:
  Making decisions about the placement of data and programs across the sites of a computer network, as well as possibly designing the network itself.
In a distributed DBMS, the placement of applications entails
  placement of the distributed DBMS software, and
  placement of the applications that run on the database
Dimensions of the Problem
Three dimensions:
  Level of sharing: data only, or data + program
  Access pattern behavior: static or dynamic
  Level of knowledge: partial information or complete information
Distribution Design
Top-down
  mostly used when designing systems from scratch
  mostly suited to homogeneous systems
Bottom-up
  used when the databases already exist at a number of sites
Top-Down Design
[Figure: top-down design process]
Requirements analysis yields the system objectives. View design (with user input) and conceptual design follow, tied together by view integration; they produce the external schemas (ESs) and the global conceptual schema (GCS). Distribution design takes the GCS and access information and produces the local conceptual schemas (LCSs). Physical design then maps each LCS onto a local internal schema (LIS).
Distribution Design Issues
Why fragment at all?
How to fragment?
How much to fragment?
How to test correctness?
How to allocate?
Information requirements?
Fragmentation
Can't we just distribute relations?
What is a reasonable unit of distribution?
relation
  views are subsets of relations ⇒ locality
  extra communication
fragments of relations (sub-relations)
  concurrent execution of a number of transactions that access different portions of a relation
  views that cannot be defined on a single fragment will require extra processing
  semantic data control (especially integrity enforcement) more difficult
Fragmentation Alternatives
Horizontal
  PROJ1: projects with budgets less than $200,000
  PROJ2: projects with budgets greater than or equal to $200,000

PROJ
  PNO  PNAME              BUDGET  LOC
  P1   Instrumentation    150000  Montreal
  P2   Database Develop.  135000  New York
  P3   CAD/CAM            250000  New York
  P4   Maintenance        310000  Paris
  P5   CAD/CAM            500000  Boston

PROJ1
  PNO  PNAME              BUDGET  LOC
  P1   Instrumentation    150000  Montreal
  P2   Database Develop.  135000  New York

PROJ2
  PNO  PNAME              BUDGET  LOC
  P3   CAD/CAM            250000  New York
  P4   Maintenance        310000  Paris
  P5   CAD/CAM            500000  Boston
Fragmentation Alternatives
Vertical
  PROJ1: information about project budgets
  PROJ2: information about project names and locations

PROJ
  PNO  PNAME              BUDGET  LOC
  P1   Instrumentation    150000  Montreal
  P2   Database Develop.  135000  New York
  P3   CAD/CAM            250000  New York
  P4   Maintenance        310000  Paris
  P5   CAD/CAM            500000  Boston

PROJ1
  PNO  BUDGET
  P1   150000
  P2   135000
  P3   250000
  P4   310000
  P5   500000

PROJ2
  PNO  PNAME              LOC
  P1   Instrumentation    Montreal
  P2   Database Develop.  New York
  P3   CAD/CAM            New York
  P4   Maintenance        Paris
  P5   CAD/CAM            Boston
Degree of Fragmentation
Finding the suitable level of partitioning within this range:
  from tuples or attributes (finest) to whole relations (coarsest); a finite number of alternatives
Correctness of Fragmentation
Completeness
  Decomposition of relation R into fragments R1, R2, ..., Rn is complete if and only if each data item in R can also be found in some Ri.
Reconstruction
  If relation R is decomposed into fragments R1, R2, ..., Rn, then there should exist some relational operator ∇ such that
    R = ∇(1 ≤ i ≤ n) Ri
Disjointness
  If relation R is decomposed into fragments R1, R2, ..., Rn, and data item di is in Rj, then di should not be in any other fragment Rk (k ≠ j).
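
As a concrete illustration (not part of the original slides), the three rules are easy to state in code. A minimal Python sketch, assuming relations are modeled as sets of hashable tuples and the fragmentation is horizontal, so the reconstruction operator is set union:

def is_complete(R, fragments):
    # completeness: every data item of R appears in some fragment Ri
    return all(any(t in f for f in fragments) for t in R)

def reconstructs(R, fragments):
    # reconstruction: for horizontal fragments the operator is set union
    return set().union(*fragments) == R

def is_disjoint(fragments):
    # disjointness: no data item appears in two different fragments
    seen = set()
    for f in fragments:
        if seen & f:
            return False
        seen |= f
    return True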
Allocation Alternatives
Non-replicated
  partitioned: each fragment resides at only one site
Replicated
  fully replicated: each fragment at each site
  partially replicated: each fragment at some of the sites
Rule of thumb:
  If (read-only queries)/(update queries) ≫ 1, replication is advantageous; otherwise replication may cause problems.
Comparison of Replication Alternatives

                       Full replication      Partial replication    Partitioning
QUERY PROCESSING       Easy                  Same difficulty        Same difficulty
DIRECTORY MANAGEMENT   Easy or nonexistent   Same difficulty        Same difficulty
CONCURRENCY CONTROL    Moderate              Difficult              Easy
RELIABILITY            Very high             High                   Low
REALITY                Possible application  Realistic              Possible application
Information Requirements
Four categories:
Database information
Application information
Communication network information
Computer system information
Fragmentation
Horizontal Fragmentation (HF)
Primary Horizontal Fragmentation (PHF)
Derived Horizontal Fragmentation (DHF)
Vertical Fragmentation (VF)
Hybrid Fragmentation (HF)
PHF Information Requirements
Database Information
relationships among relations (links)
cardinality of each relation: card(R)

[Figure: schema links. Link L1: owner SKILL(TITLE, SAL), member EMP(ENO, ENAME, TITLE); EMP and PROJ(PNO, PNAME, BUDGET, LOC) are in turn linked to ASG(ENO, PNO, RESP, DUR)]
PHF - Information Requirements
Application Information
simple predicates: Given R[A1, A2, ..., An], a simple predicate pj is

  pj : Ai θ Value

where θ ∈ {=, <, ≤, >, ≥, ≠}, Value ∈ Di, and Di is the domain of Ai.
For relation R we define Pr = {p1, p2, ..., pm}.

Example:
  PNAME = "Maintenance"
  BUDGET ≤ 200000

minterm predicates: Given R and Pr = {p1, p2, ..., pm}, define M = {m1, m2, ..., mz} as

  M = { mi | mi = ∧(pj ∈ Pr) pj* },  1 ≤ j ≤ m, 1 ≤ i ≤ z

where pj* = pj or pj* = ¬(pj).
PHF Information Requirements
Example
m1 : PNAME = "Maintenance" ∧ BUDGET ≤ 200000
m2 : ¬(PNAME = "Maintenance") ∧ BUDGET ≤ 200000
m3 : PNAME = "Maintenance" ∧ ¬(BUDGET ≤ 200000)
m4 : ¬(PNAME = "Maintenance") ∧ ¬(BUDGET ≤ 200000)
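
Enumerating minterms is mechanical: m simple predicates yield up to 2^m minterms, one per combination of affirmed and negated predicates. A hedged Python sketch (the tuple-as-dict representation and the predicate names are illustrative assumptions, not the book's notation):

from itertools import product

simple_preds = [
    ("PNAME='Maintenance'", lambda t: t["PNAME"] == "Maintenance"),
    ("BUDGET<=200000",      lambda t: t["BUDGET"] <= 200000),
]

def minterms(preds):
    # one minterm per truth assignment over the simple predicates
    for signs in product([True, False], repeat=len(preds)):
        name = " AND ".join(n if s else "NOT(" + n + ")"
                            for (n, _), s in zip(preds, signs))
        test = (lambda t, signs=signs:
                all(p(t) == s for (_, p), s in zip(preds, signs)))
        yield name, test

for name, _ in minterms(simple_preds):
    print(name)   # prints the four minterms m1..m4 above, up to order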
PHF Information Requirements
Application Information
minterm selectivities: sel(mi)
  The number of tuples of the relation that would be accessed by a user query specified according to a given minterm predicate mi.
access frequencies: acc(qi)
  The frequency with which a user application qi accesses data.
  Access frequency for a minterm predicate can also be defined.
Primary Horizontal Fragmentation
Definition:

  Rj = σFj(R),  1 ≤ j ≤ w

where Fj is a selection formula, which is (preferably) a minterm predicate.
Therefore,
  A horizontal fragment Ri of relation R consists of all the tuples of R which satisfy a minterm predicate mi.
  ⇒ Given a set of minterm predicates M, there are as many horizontal fragments of relation R as there are minterm predicates.
The set of horizontal fragments is also referred to as the set of minterm fragments.
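
Each fragment is thus produced by an ordinary selection. A one-function sketch, reusing the tuple-as-dict convention from the earlier example:

def sigma(R, F):
    # sigma_F(R): the tuples of R that satisfy selection formula F
    return [t for t in R if F(t)]

PROJ = [
    {"PNO": "P1", "PNAME": "Instrumentation", "BUDGET": 150000, "LOC": "Montreal"},
    {"PNO": "P3", "PNAME": "CAD/CAM",         "BUDGET": 250000, "LOC": "New York"},
]
PROJ1 = sigma(PROJ, lambda t: t["BUDGET"] <  200000)   # minterm fragment
PROJ2 = sigma(PROJ, lambda t: t["BUDGET"] >= 200000)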
PHF Algorithm
Given: A relation R, the set of simple predicates Pr
Output: The set of fragments of R, {R1, R2, ..., Rw}, which obey the fragmentation rules.
Preliminaries:
  Pr should be complete
  Pr should be minimal
Completeness of Simple Predicates
A set of simple predicates Pr is said to be complete if and only if any two tuples belonging to the same minterm fragment defined on Pr have the same probability of being accessed by every application.
Example:
  Assume PROJ[PNO, PNAME, BUDGET, LOC] has two applications defined on it:
Find the budgets of projects at each location. (1)
Find projects with budgets less than $200000. (2)
Completeness of Simple Predicates
According to (1),
  Pr = {LOC="Montreal", LOC="New York", LOC="Paris"}
which is not complete with respect to (2). Modify it to
  Pr = {LOC="Montreal", LOC="New York", LOC="Paris", BUDGET ≤ 200000, BUDGET > 200000}
which is complete.
Minimality of Simple Predicates
If a predicate influences how fragmentation is performed (i.e., causes a fragment f to be further fragmented into, say, fi and fj), then there should be at least one application that accesses fi and fj differently:

  acc(mi)/card(fi) ≠ acc(mj)/card(fj)

In other words, the simple predicate should be relevant in determining a fragmentation.
If all the predicates of a set Pr are relevant, then Pr is minimal.
Minimality of Simple Predicates
Example:
  Pr = {LOC="Montreal", LOC="New York", LOC="Paris", BUDGET ≤ 200000, BUDGET > 200000}
is minimal (in addition to being complete). However, if we add
  PNAME = "Instrumentation"
then Pr is no longer minimal, since the new predicate produces fragments that no application accesses differently.
COM_MIN Algorithm
Given: a relation R and a set of simple predicates Pr
Output: a complete and minimal set of simple predicates Pr' for Pr
Rule 1: a relation or fragment is partitioned into at least two parts which
are accessed differently by at least one application.
COM_MIN Algorithm
Initialization:
  find a pi ∈ Pr such that pi partitions R according to Rule 1
  set Pr' = {pi}; Pr ← Pr − {pi}; F ← {fi}
Iteratively add predicates to Pr' until it is complete:
  find a pj ∈ Pr such that pj partitions some fragment fk defined by a minterm predicate over Pr', according to Rule 1
  set Pr' ← Pr' ∪ {pj}; Pr ← Pr − {pj}; F ← F ∪ {fj}
  if ∃ pk ∈ Pr' which is non-relevant then
    Pr' ← Pr' − {pk}
    F ← F − {fk}
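
One possible transcription of this loop structure into Python. Rule 1 is abstracted into a caller-supplied relevant(p, pr_prime) test, since deciding relevance requires application access data that this sketch does not model; it is a structural sketch only, not the algorithm verbatim:

def com_min(Pr, relevant):
    # relevant(p, pr_prime): True if p splits some minterm fragment of
    # pr_prime into parts accessed differently by an application (Rule 1)
    Pr = set(Pr)
    p = next(q for q in Pr if relevant(q, set()))   # initialization
    pr_prime = {p}
    Pr.discard(p)
    grew = True
    while grew:                                     # grow until complete
        grew = False
        for q in list(Pr):
            if relevant(q, pr_prime):
                pr_prime.add(q); Pr.discard(q); grew = True
        for q in list(pr_prime):                    # prune non-relevant predicates
            if not relevant(q, pr_prime - {q}):
                pr_prime.discard(q)
    return pr_prime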
PHORIZONTAL Algorithm
Makes use of COM_MIN to perform fragmentation.
Input: a relation R and a set of simple predicates Pr
Output: a set of minterm predicates M according to which relation R is to be fragmented

Pr' ← COM_MIN(R, Pr)
determine the set M of minterm predicates
determine the set I of implications among pi ∈ Pr'
eliminate the contradictory minterms from M
PHF Example
Two candidate relations: PAY and PROJ.
Fragmentation of relation PAY
  Application: Check the salary info and determine raise.
  Employee records kept at two sites ⇒ application run at two sites
  Simple predicates
    p1 : SAL ≤ 30000
    p2 : SAL > 30000
    Pr = {p1, p2}, which is complete and minimal ⇒ Pr' = Pr
  Minterm predicates
    m1 : (SAL ≤ 30000)
    m2 : ¬(SAL ≤ 30000) = (SAL > 30000)
PHF Example
PAY1
  TITLE        SAL
  Mech. Eng.   27000
  Programmer   24000

PAY2
  TITLE        SAL
  Elect. Eng.  40000
  Syst. Anal.  34000
PHF Example
Fragmentation of relation PROJ
Applications:
  (1) Find the name and budget of projects given their number.
      Issued at three sites.
  (2) Access project information according to budget:
      one site accesses BUDGET ≤ 200000, the other accesses BUDGET > 200000.
Simple predicates
  For application (1):
    p1 : LOC = "Montreal"
    p2 : LOC = "New York"
    p3 : LOC = "Paris"
  For application (2):
    p4 : BUDGET ≤ 200000
    p5 : BUDGET > 200000
  Pr = Pr' = {p1, p2, p3, p4, p5}
PHF Example
Fragmentation of relation PROJ (continued)
Minterm fragments left after elimination:
  m1 : (LOC = "Montreal") ∧ (BUDGET ≤ 200000)
  m2 : (LOC = "Montreal") ∧ (BUDGET > 200000)
  m3 : (LOC = "New York") ∧ (BUDGET ≤ 200000)
  m4 : (LOC = "New York") ∧ (BUDGET > 200000)
  m5 : (LOC = "Paris") ∧ (BUDGET ≤ 200000)
  m6 : (LOC = "Paris") ∧ (BUDGET > 200000)
PHF Example
PROJ1
  PNO  PNAME            BUDGET  LOC
  P1   Instrumentation  150000  Montreal

PROJ2
  PNO  PNAME              BUDGET  LOC
  P2   Database Develop.  135000  New York

PROJ4
  PNO  PNAME    BUDGET  LOC
  P3   CAD/CAM  250000  New York

PROJ6
  PNO  PNAME        BUDGET  LOC
  P4   Maintenance  310000  Paris
PHF Correctness
Completeness
  Since Pr' is complete and minimal, the selection predicates are complete.
Reconstruction
  If relation R is fragmented into FR = {R1, R2, ..., Rr},
    R = ∪(Ri ∈ FR) Ri
Disjointness
  Minterm predicates that form the basis of fragmentation should be mutually exclusive.
Derived Horizontal Fragmentation
Defined on a member relation of a link according to a selection operation specified on its owner.
  Each link is an equijoin.
  Equijoin can be implemented by means of semijoins.

[Figure: link graph. L1: owner SKILL(TITLE, SAL), member EMP(ENO, ENAME, TITLE); L2: owner EMP, member ASG(ENO, PNO, RESP, DUR); L3: owner PROJ(PNO, PNAME, BUDGET, LOC), member ASG]
DHF Definition
Given a link L where owner(L) = S and member(L) = R, the derived horizontal fragments of R are defined as

  Ri = R ⋉ Si,  1 ≤ i ≤ w

where w is the maximum number of fragments that will be defined on R, and

  Si = σFi(S)

where Fi is the formula according to which the primary horizontal fragment Si is defined.
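
A small sketch of the semijoin that realizes this definition, with relations as lists of dicts and a single join attribute (the names anticipate the example on the next slide):

def semijoin(R, S, A):
    # R semijoin S on attribute A: the tuples of R that join with some tuple of S
    keys = {s[A] for s in S}
    return [t for t in R if t[A] in keys]

SKILL1 = [{"TITLE": "Mech. Eng.", "SAL": 27000},
          {"TITLE": "Programmer", "SAL": 24000}]
EMP = [{"ENO": "E1", "ENAME": "J. Doe", "TITLE": "Elect. Eng."},
       {"ENO": "E3", "ENAME": "A. Lee", "TITLE": "Mech. Eng."}]
EMP1 = semijoin(EMP, SKILL1, "TITLE")   # derived fragment EMP1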
DHF Example
Given link L1 where owner(L1) = SKILL and member(L1) = EMP:
  EMP1 = EMP ⋉ SKILL1
  EMP2 = EMP ⋉ SKILL2
where
  SKILL1 = σ(SAL ≤ 30000)(SKILL)
  SKILL2 = σ(SAL > 30000)(SKILL)

EMP1
  ENO  ENAME      TITLE
  E3   A. Lee     Mech. Eng.
  E4   J. Miller  Programmer
  E7   R. Davis   Mech. Eng.

EMP2
  ENO  ENAME     TITLE
  E1   J. Doe    Elect. Eng.
  E2   M. Smith  Syst. Anal.
  E5   B. Casey  Syst. Anal.
  E6   L. Chu    Elect. Eng.
  E8   J. Jones  Syst. Anal.
DHF Correctness
Completeness
  Referential integrity
  Let R be the member relation of a link whose owner is relation S, which is fragmented as FS = {S1, S2, ..., Sn}. Furthermore, let A be the join attribute between R and S. Then, for each tuple t of R, there should be a tuple t' of S such that t[A] = t'[A].
Reconstruction
  Same as primary horizontal fragmentation.
Disjointness
  Simple join graphs between the owner and the member fragments.
Vertical Fragmentation
Has been studied within the centralized context
  design methodology
  physical clustering
More difficult than horizontal, because more alternatives exist.
Two approaches:
  grouping: attributes to fragments
  splitting: relation to fragments
Vertical Fragmentation
Overlapping fragments
  grouping
Non-overlapping fragments
  splitting
We do not consider the replicated key attributes to be overlapping.
Advantage:
  Easier to enforce functional dependencies (for integrity checking etc.)
VF Information Requirements
Application Information
  Attribute affinities
    a measure that indicates how closely related the attributes are
    obtained from more primitive usage data
  Attribute usage values
    Given a set of queries Q = {q1, q2, ..., qq} that will run on relation R[A1, A2, ..., An],

      use(qi, Aj) = 1 if attribute Aj is referenced by query qi, 0 otherwise
VF Definition of use(qi, Aj)
Consider the following four queries on relation PROJ:

  q1: SELECT BUDGET FROM PROJ WHERE PNO=Value
  q2: SELECT PNAME, BUDGET FROM PROJ
  q3: SELECT PNAME FROM PROJ WHERE LOC=Value
  q4: SELECT SUM(BUDGET) FROM PROJ WHERE LOC=Value

Let A1 = PNO, A2 = PNAME, A3 = BUDGET, A4 = LOC. Then

        q1  q2  q3  q4
  A1     1   0   0   0
  A2     0   1   1   0
  A3     1   1   0   1
  A4     0   0   1   1
VF Affinity Measure aff(Ai, Aj)
The attribute affinity measure between two attributes Ai and Aj of a relation R[A1, A2, ..., An] with respect to the set of applications Q = {q1, q2, ..., qq} is defined as follows:

  aff(Ai, Aj) = Σ(all queries that access both Ai and Aj) query access

where

  query access = Σ(all sites) (access frequency of a query × accesses per execution)
VF Calculation of aff(Ai, Aj)
Assume each query in the previous example accesses the attributes once during each execution, and assume the access frequencies

        S1  S2  S3
  q1    15  20  10
  q2     5   0   0
  q3    25  25  25
  q4     3   0   0

Then
  aff(A1, A3) = 15*1 + 20*1 + 10*1 = 45
and the attribute affinity matrix AA is

        A1  A2  A3  A4
  A1    45   0  45   0
  A2     0  80   5  75
  A3    45   5  53   3
  A4     0  75   3  78
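
The whole computation is a small matrix product. A NumPy sketch (rows of use are queries here, i.e., the transpose of the matrix shown earlier; the once-per-execution assumption is the one stated above):

import numpy as np

use = np.array([[1, 0, 1, 0],    # q1 uses A1, A3
                [0, 1, 1, 0],    # q2 uses A2, A3
                [0, 1, 0, 1],    # q3 uses A2, A4
                [0, 0, 1, 1]])   # q4 uses A3, A4
acc = np.array([[15, 20, 10],    # acc(q1) at S1..S3
                [ 5,  0,  0],
                [25, 25, 25],
                [ 3,  0,  0]])
freq = acc.sum(axis=1)           # total access frequency of each query
# aff(Ai, Aj) = sum of freq(q) over queries q that use both Ai and Aj
AA = (use * freq[:, None]).T @ use
print(AA)                        # reproduces the AA matrix above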
VF Clustering Algorithm
Take the attribute affinity matrix AA and reorganize the attribute orders to form clusters where the attributes in each cluster demonstrate high affinity to one another.
The Bond Energy Algorithm (BEA) has been used for clustering of entities. BEA finds an ordering of entities (in our case attributes) such that the global affinity measure

  AM = Σi Σj (affinity of Ai and Aj with their neighbors)

is maximized.
Bond Energy Algorithm
Input: The AA matrix
Output: The clustered affinity matrix CA which is a perturbation of AA
Initialization: Place and fix one of the columns of AA in CA.
Iteration: Place the remaining n-i columns in the remaining i+1 positions in
the CA matrix. For each column, choose the placement that makes the most
contribution to the global affinity measure.
Row order: Order the rows according to the column ordering.
Bond Energy Algorithm
Best placement? Define the contribution of a placement:

  cont(Ai, Ak, Aj) = 2·bond(Ai, Ak) + 2·bond(Ak, Aj) − 2·bond(Ai, Aj)

where

  bond(Ax, Ay) = Σ(z = 1..n) aff(Az, Ax) · aff(Az, Ay)
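
A direct transcription of bond and cont in Python (None stands for an empty neighbor position, whose bond is taken as 0; the printed values match the example on the next slide):

AA = [[45,  0, 45,  0],
      [ 0, 80,  5, 75],
      [45,  5, 53,  3],
      [ 0, 75,  3, 78]]

def bond(x, y):
    # bond(Ax, Ay) = sum over z of aff(Az, Ax) * aff(Az, Ay)
    if x is None or y is None:
        return 0
    return sum(AA[z][x] * AA[z][y] for z in range(len(AA)))

def cont(i, k, j):
    # contribution of placing column Ak between columns Ai and Aj
    return 2 * bond(i, k) + 2 * bond(k, j) - 2 * bond(i, j)

print(cont(None, 2, 0))   # ordering (0-3-1): 8820
print(cont(0, 2, 1))      # ordering (1-3-2): 10150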
BEA Example
Consider the following AA matrix and the corresponding CA matrix where A1 and A2 have been placed. Place A3:

Ordering (0-3-1):
  cont(A0, A3, A1) = 2·bond(A0, A3) + 2·bond(A3, A1) − 2·bond(A0, A1)
                   = 2·0 + 2·4410 − 2·0 = 8820
Ordering (1-3-2):
  cont(A1, A3, A2) = 2·bond(A1, A3) + 2·bond(A3, A2) − 2·bond(A1, A2)
                   = 2·4410 + 2·890 − 2·225 = 10150
Ordering (2-3-4):
  cont(A2, A3, A4) = 1780
BEA Example
Therefore, the CA matrix has the form

        A1  A3  A2
  A1    45  45   0
  A2     0   5  80
  A3    45  53   5
  A4     0   3  75

When A4 is placed, the final form of the CA matrix (after row reorganization) is

        A1  A3  A2  A4
  A1    45  45   0   0
  A3    45  53   5   3
  A2     0   5  80  75
  A4     0   3  75  78
VF Algorithm
How can you divide a set of clustered attributes {A1, A2, ..., An} into two (or more) sets {A1, A2, ..., Ai} and {Ai+1, ..., An} such that there are no (or minimal) applications that access both (or more than one) of the sets?

[Figure: the clustered affinity matrix CA split along its diagonal at attribute Ai; the top-left block TA covers {A1, ..., Ai}, the bottom-right block BA covers {Ai+1, ..., An}]
VF Algorithm
Define
  TQ = set of applications that access only TA
  BQ = set of applications that access only BA
  OQ = set of applications that access both TA and BA
and
  CTQ = total number of accesses to attributes by applications that access only TA
  CBQ = total number of accesses to attributes by applications that access only BA
  COQ = total number of accesses to attributes by applications that access both TA and BA

Then find the point along the diagonal that maximizes

  z = CTQ × CBQ − COQ²
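
A sketch of the resulting split-point search. The representation (each application as a pair of its total access count and the set of attributes it uses) and the exhaustive scan over the diagonal are illustrative choices, not the book's notation:

def best_split(attrs, apps):
    # attrs: attribute order taken from CA; apps: list of (accesses, attr_set)
    best = None
    for i in range(1, len(attrs)):
        TA, BA = set(attrs[:i]), set(attrs[i:])
        CTQ = sum(n for n, A in apps if A <= TA)            # access only TA
        CBQ = sum(n for n, A in apps if A <= BA)            # access only BA
        COQ = sum(n for n, A in apps if A & TA and A & BA)  # access both
        z = CTQ * CBQ - COQ ** 2
        if best is None or z > best[0]:
            best = (z, i)
    return best   # (best objective value, split index i)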
VF Algorithm
Two problems:
  Cluster forming in the middle of the CA matrix
    Shift a row up and a column left and apply the algorithm to find the best partitioning point
    Do this for all possible shifts
    Cost O(m²)
  More than two clusters
    m-way partitioning
    try 1, 2, ..., m−1 split points along the diagonal and try to find the best point for each of these
    Cost O(2^m)
VF Correctness
A relation R, defined over attribute set A and key K, generates the vertical partitioning FR = {R1, R2, ..., Rr}.
Completeness
  The following should be true for A:
    A = ∪i A(Ri), where A(Ri) is the attribute set of fragment Ri
Reconstruction
  Reconstruction can be achieved by
    R = ⋈K Ri, Ri ∈ FR
Disjointness
  TIDs are not considered to be overlapping since they are maintained by the system
  Duplicated keys are not considered to be overlapping
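
A sketch of reconstruction by key join, under the same tuple-as-dict convention used earlier; the key attribute K is replicated in every fragment, which is what lets the join recover R:

def join_on_key(R1, R2, K):
    # natural join of two vertical fragments on the replicated key K
    index = {t[K]: t for t in R2}
    return [{**t, **index[t[K]]} for t in R1 if t[K] in index]

PROJ1 = [{"PNO": "P1", "BUDGET": 150000}]
PROJ2 = [{"PNO": "P1", "PNAME": "Instrumentation", "LOC": "Montreal"}]
PROJ = join_on_key(PROJ1, PROJ2, "PNO")   # recovers the original PROJ tuples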
Hybrid Fragmentation
[Figure: hybrid fragmentation tree. Relation R is horizontally fragmented (HF) into R1 and R2; R1 is vertically fragmented (VF) into R11 and R12, and R2 into R21, R22, R23]
Fragment Allocation
Problem Statement
Given
  F = {F1, F2, ..., Fn}  fragments
  S = {S1, S2, ..., Sm}  network sites
  Q = {q1, q2, ..., qq}  applications
Find the "optimal" distribution of F to S.
Optimality
  Minimal cost
    Communication + storage + processing (read & update)
    Cost in terms of time (usually)
  Performance
    Response time and/or throughput
  Constraints
    Per site constraints (storage & processing)
Information Requirements
Database information
selectivity of fragments
size of a fragment
Application information
access types and numbers
access localities
Communication network information
  bandwidth
  latency
  communication overhead
Computer system information
  unit cost of storing data at a site
  unit cost of processing at a site
Allocation
File Allocation (FAP) vs Database Allocation (DAP):
Fragments are not individual files
relationships have to be maintained
Access to databases is more complicated
remote file access model not applicable
relationship between allocation and query processing
Cost of integrity enforcement should be considered
Cost of concurrency control should be considered
Allocation Information
Requirements
Database Information
selectivity of fragments
size of a fragment
Application Information
number of read accesses of a query to a fragment
number of update accesses of a query to a fragment
A matrix indicating which queries update which fragments
A similar matrix for retrievals
originating site of each query
Site Information
unit cost of storing data at a site
unit cost of processing at a site
Network Information
communication cost/frame between two sites
frame size
Allocation Model
General Form
  min(Total Cost)
  subject to
    response time constraint
    storage constraint
    processing constraint
Decision Variable
  xij = 1 if fragment Fi is stored at site Sj; 0 otherwise
Allocation Model
Total Cost
  Σ(all queries) query processing cost + Σ(all sites) Σ(all fragments) cost of storing a fragment at a site
Storage Cost (of fragment Fj at site Sk)
  (unit storage cost at Sk) × (size of Fj) × xjk
Query Processing Cost (for one query)
  processing component + transmission component
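
A toy sketch of the decision variable and the storage-cost term; every number below is made up purely to show the shape of the model:

size = [10, 20]                   # size of fragments F1, F2 (hypothetical)
unit_storage = [1.0, 0.5, 2.0]    # unit storage cost at sites S1..S3 (hypothetical)

# x[i][j] = 1 iff fragment Fi is stored at site Sj
x = [[1, 0, 0],                   # F1 stored at S1
     [0, 1, 1]]                   # F2 replicated at S2 and S3

storage_cost = sum(unit_storage[j] * size[i] * x[i][j]
                   for i in range(len(size))
                   for j in range(len(unit_storage)))
print(storage_cost)               # 1.0*10 + 0.5*20 + 2.0*20 = 60.0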
Allocation Model
Query Processing Cost
  Processing component
    access cost + integrity enforcement cost + concurrency control cost
  Access cost
    Σ(all sites) Σ(all fragments) (no. of update accesses + no. of read accesses) × xij × local processing cost at a site
  Integrity enforcement and concurrency control costs
    can be calculated similarly
Allocation Model
Query Processing Cost
  Transmission component
    cost of processing updates + cost of processing retrievals
  Cost of updates
    Σ(all sites) Σ(all fragments) update message cost + Σ(all sites) Σ(all fragments) acknowledgment cost
  Retrieval cost
    Σ(all fragments) min(all sites) (cost of retrieval command + cost of sending back the result)
Allocation Model
Constraints
  Response time
    execution time of query ≤ max. allowable response time for that query
  Storage constraint (for a site)
    Σ(all fragments) storage requirement of a fragment at that site ≤ storage capacity at that site
  Processing constraint (for a site)
    Σ(all queries) processing load of a query at that site ≤ processing capacity of that site
Allocation Model
Solution Methods
FAP is NP-complete
DAP is also NP-complete
Heuristics based on
single commodity warehouse location (for FAP)
knapsack problem
branch and bound techniques
network flow
Allocation Model
Attempts to reduce the solution space
assume all candidate partitionings known; select the best partitioning
ignore replication at first
sliding window on fragments