Chapter 6 - Consistency and Replication
we discuss
why replication is useful and its relation with scalability, in particular object-based replication
consistency models: data-centric and client-centric consistency models
how consistency and replication are implemented
6.1 Reasons for Replication
two major reasons: reliability and performance
reliability: if one replica crashes, the system can continue working by switching to another replica; multiple copies also give protection against corrupted data
performance: replication helps to scale in numbers (dividing the work among replicas) and in geographical area (placing a copy close to where it is used)
Replication as Scaling Technique
replication and caching are widely applied as scaling techniques
processes can use local copies and limit access time and traffic
however, we need to keep the copies consistent, and this may require more network bandwidth
if the copies are refreshed more often than they are read (i.e., a low access-to-update ratio), the cost of keeping them consistent can outweigh the benefits of replication
example: four processes operating on the same data item x
3. Weak Consistency
there is no need to worry about intermediate results in a critical section, since other processes will not see the data until the process leaves the critical section; only the final result needs to be seen by other processes
this can be done with a synchronization variable, S, that has a single associated operation, synchronize(S), which synchronizes all local copies of the data store
this leads to weak consistency models, which have three properties:
1. Accesses to synchronization variables associated with a
data store are sequentially consistent (all processes see all
operations on synchronization variables in the same order)
2. No operation on a synchronization variable is allowed to be
performed until all previous writes have been completed
everywhere
3. No read or write operation on data items is allowed to be
performed until all previous operations on synchronization
variables have been performed (by doing a synchronization,
a process can be sure of getting the most recent values)
weak consistency enforces consistency on a group of
operations, not on individual reads and writes
e.g., S stands for synchronize; it means that a local copy
of the data store is brought up to date
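a toy sketch of this behavior (class and method names are my own, not from the chapter): writes stay in the writer's local copy until synchronize(S) brings all copies up to date

```python
# Weak consistency sketch: a write is visible only locally until a
# synchronize operation propagates all pending writes to every copy.

class WeakStore:
    """One data store with a local copy per process (hypothetical names)."""
    def __init__(self, n_procs):
        self.copies = [dict() for _ in range(n_procs)]  # local copy per process
        self.pending = []                               # writes not yet propagated

    def write(self, proc, key, value):
        self.copies[proc][key] = value   # visible locally only
        self.pending.append((key, value))

    def read(self, proc, key):
        return self.copies[proc].get(key)  # may be stale before a synchronize

    def synchronize(self):
        # synchronize(S): bring all local copies of the store up to date
        for key, value in self.pending:
            for copy in self.copies:
                copy[key] = value
        self.pending.clear()

store = WeakStore(n_procs=2)
store.write(0, "x", 1)        # process 0 writes inside its critical section
print(store.read(1, "x"))     # None: process 1 does not see the write yet
store.synchronize()           # leaving the critical section -> synchronize
print(store.read(1, "x"))     # 1: now up to date
```

the point of the sketch is that consistency is enforced only at synchronization points, not on each individual read or write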
4. Release Consistency
with weak consistency model, when a synchronization
variable is accessed, the data store does not know whether it
is done because the process has finished writing the shared
data or is about to start reading
if we can separate the two (entering a critical section and
leaving it), a more efficient implementation might be possible
the idea is to selectively guard shared data; the shared data
that are kept consistent are said to be protected
release consistency provides mechanisms to separate the
two kinds of operations or synchronization variables
an acquire operation is used to tell that a critical region is
about to be entered
a release operation is used to tell that a critical region has
just been exited
when a process does an acquire, the store will ensure that all
copies of the protected data are brought up to date to be
consistent with the remote ones; it does not guarantee that
locally made changes will be sent to other local copies
immediately
when a release is done, protected data that have been
changed are propagated out to other local copies of the store;
it does not necessarily import changes from other copies
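the acquire/release split can be sketched as follows (a toy model with assumed names, not the chapter's code): acquire pulls remote updates into the local copy, release pushes only the changed protected data out

```python
# Release consistency sketch: protected data are guarded by acquire
# (import remote changes) and release (export local changes).

class ReleaseStore:
    def __init__(self):
        self.master = {}   # stands in for the other copies of the store
        self.local = {}    # this process's copy of the protected data
        self.dirty = {}    # protected data changed since the last acquire

    def acquire(self):
        # entering the critical region: bring the local copy up to date
        self.local.update(self.master)
        self.dirty.clear()

    def write(self, key, value):
        self.local[key] = value   # stays local until the release
        self.dirty[key] = value

    def release(self):
        # leaving the critical region: propagate only the changed data
        self.master.update(self.dirty)
        self.dirty.clear()

p = ReleaseStore()
p.acquire()                   # acquire: critical region entered
p.write("x", 42)
assert "x" not in p.master    # not yet propagated to the other copies
p.release()                   # release: changed protected data pushed out
assert p.master["x"] == 42
```

note that, as in the slides, the release exports changes but does not import updates from other copies; that only happens at the next acquire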
6.3 Client-Centric Consistency Models
with many applications, updates happen very rarely; only one (or a few)
process(es) update the data while many read it and there are no
write-write conflicts; we need to handle only read-write conflicts;
e.g., DNS server, Web site
for such applications, it is even acceptable for readers to see old
versions of the data for some time (i.e., when the read-to-update ratio
is high)
transfer the update operation to other copies (also called active
replication)
iii. Unicasting versus Multicasting
multicasting can often be efficiently combined with a push-based
approach; with a pull-based approach, a unicast is usually more
efficient, since a single client requests its copy to be updated
6.5 Consistency Protocols
so far we have concentrated on various consistency models; a
consistency protocol describes an implementation of a specific
consistency model; we will discuss three types of protocols:
1. primary-based protocols
remote-write protocols
local-write protocols
2. replicated-write protocols
active replication
quorum-based protocols
3. cache-coherence protocols
1. Primary-Based Protocols
each data item x in the data store has an associated primary, which is
responsible for coordinating write operations on x; there are two types
of primary-based protocols: remote-write and local-write
a. Remote-Write Protocols
all read and write operations are carried out at a single fixed
(remote) server, to which they are forwarded
primary-based remote-write protocol with a fixed server to which all read and write operations are
forwarded
another approach is primary-backup protocols where reads
can be made from local backup servers while writes should
be made directly on the primary server
the backup servers are updated each time the primary is
updated
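the primary-backup scheme can be sketched as follows (hypothetical class names; the update is blocking, i.e., the primary updates every backup before the write completes, so a later read from any backup sees the write):

```python
# Primary-backup sketch: writes are forwarded to the primary, which
# updates each backup; reads are served by a local backup server.

class Replica:
    def __init__(self):
        self.data = {}

class PrimaryBackup:
    def __init__(self, n_backups):
        self.primary = Replica()
        self.backups = [Replica() for _ in range(n_backups)]

    def write(self, key, value):
        # all writes are forwarded to the primary ...
        self.primary.data[key] = value
        # ... which updates each backup before acknowledging (blocking)
        for b in self.backups:
            b.data[key] = value

    def read(self, backup_id, key):
        # reads can be made from a local backup server
        return self.backups[backup_id].data.get(key)

store = PrimaryBackup(n_backups=2)
store.write("x", 7)
print(store.read(0, "x"))   # 7: the backups were updated with the primary
```

a real implementation would make the backup update a network round trip; the blocking variant shown here is what makes every completed write visible on all backups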
b. Local-Write Protocols
two approaches
i. primary-based local-write protocol in which a single copy is migrated between processes
ii. primary-backup local-write protocol
the primary migrates between processes that wish to perform a
write operation, after which a protocol to update the backups
is followed
primary-backup protocol in which the primary migrates to the process wanting to perform an update
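a toy sketch of the migrating-primary idea (assumed names, not the chapter's code): the primary role moves to whichever replica wants to write, so subsequent local writes are cheap

```python
# Local-write sketch: before writing, a process first becomes the
# primary; the backups are then brought up to date.

class LocalWrite:
    def __init__(self, n):
        self.copies = [dict() for _ in range(n)]
        self.primary = 0                  # index of the current primary

    def write(self, proc, key, value):
        self.primary = proc               # primary migrates to the writer
        self.copies[proc][key] = value    # write applied locally
        for i, copy in enumerate(self.copies):
            if i != proc:                 # backups updated afterwards
                copy[key] = value

    def read(self, proc, key):
        return self.copies[proc].get(key)  # reads stay local

s = LocalWrite(3)
s.write(2, "x", 5)     # process 2 becomes the primary and writes
print(s.primary)       # 2
print(s.read(0, "x"))  # 5: the backups have been updated
```

in practice the backup update can be done with a nonblocking protocol, which is what makes this scheme attractive for, e.g., disconnected operation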
2. Replicated-Write Protocols
unlike primary-based protocols, write operations can be carried out
at multiple replicas; two approaches: active replication, where each
replica has a process that carries out update operations, and
quorum-based protocols, where replicas vote before carrying out
update operations
updates are generally propagated by means of the write operation
itself; the operations then need to be carried out in the same order
everywhere, which requires a totally-ordered multicast, e.g., using
Lamport's timestamps, or a central sequencer
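the quorum-based approach can be sketched as follows (Gifford's scheme; class names are hypothetical): each replica carries a version number, a write contacts NW replicas and a read NR, with NR + NW > N and NW > N/2, so that any read quorum overlaps the latest write quorum and no two write quorums can run independently

```python
# Quorum-based replication sketch: reads and writes each gather a
# quorum of replicas; the overlap rules guarantee a read sees the
# value of the most recent write.

import random

class QuorumStore:
    def __init__(self, n, nr, nw):
        # NR + NW > N and 2*NW > N are the classic quorum constraints
        assert nr + nw > n and 2 * nw > n, "invalid quorum sizes"
        self.n, self.nr, self.nw = n, nr, nw
        self.replicas = [(0, None)] * n   # each replica holds (version, value)

    def write(self, value):
        quorum = random.sample(range(self.n), self.nw)
        # the new version must exceed every version in the write quorum
        version = max(self.replicas[i][0] for i in quorum) + 1
        for i in quorum:
            self.replicas[i] = (version, value)

    def read(self):
        quorum = random.sample(range(self.n), self.nr)
        # the overlap guarantees the quorum contains the latest version
        return max(self.replicas[i] for i in quorum)[1]

store = QuorumStore(n=5, nr=3, nw=3)
store.write("a")
store.write("b")
print(store.read())   # "b": a read quorum always includes the last write
```

with N = 5, NR = 3, NW = 3, any read quorum of 3 replicas must intersect the 3 replicas of the last write, so the highest version number in the read quorum identifies the current value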
a) forwarding an invocation request from a replicated object
b) returning a reply to a replicated object
3. Cache-Coherence Protocols
caches form a special case of replication as they are generally
controlled by clients rather than by servers
coherence detection strategy: when are inconsistencies
actually detected?
static solution: prior to execution, a compiler performs an analysis
to determine which data may lead to inconsistencies if cached;
dynamic solution: at runtime, the server is contacted to check
whether a cached copy is still up to date
coherence enforcement strategy: how caches are kept
consistent with the copies stored at the servers
simplest solution: do not allow shared data to be
cached at all; let shared data be handled only by the servers