
4
NON-LOCKING
SCHEDULERS

4.1 INTRODUCTION
In this chapter we will examine two scheduling techniques that do not use
locks: timestamp ordering (TO) and serialization graph testing (SGT). As with
2PL, we’ll see aggressive and conservative as well as centralized and distributed
versions of both techniques.
We will also look at a very aggressive variety of schedulers, called certi-
fiers. A certifier never delays operations submitted by TMs. It always outputs
them right away. When a transaction is ready to commit, the certifier runs a
“certification test” to determine whether the transaction was involved in a non-
SR execution. If the transaction fails this test, then the certifier aborts the
transaction. Otherwise, it allows the transaction to commit. We will describe
certifiers based on all three techniques: 2PL, TO, and SGT.
In the final section, we will show how to combine scheduling techniques
into composite schedulers. For example, a composite scheduler could use 2PL
for part of its synchronization activity and TO for another part. Using the
composition rules, you can use the basic techniques we have discussed to
construct hundreds of different types of schedulers, all of which produce SR
executions.
Unlike 2PL, the techniques described in this chapter are not currently used
in many commercial products. Moreover, their performance relative to 2PL is
not well understood. Therefore, the material in this chapter is presented


mostly at a conceptual level, with fewer performance comparisons and practical
details than in Chapter 3.

4.2 TIMESTAMP ORDERING (TO)

Introduction

In timestamp ordering, the TM assigns a unique timestamp, ts(Ti), to each
transaction Ti. It generates timestamps using any of the techniques described
in Section 3.11, in the context of timestamp-based deadlock prevention. The
TM attaches a transaction's timestamp to each operation issued by the transac-
tion. It will therefore be convenient to speak of the timestamp of an operation
oi[x], which is simply the timestamp of the transaction that issued the opera-
tion. A TO scheduler orders conflicting operations according to their times-
tamps. More precisely, it enforces the following rule, called the TO rule.

TO Rule: If pi[x] and qj[x] are conflicting operations, then the DM
processes pi[x] before qj[x] iff ts(Ti) < ts(Tj).

The next theorem shows that the TO rule produces SR executions.

Theorem 4.1: If H is a history representing an execution produced by a
TO scheduler, then H is SR.

Proof: Consider SG(H). If Ti → Tj is an edge of SG(H), then there must
exist conflicting operations pi[x], qj[x] in H such that pi[x] < qj[x]. Hence
by the TO rule, ts(Ti) < ts(Tj). If a cycle T1 → T2 → ... → Tn → T1 existed
in SG(H), then by induction ts(T1) < ts(T1), a contradiction. So SG(H) is
acyclic, and by the Serializability Theorem, H is SR. ∎

By enforcing the TO rule, we are ensuring that every pair of conflicting opera-
tions is executed in timestamp order. Thus, a TO execution has the same effect
as a serial execution in which the transactions appear in timestamp order. In
the rest of this section we will present ways of enforcing the TO rule.

Basic TO
Basic TO is a simple and aggressive implementation of the TO rule. It accepts
operations from the TM and immediately outputs them to the DM in first-
come-first-served order. To ensure that this order does not violate the TO rule,
the scheduler rejects operations that it receives too late. An operation pi[x] is
too late if it arrives after the scheduler has already output some conflicting
operation qj[x] with ts(Tj) > ts(Ti). If pi[x] is too late, then it cannot be sched-
uled without violating the TO rule. Since the scheduler has already output
qj[x], it can only solve the problem by rejecting pi[x].
If pi[x] is rejected, then Ti must abort. When Ti is resubmitted, it must be
assigned a larger timestamp - large enough that its operations are less likely
to be rejected during its second execution. Notice the difference with
timestamp-based deadlock prevention, where an aborted transaction is resub-
mitted with the same timestamp to avoid cyclic restart. Here it is resubmitted
with a new and larger timestamp to avoid certain rejection.
To determine if an operation has arrived too late, the Basic TO scheduler
maintains for every data item x the maximum timestamps of Reads and Writes
on x that it has sent to the DM, denoted max-r-scheduled[x] and max-w-
scheduled[x] (respectively). When the scheduler receives pi[x], it compares
ts(Ti) to max-q-scheduled[x] for all operation types q that conflict with p. If
ts(Ti) < max-q-scheduled[x], then the scheduler rejects pi[x], since it has
already scheduled a conflicting operation with a larger timestamp. Otherwise,
it schedules pi[x] and, if ts(Ti) > max-p-scheduled[x], it updates max-p-
scheduled[x] to ts(Ti).
The scheduler must handshake with the DM to guarantee that operations
are processed by the DM in the order that the scheduler sent them. Even if the
scheduler decides that pi[x] can be scheduled, it must not send it to the DM
until every conflicting qj[x] that it previously sent has been acknowledged by
the DM. Notice that 2PL automatically takes care of this problem. 2PL does
not schedule an operation until all conflicting operations previously scheduled
have released their locks, which does not happen until after the DM acknow-
ledges those operations.
To enforce this handshake, the Basic TO scheduler also maintains, for each
data item x, the number of Reads and Writes that have been sent to, but not yet
acknowledged by, the DM. These are denoted r-in-transit[x] and w-in-
transit[x] (respectively). For each data item x the scheduler also maintains a
queue, queue[x], of operations that can be scheduled insofar as the TO rule is
concerned, but are waiting for acknowledgments from the DM to previously
sent conflicting operations. Conflicting operations are in the queue in times-
tamp order.
Let us consider a simple scenario to see how the scheduler uses these data
structures to enforce the TO Rule. For simplicity, assume that the timestamp of
each transaction (or operation) is equal to its subscript (i.e., ts(Ti) = i). We use
ack(oi[x]) to denote the acknowledgment that the DM sends to the scheduler
indicating that oi[x] has been processed. Suppose initially max-r-scheduled[x]
= 0, r-in-transit[x] = 0, and queue[x] is empty.
1. r1[x] arrives and the scheduler dispatches it to the DM. It sets max-r-
scheduled[x] to 1 and r-in-transit[x] to 1.
2. w2[x] arrives. Although the TO rule says w2[x] can be scheduled, since
r-in-transit[x] = 1 the scheduler must wait until it receives ack(r1[x]). It
therefore appends w2[x] to queue[x]. (w-in-transit[x] is unaffected.)
3. r4[x] arrives and although the TO rule says r4[x] can be scheduled, the
scheduler must wait until it receives ack(w2[x]). It therefore appends
r4[x] to queue[x] (after w2[x]). (r-in-transit[x] is unaffected.)
4. r5[x] arrives. Just like r4[x], it must wait for w2[x]. So, the scheduler
appends it to queue[x] (after r4[x]).
5. ack(r1[x]) arrives from the DM. The scheduler decrements r-in-transit[x]
to 0. It can now dispatch w2[x], so it removes w2[x] from queue[x],
sends it to the DM, and sets max-w-scheduled[x] to 2 and w-in-transit[x]
to 1. It cannot yet dispatch r4[x] and r5[x] because w-in-transit[x] > 0,
indicating that the DM has not yet acknowledged some conflicting
Write.
6. ack(w2[x]) arrives from the DM. The scheduler decrements w-in-
transit[x] to 0. Now it can send both r4[x] and r5[x] to the DM simulta-
neously. So, it sets max-r-scheduled[x] to 5 and r-in-transit[x] to 2, and
queue[x] becomes empty again.
The principles of operation of a Basic TO scheduler should now be clear.
When it receives an operation pi[x], it accepts it for scheduling if ts(Ti) ≥ max-
q-scheduled[x] for all operation types q that conflict with p. Otherwise, it
rejects pi[x] and Ti must be aborted. Once pi[x] is accepted for scheduling, the
scheduler dispatches it to the DM immediately, if for all operation types q that
conflict with p, q-in-transit[x] = 0 and there are no q operations in queue[x].
Otherwise a conflicting operation qj[x] is in transit between the scheduler and
the DM, or is waiting in queue[x], and so pi[x] must be delayed; it is therefore
inserted in queue[x]. Finally, when it receives ack(pi[x]), the scheduler updates
p-in-transit[x] accordingly, and removes all the operations in (the head of)
queue[x] that can now be dispatched and sends them to the DM.
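To make this bookkeeping concrete, here is a minimal sketch of a Basic TO scheduler in Python. It is an illustration only, not the book's algorithm verbatim: the names (BasicTO, max_sched, and so on) are ours, the DM is abstracted as a callback, and rejection is reduced to raising an exception.

from collections import defaultdict, deque

class Rejected(Exception):
    """Raised when an operation arrives too late under the TO rule."""

class BasicTO:
    def __init__(self, send_to_dm):
        self.send_to_dm = send_to_dm    # callback that dispatches an operation to the DM
        self.max_sched = defaultdict(lambda: {'r': 0, 'w': 0})   # max-r/w-scheduled[x]
        self.in_transit = defaultdict(lambda: {'r': 0, 'w': 0})  # r/w-in-transit[x]
        self.queue = defaultdict(deque)                          # queue[x] of delayed operations

    @staticmethod
    def _conflicting(kind):
        return ('w',) if kind == 'r' else ('r', 'w')

    def receive(self, kind, ts, x):
        """Handle an operation of type kind ('r' or 'w') on item x with timestamp ts."""
        if any(ts < self.max_sched[x][q] for q in self._conflicting(kind)):
            raise Rejected((kind, ts, x))          # too late: the TO rule would be violated
        if ts > self.max_sched[x][kind]:
            self.max_sched[x][kind] = ts
        blocked = any(self.in_transit[x][q] > 0 or
                      any(qk == q for qk, _ in self.queue[x])
                      for q in self._conflicting(kind))
        if blocked:
            self.queue[x].append((kind, ts))       # wait for conflicting acks from the DM
        else:
            self._dispatch(kind, ts, x)

    def _dispatch(self, kind, ts, x):
        self.in_transit[x][kind] += 1
        self.send_to_dm(kind, ts, x)

    def ack(self, kind, ts, x):
        """The DM acknowledged an operation on x; dispatch whatever is now unblocked."""
        self.in_transit[x][kind] -= 1
        while self.queue[x]:
            head_kind, head_ts = self.queue[x][0]
            if any(self.in_transit[x][q] > 0 for q in self._conflicting(head_kind)):
                break
            self.queue[x].popleft()
            self._dispatch(head_kind, head_ts, x)

Replaying the six-step scenario above against this sketch gives the same behavior: r1[x] goes out immediately, w2[x] only after ack(r1[x]), and r4[x] and r5[x] together after ack(w2[x]).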

Strict TO
Although the TO rule enforces serializability, it does not necessarily ensure
recoverability. For example, suppose that ts(T1) = 1 and ts(T2) = 2, and
consider the following history: w1[x] r2[x] w2[y] c2. Conflicting operations
appear in timestamp order. Thus this history could be produced by Basic TO.
Yet it is not recoverable: T2 reads x from T1, T2 is committed, but T1 is not.
As we discussed in Chapters 1 and 2, we usually want the scheduler to
enforce an even stronger condition than recoverability, namely, strictness. Here
is how Basic TO can be modified to that end.
Recall that w-in-transit[x] denotes the number of w[x] operations that the
scheduler has sent to the DM but that the DM has not yet acknowledged. Since
two conflicting operations cannot be "in transit" at the same time and Writes
on the same data item conflict, w-in-transit[x] at any time is either 0 or 1.
The Strict TO scheduler works like Basic TO in every respect, except that
it does not set w-in-transit[x] to 0 when it receives the DM's acknowledgment
of a wi[x]. Instead it waits until it has received acknowledgment of ai or ci. It
then sets w-in-transit[x] to zero for every x for which it had sent wi[x] to the
DM. This delays all rj[x] and wj[x] operations with ts(Tj) > ts(Ti) until after Ti
has committed or aborted. This means that the execution output by the sched-
uler to the DM is strict. Notice that since Tj waits for Ti only if ts(Tj) > ts(Ti),
these waiting situations cannot lead to deadlock.
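Continuing the hypothetical BasicTO sketch above, this modification amounts to overriding how Write acknowledgments are handled: the w-in-transit counter is released only when the DM acknowledges ai or ci. The sketch assumes that a transaction's Commit or Abort reaches the scheduler only after all of its operations have been dispatched.

class StrictTO(BasicTO):
    """Strict TO: release w-in-transit[x] at termination, not at the Write's own ack."""

    def __init__(self, send_to_dm):
        super().__init__(send_to_dm)
        self.pending_writes = {}                  # transaction timestamp -> items it wrote

    def receive(self, kind, ts, x):
        super().receive(kind, ts, x)
        if kind == 'w':
            self.pending_writes.setdefault(ts, set()).add(x)

    def ack(self, kind, ts, x):
        if kind == 'w':
            return                                # ignore the Write's own acknowledgment
        super().ack(kind, ts, x)

    def ack_termination(self, ts):
        """DM acknowledged a_i or c_i: release every w-in-transit[x] that T_i still holds."""
        for x in self.pending_writes.pop(ts, set()):
            super().ack('w', ts, x)               # decrement the counter, dispatch waiters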
Note that w-in-transit[x] acts like a lock. It excludes access to x by other
transactions until the transaction that owns this "lock" - the transaction that
issued the Write that is "in transit" - commits or aborts. This may lead one to
believe that we, in effect, turned our TO scheduler into a 2PL scheduler. The
following history shows that this is not so:

H1 = r2[x] w3[x] c3 w1[y] c1 r2[y] c2

H1 is equivalent to the serial history T1 T2 T3, and thus is SR. Moreover, it is
strict. The only potentially troublesome conflict is w1[y] < r2[y], but c1 < r2[y],
as required for strictness. If ts(T1) < ts(T2) < ts(T3), this history could be
produced by the Strict TO scheduler we described. However, H1 could not
possibly have been produced by 2PL. In 2PL, T2 must release its read lock on x
before w3[x] but may not set its read lock on y until after w1[y]. Since w3[x] <
w1[y], T2 would release a lock before obtaining another lock, violating the two
phase rule.
It is possible to modify Basic TO to enforce only the weaker conditions of
recoverability or cascadelessness (see Exercise 4.2).

Timestamp Management

Suppose we store timestamps in a table, where each entry is of the form [x,
max-r-scheduled[x], max-w-scheduled[x]]. This table could consume a lot of
space. Indeed, if data items are small, this timestamp information (and o-in-
transit) could occupy as much space as the database itself. This is a potentially
serious problem.
We can solve the problem by exploiting the following observation.
Suppose TMs use relatively accurate real time clocks to generate timestamps,
and suppose transactions execute for relatively short periods of time. Then at
any given time t, the scheduler can be pretty sure it won’t receive any more
operations with timestamps smaller than t - δ, where δ is large compared to
transaction execution time. The only reason the scheduler needs the time-
stamps in max-r-scheduled[x] and max-w-scheduled[x], say tsr and tsw, is to
reject Reads and Writes with even smaller timestamps than tsr and tsw. So, once
tsr and tsw are smaller than t - δ, they are of little value to the scheduler,
because it is unlikely to receive any operation with a timestamp smaller than tsr
or tsw.
Using this observation, we can periodically purge from the timestamp
table entries that have uselessly small timestamps. Each Purge operation
uses a timestamp tsmin, which is the t - δ value in the previous paragraph.
Purge(tsmin) removes every entry [x, max-r-scheduled[x], max-w-scheduled[x]]
from the timestamp table where max-r-scheduled[x] < tsmin and max-w-
scheduled[x] < tsmin. In addition, it tags the table with tsmin, indicating that a
Purge with that timestamp value has taken place.
Once the timestamp table has been purged, the scheduler must use a modi-
fied test to determine whether an operation oi[x] is too late. First, it looks for
an entry for x in the timestamp table. If it finds one, it compares ts(Ti) to max-
r-scheduled[x] and/or max-w-scheduled[x] in the usual manner. However, if
there is no entry for x, then it must compare ts(Ti) with the tsmin that tags the
table. If ts(Ti) ≥ tsmin, then if an entry for x was purged from the table, it was
irrelevant, and thus oi[x] is not too late. But if ts(Ti) < tsmin, then the last Purge
might have deleted an entry for x, where either ts(Ti) < max-r-scheduled[x] or
ts(Ti) < max-w-scheduled[x]. If that entry still existed, it might tell the sched-
uler to reject oi[x]. To be safe, the scheduler must therefore reject oi[x].
If tsmin is sufficiently small, it will be rare to reject an oi[x] with ts(Ti) <
tsmin. However, the smaller the value of tsmin, the smaller the number of entries
that Purge will delete, and hence the larger the size of the timestamp table.
Therefore, selecting tsmin entails a tradeoff between decreasing the number of
rejections and minimizing the size of the timestamp table.
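A sketch of a purged timestamp table along these lines (class and method names are ours; the rejection test follows the modified rule just described):

class TimestampTable:
    """Timestamp table with periodic purging (illustrative names)."""

    def __init__(self):
        self.entries = {}      # x -> {'r': max-r-scheduled[x], 'w': max-w-scheduled[x]}
        self.ts_min = 0        # timestamp of the most recent Purge

    def record(self, kind, ts, x):
        e = self.entries.setdefault(x, {'r': self.ts_min, 'w': self.ts_min})
        e[kind] = max(e[kind], ts)

    def too_late(self, kind, ts, x):
        """True if an operation of this kind on x with timestamp ts must be rejected."""
        conflicts = ('w',) if kind == 'r' else ('r', 'w')
        e = self.entries.get(x)
        if e is not None:
            return any(ts < e[q] for q in conflicts)
        return ts < self.ts_min    # the entry may have been purged: reject to be safe

    def purge(self, ts_min):
        """Purge(ts_min): drop entries whose Read and Write timestamps are both below ts_min."""
        self.entries = {x: e for x, e in self.entries.items()
                        if e['r'] >= ts_min or e['w'] >= ts_min}
        self.ts_min = max(self.ts_min, ts_min)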

Distributed TO Schedulers
TO schedulers are especially easy to distribute. Each site can have its own TO
scheduler which schedules operations that access the data stored at that site.
The decision to schedule, delay, or reject an operation oi[x] depends only on
other operations accessing x. Each scheduler can maintain all the information
about the operations accessing the data items it manages. It can therefore go
about its decisions independently of the other schedulers. Unlike distributed
2PL, where coordination among distributed schedulers is usually needed to
handle distributed deadlocks, distributed TO requires no inter-scheduler
communication whatsoever.

Conservative TO
If a Basic TO scheduler receives operations in an order widely different from
their timestamp order, then it may reject too many operations, thereby causing
too many transactions to abort. This is due to its aggressive nature. We can
remedy this problem by designing more conservative schedulers based on the
TO rule.
One approach is to require the scheduler to artificially delay each opera-
tion it receives for some period of time. To see why this helps avoid rejections,
consider some operation oi[x]. The danger in scheduling oi[x] right away is
that the scheduler may later receive a conflicting operation with a smaller
timestamp, which it will therefore have to reject. However, if it holds oi[x] for
a while before scheduling it, then there is a better chance that any conflicting
operations with smaller timestamps will arrive in time to be scheduled. The
longer the scheduler holds each operation before scheduling it, the fewer rejec-
tions it will be forced to make. Like other conservative schedulers, conserva-
tive TO delays operations to avoid rejections.
Of course, delaying operations for too long also has its problems, since the
delays slow down the processing of transactions. When designing a conserva-
tive TO scheduler, one has to strike a balance by adding enough delay to avoid
too many rejections without slowing down transactions too much.
An "ultimate conservative" TO scheduler never rejects operations and thus
never causes transactions to abort. Such a scheduler can be built if we make
certain assumptions about the system. As with Conservative 2PL, one such
assumption is that transactions predeclare their readset and writeset and the
TM conveys this information to the scheduler. We leave the construction of a
conservative TO scheduler based on this assumption as an exercise (Exercise
4.11).
In this section we’ll concentrate on an ultimate conservative TO scheduler
based on a different assumption, namely, that each TM submits its operations
to each DM in timestamp order. One way to satisfy this assumption is to adopt
the following architecture. At any given time, each TM supervises exactly one
transaction (e.g., there is one TM associated with each terminal from which
users can initiate transactions). Each TM’s timestamp generator returns
increasing timestamps every time it’s called. Thus, each TM runs transactions
serially, and each transaction gets a larger timestamp than previous ones super-
vised by that TM. Of course, since many TMs may be submitting operations
to the scheduler in parallel, the scheduler does not necessarily receive opera-
tions serially.
Under these assumptions we can build an ultimate conservative TO sched-
uler as follows. The scheduler maintains a queue, called unsched-queue,
containing operations it has received from the TMs but has not yet scheduled.
The operations in unsched-queue are kept in timestamp order, the operations
with the smallest timestamp being at the head of the queue. Operations with
the same timestamp are placed according to the order received, the earlier ones
being closer to the head.
When the scheduler receives pi[x] from a TM, it inserts pi[x] at the appro-
priate place in unsched-queue to maintain the order properties just given. The
scheduler then checks if the operation at the head of unsched-queue is ready to
be dispatched to the DM. The head of unsched-queue, say qj[x], is said to be
ready if
1. unsched-queue contains at least one operation from every TM, and
2. all operations conflicting with qj[x] previously sent to the DM have been
acknowledged by the DM.
If the head of unsched-queue is in fact ready, the scheduler removes it from


unsched-queue and sends it to the DM. The scheduler repeats this activity until
the head of unsched-queue is no longer ready.
Ready rule (1) requires that we know if there are operations from all TMs
in unsched-queue. One way of doing this efficiently is to maintain, for each
TMv, the count of operations in unsched-queue received from TMv, denoted
op-count[v]. To enable the scheduler to decrement the appropriate op-count
when it removes an operation from unsched-queue, unsched-queue should
actually store pairs of the form (v, oi[x]), meaning that operation oi[x] was
submitted by TMv. Each scheduler needs this information anyway, so it knows
which TM should receive the acknowledgment for each operation.
Ready rule (2) is a handshake between the scheduler and the DM. This can
be implemented as in Basic TO, by keeping for each x a count of the Reads and
Writes that are in transit between the scheduler and the DM.
It is easy to see that the ultimate conservative TO scheduler described
previously enforces the TO rule. This follows from the fact that the operations
in unsched-queue are maintained in timestamp order, and it is always the head
of the queue that is sent to the DM. Thus, not only conflicting operations but
all operations are scheduled in timestamp order. Moreover, the handshake
mechanism guarantees that the DM will process conflicting operations in
timestamp order.
One problem with this scheduler is that it may get stuck if a TM stops
sending operations for a while. To send any operation to the DM, the sched-
uler must have at least one unscheduled operation from all TMs. If some TM
has no operations to send, then the scheduler is blocked. To avoid this prob-
lem, if a TM has no operations to send to the scheduler, it sends a Null opera-
tion. A Null must carry a timestamp consistent with the requirement that each
TM submits operations to the scheduler in timestamp order. When a TM sends
a Null to the scheduler, it is promising that every operation it sends in the
future will have a timestamp greater than ts(Null). The scheduler treats Nulls
just like other operations, except that when a Null becomes the head of the
queue, the scheduler simply removes it, and decrements the appropriate op-
count, but does not send it to the DM.
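The sketch below (ours) pulls these pieces together: a timestamp-ordered unsched-queue, ready rules (1) and (2), per-TM op-counts, and Null operations that are consumed without being sent to the DM. It assumes, as stated above, that each TM submits its operations in timestamp order.

import heapq
from collections import defaultdict

class ConservativeTO:
    """Ultimate conservative TO: dispatch strictly in timestamp order, never reject."""

    def __init__(self, tm_ids, send_to_dm):
        self.send_to_dm = send_to_dm
        self.op_count = {v: 0 for v in tm_ids}        # op-count[v] per TM
        self.unsched = []                             # heap of (ts, seq, tm, op); op is None for a Null
        self.in_transit = defaultdict(lambda: {'r': 0, 'w': 0})
        self._seq = 0                                 # tie-break: equal timestamps kept in arrival order

    def receive(self, tm, ts, op):
        """op is ('r'|'w', x) from TM tm, or None for a Null operation."""
        heapq.heappush(self.unsched, (ts, self._seq, tm, op))
        self._seq += 1
        self.op_count[tm] += 1
        self._dispatch_ready()

    def ack(self, kind, x):
        self.in_transit[x][kind] -= 1
        self._dispatch_ready()

    def _head_ready(self):
        if not self.unsched or any(c == 0 for c in self.op_count.values()):
            return False                              # ready rule (1): one operation from every TM
        _, _, _, op = self.unsched[0]
        if op is None:
            return True                               # Nulls are simply consumed
        kind, x = op
        conflicts = ('w',) if kind == 'r' else ('r', 'w')
        return all(self.in_transit[x][q] == 0 for q in conflicts)   # ready rule (2)

    def _dispatch_ready(self):
        while self._head_ready():
            _, _, tm, op = heapq.heappop(self.unsched)
            self.op_count[tm] -= 1
            if op is not None:
                kind, x = op
                self.in_transit[x][kind] += 1
                self.send_to_dm(kind, x)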
Particular care must be exercised if a TM fails and is therefore unable to
send any operations, whether Null or not, to the scheduler. In this case, the
scheduler must somehow be informed of the failure, so that it does not expect
operations from that TM. After the TM is repaired and before it starts submit-
ting operations to the scheduler, the latter must be made aware of that fact.
Indeed, the TM should not be allowed to submit operations to the scheduler
before it is explicitly directed that it may do so.
A second, and maybe more serious, problem with conservative TO sched-
ulers is that they are, true to their name, extremely restrictive. The executions
they produce are serial! There are several methods for enhancing the degree of
concurrency afforded by conservative TO schedulers.

One way to improve conservative TO is to avoid the serialization of
nonconflicting operations by using transaction classes. A transaction class is
defined by a readset and a writeset. A transaction is a member of a transaction
class if its readset and writeset are subsets of the class’s readset and writeset
(respectively). Transaction classes need not be mutually exclusive. That is, they
may have overlapping readsets and writesets, so a transaction can be a member
of many classes.
We associate each TM with exactly one class and require that each transac-
tion be a member of the class of the TM that supervises it. Conservative TO
exploits the association of TMs with transaction classes by weakening ready
rule (1). Instead of requiring that operations are received from all TMs, the
scheduler only needs to have received operations from the TMs associated
with transaction classes that contain x in their writeset, if the head of unsched-
queue is ri[x], or x in either their readset or writeset, if the head of the queue is
wi[x]. By relaxing the condition under which a queued operation may be sent
to the DM, we potentially reduce the time for which operations will be
delayed.
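A sketch of this weakened ready rule (1), assuming (hypothetically) that the class of each TM is available as a (readset, writeset) pair and that op_count gives the number of unscheduled operations from each TM:

def ready_rule_1_by_class(head_op, classes, op_count):
    """Weakened ready rule (1): only TMs whose classes could conflict need be represented.

    head_op  : ('r'|'w', x), the operation at the head of unsched-queue
    classes  : TM id v -> (readset, writeset) of the class associated with TMv
    op_count : TM id v -> number of unscheduled operations received from TMv
    """
    kind, x = head_op
    relevant = []
    for v, (readset, writeset) in classes.items():
        if kind == 'r':
            if x in writeset:                  # only Writes conflict with a Read
                relevant.append(v)
        else:
            if x in readset or x in writeset:  # Reads and Writes conflict with a Write
                relevant.append(v)
    return all(op_count[v] > 0 for v in relevant)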
Each transaction must predeclare its readset and writeset, so the system
can direct it to an appropriate TM. Alternatively, a preprocessor or compiler
could determine these sets and thereby the class(es) to which a transaction
belongs. The scheduler must know which TMs are associated with which
classes. Class definitions and their associations with TMs must remain static
during normal operation of the DBS. Changing this information can be done,
but usually must be done off-line so that various system components can be
simultaneously informed of such changes and modified appropriately.
A more careful analysis of transaction classes can lead to conditions for
sending operations to the DM that are even weaker than the one just
described. Such an analysis can be conveniently carried out in terms of a graph
structure called a conflict graph. However, we shall not discuss this technique
in this book. Relevant references appear in the Bibliographic Notes.

4.3 SERIALIZATION GRAPH TESTING (SGT)


Introduction
So far, we have seen schedulers that use locks or timestamps. In this section we
shall discuss a third type of scheduler, called serialization graph testing (SGT)
schedulers. An SGT scheduler maintains the SG of the history that represents
the execution it controls. As the scheduler sends new operations to the DM,
the execution changes, and so does the SG maintained by the scheduler. An
SGT scheduler attains SR executions by ensuring the SG it maintains always
remains acyclic.
According to the definition in Chapter 2, an SG contains nodes for all
committed transactions and for no others. Such an SG differs from the one that
is usually maintained by an SGT scheduler, in two ways. First, the SGT sched-
uler’s SG may not include nodes corresponding to all committed transactions,
especially those that committed long ago. Second, it usually includes nodes for
all active transactions, which by definition are not yet committed. Due to these
differences, we use a different term, Stored SG (SSG), to denote the SG main-
tained by an SGT scheduler.

Basic SGT
When an SGT scheduler receives an operation pi[x] from the TM, it first adds
a node for Ti in its SSG, if one doesn't already exist. Then it adds an edge from
Tj to Ti for every previously scheduled operation qj[x] that conflicts with pi[x].
We have two cases:
1. The resulting SSG contains a cycle. This means that if pi[x] were to be
scheduled now (or at any point in the future), the resulting execution
would be non-SR. Thus the scheduler rejects pi[x]. It sends ai to the DM
and, when ai is acknowledged, it deletes from the SSG Ti and all edges
incident with Ti. Deleting Ti makes the SSG acyclic again, since all
cycles that existed involved Ti. Since the SSG is acyclic, the execution
produced by the scheduler now - with Ti aborted - is SR.
2. The resulting SSG is still acyclic. In this case, the scheduler can accept
pi[x]. It can schedule pi[x] immediately, if all conflicting operations
previously scheduled have been acknowledged by the DM; otherwise, it
must delay pi[x] until the DM acknowledges all conflicting operations.
This handshake can be implemented as in Basic TO. Namely, for each
data item x the scheduler maintains queue[x] where delayed operations
are inserted in first-in-first-out order, and two counts, r-in-transit[x] and
w-in-transit[x], for keeping track of unacknowledged Reads and Writes
for each x sent to the DM.
To determine if an operation conflicts with a previously scheduled one, the
scheduler can maintain, for each transaction Ti that has a node in the SSG, the sets
of data items for which Reads and Writes have been scheduled. These sets will
be denoted r-scheduled[Ti] and w-scheduled[Ti], respectively. Then, pi[x]
conflicts with a previously scheduled operation of transaction Tj iff x ∈ q-
scheduled[Tj], for q conflicting with p.
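Here is a minimal sketch of these Basic SGT structures in Python (our names; the Basic TO style handshake with the DM is omitted, cycle detection is a plain depth-first search, and, unlike the text, the aborted transaction is deleted immediately rather than after the DM acknowledges ai):

from collections import defaultdict

class BasicSGT:
    """Stored serialization graph plus per-transaction r/w-scheduled sets."""

    def __init__(self):
        self.edges = defaultdict(set)       # Tj -> set of Ti such that Tj -> Ti is in the SSG
        self.scheduled = defaultdict(lambda: {'r': set(), 'w': set()})  # o-scheduled[Ti]

    def _reaches(self, src, dst, seen=None):
        seen = seen or set()
        if src == dst:
            return True
        seen.add(src)
        return any(n not in seen and self._reaches(n, dst, seen) for n in self.edges[src])

    def receive(self, kind, ti, x):
        """Try to accept p_i[x]; return True if accepted, False if T_i must be aborted."""
        conflicts = ('w',) if kind == 'r' else ('r', 'w')
        preds = {tj for tj, ops in self.scheduled.items()
                 if tj != ti and any(x in ops[q] for q in conflicts)}
        # Adding Tj -> Ti creates a cycle iff Ti already reaches some such Tj.
        if any(self._reaches(ti, tj) for tj in preds):
            self.abort(ti)
            return False
        for tj in preds:
            self.edges[tj].add(ti)
        self.scheduled[ti][kind].add(x)
        return True

    def abort(self, ti):
        """Delete Ti and all edges incident with it."""
        self.scheduled.pop(ti, None)
        self.edges.pop(ti, None)
        for succs in self.edges.values():
            succs.discard(ti)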
A significant practical consideration is when the scheduler may discard the
information it has collected about a transaction. To detect conflicts, we have to
maintain the readset and writeset of every transaction, which could consume a
lot of space. It is therefore important to discard this information as soon as
possible.
One may naively assume that the scheduler can delete information about a
transaction as soon as it commits. Unfortunately, this is not so. For example,
consider the (partial) history

H2 = rk+1[x] w1[x] w1[y1] c1 w2[x] w2[y2] c2 ... wk[x] wk[yk] ck

Since SSG(H2) is acyclic, the execution represented by H2 could have been
produced by an SGT scheduler. Now, suppose that the scheduler receives
wk+1[z]. According to the SGT policy, the operation can be scheduled iff z ∉ {x,
y1, ..., yk}. But for the scheduler to be able to test that, it must remember that x,
y1, ..., yk were the data items accessed by transactions T1, T2, ..., Tk, even
though these transactions have committed.
The scheduler can delete information about a terminated transaction Ti iff
Ti could not, at any time in the future, be involved in a cycle of the SSG. For a
node to participate in a cycle it must have at least one incoming and one outgo-
ing edge. As in H2, new edges out of a transaction Ti may arise in the SSG even
after Ti terminates. However, once Ti terminates no new edges directed to it
may subsequently arise. Therefore, once a terminated transaction has no
incoming edges in the SSG, it cannot possibly become involved in a cycle in the
future. So a safe rule for deleting nodes is that information about a transaction
may be discarded as soon as that transaction has terminated and is a source
(i.e., a node with no incoming edges) in the SSG. If it is not a source at the time
it terminates, then it must wait until all transactions that precede it have termi-
nated and therefore have been deleted from the SSG.
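Continuing that hypothetical sketch, the deletion rule can be expressed as a small garbage-collection pass over terminated transactions:

class BasicSGTWithGC(BasicSGT):
    """Basic SGT plus the safe node-deletion rule: drop terminated sources of the SSG."""

    def __init__(self):
        super().__init__()
        self.terminated = set()

    def terminate(self, ti):
        self.terminated.add(ti)
        self._collect()

    def _collect(self):
        changed = True
        while changed:                # deleting one source may expose another source
            changed = False
            has_pred = {t for succs in self.edges.values() for t in succs}
            for ti in list(self.terminated):
                if ti not in has_pred:          # terminated and a source: safe to discard
                    self.terminated.discard(ti)
                    self.scheduled.pop(ti, None)
                    self.edges.pop(ti, None)
                    changed = True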
By explicitly checking whether the SSG is acyclic, an SGT scheduler allows
any interleaving of Reads and Writes that is SR. In this sense, it is more lenient
than TO (which only allows timestamp ordered executions of Reads and
Writes) and 2PL (which doesn’t allow certain interleavings of Reads and
Writes, such as history H1 in Section 4.2). However, it attains this flexibility at
the expense of extra overhead in maintaining the SG and checking for cycles.
Moreover, it is currently unknown under what conditions the extra leniency of
SGT leads to improved throughput or response time.

Conservative SGT

A conservative SGT scheduler never rejects operations but may delay them. As
with 2PL and TO, we can achieve this if each transaction Ti predeclares its
readset and writeset, denoted r-set[Ti] and w-set[Ti], by attaching them to its
Start operation.
When the scheduler receives Ti's Start, it saves r-set[Ti] and w-set[Ti]. It
then creates a node for Ti in the SSG and adds edges Tj → Ti for every Tj in the
SSG such that p-set[Ti] ∩ q-set[Tj] ≠ { }1 for all pairs of conflicting operation
types p and q.

1We use "{ }" to denote the empty set.

For each data item x the scheduler maintains the usual queue[x] of delayed
operations that access x. Conflicting operations in queue[x], say pi[x] and
qj[x], are kept in an order consistent with SSG edges. That is, if Tj → Ti is in
the SSG, then qj[x] is closer to the head of queue[x] than pi[x]; thus, qj[x] will
be dequeued before pi[x]. The order of nonconflicting operations in queue[x]
(i.e., Reads) is immaterial; for specificity, let's say they are kept in order of
arrival. When the scheduler receives operation oi[x] from the TM, it inserts
oi[x] in queue[x] in accordance with the ordering just specified.
The scheduler may send the operation at the head of some queue to the
DM iff the operation is "ready." An operation pi[x] is ready if
1. all operations that conflict with pi[x] and were previously sent to the
DM have been acknowledged; and
2. for every Tj that directly precedes Ti in the SSG (i.e., Tj → Ti is in the
SSG) and for every operation type q that conflicts with p, either x ∉ q-
set[Tj] or qj[x] has already been received by the scheduler (i.e., x ∈ q-
scheduled[Tj]).
Condition (1) amounts to the usual handshake that makes sure the DM
processes conflicting operations in the order they are scheduled. Condition (2)
is what makes this scheduler avoid aborts. The rationale for it is this. Suppose
Tj precedes Ti in the SSG. If the SSG is acyclic, then the execution is equivalent
to a serial one in which Tj executes before Ti. Thus if pi[x] and qj[x] conflict,
qj[x] must be scheduled before pi[x]. So if pi[x] is received before qj[x], it must
be delayed. Otherwise, when qj[x] is eventually received it will have to be
rejected, as its acceptance would create a cycle involving Ti and Tj in the SSG.
Note that to evaluate condition (2), the Conservative SGT scheduler must,
in addition to o-set[Ti], maintain the sets o-scheduled[Tj], as discussed in
Basic SGT.
One final remark about condition (2). You may wonder why we have
limited it only to transactions that directly precede Ti. The reason is that the
condition is necessarily satisfied by transactions Tj that indirectly precede Ti;
that is, the shortest path from Tj to Ti has more than one edge. Then Tj and Ti
do not issue conflicting operations. In particular, x ∈ p-set[Ti] implies x ∉
q-set[Tj] for all conflicting operation types p, q.
Every time it receives pi[x] from the TM or an acknowledgment of some
qj[x] from the DM, the scheduler checks if the head of queue[x] is ready. If so,
it dequeues the operation and sends it to the DM. The scheduler then repeats
the same process with the new head of queue[x] until the queue is empty or its
head is not ready. The policy for discarding information about terminated
transactions is the same as for Basic SGT.
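The ready test itself is compact. A sketch in Python, with hypothetical names, assuming the predeclared sets, the scheduled sets, the SSG predecessor lists, and the in-transit counters described above are all available:

def is_ready(op, ssg_preds, q_set, q_scheduled, in_transit):
    """Ready test for the head p_i[x] of queue[x] in Conservative SGT.

    op          : (kind, ti, x) with kind in {'r', 'w'}
    ssg_preds   : ti -> set of direct predecessors Tj in the SSG
    q_set       : (tj, kind) -> predeclared r-set / w-set of Tj
    q_scheduled : (tj, kind) -> items for which such operations of Tj were already received
    in_transit  : (x, kind) -> number of unacknowledged operations on x sent to the DM
    """
    kind, ti, x = op
    conflicts = ('w',) if kind == 'r' else ('r', 'w')

    # Condition (1): every previously sent conflicting operation has been acknowledged.
    if any(in_transit.get((x, q), 0) > 0 for q in conflicts):
        return False

    # Condition (2): every direct predecessor has already submitted its conflicting
    # operations on x, or has declared that it will never issue them.
    for tj in ssg_preds.get(ti, set()):
        for q in conflicts:
            if x in q_set.get((tj, q), set()) and x not in q_scheduled.get((tj, q), set()):
                return False
    return True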

Recoverability Considerations

Basic and Conservative SGT produce SR histories, but not necessarily recover-
able - much less cascadeless or strict - ones.
Both types of SGT schedulers can be modified to produce only strict (and
SR) histories by using the same technique as Strict TO. The scheduler sets w-
in-transit[x] to 1 when it sends wi[x] to the DM. But rather than decrementing
it back to zero when the DM acknowledges wi[x], the scheduler does so when
it receives an acknowledgment that the DM processed ai or ci. Recall that w-in-
transit[x] is used to delay sending rj[x] and wj[x] operations until a previously
sent wi[x] is processed. By postponing the setting of w-in-transit[x] to zero, the
scheduler delays rj[x] and wj[x] until the transaction that last wrote into x has
terminated, thereby ensuring the execution is strict.
It's also easy to modify Basic or Conservative SGT to enforce only the
weaker condition of avoiding cascading abort. For this, it is only necessary to
make sure that before scheduling ri[x], the transaction from which Ti will read
x has committed. To do this, every time the scheduler receives an acknowledg-
ment of a Commit operation, cj, it marks node Tj in the SSG as "committed."
Now suppose the scheduler receives ri[x] from the TM and accepts it. Let Tj be
a transaction that satisfies:
1. x ∈ w-scheduled[Tj]; and
2. for any Tk ≠ Tj such that x ∈ w-scheduled[Tk], Tk → Tj is in the SSG.
At most one Tj can satisfy both of these conditions. (It is possible that no trans-
action does, for instance, if all transactions that have ever written into x have
been deleted from the SSG by now.) The scheduler can send ri[x] to the DM
only if Tj is marked "committed" or no such Tj exists.
The same idea can be used to enforce only the weaker condition of recov-
erability. The difference is that instead of delaying individual Reads, the sched-
uler now delays Ti's Commit until all transactions Tj from which Ti has read
either are marked "committed" or have been deleted from the SSG. Also, since
in this case cascading aborts are possible, when the scheduler either receives
Ti's Abort from the TM or causes Ti to abort to break a cycle in the SSG, it
also aborts any transaction Tj that read from Ti. An SGT scheduler can detect
if Tj has read from Ti by checking if
1. Ti → Tj is in the SSG, and
2. there is some x ∈ r-scheduled[Tj] ∩ w-scheduled[Ti] such that for every
Tk where Ti → Tk and Tk → Tj are in the SSG, x ∉ w-scheduled[Tk].

Distributed SGT Schedulers


SGT schedulers present problems in distribution of control since their deci-
sions are based on the SSG, an inherently global structure. The problems are
reminiscent of distributed deadlocks. If each scheduler maintains a local SSG
reflecting only the conflicts on the data items that it manages, then it is possible
to construct executions in which all such local SSGs are acyclic, yet the global
SSG contains a cycle.
For example, consider an execution that produces the history

r1[x1] w2[x1] r2[x2] w3[x2] ... rk-1[xk-1] wk[xk-1] rk[xk]

Suppose that there are k sites, and xi is stored at site i, for 1 ≤ i ≤ k. At each
site i < k, the local SSG contains the edge Ti → Ti+1. Now, if T1 issues w1[xk],
the local SSG at site k contains the edge Tk → T1. Thus, globally we have a
cycle T1 → T2 → ... → Tk → T1, yet all local SSGs are acyclic (each consists of a
single edge).
This is essentially the same problem we had with global deadlock detec-
tion in 2PL (Section 3.11). There is an important difference, however, that
makes our new problem more severe. In global deadlock detection any trans-
actions involved in a cycle in the WFG are just waiting for each other and thus
none can proceed; in particular, none can commit. So we may take our time in
checking for global cycles, merely at the risk of delaying the discovery of a
deadlock. On the other hand, transactions that lie along an SSG cycle do not
wait for each other. Since a transaction should not commit until the scheduler
has checked that the transaction is not in an SSG cycle, global cycle detection
must take place at least at the same rate as transactions are processed. In typi-
cal applications, the cost of this is prohibitive.

*Space-Efficient SGT Schedulers


The implementations of SGT just described have the unpleasant property of
potentially requiring an unbounded amount of space. History

H2 = rk+1[x] w1[x] w1[y1] c1 w2[x] w2[y2] c2 ... wk[x] wk[yk] ck

is a model for histories with an arbitrary number of committed transactions
about which the scheduler must keep readset and writeset information. Recall
that in the implementation we discussed, each transaction Ti that appears in
the SSG requires space for representing Ti and, more substantially, for storing
r-scheduled[Ti] and w-scheduled[Ti]. If t is the total number of transactions
appearing in an execution such as H2, and D is the set of data items, then the
scheduler may require space proportional to t2 for storing the SSG and space
proportional to t·|D| for other information maintained by the scheduler. Since
there is no bound on the number of transactions in the execution, the sched-
uler's space requirements can grow indefinitely.
In all of the other schedulers we have studied, the space required is propor-
tional to a·|D|, where a is the number of active transactions (i.e., those that
have not yet committed or aborted). We can make SGT's space requirements
comparable to those schedulers by using a different implementation, which
requires space proportional to the maximum of a2 (for the SSG) and a·|D| (for
the conflict information). Since at any given time a is generally much smaller
than the total number of transactions that have appeared in that execution and
is a number under system control, this is a significant improvement over Ba-
sic SGT.
In outline, here is the space-efficient implementation for Basic SGT. The
scheduler maintains an SSG which is transitively closed and only contains
nodes for active transactions. For each node Ti of the SSG, the scheduler also
maintains four sets: o-scheduled[Ti] and o-conflict[Ti], for o ∈ {r, w}. As
before, r-scheduled[Ti] (or w-scheduled[Ti]) is the set of data items for which
Reads (or Writes) have been scheduled. r-conflict[Ti] (or w-conflict[Ti]) is the
set of data items read (or written) by terminated transactions that must follow
Ti in any execution equivalent to the present one. More precisely, at the end of
an execution represented by history H, o-conflict[Ti] = {x | oj[x] ∈ H, Tj is
committed in H, and Ti → Tj ∈ SG+(H)}, where SG+(H) is the transitive
closure of SG(H).2
When a transaction Ti begins, the scheduler adds a new node to the SSG,
and initializes o-scheduled[Ti] and o-conflict[Ti] to the empty set, for o ∈ {r,
w}. When the scheduler receives an operation pi[x], it tests whether x is in q-
conflict[Ti] for some q conflicting with p. If so, it rejects pi[x]. Otherwise, it
adds edges Tj → Ti to the SSG for every transaction Tj with x ∈ q-scheduled[Tj]
∪ q-conflict[Tj] for all operation types q conflicting with p. If the resulting
SSG contains a cycle, the scheduler rejects pi[x]. Otherwise, it schedules pi[x]
after all conflicting operations previously sent to the DM have been acknowl-
edged.
When the scheduler receives the acknowledgment of ci from the DM, it
computes SSG+. For each Tj with Tj → Ti in SSG+, it sets o-conflict[Tj] to o-
conflict[Tj] ∪ o-scheduled[Ti] ∪ o-conflict[Ti], o ∈ {r, w}. Note that at this
point, r-scheduled[Ti] (or w-scheduled[Ti]) is precisely the readset (or writeset)
of Ti. Then the scheduler deletes Ti from SSG+ along with all its incoming and
outgoing edges. The resulting graph becomes the new SSG.
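A sketch of this commit-time processing, assuming the SSG is held as a transitively closed set of edges, the per-transaction sets are ordinary Python sets, and Ti was registered at its Start (all names are ours):

def process_commit_ack(ti, edges, scheduled, conflict):
    """Space-efficient Basic SGT: fold T_i's sets into its predecessors and drop its node.

    edges     : set of (Tj, Tk) pairs, maintained transitively closed
    scheduled : Tj -> {'r': set, 'w': set}   (o-scheduled[Tj])
    conflict  : Tj -> {'r': set, 'w': set}   (o-conflict[Tj])
    """
    # 1. Recompute the transitive closure (cheap here because only active nodes remain).
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True

    # 2. Propagate T_i's readset/writeset and conflict sets to every predecessor T_j.
    for (tj, tk) in closure:
        if tk == ti:
            for o in ('r', 'w'):
                conflict[tj][o] |= scheduled[ti][o] | conflict[ti][o]

    # 3. Delete T_i and all edges incident with it; the result is the new SSG.
    scheduled.pop(ti, None)
    conflict.pop(ti, None)
    return {(a, b) for (a, b) in closure if a != ti and b != ti}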
To see why this method is correct, first consider the scheduler's response to
receiving pi[x]. Let history H represent the execution at the time the scheduler
receives pi[x]. If x is in q-conflict[Ti] for some q conflicting with p, then for
some committed transaction Tj, some operation qj[x] has been scheduled and
Ti → Tj ∈ SG+(H). If pi[x] were to be scheduled, we would also have that Tj →
Ti (because of qj[x] and pi[x]) and thus a cyclic SG would result. To avoid this,
pi[x] is rejected.
Next, consider the scheduler's response to ci, and let history H represent
the execution at this time. Consider any active transaction Tj such that Tj → Ti
is in SG+(H) (which implies Tj → Ti is in SSG+). Before discarding Ti we must
record the fact that the scheduler must not subsequently schedule any of Tj's
operations that conflict with Ti's operations or with operations of committed
transactions that follow Ti in SG+. Otherwise, a cyclic SG will subsequently
arise. This is why we must, at this point, update r-conflict[Tj] (or w-
conflict[Tj]) by adding r-scheduled[Ti] ∪ r-conflict[Ti] (or w-scheduled[Ti] ∪
w-conflict[Ti]) to it.

2See Section A.3 of the Appendix for the definition of transitive closure of a directed graph.
We compute SSG+ before deleting the node corresponding to Ti, the trans-
action that committed, to avoid losing indirect precedence information (repre-
sented as paths in the SSG). For example, suppose that there is a path from Tj
to Tk in the SSG and that all such paths pass through node Ti. Then, deleting
Ti will produce a graph that doesn't represent the fact that Tj must precede Tk.
SSG+ contains an edge Tj → Tk iff the SSG has a path from Tj to Tk. Thus
SSG+ represents exactly the same precedence information as SSG. Moreover,
the only precedence information lost by deleting Ti from the SSG+ pertains to
Ti itself (in which we are no longer interested since it has terminated) and to no
other active transactions.
The scheduler as described produces SR executions. By using the tech-
niques of the previous subsection we can further restrict its output to be recov-
erable, cascadeless, or strict (see Exercise 4.19).

4.4 CERTIFIERS
Introduction
So far we have been assuming that every time it receives an operation, a sched-
uler must decide whether to accept, reject, or delay it. A different approach is
to have the scheduler immediately schedule each operation it receives. From
time to time, it checks to see what it has done. If it thinks all is well, it
continues scheduling. On the other hand, if it discovers that in its hurry to
process operations it has inappropriately scheduled conflicting operations,
then it must abort certain transactions.
When it's about to schedule a transaction Ti's Commit, the scheduler
checks whether the execution that includes ci is SR. If not, it rejects the
Commit, thereby forcing Ti to abort. (It cannot check less often than on every
Commit, as it would otherwise risk committing a transaction involved in a
non-SR execution.) Such schedulers are called certifiers. The process of check-
ing whether a transaction’s Commit can be safely scheduled or must be rejected
is called certification. Certifiers are sometimes called optimistic schedulers,
because they aggressively schedule operations, hoping nothing bad, such as a
non-SR execution, will happen.
There are certifiers based on all three types of schedulers - 2PL, TO, and
SGT - with either centralized or distributed control. We will explore all of
these possibilities in this section.

2PL Certification

When a 2PL certifier receives an operation from the TM, it notes the data item
accessed by the operation and immediately submits it to the DM. When it
receives a Commit, ci, the certifier checks if there is any operation pi[x] of Ti
that conflicts with some operation qj[x] of some other active transaction, Tj. If
so, the certifier rejects ci and aborts Ti.3 Otherwise it certifies Ti by passing ci to
the DM, thereby allowing Ti to terminate successfully.
The 2PL certifier uses several data structures: a set containing the names
of active transactions, and two sets, r-scheduled[Ti] and w-scheduled[Ti], for
each active transaction Ti, which contain the data items read and written,
respectively, by Ti so far.
When the 2PL certifier receives ri[x] (or wi[x]), it adds x to r-scheduled[Ti]
(or w-scheduled[Ti]). When the scheduler receives ci, Ti has finished executing,
so r-scheduled[Ti] and w-scheduled[Ti] contain Ti's readset and writeset,
respectively. Thus, testing for conflicts can be done by looking at intersections
of the r-scheduled and w-scheduled sets. To process ci, the certifier checks
every other active transaction, Tj, to determine if any one of r-scheduled[Ti] ∩
w-scheduled[Tj], w-scheduled[Ti] ∩ r-scheduled[Tj], or w-scheduled[Ti] ∩ w-
scheduled[Tj] is nonempty. If so, it rejects ci. Otherwise, it certifies Ti and
removes Ti from the set of active transactions.
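A compact sketch of this certifier in Python (class and method names are ours; the DM is again abstracted as a callback):

class TwoPLCertifier:
    """2PL certification: record readsets/writesets, test set intersections at Commit time."""

    def __init__(self, send_to_dm):
        self.send_to_dm = send_to_dm
        self.active = {}                    # Ti -> {'r': r-scheduled[Ti], 'w': w-scheduled[Ti]}

    def receive(self, kind, ti, x):
        self.active.setdefault(ti, {'r': set(), 'w': set()})[kind].add(x)
        self.send_to_dm(kind, ti, x)        # operations are never delayed or rejected

    def commit(self, ti):
        """Return True (Ti certified) or False (ci rejected, Ti must abort)."""
        mine = self.active[ti]
        for tj, theirs in self.active.items():
            if tj == ti:
                continue
            if (mine['r'] & theirs['w'] or
                    mine['w'] & theirs['r'] or
                    mine['w'] & theirs['w']):
                self.abort(ti)
                return False
        self.active.pop(ti)                 # certified: Ti is no longer active
        self.send_to_dm('c', ti, None)
        return True

    def abort(self, ti):
        self.active.pop(ti, None)
        self.send_to_dm('a', ti, None)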
To prove that the 2PL certifier only produces SR executions, we will
follow the usual procedure of proving that every history it allows must have an
acyclic SG.

Theorem 4.2: The 2PL certifier produces SR histories.


Proof: Let H be a history representing an execution produced by the 2PL
certifier. Suppose Tj → Ti is an edge of SG(H). Then Ti and Tj are commit-
ted in H and there are conflicting operations qj[x] and pi[x] such that qj[x]
< pi[x]. We claim that the certification of Tj preceded the certification of
Ti. For suppose Ti was certified first. At the time Ti is certified, pi[x] must
have been processed by the scheduler; hence x is in p-scheduled[Ti]. More-
over, since qj[x] < pi[x], x is in q-scheduled[Tj]. Thus p-scheduled[Ti] ∩
q-scheduled[Tj] ≠ { }. Tj, not having been certified yet, is active. But then
the scheduler would not certify Ti, contrary to our assumption that Ti is
committed in H. We have therefore shown that if Tj → Ti is in SG(H), Tj
must be certified before Ti in H. By induction, if a cycle existed in SG(H),
every transaction on that cycle would have to be certified before itself,
an absurdity. Therefore SG(H) is acyclic. By the Serializability Theorem,
H is SR. ∎

We called this certifier a 2PL certifier, yet there was no mention of locks
anywhere! The name is justified if one thinks of an imaginary read (or write)
lock being obtained by Ti on x when x is added to r-scheduled[Ti] (or w-
scheduled[Ti]). If there is ever a lock conflict between two active transactions,

3A transaction is committed only when the scheduler acknowledges to the TM the processing of
the Commit, not when the TM sends the Commit to the scheduler. So, it is perfectly legitimate for
the scheduler to reject ci and abort Ti at this point.
the first of them to attempt certification will be aborted. This is very similar to
a 2PL scheduler that never allows a conflicting operation to wait, but rather
always rejects it. In fact, the committed projection of every history produced
by a 2PL certifier could also have been produced by a 2PL scheduler.
To enforce recoverability, when a 2PL certifier aborts a transaction Ti, it
must also abort any other active transaction Tj such that w-scheduled[Ti] ∩ r-
scheduled[Tj] ≠ { }. Note that this may cause Tj to be aborted unnecessarily if,
for example, there are data items in w-scheduled[Ti] ∩ r-scheduled[Tj] but Tj
actually read them before Ti wrote them. However, the 2PL certifier does not
keep track of the order in which conflicting operations were processed; it can't
distinguish, at certification time, the case just described from the case in which
Tj read some of the items in w-scheduled[Ti] ∩ r-scheduled[Tj] after Ti wrote
them. For safety, then, it must abort Tj.
One can modify the 2PL certifier to enforce the stronger conditions of
cascadelessness or strictness, although this involves delaying operations and
therefore runs counter to the optimistic philosophy of certifiers (see Exercise
4.25).
To understand the performance of 2PL certification, let’s compare it to its
on-line counterpart, Basic 2PL. Both types of scheduler check for conflicts
between transactions. Thus, the overhead for checking conflicts in the two
methods is about the same. If transactions rarely conflict, then Basic 2PL
doesn’t delay many transactions and neither Basic 2PL nor 2PL certification
aborts many. Thus, in this case throughput for the two methods is about the
same.
At higher conflict rates, 2PL certification performs more poorly. To see
why, suppose Ti issues an operation that conflicts with that of some other
active transaction Tj. In 2PL certification, Ti and Tj would execute to comple-
tion, even though at least one of them is doomed to be aborted. The execution
effort in completing that doomed transaction is wasted. By contrast, in 2PL Ti
would be delayed. This ensures that at most one of Ti and Tj will be aborted
due to the conflict. Even if delaying Ti causes a deadlock, the victim is aborted
before it executes completely, so less of its execution effort is wasted than in
2PL certification.
Quantitative studies are consistent with this intuition. In simulation and
analytic modelling of the methods, 2PL certification has lower throughput
than 2PL for most application parameters. The difference in throughput
increases with increasing conflict rate.

SGT Certification
SGT lends itself very naturally to a certifier. The certifier dynamically main-
tains an SSG of the execution it has produced so far, exactly as in Basic SGT.
Every time it receives an operation pi[x], it adds the edge Tj → Ti to the SSG
for every transaction Tj such that the certifier has already sent to the DM an
operation qj[x] conflicting with pi[x]. After this is done, it immediately
dispatches pi[x] to the DM (without worrying whether the SSG is acyclic). Of
course, handshaking is still necessary to ensure that the DM processes conflict-
ing operations in the order scheduled.
When the scheduler receives ci, it checks whether Ti lies on a cycle of the
SSG. If so, it rejects ci and Ti is aborted. Otherwise it certifies Ti and Ti
commits normally.
The implementation issues are essentially those that we discussed for Basic
SGT. The same data structures can be used to maintain the SSG and to enforce
handshaking between the certifier and DM. The problem of space inefficiency
is of concern here also, and the same remedies apply.
Finally, the SGT certifier is modified to enforce recoverability in essentially
the same way as SGT schedulers. The rule is that if Ti aborts, the certifier also
aborts any active transaction Tj such that, for some data item x, x ∈ w-
scheduled[Ti] ∩ r-scheduled[Tj], Ti → Tj is in the SSG, and for every Tk such
that Ti → Tk and Tk → Tj are in the SSG, x ∉ w-scheduled[Tk].

TO Certification

A TO certifier schedules Reads and Writes without delay, except for reasons
related to handshaking between the certifier and the DM. When the certifier
receives ci, it certifies Ti if all conflicts involving operations of Ti are in time-
stamp order. Otherwise, it rejects ci and Ti is aborted. Thus, Ti is certified iff
the execution so far satisfies the TO rule. That is, in the execution produced
thus far, if some operation pi[x] precedes some conflicting operation qj[x] of
transaction Tj, then ts( Ti) < ts( Tj). This is the very same condition that Basic
TO checks when it receives each operation. However, when Basic TO finds a
violation of the TO rule, it immediately rejects the operation, whereas the TO
certifier delays this rejection until it receives the transaction’s Commit. Since
allowing such a transaction to complete involves extra work, with no hope
that it will ultimately commit, Basic TO is preferable to a TO certifier.

Distributed Certifiers
A distributed certifier consists of a collection of certifier processes, one for
each site. As with distributed schedulers, we assume that the certifier at a site is
responsible for regulating access to exactly those data items stored at that site.
Although each certifier sends operations to its respective DM indepen-
dently of other certifiers, the activity of transaction certification must be
carried out in a coordinated manner. To certify a transaction, a decision must
be reached involving all of the certifiers that received an operation of that
transaction. In SGT certification, the certifiers must exchange their local SSGs
to ensure that the global SSG does not have a cycle involving the transaction
being certified. If no such cycle exists, then the transaction is certified (by all
certifiers involved); otherwise it is aborted.
In 2PL or TO certification, each certifier can make a local decision
whether or not to certify a transaction, based on conflict information for the
data items it manages. A global decision must then be reached by consensus. If
the local decision of all certifiers involved in the certification of a transaction is
to certify the transaction, then the global decision is to certify. If even one certi-
fier's local decision is to abort the transaction, then the global decision is to
abort. The fate of a transaction is decided only after this global decision has
been reached. A certifier cannot proceed to certify a transaction on the basis of
its local decision only.
This kind of consensus can be reached by using the following communica-
tion protocol between the TM that is supervising Ti and the certifiers that
processed Ti's operations. The TM distributes Ti's Commit to all certifiers that
participated in the execution of Ti. When a certifier receives ci, it makes a local
decision, called its vote, on whether to certify Ti or not, and sends its vote to
the TM. After receiving the votes from all the certifiers that participate in Ti's
certification, the TM makes the global decision accordingly. It then sends the
global decision to all participating certifiers, which carry it out as soon as they
receive it.4
Using this method for reaching a unanimous decision, a certifier may vote
to certify a transaction, yet end up having to abort it because some other certi-
fier voted not to certify. Thus, a certifier that votes to certify experiences a
period of uncertainty on the fate of the transaction, namely, the period
between the moment it sends its vote and the moment it receives the global
decision from the TM. Of course, a certifier that votes to abort is not un-
certain about the transaction. It knows it will eventually be aborted by all
certifiers.
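A sketch of this vote-collection protocol from the TM's side, assuming (hypothetically) that each participating certifier exposes local_decision and apply_global_decision operations and that messages are delivered reliably:

def certify_distributed(ti, participants):
    """TM-side consensus for the distributed certification of transaction Ti.

    participants: the certifiers that processed Ti's operations; each offers
    local_decision(ti) -> bool (its vote) and apply_global_decision(ti, commit: bool).
    """
    # Phase 1: distribute Ti's Commit and collect one vote per participating certifier.
    votes = [certifier.local_decision(ti) for certifier in participants]

    # Phase 2: the global decision is to certify iff every local vote says certify.
    decision = all(votes)

    # Phase 3: tell every participant; a certifier that voted to certify remains
    # uncertain about Ti's fate until this message arrives.
    for certifier in participants:
        certifier.apply_global_decision(ti, decision)
    return decision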

4.5 INTEGRATED SCHEDULERS


Introduction
The schedulers we have seen so far synchronize conflicting operations by one
of three mechanisms: 2PL, TO, or SGT. There are other schedulers that use
combinations of these techniques to ensure that transactions are processed in
an SR manner. Such schedulers are best understood by decomposing the prob-
lem of concurrency control into certain subproblems. Each subproblem is
solved by one of our familiar three techniques. Then we have to make sure that
these solutions fit together consistently to yield a correct solution to the entire
problem of scheduling.

4We'll study this type of protocol in much greater detail in Chapter 7.


We decompose the problem by separating the issue of scheduling Reads
against conflicting Writes (and, symmetrically, of Writes against conflicting
Reads) from that of scheduling Writes against conflicting Writes. We'll call the
first subproblem rw synchronization and the second, ww synchronization. We
will call an algorithm used for rw synchronization an rw synchronizer, and one
for ww synchronization a ww synchronizer. A complete scheduler consists of
an rw and a ww synchronizer. In a correct scheduler, the two synchronizers
must resolve read-write and write-write conflicts in a consistent way. Sched-
ulers obtained in this manner are called integrated schedulers. Integrated
schedulers that use (possibly different versions of) the same mechanism (2PL,
TO, or SGT) for both the rw and the ww synchronizer are called pure sched-
ulers. All of the schedulers we have seen so far are pure schedulers. Schedulers
combining different mechanisms for rw and ww synchronization are called
mixed schedulers.
To use any of the variations of the three concurrency control techniques
that we have seen to solve each of these two subproblems, all we need to do is
change the definition of “conflicting operations” to reflect the type of synchro-
nization we are trying to achieve. Specifically, in rw synchronization, two
operations accessing the same data item conflict if one of them is a Read and
the other is a Write. Two Writes accessing the same data item are not consid-
ered to conflict in this case. Similarly, in ww synchronization two operations
on the same data item conflict if both are Writes.
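To make the narrowed notion of conflict concrete, here is a small Python sketch (an illustration, not part of the original algorithms) in which operations are modeled as (kind, transaction, data item) triples and the synchronizer's mode determines which pairs conflict.

```python
def conflicts(op1, op2, mode):
    """Return True iff op1 and op2 conflict under the given synchronizer.

    mode is "rw" for an rw synchronizer, "ww" for a ww synchronizer.
    """
    kind1, t1, x1 = op1
    kind2, t2, x2 = op2
    if x1 != x2 or t1 == t2:
        return False               # different items, or the same transaction
    if mode == "rw":
        # read-write conflict: exactly one of the two operations is a Read
        return {kind1, kind2} == {"r", "w"}
    if mode == "ww":
        # write-write conflict: both operations are Writes
        return kind1 == "w" and kind2 == "w"
    raise ValueError("mode must be 'rw' or 'ww'")
```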
For example, if a scheduler uses 2PL for rw synchronization only, wi[x] is
delayed (on account of the 2PL rw synchronizer) only if some other transaction
Tj is holding a read lock on x. That is, several transactions may be sharing a
write lock on x, as long as no transaction is concurrently holding a read lock
on x. This may sound wrong, but remember that there is another part to the
scheduler, the ww synchronizer, which will somehow ensure that write-write
conflicts are resolved consistently with the way the 2PL rw synchronizer
resolves read-write conflicts. Similarly, in a scheduler using 2PL for ww
synchronization only, Reads are never delayed by the 2PL ww synchronizer,
although they may be delayed by the rw synchronizer.
A TO rw synchronizer only guarantees that Reads and Writes accessing
the same data item are processed in timestamp order. It will not force two
Writes wi[x] and wj[x] to be processed in timestamp order, unless there is some
operation rk[x] such that ts(Ti) < ts(Tk) < ts(Tj) or ts(Tj) < ts(Tk) < ts(Ti).

Similarly, a TO ww synchronizer ensures that wi[x] is processed before wj[x]
only if ts(Ti) < ts(Tj), but imposes no such order on ri[x] and wj[x]; of course,
the rw synchronizer will impose such an order.
In SGT, the SG maintained by the scheduler contains only those edges that
reflect the kind of conflicts being resolved. An SGT rw (or ww) synchronizer
adds edges corresponding to read-write (or write-write) conflicts, every time it
receives an operation.

The rw serialization graph of history H, denoted SGrw(H), consists of
nodes corresponding to the committed transactions appearing in H and edges
Ti → Tj iff ri[x] < wj[x] or wi[x] < rj[x] for some x. Similarly, the ww serializa-
tion graph of H, denoted SGww(H), consists of nodes corresponding to the
committed transactions appearing in H and edges Ti → Tj iff wi[x] < wj[x] for
some x.
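As an illustration of these definitions, the following sketch builds the two graphs from a history given as a list of (kind, transaction, data item) triples in execution order; the representation and the function name are assumptions made for the example.

```python
def serialization_graphs(history, committed):
    """Return the edge sets of SGrw(H) and SGww(H) for the given history."""
    sg_rw, sg_ww = set(), set()            # sets of (Ti, Tj) edges
    for i, (k1, t1, x1) in enumerate(history):
        for k2, t2, x2 in history[i + 1:]:
            if x1 != x2 or t1 == t2:
                continue                   # different items, or same transaction
            if t1 not in committed or t2 not in committed:
                continue                   # only committed transactions appear
            if {k1, k2} == {"r", "w"}:     # read-write conflict: edge of SGrw
                sg_rw.add((t1, t2))
            elif k1 == "w" and k2 == "w":  # write-write conflict: edge of SGww
                sg_ww.add((t1, t2))
    return sg_rw, sg_ww
```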
It is easy to show that if a history H represents some execution produced
by a scheduler using an rw (or ww) synchronizer based on 2PL, TO, or SGT,
then SGrw(H) (or SGww(H)) is acyclic. The arguments are similar to those used
in earlier sections to prove that SG(H) is acyclic if H represents an execution
produced by a 2PL, TO, or SGT scheduler. Therefore, for a scheduler using any
combination of 2PL, TO, and SGT for rw or ww synchronization, we know
that if H is produced by the scheduler, then SGrw(H) and SGww(H) are both
acyclic.
To ensure that H is SR, we need the complete graph SG(H) to be acyclic. It
is not enough for each of SGrw(H) and SGww(H) to be acyclic. In addition, we
need the two graphs to represent compatible partial orders. That is, the union
of the two graphs must be acyclic. Ensuring the compatibility of these two
graphs is the hardest part of designing correct integrated schedulers, as we’ll
see later in this section.
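As a small illustration (not from the text), consider the history r1[x] w2[x] w2[y] w1[y] with both transactions committed; running it through the serialization_graphs sketch above yields two individually acyclic graphs whose union is cyclic.

```python
# History r1[x] w2[x] w2[y] w1[y], with both T1 and T2 committed.
H = [("r", "T1", "x"), ("w", "T2", "x"), ("w", "T2", "y"), ("w", "T1", "y")]
sg_rw, sg_ww = serialization_graphs(H, committed={"T1", "T2"})
# sg_rw == {("T1", "T2")} and sg_ww == {("T2", "T1")}: each graph is acyclic,
# but their union contains the cycle T1 -> T2 -> T1, so H is not SR.
```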

Thomas’ Write Rule (TWR)


Suppose a TO scheduler receives wi[x] after it has already sent wj[x] to the
DM, but ts(Ti) < ts(Tj). The TO rule requires that wi[x] be rejected. But if the
scheduler is only concerned with ww synchronization, then this rejection is
unnecessary. For if the scheduler had processed wi[x] when it was "supposed
to," namely, before it had processed wj[x], then x would have the same value as
it has now, when it is faced with wi[x]'s having arrived too late. Said differently,
processing a sequence of Writes in timestamp order produces the same result as
processing only the single Write with the maximum timestamp; thus, late operations
can be ignored.
This observation leads to a ww synchronization rule, called Thomas’
Write Rule (TWR), that never delays or rejects any operation. When a TWR
ww synchronizer receives a Write that has arrived too late insofar as the TO
rule is concerned, it simply ignores the Write (i.e., doesn't send it to the DM)
but reports its successful completion to the TM.
More precisely, Thomas' Write Rule states: Let Tj be the transaction with
the maximum timestamp that wrote into x before the scheduler receives wi[x]. If
ts(Ti) > ts(Tj), process wi[x] as usual (submit it to the DM, wait for the DM's
ack(wi[x]), and then acknowledge it to the TM). Otherwise, process wi[x] by
simply acknowledging it to the TM.
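A minimal sketch of a TWR ww synchronizer appears below. The table max_w_scheduled plays the role of max-w-scheduled[x] used elsewhere in this chapter; the hooks send_to_dm and ack_to_tm, and the omission of the DM's acknowledgment handshake, are simplifications made for the illustration.

```python
max_w_scheduled = {}   # data item -> largest timestamp that has written it

def twr_write(x, ts_i, send_to_dm, ack_to_tm):
    """Process wi[x] under Thomas' Write Rule: never delay, never reject."""
    if ts_i > max_w_scheduled.get(x, float("-inf")):
        # The Write is not late in timestamp order: forward it to the DM.
        max_w_scheduled[x] = ts_i
        send_to_dm(("w", x, ts_i))
    # Otherwise the Write arrived too late; its effect would have been
    # overwritten anyway, so it is ignored (not sent to the DM).
    # In either case the Write is acknowledged to the TM.
    ack_to_tm(("ack-w", x, ts_i))
```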
But what about Reads? Surely Reads care about the order in which Writes
are processed. For example, suppose we have four transactions T1, T2, T3, and
T4 where ts(T1) < ts(T2) < ts(T3) < ts(T4); T1, T2, and T4 just write x; and T3
reads x. Now, suppose the scheduler schedules w1[x], r3[x], w4[x] in that order,
and then receives w2[x]. TWR says that it's safe for the ww synchronizer to
accept w2[x] but not process it. But this seems wrong, since T2 should have
written x before r3[x] read it, yet r3[x] read the value written by w1[x]. This is
true. But the problem is one of synchronizing Reads against Writes and
therefore none of the ww synchronizer's business. The rw synchronizer must
somehow prevent this situation.
This example drives home the division of labor between rw and ww
synchronizers. And it emphasizes once more that care must be taken in inte-
grating rw and ww synchronizers to obtain a correct scheduler.
We will examine two integrated schedulers, one using Basic TO for rw
synchronization and TWR for ww synchronization, and another using 2PL for
rw synchronization and TWR for ww synchronization. The first is a pure inte-
grated scheduler because both rw and ww synchronization are achieved by the
same mechanism, TO. The second is a mixed integrated scheduler because it
combines a 2PL rw synchronizer with a TWR ww synchronizer.

A Pure Integrated Scheduler


Our first integrated scheduler can be viewed as a simple optimization over
Basic TO, in the sense that it sometimes avoids rejecting Writes that Basic TO
would reject.
Recall that a Basic TO scheduler schedules an operation if all conflicting
operations that it has previously scheduled have smaller timestamps. Other-
wise it rejects the operation. Our new scheduler uses this principle for rw
synchronization only. For ww synchronization it uses TWR instead. That is, it
behaves as follows:
1. It schedules ri[x] provided that for all wj[x] that have already been
scheduled, ts(Ti) > ts(Tj). Otherwise, it rejects ri[x].
2. It rejects wi[x] if it has already scheduled some rj[x] with ts(Tj) > ts(Ti).
Otherwise, if it has scheduled some wj[x] with ts(Tj) > ts(Ti), it ignores
wi[x] (i.e., acknowledges the processing of wi[x] to the TM but does not
send the operation to the DM). Otherwise, it processes wi[x] normally.
Thus, the difference from Basic TO is that a late Write is only rejected if a
conflicting Read with a greater timestamp has already been processed. If the
only conflicting operations with greater timestamps that have been processed
are Writes, the late Write is simply ignored. The implementation details and
recoverability considerations of this scheduler are similar to those of Basic TO
(see Exercise 4.33).
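The two rules can be captured in a few lines. The sketch below is only an illustration under the stated simplifications: it ignores queue[x], handshaking with the DM, and recoverability, and keeps just the per-item tables max-r-scheduled and max-w-scheduled.

```python
max_r_scheduled = {}   # x -> largest timestamp of a scheduled Read on x
max_w_scheduled = {}   # x -> largest timestamp of a scheduled Write on x

REJECT, IGNORE, SCHEDULE = "reject", "ignore", "schedule"

def read(x, ts_i):
    # Rule 1: reject ri[x] if a Write with a larger timestamp has already
    # been scheduled on x; otherwise schedule it.
    if ts_i < max_w_scheduled.get(x, float("-inf")):
        return REJECT
    max_r_scheduled[x] = max(max_r_scheduled.get(x, float("-inf")), ts_i)
    return SCHEDULE

def write(x, ts_i):
    # Rule 2: reject wi[x] only if a Read with a larger timestamp has been
    # scheduled; if only a later Write has been scheduled, ignore wi[x]
    # (acknowledge it to the TM without sending it to the DM).
    if ts_i < max_r_scheduled.get(x, float("-inf")):
        return REJECT
    if ts_i < max_w_scheduled.get(x, float("-inf")):
        return IGNORE
    max_w_scheduled[x] = ts_i
    return SCHEDULE
```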

A Mixed Integrated Scheduler


Now let’s construct an integrated scheduler that uses Strict 2PL for rw synchro-
nization and TWR for ww synchronization. Suppose that H is a history repre-
senting some execution of such a scheduler. We know that SGrw(H) and
SGww(H) are each acyclic. To ensure that SG(H) = SGrw(H) ∪ SGww(H) is
acyclic, we will also require that
1. if Ti → Tj is an edge of SGrw(H), then ts(Ti) < ts(Tj).
We know that TO synchronizes all conflicts in timestamp order, so if Ti →
Tj is in SGww(H) then ts(Ti) < ts(Tj). By (1), the same holds in SGrw(H). Since
every edge Ti → Tj in SGrw(H) and SGww(H) has ts(Ti) < ts(Tj), SG(H) must be
acyclic. The reasoning is identical to that of Theorem 4.1, which proved that
TO is correct. In the remainder of this section we describe a technique for
making condition (1) hold.
If Ti → Tj is in SGrw(H), then Tj cannot terminate until Ti releases some
lock that Tj needs. In Strict 2PL, a transaction holds its locks until it commits.
Therefore, if Ti → Tj, then Ti commits before Tj terminates, which implies that
Ti commits before Tj commits. Thus, we can obtain (1) by ensuring that
2. if Ti commits before Tj, then ts(Ti) < ts(Tj).
The scheduler can't ensure (2) unless it delays assigning a timestamp to a
transaction Ti until it receives Ti's Commit. But the scheduler cannot process
any of Ti's Writes using TWR until it knows Ti's timestamp. These last two
observations imply that the scheduler must delay processing all of the Writes it
receives from each transaction Ti until it receives Ti's Commit.
This delaying of Writes creates a problem if, for some x, Ti issues ri[x]
after having issued wi[x]. Since wi[x] is still delayed in the scheduler when the
scheduler receives ri[x], the scheduler can't send ri[x] to the DM; if it did, then
ri[x] would not (as it should) read the value produced by wi[x].
The scheduler can deal with this problem by checking a transaction’s
queue of delayed Writes every time it receives a Read to process. If the transac-
tion previously wrote the data item, the scheduler can service the Read inter-
nally, without going to the DM. Since the scheduler has to get a read lock
before processing the Read, it will see the transaction's write lock at that time,
and will therefore know to process the Read internally. If there is no write lock
owned by the same transaction, then the scheduler processes the Read
normally.
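Here is a sketch of this Read path, assuming each transaction's buffered Writes are kept in w-queue[Ti] as a mapping from data items to values; for brevity the sketch checks the buffer directly rather than through the lock table described above.

```python
w_queue = {}   # transaction id -> {data item: buffered value}

def process_read(ti, x, acquire_read_lock, read_from_dm):
    acquire_read_lock(ti, x)            # Strict 2PL rw synchronization
    buffered = w_queue.get(ti, {})
    if x in buffered:
        # Ti previously wrote x; serve the Read from the buffered Write
        # instead of going to the DM.
        return buffered[x]
    return read_from_dm(ti, x)          # otherwise process the Read normally
```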
In a centralized DBS, the scheduler can enforce (2) simply by assigning
each transaction a larger timestamp than the previous one that committed. In a
distributed DBS, this is difficult to do, because different schedulers (and TMs)
are involved in committing different transactions. Therefore, for distributed
DBSs we need a way to assign timestamps to different transactions indepen-
dently at different sites. The following is one method for accomplishing this.
Each scheduler maintains the usual information used by 2PL and TO
schedulers. Each transaction has a timestamp, which its TM assigns according
to a rule that we will describe later on. For each data item x (that it manages),
a scheduler maintains read locks and write locks (used for 2PL rw synchroniza-
tion), and a variable max-w-scheduled[x] containing the largest timestamp of
all transactions that have written into x (used for the TWR ww synchroniza-
tion). (Since TO is not used for rw synchronization, max-r-scheduled[x] is
unnecessary.) To coordinate handshaking with the DM, the scheduler also
keeps the usual queue of delayed operations on x, queue[x], and two counts, r-
in-transit[x] and w-in-transit[x], of the Reads and Writes sent to, but not yet
acknowledged by, the DM. The scheduler also maintains a variable to help in
generating transaction timestamps; for each data item x, max-ts[x] contains
the largest timestamp of all transactions that have ever obtained a lock on x (be
it a read or a write lock). And for each active transaction Ti, the TM keeps a
variable max-lock-set[Ti], which it initializes to 0 when Ti begins executing.
Each scheduler uses 2PL for rw synchronization. To process ri[x], the
scheduler obtains rli[x]. To process wi[x], it sets wli[x], after which it inserts
wi[x] into w-queue[Ti], a buffer that contains Ti's Writes. After processing oi[x]
(o ∈ {r, w}), the scheduler sends the TM an acknowledgment that includes
max-ts[x]. The TM processes the acknowledgment by setting max-lock-set[Ti]
:= max(max-lock-set[Ti], max-ts[x]).
When the TM receives ci, it generates a unique timestamp for Ti greater
than max-lock-set[Ti]. It then sends ci and ts(Ti) to each scheduler that
processed operations on behalf of Ti. To process ci, a scheduler sets max-ts[x]
:= max(ts(Ti), max-ts[x]) for each x that Ti has locked (at that scheduler).
Then, the scheduler processes Ti's Writes. For every wi[x] buffered in w-
queue[Ti], if ts(Ti) > max-w-scheduled[x], then the scheduler sends wi[x] to
the DM and sets max-w-scheduled[x] to ts(Ti); otherwise it discards wi[x]
without processing it. Notice that it's crucial that no wi[x] was sent to the DM
before the scheduler receives ci and ts(Ti), because the latter is needed for
processing wi[x] according to TWR. After the DM acknowledges all of the
Writes that were sent to it, the scheduler sends ci to the DM. After it receives
ack(ci), the scheduler can release Ti's locks (that were set by that scheduler) and
acknowledge Ti's commitment (by that scheduler) to the TM.
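A sketch of the scheduler's commit processing just described follows, with the bookkeeping named as in the text (max-ts, max-w-scheduled, w-queue); the argument shapes, the buffered-value representation, and the omission of the DM handshake and lock release are assumptions made for the illustration.

```python
max_ts = {}             # x -> largest timestamp of any transaction that locked x
max_w_scheduled = {}    # x -> largest timestamp that has written into x

def process_commit(ti, ts_i, locked_items, w_queue, send_to_dm):
    # Record ts(Ti) against every data item that Ti locked at this scheduler,
    # so that later holders of conflicting locks receive larger timestamps.
    for x in locked_items[ti]:
        max_ts[x] = max(max_ts.get(x, float("-inf")), ts_i)
    # Flush Ti's buffered Writes according to TWR.
    for x, value in w_queue[ti].items():
        if ts_i > max_w_scheduled.get(x, float("-inf")):
            max_w_scheduled[x] = ts_i
            send_to_dm(("w", ti, x, value))
        # else: discard the Write; a Write with a larger timestamp already
        # reached the DM, so Ti's Write would have been overwritten anyway.
    # After the DM acknowledges these Writes, the scheduler would send ci,
    # release Ti's locks, and acknowledge Ti's commitment to the TM.
```

On the TM side, the only rule is that ts(Ti) is chosen greater than max-lock-set[Ti] when ci arrives, which is what ties the timestamp order to the commit order.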
We want to show that the implementation sketched here achieves condi-
tion (1), which is sufficient for proving that Strict 2PL rw synchronization and
TWR ww synchronization are integrated correctly. Let H be a history repre-
senting an execution of the DBS just outlined and suppose that Ti → Tj is in
SGrw(H). This means that pi[x] < qj[x] for some x, where p is r or w and q is w
or r, respectively. Because we are using Strict 2PL for rw synchronization, Ti
committed before Tj did. Consider the time at which Tj is ready to commit. At
that time max-lock-set[Tj] ≥ max-ts[x], since Tj is holding a q lock on x, and
after it obtained that lock, the TM set max-lock-set[Tj] to at least max-ts[x].
But before Ti released its p lock on x, the scheduler set max-ts[x] to at least
ts(Ti). Since max-ts[x] never decreases with time, max-lock-set[Tj] ≥ ts(Ti).
By the rule for generating timestamps, ts(Tj) > max-lock-set[Tj] and there-
fore ts(Tj) > ts(Ti). Thus, timestamps assigned by the TM are consistent with
the order of Commits, which is the condition we needed to show that Strict
2PL rw synchronization and TWR ww synchronization are correctly inte-
grated.

BIBLIOGRAPHIC NOTES
Early TO based algorithms appear in [Shapiro, Millstein 77a], [Shapiro, Millstein
77b], and [Thomas 79]. The latter paper, first published in 1976 as a technical report,
also introduced certification and TWR, and was the first to apply voting to replicated
data (see Chapter 8). An elaborate TO algorithm using TWR, classes, and conflict
graphs was built in the SDD-1 distributed DBS [Bernstein et al. 78], [Bernstein, Ship-
man 80], [Bernstein, Shipman, Rothnie 80], and [McLean 81]. Other TO algorithms
include [Cheng, Belford 80] and [Kaneko et al. 79]. Multigranularity locking ideas are
applied to TO in [Carey 83].
Serialization graph testing has been studied in [Badal 79], [Casanova 81], [Hadzilacos,
Yannakakis 86], and [Schlageter 78]. [Casanova 81] contains the space-efficient SGT
scheduler in Section 4.3.
The term "optimistic" scheduler was coined in [Kung, Robinson 81], who developed
the concept of certifier independently of [Thomas 79]. Other work on certifiers
includes [Haerder 84], [Kersten, Tebra 84], and [Robinson 82]. [Lai, Wilkinson 84]
studies the atomicity of the certification activity. The performance of certifiers is
analyzed in [Menasce, Nakanishi 82a], [Morris, Wong 84], [Morris, Wong 85], and
[Robinson 82].
The rw and ww synchronization paradigm of Section 4.5 is from [Bernstein, Goodman
81]. The 2PL and TWR mixed integrated scheduler is from [Bernstein, Goodman,
Lai 83].

EXERCISES
4.1 In Basic TO, suppose the scheduler adjusts max-q-scheduled[x] when
it sends qi[x] to the DM, instead of when it adds qi[x] to queue[x]. What
effect does this have on the rate at which the scheduler rejects operations?
What are the benefits of this modification to Basic TO?
4.2 Modify Basic TO to avoid cascading aborts. Modify it to enforce the
weaker condition of recoverability. Explain why your modified schedulers
satisfy the required conditions.
4.3 In Basic TO, under what conditions (if any) is it necessary to insert an
operation pi[x] into queue[x] other than at the end?
4.4 Generalize the Basic TO scheduler to handle arbitrary operations
(e.g., Increment and Decrement).
4.5 Modify the Basic TO scheduler of the previous problem to avoid
cascading aborts. Does the compatibility matrix contain enough informa-
tion to make this modification? If not, explain what additional informa-
tion is needed. Prove that the resulting scheduler produces histories that
are cascadeless.
4.6 Compare the behavior of distributed 2PL with Wait-Die deadlock
prevention to that of distributed Basic TO.
4.7 Prove that the Strict TO scheduler of Section 4.2 produces strict his-
tories.
4.8 Design a conservative TO scheduler that uses knowledge of process
speeds and message delays to avoid rejecting operations.
4.9 Prove that the ultimate conservative TO scheduler in Section 4.2
produces SR histories.
4.10 Modify the ultimate conservative TO scheduler in Section 4.2 so that
each TM can manage more than one transaction concurrently.
4.11 Design an ultimate conservative TO scheduler that avoids rejections
by exploiting predeclaration. (Do not use the TM architecture of Section
4.2, where each TM submits operations to the DM in timestamp order.)
Prove that your scheduler produces SR executions.
4.12 Design a way of changing class definitions on-line in conservative TO.
4.13 Design a TO scheduler that guarantees the following property: For
any history H produced by the scheduler, there is an equivalent serial
history Hs such that if Ti is committed before Tj in H, then Ti precedes Tj
in Hs. (Ti and Tj need not have operations that conflict with each other.)
Prove that it has the property.
4.14 A conflict graph for a set of classes is an undirected graph whose nodes
include RI and WI, for each class I, and whose edges include
□ (RI, WI) for all I,
□ (RI, WJ) if the readset of class I intersects the writeset of class J, and
□ (WI, WJ) if the writeset of class I intersects the writeset of class
J (I ≠ J).
Suppose each class is managed by one TM, and that each TM executes
transactions serially. A transaction can only be executed by a TM if it is a
member of the TM’s class.
a. Suppose the conflict graph has no cycles. What additional constraints,
if any, must be imposed by the scheduler to ensure SR executions?
Prove that the scheduler produces SR executions.
b. Suppose the scheduler uses TWR for ww synchronization, and the
conflict graph has no cycles containing an (RI, WJ) edge (i.e., all cycles
only contain (WI, WJ) edges). What additional constraints, if any,
must be imposed by the scheduler to ensure SR executions? Prove that
the scheduler produces SR executions.
4.15 If the size of the timestamp table is too small, then too many recent
timestamps will have to be deleted in order to make room for even more
recent ones. This will cause a TO scheduler to reject some older operations
that access data items whose timestamps were deleted from the table. An
interesting project is to study this effect quantitatively, either by simulation
or by mathematical analysis.
4.16 Prove that the conservative SGT scheduler described in Section 4.3
produces SR executions.
4.17 Show that for any history produced by an SGT scheduler, there exists
an assignment of timestamps to transactions such that the same history
could be produced by a TO scheduler.
4.18 Design an SGT scheduler that guarantees the following property: For
any history H produced by the scheduler, there is an equivalent serial
history Hs such that if Ti is committed before Tj in H, then Ti precedes Tj
in Hs. Prove that it has the property.
4.19 Modify the space-efficient SGT scheduler described in Section 4.3 to
produce recoverable, cascadeless, and strict executions. Explain why each
of your modified schedulers satisfies the required condition.
4.20 Give a serializability theoretic correctness proof of the space-efficient
SGT scheduler described in Section 4.3.
4.21 Although the space requirements of both 2PL and space-efficient SGT
are proportional to a · |D|, where a is the number of active transactions
and |D| is the size of the database, space-efficient SGT will usually require
somewhat more space than 2PL. Explain why.
4.22 Since a certifier does not control the order in which Reads and Writes
execute, a transaction may read arbitrarily inconsistent data. A correct
certifier will eventually abort any transaction that reads inconsistent data,
but this may not be enough to avoid bad results. In particular, a program
may not check data that it reads from the database adequately enough to
avoid malfunctioning in the event that it reads inconsistent data; for exam-
ple, it may go into an infinite loop. Give a realistic example of a transac-
tion that malfunctions in this way using 2PL certification, but never
malfunctions using Basic 2PL.
4.23 Prove that the committed projection of every history produced by a
2PL certifier could have been produced by a 2PL scheduler.
4.24 Give an example of a complete history that could be produced by a
2PL certifier but not by a 2PL scheduler. (In view of the previous exercise,
the history must include aborted transactions.)
4.25 Modify the 2PL certifier so that it avoids cascading aborts. What
additional modifications are needed to ensure strictness? Explain why
each modified certifier satisfies the required condition.
4.26 If a 2PL certifier is permitted to certify two (or more) transactions
concurrently, is there a possibility that it will produce a non-SR execution?
Suppose the 2PL certifier enforces recoverability. If it is permitted to certify
two transactions concurrently, is there a possibility that it will reject more
transactions than it would if it certified transactions one at a time?
4.27 A transaction is called rw phased if it waits until all of its Reads have
been processed before it submits any of its Writes. Its read (or write) phase
consists of the time the scheduler receives its first Read (or Write) through
the time the scheduler acknowledges the processing of its last Read (or
Write). Consider the following certifier for rw phased transactions in a
centralized DBS. The scheduler assigns a timestamp to a transaction when
it receives the transaction’s first Write. Timestamps increase monotoni-
cally, so that ts(Ti) < ts(Tj) iff Ti begins its write phase before Tj begins its
write phase. To certify transaction Tj, the certifier checks that for all Ti
with ts(Ti) < ts(Tj), either (1) Ti completed its write phase before Tj
started its read phase, or (2) Ti completed its write phase before Tj started
its write phase and w-scheduled(Ti) ∩ r-scheduled(Tj) is empty. If neither
condition is satisfied for some such Ti, then Tj is aborted; otherwise, it is
certified.
a. Prove that this certifier only produces serializable executions.
b. Are there histories produced by this certifier that could not be
produced by the 2PL certifier?
c. Are there histories involving only rw phased transactions that could be
produced by the 2PL certifier but not by this certifier?
d. Design data structures and an algorithm whereby a certifier can effi-
ciently check (1) and (2).
4.28 In the previous exercise, suppose that for each Ti with ts(Ti) < ts(Tj),
the certifier checks that (1) or (2) or the following condition (3) holds: Ti
completed its read phase before Tj completed its read phase and w-
scheduled(Ti) ∩ (r-scheduled(Tj) ∪ w-scheduled(Tj)) is empty. Answer
(a)-(d) in the previous exercise for this new type of certifier.
4.29 Suppose the scheduler uses a workspace model of transaction execu-
tion, as in the mixed integrated scheduler of Section 4.5. That is, it delays
processing the Writes of each transaction Ti until after it receives Ti’s
Commit. Suppose a scheduler uses the 2PL certification rule, but uses the
workspace model for transaction execution. Does the scheduler still
produce SR executions? If so, prove it. If not, modify the certification rule
so that it does produce SR executions. Are the executions recoverable?
Cascadeless? Strict?
4.30 Does the SGT certifier still produce SR executions using a workspace
model of transaction execution (see Exercise 4.29)? If so, prove it. If not,
modify the certification rule so that it does produce SR executions.
4.31 In the workspace model of transaction execution (see Exercise 4.29),
the scheduler buffers a transaction Ti's Writes in w-queue[Ti]. Since the
scheduler must scan w-queue[Ti] for every Read issued by Ti, the effi-
ciency of that scan is quite important. Design a data structure and search
algorithm for w-queue[Ti] and analyze its efficiency.


4.32 Describe an SGT certifier based on the space-efficient SGT scheduler
of Section 4.3, and prove that it produces SR executions.
4.33 Modify the pure integrated scheduler of Section 4.5 (Basic TO for rw
synchronization and TWR for ww synchronization) to produce cascade-
less executions. Describe the algorithm using the actual data structures
maintained by the scheduler, such as max-r-scheduled, max-w-scheduled,
and w-in-transit.
4.34 Describe a pure integrated scheduler that uses ultimate conservative
TO for rw synchronization and TWR for ww synchronization. Prove that
it produces SR executions.
4.35 Describe a mixed integrated scheduler that uses conservative TO for
rw synchronization and 2PL for ww synchronization. Prove that it
produces SR executions.
