
Database Management Systems

UNIT - 4
Transaction Concept:
 A transaction is a unit of program execution that accesses and possibly updates
various data items.
 E.g., transaction to transfer $50 from account A to account B:
read(A)
A := A – 50
write(A)
read(B)
B := B + 50
write(B)
 Two main issues to deal with:
 Failures of various kinds, such as hardware failures and system crashes
 Concurrent execution of multiple transactions
ACID Properties:
A transaction is a unit of program execution that accesses and possibly updates
various data items. To preserve the integrity of data the database system must ensure:
 Atomicity: Either all operations of the transaction are properly reflected in the
database or none are.
 Consistency: Execution of a transaction in isolation preserves the consistency of the
database.
 Isolation: Although multiple transactions may execute concurrently, each
transaction must be unaware of other concurrently executing transactions.
Intermediate transaction results must be hidden from other concurrently executed
transactions.
o That is, for every pair of transactions Ti and Tj, it appears to Ti that either Tj
finished execution before Ti started, or Tj started execution after Ti finished.
 Durability: After a transaction completes successfully, the changes it has made to
the database persist, even if there are system failures.

Transaction State:

A transaction in a database can be in one of the following states:

 Active – the initial state; the transaction stays in this state while it is executing
 Partially committed – after the final statement has been executed.
 Failed -- after the discovery that normal execution can no longer proceed.
 Aborted – after the transaction has been rolled back and the database restored to its
state prior to the start of the transaction. Two options after it has been aborted:
o Restart the transaction
 can be done only if no internal logical error
o Kill the transaction
 Committed – after successful completion.

Implementation of Atomicity and Durability:


 Atomicity and durability are implemented by the recovery-management subsystem.
 A simplistic approach to recovery-management is the shadow-database scheme:
o A pointer called db_pointer always points to the current consistent copy of
the database.
o All updates are made on a shadow copy of the database (active and partially
committed).
o db_pointer is updated only after all updates have been written to disk
(commit).
o If the transaction fails, the old copy pointed to by db_pointer is retained, and
the shadow copy is deleted.

Concurrent Executions:
 Multiple transactions are allowed to run concurrently in the system. Advantages
are:
o Increased processor and disk utilization, leading to better transaction
throughput
 E.g. one transaction can be using the CPU while another is reading
from or writing to the disk

o Reduced average response time for transactions: short transactions need not
wait behind long ones.
 Concurrency control schemes – mechanisms to achieve isolation
o That is, to control the interaction among the concurrent transactions in order
to prevent them from destroying the consistency of the database
 Will be studied in Chapter 15, after studying the notion of correctness of
concurrent executions.
Schedules:
 Schedule – a sequence of instructions that specifies the chronological order in which
instructions of concurrent transactions are executed
o A schedule for a set of transactions must consist of all instructions of those
transactions
o Must preserve the order in which the instructions appear in each individual
transaction.
 A transaction that successfully completes its execution will have a commit
instruction as the last statement
o By default, a transaction is assumed to execute the commit instruction as its last step
 A transaction that fails to successfully complete its execution will have an abort
instruction as the last statement.

Schedule 1:
 Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to B.
 An example of a serial schedule in which T1 is followed by T2 :

Schedule 2:
 A serial schedule in which T2 is followed by T1 :

Schedule 3:
 Let T1 and T2 be the transactions defined previously. The following schedule is not a
serial schedule, but it is equivalent to Schedule 1.

Schedule 4:
 The following concurrent schedule does not preserve the sum of “A + B”
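To make this concrete, here is a minimal Python sketch (our own illustration, not necessarily the exact Schedule 4 shown in class) in which T2 reads A before T1's write(A) reaches the database; T1's update to A is lost, so the sum A + B is not preserved:

# T1 transfers 50 from A to B; T2 transfers 10% of A's balance to B.
# Run serially, the sum A + B is preserved; in the interleaving below it is not.

def serial(db):
    a = db["A"]; db["A"] = a - 50                      # T1: read(A), write(A)
    b = db["B"]; db["B"] = b + 50                      # T1: read(B), write(B)
    a = db["A"]; temp = a * 0.1; db["A"] = a - temp    # T2: read(A), write(A)
    b = db["B"]; db["B"] = b + temp                    # T2: read(B), write(B)

def interleaved(db):
    a1 = db["A"]                      # T1: read(A)
    a2 = db["A"]; temp = a2 * 0.1     # T2: read(A) -- still sees the old A
    db["A"] = a1 - 50                 # T1: write(A)
    db["A"] = a2 - temp               # T2: write(A) -- overwrites T1's update
    b = db["B"]; db["B"] = b + 50     # T1: read(B), write(B)
    b = db["B"]; db["B"] = b + temp   # T2: read(B), write(B)

db = {"A": 1000.0, "B": 2000.0}
serial(db); print(sum(db.values()))        # 3000.0: consistency preserved
db = {"A": 1000.0, "B": 2000.0}
interleaved(db); print(sum(db.values()))   # 3050.0: T1's write(A) was lost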

Serializability:
 Basic Assumption – Each transaction preserves database consistency.
 Thus, serial execution of a set of transactions preserves database consistency.
 A (possibly concurrent) schedule is serializable if it is equivalent to a serial schedule.
Different forms of schedule equivalence give rise to the notions of:
o conflict serializability
o view serializability
Simplified view of transactions:
 We ignore operations other than read and write instructions
 We assume that transactions may perform arbitrary computations on data in local
buffers in between reads and writes.
 Our simplified schedules consist of only read and write instructions.
Conflicting Instructions:
 Let li and lj be two instructions of transactions Ti and Tj respectively. Instructions li
and lj conflict if and only if there exists some item Q accessed by both li and lj, and at
least one of these instructions wrote Q.

1. li = read(Q), lj = read(Q). li and lj don't conflict.
2. li = read(Q), lj = write(Q). They conflict.
3. li = write(Q), lj = read(Q). They conflict.
4. li = write(Q), lj = write(Q). They conflict.
 Intuitively, a conflict between li and lj forces a (logical) temporal order between
them.
o If li and lj are consecutive in a schedule and they do not conflict, their results
would remain the same even if they had been interchanged in the schedule.
Conflict Serializability:
 If a schedule S can be transformed into a schedule S´ by a series of swaps of non-
conflicting instructions, we say that S and S´ are conflict equivalent.
 We say that a schedule S is conflict serializable if it is conflict equivalent to a serial
schedule.
 Schedule 3 can be transformed into Schedule 6 -- a serial schedule where T2 follows
T1, by a series of swaps of non-conflicting instructions. Therefore, Schedule 3 is
conflict serializable.

 Example of a schedule that is not conflict serializable:

 We are unable to swap instructions in the above schedule to obtain either the
serial schedule < T3, T4 >, or the serial schedule < T4, T3 >.
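The standard test for conflict serializability builds a precedence graph: add an edge Ti → Tj whenever an instruction of Ti conflicts with and precedes an instruction of Tj; the schedule is conflict serializable if and only if this graph is acyclic. A minimal Python sketch of the test (the schedule encoding is our own assumption):

def conflict_serializable(schedule):
    # schedule: list of (transaction, action, item) triples in execution order
    txns = {t for t, _, _ in schedule}
    edges = set()
    for i, (ti, ai, qi) in enumerate(schedule):
        for tj, aj, qj in schedule[i + 1:]:
            # conflicting pair: same item, different transactions, one write
            if ti != tj and qi == qj and "write" in (ai, aj):
                edges.add((ti, tj))
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in txns}
    def has_cycle(t):                      # DFS cycle detection
        color[t] = GRAY
        for u, v in edges:
            if u == t and (color[v] == GRAY or
                           (color[v] == WHITE and has_cycle(v))):
                return True
        color[t] = BLACK
        return False
    return not any(color[t] == WHITE and has_cycle(t) for t in txns)

# The T3/T4 pattern above -- read(Q) by T3, write(Q) by T4, write(Q) by T3 --
# yields edges T3 -> T4 and T4 -> T3, a cycle, so it is not serializable.
s_bad = [("T3", "read", "Q"), ("T4", "write", "Q"), ("T3", "write", "Q")]
print(conflict_serializable(s_bad))   # False
s_ok = [("T1", "read", "A"), ("T1", "write", "A"),
        ("T2", "read", "A"), ("T2", "write", "A")]
print(conflict_serializable(s_ok))    # True: only edge T1 -> T2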
View Serializability:
 Let S and S' be two schedules with the same set of transactions. S and S' are view
equivalent if the following three conditions are met, for each data item Q:
o If in schedule S, transaction Ti reads the initial value of Q, then in schedule S'
also transaction Ti must read the initial value of Q.
o If in schedule S transaction Ti executes read(Q), and that value was produced
by transaction Tj (if any), then in schedule S' also transaction Ti must read the
value of Q that was produced by the same write(Q) operation of transaction
Tj.

o The transaction (if any) that performs the final write(Q) operation in schedule
S must also perform the final write(Q) operation in schedule S’.
 As can be seen, view equivalence is also based purely on reads and writes.
 A schedule S is view serializable if it is view equivalent to a serial schedule.
 Every conflict serializable schedule is also view serializable.
 Below is a schedule which is view-serializable but not conflict serializable.

 What serial schedule is above equivalent to?


 Every view serializable schedule that is not conflict serializable has blind writes.
Test for View Serializability:
 The precedence graph test for conflict serializability cannot be used directly to test
for view serializability.
o Extension to test for view serializability has cost exponential in the size of the
precedence graph.
 The problem of checking if a schedule is view serializable falls in the class of NP-
complete problems.
o Thus, existence of an efficient algorithm is extremely unlikely.
 However, practical algorithms that just check some sufficient conditions for view
serializability can still be used.
More Complex Notions of Serializability:
The schedule below produces the same outcome as the serial schedule < T1, T5 >, yet is not
conflict equivalent or view equivalent to it.

 If we start with A = 1000 and B = 2000, the final result is 960 and 2040
 Determining such equivalence requires analysis of operations other than read and
write.

Recoverability:

Recoverable Schedules: if a transaction Tj reads a data item previously written by a
transaction Ti, then the commit operation of Ti must appear before the commit
operation of Tj.
The following schedule is not recoverable if T9 commits immediately after the
read(A) operation.

If T8 should abort, T9 would have read (and possibly shown to the user) an
inconsistent database state. Hence, the database must ensure that schedules are recoverable.

Cascading Rollbacks: a single transaction failure leads to a series of transaction rollbacks.
Consider the following schedule where none of the transactions has yet committed (so the
schedule is recoverable).

If T10 fails, T11 and T12 must also be rolled back.


 Can lead to the undoing of a significant amount of work

Cascadeless Schedules:
 Cascadeless schedules — for each pair of transactions Ti and Tj such that Tj reads a
data item previously written by Ti, the commit operation of Ti appears before the
read operation of Tj.
 Every cascadeless schedule is also recoverable
 It is desirable to restrict the schedules to those that are cascadeless
 Example of a schedule that is NOT cascadeless

Concurrency Control:
 A database must provide a mechanism that will ensure that all possible schedules
are both:
o Conflict serializable.
o Recoverable and preferably cascadeless
 A policy in which only one transaction can execute at a time generates serial
schedules, but provides a poor degree of concurrency
 Concurrency-control schemes trade off the amount of concurrency they
allow against the amount of overhead that they incur
 Testing a schedule for serializability after it has executed is a little too late!
o Tests for serializability help us understand why a concurrency control
protocol is correct
 Goal – to develop concurrency control protocols that will assure serializability.

Weak Levels of Consistency:


 Some applications are willing to live with weak levels of consistency, allowing
schedules that are not serializable
o E.g., a read-only transaction that wants to get an approximate total balance
of all accounts
o E.g., database statistics computed for query optimization can be approximate
(why?)
o Such transactions need not be serializable with respect to other transactions
 Tradeoff accuracy for performance

 Purpose of concurrency control:
o To ensure isolation.
o To preserve database consistency.
o To resolve conflicts.

Lock-Based Protocols:
A lock is a mechanism to control concurrent access to a data item
Data items can be locked in two modes:
1. exclusive (X) mode: the data item can be both read and written. An X-lock is
requested using the lock-X instruction.
2. shared (S) mode: the data item can only be read. An S-lock is requested using
the lock-S instruction.
Lock requests are made to the concurrency-control manager. A transaction can proceed
only after the request is granted.

 Lock-compatibility matrix (S is compatible with S; X is compatible with nothing):

              S        X
        S    true     false
        X    false    false

 A transaction may be granted a lock on an item if the requested lock is compatible


with locks already held on the item by other transactions
 Any number of transactions can hold shared locks on an item,
o But if any transaction holds an exclusive lock on the item, no other transaction
may hold any lock on the item.
 If a lock cannot be granted, the requesting transaction is made to wait till all
incompatible locks held by other transactions have been released. The lock is then
granted.
 Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
 Locking as above is not sufficient to guarantee serializability — if A and B get
updated in-between the read of A and B, the displayed sum would be wrong.
 A locking protocol is a set of rules followed by all transactions while requesting and
releasing locks. Locking protocols restrict the set of possible schedules.
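As a hedged illustration of these rules, here is a minimal lock-manager sketch in Python (names and structure are our own; a real lock manager also queues waiting requests and enforces fairness). A request is granted only when it is compatible with the locks held by other transactions:

# Minimal sketch of an S/X lock table. compatible(held, requested) encodes
# the lock-compatibility matrix: S is compatible only with S.

class LockManager:
    def __init__(self):
        self.locks = {}        # item -> list of (transaction, mode)

    def compatible(self, held, requested):
        return held == "S" and requested == "S"

    def request(self, txn, item, mode):
        holders = self.locks.setdefault(item, [])
        for holder, held in holders:
            if holder != txn and not self.compatible(held, mode):
                return False   # incompatible: the caller must wait
        holders.append((txn, mode))
        return True            # lock granted

    def release(self, txn, item):
        self.locks[item] = [(t, m) for t, m in self.locks.get(item, [])
                            if t != txn]

lm = LockManager()
print(lm.request("T1", "A", "S"))   # True: no other locks on A
print(lm.request("T2", "A", "S"))   # True: S is compatible with S
print(lm.request("T3", "A", "X"))   # False: X conflicts with the held S locks
lm.release("T1", "A"); lm.release("T2", "A")
print(lm.request("T3", "A", "X"))   # True: all incompatible locks released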

Pitfalls of Lock-Based Protocols:

 Neither T3 nor T4 can make progress — executing lock-S(B) causes T4 to wait for T3 to
release its lock on B, while executing lock-X(A) causes T3 to wait for T4 to release its
lock on A.
 Such a situation is called a deadlock.
 To handle a deadlock one of T3 or T4 must be rolled back
and its locks released.
 The potential for deadlock exists in most locking protocols. Deadlocks are a
necessary evil.
 Starvation is also possible if the concurrency-control manager is badly designed. For
example:
 A transaction may be waiting for an X-lock on an item, while a sequence of
other transactions request and are granted an S-lock on the same item.
 The same transaction is repeatedly rolled back due to deadlocks.
 Concurrency control manager can be designed to prevent starvation.

The Two-Phase Locking Protocol:


 This is a protocol which ensures conflict-serializable schedules.
 Phase 1: Growing Phase
o transaction may obtain locks
o transaction may not release locks
 Phase 2: Shrinking Phase
o transaction may release locks
o transaction may not obtain locks
 The protocol assures serializability. It can be proved that the transactions can be
serialized in the order of their lock points (i.e. the point where a transaction
acquired its final lock).
 Two-phase locking does not ensure freedom from deadlocks
 Conservative (static) two-phase locking: a transaction acquires all the locks it needs
before it begins execution; this avoids deadlocks.
 Cascading roll-back is possible under two-phase locking. To avoid this, follow a
modified protocol called strict two-phase locking.
 Strict two-phase locking :Here a transaction must hold all its exclusive locks till it
commits/aborts.
 Rigorous two-phase locking: is even stricter: here all locks are held till
commit/abort. In this protocol transactions can be serialized in the order in which
they commit.
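A minimal sketch of how the two-phase rule itself can be enforced per transaction (our own illustration, layered on the lock-manager idea above): once a transaction has released any lock, its lock point has passed and further lock requests must be refused. Strict and rigorous 2PL then amount to deferring some or all unlock calls until commit/abort.

# Sketch: enforce the two-phase rule for one transaction. After the first
# unlock, the transaction is in its shrinking phase and may not acquire
# any new locks.

class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.shrinking = False   # False = growing phase
        self.held = set()

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock request in shrinking phase")
        self.held.add(item)      # in a full system, also ask the lock manager

    def unlock(self, item):
        self.held.discard(item)
        self.shrinking = True    # the lock point has passed

t = TwoPhaseTxn("T1")
t.lock("A"); t.lock("B")         # growing phase
t.unlock("A")                    # shrinking phase begins
try:
    t.lock("C")                  # illegal under 2PL
except RuntimeError as e:
    print(e)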

Lock Conversions:
 Two-phase locking with lock conversions:
– First Phase:
 can acquire a lock-S on item
 can acquire a lock-X on item
 can convert a lock-S to a lock-X (upgrade)
– Second Phase:
 can release a lock-S
 can release a lock-X
 can convert a lock-X to a lock-S (downgrade)
 This protocol assures serializability. But still relies on the programmer to insert the
various locking instructions.

Timestamp-Based Protocols:
 Each transaction is issued a timestamp when it enters the system. If an old
transaction Ti has time-stamp TS(Ti), a new transaction Tj is assigned time-stamp
TS(Tj) such that TS(Ti) <TS(Tj).
 The protocol manages concurrent execution such that the time-stamps determine
the serializability order.
 In order to assure such behavior, the protocol maintains for each data Q two
timestamp values:
o W-timestamp(Q) is the largest time-stamp of any transaction that executed
write(Q) successfully.
o R-timestamp(Q) is the largest time-stamp of any transaction that executed
read(Q) successfully.

 The timestamp ordering protocol ensures that any conflicting read and write
operations are executed in timestamp order.

 Suppose a transaction Ti issues a read(Q)


o If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was
already overwritten.
 Hence, the read operation is rejected, and Ti is rolled back.
o If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-
timestamp(Q) is set to max(R-timestamp(Q), TS(Ti)).
 Suppose that transaction Ti issues write(Q).
o If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was
needed previously, and the system assumed that value would never be
produced.
 Hence, the write operation is rejected, and Ti is rolled back.
o If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of
Q.
 Hence, this write operation is rejected, and Ti is rolled back.
o Otherwise, the write operation is executed, and W-timestamp(Q) is set to
TS(Ti).
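A minimal Python sketch of these checks (the encoding is our own; a real system would also restart a rolled-back transaction with a new timestamp):

# Sketch of the timestamp-ordering checks. rts/wts hold R-timestamp(Q)
# and W-timestamp(Q) per item; 0 means "never accessed".

class TimestampOrdering:
    def __init__(self):
        self.rts = {}   # item -> largest TS of a successful read
        self.wts = {}   # item -> largest TS of a successful write

    def read(self, ts, item):
        if ts < self.wts.get(item, 0):
            return False                   # value already overwritten: roll back
        self.rts[item] = max(self.rts.get(item, 0), ts)
        return True

    def write(self, ts, item):
        if ts < self.rts.get(item, 0):     # a later reader needed the old value
            return False                   # roll back
        if ts < self.wts.get(item, 0):     # obsolete write
            return False                   # roll back (Thomas' rule would skip)
        self.wts[item] = ts
        return True

proto = TimestampOrdering()
print(proto.write(1, "Q"))   # True: W-timestamp(Q) = 1
print(proto.read(2, "Q"))    # True: R-timestamp(Q) = 2
print(proto.write(1, "Q"))   # False: TS 1 < R-timestamp(Q) = 2, rolled back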

Example Use of the Protocol:


A partial schedule for several data items for transactions with timestamps 1, 2, 3, 4, 5

Correctness of Timestamp-Ordering Protocol:


 The timestamp-ordering protocol guarantees serializability since all the arcs in
the precedence graph are of the form Ti → Tj with TS(Ti) < TS(Tj).

Thus, there will be no cycles in the precedence graph


 Timestamp protocol ensures freedom from deadlock as no transaction ever waits.
 But the schedule may not be cascade-free, and may not even be recoverable.

Thomas’ Write Rule:


 Modified version of the timestamp-ordering protocol in which obsolete write
operations may be ignored under certain circumstances.
 When Ti attempts to write data item Q, if TS(Ti) < W-timestamp(Q), then Ti is
attempting to write an obsolete value of Q.
o Rather than rolling back Ti as the timestamp-ordering protocol would have
done, this write operation can be ignored.
 Otherwise this protocol is the same as the timestamp ordering protocol.
 Thomas' Write Rule allows greater potential concurrency.
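In the TimestampOrdering sketch above, only the write check changes; a hedged illustration:

# Thomas' write rule: an obsolete write (ts < W-timestamp) is silently
# ignored instead of rolling the transaction back.

def write_thomas(proto, ts, item):
    if ts < proto.rts.get(item, 0):
        return False          # a later read needed the old value: roll back
    if ts < proto.wts.get(item, 0):
        return True           # obsolete write: ignore it, do not roll back
    proto.wts[item] = ts
    return True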

Validation-Based Protocol:
 Execution of transaction Ti is done in three phases.
1. Read and execution phase: Transaction Ti writes only to temporary local
variables.
2. Validation phase: Transaction Ti performs a "validation test" to determine if the
local variables can be written without violating serializability.
3. Write phase: If Ti is validated, the updates are applied to the database;
otherwise, Ti is rolled back.
 The three phases of concurrently executing transactions can be interleaved, but
each transaction must go through the three phases in that order.
 Assume for simplicity that the validation and write phase occur together,
atomically and serially
 I.e., only one transaction executes validation/write at a time.
 Also called optimistic concurrency control since the transaction executes fully in the
hope that all will go well during validation.
 Each transaction Ti has 3 timestamps
 Start(Ti) : the time when Ti started its execution
 Validation(Ti): the time when Ti entered its validation phase
 Finish(Ti) : the time when Ti finished its write phase
 Serializability order is determined by timestamp given at validation time, to increase
concurrency.
 Thus TS(Ti) is given the value of Validation(Ti).
 This protocol is useful and gives a greater degree of concurrency if the probability of
conflicts is low,
 because the serializability order is not pre-decided, and
 relatively few transactions will have to be rolled back.

Validation Test for Transaction Tj:
 For every Ti with TS(Ti) < TS(Tj), one of the following conditions must hold:
 finish(Ti) < start(Tj)
 start(Tj) < finish(Ti) < validation(Tj) and the set of data items written by Ti
does not intersect with the set of data items read by Tj.

then validation succeeds and Tj can be committed. Otherwise, validation fails and Tj is
aborted.
 Justification: Either the first condition is satisfied, and there is no overlapped
execution, or the second condition is satisfied and
 The writes of Tj do not affect reads of Ti since they occur after Ti has finished
its reads.
 The writes of Ti do not affect reads of Tj since Tj does not read any item
written by Ti.
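A minimal Python sketch of this test (the field names are our own assumption):

# Validate Tj against every earlier-validated Ti. Each transaction record
# carries its start/validation/finish times and its read and write sets.

from dataclasses import dataclass, field

@dataclass
class Txn:
    start: int
    validation: int
    finish: int
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

def validate(tj, earlier):
    for ti in earlier:                        # all Ti with TS(Ti) < TS(Tj)
        if ti.finish < tj.start:
            continue                          # no overlap at all
        if (tj.start < ti.finish < tj.validation
                and not (ti.write_set & tj.read_set)):
            continue                          # Ti's writes miss Tj's reads
        return False                          # validation fails: abort Tj
    return True                               # Tj may commit

t1 = Txn(start=1, validation=2, finish=3, write_set={"A"})
t2 = Txn(start=2, validation=4, finish=5, read_set={"B"})
print(validate(t2, [t1]))   # True: T1 finished during T2's read phase but
                            # wrote no item that T2 read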

Multiple Granularity:
 Allow data items to be of various sizes and define a hierarchy of data granularities,
where the small granularities are nested within larger ones
 Can be represented graphically as a tree.
 When a transaction locks a node in the tree explicitly, it implicitly locks all the node's
descendants in the same mode.
 Granularity of locking (level in tree where locking is done):
o fine granularity (lower in tree): high concurrency, high locking overhead.
o coarse granularity (higher in tree): low locking overhead, low concurrency.

Example of Granularity Hierarchy:

The levels, starting from the coarsest (top) level, are:


 database
 area
 file
 record
Intention Lock Modes:
 In addition to S and X lock modes, there are three additional lock modes with
multiple granularity:
o Intention-shared (IS): indicates explicit locking at a lower level of the tree but
only with shared locks.

o Intention-exclusive (IX): indicates explicit locking at a lower level with
exclusive or shared locks.
o Shared and intention-exclusive (SIX): the subtree rooted at that node is
locked explicitly in shared mode, and explicit locking is being done at a lower
level with exclusive-mode locks.
 Intention locks allow a higher-level node to be locked in S or X mode without having
to check all descendant nodes.

The compatibility matrix for all lock modes is:

              IS     IX     S      SIX    X
       IS     yes    yes    yes    yes    no
       IX     yes    yes    no     no     no
       S      yes    no     yes    no     no
       SIX    yes    no     no     no     no
       X      no     no     no     no     no

Recovery and Atomicity:


 Modifying the database without ensuring that the transaction will commit may leave
the database in an inconsistent state.
 Consider transaction Ti that transfers $50 from account A to account B; goal is either
to perform all database modifications made by Ti or none at all.
 Several output operations may be required for Ti (to output A and B). A failure may
occur after one of these modifications has been made but before all of them are
made.
 To ensure atomicity despite failures, we first output information describing the
modifications to stable storage without modifying the database itself.
 We study two approaches:
 log-based recovery, and
 shadow-paging
 We assume (initially) that transactions run serially, that is, one after the other.

Log-Based Recovery:
 A log is kept on stable storage.
o The log is a sequence of log records, and maintains a record of update
activities on the database.
 When transaction Ti starts, it registers itself by writing a
<Ti start>log record
 Before Ti executes write(X), a log record <Ti, X, V1, V2> is written, where V1 is the
value of X before the write, and V2 is the value to be written to X.

o The log record notes that Ti has performed a write on data item X; X had value
V1 before the write, and will have value V2 after the write.
 When Ti finishes its last statement, the log record <Ti commit> is written.
 We assume for now that log records are written directly to stable storage (that is,
they are not buffered)
 Two approaches using logs
o Deferred database modification
o Immediate database modification
Deferred Database Modification:
 The deferred database modification scheme records all modifications to the log, but
defers all the writes to after partial commit.
 Assume that transactions execute serially
 Transaction starts by writing <Ti start> record to log.
 A write(X) operation results in a log record <Ti, X, V> being written, where V is the
new value for X
o Note: old value is not needed for this scheme
 The write is not performed on X at this time, but is deferred.
 When Ti partially commits, <Ti commit> is written to the log
 Finally, the log records are read and used to actually execute the previously deferred
writes.
 During recovery after a crash, a transaction needs to be redone if and only if both <Ti
start> and <Ti commit> are in the log.
 Redoing a transaction Ti ( redoTi) sets the value of all data items updated by the
transaction to the new values.
 Crashes can occur while
o the transaction is executing the original updates, or
o while recovery action is being taken
 Example transactions T0 and T1 (T0 executes before T1):
T0: read(A)                 T1: read(C)
    A := A – 50                 C := C – 100
    write(A)                    write(C)
    read(B)
    B := B + 50
    write(B)
 Below we show the log as it appears at three instances of time (assuming
A = 1000, B = 2000, C = 700 initially):
(a) <T0 start>, <T0, A, 950>, <T0, B, 2050>
(b) the records in (a), followed by <T0 commit>, <T1 start>, <T1, C, 600>
(c) the records in (b), followed by <T1 commit>
 If the log on stable storage at the time of the crash is as in case:
(a) No redo actions need to be taken

(b) redo(T0) must be performed since <T0 commit> is present
(c) redo(T0) must be performed followed by redo(T1) since
<T0 commit> and <T1 commit> are present
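A minimal Python sketch of this redo-only recovery (the log encoding is our own illustration): scan the log once, collect committed transactions, then replay only their writes.

# Deferred modification: recovery only ever redoes. A transaction's writes
# are replayed iff both <Ti start> and <Ti commit> appear in the log.

def recover_deferred(log, db):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    for rec in log:
        if rec[0] == "write" and rec[1] in committed:
            _, txn, item, new_value = rec
            db[item] = new_value            # redo: install the new value

db = {"A": 1000, "B": 2000, "C": 700}
log = [("start", "T0"), ("write", "T0", "A", 950), ("write", "T0", "B", 2050),
       ("commit", "T0"),
       ("start", "T1"), ("write", "T1", "C", 600)]   # crash before <T1 commit>
recover_deferred(log, db)
print(db)   # {'A': 950, 'B': 2050, 'C': 700}: T0 redone, T1's write ignored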
Immediate Database Modification:
 The immediate database modification scheme allows database updates of an
uncommitted transaction to be made as the writes are issued
o since undoing may be needed, update logs must have both old value and new
value
 Update log record must be written before database item is written
o We assume that the log record is output directly to stable storage
o Can be extended to postpone log record output, so long as prior to execution
of an output(B) operation for a data block B, all log records corresponding to
items in B are flushed to stable storage
 Output of updated blocks can take place at any time before or after transaction
commit
 Order in which blocks are output can be different from the order in which they are
written.
Immediate Database Modification Example:
Log (assuming A = 1000, B = 2000, C = 700 initially):
<T0 start>
<T0, A, 1000, 950>
<T0, B, 2000, 2050>
<T0 commit>
<T1 start>
<T1, C, 700, 600>
<T1 commit>

 Recovery procedure has two operations instead of one:


o undo(Ti) restores the value of all data items updated by Ti to their old values,
going backwards from the last log record for Ti
o redo(Ti) sets the value of all data items updated by Ti to the new values, going
forward from the first log record for Ti
 Both operations must be idempotent
o That is, even if the operation is executed multiple times the effect is the same
as if it is executed once

 Needed since operations may get re-executed during recovery


 When recovering after failure:
o Transaction Ti needs to be undone if the log contains the record
<Ti start>, but does not contain the record <Ti commit>.
o Transaction Ti needs to be redone if the log contains both the record <Ti
start> and the record <Ti commit>.
 Undo operations are performed first, then redo operations.
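A minimal Python sketch of this recovery procedure (our own log encoding): undo incomplete transactions scanning backwards, then redo committed ones scanning forwards.

# Immediate modification: undo incomplete transactions (backwards, restoring
# old values), then redo committed ones (forwards, installing new values).

def recover_immediate(log, db):
    committed = {rec[1] for rec in log if rec[0] == "commit"}
    started = {rec[1] for rec in log if rec[0] == "start"}
    for rec in reversed(log):                 # undo pass, backwards
        if rec[0] == "write" and rec[1] in started - committed:
            _, txn, item, old, new = rec
            db[item] = old                    # restore the old value
    for rec in log:                           # redo pass, forwards
        if rec[0] == "write" and rec[1] in committed:
            _, txn, item, old, new = rec
            db[item] = new                    # install the new value

db = {"A": 950, "B": 2050, "C": 600}          # state on disk at crash time
log = [("start", "T0"), ("write", "T0", "A", 1000, 950),
       ("write", "T0", "B", 2000, 2050), ("commit", "T0"),
       ("start", "T1"), ("write", "T1", "C", 700, 600)]  # T1 did not commit
recover_immediate(log, db)
print(db)   # {'A': 950, 'B': 2050, 'C': 700}: T0 redone, T1 undone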

Immediate DB Modification Recovery Example:


Below we show the log as it appears at three instances of time:
(a) <T0 start>, <T0, A, 1000, 950>, <T0, B, 2000, 2050>
(b) the records in (a), followed by <T0 commit>, <T1 start>, <T1, C, 700, 600>
(c) the records in (b), followed by <T1 commit>

Recovery actions in each case above are:


(a) undo(T0): B is restored to 2000 and A to 1000.
(b) undo(T1) and redo(T0): C is restored to 700, and then A and B are set to 950 and 2050
respectively.
(c) redo(T0) and redo(T1): A and B are set to 950 and 2050 respectively. Then C is set to
600.

Checkpoints:
 Problems in the recovery procedure as discussed earlier:
o searching the entire log is time-consuming
o we might unnecessarily redo transactions which have already output their
updates to the database
 Streamline recovery procedure by periodically performing checkpointing
o Output all log records currently residing in main memory onto stable storage.
o Output all modified buffer blocks to the disk.
o Write a log record < checkpoint> onto stable storage.
 During recovery we need to consider only the most recent transaction Ti that started
before the checkpoint, and transactions that started after Ti.
o Scan backwards from end of log to find the most recent <checkpoint> record
o Continue scanning backwards till a record <Ti start> is found.
o Need only consider the part of log following above start record. Earlier part
of log can be ignored during recovery, and can be erased whenever desired.
o For all transactions (starting from Ti or later) with no <Ti commit>, execute
undo(Ti). (Done only in case of immediate modification.)
o Scanning forward in the log, for all transactions starting from Ti or later
with a <Ti commit>, execute redo(Ti).

Shadow Paging:
 Shadow paging is an alternative to log-based recovery; this scheme is useful if
transactions execute serially
 Idea: maintain two page tables during the lifetime of a transaction –the current page
table, and the shadow page table
 Store the shadow page table in nonvolatile storage, such that state of the database
prior to transaction execution may be recovered.
o Shadow page table is never modified during execution.
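A minimal Python sketch of the idea (names and structure are our own): updates go to fresh pages referenced only by the current page table, and commit atomically makes the current table the new shadow.

# Shadow paging sketch: copy-on-write pages plus two page tables.
# commit swaps "db_pointer" to the current table; abort discards it.

class ShadowDB:
    def __init__(self, pages):
        self.pages = dict(pages)              # physical page id -> contents
        self.shadow = {k: k for k in pages}   # logical page -> physical page
        self.current = dict(self.shadow)
        self.next_id = max(pages) + 1

    def write(self, logical, value):
        new_page = self.next_id; self.next_id += 1
        self.pages[new_page] = value          # copy-on-write to a fresh page
        self.current[logical] = new_page      # only the current table changes

    def commit(self):
        self.shadow = dict(self.current)      # db_pointer now points here

    def abort(self):
        self.current = dict(self.shadow)      # old copy retained, updates dropped

db = ShadowDB({0: "A=1000", 1: "B=2000"})
db.write(0, "A=950")
db.abort()
print(db.pages[db.shadow[0]])   # A=1000: the shadow table was never touched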

Recovery With Concurrent Transactions:


 We modify the log-based recovery schemes to allow multiple transactions to execute
concurrently.
o All transactions share a single disk buffer and a single log
o A buffer block can have data items updated by one or more transactions
 We assume concurrency control using strict two-phase locking;
o i.e. the updates of uncommitted transactions should not be visible to other
transactions
 Otherwise how to perform undo if T1 updates A, then T2 updates A
and commits, and finally T1 has to abort?
 Logging is done as described earlier.
o Log records of different transactions may be interspersed in the log.
 The checkpointing technique and actions taken on recovery have to be changed
o since several transactions may be active when a checkpoint is performed.
 Checkpoints are performed as before, except that the checkpoint log record is now
of the form
< checkpoint L>
where L is the list of transactions active at the time of the checkpoint
o We assume no updates are in progress while the checkpoint is carried out
(will relax this later)
 When the system recovers from a crash, it first does the following:
o Initialize undo-list and redo-list to empty

o Scan the log backwards from the end, stopping when the first <checkpoint L>
record is found.
For each record found during the backward scan:
 if the record is <Ti commit>, add Ti to redo-list
 if the record is <Ti start>, then if Ti is not in redo-list, add Ti to undo-
list
o For every Ti in L, if Ti is not in redo-list, add Ti to undo-list
 At this point undo-list consists of incomplete transactions which must be undone,
and redo-list consists of finished transactions that must be redone.
 Recovery now continues as follows:
o Scan log backwards from most recent record, stopping when
<Ti start> records have been encountered for every Ti in undo-list.
 During the scan, perform undo for each log record that belongs to a
transaction in undo-list.
o Locate the most recent <checkpoint L> record.
o Scan log forwards from the <checkpoint L> record till the end of the log.
 During the scan, perform redo for each log record that belongs to a
transaction on redo-list
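A minimal Python sketch of the two-list construction (our own log encoding); it mirrors the example log in the next subsection, producing undo-list = {T1, T2} and redo-list = {T3}:

# Build undo-list and redo-list with a single backward scan that stops at
# the first <checkpoint L> record.

def build_lists(log):
    redo_list, undo_list = set(), set()
    for rec in reversed(log):                 # scan backwards from the end
        kind = rec[0]
        if kind == "commit":
            redo_list.add(rec[1])
        elif kind == "start" and rec[1] not in redo_list:
            undo_list.add(rec[1])
        elif kind == "checkpoint":
            for t in rec[1]:                  # transactions active at checkpoint
                if t not in redo_list:
                    undo_list.add(t)
            break                             # stop at <checkpoint L>
    return undo_list, redo_list

log = [("start", "T0"), ("commit", "T0"),
       ("start", "T1"), ("start", "T2"),
       ("checkpoint", ["T1", "T2"]),
       ("start", "T3"), ("commit", "T3")]
print(build_lists(log))   # undo-list = {T1, T2}, redo-list = {T3}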
Example of Recovery:
 Go over the steps of the recovery algorithm on the following log:
<T0 start>
<T0, A, 0, 10>
<T0 commit>
<T1 start> --stop backward for undo
<T1, B, 0, 10>
<T2 start>
<T2, C, 0, 10>
<T2, C, 10, 20>
<checkpoint {T1, T2}> -- start forward for redo
<T3 start>
<T3, A, 10, 20>
<T3, D, 0, 10>
<T3 commit>
Buffer Management:

Log record buffering:


 Log records are buffered in main memory, instead of being output directly to
stable storage.
o Log records are output to stable storage when a block of log records in the
buffer is full, or a log force operation is executed.
 Log force is performed to commit a transaction by forcing all its log records
(including the commit record) to stable storage.
 Several log records can thus be output using a single output operation, reducing the
I/O cost.
 The rules below must be followed if log records are buffered:
o Log records are output to stable storage in the order in which they are
created.

o Transaction Ti enters the commit state only when the log record
<Ti commit> has been output to stable storage.
o Before a block of data in main memory is output to the database, all log
records pertaining to data in that block must have been output to stable
storage.
 This rule is called the write-ahead logging or WAL rule
 Strictly speaking WAL only requires undo information to be
output
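A minimal sketch of a buffered log obeying these rules (structure and names are our own): records reach stable storage in creation order, and before a data block goes to disk, every log record for updates in that block is forced out first.

# Sketch of write-ahead logging with a buffered log. flush_up_to forces all
# log records up to a given LSN to stable storage, preserving log order.

class LogBuffer:
    def __init__(self):
        self.stable = []          # records already on stable storage
        self.buffer = []          # records still in main memory

    def append(self, record):
        self.buffer.append(record)
        return len(self.stable) + len(self.buffer) - 1   # the record's LSN

    def flush_up_to(self, lsn):
        while self.buffer and len(self.stable) <= lsn:
            self.stable.append(self.buffer.pop(0))       # in creation order

def output_block(block, log, disk):
    # WAL rule: flush every log record describing updates in this block first.
    log.flush_up_to(max(block["lsns"]))
    disk[block["id"]] = block["data"]

log = LogBuffer()
lsn = log.append(("T1", "A", 1000, 950))     # undo/redo record for A
block = {"id": "blk7", "data": {"A": 950}, "lsns": [lsn]}
disk = {}
output_block(block, log, disk)
print(log.stable)   # the log record reached stable storage before blk7 did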

Database Buffering:
 Database maintains an in-memory buffer of data blocks
o When a new block is needed, if buffer is full an existing block needs to be
removed from buffer
o If the block chosen for removal has been updated, it must be output to disk
 As a result of the write-ahead logging rule, if a block with uncommitted updates is
output to disk, log records with undo information for the updates are output to the
log on stable storage first.
 No updates should be in progress on a block when it is output to disk. Can be
ensured as follows.
o Before writing a data item, transaction acquires exclusive lock on block
containing the data item
o Lock can be released once the write is completed.
 Such locks held for short duration are called latches.
o Before a block is output to disk, the system acquires an exclusive latch on the
block
 Ensures no update can be in progress on the block
 Database buffer can be implemented either
o in an area of real main-memory reserved for the database, or
o in virtual memory
 Implementing buffer in reserved main-memory has drawbacks:
o Memory is partitioned before-hand between database buffer and
applications, limiting flexibility.
o Needs may change, and although operating system knows best how memory
should be divided up at any time, it cannot change the partitioning of
memory.
Failure with Loss of Nonvolatile Storage:
 So far we assumed no loss of non-volatile storage
 Technique similar to checkpointing used to deal with loss of non-volatile storage
o Periodically dump the entire content of the database to stable storage
o No transaction may be active during the dump procedure; a procedure
similar to checkpointing must take place
 Output all log records currently residing in main memory onto stable
storage.
 Output all buffer blocks onto the disk.
 Copy the contents of the database to stable storage.
 Output a record <dump> to log on stable storage.
o To recover from disk failure

 Restore the database from the most recent dump.
 Consult the log and redo all transactions that committed after the
dump.
 Can be extended to allow transactions to be active during dump;
known as fuzzy dump or online dump
o Will study fuzzy checkpointing later

ARIES Recovery Algorithm:


 ARIES is a state-of-the-art recovery method
o Incorporates numerous optimizations to reduce overheads during normal
processing and to speed up recovery
o The “advanced recovery algorithm” we studied earlier is modeled after
ARIES, but greatly simplified by removing optimizations
 Unlike the advanced recovery algorithm, ARIES
o Uses log sequence number (LSN) to identify log records
 Stores LSNs in pages to identify what updates have already been
applied to a database page
o Physiological redo
o Dirty page table to avoid unnecessary redos during recovery
o Fuzzy checkpointing that only records information about dirty pages, and
does not require dirty pages to be written out at checkpoint time
 More coming up on each of the above …
 ARIES uses several data structures
o Log sequence number (LSN) identifies each log record
 Must be sequentially increasing
 Typically an offset from beginning of log file to allow fast access
 Easily extended to handle multiple log files
o Page LSN
o Log records of several different types
o Dirty page table

ARIES Data Structures: Log Record

 Each log record contains the LSN of the previous log record of the same transaction
o The LSN in the log record may be implicit
 Special redo-only log record called compensation log record (CLR) used to log
actions taken during recovery that never need to be undone
o Serves the role of operation-abort log records used in advanced recovery
algorithm
o Has a field UndoNextLSN to note next (earlier) record to be undone
 Records in between would have already been undone
 Required to avoid repeated undo of already undone actions
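A minimal sketch of the per-record fields described above (field names are our own assumption, not ARIES's exact on-disk layout):

# Sketch of an ARIES-style log record: each record carries its own LSN and
# the previous LSN of the same transaction; a CLR also carries UndoNextLSN.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AriesLogRecord:
    lsn: int                       # sequentially increasing log sequence number
    txn: str
    prev_lsn: Optional[int]        # previous record of the same transaction
    kind: str                      # e.g. "update", "commit", "CLR"
    undo_next_lsn: Optional[int] = None   # CLR only: next record to undo

# A CLR written while undoing the record at LSN 42 sets undo_next_lsn to
# that record's prev_lsn, so already-undone records are never undone again.
clr = AriesLogRecord(lsn=50, txn="T1", prev_lsn=47, kind="CLR",
                     undo_next_lsn=40)
print(clr)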

Remote Backup Systems:


 Remote backup systems provide high availability by allowing transaction processing
to continue even if the primary site is destroyed.
