
UNIT –IV

Transaction Management

TRANSACTION CONCEPT

 A Transaction is a unit of program execution that accesses and possibly updates various
data items.

 A transaction must see a consistent database.

 During transaction execution the database may be temporarily inconsistent.

 When the transaction completes successfully (is committed), the database must be
consistent.

 After a transaction commits, the changes it has made to the database persist, even if there
are system failures.

 Multiple transactions can execute in parallel.

Two main issues to deal with:

 Failures of various kinds, such as hardware failures and system crashes

 Concurrent execution of multiple transactions

Atomicity requirement — in the fund-transfer example below, if the transaction fails after
step 3 and before step 6, money will be "lost", leading to an inconsistent database state. The
failure could be due to software or hardware; the system should ensure that updates of a
partially executed transaction are not reflected in the database.

Durability requirement — once the user has been notified that the transaction has completed
(i.e., the transfer of the $50 has taken place), the updates to the database by the transaction
must persist even if there are software or hardware failures.
Consistency requirement — in the above example, the sum of A and B is unchanged by the
execution of the transaction. In general, consistency requirements include:

 explicitly specified integrity constraints, such as primary keys and foreign keys

 implicit integrity constraints; for example, the sum of balances of all accounts, minus the
sum of loan amounts, must equal the value of cash-in-hand

A transaction must see a consistent database. During transaction execution the database may
be temporarily inconsistent. When the transaction completes successfully the database must
be consistent. Erroneous transaction logic can lead to inconsistency.

Isolation requirement — if, between steps 3 and 6, another transaction T2 is allowed to
access the partially updated database, it will see an inconsistent database (the sum A + B
will be less than it should be):
        T1                          T2
1. read(A)
2. A := A – 50
3. write(A)
                                    read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)

Isolation can be ensured trivially by running transactions serially, that is, one after the
other. However, executing multiple transactions concurrently has significant benefits.
ACID Properties

 Atomicity. Either all operations of the transaction are properly reflected in the

database or none are.

 Consistency. Execution of a transaction in isolation preserves the consistency of the

database.

 Isolation. Although multiple transactions may execute concurrently, each transaction


must be unaware of other concurrently executing transactions. Intermediate
transaction results must be hidden from other concurrently executed transactions.
 That is, for every pair of transactions Ti and Tj, it appears to Ti that
either Tj finished execution before Ti started, or Tj started execution
after Ti finished.

 Durability. After a transaction completes successfully, the changes it has made to the
database persist, even if there are system failures.

Example of Fund Transfer


Transaction to transfer $50 from account A to account B:

1. read(A)

2. A := A – 50

3. write(A)

4. read(B)

5. B := B + 50

6. write(B)

 Atomicity requirement — if the transaction fails after step 3 and before step 6, the system
should ensure that its updates are not reflected in the database, else an inconsistency will result.

 Consistency requirement — the sum of A and B is unchanged by the execution of the
transaction.

 Isolation requirement — if, between steps 3 and 6, another transaction is allowed to access
the partially updated database, it will see an inconsistent database (the sum A + B will be
less than it should be).

Isolation can be ensured trivially by running transactions serially, that is, one after the other.

However, executing multiple transactions concurrently has significant benefits.


 Durability requirement—once the user has been notified that the transaction has completed
(i.e., the transfer of the $50 has taken place), the updates to the database by the transaction must
persist despite failures.
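To make atomicity concrete, here is a minimal Python sketch of the transfer (the accounts
dictionary, the transfer function, and the TransactionAborted exception are inventions for
this illustration, not a DBMS API): a snapshot of the consistent state is taken before the six
steps and restored if the transaction fails part-way.

    # Hypothetical sketch: atomicity for the $50 fund-transfer example.
    accounts = {"A": 100, "B": 200}

    class TransactionAborted(Exception):
        pass

    def transfer(amount):
        snapshot = dict(accounts)        # remember the consistent state
        try:
            accounts["A"] -= amount      # steps 1-3: read(A), A := A - 50, write(A)
            if accounts["A"] < 0:        # simulate a failure between steps 3 and 6
                raise TransactionAborted("insufficient funds")
            accounts["B"] += amount      # steps 4-6: read(B), B := B + 50, write(B)
        except TransactionAborted:
            accounts.clear()
            accounts.update(snapshot)    # undo partial updates: atomicity
            raise

    transfer(50)
    assert accounts["A"] + accounts["B"] == 300   # consistency: A + B unchanged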
Transaction State

• Active – the initial state; the transaction stays in this state while it is executing

• Partially committed – after the final statement has been executed.

• Failed -- after the discovery that normal execution can no longer proceed.

• Aborted – after the transaction has been rolled back and the database restored to its
state prior to the start of the transaction. Two options after it has been aborted:

– restart the transaction; this can be done only if there was no internal logical error

– kill the transaction

• Committed – after successful completion.
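The legal transitions between these five states can be captured in a small table. A
hypothetical sketch (names invented for this illustration):

    # Sketch of the transaction state diagram.
    TRANSITIONS = {
        "active":              {"partially committed", "failed"},
        "partially committed": {"committed", "failed"},
        "failed":              {"aborted"},
        "aborted":             set(),   # terminal: restart or kill the transaction
        "committed":           set(),   # terminal
    }

    def move(state, new_state):
        if new_state not in TRANSITIONS[state]:
            raise ValueError(f"illegal transition: {state} -> {new_state}")
        return new_state

    s = "active"
    s = move(s, "partially committed")
    s = move(s, "committed")            # successful completion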


Implementation Of Atomicity And Durability

The recovery-management component of a database system implements the support for
atomicity and durability. Example: the shadow-database scheme. All updates are made on a
shadow copy of the database, and db_pointer is made to point to the updated shadow copy
after the transaction reaches partial commit and all updated pages have been flushed to disk.

db_pointer always points to the current consistent copy of the database. In case the
transaction fails, the old consistent copy pointed to by db_pointer can be used, and the
shadow copy can be deleted.

The shadow-database scheme:

 assumes that only one transaction is active at a time

 assumes disks do not fail

 is useful for text editors, but extremely inefficient for large databases (why?)

A variant called shadow paging reduces the copying of data, but is still not practical for
large databases and does not handle concurrent transactions.
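A rough Python sketch of the shadow-database idea, assuming the whole database fits in a
single file (the file names and the update callback are invented). The essential point is that
the pointer is switched to the shadow copy in one atomic step (os.replace), and only after
the updated copy has been flushed to disk; a real scheme would also garbage-collect the old
copy.

    import os, shutil

    def read_pointer():
        with open("db_pointer") as f:       # names the current consistent copy
            return f.read().strip()

    def run_transaction(update):
        current = read_pointer()
        shadow = current + ".shadow"
        shutil.copyfile(current, shadow)    # all updates go to a shadow copy
        update(shadow)                      # transaction modifies the shadow only
        with open(shadow, "rb") as f:
            os.fsync(f.fileno())            # flush updated pages to disk
        with open("db_pointer.tmp", "w") as f:
            f.write(shadow)
            f.flush()
            os.fsync(f.fileno())
        os.replace("db_pointer.tmp", "db_pointer")   # atomic pointer switch
        # If we crash before the switch, db_pointer still names the old,
        # consistent copy, and the half-written shadow can simply be deleted.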

Concurrent Executions

Multiple transactions are allowed to run concurrently in the system. The advantages are:

 increased processor and disk utilization, leading to better transaction throughput; for
example, one transaction can be using the CPU while another is reading from or writing to
the disk

 reduced average response time for transactions: short transactions need not wait behind
long ones

Concurrency control schemes are mechanisms to achieve isolation, that is, to control the
interaction among the concurrent transactions in order to prevent them from destroying the
consistency of the database.

Schedule – a sequence of instructions that specifies the chronological order in which
instructions of concurrent transactions are executed. A schedule for a set of transactions
must consist of all instructions of those transactions, and must preserve the order in which
the instructions appear in each individual transaction.

A transaction that successfully completes its execution will have a commit instruction as its
last statement; by default, a transaction is assumed to execute a commit instruction as its
last step.

A transaction that fails to successfully complete its execution will have an abort instruction
as the last statement.

Schedule 1

• Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to B.

• A serial schedule in which T1 is followed by T2 :

Schedule 2

Schedule 3
Let T1 and T2 be the transactions defined previously. The following schedule is not a serial
schedule, but it is equivalent to Schedule 1.

Serializability

Basic assumption – each transaction preserves database consistency. Thus, serial execution
of a set of transactions preserves database consistency. A (possibly concurrent) schedule is
serializable if it is equivalent to a serial schedule. Different forms of schedule equivalence
give rise to the notions of:

1. conflict serializability

2. view serializability

Simplified view of transactions: we ignore operations other than read and write instructions,
and we assume that transactions may perform arbitrary computations on data in local
buffers in between reads and writes. Our simplified schedules consist of only read and write
instructions.

Conflicting instructions: instructions li and lj of transactions Ti and Tj respectively conflict
if and only if there exists some item Q accessed by both li and lj, and at least one of these
instructions wrote Q.

1. li = read(Q), lj = read(Q). li and lj don't conflict.

2. li = read(Q), lj = write(Q). They conflict.

3. li = write(Q), lj = read(Q). They conflict.

4. li = write(Q), lj = write(Q). They conflict.
Intuitively, a conflict between li and lj forces a (logical) temporal order between them. If li
and lj are consecutive in a schedule and they do not conflict, their results would remain the
same even if they had been interchanged in the schedule.
Conflict Serializability

If a schedule S can be transformed into a schedule S´ by a series of swaps of non-conflicting


instructions, we say that S and S´ are conflict equivalent.

We say that a schedule S is conflict serializable if it is conflict equivalent to a serial schedule.

Schedule 3 can be transformed into Schedule 6, a serial schedule where T2 follows T1, by a
series of swaps of non-conflicting instructions. Therefore Schedule 3 is conflict serializable.

Example of a schedule that is not conflict serializable: we are unable to swap instructions in
the schedule to obtain either the serial schedule < T3, T4 >, or the serial schedule < T4, T3 >.

View Serializability

Let S and S´ be two schedules with the same set of transactions. S and S´ are view equivalent
if the following three conditions are met for each data item Q:

1. If in schedule S transaction Ti reads the initial value of Q, then in schedule S´ transaction
Ti must also read the initial value of Q.

2. If in schedule S transaction Ti executes read(Q), and that value was produced by
transaction Tj (if any), then in schedule S´ transaction Ti must also read the value of Q that
was produced by the same write(Q) operation of transaction Tj.

3. The transaction (if any) that performs the final write(Q) operation in schedule S must also
perform the final write(Q) operation in schedule S´.

As can be seen, view equivalence is based purely on reads and writes alone.

A schedule S is view serializable if it is view equivalent to a serial schedule. Every conflict
serializable schedule is also view serializable. Below is a schedule which is view serializable
but not conflict serializable.
 What serial schedule is above equivalent to?

 Every view serializable schedule that is not conflict serializable has blind writes.

Other Notions of Serializability


The schedule below produces the same outcome as the serial schedule < T1, T5 >, yet is not
conflict equivalent or view equivalent to it. Determining such equivalence requires analysis
of operations other than read and write.

Recoverability

Recoverable schedule — if a transaction Tj reads a data item previously written by a
transaction Ti, then the commit operation of Ti must appear before the commit operation of
Tj. The following schedule (Schedule 11) is not recoverable if T9 commits immediately after
the read.

If T8 should abort, T9 would have read (and possibly shown to the user) an inconsistent
database state. Hence, the database must ensure that schedules are recoverable.

Cascading Rollbacks

Cascading rollback – a single transaction failure leads to a series of transaction rollbacks.
Consider the following schedule where none of the transactions has yet committed (so the
schedule is recoverable). If T10 fails, T11 and T12 must also be rolled back. This can lead to
the undoing of a significant amount of work.

Cascadeless schedules — cascading rollbacks cannot occur; for each pair of transactions Ti
and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti
appears before the read operation of Tj. Every cascadeless schedule is also recoverable. It is
desirable to restrict the schedules to those that are cascadeless.

Concurrency Control

A database must provide a mechanism that will ensure that all possible schedules are either
conflict or view serializable, and are recoverable and preferably cascadeless. A policy in
which only one transaction can execute at a time generates serial schedules, but provides a
poor degree of concurrency. (Are serial schedules recoverable/cascadeless?) Testing a
schedule for serializability after it has executed is a little too late! The goal is to develop
concurrency control protocols that will assure serializability.

Implementation Of Isolation

Schedules must be conflict or view serializable, and recoverable, for the sake of database
consistency, and preferably cascadeless. A policy in which only one transaction can execute
at a time generates serial schedules, but provides a poor degree of concurrency.
Concurrency-control schemes trade off between the amount of concurrency they allow and
the amount of overhead that they incur. Some schemes allow only conflict-serializable
schedules to be generated, while others allow view-serializable schedules that are not
conflict-serializable.
Testing For Serializability

• Consider some schedule of a set of transactions T1, T2, ..., Tn

• Precedence graph — a directed graph where the vertices are the transactions (names).

• We draw an arc from Ti to Tj if the two transactions conflict, and Ti accessed

the data item on which the conflict arose earlier.

• We may label the arc by the item that was accessed.


Test for Conflict Serializability
A schedule is conflict serializable if and only if its precedence graph is acyclic.
Cycle-detection algorithms exist which take order n² time, where n is the number of vertices
in the graph. (Better algorithms take order n + e, where e is the number of edges.)
If precedence graph is acyclic, the serializability order can be obtained by a topological
sorting of the graph. This is a linear order consistent with the partial order of the graph.

For example, a serializability order for Schedule A would be T5 → T1 → T3 → T2 → T4.
Are there others?
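The test is easy to sketch in code. The following hypothetical helpers build the precedence
graph from a schedule given as (transaction, operation, item) triples and topologically sort
it; failure to sort means a cycle, i.e., the schedule is not conflict serializable.

    from collections import defaultdict

    def precedence_graph(schedule):
        """schedule: list of (txn, op, item) triples with op in {'R', 'W'}."""
        edges = defaultdict(set)
        for i, (ti, op_i, qi) in enumerate(schedule):
            for tj, op_j, qj in schedule[i + 1:]:
                if qi == qj and ti != tj and "W" in (op_i, op_j):
                    edges[ti].add(tj)       # conflict, and Ti accessed Q first
        return edges

    def serial_order(edges, txns):
        """Kahn's topological sort; returns None if the graph has a cycle."""
        indeg = {t: 0 for t in txns}
        for t in edges:
            for u in edges[t]:
                indeg[u] += 1
        ready = [t for t in txns if indeg[t] == 0]
        order = []
        while ready:
            t = ready.pop()
            order.append(t)
            for u in edges[t]:
                indeg[u] -= 1
                if indeg[u] == 0:
                    ready.append(u)
        return order if len(order) == len(txns) else None

    # A Schedule-3-style interleaving, conflict serializable as T1 -> T2:
    s = [("T1", "R", "A"), ("T1", "W", "A"), ("T2", "R", "A"), ("T2", "W", "A"),
         ("T1", "R", "B"), ("T1", "W", "B"), ("T2", "R", "B"), ("T2", "W", "B")]
    print(serial_order(precedence_graph(s), {"T1", "T2"}))   # ['T1', 'T2']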

Test for View Serializability

The precedence graph test for conflict serializability cannot be used directly to test for view
serializability. An extension to test for view serializability has cost exponential in the size of
the precedence graph. The problem of checking if a schedule is view serializable falls in the
class of NP-complete problems; thus the existence of an efficient algorithm is extremely
unlikely.

However, practical algorithms that just check some sufficient conditions for view
serializability can still be used.

Concurrency Control

Concurrency Control vs. Serializability Tests

Concurrency-control protocols allow concurrent schedules, but ensure that the schedules
are conflict/view serializable, and are recoverable and cascadeless. Concurrency control
protocols generally do not examine the precedence graph as it is being created; instead, a
protocol imposes a discipline that avoids nonserializable schedules. Different concurrency
control protocols provide different tradeoffs between the amount of concurrency they allow
and the amount of overhead that they incur. Tests for serializability help us understand why
a concurrency control protocol is correct.
Weak Levels of Consistency

Some applications are willing to live with weak levels of consistency, allowing schedules
that are not serializable. For example, a read-only transaction that wants to get an
approximate total balance of all accounts, or database statistics computed for query
optimization, which can be approximate (why?). Such transactions need not be serializable
with respect to other transactions; they trade accuracy for performance.

Levels of Consistency in SQL-92

Serializable — the default.

Repeatable read — only committed records may be read, and repeated reads of the same
record must return the same value. However, a transaction may not be serializable: it may
find some records inserted by a transaction but not find others.

Read committed — only committed records may be read, but successive reads of a record
may return different (but committed) values.

Read uncommitted — even uncommitted records may be read.

Transaction Definition in SQL

The data manipulation language must include a construct for specifying the set of actions
that comprise a transaction. In SQL, a transaction begins implicitly. A transaction in SQL
ends by:

 Commit work — commits the current transaction and begins a new one.

 Rollback work — causes the current transaction to abort.

In almost all database systems, by default, every SQL statement also commits implicitly if it
executes successfully. The implicit commit can be turned off by a database directive, e.g., in
JDBC, connection.setAutoCommit(false);
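As a concrete sketch of commit and rollback, here is the same pattern using Python's
built-in sqlite3 module (the table and values are invented for the example); statements
issued on the connection join an implicit transaction until commit() or rollback() is called:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 200)])
    conn.commit()

    try:
        conn.execute("UPDATE account SET balance = balance - 50 WHERE name = 'A'")
        conn.execute("UPDATE account SET balance = balance + 50 WHERE name = 'B'")
        conn.commit()        # like SQL "commit work"
    except sqlite3.Error:
        conn.rollback()      # like SQL "rollback work": both updates are undone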

Lock Based Protocols

A lock is a mechanism to control concurrent access to a data item

Fig: Lock-compatibility matrix

Data items can be locked in two modes:

1. Exclusive (X) mode: the data item can be both read and written. An X-lock is requested
using the lock-X instruction.

2. Shared (S) mode: the data item can only be read. An S-lock is requested using the lock-S
instruction.

Lock requests are made to concurrency-control manager. Transaction can proceed only after
request is granted.

 A transaction may be granted a lock on an item if the requested lock is compatible with
locks already held on the item by other transactions

 Any number of transactions can hold shared locks on an item, but if any transaction
holds an exclusive lock on the item, no other transaction may hold any lock on the item.

 If a lock cannot be granted, the requesting transaction is made to wait till all
incompatible locks held by other transactions have been released. The lock is then
granted.
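The granting rule follows directly from the lock-compatibility matrix. A small sketch
(helper names invented for this illustration):

    # Lock-compatibility matrix: COMPATIBLE[requested][held]
    COMPATIBLE = {
        "S": {"S": True,  "X": False},
        "X": {"S": False, "X": False},
    }

    def can_grant(requested, held_modes):
        """Grant iff the requested mode is compatible with every lock
        currently held on the item by other transactions."""
        return all(COMPATIBLE[requested][h] for h in held_modes)

    print(can_grant("S", ["S", "S"]))   # True: any number of shared locks
    print(can_grant("X", ["S"]))        # False: requester must wait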

Example of a transaction performing locking:

T2: lock-S(A);
    read(A);
    unlock(A);
    lock-S(B);
    read(B);
    unlock(B);
    display(A+B)

Locking as above is not sufficient to guarantee serializability — if A and B get updated in-
between the read of A and B, the displayed sum would be wrong.

• A locking protocol is a set of rules followed by all transactions while requesting and
releasing locks. Locking protocols restrict the set of possible schedules.

Pitfalls of Lock-Based Protocols

Consider the partial schedule in which neither T3 nor T4 can make progress — executing
lock-S(B) causes T4 to wait for T3 to release its lock on B, while executing lock-X(A) causes
T3 to wait for T4 to release its lock on A. Such a situation is called a deadlock. To handle a
deadlock, one of T3 or T4 must be rolled back and its locks released. The potential for
deadlock exists in most locking protocols. Deadlocks are a necessary evil.
Starvation is also possible if the concurrency-control manager is badly designed. For example:

 A transaction may be waiting for an X-lock on an item, while a sequence of other
transactions request and are granted an S-lock on the same item.

 The same transaction is repeatedly rolled back due to deadlocks.

The concurrency-control manager can be designed to prevent starvation.

Two-Phase Locking Protocol

This is a protocol which ensures conflict-serializable schedules.

Phase 1: Growing Phase

– the transaction may obtain locks

– the transaction may not release locks

Phase 2: Shrinking Phase

– the transaction may release locks

– the transaction may not obtain locks

The protocol assures serializability. It can be proved that the transactions can be serialized
in the order of their lock points (i.e., the point where a transaction acquired its final lock).
All locks are released after commit or abort.
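A minimal sketch of how the two phases could be enforced per transaction (class and
method names invented; a real lock manager would also block waiters and detect deadlocks):

    class TwoPhaseTxn:
        """Tracks only the 2PL discipline: once any lock has been
        released, no further lock may be obtained."""
        def __init__(self):
            self.locks = set()
            self.shrinking = False

        def lock(self, item):
            if self.shrinking:
                raise RuntimeError("2PL violation: lock after first unlock")
            self.locks.add(item)        # growing phase

        def unlock(self, item):
            self.shrinking = True       # lock point passed; shrinking phase
            self.locks.discard(item)

    t = TwoPhaseTxn()
    t.lock("A")
    t.lock("B")          # growing phase
    t.unlock("A")        # shrinking phase begins
    # t.lock("C") would now raise: forbidden by the protocol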

Implementation of Locking

A lock manager can be implemented as a separate process to which transactions send lock
and unlock requests. The lock manager replies to a lock request by sending a lock grant
message (or a message asking the transaction to roll back, in case of a deadlock). The
requesting transaction waits until its request is answered. The lock manager maintains a
data structure called a lock table to record granted locks and pending requests. The lock
table is usually implemented as an in-memory hash table indexed on the name of the data
item being locked.

Two-phase locking does not ensure freedom from deadlocks

• Cascading roll-back is possible under two-phase locking. To avoid this, follow a

modified protocol called strict two-phase locking. Here a transaction must hold all its
exclusive locks till it commits/aborts.

• Rigorous two-phase locking is even stricter: here all locks are held till commit/abort.

In this protocol transactions can be serialized in the order in which they commit.

Timestamp Based Protocols

Each transaction is issued a timestamp when it enters the system. If an old transaction Ti has
timestamp TS(Ti), a new transaction Tj is assigned timestamp TS(Tj) such that TS(Ti) <
TS(Tj).
The protocol manages concurrent execution such that the timestamps determine the
serializability order. In order to assure such behavior, the protocol maintains for each data
item Q two timestamp values:

W-timestamp(Q) is the largest timestamp of any transaction that executed write(Q)
successfully.

R-timestamp(Q) is the largest timestamp of any transaction that executed read(Q)
successfully.

The timestamp ordering protocol ensures that any conflicting read and write operations
are executed in timestamp order.

Suppose a transaction Ti issues a read(Q):

1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already
overwritten. Hence, the read operation is rejected, and Ti is rolled back.

2. If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is
set to max(R-timestamp(Q), TS(Ti)).

Suppose that transaction Ti issues write(Q):

1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed
previously, and the system assumed that that value would never be produced. Hence, the
write operation is rejected, and Ti is rolled back.

2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q.
Hence, this write operation is rejected, and Ti is rolled back.

3. Otherwise, the write operation is executed, and W-timestamp(Q) is set to TS(Ti).
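The rules translate almost line for line into code. A hypothetical sketch (the Rollback
exception and the items dictionary are inventions for this illustration):

    class Rollback(Exception):
        pass

    items = {"Q": [0, 0]}                # item -> [R-timestamp, W-timestamp]

    def read(ts, name):
        r, w = items[name]
        if ts < w:                       # value was already overwritten
            raise Rollback(f"read by TS {ts} rejected")
        items[name][0] = max(r, ts)      # R-timestamp(Q) := max(R-ts(Q), TS(Ti))

    def write(ts, name):
        r, w = items[name]
        if ts < r or ts < w:             # a later transaction read or wrote Q
            raise Rollback(f"write by TS {ts} rejected")
        items[name][1] = ts              # W-timestamp(Q) := TS(Ti)

    read(2, "Q")                         # ok: R-timestamp(Q) becomes 2
    try:
        write(1, "Q")                    # TS(T1) < R-timestamp(Q): rejected
    except Rollback as e:
        print(e)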
Example: a partial schedule for several data items for transactions with timestamps 1, 2, 3,
4, 5.

Correctness of Timestamp-Ordering Protocol


The timestamp-ordering protocol guarantees serializability, since all the arcs in the
precedence graph are of the form: transaction with smaller timestamp → transaction with
larger timestamp.

Thus, there will be no cycles in the precedence graph. The timestamp protocol ensures
freedom from deadlock, as no transaction ever waits. But the schedule may not be
cascade-free, and may not even be recoverable.

Thomas' Write Rule

A modified version of the timestamp-ordering protocol in which obsolete write operations
may be ignored under certain circumstances. When Ti attempts to write data item Q, if
TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Rather than
rolling back Ti as the timestamp-ordering protocol would have done, this write operation
can be ignored. Otherwise this protocol is the same as the timestamp-ordering protocol.

Thomas' Write Rule allows greater potential concurrency: it allows some view-serializable
schedules that are not conflict-serializable.
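Continuing the sketch above, Thomas' write rule changes only one branch of write: an
obsolete write is silently ignored instead of forcing a rollback.

    def write_thomas(ts, name):
        r, w = items[name]       # reuses items/Rollback from the sketch above
        if ts < r:
            raise Rollback(f"write by TS {ts} rejected: a later txn read Q")
        if ts < w:
            return               # obsolete write: ignore it, don't roll back
        items[name][1] = ts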

Validation Based Protocol

Execution of transaction Ti is done in three phases.

1. Read and execution phase: transaction Ti writes only to temporary local variables.

2. Validation phase: transaction Ti performs a "validation test" to determine if its local
writes can be applied without violating serializability.

3. Write phase: if Ti is validated, the updates are applied to the database; otherwise, Ti is
rolled back.

The three phases of concurrently executing transactions can be interleaved, but each
transaction must go through the three phases in that order. Assume for simplicity that the
validation and write phases occur together, atomically and serially, i.e., only one transaction
executes validation/write at a time. This is also called optimistic concurrency control, since
the transaction executes fully in the hope that all will go well during validation.

Each transaction Ti has three timestamps:

 Start(Ti): the time when Ti started its execution

 Validation(Ti): the time when Ti entered its validation phase

 Finish(Ti): the time when Ti finished its write phase

The serializability order is determined by the timestamp given at validation time, to increase
concurrency.
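A hedged sketch of the validation test under the serial-validation assumption: Ti is checked
against every transaction that validated earlier, and passes if either that transaction finished
before Ti started, or it finished before Ti validated and its write set does not intersect Ti's
read set (the Txn dataclass is invented for this illustration).

    from dataclasses import dataclass, field

    @dataclass
    class Txn:
        start: int
        validation: int
        finish: int
        read_set: set = field(default_factory=set)
        write_set: set = field(default_factory=set)

    def validate(txn, earlier):
        """earlier: transactions that entered validation before txn."""
        for c in earlier:
            if c.finish < txn.start:
                continue        # no overlap at all
            if c.finish < txn.validation and not (c.write_set & txn.read_set):
                continue        # overlap, but c's writes were not read by txn
            return False        # validation fails: roll txn back
        return True

    t1 = Txn(start=1, validation=3, finish=4, write_set={"A"})
    t2 = Txn(start=2, validation=5, finish=6, read_set={"A"})
    print(validate(t2, [t1]))   # False: t1 wrote A while t2 was reading it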
Multiple Granularities

Allow data items to be of various sizes and define a hierarchy of data granularities, where
the small granularities are nested within larger ones. This can be represented graphically as
a tree (but don't confuse it with the tree-locking protocol). When a transaction locks a node
in the tree explicitly, it implicitly locks all the node's descendants in the same mode.

Granularity of locking (the level in the tree where locking is done):

 fine granularity (lower in the tree): high concurrency, high locking overhead

 coarse granularity (higher in the tree): low locking overhead, low concurrency

Example of Granularity Hierarchy


The levels, starting from the coarsest (top) level are

– database

– area

– file

– record

In addition to S and X lock modes, there are three additional lock modes with multiple granularity:

intention-shared (IS): indicates explicit locking at a lower level of the

tree but only with shared locks.

intention-exclusive (IX): indicates explicit locking at a lower level with exclusive or shared locks

shared and intention-exclusive (SIX): the subtree rooted at that node is locked explicitly in
shared mode, and explicit locking is being done at a lower level with exclusive-mode locks.

Intention locks allow a higher-level node to be locked in S or X mode without having to
check all descendant nodes.
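The standard compatibility matrix for the five modes can be written down directly; True
means the two modes can be held simultaneously on the same node by different transactions
(a sketch following the usual textbook table):

    COMPAT = {
        "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
        "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
        "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
        "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
        "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
    }
    # E.g. two transactions can both hold IX on a file node and then take
    # X locks on different records beneath it:
    assert COMPAT["IX"]["IX"]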
Multiversion Schemes

Multiversion concurrency control techniques keep the old values of a data item when the item
is updated. Several versions (values) of an item are maintained. When a transaction requires
access to an item, an appropriate version is chosen to maintain the serialisability of the
concurrently executing schedule, if possible. The idea is that some read operations that would
be rejected in other techniques can still be accepted, by reading an older version of the item to
maintain serialisability.

An obvious drawback of multiversion techniques is that more storage is needed to maintain


multiple versions of the database items. However, older versions may have to be maintained
anyway; for example, for recovery purposes. In addition, some database applications require
older versions to be kept to maintain a history of the evolution of data item values. The
extreme case is a temporal database, which keeps track of all changes and the times at
which they occurred. In such cases, there is no additional penalty for multiversion
techniques, since older versions are already maintained.

Multiversion techniques based on timestamp ordering

In this technique, several versions X1, X2, … Xk of each data item X are kept by the system.
For each version, the value of version Xi and the following two timestamps are kept:

1. read_TS(Xi): The read timestamp of Xi; this is the largest of all the timestamps of
transactions that have successfully read version Xi.

2. write_TS(Xi): The write timestamp of Xi; this is the timestamp of the transaction that
wrote the value of version Xi.
Whenever a transaction T is allowed to execute a write_item(X) operation, a new version of
item X, Xk+1, is created, with both the write_TS(Xk+1) and the read_TS(Xk+1) set to
TS(T). Correspondingly, when a transaction T is allowed to read the value of version Xi, the
value of read_TS(Xi) is set to the largest of read_TS(Xi) and TS(T).

To ensure serialisability, we use the following two rules to control the reading and writing of
data items:

1. If transaction T issues a write_item(X) operation, and version i of X has the highest


write_TS(Xi) of all versions of X which is also less than or equal to TS(T), and TS(T)
< read_TS(Xi), then abort and roll back transaction T; otherwise, create a new version
Xj of X with read_TS(Xj) = write_TS(Xj) = TS(T).

2. If transaction T issues a read_item(X) operation, and version i of X has the highest
write_TS(Xi) of all versions of X which is also less than or equal to TS(T), then return the
value of Xi to transaction T, and set the value of read_TS(Xi) to the largest of TS(T) and
the current read_TS(Xi).
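The two rules can be sketched as follows; each version is kept as a (value, read_TS,
write_TS) record, and the helper names are invented for this illustration.

    class Abort(Exception):
        pass

    versions = {"X": [{"value": 0, "read_ts": 0, "write_ts": 0}]}  # initial version

    def relevant(name, ts):
        """The version with the highest write_TS <= TS(T)."""
        cands = [v for v in versions[name] if v["write_ts"] <= ts]
        return max(cands, key=lambda v: v["write_ts"])

    def read_item(ts, name):
        v = relevant(name, ts)
        v["read_ts"] = max(v["read_ts"], ts)      # rule 2
        return v["value"]

    def write_item(ts, name, value):
        v = relevant(name, ts)
        if ts < v["read_ts"]:                     # rule 1: abort and roll back
            raise Abort(f"write by TS {ts} rejected")
        versions[name].append(
            {"value": value, "read_ts": ts, "write_ts": ts})  # new version

    read_item(2, "X")            # read_TS of the initial version becomes 2
    try:
        write_item(1, "X", 5)    # a later transaction already read this version
    except Abort as e:
        print(e)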

Multiversion two-phase locking

In this scheme, there are three locking modes for an item: read, write and certify. Hence, the
state of an item X can be one of 'read locked', 'write locked', 'certify locked' and 'unlocked'.
The idea behind multiversion two-phase locking is to allow other transactions T' to read an
item X while a single transaction T holds a write lock on X. (Compare with the standard
locking scheme.) This is accomplished by allowing two versions for each item X; one version
must always have been written by some committed transaction. The second version X' is
created when a transaction T acquires a write lock on the item. Other transactions can
continue to read the committed version X while T holds the write lock. Transaction T can
change the value of X' as needed, without affecting the value of the committed version X.
However, once T is ready to commit, it must obtain a certify lock on all items that it
currently holds write locks on before it can commit. The certify lock is not compatible with
read locks, so the transaction may have to delay its commit until all its write-locked items
are released by any reading transactions. At this point, the committed version X of the data
item is set to the value of version X', version X' is discarded, and the certify locks are then
released. The lock compatibility table for this scheme is shown below:

Figure 13.35

In this multiversion two-phase locking scheme, reads can proceed concurrently with a write
operation, an arrangement not permitted under the standard two-phase locking schemes.
The cost is that a transaction may have to delay its commit until it obtains exclusive certify
locks on all items it has updated. It can be shown that this scheme avoids cascading aborts,
since transactions are only allowed to read the version X that was written by a committed
transaction. However, deadlock may occur.
Granularity of data items

All concurrency control techniques assumed that the database was formed of a number of
items. A database item could be chosen to be one of the following:

 A database record.

 A field value of a database record.

 A disk block.

 A whole file.

 The whole database.

Several trade-offs must be considered in choosing the data item size. We shall discuss data
item size in the context of locking, although similar arguments can be made for other
concurrency control techniques.

First, the larger the data item size is, the lower the degree of concurrency permitted. For
example, if the data item is a disk block, a transaction T that needs to lock a record A must
lock the whole disk block X that contains A. This is because a lock is associated with the
whole data item X. Now, if another transaction S wants to lock a different record B that
happens to reside in the same block X in a conflicting disk mode, it is forced to wait until the
first transaction releases the lock on block X. If the data item size was a single record,
transaction S could proceed as it would be locking a different data item (record B).

On the other hand, the smaller the data item size is, the more items will exist in the database.
Because every item is associated with a lock, the system will have a larger number of locks
to be handled by the lock manager. More lock and unlock operations will be performed, causing
a higher overhead. In addition, more storage space will be required for the lock table. For
timestamps, storage is required for the read_TS and write_TS for each data item, and the
overhead of handling a large number of items is similar to that in the case of locking.

The size of data items is often called the data item granularity. Fine granularity refers to small
item size, whereas coarse granularity refers to large item size. Given the above trade-offs, the
obvious question to ask is: What is the best item size? The answer is that it depends on the
types of transactions involved. If a typical transaction accesses a small number of records, it
is advantageous to have the data item granularity be one record. On the other hand, if a
transaction typically accesses many records of the same file, it may be better to have block or
file granularity so that the transaction will consider all those records as one (or a few) data
items.

Most concurrency control techniques have a uniform data item size. However, some
techniques have been proposed that permit variable item sizes. In these techniques, the data
item size may be changed to the granularity that best suits the transactions that are currently
executing on the system.

Deadlock

Another problem that may be introduced by the 2PL protocol is deadlock. The formal
definition of deadlock will be discussed below. Here, an example is used to give you an
intuitive idea about the deadlock situation. The two transactions that follow the 2PL
protocol can be interleaved as shown here:

Figure 13.24

At time step 5, it is not possible for T1' to acquire an exclusive lock on X, as there is already
a shared lock on X held by T2'. Therefore, T1' has to wait. Transaction T2' at time step 6
tries to get an exclusive lock on Y, but it is unable to, as T1' has a shared lock on Y already.
T2' is put in waiting too. Therefore, both transactions wait fruitlessly for the other to release
a lock. This situation is known as a deadly embrace or deadlock. The above schedule would
terminate in a deadlock.

Conservative 2PL

A variation of the basic 2PL is conservative 2PL also known as static 2PL, which is a way of
avoiding deadlock. The conservative 2PL requires a transaction to lock all the data items it
needs in advance. If at least one of the required data items cannot be obtained then none of
the items are locked. Rather, the transaction waits and then tries again to lock all the items it
needs. Although conservative 2PL is a deadlock-free protocol, this solution further limits
concurrency.
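A sketch of the all-or-nothing predeclaration using Python threading locks (the lock table
and helper are invented). Taking the locks in a fixed order and backing off completely when
one is unavailable is what keeps the scheme deadlock-free:

    import threading

    table = {name: threading.Lock() for name in ("A", "B", "C")}

    def lock_all_or_none(items):
        """Conservative 2PL: acquire every predeclared lock, or none."""
        acquired = []
        for name in sorted(items):           # fixed global order
            lk = table[name]
            if lk.acquire(blocking=False):
                acquired.append(lk)
            else:
                for held in acquired:        # one lock busy: release everything
                    held.release()
                return False                 # caller waits and tries again
        return True

    if lock_all_or_none({"A", "B"}):
        try:
            pass                             # run the transaction body
        finally:
            for name in ("A", "B"):
                table[name].release()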

Strict 2PL

In practice, the most popular variation of 2PL is strict 2PL, which guarantees a strict
schedule. (Strict schedules are those in which transactions can neither read nor write an item
X until the last transaction that wrote X has committed or aborted). In strict 2PL, a
transaction T does not release any of its locks until after it commits or aborts. Hence, no other
transaction can read or write an item that is written by T unless T has committed, leading to a
strict schedule for recoverability. Notice the difference between conservative and strict 2PL;
the former must lock all items before it starts, whereas the latter does not unlock any of its
items until after it terminates (by committing or aborting). Strict 2PL is not deadlock-free
unless it is combined with conservative 2PL.

In summary, all types of 2PL protocol guarantee serialisability (correctness) of a schedule,
but limit concurrency. The use of locks can also cause two additional problems: deadlock
and livelock. Conservative 2PL is deadlock-free.

In a multi-process system, deadlock is an unwanted situation that arises in a shared resource


environment, where a process indefinitely waits for a resource that is held by another process.

For example, assume a set of transactions {T0, T1, T2, ..., Tn}. T0 needs a resource X to
complete its task. Resource X is held by T1, and T1 is waiting for a resource Y, which is held
by T2. T2 is waiting for resource Z, which is held by T0. Thus, all the processes wait for each
other to release resources. In this situation, none of the processes can finish their task. This
situation is known as a deadlock.
Deadlocks are not healthy for a system. In case a system is stuck in a deadlock, the
transactions involved in the deadlock are either rolled back or restarted.

Deadlock Prevention

To prevent any deadlock situation in the system, the DBMS aggressively inspects all the
operations, where transactions are about to execute. The DBMS inspects the operations and
analyzes if they can create a deadlock situation. If it finds that a deadlock situation might
occur, then that transaction is never allowed to be executed.

There are deadlock prevention schemes that use timestamp ordering mechanism of
transactions in order to predetermine a deadlock situation.

Wait-Die Scheme

In this scheme, if a transaction requests to lock a resource (data item), which is already held
with a conflicting lock by another transaction, then one of the two possibilities may occur −

 If TS(Ti) < TS(Tj) − that is Ti, which is requesting a conflicting lock, is older than Tj
− then Ti is allowed to wait until the data-item is available.

 If TS(Ti) > TS(Tj) − that is, Ti is younger than Tj − then Ti dies. Ti is restarted later with
a random delay but with the same timestamp.

This scheme allows the older transaction to wait but kills the younger one.

Wound-Wait Scheme

In this scheme, if a transaction requests to lock a resource (data item), which is already held
with conflicting lock by some another transaction, one of the two possibilities may occur −

 If TS(Ti) < TS(Tj), then Ti forces Tj to be rolled back − that is Ti wounds Tj. Tj is
restarted later with a random delay but with the same timestamp.

 If TS(Ti) > TS(Tj), then Ti is forced to wait until the resource is available.

This scheme allows the younger transaction to wait; but when an older transaction requests
an item held by a younger one, the older transaction forces the younger one to abort and
release the item.
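Both schemes reduce to a comparison of timestamps (a smaller timestamp means an older
transaction). A sketch contrasting them:

    def wait_die(ts_requester, ts_holder):
        """Older requester waits; younger requester dies and is
        restarted later with its original timestamp."""
        return "wait" if ts_requester < ts_holder else "die"

    def wound_wait(ts_requester, ts_holder):
        """Older requester wounds (aborts) the younger holder;
        younger requester waits."""
        return "wound holder" if ts_requester < ts_holder else "wait"

    print(wait_die(1, 2))      # 'wait'  : T1 is older, allowed to wait
    print(wait_die(2, 1))      # 'die'   : T2 is younger, restarted
    print(wound_wait(1, 2))    # 'wound holder' : T1 forces T2 to roll back
    print(wound_wait(2, 1))    # 'wait'  : T2 waits for the older T1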

In both cases, the transaction that enters the system at a later stage is aborted.

Deadlock Avoidance
aborted. Deadlock Avoidance

Aborting a transaction is not always a practical approach. Instead, deadlock avoidance
mechanisms can be used to detect any deadlock situation in advance. Methods like the
"wait-for graph" are available, but they are suitable only for systems where transactions
are lightweight and hold few instances of resources. In a bulky system, deadlock prevention
techniques may work well.

Wait-for Graph
This is a simple method available to track whether any deadlock situation may arise. For
each transaction entering the system, a node is created. When a transaction Ti requests a
lock on an item, say X, which is held by some other transaction Tj, a directed edge is created
from Ti to Tj. If Tj releases item X, the edge between them is dropped and Ti locks the data
item.

The system maintains this wait-for graph for every transaction waiting for some data items
held by others. The system keeps checking if there's any cycle in the graph.

Here, we can use any of the two following approaches −

 First, do not allow any request for an item, which is already locked by another
transaction. This is not always feasible and may cause starvation, where a transaction
indefinitely waits for a data item and can never acquire it.

 The second option is to roll back one of the transactions. It is not always feasible to
roll back the younger transaction, as it may be more important than the older one. With the
help of some relative algorithm, a transaction is chosen to be aborted. This transaction is
known as the victim, and the process is known as victim selection.

Recovery System:

Crash Recovery

DBMS is a highly complex system with hundreds of transactions being executed every
second. The durability and robustness of a DBMS depends on its complex architecture and its
underlying hardware and system software. If it fails or crashes amid transactions, it is
expected that the system would follow some sort of algorithm or techniques to recover lost
data.

Failure Classification

To see where the problem has occurred, we generalize a failure into various categories, as
follows.

Transaction Failure

A transaction has to abort when it fails to execute or when it reaches a point from where it
can't go any further. This is called transaction failure, where only a few transactions or
processes are hurt.
Reasons for a transaction failure could be −
 Logical errors − Where a transaction cannot complete because it has some code error
or any internal error condition.

 System errors − Where the database system itself terminates an active transaction
because the DBMS is not able to execute it, or it has to stop because of some system
condition. For example, in case of deadlock or resource unavailability, the system
aborts an active transaction.

System Crash

There are problems − external to the system − that may cause the system to stop abruptly and
cause the system to crash. For example, interruptions in power supply may cause the failure
of underlying hardware or software failure.

Examples may include operating system errors.

Disk Failure

In the early days of technology evolution, it was a common problem that hard-disk drives or
storage drives used to fail frequently.

Disk failures include the formation of bad sectors, inaccessibility of the disk, disk head
crashes, or any other failure which destroys all or a part of disk storage.

Storage Structure

We have already described the storage system. In brief, the storage structure can be divided into
two categories −

 Volatile storage − As the name suggests, a volatile storage cannot survive system
crashes. Volatile storage devices are placed very close to the CPU; normally they are
embedded onto the chipset itself. For example, main memory and cache memory are
examples of volatile storage. They are fast but can store only a small amount of
information.

 Non-volatile storage − These memories are made to survive system crashes. They are
huge in data storage capacity, but slower in accessibility. Examples may include hard-
disks, magnetic tapes, flash memory, and non-volatile (battery backed up) RAM.

Recovery and Atomicity

When a system crashes, it may have several transactions being executed and various files
opened for them to modify the data items. Transactions are made of various operations,
which are atomic in nature. But according to ACID properties of DBMS, atomicity of
transactions as a whole must be maintained, that is, either all the operations are executed or
none.

When a DBMS recovers from a crash, it should maintain the following −

 It should check the states of all the transactions, which were being executed.
 A transaction may be in the middle of some operation; the DBMS must ensure the
atomicity of the transaction in this case.

 It should check whether the transaction can be completed now or it needs to be rolled
back.
 No transactions would be allowed to leave the DBMS in an inconsistent state.

There are two types of techniques, which can help a DBMS in recovering as well as
maintaining the atomicity of a transaction −

 Maintaining the logs of each transaction, and writing them onto some stable storage
before actually modifying the database.

 Maintaining shadow paging, where the changes are done on a volatile memory, and
later, the actual database is updated.

Log-based Recovery

The log is a sequence of records which maintains a record of the actions performed by a
transaction. It is important that the logs are written prior to the actual modification and
stored on a stable storage medium, which is failsafe.

Log-based recovery works as follows −

 The log file is kept on a stable storage media.

 When a transaction enters the system and starts execution, it writes a log about it.

<Tn, Start>

 When the transaction modifies an item X, it writes logs as follows −

<Tn, X, V1, V2>

It records that Tn has changed the value of X from V1 to V2.

 When the transaction finishes, it logs −

<Tn, commit>

The database can be modified using two approaches −

 Deferred database modification − All logs are written on to the stable storage and the
database is updated when a transaction commits.

 Immediate database modification − Each log follows an actual database modification.


That is, the database is modified immediately after every operation.
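A sketch of immediate database modification with write-ahead logging; the log-record
format follows the records above, while the db dictionary and helper functions are invented
for this illustration:

    log = []                 # stands in for the log file on stable storage
    db = {"X": 100}

    def write_log(record):
        log.append(record)   # a real system appends and forces to stable storage

    def begin(tn):
        write_log(f"<{tn}, Start>")

    def update(tn, item, new_value):
        old = db[item]
        write_log(f"<{tn}, {item}, {old}, {new_value}>")   # log BEFORE modifying
        db[item] = new_value                               # immediate modification

    def commit(tn):
        write_log(f"<{tn}, Commit>")

    begin("T0")
    update("T0", "X", 50)
    commit("T0")
    print(log)   # ['<T0, Start>', '<T0, X, 100, 50>', '<T0, Commit>']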

Recovery with Concurrent Transactions

When more than one transaction is being executed in parallel, the logs are interleaved. At
the time of recovery, it would become hard for the recovery system to backtrack all the logs
and then start recovering. To ease this situation, most modern DBMSs use the concept of
'checkpoints'.

Checkpoint
Keeping and maintaining logs in real time and in a real environment may fill up all the
memory space available in the system. As time passes, the log file may grow too big to be
handled at all. Checkpoint is a mechanism where all the previous logs are removed from the
system and stored permanently on a storage disk. A checkpoint declares a point before
which the DBMS was in a consistent state and all the transactions were committed.

Recovery

When a system with concurrent transactions crashes and recovers, it behaves in the
following manner −

 The recovery system reads the logs backwards from the end to the last
checkpoint.

 It maintains two lists, an undo-list and a redo-list.

 If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or just
<Tn, Commit>, it puts the transaction in the redo-list.

 If the recovery system sees a log with <Tn, Start> but no commit or abort
log found, it puts the transaction in undo-list.

All the transactions in the undo-list are then undone and their logs are removed. All the
transactions in the redo-list are then redone from their log records before their logs are
saved.
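A simplified sketch of building the two lists by scanning the log backwards from the end to
the last checkpoint (records here are plain tuples, and the sketch ignores transactions still
active at the checkpoint):

    def classify(log_records):
        """log_records: oldest-first list of (txn, kind) with kind in
        {'start', 'commit', 'abort', 'checkpoint'}."""
        undo, redo, finished = [], [], set()
        for txn, kind in reversed(log_records):     # scan backwards
            if kind == "checkpoint":
                break
            if kind in ("commit", "abort"):
                finished.add(txn)
                if kind == "commit":
                    redo.append(txn)
            elif kind == "start" and txn not in finished:
                undo.append(txn)                    # started but never finished
        return undo, redo

    log = [(None, "checkpoint"),
           ("T1", "start"), ("T1", "commit"),
           ("T2", "start")]                         # T2 has no commit/abort
    print(classify(log))                            # (['T2'], ['T1'])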
