Unit 5: Transactions
TRANSACTIONS
Often, a collection of several operations on the database appears to be a single unit from the
point of view of the database user. For example, a transfer of funds from a checking account to a
savings account is a single operation from the customer’s standpoint; within the database system,
however, it consists of several operations. Clearly, it is essential that all these operations occur, or
that, in case of a failure, none occur. It would be unacceptable if the checking account were debited
but the savings account not credited. Collections of operations that form a single logical unit of
work are called transactions.
A transaction is a small unit of a program and may contain several low-level tasks. A
transaction in a database system must maintain Atomicity, Consistency, Isolation, and Durability
(commonly known as the ACID properties) in order to ensure accuracy, completeness, and data
integrity.
Operations of a Transaction: -
The main operations of a transaction are:
Read(X): The read operation reads the value of X from the database and stores it in a buffer in
main memory.
Write(X): The write operation writes the value back to the database from the buffer.
Let's take as an example a transaction that debits Rs 500 from an account X; it consists of the following operations:
1. R(X);
2. X = X - 500;
3. W(X);
If the transaction fails after executing operation 2 but before operation 3 writes the new value back, the debit is lost and the old value remains
in the database, which is not acceptable by the bank. To solve this problem, we have two important
operations (a small code sketch follows the list):
• Commit: It is used to save the work done permanently.
• Rollback: It is used to undo the work done.
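A minimal sketch of these two operations using Python's built-in sqlite3 module (the table name accounts, the id column and the starting balance are assumptions made only for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO accounts VALUES (1, 4000)")
    conn.commit()

    try:
        # R(X); X = X - 500; W(X)
        (balance,) = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()
        conn.execute("UPDATE accounts SET balance = ? WHERE id = 1", (balance - 500,))
        conn.commit()        # Commit: make the debit permanent
    except Exception:
        conn.rollback()      # Rollback: undo the partial work if anything fails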
Properties of a Transaction: -
A transaction has four properties, which are used to maintain the consistency of the database
before and after the transaction:
1. Atomicity
2. Consistency
3. Durability
4. Isolation
1. Atomicity: -
This property states that a transaction must be treated as an atomic unit, that is, either all of its
operations are executed or none. There must be no state in a database where a transaction is left
partially completed. Either all operations of the transaction take place at once or, if not, the
transaction is aborted. There is no midway, i.e., the transaction cannot occur partially. Each
transaction is treated as one unit and either runs to completion or is not executed at all.
Atomicity involves the following two operations:
Abort: If a transaction aborts, then none of the changes it made are visible.
Commit: If a transaction commits, then all the changes it made are visible.
Example: Let's assume the following transaction T, consisting of two parts T1 and T2. Account A contains Rs 600
and account B contains Rs 300, and T transfers Rs 100 from account A to account B.
T1                T2
Read(A)           Read(B)
A := A - 100      B := B + 100
Write(A)          Write(B)
After completion of the transaction, A contains Rs 500 and B contains Rs 400. If the
transaction T fails after the completion of part T1 but before the completion of part T2,
then the amount will be deducted from A but not added to B. This leaves the database in an inconsistent
state. In order to ensure the correctness of the database state, the transaction must be
executed in its entirety.
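The same point can be seen in a small sketch (again using Python's sqlite3; the simulated failure between the two updates stands in for a crash after T1 but before T2):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("A", 600), ("B", 300)])
    conn.commit()

    try:
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'A'")   # T1: debit A
        raise RuntimeError("simulated failure between T1 and T2")                      # crash point
        conn.execute("UPDATE accounts SET balance = balance + 100 WHERE name = 'B'")   # T2: credit B
        conn.commit()
    except Exception:
        conn.rollback()   # atomicity: the uncommitted debit of A is undone

    print(dict(conn.execute("SELECT * FROM accounts")))   # {'A': 600, 'B': 300} - no partial transfer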
2. Consistency: -
The integrity constraints are maintained so that the database is consistent before and after the
transaction. The execution of a transaction will leave a database in either its prior stable state or
a new stable state. The consistency property states that every transaction sees a
consistent database instance. A transaction transforms the database from one
consistent state to another consistent state.
For example: The total amount must be the same before and after the transaction.
Total before T occurs = 600+300=900
Total after T occurs= 500+400=900
Therefore, the database is consistent. In the case when T1 is completed but T2 fails, then
inconsistency will occur.
3. Durability: -
The database should be durable enough to hold all its latest updates even if the system fails or
restarts. If a transaction updates a chunk of data in a database and commits, then the database
will hold the modified data. If a transaction commits but the system fails before the data could
be written on to the disk, then that data will be updated once the system springs back into
action.
4. Isolation: -
This property ensures that multiple transactions can occur concurrently without leading to the
inconsistency of database state. Transactions occur independently without interference.
Changes occurring in a particular transaction will not be visible to any other transaction until
that particular change in that transaction is written to memory or has been committed. This
property ensures that executing transactions concurrently will result in a state that is
equivalent to a state achieved if these transactions were executed serially in some order.
Let X = 500 and Y = 500, and consider two transactions T and T'' (from the figures below, T multiplies X by 100 and then deducts 50 from Y, while T'' reads both values and computes the sum X + Y).
Suppose T has been executed till Read(Y) and then T'' starts. As a result, interleaving of
operations takes place, due to which T'' reads the correct (new) value of X but an incorrect (old) value of Y. The sum
computed by T'' (X + Y = 50,000 + 500 = 50,500) is thus not consistent with the sum at the end of
transaction T (X + Y = 50,000 + 450 = 50,450). This results in database inconsistency, due to
a discrepancy of 50 units. Hence, transactions must take place in isolation and changes should be visible
only after they have been made to the main memory.
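A short simulation of this interleaving (the exact operations of T and T'' are inferred from the figures above, so treat them as an assumption):

    X, Y = 500, 500

    # T, first half: Read(X); X := X * 100; Write(X); Read(Y)
    X = X * 100                        # X is now 50,000

    # T'' runs here, interleaved: Read(X); Read(Y); compute X + Y
    print("Sum seen by T'':", X + Y)   # 50,500 - T'' sees the new X but the old Y

    # T, second half: Y := Y - 50; Write(Y)
    Y = Y - 50                         # Y is now 450
    print("Sum after T ends:", X + Y)  # 50,450 - the consistent value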
The ACID properties, taken together, provide a mechanism to ensure the correctness and consistency of a
database: each transaction is a group of operations that acts as a single unit,
produces consistent results, acts in isolation from other operations, and whose updates are
durably stored.
STORAGE STRUCTURE: -
Storage structure is the memory structure in the system. It is mainly divided into two categories:
1. Volatile Memory: - These are the primary memory devices in the system and are placed along
with the CPU. They can store only a small amount of data, but they are very fast, e.g. main
memory and cache memory. These memories cannot endure system crashes; data in them is lost on
failure.
2. Non-Volatile Memory: - These are secondary memories, huge in size but slower to access, e.g.
flash memory, hard disks and magnetic tapes. These memories are designed to withstand system
crashes.
Stable Memory: -
This is said to be a third form of memory structure, but it is built from non-volatile memory: copies of
the same data are kept on several non-volatile devices at different places. In case of a crash and data
loss at one site, the data can be recovered from the other copies. This even helps if one of the
non-volatile memories is lost due to fire or flood; the data can be recovered from another network location.
However, a failure can occur while taking the backup of the database onto the different stable storage
devices. The transfer may fail partway: it may partially transfer the data to the remote devices, or
completely fail to store the data in stable memory. Hence extra caution has to be taken while copying
data from one stable memory to another. There are different methods for copying the data. One of them is
to copy the data in two phases: copy the data blocks to the first storage device and, if that succeeds,
copy them to the second storage device. The copy is complete only when the second copy finishes
successfully. But the second copy may fail partway through the blocks. In that case, each data block in
the first copy and the second copy would need to be compared for inconsistency, and verifying every
block would be a very costly task since there may be a huge number of data blocks. A better way is to
identify the block that was in progress during the failure, compare only that block, and correct any mismatch.
States of Transactions
A transaction in a database can be in one of the following states. A transaction is said to have
committed only if it has entered the committed state; once a transaction is aborted, the database is
restored to its state prior to the start of the transaction.
1. Active state: - The active state is the first state of every transaction. In this state, the transaction
is being executed. For example: Insertion or deletion or updating a record is done here. But all
the records are still not saved to the database.
2. Partially committed: - In the partially committed state, a transaction has executed its final
operation, but the data is still not saved to the database. For example, in a total marks calculation, the
final step that displays the total marks is executed in this state, but the result is not yet made permanent.
3. Failed state: - If any of the checks made by the database recovery system fails, then the
transaction is said to be in the failed state. In the example of total mark calculation, if the
database is not able to fire a query to fetch the marks, then the transaction will fail to execute.
4. Aborted: - If any of the checks fail and the transaction has reached a failed state then the
database recovery system will make sure that the database is in its previous consistent state. If
not then it will abort or roll back the transaction to bring the database into a consistent state.
If the transaction fails in the middle of its execution, then all the operations it has already
executed are rolled back to return the database to its consistent state. After aborting the
transaction, the database recovery module will select one of two operations:
a) Re-start the transaction
b) Kill the transaction
5. Committed: - A transaction is said to be in a committed state if it executes all its operations
successfully. In this state, all the effects are now permanently saved on the database system.
TRANSACTION ISOLATION: -
Suppose we are executing two transactions T1 and T2 to update student Rose's last name: T1
updates the last name to 'M' while T2 updates it to 'Mathew'. Suppose these two transactions are
executed concurrently and T1 starts first. Though both transactions are concurrent, each of their steps
takes only a minute fraction of a second to execute, and the way those steps interleave determines the
result of the transactions. Here, what will be the final result? Last_name
is updated to 'Mathew'. What happens to T1's update? It is lost!
Suppose T2 reads the data after T1 updates the name but before T1 commits. Here too, T2 will
execute as if it is unaware of T1's update, and last_name will again end up as 'Mathew'. Now suppose T2 is
allowed to read T1's uncommitted update: T2 will read last_name as 'M' and then update it, so again the
last name is 'Mathew'. But what happens to T1's update? It is lost; it is nowhere saved and no one will
have any record of its status!
Imagine instead that the update is an increment of the salary by 10% and T2 is
allowed to read the uncommitted update made by T1. What happens to the salary? It will be incremented twice:
once by T1 and again by T2, which increments the salary already updated by T1. This leads to
incorrect data.
Hence it is very important to execute concurrent transactions with utmost care, so that
they give consistent results. Transactions should be executed in such a way that their results do not
affect other transactions; similarly, the current transaction should not be affected by other transactions.
In this way we can achieve isolation of transactions and hence consistency of the DB.
Transaction Isolation Levels: -
As we know that, in order to maintain consistency in a database, it follows ACID properties.
Among these four properties (Atomicity, Consistency, Isolation and Durability) Isolation
determines how transaction integrity is visible to other users and systems. It means that a
transaction should take place in a system in such a way that it is the only transaction that is
accessing the resources in a database system.
Isolation levels define the degree to which a transaction must be isolated from the data
modifications made by any other transaction in the database system. A transaction isolation level is
defined by the following phenomena –
• Dirty Read – A dirty read is the situation when a transaction reads data that has not yet been
committed. For example, let's say transaction 1 updates a row and leaves it uncommitted;
meanwhile, transaction 2 reads the updated row. If transaction 1 rolls back the change,
transaction 2 will have read data that is considered never to have existed.
• Non-Repeatable Read – A non-repeatable read occurs when a transaction reads the same row twice
and gets a different value each time. For example, suppose transaction T1 reads data. Due to
concurrency, another transaction T2 updates the same data and commits. Now if transaction T1
rereads the same data, it will retrieve a different value.
• Phantom Read – Phantom Read occurs when two same queries are executed, but the rows
retrieved by the two, are different. For example, suppose transaction T1 retrieves a set of rows
that satisfy some search criteria. Now, Transaction T2 generates some new rows that match the
search criteria for transaction T1. If transaction T1 re-executes the statement that reads the
rows, it gets a different set of rows this time.
Based on these phenomena, The SQL standard defines four isolation levels:
1. Read Uncommitted – Read Uncommitted is the lowest isolation level. At this level, one
transaction may read not-yet-committed changes made by other transactions, thereby allowing
dirty reads. At this level, transactions are not isolated from each other.
2. Read Committed – This isolation level guarantees that any data read is committed at the
moment it is read; thus it does not allow dirty reads. The transaction holds a read or write lock
on the current row, and thus prevents other transactions from reading, updating or deleting it.
3. Repeatable Read – This is a more restrictive isolation level. The transaction holds read locks
on all rows it references and write locks on all rows it inserts, updates, or deletes. Since other
transactions cannot read, update or delete these rows, it avoids the non-repeatable
read.
4. Serializable – This is the highest isolation level. A serializable execution is an execution of
operations in which concurrently executing transactions appear to be executing serially.
The table below depicts the relationship between the isolation levels and the read phenomena (the
locking behaviour of each level is described in the list above):

Isolation Level       Dirty Read        Non-Repeatable Read       Phantom Read
Read Uncommitted      Possible          Possible                  Possible
Read Committed        Not possible      Possible                  Possible
Repeatable Read       Not possible      Not possible              Possible
Serializable          Not possible      Not possible              Not possible
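From application code, the isolation level is usually chosen with the SQL-standard SET TRANSACTION statement. A minimal sketch (it assumes a Python DB-API connection conn to a server such as PostgreSQL or MySQL that accepts this statement, plus the accounts table used in the earlier sketches; SQLite uses a different mechanism):

    cur = conn.cursor()
    cur.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
    cur.execute("SELECT balance FROM accounts WHERE id = 1")
    first = cur.fetchone()
    # ... other statements of the same transaction run here ...
    cur.execute("SELECT balance FROM accounts WHERE id = 1")
    second = cur.fetchone()      # repeatable read: first and second are guaranteed to match
    conn.commit()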
SERIALIZABILITY
When multiple transactions are running concurrently then there is a possibility that the database
may be left in an inconsistent state. Serializability is a concept that helps us to check
which schedules are Serializable. A Serializable schedule is the one that always leaves the database
in consistent state.
What is a Serializable schedule: -
A Serializable schedule always leaves the database in a consistent state. A serial schedule is
always a Serializable schedule because in a serial schedule a transaction only starts when the other
transaction has finished execution. A serial schedule doesn't allow concurrency: only one transaction
executes at a time, and the next one starts when the already running transaction has finished.
There are two types of schedules – serial and non-serial. A serial schedule doesn't support
concurrent execution of transactions while a non-serial schedule does. A non-serial schedule may leave
the database in an inconsistent state, so we need to check non-serial schedules for Serializability.
Types of Serializability: -
There are two types of Serializability.
1. Conflict Serializability
2. View Serializability
1. Conflict Serializability: -
Conflict Serializability is one of the types of Serializability, which can be used to check whether a
non-serial schedule is conflict serializable or not.
Conflicting operations: -
Two operations are said to be in conflict if they satisfy all of the following three conditions:
a. Both the operations belong to different transactions.
b. Both the operations work on the same data item.
c. At least one of the operations is a write operation.
Some examples to understand this
Example 1: -
Operation W(X) of transaction T1 and operation R(X) of transaction T2 are conflicting
operations, because they satisfy all three conditions mentioned above: they belong to
different transactions, they work on the same data item X, and one of the operations is a write
operation.
Example 2:
Similarly Operations W(X) of T1 and W(X) of T2 are conflicting operations.
Example 3:
Operations W(X) of T1 and W(Y) of T2 are non-conflicting operations because both the write
operations are not working on same data item so these operations don’t satisfy the second
condition.
Example 4:
Similarly, R(X) of T1 and R(X) of T2 are non-conflicting operations because neither of them is a
write operation.
Example 5:
Similarly W(X) of T1 and R(X) of T1 are non-conflicting operations because both the
operations belong to same transaction T1.
Consider a schedule in which transaction T2 performs R(A) and, later, transaction T1 performs W(A),
interleaved with other operations. To convert this schedule into a serial schedule we would have to swap the R(A) operation of
transaction T2 with the W(A) operation of transaction T1. However, we cannot swap these two
operations because they are conflicting operations; thus we can say that this given schedule
is not Conflict Serializable.
Let’s take another example
T1 T2
----- ------
R(A)
R(A)
R(B)
W(B)
R(B)
W(A)
Let’s swap non-conflicting operations:
We finally got a serial schedule after swapping all the non-conflicting operations so we can say
that the given schedule is Conflict Serializable.
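Checking this by hand becomes tedious, so conflict serializability is usually tested by building a precedence (conflict) graph and looking for a cycle. A small sketch (the list-of-triples encoding of a schedule, and the exact interleaving of the first example, are assumptions made for illustration):

    def conflict_serializable(schedule):
        # schedule is a list of (transaction, "R"/"W", data item) triples, in execution order
        edges = set()
        for i, (ti, op1, x) in enumerate(schedule):
            for tj, op2, y in schedule[i + 1:]:
                if ti != tj and x == y and "W" in (op1, op2):   # conflicting pair of operations
                    edges.add((ti, tj))                         # ti must precede tj in any equivalent serial order
        def reaches(start, target, seen=()):                    # is there a path start -> ... -> target?
            return any(b == target or (b not in seen and reaches(b, target, seen + (b,)))
                       for a, b in edges if a == start)
        return not any(reaches(t, t) for t, _, _ in schedule)   # a cycle means not conflict serializable

    s = [("T1", "R", "A"), ("T1", "R", "B"), ("T2", "R", "A"),
         ("T2", "R", "B"), ("T2", "W", "B"), ("T1", "W", "A")]
    print(conflict_serializable(s))   # False - R(A) of T2 conflicts with the later W(A) of T1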
2. View Serializability: -
View Serializability is a process to find out whether a given schedule is view Serializable or not. To
check whether a given schedule is view Serializable, we need to check whether the given
schedule is View Equivalent to its serial schedule. Let's take an example to understand this.
Given Schedule: -
T1           T2
-----        ------
R(X)
W(X)
             R(X)
             W(X)
R(Y)
W(Y)
             R(Y)
             W(Y)

The serial schedule of the above given schedule (T1 followed by T2) is:

T1           T2
-----        ------
R(X)
W(X)
R(Y)
W(Y)
             R(X)
             W(X)
             R(Y)
             W(Y)
If we can prove that the given schedule is View Equivalent to its serial schedule then the given
schedule is called view Serializable
View Equivalent: -
Two schedules S1 and S2 are said to be view equivalent if they satisfy all the following
conditions:
1. Initial Read: -
Initial read of each data item in transactions must match in both schedules. For example, if
transaction T1 reads a data item X before transaction T2 in schedule S1 then in schedule S2,
T1 should read X before T2.
Read vs Initial Read: You may be confused by the term initial read. Here, the initial read means
the first read operation on a data item; for example, a data item X can be read multiple times
in a schedule, but the first read operation on X is called the initial read. This becomes clearer
in the example above.
2. Final Write: -
Final write operations on each data item must match in both the schedules. For example, if a
data item X is last written by transaction T1 in schedule S1, then in S2 the last write operation
on X should also be performed by transaction T1.
3. Update Read: - If in schedule S1 transaction T2 reads a data item updated by T1,
then in schedule S2, T2 should also read the value produced by the write operation of T1 on the same data
item. For example, if in schedule S1, T2 performs a read operation on X after the write operation
on X by T1, then in S2, T2 should read X after T1 performs its write on X. (A small sketch that
checks these three conditions appears below.)
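A simplified sketch of this check (it compares, for every read, which transaction's write it reads from, plus the final writer of each item; the schedules use the same assumed list-of-triples encoding as the earlier sketch):

    from collections import Counter

    def reads_from(schedule):
        # For every read, record which transaction wrote the value it sees (None = the initial value);
        # also keep the final writer of each data item.
        last_writer, reads = {}, []
        for txn, op, item in schedule:
            if op == "R":
                reads.append((txn, item, last_writer.get(item)))
            else:
                last_writer[item] = txn
        return Counter(reads), last_writer

    def view_equivalent(s1, s2):
        r1, final1 = reads_from(s1)
        r2, final2 = reads_from(s2)
        return r1 == r2 and final1 == final2     # initial reads, update reads and final writes all match

    given  = [("T1","R","X"), ("T1","W","X"), ("T2","R","X"), ("T2","W","X"),
              ("T1","R","Y"), ("T1","W","Y"), ("T2","R","Y"), ("T2","W","Y")]
    serial = [("T1","R","X"), ("T1","W","X"), ("T1","R","Y"), ("T1","W","Y"),
              ("T2","R","X"), ("T2","W","X"), ("T2","R","Y"), ("T2","W","Y")]
    print(view_equivalent(given, serial))        # True - the given schedule is view serializable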
TRANSACTIONS AS SQL STATEMENTS
A transaction is a sequence of operations performed (using one or more SQL statements) on
a database as a single logical unit of work. The effects of all the SQL statements in a transaction can
be either all committed (applied to the database) or all rolled back (undone from the database). A
database transaction must be atomic, consistent, isolated and durable.
To understand the concept of a transaction, consider a banking database. Suppose a bank
customer transfers money from his savings account (SB a/c) to his overdraft account (OD a/c); the
transfer will be divided into four blocks (a code sketch follows the list):
• Debit SB a/c.
• Credit OD a/c.
• Record in Transaction Journal
• End Transaction
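A sketch of the four blocks as a single SQL transaction, issued here through Python's sqlite3 (the table and column names, account numbers and amount are assumptions for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (acc_no TEXT PRIMARY KEY, balance INTEGER)")
    conn.execute("CREATE TABLE journal (from_acc TEXT, to_acc TEXT, amount INTEGER)")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("SB1001", 5000), ("OD2001", 0)])
    conn.commit()

    try:
        conn.execute("UPDATE accounts SET balance = balance - 500 WHERE acc_no = 'SB1001'")   # Debit SB a/c
        conn.execute("UPDATE accounts SET balance = balance + 500 WHERE acc_no = 'OD2001'")   # Credit OD a/c
        conn.execute("INSERT INTO journal VALUES ('SB1001', 'OD2001', 500)")                  # Record in journal
        conn.commit()      # End transaction: all three changes become permanent together
    except Exception:
        conn.rollback()    # any failure undoes every block of the transaction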
CONCURRENCY CONTROL
One of the fundamental properties of a transaction is isolation. When several transactions
execute concurrently in the database, however, the isolation property may no longer be preserved.
With concurrency control, multiple transactions can be executed simultaneously; since this may affect
the transactions' results, it is highly important to control the order of execution of those
transactions.
Problems of concurrency control: -
Several problems can occur when concurrent transactions are executed in an uncontrolled manner.
Following are the three problems in concurrency control.
1. Lost updates
2. Dirty read
3. Unrepeatable read
1. Lost update problem: - When two transactions that access the same database items contain
their operations in a way that makes the value of some database item incorrect, then the lost
update problem occurs. If two transactions T1 and T2 read a record and then update it, then the
effect of updating of the first record will be overwritten by the second update.
Example: -
Here,
• At time t2, transaction-X reads A's value.
• At time t3, Transaction-Y reads A's value.
• At time t4, Transaction-X writes A's value on the basis of the value it read at time t2.
• At time t5, Transaction-Y writes A's value on the basis of the value it read at time t3.
• So, at time t5, the update made by Transaction-X is lost because Transaction-Y overwrites it without
looking at its current value.
• Such a problem is known as the Lost Update Problem, as the update made by one transaction is
lost here (see the simulation below).
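A short simulation of this timeline (the initial value 100 and the two updates, -50 by Transaction-X and +20 by Transaction-Y, are assumed figures used only for illustration):

    A = 100                  # value of A in the database

    x_read = A               # t2: Transaction-X reads A
    y_read = A               # t3: Transaction-Y reads A

    A = x_read - 50          # t4: Transaction-X writes A based on the value it read (A becomes 50)
    A = y_read + 20          # t5: Transaction-Y writes A based on ITS earlier read (A becomes 120)

    print(A)                 # 120 - the debit of 50 made by Transaction-X has been lost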
2. Dirty Read: - The dirty read occurs in the case when one transaction updates an item of the
database, and then the transaction fails for some reason. The updated database item is accessed
by another transaction before it is changed back to the original value. A transaction T1 updates a
record which is read by T2. If T1 aborts then T2 now has values which have never formed part of
the stable database.
Example: -
Here,
• At time t2, Transaction-Y writes A's value.
• At time t3, Transaction-X reads A's value.
• At time t4, Transaction-Y rolls back, so A's value is changed back to what it was before t1.
• So, Transaction-X now holds a value that has never become part of the stable database.
• Such a problem is known as the Dirty Read Problem, as one transaction reads a dirty value
that has not been committed.
3. Unrepeatable read (Inconsistent Retrievals): - This problem occurs when a transaction reads or
summarizes a set of data items while another transaction is updating some of those items in between.
Example: -
• Transaction-X is computing the sum of all balances while Transaction-Y is transferring an amount of 50
from Account-1 to Account-3.
• Here, Transaction-X produces the result 550, which is incorrect. If we write this produced
result to the database, the database will reach an inconsistent state, because the actual sum
is 600. Here, Transaction-X has seen an inconsistent state of the database.
Lock-Based Protocols: -
A lock is a mechanism used to control concurrent access to a data item. A data item can be locked in two modes:
1. Shared Lock (S): - A Shared Lock (S) is also known as a Read-only lock. As the name suggests, it can be
shared between transactions, because while holding this lock a transaction does not have
permission to update the data item. An S-lock is requested using the lock-S instruction.
2. Exclusive Lock (X): - The data item can be both read and written. This lock is exclusive and cannot
be held simultaneously with any other lock on the same data item. An X-lock is requested using the lock-X instruction.
             Shared      Exclusive
Shared       True        False
Exclusive    False       False
If a resource is already locked by another transaction, then a new lock request can be
granted only if the mode of the requested lock is compatible with the mode of the existing lock. Any
number of transactions can hold shared locks on an item, but if any transaction holds an exclusive
lock on item, no other transaction may hold any lock on the item.
Upgrade / Downgrade locks: - A transaction that holds a lock on an item A is allowed, under
certain conditions, to change the lock from one mode to another.
Upgrade: - S(A) can be upgraded to X(A) if Ti is the only transaction holding the S-lock on
element A.
Downgrade: - We may downgrade X(A) to S(A) when we feel that we no longer want to write on
data-item A. As we were holding X-lock on A, we need not check any conditions.
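A minimal lock-table sketch for the S/X compatibility matrix and for lock upgrading (single data items only, no blocking or queueing; a request that cannot be granted simply returns False):

    class LockTable:
        def __init__(self):
            self.holders = {}                                      # item -> {transaction: "S" or "X"}

        def lock(self, txn, item, mode):
            held = self.holders.setdefault(item, {})
            others = {t: m for t, m in held.items() if t != txn}
            if mode == "S":
                granted = all(m == "S" for m in others.values())   # S is compatible only with S
            else:                                                  # "X" request, also used for an upgrade
                granted = not others                               # X is compatible with nothing
            if granted:
                held[txn] = mode
            return granted

        def unlock(self, txn, item):
            self.holders.get(item, {}).pop(txn, None)

    lt = LockTable()
    print(lt.lock("T1", "A", "S"))   # True  - shared locks can be shared
    print(lt.lock("T2", "A", "S"))   # True
    print(lt.lock("T1", "A", "X"))   # False - upgrade denied while T2 also holds S(A)
    lt.unlock("T2", "A")
    print(lt.lock("T1", "A", "X"))   # True  - T1 is now the only S holder, so the upgrade succeeds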
Applying simple locking, we may not always produce Serializable results; it may also lead to
deadlock or inconsistency.
Problem with Simple Locking: -
Consider the Partial Schedule:
      T1               T2
1     lock-X(B)
2     read(B)
3     B := B - 50
4     write(B)
5                      lock-S(A)
6                      read(A)
7                      lock-S(B)
8     lock-X(A)
9     ......           ......
Deadlock: - Deadlock refers to a specific situation where two or more processes are waiting for
each other to release a resource or more than two processes are waiting for the resource in a
circular chain. Consider the above execution phase. Now, T1 holds an Exclusive lock over B,
and T2 holds a Shared lock over A. Consider Statement 7, T2 requests for lock on B, while in
Statement 8 T1 requests lock on A. This as you may notice imposes a Deadlock as none can proceed
with their execution.
Two-Phase Locking Protocol (2-PL): -
The Two-Phase Locking protocol allows each transaction to make its lock and unlock requests in two
phases:
1. Growing Phase: - In this phase transaction may obtain locks but may not release any locks.
2. Shrinking Phase: In this phase, a transaction may release locks but not obtain any new lock.
It is true that the 2PL protocol offers Serializability. However, it does not ensure that
deadlocks do not happen.
In the example below, if lock conversion is allowed, then the following can happen:
1. Upgrading of a lock (from S(a) to X(a)) is allowed only in the growing phase.
2. Downgrading of a lock (from X(a) to S(a)) must be done only in the shrinking phase.
Example: -
      T1               T2
1     LOCK-S(A)
2                      LOCK-S(A)
3     LOCK-X(B)
4     ------           ------
5     UNLOCK(A)
6                      LOCK-X(C)
7     UNLOCK(B)
8                      UNLOCK(A)
9                      UNLOCK(C)
10    ------           ------
The following way shows how unlocking and locking work with 2-PL.
Transaction T1:
o Growing phase: from step 1-3
o Shrinking phase: from step 5-7
o Lock point: at 3
Transaction T2:
o Growing phase: from step 2-6
o Shrinking phase: from step 8-9
o Lock point: at 6
Lock Point is the point at which the growing phase ends, i.e., when transaction takes the final lock
it needs to carry on its work.
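A small sketch that checks whether each transaction in a schedule obeys the two-phase rule, i.e. issues no lock request after its first unlock (the pair encoding of the schedule is an assumption made for the sketch):

    def follows_2pl(schedule):
        # schedule is a list of (transaction, action) pairs, action being "lock" or "unlock"
        shrinking = set()                              # transactions that have started unlocking
        for txn, action in schedule:
            if action == "unlock":
                shrinking.add(txn)
            elif txn in shrinking:                     # a lock request after an unlock violates 2-PL
                return False
        return True

    # T1 from the example above: locks at steps 1 and 3, unlocks at steps 5 and 7 - legal 2-PL
    print(follows_2pl([("T1", "lock"), ("T1", "lock"), ("T1", "unlock"), ("T1", "unlock")]))  # True
    print(follows_2pl([("T1", "lock"), ("T1", "unlock"), ("T1", "lock")]))                    # False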
2-PL ensures Serializability, but there are still some drawbacks of 2-PL.
1. Cascading Rollback is possible under 2-PL.
2. Deadlocks and Starvation are possible.
Because T2 and T3 perform dirty reads of values written by the uncommitted transaction T1, when T1
fails we have to roll back the others also. Hence Cascading Rollbacks are possible in 2-PL.
Deadlock in 2-PL: -
Consider this simple example; it will be easy to understand. Say we have two transactions
T1 and T2.
Schedule: lock-X1(A); lock-X2(B); lock-X1(B); lock-X2(A) (here lock-Xi denotes an exclusive lock taken by transaction Ti).
Drawing the wait-for graph, you can detect the cycle: T1 waits for T2 to release B while T2 waits for T1
to release A. So deadlock is also possible in 2-PL (see the sketch below).
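The same conclusion can be reached mechanically by building the wait-for graph (a sketch; the pair encoding of the exclusive lock requests is an assumption, and only two-transaction cycles are checked):

    requests = [("T1", "A"), ("T2", "B"), ("T1", "B"), ("T2", "A")]   # exclusive lock requests, in order

    holder, waits_for = {}, set()
    for txn, item in requests:
        if item in holder and holder[item] != txn:
            waits_for.add((txn, holder[item]))         # txn must wait for the current holder
        else:
            holder[item] = txn

    deadlock = any((b, a) in waits_for for (a, b) in waits_for)       # a two-transaction cycle
    print(sorted(waits_for), deadlock)   # [('T1', 'T2'), ('T2', 'T1')] True -> deadlock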
Two-phase locking may also limit the amount of concurrency that can occur in a schedule,
because a transaction may not be able to release an item immediately after it has used it. This may be
because of the protocol and of other restrictions we may put on the schedule to ensure
Serializability, deadlock freedom and other factors. The 2-PL described so far is called Basic 2-PL. To sum up, it
ensures Conflict Serializability but does not prevent Cascading Rollbacks and Deadlocks. There are
three other variations of 2-PL: Strict 2-PL, Conservative 2-PL and Rigorous 2-PL.
Strict Two-Phase Locking Method: - Strict two-phase locking is almost similar to 2-PL. The
only difference is that Strict 2-PL does not release a lock immediately after using it; it holds all the
exclusive (write) locks until the commit point and releases them in one go when the transaction completes.
Rigorous 2-PL: - This requires that, in addition to the locking being two-phase, all Exclusive (X) and
Shared (S) locks held by the transaction not be released until after the transaction commits.
Conservative 2-PL: -
Also called Static 2-PL, this variation requires a transaction to lock all the data items it will access
before it begins execution, by predeclaring its read-set and write-set. Conservative 2-PL is deadlock
free, but it does not ensure strict schedules. It is difficult to use in practice because of the need to
predeclare the read-set and the write-set, which is not possible in many situations. In practice, the
most popular variation of 2-PL is Strict 2-PL.
The Venn diagram below shows the classification of schedules which are rigorous and strict.
The universe represents the schedules which can be serialized as 2-PL. Now as the diagram
suggests, and as can also be logically concluded, if a schedule is Rigorous then it is also Strict. We put a
restriction on a schedule which makes it Strict; adding another restriction to the list makes it
Rigorous.
T1: T2:
Read(A) Read(A)
A:=A-50 Temp:=A*0.1
Write(A) A:=A-Temp
Read(B) Write(A)
B:=B+50 Read(B)
Write(B). B:=B+Temp
Write(B).
Graph-Based Protocols: -
Impose a partial ordering → on the set D = {d1, d2, d3, ..., dn} of all data items.
1. If di → dj, then any transaction accessing both di and dj must access di before accessing dj.
2. This implies that the set D may now be viewed as a directed acyclic graph (DAG), called a database
graph.
The tree protocol is a simple kind of graph-based protocol, restricted to trees, with the following rules:
1. Only exclusive locks are allowed.
2. The first lock by Ti may be on any data item.
3. Subsequently, a data item Q can be locked by Ti only if the parent of Q is currently locked by Ti.
4. Data items may be unlocked at any time.
5. A data item that has been locked and unlocked by Ti cannot subsequently be relocked by Ti.
Example: -
Let us consider a database D={A,B,C,D,E,F,G,H,I,J}.
Database Graph: -
The following 4 transactions follow the tree protocol on the previous database graph (a small checker sketch follows the list):
1. T1: lock-X(B); lock-X(E); lock-X(D); unlock(B); unlock(E); lock-X(G); unlock(D); unlock(G).
2. T2: lock-X(D); lock-X(H); unlock(D); unlock(H).
3. T3: lock-X(B); lock-X(E); unlock(E); unlock(B).
4. T4: lock-X(H); lock-X(J); unlock(H); unlock(J).
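A small checker for the rules above. The database graph figure is not reproduced here, so the parent map below is only one tree that is consistent with the four transactions; treat its exact shape as an assumption:

    parent = {"B": "A", "C": "A", "D": "B", "E": "B", "F": "C",
              "G": "D", "H": "D", "I": "H", "J": "H"}             # A is the root

    def follows_tree_protocol(ops):
        # ops is the lock/unlock sequence of ONE transaction, e.g. [("lock", "B"), ("unlock", "B"), ...]
        locked, ever_locked, first = set(), set(), True
        for action, item in ops:
            if action == "unlock":
                locked.discard(item)                              # rule 4: unlocking may occur at any time
                continue
            if item in ever_locked:
                return False                                      # rule 5: no relocking of an unlocked item
            if not first and parent.get(item) not in locked:
                return False                                      # rule 3: the parent must be currently locked
            locked.add(item); ever_locked.add(item); first = False
        return True

    t1 = [("lock", "B"), ("lock", "E"), ("lock", "D"), ("unlock", "B"), ("unlock", "E"),
          ("lock", "G"), ("unlock", "D"), ("unlock", "G")]
    print(follows_tree_protocol(t1))    # True - T1 from the example is legal under the tree protocol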
Advantages: -
1. The tree protocol ensures conflict Serializability.
2. Freedom from deadlock.
3. Unlocking may occur earlier in the tree locking protocol than in the two phase locking protocol.
a. Shorter waiting times, and increase in concurrency.
b. Protocol is deadlock free, no rollbacks are required.
Disadvantages: -
1. In the tree locking protocol, a transaction may have to lock data items that it does not access.
a. Increased locking overhead, and additional waiting time.
b. Potential decrease in concurrency.
RECOVERY SYSTEMS
DBMS is a highly complex system with hundreds of transactions being executed every
second. The durability and robustness of a DBMS depends on its complex architecture and its
underlying hardware and system software. If it fails or crashes amid transactions, it is expected that
the system would follow some sort of algorithm or techniques to recover lost data.
FAILURE CLASSIFICATION
To see where the problem has occurred, we generalize a failure into various categories, as follows
Transaction failure
A transaction has to abort when it fails to execute or when it reaches a point from where it can’t go
any further. This is called transaction failure where only a few transactions or processes are hurt.
Reasons for a transaction failure could be
• Logical errors − Where a transaction cannot complete because it has some code error or any
internal error condition.
• System errors − Where the database system itself terminates an active transaction because the
DBMS is not able to execute it, or it has to stop because of some system condition. For example,
in case of deadlock or resource unavailability, the system aborts an active transaction.
System Crash
There are problems, external to the system, that may cause the system to stop abruptly and
crash. For example, interruptions in the power supply may cause the failure of the underlying
hardware or software. Examples may also include operating system errors.
Disk Failure
In early days of technology evolution, it was a common problem where hard-disk drives or storage
drives used to fail frequently.
Disk failures include the formation of bad sectors, unreachability of the disk, disk head crashes, or any
other failure which destroys all or part of the disk storage.
To recover from such failures and maintain the atomicity of transactions, a DBMS typically uses two
types of techniques:
• Maintaining the logs of each transaction, and writing them onto some stable storage before
actually modifying the database.
• Maintaining shadow paging, where the changes are first made in volatile memory, and later the
actual database is updated.
Log-based Recovery: -
The log is a sequence of records which records the actions performed by transactions. It
is important that the logs are written prior to the actual modification and stored on a stable storage
medium, which is fail-safe.
Log-based recovery works as follows
• The log file is kept on a stable storage media.
• When a transaction enters the system and starts execution, it writes a log about it.
<Tn, Start>
• When the transaction modifies an item X, it writes a log record as follows:
<Tn, X, V1, V2>
This record states that Tn has changed the value of X from V1 to V2.
• When the transaction finishes, it logs −
<Tn, commit>
The database can be modified using two approaches:
• Deferred database modification: - All logs are written on to the stable storage, and the
database is updated only when the transaction commits.
• Immediate database modification: - The database may be modified immediately after each
operation, with the corresponding log record written to stable storage before the modification is applied.
Checkpoint
Keeping and maintaining logs in real time and in a real environment may fill up all the memory
space available in the system. As time passes, the log file may grow too big to be handled at all.
Checkpointing is a mechanism where all the previous logs are removed from the system and stored
permanently on a storage disk. A checkpoint declares a point before which the DBMS was in a
consistent state and all the transactions had been committed.
Recovery
When a system with concurrent transactions crashes and recovers, it behaves in the following
manner.
• The recovery system reads the logs backwards from the end to the last checkpoint.
• It maintains two lists, an undo-list and a redo-list.
• If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or just <Tn, Commit>, it puts
the transaction in the redo-list.
• If the recovery system sees a log with <Tn, Start> but no commit or abort log found, it puts the
transaction in undo-list.
All the transactions in the undo-list are then undone and their logs are removed. All the
transactions in the redo-list are redone from their log records, and their logs are retained.
(A small sketch of this classification step follows.)
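A sketch of that classification step (the tuple format of the log records is an assumption; a full implementation would also stop the backward scan at the most recent checkpoint record):

    def build_recovery_lists(log):
        undo, redo, committed, aborted = [], [], set(), set()
        for record in reversed(log):                   # scan the log backwards from the end
            kind, txn = record[0], record[1]
            if kind == "commit":
                committed.add(txn)
            elif kind == "abort":
                aborted.add(txn)
            elif kind == "start":
                if txn in committed:
                    redo.append(txn)                   # <Tn, Start> ... <Tn, Commit>  -> redo-list
                elif txn not in aborted:
                    undo.append(txn)                   # <Tn, Start> with no commit/abort -> undo-list
        return undo, redo

    log = [("start", "T1"), ("update", "T1", "X", 500, 450), ("commit", "T1"),
           ("start", "T2"), ("update", "T2", "Y", 300, 400)]     # T2 never reached commit
    print(build_recovery_lists(log))    # (['T2'], ['T1']) - undo T2, redo T1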
ESSAY QUESTIONS: -
1. Explain two phase locking protocol in detail. [APRIL 2019]
2. Discuss briefly about failure classification system. [APRIL 2019]
3. Explain two phase locking protocol with examples. [APRIL 2018]
4. Explain the phases of concurrency control and recovery using algorithm. [APRIL 2018]
5. Analyze which of the following concurrency control protocols ensure both conflict
Serializability and freedom from deadlock? Explain the following: [APRIL 2017]
a. 2 phase locking
b. Graph Based protocols.
6. Explain different locking Technique for concurrency control. [MAY 2016]
7. Explain in brief Serializability and Recoverability. [MAY 2016]
8. What is Serializability? Explain conflict Serializability with example. [OCT 2018]
9. Write and explain transaction properties. [OCT 2018]
10. Write about two phase locking protocol in detail. [OCT 2017]
11. Write about [OCT 2017]
a. Check point.
b. Transaction commit.
c. Database Recovery.
12. List the ACID properties and explain the importance of each. [DEC 2016]
13. What is two phase locking protocol? How does it guarantee Serializability [DEC 2016]