DBMS Unit 5

Concurrency Control in DBMS is essential for managing simultaneous database operations by multiple users, ensuring consistency and preventing issues like lost updates, dirty reads, and unrepeatable reads. Various protocols such as Lock-Based, Timestamp Ordering, and Validation-Based are employed to maintain database integrity during concurrent executions. Additionally, deadlock situations can arise, necessitating avoidance, detection, and prevention strategies to ensure smooth transaction processing.


DBMS Concurrency Control

Concurrency control is the management procedure required to control the concurrent execution of operations on a database.
Concurrent Execution in DBMS
1. In a multi-user system, multiple users can access and use the same database at the same time; this is known as concurrent execution of the database. It means that the same database is operated on simultaneously by different users.
2. While working with database transactions, multiple users often need the database for different operations, and in that case the transactions are executed concurrently.
3. This simultaneous execution is performed in an interleaved manner, and no operation should affect the others, so that the consistency of the database is maintained. Concurrent execution of transaction operations therefore raises several challenging problems that need to be solved.
Problems with Concurrent Execution
In a database transaction, the two main operations are READ and WRITE. These operations must be managed carefully during concurrent execution; if the interleaving is left uncontrolled, the data may become inconsistent. The following problems occur with concurrent execution:
Problem 1: Lost Update Problem (W-W Conflict)
This problem occurs when two different transactions perform read/write operations on the same database item in an interleaved manner (i.e., concurrent execution), making the item's value incorrect and hence the database inconsistent.
For example:
Consider the below diagram where two transactions TX and TY, are performed on the same account A
where the balance of account A is $300.
 At time t1, transaction TX reads the value of account A, i.e., $300 (only read).
 At time t2, transaction TX deducts $50 from account A that becomes $250 (only deducted and not
updated/write).
 Meanwhile, at time t3, transaction TY reads the value of account A, which is still $300 because
TX hasn't written its update yet.
 At time t4, transaction TY adds $100 to account A that becomes $400 (only added but not
updated/write).
 At time t6, transaction TX writes the value of account A that will be updated as $250 only, as
TY didn't update the value yet.
 Similarly, at time t7, transaction TY writes the values of account A, so it will write as done at time t4
that will be $400. It means the value written by TX is lost, i.e., $250 is lost.
Hence the data becomes incorrect, and the database is left inconsistent.
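The schedule above can be sketched in Python. This is a minimal illustration, not DBMS code: each transaction's local buffer is modeled as a plain variable, and the interleaving is written out step by step.

```python
# Illustrative sketch of the lost-update schedule: two transactions read
# account A into local variables, update locally, then write back.
account_A = 300

# t1: TX reads A into its local copy
tx_local = account_A          # 300
# t2: TX deducts $50 locally (not yet written back)
tx_local -= 50                # 250
# t3: TY reads A; TX has not written yet, so TY still sees 300
ty_local = account_A          # 300
# t4: TY adds $100 locally (not yet written back)
ty_local += 100               # 400
# t6: TX writes its value back
account_A = tx_local          # 250
# t7: TY writes its value back, overwriting TX's update
account_A = ty_local          # 400 -- TX's $250 write is lost

print(account_A)  # 400, but the correct serial result is 300 - 50 + 100 = 350
```

Whichever serial order the two transactions run in, the correct final balance is $350; the interleaving silently discards TX's deduction.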
Dirty Read Problem (W-R Conflict)
The dirty read problem occurs when one transaction updates a database item and then fails, and before the update is rolled back, the updated item is read by another transaction. This creates a Write-Read conflict between the two transactions.
For example:
Consider two transactions TX and TY in the below diagram performing read/write operations on account A
where the available balance in account A is $300:
 At time t1, transaction TX reads the value of account A, i.e., $300.
 At time t2, transaction TX adds $50 to account A that becomes $350.
 At time t3, transaction TX writes the updated value in account A, i.e., $350.
 Then at time t4, transaction TY reads account A that will be read as $350.
 Then at time t5, transaction TX rolls back due to a server problem, and the value changes back
to $300 (as initially).
 But transaction TY has already read $350, a value that was never committed. This is a dirty read,
and the situation is therefore known as the Dirty Read Problem.
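The same schedule can be sketched as follows; the rollback is modeled by restoring a saved value, which is an illustrative simplification of real undo logging.

```python
# Illustrative sketch of the dirty-read schedule: TX writes an uncommitted
# value, TY reads it, then TX rolls back.
account_A = 300
undo_value = account_A        # saved so TX can roll back

# t1-t3: TX reads A, adds $50, and writes the update (still uncommitted)
account_A = account_A + 50    # 350
# t4: TY reads the uncommitted value
ty_read = account_A           # 350 -- a dirty read
# t5: TX fails and rolls back; A returns to its original value
account_A = undo_value        # 300

print(ty_read, account_A)  # 350 300 -- TY acted on a value that never committed
```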
Unrepeatable Read Problem (R-W Conflict)
Also known as the Inconsistent Retrievals Problem, it occurs when a transaction reads two different values for the same database item.
For example:
Consider two transactions, TX and TY, performing the read/write operations on account A, having an
available balance = $300. The diagram is shown below:
 At time t1, transaction TX reads the value from account A, i.e., $300.
 At time t2, transaction TY reads the value from account A, i.e., $300.
 At time t3, transaction TY updates the value of account A by adding $100 to the available balance,
and then it becomes $400.
 At time t4, transaction TY writes the updated value, i.e., $400.
 After that, at time t5, transaction TX reads the available value of account A, and that will be read as
$400.
 It means that within the same transaction TX, two different values of account A are read: $300
initially, and $400 after the update made by transaction TY. The read is not repeatable, and the
situation is therefore known as the Unrepeatable Read Problem.
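This schedule too can be written out as a short sketch:

```python
# Illustrative sketch of the unrepeatable-read schedule: TX reads A twice
# and gets two different values because TY writes in between.
account_A = 300

# t1: TX's first read
tx_first_read = account_A     # 300
# t2-t4: TY reads A, adds $100, and writes the update
account_A = account_A + 100   # 400
# t5: TX's second read of the same item returns a different value
tx_second_read = account_A    # 400

print(tx_first_read, tx_second_read)  # 300 400 -- the read is not repeatable
```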
Thus, in order to maintain consistency in the database and avoid such problems during concurrent execution, management is needed, and that is where the concept of concurrency control comes into play.
Concurrency Control
Concurrency control is the working concept required for controlling and managing the concurrent execution of database operations, thus avoiding inconsistencies in the database. To maintain the concurrency of the database, we use concurrency control protocols.
Concurrency Control Protocols
The concurrency control protocols ensure the atomicity, consistency, isolation, durability and serializability
of the concurrent execution of the database transactions.
Therefore, these protocols are categorized as:
1. Lock Based Concurrency Control Protocol
2. Time Stamp Concurrency Control Protocol
3. Validation Based Concurrency Control Protocol
Lock-Based Protocol
In this type of protocol, a transaction cannot read or write a data item until it acquires an appropriate lock on it. There are two types of lock:
1. Shared lock:
 It is also known as a read-only lock. Under a shared lock, the data item can only be read by the transaction.
 It can be shared between transactions, because a transaction holding a shared lock cannot update the data item.
2. Exclusive lock:
 Under an exclusive lock, the data item can be both read and written by the transaction.
 This lock is exclusive: while one transaction holds it, no other transaction can read or modify the same data item, so multiple transactions cannot modify the same data simultaneously.
There are four types of lock protocols available:
1. Simplistic lock protocol
It is the simplest way of locking data during a transaction. Simplistic lock-based protocols require every transaction to obtain a lock on a data item before inserting, deleting, or updating it. The data item is unlocked after the transaction completes.
2. Pre-claiming Lock Protocol
 The pre-claiming lock protocol evaluates the transaction to list all the data items on which it needs locks.
 Before initiating execution, the transaction requests the DBMS for locks on all those data items.
 If all the locks are granted, the protocol allows the transaction to begin. When the transaction completes, it releases all the locks.
 If any lock is not granted, the transaction rolls back and waits until all the locks are granted.
3. Two-phase locking (2PL)
 The two-phase locking protocol divides the execution of a transaction into three parts.
 In the first part, when execution starts, the transaction seeks permission for the locks it requires.
 In the second part, the transaction acquires all the locks. The third part starts as soon as the transaction releases its first lock.
 In the third part, the transaction cannot demand any new locks; it only releases the locks it has acquired.

There are two phases of 2PL:


Growing phase: In the growing phase, a new lock on a data item may be acquired by the transaction, but none can be released.
Shrinking phase: In the shrinking phase, existing locks held by the transaction may be released, but no new locks can be acquired.
In the example below, if lock conversion is allowed, then the following conversions can happen:
1. Upgrading a lock (from S(a) to X(a)) is allowed only in the growing phase.
2. Downgrading a lock (from X(a) to S(a)) must be done in the shrinking phase.
Example:

The following shows how locking and unlocking work with 2PL.
Transaction T1:
 Growing phase: from step 1-3
 Shrinking phase: from step 5-7
 Lock point: at 3
Transaction T2:
 Growing phase: from step 2-6
 Shrinking phase: from step 8-9
 Lock point: at 6
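The growing/shrinking rule can be enforced by a small transaction class. This is a minimal sketch, assuming a greatly simplified lock manager (no conflict checking between transactions); the class and method names are illustrative.

```python
# Minimal sketch of the 2PL rule: once a transaction releases any lock
# (the shrinking phase begins), it may not acquire new ones.
class TwoPhaseTransaction:
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False   # becomes True at the first release

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: 2PL violation - "
                               "cannot acquire locks in the shrinking phase")
        self.locks.add(item)     # growing phase: acquire

    def unlock(self, item):
        self.shrinking = True    # first release ends the growing phase
        self.locks.discard(item)

t1 = TwoPhaseTransaction("T1")
t1.lock("A")        # growing phase
t1.lock("B")        # growing phase; lock point reached after last acquire
t1.unlock("A")      # shrinking phase begins
# t1.lock("C")      # would raise RuntimeError: new locks are forbidden now
```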
4. Strict Two-phase locking (Strict-2PL)
 The first phase of Strict-2PL is the same as in 2PL: after acquiring all its locks, the transaction continues to execute normally.
 The only difference between 2PL and Strict-2PL is that Strict-2PL does not release a lock immediately after using it.
 Strict-2PL waits until the whole transaction commits, and then releases all the locks at once.
 Strict-2PL therefore has no gradual shrinking phase of lock release, and unlike basic 2PL it avoids cascading aborts.
Timestamp Ordering Protocol
 The timestamp ordering protocol orders transactions based on their timestamps. The order of transactions is simply the ascending order of their creation times.
 An older transaction has higher priority, which is why it executes first. To determine the timestamp of a transaction, the protocol uses the system time or a logical counter.
 Lock-based protocols manage the order between conflicting pairs of transactions at execution time, whereas timestamp-based protocols start working as soon as a transaction is created.
 Suppose transaction T1 entered the system at time 007 and transaction T2 at time 009. T1 has the higher priority, so it executes first, as it entered the system first.
 The protocol also maintains the timestamps of the last 'read' and 'write' operations on each data item.
The basic timestamp ordering protocol works as follows:
1. Whenever a transaction Ti issues a Read(X) operation, check the following conditions:
If W_TS(X) > TS(Ti), the operation is rejected and Ti is rolled back.
If W_TS(X) <= TS(Ti), the operation is executed, and R_TS(X) is set to max(R_TS(X), TS(Ti)).
2. Whenever a transaction Ti issues a Write(X) operation, check the following conditions:
If TS(Ti) < R_TS(X) or TS(Ti) < W_TS(X), the operation is rejected and Ti is rolled back.
Otherwise, the operation is executed and W_TS(X) is set to TS(Ti).
Where,
TS(TI) denotes the timestamp of the transaction Ti.
R_TS(X) denotes the Read time-stamp of data-item X.
W_TS(X) denotes the Write time-stamp of data-item X.
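The two checks can be sketched directly in Python. This is an illustrative fragment, assuming timestamps default to 0 for untouched items; in a real DBMS a rejected operation triggers a rollback of the whole transaction rather than just returning False.

```python
# Sketch of the basic timestamp-ordering checks.
read_ts = {}    # R_TS(X) per data item
write_ts = {}   # W_TS(X) per data item

def read(ts_ti, x):
    """Read(X) by a transaction with timestamp ts_ti."""
    if write_ts.get(x, 0) > ts_ti:
        return False                       # rejected: a younger txn already wrote X
    read_ts[x] = max(read_ts.get(x, 0), ts_ti)
    return True                            # executed

def write(ts_ti, x):
    """Write(X) by a transaction with timestamp ts_ti."""
    if ts_ti < read_ts.get(x, 0) or ts_ti < write_ts.get(x, 0):
        return False                       # rejected: Ti is rolled back
    write_ts[x] = ts_ti
    return True                            # executed

print(read(7, "A"))    # True: no writer yet, R_TS(A) becomes 7
print(write(9, "A"))   # True: 9 >= R_TS(A) and 9 >= W_TS(A)
print(read(8, "A"))    # False: W_TS(A) = 9 > 8, so the read is rejected
```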
Advantages and disadvantages of the TO protocol:
 The TO protocol ensures serializability, since the precedence graph contains only edges from older to younger transactions and therefore has no cycle.
 The TO protocol ensures freedom from deadlock, because no transaction ever waits.
 But the schedule may not be recoverable and may not even be cascade-free.
Validation Based Protocol
The validation-based protocol is also known as the optimistic concurrency control technique. In this protocol, a transaction is executed in the following three phases:
Read phase: In this phase, transaction T reads the values of the various data items and stores them in temporary local variables. All write operations are performed on the temporary variables, without updating the actual database.
Validation phase: In this phase, the temporary values are validated against the actual data to check whether applying them would violate serializability.
Write phase: If the transaction passes validation, the temporary results are written to the database; otherwise the transaction is rolled back.
Here each phase has the following different timestamps:
Start(Ti): It contains the time when Ti started its execution.
Validation (Ti): It contains the time when Ti finishes its read phase and starts its validation phase.
Finish(Ti): It contains the time when Ti finishes its write phase.
 This protocol determines the timestamp of a transaction for serialization using the timestamp of
its validation phase, since that is the phase which decides whether the transaction will commit
or roll back.
 Hence TS(T) = Validation(T).
 Serializability is determined during the validation process; it cannot be decided in advance.
 While executing transactions, this approach allows a greater degree of concurrency and fewer
conflicts.
 Thus it results in transactions with fewer rollbacks.
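The three phases can be sketched as follows. This is a simplified illustration, assuming a per-item version counter as the validation test (validation succeeds only if nothing the transaction read has since been changed); the names and the version-check scheme are assumptions, not part of the text above.

```python
# Sketch of optimistic (validation-based) execution with version checks.
database = {"A": 300}
versions = {"A": 0}          # incremented on every committed write

def run_optimistic(updates):
    # Read phase: copy values and versions into temporary local variables
    local = {x: database[x] for x in updates}
    seen = {x: versions[x] for x in updates}
    for x, delta in updates.items():
        local[x] += delta    # all writes go to the temporary variables

    # Validation phase: check that nothing we read has changed since
    if any(versions[x] != seen[x] for x in updates):
        return False         # conflict: roll back (discard the local buffer)

    # Write phase: apply the temporary results to the database
    for x in updates:
        database[x] = local[x]
        versions[x] += 1
    return True

print(run_optimistic({"A": 50}))  # True; database["A"] becomes 350
```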
Deadlock in DBMS
A deadlock is a condition where two or more transactions wait indefinitely for one another to give up locks. Deadlock is one of the most feared complications in a DBMS, as no task ever finishes and every transaction involved waits forever.
For example: In the student table, transaction T1 holds a lock on some rows and needs to update some rows
in the grade table. Simultaneously, transaction T2 holds locks on some rows in the grade table and needs to
update the rows in the Student table held by Transaction T1.
Now the main problem arises: transaction T1 waits for T2 to release its lock, and similarly T2 waits for T1 to release its lock. All activity comes to a halt and remains at a standstill until the DBMS detects the deadlock and aborts one of the transactions.

Deadlock Avoidance
 Rather than letting the database get stuck in a deadlock and then aborting or restarting
transactions, which wastes time and resources, it is better to avoid the deadlock in the first place.
 A deadlock avoidance mechanism detects a potential deadlock situation in advance. A method like
the "wait-for graph" can be used, but it is suitable only for smaller databases; for larger
databases, a deadlock prevention method is used.
Deadlock Detection
In a database, when a transaction waits indefinitely to obtain a lock, the DBMS should detect whether
the transaction is involved in a deadlock. The lock manager maintains a wait-for graph to detect
deadlock cycles in the database.
Wait-for Graph
 This is a suitable method for deadlock detection. A graph is created based on the transactions
and the locks they hold or request. If the graph contains a cycle (closed loop), there is a deadlock.
 The system maintains the wait-for graph for every transaction that is waiting for data held by
another, and keeps checking whether any cycle appears in the graph.
The wait-for graph for the above scenario is shown below:

Deadlock Prevention
 The deadlock prevention method is suitable for large databases. If resources are allocated in such
a way that a deadlock can never occur, deadlock is prevented.
 The DBMS analyzes the operations of a transaction to determine whether they can create a
deadlock situation. If they can, that transaction is never allowed to execute.
Wait-Die scheme
In this scheme, if a transaction requests a resource that is already held with a conflicting lock by
another transaction, the DBMS checks the timestamps of both transactions and allows only the older
transaction to wait for the resource.
Let Ti be the requesting transaction, Tj the transaction holding the resource, and TS(T) the
timestamp of a transaction T. The DBMS then acts as follows:
1. If TS(Ti) < TS(Tj), i.e., Ti is the older transaction, then Ti is allowed to wait until the
data item is available. That is, an older transaction waiting for a resource locked by a younger
transaction is allowed to wait.
2. If TS(Ti) > TS(Tj), i.e., Ti is the younger transaction, then Ti is killed ("dies") and is
restarted later after a random delay, but with the same timestamp.
Wound-Wait scheme
 In the wound-wait scheme, if an older transaction requests a resource held by a younger
transaction, the older transaction "wounds" the younger one: the younger transaction is aborted and
releases the resource, and after a small delay it is restarted with the same timestamp.
 If a younger transaction requests a resource held by an older transaction, the younger
transaction is made to wait until the older one releases it.
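The two schemes differ only in what happens to the requester, so they can be compared with a short sketch. The function names and return strings are illustrative; a smaller timestamp means an older transaction.

```python
# Sketch comparing wait-die and wound-wait: given the requester's and the
# holder's timestamps, return what happens to resolve the conflict.
def wait_die(ts_requester, ts_holder):
    # Older requester waits; younger requester dies (it is restarted
    # later, keeping its original timestamp).
    return "wait" if ts_requester < ts_holder else "die"

def wound_wait(ts_requester, ts_holder):
    # Older requester wounds (aborts) the younger holder; younger
    # requester waits for the older holder.
    return "wound holder" if ts_requester < ts_holder else "wait"

print(wait_die(7, 9))     # 'wait'         : older Ti waits for younger Tj
print(wait_die(9, 7))     # 'die'          : younger Ti is killed and restarted
print(wound_wait(7, 9))   # 'wound holder' : older Ti preempts younger Tj
print(wound_wait(9, 7))   # 'wait'         : younger Ti waits for older Tj
```

In both schemes the restarted transaction keeps its original timestamp, so it eventually becomes the oldest and cannot be starved.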
Failure Classification
To find where a problem has occurred, failures are generalized into the following categories:
1. Transaction failure
A transaction failure occurs when a transaction fails to execute or reaches a point from which it
cannot proceed any further.
Reasons for a transaction failure could be:
Logical errors: A logical error occurs when a transaction cannot complete because of a code error or
an internal error condition.
System errors: These occur when the DBMS itself terminates an active transaction because the database
system is unable to execute it. For example, the system aborts an active transaction in case of
deadlock or resource unavailability.
2. System Crash
 System failure can occur due to power failure or other hardware or software
failure. Example: Operating system error.
 Fail-stop assumption: In the system crash, non-volatile storage is assumed not to be corrupted.
3. Disk Failure
 Disk failure occurs when hard-disk drives or storage drives fail. This was a common problem in
the early days of technology evolution.
 It results from the formation of bad sectors, a disk head crash, unreachability of the disk, or
any other failure that destroys all or part of disk storage.
Shadow paging
Shadow paging is one of the techniques used to recover from failure. Recovery means getting back information that has been lost; shadow paging helps maintain database consistency in case of failure.
Concept of shadow paging
Now let us see the concept of shadow paging step by step −
Step 1 − A page is a segment of memory, and a page table is an index of pages. Each table entry points
to a page on the disk.
Step 2 − Two page tables are used during the life of a transaction: the current page table and the
shadow page table. The shadow page table is initially a copy of the current page table.
Step 3 − When a transaction starts, both tables look identical; only the current table is updated on
each write operation.
Step 4 − The shadow page table is never changed during the life of the transaction.
Step 5 − When the transaction commits, the current page table becomes the new shadow page table: its
entries are copied over, and the disk blocks holding the old data are released.
Step 6 − The shadow page table is stored in non-volatile memory. If a system crash occurs, the shadow
page table is copied into the current page table.
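The steps above can be sketched with dict-based page tables. The page contents and table layout are illustrative assumptions; the point is that writes never touch the shadow table, so recovery after a crash simply reinstalls it.

```python
# Sketch of shadow paging: copy-on-write against a current page table,
# with the shadow table untouched until commit.
disk_pages = {0: "old A", 1: "old B"}          # disk page id -> contents
shadow_table = {0: 0, 1: 1}                    # logical page -> disk page
current_table = dict(shadow_table)             # identical at transaction start
next_free = 2

def write_page(logical, data):
    """Copy-on-write: the new version goes to a fresh disk page."""
    global next_free
    disk_pages[next_free] = data
    current_table[logical] = next_free         # only the current table changes
    next_free += 1

def commit():
    """On commit, the current table becomes the new shadow table."""
    global shadow_table
    shadow_table = dict(current_table)

write_page(0, "new A")
# Crash before commit: recovery reinstalls shadow_table, which still
# maps logical page 0 to the old data.
print(disk_pages[shadow_table[0]])   # 'old A' -- old version preserved
commit()
print(disk_pages[shadow_table[0]])   # 'new A' -- update is now durable
```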
Advantages
The advantages of shadow paging are as follows −
 No need for log records.
 No undo/ Redo algorithm.
 Recovery is faster.
Disadvantages
The disadvantages of shadow paging are as follows −
 Data is fragmented or scattered.
 Garbage collection problem. Database pages containing old versions of modified data need to be
garbage collected after every transaction.
 Concurrent transactions are difficult to execute.
Log-Based Recovery
 The log is a sequence of records. The log of each transaction is maintained in stable storage so
that if any failure occurs, the database can be recovered from it.
 Every operation performed on the database is recorded in the log.
 The log records must be written to stable storage before the corresponding changes are applied to
the database.
Let's assume there is a transaction to modify the City of a student. The following logs are written for this
transaction.
When the transaction is initiated, then it writes 'start' log.
<Tn, Start>
When the transaction modifies the City from 'Noida' to 'Bangalore', then another log is written to the file.
<Tn, City, 'Noida', 'Bangalore' >
When the transaction is finished, then it writes another log to indicate the end of the transaction.
<Tn, Commit>
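The three log records can be sketched as follows. The list standing in for a stable-storage log file and the helper function are illustrative; note that the update record carries both the old and the new value, and the log entry is appended before the database change is applied.

```python
# Sketch of writing <Tn, Start>, <Tn, City, old, new>, <Tn, Commit>
# log records around an update, log-first.
log = []   # stands in for a log file on stable storage

def update_city(txn, record, new_value):
    log.append((txn, "Start"))
    old_value = record["City"]
    log.append((txn, "City", old_value, new_value))  # <Tn, City, old, new>
    record["City"] = new_value       # applied only after the log is written
    log.append((txn, "Commit"))

student = {"City": "Noida"}
update_city("Tn", student, "Bangalore")
print(log)
# [('Tn', 'Start'), ('Tn', 'City', 'Noida', 'Bangalore'), ('Tn', 'Commit')]
```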
There are two approaches to modify the database:
1. Deferred database modification:
 The deferred modification technique occurs if the transaction does not modify the database until it
has committed.
 In this method, all the logs are created and stored in the stable storage, and the database is updated
when a transaction commits.
2. Immediate database modification:
 The immediate modification technique applies database modifications while the transaction is
still active.
 In this technique, the database is modified immediately after every operation, following the
actual operations of the transaction.
Recovery using Log records
When the system crashes, it consults the log to find which transactions need to be undone and which
need to be redone.
 If the log contains both the record <Ti, Start> and the record <Ti, Commit>, then transaction
Ti needs to be redone.
 If the log contains the record <Ti, Start> but contains neither <Ti, Commit> nor <Ti, Abort>,
then transaction Ti needs to be undone.
Checkpoint
 A checkpoint is a mechanism by which all the previous logs are removed from the system and
stored permanently on the storage disk.
 The checkpoint acts like a bookmark. During the execution of transactions, such checkpoints are
marked, and the log files are created from the steps of the transactions.
 When a checkpoint is reached, the committed updates are written to the database, and the log
entries up to that point are removed from the log file. The log file is then updated with the new
transaction steps until the next checkpoint, and so on.
 The checkpoint declares a point before which the DBMS was in a consistent state and all
transactions were committed.
Recovery using Checkpoint
In the following manner, a recovery system recovers the database from this failure:

 The recovery system reads the log files from the end back to the start, i.e., from T4 back to T1.
 The recovery system maintains two lists: a redo-list and an undo-list.
 A transaction is put into the redo-list if the recovery system sees both <Tn, Start> and
<Tn, Commit>, or just <Tn, Commit>. All transactions in the redo-list are redone, in log order, and
their logs are retained.
 For example: in the log file, transactions T2 and T3 have both <Tn, Start> and <Tn, Commit>.
Transaction T1 has only <Tn, Commit> after the checkpoint, meaning it committed after the checkpoint
was crossed. Hence T1, T2 and T3 are put into the redo-list.
 A transaction is put into the undo-list if the recovery system sees <Tn, Start> but finds no
commit or abort log. All transactions in the undo-list are undone, and their logs are removed.
 For example: transaction T4 has only <Tn, Start>, so T4 is put into the undo-list, since it was
not yet complete and failed midway.
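The backward scan can be sketched as follows. The log below is an illustrative reconstruction of the T1–T4 example (T1 starts before the checkpoint, T4 never commits); the tuple encoding and list names are assumptions.

```python
# Sketch of checkpoint recovery: scan the log backwards to the checkpoint,
# classifying transactions into a redo-list and an undo-list.
log = [
    ("T1", "Start"),
    ("Checkpoint",),
    ("T2", "Start"), ("T1", "Commit"), ("T2", "Commit"),
    ("T3", "Start"), ("T3", "Commit"),
    ("T4", "Start"),   # T4 has no Commit/Abort record: it failed midway
]

redo_list, undo_list = [], []
for rec in reversed(log):
    if rec[0] == "Checkpoint":
        break                       # everything before it is already on disk
    txn, action = rec[0], rec[-1]
    if action == "Commit":
        redo_list.append(txn)       # committed after the checkpoint -> redo
    elif action == "Start" and txn not in redo_list:
        undo_list.append(txn)       # Start seen with no Commit/Abort -> undo

print(sorted(redo_list))  # ['T1', 'T2', 'T3']
print(undo_list)          # ['T4']
```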
Recovery with Concurrent Transaction
 Whenever more than one transaction is being executed, their logs are interleaved. During
recovery, it would be difficult for the recovery system to backtrack through all the logs and then
start recovering.
 To ease this situation, the 'checkpoint' concept is used by most DBMSs.
