Unit 5TH Notes (DBMS)
CONCURRENCY CONTROL
Concurrency Control in Database Management System is a procedure of managing simultaneous operations
without conflicting with each other. It ensures that Database transactions are performed concurrently and
accurately to produce correct results.
DBMS Concurrency Control is used to address such conflicts, which mostly occur in a multi-user system. Concurrency control is therefore an essential element for the proper functioning of a database management system in which two or more transactions that require access to the same data are executed simultaneously.
Concurrent execution of transactions can lead to the following problems:
Lost Update occurs when multiple transactions select the same row and update the row based on the value selected (a sketch of this appears after this list).
Uncommitted Dependency (Dirty Read) occurs when a second transaction selects a row that is being updated by another transaction.
Non-Repeatable Read occurs when a second transaction accesses the same row several times and reads different data each time.
Incorrect Summary occurs when one transaction computes a summary over the values of all the instances of a repeated data item while a second transaction updates a few instances of that data item. In that situation, the resulting summary does not reflect a correct result.
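To make the Lost Update problem concrete, here is a minimal sketch in Python. The variable names and amounts are illustrative assumptions, not part of any real system; the point is that two interleaved read-modify-write sequences on the same item let the later write silently overwrite the earlier one.

# A minimal sketch of the Lost Update problem: two transactions read the
# same balance, each computes a new value locally, and the later write
# silently overwrites the earlier one. All names and values are illustrative.
balance = 100            # shared data item

t1_local = balance       # T1 reads 100
t2_local = balance       # T2 reads the same value, 100

t1_local -= 30           # T1 intends to withdraw 30
t2_local += 50           # T2 intends to deposit 50

balance = t1_local       # T1 writes: balance = 70
balance = t2_local       # T2 writes: balance = 150 (T1's update is lost)

print(balance)           # 150, but a serial execution would give 120

Serial execution in either order yields 120, so the interleaving above is not serializable; preventing such schedules is exactly what the protocols below do.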
There are three broad categories of concurrency control protocols:
Lock-Based Protocols
Timestamp-Based Protocols
Validation-Based Protocols
Lock-based Protocols
A lock-based protocol in DBMS is a mechanism in which a transaction cannot read or write data until it acquires an appropriate lock. Lock-based protocols help to eliminate the concurrency problem in DBMS for simultaneous transactions by locking or isolating a particular transaction to a single user.
A lock is a data variable associated with a data item. The lock signifies which operations can be performed on that data item. Locks in DBMS help synchronize access to database items by concurrent transactions.
All lock requests are made to the concurrency-control manager. Transactions proceed only once the lock
request is granted.
Binary Locks: A binary lock on a data item can be in either a locked or an unlocked state.
Shared/exclusive: This type of locking mechanism separates the locks in DBMS based on their uses. If a lock
is acquired on a data item to perform a write operation, it is called an exclusive lock.
A shared lock is also called a read-only lock. With a shared lock, the data item can be shared between transactions, because a shared lock grants read permission only and never permission to update the data item.
For example, consider a case where two transactions are reading the account balance of a person. The database will let them read by placing a shared lock. However, if another transaction wants to update that account’s balance, the shared lock prevents it until the reading process is over.
With an exclusive lock, a data item can be read as well as written. An exclusive lock cannot be held concurrently with any other lock on the same data item. An X-lock is requested using the lock-X instruction. Transactions may unlock the data item after finishing the ‘write’ operation.
For example, when a transaction needs to update the account balance of a person, the database allows it by placing an X lock on that data item. Therefore, when a second transaction wants to read or write, the exclusive lock prevents this operation.
Simplistic Lock Protocol
This type of lock-based protocol allows a transaction to obtain a lock on every object before the operation begins. Transactions may unlock the data item after finishing the ‘write’ operation.
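The shared/exclusive rules above can be captured in a small sketch. The following is a minimal, illustrative lock table in Python; the class and method names are assumptions made for this example, not any real DBMS's lock-manager API.

# A minimal sketch of a shared/exclusive lock table. Names are illustrative.
class LockTable:
    def __init__(self):
        # item -> ("S", {holders}) or ("X", {holder})
        self.locks = {}

    def lock_s(self, txn, item):
        """Grant a shared (read) lock unless an exclusive lock is held."""
        mode, holders = self.locks.get(item, ("S", set()))
        if mode == "X" and holders:
            return False                 # conflicting X lock: must wait
        self.locks[item] = ("S", holders | {txn})
        return True

    def lock_x(self, txn, item):
        """Grant an exclusive (write) lock only if no other holder exists."""
        mode, holders = self.locks.get(item, ("S", set()))
        if holders - {txn}:
            return False                 # any other holder conflicts with X
        self.locks[item] = ("X", {txn})
        return True

    def unlock(self, txn, item):
        mode, holders = self.locks.get(item, ("S", set()))
        holders.discard(txn)
        if not holders:
            self.locks.pop(item, None)

lt = LockTable()
print(lt.lock_s("T1", "A"))   # True: shared lock granted
print(lt.lock_s("T2", "A"))   # True: shared locks are compatible
print(lt.lock_x("T3", "A"))   # False: X conflicts with the existing S locks

In a real DBMS the failed request would be queued by the concurrency-control manager rather than simply rejected.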
Pre-claiming Lock Protocol
The pre-claiming lock protocol evaluates the operations and creates a list of the data items required to initiate the execution process. The transaction executes only if all of its locks are granted, and it releases all of its locks once all of its operations are over.
Starvation
Starvation is the situation when a transaction needs to wait for an indefinite period to acquire a lock.
Deadlock
Deadlock refers to a specific situation where two or more processes are waiting for each other to release a
resource or more than two processes are waiting for the resource in a circular chain.
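Such a circular chain is commonly detected with a wait-for graph: an edge Ti -> Tj means Ti is waiting for a lock held by Tj, and a cycle in the graph means deadlock. Below is a minimal sketch of that detection; the graph representation and transaction names are illustrative.

# A minimal sketch of deadlock detection on a wait-for graph.
def has_deadlock(wait_for):
    """Detect a cycle in the wait-for graph via depth-first search."""
    visited, on_stack = set(), set()

    def dfs(txn):
        visited.add(txn)
        on_stack.add(txn)
        for nxt in wait_for.get(txn, []):
            if nxt in on_stack:
                return True              # back edge: cycle (deadlock) found
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(txn)
        return False

    return any(dfs(t) for t in wait_for if t not in visited)

print(has_deadlock({"T1": ["T2"], "T2": ["T1"]}))   # True: circular wait
print(has_deadlock({"T1": ["T2"], "T2": []}))       # False: T2 waits for no one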
Two Phase Locking Protocol
Two Phase Locking Protocol, also known as the 2PL protocol, is a method of concurrency control in DBMS that ensures serializability by applying locks to the transaction data, which blocks other transactions from accessing the same data simultaneously. The Two Phase Locking protocol helps to eliminate the concurrency problem in DBMS.
This locking protocol divides the execution of a transaction into three parts:
In the first phase, when the transaction begins to execute, it seeks permission for the locks it needs.
In the second part, the transaction acquires all the locks. The third phase starts as soon as the transaction releases its first lock.
In the third phase, the transaction cannot demand any new locks; it only releases the acquired locks.
The Two-Phase Locking protocol allows each transaction to make a lock or unlock request in two steps:
Growing Phase: In this phase, a transaction may obtain locks but may not release any locks.
Shrinking Phase: In this phase, a transaction may release locks but may not obtain any new locks.
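The two phases can be sketched as a simple guard inside a transaction object: once the first unlock occurs, any further lock request is rejected as a 2PL violation. This is a minimal illustration under assumed names, not a full lock manager.

# A minimal sketch of enforcing the growing/shrinking rule of 2PL.
class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False          # becomes True at the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: cannot lock after an unlock")
        self.held.add(item)             # growing phase: acquire locks

    def unlock(self, item):
        self.shrinking = True           # first unlock ends the growing phase
        self.held.discard(item)

t = TwoPhaseTxn("T1")
t.lock("A")
t.lock("B")       # growing phase: locks may be acquired
t.unlock("A")     # first unlock: shrinking phase begins
try:
    t.lock("C")   # not allowed once shrinking has started
except RuntimeError as e:
    print(e)      # 2PL violation: cannot lock after an unlock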
It is true that the 2PL protocol offers serializability. However, it does not guarantee that deadlocks never happen. Local and global deadlock detectors search for deadlocks and resolve them by rolling the affected transactions back to their initial states.
Strict Two-Phase Locking (Strict-2PL)
The Strict Two-Phase Locking system is almost similar to 2PL. The only difference is that Strict-2PL never releases a lock immediately after using it: it holds all the locks until the commit point and releases them in one go when the process is over.
Centralized 2PL
In Centralized 2PL, a single site is responsible for the lock management process. It has only one lock manager for the entire DBMS.
Primary Copy 2PL
In the primary copy 2PL mechanism, lock managers are distributed to different sites, and a particular lock manager is responsible for managing the locks for a set of data items. When the primary copy has been updated, the change is propagated to the slaves.
Distributed 2PL
In this kind of two-phase locking mechanism, lock managers are distributed to all sites, and each is responsible for managing the locks for the data at its own site. If no data is replicated, it is equivalent to primary copy 2PL. The communication costs of distributed 2PL are considerably higher than those of primary copy 2PL.
Timestamp-based Protocols
Timestamp-based protocol in DBMS is an algorithm which uses the system time or a logical counter as a timestamp to serialize the execution of concurrent transactions. The timestamp-based protocol ensures that all conflicting read and write operations are executed in timestamp order.
The older transaction is always given priority in this method. It uses system time to determine the time stamp
of the transaction. This is the most commonly used concurrency protocol.
Lock-based protocols manage the order between conflicting transactions when they execute; timestamp-based protocols resolve the conflict as soon as an operation is created.
Example:
Suppose there are three transactions T1, T2, and T3.
T1 has entered the system at time 0010
T2 has entered the system at 0020
T3 has entered the system at 0030
Priority will be given to transaction T1, then transaction T2 and lastly Transaction T3.
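A minimal sketch of the basic timestamp-ordering rules follows. Each data item keeps the largest read timestamp and the last write timestamp seen so far, and an operation arriving from a transaction that is older than a conflicting operation already performed forces a rollback. All names are illustrative.

# A minimal sketch of basic timestamp ordering. Names are illustrative.
class Item:
    def __init__(self):
        self.r_ts = 0     # largest timestamp of any transaction that read it
        self.w_ts = 0     # timestamp of the last transaction that wrote it

def read(item, ts):
    if ts < item.w_ts:                   # a younger txn already wrote: too late
        return "rollback"
    item.r_ts = max(item.r_ts, ts)
    return "ok"

def write(item, ts):
    if ts < item.r_ts or ts < item.w_ts:  # conflicts with a younger txn
        return "rollback"
    item.w_ts = ts
    return "ok"

x = Item()
print(write(x, 20))   # ok: a transaction with timestamp 20 writes X
print(read(x, 10))    # rollback: the reader (ts 10) is older than X's writer (ts 20)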
Advantages:
Schedules are serializable, just as under 2PL.
Since a transaction never waits for a lock, there is no waiting and hence no deadlock.
Disadvantages:
Starvation is possible if the same transaction is repeatedly aborted and restarted.
The schedule may not be cascade-free and may not even be recoverable.
Validation-Based Protocols
The validation-based protocol, also known as optimistic concurrency control, executes each transaction in three phases:
1. Read Phase
2. Validation Phase
3. Write Phase
Read Phase
In the Read Phase, the data values from the database can be read by a transaction but the write operation or
updates are only applied to the local data copies, not the actual database.
Validation Phase
In Validation Phase, the data is checked to ensure that there is no violation of serializability while applying
the transaction updates to the database.
Write Phase
In the Write Phase, the updates are applied to the database if the validation is successful; otherwise, the updates are not applied and the transaction is rolled back.
To keep overhead low, the protocol's storage mechanisms and computational methods should be modest.
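The three phases can be sketched with a small optimistic transaction object. This is an illustrative simplification under assumed names: writes touch only a local copy during the Read Phase, the Validation Phase checks the transaction's read set against the write sets of transactions that committed in the meantime, and the Write Phase applies the local copy to the database.

# A minimal sketch of optimistic (validation-based) execution.
db = {"X": 100, "Y": 200}
committed_write_sets = []            # write sets of recently committed txns

class OptimisticTxn:
    def __init__(self):
        self.read_set, self.local = set(), {}

    def read(self, key):             # Read Phase: read db or the local copy
        self.read_set.add(key)
        return self.local.get(key, db[key])

    def write(self, key, value):     # updates touch only the local copy
        self.local[key] = value

    def commit(self):
        # Validation Phase: fail if a committed txn wrote what we read.
        for wset in committed_write_sets:
            if wset & self.read_set:
                return False         # conflict: roll back (discard local copy)
        db.update(self.local)        # Write Phase: apply the local updates
        committed_write_sets.append(set(self.local))
        return True

t = OptimisticTxn()
t.write("X", t.read("X") + 50)
print(t.commit(), db["X"])           # True 150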
GRANULARITY
Granularity is the size of data that is allowed to be locked. Multiple granularity means hierarchically breaking the database up into smaller blocks that can be locked. This locking technique allows various sizes and sets of data to be locked. Breaking the data into lockable blocks decreases lock overhead and increases concurrency in the database. Multiple granularity also helps keep track of which data can be locked and how it is locked.
For example, consider a tree of data items whose levels are as follows:
o The first (highest) level represents the entire database.
o The second level represents nodes of type area; the database consists of exactly these areas.
o An area consists of child nodes known as files. No file can be present in more than one area.
o Finally, each file contains child nodes known as records. A file contains exactly those records that are its child nodes, and no record is present in more than one file.
o Hence, the levels of the tree, starting from the top level, are as follows:
1. Database
2. Area
3. File
4. Record
In this example, the highest level represents the entire database, and the levels below it are area, file, and record.
There are three additional lock modes with multiple granularities:
Intention-Shared (IS): It indicates explicit locking at a lower level of the tree, but only with shared locks.
Intention-Exclusive (IX): It indicates explicit locking at a lower level with exclusive or shared locks.
Shared & Intention-Exclusive (SIX): The node is locked in shared mode, and some node below it is locked in exclusive mode by the same transaction.
Compatibility Matrix with Intention Lock Modes: The table below describes the compatibility matrix for these lock modes (Yes = compatible, No = conflicting):

        IS    IX    S     SIX   X
IS      Yes   Yes   Yes   Yes   No
IX      Yes   Yes   No    No    No
S       Yes   No    Yes   No    No
SIX     Yes   No    No    No    No
X       No    No    No    No    No
The multiple-granularity protocol uses the intention lock modes to ensure serializability. It requires that a transaction attempting to lock a node follow the compatibility matrix above, and in particular:
o Transaction T1 can unlock a node only if none of the children of that node are currently locked by T1.
Observe that in multiple-granularity, the locks are acquired in top-down order, and locks must be released
in bottom-up order.
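The compatibility matrix can also be expressed directly in code, with a helper that grants a requested mode only if it is compatible with every mode already held on the node. The representation below is an illustrative sketch.

# A minimal sketch of the multiple-granularity compatibility matrix.
COMPATIBLE = {
    #        IS       IX       S        SIX      X
    "IS":  {"IS": 1, "IX": 1, "S": 1, "SIX": 1, "X": 0},
    "IX":  {"IS": 1, "IX": 1, "S": 0, "SIX": 0, "X": 0},
    "S":   {"IS": 1, "IX": 0, "S": 1, "SIX": 0, "X": 0},
    "SIX": {"IS": 1, "IX": 0, "S": 0, "SIX": 0, "X": 0},
    "X":   {"IS": 0, "IX": 0, "S": 0, "SIX": 0, "X": 0},
}

def can_grant(requested, held_modes):
    """A lock is granted only if it is compatible with every held mode."""
    return all(COMPATIBLE[requested][h] for h in held_modes)

print(can_grant("IX", ["IS"]))   # True: intention modes coexist
print(can_grant("S", ["IX"]))    # False: S conflicts with IX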
o If transaction T1 reads record Ra2 in file Fa, then T1 needs to lock the database, area A1, and file Fa in IS mode. Finally, it needs to lock Ra2 in S mode.
o If transaction T2 modifies record Ra9 in file Fa, then it can do so after locking the database, area A1, and file Fa in IX mode. Finally, it needs to lock Ra9 in X mode.
o If transaction T3 reads all the records in file Fa, then T3 needs to lock the database and area A1 in IS mode. At last, it needs to lock Fa in S mode.
o If transaction T4 reads the entire database, then T4 needs to lock the database in S mode.
Multiversion Timestamp Ordering
In the multiversion timestamp-ordering scheme, each write creates a new version of the data item, and each version contains three fields:
1. Content: This field contains the value of that version of the data item.
2. Write timestamp: This field contains the timestamp of the transaction whose write created the version.
3. Read timestamp: This field contains the timestamp of the transaction that will read the newly created value.
Now let us understand this concept using an example. Let T1 and T2 be two transactions having timestamp
values 15 and 10, respectively.
The transaction T2 calls for a write operation on a data item (say X) in the database. As T2 calls the write operation, a new version of the data value X is created, which contains the value of X, the timestamp of T2, and the timestamp of the transaction that will read X. In this case no one has yet read the newly created value, so that field remains empty:

Content: X | Write timestamp: 10 | Read timestamp: –

Now let the transaction T1 (having timestamp 15) call a read operation on the newly created value X. The newly created version then contains:

Content: X | Write timestamp: 10 | Read timestamp: 15
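The example above can be sketched as a small multiversion data item: each write creates a new version tagged with the writer's timestamp, and a read returns the newest version whose write timestamp does not exceed the reader's timestamp, advancing that version's read timestamp. Class and field names are illustrative.

# A minimal sketch of a multiversion data item. Names are illustrative.
class MultiVersionItem:
    def __init__(self, value):
        # each version: [content, write_ts, read_ts]
        self.versions = [[value, 0, 0]]

    def write(self, value, ts):
        self.versions.append([value, ts, 0])   # new version, no reader yet

    def read(self, ts):
        # pick the latest version visible to a reader with timestamp ts
        visible = [v for v in self.versions if v[1] <= ts]
        version = max(visible, key=lambda v: v[1])
        version[2] = max(version[2], ts)       # record the read timestamp
        return version[0]

x = MultiVersionItem(0)
x.write(42, 10)        # T2 (timestamp 10) creates the version <42, 10, ->
print(x.read(15))      # T1 (timestamp 15) reads 42; the read timestamp becomes 15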
o Multiversion concurrency control (MVCC) is a database optimization technique that creates duplicate copies of records so that data can be safely read and updated at the same time.
o With MVCC, DBMS reads and writes don’t block each other.
When implemented properly by a DBMS, multiversion concurrency control gives readers consistent views of the data while writers proceed concurrently, reducing lock contention.
Recovery with Concurrent Transactions
Whenever more than one transaction is being executed, the logs of the transactions are interleaved. During recovery, it would become difficult for the recovery system to backtrack all the logs and then start recovering. As we discussed checkpoints in the Transaction Processing Concept part of this tutorial, you can go through those concepts again to make things clearer. Concurrency means that multiple transactions can be executed at the same time, which is what produces the interleaved logs; and since the transactions' results may change with the interleaving, the order of execution of those transactions must be maintained.
Recovery with concurrent transactions can be done in the following four ways:
1. Interaction with concurrency control
2. Transaction rollback
3. Checkpoints
4. Restart recovery
Transaction rollback:
In this scheme, we roll back a failed transaction by using the log.
The system scans the log backward for the failed transaction; for every log record of that transaction found in the log, the system restores the data item to its old value.
Checkpoints:
Checkpoints are a process of saving a snapshot of the application's state so that it can restart from that point in case of failure.
A checkpoint is a point of time at which a record is written onto the database from the buffers.
When a checkpoint is reached, the transactions up to that point are updated into the database, and the log file up to that point is removed. The log file is then updated with the new transaction steps until the next checkpoint, and so on.
The checkpoint is used to declare the point before which the DBMS was in the consistent state, and all
the transactions were committed.
In this scheme, we used checkpoints to reduce the number of log records that the system must scan
when it recovers from a crash.
In a concurrent transaction processing system, we require that the checkpoint log record be of the form
<checkpoint L>, where ‘L’ is a list of transactions active at the time of the checkpoint.
A fuzzy checkpoint is a checkpoint where transactions are allowed to perform updates even while
buffer blocks are being written out.
Restart recovery:
When the system recovers from a crash, it constructs two lists.
The undo-list consists of transactions to be undone, and the redo-list consists of transactions to be redone.
The system constructs the two lists as follows: initially, both lists are empty. The system scans the log backward, examining each record, until it finds the first <checkpoint L> record.
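A minimal sketch of this backward scan is shown below. The log format is an assumption made for the example; the scan collects committed transactions into the redo-list and transactions that started (or appear in the checkpoint's list L) without committing into the undo-list.

# A minimal sketch of restart recovery: scan the log backward to the last
# <checkpoint L> record, building the redo-list and undo-list.
log = [
    ("checkpoint", ["T1"]),      # <checkpoint L> with L = [T1]
    ("start", "T2"),
    ("commit", "T1"),
    ("start", "T3"),
    ("commit", "T2"),
    # crash: T3 never committed
]

redo_list, undo_list = [], []
for kind, arg in reversed(log):
    if kind == "commit":
        redo_list.append(arg)                 # committed: must be redone
    elif kind == "start" and arg not in redo_list:
        undo_list.append(arg)                 # active at the crash: undo it
    elif kind == "checkpoint":
        # transactions in L that never committed also go on the undo-list
        undo_list += [t for t in arg if t not in redo_list]
        break                                 # scanning stops at <checkpoint L>

print(redo_list, undo_list)    # ['T2', 'T1'] ['T3']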