Lecture Notes, Module 5, BCS403: Database Management System
Module 5
BCS403-Database Management System
Binary Locks
A binary lock can have two states or values: locked and unlocked (or 1 and 0, for
simplicity). A distinct lock is associated with each database item X.
Two operations, lock_item and unlock_item, are used with binary locking. A
transaction requests access to an item X by first issuing a lock_item(X) operation.
If LOCK(X) = 1, the transaction is forced to wait. If LOCK(X) = 0, it is set to 1 (the
transaction locks the item) and the transaction is allowed to access item X.
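As a rough illustration (not part of the notes), the lock_item and unlock_item operations can be sketched in Python; the class and method names here are illustrative, not a real DBMS API:

```python
import threading

class BinaryLock:
    """A binary lock for one database item X: LOCK(X) is 0 or 1."""
    def __init__(self):
        self.value = 0                     # 0 = unlocked, 1 = locked
        self.cond = threading.Condition()  # lets waiting transactions block

    def lock_item(self):
        with self.cond:
            while self.value == 1:         # LOCK(X) = 1: forced to wait
                self.cond.wait()
            self.value = 1                 # LOCK(X) = 0: set it to 1 and proceed

    def unlock_item(self):
        with self.cond:
            self.value = 0                 # release the item
            self.cond.notify()             # wake one waiting transaction

lock_X = BinaryLock()
lock_X.lock_item()      # the transaction now holds the lock on X
held = lock_X.value     # 1 while locked
lock_X.unlock_item()
```

The `while` loop, rather than an `if`, re-checks the lock state after each wakeup, which is the standard condition-variable idiom.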
THARANI R
Asst. Professor, Dept. of AI & ML
SCHEME - 2022
If the simple binary locking scheme described here is used, every transaction must obey the following rules:
1. A transaction T must issue the operation lock_item(X) before any read_item(X) or write_item(X) operations are performed in T.
2. A transaction T must issue the operation unlock_item(X) after all read_item(X) and write_item(X) operations are completed in T.
3. A transaction T will not issue a lock_item(X) operation if it already holds the lock on item X.
4. A transaction T will not issue an unlock_item(X) operation unless it already holds the lock on item X.
The preceding binary locking scheme is too restrictive for database items because at
most one transaction can hold a lock on a given item.
For this purpose, a different type of lock, called a multiple-mode lock, is used. In this
scheme—called shared/exclusive or read/write locks—there are three locking
operations: read_lock(X), write_lock(X), and unlock(X).
A lock associated with an item X, LOCK(X), now has three possible states: read-locked,
write-locked, or unlocked.
A read-locked item is also called share-locked because other transactions are allowed to
read the item, whereas a write-locked item is called exclusive-locked because a single
transaction exclusively holds the lock on the item.
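The three lock states and the sharing behavior can be sketched as follows; this is a simplified, non-blocking illustration (a real lock manager would queue waiting transactions rather than deny requests), and all names are illustrative:

```python
class RWLock:
    """Sketch of a shared/exclusive lock on one database item X.

    The item is 'unlocked', 'read-locked' (readers > 0), or
    'write-locked' (writer is set, held by exactly one transaction)."""
    def __init__(self):
        self.readers = 0        # number of transactions holding a read lock
        self.writer = None      # transaction holding the exclusive lock, if any

    def read_lock(self, txn):
        if self.writer is not None:
            return False        # write-locked: the request cannot be granted now
        self.readers += 1       # share-locked: multiple readers are allowed
        return True

    def write_lock(self, txn):
        if self.writer is not None or self.readers > 0:
            return False        # any existing lock blocks an exclusive request
        self.writer = txn
        return True

    def unlock(self, txn):
        if self.writer == txn:
            self.writer = None
        elif self.readers > 0:
            self.readers -= 1

lock_X = RWLock()
ok1 = lock_X.read_lock("T1")    # True: X becomes share-locked
ok2 = lock_X.read_lock("T2")    # True: readers can share the item
ok3 = lock_X.write_lock("T3")   # False: readers still hold X
```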
When we use the shared/exclusive locking scheme, the system must enforce the following rules:
1. A transaction T must issue the operation read_lock(X) or write_lock(X) before any read_item(X) operation is performed in T.
2. A transaction T must issue the operation write_lock(X) before any write_item(X) operation is performed in T.
3. A transaction T must issue the operation unlock(X) after all read_item(X) and write_item(X) operations are completed in T.
4. A transaction T will not issue a read_lock(X) operation if it already holds a read (shared) lock or a write (exclusive) lock on item X.
5. A transaction T will not issue a write_lock(X) operation if it already holds a read (shared) lock or a write (exclusive) lock on item X.
It is desirable to relax conditions 4 and 5 in the preceding list in order to allow lock conversion; that is, a transaction that already holds a lock on item X is allowed under certain conditions to convert the lock from one locked state to another. For example, a transaction T that holds a read_lock(X) may upgrade it to a write_lock(X), provided that T is the only transaction holding a read lock on X. It is also possible for a transaction T to issue a write_lock(X) and then later to downgrade the lock by issuing a read_lock(X) operation.
Using binary locks or read/write locks in transactions, as described earlier, does not
guarantee serializability of schedules on its own.
To guarantee serializability, we must follow an additional protocol concerning the
positioning of locking and unlocking operations in every transaction.
The best-known protocol, two-phase locking, is described in the next section.
There are a number of variations of two-phase locking (2PL). The technique just
described is known as basic 2PL.
A variation known as conservative 2PL (or static 2PL) requires a transaction to lock all
the items it accesses before the transaction begins execution, by predeclaring its read-
set and write-set.
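The all-or-nothing acquisition step of conservative 2PL can be sketched as follows; the function name and the simple lock-table representation are assumptions for illustration:

```python
def conservative_2pl_acquire(lock_table, txn, read_set, write_set):
    """Try to lock every predeclared item before txn starts (conservative 2PL).

    lock_table maps item -> holding transaction (None if free). All-or-nothing:
    if any needed item is held by another transaction, acquire no locks at all."""
    needed = set(read_set) | set(write_set)
    # Check first: if any item cannot be obtained, none of the items are locked.
    for item in needed:
        holder = lock_table.get(item)
        if holder is not None and holder != txn:
            return False
    for item in needed:
        lock_table[item] = txn
    return True

table = {"X": None, "Y": "T9", "Z": None}
ok = conservative_2pl_acquire(table, "T1", read_set={"X"}, write_set={"Z"})
blocked = conservative_2pl_acquire(table, "T2", read_set={"Y"}, write_set={"Z"})
```

Because the second call finds Y held by T9, it takes no locks at all, which is exactly the property that makes conservative 2PL deadlock-free.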
Usually, the concurrency control subsystem itself is responsible for generating
the read_lock and write_lock requests.
Dealing with Deadlock and Starvation
Deadlock occurs when each transaction T in a set of two or more transactions is waiting
for some item that is locked by some other transaction T′ in the set.
But because the other transaction is also waiting, it will never release the lock. A simple
example is shown in Figure 21.5(a), where the two transactions T1′ and T2′ are
deadlocked in a partial schedule; T1′ is in the waiting queue for X, which is locked by T2′,
whereas T2′ is in the waiting queue for Y, which is locked by T1′.
One way to prevent deadlock is to use a deadlock prevention protocol. One deadlock
prevention protocol, which is used in conservative two-phase locking, requires that
every transaction lock all the items it needs in advance (which is generally not a practical
assumption)—if any of the items cannot be obtained, none of the items are locked.
A number of other deadlock prevention schemes have been proposed that make a
decision about what to do with a transaction involved in a possible deadlock situation.
Should it be blocked and made to wait or should it be aborted, or should the transaction
preempt and abort another transaction? Some of these techniques use the concept of
transaction timestamp TS(T′), which is a unique identifier assigned to each transaction.
The timestamps are typically based on the order in which transactions are started;
hence, if transaction T1 starts before transaction T2, then TS(T1) < TS(T2).
The rules followed by these schemes, where transaction Ti tries to lock an item X that is currently locked by transaction Tj, are:
1. Wait-die: If TS(Ti) < TS(Tj) (Ti is older than Tj), then Ti is allowed to wait; otherwise (Ti is younger than Tj), abort Ti (Ti dies) and restart it later with the same timestamp.
2. Wound-wait: If TS(Ti) < TS(Tj) (Ti is older than Tj), then abort Tj (Ti wounds Tj) and restart it later with the same timestamp; otherwise (Ti is younger than Tj), Ti is allowed to wait.
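The wait-die and wound-wait decisions can be sketched as a single function; the function name and string return values are illustrative only:

```python
def resolve_conflict(scheme, ts_requester, ts_holder):
    """Decide what happens when Ti (requester) wants an item held by Tj (holder).

    Timestamps: a smaller value means an older transaction. Returns 'wait'
    or which transaction must be aborted, per the two textbook schemes."""
    if scheme == "wait-die":
        # An older requester may wait; a younger requester dies (is aborted).
        return "wait" if ts_requester < ts_holder else "abort requester"
    if scheme == "wound-wait":
        # An older requester wounds (aborts) the holder; a younger one waits.
        return "abort holder" if ts_requester < ts_holder else "wait"
    raise ValueError("unknown scheme")

# T1 (TS=1, older) requests an item held by T2 (TS=2, younger):
a = resolve_conflict("wait-die", 1, 2)     # 'wait'
b = resolve_conflict("wound-wait", 1, 2)   # 'abort holder'
# T2 (younger) requests an item held by T1 (older):
c = resolve_conflict("wait-die", 2, 1)     # 'abort requester'
d = resolve_conflict("wound-wait", 2, 1)   # 'wait'
```

In both schemes the older transaction is never aborted, which is what prevents both deadlock and starvation when aborted transactions keep their original timestamps.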
Deadlock Detection
Timeouts. Another simple scheme to deal with deadlock is the use of timeouts: if a transaction waits for longer than a system-defined timeout period, the system assumes that the transaction may be deadlocked and aborts it, regardless of whether a deadlock actually exists. This method is practical because of its low overhead and simplicity.
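Deadlock detection is commonly implemented with a wait-for graph: one node per transaction and an edge from Ti to Tj whenever Ti waits for an item held by Tj; a deadlock exists exactly when the graph contains a cycle. As a generic sketch (this mechanism is standard but not spelled out in the excerpt):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph (dict: txn -> set of txns it waits on).

    Standard depth-first search with a recursion stack; finding a back edge
    means there is a cycle, i.e. a deadlock among those transactions."""
    visited, on_stack = set(), set()

    def visit(t):
        visited.add(t)
        on_stack.add(t)
        for u in wait_for.get(t, ()):
            if u in on_stack:          # back edge: cycle found
                return True
            if u not in visited and visit(u):
                return True
        on_stack.discard(t)
        return False

    return any(visit(t) for t in wait_for if t not in visited)

# T1 waits for X held by T2, and T2 waits for Y held by T1 (as in Figure 21.5(a)):
deadlocked = has_deadlock({"T1": {"T2"}, "T2": {"T1"}})   # True
fine = has_deadlock({"T1": {"T2"}, "T2": set()})          # False
```

When a cycle is found, the system chooses a victim transaction on the cycle to abort, breaking the cycle.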
Starvation. Another problem that may occur when we use locking is starvation, which
occurs when a transaction cannot proceed for an indefinite period of time while other
transactions in the system continue normally.
Timestamps
The idea of the timestamp ordering technique is to order the transactions based on their timestamps. A schedule in which the transactions participate is then serializable, and the only equivalent serial schedule permitted has the transactions in order of their timestamp values. This is called timestamp ordering (TO).
The algorithm allows interleaving of transaction operations, but it must ensure that for
each pair of conflicting operations in the schedule, the order in which the item is
accessed must follow the timestamp order.
To do this, the algorithm associates with each database item X two timestamp (TS) values:
1. read_TS(X): the read timestamp of item X; this is the largest timestamp among all the timestamps of transactions that have successfully read item X.
2. write_TS(X): the write timestamp of item X; this is the largest of all the timestamps of transactions that have successfully written item X.
Whenever the basic TO algorithm detects two conflicting operations that occur in the incorrect order, it rejects the later of the two operations by aborting the transaction that issued it; the aborted transaction is later resubmitted as a new transaction with a new timestamp.
The concurrency control algorithm must check whether conflicting operations violate the timestamp ordering in the following two cases:
1. Whenever a transaction T issues a write_item(X) operation: if read_TS(X) > TS(T) or write_TS(X) > TS(T), then abort and roll back T and reject the operation; otherwise, execute the write_item(X) operation of T and set write_TS(X) to TS(T).
2. Whenever a transaction T issues a read_item(X) operation: if write_TS(X) > TS(T), then abort and roll back T and reject the operation; otherwise, execute the read_item(X) operation of T and set read_TS(X) to the larger of TS(T) and the current read_TS(X).
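The two checks of basic TO can be sketched as one function; the representation of an item as a dict of its two timestamps is an assumption for illustration:

```python
def basic_to(op, txn_ts, item):
    """One check of the basic timestamp-ordering algorithm (sketch).

    item is a dict holding read_TS and write_TS for one database item X.
    Returns 'ok' if the operation may execute, or 'abort' if the issuing
    transaction must be rolled back."""
    if op == "write":
        if item["read_TS"] > txn_ts or item["write_TS"] > txn_ts:
            return "abort"                 # a younger transaction already used X
        item["write_TS"] = txn_ts
        return "ok"
    if op == "read":
        if item["write_TS"] > txn_ts:
            return "abort"                 # a younger transaction already wrote X
        item["read_TS"] = max(item["read_TS"], txn_ts)
        return "ok"
    raise ValueError("unknown operation")

X = {"read_TS": 0, "write_TS": 0}
r1 = basic_to("read", 5, X)     # 'ok': read_TS(X) becomes 5
r2 = basic_to("write", 3, X)    # 'abort': read_TS(X) = 5 > TS(T) = 3
r3 = basic_to("write", 7, X)    # 'ok': write_TS(X) becomes 7
```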
A variation of basic TO called strict TO ensures that the schedules are both strict (for
easy recoverability) and (conflict) serializable.
In this variation, a transaction T that issues a read_item(X) or write_item(X) such that TS(T) > write_TS(X) has its read or write operation delayed until the transaction T′ that wrote the value of X (hence TS(T′) = write_TS(X)) has committed or aborted.
A modification of the basic TO algorithm, known as Thomas's write rule, does not enforce conflict serializability, but it rejects fewer write operations by modifying the checks for the write_item(X) operation as follows:
1. If read_TS(X) > TS(T), then abort and roll back T and reject the operation.
2. If write_TS(X) > TS(T), then do not execute the write operation but continue processing; the write is outdated and can be ignored.
3. If neither condition holds, then execute the write_item(X) operation of T and set write_TS(X) to TS(T).
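Thomas's write rule differs from basic TO only in the second check, where an outdated write is skipped rather than causing an abort; a sketch (same illustrative item representation as above):

```python
def thomas_write(txn_ts, item):
    """Thomas's write rule for write_item(X) (sketch).

    item is a dict with read_TS and write_TS. Unlike basic TO, an outdated
    write is silently ignored instead of aborting the transaction."""
    if item["read_TS"] > txn_ts:
        return "abort"              # a younger transaction already read X
    if item["write_TS"] > txn_ts:
        return "ignore"             # outdated write: skip it, keep processing
    item["write_TS"] = txn_ts
    return "ok"

X = {"read_TS": 0, "write_TS": 10}
outcome = thomas_write(4, X)        # 'ignore': write_TS(X) = 10 > TS(T) = 4
```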
Correspondingly, when a transaction T is allowed to read the value of version Xi, the value of read_TS(Xi) is set to the larger of the current read_TS(Xi) and TS(T).
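In multiversion timestamp ordering, several versions X1, X2, ... of each item are kept, each with its own read_TS and write_TS; a read by T is directed to the version with the largest write_TS not exceeding TS(T), after which that version's read_TS is raised as described above. A sketch, with an assumed list-of-dicts representation for the versions:

```python
def mv_read(versions, txn_ts):
    """Read under multiversion timestamp ordering (sketch).

    versions: list of dicts with 'value', 'read_TS', 'write_TS'. T reads the
    version with the largest write_TS <= TS(T), then read_TS of that version
    is set to the larger of its current value and TS(T)."""
    candidates = [v for v in versions if v["write_TS"] <= txn_ts]
    chosen = max(candidates, key=lambda v: v["write_TS"])
    chosen["read_TS"] = max(chosen["read_TS"], txn_ts)
    return chosen["value"]

X_versions = [
    {"value": "x0", "read_TS": 0, "write_TS": 0},
    {"value": "x1", "read_TS": 5, "write_TS": 4},
    {"value": "x2", "read_TS": 9, "write_TS": 8},
]
v = mv_read(X_versions, txn_ts=6)   # reads 'x1', since write_TS 4 <= 6 < 8
```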
utilizing timestamps.
Some of these methods may suffer from anomalies that can violate serializability, but
because they generally have lower overhead than 2PL, they have been implemented in
several relational DBMSs.
The idea behind optimistic concurrency control is to do all the checks at once; hence,
transaction execution proceeds with a minimum of overhead until the validation phase
is reached. If there is little interference among transactions, most will be validated
successfully.
The optimistic protocol we describe uses transaction timestamps and also requires that
the write_sets and read_sets of the transactions be kept by the system.
The validation phase for Ti checks that, for each such transaction Tj that is either recently committed or is in its validation phase, one of the following conditions holds:
1. Transaction Tj completes its write phase before Ti starts its read phase.
2. Ti starts its write phase after Tj completes its write phase, and the read_set of Ti has no items in common with the write_set of Tj.
3. Both the read_set and write_set of Ti have no items in common with the write_set of Tj, and Tj completes its read phase before Ti completes its read phase.
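The validation check against one other transaction can be sketched as follows; the dict field names for the phase boundaries are illustrative assumptions:

```python
def validate(ti, tj):
    """Check the three validation conditions for Ti against one transaction Tj.

    Each transaction is a dict with read_set, write_set, and phase-boundary
    times (start_read, end_read, start_write, end_write). Returns True if
    Ti passes validation against Tj."""
    # 1. Tj finished its write phase before Ti started reading: no overlap.
    if tj["end_write"] < ti["start_read"]:
        return True
    # 2. Ti writes after Tj's writes, and Ti read nothing that Tj wrote.
    if (tj["end_write"] < ti["start_write"]
            and not (ti["read_set"] & tj["write_set"])):
        return True
    # 3. No read/write or write/write items in common, and Tj finished
    #    reading before Ti finishes its read phase.
    if (not ((ti["read_set"] | ti["write_set"]) & tj["write_set"])
            and tj["end_read"] < ti["end_read"]):
        return True
    return False                    # interference: abort Ti and restart it

Tj = {"read_set": {"Y"}, "write_set": {"Y"}, "end_read": 3, "end_write": 4}
Ti = {"read_set": {"X"}, "write_set": {"X"},
      "start_read": 5, "end_read": 8, "start_write": 9}
ok = validate(Ti, Tj)               # True, via condition 1
Ti2 = {"read_set": {"Y"}, "write_set": set(),
       "start_read": 2, "end_read": 6, "start_write": 7}
clash = validate(Ti2, Tj)           # False: Ti2 read an item Tj wrote
```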
The particular choice of data item type can affect the performance of concurrency control and
recovery.
The size of data items is often called the data item granularity. Fine granularity refers to
small item sizes, whereas coarse granularity refers to large item sizes. Several tradeoffs
must be considered in choosing the data item size.
Suppose, for example, that a transaction T holds a lock on a disk block while another transaction S needs a different record stored in that same block; S must wait. If the data item size were a single record instead of a disk block, transaction S would be able to proceed, because it would be locking a different data item (record).
On the other hand, the smaller the data item size is, the larger the number of items in the database becomes. Because every item is associated with a lock, the system will have a larger number of active locks to be handled by the lock manager.
For timestamps, storage is required for the read_TS and write_TS for each data item,
and there will be similar overhead for handling a large number of items.
Given the above tradeoffs, an obvious question can be asked: What is the best item
size? The answer is that it depends on the types of transactions involved.
If a typical transaction accesses a small number of records, it is advantageous to have
the data item granularity be one record.
Since the best granularity size depends on the given transaction, it seems appropriate
that a database system should support multiple levels of granularity, where the
granularity level can be adjusted dynamically for various mixes of transactions.
Figure 21.7 shows a simple granularity hierarchy with a database containing two files,
each file containing several disk pages, and each page containing several records.
This can be used to illustrate a multiple granularity level 2PL protocol, with
shared/exclusive locking modes, where a lock can be requested at any level.
Suppose transaction T1 wants to update all the records in file f1, and T1 requests and is
granted an exclusive lock for f1. Then all of f1’s pages (p11 through p1n)—and the
records contained on those pages—are locked in exclusive mode.
To make multiple granularity level locking practical, additional types of locks, called
intention locks, are needed.
There are three types of intention locks:
1. Intention-shared (IS): indicates that one or more shared locks will be requested on some descendant node(s).
2. Intention-exclusive (IX): indicates that one or more exclusive locks will be requested on some descendant node(s).
3. Shared-intention-exclusive (SIX): indicates that the current node is locked in shared mode but that one or more exclusive locks will be requested on some descendant node(s).
The compatibility table of the three intention locks, and the actual shared and exclusive locks, is shown in Figure 21.8.
In addition to the three types of intention locks, an appropriate locking protocol must be used. The multiple granularity locking (MGL) protocol consists of the following rules:
1. The lock compatibility (based on Figure 21.8) must be adhered to.
2. The root of the tree must be locked first, in any mode.
3. A node N can be locked by a transaction T in S or IS mode only if the parent node of N is already locked by T in either IS or IX mode.
4. A node N can be locked by T in X, IX, or SIX mode only if the parent of node N is already locked by T in either IX or SIX mode.
5. T can lock a node only if it has not unlocked any node (to enforce the 2PL protocol).
6. T can unlock a node N only if none of the children of node N are currently locked by T.
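The standard compatibility matrix among IS, IX, S, SIX, and X locks (the one Figure 21.8 refers to) can be encoded directly as a table; this sketch and the helper function are illustrative:

```python
# Standard multiple-granularity compatibility matrix (as in Figure 21.8).
# COMPAT[held][requested] is True when the requested mode can be granted
# while another transaction holds the given mode on the same node.
COMPAT = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def can_grant(held_modes, requested):
    """A requested lock is granted only if compatible with every held mode."""
    return all(COMPAT[h][requested] for h in held_modes)

ok = can_grant(["IS", "IX"], "IX")   # True: intention modes are compatible
no = can_grant(["S"], "IX")          # False: S blocks IX on the same node
```

The intention modes are mutually compatible because the real conflicts, if any, will be detected lower in the hierarchy where the actual S and X locks are requested.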