9c Concurrency Control 2


Concurrency control

Lock management
• Lock manager: the part of the DBMS that keeps
track of the locks issued to transactions.
• The lock manager maintains a lock table, which
is a hash table with the data object identifier as
the key.
• The DBMS also maintains a descriptive entry for
each transaction in a transaction table, and
among other things, the entry contains a pointer
to a list of locks held by the transaction.
Lock Table Entry
• A lock table entry for an object -- which can
be a page, a record, and so on, depending on
the DBMS -- contains the following
information:
1. the number of transactions currently holding a
lock on the object (this can be more than one if
the object is locked in shared mode),
2. the nature of the lock (shared or exclusive),
3. a pointer to a queue of lock requests.
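As a rough illustration only (the field names are invented, and a real lock manager packs this information far more compactly), such an entry could be sketched in Python as:

from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Optional, Set, Tuple

@dataclass
class LockTableEntry:
    # transactions currently holding a lock on the object;
    # len(holders) can exceed one only when the mode is shared
    holders: Set[str] = field(default_factory=set)
    # nature of the lock: 'S' (shared), 'X' (exclusive), or None if unlocked
    mode: Optional[str] = None
    # FIFO queue of pending (transaction, requested mode) lock requests
    queue: Deque[Tuple[str, str]] = field(default_factory=deque)

# The transaction table would separately map each transaction to the
# objects it has locked, so all its locks can be released at commit/abort.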
Lock and Unlock Requests
• When a transaction needs a lock on an object, it issues a lock request to
the lock manager:
1. If a shared lock is requested, the queue of requests is empty, and the
object is not currently locked in exclusive mode, the lock manager
grants the lock and updates the lock table entry for the object
(indicating that the object is locked in shared mode, and incrementing
the number of transactions holding a lock by one).
2. If an exclusive lock is requested, and no transaction currently holds a
lock on the object (which also implies the queue of requests is empty),
the lock manager grants the lock and updates the lock table entry.
3. Otherwise, the requested lock cannot be immediately granted, and
the lock request is added to the queue of lock requests for this object.
The transaction requesting the lock is suspended.
• When a transaction aborts or commits, it releases all its
locks.
• When a lock on an object is released, the lock manager
updates the lock table entry for the object and examines
the lock request at the head of the queue for this object.
• If this request can now be granted, the transaction that
made the request is woken up and given the lock.
• Indeed, if there are several requests for a shared lock on
the object at the front of the queue, all of these
requests can now be granted together.
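The request and release rules above can be condensed into a toy, single-threaded lock manager sketch (all class and method names are invented; a real lock manager latches the lock table, actually suspends and resumes transactions, and handles lock upgrades, all of which are omitted here):

from collections import deque

class ToyLockManager:
    """'S' = shared, 'X' = exclusive; a queued request returns False
    (standing in for suspending the requesting transaction)."""

    def __init__(self):
        self.table = {}       # lock table: object id -> lock table entry
        self.txn_locks = {}   # transaction table side: txn -> locked objects

    def _entry(self, obj):
        return self.table.setdefault(
            obj, {"holders": set(), "mode": None, "queue": deque()})

    def request(self, txn, obj, mode):
        e = self._entry(obj)
        if mode == "S" and not e["queue"] and e["mode"] != "X":
            # Rule 1: shared request, empty queue, object not X-locked
            e["holders"].add(txn)
            e["mode"] = "S"
        elif mode == "X" and not e["holders"]:
            # Rule 2: exclusive request and no current lock holders
            e["holders"].add(txn)
            e["mode"] = "X"
        else:
            # Rule 3: cannot be granted now; queue it and suspend the txn
            e["queue"].append((txn, mode))
            return False
        self.txn_locks.setdefault(txn, set()).add(obj)
        return True

    def release_all(self, txn):
        """Called at commit or abort: drop txn's locks and wake up waiters."""
        woken = []
        for obj in self.txn_locks.pop(txn, set()):
            e = self.table[obj]
            e["holders"].discard(txn)
            if not e["holders"]:
                e["mode"] = None
            q = e["queue"]
            # Grant the request at the head of the queue if now compatible;
            # several consecutive S requests at the front are granted together.
            while q:
                waiter, mode = q[0]
                if mode == "X" and e["holders"]:
                    break
                if mode == "S" and e["mode"] == "X":
                    break
                q.popleft()
                e["holders"].add(waiter)
                e["mode"] = mode
                self.txn_locks.setdefault(waiter, set()).add(obj)
                woken.append((waiter, obj, mode))
                if mode == "X":
                    break
        return woken

# T1 reads object O; T2's exclusive request must wait until T1 finishes.
lm = ToyLockManager()
assert lm.request("T1", "O", "S") is True
assert lm.request("T2", "O", "X") is False     # queued behind T1's lock
assert lm.release_all("T1") == [("T2", "O", "X")]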
Note:
• If T1 has a shared lock on O, and T2 requests an
exclusive lock, T2's request is queued.
• Now, if T3 requests a shared lock, its request enters the
queue behind that of T2, even though the requested
lock is compatible with the lock held by T1.
• This rule ensures that T2 does not starve, that is, wait
indefinitely while a stream of other transactions acquire
shared locks and thereby prevent T2 from getting the
exclusive lock that it is waiting for.
Deadlock Prevention
• We can prevent deadlocks by giving each
transaction a priority and ensuring that lower
priority transactions are not allowed to wait for
higher priority transactions (or vice versa).
• One way to assign priorities is to give each
transaction a timestamp when it starts up.
• The lower the timestamp, the higher the
transaction's priority, that is, the oldest
transaction has the highest priority.
Schemes of Deadlock Prevention
• If a transaction Ti requests a lock and
transaction Tj holds a conflicting lock, the lock
manager can use one of the following two
policies:
1. Wait-die: If Ti has higher priority, it is allowed
to wait; otherwise it is aborted.
2. Wound-wait: If Ti has higher priority, abort
Tj; otherwise Ti waits.
• We must also ensure that no transaction is perennially
aborted because it never has a sufficiently high priority
(note that in both schemes, the higher priority transaction
is never aborted).
• When a transaction is aborted and restarted, it
should be given the same timestamp that it had
originally.
• Reissuing timestamps in this way ensures that each
transaction will eventually become the oldest
transaction, and thus the one with the highest
priority, and will get all the locks that it requires.
• The wait-die scheme is nonpreemptive; only a
transaction requesting a lock can be aborted.
• As a transaction grows older (and its priority
increases), it tends to wait for more and more
younger transactions.
• A younger transaction that conflicts with an older
transaction may be repeatedly aborted (a
disadvantage with respect to wound-wait), but on
the other hand, a transaction that has all the locks it
needs will never be aborted for deadlock reasons.
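The two policies can be condensed into a small sketch, assuming each transaction's priority is its startup timestamp (lower timestamp = older = higher priority). The function names are invented; recall that a restarted transaction keeps its original timestamp.

def wait_die(ti_ts, tj_ts):
    """Ti requests a lock that Tj holds. Non-preemptive: only the
    requesting transaction can be aborted."""
    if ti_ts < tj_ts:                  # Ti is older, hence higher priority
        return "Ti waits for Tj"
    return "Ti is aborted (dies)"

def wound_wait(ti_ts, tj_ts):
    """Ti requests a lock that Tj holds. Preemptive: the lock holder can
    be aborted (wounded) by an older requester."""
    if ti_ts < tj_ts:                  # Ti is older, hence higher priority
        return "Tj is aborted (wounded)"
    return "Ti waits for Tj"

# T1 started at time 10, T2 at time 20, so T1 has the higher priority.
assert wait_die(10, 20) == "Ti waits for Tj"          # older requester waits
assert wait_die(20, 10) == "Ti is aborted (dies)"     # younger requester dies
assert wound_wait(10, 20) == "Tj is aborted (wounded)"
assert wound_wait(20, 10) == "Ti waits for Tj"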
Deadlock Detection
• The lock manager maintains a structure called
a waits-for graph to detect deadlock cycles.
• The nodes correspond to active transactions,
and there is an arc from Ti to Tj if (and only if)
Ti is waiting for Tj to release a lock.
• The lock manager adds edges to this graph
when it queues lock requests and removes
edges when it grants lock requests.
Schedule Illustrating Deadlock
Waits-for Graph before and after Deadlock
• The waits-for graph is periodically checked for
cycles, which indicate deadlock.
• A deadlock is resolved by aborting a
transaction that is on a cycle and releasing its
locks; this action allows some of the waiting
transactions to proceed.
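A small sketch of a waits-for graph with a depth-first cycle check (the class is hypothetical; edges would be added when requests are queued, removed when they are granted, and the check run periodically):

class WaitsForGraph:
    """Nodes are active transactions; an edge Ti -> Tj means Ti waits for Tj."""

    def __init__(self):
        self.edges = {}                      # txn -> set of txns it waits for

    def add_edge(self, waiter, holder):      # lock request queued
        self.edges.setdefault(waiter, set()).add(holder)

    def remove_edge(self, waiter, holder):   # lock request granted
        self.edges.get(waiter, set()).discard(holder)

    def find_cycle(self):
        """Return the transactions on some cycle, or None if deadlock-free."""
        visiting, done = set(), set()
        path = []

        def dfs(t):
            visiting.add(t)
            path.append(t)
            for u in self.edges.get(t, ()):
                if u in visiting:            # back edge: a deadlock cycle
                    return path[path.index(u):]
                if u not in done:
                    cycle = dfs(u)
                    if cycle:
                        return cycle
            path.pop()
            visiting.discard(t)
            done.add(t)
            return None

        for t in list(self.edges):
            if t not in done:
                cycle = dfs(t)
                if cycle:
                    return cycle
        return None

# T1 waits for T2 and T2 waits for T1: the detector reports the cycle,
# and one of the two would be aborted to break the deadlock.
g = WaitsForGraph()
g.add_edge("T1", "T2")
g.add_edge("T2", "T1")
assert set(g.find_cycle()) == {"T1", "T2"}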
Performance of Lock-Based Concurrency
Control
• Designing a good lock-based concurrency control
mechanism in a DBMS involves making a number of
choices:
1. Should we use deadlock-prevention or deadlock-
detection?
2. If we use deadlock-detection, how frequently should
we check for deadlocks?
3. If we use deadlock-detection and identify a deadlock,
which transaction (on some cycle in the waits-for
graph, of course) should we abort?
Blocking and Aborting
• Lock-based schemes are designed to resolve conflicts
between transactions and use one of two mechanisms:
blocking and aborting transactions.
• blocked transactions may hold locks that force other
transactions to wait
• aborting and restarting a transaction obviously wastes
the work done thus far by that transaction.
• A deadlock represents an extreme instance of blocking in
which a set of transactions is forever blocked unless one
of the deadlocked transactions is aborted by the DBMS.
Deadlock-prevention or deadlock-detection?
• Prevention-based schemes: the abort mechanism is used
preemptively in order to avoid deadlocks.
• Detection-based schemes: the transactions in a deadlock
cycle hold locks that prevent other transactions from
making progress.
• System throughput is reduced because many
transactions may be blocked, waiting to obtain
locks currently held by deadlocked transactions.
• This is the fundamental trade-off between these
prevention and detection approaches to deadlocks:
– loss of work due to preemptive aborts OR
– loss of work due to blocked transactions in a deadlock
cycle.
• We can increase the frequency with which we check
for deadlock cycles, and thereby reduce the amount
of work lost due to blocked transactions, but this
entails a corresponding increase in the cost of the
deadlock detection mechanism.
Conservative 2PL
• Conservative 2PL prevents deadlocks.
• A transaction obtains all the locks that it will ever need when it
begins, or blocks waiting for these locks to become available.
• This scheme ensures that there will not be any deadlocks, and,
perhaps more importantly, that a transaction that already holds
some locks will not block waiting for other locks.
• The drawback is that a transaction acquires locks earlier
than it actually needs them.
• If lock contention is low, locks are held longer under Conservative
2PL.
• If lock contention is heavy, Conservative 2PL can reduce the time
that locks are held on average, because transactions that hold locks
are never blocked.
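A minimal sketch of the idea (names invented): the transaction pre-declares every lock it will need and either gets all of them at once or none of them, so it never waits while already holding locks.

class ConservativeLockTable:
    """All-or-nothing acquisition of a pre-declared lock set.
    Modes: 'S' (shared) or 'X' (exclusive). Purely illustrative."""

    def __init__(self):
        self.locks = {}             # obj -> {"mode": 'S'/'X', "holders": set()}

    def _compatible(self, obj, mode):
        e = self.locks.get(obj)
        return e is None or (mode == "S" and e["mode"] == "S")

    def acquire_all(self, txn, wanted):
        """wanted: {obj: mode}. Grant every lock or none, so a transaction
        that holds locks never blocks waiting for further locks."""
        if not all(self._compatible(o, m) for o, m in wanted.items()):
            return False            # block/retry before holding anything
        for obj, mode in wanted.items():
            e = self.locks.setdefault(obj, {"mode": mode, "holders": set()})
            e["holders"].add(txn)
        return True

    def release_all(self, txn):
        for obj in list(self.locks):
            e = self.locks[obj]
            e["holders"].discard(txn)
            if not e["holders"]:
                del self.locks[obj]

# T2 cannot take even the compatible part of its lock set while T1 holds B.
lt = ConservativeLockTable()
assert lt.acquire_all("T1", {"A": "S", "B": "X"})
assert not lt.acquire_all("T2", {"A": "S", "B": "S"})
lt.release_all("T1")
assert lt.acquire_all("T2", {"A": "S", "B": "S"})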
If we use deadlock-detection, how
frequently should we check for deadlocks?
• Deadlocks are relatively infrequent, and
detection-based schemes work well in
practice.
• However, if there is a high level of contention
for locks, and therefore an increased
likelihood of deadlocks, prevention-based
schemes could perform better.
If we use deadlock-detection and identify a deadlock, which
transaction (on some cycle in the waits-for graph, of course)
should we abort?
• When a deadlock is detected, the choice of which
transaction to abort can be made using several criteria:
• the one with the fewest locks,
• the one that has done the least work,
• the one that is farthest from completion, and so on.
• However, a transaction might be repeatedly restarted
because it keeps being chosen as the victim of a deadlock
cycle.
• Such transactions should eventually be favored during
deadlock detection and allowed to complete.
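One hypothetical way to combine these criteria (not prescribed here): pick the cheapest transaction on the cycle by lock count and work done, but exempt transactions that have already been restarted too many times so they eventually complete.

from dataclasses import dataclass

@dataclass
class TxnStats:
    txn_id: str
    locks_held: int
    work_done: int      # e.g. number of log records written so far
    restarts: int       # times this transaction was already a deadlock victim

def choose_victim(cycle, max_restarts=3):
    """Pick a victim among the transactions on a deadlock cycle."""
    # Favor transactions that have been restarted repeatedly: never pick them
    # again (unless every transaction on the cycle is in that situation).
    candidates = [t for t in cycle if t.restarts < max_restarts] or list(cycle)
    # Cheapest to abort first: fewest locks held, then least work done.
    return min(candidates, key=lambda t: (t.locks_held, t.work_done))

cycle = [TxnStats("T1", locks_held=5, work_done=900, restarts=0),
         TxnStats("T2", locks_held=2, work_done=100, restarts=4),  # spare T2
         TxnStats("T3", locks_held=3, work_done=300, restarts=0)]
assert choose_victim(cycle).txn_id == "T3"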
Concurrency Control in B+ Trees
• A straightforward approach to concurrency control for B+
trees and ISAM indexes is to ignore the index structure,
treat each page as a data object, and use some version of
2PL.
• This simplistic strategy leads to very high lock contention
on the higher levels of the tree, since every search and
update must pass through the root; we can do much better.
• Two observations provide the necessary insight:
1. The higher levels of the tree only serve to direct searches,
and all the 'real' data is in the leaf levels (in the format of
one of the three alternatives for data entries).
2. For inserts, a node must be locked (in exclusive mode, of
course) only if a split can propagate up to it from the
modified leaf.
Search
• Searches should obtain shared locks on nodes,
starting at the root and proceeding along a
path to the desired leaf.
• The first observation suggests that a lock on a
node can be released as soon as a lock on a
child node is obtained.
B+ Tree Locking Example:
search 38*, insert 45*, insert 25*
Search 38*
• To search for the data entry 38*:
• a transaction Ti must obtain an S lock on node A,
read the contents and determine that it needs to
examine node B,
• obtain an S lock on node B and release the lock on A,
• obtain an S lock on node C and release the lock on B,
• obtain an S lock on node D and release the lock on C.
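This descent with shared-lock coupling could be sketched as follows, using made-up Node and lock-manager stand-ins; the real node layout, locking calls, and key comparisons would come from the DBMS.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    keys: List[int]
    children: List["Node"] = field(default_factory=list)   # empty for a leaf

    @property
    def is_leaf(self):
        return not self.children

    def child_for(self, key):
        for i, k in enumerate(self.keys):
            if key < k:
                return self.children[i]
        return self.children[-1]

class TraceLockManager:
    """Stand-in that just records the sequence of S-lock and unlock calls."""
    def __init__(self):
        self.trace = []
    def lock_shared(self, txn, node):
        self.trace.append(("S", id(node)))
    def unlock(self, txn, node):
        self.trace.append(("unlock", id(node)))

def search(root, key, lock_mgr, txn="Ti"):
    """Lock coupling: S-lock the child, and only then release the parent,
    which is safe because inner nodes merely direct the search."""
    node = root
    lock_mgr.lock_shared(txn, node)
    while not node.is_leaf:
        child = node.child_for(key)
        lock_mgr.lock_shared(txn, child)    # take the child's lock first ...
        lock_mgr.unlock(txn, node)          # ... then give up the parent's
        node = child
    found = key in node.keys                # leaf level holds the data entries
    lock_mgr.unlock(txn, node)
    return found

# Two-level toy tree: the trace shows each parent lock released only after
# the child's lock is held, as in the A -> B -> C -> D example above.
leaf = Node(keys=[38, 41, 44])
root = Node(keys=[45], children=[leaf, Node(keys=[45, 52])])
lm = TraceLockManager()
assert search(root, 38, lm)
assert [op for op, _ in lm.trace] == ["S", "S", "unlock", "unlock"]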
Delete 38*
• If transaction Tj wants to delete 38*, it must also
traverse the path from the root to node D and is forced
to wait until Ti is done.
• If some transaction Tk holds a lock on, say, node C
before Ti reaches this node, Ti is similarly forced to
wait for Tk to complete.
Insert 45*
• a transaction must obtain an S lock on node A,
• obtain an S lock on node B and release the lock on A,
• obtain an S lock on node C,
• obtain an X lock on node E and release the locks on C
and B.
• Because node E has space for the new entry, the
insert is accomplished by modifying this node.
Insert 25*
• Proceeding as for the insert of 45*, we obtain an X lock on node H.
• This node is full, however, and must be split.
• Splitting H requires that we also modify the parent, node F, but the
transaction has only an S lock on F. Thus, it must request an
upgrade of this lock to an X lock.
• If no other transaction holds an S lock on F, the upgrade is granted,
and since F has space, the split will not propagate further, and the
insertion of 25* can proceed (by splitting H and locking G to
modify the sibling pointer in I to point to the newly created node).
• However, if another transaction holds an S lock on node F, the first
transaction is suspended until this transaction releases its S lock.
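Under one reading of these two examples, the insert protocol can be sketched roughly as follows: descend with S locks, retain an ancestor's lock only while the child below it is full (so a split could still propagate up to it), take an X lock on the leaf, and upgrade the parent's retained S lock to X only if the leaf actually has to split. All names are invented and the tree is a small toy, not the figure from the slides.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    keys: List[int]
    children: List["Node"] = field(default_factory=list)
    capacity: int = 4

    @property
    def is_leaf(self):
        return not self.children

    @property
    def is_full(self):
        return len(self.keys) >= self.capacity

    def child_for(self, key):
        for i, k in enumerate(self.keys):
            if key < k:
                return self.children[i]
        return self.children[-1]

def insert_lock_trace(root, key):
    """Return the lock/unlock sequence an insert of `key` would issue."""
    trace = [("S", root.name)]
    held = []                    # ancestors whose S locks are still retained
    node = root
    while not node.is_leaf:
        child = node.child_for(key)
        trace.append(("X" if child.is_leaf else "S", child.name))
        if child.is_full:
            held.append(node)    # a split of the child could reach this node
        else:
            # the child has room, so no split can propagate past it:
            # the retained ancestor locks (and this node's) can be released
            for ancestor in held + [node]:
                trace.append(("unlock", ancestor.name))
            held = []
        node = child
    if node.is_full:             # leaf split: the parent must be modified too
        for ancestor in held:
            trace.append(("upgrade S->X", ancestor.name))
    # ... perform the insert (splitting if needed), then release the rest ...
    return trace

# Toy tree: H is a full leaf under F, so inserting 25 must split H and
# therefore upgrade the S lock retained on F, as in the 25* example above.
G = Node("G", [20, 21])
H = Node("H", [22, 23, 24, 27])
I = Node("I", [29, 31])
F = Node("F", [22, 29], children=[G, H, I])
D = Node("D", [40, 41])
E = Node("E", [44, 45])
Z = Node("Z", [44], children=[D, E])
A = Node("A", [40], children=[F, Z])

assert insert_lock_trace(A, 25) == [
    ("S", "A"), ("S", "F"), ("unlock", "A"),   # F has room: drop A's lock
    ("X", "H"),                                # leaf is locked exclusively
    ("upgrade S->X", "F"),                     # H is full, so F must change
]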
