DBMS Unit-5

CONCURRENCY CONTROL

When several transactions execute concurrently in the database, the isolation property
may no longer be preserved. To ensure that it is, the system must control the interaction
among the concurrent transactions. The mechanism used to control this interaction is
called a concurrency-control scheme.
There are a number of concurrency-control schemes:

1. Lock based protocol

2. Time stamp based protocol

3. Validation based protocol

4. Multiple granularity protocol

5. Multi-version protocol

1 Lock based protocol


A lock is a mechanism to control concurrent access to a data item.
A data item can be locked in two modes.
Shared mode (S): If a transaction Ti has obtained a shared-mode lock on item Q, then
Ti can read, but cannot write, Q.
Exclusive mode (X): If a transaction Ti has obtained an exclusive-mode lock on item
Q, then Ti can both read and write Q.

Note: The transaction makes the lock request to the concurrency-control manager.
The transaction can proceed only after the lock request is granted.

Compatibility function
Given a set of lock modes, we can define a compatibility function on them as follows.
Let A and B represent arbitrary lock modes. Suppose that a transaction Ti requests a
lock of mode A on item Q on which transaction Tj (Ti ≠ Tj ) currently holds a lock of
mode B. If transaction Ti can be granted a lock on Q immediately, in spite of the presence
of the mode B lock, then we say mode A is compatible with mode B. Such a function can
be represented conveniently by a matrix. An element comp(A, B) of the matrix has the
value true if and only if mode A is compatible with mode B.
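The matrix itself is not reproduced in these notes; for the two modes defined above it is
the standard one:

            S       X
   S        true    false
   X        false   false

That is, two shared locks on the same item can be held concurrently, while an exclusive
lock is incompatible with every other lock.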

Note:

• A transaction requests a shared lock on data item Q by executing the lock-S(Q)
instruction.

• A transaction requests an exclusive lock on data item Q by executing the lock-X(Q)
instruction.

• To unlock the data item Q, we use the unlock(Q) instruction.

Note:
To access a data item, transaction Ti must first lock that item. If the data item is already
locked by another transaction in an incompatible mode, the concurrency control manager
will not grant the lock until all incompatible locks held by other transactions have been
released. Thus, Ti is made to wait until all incompatible locks held by other transactions
have been released.

Example: Consider the following two transactions T1 and T2 with locking modes.

Consider the following schedule-1 of these transactions.
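The transaction listings and schedule-1 are given as figures that are not reproduced here.
A plausible reconstruction, following the standard textbook example (T1 transfers $50
from B to A, and T2 displays A + B), is:

T1: lock-X(B); read(B); B := B - 50; write(B); unlock(B);
    lock-X(A); read(A); A := A + 50; write(A); unlock(A).

T2: lock-S(A); read(A); unlock(A);
    lock-S(B); read(B); unlock(B);
    display(A + B).

In schedule-1, T1 unlocks B immediately after writing it, and T2 reads and displays both
accounts before T1 has locked and updated A.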


Suppose that the values of accounts A and B are $100 and $200, respectively. If these two
transactions are executed serially, either in the order T1 , T2 or the order T2 , T1 , then
transaction T2 will display the value $300. If, however, these transactions are executed
concurrently, then schedule-1 is possible. In this case, transaction T2 displays $250, which
is incorrect. The reason for this mistake is that transaction T1 unlocked data item
B too early, as a result of which T2 saw an inconsistent state.

Example: Consider the following two transactions T3 and T4 with locking modes.
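These transactions are also given as a figure; a plausible reconstruction, again following
the standard textbook example, is:

T3: lock-X(B); read(B); B := B - 50; write(B);
    lock-X(A); read(A); A := A + 50; write(A);
    unlock(B); unlock(A).

T4: lock-S(A); read(A);
    lock-S(B); read(B);
    display(A + B);
    unlock(A); unlock(B).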

Consider the partial schedule-2 for T3 and T4 . Since T3 is holding an exclusive-mode
lock on B and T4 is requesting a shared-mode lock on B, T4 is waiting for T3 to unlock B.
Similarly, since T4 is holding a shared-mode lock on A and T3 is requesting an exclusive-
mode lock on A, T3 is waiting for T4 to unlock A. Thus, we have arrived at a state where
neither of these transactions can ever proceed with its normal execution. This situation
is called deadlock.

Note: When deadlock occurs, the system must roll back one of the two transactions.

Locking protocol: This is the set of rules indicating when a transaction may lock and
unlock each of the data items.
Note: A schedule S is legal under a given locking protocol if S is a possible schedule
for a set of transactions that follow the rules of the locking protocol.
Note: A locking protocol ensures conflict serializability if and only if all legal schedules
are conflict serializable.

Starvation
Suppose a transaction T2 has a shared-mode lock on a data item, and another transaction
T1 requests an exclusive-mode lock on the data item. Clearly, T1 has to wait for T2 to
release the shared-mode lock. Meanwhile, a transaction T3 may request a shared-mode
lock on the same data item. The lock request is compatible with the lock granted to
T2 , so T3 may be granted the shared-mode lock. At this point T2 may release the lock,
but still T1 has to wait for T3 to finish. But again, there may be a new transaction T4
that requests a shared-mode lock on the same data item, and is granted the lock before
T3 releases it. In fact, it is possible that there is a sequence of transactions that each
requests a shared-mode lock on the data item, and each transaction releases the lock a
short while after it is granted, but T1 never gets the exclusive-mode lock on the data item.
The transaction T1 may never make progress, and is said to be starved. This situation is
called starvation.

1.1 Two-phase locking protocol


This protocol requires that each transaction issue lock and unlock requests in two phases:
1. Growing phase: A transaction may obtain locks, but may not release any lock.
2. Shrinking phase: A transaction may release locks, but may not obtain any new
locks.

Initially, a transaction is in the growing phase. The transaction acquires locks as needed.
Once the transaction releases a lock, it enters the shrinking phase, and it can issue no
more lock requests.
Example: Transactions T3 and T4 are two phase, while transactions T1 and T2 are not
two phase.

Note: The two-phase locking protocol ensures conflict serializability. The serializability
order of the transactions is determined by their lock points.

Lock point: Lock point of a transaction is a point in the schedule where the transaction
has obtained its final lock (the end of its growing phase).
Note: Two-phase locking does not ensure freedom from deadlock.

Observe that transactions T3 and T4 are in two phase, but, in schedule 2, they are
deadlocked.

Note: In addition to being serializable, schedules should be cascadeless. Cascading
rollback may occur under two-phase locking.
Example: Consider the partial schedule in the following figure:-
Each transaction observes the two-phase locking protocol, but the failure of T5 after the
read(A) step of T7 leads to cascading rollback of T6 and T7 .
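The figure is not reproduced here; a plausible reconstruction of such a partial schedule,
following the standard textbook example, is:

T5: lock-X(A); read(A); lock-S(B); read(B); write(A); unlock(A);
T6: lock-X(A); read(A); write(A); unlock(A);
T7: lock-S(A); read(A).

Here T6 reads the value of A written by T5, and T7 reads the value of A written by T6,
so if T5 fails before committing, both T6 and T7 must be rolled back.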
Note: Cascading rollbacks can be avoided by a modification of two-phase locking called
the strict two-phase locking protocol.

Strict two-phase locking protocol


This protocol requires not only that locking be two phase, but also that all exclusive-
mode locks taken by a transaction be held until that transaction commits.
Rigorous two-phase locking protocol
Another variant of two-phase locking is the rigorous two-phase locking protocol, which
requires that all locks be held until the transaction commits.
Note: With rigorous two-phase locking, transactions can be serialized in the order in
which they commit.

Lock Conversion
Upgrade: We denote conversion from shared to exclusive modes by upgrade.

Downgrade: We denote conversion from exclusive to shared by downgrade.

Note: Lock conversion cannot be allowed arbitrarily. Rather, upgrading can take
place in only the growing phase, whereas downgrading can take place in only the shrink-
ing phase.
Note: Strict two-phase locking and rigorous two-phase locking (with lock conversions)
are used extensively in commercial database systems.
Note: A simple but widely used scheme automatically generates the appropriate lock
and unlock instructions for a transaction, on the basis of read and write requests from
the transaction:

• When a transaction Ti issues a read(Q) operation, the system issues a lock- S(Q)
instruction followed by the read(Q) instruction.

• WhenTi issues a write(Q) operation, the system checks to see whether Ti already
holds a shared lock on Q. If it does, then the system issues an upgrade( Q) instruc-
tion, followed by the write(Q) instruction. Otherwise, the system issues a lock-X(Q)
instruction, followed by the write(Q) instruction.

• All locks obtained by a transaction are unlocked after that transaction commits or
aborts.
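A minimal sketch of this automatic scheme, written here in Python purely for illustration
(the function name and the encoding of operations are assumptions, not part of these
notes):

def add_lock_instructions(ops):
    """Translate a transaction's read/write requests, for example
    [("read", "A"), ("write", "A")], into the same sequence with the
    automatically generated lock and unlock instructions."""
    out, shared, exclusive = [], set(), set()
    for action, q in ops:
        if action == "read":
            if q not in shared and q not in exclusive:
                out.append(f"lock-S({q})")        # issue lock-S(Q) before read(Q)
                shared.add(q)
            out.append(f"read({q})")
        else:  # write
            if q in shared:
                out.append(f"upgrade({q})")       # S -> X, allowed only in the growing phase
                shared.remove(q)
                exclusive.add(q)
            elif q not in exclusive:
                out.append(f"lock-X({q})")        # issue lock-X(Q) before write(Q)
                exclusive.add(q)
            out.append(f"write({q})")
    out.append("commit")
    out.extend(f"unlock({q})" for q in sorted(shared | exclusive))  # release only after commit/abort
    return out

For example, add_lock_instructions([("read", "B"), ("write", "B"), ("read", "A")]) yields
lock-S(B), read(B), upgrade(B), write(B), lock-S(A), read(A), commit, unlock(A), unlock(B).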

1.2 Graph-Based Protocols


For this type of protocol, we need some prior knowledge of the database. To acquire such
prior knowledge, we impose a partial ordering → on the set D = {d1 , d2 , ..., dn } of all data
items. If di → dj , then any transaction accessing both di and dj must access di before
accessing dj .

The partial ordering implies that the set D may now be viewed as a directed acyclic
graph, called a database graph. Here, we will consider only graphs that are rooted trees;
therefore, we will study the tree protocol.

In the tree protocol, the only lock instruction allowed is lock-X. Each transaction
Ti can lock a data item at most once, and must observe the following rules:

1. The first lock by Ti may be on any data item.

2. Subsequently, a data item Q can be locked by Ti only if the parent of Q is currently
locked by Ti .

3. Data items may be unlocked at any time

4. A data item that has been locked and unlocked by Ti cannot subsequently be
relocked by Ti .

All schedules that are legal under the tree protocol are conflict serializable.
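A small illustrative sketch (in Python, with names invented here) that checks whether one
transaction's sequence of lock-X/unlock requests obeys the four rules above, given the
parent of each data item in the database tree:

def follows_tree_protocol(requests, parent):
    # requests: list of ("lock-X", item) or ("unlock", item) pairs for one transaction.
    # parent: dict mapping each item to its parent in the tree (the root maps to None).
    locked, ever_locked, first = set(), set(), True
    for action, q in requests:
        if action == "lock-X":
            if q in ever_locked:
                return False              # rule 4: an item once unlocked may not be relocked
            if not first and parent.get(q) not in locked:
                return False              # rule 2: the parent of Q must currently be locked
            locked.add(q)
            ever_locked.add(q)
            first = False                 # rule 1: only the first lock may be on any item
        else:                             # unlock
            if q not in locked:
                return False
            locked.discard(q)             # rule 3: items may be unlocked at any time
    return True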

Example: Consider the database graph of the following figure:-


The following four transactions follow the tree protocol on this graph. We show only the
lock and unlock instructions:

T10 : lock-X(B); lock-X(E); lock-X(D); unlock(B); unlock(E); lock-X(G); unlock(D); un-
lock(G).
T11 : lock-X(D); lock-X(H); unlock(D); unlock(H).
T12 : lock-X(B); lock-X(E); unlock(E); unlock(B).
T13 : lock-X(D); lock-X(H); unlock(D); unlock(H).

One possible schedule in which these four transactions participated appears in the fol-
lowing figure:-
Observe that the schedule in this figure is conflict serializable. It can be shown not only
that the tree protocol ensures conflict serializability, but also that this protocol ensures
freedom from deadlock.
The tree protocol, however, does not ensure recoverability and cascadelessness.

Advantage:
1. The tree-locking protocol has an advantage over the two-phase locking protocol in
that, unlike two-phase locking, it is deadlock-free, so no rollbacks are required.
2. The tree-locking protocol has another advantage over the two-phase locking protocol
in that unlocking may occur earlier. Earlier unlocking may lead to shorter waiting
times, and to an increase in concurrency.

1.3 Timestamp-Based Protocols


Timestamps: With each transaction Ti in the system, we associate a unique fixed timestamp,
denoted by TS(Ti ). This timestamp is assigned by the database system before the
transaction Ti starts execution. If a transaction Ti has been assigned timestamp TS(Ti ),
and a new transaction Tj enters the system, then TS(Ti ) < TS(Tj ). There are two simple
methods for implementing this scheme:
1. Use the value of the system clock as the timestamp; that is, a transaction’s times-
tamp is equal to the value of the clock when the transaction enters the system.
2. Use a logical counter that is incremented after a new timestamp has been assigned;
that is, a transaction’s timestamp is equal to the value of the counter when the
transaction enters the system.
The timestamps of the transactions determine the serializability order. Thus, if TS(Ti ) <
TS(Tj ), then the system must ensure that the produced schedule is equivalent to a serial
schedule in which transaction Ti appears before transaction Tj .

To implement this scheme, we associate with each data item Q two timestamp values:
• W-timestamp(Q) denotes the largest timestamp of any transaction that executed
write(Q) successfully.
• R-timestamp(Q) denotes the largest timestamp of any transaction that executed
read(Q) successfully.
These timestamps are updated whenever a new read(Q) or write(Q) instruction is exe-
cuted.

1.3.1 Timestamp-Ordering Protocol
The timestamp-ordering protocol ensures that any conflicting read and write operations
are executed in timestamp order. This protocol operates as follows:
1. Suppose that transaction Ti issues read(Q).
(a) If TS(Ti ) < W-timestamp(Q), then the read operation is rejected, and Ti is
rolled back.
(b) If TS(Ti ) ≥ W-timestamp(Q), then the read operation is executed, and R-
timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti ).
2. Suppose that transaction Ti issues write(Q).
(a) If TS(Ti ) < R-timestamp(Q), then the system rejects the write operation and
rolls Ti back.
(b) If TS(Ti) < W-timestamp(Q), then the system rejects this write operation
and rolls Ti back.
(c) Otherwise, the system executes the write operation and sets W-timestamp(
Q) to TS(Ti ).
If a transaction Ti is rolled back by the concurrency-control scheme as a result of the
issuance of either a read or write operation, the system assigns it a new timestamp and
restarts it.
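A minimal sketch of these rules in Python, with TS, R-timestamp and W-timestamp kept
in plain dictionaries (the representation and function names are assumptions made for
illustration):

# ts[ti]: timestamp of transaction ti; R_ts[q], W_ts[q]: R- and W-timestamps of item q.
def to_read(ti, q, ts, R_ts, W_ts):
    if ts[ti] < W_ts.get(q, 0):
        return "rollback"                        # rule 1(a): Ti would read an already overwritten value
    R_ts[q] = max(R_ts.get(q, 0), ts[ti])        # rule 1(b): execute read, update R-timestamp(Q)
    return "read executed"

def to_write(ti, q, ts, R_ts, W_ts):
    if ts[ti] < R_ts.get(q, 0):
        return "rollback"                        # rule 2(a): a younger transaction already read Q
    if ts[ti] < W_ts.get(q, 0):
        return "rollback"                        # rule 2(b): a younger transaction already wrote Q
    W_ts[q] = ts[ti]                             # rule 2(c): execute write, set W-timestamp(Q)
    return "write executed"

A rolled-back transaction would then be restarted with a new (larger) timestamp, as
described above.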

Example: Consider transactions T14 and T15 . Transaction T14 displays the contents of
accounts A and B:

T14 : read(B);
read(A);
display(A + B).

Transaction T15 transfers $50 from account A to account B, and then displays the con-
tents of both:

T15 : read(B);
B := B - 50;
write(B);
read(A);
A := A + 50;
write(A);
display(A + B).

The following schedule is possible under the timestamp-ordering protocol.


Note:
1. The timestamp-ordering protocol ensures conflict serializability.
2. This protocol also ensures freedom from deadlock.
3. There is a possibility of starvation.

4. This protocol can generate schedules that are not recoverable.

1.3.2 Thomas’ Write Rule


The modification to the timestamp-ordering protocol, called Thomas’ write rule, is this:
Suppose that transaction Ti issues write(Q).
1. If TS(Ti ) < R-timestamp(Q), then the value of Q that Ti is producing was previously
needed, so the system rejects the write operation and rolls Ti back.
2. If TS(Ti ) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q.
Hence, this write operation can be ignored.
3. Otherwise, the system executes the write operation and sets W-timestamp( Q) to
TS(Ti ).
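Only the write rule changes; a sketch of the modified check, using the same hypothetical
dictionaries as in the earlier sketch:

def thomas_write(ti, q, ts, R_ts, W_ts):
    if ts[ti] < R_ts.get(q, 0):
        return "rollback"            # rule 1: a younger transaction already read Q
    if ts[ti] < W_ts.get(q, 0):
        return "ignored"             # rule 2: obsolete write; skip it instead of rolling back
    W_ts[q] = ts[ti]                 # rule 3: execute the write and set W-timestamp(Q)
    return "write executed"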
Example: Consider the following schedule:-
Clearly, this schedule is not conflict serializable and, thus, is not possible under any of
two-phase locking, the tree protocol, or the timestamp-ordering protocol. Under Thomas'
write rule, the write(Q) operation of T16 would be ignored. The result is a schedule that
is view equivalent to the serial schedule < T16 , T17 >.
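The schedule referred to above is given as a figure; in the standard textbook example it
is: T16 issues read(Q), then T17 issues write(Q), and then T16 issues write(Q). Under the
timestamp-ordering protocol the final write(Q) of T16 would force T16 to roll back; under
Thomas' write rule it is simply ignored.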

2 Multiple Granularity
Consider the following granularity hierarchy. This tree consists of four levels of nodes.
The highest level represents the entire database. Below it are nodes of type area; the
database consists of exactly these areas. Each area in turn has nodes of type file as its
children. Each area contains exactly those files that are its child nodes. No file is in more
than one area. Finally, each file has nodes of type record. As before, the file consists of
exactly those records that are its child nodes, and no record can be present in more than
one file.

This protocol uses the following compatibility matrix to lock the data items. There
is an intention mode associated with shared mode, and there is one with exclusive mode.
If a node is locked in intention-shared (IS) mode, explicit locking is being done at a
lower level of the tree, but with only shared-mode locks. Similarly, if a node is locked
in intention-exclusive (IX) mode, then explicit locking is being done at a lower level,
with exclusive-mode or shared-mode locks. Finally, if a node is locked in shared and
intention-exclusive (SIX) mode, the sub-tree rooted by that node is locked explicitly in
shared mode, and that explicit locking is being done at a lower level with exclusive-mode
locks.
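The compatibility matrix is given as a figure; the standard matrix for these five lock
modes is:

            IS      IX      S       SIX     X
   IS       true    true    true    true    false
   IX       true    true    false   false   false
   S        true    false   true    false   false
   SIX      true    false   false   false   false
   X        false   false   false   false   false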

The multiple-granularity locking protocol, which ensures serializability, is this:


Each transaction Ti can lock a node Q by following these rules:
1. It must observe the lock-compatibility function shown in above matrix.
2. It must lock the root of the tree first, and can lock it in any mode.
3. It can lock a node Q in S or IS mode only if it currently has the parent of Q locked
in either IX or IS mode.

4. It can lock a node Q in X, SIX, or IX mode only if it currently has the parent of Q
locked in either IX or SIX mode.

5. It can lock a node only if it has not previously unlocked any node (that is, Ti is two
phase).

6. It can unlock a node Q only if it currently has none of the children of Q locked.

Clearly, the multiple-granularity protocol requires that locks be acquired in top-down
(root-to-leaf) order, whereas locks must be released in bottom-up (leaf-to-root) order.

Example:
Consider the tree shown in the above figure and these transactions:

• Suppose that transaction T18 reads record ra2 in file Fa . Then, T18 needs to lock
the database, area A1 , and Fa in IS mode (and in that order), and finally to lock
ra2 in S mode.

• Suppose that transaction T19 modifies record ra9 in file Fa . Then, T19 needs to lock
the database, area A1 , and file Fa in IX mode, and finally to lock ra9 in X mode.

• Suppose that transaction T20 reads all the records in file Fa . Then, T20 needs to
lock the database and area A1 (in that order) in IS mode, and finally to lock Fa in
S mode.

• Suppose that transaction T21 reads the entire database. It can do so after locking
the database in S mode.

Clearly, transactions T18 , T20 , and T21 can access the database concurrently. Transaction
T19 can execute concurrently with T18 , but not with either T20 or T21 .

This protocol enhances concurrency and reduces lock overhead. It is particularly use-
ful in applications that include a mix of

• Short transactions that access only a few data items

• Long transactions that produce reports from an entire file or set of files

Note: Deadlock is possible in this protocol.

3 Multiversion Schemes
In multiversion concurrency control schemes, each write(Q) operation creates a new ver-
sion of Q. When a transaction issues a read(Q) operation, the concurrency-control manager
selects one of the versions of Q to be read. The concurrency-control scheme must ensure
that the version to be read is selected in a manner that ensures serializability.

3.1 Multiversion Timestamp Ordering
With each data item Q, a sequence of versions < Q1 , Q2 , ..., Qm > is associated. Each
version Qk contains three data fields:

• Content is the value of version Qk .

• W-timestamp(Qk ) is the timestamp of the transaction that created version Qk .

• R-timestamp(Qk ) is the largest timestamp of any transaction that successfully
read version Qk .

A transaction Ti creates a new version Qk of data item Q by issuing a write(Q) operation.
The content field of the version holds the value written by Ti . The system initializes
the W-timestamp and R-timestamp to TS(Ti ). It updates the R-timestamp value of Qk
whenever a transaction Tj reads the content of Qk , and R-timestamp(Qk ) < TS(Tj ).

The multiversion timestamp-ordering scheme operates as follows. Suppose that
transaction Ti issues a read(Q) or write(Q) operation. Let Qk denote the version of Q
whose write timestamp is the largest write timestamp less than or equal to TS(Ti ).

1. If transaction Ti issues a read(Q), then the value returned is the content of version
Qk .

2. If transaction Ti issues write(Q), and if TS(Ti ) < R-timestamp(Qk ), then the system
rolls back transaction Ti . On the other hand, if TS(Ti ) = W-timestamp(Qk ), the
system overwrites the contents of Qk ; otherwise it creates a new version of Q.
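A minimal sketch of these two rules in Python, keeping the versions of an item as a list of
(content, W-timestamp, R-timestamp) tuples (this representation is an assumption made
for illustration):

def select_version(versions, ts_ti):
    # Index of the version whose W-timestamp is the largest one <= TS(Ti).
    candidates = [i for i, (_, w, _) in enumerate(versions) if w <= ts_ti]
    return max(candidates, key=lambda i: versions[i][1])

def mv_read(versions, ts_ti):
    k = select_version(versions, ts_ti)
    content, w, r = versions[k]
    versions[k] = (content, w, max(r, ts_ti))    # update R-timestamp(Qk) if needed
    return content                               # rule 1: a read never waits and never fails

def mv_write(versions, ts_ti, value):
    k = select_version(versions, ts_ti)
    content, w, r = versions[k]
    if ts_ti < r:
        return "rollback"                        # rule 2: a younger transaction already read Qk
    if ts_ti == w:
        versions[k] = (value, w, r)              # overwrite the contents of Qk
    else:
        versions.append((value, ts_ti, ts_ti))   # create a new version of Q
    return "write executed"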

Versions that are no longer needed are removed according to the following rule. Suppose
that there are two versions, Qk and Qj , of a data item, and that both versions have a
W-timestamp less than the timestamp of the oldest transaction in the system. Then, the
older of the two versions Qk and Qj will not be used again, and can be deleted.

Note:

1. The multiversion timestamp-ordering scheme ensures serializability.

2. The multiversion timestamp-ordering scheme does not ensure recoverability and
cascadelessness.

4 Deadlock Handling
A system is in a deadlock state if there exists a set of transactions such that every trans-
action in the set is waiting for another transaction in the set. More precisely, there exists
a set of waiting transactions {T0 , T1 , ..., Tn } such that T0 is waiting for a data item that
T1 holds, and T1 is waiting for a data item that T2 holds, and . . ., and Tn−1 is waiting
for a data item that Tn holds, and Tn is waiting for a data item that T0 holds. None of
the transactions can make progress in such a situation.

There are two principal methods for dealing with the deadlock problem. We can use
a deadlock prevention protocol to ensure that the system will never enter a deadlock
state. Alternatively, we can allow the system to enter a deadlock state, and then try to
recover by using a deadlock detection and deadlock recovery scheme.

Note: Prevention is commonly used if the probability that the system would enter a
deadlock state is relatively high; otherwise, detection and recovery are more efficient.

4.1 Deadlock Prevention


Two different deadlock prevention schemes using timestamps have been proposed:

1. wait–die: This scheme is a non-preemptive technique. When transaction Ti
requests a data item currently held by Tj , Ti is allowed to wait only if it has a
timestamp smaller than that of Tj (that is, Ti is older than Tj ). Otherwise, Ti is
rolled back (dies).
For example, suppose that transactions T1 , T2 , and T3 have timestamps 5, 10, and
15, respectively. If T1 requests a data item held by T2 , then T1 will wait. If T3
requests a data item held by T2 , then T3 will be rolled back.

2. wound–wait: This scheme is a preemptive technique. It is a counterpart to the
wait–die scheme. When transaction Ti requests a data item currently held by Tj ,
Ti is allowed to wait only if it has a timestamp larger than that of Tj (that is, Ti is
younger than Tj ). Otherwise, Tj is rolled back (Tj is wounded by Ti ).
Returning to our example, with transactions T1 , T2 , and T3 , if T1 requests a data
item held by T2 , then the data item will be preempted from T2 , and T2 will be rolled
back. If T3 requests a data item held by T2 , then T3 will wait.
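A small sketch of the two decisions, given the timestamps of the requesting transaction
Ti and the lock holder Tj (function names invented here for illustration):

def wait_die(ts_ti, ts_tj):
    # Non-preemptive: an older requester waits, a younger requester dies.
    return "Ti waits" if ts_ti < ts_tj else "Ti is rolled back (dies)"

def wound_wait(ts_ti, ts_tj):
    # Preemptive: an older requester wounds (rolls back) the holder, a younger requester waits.
    return "Tj is rolled back (wounded)" if ts_ti < ts_tj else "Ti waits"

# With TS(T1) = 5, TS(T2) = 10, TS(T3) = 15, as in the examples above:
# wait_die(5, 10)   -> "Ti waits"                      wait_die(15, 10)   -> "Ti is rolled back (dies)"
# wound_wait(5, 10) -> "Tj is rolled back (wounded)"   wound_wait(15, 10) -> "Ti waits"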

4.2 Deadlock Detection and Recovery


If a system does not employ some protocol that ensures deadlock freedom, then a detection
and recovery scheme must be used. An algorithm that examines the state of the system
is invoked periodically to determine whether a deadlock has occurred. If one has, then
the system must attempt to recover from the deadlock.

4.2.1 Deadlock Detection


To identify whether a deadlock is present in the system, we use a directed graph called
the wait-for graph.
In this graph, the vertices correspond to transactions. When transaction Ti requests
a data item currently being held by transaction Tj , the edge Ti → Tj is inserted into
the wait-for graph. This edge is removed only when transaction Tj is no longer holding a
data item needed by transaction Ti .

A deadlock exists in the system if and only if the wait-for graph contains a cycle. Each
transaction involved in the cycle is said to be deadlocked. To detect deadlocks, the sys-
tem needs to maintain the wait-for graph, and periodically to invoke an algorithm that
searches for a cycle in the graph.
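A minimal cycle-detection sketch over a wait-for graph stored as an adjacency dictionary,
using a plain depth-first search (an illustrative algorithm, not the scheme of any particular
system):

def has_deadlock(wait_for):
    # wait_for maps each transaction to the set of transactions it is waiting for.
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {}

    def dfs(t):
        colour[t] = GREY
        for u in wait_for.get(t, ()):
            if colour.get(u, WHITE) == GREY:
                return True                       # back edge: a cycle, hence a deadlock
            if colour.get(u, WHITE) == WHITE and dfs(u):
                return True
        colour[t] = BLACK
        return False

    return any(colour.get(t, WHITE) == WHITE and dfs(t) for t in wait_for)

# For the example below: T25 waits for T26 and T27, T27 waits for T26, T26 waits for T28.
# has_deadlock({"T25": {"T26", "T27"}, "T27": {"T26"}, "T26": {"T28"}})  -> False
# Adding the edge T28 -> T27 creates the cycle T26 -> T28 -> T27 -> T26, and the result becomes True.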

Example: Consider the wait-for graph show in the following figure,


which depicts the following situation:

• Transaction T25 is waiting for transactions T26 and T27 .

• Transaction T27 is waiting for transaction T26 .

• Transaction T26 is waiting for transaction T28 .

Since the graph has no cycle, the system is not in a deadlock state.
Suppose now that transaction T28 is requesting an item held by T27 . The edge T28 → T27 is
added to the wait-for graph, resulting in the new system state in following figure.
This time, the graph contains the cycle
T26 → T28 → T27 → T26 .
implying that transactions T26 , T27 , and T28 are all deadlocked.

4.2.2 Recovery from Deadlock


When a detection algorithm determines that a deadlock exists, the system must recover
from the deadlock. The most common solution is to roll back one or more transactions
to break the deadlock. Three actions need to be taken:
1. Selection of a victim: Given a set of deadlocked transactions, we must determine
which transaction (or transactions) to roll back to break the deadlock. We should
roll back those transactions that will incur the minimum cost. Unfortunately, the
term minimum cost is not a precise one. Many factors may determine the cost of a
rollback, including

(a) How long the transaction has computed, and how much longer the transaction
will compute before it completes its designated task.
(b) How many data items the transaction has used.
(c) How many more data items the transaction needs for it to complete.
(d) How many transactions will be involved in the rollback.

2. Rollback: Once we have decided that a particular transaction must be rolled
back, we must determine how far this transaction should be rolled back.
The simplest solution is a total rollback: abort the transaction and then restart it.

3. Starvation: In a system where the selection of victims is based primarily on cost
factors, it may happen that the same transaction is always picked as a victim. As a
result, this transaction never completes its designated task; thus, there is starvation.
We must ensure that a transaction can be picked as a victim only a (small) finite
number of times. The most common solution is to include the number of rollbacks
in the cost factor.

4.3 The Phantom Phenomenon


Consider transaction T29 that executes the following SQL query on the bank database:

select sum(balance)
from account
where branch-name = ’Perryridge’

Transaction T29 requires access to all tuples of the account relation pertaining to the
Perryridge branch.
Let T30 be a transaction that executes the following SQL insertion:

insert into account
values (A-201, 'Perryridge', 900)

Let S be a schedule involving T29 and T30 . We expect there to be potential for a conflict
for the following reason:
• If T29 uses the tuple newly inserted by T30 in computing sum(balance), then T29
read a value written by T30 . Thus, in a serial schedule equivalent to S, T30 must
come before T29 .

• If T29 does not use the tuple newly inserted by T30 in computing sum(balance), then
in a serial schedule equivalent to S, T29 must come before T30 .
The second of these two cases is curious. T29 and T30 do not access any tuple in common,
yet they conflict with each other! In effect, T29 and T30 conflict on a phantom tuple. If
concurrency control is performed at the tuple granularity, this conflict would go unde-
tected. This problem is called the phantom phenomenon.
To prevent the phantom phenomenon, we allow T29 to prevent other transactions from
creating new tuples in the account relation with branch-name = “Perryridge.”

5 AKTU Examination Questions


1. Define Concurrency Control.

2. Explain the phantom phenomena. Discuss a Time Stamp Protocol that avoids the
phantom phenomena.

3. Discuss about deadlock prevention schemes.

4. Explain Concurrency Control. Why is it needed in a database system?

5. What is deadlock? What are the necessary conditions for it? How can it be detected
and recovered from?

6. Explain two phase locking protocol with suitable example.

7. Write the salient features of graph based locking protocol with suitable example.

8. What do you mean by multiple granularity? How is concurrency maintained in
this case? Write the concurrent transactions for the following graph.

• T1 wants to access Item C in read mode


• T2 wants to access item D in Exclusive mode
• T3 wants to read all the children of item B
• T4 wants to access all items in read mode

9. Define Exclusive Lock.

10. What is Two phase Locking (2PL)? Describe with the help of example.

11. What are multi version schemes of concurrency control? Describe with the help of
an example. Discuss the various Time stamping protocols for concurrency control
also.

12. Define timestamp.

13. Discuss about the deadlock prevention schemes.

14. Explain the following protocols for concurrency control.


i) Lock based protocols
ii) Time Stamp based protocols

15. What are the pitfalls of lock-based protocol?

16. Describe major problems associated with concurrent processing with examples.
What is the role of locks in avoiding these problems?

17. Explain the phantom phenomenon. Devise a time stamp based protocol that avoids
the phantom phenomenon.

18. What do you mean by multiple granularities? How is it implemented in a transaction
system?
