
UNIT-5

Concurrency Control Techniques


Concurrency Control
• A database must provide a mechanism that will ensure that all possible
schedules are
• either conflict or view serializable, and
• are recoverable and preferably cascadeless
• A policy in which only one transaction can execute at a time generates
serial schedules, but provides a poor degree of concurrency.
• Testing a schedule for serializability after it has executed is a little too
late.
• Goal – to develop concurrency control protocols that will assure
serializability.
Lock-Based Protocols
• A lock is a mechanism to control concurrent access to a data item
• Data items can be locked in two modes :
1. exclusive (X) mode. Data item can be both read and
written. X-lock is requested using lock-X instruction.
2. shared (S) mode. Data item can only be read. S-lock is
requested using lock-S instruction.
• Lock requests are made to concurrency-control manager.
Transaction can proceed only after request is granted.
Lock-Based Protocols (Cont.)
• Lock-compatibility matrix

• A transaction may be granted a lock on an item if the requested lock is compatible
with locks already held on the item by other transactions.
• Any number of transactions can hold shared locks on an item,
• but if any transaction holds an exclusive lock on the item, no other transaction
may hold any lock on the item.
• If a lock cannot be granted, the requesting transaction is made to wait till all
incompatible locks held by other transactions have been released. The lock is then
granted.
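A minimal sketch (not part of the slides) of how a concurrency-control manager could apply this compatibility rule; the table and function names are illustrative only.

    # Shared/exclusive lock-compatibility table: only S with S is compatible.
    SX_COMPATIBLE = {
        ("S", "S"): True,  ("S", "X"): False,
        ("X", "S"): False, ("X", "X"): False,
    }

    def can_grant(requested_mode, held_modes):
        # Grant only if the request is compatible with every lock already held
        # on the item by other transactions.
        return all(SX_COMPATIBLE[(held, requested_mode)] for held in held_modes)

    print(can_grant("S", ["S", "S"]))   # True: any number of shared locks may coexist
    print(can_grant("X", ["S"]))        # False: the requester must wait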
Lock-Based Protocols (Cont.)
• Example of a transaction performing locking:
T2: lock-S(A);
read (A);
unlock(A);
lock-S(B);
read (B);
unlock(B);
display(A+B)
• Locking as above is not sufficient to guarantee serializability — if A and B get
updated in-between the read of A and B, the displayed sum would be wrong.
• A locking protocol is a set of rules followed by all transactions while requesting
and releasing locks. Locking protocols restrict the set of possible schedules.
Pitfalls of Lock-Based Protocols
• Consider the partial schedule

• Neither T3 nor T4 can make progress — executing lock-S(B) causes T4 to wait for T3
to release its lock on B, while executing lock-X(A) causes T3 to wait for T4 to release
its lock on A.
• Such a situation is called a deadlock.
• To handle a deadlock one of T3 or T4 must be rolled back and its locks released.
Pitfalls of Lock-Based Protocols (Cont.)
• The potential for deadlock exists in most locking protocols.
• Starvation is also possible if concurrency control manager is badly
designed. For example:
• A transaction may be waiting for an X-lock on an item, while a
sequence of other transactions request and are granted an S-lock on
the same item.
• The same transaction is repeatedly rolled back due to deadlocks.

• Concurrency control manager can be designed to prevent starvation.


The Two-Phase Locking Protocol
This is a protocol which ensures conflict-serializable schedules.
• Phase 1: Growing Phase
• transaction may obtain locks
• transaction may not release locks
• Phase 2: Shrinking Phase
• transaction may release locks
• transaction may not obtain locks
• The protocol assures serializability. It can be proved that the
transactions can be serialized in the order of their lock points (i.e. the
point where a transaction acquired its final lock).
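As a hypothetical illustration (class and method names are assumptions, not from the slides), the two-phase rule can be enforced by remembering, per transaction, whether it has entered the shrinking phase:

    # Hypothetical per-transaction bookkeeping for the two-phase rule.
    class TwoPhaseTxn:
        def __init__(self, tid):
            self.tid = tid
            self.locks = set()
            self.shrinking = False             # set at the first unlock (the lock point has passed)

        def lock(self, item):
            if self.shrinking:
                raise RuntimeError(f"{self.tid}: cannot acquire locks in the shrinking phase")
            self.locks.add(item)               # growing phase: locks may be obtained

        def unlock(self, item):
            self.shrinking = True              # the first release ends the growing phase
            self.locks.discard(item)

    t = TwoPhaseTxn("T1")
    t.lock("A"); t.lock("B")                   # growing phase
    t.unlock("A")                              # shrinking phase begins
    # t.lock("C") would now raise: acquiring a lock after a release violates 2PL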
The Two-Phase Locking Protocol (Cont.)

• Two-phase locking does not ensure freedom from deadlocks


• Cascading roll-back is possible under two-phase locking. To avoid this,
follow a modified protocol called strict two-phase locking. Here a
transaction must hold all its exclusive locks till it commits/aborts.
• Rigorous two-phase locking is even stricter: here all locks are held till
commit/abort. In this protocol transactions can be serialized in the
order in which they commit.
The schedule (operations shown in execution order):
T1: lock-X(A), read(A), lock-S(B), read(B), write(A), unlock(A)
T2: lock-X(A), read(A), write(A), unlock(A)
T3: lock-S(A), read(A)
T2 reads the value of A written by the uncommitted T1, and T3 reads the value
written by T2; if T1 later aborts, T2 and T3 must be rolled back as well (a
cascading rollback).
Lock Conversions
• Two-phase locking with lock conversions:
– First Phase:
• can acquire a lock-S on item
• can acquire a lock-X on item
• can convert a lock-S to a lock-X (upgrade)
– Second Phase:
• can release a lock-S
• can release a lock-X
• can convert a lock-X to a lock-S (downgrade)
• This protocol assures serializability. But still relies on the programmer
to insert the various locking instructions.
Automatic Acquisition of Locks
• A transaction Ti issues the standard read/write instruction, without explicit
locking calls.
• The operation read(D) is processed as:
if Ti has a lock on D
then
read(D)
else begin
if necessary wait until no other
transaction has a lock-X on D
grant Ti a lock-S on D;
read(D)
end
Automatic Acquisition of Locks
(Cont.)
• write(D) is processed as:
if Ti has a lock-X on D
then
write(D)
else begin
if necessary wait until no other trans. has any lock on D,
if Ti has a lock-S on D
then
upgrade lock on D to lock-X
else
grant Ti a lock-X on D
write(D)
end;
• All locks are released after commit or abort
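The same logic can be sketched in Python. This is a hypothetical, single-threaded simulation in which waiting is replaced by an assertion; held is an assumed per-item table of granted lock modes.

    # Hypothetical single-threaded simulation of the read(D)/write(D) rules above;
    # `held` maps each data item to the lock modes held on it per transaction.
    held = {}                                  # item -> {transaction id -> "S" or "X"}

    def auto_read(ti, d, db):
        modes = held.setdefault(d, {})
        if ti not in modes:
            # a real system would make Ti wait while another transaction holds lock-X on D
            assert "X" not in modes.values(), f"{ti} must wait for lock-X on {d}"
            modes[ti] = "S"                    # grant Ti a lock-S on D
        return db[d]

    def auto_write(ti, d, value, db):
        modes = held.setdefault(d, {})
        # a real system would make Ti wait while any other transaction holds a lock on D
        assert all(t == ti for t in modes), f"{ti} must wait for locks on {d}"
        modes[ti] = "X"                        # grant a lock-X, or upgrade an existing lock-S
        db[d] = value

    db = {"D": 10}
    print(auto_read("T1", "D", db))   # T1 acquires lock-S on D
    auto_write("T1", "D", 20, db)     # the lock-S is upgraded to lock-X
    # all of T1's locks are released only after it commits or aborts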
Implementation of Locking
• A lock manager can be implemented as a separate process to which
transactions send lock and unlock requests
• The lock manager replies to a lock request by sending a lock grant
message (or a message asking the transaction to roll back, in case of a
deadlock)
• The requesting transaction waits until its request is answered
• The lock manager maintains a data-structure called a lock table to record
granted locks and pending requests
• The lock table is usually implemented as an in-memory hash table
indexed on the name of the data item being locked
Lock Table
• Black rectangles indicate granted locks, white ones
indicate waiting requests
• Lock table also records the type of lock granted or
requested
• New request is added to the end of the queue of
requests for the data item, and granted if it is
compatible with all earlier locks
• Unlock requests result in the request being deleted,
and later requests are checked to see if they can now
be granted
• If transaction aborts, all waiting or granted requests of
the transaction are deleted
• lock manager may keep a list of locks held by
each transaction, to implement this efficiently
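A hypothetical sketch of such a lock table: a hash map from item name to a FIFO list of requests, where a new request is granted only if it is compatible with all earlier requests, and a release re-examines the remaining queue. All names are illustrative.

    from collections import defaultdict

    # Hypothetical lock table: item name -> FIFO list of requests,
    # each request being [transaction_id, mode, granted_flag].
    lock_table = defaultdict(list)
    COMPATIBLE_PAIRS = {("S", "S")}       # only S/S is compatible; any pair with X is not

    def request(item, tid, mode):
        queue = lock_table[item]
        granted = all((r[1], mode) in COMPATIBLE_PAIRS for r in queue)  # compatible with all earlier locks?
        queue.append([tid, mode, granted])
        return granted                    # False means the transaction must wait

    def release(item, tid):
        queue = [r for r in lock_table[item] if r[0] != tid]            # delete this transaction's entries
        for i, r in enumerate(queue):                                   # can waiting requests now be granted?
            if not r[2] and all((e[1], r[1]) in COMPATIBLE_PAIRS for e in queue[:i]):
                r[2] = True
        lock_table[item] = queue

    request("A", "T1", "X")
    print(request("A", "T2", "S"))   # False: T2 waits behind T1's X lock
    release("A", "T1")
    print(lock_table["A"])           # [['T2', 'S', True]]: T2's request is now granted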
Graph-Based Protocols
• Graph-based protocols are an alternative to two-phase locking
• Impose a partial ordering → on the set D = {d1, d2, ..., dh} of all data
items.
• If di → dj, then any transaction accessing both di and dj must access di before
accessing dj.
• Implies that the set D may now be viewed as a directed acyclic graph, called a
database graph.
• The tree-protocol is a simple kind of graph protocol.
Tree Protocol

1. Only exclusive locks are allowed. Each transaction Ti can lock a data item at most once.
2. The first lock by Ti may be on any data item. Subsequently, a data item Q can be locked by Ti only if
the parent of Q is currently locked by Ti.
3. Data items may be unlocked at any time.
4. A data item that has been locked and unlocked by Ti cannot subsequently be relocked by Ti.
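A hypothetical checker for these rules; the parent map below is an assumed example database graph, not taken from the slides.

    # Hypothetical tree-protocol checker; `parent` is an assumed example database graph.
    parent = {"A": None, "B": "A", "C": "A", "D": "B", "E": "B"}

    class TreeProtocolTxn:
        def __init__(self):
            self.held = set()       # items currently locked (exclusive locks only, rule 1)
            self.released = set()   # items locked and later unlocked

        def lock(self, item):
            if item in self.released:
                raise RuntimeError("rule 4: an unlocked item cannot be relocked")
            if self.held and parent[item] not in self.held:
                raise RuntimeError("rule 2: the parent of the item must be locked first")
            self.held.add(item)     # the first lock may be on any data item

        def unlock(self, item):     # rule 3: data items may be unlocked at any time
            self.held.remove(item)
            self.released.add(item)

    t = TreeProtocolTxn()
    t.lock("B"); t.lock("D"); t.unlock("B")
    # t.lock("E") would now raise: B (the parent of E) is no longer held by the transaction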
Graph-Based Protocols (Cont.)
• The tree protocol ensures conflict serializability as well as freedom from deadlock.
• Unlocking may occur earlier in the tree-locking protocol than in the two-phase
locking protocol.
• shorter waiting times, and increase in concurrency
• protocol is deadlock-free, no rollbacks are required
• Drawbacks
• Protocol does not guarantee recoverability or cascade freedom
• Need to introduce commit dependencies to ensure recoverability
• Transactions may have to lock data items that they do not access.
• increased locking overhead, and additional waiting time
• potential decrease in concurrency
• Schedules not possible under two-phase locking are possible under tree protocol,
and vice versa.
Timestamp-Based Protocols
• Each transaction is issued a timestamp when it enters the system. If an old
transaction Ti has time-stamp TS(Ti), a new transaction Tj is assigned time-stamp
TS(Tj) such that TS(Ti) < TS(Tj).
• The protocol manages concurrent execution such that the time-stamps determine
the serializability order.
• In order to assure such behavior, the protocol maintains for each data Q two
timestamp values:
• W-timestamp(Q) is the largest time-stamp of any transaction that executed write(Q)
successfully.
• R-timestamp(Q) is the largest time-stamp of any transaction that executed read(Q)
successfully.
Timestamp-Based Protocols (Cont.)

• The timestamp ordering protocol ensures that any conflicting read
and write operations are executed in timestamp order.
• Suppose a transaction Ti issues a read(Q)
1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q
that was already overwritten.
• Hence, the read operation is rejected, and Ti is rolled back.
2. If TS(Ti) ≥ W-timestamp(Q), then the read operation is
executed, and R-timestamp(Q) is set to max(R-timestamp(Q),
TS(Ti)).
Timestamp-Based Protocols
(Cont.)
• Suppose that transaction Ti issues write(Q).
1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is
producing was needed previously, and the system assumed
that that value would never be produced.
• Hence, the write operation is rejected, and Ti is rolled
back.
2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an
obsolete value of Q.
• Hence, this write operation is rejected, and Ti is rolled
back.
3. Otherwise, the write operation is executed, and W-
timestamp(Q) is set to TS(Ti).
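The read and write rules translate almost directly into code. The sketch below is a hypothetical simulation that keeps only the value and the two timestamps per item; function names are illustrative.

    # Hypothetical timestamp-ordering simulation; each item keeps its value and
    # the W-timestamp and R-timestamp described above.
    class Rollback(Exception):
        pass

    items = {"Q": {"value": 0, "W": 0, "R": 0}}

    def ts_read(ts_ti, name):
        q = items[name]
        if ts_ti < q["W"]:                      # Ti would read a value already overwritten
            raise Rollback("read rejected: roll back Ti")
        q["R"] = max(q["R"], ts_ti)
        return q["value"]

    def ts_write(ts_ti, name, value):
        q = items[name]
        if ts_ti < q["R"] or ts_ti < q["W"]:    # a later transaction already read or wrote Q
            raise Rollback("write rejected: roll back Ti")
        q["value"], q["W"] = value, ts_ti

    ts_write(5, "Q", 42)        # W-timestamp(Q) becomes 5
    print(ts_read(7, "Q"))      # allowed; R-timestamp(Q) becomes 7
    # ts_write(6, "Q", 99) would raise Rollback: TS(Ti) = 6 < R-timestamp(Q) = 7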
Example Use of the Protocol
A partial schedule for several data items for transactions with timestamps 1, 2, 3,
4, 5
Correctness of Timestamp-
Ordering Protocol
• The timestamp-ordering protocol guarantees serializability since all the arcs
in the precedence graph are of the form Ti → Tj with TS(Ti) < TS(Tj), i.e. from a
transaction with a smaller timestamp to a transaction with a larger timestamp.
Thus, there will be no cycles in the precedence graph.


• Timestamp protocol ensures freedom from deadlock as no transaction ever
waits.
• But the schedule may not be cascade-free, and may not even be
recoverable.
Thomas’ Write Rule
• Modified version of the timestamp-ordering protocol in which obsolete
write operations may be ignored under certain circumstances.
• When Ti attempts to write data item Q, if TS(Ti) < W-timestamp(Q), then Ti is
attempting to write an obsolete value of Q.
• Rather than rolling back Ti as the timestamp ordering protocol would have done, this
write operation can be ignored.
• Otherwise this protocol is the same as the timestamp ordering protocol.
• Thomas' Write Rule allows greater potential concurrency.
• Allows some view-serializable schedules that are not conflict-serializable.
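Relative to the timestamp-ordering sketch above, Thomas' Write Rule changes only the obsolete-write case. A hypothetical variant of that write function, reusing the items table and Rollback exception defined there:

    # Hypothetical Thomas'-rule variant of ts_write from the sketch above:
    # an obsolete write is ignored instead of forcing a rollback.
    def ts_write_thomas(ts_ti, name, value):
        q = items[name]
        if ts_ti < q["R"]:                      # a later transaction already read Q
            raise Rollback("write rejected: roll back Ti")
        if ts_ti < q["W"]:                      # obsolete write: ignore it
            return
        q["value"], q["W"] = value, ts_ti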
Validation-Based Protocol
• Execution of transaction Ti is done in three phases.
1. Read and execution phase: Transaction Ti writes only to temporary local variables
2. Validation phase: Transaction Ti performs a ``validation test'' to determine if local
variables can be written without violating serializability.
3. Write phase: If Ti is validated, the updates are applied to the database; otherwise,
Ti is rolled back.
• The three phases of concurrently executing transactions can be interleaved, but each
transaction must go through the three phases in that order.
• Assume for simplicity that the validation and write phase occur together,
atomically and serially
• I.e., only one transaction executes validation / write at a time.
• Also called optimistic concurrency control since the transaction executes fully in the
hope that all will go well during validation
Validation-Based Protocol
(Cont.)
• Each transaction Ti has 3 timestamps
• Start(Ti) : the time when Ti started its execution
• Validation(Ti): the time when Ti entered its validation phase
• Finish(Ti) : the time when Ti finished its write phase
• Serializability order is determined by timestamp given at validation time,
to increase concurrency.
• Thus TS(Ti) is given the value of Validation(Ti).
• This protocol is useful and gives greater degree of concurrency if
probability of conflicts is low.
• because the serializability order is not pre-decided, and
• relatively few transactions will have to be rolled back.
Validation Test for Transaction Tj
• If for all Ti with TS(Ti) < TS(Tj) either one of the following conditions holds:
• finish(Ti) < start(Tj)
• start(Tj) < finish(Ti) < validation(Tj) and the set of data items written by
Ti does not intersect with the set of data items read by Tj.
then validation succeeds and Tj can be committed. Otherwise, validation
fails and Tj is aborted.
• Justification: Either the first condition is satisfied, and there is no
overlapped execution, or the second condition is satisfied and
• the writes of Tj do not affect reads of Ti since they occur after Ti has
finished its reads.
• the writes of Ti do not affect reads of Tj since Tj does not read any item
written by Ti.
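A direct, hypothetical coding of this validation test; the timestamps and read/write sets are assumed to be recorded elsewhere for each transaction.

    # Hypothetical validation test for Tj against every Ti with TS(Ti) < TS(Tj).
    # Each transaction is a dict with start/validation/finish times and read/write sets.
    def validate(tj, older_transactions):
        for ti in older_transactions:
            if ti["finish"] < tj["start"]:
                continue                                    # condition 1: no overlap at all
            if (ti["finish"] < tj["validation"]
                    and not (ti["write_set"] & tj["read_set"])):
                continue                                    # condition 2: Tj read nothing Ti wrote
            return False                                    # otherwise validation fails, abort Tj
        return True

    t1 = {"start": 1, "validation": 3, "finish": 5,
          "read_set": {"A"}, "write_set": {"B"}}
    t2 = {"start": 2, "validation": 6, "finish": 7,
          "read_set": {"A"}, "write_set": {"A"}}
    print(validate(t2, [t1]))   # True: T1 finished before T2 validated, and T1 wrote only B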
Schedule Produced by
Validation
• Example of schedule produced using validation
Multiple Granularity
• Allow data items to be of various sizes and define a hierarchy of data
granularities, where the small granularities are nested within larger ones
• Can be represented graphically as a tree (but don't confuse with tree-
locking protocol)
• When a transaction locks a node in the tree explicitly, it implicitly locks
all the node's descendants in the same mode.
• Granularity of locking (level in tree where locking is done):
• fine granularity (lower in tree): high concurrency, high locking
overhead
• coarse granularity (higher in tree): low locking overhead, low
concurrency
Example of Granularity
Hierarchy

The levels, starting from the coarsest (top) level are


• database
• area
• file
• record
Intention Lock Modes
• In addition to S and X lock modes, there are three additional
lock modes with multiple granularity:
• intention-shared (IS): indicates explicit locking at a lower
level of the tree but only with shared locks.
• intention-exclusive (IX): indicates explicit locking at a
lower level with exclusive or shared locks
• shared and intention-exclusive (SIX): the subtree rooted
by that node is locked explicitly in shared mode and
explicit locking is being done at a lower level with
exclusive-mode locks.
Compatibility Matrix with Intention Lock
Modes

• The compatibility matrix for all lock modes is:
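Since the figure is not included here, the matrix below is reconstructed from the standard presentation of multiple granularity locking (True marks a compatible pair); the helper function is illustrative.

    # Standard compatibility matrix for IS, IX, S, SIX and X.
    MGL_COMPATIBLE = {
        "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
        "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
        "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
        "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
        "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
    }

    def compatible(held, requested):
        return MGL_COMPATIBLE[held][requested]

    print(compatible("IX", "IS"))   # True : both only intend to lock lower down
    print(compatible("S", "IX"))    # False: the IX holder may take X locks below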


Multiple Granularity Locking
Scheme
• Transaction Ti can lock a node Q, using the following rules:
1. The lock compatibility matrix must be observed.
2. The root of the tree must be locked first, and may be locked in any mode.
3. A node Q can be locked by Ti in S or IS mode only if the parent of Q is currently locked
by Ti in either IX or IS mode.
4. A node Q can be locked by Ti in X, SIX, or IX mode only if the parent of Q is currently
locked by Ti in either IX or SIX mode.
5. Ti can lock a node only if it has not previously unlocked any node (that is, Ti is two-
phase).
6. Ti can unlock a node Q only if none of the children of Q are currently locked by Ti.
• Observe that locks are acquired in root-to-leaf order, whereas they are released
in leaf-to-root order.
• Lock granularity escalation: in case there are too many locks at a particular
level, switch to higher granularity S or X lock
1. If transaction T1 reads record Ra9 in file Fa, then transaction T1 needs to lock
the database, area A1 and file Fa in IS mode. Finally, it needs to lock Ra9 in S
mode.
2. If transaction T2 modifies record Ra9 in file Fa, then it can do so after locking
the database, area A1 and file Fa in IX mode. Finally, it needs to lock Ra9 in X
mode.
3. If transaction T3 reads all the records in file Fa, then transaction T3 needs to
lock the database and area A1 in IS mode. Finally, it needs to lock Fa in S mode.
4. If transaction T4 reads the entire database, then T4 needs to lock the database
in S mode.

The transactions T1, T3 and T4 can access the database concurrently. Transaction
T2 can execute concurrently with T1 but not with either T3 or T4.
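Rules 3 and 4 of the locking scheme above reduce to a parent-mode check before a node can be locked. A hypothetical helper, reusing the MGL_COMPATIBLE table from the previous sketch and an assumed parent map over the granularity tree used in the example:

    # Hypothetical parent-mode check for rules 3 and 4, reusing MGL_COMPATIBLE above.
    parent = {"DB": None, "A1": "DB", "Fa": "A1", "Ra9": "Fa"}   # assumed granularity tree
    locks = {}                                                   # node -> {txn id -> mode}

    ALLOWED_PARENT = {"IS": ("IS", "IX"), "S": ("IS", "IX"),                          # rule 3
                      "IX": ("IX", "SIX"), "SIX": ("IX", "SIX"), "X": ("IX", "SIX")}  # rule 4

    def mgl_lock(tid, node, mode):
        p = parent[node]
        if p is not None and locks.get(p, {}).get(tid) not in ALLOWED_PARENT[mode]:
            raise RuntimeError(f"{tid}: parent {p} must be locked in {ALLOWED_PARENT[mode]} first")
        others = [m for t, m in locks.get(node, {}).items() if t != tid]
        if not all(MGL_COMPATIBLE[h][mode] for h in others):     # rule 1: compatibility matrix
            raise RuntimeError(f"{tid}: incompatible with locks held by other transactions")
        locks.setdefault(node, {})[tid] = mode

    # T1 reads record Ra9: IS on the database, area A1 and file Fa, then S on Ra9
    # (locks are acquired in root-to-leaf order).
    for node in ["DB", "A1", "Fa"]:
        mgl_lock("T1", node, "IS")
    mgl_lock("T1", "Ra9", "S")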
Recoverability and Cascade
Freedom
• Problem with timestamp-ordering protocol:
• Suppose Ti aborts, but Tj has read a data item written by Ti
• Then Tj must abort; if Tj had been allowed to commit earlier, the schedule is not recoverable.
• Further, any transaction that has read a data item written by Tj must abort
• This can lead to cascading rollback --- that is, a chain of rollbacks
• Solution 1:
• A transaction is structured such that its writes are all performed at the end of its processing
• All writes of a transaction form an atomic action; no transaction may execute while a
transaction is being written
• A transaction that aborts is restarted with a new timestamp
• Solution 2: Limited form of locking: wait for data to be committed before reading it
• Solution 3: Use commit dependencies to ensure recoverability
Multiversion Schemes
• Multiversion schemes keep old versions of data items to increase
concurrency.
• Multiversion Timestamp Ordering
• Multiversion Two-Phase Locking
• Each successful write results in the creation of a new version of the data
item written.
• Use timestamps to label versions.
• When a read(Q) operation is issued, select an appropriate version of Q
based on the timestamp of the transaction, and return the value of the
selected version.
• reads never have to wait as an appropriate version is returned
immediately.
Multiversion Timestamp
Ordering
• Each data item Q has a sequence of versions <Q1, Q2,...., Qm>. Each version
Qk contains three data fields:
• Content -- the value of version Qk.
• W-timestamp(Qk) -- timestamp of the transaction that created (wrote)
version Qk
• R-timestamp(Qk) -- largest timestamp of a transaction that successfully
read version Qk
• when a transaction Ti creates a new version Qk of Q, Qk’s W-timestamp and
R-timestamp are initialized to TS(Ti).
• R-timestamp of Qk is updated whenever a transaction Tj reads Qk, and
TS(Tj) > R-timestamp(Qk).
Multiversion Timestamp
Ordering (Cont)
• Suppose that transaction Ti issues a read(Q) or write(Q) operation. Let Qk denote
the version of Q whose write timestamp is the largest write timestamp less than
or equal to TS(Ti).
1. If transaction Ti issues a read(Q), then the value returned is the content of
version Qk.
2. If transaction Ti issues a write(Q)
1. if TS(Ti) < R-timestamp(Qk), then transaction Ti is rolled back.
2. if TS(Ti) = W-timestamp(Qk), the contents of Qk are overwritten
3. else a new version of Q is created.
• Observe that
• Reads always succeed
• A write by Ti is rejected if some other transaction Tj that (in the serialization
order defined by the timestamp values) should read Ti's write, has already
read a version created by a transaction older than Ti.
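A hypothetical sketch of version selection and the read/write rules above; each version is kept as a (value, W-timestamp, R-timestamp) record in creation order.

    # Hypothetical multiversion timestamp-ordering sketch.
    versions = {"Q": [{"value": 0, "W": 0, "R": 0}]}    # versions of Q, oldest first

    def pick_version(name, ts_ti):
        # version of Q with the largest W-timestamp <= TS(Ti)
        return max((v for v in versions[name] if v["W"] <= ts_ti), key=lambda v: v["W"])

    def mv_read(ts_ti, name):
        qk = pick_version(name, ts_ti)
        qk["R"] = max(qk["R"], ts_ti)           # reads never wait and never fail
        return qk["value"]

    def mv_write(ts_ti, name, value):
        qk = pick_version(name, ts_ti)
        if ts_ti < qk["R"]:                     # a later transaction already read this version
            raise Exception("roll back Ti")
        if ts_ti == qk["W"]:                    # Ti overwrites its own version
            qk["value"] = value
        else:                                   # otherwise a new version is created
            versions[name].append({"value": value, "W": ts_ti, "R": ts_ti})

    mv_write(5, "Q", 10)        # creates a version with W-timestamp 5
    print(mv_read(3, "Q"))      # 0  -> an older transaction still sees the old version
    print(mv_read(7, "Q"))      # 10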
Multiversion Two-Phase Locking
• Differentiates between read-only transactions and update transactions
• Update transactions acquire read and write locks, and hold all locks up to the
end of the transaction. That is, update transactions follow rigorous two-phase
locking.
• Each successful write results in the creation of a new version of the data
item written.
• each version of a data item has a single timestamp whose value is
obtained from a counter ts-counter that is incremented during commit
processing.
• Read-only transactions are assigned a timestamp by reading the current value
of ts-counter before they start execution; they follow the multiversion
timestamp-ordering protocol for performing reads.
Multiversion Two-Phase Locking
(Cont.)
• When an update transaction wants to read a data item:
• it obtains a shared lock on it, and reads the latest version.
• When it wants to write an item
• it obtains an X lock on it, then creates a new version of the item and sets this
version's timestamp to ∞.
• When update transaction Ti completes, commit processing occurs:
• Ti sets timestamp on the versions it has created to ts-counter + 1
• Ti increments ts-counter by 1
• Read-only transactions that start after Ti increments ts-counter will see the values
updated by Ti.
• Read-only transactions that start before Ti increments the ts-counter will see the
value before the updates by Ti.
• Only serializable schedules are produced.
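The ts-counter handling can be pictured with a small hypothetical sketch: update transactions stamp their new versions with ts-counter + 1 and only then increment the counter, while read-only transactions snapshot the counter when they start. Names are illustrative.

    # Hypothetical sketch of multiversion 2PL commit processing.
    ts_counter = 0
    db_versions = {"Q": [(0, 0)]}          # per item: list of (timestamp, value), oldest first

    def start_read_only():
        return ts_counter                   # read-only txn reads the current ts-counter

    def read_only_read(snapshot_ts, name):
        # latest version whose timestamp is <= the transaction's snapshot timestamp
        return max(v for v in db_versions[name] if v[0] <= snapshot_ts)[1]

    def commit_update(written):
        # `written` maps item name -> new value; X locks are assumed held (rigorous 2PL)
        global ts_counter
        for name, value in written.items():
            db_versions[name].append((ts_counter + 1, value))   # stamp with ts-counter + 1
        ts_counter += 1                                         # then increment ts-counter

    snap = start_read_only()                # snapshot taken before the update commits
    commit_update({"Q": 99})
    print(read_only_read(snap, "Q"))                 # 0  -> sees the value before the update
    print(read_only_read(start_read_only(), "Q"))    # 99 -> transactions starting later see it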
MVCC: Implementation Issues
• Creation of multiple versions increases storage overhead
• Extra tuples
• Extra space in each tuple for storing version information
• Versions can, however, be garbage collected
• E.g. if Q has two versions Q5 and Q9, and the oldest active transaction has
timestamp > 9, then Q5 will never be required again
Check whether the given schedule S is conflict serializable and recoverable or not

T1 T2 T3 T4
R(X)
W(X)
COMMIT
W(X)
COMMIT
W(Y)
R(Z)
COMMIT
R(X)
R(Y)
COMMIT
Check whether the given schedule S is conflict serializable
or not. If yes, then determine all the possible serialized
schedules-

T1 T2 T3 T4
R(X)
R(X)
R(X)
W(Y)
W(X)
R(Y)
W(Y)
Determine all the possible serialized
schedules for the given schedule-
The interleaved schedule (operations shown in execution order):
T1: R(A), A = A - 10
T2: R(A), TEMP = 0.2 * A, W(A), R(B)
T1: W(A), R(B), B = B + 10, W(B)
T2: B = B + TEMP, W(B)
Consider a relation R(A, B, C, D,
E) with the following FD’s –
AB C; BC D; CE
The number of super keys in
relation R:

1. 12
2. 8
3. 6
4. 5
In a relational data model, which one of the
following statements is true?

1. A relation with only two attributes is always in BCNF.
2. If all attributes of a relation are prime
attributes, then the relation is in BCNF.
3. Every relation has at least one non prime
attribute.
4. BCNF decomposition preserves functional
dependencies.
Suppose the following functional
dependencies are hold in a relation R(P, Q,
R, S, T):
P QR, RS T
Which of the following functional
dependencies can be inferred from the
given FD’s

1. PS T
2. RT
3. PR
4. PS Q
Let S be the following schedule of operations in a
relational database system:
R2(Y),R1(X),R3(Z),R1(Y),W1(X),R2(Z),W1(Y),R3(X),
W3(Z)
Consider the statements P and Q below:
P: S is conflict serializable.
Q: If T3 commits before T1, then S is recoverable.
Which one of the following choices is correct?

1. Both P & Q are true


2. Only P is true
3. Only Q is true
4. Both P & Q are false
The set of attributes X will be fully
functionally dependent on the set of
attributes Y if the following
conditions are satisfied:

1. X is functionally dependent on Y
2. X is not functionally dependent
on any subset of Y
3. Both 1 & 2
4. None of the above
Consider the following two statements about database transaction schedules:

I. Strict two-phase locking protocol generates conflict serializable schedules that are
also recoverable.

II. Timestamp-ordering concurrency control protocol with Thomas Write Rule can
generate view serializable schedules that are not conflict serializable.

Which of the above statements is/are TRUE?

1. Both I and II
2. Only I
3. Only II
4. Neither I nor II
Suppose a database schedule S involves transactions T1,...,Tn. Construct the
precedence graph of S with vertices representing the transactions and edges
representing the conflicts. If S is serializable, which one of the following
orderings of the vertices of the precedence graph is guaranteed to yield a
serial schedule?

1. Topological Ordering
2. BFS
3. DFS
4. Ascending order of transaction indices
Which one of the following is NOT a part of
the ACID properties of database transactions?

1.Atomicity

2. Consistency

3. Deadlock freedom

4. Isolation
Consider the following transaction involving two bank accounts x and y.
read (x) ; x: = x - 50; write (x); read (y); y: = y + 50; write (y)

The constraint that the sum of the accounts x and y should remain
constant is that of

1. Atomicity

2. Consistency

3. Isolation

4. Durability
Which of the following scenarios may lead to
an irrecoverable error in a database system?

1. A transaction writes a data item after it is read by an uncommitted transaction.

2. A transaction reads a data item after it is read by an uncommitted transaction.

3. A transaction reads a data item after it is written by a committed transaction.

4. A transaction reads a data item after it is written by an uncommitted transaction.


Suppose a database system crashes again while recovering
from a previous crash. Assume checkpointing is not done
by the database either during the transactions or during
recovery.
Which of the following statements is/are correct?

1. The Database will become inconsistent.

2. All the transactions that are already undone and redone will not be recovered.

3. The system can not be recovered further.

4. The same undo and redo list will be used while recovering again.
Consider the following partial Schedule. Suppose that the
transaction T1 fails immediately after time instance 9. Which one of the
following statements is correct?
Time instance    T1           T2

1                Read(A)
2                Write(A)
3                             Read(C)
4                             Write(C)
5                             Read(B)
6                             Write(B)
7                             Read(A)
8                             COMMIT
9                Read(B)

1. T2 must be aborted and then both T1 and T2 must be restarted again to achieve the atomicity.
2. Only T2 must be aborted and then restarted again to achieve the atomicity.
3. Schedule is not recoverable.
4. Schedule is recoverable.

Consider a simple checkpointing protocol and the following set of operations in the log.
(start, T4); (write, T4,y,2,3); (start, T1); (commit, T4); (write, T1,z,5,7); (checkpoint); (start, T2);
(write, T2,x,1,9); (commit, T2); (start, T3), (write, T3,z,7,2);

If a crash happens now and the system tries to recover using both undo and redo operations, what are
the contents of the undo list and the redo list?

1. UNDO: T3, T2; REDO: T2

2. UNDO: T3, T2; REDO: T2, T4

3. UNDO: NONE ; REDO: T2, T4, T3, T1

4. UNDO: T3, T1, T4; REDO: T2


Consider two transactions T1 and T2 and four schedules S1, S2, S3,
S4 of T1 and T2 as given below:
T1: R1[ x ] W1[ x ] W1[ y ]
T2: R2[ x ] R2[ y ] W2[ y ]
S1: R1[ x ] R2[ x ] R2[ y ] W1[ x ] W1[ y ] W2[ y ]
S2: R1[ x ] R2[ x ] R2[ y ] W1[ x ] W2[ y ] W1[ y ]
S3: R1[ x ] W1[ x ] R2[ x ] W1[ y ] R2[ y ] W2[ y ]
S4: R2[ x ] R2[ y ] R1[ x ] W1[ x ] W1[ y ] W2[ y ]
Which of the above schedules are conflict-serializable?

1. S1 and S4 both 2. S2 and S3 both

3. S2 and S4 both 4. S1 and S3 both


Consider the following three schedules of transactions T1, T2 and T3.
[ Notation: In the following NYO represents the action Y (R for read, W for
write) performed by transaction N on object O. ]

(S1) 2RA 2WA 3RC 2WB 3WA 3WC 1RA 1RB 1WA 1WB
(S2) 3RC 2RA 2WA 2WB 3WA 1RA 1RB 1WA 1WB 3WC
(S3) 2RZ 3RC 3WA 2WA 2WB 3WC 1RA 1RB 1WA 1WB
Which of the following statements is TRUE?

1. Only S1 is conflict Serializable

2. Only S2 is Conflict Serializable

3. Only S3 is Conflict Serializable

4. S1, S2, S3 are Conflict Serializable


Consider the following three schedules of transactions T1, T2 and T3.
[ Notation: In the following NYO represents the action Y (R for read, W for
write) performed by transaction N on object O. ]

(S1) 2RA 2WA 3RC 2WB 3WA 3WC 1RA 1RB 1WA 1WB
(S2) 3RC 2RA 2WA 2WB 3WA 1RA 1RB 1WA 1WB 3WC
(S3) 2RZ 3RC 3WA 2WA 2WB 3WC 1RA 1RB 1WA 1WB
Which of the following statements is TRUE?

1. No two of S1, S2, and S3 are conflict equivalent to each other.

2. S2 is conflict equivalent to S3 but not S1.

3. S1 is conflict equivalent to S2 but not S3.

4. S1, S2, S3 are conflict equivalent to each other.


A database of research articles in a journal uses the following schema.
(VOLUME, NUMBER, STARTPAGE, ENDPAGE, TITLE, YEAR, PRICE)
The primary key is (VOLUME, NUMBER, STARTPAGE, ENDPAGE) and the following functional
dependencies exist in the schema.
(VOLUME, NUMBER, STARTPAGE, ENDPAGE) → TITLE
(VOLUME, NUMBER) → YEAR
(VOLUME, NUMBER, STARTPAGE, ENDPAGE) → PRICE
The database is redesigned to use the following schemas.
(VOLUME, NUMBER, STARTPAGE, ENDPAGE, TITLE, PRICE) (VOLUME, NUMBER, YEAR)
Which is the weakest normal form that the new database satisfies, but the old one does not?

1. 1NF

2. 2NF

3. 3NF

4. BCNF
Which of the following is TRUE?

1. Every relation in 3NF is in BCNF.

2. A relation is in 3NF if every non-prime attribute of R is
fully functionally dependent on every key of R.

3. Every relation in BCNF is also in 3NF.

4. No relation can be in both 3NF and BCNF.


A table has fields F1, F2, F3, F4, F5, with the
following functional dependencies:
F1 → F3, F2 → F4, (F1, F2) → F5. In terms of
normalization, this table is in

1. First Normal Form

2. Second Normal Form

3. Third Normal Form

4. BCNF
Consider the following functional dependencies in a database.
DOB→Age, Age→Eligibility, Name→Roll_number,
Roll_number→Name, Course_number→Course_name,
Course_number→Instructor, (Roll_number, Course_number)→Grade.
The relation (Roll_number, Name, Date_of_Birth, Age) is

1. In 2NF but not in 3NF

2. In 3NF but not in BCNF

3. In BCNF

4. None of the above
