Chapter 4: Concurrency Control Techniques

Concurrency control is a critical aspect of database management systems (DBMS) that ensures
the integrity of data when multiple transactions are executed simultaneously. When multiple
transactions run concurrently, there is a risk of inconsistencies and anomalies, such as lost
updates, temporary inconsistency, and uncommitted data being read by other transactions. Here
are the primary techniques used for concurrency control:

1. Locking Mechanisms

Locking is one of the most widely used concurrency control techniques. When a transaction
wants to perform an operation on a data item, it must acquire a lock on that item first.

 Exclusive Locks (X Locks): Prevent other transactions from reading or writing the
locked data item.
 Shared Locks (S Locks): Allow multiple transactions to read the data item but prevent
any of them from writing to it.
 Lock Modes: Different DBMSs may implement various lock types, like intent locks and
range locks.

Two-Phase Locking (2PL)

This is a specific locking protocol that consists of two phases:

 Growing Phase: A transaction can acquire locks but cannot release them.
 Shrinking Phase: A transaction can release locks but cannot acquire new ones.

2PL can be:

 Strict 2PL: A transaction holds all its exclusive (write) locks until it commits or
aborts, which prevents dirty reads and cascading rollbacks.
 Rigorous 2PL: All locks, shared and exclusive, are held until the commit point.
 Basic 2PL: Locks may be released before the transaction commits, allowing more
concurrency but introducing potential cascading rollbacks.

2. Timestamp Ordering

Each transaction is assigned a unique timestamp. The execution of transactions is then ordered
based on their timestamps, ensuring that earlier transactions have precedence over later ones.

 Read and Write Rules:


o A transaction T may read a data item X only if TS(T) >= write_TS(X), the
timestamp of the last transaction that wrote X; otherwise T is aborted and restarted.
o A transaction T may write X only if TS(T) >= read_TS(X) and TS(T) >=
write_TS(X); otherwise T is aborted and restarted.
This technique can prevent anomalies but may lead to aborts if transactions interfere with the
ordering restrictions.
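
A minimal sketch of these checks in Python, assuming each data item tracks a read_TS and a
write_TS field (the names are illustrative, not tied to a particular DBMS):

# Basic timestamp-ordering checks. A real system would also restart the aborted
# transaction with a fresh timestamp and handle recoverability.
class Item:
    def __init__(self, value=None):
        self.value = value
        self.read_ts = 0    # largest timestamp of any transaction that read the item
        self.write_ts = 0   # timestamp of the last transaction that wrote the item

def to_read(txn_ts, item):
    if txn_ts < item.write_ts:                  # a younger transaction already wrote the item
        return "abort"                          # reading it now would violate timestamp order
    item.read_ts = max(item.read_ts, txn_ts)
    return item.value

def to_write(txn_ts, item, value):
    if txn_ts < item.read_ts or txn_ts < item.write_ts:
        return "abort"                          # a younger transaction already read or wrote it
    item.value = value
    item.write_ts = txn_ts
    return "ok"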

3. Optimistic Concurrency Control

This approach allows transactions to execute without acquiring locks, on the assumption that
conflicts between transactions will be rare. There are three phases:

 Read Phase: Transaction reads the required data and performs updates in a local copy.
 Validation Phase: Before committing, the system checks whether any other transactions
have modified the data during the read phase.
 Write Phase: If validation is successful, updates are applied; otherwise, the transaction is
rolled back.

Optimistic concurrency control is well-suited to environments where conflicts are infrequent.

4. Multi-Version Concurrency Control (MVCC)

In this technique, multiple versions of a data item are maintained, allowing transactions to read
different versions without blocking each other.

 Read Operations: A transaction reads the most recent version committed as of its
snapshot, so readers do not block writers.
 Write Operations: A write creates a new version of the data item, which becomes visible
to other transactions only when the writing transaction commits; existing versions remain
available to concurrent readers.

MVCC reduces contention and improves performance, especially in read-heavy scenarios.
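
A minimal sketch of the MVCC read path, assuming each committed write appends a
(commit_ts, value) version to a per-item list (a simplified model, not any particular engine's
storage format):

# Each data item keeps its committed versions, newest last. A transaction reads the
# newest version committed at or before its snapshot timestamp, so readers never
# block writers and writers never block readers.
versions = {
    "account_1": [(5, 100), (12, 30)],   # committed at ts 5 and ts 12
}

def mvcc_read(item, snapshot_ts):
    visible = [value for ts, value in versions[item] if ts <= snapshot_ts]
    return visible[-1] if visible else None

def mvcc_write(item, commit_ts, value):
    # At commit time a new version is appended; older versions stay readable.
    versions[item].append((commit_ts, value))

print(mvcc_read("account_1", 10))   # -> 100 (version committed at ts 5)
print(mvcc_read("account_1", 20))   # -> 30  (version committed at ts 12)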

5. Serializable Snapshot Isolation (SSI)

This technique extends snapshot-based MVCC so that the combined effect of committed
transactions is equivalent to some serial execution. By detecting dangerous patterns of
read-write conflicts at commit time, it allows more concurrency than lock-based protocols such
as strict 2PL while still guaranteeing serializable outcomes.

6. Quorum-Based Techniques

These techniques require a specified number of votes (or quorums) from participating nodes in a
distributed database before a transaction can proceed, ensuring that sufficient agreement is
reached among distributed copies of data.
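
A minimal sketch of the usual quorum condition, assuming N replicas with a read quorum of R
votes and a write quorum of W votes (the numbers below are illustrative):

# R + W > N guarantees that every read quorum overlaps every write quorum, and
# W > N/2 prevents two conflicting writes from both gathering a quorum.
def quorums_are_consistent(n, r, w):
    return (r + w > n) and (2 * w > n)

print(quorums_are_consistent(n=5, r=3, w=3))   # True  - every read sees the latest write
print(quorums_are_consistent(n=5, r=2, w=2))   # False - a read quorum can miss the latest write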

Locking Techniques for Concurrency Control

Concurrency control is a crucial aspect of database management systems (DBMS) as it ensures
that database transactions are executed in a safe manner when multiple transactions occur
simultaneously. Locking techniques are one of the primary mechanisms used to achieve
concurrency control. Here are some commonly used locking techniques, along with suitable
examples for each:
1. Binary Locks

In binary locking, each data item can be in one of two states: locked or unlocked. A transaction
must lock an item before it can access it. Once it’s done, it releases the lock.

Example:

 Transaction T1 wants to read and update a data item A.


o T1 locks A (lock(A)).
o T1 reads A, makes changes, and updates A.
o T1 releases the lock on A (unlock(A)).
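
A minimal sketch of the same sequence in Python, using threading.Lock as the binary lock on
item A (the variable names are illustrative):

import threading

# One binary lock per data item: the item is either locked or unlocked.
lock_A = threading.Lock()
A = 10

def transaction_t1():
    global A
    lock_A.acquire()        # lock(A): no other transaction may access A now
    try:
        A = A + 1           # read A, make changes, update A
    finally:
        lock_A.release()    # unlock(A)

transaction_t1()
print(A)                    # -> 11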

2. Shared and Exclusive Locks

Locks can be categorized as shared or exclusive:

 Shared Lock (S-Lock): Allows a transaction to read a data item. Multiple transactions
can hold shared locks on the same data item simultaneously.
 Exclusive Lock (X-Lock): Allows a transaction to write to a data item. Only one
transaction can hold an exclusive lock on a data item.

Example:

 Transaction T1 wants to read A:


o T1 acquires a shared lock on A (S-lock(A)).
o T1 can read A, but cannot write it.
 Transaction T2 also wants to read A:
o T2 can acquire a shared lock on A (S-lock(A)).
 Transaction T3 wants to write to A:
o T3 will need to wait until both T1 and T2 release their shared locks, as it requires
an exclusive lock (X-lock(A)).
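
A minimal sketch of a shared/exclusive lock table that captures this behaviour (a toy lock
manager for a single data item, not a real DBMS API):

# Many readers may share the item, but a writer must wait until no other
# transaction holds any lock on it.
class SXLock:
    def __init__(self):
        self.readers = set()    # transactions holding S locks
        self.writer = None      # transaction holding the X lock, if any

    def s_lock(self, txn):
        if self.writer is None:
            self.readers.add(txn)
            return "granted"
        return "wait"           # an exclusive holder blocks new readers

    def x_lock(self, txn):
        if self.writer is None and self.readers <= {txn}:
            self.writer = txn
            self.readers.discard(txn)
            return "granted"
        return "wait"           # any other S or X holder blocks the writer

lock_A = SXLock()
print(lock_A.s_lock("T1"))   # granted - T1 reads A
print(lock_A.s_lock("T2"))   # granted - T2 reads A concurrently
print(lock_A.x_lock("T3"))   # wait    - T3 must wait for T1 and T2 to release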

3. Two-Phase Locking (2PL)

Two-Phase Locking is a protocol that ensures serializability. In this protocol, a transaction enters
the growing phase where it can acquire any number of locks and then enters the shrinking phase
where it can only release locks.

Example:

 Transaction T1 starts and acquires locks:


o T1 locks A (X-lock(A)).
o T1 locks B (X-lock(B)).
 T1 performs its operations and releases locks:
o T1 unlocks A (unlock(A)).
o T1 unlocks B (unlock(B)).
Once T1 releases its first lock, it cannot acquire any more locks, giving a clear two-phase division.
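
A minimal sketch of a transaction object that enforces the two-phase rule (the class and helper
names are illustrative; the underlying lock manager is omitted):

# A transaction that refuses to acquire a lock once it has released one, which is
# exactly the two-phase property.
class TwoPhaseTxn:
    def __init__(self, name):
        self.name = name
        self.held = set()
        self.shrinking = False    # becomes True after the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: cannot acquire locks in the shrinking phase")
        self.held.add(item)       # growing phase: acquire, never release

    def unlock(self, item):
        self.shrinking = True     # the first release ends the growing phase
        self.held.discard(item)

t1 = TwoPhaseTxn("T1")
t1.lock("A")      # growing phase
t1.lock("B")
t1.unlock("A")    # shrinking phase begins
t1.unlock("B")
# t1.lock("C")    # would raise: no new locks after the first release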

4. Strict Two-Phase Locking (Strict 2PL)

Strict 2PL is a stronger form of 2PL in which a transaction holds its exclusive (write) locks until
it commits or aborts. This prevents dirty reads and eliminates the potential for cascading rollbacks.

Example:

 Transaction T1 locks A (X-lock(A)), reads and updates it, and only releases the lock after
it has committed. No other transaction can read or write to A while T1 holds the lock.

5. Deadlock Prevention and Handling

In locking systems, deadlocks can occur when transactions hold locks and wait for other locks to
be released. Some techniques include:

 Wait-Die: An older transaction that requests a lock held by a younger one waits; a
younger transaction that requests a lock held by an older one is aborted (it "dies").
 Wound-Wait: An older transaction that requests a lock held by a younger one aborts
("wounds") the younger transaction; a younger transaction that requests a lock held by an
older one waits.

Example:

 T1 (older) holds a lock on A and requests the lock on B held by T2 (younger). Under
Wait-Die, T1 is allowed to wait; if T2 in turn requests the lock on A held by T1, T2 is
aborted, which breaks the potential deadlock. Under Wound-Wait, T1 would instead abort T2
immediately and take the lock on B. Both decision rules are sketched below.
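
A minimal sketch of the two decision rules, assuming smaller timestamps mean older
transactions (the function names are illustrative):

# Both schemes compare the timestamp of the requesting transaction with that of the
# current lock holder; a smaller timestamp means an older transaction.
def wait_die(requester_ts, holder_ts):
    # Older requester waits; younger requester dies (is aborted and later restarted).
    return "wait" if requester_ts < holder_ts else "abort requester"

def wound_wait(requester_ts, holder_ts):
    # Older requester wounds (aborts) the younger holder; younger requester waits.
    return "abort holder" if requester_ts < holder_ts else "wait"

# T1 is older (ts=1), T2 is younger (ts=2). T1 requests a lock held by T2:
print(wait_die(1, 2))     # -> wait            (T1 waits for T2)
print(wound_wait(1, 2))   # -> abort holder    (T2 is wounded so T1 can proceed)

# T2 requests a lock held by T1:
print(wait_die(2, 1))     # -> abort requester (T2 dies and restarts later)
print(wound_wait(2, 1))   # -> wait            (T2 waits for T1)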

6. Timestamp Ordering

This mechanism gives each transaction a unique timestamp. Conflicting operations are executed
in the order of the transactions' timestamps, ensuring serializability without the need for locks.

Example:

 Transaction T1 is assigned timestamp TS(T1) and T2 is assigned timestamp TS(T2), where
TS(T1) < TS(T2). If both access the same data item, their conflicting operations must take
effect in timestamp order (T1 before T2); an operation that would violate this order causes
the offending transaction to be aborted and restarted with a new timestamp.

Multiversion Concurrency Control and Validation (Optimistic) Concurrency Control Techniques, with Suitable Examples

Multiversion Concurrency Control (MVCC) is a concurrency control method that allows for
multiple versions of data items to be stored, enabling multiple transactions to proceed without
locking resources. This technique is commonly used in databases to increase performance and
reduce contention. A closely related, validation-based approach is Optimistic Concurrency
Control (OCC), which assumes that conflicts between transactions are rare.

Optimistic Concurrency Control (OCC)

In OCC, a transaction typically goes through three phases:

1. Read Phase: The transaction reads data without acquiring locks.


2. Validation Phase: Before committing, the transaction is validated to ensure that no
conflicting changes have occurred.
3. Write Phase: If validation is successful, the transaction writes its changes; otherwise, it
is rolled back.

Example Scenario

Let's consider a simple banking application where two users are trying to withdraw money from
the same account concurrently.

 Initial Account Balance: $100

Transactions:

1. Transaction T1: Withdraw $70


2. Transaction T2: Withdraw $50

Execution Steps:

1. Read Phase:
o T1 reads the account balance: $100.
o T2 also reads the account balance: $100.
2. Validation Phase (T1 commits first):
o T1 validates: no other transaction has committed a change to the account since
T1 read it, so validation succeeds and T1 proceeds to the Write Phase.
3. Write Phase:
o T1 writes the new balance: $100 - $70 = $30 and commits.
o T2 now reaches its Validation Phase. The balance it read ($100) was changed by a
transaction that committed after T2's read, so T2's validation fails. T2 must roll
back and retry, re-reading the current balance of $30.

Example Code (Python sketch)

# A minimal, illustrative version of the two withdrawals under OCC. The account keeps
# a version counter that is bumped on every committed write; validation succeeds only
# if the version is unchanged since the read phase.
account = {"balance": 100, "version": 0}

def read_phase(amount):
    # Read phase: snapshot the item and compute the update on a local copy.
    return {"read_version": account["version"],
            "new_balance": account["balance"] - amount}

def validate_and_write(txn):
    # Validation phase: has a committed write changed the item since we read it?
    if account["version"] != txn["read_version"]:
        return "rolled back: validation failed, retry"
    if txn["new_balance"] < 0:
        return "rolled back: insufficient funds"
    # Write phase: install the new value and bump the version (commit).
    account["balance"] = txn["new_balance"]
    account["version"] += 1
    return "committed"

# T1 and T2 both read the same state (balance = 100, version = 0) ...
t1 = read_phase(70)   # T1: withdraw $70
t2 = read_phase(50)   # T2: withdraw $50
# ... and then validate and write one after the other.
print("T1:", validate_and_write(t1))   # T1: committed (balance becomes $30)
print("T2:", validate_and_write(t2))   # T2: rolled back: validation failed, retry

Advantages of OCC

 Performance: Since transactions do not wait for locks, the performance can be
improved, especially in read-heavy scenarios.
 Scalability: Systems can handle many transactions concurrently without blocking.

Disadvantages of OCC

 Rollback Overhead: When conflicts do occur, transactions are rolled back and retried,
which can degrade performance.
 Conflict Rate: OCC is effective mainly in environments with low contention; under high
contention, the cost of repeated rollbacks outweighs the benefit of lock-free execution.

Granularity of Data Items, Multiple Granularity Locking, and Using Locks for
Concurrency Control in Indexes

Concurrency control is a critical aspect of database management systems (DBMS) that ensures
that multiple transactions can occur simultaneously without leading to inconsistencies in the
database. Granularity of data items and multiple granularity locking are important concepts in
this context, particularly when dealing with indexes.

Granularity of Data Items

Granularity refers to the size of the data item that is being locked. It can range from very fine
(individual rows) to very coarse (entire tables). The granularity chosen impacts the system's
throughput, deadlock potential, and response time.

1. Coarse Granularity: Locking larger units of data (like tables) can reduce the overhead
of obtaining locks but may lead to increased contention. For example, if two transactions
want to update different rows in the same table, they may have to wait for each other if a
table-level lock is used.

Example:

o Transactions T1 and T2 attempt to update different rows in the "Customers" table.
If table-level locks are used, T1 locks the entire table and T2 must wait until T1
releases the lock, which can lead to delays.
2. Fine Granularity: Locking smaller units (like individual rows) allows for higher
concurrency but increases the overhead of managing locks.

Example:

o Consider a database with a "Products" table. If T1 updates Row 5 while T2
updates Row 10, they can operate simultaneously without conflicts if row-level
locks are used. Fine-grained locking reduces contention but can introduce
overhead due to lock management.

Multiple Granularity Locking

Multiple granularity locking is a strategy that allows systems to lock data items at different levels
of granularity (like tuples, pages, and tables) using a hierarchical locking scheme. This method
combines the advantages of both coarse and fine granularity.

Lock Modes

Multiple granularity locking typically supports several lock modes:

1. S (Shared): Allows transactions to read a data item.


2. X (Exclusive): Allows a transaction to write to a data item.
3. IS (Intention Shared): Indicates a transaction intends to acquire shared locks on some
lower-level objects.
4. IX (Intention Exclusive): Indicates a transaction intends to acquire exclusive locks on
some lower-level objects.
5. SIX (Shared and Intention Exclusive): The node is locked in shared mode while the
transaction also intends to acquire exclusive locks on some lower-level objects.

Locking Protocol

To implement multiple granularity locking, several steps are typically followed:

1. Locking Hierarchy:
o At the database level (coarsest level), a lock can be placed on an entire table.
o At the page level, locks can be placed on individual pages.
o At the row level (finest level), locks can be placed on specific rows.
2. Locking Protocol:
o A transaction must acquire an intention lock on a higher level before locking
lower levels.
o For example, to lock Row 3 of a Page, a transaction must first acquire an IX lock
on the Page and then an X lock on Row 3.
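
A minimal sketch of this protocol and the standard compatibility matrix, assuming only two
levels (table and row) and illustrative function names (lock_table, lock_row); real lock
managers also cover page-level locks, lock queues, and lock upgrades:

# compatible[a][b] is True when a lock in mode a can coexist with a lock in mode b.
compatible = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

table_locks = {}   # transaction -> mode held on the table
row_locks = {}     # (row, transaction) -> mode held on an individual row

def lock_table(txn, mode):
    if all(compatible[mode][m] for t, m in table_locks.items() if t != txn):
        table_locks[txn] = mode
        return "granted"
    return "wait"

def lock_row(txn, row, mode):
    # Hierarchy rule: an X row lock needs IX or SIX on the table; an S row lock needs IS, IX or SIX.
    needed = {"X": {"IX", "SIX"}, "S": {"IS", "IX", "SIX"}}[mode]
    if table_locks.get(txn) not in needed:
        return "protocol violation: missing intention lock on the table"
    other_modes = [m for (r, t), m in row_locks.items() if r == row and t != txn]
    if all(compatible[mode][m] for m in other_modes):
        row_locks[(row, txn)] = mode
        return "granted"
    return "wait"

print(lock_table("T1", "IX"))      # granted - T1 intends to update a row
print(lock_row("T1", 100, "X"))    # granted - update ProductID 100
print(lock_table("T2", "IX"))      # granted - IX is compatible with T1's IX
print(lock_row("T2", 200, "S"))    # granted - read ProductID 200
print(lock_row("T2", 300, "X"))    # granted - update ProductID 300

The same matrix is what allows the scenario below to run concurrently: the two IX locks on the
table are compatible, and the row-level locks are taken on different rows.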

Example Scenario with Multiple Granularity Locking

Consider a scenario where a "Products" table is structured as follows:

 Table: Products
 Fields: ProductID, Name, Price

Transaction T1 and T2:

 T1 wants to update the Price of ProductID 100.


 T2 wants to read the Price of ProductID 200 and also update the Price of ProductID 300.

Locking Process:

1. T1 acquires an IX lock on the "Products" table because it intends to take an exclusive
lock at a lower level.
2. T1 then requests an X lock on the row for ProductID 100 to update the price.
3. T2 acquires an IX lock on the "Products" table (compatible with T1's IX), since it intends
to update a row; the intention lock also covers its row-level read.
4. T2 then acquires an S lock on the row for ProductID 200 for its read and an X lock on the
row for ProductID 300 for its update; neither conflicts with T1's X lock on the row for
ProductID 100.
This concurrent execution is possible because T1 and T2 are working on separate rows and have
obtained the appropriate intention locks according to the locking hierarchy.

Using Locks for Concurrency Control in Indexes

Concurrency control is crucial in databases and applications that involve multiple users and
transactions accessing shared resources. Locks are a common mechanism used for this purpose,
particularly with index structures. Locks ensure that transactions are executed in a manner that
preserves data integrity and consistency.

Types of Locks

There are primarily two types of locks:

1. Exclusive Locks (X Lock): A transaction that holds an exclusive lock on a resource prevents other
transactions from accessing that resource in any manner.
2. Shared Locks (S Lock): A shared lock allows multiple transactions to read a resource
simultaneously but prevents any transaction from modifying it.

Locking in Indexes

Index structures—such as B-trees or hash indexes—are often locked to control access during
operations like insertions, deletions, and updates. Depending on the operation, different locking
strategies can be employed.

Example: B-tree Indexes

Consider a B-tree index on a table for a banking application containing account information such
as account numbers, names, and balances.

Scenario

Two transactions, Transaction A and Transaction B, are trying to perform operations on the
same B-tree index.

1. Transaction A wants to insert a new account with account number 123.


2. Transaction B wants to read account information for account number 120.

Process

Step 1: Transaction B Reads

 Transaction B acquires a Shared Lock (S Lock) on the B-tree index for reading the account
information.
 Since it's reading and not modifying, other transactions can also acquire shared locks on the
same index.
Step 2: Transaction A Writes

 Transaction A wants to insert account number 123 into the B-tree.


 To do this, Transaction A tries to acquire an Exclusive Lock (X Lock) on the B-tree index.
 Since Transaction B holds a shared lock, Transaction A will have to wait until Transaction B
releases its lock.

Step 3: Transaction B Completes

 Transaction B finishes its operation and releases the shared lock.


 Now, Transaction A can acquire the exclusive lock and proceed to insert the new account
number into the index.

Example Locking Strategy

1. Two-Phase Locking (2PL):


o Transaction A and Transaction B both utilize a two-phase locking protocol:
 Growing Phase: Acquires all the necessary locks but does not release any locks.
 Shrinking Phase: Releases locks and no new locks can be obtained.
o In this example, Transaction A will be in the growing phase until it completes the
insertion and then enters the shrinking phase by releasing the exclusive lock.

Example Change (Delete Operation)

Now let's consider a delete operation by another Transaction C:

 Transaction C wants to delete account number 120.


 It tries to acquire an Exclusive Lock (X Lock) on that specific record in the B-tree.
 If Transaction A currently holds an exclusive lock while inserting account number 123,
Transaction C will have to wait until Transaction A releases its lock.

Conclusion

Locking in index structures is a sophisticated but necessary process to ensure the consistency and
integrity of data during concurrent operations. By implementing strategies such as shared and
exclusive locks in a two-phase locking manner, databases can allow multiple concurrent
transactions while preventing conflicts that could lead to data corruption.

Best Practices:

 Minimize Lock Duration: Keep transactions short to reduce the time locks are held.
 Choose Appropriate Granularity: Prefer finer-grained locking (such as row-level locks) over
page- or table-level locks when higher concurrency is needed, accepting the extra
lock-management overhead.
 Deadlock Handling: Implement deadlock prevention or detection-and-resolution strategies so
that two transactions do not wait on each other indefinitely.
