Group 10 Assignment (3) 085636
SEMESTER : TWO
02: Answers
Atomicity: Atomicity ensures that either all operations within a transaction complete successfully or none
of them take effect. If any operation within the transaction fails, the entire transaction is rolled back, leaving the
database in its previous state. This property guarantees data integrity by preventing partial updates or
inconsistent data. For instance, consider a bank transfer operation between two accounts. If the transfer
of funds from one account to another fails midway, atomicity ensures that both accounts remain
unchanged, preventing any potential inconsistency or data corruption.
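The bank-transfer rollback can be sketched with Python's built-in sqlite3 module (the accounts table, balances, and CHECK constraint below are hypothetical, added only for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
             " balance REAL CHECK (balance >= 0))")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Credit then debit inside one transaction; any failure undoes both."""
    try:
        with conn:  # sqlite3 commits on success, rolls back on exception
            conn.execute("UPDATE accounts SET balance = balance + ?"
                         " WHERE id = ?", (amount, dst))
            conn.execute("UPDATE accounts SET balance = balance - ?"
                         " WHERE id = ?", (amount, src))
    except sqlite3.IntegrityError:
        pass  # the debit violated the CHECK constraint; the already-applied
              # credit is rolled back too, so neither account changes

transfer(conn, 1, 2, 500.0)  # would overdraw account 1, so nothing changes
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
```

After the failed transfer both balances are unchanged (100.0 and 50.0), even though the credit to account 2 had already been applied before the debit failed mid-transaction.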
Consistency: Consistency ensures that a transaction brings the database from one valid state to another
while maintaining database invariants. In other words, it guarantees that the database remains logically
consistent before and after the execution of a transaction. Database constraints such as primary key
uniqueness, referential integrity, and domain constraints help maintain consistency during transactions.
For example, if an attempt is made to insert a duplicate record into a table with a primary key constraint,
the transaction will be rolled back to maintain consistency.
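The duplicate-key case can be sketched with sqlite3 (the table and names below are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY,"
             " customer_name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")
conn.commit()

try:
    with conn:  # the transaction is rolled back automatically on exception
        conn.execute("INSERT INTO customers VALUES (1, 'Bob')")  # duplicate key
except sqlite3.IntegrityError:
    pass  # the primary-key constraint rejected the insert,
          # keeping the database in a consistent state

count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]  # still 1
```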
Isolation: Isolation ensures that concurrently executing transactions do not affect each other’s execution.
It guarantees that each transaction runs independently without interference from other transactions.
There are different isolation levels (such as serializable, repeatable read, read committed, and
read uncommitted) that determine how much interference is allowed between concurrent
transactions. For instance, at the highest level, serializable, no concurrent transaction
can read or write data affected by the current transaction until it completes its execution.
Durability: Durability ensures that once a transaction has been committed, it remains so even in the event
of system failure or power loss. This property guarantees that data changes made during a committed
transaction persist permanently and can be recovered after system recovery processes have taken place.
For example, if a power outage occurs during the execution of a transaction but it has already been
committed beforehand, durability ensures that those changes will still be present when power is restored
and the DBMS resumes normal operations.
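A rough way to observe durability with sqlite3 is to commit, close the connection to simulate a shutdown, and then reopen the database file (the file path and table below are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "durable.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE log (msg TEXT)")
conn.execute("INSERT INTO log VALUES ('committed before shutdown')")
conn.commit()   # the change is flushed to stable storage
conn.close()    # simulate the DBMS stopping

conn = sqlite3.connect(path)  # "restart": reopen the same database file
rows = conn.execute("SELECT msg FROM log").fetchall()  # the committed row survived
```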
03: Answers
Lock-Based Protocols:
Lock-based protocols are a common mechanism used for concurrency control in DBMS. They involve
acquiring and releasing locks on data items to regulate access by multiple transactions. Two fundamental
types of locks are used:
(i) Shared Locks: Allow multiple transactions to read a data item simultaneously but prevent any
transaction from writing to it until all shared locks are released.
(ii) Exclusive Locks: Restrict both reading and writing operations on a data item by other transactions while
an exclusive lock is held.
Two-Phase Locking (2PL): The concept of two-phase locking ensures serializability by dividing the
transaction into two phases: the growing phase and the shrinking phase. In the growing phase, a
transaction can acquire locks but cannot release any, whereas in the shrinking phase, a transaction can
release locks but cannot acquire any more.
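The two phases can be sketched with threading locks standing in for database locks (the class and names are invented for illustration; real 2PL also distinguishes shared from exclusive locks):

```python
import threading

class TwoPhaseTransaction:
    """Growing phase: acquire only. Shrinking phase: release only."""
    def __init__(self, lock_table):
        self.lock_table = lock_table  # item name -> threading.Lock
        self.held = []
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: acquire after first release")
        self.lock_table[item].acquire()
        self.held.append(item)

    def commit(self):
        self.shrinking = True  # the first release ends the growing phase
        for item in reversed(self.held):
            self.lock_table[item].release()
        self.held.clear()

locks = {"A": threading.Lock(), "B": threading.Lock()}
txn = TwoPhaseTransaction(locks)
txn.lock("A")
txn.lock("B")   # growing phase: acquire locks as needed
txn.commit()    # shrinking phase: all locks released, none re-acquired
```

Releasing every lock only at commit, as here, is strict 2PL, which additionally avoids cascading rollbacks.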
Timestamp-Based Protocols:
Timestamp-based protocols use timestamps assigned to transactions to order their execution and resolve
conflicts. Each transaction is assigned a unique timestamp based on its start time or submission order.
When two transactions conflict, the protocol resolves the conflict in timestamp order: the older
transaction (the one with the lower timestamp) takes priority, and a transaction whose operation would
violate that order is aborted and restarted. This mechanism ensures that the execution is equivalent to
running the transactions in timestamp order.
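The basic timestamp-ordering rules can be sketched as follows (the Item class and functions are invented for illustration): each item remembers the youngest read and write timestamps seen so far, and any operation arriving "out of order" forces its transaction to abort and restart.

```python
class Item:
    """A data item tagged with the youngest read/write timestamps seen so far."""
    def __init__(self):
        self.read_ts = 0
        self.write_ts = 0

def read(item, ts):
    if ts < item.write_ts:
        return False  # a younger transaction already wrote: the reader must abort
    item.read_ts = max(item.read_ts, ts)
    return True

def write(item, ts):
    if ts < item.read_ts or ts < item.write_ts:
        return False  # conflicts with a younger read or write: the writer must abort
    item.write_ts = ts
    return True

x = Item()
write(x, 5)             # the transaction with timestamp 5 writes x
late_read = read(x, 3)  # an older transaction arrives too late -> False (abort)
ok_read = read(x, 7)    # a younger transaction may still read -> True
```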
Optimistic Concurrency Control (OCC):
Optimistic concurrency control is based on the assumption that conflicts between transactions are
rare. It allows transactions to proceed without acquiring locks and validates them only at commit
time, in three phases:
(i) Read Phase: The transaction reads data items and applies its updates to private, local copies
without acquiring any locks.
(ii) Validation Phase: The system checks for conflicts between transactions during validation before
committing changes.
(iii) Write Phase: If no conflicts are detected during validation, the changes made by the transaction are
committed; otherwise, it is rolled back.
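The validation step can be sketched as a read-set/write-set intersection test (this is backward validation; the sets and transaction records below are illustrative):

```python
def validate(txn, overlapping_committed):
    """txn fails validation if any transaction that committed during its
    read phase wrote an item that txn read."""
    for other in overlapping_committed:
        if txn["read_set"] & other["write_set"]:
            return False  # conflict detected: roll txn back and retry
    return True           # no conflicts: proceed to the write phase

t1 = {"read_set": {"x"}, "write_set": {"y"}}
t2 = {"read_set": {"y"}, "write_set": {"x"}}  # committed while t1 was running
conflict = validate(t1, [t2])  # t1 read "x", which t2 wrote -> fails validation
clean = validate(t1, [])       # no overlapping commits -> passes validation
```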
04: Answers
(i)Read Uncommitted: In the Read Uncommitted isolation level, transactions can read data that has been
modified by other transactions but not yet committed. This means that dirty reads are possible, where a
transaction can see uncommitted changes made by other transactions.
Example: Transaction A updates a row in a table. Transaction B reads the same row before Transaction A
commits its changes. In this case, Transaction B will see the uncommitted changes made by Transaction A.
(ii) Read Committed: In the Read Committed isolation level, transactions can only read data that has been
committed by other transactions. This prevents dirty reads but allows non-repeatable reads and phantom
reads.
Example: Transaction A updates a row in a table. Transaction B tries to read the same row before
Transaction A commits its changes. In this case, Transaction B will not see the uncommitted changes made
by Transaction A.
(iii) Repeatable Read: In the Repeatable Read isolation level, re-reading the same row within a
transaction always returns the same committed values. This prevents both dirty reads and
non-repeatable reads but still allows phantom reads.
Example: Transaction A selects a set of rows based on a certain condition. Meanwhile, Transaction B
inserts new rows that meet the same condition. If Transaction A re-executes the same query, it may see
additional rows inserted by Transaction B, causing a phantom read.
(iv) Serializable: In the Serializable isolation level, transactions are executed as if they were running one
after another, preventing all types of anomalies - dirty reads, non-repeatable reads, and phantom reads.
This is the strictest isolation level but can lead to decreased concurrency and potential performance issues
due to locking.
Example: If two transactions try to update the same row simultaneously under Serializable isolation, one
of them will be forced to wait until the other completes its operation to maintain consistency and prevent
conflicts.
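SQLite (used below through Python's sqlite3 module) never permits dirty reads in its default journal mode, so a second connection behaves like Read Committed or stronger; a sketch of that visibility rule, with an illustrative temporary database file:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "iso.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
writer.execute("INSERT INTO t VALUES (1, 'old')")
writer.commit()

# The UPDATE implicitly opens a transaction that is not yet committed.
writer.execute("UPDATE t SET val = 'new' WHERE id = 1")

# The reader never sees the uncommitted change: no dirty read is possible.
before = reader.execute("SELECT val FROM t").fetchall()[0][0]  # 'old'
writer.commit()
after = reader.execute("SELECT val FROM t").fetchall()[0][0]   # 'new'
```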
05: Answers
Creating Tables:
CREATE TABLE customers (
customer_id INT PRIMARY KEY,
customer_name VARCHAR(50)
);
CREATE TABLE orders (
order_id INT PRIMARY KEY,
customer_id INT REFERENCES customers(customer_id),
order_amount DECIMAL(10,2) CHECK (order_amount >= 0)
);
-- A successful transaction: COMMIT makes the update permanent.
START TRANSACTION;
UPDATE orders SET order_amount = 60.00 WHERE order_id = 101;
COMMIT;
-- A failing transaction: the CHECK constraint rejects the negative amount,
-- so the transaction is rolled back and no partial change persists.
START TRANSACTION;
UPDATE orders SET order_amount = -100.00 WHERE order_id = 102;
ROLLBACK;
Using different isolation levels can affect how transactions interact with each other in a concurrent
environment. Lower isolation levels provide better performance but may lead to data integrity issues.
06: Answers
Concurrency control techniques and isolation levels play a crucial role in ensuring the consistency and
reliability of database systems when multiple transactions are executed concurrently. In this report, we
will analyze the performance impact of different concurrency control techniques and isolation levels by
conducting experiments with multiple concurrent transactions. We will measure the throughput and
response time to evaluate the effectiveness of these techniques.
Experimental Setup
Two-Phase Locking (2PL): This technique ensures that transactions acquire locks on data items before
accessing them and release the locks only after completing the transaction.
Optimistic Concurrency Control: This technique allows transactions to proceed without acquiring locks
initially. Conflicts are detected at commit time, and if conflicts occur, one of the transactions is rolled back.
Isolation Levels:
Read Uncommitted: Transactions can read uncommitted changes made by other transactions.
Read Committed: Transactions can only read committed data, preventing dirty reads.
Repeatable Read: Ensures that a transaction sees a consistent snapshot of the database, preventing non-
repeatable reads.
Serializable: Provides the highest level of isolation by ensuring that transactions are executed as if they
were serially executed.
Experimental Procedure:
We will simulate a database environment with multiple concurrent transactions using a benchmarking tool.
Each transaction will perform a mix of read and write operations on shared data items.
We will vary the concurrency control techniques and isolation levels to observe their impact on system
performance.
Results
Throughput Analysis:
07: Answers
Summary of Learnings
In this assignment, I delved into the critical concepts of transactions and concurrency control in
Database Management Systems (DBMS). Transactions are fundamental units of work in a
database system that must follow the ACID properties - Atomicity, Consistency, Isolation, and
Durability. Atomicity ensures that either all operations within a transaction are completed
successfully or none at all. Consistency guarantees that the database remains in a valid state before
and after the transaction. Isolation ensures that concurrent transactions do not interfere with each
other, while Durability ensures that once a transaction is committed, its changes are permanent.
Concurrency control is equally vital as it enables multiple users to access and modify data concurrently
without leading to conflicts or inconsistencies. Without proper concurrency control mechanisms, issues
like dirty reads, non-repeatable reads, and phantom reads can occur, compromising the reliability of the
database system.
Challenges Encountered:
During the implementation and analysis of transactions and concurrency control mechanisms, several
challenges were encountered. One common challenge is balancing the data consistency enforced
through locking mechanisms against the high levels of concurrency needed for good performance. Overly
restrictive locking can lead to contention and reduced throughput, while overly lax control can result in
data anomalies.
Another challenge lies in handling deadlock situations where two or more transactions are waiting
indefinitely for resources held by each other. Deadlocks can significantly impact system performance if not
managed effectively through deadlock detection and resolution strategies.
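One standard prevention strategy is global lock ordering: if every transaction acquires its locks in the same agreed order, a circular wait can never form. A small sketch with threads standing in for transactions (the item names are invented for illustration):

```python
import threading

locks = {"A": threading.Lock(), "B": threading.Lock()}
completed = []

def transfer(first_item, second_item):
    # Always acquire locks in sorted item-name order, even though the two
    # threads "want" them in opposite orders; this breaks the circular wait.
    ordered = sorted([first_item, second_item])
    with locks[ordered[0]]:
        with locks[ordered[1]]:
            completed.append((first_item, second_item))  # critical section

t1 = threading.Thread(target=transfer, args=("A", "B"))
t2 = threading.Thread(target=transfer, args=("B", "A"))
t1.start(); t2.start()
t1.join(); t2.join()  # both finish; without the ordering, A->B and B->A
                      # acquisition could deadlock
```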