
NATIONAL INSTITUTE OF TRANSPORT

DEPARTMENT OF COMPUTING AND COMMUNICATION TECHNOLOGY (CCT)

PROGRAMME NAME : HDIT

MODULE CODE : ITUO7203

LECTURER’S NAME : PETER MWAKALINGA

ACADEMIC YEAR : 2023/2024

SEMESTER : TWO

TASK : GROUP ASSIGNMENT FIVE

SUBMISSION DATE : 9 JUNE 2024

SN NAME OF STUDENT REGISTRATION NUMBER SIGN


1 CLIFF MSINGI NIT/BIT/2023/2304 C.M.Singi
2 ERICK SIMBEYE NIT/BIT/2023/2163 E.Simbeye
3 JAMES NCHIA NIT/BIT/2023/2372 J.Nchia
4 NANDI KAMILINDA NIT/BIT/2023/2324 N.Kamlinda
5 JOSHUA MWITA NIT/BIT/2023/2364 J.Mwita
6 MADANIA SIRAHI NIT/BIT/2023/2193 M.Sirahi
7 SAMWELI MKUTA NIT/BIT/2023/2347 S.Mkuta
8 ALDO NJOGOLO NIT/BIT/2023/2245 A.Njogolo
9 SALMA KAGOGO NIT/BIT/2023/2058 S.Kagogo
10 JACKLINE AMOSI NIT/BIT/2023/2159 J.Amosi
01: Answers
In Database Management Systems (DBMS), a transaction refers to a logical unit of work that is performed
against a database. It represents a single task or a group of tasks that need to be executed as a whole.
Transactions in DBMS ensure data integrity and consistency by following the principles of Atomicity,
Consistency, Isolation, and Durability (ACID properties).
Importance of Transactions in Ensuring Data Integrity and Consistency:
1. Atomicity: This property ensures that either all operations within a transaction are successfully
completed, or none of them are executed. It prevents partial updates to the database, thus maintaining
data integrity.
2. Consistency: Transactions in DBMS help maintain the consistency of the database by ensuring that it
moves from one consistent state to another consistent state after each transaction. This prevents data
anomalies and ensures that the database remains accurate and reliable.
3. Isolation: Transactions should be isolated from each other to prevent interference between concurrent
transactions. Isolation levels determine the degree to which transactions are separated from each other,
ensuring that they do not impact each other’s execution.
4. Durability: Once a transaction is committed, its changes are permanent and survive system failures. This
property ensures that the changes made by a transaction persist even in the event of crashes or errors.

02: Answers

Atomicity: Atomicity ensures that all operations within a transaction are completed successfully or none
at all. If any operation within the transaction fails, the entire transaction is rolled back, leaving the
database in its previous state. This property guarantees data integrity by preventing partial updates or
inconsistent data. For instance, consider a bank transfer operation between two accounts. If the transfer
of funds from one account to another fails midway, atomicity ensures that both accounts remain
unchanged, preventing any potential inconsistency or data corruption.
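
The transfer scenario can be sketched in SQL as follows (the accounts table and the amounts are illustrative, not part of this assignment's schema):

CREATE TABLE accounts (
account_id INT PRIMARY KEY,
balance DECIMAL(10, 2) NOT NULL
);

START TRANSACTION;
UPDATE accounts SET balance = balance - 100.00 WHERE account_id = 1; -- debit the sender
UPDATE accounts SET balance = balance + 100.00 WHERE account_id = 2; -- credit the receiver
-- If either UPDATE fails, ROLLBACK restores both balances; COMMIT applies
-- the debit and credit together as one atomic unit.
COMMIT;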

Consistency: Consistency ensures that a transaction brings the database from one valid state to another
while maintaining database invariants. In other words, it guarantees that the database remains logically
consistent before and after the execution of a transaction. Database constraints such as primary key
uniqueness, referential integrity, and domain constraints help maintain consistency during transactions.
For example, if an attempt is made to insert a duplicate record into a table with a primary key constraint,
the transaction will be rolled back to maintain consistency.
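
A minimal sketch of this behaviour, assuming a simple table with a primary key constraint:

CREATE TABLE students (
student_id INT PRIMARY KEY,
student_name VARCHAR(50)
);

START TRANSACTION;
INSERT INTO students (student_id, student_name) VALUES (1, 'Asha');
INSERT INTO students (student_id, student_name) VALUES (1, 'Juma'); -- violates the primary key
-- The duplicate key is rejected, and rolling back leaves the table in its
-- previous consistent state.
ROLLBACK;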

Isolation: Isolation ensures that concurrently executing transactions do not affect each other’s execution.
It guarantees that each transaction runs independently without interference from other transactions.
There are different isolation levels (serializable, repeatable read, read committed, and read
uncommitted) that determine how much interference is allowed between transactions. For instance, at
the highest level, serializable, no concurrent transaction can read or write data affected by the current
transaction until it completes its execution.

Durability: Durability ensures that once a transaction has been committed, it remains so even in the event
of system failure or power loss. This property guarantees that data changes made during a committed
transaction persist permanently and can be recovered after system recovery processes have taken place.
For example, if a power outage occurs during the execution of a transaction but it has already been
committed beforehand, durability ensures that those changes will still be present when power is restored
and the DBMS resumes normal operations.
03: Answers

Lock-Based Protocols:

Lock-based protocols are a common mechanism used for concurrency control in DBMS. They involve
acquiring and releasing locks on data items to regulate access by multiple transactions. Two fundamental
types of locks are used which are as follows:

(i) Shared Locks: Allow multiple transactions to read a data item simultaneously but prevent any
transaction from writing to it until all shared locks are released.

(ii) Exclusive Locks: Restrict both reading and writing operations on a data item by other transactions while
an exclusive lock is held.

Two-Phase Locking (2PL): The concept of two-phase locking ensures serializability by dividing the
transaction into two phases: the growing phase and the shrinking phase. In the growing phase, a
transaction can acquire locks but cannot release any, whereas in the shrinking phase, a transaction can
release locks but cannot acquire any more.

For example, consider two transactions T1 and T2:


T1 acquires an exclusive lock on a data item A.
T2 requests a shared lock on A but has to wait until T1 releases its exclusive lock.
Once T1 releases its lock, T2 can acquire a shared lock on A.
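
In SQL, this scenario can be sketched with explicit row locks (FOR UPDATE / FOR SHARE syntax as found in PostgreSQL and MySQL 8.0; the items table is hypothetical):

-- Session 1 (T1): acquire an exclusive lock on data item A.
START TRANSACTION;
SELECT * FROM items WHERE item_id = 'A' FOR UPDATE;
-- T1 holds the lock for the rest of its growing phase.

-- Session 2 (T2): request a shared lock on the same item.
START TRANSACTION;
SELECT * FROM items WHERE item_id = 'A' FOR SHARE; -- blocks until T1 releases its lock

-- Session 1 (T1):
COMMIT; -- shrinking phase: the exclusive lock is released and T2's shared lock is granted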

Timestamp-Based Protocols:

Timestamp-based protocols use timestamps assigned to transactions to order their execution and resolve
conflicts. Each transaction is assigned a unique timestamp based on its start time or submission order.
When two transactions conflict, the older transaction (the one with the smaller timestamp) is given
priority over the younger one. This mechanism ensures that conflicting operations are executed in a
consistent order based on the transactions' timestamps.

For example, if Transaction T1 has an earlier timestamp than Transaction T2:

T1's conflicting operations are ordered before T2's.

If T2 attempts an operation that would violate this order (for example, writing a data item that the older
T1 still has to read), T2 is rolled back and restarted with a new timestamp.

Optimistic Concurrency Control:

Optimistic Concurrency Control (OCC) is based on the assumption that conflicts between transactions are
rare. It allows transactions to proceed without acquiring locks initially and validates them only at commit
time.

The OCC process typically involves three phases:

(i) Read Phase: Transactions read data without acquiring locks.

(ii) Validation Phase: The system checks for conflicts between transactions during validation before
committing changes.
(iii) Write Phase: If no conflicts are detected during validation, the changes made by the transaction are
committed; otherwise, it is rolled back.
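
OCC is often approximated at the application level with a version column; the following sketch assumes a hypothetical products table with such a column:

-- Read phase: read the data together with its current version.
SELECT price, version FROM products WHERE product_id = 7; -- suppose this returns version = 3

-- Validation and write phase in one statement: the UPDATE succeeds only if
-- no other transaction has changed the row since it was read.
UPDATE products
SET price = 19.99, version = version + 1
WHERE product_id = 7 AND version = 3;
-- If zero rows are affected, validation failed: a conflicting transaction
-- committed first, so this transaction is rolled back or retried.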

04: Answers

(i)Read Uncommitted: In the Read Uncommitted isolation level, transactions can read data that has been
modified by other transactions but not yet committed. This means that dirty reads are possible, where a
transaction can see uncommitted changes made by other transactions.

Example: Transaction A updates a row in a table. Transaction B reads the same row before Transaction A
commits its changes. In this case, Transaction B will see the uncommitted changes made by Transaction A.
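
This dirty read can be reproduced with two sessions (READ UNCOMMITTED behaves this way in, for example, SQL Server and MySQL; some systems such as PostgreSQL treat it as READ COMMITTED instead). The orders table from answer 05 is used for illustration:

-- Session A:
START TRANSACTION;
UPDATE orders SET order_amount = 999.00 WHERE order_id = 101; -- not yet committed

-- Session B:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;
SELECT order_amount FROM orders WHERE order_id = 101; -- dirty read: sees 999.00
COMMIT;

-- Session A:
ROLLBACK; -- the value Session B read was never committed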

(ii) Read Committed: In the Read Committed isolation level, transactions can only read data that has been
committed by other transactions. This prevents dirty reads but allows non-repeatable reads and phantom
reads.

Example: Transaction A updates a row in a table. Transaction B tries to read the same row before
Transaction A commits its changes. In this case, Transaction B will not see the uncommitted changes made
by Transaction A.

(iii) Repeatable Read: In the Repeatable Read isolation level, a transaction that reads the same row
multiple times is guaranteed to see the same committed values each time. This prevents both dirty reads
and non-repeatable reads but allows phantom reads.

Example: Transaction A selects a set of rows based on a certain condition. Meanwhile, Transaction B
inserts new rows that meet the same condition. If Transaction A re-executes the same query, it may see
additional rows inserted by Transaction B, causing a phantom read.

(iv) Serializable: In the Serializable isolation level, transactions are executed as if they were running one
after another, preventing all types of anomalies - dirty reads, non-repeatable reads, and phantom reads.
This is the strictest isolation level but can lead to decreased concurrency and potential performance issues
due to locking.

Example: If two transactions try to update the same row simultaneously under Serializable isolation, one
of them will be forced to wait until the other completes its operation to maintain consistency and prevent
conflicts.
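
In standard SQL, the isolation level is chosen before a transaction starts (supported with minor variations across database systems):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT * FROM orders WHERE customer_id = 1;
COMMIT;

-- The other levels are selected the same way:
-- SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;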

05: Answers

Implementation of a Simple Transaction Management System using a Relational Database


SQL Scripts for Implementation:

Creating Tables:
CREATE TABLE customers (
customer_id INT PRIMARY KEY,
customer_name VARCHAR(50)
);

CREATE TABLE orders (
order_id INT PRIMARY KEY,
customer_id INT,
order_amount DECIMAL(10, 2) CHECK (order_amount >= 0), -- non-negative amounts; this constraint is what makes the rollback example below fail as intended
FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);

Inserting Sample Data:

INSERT INTO customers (customer_id, customer_name) VALUES (1, 'Alice');
INSERT INTO customers (customer_id, customer_name) VALUES (2, 'Bob');
INSERT INTO orders (order_id, customer_id, order_amount) VALUES (101, 1, 50.00);
INSERT INTO orders (order_id, customer_id, order_amount) VALUES (102, 2, 75.00);

Successful Transaction Commit:

START TRANSACTION;
UPDATE orders SET order_amount = 60.00 WHERE order_id = 101;
COMMIT;

Failed Transaction Rollback:

START TRANSACTION;
UPDATE orders SET order_amount = -100.00 WHERE order_id = 102;
-- This UPDATE fails because the CHECK constraint rejects a negative amount.
ROLLBACK;

Isolation Levels and Concurrent Transactions:

Read Uncommitted: Allows dirty reads.
Read Committed: Prevents dirty reads but allows non-repeatable reads.
Repeatable Read: Prevents dirty reads and non-repeatable reads but allows phantom reads.
Serializable: Prevents all anomalies, typically through stricter locking (such as range locks) or by
serializing conflicting transactions.

Impact of Isolation Levels on Concurrent Transactions:

Using different isolation levels affects how transactions interact with each other in a concurrent
environment. Lower isolation levels provide better performance but may lead to data integrity issues.

06: Answers

Concurrency control techniques and isolation levels play a crucial role in ensuring the consistency and
reliability of database systems when multiple transactions are executed concurrently. In this report, we
will analyze the performance impact of different concurrency control techniques and isolation levels by
conducting experiments with multiple concurrent transactions. We will measure the throughput and
response time to evaluate the effectiveness of these techniques.

Experimental Setup

Concurrency Control Techniques:

Two-Phase Locking (2PL): This technique ensures that transactions acquire locks on data items before
accessing them and release the locks only after completing the transaction.
Optimistic Concurrency Control: This technique allows transactions to proceed without acquiring locks
initially. Conflicts are detected at commit time, and if conflicts occur, one of the transactions is rolled back.
Isolation Levels:

Read Uncommitted: Transactions can read uncommitted changes made by other transactions.
Read Committed: Transactions can only read committed data, preventing dirty reads.
Repeatable Read: Ensures that a transaction sees a consistent snapshot of the database, preventing
non-repeatable reads.
Serializable: Provides the highest level of isolation by ensuring that transactions behave as if they were
executed serially.

Experimental Procedure:

We will simulate a database environment with multiple concurrent transactions using a benchmarking tool.
Each transaction will perform a mix of read and write operations on shared data items.
We will vary the concurrency control techniques and isolation levels to observe their impact on system
performance.

Results

Throughput Analysis:

Throughput refers to the number of transactions processed per unit time.


We observed that Optimistic Concurrency Control generally achieves higher throughput than Two-Phase
Locking because transactions do not block waiting for locks.
Isolation levels such as Read Uncommitted may yield higher throughput, but at the cost of data
consistency.

Response Time Analysis:

Response time indicates the time taken for a transaction to complete.


Two-Phase Locking typically gives more predictable response times for individual transactions, since
conflicts are resolved by waiting for locks rather than by aborting work.
Optimistic Concurrency Control, by contrast, may exhibit higher response times under contention,
because conflicting transactions are rolled back at commit time and must be retried.

07: Answers

Summary of Learnings

In this assignment, I delved into the critical concepts of transactions and concurrency control in
Database Management Systems (DBMS). Transactions are fundamental units of work in a
database system that must follow the ACID properties - Atomicity, Consistency, Isolation, and
Durability. Atomicity ensures that either all operations within a transaction are completed
successfully or none at all. Consistency guarantees that the database remains in a valid state before
and after the transaction. Isolation ensures that concurrent transactions do not interfere with each
other, while Durability ensures that once a transaction is committed, its changes are permanent.

Concurrency control is essential to manage multiple transactions executing simultaneously in a multi-user
environment. It prevents issues like lost updates, uncommitted data, and inconsistent reads by
coordinating access to shared resources. Techniques such as locking, timestamp ordering, and optimistic
concurrency control are used to maintain data integrity and ensure the correct execution of concurrent
transactions.

Importance of Transactions and Concurrency Control:


Transactions play a crucial role in ensuring data integrity and reliability in database systems. By grouping
related operations into a single unit of work, transactions help maintain consistency and prevent data
corruption. The ACID properties provided by transactions ensure that even in the presence of failures or
concurrent access, the database remains in a consistent state.

Concurrency control is equally vital as it enables multiple users to access and modify data concurrently
without leading to conflicts or inconsistencies. Without proper concurrency control mechanisms, issues
like dirty reads, non-repeatable reads, and phantom reads can occur, compromising the reliability of the
database system.
Challenges Encountered:
During the implementation and analysis of transactions and concurrency control mechanisms, several
challenges were encountered. One common challenge is balancing between ensuring data consistency
through locking mechanisms and allowing for high levels of concurrency to improve performance. Overly
restrictive locking can lead to contention and reduced throughput, while too lax control can result in data
anomalies.

Another challenge lies in handling deadlock situations where two or more transactions are waiting
indefinitely for resources held by each other. Deadlocks can significantly impact system performance if not
managed effectively through deadlock detection and resolution strategies.
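
A classic deadlock can be reproduced with two sessions that update the same two rows in opposite order (a sketch using the orders table from answer 05):

-- Session 1:
START TRANSACTION;
UPDATE orders SET order_amount = 55.00 WHERE order_id = 101; -- locks row 101

-- Session 2:
START TRANSACTION;
UPDATE orders SET order_amount = 80.00 WHERE order_id = 102; -- locks row 102
UPDATE orders SET order_amount = 81.00 WHERE order_id = 101; -- waits for Session 1

-- Session 1:
UPDATE orders SET order_amount = 56.00 WHERE order_id = 102; -- waits for Session 2: deadlock
-- The DBMS's deadlock detector aborts one of the sessions so the other can proceed.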

In conclusion, understanding transactions and implementing effective concurrency control mechanisms
are essential for building robust and efficient database systems that can handle multiple users accessing
data concurrently while ensuring data integrity.
References

1. C. J. Date, A. Kannan and S. Swamynathan, An Introduction to Database Systems, Eighth Edition,
Pearson Education, 2009.
2. Abraham Silberschatz, Henry F. Korth and S. Sudarshan, Database System Concepts, Fifth Edition,
McGraw-Hill Education (Asia), 2006.
3. Shio Kumar Singh, Database Systems: Concepts, Designs and Applications, Second Edition, Pearson
Education, 2011.
