DBMS Chap-5

The document discusses key concepts in database management systems, focusing on deadlocks, concurrency control, and transaction management. It explains deadlock definitions, recovery techniques, and the Two-Phase Commit Protocol for ensuring atomicity in distributed systems. Additionally, it covers granularity in concurrency control, ACID properties of transactions, and log-based recovery methods, including deferred and immediate database modification.

1] What is a deadlock? Explain deadlock recovery techniques.


Definition of Deadlock
A deadlock in a database occurs when two or more transactions are waiting for each other to
release locks on resources, creating a cyclic dependency. As a result, none of the transactions
can proceed, leading to a standstill in the system. Deadlocks are common in multi-user
environments and can severely impact database performance and reliability.
Example:
Transaction 1 locks Table A and requests a lock on Table B.
Transaction 2 locks Table B and requests a lock on Table A.
Both transactions are blocked indefinitely, as neither can proceed until the other releases its
lock.
Deadlock Recovery Techniques
Deadlock Detection:
The DBMS periodically checks for cycles in the wait-for graph (a representation of resource
dependencies).
When a deadlock is detected, one transaction is chosen as the "victim" and rolled back to
break the cycle.
Transaction Rollback:
The DBMS aborts one or more transactions involved in the deadlock.
The victim transaction is usually chosen based on criteria like transaction priority, resource
usage, or execution time.
Timeout Mechanism:
Transactions are automatically aborted if they exceed a predefined waiting time for resources.
This prevents indefinite blocking and resolves potential deadlocks.
Retry Logic:
Applications retry aborted transactions after introducing a randomized delay.
This reduces the likelihood of encountering the same deadlock again.
Deadlock Prevention:
Careful transaction design ensures resources are always acquired in a consistent order.
Locks are released as soon as possible to minimize contention.
By implementing these techniques, databases can recover from deadlocks efficiently and
maintain smooth operations.
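The detection step described above can be sketched as a cycle search over the wait-for graph. The code below is a minimal illustration, not a real DBMS component; the transaction names and the "pick the youngest transaction as victim" rule are assumptions made for the example.

```python
# Sketch of deadlock detection on a wait-for graph.
# Edge (T1 -> T2) means transaction T1 waits for a lock held by T2.

def find_cycle(wait_for):
    """Return a list of transactions forming a cycle, or None if deadlock-free."""
    visited, in_stack = set(), []

    def dfs(node):
        if node in in_stack:
            return in_stack[in_stack.index(node):]  # found a cycle
        if node in visited:
            return None
        visited.add(node)
        in_stack.append(node)
        for nxt in wait_for.get(node, []):
            cycle = dfs(nxt)
            if cycle:
                return cycle
        in_stack.pop()
        return None

    for start in wait_for:
        cycle = dfs(start)
        if cycle:
            return cycle
    return None

# T1 waits for T2 and T2 waits for T1 -> deadlock
graph = {"T1": ["T2"], "T2": ["T1"]}
cycle = find_cycle(graph)
# Victim selection: here, simply the youngest (highest-numbered) transaction
victim = max(cycle) if cycle else None
```

A real DBMS would weigh rollback cost, priority, and execution time when choosing the victim, as described above.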

SIDDHANT COLLEGE OF ENGINEERING



2] If we are to ensure atomicity, all the sites at which a transaction T executed must agree on the final outcome: T must either commit at all sites, or it must abort at all sites. Describe the Two-Phase Commit Protocol used to ensure this property in detail.
Two-Phase Commit Protocol (2PC)
The Two-Phase Commit Protocol (2PC) is an atomic commitment protocol used in
distributed database systems to ensure atomicity. It guarantees that all participating sites
either commit or abort a transaction, ensuring consistency across the distributed system. This
is particularly important when a transaction spans multiple nodes.
Phases of Two-Phase Commit Protocol
The protocol is divided into two phases:
1. Prepare Phase (Voting Phase): The Coordinator sends a Prepare request to all
participating sites (called cohorts).
Each cohort performs local checks to ensure it can commit the transaction (e.g., resource
availability, constraints).
Cohorts respond with either:
Yes: If they are ready to commit.
No: If they cannot commit.
The coordinator logs responses to decide the next step.
2. Commit Phase: If all cohorts respond with Yes, the coordinator sends a Commit message,
and all cohorts commit the transaction.
If any cohort responds with No, the coordinator sends an Abort message, and all cohorts roll
back the transaction.
Each cohort logs its final action for recovery purposes.
Benefits of 2PC
Ensures atomicity: All nodes either commit or abort together.
Guarantees consistency across distributed systems.
Limitations of 2PC
Blocking Nature: If the coordinator crashes during execution, cohorts may remain in an
uncertain state until recovery.
Communication Overhead: Requires multiple messages between coordinator and cohorts.
Susceptible to failures: Network or system failures can delay or disrupt transactions.
The Two-Phase Commit Protocol is widely used in distributed systems where atomicity is
critical for maintaining data integrity across multiple nodes.
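The two phases can be sketched in a few lines. This is a toy model of the decision logic only (the `Cohort` class and its methods are made up for illustration); a real implementation must also handle timeouts, coordinator crashes, and message loss.

```python
# Minimal sketch of the Two-Phase Commit decision logic.

class Cohort:
    def __init__(self, name, can_commit):
        self.name = name
        self.can_commit = can_commit  # outcome of local checks
        self.log = []

    def prepare(self):                 # Phase 1: vote
        vote = "Yes" if self.can_commit else "No"
        self.log.append(("prepare", vote))
        return vote

    def finish(self, decision):        # Phase 2: commit or abort
        self.log.append((decision,))   # logged for recovery purposes

def two_phase_commit(cohorts):
    votes = [c.prepare() for c in cohorts]                    # Prepare phase
    decision = "commit" if all(v == "Yes" for v in votes) else "abort"
    for c in cohorts:                                         # Commit phase
        c.finish(decision)
    return decision

# A single "No" vote forces a global abort:
outcome = two_phase_commit([Cohort("A", True), Cohort("B", True), Cohort("C", False)])
```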


3] How does the granularity of data items affect the performance of


concurrency control? What factors affect the selection of granularity size of
data items?
The granularity of data items refers to the size or level of detail of the data items chosen as
units for concurrency control (e.g., a field, record, block, file, or entire database). Granularity
significantly impacts the performance of concurrency control mechanisms as follows:
Fine Granularity: Smaller data items (e.g., individual records or fields) allow higher degrees
of concurrency because multiple transactions can access different parts of the database
simultaneously.
However, it increases overhead due to the need for more locks, lock management operations,
and larger lock tables.
Coarse Granularity: Larger data items (e.g., entire files or blocks) reduce lock management
overhead since fewer locks are required.
However, it limits concurrency because locking a large item prevents other transactions from
accessing any part of it, even if there is no actual conflict.
Factors Affecting Selection of Granularity Size
The choice of granularity size depends on several factors:
Transaction Access Patterns: Fine granularity is preferred for transactions accessing small
portions of the database randomly.
Coarse granularity is better for transactions accessing large contiguous portions.
System Resource Availability: Fine granularity requires more system resources (e.g.,
memory for lock tables), which may not be feasible in resource-constrained environments.
Concurrency Requirements: Systems with high concurrency demands benefit from fine
granularity to allow parallel access to different parts of the database.
Locking Overhead: Coarse granularity reduces locking overhead but may lead to contention
and delays in transaction execution.
Database Size and Structure: Larger databases with hierarchical structures may use coarse
granularity at higher levels and fine granularity at lower levels (e.g., multiple granularity
locking).
Performance Trade-offs: The balance between minimizing locking overhead and
maximizing transaction parallelism determines the optimal granularity size.
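The trade-off can be made concrete with a toy count of locks and conflicts; all record names and numbers below are hypothetical.

```python
# Toy illustration of the granularity trade-off: record-level locking needs
# many locks but permits concurrency; file-level locking needs one lock but
# blocks every transaction touching any record in the file.

records_touched = [f"rec{i}" for i in range(100)]   # one transaction's footprint

fine_locks = len(records_touched)   # one lock per record -> high overhead
coarse_locks = 1                    # one file-level lock -> low overhead

# A second transaction touching disjoint records:
other = {"rec500", "rec501"}
conflict_fine = bool(other & set(records_touched))  # disjoint -> no conflict
conflict_coarse = True              # whole file locked -> always conflicts
```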


4] Explain deadlock prevention and Recovery.


Deadlocks in database systems occur when transactions are waiting indefinitely for resources
held by each other, leading to a standstill. To handle deadlocks, systems employ prevention
and recovery techniques.
Deadlock Prevention: Deadlock prevention ensures that the system is structured in such a
way that deadlocks cannot occur. It involves imposing constraints on how transactions
acquire locks.
Techniques for Deadlock Prevention:
Wait-Die Scheme:
 If a transaction requests a lock on a resource already held by another transaction:
 If the requesting transaction is older (based on timestamps), it is allowed to wait.
 If the requesting transaction is younger, it is aborted and restarted later.
 This ensures older transactions are prioritized, avoiding circular waits.
Wound-Wait Scheme: If a transaction requests a lock on a resource already held by another
transaction:
 If the requesting transaction is older, it forces the younger transaction to abort
(wounds it).
 If the requesting transaction is younger, it waits.
 This prevents younger transactions from blocking older ones.
Pre-Acquisition of Locks: Transactions acquire all required locks before execution begins.
This prevents waiting during execution and eliminates cyclic dependencies.
Resource Ordering: Resources are assigned a global order, and transactions must request
locks in that order.
This avoids circular waits by ensuring consistent locking patterns.
Deadlock Recovery: Deadlock recovery involves detecting deadlocks and resolving them
after they occur.
Techniques for Deadlock Recovery:
Deadlock Detection: The system periodically checks for cycles in the wait-for graph, which
represents dependencies between transactions.
If a cycle is found, it indicates a deadlock.
Transaction Rollback: One or more transactions involved in the deadlock are aborted
(rolled back) to break the cycle.
The victim transaction is chosen based on criteria such as priority, resource usage, or
execution time.
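The Wait-Die and Wound-Wait rules above reduce to a simple timestamp comparison. A sketch, assuming a smaller timestamp means an older transaction (function names are made up for the example):

```python
# Wait-Die: older requester waits; younger requester is aborted ("dies").
def wait_die(requester_ts, holder_ts):
    return "wait" if requester_ts < holder_ts else "abort_requester"

# Wound-Wait: older requester aborts ("wounds") the holder; younger one waits.
def wound_wait(requester_ts, holder_ts):
    return "abort_holder" if requester_ts < holder_ts else "wait"

# Older transaction (ts=1) requests a lock held by a younger one (ts=5):
wait_die(1, 5)     # the older requester is allowed to wait
wound_wait(1, 5)   # the older requester forces the younger holder to abort
```

In both schemes the aborted transaction restarts with its original timestamp, so it eventually becomes the oldest and cannot starve.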


5] Illustrate the difference between a conflict serializable schedule and a view serializable schedule with an appropriate example.

6] What are the types of errors that may cause a transaction to fail?
1) Transaction Failure:
Logical Errors: These occur due to incorrect input or insufficient data to complete the transaction.
Syntax Errors: These happen when the DBMS cannot execute a transaction due to syntax issues, often resulting in the system aborting the transaction.
Deadlocks: Situations where two or more transactions wait indefinitely for each other to release resources, causing the transaction to fail.
2) System Crash/Failure:
Hardware Issues: Failures due to hardware problems like power supply disruptions or hardware malfunctions.
Software Issues: Errors caused by operating system bugs or database software issues.
3) Disk Failure:
Hardware Failure: Disk failures due to bad sectors or disk corruption.


7] What is concurrency control? Explain time stamping method.


Concurrency Control
Concurrency control is a mechanism in database management systems (DBMS) to manage
simultaneous operations without conflicts. It ensures that multiple transactions can execute
concurrently while maintaining data integrity and consistency. The main goals include
isolation, consistency, and correctness of transactions.
Timestamping Method
The timestamping method is a concurrency control protocol that assigns unique timestamps
to each transaction based on their start time. These timestamps determine the order in which
transactions access data, ensuring serializability. Key aspects of the method include:
Timestamp Assignment:
Each transaction is given a unique timestamp when it starts.
Older transactions are given priority over newer ones.
Read and Write Rules:
If a transaction attempts to read or write a data item, the system checks the timestamps to
ensure no conflicts arise.
Transactions violating the timestamp order are aborted and restarted with a new timestamp.
Advantages:
Prevents deadlocks as transactions are ordered.
Ensures serializability through strict timestamp-based ordering.
Disadvantages:
May lead to transaction starvation if newer transactions are frequently aborted.
Requires additional storage for timestamps and increases overhead.
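The read and write rules can be sketched with per-item read and write timestamps, as in the basic timestamp-ordering protocol. The dictionary representation below is an illustration, not a real DBMS structure.

```python
# Basic timestamp-ordering checks. For each data item we track the largest
# timestamp that has read it (rts) and written it (wts).

def to_read(item, ts):
    if ts < item["wts"]:               # a newer transaction already wrote it
        return "abort"                 # reader is too old: abort and restart
    item["rts"] = max(item["rts"], ts)
    return "ok"

def to_write(item, ts):
    if ts < item["rts"] or ts < item["wts"]:   # a newer txn read or wrote it
        return "abort"
    item["wts"] = ts
    return "ok"

A = {"rts": 0, "wts": 0}
to_write(A, 5)    # allowed: A now records wts = 5
to_read(A, 3)     # rejected: transaction 3 is older than the writer 5
```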


8] Explain the concept of Transaction. Describe ACID properties of


transaction.
Concept of Transaction
1)A transaction in a database is a sequence of operations performed as a single logical unit of
work.
2)It ensures that either all operations are executed successfully (commit) or none are executed
(rollback), maintaining the consistency and integrity of the database.
3)For example, transferring money between two accounts involves reading balances,
subtracting from one account, and adding to another—all treated as one atomic transaction.
ACID Properties of Transactions
The ACID properties ensure reliable transaction processing in a database:
Atomicity:
Ensures that all operations within a transaction are completed entirely or not at all.
If any operation fails, the entire transaction is rolled back, leaving the database unchanged.
Consistency:
Guarantees that the database remains in a consistent state before and after the transaction.
The transaction must adhere to all defined constraints and rules.
Isolation:
Ensures that transactions are executed independently without interference from other
concurrent transactions.
The outcome of a transaction is not affected by others running simultaneously.
Durability:
Ensures that once a transaction is committed, its changes are permanently stored in the
database.
The changes persist even in case of system failures like power outages.
These properties collectively ensure data integrity and reliability in multi-user environments.
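Atomicity can be illustrated with the fund-transfer example above; the account names, balances, and overdraft rule in this sketch are made up for the example.

```python
# Toy illustration of atomicity: the transfer either fully applies (commit)
# or leaves both balances exactly as they were (rollback).

def transfer(accounts, src, dst, amount):
    snapshot = dict(accounts)            # state to restore on rollback
    try:
        accounts[src] -= amount
        if accounts[src] < 0:            # consistency rule: no overdraft
            raise ValueError("insufficient funds")
        accounts[dst] += amount
        return "commit"
    except ValueError:
        accounts.clear()
        accounts.update(snapshot)        # rollback: undo the partial update
        return "rollback"

accts = {"A": 100, "B": 50}
transfer(accts, "A", "B", 30)    # commits: A=70, B=80
transfer(accts, "A", "B", 500)   # rolls back: balances left unchanged
```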


9] Explain Deferred database modification.


Deferred Database Modification
1)Deferred database modification is a technique used in transaction management and
recovery systems where changes to the database are not applied immediately during the
transaction's execution.
2)Instead, all updates are deferred until the transaction reaches its commit point. This ensures
atomicity and consistency, as changes are only applied if the transaction successfully
commits.
Key Features
Log-Based Recovery:
Updates are recorded in a log file during the transaction but are not applied to the database
immediately.
The log contains details like transaction ID, data item, old value, and new value.
Execution Process:
During transaction execution:
Updates are stored in a temporary workspace or log.
After the transaction commits:
Changes are applied to the database using the log entries.
If the transaction aborts or crashes before committing:
The log entries are ignored, and no changes are made to the database.
Advantages:
Ensures atomicity since changes are only made after a successful commit.
Prevents partial updates in case of system failures.
Disadvantages:
Requires additional storage for logs and temporary data.
Increases overhead due to delayed updates.
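The deferred execution process above can be sketched as follows; the log-record tuples are a simplified stand-in for real log entries.

```python
# Deferred database modification: writes go to the log first and are applied
# to the database only if the transaction reaches its commit point.

def run_deferred(db, log_records):
    pending = []
    for rec in log_records:
        if rec[0] == "write":             # ("write", item, new_value)
            pending.append(rec[1:])       # deferred: not applied yet
        elif rec[0] == "commit":
            for item, new in pending:     # apply all updates only now
                db[item] = new
            return "committed"
    return "aborted"                      # abort/crash before commit: db untouched

db = {"A": 10}
run_deferred(db, [("write", "A", 99), ("commit",)])   # db["A"] becomes 99
db2 = {"A": 10}
run_deferred(db2, [("write", "A", 99)])               # no commit: db2 unchanged
```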


10] Explain:
i) ACID Properties: ACID is an acronym representing the four essential properties of
database transactions that ensure data integrity and reliability:
Atomicity: Ensures that a transaction is treated as a single unit of work.
Either all operations in the transaction are executed successfully, or none are executed
(rollback on failure).
Consistency: Guarantees that a transaction takes the database from one consistent state to
another.
Ensures adherence to constraints like foreign keys and unique keys.
Isolation: Ensures that concurrent transactions do not interfere with each other.
Each transaction appears as if it is executed sequentially, preventing issues like dirty reads or
non-repeatable reads.
Durability: Ensures that once a transaction commits, its changes are permanently stored in
the database.
ii) Timestamp-Based Concurrency Control: Timestamp-based concurrency control is a
method used to manage concurrent transactions by assigning unique timestamps to each
transaction based on their start time. It ensures serializability by ordering transactions
according to their timestamps.
Key Features:
Timestamp Assignment: Each transaction is assigned a unique timestamp when it starts.
Older transactions (with smaller timestamps) are given priority over newer ones.
Read and Write Rules:
Read Rule: A transaction can read a data item only if no newer transaction has written to it.
Write Rule: A transaction can write to a data item only if no newer transaction has read or
written to it.
Conflict Resolution: If a transaction violates the timestamp order, it is aborted and restarted
with a new timestamp.
Advantages:
 Prevents deadlocks as transactions are strictly ordered.
 Ensures serializability through timestamp-based ordering.
Disadvantages:
 May lead to starvation of newer transactions due to frequent aborts.
 Requires additional storage for maintaining timestamps.


11] What is the need of Serializability?


Need for Serializability: Serializability is essential in database systems to ensure the
correctness and consistency of data during concurrent transaction execution. Here are the key
reasons why serializability is needed:
Data Consistency: Ensures that the database remains in a consistent state despite concurrent
transactions.
Prevention of Anomalies: Avoids issues like lost updates, dirty reads, and uncommitted data
by enforcing a controlled order of execution.
Correctness: Guarantees that the outcome of concurrent transactions is equivalent to
executing them sequentially, preserving logical correctness.
Isolation: Provides isolation by preventing interference between transactions, ensuring
independent operations.
Recoverability:
Helps in recovery processes by ensuring that failed transactions do not corrupt the database
state.
Timestamp-Based Concurrency Control
Timestamp-based concurrency control is a method to ensure serializability by assigning
unique timestamps to transactions based on their start time. It uses these timestamps to order
transaction operations.
Key Features:
 Timestamp Assignment:
 Each transaction is given a unique timestamp when it begins.
 Older transactions (with smaller timestamps) are prioritized over newer ones.
Read and Write Rules:
Read Rule: A transaction can read a data item only if no newer transaction has written to it.
Write Rule: A transaction can write to a data item only if no newer transaction has read or
written to it.
Conflict Resolution: Transactions violating timestamp order are aborted and restarted with
new timestamps.
Advantages:
 Prevents deadlocks due to strict ordering.
 Ensures serializability through timestamp-based scheduling.
Disadvantages:
 May lead to starvation of newer transactions due to frequent aborts.
 Requires additional storage for maintaining timestamps.


12] What is Log Based Recovery? Explain Deferred Database Modification


and Immediate Database Modification.
Log-Based Recovery
1)Log-based recovery is a technique used in database management systems (DBMS) to
ensure data consistency and recoverability in case of system failures, such as crashes or
power outages.
2)It relies on maintaining a transaction log that records all operations performed during
transactions. The log contains information such as transaction start, updates (old and new
values), commits, and aborts.
3) This mechanism allows the DBMS to undo incomplete transactions and redo committed
ones to restore the database to a consistent state.
Deferred Database Modification
Deferred database modification is a technique in log-based recovery where changes made by
a transaction are not applied to the database immediately. Instead, all updates are recorded in
the log and applied only after the transaction commits.
Key Features:
Execution Process:
During transaction execution, updates are stored in a log but not applied to the database.
After the transaction commits, changes are applied using the log entries.
Advantages:
Ensures atomicity since no changes are made until commit.
Reduces risk of partial updates due to system failures.
Disadvantages:
Requires additional storage for logs.
Delayed updates increase recovery time.
Example:
If a transaction modifies Account A and Account B:
Log: <T1, Start>, <T1, A, OldValue, NewValue>, <T1, B, OldValue, NewValue>, <T1,
Commit>
Changes are applied only after commit.


Immediate Database Modification


Immediate database modification is another technique where changes made by a transaction
are applied to the database immediately during its execution. However, these changes are
logged before being applied to ensure recoverability.
Key Features:
Execution Process:
Updates are logged and applied to the database during transaction execution.
If the transaction fails before committing, undo operations are performed using the log.
Advantages:
Reduces recovery time since changes are already applied.
Allows partial updates to be undone using logs.
Disadvantages:
Higher risk of inconsistencies if recovery is not properly handled.
Increased complexity due to simultaneous logging and updating.
Example:
If a transaction modifies Account A and Account B:
Log: <T1, Start>, <T1, A, OldValue, NewValue>, <T1, B, OldValue, NewValue>, <T1,
Commit>
Changes are applied immediately during execution.
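The undo step of immediate modification can be sketched as follows; the log format mirrors the <T, item, old, new> records above, but the function names are made up for the example.

```python
# Immediate database modification: each write is logged with its old value
# BEFORE being applied, so an uncommitted transaction can be undone.

def write(db, log, txn, item, new_value):
    log.append((txn, item, db[item], new_value))   # <T, item, old, new>
    db[item] = new_value                           # applied immediately

def undo(db, log, txn):
    for t, item, old, _new in reversed(log):       # undo in reverse order
        if t == txn:
            db[item] = old                         # restore the old value

db, log = {"A": 100, "B": 50}, []
write(db, log, "T1", "A", 70)
write(db, log, "T1", "B", 80)
# T1 fails before commit -> undo restores both old values from the log
undo(db, log, "T1")
```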


13] Write a note on “Shadow Paging”.


Shadow Paging
1)Shadow paging is a recovery technique used in database management systems (DBMS) to
ensure data consistency and reliability during transaction processing.
2)It works by maintaining two versions of the database state: the shadow page table
(representing the pre-transaction state) and the current page table (reflecting updates made
during the transaction).
3)This approach eliminates the need for log files and simplifies crash recovery.
Key Features of Shadow Paging
Page Tables:
 The database is divided into fixed-size pages.
 The shadow page table maps logical pages to physical storage blocks before the transaction starts.
 The current page table tracks changes made during the transaction.
Transaction Execution:
 Updates are applied to new physical pages, leaving the shadow page table unchanged.
 If the transaction commits, the current page table replaces the shadow page table.
Crash Recovery:
 If a system crash occurs, the database reverts to the shadow page table, ensuring consistency.

Advantages
No Log Files: Eliminates the overhead of maintaining log files.
Fast Recovery: Crash recovery is quick since only page tables are swapped.
Isolation: Transactions operate on separate copies, reducing interference.

Disadvantages
Commit Overhead: Flushing all modified pages during commit increases overhead.
Data Fragmentation: Pages may become scattered, leading to fragmentation.
Garbage Collection: Old versions of modified pages require cleanup after transactions.
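The page-table swap can be sketched with dictionaries standing in for the page tables and physical blocks; this is an illustration of the idea, not a storage engine.

```python
# Shadow paging sketch: updates go to FRESH physical blocks through a copied
# page table; commit swaps the tables, and a crash simply keeps the shadow.

storage = {0: "old A", 1: "old B"}        # physical blocks on disk
shadow_table = {"pageA": 0, "pageB": 1}   # pre-transaction mapping

current_table = dict(shadow_table)        # transaction start: copy the table
storage[2] = "new A"                      # write the update to a NEW block
current_table["pageA"] = 2                # only the current table changes

def read(table):
    """Materialize the database state a page table points to."""
    return {page: storage[block] for page, block in table.items()}

read(shadow_table)["pageA"]    # still "old A": shadow state is untouched
# On commit: shadow_table = current_table (one atomic pointer swap in practice)
# On crash before commit: shadow_table is still valid, so nothing to undo.
```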


14]What is transaction? Explain ACID properties of transaction.


Transaction
1)A transaction is a sequence of database operations (e.g., reads, writes) executed as a single
logical unit of work.
2)It ensures that either all operations complete successfully (commit) or none are applied
(rollback), maintaining database integrity.
3) For example, transferring funds between bank accounts involves deducting from one
account and crediting another—both actions must succeed or fail together.
ACID Properties
ACID ensures reliable transaction processing in databases:
Atomicity
Guarantees that a transaction is treated as an indivisible unit.
Example: If a transaction fails midway (e.g., power outage), all changes are rolled back to the
pre-transaction state.
Consistency
Ensures the database transitions from one valid state to another.
Example: A transaction cannot violate constraints (e.g., negative account balances).
Isolation
Ensures concurrent transactions do not interfere.
Example: Two transactions updating the same data are scheduled to avoid conflicts (e.g.,
dirty reads).
Durability
Committed changes persist even after system failures.
Example: Once a fund transfer is confirmed, the changes are stored permanently on disk.
Importance of ACID
ACID properties ensure data reliability, correctness, and resilience in multi-user
environments, critical for applications like banking, e-commerce, and inventory management.


15] What is the need of two phase locking protocol? Explain.


1)The Two-Phase Locking (2PL) Protocol is essential in database systems to ensure
serializability and maintain the integrity of concurrent transactions.
2) It addresses challenges like data conflicts, inconsistencies, and deadlocks when multiple
transactions access or modify the same data simultaneously.
Ensures Serializability:
2PL guarantees that concurrent transactions produce results equivalent to a serial schedule,
maintaining data consistency.
Prevents Conflicts:
By controlling when locks are acquired and released, 2PL avoids issues like dirty reads, lost
updates, and uncommitted data dependencies.
Maintains Isolation:
It ensures that transactions do not interfere with each other during execution, fulfilling the
isolation property of ACID.
Supports Recoverability:
Variations like Strict 2PL prevent cascading rollbacks by ensuring that no transaction reads
uncommitted data.
Explanation of Two-Phase Locking Protocol
The 2PL protocol divides the transaction into two distinct phases:
Growing Phase:
Transactions acquire locks (shared or exclusive) but cannot release any locks during this
phase.
This phase continues until the transaction reaches its "lock point," where all required locks
are acquired.
Shrinking Phase:
Transactions release locks but cannot acquire new ones.
This phase begins after the lock point, ensuring no additional resources are locked.
Benefits of Two-Phase Locking
 Guarantees conflict serializability.
 Prevents inconsistent states during concurrent execution.
 Variants like Strict 2PL and Rigorous 2PL improve recoverability and avoid
cascading rollbacks.
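The growing/shrinking discipline can be enforced with a single flag per transaction; the class below is a toy sketch, not a full lock manager (it ignores lock modes and other transactions).

```python
# Sketch of a transaction that enforces the two-phase rule: once any lock is
# released (shrinking phase begins), no new lock may be acquired.

class TwoPhaseTxn:
    def __init__(self):
        self.locks = set()
        self.shrinking = False

    def acquire(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violation: acquire after release")
        self.locks.add(item)            # growing phase

    def release(self, item):
        self.shrinking = True           # lock point passed; shrinking begins
        self.locks.discard(item)

t = TwoPhaseTxn()
t.acquire("A")
t.acquire("B")     # growing phase: acquiring is allowed
t.release("A")     # shrinking phase begins
# t.acquire("C") would now raise RuntimeError
```

Strict 2PL goes further and holds all exclusive locks until commit, which is what prevents cascading rollbacks.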


16] What is Serializable schedule? Explain with suitable example the types
of serializable schedules.
Serializable Schedule
A serializable schedule is a sequence of operations from multiple transactions that can be
executed concurrently while maintaining the same outcome as if they were executed
sequentially. This ensures that the final state of the database is consistent and adheres to all
integrity constraints, even when transactions overlap in their execution.
Conflict Serializable Schedule:
A schedule is conflict serializable if it can be transformed into a serial schedule by swapping
non-conflicting operations.
Example: Consider two transactions T1 and T2 accessing different data items A and B. If T1
reads A and writes A, and T2 reads B and writes B, this schedule is conflict serializable
because it can be rearranged into a serial order (e.g., T1 then T2).
View Serializable Schedule: A schedule is view serializable if it is view equivalent to a
serial schedule, meaning the final state of the database is the same as if transactions were
executed sequentially.
Example: Suppose T1 reads A, writes A, and T2 reads A, writes A. If the final value of A is
the same as if T1 executed before T2 or vice versa, the schedule is view serializable.
Example of Serializable Schedule
Consider two transactions T1 and T2:
T1        T2
R(A)
W(A)
          R(B)
          W(B)
R(C)
W(C)
          R(C)
          W(C)
This schedule is serializable because it is equivalent to the serial order T1 followed by T2: A and B are each accessed by only one transaction, and T1 finishes its operations on C before T2 touches C.
Non-Serializable Schedule Example
If T2 reads A after T1 has read it but before T1 writes it, and both transactions then write A, the schedule is not serializable: R1(A) before W2(A) requires T1 to precede T2, while R2(A) before W1(A) requires T2 to precede T1, so no equivalent serial order exists.
T1        T2
R(A)
          R(A)
W(A)
          W(A)
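The conflict-serializability test can be sketched as building a precedence graph and checking it for cycles; the schedule encoding below (a list of (transaction, operation, item) tuples) is an assumption made for the example.

```python
# Conflict serializability via the precedence graph: an edge Ti -> Tj means
# an operation of Ti conflicts with and precedes one of Tj. The schedule is
# conflict serializable iff the graph is acyclic.

def precedence_edges(schedule):
    """schedule: list of (txn, op, item) in execution order, op in {'R','W'}."""
    edges = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[i + 1:]:
            # two ops conflict if different txns touch the same item
            # and at least one of them is a write
            if ti != tj and x == y and (op_i == "W" or op_j == "W"):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
    def dfs(n, stack):
        if n in stack:
            return True
        return any(dfs(m, stack | {n}) for m in adj.get(n, []))
    return any(dfs(n, set()) for n in adj)

# The non-serializable schedule above: R1(A), R2(A), W1(A), W2(A)
bad = [("T1", "R", "A"), ("T2", "R", "A"), ("T1", "W", "A"), ("T2", "W", "A")]
has_cycle(precedence_edges(bad))    # True: not conflict serializable
```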

17] What is concurrency control? Explain time stamp based concurrency


control.
Concurrency Control
Concurrency control is a mechanism in database management systems (DBMS) that ensures
the consistency, integrity, and isolation of data when multiple transactions are executed
simultaneously. It prevents issues such as dirty reads, lost updates, and non-repeatable reads
by managing access to shared resources. The main goals of concurrency control include:
Isolation: Ensuring transactions do not interfere with each other.
Consistency: Maintaining database rules and constraints during concurrent operations.
Correctness: Guaranteeing that the database state reflects the intended results of all
transactions.
Timestamp-Based Concurrency Control
Timestamp-based concurrency control is a protocol used to manage concurrent transactions
by assigning unique timestamps to each transaction based on their start time. These
timestamps determine the order of execution, ensuring serializability.
Key Features:
Timestamp Assignment:
Each transaction is assigned a unique timestamp when it begins.
Older transactions (with smaller timestamps) are given priority over newer ones.
Read and Write Rules:
Read Rule: A transaction can read a data item only if no newer transaction has written to it.
Write Rule: A transaction can write to a data item only if no newer transaction has read or
written to it.
Conflict Resolution:
If a transaction violates the timestamp order, it is aborted and restarted with a new timestamp.
Advantages:
Prevents deadlocks due to strict ordering of transactions.
Ensures serializability by enforcing timestamp-based scheduling.
Disadvantages:
May lead to starvation of newer transactions due to frequent aborts.
Requires additional storage for maintaining timestamps.
