Log based Recovery in DBMS
Last Updated :
30 Jul, 2025
Log-based recovery in DBMS ensures that data can be restored to a consistent state in the event of a system failure. The DBMS records every transaction on stable storage, allowing data to be recovered when a failure occurs. For each operation performed on the database, a log record is created. Log records are written before the corresponding changes are applied to the database, ensuring data integrity.
Log in DBMS
A log is a sequence of records that document the operations performed during database transactions. These records are stored in a log file, providing a mechanism to recover data in the event of a failure. For every operation executed on the database, a corresponding log record is created. It is critical to write these log records before the actual operations are applied to the database, ensuring data integrity and consistency during recovery.
For example, consider a transaction to modify a student's city. This transaction generates the following logs:
1. Start Log: When the transaction begins, a log is created to indicate the start of the transaction.
- Format: <Tn, Start>
- Here, Tn represents the transaction identifier.
- Example: <T1, Start> indicates that Transaction 1 has started.
2. Operation Log: When the city is updated, a log is recorded to capture the old and new values of the operation.
- Format: <Tn, Attribute, Old_Value, New_Value>
- Example: <T1, City, 'Gorakhpur', 'Noida'> shows that in Transaction 1, the value of the City attribute has changed from 'Gorakhpur' to 'Noida'.
3. Commit Log: Once the transaction is successfully completed, a final log is created to indicate that the transaction has been completed and the changes are now permanent.
- Format: <Tn, Commit>
- Example: <T1, Commit> signifies that Transaction 1 has been successfully completed.
These logs play a crucial role in ensuring that the database can recover to a consistent state after a system crash. If a failure occurs, the DBMS can use these logs to either roll back incomplete transactions or redo committed transactions to maintain data consistency.
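As a concrete illustration, the three record types above can be modeled with plain Python tuples. This is a simplified sketch; the field layout is illustrative and not the on-disk format of any particular DBMS.

```python
# Minimal sketch of a transaction log as a list of tuples.
log = []

def log_start(tid):
    # Start log: marks the beginning of a transaction.
    log.append((tid, "Start"))

def log_update(tid, attribute, old_value, new_value):
    # Operation log: records both the old and the new value.
    log.append((tid, attribute, old_value, new_value))

def log_commit(tid):
    # Commit log: marks the transaction as permanent.
    log.append((tid, "Commit"))

# The student-city transaction from the example above:
log_start("T1")
log_update("T1", "City", "Gorakhpur", "Noida")
log_commit("T1")

print(log)
# [('T1', 'Start'), ('T1', 'City', 'Gorakhpur', 'Noida'), ('T1', 'Commit')]
```

Keeping the old value in every operation record is what makes undo possible later; the new value is what redo relies on.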
Key Operations in Log-Based Recovery
Undo Operation
The undo operation reverses the changes made by an uncommitted transaction, restoring the database to its previous state.
Example of Undo: Consider a transaction T1 that updates a bank account balance but fails before committing:
Initial State: Account balance = 500.
Transaction T1:
- Update balance to 600.
- Log entry:
<T1, Balance, 500, 600>
Failure: T1 fails before committing.
Undo Process:
- Use the old value from the log to revert the change.
- Set balance back to 500.
- Final log entry after undo:
<T1, Abort>
Redo Operation
The redo operation re-applies the changes made by a committed transaction to ensure consistency in the database.
Example of Redo: Consider a transaction T2 that updates an account balance but the database crashes before changes are permanently reflected:
Initial State: Account balance = 300.
Transaction T2:
- Update balance to 400.
- Log entries:
<T2, Start>
<T2, Balance, 300, 400>
<T2, Commit>
Crash: Changes are not reflected in the database.
Redo Process:
- Use the new value from the log to reapply the committed change.
- Set balance to 400.
Undo-Redo Example:
Assume two transactions:
- T1: Failed transaction (requires undo).
- T2: Committed transaction (requires redo).
Log File:
<T1, Start>
<T1, Balance, 500, 600>
<T2, Start>
<T2, Balance, 300, 400>
<T2, Commit>
Identify Committed and Uncommitted Transactions:
- T1: Not committed → Undo.
- T2: Committed → Redo.
Undo T1: Revert balance from 600 to 500.
Redo T2: Reapply balance change from 300 to 400.
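The undo/redo decision above can be sketched in Python. Because the example log reuses the attribute name Balance for two different accounts, the sketch renames them BalanceX and BalanceY (an assumption made purely for clarity):

```python
# Log from the undo-redo example; T1 never committed, T2 did.
log = [
    ("T1", "Start"),
    ("T1", "BalanceX", 500, 600),
    ("T2", "Start"),
    ("T2", "BalanceY", 300, 400),
    ("T2", "Commit"),
]

# Transactions with a commit record get redone; the rest get undone.
committed = {tid for tid, *rest in log if rest == ["Commit"]}

# Hypothetical on-disk state at crash time: T1's update reached disk,
# T2's did not.
db = {"BalanceX": 600, "BalanceY": 300}

# Redo committed updates in forward log order, using new values.
for tid, *rest in log:
    if len(rest) == 3 and tid in committed:
        attr, old, new = rest
        db[attr] = new

# Undo uncommitted updates in reverse log order, using old values.
for tid, *rest in reversed(log):
    if len(rest) == 3 and tid not in committed:
        attr, old, new = rest
        db[attr] = old

print(db)  # {'BalanceX': 500, 'BalanceY': 400}
```

Note the ordering: redo replays history forward, while undo walks the log backward so that the earliest change is the last one reverted.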
| Operation | Trigger | Action |
|---|---|---|
| Undo | For uncommitted/failed transactions | Revert changes using the old values in the log. |
| Redo | For committed transactions | Reapply changes using the new values in the log. |
These operations ensure data consistency and integrity in the event of system failures.
Approaches to Modifying the Database
In database systems, changes to the database can be made using two main methods: Immediate Modification and Deferred Modification.
1. Immediate Modification
In the Immediate Modification method, the database is updated as soon as a change is made during a transaction, even before the transaction is committed. Log records are written before any change is applied, so recovery remains possible in case of a system failure.
Transaction T0:
- Start: Transaction T0 begins, and the log records <T_0 start>.
- Change to A: A is updated from 500 to 450. The change is logged as <T_0, A, 500, 450>, and A's new value is reflected in memory.
- Change to B: B is updated from 300 to 350. The change is logged as <T_0, B, 300, 350>, and B's new value is reflected in memory.
- Commit: After both changes are completed, T0 is committed. The log records <T_0 commit>, and the new values of A and B are permanently written to storage.
Transaction T1:
- Start: Transaction T1 begins, and the log records <T_1 start>.
- Change to C: C is updated from 200 to 180. The change is logged as <T_1, C, 200, 180>, and C's new value is reflected in memory.
- Commit: After the change, T1 is committed. The log records <T_1 commit>, and the new value of C is permanently written to storage.
Key Characteristics of Immediate Modification:
- Changes Are Applied Immediately: Updates to the database are made as soon as a transaction executes an operation, even before the transaction commits.
- Requires Undo and Redo for Recovery: Uncommitted changes are reverted using undo, while committed changes are reapplied using redo during recovery.
- Logs Are Written First: All changes are logged before being applied to ensure recoverability and consistency in case of failure.
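A minimal sketch of immediate modification over a simple in-memory key-value store (an assumed stand-in for the database): the log record is appended before the value changes, matching the write-ahead ordering described above.

```python
db = {"A": 500, "B": 300}
log = []

def write(tid, attr, new_value):
    # Write-ahead: the log record goes in BEFORE the change is applied.
    log.append((tid, attr, db[attr], new_value))
    # Then the database is updated immediately, even though the
    # transaction has not committed yet.
    db[attr] = new_value

log.append(("T0", "Start"))
write("T0", "A", 450)   # db reflects A = 450 right away
write("T0", "B", 350)   # db reflects B = 350 right away
log.append(("T0", "Commit"))

print(db)  # {'A': 450, 'B': 350}
```

If a crash happened between the two writes, the old values preserved in the log would be enough to undo the half-finished transaction.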
2. Deferred Modification
In the Deferred Modification method, changes to the database are not applied immediately. Instead, they are logged and stored temporarily. The database is only updated after the transaction is fully committed. This method ensures that no partial changes are made to the database, reducing the risk of inconsistency.
Transaction T0:
- Start: Transaction T0 begins, and the log records <T_0 start>.
- Change to A: A is intended to be updated from 1000 to 950. The log entry <T_0, A, 1000, 950> is recorded, but the change is not applied to the database yet; the value of A in the database remains 1000.
- Change to B: B is intended to be updated from 2000 to 2050. The log entry <T_0, B, 2000, 2050> is recorded, but the change is not applied to the database yet; the value of B in the database remains 2000.
- Commit: When T0 commits, the changes to A and B are applied to the database: A = 950, B = 2050.
Transaction T1:
- Start: Transaction T1 begins, and the log records <T_1 start>.
- Change to C: C is intended to be updated from 700 to 600. The log entry <T_1, C, 700, 600> is recorded, but the change is not applied to the database yet; the value of C in the database remains 700.
- Commit: When T1 commits, the change to C is applied to the database: C = 600.
Key Characteristics of Deferred Modification:
- Changes Are Logged First: All updates are recorded in the log before any changes are applied to the database.
- Changes Are Applied Only After Commit: No updates are made to the database until the transaction commits. This prevents partial changes in case of a failure.
- Simpler Recovery Process: Since no changes are applied before commit, only redo operations are needed for recovery.
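The deferred approach can be sketched the same way: updates are buffered in the log and applied only when the transaction commits. Names and record layout are illustrative, not a real DBMS API.

```python
db = {"A": 1000, "B": 2000}
log = []

def write(tid, attr, new_value):
    # Record the intended change; do NOT touch the database yet.
    log.append((tid, attr, db[attr], new_value))

def commit(tid):
    # Apply this transaction's buffered changes, then log the commit.
    for rec in list(log):
        if len(rec) == 4 and rec[0] == tid:
            _, attr, old, new = rec
            db[attr] = new
    log.append((tid, "Commit"))

write("T0", "A", 950)
write("T0", "B", 2050)
assert db == {"A": 1000, "B": 2000}  # nothing applied before commit
commit("T0")

print(db)  # {'A': 950, 'B': 2050}
```

Because nothing reaches the database before commit, a crash mid-transaction leaves the database untouched, which is why this scheme needs only redo during recovery.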
Recovery Using Log Records
Log-based recovery is a method used in database systems to restore the database to a consistent state after a crash or failure. The process uses a transaction log, which keeps a record of all operations performed on the database, including updates, inserts, deletes, and transaction states (start, commit, or abort).
How Log-Based Recovery Works
Transaction Log:
- The log stores all changes made by transactions, ensuring recoverability.
- Each transaction's start, changes (with old and new values), and its commit or abort state are recorded.
Recovery Process:
After a system crash, the database uses the log to determine:
- Undo: Transactions that started but didn’t commit (incomplete transactions) are undone to reverse their changes.
- Redo: Transactions that committed before the crash are redone to ensure their changes are applied to the database.
Recovery Using Checkpoints
Checkpointing is a process used in DBMS to streamline the recovery procedure after a system crash by reducing the amount of log data that needs to be examined. It helps save the current state of the database and active transactions to make recovery faster and more efficient.
For example, the record <checkpoint L> means the database state and the list of active transactions (L) were saved.
When a crash occurs, recovery involves the following steps:
1. Find the Most Recent Checkpoint: Scan the log backward to locate the last <checkpoint L> record.
2. Identify Relevant Transactions: Continue scanning backward until the <Ti start> record of the oldest transaction that was active at the time of the checkpoint is found. Transactions that completed before the checkpoint can be ignored, as their updates are already saved.
3. Perform Undo Operations: For transactions without a <Ti commit> record, execute undo(Ti) to reverse their incomplete changes. (Only needed in the immediate modification approach.)
4. Perform Redo Operations: For transactions with a <Ti commit> record, execute redo(Ti) to reapply their changes if needed.
Example: Suppose a checkpoint occurred while T67 and T69 were active.
- Transactions T0 to T66 and T68: Completed before the checkpoint → No action needed.
- Transactions T67 and T69 to T100: Active during or started after the checkpoint → Need to be checked:
- If committed or aborted → Redo (an aborted transaction's compensating log records are replayed as well).
- If incomplete → Undo.
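The checkpoint scan above can be sketched as follows, assuming a simplified log format in which the checkpoint record carries the list of transactions active at checkpoint time:

```python
# Simplified log: a checkpoint record holds the active-transaction list.
log = [
    ("T66", "Commit"),
    ("checkpoint", ["T67", "T69"]),  # T67 and T69 active at checkpoint
    ("T67", "X", 10, 11),
    ("T69", "Commit"),
    ("T100", "Start"),
    ("T100", "Y", 5, 7),
]

# 1. Find the most recent checkpoint record.
cp_index = max(i for i, rec in enumerate(log) if rec[0] == "checkpoint")

# 2. Transactions to check: active at the checkpoint, plus any that
#    appear in the log after it. Everything earlier is already saved.
active = set(log[cp_index][1])
for rec in log[cp_index + 1:]:
    active.add(rec[0])

committed = {rec[0] for rec in log if len(rec) == 2 and rec[1] == "Commit"}

# 3./4. Partition into redo and undo sets.
redo = sorted(active & committed)
undo = sorted(active - committed)
print("redo:", redo)  # redo: ['T69']
print("undo:", undo)  # undo: ['T100', 'T67']
```

T66 committed before the checkpoint, so it never enters the candidate set; only T67, T69, and T100 need to be examined.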
Advantages of Log-Based Recovery
- Durability: The log, kept on stable storage, provides a dependable way to recover data after a failure. It guarantees that no committed transaction is lost in the event of a system crash.
- Faster Recovery: Because recovery replays committed transactions directly from the log file, it is typically faster than restoring from a full backup.
- Incremental Backup: Log-based recovery supports incremental backups. Only the changes made since the last backup are kept in the log file, rather than a complete copy of the database each time.
- Lower Risk of Data Corruption: By ensuring that every transaction is properly committed or rolled back before its changes become permanent, log-based recovery lowers the risk of data corruption.
Disadvantages of Log-Based Recovery
- Additional overhead: Maintaining the log file incurs an additional overhead on the database system, which can reduce the performance of the system.
- Complexity: Log-based recovery is a complex process that requires careful management and administration. If not managed properly, it can lead to data inconsistencies or loss.
- Storage space: The log file can consume a significant amount of storage space, especially in a database with a large number of transactions.
- Time-Consuming: The process of replaying the transactions from the log file can be time-consuming, especially if there are a large number of transactions to recover.
Related article
Checkpoints in DBMS.