Chapter 6

The document discusses transaction management in databases. It defines transactions and describes how they ensure consistency even with concurrent access. It also covers transaction properties, states, scheduling, and techniques for database restoration in the event of failures.


DATABASES

6. Transaction management

[email protected]
Chapter 6. Transaction management

• Transactions definition
• Concurrent access anomalies
• Transaction properties
• Transaction states
• Scheduling transactions
• Concurrency control
• Database restoration techniques
Transactions definition

• In a typical DBMS, multiple users can connect to, access and
manipulate the same data concurrently
• Concurrent execution of multiple processes can take place:
• in a uniprocessor system, by sharing execution time of the processor between
multiple processes
• in a multiprocessor system, where multiple processes can be executed
simultaneously on multiple processors

• A transaction is a logical unit of indivisible (atomic) data
processing that ensures data consistency
• Ensuring consistency means that data changes are correct in all cases
Transactions definition

• A transaction must ensure data consistency in any situation:
• individual or concurrent execution with other transactions
• preventing consistency loss in case of (non-catastrophic) system failures

• A transaction is an indivisible database access operation which either:
• successfully runs all its actions and ends with a validation of the changes
(commit to the database), or
• fails to run all its actions (for whatever reason), in which case the
changes are reverted and the transaction is aborted (rollback to the previous
state)
Transactions definition

• Example: An airline reservation system (simplification)


• For booking a seat on a flight, several operations are performed
• Check the availability of the requested flight
• If available seats were found, INSERT the passenger data
• Book the passenger on the requested flight (INSERT)
• Associate the passenger with the requested seat
• Create a new invoice (INSERT in the invoices table)

• If an operation fails between any of these steps, the reservation will
be incomplete and the integrity of the data will be lost
• If two or more users try to book the same seat, there will be
problems when boarding the passengers
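• The all-or-nothing behaviour of this booking flow can be sketched as a single transaction, for example in Python with SQLite; the schema, table and column names below are invented for illustration and are not the annex code:

```python
import sqlite3

# Minimal in-memory schema for the sketch (hypothetical names).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE seats(flight INT, seat TEXT, taken INT DEFAULT 0);
    CREATE TABLE passengers(id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE invoices(id INTEGER PRIMARY KEY, passenger INT);
    INSERT INTO seats VALUES (1, '12A', 0);
""")

def book(con, flight, seat, name):
    try:
        with con:  # opens a transaction; commits on success, rolls back on error
            row = con.execute(
                "SELECT taken FROM seats WHERE flight=? AND seat=?",
                (flight, seat)).fetchone()
            if row is None or row[0]:
                raise ValueError("seat not available")
            pid = con.execute(
                "INSERT INTO passengers(name) VALUES (?)", (name,)).lastrowid
            con.execute("UPDATE seats SET taken=1 WHERE flight=? AND seat=?",
                        (flight, seat))
            con.execute("INSERT INTO invoices(passenger) VALUES (?)", (pid,))
        return True
    except ValueError:
        return False  # every step of the incomplete reservation was undone

print(book(con, 1, '12A', 'Ana'))   # True: all steps commit together
print(book(con, 1, '12A', 'Bob'))   # False: seat taken, everything rolled back
```

If any step fails, the rollback removes the partially inserted passenger and invoice rows, so the database never holds an incomplete reservation.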
Concurrent access anomalies

• Uncontrolled concurrent access can lead to unexpected results,
especially for actions consisting of multiple operations
• Example: a transaction T must read a value X from the database,
update it by adding a value a, and write the result back
• X = X + a
• If the transaction is executed concurrently by multiple users,
several anomalies may appear:
• Lost update
• Dirty reads
• Nonrepeatable reads
• Phantom reads
Concurrent access anomalies

• Lost updates appear when a transaction reads data, processes it
and writes the result before a previously launched transaction
finishes its execution and writes its own result to the database
• In this case, when the first transaction finishes after the second one
and writes its result, the second transaction's result is overwritten
• Dirty reads appear when a transaction is aborted after a second
transaction has started. The second transaction reads data that was
processed by the first one before it was aborted
• The cancellation of the first transaction should cancel any change
done by it
Concurrent access anomalies

• Nonrepeatable reads occur when a transaction has to read an item
two or more times
• Between the separate reads, another transaction updates the values
needed by the first transaction
• Phantom reads occur when a transaction processes a set of rows
resulting from a query
• During the processing (for example, computing some aggregate
values), another transaction inserts new rows or deletes existing
rows that are processed by the first transaction
Normal execution vs. lost updates

• Normal execution:
• T1: X = 100 + 20 = 120
• T2: X = 120 + 10 = 130

Normal execution (DB starts with X = 100):
T1: read(X): X = 100
T1: X = X + 20 = 120
T1: write(X)            → DB: X = 120
T2: read(X): X = 120
T2: X = X + 10 = 130
T2: write(X)            → DB: X = 130

Lost update (DB starts with X = 100):
T1: read(X): X = 100
T1: X = X + 20 = 120
T2: read(X): X = 100
T2: X = X + 10 = 110
T2: write(X)            → DB: X = 110
T1: write(X)            → DB: X = 120 (T2's update is lost)
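• The two interleavings above can be replayed deterministically with a small simulator; the step encoding below is an illustration, not DBMS internals:

```python
# Deterministic replay of the two interleavings: each step is
# (transaction, action, delta); "db" holds the shared item X and
# "local" holds each transaction's private copy of X.
def run(schedule):
    db = {"X": 100}
    local = {}
    for txn, action, delta in schedule:
        if action == "read":
            local[txn] = db["X"]
        elif action == "add":
            local[txn] += delta
        elif action == "write":
            db["X"] = local[txn]
    return db["X"]

normal = [("T1", "read", 0), ("T1", "add", 20), ("T1", "write", 0),
          ("T2", "read", 0), ("T2", "add", 10), ("T2", "write", 0)]
lost   = [("T1", "read", 0), ("T1", "add", 20),
          ("T2", "read", 0), ("T2", "add", 10), ("T2", "write", 0),
          ("T1", "write", 0)]

print(run(normal))  # 130: both updates survive
print(run(lost))    # 120: T2's write is overwritten by T1's later write
```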


Normal execution vs. dirty reads

Normal execution (DB starts with X = 100):
T1: read(X): X = 100
T1: X = X + 20 = 120
T1: write(X)            → DB: X = 120
T2: read(X): X = 120
T2: X = X + 10 = 130
T2: write(X)            → DB: X = 130

Dirty read (DB starts with X = 100):
T1: read(X): X = 100
T1: X = X + 20 = 120
T1: write(X)            → DB: X = 120
T2: read(X): X = 120    (uncommitted value written by T1)
T2: X = X + 10 = 130
T2: write(X)            → DB: X = 130
T1: abort               (T1's change is rolled back, but T2 already used it)
Transaction properties (ACID properties)

• Atomicity is the property of a transaction to represent an atomic
(indivisible) unit of execution (it executes "all or nothing"). If the
transaction is interrupted for whatever reason, the database will
ensure (after the cause of the interruption has been eliminated) that the
transaction will either finish successfully or be aborted, undoing all
the changes it performed
• Consistency is the property of a transaction to perform consistent
changes to the database. In other words, a transaction transforms
the database from a consistent (valid) state to another consistent
(valid) state
Transaction properties (ACID properties)

• Isolation is the property of a transaction to make its changes visible
only after it has been validated (completed successfully). If
concurrent transactions are executed, their changes won't be
available until the moment of completion, basically ensuring the
same result for concurrent access as if the transactions were
executed sequentially
• Durability is the property of a transaction that, once it has completed
successfully, its results will not be lost due to system failures. The
effects of the transaction are written to non-volatile memory
Transaction states

• Transaction operations are recorded by the recovery manager into a
separate database file (the log file), which is used to recover the data in
case of system failures
• The specific states for transactions are:
• Begin – The beginning of the execution (transaction becomes ACTIVATED)
• Read or write – read or write operations from/in the database
• End – marks the completion of the read or write operations, which means the
transaction can end (it becomes PARTIALLY COMMITTED); some verification
operations may still be required before final validation (COMMIT)
• Commit – successful completion of the transaction, validation of all changes,
storing the data to the physical drive and make the changes visible for other
operations. Changes can no longer be cancelled after this point
• Rollback (abort) – the transaction has been abandoned and any effect of the
transaction must be cancelled (rolling back the operations)
Transaction states
Transaction states

• To restore the database system, the DBMS maintains a log file in
which it records the operations of each transaction, identified
through a unique identifier (T) generated by the system
• The log file is stored on disk and is not affected by execution errors
inside transactions (but may be affected by a catastrophic disk
failure)
• The transaction log contains:
• A record for the beginning of a transaction
• For each SQL operation: the type of operation (INSERT, UPDATE, DELETE),
the names of the affected objects, the “before” and “after” values for the fields
and pointers to the previous and next transaction log entries for the same
transaction
• The ending (COMMIT) of a transaction
Transaction states

• The transaction log increases processing overhead, but the ability to
restore a corrupted database is worth the price
• If a system failure occurs, the DBMS will examine the log for all
uncommitted or incomplete transactions and it will restore the
database to a previous state
• The log is itself a database and, to maintain its integrity, many
DBMSs will implement it on several different disks to reduce the
risk of system failure
Transaction states

• Example:
START TRANSACTION;
UPDATE products SET price = 5499.99 WHERE id = 1;
UPDATE products SET stock = 24 WHERE id = 2;
COMMIT;

• A transaction log will have the following structure:

TRL ID | TRX NUM | PREV PTR | NEXT PTR | OPERATION | TABLE    | ROW ID | ATTRIBUTE | BEFORE VALUE | AFTER VALUE
341    | 101     | NULL     | 352      | START     | ***      |        |           |              | (start transaction)
352    | 101     | 341      | 363      | UPDATE    | PRODUCTS | 15581  | PRICE     | 5299.99      | 5499.99
363    | 101     | 352      | 365      | UPDATE    | PRODUCTS | 10011  | STOCK     | 16           | 24
365    | 101     | 363      | NULL     | COMMIT    | ***      |        |           |              | (end transaction)
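• The log rows above can be modelled as plain records to show how the "before"/"after" values drive recovery; this is an illustrative sketch, not a real DBMS log format:

```python
# The example transaction log as Python records.
log = [
    {"trl": 341, "trx": 101, "prev": None, "next": 352, "op": "START"},
    {"trl": 352, "trx": 101, "prev": 341, "next": 363, "op": "UPDATE",
     "table": "PRODUCTS", "row": 15581, "attr": "PRICE",
     "before": 5299.99, "after": 5499.99},
    {"trl": 363, "trx": 101, "prev": 352, "next": 365, "op": "UPDATE",
     "table": "PRODUCTS", "row": 10011, "attr": "STOCK",
     "before": 16, "after": 24},
    {"trl": 365, "trx": 101, "prev": 363, "next": None, "op": "COMMIT"},
]

def redo_values(log, trx):
    """Values to re-apply if transaction trx committed (REDO)."""
    return [(r["table"], r["row"], r["attr"], r["after"])
            for r in log if r["trx"] == trx and r["op"] == "UPDATE"]

def undo_values(log, trx):
    """Values to restore, in reverse log order, if trx must be rolled back (UNDO)."""
    return [(r["table"], r["row"], r["attr"], r["before"])
            for r in reversed(log) if r["trx"] == trx and r["op"] == "UPDATE"]

print(redo_values(log, 101))
```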
Transaction scheduling

• The transaction scheduler establishes the order of operations for
concurrent transactions
• The uninterrupted (atomic) execution of each transaction ensures a
correct result, but does not allow concurrency, and database
performance is poor
• If a transaction takes a long time, all other transactions need to wait
• To improve performance, transaction scheduling is used that permits
concurrent execution of transactions without affecting the accuracy
of the results
Transaction scheduling

• The transaction scheduler interleaves the execution of database
operations to ensure serializability and isolation of transactions
• To determine the appropriate order, the scheduler bases its actions
on concurrency control algorithms such as locking and timestamping
• The schedule S for n transactions T1, T2, ... Tn represents the
ordering and maintenance of transaction operations so that:
• For any transaction Ti, the operations of Ti in S respect the initial order of
operations in Ti
• Other operations (of other transactions Tj, j ≠ i) can be intercalated with
operations of Ti
Transaction scheduling

• Two operations in a schedule are conflicting operations if they
belong to different transactions, access the same database item and
at least one of the operations is a write operation
• Serial scheduling – a schedule S is called serial if, for any
transaction T, all the operations of T execute consecutively in S;
otherwise, the schedule is called non-serial
• Any serial schedule ensures a correct result, but does not allow
interleaving of operations between transactions, so there is no
concurrent access
• Therefore, most databases use non-serial but serializable
schedules, which allow concurrent access while ensuring the
consistency of the database
Serial scheduling

• Example: T1 (X = X – N and Y = Y + N); T2 (X = X + M)

Schedule SA (T1 followed by T2):
T1: read(X) – R1(X)
T1: X = X – N
T1: write(X) – W1(X)
T1: read(Y) – R1(Y)
T1: Y = Y + N
T1: write(Y) – W1(Y)
T2: read(X) – R2(X)
T2: X = X + M
T2: write(X) – W2(X)

Schedule SB (T2 followed by T1):
T2: read(X) – R2(X)
T2: X = X + M
T2: write(X) – W2(X)
T1: read(X) – R1(X)
T1: X = X – N
T1: write(X) – W1(X)
T1: read(Y) – R1(Y)
T1: Y = Y + N
T1: write(Y) – W1(Y)
Non-serial scheduling

• Example: T1 (X = X – N and Y = Y + N); T2 (X = X + M)

Schedule SC (interleaved):
T1: read(X) – R1(X)
T1: X = X – N
T1: write(X) – W1(X)
T2: read(X) – R2(X)
T1: read(Y) – R1(Y)
T2: X = X + M
T1: Y = Y + N
T2: write(X) – W2(X)
T1: write(Y) – W1(Y)
• Result: X = X – N + M; Y = Y + N;
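• Whether an interleaved schedule such as SC is conflict-serializable can be checked mechanically by building a precedence graph (an edge Ti → Tj for each pair of conflicting operations where Ti's comes first) and testing it for cycles; the sketch below assumes the (txn, op, item) encoding shown:

```python
# Conflict-serializability check via precedence graph and cycle detection.
from itertools import combinations

def serializable(schedule):
    # schedule: list of (txn, op, item), op in {"R", "W"}
    edges = set()
    for (i, a), (j, b) in combinations(enumerate(schedule), 2):
        ti, opa, x = a
        tj, opb, y = b
        # conflicting: different transactions, same item, at least one write
        if ti != tj and x == y and "W" in (opa, opb):
            edges.add((ti, tj))
    # depth-first search for a path src -> ... -> dst in the edge set
    def reachable(src, dst, seen=()):
        return any(v == dst or (v not in seen and reachable(v, dst, seen + (v,)))
                   for u, v in edges if u == src)
    # serializable iff no transaction can reach itself (no cycle)
    return not any(reachable(t, t) for t, _ in edges)

SC = [("T1", "R", "X"), ("T1", "W", "X"),
      ("T1", "R", "Y"), ("T2", "R", "X"),
      ("T1", "W", "Y"), ("T2", "W", "X")]
print(serializable(SC))  # True: only the edge T1 -> T2, no cycle
```

For SC all conflicts on X point from T1 to T2, so the graph is acyclic and SC is equivalent to the serial schedule T1 followed by T2.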
Transaction scheduling

• In order to eliminate the concurrent execution anomalies of
transactions and to ensure data consistency, the transaction
schedule must be serializable
• This can be accomplished by controlling the concurrent execution
of transactions
• The most commonly used concurrent execution techniques are:
• based on blocking data access by locks
• based on timestamps

• A lock associated with a database item X is a variable L(X) which
describes the status of that item with respect to the operations that
can be applied to it
Database locks

• Types of locks used in DBMSs:


• binary locks
• multi-state locks

• A binary lock L(X) can have two states:
• L(X) = 1 – free (or unlocked): operations can access the item X
• L(X) = 0 – busy (or locked): item X cannot be accessed
• Two operations can be performed on a binary lock L(X):
• lock operation, lock(X) – sets L(X) to the locked (busy) state
• release operation, unlock(X) – sets L(X) to the unlocked (free) state
Database locks

• Lock granularity indicates the level at which a lock is applied
• Locking can take place at the following levels:
• Database-level lock
• Entire database is locked
• Table-level lock
• Entire table is locked
• Row-level lock
• Allows concurrent transactions to access different rows of the same table
• Field-level lock
• Allows concurrent transactions to access the same row, as long as they require the
use of different fields (attributes) within that row
Database-level lock
Table-level lock
Row-level lock
Database locks

• Database-level locks:
• good for batch processing, but not for concurrent access
• transactions can't access the same database even if they change different tables

• Table-level locks:
• multiple transactions can access the same database as long as they modify
different tables
• can cause bottlenecks when multiple transactions access the same table (even
if they require changing different parts of the table)

• Row-level locks:
• concurrent transactions can access the same table as long as they modify
different rows
• improves data availability, but with high overhead
Binary locks

• Have only two states: locked (1) or unlocked (0)
• Eliminate the "lost update" problem – the lock is not released
until the write statement is completed
• Rules to be followed by any transaction that follows a binary lock:
• the transaction must lock the item X before performing any read or write
operations
• the transaction must release the lock of an item X (by the unlock operation)
after performing all the read or write operations
• the transaction can not acquire a lock it already holds
• a transaction can not release a lock that it does not own

• Considered too restrictive for optimal concurrency, as it locks the
item even for two read operations (when no update is done)
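• The four rules above can be sketched as a small lock table; this is a single-threaded illustration of the rules, not a real lock manager (which would also block and queue waiting transactions):

```python
# Minimal sketch of a binary lock table enforcing the four rules above.
class BinaryLockTable:
    def __init__(self):
        self.owner = {}  # item -> transaction currently holding its lock

    def lock(self, txn, item):
        if self.owner.get(item) == txn:
            raise RuntimeError("cannot acquire a lock already held")
        if item in self.owner:
            return False          # busy: the transaction must wait
        self.owner[item] = txn    # free -> locked
        return True

    def unlock(self, txn, item):
        if self.owner.get(item) != txn:
            raise RuntimeError("cannot release a lock it does not own")
        del self.owner[item]      # locked -> free

locks = BinaryLockTable()
print(locks.lock("T1", "X"))   # True: T1 acquires X
print(locks.lock("T2", "X"))   # False: X is busy, T2 would wait
locks.unlock("T1", "X")
print(locks.lock("T2", "X"))   # True: X was released
```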
Shared/exclusive (multi-state) locks

• Exclusive lock
• Access is specifically reserved for the transaction that locked the object
• Must be used when the potential for conflict exists – when an update is
required and no locks are currently held on the data item by other transactions
• Granted if and only if no other locks are held on the data item

• Shared lock
• Concurrent transactions are granted read access on the basis of a common lock
• Issued when a transaction wants to read data and no exclusive lock is held on
that data item
• Multiple transactions can each have a shared lock on the same data item

• Mutual Exclusive Rule


• Only one transaction at a time can own an exclusive lock for the same object
Shared/exclusive (multi-state) locks

• The binary lock technique is too restrictive and sometimes
unreasonable for the concurrent execution of transactions
• That is why many management systems use multi-state locks
• A multi-state lock M(X) may be in one of the states:
• free (unlocked): the lock is not owned by any transaction, and the first
transaction which launches a lock operation can get it
• locked for read (read-locked): many transactions can hold the lock and can read
item X, but no transaction can write to this item
• locked for write (write-locked, locked exclusively): a single transaction can
hold the lock and can read or write item X, and no other transaction can
access that item, neither for writing nor for reading
Shared/exclusive (multi-state) locks

• Any transaction that uses a multi-state lock M(X) must comply with the
following rules:
• A transaction must execute a shared or exclusive locking operation (read_lock(X) or
write_lock(X)) before doing any read operation of item X
• A transaction must execute an exclusive lock of the item X (write_lock(X)) before
performing any writing operations
• A transaction must release the lock of an item X (unlock(X)) after it has performed all
read or write operations of item X
• The lock release operation can only be executed by a transaction that owns the lock
• Two possible problems may occur
• The resulting transaction schedule may not be serializable
• The schedule may create deadlocks
Shared/exclusive (multi-state) locks

• Example: T1 (X = X + Y); T2 (Y = X + Y); initially X = 20, Y = 30

Schedule 1 (serial, correct):
T1: read(Y); read(X); X = X + Y; write(X)
T2: read(X); read(Y); Y = X + Y; write(Y)
Correct result: X = 50; Y = 80

Schedule 2 (both items locked together, correct):
T1: lock(XY); read(Y); read(X)
T2: lock(XY) → T2 blocked (T1 holds the locks)
T1: X = X + Y; write(X); unlock(XY)
T2: lock(XY); read(X); read(Y); Y = X + Y; write(Y); unlock(XY)
Correct result: X = 50; Y = 80

Schedule 3 (locks released too early, wrong):
T1: lock(Y); read(Y); unlock(Y)
T2: lock(X); read(X); unlock(X)
T2: lock(Y); read(Y); Y = X + Y; write(Y); unlock(Y)
T1: lock(X); read(X); X = X + Y; write(X); unlock(X)
Wrong: X = 50; Y = 50 (T1 used the old Y = 30, T2 used the old X = 20)
Two phase locking

• To ensure the serializability of transactions that use multiple locks,
in addition to the rules of using the locks, it is necessary to follow a
protocol on the order of the locking and release operations, called
two-phase locking
• The two-phase locking mechanism requires that each transaction
comply with the protocol for the use of locks and that all locking
operations precede the first release of a lock
• Such a transaction can be divided into two phases:
• growing phase – the transaction obtains all the necessary locks, but may not
release any lock
• shrinking phase – the transaction releases its locks, but may not obtain any
new locks
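• Checking whether a transaction respects the two phases is mechanical: once it has released any lock, it may never lock again. A sketch under the assumed (txn, op, item) encoding:

```python
# Sketch: check that every transaction in a schedule is two-phase,
# i.e. all of its lock operations precede its first unlock.
def is_two_phase(schedule):
    # schedule: list of (txn, op, item), op in {"lock", "unlock", ...}
    unlocked = set()  # transactions that have entered their shrinking phase
    for txn, op, item in schedule:
        if op == "unlock":
            unlocked.add(txn)
        elif op == "lock" and txn in unlocked:
            return False  # a lock after an unlock: growing after shrinking
    return True

ok  = [("T1", "lock", "X"), ("T1", "lock", "Y"),
       ("T1", "unlock", "X"), ("T1", "unlock", "Y")]
bad = [("T1", "lock", "Y"), ("T1", "unlock", "Y"),   # T1 releases Y ...
       ("T1", "lock", "X"), ("T1", "unlock", "X")]   # ... then locks X
print(is_two_phase(ok))   # True
print(is_two_phase(bad))  # False
```

The "bad" schedule is exactly the pattern of the third case in the previous example, where T1 unlocks Y before locking X.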
Two phase locking

• It has been demonstrated that if each transaction in a schedule
complies with this protocol, then the schedule is serializable
• In the previous example, the third case does not follow the two-phase
locking mechanism because:
• T1 releases the lock of item Y (unlock(Y)) before acquiring the lock for
item X (lock(X))
• T2 releases the lock on X (unlock(X)) before acquiring the lock for
item Y (lock(Y))
• That schedule is therefore not serializable

Two phase locking

• Two-phase locking is governed by the following rules:
• Two transactions cannot have conflicting locks
• No unlock operation can precede a lock operation in the same transaction
• No data is affected until all locks are obtained, that is, until the transaction
reaches its lock point

• Problems of using locking mechanisms:
• Deadlock: blocking execution of transactions when two or more transactions
expect the other(s) to release a lock; there are prevention techniques for
eliminating the problem
• Indefinite postponement: the transaction is in an indefinite postponement if it
can not continue the execution for a long time, while all other transactions are
running normally; prevention is through ensuring a balanced policy of
obtaining the locks
Two phase locking
Concurrency control based on timestamps

• A timestamp is a unique identifier created by the transaction
management system
• The timestamp is based on the start time of a transaction
• A timestamp can be created by:
• using the current clock value of the operating system
• using a counter which is incremented at each assignment in the transaction
order

• A transaction T will have a unique timestamp TS(T)
• All database operations within the same transaction will have the
same timestamp
Concurrency control based on timestamps

• Timestamps produce an explicit order in which transactions are
submitted to the DBMS
• They ensure uniqueness: no two transactions can have equal
timestamp values
• Timestamps also ensure monotonicity: the assigned values always
increase
• For each item of the database, two timestamp values are required:
• R_TS(X) – the timestamp of reading the item X: the biggest value of the
timestamps of the transactions that read the item X (last read)
• W_TS(X) - the timestamp of writing the item X: the biggest value of the
timestamps of the transactions that write the item X (last write)
Concurrency control based on timestamps

• Serializability of a schedule is achieved if certain conditions are
imposed on the order in which multiple concurrent transactions
access the items, depending on their timestamps
• When launching a read operation (read(X)):
• If TS(T) ≥ W_TS(X), then T will execute the read operation on X and will set
R_TS(X) to the highest of TS(T) and R_TS(X)
• If TS(T) < W_TS(X), then transaction T must be abandoned and rolled
back, because another transaction with a higher timestamp has already
written item X before T had the chance to read it
Concurrency control based on timestamps

• When starting a write operation (write(X)):
• If TS(T) ≥ R_TS(X) and TS(T) ≥ W_TS(X), then T will execute the write on
item X and set W_TS(X) = TS(T)
• If TS(T) < R_TS(X), then transaction T must be abandoned and rolled
back, because another transaction with a higher timestamp (launched after T)
has already read the value of X before T had the chance to write X
• If TS(T) < W_TS(X), then transaction T will not execute the write
operation on X, but can continue with its other operations. This is because
another transaction with a higher timestamp has already written a more
recent value to item X, so the value that T wants to write is already out
of date

• A transaction T that has been cancelled and rolled back will be
relaunched, but with a new timestamp, corresponding to the
moment of the new scheduling
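• The read and write rules above can be sketched for a single item X; the return values ("read", "write", "rollback", "skip") are illustrative labels, not DBMS API names:

```python
# Sketch of the basic timestamp-ordering rules for one database item.
class Item:
    def __init__(self):
        self.r_ts = 0  # R_TS(X): largest timestamp of a transaction that read X
        self.w_ts = 0  # W_TS(X): largest timestamp of a transaction that wrote X

def read(item, ts):
    if ts < item.w_ts:
        return "rollback"            # a younger transaction already wrote X
    item.r_ts = max(item.r_ts, ts)   # record the latest read
    return "read"

def write(item, ts):
    if ts < item.r_ts:
        return "rollback"            # a younger transaction already read X
    if ts < item.w_ts:
        return "skip"                # outdated write: ignore it, keep running
    item.w_ts = ts
    return "write"

x = Item()
print(read(x, 5))    # "read":     R_TS(X) becomes 5
print(write(x, 3))   # "rollback": TS 3 < R_TS(X) = 5
print(write(x, 7))   # "write":    W_TS(X) becomes 7
print(write(x, 6))   # "skip":     TS 6 < W_TS(X) = 7, write is out of date
```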
Transaction management

• Transaction management and data recovery techniques are
included in DBMSs, and applications have limited transaction
control through SQL commands
• SQL instructions for transactions:
• SET TRANSACTION: set (define) a transaction with the options:
• Isolation Level of Transactions (ISOLATION LEVEL)
• Data access: READ ONLY, READ WRITE (default)
• SET autocommit = {0 | 1} - autocommit mode (default 1): each database
operation is immediately validated
• START TRANSACTION - Launch transaction
• COMMIT [WORK] - Ending with transaction validation
• ROLLBACK [WORK] - Abandoning and rolling back the transaction
Transaction management

• At any isolation level (ISOLATION LEVEL) lost updates are
forbidden, but some incorrect reads are allowed, as shown in the
table
• The isolation level is obtained through the types of locks used (binary or
multi-state) and the positioning of the savepoints
ISOLATION LEVEL  | Dirty reads | Non-repeatable reads | Phantom reads
READ UNCOMMITTED | Yes         | Yes                  | Yes
READ COMMITTED   | No          | Yes                  | Yes
REPEATABLE READ  | No          | No                   | Yes
SERIALIZABLE     | No          | No                   | No
Transaction management

• The READ UNCOMMITTED isolation level allows SELECT
statements to run in a non-locking fashion, which means that a
transaction may read changes that have not yet been validated by the
system and committed to disk
• In this case, dirty reads may occur
• The READ COMMITTED isolation level sets and reads from a fresh
snapshot (even within the same transaction)
• This ensures consistent reads and eliminates the dirty reads
problem, but phantom reads may still occur if other sessions insert
or change the rows of the needed items
Transaction management

• The REPEATABLE READ isolation level is the default isolation level
(in MySQL) and works by ensuring that all read operations read the
snapshot created by the first read operation
• This means that several (non-locking) SELECT statements within
the same transaction are also consistent with respect to each other
• The SERIALIZABLE isolation level is the highest level of isolation,
but it comes with a high degree of overhead and is mostly used in
specialized operations where issues with concurrency and
deadlocks have to be handled
• It considers every SELECT statement as its own transaction
Transaction example

• Create a new reservation for a passenger and return the reservation
id and the invoice id (full code in the annex)
Transaction example

DELIMITER //
CREATE PROCEDURE new_reservation(
IN first_name VARCHAR(255),
IN last_name VARCHAR(255),
IN dob DATE,
IN passport VARCHAR(255),
IN flight_schedule INT UNSIGNED,
IN class INT UNSIGNED,
IN seat VARCHAR(255),
IN price DECIMAL(10,2) UNSIGNED,
OUT reservation_id INT UNSIGNED,
OUT invoice_id INT UNSIGNED
)
Transaction example

DELIMITER //
CREATE PROCEDURE new_reservation(…)
BEGIN
DECLARE passenger_id INT UNSIGNED;
DECLARE num_seats INT UNSIGNED;
DECLARE seat_available BOOLEAN;
DECLARE already_booked BOOLEAN;
DECLARE EXIT HANDLER FOR SQLSTATE '45000'
BEGIN
ROLLBACK;
RESIGNAL;
END;
END //
DELIMITER ;
Transaction example

DELIMITER //
CREATE PROCEDURE new_reservation(…)
BEGIN

START TRANSACTION;
-- Check if passenger exists
-- Insert the passenger if it doesn't exist
-- Check if the passenger booked the flight
-- Check if there are any seats available
-- Check if the requested seat is available
-- Create the reservation and invoice
COMMIT;
END //
DELIMITER ;
Transaction example
Database recovery

• Recovering a database after a malfunction (database recovery)
means bringing the database to a correct previous state from
which, eventually, it is possible to reconstruct a new correct state as
close as possible to the moment when the problem appeared
• Database recovery techniques are generally integrated with
transaction control and depend on the DBMS
• For recovery operations, the log file is used, and/or a database
backup copy, generally stored on a magnetic tape/HDD/SSD
• A commit point is the point reached by a transaction that has
successfully executed all of its operations and recorded them in the log
file
Database recovery

• At such a point, a transaction T enters the [commit] record into
the write buffer of the log file and also flushes this buffer to the
log file
• A checkpoint is written to the log file when all the results of the
write operations of the validated transactions have been written to the
database files, by flushing the write buffers to the database files
• This means that all transactions that have the [commit] entry
recorded in the log file before a checkpoint will not require
redoing their write operations in the case of a non-catastrophic system
failure
• The recovery manager of the DBMS decides when (or after how
many transactions) it introduces a new checkpoint
Database recovery

• If the database is not physically destroyed but has become
inconsistent due to a non-catastrophic error, then the current
state of the database and the log file are used for restoration
• When a non-catastrophic error occurs, data in the physical
memory of the computer (including data written to write buffers
but not transferred to the hard disk) is lost, but the data recorded
on the hard disk in the database files and in the log file is not
• There are two techniques for recovering data from non-catastrophic
errors:
• Deferred update recovery
• Immediate update recovery
Database recovery

• Deferred update recovery uses the write records in the log file,
which contain the transaction identifier (T), the item being written
(X) and the value to be written (value): [write, T, X, value]
• Example: a transaction schedule with a checkpoint recorded at time tc
and a non-catastrophic failure occurring at time tf
Database recovery

• To recover, the log file is read backwards from the last record until
the first checkpoint is encountered, and:
• Transactions that committed [commit] prior to the checkpoint (T1) are
not affected by the failure
• A list of validated transactions, LTV, is created, containing all transactions
that have a [commit] record in the log file between the last
checkpoint and the end of the log file; in the example, LTV = {T2, T3}
• A list of non-validated transactions, LTNV, is created, containing all
transactions that have a [start] entry in the log file but do not have the
corresponding [commit]; in the example, LTNV = {T4, T5}
• The write operations of the validated transactions (T2 and T3) are redone in
the order in which they appear in the log file – write(T, X, value)
• Non-validated transactions are relaunched: T4, T5
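• The analysis pass above can be sketched over a simplified log; the record encoding is invented for illustration, and T1 (committed before the checkpoint) is omitted from the log shown:

```python
# Sketch of the deferred-update analysis pass: scan the log from the
# last checkpoint onwards and classify transactions into LTV and LTNV.
log = [
    ("start", "T2"), ("checkpoint", None),
    ("start", "T3"), ("write", "T2"), ("commit", "T2"),
    ("start", "T4"), ("write", "T3"), ("commit", "T3"),
    ("start", "T5"), ("write", "T4"), ("write", "T5"),
]   # the failure happens here

def classify(log):
    # index of the last checkpoint record (-1 if there is none)
    cp = max((i for i, (op, _) in enumerate(log) if op == "checkpoint"),
             default=-1)
    all_commits = {t for op, t in log if op == "commit"}
    # LTV: committed after the last checkpoint -> redo their writes
    ltv = sorted({t for op, t in log[cp + 1:] if op == "commit"})
    # LTNV: started but never committed -> relaunch from scratch
    ltnv = sorted({t for op, t in log if op == "start"} - all_commits)
    return ltv, ltnv

print(classify(log))  # (['T2', 'T3'], ['T4', 'T5'])
```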
Database recovery

• Deferred update recovery technique properties:
• Transaction operations do not immediately update the physical database
• Only the transaction log is updated
• The database is physically updated only after the transaction reaches its
commit point, using the transaction log information
• If the transaction aborts before it reaches its commit point, no ROLLBACK is
needed because the DB was never updated
• A transaction that performed a COMMIT after the last checkpoint is redone
using the "after" values of the transaction log
Database recovery

• In recovery techniques with immediate update, when a transaction
launches an update command, the update is performed
immediately, without waiting for a validation checkpoint
• In most of these techniques, the change must first be stored in the
log file (on disk) before being applied to the database; this rule is
known as the "write-ahead log protocol"
• In the immediate update technique, the restore is executed by
cancelling (using UNDO operations) the changes made by a
transaction if it later fails for various reasons
• If the transaction aborts before it reaches its commit point, a
ROLLBACK is done to restore the database to a consistent state
Database recovery

• A transaction that committed after the last checkpoint is redone
using the "after" values of the log
• A transaction with a ROLLBACK after the last checkpoint is rolled
back using the "before" values in the log
• The immediate update recovery technique has the advantage of
simple write operations that are performed directly in the
database, without having to wait for a validation point to transfer
the data to the database
• The disadvantage of this method is that cascading rollbacks may
occur, which require complicated recovery operations
Database recovery

• If the database files have been destroyed due to disk failure, then
the database can only be restored from a backup
• The last saved copy is loaded from the backup HDD/SSD/tape
and the system restarts
• However, transactions executed after the last backup of the
database and before the error occurred are lost
• Because the log file is much smaller than the database files, it is
customary to save it more often than the database itself
• In this situation, after loading the last saved database copy, the
saved log file (which is more recent than the saved copy of the
database) can be used to restore all the validated transactions it contains
