UNIT IV: TRANSACTION MANAGEMENT
TRANSACTION CONCEPT
A Transaction is a unit of program execution that accesses and possibly updates various
data items.
When the transaction completes successfully (is committed), the database must be
consistent.
After a transaction commits, the changes it has made to the database persist, even if there
are system failures or crashes.
Consider a transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
Atomicity requirement: if the transaction fails after step 3 and before step 6, money will be
"lost", leading to an inconsistent database state. Failure could be due to software or hardware;
the system should ensure that updates of a partially executed transaction are not reflected in
the database.
Durability requirement: once the user has been notified that the transaction has completed
(i.e., the transfer of the $50 has taken place), the updates to the database by the transaction
must persist even if there are software or hardware failures.
Consistency requirement: in the above example, the sum of A and B is unchanged by the
execution of the transaction. In general, consistency requirements include explicitly specified
integrity constraints, such as primary keys and foreign keys, and implicit integrity
constraints, e.g. the sum of the balances of all accounts minus the sum of all loan amounts
must equal the value of cash-in-hand. A transaction must see a consistent database; during
transaction execution the database may be temporarily inconsistent, but when the transaction
completes successfully the database must be consistent. Erroneous transaction logic can lead
to inconsistency.
Isolation requirement: if between steps 3 and 6 another transaction is allowed to access the
partially updated database, it will see an inconsistent database. For example, a second
transaction that executes read(A), read(B), print(A+B) in that window will display a sum
A + B that is less than it should be.
Isolation can be ensured trivially by running transactions serially, that is, one after the other.
However, executing multiple transactions concurrently has significant benefits.
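To make the atomicity and durability requirements concrete, the sketch below wraps the
transfer in a single transaction using Python's sqlite3 module. The table name, column names,
and starting balances are illustrative assumptions, not part of the original example.

    import sqlite3

    # set up a toy accounts table (names and values are illustrative)
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
    conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 100), ("B", 100)])
    conn.commit()

    try:
        # steps 1-3: debit A; steps 4-6: credit B
        conn.execute("UPDATE account SET balance = balance - 50 WHERE name = 'A'")
        conn.execute("UPDATE account SET balance = balance + 50 WHERE name = 'B'")
        conn.commit()    # durability: once commit returns, the transfer persists
    except Exception:
        conn.rollback()  # atomicity: a failure between the updates undoes both
        raise

If the process fails between the two UPDATE statements, the rollback (or crash recovery)
ensures the partial debit is never visible, so no money is "lost".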
ACID Properties
Atomicity. Either all operations of the transaction are properly reflected in the database, or
none are.
Consistency. Execution of a transaction in isolation preserves the consistency of the database.
Isolation. Although multiple transactions may execute concurrently, each transaction must be
unaware of the other concurrently executing transactions.
Durability. After a transaction completes successfully, the changes it has made to the
database persist, even if there are system failures.
Transaction State
• Active: the initial state; the transaction stays in this state while it is executing.
• Partially committed: after the final statement has been executed.
• Failed: after the discovery that normal execution can no longer proceed.
• Aborted: after the transaction has been rolled back and the database restored to its state
prior to the start of the transaction. Two options after it has been aborted: restart the
transaction (possible only if there is no internal logical error) or kill the transaction.
• Committed: after successful completion.
Implementation of Atomicity and Durability
The shadow-database scheme assumes that only one transaction is active at a time and that
disks do not fail. A pointer, db_pointer, always points to the current consistent copy of the
database; all updates are made on a shadow copy. In case the transaction fails, the old
consistent copy pointed to by db_pointer can be used, and the shadow copy can be deleted.
The scheme is useful for text editors, but extremely inefficient for large databases, since
committing a transaction requires copying the entire database. A variant called shadow
paging reduces the copying of data, but is still not practical for large databases and does not
handle concurrent transactions.
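A toy sketch of the pointer-switch idea follows, assuming the database fits in a single file;
the file names and the update callback are illustrative. It also shows why the scheme is
costly: every transaction copies the entire database file.

    import os, shutil

    def commit_with_shadow(update):
        # db_pointer is a file whose contents name the current consistent copy
        with open("db_pointer") as f:
            current = f.read().strip()
        shadow = current + ".shadow"
        shutil.copyfile(current, shadow)     # copy the whole database (expensive)
        update(shadow)                       # all updates go to the shadow copy
        with open("db_pointer.tmp", "w") as f:
            f.write(shadow)
            f.flush()
            os.fsync(f.fileno())             # force the new pointer to disk
        os.replace("db_pointer.tmp", "db_pointer")  # atomic switch = commit point
        # if we crash before os.replace, db_pointer still names the old copy,
        # and the half-written shadow can simply be deleted on restart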
Concurrent Executions
Multiple transactions are allowed to run concurrently in the system. Advantages are:
• Increased processor and disk utilization, leading to better transaction throughput. For
example, one transaction can be using the CPU while another is reading from or writing to
the disk.
• Reduced average response time for transactions: short transactions need not wait behind
long ones.
Concurrency control schemes are mechanisms to achieve isolation, that is, to control the
interaction among the concurrent transactions in order to prevent them from destroying the
consistency of the database.
Schedules
A schedule is a sequence of instructions that specifies the chronological order in which the
instructions of concurrent transactions are executed. A schedule for a set of transactions must
consist of all instructions of those transactions, and must preserve the order in which the
instructions appear in each individual transaction. A transaction that successfully completes
its execution will have a commit instruction as the last statement; a transaction that fails to
successfully complete its execution will have an abort instruction as the last statement.
Schedule 1
Let T1 transfer $50 from A to B, and T2 transfer 10% of the balance from A to B. (Schedule
figure omitted.)
Schedule 2
(Schedule figure omitted.)
Schedule 3
Let T1 and T2 be the transactions defined previously. This schedule (figure omitted) is not a
serial schedule, but it is equivalent to Schedule 1.
Serializability
A (possibly concurrent) schedule is serializable if it is equivalent to a serial schedule.
Different forms of schedule equivalence give rise to the notions of:
1. conflict serializability
2. view serializability
Simplified view of transactions: We ignore operations other than read and write instructions,
and we assume that transactions may perform arbitrary computations on data in local buffers
in between reads and writes. Our simplified schedules consist of only read and write
instructions.
Conflicting Instructions
Instructions li and lj of transactions Ti and Tj respectively conflict if and only if there exists
some item Q accessed by both li and lj, and at least one of these instructions wrote Q.
If a schedule S can be transformed into a schedule S' by a series of swaps of non-conflicting
instructions, we say that S and S' are conflict equivalent; a schedule is conflict serializable if
it is conflict equivalent to a serial schedule. For example, Schedule 3 can be transformed into
Schedule 6, a serial schedule where T2 follows T1, by a series of swaps of non-conflicting
instructions. Therefore Schedule 3 is conflict serializable.
Example of a schedule that is not conflict serializable: a schedule of conflicting reads and
writes of an item Q by T3 and T4 (figure omitted). We are unable to swap instructions in that
schedule to obtain either the serial schedule < T3, T4 > or the serial schedule < T4, T3 >.
View Serializability
Let S and S' be two schedules with the same set of transactions. S and S' are view equivalent
if the following three conditions are met for each data item Q:
1. If in schedule S transaction Ti reads the initial value of Q, then in schedule S' transaction
Ti must also read the initial value of Q.
2. If in schedule S transaction Ti executes read(Q), and that value was produced by
transaction Tj (if any), then in schedule S' transaction Ti must also read the value of Q that
was produced by the same write(Q) operation of transaction Tj.
3. The transaction (if any) that performs the final write(Q) operation in schedule S must also
perform the final write(Q) operation in schedule S'.
As can be seen, view equivalence is based purely on reads and writes. A schedule S is view
serializable if it is view equivalent to a serial schedule. Every conflict-serializable schedule is
also view serializable; every view serializable schedule that is not conflict serializable has
blind writes.
Recoverability
A recoverable schedule is one where, if a transaction Tj reads a data item previously written
by a transaction Ti, then the commit operation of Ti appears before the commit operation of
Tj. Consider a schedule in which T9 reads a value written by T8 before T8 has committed: if
T8 should abort, T9 would have read (and possibly shown to the user) an inconsistent
database state. Hence, the database must ensure that schedules are recoverable.
Cascading Rollbacks
A cascading rollback occurs when a single transaction failure leads to a series of transaction
rollbacks. This can happen if transactions read values written by uncommitted transactions,
and it is undesirable because it can undo a significant amount of work. A schedule is
cascadeless if, for each pair of transactions Ti and Tj such that Tj reads a data item previously
written by Ti, the commit operation of Ti appears before the read operation of Tj. Every
cascadeless schedule is also recoverable.
Concurrency Control
A database must provide a mechanism that will ensure that all possible schedules are either
conflict or view serializable, and are recoverable and preferably cascadeless. A policy in
which only one transaction can execute at a time generates serial schedules, but provides a
poor degree of concurrency. Are serial schedules recoverable/cascadeless? (Yes, trivially,
since no transaction ever reads another transaction's uncommitted writes.) Testing a schedule
for serializability after it has executed is a little too late! The goal is to develop concurrency
control protocols that will assure serializability.
Implementation Of Isolation
Schedules must be conflict or view serializable, and recoverable, for the sake of database
consistency, and preferably cascadeless. A policy in which only one transaction can execute
at a time generates serial schedules, but provides a poor degree of concurrency. Concurrency-
control schemes trade off between the amount of concurrency they allow and the amount of
overhead that they incur. Some schemes allow only conflict-serializable schedules to be
generated, while others allow view-serializable schedules that are not conflict-serializable.
Testing For Serializability
• Precedence graph: a directed graph where the vertices are the transactions (names). We
draw an arc from Ti to Tj if the two transactions conflict and Ti accessed the data item on
which the conflict arose earlier. A schedule is conflict serializable if and only if its
precedence graph is acyclic.
The precedence graph test for conflict serializability cannot be used directly to test for view
serializability. An extension to test for view serializability has cost exponential in the size of
the precedence graph. The problem of checking if a schedule is view serializable falls in the
class of NP-complete problems; thus the existence of an efficient algorithm is extremely
unlikely. However, practical algorithms that just check some sufficient conditions for view
serializability can still be used.
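Since the conflict-serializability test is just a cycle check on the precedence graph, it is easy
to sketch. A minimal sketch, assuming a schedule is given as a list of (transaction, action,
item) triples — an illustrative encoding, not from the notes:

    def conflict_serializable(schedule):
        # draw an arc Ti -> Tj if a later instruction of Tj conflicts with an
        # earlier instruction of Ti (same item, at least one write)
        edges = set()
        for i, (ti, ai, qi) in enumerate(schedule):
            for tj, aj, qj in schedule[i + 1:]:
                if ti != tj and qi == qj and "write" in (ai, aj):
                    edges.add((ti, tj))
        # conflict serializable iff the precedence graph is acyclic (DFS test)
        nodes = {t for t, _, _ in schedule}
        succ = {n: [b for (a, b) in edges if a == n] for n in nodes}
        WHITE, GRAY, BLACK = 0, 1, 2
        color = dict.fromkeys(nodes, WHITE)
        def cyclic(n):
            color[n] = GRAY
            for m in succ[n]:
                if color[m] == GRAY or (color[m] == WHITE and cyclic(m)):
                    return True
            color[n] = BLACK
            return False
        return not any(color[n] == WHITE and cyclic(n) for n in nodes)

    # Schedule-3-style interleaving of T1 and T2 on items A and B
    s = [("T1", "read", "A"), ("T1", "write", "A"), ("T2", "read", "A"),
         ("T2", "write", "A"), ("T1", "read", "B"), ("T1", "write", "B"),
         ("T2", "read", "B"), ("T2", "write", "B")]
    print(conflict_serializable(s))   # True: equivalent to the serial order T1, T2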
Concurrency Control
Concurrency-control protocols allow concurrent schedules, but ensure that the schedules are
conflict/view serializable, and are recoverable and cascadeless. Concurrency control
protocols generally do not examine the precedence graph as it is being created; instead, a
protocol imposes a discipline that avoids non-serializable schedules. Different concurrency
control protocols provide different tradeoffs between the amount of concurrency they allow
and the amount of overhead that they incur. Tests for serializability help us understand why a
concurrency control protocol is correct.
Weak Levels of Consistency
Some applications are willing to live with weak levels of consistency, allowing schedules that
are not serializable. For example, a read-only transaction that wants to get an approximate
total balance of all accounts, or database statistics computed for query optimization, which
can be approximate (slightly stale or imprecise statistics affect only the quality of the
optimizer's choices, not the correctness of query results). Such transactions need not be
serializable with respect to other transactions; they trade accuracy for performance.
Levels of Consistency in SQL-92:
• Serializable: the default.
• Repeatable read: only committed records may be read, and repeated reads of the same
record must return the same value. However, a transaction may not be serializable: it may
find some records inserted by another transaction but not find others.
• Read committed: only committed records can be read, but successive reads of a record may
return different (but committed) values.
• Read uncommitted: even uncommitted records may be read.
Transaction Definition in SQL
In SQL, a transaction begins implicitly and ends with one of:
Commit work, which commits the current transaction and begins a new one.
Rollback work, which causes the current transaction to abort.
In almost all database systems, by default, every SQL statement also commits implicitly if it
executes successfully; the implicit commit can be turned off by a database directive, e.g. in
JDBC, setAutoCommit(false);
Lock-Based Protocols
A lock is a mechanism to control concurrent access to a data item. Data items can be locked
in two modes:
1. exclusive (X) mode. The data item can be both read and written. An X-lock is requested
using the lock-X instruction.
2. shared (S) mode. The data item can only be read. An S-lock is requested using the lock-S
instruction.

Lock-compatibility matrix (true means the two modes may be held together by different
transactions):

        S      X
S       true   false
X       false  false
Lock requests are made to the concurrency-control manager. A transaction can proceed only
after the request is granted.
A transaction may be granted a lock on an item if the requested lock is compatible with
locks already held on the item by other transactions.
Any number of transactions can hold shared locks on an item, but if any transaction
holds an exclusive lock on the item, no other transaction may hold any lock on the item.
If a lock cannot be granted, the requesting transaction is made to wait till all
incompatible locks held by other transactions have been released. The lock is then
granted.
T2: lock-S(A);
    read(A);
    unlock(A);
    lock-S(B);
    read(B);
    unlock(B);
    display(A+B)
Locking as above is not sufficient to guarantee serializability — if A and B get updated in-
between the read of A and B, the displayed sum would be wrong.
• A locking protocol is a set of rules followed by all transactions while requesting and
releasing locks. Locking protocols restrict the set of possible schedules.
Pitfalls of Lock-Based Protocols
Consider a partial schedule in which T3 holds a lock on B and requests lock-X(A), while T4
holds a lock on A and requests lock-S(B). Neither T3 nor T4 can make progress: executing
lock-S(B) causes T4 to wait for T3 to release its lock on B, while executing lock-X(A) causes
T3 to wait for T4 to release its lock on A. Such a situation is called a deadlock. To handle a
deadlock, one of T3 or T4 must be rolled back and its locks released. The potential for
deadlock exists in most locking protocols; deadlocks are a necessary evil.
Starvation is also possible if the concurrency control manager is badly designed. For
example, a transaction may be waiting for an X-lock on an item while a sequence of other
transactions request and are granted an S-lock on the same item, or the same transaction may
be repeatedly rolled back due to deadlocks. The concurrency control manager can be
designed to prevent starvation.
The Two-Phase Locking Protocol
This protocol ensures conflict-serializable schedules. It has two phases:
• Phase 1 (growing phase): the transaction may obtain locks, but may not release any lock.
• Phase 2 (shrinking phase): the transaction may release locks, but may not obtain any new
lock.
The protocol assures serializability. It can be proved that the transactions can be serialized in
the order of their lock points (i.e., the point where a transaction acquired its final lock).
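The discipline can be illustrated with a small wrapper that enforces the two phases; the class
and the lock objects are an illustrative sketch, not a real DBMS API.

    import threading

    class TwoPhaseTxn:
        # growing phase: may acquire locks; shrinking phase: may only release
        def __init__(self):
            self.held = []
            self.shrinking = False

        def lock(self, lk):
            assert not self.shrinking, "2PL violation: acquiring after a release"
            lk.acquire()
            self.held.append(lk)

        def unlock_all(self):
            self.shrinking = True        # the lock point was the last acquisition
            while self.held:
                self.held.pop().release()

    lock_A, lock_B = threading.Lock(), threading.Lock()
    t = TwoPhaseTxn()
    t.lock(lock_A)
    t.lock(lock_B)     # lock point: transactions serialize in lock-point order
    # ... read/write A and B here ...
    t.unlock_all()     # shrinking phase begins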
Implementation of Locking
A lock manager can be implemented as a separate process to which transactions send lock
and unlock requests. The lock manager replies to a lock request by sending a lock grant
message (or a message asking the transaction to roll back, in case of a deadlock). The
requesting transaction waits until its request is answered. The lock manager maintains a data
structure called a lock table to record granted locks and pending requests. The lock table is
usually implemented as an in-memory hash table indexed on the name of the data item being
locked.
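A minimal in-memory sketch of such a lock table follows; the layout (one dictionary entry
per item, with a holder set and a FIFO queue of waiters) is an illustrative simplification, not a
production lock manager.

    from collections import defaultdict, deque

    COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
                  ("X", "S"): False, ("X", "X"): False}

    class LockTable:
        def __init__(self):
            self.items = defaultdict(lambda: {"mode": None, "holders": set(),
                                              "queue": deque()})

        def request(self, txn, item, mode):
            e = self.items[item]
            # grant if nothing is held, or the mode is compatible and no one waits
            if not e["holders"] or (not e["queue"] and COMPATIBLE[(e["mode"], mode)]):
                e["holders"].add(txn)
                e["mode"] = mode if e["mode"] is None else e["mode"]
                return "granted"
            e["queue"].append((txn, mode))   # wait behind incompatible locks
            return "waiting"

        def release(self, txn, item):
            e = self.items[item]
            e["holders"].discard(txn)
            if not e["holders"]:
                e["mode"] = None
                while e["queue"]:            # grant now-compatible waiters in order
                    t, m = e["queue"][0]
                    if e["mode"] is not None and not COMPATIBLE[(e["mode"], m)]:
                        break
                    e["queue"].popleft()
                    e["holders"].add(t)
                    e["mode"] = m if e["mode"] is None else e["mode"]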
Cascading rollbacks can be avoided by a modified protocol called strict two-phase locking.
Here a transaction must hold all its exclusive locks till it commits/aborts.
• Rigorous two-phase locking is even stricter: here all locks are held till commit/abort.
In this protocol transactions can be serialized in the order in which they commit.
Timestamp-Based Protocols
Each transaction is issued a timestamp when it enters the system. If an old transaction Ti has
timestamp TS(Ti), a new transaction Tj is assigned timestamp TS(Tj) such that TS(Ti) <
TS(Tj).
The protocol manages concurrent execution such that the timestamps determine the
serializability order. In order to assure such behavior, the protocol maintains for each data
item Q two timestamp values:
• W-timestamp(Q): the largest timestamp of any transaction that executed write(Q)
successfully.
• R-timestamp(Q): the largest timestamp of any transaction that executed read(Q)
successfully.
The timestamp ordering protocol ensures that any conflicting read and write operations
are executed in timestamp order.
Suppose a transaction Ti issues a read(Q):
1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already
overwritten. Hence, the read operation is rejected, and Ti is rolled back.
2. Otherwise, the read operation is executed, and R-timestamp(Q) is set to
max(R-timestamp(Q), TS(Ti)).
Suppose a transaction Ti issues a write(Q):
1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed
previously, and the system assumed that that value would never be produced. Hence, this
write operation is rejected, and Ti is rolled back.
2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Hence,
this write operation is rejected, and Ti is rolled back.
3. Otherwise, the write operation is executed, and W-timestamp(Q) is set to TS(Ti).
Example: a partial schedule for several data items for transactions with timestamps 1, 2, 3, 4,
5 (schedule figure omitted).
Thus, there will be no cycles in the precedence graph. The timestamp protocol ensures
freedom from deadlock, as no transaction ever waits. But the schedule may not be
cascade-free, and may not even be recoverable.
Thomas' Write Rule
This is a modified version of the timestamp-ordering protocol in which obsolete write
operations may be ignored under certain circumstances. When Ti attempts to write data item
Q, if TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Rather
than rolling back Ti as the timestamp ordering protocol would have done, this write operation
can be ignored. Otherwise this protocol is the same as the timestamp ordering protocol.
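A combined sketch of the timestamp-ordering checks, with Thomas' write rule as an optional
flag; the module-level dictionaries standing in for the per-item timestamps are illustrative.

    R_TS, W_TS = {}, {}    # per-item R-timestamp(Q) and W-timestamp(Q), default 0

    class Rollback(Exception):
        """Raised when the requesting transaction must be rolled back."""

    def read(ts, q):
        if ts < W_TS.get(q, 0):
            raise Rollback("read rejected: Q was already overwritten")
        R_TS[q] = max(R_TS.get(q, 0), ts)    # R-timestamp(Q) := max(R-TS, TS(Ti))

    def write(ts, q, thomas_rule=False):
        if ts < R_TS.get(q, 0):
            raise Rollback("write rejected: a later reader needed the old value")
        if ts < W_TS.get(q, 0):
            if thomas_rule:
                return                       # ignore the obsolete write
            raise Rollback("write rejected: obsolete value of Q")
        W_TS[q] = ts                         # W-timestamp(Q) := TS(Ti)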
Validation-Based Protocol
Execution of a transaction Ti is done in three phases:
1. Read and execution phase: Transaction Ti writes only to temporary local variables.
2. Validation phase: Transaction Ti performs a validation test to determine if the local
variables can be written to the database without violating serializability.
3. Write phase: If Ti is validated, the updates are applied to the database; otherwise, Ti is
rolled back.
The three phases of concurrently executing transactions can be interleaved, but each
transaction must go through the three phases in that order. Assume for simplicity that the
validation and write phases occur together, atomically and serially, i.e., only one transaction
executes validation/write at a time. This is also called optimistic concurrency control, since
the transaction executes fully in the hope that all will go well during validation.
Each transaction Ti has three timestamps:
• Start(Ti): the time when Ti started its execution.
• Validation(Ti): the time when Ti entered its validation phase.
• Finish(Ti): the time when Ti finished its write phase.
The serializability order is determined by the timestamp given at validation time, to increase
concurrency.
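The validation test itself can be sketched as follows, assuming each transaction records its
three timestamps plus its read and write sets (the Txn structure is an illustrative assumption):

    from dataclasses import dataclass, field

    @dataclass
    class Txn:
        start: int                     # Start(Ti)
        validation: int                # Validation(Ti)
        finish: int                    # Finish(Ti)
        read_set: set = field(default_factory=set)
        write_set: set = field(default_factory=set)

    def validate(tj, earlier):
        # tj passes validation when, for every Ti validated before it, either
        # Ti finished before tj started, or Ti finished before tj's validation
        # and Ti's write set does not intersect tj's read set
        for ti in earlier:
            if ti.finish < tj.start:
                continue
            if ti.finish < tj.validation and not (ti.write_set & tj.read_set):
                continue
            return False               # otherwise tj must be rolled back
        return True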
Multiple Granularities
Allow data items to be of various sizes and define a hierarchy of data granularities, where the
small granularities are nested within larger ones. This can be represented graphically as a tree
(but don't confuse it with the tree-locking protocol). When a transaction locks a node in the
tree explicitly, it implicitly locks all the node's descendants in the same mode.
Granularity of locking (level in the tree where locking is done):
• fine granularity (lower in tree): high concurrency, high locking overhead
• coarse granularity (higher in tree): low locking overhead, low concurrency
The levels, starting from the coarsest (top) level, are:
– database
– area
– file
– record
In addition to S and X lock modes, there are three additional lock modes with multiple
granularity:
• intention-shared (IS): indicates explicit locking at a lower level of the tree, but only with
shared locks.
• intention-exclusive (IX): indicates explicit locking at a lower level with exclusive or shared
locks.
• shared and intention-exclusive (SIX): the subtree rooted by that node is locked explicitly in
shared mode, and explicit locking is being done at a lower level with exclusive-mode locks.
Intention locks allow a higher-level node to be locked in S or X mode without having to
check all descendant nodes.
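The resulting compatibility relationships can be written out as a matrix; the sketch below
encodes the standard table (True means the two modes may be held on the same node by
different transactions):

    MODES = ["IS", "IX", "S", "SIX", "X"]
    COMPATIBLE = {
        "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
        "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
        "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
        "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
        "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
    }
    # e.g. two transactions may both hold intention locks high in the tree
    print(COMPATIBLE["IX"]["IS"])   # True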
Multiversion Schemes
Multiversion concurrency control techniques keep the old values of a data item when the item
is updated. Several versions (values) of an item are maintained. When a transaction requires
access to an item, an appropriate version is chosen to maintain the serialisability of the
concurrently executing schedule, if possible. The idea is that some read operations that would
be rejected in other techniques can still be accepted, by reading an older version of the item to
maintain serialisability.
In this technique, several versions X1, X2, … Xk of each data item X are kept by the system.
For each version, the value of version Xi and the following two timestamps are kept:
1. read_TS(Xi): The read timestamp of Xi; this is the largest of all the timestamps of
transactions that have successfully read version Xi.
2. write_TS(Xi): The write timestamp of Xi; this is the timestamp of the transaction that
wrote the value of version Xi.
Whenever a transaction T is allowed to execute a write_item(X) operation, a new version of
item X, Xk+1, is created, with both the write_TS(Xk+1) and the read_TS(Xk+1) set to
TS(T). Correspondingly, when a transaction T is allowed to read the value of version Xi, the
value of read_TS(Xi) is set to the largest of read_TS(Xi) and TS(T).
To ensure serialisability, the following two rules control the reading and writing of data
items:
1. If transaction T issues a write_item(X), and version Xi of X has the highest write_TS(Xi)
of all versions of X that is also less than or equal to TS(T), and read_TS(Xi) > TS(T), then
abort and roll back transaction T; otherwise, create a new version Xj of X with
read_TS(Xj) = write_TS(Xj) = TS(T).
2. If transaction T issues a read_item(X), find the version Xi of X that has the highest
write_TS(Xi) of all versions of X that is also less than or equal to TS(T); return the value of
Xi to transaction T, and set the value of read_TS(Xi) to the larger of TS(T) and the current
read_TS(Xi).
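A minimal sketch of these two rules, keeping each item's versions in a list of dictionaries (the
representation and item names are illustrative):

    versions = {"X": [{"value": 0, "read_TS": 0, "write_TS": 0}]}

    def version_for(item, ts):
        # the version with the highest write_TS that is <= TS(T)
        return max((v for v in versions[item] if v["write_TS"] <= ts),
                   key=lambda v: v["write_TS"])

    def read_item(ts, item):
        v = version_for(item, ts)                # rule 2: reads always succeed
        v["read_TS"] = max(v["read_TS"], ts)
        return v["value"]

    def write_item(ts, item, value):
        v = version_for(item, ts)
        if v["read_TS"] > ts:                    # rule 1: a younger reader saw v
            raise RuntimeError("roll back T")
        if v["write_TS"] == ts:
            v["value"] = value                   # T overwrites its own version
        else:
            versions[item].append({"value": value, "read_TS": ts, "write_TS": ts})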
In this scheme, there are three locking modes for an item: read, write and certify. Hence, the
state of an item X can be one of 'read locked', 'write locked', 'certify locked' and 'unlocked'.
The idea behind multiversion two-phase locking is to allow other transactions T' to
read an item X while a single transaction T holds a write lock on X. (Compare with the standard
locking scheme.) This is accomplished by allowing two versions for each item X; one version
must always have been written by some committed transaction. The second version X’ is
created when a transaction T acquires a write lock on the item. Other transactions can
continue to read the committed version X while T holds the write lock. Now transaction T
can change the value of X’ as needed, without affecting the value of the committed
version X. However, once T is ready to commit, it must obtain a certify lock on all items that
it currently holds write locks on before it can commit. The certify lock is not compatible with
read locks, so the transaction may have to delay its commit until all its write lock items are
released by any reading transactions. At this point, the committed version X of the data item
is set to the value of version X’, version X’ is discarded, and the certify locks are then
released. The lock compatibility table for this scheme is shown below (reconstructed in place
of the original figure):

          read    write   certify
read      yes     yes     no
write     yes     no      no
certify   no      no      no
In this multiversion two-phase locking scheme, reads can proceed concurrently with a write
operation, an arrangement not permitted under the standard two-phase locking schemes.
The cost is that a transaction may have to delay its commit until it obtains exclusive certify
locks on all items it has updated. It can be shown that this scheme avoids cascading aborts,
since transactions are only allowed to read the version X that was written by a committed
transaction. However, deadlock may occur.
Granularity of data items
All the concurrency control techniques discussed so far assume that the database is formed of
a number of data items. A database item could be chosen to be one of the following:
A database record.
A disk block.
A whole file.
Several trade-offs must be considered in choosing the data item size. We shall discuss data
item size in the context of locking, although similar arguments can be made for other
concurrency control techniques.
First, the larger the data item size is, the lower the degree of concurrency permitted. For
example, if the data item is a disk block, a transaction T that needs to lock a record A must
lock the whole disk block X that contains A. This is because a lock is associated with the
whole data item X. Now, if another transaction S wants to lock a different record B that
happens to reside in the same block X in a conflicting disk mode, it is forced to wait until the
first transaction releases the lock on block X. If the data item size was a single record,
transaction S could proceed as it would be locking a different data item (record B).
On the other hand, the smaller the data item size is, the more items will exist in the database.
Because every item is associated with a lock, the system will have a larger number of locks to
be handled by the lock manager. More lock and unlock operations will be performed, causing
a higher overhead. In addition, more storage space will be required for the lock table. For
timestamps, storage is required for the read_TS and write_TS for each data item, and the
overhead of handling a large number of items is similar to that in the case of locking.
The size of data items is often called the data item granularity. Fine granularity refers to small
item size, whereas coarse granularity refers to large item size. Given the above trade-offs, the
obvious question to ask is: What is the best item size? The answer is that it depends on the
types of transactions involved. If a typical transaction accesses a small number of records, it
is advantageous to have the data item granularity be one record. On the other hand, if a
transaction typically accesses many records of the same file, it may be better to have block or
file granularity so that the transaction will consider all those records as one (or a few) data
items.
Most concurrency control techniques have a uniform data item size. However, some
techniques have been proposed that permit variable item sizes. In these techniques, the data
item size may be changed to the granularity that best suits the transactions that are currently
executing on the system.
Deadlock
Another problem that may be introduced by the 2PL protocol is deadlock. The formal
definition of deadlock will be discussed below. Here, an example is used to give you an
intuitive idea about the deadlock situation. The two transactions that follow the 2PL protocol
can be interleaved as shown here (schedule reconstructed from the description below, in
place of the original figure):

Time   T1'                        T2'
1      read_lock(Y)
2      read_item(Y)
3                                 read_lock(X)
4                                 read_item(X)
5      write_lock(X) : waits
6                                 write_lock(Y) : waits
At time step 5, it is not possible for T1’ to acquire an exclusive lock on X as there is
already a shared lock on X held by T2’. Therefore, T1’ has to wait. Transaction
T2’ at time step 6 tries to get an exclusive lock on Y, but it is unable to as T1’ has a
shared lock on Y already. T2’ is put in waiting too. Therefore, both transactions wait
fruitlessly for the other to release a lock. This situation is known as a deadly embrace or
deadlock. The above schedule would terminate in a deadlock.
Conservative 2PL
A variation of the basic 2PL is conservative 2PL also known as static 2PL, which is a way of
avoiding deadlock. The conservative 2PL requires a transaction to lock all the data items it
needs in advance. If at least one of the required data items cannot be obtained then none of
the items are locked. Rather, the transaction waits and then tries again to lock all the items it
needs. Although conservative 2PL is a deadlock-free protocol, this solution further limits
concurrency.
Strict 2PL
In practice, the most popular variation of 2PL is strict 2PL, which guarantees a strict
schedule. (Strict schedules are those in which transactions can neither read nor write an item
X until the last transaction that wrote X has committed or aborted). In strict 2PL, a
transaction T does not release any of its locks until after it commits or aborts. Hence, no other
transaction can read or write an item that is written by T unless T has committed, leading to a
strict schedule for recoverability. Notice the difference between conservative and strict 2PL;
the former must lock all items before it starts, whereas the latter does not unlock any of its
items until after it terminates (by committing or aborting). Strict 2PL is not deadlock-free
unless it is combined with conservative 2PL.
In summary, all types of 2PL protocols guarantee serialisability (correctness) of a schedule
but limit concurrency. The use of locks can also cause two additional problems: deadlock and
livelock. Conservative 2PL is deadlock-free.
For example, assume a set of transactions {T0, T1, T2, ...,Tn}. T0 needs a resource X to
complete its task. Resource X is held by T1, and T1 is waiting for a resource Y, which is held
by T2. T2 is waiting for resource Z, which is held by T0. Thus, all the processes wait for each
other to release resources. In this situation, none of the processes can finish their task. This
situation is known as a deadlock.
Deadlocks are not healthy for a system. In case a system is stuck in a deadlock, the
transactions involved in the deadlock are either rolled back or restarted.
Deadlock Prevention
To prevent any deadlock situation in the system, the DBMS aggressively inspects the
operations that transactions are about to execute. The DBMS inspects the operations and
analyzes whether they can create a deadlock situation. If it finds that a deadlock situation
might occur, then that transaction is never allowed to be executed.
There are deadlock prevention schemes that use the timestamp ordering mechanism of
transactions in order to predetermine a deadlock situation.
Wait-Die Scheme
In this scheme, if a transaction requests to lock a resource (data item), which is already held
with a conflicting lock by another transaction, then one of the two possibilities may occur −
If TS(Ti) < TS(Tj) − that is Ti, which is requesting a conflicting lock, is older than Tj
− then Ti is allowed to wait until the data-item is available.
If TS(Ti) > TS(Tj) − that is, Ti is younger than Tj − then Ti dies. Ti is restarted later
with a random delay but with the same timestamp.
This scheme allows the older transaction to wait but kills the younger one.
Wound-Wait Scheme
In this scheme, if a transaction requests to lock a resource (data item) which is already held
with a conflicting lock by another transaction, one of the two possibilities may occur −
If TS(Ti) < TS(Tj), then Ti forces Tj to be rolled back − that is Ti wounds Tj. Tj is
restarted later with a random delay but with the same timestamp.
If TS(Ti) > TS(Tj), then Ti is forced to wait until the resource is available.
This scheme allows the younger transaction to wait; but when an older transaction requests
an item held by a younger one, the older transaction forces the younger one to abort and
release the item.
In both cases, the transaction that enters the system at a later stage is aborted.
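Both rules reduce to a comparison of timestamps; a compact sketch (the decision strings are
illustrative):

    def wait_die(ts_requester, ts_holder):
        # older requester waits; younger requester dies and restarts later
        return "wait" if ts_requester < ts_holder else "die"

    def wound_wait(ts_requester, ts_holder):
        # older requester wounds (aborts) the younger holder; younger one waits
        return "wound holder" if ts_requester < ts_holder else "wait"

    print(wait_die(1, 5))      # 'wait': the requester is older
    print(wound_wait(5, 1))    # 'wait': the requester is younger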
Wait-for Graph
This is a simple method available to track if any deadlock situation may arise. For each
transaction entering the system, a node is created. When a transaction Ti requests a lock on
an item, say X, which is held by some other transaction Tj, a directed edge is created from Ti
to Tj. If Tj releases item X, the edge between them is dropped and Ti locks the data item.
The system maintains this wait-for graph for every transaction waiting for some data items
held by others. The system keeps checking if there's any cycle in the graph.
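Cycle checking on the wait-for graph is plain depth-first search; a sketch, with the graph
given as an adjacency mapping from each waiting transaction to the transactions it waits for
(the names are illustrative):

    def find_cycle(wait_for):
        visiting, done = set(), set()
        def dfs(t, path):
            visiting.add(t)
            path.append(t)
            for u in wait_for.get(t, ()):
                if u in visiting:              # back edge: a deadlock cycle
                    return path[path.index(u):]
                if u not in done:
                    c = dfs(u, path)
                    if c:
                        return c
            visiting.discard(t)
            done.add(t)
            path.pop()
            return None
        for t in list(wait_for):
            if t not in done:
                c = dfs(t, [])
                if c:
                    return c
        return None

    # T1 waits for T2 and T2 waits for T1: the detector reports the cycle
    print(find_cycle({"T1": ["T2"], "T2": ["T1"]}))   # ['T1', 'T2']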
If a cycle is found, the system must break the deadlock. There are two options:
First, do not allow any request for an item which is already locked by another
transaction. This is not always feasible and may cause starvation, where a transaction
indefinitely waits for a data item and can never acquire it.
The second option is to roll back one of the transactions. It is not always feasible to
roll back the younger transaction, as it may be more important than the older one. With the
help of some relative algorithm, a transaction is chosen which is to be aborted. This
transaction is known as the victim and the process is known as victim selection.
Recovery System:
Crash Recovery
DBMS is a highly complex system with hundreds of transactions being executed every
second. The durability and robustness of a DBMS depends on its complex architecture and its
underlying hardware and system software. If it fails or crashes amid transactions, it is
expected that the system would follow some sort of algorithm or techniques to recover lost
data.
Failure Classification
To see where the problem has occurred, we generalize a failure into various categories, as
follows.
Transaction Failure
A transaction has to abort when it fails to execute or when it reaches a point from where it
can't go any further. This is called transaction failure, where only a few transactions or
processes are hurt.
Reasons for a transaction failure could be −
Logical errors − Where a transaction cannot complete because it has some code error
or any internal error condition.
System errors − Where the database system itself terminates an active transaction
because the DBMS is not able to execute it, or it has to stop because of some system
condition. For example, in case of deadlock or resource unavailability, the system
aborts an active transaction.
System Crash
There are problems, external to the system, that may cause the system to stop abruptly and
crash. For example, interruptions in the power supply may cause the failure of underlying
hardware or software.
Disk Failure
In the early days of technology evolution, it was a common problem that hard-disk drives or
storage drives failed frequently.
Disk failures include the formation of bad sectors, unreachability of the disk, a disk head
crash, or any other failure that destroys all or a part of disk storage.
Storage Structure
We have already described the storage system. In brief, the storage structure can be divided into
two categories −
Volatile storage − As the name suggests, a volatile storage cannot survive system
crashes. Volatile storage devices are placed very close to the CPU; normally they are
embedded onto the chipset itself. For example, main memory and cache memory are
examples of volatile storage. They are fast but can store only a small amount of
information.
Non-volatile storage − These memories are made to survive system crashes. They are
huge in data storage capacity, but slower in accessibility. Examples may include hard-
disks, magnetic tapes, flash memory, and non-volatile (battery backed up) RAM.
When a system crashes, it may have several transactions being executed and various files
opened for them to modify the data items. Transactions are made of various operations,
which are atomic in nature. But according to ACID properties of DBMS, atomicity of
transactions as a whole must be maintained, that is, either all the operations are executed or
none.
When a DBMS recovers from a crash, it should maintain the following:
It should check the states of all the transactions which were being executed.
A transaction may be in the middle of some operation; the DBMS must ensure the
atomicity of the transaction in this case.
It should check whether the transaction can be completed now or it needs to be rolled
back.
No transactions would be allowed to leave the DBMS in an inconsistent state.
There are two types of techniques, which can help a DBMS in recovering as well as
maintaining the atomicity of a transaction −
Maintaining the logs of each transaction, and writing them onto some stable storage
before actually modifying the database.
Maintaining shadow paging, where the changes are done on volatile memory, and
later, the actual database is updated.
Log-based Recovery
When a transaction enters the system and starts execution, it writes a log record about it:
<Tn, Start>
When the transaction modifies an item X, it writes a log record of the form <Tn, X, V1, V2>,
stating that transaction Tn has changed the value of X from V1 to V2. When the transaction
finishes, it logs:
<Tn, commit>
The database can be modified using two approaches:
Deferred database modification − All logs are written on to the stable storage and the
database is updated when a transaction commits.
Immediate database modification − Each log record is written before the actual database
modification; the database is modified immediately after every operation.
When more than one transaction are being executed in parallel, the logs are interleaved. At
the time of recovery, it would become hard for the recovery system to backtrack all logs, and
then start recovering. To ease this situation, most modern DBMS use the concept of
'checkpoints'.
Checkpoint
Keeping and maintaining logs in real time and in a real environment may fill out all the
memory space available in the system. As time passes, the log file may grow too big to be
handled at all. Checkpoint is a mechanism where all the previous logs are removed from the
system and stored permanently on a storage disk. A checkpoint declares a point before which
the DBMS was in a consistent state and all the transactions were committed.
Recovery
When a system with concurrent transactions crashes and recovers, it behaves in the
following manner −
The recovery system reads the logs backwards from the end to the last checkpoint.
If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or just
<Tn, Commit>, it puts the transaction in the redo-list.
If the recovery system sees a log with <Tn, Start> but no commit or abort log found,
it puts the transaction in the undo-list.
All the transactions in the undo-list are then undone and their logs are removed. All
the transactions in the redo-list and their previous logs are removed and then redone
before saving their logs.
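A sketch of building the redo- and undo-lists by scanning the log backwards from the end to
the last checkpoint, as described above; the tuple layout of the log records is illustrative.

    def build_recovery_lists(log):
        redo, undo, committed = [], [], set()
        for rec in reversed(log):            # read the logs backwards
            if rec == ("checkpoint",):
                break                        # stop at the last checkpoint
            kind, txn = rec[0], rec[1]
            if kind == "commit":
                committed.add(txn)
            elif kind == "start":
                (redo if txn in committed else undo).append(txn)
        return redo, undo

    log = [("checkpoint",),
           ("start", "T1"), ("update", "T1", "X", 1, 2), ("commit", "T1"),
           ("start", "T2"), ("update", "T2", "Y", 3, 4)]   # T2 never committed
    print(build_recovery_lists(log))   # (['T1'], ['T2'])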