BIT 424E Advanced Database Systems Notes

CHAPTER ONE

Transaction Processing

Transaction
o A transaction is a set of logically related operations; it groups several tasks into one unit.
o A transaction is an action, or series of actions, performed by a single user to access the contents of the database.

Example: Suppose a bank employee transfers Rs 800 from X's account to Y's account. This small transaction involves several low-level tasks:

X's Account

1. Open_Account(X)
2. Old_Balance = X.balance
3. New_Balance = Old_Balance - 800
4. X.balance = New_Balance
5. Close_Account(X)

Y's Account

1. Open_Account(Y)
2. Old_Balance = Y.balance
3. New_Balance = Old_Balance + 800
4. Y.balance = New_Balance
5. Close_Account(Y)

Operations of a Transaction:
The main operations of a transaction are:

Read(X): Reads the value of X from the database and stores it in a buffer in main memory.
Write(X): Writes the value back to the database from the buffer.

Let's take a debit transaction on an account, which consists of the following operations:

1. R(X);
2. X = X - 500;
3. W(X);

Let's assume the value of X before the transaction starts is 4000.

o The first operation reads X's value from the database and stores it in a buffer.
o The second operation decreases the value of X by 500, so the buffer will contain 3500.
o The third operation writes the buffer's value to the database, so X's final value will be 3500.

But it is possible that, because of a hardware, software or power failure, the transaction fails before finishing all the operations in the set.

For example: If the debit transaction above fails after executing operation 2, then X's value will remain 4000 in the database, which is not acceptable to the bank.

To solve this problem, we have two important operations:

Commit: It is used to save the work done permanently.

Rollback: It is used to undo the work done, restoring the database to its state before the transaction began.
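
To make this concrete, here is a minimal Python sketch of buffered reads and writes with commit and rollback semantics. It is illustrative only; the class and method names are assumptions, not any particular DBMS's API.

    # A toy account store with buffered writes: changes live in a buffer
    # until commit() copies them into the "database" (a dict); rollback()
    # simply discards the buffer.
    class ToyTransaction:
        def __init__(self, db):
            self.db = db          # the "database": a plain dict
            self.buffer = {}      # uncommitted changes

        def read(self, item):
            return self.buffer.get(item, self.db[item])

        def write(self, item, value):
            self.buffer[item] = value    # staged, not yet durable

        def commit(self):
            self.db.update(self.buffer)  # save the work permanently
            self.buffer.clear()

        def rollback(self):
            self.buffer.clear()          # undo all staged changes

    db = {"X": 4000}
    t = ToyTransaction(db)
    t.write("X", t.read("X") - 500)
    t.rollback()                         # failure before commit
    print(db["X"])                       # still 4000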

Transaction properties
A transaction has four properties, known as the ACID properties. These are used to maintain consistency in the database before and after the transaction.

Properties of a Transaction
1. Atomicity
2. Consistency
3. Isolation
4. Durability

Atomicity
o It states that either all operations of the transaction take place or none do; otherwise, the transaction is aborted.
o There is no midway: a transaction cannot occur partially. Each transaction is treated as one unit and either runs to completion or is not executed at all.

Atomicity involves the following two operations:

Abort: If a transaction aborts, then none of the changes it made are visible.

Commit: If a transaction commits, then all the changes it made are visible.


Example: Assume a transaction T consisting of T1 and T2. Account A holds Rs 600 and account B holds Rs 300; T transfers Rs 100 from account A to account B.

T1              T2
Read(A)
A := A - 100
Write(A)
                Read(B)
                B := B + 100
                Write(B)

After completion of the transaction, A holds Rs 500 and B holds Rs 400.

If transaction T fails after the completion of T1 but before the completion of T2, then the amount will be deducted from A but not added to B. This leaves the database in an inconsistent state. To ensure correctness of the database state, the transaction must be executed in its entirety.

Consistency
o The integrity constraints are maintained so that the database is consistent before and after the transaction.
o The execution of a transaction leaves the database in either its prior stable state or a new stable state.
o The consistency property states that every transaction sees a consistent database instance.
o A transaction transforms the database from one consistent state to another consistent state.

For example: The total amount must be the same before and after the transaction.

1. Total before T occurs = 600 + 300 = 900
2. Total after T occurs = 500 + 400 = 900

Therefore, the database is consistent. If T1 completes but T2 fails, an inconsistency will occur.

Isolation
o It ensures that the data used during the execution of one transaction cannot be used by a second transaction until the first one is completed.
o Under isolation, if transaction T1 is being executed and is using data item X, then X cannot be accessed by any other transaction T2 until T1 ends.
o The concurrency-control subsystem of the DBMS enforces the isolation property.

Durability
o The durability property states that once a transaction has completed, its changes are permanent.
o These changes cannot be lost due to the erroneous operation of a faulty transaction or a system failure. When a transaction completes, the database reaches a consistent state, and that state cannot be lost, even in the event of a system failure.
o The recovery subsystem of the DBMS is responsible for the durability property.

States of a Transaction
In a database, a transaction can be in one of the following states -

Active state

o The active state is the first state of every transaction. In this state, the transaction is being executed.
o For example: inserting, deleting or updating a record is done here, but the changes are not yet saved to the database.

Partially committed

o In the partially committed state, a transaction has executed its final operation, but the data is still not saved to the database.
o In a total-marks calculation example, the final display of the total marks is executed in this state.
Committed
A transaction is said to be in a committed state if it executes all its operations successfully. In this state, all its effects are now permanently saved in the database.

Failed state

o If any of the checks made by the database recovery system fails, then the transaction is said to be in the failed state.
o In the total-marks calculation example, if the database is unable to execute the query that fetches the marks, the transaction will fail.

Aborted

o If any of the checks fail and the transaction has reached the failed state, the database recovery system ensures that the database is returned to its previous consistent state; it aborts or rolls back the transaction to do so.
o If a transaction fails midway, all of its executed operations are rolled back so that the database returns to its consistent state.
o After aborting the transaction, the database recovery module selects one of two operations:
1. Re-start the transaction
2. Kill the transaction
Schedule
A series of operations from one transaction to another transaction is known as a schedule. It is used to preserve the order of the operations within each individual transaction.

1. Serial Schedule
A serial schedule is a type of schedule where one transaction is executed completely before another transaction starts. When the first transaction completes its cycle, the next transaction is executed.
For example: Suppose there are two transactions T1 and T2 with some operations. If there is no interleaving of operations, then there are the following two possible outcomes:

1. Execute all the operations of T1, followed by all the operations of T2.
2. Execute all the operations of T2, followed by all the operations of T1.

o In figure (a), Schedule A shows the serial schedule where T1 is followed by T2.
o In figure (b), Schedule B shows the serial schedule where T2 is followed by T1.

2. Non-serial Schedule
o If interleaving of operations is allowed, the result is a non-serial schedule.
o A non-serial schedule contains many possible orders in which the system can execute the individual operations of the transactions.
o In figures (c) and (d), Schedule C and Schedule D are non-serial schedules; they have interleaving of operations.

3. Serializable schedule
o Serializability is used to find non-serial schedules that allow transactions to execute concurrently without interfering with one another.
o It identifies which schedules are correct when the transactions' executions have interleaved operations.
o A non-serial schedule is serializable if its result is equal to the result of its transactions executed serially.
Here,

Schedule A and Schedule B are serial schedules.

Schedule C and Schedule D are non-serial schedules.


Conflict Serializable Schedule
o A schedule is called conflict serializable if, after swapping non-conflicting operations, it can be transformed into a serial schedule.
o A schedule is conflict serializable if it is conflict equivalent to a serial schedule.

Conflicting Operations
Two operations are conflicting if all of the following conditions are satisfied:

1. They belong to separate transactions.
2. They operate on the same data item.
3. At least one of them is a write operation.
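
The three conditions translate directly into code. Below is a small Python sketch (operations modeled as tuples, a representation assumed here for illustration):

    # Two operations conflict iff they come from different transactions,
    # touch the same data item, and at least one of them is a write.
    # Operations are modeled as (transaction_id, action, item) tuples.
    def conflicts(op1, op2):
        (t1, a1, x1), (t2, a2, x2) = op1, op2
        return t1 != t2 and x1 == x2 and "W" in (a1, a2)

    print(conflicts(("T1", "R", "A"), ("T2", "W", "A")))  # True
    print(conflicts(("T1", "R", "A"), ("T2", "R", "A")))  # False: both reads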

Example:
Swapping is possible only if S1 and S2 are logically equal.

Here, S1 = S2, so the two operations do not conflict.

Here, S1 ≠ S2, so the two operations conflict.

Conflict Equivalent
Two schedules are conflict equivalent if one can be transformed into the other by swapping non-conflicting operations. In the given example, S2 is conflict equivalent to S1 (S1 can be converted to S2 by swapping non-conflicting operations).

Two schedules are said to be conflict equivalent if and only if:

1. They contain the same set of transactions.
2. Each pair of conflicting operations is ordered in the same way.

Example:
Schedule S2 is a serial schedule because all operations of T1 are performed before any operation of T2 starts. Schedule S1 can be transformed into a serial schedule by swapping non-conflicting operations of S1.

After swapping the non-conflicting operations, schedule S1 becomes:

T1              T2
Read(A)
Write(A)
Read(B)
Write(B)
                Read(A)
                Write(A)
                Read(B)
                Write(B)

Hence, S1 is conflict serializable.
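
The standard test for conflict serializability builds a precedence graph and checks it for cycles. Here is a hedged Python sketch of that test (the schedule encoding is an assumption carried over from the sketch above):

    # Precedence graph test: add an edge Ti -> Tj whenever an operation of
    # Ti conflicts with a later operation of Tj; the schedule is conflict
    # serializable iff the graph has no cycle.
    def is_conflict_serializable(schedule):
        graph = {}
        for i, (ti, ai, xi) in enumerate(schedule):
            for tj, aj, xj in schedule[i + 1:]:
                if ti != tj and xi == xj and "W" in (ai, aj):
                    graph.setdefault(ti, set()).add(tj)
        # Depth-first search for a cycle.
        def has_cycle(node, stack, done):
            if node in stack:
                return True
            if node in done:
                return False
            done.add(node)
            return any(has_cycle(n, stack | {node}, done)
                       for n in graph.get(node, ()))
        done = set()
        return not any(has_cycle(t, set(), done) for t in list(graph))

    s1 = [("T1", "R", "A"), ("T1", "W", "A"), ("T2", "R", "A"), ("T2", "W", "A"),
          ("T1", "R", "B"), ("T1", "W", "B"), ("T2", "R", "B"), ("T2", "W", "B")]
    print(is_conflict_serializable(s1))   # True: only T1 -> T2 edges exist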

View Serializability
o A schedule is view serializable if it is view equivalent to a serial schedule.
o If a schedule is conflict serializable, then it is also view serializable.
o A schedule that is view serializable but not conflict serializable contains blind writes.

View Equivalent
Two schedules S1 and S2 are said to be view equivalent if they satisfy the following conditions:

1. Initial Read
The initial read of both schedules must be the same. Suppose there are two schedules S1 and S2: if in schedule S1 a transaction T1 reads data item A, then in S2 transaction T1 should also be the one that reads A.

The two schedules above are view equivalent because the initial read operation in S1 is done by T1 and in S2 it is also done by T1.

2. Updated Read
If in schedule S1 Ti reads A as updated by Tj, then in S2 Ti should also read the A updated by Tj.
The two schedules above are not view equivalent because, in S1, T3 reads the A updated by T2, while in S2, T3 reads the A updated by T1.

3. Final Write
The final write must be the same in both schedules. If in schedule S1 transaction T1 performs the last update of A, then in S2 the final write operation should also be done by T1.

The two schedules above are view equivalent because the final write operation in S1 is done by T3 and in S2 the final write operation is also done by T3.

Example:
Schedule S

With 3 transactions, the total number of possible serial schedules is 3! = 6:

1. S1 = <T1 T2 T3>
2. S2 = <T1 T3 T2>
3. S3 = <T2 T3 T1>
4. S4 = <T2 T1 T3>
5. S5 = <T3 T1 T2>
6. S6 = <T3 T2 T1>

Taking the first schedule S1:

Schedule S1

Step 1: Updated Read

In both schedules S and S1, there is no read other than the initial read, so this condition does not need to be checked.
Step 2: Initial Read

The initial read operation in S is done by T1, and in S1 it is also done by T1.

Step 3: Final Write

The final write operation in S is done by T3, and in S1 it is also done by T3. So S and S1 are view equivalent.

The first schedule S1 satisfies all three conditions, so we do not need to check the other schedules.

Hence, the view equivalent serial schedule is:

T1 → T2 → T3

Recoverability of Schedules
Sometimes a transaction may not execute completely due to a software issue, system crash or hardware failure. In that case, the failed transaction has to be rolled back. But some other transaction may also have used a value produced by the failed transaction, so we have to roll back those transactions as well.
Table 1 above shows a schedule with two transactions. T1 reads and writes the value of A, and that value is read and written by T2. T2 commits, but later on T1 fails. Due to the failure, we have to roll back T1. T2 should also be rolled back because it read the value written by T1, but T2 cannot be rolled back because it has already committed. This type of schedule is known as an irrecoverable schedule.

Irrecoverable schedule: A schedule is irrecoverable if Tj reads the updated value of Ti and Tj commits before Ti commits.

Table 2 above shows a schedule with two transactions. Transaction T1 reads and writes A, and that value is read and written by transaction T2. But later on, T1 fails. Due to this, we have to roll back T1. T2 should be rolled back because it has read the value written by T1. As T2 has not committed before T1 commits, we can roll back T2 as well. So this schedule is recoverable with cascading rollback.

Recoverable with cascading rollback: A schedule is recoverable with cascading rollback if Tj reads the updated value of Ti, and the commit of Tj is delayed until the commit of Ti.

Table 3 above shows a schedule with two transactions. Transaction T1 reads and writes A and commits, and only then is that value read and written by T2. This is a cascadeless recoverable schedule.

Failure Classification
To determine where a problem has occurred, we generalize failures into the following categories:

1. Transaction failure
2. System crash
3. Disk failure

1. Transaction failure
A transaction failure occurs when a transaction fails to execute or reaches a point from which it cannot proceed any further. When a single transaction or process fails in this way, it is called a transaction failure.

Reasons for a transaction failure include -

1. Logical errors: A logical error occurs when a transaction cannot complete because of a code error or an internal error condition.
2. System errors: These occur when the DBMS itself terminates an active transaction because the database system is not able to execute it. For example, the system aborts an active transaction in case of deadlock or resource unavailability.

2. System Crash

o A system crash can occur due to a power failure or another hardware or software failure. Example: an operating system error.

Fail-stop assumption: In a system crash, non-volatile storage is assumed not to be corrupted.

3. Disk Failure

o Disk failures occur when hard-disk drives or storage drives fail. This was a common problem in the early days of technology evolution.
o A disk failure can be caused by the formation of bad sectors, a disk head crash, unreachability of the disk, or any other failure that destroys all or part of disk storage.

Log-Based Recovery
o The log is a sequence of records. The log of each transaction is maintained in stable storage so that, if any failure occurs, the database can be recovered from it.
o If any operation is performed on the database, it is recorded in the log.
o The process of storing the log records must be completed before the actual change is applied to the database (write-ahead logging).
Let's assume there is a transaction to modify the City of a student. The following log records are written for this transaction.

o When the transaction is initiated, it writes a 'start' log record:

<Tn, Start>

o When the transaction modifies the City from 'Noida' to 'Bangalore', another log record is written to the file:

<Tn, City, 'Noida', 'Bangalore'>

o When the transaction is finished, it writes another log record to indicate the end of the transaction:

<Tn, Commit>

There are two approaches to modifying the database:

1. Deferred database modification:

o In the deferred modification technique, the transaction does not modify the database until it has committed.
o In this method, all the log records are created and stored in stable storage, and the database is updated only when the transaction commits.

2. Immediate database modification:

o In the immediate modification technique, the database may be modified while the transaction is still active.
o Here the database is modified immediately after every operation; each log record is followed by an actual database modification.

Recovery using Log records

When the system crashes, it consults the log to find which transactions need to be undone and which need to be redone.

1. If the log contains both the records <Ti, Start> and <Ti, Commit>, then transaction Ti needs to be redone.
2. If the log contains the record <Ti, Start> but contains neither <Ti, Commit> nor <Ti, Abort>, then transaction Ti needs to be undone.
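
These two rules can be implemented as a single scan over the log. The sketch below is a minimal Python illustration; the tuple-based log record format is an assumption made here, not a standard on-disk layout.

    # Classify transactions from a log into redo and undo sets, following
    # the two rules above. Records are tuples: ("start", T), ("commit", T),
    # ("abort", T) or ("update", T, item, old, new).
    def redo_undo_lists(log):
        started, committed, aborted = set(), set(), set()
        for rec in log:
            kind, txn = rec[0], rec[1]
            if kind == "start":
                started.add(txn)
            elif kind == "commit":
                committed.add(txn)
            elif kind == "abort":
                aborted.add(txn)
        redo = started & committed               # rule 1: start and commit seen
        undo = started - committed - aborted     # rule 2: started, never finished
        return redo, undo

    log = [("start", "T1"), ("update", "T1", "City", "Noida", "Bangalore"),
           ("commit", "T1"), ("start", "T2"),
           ("update", "T2", "City", "Bangalore", "Delhi")]
    print(redo_undo_lists(log))   # ({'T1'}, {'T2'})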

Checkpoint
o A checkpoint is a mechanism whereby all the previous log records are removed from the system and stored permanently on the storage disk.
o The checkpoint is like a bookmark. During the execution of transactions, such checkpoints are marked; as the transactions execute, log records are created for their steps.
o When a checkpoint is reached, the transactions' updates are written to the database, and the log records up to that point are removed from the log file. The log file is then updated with the steps of subsequent transactions until the next checkpoint, and so on.
o The checkpoint declares a point before which the DBMS was in a consistent state and all transactions were committed.

Recovery using Checkpoint

A recovery system recovers the database from a failure in the following manner:
o The recovery system reads the log file from the end towards the start, i.e., from T4 back to T1.
o The recovery system maintains two lists: a redo-list and an undo-list.
o A transaction is put in the redo-list if the recovery system sees a log with <Tn, Start> and <Tn, Commit>, or just <Tn, Commit>. All the transactions in the redo-list are redone, and their earlier log records are then removed.
o For example: In the log file, transactions T2 and T3 have both <Tn, Start> and <Tn, Commit>. Transaction T1 has only <Tn, Commit> in the log file, because it committed before the checkpoint was taken. Hence transactions T1, T2 and T3 are put in the redo-list.
o A transaction is put in the undo-list if the recovery system sees a log with <Tn, Start> but no commit or abort record. All the transactions in the undo-list are undone, and their log records are removed.
o For example: Transaction T4 has only <Tn, Start>, so T4 is put in the undo-list, since this transaction is incomplete and failed midway.
CHAPTER TWO
Deadlock in DBMS
A deadlock is a condition where two or more transactions wait indefinitely for one another to give up locks. Deadlock is one of the most feared complications in a DBMS, because no task ever finishes and all remain in the waiting state forever.

For example: In the Student table, transaction T1 holds a lock on some rows and needs to update some rows in the Grade table. Simultaneously, transaction T2 holds locks on some rows in the Grade table and needs to update the rows in the Student table held by transaction T1.

Now the main problem arises: transaction T1 is waiting for T2 to release its lock, and similarly, transaction T2 is waiting for T1 to release its lock. All activity comes to a halt and remains at a standstill until the DBMS detects the deadlock and aborts one of the transactions.

Deadlock Avoidance
o When a database can get stuck in a deadlock state, it is better to avoid the deadlock in the first place rather than aborting or restarting transactions, which wastes time and resources.
o A deadlock avoidance mechanism detects any potential deadlock situation in advance. A method like the "wait-for graph" can be used for detecting deadlocks, but it is suitable only for smaller databases. For larger databases, a deadlock prevention method should be used.

Deadlock Detection
When a transaction waits indefinitely to obtain a lock, the DBMS should detect whether the transaction is involved in a deadlock. The lock manager maintains a wait-for graph to detect deadlock cycles in the database.

Wait-for Graph

o This is a suitable method for deadlock detection. In this method, a graph is built from the transactions and their locks. If the graph contains a cycle (closed loop), then there is a deadlock.
o The system maintains the wait-for graph for every transaction that is waiting for data held by another, and keeps checking whether the graph contains any cycle.

In the wait-for graph for the above scenario, there is an edge from T1 to T2 and an edge from T2 to T1; the graph contains a cycle, so a deadlock exists.
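
The cycle check itself is ordinary graph traversal. Here is a minimal Python sketch; a real lock manager would maintain this graph incrementally rather than rebuilding it, and the dictionary encoding is an assumption for illustration.

    # Deadlock detection on a wait-for graph: nodes are transactions, and
    # an edge Ti -> Tj means Ti is waiting for a lock held by Tj.
    # A cycle means deadlock.
    def has_deadlock(wait_for):
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {t: WHITE for t in wait_for}
        def visit(t):
            color[t] = GRAY
            for u in wait_for.get(t, ()):
                if color.get(u, WHITE) == GRAY:   # back edge: cycle found
                    return True
                if color.get(u, WHITE) == WHITE and visit(u):
                    return True
            color[t] = BLACK
            return False
        return any(color[t] == WHITE and visit(t) for t in list(wait_for))

    # T1 waits for T2 (Grade rows) and T2 waits for T1 (Student rows):
    print(has_deadlock({"T1": ["T2"], "T2": ["T1"]}))   # True
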
Deadlock Prevention
o Deadlock prevention is suitable for large databases. If resources are allocated in such a way that a deadlock never occurs, then deadlock is prevented.
o The database management system analyzes the operations of each transaction to determine whether they can create a deadlock. If they can, the DBMS never allows that transaction to be executed.

Wait-Die scheme
In this scheme, if a transaction requests a resource that is already held with a conflicting lock by another transaction, the DBMS checks the timestamps of both transactions and allows only the older transaction to wait until the resource becomes available.

Let's assume there are two transactions Ti and Tj, and let TS(T) be the timestamp of any transaction T. The DBMS performs the following actions:

1. If TS(Ti) < TS(Tj) - that is, Ti is the older transaction - and Tj holds the resource that Ti is requesting, then Ti is allowed to wait until the data item is available. In other words, an older transaction waiting for a resource locked by a younger transaction is allowed to wait.
2. If TS(Ti) < TS(Tj) - that is, Ti is the older transaction - and Ti holds a resource for which the younger Tj is waiting, then Tj is killed ("dies") and is restarted later, after a random delay, but with the same timestamp.

Wound-Wait scheme

o In the wound-wait scheme, if an older transaction requests a resource held by a younger transaction, the older transaction forces the younger one to abort ("wounds" it) and release the resource. After a short delay, the younger transaction is restarted, but with the same timestamp.
o If a younger transaction requests a resource held by an older transaction, the younger transaction is asked to wait until the older one releases it.
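
The decision logic of the two schemes fits in a few lines. A hedged Python sketch follows (smaller timestamp = older transaction; the function names are assumptions for illustration):

    # What happens to the REQUESTING transaction `req` when it asks for a
    # lock held by `holder`, under each scheme.
    def wait_die(ts_req, ts_holder):
        # Older requester waits; younger requester dies (restart, same TS).
        return "wait" if ts_req < ts_holder else "die"

    def wound_wait(ts_req, ts_holder):
        # Older requester wounds (aborts) the younger holder; younger waits.
        return "wound holder" if ts_req < ts_holder else "wait"

    print(wait_die(7, 9))     # wait: older requester waits
    print(wait_die(9, 7))     # die: younger requester is restarted
    print(wound_wait(7, 9))   # wound holder: younger holder is aborted
    print(wound_wait(9, 7))   # wait: younger requester waits
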
CHAPTER THREE

Concurrency Control

DBMS Concurrency Control


Concurrency control is the management procedure required for controlling the concurrent execution of the operations that take place on a database.

But before looking at concurrency control, we should understand concurrent execution.

Concurrent Execution in DBMS

o In a multi-user system, multiple users can access and use the same database at the same time; this is known as concurrent execution of the database. It means that the same database is used simultaneously by different users on a multi-user system.
o While working with database transactions, multiple users often need to use the database to perform different operations; in that case, concurrent execution of the database takes place.
o This simultaneous execution should be performed in an interleaved manner, with no operation affecting the other executing operations, so that the consistency of the database is maintained. Concurrent execution of transaction operations therefore raises several challenging problems that need to be solved.

Problems with Concurrent Execution

In a database transaction, the two main operations are READ and WRITE. These two operations need to be managed during the concurrent execution of transactions, because if they are not performed properly in an interleaved manner, the data may become inconsistent. The following problems occur with the concurrent execution of operations:

Problem 1: Lost Update Problem (W - W Conflict)

This problem occurs when two different database transactions perform read/write operations on the same database items in an interleaved manner (i.e., concurrent execution) in a way that makes the values of the items incorrect, leaving the database inconsistent.

For example:

Consider the diagram below, where two transactions TX and TY are performed on the same account A, whose balance is $300.
o At time t1, transaction TX reads the value of account A, i.e., $300 (read only).
o At time t2, transaction TX deducts $50 from account A, which becomes $250 (deducted only, not yet updated/written).
o Alternately, at time t3, transaction TY reads the value of account A, which will still be $300, because TX has not updated the value yet.
o At time t4, transaction TY adds $100 to account A, which becomes $400 (added only, not yet updated/written).
o At time t6, transaction TX writes the value of account A, which is updated as $250, since TY has not updated the value yet.
o Similarly, at time t7, transaction TY writes the value of account A as computed at time t4, i.e., $400. This means the value written by TX is lost: the $250 update is lost.

Hence the data becomes incorrect and the database is left inconsistent.

Dirty Read Problem (W-R Conflict)

The dirty read problem occurs when one transaction updates an item of the database, the transaction then fails, and before the data is rolled back, the updated database item is read by another transaction. This creates a Write-Read conflict between the two transactions.

For example:

Consider two transactions TX and TY in the diagram below, performing read/write operations on account A, where the available balance in account A is $300:

o At time t1, transaction TX reads the value of account A, i.e., $300.
o At time t2, transaction TX adds $50 to account A, which becomes $350.
o At time t3, transaction TX writes the updated value to account A, i.e., $350.
o Then, at time t4, transaction TY reads account A, which will be read as $350.
o Then, at time t5, transaction TX rolls back due to a server problem, and the value changes back to $300 (as it was initially).
o But the value of account A remains $350 for transaction TY, as if committed; this is a dirty read, and the situation is therefore known as the Dirty Read Problem.
Unrepeatable Read Problem (R-W Conflict)
Also known as the Inconsistent Retrievals Problem, this occurs when, within a single transaction, two different values are read for the same database item.

For example:

Consider two transactions, TX and TY, performing read/write operations on account A, which has an available balance of $300. The diagram is shown below:

o At time t1, transaction TX reads the value from account A, i.e., $300.
o At time t2, transaction TY reads the value from account A, i.e., $300.
o At time t3, transaction TY updates the value of account A by adding $100 to the available balance, which then becomes $400.
o At time t4, transaction TY writes the updated value, i.e., $400.
o After that, at time t5, transaction TX reads the available value of account A, which will now be read as $400.
o This means that within the same transaction TX, two different values of account A are read: $300 initially and, after the update made by transaction TY, $400. This is an unrepeatable read and is therefore known as the Unrepeatable Read Problem.
Thus, to maintain consistency in the database and avoid such problems during concurrent execution, management is needed, and that is where the concept of concurrency control comes into play.

Concurrency Control
Concurrency control is the working concept required for controlling and managing the concurrent execution of database operations and thus avoiding inconsistencies in the database. To maintain the concurrency of the database, we have the concurrency control protocols.

Concurrency Control Protocols

The concurrency control protocols ensure the atomicity, consistency, isolation, durability and serializability of the concurrent execution of database transactions. These protocols are categorized as:

o Lock-Based Concurrency Control Protocols
o Timestamp Concurrency Control Protocols
o Validation-Based Concurrency Control Protocols

Lock-Based Protocols
In this type of protocol, a transaction cannot read or write data until it acquires an appropriate lock on it. There are two types of lock:

1. Shared lock:

o It is also known as a read-only lock. With a shared lock, the data item can only be read by the transaction.
o It can be shared between transactions because, while a transaction holds a shared lock, it cannot update the data item.

2. Exclusive lock:

o With an exclusive lock, the data item can be both read and written by the transaction.
o This lock is exclusive: multiple transactions cannot modify the same data simultaneously.
There are four types of lock protocols available:
1. Simplistic lock protocol
It is the simplest way of locking data during a transaction. Simplistic lock-based protocols require all transactions to obtain a lock on the data before inserting, deleting or updating it, and to unlock the data item after completing the transaction.

2. Pre-claiming Lock Protocol

o Pre-claiming lock protocols analyze the transaction to list all the data items on which it needs locks.
o Before the execution of the transaction begins, it requests the DBMS for locks on all of those data items.
o If all the locks are granted, this protocol allows the transaction to begin. When the transaction is completed, it releases all the locks.
o If any of the locks are not granted, the transaction rolls back and waits until all the locks are granted.

3. Two-phase locking (2PL)

o The two-phase locking protocol divides the execution of a transaction into three parts.
o In the first part, when the transaction starts executing, it seeks permission for the locks it requires.
o In the second part, the transaction acquires all the locks. The third part starts as soon as the transaction releases its first lock.
o In the third part, the transaction cannot demand any new locks; it only releases the acquired locks.

There are two phases in 2PL:

Growing phase: In the growing phase, new locks on data items may be acquired by the transaction, but none can be released.

Shrinking phase: In the shrinking phase, existing locks held by the transaction may be released, but no new locks can be acquired.

In the example below, if lock conversion is allowed, the following rules apply:

1. Upgrading a lock (from S(a) to X(a)) is allowed only in the growing phase.
2. Downgrading a lock (from X(a) to S(a)) must be done in the shrinking phase.
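
The core 2PL rule, "no lock may be acquired after the first unlock", can be enforced with a single flag per transaction. Below is a minimal Python sketch of the transaction-side rule; lock-manager details (queues, shared vs. exclusive modes) are deliberately omitted, and the class name is an assumption.

    # Enforce the two phases: once any lock is released, the shrinking
    # phase has begun and no further lock may be acquired.
    class TwoPhaseTxn:
        def __init__(self):
            self.locks = set()
            self.shrinking = False

        def lock(self, item):
            if self.shrinking:
                raise RuntimeError("2PL violation: lock after first unlock")
            self.locks.add(item)

        def unlock(self, item):
            self.shrinking = True       # growing phase is over for good
            self.locks.discard(item)

    t = TwoPhaseTxn()
    t.lock("A"); t.lock("B")            # growing phase
    t.unlock("A")                       # lock point passed; shrinking begins
    try:
        t.lock("C")
    except RuntimeError as e:
        print(e)                        # 2PL violation: lock after first unlock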

Example:
The following shows how unlocking and locking work with 2PL.

Transaction T1:

o Growing phase: steps 1-3
o Shrinking phase: steps 5-7
o Lock point: at step 3

Transaction T2:

o Growing phase: steps 2-6
o Shrinking phase: steps 8-9
o Lock point: at step 6

4. Strict Two-phase locking (Strict-2PL)

o The first phase of Strict-2PL is similar to 2PL: after acquiring all its locks, the transaction continues to execute normally.
o The difference between 2PL and Strict-2PL is that Strict-2PL does not release a lock immediately after using it.
o Strict-2PL waits until the whole transaction commits, and only then releases all the locks at once.
o Strict-2PL therefore does not have a gradual shrinking phase of lock release.

Unlike basic 2PL, it does not suffer from cascading aborts.

Timestamp Ordering Protocol

o The timestamp ordering protocol orders transactions based on their timestamps. The order of the transactions is simply the ascending order of their creation times.
o An older transaction has higher priority, so it executes first. To determine the timestamp of a transaction, this protocol uses the system time or a logical counter.
o Lock-based protocols manage the order between conflicting pairs of transactions at execution time, whereas timestamp-based protocols start working as soon as a transaction is created.
o Let's assume there are two transactions T1 and T2. Suppose transaction T1 entered the system at time 007 and transaction T2 entered at time 009. T1 has the higher priority, so it executes first, as it entered the system first.
o The timestamp ordering protocol also maintains the timestamps of the last 'read' and 'write' operations on each data item.

The basic timestamp ordering protocol works as follows:

1. Whenever a transaction Ti issues a Read(X) operation, check the following conditions:

o If W_TS(X) > TS(Ti), then the operation is rejected and Ti is rolled back.
o If W_TS(X) <= TS(Ti), then the operation is executed.
o R_TS(X) is updated to max(R_TS(X), TS(Ti)).

2. Whenever a transaction Ti issues a Write(X) operation, check the following conditions:

o If TS(Ti) < R_TS(X), then the operation is rejected and Ti is rolled back.
o If TS(Ti) < W_TS(X), then the operation is rejected and Ti is rolled back; otherwise the operation is executed and W_TS(X) is set to TS(Ti).

Where,

TS(Ti) denotes the timestamp of transaction Ti.

R_TS(X) denotes the read timestamp of data item X.

W_TS(X) denotes the write timestamp of data item X.
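
The read and write checks map directly to code. Here is a hedged Python sketch (a rejected operation means the transaction is rolled back and restarted with a new timestamp; the dictionaries standing in for per-item timestamps are an assumption):

    # Basic timestamp-ordering checks. R_TS and W_TS are kept per data
    # item; TS(Ti) is the transaction's fixed timestamp.
    R_TS, W_TS = {}, {}

    def read(ts, x):
        if W_TS.get(x, 0) > ts:             # a younger txn already wrote x
            return "reject: roll back"
        R_TS[x] = max(R_TS.get(x, 0), ts)   # record the read
        return "execute"

    def write(ts, x):
        if R_TS.get(x, 0) > ts or W_TS.get(x, 0) > ts:
            return "reject: roll back"      # a younger txn already read/wrote x
        W_TS[x] = ts
        return "execute"

    print(write(9, "X"))   # execute; W_TS(X) = 9
    print(read(7, "X"))    # reject: T7 is older than the writer T9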

Advantages and Disadvantages of the TO protocol:

o The TO protocol ensures serializability, since the precedence graph contains edges only from older transactions to younger ones and is therefore acyclic.
o The TO protocol ensures freedom from deadlock, since no transaction ever waits.
o However, the schedule may not be recoverable and may not even be cascade-free.

Validation-Based Protocol

The validation-based protocol is also known as the optimistic concurrency control technique. In this protocol, a transaction is executed in the following three phases:

1. Read phase: In this phase, transaction T is read and executed. It reads the values of the various data items and stores them in temporary local variables. It can perform all write operations on the temporary variables without updating the actual database.
2. Validation phase: In this phase, the values of the temporary variables are validated against the actual data to check whether serializability would be violated.
3. Write phase: If the transaction passes validation, the temporary results are written to the database; otherwise the transaction is rolled back.

Each phase has an associated timestamp:

Start(Ti): the time when Ti started its execution.

Validation(Ti): the time when Ti finished its read phase and started its validation phase.
Finish(Ti): the time when Ti finished its write phase.

o This protocol determines the timestamp used for serialization from the time of the validation phase, as that is the phase which actually determines whether the transaction will commit or roll back.
o Hence TS(T) = Validation(T).
o Serializability is determined during the validation process; it cannot be decided in advance.
o While executing transactions, it provides a greater degree of concurrency with a smaller number of conflicts.
o Thus it produces schedules with fewer rollbacks.
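
A standard form of the validation test checks each transaction being validated against every earlier-timestamped transaction that has already validated. The Python sketch below uses the phase timestamps defined above; the dictionary field names and the recorded read/write sets are assumptions made for illustration.

    # tj passes validation against an earlier-timestamped ti if ti finished
    # before tj started, or if ti's write set is disjoint from tj's read set
    # and ti finished its write phase before tj entered validation.
    def validates(ti, tj):
        if ti["finish"] < tj["start"]:
            return True                        # serial in real time
        if (ti["finish"] < tj["validation"]
                and not (ti["write_set"] & tj["read_set"])):
            return True
        return False                           # tj must be rolled back

    t1 = {"start": 1, "validation": 4, "finish": 6,
          "write_set": {"A"}, "read_set": {"A"}}
    t2 = {"start": 3, "validation": 7, "finish": 9,
          "write_set": {"B"}, "read_set": {"B"}}
    print(validates(t1, t2))   # True: write set {A} is disjoint from read set {B}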

Thomas Write Rule

The Thomas write rule guarantees a serializable order for the protocol. It improves on the basic timestamp ordering algorithm.

The Thomas write rules are as follows:

o If TS(T) < R_TS(X), then transaction T is aborted and rolled back, and the operation is rejected.
o If TS(T) < W_TS(X), then do not execute the Write(X) operation of the transaction and continue processing (the write is obsolete).
o If neither condition 1 nor condition 2 holds, then the WRITE operation of transaction T is executed and W_TS(X) is set to TS(T).
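
Compared with the basic TO write check sketched earlier, only one branch changes: an obsolete write is silently skipped instead of triggering a rollback. A minimal Python sketch:

    # Thomas write rule: identical to the basic TO write check, except a
    # write that is merely obsolete (TS(T) < W_TS(X) but TS(T) >= R_TS(X))
    # is ignored instead of rolling the transaction back.
    def thomas_write(ts, x, R_TS, W_TS):
        if R_TS.get(x, 0) > ts:
            return "reject: roll back"    # a younger txn already read x
        if W_TS.get(x, 0) > ts:
            return "ignore write"         # obsolete write: skip, keep going
        W_TS[x] = ts
        return "execute"

    R, W = {}, {}
    print(thomas_write(9, "X", R, W))   # execute; W_TS(X) = 9
    print(thomas_write(7, "X", R, W))   # ignore write: obsolete, no rollback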

If we use the Thomas write rule, some serializable schedules that are not conflict serializable can be permitted, as illustrated by the schedule in the given figure:
Figure: A Serializable Schedule that is not Conflict Serializable

In the figure above, T1's read of the data item precedes a conflicting write of the same data item, so the schedule is not conflict serializable.

The Thomas write rule ensures that T2's write is never seen by any transaction. If we delete the write operation in transaction T2, a conflict serializable schedule is obtained, as shown in the figure below.

Figure: A Conflict Serializable Schedule

Multiple Granularity
Let's start by understanding the meaning of granularity.

Granularity: the size of the data item that may be locked.

Multiple Granularity:

o It can be defined as hierarchically breaking up the database into blocks that can be locked.
o The multiple granularity protocol enhances concurrency and reduces lock overhead.
o It keeps track of what to lock and how to lock it.
o It makes it easy to decide whether to lock or unlock a data item. This type of hierarchy can be represented graphically as a tree.

For example: Consider a tree with four levels of nodes.

o The first (highest) level represents the entire database.
o The second level represents nodes of type area. The database consists of exactly these areas.
o Each area has children nodes known as files. No file can be present in more than one area.
o Finally, each file has children nodes known as records. A file contains exactly those records that are its child nodes, and no record is present in more than one file.
o Hence, the levels of the tree, starting from the top, are:
1. Database
2. Area
3. File
4. Record
In this example, the highest level represents the entire database, and the levels below it are area, file and record.

There are three additional lock modes with multiple granularity:

Intention Lock Modes

Intention-shared (IS): indicates explicit locking at a lower level of the tree, but only with shared locks.

Intention-exclusive (IX): indicates explicit locking at a lower level with exclusive or shared locks.

Shared & intention-exclusive (SIX): the node is locked in shared mode, and explicit locking is being done at a lower level in exclusive mode by the same transaction.
Compatibility Matrix with Intention Lock Modes: The table below describes the compatibility matrix for these lock modes:
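
The original figure is not reproduced here; the standard compatibility matrix for these five modes is as follows (Y = compatible, N = not compatible):

          IS    IX    S     SIX   X
    IS    Y     Y     Y     Y     N
    IX    Y     Y     N     N     N
    S     Y     N     Y     N     N
    SIX   Y     N     N     N     N
    X     N     N     N     N     N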

The protocol uses these intention lock modes to ensure serializability. It requires that a transaction attempting to lock a node follow these rules:

o Transaction T1 must observe the lock-compatibility matrix.
o Transaction T1 must lock the root of the tree first, and it can lock it in any mode.
o Transaction T1 can lock a node in S or IS mode only if it currently has the parent of that node locked in either IX or IS mode.
o Transaction T1 can lock a node in X, SIX or IX mode only if it currently has the parent of that node locked in either IX or SIX mode.
o Transaction T1 can lock a node only if it has not previously unlocked any node (i.e., T1 is two-phase).
o Transaction T1 can unlock a node only if it currently has none of the children of that node locked.

Observe that in multiple granularity, locks are acquired in top-down order and released in bottom-up order.

o If transaction T1 reads record Ra9 in file Fa, then T1 needs to lock the database, area A1 and file Fa in IS mode, and finally lock Ra9 in S mode.
o If transaction T2 modifies record Ra9 in file Fa, then it can do so after locking the database, area A1 and file Fa in IX mode, and finally locking Ra9 in X mode.
o If transaction T3 reads all the records in file Fa, then T3 needs to lock the database and area A1 in IS mode, and finally lock Fa in S mode.
o If transaction T4 reads the entire database, then T4 needs to lock the database in S mode.

Recovery with Concurrent Transactions

o When more than one transaction is executing, their logs are interleaved. At recovery time, it would be difficult for the recovery system to backtrack through all the logs and then start recovering.
o To ease this situation, most DBMSs use the 'checkpoint' concept.

Checkpoints were discussed in the transaction processing chapter of these notes, so you can review those concepts to make things clearer.

CHAPTER FOUR
Indexing and B+ Trees

Indexing in DBMS
o Indexing is used to optimize the performance of a database by minimizing the number of disk accesses required when a query is processed.
o An index is a data structure used to quickly locate and access the data in a database table.

Index structure:
Indexes can be created using some database columns.

o The first column of the index is the search key, which contains a copy of the primary key or a candidate key of the table. These values are stored in sorted order so that the corresponding data can be accessed easily.
o The second column of the index is the data reference. It contains a set of pointers holding the address of the disk block where the value of the particular key can be found.

Indexing Methods
Ordered indices
Indices are usually sorted to make searching faster. Indices that are sorted are known as ordered indices.

Example: Suppose we have an employee table with thousands of records, each 10 bytes long. If the IDs start at 1, 2, 3, ... and so on, and we have to search for the employee with ID 543:

o In the case of a database with no index, we have to search the disk blocks from the start until we reach 543. The DBMS will reach the record after reading 543*10 = 5430 bytes.
o In the case of an index, we search using the index, and (with 2-byte index entries) the DBMS reaches the record after reading 542*2 = 1084 bytes, which is far less than in the previous case.

Primary Index

o If the index is created on the basis of the primary key of the table, it is known as a primary index. Primary keys are unique to each record, so there is a 1:1 relation between index entries and records.
o As primary keys are stored in sorted order, the performance of the search operation is quite efficient.
o A primary index can be classified into two types: dense index and sparse index.

Dense index

o A dense index contains an index record for every search key value in the data file. It makes searching faster.
o The number of records in the index table is the same as the number of records in the main table.
o It needs more space to store the index records themselves. Each index record holds the search key value and a pointer to the actual record on the disk.

Sparse index

o Here, index records appear only for some of the items in the data file. Each index record points to a block.
o Instead of pointing to each record in the main table, the index points to records in the main table at intervals.

Clustering Index
o A clustered index is defined on an ordered data file. Sometimes the index is created on non-primary-key columns, which may not be unique for each record.
o In this case, to identify records faster, we group two or more columns to obtain a unique value and create an index from them. This method is called a clustering index.
o Records that have similar characteristics are grouped together, and indexes are created for these groups.

Example: Suppose a company has several employees in each department. If we use a clustering index, all employees with the same Dept_ID are considered to be within a single cluster, and the index pointers point to the cluster as a whole. Here Dept_ID is a non-unique key.
This scheme becomes confusing when one disk block is shared by records belonging to different clusters; using a separate disk block for each cluster is the better technique.

Secondary Index
With sparse indexing, as the size of the table grows, the size of the mapping also grows. These mappings are usually kept in primary memory so that address fetches are fast; the secondary memory then locates the actual data based on the address obtained from the mapping. If the mapping size grows, fetching the address itself becomes slower, and the sparse index is no longer efficient. To overcome this problem, secondary indexing is introduced.

In secondary indexing, to reduce the size of the mapping, another level of indexing is introduced. In this method, a large range of column values is initially selected so that the mapping size of the first level stays small. Each range is then further divided into smaller ranges. The mapping of the first level is stored in primary memory so that address fetches are fast. The mapping of the second level, and the actual data, are stored in secondary memory (hard disk).

For example:

o If you want to find the record with roll number 111 in the diagram, the search first finds the highest entry that is smaller than or equal to 111 in the first-level index. It gets 100 at this level.
o Then, in the second-level index, it again finds the largest entry <= 111 and gets 110. Using the address associated with 110, it goes to the data block and searches each record sequentially until it finds 111.
o This is how a search is performed in this method. Insertion, update and deletion are done in the same manner.
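
The two-level lookup described above amounts to two "largest entry <= key" searches followed by a sequential scan. Here is a minimal Python sketch of that walk; the concrete entries and pointer names are assumptions chosen to mirror the roll-number example.

    # Two-level sparse index lookup: at each level, pick the largest entry
    # that is <= the search key, then scan the final data block sequentially.
    import bisect

    def largest_leq(entries, key):
        # entries: sorted list of (key, pointer); return the pointer of the
        # largest key <= the search key.
        keys = [k for k, _ in entries]
        return entries[bisect.bisect_right(keys, key) - 1][1]

    first_level = [(1, "L2-a"), (100, "L2-b")]                 # in main memory
    second_level = {"L2-b": [(100, "blk7"), (110, "blk8")]}    # on disk
    blocks = {"blk8": [110, 111, 112]}

    l2 = largest_leq(first_level, 111)         # -> "L2-b"  (entry 100)
    blk = largest_leq(second_level[l2], 111)   # -> "blk8"  (entry 110)
    print(111 in blocks[blk])                  # sequential scan finds 111
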
B+ Tree
o The B+ tree is a balanced n-ary search tree (not a binary tree). It follows a multi-level index format.
o In a B+ tree, the leaf nodes hold the actual data pointers, and the B+ tree ensures that all leaf nodes remain at the same height.
o In a B+ tree, the leaf nodes are linked together in a linked list; therefore, a B+ tree supports both random access and sequential access.

Structure of a B+ Tree
o In a B+ tree, every leaf node is at an equal distance from the root node. A B+ tree is of order n, where n is fixed for the whole tree.
o It contains internal nodes and leaf nodes.

Internal node

o An internal node of the B+ tree can contain at least n/2 pointers, except for the root node.
o At most, an internal node contains n pointers.

Leaf node

o A leaf node of the B+ tree can contain at least n/2 record pointers and n/2 key values.
o At most, a leaf node contains n record pointers and n key values.
o Every leaf node of the B+ tree contains one block pointer P that points to the next leaf node.

Searching a record in a B+ Tree

Suppose we have to search for 55 in the B+ tree structure below. First, we fetch the intermediate node, which directs us to the leaf node that may contain the record for 55.

In the intermediate node, we find the branch between 50 and 75. At the end, we are redirected to the third leaf node, where the DBMS performs a sequential search to find 55.
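
The descent-then-scan procedure looks like this in a minimal Python sketch. The node layout and the concrete keys are assumptions chosen to mirror the example (a real B+ tree also stores record pointers and next-leaf links in the leaves).

    # B+ tree search: descend internal nodes by key comparison, then do a
    # sequential scan within the leaf.
    import bisect

    class Node:
        def __init__(self, keys, children=None):
            self.keys = keys
            self.children = children    # None marks a leaf

    def search(node, key):
        while node.children is not None:              # descend internal nodes
            node = node.children[bisect.bisect_right(node.keys, key)]
        return key in node.keys                       # sequential scan in leaf

    # A tree echoing the example: the root branches at 30, 50 and 75.
    leaves = [Node([10, 20]), Node([30, 40]),
              Node([50, 55, 65, 70]), Node([75, 80, 85])]
    root = Node([30, 50, 75], leaves)
    print(search(root, 55))   # True: 50 <= 55 < 75, so the third leaf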

B+ Tree Insertion
Suppose we want to insert the value 60 into the structure below. It belongs in the 3rd leaf node, after 55. This is a balanced tree, and that leaf node is already full, so we cannot insert 60 there.

In this case, we have to split the leaf node so that 60 can be inserted into the tree without affecting the fill factor, balance and order.
The 3rd leaf node would then hold the values (50, 55, 60, 65, 70), with its parent branching at 50. We split the leaf node in the middle so that the balance of the tree is not altered, grouping (50, 55) and (60, 65, 70) into two leaf nodes.

If these two are to be leaf nodes, the intermediate node cannot branch only at 50; 60 must be added to it, and then we can have a pointer to the new leaf node.

This is how an entry is inserted when there is overflow. In the normal scenario, it is very easy to find the leaf node where the value fits and place it there.

B+ Tree Deletion
Suppose we want to delete 60 from the example above. In this case, we have to remove 60 from the intermediate node as well as from the 4th leaf node. If we removed it only from the intermediate node, the tree would no longer satisfy the rules of a B+ tree, so we must modify it to keep the tree balanced.

After deleting node 60 from the B+ tree above and rearranging the nodes, it will appear as follows:
CHAPTER FIVE
Query Processing in DBMS
Query processing is the activity of extracting data from the database. Query processing takes several steps to fetch the data from the database. The steps involved are:

1. Parsing and translation
2. Optimization
3. Evaluation

Query processing works in the following way:

Parsing and Translation

Query processing includes a number of activities for data retrieval. Initially, the user query, given in a high-level database language such as SQL, is translated into expressions that can be used at the physical level of the file system. After this, the actual evaluation of the query, together with a variety of query-optimizing transformations, takes place.
A query written in a human-readable language must therefore be translated into the system's internal representation before it can be processed. SQL, or Structured Query Language, is the most suitable choice for humans, but it is not well suited as the internal representation of a query within the system; relational algebra is well suited for that internal representation. The translation step in query processing resembles a parser for the query. When a user executes a query, the parser in the system checks the syntax of the query and verifies the names of the relations in the database, the tuples, and finally the required attribute values, in order to generate the internal form of the query. The parser creates a tree of the query, known as a 'parse tree', which is then translated into relational algebra; in doing so, it also replaces all uses of views appearing in the query.

The working of query processing can thus be understood from the diagram described below:
Suppose a user executes a query. As we have seen, there are various methods of extracting data from the database. In SQL, suppose a user wants to fetch the salaries of the employees whose salary is greater than 10000. The following query is issued:

select salary from Employee where salary > 10000;

To make the system understand the user query, it needs to be translated into relational algebra. This query can be brought into relational algebra form in either of two equivalent ways:

o σsalary>10000 (πsalary (Employee))
o πsalary (σsalary>10000 (Employee))

After translating the given query, each relational algebra operation can be executed using one of several different algorithms. In this way, query processing begins its work.

Evaluation
In addition to the relational algebra translation, it is necessary to annotate the translated relational algebra expression with instructions specifying how to evaluate each operation. Thus, after translating the user query, the system executes a query evaluation plan.

Query Evaluation Plan

o In order to fully evaluate a query, the system needs to construct a query evaluation plan.
o The annotations in the evaluation plan may refer to the algorithms to be used for a particular operation or the specific indexes involved.
o Relational algebra annotated in this way is referred to as evaluation primitives. The evaluation primitives carry the instructions needed for evaluating each operation.
o Thus, a query evaluation plan defines a sequence of primitive operations used for evaluating a query. The query evaluation plan is also referred to as the query execution plan.
o A query execution engine is responsible for generating the output of the given query. It takes the query execution plan, executes it, and finally produces the output for the user query.

Optimization
o The cost of query evaluation can vary greatly for different types of query. Although the system is responsible for constructing the evaluation plan, the user does not need to write the query efficiently.
o Usually, a database system generates an efficient query evaluation plan that minimizes cost. This task, performed by the database system, is known as query optimization.
o To optimize a query, the query optimizer needs an estimated cost for each operation, because the overall cost depends on the memory allocated to the various operations, the execution costs, and so on.

Finally, after selecting an evaluation plan, the system evaluates the query and produces its output.
