Unit 5
Deletion of record i:
alternatives:
move records i + 1, …, n to i, …, n – 1
move record n to i
do not move records, but link all free records on a free list (see the sketch below)
Deleting record 3 and compacting
Deleting record 3 and moving last record
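A minimal sketch of the free-list alternative (illustrative Python, not from the slides; the class and field names are invented): deleted slots are threaded onto a chain whose head would normally live in the file header, and insertions reuse freed slots before growing the file.

    class FixedRecordFile:
        # Toy fixed-length record file; free_head plays the role of the
        # free-list pointer kept in the file header.
        def __init__(self):
            self.slots = []        # each slot: a record, or ("FREE", next_free)
            self.free_head = -1    # index of the first free slot; -1 means none

        def delete(self, i):
            # Do not move any records: link slot i onto the free list.
            self.slots[i] = ("FREE", self.free_head)
            self.free_head = i

        def insert(self, record):
            if self.free_head != -1:               # reuse a freed slot first
                i = self.free_head
                self.free_head = self.slots[i][1]  # unlink it from the chain
                self.slots[i] = record
                return i
            self.slots.append(record)              # no free slot: grow the file
            return len(self.slots) - 1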
Variable-Length Records
Variable-length records arise in database systems in several ways:
Storage of multiple record types in a file.
Record types that allow variable lengths for one or more fields, such as strings (varchar).
Record types that allow repeating fields (used in some older data models).
Attributes are stored in order.
Variable-length attributes are represented by fixed-size (offset, length) pairs, with the actual data stored after all fixed-length attributes (see the sketch below).
Null values are represented by a null-value bitmap.
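A minimal sketch of this layout (illustrative Python; the field widths are assumptions: 2-byte offsets and lengths, and a 1-byte null bitmap supporting up to 8 attributes):

    import struct

    def encode_record(values):
        # All attributes here are varchar. The fixed part holds one
        # (offset, length) pair per attribute plus the null bitmap;
        # the variable-length bytes follow the fixed part.
        n = len(values)                      # assumes n <= 8 (1-byte bitmap)
        null_bitmap = 0
        fixed = b""
        var_data = b""
        base = 4 * n + 1                     # where variable data begins
        for i, v in enumerate(values):
            if v is None:
                null_bitmap |= 1 << i        # mark attribute i as null
                fixed += struct.pack("<HH", 0, 0)
            else:
                data = v.encode()
                fixed += struct.pack("<HH", base + len(var_data), len(data))
                var_data += data
        return fixed + struct.pack("<B", null_bitmap) + var_data

For instance, encode_record(["10101", "Srinivasan", None]) produces a record whose third attribute is marked null in the bitmap and whose string data sits after the fixed-length prefix.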
Variable-Length Records: Slotted Page Structure
(Figure: a slotted page has a header containing the number of record entries, the location of the end of free space, and an array of (location, size) entries, one per record; records themselves are allocated from the end of the block.)
(Figure: multitable clustering file organization storing department and instructor records together.)
Multitable Clustering File Organization (cont.)
good for queries involving the join of department and instructor, and for queries involving a single department and its instructors
bad for queries involving only department
results in variable-size records
Can add pointer chains to link records of a particular relation
Data Dictionary Storage
The data dictionary (also called the system catalog) stores metadata, that is, data about data, such as:
Information about relations
names of relations
names, types and lengths of attributes of each relation
names and definitions of views
integrity constraints
User and accounting information, including passwords
Statistical and descriptive data
number of tuples in each relation
Physical file organization information
How relation is stored (sequential/hash/…)
Physical location of relation
(Figure: relational representation on disk vs. specialized data structures designed for efficient access in memory.)
Storage Access
Index entries have the form (search-key, pointer).
Index files are typically much smaller than the original file
Two basic kinds of indices:
Ordered indices: search keys are stored in sorted order
Hash indices: search keys are distributed uniformly across "buckets" using a "hash function".
Index Evaluation Metrics
Access types supported efficiently. E.g.,
records with a specified value in the attribute
or records with an attribute value falling in a specified range of
values.
Access time
Insertion time
Deletion time
Space overhead
Ordered Indices
In an ordered index, index entries are stored sorted on the search key
value. E.g., author catalog in library.
Primary index: in a sequentially ordered file, the index whose search
key specifies the sequential order of the file.
Also called clustering index
The search key of a primary index is usually but not necessarily the
primary key.
Secondary index: an index whose search key specifies an order
different from the sequential order of the file. Also called
non-clustering index.
Index-sequential file: ordered sequential file with a primary index.
Dense Index Files
Dense index — Index record appears for every search-key
value in the file.
E.g. index on ID attribute of instructor relation
Dense Index Files (Cont.)
Dense index on dept_name, with instructor file sorted on
dept_name
Sparse Index Files
Sparse Index: contains index records for only some search-key
values.
Applicable when records are sequentially ordered on search-key
To locate a record with search-key value K we:
Find index record with largest search-key value < K
Search file sequentially starting at the record to which the index
record points
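A sketch of this lookup (illustrative Python; the sparse index is modeled as a sorted list of (least key in block, block number) pairs, which is an assumed layout, and the scan starts at the entry with the largest key not exceeding K):

    import bisect

    def sparse_lookup(index, blocks, k):
        # index:  sorted list of (least_key_in_block, block_no)
        # blocks: list of blocks, each a list of (key, record) sorted by key
        keys = [key for key, _ in index]
        pos = bisect.bisect_right(keys, k) - 1   # largest indexed key <= k
        if pos < 0:
            return []                            # k precedes every block
        _, block_no = index[pos]
        matches = []
        while block_no < len(blocks):            # sequential scan from here
            for key, rec in blocks[block_no]:
                if key == k:
                    matches.append(rec)
                elif key > k:
                    return matches               # past k: stop scanning
            block_no += 1
        return matches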
Sparse Index Files (Cont.)
Compared to dense indices:
Less space and less maintenance overhead for insertions and
deletions.
Generally slower than dense index for locating records.
Good tradeoff: sparse index with an index entry for every block in file,
corresponding to least search-key value in the block.
Secondary Indices Example
If there are K search-key values in the file, the height of the tree is no more than ⌈log⌈n/2⌉(K)⌉.
A node is generally the same size as a disk block, typically 4
kilobytes
and n is typically around 100 (40 bytes per index entry).
With 1 million search key values and n = 100,
at most ⌈log50(1,000,000)⌉ = 4 nodes are accessed in a lookup.
Contrast this with a balanced binary tree with 1 million search key
values — around 20 nodes are accessed in a lookup
above difference is significant since every node access may need
a disk I/O, costing around 20 milliseconds
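As a worked check of these numbers (not on the original slide): with n = 100 we get ⌈n/2⌉ = 50, so for K = 1,000,000 keys the height is at most ⌈log50(1,000,000)⌉ = ⌈ln(1,000,000)/ln(50)⌉ = ⌈13.82/3.91⌉ = ⌈3.53⌉ = 4, while a balanced binary tree needs about log2(1,000,000) ≈ 20 levels, one potential disk I/O each.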
Updates on B+-Trees: Insertion
1. Find the leaf node in which the search-key value would appear
2. If the search-key value is already present in the leaf node
1. Add record to the file
2. If necessary add a pointer to the bucket.
3. If the search-key value is not present, then
1. add the record to the main file (and create a bucket if
necessary)
2. If there is room in the leaf node, insert (key-value, pointer)
pair in the leaf node
3. Otherwise, split the node (along with the new (key-value,
pointer) entry) as discussed in the next slide.
Updates on B+-Trees: Insertion (Cont.)
Splitting a leaf node:
take the n (search-key value, pointer) pairs (including the one being inserted) in sorted order. Place the first ⌈n/2⌉ in the original node, and the rest in a new node.
let the new node be p, and let k be the least key value in p. Insert
(k,p) in the parent of the node being split.
If the parent is full, split it and propagate the split further up.
Splitting of nodes proceeds upwards till a node that is not full is found.
In the worst case the root node may be split increasing the height
of the tree by 1.
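A compact sketch of the leaf split (illustrative Python; a node is modeled as a dict of parallel sorted lists, the function name is invented, and parent and sibling pointer maintenance is omitted):

    import bisect

    def insert_into_leaf(leaf, key, ptr, n):
        # A leaf holds at most n - 1 keys. Returns (new_leaf, separator)
        # when a split occurs, else (None, None).
        pos = bisect.bisect_left(leaf["keys"], key)
        leaf["keys"].insert(pos, key)
        leaf["ptrs"].insert(pos, ptr)
        if len(leaf["keys"]) < n:              # room left: no split needed
            return None, None
        # Split: the first ceil(n/2) pairs stay, the rest move out.
        mid = (n + 1) // 2
        new_leaf = {"keys": leaf["keys"][mid:], "ptrs": leaf["ptrs"][mid:]}
        leaf["keys"], leaf["ptrs"] = leaf["keys"][:mid], leaf["ptrs"][:mid]
        # The least key of the new leaf becomes the separator for the parent.
        return new_leaf, new_leaf["keys"][0]

The returned (new_leaf, separator) pair is what gets inserted into the parent, which may split in turn.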
Result of splitting node containing Brandt, Califieri and Crick on inserting Adams
Next step: insert entry with (Califieri, pointer-to-new-node) into parent
B+-Tree Insertion
Examples of B+-Tree Deletion
Node with Gold and Katz became underfull, and was merged with its sibling
Parent node becomes underfull, and is merged with its sibling
Value separating two nodes (at the parent) is pulled down when merging
Root node then has only one child, and is deleted
Updates on B+-Trees: Deletion
Find the record to be deleted, and remove it from the main file and
from the bucket (if present)
Remove (search-key value, pointer) from the leaf node if there is no
bucket or if the bucket has become empty
If the node has too few entries due to the removal, and the entries in
the node and a sibling fit into a single node, then merge siblings:
Insert all the search-key values in the two nodes into a single node
(the one on the left), and delete the other node.
Delete the pair (Ki–1, Pi), where Pi is the pointer to the deleted
node, from its parent, recursively using the above procedure.
Updates on B+-Trees: Deletion (Cont.)
Otherwise, if the node has too few entries due to the removal, but the
entries in the node and a sibling do not fit into a single node, then
redistribute pointers:
Redistribute the pointers between the node and a sibling such that
both have more than the minimum number of entries.
Update the corresponding search-key value in the parent of the
node.
The node deletions may cascade upwards till a node which has ⌈n/2⌉ or more pointers is found.
If the root node has only one pointer after deletion, it is deleted and
the sole child becomes the root.
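A sketch of the merge-vs-redistribute decision for a leaf (illustrative Python; the left sibling is assumed to exist, max_keys is n – 1 for a leaf, and the recursive fix-up of the parent is omitted):

    def fix_leaf_underflow(leaf, left_sib, parent, sep_pos, max_keys):
        # sep_pos: position in parent["keys"] of the key separating
        # left_sib (child sep_pos) from leaf (child sep_pos + 1).
        if len(leaf["keys"]) + len(left_sib["keys"]) <= max_keys:
            # Merge: everything fits in one node; keep the left sibling,
            # then the separator must be removed from the parent
            # (recursively, not shown).
            left_sib["keys"] += leaf["keys"]
            left_sib["ptrs"] += leaf["ptrs"]
            del parent["keys"][sep_pos]
            del parent["ptrs"][sep_pos + 1]   # pointer to the dead node
        else:
            # Redistribute: borrow the last entry of the left sibling so
            # both nodes meet the minimum, and update the separator.
            leaf["keys"].insert(0, left_sib["keys"].pop())
            leaf["ptrs"].insert(0, left_sib["ptrs"].pop())
            parent["keys"][sep_pos] = leaf["keys"][0]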
Non-Unique Search Keys
Alternatives to scheme described earlier
Buckets on separate block (bad idea)
List of tuple pointers with each key
Extra code to handle long lists
Deletion of a tuple can be expensive if there are many
duplicates on search key (why?)
Low space overhead, no extra cost for queries
Make search key unique by adding a record-identifier
Extra storage overhead for keys
Simpler code for insertion/deletion
Widely used
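A sketch of the make-the-key-unique alternative (illustrative Python using the standard bisect module; record ids are assumed numeric): each index entry is the pair (search_key, record_id), so duplicates sort apart and a single tuple can be deleted without scanning a long duplicate list.

    import bisect

    entries = []                                # sorted (key, rid) pairs

    def index_insert(key, rid):
        bisect.insort(entries, (key, rid))

    def index_delete(key, rid):
        # O(log N) search; no scan of the duplicates for this key
        i = bisect.bisect_left(entries, (key, rid))
        if i < len(entries) and entries[i] == (key, rid):
            del entries[i]

    def index_lookup(key):
        # all rids for this key are contiguous in sorted order
        lo = bisect.bisect_left(entries, (key,))
        hi = bisect.bisect_right(entries, (key, float("inf")))
        return [rid for _, rid in entries[lo:hi]]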
Hash Functions
Worst hash function maps all search-key values to the same bucket;
this makes access time proportional to the number of search-key
values in the file.
An ideal hash function is uniform, i.e., each bucket is assigned the
same number of search-key values from the set of all possible values.
Ideal hash function is random, so each bucket will have the same
number of records assigned to it irrespective of the actual distribution of
search-key values in the file.
Typical hash functions perform computation on the internal binary
representation of the search-key.
For example, for a string search-key, the binary representations of all the characters in the string could be added and the sum modulo the number of buckets could be returned.
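A sketch of exactly this scheme (illustrative Python; real systems use hash functions with much better mixing, but the interface is the same):

    def string_hash(key, n_buckets):
        # sum of the byte values of all characters, modulo bucket count
        return sum(key.encode()) % n_buckets

    # Example: distribute some instructor names across 8 buckets.
    for name in ["Srinivasan", "Wu", "Mozart", "Einstein"]:
        print(name, "->", string_hash(name, 8))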
Efficient Implementation of Bitmap Operations
Bitmaps are packed into words; a single word AND (a basic CPU instruction) computes the AND of 32 or 64 bits at once
E.g., two 1-million-bit maps can be ANDed with just 31,250 instructions
Counting the number of 1s can be done fast by a trick (see the sketch below):
Use each byte to index into a precomputed array of 256 elements, each storing the count of 1s in its binary representation
Can use pairs of bytes to speed up further at a higher memory cost
Add up the retrieved counts
Bitmaps can be used instead of Tuple-ID lists at leaf levels of
B+-trees, for values that have a large number of matching records
Worthwhile if > 1/64 of the records have that value, assuming a
tuple-id is 64 bits
Above technique merges benefits of bitmap and B+-tree indices
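A sketch of the word-AND and byte-lookup tricks above (illustrative Python; a real implementation would loop over machine words in C, and both bitmaps are assumed to be the same length):

    # Precompute the count of 1s for every possible byte value once.
    POPCOUNT = [bin(b).count("1") for b in range(256)]   # 256-entry table

    def count_ones(bitmap: bytes) -> int:
        # sum the table entries for each byte of the bitmap
        return sum(POPCOUNT[b] for b in bitmap)

    def bitmap_and(a: bytes, b: bytes) -> bytes:
        # bulk AND of the two bitmaps (Python ints AND many bits at once)
        n = int.from_bytes(a, "big") & int.from_bytes(b, "big")
        return n.to_bytes(len(a), "big")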
Index Definition in SQL
Create an index
create index <index-name> on <relation-name>
(<attribute-list>)
E.g.: create index b-index on branch(branch_name)
Use create unique index to indirectly specify and enforce the condition that the search key is a candidate key.
Not really required if SQL unique integrity constraint is supported
To drop an index
drop index <index-name>
Most database systems allow specification of type of index, and
clustering.
Chapter 14: Transactions
Transaction Concept
Transaction State
Concurrent Executions
Serializability
Recoverability
Implementation of Isolation
Transaction Definition in SQL
Testing for Serializability.
Transaction Concept
A transaction is a unit of program execution that accesses and
possibly updates various data items.
E.g. transaction to transfer $50 from account A to account B:
1. read(A)
2. A := A – 50
3. write(A)
4. read(B)
5. B := B + 50
6. write(B)
Two main issues to deal with:
Failures of various kinds, such as hardware failures and system
crashes
Concurrent execution of multiple transactions
Example of Fund Transfer
Transaction to transfer $50 from account A to account B (steps 1–6 as listed above).
Atomicity requirement
if the transaction fails after step 3 and before step 6, money will be "lost", leading to an inconsistent database state
Failure could be due to software or hardware
the system should ensure that updates of a partially executed transaction are not reflected in the database
Durability requirement — once the user has been notified that the transaction
has completed (i.e., the transfer of the $50 has taken place), the updates to the
database by the transaction must persist even if there are software or
hardware failures.
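An illustrative sketch of the atomicity requirement (toy Python, not how a real engine works; a real system uses a log on stable storage, which is also what provides durability):

    db = {"A": 500, "B": 300}    # toy in-memory "database"

    def transfer(amount):
        undo = {k: db[k] for k in ("A", "B")}   # remember old values
        try:
            db["A"] -= amount                   # steps 1-3: update A
            if db["A"] < 0:
                raise ValueError("insufficient funds")
            db["B"] += amount                   # steps 4-6: update B
        except Exception:
            db.update(undo)                     # roll back partial updates
            raise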
Example of Fund Transfer (Cont.)
Transaction to transfer $50 from account A to account B (steps 1–6 as listed above).
Consistency requirement in above example:
the sum of A and B is unchanged by the execution of the transaction
In general, consistency requirements include
Explicitly specified integrity constraints such as primary keys and foreign
keys
Implicit integrity constraints
– e.g. sum of balances of all accounts, minus sum of loan amounts
must equal value of cash-in-hand
A transaction must see a consistent database.
During transaction execution the database may be temporarily inconsistent.
When the transaction completes successfully the database must be
consistent
Erroneous transaction logic can lead to inconsistency
Example of Fund Transfer (Cont.)
Isolation requirement — if between steps 3 and 6, another
transaction T2 is allowed to access the partially updated database, it
will see an inconsistent database (the sum A + B will be less than it
should be).
T1                                T2
1. read(A)
2. A := A – 50
3. write(A)
                                  read(A), read(B), print(A+B)
4. read(B)
5. B := B + 50
6. write(B)
Isolation can be ensured trivially by running transactions serially
that is, one after the other.
However, executing multiple transactions concurrently has significant
benefits, as we will see later.
ACID Properties
A transaction is a unit of program execution that accesses and possibly
updates various data items.To preserve the integrity of data the database
system must ensure:
Atomicity. Either all operations of the transaction are properly reflected
in the database or none are.
Consistency. Execution of a transaction in isolation preserves the
consistency of the database.
Isolation. Although multiple transactions may execute concurrently,
each transaction must be unaware of other concurrently executing
transactions. Intermediate transaction results must be hidden from other
concurrently executed transactions.
That is, for every pair of transactions Ti and Tj, it appears to Ti that either Tj finished execution before Ti started, or Tj started execution after Ti finished.
Durability. After a transaction completes successfully, the changes it
has made to the database persist, even if there are system failures.
Transaction State
Active – the initial state; the transaction stays in this state while it is
executing
Partially committed – after the final statement has been executed.
Failed – after the discovery that normal execution can no longer proceed.
Aborted – after the transaction has been rolled back and the
database restored to its state prior to the start of the transaction.
Two options after it has been aborted:
restart the transaction
can be done only if no internal logical error
kill the transaction
Committed – after successful completion.
Transaction State (Cont.)
(Figure: state transition diagram: active → partially committed → committed; active or partially committed → failed → aborted.)
Concurrent Executions
Multiple transactions are allowed to run concurrently in the system.
Advantages are:
increased processor and disk utilization, leading to better
transaction throughput
E.g. one transaction can be using the CPU while another is
reading from or writing to the disk
reduced average response time for transactions: short
transactions need not wait behind long ones.
Concurrency control schemes – mechanisms to achieve isolation
that is, to control the interaction among the concurrent
transactions in order to prevent them from destroying the
consistency of the database
Will study in Chapter 16, after studying notion of correctness
of concurrent executions.
Schedules
Schedule – a sequence of instructions that specifies the chronological order in which instructions of concurrent transactions are executed
a schedule for a set of transactions must consist of all instructions
of those transactions
must preserve the order in which the instructions appear in each
individual transaction.
A transaction that successfully completes its execution will have a commit instruction as the last statement
by default transaction assumed to execute commit instruction as its
last step
A transaction that fails to successfully complete its execution will have
an abort instruction as the last statement
Schedule 1
Let T1 transfer $50 from A to B, and T2 transfer 10% of the
balance from A to B.
A serial schedule in which T1 is followed by T2 :
Schedule 2
A serial schedule where T2 is followed by T1
Schedule 3
Let T1 and T2 be the transactions defined previously. The
following schedule is not a serial schedule, but it is equivalent
to Schedule 1.
Schedule 3 and Schedule 6
Recoverability
If T8 should abort, T9 would have read (and possibly shown to the user) an inconsistent database state. Hence, the database must ensure that schedules are recoverable.
Cascading Rollbacks
Cascading rollback – a single transaction failure leads to a series of transaction rollbacks. Consider the following schedule where none of the transactions has yet committed (so the schedule is recoverable)
In non-serial schedules, multiple transactions execute concurrently, and the operations of all the transactions are interleaved with each other.
Finding Number Of Schedules-
Consider n transactions T1, T2, …, Tn with N1, N2, …, Nn operations respectively.
Total Number of Schedules-
Total number of possible schedules (serial + non-serial) is given by the multinomial coefficient:
(N1 + N2 + … + Nn)! / (N1! × N2! × … × Nn!)
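For instance (a worked example, not from the source): two transactions with 2 operations each give 4! / (2! × 2!) = 6 possible schedules, of which 2! = 2 are serial and 4 are non-serial. A one-line check in Python:

    from math import factorial

    def total_schedules(ops):
        # ops = [N1, ..., Nn]; multinomial (N1 + ... + Nn)! / (N1! ... Nn!)
        denom = 1
        for n_i in ops:
            denom *= factorial(n_i)
        return factorial(sum(ops)) // denom

    print(total_schedules([2, 2]))   # -> 6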
Irrecoverable Schedules-
If in a schedule, a transaction performs a dirty read operation from an uncommitted transaction and commits before the transaction from which it has read the value, then such a schedule is known as an Irrecoverable Schedule.
Consider the following schedule
Here,
T2 performs a dirty read operation.
T2 commits before T1.
T1 fails later and rolls back.
The value that T2 read now stands to be incorrect.
T2 cannot recover since it has already committed.
Recoverable Schedules-
If in a schedule, a transaction performs a dirty read operation from an uncommitted transaction and its commit operation is delayed till the uncommitted transaction either commits or rolls back, then such a schedule is known as a Recoverable Schedule.
Here,
The commit operation of the transaction that performs the dirty read is delayed.
This ensures that it still has a chance to recover if the uncommitted transaction fails later.
Consider the following schedule
Here,
T2 performs a dirty read operation.
The commit operation of T2 is delayed till T1 commits or rolls back.
T1 commits later.
T2 is now allowed to commit.
Had T1 failed instead, T2 would still have had a chance to recover by rolling back.
Checking Whether a Schedule is Recoverable or Irrecoverable
Method-01:
Check whether the given schedule is conflict serializable or not. Note, however, that conflict serializability by itself does not guarantee recoverability: a conflict-serializable schedule can still be irrecoverable if a transaction commits after a dirty read (for example, W1(A), R2(A), commit of T2, then failure of T1). So this method is at best a heuristic; Method-02 gives the decisive answer.
Method-02:
Check if there exists any dirty read operation. (Reading from an uncommitted transaction is called a dirty read.)
If there does not exist any dirty read operation, then the schedule is surely recoverable. Stop and report your answer.
If there exists any dirty read operation, then the schedule may or may not be recoverable.
If there exists a dirty read operation, then follow the following cases (a checking sketch is given below)-
Case-01: If the transaction that performs the dirty read commits before the transaction from which it read, the schedule is irrecoverable.
Case-02: If the commit of the reading transaction is delayed until the transaction it read from commits or rolls back, the schedule is recoverable.
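A sketch of the dirty-read check in Method-02 and the two cases (illustrative Python; a schedule is modeled as a list of (transaction, action, item) tuples, which is an assumed encoding):

    def is_recoverable(schedule):
        # schedule: e.g. [("T1", "write", "A"), ("T2", "read", "A"),
        #                 ("T2", "commit", None), ("T1", "commit", None)]
        # Recoverable iff no transaction commits before a transaction
        # it performed a dirty read from (Case-01 is the violation).
        dirty_writer = {}   # item -> uncommitted transaction that wrote it
        reads_from = {}     # reader -> set of transactions it dirty-read from
        commit_pos = {}     # transaction -> position of its commit
        for pos, (txn, action, item) in enumerate(schedule):
            if action == "write":
                dirty_writer[item] = txn
            elif action == "read":
                w = dirty_writer.get(item)
                if w is not None and w != txn:       # dirty read detected
                    reads_from.setdefault(txn, set()).add(w)
            elif action == "commit":
                commit_pos[txn] = pos
                # txn's writes stop being dirty once it commits
                dirty_writer = {k: v for k, v in dirty_writer.items() if v != txn}
        inf = float("inf")
        for reader, writers in reads_from.items():
            for w in writers:
                if commit_pos.get(reader, inf) < commit_pos.get(w, inf):
                    return False    # reader committed first: Case-01
        return True                 # commits properly ordered: Case-02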