CS2202 IndexingHashing

The document discusses indexing and hashing techniques used to improve data access speed in databases, highlighting the types of indices such as ordered and hash indices. It covers evaluation metrics for indices, including access time, insertion and deletion times, and space overhead, as well as the structure and properties of B+-tree index files. Additionally, it explains the processes for updating indices through insertion and deletion, and the implications of using composite keys for non-unique search keys.

Indexing and Hashing

CS2202

1
Indexing and Hashing: Basic Concepts
• Indexing mechanisms used to speed up access to desired data.
– E.g., author catalog in library
• Search Key - attribute or set of attributes used to look up records in a file.
• An index file consists of records (called index entries) of the form

search-key pointer
• Index files are typically much smaller than the original file
• Two basic kinds of indices:
– Ordered indices: search keys are stored in sorted order
– Hash indices: search keys are distributed uniformly across “buckets”
using a “hash function”.

2
Index Evaluation Metrics
• Access types supported efficiently
– records with a specified value in the attribute
– or records with an attribute value falling in a specified range of
values (e.g. 10000 < salary < 40000)
• Access time
– The time it takes to find a particular data item, or set of items
• Insertion time includes
– The time it takes to find the correct place to insert new data item
– The time it takes to update the index structure
• Deletion time includes
– The time it takes to find the item to be deleted
– The time to update the index structure
• Space overhead
– The additional space occupied by an index structure
– Usually worthwhile to sacrifice some space to achieve improved performance
3
Ordered Indices
• In an ordered index, index entries are stored sorted on the search key value.
E.g., author catalog in library.
• Primary index: in a sequentially ordered file, the index whose search key
specifies the sequential order of the file.
– Also called clustering index
– The search key of a primary index is usually but not necessarily the
primary key.
• Secondary index: an index whose search key specifies an order different from
the sequential order of the file. Also called non-clustering index
• Index-sequential file: ordered sequential file with a primary index.
• Types-
– Dense
– Sparse

4
Dense Index Files
• Dense index — Index record appears for every search-key value in
the file.

5
Sparse Index Files
• Sparse Index: contains index records for only some search-key values.
– Applicable when records are sequentially ordered on search-key
• To locate a record with search-key value K we:
– Find index record with largest search-key value <= K
– Search file sequentially starting at the record to which the index record points
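The two-step lookup above can be sketched in Python; the list-based file and index layout here are illustrative, not from the source:

```python
import bisect

# Sparse-index lookup sketch: the data file is a list of records sorted on the
# search key; the index holds (key, position) pairs for only some records.
def sparse_lookup(index, data, K):
    keys = [k for k, _ in index]
    # Find the index record with the largest search-key value <= K.
    i = bisect.bisect_right(keys, K) - 1
    if i < 0:
        return None                    # K precedes every indexed key
    # Search the file sequentially from the record the index entry points to.
    for pos in range(index[i][1], len(data)):
        key, value = data[pos]
        if key == K:
            return value
        if key > K:                    # passed the spot: K is not present
            return None
    return None

data = [(10, 'a'), (20, 'b'), (30, 'c'), (40, 'd'), (50, 'e')]
index = [(10, 0), (30, 2), (50, 4)]    # one entry per "block" of two records
print(sparse_lookup(index, data, 40))  # 'd'
```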

6
Sparse Index Files (Cont.)
• Compared to dense indices:
– Less space and less maintenance overhead for insertions and deletions.
– Generally slower than dense index for locating records.
• Good tradeoff: sparse index with an index entry for every block in file,
corresponding to least search-key value in the block.

7
Multilevel Index
• If primary index does not fit in memory, access becomes expensive.
• Solution: treat primary index kept on disk as a sequential file and construct a
sparse index on it.
– outer index – a sparse index of primary index
– inner index – the primary index file
• If even outer index is too large to fit in main memory, yet another level of
index can be created, and so on.
• Indices at all levels must be updated on insertion or deletion from the file.

8
Multilevel Index (Cont.)

9
Index Update: Record Deletion
• If deleted record was the only record in the file with its particular search-key
value, the search-key is deleted from the index also.
• Single-level index deletion:
– Dense indices – deletion of search-key: similar to file record deletion.
– Sparse indices –
• if deleted key value exists in the index, the value is replaced by the
next search-key value in the file (in search-key order).
• If the next search-key value already has an index entry, the entry is
deleted instead of being replaced.

10
Index Update: Record Insertion
• Single-level index insertion:
– Perform a lookup using the key value from inserted record
– Dense indices – if the search-key value does not appear in the index,
insert it.
– Sparse indices – if index stores an entry for each block of the file, no
change needs to be made to the index unless a new block is created.
• If a new block is created, the first search-key value appearing in
the new block is inserted into the index.
• Multilevel insertion (as well as deletion) algorithms are simple extensions
of the single-level algorithms

11
Secondary Indices Example

Secondary index on balance field of account

• Index record points to a bucket that contains pointers to all the actual
records with that particular search-key value.
• Secondary indices have to be dense
12
Indices on Multiple Keys
• A search key containing more than one attribute is referred to as a composite
search key
• If the index attributes are A1, …, An, then a search-key value is a tuple of the
form (a1, …, an)
• Composite search keys are ordered lexicographically
• For example, for a search key on two attributes, (a1, a2) < (b1, b2) if either
– a1 < b1, or
– a1 = b1 and a2 < b2
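As a quick illustration, Python tuples compare in exactly this way, so composite keys can be modeled directly as tuples:

```python
# Python tuples compare lexicographically, matching the composite-key ordering:
# (a1, a2) < (b1, b2) iff a1 < b1, or a1 == b1 and a2 < b2.
print((1, 9) < (2, 0))   # True: a1 < b1
print((1, 2) < (1, 5))   # True: a1 == b1 and a2 < b2
print((2, 0) < (1, 9))   # False
```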

13
Primary and Secondary Indices
• Indices offer substantial benefits when searching for records.
• BUT: Updating an index imposes overhead on database modification:
when a file is modified, every index on the file must be updated.
• Sequential scan using the primary index is efficient, but a sequential scan
using a secondary index is expensive
– Each record access may fetch a new block from disk
– A block fetch requires about 5 to 10 milliseconds, versus about 100
nanoseconds for a memory access

14
Indexed Sequential File
• Disadvantage of indexed-sequential files
– Performance degrades as file grows, since many overflow blocks get
created.
– Periodic reorganization of entire file is required.

15
B+-Tree Index Files
• Disadvantage of indexed-sequential files
– Performance degrades as file grows, since many overflow blocks get
created.
– Periodic reorganization of entire file is required.
• Advantage of B+-tree index files:
– Automatically reorganizes itself with small, local changes in the face
of insertions and deletions.
– Reorganization of entire file is not required to maintain performance.
• (Minor) disadvantage of B+-trees:
– Extra insertion and deletion overhead, space overhead.
• Advantages of B+-trees outweigh disadvantages
– B+-trees are used extensively

16
Example of B+-Tree

17
B+-Tree Index Files (Cont.)
A B+-tree is a rooted tree satisfying the following properties:
• All paths from root to leaf are of the same length
• Each non-leaf node that is not a root has between ⌈n/2⌉ and n children.
• A leaf node has between ⌈(n–1)/2⌉ and n–1 key values
• Special cases:
– If the root is not a leaf, it has at least 2 children.
– If the root is a leaf (that is, there are no other nodes in the tree), it
can have between 0 and (n–1) key values.

18
B+-Tree Node Structure
• Typical node

– Ki are the search-key values


– Pi are pointers to children (for non-leaf nodes) or pointers to records or
buckets of records (for leaf nodes).
• The search-keys in a node are ordered
K1 < K2 < K3 < . . . < Kn–1
(Initially assume no duplicate keys, address duplicates later)

19
Leaf Nodes in B+-Trees
Properties of a leaf node:
• For i = 1, 2, . . ., n–1, pointer Pi points to a file record with search-key value Ki,
• If Li, Lj are leaf nodes and i < j, Li’s search-key values are less than or equal to
Lj’s search-key values
• Pn points to next leaf node in search-key order

20
Non-Leaf Nodes in B+-Trees
• Non-leaf nodes form a multi-level sparse index on the leaf nodes. For a non-
leaf node with m pointers:
– All the search-keys in the subtree to which P1 points are less than K1
– For 2 ≤ i ≤ m – 1, all the search-keys in the subtree to which Pi points have
values greater than or equal to Ki–1 and less than Ki
– All the search-keys in the subtree to which Pm points have values greater
than or equal to Km–1
– General structure

21
Example of B+-tree
• B+-tree for instructor file (n = 6)

• Leaf nodes must have between 3 and 5 values


((n–1)/2 and n –1, with n = 6).
• Non-leaf nodes other than root must have between 3 and 6 children
((n/2 and n with n =6).
• Root must have at least 2 children.

22
Observations about B+-trees
• Since the inter-node connections are done by pointers, “logically” close
blocks need not be “physically” close.
• The non-leaf levels of the B+-tree form a hierarchy of sparse indices.
• The B+-tree contains a relatively small number of levels
– If there are K search-key values in the file, the tree height is no more
than ⌈log⌈n/2⌉(K)⌉
– thus searches can be conducted efficiently.
• Insertions and deletions to the main file can be handled efficiently, as the
index can be restructured in logarithmic time (as we shall see).

23
Queries on B+-Trees
function find(V)
1. Set C = root
2. while (C is not a leaf node)
   a. Let i be the least number such that V ≤ Ki
   b. if there is no such i, then set C = last non-null pointer in C
   c. else if (V = C.Ki) then set C = C.P(i+1)
   d. else set C = C.Pi
3. if for some i, C.Ki = V then return C.Pi
4. else return null /* no record with search-key value V exists */
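A runnable Python sketch of this procedure, under an assumed minimal node layout (sorted keys plus parallel child or record pointers; all names here are illustrative):

```python
import bisect

# Minimal B+-tree node: internal nodes hold sorted keys and len(keys)+1 child
# pointers; leaves hold sorted keys and parallel record pointers.
class Node:
    def __init__(self, keys, children=None, records=None):
        self.keys = keys          # K1 < K2 < ... (sorted, no duplicates)
        self.children = children  # child pointers (internal nodes only)
        self.records = records    # record pointers (leaf nodes only)

    @property
    def is_leaf(self):
        return self.children is None

def find(root, V):
    C = root
    while not C.is_leaf:
        i = bisect.bisect_left(C.keys, V)   # least i such that V <= K_i
        if i == len(C.keys):                # no such i: follow last pointer
            C = C.children[-1]
        elif V == C.keys[i]:                # V == K_i: follow P_{i+1}
            C = C.children[i + 1]
        else:                               # V < K_i: follow P_i
            C = C.children[i]
    i = bisect.bisect_left(C.keys, V)
    if i < len(C.keys) and C.keys[i] == V:
        return C.records[i]
    return None                             # no record with search-key value V

leaf1 = Node([10, 20], records=['r10', 'r20'])
leaf2 = Node([30, 40], records=['r30', 'r40'])
root = Node([30], children=[leaf1, leaf2])
print(find(root, 30))  # 'r30'
print(find(root, 25))  # None
```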

24
printAll(V) function
Procedure printAll(value V)
/* Prints all records with search key value V */
Set done = false;
Set (L, i) = find(V);
If ((L, i) is null) return;
repeat
repeat
Print record pointed to by L.Pi;
Set i = i + 1;
until (i > number of keys in L or L.Ki > V);
if (i > number of keys in L) then
L = L.Pn;
else
Set done = true;
until (done or L is null);

25
Queries on B+-Trees (Cont.)
• Range queries find all records with search key values in a given range
– Assume function findRange(lb, ub) which returns set of all such
records
– Real implementations usually provide an iterator interface to fetch
matching records one at a time, using a next() function

26
Queries on B+-Trees (Cont.)
• If there are K search-key values in the file, the height of the tree is no more
than ⌈log⌈n/2⌉(K)⌉.
• A node is generally the same size as a disk block, typically 4 kilobytes
– and n is typically around 200 (with search key as 12 bytes and pointer
size as 8 bytes).
• With 1 million search key values and n = 100
– at most ⌈log50(1,000,000)⌉ = 4 nodes are accessed in a lookup traversal
from root to leaf.
• Contrast this with a balanced binary tree with 1 million search key values —
around 20 nodes are accessed in a lookup
– above difference is significant since every node access may need a disk
I/O, costing around 20 milliseconds
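The arithmetic above can be checked directly (here ⌈n/2⌉ = 50 for n = 100):

```python
import math

# Lookup-cost arithmetic: with K search-key values and a minimum fanout of
# ceil(n/2), the tree height is at most ceil(log_{ceil(n/2)} K).
K, n = 1_000_000, 100
fanout = math.ceil(n / 2)                      # 50
print(math.ceil(math.log(K, fanout)))          # 4 node accesses, root to leaf
print(math.ceil(math.log2(K)))                 # ~20 for a balanced binary tree
```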

27
Non-Unique Keys
• If a search key ai is not unique, create instead an index on a composite key
(ai , Ap), which is unique
– Ap could be a primary key, record ID, or any other attribute that
guarantees uniqueness
– This extra attribute (Ap) is called a uniquifier attribute.
• Search for ai = v can be implemented by a range search on composite key,
with range (v, - ∞) to (v, + ∞)
• But more I/O operations are needed to fetch the actual records
– If the index is clustering, all accesses are sequential
– If the index is non-clustering, each record access may need an I/O
operation
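A minimal sketch of the composite-key range search, using a sorted Python list in place of the B+-tree leaves and record IDs as the uniquifier; all names are illustrative:

```python
import bisect

# Entries are (a_i, record_id) composite keys kept in sorted order; a search
# for a_i == v becomes a range scan from (v, -inf) to (v, +inf).
entries = sorted([('Brighton', 7), ('Downtown', 2), ('Brighton', 3),
                  ('Perryridge', 5), ('Brighton', 9)])

def lookup(entries, v):
    lo = bisect.bisect_left(entries, (v,))            # (v,) sorts before (v, anything)
    hi = bisect.bisect_right(entries, (v, float('inf')))
    return [rid for _, rid in entries[lo:hi]]

print(lookup(entries, 'Brighton'))  # [3, 7, 9]
```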

28
Updates on B+-Trees: Insertion
Assume record already added to the file. Let
– pr be pointer to the record, and let
– v be the search key value of the record
• Find the leaf node in which the search-key value would appear
1. If there is room in the leaf node, insert (v, pr) pair in the leaf node
2. Otherwise, split the node (along with the new (v, pr) entry) as discussed
in the next slide, and propagate updates to parent nodes.

29
Updates on B+-Trees: Insertion (Cont.)
• Splitting a leaf node:
– take the n (search-key value, pointer) pairs (including the one being inserted) in
sorted order. Place the first ⌈n/2⌉ in the original node, and the rest in a new
node.
– let the new node be p, and let k be the least key value in p. Insert (k,p) in the
parent of the node being split.
– If the parent is full, split it and propagate the split further up.
• Splitting of nodes proceeds upwards till a node that is not full is found.
– In the worst case the root node may be split, increasing the height of the tree
by 1.

Result of splitting node containing Brandt, Califieri and Crick on inserting Adams
Next step: insert entry with (Califieri, pointer-to-new-node) into parent
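The leaf-split rule can be sketched as follows; `split_leaf` is an illustrative helper, and the example reproduces the Adams/Brandt/Califieri/Crick split above:

```python
import math

# Leaf split: take all n (key, pointer) pairs in sorted order, keep the first
# ceil(n/2) in the old node, move the rest to a new node, and hand
# (least key of new node, new node) up to the parent.
def split_leaf(pairs):                 # pairs: n (key, ptr) tuples, sorted
    n = len(pairs)
    cut = math.ceil(n / 2)
    old, new = pairs[:cut], pairs[cut:]
    k = new[0][0]                      # least key value in the new node
    return old, new, k                 # (k, new) is inserted into the parent

pairs = [('Adams', 'pA'), ('Brandt', 'pB'), ('Califieri', 'pC'), ('Crick', 'pD')]
old, new, k = split_leaf(pairs)
print(old)  # [('Adams', 'pA'), ('Brandt', 'pB')]
print(k)    # 'Califieri'
```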

30
Insertion in B+-Trees (Cont.)
• Splitting a non-leaf node: when inserting (k,p) into an already full internal
node N
– Copy N to an in-memory area M with space for n+1 pointers and n keys
– Insert (k,p) into M
– Copy P1, K1, …, K⌈n/2⌉–1, P⌈n/2⌉ from M back into node N
– Copy P⌈n/2⌉+1, K⌈n/2⌉+1, …, Kn, Pn+1 from M into newly allocated node N'
– Insert (K⌈n/2⌉, N') into the parent of N
• Example

31
B+-Tree Insertion

B+-Tree before insertion of “Adams”

Affected nodes

B+-Tree after insertion of “Adams”


32
B+-Tree Insertion

B+-Tree before insertion of “Lamport”


Affected nodes

B+-Tree after insertion of “Lamport”

33
Updates on B+-Trees: Deletion
Assume record already deleted from file. Let V be the search key value of the
record, and Pr be the pointer to the record.
• Remove (Pr, V) from the leaf node
• If the node has too few entries due to the removal, and the entries in the
node and a sibling fit into a single node, then merge siblings:
– Insert all the search-key values in the two nodes into a single node (the
one on the left), and delete the other node.
– Delete the pair (Ki–1, Pi), where Pi is the pointer to the deleted node,
from its parent, recursively using the above procedure.

34
Updates on B+-Trees: Deletion
• Otherwise, if the node has too few entries due to the removal, but the
entries in the node and a sibling do not fit into a single node, then
redistribute pointers:
– Redistribute the pointers between the node and a sibling such that both
have more than the minimum number of entries.
– Update the corresponding search-key value in the parent of the node.
• The node deletions may cascade upwards till a node which has ⌈n/2⌉ or
more pointers is found.
• If the root node has only one pointer after deletion, it is deleted and the sole
child becomes the root.

35
Examples of B+-Tree Deletion

Before deleting “Srinivasan”

Affected nodes

• Deleting “Srinivasan” causes merging of under-full leaves


36
After deleting “Srinivasan”
Examples of B+-Tree Deletion (Cont.)

Before deleting “Singh” and “Wu”

Affected nodes

• Leaf containing Singh and Wu became underfull, and borrowed a value Kim from its left sibling
• Search-key value in the parent changes as a result

After deleting “Singh” and “Wu”

37


Example of B+-tree Deletion (Cont.)

Before deletion of “Gold”

• Node with Gold and Katz became underfull, and was merged with its sibling
• Parent node becomes underfull, and is merged with its sibling
– Value separating two nodes (at the parent) is pulled down when merging
• Root node then has only one child, and is deleted
After deletion of “Gold”

38
Complexity of Updates
• Cost (in terms of number of I/O operations) of insertion and deletion of a
single entry proportional to height of the tree
– With K entries and maximum fanout of n, worst case complexity of
insert/delete of an entry is O(logn/2(K))
• In practice, number of I/O operations is less:
– Internal nodes tend to be in buffer
– Splits/merges are rare, most insert/delete operations only affect a leaf
node
• Average node occupancy depends on insertion order
– about 2/3 with random insertion order, about 1/2 with sorted insertion order

39
B+-Tree File Organization
• B+-Tree File Organization:
– Leaf nodes in a B+-tree file organization store records, instead of
pointers
– Helps keep data records clustered even when there are
insertions/deletions/updates
• Leaf nodes are still required to be half full
– Since records are larger than pointers, the maximum number of records
that can be stored in a leaf node is less than the number of pointers in a
nonleaf node.
• Insertion and deletion are handled in the same way as insertion and deletion
of entries in a B+-tree index.

40
B+-Tree File Organization (Cont.)
• Example of B+-tree File Organization

• Good space utilization important since records use more space than
pointers.
• To improve space utilization, involve more sibling nodes in redistribution
during splits and merges
– Involving 2 siblings in redistribution (to avoid split/merge where possible) results in
each node having at least ⌊2n/3⌋ entries

41
B-Tree Index Files
• Similar to B+-tree, but B-tree allows search-key values to appear only once;
eliminates redundant storage of search keys.
• Search keys in nonleaf nodes appear nowhere else in the B-tree; an additional
pointer field for each search key in a nonleaf node must be included.
• Generalized B-tree leaf node

(a) Leaf node and (b) Non-leaf node structure

• Nonleaf node – pointers Bi are the bucket or file record pointers.

42
B-Tree Index File Example

B-tree (above) and B+-tree (below) on same data

43
B-Tree Index Files (Cont.)
• Advantages of B-Tree indices:
– May use fewer tree nodes than a corresponding B+-Tree.
– Sometimes possible to find search-key value before reaching leaf node.
• Disadvantages of B-Tree indices:
– Only small fraction of all search-key values are found early
– Non-leaf nodes are larger, so fan-out is reduced. Thus, B-Trees typically
have greater depth than corresponding B+-Tree
– Insertion and deletion more complicated than in B+-Trees
– Implementation is harder than B+-Trees.
• Typically, advantages of B-Trees do not outweigh disadvantages.

44
Static Hashing
• A bucket is a unit of storage containing one or more records (a bucket is
typically a disk block).
• In a hash file organization we obtain the bucket of a record directly from
its search-key value using a hash function.
• Hash function h is a function from the set of all search-key values K to the
set of all bucket addresses B.
• Hash function is used to locate records for access, insertion as well as
deletion.
• Records with different search-key values may be mapped to the same
bucket; thus the entire bucket has to be searched sequentially to locate a
record.

45
Example of Hash File Organization
• Let’s consider the hash file organization of account file, using branch_name as
search key
• There are 10 buckets
• The binary representation of the ith character is assumed to be the integer i
• The hash function returns the sum of the binary representations of the
characters modulo 10
– E.g. h(Perryridge) = 5, h(Round Hill) = 3, h(Brighton) = 3

Another hash function: h(s) = s[0]·31^(n−1) + s[1]·31^(n−2) + … + s[n−1]

where s is a string of length n and s[i] denotes the ith byte of the string.
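A sketch of the first hash function above, assuming "the binary representation of the ith character" means its position in the alphabet (a/A = 1, …, z/Z = 26) and that non-letter characters (the space in "Round Hill") contribute nothing:

```python
# Character-sum hash: sum the alphabet positions of the letters, modulo the
# number of buckets (10 in the example). The alphabet-position reading of the
# character values is an assumption made to reproduce the slide's results.
def h(name, buckets=10):
    return sum(ord(c.lower()) - ord('a') + 1 for c in name if c.isalpha()) % buckets

print(h("Perryridge"))  # 5
print(h("Round Hill"))  # 3
print(h("Brighton"))    # 3
```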

46
Example of Hash File Organization
Hash file organization of
account file, using
branch_name as key

47
Hash Functions
• The worst hash function maps all search-key values to the same
bucket;
– this makes access time proportional to the number of search-key
values in the file.
• Hash function should be-
– Uniform:
• the hash function should distribute the search keys uniformly over all the
available buckets
– Random:
• Typical hash value should not be correlated to any externally visible
ordering on the search key value
• Like alphabet ordering, length of the search keys, etc.

48
Handling of Bucket Overflows
• Bucket overflow can occur because of
– Insufficient buckets
– Skew in distribution of records. This can occur due to two reasons:
• multiple records have same search-key value
• chosen hash function produces non-uniform distribution of key
values
• Although the probability of bucket overflow can be reduced, it cannot be
eliminated; it is handled by using overflow buckets.

49
Handling of Bucket Overflows (Cont.)
• Overflow chaining – the
overflow buckets of a given
bucket are chained together
in a linked list.

50
Hash Indices
• Hashing can be used not only for file organization, but also for index-structure
creation.
• A hash index organizes the search keys, with their associated record pointers, into
a hash file structure.
• Strictly speaking, hash indices are always secondary indices
– if the file itself is organized using hashing, a separate primary hash index on it
using the same search-key is unnecessary.
– However, we use the term hash index to refer to both secondary index
structures and hash organized files.

51
Example of Hash Index

52
Deficiencies of Static Hashing
• In static hashing, function h maps search-key values to a fixed set B of
bucket addresses. Databases grow or shrink with time.
– If the initial number of buckets is too small, and the file grows, performance will
degrade due to too many overflows.
– If space is allocated for anticipated growth, a significant amount of space
will be wasted initially (and buckets will be underfull).
– If database shrinks, again space will be wasted.
• One solution: periodic re-organization of the file with a new hash function
– Expensive, disrupts normal operations
• Better solution: allow the number of buckets to be modified dynamically.

53
Dynamic Hashing
• Good for database that grows and shrinks in size
• Allows the hash function to be modified dynamically
• Extendable hashing – one form of dynamic hashing
– Hash function generates values over a large range — typically b-bit
integers, with b = 32.
– At any time use only a prefix of the hash function to index into a table of
bucket addresses.
– Let the length of the prefix be i bits, 0 ≤ i ≤ 32.
• Bucket address table size = 2^i. Initially i = 0
• Value of i grows and shrinks as the size of the database grows and
shrinks.
– Multiple entries in the bucket address table may point to a bucket
– Thus, actual number of buckets is typically < 2^i
• The number of buckets also changes dynamically due to coalescing
and splitting of buckets.

54
General Extendable Hash Structure

In this structure, i2 = i3 = i, whereas i1 = i –1


55
Use of Extendable Hash Structure
• Each bucket j stores a value ij
– All the entries that point to the same bucket have the same values on the
first ij bits.
• To locate the bucket containing search-key Kj:
1. Compute h(Kj) = X
2. Use the first i high order bits of X as a displacement into bucket address
table, and follow the pointer to appropriate bucket
• To insert a record with search-key value Kj
– follow same procedure as look-up and locate the bucket, say j.
– If there is room in the bucket j insert record in the bucket.
– Else the bucket must be split and insertion re-attempted
• Overflow buckets used instead in some cases
56
Insertion in Extendable Hash Structure (Cont)
To split a bucket j when inserting record with search-key value Kj:
• If i > ij (more than one pointer to bucket j)
– allocate a new bucket z, and set ij = iz = (ij + 1)
– Update the second half of the bucket address table entries originally
pointing to j, to point to z
– remove each record in bucket j and reinsert (in j or z)
– recompute new bucket for Kj and insert record in the bucket (further
splitting is required if the bucket is still full)
• If i = ij (only one pointer to bucket j)
– If i reaches some limit b, or too many splits have happened in this
insertion, create an overflow bucket
– Else
• increment i and double the size of the bucket address table.
• replace each entry in the table by two entries that point to the same bucket.
• recompute new bucket address table entry for Kj
Now i > ij so use the first case above.
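The lookup, insertion, and split steps above can be combined into a compact sketch; the class layout and the 32-bit hash width are assumptions made for illustration, not from the source:

```python
# Extendable hashing sketch: the bucket address table is indexed by the first
# i bits of the hash value; each bucket carries a local depth i_j, and the
# table doubles only when i == i_j for the overflowing bucket.
class Bucket:
    def __init__(self, depth):
        self.depth = depth                # local depth i_j
        self.keys = []

class ExtendableHash:
    BITS = 32                             # hash values are b-bit integers, b = 32

    def __init__(self, capacity=2):
        self.i = 0                        # global depth: table has 2**i entries
        self.capacity = capacity          # records per bucket
        self.table = [Bucket(0)]

    def _index(self, h):
        return h >> (self.BITS - self.i) if self.i else 0   # first i bits of h

    def lookup(self, h):
        return h in self.table[self._index(h)].keys

    def insert(self, h):
        while True:
            b = self.table[self._index(h)]
            if len(b.keys) < self.capacity:
                b.keys.append(h)
                return
            if b.depth == self.i:         # only one pointer to b: double the table
                self.i += 1
                self.table = [e for e in self.table for _ in (0, 1)]
            # split b: allocate z, repoint the upper half of b's table entries
            b.depth += 1
            z = Bucket(b.depth)
            for j in range(len(self.table)):
                if self.table[j] is b and (j >> (self.i - b.depth)) & 1:
                    self.table[j] = z
            old, b.keys = b.keys, []
            for k in old:                 # redistribute b's records into b or z
                self.table[self._index(k)].keys.append(k)
            # loop to re-attempt the insertion (further splits may be needed)

eh = ExtendableHash(capacity=2)
for v in (0x00000000, 0x80000000, 0x40000000, 0xC0000000, 0xA0000000):
    eh.insert(v)
print(eh.i, len(eh.table))                # global depth and table size
print(eh.lookup(0xA0000000))              # True
```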

57
Deletion in Extendable Hash Structure
• To delete a key value,
– locate it in its bucket and remove it.
– The bucket itself can be removed if it becomes empty (with appropriate
updates to the bucket address table).
– Coalescing of buckets can be done (can coalesce only with a “buddy”
bucket having same value of ij and same ij –1 prefix, if it is present)
– Decreasing bucket address table size is also possible
• Note: decreasing bucket address table size is an expensive operation
and should be done only if number of buckets becomes much smaller
than the size of the table

58
Use of Extendable Hash Structure: Example

Initial Hash structure, bucket size = 2


59
Example (Cont.)
• Hash structure after insertion of one Brighton (0010 …) and two Downtown
(1010 …) records

0
1

Now what happens if we want to insert Mianus (1100…) record?


60
Example (Cont.)
Hash structure after insertion of Mianus (1100…) record

00
01
10
11

Now insert Perryridge (1111…) record three times

61


Example (Cont.)
Hash structure after insertion of three Perryridge (1111…) records

000
001
010
011
100
101
110
111

Now insert Redwood (0011…) and Round Hill (1101…) records


62
Example (Cont.)
• Hash structure after insertion of Redwood (0011…) and Round Hill
(1101…) records

63
Extendable Hashing vs. Other Schemes
• Benefits of extendable hashing:
– Hash performance does not degrade with growth of file
– Minimal space overhead
• Disadvantages of extendable hashing
– Extra level of indirection to find desired record
– Bucket address table may itself become very big (larger than memory)
• Cannot allocate very large contiguous areas on disk either
• Solution: B+-tree file organization to store bucket address table
– Changing size of bucket address table is an expensive operation
• Linear hashing is an alternative mechanism
– Allows incremental growth of its directory (equivalent to bucket address table)
– At the cost of more bucket overflows
64
Linear Hashing
• Another dynamic hashing technique
• It grows and shrinks one bucket at a time
• Unlike extendable hashing, it does not use a bucket address table
• When an overflow occurs it does not always split the overflowing bucket
• The number of buckets grows and shrinks in a linear fashion
• Overflows are handled by creating a chain of buckets
• Hashing function changes dynamically
• At most two hashing functions can be used at any given instant

65
Linear Hashing: Initial Layout
• Linear hashing starts with m initial buckets labeled 0 through m-1
• An initial hashing function h0(k) = f(k) % m
• For simplicity, we assume that h0(k) = k % m
• A pointer p which points to the bucket to be split next
• The bucket split occurs whenever an overflow bucket is generated

66
Bucket# Overflow buckets
p
0 4 8 12 16
1 1 5
2 6 10 22
3 3 7 15 19

Here m=4, p=0, h0(k)=k %4

67
Linear Hashing: Bucket Split
• When the first overflow occurs (it may occur in any bucket)
– Bucket 0 which is pointed by p is split into two buckets (original bucket# 0 and
a new bucket bucket# m)
– A new overflow bucket is also chained to the overflowing bucket to accommodate
the overflow
– The search values originally mapped into bucket 0 (using function h0) are now
distributed between buckets 0 and m using a new hashing function h1

Now let’s try to insert a new record with key 11

68
Bucket# Overflow buckets

0 8 16
p
1 1 5
2 6 10 22
3 3 7 15 19 11

4 4 12

Here p=1, h0(k)=k %4, h1(k)=k %8


69
• In case of insertion and overflow condition,
– If (h0(k) < p) then use h1(k)
– A new split attaches a new bucket m+1, and the contents of bucket 1 are
distributed using h1 between buckets 1 and m+1
• For every split, p is increased by 1
• The necessary property for linear hashing to work-
– The search values that were originally mapped by h0 to some bucket j must be
remapped using h1 to bucket j or j+m
– An example of such hashing function would be h1(k) =k % 2m

70
Linear Hashing: Round and Hash Function
advancement
• After enough overflows, all original m buckets will be split
• This marks the end of splitting round 0
• During round 0, p went subsequently from 0 to m-1
• At the end of round 0, the linear hashing scheme has a total of 2m buckets
• Hashing function h0 is no longer needed as all 2m buckets can be addressed by
hashing function h1
• Variable p is reset to 0 and a new round (namely splitting round 1) starts
• A new hash function h2 will start to be used

71
• In general, linear hashing involves a family of hash functions h0, h1, h2, and so on
• Let the initial hash function be h0(k) = f(k) % m
• Then any later hash function can be defined as hi(k) = f(k) % (2^i · m)
• This guarantees that if hi hashes a key to bucket j
– then hi+1 will hash the same key to either bucket j or bucket j + 2^i · m
72
Linear Hashing: Searching
• A search scheme is needed to map a key k to a bucket
• It works as follows:
– If hi(k) >= p, choose bucket hi(k), since that bucket has not been split yet
– If hi(k) < p, choose bucket hi+1(k), which is either hi(k) or its split image
hi(k) + 2^i · m
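This search rule can be sketched as a small function, assuming f(k) = k as in the examples; `lh_bucket` is an illustrative name:

```python
# Linear-hashing search sketch: during round r with m initial buckets,
# h_r(k) = k % (2**r * m); if that lands before the split pointer p, the
# bucket has already been split, so use h_{r+1} instead.
def lh_bucket(k, m, r, p):
    j = k % (2 ** r * m)                  # h_r(k)
    if j < p:                             # bucket j already split this round
        j = k % (2 ** (r + 1) * m)        # h_{r+1}(k): j or j + 2**r * m
    return j

# Matches the example layout above (m = 4, round 0, p = 1):
print(lh_bucket(11, 4, 0, 1))  # 3  (h0 = 3 >= p)
print(lh_bucket(8,  4, 0, 1))  # 0  (h0 = 0 < p, h1 = 8 % 8 = 0)
print(lh_bucket(12, 4, 0, 1))  # 4  (h0 = 0 < p, h1 = 12 % 8 = 4)
```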

73
Bucket# Overflow buckets

0 8 16
p
1 1 5
2 6 10 22
3 3 7 15 19 11

4 4 12

Here p=1, h0(k)=k %4, h1(k)=k %8

Now, let’s insert the following key values- 20, 13, 23, 2, 17, 25

74
Bucket# Overflow buckets

0 8 16 Insert 20
p
1 1 5 13 17 Insert 13
2 6 10 22 2 Insert 23

3 3 7 15 19 11 23 Insert 2
Insert 17
4 4 12 20

Here p=1, h0(k)=k %4, h1(k)=k %8

75
Bucket# Overflow buckets

0 8 16 Insert 20
1 1 17 25 Insert 13
p
2 6 10 22 2 Insert 23

3 3 7 15 19 11 23 Insert 2
Insert 17
4 4 12 20
Insert 25
5 5 13

Here p=1, h0(k)=k %4, h1(k)=k %8

76
Comparison of Ordered Indexing and Hashing
• The following issues must be considered
– Cost of periodic re-organization
– Relative frequency of insertions and deletions
– Is it desirable to optimize average access time at the expense of worst-
case access time?
– Expected type of queries:
• Hashing is generally better at retrieving records having a specified
value of the key.
• If range queries are common, ordered indices are to be preferred

77
Bitmap Indices
• Bitmap indices are a special type of index designed for efficient querying on
multiple keys
• Records in a relation are assumed to be numbered sequentially from, say, 0
– Given a number n it must be easy to retrieve record n
• Particularly easy if records are of fixed size
• Applicable on attributes that take on a relatively small number of distinct values
– E.g. gender, country, state, …
– E.g. income-level (income broken up into a small number of levels such as 0-
9999, 10000-19999, 20000-49999, 50000- infinity)
• A bitmap is simply an array of bits

78
Bitmap Indices (Cont.)
• In its simplest form a bitmap index on an attribute has a bitmap for each
value of the attribute
– Bitmap has as many bits as records
– In a bitmap for value v, the bit for a record is 1 if the record has the
value v for the attribute, and is 0 otherwise

79
Bitmap Indices (Cont.)
• Bitmap indices are useful for queries on multiple attributes
– not particularly useful for single attribute queries
• Queries are answered using bitmap operations
– Intersection (AND)
– Union (OR)
– Complementation (NOT)
• Each operation (AND, OR) takes two bitmaps of the same size and applies the
operation on corresponding bits to get the result bitmap
– E.g. 100110 AND 110011 = 100010
100110 OR 110011 = 110111
NOT 100110 = 011001
– Males with income level L1:
10010 AND 10100 = 10000
• Can then retrieve required tuples.
• Counting number of matching tuples is even faster

80
Bitmap Indices (Cont.)
• Bitmap indices generally very small compared with relation size
– E.g. if record is 100 bytes, space for a single bitmap is 1/800 of space used by
relation.
• If number of distinct attribute values is 8, bitmap is only 1% of relation
size
• Deletion needs to be handled properly
– Existence bitmap to note if there is a valid record at a record location
– Needed for complementation
• NOT(A=v): (NOT bitmap-A-v) AND ExistenceBitmap
• Should keep bitmaps for all values, even null value
– To correctly handle SQL null semantics for NOT(A=v):
• intersect above result with (NOT bitmap-A-Null)
81
Efficient Implementation of Bitmap Operations
• Bitmaps are packed into words; a single bitwise instruction computes the
operation on 32 or 64 bits at once
– E.g. if a relation has 1 million records, computing the intersection of two
bitmaps takes 31,250 instructions with 32-bit words
• Counting number of 1s can be done fast by a trick:
– Use each byte to index into a precomputed array of 256 elements each storing
the count of 1s in the binary representation
• Can use pairs of bytes to speed up further at a higher memory cost
– Add up the retrieved counts
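A sketch of the byte-table trick, with assumed helper names:

```python
# Precompute the count of 1s for each of the 256 possible byte values, then
# sum table lookups over the bitmap one byte at a time.
COUNTS = [bin(v).count('1') for v in range(256)]   # 256-element precomputed array

def count_ones(bitmap, nbytes):
    total = 0
    for _ in range(nbytes):
        total += COUNTS[bitmap & 0xFF]   # low byte indexes the table
        bitmap >>= 8
    return total

print(count_ones(0b10110110_01101001, 2))  # 9
```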

82
Bitmaps and B+ Trees
• Bitmaps can be used instead of record ID lists at leaf levels of B+-trees, for values
that have a large number of matching records
• Suppose a particular value vi occurs in 1/16 of the records of a relation
• Let N be the no. of records in the relation
• Now assume that a record ID is 64 bits
• The bit map needs only 1 bit per record or N bits in total
• In contrast, the list representation requires 64 bits per record where the value
occurs, or 64 × N/16 = 4N bits
• Above technique merges benefits of bitmap and B+-tree indices

83
