
Group assignment

1. Group 4: slides 2-29
2. Group 2: slides 30-62
NB
1. Every member must participate in the presentation.
2. The presentation will be held after 11 days.
3. This chapter is divided between Group 2 and Group 4 according to the slide ranges above.

Chapter Six
Record Storage and Primary File Organization

Outline
• Introduction
• Operations on Files
• Files of Unordered Records (Heap Files)
• Files of Ordered Records (Sorted Files)
• Hashing Techniques
• Index Structure for Files
• Types of Single-Level Ordered Indexes
• Dynamic Multilevel Indexes Using B-Trees and B+-Trees
• Indexes on Multiple Keys

Introduction
• The goal of record storage and file organization is to store and organize data in a way that optimizes retrieval, minimizes storage space, and ensures data integrity.
• Primary storage:
• This category includes storage media that can be operated on directly by the computer's central processing unit (CPU), such as the computer's main memory and smaller but faster cache memories. Primary storage usually provides fast access to data but is of limited storage capacity.

• Secondary storage:
• This category includes magnetic disks, optical disks, and tapes. These devices
usually have a larger capacity, cost less, and provide slower access to data than do
primary storage devices. Data in secondary storage cannot be processed directly
by the CPU; it must first be copied into primary storage.
Cont…

• Hardware Description of Disk Devices


• The most basic unit of data on the disk is a single bit of information.
By magnetizing an area on disk in certain ways, one can make it
represent a bit value of either 0 (zero) or 1 (one).
• To code information, bits are grouped into bytes (or characters). Byte
sizes are typically 4 to 8 bits, depending on the computer and the
device.
• The capacity of a disk is the number of bytes it can store, which is
usually very large.

Cont…

• Disk Storage Devices


• Preferred secondary storage device for high storage capacity and low cost.

• Data is stored as magnetized areas on magnetic disk surfaces.

• A disk pack contains several magnetic disks connected to a rotating spindle.

• Disks are divided into concentric circular tracks on each disk surface.
  • Track capacities vary, typically from 4 to 50 Kbytes or more.

• A track is divided into smaller blocks or sectors because it usually contains a large amount of information.

• The division of a track into sectors is hard-coded on the disk surface and cannot be changed.

Cont…
• A track is divided into blocks.
• The block size B is fixed for each system.
• Typical block sizes range from B=512 bytes to B=4096 bytes.
• Whole blocks are transferred between disk and main memory for processing.

Cont… Storage
• A read-write head moves to the track that contains the block
to be transferred.
• Disk rotation moves the block under the read-write head for reading
or writing.
• A physical disk block (hardware) address consists of:
• a cylinder number (imaginary collection of tracks of same radius from
all recorded surfaces)
• the track number or surface number (within the cylinder) and block
number (within track).
• Reading or writing a disk block is time consuming because of
the seek time s and rotational delay (latency) rd.
• Double buffering can be used to speed up the transfer of
contiguous disk blocks.
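As a rough illustration of why block reads are expensive, the sketch below estimates the time to read one block as seek time + rotational delay + transfer time. All parameter values are illustrative assumptions, not figures from these slides.

# Rough estimate of the time to read one disk block (seek + rotational delay + transfer).
avg_seek_ms = 8.0              # average seek time s, in ms (assumed)
rpm = 7200                     # disk rotation speed (assumed)
block_size_bytes = 4096        # block size B (assumed)
transfer_rate_bytes_s = 150e6  # sustained transfer rate (assumed)

rotational_delay_ms = 0.5 * (60_000 / rpm)                   # on average, half a revolution
transfer_ms = block_size_bytes / transfer_rate_bytes_s * 1000

total_ms = avg_seek_ms + rotational_delay_ms + transfer_ms
print(f"seek={avg_seek_ms:.1f} ms  latency={rotational_delay_ms:.1f} ms  "
      f"transfer={transfer_ms:.3f} ms  total={total_ms:.1f} ms per block")

With these assumed numbers, seek time and rotational delay dominate, which is why transferring whole blocks (and contiguous blocks with double buffering) pays off.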

Types of Disk Parameters
Records

• A record is a group of related data held within the same structure.

• Records may be of fixed or variable length.

• A record consists of fields, with each field describing an attribute of the entity.

• Fields have values of a particular type, e.g., amount, date, time, age, image.

• Fields themselves may be fixed length or variable length.

• Variable-length fields can be mixed into one record:
  • Separator characters or length fields are needed so that the record can be "parsed."
Blocking

• Blocking:
  • Refers to storing a number of records in one block on the disk.

• Blocking factor (bfr) refers to the number of records per block.

• There may be empty space in a block if an integral number of records do not fit in one block.

• Spanned records:
  • Refers to records that exceed the size of one or more blocks and hence span a number of blocks.
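A minimal sketch of the blocking-factor arithmetic just described, for unspanned fixed-length records. The record and block sizes are assumptions (the same ones reused in the indexing example later in this chapter):

B = 512    # block size in bytes (assumed)
R = 150    # fixed record size in bytes (assumed)

bfr = B // R                  # blocking factor: whole records that fit in one block
unused = B - bfr * R          # empty space left in each block (unspanned organization)
print(f"bfr = {bfr} records per block, {unused} bytes of each block unused")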
File Records

• A file is a sequence of records, where each record is a collection of data values (or data items).
• A file descriptor (or file header) includes information that describes the file, such as the field names and their data types, and the addresses of the file blocks on disk.
• Records are stored on disk blocks.
• The blocking factor bfr for a file is the (average) number of file records stored in a disk block.
• A file can have fixed-length records or variable-length records.
Cont…

• File: a group of related records.

• File records can be unspanned or spanned:
  • Unspanned: no record can span two blocks.
  • Spanned: a record can be stored in more than one block.

• The physical disk blocks that are allocated to hold the records of a file can be contiguous, linked, or indexed.

• In a file of fixed-length records, all records have the same format. Usually, unspanned blocking is used with such files.

• Files of variable-length records require additional information to be stored in each record, such as separator characters and field types. Usually, spanned blocking is used with such files.
Operations on Files

Typical file operations include:


• OPEN: Readies the file for access, and associates a pointer that will
refer to a current file record at each point in time.
• FIND: Searches for the first file record that satisfies a certain
condition, and makes it the current file record.
• FINDNEXT: Searches for the next file record (from the current record)
that satisfies a certain condition, and makes it the current file record.
• READ: Reads the current file record into a program variable.
• INSERT: Inserts a new record into the file & makes it the current file
record.

Cont…

• DELETE: Removes the current file record from the file, usually
by marking the record to indicate that it is no longer valid.
• MODIFY: Changes the values of some fields of the current file
record.
• CLOSE: Terminates access to the file.
• REORGANIZE: Reorganizes the file records.
• For example, the records marked deleted are physically removed from
the file or a new organization of the file records is created.

• READ_ORDERED: Reads the file blocks in order of a specific field of the file.

Unordered Files

• Also called a heap or a pile file.

• New records are inserted at the end of the file.

• A linear search through the file records is necessary to search for a record.
  • This requires reading and searching half the file blocks on the average, and is hence quite expensive.

• Record insertion is quite efficient.

• Reading the records in order of a particular field requires sorting the file records.
Ordered Files

• Also called a sequential file.

• File records are kept sorted by the values of an ordering field.

• Insertion is expensive: records must be inserted in the correct order.


• It is common to keep a separate unordered overflow (or transaction)
file for new records to improve insertion efficiency; this is periodically
merged with the main ordered file.

• A binary search can be used to search for a record on its ordering field value.
  • This requires reading and searching about log2 b of the file blocks on average, where b is the number of file blocks; an improvement over linear search (a block-level sketch follows below).

• Reading the records in order of the ordering field is quite efficient.
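A minimal sketch of the block-level binary search mentioned above. The file is modelled simply as a Python list of blocks, each a list of records sorted on the ordering key; this is an illustration of the idea, not code from the slides.

# Binary search on the ordering field of a sorted file, at block granularity.
def search_ordered_file(blocks, key, key_of):
    lo, hi = 0, len(blocks) - 1
    while lo <= hi:                      # each iteration reads one block
        mid = (lo + hi) // 2
        block = blocks[mid]
        if key < key_of(block[0]):
            hi = mid - 1                 # key, if present, is in an earlier block
        elif key > key_of(block[-1]):
            lo = mid + 1                 # key, if present, is in a later block
        else:
            for rec in block:            # key falls in this block's range: scan it
                if key_of(rec) == key:
                    return rec
            return None
    return None

# Example: two blocks of records keyed on the first field.
blocks = [[(1, "a"), (3, "b"), (5, "c")], [(7, "d"), (9, "e"), (11, "f")]]
print(search_ordered_file(blocks, 9, key_of=lambda r: r[0]))   # (9, 'e')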

Average Access Time

• The average cost of accessing a specific record depends on the file organization: roughly b/2 block accesses for a linear search of a heap file, about log2 b block accesses for a binary search on the ordering field of a sorted file, and typically one or two block accesses for a hashed file on its hash key (where b is the number of file blocks).

Hashed Files

• Hashing is the process of converting a given key into another value.

• A hash function is used to generate the new value according to a mathematical algorithm.

• The result of a hash function is known as a hash value or, simply, a hash.

• Hashing for disk files is called external hashing.

• The file blocks are divided into M equal-sized buckets, numbered bucket0, bucket1, ..., bucketM-1.

• One of the file fields is designated to be the hash key of the file.
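A minimal sketch of the bucket assignment used in external hashing. The slides do not fix a particular hash function, so the common choice h(K) = K mod M is assumed here:

M = 8                      # number of buckets (assumed)

def h(key):
    return key % M         # hash function: bucket number = key mod M

for ssn in (123456789, 987654321, 555000111):
    print(f"record with hash key {ssn} -> bucket {h(ssn)}")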
Cont….

• The record with hash key value K is stored in bucket i, where i = h(K), and h is the hashing function. Search is very efficient on the hash key.

• Collisions occur when a new record hashes to a bucket that is already full.
  • An overflow file is kept for storing such records.
  • Overflow records that hash to each bucket can be linked together.

Cont…

• There are numerous methods for collision resolution, including the following:
• Open addressing: Proceeding from the occupied position specified by the hash
address, the program checks the subsequent positions in order until an unused
(empty) position is found.
• Chaining: For this method, various overflow locations are kept, usually by
extending the array with a number of overflow positions. In addition, a pointer
field is added to each record location. A collision is resolved by placing the new
record in an unused overflow location and setting the pointer of the occupied
hash address location to the address of that overflow location.
• Multiple hashing: The program applies a second hash function if the first results
in a collision. If another collision results, the program uses open addressing or
applies a third hash function and then uses open addressing if necessary.
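As a concrete illustration of the chaining method described above, here is a minimal in-memory sketch in which each bucket holds a small fixed-capacity block plus a chain of overflow records. The bucket count, capacity, and helper names are assumptions, and the disk details are abstracted away:

# Collision resolution by chaining: full buckets link to overflow records.
M = 4                 # number of buckets (assumed)
BUCKET_CAPACITY = 2   # records that fit in one bucket "block" (assumed)

buckets = [{"records": [], "overflow": []} for _ in range(M)]

def insert(key, value):
    b = buckets[key % M]
    if len(b["records"]) < BUCKET_CAPACITY:
        b["records"].append((key, value))        # fits in the home bucket
    else:
        b["overflow"].append((key, value))       # collision: goes to the overflow chain

def search(key):
    b = buckets[key % M]
    for k, v in b["records"] + b["overflow"]:    # check home bucket, then its chain
        if k == key:
            return v
    return None

for k in (1, 5, 9, 13):      # all hash to bucket 1, forcing overflow
    insert(k, f"rec{k}")
print(search(13))            # 'rec13', found via the overflow chain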

Cont…

• To reduce overflow records, a hash file is typically kept 70-80% full.

• The hash function h should distribute the records uniformly among the buckets.
  • Otherwise, search time will be increased because many overflow records will exist.

• Main disadvantages of static external hashing:
  • A fixed number of buckets M is a problem if the number of records in the file grows or shrinks.
  • Ordered access on the hash key is quite inefficient (requires sorting the records).

Dynamic And Extendible Hashed
Files
• Dynamic and Extendible Hashing Techniques
• Hashing techniques are adapted to allow the dynamic growth and
shrinking of the number of file records.
• These techniques include the following: dynamic hashing, extendible
hashing, and linear hashing.

• Both dynamic and extendible hashing use the binary representation of the hash value h(K) in order to access a directory.
  • In dynamic hashing the directory is a binary tree.
  • In extendible hashing the directory is an array of size 2^d, where d is called the global depth.

Cont…

• The directories can be stored on disk, and they expand or shrink dynamically.
  • Directory entries point to the disk blocks that contain the stored records.

• An insertion into a disk block that is full causes the block to split into two blocks, and the records are redistributed among the two blocks.
  • The directory is updated appropriately.

• Dynamic and extendible hashing do not require an overflow area.

• Linear hashing does require an overflow area but does not use a directory.
  • Blocks are split in linear order as the file expands.
Extendible Hashing
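A minimal sketch of the extendible-hashing scheme just described: a directory of size 2^d is indexed by d bits of h(K), and splitting a full bucket may double the directory. For simplicity the d low-order bits of the key are used and the hash is the identity function; both are assumptions for illustration, not the textbook's exact algorithm.

BUCKET_CAPACITY = 2                     # records per bucket (assumed)

class Bucket:
    def __init__(self, depth):
        self.local_depth = depth        # number of hash bits this bucket agrees on
        self.records = {}

global_depth = 1
directory = [Bucket(1), Bucket(1)]      # 2**global_depth entries

def h(key):
    return key                          # identity "hash" keeps the example easy to follow

def bucket_for(key):
    return directory[h(key) & ((1 << global_depth) - 1)]   # d low-order bits index the directory

def insert(key, value):
    global global_depth, directory
    b = bucket_for(key)
    if len(b.records) < BUCKET_CAPACITY or key in b.records:
        b.records[key] = value
        return
    # Bucket full: raise its local depth, doubling the directory if needed.
    if b.local_depth == global_depth:
        directory = directory + directory
        global_depth += 1
    b.local_depth += 1
    new_bucket = Bucket(b.local_depth)
    for i, entry in enumerate(directory):                # re-point half of the old bucket's entries
        if entry is b and (i >> (b.local_depth - 1)) & 1:
            directory[i] = new_bucket
    old_records, b.records = b.records, {}
    for k, v in old_records.items():                     # redistribute, then retry the insertion
        bucket_for(k).records[k] = v
    insert(key, value)

for k in (0, 4, 8, 12, 1, 5):
    insert(k, f"rec{k}")
print("global depth:", global_depth)
print("buckets:", [sorted(b.records) for b in dict.fromkeys(directory)])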

Parallelizing Disk Access using RAID
Technology
• Secondary storage technology must take steps to keep up in
performance and reliability with processor technology.

• A major advance in secondary storage technology is represented by the development of RAID, which originally stood for Redundant Arrays of Inexpensive Disks.

• The main goal of RAID is to even out the widely different rates of
performance improvement of disks against those in memory and
microprocessors.

RAID Technology….

• A natural solution is a large array of small independent disks acting as a single higher-performance logical disk.

• A concept called data striping is used, which utilizes parallelism to improve disk performance.

• Data striping distributes data transparently over multiple disks to make them appear as a single large, fast disk.
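A minimal sketch of block-level data striping: logical block i of the single large logical disk is mapped round-robin to a physical disk and an offset on that disk. The four-disk configuration is an assumption for illustration.

# Block-level striping: spread consecutive logical blocks round-robin across the disks.
NUM_DISKS = 4                           # number of disks in the array (assumed)

def locate(logical_block):
    disk = logical_block % NUM_DISKS        # which disk holds the block
    offset = logical_block // NUM_DISKS     # block position within that disk
    return disk, offset

for b in range(8):
    disk, offset = locate(b)
    print(f"logical block {b} -> disk {disk}, block {offset}")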

Cont…
• Different RAID organizations were defined based on different combinations of two factors: the granularity of data interleaving (striping) and the pattern used to compute redundant information.

  • RAID level 0 has no redundant data and hence has the best write performance, at the risk of data loss.

  • RAID level 1 uses mirrored disks.

  • RAID level 2 uses memory-style redundancy based on Hamming codes, which contain parity bits for distinct overlapping subsets of components. Level 2 includes both error detection and correction.

  • RAID level 3 uses a single parity disk, relying on the disk controller to figure out which disk has failed.

  • RAID levels 4 and 5 use block-level data striping, with level 5 distributing data and parity information across all disks.

  • RAID level 6 applies the so-called P + Q redundancy scheme using Reed-Solomon codes to protect against up to two disk failures by using just two redundant disks.

Use of RAID Technology
• Different RAID organizations are used in different situations:
  • RAID level 1 (mirrored disks) is the easiest for rebuilding a disk from the other disks.
    • It is used for critical applications such as logs.
  • RAID level 2 uses memory-style redundancy based on Hamming codes, which contain parity bits for distinct overlapping subsets of components.
    • Level 2 includes both error detection and correction.
  • RAID level 3 (a single parity disk, relying on the disk controller to figure out which disk has failed) and level 5 (block-level data striping) are preferred for large-volume storage, with level 3 giving higher transfer rates.

• The most popular uses of RAID technology currently are:
  • Level 0 (with striping), level 1 (with mirroring), and level 5 with an extra drive for parity.

• Design decisions for RAID include:
  • the RAID level, the number of disks, the choice of parity schemes, and the grouping of disks for block-level striping.

Trends in Disk Technology
Storage Network

• The demand for higher storage capacity has risen considerably in recent times.

• Organizations need to move from a static, fixed, data-center-oriented operation to a more flexible and dynamic infrastructure for information processing.

• Thus they are moving to the concept of Storage Area Networks (SANs).

• In a SAN, online storage peripherals are configured as nodes on a high-speed network and can be attached to and detached from servers in a very flexible manner.

• This allows storage systems to be placed at longer distances from the servers and provides different performance and connectivity options.

Cont…

• Advantages of SANs:
  • Flexible many-to-many connectivity among servers and storage devices using Fibre Channel hubs and switches.
  • Up to 10 km separation between a server and a storage system using appropriate fiber optic cables.
  • Better isolation capabilities, allowing non-disruptive addition of new peripherals and servers.

• SANs face the problem of combining storage options from multiple vendors and dealing with evolving standards of storage management software and hardware.
Indexing Structures for Files
• Indexing is regarded as the process of describing and identifying documents in
terms of their subject contents.

• An index is a small table having only two columns. The first column comprises a copy of the primary or candidate key of a table.
  • The second column contains a set of pointers holding the address of the disk block where that specific key value is stored.
• Indexing is a data structure technique which allows you to quickly retrieve records
from a database file.
• Single-level Ordered Indexes
• Primary Indexes
• Clustering Indexes
• Secondary Indexes
• Multilevel Indexes
• Dynamic Multilevel Indexes Using B-Trees and B+-Trees
• Indexes on Multiple Keys
Indexes as Access Paths
• A single-level index is an auxiliary file that makes it more efficient to search for a record in the data file.
• The index is usually specified on one field of the file (although it could be specified on several fields).
• One form of an index is a file of entries <field value, pointer to record>, which is ordered by field value.
• The index is called an access path on the field.

Cont…
• The index file usually occupies considerably fewer disk blocks than the data file because its entries are much smaller.
• A binary search on the index yields a pointer to the file record.
• Indexes can also be characterized as dense or sparse:
  • A dense index has an index entry for every search key value (and hence every record) in the data file.
  • A sparse (or nondense) index, on the other hand, has index entries for only some of the search values.

Cont…
• Example: Given the following data file: EMPLOYEE(NAME, SSN, ADDRESS, JOB, SAL, ...)
• Suppose that:
  record size R = 150 bytes, block size B = 512 bytes, r = 30000 records
• Then, we get:
  blocking factor bfr = floor(B/R) = floor(512/150) = 3 records/block
  number of file blocks b = ceil(r/bfr) = ceil(30000/3) = 10000 blocks
• For an index on the SSN field, assume the field size V(SSN) = 9 bytes and the record pointer size P(R) = 7 bytes. Then:
  index entry size R(I) = V(SSN) + P(R) = 9 + 7 = 16 bytes
  index blocking factor bfr(I) = floor(B/R(I)) = floor(512/16) = 32 entries/block
  number of index blocks b(I) = ceil(r/bfr(I)) = ceil(30000/32) = 938 blocks
  binary search on the index needs ceil(log2 b(I)) = ceil(log2 938) = 10 block accesses
• This is compared to an average linear search cost of:
  b/2 = 10000/2 = 5000 block accesses
• If the file records are ordered, the binary search cost would be:
  ceil(log2 b) = ceil(log2 10000) = 14 block accesses
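A small sketch that recomputes the figures of this example, so they can be checked or re-derived with other parameter values:

import math

# Parameters from the example above.
B, R, r = 512, 150, 30_000          # block size, record size, number of records
V_ssn, P_r = 9, 7                   # SSN field size and record pointer size (bytes)

bfr = B // R                        # data-file blocking factor
b = math.ceil(r / bfr)              # number of data blocks

R_i = V_ssn + P_r                   # index entry size
bfr_i = B // R_i                    # index blocking factor
b_i = math.ceil(r / bfr_i)          # number of index blocks

print("data:  bfr =", bfr, " b =", b)
print("index: bfr_i =", bfr_i, " b_i =", b_i)
print("binary search on index:", math.ceil(math.log2(b_i)), "block accesses")
print("linear search on file: ", b // 2, "block accesses")
print("binary search on file: ", math.ceil(math.log2(b)), "block accesses")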

Types of Primary Index
• Primary Index
  • Defined on an ordered data file.
  • The data file is ordered on a key field.
  • Includes one index entry for each block in the data file; the index entry has the key field value for the first record in the block, which is called the block anchor.
  • A similar scheme can use the last record in a block.
  • A primary index is a nondense (sparse) index, since it includes an entry for each disk block of the data file and the key of its anchor record, rather than for every search value.

Primary index on the ordering
key field
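A minimal sketch of the idea behind this figure: a sparse primary index keeps one <block-anchor key, block number> entry per data block, and a lookup binary-searches the anchors and then scans a single block. The data values are made up for illustration.

import bisect

# Data file: blocks of records ordered on the key (first field of each record).
data_blocks = [
    [(2, "Abebe"), (5, "Bekele")],
    [(8, "Chaltu"), (12, "Dawit")],
    [(15, "Elias"), (20, "Feven")],
]

# Primary index: one entry <anchor key, block number> per block (nondense).
anchors = [block[0][0] for block in data_blocks]

def find(key):
    i = bisect.bisect_right(anchors, key) - 1     # last block whose anchor <= key
    if i < 0:
        return None
    for k, v in data_blocks[i]:                   # scan only that one block
        if k == key:
            return v
    return None

print(find(12), find(3))    # 'Dawit' None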

Cont…
• Clustering Index
  • Defined on an ordered data file.
  • The data file is ordered on a non-key field, unlike a primary index, which requires that the ordering field of the data file have a distinct value for each record.
  • Includes one index entry for each distinct value of the field; the index entry points to the first data block that contains records with that field value.
  • It is another example of a nondense index. Insertion and deletion are relatively straightforward with a clustering index.

Types of Single-Level Indexes
• Secondary Index
• A secondary index provides a secondary means of
accessing a file for which some primary access already
exists.
• The secondary index may be on a field which is a
candidate key and has a unique value in every record, or a
non-key with duplicate values.
• The index is an ordered file with two fields.
• The first field is of the same data type as some non-
ordering field of the data file that is an indexing field.
• The second field is either a block pointer or a record
pointer.
• There can be many secondary indexes (and hence,
indexing fields) for the same file.
• Includes one entry for each record in the data file; hence,
it is a dense index
Example of Dense Secondary Index

Properties of Index Types
Multi-Level Indexes
• Because a single-level index is an ordered file, we can create a primary index to the index itself.
  • In this case, the original index file is called the first-level index, and the index to the index is called the second-level index.
• We can repeat the process, creating a third, fourth, ..., top level, until all entries of the top level fit in one disk block.
• A multi-level index can be created for any type of first-level index (primary, secondary, clustering), as long as the first-level index consists of more than one disk block.
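A small sketch of how many levels a multi-level index needs, reusing the fan-out (32 index entries per block) and first-level size (938 blocks) from the earlier SSN-index example; those figures are carried over as assumptions:

import math

fo = 32            # fan-out: index entries per block (bfr from the earlier example)
blocks = 938       # blocks at the first index level (from the earlier example)

levels = 1
while blocks > 1:                    # index the previous level until one block remains
    blocks = math.ceil(blocks / fo)
    levels += 1

print(f"{levels} index levels; a search then costs {levels + 1} block accesses "
      f"({levels} index blocks + 1 data block)")

With these numbers the multi-level index answers a lookup in 4 block accesses, versus 10 + 1 with the single-level index alone.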

A Two-Level Primary Index

Multi-Level Indexes
• Such a multi-level index is a form of search tree.
• However, insertion and deletion of new index entries is a severe problem because every level of the index is an ordered file.

A Node in a Search Tree with Pointers
to Subtrees below It
FIGURE 14.9
A search tree of order p = 3.

Dynamic Multilevel Indexes Using B-Trees and B+-Trees
• Most multi-level indexes use B-tree or B+-tree data structures because of the insertion and deletion problem.
  • This leaves space in each tree node (disk block) to allow for new index entries.
• These data structures are variations of search trees that allow efficient insertion and deletion of new search values.
• In B-tree and B+-tree data structures, each node corresponds to a disk block.
• Each node is kept between half full and completely full.

Cont…
• An insertion into a node that is not full is quite efficient.
• If a node is full, the insertion causes a split into two nodes.
  • Splitting may propagate to other tree levels.
• A deletion is quite efficient if a node does not become less than half full.
• If a deletion causes a node to become less than half full, it must be merged with neighboring nodes.

Difference between B-tree and
B+-tree
• In a B-tree, pointers to data records exist at all levels of the tree.
• In a B+-tree, all pointers to data records exist at the leaf-level nodes.
• A B+-tree can therefore have fewer levels (or a higher capacity of search values) than the corresponding B-tree (see the node-capacity sketch below).
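To make the "fewer levels" point concrete, the sketch below compares the largest order p (fan-out) of a B-tree node and a B+-tree internal node that fit in one block, using the block and key sizes from the earlier example (B = 512, V = 9, record pointer Pr = 7) plus an assumed tree-pointer size P = 6 bytes:

B, V, P, Pr = 512, 9, 6, 7    # block size, key size, tree pointer, record pointer (bytes)

# Largest p with: B-tree node            p*P + (p-1)*(V + Pr) <= B
#                 B+-tree internal node  p*P + (p-1)*V        <= B
p_btree = (B + V + Pr) // (P + V + Pr)
p_bplus = (B + V) // (P + V)

print(f"B-tree order  p = {p_btree}")
print(f"B+-tree order p = {p_bplus}  (higher fan-out per node, hence fewer levels)")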

B-tree Structure

The Nodes of a B+-tree

An Example of an Insertion in a B+-tree

An Example of a Deletion in a B+-tree

