Chapter 17: Disk Storage, Basic File Structures, and Hashing

Chapter Outline
 Disk Storage Devices
 Files of Records
 Operations on Files
 Unordered Files
 Ordered Files
 Hashed Files
 Dynamic and Extendible Hashing Techniques
 RAID Technology
Introduction
 The collection of data that makes up a computerized
database must be stored physically on some computer
storage medium. The DBMS software can then retrieve,
update, and process this data as needed. Computer storage
media form a storage hierarchy that includes two main
categories:
 Primary Storage
 Secondary Storage
Memory Hierarchy
 Primary Storage Level
 Cache Memory (Static RAM)
 DRAM (main memory)
 Secondary Storage Level
 Mass Storage (CD/DVD/Disk Drive)
 The storage capacity is measured in kilobytes (KB),
megabytes (MB), gigabytes (GB), and terabytes (TB). The
term petabyte is now becoming relevant in the context of
very large repositories of data.
Main memory database
 In some cases, entire databases can be kept in
main memory (with a backup copy on magnetic
disk), leading to main memory databases;
these are particularly useful in real-time
applications that require extremely fast
response times.
 An example is telephone switching applications,
which store databases that contain routing and
line information in main memory.
Disk Storage Devices
 Preferred secondary storage device for high
storage capacity and low cost.
 Data stored as magnetized areas on
magnetic disk surfaces.
 A disk pack contains several magnetic
disks connected to a rotating spindle.
 Disks are divided into concentric circular
tracks on each disk surface. Track
capacities typically vary from 4 to 50
Kbytes.
Disk Storage Devices (cont.)
Because a track usually contains a large amount
of information, it is divided into smaller blocks or
sectors.
 The division of a track into sectors is hard-coded
on the disk surface and cannot be changed. One
type of sector organization defines a sector as the
portion of a track that subtends a fixed angle at
the center.
 A track is divided into blocks. The block size B is
fixed for each system. Typical block sizes range
from B=512 bytes to B=4096 bytes. Whole blocks
are transferred between disk and main memory
for processing.
Disk Storage Devices (cont.)
Review Question
 The Megatron 747 disk has the following
characteristics, which are typical of a large
vintage-2008 disk drive.
 There are eight platters providing sixteen
surfaces.
 There are 2^16, or 65,536, tracks per surface.
 There are (on average) 2^8 = 256 sectors per
track.
 There are 2^12 = 4,096 bytes per sector.
Review Question
 What is the capacity of the disk?
 Solution:
 Capacity = (16 surfaces) *
(65,536 tracks/surface) *
(256 sectors/track) *
(4,096 bytes/sector)
 Capacity = 2^40 bytes (1 terabyte)
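As a quick sanity check, here is the same calculation as a short Python sketch (illustrative only):

```python
# Megatron 747 parameters from the review question above.
surfaces = 16
tracks_per_surface = 2**16   # 65,536 tracks
sectors_per_track = 2**8     # 256 sectors (on average)
bytes_per_sector = 2**12     # 4,096 bytes

capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
print(capacity == 2**40)     # True: 2^40 bytes, i.e., one terabyte
```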
Disk Storage Devices (cont.)
 A read-write head moves to the track that contains the
block to be transferred. Disk rotation moves the block
under the read-write head for reading or writing.
 A physical disk block (hardware) address consists of a
cylinder number (an imaginary collection of tracks of the
same radius from all recorded surfaces), the track or
surface number (within the cylinder), and the block number
(within the track).
 Reading or writing a disk block is time consuming because
of the seek time s and the rotational delay (latency) rd.
 Double buffering can be used to speed up the transfer of
contiguous disk blocks.
Disk Storage Devices (cont.)
Typical Disk Parameters
Disk Controller
 A disk controller, typically embedded
in the disk drive, controls the disk drive
and interfaces it to the computer
system.
 One of the standard interfaces used
today for disk drives on PCs and
workstations is called SCSI (Small
Computer System Interface).
Seek Time
 To transfer a disk block, given its
address, the disk controller must first
mechanically position the read/write
head on the correct track. The time
required to do this is called the seek
time.
Typical seek times are 5 to 10 msec on
desktops and 3 to 8 msec on servers.
Rotational Delay
 There is another delay—called the rotational
delay or latency—while the beginning of the
desired block rotates into position under the
read/write head. It depends on the rpm of the
disk.
 For example, at 15,000 rpm, the time per rotation
is 4 msec and the average rotational delay is the
time per half revolution, or 2 msec. At 10,000
rpm the average rotational delay increases to 3
msec.
Block Transfer Time
 Some additional time is needed to transfer the
data; this is called the block transfer time.
 Hence, the total time needed to locate and
transfer an arbitrary block, given its address, is
the sum of the seek time, rotational delay, and
block transfer time.
 The seek time and rotational delay are usually
much larger than the block transfer time.
Review Question
 A disk drive has the following
characteristics: block size B = 512
bytes, average seek time s = 30 ms
(milliseconds), the disk rotates at
12,000 rpm, and transfer rate tr = 512
B/ms (bytes per millisecond). How much
time does it take on average to locate
and transfer a single block, given its
block address?
Review Question
 Solution:
 Seek time = 30 ms (given)
 Time per rotation T = 60,000 ms/min ÷ 12,000 rpm = 5 ms,
so rotational delay = T/2 = 2.5 ms
 Block transfer time = 512/512 = 1 ms
 Total delay = 30 + 2.5 + 1 = 33.5 ms
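The same computation can be expressed as a small Python sketch, with the parameter values taken from the question above (illustrative only):

```python
def avg_block_access_time(seek_ms, rpm, block_bytes, transfer_bytes_per_ms):
    """Average time (ms) to locate and transfer one block: seek time
    + average rotational delay + block transfer time."""
    rotation_ms = 60_000 / rpm            # time for one full revolution
    rotational_delay = rotation_ms / 2    # on average, half a revolution
    transfer_ms = block_bytes / transfer_bytes_per_ms
    return seek_ms + rotational_delay + transfer_ms

# s = 30 ms, 12,000 rpm, B = 512 bytes, tr = 512 bytes/ms
print(avg_block_access_time(30, 12_000, 512, 512))  # 33.5
```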
Records
 Fixed and variable length records
 Records contain fields which have values of a
particular type (e.g., amount, date, time, age)
 Fields themselves may be fixed length or
variable length
 Variable length fields can be mixed into one
record: separator characters or length fields are
needed so that the record can be “parsed”.
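A minimal Python sketch of separator-based parsing; the '$' separator and the record layout are arbitrary, hypothetical choices for illustration:

```python
SEPARATOR = "$"  # an arbitrary separator character (hypothetical choice)

def parse_record(raw: str) -> list[str]:
    """Split a stored variable-length record into its field values."""
    return raw.split(SEPARATOR)

record = "Smith$1987-01-19$Research$5000"   # hypothetical stored record
print(parse_record(record))  # ['Smith', '1987-01-19', 'Research', '5000']
```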
Record Blocking
Spanning
 To utilize the unused space at the end of a block, we
can store part of a record on one block and the rest
on another. A pointer at the end of the first
block points to the block containing the
remainder of the record in case it is not the
next consecutive block on disk.
 This organization is called spanned because
records can span more than one block.
Unspanned
 If records are not allowed to cross block boundaries,
the organization is called unspanned.
 This is used with fixed-length records having B > R
 For variable-length records using spanned
organization, each block may store a different
number of records. In this case, the blocking factor
bfr represents the average number of records per
block for the file.
 We can use bfr to calculate the number of blocks b
needed for a file of r records: b = (r/bfr) blocks.
Example
Review Question
 Suppose we are storing records in unspanned
organization. These records are 316 bytes long.
Suppose also that we use 4096-byte blocks.
 Of these bytes, say 12 will be used for a
block header, leaving 4084 bytes for data.
 How many records fit in each block, and how
much space in each block remains unoccupied
due to the unspanned organization?
Review Question
 Solution:
 bfr = floor(4084/316) = 12
 In this space we can fit twelve records of the
given 316-byte format.
 Wasted space per block:
 B − (bfr * R) = 4084 − (12 * 316) = 292 bytes
 Thus 292 bytes of each block are wasted
space.
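A small Python sketch reproduces this arithmetic; the file size of r = 100,000 records is a hypothetical value added only to illustrate the b = ⌈r/bfr⌉ formula:

```python
import math

B = 4096 - 12   # usable bytes per block after the 12-byte header
R = 316         # record size in bytes

bfr = B // R              # blocking factor (unspanned): floor(4084/316)
wasted = B - bfr * R      # unused bytes per block
print(bfr, wasted)        # 12 records per block, 292 bytes wasted

r = 100_000               # hypothetical number of records in the file
print(math.ceil(r / bfr)) # b = ceil(r/bfr) = 8334 blocks
```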
Block Structure with Addressing
Unordered Files
 Also called a heap or a pile file.
 New records are inserted at the end of the file.
 To search for a record, a linear search through
the file records is necessary. This requires
reading and searching half the file blocks on the
average, and is hence quite expensive.
 Record insertion is quite efficient.
 Reading the records in order of a particular field
requires sorting the file records.
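In rough Python terms, the expected block-read costs of a heap file look like this (b is the number of file blocks):

```python
def heap_search_cost(b, found=True):
    """Expected block reads for a linear search on a heap file of b
    blocks: b/2 on average for a successful search, b for a failure."""
    return b / 2 if found else b

print(heap_search_cost(1000))         # 500.0 block reads
print(heap_search_cost(1000, False))  # 1000 block reads
```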
Ordered Files
 Also called a sequential file.
 File records are kept sorted by the values of an
ordering field.
 Insertion is expensive: records must be inserted in
the correct order. It is common to keep a separate
unordered overflow (or transaction) file for new
records to improve insertion efficiency; this is
periodically merged with the main ordered file.
 A binary search can be used to search for a record
on its ordering field value. This requires reading
and searching about log2(b) of the b file blocks on
average, an improvement over linear search (see the
sketch after this list).
 Reading the records in order of the ordering field is
quite efficient.
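A minimal Python sketch of the binary search just described, modeling the ordered file as a list of blocks, each a sorted list of ordering-field values (an illustration, not a real block interface):

```python
def binary_search_blocks(blocks, key):
    """Return the index of the block containing `key`, reading only
    about log2(b) of the b blocks, or None if the key is absent."""
    lo, hi = 0, len(blocks) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        block = blocks[mid]        # one block read from disk
        if key < block[0]:
            hi = mid - 1           # key lies in an earlier block
        elif key > block[-1]:
            lo = mid + 1           # key lies in a later block
        else:
            return mid if key in block else None
    return None

file_blocks = [[2, 5, 8], [12, 15, 19], [23, 27, 31]]
print(binary_search_blocks(file_blocks, 15))  # 1
```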
Ordered Files (cont.)
Average Access Times
The following table shows the average access
time to access a specific record for a given
type of file
Hashed Files
 Hashing for disk files is called External Hashing
 The file blocks are divided into M equal-sized buckets,
numbered bucket 0, bucket 1, ..., bucket M-1. Typically, a
bucket corresponds to one (or a fixed number of) disk block.
 One of the file fields is designated to be the hash key of the
file.
 The record with hash key value K is stored in bucket i,
where i=h(K), and h is the hashing function.
 Search is very efficient on the hash key.
 Collisions occur when a new record hashes to a bucket that
is already full. An overflow file is kept for storing such
records. Overflow records that hash to each bucket can be
linked together.
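A minimal Python sketch of the bucket assignment, assuming the common choice h(K) = K mod M for the hash function and modeling each bucket as a list:

```python
M = 8  # number of buckets, fixed in static external hashing

def h(key: int) -> int:
    """A common hash function choice: the key modulo M."""
    return key % M

buckets = [[] for _ in range(M)]   # each list models one bucket (block)
for k in [13, 26, 42, 77, 101]:    # hypothetical hash key values
    buckets[h(k)].append(k)        # record with key k goes to bucket h(k)
print(buckets)                     # keys 26 and 42 collide in bucket 2
```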
Hashed Files (cont.)
There are numerous methods for collision resolution, including the
following:
 Open addressing: Proceeding from the occupied position
specified by the hash address, the program checks the
subsequent positions in order until an unused (empty) position is
found.
 Chaining: For this method, various overflow locations are kept,
usually by extending the array with a number of overflow
positions. In addition, a pointer field is added to each record
location. A collision is resolved by placing the new record in an
unused overflow location and setting the pointer of the occupied
hash address location to the address of that overflow location.
 Multiple hashing: The program applies a second hash function
if the first results in a collision. If another collision results, the
program uses open addressing or applies a third hash function
and then uses open addressing if necessary.
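A minimal Python sketch of the chaining method, with an in-memory array standing in for the M hash positions plus a fixed overflow area (the sketch omits any check for overflow exhaustion):

```python
M, OVERFLOW = 5, 5
table = [None] * (M + OVERFLOW)  # slots M..M+OVERFLOW-1 are overflow positions
next_free = M                    # next unused overflow slot

def insert(key):
    """Place `key` at its hash address, or chain it into overflow."""
    global next_free
    i = key % M
    if table[i] is None:
        table[i] = [key, None]          # [record, pointer] at home slot
        return
    while table[i][1] is not None:      # walk to the end of the chain
        i = table[i][1]
    table[i][1] = next_free             # link in a new overflow location
    table[next_free] = [key, None]
    next_free += 1

for k in [7, 12, 3, 17]:                # 7, 12, and 17 all hash to slot 2
    insert(k)
print(table)
```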
Hashed Files (cont.)
 To reduce overflow records, a hash file is
typically kept 70-80% full.
 The hash function h should distribute the
records uniformly among the buckets;
otherwise, search time will be increased because
many overflow records will exist.
 Main disadvantages of static external hashing:
- Fixed number of buckets M is a problem if the
number of records in the file grows or shrinks.
- Ordered access on the hash key is quite
inefficient (requires sorting the records).
Hashed Files - Overflow handling
Dynamic and Extendible Hashing Techniques
 Hashing techniques are adapted to allow the dynamic
growth and shrinking of the number of file records.
 These techniques include the following: dynamic
hashing, extendible hashing, and linear hashing.
 Both dynamic and extendible hashing use the binary
representation of the hash value h(K) in order to
access a directory. In dynamic hashing the directory
is a binary tree. In extendible hashing the directory is
an array of size 2^d, where d is called the global depth.
Dynamic And Extendible Hashing (cont.)
 The directories can be stored on disk, and they expand or
shrink dynamically. Directory entries point to the disk
blocks that contain the stored records.
 An insertion in a disk block that is full causes the block to
split into two blocks and the records are redistributed
among the two blocks. The directory is updated
appropriately.
 Dynamic and extendible hashing do not require an
overflow area.
 Linear hashing does require an overflow area but does
not use a directory. Blocks are split in linear order as the
file expands.
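The following compact Python sketch illustrates the extendible-hashing mechanics of doubling the directory and splitting a full bucket. It indexes the directory by the low-order d bits of the hash value, uses a bucket capacity of 2 (an arbitrary choice), and omits recursive re-splitting, so it is a sketch of the idea rather than a full implementation:

```python
class Bucket:
    def __init__(self, depth):
        self.local_depth = depth
        self.keys = []

CAP = 2                              # records per bucket (tiny, for illustration)
global_depth = 1
directory = [Bucket(1), Bucket(1)]   # indexed by low `global_depth` bits of h(K)

def insert(key):
    global global_depth, directory
    b = directory[key & ((1 << global_depth) - 1)]
    if len(b.keys) < CAP:
        b.keys.append(key)
        return
    if b.local_depth == global_depth:   # no spare bit: double the directory
        directory = directory + directory
        global_depth += 1
    b.local_depth += 1                  # split the full bucket
    new_b = Bucket(b.local_depth)
    old_keys = b.keys + [key]
    b.keys = []
    for i, ptr in enumerate(directory): # re-aim half of b's directory entries
        if ptr is b and (i >> (b.local_depth - 1)) & 1:
            directory[i] = new_b
    for k in old_keys:                  # redistribute over the two buckets
        directory[k & ((1 << global_depth) - 1)].keys.append(k)

for k in [1, 3, 5, 7, 2, 4]:
    insert(k)
print(global_depth, [(b.local_depth, b.keys) for b in directory])
```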
Extendible Hashing
Parallelizing Disk Access Using RAID Technology
 Secondary storage technology must take steps to keep up
in performance and reliability with processor technology.
 A major advance in secondary storage technology is
represented by the development of RAID, which
originally stood for Redundant Arrays of Inexpensive
Disks.
 The main goal of RAID is to even out the widely different
rates of performance improvement of disks against those
in memory and microprocessors.
RAID Technology (cont.)
 A natural solution is a large array of small
independent disks acting as a single
higher-performance logical disk. A concept
called data striping is used, which utilizes
parallelism to improve disk performance.
 Data striping distributes data transparently
over multiple disks to make them appear
as a single large, fast disk.
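As an illustration of the address arithmetic, here is a minimal sketch of round-robin block-level striping (RAID 0 style):

```python
def stripe_location(logical_block, num_disks):
    """Map a logical block number to (disk number, block within disk)
    under simple round-robin block-level striping."""
    return logical_block % num_disks, logical_block // num_disks

# Logical blocks 0..7 spread across a hypothetical 4-disk array.
for lb in range(8):
    print(lb, stripe_location(lb, num_disks=4))
```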
Reliability in RAID
 For an array of n disks, the likelihood of
failure is n times that of a single disk. Hence, if
the MTBF (Mean Time Between Failures) of a disk drive
is assumed to be 200,000 hours, or about 22.8 years
(for the Cheetah NS it is 1.4 million hours), the
MTBF of an array of 100 such disks is only 2,000
hours, or about 83 days.
 Keeping a single copy of data in such an
array of disks will cause a significant loss of
reliability.
Reliability in RAID
 One technique for introducing redundancy is called mirroring or
shadowing.
 Data is written redundantly to two identical physical disks that are
treated as one logical disk.
 When data is read, it can be retrieved from the disk with shorter
queuing, seek, and rotational delays. If a disk fails, the other disk is
used until the first is repaired.
 Suppose the mean time to repair is 24 hours; then the mean time to
data loss of a mirrored disk system using 100 disks with MTBF of
200,000 hours each is (200,000)^2/(2 * 24) = 8.33 * 10^8 hours,
which is 95,028 years.
 Disk mirroring also doubles the rate at which read requests are
handled, since a read can go to either disk.
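The calculation above, rendered as a small Python sketch:

```python
def mirrored_mttdl_hours(mtbf, mttr):
    """Mean time to data loss for mirrored disks, per the formula in
    the text: MTBF^2 / (2 * MTTR)."""
    return mtbf**2 / (2 * mttr)

hours = mirrored_mttdl_hours(mtbf=200_000, mttr=24)
print(hours)           # 8.33e8 hours
print(hours / 8760)    # roughly 95,000 years (8,760 hours per year)
```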
Review Question (Case Study on Reliability Analysis in RAID)
Performance Improvement
 The disk arrays employ the technique of data
striping to achieve higher transfer rates.
 Note that data can be read or written only
one block at a time, so a typical transfer
contains 512 to 8192 bytes.
 Disk striping may be applied at a finer
granularity by breaking up a byte of data into
bits and spreading the bits to different disks.
RAID Technology (cont.)
Different RAID organizations were defined based on different
combinations of two factors: the granularity of data interleaving
(striping) and the pattern used to compute redundant information.
 RAID level 0 has no redundant data and hence has the best write performance.
 RAID level 1 uses mirrored disks.
 RAID level 2 uses memory-style redundancy based on Hamming codes, which
contain parity bits for distinct overlapping subsets of components. Level 2
includes both error detection and correction.
 RAID level 3 uses a single parity disk, relying on the disk controller to figure out
which disk has failed.
 RAID levels 4 and 5 use block-level data striping, with level 5 distributing data
and parity information across all disks.
 RAID level 6 applies the so-called P + Q redundancy scheme using Reed-
Solomon codes to protect against up to two disk failures by using just two
redundant disks.
Use of RAID Technology (cont.)
Different RAID organizations are used in different
situations:
 RAID level 1 (mirrored disks) is the easiest for rebuilding a disk from
other disks.
 It is used for critical applications such as logs.
 RAID level 2 uses memory-style redundancy based on Hamming codes,
which contain parity bits for distinct overlapping subsets of components.
Level 2 includes both error detection and correction.
 RAID level 3 (a single parity disk relying on the disk controller to figure
out which disk has failed) and level 5 (block-level data striping) are
preferred for large-volume storage, with level 3 giving higher transfer
rates.
 The most popular uses of RAID technology currently are: level 0 (with
striping), level 1 (with mirroring), and level 5 with an extra drive for
parity.
 Design decisions for RAID include the level of RAID, the number of disks,
the choice of parity schemes, and the grouping of disks for block-level striping.
Use of RAID Technology (cont.)
Trends in Disk Technology
Storage Area Networks
 The demand for higher storage has risen considerably
in recent times.
 Organizations need to move from a static,
fixed, data-center-oriented operation to a more flexible
and dynamic infrastructure for information processing.
 Thus they are moving to a concept of Storage Area
Networks (SANs). In a SAN, online storage peripherals
are configured as nodes on a high-speed network and
can be attached and detached from servers in a very
flexible manner.
 This allows storage systems to be placed at longer
distances from the servers and provide different
performance and connectivity options.
Storage Area Networks (contd.)
Advantages of SANs are:
 Flexible many-to-many connectivity among servers
and storage devices using fiber channel hubs and
switches.
 Up to 10 km separation between a server and a
storage system using appropriate fiber optic cables.
 Better isolation capabilities allowing non disruptive
addition of new peripherals and servers.

 SANs face the problem of combining storage
options from multiple vendors and dealing with
evolving standards of storage management
software and hardware.
