Database Design and Applications
A database management system stores data in such a way that it becomes easier
to retrieve, manipulate, and produce information.
Characteristics
Traditionally, data was organized in file formats. The DBMS was a new concept
then, and all the research was done to make it overcome the deficiencies of the
traditional style of data management. A modern DBMS has the following
characteristics −
Users
A typical DBMS has users with different rights and permissions who use it for
different purposes. Some users retrieve data and some back it up. The users
of a DBMS can be broadly categorized as follows −
DBMS - Architecture
The design of a DBMS depends on its architecture, which can be centralized,
decentralized, or hierarchical. The architecture of a DBMS can be seen as
either single-tier or multi-tier. An n-tier architecture divides the whole
system into n related but independent modules, which can be independently
modified or replaced.
In 1-tier architecture, the DBMS is the only entity; the user works directly
on the DBMS and uses it. Any changes done here are applied directly to the
DBMS itself. It does not provide handy tools for end-users. Database
designers and programmers normally prefer single-tier architecture.
3-tier Architecture
A 3-tier architecture separates its tiers from each other based on the
complexity of the users and how they use the data present in the database.
It is the most widely used architecture to design a DBMS.
Database (Data) Tier − At this tier, the database resides along with its
query processing languages. We also have the relations that define the
data and their constraints at this level.
Application (Middle) Tier − At this tier reside the application server and the
programs that access the database. For a user, this application tier
presents an abstracted view of the database. End-users are unaware of
any existence of the database beyond the application. At the other end,
the database tier is not aware of any other user beyond the application
tier. Hence, the application layer sits in the middle and acts as a
mediator between the end-user and the database.
User (Presentation) Tier − End-users operate on this tier and they know
nothing about any existence of the database beyond this layer. At this
layer, multiple views of the database can be provided by the application.
All views are generated by applications that reside in the application tier.
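To make the separation concrete, here is a minimal Python sketch − an illustration only, where SQLite stands in for the data tier and the function name fetch_student is invented for the example −
import sqlite3

# Data tier: the database itself, with its schema and relations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO student VALUES (1, 'Alex')")

# Application tier: mediates between end-users and the database.
def fetch_student(student_id):
    # Presents an abstracted view; callers never see tables or SQL.
    row = conn.execute("SELECT name FROM student WHERE id = ?",
                       (student_id,)).fetchone()
    return {"id": student_id, "name": row[0]} if row else None

# Presentation tier: the end-user view, unaware of the database beyond this.
print(fetch_student(1))   # {'id': 1, 'name': 'Alex'}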
Entity-Relationship Model
Entity-Relationship (ER) Model is based on the notion of real-world entities
and the relationships among them. While formulating a real-world scenario into
a database model, the ER Model creates entity sets, relationship sets, general
attributes, and constraints.
ER Model is based on −
Relational Model
The most popular data model in DBMS is the Relational Model. It is a more
scientific model than the others. This model is based on first-order predicate
logic and defines a table as an n-ary relation.
The main highlights of this model are −
A database schema defines its entities and the relationship among them. It
contains a descriptive detail of the database, which can be depicted by
means of schema diagrams. It’s the database designers who design the
schema to help programmers understand the database and make it useful.
A database schema can be divided broadly into two categories −
Database Instance
It is important that we distinguish these two terms individually. A database
schema is the skeleton of the database; it is designed before the database
exists at all, and once the database is operational, it is very difficult to
make any changes to it. A database schema does not contain any data or
information. A database instance, in contrast, is the state of an operational
database with data at a given moment in time − a snapshot that changes as data
is inserted, updated, and deleted.
Data Independence
A database system normally contains a lot of data in addition to users’ data.
For example, it stores data about data, known as metadata, to locate and
retrieve data easily. It is rather difficult to modify or update a set of
metadata once it is stored in the database. But as a DBMS expands, it needs
to change over time to satisfy the requirements of its users. If the entire
data were dependent on its storage structure, any such change would become a
tedious and highly complex job.
For example, in case we want to change or upgrade the storage system itself
− suppose we want to replace hard disks with SSDs − it should not have any
impact on the logical data or schemas.
Entity
An entity can be a real-world object, either animate or inanimate, that can
be easily identifiable. For example, in a school database, students, teachers,
classes, and courses offered can be considered as entities. All these entities
have some attributes or properties that give them their identity.
Attributes
Entities are represented by means of their properties, called attributes. All
attributes have values. For example, a student entity may have name, class,
and age as attributes.
Types of Attributes
Simple attribute − Simple attributes are atomic values, which cannot be
divided further. For example, a student's phone number is an atomic
value of 10 digits.
Composite attribute − Composite attributes are made of more than one
simple attribute. For example, a student's complete name may have
first_name and last_name.
Derived attribute − Derived attributes do not exist in the physical
database, but their values are derived from other attributes present in the
database. For example, average_salary in a department should not be saved
directly in the database; instead, it can be derived. As another example,
age can be derived from date_of_birth.
Single-value attribute − Single-value attributes contain a single value. For
example − Social_Security_Number.
Multi-value attribute − Multi-value attributes may contain more than one
value. For example, a person can have more than one phone number,
email_address, etc.
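These attribute types map naturally onto a record definition. A small illustrative Python sketch (the Student fields are invented for the example) −
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Student:
    roll_no: int                     # simple (atomic) attribute
    first_name: str                  # together with last_name, forms the
    last_name: str                   # composite attribute "name"
    date_of_birth: date              # single-value attribute
    phones: list = field(default_factory=list)   # multi-value attribute

    @property
    def age(self):
        # Derived attribute: not stored, computed from date_of_birth.
        today = date.today()
        return today.year - self.date_of_birth.year - (
            (today.month, today.day)
            < (self.date_of_birth.month, self.date_of_birth.day))

s = Student(1, "Maria", "Lopez", date(2005, 6, 15), ["555-0101", "555-0102"])
print(s.age, s.phones)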
Relationship
The association among entities is called a relationship. For example, an
employee works_at a department, a student enrolls in a course. Here,
Works_at and Enrolls are called relationships.
Relationship Set
A set of relationships of similar type is called a relationship set. Like entities,
a relationship too can have attributes. These attributes are called descriptive
attributes.
Degree of Relationship
The number of participating entities in a relationship defines the degree of
the relationship.
Binary = degree 2
Ternary = degree 3
n-ary = degree n
Mapping Cardinalities
Cardinality defines the number of entities in one entity set that can be
associated with the entities of another set via a relationship set.
One-to-one − One entity from entity set A can be associated with at most
one entity of entity set B, and vice versa.
One-to-many − One entity from entity set A can be associated with more
than one entity of entity set B; however, an entity from entity set B can
be associated with at most one entity from entity set A.
Many-to-one − More than one entity from entity set A can be associated
with at most one entity of entity set B; however, an entity from entity
set B can be associated with more than one entity from entity set A.
Many-to-many − One entity from A can be associated with more than one
entity from B, and vice versa.
ER Diagram Representation
Let us now learn how the ER Model is represented by means of an ER
diagram. Any object, for example, entities, attributes of an entity,
relationship sets, and attributes of relationship sets, can be represented with
the help of an ER diagram.
Entity
Entities are represented by means of rectangles. Rectangles are named with
the entity set they represent.
Attributes
Attributes are the properties of entities. Attributes are represented by means
of ellipses. Every ellipse represents one attribute and is directly connected to
its entity (rectangle).
If the attributes are composite, they are further divided in a tree-like
structure. Every node is then connected to its attribute. That is, composite
attributes are represented by ellipses that are connected to an ellipse.
Relationship
Relationships are represented by a diamond-shaped box. The name of the
relationship is written inside the diamond box. All the entities (rectangles)
participating in a relationship are connected to it by a line.
Binary Relationship and Cardinality
A relationship where two entities are participating is called a binary
relationship. Cardinality is the number of instances of an entity that can be
associated with the relationship.
Many-to-many − More than one instance of the entity on the left and more
than one instance of the entity on the right can be associated with the
relationship. This depicts a many-to-many relationship.
Participation Constraints
Total Participation − Each entity is involved in the relationship. Total
participation is represented by double lines.
Partial participation − Not all entities are involved in the relationship.
Partial participation is represented by single lines.
Generalization Aggregation
The ER Model has the power of expressing database entities in a conceptual
hierarchical manner. As the hierarchy goes up, it generalizes the view of
entities, and as we go deep in the hierarchy, it gives us the detail of every
entity included.
Generalization
As mentioned above, the process of generalizing entities, where the
generalized entity contains the properties of all the entities it generalizes,
is called generalization. In generalization, a number of entities are brought
together into one generalized entity based on their similar characteristics.
For example, pigeon, house sparrow, crow and dove can all be generalized as
Birds.
Specialization
Specialization is the opposite of generalization. In specialization, a group of
entities is divided into sub-groups based on their characteristics. Take a
group ‘Person’ for example. A person has name, date of birth, gender, etc.
These properties are common in all persons, human beings. But in a
company, persons can be identified as employee, employer, customer, or
vendor, based on what role they play in the company.
Inheritance
We use all the above features of ER-Model in order to create classes of
objects in object-oriented programming. The details of entities are generally
hidden from the user; this process is known as abstraction.
Codd's 12 Rules
Dr Edgar F. Codd, after his extensive research on the Relational Model of
database systems, came up with twelve rules of his own, which according to
him, a database must obey in order to be regarded as a true relational
database.
These rules can be applied to any database system that manages stored
data using only its relational capabilities. This is a foundation rule, which
acts as a base for all the other rules.
Concepts
Tables − In the relational data model, relations are saved in the format of
tables. This format stores the relation among entities. A table has rows and
columns, where rows represent records and columns represent the
attributes.
Tuple − A single row of a table, which contains a single record for that
relation is called a tuple.
Relation key − Each row has one or more attributes, known as relation key,
which can identify the row in the relation (table) uniquely.
Attribute domain − Every attribute has some pre-defined value scope, known
as attribute domain.
Constraints
Every relation has some conditions that must hold for it to be a valid relation.
These conditions are called Relational Integrity Constraints. There are three main
integrity constraints −
Key constraints
Domain constraints
Referential integrity constraints
Key Constraints
There must be at least one minimal subset of attributes in the relation, which
can identify a tuple uniquely. This minimal subset of attributes is
called key for that relation. If there are more than one such minimal subsets,
these are called candidate keys.
In a relation with a key attribute, no two tuples can have identical values
for the key attributes.
A key attribute cannot have NULL values.
Domain Constraints
Attributes have specific values in the real world. For example, age can
only be a positive integer. Similar constraints are imposed on the attributes
of a relation: every attribute is bound to have a specific range of values.
For example, age cannot be less than zero and telephone numbers cannot
contain a digit outside 0-9.
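Most SQL databases let you declare both kinds of constraints. A minimal sketch using Python's built-in sqlite3 module (table and column names are invented for the example) −
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        ssn   TEXT PRIMARY KEY,                       -- key constraint
        age   INTEGER CHECK (age >= 0),               -- domain constraint
        phone TEXT CHECK (phone NOT GLOB '*[^0-9]*')  -- digits only
    )""")
conn.execute("INSERT INTO person VALUES ('123456789', 25, '5550101')")
try:
    # Violates the domain constraint on age.
    conn.execute("INSERT INTO person VALUES ('987654321', -3, '5550102')")
except sqlite3.IntegrityError as e:
    print("rejected:", e)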
Relational Algebra
Relational database systems are expected to be equipped with a query
language that can assist its users to query the database instances. There are
two kinds of query languages − relational algebra and relational calculus.
Relational Algebra
Relational algebra is a procedural query language, which takes instances of
relations as input and yields instances of relations as output. It uses
operators to perform queries. An operator can be either unary or binary. They
accept relations as their input and yield relations as their output. Relational
algebra is performed recursively on a relation and intermediate results are
also considered relations.
The fundamental operations of relational algebra are as follows −
Select
Project
Union
Set difference
Cartesian product
Rename
Select Operation (σ)
It selects tuples from a relation that satisfy the given predicate.
Notation − σp(r), where p is the selection predicate.
For example −
σsubject = "database" and price = "450" or year > "2010"(Books)
Output − Selects tuples from Books where subject is 'database' and price is
450, or those books published after 2010.
Project Operation (∏)
It projects the named columns from a relation.
Notation − ∏A1, A2, ..., An(r), where A1, A2, ..., An are attribute names of
relation r.
For example −
∏subject, author(Books)
Output − Selects and projects the columns named subject and author from the
relation Books.
Union Operation (∪)
It performs binary union between two given relations.
Notation − r ∪ s = { t | t ∈ r or t ∈ s }
where r and s are either database relations or relation result sets (temporary
relations).
For example −
∏author(Books) ∪ ∏author(Articles)
Output − Projects the names of the authors who have written either a book or
an article or both.
Set Difference (−)
The result of the set difference operation is the set of tuples present in one
relation but not in the second.
Notation − r − s
For example −
∏author(Books) − ∏author(Articles)
Output − Provides the names of authors who have written books but not
articles.
Cartesian Product (Χ)
It combines information from two different relations into one.
Notation − r Χ s = { q t | q ∈ r and t ∈ s }
For example −
σauthor = 'tutorialspoint'(Books Χ Articles)
Output − Yields a relation which shows all the books and articles written by
tutorialspoint.
Rename Operation (ρ)
The results of relational algebra do not have a name; the rename operation
allows us to name the output relation.
Notation − ρ x (E), where the result of expression E is saved with the name x.
Additional operations include −
Set intersection
Assignment
Natural join
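These operators are easy to mimic on small in-memory relations. A toy Python sketch, treating a relation as a list of rows (dicts); the Books data is invented for the example −
def select(r, p):                     # sigma_p(r)
    return [t for t in r if p(t)]

def project(r, *attrs):               # pi_attrs(r), duplicates removed
    seen, out = set(), []
    for t in r:
        key = tuple(t[a] for a in attrs)
        if key not in seen:
            seen.add(key)
            out.append({a: t[a] for a in attrs})
    return out

def union(r, s):                      # r U s
    return r + [t for t in s if t not in r]

def difference(r, s):                 # r - s
    return [t for t in r if t not in s]

def cartesian(r, s):                  # r x s (assumes disjoint attribute names)
    return [{**t, **u} for t in r for u in s]

books = [{"title": "DB Basics", "subject": "database", "price": 400},
         {"title": "Networks",  "subject": "network",  "price": 300}]
print(select(books, lambda t: t["subject"] == "database"))
print(project(books, "subject"))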
Relational Calculus
In contrast to Relational Algebra, Relational Calculus is a non-procedural
query language, that is, it tells what to do but never explains how to do it.
Tuple Relational Calculus (TRC)
In TRC, the filtering is based on tuples of a relation.
Notation − {T | Condition}
For example −
{ T.name | Author(T) AND T.article = 'database' }
Output − Returns tuples with 'name' from Author who has written an article on
'database'.
TRC can be quantified. We can use Existential (∃) and Universal (∀)
quantifiers.
For example −
{ R | ∃T ∈ Authors (T.article = 'database' AND R.name = T.name) }
Output − The above query will yield the same result as the previous one.
Domain Relational Calculus (DRC)
In DRC, the filtering variables use the domains of attributes instead of
entire tuples.
Notation −
{ a1, a2, a3, ..., an | P(a1, a2, a3, ..., an) }
Where a1, a2, ..., an are attributes and P stands for formulae built by inner
attributes.
For example −
{ <article, page, subject> | ∈ TutorialsPoint ∧ subject = 'database' }
Output − Yields Article, Page, and Subject from the relation TutorialsPoint,
where subject is 'database'.
Just like TRC, DRC can also be written using existential and universal
quantifiers. DRC also involves relational operators.
Mapping Entity
An entity is a real-world object with some attributes.
Mapping Process
Create a table for each entity.
The entity's attributes become fields of the table, with their respective
data types.
Declare the primary key.
Mapping Relationship
A relationship is an association among entities.
Mapping Process
Create table for a relationship.
Add the primary keys of all participating Entities as fields of table with
their respective data types.
If relationship has any attribute, add each attribute as field of table.
Declare a primary key composing all the primary keys of participating
entities.
Declare all foreign key constraints.
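Applied to an Enrolls relationship between Student and Course, the steps above could translate into DDL like the following sketch (run here through Python's sqlite3; the grade attribute is invented for the example) −
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE student (stu_id INTEGER PRIMARY KEY, name  TEXT);
    CREATE TABLE course  (crs_id INTEGER PRIMARY KEY, title TEXT);

    -- Table for the Enrolls relationship: the primary keys of both
    -- participating entities, the relationship's own attribute, a
    -- composite primary key, and the foreign key constraints.
    CREATE TABLE enrolls (
        stu_id INTEGER,
        crs_id INTEGER,
        grade  TEXT,
        PRIMARY KEY (stu_id, crs_id),
        FOREIGN KEY (stu_id) REFERENCES student(stu_id),
        FOREIGN KEY (crs_id) REFERENCES course(crs_id)
    );
""")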
Mapping Weak Entity Sets
A weak entity set is one that does not have any primary key associated with it.
Mapping Process
Create table for weak entity set.
Add all its attributes to table as field.
Add the primary key of identifying entity set.
Declare all foreign key constraints.
Mapping Hierarchical Entities
ER specialization or generalization comes in the form of hierarchical entity
sets.
Mapping Process
Create tables for all higher-level entities.
Create tables for lower-level entities.
Add primary keys of higher-level entities in the table of lower-level
entities.
In lower-level tables, add all other attributes of lower-level entities.
Declare primary key of higher-level table and the primary key for lower-
level table.
Declare foreign key constraints.
SQL Overview
SQL is a programming language for Relational Databases. It is designed over
relational algebra and tuple relational calculus. SQL comes as a package with
all major distributions of RDBMS.
SQL comprises both data definition and data manipulation languages. Using
the data definition properties of SQL, one can design and modify database
schema, whereas the data manipulation properties allow SQL to store and
retrieve data from the database.
CREATE
Creates new databases, tables, and views in an RDBMS.
ALTER
Modifies the database schema. For example −
ALTER TABLE article ADD subject varchar;
This command adds an attribute named subject of string type to the relation
article.
These basic constructs allow database programmers and users to enter data
and information into the database and retrieve it efficiently using a number
of filter options.
SELECT/FROM/WHERE
SELECT − This is one of the fundamental query commands of SQL. It is
similar to the projection operation of relational algebra. It selects the
attributes based on the condition described by the WHERE clause.
FROM − This clause takes a relation name as an argument from which
attributes are to be selected/projected. If more than one relation name is
given, this clause corresponds to a Cartesian product.
WHERE − This clause defines predicate or conditions, which must match
in order to qualify the attributes to be projected.
For example −
SELECT author_name
FROM book_author
WHERE age > 50;
INSERT INTO/VALUES
This command is used for inserting values into the rows of a table (relation).
Syntax−
INSERT INTO table (column1 [, column2, column3 ... ]) VALUES (value1 [, value2, value3 ... ])
Or
INSERT INTO table VALUES (value1, [value2, value3 ... ])
For example −
INSERT INTO article (title, author) VALUES ('DBMS Overview', 'anonymous');
UPDATE/SET/WHERE
This command is used for updating or modifying the values of columns in a
table (relation).
Syntax −
UPDATE table_name SET column_name = value [, column_name = value ...] [WHERE condition]
For example −
UPDATE article SET author = 'webmaster' WHERE author = 'anonymous';
DELETE/FROM/WHERE
This command is used for removing one or more rows from a table (relation).
Syntax −
DELETE FROM table_name [WHERE condition];
For example −
DELETE FROM article WHERE author = 'unknown';
DBMS - Normalization
Functional Dependency
Functional dependency (FD) is a set of constraints between two sets of
attributes in a relation. A functional dependency says that if two tuples have
the same values for attributes A1, A2, ..., An, then those two tuples must
also have the same values for attributes B1, B2, ..., Bn; this is written
A1, A2, ..., An → B1, B2, ..., Bn.
Armstrong's Axioms
If F is a set of functional dependencies then the closure of F, denoted as F+,
is the set of all functional dependencies logically implied by F. Armstrong's
Axioms are a set of rules, that when applied repeatedly, generates a closure
of functional dependencies.
Reflexive rule − If α is a set of attributes and β is a subset of α, then
α → β holds.
Augmentation rule − If α → β holds and γ is a set of attributes, then
αγ → βγ also holds. That is, adding attributes to both sides of a
dependency does not change the basic dependency.
Transitivity rule − Just like the transitive rule in algebra, if α → β holds
and β → γ holds, then α → γ also holds. α → β is read as α functionally
determines β.
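Repeatedly applying these axioms amounts to computing the closure of an attribute set. A small Python sketch (the FDs here are illustrative) −
def closure(attrs, fds):
    # fds is a list of (lhs, rhs) pairs of attribute sets; apply the
    # axioms to a fixpoint to find every attribute implied by attrs.
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

fds = [({"Stu_ID"}, {"Zip"}), ({"Zip"}, {"City"})]
print(closure({"Stu_ID"}, fds))   # {'Stu_ID', 'Zip', 'City'}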
Normalization
If a database design is not perfect, it may contain anomalies, which are like a
bad dream for any database administrator. Managing a database with
anomalies is next to impossible.
Update anomalies − If data items are scattered and are not linked to each
other properly, then it could lead to strange situations. For example,
when we try to update one data item having its copies scattered over
several places, a few instances get updated properly while a few others
are left with old values. Such instances leave the database in an
inconsistent state.
Deletion anomalies − We try to delete a record, but parts of it are left
undeleted because, unknown to us, the data is also saved somewhere else.
Insert anomalies − We try to insert data in a record that does not exist
at all.
First Normal Form requires that each attribute of a relation must contain only
a single value from its pre-defined domain.
For Second Normal Form, we break the relation into two relations so that no
partial dependency remains.
We find that in the above Student_detail relation, Stu_ID is the key and the
only prime attribute. We find that City can be identified by Stu_ID as well as
by Zip itself. Zip is not a super-key and City is not a prime attribute.
Additionally, Stu_ID → Zip → City, so there exists a transitive dependency.
To bring this relation into Third Normal Form, we break it into two relations
as follows −
Student_detail (Stu_ID, Stu_Name, Zip)
ZipCodes (Zip, City)
Boyce-Codd Normal Form
Boyce-Codd Normal Form (BCNF) is an extension of Third Normal Form on
strict terms. BCNF states that for any non-trivial functional dependency
X → A, X must be a super-key. In the above decomposition, Stu_ID is the
super-key of the Student_detail relation and Zip is the super-key of the
ZipCodes relation, so
Stu_ID → Stu_ID, Zip
and
Zip → City
each have a super-key on the left-hand side; both relations are therefore in
BCNF.
DBMS - Joins
We understand the benefits of taking a Cartesian product of two relations,
which gives us all the possible tuples paired together. But it might not be
feasible in certain cases to take a Cartesian product where we encounter huge
relations with thousands of tuples and a considerably large number of
attributes. A join is essentially a Cartesian product followed by a selection
process.
Theta (θ) Join
Theta join combines tuples from different relations provided they satisfy the
theta condition.
Notation − R1 ⋈θ R2
R1 and R2 are relations having attributes (A1, A2, ..., An) and (B1, B2, ..., Bn)
such that the attributes don't have anything in common, that is, R1 ∩ R2 = Φ.
Student
SID Name Std
101 Alex 10
102 Maria 11
Subjects
Class Subject
10 Math
10 English
11 Music
11 Sports
Student_Detail = Student ⋈Student.Std = Subjects.Class Subjects
Student_detail
SID Name Std Class Subject
101 Alex 10 10 Math
101 Alex 10 10 English
102 Maria 11 11 Music
102 Maria 11 11 Sports
Equijoin
When a Theta join uses only the equality comparison operator, it is said to be
an equijoin. The above example corresponds to an equijoin.
Natural Join (⋈)
Natural join does not use any comparison operator. It acts on those matching
attributes where the values of the attributes in both relations are the same.
Courses
CID Course Dept
CS01 Database CS
ME01 Mechanics ME
EE01 Electronics EE
HoD
Dept Head
CS Alex
ME Maya
EE Mira
Courses ⋈ HoD
Dept CID Course Head
CS CS01 Database Alex
ME ME01 Mechanics Maya
EE EE01 Electronics Mira
Outer Joins
Theta Join, Equijoin, and Natural Join are called inner joins. An inner join
includes only those tuples with matching attributes and the rest are
discarded in the resulting relation. Therefore, we need to use outer joins to
include all the tuples from the participating relations in the resulting relation.
There are three kinds of outer joins − left outer join, right outer join, and full
outer join.
Left
A B
100 Database
101 Mechanics
102 Electronics
Right
A B
100 Alex
102 Maya
104 Mira
Left Outer Join − Courses ⟕ HoD
All the tuples from the left relation are included; tuples with no match on
the right are padded with NULL (shown as ---).
A B C D
100 Database 100 Alex
101 Mechanics --- ---
102 Electronics 102 Maya
Right Outer Join − Courses ⟖ HoD
All the tuples from the right relation are included; tuples with no match on
the left are padded with NULL.
A B C D
100 Database 100 Alex
102 Electronics 102 Maya
--- --- 104 Mira
Full Outer Join − Courses ⟗ HoD
All the tuples from both relations are included, regardless of matching.
A B C D
100 Database 100 Alex
101 Mechanics --- ---
102 Electronics 102 Maya
--- --- 104 Mira
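The padding behaviour is simple to express in code. A toy Python left outer join over the two relations above (column names A and B as in the tables; unmatched attributes become None in place of the --- markers) −
def left_outer_join(left, right, key="A"):
    out = []
    for l in left:
        matches = [r for r in right if r[key] == l[key]]
        if matches:
            for r in matches:
                out.append((l["A"], l["B"], r["A"], r["B"]))
        else:
            out.append((l["A"], l["B"], None, None))   # pad with NULLs
    return out

left  = [{"A": 100, "B": "Database"},
         {"A": 101, "B": "Mechanics"},
         {"A": 102, "B": "Electronics"}]
right = [{"A": 100, "B": "Alex"},
         {"A": 102, "B": "Maya"},
         {"A": 104, "B": "Mira"}]
for row in left_outer_join(left, right):
    print(row)
# (100, 'Database', 100, 'Alex')
# (101, 'Mechanics', None, None)
# (102, 'Electronics', 102, 'Maya')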
DBMS - Storage System
The memory with the fastest access is the costliest one. Larger storage
devices offer slower speeds and are less expensive; however, they can store
huge volumes of data as compared to CPU registers or cache memory.
Magnetic Disks
Hard disk drives are the most common secondary storage devices in present
computer systems. These are called magnetic disks because they use the
concept of magnetization to store information. Hard disks consist of metal
disks coated with magnetizable material. These disks are placed vertically on
a spindle. A read/write head moves in between the disks and is used to
magnetize or de-magnetize the spot under it. A magnetized spot can be
recognized as 0 (zero) or 1 (one).
RAID
RAID stands for Redundant Array of Independent Disks, which is a
technology to connect multiple secondary storage devices and use them as a
single storage media.
RAID 3 − RAID 3 stripes the data onto multiple disks. The parity bit
generated for each data word is stored on a different disk. This technique
helps it overcome single-disk failures.
RAID 4 − In this level, an entire block of data is written onto data disks
and then the parity is generated and stored on a different disk. Note
that level 3 uses byte-level striping, whereas level 4 uses block-level
striping. Both level 3 and level 4 require at least three disks to
implement RAID.
RAID 5 − RAID 5 writes whole data blocks onto different disks, but the
parity bits generated for data block stripe are distributed among all the
data disks rather than storing them on a different dedicated disk.
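The parity used in RAID 3, 4, and 5 is a bytewise XOR of the data blocks, which is what makes single-disk recovery possible. A small Python sketch with invented two-byte blocks −
from functools import reduce

def xor_blocks(blocks):
    # Parity block: XOR of corresponding bytes across all blocks.
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

d0, d1, d2 = b"\x10\x20", b"\x0f\x0f", b"\xa0\x01"
parity = xor_blocks([d0, d1, d2])

# If the disk holding d1 fails, rebuild it from the survivors plus parity.
rebuilt = xor_blocks([d0, d2, parity])
assert rebuilt == d1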
File Organization
File Organization defines how file records are mapped onto disk blocks. We
have four types of File Organization to organize file records −
Heap File Organization
When a file is created using Heap File Organization, the Operating System
allocates memory area to that file without any further accounting details. File
records can be placed anywhere in that memory area. It is the responsibility
of the software to manage the records. Heap File does not support any
ordering, sequencing, or indexing on its own.
Operations on database files can be broadly classified into two categories −
Update Operations
Retrieval Operations
Open − A file can be opened in one of the two modes, read mode or write
mode. In read mode, the operating system does not allow anyone to alter
data. In other words, data is read only. Files opened in read mode can
be shared among several entities. Write mode allows data modification.
Files opened in write mode can be read but cannot be shared.
Locate − Every file has a file pointer, which tells the current position
where the data is to be read or written. This pointer can be adjusted
accordingly. Using find (seek) operation, it can be moved forward or
backward.
Read − By default, when files are opened in read mode, the file pointer
points to the beginning of the file. There are options where the user can
tell the operating system where to locate the file pointer at the time of
opening a file. The very next data to the file pointer is read.
Write − A user can select to open a file in write mode, which enables them
to edit its contents. This can be deletion, insertion, or modification. The
file pointer can be located at the time of opening or can be dynamically
changed if the operating system allows it.
Close − This is the most important operation from the operating system’s
point of view. When a request to close a file is generated, the operating
system
o removes all the locks (if in shared mode),
o saves the data (if altered) to the secondary storage media, and
o releases all the buffers and file handlers associated with the file.
The organization of data inside a file plays a major role here. The process of
locating the file pointer to a desired record inside a file varies based on
whether the records are arranged sequentially or clustered.
DBMS - Indexing
We know that data is stored in the form of records. Every record has a key
field, which helps it to be recognized uniquely.
Indexing is a data structure technique to efficiently retrieve records from the
database files based on some attributes on which the indexing has been
done. Indexing in database systems is similar to what we see in books.
Primary Index − Primary index is defined on an ordered data file. The data
file is ordered on a key field. The key field is generally the primary key of
the relation.
Secondary Index − Secondary index may be generated from a field which
is a candidate key and has a unique value in every record, or a non-key
with duplicate values.
Clustering Index − Clustering index is defined on an ordered data file. The
data file is ordered on a non-key field.
Dense Index
Sparse Index
Dense Index
In dense index, there is an index record for every search key value in the
database. This makes searching faster but requires more space to store
index records itself. Index records contain search key value and a pointer to
the actual record on the disk.
Sparse Index
In sparse index, index records are not created for every search key. An index
record here contains a search key and an actual pointer to the data on the
disk. To search a record, we first proceed by index record and reach at the
actual location of the data. If the data we are looking for is not where we
directly reach by following the index, then the system starts sequential
search until the desired data is found.
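A sparse index can be sketched as one entry per block plus a short sequential scan. An illustrative Python version using the standard bisect module (keys and blocks are invented for the example) −
import bisect

# One index entry per block: the first search-key stored in that block.
blocks = [[("Adams", "rec1"), ("Baker", "rec2")],
          [("Chen",  "rec3"), ("Davis", "rec4")],
          [("Evans", "rec5"), ("Ford",  "rec6")]]
index = [blk[0][0] for blk in blocks]   # ['Adams', 'Chen', 'Evans']

def lookup(key):
    i = bisect.bisect_right(index, key) - 1   # last index entry <= key
    if i < 0:
        return None
    for k, rec in blocks[i]:                  # sequential search in block
        if k == key:
            return rec
    return None

print(lookup("Davis"))   # rec4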
Multilevel Index
Index records comprise search-key values and data pointers. Multilevel index
is stored on the disk along with the actual database files. As the size of the
database grows, so does the size of the indices. There is an immense need to
keep the index records in the main memory so as to speed up the search
operations. If single-level index is used, then a large size index cannot be
kept in memory which leads to multiple disk accesses.
Multi-level Index helps in breaking down the index into several smaller
indices in order to make the outermost level so small that it can be saved in
a single disk block, which can easily be accommodated anywhere in the main
memory.
B+ Tree
A B+ tree is a balanced binary search tree that follows a multi-level index
format. The leaf nodes of a B+ tree denote actual data pointers. B+ tree
ensures that all leaf nodes remain at the same height, thus balanced.
Additionally, the leaf nodes are linked using a link list; therefore, a B+ tree
can support random access as well as sequential access.
Structure of B+ Tree
Every leaf node is at equal distance from the root node. A B+ tree is of the
order n where n is fixed for every B+ tree.
Internal nodes −
Internal (non-leaf) nodes contain at least ⌈n/2⌉ pointers, except the root
node.
At most, an internal node can contain n pointers.
Leaf nodes −
Leaf nodes contain at least ⌈n/2⌉ record pointers and ⌈n/2⌉ key values.
At most, a leaf node can contain n record pointers and n key values.
Every leaf node contains one block pointer P to point to next leaf node
and forms a linked list.
B+ Tree Insertion
B+ trees are filled from the bottom and each entry is done at a leaf node; the
leaf-split step is sketched in code after these steps.
If a leaf node overflows −
o Split node into two parts.
o Partition at i = ⌊(m+1)/2⌋.
o First i entries are stored in one node.
o Rest of the entries (i+1 onwards) are moved to a new node.
o ith key is duplicated at the parent of the leaf.
If a non-leaf node overflows −
o Split node into two parts.
o Partition the node at i = ⌈(m+1)/2⌉.
o Entries up to i are kept in one node.
o Rest of the entries are moved to a new node.
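The leaf-split step above can be sketched in a few lines of Python. This is an illustration only − it splits a single overflowing leaf and returns the separator key copied up to the parent (here the first key of the right half, a common convention) −
def split_leaf(keys, m):
    # An overflowing leaf holds m + 1 keys; partition at i = (m + 1) // 2.
    assert len(keys) == m + 1
    i = (m + 1) // 2
    left, right = keys[:i], keys[i:]     # first i keys stay, rest move
    separator = right[0]                 # duplicated at the parent
    return left, right, separator

print(split_leaf([5, 10, 15, 20, 25], m=4))   # ([5, 10], [15, 20, 25], 15)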
B+ Tree Deletion
B+ tree entries are deleted at the leaf nodes.
The target entry is searched and deleted.
o If it is an internal node, delete and replace with the entry from the
left position.
After deletion, underflow is tested,
o If underflow occurs, distribute the entries from the nodes left to
it.
If distribution is not possible from left, then
o Distribute from the nodes right to it.
If distribution is not possible from left or from right, then
o Merge the node with left and right to it.
DBMS - Hashing
For a huge database structure, it can be next to impossible to search all the
index values through all its levels and then reach the destination data block
to retrieve the desired data. Hashing is an effective technique to calculate
the direct location of a data record on the disk without using an index
structure.
Hashing uses hash functions with search keys as parameters to generate the
address of a data record.
Hash Organization
Bucket − A hash file stores data in bucket format. Bucket is considered
a unit of storage. A bucket typically stores one complete disk block,
which in turn can store one or more records.
Hash Function − A hash function, h, is a mapping function that maps all
the set of search-keys K to the address where actual records are placed.
It is a function from search keys to bucket addresses.
Static Hashing
In static hashing, when a search-key value is provided, the hash function
always computes the same address. For example, if a mod-4 hash function is
used, it generates only 4 values (0 through 3). The output address is always
the same for a given key, and the number of buckets provided remains
unchanged at all times.
Operation
Insertion − When a record is required to be entered using static hash, the
hash function h computes the bucket address for search key K, where
the record will be stored.
Bucket address = h(K)
Search − When a record needs to be retrieved, the same hash function
can be used to retrieve the address of the bucket where the data is
stored.
Delete − This is simply a search followed by a deletion operation.
Bucket Overflow
The condition of bucket-overflow is known as collision. This is a fatal state for
any static hash function. In this case, overflow chaining can be used.
Overflow Chaining − When buckets are full, a new bucket is allocated for
the same hash result and is linked after the previous one. This
mechanism is called Closed Hashing.
Linear Probing − When a hash function generates an address at which
data is already stored, the next free bucket is allocated to it. This
mechanism is called Open Hashing.
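A minimal Python sketch of static hashing with overflow chaining (the bucket capacity and the mod-based hash function are invented for the example) −
class Bucket:
    def __init__(self, capacity=2):
        self.records, self.capacity, self.overflow = [], capacity, None

class StaticHash:
    def __init__(self, n=4):
        self.n = n
        self.buckets = [Bucket() for _ in range(n)]

    def insert(self, key, record):
        b = self.buckets[key % self.n]          # bucket address = h(K)
        while len(b.records) >= b.capacity:     # bucket full: follow or
            if b.overflow is None:              # extend the overflow chain
                b.overflow = Bucket()
            b = b.overflow
        b.records.append((key, record))

    def search(self, key):
        b = self.buckets[key % self.n]          # same key, same address
        while b is not None:
            for k, rec in b.records:
                if k == key:
                    return rec
            b = b.overflow
        return None

h = StaticHash()
for k in (3, 7, 11, 15):        # all hash to bucket 3, forcing a chain
    h.insert(k, "rec%d" % k)
print(h.search(11))             # rec11, found via the overflow chain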
Dynamic Hashing
The problem with static hashing is that it does not expand or shrink
dynamically as the size of the database grows or shrinks. Dynamic hashing
provides a mechanism in which data buckets are added and removed
dynamically and on-demand. Dynamic hashing is also known as extended
hashing.
Operation
Querying − Look at the depth value of the hash index and use those bits
to compute the bucket address.
Update − Perform a query as above and update the data.
Deletion − Perform a query to locate the desired data and delete the
same.
Insertion − Compute the address of the bucket.
o If the bucket is already full −
Add more buckets.
Add additional bits to the hash value.
Re-compute the hash function.
o Else, add the data to the bucket.
o If all the buckets are full, perform the remedies of static hashing.
Hashing is not favorable when the data is organized in some ordering and the
queries require a range of data. When data is discrete and random, hash
performs the best.
Hashing involves more computation per access than a simple index lookup, but
all hash operations are done in constant time.
DBMS - Transaction
A transaction can be defined as a group of tasks. A single task is the
minimum processing unit, which cannot be divided further. For example,
transferring an amount of 500 from A's account to B's account involves the
following low-level tasks −
A’s Account
Open_Account(A)
Old_Balance = A.balance
New_Balance = Old_Balance - 500
A.balance = New_Balance
Close_Account(A)
B’s Account
Open_Account(B)
Old_Balance = B.balance
New_Balance = Old_Balance + 500
B.balance = New_Balance
Close_Account(B)
ACID Properties
A transaction is a very small unit of a program and it may contain several
low-level tasks. A transaction in a database system must
maintain Atomicity, Consistency, Isolation, and Durability − commonly
known as the ACID properties − in order to ensure accuracy, completeness,
and data integrity.
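Atomicity and durability are what commit and rollback provide in practice. A small sketch with Python's sqlite3 (account names and balances are invented for the example) −
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("A", 1000), ("B", 200)])
conn.commit()

def transfer(amount):
    try:
        conn.execute("UPDATE account SET balance = balance - ? "
                     "WHERE name = 'A'", (amount,))
        conn.execute("UPDATE account SET balance = balance + ? "
                     "WHERE name = 'B'", (amount,))
        conn.commit()      # both updates become durable together
    except Exception:
        conn.rollback()    # atomicity: neither update survives a failure

transfer(500)
print(list(conn.execute("SELECT * FROM account")))
# [('A', 500), ('B', 700)]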
Serializability
When multiple transactions are being executed by the operating system in a
multiprogramming environment, there are possibilities that instructions of
one transaction are interleaved with those of some other transaction.
Result Equivalence
If two schedules produce the same result after execution, they are said to be
result equivalent. They may yield the same result for some value and
different results for another set of values. That's why this equivalence is not
generally considered significant.
View Equivalence
Two schedules would be view equivalence if the transactions in both the
schedules perform similar actions in a similar manner.
For example −
If T reads the initial data in S1, then it also reads the initial data in S2.
If T reads the value written by J in S1, then it also reads the value written
by J in S2.
If T performs the final write on the data value in S1, then it also performs
the final write on the data value in S2.
Conflict Equivalence
Two operations would be conflicting if they have the following properties −
Both belong to separate transactions.
Both access the same data item.
At least one of them is a "write" operation.
Two schedules are said to be conflict equivalent if and only if both contain
the same set of transactions and the same order of conflicting pairs of
operations.
Note − View equivalent schedules are view serializable and conflict equivalent
schedules are conflict serializable. All conflict serializable schedules are view
serializable too.
States of Transactions
A transaction in a database can be in one of the following states −
Active − In this state, the transaction is being executed. This is the initial
state of every transaction.
Partially Committed − When a transaction executes its final operation, it
is said to be in a partially committed state.
Failed − A transaction is said to be in a failed state if any of the checks
made by the database recovery system fails. A failed transaction can no
longer proceed further.
Aborted − If any of the checks fails and the transaction has reached a
failed state, then the recovery manager rolls back all its write operations
on the database to bring the database back to its original state where it
was prior to the execution of the transaction. Transactions in this state
are called aborted. The database recovery module can select one of the
two operations after a transaction aborts −
o Re-start the transaction
o Kill the transaction
Committed − If a transaction executes all its operations successfully, it is
said to be committed. All its effects are now permanently established on
the database system.
Lock-based Protocols
Database systems equipped with lock-based protocols use a mechanism by
which any transaction cannot read or write data until it acquires an
appropriate lock on it. Locks are of two kinds −
Timestamp-based Protocols
The most commonly used concurrency protocol is the timestamp-based
protocol. This protocol uses either system time or a logical counter as a
timestamp.
Lock-based protocols manage the order between the conflicting pairs among
transactions at the time of execution, whereas timestamp-based protocols
start working as soon as a transaction is created.
Every transaction has a timestamp associated with it, and the ordering is
determined by the age of the transaction. A transaction created at 0002
clock time would be older than all other transactions that come after it. For
example, any transaction 'y' entering the system at 0004 is two seconds
younger and the priority would be given to the older one.
In addition, every data item is given the latest read and write-timestamp.
This lets the system know when the last ‘read and write’ operation was
performed on the data item.
DBMS - Deadlock
In a multi-process system, deadlock is an unwanted situation that arises in a
shared resource environment, where a process indefinitely waits for a
resource that is held by another process.
For example, assume a set of transactions {T0, T1, T2, ...,Tn}. T0 needs a
resource X to complete its task. Resource X is held by T1, and T1 is waiting
for a resource Y, which is held by T2. T2 is waiting for resource Z, which is
held by T0. Thus, all the processes wait for each other to release resources.
In this situation, none of the processes can finish their task. This situation is
known as a deadlock.
Deadlock Prevention
To prevent any deadlock situation in the system, the DBMS aggressively
inspects the operations that transactions are about to execute and analyzes
whether they can create a deadlock situation. If it finds that a deadlock
situation might occur, that transaction is never allowed to execute.
Wait-Die Scheme
In this scheme, if a transaction requests to lock a resource (data item)
that is already held with a conflicting lock by another transaction, one
of two possibilities may occur −
If TS(Ti) < TS(Tj) − that is, the requesting transaction Ti is older than
Tj − then Ti is allowed to wait until the data item is available.
If TS(Ti) > TS(Tj) − that is, Ti is younger than Tj − then Ti dies and is
restarted later with a random delay but with the same timestamp.
This scheme allows the older transaction to wait but kills the younger one.
Wound-Wait Scheme
In this scheme, if a transaction requests to lock a resource (data item)
that is already held with a conflicting lock by some other transaction, one
of two possibilities may occur −
If TS(Ti) < TS(Tj) − that is, the requesting transaction Ti is older than
Tj − then Ti forces Tj to be rolled back (Ti wounds Tj). Tj is restarted
later with a random delay but with the same timestamp.
If TS(Ti) > TS(Tj) − that is, Ti is younger than Tj − then Ti is forced to
wait until the resource is available.
In both cases, the transaction that enters the system at a later stage is
aborted.
Deadlock Avoidance
Aborting a transaction is not always a practical approach. Instead, deadlock
avoidance mechanisms can be used to detect any deadlock situation in
advance. Methods like "wait-for graph" are available but they are suitable for
only those systems where transactions are lightweight having fewer
instances of resource. In a bulky system, deadlock prevention techniques
may work well.
Wait-for Graph
This is a simple method available to track if any deadlock situation may arise.
For each transaction entering into the system, a node is created. When a
transaction Ti requests for a lock on an item, say X, which is held by some
other transaction Tj, a directed edge is created from Ti to Tj. If Tj releases
item X, the edge between them is dropped and Ti locks the data item.
The system maintains this wait-for graph for every transaction waiting for
some data items held by others. The system keeps checking if there's any
cycle in the graph.
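Deadlock detection then reduces to finding a cycle in this graph. A small illustrative Python depth-first search −
def has_cycle(wait_for):
    # wait_for maps each transaction to the transactions it waits on.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(t):
        color[t] = GRAY
        for u in wait_for.get(t, ()):
            c = color.get(u, WHITE)
            if c == GRAY:                # back edge: a cycle, i.e. deadlock
                return True
            if c == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color.get(t, WHITE) == WHITE and visit(t) for t in wait_for)

# T0 waits for T1, T1 for T2, T2 for T0: a deadlock.
print(has_cycle({"T0": ["T1"], "T1": ["T2"], "T2": ["T0"]}))   # True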
If a cycle is found, there are two options −
First, do not allow any request for an item that is already locked by
another transaction. This is not always feasible and may cause
starvation, where a transaction indefinitely waits for a data item and can
never acquire it.
The second option is to roll back one of the transactions. It is not always
feasible to roll back the younger transaction, as it may be more important
than the older one. With the help of some relative algorithm, a
transaction is chosen to be aborted. This transaction is known
as the victim and the process is known as victim selection.
Databases that have grown too bulky cannot be backed up frequently. In such
cases, we have techniques to restore a database just by looking at its
logs. So all we need to do here is take a backup of the logs at
frequent intervals. The database can be backed up once a week, and
the logs, being very small, can be backed up every day or as frequently as
possible.
Remote Backup
Remote backup provides a sense of security in case the primary location
where the database is located gets destroyed. Remote backup can be offline
or real-time or online. In case it is offline, it is maintained manually.
Online backup systems are more real-time and lifesavers for database
administrators and investors. An online backup system is a mechanism
where every bit of the real-time data is backed up simultaneously at two
distant places. One of them is directly connected to the system and the other
one is kept at a remote place as backup.
As soon as the primary database storage fails, the backup system senses the
failure and switches the user system to the remote storage. Sometimes this
is so instant that the users can’t even realize a failure.
Failure Classification
To see where the problem has occurred, we generalize a failure into various
categories, as follows −
Transaction failure
A transaction has to abort when it fails to execute or when it reaches a point
from where it can’t go any further. This is called transaction failure where
only a few transactions or processes are hurt.
Disk Failure
In the early days of technology evolution, it was a common problem that hard-
disk drives or storage drives failed frequently.
Disk failures include formation of bad sectors, unreachability to the disk, disk
head crash or any other failure, which destroys all or a part of disk storage.
Storage Structure
We have already described the storage system. In brief, the storage
structure can be divided into two categories −
Volatile storage − As the name suggests, it cannot survive system crashes;
for example, main memory and cache memory.
Non-volatile storage − These memories are made to survive system crashes,
such as hard disks, magnetic tapes, and flash memory. They offer huge
storage capacity but slower access.
Recovery and Atomicity
When a DBMS recovers from a crash −
It should check the states of all the transactions that were being
executed.
A transaction may be in the middle of some operation; the DBMS must
ensure the atomicity of the transaction in this case.
It should check whether the transaction can be completed now or it
needs to be rolled back.
No transaction should be allowed to leave the DBMS in an inconsistent
state.
There are two types of techniques, which can help a DBMS in recovering as
well as maintaining the atomicity of a transaction −
Maintaining the logs of each transaction, and writing them onto some
stable storage before actually modifying the database.
Maintaining shadow paging, where the changes are done on a volatile
memory, and later, the actual database is updated.
Log-based Recovery
Log is a sequence of records, which maintains the records of actions
performed by a transaction. It is important that the logs are written prior to
the actual modification and stored on a stable storage media, which is
failsafe.
Deferred database modification − All logs are written on to the stable storage
and the database is updated when a transaction commits.
Immediate database modification − Each log follows an actual database
modification. That is, the database is modified immediately after every
operation.
Checkpoint
Keeping and maintaining logs in real time and in a real environment may fill
up all the memory space available in the system. As time passes, the log file
may grow too big to be handled at all. Checkpoint is a mechanism where all
the previous logs are removed from the system and stored permanently on a
storage disk. A checkpoint declares a point before which the DBMS was in a
consistent state and all the transactions were committed.
Recovery
When a system with concurrent transactions crashes and recovers, it
behaves in the following manner −
The recovery system reads the logs backwards from the end to the last
checkpoint.
It maintains two lists, an undo-list and a redo-list.
If the recovery system sees a log with <Tn, Start> and <Tn, Commit>
or just <Tn, Commit>, it puts the transaction in the redo-list.
If the recovery system sees a log with <Tn, Start> but no commit or
abort log found, it puts the transaction in undo-list.
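That backward scan is straightforward to sketch. An illustrative Python version over a toy log (checkpoint handling omitted; log records are (transaction, action) pairs) −
def classify(log):
    # Read the log backwards; committed transactions go to the redo-list,
    # started-but-unfinished ones go to the undo-list.
    redo, undo, committed = [], [], set()
    for txn, action in reversed(log):
        if action == "Commit":
            committed.add(txn)
        elif action == "Start":
            (redo if txn in committed else undo).append(txn)
    return redo, undo

log = [("T1", "Start"), ("T2", "Start"), ("T1", "Commit"), ("T3", "Start")]
print(classify(log))   # (['T1'], ['T3', 'T2'])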