DBMS Tutorial
Audience
This tutorial will especially help computer science graduates in understanding the
basic-to-advanced concepts related to Database Management Systems.
Prerequisites
Before you start proceeding with this tutorial, it is recommended that you have a
good understanding of basic computer concepts such as primary memory,
secondary memory, and data structures and algorithms.
1. OVERVIEW
Characteristics
Traditionally, data was organized in file formats. The DBMS was then a new concept, and much research was done to make it overcome the deficiencies of the traditional style of data management. A modern DBMS has the following characteristics:
Multiple views: A DBMS offers multiple views for different users. A user in the Sales department will have a different view of the database than a person working in the Production department. This feature enables users to have a concentrated view of the database according to their requirements.
Users
A typical DBMS has users with different rights and permissions who use it for
different purposes. Some users retrieve data and some back it up. The users of
a DBMS can be broadly categorized as follows:
Designers: Designers are the group of people who actually work on the
designing part of the database. They keep a close watch on what data
should be kept and in what format. They identify and design the whole set
of entities, relations, constraints, and views.
End Users: End users are those who actually reap the benefits of having
a DBMS. End users can range from simple viewers who pay attention to
the logs or market rates to sophisticated users such as business analysts.
2. ARCHITECTURE
3-tier Architecture
A 3-tier architecture separates its tiers from each other based on the complexity
of the users and how they use the data present in the database. It is the most
widely used architecture to design a DBMS.
Database (Data) Tier: At this tier, the database resides along with its
query processing languages. We also have the relations that define the
data and their constraints at this level.
Application (Middle) Tier: At this tier reside the application server and
the programs that access the database. For a user, this application tier
presents an abstracted view of the database. End-users are unaware of
any existence of the database beyond the application. At the other end,
the database tier is not aware of any other user beyond the application
tier. Hence, the application layer sits in the middle and acts as a mediator
between the end-user and the database.
User (Presentation) Tier: End-users operate on this tier and they know
nothing about any existence of the database beyond this layer. At this
layer, multiple views of the database can be provided by the application.
All views are generated by applications that reside in the application tier.
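The separation described above can be illustrated with a minimal Python sketch. The names used here (InMemoryDatabase, ApplicationServer, run_presentation) are purely illustrative stand-ins for a real data tier, application server, and user interface, not part of any actual DBMS.

class InMemoryDatabase:
    """Data tier: owns the relations and answers queries."""
    def __init__(self):
        self.students = {101: {"name": "Alex", "dept": "Sales"}}

    def find_student(self, sid):
        return self.students.get(sid)


class ApplicationServer:
    """Application tier: mediates between end-users and the database."""
    def __init__(self, db):
        self._db = db  # end-users never touch this reference directly

    def student_view(self, sid):
        row = self._db.find_student(sid)
        # present an abstracted view, hiding how the data is stored
        return f"{row['name']} ({row['dept']})" if row else "not found"


def run_presentation(app):
    """Presentation tier: knows only about the application layer."""
    print(app.student_view(101))


run_presentation(ApplicationServer(InMemoryDatabase()))

Each tier talks only to the tier directly below it, which is exactly the mediation role the application layer plays in a real deployment.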
3. DATA MODELS
Data models define how the logical structure of a database is modeled. Data
Models are fundamental entities to introduce abstraction in a DBMS. Data
models define how data is connected to each other and how they are processed
and stored inside the system.
The very first data model could be the flat data model, where all the data used is kept in the same plane. Earlier data models were not very scientific; hence they were prone to introducing lots of duplication and update anomalies.
Entity-Relationship Model
Entity-Relationship (ER) Model is based on the notion of real-world entities and
relationships among them. While formulating real-world scenario into the
database model, the ER Model creates entity set, relationship set, general
attributes, and constraints.
ER Model is best used for the conceptual design of a database.
ER Model is based on:
[Image: ER Model]
Entity
An entity in an ER Model is a real-world entity having properties called
attributes. Every attribute is defined by its set of values called domain.
For example, in a school database, a student is considered as an entity.
Student has various attributes like name, age, class, etc.
Relationship
The logical association among entities is called a relationship. Based on the mapping cardinality, a relationship can be:
one to one
one to many
many to one
many to many
Relational Model
The most popular data model in DBMS is the Relational Model. It is a more scientific model than the others. This model is based on first-order predicate logic and defines a table as an n-ary relation.
4. DATA SCHEMAS
Database Schema
A database schema is the skeleton structure that represents the logical view of
the entire database. It defines how the data is organized and how the relations
among them are associated. It formulates all the constraints that are to be
applied on the data.
A database schema defines its entities and the relationships among them. It contains a descriptive detail of the database, which can be depicted by means of schema diagrams. It is the database designers who design the schema to help programmers understand the database and make it useful.
Database Instance
It is important that we distinguish these two terms individually. A database schema is the skeleton of the database. It is designed when the database doesn't exist at all. Once the database is operational, it is very difficult to make any changes to it. A database schema does not contain any data or information.
A database instance is a state of an operational database with data at any given time. It contains a snapshot of the database. Database instances tend to change with time. A DBMS ensures that every instance (state) is valid by diligently following all the validations, constraints, and conditions that the database designers have imposed.
5. DATA INDEPENDENCE
Data Independence
A database system normally contains a lot of data in addition to users' data. For example, it stores data about data, known as metadata, to locate and retrieve data easily. It is rather difficult to modify or update a set of metadata once it is stored in the database. But as a DBMS expands, it needs to change over time to satisfy the requirements of the users. If the entire data were dependent on one another, modifying or updating it would become a tedious and highly complex job.
6. ER MODEL - BASIC CONCEPTS
The ER model defines the conceptual view of a database. It works around real-world entities and the associations among them. At the view level, the ER model is considered a good option for designing databases.
Entity
An entity can be a real-world object, either animate or inanimate, that can be
easily identifiable. For example, in a school database, students, teachers,
classes, and courses offered can be considered as entities. All these entities
have some attributes or properties that give them their identity.
An entity set is a collection of similar types of entities. An entity set may contain entities with attributes sharing similar values. For example, a Students set may contain all the students of a school; likewise, a Teachers set may contain all the teachers of a school from all faculties. Entity sets need not be disjoint.
Attributes
Entities are represented by means of their properties called attributes. All
attributes have values. For example, a student entity may have name, class, and
age as attributes.
There exists a domain or range of values that can be assigned to attributes. For
example, a student's name cannot be a numeric value. It has to be alphabetic. A
student's age cannot be negative, etc.
Types of Attributes
Derived attribute: Derived attributes are attributes that do not exist in the physical database, but their values are derived from other attributes present in the database. For example, average_salary in a department should not be saved directly in the database; instead, it can be derived. For another example, age can be derived from date_of_birth.
Primary Key: A primary key is one of the candidate keys chosen by the
database designer to uniquely identify the entity set.
Relationship
The association among entities is called a relationship. For example, an
employee works_at a department, a student enrolls in a course. Here,
Works_at and Enrolls are called relationships.
Relationship Set
A set of relationships of similar type is called a relationship set. Like entities, a
relationship too can have attributes. These attributes are called descriptive
attributes.
Degree of Relationship
The number of participating entities in a relationship defines the degree of the
relationship.
Binary = degree 2
Ternary = degree 3
n-ary = degree n
Mapping Cardinalities
Cardinality defines the number of entities in one entity set, which can be
associated with the number of entities of other set via relationship set.
One-to-one: One entity from entity set A can be associated with at most
one entity of entity set B and vice versa.
One-to-many: One entity from entity set A can be associated with more than one entity of entity set B; however, an entity from entity set B can be associated with at most one entity of entity set A.
Many-to-one: More than one entity from entity set A can be associated with at most one entity of entity set B; however, an entity from entity set B can be associated with more than one entity from entity set A.
Many-to-many: One entity from A can be associated with more than one
entity from B and vice versa.
7. ER DIAGRAM REPRESENTATION
Let us now learn how the ER Model is represented by means of an ER diagram.
Any object, for example, entities, attributes of an entity, relationship sets, and
attributes of relationship sets, can be represented with the help of an ER
diagram.
Entity
Entities are represented by means of rectangles. Rectangles are named with the
entity set they represent.
Attributes
Attributes are the properties of entities. Attributes are represented by means of
ellipses. Every ellipse represents one attribute and is directly connected to its
entity (rectangle).
Relationship
Relationships are represented by diamond-shaped box. Name of the relationship
is written inside the diamond-box. All the entities (rectangles) participating in a
relationship are connected to it by a line.
[Image: One-to-one]
[Image: One-to-many]
[Image: Many-to-one]
Many-to-many: The following image reflects that more than one instance
of an entity on the left and more than one instance of an entity on the
right can be associated with the relationship. It depicts many-to-many
relationship.
[Image: Many-to-many]
Participation Constraints
8. GENERALIZATION & SPECIALIZATION
Generalization
The process of generalizing entities, where the generalized entity contains the properties of all the entities it is created from, is called generalization. In generalization, a number of entities are brought together into one generalized entity based on their similar characteristics. For example, pigeon, house sparrow, crow, and dove can all be generalized as Birds.
[Image: Generalization]
Specialization
Specialization is the opposite of generalization. In specialization, a group of
entities is divided into sub-groups based on their characteristics. Take a group
Person for example. A person has name, date of birth, gender, etc. These
properties are common to all persons. But in a company, persons can be identified as employee, employer, customer, or vendor, based on what role they play in the company.
[Image: Specialization]
Similarly, in a school database, persons can be specialized as teacher, student, or staff, based on what role they play in the school as entities.
Inheritance
We use all the above features of the ER model in order to create classes of objects in object-oriented programming. The details of entities are generally hidden from the user; this process is known as abstraction.
Inheritance is an important feature of Generalization and Specialization. It allows
lower-level entities to inherit the attributes of higher-level entities.
[Image: Inheritance]
For example, the attributes of a Person class such as name, age, and gender can
be inherited by lower-level entities such as Student or Teacher.
9. CODD'S 12 RULES
10. RELATIONAL DATA MODEL
The relational data model is the primary data model, and it is used widely around the world for data storage and processing. This model is simple and has all the properties and capabilities required to process data with storage efficiency.
Concepts
Tables: In relational data model, relations are saved in the format of Tables.
This format stores the relation among entities. A table has rows and columns,
where rows represent records and columns represent the attributes.
Tuple: A single row of a table, which contains a single record for that relation is
called a tuple.
Relation instance: A finite set of tuples in the relational database system
represents relation instance. Relation instances do not have duplicate tuples.
Relation schema: A relation schema describes the relation name (table name),
attributes, and their names.
Relation key: Each row has one or more attributes, known as relation key,
which can identify the row in the relation (table) uniquely.
Attribute domain: Every attribute has some predefined value scope, known as
attribute domain.
Constraints
Every relation has some conditions that must hold for it to be a valid relation.
These conditions are called Relational Integrity Constraints. There are three main integrity constraints:
Key constraints
Domain constraints
Referential integrity constraints
Key Constraints
There must be at least one minimal subset of attributes in the relation which can identify a tuple uniquely. This minimal subset of attributes is called the key for that relation. If there is more than one such minimal subset, these are called candidate keys.
Key constraints force that, in a relation with a key attribute, no two tuples can have identical values for the key attributes.
Domain Constraints
Attributes have specific values in real-world scenarios. For example, age can only be a positive integer. The same kinds of constraints are applied to the attributes of a relation. Every attribute is bound to have a specific range of values. For example, age cannot be less than zero and telephone numbers cannot contain a digit outside 0-9.
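Both kinds of constraints can be declared when a table is created. The following sketch uses Python's built-in sqlite3 module; the student table and its columns are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
# PRIMARY KEY enforces the key constraint: no two rows may share a stu_id.
# CHECK enforces a domain constraint: age must not be negative.
conn.execute("""
    CREATE TABLE student (
        stu_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        age     INTEGER CHECK (age >= 0)
    )
""")
conn.execute("INSERT INTO student VALUES (1, 'Alex', 17)")

try:
    conn.execute("INSERT INTO student VALUES (1, 'Maria', 16)")   # duplicate key
except sqlite3.IntegrityError as e:
    print("key constraint violated:", e)

try:
    conn.execute("INSERT INTO student VALUES (2, 'Maria', -3)")   # value outside domain
except sqlite3.IntegrityError as e:
    print("domain constraint violated:", e)

The database itself rejects both offending rows, so the relation can never leave a valid state.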
11. RELATIONAL ALGEBRA
Relational Algebra
Relational algebra is a procedural query language, which takes instances of
relations as input and yields instances of relations as output. It uses operators to
perform queries. An operator can be either unary or binary. They accept
relations as their input and yield relations as their output. Relational algebra is
performed recursively on a relation and intermediate results are also considered
relations.
The fundamental operations of relational algebra are as follows:
Select
Project
Union
Set difference
Cartesian product
Rename
Select Operation (σ)
It selects tuples that satisfy the given predicate from a relation.
Notation: σ p(r)
Where σ stands for the selection operator and r stands for the relation. p is a propositional logic formula which may use connectors like and, or, and not. These terms may use relational operators like =, ≠, ≥, <, >, ≤.
For example:
σ subject = "database" (Books)
Output: Selects tuples from Books where subject is 'database'.
σ subject = "database" and price = "450" (Books)
Output: Selects tuples from Books where subject is 'database' and 'price' is 450.
σ subject = "database" and price = "450" or year > "2010" (Books)
Output: Selects tuples from Books where subject is 'database' and 'price' is 450, or those books published after 2010.
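Because a relation instance is simply a set of tuples, the select operation can be sketched in Python as a filter over a list of dictionaries. The Books data below is invented sample data.

# Hypothetical Books relation, one dictionary per tuple.
books = [
    {"title": "DBMS Concepts", "subject": "database", "price": 450},
    {"title": "Networking",    "subject": "networks", "price": 300},
]

def select(predicate, relation):
    # sigma_predicate(relation): keep only the tuples satisfying the predicate
    return [t for t in relation if predicate(t)]

print(select(lambda t: t["subject"] == "database", books))
print(select(lambda t: t["subject"] == "database" and t["price"] == 450, books))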
Project Operation (∏)
It projects the column(s) named in the operation.
Notation: ∏ A1, A2, ..., An (r)
Where A1, A2, ..., An are attribute names of relation r.
Duplicate rows are automatically eliminated, as a relation is a set.
For example:
∏ subject, author (Books)
Selects and projects the columns named subject and author from the relation Books.
Union Operation (∪)
It performs binary union between two given relations and is defined as:
r ∪ s = { t | t ∈ r or t ∈ s }
Notation: r ∪ s
Where r and s are either database relations or relation result sets (temporary relations).
For a union operation to be valid, the following conditions must hold:
r and s must have the same number of attributes.
Attribute domains must be compatible.
Duplicate tuples are automatically eliminated.
∏ author (Books) ∪ ∏ author (Articles)
Output: Projects the names of the authors who have either written a book or an article or both.
Set Difference (−)
The result of the set difference operation is the tuples which are present in one relation but not in the second relation.
Notation: r − s
Finds all the tuples that are present in r but not in s.
∏ author (Books) − ∏ author (Articles)
Output: Provides the names of authors who have written books but not articles.
Cartesian Product (Χ)
Combines information of two different relations into one.
Notation: r Χ s
Where r and s are relations and their output is defined as:
r Χ s = { q t | q ∈ r and t ∈ s }
σ author = 'tutorialspoint' (Books Χ Articles)
Output: Yields a relation which shows all the books and articles written by tutorialspoint.
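Continuing the same dictionary-based sketch used for the select operation, the remaining fundamental operations can be written as small set manipulations. Books and Articles below are invented sample relations, and the Cartesian product is returned as plain pairs to keep the example short.

books    = [{"author": "anonymous",      "title": "DBMS"},
            {"author": "tutorialspoint", "title": "SQL Basics"}]
articles = [{"author": "tutorialspoint", "title": "Indexing"}]

def project(attrs, relation):
    # pi_attrs(relation): keep only the named columns, dropping duplicates
    rows = {tuple(t[a] for a in attrs) for t in relation}
    return [dict(zip(attrs, row)) for row in rows]

def union(r, s):
    # r and s must be union-compatible (same attributes)
    return [dict(t) for t in {tuple(sorted(x.items())) for x in r + s}]

def difference(r, s):
    # tuples present in r but not in s
    return [t for t in r if t not in s]

def cartesian_product(r, s):
    # every combination of a tuple from r with a tuple from s
    return [(q, t) for q in r for t in s]

print(union(project(["author"], books), project(["author"], articles)))
print(difference(project(["author"], books), project(["author"], articles)))
print(len(cartesian_product(books, articles)))   # 2 x 1 = 2 combined tuples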
Rename Operation (ρ)
The results of relational algebra are also relations, but without any name. The rename operation allows us to rename the output relation. The rename operation is denoted by the small Greek letter rho (ρ).
Notation: ρ x (E)
Where the result of expression E is saved with the name x.
Additional operations are:
Set intersection
Assignment
Natural join
Relational Calculus
In contrast to Relational Algebra, Relational Calculus is a non-procedural query
language, that is, it tells what to do but never explains how to do it.
Relational calculus exists in two forms:
Tuple Relational Calculus (TRC)
Domain Relational Calculus (DRC)
Tuple Relational Calculus (TRC)
In TRC, filtering is done over tuple variables. For example:
{ T.name | Author(T) AND T.article = 'database' }
Output: Returns tuples with 'name' from Author who has written an article on 'database'.
TRC can be quantified. We can use Existential (∃) and Universal (∀) quantifiers.
For example:
{ R | ∃T ∈ Authors (T.article = 'database' AND R.name = T.name) }
Output: The above query will yield the same result as the previous one.
Domain Relational Calculus (DRC)
In DRC, the filtering variables use the domains of attributes instead of entire tuple values. For example:
{ <article, page, subject> | ∈ TutorialsPoint ∧ subject = 'database' }
Output: Yields Article, Page, and Subject from the relation TutorialsPoint, where subject is 'database'.
Just like TRC, DRC can also be written using existential and universal quantifiers.
DRC also involves relational operators.
The expressive power of Tuple Relational Calculus and Domain Relational Calculus is equivalent to that of Relational Algebra.
12. ER MODEL TO RELATIONAL MODEL
The ER model, when conceptualized into diagrams, gives a good overview of entity-relationship, which is easier to understand. ER diagrams can be mapped to a relational schema, that is, it is possible to create a relational schema using an ER diagram. We cannot import all the ER constraints into the relational model, but an approximate schema can be generated.
There are several processes and algorithms available to convert ER diagrams into a relational schema. Some of them are automated and some of them are manual. We focus here on mapping diagram contents to relational basics.
ER diagrams mainly comprise entities, attributes, and relationships.
Mapping Entity
An entity is a real-world object with some attributes.
Entity's attributes should become fields of tables with their respective data
types.
Mapping Relationship
A relationship is an association among entities.
Mapping Process:
Add the primary keys of all participating Entities as fields of table with
their respective data types.
Mapping Process:
Declare the primary key of the higher-level table and the primary key for the lower-level table.
13. SQL OVERVIEW
CREATE
Creates new databases, tables, and views in an RDBMS.
For example:
Create database tutorialspoint;
Create table article;
Create view for_students;
DROP
Drops views, tables, and databases from an RDBMS.
For example:
Drop object_type object_name;
Drop database tutorialspoint;
Drop table article;
Drop view for_students;
ALTER
Modifies database schema.
Alter object_type object_name parameters;
For example:
Alter table article add subject varchar;
This command adds an attribute in the relation article with the name subject of
string type.
SELECT/FROM/WHERE
INSERT INTO/VALUES
UPDATE/SET/WHERE
DELETE FROM/WHERE
These basic constructs allow database programmers and users to enter data and information into the database and retrieve it efficiently using a number of filter options.
SELECT/FROM/WHERE
SELECT
This is one of the fundamental query command of SQL. It is similar to the
projection operation of relational algebra. It selects the attributes based on
the condition described by WHERE clause.
FROM
This clause takes a relation name as an argument from which attributes are to be selected/projected. In case more than one relation name is given, this clause corresponds to a Cartesian product.
WHERE
This clause defines predicate or conditions, which must match in order to
qualify the attributes to be projected.
For example:
Select author_name
From book_author
Where age > 50;
This command will yield the names of authors from the relation
book_author whose age is greater than 50.
INSERT INTO/VALUES
This command is used for inserting values into the rows of a table (relation).
Syntax:
INSERT INTO table (column1 [, column2, column3 ... ]) VALUES (value1 [,
value2, value3 ... ])
Or
INSERT INTO table VALUES (value1, [value2, ... ])
For example:
INSERT INTO tutorialspoint (Author, Subject) VALUES ("anonymous",
"computers");
UPDATE/SET/WHERE
This command is used for updating or modifying the values of columns in a table
(relation).
Syntax:
UPDATE table_name SET column_name = value [, column_name = value ...]
[WHERE condition]
For example:
UPDATE tutorialspoint SET Author="webmaster" WHERE Author="anonymous";
DELETE/FROM/WHERE
This command is used for removing one or more rows from a table (relation).
Syntax:
DELETE FROM table_name [WHERE condition];
For example:
DELETE FROM tutorialspoint
WHERE Author="unknown";
14. NORMALIZATION
Functional Dependency
Functional dependency (FD) is a set of constraints between two attributes in a relation. A functional dependency says that if two tuples have the same values for attributes A1, A2, ..., An, then those two tuples must have the same values for attributes B1, B2, ..., Bn.
Functional dependency is represented by an arrow sign (→), that is, X → Y, where X functionally determines Y. The left-hand side attributes determine the values of the attributes on the right-hand side.
Armstrong's Axioms
If F is a set of functional dependencies, then the closure of F, denoted as F+, is the set of all functional dependencies logically implied by F. Armstrong's Axioms are a set of rules that, when applied repeatedly, generate the closure of functional dependencies:
Reflexivity: if B is a subset of A, then A → B holds.
Augmentation: if A → B holds, then AC → BC holds for any set of attributes C.
Transitivity: if A → B and B → C hold, then A → C holds.
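A direct consequence of these axioms is that the closure of a set of attributes can be computed by repeatedly adding whatever the known FDs let us derive. The sketch below is a standard fixed-point computation; the sample FDs are invented and mirror the Stu_ID/Zip example used later in this chapter.

def attribute_closure(attrs, fds):
    # Compute X+ : all attributes functionally determined by attrs under fds.
    # fds is a list of (lhs, rhs) pairs, where lhs and rhs are sets of attributes.
    closure = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # if the closure already determines lhs, transitivity adds rhs
            if lhs <= closure and not rhs <= closure:
                closure |= rhs
                changed = True
    return closure

# Sample FDs: Stu_ID -> {Stu_Name, Zip} and Zip -> {City}
fds = [({"Stu_ID"}, {"Stu_Name", "Zip"}), ({"Zip"}, {"City"})]
print(attribute_closure({"Stu_ID"}, fds))   # {'Stu_ID', 'Stu_Name', 'Zip', 'City'}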
Normalization
If a database design is not perfect, it may contain anomalies, which are like a
bad dream for any database administrator. Managing a database with anomalies
is next to impossible.
Update anomalies: If data items are scattered and are not linked to
each other properly, then it could lead to strange situations. For example,
when we try to update one data item having its copies scattered over
several places, a few instances get updated properly while a few others
are left with old values. Such instances leave the database in an
inconsistent state.
Insert anomalies: These occur when we try to insert data into a record that does not exist at all.
Normalization is a method to remove all these anomalies and bring the database
to a consistent state.
If we follow Second Normal Form, then every non-prime attribute should be fully functionally dependent on the prime key attributes. That is, if X → A holds, then there should not be any proper subset Y of X for which Y → A also holds true.
We broke the relation in two as depicted in the above picture, so there exists no partial dependency.
For a relation to be in Third Normal Form, for every non-trivial functional dependency X → A, either:
X is a superkey, or
A is a prime attribute.
In the above image, Stu_ID is the super-key in the relation Student_Detail and Zip is the super-key in the relation ZipCodes. So,
Stu_ID → Stu_Name, Zip
and
Zip → City
which confirms that both the relations are in BCNF.
15. JOINS
Theta (θ) Join
Theta join combines tuples from different relations provided they satisfy the theta condition. The join condition is denoted by the symbol θ.
Notation: R1 ⋈θ R2
R1 and R2 are relations having attributes (A1, A2, ..., An) and (B1, B2, ..., Bn) such that the attributes don't have anything in common, that is, R1 ∩ R2 = Φ.
Theta join can use all kinds of comparison operators.
Student
SID    Name    Std
101    Alex    10
102    Maria   11
[Table: Student Relation]
Subjects
Class   Subject
10      Math
10      English
11      Music
11      Sports
[Table: Subjects Relation]
Student_Detail = STUDENT ⋈(Student.Std = Subject.Class) SUBJECT
Student_detail
SID    Name    Std   Class   Subject
101    Alex    10    10      Math
101    Alex    10    10      English
102    Maria   11    11      Music
102    Maria   11    11      Sports
[Table: Student_detail Relation]
Equijoin
When Theta join uses only equality comparison operator, it is said to be
equijoin. The above example corresponds to equijoin.
Natural Join (⋈)
Natural join does not use any comparison operator. It does not concatenate the
way a Cartesian product does. We can perform a Natural Join only if there is at
least one common attribute that exists between two relations. In addition, the
attributes must have the same name and domain.
Natural join acts on those matching attributes where the values of attributes in
both the relations are same.
Courses
CID    Course        Dept
CS01   Database      CS
ME01   Mechanics     ME
EE01   Electronics   EE
[Table: Relation Courses]

HoD
Dept   Head
CS     Alex
ME     Maya
EE     Mira
[Table: Relation HoD]
Courses ⋈ HoD
Dept   CID    Course        Head
CS     CS01   Database      Alex
ME     ME01   Mechanics     Maya
EE     EE01   Electronics   Mira
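A natural join can be sketched in Python by matching rows on every attribute the two relations share. The data below reproduces the Courses and HoD sample relations from the tables above.

courses = [{"CID": "CS01", "Course": "Database",    "Dept": "CS"},
           {"CID": "ME01", "Course": "Mechanics",   "Dept": "ME"},
           {"CID": "EE01", "Course": "Electronics", "Dept": "EE"}]
hod     = [{"Dept": "CS", "Head": "Alex"},
           {"Dept": "ME", "Head": "Maya"},
           {"Dept": "EE", "Head": "Mira"}]

def natural_join(r, s):
    # join on all attributes common to both relations, keeping each only once
    common = set(r[0]) & set(s[0])
    return [{**q, **t}
            for q in r
            for t in s
            if all(q[a] == t[a] for a in common)]

for row in natural_join(courses, hod):
    print(row)

Because Dept is the only common attribute and each value matches exactly once, the output has the same three rows as the Courses ⋈ HoD table shown above.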
Outer Joins
Theta Join, Equijoin, and Natural Join are called inner joins. An inner join
includes only those tuples with matching attributes and the rest are discarded in
the resulting relation. Therefore, we need to use outer joins to include all the
tuples from the participating relations in the resulting relation. There are three
kinds of outer joins: left outer join, right outer join, and full outer join.
Left Outer Join (R ⟕ S)
All the tuples from the Left relation, R, are included in the resulting relation. If
there are tuples in R without any matching tuple in the Right relation S, then the
S-attributes of the resulting relation are made NULL.
Left
A      Courses
100    Database
101    Mechanics
102    Electronics
[Table: Left Relation]

Right
A      HoD
100    Alex
102    Maya
104    Mira
[Table: Right Relation]
Left ⟕ Right
A      Courses       A      HoD
100    Database      100    Alex
101    Mechanics     ---    ---
102    Electronics   102    Maya
Right Outer Join (R ⟖ S)
All the tuples from the Right relation, S, are included in the resulting relation. If
there are tuples in S without any matching tuple in R, then the R-attributes of
resulting relation are made NULL.
Left ⟖ Right
A      Courses       A      HoD
100    Database      100    Alex
102    Electronics   102    Maya
---    ---           104    Mira
Full Outer Join (R ⟗ S)
All the tuples from both participating relations are included in the resulting
relation. If there are no matching tuples for both relations, their respective
unmatched attributes are made NULL.
Left ⟗ Right
A      Courses       A      HoD
100    Database      100    Alex
101    Mechanics     ---    ---
102    Electronics   102    Maya
---    ---           104    Mira
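The three outer joins differ only in which unmatched rows get padded with NULLs. The sketch below uses the Left and Right sample relations from above and Python's None in place of NULL; the padded column name is hard-coded purely to keep the example short.

left  = [{"A": 100, "Courses": "Database"},
         {"A": 101, "Courses": "Mechanics"},
         {"A": 102, "Courses": "Electronics"}]
right = [{"A": 100, "HoD": "Alex"},
         {"A": 102, "HoD": "Maya"},
         {"A": 104, "HoD": "Mira"}]

def left_outer_join(r, s, key):
    out = []
    for q in r:
        matches = [t for t in s if t[key] == q[key]]
        # pad the S-attributes with None (NULL) when no match exists
        out += [{**q, **t} for t in matches] or [{**q, "HoD": None}]
    return out

def full_outer_join(r, s, key):
    out = left_outer_join(r, s, key)
    matched = {q[key] for q in r}
    # also keep the right-side tuples that found no partner on the left
    out += [{"Courses": None, **t} for t in s if t[key] not in matched]
    return out

for row in full_outer_join(left, right, "A"):
    print(row)

Conceptually, a right outer join is the same operation with the roles of the two relations swapped.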
16. STORAGE SYSTEM
Databases are stored in file formats, which contain records. At the physical level, the actual data is stored in electromagnetic format on some device. These storage devices can be broadly categorized into three types:
Primary storage
Secondary storage
Tertiary storage
Memory Hierarchy
A computer system has a well-defined hierarchy of memory. A CPU has direct access to its main memory as well as its inbuilt registers. The access time of the main memory is obviously greater than the CPU's processing time. To minimize this speed mismatch, cache memory is introduced. Cache memory provides the fastest access time and it contains data that is most frequently accessed by the CPU.
The memory with the fastest access is the costliest one. Larger storage devices
offer slow speed and they are less expensive, however they can store huge
volumes of data as compared to CPU registers or cache memory.
Magnetic Disks
Hard disk drives are the most common secondary storage devices in present
computer systems. These are called magnetic disks because they use the
concept of magnetization to store information. Hard disks consist of metal disks
coated with magnetizable material. These disks are placed vertically on a
spindle. A read/write head moves in between the disks and is used to magnetize
or de-magnetize the spot under it. A magnetized spot can be recognized as 0
(zero) or 1 (one).
Hard disks are formatted in a well-defined order to store data efficiently. A hard
disk plate has many concentric circles on it, called tracks. Every track is further
divided into sectors. A sector on a hard disk typically stores 512 bytes of data.
RAID
RAID stands for Redundant Array of Independent Disks, which is a technology
to connect multiple secondary storage devices and use them as a single storage
media.
RAID consists of an array of disks in which multiple disks are connected together
to achieve different goals. RAID levels define the use of disk arrays.
[Image: RAID 0]
[Image: RAID 1]
[Image: RAID 2]
RAID 3: RAID 3 stripes the data onto multiple disks. The parity bit generated for the data word is stored on a different disk. This technique makes it possible to overcome single-disk failures.
[Image: RAID 3]
RAID 4: In this level, an entire block of data is written onto data disks
and then the parity is generated and stored on a different disk. Note that
level 3 uses byte-level striping, whereas level 4 uses block-level striping.
Both level 3 and level 4 require at least three disks to implement RAID.
[Image: RAID 4]
RAID 5: RAID 5 writes whole data blocks onto different disks, but the
parity bits generated for data block stripe are distributed among all the
data disks rather than storing them on a different dedicated disk.
[Image: RAID 5]
[Image: RAID 6]
17. FILE STRUCTURE
File Organization
File organization defines how file records are mapped onto disk blocks. We have four types of file organization to organize file records:
Heap file organization
Sequential file organization
Hash file organization
Clustered file organization
File Operations
Operations on database files can be broadly classified into two categories:
Update Operations
Retrieval Operations
Open: A file can be opened in one of the two modes, read mode or
write mode. In read mode, the operating system does not allow anyone
to alter data. In other words, data is read only. Files opened in read mode
can be shared among several entities. Write mode allows data
modification. Files opened in write mode can be read but cannot be
shared.
Locate: Every file has a file pointer, which tells the current position where
the data is to be read or written. This pointer can be adjusted accordingly.
Using find (seek) operation, it can be moved forward or backward.
Read: By default, when files are opened in read mode, the file pointer
points to the beginning of the file. There are options where the user can
tell the operating system where to locate the file pointer at the time of
opening a file. The very next data to the file pointer is read.
Write: A user can choose to open a file in write mode, which enables them to edit its contents. This can be deletion, insertion, or modification. The file pointer can be located at the time of opening or can be dynamically changed if the operating system allows it.
Close: This is the most important operation from the operating system's point of view. When a request to close a file is generated, the operating system
saves the data (if altered) to the secondary storage media, and
releases all the buffers and file handlers associated with the file.
The organization of data inside a file plays a major role here. The process of locating the file pointer at a desired record inside a file varies based on whether the records are arranged sequentially or clustered.
18. INDEXING
We know that data is stored in the form of records. Every record has a key field,
which helps it to be recognized uniquely.
Indexing is a data structure technique to efficiently retrieve records from the
database files based on some attributes on which the indexing has been done.
Indexing in database systems is similar to what we see in books.
Indexing is defined based on its indexing attributes. Indexing can be of the
following types:
Dense Index
Sparse Index
Dense Index
In a dense index, there is an index record for every search key value in the database. This makes searching faster but requires more space to store the index records themselves. Index records contain the search key value and a pointer to the actual record on the disk.
Sparse Index
In a sparse index, index records are not created for every search key. An index record here contains a search key and an actual pointer to the data on the disk. To search a record, we first proceed to the index record and reach the actual location of the data. If the data we are looking for is not where we directly reach by following the index, then the system starts a sequential search until the desired data is found.
Multilevel Index
Index records comprise search-key values and data pointers. Multilevel index is
stored on the disk along with the actual database files. As the size of the
database grows, so does the size of the indices. There is an immense need to
keep the index records in the main memory so as to speed up the search
operations. If single-level index is used, then a large size index cannot be kept
in memory which leads to multiple disk accesses.
Multi-level Index helps in breaking down the index into several smaller indices in
order to make the outermost level so small that it can be saved in a single disk
block, which can easily be accommodated anywhere in the main memory.
B+ Tree
A B+ tree is a balanced search tree that follows a multi-level index format. The leaf nodes of a B+ tree denote actual data pointers. A B+ tree ensures that all leaf nodes remain at the same height, thus balanced. Additionally, the leaf nodes are linked using a linked list; therefore, a B+ tree can support random access as well as sequential access.
Structure of B+ Tree
Every leaf node is at equal distance from the root node. A B+ tree is of the order
n where n is fixed for every B+ tree.
[Image: B+ tree]
Internal nodes:
Internal (non-leaf) nodes contain at least n/2 pointers, except the root
node.
Leaf nodes:
Leaf nodes contain at least n/2 record pointers and n/2 key values.
At most, a leaf node can contain n record pointers and n key values.
Every leaf node contains one block pointer P to point to next leaf node
and forms a linked list.
B+ Tree Insertion
B+ trees are filled from bottom and each entry is done at the leaf node.
Partition at i = (m+1)/2.
B+ Tree Deletion
If underflow occurs, distribute the entries from the nodes left to it.
If it is an internal node, delete and replace with the entry from the left
position.
19. HASHING
For a huge database structure, it can be next to impossible to search through all the index values at every level and then reach the destination data block to retrieve the desired data. Hashing is an effective technique to calculate the direct location of a data record on the disk without using an index structure. Hashing uses hash functions with search keys as parameters to generate the address of a data record.
Hash Organization
Bucket: A hash file stores data in bucket format. Bucket is considered a unit
of storage. A bucket typically stores one complete disk block, which in turn
can store one or more records.
Hash Function: A hash function, h, is a mapping function that maps all the
set of search-keys K to the address where actual records are placed. It is a
function from search keys to bucket addresses.
Static Hashing
In static hashing, when a search-key value is provided, the hash function always
computes the same address. For example, if mod-4 hash function is used, then
it shall generate only 5 values. The output address shall always be same for that
function. The number of buckets provided remains unchanged at all times.
Operation: Insertion, search, and deletion all apply the hash function to the search key to locate the bucket where the record resides.
Bucket Overflow
The condition of bucket-overflow is known as collision. This is a fatal state for
any static hash function. In this case, overflow chaining can be used.
Overflow Chaining: When buckets are full, a new bucket is allocated for
the same hash result and is linked after the previous one. This mechanism
is called Closed Hashing.
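Static hashing with overflow chaining can be sketched with a fixed array of buckets that link to overflow buckets once they fill up. The bucket size and the mod-4 hash function below are illustrative choices.

BUCKETS = 4        # in static hashing the number of buckets never changes
BUCKET_SIZE = 2    # records per bucket (one "disk block" in this sketch)

class Bucket:
    def __init__(self):
        self.records = []
        self.overflow = None       # linked overflow bucket (closed hashing)

buckets = [Bucket() for _ in range(BUCKETS)]

def insert(key, record):
    b = buckets[key % BUCKETS]             # hash function h(k) = k mod 4
    while len(b.records) >= BUCKET_SIZE:   # bucket overflow: follow/extend the chain
        if b.overflow is None:
            b.overflow = Bucket()
        b = b.overflow
    b.records.append((key, record))

def search(key):
    b = buckets[key % BUCKETS]
    while b is not None:                   # walk the overflow chain
        for k, rec in b.records:
            if k == key:
                return rec
        b = b.overflow
    return None

for k in (3, 7, 11, 15):                   # all hash to bucket 3, forcing overflow
    insert(k, f"record-{k}")
print(search(11))                          # found via the overflow chain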
Dynamic Hashing
The problem with static hashing is that it does not expand or shrink dynamically as the size of the database grows or shrinks. Dynamic hashing provides a mechanism in which data buckets are added and removed dynamically and on-demand. Dynamic hashing is also known as extended hashing.
In dynamic hashing, the hash function is made to produce a large number of values, and only a few are used initially.
Organization
The prefix of an entire hash value is taken as a hash index. Only a portion of the hash value is used for computing bucket addresses. Every hash index has a depth value to signify how many bits are used for computing the hash function. These bits can address 2^n buckets. When all these bits are consumed, that is, when all the buckets are full, then the depth value is increased linearly and twice the number of buckets are allocated.
Operation
Querying: Look at the depth value of the hash index and use those bits
to compute the bucket address.
Deletion: Perform a query to locate the desired data and delete the
same.
Insertion: Compute the address of the bucket. If there is space, add the data to the bucket; else, add more buckets. If all the buckets are full, perform the remedies of static hashing.
Hashing is not favorable when the data is organized in some ordering and the queries require a range of data. When data is discrete and random, hashing performs the best.
Hashing algorithms have higher complexity than indexing, but all hash operations are done in constant time.
20. TRANSACTION
A's Account
Open_Account(A)
Old_Balance = A.balance
New_Balance = Old_Balance - 500
A.balance = New_Balance
Close_Account(A)
B's Account
Open_Account(B)
Old_Balance = B.balance
New_Balance = Old_Balance + 500
B.balance = New_Balance
Close_Account(B)
ACID Properties
A transaction is a very small unit of a program and it may contain several low-level tasks. A transaction in a database system must maintain Atomicity, Consistency, Isolation, and Durability, commonly known as ACID properties, in order to ensure accuracy, completeness, and data integrity.
Durability: The database should be durable enough to hold all its latest
updates even if the system fails or restarts. If a transaction updates a
chunk of data in a database and commits, then the database will hold the
modified data. If a transaction commits but the system fails before the
data could be written on to the disk, then that data will be updated once
the system springs back into action.
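Atomicity and durability are precisely what a transaction adds over hand-written balance updates like the pseudo code above. A minimal sketch with Python's sqlite3 module, where the account table and the amounts are made up for illustration:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("A", 1000), ("B", 200)])
conn.commit()

try:
    with conn:  # one transaction: both updates commit together or not at all
        conn.execute("UPDATE account SET balance = balance - 500 WHERE name = 'A'")
        conn.execute("UPDATE account SET balance = balance + 500 WHERE name = 'B'")
except sqlite3.Error:
    pass        # on any failure the whole transfer is rolled back automatically

print(conn.execute("SELECT * FROM account ORDER BY name").fetchall())
# [('A', 500), ('B', 700)]

If the second UPDATE failed, the first one would be rolled back as well, so the database would never show money leaving A without arriving at B.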
Serializability
When multiple transactions are being executed by the operating system in a
multiprogramming environment, there are possibilities that instructions of one
transaction are interleaved with some other transaction.
Equivalence Schedules
An equivalence schedule can be of the following types:
Result Equivalence
If two schedules produce the same result after execution, they are said to be
result equivalent. They may yield the same result for some value and different
results for another set of values. That's why this equivalence is not generally
considered significant.
View Equivalence
Two schedules would be view equivalent if the transactions in both the schedules perform similar actions in a similar manner.
For example:
If T reads the initial data in S1, then it also reads the initial data in S2.
If T reads the value written by J in S1, then it also reads the value written by J in S2.
If T performs the final write on the data value in S1, then it also performs the final write on the data value in S2.
Conflict Equivalence
Two operations would be conflicting if they have the following properties:
Both belong to separate transactions.
Both access the same data item.
At least one of them is a write operation.
Two schedules having multiple transactions with conflicting operations are said to be conflict equivalent if and only if:
Both the schedules contain the same set of transactions.
The order of conflicting pairs of operations is maintained in both the schedules.
Note: View equivalent schedules are view serializable and conflict equivalent
schedules are conflict serializable. All conflict serializable schedules are view
serializable too.
States of Transactions
A transaction in a database can be in one of the following states:
Active: In this state, the transaction is being executed. This is the initial
state of every transaction.
Aborted: If any of the checks fails and the transaction has reached a
failed state, then the recovery manager rolls back all its write operations
on the database to bring the database back to its original state where it
was prior to the execution of the transaction. Transactions in this state
are called aborted. The database recovery module can select one of the
two operations after a transaction aborts: re-start the transaction, or kill the transaction.
21. CONCURRENCY CONTROL
In a multiprogramming environment where multiple transactions can be executed simultaneously, it is important to control their concurrency. Concurrency control protocols can be broadly divided into two categories:
Lock-based protocols
Timestamp-based protocols
Lock-based Protocols
Database systems equipped with lock-based protocols use a mechanism by
which any transaction cannot read or write data until it acquires an appropriate
lock on it. Locks are of two kinds:
Binary Locks: A lock on a data item can be in two states; it is either locked or unlocked.
Shared/Exclusive: This type of locking mechanism differentiates the locks based on their uses. If a lock is acquired on a data item to perform a write operation, it is an exclusive lock. Allowing more than one transaction to write on the same data item would lead the database into an inconsistent state. Read locks are shared because no data value is being changed.
[Image: Pre-claiming]
Two-Phase Locking (2PL)
Two-phase locking has two phases: one is growing, where all the locks are being acquired by the transaction; and the second phase is shrinking, where the locks held by the transaction are being released.
To claim an exclusive (write) lock, a transaction must first acquire a shared (read) lock and then upgrade it to an exclusive lock.
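The growing/shrinking discipline can be sketched with a tiny lock manager that refuses new lock requests once a transaction has released anything. The class below is an illustrative toy, not a real DBMS API.

class TwoPhaseTransaction:
    # Toy 2PL: every lock acquisition must precede the first release.
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False      # becomes True after the first unlock

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: cannot acquire a lock in the shrinking phase")
        self.locks.add(item)        # growing phase

    def unlock(self, item):
        self.shrinking = True       # shrinking phase begins
        self.locks.discard(item)

t = TwoPhaseTransaction("T1")
t.lock("X"); t.lock("Y")            # growing phase
t.unlock("X")                       # shrinking phase starts here
try:
    t.lock("Z")                     # illegal under two-phase locking
except RuntimeError as e:
    print(e)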
Timestamp-based Protocols
The most commonly used concurrency protocol is the timestamp-based protocol. This protocol uses either system time or a logical counter as a timestamp.
Lock-based protocols manage the order between the conflicting pairs among
transactions at the time of execution, whereas timestamp-based protocols start
working as soon as a transaction is created.
Every transaction has a timestamp associated with it, and the ordering is
determined by the age of the transaction. A transaction created at 0002 clock
time would be older than all other transactions that come after it. For example,
any transaction 'y' entering the system at 0004 is two seconds younger and the
priority would be given to the older one.
In addition, every data item is given the latest read and write-timestamp. This
lets the system know when the last read and write operation was performed on
the data item.
The timestamp-ordering protocol works as follows:
If a transaction Ti issues a read(X) operation and TS(Ti) < W-timestamp(X), the operation is rejected; otherwise, it is executed.
If a transaction Ti issues a write(X) operation and TS(Ti) < R-timestamp(X) or TS(Ti) < W-timestamp(X), the operation is rejected; otherwise, it is executed.
22. DEADLOCK
Deadlock Prevention
To prevent any deadlock situation in the system, the DBMS aggressively inspects
all the operations, where transactions are about to execute. The DBMS inspects
the operations and analyzes if they can create a deadlock situation. If it finds
that a deadlock situation might occur, then that transaction is never allowed to
be executed.
There are deadlock prevention schemes that use timestamp ordering mechanism
of transactions in order to predetermine a deadlock situation.
Wait-Die Scheme
In this scheme, if a transaction requests to lock a resource (data item) which is already held with a conflicting lock by another transaction, then one of the two possibilities may occur:
If TS(Ti) < TS(Tj), that is, Ti requesting the conflicting lock is older than Tj, then Ti is allowed to wait until the data item is available.
If TS(Ti) > TS(Tj), that is, Ti is younger than Tj, then Ti dies. Ti is restarted later with a random delay but with the same timestamp.
This scheme allows the older transaction to wait but kills the younger one.
Wound-Wait Scheme
In this scheme, if a transaction requests to lock a resource (data item) which is already held with a conflicting lock by another transaction, one of the two possibilities may occur:
If TS(Ti) < TS(Tj), then Ti forces Tj to be rolled back, that is, Ti wounds Tj. Tj is restarted later with a random delay but with the same timestamp.
If TS(Ti) > TS(Tj), then Ti is forced to wait until the resource is available.
This scheme allows the younger transaction to wait; but when an older
transaction requests an item held by a younger one, the older transaction forces
the younger one to abort and release the item.
In both the cases, the transaction that enters the system at a later stage is
aborted.
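Both schemes reduce to a timestamp comparison between the requesting transaction Ti and the holding transaction Tj. A compact sketch of the two decision rules, where a smaller timestamp means an older transaction:

def wait_die(ts_i, ts_j):
    # Ti requests an item held by Tj: the older requester waits, the younger dies.
    if ts_i < ts_j:
        return "Ti waits"
    return "Ti dies (restarted later with the same timestamp)"

def wound_wait(ts_i, ts_j):
    # Ti requests an item held by Tj: the older requester wounds Tj, the younger waits.
    if ts_i < ts_j:
        return "Tj is wounded (rolled back)"
    return "Ti waits"

print(wait_die(2, 5), "|", wait_die(5, 2))       # Ti waits | Ti dies
print(wound_wait(2, 5), "|", wound_wait(5, 2))   # Tj is wounded | Ti waits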
Deadlock Avoidance
Aborting a transaction is not always a practical approach. Instead, deadlock
avoidance mechanisms can be used to detect any deadlock situation in advance.
Methods like "wait-for graph" are available but they are suitable for only those
systems where transactions are lightweight having fewer instances of resource.
In a bulky system, deadlock prevention techniques may work well.
Wait-for Graph
This is a simple method available to track if any deadlock situation may arise.
For each transaction entering into the system, a node is created. When a
transaction Ti requests for a lock on an item, say X, which is held by some other
transaction Tj, a directed edge is created from Ti to Tj. If Tj releases item X, the
edge between them is dropped and Ti locks the data item.
The system maintains this wait-for graph for every transaction waiting for some
data items held by others. The system keeps checking if there's any cycle in the
graph.
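Detecting a deadlock then amounts to finding a cycle in the wait-for graph. A small depth-first-search sketch; the edges below are an invented example.

def has_cycle(graph):
    # graph maps a transaction to the set of transactions it is waiting for
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True          # a back edge means a cycle, i.e. a deadlock
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

# T1 waits for T2, T2 waits for T3, T3 waits for T1: a deadlock
wait_for = {"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}
print(has_cycle(wait_for))   # True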
First, do not allow any request for an item, which is already locked by
another transaction. This is not always feasible and may cause starvation,
where a transaction indefinitely waits for a data item and can never
acquire it.
The second option is to roll back one of the transactions. It is not always feasible to roll back the younger transaction, as it may be more important than the older one. With the help of some relative algorithm, a transaction is chosen to be aborted. This transaction is known as the victim and the process is known as victim selection.
23. DATA BACKUP
<dump> can be marked on a log file, whenever the database contents are
dumped from a non-volatile memory to a stable one.
Recovery:
When the system recovers from a failure, it can restore the latest dump.
Grown-up databases are too bulky to be frequently backed up. In such cases, we
have techniques where we can restore a database just by looking at its logs. So,
all that we need to do here is to take a backup of all the logs at frequent
intervals of time. The database can be backed up once a week, and the logs
being very small can be backed up every day or as frequently as possible.
Remote Backup
Remote backup provides a sense of security in case the primary location where the database is located gets destroyed. Remote backup can be offline or real-time (online). In case it is offline, it is maintained manually.
24. DATA RECOVERY
Crash Recovery
DBMS is a highly complex system with hundreds of transactions being executed
every second. The durability and robustness of a DBMS depends on its complex
architecture and its underlying hardware and system software. If it fails or
crashes amid transactions, it is expected that the system would follow some sort
of algorithm or techniques to recover lost data.
Failure Classification
To see where the problem has occurred, we generalize a failure into various
categories, as follows:
Transaction Failure
A transaction has to abort when it fails to execute or when it reaches a point from where it can't go any further. This is called transaction failure, where only a few transactions or processes are affected.
Reasons for a transaction failure could be:
System Crash
There are problems, external to the system, that may cause the system to stop abruptly and crash. For example, interruptions in the power supply may cause the failure of the underlying hardware or software. Examples may include operating system errors.
Disk Failure
In early days of technology evolution, it was a common problem where hard-disk
drives or storage drives used to fail frequently.
Disk failures include formation of bad sectors, unreachability to the disk, disk
head crash or any other failure, which destroys all or a part of disk storage.
Storage Structure
We have already described the storage system. In brief, the storage structure can be divided into two categories: volatile storage and non-volatile storage.
When a DBMS recovers from a crash, it should check the states of all the transactions that were being executed.
There are two types of techniques which can help a DBMS in recovering as well as maintaining the atomicity of a transaction:
Maintaining the logs of each transaction, and writing them onto some stable storage before actually modifying the database.
Maintaining shadow paging, where the changes are made on volatile memory, and later, the actual database is updated.
Log-based Recovery
Log is a sequence of records, which maintains the records of actions performed
by a transaction. It is important that the logs are written prior to the actual
modification and stored on a stable storage media, which is failsafe.
Log-based recovery works as follows:
When a transaction enters the system and starts execution, it writes a log about it: <Tn, Start>
When the transaction modifies a data item, it writes a log recording the old and the new value.
When the transaction finishes, it writes a log: <Tn, Commit>
Checkpoint
Keeping and maintaining logs in real time and in real environment may fill out all
the memory space available in the system. As time passes, the log file may grow
too big to be handled at all. Checkpoint is a mechanism where all the previous
logs are removed from the system and stored permanently in a storage disk.
Checkpoint declares a point before which the DBMS was in consistent state, and
all the transactions were committed.
Recovery
When a system with concurrent transactions crashes and recovers, it behaves in
the following manner:
The recovery system reads the logs backwards from the end to the last
checkpoint.
If the recovery system sees a log with <Tn, Start> and <Tn, Commit> or
just <Tn, Commit>, it puts the transaction in the redo-list.
If the recovery system sees a log with <Tn, Start> but no commit or abort
log found, it puts the transaction in the undo-list.
All the transactions in the undo-list are then undone and their logs are removed.
All the transactions in the redo-list and their previous logs are removed and then
redone before saving their logs.
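The redo/undo classification above can be sketched by scanning the log for each transaction's start and commit records. The log below is a made-up sequence of (transaction, event) entries, and abort records are omitted for brevity.

def classify(log):
    # Split transactions into a redo-list and an undo-list after a crash.
    started   = {t for t, event in log if event == "start"}
    committed = {t for t, event in log if event == "commit"}
    redo = started & committed     # both <Tn, Start> and <Tn, Commit> were logged
    undo = started - committed     # <Tn, Start> was logged, but no commit was found
    return redo, undo

log = [("T1", "start"), ("T1", "commit"),
       ("T2", "start"),                    # the crash happened before T2 committed
       ("T3", "start"), ("T3", "commit")]
redo, undo = classify(log)
print("redo:", sorted(redo), "undo:", sorted(undo))
# redo: ['T1', 'T3'] undo: ['T2']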