DBMS Notes: Unit 1, 2, 6
INTRODUCTION TO DBMS:
What is data?
• Data is nothing but facts and statistics stored in, or flowing freely over, a network; it is generally raw and unprocessed.
• Data becomes information when it is processed, turning it into something
meaningful.
• What is a database? A database is a collection of inter-related data which is used to retrieve, insert and delete data efficiently.
• It is also used to organize the data in the form of a table, schema, views, and
reports, etc.
• Using the database, you can easily retrieve, insert, and delete the information.
• For example: The college Database organizes the data about the admin, staff,
students and faculty etc.
DBMS vs. File System:
• DBMS is a collection of data in which the user is not required to write procedures; in a file system, the user has to write the procedures for managing the data.
• DBMS gives an abstract view of data that hides the details; a file system exposes the details of data representation and storage.
• DBMS provides a crash recovery mechanism, i.e., it protects the user from system failure; a file system doesn't have a crash recovery mechanism, so if the system crashes while entering some data, the content of the file may be lost.
• DBMS provides a good protection mechanism; it is very difficult to protect a file under a file system.
• DBMS contains a wide variety of sophisticated techniques to store and retrieve data; a file system can't store and retrieve data as efficiently.
• DBMS takes care of concurrent access to data using some form of locking; in a file system, concurrent access causes many problems, such as one user reading a file while another is deleting or updating some of its information.
• The main purpose of database systems is to manage data. Consider a university that keeps data about students, teachers, courses, books, etc. To manage this data we need to store it somewhere we can add new data, delete unused data, update outdated data, and retrieve data; to perform all these operations efficiently we need a database management system.
Characteristics of DBMS
• Data stored into Tables: Data is never directly stored into the database. Data is
stored into tables, created inside the database.
• Reduced Redundancy: In the modern world hard drives are very cheap, but earlier, when hard drives were very expensive, unnecessary repetition of data in a database was a big problem. DBMS follows normalisation, which divides the data in such a way that repetition is minimal.
• Data Consistency: On live data, i.e. data that is being continuously updated and added, maintaining data consistency can become a challenge, but DBMS handles it all by itself.
• Support Multiple users and Concurrent Access: DBMS allows multiple users to work on it (update, insert, delete data) at the same time and still manages to maintain data consistency.
• Query Language: DBMS provides users with a simple query language, using which data can be easily fetched, inserted, deleted and updated in a database (a short SQL sketch is given after this list).
• Controls database redundancy: It can control data redundancy because it stores all
the data in one single database file and that recorded data is placed in the database.
• Data sharing: In DBMS, the authorized users of an organization can share the data
among multiple users.
• Easy Maintenance: It is easily maintainable due to the centralized nature of the database system.
• Reduced time: It reduces development time and maintenance effort.
• Backup: It provides backup and recovery subsystems which create automatic backups of data and restore the data after hardware or software failures, if required.
• Multiple user interfaces: It provides different types of user interfaces, like graphical user interfaces and application program interfaces.
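For illustration, a minimal SQL sketch of the four basic operations mentioned above (assuming a hypothetical STUDENT table with ROLL_NO and NAME columns):

INSERT INTO STUDENT (ROLL_NO, NAME) VALUES (1, 'Ravi');    -- insert a row
SELECT NAME FROM STUDENT WHERE ROLL_NO = 1;                -- fetch data
UPDATE STUDENT SET NAME = 'Ravi Kumar' WHERE ROLL_NO = 1;  -- update data
DELETE FROM STUDENT WHERE ROLL_NO = 1;                     -- delete data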
Disadvantages of DBMS
• Cost of Hardware and Software: It requires a high-speed processor and a large memory size to run DBMS software.
• Size: It occupies a large amount of disk space and memory to run efficiently.
• Complexity: A database system creates additional complexity and requirements.
• Higher impact of failure: A failure has a high impact on the database because, in most organizations, all the data is stored in a single database; if the database is damaged due to an electrical failure or database corruption, the data may be lost forever.
DATA ABSTRACTION:
Database systems are made up of complex data structures. To ease user interaction with the database, the developers hide internal, irrelevant details from users. This process of hiding irrelevant details from the user is called data abstraction. There are three levels of data abstraction:
Physical level: This is the lowest level of the 3-level data abstraction architecture. It describes how the data is actually stored on the storage devices.
Logical level: This is the middle level of the 3-level data abstraction architecture. It describes what data is stored in the database.
View level: Highest level of data abstraction. This level describes the user's interaction with the database system.
Instance and schema in DBMS
Definition of instance:
The data stored in database at a particular moment of time is called instance of database.
Definition of schema:
The database schema defines the variable declarations in the tables that belong to a particular database; the value of these variables at a moment of time is called the instance of that database.
DBMS ARCHITECTURE:
• Database management systems architecture will help us understand the components of
database system and the relation among them.
• The architecture of DBMS depends on the computer system on which it runs.
• The basic client/server architecture is used to deal with a large number of PCs, web servers, database servers and other components that are connected with networks.
• The client/server architecture consists of many PCs and a workstation which are
connected via the network.
• DBMS architecture depends upon how users are connected to the database to get their
request done.
1-Tier Architecture
• In this type of architecture, the database is readily available on the client machine; any request made by the client doesn't require a network connection to perform the action on the database.
• Any changes done here will directly be done on the database itself. It doesn't provide a handy tool for end users.
• The 1-Tier architecture is used for development of the local application, where
programmers can directly communicate with the database for the quick response.
2-Tier Architecture
• In two-tier architecture, the database system is present at the server machine and the DBMS application is present at the client machine; these two machines are connected with each other through a reliable network.
• Whenever the client machine makes a request to access the database present at the server using a query language like SQL, the server performs the request on the database and returns the result back to the client.
• Application connection interfaces such as JDBC and ODBC are used for the interaction between the server and the client.
3-Tier Architecture
• In three-tier architecture, another layer is present between the client machine and the server machine.
• In this architecture, the client application doesn’t communicate directly with the
database systems present at the server machine, rather the client application
communicates with server application and the server application internally
communicates with the database system present at the server.
DATA MODELS:
• Data Model is the modeling of the data description, data semantics, and
consistency constraints of the data.
• It provides the conceptual tools for describing the design of a database at each
level of data abstraction.
• Therefore, the following four data models are used for understanding the structure of the database:
1. Hierarchical data model
2. Network data model
3. Relational data model
4. Entity-Relationship (ER) data model
Hierarchical DBMS
In a hierarchical database model, data is organized in a tree-like structure. Data is stored in a hierarchical (top-down or bottom-up) format and is represented using parent-child relationships. In a hierarchical DBMS, a parent may have many children, but a child has only one parent.
Network Model
The network database model allows each child to have multiple parents. It helps you to address
the need to model more complex relationships like as the orders/parts many-to-many
relationship. In this model, entities are organized in a graph which can be accessed through
several paths.
Relational model
Relational DBMS is the most widely used DBMS model because it is one of the easiest. This model is based on normalizing data into the rows and columns of tables. Data in the relational model is stored in fixed structures and manipulated using SQL.
Entity-Relationship Model
The entity-relationship model views the database as a collection of entities and the relationships among them; it is described in detail in the ER model section later in these notes.
Database Administrators
A Database Administrator (DBA) is an individual or person responsible for
controlling, maintaining, coordinating, and operating a database
management system
The life cycle of a database starts from designing and implementing it and extends to administering it. A database for any kind of requirement needs to be designed properly so that it works without issues. Once the design is complete, the database is installed, and users start using it. The database grows as the data in it grows; when the database becomes huge, its performance comes down and accessing data from it becomes a challenge. There may also be unused space in the database, making it unnecessarily large. This administration and maintenance of the database is taken care of by the Database Administrator (DBA).
DML pre-compiler:
The DML pre-compiler converts DML statements embedded in an application program into normal procedure calls in the host language.
DDL compiler:
The DDL compiler converts the data definition statements into a set of tables. These tables contain information concerning the database and are in a form that can be used by other components of the DBMS.
File manager:
File manager manages the allocation of space on disk storage and the data structure
used to represent information stored on disk.
Database manager:
A database manager is a program module which provides the interface between the
low level data stored in the database and the application programs and queries
submitted to the system.
The responsibilities of database manager are:
1. Interaction with file manager: The data is stored on the disk using the file system provided by the operating system. The database manager translates the different DML statements into low-level file-system commands, so the database manager is responsible for the actual storing, retrieving and updating of data in the database.
2. Integrity enforcement: The data values stored in the database must satisfy certain constraints (e.g. the age of a person can't be less than zero). These constraints are specified by the DBA. The database manager checks the constraints and, if they are satisfied, stores the data in the database.
3. Security enforcement: The database manager enforces security measures to protect the database from unauthorized users.
4. Backup and recovery: The database manager detects failures that occur due to different causes (like disk failure, power failure, deadlock, software error) and restores the database to its original state.
5. Concurrency control: When several users access the same database file simultaneously, there may be possibilities of data inconsistency. It is the responsibility of the database manager to control the problems that occur due to concurrent transactions.
6. Query processor:
The query processor is used to interpret the online user's queries and convert them into an efficient series of operations in a form capable of being sent to the data manager for execution.
QUERY PROCESSING:
Query processing refers to the activities involved in retrieving data from the database. Initially, the user's query is written in a high-level database language such as SQL; the system then translates it into an internal form from which a query evaluation plan is constructed.
Query Evaluation Plan
o In order to fully evaluate a query, the system needs to construct a query evaluation plan.
o The annotations in the evaluation plan may refer to the algorithms to be used for the
particular index or the specific operations.
o Such relational algebra with annotations is referred to as Evaluation Primitives. The
evaluation primitives carry the instructions needed for the evaluation of the operation.
o Thus, a query evaluation plan defines a sequence of primitive operations used for evaluating a query. The query evaluation plan is also referred to as the query execution plan.
o A query execution engine is responsible for generating the output of the given query. It takes the query execution plan, executes it, and finally produces the output for the user query.
Optimization
o The cost of query evaluation can vary for different types of queries. Although the system is responsible for constructing the evaluation plan, the user need not write the query efficiently.
o Usually, a database system generates an efficient query evaluation plan which minimizes its cost. This task performed by the database system is known as query optimization.
o For optimizing a query, the query optimizer should have an estimated cost analysis of each operation, because the overall operation cost depends on the memory allocated to the operations, execution costs, and so on.
RELATIONAL MODEL:
1. Relation – The data are represented as a set of relations. In the relational model, data are stored as tables; however, the physical storage of the data is independent of the way the data are logically organized. The table name and column names help to interpret the meaning of the values in each row.
2. Tables – In the relational model, relations are saved in the table format. A table is stored along with its entities. A table has two properties: rows and columns. Rows represent records and columns represent attributes.
3. Tuple – It is nothing but a single row of a table, which contains a single record.
4. Relation Schema: A relation schema represents the name of the relation with its
attributes.
5. Degree: The total number of attributes in the relation is called the degree of the relation.
6. Cardinality: Total number of rows present in the Table.
7. Column: The column represents the set of values for a specific attribute.
8. Relation instance – Relation instance is a finite set of tuples in the RDBMS
system.
Relation instances never have duplicate tuples.
9. Relation key – Every row has one, two or multiple attributes that can uniquely identify it; these are called the relation key.
10. Attribute domain – Every attribute has some pre-defined value and scope which
is known as attribute domain
Keys in DBMS
KEYS in DBMS are an attribute or a set of attributes that help you to identify a row (tuple) in a relation (table). They allow you to find the relation between two tables, and they help you uniquely identify a row in a table by a combination of one or more columns in that table.
There are mainly eight different types of keys in DBMS, and each key has a different functionality (a short SQL sketch is given after this list):
• Super Key - A super key is a group of single or multiple keys which identifies
rows in a table.
• Primary Key - is a column or group of columns in a table that uniquely identify
every row in that table.
• Candidate Key - is a set of attributes that uniquely identify tuples in a table.
Candidate Key is a super key with no repeated attributes.
• Alternate Key - is a column or group of columns in a table that uniquely identify
every row in that table.
• Foreign Key - is a column that creates a relationship between two tables. The
purpose of Foreign keys is to maintain data integrity and allow navigation
between two different instances of an entity.
• Compound Key - has two or more attributes that allow you to uniquely
recognize a specific record. It is possible that each column may not be unique by
itself within the database.
• Composite Key - A key that consists of two or more columns which together uniquely identify each record; the combination is unique even though the individual columns may not be.
• Surrogate Key - An artificial key which aims to uniquely identify each record is
called a surrogate key. These kind of key are unique because they are created
when you don't have any natural primary key.
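The following is a minimal SQL sketch of how some of these keys are declared; the STUDENT and DEPARTMENT tables here are assumptions used only for illustration:

CREATE TABLE DEPARTMENT (
    DEPT_NO   CHAR(1)     PRIMARY KEY,   -- primary key
    DEPT_NAME VARCHAR(30) UNIQUE         -- alternate (candidate) key
);

CREATE TABLE STUDENT (
    ROLL_NO  INT,
    EMAIL    VARCHAR(50) UNIQUE,                        -- another candidate key
    DEPT_NO  CHAR(1) REFERENCES DEPARTMENT(DEPT_NO),    -- foreign key
    PRIMARY KEY (ROLL_NO)                               -- primary key
);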
ER model
o It develops a conceptual design for the database. It also provides a very simple and easy-to-design view of data.
o In ER modeling, the database structure is portrayed as a diagram called an entity-relationship diagram.
For example, Suppose we design a school database. In this database, the student will be an
entity with attributes like address, name, id, age, etc. The address can be another entity with
attributes like city, street name, pin code, etc and there will be a relationship between them.
Component of ER Diagram
1. Entity
An entity may be any object, class, person or place. In the ER diagram, an entity is represented as a rectangle.
2. Attribute
The attribute is used to describe a property of an entity. An ellipse is used to represent an attribute.
For example, id, age, contact number, name, etc. can be attributes of a student.
1. Simple Attribute
A simple attribute is an atomic value that cannot be divided further. For example, a student's roll number.
2.Key Attribute
The key attribute is used to represent the main characteristics of an entity. It represents a
primary key. The key attribute is represented by an ellipse with the text underlined.
3. Composite Attribute
An attribute that is composed of many other attributes is known as a composite attribute. The composite attribute is represented by an ellipse connected to other ellipses (its component attributes).
4.Multivalued Attribute
An attribute can have more than one value. These attributes are known as a multivalued
attribute. The double oval is used to represent multivalued attribute.
For example, a student can have more than one phone number.
5. Derived Attribute
An attribute that can be derived from another attribute is known as a derived attribute. It is represented by a dashed ellipse.
For example, A person's age changes over time and can be derived from another attribute
like Date of birth.
3. Relationship
A relationship is used to describe the relation between entities. A diamond (rhombus) is used to represent a relationship.
a. One-to-one relationship
When only one instance of an entity is associated with the relationship, then it is known as a one-to-one relationship.
For example, a female can marry one male, and a male can marry one female.
b. One-to-many relationship
When only one instance of the entity on the left, and more than one instance of an entity on
the right associates with the relationship then this is known as a one-to-many relationship.
For example, a scientist can invent many inventions, but each invention is made by only one specific scientist.
c. Many-to-one relationship
When more than one instance of the entity on the left, and only one instance of an entity on
the right associates with the relationship then it is known as a many-to-one relationship.
For example, a student enrolls in only one course, but a course can have many students.
d. Many-to-many relationship
When more than one instance of the entity on the left is associated with more than one instance of the entity on the right, it is known as a many-to-many relationship.
For example, an employee can be assigned to many projects, and a project can have many employees.
Notation of ER diagram
Database can be represented using the notations. In ER diagram, many notations are used to
express the cardinality. These notations are as follows:
Strong Entity
• A strong entity is nothing but an entity set having a primary key attribute, i.e. a table that has a primary key column.
Example – A student entity can exist without needing any other entity in the schema, or a course entity can exist without needing any other entity in the schema.
Representation
• A strong entity is represented by a single rectangle.
An entity that depends on another entity is called a weak entity. The weak entity doesn't contain any key attribute of its own. The weak entity is represented by a double rectangle.
Example 1 – A loan entity can not be created for a customer if the customer doesn’t
exist
Example 2 – A dependents list entity can not be created if the employee doesn’t exist
• Simply a weak entity is nothing but an entity that does not have a primary key
attribute
• It contains a partial key called a discriminator which helps in identifying a
group of entities from the entity set
• A discriminator is represented by underlining with a dashed line
Representation
• A weak entity is represented by a double rectangle, and its identifying relationship by a double diamond.
1. Generalization
• Generalization is the process of generalizing the entities which contain
the properties of all the generalized entities.
• It is a bottom-up approach, in which two or more lower-level entities combine to form a higher-level entity.
• Generalization is the reverse process of Specialization.
• It defines a general entity type from a set of specialized entity type.
• It minimizes the difference between the entities by identifying the
common features.
For example, Tiger, Lion and Elephant can all be generalized as Animals.
2. Specialization
• Specialization is the reverse of generalization: it is a top-down approach in which a higher-level entity is divided into lower-level entities on the basis of their distinguishing characteristics.
• For example, an Employee can be specialized as a Developer or a Tester, based on the role they play in an organization.
INTEGRITY CONSTRAINTS:
o Integrity constraints ensure that data insertion, updating, and other processes are performed in such a way that data integrity is not affected.
o Thus, integrity constraints are used to guard against accidental damage to the database.
2. Entity Integrity Constraint
o The entity integrity constraint states that the primary key value cannot be null.
o This is because the primary key value is used to identify individual rows in a relation, and if the primary key had a null value, we could not identify those rows.
o A table can contain a null value in columns other than the primary key field.
3. Referential Integrity Constraint
o A referential integrity constraint is specified between two tables: if a foreign key in Table 1 refers to the primary key of Table 2, then every value of the foreign key in Table 1 must be null or be available in Table 2 (a small SQL sketch is given after this list).
4. Key Constraints
o An entity set can have multiple keys, but out of these, one key will be the primary key. A primary key must contain unique values and cannot contain a null value in the relational table.
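As a small illustration (hypothetical EMPLOYEE and DEPARTMENT tables), the following INSERT would be rejected under referential integrity if the department does not exist:

-- EMPLOYEE.DEPT_NO is a foreign key referring to DEPARTMENT(DEPT_NO)
INSERT INTO EMPLOYEE (EMP_ID, EMP_NAME, DEPT_NO) VALUES (7, 'Anil', 'Z');
-- rejected if 'Z' is not present in DEPARTMENT; DEPT_NO must be NULL or an existing department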
Unit 3
RELATIONAL ALGEBRA:
Relational algebra is a procedural query language that works on relations. Its basic operations are:
1.Select (σ)
2. Project (∏)
3. Union (∪)
4. Set Difference (-)
5. Cartesian product (X)
6. Rename (ρ)
Select Operation (σ): The select operation selects tuples from a relation that satisfy the given predicate.
Notation: σ p(r)
➢ σ is the selection operator and p is the predicate
➢ r stands for relation, which is the name of the table
Input:
σ BRANCH_NAME="perryride" (LOAN)
Output:
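The same selection written in SQL (a rough equivalent, assuming a LOAN table with a BRANCH_NAME column):

SELECT * FROM LOAN WHERE BRANCH_NAME = 'perryride';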
Project Operation (∏): The project operation displays only the requested columns (attributes) of a relation.
Example:
∏Name, Age(Student)
Above statement will show us only the Name and Age columns for all the
rows of data in Student table.
Input:
∏ NAME, CITY (CUSTOMER)
Output:
NAME CITY
Jones Harrison
Smith Rye
Hays Harrison
Curry Rye
Johnson Brooklyn
Brooks Brooklyn
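A rough SQL equivalent of the above projection (DISTINCT is used because projection removes duplicate rows):

SELECT DISTINCT NAME, CITY FROM CUSTOMER;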
Union Operation (∪): The union of two relations returns the tuples that appear in either relation, eliminating duplicates; both relations must have the same set of attributes.
DEPOSITOR RELATION
CUSTOMER_NAME ACCOUNT_NO
Johnson A-101
Smith A-121
Mayes A-321
Turner A-176
Johnson A-273
Jones A-472
Lindsay A-284
BORROW RELATION
CUSTOMER_NAME LOAN_NO
Smith L-23
Hayes L-15
Jackson L-14
Curry L-93
Smith L-11
Williams L-17
Input:
∏ CUSTOMER_NAME (BORROW) ∪ ∏ CUSTOMER_NAME (DEPOSITOR)
Output:
CUSTOMER_NAME
Johnson
Smith
Hayes
Turner
Jones
Lindsay
Jackson
Curry
Williams
Mayes
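A rough SQL equivalent of the above union (UNION removes duplicate names automatically):

SELECT CUSTOMER_NAME FROM BORROW
UNION
SELECT CUSTOMER_NAME FROM DEPOSITOR;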
Cartesian Product Operation (X):
This is used to combine data from two different relations (tables) into one and fetch data from the combined relation.
Syntax: A X B
For example, if we want to find the information for Regular Class and Extra Class
which are conducted during morning, then, we can use the following operation:
σtime = 'morning' (RegularClass X ExtraClass)
For the above query to work, both RegularClass and ExtraClass should have the attribute time.
Notation: E X D
EMPLOYEE
EMP_ID EMP_NAME EMP_DEPT
1 Smith A
2 Harry C
3 John B
DEPARTMENT
DEPT_NO DEPT_NAME
A Marketing
B Sales
C Legal
Input:
EMPLOYEE X DEPARTMENT
Output:
EMP_ID EMP_NAME EMP_DEPT DEPT_NO DEPT_NAME
1 Smith A A Marketing
1 Smith A B Sales
1 Smith A C Legal
2 Harry C A Marketing
2 Harry C C Legal
3 John B A Marketing
3 John B B Sales
3 John B C Legal
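A rough SQL equivalent of the above Cartesian product (every employee row paired with every department row):

SELECT * FROM EMPLOYEE CROSS JOIN DEPARTMENT;
-- equivalently: SELECT * FROM EMPLOYEE, DEPARTMENT;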
Rename Operation (ρ):
The rename operation is used to rename the output relation of any query operation that returns a result (like select, project, etc.), or simply to rename a relation (table). It is denoted by rho (ρ).
Syntax: ρ(RelationNew, RelationOld)
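For example, the projection used earlier can be renamed, and SQL offers a loosely similar facility through aliases (a sketch using the CUSTOMER table from above):

ρ(CUSTOMER_CITIES, ∏ NAME, CITY (CUSTOMER))
-- a loose SQL counterpart that gives the table a new name within the query:
SELECT C.NAME, C.CITY FROM CUSTOMER AS C;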
• A JOIN clause is used to combine rows from two or more tables, based on
a related column between them.
• Join in DBMS is a binary operation which allows you to combine join
product and selection in one single statement.
• The goal of creating a join condition is that it helps you to combine the
data from two or more DBMS tables.
• The tables in DBMS are associated using the primary key and foreign
keys.
PROJECT
PROJECT_NO EMP_ID DEPARTMENT
101 1 Testing
102 2 Development
103 3 Designing
104 4 Development
1. INNER JOIN
In SQL, INNER JOIN selects records that have matching values in both tables as
long as the condition is satisfied.
It returns the combination of all rows from both the tables where the condition
satisfies.
Syntax
SELECT table1.column1, table1.column2
FROM table1 INNER JOIN table2
ON table1.matching_column = table2.matching_column;
Query
SELECT EMPLOYEE.EMP_NAME, PROJECT.DEPARTMENT
FROM EMPLOYEE INNER JOIN PROJECT
ON PROJECT.EMP_ID = EMPLOYEE.EMP_ID;
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
2. LEFT JOIN
The SQL LEFT JOIN returns all the rows from the left table and the matching values from the right table. If there is no matching join value, it returns NULL.
Query
SELECT EMPLOYEE.EMP_NAME, PROJECT.DEPARTMENT
FROM EMPLOYEE LEFT JOIN PROJECT
ON PROJECT.EMP_ID = EMPLOYEE.EMP_ID;
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
Russell NULL
3. RIGHT JOIN
In SQL, RIGHT JOIN returns all the rows of the right table and the matched values from the left table. If there is no match, it returns NULL.
Syntax
SELECT table1.column1, table1.column2
FROM table1 RIGHT JOIN table2
ON table1.matching_column = table2.matching_column;
Query
SELECT EMPLOYEE.EMP_NAME, PROJECT.DEPARTMENT
FROM EMPLOYEE RIGHT JOIN PROJECT
ON PROJECT.EMP_ID = EMPLOYEE.EMP_ID;
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
4. FULL JOIN
In SQL, FULL JOIN is the result of a combination of both left and right outer joins. The result contains all the records from both tables and puts NULL in place of matches not found.
Syntax
SELECT table1.column1, table1.column2
FROM table1 FULL JOIN table2
ON table1.matching_column = table2.matching_column;
Query
SELECT EMPLOYEE.EMP_NAME, PROJECT.DEPARTMENT
FROM EMPLOYEE FULL JOIN PROJECT
ON PROJECT.EMP_ID = EMPLOYEE.EMP_ID;
Output
EMP_NAME DEPARTMENT
Angelina Testing
Robert Development
Christian Designing
Kristen Development
Russell NULL
Marry NULL
Division Operator (÷):
The division operator is used when we have to evaluate queries which contain the keyword ALL, e.g. "find the students who have taken ALL the courses".
Student_Name Course
Robert Databases
David Databases
Course
Databases
Programming Languages
Create a set of all students that have taken courses. This can be done easily using
the following command.
Student_name
Robert
David
Hannah
Tom
Robert Databases
David Databases
Hannah Databases
Tom Databases
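Relational division has no single SQL keyword; a common way to express "students who have taken ALL the courses" is a double NOT EXISTS. The sketch below assumes tables STUDENT_COURSE(Student_Name, Course) and COURSE(Course):

SELECT DISTINCT SC.Student_Name
FROM STUDENT_COURSE SC
WHERE NOT EXISTS (
    SELECT C.Course FROM COURSE C
    WHERE NOT EXISTS (
        SELECT 1 FROM STUDENT_COURSE SC2
        WHERE SC2.Student_Name = SC.Student_Name
          AND SC2.Course = C.Course
    )
);
-- reads as: students for whom there is no course that they have not taken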
TRANSACTIONS:
A transaction is a set of operations (such as read and write) performed on the database as a single logical unit of work.
• The main problem that can happen during a transaction is that the transaction can fail before finishing all the operations in the set. This can happen due to power failure, system crash, etc.
Commit: If all the operations in a transaction are completed successfully then commit those
changes to the database permanently.
Rollback: If any of the operation fails then rollback all the changes done by previous
operations.
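A minimal SQL sketch of commit and rollback (an ACCOUNT table is assumed here; the exact transaction syntax varies slightly between database systems):

BEGIN TRANSACTION;
UPDATE ACCOUNT SET BALANCE = BALANCE - 500 WHERE ACC_NO = 'A-101';
UPDATE ACCOUNT SET BALANCE = BALANCE + 500 WHERE ACC_NO = 'A-102';
COMMIT;    -- make both updates permanent
-- if either UPDATE fails, issue ROLLBACK instead to undo all changes made by the transaction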
TRANSACTION PROPERTIES
A transaction has four properties. These are used to maintain consistency in a database before and after the transaction.
Property of Transaction
1. Atomicity
2. Consistency
3. Isolation
4. Durability
Atomicity
o It states that all operations of the transaction take place at once; if not, the transaction is aborted.
o There is no midway, i.e., the transaction cannot occur partially. Each transaction is treated as one unit and either runs to completion or is not executed at all.
Abort: If a transaction aborts then all the changes made are not visible.
Commit: If a transaction commits then all the changes made are visible.
Consistency
o The integrity constraints are maintained so that the database is consistent before and after the transaction.
Isolation
o It ensures that the data being used during the execution of one transaction cannot be used by a second transaction until the first one is completed.
o In isolation, if the transaction T1 is being executed and using the data item X, then
that data item can't be accessed by any other transaction T2 until the transaction T1
ends.
o The concurrency control subsystem of the DBMS enforces the isolation property.
Durability
o The durability property ensures that once a transaction commits, its changes are permanent in the database.
• They cannot be lost by the erroneous operation of a faulty transaction or by the system
failure. When a transaction is completed, then the database reaches a state known as the
consistent state. That consistent state cannot be lost, even in the event of a system's
failure.
o The recovery subsystem of the DBMS is responsible for the durability property.
STATES OF TRANSACTIONS
Transactions can be implemented using SQL queries. During its execution, a transaction passes through the following states:
Active state
o The active state is the first state of every transaction. In this state, the transaction is
being executed.
o For example: Insertion or deletion or updating a record is done here. But all the
records are still not saved to the database.
Partially committed
o In the partially committed state, a transaction executes its final operation, but the data
is still not saved to the database.
o In the total mark calculation example, a final display of the total marks step is
executed in this state.
Committed
o A transaction is said to be in the committed state when it has executed all its operations successfully and its effects are permanently saved in the database.
Failed state
o If any of the checks made by the database recovery system fails, then the transaction
is said to be in the failed state.
o In the example of total mark calculation, if the database is not able to fire a query to
fetch the marks, then the transaction will fail to execute.
o If any of the checks fail and the transaction has reached a failed state then the
database recovery system will make sure that the database is in its previous consistent
state. If not then it will abort or roll back the transaction to bring the database into a
consistent state.
Aborted
o If the transaction fails in the middle of its execution, then all the changes it has already made are rolled back, bringing the database back to its consistent state.
o After aborting the transaction, the database recovery module will select one of the two operations:
1. Re-start the transaction
2. Kill the transaction
The recovery-management component of a database system can support atomicity and durability
by a variety of schemes.
E.g. the shadow-database scheme:
Shadow copy:
• In the shadow-copy scheme, a transaction that wants to update the database first creates a
complete copy of the database.
• All updates are done on the new database copy, leaving the original copy, the shadow copy,
untouched. If at any point the transaction has to be aborted, the system merely deletes the
new copy. The old copy of the database has not been affected.
• This scheme, which is based on making copies of the database called shadow copies, assumes that only one transaction is active at a time.
• The scheme also assumes that the database is simply a file on disk. A pointer called
dbpointer is maintained on disk; it points to the current copy of the database.
Figure below depicts the scheme, showing the database state before and after the update.
SCHEDULE
A schedule is a series of operations from one or more transactions.
1. SERIAL SCHEDULE
The serial schedule is a type of schedule where one transaction is executed completely before starting another transaction. In the serial schedule, when the first transaction completes its cycle, then the next transaction is executed.
For example: Suppose there are two transactions T1 and T2 which have some operations. If there is no interleaving of operations, then there are the following two possible outcomes:
1. Execute all the operations of T1 followed by all the operations of T2.
2. Execute all the operations of T2 followed by all the operations of T1.
o In the given (a) figure, Schedule A shows the serial schedule where T1 followed by
T2.
o In the given (b) figure, Schedule B shows the serial schedule where T2 followed by
T1.
2. NON-SERIAL SCHEDULE
o If interleaving of operations is allowed, then there will be non-serial schedule.
o It contains many possible orders in which the system can execute the individual
operations of the transactions.
o In the given figure (c) and (d), Schedule C and Schedule D are the non-serial
schedules. It has interleaving of operations.
3. SERIALIZABLE SCHEDULE
o Serializability identifies which schedules are correct when the transactions have interleaving of their operations.
o A non-serial schedule is serializable if its result is equal to the result of its transactions executed serially.
SERIALIZABILITY IN DBMS
Some non-serial schedules may lead to inconsistency of the database.
• Serializability is a concept that helps to identify which non-serial schedules are correct and
will maintain the consistency of the database.
Types of Serializability
1. Conflict Serializability
2. View Serializability
Conflict Serializability
If a given non-serial schedule can be converted into a serial schedule by swapping its non-conflicting operations, then it is called a conflict serializable schedule.
Conflicting Operations
Two operations are called conflicting operations if all of the following conditions hold true for them:
• Both the operations belong to different transactions
• Both the operations are on the same data item
• At least one of the two operations is a write operation
Follow the following steps to check whether a given non-serial schedule is conflict serializable or not:
Step-01: List all the conflicting operations in the schedule.
Step-02: Draw a precedence graph with one node for each transaction.
Step-03: Draw an edge for each conflict pair such that if Xi (V) and Yj (V) form a conflict pair, then draw an edge from Ti to Tj.
• If there is no cycle in the precedence graph, then the schedule is conflict serializable; otherwise it is not.
VIEW SERIALIZABILITY
View serializability is a process to find out whether a given schedule is view serializable or not. To check whether a given schedule is view serializable, we need to check whether it is view equivalent to a serial schedule of the same transactions.
A schedule that is view serializable but not conflict serializable contains blind writes.
View Equivalent
Two schedules S1 and S2 are said to be view equivalent if they satisfy the following
conditions:
1. Initial Read:
The initial read of both schedules must be the same. Suppose there are two schedules S1 and S2; if in schedule S1 a transaction T1 reads the data item A, then in S2 also, transaction T1 should read A.
2. Updated Read
In schedule S1, if transaction Ti reads a value of A that was updated by transaction Tj, then in schedule S2 also, Ti should read the value of A written by Tj.
3. Final Write
The final write must be the same in both schedules. In schedule S1, if transaction T1 performs the final write on A, then in S2 the final write on A should also be done by T1.
Recoverability of Schedule
Sometimes a transaction may not execute completely due to a software issue, system crash or hardware failure. In that case, the failed transaction has to be rolled back. But if some other transaction has already read a value written by the failed transaction, that transaction may also have to be rolled back; a schedule is recoverable when every transaction commits only after all the transactions whose changes it has read have committed.
FAILURE CLASSIFICATION
To find that where the problem has occurred, we generalize a failure into the following
categories:
1. Transaction failure
2. System crash
3. Disk failure
1. Transaction failure: A transaction fails when it cannot execute further or reaches a point from which it cannot proceed, for example due to a logical error or a system error inside the transaction.
2. System crash: System failure can occur due to power failure or other hardware or software failure; the content of non-volatile storage is assumed to remain intact.
3. Disk failure: Part of the disk storage is lost, for example due to a head crash or failure of the storage device.
In the transaction process, a system usually allows executing more than one transaction
simultaneously. This process is called a concurrent execution.
In a database transaction, the two main operations are READ and WRITE. These operations need to be managed carefully during concurrent execution, because if the interleaved operations are not controlled properly, the data may become inconsistent. Problems such as lost updates and dirty reads can occur with the concurrent execution of operations.
CONCURRENCY CONTROL
Concurrency Control is the working concept that is required for controlling and managing the
concurrent execution of database operations and thus avoiding the inconsistencies in the
database. Thus, for maintaining the concurrency of the database, we have the concurrency
control protocols.
Lock-Based Protocol
In this type of protocol, any transaction cannot read or write data until it acquires an
appropriate lock on it. There are two types of lock:
1. Shared lock:
• A shared lock is also known as a read lock. It allows the data item to be read only; several transactions can hold a shared lock on the same data item at the same time, but none of them can write it.
2.Exclusive lock
• An exclusive lock allows the data item to be both read and written. To obtain an exclusive (X) lock, the transaction uses the lock-X instruction. After finishing the 'write' step, the transaction can unlock the data item.
• At any given time, the exclusive locks can only be owned by one transaction.
• By imposing an X lock on a transaction that needs to update a person's account
balance, for example, you can allow it to proceed. As a result of the exclusive lock, the
second transaction is unable to read or write.
• The other name for an exclusive lock is write lock.
o The two-phase locking protocol divides the execution phase of the transaction into
three parts.
o In the first part, when the execution of the transaction starts, it seeks permission for
the lock it requires.
o In the second part, the transaction acquires all the locks. The third phase is started as
soon as the transaction releases its first lock.
o In the third phase, the transaction cannot demand any new locks. It only releases the
acquired locks.
Growing phase: In the growing phase, a new lock on the data item may be acquired by the
transaction, but none can be released.
Shrinking phase: In the shrinking phase, existing lock held by the transaction may be
released, but no new locks can be acquired.
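A small illustrative lock sequence for a transaction T1 under two-phase locking (the data items A and B are assumed; lock-S, lock-X and unlock are the instructions described above):

Growing phase:   lock-S(A); read(A); lock-X(B); read(B); write(B);
Lock point:      T1 now holds all the locks it needs
Shrinking phase: unlock(A); unlock(B);
(once T1 releases its first lock, it is not allowed to request any new lock)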
If lock conversion is allowed, a lock may be upgraded (shared to exclusive) during the growing phase and downgraded (exclusive to shared) during the shrinking phase.
Timestamp Ordering Protocol
o The priority of the older transaction is higher, so it executes first. To determine the timestamp of a transaction, this protocol uses system time or a logical counter.
o The lock-based protocol is used to manage the order between conflicting pairs among
transactions at the execution time. But Timestamp based protocols start working as
soon as a transaction is created.
1. Whenever a transaction Ti issues a Read(X) operation, check the following condition:
o If TS(Ti) < W_TS(X), then the operation is rejected and Ti is rolled back; otherwise the operation is executed.
2. Whenever a transaction Ti issues a Write(X) operation, check the following condition:
o If TS(Ti) < R_TS(X) or TS(Ti) < W_TS(X), then the operation is rejected and Ti is rolled back; otherwise the operation is executed.
Where TS(Ti) is the timestamp of transaction Ti, R_TS(X) is the read timestamp of data item X, and W_TS(X) is the write timestamp of data item X.
Validation-Based Protocol
The validation-based protocol is also known as the optimistic concurrency control technique. In this protocol, the transaction is executed in the following three phases:
1. Read phase: In this phase, the transaction T is read and executed. It is used to read
the value of various data items and stores them in temporary local variables. It can
perform all the write operations on temporary variables without an update to the
actual database.
2. Validation phase: In this phase, the temporary variable value will be validated
against the actual data to see if it violates the serializability.
3. Write phase: If the validation of the transaction is validated, then the temporary
results are written to the database or system otherwise the transaction is rolled back.
Validation (Ti): It contains the time when Ti finishes its read phase and starts its validation
phase.
o This protocol is used to determine the time stamp for the transaction for serialization
using the time stamp of the validation phase, as it is the actual phase which
determines if the transaction will commit or rollback.
o Hence TS(T) = validation(T).
• When a system crashes, it may have several transactions being executed and various
files opened for them to modify the data items.
• Database recovery means recovering the data when it gets deleted, hacked or damaged accidentally.
• Atomicity must be preserved: whether or not the transaction completed, either all of its effects are reflected in the database permanently or none of them are.
• It should check the states of all the transactions, which were being executed.
• A transaction may be in the middle of some operation; the DBMS must ensure the
atomicity of the transaction in this case.
• It should check whether the transaction can be completed now or it needs to be rolled
back.
There are two types of techniques, which can help a DBMS in recovering as well as
maintaining the atomicity of a transaction −
• Maintaining the logs of each transaction, and writing them onto some stable storage
before actually modifying the database.
• Maintaining shadow paging, where the changes are done on a volatile memory, and
later, the actual database is updated.
The log is a sequence of records. Log of each transaction is maintained in some stable
storage so that if any failure occurs, then it can be recovered from there.
• If any operation is performed on the database, then it will be recorded in the log.
o But the process of storing the logs should be done before the actual
transaction is applied in the database.
• When transaction Ti starts, it registers itself by writing a
<Ti start>log record
• Before Ti executes write(X), a log record
<Ti, X, V1, V2>
is written, where V1 is the value of X before the write (the old value), and V2 is
the value to be written to X (the new value).
• When Ti finishes its last statement, the log record <Ti commit> is written.
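As an illustration with assumed values, the log of a transaction T1 that transfers 50 from account A (initially 1000) to account B (initially 2000) would contain:

<T1 start>
<T1, A, 1000, 950>
<T1, B, 2000, 2050>
<T1 commit>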
o In the deferred modification technique, the transaction does not modify the database until it has committed.
o In this method, all the logs are created and stored in the stable storage, and the
database is updated when a transaction commits.
DEADLOCK IN DBMS:
Let us consider an organization's database in which we have one employee table and one salary table.
Suppose two transactions T1 and T2, are working concurrently, the transaction T1 holds
the lock of some rows in the employee table and the transaction T2 holds the lock of
some rows in the salary table.
Now, consider a situation where T1 wants to make some changes in the salary table, and at the same time, T2 also wants to make some changes in the employee table. Neither transaction can proceed: T1 waits for T2 to release its locks on the salary table, while T2 waits for T1 to release its locks on the employee table. This situation, where transactions wait for each other indefinitely, is called a deadlock.
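A sketch of the same situation in SQL (hypothetical tables and column names; each UPDATE takes an exclusive lock on the row it touches):

-- Session 1 (T1):
BEGIN TRANSACTION;
UPDATE employee SET dept = 'HR' WHERE emp_id = 1;    -- T1 locks a row in employee

-- Session 2 (T2):
BEGIN TRANSACTION;
UPDATE salary SET amount = 60000 WHERE emp_id = 2;   -- T2 locks a row in salary

-- Session 1 (T1):
UPDATE salary SET amount = 50000 WHERE emp_id = 2;   -- T1 waits for T2's lock

-- Session 2 (T2):
UPDATE employee SET dept = 'IT' WHERE emp_id = 1;    -- T2 waits for T1's lock: deadlock
-- the DBMS detects the cycle and rolls one of the two transactions back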
Deadlock Avoidance
• Deadlock avoidance algorithms are used to avoid a deadlock before it occurs, rather than dealing with it after it has occurred.
Deadlock Prevention
There are two methods for Deadlock Prevention:
1. Wait-Die Method
2. Wound-Wait Method
Wait-Die Method
Wait-Die is a non-preemptive type of deadlock prevention method; since it is non-preemptive, CPU time may be distributed unevenly among transactions. When one transaction requests a lock held by another, their timestamps are compared: an older requesting transaction is allowed to wait, while a younger requesting transaction is rolled back (it "dies") and is restarted later with the same timestamp.
Wound-Wait Method
Wound-Wait is a preemptive type deadlock prevention method. It ensures that all
transactions get equal CPU time. The transactions are not executed to entire burst-time;
they can be preempted during their execution to get CPU time for other transactions.
The wound-wait scheme is the opposite of the wait-die scheme. Consider two transactions Ti and Tj, where Ti requests a data lock held by Tj: if Ti is older than Tj, then Tj is rolled back (Ti "wounds" Tj) and the lock is given to Ti; if Ti is younger than Tj, then Ti waits.
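A small worked comparison with assumed timestamps (T1 is older with timestamp 5, T2 is younger with timestamp 10):

Situation                                 Wait-Die                     Wound-Wait
T1 (older) requests a lock held by T2     T1 waits                     T2 is rolled back (wounded); T1 gets the lock
T2 (younger) requests a lock held by T1   T2 is rolled back (dies)     T2 waits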