Module 3, 4, 5 QAA
1. Explain the Cursor & its properties in embedded SQL with an example.
A cursor in SQL is a database object used to retrieve, process, and manipulate data
one row at a time. While SQL is designed to handle large data sets in bulk, sometimes
we just need to focus on one row at a time. A cursor in SQL is a temporary memory or
workspace allocated by the database server to process DML operations.
It allows processing query results row-by-row instead of applying operations to the entire set. Typical uses include:
• Performing conditional logic row-by-row.
• Looping through data to calculate or transform fields.
• Iterating over result sets for conditional updates or transformations.
• Handling hierarchical or recursive data structures.
• Performing clean-up tasks that cannot be done with a single SQL query.
Implicit Cursors
In PL/SQL, when we perform INSERT, UPDATE or DELETE operations, an implicit cursor
is automatically created. This cursor holds the data to be inserted or identifies the rows
to be updated or deleted.
Useful Attributes:
• %FOUND: True if the SQL operation affects at least one row.
• %NOTFOUND: True if no rows are affected.
• %ROWCOUNT: Returns the number of rows affected.
• %ISOPEN: Checks if the cursor is open.
Explicit Cursors
These are user-defined cursors created explicitly by users for custom operations. They
provide complete control over every part of their lifecycle: declaration, opening,
fetching, closing, and deallocating.
Explicit cursors are useful when:
• We need to loop through results manually
• Each row needs to be handled with custom logic
• We need access to row attributes during processing
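The explicit-cursor lifecycle (declare, open, fetch row by row, close) can be sketched with Python's sqlite3 cursor in place of PL/SQL; the table, column names, and per-row logic below are made up for illustration:

```python
import sqlite3

# Illustrative sketch: row-by-row cursor processing. The employees table
# and the "salary below 50000" rule are example data, not from the notes.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (emp_id INTEGER, name TEXT, salary REAL)")
cur.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [(1, "John", 40000), (2, "Claire", 55000)])

# "Open" the cursor by executing a query, then fetch row by row.
cur.execute("SELECT emp_id, name, salary FROM employees ORDER BY emp_id")
raises = []
for emp_id, name, salary in cur:      # row-by-row processing
    if salary < 50000:                # custom per-row logic
        raises.append((emp_id, name))

conn.close()                          # "close" / deallocate the cursor
print(raises)                         # [(1, 'John')]
```

In Oracle PL/SQL the same loop would use OPEN, FETCH, and CLOSE on a declared cursor, with %FOUND/%NOTFOUND controlling the loop.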
2. What is a Normalization? Explain the 1NF, 2NF & 3NF with examples.
Normalization is a systematic approach to organize data within a database to reduce
redundancy and eliminate undesirable characteristics such as insertion, update,
and deletion anomalies. The process involves breaking down large tables into smaller,
well-structured ones and defining relationships between them. This not only reduces
the chances of storing duplicate data but also improves the overall efficiency of the
database.
First normal form (1NF)
In 1NF, every database cell or relation contains an atomic value that can’t be further
divided, i.e., the relation shouldn’t have multivalued attributes.
Example:
The following table stores two phone numbers in a single attribute, so it violates 1NF:

Emp_ID  Name    Phone Number
1       John    12345767890
2       Claire  9242314321, 7689025341

After converting to 1NF, each cell holds a single atomic value:

Emp_ID  Name    Phone Number
1       John    12345767890
2       Claire  9242314321
2       Claire  7689025341

Here, we can notice data repetition, but 1NF doesn’t care about it.
Second normal form (2NF)
A relation is in 2NF if it is in 1NF and every non-prime attribute is fully functionally dependent on the whole primary key (no partial dependencies).
Example 2:
Consider the following table. Its primary key is {StudentId, ProjectId}.
The functional dependencies given are:
StudentId → StudentName
ProjectId → ProjectName

StudentId  ProjectId  StudentName  ProjectName
1          P2         John         IOT
2          P1         Claire       Cloud
3          P7         Clara        IOT
4          P3         Abhk         Cloud

StudentName and ProjectName each depend on only part of the key, so the table is not in 2NF. Decomposing it removes the partial dependencies:

Student
StudentId  StudentName
1          John
2          Claire
3          Clara
4          Abhk

Project
ProjectId  ProjectName
P2         IOT
P1         Cloud
P7         IOT
P3         Cloud

Third normal form (3NF)
A relation is in 3NF if it is in 2NF and no non-prime attribute is transitively dependent on the primary key.
Example: in the table below, Student → Teacher and Teacher → Subject, so Subject depends on the key transitively through Teacher.

Student  Teacher  Subject
John     Olivia   Physics
Clara    Emma     English
Robin    Olivia   Physics
Kaley    Sophia   English

Decomposing removes the transitive dependency:

Student  Teacher
John     Olivia
Clara    Emma
Robin    Olivia
Kaley    Sophia

Teacher  Subject
Olivia   Physics
Emma     English
Sophia   English
GUIDELINE 1: Design a schema that can be explained easily relation by relation. The semantics of the attributes should be easy to interpret.
1.2 Redundant Information in Tuples and Update Anomalies
- Mixing attributes of multiple entities may cause problems
- Information is stored redundantly wasting storage
- Problems with update anomalies:
- Insertion anomalies
- Deletion anomalies
- Modification anomalies
GUIDELINE 2: Design a schema that does not suffer from insertion, deletion, and update anomalies. If any anomalies are present, note them so that applications can take them into account.
1.3 Null Values in Tuples
GUIDELINE 3: Relations should be designed such that their tuples will have as few
NULL values as possible
- Attributes that are NULL frequently could be placed in separate relations (with the
primary key)
4. What is Functional Dependency? Write an algorithm to find the minimal cover for a given set of functional dependencies, and construct the minimal cover M for a set of functional dependencies.
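The standard minimal-cover algorithm has three steps: (1) rewrite every FD so its right-hand side is a single attribute; (2) remove extraneous attributes from left-hand sides; (3) remove redundant FDs, using attribute closure as the test at each step. A sketch in Python, with FDs represented as pairs of attribute sets (the sample set F below is made up):

```python
def closure(attrs, fds):
    """Attribute closure of attrs under the FD set fds."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def minimal_cover(fds):
    # Step 1: make every right-hand side a single attribute (and dedupe).
    fds = list(dict.fromkeys(
        (frozenset(l), frozenset([a])) for l, r in fds for a in r))
    # Step 2: drop extraneous LHS attributes: A is extraneous in X -> Y
    # if Y is in the closure of X - {A} under the full FD set.
    out = []
    for lhs, rhs in fds:
        for a in sorted(lhs):
            reduced = lhs - {a}
            if reduced and rhs <= closure(reduced, fds):
                lhs = reduced
        out.append((lhs, rhs))
    fds = list(dict.fromkeys(out))
    # Step 3: drop redundant FDs: X -> Y is redundant if Y is still in
    # the closure of X under the remaining FDs.
    minimal = list(fds)
    for fd in fds:
        rest = [f for f in minimal if f != fd]
        if fd[1] <= closure(fd[0], rest):
            minimal = rest
    return minimal

# Example set F = {A -> BC, B -> C, AB -> C}; its minimal cover is {A -> B, B -> C}.
F = [({"A"}, {"B", "C"}), ({"B"}, {"C"}), ({"A", "B"}, {"C"})]
for lhs, rhs in minimal_cover(F):
    print(set(lhs), "->", set(rhs))
```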
States of a transaction:
• Active state: This is the very first state of a transaction. While the read/write operations of the transaction are executing, the transaction is in the active state. If there is any failure, it goes to the failed state; if all operations succeed, it moves to the partially committed state. All changes made in this state are held in buffer memory.
• Partially committed state: Once all the instructions of the transaction have executed successfully, the transaction enters the partially committed state. If the changes are made permanent from the buffer memory, the transaction enters the committed state; if there is any failure, it enters the failed state. This intermediate state exists because a transaction can involve a large number of changes to the database, and if a power failure or other technical problem brings the system down before those changes are written out, the database would be left with inconsistent changes.
• Committed state: Once all operations have executed successfully and the transaction leaves the partially committed state, all its changes become permanent in the database. The changes cannot be rolled back, and the transaction then moves to the terminated state.
• Failed state: If any instruction fails while the transaction is in the active state, or any problem occurs while saving the changes permanently to the database (i.e., in the partially committed state), the transaction enters the failed state.
• Aborted state: When a transaction reaches the failed state, the database recovery system rolls back its changes, restoring the database to the consistent state that existed before the transaction began. The transaction is then said to be aborted.
• Terminated state: If a transaction is aborted, there are two ways to proceed: restart the transaction, or terminate it and free the system for other transactions. The latter is known as the terminated state.
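The states and legal transitions above can be encoded as a small state machine; this is a sketch, with state names taken from the text:

```python
# Legal transitions between transaction states, as described above.
TRANSITIONS = {
    "active":              {"partially_committed", "failed"},
    "partially_committed": {"committed", "failed"},
    "committed":           {"terminated"},
    "failed":              {"aborted"},
    "aborted":             {"terminated"},
    "terminated":          set(),
}

class Transaction:
    def __init__(self):
        self.state = "active"          # every transaction starts active

    def move_to(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

# A successful run: active -> partially committed -> committed -> terminated.
t = Transaction()
t.move_to("partially_committed")
t.move_to("committed")
t.move_to("terminated")
print(t.state)                         # terminated
```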
Triggers and assertions are database objects used to enforce data integrity and automate
certain actions within a database.
Triggers: Triggers are special types of stored procedures that are automatically executed
or fired when certain events occur in a database. These events can include INSERT,
UPDATE, or DELETE operations on a table. Triggers are commonly used to enforce
business rules, audit changes, or maintain data consistency.
Example of a trigger (illustrative syntax; the audit table and column names are examples):
CREATE TRIGGER employee_audit
AFTER INSERT ON employees
FOR EACH ROW
BEGIN
    INSERT INTO employees_audit (emp_id, action, changed_at)
    VALUES (NEW.emp_id, 'INSERT', CURRENT_TIMESTAMP);
END;
In this example, a trigger is created to automatically insert a record into an audit table whenever a new record is inserted into the employees table.
Assertions: Assertions are conditions that are defined and enforced at the database level
to ensure that the data in the database meets certain criteria. They are typically used to
enforce business rules or constraints that cannot be expressed using primary key, foreign
key, or check constraints.
Example of an assertion:
CHECK (
SELECT COUNT(*)
FROM employees
In this example, an assertion named "salary_check" is created to ensure that the salary of
all employees is greater than 0. The assertion uses a SELECT statement to compare the
count of employees with a salary greater than 0 to the total count of employees, ensuring
that all employees have a positive salary.
Triggers and assertions are powerful tools for maintaining data integrity and enforcing
business rules within a database.
10. Demonstrate the two-phase locking protocol used for concurrency control.
Locking in a database management system is used for handling transactions in databases. The two-phase locking (2PL) protocol ensures conflict-serializable schedules; a schedule is conflict serializable if it is conflict equivalent to some serial schedule. 2PL uses two kinds of locks:
• Shared lock: Data can only be read when a shared lock is applied; it cannot be written. It is denoted lock-S.
• Exclusive lock: Data can be read as well as written when an exclusive lock is applied. It is denoted lock-X.
Growing phase: The transaction only acquires locks; it cannot release any lock in this phase.
Shrinking phase: The transaction only releases locks; it cannot acquire any new lock. The shrinking phase begins with the transaction's first unlock operation. (In strict 2PL, exclusive locks are held until the transaction commits.)
An example schedule (each transaction acquires its lock before reading and releases it afterwards):

Time  Action                  Notes
t4    T1 locks B              growing phase of T1
t5    T1 reads B
t7    T1 releases lock on B   shrinking phase of T1 begins
t8    T2 locks A              growing phase of T2
t9    T2 reads A
t10   T2 releases lock on A   shrinking phase of T2 begins
Conservative (static) 2PL is a variation that requires a transaction to lock all the items it will access before it begins:
• If any of the data items is not available for locking, none of the items is locked and the transaction waits.
• The read and write sets must be known before the transaction begins, which is normally not possible.
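The two-phase rule itself can be sketched in a few lines: a transaction may acquire locks only until its first release, after which it may only release. This is a simplified sketch (no lock manager or conflict queueing):

```python
# Sketch of the two-phase rule: acquiring is allowed only in the growing
# phase; the first unlock switches the transaction to the shrinking phase.
class TwoPhaseTransaction:
    def __init__(self, name):
        self.name = name
        self.locks = set()
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError(f"{self.name}: cannot lock {item} in shrinking phase")
        self.locks.add(item)

    def unlock(self, item):
        self.shrinking = True      # first unlock starts the shrinking phase
        self.locks.discard(item)

t1 = TwoPhaseTransaction("T1")
t1.lock("A")
t1.lock("B")        # growing phase: acquiring is allowed
t1.unlock("A")      # shrinking phase begins
try:
    t1.lock("C")    # violates 2PL: lock after an unlock
except RuntimeError as e:
    print(e)
```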
The timestamp ordering protocol ensures that any conflicting read and write operations are executed in timestamp order.
Suppose a transaction Ti issues a read(Q):
1. If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten. Hence, the read operation is rejected, and Ti is rolled back.
2. If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti).
Suppose that transaction Ti issues write(Q):
1. If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed previously, and the system assumed that that value would never be produced. Hence, the write operation is rejected, and Ti is rolled back.
2. If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an obsolete value of Q. Hence, this write operation is rejected, and Ti is rolled back.
3. Otherwise, the write operation is executed, and W-timestamp(Q) is set to TS(Ti).
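The rules above can be sketched directly; here a rejected operation simply returns False to stand in for rolling the transaction back (the timestamp values are examples):

```python
# Sketch of basic timestamp ordering. Each item Q keeps a read timestamp
# and a write timestamp; out-of-order conflicting operations are rejected.
class Item:
    def __init__(self):
        self.r_ts = 0   # R-timestamp(Q)
        self.w_ts = 0   # W-timestamp(Q)

def read(ts, q):
    if ts < q.w_ts:              # Q was overwritten by a younger transaction
        return False             # reject: roll back the reader
    q.r_ts = max(q.r_ts, ts)
    return True

def write(ts, q):
    if ts < q.r_ts or ts < q.w_ts:   # value needed earlier, or obsolete write
        return False                 # reject: roll back the writer
    q.w_ts = ts
    return True

q = Item()
print(write(5, q))   # True: W-timestamp(Q) becomes 5
print(read(3, q))    # False: TS 3 must not read a value written at TS 5
print(read(7, q))    # True: R-timestamp(Q) becomes 7
print(write(6, q))   # False: Q was already read at timestamp 7
```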
CAP THEOREM
The CAP theorem is an important concept in distributed database systems that helps architects and designers understand the trade-offs while designing a system.
It states that a distributed system can only guarantee two of three properties: Consistency, Availability, and Partition Tolerance. This means no system can do it all, so designers must make smart choices based on their needs.
Consistency
Consistency means that all clients see the same data at the same time, no matter which node they connect to in a distributed system. Eventual consistency is a looser guarantee: clients will eventually see the same data on all nodes at some point in the future.
In a consistent system:
• All nodes see the same data at the same time, because the nodes are constantly communicating with each other and sharing updates.
• Any change made to the data on one node is immediately propagated to all other nodes, so everyone has the same up-to-date information.
Availability
Availability means that every non-failing node in a distributed system returns a response for all read and write requests in a bounded amount of time, even if one or more other nodes are down.
• Users' requests are served even when parts of the system cannot be reached, which implies the system is available and functioning.
• Every request receives a response, whether successful or not. This is a crucial aspect of availability, as it guarantees that users always get feedback.
Partition Tolerance
Partition Tolerance means that the system continues to operate despite arbitrary message loss or failure in parts of the system. Distributed systems guaranteeing partition tolerance can gracefully recover once the partition heals.
CAP TRADE-OFF
1. CA system
A CA system delivers consistency and availability across all nodes. It cannot do this if there is a partition between any two nodes, and therefore does not support partition tolerance.
2. CP system
A CP system delivers consistency and partition tolerance at the expense of availability. When a partition occurs between two nodes, the system shuts down the non-available node until the partition is resolved. Examples: MongoDB, Redis, and HBase.
3. AP system
An AP system delivers availability and partition tolerance at the expense of consistency. When a partition occurs, all nodes remain available, but those at the wrong end of a partition might return an older version of the data than others. Examples: CouchDB, Cassandra, and DynamoDB.
15. What are document based NOSQL systems? Explain basic operations CRUD in
MongoDB
Document-Based NoSQL Systems are a type of NoSQL database that store data in
document-like structures, typically using formats like JSON, BSON (Binary JSON), or
XML. These databases are designed to handle unstructured, semi-structured, or
structured data and provide flexibility, scalability, and high performance, especially
for web and big data applications.
1. Schema-less: Documents can have different fields, allowing dynamic changes to
data structure.
2. Document-Oriented: Each record is stored as a document (e.g., JSON).
3. Nested Data: Documents can contain nested structures such as arrays and sub-
documents.
4. Indexing Support: Most systems offer indexing for fast query performance.
5. Horizontal Scalability: Easy to scale across servers.
Popular Document-Based NoSQL Databases:
• MongoDB (most widely used)
• CouchDB
• Amazon DocumentDB
• RethinkDB
The basic methods of interacting with a MongoDB server are called CRUD
operations. CRUD stands for Create, Read, Update, and Delete. These CRUD
methods are the primary ways you will manage the data in your databases.
CRUD operations describe the conventions of a user interface that let users view,
search, and modify parts of the database.
• The Create operation is used to insert new documents in the MongoDB database.
• The Read operation is used to query a document in the database.
• The Update operation is used to modify existing documents in the database.
• The Delete operation is used to remove documents in the database.
Create Operations
For MongoDB CRUD, if the specified collection doesn’t exist, the create operation
will create the collection when it’s executed. Create operations in MongoDB target
a single collection, not multiple collections. Insert operations in MongoDB
are atomic on a single document level.
MongoDB provides two different create operations that you can use to insert
documents into a collection:
• db.collection.insertOne()
• db.collection.insertMany()
Read Operations
The Read operations are used to retrieve documents from the collection, or in other
words, read operations are used to query a collection for a document. We can perform
read operation using the following method provided by the MongoDB:
Method                    Description
db.collection.find()      Selects all documents in a collection that match the query.
db.collection.findOne()   Returns a single document that matches the query.
Update Operations
The update operations are used to update or modify the existing document in the
collection. We can update a single document or multiple documents that match a
given query. We can perform update operations using the following methods
provided by the MongoDB:
Method                       Description
db.collection.updateOne()    Updates a single document that matches the filter.
db.collection.updateMany()   Updates all documents that match the filter.
db.collection.replaceOne()   Replaces a single document that matches the filter.
Delete Operations
The delete operation are used to delete or remove the documents from a
collection. We can delete documents based on specific criteria or remove all
documents. We can perform delete operations using the following methods
provided by the MongoDB:
Method                       Description
db.collection.deleteOne()    Deletes the first document that matches the filter.
db.collection.deleteMany()   Deletes all documents that match the filter.
17. Illustrate insert, delete, update, alter & drop commands in SQL.
INSERT: This operation allows you to add new records or rows to a table.
UPDATE: The UPDATE operation enables you to modify existing records in a table.
DELETE: The DELETE operation allows you to remove records from a table.
The INSERT Statement
The INSERT statement is used to add new data into a table. It allows you to specify
the columns to which you want to insert data, as well as the values for each
column. The basic syntax for the INSERT statement is as follows:
INSERT INTO table_name (column1, column2, column3, ...)
VALUES (value1, value2, value3, ...);
The UPDATE Statement
The UPDATE statement is used to modify existing records in a table. It allows you to
change the values of specific columns based on certain conditions. The basic syntax
for the UPDATE statement is as follows:
UPDATE table_name
SET column1 = value1, column2 = value2, ...
WHERE condition;
The DELETE Statement
The DELETE statement is used to remove records from a table based on certain
conditions. It allows you to specify which rows you want to delete. The basic syntax
for the DELETE statement is as follows:
DELETE FROM table_name
WHERE condition;
CREATE TABLE
The CREATE TABLE command creates a new table in the database.
The following SQL creates a table called "Persons" that contains five columns:
PersonID, LastName, FirstName, Address, and City:
Example:
CREATE TABLE Persons (
PersonID int,
LastName varchar(255),
FirstName varchar(255),
Address varchar(255),
City varchar(255)
);
ALTER TABLE
The ALTER TABLE command adds, deletes, or modifies columns in an existing table. For example, to add an Email column to the Persons table:
ALTER TABLE Persons
ADD Email varchar(255);
DROP TABLE
The DROP TABLE command deletes a table in the database.
The following SQL deletes the table "Shippers":
Example
DROP TABLE Shippers;
Note: Be careful before deleting a table. Deleting a table results in loss of all
information stored in the table!
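The statements above can be run end to end against SQLite (note that SQLite's dialect differs slightly from other servers, e.g. its ALTER TABLE support is limited); the sample row is made up:

```python
import sqlite3

# CREATE, INSERT, UPDATE, DELETE, and DROP run against an in-memory SQLite DB.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""CREATE TABLE Persons (
    PersonID int, LastName varchar(255), FirstName varchar(255),
    Address varchar(255), City varchar(255))""")

cur.execute("INSERT INTO Persons (PersonID, LastName, FirstName, Address, City) "
            "VALUES (1, 'Hansen', 'Ola', 'Timoteivn 10', 'Sandnes')")
cur.execute("UPDATE Persons SET City = 'Oslo' WHERE PersonID = 1")
city = cur.execute("SELECT City FROM Persons WHERE PersonID = 1").fetchone()[0]
print(city)                         # Oslo

cur.execute("DELETE FROM Persons WHERE PersonID = 1")
cur.execute("DROP TABLE Persons")   # the table and all its rows are gone
conn.close()
```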
3. Reducing the Redundant Values in Tuples
Mixing attributes of multiple entities may cause problems: information is stored redundantly, wasting storage, and leading to update anomalies:
• Insertion anomalies
• Deletion anomalies
• Modification anomalies
For example, if student and department data are kept in one relation, there may be N students in one department, so the DeptNo and DeptName values are repeated N times, which leads to data redundancy.
Insertion anomaly: a new department that has no students cannot be inserted.
Deletion anomaly: if we delete the last student of a department, the whole information about that department is deleted.
Modification anomaly: if we change the value of one of the department's attributes, we must update the tuples of all students belonging to that department, or else the database will become inconsistent.
4. Reducing Null values in Tuples.
Note: Relations should be designed such that their tuples will have as few NULL
values as possible
Attributes that are NULL frequently could be placed in separate relations (with the
primary key)
Reasons for NULLs:
• attribute not applicable or invalid
• attribute value unknown (may or may not exist)
• value known to exist, but unavailable
5. Disallowing Spurious Tuples
Bad designs for a relational database may result in erroneous results for certain JOIN operations. The "lossless join" property is used to guarantee meaningful results for join operations.
Note: Relations should be designed to satisfy the lossless join condition. No spurious tuples should be generated by doing a natural join of any relations.
19. What is Functional dependency? Explain the inference rules for functional
dependency with proof.
In relational database theory, a functional dependency (FD) is a constraint between
two sets of attributes in a relation from a database. A functional dependency is
denoted as:
X→Y
Where:
• X and Y are subsets of attributes of a relation R.
• It means that if two tuples (rows) have the same values for attributes in X, then they
must have the same values for attributes in Y.
Example: in a relation EMPLOYEE(Ssn, Name, Dno), Ssn → Name holds because two tuples with the same Ssn must have the same Name.
The inference rules (Armstrong's axioms) for functional dependencies are:
IR1 (Reflexivity): If Y ⊆ X, then X → Y.
IR2 (Augmentation): If X → Y, then XZ → YZ.
IR3 (Transitivity): If X → Y and Y → Z, then X → Z.
Proof of IR3: suppose two tuples t1 and t2 agree on X. By X → Y they agree on Y, and then by Y → Z they agree on Z; hence X → Z holds.
Additional rules derivable from these:
IR4 (Decomposition): If X → YZ, then X → Y and X → Z.
IR5 (Union): If X → Y and X → Z, then X → YZ.
IR6 (Pseudotransitivity): If X → Y and WY → Z, then WX → Z.
Serial schedule:
A schedule S is serial if, for every transaction T participating in the schedule, all
the operations of T are executed consecutively in the schedule.
Otherwise, the schedule is called nonserial schedule.
Serializable schedule:
A schedule S is serializable if it is equivalent to some serial schedule of the same
n transactions.
Result equivalent:
Two schedules are called result equivalent if they produce the same final state of
the database.
Conflict equivalent:
Two schedules are said to be conflict equivalent if the order of any two conflicting
operations is the same in both schedules.
Conflict serializable:
A schedule S is said to be conflict serializable if it is conflict equivalent to some serial schedule.
Being serializable is not the same as being serial.
Being serializable implies that the schedule is a correct schedule.
It will leave the database in a consistent state.
The interleaving is appropriate and will result in a state as if the transactions
were serially executed, yet will achieve efficiency due to concurrent execution.
Fig: Constructing the precedence graphs for schedules A and D from Fig 21.5 to test for conflict serializability.
(a) Precedence graph for serial schedule A. (b) Precedence graph for serial schedule B. (c) Precedence graph for schedule C (not serializable). (d) Precedence graph for schedule D (serializable, equivalent to schedule A).
Another example of serializability testing: (a) the READ and WRITE operations of three transactions T1, T2, and T3.
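The precedence-graph test can be sketched directly: add an edge Ti → Tj for every pair of conflicting operations where Ti's operation comes first, then check the graph for a cycle. The schedule encoding below (transaction, operation, item) is made up for the example:

```python
def precedence_graph(schedule):
    """schedule: list of (transaction, op, item) with op in {'R', 'W'},
    in execution order. Returns the set of edges Ti -> Tj."""
    edges = set()
    for i, (ti, op1, x) in enumerate(schedule):
        for tj, op2, y in schedule[i + 1:]:
            # Two operations conflict if they are by different transactions,
            # touch the same item, and at least one of them is a write.
            if ti != tj and x == y and (op1 == "W" or op2 == "W"):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visited, stack = set(), set()
    def dfs(n):
        visited.add(n); stack.add(n)
        for m in graph.get(n, ()):
            if m in stack or (m not in visited and dfs(m)):
                return True
        stack.discard(n)
        return False
    return any(dfs(n) for n in graph if n not in visited)

# Non-serializable example: T1 and T2 each read X before the other writes it.
s = [("T1", "R", "X"), ("T2", "R", "X"), ("T1", "W", "X"), ("T2", "W", "X")]
edges = precedence_graph(s)
print(sorted(edges))     # [('T1', 'T2'), ('T2', 'T1')]
print(has_cycle(edges))  # True -> not conflict serializable
```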
24. What are the views in SQL? Explain with example
A view in SQL is a saved SQL query that acts as a virtual table. Unlike regular tables, views do not store data themselves; instead, they dynamically generate data by executing the SQL query defined in the view each time it is accessed. A view can fetch data from one or more tables and present it in a customized format. Example (table and column names are illustrative):
CREATE VIEW SalesStaff AS
SELECT Name, Salary
FROM EMPLOYEE
WHERE Dept = 'Sales';
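A view's "no stored data" behavior can be demonstrated with SQLite: rows inserted after the view is created still appear in it, because the view re-runs its query on every access. Table and view names below are made up:

```python
import sqlite3

# A view is a saved query, not a copy of the data.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (name TEXT, dept TEXT, salary REAL)")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [("John", "Sales", 40000), ("Claire", "HR", 55000)])

cur.execute("CREATE VIEW sales_staff AS "
            "SELECT name, salary FROM employee WHERE dept = 'Sales'")

# A row inserted after the view was created appears in it automatically.
cur.execute("INSERT INTO employee VALUES ('Robin', 'Sales', 45000)")
rows = cur.execute("SELECT name FROM sales_staff ORDER BY name").fetchall()
print(rows)          # [('John',), ('Robin',)]
conn.close()
```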
25. In SQL, write the usage of GROUP BY and HAVING clauses with suitable examples
Ans: GROUP BY is a SQL command commonly used to aggregate the data to get
insights from it. There are three phases when you group data:
• Split: the dataset is split up into chunks of rows based on the values of the
variables we have chosen for the aggregation
• Apply: Compute an aggregate function, like average, minimum and maximum,
returning a single value
• Combine: All these resulting outputs are combined in a unique table. In this way,
we’ll have a single value for each modality of the variable of interest
• GROUP BY Clause
• Purpose: Used to group rows that have the same values in specified columns into
summary rows (e.g., total sales by region).
• Commonly Used With: Aggregate functions like COUNT(), SUM(), AVG(), MAX(),
MIN().
• HAVING Clause
• Purpose: Used to filter groups created by GROUP BY. It’s similar to WHERE, but
WHERE filters rows before grouping, while HAVING filters after grouping.
GROUP BY and HAVING usage
Scenario: Find products whose total sales are greater than 1000 (the column name amount is illustrative).
SELECT product_id, SUM(amount) AS total_sales
FROM sales
GROUP BY product_id
HAVING SUM(amount) > 1000;
➤ Explanation: rows are grouped by product_id, SUM(amount) is computed per group, and HAVING keeps only the groups whose total exceeds 1000.
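The split-apply-combine phases can be run against SQLite with a hypothetical sales table:

```python
import sqlite3

# GROUP BY splits rows into groups, SUM aggregates each group, and
# HAVING filters the resulting groups (WHERE would filter rows first).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (product_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 600), (1, 700), (2, 300), (2, 200), (3, 1500)])

rows = cur.execute("""
    SELECT product_id, SUM(amount) AS total_sales
    FROM sales
    GROUP BY product_id
    HAVING SUM(amount) > 1000
    ORDER BY product_id""").fetchall()
print(rows)          # [(1, 1300.0), (3, 1500.0)] -- product 2 totals only 500
conn.close()
```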
26. Discuss the types of problems that may be encountered with transactions that run concurrently
A major problem with concurrently running transactions is deadlock, which arises when the following conditions hold simultaneously:
1. Mutual Exclusion: Each resource can be held by only one transaction at a time, and other transactions must wait for it to be released.
2. Hold and Wait: Transactions can request resources while holding on to resources
already allocated to them.
3. No Preemption: Resources cannot be taken away from a transaction forcibly, and the
transaction must release them voluntarily.
4. Circular Wait: Transactions are waiting for resources in a circular chain, where each
transaction is waiting for a resource held by the next transaction in the chain.
5. Inconsistent Data: Deadlock can lead to inconsistent data if transactions are unable to complete and leave the database in an intermediate state.
6. Difficult to Detect and Resolve: Deadlock can be difficult to detect and resolve, as it may involve multiple transactions, resources, and dependencies.
Lock compatibility matrix (multiple granularity locking):

       IS    IX    S     SIX   X
IS     YES   YES   YES   YES   NO
IX     YES   YES   NO    NO    NO
S      YES   NO    YES   NO    NO
SIX    YES   NO    NO    NO    NO
X      NO    NO    NO    NO    NO
INNER JOIN
An inner join is the most common join operation used in applications and can be regarded as the default join type. An inner join creates a new result table by combining column values of two tables (A and B) based on the join predicate (the condition). The result of the join can be defined as the outcome of first taking the Cartesian product (cross join) of all records in the tables (combining every record in table A with every record in table B) and then returning all records that satisfy the join predicate.
Example: SELECT * FROM employee
INNER JOIN department ON
employee.dno = department.dnumber;
CROSS JOIN
A cross join returns the Cartesian product of rows from the tables in the join. In other words, it produces rows that combine each row from the first table with each row from the second table.
OUTER JOIN
An outer join does not require each record in the two joined tables to have a matching record. The joined table retains each record even if no other matching record exists. Outer joins subdivide further into:
• Left outer joins
• Right outer joins
• Full outer joins
No implicit join notation for outer joins exists in standard SQL.
MULTIWAY JOIN
It is also possible to nest join specifications; that is, one of the tables in a join may itself be a joined table. This allows the specification of the join of three or more tables as a single joined table, which is called a multiway join.
Example: for each project, retrieve the project number, the controlling department number, and the department manager's last name, address, and birth date:
SELECT Pnumber, Dnum, Lname, Address, Bdate
FROM ((PROJECT JOIN DEPARTMENT ON Dnum=Dnumber)
JOIN EMPLOYEE ON Mgr_ssn=Ssn);
To retrieve the names of all employees who have two or more dependents
SELECT Lname, Fname
FROM EMPLOYEE
WHERE ( SELECT COUNT (*)
FROM DEPENDENT
WHERE Ssn=Essn ) >= 2;
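The difference between an inner and an outer join can be shown with SQLite, using a simplified employee/department schema (the rows below are examples; note the employee with no department):

```python
import sqlite3

# INNER JOIN drops non-matching rows; LEFT OUTER JOIN keeps them with NULLs.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employee (name TEXT, dno INTEGER)")
cur.execute("CREATE TABLE department (dnumber INTEGER, dname TEXT)")
cur.executemany("INSERT INTO employee VALUES (?, ?)",
                [("John", 5), ("Claire", 4), ("Robin", None)])
cur.executemany("INSERT INTO department VALUES (?, ?)",
                [(5, "Research"), (4, "Admin")])

inner = cur.execute("""SELECT e.name, d.dname FROM employee e
    INNER JOIN department d ON e.dno = d.dnumber ORDER BY e.name""").fetchall()
print(inner)   # [('Claire', 'Admin'), ('John', 'Research')] -- Robin dropped

left = cur.execute("""SELECT e.name, d.dname FROM employee e
    LEFT OUTER JOIN department d ON e.dno = d.dnumber ORDER BY e.name""").fetchall()
print(left)    # Robin is kept, paired with NULL
conn.close()
```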
SQL has various rules for dealing with NULL values. NULL is used to represent a missing value, but a NULL can have one of three different interpretations:
1. Unknown value. A person's date of birth may not be known, so it is represented by NULL.
2. Unavailable or withheld value. A person has a home phone but does not want it to be listed, so it is withheld and represented as NULL.
3. Not applicable attribute. An attribute CollegeDegree would be NULL for a person who has no college degree because it does not apply to that person.
Each individual NULL value is considered to be different from every other NULL value in
the various database records. When a NULL is involved in a comparison operation, the
result is considered to be UNKNOWN (it may be TRUE or it may be FALSE). Hence, SQL
uses a three-valued logic with values TRUE, FALSE, and UNKNOWN instead of the
standard two-valued (Boolean) logic with values TRUE or FALSE. It is therefore necessary
to define the results (or truth values) of three-valued logical expressions when the
logical connectives AND, OR, and NOT are used
The truth values of three-valued logical expressions are defined as follows: for AND, TRUE AND UNKNOWN = UNKNOWN, FALSE AND UNKNOWN = FALSE, and UNKNOWN AND UNKNOWN = UNKNOWN; for OR, TRUE OR UNKNOWN = TRUE, FALSE OR UNKNOWN = UNKNOWN, and UNKNOWN OR UNKNOWN = UNKNOWN; and NOT UNKNOWN = UNKNOWN.
In select-project-join queries, the general rule is that only those combinations of tuples for which the logical expression in the WHERE clause evaluates to TRUE are selected; tuple combinations that evaluate to FALSE or UNKNOWN are not selected.
SQL allows queries that check whether an attribute value is NULL using the comparison operators IS NULL and IS NOT NULL.
Example: Retrieve the names of all employees who do not have supervisors.
SELECT Fname, Lname
FROM EMPLOYEE
WHERE Super_ssn IS NULL;
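The three-valued behavior can be observed in SQLite: a `= NULL` comparison yields UNKNOWN for every row and so selects nothing, while IS NULL works as intended (the sample rows are made up):

```python
import sqlite3

# Comparing anything with NULL yields UNKNOWN, which is not TRUE, so such
# rows are never selected; IS NULL is the correct test.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE EMPLOYEE (Lname TEXT, Super_ssn TEXT)")
cur.executemany("INSERT INTO EMPLOYEE VALUES (?, ?)",
                [("Smith", "333445555"), ("Borg", None)])

# "= NULL" evaluates to UNKNOWN for every row, so nothing is selected.
eq_rows = cur.execute(
    "SELECT Lname FROM EMPLOYEE WHERE Super_ssn = NULL").fetchall()
print(eq_rows)        # []

# IS NULL finds the employee with no supervisor.
isnull_rows = cur.execute(
    "SELECT Lname FROM EMPLOYEE WHERE Super_ssn IS NULL").fetchall()
print(isnull_rows)    # [('Borg',)]
conn.close()
```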
One criterion for classifying a database system is according to the number of users
who can use the system concurrently
Single-User versus Multiuser Systems
A DBMS is:
• single-user: at most one user at a time can use the system (e.g., a personal computer system)
• multiuser: many users can use the system and hence access the database concurrently (e.g., an airline reservation database)
Concurrent access is possible because of multiprogramming. Multiprogramming can be achieved by:
• interleaved execution
• parallel processing
Multiprogramming operating systems execute some commands from one process, then suspend that process and execute some commands from the next process, and so on. A process is resumed at the point where it was suspended whenever it gets its turn to use the CPU again.
Hence, concurrent execution of processes is actually interleaved, as illustrated in
Figure 21.1
Figure 21.1, shows two processes, A and B, executing concurrently in an interleaved
fashion
Interleaving keeps the CPU busy when a process requires an input or output (I/O)
operation, such as reading a block from disk
The CPU is switched to execute another process rather than remaining idle during
I/O time
Interleaving also prevents a long process from delaying other processes.
If the computer system has multiple hardware processors (CPUs), parallel processing
of multiple processes is possible, as illustrated by processes C and D in Figure 21.1
Most of the theory concerning concurrency control in databases is developed in
terms of interleaved concurrency
In a multiuser DBMS, the stored data items are the primary resources that may be
accessed concurrently by interactive users or application programs, which are
constantly retrieving information from and modifying the database.
34. Explain Transactions, Database Items, Read and Write Operations, and DBMS Buffers
A transaction is an executing program that forms a logical unit of database processing. It includes one or more DB access operations, such as insertion, deletion, modification, or retrieval operations.
It can either be embedded within an application program using begin transaction and end transaction statements, or specified interactively via a high-level query language such as SQL.
Transactions that do not update the database are known as read-only transactions; transactions that do update the database are known as read-write transactions.
A database is basically represented as a collection of named data items. The size of a data item is called its granularity. A data item can be a database record, but it can also be a larger unit such as a whole disk block, or even a smaller unit such as an individual field (attribute) value of some record in the database. Each data item has a unique name.
Basic DB access operations that a transaction can include are:
read_item(X): Reads a DB item named X into a program variable.
write_item(X): Writes the value of a program variable into the DB item named X
Executing read_item(X) include the following steps:
1. Find the address of the disk block that contains item X
2. Copy the block into a buffer in main memory
3. Copy the item X from the buffer to program variable named X.
Executing write_item(X) include the following steps:
1. Find the address of the disk block that contains item X
2. Copy the disk block into a buffer in main memory
3. Copy item X from program variable named X into its correct location in buffer.
4. Store the updated disk block from buffer back to disk (either immediately or
later).
Decision of when to store a modified disk block is handled by recovery manager of
the DBMS in cooperation with operating system.
A DB cache includes a number of data buffers. When the buffers are all occupied a
buffer replacement policy is used to choose one of the buffers to be replaced. EG:
LRU
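The read_item/write_item steps and LRU buffer replacement can be sketched with plain dictionaries standing in for the disk and the buffer pool (block IDs, items, and the two-buffer capacity are all made up):

```python
from collections import OrderedDict

# Sketch of read_item/write_item over a DB cache with LRU replacement.
disk = {"blk1": {"X": 10}, "blk2": {"Y": 20}, "blk3": {"Z": 30}}
CAPACITY = 2
buffers = OrderedDict()          # block id -> block contents (the DB cache)

def _load(block_id):
    """Copy a disk block into a buffer, evicting the LRU block if full."""
    if block_id not in buffers:
        if len(buffers) >= CAPACITY:
            victim, contents = buffers.popitem(last=False)  # LRU victim
            disk[victim] = contents       # flush the victim back to disk
        buffers[block_id] = dict(disk[block_id])
    buffers.move_to_end(block_id)         # mark as most recently used
    return buffers[block_id]

def read_item(block_id, name):
    return _load(block_id)[name]          # copy item into a program variable

def write_item(block_id, name, value):
    _load(block_id)[name] = value         # update the buffered copy only

x = read_item("blk1", "X")
write_item("blk1", "X", x + 1)           # change lives in the buffer for now
read_item("blk2", "Y")
read_item("blk3", "Z")                   # evicts blk1 (LRU) and flushes it
print(disk["blk1"]["X"])                 # 11: the update reached disk on eviction
```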
A binary lock can have two states or values: locked and unlocked (or 1 and 0). If the value of the lock on X is 1, item X cannot be accessed by a database operation that requests the item.
If the simple binary locking scheme described here is used, every transaction must obey the following rules:
1. A transaction T must issue the operation lock_item(X) before any read_item(X) or write_item(X) operations in T.
2. A transaction T must issue the operation unlock_item(X) after all read_item(X) and write_item(X) operations are completed in T.
3. A transaction T will not issue a lock_item(X) operation if it already holds the lock on item X.
4. A transaction T will not issue an unlock_item(X) operation unless it already holds the lock on item X.
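A binary lock and the rules above can be sketched in a few lines; here a blocked transaction simply gets False back instead of being placed on a wait queue, which is a simplification:

```python
# Sketch of a binary lock: value 1 = locked, 0 = unlocked. Re-locking a held
# lock and unlocking a lock you do not hold violate the rules above.
class BinaryLock:
    def __init__(self):
        self.value = 0          # 0 = unlocked, 1 = locked
        self.holder = None

    def lock_item(self, txn):
        if self.value == 1:
            if self.holder == txn:
                raise RuntimeError(f"{txn} already holds the lock")  # rule 3
            return False        # a real system would put txn on a wait queue
        self.value, self.holder = 1, txn
        return True

    def unlock_item(self, txn):
        if self.holder != txn:
            raise RuntimeError(f"{txn} does not hold the lock")      # rule 4
        self.value, self.holder = 0, None

lock_X = BinaryLock()
print(lock_X.lock_item("T1"))   # True: T1 acquires X
print(lock_X.lock_item("T2"))   # False: T2 must wait
lock_X.unlock_item("T1")
print(lock_X.lock_item("T2"))   # True: T2 acquires X after T1 releases it
```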