
TRIGGERS

A trigger is a set of SQL statements stored in the database catalog. It is a set of actions that run automatically when a specified change operation (an SQL INSERT, UPDATE, DELETE or TRUNCATE statement) is performed on a specified table or view.
Definition: A trigger is "a statement that is executed automatically by the system as a side effect of a modification to the database".
To design a trigger mechanism, we must:
1. Specify the conditions under which the trigger is to be executed.
2. Specify the actions to be taken when the trigger executes.
Triggering events can be insert, delete or update. A trigger can be set to fire BEFORE an event occurs or AFTER an event occurs, or we can even bypass the event by using the INSTEAD OF clause.
There are two types of trigger: row-level triggers and statement-level triggers.
1. A row-level trigger is fired once for each affected row.
2. A statement-level trigger is fired only once per statement.
For example, consider the statement:
UPDATE account_current SET balance = balance + 100 WHERE balance > 100000;
Suppose this statement affects 20 rows. If a row-level trigger is defined for the table, the trigger fires once for each of the 20 updated rows. A statement-level trigger would fire only once.
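As an illustration, here is a minimal PL/pgSQL sketch of a row-level AFTER UPDATE trigger. The tables accounts and accounts_audit and all names in it are assumptions made up for this example, not part of the notes above.

CREATE TABLE accounts (id INT PRIMARY KEY, balance NUMERIC);
CREATE TABLE accounts_audit (
    id          INT,
    old_balance NUMERIC,
    new_balance NUMERIC,
    changed_at  TIMESTAMP
);

-- Trigger function: runs once per affected row (see FOR EACH ROW below).
CREATE OR REPLACE FUNCTION log_balance_change() RETURNS trigger AS $$
BEGIN
    INSERT INTO accounts_audit
    VALUES (OLD.id, OLD.balance, NEW.balance, now());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- FOR EACH ROW makes this a row-level trigger;
-- FOR EACH STATEMENT would fire it only once per UPDATE statement.
-- (On PostgreSQL versions before 11, write EXECUTE PROCEDURE instead.)
CREATE TRIGGER balance_audit
AFTER UPDATE ON accounts
FOR EACH ROW
EXECUTE FUNCTION log_balance_change();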
Functions / Stored Procedures
PostgreSQL functions are also known as stored procedures. Functions promote reuse at the database level: other applications can interact directly with the stored procedures instead of duplicating the logic in a middle tier.
Arguments can be of type IN, OUT, INOUT or VARIADIC, as explained below:
(i) IN: By default, a parameter in PostgreSQL is an IN parameter. We can pass IN parameters to the function, but we cannot get them back as part of the result.
(ii) OUT: An OUT parameter is part of the function's argument list, and its value is returned as part of the result. To define OUT parameters, we use the OUT keyword.
(iii) INOUT: An INOUT parameter is the combination of an IN and an OUT parameter: the caller passes a value to the function, the function may change it, and the value is passed back as part of the result.
(iv) VARIADIC: A PostgreSQL function can accept a variable number of arguments, with the condition that all of them have the same data type. The arguments are passed to the function as an array.
The function body must contain a RETURN statement (unless OUT parameters are used, as in the example below).
Syntax:
RETURN expression;
• function_body contains the executable part.
• The AS keyword is used for creating a standalone function.
• plpgsql is the name of the language the function is implemented in. For backward compatibility, the name can be enclosed in single quotes.
Example: CREATE FUNCTION
CREATE OR REPLACE FUNCTION hi_lo (IN a NUMERIC, IN b NUMERIC, IN c NUMERIC,
                                  OUT hi NUMERIC, OUT lo NUMERIC) AS $$
BEGIN
    hi := GREATEST(a, b, c);
    lo := LEAST(a, b, c);
END;
$$ LANGUAGE plpgsql;
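Because hi and lo are OUT parameters, the result comes back as a row; for example:
SELECT * FROM hi_lo(10, 4, 6);   -- returns hi = 10, lo = 4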
Features of PL/pgSQL
1. PL/pgSQL is easy to use.
2. PL/pgSQL adds control structures to the SQL language.
3. PL/pgSQL can be used to create functions and trigger procedures.
4. It can perform complex computations.
5. PL/pgSQL inherits all user-defined types, functions, and operators.
6. It can be defined to be trusted by the server.
Views / Syntax for Creating Views
We can use views to restrict table access so that users see only specific rows or columns of a table. A view is a subset of a real table, selecting certain columns or certain rows from an ordinary table. Views are pseudo-tables: they are not real tables. When we create a view, we basically create a query and assign it a name; a view is therefore useful for wrapping a commonly used complex query. A view can be created from one or many tables, depending on the PostgreSQL query used. A normal view does not store any data (unlike a materialized view, which does). Since views are not ordinary tables, we may not be able to execute a DELETE, INSERT, or UPDATE statement on a view.
Syntax to create view
Syntax:CREATE [TEMP | TEMPORARY] VIEW view_name AS
SELECT column1, column2.....
FROM table_name
WHERE [condition];
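As a sketch, assuming a table employee(emp_id, name, salary, dept) (the names are illustrative), a view exposing only two columns of one department's rows could be created as:

CREATE VIEW dept10_employees AS
SELECT emp_id, name
FROM employee
WHERE dept = 10;

-- Users who query the view see only these rows and columns:
SELECT * FROM dept10_employees;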

CURSORS
A PL/pgSQL cursor allows us to encapsulate a query and process each individual row at a
time. We use cursors when we want to divide a large result set into parts and process each
part individually. If we process it at once, we may have a memory overflow error. In addition,
we can develop a function that returns a reference to a cursor. This is an efficient way to
return a large result set from a function. The caller of the function can process the result set
based on the cursor reference.
Cursors can be of two types: implicit cursors and explicit cursors.
1. Implicit cursors are declared and managed by PL/pgSQL for all DML and PL/pgSQL SELECT statements.
2. Explicit cursors are declared and managed by the programmer.
We will see how to manage explicit cursor operations, which are as follows:
Step 1: Declare a cursor.
Step 2: Open the cursor.
Step 3: Fetch rows from the result set into a target.
Step 4: Check if there are more rows left to fetch. If yes, go to step 3; otherwise go to step 5.
Step 5: Close the cursor.
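The five steps map directly onto PL/pgSQL statements. A minimal sketch, reusing the hypothetical accounts table from the trigger example above:

CREATE OR REPLACE FUNCTION sum_balances() RETURNS NUMERIC AS $$
DECLARE
    cur   CURSOR FOR SELECT balance FROM accounts;  -- Step 1: declare
    b     NUMERIC;
    total NUMERIC := 0;
BEGIN
    OPEN cur;                                       -- Step 2: open
    LOOP
        FETCH cur INTO b;                           -- Step 3: fetch a row
        EXIT WHEN NOT FOUND;                        -- Step 4: more rows left?
        total := total + b;
    END LOOP;
    CLOSE cur;                                      -- Step 5: close
    RETURN total;
END;
$$ LANGUAGE plpgsql;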
RAISE statement:
The RAISE statement is used to raise errors and exceptions during a PL/pgSQL function's operation. A RAISE statement sends the specified information to the PostgreSQL elog mechanism, the standard PostgreSQL error-logging utility.
Syntax:
RAISE level 'format string' [, expression [...]];
Following the RAISE keyword is the level option, which specifies the error severity. PostgreSQL has the following levels: DEBUG, LOG, NOTICE, INFO, WARNING and EXCEPTION.
1. DEBUG: DEBUG-level statements send the specified text as a DEBUG: message to the PostgreSQL log, and to the client program if the client is connected to a database cluster running in debug mode. DEBUG-level RAISE statements are ignored by a database running in production mode.
2. NOTICE: This level sends the specified text as a NOTICE: message to the PostgreSQL log and the client program in any PostgreSQL operation mode.
3. EXCEPTION: This level sends the specified text as an ERROR: message to the client program and the PostgreSQL database log. The EXCEPTION level also causes the current transaction to be aborted. If we don't specify a level, the RAISE statement defaults to EXCEPTION, which raises an error and stops the current transaction.
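A short sketch of the levels in use (the message texts are made up):

DO $$
BEGIN
    RAISE NOTICE 'backup started at %', now();      -- logged in any mode
    RAISE WARNING 'balance is low';                 -- logged, execution continues
    RAISE EXCEPTION 'balance % below minimum', 50;  -- aborts the current transaction
END;
$$;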

PL/pgSQL & Types
Variables are used within PL/pgSQL code to store data of a specified type. Variables in PL/pgSQL can be of any of SQL's standard data types, such as INTEGER or CHAR. In addition to the SQL data types, PL/pgSQL provides the RECORD data type, which allows us to store row information without specifying the columns in advance; the column structure is supplied when data is assigned to the variable. All variables used within a code block must be declared under the DECLARE keyword. If a variable is not initialized to a default value when it is declared, its value defaults to SQL NULL.
Some of the supported data types are: boolean, char, integer, double precision, date and time.
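A minimal sketch of a DECLARE section, including a RECORD variable (the values are made up):

DO $$
DECLARE
    counter  INTEGER := 1;   -- explicitly initialized
    emp_name CHAR(20);       -- no default, so it starts as NULL
    r        RECORD;         -- column structure is fixed when a row is assigned
BEGIN
    SELECT 101 AS id, 'Alice' AS name INTO r;
    RAISE NOTICE 'id = %, name = %, counter = %', r.id, r.name, counter;
END;
$$;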
Transaction Concepts
A transaction is a program unit whose execution accesses and possibly updates the contents of a database. A transaction can be defined as "a logical unit of database processing that includes one or more database access operations".
If the database was in a consistent state before a transaction, then on completion of the transaction the database will again be in a consistent state. In day-to-day life, a transaction can be defined as "an act of giving something to a person and receiving something from that person in return".
Properties of Transaction
1. Atomicity: Atomicity ensures that at the end of the transaction, either no changes have occurred to the database or the database has been changed in a consistent manner. At the end of a transaction, the updates made by the transaction become accessible to other transactions and processes outside the transaction.
2. Consistency: Consistency implies that if the database was in a consistent state before the start of a transaction, then on termination of the transaction it will also be in a consistent state. In other words, data is in a consistent state both when a transaction starts and when it ends.
3. Isolation: Isolation indicates that the actions performed by a transaction are hidden from outside the transaction until the transaction terminates. Thus each transaction is unaware of the other transactions executing concurrently in the system.
4. Durability: Durability ensures that once a transaction completes successfully (commits), the changes it has made to the database persist, even if there are system failures.
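For example, a funds transfer is one logical unit of work. In SQL (the accounts table is the illustrative one used earlier):

BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- both updates persist together; ROLLBACK would undo both (atomicity)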
Cascadeless schedule:
A transaction may read a data item only after the transaction that last wrote it has committed. This prevents cascading aborts and saves CPU time.
Recoverable Schedule:
A schedule in which, for each pair of transactions Ti and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the commit operation of Tj. Schedules that meet this condition are called recoverable schedules.
Strict schedule:
A transaction cannot read or write an item until the last transaction that wrote it has committed. This prevents cascading rollbacks and makes recovery easier.
Non-recoverable schedule:
A schedule that is not recoverable.
States of a Transaction / Life Cycle of a Transaction
1. Active State (Initial State): This is the initial state of a transaction. A transaction is active while it is executing. A transaction always starts in the active state and remains there until all its commands have been executed.
2. Partially Committed: When a transaction completes its last statement (command), it enters the partially committed state.
3. Failed: If the system decides that the normal execution of the transaction can no longer proceed, the transaction is termed failed. If some failure occurs in the active or partially committed state, the transaction enters the failed state.
4. Committed: When the transaction completes its execution successfully, it enters the committed state from the partially committed state.
5. Aborted: To ensure the atomicity property, the changes made by a failed transaction are undone, i.e. the transaction is rolled back. After rollback, the transaction enters the aborted state.
Complete schedule:
A schedule that contains either a commit or an abort action for each transaction.
Conflict Serializability:
Consider two transactions T1 and T2 and a schedule S for T1 and T2. Let I1 and I2 be two instructions. If I1 and I2 refer to different data items, then I1 and I2 can be executed in any order. But if I1 and I2 refer to the same data item, the order of the two instructions may matter. Since I1 and I2 can each be only a read or a write operation, the following three cases are possible:
(i) I1 = read(A), I2 = read(A): The order of I1 and I2 does not matter, because both only read the data.
(ii) I1 = read(A), I2 = write(A), or I1 = write(A), I2 = read(A): If read(A) is executed before write(A), it reads the original value of A; otherwise it reads the value of A written by write(A). Hence the order of I1 and I2 matters.
(iii) I1 = write(A), I2 = write(A): The order of I1 and I2 does not directly affect either T1 or T2, but the database is changed differently, and that makes a difference for the next read.
Serializable Schedule
A serializable schedule in a database management system (DBMS) is a transaction execution arrangement that ensures the same final result as if the transactions were executed one after the other.
Serial Schedule
Definition: "If the actions (operations) of different transactions are not interleaved, i.e. transactions are executed one by one from start to finish, the schedule is called a Serial Schedule."
Concurrent Schedule (Non-Serial Schedule)
When several transactions are executed concurrently, the corresponding schedule is called a Concurrent Schedule.
LOCKS WITH MULTIPLE GRANULARITY
In the concurrency control schemes described so far, we have used each individual data item as the unit on which synchronization is performed. In some cases, however, it would be advantageous to group several data items and treat them as one synchronization unit. For example, if a transaction Ti needs to access the entire database and a locking protocol is used, then it must lock each data item in the database. It would be better if Ti could issue a single lock request to lock the entire database. Conversely, if a transaction Ti needs to access only one record in the database, it should not be required to lock the entire database. That is, we need a mechanism that allows the system to define multiple levels of granularity. We can provide one by allowing data items to be of various sizes and defining a hierarchy of data granularities, where the small granularities are nested within larger ones. Such a hierarchy can be represented as a tree.

Lock-based Protocol
Serializability can easily be ensured if access to the database is done in a mutually exclusive manner, i.e. if one transaction is accessing a data item, no other transaction can modify that data item.
1. Lock: A lock is a variable associated with each data item. Manipulating the value of a lock is called locking.
2. Shared: If a transaction Ti has obtained a shared-mode lock (denoted by S) on item A, then Ti can read A but cannot write it. A is also called a read-locked item, since multiple transactions are allowed to read a database item concurrently.
3. Exclusive: If a transaction Ti has obtained an exclusive-mode lock (denoted by X) on item A, then Ti can both read and write A. A is also called a write-locked item, since a transaction exclusively holds the lock on the item until it finishes updating it.
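One way to observe the two lock modes in PostgreSQL is with row-level locking clauses (table name as in the earlier sketches):

BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR SHARE;   -- shared lock: others may read, not update
COMMIT;

BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;  -- exclusive lock: blocks other writers
UPDATE accounts SET balance = balance + 10 WHERE id = 1;
COMMIT;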
Two-Phase Locking (2PL) Protocol
A locking protocol is a set of rules followed by all transactions while requesting and releasing locks. The 2PL protocol requires that each transaction issue its lock and unlock requests in two phases:
1. Growing Phase: A transaction may obtain locks, but may not release any lock.
2. Shrinking Phase: A transaction may release locks, but may not obtain any new locks.
Initially the transaction is in the growing phase, in which it acquires locks as needed. Once the transaction releases a lock, it enters the shrinking phase and can issue no more lock requests.
The point in the schedule where the transaction has obtained its final lock (the end of its growing phase) is called the lock point of the transaction. Transactions can be ordered according to their lock points, and this ordering gives a serializability ordering for the transactions. The resulting serial schedule is conflict equivalent, i.e. the two-phase locking protocol ensures conflict serializability. Its variants are listed below.
Variations of Two-Phase Locking
1. Strict Two-Phase Locking Protocol (Strict 2PL): Cascading rollbacks can be avoided by a modification of two-phase locking called the strict two-phase locking protocol, which requires that all exclusive-mode locks taken by a transaction be held until the transaction commits.
2. Rigorous Two-Phase Locking Protocol (Rigorous 2PL): It requires that all locks be held until the transaction commits. With rigorous two-phase locking, transactions can be serialized in the order in which they commit.
3. Conservative 2PL: Conservative 2PL, also called static 2PL, requires a transaction to lock all the items it accesses before the transaction begins execution, by predeclaring its read set and write set.
Timestamp Ordering Protocol
The timestamp ordering protocol ensures that any conflicting read and write operations are executed in timestamp order. This protocol works as follows:
1. Suppose that transaction Ti issues read(Q).
(i) If TS(Ti) < W-timestamp(Q), then Ti needs to read a value of Q that was already overwritten. Hence, the read operation is rejected and Ti is rolled back.
(ii) If TS(Ti) ≥ W-timestamp(Q), then the read operation is executed, and R-timestamp(Q) is set to the maximum of R-timestamp(Q) and TS(Ti).
2. Suppose that Ti issues write(Q).
(i) If TS(Ti) < R-timestamp(Q), then the value of Q that Ti is producing was needed previously, and the system assumed that value would never be produced. Hence, the write operation is rejected and Ti is rolled back.
(ii) If TS(Ti) < W-timestamp(Q), then Ti is attempting to write an outdated value of Q. Hence, the write operation is rejected and Ti is rolled back.
(iii) Otherwise, the write operation is executed and W-timestamp(Q) is set to TS(Ti).
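Real engines implement these checks inside the storage layer, but the rules can be sketched in PL/pgSQL. Everything here (the ts_meta table, its columns, the return strings) is a made-up illustration of the checks above:

CREATE TABLE ts_meta (
    item TEXT PRIMARY KEY,
    r_ts BIGINT NOT NULL DEFAULT 0,   -- R-timestamp(Q)
    w_ts BIGINT NOT NULL DEFAULT 0    -- W-timestamp(Q)
);

CREATE OR REPLACE FUNCTION to_check_read(p_item TEXT, p_ts BIGINT) RETURNS TEXT AS $$
DECLARE
    m ts_meta%ROWTYPE;
BEGIN
    SELECT * INTO m FROM ts_meta WHERE item = p_item;
    IF p_ts < m.w_ts THEN
        RETURN 'reject read, roll back T';                  -- rule 1(i)
    END IF;
    UPDATE ts_meta SET r_ts = GREATEST(r_ts, p_ts)
    WHERE item = p_item;                                    -- rule 1(ii)
    RETURN 'read executed';
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION to_check_write(p_item TEXT, p_ts BIGINT) RETURNS TEXT AS $$
DECLARE
    m ts_meta%ROWTYPE;
BEGIN
    SELECT * INTO m FROM ts_meta WHERE item = p_item;
    IF p_ts < m.r_ts OR p_ts < m.w_ts THEN
        RETURN 'reject write, roll back T';                 -- rules 2(i) and 2(ii)
    END IF;
    UPDATE ts_meta SET w_ts = p_ts WHERE item = p_item;     -- rule 2(iii)
    RETURN 'write executed';
END;
$$ LANGUAGE plpgsql;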
BUFFER MANAGEMENT
A database buffer is a temporary storage area in main memory used to hold a copy of a database block. Database buffers are grouped in an area of memory called the buffer pool. The subsystem responsible for the allocation and management of buffer space is called the buffer manager. The buffer manager of a DBMS is the software component responsible for using a limited amount of main storage as disk-page buffers, thereby reducing the number of disk I/Os per database transaction.
1. Steal versus No-Steal Buffer Management: Under a Steal policy, the buffer manager may write pages out to disk even if the transaction that modified them is still active. The alternative is the No-steal policy, in which all dirty pages are retained in the buffer pool until the final outcome of the transaction has been determined.
2. Force versus No-Force Buffer Management: Force versus No-force concerns the writing of modified (dirty) pages from the buffer pool at commit time. There are two basic approaches: under the Force policy, all pages modified by a transaction are written to disk when the transaction commits; under the No-force policy, they may remain in the buffer pool after the commit.
Deadlock Prevention Algorithms
Deadlock prevention means that we design the system so that there is no chance of a deadlock occurring.
There are two approaches to deadlock prevention, as given below:
1. Ensure that cyclic waits cannot occur, by ordering the requests for locks or by requiring all locks to be acquired together.
2. Perform transaction rollbacks instead of waiting for a lock whenever the wait could potentially result in a deadlock.
One scheme under the first approach is to impose a partial ordering on all data items and require that a transaction lock data items only in that order. This scheme can be implemented using the tree protocol.
The second approach uses preemption and transaction rollbacks; its schemes are as follows.
When a transaction T2 requests a lock that is held by a transaction T1, the lock granted to T1 may be preempted by rolling back T1 and granting the lock to T2.
Two deadlock prevention schemes using timestamps under the second approach are:
1. Wait-die: This scheme is based on a non-preemptive technique. When a transaction Ti requests a lock on a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp smaller than that of Tj (i.e. Ti is older); otherwise Ti is rolled back (dies).
2. Wound-wait: This scheme is based on a preemptive technique. When a transaction Ti requests a lock on a data item currently held by Tj, Ti is allowed to wait only if it has a timestamp larger than that of Tj (i.e. Ti is younger); otherwise Tj is rolled back (Tj is wounded by Ti).
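A minimal decision-function sketch of the two schemes (the function name and return strings are made up; the timestamp arguments play the roles of TS(Ti) and TS(Tj)):

CREATE OR REPLACE FUNCTION resolve_lock_conflict(
    scheme TEXT, ts_requester BIGINT, ts_holder BIGINT
) RETURNS TEXT AS $$
BEGIN
    IF scheme = 'wait-die' THEN
        -- Older requester (smaller timestamp) waits; younger requester dies.
        IF ts_requester < ts_holder THEN
            RETURN 'requester waits';
        END IF;
        RETURN 'requester rolled back (dies)';
    ELSIF scheme = 'wound-wait' THEN
        -- Older requester wounds (rolls back) the younger holder; younger requester waits.
        IF ts_requester < ts_holder THEN
            RETURN 'holder rolled back (wounded)';
        END IF;
        RETURN 'requester waits';
    END IF;
    RETURN 'unknown scheme';
END;
$$ LANGUAGE plpgsql;

-- Example: SELECT resolve_lock_conflict('wait-die', 5, 9);  -- older requester: waits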
CLASSIFICATION OF TRANSACTION FAILURES
There are different types of failure that may occur in a system, and each needs to be dealt with in a different manner. The simplest form of failure is one that does not result in the loss of information in the system.
1. Transaction Failure: (i) System Error: The system has entered an undesirable state, as a result of which a transaction cannot continue with its normal execution. The transaction, however, can be re-executed at a later time. (ii) Logical Error: The transaction can no longer continue with its normal execution owing to some internal condition, such as data not found, bad input, or a resource limit being exceeded.
2. System Crash: A hardware malfunction, or a bug in the database software or the operating system, causes the loss of the content of volatile storage and brings transaction processing to a halt. The content of non-volatile storage remains undamaged and is not corrupted.
3. Disk Failure: A disk block loses its content as a result of either a head crash or a failure during a data transfer operation. Copies of the data on other disks, or on tertiary media such as tapes, are used to recover from the failure. To determine how the system should recover from a failure or crash, we need to recognize the failure modes of the devices used for storing data.
Thomas Write Rule
The Thomas Write Rule does not enforce conflict serializability, but it rejects fewer write operations by modifying the checks for the write_item(X) operation:
1. If R_TS(X) > TS(T), then abort and roll back T and reject the operation.
2. If W_TS(X) > TS(T), then do not execute the write operation and continue processing. This is a case of an outdated (obsolete) write. Remember, outdated writes are ignored under the Thomas Write Rule, whereas a transaction following the basic timestamp-ordering protocol would be aborted in this case.
3. If neither condition 1 nor condition 2 holds, then and only then execute the write_item(X) operation of T and set W_TS(X) to TS(T).
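Compared with the basic timestamp-ordering checks sketched earlier, only the obsolete-write case changes. A hypothetical PL/pgSQL sketch of the decision (names and return strings are made up):

CREATE OR REPLACE FUNCTION thomas_write_check(ts_t BIGINT, r_ts BIGINT, w_ts BIGINT)
RETURNS TEXT AS $$
BEGIN
    IF r_ts > ts_t THEN
        RETURN 'abort and roll back T';                 -- rule 1
    ELSIF w_ts > ts_t THEN
        RETURN 'ignore obsolete write, continue';       -- rule 2 (basic TO would roll back)
    ELSE
        RETURN 'execute write, set W_TS(X) := TS(T)';   -- rule 3
    END IF;
END;
$$ LANGUAGE plpgsql;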
Deadlock Detection:
This is one method of dealing with deadlock. It allows the system to enter a deadlock state and then tries to recover using a deadlock detection and deadlock recovery scheme. If the probability that the system enters a deadlock state is relatively low, this method is efficient. An algorithm that examines the state of the system is executed periodically to determine whether a deadlock has occurred. If one has occurred, the system must attempt to recover from it.
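PostgreSQL, for instance, runs such a check automatically on transactions that have been blocked for longer than the deadlock_timeout setting. Blocked sessions can also be inspected by hand, for example:

SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;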
Deadlock Recovery Techniques
1. Process Termination: To eliminate a deadlock, we can simply kill one or more processes. For this, we use two methods: abort all the processes involved in the deadlock, or abort one process at a time until the deadlock is eliminated.
2. Resource Preemption: To eliminate deadlocks using resource preemption, we preempt some resources from processes and give those resources to other processes. Preempt means taking resources away from a process.
(a) Selection of a Victim: Given a set of deadlocked transactions, we must determine which transaction to roll back to break the deadlock. The following should be kept in mind when choosing the transaction to roll back: (i) How many data items has the transaction used? (ii) How many more data items does the transaction need in order to complete?
(b) Rollback: Once we have decided that a particular transaction must be rolled back, we must determine how far to roll it back. The simplest solution is total rollback: abort the transaction and restart it. However, it is more efficient to roll the transaction back only as far as necessary to break the deadlock.
(c) Starvation: In a system where the selection of victims is based primarily on cost factors, it may happen that the same transaction is always picked as a victim. As a result, that transaction never completes its designated task, and thus there is starvation.
STATISTICAL DATABASE SECURITY / Encryption Techniques
Statistical databases are mainly used to produce statistics about various populations. The database may contain confidential data on individuals, which should be protected from user access. Users are permitted to retrieve statistical information about the populations, such as averages, sums, counts, maximums, minimums, and standard deviations.
A population is a set of tuples of a relation (table) that satisfy some selection condition. Statistical queries involve applying statistical functions to a population of tuples. For example, we may want to retrieve the number of individuals in a population or the average income in the population. However, statistical users are not allowed to retrieve individual data, such as the income of a specific person.
For example, if a user is permitted statistical access to an employee database, he or she is
able to write queries such as:
SELECT SUM(Salary)
FROM Employee
WHERE Dept = 10;
but not:
SELECT Salary
FROM Employee
WHERE empId = 'E101';

Checkpoint
When a system failure occurs, some transactions need to be redone and some need to be undone. The log records can be used to determine this, but doing so requires searching the entire log.
There are two major difficulties with this approach:
1. The search process is time-consuming.
2. Most of the transactions that would be redone have already written their updates into the database, so it is better to avoid such redo operations.
To reduce these overheads, checkpoints are introduced. During execution, the system maintains the log using the immediate or deferred database modification technique.
In addition, the system periodically performs checkpoints, which require the following sequence of operations:
1. Output onto stable storage, all log records currently stored in main memory.
2. Output to the disk, all modified buffer blocks.
3. Output onto stable storage, a log record <checkpoint>.
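In PostgreSQL, checkpoints happen automatically at configured intervals, and a superuser can also force one immediately:

CHECKPOINT;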

LOG-BASED RECOVERY
Log-based recovery techniques maintain transaction logs to keep track of all update operations of the transactions.
Recovery Procedures: Two operations applied to recover from a failure are undo and redo. These are applied, with the help of the log, to the last consistent state of the database.
1. Undo: This operation reverses (rolls back) the changes made to the database by an uncommitted transaction and restores the database to the consistent state that existed before the start of the transaction.
2. Redo: This operation reapplies the changes of a committed transaction and restores the database to the consistent state it would be in at the end of the transaction. This operation is required when the changes of a committed transaction are not, or are only partially, reflected in the database on disk. The redo sets the database on disk to the new values for the committed transaction.
Recovery Algorithms: Conceptually, there are two main techniques for recovery from non-catastrophic transaction failures:
1. Deferred update (or NO-UNDO/REDO) algorithm.
2. Immediate update (or UNDO/REDO) algorithm.
RELATIONSHIP BETWEEN RECOVERY MANAGEMENT AND BUFFER MANAGEMENT
A DBMS application program requires input/output (I/O) operations, which are performed by a component of the operating system. These I/O operations normally use buffers. The recovery management system of the DBMS is responsible for recovery from hardware or software failures. The recovery manager ensures that the database remains in a consistent state in the presence of failures; it is responsible for transaction commit and abort operations, for maintaining a log, and for restoring the system to a consistent state after a crash. Buffer management effectively provides a temporary copy of a database page. It is therefore used in database recovery: modifications are made to this temporary copy while the original page remains unchanged in secondary storage.
Database Security and the DBA / DBA Responsible for Security
• The Database Administrator (DBA) is the central authority for managing a database system.
• The DBA's responsibilities include:
1. Granting privileges to users who need to use the system.
2. Classifying users and data in accordance with the policy of the organization.
The DBA is responsible for the overall security of the database system. The DBA has a DBA account in the DBMS, sometimes called a system or superuser account.
CONCEPT OF LOG
The most widely used structure for recording database modifications is the log. It is a sequence of log records and maintains a record of all update activities in the database. There are several types of log records for recording significant events during transaction processing. An update log record describes a single database write. It has the following fields:
• Transaction identifier: the unique identifier of the transaction that performed the write operation.
• Data-item identifier: the unique identifier of the data item written. Typically, it is the location on disk of the data item.
• Old value: the value of the data item prior to the write.
• New value: the value that the data item will have after the write.
Other special log records exist to record significant events during transaction processing, such as the start of a transaction and the commit or abort of a transaction.
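For example, using the common textbook notation <Ti, X, old value, new value>, a transaction T1 that updates item A from 950 to 1000 would produce the log records:

<T1 start>
<T1, A, 950, 1000>
<T1 commit>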
Client/Server Systems
Database system functionality can be broadly divided into two parts: the front end and the back end. The front end of a database system consists of tools such as the SQL user interface, forms interfaces, report generation tools, and data mining and analysis tools. The back end manages access structures, query evaluation and optimization, concurrency control, and recovery. The interface between the front end and the back end is through SQL, or through an application program.
SHADOW PAGING
Shadow paging is an alternative to log-based crash recovery techniques and is one possible form of indirect page allocation. A paging scheme is used in operating systems for virtual memory management; the memory addressed by a process is called virtual memory. In the shadow page scheme, the database is considered to be made up of logical units of storage called pages, assumed to be of a certain size (e.g. 1 KB or 4 KB). The virtual or logical pages are mapped onto physical blocks of the same size. The mapping of pages is provided by a table called a page table.
The shadow page scheme uses two page tables:
1. Current page table.
2. Shadow page table.
Advantages of Shadow Paging:
1. Recovery from a system crash is relatively inexpensive and is achieved without the overhead of logging.
2. In shadow paging, the overhead of maintaining the transaction log file is eliminated.
Disadvantages of Shadow Paging:
1. Over a period of time the database becomes scattered (fragmented) on disk, and related records may require a very long access time.
2. When a transaction completes its execution, the shadow blocks have to be returned to the pool of free blocks; if this is not done, such blocks become inaccessible when the transaction commits. Reclaiming them is called the garbage collection operation.

Threats to Database Security:
A threat is any intentional or accidental event that may adversely affect the system.
The major database security threats are as follows:
1. Loss of Confidentiality: Loss of confidentiality means that unauthorized users have been able to access information.
2. Loss of Privacy: This is the loss of protection of an individual's data.
3. Loss of Integrity: This refers to data corruption and invalid data.
4. Loss of Availability: This includes damage to the hardware, the network, or applications that may cause data to become unavailable to users.
5. Theft and Fraud: This includes intentional breaches of data security and unauthorized data manipulation resulting from inadequate access controls or inadequate physical security of the database.
6. Accidental Losses: This includes losses resulting from human error, software problems, and hardware problems.
METHODS FOR DATABASE SECURITY
1. Authorization: A DBMS typically includes a database security and authorization subsystem that is responsible for protecting portions of a database against unauthorized access. It normally provides two types of database security mechanisms:
(i) Discretionary Security Mechanisms: These are used to grant privileges to users, including the capability to access specific data files, records, or fields in a specified mode (such as read, insert, delete, or update).
(ii) Mandatory Security Mechanisms: These are used to enforce multilevel security by classifying the data and users into various security classes (or levels) and then implementing the appropriate security policy of the organization.
2. Access Control: The security mechanism of a DBMS must include provisions for restricting access to the database as a whole. This function is called access control. An access control mechanism is a way of controlling the data that is accessible to a given user. It is handled by creating user accounts and passwords by which the DBMS controls the login process.
3. Statistical Database Security: This concerns controlling access to a statistical database, which is used to provide statistical information or summaries of values based on various criteria. The countermeasures to the statistical database security problem are called inference control measures.
4. Database Encryption Techniques: These are used to protect sensitive data (such as credit card numbers) being transmitted over some type of communication network. The data is encoded using an encoding algorithm. An unauthorized user who accesses the encoded data will have difficulty deciphering it, but authorized users are given decoding (decryption) algorithms or keys to decipher the data.
ARIES ALGORITHM
ARIES (Algorithms for Recovery and Isolation Exploiting Semantics) uses a steal/no-force approach for writing, and it is based on the following three concepts:
1. Write-ahead Logging: Any change to an object is first recorded in the log, and the log must be written to stable storage before the changes to the object are written to disk.
2. Repeating History During Redo: On restart after a crash, ARIES retraces all actions of the database system prior to the crash to reconstruct the database state as of the moment of the crash. Transactions that were uncommitted at the time of the crash (active transactions) are then undone.
3. Logging Changes During Undo: Changes made while undoing a transaction are themselves logged, which prevents ARIES from repeating completed undo operations if a failure occurs during recovery and causes a restart of the recovery process.
Components of Client/Server Architecture
There are three major components of client/server architecture: 1. Server 2. Client 3. Network interface
1. Server: The server is the DBMS itself. It consists of the DBMS and supports all basic DBMS functions. The server components of the DBMS are installed at the server. It acts as a monitor of all of its clients and distributes the workload among them.
Functions of the Server: The server performs various functions, as follows:
1. It supports all basic DBMS functions.
2. It monitors all of its clients.
3. It distributes the workload over its clients.
4. It solves problems that are not solved by clients.
5. It maintains security and privacy.
6. It prevents unauthorized access to data.
2. Client: A client machine is a personal computer or workstation that requests services from the server. The client components of the DBMS are installed at the client site. Clients take instructions from the server and help it by taking over part of its load. When a user wants to execute a query on a client, the client first takes the data from the server, then executes the query on its own hardware and returns the result to the server. As a result, the server is free to run more complex applications.
3. Network Interface: Clients are connected to the server by a network interface. It connects the server interface with the user interface so that the server can run its applications over its clients. In a client/server architecture, there may be more than one server: sometimes one server is used as a database server, another as an application server, another as a backup server, and so on.
Types of Client/Server Architecture:
A. Single-Tier Client/Server Architecture:
Single-tier architecture is the earliest client/server computing model. In a single-tier system, the database is centralized, which means the DBMS software and the data reside in one location, and dumb terminals were used to access the database management system.
Advantage of Single-Tier: 1. The data is easily and quickly available, since it is located in the same machine.
Disadvantage of Single-Tier: 1. This architecture is not at all scalable: only one user can access the system at a given time through the local client.
B. Two-Tier Client/Server Architecture:
The two-tier architecture primarily has two parts: a client tier and a server tier. The client tier sends a request to the server tier, and the server tier responds with the desired information.
Advantages of Two-Tier Client/Server Architecture: This structure is quite easy to maintain and modify. The communication between the client and server, in the form of request-response messages, is quite fast.
Disadvantage of Two-Tier Client/Server Architecture: If the number of client nodes grows beyond the capacity of the architecture, the server cannot handle the request overflow and the performance of the system degrades.
C. Three-Tier Client/Server Architecture:
The three-tier architecture has three layers, namely the client, application and data layers. The client layer is the one that requests the information; it could be a GUI, a web interface, etc. The application layer acts as an interface between the client and data layers; it facilitates communication and also provides security. The data layer is the one that actually contains the required data.
Advantages of Three-Tier Client/Server Architecture: The three-tier structure provides much better service and fast performance. The structure can be scaled according to requirements without any problem. Data security is much improved in the three-tier structure.
Disadvantage of Three-Tier Client/Server Architecture: The three-tier client/server structure is quite complex due to its advanced features.
Cascadeless Schedule
Even if a schedule is recoverable, to recover correctly from the failure of a transaction Ti we may have to roll back several transactions.
Suppose transaction T10 writes a value of A that is read by transaction T11, and T11 writes a value of A that is read by transaction T12. Suppose that at some point T10 fails. T10 must be rolled back. Since T11 is dependent on T10, T11 must be rolled back; since T12 is dependent on T11, T12 must also be rolled back.
This phenomenon, in which a single transaction failure leads to a series of transaction rollbacks, is called cascading rollback.
A cascadeless schedule is one where, for each pair of transactions Ti and Tj such that Tj reads a data item previously written by Ti, the commit operation of Ti appears before the read operation of Tj.
