Advanced Database Notes

By HABTESH

Overview: this document surveys database systems and concepts. Databases contain interrelated data accessed through programs for applications such as banking and airlines. Database systems address the data redundancy, inconsistency, isolation, and security problems that arise from file-based storage. Topics include schemas, instances, data models, query languages, database design, storage management, transactions, and database users.
• A DBMS contains information about a particular enterprise:
• Collection of interrelated data
• Set of programs to access the data
• An environment that is both convenient and efficient to use
• Database Applications:
• Banking: all transactions
• Airlines: reservations, schedules
• Universities: registration, grades
• Sales: customers, products, purchases
• Online retailers: order tracking, customized recommendations
• Manufacturing: production, inventory, orders, supply chain
• Human resources: employee records, salaries, tax deductions
• Databases touch all aspects of our lives
Purpose of Database Systems
Data redundancy and inconsistency
• Multiple file formats, duplication of information in different files
• Difficulty in accessing data
• Need to write a new program to carry out each new task
• Data isolation — multiple files and formats
• Integrity problems
• Atomicity of updates
• Failures may leave database in an inconsistent state with partial updates carried out
• Concurrent access by multiple users
• Concurrent access is needed for performance
• Uncontrolled concurrent accesses can lead to inconsistencies
• Security problems
• Hard to provide user access to some, but not all, data
• Database systems offer solutions to all the above problems
Levels of Abstraction
• Physical level: describes how a record (e.g., customer) is stored.
• Logical level: describes data stored in database, and the relationships among the data.
• View level: application programs hide details of data types. Views can also hide information (such as an
employee’s salary) for security purposes.
Instances and Schemas
• Schema – the logical structure of the database
• Analogous to type information of a variable in a program
• Physical schema: database design at the physical level
• Logical schema: database design at the logical level
• Instance – the actual content of the database at a particular point in time
• Analogous to the value of a variable
• Physical Data Independence – the ability to modify the physical schema without changing the logical schema
• Applications depend on the logical schema
• In general, the interfaces between the various levels and components should be well defined so that changes
in some parts do not seriously influence others
Data Models
• A collection of tools for describing
• Data
• Data relationships
• Data semantics
• Data constraints
• Relational model
• Entity-Relationship data model (mainly for database design)
• Object-based data models (Object-oriented and Object-relational)
• Semi-structured data model (XML)
• Other older models:
• Network model
• Hierarchical model
Data Manipulation Language (DML)
• Language for accessing and manipulating the data organized by the appropriate data model
• DML also known as query language
• Two classes of languages
• Procedural – user specifies what data is required and how to get those data
• Declarative (nonprocedural) – user specifies what data is required without specifying how to get those data
• SQL is the most widely used query language
Data Definition Language (DDL)
• Specification notation for defining the database schema
Example: create table account (
account-number char(10),
balance integer)
• DDL compiler generates a set of tables stored in a data dictionary
• Data dictionary contains metadata (i.e., data about data)
• Database schema
• Data storage and definition language
• Specifies the storage structure and access methods used
• Integrity constraints
• Domain constraints
• Referential integrity (references constraint in SQL)
• Assertions
• Authorization
• Application programs generally access databases through one of the following:
• language extensions that allow embedded SQL
• an application program interface (e.g., ODBC/JDBC) which allows SQL queries to be sent to a database
Database Design
The process of designing the general structure of the database:
Logical Design – Deciding on the database schema. Database design requires that we find a “good” collection
of relation schemas.
Physical Design – Deciding on the physical layout of the database.
The Entity-Relationship Model
• Models an enterprise as a collection of entities and relationships
• Entity: a “thing” or “object” in the enterprise that is distinguishable from other objects
• Described by a set of attributes
• Relationship: an association among several entities
• Represented diagrammatically by an entity-relationship diagram
Storage Management
• Storage manager is a program module that provides the interface between the low-level data stored in the
database and the application programs and queries submitted to the system.
• The storage manager is responsible for the following tasks:
• Interaction with the file manager
• Efficient storing, retrieving and updating of data
• Issues:
• Storage access
• File organization
• Indexing and hashing
Transaction Management
• A transaction is a collection of operations that performs a single logical function in a database application
• Transaction-management component ensures that the database remains in a consistent (correct) state despite
system failures (e.g., power failures and operating system crashes) and transaction failures.
• Concurrency-control manager controls the interaction among the concurrent transactions, to ensure the
consistency of the database.
Database Architecture
The architecture of a database system is greatly influenced by
the underlying computer system on which the database is running:
• Centralized
• Client-server
• Parallel (multi-processor)
• Distributed
Database Users
Users are differentiated by the way they expect to interact with
the system
• Application programmers – interact with system through DML calls
• Sophisticated users – form requests in a database query language
• Specialized users – write specialized database applications that do not fit into the traditional data processing
framework
• Naïve users – invoke one of the permanent application programs that have been written previously
• Examples: people accessing databases over the web, bank tellers, clerical staff
Database Administrator
• Coordinates all the activities of the database system; the database administrator has a good understanding of
the enterprise’s information resources and needs.
• Database administrator's duties include:
• Schema definition
• Storage structure and access method definition
• Schema and physical organization modification
• Granting user authority to access the database
• Specifying integrity constraints
• Acting as liaison with users
• Monitoring performance and responding to changes in requirements
Relational Model
Basic Structure
Formally, given sets D1, D2, …, Dn, a relation r is a subset of
D1 × D2 × … × Dn.
Thus, a relation is a set of n-tuples (a1, a2, …, an) where each ai ∈ Di.
For example,
r = { (Jones, Main, Harrison),
(Smith, North, Rye),
(Curry, North, Rye),
(Lindsay, Park, Pittsfield) }
is a relation over customer_name × customer_street × customer_city.
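The set-of-tuples definition maps directly onto Python sets; a minimal sketch (illustrative only, not part of any DBMS) using the customer tuples above:

```python
# A relation is a set of n-tuples; a Python set of tuples models this directly.
customer = {
    ("Jones", "Main", "Harrison"),
    ("Smith", "North", "Rye"),
    ("Curry", "North", "Rye"),
    ("Lindsay", "Park", "Pittsfield"),
}
# Sets have no duplicates and no inherent order -- exactly the formal notion
# of a relation (see "Relations are Unordered" below).
print(len(customer))  # 4
```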
Attribute Types
• Each attribute of a relation has a name
• The set of allowed values for each attribute is called the domain of the attribute
• Attribute values are (normally) required to be atomic; that is, indivisible
• Note: multivalued attribute values are not atomic
• Note: composite attribute values are not atomic
• The special value null is a member of every domain
• The null value causes complications in the definition of many operations
• We shall ignore the effect of null values for now.
Relation Schema
• A1, A2, …, An are attributes
• R = (A1, A2, …, An ) is a relation schema
Example:
Customer_schema = (customer_name, customer_street, customer_city)

r(R) is a relation on the relation schema R


Example:
customer (Customer_schema)
Relation Instance
• The current values (relation instance) of a relation are specified by a table
• An element t of r is a tuple, represented by a row in a table
Relations are Unordered
Order of tuples is irrelevant (tuples may be stored in an arbitrary order)
Database
• A database consists of multiple relations
• Information about an enterprise is broken up into parts, with each relation storing one part of the information
account : stores information about accounts
depositor : stores information about which customer
owns which account
customer : stores information about customers

• Storing all information as a single relation such as
bank(account_number, balance, customer_name, ..)
results in:
• repetition of information (e.g., if two customers own an account)
• the need for null values (e.g., to represent a customer without an account)
• Normalization theory (Part III) deals with how to design relational schemas
Keys
• Let K ⊆ R.
• K is a superkey of R if values for K are sufficient to identify a unique tuple of each possible relation r(R)
• by “possible r” we mean a relation r that could exist in the enterprise we are modeling.
• Example: {customer_name, customer_street} and {customer_name} are both superkeys of Customer, if no two customers can possibly have the same name.
• K is a candidate key if K is minimal.
Example: {customer_name} is a candidate key for Customer, since it is a superkey (assuming no two customers can possibly have the same name), and no subset of it is a superkey.
• Primary key: a candidate key chosen to uniquely identify each record.
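The superkey test ("values for K identify a unique tuple") can be checked mechanically: project the relation onto K and see whether any duplicates collapse. A sketch, with attribute positions standing in for attribute names:

```python
# K (a set of attribute positions) is a superkey of a relation iff the
# projection onto K has as many distinct rows as the relation itself.
def is_superkey(relation, positions):
    projected = [tuple(t[i] for i in positions) for t in relation]
    return len(projected) == len(set(projected))

customer = {
    ("Jones", "Main", "Harrison"),
    ("Smith", "North", "Rye"),
    ("Curry", "North", "Rye"),
}
print(is_superkey(customer, (0,)))    # True: customer_name is unique here
print(is_superkey(customer, (1, 2)))  # False: (North, Rye) repeats
```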


Query Languages
• Language in which user requests information from the database.
• Categories of languages
• Procedural
• Non-procedural, or declarative
• “Pure” languages:
• Relational algebra
• Tuple relational calculus
• Domain relational calculus
• Pure languages form underlying basis of query languages that people use.
Relational Algebra
• Procedural language
• Six basic operators
• select: σ
• project: Π
• union: ∪
• set difference: –
• Cartesian product: ×
• rename: ρ
• The operators take one or two relations as inputs and produce a new relation as a result.
Select Operation
• Notation: σ_p(r)
• p is called the selection predicate
• Defined as:
σ_p(r) = {t | t ∈ r and p(t)}
where p is a formula in propositional calculus consisting of terms connected by: ∧ (and), ∨ (or), ¬ (not).
Each term is one of:
<attribute> op <attribute> or <constant>
where op is one of: =, ≠, >, ≥, <, ≤
• Example of selection:
σ_branch_name=“Perryridge”(account)
Project Operation
• Notation: Π_{A1, A2, …, Ak}(r)
where A1, A2, …, Ak are attribute names and r is a relation name.
• The result is defined as the relation of k columns obtained by erasing the columns that are not listed
• Duplicate rows are removed from the result, since relations are sets
• Example: to eliminate the branch_name attribute of account:
Π_{account_number, balance}(account)


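Select and project both have one-line set-comprehension definitions. A sketch over illustrative account tuples (assumed schema: account_number, branch_name, balance):

```python
# Selection sigma_p(r): keep the tuples of r that satisfy predicate p.
def select(relation, predicate):
    return {t for t in relation if predicate(t)}

# Projection pi_{A1..Ak}(r): keep only the listed column positions;
# set semantics automatically removes duplicate rows.
def project(relation, positions):
    return {tuple(t[i] for i in positions) for t in relation}

account = {
    ("A-101", "Perryridge", 500),
    ("A-102", "Downtown", 700),
    ("A-201", "Perryridge", 500),
}
perryridge = select(account, lambda t: t[1] == "Perryridge")
print(len(perryridge))                 # 2
print(sorted(project(account, (2,))))  # [(500,), (700,)]  -- duplicate 500 collapsed
```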
Union Operation
• Notation: r ∪ s
• Defined as:
r ∪ s = {t | t ∈ r or t ∈ s}
• For r ∪ s to be valid:
1. r, s must have the same arity (same number of attributes)
2. The attribute domains must be compatible (example: the 2nd column of r deals with the same type of values as does the 2nd column of s)
• Example: to find all customers with either an account or a loan:
Π_customer_name(depositor) ∪ Π_customer_name(borrower)
Set Difference Operation
• Notation: r – s
• Defined as:
r – s = {t | t ∈ r and t ∉ s}
• Set differences must be taken between compatible relations:
• r and s must have the same arity
• attribute domains of r and s must be compatible
Cartesian-Product Operation
• Notation: r × s
• Defined as:
r × s = {t q | t ∈ r and q ∈ s}
• Assume that the attributes of r(R) and s(S) are disjoint (that is, R ∩ S = ∅).
• If the attributes of r(R) and s(S) are not disjoint, then renaming must be used.
Composition of Operations
• Can build expressions using multiple operations
• Example: σ_{A=C}(r × s)
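Union, difference, and Cartesian product are ordinary set operations, and composing them gives expressions like σ_{A=C}(r × s). A sketch with illustrative relations r(A, B) and s2(C, D):

```python
# Union and set difference: plain Python set operators.
r = {(1, "a"), (2, "b")}   # schema (A, B)
s = {(2, "b"), (3, "c")}   # compatible schema, for union/difference
print(sorted(r | s))       # [(1, 'a'), (2, 'b'), (3, 'c')]
print(sorted(r - s))       # [(1, 'a')]

# Cartesian product r x s2: concatenate every pair of tuples,
# then compose with a selection: sigma_{A=C}(r x s2).
s2 = {(2, "x"), (3, "y")}                  # schema (C, D), disjoint from (A, B)
rs = {t + q for t in r for q in s2}        # r x s2, schema (A, B, C, D)
result = {t for t in rs if t[0] == t[2]}   # keep tuples where A = C
print(sorted(result))                      # [(2, 'b', 2, 'x')]
```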
Rename Operation
• Allows us to name, and therefore to refer to, the results of relational-algebra expressions.
• Allows us to refer to a relation by more than one name.
• Example:
ρ_x(E)
returns the expression E under the name x.
• If a relational-algebra expression E has arity n, then
ρ_{x(A1, A2, …, An)}(E)
returns the result of expression E under the name x, and with the
Set-Intersection Operation
• Notation: r ∩ s
• Defined as:
• r ∩ s = {t | t ∈ r and t ∈ s}
• Assume:
• r, s have the same arity
• attributes of r and s are compatible
Natural-Join Operation
• Notation: r ⋈ s
• Let r and s be relations on schemas R and S respectively.
Then r ⋈ s is a relation on schema R ∪ S obtained as follows:
• Consider each pair of tuples tr from r and ts from s.
• If tr and ts have the same value on each of the attributes in R ∩ S, add a tuple t to the result, where
• t has the same value as tr on r
• t has the same value as ts on s
• Example:
R = (A, B, C, D)
S = (E, B, D)
• Result schema = (A, B, C, D, E)
• r ⋈ s is defined as:
Π_{r.A, r.B, r.C, r.D, s.E}(σ_{r.B = s.B ∧ r.D = s.D}(r × s))
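The matching step ("same value on each attribute in R ∩ S") is easiest to see with named attributes. A sketch using dict rows for the R = (A, B, C, D), S = (E, B, D) example; the data values are made up:

```python
# Natural join: for each pair of tuples agreeing on the shared attribute
# names (R intersect S), emit one merged tuple over schema R union S.
def natural_join(r, s):
    result = []
    for tr in r:
        for ts in s:
            common = set(tr) & set(ts)              # shared attribute names
            if all(tr[a] == ts[a] for a in common):
                result.append({**tr, **ts})         # merged row, no duplicates
    return result

r = [{"A": 1, "B": "x", "C": 10, "D": "p"}]
s = [{"E": 7, "B": "x", "D": "p"}, {"E": 8, "B": "y", "D": "q"}]
print(natural_join(r, s))  # only the first s-row agrees on B and D
```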
Division Operation
• Notation: r ÷ s
• Suited to queries that include the phrase “for all”.
• Let r and s be relations on schemas R and S respectively, where
• R = (A1, …, Am, B1, …, Bn)
• S = (B1, …, Bn)
The result of r ÷ s is a relation on schema
R – S = (A1, …, Am)
r ÷ s = { t | t ∈ Π_{R–S}(r) ∧ ∀ u ∈ s ( tu ∈ r ) }
where tu means the concatenation of tuples t and u to produce a single tuple.
Division Operation (Cont.)
• Property
• Let q = r ÷ s
• Then q is the largest relation satisfying q × s ⊆ r
• Definition in terms of the basic algebra operations:
Let r(R) and s(S) be relations, and let S ⊆ R. Then
r ÷ s = Π_{R–S}(r) – Π_{R–S}( (Π_{R–S}(r) × s) – Π_{R–S,S}(r) )
To see why:
• Π_{R–S,S}(r) simply reorders the attributes of r
• Π_{R–S}( (Π_{R–S}(r) × s) – Π_{R–S,S}(r) ) gives those tuples t in Π_{R–S}(r) such that for some tuple u ∈ s, tu ∉ r.
Assignment Operation example: write r ÷ s as
temp1 ← Π_{R–S}(r)
temp2 ← Π_{R–S}((temp1 × s) – Π_{R–S,S}(r))
result = temp1 – temp2
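The "for all" semantics of division can be sketched directly from the definition. Here r pairs students with courses and s lists required courses; the names are hypothetical, chosen only to illustrate the operator:

```python
# Division r / s: the values t over R - S such that t concatenated with
# EVERY tuple u of s appears in r.
def divide(r, s):
    # r: set of (a, b) pairs over schema (A, B); s: set of (b,) tuples over (B,)
    a_values = {t[0] for t in r}               # pi_{R-S}(r)
    return {a for a in a_values if all((a,) + u in r for u in s)}

r = {("alice", "db"), ("alice", "os"), ("bob", "db")}
s = {("db",), ("os",)}
# Students who take ALL courses in s -- bob misses "os".
print(divide(r, s))  # {'alice'}
```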
Assignment Operation
• The assignment operation (←) provides a convenient way to express complex queries.
• Write query as a sequential program consisting of
• a series of assignments
• followed by an expression whose value is displayed as a result of the query.
• Assignment must always be made to a temporary relation variable.
Extended Relational-Algebra Operations
• Aggregate Functions
• Outer Join
Aggregate Functions and Operations
• Aggregation function takes a collection of values and returns a single value as a result.
avg: average value
min: minimum value
max: maximum value
sum: sum of values
count: number of values
• Aggregate operation in relational algebra:
G1, G2, …, Gn 𝒢 F1(A1), F2(A2), …, Fn(An) (E)
• E is any relational-algebra expression
• G1, G2, …, Gn is a list of attributes on which to group (can be empty)
• Each Fi is an aggregate function
• Each Ai is an attribute name
• Result of aggregation does not have a name
• Can use the rename operation to give it a name
• For convenience, we permit renaming as part of the aggregate operation
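Grouped aggregation partitions the tuples by the grouping attributes and then applies the aggregate function within each group. A sketch of branch_name 𝒢 sum(balance)(account), over illustrative account tuples:

```python
from collections import defaultdict

# Grouped aggregation: bucket tuples by the value in group_pos, then
# apply an aggregate function to the values in agg_pos within each bucket.
def aggregate(relation, group_pos, agg_pos, func):
    groups = defaultdict(list)
    for t in relation:
        groups[t[group_pos]].append(t[agg_pos])
    return {g: func(vals) for g, vals in groups.items()}

account = {
    ("A-101", "Perryridge", 500),
    ("A-102", "Downtown", 700),
    ("A-201", "Perryridge", 900),
}
totals = aggregate(account, group_pos=1, agg_pos=2, func=sum)
print(totals["Perryridge"])  # 1400
```

Swapping `sum` for `min`, `max`, `len` (count), or an average function gives the other aggregates listed above.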
Outer Join
• An extension of the join operation that avoids loss of information.
• Computes the join and then adds to the result tuples from one relation that do not match tuples in the other relation.
• Uses null values:
• null signifies that the value is unknown or does not exist
• All comparisons involving null are (roughly speaking) false by definition.
• We shall study precise meaning of comparisons with nulls later
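A left outer join keeps unmatched tuples from the left relation, padding the missing attributes with null. A sketch using dict rows, with None standing in for null; the loan/borrower rows are made-up examples:

```python
# Left outer join: natural join, but a left tuple with no match is still
# emitted, padded with None (the stand-in for null) on the right attributes.
def left_outer_join(r, s):
    s_attrs = set(s[0]) if s else set()
    result = []
    for tr in r:
        matched = False
        for ts in s:
            common = set(tr) & set(ts)
            if all(tr[a] == ts[a] for a in common):
                result.append({**tr, **ts})
                matched = True
        if not matched:
            padding = {a: None for a in s_attrs - set(tr)}
            result.append({**tr, **padding})
    return result

loan = [{"loan_no": "L-170", "branch": "Downtown"}]
borrower = [{"cust": "Jones", "loan_no": "L-230"}]
rows = left_outer_join(loan, borrower)
print(rows[0]["cust"])  # None -- L-170 has no matching borrower
```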
Null Values
• It is possible for tuples to have a null value, denoted by null, for some of their attributes
• null signifies an unknown value or that a value does not exist
• The result of any arithmetic expression involving null is null
• Aggregate functions simply ignore null values (as in SQL)
• For duplicate elimination and grouping, null is treated like any other value, and two nulls are assumed to be the same (as in SQL)
Modification of the Database
• The content of the database may be modified using the following operations:
• Deletion
• Insertion
• Updating
• All these operations are expressed using the assignment operator.
Deletion
• A delete request is expressed similarly to a query, except that instead of displaying tuples to the user, the selected tuples are removed from the database.
• Can delete only whole tuples; cannot delete values of only particular attributes
• A deletion is expressed in relational algebra by:
r ← r – E
where r is a relation and E is a relational-algebra query.
Insertion
• To insert data into a relation, we either:
• specify a tuple to be inserted, or
• write a query whose result is a set of tuples to be inserted
• In relational algebra, an insertion is expressed by:
r ← r ∪ E
where r is a relation and E is a relational-algebra expression.
• The insertion of a single tuple is expressed by letting E be a constant relation containing one tuple.
Updating
• A mechanism to change a value in a tuple without changing all values in the tuple
• Use the generalized projection operator to do this task:
r ← Π_{F1, F2, …, Fn}(r)
• Each Fi is either
• the i-th attribute of r, if the i-th attribute is not updated, or,
• if the attribute is to be updated, Fi is an expression, involving only constants and the attributes of r, which gives the new value for the attribute
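All three modifications are assignments over relations, which a set-of-tuples sketch makes concrete (illustrative account tuples of account_number and balance; integer arithmetic keeps the example exact):

```python
# Database modification as assignments over a set relation:
# delete: r <- r - E, insert: r <- r union E, update: generalized projection.
account = {("A-101", 500), ("A-102", 700)}

# Deletion: remove accounts with balance below 600.
account = account - {t for t in account if t[1] < 600}

# Insertion: E is a constant relation containing one tuple.
account = account | {("A-305", 1200)}

# Update: generalized projection -- keep account_number, replace balance
# with an expression over the old balance (5% interest, integer math).
account = {(num, bal * 105 // 100) for num, bal in account}

print(sorted(account))  # [('A-102', 735), ('A-305', 1260)]
```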
Why database standards?
• Having a standard for a particular type of database system is very important for the following reasons:
1. It provides support for portability of database applications. Portability is generally defined as the capability to execute a particular application program on different systems with minimal modifications to the program itself.
2. It helps to achieve interoperability, which generally refers to the ability of an application to access multiple distinct systems. In terms of database systems, this means that the same application program may access some data stored under one ODBMS package, and other data stored under another package.
3. It allows customers to compare commercial products more easily by determining which parts of the standard each product supports.
ODMG
ODMG (Object Data Management Group) is a consortium of ODBMS vendors.
• It proposed a standard for ODBMSs: ODMG 1.0 in 1993, ODMG 2.0 in 1995, and ODMG 3.0 in 2000.
• The ODMG 3.0 standard is made up of several parts:
‣ Object model
‣ Object definition language (ODL)
‣ Object query language (OQL)
The Object Model
The Object Model specifies the constructs that are supported by an ODMS:

• The basic modeling primitives are the object and the literal. Each object has a unique identifier.

• Objects and literals can be categorized by their types.

• An object is sometimes referred to as an instance of its type.

• The state of an object is defined by the values it carries for a set of properties. These properties can be
attributes of the object itself or relationships between the object and one or more other objects.

• Typically, the values of an object’s properties can change over time.


The Object Model (cont.)
• The behavior of an object is defined by the set of operations that can be executed on or by the object.
• Operations may have a list of input and output parameters, each with a specified type. Each operation may also
return a typed result.
• An ODBMS stores objects, enabling them to be shared by multiple users and applications. These objects are
called persistent objects.
• An ODBMS is based on a schema that is defined in ODL and contains instances of the types defined by its
schema.
Types: Specifications and Implementations
• There are two aspects to the definition of a type. A type has an external specification and one or more
implementations.

• The specification defines the aspects that are visible to users of the type: the operations that can be invoked on
its instances, the properties, or state variables, whose values can be accessed, and any exceptions that can be
raised by its operations.

• A type’s implementation defines the internal aspects of the objects of the type: the implementation of the type’s operations and other internal details. The implementation of a type is determined by a language binding.
• An interface definition is a specification that defines only the abstract behavior of an object type.
• A class definition is a specification that defines the abstract behavior and abstract state of an object type.
• A class is an extended interface with information for ODMS schema definition.
• A literal definition defines only the abstract state of a literal type.
• For example, a struct Complex defines only the abstract state of complex-number literals. In addition to the struct definition and the primitive literal datatypes (boolean, char, short, long, float, double, octet, and string), ODL defines declarations for user-defined collection, union, and enumeration literal types.
ODMG Objects and Literals
• The basic building blocks of the object model are
‣ Objects
‣ Literals
• The main difference between the two is that an object has both an object identifier and a state, whereas a literal has a value but no identifier.
Literals
• Literals do not have their own identifiers and cannot stand alone as objects; they are embedded in objects and cannot be individually referenced.
• The value of a literal cannot change.
• The Object Model supports the following three literal types:
‣ atomic literal
‣ collection literal
‣ structured literal
Types of Literals
• Atomic literals
• Correspond to the values of basic data types. That is, numbers, characters, Boolean values, etc. are examples of atomic literal types.
• Structured literals
• Correspond to values constructed by the tuple constructor. They include date, time, interval, and timestamp as built-in structures, and any other user-defined structures.
• Collection literals
• The ODMG Object Model supports collection literals of the following types: set<t>, bag<t>, list<t>, array<t>, dictionary<t, v>, where t is a type of objects or values in the collection.
Objects
• An object has four characteristics:
1. Identifier: unique system-wide identifier
2. Name: unique within a particular database and/or program; it is optional
3. Lifetime: persistent vs. transient
4. Structure: specifies how the object is constructed by the type constructor, and whether it is an atomic object or a collection object
Object Identifiers
• An object can always be distinguished from all other objects within its storage domain by its object identifier.
• An object retains the same object identifier for its entire lifetime. Thus, the value of an object’s identifier will never change.
• The object remains the same object, even if its attribute values or relationships change.
• Object identifiers are generated by the ODMS, not by applications.
Object Names
• In addition to being assigned an object identifier by the ODMS, an object may be given one or more names that are meaningful to the programmer or end user.
• The application can refer at its convenience to an object by name; the ODMS applies the mapping function to determine the object identifier that locates the desired object.
Object Lifetime
• The lifetime of an object determines how the memory and storage allocated to the object are managed.
• Two lifetimes are supported in the Object Model:
‣ transient
‣ persistent
• An object whose lifetime is transient is allocated memory that is managed by the programming-language runtime system. When the process terminates, the memory is de-allocated.
• An object whose lifetime is persistent is allocated memory and storage managed by the ODMS runtime system. These objects continue to exist after the procedure or process that creates them terminates.
Object Structure
• The structure of an object can be either atomic or not; in the latter case, the object is composed of other objects.

• An atomic object type is user-defined. There are no built-in atomic object types included in the ODMG Object
Model.

• In the ODMG Object Model, instances of collection objects are composed of distinct elements, each of which can be
an instance of an atomic type, another collection, or a literal type.
Collection Objects
An important distinguishing characteristic of a collection is that all the elements of the collection must be of the same
type.
They are either all the same atomic type, or all the same type of collection, or all the same type of literal.
• The collections supported by the ODMG Object Model include:
‣ Set<t>: A Set object is an unordered collection of elements, with no duplicates allowed.
‣ Bag<t>: A Bag object is an unordered collection of elements that may contain duplicates.
‣ List<t>: A List object is an ordered collection of elements.
‣ Array<t> : An Array object is a dynamically sized, ordered collection of elements that can be located by position.
‣ Dictionary<t,v>: A Dictionary object is an unordered sequence of key-value pairs with no duplicate keys.
• Each of these is a type generator, parameterized by the type shown within the angle brackets.
Structured Objects
• All structured objects support the Object ODL interface.
• The ODMG Object Model defines the following structured objects:
‣ Date
‣ Interval
‣ Time
‣ Timestamp
‣ User-defined structured objects
• Example of a user-defined structured object:
struct Address {
string dorm_name;
string room_no;
};
attribute Address dorm_address;
ODMG Interface
• An interface is a specification of the abstract behavior of an object type.
• It is a signature for the persistent object. An interface tells the external world how to interact with an object. That is, an interface describes the interface of types of objects: their visible attributes, relationships, and operations.
• Interfaces are non-instantiable, but they serve to define operations that can be inherited by the user-defined objects for a particular application.
• State properties of an interface (i.e., its attributes and relationships) cannot be inherited.
Object Interface
• In the object model, all objects inherit the basic interface Object.
• The ODMG standard gives a typical object interface showing its operation declarations.
Collection Interface
• Any collection object inherits the basic collection interface.
• This interface provides the following operations:
• cardinality() // returns the number of objects in the collection
• is_empty() // returns true if the collection is empty
• insert_element() // inserts an element into the collection
• remove_element() // removes an element from the collection
• contains_element() // returns true if an element is in the collection
• create_iterator() // creates an iterator object for the collection
Interfaces and Behavior Inheritance
• In ODMG, two types of inheritance relationships exist.
• An interface is a specification of the abstract behavior of an object type, which specifies the operation
signatures.
• Interfaces are noninstantiable – that is, one cannot create objects that correspond to an interface definition.
• They are mainly used to specify abstract operations that can be inherited by classes or by other interfaces.
• Subtyping pertains to the inheritance of behavior only and it is specified by colon ( : ).
Extents
• The extent of a type is the set of all instances of the type within a particular ODMS.
• If an object is an instance of type A, then it will necessarily be a member of the extent of A.
• If type A is a subtype of type B, then the extent of A is a subset of the extent of B.
• The ODMS schema designer can decide whether the system should automatically maintain the extent of
each type.
• In some cases, the individual instances of a type in the extent can be uniquely identified by the values they
carry for some property or set of properties. These identifying properties are called keys.
• The scope of uniqueness is the extent of the type; thus, a type must have an extent to have a key.
PL/SQL
• PL/SQL stands for Procedural Language extension of SQL.
• The purpose of PL/SQL is to combine database language and procedural programming language.
• PL/SQL is a combination of SQL and the procedural features of programming languages.
• Following are notable facts about PL/SQL:
• PL/SQL is a completely portable, high-performance transaction-processing language.
• PL/SQL provides a built-in interpreted and OS independent programming environment.
• PL/SQL can also directly be called from the command-line SQL*Plus interface.
• Direct call can also be made from external programming language calls to database.
PL/SQL Features
• Tight integration with SQL
• Supports SQL data types, functions, etc.
• Increased performance
• A block of statements is sent to the server as a single statement
• Increased productivity
• The same techniques can be used with most Oracle products
• Portability
• Works on any Oracle platform
• Tighter security
• Users may access database objects through PL/SQL programs without being granted privileges on the objects directly
PL/SQL Block Structure
• Declaration section (optional)
• Any needed variables are declared here
• Executable (begin) section
• Program code, such as statements to retrieve or manipulate data in a table
• Exception section (optional)
• Error traps can catch situations which might otherwise crash the program
Basic PL/SQL Syntax Elements
• Output statement: DBMS_OUTPUT.PUT_LINE();
• Input (substitution variable): &variable_name
• Assignment operator: :=
• Concatenation operator: ||
PL/SQL Variables
• Variables are local to the code block
• Names can be up to 30 characters long and must begin with a character
• Declaration is like that of a column in a table: the name, then the data type, then a semicolon
• Can be initialized using the := operator in the declaration
• Can be changed with := in the begin section
• Can use constraints
• Variables can be composite or collection types, holding multiple values of the same or different types
Common PL/SQL Data Types
• CHAR(max_length)
• VARCHAR2(max_length)
• NUMBER(precision, scale)
• BINARY_INTEGER – more efficient than NUMBER
• RAW(max_length)
• DATE
• BOOLEAN (true, false, null)
• Also LONG, LONG RAW
Control Structures
• In addition to SQL commands, PL/SQL can also process data using flow-of-control statements, classified into the following categories:
• Conditional control – branching
• Iterative control – looping
• Sequential control
Conditional Control in PL/SQL
• A sequence of statements is executed when a certain condition is satisfied. The different forms of IF are:
• Simple IF
• IF-THEN-ELSE
• ELSIF
IF Syntax Forms
• Simple IF:
IF condition THEN
statements;
END IF;
• IF-THEN-ELSE:
IF condition THEN
statements;
ELSE
else_statements;
END IF;
• ELSIF:
IF condition1 THEN
statements1;
ELSIF condition2 THEN
statements2;
ELSE
else_statements;
END IF;
• Nested IF statements place one IF statement inside the THEN or ELSE part of another.
.SELECTION IN PL/SQL (Conditional Control)
SEARCHED CASE
Syntax:
CASE
WHEN searchcondition1 THEN statement1;
WHEN searchcondition2 THEN statement2;
ELSE
statementn;
END CASE;
LOOP Statements
• Loop statements run the same statements with a series of different values. The loop statements are:
• Simple LOOP
• FOR LOOP
• WHILE LOOP
• The statements that exit a loop are:
• EXIT
• EXIT WHEN
Simple LOOP
Syntax:
LOOP
statement1;
EXIT [ WHEN condition ];
END LOOP;
FOR loop
• The FOR LOOP statement runs one or more statements while the loop index is in a specified range. The statement has this structure:
• Syntax:
FOR counter IN [REVERSE] lower_bound..upper_bound
LOOP
statements
END LOOP;
WHILE loop
• The WHILE LOOP statement runs one or more statements while a condition is true. It has this structure:
Syntax:
WHILE condition LOOP
statements
END LOOP;
EXIT Statement
• The EXIT statement exits the current iteration of a loop, conditionally or unconditionally, and transfers control to the end of either the current loop or an enclosing labeled loop.
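The three loop forms above can be sketched side by side; this is an illustrative block (the messages and bounds are arbitrary), each variant printing the numbers 1 to 3:

```sql
-- Illustrative only: prints 1..3 three different ways.
DECLARE
   v_i NUMBER := 1;
BEGIN
   -- Simple LOOP with EXIT WHEN
   LOOP
      DBMS_OUTPUT.PUT_LINE('simple: ' || v_i);
      v_i := v_i + 1;
      EXIT WHEN v_i > 3;
   END LOOP;

   -- FOR LOOP: the index i is declared implicitly
   FOR i IN 1..3 LOOP
      DBMS_OUTPUT.PUT_LINE('for: ' || i);
   END LOOP;

   -- WHILE LOOP
   v_i := 1;
   WHILE v_i <= 3 LOOP
      DBMS_OUTPUT.PUT_LINE('while: ' || v_i);
      v_i := v_i + 1;
   END LOOP;
END;
/
```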
.Functions and Procedures
• Up until now, our code was in an anonymous block
• It was run immediately
• It is useful to put code in a function or procedure so it can be called several times
• Once we create a procedure or function in a database, it will remain until deleted (like a table).
The difference between a procedure and a function is:
Procedure: no return value.
Function: returns a value.
Creating Procedures
• CREATE [OR REPLACE] PROCEDURE procedure_name
• [(parameter1 [mode1] datatype1,
• parameter2 [mode2] datatype2,
• . . .)]
• IS|AS
• PL/SQL Block;
Parameter Modes
• IN: procedure must be called with a value for the parameter. The value cannot be changed.
• OUT: procedure must be called with a variable for the parameter. Changes to the parameter are seen by the caller (i.e., call by reference).
• IN OUT: a value can be sent, and changes to the parameter are seen by the caller.
• The default mode is IN.
Creating a Function
Almost exactly like creating a procedure, but you supply a return type:
• CREATE [OR REPLACE] FUNCTION function_name
• [(parameter1 [mode1] datatype1,
• parameter2 [mode2] datatype2,
• . . .)]
• RETURN datatype
• IS|AS
• PL/SQL Block;
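Filling in those templates gives something like the following sketch; the emp_tab table and its empno/salary columns are assumptions for illustration, not objects defined in these notes:

```sql
-- Illustrative only: emp_tab, empno and salary are assumed.
CREATE OR REPLACE PROCEDURE raise_salary (
   p_empno  IN NUMBER,
   p_amount IN NUMBER
) IS
BEGIN
   UPDATE emp_tab SET salary = salary + p_amount
    WHERE empno = p_empno;
END;
/

CREATE OR REPLACE FUNCTION annual_salary (p_empno IN NUMBER)
RETURN NUMBER IS
   v_monthly NUMBER;
BEGIN
   SELECT salary INTO v_monthly FROM emp_tab WHERE empno = p_empno;
   RETURN v_monthly * 12;   -- the function returns a value
END;
/

-- Calling them: the procedure is invoked as a statement,
-- the function is used where a value is expected.
BEGIN
   raise_salary(1001, 500);
   DBMS_OUTPUT.PUT_LINE(annual_salary(1001));
END;
/
```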
.Triggers
• A trigger is a PL/SQL block structure which is fired when a DML statement like INSERT, DELETE, or UPDATE is executed on a database table.
• A trigger is triggered automatically when an associated DML statement is executed.
• Associated with a particular table.
• Automatically executed when a particular event occurs:
• Insert
• Update
• Delete
• Others
Triggers vs. Procedures
• Procedures are explicitly executed by a user or application
• Triggers are implicitly executed (fired) when the triggering event occurs
• Triggers should not be used as a lazy way to invoke a procedure, as they are fired every time the event occurs
Trigger Timing
• A trigger's timing has to be specified first
• Before (most common)
• Trigger should be fired before the operation
• i.e., before an insert
• After
• Trigger should be fired after the operation
• i.e., after a delete is performed
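Putting the timing and event together, a trigger definition might look like the following sketch (the table, column and error message are assumptions; the general shape is CREATE [OR REPLACE] TRIGGER name, then the timing, then the triggering events, then the table, then the PL/SQL block):

```sql
-- Illustrative only: emp_tab and salary are assumed.
CREATE OR REPLACE TRIGGER emp_salary_check
BEFORE INSERT OR UPDATE ON emp_tab   -- timing (BEFORE) and events
FOR EACH ROW                         -- fire once per affected row
BEGIN
   IF :NEW.salary < 0 THEN
      RAISE_APPLICATION_ERROR(-20001, 'Salary cannot be negative');
   END IF;
END;
/
```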
.Trigger Events
• Three types of events are available
• DML events
• DDL events
• Database events
DML Events
• Changes to data in a table
• Insert
• Update
• Delete
DDL Events
• Changes to the definition of objects
• Tables
• Indexes
• Procedures
• Functions
• Others
• Include CREATE, ALTER and DROP statements on these objects
Database Events
• Server errors
• Users log on or off
• Database started or stopped
Database security
Creating Users
The DBA creates users by using the CREATE USER statement.
• CREATE USER user
• IDENTIFIED BY password;
• SQL> CREATE USER user1
• 2 IDENTIFIED BY dec;
• User created.
Changing Your Password
• The DBA creates your user account and initializes your password.
• You can change your password by using the ALTER USER statement.
.Role
A role is a set of privileges that are granted to users to operate on databases.
Types of roles
1. System roles
a) CONNECT
 allows the user to connect to the database
 allows the user to create tables and views
 supports all DML operations
b) RESOURCE
CONNECT +
 allows the user to create triggers, procedures, functions, packages and cursors.
c) DBA
 Full role
 CONNECT + RESOURCE + all others, like creating/dropping users and granting/revoking privileges.
2. User-defined roles
Granting roles to users
Syntax:
GRANT role1, role2, … TO username
Example 1: grant the CONNECT role to user1
grant connect to user1
Example 2: grant CONNECT and RESOURCE to user3
grant connect, resource to user3
.How to Revoke/Cancel Privileges
• You use the REVOKE statement to revoke privileges granted to other users.
• Privileges granted to others through the WITH GRANT OPTION will also be revoked.
• REVOKE {privilege [, privilege...]|ALL}
• ON object
• FROM {user[, user...]|role|PUBLIC}
• [CASCADE CONSTRAINTS];
Example: withdraw the CONNECT role from user1:
Revoke connect from user1
Privileges on objects
Users can grant/revoke certain permissions on their objects (tables, packages and views).
Permissions on tables
 Insertion
 Deletion
 Updating
 Selection
Syntax:
GRANT prev1, prev2, … ON object_name TO username
Example 1: assume user1 has a table emp_tab; permit user2 to select the contents of emp_tab:
Grant select on emp_tab to user2
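The WITH GRANT OPTION mentioned above lets a grantee pass an object privilege on, and revoking from the grantee cascades to everyone they granted it to. A sketch (user and table names are assumptions):

```sql
-- Illustrative only: user1 owns emp_tab; user2 and user3 are assumed.
-- user1 lets user2 read emp_tab and pass that right on:
GRANT select ON emp_tab TO user2 WITH GRANT OPTION;

-- user2 can now pass the privilege to user3:
GRANT select ON emp_tab TO user3;

-- When user1 revokes user2's privilege, user3 loses it too:
REVOKE select ON emp_tab FROM user2;
```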
.How to Revoke Object Privileges
• You use the REVOKE statement to revoke privileges granted to other users.
• Privileges granted to others through the WITH GRANT OPTION will also be revoked.
• REVOKE {privilege [, privilege...]|ALL}
• ON object
• FROM {user[, user...]|role|PUBLIC}
• [CASCADE CONSTRAINTS];
Revoking Object Privileges
As user1, revoke the SELECT privilege given to user2 on the emp_tab table:
• SQL> REVOKE select
• 2 ON emp_tab
• 3 FROM user2;
• Revoke succeeded.
User-defined roles
Steps
Step 1: Create the role
Syntax:
CREATE ROLE rolename
Example:
SQL> CREATE ROLE manager;
Role created.
Step 2: Assign privileges to the role
Syntax:
GRANT privilege_list ON table_name TO role_name
Example:
SQL> GRANT select on t1
2 to manager;
Grant succeeded.
.Step 3: Assign roles to the users
Syntax:
GRANT role_name TO username
Example:
SQL> GRANT manager to user1;
Grant succeeded.
1 Introduction to Transaction Processing
• Single-User System:
• At most one user at a time can use the system.
• Multiuser System:
• Many users can access the system concurrently.
• Concurrency
• multitask processing:
• Concurrent execution of processes is interleaved in a single CPU
• Parallel processing(multicore):
• Processes are concurrently executed in multiple CPUs.
Introduction to Transaction Processing
• A Transaction:
• Logical unit of database processing that includes one or more access operations (read – retrieval; write – insert, update, or delete).
• A transaction (set of operations) may be stand-alone, specified in a high-level language like SQL and submitted interactively, or may be embedded within a program.
• Transaction boundaries:
• Begin and End transaction.
• An application program may contain several transactions separated by the Begin and End transaction boundaries.
.Introduction to Transaction Processing
• A database is a collection of named data items
• Basic operations are read and write
• read_item(X): Reads a database item named X into a program variable. To simplify our notation, we assume that
the program variable is also named X.
• write_item(X): Writes the value of program variable X into the database item named X.
Introduction to Transaction Processing
READ AND WRITE OPERATIONS:
• Basic unit of data transfer from the disk to the computer main memory is one block. In general, a data item (what is
read or written) will be the field of some record in the database, although it may be a larger unit such as a record or
even a whole block.
• read_item(X) command includes the following steps:
• Find the address of the disk block that contains item X.
• Copy that disk block into a buffer in main memory (if that disk block is not already in some main memory buffer).
• Copy item X from the buffer to the program variable named X.
Introduction to Transaction Processing
READ AND WRITE OPERATIONS (contd.):
• write_item(X) command includes the following steps:
• Find the address of the disk block that contains item X.
• Copy that disk block into a buffer in main memory (if that disk block is not already in some main memory buffer).
• Copy item X from the program variable named X into its correct location in the buffer.
• Store the updated block from the buffer back to disk (either immediately or at some later point in time).
.Two sample transactions
• Two sample transactions:
• (a) Transaction T1
• (b) Transaction T2
Introduction to Transaction Processing
Why Concurrency Control is needed:
• The Lost Update Problem
• This occurs when two transactions that access the same database items have their operations interleaved in
a way that makes the value of some database item incorrect.
• The Temporary Update (or Dirty Read) Problem
• This occurs when one transaction updates a database item and then the transaction fails for some reason.
• The updated item is accessed by another transaction before it is changed back to its original value.
• The Incorrect Summary Problem
• If one transaction is calculating an aggregate summary function on a number of records while other
transactions are updating some of these records, the aggregate function may calculate some values before
they are updated and others after they are updated.
.Introduction to Transaction Processing
Why recovery is needed:
(What causes a Transaction to fail)
1. A computer failure (system crash):
A hardware or software error occurs in the computer system during transaction execution. If the hardware
crashes, the contents of the computer’s internal memory may be lost.
2. A transaction or system error:
Some operation in the transaction may cause it to fail, such as integer overflow or division by zero. Transaction
failure may also occur because of erroneous parameter values or because of a logical programming error. In
addition, the user may interrupt the transaction during its execution.
3. Disk failure:
Some disk blocks may lose their data because of a read or write malfunction or because of a disk read/write
head crash. This may happen during a read or a write operation of the transaction.
4. Physical problems:
This refers to an endless list of problems that includes power or air-conditioning failure, fire, theft, overwriting
disks or tapes by mistake, and mounting of a wrong tape by the operator.
2 Transaction and System Concepts (1)
• A transaction is an atomic unit of work that is either completed in its entirety or not done at all.
• For recovery purposes, the system needs to keep track of when the transaction starts, terminates, and commits or
aborts.
• Transaction states:
• Active state
• Partially committed state
• Committed state
• Failed state
.Transaction and System Concepts (2)
• Recovery manager keeps track of the following operations:
• begin_transaction: This marks the beginning of transaction execution.
• read or write: These specify read or write operations on the database items that are executed as part of a
transaction.
• end_transaction: This specifies that read and write transaction operations have ended and marks the end
limit of transaction execution.
• At this point it may be necessary to check whether the changes introduced by the transaction can be
permanently applied to the database or whether the transaction has to be aborted because it violates
concurrency control or for some other reason.
Transaction and System Concepts (3)
• Recovery manager keeps track of the following operations (cont):
• commit_transaction: This signals a successful end of the transaction so that any changes (updates) executed
by the transaction can be safely committed to the database and will not be undone.
• rollback (or abort): This signals that the transaction has ended unsuccessfully, so that any changes or effects
that the transaction may have applied to the database must be undone.
Transaction and System Concepts (4)
• Recovery techniques use the following operators:
• undo: Similar to rollback except that it applies to a single operation rather than to a whole transaction.
• redo: This specifies that certain transaction operations must be redone to ensure that all the operations of a
committed transaction have been applied successfully to the database.
.3 Desirable Properties of Transactions (1)
ACID properties:
• Atomicity: A transaction is an atomic unit of processing; it is either performed in its entirety or not performed at
all.
• Consistency preservation: A correct execution of the transaction must take the database from one consistent
state to another.
• Isolation: A transaction should not make its updates visible to other transactions until it is committed; this
property, when enforced strictly, solves the temporary update problem and makes cascading rollbacks of
transactions unnecessary.
• Durability or permanency: Once a transaction changes the database and the changes are committed, these
changes must never be lost because of subsequent failure.
Characterizing Schedules based on Recoverability (1)
• Transaction schedule or history:
• When transactions are executing concurrently in an interleaved fashion, the order of execution of
operations from the various transactions forms what is known as a transaction schedule (or history).
• A schedule (or history) S of n transactions T1, T2, …, Tn:
• It is an ordering of the operations of the transactions subject to the constraint that, for each transaction Ti
that participates in S, the operations of T1 in S must appear in the same order in which they occur in T1.
• Note, however, that operations from other transactions Tj can be interleaved with the operations of Ti in S.
.Characterizing Schedules based on Recoverability (2)
Schedules classified on recoverability:
• Recoverable schedule:
• One where no committed transaction ever needs to be rolled back.
• A schedule S is recoverable if no transaction T in S commits until all transactions T’ that have written an item that
T reads have committed.
• Cascadeless schedule:
• One where every transaction reads only the items that are written by committed transactions.
Characterizing Schedules based on Recoverability (3)
Schedules classified on recoverability (contd.):
• Schedules requiring cascaded rollback:
• A schedule in which uncommitted transactions that read an item from a failed transaction must be rolled back.
• Strict Schedules:
• A schedule in which a transaction can neither read nor write an item X until the last transaction that wrote X has committed.
5 Characterizing Schedules based on Serializability (1)
• Serial schedule:
• A schedule S is serial if, for every transaction T participating in the schedule, all the operations of T are executed
consecutively in the schedule.
• Otherwise, the schedule is called nonserial schedule.
• Serializable schedule:
• A schedule S is serializable if it is equivalent to some serial schedule of the same n transactions.
.Characterizing Schedules based on Serializability (2)
• Result equivalent:
• Two schedules are called result equivalent if they produce the same final state of the database.
• Conflict equivalent:
• Two schedules are said to be conflict equivalent if the order of any two conflicting operations is the same in
both schedules.
• Conflict serializable:
• A schedule S is said to be conflict serializable if it is conflict equivalent to some serial schedule S’.
Characterizing Schedules based on Serializability (3)
• Being serializable is not the same as being serial
• Being serializable implies that the schedule is a correct schedule.
• It will leave the database in a consistent state.
• The interleaving is appropriate and will result in a state as if the transactions were serially executed, yet will
achieve efficiency due to concurrent execution.
Characterizing Schedules based on Serializability (4)
• Serializability is hard to check.
• Interleaving of operations occurs in an operating system through some scheduler
• Difficult to determine beforehand how the operations in a schedule will be interleaved.
.Characterizing Schedules based on Serializability (5)
Practical approach:
• Come up with methods (protocols) to ensure serializability.
• It is not possible to determine when a schedule begins and when it ends.
• Hence, we reduce the problem of checking the whole schedule to checking only the committed projection of the schedule (i.e., operations from only the committed transactions).
• Current approach used in most DBMSs:
• Use of locks with two phase locking
Characterizing Schedules based on Serializability (6)
• View equivalence:
• A less restrictive definition of equivalence of schedules
• View serializability:
• Definition of serializability based on view equivalence.
• A schedule is view serializable if it is view equivalent to a serial schedule.
Characterizing Schedules based on Serializability (7)
• Two schedules are said to be view equivalent if the following three conditions hold:
1. The same set of transactions participates in S and S’, and S and S’ include the same operations of those
transactions.
2. For any operation Ri(X) of Ti in S, if the value of X read by the operation has been written by an operation
Wj(X) of Tj (or if it is the original value of X before the schedule started), the same condition must hold
for the value of X read by operation Ri(X) of Ti in S’.
3. If the operation Wk(Y) of Tk is the last operation to write item Y in S, then Wk(Y) of Tk must also be the
last operation to write item Y in S’.
.Characterizing Schedules based on Serializability (8)
• The premise behind view equivalence:
• As long as each read operation of a transaction reads the result of the same write operation in both
schedules, the write operations of each transaction must produce the same results.
• “The view”: the read operations are said to see the same view in both schedules.
Characterizing Schedules based on Serializability (9)
• Relationship between view and conflict equivalence:
• The two are same under constrained write assumption which assumes that if T writes X, it is constrained by
the value of X it read; i.e., new X = f(old X)
• Conflict serializability is stricter than view serializability. With unconstrained write (or blind write), a
schedule that is view serializable is not necessarily conflict serializable.
• Any conflict serializable schedule is also view serializable, but not vice versa.
Characterizing Schedules based on Serializability (10)
• Relationship between view and conflict equivalence (cont):
• Consider the following schedule of three transactions
• T1: r1(X), w1(X); T2: w2(X); and T3: w3(X):
• Schedule Sa: r1(X); w2(X); w1(X); w3(X); c1; c2; c3;
• In Sa, the operations w2(X) and w3(X) are blind writes, since T1 and T3 do not read the value of X.
• Sa is view serializable, since it is view equivalent to the serial schedule T1, T2, T3.
• However, Sa is not conflict serializable, since it is not conflict equivalent to any serial schedule.
.Characterizing Schedules based on Serializability (11)
Testing for conflict serializability: Algorithm.
• Looks at only read_Item (X) and write_Item (X) operations
• Constructs a precedence graph (serialization graph) - a graph with directed edges
• An edge is created from Ti to Tj if one of the operations in Ti appears before a conflicting operation in Tj
• The schedule is serializable if and only if the precedence graph has no cycles.
6 Transaction Support in SQL
• A single SQL statement is always considered to be atomic.
• Either the statement completes execution without error or it fails and leaves the database unchanged.
• With SQL, there is no explicit Begin Transaction statement.
• Transaction initiation is done implicitly when particular SQL statements are encountered.
• Every transaction must have an explicit end statement, which is either a COMMIT or ROLLBACK.
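Tying this back to the earlier transfer example, the pattern looks like the following sketch (the account table, column names, and values are assumptions); the transaction begins implicitly at the first UPDATE and ends explicitly at COMMIT or ROLLBACK:

```sql
-- Illustrative only: account, balance and acct_id are assumed.
-- Transfer 50 from account A to account B as one atomic unit:
UPDATE account SET balance = balance - 50 WHERE acct_id = 'A';
UPDATE account SET balance = balance + 50 WHERE acct_id = 'B';
COMMIT;      -- make both updates permanent

-- On error, instead undo everything since the transaction began:
-- ROLLBACK;
```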
Recovery Techniques
• Transaction failure :
• Logical errors: transaction cannot complete due to some internal error condition
• System errors: the database system must terminate an active transaction due to an error condition (e.g.,
deadlock)
• System crash: a power failure or other hardware or software failure causes the system to crash.
• Fail-stop assumption: non-volatile storage contents are assumed to not be corrupted by system crash
• Database systems have numerous integrity checks to prevent corruption of disk data
• Disk failure: a head crash or similar disk failure destroys all or part of disk storage
• Destruction is assumed to be detectable: disk drives use checksums to detect failures
.Recovery Algorithms
• Recovery algorithms are techniques to ensure database consistency and transaction atomicity and durability
despite failures
• Focus of this chapter
• Recovery algorithms have two parts:
1. Actions taken during normal transaction processing to ensure enough information exists to recover from
failures
2. Actions taken after a failure to recover the database contents to a state that ensures atomicity, consistency
and durability
Storage Structure
• Volatile storage:
• does not survive system crashes
• Ex: main memory, cache memory
• Nonvolatile storage:
• survives system crashes
• Ex: disk, tape, flash memory,
non-volatile (battery backed up) RAM
• Stable storage:
• a mythical form of storage that survives all failures
• approximated by maintaining multiple copies on distinct nonvolatile media
.Stable-Storage Implementation
• Maintain multiple copies of each block on separate disks
• copies can be at remote sites to protect against disasters such as fire or flooding.
• Failure during data transfer can still result in inconsistent copies:
• Block transfer can result in:
• Successful completion
• Partial failure: destination block has incorrect information
• Total failure: destination block was never updated
• Protecting storage media from failure during data transfer (one solution):
• Execute an output operation as follows (assuming two copies of each block):
1. Write the information onto the first physical block.
2. When the first write successfully completes, write the same information onto the second physical block.
3. Declare the output completed only after the second write successfully completes.
• Copies of a block may differ due to failure during an output operation.
To recover from failure:
1. First find inconsistent blocks:
• Expensive solution: Compare the two copies of every disk block.
• Better solution:
 Record in-progress disk writes on non-volatile storage (non-volatile RAM or a special area of disk).
 Use this information during recovery to find blocks that may be inconsistent, and only compare copies of these.
 Used in hardware RAID systems
2. If either copy of an inconsistent block has a detectable error (bad checksum), overwrite it with the other copy; if both are error-free but different, overwrite the second block with the first.
.Data Access
• Physical blocks are those blocks residing on the disk.
• Buffer blocks are the blocks residing temporarily in main memory.
• Block movements between disk and main memory are initiated through the following two operations:
• input(B) transfers the physical block B to main memory.
• output(B) transfers the buffer block B to the disk, and replaces the appropriate physical block there.
• Transaction transfers data items between system buffer blocks and its private work-area using the following
operations :
• read(X) assigns the value of data item X to the local variable xi.
• write(X) assigns the value of local variable xi to data item X in the buffer block.
• Transactions
• Perform read(X) while accessing X for the first time;
• All subsequent accesses are to the local copy.
• After last access, transaction executes write(X).
• output(BX) need not immediately follow write(X).
• System can perform the output operation when it deems fit.
.Recovery and Atomicity
• Modifying the database without ensuring that the transaction will commit
• may leave the database in an inconsistent state.
• Consider transaction Ti that transfers $50 from account A to account B; goal is:
• either to perform all database modifications made by Ti
• or none at all.
• Several output operations may be required for Ti (to output A and B).
• A failure may occur after one of these modifications have been made
• but before all of them are made.
• To ensure atomicity despite failures,
• we first output information describing the modifications to stable storage without modifying the database itself.
• We study two approaches:
• log-based recovery, and
• shadow-paging
• We assume (initially) that transactions run serially, that is, one after the other.
Log-Based Recovery
• A log is kept on stable storage.
• The log is a sequence of log records, and
• maintains a record of update activities on the database.
• When transaction Ti starts, it registers itself by writing a <Ti start> log record
.• Before Ti executes write(X), a log record <Ti, X, V1, V2> is written, where V1 is the value of X before the write (the old value), and V2 is the value to be written to X (the new value).
• Two approaches using logs:
• Deferred database modification
• Immediate database modification
Deferred Database Modification
• The deferred database modification scheme records all modifications to the log,
• but defers all the writes to after partial commit.
• Assume that transactions execute serially
• Transaction starts by writing <Ti start> record to log.
• A write(X) operation results in a log record <Ti, X, V> being written, where V is the new value for X
• Note: old value is not needed for this scheme
• The write is not performed on X at this time, but is deferred.
• When Ti partially commits, <Ti commit> is written to the log
• Finally, the log records are read and used to:
• actually execute the previously deferred writes.
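Using the log notation above, a transfer of 50 from A (initially 1000) to B (initially 2000) under deferred modification would produce a log like this sketch (the item names and values are illustrative); note the writes to A and B are applied only after <Ti commit> reaches the log:

```
<Ti start>
<Ti, A, 950>      -- new value only; the old value is not needed
<Ti, B, 2050>
<Ti commit>
-- only now are the deferred writes A := 950 and B := 2050 executed
```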
.• The immediate database modification scheme allows:
• database updates of an uncommitted transaction to be made:
• as the writes are issued
• since, undoing may be needed,
• update logs must have both old value and new value
• Update log record must be written before database item is written
• We assume that the log record is output directly to stable storage
• Can be extended to postpone log record output,
• Prior to execution of an output(B) operation for a data block B,
• all log records corresponding to items B must be flushed to stable storage
Shadow Paging
• Shadow paging is an alternative to log-based recovery;
• this scheme is useful if transactions execute serially
• Idea: maintain two page tables during the lifetime of a transaction:
• the current page table, and the shadow page table
• Store the shadow page table in nonvolatile storage,
• such that state of the database prior to transaction execution may be recovered.
• Shadow page table is never modified during execution
• To start with, both the page tables are identical.
• Only the current page table is used for data item accesses during execution of the transaction.
.Remote Backup Systems
Remote backup systems provide high availability by allowing transaction processing to continue even if the
primary site is destroyed.
• Detection of failure:
Backup site must detect when primary site has failed
• to distinguish primary site failure from link failure maintain several communication links between the
primary and the remote backup.
• Transfer of control:
• To take over control, the backup site first performs recovery using its copy of the database and all the log records it has received from the primary.
• Thus, completed transactions are redone and incomplete transactions are rolled back.
• When the backup site takes over processing it becomes the new primary
• To transfer control back to old primary when it recovers, old primary must receive redo logs from the old
backup and apply all updates locally.
• Time to recover: To reduce delay in takeover, the backup site periodically processes the redo log records (in effect, performing recovery from the previous database state), performs a checkpoint, and can then delete earlier parts of the log.
• Hot-Spare configuration permits very fast takeover:
• Backup continually processes redo log record as they arrive, applying the updates locally.
• When failure of the primary is detected the backup rolls back incomplete transactions, and is ready to
process new transactions.
.• Ensure durability of updates by delaying transaction commit until update is logged at backup; avoid this delay by
permitting lower degrees of durability.
• One-safe: commit as soon as transaction’s commit log record is written at primary
• Problem: updates may not arrive at backup before it takes over.
• Two-very-safe: commit when transaction’s commit log record is written at primary and backup
• Reduces availability since transactions cannot commit if either site fails.
• Two-safe: proceed as in two-very-safe if both primary and backup are active. If only the primary is active, the transaction commits as soon as its commit log record is written at the primary.
• Better availability than two-very-safe; avoids the problem of lost transactions in one-safe.
What Is Database Security?
Database:
It is a collection of related information stored in a computer.
Security:
It is being free from danger.
Database Security:
It is the mechanisms that protect the database against intentional or accidental threats.
Three Main Aspects
1. Secrecy
2. Integrity
3. Availability
.Secrecy
• It is protecting the database from unauthorized users.
• Ensures that users are allowed to do the things they are trying to do.
• For example,
• Employees should not see the salaries of their managers.
Integrity
• It is protecting the database from unauthorized modification.
• Ensures that what users are trying to do is correct.
• For example,
• An employee should be able to modify his or her own information.
Availability
• Authorized users should be able to access data for legal purposes as necessary.
• For example,
• Payment orders regarding taxes should be made on time as required by the tax law.
What is a Threat?
Threat: it can be defined as a hostile agent that, either casually or by using specialized techniques, can read, modify or delete the information managed by a DBMS.
Two Kinds of Threat
1. Non-fraudulent threats
• Natural or accidental disasters.
• Errors or bugs in hardware or software.
• Human errors.
2. Fraudulent threats
• Authorized users
• Those who abuse their privileges and legal authority.
• Hostile agents
• Those improper users (outsiders or insiders) who attack the software and/or hardware system, or read or write data improperly.
.Database Protection Requirements
1. Protection from Improper Access
2. Protection from Inference
3. Integrity of the Database
4. User Authentication
5. Management and Protection of Sensitive Data
Type of Security Controls
1. Flow Control
• Flow controls regulate the distribution (flow) of information among accessible objects.
• A flow between object X and object Y occurs when a statement reads values from X and writes values into Y.
• Copying data from X to Y is the typical example of information flow.
2. Inference Control
Inference control aims at protecting data from indirect detection.
Information inference occurs when a set X of data items read by a user can be used to derive a set Y of data.
An inference channel is a channel where users can find an item X and then use X to get Y, as
Y = f(X).
.Main Inference Channels
Indirect Access:
• Occurs when a user derives:
• unauthorized data (say Y)
• from an authorized source (say X).
Correlated Data:
• If visible data X is semantically connected to invisible data Y.
3. Access Control
• Access control in information systems is responsible for ensuring that all direct accesses to the system objects occur based on modes and rules fixed by protection policies.
• An access control system includes :
• subjects (users, processes).
• Who access objects (data, programs).
• Through operations (‘read’, ‘write’, ‘run’).
Good luck