Unit 2
Parts of a trigger
Uses of triggers
Types of triggers
After-triggers − An after-trigger fires after the triggering action is completed. For example, if the trigger is associated with the INSERT command, it fires after the row is inserted into the table.
Row-level triggers − A row-level trigger fires once for each row affected by the DML command. For example, if an UPDATE command updates 150 rows, a row-level trigger fires 150 times, whereas a statement-level trigger fires only once.
To create a database trigger, we use the CREATE TRIGGER command. The details to be given at the time of creating a trigger include the trigger name, the timing (before or after), the triggering event (INSERT, UPDATE, or DELETE), the table concerned, the level (row or statement), and the action to be performed.
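As a minimal sketch, here is an after, row-level trigger in Oracle-style SQL; the employee and salary_audit tables and their columns are hypothetical names used only for illustration:

    CREATE TRIGGER log_salary_change
    AFTER UPDATE OF salary ON employee   -- fires after each UPDATE of the salary column
    FOR EACH ROW                         -- row-level: once per affected row
    BEGIN
      -- record the old and new values for auditing
      INSERT INTO salary_audit (emp_id, old_salary, new_salary, changed_on)
      VALUES (:OLD.emp_id, :OLD.salary, :NEW.salary, SYSDATE);
    END;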
In this section, we discuss some additional issues concerning how rules are designed and
implemented. The first issue concerns activation, deactivation, and grouping of rules. In addition to
creating rules, an active database system should allow users to activate, deactivate, and drop rules by
referring to their rule names. A deactivated rule will not be triggered by the triggering event.
This feature allows users to selectively deactivate rules for certain periods of time when they
are not needed. The activate command will make the rule active again. The drop command deletes
the rule from the system. Another option is to group rules into named rule sets, so the whole set of
rules can be activated, deactivated, or dropped together. It is also useful to be able to trigger a rule or rule set explicitly through a PROCESS RULES command issued by the user.
The second issue concerns whether the triggered action should be executed before, after, instead of, or concurrently with the triggering event. A before trigger executes the trigger before executing the event that caused the trigger. It can be used in applications such as checking for constraint violations. An after trigger executes the trigger after executing the event, and it can be used in applications such as maintaining derived data and monitoring for specific events and conditions. An instead of trigger executes the trigger instead of executing the event, and it can be used in applications such as executing corresponding updates on base relations in response to an event that is an update of a view.
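For instance, an instead of trigger can make an otherwise read-only view updatable by redirecting the change to the base table. A minimal SQL Server-style sketch, with hypothetical view and table names:

    CREATE TRIGGER trg_vw_active_customers_insert
    ON vw_active_customers          -- a view over the customers base table
    INSTEAD OF INSERT
    AS
    BEGIN
      -- redirect the insert on the view to the underlying base relation;
      -- "inserted" holds the rows the user tried to add through the view
      INSERT INTO customers (name, is_active)
      SELECT name, 1 FROM inserted;
    END;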
A related issue is whether the action being executed should be considered as a separate transaction or whether it should be part of the same transaction that triggered the rule. We will try to categorize the various options. It is important to note that not all options may be available for a particular active database system. In fact, most commercial systems are limited to one or two of the options that we will now discuss.
Let us assume that the triggering event occurs as part of a transaction execution. We should
first consider the various options for how the triggering event is related to the evaluation of the rule’s
condition.
The rule condition evaluation is also known as rule consideration, since the action is to be executed only after considering whether the condition evaluates to true or false. There are three main possibilities for rule consideration:
1. Immediate consideration. The condition is evaluated as part of the same transaction as the triggering event, and is evaluated immediately.
2. Deferred consideration. The condition is evaluated at the end of the transaction that included the triggering event. In this case, there could be many triggered rules waiting to have their conditions evaluated.
3. Detached consideration. The condition is evaluated as a separate transaction, spawned from the triggering transaction.
The next set of options concerns the relationship between evaluating the rule condition and executing the rule action. Here, again, three options are possible: immediate, deferred, or detached execution.
Most active systems use the first option.
That is, as soon as the condition is evaluated, if it returns true, the action is immediately executed.
The Oracle system uses the immediate consideration model, but it allows the user to specify
for each rule whether the before or after option is to be used with immediate condition evaluation. It
also uses the immediate execution model. The STARBURST system uses the deferred consideration
option, meaning that all rules triggered by a transaction wait until the triggering transaction reaches
its end and issues its COMMIT WORK command before the rule conditions are evaluated.
Another issue concerning active database rules is the distinction between row-level rules and
statement-level rules. Because SQL update statements (which act as triggering events) can specify a
set of tuples, one has to distinguish between whether the rule should be considered once for the
whole statement or whether it should be considered separately for each row (that is, tuple) affected by
the statement.
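To make the distinction concrete, here is a sketch of a statement-level counterpart to the row-level trigger shown earlier (Oracle-style SQL; the employee and audit_log tables are hypothetical):

    CREATE TRIGGER audit_employee_update
    AFTER UPDATE ON employee
    -- no FOR EACH ROW clause: this is a statement-level trigger,
    -- so it fires once per UPDATE statement, however many rows change
    BEGIN
      INSERT INTO audit_log (event_name, happened_on)
      VALUES ('employee updated', SYSDATE);
    END;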
Valid Time: The time period during which a fact is true in the real world; it is provided to the system.
Transaction Time: The time period during which a fact is stored in the database; it is based on the transaction serialization order and is a timestamp generated automatically by the system.
Temporal Relation
A temporal relation is one where each tuple has an associated time: valid time, transaction time, or both.
Uni-Temporal Relations: Have one axis of time, either valid time or transaction time.
Bi-Temporal Relations: Have both axes of time, valid time and transaction time. Each tuple carries a Valid Start Time, Valid End Time, Transaction Start Time, and Transaction End Time.
Valid Time Example
Now let's see an example of a person, John. John lived in Chennai from 3rd April 1992, and this fact was entered into the database on 6th April 1992. Later, John changes his address to Mumbai, reporting the change on 10th Jan 2016. However, he has been living in Mumbai since 21st June of the previous year, so his valid time entry would be 21 June 2015.

Table: Uni-temporal (valid time) database

Name    City       Valid From       Valid Till
John    Chennai    April 3, 1992    June 20, 2015
John    Mumbai     June 21, 2015    Now
Bi-Temporal Relation (John's Data Using Both Valid and Transaction Time)
Next we'll see a bi-temporal database, which includes both the valid time and the transaction time. Transaction time records the time period during which a database entry is stored. So the database now has four additional columns: Valid From, Valid Till, Transaction Entered, and Transaction Superseded.

Table: Bi-temporal database

Name    City       Valid From       Valid Till       Transaction Entered    Transaction Superseded
John    Chennai    April 3, 1992    June 20, 2015    April 6, 1992          Jan 10, 2016
Advantages
The main advantage of bi-temporal relations is that they provide both historical and rollback information. For example, you can query John's history: where did John live in the year 2001? The result of this query is obtained from the valid time entries. The transaction time entries are what make rollback information available.
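As a sketch, such a history query could be written against a bi-temporal table like the one above (the address_history table and its column names are hypothetical):

    -- Where did John live in 2001? Use the valid-time columns.
    SELECT city
    FROM address_history
    WHERE name = 'John'
      AND valid_from <= DATE '2001-12-31'
      AND valid_till >= DATE '2001-01-01';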
Temporal Query
When a temporal table is created in SQL Server, a history table is created behind the scenes. The main table contains the records as they exist at the current point in time, and the history table contains all the previous versions of the records. You can query the main table as normal, or add temporal clauses to look at the data as of an earlier time.
There are two main ways to query the history. The first is looking at previous versions of the table by adding time-based clauses to your queries. The second is to look into the history table manually, which lets you see all the previous versions of records.
First let's consider a normal table that we can use as our sample main table; for this post, we'll work with a simple example table. That table is not temporal, but having the normal definition to compare it to will highlight what we need to change.
There are two main changes. First, we added SysStartTime and SysEndTime as generated columns and then used them to define the PERIOD FOR SYSTEM_TIME. These are required for temporal tables. Making the start and end time columns hidden is optional, but can help to hide the versioning when it's not needed. Second, we added with (system_versioning = on (...)) to the end of the statement. This will create the history table using the period defined above. The default naming for the history table is kind of messy, so we also defined what the table should be called. I like the _History suffix, so that's what we'll use.
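A minimal sketch of such a definition in SQL Server (the Employee table and its columns are hypothetical):

    CREATE TABLE dbo.Employee (
        EmployeeId INT PRIMARY KEY,
        Name       NVARCHAR(100) NOT NULL,
        -- system-generated row validity period, hidden from SELECT *
        SysStartTime DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
        SysEndTime   DATETIME2 GENERATED ALWAYS AS ROW END   HIDDEN NOT NULL,
        PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
    )
    WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Employee_History));

    -- Query the table as it looked at a point in the past:
    SELECT * FROM dbo.Employee
    FOR SYSTEM_TIME AS OF '2016-01-10';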
A spatial database includes location data, with geometry stored as points, lines, and polygons. GIS combines spatial data from many sources and is used by many different people; databases connect those users to the GIS data.
For example, a city might have the wastewater division, land records, transportation, and fire
departments connected and using datasets from common spatial databases. Let’s take a closer look at
spatial databases and how we use them in GIS.
A spatial database system has three defining properties:
It is a database system.
It offers spatial data types (SDTs) in its data model and query language.
It supports spatial data types in its implementation, providing at least spatial indexing and efficient algorithms for spatial join.
Example
Vector data: This data is represented as discrete points, lines, and polygons.
Raster data: This data is represented as a matrix of square cells (pixels).
The spatial data in the form of points, lines, polygons etc. is used by many different databases as
shown above.
Spatial data represents information about the physical location and shape of geometric
objects. These objects can be point locations or more complex objects such as countries, roads, or
lakes.
SQL Server supports two spatial data types: the geometry data type and the geography data type.
Both data types are implemented as .NET common language runtime (CLR) data types in SQL
Server.
There are two types of spatial data. The geometry data type supports planar, or Euclidean
(flat-earth), data. The geometry data type both conforms to the Open Geospatial Consortium (OGC)
Simple Features for SQL Specification version 1.1.0 and is compliant with SQL MM (ISO standard).
SQL Server also supports the geography data type, which stores ellipsoidal (round-earth) data, such
as GPS latitude and longitude coordinates.
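As a short sketch, both types can be instantiated in T-SQL; the coordinates below are arbitrary illustrative values:

    -- Planar (flat-earth) geometry built from Well-Known Text
    DECLARE @g geometry = geometry::STGeomFromText('LINESTRING (0 0, 2 2, 4 0)', 0);

    -- Ellipsoidal (round-earth) geography point: latitude, longitude, SRID 4326 (GPS)
    DECLARE @p geography = geography::Point(47.65, -122.36, 4326);

    SELECT @g.STLength() AS planar_length,   -- length in planar units
           @p.Lat        AS latitude;        -- property of the geography point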
Tip
SQL Server Spatial Tools is a Microsoft-sponsored open-source collection of tools for use with the spatial types in SQL Server. The project provides a set of reusable functions that applications can make use of, such as data conversion routines, new transformations, and aggregates.
The geometry and geography data types support 16 types of spatial data objects, or instance types.
However, only 11 of these instance types are instantiable; you can create and work with these
instances (or instantiate them) in a database. These instances derive certain properties from their
parent data types.
The figure below shows the geometry hierarchy upon which the geometry and geography data types
are based. The instantiable types of geometry and geography are indicated in blue.
Simple types − data types for geographic features that can be perceived as forming a single unit; for example, individual residences and isolated lakes.
Collection types − data types for geographic features that are made up of multiple units or components; for example, canal systems and groups of islands in a lake.
GeometryCollection − a data type for geographic features of all kinds.
The subtypes for geometry and geography types are divided into simple and collection types. Some
methods like STNumCurves() work only with simple types.
Point
LineString
CircularString
CompoundCurve
Polygon
CurvePolygon
MultiPoint
MultiLineString
MultiPolygon
GeometryCollection
FullGlobe (geography data type only)
1. Static Spatial Operators :
Static operators do not modify the objects they are applied to; they include topological, projective, and metric operators.
1. Topological operators :
Topological operators are characterized by the fact that their results are preserved under topological transformations of the objects, such as translation and rotation.
Examples –
open (region), close (region), and inside (point, loop).
2. Projective operators :
Projective operators, like convex hull, are used to establish predicates regarding the concavity or convexity of objects. In this geometric sense, an object is convex if the straight line segment joining any two of its points lies entirely within the object; where this does not hold, the object has concavities.
Example –
being inside the concavity of an object.
3. Metric operators :
The task of metric operators is to provide a more accurate description of the geometry of the object. They are often used to measure the global properties of single objects, and to measure the relative position of different objects in terms of distance and direction.
Example –
length (of an arc) and distance (from point to point).
2. Dynamic Spatial Operators :
Dynamic operators change the objects upon which they are applied. Create, destroy, and update are the fundamental dynamic operations.
Example –
updating a spatial object via translate, rotate, scale up or down, reflect, and shear.
Operator               Description
SDO_POINTINPOLYGON     Takes a set of rows whose first column is a point's x-coordinate value and whose second column is a point's y-coordinate value, and returns those rows that are within a specified polygon.
SDO_WITHIN_DISTANCE    Determines if two geometries are within a specified distance from one another.
A related table lists operators, provided for convenience, that each perform an SDO_RELATE operation of a specific mask type.
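A sketch of how such an operator is used in an Oracle Spatial query; the customers table, its location column, and the coordinates are hypothetical:

    -- Find customers within 10 km of a query point
    SELECT c.name
    FROM customers c
    WHERE SDO_WITHIN_DISTANCE(
            c.location,
            SDO_GEOMETRY(2001, 4326,                       -- 2D point, WGS 84
                         SDO_POINT_TYPE(77.59, 12.97, NULL),
                         NULL, NULL),
            'distance=10 unit=KM') = 'TRUE';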
Spatial Queries
A spatial query is a set of spatial conditions, characterized by spatial operators, that forms the basis for the retrieval of spatial information from a spatial database system.
It is a request expressed as a combination of spatial conditions (e.g., Euclidean distance from a query point) for extracting specific information from a large amount of spatial data without actually changing these data.
A spatial query is a special type of database query supported by geodatabases. These queries differ from SQL queries in several important ways. Two of the most important are that they allow for the use of geometry data types such as points, lines, and polygons, and that they consider the spatial relationship between these geometries.
A spatial query can also use properties and/or relationships that are of a spatial nature and are not explicitly available in the BIM; to process such a query, the 3D geometry model is analyzed.
In short, requests for spatial data that require the use of spatial operations are called spatial queries.
The spatial indexing method divides the space into a manageable number of smaller subspaces, which can be further divided into smaller subspaces, and so on. The partitioning continues until each unpartitioned subspace contains objects that can be stored in a data page. While designing index structures for spatial databases, the storage space must be efficiently utilized and information retrieval should be fast and easy.
In the figure above, the number of lines that intersect the yellow star is one, the red line. But the bounding boxes of features that intersect the yellow box is two, the red and blue ones.
The way the database efficiently answers the question "what lines intersect the yellow star" is to first answer the question "what boxes intersect the yellow box" using the index (which is very fast) and then do an exact calculation of "what lines intersect the yellow star" only for those features returned by the first test.
For a large table, this "two pass" system of evaluating the approximate index first, then carrying out an exact test, can radically reduce the amount of calculation necessary to answer a query.
Both PostGIS and Oracle Spatial share the same "R-Tree" spatial index structure. R-Trees break up data into rectangles, and sub-rectangles, and sub-sub rectangles, etc. It is a self-tuning index structure that automatically handles variable data density, differing amounts of object overlap, and object size.
Data structures like B-trees have been designed for efficient insertion and deletion in databases. Spatial indexing is used to look up values that match a predicate in an efficient manner. There are two ways to provide spatial indexing:
i.) dedicated external spatial data structures are added to the system, offering for spatial attributes what a B-tree does for standard attributes, and
ii.) spatial objects are mapped into a one-dimensional space so that they can be stored within a standard one-dimensional index such as a B-tree.
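In PostGIS (mentioned above), the R-Tree index is implemented on top of GiST; a minimal sketch, with a hypothetical roads table:

    -- Create a spatial index on the geometry column
    CREATE INDEX roads_geom_idx ON roads USING GIST (geom);

    -- The planner uses the index for the bounding-box "first pass",
    -- and ST_Intersects performs the exact second-pass test
    SELECT name
    FROM roads
    WHERE ST_Intersects(geom, ST_MakeEnvelope(77.5, 12.9, 77.7, 13.1, 4326));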
Spatial data mining refers to the process of retrieving information or patterns that are not explicitly stored in spatial databases. Spatial data mining methods are used for better understanding of spatial data, identifying relationships between spatial and non-spatial data, query optimization in spatial databases, etc.
Statistical spatial analysis is the most commonly and widely used data mining technique. It assumes that the spatial data are independent, which in fact is not true, as spatial data are interrelated with their neighboring objects. Statistical methods cannot handle symbolic values and nonlinear rules, and are also very costly in computing results. Several machine learning techniques, such as learning from examples and generalization and specialization, are therefore also used in spatial data mining.
The Matheus architecture is the most general and widely used architecture in spatial data mining. This architecture is user controlled. All predefined information about the objects is stored in the knowledge base, which is fetched by the DB interface for query optimization. The information useful for pattern recognition is selected by the focus component and fed as input to pattern extraction. The output is then monitored and evaluated by the evaluation module, and duplicate values are removed. All the components interact through the controller.
Geographic data consists of spatial objects and non-spatial information about these objects (which can be stored in the database as a pointer to the spatial description of the object). Spatial data is characterised by geometric as well as topological characteristics, where geometric characteristics involve information about length, area, perimeter, etc., and topological characteristics include information about neighbours, intersection, etc.
Various methods have been designed for mining data related to geometric space, like points, polygons, rectangles, networks, and other complex objects. There are various kinds of rules associated with spatial data mining.
a.) Characteristic Rule: It refers to a general description of the object data. Example: a rule describing the general price range of shops in various geographic regions of a city.
b.) Discriminant Rule: It refers to the properties or features that distinguish one object from another. Example: a comparison of shop prices across different regions.
c.) Association Rule: It refers to the association of one object with another.
2.7 Applications
The following are examples of the kinds of data mining applications that could benefit from
including spatial information in their processing:
Property analysis: Use colocation rules to find hidden associations between proximity to a
highway and either the price of a house or the sales volume of a store.
Property assessment: In assessing the value of a house, examine the values of similar houses
in a neighborhood, and derive an estimate based on variations and spatial correlation.
Location Management: In cellular systems, a mobile unit is free to move around within the entire area of coverage. Its movement is random, and therefore its geographical location is unpredictable. This situation makes it necessary to locate the mobile unit and record its location in the HLR and VLR when a call has to be delivered to it.
Thus, the mobility management component of the cellular system is responsible for two tasks:
(a) location management − identification of the current geographical location or current point of attachment of a mobile unit, which is required by the MSC (Mobile Switching Center) to route the call.
(b) handoff − transferring (handing off) the current (active) communication session to the next base station, which seamlessly resumes the session using its own set of channels.
One of the main objectives of efficient location management schemes is to minimize the
communication overhead due to database updates (mainly HLR).
The current point of location of a subscriber (mobile unit) is expressed in terms of the cell or the base station to which it is presently connected. The mobile units (called and calling subscribers) can continue to talk and move around in their respective cells; but as soon as both or any one of the units moves to a different cell, the location management procedure is invoked to identify the new location.
The cost of update and paging increases as cell size decreases, which becomes quite significant for finer-granularity cells such as micro- or picocell clusters. The presence of frequent cell crossing, which is a common scenario in highly commuting zones, further adds to the cost. The system creates location areas and paging areas to minimize these costs.
A number of neighbouring cells are grouped together to form a location area, and the
paging area is constructed in a similar way. It is useful to keep the same set of cells for creating
location and paging areas, and in most commercial systems they are usually identical. This
arrangement reduces location update frequency because location updates are not necessary when a
mobile unit moves in the cells of a location area. A large number of schemes to achieve low cost
and infrequent update have been proposed, and new schemes continue to emerge as cellular
technology advances.
When a mobile unit moves to a different cell while in doze or power-down mode, it is neither possible nor necessary for the location manager to find its location.
The location management module uses a two-tier scheme for location-related tasks. The first tier provides a quick location lookup, and the second-tier search is initiated only when the first-tier search fails.
Location Lookup: A location lookup finds the location of the called party to establish the communication session. It involves searching the VLR and possibly the HLR; Figure 3.1 illustrates this lookup process.
Handoff technology
In cellular communications, handoff is the process of transferring an active call or data session from one cell in a cellular network to another, or from one channel to another. In satellite communications, it is the process of transferring control from one earth station to another. Handoff is necessary for preventing loss or interruption of service to a caller or a data session user. Handoff is also called handover.
Types of Handoffs
Mobile Assisted Handoff (MAHO) is a technique in which mobile devices assist the Base Station Controller (BSC) in transferring a call to another BSC. It is used in GSM cellular networks. In other systems, like AMPS, handoff is solely the job of the BSC and the Mobile Switching Centre (MSC), without any participation of the mobile device. In GSM, however, when a mobile station is not using its time slots for communicating, it measures the signal quality of nearby base stations and sends this information to the BSC. The BSC performs the handoff based on this information.
2.9 Deductive database
A deductive database is a database system that can make deductions (i.e. conclude additional
facts) based on rules and facts stored in the (deductive) database. Datalog is the language typically
used to specify facts, rules and queries in deductive databases. Deductive databases have grown out
of the desire to combine logic programming with relational databases to construct systems that
support a powerful formalism and are still fast and able to deal with very large datasets. Deductive
databases are more expressive than relational databases but less expressive than logic programming
systems. In recent years, deductive databases such as Datalog have found new application in data
integration, information extraction, networking, program analysis, security, and cloud computing.
Deductive databases reuse many concepts from logic programming; rules and facts specified in the deductive database language Datalog look very similar to those in Prolog. However, there are important differences between deductive databases and logic programming:
Order sensitivity and procedurality: In Prolog, program execution depends on the order of rules
in the program and on the order of parts of rules; these properties are used by programmers to
build efficient programs. In database languages (like SQL or Datalog), however, program
execution is independent of the order of rules and facts.
Special predicates: In Prolog, programmers can directly influence the procedural evaluation of the program with special predicates such as the cut; these have no correspondence in deductive databases.
Function symbols: Logic programming languages allow function symbols to build up complex terms. This is not allowed in deductive databases.
Set-oriented versus tuple-oriented processing: Deductive databases use set-oriented processing, while logic programming languages concentrate on one tuple at a time.
A deductive database is a type of database that can make conclusions, or deductions, using a set of well-defined rules and facts that are stored in the database. In today's world, where we deal with large amounts of data, deductive databases provide many advantages. They help to combine an RDBMS with logic programming. To design a deductive database, a purely declarative programming language called Datalog is used.
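For example, the classic Datalog rules ancestor(X, Y) :- parent(X, Y) and ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y) derive all ancestor facts from stored parent facts. As a sketch of the same deduction in a relational setting, here is an equivalent recursive query in PostgreSQL-style SQL; the parent_of table is hypothetical:

    WITH RECURSIVE ancestor(anc, des) AS (
        -- base case: every parent is an ancestor
        SELECT parent, child FROM parent_of
        UNION
        -- recursive case: an ancestor of a parent is an ancestor of the child
        SELECT a.anc, p.child
        FROM ancestor a
        JOIN parent_of p ON p.parent = a.des
    )
    SELECT * FROM ancestor;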
The implementations of deductive databases can be seen in LDL (Logic Data Language),
NAIL (Not Another Implementation of Logic), CORAL, and VALIDITY.
LDL and VALIDITY have been used in a variety of business/industrial applications:
1. LDL Applications:
This system has been applied to the following application domains:
Enterprise modelling
Hypothesis testing or data dredging
Software reuse
2. VALIDITY Applications:
Electronic commerce
Rules-governed processes
Knowledge discovery
Concurrent Engineering
A multimedia database stores multimedia data and information related to it. This is described in detail as follows −
Media data
This is the multimedia data that is stored in the database such as images, videos, audios, animation
etc.
Media format data
The Media format data contains the formatting information related to the media data such as
sampling rate, frame rate, encoding scheme etc.
Media keyword data
This contains the keyword data related to the media in the database. For an image the keyword data
can be date and time of the image, description of the image etc.
Media feature data
The Media feature data describes the features of the media data. For an image, the feature data can be the colours of the image, the textures in the image, etc.
There are many challenges in implementing a multimedia database. Some of these are:
Multimedia databases contain data in a large variety of formats, such as .txt (text), .jpg (images), .swf (videos), .mp3 (audio), etc. It is difficult to convert one type of data format to another.
A multimedia database requires a large amount of storage, as multimedia data is quite large and needs to be stored successfully in the database.
It takes a lot of time to process multimedia data, so multimedia databases are slow.
Multimedia databases are the main source of interaction between users and multimedia elements.
Multimedia storage is characterised by the following −
Massive storage volumes.
Large object sizes.
Multiple related objects.
Temporal requirements for retrieval.
A multimedia database system stores and manages a large collection of multimedia data, such as audio, video, image, graphics, speech, text, document, and hypertext data, which contain text, text markups, and linkages. Multimedia database systems are increasingly common owing to the popular use of audio-video equipment, digital cameras, CD-ROMs, and the Internet. Examples of multimedia database systems include NASA's EOS (Earth Observation System), various kinds of image and audio-video databases, and Internet databases.
There are two main groups of multimedia indexing and retrieval systems, which are as follows −
Description-based retrieval systems − These build indices and perform object retrieval based on image descriptions, such as keywords, captions, size, and time of creation. Description-based retrieval is labor-intensive if performed manually; if automated, the results are typically of poor quality. For instance, the assignment of keywords to images can be a difficult and arbitrary task. Recent development of Web-based image clustering and classification techniques has improved the quality of description-based Web image retrieval, because image-surrounding text and Web linkage information can be used to extract proper descriptions and group images describing a similar theme together.
Content-based retrieval systems − These support retrieval based on the image content, such as color histogram, texture, pattern, image topology, and the shapes of objects and their layouts and locations within the image. Content-based retrieval uses visual characteristics to index images and improves object retrieval based on feature similarity, which is highly desirable in many applications.
In a content-based image retrieval system, there are often two kinds of queries − image-sample-based queries and image feature specification queries. Image-sample-based queries find all of the images that are similar to the given image sample by comparing the feature vector (or signature) extracted from the sample with the feature vectors of images that have been extracted and indexed in the image database.
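As an illustrative sketch only (the source does not name a specific system), an image-sample-based query over stored feature vectors can be expressed in SQL with the PostgreSQL pgvector extension; the images table and the tiny 4-dimensional signatures are hypothetical:

    -- Feature vectors (signatures) stored alongside image metadata
    CREATE EXTENSION IF NOT EXISTS vector;
    CREATE TABLE images (
        id        SERIAL PRIMARY KEY,
        uri       TEXT,
        signature vector(4)   -- tiny dimension, for illustration only
    );

    -- Find the 10 images whose signatures are nearest to the sample's
    SELECT uri
    FROM images
    ORDER BY signature <-> '[0.12, 0.40, 0.33, 0.15]'   -- distance to the sample's vector
    LIMIT 10;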