
XMLType Datatype In Oracle9i

Oracle9i has a dedicated XML datatype called XMLTYPE. It is made up of a CLOB to store the
original XML data and a number of member functions to make the data available to SQL. In this
article I'll present a simple example of its use.

First we must create a table to store XML documents using the XMLTYPE datatype:
CREATE TABLE tab1 (
col1 SYS.XMLTYPE
);
The table can be populated using XML from a CLOB, VARCHAR2 or an XMLTYPE generated from a
query:
DECLARE
  v_xml  SYS.XMLTYPE;
  v_doc  CLOB;
BEGIN
  -- XMLTYPE created from a CLOB
  v_doc := '<?xml version="1.0"?>' || Chr(10) ||
           '<TABLE_NAME>MY_TABLE</TABLE_NAME>';
  v_xml := SYS.XMLTYPE.createXML(v_doc);

  INSERT INTO tab1 (col1) VALUES (v_xml);

  -- XMLTYPE created from a query
  SELECT SYS_XMLGen(table_name)
  INTO   v_xml
  FROM   user_tables
  WHERE  rownum = 1;

  INSERT INTO tab1 (col1) VALUES (v_xml);

  COMMIT;
END;
/
The data in the table can be viewed using the following query:
SET LONG 1000
SELECT a.col1.getStringVal()
FROM tab1 a;

A.COL1.GETSTRINGVAL()
--------------------------------------------------------------------------------
<?xml version="1.0"?>
<TABLE_NAME>MY_TABLE</TABLE_NAME>

<?xml version="1.0"?>
<TABLE_NAME>TAB1</TABLE_NAME>

2 rows selected.

SQL>
We can extract the value of specific tags using:
SELECT a.col1.extract('//TABLE_NAME/text()').getStringVal() AS "Table Name"
FROM tab1 a
WHERE a.col1.existsNode('/TABLE_NAME') = 1;

Table Name
--------------------------------------------------------------------------------
MY_TABLE
TAB1

2 rows selected.

SQL>
In the above example I was expecting a string, but NUMBERs and CLOBs can be returned using
getNumVal() and getClobVal() respectively. Since the XMLTYPE datatype can contain any
XML document it is sensible to limit the query to those rows which contain the relevant tags,
hence the WHERE clause.
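
For instance, the same extraction could hand back a CLOB instead of a string. This sketch reuses the tab1 table from above; getClobVal() and existsNode() are the documented XMLTYPE member functions:

```sql
-- Sketch: return the extracted tag content as a CLOB rather than a VARCHAR2.
SELECT a.col1.extract('//TABLE_NAME/text()').getClobVal() AS table_name_clob
FROM   tab1 a
WHERE  a.col1.existsNode('/TABLE_NAME') = 1;
```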

For more information see:


 Oracle9i XML Articles
 XMLType API for PL/SQL

...Hope this helps. Regards Tim

WITH Clause
The WITH clause, or subquery factoring clause, is part of the SQL-99 standard and was added
into the Oracle SQL syntax in Oracle 9.2. This article shows how it can be used to reduce
repetition and simplify complex SQL statements.

Note. I'm not suggesting the following queries are the best way to retrieve the required
information. They merely demonstrate the use of the WITH clause.

Using the SCOTT schema, for each employee we want to know how many other people are in
their department. Using an inline view we might do the following.
SELECT e.ename AS employee_name,
       dc.dept_count AS emp_dept_count
FROM   emp e,
       (SELECT deptno, COUNT(*) AS dept_count
        FROM   emp
        GROUP BY deptno) dc
WHERE  e.deptno = dc.deptno;
Using a WITH clause this would look like the following.
WITH dept_count AS (
  SELECT deptno, COUNT(*) AS dept_count
  FROM   emp
  GROUP BY deptno)
SELECT e.ename AS employee_name,
       dc.dept_count AS emp_dept_count
FROM   emp e,
       dept_count dc
WHERE  e.deptno = dc.deptno;
The difference seems rather insignificant here.

What if we also want to pull back each employee's manager's name and the number of people in
the manager's department? Using the inline view it now looks like this.
SELECT e.ename AS employee_name,
       dc1.dept_count AS emp_dept_count,
       m.ename AS manager_name,
       dc2.dept_count AS mgr_dept_count
FROM   emp e,
       (SELECT deptno, COUNT(*) AS dept_count
        FROM   emp
        GROUP BY deptno) dc1,
       emp m,
       (SELECT deptno, COUNT(*) AS dept_count
        FROM   emp
        GROUP BY deptno) dc2
WHERE  e.deptno = dc1.deptno
AND    e.mgr = m.empno
AND    m.deptno = dc2.deptno;
Using the WITH clause this would look like the following.
WITH dept_count AS (
  SELECT deptno, COUNT(*) AS dept_count
  FROM   emp
  GROUP BY deptno)
SELECT e.ename AS employee_name,
       dc1.dept_count AS emp_dept_count,
       m.ename AS manager_name,
       dc2.dept_count AS mgr_dept_count
FROM   emp e,
       dept_count dc1,
       emp m,
       dept_count dc2
WHERE  e.deptno = dc1.deptno
AND    e.mgr = m.empno
AND    m.deptno = dc2.deptno;
So we don't need to redefine the same subquery multiple times. Instead we just use the query
name defined in the WITH clause, making the query much easier to read.

Even when there is no repetition of SQL, the WITH clause can simplify complex queries, like the
following example that lists those departments with above average wages.
WITH
dept_costs AS (
  SELECT dname, SUM(sal) dept_total
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno
  GROUP BY dname),
avg_cost AS (
  SELECT SUM(dept_total)/COUNT(*) avg
  FROM   dept_costs)
SELECT *
FROM   dept_costs
WHERE  dept_total > (SELECT avg FROM avg_cost)
ORDER BY dname;
In the previous example, the main body of the query is very simple, with the complexity hidden in
the WITH clause.
For more information see:
 subquery_factoring_clause

Hope this helps. Regards Tim...

Handling NULLS in SQL statements


James Koopmann, [email protected]

For some reason, the handling of NULLs in SQL statements confuses many people. This
article takes a look at how to think about NULLs.

The value of NULL has confused many, and has even started feuds across the internet
over the slightest slip of the tongue. Hopefully I will not slip myself here.

I often wonder why there is such confusion over such a simple concept. I think
the confusion stems from the fact that, as one of my early professors put it,
statements about the supposed value of NULL have stuck in our brains. My
professor made a statement something like this: "NULL has a value, we just don't
know what it is". I personally think that because people place the term "value"
around the word NULL, it is for some reason given some form of literal value.
After all, we can use columns that contain NULL in conditional comparisons.
Simply stated, a column that contains a NULL is said to have no value, or to be
unknown. I personally think we should avoid any reference to a "value" when
talking about NULL. So instead of saying "it contains a NULL value" we should
just say "it contains a NULL".

For simplicity's sake, we should first talk about how NULLs get into a column.
There are basically two ways a NULL can get into a column of a table. I will
focus on the INSERT statement here for simplicity.

Assume we have a table called TABLE_NULL with the following columns.

CREATE TABLE table_null
  (id   NUMBER NOT NULL,
   name VARCHAR2(10));

We can then INSERT into the table by explicitly stating the NULL keyword.

SQL> INSERT INTO table_null values (1, NULL);


Alternatively, we can imply a NULL by omitting the nullable column from the
column list. Notice that you must still specify the required (NOT NULL) columns
in the insert list.

SQL> INSERT INTO table_null (ID) values (2);

Warning: Oracle allows for the insertion of a NULL through the use of '' as an
empty value. This can be done for character and numeric fields alike.

SQL> INSERT INTO table_null values (1, '');

Even though this is quite a widespread practice, it should be avoided at all
costs, as Oracle has stated they may change this "feature". In addition, as will
be shown later in this article, it might produce a false sense of value for the
NULLs entered.

Getting a NULL into a column is not typically the hard part of dealing with
NULLs. It is the extraction of information from a table that gets a bit
confusing. For this part of the article, I will focus on the following two rows
of data in our TABLE_NULL table. (An absence of data means the column contains
a NULL.)

        ID NAME
---------- ----------
         1 1
         2

When testing a column for a NULL you must use either 'IS NULL' or 'IS NOT
NULL'. A condition in SQL evaluates to TRUE, FALSE, or UNKNOWN, and any
comparison against a NULL other than 'IS NULL' or 'IS NOT NULL' evaluates to
UNKNOWN (not TRUE or FALSE). A few quick examples help here. Notice that the
use of equality '=', while it may feel like it should evaluate to FALSE,
actually evaluates to UNKNOWN and returns nothing.

SQL> SELECT * FROM table_null WHERE name IS NULL;


        ID NAME
---------- ----------
         2
1 row selected.

SQL> SELECT * FROM table_null WHERE name IS NOT NULL;


        ID NAME
---------- ----------
         1 1
1 row selected.

SQL> SELECT * FROM table_null WHERE name = NULL;


no rows selected
The evaluation of a condition to UNKNOWN is a key principle when writing and
interpreting the results of a SQL statement. There is a big difference here:
Oracle cannot evaluate the condition and thus does not return a row. While our
minds might think NULL = NULL, that is not the case. Really we should be asking
ourselves whether "something unknown" equals "something unknown". Our logic and
expectations would then more closely resemble the actual result set.
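
A quick way to see this three-valued logic in action against the same demo table: comparing a column to itself still excludes the NULL row, because NULL = NULL is UNKNOWN, not TRUE.

```sql
-- name = name is UNKNOWN for the row whose name is NULL,
-- so only the row with a non-NULL name comes back.
SELECT * FROM table_null WHERE name = name;
```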

This UNKNOWN can cause havoc in some programming out there. A long time ago,
when counting rows in a table that met some condition, we were told not to use
the '*' in the counting of rows in a SQL statement. For instance, we would get
the following result for our demo data.

SQL> SELECT COUNT(*) FROM table_null;


COUNT(*)
----------
2
1 row selected.

Over time we were told to replace the '*' in the COUNT function with a column in
the table. This was supposed to avoid some performance problems in the Oracle
engine. So if we replaced the '*' with the column ID we would get the following.

SQL> SELECT COUNT(id) FROM table_null;


COUNT(ID)
----------
2
1 row selected.

Now suppose we happen to choose a column that was NULLABLE. Or let's say that
over time the name column, which once was a NOT NULL column, became NULLABLE.
The following SQL statement would produce the results below. You can see that
this could cause a huge problem for any subsequent processing in an
application. The reason is that COUNT, like most aggregate functions, ignores
NULLs: a row whose column contains a NULL is simply not counted.

SQL> SELECT COUNT(name) FROM table_null;


COUNT(NAME)
-----------
1
1 row selected.

One way to remedy some of the hassles of using NULLs is to wrap columns of
concern in functions such as DECODE or NVL. Doing this gives the coder the
option to replace NULLs with something more acceptable to the SQL being
performed. For instance, the previous COUNTing example could be helped along to
"possibly" get the required results with the following SQL.
SQL> SELECT COUNT(NVL(name,0)) FROM table_null;
COUNT(NVL(NAME,0))
------------------
2
1 row selected.

While databases, tables, and columns were intended to store values, there will
no doubt be times when we have to store something quite different from a value.
Is a nullable column a placeholder for future values, or is it something quite
different? Remember that NULL isn't something more or less than a value; it
isn't an empty string, it isn't zero, it is simply something we don't know.

Serving up Server Alerts


Steve Callan, [email protected]

How and when do alerts or informational messages about what’s taking place
inside your database make their way out to you, the DBA par excellence? There
are several ways, some free and some hand-crafted, to expose alerts and
messages. Let’s face it, as the Oracle RDBMS engine becomes more and more
Skynet-Terminator-Judgement Day-aware, keeping track of what’s taking place
inside an instance has become easier and harder at the same time. The easier
aspect of this statement is evidenced by more sophisticated monitoring tools and
interfaces, and the harder part is borne out by the sheer number of metrics that
are available to monitor.

Let’s start off with a simple peek inside the database option.

Tail the Alert Log

A commonly used quick and dirty monitoring tool in UNIX-based environments


(AIX, HP, Solaris, and Linux) is a simple script to tail “X” number of lines out of
the alert log, and then search (grep) the extract for whatever is of interest to you.
Specific ORA-xxxxx errors can be searched, or to make things even simpler, the
search can be based on any ORA error. If an ORA error appears, then an email
is fired off via a mail transfer agent (MTA) to one or more addresses.

The steps can be summarized by the shell script pseudo code below:

#!/usr/bin/ksh
# Adjust the path and line count to suit your environment.
tail -100 $ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log > alert.log
COUNT=`grep -c "ORA-" alert.log`
if [ "$COUNT" -gt 0 ]
then
  mail -s "Check alert log" [email protected] < alert.log
fi

Several features need to be in place for this scheme. First, whoever runs the
script (whether a person or a machine account such as oracle) needs appropriate
file system permissions to read the alert log and to write the extract.

Second, your MTA can be as simple as “mail” (or mailx, depending on your
flavor/version of UNIX). Chances are your UNIX admin already has UNIX mail
working as no doubt much of his or her watchfulness is notification after the fact
as opposed to scanning logs all day long (which is pretty much what this is for
you as well).

Third, you need something to read mail yourself, so that implies something along
the lines of Outlook/Exchange Server in your company’s office. Assuming you
have been assimilated by the Borg, oops, I mean Microsoft, then the email
address shown in the example would stand out to those familiar with aliases or
mail groups. Otherwise, have the script “cat” a file with email addresses in it and
loop through the addresses.

Fourth, you need something to execute the tail job on a periodic basis, as you
are pulling the alert log information as opposed to having it pushed to you,
and what better than a cron job to manage this aspect of the process. The cron
can run every ten minutes (as an example) all week long. While crons are very
reliable, the job cannot guarantee that it will catch an ORA error. One way to
help ensure that your tail of 100 lines does not miss the ORA error at the
101st line (i.e., you missed it by one line) is to grab enough lines to
increase the likelihood that the extract will contain at least the last ten
minutes of alert log activity. Better to grab too much of the alert log than
not enough.
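
As a sketch, the crontab entry for such a ten-minute cycle might look like this (the script name and path are assumptions):

```
# Run the alert log check every ten minutes, all week long.
0,10,20,30,40,50 * * * * /home/oracle/scripts/check_alert.sh
```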

As a variation on what is emailed to you, don’t include the entire alert log extract
in the DBA alert email. You only need a subject line telling you to inspect the alert
log as opposed to sending (and waiting) multiple KB worth of text, especially if
you’re receiving email on a PDA while on call.

Check for Required Processes

A variation (or complement) of the alert log scan is an existence check for
required processes. As a minimum, does the script need to check for PMON,
SMON, DBWn, LGWR, and CKPT? The answer is: not really. Checking for PMON by
itself, as an example, is sufficient. No PMON means no instance, which in turn
means no running database (assuming a single instance/single database pairing).

Between an alert log scan and an instance checking “is my database up” script,
the instance checking version is more of a superset of the alert log scan. Here is
why this is so: is an alert log going to be written to if the instance is no longer
running?

Or, looked at this way, can an instance still be viable if it encounters or
detects an ORA error? Yes it can, and a deadlock is an excellent example of
this scenario.
Deadlock detected, trace file info is written to the alert log, one session’s
transaction is essentially cancelled, and life goes on because absolutely nothing
is wrong with the database. Remember, Oracle’s philosophy on deadlocks is that
when they do occur, it is because of something you caused via code, not
something that is a shortcoming or error within Oracle.

Knocking down an instance by killing a required process typically generates
alert log information, and can be easily demonstrated. On Windows, use the
orakill utility to kill a SPID associated with a SID (the counterpart of
kill -9 PID in UNIX). Use a query like the one below to obtain a SPID.

select c.name, b.spid, a.sid
from   v$session a, v$process b, v$bgprocess c
where  c.paddr <> '00'
and    c.paddr = b.addr
and    b.addr = a.paddr;

NAME  SPID         SID
----- ------------ ----------
PMON  288          170
MMAN  536          168
DBW0  2596         167
LGWR  3936         166
CKPT  3252         165
SMON  3400         164
RECO  2432         163

We’ll use 288 (for PMON) as one of the parameters for orakill.
The alert log then records information about instance failure, and you can see the
ripple effect among the trace files related to other processes (not all alert entries
are shown)..

Tue May 22 01:46:24 2007
LGWR: terminating instance due to error 472
Tue May 22 01:46:25 2007
Errors in file c:\oracle\product\10.2.0\admin\db10\bdump\db10_ckpt_3252.trc:
ORA-00472: PMON process terminated with error
Tue May 22 01:46:26 2007
Errors in file c:\oracle\product\10.2.0\admin\db10\bdump\db10_dbw0_2596.trc:
ORA-00472: PMON process terminated with error
Tue May 22 01:46:31 2007
Errors in file c:\oracle\product\10.2.0\admin\db10\bdump\db10_reco_2432.trc:
ORA-00472: PMON process terminated with error
Tue May 22 01:46:31 2007
Errors in file c:\oracle\product\10.2.0\admin\db10\bdump\db10_smon_3400.trc:
ORA-00472: PMON process terminated with error
Instance terminated by LGWR, pid = 3936


Going beyond alert logs and background processes

We can get much more information about what’s going on inside a database with
the DBMS_SERVER_ALERT built-in PL/SQL package. In fact, more than 140
metrics are available, and the alert threshold values for many of these can be
adjusted to suit your particular needs.

One alert or metric you may find to be useful involves the detection of blocking,
the “silent” show stopper of Oracle. Blocking can go on for hours and hours with
no discernible or externally noticeable signs of it taking place. Blocking is usually
detected when users start to complain about hung sessions, followed by calls
about not being able to log in, and when scripted jobs fail to complete (noticed by
you or others). Aside from manually detecting blocking, wouldn’t it be nice to be
alerted when Oracle detects a blocking situation? In Oracle 10g, we can do
exactly that.
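
As a sketch of what that looks like, the blocked-session threshold can be set with the DBMS_SERVER_ALERT package; the threshold values below are only examples, not recommendations:

```sql
-- Sketch: raise a warning at 2 blocked sessions and a critical alert at 4.
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.BLOCKED_USERS,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '2',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '4',
    observation_period      => 1,    -- minutes
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_SESSION,
    object_name             => NULL);
END;
/
```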

One of the configurable metrics is for blocked user sessions, and it comes with
its own graph. The “Metric Value” picture below is a result of the competing
update statements shown in the SQL*Plus session windows (with an output of
the blocking info below that).
Blocking is really quite insidious, and user sessions in an OLTP database can
stack up in no time at all. From a customer service perspective, you can be
certain your company would hate to have customers dissatisfied with the Web
site that manages their personal account information, mailing/shipping
preferences, and any number of service-oriented functions. With server-managed
alerts, you can be one of the first to know about this situation as opposed to
being practically the last to know.

In Closing

In the next article about serving up server alerts, we’ll go into detail about two
ways to configure and manage server alert/metric settings: using the
DBMS_SERVER_ALERT package and its GUI counterpart in Database Control.

Back to DBAsupport.com

DBA Interview Questions


Sean Hull, [email protected]
There are nearly an infinite number and combination of questions one can pose
to a DBA candidate in an interview. I prefer to lean towards the conceptual,
rather than the rote, as questions of this kind emphasize your foundation and
thorough understanding. Besides, I've never been one to remember facts and
details I can look up in a reference. With that in mind, here are some
brainteasers for you to ponder.

1. Why is a UNION ALL faster than a UNION?

The union operation, you will recall, brings two sets of data together. It will
*NOT* however produce duplicate or redundant rows. To perform this feat of
magic, a SORT operation is done on the combined result. This is obviously
computationally intensive, and uses significant memory as well. A UNION ALL,
conversely, simply concatenates both sets, in no particular order, without
worrying about duplicates.
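
A quick illustration using the familiar SCOTT schema tables:

```sql
-- UNION sorts the combined set to remove duplicates.
SELECT deptno FROM emp
UNION
SELECT deptno FROM dept;      -- each department number appears once

-- UNION ALL skips that step entirely.
SELECT deptno FROM emp
UNION ALL
SELECT deptno FROM dept;      -- every row from both queries, duplicates included
```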

2. What are some advantages to using Oracle's CREATE DATABASE statement to
create a new database manually?

 You can script the process to include it in a set of install scripts you deliver
with a product.
 You can put your create database script in CVS for version control, so as
you make changes or adjustments to it, you can track them like you do
changes to software code.

 You can log the output and review it for errors.

 You learn more about the process of database creation, such as what
options are available and why.

3. What are three rules of thumb to create good passwords? How would a
DBA enforce those rules in Oracle? What business challenges might you
encounter?

Typical password cracking software uses a dictionary in the local language, as
well as a list of proper names, and combinations thereof, to attempt to guess
unknown passwords. Since computers can churn through tens of thousands of
attempts quickly, this can be a very effective way to break into a database. A
good password therefore should not be a dictionary word, and it should not be a
proper name, birthday, or other easily guessable information. It should also be
of sufficient length, such as eight to ten characters, including upper and
lowercase letters, special characters, and even alternate characters if
possible.

Oracle has a facility called password security profiles. When installed, they
can enforce complexity and length rules, as well as other password-related
security measures.
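
A minimal sketch of such a profile (the profile name is made up; verify_function is the complexity checker that Oracle's utlpwdmg.sql script creates, and the limits here are examples only):

```sql
-- Sketch: enforce lockout, lifetime, reuse, and complexity rules.
CREATE PROFILE secure_profile LIMIT
  FAILED_LOGIN_ATTEMPTS    5
  PASSWORD_LOCK_TIME       1
  PASSWORD_LIFE_TIME       90
  PASSWORD_REUSE_MAX       10
  PASSWORD_VERIFY_FUNCTION verify_function;

-- Attach the profile to a user.
ALTER USER scott PROFILE secure_profile;
```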
In the security arena, passwords can be made better, and it is a fairly solvable
problem. However, what about in the real-world? Often the biggest challenge is
in implementing a set of rules like this in the enterprise. There will likely be a lot
of resistance to this, as it creates additional hassles for users of the system who
may not be used to thinking about security seriously. Educating business folks
about the real risks, by coming up with real stories of vulnerabilities and break-ins
you've encountered on the job, or those discussed on the internet goes a long
way towards emphasizing what is at stake.

4. Describe the Oracle Wait Interface, how it works, and what it provides.
What are some limitations? What do the db_file_sequential_read and
db_file_scattered_read events indicate?

The Oracle Wait Interface refers to Oracle's data dictionary for managing wait
events. Selecting from views such as v$system_event and v$session_event gives
you event totals through the life of the database (or session). The former
holds totals for the whole system, the latter on a per-session basis. The event
db_file_sequential_read refers to single block reads and table accesses by
rowid. db_file_scattered_read conversely refers to full table scans. It is so
named because the blocks are read and scattered into the buffer cache.
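
As a sketch, a quick look at the heaviest waits since instance startup might be:

```sql
-- Sketch: top system-wide wait events, worst first.
SELECT event, total_waits, time_waited
FROM   v$system_event
ORDER BY time_waited DESC;
```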

5. How do you return the top-N results of a query in Oracle? Why doesn't
the obvious method work?

Most people think of using the ROWNUM pseudocolumn with ORDER BY.
Unfortunately the ROWNUM is determined *before* the ORDER BY so you don't
get the results you want. The answer is to use a subquery to do the ORDER BY
first. For example to return the top-5 employees by salary:

SELECT *
FROM   (SELECT * FROM employees ORDER BY salary DESC)
WHERE  ROWNUM <= 5;

6. Can Oracle's Data Guard be used on Standard Edition, and if so how? How can
you test that the standby database is in sync?

Oracle's Data Guard technology is a layer of software and automation built on
top of the standby database facility. With Oracle Standard Edition it is
possible to maintain a standby database, updating it *manually*. Roughly: put
your production database in archivelog mode. Create a hot backup of the
database and move it to the standby machine. Then create a standby controlfile
on the production machine, and ship that file, along with all the archived
redolog files, to the standby server. Once you have all these files assembled,
place them in their proper locations and recover the standby database, and
you're ready to roll. From this point on, you must manually ship, and manually
apply, the archived redologs to stay in sync with production.
To test your standby database, make a change to a table on the production
server, and commit the change. Then manually switch a logfile so those changes
are archived. Manually ship the newest archived redolog file, and manually apply
it on the standby database. Then open your standby database in read-only
mode, and select from your changed table to verify those changes are available.
Once you're done, shutdown your standby and startup again in standby mode.
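
The test cycle above can be sketched as a handful of SQL*Plus commands (run on the machine each comment indicates; file transfer between the servers is left out):

```sql
-- On production: force the current redo log to be archived.
ALTER SYSTEM SWITCH LOGFILE;

-- Copy the newest archived log to the standby server, then on the standby:
RECOVER STANDBY DATABASE;          -- apply the shipped archived redo
ALTER DATABASE OPEN READ ONLY;     -- query the changed table to verify
```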

7. What is a database link? What is the difference between a public and a
private database link? What is a fixed user database link?

A database link allows you to make a connection with a remote database, Oracle
or not, and query tables from it, even incorporating those accesses with joins to
local tables.

A private database link only works for, and is accessible to, the user/schema
that owns it. A public one can be accessed by any user in the database.

A fixed user link specifies that you will connect to the remote db as one and only
one user that is defined in the link. Alternatively, a current user database link will
connect as the current user you are logged in as.
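
A short sketch of a private, fixed user link (the link name, credentials, and service name are made up for illustration):

```sql
-- Always connects to the remote database as the user named in the link.
CREATE DATABASE LINK sales_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'remote_db';

-- Query a remote table through the link.
SELECT * FROM emp@sales_link;
```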

As you prepare for your DBA interview, or prepare to give one, we hope these
questions provide some new ideas and directions for your study. Keep in mind
that there are a lot of directions an interview can go. As a candidate,
emphasize what you know, even if it is not the direct answer to the question;
as an interviewer, allow the interview to go in creative directions. In the
end, what is important is potential or aptitude, not specific memorized
answers. So listen for problem solving ability, and thinking outside the box,
and you will surely find, or be, the candidate for the job.

Using Oracle's SQL Functions


Steve Callan, [email protected]

Oracle provides quite an array of functions when it comes to manipulating data
via SQL. The Oracle9i SQL Reference guide (for release 2) lists five categories
of SQL functions, each containing one or more functions. The major categories
are single-row, aggregate, analytic, object reference and user defined.

The single-row function category contains the following: number, character,
datetime, conversion and miscellaneous single-row functions. What are
single-row functions? The definition in the SQL Reference guide states that
single-row "functions return a single result row for every row of a queried
table or view. These functions can appear in select lists, WHERE clauses, START
WITH and CONNECT BY clauses, and HAVING clauses." Aggregate functions are also
quite powerful. These functions "return a single result row based on groups of
rows, rather than on single rows." In general, single-row and aggregate
functions complement one another.

Of particular interest for this new series are the functions related to numbers.
How does this relate to your job as a DBA? Answer: in several ways. First, you
may have to support a Decision Support group. Decision support groups
frequently use analytic functions and statistical methods to support a business
decision. It helps to understand (or at least recognize the name) some of the
analytic tools being used. Second, you may be working in a data warehouse type
of environment where YOU are the decision support guru. Granted, your job may
be to simply massage or extract the data for others to analyze, but you are the
SQL expert by virtue of being the DBA. If you are asked to find the regression
line for quantity produced versus sales, you will make a better impression by not
having that deer in the headlights look come across your face.

A third reason has to do with adding value to your company by saving it some
money. How does that work? Well, companies that need to
analyze data often purchase sophisticated tools such as SPSS and SAS. These
products are not cheap. What frequently happens is that users of these high-end
statistical packages wind up using very few features. Many of these features are
the same ones found in Excel. Already you are thinking about how to extract data
into a comma-separated value flat file for use in Excel. However, what if you
didn't need to do that "export" and could do the same exact thing within Oracle
itself? You have already paid for Oracle, and likely have Excel on every PC, but
have the overhead of extracting the data and then having to do whatever in
Excel. How about skipping the Excel step and performing the data analysis within
Oracle? Note: by "Oracle," I mean the RDBMS product and not anything having
to do with Oracle Apps. We are talking about using the database as a high-end
calculator, so to speak.

One thing that goes hand-in-hand with analysis is some type of pictorial
representation of what is being analyzed. Although Oracle does provide other
tools for accomplishing this, that falls more into the Forms & Reports developer
realm, so we'll say that is a possible solution and leave it at that. So even if there
is a heavy chart/graph/histogram/whatever requirement to be met by using Excel
(or something else), you can still use Oracle as a backup to check the work
performed by someone else using a different tool.

Let's look at the linear regression aggregate function as an example of what
Oracle can do as an analysis engine. If you hate math, the next few paragraphs
may be painful, but you'll be able to follow along without getting buried in
the theorems and formulas.

What is linear regression? Let's say you have an input, like list price of some
product your company produces. The lower the price, the more your company
sells of that product. Conversely, the higher the price, the fewer your company
sells. Over time, you plot selling price versus quantity sold. When you look at the
dots or plots on a piece of graph paper, perhaps the arrangement of the dots
tends to suggest drawing a straight line, which generally comes close to
connecting most of the dots. If the dots are pretty close to the line, then you may
have a strong relationship between the X and Y (selling price and quantity
produced). In fact, the line may be such a good fit that you can come close to
predicting the quantity sold if given the selling price, and vice versa.

On the other hand, the points, when plotted, may look like a hazy cloud on the
graph paper - there is no distinct line or trend between the X and Y values.
Because there is no line which "fits" or connects the points, there is a weak
relationship between X and Y (a list price of 52 may be just as likely to result in a
quantity sold of 12, 17, or 18 - there is no or very little predictive power knowing
one thing or the other).

The general equation of a line is y = mx + b. In English, that means "Y" is equal
to the slope ("m") times "X" plus a constant "b." That constant is actually called
the y-intercept (if the input of x = 0, then the output of y = b, or that is where the
line crosses the Y axis). The y-intercept may or may not have any real meaning.
The slope is important because it shows the rate of change between X and Y,
and the sign of the slope reflects the direction of the relationship (a positive slope
shows that as X increases, so does Y; a negative slope shows that as X
increases, Y decreases).
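Once a slope and an intercept are known, predicting Y for a given X is just the line equation evaluated in SQL. A hedged sketch (the slope and intercept values below are illustrative, taken from the channel 'C' figures computed later in this article):

```sql
-- Predict quantity sold at a hypothetical list price of 60,
-- given slope m = -0.0683687 and intercept b = 16.627808:
SELECT (-0.0683687 * 60) + 16.627808 AS predicted_qty
FROM dual;
-- roughly 12.53
```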

So, getting back to what Oracle can do for you with respect to linear regression:
quite a bit. The family of linear regression functions, REGR_component, covers
nine items of interest typically used when evaluating data. A component is
selected by specifying the related REGR_ function and the two inputs - expr1
and expr2 - the dependent and independent variables, respectively. I could have
said the "Y" and the "X," and I will later, but not for now.

To look at the example Oracle provides, you will need to install the sample
schema named SH (for sales history). The sample schemas are referenced in
the Oracle9i Sample Schemas guide (http://download-
west.oracle.com/docs/cd/B10501_01/server.920/a96539/toc.htm ). If you installed the
sample schemas during the Oracle9i installation, you will also need to unlock the
SH user account (and give it a password you can remember, like "SH"). If you
have never seen a million-row table, this is your opportunity. As the user SH,
doing a "select count(*) from sales" shows just over a million rows (1016271, to
be exact).
The example shown in the SQL Reference guide has five channel_id's, but we
will concentrate on the first one (identified as "C"). Here is the select statement
and its output as shown in a SQL*Plus session as the user named SH:

SQL> SELECT
2 s.channel_id,
3 REGR_SLOPE(s.quantity_sold, p.prod_list_price) SLOPE ,
4 REGR_INTERCEPT(s.quantity_sold, p.prod_list_price) INTCPT ,
5 REGR_R2(s.quantity_sold, p.prod_list_price) RSQR ,
6 REGR_COUNT(s.quantity_sold, p.prod_list_price) COUNT ,
7 REGR_AVGX(s.quantity_sold, p.prod_list_price) AVGLISTP ,
8 REGR_AVGY(s.quantity_sold, p.prod_list_price) AVGQSOLD
9 FROM sales s, products p
10 WHERE s.prod_id=p.prod_id AND
11 p.prod_category='Men' AND
12 s.time_id=to_DATE('10-OCT-2000')
13 GROUP BY s.channel_id;

C SLOPE INTCPT RSQR COUNT AVGLISTP AVGQSOLD
- ---------- ---------- ---------- ------ ---------- ----------
C -.0683687 16.627808 .051342581 20 65.495 12.15
I .019710295 14.8113924 .001631488 46 51.4804348 15.826087
P -.01247359 12.854546 .017039788 30 81.87 11.8333333
S .006155886 13.9919243 .000898438 83 69.813253 14.4216867
T -.00411314 5.22717214 .008132242 27 82.2444444 4.88888889

Let's add another AND to the WHERE clause to get just the first row:

SQL> SELECT
2 s.channel_id,
3 REGR_SLOPE(s.quantity_sold, p.prod_list_price) SLOPE ,
4 REGR_INTERCEPT(s.quantity_sold, p.prod_list_price) INTCPT ,
5 REGR_R2(s.quantity_sold, p.prod_list_price) RSQR ,
6 REGR_COUNT(s.quantity_sold, p.prod_list_price) COUNT ,
7 REGR_AVGX(s.quantity_sold, p.prod_list_price) AVGLISTP ,
8 REGR_AVGY(s.quantity_sold, p.prod_list_price) AVGQSOLD
9 FROM sales s, products p
10 WHERE s.prod_id=p.prod_id AND
11 p.prod_category='Men' AND
12 s.time_id=to_DATE('10-OCT-2000')
13 AND s.channel_id = 'C'
14 group by s.channel_id;

C SLOPE INTCPT RSQR COUNT AVGLISTP AVGQSOLD
- ---------- ---------- ---------- ------ ---------- ----------
C -.0683687 16.627808 .051342581 20 65.495 12.15

Overall, there were 206 rows involved, but we are only interested in the 20 rows
for "C."
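REGR_COUNT already reports the per-group row counts, but you can cross-check them with an ordinary aggregate. This sketch simply reuses the article's join and filters; the per-channel counts should sum to the 206 rows mentioned above:

```sql
-- How many sale rows feed each channel's regression
SELECT s.channel_id, COUNT(*)
FROM sales s, products p
WHERE s.prod_id = p.prod_id
  AND p.prod_category = 'Men'
  AND s.time_id = to_DATE('10-OCT-2000')
GROUP BY s.channel_id;
```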

DDL Event Security in Oracle Database
Amar Kumar Padhi

We have various levels of security (inside and outside of Oracle database) that
can be implemented according to one's requirements. Mentioned here is a way of
implementing security against structural changes or Data Definition Language
(DDL) changes. This security is put within the database for logical objects.

What it means for the DBA

DDL commands are critical and cannot be rolled back. As a norm, new
developments/changes should be done on test boxes before promoting them to
Production. Such promotions to production can be done as scheduled upgrades
during off-peak hours or when a maintenance window is available as system
downtime. At all other times the system should be up and running as desired.

DDL event security is aimed at preventing structural changes while the site
application is up and running. An application user may never fire such commands
explicitly, or may not even have access to do so; even so, this security can be
implemented globally rather than aimed at a specific set of users. For example, a
script may be fired on the production database by the IT team in error.

Commands such as CREATE, ALTER, DROP and TRUNCATE can be tracked


and audited or prevented from executing. The requirement here is to maintain the
stability and availability of the system and prevent mishaps when the users are
working.

The idea is to track down DDL commands when they are fired. This can be done
using system event triggers introduced in Oracle 8i. Mentioned below is a simple
process that I use at my site. If you plan to set up something similar, you can use
the code below or modify it as per your needs.

Setup

1. A SYS-owned table is created to hold information about the objects that
should be protected from structural changes. Please note that if you implement
this security globally, you may not require the table below; but if you intend to
cover only the application tables while allowing other temporary objects in the
database to be changed, you will have to maintain a master table such as the one
below to differentiate them.

For example, my site runs Oracle Applications 11i. There are times when
reporting or custom objects have to be modified for urgent implementation of a
change (e.g., a new ad hoc report requirement). As these are custom objects and
are meant for data extraction for reporting purposes, I allow some room for
structural changes here. However, no compromise can be made with the core
application tables, so these are permanently locked against all structural
changes. I maintain a table such as the one below to hold application-related
objects and important custom objects.

create table az_secure_obj
(owner        varchar2(30) not null,                  -- Owner of the object
 obj_name     varchar2(30) not null,                  -- Object name
 obj_type     varchar2(30) not null,                  -- Object type
 event_create varchar2(1)  default 'N' not null,      -- Allow CREATE command
 event_alter  varchar2(1)  default 'N' not null,      -- Allow ALTER command
 event_drop   varchar2(1)  default 'N' not null,      -- Allow DROP command
 event_trunc  varchar2(1)  default 'N' not null,      -- Allow TRUNCATE command
 status       varchar2(10) default 'Active' not null) -- Enable/disable the security
/

Here, the object type can be a PROCEDURE, FUNCTION, PACKAGE,
PACKAGE BODY, SYNONYM, TABLE, TRIGGER, INDEX or a VIEW.

The columns starting with EVENT_ are meant to track if CREATE, ALTER,
DROP or TRUNCATE are allowed. There may be a need to allow truncating of a
table but prevent other changes; the EVENT_TRUNC can be set to 'Y' and the
rest of the events can be set to 'N'.
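As a hypothetical example (AZ_STG_DATA is an invented staging-table name, not from the article), a master row that allows only TRUNCATE would look like:

```sql
-- TRUNCATE allowed, all other DDL blocked for this object
insert into az_secure_obj
  (owner, obj_name, obj_type, event_create, event_alter,
   event_drop, event_trunc, status)
values ('APPS', 'AZ_STG_DATA', 'TABLE', 'N', 'N', 'N', 'Y', 'Active');

commit;
```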

Similarly, there may be a need for a routine that should be allowed to be
recompiled (ALTER) while re-creating it is prevented (CREATE OR REPLACE).
The EVENT_ALTER can be set to 'Y' and the other events prevented.

The STATUS column states whether the security is 'Active' for an object or not.
This can be set to 'Inactive' to disable the security.

2. The following system event trigger is created as SYS user.

create or replace trigger az_secure_obj_trg
before create or alter or drop or truncate on database
declare
   l_errmsg varchar2(100) := 'enabled for DDL Event Security: '
                          || 'You cannot do structural changes to this object.';
   l_chk    pls_integer;
begin
   if ora_sysevent = 'CREATE' then
      select 1
        into l_chk
        from az_secure_obj
       where owner        = ora_dict_obj_owner
         and obj_name     = ora_dict_obj_name
         and obj_type     = ora_dict_obj_type
         and event_create = 'N'
         and status       = 'Active'
         and rownum       = 1;

      raise_application_error(-20001, ora_dict_obj_owner || '.' ||
                              ora_dict_obj_name || ' ' || l_errmsg);

   elsif ora_sysevent = 'ALTER' then
      select 1
        into l_chk
        from az_secure_obj
       where owner       = ora_dict_obj_owner
         and obj_name    = ora_dict_obj_name
         and obj_type    = ora_dict_obj_type
         and event_alter = 'N'
         and status      = 'Active'
         and rownum      = 1;

      raise_application_error(-20001, ora_dict_obj_owner || '.' ||
                              ora_dict_obj_name || ' ' || l_errmsg);

   elsif ora_sysevent = 'DROP' then
      select 1
        into l_chk
        from az_secure_obj
       where owner      = ora_dict_obj_owner
         and obj_name   = ora_dict_obj_name
         and obj_type   = ora_dict_obj_type
         and event_drop = 'N'
         and status     = 'Active'
         and rownum     = 1;

      raise_application_error(-20001, ora_dict_obj_owner || '.' ||
                              ora_dict_obj_name || ' ' || l_errmsg);

   elsif ora_sysevent = 'TRUNCATE' then
      select 1
        into l_chk
        from az_secure_obj
       where owner       = ora_dict_obj_owner
         and obj_name    = ora_dict_obj_name
         and obj_type    = ora_dict_obj_type
         and event_trunc = 'N'
         and status      = 'Active'
         and rownum      = 1;

      raise_application_error(-20001, ora_dict_obj_owner || '.' ||
                              ora_dict_obj_name || ' ' || l_errmsg);
   end if;

exception
   when no_data_found then
      null;
end;
/

3. If there is a need to disable DDL event security for a particular object or for all
objects, the master table's STATUS column should be updated to 'Inactive'. As the
system trigger looks at this table to enforce the security, it will skip all objects
with an inactive status.
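Disabling the security for a single object is just an update of the master table. A sketch, using SQL*Plus substitution variables as placeholders for the owner and object name:

```sql
-- Temporarily deactivate DDL event security for one object
update az_secure_obj
   set status = 'Inactive'
 where owner    = '&owner'
   and obj_name = '&object_name';

commit;
```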

How this setup works

For example, I enable this security for a custom table called AZ_CATENT. The
following insert will add this object to the master.

insert into az_secure_obj
  (owner, obj_name, obj_type, event_create, event_alter,
event_drop, event_trunc, status)
values('APPS', 'AZ_CATENT', 'TABLE', 'N', 'N', 'N', 'N', 'Active');

commit;

Now structural changes to this table fail.

SQL> drop table az_catent;
drop table az_catent
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: APPS.AZ_CATENT enabled for DDL Event Security: You cannot do
structural changes to this object.
ORA-06512: at line 43

SQL> alter table az_catent add flg varchar2(1);
alter table az_catent add flg varchar2(1)
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: APPS.AZ_CATENT enabled for DDL Event Security: You cannot do
structural changes to this object.
ORA-06512: at line 30

SQL> truncate table az_catent;
truncate table az_catent
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: APPS.AZ_CATENT enabled for DDL Event Security: You cannot do
structural changes to this object.
ORA-06512: at line 56

The security is in place now.


Conclusion

Please be careful when working with system triggers. Do not leave an event
trigger in place with compilation errors, as it will then raise errors for all
structural changes being carried out:

SQL> truncate table abc;
truncate table abc
*
ERROR at line 1:
ORA-04098: trigger 'SYS.AZ_SECURE_OBJ_TRG' is invalid and failed re-
validation

DDL event security is one of many alternatives that can be used to prevent
mishaps, but this does not necessarily mean that the basic security established
by passwords, roles and privileges takes a back seat; these should not be
compromised for an alternative. Use the above feature to suit your setup
requirements.

Storing Word Documents in Oracle

James Koopmann, [email protected]

Have you ever wondered about storing documents into your Oracle database
and just didn't know where to start? Here is a quick introduction to the basics you
need to know.

Manipulating Oracle Files with UTL_FILE showed you how to read the alert log and
do some manipulation on the file while it was external to the database. You
should review this article as it contains some background information you will
need to know, along with some explanation of some of the procedures in this
code that I will not go into here. The next logical extension to the last article is the
manipulation of external files, such as documents, and their storage in the
database. This article will take you through a brief overview of the datatypes and
procedures needed to store Word documents within the database.
The Datatypes

When talking about manipulating documents within a database, there are only a
few choices of datatype that can handle a large document. These large objects
(LOBs) can use any one of four datatypes, depending on the characteristics of
the object you are storing, and can be in the form of text, graphics, video or
audio.

Datatype Description

BLOB Used to store unstructured binary data up to 4G. This datatype stores the
full binary object in the database.

CLOB/NCLOB Used to store up to 4G of character data. This datatype stores the full
character data in the database.

BFILE Used to point at large objects that are external to the database, in
operating system files. A BFILE column contains a locator to binary data
that is read-only from within the database.

Benefits of LOBs

It used to be that the largest object you could store in the database was of the
datatype LONG. Oracle has, for the last few releases, kept telling us to convert our
LONG datatypes to a LOB datatype (maybe someday they will convert their own).
The reasons for converting our LONGs to LOBs can be seen in this short list of benefits.

1. LOB columns can reach the size of 4G.


2. You can store LOB data internally within a table or externally.
3. You can perform random access to the LOB data.
4. It is easier to do transformations on LOB columns.
5. You can replicate the tables that contain LOB columns.

Create a Table to Store the Document

In order to store the documents into the database you must obviously first create
an object to store the information. Following is the DDL to create the table
MY_DOCS. You will notice that there is a holder for the bfile location and a
column (DOC_BLOB) to hold the document.

CREATE TABLE my_docs
 (doc_id NUMBER,
bfile_loc BFILE,
doc_title VARCHAR2(255),
doc_blob BLOB DEFAULT EMPTY_BLOB() );
The Load Procedure

The load procedure takes as arguments the document name and an id number for the
document. The procedure will then prime a row for update based on the document id,
BFILE location and document name (which becomes the document title). The procedure
will then open internal and external BLOBs and load the internal from the external. At
this point, the document has been loaded into the database table.

Code and Meaning

bfile_loc := BFILENAME('DOC_DIR', in_doc);
    In order to load the document, you must first point to the external
    object through a BFILE locator. The BFILENAME function takes a
    directory location and the document name.

INSERT INTO my_docs (doc_id, bfile_loc, doc_title)
VALUES (in_id, bfile_loc, in_doc);
    This statement primes the row into which the external object will be
    inserted.

SELECT doc_blob INTO temp_blob
FROM my_docs WHERE doc_id = in_id
FOR UPDATE;
    Associate the temporary blob object to the table blob object for
    updating.

DBMS_LOB.OPEN(bfile_loc, DBMS_LOB.LOB_READONLY);
    Open the external blob object for reading.

DBMS_LOB.OPEN(temp_blob, DBMS_LOB.LOB_READWRITE);
    Open the temporary blob object for reading and writing.

DBMS_LOB.LOADFROMFILE(temp_blob, bfile_loc, Bytes_to_load);
    Copy the entire external blob object (BFILE) into the internal
    temporary blob object.
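Benefit 3 from the list earlier (random access) is worth a quick illustration here. This sketch, assuming a document has already been loaded under doc_id 1, reads 100 bytes starting at byte offset 500 without streaming the whole BLOB:

```sql
DECLARE
  l_blob   BLOB;
  l_buffer RAW(100);
  l_amount BINARY_INTEGER := 100;
BEGIN
  SELECT doc_blob INTO l_blob FROM my_docs WHERE doc_id = 1;
  -- DBMS_LOB.READ takes the LOB, the amount to read,
  -- the starting offset (1-based), and the output buffer
  DBMS_LOB.READ(l_blob, l_amount, 500, l_buffer);
  DBMS_OUTPUT.PUT_LINE('Read ' || l_amount || ' bytes');
END;
/
```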

The Search Procedure

The search procedure takes as arguments a document id and a search string. The
procedure then converts the search string into raw format and places it into the
variable named PATTERN. Once the variable PATTERN is populated, it is used to
search the loaded temporary BLOB DOC_BLOB to see if the particular pattern
exists.

Pattern := utl_raw.cast_to_raw(in_search);
    Take the input search characters and convert them to raw characters
    that can be used to search your document.

SELECT doc_blob INTO lob_doc
FROM my_docs WHERE doc_id = in_id;
    Put the document into a temporary BLOB for manipulation.

DBMS_LOB.OPEN(lob_doc, DBMS_LOB.LOB_READONLY);
    Open the temporary BLOB for reading.

Position := DBMS_LOB.INSTR(lob_doc, Pattern, Offset, Occurrence);
    Search the temporary BLOB for the supplied search string. If the
    pattern is found, the variable POSITION will be non-zero.

How to Use the Code

The procedures I have given you are very simplistic in nature and are intended
to be part of a larger application for managing external documents within a
database. They set up a directory where your documents live, load documents
into the database, and then search for string patterns in the document whose id
is provided. You could take out the reliance on supplying a document id and
allow the search to span multiple documents within your library. Below is a brief
description of how to use the code as is, but feel free to modify it and integrate it
into your own set of procedures.

How to Use
1. log into your database of choice as the SYS user
2. compile the package

SQL> @mydocs.sql
3. set serveroutput on

SQL> set serveroutput on


4. initial setup of directory object where your documents live

SQL> exec mydocs.doc_dir_setup


5. Check to make sure you can read one of your documents on disk.

SQL> exec mydocs.list('Your Document Here.doc');


6. Load your document into the database

SQL> exec mydocs.load('Your Document Here.doc', 1);


7. Search your documents for a string pattern

SQL> exec mydocs.search('Search Pattern', 1);


The Code
CREATE OR REPLACE PACKAGE mydocs
AS
PROCEDURE doc_dir_setup;
PROCEDURE list (in_doc IN VARCHAR2);
PROCEDURE load (in_doc IN VARCHAR2,
in_id IN NUMBER);
PROCEDURE search (in_search IN VARCHAR2,
in_id IN NUMBER);
END mydocs;
/
CREATE OR REPLACE PACKAGE BODY mydocs
AS
vexists BOOLEAN;
vfile_length NUMBER;
vblocksize NUMBER;

PROCEDURE doc_dir_setup IS
BEGIN
EXECUTE IMMEDIATE
'CREATE DIRECTORY DOC_DIR AS'||
'''"E:\jkoopmann\publish\databasejournal\Oracle"''';
END doc_dir_setup;

PROCEDURE list (in_doc IN VARCHAR2) IS
BEGIN
UTL_FILE.FGETATTR('DOC_DIR',
in_doc,
vexists,
vfile_length,
vblocksize);
IF vexists THEN
dbms_output.put_line(in_doc||' '||vfile_length);
END IF;
END list;

PROCEDURE load (in_doc IN VARCHAR2,
                in_id  IN NUMBER) IS
temp_blob BLOB := empty_blob();
bfile_loc BFILE;
Bytes_to_load INTEGER := 4294967295;
BEGIN
bfile_loc := BFILENAME('DOC_DIR', in_doc);
INSERT INTO my_docs (doc_id, bfile_loc, doc_title)
VALUES (in_id, bfile_loc, in_doc);
SELECT doc_blob INTO temp_blob
FROM my_docs WHERE doc_id = in_id
FOR UPDATE;
DBMS_LOB.OPEN(bfile_loc, DBMS_LOB.LOB_READONLY);
DBMS_LOB.OPEN(temp_blob, DBMS_LOB.LOB_READWRITE);
DBMS_LOB.LOADFROMFILE(temp_blob, bfile_loc, Bytes_to_load);
DBMS_LOB.CLOSE(temp_blob);
DBMS_LOB.CLOSE(bfile_loc);
COMMIT;
END load;

PROCEDURE search (in_search VARCHAR2,
                  in_id     NUMBER) IS
lob_doc BLOB;
Pattern VARCHAR2(30);
Position INTEGER := 0;
Offset INTEGER := 1;
Occurrence INTEGER := 1;
BEGIN
Pattern := utl_raw.cast_to_raw(in_search);
SELECT doc_blob INTO lob_doc
FROM my_docs WHERE doc_id = in_id;
DBMS_LOB.OPEN (lob_doc, DBMS_LOB.LOB_READONLY);
Position := DBMS_LOB.INSTR(lob_doc, Pattern, Offset, Occurrence);
IF Position = 0 THEN
DBMS_OUTPUT.PUT_LINE('Pattern not found');
ELSE
DBMS_OUTPUT.PUT_LINE('The pattern occurs at '|| position);
END IF;
DBMS_LOB.CLOSE (lob_doc);
END search;

BEGIN
DBMS_OUTPUT.ENABLE(1000000);
END mydocs;
/

The Trigger-Happy DBA - System Triggers

Steve Callan, [email protected]

So far, the trigger-happy DBA series has dealt with triggers related to Data
Manipulation Language (DML) operations. As mentioned in the beginning of the
series, another type of useful trigger is that of the system trigger. System triggers
can be delineated into two categories: those based on Data Definition Language
(DDL) statements, and those based upon database events. Use of system
triggers can greatly expand a DBA's ability to monitor database activity and
events. Moreover, after having read this article, you'll be able to sharpshoot
someone who asks, "How many triggers does Oracle have?" Most people will
seize upon the easy-to-answer, OCP-test type of answer - before/after
insert/update/delete, at the row or statement level - which is largely correct
where plain vanilla DML triggers are concerned. But how would you count
INSTEAD-OF triggers when it comes to DML? And how many other triggers does
Oracle have or allow?
The syntax for creating a system trigger is very similar to the syntax used for
creating DML triggers.
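The general shape, sketched below with a do-nothing body, is the familiar CREATE TRIGGER statement with a timing keyword, a system event, and an ON DATABASE (or ON SCHEMA) clause. The trigger name here is invented for illustration:

```sql
CREATE OR REPLACE TRIGGER trg_demo_logon
AFTER LOGON ON DATABASE
BEGIN
  NULL;  -- placeholder body; a real trigger would audit or log here
END;
/
```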

The number of system triggers available to the DBA is 11 under the short and simple
plan. If you want the deluxe version or variety, you can refer to the more than 20 system-
defined event attributes shown in Chapter 16 of the Oracle9i Application Developer's
Guide. In this month's article, we will look at the 11 "simple" triggers related to DDL and
database events. Let's identify these 11 triggers before going further.

Event or DDL statement When allowed or applicable

STARTUP AFTER

SHUTDOWN BEFORE

SERVERERROR AFTER

LOGON AFTER

LOGOFF BEFORE

CREATE BEFORE and AFTER

DROP BEFORE and AFTER

ALTER BEFORE and AFTER

Some common sense is in order here. If you wait long enough and visit enough
DBA-related question-and-answer web sites, inevitably, and amusingly, you will
see a question like, "I'm trying to create a trigger that does X and Y before a
user logs on - how do I do that?" That may be possible in some future version of
Oracle - the version that senses when you are about to log on? - but don't hold
your breath waiting for that release date!

So, the BEFORE and AFTER timing part really matters for the system events
shown above, and you have a bit more flexibility with respect to the DDL
statements. You will also notice there are no DURING's for the "when" part, and
perhaps not so obvious, there are no INSTEAD-OF triggers for system events.
What about TRUNCATE statements, you ask. TRUNCATE is a DDL statement,
but unfortunately, Oracle does not capture this event (as far as triggers are
concerned).
Let's construct a simple auditing type of trigger that captures a user's logon
information. With auditing in mind, as with when to fire a DML type of trigger,
timing matters. Maybe you have a sensitive database, one where you need to
capture a user's session information. It would be more appropriate to capture a
user's access/logon to the sensitive database immediately after that user logs on
as opposed to capturing the session information before logging off. Who says
you have to log off gracefully? If a session is abnormally terminated, does a
BEFORE LOGOFF trigger fire?

As a side note, how can you obtain session information? One way is to use the
SYS_CONTEXT function. Oracle's SQL Reference manual lists 37 parameters
you can use with SYS_CONTEXT to obtain session information. See
http://download-
west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions122a.htm#1038178 for more
information regarding this feature.
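As a quick taste (these USERENV parameter names are documented in the SQL Reference), several session attributes can even be fetched in a single select rather than one query per attribute:

```sql
SELECT sys_context('USERENV', 'SESSION_USER') AS session_user,
       sys_context('USERENV', 'IP_ADDRESS')   AS ip_address,
       sys_context('USERENV', 'HOST')         AS host
FROM dual;
```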

Here is our audit table:

SQL> CREATE TABLE session_info
  2 (username VARCHAR2(30),
3 logon_date DATE,
4 session_id VARCHAR2(30),
5 ip_addr VARCHAR2(30),
6 hostname VARCHAR2(30),
7 auth_type VARCHAR2(30));

Table created.

Here is the code for the BEFORE LOGOFF trigger:

SQL> CREATE OR REPLACE TRIGGER trg_session_info
  2 BEFORE LOGOFF
3 ON DATABASE
4 DECLARE
5 session_id VARCHAR2(30);
6 ip_addr VARCHAR2(30);
7 hostname VARCHAR2(30);
8 auth_type VARCHAR2(30);
9 BEGIN
10 SELECT sys_context ('USERENV', 'SESSIONID')
11 INTO session_id
12 FROM dual;
13
14 SELECT sys_context ('USERENV', 'IP_ADDRESS')
15 INTO ip_addr
16 FROM dual;
17
18 SELECT sys_context ('USERENV', 'HOST')
19 INTO hostname
20 FROM dual;
21
22 SELECT sys_context ('USERENV', 'AUTHENTICATION_TYPE')
23 INTO auth_type
24 FROM dual;
25
26 INSERT INTO session_info VALUES
27 (user, sysdate, session_id, ip_addr, hostname, auth_type);
28 END;
29 /

Trigger created.

Let's connect as Scott then return as someone else:

SQL> select * from session_info;

USERNAME LOGON_DAT SESSION_ID IP_ADDR HOSTNAME AUTH_TYPE
-------- --------- ---------- ------- ------------------- ---------
STECAL 12-JAN-04 577 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 578 WORKGROUP\D2JW5027 DATABASE

Looks like the trigger worked, twice in fact, because when I left to connect as
Scott, the newly created trg_session_info trigger captured my information as well.

Let's go back as Scott, and have Scott's session abnormally terminated. Does
the BEFORE LOGOFF trigger capture Scott's session information?

SQL> select * from session_info;

USERNAME LOGON_DAT SESSION_ID IP_ADDR HOSTNAME AUTH_TYPE
-------- --------- ---------- -------- ------------------- ----------
STECAL 12-JAN-04 577 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 578 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 579 WORKGROUP\D2JW5027 DATABASE

The answer is no. In this particular case, Scott's session was terminated by
clicking on the Windows close button. Assuming Scott is a malicious user, he
could have viewed salaries and other personal information with some degree of
obscurity. Use of an AFTER LOGON trigger would have immediately captured
his session information. Of course, this trigger, in and of itself, is not sufficient
to fully protect access to sensitive information, but it is a means of letting
potentially malicious (or snooping) users know they are being watched or
monitored. Locks on doors help keep honest people honest, so the saying goes.
Same idea here.

Changing the trigger to AFTER LOGON yields the following results with Scott
logging on, and his session being terminated:

SQL> select * from session_info;

USERNAME LOGON_DAT SESSION_ID IP_ADDR HOSTNAME AUTH_TYPE
-------- --------- ---------- -------- ------------------- ----------
STECAL 12-JAN-04 577 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 578 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 579 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 581 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 581 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 582 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 582 WORKGROUP\D2JW5027 DATABASE

7 rows selected.

Look at the last four rows – aside from the date (the times would be different per
user), it looks like two rows per user. This is an example of two things. First, do
not forget to clean up after yourself (remove unnecessary triggers), and second,
maybe you will want to capture the AFTER LOGON and BEFORE LOGOFF
times (of course, it would be hard to change machines in the middle of a
session!).

Another use of a system trigger may help you (as the DBA) identify users in need
of some help when it comes to forming SQL queries. The SERVERERROR
system event, when combined with the CURRENT_SQL parameter in the
SYS_CONTEXT function, can flag or identify users who frequently make
mistakes. The CURRENT_SQL parameter "returns the current SQL that
triggered the fine-grained auditing event. You can specify this attribute only
inside the event handler for the Fine-Grained Auditing feature." (From the SQL
Reference manual) Even without FGAC, you can set up a simple trigger-audit
table relationship as follows:

SQL> CREATE TABLE error_info
  2 (username VARCHAR2(30),
3 logon_date DATE,
4 session_id VARCHAR2(30),
5 sql_statement VARCHAR2(64));

Table created.

SQL> CREATE OR REPLACE TRIGGER trg_server_error
  2 AFTER SERVERERROR
3 ON DATABASE
4 DECLARE
5 session_id VARCHAR2(30);
6 sql_statement VARCHAR2(64);
7 BEGIN
8 SELECT sys_context ('USERENV', 'SESSIONID')
9 INTO session_id
10 FROM dual;
11
12 SELECT sys_context ('USERENV', 'CURRENT_SQL')
13 INTO sql_statement
14 FROM dual;
15
16 INSERT INTO error_info VALUES
17 (user, sysdate, session_id, sql_statement);
18 END;
19 /

Trigger created.

SQL> select * from some_table;
select * from some_table
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> select * from error_info;

USERNAME LOGON_DAT SESSION_ID SQL_STATEMENT
-------- --------- ---------- ---------------
STECAL 12-JAN-04 583

There are a great many things you can do with triggers, whether they are based
on DML statements or system events. As a developer or DBA (or both), there is
no such thing as having too many tricks up your sleeve. In terms of job or role
separation, you can think of the DML triggers as being in the purview of the
developer, and the system event triggers being in the DBA's, but a good DBA
should possess some decent programming skills of his or her own, and that's
where knowing how to avoid problems with DML triggers comes into play. Being
and staying well-informed on the use (and limitations) of triggers will make you a
trigger-happy DBA.

The Trigger-Happy DBA - Part 2

Steve Callan, [email protected]

Here is a problem many developers run into: ORA-04091 table
owner.table_name is mutating, trigger/function may not see it. In many
cases, the cause of this error is code within a trigger that looks at or
touches the data within the table from which the trigger is invoked. The
"look and touch" refers to using select (the look) and DML statements (the
touch). In other words, you need to take your DML elsewhere. The reason Oracle
raises this error is related to one of Oracle's primary strengths as a relational
database management system. The particular strength in question here is that of
having a read consistent view of data.

It is worthwhile to note that this error occurs not only in the "pure" database
development environment (CREATE or REPLACE trigger trigger_name… in a
script or SQL*Plus session), but also in the Oracle tools type of development
environment such as Oracle Forms. An Oracle form relies on triggers for a great
many things, ranging from capturing user interaction with the form (when-button-
pressed) to performing transaction processing (on-commit). A forms trigger may
do nothing more than change the focus to a new item or show a new canvas.
What a form trigger can do, and has in common with the "pure" development
type of trigger, is generate the ORA-04091 mutating table error.

One common solution to avoid the mutating table error is to use three other
triggers. Tom Kyte, author of Expert One-on-One Oracle and Effective Oracle by
Design, two of the very best books on Oracle, provides an excellent example of
this technique at http://osi.oracle.com/~tkyte/Mutate/index.html (part of the Ask Tom
series at www.oracle.com). Another solution relies on using an INSTEAD-OF trigger
instead of the trigger you meant to use when you received the error. Another
solution is actually more of a preventative measure, namely, using the right type
of trigger for the task at hand.
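For reference, the three-trigger technique mentioned above usually pairs a package variable with a row-level trigger that only records what happened and a statement-level trigger that does the actual work; variations exist, and Tom Kyte's article linked above covers the full pattern. A minimal sketch (object names are hypothetical):

```sql
CREATE OR REPLACE PACKAGE emp_trg_state AS
  deleted_count PLS_INTEGER := 0;
END emp_trg_state;
/

CREATE OR REPLACE TRIGGER emp_tab_row_trg
AFTER DELETE ON emp_tab
FOR EACH ROW
BEGIN
  -- The row-level trigger only counts; it never reads emp_tab,
  -- so no ORA-04091 is raised.
  emp_trg_state.deleted_count := emp_trg_state.deleted_count + 1;
END;
/

CREATE OR REPLACE TRIGGER emp_tab_stmt_trg
AFTER DELETE ON emp_tab
BEGIN
  -- Statement-level triggers may safely query the triggering table.
  DBMS_OUTPUT.PUT_LINE(emp_trg_state.deleted_count || ' row(s) deleted.');
  emp_trg_state.deleted_count := 0;
END;
/
```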

Here is a simple example of where a trigger can generate the mutating table
error. The hapless Oracle user named Scott wants to generate a statement
telling him how many employees are left after an employee record is deleted.
This code for this example comes from Oracle's Application Developer's Guide.

Here is the data before the delete statement is issued:

SQL> select empno, ename
  2 from emp_tab; -- a copy of the scott.emp table

EMPNO ENAME
---------- ----------
7369 SMITH
7499 ALLEN
7521 WARD
7566 JONES
7654 MARTIN
7698 BLAKE
7782 CLARK
7788 SCOTT
7839 KING
7844 TURNER
7876 ADAMS
7900 JAMES
7902 FORD
7934 MILLER

14 rows selected.

Here is the trigger code:

CREATE OR REPLACE TRIGGER Emp_count
AFTER DELETE ON Emp_tab
FOR EACH ROW
DECLARE
n INTEGER;
BEGIN
SELECT COUNT(*) INTO n FROM Emp_tab;
DBMS_OUTPUT.PUT_LINE(' There are now '|| n ||
' employees.');
END;

In a SQL*Plus session, here is what the coding looks like (same as above, but
without the feedback statement):

SQL> CREATE OR REPLACE TRIGGER Emp_count
  2 AFTER DELETE ON Emp_tab
3 FOR EACH ROW
4 DECLARE
5 n INTEGER;
6 BEGIN
7 SELECT COUNT(*) INTO n FROM Emp_tab;
8 DBMS_OUTPUT.PUT_LINE(' There are now '|| n ||
9 ' employees.');
10 END;
11 /

Trigger created.

Any hint so far that there may be a problem with the trigger? Not with the
"Trigger created" feedback Oracle provides. Looks like no errors and that the
trigger should fire when the triggering condition (after delete on the Emp_tab
table) occurs.

Here is a DML statement that will trigger the ORA-04091 error:

SQL> DELETE FROM Emp_tab WHERE Empno = 7499;

DELETE FROM Emp_tab WHERE Empno = 7499
*
ERROR at line 1:
ORA-04091: table SCOTT.EMP_TAB is mutating, trigger/function may not see
it
ORA-06512: at "SCOTT.EMP_COUNT", line 4
ORA-04088: error during execution of trigger 'SCOTT.EMP_COUNT'

Let's modify the trigger code just a bit, and remove the FOR EACH ROW clause.
If we are not "doing" each row, the trigger becomes a statement-level trigger.

SQL> CREATE OR REPLACE TRIGGER Emp_count
2 AFTER DELETE ON Emp_tab
3 -- FOR EACH ROW
4 DECLARE
5 n INTEGER;
6 BEGIN
7 SELECT COUNT(*) INTO n FROM Emp_tab;
8 DBMS_OUTPUT.PUT_LINE(' There are now '|| n ||
9 ' employees.');
10 END;
11 /
Trigger created.

SQL> DELETE FROM Emp_tab WHERE Empno = 7499;

There are now 13 employees.

1 row deleted.

Note that the trigger successfully fired with this one modification. But was it really
a modification or just a better design and use of a trigger? As stated in the
previous article, triggers can act on each row or act at the statement level. In
Scott's case, what he really needed was a statement-level trigger, not a row-level
trigger. Mastering this concept alone, knowing whether to base the trigger on the
statement or on rows, can prevent many instances of the mutating table error.

Suppose Scott has a table based on two other tables:

SQL> create table trigger_example_table
2 as
3 select empno, ename, a.deptno, dname, alter_date
4 from emp a, dept2 b
5 where a.deptno = b.deptno;

Table created.
SQL> select * from trigger_example_table;

EMPNO ENAME DEPTNO DNAME ALTER_DAT
---------- ---------- ---------- -------------- ---------
7782 CLARK 10 ACCOUNTING 09-DEC-03
7839 KING 10 ACCOUNTING 09-DEC-03
7934 MILLER 10 ACCOUNTING 09-DEC-03
7369 SMITH 20 RESEARCH 09-DEC-03
7876 ADAMS 20 RESEARCH 09-DEC-03
7902 FORD 20 RESEARCH 09-DEC-03
7788 SCOTT 20 RESEARCH 09-DEC-03
7566 JONES 20 RESEARCH 09-DEC-03
7499 ALLEN2 30 SALES 09-DEC-03
7698 BLAKE 30 SALES 09-DEC-03
7654 MARTIN 30 SALES 09-DEC-03
7900 JAMES 30 SALES 09-DEC-03
7844 TURNER 30 SALES 09-DEC-03
7521 WARD 30 SALES 09-DEC-03

14 rows selected.

And, for whatever reason, whenever an update is made on someone, the
ALTER_DATE column is updated to reflect the current date:

SQL> create or replace trigger
2 trig_trigger_example_table
3 after update on trigger_example_table
4 for each row
5 begin
6 update trigger_example_table
7 set alter_date = sysdate;
8 end;
9 /

Trigger created.

Issuing an update statement – does the trigger allow the ALTER_DATE to be
updated?

SQL> update trigger_example_table
2 set ename = 'ALLEN_TRIG'
3 where empno = 7499;
update trigger_example_table
*
ERROR at line 1:
ORA-04091: table SCOTT.TRIGGER_EXAMPLE_TABLE is mutating,
trigger/function may not see it
ORA-06512: at "SCOTT.TRIG_TRIGGER_EXAMPLE_TABLE", line 2
ORA-04088: error during execution of trigger
'SCOTT.TRIG_TRIGGER_EXAMPLE_TABLE'

No, because it is the same problem as before (touching a table that is being
updated). This example just reinforces the idea that the mutating trigger error still
occurs on a table based on other tables (which is still just a table as far as Oracle
is concerned). Another name for the concept of presenting data based on a
combination (i.e., a join) of other tables? Straight out of the Concepts Guide: "A
view is a tailored presentation of the data contained in one or more tables or
other views." The Application Developer's Guide (yes, this is a plug for Oracle's
documentation) presents a good example of how to construct an INSTEAD-OF
trigger. You can copy the sample code shown in the guide and experiment with
using various DML statements against the view.

Perhaps the greatest strength or utility of an INSTEAD-OF trigger is its ability to
update what would normally appear to be non-updateable views. Simple views
(pretty much based on a single base table) generally are inherently updateable
via DML statements issued against the view. However, when a view becomes
more complex (multiple tables or views used in various join conditions to create
the new single view), there is a good chance that many columns, as referenced
by the view, lose their "updateable-ness." So, being the data dictionary view/table
name trivia wizard that you are, you know to query the
XXX_UPDATABLE_COLUMNS views, substituting USER, ALL or DBA for XXX
as applicable.

There are exceptions to this rule about views being inherently updateable. The
exceptions (or restrictions) include views that use aggregate (group) functions;
the DISTINCT keyword; GROUP BY, CONNECT BY or
START WITH clauses; and use of some joins. In many cases, use of the
INSTEAD-OF trigger feature allows you to work around these restrictions.
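As a rough illustration of the idea (the view and trigger names here are hypothetical, not from the Application Developer's Guide example), an INSTEAD-OF trigger on a join view can route DML back to the base tables that actually own each column:

```sql
-- Hypothetical join view over EMP and DEPT; a multi-table view like this
-- is not fully updateable directly.
CREATE OR REPLACE VIEW emp_dept_v AS
  SELECT e.empno, e.ename, d.deptno, d.dname
  FROM   emp e, dept d
  WHERE  e.deptno = d.deptno;

-- INSTEAD-OF triggers are always row-level; Oracle fires this in place of
-- the UPDATE against the view itself.
CREATE OR REPLACE TRIGGER emp_dept_v_upd
INSTEAD OF UPDATE ON emp_dept_v
FOR EACH ROW
BEGIN
  -- Apply each column change to the base table that owns it.
  UPDATE emp  SET ename = :NEW.ename WHERE empno  = :OLD.empno;
  UPDATE dept SET dname = :NEW.dname WHERE deptno = :OLD.deptno;
END;
/
```

With this in place, an ordinary UPDATE against emp_dept_v succeeds even though the view itself would otherwise reject DML on some columns.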

INSTEAD-OF triggers are also useful for Forms developers because forms are
commonly based on views. The INSTEAD-OF trigger, being a "real" trigger, and
not a true form trigger, is stored on the server. This may require coordination
between the DBA and developer, which, of course, always happens in complete
harmony (NOT! – but that is a separate issue).


The Trigger-Happy DBA - System Triggers


Steve Callan, [email protected]

So far, the trigger-happy DBA series has dealt with triggers related to Data
Manipulation Language (DML) operations. As mentioned in the beginning of the
series, another type of useful trigger is that of the system trigger. System triggers
can be delineated into two categories: those based on Data Definition Language
(DDL) statements, and those based upon database events. Use of system
triggers can greatly expand a DBA's ability to monitor database activity and
events. Moreover, after having read this article, you'll be able to sharp shoot
someone who asks, "How many triggers does Oracle have?" Most people will
seize upon the before/during/after insert/update/delete on row/table easy-to-
answer OCP test type of question (and answer), which is largely correct where
plain vanilla DML triggers are concerned. How would you count INSTEAD-OF
triggers when it comes to DML? So, how many other triggers does Oracle have
or allow?

The syntax for creating a system trigger is very similar to the syntax used for
creating DML triggers.

The number of system triggers available to the DBA is 11 under the short and simple
plan. If you want the deluxe version or variety, you can refer to the more than 20 system-
defined event attributes shown in Chapter 16 of the Oracle9i Application Developer's
Guide. In this month's article, we will look at the 11 "simple" triggers related to DDL and
database events. Let's identify these 11 triggers before going further.

Event or DDL statement When allowed or applicable

STARTUP AFTER

SHUTDOWN BEFORE
SERVERERROR AFTER

LOGON AFTER

LOGOFF BEFORE

CREATE BEFORE and AFTER

DROP BEFORE and AFTER

ALTER BEFORE and AFTER

Some common sense is in order here. If you wait long enough and visit enough
DBA-related question-and-answer web sites, inevitably, and amusingly, you will
see the question about, "I'm trying to create a trigger that does X and Y before a
user logs on – how do I do that?" That may be possible in some future version of
Oracle - the version that senses when you are about to log on? - but don't hold
your breath waiting for that release date!

So, the BEFORE and AFTER timing part really matters for the system events
shown above, and you have a bit more flexibility with respect to the DDL
statements. You will also notice there are no DURING's for the "when" part, and
perhaps not so obvious, there are no INSTEAD-OF triggers for system events.
What about TRUNCATE statements, you ask. TRUNCATE is a DDL statement,
but unfortunately, Oracle does not capture this event (as far as triggers are
concerned).
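For a taste of what a DDL event trigger looks like, here is a sketch of a BEFORE DROP trigger that records who drops what. The audit table name is made up for illustration; ora_dict_obj_owner, ora_dict_obj_name and ora_login_user are among the system-defined event attribute functions mentioned earlier.

```sql
-- Illustrative DDL trigger: record who drops which object, before it happens.
CREATE TABLE drop_audit (
  obj_owner  VARCHAR2(30),
  obj_name   VARCHAR2(30),
  dropped_by VARCHAR2(30),
  dropped_on DATE
);

CREATE OR REPLACE TRIGGER trg_before_drop
BEFORE DROP ON DATABASE
BEGIN
  INSERT INTO drop_audit
  VALUES (ora_dict_obj_owner, ora_dict_obj_name, ora_login_user, SYSDATE);
END;
/
```

Note that a database-wide trigger like this fires for every user's DROP statements, so keep the trigger body fast and simple.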

Let's construct a simple auditing type of trigger that captures a user's logon
information. With auditing in mind, as with when to fire a DML type of trigger,
timing matters. Maybe you have a sensitive database, one where you need to
capture a user's session information. It would be more appropriate to capture a
user's access/logon to the sensitive database immediately after that user logs on
as opposed to capturing the session information before logging off. Who says
you have to log off gracefully? If a session is abnormally terminated, does a
BEFORE LOGOFF trigger fire?

As a side note, how can you obtain session information? One way is to use the
SYS_CONTEXT function. Oracle's SQL Reference manual lists 37 parameters
you can use with SYS_CONTEXT to obtain session information. See
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions122a.htm#1038178 for more
information regarding this feature.
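A quick way to see what SYS_CONTEXT returns for your own session is to select a few of the USERENV parameters from DUAL (the values, of course, vary per session):

```sql
-- Sample USERENV parameters; SESSIONID and HOST are used in the
-- audit trigger below.
SELECT sys_context('USERENV', 'SESSION_USER') AS session_user,
       sys_context('USERENV', 'SESSIONID')    AS session_id,
       sys_context('USERENV', 'HOST')         AS host
FROM   dual;
```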

Here is our audit table:


SQL> CREATE TABLE session_info
2 (username VARCHAR2(30),
3 logon_date DATE,
4 session_id VARCHAR2(30),
5 ip_addr VARCHAR2(30),
6 hostname VARCHAR2(30),
7 auth_type VARCHAR2(30));

Table created.

Here is the code for the BEFORE LOGOFF trigger:

SQL> CREATE OR REPLACE TRIGGER trg_session_info
2 BEFORE LOGOFF
3 ON DATABASE
4 DECLARE
5 session_id VARCHAR2(30);
6 ip_addr VARCHAR2(30);
7 hostname VARCHAR2(30);
8 auth_type VARCHAR2(30);
9 BEGIN
10 SELECT sys_context ('USERENV', 'SESSIONID')
11 INTO session_id
12 FROM dual;
13
14 SELECT sys_context ('USERENV', 'IP_ADDRESS')
15 INTO ip_addr
16 FROM dual;
17
18 SELECT sys_context ('USERENV', 'HOST')
19 INTO hostname
20 FROM dual;
21
22 SELECT sys_context ('USERENV', 'AUTHENTICATION_TYPE')
23 INTO auth_type
24 FROM dual;
25
26 INSERT INTO session_info VALUES
27 (user, sysdate, session_id, ip_addr, hostname, auth_type);
28 END;
29 /

Trigger created.

Let's connect as Scott then return as someone else:

SQL> select * from session_info;

USERNAME LOGON_DAT SESSION_ID IP_ADDR HOSTNAME AUTH_TYPE
-------- --------- ---------- ------- ------------------- ---------
STECAL 12-JAN-04 577 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 578 WORKGROUP\D2JW5027 DATABASE
Looks like the trigger worked, twice in fact, because when I logged off to connect
as Scott, the newly created trg_session_info trigger captured my information as well.

Let's go back as Scott, and have Scott's session abnormally terminated. Does
the BEFORE LOGOFF trigger capture Scott's session information?

SQL> select * from session_info;

USERNAME LOGON_DAT SESSION_ID IP_ADDR HOSTNAME AUTH_TYPE
-------- --------- ---------- -------- ------------------- ----------
STECAL 12-JAN-04 577 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 578 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 579 WORKGROUP\D2JW5027 DATABASE

The answer is no. In this particular case, Scott's session was terminated by
clicking on the Windows close button. Assuming Scott is a malicious user, he
could have viewed salaries and other personal information with some degree of
obscurity. Use of an AFTER LOGON trigger would have immediately captured
his session information. Of course, this trigger, in and of itself, is not sufficient
to fully protect access to sensitive information, but it is a means of letting
potentially malicious (or snooping) users know they are being watched or
monitored. Locks on doors help keep honest people honest, so the saying goes.
Same idea here.

Changing the trigger to AFTER LOGON yields the following results with Scott
logging on, and his session being terminated:

SQL> select * from session_info;

USERNAME LOGON_DAT SESSION_ID IP_ADDR HOSTNAME AUTH_TYPE
-------- --------- ---------- -------- ------------------- ----------
STECAL 12-JAN-04 577 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 578 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 579 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 581 WORKGROUP\D2JW5027 DATABASE
STECAL 12-JAN-04 581 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 582 WORKGROUP\D2JW5027 DATABASE
SCOTT 12-JAN-04 582 WORKGROUP\D2JW5027 DATABASE

7 rows selected.

Look at the last four rows – aside from the date (the times would be different per
user), it looks like two rows per user. This is an example of two things. First, do
not forget to clean up after yourself (remove unnecessary triggers), and second,
maybe you will want to capture the AFTER LOGON and BEFORE LOGOFF
times (of course, it would be hard to change machines in the middle of a
session!).
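One way to capture both ends of a session is to keep a pair of triggers and tag each row with the event that produced it. This is a sketch only; the extra EVENT column is my own addition, not part of the original session_info table.

```sql
-- Sketch: one audit table holds both logon and logoff rows, distinguished
-- by an EVENT column (added here for illustration).
ALTER TABLE session_info ADD (event VARCHAR2(10));

CREATE OR REPLACE TRIGGER trg_logon_info
AFTER LOGON ON DATABASE
BEGIN
  INSERT INTO session_info
  VALUES (user, SYSDATE,
          sys_context('USERENV', 'SESSIONID'),
          sys_context('USERENV', 'IP_ADDRESS'),
          sys_context('USERENV', 'HOST'),
          sys_context('USERENV', 'AUTHENTICATION_TYPE'),
          'LOGON');
END;
/

CREATE OR REPLACE TRIGGER trg_logoff_info
BEFORE LOGOFF ON DATABASE
BEGIN
  INSERT INTO session_info
  VALUES (user, SYSDATE,
          sys_context('USERENV', 'SESSIONID'),
          sys_context('USERENV', 'IP_ADDRESS'),
          sys_context('USERENV', 'HOST'),
          sys_context('USERENV', 'AUTHENTICATION_TYPE'),
          'LOGOFF');
END;
/
```

As shown earlier, the LOGOFF row will be missing for abnormally terminated sessions, which is itself useful information when you match logons against logoffs.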

Another use of a system trigger may help you (as the DBA) identify users in need
of some help when it comes to forming SQL queries. The SERVERERROR
system event, when combined with the CURRENT_SQL parameter in the
SYS_CONTEXT function, can flag or identify users who frequently make
mistakes. The CURRENT_SQL parameter "returns the current SQL that
triggered the fine-grained auditing event. You can specify this attribute only
inside the event handler for the Fine-Grained Auditing feature." (From the SQL
Reference manual) Even without FGAC, you can set up a simple trigger-audit
table relationship as follows:

SQL> CREATE TABLE error_info
2 (username VARCHAR2(30),
3 logon_date DATE,
4 session_id VARCHAR2(30),
5 sql_statement VARCHAR2(64));

Table created.

SQL> CREATE OR REPLACE TRIGGER trg_server_error
2 AFTER SERVERERROR
3 ON DATABASE
4 DECLARE
5 session_id VARCHAR2(30);
6 sql_statement VARCHAR2(64);
7 BEGIN
8 SELECT sys_context ('USERENV', 'SESSIONID')
9 INTO session_id
10 FROM dual;
11
12 SELECT sys_context ('USERENV', 'CURRENT_SQL')
13 INTO sql_statement
14 FROM dual;
15
16 INSERT INTO error_info VALUES
17 (user, sysdate, session_id, sql_statement);
18 END;
19 /

Trigger created.

SQL> select * from some_table;

select * from some_table
*
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> select * from error_info;

USERNAME LOGON_DAT SESSION_ID SQL_STATEMENT
-------- --------- ---------- ---------------
STECAL 12-JAN-04 583

There are a great many things you can do with triggers, whether they are based
on DML statements or system events. As a developer or DBA (or both), there is
no such thing as having too many tricks up your sleeve. In terms of job or role
separation, you can think of the DML triggers as being in the purview of the
developer, and the system event triggers being in the DBA's, but a good DBA
should possess some decent programming skills of his or her own, and that's
where knowing how to avoid problems with DML triggers comes into play. Being
and staying well-informed on the use (and limitations) of triggers will make you a
trigger-happy DBA.


Altering Oracle's SQL*Plus Help Facility


James Koopmann, [email protected]

Everyone needs a little help now and then. If you have never used Oracle's help
facility, venture with me and find new ways you can provide benefit to your users
of SQL*Plus through this simple interface.

Invoking Help

The SQL*Plus help facility has a very simple syntax.

HELP [topic]

In order to get the appropriate help information, you need only issue the HELP or
'?' command on the command line within SQL*Plus, followed by the command or
subject matter you need help on. If you do not know what you want or just want
to see what is available, then for the subject matter supply the global 'TOPICS' or
'INDEX' keyword and get a listing of everything available for HELP.

SQL> HELP INDEX

Enter Help [topic] for help.

@ COPY PAUSE SHUTDOWN
@@ DEFINE PRINT SPOOL
/ DEL PROMPT SQLPLUS
ACCEPT DESCRIBE QUIT START
APPEND DISCONNECT RECOVER STARTUP
ARCHIVE LOG EDIT REMARK STORE
ATTRIBUTE EXECUTE REPFOOTER TIMING
BREAK EXIT REPHEADER TTITLE
BTITLE GET RESERVED WORDS (SQL) UNDEFINE
CHANGE HELP RESERVED WORDS (PL/SQL) VARIABLE
CLEAR HOST RUN WHENEVER OSERROR
COLUMN INPUT SAVE WHENEVER SQLERROR
COMPUTE LIST SET
CONNECT PASSWORD SHOW

Not only can you supply a single topic for the HELP command, you may also
supply an abbreviated topic. If the abbreviated topic also covers multiple topic
areas, all of the topics will be reported. For example, if I supplied the topic 'H',
under the base installation of HELP, I would get results for both HELP and
HOST.

SQL> ? H

HELP
----

Accesses this command line help system. Enter HELP INDEX for a list
of topics.
In iSQL*Plus, click the Help button to display iSQL*Plus help.

HELP [topic]

HOST
----

Executes a host operating system command without leaving
SQL*Plus.

HO[ST] [command]

Not available in iSQL*Plus

Possible errors you may encounter are an indication of the HELP facility not
being installed or an invalid topic.

SP2-0171 HELP not accessible
Cause: On-line SQL*Plus help is not installed in this Oracle instance.
Action: Contact the Database Administrator to install the on-line help.

SP2-0172 No HELP available
Cause: There is no help information available for the specified command.
Action: Enter HELP INDEX for a list of topics.

Web Reports from SQL*Plus in Oracle 8i/9i
Ajay Gursahani, [email protected]

The SQL*Plus command-line interface enables you to generate complete
HTML output, which can be embedded in a web page.

SQL*Plus provides you with a command, SET MARKUP HTML ON SPOOL ON,
which is used to produce HTML pages automatically. You can view these pages
using any web browser.

The SET MARKUP HTML ON SPOOL ON only specifies that the SQL*Plus
output will be HTML encoded; it does not create or begin writing to an output file
until you issue SPOOL <filename>. The file will then have HTML tags, including
<HTML> and </HTML>.

You have to use SPOOL OFF to close the spool file and issue SET MARKUP
HTML OFF to disable HTML output.

Example:

SQL> desc test

Name Null? Type
----------------------------------- -------- -----------------
UNIQUE_ID NUMBER
NAME VARCHAR2(25)
SALARY NUMBER

SQL> SELECT * FROM TEST;

UNIQUE_ID NAME SALARY
---------- ------------------------- ----------
1 ANDY 4500
2 ALAN 3500
3 JACK 3600
4 PETER 4000
5 JOE 2900

Issue the following commands at SQL Prompt

SET ECHO OFF
SET MARKUP HTML ON SPOOL ON
SPOOL c:\test.html
SELECT * FROM test;
SPOOL OFF
SET MARKUP HTML OFF
SET ECHO OFF

The above commands will generate an HTML file "test.html" which you can view
using a web browser. A sample output is as below:

SQL> SELECT * FROM test;

UNIQUE_ID NAME SALARY

1 ANDY 4500

2 ALAN 3500

3 JACK 3600

4 PETER 4000

5 JOE 2900

SQL> spool off

Please note that the SQL query (SELECT * FROM test;) and SPOOL OFF are
also a part of "test.html".

A view source on "test.html" will typically display the following:

<html>
<head>
<title>
SQL*Plus Report
</title>
<meta name="generator" content="SQL*Plus 9.0.1">
</head>
<body>

SQL> select * from test;
<br>
<p>
<table border="1" width="90%">
<tr><th>
UNIQUE_ID
</th><th>
NAME
</th><th>
SALARY
</th></tr>
<tr><td align="right">
1
</td><td>
ANDY
</td><td align="right">
4500
</td></tr>
<tr><td align="right">
2
</td><td>
ALAN
</td>
<td align="right">
3500
</td></tr>
<tr><td align="right">
3
</td><td>
JACK
</td><td align="right">
3600
</td></tr>
<tr><td align="right">
4
</td><td>
PETER
</td><td align="right">
4000
</td></tr>
<tr><td align="right">
5
</td><td>
JOE
</td><td align="right">
2900
</td></tr>
</table>
<p>

SQL> spool off
<br>
</body>
</html>

However, you can generate an HTML file by executing the following script from
the OS level, which will suppress the SQL commands from spooling.

test.sql

SELECT * FROM test;
EXIT

SQL> sqlplus -s -m "HTML ON" wbur/wbur @c:\test.sql>test.html

"-m" uses HTML Markup Options. It allows you to start SQL *Plus session in
'markup mode', rather than using SET MARKUP command interactively (as we
did in above example)
"-s" uses silent mode

The output is redirected to "test.html". A sample output of "test.html" is as below;

test.html

UNIQUE_ID NAME SALARY

1 ANDY 4500

2 ALAN 3500

3 JACK 3600

4 PETER 4000

5 JOE 2900

Note that the SQL commands are no longer displayed as part of html.

Summary

The above article gives a basic understanding of how you can generate HTML
reports using the MARKUP option in SQL*Plus. You can extend this exercise by
writing CGI scripts that generate web reports.

External Tables in Oracle 9i


Ajay Gursahani, [email protected]

This article gives a brief understanding about External tables. External Tables
are defined as tables that do not reside in the database, and can be in any format
for which an access driver is provided. This external table definition can be
thought of as a view that allows running any SQL query against external data
without requiring that the external data first be loaded into the database.

You can, for example, select, join, or sort external table data. You can also
create views and synonyms for external tables. However, no DML operations
(UPDATE, INSERT, or DELETE) are possible, and indexes cannot be created on
external tables.
Oracle provides the means of defining the metadata for external tables through
the CREATE TABLE ... ORGANIZATION EXTERNAL statement.

Before issuing the above command, we need to create a directory object pointing
to where the external files reside.

CREATE OR REPLACE DIRECTORY EXT_TABLES AS 'C:\EXT_TABLES\';

Example: The example below describes how to create external files, create
external tables, query external tables and create views.

Step I: Creating the flat files, which will be queried

The file "emp_ext1.dat" contains the following sample data:

101,Andy,FINANCE,15-DEC-1995
102,Jack,HRD,01-MAY-1996
103,Rob,DEVELOPMENT,01-JUN-1996
104,Joe,DEVELOPMENT,01-JUN-1996

The file "emp_ext2.dat" contains the following sample data:

105,Maggie,FINANCE,15-DEC-1997
106,Russell,HRD,01-MAY-1998
107,Katie,DEVELOPMENT,01-JUN-1998
108,Jay,DEVELOPMENT,01-JUN-1998

Copy these files under "C:\EXT_TABLES"

Step II: Create a Directory Object where the flat files will reside

SQL> CREATE OR REPLACE DIRECTORY EXT_TABLES AS 'C:\EXT_TABLES';

Directory created.

Step III: Create metadata for the external table

SQL> CREATE TABLE emp_ext
(
empcode NUMBER(4),
empname VARCHAR2(25),
deptname VARCHAR2(25),
hiredate date
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ext_tables
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
)
LOCATION ('emp_ext1.dat','emp_ext2.dat')
)
REJECT LIMIT UNLIMITED;

Table created.

The REJECT LIMIT clause specifies that there is no limit on the number of errors
that can occur during a query of the external data.

"The ORACLE_LOADER is an access driver for loading data from the external
files into the tables."

Step IV: Querying Data

SQL> SELECT * FROM emp_ext;

EMPCODE EMPNAME DEPTNAME HIREDATE
--------- ------------------- ---------------------- ---------
101 Andy FINANCE 15-DEC-95
102 Jack HRD 01-MAY-96
103 Rob DEVELOPMENT 01-JUN-96
104 Joe DEVELOPMENT 01-JUN-96
105 Maggie FINANCE 15-DEC-97
106 Russell HRD 01-MAY-98
107 Katie DEVELOPMENT 01-JUN-98
108 Jay DEVELOPMENT 01-JUN-98

8 rows selected.
Step V: Creating Views

SQL> CREATE VIEW v_empext_dev AS
SELECT * FROM emp_ext
WHERE deptname='DEVELOPMENT';

View created.

SQL> SELECT * FROM v_empext_dev;

EMPCODE EMPNAME DEPTNAME HIREDATE
------------ ------------- ---------------------- ---------
103 Rob DEVELOPMENT 01-JUN-96
104 Joe DEVELOPMENT 01-JUN-96
107 Katie DEVELOPMENT 01-JUN-98
108 Jay DEVELOPMENT 01-JUN-98

You can get the information of the objects you have created through
DBA_OBJECTS, ALL_OBJECTS or USER_OBJECTS.

SQL> SELECT OBJECT_NAME, OBJECT_TYPE FROM ALL_OBJECTS
WHERE OBJECT_NAME LIKE 'EMP_EXT';

OBJECT_NAME OBJECT_TYPE
---------------------- ------------------
EMP_EXT TABLE

1 row selected.

SQL> SELECT OBJECT_NAME, OBJECT_TYPE FROM ALL_OBJECTS
WHERE OBJECT_NAME LIKE 'EXT_TABLES';

OBJECT_NAME OBJECT_TYPE
---------------------- ------------------
EXT_TABLES DIRECTORY

1 row selected.

Populating Tables using the INSERT command

You can populate data from external files using an "insert into … select from"
statement instead of using SQL*Loader. This method provides very fast data
loads.

Example:

Consider a table EMPLOYEES:

SQL> desc EMPLOYEES;

Name Null? Type
--------------------------------- -------- --------------
EMPCODE NUMBER(4)
EMPNAME VARCHAR2(25)
DEPTNAME VARCHAR2(25)
HIREDATE DATE

SQL> INSERT INTO employees
(empcode,empname,deptname,hiredate) SELECT * FROM emp_ext;

8 rows created.

SQL> SELECT * FROM employees;

EMPCODE EMPNAME DEPTNAME HIREDATE
------------ ------------------- ---------------------- ---------
101 Andy FINANCE 15-DEC-95
102 Jack HRD 01-MAY-96
103 Rob DEVELOPMENT 01-JUN-96
104 Joe DEVELOPMENT 01-JUN-96
105 Maggie FINANCE 15-DEC-97
106 Russell HRD 01-MAY-98
107 Katie DEVELOPMENT 01-JUN-98
108 Jay DEVELOPMENT 01-JUN-98

8 rows selected.

Dropping External Tables

For an external table, the DROP TABLE statement removes only the table
metadata in the database. It has no effect on the actual data, which resides
outside of the database.

Summary

External tables are thus defined in the data dictionary and can be queried just as
you would query ordinary Oracle tables. You can also perform fast data loads using
the above method instead of using SQL*Loader.
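As a footnote to the fast-load point, the insert from the external table can be made a direct-path load with the standard APPEND hint; this is a sketch using the tables from the example above.

```sql
-- Direct-path load from the external table. APPEND inserts above the
-- table's high-water mark; a COMMIT is required before the same session
-- can query the table again.
INSERT /*+ APPEND */ INTO employees
SELECT * FROM emp_ext;
COMMIT;
```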
