XMLType Datatype in Oracle9i
Oracle9i has a dedicated XML datatype called XMLTYPE. It is made up of a CLOB to store the
original XML data and a number of member functions that make the data available to SQL. In this
article I'll present a simple example of its use.
First we must create a table to store XML documents using the XMLTYPE datatype:
CREATE TABLE tab1 (
col1 SYS.XMLTYPE
);
The table can be populated using XML from a CLOB, VARCHAR2 or an XMLTYPE generated from a
query:
DECLARE
  v_xml SYS.XMLTYPE;
  v_doc CLOB;
BEGIN
  -- XMLTYPE created from a CLOB
  v_doc := '<?xml version="1.0"?>' || Chr(10) ||
           '<TABLE_NAME>MY_TABLE</TABLE_NAME>';
  v_xml := SYS.XMLTYPE.createXML(v_doc);

  -- Insert the XMLTYPE into the table
  INSERT INTO tab1 (col1) VALUES (v_xml);
  COMMIT;
END;
/
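The second row shown in the query output below came from an XMLTYPE generated from a
query; that listing did not survive here, but a sketch along these lines would produce it:

```sql
-- Populate tab1 with an XMLTYPE built from a dictionary query
-- (a reconstruction; the table and predicate are assumptions)
INSERT INTO tab1 (col1)
SELECT SYS.XMLTYPE.createXML('<?xml version="1.0"?>' || Chr(10) ||
         '<TABLE_NAME>' || table_name || '</TABLE_NAME>')
FROM   user_tables
WHERE  table_name = 'TAB1';

COMMIT;
```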
The data in the table can be viewed using the following query:
SET LONG 1000
SELECT a.col1.getStringVal()
FROM tab1 a;
A.COL1.GETSTRINGVAL()
----------------------------------------------------------------------------------------------------
<?xml version="1.0"?>
<TABLE_NAME>MY_TABLE</TABLE_NAME>
<?xml version="1.0"?>
<TABLE_NAME>TAB1</TABLE_NAME>
2 rows selected.
SQL>
We can extract the value of specific tags using:
SELECT a.col1.extract('//TABLE_NAME/text()').getStringVal() AS "Table Name"
FROM tab1 a
WHERE a.col1.existsNode('/TABLE_NAME') = 1;
Table Name
----------------------------------------------------------------------------------------------------
MY_TABLE
TAB1
2 rows selected.
SQL>
In the above example I was expecting a string, but NUMBERs and CLOBs can be returned using
getNumVal() and getClobVal() respectively. Since the XMLTYPE datatype can contain any
XML document it is sensible to limit the query to those rows which contain the relevant tags,
hence the WHERE clause.
WITH Clause
The WITH clause, or subquery factoring clause, is part of the SQL-99 standard and was added
into the Oracle SQL syntax in Oracle 9.2. This article shows how it can be used to reduce
repetition and simplify complex SQL statements.
Note. I'm not suggesting the following queries are the best way to retrieve the required
information. They merely demonstrate the use of the WITH clause.
Using the SCOTT schema, for each employee we want to know how many other people are in
their department. Using an inline view we might do the following.
SELECT e.ename AS employee_name,
dc.dept_count AS emp_dept_count
FROM emp e,
(SELECT deptno, COUNT(*) AS dept_count
FROM emp
GROUP BY deptno) dc
WHERE e.deptno = dc.deptno;
Using a WITH clause this would look like the following.
WITH dept_count AS (
SELECT deptno, COUNT(*) AS dept_count
FROM emp
GROUP BY deptno)
SELECT e.ename AS employee_name,
dc.dept_count AS emp_dept_count
FROM emp e,
dept_count dc
WHERE e.deptno = dc.deptno;
The difference seems rather insignificant here.
What if we also want to pull back each employee's manager's name and the number of people in
the manager's department? Using the inline view, it now looks like this.
SELECT e.ename AS employee_name,
dc1.dept_count AS emp_dept_count,
m.ename AS manager_name,
dc2.dept_count AS mgr_dept_count
FROM emp e,
(SELECT deptno, COUNT(*) AS dept_count
FROM emp
GROUP BY deptno) dc1,
emp m,
(SELECT deptno, COUNT(*) AS dept_count
FROM emp
GROUP BY deptno) dc2
WHERE e.deptno = dc1.deptno
AND e.mgr = m.empno
AND m.deptno = dc2.deptno;
Using the WITH clause this would look like the following.
WITH dept_count AS (
SELECT deptno, COUNT(*) AS dept_count
FROM emp
GROUP BY deptno)
SELECT e.ename AS employee_name,
dc1.dept_count AS emp_dept_count,
m.ename AS manager_name,
dc2.dept_count AS mgr_dept_count
FROM emp e,
dept_count dc1,
emp m,
dept_count dc2
WHERE e.deptno = dc1.deptno
AND e.mgr = m.empno
AND m.deptno = dc2.deptno;
So we don't need to redefine the same subquery multiple times. Instead we just use the query
name defined in the WITH clause, making the query much easier to read.
Even when there is no repetition of SQL, the WITH clause can simplify complex queries, like the
following example that lists those departments with above average wages.
WITH
dept_costs AS (
SELECT dname, SUM(sal) dept_total
FROM emp e, dept d
WHERE e.deptno = d.deptno
GROUP BY dname),
avg_cost AS (
SELECT SUM(dept_total)/COUNT(*) avg
FROM dept_costs)
SELECT *
FROM dept_costs
WHERE dept_total > (SELECT avg FROM avg_cost)
ORDER BY dname;
In the previous example, the main body of the query is very simple, with the complexity hidden in
the WITH clause.
For more information see:
subquery_factoring_clause
For some reason the ability to handle NULLS in SQL statements can confuse
some. This article takes a look at how to think of NULLs.
For some reason the value of NULL has confused many and even started feuds
across the internet for the slightest slip of the tongue. Hopefully I will not slip
myself here.
I often wonder why there is such confusion over such a simple concept. I think
the confusion stems from the fact that, as one of my early professors stated,
statements about the supposed value of NULL have stuck in our brains. My
professor made a statement something like this: "NULL has a value, we just don't
know what it is". I personally think that because people place the term "value"
around the word NULL, it is for some reason given some form of literal value.
After all, we can use columns that contain NULL in conditional comparisons.
Simply stated, a column that contains a NULL is said to have no value, or to be
unknown. I personally think we should avoid any reference to a "value" when
talking about NULL. So instead of saying "it contains a NULL value", we should
just say "it contains a NULL".
For simplicity's sake, we should first talk about how NULLs get into a column.
There are basically two ways a NULL can get into a column of a table. I will
focus on the INSERT statement here for simplicity.
We can then INSERT into the table by explicitly stating the NULL keyword.
Warning: Oracle allows the insertion of a NULL through the use of '' as an
empty value. This can be done for character and numeric fields alike.
Even though I know this is a widespread practice, it should be avoided, as
Oracle has stated they may change this "feature". In addition, as will be shown
later in this article, it might produce a false sense of value for the NULLs
entered.
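The table and insert listings are not shown above; a minimal sketch, with the
TABLE_NULL layout taken from the demo data later in this article, covers both routes:

```sql
CREATE TABLE table_null (
  id   NUMBER,
  name VARCHAR2(10)
);

-- A row with a value in NAME
INSERT INTO table_null (id, name) VALUES (1, '1');

-- A NULL by explicitly stating the NULL keyword
INSERT INTO table_null (id, name) VALUES (2, NULL);

-- The empty-string form Oracle also stores as NULL (best avoided, as noted above)
-- INSERT INTO table_null (id, name) VALUES (2, '');
```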
Getting a NULL into a column is not typically the hard part of dealing with
NULLs. It is the extraction of information from a table that gets a bit confusing.
For this part of the article, I will focus on the following two rows of data in our
TABLE_NULL table. (An absence of data means the column contains a NULL.)

        ID NAME
---------- ----------
         1 1
         2
When comparing a column that contains a NULL, you must use either 'IS NULL'
or 'IS NOT NULL'. Remembering that an evaluation of a condition is either TRUE
or FALSE, anything other than using 'IS NULL' or 'IS NOT NULL' against a NULL
results in something unknown (not TRUE or FALSE). A few quick examples
help here. Notice that the use of equality ('='), while it may appear to
evaluate to FALSE, actually evaluates to UNKNOWN and returns nothing.
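The quick examples referred to above did not survive the formatting; a sketch against
the demo table:

```sql
-- NAME = NULL evaluates to UNKNOWN for every row, so no rows come back
SELECT * FROM table_null WHERE name = NULL;

-- Correct: returns the row whose NAME contains a NULL
SELECT * FROM table_null WHERE name IS NULL;
```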
This UNKNOWN can wreak havoc on some programming out there. A long time
ago, when counting rows in a table that met some condition, we were told not to
use the '*' in the COUNT function of a SQL statement. For instance, we would get
the following result for our demo data.
Over time, we were told to replace the '*' in the COUNT function with a column
from the table. This was to work around some performance problems in the Oracle
engine. So if we replaced the '*' with the column ID, we would get the following.
Now suppose we happened to choose a column that was NULLABLE. Or let's say
that over time the NAME column, which once was a NOT NULL column, became a
NULLABLE column. The following SQL statement would then produce surprising
results, and you can see that this could cause a huge problem for any subsequent
processing in an application. The reason this happens is that a NULL passed to
an aggregate such as COUNT is simply not counted, and most other functions,
when given a NULL, return NULL.
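The result sets for these COUNT variations were lost in formatting; against the demo
data they behave like this:

```sql
SELECT COUNT(*)    AS cnt_star,
       COUNT(id)   AS cnt_id,
       COUNT(name) AS cnt_name
FROM   table_null;

-- cnt_star = 2 and cnt_id = 2, but cnt_name = 1:
-- the row whose NAME is NULL is silently skipped by COUNT(name)
```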
One way to remedy some of the hassles of using NULLs is to wrap columns of
concern in other functions such as DECODE or NVL. Doing this gives the coder
the option to replace NULLs with something more acceptable to the SQL being
performed. For instance, the previous COUNTing example could be helped along
to "possibly" get the required results by the following SQL.
SQL> SELECT COUNT(NVL(name,'0')) FROM table_null;

COUNT(NVL(NAME,'0'))
--------------------
                   2
1 row selected.
While databases, tables, and columns were intended to store values, there will no
doubt be times when we have to store something quite different from a value.
Is a nullable column a placeholder for future values, or is it something quite
different? Remember that NULL isn't a value, it isn't an empty string, and it
isn't zero; it is simply something we don't know.
How and when do alerts or informational messages about what’s taking place
inside your database make their way out to you, the DBA par excellence? There
are several ways, some free and some hand-crafted, to expose alerts and
messages. Let’s face it, as the Oracle RDBMS engine becomes more and more
Skynet-Terminator-Judgement Day-aware, keeping track of what’s taking place
inside an instance has become easier and harder at the same time. The easier
aspect of this statement is evidenced by more sophisticated monitoring tools and
interfaces, and the harder part is borne out by the sheer number of metrics that
are available to monitor.
Let’s start off with a simple peek inside the database option.
The steps can be summarized by the shell script pseudo code below:
#!/usr/bin/ksh
# Pull the tail of the alert log; adjust the path for your background dump destination
tail -100 $ORACLE_HOME/bdump/alert_${ORACLE_SID}.log > alert.log
COUNT=`grep -c ORA- alert.log`
if [ "$COUNT" -gt 0 ]
then
  mail -s "Check alert log" [email protected] < alert.log
fi
Several pieces need to be in place for this scheme. First, whoever (a person, or a
machine user such as oracle) runs the script needs appropriate file system
permissions to read under $ORACLE_HOME and to write the output file.
Second, your MTA can be as simple as “mail” (or mailx, depending on your
flavor/version of UNIX). Chances are your UNIX admin already has UNIX mail
working as no doubt much of his or her watchfulness is notification after the fact
as opposed to scanning logs all day long (which is pretty much what this is for
you as well).
Third, you need something to read mail yourself, so that implies something along
the lines of Outlook/Exchange Server in your company’s office. Assuming you
have been assimilated by the Borg, oops, I mean Microsoft, then the email
address shown in the example would stand out to those familiar with aliases or
mail groups. Otherwise, have the script “cat” a file with email addresses in it and
loop through the addresses.
Fourth, you need something to execute the tail job on a periodic basis, as you are
pulling the alert log information rather than having it pushed to you, and what
better than a cron job to manage this aspect of the process. The cron job can run
every ten minutes (as an example) all week long. While crons are very reliable,
what the job cannot do is guarantee that it will catch an ORA- error. One way
to help ensure that your tail of 100 lines does not miss the ORA- error at the 101st
line (i.e., you missed it by one line) is to grab enough lines to increase the
likelihood that the extract will contain at least the last ten minutes of alert log
activity. Better to grab too much of the alert log than not enough.
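As a sketch, assuming the alert log script above is saved as
/home/oracle/scripts/check_alert.ksh (a hypothetical path), a crontab entry to run it
every ten minutes might look like:

```
# minute hour day-of-month month day-of-week command
0,10,20,30,40,50 * * * * /home/oracle/scripts/check_alert.ksh
```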
As a variation on what is emailed to you, don’t include the entire alert log extract
in the DBA alert email. You only need a subject line telling you to inspect the alert
log as opposed to sending (and waiting) multiple KB worth of text, especially if
you’re receiving email on a PDA while on call.
A variation (or complement) of the alert log scan is an existence check for
required processes. As a minimum, does the script need to check for PMON,
SMON, DBWn, LGWR, and CKPT? Not really: checking for PMON by itself,
as an example, is sufficient. No PMON means no instance, which in turn means
no running database (assuming a single instance/single database pairing).
Between an alert log scan and an instance checking “is my database up” script,
the instance checking version is more of a superset of the alert log scan. Here is
why this is so: is an alert log going to be written to if the instance is no longer
running?
We'll use 288 (for PMON) as one of the parameters for orakill.
The alert log then records information about the instance failure, and you can see
the ripple effect among the trace files related to other processes (not all alert
entries are shown).
We can get much more information about what’s going on inside a database with
the DBMS_SERVER_ALERT built-in PL/SQL package. In fact, more than 140
metrics are available, and the alert threshold values for many of these can be
adjusted to suit your particular needs.
One alert or metric you may find to be useful involves the detection of blocking,
the “silent” show stopper of Oracle. Blocking can go on for hours and hours with
no discernible or externally noticeable signs of it taking place. Blocking is usually
detected when users start to complain about hung sessions, followed by calls
about not being able to log in, and when scripted jobs fail to complete (noticed by
you or others). Aside from manually detecting blocking, wouldn’t it be nice to be
alerted when Oracle detects a blocking situation? In Oracle 10g, we can do
exactly that.
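As a sketch of what this looks like (BLOCKED_USERS, OPERATOR_GE and
OBJECT_TYPE_SESSION are documented DBMS_SERVER_ALERT constants; the threshold values
and instance name here are illustrative):

```sql
BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD(
    metrics_id              => DBMS_SERVER_ALERT.BLOCKED_USERS,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '1',   -- warn on a single blocked session
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '3',   -- critical at three or more
    observation_period      => 1,     -- minutes
    consecutive_occurrences => 1,
    instance_name           => 'ORCL',
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_SESSION,
    object_name             => NULL);
END;
/
```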
One of the configurable metrics is for blocked user sessions, and it comes with
its own graph. The “Metric Value” picture below is a result of the competing
update statements shown in the SQL*Plus session windows (with an output of
the blocking info below that).
Blocking is really quite insidious, and user sessions in an OLTP database can
stack up in no time at all. From a customer service perspective, you can be
certain your company would hate to have customers dissatisfied with your Web
site that manages personal account information, mailing/shipping preferences,
and any number of service oriented functionality. With server managed alerts,
you can be one of the first to know about this situation as opposed to being
practically the last to know.
In Closing
In the next article about serving up server alerts, we’ll go into detail about two
ways to configure and manage server alert/metric settings: using the
DBMS_SERVER_ALERT package and its GUI counterpart in Database Control.
Back to DBAsupport.com
The UNION operation, you will recall, brings two sets of data together. It will *NOT*,
however, produce duplicate or redundant rows. To perform this feat of magic, a
SORT operation is done on both tables. This is obviously computationally
intensive, and uses significant memory as well. UNION ALL, conversely, just
dumps the collection of both sets together in arbitrary order, not worrying about
duplicates.
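A quick sketch against the SCOTT schema shows the difference:

```sql
-- UNION sorts both sets and returns each DEPTNO only once
SELECT deptno FROM emp
UNION
SELECT deptno FROM dept;

-- UNION ALL simply concatenates the sets, duplicates included
SELECT deptno FROM emp
UNION ALL
SELECT deptno FROM dept;
```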
You can script the process to include it in a set of install scripts you deliver
with a product.
You can put your create database script in CVS for version control, so as
you make changes or adjustments to it, you can track them like you do
changes to software code.
You learn more about the process of database creation, such as what
options are available and why.
3. What are three rules of thumb to create good passwords? How would a
DBA enforce those rules in Oracle? What business challenges might you
encounter?
Oracle has a facility called password security profiles. When enabled, they can
enforce complexity and length rules, as well as other password-related security
measures.
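As a sketch (the profile name is illustrative; verify_function is the sample
password-complexity function created by Oracle's utlpwdmg.sql script):

```sql
-- A profile enforcing basic password rules
CREATE PROFILE secure_users LIMIT
  FAILED_LOGIN_ATTEMPTS    5
  PASSWORD_LIFE_TIME       90
  PASSWORD_REUSE_MAX       10
  PASSWORD_VERIFY_FUNCTION verify_function;

-- Assign the profile to a user
ALTER USER scott PROFILE secure_users;
```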
In the security arena, passwords can be made better, and it is a fairly solvable
problem. However, what about in the real-world? Often the biggest challenge is
in implementing a set of rules like this in the enterprise. There will likely be a lot
of resistance to this, as it creates additional hassles for users of the system who
may not be used to thinking about security seriously. Educating business folks
about the real risks, by coming up with real stories of vulnerabilities and break-ins
you've encountered on the job, or those discussed on the internet goes a long
way towards emphasizing what is at stake.
4. Describe the Oracle Wait Interface, how it works, and what it provides.
What are some limitations? What do the db_file_sequential_read and
db_file_scattered_read events indicate?
The Oracle Wait Interface refers to Oracle's data dictionary views for wait
events. Selecting from views such as v$system_event and v$session_event gives
you event totals over the life of the database (or session). The former holds
totals for the whole system, the latter totals on a per-session basis. The event
db_file_sequential_read refers to single-block reads, such as table accesses by
rowid. db_file_scattered_read, conversely, refers to full table scans. It is so named
because the blocks are read and scattered into the buffer cache.
5. How do you return the top-N results of a query in Oracle? Why doesn't
the obvious method work?
Most people think of using the ROWNUM pseudocolumn with ORDER BY.
Unfortunately, ROWNUM is assigned *before* the ORDER BY is applied, so you don't
get the results you want. The answer is to use a subquery to do the ORDER BY
first. For example, to return the top-5 employees by salary:
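The listing did not survive here; a typical form of the answer, against the SCOTT
schema, is:

```sql
-- ORDER BY runs in the subquery; ROWNUM is then applied to the ordered rows
SELECT *
FROM  (SELECT ename, sal
       FROM   emp
       ORDER  BY sal DESC)
WHERE  ROWNUM <= 5;
```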
Oracle's Data Guard technology is a layer of software and automation built on top
of the standby database facility. In Oracle Standard Edition it is possible to
maintain a standby database and update it *manually*. Roughly: put your production
database in archivelog mode. Create a hot backup of the database and move it to
the standby machine. Then create a standby controlfile on the production
machine, and ship that file, along with all the archived redo log files, to the standby
server. Once you have all these files assembled, place them in their proper
locations, recover the standby database, and you're ready to roll. From this point
on, you must manually ship, and manually apply, the archived redo logs to stay
in sync with production.
To test your standby database, make a change to a table on the production
server, and commit the change. Then manually switch a logfile so those changes
are archived. Manually ship the newest archived redolog file, and manually apply
it on the standby database. Then open your standby database in read-only
mode, and select from your changed table to verify those changes are available.
Once you're done, shutdown your standby and startup again in standby mode.
A database link allows you to make a connection with a remote database, Oracle
or not, and query tables from it, even incorporating those accesses with joins to
local tables.
A private database link only works for, and is accessible to, the user/schema that
owns it. A public one can be accessed by any user in the database.
A fixed user link specifies that you will connect to the remote db as one and only
one user that is defined in the link. Alternatively, a current user database link will
connect as the current user you are logged in as.
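A sketch of a private, fixed-user link (the link name, credentials, and TNS alias are
illustrative):

```sql
-- Always connects to the remote database as SCOTT
CREATE DATABASE LINK sales_link
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'SALESDB';

-- Remote tables can then be queried, even joined to local tables
SELECT * FROM emp@sales_link;
```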
As you prepare for your DBA interview, or prepare to give one, we hope these
questions provide some new ideas and directions for your study. Keep in mind
that there are a lot of directions an interview can go. As a candidate, emphasize
what you know, even if it is not the direct answer to the question; as an
interviewer, allow the interview to go in creative directions. In the end, what is
important is potential and aptitude, not specific memorized answers. So listen for
problem-solving ability and thinking outside the box, and you will surely find, or
be, the right candidate for the job.
Of particular interest for this new series are the functions related to numbers.
How does this relate to your job as a DBA? Answer: in several ways. First, you
may have to support a Decision Support group. Decision support groups
frequently use analytic functions and statistical methods to support a business
decision. It helps to understand (or at least recognize the name) some of the
analytic tools being used. Second, you may be working in a data warehouse type
of environment where YOU are the decision support guru. Granted, your job may
be to simply massage or extract the data for others to analyze, but you are the
SQL expert by virtue of being the DBA. If you are asked to find the regression
line for quantity produced versus sales, you will make a better impression by not
having that deer in the headlights look come across your face.
One thing that goes hand-in-hand with analysis is some type of pictorial
representation of what is being analyzed. Although Oracle does provide other
tools for accomplishing this, that falls more into the Forms & Reports developer
realm, so we'll say that is a possible solution and leave it at that. So even if there
is a heavy chart/graph/histogram/whatever requirement to be met by using Excel
(or something else), you can still use Oracle as a backup to check the work
performed by someone else using a different tool.
What is linear regression? Let's say you have an input, like list price of some
product your company produces. The lower the price, the more your company
sells of that product. Conversely, the higher the price, the fewer your company
sells. Over time, you plot selling price versus quantity sold. When you look at the
dots or plots on a piece of graph paper, perhaps the arrangement of the dots
tends to suggest drawing a straight line, which generally comes close to
connecting most of the dots. If the dots are pretty close to the line, then you may
have a strong relationship between the X and Y (selling price and quantity
sold). In fact, the line may be such a good fit that you can come close to
predicting the quantity sold if given the selling price, and vice versa.
On the other hand, the points, when plotted, may look like a hazy cloud on the
graph paper - there is no distinct line or trend between the X and Y values.
Because there is no line which "fits" or connects the points, there is a weak
relationship between X and Y (a list price of 52 may be just as likely to result in a
quantity sold of 12, 17, or 18 - there is no or very little predictive power knowing
one thing or the other).
So, getting back to what Oracle can do for you with respect to linear regression:
quite a bit. The linear regression functions, named REGR_ followed by a
"component" (REGR_SLOPE, REGR_INTERCEPT, and so on), output nine items of
interest typically used when evaluating data. A component is selected by
specifying the related REGR function and its two inputs, expr1 and expr2 (the
dependent and independent variables, respectively). I could have said the "X"
and the "Y," and I will later, but not for now.
To look at the example Oracle provides, you will need to install the sample
schema named SH (for sales history). The sample schemas are referenced in
the Oracle9i Sample Schemas guide
(http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96539/toc.htm).
If you installed the sample schemas during the Oracle9i installation, you will
also need to unlock the SH user account (and give it a password you can
remember, like "SH"). If you have never seen a million-row table, this is your
opportunity. As the user SH, doing a "select count(*) from sales" shows just
over a million rows (1016271, to be exact).
The example shown in the SQL Reference guide has five channel_id's, but we
will concentrate on the first one (identified as "C"). Here is the select statement
and its output as shown in a SQL*Plus session as the user named SH:
SQL> SELECT
2 s.channel_id,
3 REGR_SLOPE(s.quantity_sold, p.prod_list_price) SLOPE ,
4 REGR_INTERCEPT(s.quantity_sold, p.prod_list_price) INTCPT ,
5 REGR_R2(s.quantity_sold, p.prod_list_price) RSQR ,
6 REGR_COUNT(s.quantity_sold, p.prod_list_price) COUNT ,
7 REGR_AVGX(s.quantity_sold, p.prod_list_price) AVGLISTP ,
8 REGR_AVGY(s.quantity_sold, p.prod_list_price) AVGQSOLD
9 FROM sales s, products p
10 WHERE s.prod_id=p.prod_id AND
11 p.prod_category='Men' AND
12 s.time_id=to_DATE('10-OCT-2000')
13 GROUP BY s.channel_id;
Let's add another AND to the WHERE clause to get just the first row:
SQL> SELECT
2 s.channel_id,
3 REGR_SLOPE(s.quantity_sold, p.prod_list_price) SLOPE ,
4 REGR_INTERCEPT(s.quantity_sold, p.prod_list_price) INTCPT ,
5 REGR_R2(s.quantity_sold, p.prod_list_price) RSQR ,
6 REGR_COUNT(s.quantity_sold, p.prod_list_price) COUNT ,
7 REGR_AVGX(s.quantity_sold, p.prod_list_price) AVGLISTP ,
8 REGR_AVGY(s.quantity_sold, p.prod_list_price) AVGQSOLD
9 FROM sales s, products p
10 WHERE s.prod_id=p.prod_id AND
11 p.prod_category='Men' AND
12 s.time_id=to_DATE('10-OCT-2000')
13 AND s.channel_id = 'C'
14 group by s.channel_id;
Overall, there were 206 rows involved, but we are only interested in the 20 rows
for "C."
DDL Event Security in Oracle Database
Amar Kumar Padhi
We have various levels of security (inside and outside of Oracle database) that
can be implemented according to one's requirements. Mentioned here is a way of
implementing security against structural changes or Data Definition Language
(DDL) changes. This security is put within the database for logical objects.
DDL commands are critical and cannot be rolled back. As a norm, new
developments/changes should be done on test boxes before promoting them to
Production. Such promotions to production can be done as scheduled upgrades
during off-peak hours or when a maintenance window is available as system
downtime. At all other times the system should be up and running as desired.
DDL event security is aimed at preventing structural changes while the site
application is up and running. An application user may never fire such commands
explicitly, or may not have access to do so, but this security can be
implemented in general rather than aimed at a specific set of users. For example,
a script might be fired against the production database by the IT team in error.
The idea is to track down DDL commands when they are fired. This can be done
using system event triggers introduced in Oracle 8i. Mentioned below is a simple
process that I use at my site. If you plan to set up something similar, you can use
the code below or modify it as per your needs.
Setup
1. A SYS-owned table is created to hold information about the objects that
should be protected from structural changes. Please note that if you implement
this security globally, you may not require the table below; but if you intend
to apply it only to the application tables, while allowing other temporary
objects in the database to be changed, you will have to maintain a master table
such as the one below to differentiate them.
For example, my site runs Oracle Applications 11i. There are times when
reporting or custom objects have to be modified for urgent implementation of a
change (e.g., a new ad hoc report requirement). As these are custom objects
meant for data extraction for reporting purposes, I allow some room for
structure changes here. However, no compromise can be made with the core
application tables, so these are permanently locked against all structural
changes. I maintain a table such as the one below to hold application-related
objects and important custom objects.
The columns starting with EVENT_ track whether CREATE, ALTER, DROP or
TRUNCATE are allowed. There may be a need to allow truncating of a table
while preventing other changes; in that case EVENT_TRUNC can be set to 'Y' and
the rest of the events set to 'N'. Similarly, there may be a need for a routine
that should be allowed to be recompiled (ALTER) while re-creating it is
prevented (CREATE OR REPLACE); EVENT_ALTER can be set to 'Y' and the other
events set to 'N'.
The STATUS column states whether the security is 'Active' for an object or not.
This can be set to 'Inactive' to disable the security.
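The DDL for the master table is not reproduced above; a minimal sketch consistent
with the description (table and column names are assumptions):

```sql
CREATE TABLE ddl_security_master (
  owner        VARCHAR2(30),
  object_name  VARCHAR2(30),
  event_create VARCHAR2(1) DEFAULT 'N',   -- 'Y' allows CREATE
  event_alter  VARCHAR2(1) DEFAULT 'N',   -- 'Y' allows ALTER
  event_drop   VARCHAR2(1) DEFAULT 'N',   -- 'Y' allows DROP
  event_trunc  VARCHAR2(1) DEFAULT 'N',   -- 'Y' allows TRUNCATE
  status       VARCHAR2(8) DEFAULT 'Active'
);
```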
exception
when no_data_found then
null;
end;
/
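Only the exception handler of the system trigger (step 2) survived above; a minimal
sketch of the whole idea, using assumed table and column names, looks something like
this, with the NO_DATA_FOUND handler letting unregistered objects through:

```sql
CREATE OR REPLACE TRIGGER ddl_security_trg
BEFORE CREATE OR ALTER OR DROP OR TRUNCATE ON DATABASE
DECLARE
  l_allowed VARCHAR2(1);
BEGIN
  -- Pick the EVENT_ flag matching the DDL being attempted
  SELECT DECODE(ora_sysevent,
                'CREATE',   event_create,
                'ALTER',    event_alter,
                'DROP',     event_drop,
                'TRUNCATE', event_trunc, 'Y')
  INTO   l_allowed
  FROM   ddl_security_master
  WHERE  owner       = ora_dict_obj_owner
  AND    object_name = ora_dict_obj_name
  AND    status      = 'Active';

  IF l_allowed = 'N' THEN
    RAISE_APPLICATION_ERROR(-20001,
      ora_sysevent || ' not allowed on ' || ora_dict_obj_name);
  END IF;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    NULL;  -- object not registered (or security inactive): allow the DDL
END;
/
```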
3. If there is a need to disable DDL event security for a particular object, or
for all objects, the master table's STATUS column should be updated to 'Inactive'.
As the system trigger looks at this table to enforce security, it will skip all
objects with an inactive status.
For example, I enable this security for a custom table called AZ_CATENT. The
following insert will add this object to the master.
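The insert statement itself is missing above; a sketch, using the assumed
master-table columns:

```sql
-- Register AZ_CATENT with all structural changes disallowed
INSERT INTO ddl_security_master
  (owner, object_name, event_create, event_alter, event_drop, event_trunc, status)
VALUES
  ('APPS', 'AZ_CATENT', 'N', 'N', 'N', 'N', 'Active');
```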
commit;
Please be careful when working with system triggers. Do not leave an event
trigger with compilation errors, as this would throw errors for all structural
changes being carried out.
DDL event security is one of many alternatives that can be used to prevent
mishaps, but this does not necessarily mean that the basic security established
by passwords, roles and privileges takes a back seat; these should not be
compromised for an alternative. Use the above feature to suit your setup
requirements.
Have you ever wondered about storing documents into your Oracle database
and just didn't know where to start? Here is a quick introduction to the basics you
need to know.
Manipulating Oracle Files with UTL_FILE showed you how to read the alert log and
do some manipulation on the file while it was external to the database. You
should review that article, as it contains background information you will need,
along with explanations of some of the procedures in this code that I will not
go into here. The next logical extension to that article is the manipulation of
external files, such as documents, and their storage in the database. This
article will take you through a brief overview of the datatypes and procedures
needed to store Word documents within the database.
The Datatypes
When talking about manipulating documents within a database, there are only a
few choices for a datatype that can handle a large document. These large
objects (LOBs) can use any one of four datatypes, depending on the
characteristics of the object you are storing. Large objects can be in the
form of text, graphics, video or audio.
Datatype     Description
------------ ------------------------------------------------------------------
BLOB         Stores unstructured binary data up to 4GB. The full binary object
             is stored in the database.
CLOB/NCLOB   Stores up to 4GB of character data. The full character data is
             stored in the database.
BFILE        A locator that points to a large binary object held outside the
             database in an operating system file. The external data is
             read-only and is not stored in the database itself.
Benefits of LOBs
It used to be that the largest object you could store in the database was of the
datatype LONG. Oracle has, for the last few releases, kept telling us to convert
our LONG datatypes to a LOB datatype (maybe someday they will convert their own).
The reason for converting our LONGs to LOBs can be seen in this short list of
benefits.
In order to store the documents into the database you must obviously first create
an object to store the information. Following is the DDL to create the table
MY_DOCS. You will notice that there is a holder for the bfile location and a
column (DOC_BLOB) to hold the document.
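The DDL itself did not survive the formatting; a sketch consistent with the column
names used by the load and search procedures (sizes are assumptions):

```sql
CREATE TABLE my_docs (
  doc_id    NUMBER        PRIMARY KEY,
  bfile_loc BFILE,                       -- pointer to the external document
  doc_title VARCHAR2(255),
  doc_blob  BLOB DEFAULT EMPTY_BLOB()    -- the document stored in the database
);
```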
The load procedure takes as arguments the document name and an id number for the
document. The procedure will then prime a row for update based on the document id,
BFILE location and document name (which becomes the document title). The procedure
will then open internal and external BLOBs and load the internal from the external. At
this point, the document has been loaded into the database table.
bfile_loc := BFILENAME('DOC_DIR', in_doc);
    In order to load the document, you must first point to the external object
    through a BFILE locator. The BFILENAME function takes a directory location
    and the document name.

INSERT INTO my_docs (doc_id, bfile_loc, doc_title)
VALUES (1, bfile_loc, in_doc);
    This statement primes the row into which the external object will be
    loaded.

SELECT doc_blob INTO temp_blob
FROM my_docs WHERE doc_id = in_id
FOR UPDATE;
    Associates the local temporary BLOB with the table's BLOB column, locking
    the row for update.

DBMS_LOB.OPEN(bfile_loc, DBMS_LOB.LOB_READONLY);
    Opens the external BFILE for reading.
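Putting those pieces together, the load procedure might be sketched as follows. This is a reconstruction from the steps described above, not the author's original listing; the use of DBMS_LOB.LOADFROMFILE for the internal-from-external copy is an assumption.

    PROCEDURE load_doc (in_id IN NUMBER, in_doc IN VARCHAR2) IS
      bfile_loc BFILE;
      temp_blob BLOB;
    BEGIN
      -- Point to the external document through a BFILE locator.
      bfile_loc := BFILENAME('DOC_DIR', in_doc);

      -- Prime the row that will receive the document.
      INSERT INTO my_docs (doc_id, bfile_loc, doc_title)
      VALUES (in_id, bfile_loc, in_doc);

      -- Lock the row and associate the local BLOB with the table column.
      SELECT doc_blob INTO temp_blob
      FROM my_docs WHERE doc_id = in_id
      FOR UPDATE;

      -- Open the external file read-only and copy it into the internal BLOB.
      DBMS_LOB.OPEN(bfile_loc, DBMS_LOB.LOB_READONLY);
      DBMS_LOB.LOADFROMFILE(temp_blob, bfile_loc, DBMS_LOB.GETLENGTH(bfile_loc));
      DBMS_LOB.CLOSE(bfile_loc);

      COMMIT;
    END load_doc;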
The search procedure takes as arguments a document id and a search string. The procedure
converts the search string into raw format and places it into the variable named
PATTERN. Once the variable PATTERN is populated, it is used to search the loaded
temporary BLOB DOC_BLOB to see if the particular pattern exists.
Pattern := utl_raw.cast_to_raw(in_search);
    Takes the input search characters and converts them to raw characters that
    can be used to search your document.

SELECT doc_blob INTO lob_doc
FROM my_docs WHERE doc_id = in_id;
    Puts the document into a temporary BLOB for manipulation.

DBMS_LOB.OPEN(lob_doc, DBMS_LOB.LOB_READONLY);
    Opens the temporary BLOB for reading.

Position := DBMS_LOB.INSTR(lob_doc, Pattern, Offset, Occurrence);
    Searches the temporary BLOB for the supplied search string. If the pattern
    is found, the variable POSITION will be non-zero.
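Assembled into one unit, the search procedure might be sketched like this (a reconstruction from the description above; the reporting via DBMS_OUTPUT and the RAW buffer size are assumptions):

    PROCEDURE search_doc (in_id IN NUMBER, in_search IN VARCHAR2) IS
      lob_doc    BLOB;
      pattern    RAW(100);
      position   INTEGER;
      offset     INTEGER := 1;   -- start searching from the first byte
      occurrence INTEGER := 1;   -- look for the first occurrence
    BEGIN
      -- Convert the search string into raw format.
      pattern := UTL_RAW.CAST_TO_RAW(in_search);

      -- Fetch the document into a local BLOB for manipulation.
      SELECT doc_blob INTO lob_doc
      FROM my_docs WHERE doc_id = in_id;

      DBMS_LOB.OPEN(lob_doc, DBMS_LOB.LOB_READONLY);

      -- A non-zero POSITION means the pattern was found.
      position := DBMS_LOB.INSTR(lob_doc, pattern, offset, occurrence);

      IF position > 0 THEN
        DBMS_OUTPUT.PUT_LINE('Pattern found at position ' || position);
      ELSE
        DBMS_OUTPUT.PUT_LINE('Pattern not found');
      END IF;

      DBMS_LOB.CLOSE(lob_doc);
    END search_doc;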
The procedures that I have given you are very simplistic in nature and are
intended to be part of a larger application for managing external documents
within a database. They set up a directory where your documents live, load the
documents into the database, and then search for string patterns in the
document whose id is provided. I can see you removing the reliance on supplying
a document id and allowing the search to span multiple documents within your
library. Below I have given a brief description of how to use the code as is,
but feel free to modify it and integrate it into your own set of procedures.
How to Use
1. Log into your database of choice as the SYS user.
2. Compile the package:
   SQL> @mydocs.sql
3. SET SERVEROUTPUT ON
PROCEDURE doc_dir_setup IS
BEGIN
  -- Note the trailing space after AS; without it the two string
  -- fragments run together and the statement is invalid.
  EXECUTE IMMEDIATE
    'CREATE DIRECTORY DOC_DIR AS ' ||
    '''E:\jkoopmann\publish\databasejournal\Oracle''';
END doc_dir_setup;

BEGIN
  -- Package initialization section.
  DBMS_OUTPUT.ENABLE(1000000);
END mydocs;
/
So far, the trigger-happy DBA series has dealt with triggers related to Data
Manipulation Language (DML) operations. As mentioned in the beginning of the
series, another type of useful trigger is that of the system trigger. System triggers
can be delineated into two categories: those based on Data Definition Language
(DDL) statements, and those based upon database events. Use of system
triggers can greatly expand a DBA's ability to monitor database activity and
events. Moreover, after having read this article, you'll be able to sharp shoot
someone who asks, "How many triggers does Oracle have?" Most people will
seize upon the before/during/after insert/update/delete on row/table easy-to-
answer OCP test type of question (and answer), which is largely correct where
plain vanilla DML triggers are concerned. How would you count INSTEAD-OF
triggers when it comes to DML? So, how many other triggers does Oracle have
or allow?
The syntax for creating a system trigger is very similar to the syntax used for
creating DML triggers.
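As a skeleton (names are placeholders), a system trigger is declared on DATABASE or SCHEMA rather than on a table:

    CREATE OR REPLACE TRIGGER trigger_name
    AFTER LOGON ON DATABASE   -- or, e.g., BEFORE CREATE ON SCHEMA for a DDL event
    BEGIN
      NULL;  -- trigger body goes here
    END;
    /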
The number of system triggers available to the DBA is 11 under the short and simple
plan. If you want the deluxe version or variety, you can refer to the more than 20 system-
defined event attributes shown in Chapter 16 of the Oracle9i Application Developer's
Guide. In this month's article, we will look at the 11 "simple" triggers related to DDL and
database events. Let's identify these 11 triggers before going further.
Event        Timing
-----------  ------
STARTUP      AFTER
SHUTDOWN     BEFORE
SERVERERROR  AFTER
LOGON        AFTER
LOGOFF       BEFORE
Some common sense is in order here. If you wait long enough and visit enough
DBA-related question-and-answer web sites, inevitably, and amusingly, you will
see the question about, "I'm trying to create a trigger that does X and Y before a
user logs on – how do I do that?" That may be possible in some future version of
Oracle - the version that senses when you are about to log on? - but don't hold
your breath waiting for that release date!
So, the BEFORE and AFTER timing part really matters for the system events
shown above, and you have a bit more flexibility with respect to the DDL
statements. You will also notice there are no DURING's for the "when" part, and
perhaps not so obvious, there are no INSTEAD-OF triggers for system events.
What about TRUNCATE statements, you ask. TRUNCATE is a DDL statement,
but unfortunately, Oracle does not capture this event (as far as triggers are
concerned).
Let's construct a simple auditing type of trigger that captures a user's logon
information. With auditing in mind, as with when to fire a DML type of trigger,
timing matters. Maybe you have a sensitive database, one where you need to
capture a user's session information. It would be more appropriate to capture a
user's access/logon to the sensitive database immediately after that user logs on
as opposed to capturing the session information before logging off. Who says
you have to log off gracefully? If a session is abnormally terminated, does a
BEFORE LOGOFF trigger fire?
As a side note, how can you obtain session information? One way is to use the
SYS_CONTEXT function. Oracle's SQL Reference manual lists 37 parameters
you can use with SYS_CONTEXT to obtain session information. See
http://download-
west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions122a.htm#1038178 for more
information regarding this feature.
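The table and trigger behind the feedback below are not reproduced in this excerpt. A sketch consistent with the description (the trigger name trg_session_info appears later in the article; the audit table name and columns are assumptions) might be:

    CREATE TABLE session_info_audit (
      username   VARCHAR2(30),
      host       VARCHAR2(64),
      ip_address VARCHAR2(30),
      audit_date DATE
    );

    CREATE OR REPLACE TRIGGER trg_session_info
    BEFORE LOGOFF ON DATABASE
    BEGIN
      -- Capture session details via SYS_CONTEXT's USERENV namespace.
      INSERT INTO session_info_audit
      VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'),
              SYS_CONTEXT('USERENV', 'HOST'),
              SYS_CONTEXT('USERENV', 'IP_ADDRESS'),
              SYSDATE);
    END;
    /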
Table created.
Trigger created.
Looks like the trigger worked, twice in fact, because when I left to connect as
Scott, the newly created trg_session_info trigger captured my information as well.
Let's go back as Scott, and have Scott's session abnormally terminated. Does
the BEFORE LOGOFF trigger capture Scott's session information?
The answer is no. In this particular case, Scott's session was terminated by
clicking on the Windows close button. Assuming Scott is a malicious user, he
could have viewed salaries and other personal information with some degree of
obscurity. Use of an AFTER LOGON trigger would have immediately captured
his session information. Of course, this trigger, in and of itself, is not sufficient
to fully protect access to sensitive information, but it is a means of letting
potentially malicious (or snooping) users know they are being watched or
monitored. Locks on doors help keep honest people honest, so the saying goes.
Same idea here.
Changing the trigger to AFTER LOGON yields the following results with Scott
logging on, and his session being terminated:
7 rows selected.
Look at the last four rows – aside from the date (the times would be different per
user), it looks like two rows per user. This is an example of two things. First, do
not forget to clean up after yourself (remove unnecessary triggers), and second,
maybe you will want to capture the AFTER LOGON and BEFORE LOGOFF
times (of course, it would be hard to change machines in the middle of a
session!).
Another use of a system trigger may help you (as the DBA) identify users in need
of some help when it comes to forming SQL queries. The SERVERERROR
system event, when combined with the CURRENT_SQL parameter in the
SYS_CONTEXT function, can flag or identify users who frequently make
mistakes. The CURRENT_SQL parameter "returns the current SQL that
triggered the fine-grained auditing event. You can specify this attribute only
inside the event handler for the Fine-Grained Auditing feature." (From the SQL
Reference manual) Even without FGAC, you can set up a simple trigger-audit
table relationship as follows:
Table created.
Trigger created.
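The listing behind this feedback is not reproduced here; a sketch of such a trigger-audit pair might be as follows. Table and column names are assumptions, and note that, per the SQL Reference passage quoted above, CURRENT_SQL is documented for fine-grained auditing handlers, so treat its use here as an experiment rather than a guaranteed technique.

    CREATE TABLE error_audit (
      username   VARCHAR2(30),
      error_sql  VARCHAR2(4000),
      audit_date DATE
    );

    CREATE OR REPLACE TRIGGER trg_server_error
    AFTER SERVERERROR ON DATABASE
    BEGIN
      -- Record who raised the error and what they were running.
      INSERT INTO error_audit
      VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'),
              SYS_CONTEXT('USERENV', 'CURRENT_SQL'),
              SYSDATE);
    END;
    /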
There are a great many things you can do with triggers, whether they are based
on DML statements or system events. As a developer or DBA (or both), there is
no such thing as having too many tricks up your sleeve. In terms of job or role
separation, you can think of the DML triggers as being in the purview of the
developer, and the system event triggers being in the DBA's, but a good DBA
should possess some decent programming skills of his or her own, and that's
where knowing how to avoid problems with DML triggers comes into play. Being
and staying well-informed on the use (and limitations) of triggers will make you a
trigger-happy DBA.
It is worthwhile to note that this error occurs not only in the "pure" database
development environment (CREATE or REPLACE trigger trigger_name… in a
script or SQL*Plus session), but also in the Oracle tools type of development
environment such as Oracle Forms. An Oracle form relies on triggers for a great
many things, ranging from capturing user interaction with the form (when-button-
pressed) to performing transaction processing (on-commit). A forms trigger may
do nothing more than change the focus to a new item or show a new canvas.
What a form trigger can do, and has in common with the "pure" development
type of trigger, is generate the ORA-04091 mutating table error.
One common solution to avoid the mutating table error is to use three other
triggers. Tom Kyte, author of Expert One-on-One Oracle and Effective Oracle by
Design, two of the very best books on Oracle, provides an excellent example of
this technique at http://osi.oracle.com/~tkyte/Mutate/index.html (part of the Ask Tom
series at www.oracle.com). Another solution relies on using an INSTEAD-OF trigger
instead of the trigger you meant to use when you received the error. Another
solution is actually more of a preventative measure, namely, using the right type
of trigger for the task at hand.
Here is a simple example of where a trigger can generate the mutating table
error. The hapless Oracle user named Scott wants to generate a statement
telling him how many employees are left after an employee record is deleted.
The code for this example comes from Oracle's Application Developer's Guide.
EMPNO ENAME
---------- ----------
7369 SMITH
7499 ALLEN
7521 WARD
7566 JONES
7654 MARTIN
7698 BLAKE
7782 CLARK
7788 SCOTT
7839 KING
7844 TURNER
7876 ADAMS
7900 JAMES
7902 FORD
7934 MILLER
14 rows selected.
In a SQL*Plus session, here is what the coding looks like (same as above, but
without the feedback statement):
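(The original listing is not reproduced in this excerpt; the following row-level trigger is a plausible reconstruction of the Application Developer's Guide example the article describes — the exact published listing may differ.)

    CREATE OR REPLACE TRIGGER emp_count
    AFTER DELETE ON Emp_tab
    FOR EACH ROW
    DECLARE
      n INTEGER;
    BEGIN
      -- Querying Emp_tab while it is being modified raises ORA-04091.
      SELECT COUNT(*) INTO n FROM Emp_tab;
      DBMS_OUTPUT.PUT_LINE('There are now ' || n || ' employees.');
    END;
    /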
Trigger created.
Any hint so far that there may be a problem with the trigger? Not with the
"Trigger created" feedback Oracle provides. Looks like no errors and that the
trigger should fire when the triggering condition (after delete on the Emp_tab
table) occurs.
Let's modify the trigger code just a bit, and remove the FOR EACH ROW clause.
If we are not "doing" each row, the trigger becomes a statement-level trigger.
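A statement-level version of the same trigger might look like this (again a reconstruction, not the original listing):

    CREATE OR REPLACE TRIGGER emp_count
    AFTER DELETE ON Emp_tab
    -- No FOR EACH ROW: fires once per DELETE statement, after it completes,
    -- so Emp_tab is no longer mutating when we query it.
    DECLARE
      n INTEGER;
    BEGIN
      SELECT COUNT(*) INTO n FROM Emp_tab;
      DBMS_OUTPUT.PUT_LINE('There are now ' || n || ' employees.');
    END;
    /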
1 row deleted.
Note that the trigger successfully fired with this one modification. But was it really
a modification or just a better design and use of a trigger? As stated in the
previous article, triggers can act on each row or act at the statement level. In
Scott's case, what he really needed was a statement-level trigger, not a row-level
trigger. Mastering this concept alone - knowing whether to base the trigger on the
statement or on rows – can prevent many instances of the mutating trigger error.
Table created.
SQL> select * from trigger_example_table;
14 rows selected.
Trigger created.
No, because it is the same problem as before (touching a table that is being
updated). This example just reinforces the idea that the mutating trigger error still
occurs on a table based on other tables (which is still just a table as far as Oracle
is concerned). Another name for the concept of presenting data based on a
combination (i.e., a join) of other tables? Straight out of the Concepts Guide: "A
view is a tailored presentation of the data contained in one or more tables or
other views." The Application Developer's Guide (yes, this is a plug for Oracle's
documentation) presents a good example of how to construct an INSTEAD-OF
trigger. You can copy the sample code shown in the guide and experiment with
using various DML statements against the view.
There are exceptions to the general rule that views are inherently updateable. The
exceptions (or restrictions) include views that use aggregate functions; group
functions; use of the DISTINCT keyword; use of GROUP BY, CONNECT BY or
START WITH clauses; and use of some joins. In many cases, use of the
INSTEAD-OF trigger feature allows you to work around these restrictions.
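A minimal sketch of the idea, with view and table names invented for illustration (a join view is not directly insertable, so the trigger routes the DML to a base table):

    CREATE VIEW dept_emp_v AS
      SELECT d.deptno, d.dname, e.empno, e.ename
      FROM dept d JOIN emp e ON e.deptno = d.deptno;

    CREATE OR REPLACE TRIGGER dept_emp_v_ins
    INSTEAD OF INSERT ON dept_emp_v
    FOR EACH ROW
    BEGIN
      -- Route the insert to the appropriate underlying table.
      INSERT INTO emp (empno, ename, deptno)
      VALUES (:NEW.empno, :NEW.ename, :NEW.deptno);
    END;
    /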
INSTEAD-OF triggers are also useful for Forms developers because forms are
commonly based on views. The INSTEAD-OF trigger, being a "real" trigger, and
not a true form trigger, is stored on the server. This may require coordination
between the DBA and developer, which, of course, always happens in complete
harmony (NOT! – but that is a separate issue).
Everyone needs a little help now and then. If you have never used Oracle's help
facility, venture with me and find new ways you can provide benefit to your users
of SQL*Plus through this simple interface.
Invoking Help
HELP [topic]
In order to get the appropriate help information, you need only issue the HELP
(or '?') command at the SQL*Plus command line, followed by the command or
subject matter you need help on. If you do not know what you want, or just want
to see what is available, supply the keyword 'TOPICS' or 'INDEX' as the topic
and get a listing of everything available through HELP.
Not only can you supply a single topic for the HELP command, you may also
supply an abbreviated topic. If the abbreviated topic also covers multiple topic
areas, all of the topics will be reported. For example, if I supplied the topic 'H',
under the base installation of HELP, I would get results for both HELP and
HOST.
SQL> ? H
HELP
----
Accesses this command line help system. Enter HELP INDEX for a list
of topics.
In iSQL*Plus, click the Help button to display iSQL*Plus help.
HELP [topic]
HOST
----
HO[ST] [command]
Possible errors you may encounter are an indication of the HELP facility not
being installed or an invalid topic.
Web Reports from SQL*Plus in Oracle 8i/9i
Ajay Gursahani, [email protected]
SQL*Plus provides you with a command, SET MARKUP HTML ON SPOOL ON,
which is used to produce HTML pages automatically. You can view these pages
using any web browser.
SET MARKUP HTML ON SPOOL ON only specifies that the SQL*Plus
output will be HTML encoded; it does not create or begin writing to an output
file until you issue SPOOL <filename>. The file will then have HTML tags,
including <HTML> and </HTML>.
You have to use SPOOL OFF to close the spool file and issue SET MARKUP
HTML OFF to disable HTML output.
Example:
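The example commands are not shown in this excerpt; a session along the lines described would be (the table name TEST and its contents are assumptions):

    SET MARKUP HTML ON SPOOL ON
    SPOOL test.html
    SELECT * FROM test;
    SPOOL OFF
    SET MARKUP HTML OFF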
The above commands will generate an HTML file "test.html" which you can view
using a web browser. A sample output is as below:
1 ANDY 4500
2 ALAN 3500
3 JACK 3600
4 PETER 4000
5 JOE 2900
Please note that the SQL query (SELECT * FROM test;) and SPOOL OFF are
also a part of "test.html".
<html>
<head>
<title>
SQL*Plus Report
</title>
<meta name="generator" content="SQL*Plus 9.0.1">
</head>
<body>
However, you can generate an HTML file by executing the following script from
the OS level, which will suppress the SQL commands from spooling.
test.sql
"-m" uses HTML markup options. It allows you to start the SQL*Plus session in
'markup mode', rather than using the SET MARKUP command interactively (as we
did in the above example).
"-s" uses silent mode
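The script and invocation themselves are not reproduced here; under the description above they would look something like the following (the connect string is a placeholder, and the exact quoting of the -m argument may vary by platform):

    -- test.sql
    SELECT * FROM test;
    EXIT

    REM Invoked from the OS prompt, redirecting the silent-mode output:
    REM sqlplus -s -m "HTML ON" scott/tiger @test.sql > test.html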
test.html
1 ANDY 4500
2 ALAN 3500
3 JACK 3600
4 PETER 4000
5 JOE 2900
Note that the SQL commands are no longer displayed as part of html.
Summary
The above article gives a basic understanding of how to generate HTML reports
using the MARKUP option in SQL*Plus. You can extend this exercise by writing
CGI scripts that generate web reports.
This article gives a brief understanding of external tables. External tables
are defined as tables that do not reside in the database and can be in any
format for which an access driver is provided. An external table definition can
be thought of as a view that allows running any SQL query against external data
without requiring that the external data first be loaded into the database.
You can, for example, select, join, or sort external table data. You can also
create views and synonyms for external tables. However, no DML operations
(UPDATE, INSERT, or DELETE) are possible, and indexes cannot be created on
external tables.
Oracle provides the means of defining the metadata for external tables through
the CREATE TABLE ... ORGANIZATION EXTERNAL statement.
Before issuing the above command, we need to create a directory object pointing
to where the external files will reside.
Example: The example below describes how to create external files, create
external tables, query external tables and create views.
101,Andy,FINANCE,15-DEC-1995
102,Jack,HRD,01-MAY-1996
103,Rob,DEVELOPMENT,01-JUN-1996
104,Joe,DEVELOPMENT,01-JUN-1996
105,Maggie,FINANCE,15-DEC-1997
106,Russell,HRD,01-MAY-1998
107,Katie,DEVELOPMENT,01-JUN-1998
108,Jay,DEVELOPMENT,01-JUN-1998
Step II: Create a Directory Object where the flat files will reside
Directory created.
Table created.
The REJECT LIMIT clause specifies that there is no limit on the number of errors
that can occur during a query of the external data.
ORACLE_LOADER is the access driver used to load data from the external
files into the table.
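The statements behind Steps II and III are not reproduced in this excerpt. Based on the description and the sample data above, they would look something like the following sketch — the directory path, table and column names are assumptions, and parsing of the DD-MON-YYYY dates relies on the session's NLS_DATE_FORMAT:

    CREATE OR REPLACE DIRECTORY ext_tables AS '/u01/app/oracle/ext_tables';

    CREATE TABLE emp_ext (
      empno     NUMBER(4),
      ename     VARCHAR2(20),
      deptname  VARCHAR2(20),
      hiredate  DATE
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_tables
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
        MISSING FIELD VALUES ARE NULL
      )
      LOCATION ('emp.dat')
    )
    REJECT LIMIT UNLIMITED;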
8 rows selected.
Step V: Creating Views
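The view created in this step is not shown in this excerpt; it might be as simple as (the view name and filter are assumptions):

    CREATE VIEW emp_ext_v AS
      SELECT ename, deptname
      FROM emp_ext
      WHERE deptname = 'DEVELOPMENT';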
You can get the information of the objects you have created through
DBA_OBJECTS, ALL_OBJECTS or USER_OBJECTS.
OBJECT_NAME OBJECT_TYPE
---------------------- ------------------
EMP_EXT TABLE
1 row selected.
OBJECT_NAME OBJECT_TYPE
---------------------- ------------------
EXT_TABLES DIRECTORY
1 row selected.
You can populate data from external files using an "insert into … select from"
statement instead of using SQL*Loader. This method provides very fast data
loads.
Example:
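A load along these lines, assuming an ordinary table EMP with the same shape as EMP_EXT, is just:

    INSERT INTO emp
      SELECT * FROM emp_ext;

    COMMIT;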
8 rows created.
8 rows selected.
For an external table, the DROP TABLE statement removes only the table
metadata in the database. It has no effect on the actual data, which resides
outside of the database.
Summary
External tables thus appear in the data dictionary and can be queried as
you would query ordinary Oracle tables. You can also perform fast data loads
using the above method instead of using SQL*Loader.