Rampant Advanced Oracle DBMS Packages
Advanced Oracle
DBMS Packages
The Definitive Reference
Paulo Portugal
I dedicate this book to my parents and siblings, who have always supported me in my professional life.
Thanks also to my friends Michael Hell and Gabriel Rosales, and to the Burleson team, for their valuable help
on this project.
In particular, I dedicate this book to my daughter Maria Clara and my dear wife Simone who are
the people who bring me balance, motivation, inspiration, joy and an eternal desire to live alongside
them.
Paulo Portugal
Advanced Oracle DBMS Packages
The Definitive Reference
By Paulo Portugal
Table of Contents
Transport Tablespaces between Databases Using RMAN and
dbms_streams_tablespace_adm Package..........................................................................295
Package dbms_tdb....................................................................................................................... 299
Package dbms_tts......................................................................................................................... 301
How to Check if a Tablespace is Transportable or Not.............................................. 302
Summary..................................................................................................304
Chapter 7: Oracle Management and Monitoring................................ 305
Package dbms_alert.................................................................................................................... 305
Package dbms_auto_task_admin.......................................................................................... 307
Package dbms_comparison........................................................................................................312
Package dbms_db_version ........................................................................................................ 316
Package dbms_ddl...................................................................................................................... 319
Package dbms_debug..................................................................................................................327
Package dbms_describe.............................................................................................................. 331
Package dbms_hm ...................................................................................................................... 334
Package dbms_java .................................................................................................................... 340
Package dbms_job .......................................................................................................................342
Package dbms_ldap .................................................................................................................... 346
Package dbms_metadata........................................................................................................... 350
Package dbms_output................................................................................................................ 354
Package dbms_pipe .................................................................................................................... 357
Package dbms_preprocessor ...................................................................................................... 360
Package dbms_resconfig ............................................................................................................ 362
Package dbms_resource_manager and dbms_resource_manager_privs ..........................364
Package dbms_resumable.......................................................................................................... 373
Package dbms_scheduler........................................................................................................... 378
Package dbms_server_alert...................................................................................................... 384
Package dbms_session................................................................................................................ 388
Package dbms_shared_pool ..................................................................................................... 393
Package dbms_utility................................................................................................................. 398
Package dbms_warning ............................................................................................................ 403
Package debug_extproc............................................................................................................. 408
Package dbms_random ............................................................................................................. 410
Summary..................................................................................................411
Chapter 8: Oracle Data Warehouse.......................................................412
Chapter 12: Oracle HTML DB and XML DB .....................................525
Introduction.............................................................................................525
Package htmldb_custom_auth ............................................................................................... 525
Package htmldb_item ............................................................................................................... 530
Package htmldb_util..................................................................................................................534
Package dbms_xdb ................................................................................................................... 538
How to Check Current XDB Tablespace and Change It.........................................545
Package dbms_xdbt.................................................................................................................. 545
How to Create Indexes for XML DB ............................................................................ 546
Package dbms_xdbz ..................................................................................................................547
Packages..........................................................................................................................................550
Packages dbms_xmlgen and dbms_xmlquery................................................................. 555
Tips for Using dbms_xmlgen............................................................................................. 559
Packages dbms_xmlsave and dbms_xmlstore................................................................. 561
Package dbms_xmlschema...................................................................................................... 563
Summary................................................................................................. 565
Book Summary........................................................................................566
Index....................................................................................................... 567
About the Author................................................................................... 573
Technical Editor.....................................................................................574
Acknowledgements
This type of highly technical reference book requires the dedicated efforts of many
people. Even though I am the author, my work ends when I deliver the content.
After each chapter is delivered, several Oracle DBAs carefully review and correct the
technical content. After the technical review, experienced copy editors polish the
grammar and syntax.
The finished work is then reviewed as page proofs and turned over to the production
manager, who arranges the creation of the online code depot and manages the cover
art, printing, distribution and warehousing.
Paulo Portugal
Preface
This book is geared to present the main packages used in the Oracle Database and
their usage by database administrators and developers. Like many of you, throughout
my career I have faced difficulties in finding information about the Oracle DBMS
packages. In the course of this relentless and exhausting pursuit, I have collected the
most useful packages for this book. I’m also providing practical examples on the
application of these tools (which are also notoriously difficult to find).
Over more than 30 years, Oracle has slowly evolved into 11g, and with each new version of
Oracle, new DBMS packages are created to ease the titanic job of administering the
database. Many of those DBMS packages are extremely specialized and do not
become well-known to Oracle professionals. Other DBMS packages are underdocumented,
requiring creativity to find out how they work.
Many of the graphic tools of database management like OEM and Oracle Grid
Control, make use of the DBMS packages. However, most modifications made
available by these graphic tools run in the background, and therefore the
user ends up not knowing exactly what is going on in the inner workings of their
Oracle database.
This is only one reason why a senior DBA never relies on a GUI and takes the time to
understand the command-line interfaces to all of the important DBMS packages.
But even when a graphic tool exists to replace a package, running a "UI Wizard"
application against a mission-critical production database is never advisable. Directly
calling a DBMS package is often the best way, and sometimes the only way, to
perform an important task as a "best practice".
Among the advantages of running DBMS packages from SQL*Plus is that they
provide complete information to the DBA and developers of all of the intermediate
stages of a procedure. Moreover, direct invocation of the DBMS packages allows us
to quickly and easily access or alter database information without the use of flawed or
insecure graphic tools, or when a GUI tool is simply not available.
Another very important advantage is that DBMS packages can be combined and
some of them can also be edited, serving as a basis to perform extremely complex
tasks; for example, when the functionality needed is not exactly the same but is close
to what one package offers by itself. That is something usually not possible with most
GUI tools.
Nowadays, the speed at which Oracle professionals must do their tasks as well as the
excess of information they are exposed to prevents them from learning all the
packages and their different usage alternatives by heart.
Therefore, by organizing the book by job function, I hope that readers will be able
to use it as a practical guide to learn about the packages, see which packages are the
most useful to them, and keep it as a reference book on their desks, hopefully
making their grueling daily routines a little easier!
I am always looking to improve the content of this book, so if you find any packages
that need additional information, or any errata, I'd love to correct it in a future edition.
Oracle Packages
In this first chapter, we will review the history of Oracle packages and describe the
advantages and benefits of their use by database administrators and application
developers.
The PL/SQL language was introduced in Oracle 6 and has its origins in the third-generation
language syntax of the Ada and Pascal languages. Despite the creation of
PL/SQL language version 1.0 as an Oracle 6 option, it was not until Oracle 7 with
PL/SQL version 1.1 that many of PL/SQL's more representative characteristics
were introduced, including features such as PL/SQL packages, functions,
procedures, user-defined record types and PL/SQL tables.
Before the introduction of the PL/SQL language, the only way to use procedural
constructs with SQL was through the Pro*COBOL, Pro*FORTRAN and Pro*C
precompilers, where the procedural logic was coded in the host language and Oracle
SQL instructions could be embedded.
At compile time, the entire code had to be precompiled to interpret the Oracle SQL
instructions and convert them into native language library calls. The precompiler
then created a file, written entirely in the host language, which finally could be
compiled using the regular language compiler.
With the advent of PL/SQL, SQL commands could be placed inside a PL/SQL block
that ran inside the same Oracle server with minimum overhead.
With PL/SQL, it became possible to recompile one package body without the need to
recompile the package specification, resulting in less impact to the database. Also, by
placing SQL inside PL/SQL stored procedures and functions that use bind
variables, SQL statements can reside inside the library cache of Oracle and be fully
reentrant, avoiding repeated hard parsing.
Oracle uses a signature generation algorithm to assign a hash value to each SQL
statement based on the characters in the SQL statement. Any change in a statement
(generally speaking) will result in a new hash and thus Oracle assumes it is a new
statement. Each new statement must be verified, parsed and have an execution plan
generated and stored, all high overhead procedures.
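This behavior can be observed directly. Here is a sketch (the emp demo table and access to v$sql are assumptions; exact results vary by database):

```sql
-- Two statements that differ only in a literal hash to different
-- values, so each gets its own library cache entry and its own
-- hard parse:
select ename from emp where empno = 7369;
select ename from emp where empno = 7499;

-- The same text with a bind variable always hashes to the same
-- value, so a single shared cursor is parsed once and reused:
variable v_empno number
exec :v_empno := 7369
select ename from emp where empno = :v_empno;

-- Count the resulting library cache entries (requires v$sql access):
select sql_text, executions
from   v$sql
where  sql_text like 'select ename from emp where empno%';
```

The literal statements each appear as a separate v$sql entry, while the bind-variable version shows one entry with its executions count incrementing on each run.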
When SQL is placed within PL/SQL, the embedded SQL never changes and a single
library cache entry will be maintained and searched, greatly improving the library
cache hit ratio and reducing parsing overhead. Here are some particularly noteworthy
advantages of placing SQL within Oracle stored procedures and packages:
■ High productivity: PL/SQL is a language common to all Oracle environments.
Developer productivity is increased when applications are designed to use
PL/SQL procedures and packages because it avoids the need to rewrite code.
Also, the migration complexity to different programming environments and
front-end tools will be greatly reduced because Oracle process logic code is
maintained inside the database with the data, where it belongs. The application
code becomes a simple “shell” consisting of calls to stored procedures and
functions.
■ Improved Security: Making use of the “grant execute” construct, it is possible
to restrict access to Oracle, enabling the user to run only the commands that are
inside the procedures. For example, it allows an end user to access one procedure
that deletes from one particular table instead of granting the delete
privilege directly to the end user. The security of the database is further improved
since you can define which variables, procedures and cursors will be public and
which will be private, thereby completely limiting access to those objects inside
the PL/SQL package. With the “grant” security model, back doors like
SQL*Plus can lead to problems; with “grant execute” you force the end-user to
play by your rules.
■ Application portability: Every application written in PL/SQL can be
transferred to any other environment that has the Oracle Database installed
regardless of the platform. Client programs that contain no embedded PL/SQL
or SQL become "database agnostic" and can be moved to other platforms
without changing a single line of code.
■ Code Encapsulation: Placing all related stored procedures and functions into
packages allows for the encapsulation of stored procedures, variables and
datatypes in one single program unit in the database, making packages perfect for
code organization in your applications.
■ Global variables and cursors: Packages can have global variables and cursors
that are available to all the procedures and functions inside the package.
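As a sketch of these ideas (the package name, variables and grantee below are illustrative, not from the book's code depot), a minimal package with a public procedure, a private helper and a global variable might look like this:

```sql
create or replace package pkg_demo as
   -- Global variable: visible to every subprogram in the package,
   -- and to callers, for the life of the session.
   g_call_count number := 0;

   procedure log_message(p_text in varchar2);
end pkg_demo;
/

create or replace package body pkg_demo as
   -- Private procedure: declared only in the body, so it cannot
   -- be called from outside the package.
   procedure stamp(p_text in varchar2) is
   begin
      dbms_output.put_line(to_char(sysdate,'HH24:MI:SS') || ' ' || p_text);
   end stamp;

   procedure log_message(p_text in varchar2) is
   begin
      g_call_count := g_call_count + 1;
      stamp(p_text);
   end log_message;
end pkg_demo;
/

-- The "grant execute" security model: the user runs only what the
-- package exposes, with no direct privileges on underlying objects.
grant execute on pkg_demo to scott;
```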
Package Internals
During the installation of Oracle, several built-in DBMS packages are included in
order to extend Oracle’s core functionality. These packages are referred to as built-in
packages. The built-in packages are installed by the scripts catproc.sql and catalog.sql
located in the directory $ORACLE_HOME/rdbms/admin. If Oracle Universal
Installer is used to install the database (Oracle's recommended method), the installer
will run the built-in scripts automatically. These scripts call many other individual
scripts that refer to each package that is being created in the database.
When the PL/SQL engine resides inside the Oracle server, the PL/SQL blocks of a
stored procedure or function are sent to that server-side engine for execution. When
the PL/SQL engine is located on the client side, as with some client tools, the
PL/SQL processing is done at the client.
In that case, all the SQL commands inside the PL/SQL block are sent to the Oracle
server for processing, but the PL/SQL logic is processed on the client, thus
diminishing the server overhead. When a PL/SQL block does not contain
any SQL statements, it can be processed entirely on the client side.
There is also another important area, the private SQL area, which holds information
specific to each user session: Oracle allocates one private SQL area per session to
store that session's specific values.
However, the code itself is stored in the shared SQL area. When more than one user
runs the same program unit, a single, shared copy of the unit will be used by both
users. This enables Oracle not to waste memory, allowing the database to perform
more efficiently.
The private SQL area is always stored in an area named User Global Area (UGA).
However, the location of the UGA depends on the type of connection that is
established with the database.
At the time of the creation of a PL/SQL package, Oracle automatically stores the
following information in the database: the name of the object in the schema, the
source code, the parse tree and the error messages. The storage of this information
inside Oracle avoids unnecessary re-compilations being made in the database.
In the process of running a procedure within a package, Oracle checks the security to
see if the user that is running the procedure is the owner of the package or whether he
has permission to run this package or procedure. Next, Oracle checks the data
dictionary so as to verify whether the package is valid or invalid and, finally, Oracle
executes the procedure.
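The validity check described above can be observed in the data dictionary. A sketch (assuming access to dba_objects; note that a package has separate entries for its specification and its body):

```sql
-- List any packages whose specification or body is currently invalid:
select owner,
       object_name,
       object_type,
       status
from   dba_objects
where  object_type in ('PACKAGE', 'PACKAGE BODY')
and    status = 'INVALID';
```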
In order to run a select command on a view defined with a PL/SQL function, the
user needs only select access on the view; it is not necessary to have execute
access on the underlying function.
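A sketch of this rule (all object names here are illustrative, not from the book):

```sql
-- A function owned by the view's owner:
create or replace function f_masked_salary(p_sal in number)
   return number is
begin
   return round(p_sal, -3);  -- mask the exact salary
end;
/

-- A view defined with that PL/SQL function:
create or replace view v_emp_masked as
select ename,
       f_masked_salary(sal) as sal
from   emp;

-- Select access on the view is all the end user needs; no
-- "grant execute on f_masked_salary" is required:
grant select on v_emp_masked to scott;
```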
Invalid packages
A package may become invalid for various reasons, such as in the following cases:
■ Table Invalid: If one or more objects referred to by the procedure or package
has been altered, the procedure will be marked as invalid. For example, this can
happen when a referenced table is dropped and recreated.
After a brief description of the relevant internal aspects, we will proceed with a
presentation of the specific packages which are the central point of this book.
Table, Index and Tablespace Management
One of the primary jobs of the Oracle DBA is to manage the data as it resides in rows
within tables; the data which, in turn, reside within our tablespaces.
The DBA is also responsible for managing indexes, ensuring that the SQL workload
has all of the indexes that are required for optimal query completion, and deploying
indexes as a shortcut to fetch the rows with the least amount of database I/O and
resource consumption.
Traditionally, the DBA relied solely on the SQL*Plus command line interface, and it
has only been recently that Oracle has provided DBMS packages to make it easier to
manage table, tablespace and index storage. Oracle also offers the Oracle
Enterprise Manager Grid Control GUI (called OEM), but it is extremely limited when
compared to the command-line invocation.
This chapter will show us useful DBMS packages for managing tables, indices and
tablespaces.
The beauty of the Oracle DBMS packages is that the end user does not need
detailed knowledge of the inner workings of the procedure being executed.
Package dbms_errlog
When doing a batch insert you may receive data from a host of external locations.
While it’s nice to assume that the data has been scrubbed and validated, there is
always a chance that you will have invalid numeric and character data. The problem is
how to deal with large volumes of errors and that is what the dbms_errlog package does
for us.
To do this, dbms_errlog creates a table called an “error log” table. Any records not
processed by the DML operation due to errors will be inserted into this table allowing
any problems in the operation to be analyzed and fixed later on.
When doing massive DML operations, problems like these may arise:
■ Data values that are too large for the column (e.g. inserting 40 characters into a
varchar2(20)).
■ Partition mapping errors happen (No partition exists)
■ Errors during triggers execution occur (mutating table error)
■ Constraint violations (check, unique, referential and NOT NULL constraints)
occur
■ Type conversion errors (numeric with alpha characters, invalid dates) happen
For these cases, the dbms_errlog package can be used to create a table that will store
details about all DML operations that present errors.
The following script demonstrates its use. DML errors are simulated, and the
failed rows are inserted into the log table that was created for the
package under analysis. First, the main user is created, which will be used throughout
this book.
Note: This script will create a DBA user with a weak password, which is not
recommended for any production environment.
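The user-creation script itself is not reproduced here; a minimal sketch consistent with the connect strings used later in the chapter (pkg/pkg#123) would be the following, which is exactly the weak-password DBA grant the note warns about:

```sql
create user pkg identified by "pkg#123";
grant dba to pkg;
```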
Next, a test table is created. Just for fun, we decided to name it tb_dbms_errlog as that is
the name of the package that is being studied. Finally, our test table is also given a
primary key.
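The create table statement is elided in this extract; since the later inserts copy rows from dba_objects, a plausible sketch (an assumption, not necessarily the book's exact DDL) is:

```sql
-- Empty copy of dba_objects' structure:
create table tb_dbms_errlog as
select * from dba_objects where 1 = 0;
```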
alter table
   tb_dbms_errlog
add constraint
   pk_obj_id10 primary key (object_id);
After executing this DDL, our error log table is created using the package dbms_errlog
and rows are inserted to simulate constraint errors.
To create an error log table, we specify both the name and location where it should be
created as well as the name of the table it is intended to deal with, i.e. the table whose
DML will be logged into it.
exec dbms_errlog.create_error_log( -
   dml_table_name      => 'tb_dbms_errlog', -
   err_log_table_name  => 'tb_log', -
   err_log_table_owner => 'pkg', -
   err_log_table_space => 'users');
Now, we insert rows into the tb_dbms_errlog table, logging any errors. We also specify
an optional tag that can be used to identify errors more easily, and an unlimited reject
limit to ensure the operation succeeds no matter how many records present errors.
select
count(*)
from
tb_dbms_errlog;
COUNT(*)
0
insert into
   tb_dbms_errlog
select
   *
from
   dba_objects
log errors into
   tb_log('tag_27042009_1')
reject limit unlimited;

commit;
Commit complete.
select
count(*)
from
tb_dbms_errlog;
COUNT(*)
49742
Next, we select data from the tb_log table and confirm that no errors exist.
select
count(*)
from
tb_log;
COUNT(*)

0
Next, we delete some rows from the test table. This will be needed in the next step to
simulate some records failing because of the primary key constraint while the others
are successfully inserted.
delete
from
tb_dbms_errlog
where
object_id between 2354 and 4598;
commit;
Commit complete.
Now, let’s insert all the rows again using the log errors into table_name syntax. As some
rows exist with the same object_id, some errors will be generated.
insert into
tb_dbms_errlog
select
*
from
dba_objects
log errors into
   tb_log('tag_27042009_1')
reject limit unlimited;
commit;
Commit complete.
Now it is possible to check errors generated by the insert command in our error log
table. In this example, the ROWNUM clause was used to return less than 10 rows.
select
   t.ora_err_mesg$ "err msg",
   t.ora_err_optyp$ "err type",
   t.ora_err_tag$ "err tag",
   t.object_id "obj id"
from
   tb_log t
where
   rownum < 10;
Package dbms_iot
Oracle index-organized tables (IOTs) are a unique style of table structure, an
alternative to the traditional "heap structure" tables. An IOT is equivalent to the
highly normalized fourth-normal-form (4NF), where every row in the table is indexed.
Whenever all rows in a table are indexed, the table itself becomes redundant and the
entire table data can be stored within the B-tree index structure.
Besides storing the primary key values of an Oracle index-organized table's rows,
each index entry in the B-tree also stores the non-key column values.
[Figure: B-tree structure of an index-organized table, with key and non-key column values stored in the leaf blocks]
Oracle index-organized tables provide faster access to table rows by the primary
key or any key that is a valid prefix of the primary key. Because the non-key columns
of a row are all present in the B-tree leaf block itself, there is no additional block
access for index blocks.
This improves I/O, especially when the IOT is placed in a tablespace with a 32k
blocksize.
For the installation of dbms_iot, it is necessary to execute the script dbmsiotc.sql. This
script can be found in the $ORACLE_HOME/rdbms/admin directory. This package
contains two procedures:
■ build_chain_rows_table
■ build_exceptions_table
In order to identify chained rows in an IOT (index-organized table), we use
the analyze command together with the build_chain_rows_table procedure. The
procedure build_chain_rows_table creates a table to hold information about these chained rows.
Finding and repairing chained rows is an important part of the Oracle administration.
When an Oracle row expands, it sometimes chains onto multiple data blocks.
Note: Excessive row chaining can cause a dramatic increase in disk I/O because
several I/Os are required to fetch the row instead of one single I/O.

This extra disk I/O dramatically affects performance.
Chained rows are a symptom of suboptimal IOT settings, where not enough room
has been left on the data block for the rows to grow. You must also ensure that the
blocksize is larger than the largest row length for the IOT entries.
After the first fix, the DBA is expected to avoid future fragmentation by fixing
the root cause of the chaining and row migration.
Some articles about how to prevent and monitor chained rows can be found at
www.dba-oracle.com.
Here is an example showing a table that contains chained rows. First, the IOT table is
created and rows are inserted.
Code 2.2 - dbms_iot.sql
conn pkg/pkg#123

Connected.

Table created.
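The create table statement is elided above; a sketch of an IOT matching the inserts that follow (the column datatypes and sizes are assumptions) might be:

```sql
create table tb_dbms_iot (
   col1 varchar2(500) primary key,
   col2 varchar2(500),
   col3 varchar2(500),
   col4 varchar2(500),
   col5 varchar2(500))
organization index;
```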
insert into
   tb_dbms_iot
   (col1)
values
   ('a')
/

1 row created.

insert into
   tb_dbms_iot
   (col1,col2)
values
   ('a1','b2')
/

1 row created.

insert into
   tb_dbms_iot
   (col1,col2,col3)
values
   ('a3','b3','c3')
/

1 row created.

insert into
   tb_dbms_iot
   (col1,col2,col3,col4)
values
   ('a4','b4','c4','d4')
/

1 row created.

insert into
   tb_dbms_iot
   (col1,col2,col3,col4,col5)
values
   ('a5','b5','c5','d5','e5')
/

1 row created.
commit;
Next, statistics are collected for this table, followed by information about chained
rows. Note that the "compute statistics" command populates the chain_cnt column in
the dba_tables view.
analyze table
   tb_dbms_iot
compute statistics;

Table analyzed.

 CHAIN_CNT PCT_CHAINED AVG_ROW_LEN  PCT_FREE  PCT_USED
         3          60         313         0         0
select
   num_rows,
   chain_cnt
from
   dba_tables
where
   table_name = 'TB_DBMS_IOT';

  NUM_ROWS  CHAIN_CNT
         5          3
Now, using package dbms_iot, the table that records chained rows is created, and then
the chained rows are listed into it.
execute dbms_iot.build_chain_rows_table( -
   owner               => 'pkg', -
   iot_name            => 'tb_dbms_iot', -
   chainrow_table_name => 'tab_iot_chained_rows');

analyze table
   tb_dbms_iot
list chained rows into tab_iot_chained_rows;

Table analyzed.
Finally, the information about chained rows is collected with the query below.
select
   owner_name,
   table_name,
   timestamp,
   col1
from
   tab_iot_chained_rows;
pkg   tb_dbms_iot   4/26/2009   a5
Remember, locating and repairing chained rows fixes a symptom; it does not fix
the root cause of the row chaining. In order to eliminate and prevent
chained and migrated rows, you need to execute one of the following
procedures to adjust for anticipated future row expansion:
■ alter table... move
■ Increase PCTFREE
■ Move table into a tablespace with a larger blocksize
■ Imp/Exp
■ Avoid tables with more than 255 columns
■ Take advantage of utlchain or dbms_iot, select chained records into a temporary
table, delete them, and insert them back.
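The first two fixes can be sketched as follows, using the example table from above (the exact pctfree value is workload-dependent, and is only an illustration):

```sql
-- Leave more free space in each block for future row growth:
alter table tb_dbms_iot pctfree 30;

-- Rebuild the table so that existing rows are re-inserted
-- under the new storage settings:
alter table tb_dbms_iot move;
```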
A very common scenario where an exceptions table is useful is when a batch
load must be put into an IOT table. Before the load, the decision may be made to
disable some constraints to boost the performance of the process. When these
constraints are re-enabled after the load, this procedure can be used to record the
rows that violated any of the constraints being enabled.
We can then check this exceptions table to gather more information about these rows.
Let's create the test table named tb_dbms_iot_excpt and populate it with some rows to
simulate the constraint error.
Connected.
create table
tb_dbms_iot_excpt (
Table created.
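The column list is truncated in this extract; a sketch consistent with the insert below (the datatypes are assumptions based on dba_objects) would be:

```sql
create table tb_dbms_iot_excpt (
   object_id   number primary key,
   object_name varchar2(128),
   object_type varchar2(23),
   status      varchar2(10))
organization index;
```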
insert into
   tb_dbms_iot_excpt
select
   object_id,
   object_name,
   object_type,
   status
from
   dba_objects
where
   rownum < 1000;
commit;
Commit complete.
Now, let’s invalidate some rows so the constraint that is added will have some errors.
update
tb_dbms_iot_excpt
set
status='invalid'
where
rownum < 100;
99 rows updated.
commit;
Commit complete.
exec dbms_iot.build_exceptions_table( -
   owner                 => 'pkg', -
   iot_name              => 'tb_dbms_iot_excpt', -
   exceptions_table_name => 'tab_iot_exceptions');
select
count(*)
from
tab_iot_exceptions;
COUNT(*)

0
Below, the check constraint fails and the rows responsible for this failure are inserted
into the tab_iot_exceptions table.
alter table
   tb_dbms_iot_excpt
add constraint
   ck_stat
check (status='valid')
exceptions into
   tab_iot_exceptions;

ERROR at line 1:
ORA-02293: cannot validate (PKG.CK_STAT) - check constraint violated
select
*
from
tab_iot_exceptions;
The table tab_iot_exceptions has information about rows that have violated some
constraint of the tb_dbms_iot_excpt table.
Package dbms_lob

Starting in Oracle8, Oracle recognized that a tablespace must be able to store much
more than just text and numbers. A sophisticated database must also be able to store
images, videos and maps: all types of unstructured data.
The LOB data type allows holding and manipulating unstructured data such as text,
graphic images, and video and sound files. Oracle provides the dbms_lob package,
which is designed to access and manipulate LOB values in both internal and
external storage locations.
With the dbms_lob package, it is possible to read and modify BLOB, CLOB and
NCLOB types, as well as perform read operations on BFILEs. The types of data
used by package dbms_lob include:
■ BLOB
■ RAW
■ CLOB
■ VARCHAR2
■ INTEGER
■ BFILE
It is important to remember that the maximum size for a data type LOB is 8 TB for
databases with blocks of 8k, and 128 TB for databases configured with blocks of 32k.
Package dbms_lob contains procedures that are used to manipulate segments of type
LOB (BLOBs, CLOBs and NCLOBs) and BFILEs. A BFILE locator points to
external file(s), enabling access to each external file's data via SQL (data type
BFILE).
Below is a list containing some of the main procedures and functions that are
present in this package and what they do.
■ isopen: This function checks to see if the LOB was already opened using the
input locator.
■ createtemporary: The procedure createtemporary creates a temporary CLOB or
BLOB and its corresponding index in the user default temporary tablespace.
■ instr: Used to return the matching position of the nth occurrence of the pattern
in the LOB.
■ getlength: Used to get the length of specified LOB.
■ copy: Copies part or all of a source internal LOB to a destination internal LOB.
■ writeappend: Writes a specified amount of data to the end of an internal LOB.
■ trim: Trims the value of the internal LOB to the length specified by the newlen
parameter.
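Before the full search-and-replace example, here is a minimal anonymous block illustrating a few of these calls on a temporary CLOB (a sketch, not from the book's code depot):

```sql
set serveroutput on

declare
   l_clob clob;
   l_pos  integer;
begin
   -- Create a session-duration temporary CLOB and put some text in it:
   dbms_lob.createtemporary(l_clob, true, dbms_lob.session);
   dbms_lob.writeappend(l_clob, 26, 'Oracle Database 11g tested');

   -- getlength returns the current length of the LOB:
   dbms_output.put_line('Length: ' || dbms_lob.getlength(l_clob));

   -- instr returns the position of the 1st occurrence of '11g':
   l_pos := dbms_lob.instr(l_clob, '11g', 1, 1);
   dbms_output.put_line('Found at position: ' || l_pos);

   dbms_lob.freetemporary(l_clob);
end;
/
```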
Here is how to perform search and replace in a CLOB with some procedures of the
dbms_lob package. First, create the test table and insert some rows.
drop table
tab_dbms_lob_search
purge
/
Table dropped
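The create table statement is not reproduced in this extract; a sketch matching the two-column inserts below (the column names are assumptions) would be:

```sql
create table tab_dbms_lob_search (
   id   number,
   text clob);
```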
insert into
tab_dbms_lob_search
values
(1,'Oracle Database 7i,8i,9i and 10g')
/
1 row inserted
insert into
tab_dbms_lob_search
values
(2,'Oracle Database 7i,8i,9i and 10g')
/
1 row inserted
commit
/
Commit complete
Let’s create a procedure that uses the dbms_lob package to search and replace text
within a LOB (CLOB, BLOB).
begin
if lob_local is NULL then
    raise_application_error(-20001, 'LOB is empty. You need to execute inserts first!');
end if;
  --The function isopen checks to see if the LOB was already opened using
  --the input locator
  if dbms_lob.isopen(
       lob_local) = 0
  then
    dbms_output.put_line(' LOB is closed!');
  else
    dbms_output.put_line(' LOB is open!');
  end if;
  dbms_lob.createtemporary(
    temp_clob,
    TRUE,
    dbms_lob.session);
  loop
    -- The function instr returns the matching position of the nth
    -- occurrence of the pattern in the LOB, starting from the offset you
    -- specify.
    end_offset := dbms_lob.instr(lob_local, srch_string, start_offset, nth);
    if end_offset = 0 then
      -- No more occurrences: copy the remainder of the source LOB.
      if lob_local_len > 0
      then
        --The procedure copy copies all, or a part of, a source internal
        --LOB to a destination internal LOB.
        dbms_lob.copy(
          temp_clob,
          lob_local,
          lob_local_len,
          temp_clob_len + 1,
          start_offset);
      end if;
      exit;
    end if;
--The function getlength gets the length of the specified LOB.
temp_clob_len := dbms_lob.getlength(temp_clob);
if (end_offset - start_offset) > 0
then
dbms_lob.copy (
temp_clob,
lob_local,
(end_offset - start_offset),
temp_clob_len + 1,
start_offset);
end if;
    start_offset := end_offset + length(srch_string);
nth := nth + 1;
dbms_lob.writeappend(
temp_clob,
rep_string_len,
rep_string);
    end if;
end loop;
  if length(srch_string) > length(rep_string) then
    --The procedure trim trims the value of the internal LOB to the
    --length you specify in the newlen parameter
    dbms_lob.trim(
      lob_loc => lob_local,
1. First, it finds the position of the first occurrence of the searched text.
2. Then, it copies the text surrounding each occurrence into a temporary CLOB,
appending the replacement string in place of each match.
3. Finally, it trims the temporary CLOB to keep only what came after the word
to replace.
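Because the listing above is fragmentary, here is a self-contained sketch of how such a procedure could be written. The procedure name proc_dbms_lob_search_rep and its parameters match the call made later in this section; the body is an assumption assembled from the dbms_lob calls discussed above, not the book's exact code.

```sql
-- Sketch of a CLOB search-and-replace procedure built on dbms_lob.
-- Name and parameters match the later example call; body is illustrative.
create or replace procedure proc_dbms_lob_search_rep(
  lob_local   in out nocopy clob,
  srch_string in varchar2,
  rep_string  in varchar2)
is
  temp_clob    clob;
  start_offset pls_integer := 1;
  end_offset   pls_integer;
  lob_len      pls_integer;
begin
  if lob_local is null then
    raise_application_error(-20001, 'LOB is empty. You need to execute inserts first!');
  end if;
  lob_len := dbms_lob.getlength(lob_local);
  dbms_lob.createtemporary(temp_clob, true, dbms_lob.session);
  loop
    -- Find the next occurrence of the search string.
    end_offset := dbms_lob.instr(lob_local, srch_string, start_offset, 1);
    if end_offset = 0 then
      -- No more matches: copy the remaining tail of the source LOB.
      if lob_len - start_offset + 1 > 0 then
        dbms_lob.copy(temp_clob, lob_local,
                      lob_len - start_offset + 1,
                      dbms_lob.getlength(temp_clob) + 1,
                      start_offset);
      end if;
      exit;
    end if;
    -- Copy the text before the match, then append the replacement.
    if end_offset - start_offset > 0 then
      dbms_lob.copy(temp_clob, lob_local,
                    end_offset - start_offset,
                    dbms_lob.getlength(temp_clob) + 1,
                    start_offset);
    end if;
    dbms_lob.writeappend(temp_clob, length(rep_string), rep_string);
    start_offset := end_offset + length(srch_string);
  end loop;
  -- Overwrite the original LOB with the rebuilt value.
  dbms_lob.trim(lob_local, 0);
  dbms_lob.copy(lob_local, temp_clob, dbms_lob.getlength(temp_clob), 1, 1);
  dbms_lob.freetemporary(temp_clob);
end;
/
```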
Now let’s look at the table results to see how they are before the search and replace
process.
select
*
from
tab_dbms_lob_search;
CLOB_ID C
Call the procedure to change the value for the first row only (clob_id=1).
declare
lob_local clob;
begin
select c
into lob_local
from
tab_dbms_lob_search
where
clob_id = 1
for update;
proc_dbms_lob_search_rep(
  lob_local   => lob_local,
  srch_string => ' and 10g',
  rep_string  => ',10g and 11g');
commit;
end;
/
select
*
from
tab_dbms_lob_search
/
CLOB_ID C
In the preceding example, the following procedures and functions were used: isopen,
createtemporary, instr, getlength, copy and trim.
Package dbms_pclxutil
The dbms_pclxutil package is used in cases where there are large partitioned tables
and local indexes need to be created on these tables. Through the build_part_index
procedure, the user can create these indexes using parallelism. The dbms_pclxutil
package is an alternative to a parallel hint.
Now, we create an index using the unusable option just to add index information to the
dictionary.
Next, let’s rebuild the index using four jobs with the procedure build_part_index.
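The call itself is not reproduced in this excerpt; a sketch of what it might look like follows. The table and index names here are hypothetical placeholders, and the parameter values (four jobs per batch) are illustrative:

```sql
-- Rebuild unusable local index partitions in parallel with dbms_pclxutil.
-- Object names are hypothetical; parameter values are illustrative.
begin
  dbms_pclxutil.build_part_index(
    jobs_per_batch => 4,                   -- concurrent jobs per batch
    procs_per_job  => 1,                   -- parallel slaves per job
    tab_name       => 'BIG_PART_TAB',      -- hypothetical partitioned table
    idx_name       => 'IDX_BIG_PART_TAB',  -- hypothetical local index
    force_opt      => TRUE);               -- rebuild even usable partitions
end;
/
```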
Of course, to experience the most benefits from this technique, the server must have
enough free spare resources to physically perform this task in parallel.
To get the most from it, it is desirable to have at least as many CPU cores as
processes as are intended to run in parallel, and to have the index and table data
stored over several hard drives.
Be careful: if the server does not have enough resources for parallelism, it may
indeed adversely affect performance. Remember, parallelism only works with
Enterprise Edition databases.
Package dbms_redefinition
It is important to note that the package dbms_redefinition may be used to redefine the
structure of a table online, reorganizing it while DML operations are still being
executed on the table.
Fortunately, the dbms_redefinition package provides a very simple and useful front end
and internally performs most of these tasks, thus saving the user from having to do
any complex manual intervention. Unfortunately, it is only available with Enterprise
Edition.
There has been an ongoing debate about the value of periodic rebuilding of tables and
indexes along two dimensions:
1. Reclaimed storage: The Oracle 10g segment advisor identifies tables and
indexes that have become sparse as a result of high DML as candidates for
rebuilding.
2. Improved speed: There are documented cases where rebuilding a table or index
reduces consistent gets and makes the SQL run faster, but this workload feature is
not yet in the Oracle 10g segment advisor.
There are eight procedures in the package dbms_redefinition:
■ abort_redef_table
■ can_redef_table
■ copy_table_dependents
■ finish_redef_table
■ register_dependent_object
■ start_redef_table
■ sync_interim_table
■ unregister_dependent_object
Table created
create index
idx_tab_redef
on
table_redefinition (prod_category)
tablespace
pkg_idx;
Index created
The procedure can_redef_table is used to check whether the table can be redefined or
reorganized. Note that error ORA-12089 appears because the table does not have a
primary key yet.
execute dbms_redefinition.can_redef_table(
  uname => 'pkg',
  tname => 'table_redefinition');
begin dbms_redefinition.can_redef_table(
  uname => 'pkg',
  tname => 'table_redefinition');
end;
Create a primary key and now the package will work on this table.
alter table
pkg.table_redefinition
add constraint
pk_obj_id
primary key
(prod_id);
Now we create the temporary “snapshot” table that will receive changed data during
the redefinition process using different column names for the example and another
tablespace (reorg).
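The CREATE TABLE statement for the interim table is not reproduced in this excerpt. Based on the column mapping used in the start_redef_table call below, it might look like the following sketch; the datatypes are assumptions:

```sql
-- Hypothetical interim ("snapshot") table. Column names come from the
-- col_mapping used with start_redef_table; datatypes are assumed.
create table pkg.temp_redefinition
(
  prod_id_diff          number,
  prod_name_diff        varchar2(50),
  prod_desc             varchar2(4000),
  prod_category         varchar2(50),
  prod_subcategory_diff varchar2(50),
  prod_category_id      number
)
tablespace reorg;
```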
Table created
execute dbms_redefinition.start_redef_table(
  uname        => 'pkg',
  orig_table   => 'table_redefinition',
  int_table    => 'temp_redefinition',
  col_mapping  => 'prod_id prod_id_diff,prod_name prod_name_diff,
prod_desc prod_desc,prod_category prod_category,prod_subcategory
prod_subcategory_diff,prod_category_id prod_category_id',
  options_flag => dbms_redefinition.cons_use_pk)
select
sql_text
from
v$sqlarea
where
sql_text like '%temp_redefinition%'
SQL_TEXT
9 rows selected
To see which objects are being reorganized, use the sample below.
select
*
from
dba_redefinition_objects;
OBJECT_TYPE OBJECT_OWNER
OBJECT_NAME BASE_TABLE_OWNER BASE_TABLE_NAME
INTERIM_OBJECT_OWNER INTERIM_OBJECT_NAME
EDITION_NAME
Now, let’s check the interim table and the original table.
select
count(*)
from
pkg.temp_redefinition;
COUNT(*)
72
select
count(*)
from
pkg.table_redefinition
COUNT(*)
72
To ensure that the redefinition is including changed rows, let’s make some changes to
the original table.
delete
from
pkg.table_redefinition
where
prod_id=33
/
1 row deleted
commit
/
Commit complete
select
count(*)
from
pkg.temp_redefinition
/
COUNT(*)
72
select
count(*)
from
pkg.table_redefinition
/
71
commit
/
Commit complete
exec dbms_redefinition.sync_interim_table(
  uname      => 'pkg',
  orig_table => 'table_redefinition',
  int_table  => 'temp_redefinition');
71
select
count(*)
from
pkg.table_redefinition
/
COUNT(*)
71
The final step performs one of the most critical and easily forgotten steps: creating
any required triggers, indexes, materialized view logs, grants and/or constraints on
the newly reorganized table, traditionally using the dbms_metadata package to generate
the DDL for these ancillary objects.
Instead of using dbms_metadata, look how easy it is to copy all dependent objects with
just a single call to dbms_redefinition.
declare
  num_err pls_integer;
begin
  -- Copy indexes, triggers, constraints and grants in one call.
  dbms_redefinition.copy_table_dependents(
    uname      => 'pkg',
    orig_table => 'table_redefinition',
    int_table  => 'temp_redefinition',
    num_errors => num_err);
  dbms_output.put_line('Errors: ' || num_err);
end;
/
select
*
from
dba_redefinition_errors
/
OBJECT_TYPE OBJECT_OWNER
OBJECT_NAME BASE_TABLE_OWNER BASE_TABLE_NAME DDL_TXT
EDITION_NAME
index pkg
pk_obj_id pkg table_redefinition
unique index "pkg"."tmp$$_pk_obj_id0" ON "pkg"."temp_redefinition" ("prod
index pkg
pk_obj_id pkg table_redefinition
unique index "pkg"."tmp$$_pk_obj_id0" ON "pkg"."temp_redefinition" ("prod
alter table
temp_redefinition
add constraint
Table altered
create index
  idx_prod_sub
on
  temp_redefinition (prod_subcategory_diff)
tablespace
  pkg_idx_32m;
Index created
Next we create the same primary key, but with a different name.
alter table
pkg.temp_redefinition
add constraint
pk_obj_idl
primary key (prod_id_diff);
And here we finish the reorganization process with the finish_redef_table procedure.
exec dbms_redefinition.finish_redef_table(
  uname      => 'pkg',
  orig_table => 'table_redefinition',
  int_table  => 'temp_redefinition');
select
owner,
segment_name,
tablespace_name
from
dba_segments
where
segment_name in ('table_redefinition','idx_prod_sub');
select
owner,
constraint_name,
constraint_type,
table_name,
search_condition,
index_name
from
dba_constraints
where
table_name='table_redefinition'
/
8 rows selected
select
index_name
from
dba_indexes
where
table_name='table_redefinition'
/
INDEX_NAME
idx_tab_redef
idx_prod_sub
pk_obj_idl
If the reorganization fails, you will need to take special steps to restart it. Because the
redefinition requires creating a snapshot, upon an abort you need to execute
dbms_redefinition.abort_redef_table to release the snapshot before restarting the process.
The dbms_redefinition.abort_redef_table procedure accepts three parameters (schema,
original table name and holding table name) and "pops the stack", allowing the DBA
to start over.
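Using the object names from this chapter, an abort call might look like this sketch:

```sql
-- Release the redefinition snapshot so the process can be restarted.
begin
  dbms_redefinition.abort_redef_table(
    uname      => 'pkg',
    orig_table => 'table_redefinition',
    int_table  => 'temp_redefinition');
end;
/
```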
Package dbms_rowid
The dbms_rowid package allows us to create rowids and obtain information about rowids
that are already created. This might include the object number or the data block
number, and the dbms_rowid package allows us to see this information without having
to write any additional code.
Displaying a rowid is useful for identifying which row is being locked by a session and
not just the object that is being locked, which is information usually acquired through
views such as v$lock, v$locked_objects, the script utllockt.sql and others.
Beware: Oracle rowids are not permanent, and they may change as a result of
table reorganization, table coalescing, and row relocation.
All tables of the Oracle database have one pseudo column called rowid.
Just like your home address indicates where you live, a unique address called a rowid
is created for each row of each table of the database, using the file number, data
block number and offset into the data block.
The rowid column holds the data block that contains the row, the position of the row
inside the data block, the database file that holds the row and the data object number.
The rowid can be easily manipulated with the Oracle dbms_rowid package.
There are also "old rowids", leftovers from when Oracle changed the rowid storage
method. Old rowids, also called restricted rowids, will be automatically converted to
the new rowid format if:
■ Export/import is used to move data
■ The migration utility is used
■ The ODMA is used to upgrade to 9i
If rowids are used in an Oracle 7 application and stored as columns in other tables,
then these columns will have to be manually changed to the new format using the
Oracle dbms_rowid package. If a column in a table has been designated as a datatype of
rowid, it will be altered to accept the new format during migration; this will not affect
the data in the column.
When an index on a table is created, the Oracle database uses rowids to construct the
index, using pairs of symbolic keys and rowids.
Each key of the index points to one definitive rowid associated with the address of a
row in the table, supplying fast access to the record. rowid also can be used for tasks
such as: access to particular rows; verification of how the table is organized;
export of a specified set of rows with the export (exp) utility; verification of
which row is being locked by a session; and verification of the space really used by
one table.
The code of this package is found in the dbmsutil.sql script and it is called by catproc.sql
at database creation. A public synonym is created for the package by this
script and the execute privilege is granted to public. The package dbms_rowid contains
only one procedure and ten functions. The following are examples of how to make
use of the functions and procedures of the package dbms_rowid.
So, how can we find the block number of a row? This information is useful when a
corrupted block is known and the rows that belong to this block need to be found.
If a situation like this ever comes up where a few blocks are corrupt, Oracle will
provide the block information along with the error message, so that information is
actually present. RMAN manuals can be found at tahiti.oracle.com.
However, for the purpose of this example, a sample block address is obtained with
the following procedure just to show what it is like. First, the block number of a
single row using rowid_block_number will be returned by this query.
select
dbms_rowid.rowid_block_number(rowid)
from
table_redefinition
where
prod_id_diff=14;
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
8267
Once the rowid_block_number function shows the block number, use the rowid_object
function to get the object number from the rowid. The query below shows all of the
rowid information including object, file, block and row:
select
substr(rowid,1,6) "object",
substr(rowid,7,3) "file",
substr(rowid,10,6) "block",
substr(rowid,16,3) "row"
from
table_redefinition;
Then the rowid object number is determined by using the function rowid_object of the
package dbms_rowid.
set serveroutput on
declare
object_no integer;
row_id rowid;
begin
select
rowid
into
row_id
from
table_redefinition
where
prod_id_diff = 13;
  object_no := dbms_rowid.rowid_object(row_id);
  dbms_output.put_line(
    'Here is the object rowid number: ' || object_no);
end;
/
Here is the object rowid number: 71384
The example above shows the object number. We can check it using the query below
to compare the values.
select
dbms_rowid.rowid_object(rowid)
from
table_redefinition
where
prod_id_diff = 14;
Next, a rowid is created for test purposes only. Note that this should not be used
in production, because the Oracle server creates valid rowids automatically.
DBMS_ROWID.ROWID_CREATE(ROWID_
00004009.0001.0009
AAARb ZAAJAAAEAJAAB
The dbms_rowid procedure rowid_info can be used to return information like rowid type,
object_id, datafile number, block id and row number. One example of rowid_info is
shown below:
declare
  row_id       rowid;
  rowid_type   number;
  object_id    number;
  datafile_num number;
  block_id     number;
  row_number   number;
begin
  row_id := dbms_rowid.rowid_create(1, 71385, 9, 16393, 1);
  dbms_rowid.rowid_info(row_id,
    rowid_type,
    object_id,
    datafile_num,
    block_id,
    row_number);
  dbms_output.put_line('rowid: ' || row_id);
  dbms_output.put_line('Object ID: ' || object_id);
  dbms_output.put_line('Datafile Number: ' || datafile_num);
  dbms_output.put_line('Block ID: ' || block_id);
  dbms_output.put_line('Row Number: ' || row_number);
end;
/
rowid: AAARbZAAJAAAEAJAAB
Object ID: 71385
Datafile Number: 9
Block ID: 16393
Row Number: 1
The function rowid_relative_fno returns the relative number of the datafile where the
row is located, as shown below.
select
dbms_rowid.rowid_relative_fno(rowid) "Datafile Number"
from
table_redefinition
where
prod_id_diff = 14;
Datafile Number
select
dbms_rowid.rowid_row_number(rowid)
from
table_redefinition
where
prod_id_diff = 14;
--Using query--
select
dbms_rowid.rowid_to_absolute_fno(
row_id => rowid,
schema_name => 'sh',
object_name=> 'products') "Absolute file number"
from
sh.products
where
rownum <2;
Starting with Oracle 8, information such as the object number, relative file number,
block and row was added to rowids. When a migration from Oracle 7 to Oracle 8 is
completed, the extension of the rowid is done automatically.
However, for single columns of a rowid data type, this extension is not automatically
done, so the function rowid_to_extended needs to be used.
Convert a restricted rowid to an extended rowid using the function rowid_to_extended.
Extended rowid
D/////AAJAAACBLAAA
select
  dbms_rowid.rowid_to_extended(
    old_rowid       => dbms_rowid.rowid_to_restricted(rowid, 0),
    schema_name     => NULL,
    object_name     => NULL,
    conversion_type => 0) "Extended rowid"
from
  table_redefinition
where
  prod_id_diff=14;
Extended rowid
AAARbXAAJAAACBLAAA
DBMS_ROWID.ROWID_TO_EXTENDED(O
D/////AAFAAAAvlAAA
Restricted rowid
0000204B.0000.0009
rowid Type
0
Verify whether a restricted rowid can be converted to an extended format using the
function rowid_verify.
conversion_type =>0) = 1;
AAARbVAAJAAAABLAAA Electronics
A useful feature of this package is the ability to export only one row of a table, a set
of rows or just a block using the export (exp) utility. Below is a demonstration of how
to execute such procedures.
The example below is just a simple task to show what this package can do but, of
course, a lot of things can be done with this package. Some other examples can be
found at www.dba-oracle.com.
create table
tab_rowid_exp
tablespace
pkg_data_32m as
select
*
from
dba_tables;
select
distinct
dbms_rowid.rowid_block_number(
rowid)
from
tab_rowid_exp;
20561
20573
20577
20588
select
  dbms_rowid.rowid_block_number(rowid) "block number",
  count(1) "num rows in block"
from
tab_rowid_exp
where
dbms_rowid.rowid_block_number(rowid)
in
(20567,20632)
group by
dbms_rowid.rowid_block_number(rowid) ;
20632 29
20567 41
Show some data from the rows that are inside the blocks that will be
exported.
select
dbms_rowid.rowid_block_number(rowid) "block number",
table_name "part of row to be exported"
from
tab_rowid_exp
where
dbms_rowid.rowid_block_number(rowid)
in
(20567, 20632)
group by
dbms_rowid.rowid_block_number(rowid),
table_name;
20567 logmnr_indsubpart$
20567 logmnr_ccol$
20567 logmnr_col$
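The exp command that produced the output below is not shown in this excerpt. One way to export only those two blocks would be to use the QUERY parameter with rowid_block_number; the following is a sketch only, and the file name is a placeholder:

```shell
# Hypothetical exp invocation: export only the rows stored in the two
# chosen blocks, filtering with dbms_rowid.rowid_block_number.
exp pkg/pkg#123 tables=tab_rowid_exp file=tab_rowid_exp.dmp \
    query=\"where dbms_rowid.rowid_block_number(rowid) in (20567,20632)\"
```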
Export: Release 11.1.0.6.0 - Production on Mon May 4 07:30:34 2009
Connected to: Oracle 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table tab_rowid_exp 70 rows exported
Export terminated successfully without warnings.
Next, we show how to export specific rows of a table. First, choose which rows
will be exported.
select
rowid,
table_name,
owner
from
tab_rowid_exp
where
table_name in ('promotions','countries','products')
and
owner='SH';
AAARbyAAJAAAFBQAAS products SH
AAARbyAAJAAAFBQAAX countries SH
AAARbyAAJAAAFBQAAa promotions SH
The following is another use of the dbms_rowid package. In cases where objects are
locked, generally a search is done in the SGA v$ views, such as v$lock,
v$locked_object or v$dml_locks, trying to find which object is being locked at any given
time.
--Populating table
begin
for x in 1 .. 50 loop
insert into
tab_rowid_lock
values (x);
end loop;
end;
/
commit;
--Generating lock
--Open a session (sessionl)
select
*
from
tab_rowid_lock
where
coll=2
for update;
Now let's use the "select for update" clause to simulate a long-term locked rowset. To
investigate these locks, we use a variety of queries against v$ views and then use
dbms_rowid.rowid_create to find the specific row numbers being locked.
col os_user_name format a15 heading 'os user'
select
  l.session_id,
  o.owner,
  o.object_name,
  o.object_id,
  l.oracle_username,
  l.os_user_name
from
  gv$locked_object l,
  dba_objects o
where
  l.object_id = o.object_id
order by
  o.object_name;
COL1
Package dbms_space
The dbms_space package contains five procedures and two functions that serve to
analyze object growth and the space used by tables and indexes. The dbms_space
package gives us the following additional information:
■ Cost: To determine the cost to create an index
■ Management: To find recommendations on management of segments
■ Verify: To verify information on free blocks in an object
■ List: To return a list of objects that are associated with one specific segment
■ Growth: To verify the growth of one definitive object in a given time
■ Space: To verify space made unusable in a table, index or cluster.
The dbms_space package is created by the script dbmsutil.sql which, in turn, is executed
by another script called catproc.sql. The user must have the analyze privilege on objects
to be able to execute this package.
Use the function asa_recommendations to verify if there are recommendations for
improvement in a particular segment. See an example in the code below:
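The example itself does not appear in this excerpt. Since asa_recommendations is a pipelined function, it is queried through the TABLE() operator; the column list below is a sketch of how it is typically used:

```sql
-- List Automatic Segment Advisor recommendations for segments.
-- asa_recommendations is pipelined, hence the TABLE() operator.
select
  tablespace_name,
  segment_name,
  recommendations
from
  table(dbms_space.asa_recommendations());
```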
To see how this works, let’s create an example table and then gather statistics of this
table. Next, simulate the creation of an index in this table using the procedure
create_index_cost. This will show how much space is needed on the index tablespace to
create this index.
Code 2.13 - dbms_space_create_idx_cost.sql
conn pkg/pkg#123
--Creating table
create table tab_dbms_space
tablespace
pkg_data_32m
as
select
*
from
dba_objects;
declare
  used_bytes         number;
  alloc_bytes_on_tbs number;
begin
  dbms_space.create_index_cost(
ddl => 'create index idx_obj_name ON
tab_dbms_space(object_name)',
used_bytes => used_bytes,
alloc_bytes => alloc_bytes_on_tbs,
plan_table => '');
dbms_output.put_line('Index Used Bytes: ' || used_bytes);
  dbms_output.put_line('Allocated Bytes on Tablespace: ' || alloc_bytes_on_tbs);
end;
/
Index Used Bytes: 1723300
Allocated Bytes on Tablespace: 3145728
For dbms_space.create_index_cost, the procedure accepts the DDL for a "create index"
statement and outputs the storage needed to create the index. The Oracle
documentation notes these input parameters for dbms_space.create_index_cost:
■ ddl: The create index DDL statement
■ used_bytes: The number of bytes representing the actual index data
■ alloc_bytes: Size of the index when created in the tablespace
■ plan_table: Which plan table to use, default NULL
The create_table_cost procedure allows the user to identify the amount of space the
table to be created in the database will occupy. The procedure bases its calculation on
either the information of the columns of the table, or the information on the average
size of each row of the table.
The example below shows a simulation of a table creation using the create_table_cost
procedure. Note that it is necessary to input some parameters like avg_row_size (in
bytes), row_count, and pct_free so the calculation of table size can be done when the
procedure is executed.
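The original listing is not reproduced in this excerpt; here is a sketch of such a call. The tablespace name and input values are illustrative:

```sql
-- Estimate space for a table of 100,000 rows averaging 120 bytes,
-- with pctfree 10, in a hypothetical tablespace pkg_data_32m.
set serveroutput on
declare
  used_bytes  number;
  alloc_bytes number;
begin
  dbms_space.create_table_cost(
    tablespace_name => 'PKG_DATA_32M',
    avg_row_size    => 120,
    row_count       => 100000,
    pct_free        => 10,
    used_bytes      => used_bytes,
    alloc_bytes     => alloc_bytes);
  dbms_output.put_line('Used Bytes: ' || used_bytes);
  dbms_output.put_line('Allocated Bytes: ' || alloc_bytes);
end;
/
```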
As we see, the dbms_space.create_table_cost procedure gives us an estimate of the used
and allocated space for the target table.
If the database uses manual segment space management, or if LOBs are stored in a
dictionary-managed tablespace, there may be a limit on the maximum number of
extents. In this case, monitoring free blocks in order to avoid out-of-space errors is a
task that all DBAs should do in their day-to-day work.
In the next example, a table is created on a tablespace with manual segment space
management and then the free_blocks procedure is executed to show how many free
blocks this table has.
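The listing for this example is not shown in this excerpt; the following sketch shows the shape of the free_blocks call. The table name is a hypothetical placeholder:

```sql
-- Show how many blocks are on the segment's freelist. free_blocks works
-- only for segments in tablespaces with manual segment space management.
set serveroutput on
declare
  free_blks number;
begin
  dbms_space.free_blocks(
    segment_owner     => 'PKG',
    segment_name      => 'TAB_DBMS_SPACE_MANUAL',  -- hypothetical table
    segment_type      => 'TABLE',
    freelist_group_id => 0,
    free_blks         => free_blks);
  dbms_output.put_line('Free blocks: ' || free_blks);
end;
/
```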
For the parameter objtype, one of the following numbers must be specified:
■ object_type_table = 1
■ object_type_nested_table = 2
■ object_type_index = 3
■ object_type_cluster = 4
■ object_type_table_partition = 7
■ object_type_index_partition = 8
■ object_type_table_subpartition = 9
■ object_type_index_subpartition = 10
■ object_type_mv = 13
■ object_type_mvlog = 14
The example below demonstrates how to use object_dependent_segments to get all objects
that have some dependency with the table used by variable objname (products).
Code 2.16 - dbms_space_obj_depend_seg.sql
conn pkg/pkg#123
set linesize 200
col segment_owner format a25
col segment_name format a25
col segment_type format a20
col tablespace_name format a20
col partition_name format a20
col lob_column_name format a12
set serveroutput on
select
  segment_owner,
  segment_name,
  segment_type,
  tablespace_name
from
  table(dbms_space.object_dependent_segments(
    objowner => 'sh',
    objname  => 'products',
    partname => NULL,
    objtype  => 1));
Active space monitoring of individual segments gives up-to-the-minute status of
individual segments in the database. This provides sufficient information over time to
perform growth trending of individual objects in the database, as well as of the
database as a whole.
You can use the object_growth_trend function to verify the space used by a certain
object over different periods of time. It is used quite often for capacity planning, to
ensure that there is always enough space in tablespaces for tables and indexes.
Most production databases grow over the course of time, and planning for growth is
a very important task of every professional Oracle DBA. If resources are carefully
planned out well in advance, such problems as the system running out of space can be
avoided; hence, the object_growth_trend function is indispensable for capacity
planning.
In the example below, the size of the table products over time is shown, from which
the growth can be predicted and the capacity planning for this table deduced.
Code 2.16 - dbms_space_obj_growth_trend.sql
conn pkg/pkg#123
set linesize 200
select
*
from
table(dbms_space.object_growth_trend(
object_owner => 'SH',
object_name => 'products',
object_type => 'table'))
where
space_usage > 0;
The result of this query shows the time when the statistics were collected, the space
used for the data of the object, the space allocated by the object and the quality of the
result.
■ Index
■ Index partition
■ Index subpartition
■ Cluster
■ LOB
■ LOB partition
■ LOB subpartition
To understand how the space_usage procedure works, consider a table called
tab_dbms_space_usage which receives many insert and delete operations.
Because the delete operations leave empty space, the space_usage procedure can be
used to verify the lost space in this table and justify a reorganization of the table. A
table can be reorganized in a variety of ways:
■ alter table ... shrink space
■ CTAS copy of the table
■ The dbms_redefinition package
There are many ways to count row space used within a table:
■ Count: You can count the rows and multiply by dba_tables.avg_row_len.
■ dbms_space: We can see percentages of space used within data blocks using the
space_usage procedure.
■ Blocks: Some rough estimates of row space in a table can be computed by
gathering dba_tables.blocks and subtracting the value of PCTFREE.
■ File size: One way of looking at total consumed space within a table is to map
the table to a single tablespace and the tablespace to a single data file. You can
then check dba_segments to see the total file size for the table rows.
The following is an example of collecting the actual space used within a table. A table
is created and some operations performed to simulate changes.
At this time, all blocks are full, so check it by running the dbms_space.space_usage
procedure. The results are below:
set serveroutput on
declare
  v_unformatted_blocks number;
  v_unformatted_bytes  number;
  v_fs1_blocks         number;
  v_fs1_bytes          number;
  v_fs2_blocks         number;
  v_fs2_bytes          number;
  v_fs3_blocks         number;
  v_fs3_bytes          number;
  v_fs4_blocks         number;
  v_fs4_bytes          number;
  v_full_blocks        number;
  v_full_bytes         number;
begin
  dbms_space.space_usage('pkg',
    'tab_dbms_space_usage',
    'table',
    v_unformatted_blocks,
    v_unformatted_bytes,
    v_fs1_blocks,
    v_fs1_bytes,
    v_fs2_blocks,
    v_fs2_bytes,
    v_fs3_blocks,
    v_fs3_bytes,
    v_fs4_blocks,
    v_fs4_bytes,
    v_full_blocks,
    v_full_bytes);
  dbms_output.put_line('Unformatted Blocks = ' || v_unformatted_blocks);
  dbms_output.put_line('Unformatted Bytes = ' || v_unformatted_bytes);
  dbms_output.put_line('FS1 Bytes (at least 0 to 25% free space) = ' || v_fs1_bytes);
  dbms_output.put_line('FS1 Blocks(at least 0 to 25% free space) = ' || v_fs1_blocks);
  dbms_output.put_line('FS2 Bytes (at least 25 to 50% free space)= ' || v_fs2_bytes);
  dbms_output.put_line('FS2 Blocks(at least 25 to 50% free space)= ' || v_fs2_blocks);
  dbms_output.put_line('FS3 Bytes (at least 50 to 75% free space) = ' || v_fs3_bytes);
  dbms_output.put_line('FS3 Blocks(at least 50 to 75% free space) = ' || v_fs3_blocks);
  dbms_output.put_line('FS4 Bytes (at least 75 to 100% free space) = ' || v_fs4_bytes);
  dbms_output.put_line('FS4 Blocks(at least 75 to 100% free space)= ' || v_fs4_blocks);
  dbms_output.put_line('Full Blocks in segment = ' || v_full_blocks);
  dbms_output.put_line('Full Bytes in segment = ' || v_full_bytes);
end;
/
Unformatted Blocks = 0
Unformatted Bytes = 0
FS1 Bytes (at least 0 to 25% free space) = 0
FS1 Blocks(at least 0 to 25% free space) = 0
FS2 Bytes (at least 25 to 50% free space)= 0
FS2 Blocks(at least 25 to 50% free space)= 0
FS3 Bytes (at least 50 to 75% free space) = 0
FS3 Blocks(at least 50 to 75% free space) = 0
FS4 Bytes (at least 75 to 100% free space) = 0
FS4 Blocks(at least 75 to 100% free space)= 0
Full Blocks in segment = 1015
Full Bytes in segment = 8314880
By deleting some rows and checking the available space again, some newly freed-up
blocks can be seen.
Unformatted Blocks = 0
Unformatted Bytes = 0
FS1 Bytes (at least 0 to 25% free space) = 0
FS1 Blocks(at least 0 to 25% free space) = 0
FS2 Bytes (at least 25 to 50% free space)= 0
FS2 Blocks(at least 25 to 50% free space)= 0
FS3 Bytes (at least 50 to 75% free space) = 0
FS3 Blocks(at least 50 to 75% free space) = 0
FS4 Bytes (at least 75 to 100% free space) = 155648
FS4 Blocks(at least 75 to 100% free space)= 19
Full Blocks in segment = 996
Full Bytes in segment = 8159232
This shows that there are 19 blocks with 75-100% free space. This table can now be
reorganized using the shrink command; this will move the segment data to the
beginning of the segment and adjust the HWM.
alter table
pkg.tab_dbms_space_usage
enable row movement;
Table altered
alter table
pkg.tab_dbms_space_usage
shrink space;
Table altered
Lastly, the space can be checked again using the space_usage procedure of package
dbms_space.
Unformatted Blocks = 0
Unformatted Bytes = 0
FS1 Bytes (at least 0 to 25% free space) = 8192
FS1 Blocks(at least 0 to 25% free space) = 1
FS2 Bytes (at least 25 to 50% free space)= 8192
FS2 Blocks(at least 25 to 50% free space)= 1
FS3 Bytes (at least 50 to 75% free space) = 0
FS3 Blocks(at least 50 to 75% free space) = 0
FS4 Bytes (at least 75 to 100% free space) = 0
FS4 Blocks(at least 75 to 100% free space)= 0
Full Blocks in segment = 994
Full Bytes in segment = 8142848
Then, using the space_usage procedure, it becomes evident which objects contain
badly used space in the database. From this information, the shrink command can be
used to reclaim it.
The procedure dbms_space.unused_space is also useful for locating objects that are
wasting space. The example below demonstrates the complete operation for checking
the space not used in a determined table.
In this example, a new table is created and its unused space is checked with the
procedure unused_space of package dbms_space.
begin
  dbms_space.unused_space(
    segment_owner             => 'pkg',
    segment_name              => 'tab_dbms_unused_space',
    segment_type              => 'table',
    total_blocks              => tt_blk,
    total_bytes               => tt_bytes,
    unused_blocks             => unu_blk,
    unused_bytes              => unu_bytes,
    last_used_extent_file_id  => last_ext_file_id,
    last_used_extent_block_id => last_ext_blk_id,
    last_used_block           => last_used_blk);
  dbms_output.put_line('object_name = freelist_t');
  dbms_output.put_line('----------------------------------');
  dbms_output.put_line('Total Number of blocks = ' || tt_blk);
  dbms_output.put_line('Total unused blocks = ' || unu_blk);
end;
/
/
object_name = freelist_t
Now let's delete some rows from this table and then execute the alter table xxx shrink
space command. After this, some extents are freed to be used again and the unused
space can be rechecked.
delete from
tab_dbms_unused_space;
/
0 rows deleted
commit
/
Commit complete
COUNT(*)      BYTES
       8    1048576
      16      65536
--Freeing up the extents that were deleted
alter table
tab_dbms_unused_space
enable row movement;
Table altered
alter table
tab_dbms_unused_space shrink space;
--Check the number of extents again (now there is just one extent because
--the table is empty after the delete command)
select
count(*),
bytes
from
dba_extents
where
   segment_name = 'tab_dbms_unused_space'
and
   owner = 'pkg'
group by bytes;
1 65536
--Checking the unused space again (now we can see just 4 unused and 4 used
--blocks; it depends on db_block_size and the storage type of the tablespace)
set serveroutput on
declare
   tt_blk           number;
   tt_bytes         number;
   unu_blk          number;
   unu_bytes        number;
   last_ext_file_id number;
   last_ext_blk_id  number;
   last_used_blk    number;
begin
   dbms_space.unused_space(
      segment_owner             => 'pkg',
      segment_name              => 'tab_dbms_unused_space',
      segment_type              => 'table',
      total_blocks              => tt_blk,
      total_bytes               => tt_bytes,
      unused_blocks             => unu_blk,
      unused_bytes              => unu_bytes,
      last_used_extent_file_id  => last_ext_file_id,
      last_used_extent_block_id => last_ext_blk_id,
      last_used_block           => last_used_blk);
   dbms_output.put_line('object_name = freelist_t');
   dbms_output.put_line('----------------------------------');
   dbms_output.put_line('Total Number of blocks = ' || tt_blk);
   dbms_output.put_line('Total unused blocks = ' || unu_blk);
end;
/
As we have illustrated, the unused_space procedure shows the space that is not used
below the HWM in any table or index segment.
The HWM represents the border between the blocks that store rows (or previously
stored rows that have since been deleted) and the blocks that have never stored rows
(fresh empty data blocks acquired from the freelist).
The blocks identified as unused have never actually held data for the segment and
can therefore be set free for use when needed.
Package dbms_space_admin
The dbms_space_admin package supplies important functionality for locally managed
tablespaces. In this package, the following sub-programs are found:
■ assm_segment_verify (procedure)
■ assm_tablespace_verify (procedure)
■ assm_segment_synchwm (procedure)
■ segment_corrupt (procedure)
■ segment_drop_corrupt (procedure)
■ segment_dump (procedure)
■ segment_verify (procedure)
■ tablespace_fix_bitmaps (procedure)
■ tablespace_fix_segment_extblks (procedure)
■ tablespace_fix_segment_states (procedure)
■ tablespace_migrate_from_local (procedure)
■ tablespace_migrate_to_local (procedure)
■ tablespace_rebuild_bitmaps (procedure)
■ tablespace_rebuild_quotas (procedure)
■ tablespace_relocate_bitmaps (procedure)
■ tablespace_verify (procedure)
Below, a datafile corruption is simulated and the assm_tablespace_verify procedure is
executed to check the results of this corruption.
ORA-01115: IO error reading block from file 12 (block # 17)
ORA-01110: data file 12: '/oracle/app/oradata/ora11g/tbs_corrupt.dbf'
ORA-27072: File I/O error
Additional information: 4
Additional information: 12
Additional information: 16384
--Run the procedure assm_tablespace_verify and let it show the error about
--corrupted blocks
exec dbms_space_admin.assm_tablespace_verify(
tablespace_name => 'test_corrupt',
ts_option => 20,
segment_option => NULL);
begin dbms_space_admin.assm_tablespace_verify(
tablespace_name => 'test_corrupt',
ts_option => 20,
segment_option => NULL);
end;
/
ORA-01578: ORACLE data block corrupted (file # 12, block # 13)
ORA-01110: data file 12: '/oracle/app/oradata/ora11g/tbs_corrupt.dbf'
ORA-06512: at "sys.dbms_space_admin", line 362
ORA-06512: at line 1
As shown above, the assm_tablespace_verify procedure can verify the integrity of the
segments within an ASSM tablespace. After finding an error in any tablespace
segment, the DBA should implement the steps necessary to fix the problem as quickly
as possible in order to minimize the end user layer impact.
Procedure dbms_space_admin.tablespace_migrate_to_local
The tablespace_migrate_to_local procedure allows tablespaces to be migrated from
dictionary managed to locally managed tablespaces. In this case, migrate all non-
SYSTEM tablespaces to locally managed before migrating the SYSTEM tablespace if
the intention is to migrate in READ WRITE mode. Also note that temporary
tablespaces cannot be migrated.
We will touch briefly on what reasons there might be for migrating dictionary
managed tablespaces to locally managed tablespaces. Locally managed tablespaces
manage their own extents internally, keeping one bitmap in each datafile to create a
mapping of the free blocks and the used blocks in a certain datafile.
Each bit in one bitmap corresponds to a block or set of blocks. When the extents are
allocated or set free for use, Oracle updates the values of the bitmap to show the new
status of the blocks. These updates do not generate rollback information, nor do they
modify the data dictionary.
Therefore, locally managed tablespaces do not require the data dictionary and do not
generate rollback, nor need coalescing. Still, they bring an advantage in reducing
fragmentation and avoiding problems commonly faced by dictionary managed
tablespaces such as recursive updates.
When a table changes, a dictionary table changes also, thereby probably requiring
another dictionary table change to reflect it and so on.
Freelists and dictionary objects are no longer necessary, so contention for them is
eliminated with locally managed tablespaces, as different blocks of the bitmap can be
concurrently modified at any time. This greatly simplifies administration and
enhances performance in most cases.
Let's begin by comparing the two methods of space management, dictionary
managed (DMT) and locally managed (LMT) tablespaces:
In a dictionary managed tablespace (DMT), the data dictionary stores the free space
details. While the free block list is managed in the segment header of each table
inside the tablespace, free space is recorded in the sys.fet$ table and used space in
the sys.uet$ table.
But with busy, high DML-rate tablespaces, the data dictionary becomes an I/O
bottleneck, and moving space management out of the data dictionary and into the
tablespace has two benefits. First, the tablespace becomes independent and can be
transported (transportable tablespaces). Second, locally managed tablespaces remove
the I/O contention from the SYSTEM tablespace.
Segment space management can be set to manual or auto.
Here is how to migrate the SYSTEM tablespace from dictionary managed to locally
managed.
'ALTER TABLESPACE '||TABLESPACE_NAME
System altered
SYSTEM     LOCAL
System altered
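As a sketch of the calls involved (tablespace names here are illustrative; all other tablespaces should already be locally managed or read only before SYSTEM is migrated):

```sql
-- Migrate a non-SYSTEM dictionary managed tablespace first
exec dbms_space_admin.tablespace_migrate_to_local('USERS');

-- Then migrate SYSTEM itself
exec dbms_space_admin.tablespace_migrate_to_local('SYSTEM');

-- Verify the result
select tablespace_name, extent_management
from   dba_tablespaces
where  tablespace_name = 'SYSTEM';
```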
To use the assm_segment_synchwm procedure, it is necessary to apply the patch
described in BUG 6493013 in MOSC. MOSC note 4067168.8 describes a related bug, a
performance problem in table scan operations when using PQ (parallel query) when
there are blocks between the HWM and LWM of the segment.
Below is an example using this procedure. If the value returned is 1, the segment
requires HWM synchronization; a value of 0 means the segment's HWM is already
synchronized.
conn pkg/pkg#123
--Procedure to detect and resolve HWM out-of-sync of an ASSM segment
set serveroutput on
declare
result number;
begin
result := dbms_space_admin.assm_segment_synchwm(
segment_owner => 'pkg',
segment_name => 'tab_dbms_space_admin',
segment_type => 'table',
check_only => 0);
--where 1 = check only and 0 = perform synchronization
dbms_output.put_line('synchwm check result: ' || result);
end;
/
The procedure was used to detect and resolve HWM out-of-sync of an ASSM
segment.
This problem is caused by DML and DDL operations. For example, the creation of
an index using parallelism or frequent operations like deletes and inserts in a given
table can cause an internal inconsistency between the dba_segments and dba_extents
views. To fix it, the tablespace_fix_segment_extblks procedure is used.
The example below demonstrates how to check for this problem in a database and
how to remedy it.
select /*+ rule */
   s.tablespace_name,
   s.segment_name segment,
   s.partition_name,
   s.owner owner,
   s.segment_type,
   s.blocks sblocks,
   e.blocks eblocks,
   s.extents sextents,
   e.extents eextents,
   s.bytes sbytes,
   e.bytes ebytes
from
   dba_segments s,
   (select
       count(*) extents,
       sum(blocks) blocks,
       sum(bytes) bytes,
       segment_name,
       partition_name,
       segment_type,
       owner
    from
       dba_extents
    group by
       segment_name,
       partition_name,
       segment_type,
       owner) e
where
   s.segment_name = e.segment_name
and
   s.owner = e.owner
and
   (s.partition_name = e.partition_name or s.partition_name is null)
and
   s.segment_type = e.segment_type
and
   s.owner not like 'SYS%'
and
   s.segment_name = 'tab_dbms_space_admin'
and
   ((s.blocks <> e.blocks) or (s.extents <> e.extents) or
    (s.bytes <> e.bytes))
/
exec dbms_space_admin.tablespace_fix_segment_extblks('users');
select /*+ rule */
   s.tablespace_name,
   s.segment_name segment,
   s.partition_name,
   s.owner owner,
   s.segment_type,
   s.blocks sblocks,
   e.blocks eblocks,
   s.extents sextents,
   e.extents eextents,
   s.bytes sbytes,
   e.bytes ebytes
from
   dba_segments s,
   (select
       count(*) extents,
       sum(blocks) blocks,
       sum(bytes) bytes,
       segment_name,
       partition_name,
       segment_type,
       owner
    from
       dba_extents
    group by
       segment_name,
       partition_name,
       segment_type,
       owner) e
where
   s.segment_name = e.segment_name
and
   s.owner = e.owner
and
   (s.partition_name = e.partition_name or s.partition_name is null)
and
   s.segment_type = e.segment_type
and
   s.owner not like 'SYS%'
and
   s.segment_name = 'tab_dbms_space_admin'
and
   ((s.blocks <> e.blocks) or (s.extents <> e.extents) or
    (s.bytes <> e.bytes))
/
As displayed, the data between the views has been synchronized. To run the
tablespace_fix_segment_extblks procedure on the SYSTEM tablespace, first execute
the following command:
alter session set events '10912 trace name context forever, level 1';
It is important to remember that this procedure does not work for objects migrated
from earlier database versions (9i, 8i or 7).
exec dbms_space_admin.assm_tablespace_verify(
tablespace_name => 'test_corrupt',
ts_option => 20,
segment_option => NULL);

begin dbms_space_admin.assm_tablespace_verify(tablespace_name =>
'test_corrupt', ts_option => 20, segment_option => NULL); end;

ORA-01578: ORACLE data block corrupted (file # 12, block # 13)
ORA-01110: data file 12: '/oracle/app/oradata/ora11g/tbs_corrupt.dbf'
ORA-06512: at "sys.dbms_space_admin", line 362
ORA-06512: at line 1
Another situation when these procedures could be used would be for fixing media
corruption of bitmap blocks in which three procedures would be executed in this
order.
1. If the tablespace contains corrupted blocks, the execution of procedure
tablespace_verify shows the error and writes the following message to the alert log of
the database:
[ora11g@dbms trace]$ tail -f alert_ora11g.log
Hex dump of (file 12, block 13) in trace file
/oracle/app/diag/rdbms/ora11g/ora11g/trace/ora11g_ora_17795.trc
Corrupt block relative dba: 0x0300000d (file 12, block 13)
Bad header found during buffer read
Data in bad block:
type: 58 format: 3 rdba: 0xb746c3d2
last change scn: 0xca9b.ca24f02b seq: 0xc flg: 0x77
spare1: 0xf9 spare2: 0xde spare3: 0x9c7f
consistency value in tail: 0xf0b7c97b
check value in block header: 0x7f3d
computed block checksum: 0x4f39
Reread of rdba: 0x0300000d (file 12, block 13) found same corrupted data
Fri May 15 00:43:29 2009
Corrupt Block Found
TSN = 14, tsname = test_corrupt
RFN = 12, BLK = 13, RDBA = 50331661
OBJN = -1, OBJD = 71776, OBJECT = test_corrupt, subobject =
segment owner = , segment type = Temporary Segment
Errors in file
/oracle/app/diag/rdbms/ora11g/ora11g/trace/ora11g_ora_17795.trc
(incident=17057):
ORA-01578: ORACLE data block corrupted (file # 12, block # 13)
ORA-01110: data file 12: '/oracle/app/oradata/ora11g/tbs_corrupt.dbf'
Incident details in:
/oracle/app/diag/rdbms/ora11g/ora11g/incident/incdir_17057/ora11g_ora_17795_i17057.trc
Errors in file
/oracle/app/diag/rdbms/ora11g/ora11g/trace/ora11g_ora_17795.trc
(incident=17058):
ORA-01578: ORACLE data block corrupted (file # , block # )
ORA-01578: ORACLE data block corrupted (file # 12, block # 13)
ORA-01110: data file 12: '/oracle/app/oradata/ora11g/tbs_corrupt.dbf'
Fri May 15 00:43:34 2009
Trace dumping is performing id=[cdmp_20090515004334]
Incident details in:
/oracle/app/diag/rdbms/ora11g/ora11g/incident/incdir_17058/ora11g_ora_17795_i17058.trc
Trace dumping is performing id=[cdmp_20090515004335]
Fri May 15 00:43:47 2009
Sweep Incident[17057]: completed
Fri May 15 00:43:48 2009
2. Then, to fix the corrupted bitmap blocks, execute the three procedures described
above.
Package utl_compress
The utl_compress package supplies a series of utilities for the compression of datatypes
RAW, BLOB or BFILE. It is created through the script utlcomp.sql.
The utl_compress package was written in C using well-known compression algorithms
compatible with the Lempel-Ziv utilities; for example, zip for Windows and compress
for UNIX. Compression and decompression are always performed on the server side,
not on the client side; the data is sent uncompressed to the server, where it is
compressed.
For tables that have historical data and LOB columns, the compression is very useful
because performance on queries that use these tables is improved and storage space is
also freed. A trigger could be created using package utl_compress that compresses data
while inserts are being executed on a historical table and thus, the process can be
automated.
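As a sketch of that idea, assuming a hypothetical history table hist_documents with a BLOB column doc_blob (neither of which appears in the book's examples):

```sql
create or replace trigger trg_compress_history
before insert on hist_documents  -- hypothetical table
for each row
begin
   -- Compress the incoming LOB before it is stored
   if :new.doc_blob is not null then
      :new.doc_blob := utl_compress.lz_compress(
         src     => :new.doc_blob,
         quality => 6);  -- default compression level
   end if;
end;
/
```

Any reader of the column must then call utl_compress.lz_uncompress before using the data.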
The example below demonstrates compressions that release a lot of free space on
storage. This will make the CEO of the company very satisfied as he/she will perhaps
spend less money on disks and more money buying CPUs.
Table created
--Create a Directory to keep the binary files (pictures, .doc and others)
create or replace directory
pictures
as
'/oracle/pictures';
Directory created
--Now let's insert one row with BLOB uncompressed and compressed and then
--see the size differences
set serveroutput on
declare
   compress_quality integer := 1; --An integer in the range 1 to 9; 1=fast
                                  --compression, 9=best compression, default is 6
   file_size         integer;
   binary_file       blob;
   source_file       bfile := bfilename('PICTURES', 'tab_utl_compress.txt');
   amount            integer;
   a_compressed_blob blob;
begin
   insert into
      tab_utl_compress
   values
      (1, empty_blob());

   select
      col_blob
   into
      binary_file
   from
      tab_utl_compress
   where
      col_id = 1
   for update;

   --Load the source file into the BLOB column
   dbms_lob.fileopen(source_file, dbms_lob.file_readonly);
   amount := dbms_lob.getlength(source_file);
   dbms_lob.loadfromfile(binary_file, source_file, amount);
   dbms_lob.fileclose(source_file);
   dbms_output.put_line('Size of uncompressed file "bytes": ' ||
      dbms_lob.getlength(binary_file));

   --Compress the BLOB
   a_compressed_blob := utl_compress.lz_compress(
      src     => binary_file,
      quality => compress_quality);

   file_size := dbms_lob.getlength(
      lob_loc => a_compressed_blob);
   dbms_output.put_line('Size of compressed file "bytes": ' || file_size);
exception
   when others then
      dbms_output.put_line('A problem has been found');
      dbms_output.put_line(sqlcode || sqlerrm);
end;
/
Size of uncompressed file "bytes": 16777000
Size of compressed file "bytes": 7567982
commit;
Commit complete
Summary
This chapter demonstrated the most useful packages for DBAs to use for the
maintenance and organization of tablespaces, tables and indexes.
In the next chapter, we will present packages that can be used in the security area of
the Oracle database.
Database security management is one of the biggest challenges and highest priorities
companies are facing today. The ever increasing speed of processes, coupled with the
worldwide scope of the Internet, make external attacks a continuous threat that must
be monitored and defended against.
Oracle, throughout its history, has always been at the forefront in improving
database protection tools against both external and internal attacks. In this
chapter, we will cover the main packages related to Oracle database security.
Package dbms_crypto
Oracle dbms_crypto allows a user to encrypt and decrypt Oracle data. Oracle dbms_crypto
supports the National Institute of Standards and Technology (NIST) approved
Advanced Encryption Standard (AES) encryption algorithm. Oracle dbms_crypto also
supports Data Encryption Standard (DES), Triple DES (3DES, 2-key and 3-key),
MD5, MD4, and SHA-1 cryptographic hashes, and MD5 and SHA-1 Message
Authentication Code (MAC). This package can encrypt most common Oracle
datatypes including RAW and large objects (LOBs) such as BLOBs and CLOBs.
It is important to note that package dbms_crypto cannot be used directly with type
varchar2. As a workaround, it is necessary to convert the varchar2 into the uniform
database character set AL32UTF8 and then convert it to a raw datatype. Only then can
the user encrypt using the package dbms_crypto. Below are practical examples using
procedures from the package dbms_crypto.
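A minimal sketch of this workaround, assuming AES-128 in CBC mode with PKCS#5 padding and a hypothetical hard-coded 128-bit key (in practice the key would be stored securely, never in the code):

```sql
set serveroutput on
declare
   l_mode pls_integer := dbms_crypto.encrypt_aes128 +
                         dbms_crypto.chain_cbc +
                         dbms_crypto.pad_pkcs5;
   l_key  raw(16) := hextoraw('000102030405060708090A0B0C0D0E0F');
   l_enc  raw(2000);
begin
   -- varchar2 -> AL32UTF8 raw -> encrypt
   l_enc := dbms_crypto.encrypt(
      src => utl_i18n.string_to_raw('my secret', 'AL32UTF8'),
      typ => l_mode,
      key => l_key);
   dbms_output.put_line(rawtohex(l_enc));
end;
/
```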
Checking for Modifications on Stored PL/SQL
This process stores the checksum results which auditors can access at any time when
searching for alterations made to objects. Following are the procedures used in the
example and a brief description:
■ hash: This function takes a variable-length input string and converts it to a
fixed-length value. Because of its distinct value, it can be used to identify
whether data has been changed.
■ hashmd5: Generates a 128-bit hash, more powerful than MD4.
The privilege below should be granted to the user that will use the dbms_crypto
package:

grant execute on dbms_crypto to pkg;
conn pkg@ora11g
show user
The function below is created to check whether or not an object was changed.
Function created
show errors
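The book's original function source is not reproduced in this extract; the sketch below shows one plausible implementation, hashing the object's source from dba_source with dbms_crypto.hash (the names match the surrounding example, but the body is an assumption):

```sql
create or replace function vrfy_changes(
   p_owner varchar2,
   p_name  varchar2,
   p_type  varchar2)
return varchar2
as
   l_src clob;
begin
   -- Concatenate all source lines of the object in order
   for r in (select text
               from dba_source
              where owner = upper(p_owner)
                and name  = upper(p_name)
                and type  = upper(p_type)
              order by line)
   loop
      l_src := l_src || r.text;
   end loop;
   -- Return the MD5 hash of the source as a hex string
   return rawtohex(dbms_crypto.hash(
      src => l_src,
      typ => dbms_crypto.hash_md5));
end;
/
```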
Here, the hash value is gathered before the procedure is changed. This value is
inserted into the audit_hash_source table.
select
vrfy_changes('pkg','proc_test','procedure') hash
from
dual;
HASH
6A9AFF108D24F016C3F5C138763E0F59
--Insert hash values in the audit table
insert into
audit_hash_source
select
owner,
object_name,
object_type,
vrfy_changes(owner, object_name, object_type),
sysdate
from
dba_objects
where
object_type = 'PROCEDURE'
and
owner = 'PKG'
/
4 rows inserted
commit
/
Commit complete
A change is made in the proc_test procedure. A different hash value is apparent after
this step.
Finally, the query below is used in conjunction with the function vrfy_changes to retrieve
the changes that happened in the proc_test procedure and show the new hash value.
select
owner,
name,
type,
calculation_date,hash
from
audit_hash_source
where
type = 'PROCEDURE'
and
vrfy_changes(owner, name, type) <> hash;
A table is created to record the data that will be encrypted. Then some example rows
are inserted on it.
insert into
tab_dbms_crypto
(account_name,
account_passwd)
values
('user2', '$456%')
/
1 row inserted
insert into
tab_dbms_crypto
(account_name,
account_passwd)
values
('user3', '(876%')
/
1 row inserted
commit
/
Commit complete
Now a package that will execute the process of encrypting and decrypting data is
created using these functions:
■ encrypt_aes128: Advanced Encryption Standard. Block cipher. Uses 128-bit key.
■ chain_cbc: Cipher block chaining
■ pad_pkcs5: Password-based cryptography standard padding
■ randombytes: This function generates random key values
create or replace package pkg_encrypt_decrypt
as
function enc_account_passwd(
p_account_passwd in varchar2,
p_account_name in varchar2,
p_unlock_code in varchar2 default null)
return varchar2;
function dec_account_passwd(
p_account_passwd in varchar2,
p_account_name in varchar2,
p_unlock_code in varchar2 default null)
return varchar2;
end;
/

create or replace package body pkg_encrypt_decrypt
as
function enc_account_passwd(
p_account_passwd in varchar2,
p_account_name in varchar2,
p_unlock_code in varchar2 default NULL)
return varchar2 as
swordfish raw(256);
swordfish_encrypted raw(256);
begin
if (p_unlock_code is null or p_unlock_code != free_password)
then
return null;
end if;
The randombytes function below returns a raw value containing an encrypted secure
pseudo-random sequence of bytes which can be used to generate random material for
encryption keys.
--We generate the swordfish; this "random" number will be needed to
--decrypt the password
swordfish := dbms_crypto.randombytes(16);
--This function encrypts raw data using a stream or block cipher with a
--user-supplied key
swordfish_encrypted := dbms_crypto.encrypt(
swordfish,
enc_mode,
utl_i18n.string_to_raw(
main_password,
'al32utf8'));
At this point, the password stored in column account_passwd is returned encrypted
with the random key.
return
utl_encode.base64_encode(
dbms_crypto.encrypt(
utl_i18n.string_to_raw(
p_account_passwd,
'al32utf8'),
enc_mode,
swordfish));
end;
function dec_account_passwd(
p_account_passwd in varchar2,
p_account_name in varchar2,
p_unlock_code in varchar2 default NULL)
return varchar2 as
swordfish raw(256);
begin
if (p_unlock_code is null or p_unlock_code != free_password)
then
return null;
end if;
select
dbms_crypto.decrypt(
value2,
enc_mode,
utl_i18n.string_to_raw(
main_password,
'al32utf8'))
into
swordfish
from
tab_dbms_crypto_secrets
where
value1 = p_account_name;
return utl_i18n.raw_to_char(
dbms_crypto.decrypt(
utl_encode.base64_decode(
p_account_passwd),
enc_mode,
swordfish),
'al32utf8');
end;
end;
/
Package body created
select
*
from
tab_dbms_crypto
/

ACCOUNT_NAME ACCOUNT_PASSWD
user1        #123$
user2        $456%
user3        (876%
Here, the data is encrypted using the package and function created in the first steps.
update
tab_dbms_crypto
set
account_passwd = pkg_encrypt_decrypt.enc_account_passwd(
account_passwd,
account_name,
'OpenSesame')
/
3 rows updated
commit
/
Commit complete
select
*
from
tab_dbms_crypto
/

ACCOUNT_NAME ACCOUNT_PASSWD
Just as with encrypt, decrypt can be done using the package and function created.
update
tab_dbms_crypto
set
account_passwd = pkg_encrypt_decrypt.dec_account_passwd(
account_passwd,
account_name,
'OpenSesame')
/
3 rows updated
commit
/
Commit complete
select
*
from
tab_dbms_crypto
/

ACCOUNT_NAME ACCOUNT_PASSWD
user1        #123$
user2        $456%
user3        (876%
This example uses the function randombytes. This function returns a raw value
containing a cryptographically secure pseudo-random sequence of bytes which can be
used to generate random material for encryption keys.
The decrypt and encrypt functions are also employed to encrypt and decrypt the raw data.
Note that the following views can be queried in order to find information related to
cryptographic data in the database:
■ all_encrypted_columns
■ dba_encrypted_columns
■ user_encrypted_columns
■ v$encrypted_tablespaces
■ v$encryption_wallet
■ v$rman_encryption_algorithms
Starting with Oracle 10g, a new feature named Transparent Data Encryption provides
similar functionality with more flexibility. More information about this can be
found at http://www.dba-oracle.com/t_transparent_data_encryption_tde.htm.
Package dbms_change_notification
Imagine that the owner of an online bookstore chain wants to know when a new
client is registered into the system. Such a request can be met by several
mechanisms:
2. A CQN: A Change Query Notification could create a registration for all DML
operations made in a specific table. In this case, if an insertion is made to a
table of clients, a notification of this operation will be sent.
Here, notifications are published for DML or DDL operations.
Applications that are running in the middle tier require rapid access to cached copies
of database information while, at the same time, keeping the cache as current as
possible in relation to the database.
Sadly, cached data becomes out of date or stale when a transaction modifies the data
and commits, thereby putting the application at risk of accessing incorrect results. If
the application uses Database Change Notification, then Oracle Database can publish
a notification when a change occurs to registered objects with details on what
changed. In response to the notification, the application can refresh cached data by
fetching it from the back-end database.
First, give the required privilege to the user that will execute the procedure.
show user
Three tables are created: one to record notifications about changes that will be made
(tab_notifications), one to record the actual changes (tab_changes) and the last optional
one to record rowid changes (tab_rowid_changes).
connect pkg/pkg;
REM Create the notification table
create table tab_notifications(
id number,
evt_type number);
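The handler procedure proc_notifications_handler, referenced when the registration is created below, is not reproduced in this extract. A sketch of what such a handler can look like follows; it assumes tab_notifications(id, evt_type) as created above and a tab_changes table with (table_name, opflags) columns:

```sql
create or replace procedure proc_notifications_handler(
   ntfnds in sys.chnf$_desc)
as
   pragma autonomous_transaction;
begin
   -- Record the event type of every notification received
   insert into tab_notifications
   values (ntfnds.registration_id, ntfnds.event_type);

   -- For object-change events, record each modified table
   if ntfnds.event_type = dbms_cq_notification.event_objchange then
      for i in 1 .. ntfnds.numtables loop
         insert into tab_changes
         values (ntfnds.table_desc_array(i).table_name,
                 ntfnds.table_desc_array(i).opflags);
      end loop;
   end if;
   commit;
end;
/
```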
Now the table pkg.clients is added, so any changes to what is being monitored
will be registered.
create table
clients
as
select
*
from
scott.emp;
declare
regds        sys.chnf$_reg_info;
id           number;
qosflags     number;
employee_num number;
begin
--We set the flags we need (we'll use reliable registration, ensuring the
--notification is sent before the change is committed; plus, we need to
--track rowid info)
qosflags := dbms_cq_notification.qos_reliable +
            dbms_cq_notification.qos_rowids;
--Now we'll create the registration.
--First step is to tell Oracle to define it with the notification handler
--we created before, and to use the flags we defined.
regds := sys.chnf$_reg_info('proc_notifications_handler', qosflags, 0, 0, 0);
--Last step is to associate it with the tables we need (by selecting from
--them after new_reg_start)
id := dbms_cq_notification.new_reg_start(regds);
select empno into employee_num from clients where empno = 7902;
dbms_cq_notification.reg_end;
end;
/
An update is executed on the clients table to generate a notification. After that,
tab_notifications is queried to check these changes.
update
clients
set
sal = sal*1.05
where
empno = 7902;
commit;
--We can check all changes made that are being monitored on the control
--tables created below
select
*
from
tab_notifications;
To check all notifications configured in the database, use the query shown below.
select
*
from
dba_change_notification_regs;
Package dbms_distributed_trust_admin
The dbms_distributed_trust_admin package is used to manage a list of trusted servers
that can access the local database via database links. In Oracle 11g and beyond,
current user database links operate only within a single enterprise domain between
trusted databases. The databases within the single enterprise domain must trust each
other in order to authenticate users.
You specify an enterprise domain as being trusted by using the Oracle Enterprise
Manager Enterprise Security Manager screen. If your current user database links are
enabled for a domain by using Enterprise Security Manager, they will work for all
databases within that domain by default.
If there are databases that should not be trusted within your domain, use the PL/SQL
package dbms_distributed_trust_admin to indicate all databases which participate in a
trusted enterprise domain, but cannot be trusted. For example, we may not want a
training database to be considered a trusted database, even if it is in the same
enterprise domain with production databases. You can use the trusted_servers view to
obtain a list of trusted servers in your domain.
The value returned by querying the trusted_servers view should look like the example
below, allowing all of the servers that belong to the domain to access the database
in which the query is executed.
TRUST NAME
Trusted All
If access by a specific server needs to be blocked, then the procedure deny_server is
used, as shown below.
Code 3.4 - dbms_distributed_trust_admin_deny_server.sql
Connected to Oracle 11g Enterprise Edition Release 11.1.0.6.0
Connected as pkg
User is "pkg"
exec dbms_distributed_trust_admin.deny_server('bwfsdbsp01.b2winc.com');
TRUST NAME
Trusted All
Untrusted bwfsdbsp01.b2winc.com
When no server is reliable, the procedure deny_all should be used as shown below.
exec dbms_distributed_trust_admin.deny_all;
select
*
from
trusted_servers;
TRUST NAME
Untrusted All
Note that the default value of the configuration is trusted all. This allows all of the
servers that are part of the same domain in an enterprise directory server to have free
access. In order to force this option, the procedure allow_all is used as shown below:
User is "pkg"

exec dbms_distributed_trust_admin.allow_all;

select
*
from
trusted_servers;
TRUST NAME
Trusted All
Package dbms_fga
Fine-Grained Auditing (FGA) is also called "row level security" and it was created in
version 9i in order to allow auditing of specific rows. With this new form of auditing,
you avoid the wasting of resources by auditing only the rows that are necessary.
Oracle 9i only permitted the use of FGA with select commands, but beginning with
Oracle 10g, insert, update and delete commands can also be audited.
It is important to note that FGA is only supported with the cost-based optimizer. If
the optimizer mode is not cost-based, or if the audited objects are not analyzed, there
will be problems with the audit. Here is an example of when to use the dbms_fga
package. Suppose a business wishes to find out which user is peeking into the system
to find information on the salary bonuses stored in a table of the database. This can
be done using the package dbms_fga, where a policy audits and saves the information
related to all users that have accessed certain records in a table.
In order to use this package, it is necessary to grant execute privileges on dbms_fga to
the user who will be configuring the auditing policies. It is important to keep in mind
that this “privileged” user will be able to remove policies of other users even though
he has not created them.
Beginning with Oracle 10g r2, it became possible to set the parameter audit_trail =
XML, allowing audit records to be written to XML files in the operating system. This
increases the security of access to the information because only those with permission
at the operating system level can view these files. If, on the other hand, the parameter
is set to DB, then users with the DBA role will be able to access the view containing
the audit records. The following sections describe the procedures used to
configure FGA using the package dbms_fga.
Procedure add_policy
This procedure is used to create an auditing policy using a predicate as a condition of
the audit. Note that the maximum number of FGA policies in a table or view is 256.
begin
dbms_fga.add_policy(
object_schema => 'pkg',
object_name => 'tab_customer',
policy_name => 'pkg_cust_policy',
audit_condition => NULL,
audit_column => 'card_no',
handler_schema => 'pkg',
handler_module => 'mod_alert',
enable => TRUE,
statement_types => 'insert,update',
audit_trail => dbms_fga.db,
audit_column_opts => NULL);
end;
Procedure drop_policy
This procedure is used to drop a policy.
begin
dbms_fga.drop_policy(
object_schema => 'pkg',
object_name => 'tab_customer',
policy_name => 'pkg_cust_policy');
end;
Procedure enable_policy
This procedure is used to enable a policy.
begin
dbms_fga.enable_policy(
object_schema => 'pkg',
object_name => 'tab_customer',
policy_name => 'pkg_cust_policy',
enable => TRUE);
end;
end;
Procedure disable_policy
This procedure is used to disable a policy.
begin
dbms_fga.disable_policy(
object_schema => 'pkg',
object_name => 'tab_customer',
policy_name => 'pkg_cust_policy');
end;
A table named tab_violations stores information about the user who executed the
audited command. The procedure that is executed by the event handler is proc_alert.
This inserts the records containing additional information in the tab_violations table
regarding the user that is being audited. The event handler could also call a procedure
to send an email alerting that a specific command was audited.
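The proc_alert source is not shown in this extract; the sketch below is an assumed implementation. FGA handlers receive the schema, table and policy names, and the session details can be read from sys_context, matching the tab_violations columns created below:

```sql
create or replace procedure proc_alert(
   p_schema varchar2,
   p_table  varchar2,
   p_policy varchar2)
as
begin
   -- Record who triggered the audited statement
   insert into tab_violations
   values (sys_context('userenv', 'session_user'),
           sys_context('userenv', 'host'),
           sys_context('userenv', 'ip_address'),
           sys_context('userenv', 'os_user'),
           systimestamp);
end;
/
```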
conn pkg/pkg123
show user
User is "pkg"
First, two tables are created. The first, tab_customer, is the table that will be audited
and the second, tab_violations, is the table that will record information about the user
that triggered the audit record.
create table
tab_customer
tablespace
pkg_data
as
select
*
from
sh.customers;
create table
tab_violations(
username varchar(20),
userhost varchar(20),
ip_addr varchar(20),
os_user varchar(20),
time timestamp)
tablespace
pkg_data
Drop the policy if it already exists; then create the audit policy as follows:
begin
   dbms_fga.drop_policy(
      object_schema => 'pkg',
      object_name   => 'tab_customer',
      policy_name   => 'pkg_cust_policy');
end;
/
--Create the policy that will audit update and insert statements done on
--table tab_customer
begin
   dbms_fga.add_policy(
      object_schema     => 'pkg',
      object_name       => 'tab_customer',
      policy_name       => 'pkg_cust_policy',
      audit_condition   => '1=1',
      audit_column      => 'cust_credit_limit',
      handler_schema    => 'pkg',
      handler_module    => 'proc_alert',
      enable            => TRUE,
      statement_types   => 'insert,update',
      audit_trail       => dbms_fga.db + dbms_fga.extended,
      audit_column_opts => dbms_fga.any_columns);
end;
/
Finally, some changes are generated and audited. After that, query tables that have
information about audited records.
1 row updated
commit;
select
db_user,
object_schema "Schema",
object_name,
policy_name,
to_char(timestamp,'YY-MM-DD HH24:MI:SS') "Time",
sql_text
from
dba_fga_audit_trail;
select
   *
from
   tab_violations;
Package dbms_obfuscation_toolkit
Another package used to encrypt and decrypt data is dbms_obfuscation_toolkit. This
package allows the user to encrypt/decrypt data using algorithms Data Encryption
Standard (DES) and Triple DES. The installation of this package is made using the
scripts dbmsobtk.sql and prvtobtk.plb which must be executed through the user sys.
Following the execution of these scripts, grant the execute privilege in the package to
public. Although this package is being replaced by the package dbms_crypto, it is still
being used for backward compatibility. The following is an example of how to use
the package dbms_obfuscation_toolkit to encrypt a row's data.
First, let's create a table that stores credit card information, i.e., credit card
numbers, which must be encrypted because of their confidential nature. The
encryption is done with two functions called func_card_num_encrypt and
func_card_num_encrypt_des3, respectively. The first performs a regular encryption,
whereas the second encrypts using the Triple DES algorithm.
User is "pkg"
create table
   tab_card_number (
      card_1_number char(40),
      card_2_number char(40),
      first_name    varchar2(20),
      last_name     varchar2(20));
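The encryption functions themselves are not reproduced in full here. A minimal
sketch of the DES variant is given below, assuming a fixed 8-byte key and padding
the input to a multiple of 8 bytes as DES requires; the key value is illustrative only:

```sql
create or replace function func_card_num_encrypt (p_card_no varchar2)
return varchar2
as
   v_key varchar2(8) := 'key12345';  -- illustrative 8-byte DES key
begin
   -- DES input must be a multiple of 8 bytes, so pad the card number
   return dbms_obfuscation_toolkit.desencrypt(
             input_string => rpad(p_card_no, 16),
             key_string   => v_key);
end;
/
```

The Triple DES variant, func_card_num_encrypt_des3, would call
dbms_obfuscation_toolkit.des3encrypt in the same way, using a longer key.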
Next, two triggers are created which make use of the functions created above. As
soon as a user executes an insert or update command against the table
tab_card_number affecting one of the credit card number columns, this information
is encrypted.
For the first column, card_1_number, the algorithm used for encryption is DES and
for the second column, card_2_number, the algorithm is Triple DES.
select
*
from
tab_card_number;
insert into
   tab_card_number
values (
   '1111999922228888',
   '9006784523339988',
   'Paulo',
   'Portugal');
select
*
from
tab_card_number;
As we see, the data is encrypted; thus, a query to the table containing the credit card
numbers cannot be read without the code used for the encryption.
Package dbms_rls
A Virtual Private Database (VPD) security model uses the Oracle dbms_rls package
(RLS stands for row-level security) to implement the security policies and application
contexts. This requires a policy that is defined to control access to tables and rows.
Virtual private databases have several other names within the Oracle documentation,
including RLS and fine-grained access control (FGAC).
Regardless of the name, VPD security provides a whole new way to control access to
Oracle data. Most interesting is the dynamic nature of a VPD. At runtime, Oracle
gathers application context information at user logon time and then calls the
policy function, which returns a predicate. A predicate is a WHERE clause that
qualifies a particular set of rows within the table.
Oracle dynamically rewrites the query by appending the predicate to users' SQL
statements. Whenever a query is run against the target tables, Oracle invokes the
policy and produces a transient view with a WHERE clause predicate pasted onto the
end of the query, like so:
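For example, a query against a protected table (the names follow the publication
example developed below) would effectively be rewritten as:

```sql
-- What the user issues:
select * from book;

-- What Oracle effectively executes after the policy appends its predicate:
select * from book
where upper(company) = sys_context('publishing_application', 'company');
```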
VPDs are involved in the creation of a security policy and when users access a table
or view that has a security policy. The security policy modifies the user's SQL, adding
a where clause to restrict access to specific rows within the target tables. Take a close
look at how this works.
For example, assume there is a publication table and we want to restrict access based
on the type of end user. Managers are able to view all books for their publishing
company, while authors may only view their own books. So assume that user JSMITH
is a manager and user MAULT is an author. At login time, the Oracle database logon
trigger would generate the appropriate values and execute the statements shown
below for each user:
dbms_session.set_context('publishing_application', 'role_name', 'manager');
Once executed, view these values with the Oracle session_context view. This data will be
used by the VPD at runtime to generate the WHERE clause. Note that each user has
his or her own specific session_context values, shown here:
connect jsmith/manpass;
select
namespace, attribute, value
from
session_context;
connect mault/authpass;
select
namespace, attribute, value
from
session_context;
Now see how this application context information is used by the VPD security policy.
In Listing C, create a security policy function called book_access_policy that builds two
types of WHERE clauses depending on the information in the session_context for each
end user. Note that Oracle uses the sys_context function to gather the values.
create or replace function book_access_policy (
   object_schema in varchar2,
   object_name   in varchar2)
return varchar2
is
   d_predicate varchar2(2000);
begin
   if sys_context('publishing_application','role_name') = 'manager' then
      -- Managers may view every book for their publishing company
      d_predicate :=
         'upper(company)=sys_context(''publishing_application'',''company'')';
   else
      -- If the user_type session variable is set to anything else,
      -- authors may view only their own books
      d_predicate :=
         'upper(author_name)=sys_context(''userenv'',''session_user'')';
   end if;
   return d_predicate;
end;
/
dbms_rls.add_policy (
   'pubs',
Look at the code in this listing carefully. If the user was defined as a manager, their
WHERE clause (d_predicate) would be:
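Based on the two branches of the policy function, the generated predicates are:

```sql
-- Manager: may view all books for their publishing company
upper(company)=sys_context('publishing_application','company')

-- Author: may view only their own books
upper(author_name)=sys_context('userenv','session_user')
```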
VPDs in Action
It is now time to show the VPD in action. In Listing D, there are very different results
from an identical SQL query depending on the application context of the specific end
user.
connect jsmith/manpass;
Book Author
Title name Publisher
Book Author
Title name Publisher
It should be obvious that VPD is a totally different way of managing Oracle access
than grant-based security mechanisms. There are many benefits to VPDs:
■ Dynamic security: No need to maintain complex roles and grants
User is "pkg"
Now the function that prevents users other than pkg, sys and system from accessing
other users' source code is created below.
create or replace function prevent_access_all_source(
   object_schema in varchar2,
   object_name   varchar2)
return varchar2 is
begin
   if sys_context('userenv','session_user') not in
      ('pkg','sys','system') -- if it's not one of those users
   then
      -- Show only their own procedures, functions and packages.
      return 'not (sys_context(''userenv'',''session_user'') <>
owner and type in (''package body'',''type
body'',''procedure'',''function''))';
   else
      return null;
   end if;
end;
/
show errors
--Drop policy if it already exists
begin
sys.dbms_rls.drop_policy(
   object_schema => 'public',
   object_name   => 'all_source',
   policy_name   => 'policy_prevent_access');
end;
/
show errors
--Create policy
begin
sys.dbms_rls.add_policy(
   object_schema   => 'public',
   object_name     => 'all_source',
   policy_name     => 'policy_prevent_access',
   function_schema => 'pkg',
   policy_function => 'prevent_access_all_source',
   statement_types => 'select',
   enable          => TRUE);
end;
/
show errors
Obj owner Obj Name Policy Name Function Owner Funct Name
After the policy is created, the user no_access tries to get the source code of the package
body test_access but nothing is returned by the query. This is because the policy has
been implemented and is protecting the data. The user can see the package source code
but not the package body as shown below:
show user
TEXT
package test_access AS
function hire (last_name varchar2, job_id varchar2,
manager_id number, salary number,
commission_pct number, department_id number)
return number;
end test_access;
6 rows selected
select
text
from
all_source
where
name='test_access'
and
owner='pkg'
and
type = 'package body';
TEXT
If we try, for example, to get the explain plan for this query, the error below occurs:
A good view used to display all fine-grained security policies and predicates associated
with the cursors in the library cache is v$vpd_policy.
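For example, a query along the following lines lists the policies and the predicates
applied to cached cursors (the columns shown are from the v$vpd_policy view):

```sql
select
   object_owner,
   object_name,
   policy,
   predicate
from
   v$vpd_policy;
```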
User is "pkg"
--Creating function
create or replace function hide_tab_exp
   (obj_schema varchar2, obj_name varchar2)
return varchar2 is
   qualifier varchar2(500);
begin
   if sys_context ('userenv', 'session_user') = 'pkg' then
      qualifier := '1=2';
   else
      qualifier := '';
   end if;
   return qualifier;
end hide_tab_exp;
/
--Creating table
create table
tab_dbms_rls
tablespace
pkg_data
as
select
*
from
dba_objects;
--Creating policy
begin
sys.dbms_rls.add_policy (
   object_schema   => 'pkg',
   object_name     => 'tab_dbms_rls',
   policy_name     => 'hide_exp_tab',
   function_schema => 'pkg',
   policy_function => 'hide_tab_exp');
end;
/
If there is an attempt to export the table tab_dbms_rls, the following error will be
displayed:
VPDs have now been examined in detail in regards to the dbms_rls package.
User is "pkg"
return varchar2
as
begin
   return
      'upper(
          substr(
             sys_context(
                ''userenv'',
                ''module'')
          ,7
select
   *
from
   tab_dbms_rls;
This shows that when the user tries to query the table, an error is shown with the
name of the function that is used in the policy created.
2. To bypass FGA and VPD policies, use the exempt access policy privilege. Be careful
using this privilege and do not grant it using the with admin option.
3. Use the function sys_context('userenv', 'policy_invoker') rather than sys_context('userenv',
'session_user') because the latter returns the user's logon, not the user that requested
the RLS policy. This can cause security problems as shown in the MOSC
452322.1 note, "How to Implement RLS to Avoid Any Potential Issues".
Package dbms_wm
The dbms_wm package provides an interface to the Oracle Database Workspace
Manager. Even though dbms_wm is not specifically designed for security of Oracle's
database, it can be used for auditing as seen in the following example.
Code 3.12 - dbms_wm.sql
Connected to Oracle 11g Enterprise Edition Release 11.1.0.6.0
Connected as pkg
grant
connect,
resource
to
pkg;
grant
create table
to
pkg;
exec dbms_wm.grantsystempriv(
   priv_types => 'access_any_workspace,
merge_any_workspace, create_any_workspace, remove_any_workspace,
rollback_any_workspace',
   grantee      => 'pkg',
   grant_option => 'YES');
conn pkg@ora11g
drop table
tab_dbms_wm_sal
purge;
drop sequence
seq_user_id;
create sequence
seq_user_id
minvalue 1
maxvalue 999999999999999999999999999
start with 1
increment by 1
cache 20
cycle;
alter table
tab_dbms_wm_sal
add constraint
pk_user_id
primary key (user_id);
Now we enable versioning on this table. The option view_wo_overwrite stores the
complete history of information in the view tab_dbms_wm_sal_hist. Now insert some
data in this table.
exec dbms_wm.enableversioning (
table_name => 'tab_dbms_wm_sal',
hist =>'view_wo_overwrite');
insert into
   tab_dbms_wm_sal
values
   (seq_user_id.nextval,'Paul',1000);
insert into
   tab_dbms_wm_sal
values
   (seq_user_id.nextval,'Robert',3000);
insert into
   tab_dbms_wm_sal
values
   (seq_user_id.nextval,'Michael',2500);
Now check the table values on both the original and history table.
1 Paul 1000,00
2 Robert 3000,00
3 Michael 2500,00
exec dbms_wm.removeworkspace(
   workspace => 'Change_Sal_Effect_2');
exec dbms_wm.createworkspace(
   workspace   => 'Change_Sal_Effect_1',
   description => 'Salaries changes for first plan. Check company
impact.');
exec dbms_wm.createworkspace(
   workspace   => 'Change_Sal_Effect_2',
   description => 'Salaries changes for second plan. Check company
impact.');
The manager can work on a first scenario, adjusting salaries and then testing the
impact on his company.
execute dbms_wm.gotoworkspace(
   workspace => 'Change_Sal_Effect_1');
update
tab_dbms_wm_sal
set
sal = sal*1.3
where
user_id=l;
update
tab_dbms_wm_sal
set
sal = sal*1.2
where
user_id=2;
update
tab_dbms_wm_sal
set
sal = sal*1.5
where
user_id=3;
commit;
1 Paul 1300,00
2 Robert 3600,00
3 Michael 3750,00
The steps below will freeze the workspace Change_Sal_Effect_l so no changes can
be made to this data. To do that, it is necessary to exit
the workspace Change_Sal_Effect_l because it cannot be frozen while users are in it.
execute dbms_wm.gotoworkspace(
   workspace => 'live'); --This is the live database workspace. When users
                         --connect to a database, they are placed in this workspace
execute dbms_wm.freezeworkspace(
   workspace => 'Change_Sal_Effect_1');
Here, the manager works on the second scenario of salary adjustments to decide
which of the changes is the best.
execute dbms_wm.gotoworkspace(
   workspace => 'Change_Sal_Effect_2');
update
tab_dbms_wm_sal
set
sal = sal*1.1
where
user_id=l;
update
tab_dbms_wm_sal
set
sal = sal*1.4
where
user_id=2;
update
tab_dbms_wm_sal
set
sal = sal*1.3
where
user_id=3;
commit;
A savepoint is now created, enabling him to roll back to this point later. Then more
changes are made on this workspace.
execute dbms_wm.createsavepoint(
   workspace      => 'Change_Sal_Effect_2',
   savepoint_name => 'Change_Sal_Effect_2_SP_1');
update
tab_dbms_wm_sal
set
sal = sal*1.1
where
user_id=3;
commit;
1 Paul 1100,00
2 Robert 4200,00
3 Michael 3575,00
Assume that this last change does not make sense and we want to roll back to the
savepoint created before.
execute dbms_wm.gotoworkspace(
   workspace => 'live');
execute dbms_wm.rollbacktosp(
   workspace      => 'Change_Sal_Effect_2',
   savepoint_name => 'Change_Sal_Effect_2_SP_1');
execute dbms_wm.gotoworkspace(
   workspace => 'Change_Sal_Effect_2');
execute dbms_wm.unfreezeworkspace(
   workspace => 'Change_Sal_Effect_1');
execute dbms_wm.gotoworkspace(
   workspace => 'Change_Sal_Effect_1');
1 Paul 1300,00
2 Robert 3600,00
3 Michael 3750,00
Assume that the manager has concluded that the first scenario is the best for the
company and employees and will now discard the second scenario as below.
execute dbms_wm.gotoworkspace(
   workspace => 'live');
execute dbms_wm.removeworkspace(
   workspace => 'Change_Sal_Effect_2');
execute dbms_wm.gotoworkspace(
   workspace => 'live');
select
   *
from
   tab_dbms_wm_sal;
execute dbms_wm.mergeworkspace(
   workspace => 'Change_Sal_Effect_1');
1 Paul 1300,00
2 Robert 3600,00
3 Michael 3750,00
Summary
Oracle provides several methods for security and some of the most useful packages
that deal with security were presented in this chapter with a real-world example of the
use of these packages.
The next chapter will examine what, how and when to use the utl_ packages and will also
give practical examples of using the utl_ package procedures.
Package utl_file
This package was created with Oracle Database Version 7.3 and is intended to let
DBAs and developers read and write operating system text files on the server side. It
is created by default when the database is installed and the script that creates the
utl_file package is utlfile.sql, which is called by the catproc.sql script. A public synonym is
created for this package, then the execute privilege is granted to public.
The directories which are accessible by the utl_file package are only those that have a
directory object created for them, and those that are specified in the utl_file_dir
initialization parameter. These utl_file directory objects can be created dynamically
without needing to shut down the database and are therefore very easy to maintain.
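For example, a directory object can be created and granted on the fly (the OS path
shown is illustrative):

```sql
-- Create a directory object pointing to an OS path and let pkg use it
create or replace directory test_dir as '/u01/app/oracle/scripts';
grant read, write on directory test_dir to pkg;
```

The test_dir directory object created here is the one referenced by the examples
later in this section.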
The utl_file package provides a very easy way to work with operating system files.
However, it has some important exceptions and subprograms, and some of these
exceptions and subprograms are demonstrated below.
The following example creates a procedure that could be used for generating UNIX-
based export scripts using the utl_file package. It shows some of the most important
procedures and functions of this package:
show user
User is "pkg"
begin
exception
when utl_file.invalid_path then
raise_application_error(-20000, 'Invalid path or file name!');
utl_file.fclose(file => load_file);
when utl_file.invalid_mode then
   raise_application_error(-20001,
      'The <open_mode> parameter in fopen is invalid!');
utl_file.fclose(file => load_file);
when utl_file.read_error then
raise_application_error(-20001, 'Read error!');
utl_file.fclose(file => load_file);
when utl_file.invalid_operation then
raise_application_error(-20002,
'File could not be opened or operated on as
requested!');
utl_file.fclose(file => load_file) ;
when utl_file.write_error then
   raise_application_error(-20003,
      'Operating system error occurred during the write operation!');
utl_file.fclose(file => load_file);
when utl_file.invalid_filehandle then
raise_application_error(-20004, 'Invalid file handle!');
utl_file.fclose(file => load_file);
when others then
dbms_output.put_line('Other errors!');
utl_file.fclose(file => load_file);
end create_exp_file;
Finally, use the procedure as follows to get the script generated according to
parameters used:
SQL>
exec
create_exp_file(user_login => 'pkg', passwd => 'passwd_pkg', sid =>
'ora11g', file_name => 'file_teste', directory => 'test_dir');
The script containing this output was created in the OS directory specified by the
database directory object created in the first step (test_dir):
Another useful and simple example is the one below that allows creating a text file
with rows from a table using the utl_file package.
show user
User is "pkg"
cursor c_sales is
select
*
from
pkg.salgrade;
begin
v_arq := utl_file.fopen(location  => 'test_dir',
                        filename  => 'ext_table_rows.txt',
                        open_mode => 'W');
for r1 in c_sales loop
   utl_file.put_line(v_arq, 'Grade '||r1.grade||' Low Salary $'||r1.losal
      ||' High Salary $'||r1.hisal);
end loop;
utl_file.fclose(v_arq);
end;
/
Below is the last example on this package. It sends information directly to a printer.
It was created for a printer named HP-Casa, and the database was running on a
computer named pportugalcasa on Windows.
Code 4.3 - utl_file_printing.sql
conn sys@ora11g as sysdba
show user
User is "pkg"
utl_file.put_line(u_file, print_text);
These were just three examples showing some of what the utl_file package provides.
Several other different and powerful scripts can be created with this package. For an
additional useful example, look at My Oracle Support note: 779824, “Script to
monitor and dump information from sessions holding database locks”.
Package utl_mail
The utl_mail package was created in Oracle 10g as the successor to the cumbersome
utl_smtp package for sending e-mail. It makes use of utl_tcp and utl_smtp internally.
The main purpose of the utl_mail package is to do what utl_smtp does, but in a much
easier way. These two packages will be covered now using as an example the task of
sending email notifications of job errors.
The mechanism for sending email notifications can vary depending on the version of
Oracle being used. Oracle 10g and higher allows the use of the simpler utl_mail
package rather than the utl_smtp package available in previous versions.
Using utl_smtp
The obsolete utl_smtp package was first introduced in Oracle 8i to give access to the
SMTP protocol from PL/SQL. The package is dependent on the JServer option
which can be loaded using the Database Configuration Assistant (DBCA) or by
running the following scripts as the sys user if it is not already present.
@$ORACLE_HOME/javavm/install/initjvm.sql
@$ORACLE_HOME/rdbms/admin/initplsj.sql
Using the package to send an email requires some knowledge of the SMTP protocol,
but for the purpose of this text, a simple send_mail procedure has been written that
should be suitable for most error reporting.
show user
User is "pkg"
The following code shows how the send_mail procedure can be used to send an email.
begin
send_mail(p_mail_host => 'smtp.mycompany.com',
p_from => '[email protected]',
p_to => '[email protected]',
p_subject => 'Test send_mail Procedure',
p_message => 'If you are reading this it worked!');
end;
/
The p_mail_host parameter specifies the SMTP gateway that actually sends the
message.
Now that the email mechanism has been presented, how to capture errors and
produce email notifications will be explained. The simplest way to achieve this is to
place all the code related to the job into a database procedure or, preferably, a
packaged procedure. This allows the capture of errors using an exception handler,
which then sends the appropriate email notification.
show user
User is "pkg"
If this procedure were run as part of a scheduled job, an email notification would be
generated whether the job completed successfully or not. In the event of an error, the
associated Oracle error would be captured and reported in the e-mail.
Another use for this is when there are several mission critical jobs; if for any
reason a job fails, there is a need to inform the users first thing in the morning to
prevent them from working with inaccurate data. Therefore, we created a very simple
job that runs at 7 am and checks dba_jobs to see if all other jobs completed okay. If
they did not, it sends an e-mail to the senior analysts and DBA. Now, let's take a look
at how utl_mail simplifies this process.
Before the package can be used, the SMTP gateway must be specified by setting the
smtp_out_server parameter. The parameter is dynamic, but the instance must be
restarted before an email can be sent with utl_mail.
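For example (the gateway name follows the earlier example and is illustrative):

```sql
alter system set smtp_out_server = 'smtp.mycompany.com:25' scope=both;
```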
With the configuration complete, it is now possible to send an email using the send
procedure.
begin
utl_mail.send(sender     => '[email protected]',
              recipients => '[email protected]',
              subject    => 'Test utl_mail.send procedure',
              message    => 'If you are reading this it worked!');
end;
/
As with the utl_smtp example, the code related to the job needs to be placed into a
database procedure which captures errors using an exception handler and sends the
appropriate email. The following procedure is the Oracle 11g equivalent of the one
used in the utl_smtp example.
Code 4.6 - utl_mail_1.sql
automated_email_alert_11g.sql
Next, a mechanism for running operating system commands and scripts from within
PL/SQL will be introduced.
If combining these techniques with the error logging method described previously, it
may help to send additional information in the email (prefix, start and end
timestamps) to help pinpoint the errors in the error_logs table.
file_handle     utl_file.file_type;
output          varchar2(4000);
attachment_text varchar2(4000);
add_date        varchar2(20) := to_char(sysdate,
                                   'ddmmrr' || '_' || 'hh24:mi:ss');
v_dir           varchar2(30)   := directory;
v_recipients    varchar2(200)  := recipients;
v_sub           varchar2(30)   := subject;
v_message       varchar2(2000) := message;
begin
file_handle := utl_file.fopen(location => v_dir,
                              filename => 'file_to_attach.txt',
loop
   begin
      utl_file.get_line(file_handle, output); -- read the file, line by line
      attachment_text := attachment_text||output||utl_tcp.crlf; -- and store
      -- every line in the attachment_text variable, separated by a
      -- "new line" character
   exception
      when no_data_found then
         exit;
   end;
end loop;
utl_file.fclose(file_handle);
Note that in the script above, crlf stands for a new line character.
Those were some simple examples of what is possible to do with these powerful
packages. These packages offer a number of different capabilities, and developers
will find them very useful for improving their programs.
Package utl_raw
This package has been available since the release of Oracle version 8 and the purpose
of utl_raw is to manipulate binary data. Prior to the introduction of the utl_raw package,
the only way to work with binary data was through the hextoraw and rawtohex functions.
The scripts used to create this package are utlraw.sql and prvtraw.plb and they are
automatically executed when the database is created.
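As a quick illustration of the older conversion functions alongside utl_raw's casting
routines ('ABC' is 41 42 43 in ASCII):

```sql
-- Character data to raw, then to its hex representation
select rawtohex(utl_raw.cast_to_raw('ABC')) from dual;          -- 414243

-- And back again from a hex string to character data
select utl_raw.cast_to_varchar2(hextoraw('414243')) from dual;  -- ABC
```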
The long raw datatype used in this example is provided for backward compatibility.
Currently, the blob and bfile datatypes can be used to store larger amounts of binary
data with additional flexibility and should be preferred over the long raw datatype when
working with newer versions of Oracle.
Note that Oracle recommends converting the long raw datatype to a lob datatype. The
raw and long raw datatypes are used to store binary data such as documents and graphics.
This first example shows some of the main functions of package utl_raw that allow the
manipulation of data by loading some values, applying several different binary
functions provided, and showing the result.
end proc_utl_raw_functions;
SQL>
conn pkg/pkg
Connected.
SQL>
show user
user is "pkg"
SQL>
set
serveroutput on
SQL>
call
proc_utl_raw_functions('paulo');
Call completed.
SQL>
In this example, the main functions of package utl_raw have been used and they are
briefly explained below:
Package utl_ref
Since Oracle version 8, user-defined types are supported and this package is used
to write generic type methods without knowing the object table name. In sum, the
utl_ref package allows for working with reference-based operations.
Each row of a table created using a type operator has an object ID. A reference is a
pointer to an object ID and each row can have an identifier object.
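A minimal sketch of these concepts (the type and table names are illustrative): a row
object in an object table is identified by its OID, and ref() returns a pointer to it,
which utl_ref can then dereference:

```sql
create type typ_person as object (name varchar2(30));
/
create table tab_person of typ_person;

insert into tab_person values (typ_person('Paulo'));

declare
   p_ref ref typ_person;
   p     typ_person;
begin
   -- Obtain the reference (a pointer to the row object's OID)
   select ref(t) into p_ref from tab_person t where rownum = 1;
   -- Dereference it back into a PL/SQL variable
   utl_ref.select_object(p_ref, p);
   dbms_output.put_line(p.name);
end;
/
```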
It is important to note that the ref datatype is used to identify a unique row object. It
is also used to assist in referencing based on other objects. The ref values are pointers
to objects. To use this utl_ref package, the user must run two scripts: utlref.sql and
2. lock_object Locks an object given a reference and permits only the object
selection. The execution of this procedure has the same effect as the following
command:
select
value(t)
into
object
from
object_table t
where
ref(t) = reference
for update;
3. select_object Selects an object given a reference and its value into the PL/SQL
variable object. The execution of this procedure has the same effect as the
following command:
select
value(t)
into
object
from
object_table t
where
ref(t) = reference;
4. update_object- Updates an object given a reference with the value contained in the
PL/SQL variable object. The execution of this procedure has the same effect as
the following command:
update
object_table t
set
value(t) = object
where
ref(t) = reference;
utl_ref.select_object(
   ref => objref, object => object_id);
dbms_output.put_line('Using select_object procedure: ');
dbms_output.put_line(object_id.street_address || ',' ||
   object_id.postal_code    || ',' ||
   object_id.city           || ',' ||
   object_id.state_province || ',' ||
   object_id.country_id);
end;
Executing the procedure two times will give the following results:
SQL>
set
serveroutput on
SQL>
exec
proc_utl_ref;
Using select_object procedure:
SQL>
exec
proc_utl_ref;
begin
proc_utl_ref;
end;
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at "pkg.proc_utl_ref", line 9
ORA-06512: at line 1
It is evident that this procedure works with just one object (row) in the example
table. The second time, after the delete_object procedure is run, it generates an error.
This shows that, at this point, the data was actually deleted and it can no longer be
found in the table.
Summary
File management is one of the day-to-day tasks of a database administrator and the
packages presented in this chapter are a valuable addition to the powerful set of
packages available in Oracle Database.
Most people would agree that the time spent with Oracle tuning is among the hardest
tasks of any database administrator. Since its foundation, Oracle has launched a new
database version approximately every three years. With each new release, the database
includes many new functions and a large number of internal changes.
Even though these changes ultimately create a better database, many times, after
migrating a database to the new version, performance issues may be discovered. The
database administrator needs tools to be able to pinpoint and fix problems as
quickly as possible.
Every database is different, and there are few rules that apply to all systems. Each
database has its own complexity and should be treated uniquely. Some common tasks
that database administrators need to consider when dealing with tuning are reactive
tuning, proactive tuning, hardware tuning, instance tuning, object tuning and SQL
tuning.
Package dbms_advisor
Before Oracle 10g, Oracle tuning was a very complex and time consuming task. The
main tool database administrators had at their disposal was STATSPACK which
collected a set of statistics about events that consume execution time. The DBA used
this to find where the database was spending more time and investigate further on
this event to try to find and solve the performance problem.
Since Oracle 10g, a new tool named the SQL Access Advisor can be used to improve
database performance by automatically recommending creation of certain suitable
materialized views, indexes and partitions. This tool can be accessed from Oracle
Enterprise Manager (OEM) or by directly invoking the dbms_advisor package. It also
can show how to create a materialized view so that it is fast and refreshable.
The user needs to be granted a role named advisor and also needs to have the select
privilege on the tables to be analyzed by the SQL Access Advisor; otherwise, the
statement will be excluded from analysis. If all statements are excluded from analysis,
the workload itself becomes invalid.
The example below shows some of the more important procedures, functions and
constants of the dbms_advisor package. Now let’s see it executed step by step. First of
all, we grant the necessary privileges to the user of this example. In this case, it is pkg
user.
show user
User is "sys"
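While connected as sys, grants along the following lines are needed (the analyzed
tables are illustrative):

```sql
grant advisor to pkg;
-- Select privilege on the tables the SQL Access Advisor will analyze
grant select on sh.sales to pkg;
grant select on sh.customers to pkg;
```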
conn pkg@ora11g
show user
User is "pkg"
Now the task is created using the create_task procedure, but first the delete_task
procedure is executed just in case a task with the same name already exists.
declare
taskname         varchar2(30)  := 'My Task Example';
task_desc        varchar2(256) := 'My Task - Description';
task_or_template varchar2(30)  := 'My_Task_Template';
task_id          number        := 0;
wkld_name        varchar2(30)  := 'Workload_Test';
saved_rows       number        := 0;
failed_rows      number        := 0;
num_found        number;
begin
--Create task
dbms_advisor.create_task(
   dbms_advisor.sqlaccess_advisor, task_id, taskname,
   task_desc, dbms_advisor.sqlaccess_oltp);
--Reset task
dbms_advisor.reset_task(taskname);
The query below deletes previous workload task links and previous workloads if they
already exist.
Now the workload is created and associated to the task, and then some workload
parameters and task parameters are set. These will be explained below.
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'valid_module_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'sql_limit',
   value         => '25');
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'valid_username_list',
   value         => 'apps');
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'valid_table_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'invalid_table_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'order_list',
   value         => 'priority,optimizer_cost');
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'invalid_action_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'invalid_username_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'invalid_module_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'valid_sqlstring_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'invalid_sqlstring_list',
   value         => dbms_advisor.advisor_unused);
dbms_advisor.set_sqlwkld_parameter(
   workload_name => wkld_name,
   parameter     => 'journaling',
   value         => '4');
dbms_advisor.import_sqlwkld_sqlcache(
   workload_name => wkld_name,
   import_mode   => 'replace',
   priority      => 2,
   saved_rows    => saved_rows,
   failed_rows   => failed_rows);
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'order_list',
value => 'priority,disk_reads');
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'evaluation_only',
value => 'FALSE');
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'mode',
value => 'comprehensive');
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'storage_change',
value => dbms_advisor.advisor_unlimited);
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'dml_volatility',
value => 'TRUE');
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'workload_scope',
value => 'full');
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'def_index_tablespace',
value => dbms_advisor.advisor_unused);
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'def_index_owner',
value => dbms_advisor.advisor_unused);
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'def_mview_owner',
value => dbms_advisor.advisor_unused);
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'def_mvlog_tablespace',
value => dbms_advisor.advisor_unused);
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'creation_cost',
value => 'TRUE');
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'journaling',
value => '4');
dbms_advisor.set_task_parameter(
task_name => taskname,
parameter => 'days_to_expire',
value => '30');
end;
To check that the task has been created, the view dba_advisor_executions (only available
in 11g) or dba_advisor_tasks can be used as follows:
select
owner "Own",
task_name "Tsk_Name",
to_char(created,'mm-dd-yy hh24:mi') "Created",
to_char(last_modified,'mm-dd-yy hh24:mi') "Modif",
advisor_name "Adv_Name",
status "Status"
from
dba_advisor_tasks
where
task_name = 'My Task Example'
and
execution_start > systimestamp - interval '1' hour;
After checking the task, execute it so that the advisor works through the generated
workload and tries to gather the best recommendations for improving database
performance. Note that if the task was already executed, the ORA-13630 error will
appear:
ORA-13630: The task My Task Example contains execution results and cannot be
executed.
After that, we reset the task and then execute the task again.
--If you want to execute this task again you can do the following:
--Reset the task using command as follows:
exec dbms_advisor.reset_task(task_name => 'My Task Example');
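After resetting, the task can be run with the execute_task procedure. A minimal sketch, assuming the task name from the example above:

```sql
--Sketch: run the SQL Access Advisor task created above (task name assumed)
begin
  dbms_advisor.execute_task(task_name => 'My Task Example');
end;
/
```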
After the execution of the task, all recommendations can be checked using the query
below.
select
owner,
task_name,
command,
attr1,
attr2,
attr3,
attr4,
message
from
dba_advisor_actions
where task_name = 'My Task Example';
create or replace procedure advisor_recommendations(
v_task_name in varchar2)
is
cursor adv_cur is
select
owner,
task_name,
command,
attr1,
attr2,
attr3,
attr4,
message
from
dba_advisor_actions
where task_name = v_task_name;
v_owner varchar2(30);
w_task_name varchar2(30);
v_command varchar2(30);
v_attr1 varchar2(2000);
v_attr2 varchar2(2000);
v_attr3 varchar2(2000);
v_attr4 varchar2(2000);
v_message varchar2(500);
begin
dbms_output.put_line('############# Advisor Recommendations #############');
open adv_cur;
loop
fetch adv_cur into
v_owner, w_task_name, v_command, v_attr1, v_attr2, v_attr3,
v_attr4, v_message;
exit when adv_cur%notfound;
dbms_output.put_line('Task name: '||w_task_name);
dbms_output.put_line('Task owner: '||v_owner);
dbms_output.put_line('Task command: '||v_command);
dbms_output.put_line('Attribute 1 - name: '||v_attr1);
dbms_output.put_line('Attribute 2 - tablespace: '||v_attr2);
dbms_output.put_line('Attribute 3: '||v_attr3);
dbms_output.put_line('Attribute 4: '||v_attr4);
dbms_output.put_line('Message: '||v_message);
dbms_output.put_line('##################################');
end loop;
close adv_cur;
end advisor_recommendations;
/
##################################
Task_name = sys_auto_sql_tuning_task
There are six subprograms, all procedures, inside the
dbms_application_info package. Here are descriptions of each:
■ read_client_info: This procedure reads the value of the client_info field of the
current session
■ read_module: This procedure reads the module and action fields of the current
session
■ set_action: This procedure sets the name of the current action within the
current module
■ set_client_info: This procedure sets the client_info field of the session
■ set_module: This procedure sets the module that is running to a new module
■ set_session_longops: This procedure sets a row in the v$session_longops view
that can be used to identify modules and actions of long-running operations like
backups.
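As a quick illustration of the first five subprograms, the sketch below tags the current session and reads the values back. The module, action, and client info strings are hypothetical names chosen for the example:

```sql
--Sketch: tag the current session, then read the tags back
--(module/action/client_info values are hypothetical)
set serveroutput on
declare
  v_module varchar2(48);
  v_action varchar2(32);
begin
  dbms_application_info.set_module(
    module_name => 'billing',
    action_name => 'insert_invoice');
  dbms_application_info.set_client_info(
    client_info => 'batch run 42');
  dbms_application_info.read_module(
    module_name => v_module,
    action_name => v_action);
  dbms_output.put_line(v_module||' / '||v_action);
end;
/
```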
Below is an example of how to use dbms_application_info to register information about
long running sessions in v$session_longops.
show user
User is "pkg"
--Create the example procedure
create or replace procedure dbms_app_info_longops_test
is
v_rindex binary_integer;
v_slno binary_integer;
v_totalwork number;
v_sofar number;
v_obj_target binary_integer;
begin
v_sofar := 0;
v_totalwork := 10;
while v_sofar < v_totalwork
loop
v_sofar := v_sofar + 1;
dbms_application_info.set_session_longops(
rindex => v_rindex,
slno => v_slno,
op_name => '!!!pay-load!!!'||
to_char(systimestamp,'hh24:mi:ss'),
target => v_obj_target,
context => 0,
sofar => v_sofar,
totalwork => v_totalwork,
target_desc => 'pay tables',
units => 'rows');
sys.dbms_lock.sleep(2); --pause so the long operation is observable
end loop;
end;
/
--On another SQL prompt, query v$session_longops view
select
sid "SID",
serial# "Serial",
to_char(start_time,'hh24:mi:ss') "Start Time",
opname "Operation",
target_desc "Target",
units "Units",
time_remaining "Time Remaining",
elapsed_seconds "Elapsed Sec"
from
v$session_longops;
SID Serial Start Time Operation Target Units Time Remaining Elapsed Sec
This shows that it is easy to track long-running sessions using the
dbms_application_info package together with the v$session_longops view.
Another interesting example shows how to identify Discoverer sessions for a specific
workbook. More details can be found in MOSC Note 279635.1. Basically, a function
just needs to be created that uses dbms_application_info, and then this function is
registered with the Oracle Discoverer Administrator GUI.
Code 5.3 - dbms_application_info_disco.sql
conn sys@ora11g as sysdba
show user
User is "pkg"
end dbms_app_info_disc_client;
Here are some brief examples of how to use the set_action, set_module and
set_client_info procedures.
First, the test_dbms_app_info table is created for use with this example. Then a
procedure that performs some inserts into this table is created, setting information with
dbms_application_info between the insert commands. Lastly, the information generated
by this session is checked by running a query on the v$session view.
Code 5.4 - dbms_application_info.sql
conn pkg@ora11g
show user
User is "pkg"
insert into
pkg.test_dbms_app_info
values
('time'||t_time);
sys.dbms_lock.sleep(5);
insert into
pkg.test_dbms_app_info
values
('time'||t_time);
dbms_application_info.set_client_info (
client_info =>
sys_context('userenv', 'ip_address'));
sys.dbms_lock.sleep(5);
insert into
pkg.test_dbms_app_info
values
('time'||t_time);
dbms_application_info.set_module ('','');
end;
/
--Now we can query v$session view
col username for alO
col module for alO
col action for al5
col client_info for al5
select s.sid,
s.username,
s.command,
s.status,
s.module,
s.action,
s.client_info
from v$session s
where s.module = 'add_phone';
The client info will only appear when the second insert command is executed because
the set_client_info procedure is used after the second insert command.
With these examples, it becomes easy to track sessions that are logged into the database,
even if they come from different kinds of applications and connection types like JDBC,
Pro*C programs and others. This is very helpful when finding out who is causing a
problem in the database.
Package dbms_aw_stats
The dbms_aw_stats package is a new package created to allow the generation of
statistics on dimensions and cubes used in OLAP databases. In Oracle 11g, a new
form of materialized view named Cube Materialized View was created. The data is
stored in an OLAP cube and not in a relational table. When joining this to other
objects, or whenever there is a need to execute query rewrite on it, ensure that the
statistics are up to date using the dbms_aw_stats package.
Oracle database needs this to create the corresponding execution plan for joins and
query rewrites on these objects. Thus, one of the main reasons for keeping the
statistics of dimensions and cubes updated is to ensure that Query Rewrite will be
executed continuously.
This package has just one procedure, named analyze. A good example of how and
when to use this package is described here and is also in MOSC Note 577293.1. To
ensure that Query Rewrite is used on Cube Materialized Views, follow the example
below:
show user
User is "pkg"
/*
1- Check that query rewrite is enabled in AWM
2- Check constraints using dbms_cube_advise.mv_cube_advice
3- Check the aggregation type, because only sum, min and max can be leveraged
by rewrite cube
4- The status of the MV should be 'usable'
5- The user who runs the query should have the global_query_rewrite privilege
6- Check statistics using the dbms_aw_stats package as below:
*/
SQL>
exec
dbms_aw_stats.analyze('dimension');
SQL>
exec
dbms_aw_stats.analyze('cube');
All servers and processes that are accessing the database through DRCP are pooled,
and thus, are shared across connections coming from various application processes.
?/rdbms/admin/prvtkpps.plb
The first step is to query dba_cpool_info to check the values of the default pool. Then
some changes are made as needed using the configure_pool procedure,
which has the following parameters:
■ pool_name: The name of the pool. Actually, there is just one that can be used,
the default one.
■ minsize: The minimum number of pool servers
■ maxsize: Maximum number of pool servers acceptable
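Putting these parameters together, a configuration call might look like the sketch below. The pool name is the single default pool; the sizes and think time are illustrative values only:

```sql
--Sketch: resize the default DRCP pool (run as SYSDBA; values are illustrative)
begin
  dbms_connection_pool.configure_pool(
    pool_name      => 'SYS_DEFAULT_CONNECTION_POOL',
    minsize        => 4,
    maxsize        => 40,
    max_think_time => 50);
end;
/
```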
SQL>
show user
User is "pkg"
The max_think_time is 50 seconds. Simulate this idle time and see what happens.
First start the pool using the start_pool procedure as below:
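Starting the pool is a single call against the default pool; a minimal sketch:

```sql
--Sketch: start the default DRCP pool (run as SYSDBA)
exec dbms_connection_pool.start_pool();
```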
Now, easily connect to the database and wait 50 seconds to get this error as shown:
--Now using Easy Connect you can connect on database using this new pool
connect pkg/[email protected]:1521/dbms:pooled
--Connect through this new service name and stay connect without executing
any command and watch the error
sqlplus pkg@dbms_pool
SQL>
set time on
13:48:45 SQL>
13:49:39 SQL>
select * from
dual ;
select * from
dual
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 4127
Session ID: 27 Serial number: 196
It is also possible to create a new service using this activated pool. The
service may then be used on client machines as needed.
--If you have reached the maximum capacity of your connection pool, you will
find waits on connection_status column of view below
--Some useful statistics about your connection pool can be found on view
v$cpool_stats
To conclude, this new dbms_connection_pool feature will help significantly minimize the
server resources used. DRCP can manage connections much better than many
current mid-tier applications do; for example, by eliminating idle connections from the
pool.
The common name used for this process is “warming” the library cache. In cases of a
server crash and a RAC failover, the new primary instance does not need to re-parse
all SQL statements and re-compile all of the PL/SQL program units. The
dbms_libcache package captures information about selected cursors that can be shared
between RAC nodes; it then compiles the information and populates the
secondary RAC instance with these SQL and PL/SQL executables.
USER is "sys"
SQL>
@?/rdbms/admin/catlibc.sql
For example, assume that the library cache of an instance named dbms2 needs to be
populated from an instance named dbms1. The command below will compile all
cursors from user pkg for SQL statements that were executed ten or more times
and have a minimum sharable size of 800 bytes.
SQL>
show
user
User is "pkg"
begin
dbms_libcache.compile_from_remote(
p_db_link => 'libc_link_dbms1',
p_username => 'pkg',
p_threshold_executions => 10,
p_threshold_sharable_mem => 800,
p_parallel_degree => 4);
end;
/
Total SQL statements to compile=35
Total SQL statements compiled=35
In this example, the compile_from_remote procedure has compiled 35 statements into
the library cache of database instance dbms2 using data from the library cache of
database instance dbms1.
Package dbms_monitor
Tracing sessions is a common technique used by database administrators to track
performance problems in SQL statements. Before Oracle 10g, different methods like
dbms_support and alter session set events could be used to trace SQL sessions.
In Oracle 10g this is accomplished through the dbms_monitor package, created by the
?/rdbms/admin/dbmsmntr.sql script. The dbms_monitor package is used to monitor SQL
sessions and enable statistics gathering using a client identifier or a combination of
service name, module and action.
The dbms_monitor package is created by default within the database. There is no need
to run the script mentioned in the previous paragraph. Procedures included in this
package can take several actions:
■ Enable/disable statistics using client identifiers
■ Enable/disable statistics using combinations of service name, module and action
■ Enable/disable traces on session for a specific user identified by client identifier
■ Enable/disable traces for all databases or a specific instance
■ Enable/disable trace for a combination of service name, module and action
■ Enable/disable trace in a specific session using session identifier SID of the local
instance
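For instance, the SID-based variant from the last bullet can be sketched as follows; the SID and serial number are placeholders that would come from a query on v$session:

```sql
--Sketch: enable trace for one local session (SID/serial# are placeholders)
begin
  dbms_monitor.session_trace_enable(
    session_id => 28,
    serial_num => 196,
    waits      => TRUE,
    binds      => FALSE);
end;
/
```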
SQL>
show
user
User is "pkg"
--Make the table bigger (run this command some times to increase the table
size)
insert into
test_dbms_monitor
select
*
from
test_dbms_monitor;
commit;
--Notice how you can specify the session where tracing will be enabled or
--disabled; it is not necessarily limited to your own. Thus, you do not need
--to make any changes to your applications to be able to trace what they are
--doing.
/u01/app/oracle/diag/rdbms/dbms/dbms/trace/dbms_ora_3957.trc
-rw-r----- 1 ora11g oinstall 97126 Nov 9 14:46
/u01/app/oracle/diag/rdbms/dbms/dbms/trace/dbms_ora_3957.trc
Now let’s run a query on the table created and check the generated trace. To check
the explain plan, use the tkprof utility. After that, analyze the explain plan of the query
executed.
Locate the query and note the full table scan on this table; assume we conclude
that an index is needed to improve query performance.
select
count(*)
from
test_dbms_monitor t
where
t.object_id = 3289
At this time, the index on column object_id is created and statistics for the table and
index are gathered.
create index
pkg.idx_obj_id
on
pkg.test_dbms_monitor (object_id)
pctfree 10
initrans 2
maxtrans 255
logging
storage(
buffer_pool default
flash_cache default
cell_flash_cache default)
tablespace "tbx_idx";
Now the new explain plan that uses the created index can be seen, along with the
greatly improved execution time.
select
count(*)
from
test_dbms_monitor t
where
t.object_id = 3289
Lastly, use the dbms_monitor package again to disable the trace in the session:
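A minimal sketch of that call, assuming the same SID and serial number used when the trace was enabled:

```sql
--Sketch: disable the trace in the monitored session (SID/serial# assumed)
begin
  dbms_monitor.session_trace_disable(
    session_id => 28,
    serial_num => 196);
end;
/
```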
The case below shows how easy it is to identify a client session and trace it. First, the
client session is identified. Execute the following command in a user session that will
be traced or create a logon_trigger to identify all user sessions.
SQL>
show
user
User is "pkg"
begin
dbms_session.set_identifier('pkg_client_id');
end;
/
--or create the logon trigger to identify all PKG user sessions (connect as
SYSDBA in another session and create this trigger)
create or replace trigger trg_pkg_logon
after logon on pkg.schema
begin
dbms_session.set_identifier('pkg_client_id');
end;
/
Use the following query to get the session information and check that the
client_identifier column is populated correctly.
select
sid,
serial#,
username,
client_identifier
from
v$session
where username = 'PKG';
begin
dbms_monitor.client_id_trace_enable(
client_id => 'pkg_client_id',
waits => TRUE,
binds => TRUE,
plan_stat => 'all_executions');
end;
/
The view dba_enabled_traces can be queried to check if the trace is already enabled:
select
trace_type,
primary_id,
qualifier_idl,
waits,
binds
from
dba_enabled_traces;
Next, let’s generate some trace data; execute some SQL statements to generate trace
data on a session of user pkg.
show user
user is "pkg"
select
sys_context('userenv', 'client_identifier') client_id
from
dual;
CLIENT_ID
pkg_client_id
To quickly and easily find the name and location of the trace file being generated,
use the next query, supplying the SID of the session that needs to be monitored. It is
important to mention that more than one trace file can be generated when using RAC or
MTS, for instance. If this is the case, then use the trcsess utility to merge all these trace
files into one.
show user
User is "sys"
select
sid,
serial#,
username,
sql_trace
from
gv$session
where
username='pkg';
select
distinct par.value
|| '/'
|| lower(i.instance_name)
|| '_ora_'
|| p.spid
|| '.trc' "Trace file name"
from
v$instance i,
v$process p,
v$session m,
v$session s,
v$parameter par
where
s.paddr = p.addr
and
s.sid = m.sid
and
par.name = 'user_dump_dest'
and
m.sid = 28;
/u01/app/oracle/diag/rdbms/dbms/dbms/trace/dbms_ora_7310.trc
Finally, after the analysis of the trace and the solution of the performance problem
within the query of user pkg, disable the trace of this client identifier and drop the
trigger.
show user
User is "sys"
begin
dbms_monitor.client_id_trace_disable(
client_id => 'pkg_client_id');
end;
/
drop trigger trg_pkg_logon;
If enabling SQL trace in all of the database sessions is desired, though this is not
recommended for performance and space reasons, use the database_trace_enable
procedure and then disable it by using the database_trace_disable procedure.
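A sketch of that pair of calls; bear in mind that tracing every session can generate a large volume of trace files:

```sql
--Sketch: enable, then disable, SQL trace for the whole database
begin
  dbms_monitor.database_trace_enable(waits => TRUE, binds => FALSE);
end;
/
begin
  dbms_monitor.database_trace_disable;
end;
/
```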
Code 5.10 - dbms_monitor_srv_mod_act_trace_enable.sql
conn pkg@dbms
show user
User is "pkg"
SQL*Plus sys$users
--Now just run the command below to enable the trace for all SQL Plus
sessions on service name sys$users
begin
dbms_monitor.serv_mod_act_trace_enable(
service_name => 'sys$users',
module_name => 'SQL*Plus',
waits => TRUE,
binds => TRUE);
end;
/
That is all there is to it! All sessions connected to the database through
SQL*Plus will have tracing enabled. Execute the serv_mod_act_trace_disable procedure
to disable the trace. This procedure has a parameter named instance_name that
can be used in a RAC environment to specify just one instance, or it can be
left NULL to enable tracing for all instances.
Package dbms_odci
Created by default via the ?/rdbms/admin/catodci.sql script, the dbms_odci package
provides a function, dbms_odci.estimate_cpu_units, that calculates the approximate
number of CPU instructions a single CPU can execute over a specified number of
seconds. The CPU usage values are returned in thousands of instructions and, on
multiprocessor platforms, it measures a single CPU. The overhead of invoking
the function is not included.
Connected as pkg
show user
User is "pkg"
select
dbms_odci.estimate_cpu_units(1) * 1000
from
dual ;
DBMS_ODCI.ESTIMATE_CPU_UNITS(1)
1128751983,24897
This package is used in Database Cartridges. A good example of this can be found in
MOSC Note 763493.1, “Sample code on how to use ODCITable Interface Approach with
Pipelined Function”. To learn more about Oracle Database Cartridges, see the Oracle®
Database Data Cartridge Developer's Guide 11g Release 2 (11.2), Doc Part Number
E10765-01.
Package dbms_outln
First created with Oracle 8i (8.1.5), dbms_outln is created by the
?/rdbms/admin/dbmsol.sql script and was formerly known as outln_pkg. Stored outlines
can be used to freeze the execution plan of certain SQL statements. When tuning a
vendor application, you can use stored outlines to tune SQL when the source code
cannot be changed. Once created, stored outlines may be swapped, allowing an
improved query to be created with an outline and swapped outside of the vendor
application to use the improved execution plan.
The stored outlines feature will be eventually discontinued in favor of SQL plan
management which uses SQL baselines, thereby providing better performance. Here
is an example of how to create a stored outline that will be used to keep the same
execution plan for the query used.
Suppose there is a query that is using an index properly, and there must be a guarantee
that it keeps using that index even if the optimizer tries to change the
execution plan based on new statistics or newly created indexes. The query is executed,
and then the execution plan of this query that is inside the shared pool is fixed for
future executions. The query uses a stored outline created by the dbms_outln package.
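The outline itself is created with DDL rather than with the package. A sketch, assuming the test_dbms_monitor table used in the dbms_monitor examples and a hypothetical category name:

```sql
--Sketch: freeze the plan of the sample query in a stored outline
--(outline and category names are hypothetical)
create or replace outline otl_test_dbms_monitor
for category pkg_outlines
on
select count(*) from test_dbms_monitor t where t.object_id = 5443;

--Make the optimizer use outlines from that category in this session
alter session set use_stored_outlines = pkg_outlines;
```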
show user
User is "pkg"
select
count(*)
from
test_dbms_monitor t
where
t.object_id=5443;
set echo on
set lines 300 pages 0
select t.*
from v$sql s
, table(dbms_xplan.display_cursor(s.sql_id, s.child_number, 'typical
allstats last')) t
where s.sql_id = 'fb5856ddkf6k0';
select
name,
owner,
category,
used
from
dba_outlines;
Note that the stored outline is created and will always be used by the optimizer when
the use_stored_outlines session parameter is set. We will not go into much more detail,
as this package will eventually be replaced by SQL Plan Management in future releases.
For reference, below is the list of procedures for this package:
■ drop_by_cat: Drops outlines that belong to a specific category
■ drop_unused: Used to drop outlines that have never been used
■ exact_text_signatures: Used to update outline signatures (only in Oracle
version 8.1.6 or earlier)
■ update_by_cat: Changes the category of all outlines in category oldcat to the new
category specified by the newcat parameter
■ update_signatures: Used to update outline signatures to the current version of the
database. This is generally used when outlines have been imported from an
earlier database version.
One package that is no longer available in Oracle 11g R2 is dbms_outln_edit. This
being the case, it will not be covered in this book.
Package dbms_profiler
The dbms_profiler package is a built-in set of procedures used to capture performance
information from PL/SQL. The dbms_profiler package has these procedures:
■ dbms_profiler.start_profiler
■ dbms_profiler.flush_data
■ dbms_profiler.stop_profiler
The basic idea behind profiling with dbms_profiler is for the developers to understand
where their code is spending the most time. They may then optimize and adjust
accordingly. The profiling utility allows Oracle to collect data in memory structures and
then dump it into tables as the application code is executed. In many ways,
dbms_profiler is to PL/SQL what tkprof and Explain Plan are to SQL. Once the
profiler is run, Oracle will place the results inside the dbms_profiler tables.
The dbms_profiler tables are not part of the base installation of Oracle. Two
tables need to be installed along with the Oracle-supplied PL/SQL package. In the
$ORACLE_HOME/rdbms/admin directory, two files exist that create the
environment needed for the profiler to execute:
SQL>
exec
dbms_profiler.start_profiler ('Test of raise procedure by Scott');
SQL>
exec
dbms_profiler.flush_data();
SQL>
exec
dbms_profiler.stop_profiler();
The Oracle dbms_profiler package also provides procedures that suspend and resume
profiling such as pause_profiler and resume_profiler.
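A sketch of pausing and resuming data collection inside a profiled run:

```sql
--Sketch: temporarily suspend profiling around a section of code
exec dbms_profiler.pause_profiler;
-- ...statements executed here are not profiled...
exec dbms_profiler.resume_profiler;
```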
The following example shows how to use the dbms_profiler package to gather execution
time information and locate the bottleneck in the overall process. First, the user pkg
will run the proftab.sql script to create the tables used to store profiling information.
show user
User is "pkg"
SQL>
show
user
User is "pkg”
SQL>
@?/rdbms/admin/proftab.sql
Table dropped.
Table dropped.
Table dropped.
Sequence dropped.
Table created.
Comment created.
Table created.
Comment created.
Table created.
Comment created.
Sequence created.
Now, we create a package and analyze its performance with dbms_profiler. The
package body has two queries: one that executes a full table scan and another, much
faster, that uses an index scan, so the dbms_profiler package will easily show which
query has the problem.
select
count(*) into d_n_obj
from
test_dbms_monitor m
where m .object_id=n_obj;
dbms_output.put_line('Second statement. Number of objects with
object_id='||n_obj||' is '||d_n_obj);
run_num_stop := dbms_profiler.stop_profiler;
end test_dbms_profiler_proc;
end test_dbms_profiler_pkg;
/
Now run the following query to find the line that is consuming most of the execution
time:
select
u.unit_type "Type",
u.unit_name "Name",
d.line# "Line",
d.total_occur "Occurrences",
d.total_time/1000000000 "Time spent/Line",
d.min_time/1000000000 "Min Exec Time/Line",
d.max_time/1000000000 "Max Exec Time/Line"
from
plsql_profiler_units u,
plsql_profiler_data d
where
d.runid = u.runid
and
d.unit_number = u.unit_number
order by d.total_time desc;
Now that the line that takes the most time to execute has been identified, use the
query below to get the source code of this line:
--Get the source code of the line spending more time to execute
column "Line" format 99999
column "Total Time" format 999.999999
column "Type" format al2
column "Name" format a22
column "Occurrences" format 999999
column "Text Command" format a46
It is very easy to find the line responsible for using more than its share of resources. A
little tuning of this statement will solve the performance problem.
Package dbms_result_cache
A new feature of Oracle 11g, the Result Cache is a powerful tool that can be used to
cache query and function results in memory. The cached information is stored in a
dedicated area inside the shared pool where it can be shared by other PL/SQL
programs that are performing similar calculations. If data stored in this cache changes,
the cached values become invalid. This feature is useful for databases with
statements that need to access a large number of rows and return only a few of them.
The Result Cache is set up using the result_cache_mode initialization parameter with one
of these three values:
1. auto: The results that need to be stored are decided by the Oracle optimizer
2. manual: Cache the results by hinting the statement with the
result_cache | no_result_cache hint
3. force: All results will be cached
The dbms_result_cache package gives the DBA management options for the
memory used by both the SQL Result Cache and the PL/SQL Function Result Cache.
Here are the procedures and functions of the dbms_result_cache package:
■ bypass: The bypass procedure turns the result cache functionality on and off. If set
to true, the Result Cache is turned off and not used. The cache is not flushed by
using this procedure.
■ status: This function returns the current status of the result cache functionality.
■ memory_report: This procedure prints a report on the current memory used by
the Result Cache. If the parameter true is passed, then a more detailed report is
displayed.
■ flush: The flush operation removes all results from the cache. It has two
Boolean parameters; the first determines if the memory is maintained in the cache
or released to the system. The second parameter determines if the cache statistics
are maintained or also flushed.
■ invalidate: The invalidate operation invalidates all results in the cache that are
dependent on an object. These subprograms are overloaded, so the object can be
identified by owner and name or by object_id.
■ invalidate_object: The invalidate_object operation invalidates specific result objects.
Result objects can be found in the v$result_cache_objects dynamic performance
view.
A practical overview of how to use these procedures and functions is given
below.
show user
User is "pkg"
NAME VALUE
result_cache_mode manual
result_cache_max_size 1081344
result_cache_max_result 5
result_cache_remote_expiration 0
client_result_cache_size 0
client_result_cache_lag 3000
COUNT(*)
0
--Use the result_cache hint because manual mode is being used for this
instance
select /*+ result_cache */
count(*)
from
employees
where
Now query the Result Cache views, which provide information like objects in memory,
dependency relationships and Result Cache statistics.
Name Value
1 0 0 0 NO 0 0
1 1 0 1 NO 1 0
1 2 0 2 YES
1 3 0 3 YES
Execute the flush operation to remove all objects from the memory Result Cache.
There is the option to retain or release the memory and/or statistics as needed. Here
all possibilities of flushing the Result Cache are found:
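A sketch of those management calls; the two Boolean arguments to flush follow the parameter order described above (retain memory, retain statistics):

```sql
--Sketch: Result Cache management calls
exec dbms_result_cache.bypass(true);         -- turn the cache off
exec dbms_result_cache.bypass(false);        -- turn it back on
select dbms_result_cache.status from dual;   -- current cache status
exec dbms_result_cache.memory_report(true);  -- detailed memory report
exec dbms_result_cache.flush;                -- release memory, clear statistics
exec dbms_result_cache.flush(true, true);    -- flush but keep memory and statistics
exec dbms_result_cache.invalidate('pkg', 'employees');
```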
It is very easy to keep the results of queries in memory, but pay attention to the
invalidations column of the v$result_cache_objects view. This column shows when the
results associated with an object in memory become invalid due to a data modification.
To use the Result Cache within a function, create the function with the result_cache
clause as follows:
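A sketch of such a function; the employees table, its department_id column, and the function name are assumptions for the example:

```sql
--Sketch: a result-cached function (table/column/function names are assumptions)
create or replace function get_dept_headcount(
  p_dept_id in number)
  return number
  result_cache relies_on (employees)
is
  v_count number;
begin
  select count(*)
  into v_count
  from employees
  where department_id = p_dept_id;
  return v_count;
end;
/
```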
The result_cache clause is used so that the results returned by this function are
cached in memory until the data of the employees table is modified, which invalidates
the cached results for this function.
Package dbms_xplan
Part of the tuning work a database administrator needs to master is finding out which
execution plan a certain query is using in the database. Since Oracle 9i Release 2, the
dbms_xplan package is the tool used to accomplish this. It is necessary to first create
the plan table that will be used to store information about the execution plan to be
displayed. The catplan.sql script must be executed to create the plan table; it can be
found in the $ORACLE_HOME/rdbms/admin directory.
This package can also be used in conjunction with the Automatic Workload
Repository (AWR). Poor plans gathered in an AWR report can have their explain plans
shown using the dbms_xplan package. Explain plans can also be captured and shown via
SQL Tuning Sets.
If the explain plan of a cursor that is running needs to be obtained, use the dbms_xplan
package to read the v$sql_plan and v$sql_plan_statistics_all views. Here are some
examples of how to use this package to analyze and locate the performance problem of
a query. The first demonstration shows how to view the explain plan for a SQL query
using functions from the dbms_xplan package.
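A sketch of that first demonstration, assuming the test_dbms_xplan table used throughout this section:

```sql
--Sketch: explain a statement, then show it with dbms_xplan.display
explain plan for
select * from test_dbms_xplan where object_name = 'test';

select * from table(dbms_xplan.display);
```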
1 - filter("object_name"='test')
Note
17 rows selected.
The next example shows how to display a SQL plan in XML format by using the
build_plan_xml function.
explain plan
set statement_id = 'my_explain_test' for
select
XML_PLAN
<plan>
<operation name="select statement" id="0" depth="0" pos="285">
<card>12</card>
<bytes>2484</bytes>
<cost>285</cost>
<io_cost>285</io_cost>
<cpu_cost>21702581</cpu_cost>
<time>00:00:04</time>
</operation>
<operation name="table access" options="full" id="1" depth="1" pos="1">
<object>test_dbms_xplan</object>
<card>12</card>
<bytes>2484</bytes>
<cost>285</cost>
<io_cost>285</io_cost>
<cpu_cost>21702581</cpu_cost>
<time>00:00:04</time>
<project>"test_dbms_xplan"."owner"[varchar2,30],
"test_dbms_xplan"."object_name"[varchar2,128],
"test_dbms_xplan"."subobject_name"[varchar2,30],
"test_dbms_xplan"."object_id"[number,22],
"test_dbms_xplan"."data_object_id"[number,22],
"test_dbms_xplan"."object_type"[varchar2,19],
"test_dbms_xplan"."created"[date,7],
"test_dbms_xplan"."last_ddl_time"[date,7],
"test_dbms_xplan"."timestamp"[varchar2,19],
"test_dbms_xplan"."status"[varchar2,7],
"test_dbms_xplan"."temporary"[varchar2,1],
"test_dbms_xplan"."generated"[varchar2,1],
"test_dbms_xplan"."secondary"[varchar2,1],
"test_dbms_xplan"."namespace"[number,22],
"test_dbms_xplan"."edition_name"[varchar2,30]</project>
<predicates type="filter">"object_name"='test'</predicates>
<qblock>sel$1</qblock>
<object_alias>test_dbms_xplan@sel$1</object_alias>
<other_xml>
<info type="db_version">11.2.0.1</info>
<info type="parse_schema"><![CDATA["pkg"]]></info>
<info type="dynamic_sampling">2</info>
Next, we will examine how to get the explain plan of a query that has been captured
in an AWR snapshot. First, execute a query that will perform a full table scan.
Code 5.17 - dbms_xplan_display_awr.sql
conn sys@orallg as sysdba
conn / as sysdba
exec dbms_workload_repository.create_snapshot;
select
*
from
test_dbms_xplan
where
object_name = 'test';
Now run an AWR snapshot to gather information about the query just executed.
exec dbms_workload_repository.create_snapshot;
SQL>
@?/rdbms/admin/awrrpt
Open the AWR report and find the sql_id of the query executed. Use the command
below to show the execution plan for the SQL query.
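One way to display the plan from the AWR is the display_awr function of dbms_xplan; the sql_id below is the one reported for this query in the example environment:

```sql
select
   *
from
   table(dbms_xplan.display_awr('g5r5yrzlrq9dy'));
```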
SQL_ID g5r5yrzlrq9dy
-----------------------------------------------------------------------------
| Id | Operation         | Name            | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|  0 | select statement  |                 |      |       |   285 (100)|          |
|  1 | table access full | test_dbms_xplan |   12 |  2484 |   285   (0)| 00:00:04 |
-----------------------------------------------------------------------------
Note
17 rows selected.
This last example will show how to get the explain plan for a cursor that was just
executed. To use dbms_xplan to display the cursor execution plan, first create and
execute a PL/SQL block with an example cursor.
set serveroutput on
declare
   v_obj_name varchar2(30);
   v_obj_type varchar2(30);
   cursor c1 is
      select /* my_test_dbms_xplan */
         object_name,
         object_type
      from
         test_dbms_xplan;
begin
   open c1;
   fetch c1 into
      v_obj_name,
      v_obj_type;
   while c1%found
   loop
      dbms_output.put_line(a => 'object name:'||v_obj_name);
      dbms_output.put_line(a => 'object type:'||v_obj_type);
      --Fetch the next row so the loop terminates when no rows remain
      fetch c1 into
         v_obj_name,
         v_obj_type;
   end loop;
   close c1;
end;
/
select
   sql_text,
   sql_id,
   child_number
from
   v$sql
where
   sql_text like '%my_test_dbms_xplan%';
Get the explain plan of the cursor by using the display_cursor function in the
dbms_xplan package.
select
*
from
table(dbms_xplan.display_cursor(
sql_id => '2ckag8c758v6h',
cursor_child_no => 0));
These are just some examples of the functionalities of this package. There are others
that work in pretty much the same manner; for example, using dbms_xplan to explain
an SQL execution plan in an SQL Tuning Set and more.
Package dbms_spm
A new feature of Oracle 11g, called SQL Plan Management (SPM), can be used to
control SQL plan evolution, guiding the optimizer to always choose a SQL plan
that will not regress performance.
Newly generated SQL plans will be included in SQL plan baselines only if it is
verified that query performance will not degrade. In other words, SPM tests new
plans and accepts into the baseline only those SQL execution plans that run faster
than the existing accepted plans.
These SQL plans can be loaded from three different SQL sources:
1. SQL Tuning Sets (Captured manually)
2. AWR Snapshots
3. Cursor Cache (real-time SQL)
The dbms_spm package can be used to load SQL execution plans from the various
sources mentioned above. Please note that the administer sql management object
privilege is needed to use the dbms_spm package.
Up until now, SQL plan baselines were shown which were not fixed. This means that
the optimizer uses costing mechanisms to evaluate plans. With Manual SQL
Management, plans can be fixed in baselines, thereby forcing the optimizer to
consider only those plans from a plan baseline which have been manually marked as
possible candidates.
Since it is unknown how the application will behave in the environment of the
customer’s site, the baselines could be fixed for the SQL and shipped together with
the application. In this fashion, SQL can be pre-tuned, locked-down and deployed
where they are fed into the customer’s system, eliminating the possibility of
performance regression.
The next example will show how to use the main procedures and functions of the
dbms_spm package. This includes operations like changing attributes of SQL plans,
configuring the SQL management base, creating a staging table to be used for
transporting the SQL plan, dropping SQL baseline plans, loading plans from cursor
and SQL Tuning Sets and more.
Suppose that we are running a query on a table that is doing a full table scan. We then
decide to create an index on the referenced table, running the same query to get the
new plan. If we guarantee plan stability by using SQL Plan Management, we will first
have to evolve this new execution plan using the dbms_spm package in order to allow
the database to use the new index as demonstrated next.
The first step will usually be to change the existing index to the invisible mode rather
than dropping it. In the example, it will be dropped since this is just a test and the
effect of dropping versus making invisible will give the same results. The query will
then be made and the plan will be shown.
Code 5.19 - dbms_spm_1.sql
conn pkg@dbms
show user
User is "pkg"
select
last_name
from
employees
where
department_id=100;
LAST_NAME
Greenberg
Faviet
Chen
Sciarra
Urman
Popp
6 rows selected.
--Get the plan for the query and note that it is doing a FTS on the employees
--table
select
   *
from
   table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
Plan hash value: 1445457117

SQL_TEXT                                SQL_ID

Check the execution plan of the loaded baseline using the following statement:

plan_table_output
--------------------------------------------------------------------------
Plan hash value: 1445457117

|  0 | select statement  |           |
|  1 | table access full | employees |
Now create the index on the employees table and then execute the query again. Note that
the explain plan is still the same because only the first execution plan was loaded into
the SQL plan baseline.
--Create the index so the optimizer will start to use it and check the plan
again
create index
emp_department_ix
on
employees (department_id)
tablespace
tbs_idx;
Index created.
Explained.
select
*
from
table(dbms_xplan.display);
plan_table_output
--------------------------------------------------------------------------
Plan hash value: 1445457117

1 - filter("department_id"=100)

Note
17 rows selected.
select
sql_handle,
plan_name,
enabled,
accepted,
fixed
from
dba_sql_plan_baselines
where
sql_text like '%employee%';
Now the evolve process will be used to accept the new SQL execution plan that uses
the newly created index.

select
   dbms_spm.evolve_sql_plan_baseline(
      sql_handle => 'sys_sql_1046c141c5de11a8',
      plan_name  => 'sql_plan_10jq1872xw4d8c079fdff',
      time_limit => dbms_spm.auto_limit,
      verify     => 'YES',
      commit     => 'YES')
from
   dual;

Inputs:
  sql_handle = sys_sql_1046c141c5de11a8
  plan_name  = sql_plan_10jq1872xw4d8c079fdff
  time_limit = dbms_spm.auto_limit
  verify     = YES
  commit     = YES

Plan: sql_plan_10jq1872xw4d8c079fdff
  Rows Processed: 6

Report Summary
Lastly, check whether the execution plan being used is the new one. It should be,
since it is the plan with the lowest cost according to the Oracle optimizer.
--Run the new explain plan and verify that the index is being used
explain plan for
select
last_name
from
employees
where
department_id=100;
select
   *
from
   table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
2 - access("department_id"=100)
18 rows selected.
The note at the end of the execution plan informs that the query is using a SQL Plan
baseline.
Here is an example that shows how to disable a specific SQL plan baseline and fixate
the execution plan:
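A minimal sketch using the alter_sql_plan_baseline function follows; the SQL handle and plan name are the illustrative ones used earlier in this section:

```sql
declare
   v_plans number;
begin
   --Disable a specific SQL plan baseline
   v_plans := dbms_spm.alter_sql_plan_baseline(
      sql_handle      => 'sys_sql_1046c141c5de11a8',
      plan_name       => 'sql_plan_10jq1872xw4d8c079fdff',
      attribute_name  => 'enabled',
      attribute_value => 'NO');

   --Fix a plan so the optimizer considers it first
   v_plans := dbms_spm.alter_sql_plan_baseline(
      sql_handle      => 'sys_sql_1046c141c5de11a8',
      plan_name       => 'sql_plan_10jq1872xw4d8c079fdff',
      attribute_name  => 'fixed',
      attribute_value => 'YES');
end;
/
```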
Sysaux Percent     10
Retention Weeks    53

Sysaux Percent     40
Retention Weeks    30
To drop a single SQL plan or all plans associated with a SQL handle, you can use the
drop_sql_plan_baseline procedure. Stored outlines from plan baselines can also be
migrated into the SQL management base using the procedure migrate_stored_outline.
Here is an example for each one of these procedures:
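As a sketch (the SQL handle is the illustrative one from this section, and the outline category name is an assumption), dropping a baseline and migrating stored outlines could look like this:

```sql
declare
   v_dropped number;
   v_report  clob;
begin
   --Drop all plans associated with a SQL handle
   --(pass plan_name as well to drop a single plan)
   v_dropped := dbms_spm.drop_sql_plan_baseline(
      sql_handle => 'sys_sql_1046c141c5de11a8');
   dbms_output.put_line('Plans dropped: '||v_dropped);

   --Migrate all stored outlines of the 'default' category
   --into the SQL management base
   v_report := dbms_spm.migrate_stored_outline(
      attribute_name  => 'category',
      attribute_value => 'default');
end;
/
```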
Package dbms_sql
The dbms_sql package allows developers to write stored PL/SQL code that is capable
of generating and executing dynamic DDL and DML statements without using
hard-coded data values. The functionality of this package is very similar to that
offered by the execute immediate statement available since Oracle 8i. Whenever possible,
execute immediate should be preferred, mostly because it is far more readable and makes
the code easier to maintain. However, there are times when dbms_sql is still
useful.
One scenario where dbms_sql may still be needed is when a cursor handle must be
retrieved and passed to another procedure, or when the number of columns to be
retrieved is not known until runtime. It is also good to know that it is possible
to switch between both methods by using the dbms_sql.to_refcursor and
dbms_sql.to_cursor_number functions. There are three different types of dynamic SQL
that can be built:
■ DDL commands
■ DML statements (delete, insert, or update statement)
■ DML queries (select statement)
Each of these operations has separate calls to procedures and functions contained in
the dbms_sql package. In the end, the single steps can be broken down into a generic
set of steps:
1. Build a command by concatenating strings together.
2. Open a cursor.
3. Parse the command.
4. Bind any input variables.
5. Execute the command.
6. Fetch the results (in the case of queries).
7. Close the cursor.
There are a number of procedures and functions contained within the dbms_sql
package. The first of these to be examined are the bind_variable procedures.
While each of these procedures has a slightly different name, each of them
accomplishes the same task; namely, storing a value in a bind variable.
The c parameter is a cursor ID number. The parameter returns from the procedure as
NULL.
All of these procedures return the value of a column that was fetched using a call to
the fetch_rows() function. The column’s value is stored in the value parameter.
The example described below shows most of the procedures and functions of the
dbms_sql package. It is used to dynamically create objects: pass a DDL command
to the procedure, and when executed, the procedure will create the
object using dynamic SQL through the dbms_sql package.
show user
User is "pkg"
create or replace procedure exec_command(
   p_sql_text_1 in varchar2,
   p_sql_text_2 in varchar2 default null)
is
   v_sql_cursor_name   integer;
   v_sql_execute       integer;
   v_sql_errors_ocur   integer;
   v_select_error_code number;
   v_select_error_mess varchar2(512);
   compilation_error   exception;
   pragma exception_init(compilation_error, -24344);
begin
   --The open_cursor call returns an ID number for the cursor that will carry
   --the data structure
   v_sql_cursor_name := dbms_sql.open_cursor;
   --The parse call checks the SQL statement syntax and associates the cursor
   --with the SQL statement
   dbms_sql.parse(c => v_sql_cursor_name,
                  statement => p_sql_text_1,
                  language_flag => dbms_sql.native);
   --The execute call runs the SQL statement
   v_sql_execute := dbms_sql.execute(v_sql_cursor_name);
   --The last_error_position function returns the byte offset where an error
   --occurred
   v_sql_errors_ocur := dbms_sql.last_error_position;
   --close_cursor closes the cursor
   dbms_sql.close_cursor(v_sql_cursor_name);
   dbms_output.put_line('Command executed '||p_sql_text_1);
exception
   when compilation_error then
      dbms_sql.close_cursor(v_sql_cursor_name);
   when others then
      begin
         v_select_error_code := sqlcode;
         v_select_error_mess := sqlerrm;
         dbms_sql.close_cursor(v_sql_cursor_name);
         raise_application_error(-20101,
            sqlerrm || ' when executing ''' ||
            p_sql_text_1||' '||p_sql_text_2 || '''');
      end;
end exec_command;
/
nome varchar2(10) Y
--Insert a row into the test_dbms_sql table using the second parameter as the
--commit command
exec exec_command(p_sql_text_1 => 'insert into test_dbms_sql values (''pportugal2'')', p_sql_text_2 => 'commit');
NOME
pportugal2
A simple procedure was created that can be used to run any DDL or DML
command. A table named test_dbms_sql was created and some rows were
inserted into it. Just be aware that dynamic SQL is one of the main causes of
SQL injection vulnerabilities in applications.
Newcomers often write procedures that simply receive part of a SQL sentence,
concatenate it and run it, or even worse, receive a whole sentence and run it as
received. Attackers can very easily use such a procedure to run whatever code
they want with the privileges of the procedure owner.
Package dbms_sqltune
SQL tuning is one of the most time-consuming and challenging tasks faced by
Oracle DBAs and application developers. The Oracle 10g SQL Tuning Advisor is
intended to facilitate SQL tuning tasks and to help the DBA find the optimal SQL
execution plan. The SQL Tuning Advisor can search the SQL cache, the AWR, or
user inputs for inefficient SQL statements. The SQL Tuning Advisor is
available through the OEM console, or the dbms_sqltune package can be invoked
manually.
The dbms_sqltune package provides a PL/SQL API for using the SQL Tuning Advisor
tool. Running the SQL Tuning Advisor using PL/SQL API includes two steps:
1. Create the SQL tuning task
2. Execute the SQL tuning task
There are several options for the creation of a SQL tuning task. For example, the
following process will examine a single SQL statement. The
dbms_sqltune.create_tuning_task function can be used to do the following:
declare
   my_sqltext clob;
   task_name  varchar2(30);
begin
   my_sqltext := 'select object_type, count(*) from all_objects group by object_type';
   task_name := dbms_sqltune.create_tuning_task(
      sql_text    => my_sqltext,
      scope       => dbms_sqltune.scope_comprehensive,
      time_limit  => 60,
      task_name   => 'sql_tuning_task1',
      description => 'Tuning task for a single SQL statement');
end;
/
After successfully creating a SQL tuning task, we can launch the SQL Tuning
Optimizer to produce tuning recommendations. Use the
dbms_sqltune.execute_tuning_task procedure to execute the specified task:
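A minimal sketch of the call, assuming the task name sql_tuning_task1 used in the example above:

```sql
exec dbms_sqltune.execute_tuning_task(task_name => 'sql_tuning_task1');
```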
Now the DBA is ready to review recommendation details produced by the SQL
Tuning Advisor. A query like the one below can be used to retrieve the SQL analysis
results:
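One common option is the report_tuning_task function; the task name below is the illustrative one used earlier:

```sql
set long 10000
set longchunksize 10000
set linesize 200
select
   dbms_sqltune.report_tuning_task('sql_tuning_task1')
from
   dual;
```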
SQL ID  : g2wr3u7slgtf3
SQL Text: select
             object_type, count(*)
          from
             all_objects
          group by
             object_type

1- Statistics Finding
   Optimizer statistics for table "sys"."obj$" and its indices are stale.
   Recommendation
This analysis process can consume significant processing time. Therefore, the
dbms_sqltune package provides an API to manage tuning tasks:
■ The interrupt_tuning_task procedure stops the executing task. Any results
that have already been produced are preserved.
■ The cancel_tuning_task procedure terminates the executing task without
preserving its results.
■ The reset_tuning_task procedure stops the running task and resets it to its
initial state.
■ The drop_tuning_task procedure removes the task from the database.
During tuning analysis, the SQL Tuning Advisor can recommend and automatically
create SQL profiles. A SQL profile is a special object that is used by the optimizer.
The SQL profile contains auxiliary statistics specific to a particular SQL statement.
The Oracle optimizer has just a few seconds to calculate a plan before beginning
execution of the query. However, the Tuning Advisor can use more time, do some
deeper calculations and find a better plan that can be stored in a sql_profile for the
Oracle optimizer to use automatically every subsequent time the query is
executed.
The SQL optimizer uses the information in the SQL profile to adjust the execution
plan for the SQL statement that has the associated SQL profile. SQL profiles are
great for SQL tuning because it is possible to tune SQL statements without any
modification of the application source code or the text of SQL queries. The
dba_sql_profiles view shows information about all existing SQL profiles.
The dbms_sqltune package can be used to manage SQL profiles. The SQL Tuning
Advisor can recommend the use of a specific SQL profile, which can then be
associated with the analyzed SQL statement by accepting it with
dbms_sqltune.accept_sql_profile.
declare
   sqlprofile varchar2(30);
begin
   sqlprofile := dbms_sqltune.accept_sql_profile(
      task_name => 'sql_tuning_task1',
      name      => 'sql_profile1');
end;
/
The dbms_sqltune package also provides a PL/SQL API to work with SQL Tuning Sets
(STS). The STS is a database object that contains one or more SQL statements
combined with their execution statistics and context such as particular schema,
application module name, list of bind variables and more. The STS also includes a set
of basic execution statistics such as CPU and elapsed times, disk reads and buffer
gets, number of executions and such.
When creating a STS, the SQL statements can be filtered by different patterns such as
application module name or execution statistics such as high disk reads. Once created,
STS can be an input source for the SQL Tuning Advisor. Typically, the following
steps are used to work with STS using the dbms_sqltune API. A STS is created using the
dbms_sqltune.create_sqlset procedure. For example, the following script can be used to
create a STS called sqlset1:
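A minimal sketch of the creation step (the description text is illustrative):

```sql
begin
   dbms_sqltune.create_sqlset(
      sqlset_name => 'sqlset1',
      description => 'Test SQL Tuning Set');
end;
/
```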
STS is loaded from such sources as the AWR, another STS, the cursor cache or a
SQL trace file. The following sample PL/SQL block loads STS from the current
cursor cache:
declare
   cur dbms_sqltune.sqlset_cursor;
begin
   open cur for
      select
         value(p)
      from
         table(dbms_sqltune.select_cursor_cache) p;
   dbms_sqltune.load_sqlset(
      sqlset_name     => 'sqlset1',
      populate_cursor => cur);
end;
/
A SQL tuning task that uses a STS as input can be created and executed like this:
declare
   tname varchar2(30);
begin
   tname := dbms_sqltune.create_tuning_task(
      sqlset_name => 'sqlset1',
      task_name   => 'task1');
   dbms_sqltune.execute_tuning_task(task_name => 'task1');
end;
/
The following syntax can be used to drop a SQL Tuning Set when finished:
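For example, using the set name from the examples above:

```sql
exec dbms_sqltune.drop_sqlset(sqlset_name => 'sqlset1');
```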
All SQL Tuning Sets created in the database can be reviewed by querying the
dba_sqlset, dba_sqlset_binds, dba_sqlset_definitions and dba_sqlset_statements views. For
example, the dbms_sqltune_show_sts.sql query below shows the SQL
statements associated with a STS:
select
   s.sql_text,
   s.cpu_time
from
   dba_sqlset_statements s,
   dba_sqlset a
where
   a.name = 'sqlset1'
and
   s.sqlset_id = a.id
and
   rownum <= 10
order by
   s.cpu_time desc;
Clearly, Oracle 10g has introduced a rich set of powerful tools that can be used to
identify and resolve possible performance problems. While these advisors cannot yet
replicate the behavior of a senior DBA, they promise to get more intelligent with each
new release of Oracle.
In the next practical example, ADDM will be used to identify a sql_id that is
experiencing performance problems. The dbms_sqltune package will then be used to
tune this SQL statement easily and quickly.
First, run a query on the employees table using, in the WHERE clause, a column that does
not have an index, so a full table scan will be performed.
show user
User is "pkg"
Proceed by running the AWR report for the period in which the query was executed.
Extract the sql_id of the SQL statement, and then use this value in the next step.
Note that this value can also be obtained in other ways, such as querying the v$sql
view.
@?/rdbms/admin/awrrpt.sql
With the sql_id extracted from the AWR report, run dbms_sqltune and create the tuning
task using the AWR information.
declare
   ttask varchar2(50);
begin
   ttask := dbms_sqltune.create_tuning_task(
      begin_snap => 100,   --illustrative: use the snapshot IDs from your AWR report
      end_snap   => 101,
      sql_id     => 'bzpq5u0yz7wy5',
      task_name  => 'Test DBMS Packages AWR');
   dbms_output.put_line('Task Name: '||ttask);
end;
/
Using dbms_sqltune to create the tuning tasks based on the cursor cache can be done as
shown below:
declare
l_sql_tune_task_id varchar2(50);
begin
l_sql_tune_task_id := dbms_sqltune.create_tuning_task (
sql_id => 'bzpq5u0yz7wy5',
scope => dbms_sqltune.scope_comprehensive,
time_limit => 10,
task_name => 'Test DBMS Packages Cursor',
description => 'testing cursor tuning!');
dbms_output.put_line('Task Name: ' || l_sql_tune_task_id);
end;
/
An alternative is to create the tuning tasks using a SQL Tuning Set as follows:
declare
   v_sql_query varchar2(100);
   l_sql_task  varchar2(50);
begin
   v_sql_query := 'select count(*) from employees where department_id = :dept_id';
   l_sql_task := dbms_sqltune.create_tuning_task(
      sql_text    => v_sql_query,
      bind_list   => sql_binds(anydata.ConvertNumber(100)),
      user_name   => 'pkg',
      time_limit  => 60,
      task_name   => 'Test Task DBMS Packages Manual',
      description => 'Tuning task for an EMP to dept join query.');
   dbms_output.put_line('Task Name: ' || l_sql_task);
end;
/
Proceed by executing the tuning task created. That will generate the report with the
recommendation results.
--Check the task created
select
   task_id,
   task_name,
   created,
   advisor_name,
   status
from
   dba_advisor_tasks;
Finally, the output report can be generated. It shows the recommendation of creating
an index on the employees table to gain a large performance improvement.
Here is found the output report and the index creation recommendation:
- Consider running the Access Advisor to improve the physical schema design
  or creating the recommended index.
  create index pkg.idx$$_01720001 on pkg.employees ("department_id");
Rationale

1- Original
|  1 | sort aggregate |  |  1 |  3 |
2 - access("e"."department_id"=100)
show user
User is "pkg"
NAME VALUE
--Then enable the trace in your session and execute the query that will perform a full table scan
alter session set events '10046 trace name context forever, level 4';
--Create the mapping table that will record sql statements that was in trace
file
-- create the directory object where the SQL traces are stored
create directory
dbms_sqltune_dir
as
'/u01/app/oracle/diag/rdbms/dbms/dbms/trace';
After completing the SQL Tuning Set, the tuning task can now be created using such
a SQL Set based on information that comes from a SQL trace file.
Package dbms_stat_funcs
This package provides statistical functions that can be used to provide
analytical/descriptive data to the user. It is commonly used in data warehouse
environments. Procedures of this package include:
■ exponential_dist_fit: This procedure tests how well a sample of values fits an
exponential distribution
■ normal_dist_fit: This procedure tests how well a sample of values fits a normal
distribution
■ poisson_dist_fit: This procedure tests how well a sample of values fits a Poisson
distribution
■ summary: This procedure summarizes a numerical column and returns a record
of summary type
■ uniform_dist_fit: This procedure tests how well a sample of values fits a uniform
distribution
■ weibull_dist_fit: This procedure tests how well a sample of values fits a Weibull
distribution
In order to see how this package works, we will show an example of how to use the
summary procedure of the dbms_stat_funcs package.
show user
User is "pkg"
--You can also use simple queries to get some of these statistical values, such as
--stats_mode, median and others, as shown below
select
stats_mode(salary)
from
test_dbms_stat_funcs;
select
median(salary)
from
test_dbms_stat_funcs;
Most of the functions used by the summary procedure can also be used directly in a SQL
query, as shown in the last example.
Package dbms_stats
When a SQL statement is executed, the database must convert the query into an
execution plan and choose the best way to retrieve the data. For Oracle, each SQL
query has many choices for execution plans, including which index to use to retrieve a
table row, what order in which to join multiple tables together, and which internal join
methods to use since Oracle has nested loop joins, hash joins, star joins, and sort
merge join methods. These execution plans are computed by the Oracle cost-based
SQL optimizer commonly known as the CBO.
Starting with the introduction of the dbms_stats package, Oracle provides a simple way
for the Oracle professional to collect statistics for the CBO. The old-fashioned analyze
table and dbms_utility methods for generating CBO statistics are obsolete and
somewhat dangerous to SQL performance because they do not always capture high-
quality information about tables and indexes. The CBO uses object statistics to
choose the best execution plan for all SQL statements.
The dbms_stats utility does a far better job in estimating statistics, especially for large
partitioned tables, and the better stats result in faster SQL execution plans. The
dbms_stats package offers a lot of statistics management tools such as:
■ Gathering optimizer statistics
■ Setting or getting statistics
■ Deleting statistics
■ Transferring statistics
■ Locking or unlocking statistics
■ Restoring and purging statistics
■ User-defined statistics
■ Pending statistics
■ Comparing statistics
■ Extended statistics
When changing statistics stored in the dictionary, the old versions are always saved
automatically in case they are needed for future restore.
show user
User is "pkg"
--First, use the pl/sql block below to get all default values being used
set echo off
set serveroutput on
declare
   prefs varchar2(400);
begin
   --Show cascade default value. If true, index statistics are collected as part of gathering table statistics
   select dbms_stats.get_prefs(pname => 'cascade') into prefs from dual;
   dbms_output.put_line('Cascade = '||prefs);
   --Show method_opt default value. Controls column statistics collection and histograms
   select dbms_stats.get_prefs(pname => 'method_opt') into prefs from dual;
   dbms_output.put_line('Method_Opt = '||prefs);
   --Show granularity default value. Determines the granularity of statistics on partitioned tables
   select dbms_stats.get_prefs(pname => 'granularity') into prefs from dual;
   dbms_output.put_line('Granularity = '||prefs);
   --Show publish default value. Determines whether statistics are published as soon as gathering finishes
   select dbms_stats.get_prefs(pname => 'publish') into prefs from dual;
   dbms_output.put_line('Publish = '||prefs);
   --Show incremental default value. Determines if global statistics of a partitioned table will be
   --maintained without the need of a full table scan
   select dbms_stats.get_prefs(pname => 'incremental') into prefs from dual;
   dbms_output.put_line('Incremental = '||prefs);
   --Show stale_percent default value. Determines the percentage of rows in a table that must change
   --before the statistics become stale
   select dbms_stats.get_prefs(pname => 'stale_percent') into prefs from dual;
   dbms_output.put_line('Stale Percent = '||prefs);
end;
/
--The output is below
Cascade          dbms_stats.auto_cascade
Degree           NULL
Estimate Percent dbms_stats.auto_sample_size
Method_Opt       for all columns size auto
No Invalidate    dbms_stats.auto_invalidate
Granularity      auto
Publish          TRUE
select
   dbms_stats.get_prefs(
      pname   => 'publish',
      ownname => 'pkg',
      tabname => 'employees')
into
   prefs
from
   dual;
Here is a gather table statistics example. In the example below, Oracle will automatically
select the columns for which it will collect histograms.
Check whether the statistics collected are pending by using a query on the
dba_tab_pending_stats view.
select
*
from
dba_tab_pending_stats;
Suppose that we have tested the performance of these new statistics and we want to
use them for everybody. Then publish these statistics
using the publish_pending_stats procedure as follows:
begin
   dbms_stats.publish_pending_stats(
      ownname => 'pkg',
      tabname => 'employees',
      force   => TRUE);
end;
/
--Check the table again and you will not find pending stats for this table
select
*
from
dba_tab_pending_stats;
Now two new forms of gathering statistics are shown: multi-column and expression.
They are part of a new statistics feature of Oracle 11g named Extended Statistics.
1. Multi-column statistics: Oracle 11g introduces multi-column statistics to give
the CBO the ability to more accurately select rows when the WHERE clause
contains multi-column conditions or joins.
2. Expression statistics: When using a function on a column in a WHERE clause,
the CBO cannot accurately identify the selectivity of the column. Expression
statistics are created to offer the CBO better selectivity estimates for the explain plan.
The next example shows how to gather both multi-column and expression statistics
using the create_extended_stats procedure.
show user
User is "pkg"
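A sketch of creating a column-group extension and then gathering statistics for it; the table and column names follow the example schema, and the exact method_opt value is illustrative:

```sql
--Create the column-group extension (returns the generated extension name)
select
   dbms_stats.create_extended_stats(
      ownname   => 'pkg',
      tabname   => 'employees',
      extension => '(employee_id,department_id)')
from
   dual;

--Gather statistics so the extension is populated
begin
   dbms_stats.gather_table_stats(
      ownname    => 'pkg',
      tabname    => 'employees',
      method_opt => 'for all columns size auto for columns (employee_id,department_id)');
end;
/
```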
Remember that the value for all columns needs to be used in method_opt in order to
gather statistics on grouped columns. Otherwise, the column list must be specified
explicitly using, for example, for columns (employee_id,department_id). Next,
let's explore the create_extended_stats feature.
Package dbms_stats.create_extended_stats
One of the most exciting new features of Oracle 11g in the dbms_stats package is
specifically the ability to aid complex queries by providing extended statistics to the
cost-based optimizer (CBO). The 11g extended statistics are intended to improve the
optimizer's guesses for the cardinality of combined columns and of columns that are
modified by a built-in or user-defined function.
In this example, the four-way table join only returns 18 rows, but the query carries
9,000 rows in intermediate result sets, slowing down the SQL execution speed.
If the sizes of the intermediate results could be predicted more accurately by
tipping off the optimizer with extended statistics, the table-join order could be
resequenced to carry less intermediate baggage during the four-way table join; in this
example, carrying only 3,000 intermediate rows between the table joins.
Figure 5.2: 11g Extended Statistics Help the CBO Predict Inter-table Join Result Set
Sizes
The Oracle SQL optimizer has always been ignorant of the implied relationships
between data columns within the same table. While the optimizer has traditionally
analyzed the distribution of values within a column, it does not collect value-based
relationships between columns.
For example, analyzing a table within dbms_stats will not tell the optimizer about
implied data relationships:
■ A specific city_name column correlates to specific state_name column values
■ A zip_code column value is only NOT NULL when country_code='USA'
■ A province name is NULL when country_code='USA'
■ A current_age column is related to the date_of_birth column
■ The senior_citizen_flag='Y' correlates to rows where sysdate-date_of_birth > 65
■ The zodiac_sign column directly relates to the value of month_of_birth
If Oracle knew about the relationships between the columns, it could look into the
distribution of both columns and come up with a more accurate estimate of extended
rows returned, leading to better execution plans. Obviously, the DBA must know
about these unique data relationships. The more the DBA knows their data, the
better they can help tip off the optimizer about these hidden correlations between
column values.
dbms_stats.create_extended_stats (
ownname varchar2,
tabname varchar2,
extension varchar2)
return varchar2
The important argument is the extension because it specifies the relationship between
the columns. The dbms_stats.create_extended_stats extension can be either a column
group or an expression.
For example, column histograms and column relationships might need to be created where the
total_price column relates to the row expression total_price =
product_price + sales_tax.
-- ****************************************
-- Display the current column histograms
-- ****************************************
select
   column_name,
   histogram
from
   user_tab_col_statistics
where
   table_name = 'sales';
-- ****************************************
-- Create the extended optimizer statistics
-- ****************************************
select
   dbms_stats.create_extended_stats
      (NULL, 'sales', '(product_price+sales_tax)')
from dual;
show user
User is "pkg"
lib=Oracle: /dbebs/products/rdbms/lib/%s_mapdummy%
Next, we see some views that can be used to get information about the mapping
feature.
select
   *
from
   v$map_library;
--Information about file mapping structures in the shared memory
select
*
from
v$map_file;
--Information about element mapping structures
select
*
from
v$map_element;
--Information about all subelement mapping structures
select
*
from
v$map_subelement;
To keep a large volume of trace data from being generated, enable debug
information so that only a specified program unit is traced, by running this
command before creating the program:
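The command in question is the session-level debug setting; program units compiled while it is enabled carry the debug information that dbms_trace uses:

```sql
alter session set plsql_debug = true;
```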
It is possible to pause and resume trace collection using the trace_pause and
trace_resume constants. This also prevents a large volume of data from being created.
The number of events to be traced can be limited as well by using the trace_limit
constant. This constant limits the number of trace events to 8192. This value
can be modified by setting the 10940 event to level n, where trace events = 1024 * n.
In the following example, the use of dbms_trace and its procedures will be shown.
user is "sys"
@?/rdbms/admin/tracetab.sql
exception
when others then
dbms_output.put_line('Error!!');
end;
/
--Execute the set_plsql_trace procedure with trace_all_lines constant
begin
The trace level can be defined using dbms_trace constants, thereby minimizing the
analysis time. An example is using the trace_all_exceptions constant to trace only
exceptions or by using the trace_limit constant to limit the number of lines that the
trace will generate in the trace table.
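A sketch combining these constants:

```sql
--Trace only exceptions and cap the volume of trace events
exec dbms_trace.set_plsql_trace(dbms_trace.trace_all_exceptions + dbms_trace.trace_limit);

--...run the program units to be traced here...

--Stop the trace collection
exec dbms_trace.clear_plsql_trace;
```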
Notice that when the trace is stopped using the clear_plsql_trace procedure, a row will
be inserted into the trace table with a comment like "PL/SQL trace stopped".
Package dbms_workload_repository
Oracle 10g contains a new feature called Automatic Workload Repository that is
basically used to maintain performance statistics and solve performance problems.
More than twenty functions and procedures compose the dbms_workload_repository
package. They are used to manage the AWR repository, display reports and create
baselines and snapshots.
Here is an example showing some of the more useful functions and procedures.
When modifying snapshot settings, the interval and the retention are set in minutes.
The topnsql parameter specifies the number of SQL statements to collect for each criterion,
such as elapsed time, CPU time, parse calls, shareable memory and version count.
begin
dbms_workload_repository.modify_snapshot_settings(
retention => 7200,
interval => 60,
topnsql => 10,
dbid => 123661118);
end;
/
--Modify the baseline window size (number of days). Needs to be less than
--or equal to the AWR retention time
begin
dbms_workload_repository.modify_baseline_window_size(
window_size => 7,
dbid => 123661118);
end;
/
Useful views for AWR are:
■ dba_hist_snapshot: View to find snapshot information
■ dba_hist_sql_plan: Contains time-series data about each object, table, index or
view involved in the query
■ dba_hist_wr_control: Controls information about the Workload Repository
■ dba_hist_baseline: Information about baselines
■ v$metric, v$metric_history, v$metricname: Views with metric information
There are more of them that start with dba_hist, and they are grouped by their type of
performance statistics:
■ Database waits
■ Metric statistics
■ Time model statistics
■ System statistics
■ SQL statistics
■ OS statistics
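For example, the available snapshots and the interval each one covers can be listed with a simple query against dba_hist_snapshot:

```sql
--List AWR snapshots and their capture times
select
   snap_id,
   begin_interval_time,
   end_interval_time
from
   dba_hist_snapshot
order by
   snap_id;
```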
--Function reports
--Using awr_report_text function
select
output
from
table (dbms_workload_repository.awr_report_text(
l_dbid => 123661118,
l_inst_num => 1,
l_bid => 191,
l_eid => 192));
OUTPUT
DB Time(s)           0.0      0.2    0.00
DB CPU(s)            0.0      0.1    0.00
Redo size           ,173.9  5,923.4
Logical reads        58.2    293.6
Block changes         6.4     32.1
Physical reads        0.1      0.5
Physical writes       0.6      3.2
Summary
By now, the reader should have a clear idea of how, when and which packages will
help in fixing performance problems in daily work, such as packages that advise on
indexes and materialized views, create profiles, trace sessions, collect statistics and
much more.
In addition to Oracle tuning, database backup is also one of the tasks that all DBAs
must know in order to keep their jobs. The next chapter examines how to use the more
important backup and recovery packages.
This chapter will demonstrate the main packages that should be used to back up and
recover a database. We will show examples of how, when and why to use them.
Package dbms_datapump
The dbms_datapump package is used to move data and metadata in and out of
databases. This package provides an API that is used as the base for the expdp and impdp
utilities. The dbms_datapump package can be used to execute tasks such as copying
datafiles to another database, using a direct path to load data, creating external tables
and using a network link to import data. The subsequent examples will show, step-
by-step, the important tasks that can be accomplished with dbms_datapump.
This package provides options for monitoring and controlling the job by simply
attaching to the job's session using the attach procedure. The roles
datapump_exp_full_database and datapump_imp_full_database must be granted to the user
who will be utilizing this package.
Code 6.1 - dbms_datapump_exp_full.sql
conn sys@orallg as sysdba
User is "pkg"
declare
handler number;
v_job_status varchar2(20);
v_get_status ku$_Status;
v_pct_completed number;
v_job_status_out ku$_JobStatus;
begin
--Operation could be export, import or sqlfile
--Job_mode could be full, schema, table, tablespace or transportable
--Name of the job
--Version of database objects to be extracted: latest, compatible or a
--specific value
handler := dbms_datapump.open(
operation => 'export',
job_mode => 'full',
job_name => 'Exp full db_10',
version => 'latest');
--File name
--Directory where the dump file will be created
--Specify the file type
dbms_datapump.add_file(
handle => handler,
filename => 'exp_full_19_dec_2009_5.dmp',
directory => 'data_pump_dir',
filetype =>dbms_datapump.ku$_file_type_dump_file);
v_pct_completed := 0;
v_job_status := 'undefined';
end;
/
select
j .job_name,
j.operation,
j .job_mode,
j.state,
j .degree,
j.datapump_sessions
from
dba_datapump_jobs j;
If more information about the sessions being executed needs to be found, use the
following query that will join the view v$session with dba_datapump_sessions.
select
s.sid,
s.username "User",
s .program "Program",
s.action "Job Name",
d .session_type "Type"
from
gv$session s,
dba_datapump_sessions d
where
s.saddr = d.saddr;
begin
v_handler := dbms_datapump.open(
operation => v_operation, --Operation could be export, import or sqlfile
job_mode => v_job_mode,   --Could be full, schema, table, tablespace
                          --or transportable
job_name => v_job_name, --Name of the job
version => 'latest');
--File name
--Directory where the dump file will be created
--Specify the file type
dbms_datapump.add_file(
handle => v_handler,
filename => v_dump_file_name,
directory => v_directory,
filetype =>dbms_datapump.ku$_file_type_dump_file);
--Add logfile
dbms_datapump.add_file(
dbms_datapump.start_job(
handle => v_handler,
service_name => 'dbms');
percent_done := 0;
job_state := 'undefined';
while (job_state != 'completed') and (job_state != 'stopped') and
(job_state != 'not running') loop
dbms_datapump.get_status(v_handler,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip, -1, job_state, v_sts);
js := v_sts.job_status;
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
end loop;
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(v_handler);
end;
Execute this procedure once to see the results. Note that on this execution, only the
employees table will be exported.
exec exp_proc(
v_job_name => 'job_testl',
v_dump_file_name => 'exp_dump_%U.dmp',
v_operation => 'export',
v_job_mode => 'table',
v_table => 'employees',
v_parallel => 1,
v_directory => 'data_pump_dir',
v_logfile => 'exp_log_filel.log');
A dump file containing the employees table data will be generated. This file is named
exp_dump_01.dmp, along with a log file named exp_log_file1.log. The output of the log
file is shown below:
In addition, this procedure checks the percentage of execution and shows it at the end
with the handler number and the final job state. This monitoring block could be
migrated to a new procedure that would be used just for monitoring jobs being
executed.
select
s .sid,
s.username "User",
s .program "Program",
s.action "Job Name",
d. session_type "Type"
from
gv$session s,
dba_datapump_sessions d
where
s.saddr = d.saddr;
Package dbms_file_transfer
The dbms_file_transfer package is used to transfer and copy binary files between or
within databases. Binary files such as archive logs, RMAN backups, spfiles, datafiles,
change tracking files, Data Guard configuration and Data Pump files can be used
with this package. Some possible operations completed using the dbms_file_transfer
package are:
■ Create a file copy within the same database
■ Copy files from a remote database
■ Copy local files into a remote database
Procedures used for these tasks are:
■ copy_file: Copies files from the source directory to a destination directory on the
same database. Can be used to copy files between ASM and file system.
■ get_file: Copies files from a remote database. Can be used to copy files between
ASM and file system.
■ put_file: Copies a local file to a remote database. Can be used to copy files
between ASM and file system.
Review these commands one procedure at a time. First, a package called
pkg_file_transfer is created that wraps all three procedures of the dbms_file_transfer package.
Create the package specification with all three procedures inside it to test a simple datafile copy.
--Create package
create or replace package pkg_file_transfer as
procedure proc_file_transfer_copy (
v_dir_source in varchar2,
v_dir_dest in varchar2,
v_file_name_orig in varchar2,
v_file_name_dest in varchar2,
v_file_type in varchar2);
procedure proc_file_transfer_get (
v_dir_source in varchar2,
v_dir_dest in varchar2,
v_source_database in varchar2,
v_file_name_orig in varchar2,
v_file_name_dest in varchar2,
v_file_type in varchar2);
procedure proc_file_transfer_put (
v_dir_source in varchar2,
v_dir_dest in varchar2,
v_file_name_orig in varchar2,
v_file_name_dest in varchar2,
v_dest_database in varchar2,
v_file_type in varchar2);
end pkg_file_transfer;
/
Now, create the package body. Notice the code of each procedure example.
The first procedure, proc_file_transfer_copy, is used to copy a datafile from one place to
another and then make the database use the copy instead, so the original file is not
deleted automatically. Note the parameters used; they keep the procedure generic so it
can be reused with the dbms_file_transfer package regardless of which datafile is involved.
procedure proc_file_transfer_copy (
v_dir_source in varchar2,
v_dir_dest in varchar2,
v_file_name_orig in varchar2,
v_file_name_dest in varchar2,
v_file_type in varchar2) as
v_tbs varchar2(30);
v_old_path varchar2(100);
v_new_path varchar2(100);
begin
select
tablespace_name
into v_tbs
from
dba_data_files
where
file_name like '%'||v_file_name_orig;
dbms_file_transfer.copy_file(
source_directory_object => v_dir_source,
source_file_name => v_file_name_orig,
destination_directory_object => v_dir_dest,
destination_file_name => v_file_name_dest);
dbms_output.put_line(a => 'File '||v_file_name_orig||' copied from
'||v_dir_source||' to directory '||v_dir_dest||';');
select
directory_path
into v_old_path
from
dba_directories
where
directory_name=v_dir_source;
select
directory_path
into v_new_path
from
dba_directories
where
directory_name=v_dir_dest;
execute immediate
'alter database rename file '''
||v_old_path
||v_file_name_orig
||''' to '''
||v_new_path
||v_file_name_dest
||'''';
execute immediate
'alter tablespace '||v_tbs||' online';
end;
The second procedure, proc_file_transfer_get, is used to copy a remote datafile from the
ORADB database. Make sure that the correct directories are created to avoid a "file
does not exist" error.
procedure proc_file_transfer_get (
v_dir_source in varchar2,
v_dir_dest in varchar2,
v_source_database in varchar2,
v_file_name_orig in varchar2,
v_file_name_dest in varchar2,
v_file_type in varchar2) as
v_tbs varchar2(30);
v_old_path varchar2(100);
v_new_path varchar2(100);
begin
select
tablespace_name
into v_tbs
from
dba_data_files
where
file_name like '%'||v_file_name_orig;
--Put the tablespace offline on the remote database before copying
put_tbs_offline@oradb(v_tbs);
dbms_file_transfer.get_file(
source_directory_object => v_dir_source,
source_file_name => v_file_name_orig,
source_database => v_source_database,
destination_directory_object => v_dir_dest,
destination_file_name => v_file_name_dest);
dbms_output.put_line(a =>
'File '
||v_file_name_orig
||' moved from '
||v_dir_source
||' to directory '
||v_dir_dest
||';');
select
directory_path
into v_old_path
from
dba_directories
where
directory_name=v_dir_source;
select
directory_path
into v_new_path
from
dba_directories
where
directory_name=v_dir_dest;
execute immediate
'alter database rename file '''
||v_old_path
||v_file_name_orig
||''' to '''
||v_new_path
||v_file_name_dest
||'''';
--Put the tablespace back online on the remote database
put_tbs_online@oradb(v_tbs);
end;
The last procedure in this example, proc_file_transfer_put, is used to copy a local datafile
to a remote database named ORADB.
procedure proc_file_transfer_put (
v_dir_source in varchar2,
v_dir_dest in varchar2,
v_file_name_orig in varchar2,
v_file_name_dest in varchar2,
v_dest_database in varchar2,
v_file_type in varchar2) as
v_tbs varchar2(30);
v_old_path varchar2(100);
v_new_path varchar2(100);
begin
dbms_file_transfer.put_file(
source_directory_object => v_dir_source,
source_file_name => v_file_name_orig,
destination_directory_object => v_dir_dest,
destination_file_name => v_file_name_dest,
destination_database => v_dest_database);
dbms_output.put_line(a =>
'File '
||v_file_name_orig
||' moved from '
||v_dir_source
||' to directory '
||v_dir_dest
II';');
select
directory_path
into v_old_path
from
dba_directories
where
directory_name=v_dir_source;
select
directory_path
into v_new_path
from
dba_directories
where
directory_name=v_dir_dest;
execute immediate
'alter database rename file '''
||v_old_path
||v_file_name_orig
||''' to '''
||v_new_path
||v_file_name_dest
||'''';
end;
end pkg_file_transfer;
The following two procedures must be created on a remote database; in this instance,
the ORADB database will be used.
--Create this procedure on remote database to change status of tablespace
create or replace procedure put_tbs_offline(
v_tbs_name in varchar2)
as
begin
execute immediate
'alter tablespace '||v_tbs_name||' offline normal';
end;
/
--Create this procedure on remote database to change status of tablespace
create or replace procedure put_tbs_online(
v_tbs_name in varchar2)
as
begin
execute immediate
'alter tablespace '||v_tbs_name||' online';
end;
/
exec pkg_file_transfer.proc_file_transfer_copy(
v_dir_source => 'dir_source',
v_dir_dest => 'dir_dest',
v_file_name_orig => 'o1_mf_users_5gfw3fy_.dbf',
v_file_name_dest => 'users01.dbf',
v_file_type => 'datafile');
The next example will copy a remote datafile from the users tablespace to a local
directory defined by the v_dir_dest parameter. The new file name will be
users_1_bkp.dbf. The source file is identified by the remote database link used in the
v_source_database parameter.
exec pkg_file_transfer.proc_file_transfer_get(
v_dir_source => 'dir_source',
v_dir_dest => 'dir_dest',
v_source_database => 'ORADB',
v_file_name_orig => 'usersOl.dbf',
v_file_name_dest => 'users_l_bkp.dbf',
v_file_type => 'datafile' );
Note that the v_file_type parameter was created to allow the user to distinguish the
specific types of files that can be copied using the dbms_file_transfer package, such as
ASM files, dump files, spfiles and others.
Feel free to extend this code to fit specific requirements.
Package dbms_transaction
Commonly used SQL commands like commit, rollback, set transaction read only, and alter
session advise commit can also be used inside procedures through the dbms_transaction
package. Its purpose is to manage distributed transactions using the procedures and
functions of the package. The dbms_transaction package allows us to commit, rollback,
purge or create savepoints for transactions. Created automatically since Oracle 7.3.4 by
the dbmsutil.sql script, the dbms_transaction package is created under the sys schema and
a public synonym is also created, granting the execute privilege to public.
There are more than fifteen procedures and functions. Rather than describing each
one, we will give examples utilizing some of the more important ones. An easy way to
find the transaction ID and the current undo segment being used is by running the
local_transaction_id function as shown here:
set serveroutput on
declare
v_local_tran_id varchar2(40);
begin
--Start a read/write transaction
dbms_transaction.read_write;
select
dbms_transaction.local_transaction_id
into
v_local_tran_id
from
dual ;
dbms_output.put_line('My local transaction id is: '||v_local_tran_id);
end;
/
My local transaction id is: 7.9.1470
NAME
_SYSSMU7_2224510368$
The undo segment can be identified by the number before the first dot. In the case
above, the undo segment number is 7.
If the transaction has a value of yes in the column mixed, then use the purge_mixed
procedure. This could happen when using commit force or rollback force and one site
commits the transaction while the other site rolls it back. Purge should
only be used when the database is lost or has been recreated.
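A sketch of the call (the transaction ID shown is illustrative):

```sql
--Purge a mixed distributed transaction by its local transaction ID
begin
   dbms_transaction.purge_mixed('7.9.1470');
end;
/
```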
The next example shows how to find and purge lost transactions.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
SQL>
show
user
User is "pkg"
SQL>
--Using purge_lost_db_entry
begin
dbms_transaction.purge_lost_db_entry('local_tran_id');
end;
/
commit;
begin
dbms_transaction.purge_lost_db_entry('local_tran_id');
end;
/
--Other useful queries that generate the purge commands are shown below:
select
'commit force '''||local_tran_id||''';'
from
dba_2pc_pending;
select
'exec dbms_transaction.purge_lost_db_entry('''||local_tran_id||''');'
from
dba_2pc_pending
where
state='forced commit';
7.12.102 prepared
--If you don't find any transaction, then you should delete it manually
set transaction use rollback segment SYSTEM;
delete from
sys.pending_trans$
where
local_tran_id = '7.12.102';
delete from
sys.pending_sessions$
where
local_tran_id = '7.12.102';
delete from
sys.pending_sub_sessions$
where
local_tran_id = '7.12.102';
commit;
All this information may be found in MOSC Note: 401302.1. If more information
about how to resolve transaction problems is desired, take a time to read MOSC Note
126069.1. There, many sample scenarios will be found.
select ...
from ...
as of timestamp|scn
where ...;
Reading data by specifying a timestamp or a system change number (SCN) was also
common at the time.
Internally, every select is a select as of timestamp sysdate. Only as of version 9.2 was the
normal user able to specify a timestamp to read the consistent data. This concept was
extended in Oracle's first 10g release to the row level. Consequently, the flashback
versions query functionality could be used to view all versions of a given row between
two timestamps or two SCNs.
Once the transactions had been identified, it was also possible to look at the undo_sql
column of a view called flashback_transaction_query. The transaction's ID, using the
pseudo column versions_xid, could also be found. Here, changes which were made by
the same transaction could be found, given the appropriate select any transaction system
privilege.
select
undo_sql
from
flashback_transaction_query
where
xid=hextoraw('my_transaction_id');
This gave the principal option to spool the undo SQL from the
flashback_transaction_query view and execute it to undo changes. However, this was not
practical because of dependencies in transactions and applications which made it
difficult to undo things in the correct sequence. Luckily, this has been addressed in
Oracle 11g, of which a closer review of the new functionality will be given shortly.
An example of the unique 10gR1 functionalities is the flashback drop. The flashback drop
utilizes the so-called recycle bin, which can be disabled with the recyclebin parameter
in Oracle 10gR2. The recycle bin had been enabled by default in prior releases, and in
10gR1 this parameter is the hidden parameter _recyclebin. If a DBA drops a table in
10g, it is internally renamed and the segment stays where it is. These extents are
available for reuse or review in dba_free_space, but the server tries not to use them as
long as possible.
Oracle starts reusing these extents before auto-extending a datafile. Once the server
has reused the extents, it is no longer possible to flash back to before the drop. Otherwise,
the user can flash back the drop, which simply renames the object back to its original
name. Indexes and triggers are also flashed back. Foreign key constraints are not
flashed back with the table. They must be manually recreated!
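A sketch of the operation (table and constraint names are illustrative):

```sql
--Drop a table and restore it from the recycle bin
drop table tab_example;
flashback table tab_example to before drop;

--Foreign key constraints are not restored and must be recreated manually
alter table tab_example add constraint fk_example
   foreign key (parent_id) references tab_parent (id);
```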
Since the object remains the same as before the drop operation, it also becomes
possible for the DBA to read from it using the new name. The flashback query also
offers the ability to read the object as it was in the past by using the new name and
select.
The DBA cannot flash back to before DDL statements: if the
definition of the table changed in between, the flashback operation would error out.
--First it is needed to enable archive log and add supplemental log data on
database
conn / as sysdba
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
alter system archive log current;
alter database add supplemental log data;
Next, we create the tables that will be used in this example. The tab_dbms_flashback1
table receives test data and tab_rec_scn records system change numbers before each
transaction.
--Test table
create table
tab_dbms_flashback1 (
col1 varchar2(6),
col2 number);
Now, the PL/SQL block below inserts rows in the tab_dbms_flashback1 table using
different transactions, and then these rows are deleted.
declare
rec_scn number;
begin
rec_scn := dbms_flashback.get_system_change_number;
insert into tab_rec_scn values (rec_scn,systimestamp);
dbms_output.put_line(a => 'SCN at Part1 is '||rec_scn);
insert into tab_dbms_flashback1 values ('Part11',rec_scn);
insert into tab_dbms_flashback1 values ('Part12',rec_scn);
commit;
dbms_lock.sleep(5);
rec_scn := dbms_flashback.get_system_change_number;
insert into tab_rec_scn values (rec_scn,systimestamp);
dbms_output.put_line(a => 'SCN at Part2 is '||rec_scn);
insert into tab_dbms_flashback1 values ('Part21',rec_scn);
insert into tab_dbms_flashback1 values ('Part22',rec_scn);
commit;
dbms_lock.sleep(5);
rec_scn := dbms_flashback.get_system_change_number;
insert into tab_rec_scn values (rec_scn,systimestamp);
dbms_output.put_line(a => 'SCN at Part3 is '||rec_scn);
Imagine that we need to rollback these rows because the delete was a mistake. We will
use some queries to see what is going on with my table data right now.
--Query the main table and verify that all rows were deleted
select
*
from
tab_dbms_flashback1
order by
col1;
open cur1;
loop
fetch cur1 into v_col1,v_col2;
exit when cur1%notfound;
if (v_col1 is not null) then
dbms_flashback.disable;
insert into
tab_dbms_flashback1
values
(v_col1,
v_col2);
commit;
end if;
end loop;
exception
when others then
raise_application_error(-20001,'An error was encountered -
'||sqlcode||' error '||sqlerrm);
end;
Next, we obtain the system change number for the exact time prior to the accidental
delete command. Then we roll back all rows deleted using the value of versions_start_scn
in the query below. The value must be the exact time to which the rows should be
rolled back.
select
versions_xid,
versions_startscn,
versions_endscn,
versions_operation,
col1,
col2
from
tab_dbms_flashback1
versions between scn minvalue and maxvalue;
The value can be used from the procedure created to get all rows deleted in the table
quickly and easily.
--Table has no rows
select * from tab_dbms_flashback1 order by col1;
COL1   COL2
COL1   COL2
Part11 2405167
Part12 2405167
Part21 2405174
Part22 2405174
Part31 2405176
Part32 2405176
The first step is to create the test tables and their constraints. They are named
tab_dbms_flashback2 and tab_dbms_flashback3.
Next, transactions are executed in order to insert some rows on both tables.
select
*
from
tab_dbms_flashback2;
select
*
from
tab_dbms_flashback3;
Now, find the transaction_id that should be rolled back using the
flashback_transaction_query view as below:
Next, we execute the transaction_backout procedure using the xid of the transaction to
be backed out and note the results.
--Rollback the first transaction. This will remove the first 4 rows
--inserted on both tables
select * from tab_dbms_flashback2;
set serveroutput on
declare
xa sys.xid_array := sys.xid_array();
begin
xa := sys.xid_array('0A001F0047070000');
dbms_flashback.transaction_backout(numtxns => 1, xids => xa, options =>
dbms_flashback.cascade);
/
The tables can now be queried again and the rows that were targeted are no longer
there. Two rows from the tab_dbms_flashback2 table were deleted as well as two rows
from tab_dbms_flashback3.
select
*
from
tab_dbms_flashback2;
COL1 COL2
select
*
from
tab_dbms_flashback3;
COL1 COL2
2A 3
26 4
3A 5
Together, the column sql_redo and the SCN of v$logmnr_contents can provide relevant
information on the activity within the database at the time the analyzed files were
produced. In addition, this view contains the segment name and owner which is
useful in further identification of the objects being altered.
The v$logmnr_contents view contains log history information. When a select statement is
executed against the v$logmnr_contents view, the archive redo log files are read
The Log Miner operations are conducted using the dbms_logmnr and dbms_logmnr_d
PL/SQL package procedures. Data of interest is retrieved using the v$logmnr_contents
view. The main steps are as follows:
1. Ensure that supplemental logging is on. Log Miner requires that supplemental
logging be enabled prior to starting the Log Miner session.
2. Specify a Log Miner dictionary. Either use the dbms_logmnr_d.build procedure or
specify the dictionary when the Log Miner process is started, depending on the
type of dictionary that is to be used.
3. Specify a list of redo and archive log files for analysis. For this, use the
dbms_logmnr.add_logfiles procedure, or direct Log Miner to create a list of log files
for analysis automatically when Log Miner is started.
4. Then start the Log Miner session by using the start_logmnr procedure.
5. Request the redo data of interest. For this, query the v$logmnr_contents view.
6. End the Log Miner session by using the end_logmnr procedure of dbms_logmnr.
The next example demonstrates the method of using Log Miner to recover deleted
historical data from the archived redo logs.
conn / as sysdba
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
alter system archive log current;
alter database add supplemental log data;
Next, run the PL/SQL block shown below. It will get the SCN before and after an
insert command is done in the tab_logmnr table.
--Simulate one transaction and get this SCN before and after it
truncate table tab_logmnr;
create table tab_logmnr (col1 varchar2(10), col2 number);
set serveroutput on
declare
v_scn number;
begin
v_scn := dbms_flashback.get_system_change_number;
dbms_output.put_line(a => 'SCN before the insert is : '||v_scn);
insert into tab_logmnr values ('TT1',21);
commit;
v_scn := dbms_flashback.get_system_change_number;
dbms_output.put_line(a => 'SCN after the insert is : '||v_scn);
end;
/
The next step is to specify the redo log files that will be processed by Log Miner and
their paths by using the add_logfile procedure:
--This query builds the commands to add the redo log files that will be
--processed by Log Miner
select
'exec dbms_logmnr.add_logfile(logfilename => '''||member||''', options =>
dbms_logmnr.new);'
from
v$logfile
where
is_recovery_dest_file='no';
--Commands generated. Copy the output, select the lines that contain the
--logfile numbers you are interested in analyzing, and then run the resulting script.
alter system switch logfile;
begin
dbms_logmnr.add_logfile(
logfilename =>
'/u01/app/oracle/oradata/DBMS/onlinelog/o1_mf_1_5gfvrzvt_.log', options =>
dbms_logmnr.new);
dbms_logmnr.add_logfile(
logfilename =>
'/u01/app/oracle/oradata/DBMS/onlinelog/o1_mf_2_5gfvs0p8_.log', options =>
dbms_logmnr.new);
dbms_logmnr.add_logfile(
logfilename =>
'/u01/app/oracle/oradata/DBMS/onlinelog/o1_mf_3_5gfvs2m0_.log', options =>
dbms_logmnr.new);
end;
/
begin
dbms_logmnr.start_logmnr(
options => dbms_logmnr.dict_from_online_catalog
+ dbms_logmnr.committed_data_only);
end;
/
Now use the build procedure of the dbms_logmnr_d package to build the dictionary.
This information is extracted to a flat file or to the redo log file.
If using redo log files, query the v$archived_log table to see which log archives contain
an extracted dictionary:
select
name
from
v$archived_log
where
dictionary_begin='yes';
select
name
from
v$archived_log
where
dictionary_end='yes';
Check if the executed command is in the generated dictionary file by using a grep
command:
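For example, from SQL*Plus (the dictionary file path is illustrative):

```sql
--Search the generated dictionary file for the traced table
host grep -i tab_logmnr /u01/app/oracle/admin/dbms/logmnr/dictionary.ora
```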
Alternately, a simple query with v$logmnr_contents can be used and the sql_undo
command can be ready to use at anytime.
select
sql_redo,sql_undo
from
sys.v_$logmnr_contents c
where
c.table_name='tab_logmnr';
SQL_REDO SQL_UNDO
The undo command was discovered. An error can be rolled back using this command
any time it is desired.
The asynchronous method uses online redo log files to read and get data after it has
been committed. The synchronous method is being used in this example. There
are three modes for capturing changes with the asynchronous method. More
information on these methods can be found in the Oracle Database Data
Warehousing Guide documentation.
Next, designate both the publisher user (who will own the change table) and the
subscriber user (who will be the one that can see data in the change table).
In order to use the change data capture feature, some database initialization
parameters should be modified. In this example, only one parameter needs to be
changed: java_pool_size should have a minimum value of 50000000 bytes.
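A sketch of that change (whether it takes effect immediately or requires a restart depends on the current memory configuration):

```sql
--Set the Java pool to the minimum size required by Change Data Capture
alter system set java_pool_size = 50000000 scope=spfile;
```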
Create the change set that will capture changes from sync_source. This is the default
source when using the synchronous method.
--Create the change set that will capture change from sync_source source
that is the default source when using Synchronous method
begin
dbms_logmnr_cdc_publish.create_change_set(
change_set_name => 'dbms_set_cdc_sync',
description => 'Change set test.',
change_source_name => 'sync_source');
end;
/
In a case where this table has already been created, execute the drop_change_table
procedure to drop it:
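A sketch of that call (the owner and change table name are illustrative):

```sql
--Drop an existing change table before recreating it
begin
   dbms_logmnr_cdc_publish.drop_change_table(
      owner => 'pkg',
      change_table_name => 'tab_cdc_ct',
      force_flag => 'y');
end;
/
```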
Now grant the select privilege to the subscriber user; this is the person who will view
the changed data. After that, simulate some transactions on the source table. The
changes on the change table will then be seen.
--Doing some changes on source table and getting these new values on change
table
conn pkg/pkg@dbms
insert into tab_cdc values (1,'Paulo','Portugal');
insert into tab_cdc values (2,'John','Wayne');
insert into tab_cdc values (3,'Mike','Morgan');
insert into tab_cdc values (4,'Elber','Portugal');
commit;
conn user_subscriber/dbms@dbms
col id for a5
col first_name for a15
col last_name for a15
col operation$ for a2
select
operation$,
commit_timestamp$,
id,
The next step is to create a subscription that will receive changed data to a view used
by the subscriber user.
--Create a subscription
conn user_subscriber/dbms@dbms
begin
dbms_logmnr_cdc_subscribe.create_subscription(
change_set_name => 'dbms_set_cdc_sync',
description => 'Change set example',
subscription_name => 'subscription_dbms_1');
end;
/
Use the subscribe procedure to specify which columns will be incorporated. In this
example, just the first_name and last_name columns will be seen by the subscriber.
begin
dbms_logmnr_cdc_subscribe.subscribe(
subscription_name => 'subscription_dbms_1',
source_schema => 'pkg',
source_table => 'tab_cdc',
column_list => 'first_name, last_name',
subscriber_view => 'tab_cdc_sub_view');
end;
/
Now use the activate_subscription procedure so the subscription will be ready to receive
changed data. After that, check the view created for this subscriber and make some
changes to the source table, logging in as the owner of the source table.
begin
dbms_logmnr_cdc_subscribe.activate_subscription(
subscription_name => 'subscription_dbms_1');
end;
/
--Check subscribe view and get the results
select
first_name,
last_name
from
tab_cdc_sub_view;
conn pkg/pkg@dbms
insert into tab_cdc values (5,'Fred','Arley');
commit;
Call the extend_window procedure in order to retrieve the next available change data.
If the data of this subscriber needs to be purged, execute the purge_window procedure.
After that, check the view again to confirm that the view is empty.
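A sketch of the extend_window call:

```sql
--Extend the subscription window to see the next set of change data
begin
   dbms_logmnr_cdc_subscribe.extend_window(
      subscription_name => 'subscription_dbms_1');
end;
/
```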
--If you need to purge the data of this subscriber just execute the
purge_window procedure
begin
dbms_logmnr_cdc_subscribe.purge_window(
subscription_name => 'subscription_dbms_1');
end;
/
--Check the view again and notice that after executing the purge_window
--procedure the values are gone
select
first_name,
last_name
from
tab_cdc_sub_view;
begin
dbms_logmnr_cdc_subscribe.drop_subscription(
subscription_name => 'subscription_dbms_1');
end;
/
Here are some useful views for finding information about the change data capture
configuration:
--Views used to obtain information about Change Data Capture
select * from all_change_sources;          --Display all change sources
select * from all_change_propagations;     --Display all change propagations
select * from all_change_propagation_sets; --Display all change propagation sets
select * from all_change_sets;             --Display all change sets
select * from all_change_tables;           --Display all change tables
select * from dba_source_tables;           --Display all source tables
select * from dba_subscriptions;           --Display all subscriptions
select * from dba_subscribed_tables;       --Display all subscribed tables
select * from dba_subscribed_columns;      --Display all subscribed columns
Package dbms_repair
Another approach for finding and correcting corrupted data in tables and indexes is
the dbms_repair package. With this method it is unnecessary to drop and recreate
objects, which in some instances is counter-productive or not even possible. With the
dbms_repair package, not only is it unnecessary to drop the object, but the corrupted
object may continue to be used while the repair operation is performed.
Some limitations do exist when using this package and they are:
■ Clusters are not supported with the check_object procedure
■ LOB indexes and IOT are not supported
■ Bitmap indexes or function-based indexes are not supported with the
dump_orphan_keys procedure
■ Keys with more than 3950 bytes are not supported with the dump_orphan_keys
procedure
There are other ways of finding corrupted data, including the db_verify utility, the
analyze table command and db_block_checking. The initialization parameters
db_block_checking and db_block_checksum need to be set to TRUE for blocks to be
checked before they are marked as corrupt. Two tables must first be created
under the SYS schema before the dbms_repair utility can be used. Fortunately, a
procedure in the package itself (admin_tables) creates these tables and eliminates the
need to hunt for a script in $ORACLE_HOME/rdbms/admin.
dbms_repair.admin_tables (
table_name in varchar2,
table_type in binary_integer,
action in binary_integer,
tablespace in varchar2 default NULL);
begin
dbms_repair.admin_tables(
table_name => 'repair_test',
table_type => dbms_repair.repair_table,
action => dbms_repair.create_action,
tablespace => 'scottwork'
);
end;
/
begin
dbms_repair.admin_tables(
table_name => 'orphan_test',
table_type => dbms_repair.orphan_table,
action => dbms_repair.create_action,
tablespace => 'scottwork'
);
end;
/
The two tables are now created. A description of the two tables reveals the following:
Repair tables contain the objects that have corrupted blocks. Orphan tables, on
the other hand, contain corrupted nodes of indexes that have no predecessor
in the tree structure of the index, hence the name.
In the next example, see how to repair corrupted data using the dbms_repair package.
conn / as sysdba
begin
dbms_repair.admin_tables(
table_name => 'repair_tab',
table_type => dbms_repair.repair_table,
action => dbms_repair.create_action,
tablespace => 'tbs_data');
end;
/
--Create orphan table in tablespace tbs_data. The name must start with
--the orphan_ prefix.
begin
dbms_repair.admin_tables(
table_name => 'orphan_tab',
table_type => dbms_repair.orphan_table,
action => dbms_repair.create_action,
tablespace => 'tbs_data');
end;
/
Make a backup of the table that will be used for this test. For this example, the table
name is locations.
create tablespace
tbs_corrupt
datafile '/u01/app/oracle/oradata/dbms/datafile/tbs_corrupt.dbf' size
100M;
Make a full backup of the database. At the end of this example, the corrupted block
will be recovered from this backup.
Let's corrupt some data in this table. First, create a file named bad_file.dd in
/tmp. Insert any characters in it and save it prior to running the command below.
Use the dd command (on a Unix OS) generated by the query above to damage the disk
and, consequently, damage the table.
0+1 records in
0+1 records out
55 bytes (55 B) copied, 0.0313551 seconds, 1.8 kB/s
analyze table
pkg.bkp_locations
validate structure;
ERROR at line 1:
ORA-01578: ORACLE data block corrupted (file # 10, block # 131)
ORA-01110: data file 10:
'/u01/app/oracle/oradata/dbms/datafile/tbs_corrupt.dbf'
If a query is attempted against the bkp_locations table, an error similar to this will appear:
Next, use the check_object procedure to find which blocks were corrupted on the
bkp_locations table.
Mark the corrupt blocks if they are not yet marked by using the fix_corrupt_blocks
procedure.
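The check_object call referred to here can be sketched as follows; this is a hypothetical block that assumes the repair_tab table created earlier and the pkg.bkp_locations table used in this example:

set serveroutput on
declare
v_corrupt_count int := 0;
begin
--Find corrupted blocks and record them in the repair table
dbms_repair.check_object(
schema_name => 'pkg',
object_name => 'bkp_locations',
repair_table_name => 'repair_tab',
corrupt_count => v_corrupt_count);
dbms_output.put_line(a => 'Num corrupt blocks: ' || to_char(v_corrupt_count));
end;
/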
dbms_output.put_line(a => 'Num Fixed blocks: ' || to_char(v_fix_count));
end;
/
Now inform the database that users are permitted to execute DML operations on the
object that has corrupt blocks, using the skip_corrupt_blocks procedure.
--Now you can tell the database to permit users to execute DML operations
--on the object that has corrupt blocks using the skip_corrupt_blocks
--procedure
begin
dbms_repair.skip_corrupt_blocks(
schema_name => 'pkg',
object_name => 'bkp_locations',
object_type => dbms_repair.table_object,
flags => dbms_repair.skip_flag);
end;
/
Try to query the corrupted table again. DML operations are now permitted, but the
corrupted blocks are not visible. Therefore, the rows belonging to these blocks are
no longer seen, as shown below.
--Try to query the corrupted table again. DML operations are permitted now,
--but the corrupted blocks cannot be seen, so the rows belonging to them
--are not shown, as you see below
--Note that DML operations are allowed
insert into
pkg.bkp_locations
values ('10','DBA',10000,100000);
commit;
--Here you can now see just the new data, not the corrupted data
select
*
from
pkg.bkp_locations;
Here, the table indexes should be checked to make sure that no point exists for a
corrupt block in the table.
set serveroutput on
declare
v_key_count int;
begin
v_key_count := 0;
dbms_repair.dump_orphan_keys(
schema_name => 'pkg',
object_name => 'idx_ini_job_id',
object_type => dbms_repair.index_object,
repair_table_name => 'repair_tab',
orphan_table_name => 'orphan_tab',
key_count => v_key_count);
dbms_output.put_line(a => 'Num orphan keys: ' || to_char(v_key_count));
end;
/
To see if the table has skip_corrupt enabled, check the dba_tables view.
select
table_name,
skip_corrupt
from
dba_tables
where table_name='BKP_LOCATIONS';
Finally, analyze the table once again to check if there are still any corrupt blocks. This
example has shown how to repair and recover a corrupt block, thereby allowing the
user to access the table normally while permitting the user to execute DML
commands on the table.
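That final check can be as simple as re-running the analyze command on the same table; with the corrupt blocks now marked and skip_corrupt enabled, the sketch below should complete cleanly:

analyze table
pkg.bkp_locations
validate structure;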
Package dbms_streams_tablespace_adm
Despite being an Oracle Streams package, dbms_streams_tablespace_adm is included in
this chapter as its main purpose is copying and moving tablespaces between
databases. Before delving into examples of this package, we highlight some of its
concepts. The dbms_streams_tablespace_adm package provides different procedures for
working with two types of tablespaces.
1. Simple self-contained tablespaces: This is a tablespace whose objects do not have
any relationships to other objects outside the tablespace. For example, there are
no foreign keys referencing objects in other tablespaces, no indexes in the
tablespace are for tables outside of it, and no parts of a partitioned table in the
tablespace are stored in other tablespaces.
2. Self-contained tablespace set: Very similar to the above, but instead of a single
tablespace, it is a group composed of several tablespaces. All objects in the group
may reference only other objects in the group. To check whether or not a
tablespace is self-contained, use the transport_set_check procedure of the dbms_tts
package, shown in the next example.
Here are the main procedures of this package:
■ attach_simple_tablespace: Makes use of Data Pump in order to import a simple
tablespace, or a set when using the attach_tablespaces procedure. Once exported
create table
pkg.testl
tablespace
transp_1 as
select
*
from
dba_objects;
Note that we create the user pkg in the destination database if it does not already exist.
We also create the directory object where the datafiles will be exported:
The next step is to use the RMAN command that will prepare the tablespace set to be
transported. Make sure a backup is available before executing this step. In summary,
RMAN performs the following steps:
1. The auxiliary instance is created. This is where the restore and recovery is done.
2. A backup of the controlfile of the source database is restored and the auxiliary
instance is mounted.
3. A proper backup of the tablespace is restored in the auxiliary instance.
4. RMAN performs the recovery at the specified point chosen by the until
parameter.
5. The auxiliary instance is opened.
RMAN changes the tablespace mode to read-only and calls the Data Pump Export
utility, which then exports the structure of objects belonging to the tablespaces. This
is much faster than a traditional export as the data itself remains in the datafiles that
will be copied. Finally, RMAN will shut down, dropping the auxiliary instance and its
files.
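The RMAN step described above might look like the following sketch; the destination and auxiliary paths are hypothetical, and the SCN would come from the query against v$database shown next:

RMAN> transport tablespace transp_1, transp_2
tablespace destination '/u01/app/oracle/transp_tbs_dir_source'
auxiliary destination '/u01/app/oracle/aux_dest'
until scn 1234567;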
--From the RMAN command line, you will execute the following script. First,
--check the current SCN, or use another value as you wish:
select current_scn from v$database;
--Or use until time
--until time
"to_date('02 NOV 2007 14:37:00','DD MON YYYY hh24:mi:ss')";
--Where:
/*
The following command may be used to import the tablespaces.
Substitute values for <logon> and <directory>.
impdp <logon> directory=<directory> dumpfile='transp_tbs.dmp'
transport_datafiles='/u01/app/oracle/transp_tbs_dir_source/transp_1.dbf',
'/u01/app/oracle/transp_tbs_dir_source/transp_2.dbf'
*/
Note that two options for attaching the tablespace set into the destination database
will be found. Choose the second option which has the package exemplified on this
topic. The PL/SQL block makes use of the dbms_streams_tablespace_adm package. The
next step is simply copying and pasting it into a SQL session on the destination
database.
COUNT(*)
67491
COUNT(*)
67492
This example has shown how to transport a tablespace between databases using
RMAN together with the dbms_streams_tablespace_adm package. All other procedures
of this package are used to clone, attach, detach and pull tablespaces in or between
databases. The approach used in this example demonstrates the steps that can also be
used with these other procedures.
Remember to check the possible conversion values of the source and destination
database by using the v$db_transportable_platform view.
select
*
from
v$db_transportable_platform;
Next, execute the check_external function in order to list external objects that may
exist.
end;
/
The following directories exist in the database:
sys.transp_tbs_dir, sys.transp_tbs_source, sys.transp_tbs_dest,
sys.transp_tbs, sys.logmnr_dir, sys.dir_dest, sys.dir_source, sys.exp_dbms,
sys.dbms_sqltune_dir, sys.xmldir, sys.data_pump_dir
The database has external objects!
The functions of the dbms_tdb package are easy to execute and very helpful in
determining whether the conversion process is viable.
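As a sketch of how these functions are called, check_db can be invoked as below; the target platform name here is hypothetical, so take a valid value from v$db_transportable_platform:

set serveroutput on
declare
v_ok boolean;
begin
v_ok := dbms_tdb.check_db(
target_platform_name => 'Linux IA (64-bit)',
skip_option => dbms_tdb.skip_none);
if v_ok then
dbms_output.put_line(a => 'Database is ready for transport.');
else
dbms_output.put_line(a => 'Conditions prevent transport; see the output above.');
end if;
end;
/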
Package dbms_tts
One step in the process of transporting tablespaces between databases is checking
whether or not the tablespace is self-contained. This is accomplished with the dbms_tts
package. If the tablespace transport set is not self-contained, all of its violations are
inserted into a temporary table accessed through the transport_set_violations view. The
user must have the execute_catalog_role privilege to use this package.
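A minimal sketch of that check, using the tablespace names from this chapter's examples:

--Check whether the set is self-contained; violations, if any,
--are listed in the transport_set_violations view
exec dbms_tts.transport_set_check('users, tbs_data', true);

select * from transport_set_violations;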
Code 6.13 - dbms_tdb.sql
conn sys@ora11g as sysdba
select
name into v_tbs_name
from
v$tablespace
where
ts#=v_tbs_num;
dbms_output.put_line('The tablespace name is: '||v_tbs_name||' and number
is: '||v_tbs_num||'.');
end;
/
if dbms_tts.transport_char_set_check(
ts_list => 'users, tbs_data',
target_db_char_set_name => 'WE8ISO8859P15',
target_db_nchar_set_name => 'AL16UTF16',err_msg => v_error_msg) then
dbms_output.put_line(a => 'Tablespace char sets are OK!');
else
dbms_output.put_line(a => 'Tablespace char sets are incompatible!');
end if;
end;
/
--Procedure to check if the char set is compatible or not
set serveroutput on
begin
dbms_tts.transport_char_set_check_msg(
ts_list => 'users, tbs_data',
target_db_char_set_name => 'WE8ISO8859P15',
target_db_nchar_set_name => 'AL16UTF16');
end;
/
set serveroutput on
begin
dbms_tts.transport_char_set_check_msg(
ts_list => 'users, tbs_data',
target_db_char_set_name => 'ZHS16GBK',
target_db_nchar_set_name => 'AL16UTF16');
end;
/
Summary
This chapter showed how and when to use almost all packages designed for backup
and recovery purposes. In the next chapter, Oracle management and monitoring
packages will be explained and demonstrated.
Chapter 7: Oracle Management and Monitoring
Nowadays, database administrators have an assortment of GUI tools to help them
administer and monitor their databases. Yet none of these tools lets anyone know
which command is being executed by the tool and all that is happening behind the
scenes. This chapter will explain some of the more important packages that are used
by these GUI tools; even some that have not been explored or incorporated into a
GUI.
Oracle Enterprise Manager (OEM) Database Control and Grid Control, for instance,
are great tools offered by Oracle for monitoring Oracle databases. Some packages
give the user the option to generate scripts before executing them so they will know
which command will be executed. Since this is not a common feature, the user often
does not have the chance to become familiar with all scripts that may be internally
executed.
I will now show some of the main packages and give examples so the reader will
know how, when and why to use these packages.
Package dbms_alert
The dbms_alert package is created by executing the catproc.sql file and is owned by sys.
Once granted the execute privilege to dbms_alert, it can be executed by any software
component that can call a stored procedure including SQL*Plus, Java and Pro*C.
The dbms_alert package provides a mechanism for the database to notify a client, i.e.
anything listening, of an event asynchronously, which means that the application does
not need to periodically check for the occurrence of events. With dbms_alert, when an
event occurs, a notification is sent. Prior to dbms_alert, developers created a
polling process that checked the status of something in the database, like a completed
job, by looking for a table value that the process had just updated. The dbms_alert
package renders such techniques obsolete and is one of the most useful monitoring
packages.
The dbms_alert package is even more helpful when dealing with three-tier web
applications: client, web server, and database. Web applications are stateless by nature,
Here is a practical example of how and when to use the dbms_alert package. Say a
manager wants me to create a method that could automatically let them know when a
salary has changed. This can be implemented by using the dbms_alert package as
described below:
Create a trigger that will use the signal procedure to create alert signals when a
transaction commits.
--Create a trigger that will use the signal procedure to create alert
--signals when a transaction commits.
create or replace trigger trg_tab_salary
after insert or update
on tab_salary
for each row
declare
v_message varchar2(1800) ;
begin
if inserting then
v_message := 'New Salary is: ' || :new.sal;
else
v_message := 'Updated salary: ' || :old.sal;
end if;
dbms_alert.signal(name => 'salary_update_alert', message => v_message);
end trg_tab_salary;
/
Next, create the procedure that will monitor the changes made to the salary table.
--Create the procedure that will monitor the changes made on salary table
create or replace procedure proc_dbms_alert_msg is
v_message varchar2(1800);
v_status pls_integer;
v_name varchar2(30) := 'salary_update_alert';
begin
dbms_alert.register(name => v_name);
dbms_alert.waitone(name => v_name, message => v_message, status =>
v_status, timeout => 10);
dbms_output.put_line('Alert: '||v_message);
end proc_dbms_alert_msg;
/
In one session, execute the created procedure. In another session, execute any
command that will start the trigger generating an alert signal.
insert into
tab_salary
values (2,100000);
update tab_salary
set sal=sal*0.5
where emp_id=2;
commit;
Here is the output from the session executing the monitoring procedure:
SQL>
exec
proc_dbms_alert_msg;
Many different methods that could be used to monitor and act on specific operations
that are happening within the database can be created.
Package dbms_auto_task_admin
Oracle 11g introduces a new feature called Automated Maintenance Tasks (AMT), and
AMT starts with some of the more common duties of a database administrator.
Information gathered from the AWR repository is analyzed by Autotask and it then
--enable autotask
exec dbms_auto_task_admin.enable;
select
status into v_status_after
from
dba_autotask_client
where
client_name='sql tuning advisor';
dbms_output.put_line(a => 'client status after command: '||v_status_after);
end;
/
Next, the get_p1_resources procedure is used to return the percentage of resources
allocated to each Autotask client included in the High Priority Group.
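A sketch of that call follows; the variable names are mine:

set serveroutput on
declare
v_stats_group_pct number;
v_seg_group_pct number;
v_tune_group_pct number;
v_health_group_pct number;
begin
dbms_auto_task_admin.get_p1_resources(
stats_group_pct => v_stats_group_pct,
seg_group_pct => v_seg_group_pct,
tune_group_pct => v_tune_group_pct,
health_group_pct => v_health_group_pct);
dbms_output.put_line(a => 'Stats: '||v_stats_group_pct||
' Segment: '||v_seg_group_pct||
' Tuning: '||v_tune_group_pct||
' Health: '||v_health_group_pct);
end;
/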
dbms_auto_task_admin.set_p1_resources(
stats_group_pct => v_stats_group_pct,
seg_group_pct => v_seg_group_pct,
tune_group_pct => v_tune_group_pct,
health_group_pct => v_health_group_pct);
end;
/
Now, assume that I have a Real Application Clusters (RAC) environment with three
nodes and we want to spread tasks between them. The service names are dbms1, dbms2
and dbms3:
In order to get the attribute values of a certain client, use the get_client_attributes
procedure.
declare
v_service_name varchar2(20);
v_service_name_1 varchar2(20);
v_window varchar2(20);
v_client_name varchar2(30);
begin
v_client_name := 'auto space advisor';
dbms_auto_task_admin.get_client_attributes(
client_name => v_client_name,
service_name => v_service_name,
window_group => v_window);
dbms_output.put_line(a =>
' Attributes for client '||v_client_name||chr(10)||
' - Service_Name is: '||v_service_name||chr(10)||
' - Window Group is: '||v_window);
end;
/
Make the association. If the client attribute values need to be changed, use the
set_attribute procedure as follows:
--If you need to change client attribute values then use the set_attribute
--procedure as follows:
--Get the client name and current attributes
select
client_name ,attributes
from
dba_autotask_client;
--Change some attribute values
begin
dbms_auto_task_admin.set_attribute(
client_name =>'auto space advisor' ,
attribute_name =>'safe_to_kill' ,
attribute_value =>'FALSE');
end;
/
begin
dbms_auto_task_admin.set_attribute(
client_name =>'auto space advisor' ,
attribute_name =>'volatile' ,
attribute_value =>'FALSE');
end;
/
If the intent is to change the task priority of any client, operation and/or individual
task level, use the override_priority procedure.
--To override the priority of auto space advisor client you can execute this
command below:
--Get the current value
select
client_name,
priority_override
from
dba_autotask_client;
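A sketch of the call itself; the priority value here is an assumption, so check the documented values (such as URGENT, HIGH and MEDIUM) before using it:

--Override the priority of the auto space advisor client
begin
dbms_auto_task_admin.override_priority(
client_name => 'auto space advisor',
priority => 'MEDIUM');
end;
/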
Package dbms_comparison
One of the many features included in Oracle 11g is the dbms_comparison package. This
important package was created to compare objects, schemas or data between
databases or schemas.
Database objects that may be compared include tables, single-table views and/or
materialized views. A table can also be compared with a materialized view.
Some organizations have environments that share database objects between multiple
databases. This kind of replicated object is commonly used in Oracle Streams
configurations. The dbms_comparison package can be used to compare objects; if there
is any discrepancy, they can be re-synchronized.
The first step is to create the tables to be compared and synchronized, and then insert
some data on the source table.
Code 7.3 - dbms_comparison.sql
conn sys@ora11g as sysdba
conn / as sysdba
By checking the current values of the tables, note that the destination table is empty.
Next, create the database link in the source database so it will point to the destination
database.
from
pkg.tab_comparison_dest@oradb;
Execute the compare function. It will check for any data divergence between the
objects being compared.
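The setup and run of the comparison, elided above, might be sketched like this; the names match this example and the database link is the oradb link created earlier:

--Define the comparison between the local and the remote table
begin
dbms_comparison.create_comparison(
comparison_name => 'comp_dbms_test',
schema_name => 'pkg',
object_name => 'tab_comparison_orig',
dblink_name => 'oradb',
remote_schema_name => 'pkg',
remote_object_name => 'tab_comparison_dest');
end;
/
--Run the comparison; a false return value means divergence was found
set serveroutput on
declare
v_scan_info dbms_comparison.comparison_type;
v_ok boolean;
begin
v_ok := dbms_comparison.compare(
comparison_name => 'comp_dbms_test',
scan_info => v_scan_info,
perform_row_dif => true);
dbms_output.put_line(a => 'Scan ID: '||v_scan_info.scan_id);
if not v_ok then
dbms_output.put_line(a => 'Tables are different!');
end if;
end;
/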
Now that the results of the comparison are obtained and different data values from
source table to destination table have been found, we use the converge procedure to
synchronize the objects so they become equalized.
from
pkg.tab_comparison_orig;
set serveroutput on
declare
v_scan_out dbms_comparison.comparison_type;
begin
dbms_comparison.converge(
comparison_name => 'comp_dbms_test',
scan_id => 4,
scan_info => v_scan_out,
converge_options => dbms_comparison.cmp_converge_local_wins,
perform_commit => true);
dbms_output.put_line(a => 'Converge scan ID is: '||v_scan_out.scan_id);
dbms_output.put_line(a => 'Local rows updated: '||v_scan_out.loc_rows_merged);
dbms_output.put_line(a => 'Remote rows updated: '||v_scan_out.rmt_rows_merged);
dbms_output.put_line(a => 'Local rows deleted: '||v_scan_out.loc_rows_deleted);
dbms_output.put_line(a => 'Remote rows deleted: '||v_scan_out.rmt_rows_deleted);
end;
/
Checking the values again will show identical results from both of the synchronized
tables.
from
pkg.tab_comparison_orig;
Package dbms_db_version
One of the improvements that began with the new Oracle 10g PL/SQL compiler was
Conditional Compilation. This feature allows the compiler to precisely choose which
code needs to be compiled without wasting time with unnecessary compilations. One
of the functions this feature provides is the ability to enable self-tracing code while in
a development environment and to disable it when it goes to a production
environment.
Rather than becoming a reference manual for the Conditional Compilation feature,
we will turn attention to the focus of this topic: explaining how and when to use the
dbms_db_version package introduced in this feature. This next example will show how
simple it is to use the dbms_db_version to separate different codes to be executed on
different database versions and releases. First, we check the existence of constants in
the dbms_db_version package to make sure that the procedure used in this example will
not reference a nonexistent constant.
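The check referred to here can be written as a query against dba_source; this is a plausible form of the elided query:

select text
from
dba_source
where
name = 'DBMS_DB_VERSION'
and type = 'PACKAGE'
and text like '%constant%';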
The output from the query above should look like this:
TEXT
TEXT
--This procedure is used to check the database version and release and
--indicate to the user which code would be executed in each one
create or replace procedure test_dbms_version is
v_constant
$if $$XXX_DB = 0 $then
number;
$elsif $$xxx_db = 1 $then
binary_float;
$else
$error 'Value of plsql_ccflags is not correct. Must be xxx_db: ' ||
$$xxx_db
$end
$end
begin
null;
end test_dbms_version;
/
Finally, we change the value of the flag to one that will not fit the correct information
for testing the procedure; note the different output.
This example could have included some code with newer features that work only with
newer database versions and releases and an older code. Without this feature, this
may generate an error, a common occurrence in an environment with many databases
of different versions and releases.
Package dbms_ddl
This package first became available with Oracle Database 7.3.4 and provides the
execution of some DDL commands through PL/SQL statements. Some of these
DDL commands can be executed directly in the database while others cannot. Given
the varying range of these procedures and functions, we will briefly describe them
here so the reader will know when and how to use them.
■ alter_compile: Compiles the specified PL/SQL object.
■ alter_table_not_referenceable: Similar to executing the command alter table
<owner>.<table_name> not referenceable for <affected_schema>. This command will
prevent an object table from being the default referenceable table for the affected
schema.
Procedures used to compile objects have the same effect as using the command alter
<object_type> <owner>.<object_name> compile.
Although deprecated in Oracle 10g Release 2, this procedure is still available;
however, Oracle recommends using the DDL command instead.
begin
dbms_ddl.alter_compile(type => 'procedure',schema => 'pkg',name =>
'test_proc',reuse_settings => false);
end;
/
commit;
insert into
tab_ref_new
select
first_name
from
tab_ref_ref_new;
commit;
select
*
from
tab_ref_new;
select
*
from
tab_ref_ref_new;
When using Oracle Streams, it is possible to not have a trigger fire if the table is being
modified by an apply process.
end trg_tab_test;
If there is a procedure with code that must be hidden from prying eyes, the
wrap utility is used. The next example will show how simple it is to implement.
Here are some examples of how to generate wrapped code using these functions and
procedures. First, a package already created in an earlier example is wrapped using
wrap function.
set serveroutput on
declare
v_pkg_spec varchar2(32767);
v_pkg_body varchar2(32767);
v_wrap_code varchar2(32767);
begin
v_pkg_spec := 'create or replace package test_wrap_dbms
as
procedure test_dbms_profiler_proc (n_obj in number);
end test_wrap_dbms ;';
end;
/
SQL>
This shows that the wrapped code was generated, but the package was not created.
The code is there to be executed when needed.
The difference between the wrap function and the create_wrapped procedure is that the
latter executes the wrapped code to create the object, as shown in the next example.
set serveroutput on
declare
v_pkg_spec varchar2(32767);
v_pkg_body varchar2(32767);
v_wrap_code varchar2(32767);
begin
v_pkg_spec := 'create or replace package pkg.test_wrap_dbms
as
procedure test_dbms_profiler_proc (n_obj in number);
end test_wrap_dbms;';
end;
/
If an attempt to view the code of this procedure is made, something like this will
appear:
TEXT
package test_wrap_dbms wrapped
a000000
2e
abcd
abcd
abcd
9
88 a6
+Vu313 Ej OfwewXfVs +DLKyL90oUwg5m4 9T0f 9b9cFqFi 0 fSWlvJW46 6hl2 KLCWfhSq8WaKnK
qhfqnFDK6gIvsY/IMB9Jmo8wDnUTsND2L+Pu2A7uH7u7XWOPKbCPH/weXasa3vDgqQpPmaL2
MMgLplSdrCodpnv4qEY=
There are many tools on the market designed to debug PL/SQL objects. All of them
utilize this package when debugging. The debug process is accomplished with two
database sessions; the first will run the PL/SQL code and the second will monitor the
session which receives the dbms_debug commands.
We will now simulate a process that will debug a session while it is running and test
some procedures of the dbms_debug package. Oracle usually calls the first session the
target session and the monitoring session is the debug session. There are two ways to
enable the debug package. By session:
Here, object types include procedures, functions, types, triggers and packages. This
example can be found in MOSC Note: 221346.1
In the first session, create a table to be used with this example. Then enable the debug
in this session by setting the plsql_debug session parameters to TRUE. Next, initialize
the debug using the initialize function and the debug_on procedure.
--On first session create the example table and enable debug on this session
drop table tab_test_debug;
create table
tab_test_debug as
select
*
from
pkg.departments ;
var v_ssid varchar2(50)
begin
:v_ssid := dbms_debug.initialize();
dbms_debug.debug_on();
end;
/
print v_ssid
begin
update
tab_test_debug
set
manager_id=manager_id+100;
dbms_output.put_line(
a => 'Number of rows updated: '||sql%rowcount);
end;
/
In the second session, execute the PL/SQL block below using the SSID shown in the
first session. Watch the line comments in order to be aware of what is happening in
each code segment.
--On second session execute the PL/SQL block below using the SSID showed on
first session
set serveroutput on
exec dbms_debug.attach_session('&ssid')
declare
v_program_info dbms_debug.program_info;
v_pieces binary_integer;
v_runtime_info dbms_debug.runtime_info;
v_bin_int binary_integer;
v_ret binary_integer;
v_source varchar2(2000);
v_mask pls_integer := dbms_debug.info_getstackdepth +
dbms_debug.info_getbreakpoint +
dbms_debug.info_getlineinfo +
dbms_debug.info_getoerinfo;
v_break_next_line pls_integer := dbms_debug.break_next_line;
begin
v_ret := dbms_debug.synchronize(run_info => v_runtime_info,info_requested
=> 0) ;
if v_ret != dbms_debug.success then
dbms_output.put_line(a => 'Failed to synchronize!');
end if;
To set a break line, use the block below, passing the line that will be the break. In this
example, the breakpoint will be pointing to line 1.
v_program_info.namespace := NULL;
v_program_info.name := NULL;
v_program_info.owner := NULL;
v_program_info.dblink := NULL;
v_ret := dbms_debug.set_breakpoint(
program => v_program_info,
line# => 1,
breakpoint# => v_bin_int);
if v_runtime_info.reason != 3 then
dbms_output.put_line(a => 'Program interrupted!!');
end if;
dbms_debug.show_source(v_runtime_info.line#,v_runtime_info.line#,1,0,
v_source,4000,v_pieces);
dbms_output.put_line(a => 'Source code is '||chr(10)||v_source);
Now the debug session can be turned off using the debug_off procedure as follows:
exec dbms_debug.debug_off;
Go to the second session and run the PL/SQL block below. This will use the
synchronize function to wait for the next signaled event to be sent. Finally, use the
detach_session procedure to stop debugging in the target session.
declare
v_runtime_info dbms_debug.runtime_info;
v_ret binary_integer;
begin
v_ret := dbms_debug.synchronize(
run_info => v_runtime_info,
info_requested => 0);
v_ret := dbms_debug.continue (
run_info => v_runtime_info,
breakflags => 0,
info_requested => 0);
end;
/
exec dbms_debug.detach_session;
In the next example, see how to gather information on a PL/SQL object by using the
dbms_describe package. First, create the procedure to be analyzed by the dbms_describe
package. This procedure should contain a lot of data types in order to be a good
example.
begin
dbms_output.put_line(a => 'Test dbms_describe package!');
--Now you can create the package that will be used to describe the procedure
created above.
--This package will get all information about each parameter of procedure
being described.
v_overload dbms_describe.number_table;
v_position dbms_describe.number_table;
v_level dbms_describe.number_table;
v_argument_name dbms_describe.varchar2_table;
v_datatype dbms_describe.number_table;
v_default_value dbms_describe.number_table;
v_mode_type dbms_describe.number_table;
v_length dbms_describe.number_table;
v_precision dbms_describe.number_table;
v_scale dbms_describe.number_table;
v_radix dbms_describe.number_table;
v_spare dbms_describe.number_table;
v_index integer := 0;
begin
dbms_describe.describe_procedure(
object_name => name,
reserved1 => NULL,
reserved2 => NULL,
overload => v_overload,
position => v_position,
level => v_level,
argument_name => v_argument_name,
end describe_plsql_obj;
end test_describe_plsql;
Finally, execute the procedure describe_plsql_obj, setting the object name as the
parameter to be described.
exec test_describe_plsql.describe_plsql_obj(
name => 'test_dbms_describe');
Package dbms_hm
Oracle health checks are an integral task for the Oracle DBA, and Oracle 11g has
introduced a new service and software to assist in performing health checks. As of
11g, Oracle is offering two new health check offerings:
■ A premium health check service
■ A free health monitor using dbms_hm
I will now examine the free health monitor by using the dbms_hm package.
First, query v$hm_check to find out what kind of checks can be done when using the
run_check procedure. Keep in mind that checks that are internal cannot be run
manually; use the clause where internal_check='N'.
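That lookup can be written as a simple query (a minimal sketch):

--List the health checks that can be run manually
select name
from
v$hm_check
where
internal_check = 'N';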
These are the available options that can be run on the database check. In this
example, perform a check for corrupted blocks using DB structure integrity check.
--Check the integrity of all datafiles
begin
dbms_hm.run_check(
check_name => 'DB Structure Integrity Check',
run_name => 'Check_Datafiles_Integrity',timeout => 3600);
end;
/
As can be seen from the report output, one datafile is offline and could not be
checked.
'/u01/app/oracle/oradata/dbms/datafile/ol_mf_tbsl_5j241oph_.dbf'
is offline
Another useful reason for running the Health Check is to find information about a
specific corrupted block. Run the data block integrity check to accomplish this. To
find which parameters can be used within this check, run the following query:
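The query itself is not reproduced in the listing; joining v$hm_check_param to v$hm_check is one way to sketch it:

```sql
select c.name check_name, p.name parameter, p.type, p.default_value
from v$hm_check c, v$hm_check_param p
where p.check_id = c.id
and c.name = 'Data Block Integrity Check';
```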
Next, execute the run_check procedure using the data block integrity check, specifying
a specific data block that is suspected to be corrupt.
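A sketch of such a run follows; the file and block numbers passed through input_params are placeholder values for a block suspected to be corrupt:

```sql
begin
  dbms_hm.run_check(
    check_name   => 'Data Block Integrity Check',
    run_name     => 'Check_Data_Block',
    timeout      => 3600,
    input_params => 'BLC_DF_NUM=5;BLC_BL_NUM=64');  --file 5, block 64: placeholders
end;
/
```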
DBMS_HM.GET_RUN_REPORT(RUN_NAME)
Assume that an ORA-00600 error is being experienced in the database and we need to
check, among other things, the data dictionary integrity. The following commands
will need to be run:
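The commands are not reproduced in this listing; a sketch using the documented check name and the get_run_report function would be:

```sql
begin
  dbms_hm.run_check(
    check_name => 'Dictionary Integrity Check',
    run_name   => 'Check_Dictionary',
    timeout    => 3600);
end;
/

set long 100000
select dbms_hm.get_run_report(run_name => 'Check_Dictionary') from dual;
```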
The output shows that there is a problem with inconsistency in the database
dictionary.
DBMS_HM.GET_RUN_REPORT(RUN_NAME)
Note that the problem is at all_core_tables. To fix this problem, open a Service
Request with Oracle Support. There are other checks that can be performed with the
dbms_hm package, for which the previous example can be used as a reference point.
Oracle 11g comes with many improvements in the Java language, such as a faster
native Java compiler, support for the Content Repository API for Java, and
scalable Java with the automatic creation of 100% native Java code, among others.
The dbms_java package is available when the Oracle JVM is installed, which has been
supported since Oracle 8i. This package lets developers use Java to create, store and
deploy code within Oracle databases. This section will highlight some of the more
common uses of the dbms_java package. Functions and procedures will be demonstrated,
with a brief description of each command.
The first example grants execute privileges on all files located in the /bin directory
to the pkg user.
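The grant itself can be sketched with dbms_java.grant_permission; the exact path wildcard is an assumption:

```sql
begin
  dbms_java.grant_permission(
    grantee           => 'PKG',
    permission_type   => 'SYS:java.io.FilePermission',
    permission_name   => '/bin/*',
    permission_action => 'execute');
end;
/
```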
To change any native compiler option for a user session, use the
set_native_compile_option procedure followed by the native_compile_options function to get
the results.
In order to get the full name of a Java object, use the longname function as shown in
the next example:
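A sketch of such a query, assuming some Java classes are loaded in the schema:

```sql
select dbms_java.longname(object_name) full_name
from user_objects
where object_type = 'JAVA CLASS';
```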
Package dbms_job
Starting with Oracle 10g, the Oracle scheduler was greatly improved with the
dbms_scheduler package. Replacing dbms_job with dbms_scheduler offers additional
features, such as the ability to tie jobs to specific user-type privileges and roles.
Although dbms_scheduler is the recommended package for jobs after Oracle 10g, an
example using the dbms_job package is given here.
Jobs are created to execute a task at a specific time. These tasks may be scheduled to
run periodically at a future time if preferred. The dbms_job package supports Oracle
Real Application Clusters, thereby allowing the user to choose the instance on which
the job will run. One example is sufficient to show the more important procedures of
this package. This example will show how to create a job that runs a procedure for
gathering statistics on a specified schema.
First, create a procedure called proc_analyze_schema. This procedure is used to gather
statistics for a specified schema; in this case, the pkg schema.
Code 7.13 - dbms_job.sql
conn sys@ora11g as sysdba
Next, create the job using the submit procedure, specifying the procedure to be
executed, the interval between runs and the time of the next run. This job is
scheduled to run on the 7th day of each month at 6:00 pm.
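The submit call was not reproduced in this listing; a sketch consistent with the job details shown below (dates and job numbers will differ on your system) might look like:

```sql
declare
  v_job number;
begin
  dbms_job.submit(
    job       => v_job,  --OUT: the job number assigned by Oracle
    what      => 'proc_analyze_schema(''pkg'');',
    next_date => to_date('07-03-2010 18:00','dd-mm-yyyy hh24:mi'),
    interval  => 'trunc(last_day(sysdate))+7+18/24');  --7th of next month, 6:00 pm
  commit;
  dbms_output.put_line('Job created: '||v_job);
end;
/
```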
Check to see if it is running by using the dba_jobs_running view. Also, check the job
details with the dba_jobs view.
57 22 07/03/2010 18:03:31 1
To remove a job, simply get the job number from the dba_jobs view and execute the
remove procedure as follows.
begin
dbms_job.remove(job => 22);
commit;
end;
/
To change any parameter of a job, use the change procedure. In this example, the
interval of job 23 is changed.
begin
dbms_job.change(
job => 23,
what => NULL,       --NULL parameters leave the current value unchanged
next_date => NULL,
interval => 'sysdate+7');  --example interval; the original value is not shown
commit;
end;
/
If what the job executes needs to be changed, use the what procedure. In this
example, change the schema that will have statistics gathered.
begin
dbms_job.what(
job => 23,
what => 'proc_analyze_schema(''scott'');');
commit;
end;
/
select
what
from
dba_jobs
where
job=23;
WHAT
proc_analyze_schema('scott');
To change the next date on which the job will run, use the next_date procedure.
begin
dbms_job.next_date(
job => 23,
next_date => to_date('07-03-2010 19:00','dd-mm-yyyy hh24:mi'));
commit;
end;
/
In a RAC environment, if a job needs to be pinned so that it always runs on the
same instance, use the instance procedure shown below.
begin
dbms_job.instance(job => 23,instance => 1,force => TRUE);
commit;
end;
/
To change the interval at which the job is executed, use the interval procedure.
begin
dbms_job.interval(job => 23,interval => 'sysdate+1+to_char(
(sysdate),''DD'')');
commit;
end;
/
It is not uncommon to visit a new client and find a huge number of jobs that nobody
understands: what they do, or whether their execution is really necessary. For jobs
that might be unnecessary, change their status to broken to prevent them from
running, and perform the checks before dropping them. Once it is certain that they
are not needed, they can be deleted. To do this, use the broken procedure:
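The call itself is not reproduced in the listing; marking job 23 as broken can be sketched as:

```sql
begin
  dbms_job.broken(job => 23, broken => TRUE);
  commit;
end;
/
```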
If the job should be run immediately, just execute the run procedure. Use the force
option to run a job, even if the affinity option is set for this job and the user is
connected to another instance.
begin
dbms_j o b .run(
job = > 23,
force => TRUE); --Run even if the user is not connected to the job's affinity instance
end;
/
This last procedure can be used if a job needs to be recreated. It shows the execute
command for the specified job and the command to set the affinity instance for that
job.
set serveroutput on
declare
v_mycall varchar2(2000);
begin
dbms_job.user_export(job => 23,mycall => v_mycall);
dbms_output.put_line(a => v_mycall);
end;
/
This shows that the dbms_job package is very simple to use. Remember, it is no longer
the recommended approach, since dbms_scheduler was added in Oracle 10g.
If the dbms_ldap package cannot be located, execute the command below, logged in as
the sys user, in order to create it.
This will show how to perform a search that will return all entries within a specific
LDAP base directory.
declare
v_ldap_message dbms_ldap.message;
v_ldap_entry dbms_ldap.message;
v_returnval pls_integer;
v_session dbms_ldap.session;
v_str_collection dbms_ldap.string_collection;
v_entry_index pls_integer;
v_ber_element dbms_ldap.ber_element;
v_attr_index pls_integer;
v_dn varchar2(256);
v_attrib_name varchar2(256);
i pls_integer;
v_info dbms_ldap.string_collection;
v_ldap_base varchar2(256);
v_ldap_port varchar2(256);
v_ldap_host varchar2(256);
v_ldap_user varchar2(256);
v_ldap_passwd varchar2(256);
begin
v_returnval := -1;
dbms_output.put(a => 'DBMS_LDAP Search Example ');
dbms_output.put_line(a => 'to directory .. ');
dbms_output.put_line(a => rpad('LDAP Host ',25,' ') || ': ' ||
v_ldap_host);
dbms_output.put_line(a => rpad('LDAP Port ',25,' ') || ': ' ||
v_ldap_port);
dbms_ldap.use_exception := TRUE;
First, the init function is used to establish a connection with the LDAP server. The
information about this connection is then displayed.
v_session := dbms_ldap.init(
hostname => v_ldap_host,
portnum => v_ldap_port);
v_returnval := dbms_ldap.simple_bind_s(
ld => v_session,
dn => v_ldap_user,
passwd => v_ldap_passwd);
Here, the search_s function begins searching synchronously for values matching
the filter parameters.
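The call itself is not shown in this listing; using the variables declared above, a search_s invocation can be sketched as follows (the subtree scope and catch-all filter are assumptions):

```sql
v_str_collection(1) := '*';  --request all attributes
v_returnval := dbms_ldap.search_s(
  ld       => v_session,
  base     => v_ldap_base,
  scope    => dbms_ldap.scope_subtree,
  filter   => 'objectclass=*',
  attrs    => v_str_collection,
  attronly => 0,
  res      => v_ldap_message);
```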
Next, the count_entries function is used to count the number of entries in a result set.
After this, the first_entry function is used to get the first entry of that set.
v_returnval := dbms_ldap.count_entries(
ld => v_session,
msg => v_ldap_message);
v_ldap_entry := dbms_ldap.first_entry(
ld => v_session,
msg => v_ldap_message);
v_attr_index := 1;
v_attrib_name := dbms_ldap.first_attribute(
ld => v_session,
ldapentry => v_ldap_entry,
ber_elem => v_ber_element);
while v_attrib_name is NOT NULL loop
The get_values function is used to get all values associated with a given attribute. After
that, the next_attribute function is used to return the next attribute of a given entry in
the result set.
v_info := dbms_ldap.get_values(
ld => v_session,
ldapentry => v_ldap_entry,
attr => v_attrib_name);
Functions like ber_free and msgfree are used to free memory allocated to the ber_element
structure and to free the chain of messages returned by the synchronous search
functions.
dbms_ldap.ber_free(
ber => v_ber_element,
freebuf => 0);
v_ldap_entry := dbms_ldap.next_entry(
ld => v_session,
msg => v_ldap_entry);
v_returnval := dbms_ldap.msgfree(
lm => v_ldap_message);
Lastly, the unbind_s function is used to close an active LDAP session and finish this
example.
v_returnval := dbms_ldap.unbind_s(
ld => v_session);
exception
when others then
dbms_output.put_line(' Error code: ' || to_char(sqlcode));
dbms_output.put_line(' Error Message: ' || sqlerrm);
dbms_output.put_line(' Exception encountered .. exiting');
end;
/
Many of the tasks that the dbms_ldap package provides for managing an LDAP Directory
Server were touched upon in this chapter. Other tasks and their examples can be found
in the Application Developer's Guide for Oracle Identity Management 11g Release 1
(11.1.1) E10186-01 and the Oracle Internet Directory Application Developer's Guide
Release 2.1.1 A86082-01.
Package dbms_metadata
Once a table or index definition has been accepted into the Oracle data dictionary, it
can be difficult to reconstruct the DDL syntax from the dictionary without the help
of specialized packages. Oracle provides the dbms_metadata package to extract table
and index DDL, and this section will explore how to use dbms_metadata to extract the
DDL for any table or index. This capability is very useful when a table definition
needs to be migrated into a new Oracle database.
Traditionally, the extraction of DDL is called punching the DDL. The term punching
dates back to the days when the code would be punched onto Hollerith cards.
This first example shows how to generate DDL for all tables belonging to the pkg user
whose names do not begin with sys%.
--Generate DDL for all tables of pkg user except tables that have their name
initialized by 'sys%'.
set pagesize 0
set long 90000
set feedback off
set echo off
select
dbms_metadata.get_ddl(object_type => 'table',name => u.table_name)
from
user_tables u
where
table_name not like 'sys%';
Next, see how to generate DDL for all pkg user indexes except those whose table
name starts with sys%.
--Generate DDL for all indexes of pkg user except for those with a table
name beginning by 'sys%'
set pagesize 0
set long 90000
set feedback off
set echo off
select
dbms_metadata.get_ddl(object_type => 'index',name => u.index_name)
from
user_indexes u where table_name not like 'sys%';
The next script shows the user a new and practical method to gather information
about index size.
Code 7.16 - dbms_metadata_idx_size.sql
conn sys@ora11g as sysdba
begin
v_schema := schema_name;
v_index := index_name;
end;
set serveroutput on
exec get_index_size(schema_name => 'pkg',index_name => 'D%');
--Execute get_ddl again and observe that the storage clause was dropped.
select
dbms_metadata.get_ddl(
object_type => 'table',
name => 'ame_help')
from
dual;
--To change the session to default mode execute the command below
exec dbms_metadata.set_transform_param(transform_handle => dbms_metadata.session_transform, name => 'default');
declare
v_handle number;
v_query varchar2(20000);
begin
v_handle := dbms_metadata.open(object_type => 'table');
dbms_output.put_line(v_query);
dbms_metadata.close(v_handle);
end;
/
As these examples have shown, DDL scripts can be easily created using the
dbms_metadata package. There are other procedures and functions not demonstrated
here that are available in order to create customized DDL scripts.
Package dbms_output
The dbms_output package was introduced in Oracle 7 to allow PL/SQL output from
the SQL buffer in SQL*Plus to be written as standard output. The dbms_output
package was intended primarily as a debugging tool and is now being replaced by
Oracle’s step-through debuggers and several third-party tools. Its functionality is
similar to the printf() function in C.
The dbms_output package is often used to handle runtime errors. It can be used to
isolate the location of an error so that a developer can correct the problem. It is a
handy debugging tool when used properly to “salt” PL/SQL with display statements
showing the contents of variables.
In order to use the dbms_output package for debugging, issue the set serveroutput on
command at the beginning of the session. It is this command that enables
information to be buffered. At the end of the program execution, a dbms_output
procedure named get_lines will read the buffer and print its results. If a public synonym
needs to be created and the execute privilege granted to public, run the dbmsotpt.sql
script.
exec dbms_output.disable;
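The disable call above turns buffering off; re-enabling with an unlimited buffer can be sketched as follows (the NULL buffer size is supported in 10g Release 2 and later):

```sql
set serveroutput on
exec dbms_output.enable(buffer_size => NULL);  --NULL requests an unlimited buffer
```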
The enable procedure is used to enable calls to procedures like put_line, new_line and
get_line(s). The buffer size is specified in bytes; to specify an unlimited buffer,
pass NULL as the buffer size.
The get_line procedure is used to retrieve what is in the buffer and show it one line
at a time. The status is 0 if it completed successfully and 1 if no rows are returned.
Here is an example:
Code 7.18 - dbms_output.sql
conn sys@ora11g as sysdba
declare
v_get_line varchar2(32767);
v_test_lines dbms_output.chararr;
v_status integer;
v_numlines integer := 2;
begin
v_test_lines(1) := 'Line number 1!';
v_test_lines(2) := 'Line number 2!';
v_test_lines(3) := 'Line number 3!';
v_test_lines(4) := 'Line number 4!';
v_test_lines(5) := 'Line number 5!';
dbms_output.put_line(a => 'L1'||v_test_lines(1));
dbms_output.put_line(a => 'L2'||v_test_lines(2));
dbms_output.put_line(a => 'L3'||v_test_lines(3));
dbms_output.put_line(a => 'L4'||v_test_lines(4));
dbms_output.put_line(a => 'L5'||v_test_lines(5));
end;
/
The get_lines procedure is used to return multiple lines at once. The next
example shows how the put_line procedure is used to put lines in the buffer.
declare
v_get_line varchar2(32767);
v_test_lines dbms_output.chararr;
v_status integer;
v_numlines integer := 10;
begin
v_test_lines(1) := 'Line number 1!';
v_test_lines(2) := 'Line number 2!';
v_test_lines(3) := 'Line number 3!';
end;
/
The new_line procedure is used to insert an end-of-line marker. This procedure will be
used in the next example to display a list of all usernames in the database, separated
by new lines.
set serveroutput on
declare
v_users dba_users.username%type;
cursor c_users is
select username from dba_users order by username;
begin
open c_users;
loop
fetch c_users into v_users;
exit when c_users%notfound;
dbms_output.enable(9999999);
dbms_output.new_line();
dbms_output.put_line(a => 'username: '||v_users);
dbms_output.new_line();
end loop;
end;
/
These were just some examples of what the dbms_output package can do. In
summary, it is used mostly as a handy basic tool to help database administrators and
developers debug their programs. Be aware that more advanced debugging tools are
available that offer richer functionality and are usually a better choice for anything
beyond simple debugging needs.
Alone, this is not a very reliable way of exchanging messages, as all information
generated by dbms_pipe is stored in the System Global Area (SGA). As with any
information inside the SGA, it will be lost if the database goes down. Thus,
applications that use it often combine it with Streams or proprietary methods to
store the data sent, or rely on it only for non-critical data exchange.
Two kinds of pipes, offering different levels of security, can be used: private pipes
and public pipes. Public pipes are dropped when there is no more data in them. They
may be created implicitly, i.e. created automatically when first referenced, or explicitly,
i.e. using the create_pipe procedure. The script used for creating the dbms_pipe package
is dbms_pipe.sql and can be found at $ORACLE_HOME/rdbms/admin.
The next example works as an alert to inform an application that data has changed
and needs to have its cache refreshed in order to show the new information. Suppose
that there is an employees table and we want to know each time the salary field is
updated. To accomplish this, create a trigger that sends the updated information to a
session that is waiting for these changes.
First of all, create the employees table that will be used in this example.
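The DDL is not reproduced in this listing; a minimal version matching the columns used by the inserts later in the example would be:

```sql
create table employees (
  name varchar2(100),
  sal  number);
```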
Next, create the package that will contain procedures for sending and receiving
information via the dbms_pipe package.
begin
v_result := dbms_pipe.receive_message(
pipename => 'message from pipe!',
timeout => 10);
if v_result = 0 then
while v_result = 0 loop
v_result := dbms_pipe.receive_message(
pipename => 'message from pipe!',
timeout => 10);
dbms_pipe.unpack_message(v_name_r);
dbms_pipe.unpack_message(v_sal_r);
dbms_output.put_line('Full Name: ' || v_name_r);
dbms_output.put_line('Salary: ' || v_sal_r);
end loop;
else
if v_result = 1 then
dbms_output.put_line('Timeout limit exceeded!');
else
raise_application_error(-20002,
'error receiving message pipe: ' ||
v_result);
end if;
end if;
exception
when others then
null;
end receive_message_pipe;
Create the trigger on the employees table that will use the send_message_pipe procedure to
send information to the pipe. This information will be read with the
receive_message_pipe procedure.
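The trigger body is not reproduced in this listing. A sketch follows; the trigger name is illustrative, and instead of the book's send_message_pipe wrapper (whose body is not shown) it inlines the dbms_pipe calls directly, using the pipe name that appears in the receiving code:

```sql
create or replace trigger trg_employees_sal  --trigger name is an assumption
after insert or update of sal on employees
for each row
declare
  v_result integer;
begin
  dbms_pipe.pack_message(:new.name);
  dbms_pipe.pack_message(:new.sal);
  v_result := dbms_pipe.send_message(
    pipename => 'message from pipe!',
    timeout  => 10);
end;
/
```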
--On the first session, execute the procedure receive_message_pipe that will
--output the values being inserted into the employees table.
set serveroutput on
exec test_dbms_pipe.receive_message_pipe;
--On the second session execute some insert commands on employees table.
insert into employees (name,sal) values ('John Paul', 300000);
insert into employees (name,sal) values ('Mike',350000);
insert into employees (name,sal) values ('Brad',400000);
commit;
After waiting about fifteen seconds, the results of the first session will be displayed.
To check pipes created on the database, use the v$db_pipes view as shown:
To recap, one session has communicated with the other via send_message_pipe and
receive_message_pipe procedures which are inside the dbms_pipe package. This is
commonly known as inter-session communication.
Package dbms_preprocessor
The beginning of this chapter examined how and when to use the dbms_db_version
package. Now the dbms_preprocessor package will be covered, which can be used in
conjunction with the dbms_db_version package when evaluating the
conditional compilation PL/SQL feature.
In the example below, the get_post_processed_source function is used to show the post
processed source text from a procedure created earlier in this chapter.
set serveroutput on
declare
v_source_lines dbms_preprocessor.source_lines_t;
v_line_num number;
begin
v_source_lines := dbms_preprocessor.get_post_processed_source(
object_type => 'procedure',
schema_name => 'pkg',
object_name => 'test_dbms_version');
v_line_num := 0;
for i in 1..v_source_lines.last
loop
dbms_output.enable(100000);
dbms_output.put_line('Line number: '||v_line_num||
' Post-processed source text lines: '||v_source_lines(i));
v_line_num := v_line_num + 1;
end loop;
end;
/
The output from above will differ from the output shown after the plsql_ccflags
parameter is changed. This is because the conditional compilation feature changes
how the program unit is processed.
set serveroutput on
begin
dbms_preprocessor.print_post_processed_source(
object_type => 'procedure',
schema_name => 'sys',
object_name => 'test_dbms_version');
end;
/
dbms_output.put_line(
a => 'Parameter plsql_ccflags not set to the correct value!');
In this example, the values were able to be changed and viewed by using the
dbms_preprocessor package without the need to execute or open all the code. This
makes it easy to verify how conditional compilation will resolve before the code
is ever run.
Package dbms_resconfig
Oracle Database 9i Release 2 first introduced the Oracle XML DB repository. XML
DB lets XML developers store their XML data in a database, thus providing more
security, high availability, scalability and manageability for XML-tagged data.
Now that the Oracle XML repository has been covered, let's examine how
dbms_resconfig will help. A resource name is given to each object created in an XML
repository. The dbms_resconfig package is then used to configure resources in an Oracle
XML repository and associate them with a resource configuration file. The resource
configuration file, an XML file, is where the parameters are defined.
A single resource configuration file may be applied to its own resource or can be
applied to all resources in the repository. This is accomplished with the addresconfig and
addrepositoryresconfig procedures in the dbms_resconfig package. The next example will
highlight some of the main procedures of the dbms_resconfig package.
These were just some procedures and functions of dbms_resconfig and their utilization.
To learn more about the Oracle XML Repository, go to http://tahiti.oracle.com, where
nearly all Oracle documentation can be found, including the Oracle XML Developer's
Guide 11g Release 2 (11.2).
Package dbms_resource_manager

The create_plan procedure creates a new resource plan. Its interface is:

dbms_resource_manager.create_plan (
plan in varchar2,
comment in varchar2,
cpu_mth in varchar2 default 'emphasis',
max_active_sess_target_mth in varchar2 default
'max_active_sess_absolute',
parallel_degree_limit_mth in varchar2 default
'parallel_degree_limit_absolute');
Where:
■ plan: The plan name
■ comment: Any text comment to associate with the plan name
■ cpu_mth: One of emphasis or round-robin
■ max_active_sess_target_mth: Allocation method for max. active sessions
■ parallel_degree_limit_mth: Allocation method for degree of parallelism
There is a procedure called set_consumer_group_mapping within the Oracle
dbms_resource_manager package that helps map session attributes to a consumer group.
These attributes are of two types: login attributes and runtime attributes. Here are
some examples using the Oracle dbms_resource_manager.
Assume that the company has just bought new servers and storage devices for the
production environment, and we need to make certain that the new I/O subsystem is
large enough to handle the production database. Consider using the Oracle I/O
calibration feature to gauge how the new environment will behave.
Though Oracle provides a tool named ORION (Oracle I/O Calibration Tool), the
focus of the next example is the Oracle 11g I/O Calibration feature, which is
configured through the dbms_resource_manager package. The first step is to check that
the database has asynchronous I/O enabled. This is done by checking the
filesystemio_options and disk_asynch_io initialization parameters as follows:
select
distinct(asynch_io)
from
v$iostat_file
where
filetype_name in('data file','temp file');
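The initialization parameters themselves can be inspected with a query such as:

```sql
select name, value
from v$parameter
where name in ('filesystemio_options', 'disk_asynch_io');
```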
Next, execute the calibrate_io procedure, entering the number of disks that are there
and the maximum latency desired. Check the calibration status by querying the
v$io_calibration_status view.
set serveroutput on
declare
v_io binary_integer;
v_mbps binary_integer;
v_latency binary_integer;
begin
dbms_resource_manager.calibrate_io(
num_physical_disks => 12,
max_latency => 20,
max_iops => v_io,
max_mbps => v_mbps,
actual_latency => v_latency);
dbms_output.put_line('max_iops = '||v_io||', max_mbps = '||v_mbps||
', actual_latency = '||v_latency);
end;
/
select
max_iops,
max_mbps,
max_pmbps,
latency
from
dba_rsrc_io_calibrate;

67 24 24 28
In this simulation, assume that a resource plan named dbms_plan needs to be created.
dbms_plan will have three consumer groups named grp_normal, grp_medium and grp_high,
each with its own resource limits; these limits are set in the plan directives created
below. First of all, let's grant the administer_resource_manager privilege to the user
that will be using the resource manager.
exec dbms_resource_manager_privs.grant_system_privilege(
grantee_name => 'pkg',
privilege_name => 'administer_resource_manager',
admin_option => TRUE);
begin
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_consumer_group(
consumer_group => 'grp_normal',
comment => 'Consumer with normal priority.',
cpu_mth => 'round-robin');
dbms_resource_manager.create_consumer_group(
consumer_group => 'grp_medium',
comment => 'Consumer with medium priority.',
cpu_mth => 'round-robin');
dbms_resource_manager.create_consumer_group(
consumer_group => 'grp_high',
comment => 'Consumer with high priority.',
cpu_mth => 'round-robin');
dbms_resource_manager.submit_pending_area;
end;
/
Confirm that the consumer groups were created by using the dba_rsrc_consumer_groups
view.
select
consumer_group
from
dba_rsrc_consumer_groups
where
consumer_group like 'grp%';
Now create the plans that will limit the resource types for each group created. Note
that each group will have their limit values set according to the plan specified at the
beginning of this example.
declare
spfileValue varchar2(1000);
scopeValue varchar2(10) := 'memory';
planName varchar2(100) := 'dbms_plan';
begin
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.create_plan(plan => 'dbms_plan',
comment => 'Oracle dbms book');
dbms_resource_manager.create_plan_directive(
plan => 'dbms_plan',
group_or_subplan => 'grp_high',
comment => '',
mgmt_p1 => 45,
mgmt_p2 => NULL,
mgmt_p3 => NULL,
mgmt_p4 => NULL,
mgmt_p5 => NULL);
dbms_resource_manager.create_plan_directive(
plan => 'dbms_plan',
group_or_subplan => 'grp_medium',
comment => '',
mgmt_p1 => 30,
mgmt_p2 => NULL,
mgmt_p3 => NULL,
mgmt_p4 => NULL,
mgmt_p5 => NULL,
mgmt_p6 => NULL,
mgmt_p7 => NULL,
mgmt_p8 => NULL,
parallel_degree_limit_p1 => 8,
switch_io_reqs => NULL,
switch_io_megabytes => 5000,
active_sess_pool_p1 => 20,
queueing_p1 => NULL,
switch_group => 'cancel_sql',
switch_time => NULL,
switch_estimate => TRUE,
max_est_exec_time => 7200,
undo_pool => 1048576,
max_idle_time => 300,
max_idle_blocker_time => NULL,
switch_for_call => TRUE);
dbms_resource_manager.create_plan_directive(
plan => 'dbms_plan',
group_or_subplan => 'grp_normal',
comment => '',
mgmt_p1 => 20,
mgmt_p2 => NULL,
mgmt_p3 => NULL,
mgmt_p4 => NULL,
mgmt_p5 => NULL,
mgmt_p6 => NULL,
mgmt_p7 => NULL,
mgmt_p8 => NULL,
parallel_degree_limit_p1 => 4,
switch_io_reqs => NULL,
switch_io_megabytes => 1000);
dbms_resource_manager.create_plan_directive(
plan => 'dbms_plan',
group_or_subplan => 'other_groups',
comment => '',
mgmt_p1 => 5,
mgmt_p2 => NULL,
mgmt_p3 => NULL,
mgmt_p4 => NULL,
mgmt_p5 => NULL,
mgmt_p6 => NULL,
mgmt_p7 => NULL,
mgmt_p8 => NULL,
parallel_degree_limit_p1 => 2,
switch_io_reqs => NULL,
switch_io_megabytes => 100,
active_sess_pool_p1 => 5,
queueing_p1 => NULL,
switch_group => 'kill_session',
switch_time => NULL,
switch_estimate => TRUE,
max_est_exec_time => 100,
undo_pool => 104857,
max_idle_time => 30,
max_idle_blocker_time => NULL,
switch_for_call => TRUE);
dbms_resource_manager.submit_pending_area();
select value into spfileValue from v$parameter where name = 'spfile';
if spfileValue is not null then
execute immediate 'alter system set resource_manager_plan = ' ||
planName || ' scope=both';
end if;
dbms_resource_manager.switch_plan(plan_name => 'dbms_plan',
sid => 'dbms');
end;
/
To confirm that the plan and its directives were created, use the dba_rsrc_plans and
dba_rsrc_plan_directives views.
select
plan
from
dba_rsrc_plans
where plan = 'dbms_plan';

select
plan,
group_or_subplan,
mgmt_p1
from
dba_rsrc_plan_directives
where
plan = 'dbms_plan';
It is now necessary to associate the users with the consumer groups by specifying the
login names as shown below:
begin
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.set_consumer_group_mapping(
dbms_resource_manager.oracle_user,
'user_3',
'grp_normal'
);
dbms_resource_manager.set_consumer_group_mapping(
dbms_resource_manager.oracle_user,
'user_2',
'grp_medium'
);
dbms_resource_manager.set_consumer_group_mapping(
dbms_resource_manager.oracle_user,
'user_1',
'grp_high'
);
dbms_resource_manager.submit_pending_area();
end;
/
Check that they are mapped to the right target by using the dba_rsrc_group_mappings
view:
select
attribute,
value,
consumer_group
from
dba_rsrc_group_mappings
order by value;
8 rows selected
If it is necessary to force a user into a specific group when he/she logs into the
database, use the procedure below. It will make user_1 belong to the grp_medium
group whenever the user logs into the database.
begin
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.set_initial_consumer_group(
'user_1', 'grp_medium'); --the user must be allowed to switch to this group
dbms_resource_manager.submit_pending_area();
end;
/
/
Resource groups can also be assigned by module name. The statement below maps
sessions whose module name matches PL/SQL% to the grp_normal resource group
whenever the module is active:
begin
dbms_resource_manager.clear_pending_area();
dbms_resource_manager.create_pending_area();
dbms_resource_manager.set_consumer_group_mapping(
attribute => dbms_resource_manager.module_name,
value => 'PL/SQL%',
consumer_group => 'grp_normal');
dbms_resource_manager.submit_pending_area();
end;
/
The dbms_resource_manager package is a powerful tool that lets database administrators
take control of how users utilize server resources, allocating those resources to the
users who most need them at any given time.
Package dbms_resumable

To enable resumable space allocation system-wide and to specify a timeout interval,
set the resumable_timeout initialization parameter. For example, the following
setting of the resumable_timeout parameter in the initialization parameter file
causes all sessions to initially be enabled for resumable space allocation and sets
the timeout period to one hour:

resumable_timeout=3600

If this parameter is set to 0, then resumable space allocation is disabled initially
for all sessions. This is the default.
Use the alter system set statement to change the value of this parameter at the system
level. For example, the following statement will disable resumable space allocation for
all sessions:
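The statement is not reproduced in the listing; disabling the feature amounts to setting the timeout back to its default of zero:

```sql
alter system set resumable_timeout = 0;
```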
Within a session, a user can issue the alter session set statement to set the
resumable_timeout initialization parameter and enable resumable space allocation,
change the timeout value, or disable resumable mode.
Using alter session to enable and disable resumable space allocation, a user can enable
resumable mode for a session. The alter session enable resumable statement is used to
activate resumable space allocation for a given session. Developers are able to embed
the alter session statement in programs to activate resumable space allocation. A new
parameter, called resumable, is used to enable resumable space allocation for export,
import and load utilities.
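A session-level enable/disable pair can be sketched as follows; the timeout value and operation name are illustrative:

```sql
alter session enable resumable timeout 3600 name 'long_running_load';
--...run the space-hungry statement here...
alter session disable resumable;
```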
Statements do not suspend for an unlimited amount of time. A timed interval can be
specified in the alter session statement to designate the amount of time that may pass
before the suspended statement aborts with an error.
Once enabled, Oracle automatically detects the space condition and suspends the
session. Oracle writes an entry to the alert log noting that the session has been suspended.
Additionally, the dba_resumable view maintains a record of all currently suspended
sessions. Once the DBA has corrected the space problem, the suspended session will
automatically resume its operation at the point of suspension.
Oracle also provides an after suspend system trigger event that allows the response to
be automated to a session suspend condition. Further, the dbms_resumable package is
provided to allow for the handling of resumable space management from within SQL
or PL/SQL.
With just three functions and three procedures, this package will now be exemplified.
In this simulation, a table is created and values are inserted until the tablespace
has reached its size limit. Then the dbms_resumable package is used to fix the issue
without the need to restart the process from the beginning.
First, create the tablespace and the table to be used for this example; the table is
named tab_dbms_resumable. In order to simulate an error quickly, the tablespace size
will be 10M.
Code 7.24 - dbms_resumable.sql
conn sys@ora11g as sysdba
create tablespace
tbs_resum datafile size 10M;
create table
tab_dbms_resumable (
coll varchar2(100),
col2 varchar2(100))
tablespace
tbs_resum;
Configure the resumable space allocation feature with a value of 30 minutes using the
procedure below:
begin
dbms_resumable.set_timeout(timeout => 1800); -- 30 minutes
end;
/
Create a trigger to enable the resumable space allocation feature in a user session
every time the user pk g logs into the database.
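A minimal sketch of such a logon trigger (the trigger name and timeout are illustrative):

```sql
create or replace trigger trg_enable_resumable
after logon on database
begin
  -- Enable resumable space allocation only for the pkg user
  if user = 'PKG' then
    execute immediate
      'alter session enable resumable timeout 1800';
  end if;
end;
/
```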
Create another trigger that will send an email every time the user pkg finds a
resumable space problem in its session. This email will be sent to the DBA, informing
them about the problem. It will contain information about which table and tablespace
is experiencing the issue so it may be easily fixed by increasing the datafile pertaining
to that tablespace.
declare
  v_ret_value boolean;
  v_err_type varchar2(64); v_obj_type varchar2(64); v_owner varchar2(64);
  v_ts_name varchar2(64); v_obj_name varchar2(200); v_sub_obj varchar2(200);
begin
  --Get all error variables
  v_ret_value := dbms_resumable.space_error_info(
    error_type => v_err_type, object_type => v_obj_type,
    object_owner => v_owner, table_space_name => v_ts_name,
    object_name => v_obj_name, sub_object_name => v_sub_obj);
  --Set timeout to 2 hours. This is the time that DBAs have to fix the space problem
  dbms_resumable.set_timeout(7200);
end;
/
Now execute some inserts in tab_dbms_resumable by the user pkg. Once the tablespace
becomes full, the DBA is warned with an email informing on the resumable space
problems.
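A simple way to fill the tablespace is a plain insert loop (the row count here is illustrative; stop once the resumable suspension occurs):

```sql
conn pkg@ora11g
begin
  -- Keep inserting wide rows until tbs_resum runs out of space
  for i in 1..1000000 loop
    insert into tab_dbms_resumable
    values (rpad('A', 100, 'A'), rpad('B', 100, 'B'));
  end loop;
  commit;
end;
/
```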
In another session, logged in as system or another user with dba privilege, check the
dba_resumable view for information pertaining to the session that hangs due to space
problems.
--or
col error_msg for a30
select
session_id,
status,
error_msg
from
dba_resumable;
The alert.log file can also be checked to find information on resumable space allocation
problems that are occurring in the database. Lines like these will be shown:
Finally, fix the problem by increasing the datafile size of the tablespace experiencing
the space problem and check the insert process again. Make sure that the time limit of
resumable space timeout that was set for two hours is not exceeded.
select
   'alter database datafile '''||file_name||''' resize xx M;'
from
   dba_data_files
where
   tablespace_name = 'TBS_RESUM';
As the old adage goes, “time is money”. The dbms_resumable package can help save
both by providing the ability to continue a process from the point that a space error
happens rather than restarting the process from the beginning.
One in particular, named “lightweight” jobs, has the main purpose of reducing the
overhead generated when many small jobs need to run in a short time period. These
small jobs can generate redo overhead and hurt performance due to excessive
metadata creation, since every job execution needs to modify a schema object (the
scheduler object), and every job run takes some time to open and close the related
session.
Lightweight jobs are not schema objects and are also optimized to open and close
sessions faster; thus, they offer improved performance under these conditions. For
larger jobs, or less frequent intervals, that time tends to be irrelevant, and regular jobs
are still preferred over lightweight jobs. With more than sixty procedures and
functions, this package is one of the more robust in the Oracle Database. Rather than
spending time explaining each one, now focus on examples of the newer and most
commonly used functionalities.
Say we just started working with a company and, when running our traditional
database health check, we found too much CPU overhead. While tracking down
which process is causing the overhead, we discover that job creation is the culprit. So
we decide to use the new 11g feature, lightweight jobs, in order to streamline this
process.
Lightweight jobs are invoked using the create_job procedure; we just need to specify
the new job_style parameter, as will be seen in the first example. The first step is to
create the procedure that will be called by the program used in this example.
create or replace procedure jobs_failed as
begin
  -- The cursor query here is illustrative
  for reg in (select job_name
              from dba_scheduler_job_run_details
              where status = 'FAILED') loop
    dbms_output.put_line('Failed job: '||reg.job_name);
  end loop;
end;
/
Next, create the program using the procedure jobs_failed that has just been created
above.
--Lightweight jobs
--First create the program
begin
/*--Drop if already exists
dbms_scheduler.drop_program(
program_name => 'my_test_print_program',
force => TRUE);
*/
dbms_scheduler.create_program(
  program_name   => 'my_test_print_program',
  program_type   => 'stored_procedure',
  program_action => '"pkg"."jobs_failed"',
  enabled        => TRUE);
end;
/
Create the schedule for the job, specifying when it will start and the repeat interval. In
this case, the interval will be one minute.
begin
  /* --Drop if already exists
  dbms_scheduler.drop_schedule(
    schedule_name => 'my_schedule_1',
    force => TRUE);
  */
  dbms_scheduler.create_schedule(
    schedule_name   => 'my_schedule_1',
    start_date      => '25-JUN-10 09.00.00 PM Europe/Warsaw',
    repeat_interval => 'freq=minutely;interval=1',
    comments        => 'my test schedule.');
end;
/
Finally, create the lightweight job using the create_job procedure as follows.
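A sketch of the lightweight job, built on the program and schedule created above (the job name is illustrative):

```sql
begin
  dbms_scheduler.create_job(
    job_name      => 'my_lightweight_job',
    program_name  => 'my_test_print_program',
    schedule_name => 'my_schedule_1',
    job_style     => 'LIGHTWEIGHT',
    enabled       => TRUE);
end;
/
```

Note that a lightweight job must be based on an existing program; it cannot carry its own inline PL/SQL action the way a regular job can.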
The next example will show how to create an array of jobs. This approach is a quicker
alternative when the database administrator needs to create a set of jobs in a short
period of time.
declare
  v_job       sys.job;
  v_job_array sys.job_array;
begin
  v_job_array := sys.job_array();
  v_job_array.extend(100);
  for i in 1..100 loop
    -- job_template names the program the lightweight job is based on
    v_job := sys.job(
      job_name        => 'my_job_number_'||to_char(i),
      job_style       => 'LIGHTWEIGHT',
      job_template    => 'my_test_print_program',
      repeat_interval => 'freq=minutely',
      start_date      => systimestamp,
      enabled         => TRUE);
    v_job_array(i) := v_job;
  end loop;
  dbms_scheduler.create_jobs(v_job_array, 'STOP_ON_FIRST_ERROR');
end;
/
select job_name from dba_scheduler_jobs;
JOB_NAME
my_job_number_1
my_job_number_2
my_job_number_3
my_job_number_98
my_job_number_99
Information about jobs can be displayed using the dba_scheduler_jobs view. The
following script uses this view to display information about currently defined jobs.
-- *****************************************************************
-- Parameters:
--    1) Specific username or 'all' which doesn't limit output.
-- *****************************************************************
set verify off
select
owner,
job_name,
job_class,
enabled,
next_run_date,
repeat_interval
from
dba_scheduler_jobs
where
owner = decode(upper('&1'), 'ALL', owner, upper('&1'))
There are other views used to monitor jobs, programs, schedules, logs and more.
They all start with dba_scheduler_%. Now another useful feature in this package will be
reviewed.
There are many times when an Oracle DBA needs to start a database process when
an external event happens:
1. Upon arrival of a redo log from another instance
2. Upon arrival of an external file to an ETL feed. Note that more on the file
watcher can be found at http://www.dba-oracle.com/t_file_watcher.htm.
3. Upon arrival of a new file to become an external table
4. Upon arrival of a new object for BFILE inclusion
Before the Oracle file watcher utility, the DBA would have to write UNIX/Linux
shell scripts to watch directories for new files, nohupping a daemon process that
sleeps for a few seconds and checks for a condition, ad infinitum, until the condition
is met.
Note: The Oracle file watcher is not yet very sophisticated and it will only trigger
code to execute upon the arrival of a new data file into the target directory.
Also note that there are limitations when the code execution time exceeds the arrival
time for new files. By default, the arrival of new files will be ignored if the triggered
job is already running. If the job needs to be fired for each new arrival, regardless of
whether the job is already running or not, set parallel_instances=TRUE.
Oracle says that creating a file watcher involves five steps. Creating a file watcher is
not immediately obvious and the steps are a tad convoluted.
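As a sketch of the first of those steps (all object names, paths and credentials here are illustrative), the OS credential and the file watcher itself are created like this:

```sql
begin
  -- 1. OS credential used by the scheduler to inspect the directory
  dbms_scheduler.create_credential(
    credential_name => 'watch_cred',
    username        => 'oracle',
    password        => 'secret');

  -- 2. The file watcher: which directory and file pattern to watch
  dbms_scheduler.create_file_watcher(
    file_watcher_name => 'my_file_watcher',
    directory_path    => '/u01/app/incoming',
    file_name         => '*.dat',
    credential_name   => 'watch_cred',
    enabled           => TRUE);
end;
/
```

The remaining steps create a program whose argument receives the file-arrival metadata, register the program with the watcher, and create an event-based job that fires when the watcher raises its event.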
The dbms_scheduler package has many more functionalities not covered in this book.
Refer to the Oracle Documentation to learn more about the dbms_scheduler package.
Package dbms_server_alert
In the old days of Oracle management, the DBA had to write their own alerts,
and many Oracle professionals had extremely sophisticated alert scripts, like those
listed below:
■ Server-side alerts: UNIX/Linux shell scripts can be written to check for alert
log messages, dump and trace files and such. For working scripts, see the Oracle
script collection where working scripts are available for instant download.
■ Internal Oracle alerts: Dozens of customized alerts have been created for every
possible exception condition within Oracle, which is a valuable approach to alert
the user before the database crashes. For examples of such scripts, see Jon
Emmon's book, Oracle Shell Scripting. For a description of Oracle Linux
commands, see the Oracle Linux command reference poster.
Starting in Oracle 10g, there is the ability to set alerts within OEM, specify the alert
threshold and notification methods such as e-mail, pager and more. Oracle also
created a procedural interface to the alert mechanism using the new dbms_server_alert
package. The dbms_server_alert package contains procedures to set and get threshold
values. When the space usage exceeds either of the two thresholds, an appropriate
alert is issued. If the thresholds are not specified, the defaults are 85% for warning
and 97% for critical thresholds.
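Those defaults can be overridden per tablespace with set_threshold (the values and tablespace name below are illustrative):

```sql
begin
  -- Raise a warning at 90% full and a critical alert at 98% full
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => dbms_server_alert.operator_ge,
    warning_value           => '90',
    critical_operator       => dbms_server_alert.operator_ge,
    critical_value          => '98',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => 'TBS_RESUM');
end;
/
```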
Thresholds can be specified as a percentage higher or lower than the baseline values;
DB Control sets the thresholds using the derived values for each metric via the
dbms_server_alert.set_threshold procedure. All thresholds except for space-related alerts
should be explicitly defined. To enable the dbms_server_alert package functionality, set
the statistics_level initialization parameter to TYPICAL or ALL.
The dbms_server_alert package has the corresponding DBA views to see the status of all
scheduled alerts:
■ dba_outstanding_alerts: This view shows all existing alerts
■ dba_alert_history: This shows a history of all preexisting alerts that were set by
dbms_server_alert
■ dba_alert_arguments: This view shows the arguments to dbms_server_alert
■ dba_thresholds: This new view shows the threshold values for all alerts
Here are some examples of the dbms_server_alert package.
begin
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.buffer_cache_hit,
    warning_operator        => dbms_server_alert.operator_lt,
    warning_value           => '99',
    critical_operator       => dbms_server_alert.operator_lt,
    critical_value          => '98',
    observation_period      => 30,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => dbms_server_alert.object_type_system,
    object_name             => NULL);
end;
/
declare
  v_war_operator number(20);
  v_war_value    varchar2(100);
  v_crit_oper    number(10);
  v_crit_val     varchar2(100);
  v_obs_period   number(10);
  v_cons_occurr  number(10);
  v_metric_num   number(10);
  v_metric_name  varchar2(30);
begin
  dbms_server_alert.get_threshold(
    metrics_id              => dbms_server_alert.buffer_cache_hit,
    warning_operator        => v_war_operator,
    warning_value           => v_war_value,
    critical_operator       => v_crit_oper,
    critical_value          => v_crit_val,
    observation_period      => v_obs_period,
    consecutive_occurrences => v_cons_occurr,
    instance_name           => NULL,
    object_type             => dbms_server_alert.object_type_system,
    object_name             => NULL);
  v_metric_num := dbms_server_alert.buffer_cache_hit;
  select
    metric_name
  into
    v_metric_name
  from
    v$metric
  where
    metric_id = v_metric_num and rownum = 1;
  dbms_output.put_line(v_metric_name||' warning value: '||v_war_value);
end;
/
The example above shows how to set threshold values for a specific metric. It can be
repeated to set the other hit metrics by changing the metrics_id parameter and setting
the desired threshold values. The view_thresholds function shows which metrics are
being used in the database, as shown here:
select
distinct m .metric_name,
v .object_type,
v.metrics_id,
v .instance_name,
v. flags,
v .warning_operator,
v .warning_value,
v.critical_operator,
v.critical_value,
v .observation_period,
v .consecutive_occurrences,
v.object_id,
v.object_name
from
table(dbms_server_alert.view_thresholds) v,
v$metric m
where
v.metrics_id = m.metric_id;
Oracle 11g's data dictionary displays the dba_outstanding_alerts view using the following
source query:
select sequence_id,
reason_id,
owner,
object_name,
From this last example on the dbms_server_alert package, the user should now
understand what happens in the background when configuring thresholds using a
GUI tool like Oracle Grid Control or Oracle Database Control.
Package dbms_session
The dbms_session package has many different functionalities. Most of them can be run
through a SQL command line, and are used to/for:
The first example will focus on the set_context procedure for increasing Oracle
database security using Virtual Private Database (VPD). For the VPD to properly use
the security policy to add the WHERE clause to the end user's SQL, Oracle must
know details about the authority of the user. This is done at sign-on time using
Oracle's dbms_session package.
At sign-on, a database logon trigger executes, setting the application context for the
user by calling dbms_session.set_context. The set_context procedure can be used to set any
number of variables about the end user, including the application name, the user's
name, and specific row restriction information. Once this data is collected, the
security policy uses this information to build the runtime WHERE clause to append
to the end user's SQL statement. The set_context procedure sets several parameters
that are used by the VPD and accepts three arguments: the context namespace, the
attribute name, and the attribute value.
For example, assume that there is a publication table and we want to restrict access
based on the type of end user. Managers will be able to view all books for their
publishing company, while authors may only view their own books. Given that user
jsmith is a manager and user mault is an author, at login time the Oracle database
logon trigger generates the appropriate values and executes the statements shown next.
Create the package, which will have three procedures: one to set the context, one to
clear a specific context and the last one that will clear all information pertaining to a
specific context.
select
sys_context('userenv','current_user')
into
v_current_user
from
dual;
dbms_session.set_context(
namespace => 'publishing_application',
attribute => 'user_name',
value => 'jsmith');
dbms_session.set_context(
namespace => 'publishing_application',
attribute => 'company',
value => 'rampant_techpress');
elsif v_current_user = 'pkguser' then
dbms_session.set_context(
namespace => 'publishing_application',
attribute => 'role_name',
value => 'author');
dbms_session.set_context(
namespace => 'publishing_application',
attribute => 'user_name',
value => 'mault');
dbms_session.set_context(
namespace => 'publishing_application',
attribute => 'company',
value => 'rampant_techpress');
end if;
end;
procedure proc_clear_context is
begin
dbms_session.clear_context(namespace => 'publishing_application') ;
end;
procedure proc_clear_all_context is
begin
  dbms_session.clear_all_context(namespace => 'publishing_application');
end;
end pkg_context;
/
Now we need to log in as sys user and create the context which will point to the
procedure just created.
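The statement is along these lines (assuming the pkg_context package was created in the pkg schema):

```sql
conn sys@ora11g as sysdba

-- Bind the publishing_application namespace to the trusted package;
-- only pkg.pkg_context may now set attributes in this context
create or replace context publishing_application
  using pkg.pkg_context;
```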
Log in as mault user and execute the procedure proc_set_name_attr to test that it is
working. After executing the procedure, use the sys_context function to get the current
user context values.
select
sys_context('publishing_application','role_name')
from
dual ;
select
sys_context('publishing_application','company')
from
dual;
Application Context is a security feature that can be used in conjunction with fine-
grained access control for many things, including improving application performance.
It can also be used to cache attribute data for use in PL/SQL conditional statements.
Another functionality of dbms_session is the set_identifier procedure, which can be used
to help identify a user session. Here is a simple example. An identifier can be set by
executing the command below at the session level.
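A sketch of such a command (the identifier string is illustrative and matches the query output below):

```sql
begin
  -- Tag the current session so it can be found in v$session
  dbms_session.set_identifier(client_id => 'not_app_user');
end;
/
```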
Query the v$session view to find out if there are any pkg users logged in:
select
client_identifier
from
v$session
where
client_identifier is NOT NULL;
CLIENT IDENTIFIER
not_app_user
dbms_session_app_dbms
The last functionality example for the dbms_session package will pertain to enabling and
disabling a trace in a SQL session. The set_sql_trace procedure is used to enable SQL
trace in a user session. It runs as the SQL command alter session set sql_trace=true. If
needed, for example, to trace all sessions by a specific user, create the trigger below.
Only DBA users are able to use the dbms_monitor package to enable a trace in any
database session. Any database user can enable an SQL trace in their own session
using the session_trace_enable procedure in the dbms_session package, as demonstrated in
these next lines.
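A minimal sketch of self-tracing a session:

```sql
begin
  -- Trace the current session, including wait events but not bind values
  dbms_session.session_trace_enable(waits => TRUE, binds => FALSE);
end;
/

-- ... run the SQL to be traced ...

begin
  dbms_session.session_trace_disable;
end;
/
```

The resulting trace file lands in the diagnostic trace directory and can be formatted with tkprof.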
Package dbms_shared_pool
The Oracle shared pool is a shared memory area where PL/SQL and cursor objects are
stored. The dbms_shared_pool package allows access to this area, offering procedures
that help improve performance by keeping the most frequently accessed objects in
this memory area. Package pinning has become a very important part of Oracle
tuning, and with the introduction of system-level triggers in Oracle 8i, there is now an
easy tool to ensure that frequently-executed PL/SQL remains cached inside the
shared pool.
Just like using the KEEP pool with the data buffer caches, the pinning of packages
ensures that the specified package always remains in the Most-Recently-Used (MRU)
end of the data buffer. This prevents the PL/SQL from being paged-out and then
reparsed upon reload. The Oracle DBA controls the size of this RAM region by
setting the shared_pool_size parameter to a value large enough to hold all of the
required PL/SQL code.
The Oracle shared pool contains Oracle's library cache, which is responsible for
collecting, parsing, interpreting, and executing all of the SQL statements that go
against the Oracle database. Hence, the shared pool is a key component, so it is
necessary for the Oracle database administrator to check for shared pool contention.
The next example shows how to find the largest and most frequently executed objects
and pin them into memory using a database startup trigger.
set lines 79
column type format a12
column object format a36
column loads format 99990
column execs format 9999990
column kept format a4
column "total space (K)" format a20
This query shows the biggest and most frequently used objects in the database. They
are perfect candidates to be pinned into the shared pool memory. To pin them, use
the trigger below:
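A startup trigger along these lines can be used (the pinned objects here are illustrative; pin the candidates found by the query above):

```sql
create or replace trigger trg_pin_objects
after startup on database
begin
  -- Flag 'P' pins packages, procedures and functions
  dbms_shared_pool.keep(name => 'SYS.STANDARD',    flag => 'P');
  dbms_shared_pool.keep(name => 'SYS.DBMS_OUTPUT', flag => 'P');
end;
/
```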
It is also possible to pin cursors, sequences and triggers. First, create a cursor and use
it. Next, identify the address and hash_value column values from the v$open_cursor view
and use these values to pin this cursor into the shared pool memory.
declare
  cursor c1 is select object_name from dba_objects; -- illustrative query
  v_name varchar2(128);
begin
  dbms_output.enable(buffer_size => 10000000);
  open c1;
  if c1%isopen then
    loop
      fetch c1 into v_name;
      exit when c1%notfound;
    end loop;
    close c1;
  end if;
end;
/
Now, query the v$open_cursor view to find the cursor created. Use the keep procedure
with the information returned by address and hash_value to pin this cursor into memory.
select
user_name,
address,
hash_value
from
gv$open_cursor
where
sql_text like '%dba_objects%';
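With the address and hash value in hand, the cursor can be pinned (the values below are illustrative; substitute the ones returned by the query above):

```sql
begin
  -- Flag 'C' indicates a cursor; the name is '<address>,<hash_value>'
  dbms_shared_pool.keep(
    name => '00000000856A0A10,1052545619',
    flag => 'C');
end;
/
```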
select
owner||'.' ||name "Obj Name" ,
type,
to_char(sharable_mem/1024,'9,999.9') "space(K)",
loads,
executions "Execs",
kept
from
v$db_obj ect_cache
where
type ='cursor'
and
name like '%dba_objects%'
and
kept='yes'
order by
"space(K)",
loads,
owner,
name
/
Obj Name TYPE SPACE (K) LOADS Execs KEPT
There is also a procedure named purge that will clean data from the memory for a
specified object. The target object must no longer be pinned in memory when using
the purge procedure.
begin
-- Purging sequence
dbms_shared_pool.purge(
name => 'sys.audses$',
flag => 'Q');
end;
/
If a user is executing a process that requires memory from the shared pool, and the
RDBMS cannot find the needed space, an ‘out of memory’ message can be displayed
to the user. To do this, configure the threshold using the aborted_request_threshold
procedure. The value specified is in bytes and can range from 5000 bytes to 2 GB.
--Setting shared pool out of memory threshold
begin
  dbms_shared_pool.aborted_request_threshold(threshold_size => 10485760);
  -- 10MB is the limit before the 'out of memory' message is displayed to the user
end;
/
Package dbms_utility
For the most part, Oracle DBMS packages focus on one subject; in this regard, the
dbms_utility package is out of the ordinary. This package has procedures and functions
that work for different purposes like returning information about instances, databases
and data blocks, analyzing database objects, converting data formats and more. The
script responsible for its creation is $ORACLE_HOME/rdbms/admin/dbmsutil.sql and
is called when the database is created.
I will not attempt examples for all procedures and functions; instead, the main and
most commonly used features are used in the next sample. It is well known that
dbms_stats and the old-fashioned analyze command are used to analyze objects in a
database. The dbms_utility package also has its own procedures that can be used to
help database administrators optimize their analysis process.
begin
dbms_utility.analyze_database(
method => 'estimate',
estimate_rows => NULL,
estimate_percent => 30,
method_opt => 'for all columns size 20');
end;
/
begin
dbms_utility.analyze_part_object(
  schema        => 'pkg',
  object_name   => 'tab_partitioned',
  object_type   => 'T',
  command_type  => 'E',
  command_opt   => 'E',
  sample_clause => 'sample 30 percent');
end;
/
begin
dbms_utility.analyze_schema(
schema => 'pkg',
method => 'estimate',
estimate_rows => NULL,
estimate_percent => 20,
method_opt => 'for all indexes');
end;
/
Other procedures can return information about databases, instances and data blocks,
as in the next example. In a RAC environment, it is sometimes necessary to know
which instances are active.
select
  inst_number,
  inst_name
from
  v$active_instances;
INST_NUMBER INST_NAME
          1 host1:SID1
          2 host2:SID2
          3 host3:SID3
          4 host4:SID4
set serveroutput on
declare
  v_version       varchar2(15);
  v_compatibility varchar2(15);
begin
  dbms_utility.db_version(
    version       => v_version,
    compatibility => v_compatibility);
  dbms_output.put_line(a => 'My database version is: '||v_version||'.');
  dbms_output.put_line(a => 'Compatibility is set to: '||v_compatibility||'.');
end;
/
Other procedures that are very handy, in this case for transporting table data to CSV
files, are comma_to_table and table_to_comma. Take a look at these examples.
For another example, let’s suppose we have a table with one column which contains
first and last names of database users and we need to separate these values with
commas:
insert into
  tab_users
values ('"Abraham","Lincoln"');
insert into
  tab_users
values ('"George","Washington"');
insert into
  tab_users
values ('"Thomas","Jefferson "');
commit;
FULL_NAME
"Ronald","Reagan"
Next, we create a second table which will receive the data separated into two columns.
Then create the procedure that will use the comma_to_table procedure in order to split
the data and insert it into a new table.
create table
tab_users_temp (
first_name varchar2(30),
last_name varchar2(30));
create or replace procedure prc_dbms_utility is
  v_tab dbms_utility.uncl_array;
  v_cnt binary_integer;
begin
  for reg in (select full_name from tab_users) loop
    dbms_utility.comma_to_table(
      list => reg.full_name, tablen => v_cnt, tab => v_tab);
    insert into
      tab_users_temp
      (first_name, last_name)
    values
      (v_tab(1), v_tab(2));
  end loop;
  commit;
end prc_dbms_utility;
/
select
  first_name,
  last_name
from
  tab_users_temp;
"Ronald" "Reagan"
"Abraham" "Lincoln"
"George" "Washington"
"Thomas" "Jefferson "
The dbms_utility package can also be used to get a data block address. Many Oracle
scripts will provide the file and block number, but this information must then be
translated into a data block address.
Code 7.35 - dbms_utility_make_data_block.sql
conn sys@ora11g as sysdba
set serveroutput on
declare
v_address varchar2 (30) ;
begin
select
dbms_utility.make_data_block_address(file => 101, block => 50)
into
v_address
from
dual ;
dbms_output.put_line(a => 'Data block address is: '||v_address);
end;
/
Also use dbms_utility to help compile objects in a schema. With the compile_schema
procedure, use FALSE in the compile_all parameters to compile only the invalid
objects.
begin
dbms_utility.compile_schema(
schema =>'pkg' ,
compile_all => FALSE,
reuse_settings => FALSE);
end;
/
These were just a few representative examples of the many procedures and functions
available within the dbms_utility package.
Package dbms_warning
The dbms_warning built-in package is used in 11g to manipulate the default PL/SQL
warning messages in conjunction with the new 11g plsql_warnings parameter. It is
possible to manage all PL/SQL warning messages by specifying the plsql_warnings
parameter.
With dbms_warning, the user can get the current values being used, set new values and
delete settings at the session or system level. The next example shows how to use the
dbms_warning package to manipulate PL/SQL warning messages. Specify whether to
enable or disable different kinds of warnings, or treat warnings of a specific category
as errors, by setting plsql_warnings = '[enable|disable|error]:category1'
[, '[enable|disable|error]:category2'] ...
There are four possible warning categories in the plsql_warnings initialization parameter
or at the session level:
1. all: This sets warnings for all types
2. severe: Sets warning messages for wrong results and abnormal behavior.
3. performance: Warning messages involving performance problems
4. informational: Warning messages of the informational category
The next example shows how to use dbms_warning to set, get and suppress PL/SQL
warning information.
select
  dbms_warning.get_category(
    warning_number => 5000)
from
  dual;
DBMS_WARNING.GET_CATEGORY(WARN
severe
select
dbms_warning.get_category(
warning_number => 6000)
from
dual;
DBMS_WARNING.GET_CATEGORY (WARN
informational
select
  dbms_warning.get_category(
    warning_number => 7203)
from
  dual;
DBMS_WARNING.GET_CATEGORY(WARN
performance
The function get_warning_setting_cat is used to get warning category settings for the
current session.
select
dbms_warning.get_warning_setting_cat(
warning_category => 'performance')
from
dual;
disable:performance
select
dbms_warning.get_warning_setting_cat(
warning_category => 'informational')
from
dual ;
disable:informational
select
dbms_warning.get_warning_setting_cat (
warning_category => 'severe')
from
dual;
disable:severe
The get_warning_setting_num function returns whether the specified warning number
is enabled or disabled in the current session. Here is an example:
select
dbms_warning.get_warning_sett ing_num(
warning_number => 5000)
from
dual ;
select
dbms_warning.get_warning_setting_num(
warning_number => 6000)
from
dual;
begin
dbms_warning.add_warning_setting_cat(
warning_category => 'all',
warning_value => 'enable',
scope => 'session');
end;
/
begin
dbms_warning.add_warning_setting_num(
warning_number => 7203,
warning_value => 'enable',
scope => 'system');
end;
/
begin
dbms_warning.set_warning_setting_string(
  value => 'disable:all',
  scope => 'system');
end;
/
begin
dbms_warning.set_warning_setting_string(
  value => 'disable:all',
  scope => 'session');
end;
/
This shows how the warning setting influences PL/SQL compilation warning
messages. The first step is to check the current warning values. Then create a test
procedure; after warnings are enabled at the session level and the procedure is
compiled again, the warning messages appear.
conn / as sysdba
enable:all
Now disable warnings in this procedure for errors PLW-05018 and PLW-06002 by
using the compile option below or by using the add_warning_setting_num procedure.
begin
dbms_warning.add_warning_setting_num(
warning_number => 06002,
warning_value => 'disable',
scope => 'session');
end;
/
alter procedure prc_test_dbms_warning compile ;
--At this time just one warning will be shown
show errors
begin
dbms_warning.add_warning_setting_num(
warning_number => 05018,
warning_value => 'disable',
scope => 'session');
end;
/
alter procedure prc_test_dbms_warning compile ;
Finally, no warning will be shown when compiling the test procedure after disabling
warning messages using the add_warning_setting_num procedure, as those warning
numbers will be suppressed.
Package debug_extproc
External procedures have a particular package used to debug their sessions called
debug_extproc. In order to debug an external procedure, start an agent within the
session. This package is not installed by default. If the package is needed, the script
dbgextp.sql must be executed. The script is located in both the Companion CD for
Oracle 10g and the new Oracle Examples CD for Oracle 11g.
Now, follow this example which shows how to enable the debug process in an external
procedure. First, run the dbgextp.sql script which will create the debug_extproc package.
@?/plsql/demo/dbgextp.sql
This next step is used in Oracle 10g. Start the external procedure agent. This connects
to the external procedure and enables information about the process being used to be
gathered.
begin
debug_extproc.startup_extproc_agent;
end;
/
The next step is to find the PID of this session that has an agent started.
Now use the gdb executable to attach the extproc to the session that has an agent started.
Then run the external procedure so the agent can gather the debug information:
select
external_program('ORACLE_HOME')
from
dual;
After that, the agent connects the external procedure called by Oracle to the debugger.
Now find the errors and determine how to fix them using the debugger. In this
example, Linux gdb was used. It may be necessary to refer to the debugger’s
documentation for specific actions that can be performed from here.
Package dbms_random
The next example uses the dbms_random package to randomize a series of numbers
from which a character string is then constructed. From the random number obtained
with dbms_random, take the three numbers from the third position and check if they
fall between 33 and 125, the allowable numbers for ASCII printable characters. After
all the work with dbms_random, we figured out that it would have been easier just to
cycle through the possible values for the varrays in a nested looping structure. Of
course, that would have led to a non-random ordered table. For the work we were
doing, it would have been OK to do it that way, but we can see other situations where
the dbms_random technique would be beneficial.
My thought was to use varray types and populate them with the various possible
values, then use dbms_random.value to generate the index values for the various
varrays. The count column was just a truncated call to dbms_random in the range of
1 to 600.
declare
  v_rand number;
begin
  -- Assumes a table rand(id number, val number) exists
  dbms_random.initialize(123456);
  for i in 1..1000 loop
    v_rand := dbms_random.value;
    insert into rand values (i, v_rand);
  end loop;
  commit;
end;
/
Summary
This chapter has shown how and when to use almost all packages designed for
management and monitoring tasks with regard to their day-to-day database
administrative work. In the next chapter, Data Warehouse packages will be explained
and demonstrated.
Chapter 8: Oracle Data Warehouse Packages
A DWH (Data Warehouse) database normally holds summarized operational data
pertaining to an entire enterprise. Usually, data is loaded through a clean-up and
aggregation process, such as an ETL process (Extraction, Transformation and Load),
with predetermined intervals such as daily, monthly or quarterly. One of the key
concepts in data warehousing is that the data is stored along a timeline. A data
warehouse must support the needs of a large variety of users. A DWH may contain
summarized as well as atomic data. A DWH may combine the concepts of OLTP,
OLAP and DSS into one physical data structure. Major operations in a DWH are
usually reported with a low to medium level of analytical processing.
Data warehouse design and creation is an interactive and evolving process; it is never
entirely finished. For a DWH to succeed, the user community must be intimately
involved from design through implementation. Generally, data warehouses are
denormalized structures. A normalized database stores the greatest amount of data in
the smallest amount of space. In a data warehouse, storage space is sacrificed for
speed via denormalization.
Many of the time-honored concepts are bent or completely broken when designing a
data warehouse. An OLTP designer may have difficulty in crossing over to data
warehousing design. In fact, a great OLTP designer may find DWH to be frustrating
and difficult. Many object-related concepts are implemented in a DWH design. A
source for DWH designers may be found in a pool of OO (Object-Oriented)
developers.
These next pages will introduce some packages that can be used in data warehouse
environments.
This example will show how simple it is to change the path of an SQL statement to a
new one that is performing better. First, the user must have execute privilege on the
dbms_advanced_rewrite package. The initialization parameter query_rewrite_enabled must
be TRUE and query_rewrite_integrity must be trusted or stale_tolerated.
Connect as sysdba and grant the necessary privileges to pk g user. Check if the
initialization parameters have the correct values and change them if necessary.
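The setup steps above can be sketched as follows; the pkg user name follows this book's examples, so adjust it to the schema in use:

```sql
-- Run as SYSDBA. Execute on dbms_advanced_rewrite is not granted
-- to PUBLIC by default, so grant it explicitly:
grant execute on dbms_advanced_rewrite to pkg;

-- Check the current values of the rewrite parameters...
show parameter query_rewrite

-- ...and change them if necessary:
alter system set query_rewrite_enabled = true;
alter system set query_rewrite_integrity = trusted;
```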
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Suppose that the application has a view named sales_v1 that is performing a full table
scan. Since it has a full hint inside of it, create another view without this hint. The
second view will be sales_v2.
Check the explain plan for both views and note that the first one is doing a FTS.
--First view
explain plan for
select * from sales_vl;
select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------
|  1 |  PARTITION RANGE ALL|       |  165 |  1485 | 1376 (1)| 00:00:17 |
|* 2 |   TABLE ACCESS FULL | SALES |  165 |  1485 | 1376 (1)| 00:00:17 |

   2 - filter("PROD_ID"=10)

14 rows selected.
--Second view
explain plan for
select * from sales_v2;
select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT

   4 - access("PROD_ID"=10)

16 rows selected.
begin
  dbms_advanced_rewrite.declare_rewrite_equivalence(
    name             => 'my_first_equivalence',
    source_stmt      => 'select * from sales_v1',
    destination_stmt => 'select * from sales_v2',
    validate         => FALSE,
    rewrite_mode     => 'recursive');
end;
/
select
*
from
  dba_rewrite_equivalences;
OWNER NAME SOURCE_STMT DESTINATION_STMT
REWRITE_MODE
Finally, test the explain plan again for the first view. Note that Oracle now returns the
plan of the second view rather than the first one.
   4 - access("PROD_ID"=10)
16 rows selected
One of the options available for the Enterprise Edition of Oracle 10g is the OLAP
Option, a multidimensional calculation engine that allows the DBA to perform OLAP
analysis on multidimensional datatypes. By using the OLAP Option, DBAs working
on data warehousing projects can choose to store their detail level data in normal
Oracle relational tables, and then store aggregated data in OLAP Option analytic
workspaces for further multidimensional analysis.
However, it was not possible to use these analytic workspaces as replacements for
materialized views if the idea was to take advantage of query rewrite as the query
rewrite mechanism in Oracle 9i would never recognize the olap_table function as being
one that could provide the aggregated answers to the users’ original query. Oracle 10g
addresses this shortcoming by providing a new feature called query equivalence.
Query equivalence allows us to declare that two SQL statements are functionally
equivalent and that the target statement should be used in preference to the source
statement. By using the query equivalence feature, we can produce a custom SQL
query, in this instance by using the olap_table feature to retrieve summary data from an
analytic workspace, and have the query used to satisfy a regular SQL statement that
summarizes via the usual SUM() and GROUP BY clauses.
select
  p.category,
  g.country,
  sum(s.sales)
from
  product p,
  geography g,
  sales s
where
  s.product_id = p.product_id
and
  s.geography_id = g.geography_id
group by
  p.category,
  g.country;
To declare that the analytic workspace query is functionally equivalent to the previous
query, issue the command:
dbms_advanced_rewrite.declare_rewrite_equivalence (
'my_second_equivalence',
'select
Query equivalence can be used to substitute any SQL or DML statement for another,
including use of the new SQL MODEL clause. It is particularly useful when the SQL is
generated by an application and cannot be changed, but there is a different way to
phrase the query, perhaps using new data structures such as an OLAP Option analytic
workspace that has been created.
begin
dbms_advanced_rewrite.drop_rewrite_equivalence(
    name => 'my_first_equivalence');
end;
/
Package dbms_cube
Oracle 10g introduced some new OLAP capabilities using the built-in analytical
workspaces of the Oracle database. There are many new PL/SQL and XML-based
interfaces that aid in the creation of workspaces based on the cubes, dimensions,
measures, and calculations defined in the OLAP database catalog. These new
interfaces can be used by the new packages provided or by Oracle Enterprise
Manager to define and build analytic workspaces. This removes the need for the user
to learn OLAP DML commands. This is now the familiar territory of PL/SQL
packages and OEM!
OLAP data can be stored in either relational tables or multi-dimensional data types
held within an analytic workspace. Among its many functions, the dbms_cube package
can be used to create and populate analytic workspaces by using the build procedure.
The first step is to verify that OLAP is already installed and is valid in the database.
To do so, follow MOSC Note: 296187.1 in Oracle Support. After confirming that
OLAP is installed, move on to some examples of how to create materialized views
and dimensions using the cubes packages.
Code 8.2 - dbms_cube.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
alter table
"pkg"."cal_month_sales_mv"
modify
  ("calendar_month_desc" not null enable);
   2 - access("item_1"="T"."time_id")

19 rows selected.
declare
  v_mview varchar2(40);
begin
  v_mview := dbms_cube.create_mview(
    mvowner        => 'pkg',
    mvname         => 'cal_month_sales_mv',
    sam_parameters => 'build=immediate');
end;
/
After creating the cube, check the explain plan again. This shows that the materialized
view that has just been created is being used automatically.
--After creating mview the new plan is this one below
PLAN_TABLE_OUTPUT
If it is necessary to refresh a cube or any stale dimensions before refreshing, use the
build procedure. Following are two examples on the correct syntax for using the build
procedure based either on dimensions or cubes:
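As a hedged sketch of that syntax (the dimension and cube names here are illustrative, not objects created earlier in this chapter):

```sql
-- Refresh a stale dimension first, then the cube that depends on it.
begin
  -- Dimension: clear its values, reload keys, compile
  dbms_cube.build('TIMES_DIM USING (CLEAR VALUES, LOAD, COMPILE)');
  -- Cube: clear detail data only, reload facts, re-aggregate
  dbms_cube.build('CAL_MONTH_SALES USING (CLEAR LEAVES, LOAD, SOLVE)');
end;
/
```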
Confusing as it may be, building something is not the purpose of the procedure
named build; rather, it is used to load data into a cube or a dimension. Here is some
information on the commands that can be used:
■ clear [values | leaves | aggregates]: This command prepares the cube for data refresh.
If used on dimensions, it deletes keys and, consequently, data values for cubes
using this dimension.
■ values: If this option is used, all data in the cube is cleared. All facts will need to
be reloaded. Also, aggregates need to be recomputed. Supports the complete refresh
method.
■ leaves: Clears only detail data and not aggregates. Only aggregates for changed or
new facts need to be recomputed, though all facts must be reloaded.
■ aggregates: This option clears all aggregates but maintains detail data. Aggregates
must be recomputed.
■ load [synch | no synch]: This command loads data into a dimension or cube. Only
two options are available. They are:
■ synch: This option correlates relational data sources with dimension keys.
Code 8.3 - dbms_cube_xml.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Package dbms_cube_advise
While the dbms_cube package is designed to create, load and process data in cubes and
dimensions, dbms_cube_advise is used to check if a cube’s materialized view supports a
fast refresh and query rewrite. The most important function of this package is
mv_cube_advice. It is used to generate missing objects and/or to verify that all
requirements are met.
Here is an example:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
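A minimal call might look like the following sketch; the parameter names are taken from the diagnostic output shown below (mvOwner, mvName), so verify the exact signature of mv_cube_advice on your release:

```sql
-- mv_cube_advice is a table function returning advice rows
select
  *
from
  table(dbms_cube_advise.mv_cube_advice(
    mvowner => 'pkg',
    mvname  => 'cb$cal_month_sales'));
```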
After the normal query results, diagnostic messages like these will appear:
20100611 10:37:23.155957000 dbms_coad_diag NOTE: Parameter mvOwner pkg
20100611 10:37:23.156923000 dbms_coad_diag NOTE: Parameter mvName cb$cal_month_sales
20100611 10:37:23.157031000 dbms_coad_diag NOTE: Parameter factTab pkg.sales
20100611 10:37:23.157055000 dbms_coad_diag NOTE: Parameter cubeName cal_month_sales
20100611 10:37:23.157074000 dbms_coad_diag NOTE: Parameter cnsState rely disable novalidate
20100611 10:37:23.157093000 dbms_coad_diag NOTE: Parameter NNState disable novalidate
20100611 10:37:23.157117000 dbms_coad_diag NOTE: Begin NN
20100611 10:37:23.625093000 dbms_coad_diag NOTE: End NN
20100611 10:37:23.625156000 dbms_coad_diag NOTE: Begin PK
20100611 10:37:23.842807000 dbms_coad_diag NOTE: End PK
20100611 10:37:23.842907000 dbms_coad_diag NOTE: Begin FK
20100611 10:37:23.980269000 dbms_coad_diag NOTE: End FK
20100611 10:37:23.980326000 dbms_coad_diag NOTE: Begin RD
20100611 10:37:24.849280000 dbms_coad_diag NOTE: End RD
20100611 10:37:24.849339000 dbms_coad_diag NOTE: Begin CM
20100611 10:37:24.851761000 dbms_coad_diag NOTE: End CM
Note that additional diagnostic information was added at the end of the query results.
This can help to diagnose any issues found through the recommendations function.
Good scripts related to OLAP are available for download at this Oracle link:
http://www.oracle.com/technology/products/bi/olap/olap_dba_scripts.zip. For
example, this script shows how much space an analytic workspace is consuming in the
database:
select
dbal.owner||'.'||substr(dbal.table_name,4) awname,
sum(dbas.bytes)/1024/1024 as mb,
dbas.tablespace_name
from
dba_lobs dbal,
dba_segments dbas
where
  dbal.column_name = 'AWLOB'
and
dbal.segment_name = dbas.segment_name
group by
dbal.owner,
dbal.table_name,
dbas.tablespace_name
order by
dbal.owner,
dbal.table_name;
For further information regarding OLAP Administration, check out the Oracle
OLAP User's Guide 11g Release 1 (11.1).
Whilst this was useful for Java programmers, it was not all that relevant for PL/SQL
programmers and to remedy this, Oracle 10g came with a new package called
dbms_data_mining that provides PL/SQL access to the data mining engine.
Like the Java API, dbms_data_mining allows for building a data mining model, testing it
and then applying the model to provide scores or predictive information for an
application. One of the key differentiators for Oracle data mining is that mining
models can be applied directly to data in the database. There is no need to extract the
data and then separately load it into the mining engine, meaning that data mining can
now be carried out in real time. The Oracle data mining engine can be pointed at any
schema in the database. If the data needs processing beforehand to place continuous
and discrete values into range bins, there is also a new accompanying package,
dbms_data_mining_transform, to carry this out automatically.
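Once a model exists, scoring data in place can be sketched like this; the table and column names are illustrative assumptions, not objects from this book's schema:

```sql
begin
  dbms_data_mining.apply(
    model_name          => 'my_first_model',  -- an existing mining model
    data_table_name     => 'new_customers',   -- assumed input table
    case_id_column_name => 'customer_id',     -- assumed case id column
    result_table_name   => 'apply_results');  -- created by the call
end;
/
```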
Oracle provides a graphical tool named Oracle Data Miner. This can be downloaded
at http://www.oracle.com/technology/products/bi/odm/index.html. Further
information on Oracle data mining can be found in “Oracle Data Mining Concepts”
at http://tahiti.oracle.com.
There should be some familiarity with Oracle data mining as some objects are a
prerequisite for using the dbms_data_mining and dbms_data_mining_transform packages. A
complete Oracle by Example of ODM can be found through this link:
http://www.oracle.com/technetwork/database/options/odm/odm-samples-
194497.html. The first example will show how to create, drop, rename, export, import
and get information about mining models.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
  dbms_data_mining.create_model(
    model_name          => 'my_first_model',
    mining_function     => dbms_data_mining.feature_extraction,
    data_table_name     => 'tab_my_tab_example',
    case_id_column_name => 'id');  -- column name assumed for illustration
end;
/
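Dropping and renaming follow the same pattern; a brief sketch using the model name from the example above:

```sql
-- Rename the model, then drop it under its new name
begin
  dbms_data_mining.rename_model(
    model_name     => 'my_first_model',
    new_model_name => 'my_renamed_model');
  dbms_data_mining.drop_model(model_name => 'my_renamed_model');
end;
/
```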
Here are some important points on exporting a model. During the process of the
import or export procedures, temporary tables with the names
dm$p_model_expimp_temp, dm$p_model_import_temp, and dm$p_model_tabkey_temp are
created. Located in the owner’s schema, they contain internal information about
export or import processes. Be sure that the object directory is already created in the
database.
In this example, only two models will be exported. If it is necessary to export all
models, leave model_filter blank.
The command below exports the two named models for the user that is currently connected:
begin
dbms_data_mining.export_model(
    filename     => 'my_first_exp_data_mining',
    directory    => 'exp_mining_dir',
    model_filter => 'name in (''nmf_model_1'', ''svm_model_2'')',
    filesize     => NULL,
    operation    => NULL,
    remote_link  => NULL,
    jobname      => NULL);
end;
/
The next examples show how to add, remove and get information about the cost
matrix. The process of adding a cost matrix associates the classification
model with the cost matrix table.
begin
dbms_data_mining.add_cost_matrix(
    model_name => 'my_first_exp_data_mining',
cost_matrix_table_name => 'costs_nb',
cost_matrix_schema_name => NULL);
end;
/
--Remove cost matrix
begin
dbms_data_mining.remove_cost_matrix(
    model_name => 'my_first_exp_data_mining');
end;
/
There are other functions that can be used to gather information about models. They
are described below:
from
  table(dbms_data_mining.get_model_details_nb(
    model_name => 'my_first_model'));
While dbms_data_mining is used to create, drop, change and get information about data
mining models, dbms_data_mining_transform is used to prepare data for mining.
In the next examples, operations like create, insert, stack and xform will be found. They
are used to transform columns of data for mining. The operations are briefly
described here, followed by examples:
■ create: This operation creates a transformation table used for transformations of
data such as binning, column removal, normalization, outlier treatment and
missing value treatment.
■ insert: This operation populates a transformation table from a specified data source.
■ stack: This operation adds to a list of transformation instructions. This stack can
be used in the create_model procedure.
■ xform: This operation creates a view based on table data which contains
transformed columns.
Procedures starting with create_% are responsible for creating definition tables and
procedures starting with insert_% are responsible for inserting transformation
instructions into definition tables. There are also the stack_% procedures used to add
expressions to the transformation definition stack and the xform_% procedures to create views that
can add, remove or transform values and expressions.
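As a hedged sketch of the create/insert/xform pattern for numeric binning (the table and view names are illustrative, and the exact parameter defaults should be checked against the package reference for your release):

```sql
begin
  -- create: an empty bin-definition table
  dbms_data_mining_transform.create_bin_num(
    bin_table_name => 'my_bin_defs');
  -- insert: compute 5 equi-width bin boundaries from the data
  dbms_data_mining_transform.insert_bin_num_eqwidth(
    bin_table_name  => 'my_bin_defs',
    data_table_name => 'mining_data',
    bin_num         => 5);
  -- xform: expose a view with the binned columns
  dbms_data_mining_transform.xform_bin_num(
    bin_table_name  => 'my_bin_defs',
    data_table_name => 'mining_data',
    xform_view_name => 'mining_data_binned');
end;
/
```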
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
Package dbms_dimension
Describing dimensions was always a challenge, and we used to depend on
dictionary tables such as dba_dimensions and dba_dim_attributes. Oracle Enterprise
Manager (OEM) was also useful for describing the structure of a dimension.
execute dbms_dimension.describe_dimension('sales.mydim');
In the next example, these two procedures will be used to describe and validate a
dimension owned by the pkg user.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
set serveroutput on
begin
  dbms_dimension.describe_dimension(
    dimension => 'promotions_dim');
end;
/
dimension pkg.promotions_dim
level category is pkg.promotions.promo_category_id
level promo is pkg.promotions.promo_id
level promo_total is pkg.promotions.promo_total_id
level subcategory is pkg.promotions.promo_subcategory_id
hierarchy promo_rollup (
promo child of
subcategory child of
category child of
promo_total
)
attribute category level category determines
pkg.promotions.promo_category
attribute promo level promo determines pkg.promotions.promo_begin_date
attribute promo level promo determines pkg.promotions.promo_cost
attribute promo level promo determines pkg.promotions.promo_end_date
attribute promo level promo determines pkg.promotions.promo_name
attribute promo_total level promo_total determines
pkg.promotions.promo_total
Next, we use the validate_dimension procedure to verify that the relationships for a
dimension are valid.
begin
dbms_dimension.validate_dimension(
dimension => 'promotions_dim',
incremental => TRUE,
check_nulls => TRUE,
    statement_id => 'Validating dim promotions_dim');
end;
/
Now we check for errors regarding relationships using one of these tables described
below:
owner           varchar2(30)
table_name      varchar2(30)
dimension_name  varchar2(30)
relationship    varchar2(11)
bad_rowid       rowid
statement_id    varchar2(30)

owner           varchar2(30)
table_name      varchar2(30)
dimension_name  varchar2(30)
relationship    varchar2(11)
bad_rowid       rowid
This single package is created by dbmssum.sql, which can be found in the
$ORACLE_HOME/rdbms/admin directory, and it overrides the validate_dimension procedure that
was in the dbms_olap package in older database versions.
Oracle materialized views were first introduced in Oracle 8 as snapshots and were
enhanced to allow very fast dynamic creation of complex objects. Oracle materialized
views allow sub-second response times by pre-computing aggregate information, and
Oracle dynamically rewrites SQL queries to reference existing Oracle materialized
views.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
begin
  dbms_mview.begin_table_reorganization(
    tabowner => 'pkg',
    tabname  => 'cal_month_sales_mv');
end;
/
begin
  dbms_mview.end_table_reorganization(
    tabowner => 'pkg',
    tabname  => 'cal_month_sales_mv');
end;
/
To estimate the size of a materialized view before creating it, use the
estimate_mview_size procedure as follows:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
declare
v_num_rows number;
  v_num_bytes number;
begin
dbms_mview.estimate_mview_size(
    stmt_id => 'test_estimate',
select_clause => 'select
department_id,
job_id,
sum(salary)
from
emp
group by
department_id,
job_id',
num_rows => v_num_rows,
num_bytes => v_num_bytes);
  dbms_output.put_line(a => 'Number of rows is:'||v_num_rows);
dbms_output.put_line(a => 'Number of bytes is:'||v_num_bytes);
end;
/
Number of rows is :1
Number of bytes is:54
Another package option is useful for gleaning materialized view details. To know
what kind of refresh is supported, simply run the explain_mview procedure and it will
show this information. The results will be stored in a table named mv_capabilities_table,
created by the utlxmv.sql script. It is also possible to load the results into an array as in
this next example.
Code 8.11 - dbms_mview_explain_mview.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
create table tab_array_results (
v_array_results sys.explainmvarraytype);
declare
v_mview_ddl varchar2(1000) := 'create materialized view emp_sum
tablespace tbs_data
refresh on demand
enable query rewrite
as
select
department_id,
job_id,
sum(salary)
from
emp
group by
department_id,
job_id';
v_mv_array sys.ExplainMVArrayType;
begin
dbms_mview.explain_mview(
mv => v_mview_ddl,
msg_array => v_mv_array);
insert into
tab_array_results
values
(v_mv_array);
  commit;
end;
/
Now simply query the tab_array_results table to find information regarding the
materialized view specified in this example.
Code 8.12 - dbms_mview_explain_rewrite.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
select
statement_id,
mv_owner,
mv_name,
message,
pass,
mv_in_msg,
original_cost,
rewritten_cost
from
rewrite_table
where
pass='yes';
STATEMENT_ID             MV_OW MV_NAME MESSAGE                                  PAS MV_IN_MS ORIGINAL_COST REWRITTEN_COST
My_First_Explain_Rewrite pkg   emp_sum QSM-01209: query rewritten with material yes emp_sum              3              2
                                       ized view, emp_sum, using text match alg
                                       orithm
Oracle Database automatically purges log data from materialized view logs, although
in some situations the snapshot log may grow to a huge size. The most common
scenario where this problem happens is when the master table has more than one
snapshot. If one of the snapshots is not configured with an automatic refresh, the
log size may become unwieldy.
The last parameter, flag, is used to make sure that rows will be deleted from the
materialized view log for at least one materialized view.
Code 8.13 - dbms_mview_purge_log.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
  dbms_mview.purge_log(
    master => 'tab_master',
    num    => 3,
    flag   => 'delete');
end;
/
Commonly used in data warehouse environments, the following procedure will purge
data from the direct loader log once they are no longer needed by any materialized
view.
Code 8.14 - dbms_mview_purge_direct_load_log.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
  dbms_mview.purge_direct_load_log;
end;
/
This next procedure will purge only rows from a specified materialized view log. It
must be executed on the master site. Use the dba_registered_mviews view for the
information needed to perform this procedure.
Code 8.15 - dbms_mview_purge_mview_from_log.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
select
owner,
name,
mview_site
from
dba_registered_mviews;
begin
dbms_mview.purge_mview_from_log(
    mviewowner => 'pkg',
    mviewname  => 'cal_month_sales_mv',
    mviewsite  => '');
end;
/
To refresh a list of materialized views all at once, use the dbms_mview.refresh procedure.
The next example will refresh two materialized views from different users while using
different refresh methods; complete and fast refresh in this instance.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
  dbms_mview.refresh(
    list                 => 'pkg.cb$times_dim_dl_cal_rollup,
                             sh.fweek_pscat_sales_mv',
    method               => 'cf',
    refresh_after_errors => TRUE,
    purge_option         => 2,
    parallelism          => 2,
    atomic_refresh       => FALSE,
    nested               => TRUE);
end;
/
If there is a site with a large number of materialized views and all stale materialized
views need to be refreshed, use the dbms_mview.refresh_all procedure from the next
example:
set serveroutput on
declare
v_failures binary_integer;
begin
  dbms_mview.refresh_all_mviews(
    number_of_failures   => v_failures,
    method               => 'cf',
    refresh_after_errors => TRUE);
  dbms_output.put_line(
    a => 'Failures on this refresh:'||v_failures);
end;
/
set serveroutput on
declare
v_failures binary_integer;
begin
  dbms_mview.refresh_dependent(
    number_of_failures => v_failures,
    list               => 'sh.sales,
                           pkg.test,
                           pkg.time',
    method             => 'c',
    nested             => TRUE);
  dbms_output.put_line(
    a => 'Failures on this refresh:'||v_failures);
end;
/
There is also an internal procedure named dbms_mview.refresh_mv. Its utilization is not
recommended unless specifically requested by Oracle support.
begin
  dbms_mview.refresh_mv(
    pipename       => 'my_pipe_example',
    mv_index       => 2,
    owner          => 'pkg',
    name           => 'mv_name',
    method         => 'c',
    rollseg        => NULL,
    atomic_refresh => NULL,
    env            => NULL,
    resources      => NULL);
end;
/
Materialized views are registered automatically by Oracle upon their creation. If, for
some reason, an error is received during the registration process, register or unregister
a materialized view or snapshot using the following procedure:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Register mview
begin
  dbms_mview.register_mview(
    mviewowner => 'pkg',
    mviewname  => 'mview_name',
    mviewsite  => 'dbms',
    mview_id   => sysdate,
    flag       => dbms_mview.reg_rowid_mview,
    qry_txt    => 'select department_id, job_id, sum(salary) from emp group by
department_id, job_id');
end;
/
--Unregister mview
begin
  dbms_mview.unregister_mview(
    mviewowner => 'pkg',
    mviewname  => 'mview_name',
    mviewsite  => 'dbms');
end;
/
Keep in mind that the processes of register/unregister shown above are not normally
necessary.
Package dbms_olap
Oracle provides advisory functions in the dbms_olap package if there is doubt about
which materialized views to create. These functions help in designing and evaluating
materialized views for query rewrite. This package is still supported; however, most of
its functionality was improved by other packages, so if a new application is being
developed, it is recommended that those other packages be used instead.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
  dbms_olap.validate_dimension(
    dimension_name  => 'my_view_name',
    dimension_owner => 'pkg',
incremental => TRUE,
check_nulls => TRUE);
end;
/
declare
  v_rows number;
v_bytes number;
begin
  dbms_olap.estimate_mview_size(
    stmt_id       => 'Test Estimate',
    select_clause => 'select
                        sum(sal),
                        emp_group
                      from
                        emp
                      group by
                        emp_group',
num_rows => v_rows,
num_bytes => v_bytes);
  dbms_output.put_line(a => 'Number of rows:'||v_rows);
  dbms_output.put_line(a => 'Number of bytes:'||v_bytes);
end;
/
The set_logfile_name procedure is used to rename the default refresh.log file when
refreshing a materialized view.
begin
dbms_olap.set_logfile_name(
filename => '/tmp/my_mview_Refresh_Logfile.log');
  dbms_mview.refresh('my_mview');  -- materialized view name assumed for illustration
commit;
end;
/
As previously mentioned, dbms_olap was divided between different packages, and thus,
dbms_olap has become a package with very few procedures.
Now a complete example will be shown in which a group will be created, materialized
views added and removed to and from the group, how the group is refreshed and
lastly, how to drop the group. First, create the materialized views that will be used in
this example:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
create table
tab_dbms_refresh
tablespace
tbs_data as
select
*
from
sys.source$;
alter table
tab_dbms_refresh
add constraint
pk_tab_dbms_refresh primary key (obj#, line);
select
owner,
mview_name
from
dba_mviews
where
mview_name like 'my_view%';
create materialized
view my_view_1
tablespace tbs_data
refresh complete
as
select
s.obj# ,
count(s.line)
from
tab_dbms_refresh s
group by
s.obj#;
create materialized
view my_view_2
tablespace tbs_data
refresh complete
as
select
  s.obj#,
count(s.line)
from
tab_dbms_refresh s
group by
  s.obj#;
create materialized
view my_view_3
tablespace tbs_data
refresh complete
as
select
s.obj# ,
count(s.line)
from
tab_dbms_refresh s
group by
  s.obj#;
begin
  dbms_refresh.make(
    name      => 'my_refresh_group_1',
    list      => 'my_view_1',
    next_date => sysdate,
    interval  => 'sysdate + 1');  -- interval value assumed for illustration
end;
/
Verify that the materialized view group was created and which materialized views
belong to it by querying the dba_snapshots and dba_refresh views. Alternately, the
sys.rgroup$ table and dba_rgroup view can be used.
select
r.rname "refresh group",
sn.name "mview name"
from
dba_snapshots sn,
dba_refresh r
where
sn.refresh_group = r.refgroup;
select
*
from
sys.rgroup$;
select
*
from
dba_rgroup;
Suppose that we want to add more materialized views to the group created above. To
do so, we use the dbms_refresb.add procedure. This will add the other two materialized
views created in the group above. Now check the group again:
begin
  dbms_refresh.add(
    name => 'my_refresh_group_1',
    list => 'my_view_2,my_view_3');
commit;
end;
/
select
  r.rname "refresh group",
  sn.name "mview name"
from
  dba_snapshots sn,
  dba_refresh r
where
  sn.refresh_group = r.refgroup;

my_refresh_group_1 my_view_1
my_refresh_group_1 my_view_2
my_refresh_group_1 my_view_3
Next, use the dbms_refresh.change procedure to change the refresh interval of a specified
group. First, check the existing interval. Then make the change and check the interval
values again.
select
r.rname "refresh group",
sn.name "mview Name" ,
r.interval "interval"
from
dba_snapshots sn,
dba_refresh r
where
sn.refresh_group = r.refgroup;
begin
dbms_refresh.change(
    name => 'my_refresh_group_1',
next_date => sysdate,
interval => 'systimestamp +1/12',
parallelism => 4);
  commit;
end;
/
col "interval" for a20
col "mview name" for a20
col "refresh group" for a20
select
r.rname "refresh group",
  sn.name "mview name",
r.interval "interval"
from
dba_snapshots sn,
dba_refresh r
where
sn.refresh_group = r.refgroup;
In order to refresh the entire group, use the dbms_refresh.refresh procedure as follows:
begin
  dbms_refresh.refresh(name => 'my_refresh_group_1');
end;
/
select
r.rname "refresh group",
sn.name "mview name" ,
r.interval "interval"
from
dba_snapshots sn,
dba_refresh r
where
sn.refresh_group = r.refgroup;
begin
dbms_refresh.subtract(
    name => 'my_refresh_group_1',
    list => 'my_view_1');
commit;
end;
/
begin
  dbms_refresh.destroy(name => 'my_refresh_group_1');
commit;
end;
/
Summary
In this chapter, packages pertaining to data warehousing were explained and
exemplified. How to use the most important procedures and functions of packages
such as dbms_advanced_rewrite, dbms_cube, dbms_cube_advise, dbms_data_mining,
dbms_data_mining_transform, dbms_dimension, dbms_mview, dbms_olap and dbms_refresh was
illustrated.
In the next chapter, packages that are used to manage Real Application Cluster
databases will be introduced.
This chapter will examine two packages that are relevant to Real Application
Clusters and distributed transactions. They are useful in the day-to-day work of a
RAC database administrator.
Package dbms_service
There are at least three known utilities that can be used to manage services in an
Oracle RAC environment: Database Configuration Assistant (DBCA), Server Control
(SRVCTL) and the dbms_service package. Like the first two utilities listed, the
dbms_service package can administer services but, unlike DBCA and srvctl, dbms_service
works with one node at a time rather than all nodes in a cluster. In the first example,
how to create, delete, start, and stop services in RAC and single instance databases
will be shown.
It is important to note that some of the main procedures of this package are
deprecated in Oracle 11g Release 2 for RAC database instances. This is because the
dbms_service package will not make any updates in the Cluster Ready Services (CRS)
attributes that are necessary in this version; the service control (srvctl) utility should be
used instead.
Imagine that we have a Real Application Cluster database with three instances. The
plan is to create three services obeying the following rules:
■ srv_prod. This service is used by production users. It always connects to the first
instance of RAC.
■ srv_cust. This service is used by certain customers who only need to execute select
queries on the database.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Creating services
begin
dbms_service.create_service(
service_name => 'srv_prod',
network_name => 'dbms1',
goal => dbms_service.goal_throughput,
dtp => FALSE,
aq_ha_notifications => NULL,
failover_method => dbms_service.failover_method_basic,
failover_type => dbms_service.failover_type_session,
failover_retries => 3,
failover_delay => 5,
clb_goal => dbms_service.clb_goal_short);
end;
/
begin
dbms_service.create_service(
service_name => 'srv_cust',
network_name => 'dbms2',
goal => dbms_service.goal_none,
dtp => FALSE,
aq_ha_notifications => NULL,
failover_method => dbms_service.failover_method_none,
failover_type => dbms_service.failover_type_select,
failover_retries => 2,
failover_delay => 10,
clb_goal => dbms_service.clb_goal_long);
end;
/
begin
dbms_service.create_service(
service_name => 'srv_low',
network_name => 'dbms3',
goal => dbms_service.goal_none,
dtp => TRUE,
aq_ha_notifications => NULL,
failover_method => dbms_service.failover_method_none,
failover_type => dbms_service.failover_type_select,
failover_retries => 2,
failover_delay => 20,
clb_goal => NULL);
end;
/
select
name,
network_name,
failover_method,
failover_type,
failover_retries,
goal,
clb_goal
from
dba_services
where name like 'srv%';
begin
dbms_service.modify_service(
service_name => 'srv_low',
goal => dbms_service.goal_none,
dtp => FALSE);
end;
/
begin
dbms_service.delete_service(
service_name => 'srv_low');
end;
/
select
name,
network_name
from
v$active_services
where name like 'srv%';
srv_cust dbms2
srv_prod dbms1
srv_low  dbms3
begin
dbms_service.stop_service(service_name => 'srv_prod',instance_name =>
NULL);
end;
/
--Stopping a service in a specified instance
begin
dbms_service.stop_service(service_name => 'srv_low',instance_name =>
'dbms2');
end;
/
--Stopping a service in all RAC instances
begin
dbms_service.stop_service(service_name => 'srv_low',instance_name =>
dbms_service.all_instances);
end;
/
The next and last example of this package shows the disconnect_session procedure. It can
be used to disconnect all sessions of a specific service from all instances. There are
two options when using this procedure: disconnect immediately or after the session
transactions have finished.
Disconnecting all sessions of the srv_cust service from all instances after session
transactions have finished:
begin
dbms_service.disconnect_session(
service_name => 'srv_cust',
disconnect_option => 0);
end;
/
Disconnecting all sessions of the srv_cust service from all instances immediately:
begin
dbms_service.disconnect_session(
service_name => 'srv_cust',
disconnect_option => 1);
end;
/
Here are some useful views for getting information pertaining to Oracle Services:
■ gv$active_services: Shows the active services in a database
■ gv$services: Shows information about services in a database
■ v$service_wait_class: Shows the wait information for each service
■ dba_services: Shows all services in a database
■ dba_hist_service_wait_class: Shows historical information about the wait event class
for each service that was tracked by the Workload Manager
■ dba_hist_service_name: Shows historical information about services tracked by the
Workload Manager
■ dba_hist_service_stat: Displays historical information about service statistics tracked
by the Workload Manager
By the same token, other Oracle utilities can be used to manage services as mentioned
before. Oracle Enterprise Manager Database or Grid Control offers easy access for
managing services. SRVCTL is a little more complicated, but a reliable method for
working with Oracle Services.
Package dbms_xa
IT environments are becoming more and more complex. With multiple databases and
applications communicating together, it can be difficult to manage without the proper
resources.
One good example is when there are transactions traveling between different
applications which must be committed to different databases, like BPEL, Oracle E-
Business Suite, Oracle Transportation Manager and/or Oracle Retail. Oracle XA
provides an external interface used to interact with a transaction manager outside the
database. It is used for committing transactions of different databases while
maintaining data integrity. These database transactions are also known as distributed
transaction processing (DTP).
The dbms_xa package provides an interface for working with XA via the PL/SQL
language. Certain privileges are necessary for a user to execute particular XA tasks:
1. To execute xa_recover, the user must have select privileges on the
dba_pending_transactions view.
2. To manipulate XA transactions created by other users, the force any transaction
privilege is required.
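For example, the privileges above could be granted as follows; usr_xa is a hypothetical user name used only for illustration:

```sql
--Allow usr_xa to run xa_recover
grant select on dba_pending_transactions to usr_xa;
--Allow usr_xa to manipulate XA transactions created by other users
grant force any transaction to usr_xa;
```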
In the next example, how to create a distributed transaction in one session and
commit it from another session by using the dbms_xa package will be shown.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
insert into
tab_dbms_xa values (1);
commit;
select
*
from
tab_dbms_xa;
Now an XA transaction is created in the first session. This session will do an update
and add two insertions on the test table, but will not commit. Note that the session
timeout is increased, so there is more time to work with this sample.
set serveroutput on
declare
v_my_transaction pls_integer;
v_xa_xid dbms_xa_xid := dbms_xa_xid(3322);
v_xa_exception exception;
v_ora_error pls_integer;
begin
Notice that the xa_start function is used to associate the current session with a
transaction and that the tmnoflags constant is set to inform that it is a new transaction.
v_my_transaction := dbms_xa.xa_start(
xid => v_xa_xid,
flag => dbms_xa.tmnoflags);
--DML Operations
update
tab_dbms_xa
set
col1=11,
col2='Value updated on Session 1.'
where
col1=1;
insert into
tab_dbms_xa
values (2,'Value inserted on Session 1.');
insert into
tab_dbms_xa
values (3,'Value inserted on Session 1.');
Suspending a transaction is done using xa_end and tmsuspend. This enables the
transaction to be caught later by another session.
v_my_transaction := dbms_xa.xa_end(
xid => v_xa_xid,
flag => dbms_xa.tmsuspend);
--Check if XA transaction is OK
if v_my_transaction <> dbms_xa.xa_ok then
v_ora_error := dbms_xa.xa_getlastoer();
dbms_output.put_line(a => 'Attention! Oracle Error - ORA-' || v_ora_error
|| ' obtained. XA Process failed!');
raise v_xa_exception;
else dbms_output.put_line(a => 'XA Process ID 3322 is working!');
end if;
exception
when others then
dbms_output.put_line(a => 'An XA problem occurred. Please check the error
('||v_my_transaction||') and try again. This transaction will be rolled back
now!');
v_my_transaction := dbms_xa.xa_end(xid => v_xa_xid,flag =>
dbms_xa.tmsuccess);
v_my_transaction := dbms_xa.xa_rollback(xid => v_xa_xid);
end;
/
select
state,
flags,
coupling
from
gv$global_transaction
where
globalid = dbms_xa_xid(3322).gtrid;
Here, the xa_start function is used with the tmresume constant to join an existing
transaction; in this case, the transaction with XID 3322.
set serveroutput on
declare
v_my_transaction pls_integer;
v_xa_xid dbms_xa_xid := dbms_xa_xid(3322);
v_xa_exception exception;
v_ora_error pls_integer;
begin
v_my_transaction := dbms_xa.xa_start(
xid => v_xa_xid,
flag => dbms_xa.tmresume);
--Check if XA transaction is OK
if v_my_transaction <> dbms_xa.xa_ok then
v_ora_error := dbms_xa.xa_getlastoer();
dbms_output.put_line(a => 'Attention! Oracle Error - ORA-' ||
v_ora_error || ' obtained. XA Process failed!');
raise v_xa_exception;
else
dbms_output.put_line(
a => 'XA Process ID 3322 started - ####### Step 1 ##########');
end if;
v_my_transaction := dbms_xa.xa_end(
xid => v_xa_xid,
flag => dbms_xa.tmsuccess);
--Check if XA transaction is OK
if v_my_transaction <> dbms_xa.xa_ok then
v_ora_error := dbms_xa.xa_getlastoer();
dbms_output.put_line(a => 'Attention! Oracle Error - ORA-' ||
v_ora_error || ' obtained. XA Process failed!');
raise v_xa_exception;
else
dbms_output.put_line(a => 'XA Process ID 3322 started - ####### Step 2
##########');
end if;
exception
when others then
dbms_output.put_line(a => 'An XA problem occurred. Please check the error
(' ||
v_my_transaction||') and try again. This
transaction will be rolled back now!');
v_my_transaction := dbms_xa.xa_end(
xid => v_xa_xid,
flag => dbms_xa.tmsuccess);
v_my_transaction := dbms_xa.xa_rollback(
xid => v_xa_xid);
raise_application_error(-20002, 'ORA-'||v_ora_error||
' Transaction was rolled back successfully!');
end;
/
At this point, if the tab_dbms_xa table is checked, it will not show the lines
created or updated yet. This is because no commit has been made. The commit will be
made in the third session, shown next.
declare
v_my_transaction pls_integer;
v_xa_xid dbms_xa_xid := dbms_xa_xid(3322);
v_xa_exception exception;
v_ora_error pls_integer;
begin
--Join the suspended transaction and commit it in one phase
v_my_transaction := dbms_xa.xa_commit(
xid => v_xa_xid,
onephase => TRUE);
--Check if XA transaction is OK
if v_my_transaction <> dbms_xa.xa_ok then
v_ora_error := dbms_xa.xa_getlastoer();
dbms_output.put_line(a => 'Attention! Oracle Error - ORA-' ||
v_ora_error || ' obtained. XA commit Process
failed!');
raise v_xa_exception;
else dbms_output.put_line(a => 'XA Process ID is working and was committed
successfully!');
end if;
exception
when others then
dbms_output.put_line(a => 'An XA problem occurred. Please check the error
(' ||
v_my_transaction||') and try again. This
transaction will be rolled back now!');
v_my_transaction := dbms_xa.xa_end(xid => v_xa_xid,flag =>
dbms_xa.tmsuccess);
v_my_transaction := dbms_xa.xa_rollback(xid => v_xa_xid);
raise_application_error(-20002, 'ORA-'||v_ora_error||
' Transaction was rolled back successfully!');
end;
/
Lastly, check the tab_dbms_xa table to see the results created in the first session and
committed in the third session.
select
*
from
tab_dbms_xa;
COL1       COL2
If an error occurs when using distributed transactions, like the loss of the network
connection, transactions may become lost. They can be found in views like
dba_2pc_pending, dba_2pc_neighbors, dba_pending_transactions or v$global_transaction. When
these lost transactions need to be purged, use a script like the one below:
select
'commit force '''||local_tran_id||''';'
from
dba_2pc_pending;
select
'exec dbms_transaction.purge_lost_db_entry('''||local_tran_id||''');'
from
dba_2pc_pending
where
state='forced commit';
This will find all pending transactions, commit and then purge them if they are still
hanging. Make sure to commit after purging the transaction.
Summary
Good things come in small packages, and the packages in this chapter are no
exception. Despite there only being two, their importance should not be
underestimated. To learn the concept and manage services is one of the main duties
of a DBA nowadays thanks to increasing Real Application Cluster environments that
demand this knowledge. Also, the way that Oracle l l g works with distributed
transactions has been ameliorated by the creation of the dbms_xa package, as
demonstrated in this chapter.
As with this chapter, the next one is small but equally as important, pertaining to
packages of the Oracle Data Guard feature.
Some companies are using the new Oracle Extended Real Application Cluster and
ASM of Oracle 11g as a disaster recovery solution, commonly known as Geo-Cluster.
This is only advisable when sites are in close proximity to each other. For companies
whose standby sites are located further than 100 kilometers (about 60 miles) away, it is
advisable to use Oracle Data Guard. In Oracle Database version 11g R2, up to 30
standby databases can be supported.
This chapter will present two packages: dbms_dg, which is used to instruct the Data
Guard Broker to initiate a failover, and dbms_logstdby, which is used to manage a logical
standby database.
Package dbms_dg
Introduced in Oracle 11g, the dbms_dg.initiate_fs_failover procedure provides a new
method used to initiate a Fast-Start Failover based upon the requirement of specific
conditions. This package is housed within the Data Guard Broker, which holds all
services pertaining to the monitoring of the primary database. The Data Guard
Broker initiates the failover to a standby database in the case of any planned or
unplanned outage.
The following example will show how a scenario would play out when Data Guard
Broker is working within the Disaster Recovery scheme.
So assume that we already have a Data Guard Broker configuration with db_orig as the
primary database and db_targ as the standby database. The first step needed in order
for this package to work is to enable fast-start failover. It is done with a single step as
follows:
Now, choose the protection mode to be used and set this with the command example
below:
The failover process will begin once the time specified by the faststartfailoverthreshold
parameter is reached. In this example, the time used is 45 seconds.
Now use the following command to check the fast-start failover environment:
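The broker command listings are not reproduced in the original; a DGMGRL sketch of the steps described above (set the protection mode, enable fast-start failover with a 45-second threshold, then verify the environment) might look like this:

```
DGMGRL> edit configuration set protection mode as maxavailability;
DGMGRL> edit configuration set property FastStartFailoverThreshold = 45;
DGMGRL> enable fast_start failover;
DGMGRL> show fast_start failover;
```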
set serveroutput on
declare
v_status integer;
v_threshold number; --This value could come from a table that records
application timeout values, for example.
begin
--Choose when the failover will happen here
if v_threshold > 100 then
v_status := dbms_dg.initiate_fs_failover(condstr => 'Failover
Requested');
end if;
end;
/
There are some optional database initialization parameters that can be used to
configure Data Guard. They are described here:
■ faststartfailoverpmyshutdown: If set to TRUE, the primary database will be
shut down if the value of the v$database.fs_failover_status column is stalled for a time
longer than the value specified in the faststartfailoverthreshold property. Its default
value is TRUE.
■ faststartfailoverlaglimit: This parameter specifies a maximum lag time in seconds that the
standby is allowed to fall behind the primary in terms of redo applied. Beyond
this time, a fast-start failover will not be allowed. The minimum value is 10 seconds.
■ faststartfailoverautoreinstate: If it is not necessary to reinstate the primary database
after a fast-start failover, set this parameter to FALSE.
■ observerconnectidentifier: It is possible to change the connection identifier being used
by the Data Guard Broker observer by changing this parameter.
Package dbms_logstdby
The logstdby_administrator role, created with Oracle 9i, is granted to users who will be
managing logical standby databases. The next example shows how and
when to use this package when administering a Disaster Recovery database; we will be
using Oracle Data Guard with a logical standby database in this scenario.
Use the apply_set procedure to change the parameter values of SQL Apply in a logical
standby database. In this example, the number of applier servers will be changed to
90.
Also, the option to delete the archived redo log files if they have already been applied
on the logical standby database will be enabled by setting the log_auto_delete parameter
to TRUE.
Connected t o :
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
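The original listing is not reproduced here; a minimal sketch of the two apply_set calls described above could look like this (values are passed as strings, per the apply_set signature):

```sql
begin
   --Change the number of applier servers to 90
   dbms_logstdby.apply_set(name => 'apply_servers', value => '90');
   --Delete archived redo log files once they have been applied
   dbms_logstdby.apply_set(name => 'log_auto_delete', value => 'TRUE');
end;
/
```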
Information about when the value was actually modified can be found by querying
the dba_logstdby_events view:
select
event_time,
status
from
dba_logstdby_events
/
EVENT_TIME                    STATUS
In order to restore a parameter's default value, use the apply_unset procedure:
begin
dbms_logstdby.apply_unset(name => 'apply_servers');
end;
/
--Check parameter values on dba_logstdby_parameters
select
name,
value
from
dba_logstdby_parameters
/
To build the Log Miner dictionary, connect to Oracle as sysdba, open the database
and put the database in a quiesce state. Then the dictionary can be created using the
following statement:
--To build the Log Miner dictionary and record supplemental metadata that will be
used by the SQL Apply process, use the build procedure
--First open the database and quiesce it
alter database open;
alter database quiesce restricted;
begin
dbms_logstdby.build;
end;
/
The instantiate_table procedure is used to refresh a logical standby database table with
values from the primary database. It will drop the table into the logical standby
database if it already exists. Check if the table to be instantiated has any skip
configured in it:
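The check itself is not shown in the original listing; a sketch of such a query against dba_logstdby_skip (the exact column names may vary by release) could be:

```sql
--Sketch: look for skip rules registered for the jobs table
select error, statement_opt, owner, name
from dba_logstdby_skip
where owner = 'PKG' and name = 'JOBS';
```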
If there is skip information on the table, remove it by using the unskip procedure as
follows:
begin
dbms_logstdby.unskip(
stmt => 'schema_ddl',
schema_name => 'pkg',
object_name => 'jobs');
end;
/
Finally, execute the instantiate_table procedure using the database link pointing to the
logical standby database.
begin
dbms_logstdby.instantiate_table(schema_name => 'pkg',table_name =>
'jobs',dblink => 'db_disre');
end;
/
Sometimes it may be necessary to find out which archived redo log files were already
applied in a logical standby database and can be removed because they are no longer
needed by the SQL Apply process. To perform this task, on the logical standby use
the purge_session procedure.
begin
dbms_logstdby.purge_session;
end;
/
select
*
from
dba_logmnr_purged_log
/
After finding the archives, delete them at the OS level because this procedure will not
do it automatically.
If a logical standby database has become the primary database, and additional
logical standby databases need to keep working correctly, run the rebuild
procedure. This will record important metadata information required for other logical
standby databases. It can be run as follows:
begin
dbms_logstdby.rebuild;
end;
/
My next example will demonstrate how to use the skip procedure. When configured,
SQL Apply replicates all data executed on a primary database to a standby database
except data that is marked to be skipped. There are many possible ways to specify
which data to skip. For example, it is possible to prevent a certain table from being
replicated using the skip procedure. We can also direct SQL Apply to not replicate
any DDL commands made in a certain schema. Here are examples of both:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
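The two skip calls described above are not reproduced in the original listing; a sketch, reusing the pkg schema from the earlier examples, might be:

```sql
begin
   --Prevent a single table from being replicated by SQL Apply
   dbms_logstdby.skip(
      stmt => 'DML',
      schema_name => 'pkg',
      object_name => 'jobs');
   --Do not replicate any DDL issued against the schema
   dbms_logstdby.skip(
      stmt => 'SCHEMA_DDL',
      schema_name => 'pkg',
      object_name => '%');
end;
/
```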
Despite this, we can achieve a similar functionality with the next example. In order to
force datafiles to be created in another directory, we use the following procedure:
Code 10.4 - dbms_logstdby_replace_dfs.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
v_action := dbms_logstdby.skip_action_replace;
exception
when others then
v_action := dbms_logstdby.skip_action_error; --Raise an error that
halts SQL Apply process
v_new_statement := NULL;
end proc_replace_dfs_location;
begin
dbms_logstdby.skip (
stmt => 'create tablespace',
proc_name => 'proc_replace_dfs_location');
end;
/
Now every time a tablespace is created in the primary database with its datafiles in
the '/u02/oradata/primdb' directory, they will be placed in the '/u03/oradata/stdbydb'
directory of the standby database.
Chapter 11: Oracle Streams
By design, Oracle RAC has always had a latency issue when the RAC nodes are
geographically distributed. In these cases, many shops use Oracle Streams for back-
and-forth replication. For most configurations, Oracle Streams is cheaper in terms of
licensing costs than RAC, and Oracle Streams allows for back-and-forth replication
between two active production servers.
Streams replication works by using queues that store Data Definition Language and
Data Manipulation Language changes. These changes are then propagated to the target
database. It is also possible to create rules that manipulate data before it is
propagated to the destination target.
Despite being an advanced replication tool, Oracle Streams is not the best choice for
a Disaster Recovery solution. In some cases, such as using complex applications like
Oracle E-Business, SAP, JD Edwards and PeopleSoft, it is not easy or even possible
to keep Streams working for this purpose. In these cases, the best High Availability
and Disaster Recovery solution is still Oracle Data Guard with Oracle Real
Application Clusters.
Package dbms_apply_adm
This package was originally introduced with Oracle Database 9i and is one of the
packages used to create and manage a Streams environment. It is triggered by the
$?/rdbms/admin/dbmsapp.sql script that is called by catproc.sql after the database is
created. Basically, Oracle Streams are comprised of three main processes: the capture,
propagation and apply processes. The dbms_apply_adm package can be used, among other
things, to create, alter, drop and change parameters of the apply processes.
There are some parameters that can be changed by the set_parameter procedure. Each
apply process can be modified in a way that will allow it to work properly. Next, we
will explain some procedures of dbms_apply_adm in more detail. To run these next
examples, a working Streams configuration is needed; one will be created first.
In order to check the recommended configuration for Oracle Streams, take a look at
Oracle Support Note, "10gR2 Streams Recommended Configuration [ID 418755.1]".
The first step is to create a simple table replication configuration between two
databases, sourcedb and targetdb.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Sourcedb (sourcedb)
conn / as sysdba
create user
str_source
identified by 123;
alter user
str_source
default tablespace users
temporary tablespace temp
quota unlimited on users;
grant
connect,
resource,
aq_administrator_role,
dba
to str_source;
begin
dbms_streams_auth.grant_admin_privilege('str_source');
end;
/
commit;
--Target DB (targetdb)
create user
str_target
identified by 123;
alter user
str_target
default tablespace users
temporary tablespace temp
quota unlimited on users;
grant
connect,
resource,
aq_administrator_role,
dba
to str_target;
begin
dbms_streams_auth.grant_admin_privilege('str_target');
end;
/
commit;
connect str_source/123@sourcedb
begin
dbms_streams_adm.set_up_queue(
queue_table => 'streams_queue_table',
queue_name => 'streams_queue',
queue_user => 'str_source');
end;
/
--Create database links on source database
conn sys/manager@sourcedb as sysdba
create public database link
targetdb using 'targetdb';
conn str_source/123@sourcedb
create database link targetdb
connect to str_target identified by "123"
using 'targetdb';
conn system/manager@targetdb
create public database link
sourcedb using 'sourcedb';
Now the schema rules need to be created in the apply process of the target database.
These rules will be used to configure what will be replicated. In this case, every DDL
and DML command will be replicated to the target database in the apply process.
--Adding schema rules on target database. The schema str_schema will be used
in this example.
conn str_target/123@targetdb
begin
dbms_streams_adm.add_schema_rules(
schema_name => 'str_schema',
streams_type => 'apply',
streams_name => 'stream_apply',
queue_name => 'str_target.streams_queue',
include_dml => TRUE,
include_ddl => TRUE);
end;
/
Next, the capture process and propagation rules are created on the source database as
follows:
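The original listing for this step is not reproduced; a sketch of the capture and propagation rules, mirroring the apply-side call above (stream_capture and stream_propagate are illustrative process names), might be:

```sql
conn str_source/123@sourcedb
begin
   --Capture every DML and DDL change made in the str_schema schema
   dbms_streams_adm.add_schema_rules(
      schema_name => 'str_schema',
      streams_type => 'capture',
      streams_name => 'stream_capture',
      queue_name => 'str_source.streams_queue',
      include_dml => TRUE,
      include_ddl => TRUE);
   --Propagate the captured changes to the queue on the target database
   dbms_streams_adm.add_schema_propagation_rules(
      schema_name => 'str_schema',
      streams_name => 'stream_propagate',
      source_queue_name => 'str_source.streams_queue',
      destination_queue_name => 'str_target.streams_queue@targetdb',
      include_dml => TRUE,
      include_ddl => TRUE,
      source_database => 'sourcedb');
end;
/
```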
Before starting the apply process, tables from the source database must exist on the
target database. If they already exist, just instantiate the objects as follows:
conn str_source/123@sourcedb
set serveroutput on
declare
v_inst_scn number; -- variable to hold the instantiation SCN value
begin
v_inst_scn := dbms_flashback.get_system_change_number();
dbms_output.put_line(a => 'The instantiation number is: ' ||
v_inst_scn);
end;
/
The instantiation number is: 9336013
connect str_target/123@targetdb
begin
dbms_apply_adm.set_schema_instantiation_scn(
source_schema_name => 'str_schema',
source_database_name => 'sourcedb',
recursive => TRUE,
instantiation_scn => 9336013 );
end;
/
With the set_parameter procedure, it is possible to change many apply parameters. Now
set the disable_on_error parameter to a value of 'n', so the apply process will not stop if an
apply error happens.
conn str_target/123@targetdb
begin
dbms_apply_adm.set_parameter(
apply_name => 'stream_apply',
parameter => 'disable_on_error',
value => 'n');
end;
/
Now the capture and apply processes can be started on the source and target database
by using the following commands:
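A sketch of the start calls, assuming a capture process named stream_capture (an illustrative name) and the stream_apply process configured above:

```sql
--On the source database
begin
   dbms_capture_adm.start_capture(capture_name => 'stream_capture');
end;
/
--On the target database
begin
   dbms_apply_adm.start_apply(apply_name => 'stream_apply');
end;
/
```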
With the Streams configuration finished, run some tests by executing the DDL and
DML commands and check the replication.
select
count(*)
from
tab_jobs;
conn str_schema/123@targetdb
select
count(*)
from
str_schema.tab_j obs;
Check for capture and propagation processes by querying the dba_capture and
dba_propagation views in the source database. Check for apply processes by querying
dba_apply in the target database.
Now that there is an Oracle Streams environment, continue to work with some
dbms_apply_adm procedures. Use the create_apply procedure to create an apply process in
the target database. Before creating the apply process, make sure to create the queue
table that will be used with the create_apply procedure.
Code 11.2 - dbms_apply_create_apply.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
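The queue-table creation mentioned above is not shown in the original listing; a sketch mirroring the source-side setup earlier might be:

```sql
--Create the queue table and queue on the target database
begin
   dbms_streams_adm.set_up_queue(
      queue_table => 'streams_queue_table',
      queue_name  => 'streams_queue',
      queue_user  => 'str_target');
end;
/
```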
--Now create the apply process using the queue table created above
begin
dbms_apply_adm.create_apply(
queue_name => 'str_target.streams_queue',
apply_user => 'str_target',
apply_name => 'apply_l',
source_database => 'sourcedb',
apply_database_link => NULL); --This is only used in case of
heterogeneous replication. In that case, it should be the database link to a non-Oracle
database.
end;
/
--Check the apply process by querying the dba_apply view
col apply_name for a15
col queue_name for a15
col queue_owner for a10
col status for a10
select
apply_name,
queue_name,
queue_owner,
status
from
dba_apply;
The apply process just created is still disabled and does not have any rules applied to it
yet, as covered in the previous example. Next, it is shown how and when to utilize some
other procedures of the dbms_apply_adm package.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
apply_name => 'apply$22',
operation_name => 'delete',
change_handler_name => 'str_change_handler_$11');
end;
/
My final example will show how to fix an error in the apply process using procedures
of the dbms_apply_adm package. Suppose there is a table without a primary key in the
replication environment. We then create a primary key on the target database table.
The first duplicated row inserted in the source database table will hang on this
replication process, which can be viewed in the dba_apply_error view.
To create the error, first create a primary key on the target table and then insert a row
in the source table.
Code 11.4 - dbms_apply_duplicated_rows.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
In order to allow duplicated rows in this table, change a parameter for this apply
process:
begin
dbms_apply_adm.set_parameter(
apply_name => 'stream_apply',
parameter => 'allow_duplicate_rows',
value => 'Y');
end;
/
--Drop the primary key on target table
alter table str_schema.tab_jobs
drop constraint pk_id cascade;
Now we can see the duplicated row and also replicate any duplicated rows in this
table and/or apply process.
Once the queue table has been created, the individual queues are created and started
using the create_queue and start_queue procedures, respectively. A single queue table can
hold many queues as long as each queue uses the same type for its payload.
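A sketch of the create/start sequence just described, with illustrative object names (demo_queue_table, demo_msg_type and demo_queue are assumptions, not names from the text):

```sql
begin
   --Create a queue table whose payload is a user-defined object type
   dbms_aqadm.create_queue_table(
      queue_table        => 'demo_queue_table',
      queue_payload_type => 'demo_msg_type');
   --Create a queue inside that table, then start it
   dbms_aqadm.create_queue(
      queue_name  => 'demo_queue',
      queue_table => 'demo_queue_table');
   dbms_aqadm.start_queue(queue_name => 'demo_queue');
end;
/
```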
Messages are queued and dequeued within Oracle Advanced Queuing using the
dbms_aq package. Access to this package can be granted using the aq_user_role role.
However, access to it from a stored procedure is achieved by using the
job_chain_aq_setup.sql script, which grants the privilege on this object directly to the test
user.
There are sets of tools and Oracle Features that make use of queue tables like Oracle
Workflow, Oracle Streams, Oracle Advanced Replication, Oracle Alerts and more. In
the next example, some procedures and functions of dbms_aq and dbms_aqadm will be
shown. Suppose that we want to monitor and receive warnings and critical alerts
when the number of blocked users in a database exceeds five sessions.
The first step is to check the metric ID and then to set the metric thresholds with
custom values.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
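The metric lookup itself is not shown in the original listing; a sketch of such a check against v$metricname (the exact metric name may vary by release) could be:

```sql
--Sketch: find the metric ID for the blocked-users metric
select metric_id, metric_name
from v$metricname
where lower(metric_name) like '%blocked%';
```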
begin
dbms_server_alert.set_threshold(
metrics_id => 9000,
warning_operator => dbms_server_alert.operator_ge,
warning_value => '2',
critical_operator => dbms_server_alert.operator_ge,
critical_value => '5',
observation_period => 1,
consecutive_occurrences => 1,
instance_name => NULL,
object_type => dbms_server_alert.object_type_session,
object_name => NULL);
end;
/
select
*
from
dba_thresholds;
These alerts are stored by default in the alert_qt table in Oracle 11g; in 10g, they are
stored in the alert_que table. The MMON process checks for exceeded threshold limits
and creates a new alert for each one. In order to dequeue the alert generated, it is
necessary to create an agent using the create_aq_agent procedure of the dbms_aqadm
package:
begin
dbms_aqadm.create_aq_agent(
agent_name => 'locked_users_alert');
end;
/
select
agent_name
from
dba_aq_agents;
begin
dbms_aqadm.add_subscriber(
queue_name => 'alert_que',
subscriber => aq$_agent('locked_users_alert', '', 0)
);
end;
/
--or
declare
v_subscriber sys.aq$_agent;
begin
v_subscriber := sys.aq$_agent('subs_alert_lock_2', '', 0);
dbms_aqadm.add_subscriber(
queue_name=>'alert_que',
subscriber => v_subscriber);
end;
/
Now enable database access to the added subscriber by using the enable_db_access
procedure of the dbms_aqadm package:
begin
dbms_aqadm.enable_db_access(
agent_name => 'locked_users_alert',
db_username => 'usr_alert');
end;
/
Check which users have privileges to which agent by querying the dba_aq_agent_privs
view as follows:
select
agent_name,
db_username
from
dba_aq_agent_privs;
Next, grant dequeue privileges to the system user with the grant_queue_privilege
procedure in the dbms_aqadm package.
begin
dbms_aqadm.grant_queue_privilege(
privilege => 'dequeue',
queue_name => 'alert_que',
grantee => 'system',
grant_option => false);
end;
/
set serveroutput on
declare
v_dequeue_options dbms_aq.dequeue_options_t;
v_message_properties dbms_aq.message_properties_t;
v_alert_type_message alert_type;
v_message_handle raw(16);
type msgidtab is table of raw(16) index by binary_integer;
msg_id msgidtab;
dq_msg_id raw(16);
cursor c_msgs_id is
select msg_id
from aq$alert_qt
where msg_state = 'READY';
begin
v_dequeue_options.consumer_name := 'locked_users_alert';
v_dequeue_options.wait := 1;
v_dequeue_options.navigation := dbms_aq.first_message;
dbms_aq.dequeue(queue_name => 'sys.alert_que',
dequeue_options => v_dequeue_options,
message_properties => v_message_properties,
payload => v_alert_type_message,
msgid => v_message_handle);
dbms_output.put_line('alert message dequeued:');
dbms_output.put_line(' timestamp: ' ||
v_alert_type_message.timestamp_originating);
dbms_output.put_line(' message type: ' ||
v_alert_type_message.message_type);
dbms_output.put_line(' message group: ' ||
v_alert_type_message.message_group);
dbms_output.put_line(' message level: ' ||
v_alert_type_message.message_level);
dbms_output.put_line(' host id: ' || v_alert_type_message.host_id);
dbms_output.put_line(' host network addr: ' ||
v_alert_type_message.host_nw_addr);
dbms_output.put_line(' reason: ' ||
dbms_server_alert.expand_message(userenv('language'),
v_alert_type_message.message_id,
v_alert_type_message.reason_argument_1,
v_alert_type_message.reason_argument_2,
v_alert_type_message.reason_argument_3,
v_alert_type_message.reason_argument_4,
v_alert_type_message.reason_argument_5));
dbms_output.put_line(' sequence id: ' ||
v_alert_type_message.sequence_id);
dbms_output.put_line(' reason id: ' || v_alert_type_message.reason_id);
dbms_output.put_line(' object name: ' ||
v_alert_type_message.object_name);
dbms_output.put_line(' object type: ' ||
v_alert_type_message.object_type);
dbms_output.put_line(' instance name: ' ||
v_alert_type_message.instance_name);
dbms_output.put_line(' suggested action: ' ||
dbms_server_alert.expand_message(userenv('language'),
v_alert_type_message.suggested_action_msg_id,
v_alert_type_message.action_argument_1,
v_alert_type_message.action_argument_2,
v_alert_type_message.action_argument_3,
v_alert_type_message.action_argument_4,
v_alert_type_message.action_argument_5));
dbms_output.put_line(' advisor name: ' ||
v_alert_type_message.advisor_name);
dbms_output.put_line(' Scope: ' || v_alert_type_message.scope);
end;
/
Having such a tool to send and receive messages, with methods to queue and track
them, is key to ensuring that mission-critical messages are delivered and processed
even if the communication channel is temporarily disrupted, so it clearly has many
real-life applications. This is just one of many tasks that can be accomplished with
the dbms_aq and dbms_aqadm packages, and it should now be apparent how and when to use
these enqueue and dequeue packages for specific tasks.
Package dbms_capture_adm
One of the main processes of a Streams configuration, the capture process, is
responsible for capturing changes from the redo log files. If the information is not in
the redo log, it can capture changes from the archived log files instead. This package
administers the capture process.
For this example, a capture process will be created, parameters will be changed,
additional information will be added, and then the process will be started, stopped and
dropped.
Code 11.6 - dbms_capture_adm.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
dbms_capture_adm.create_capture(
queue_name => 'str_source.q_capture',
capture_name => 'str_source.capture_1');
commit;
end;
/
--Change the parallelism of a capture process
begin
dbms_capture_adm.set_parameter(
capture_name => 'str_source.capture_1',
parameter => 'parallelism',
value => '4');
end;
/
The next step is to start, stop and finally drop the capture process using the
dbms_capture_adm package.
begin
dbms_capture_adm.start_capture(
capture_name => 'str_source.capture_1');
end;
/
--Check if it is started
select
capture_name,
status
from
dba_capture
/
The next code shows how to use the include_extra_attribute procedure. This instructs a
capture process to include specified additional attributes in the Logical Change Records
(LCRs) beyond those commonly captured, such as row_id, serial#, session#, thread#,
tx_name and username.
This procedure may help in auditing, or simply to filter information that will be used in
an apply process.
The example below shows how to add username as an extra attribute for the LCRs being
captured.
Connected to:
Oracle llg Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
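A minimal sketch of that call, assuming the str_source.capture_1 capture process created earlier in this chapter:

```sql
begin
dbms_capture_adm.include_extra_attribute(
capture_name => 'str_source.capture_1',
attribute_name => 'username',
include => true);
end;
/
```

Setting include to false removes the attribute from future LCRs again.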
Instantiation is the process of marking the point from which replication will begin for a
table. When a table is instantiated, the SCN that will be used by an apply process is
recorded; that apply process will then replicate all changes that occur after this
SCN.
Connected to;
Oracle llg Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
create table
str_source.tab_test_replic(
emp_id number,
first_name varchar2(30),
last_name varchar2(30));
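Before the view below returns any rows, the table must be prepared for instantiation; a sketch of that call for the table just created (the supplemental_logging value mirrors the schema-level example that follows):

```sql
begin
dbms_capture_adm.prepare_table_instantiation(
table_name => 'str_source.tab_test_replic',
supplemental_logging => 'keys');
end;
/
```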
All tables which are prepared for instantiation in a database can be checked by
querying the dba_capture_prepared_tables view as follows:
select
table_owner,
table_name,
timestamp
from
dba_capture_prepared_tables
/
Next, instantiate all tables of a specified schema. This process will also instantiate all
future tables created automatically.
begin
dbms_capture_adm.prepare_schema_instantiation(
schema_name => 'str_source',
supplemental_logging => 'keys');
end;
/
select
schema_name,
timestamp
from
dba_capture_prepared_schemas;
str_source 10/10/2010
begin
dbms_capture_adm.prepare_global_instantiation(
supplemental_logging => 'keys');
end;
/
select
timestamp,
supplemental_log_data_pk,
supplemental_log_data_ui,
supplemental_log_data_fk,
supplemental_log_data_all
from
dba_capture_prepared_database;
TIMESTAMP SUPPLEMENTAL_LOG_DATA_PK SUPPLEMENTAL_LOG_DATA_UI SUPPLEMENTAL_LOG_DATA_FK
SUPPLEMENTAL_LOG_DATA_ALL
Package dbms_aqelm
Oracle Advanced Queue asynchronous notification allows Advanced Queue
subscribers to receive notifications without needing to be connected to the database.
There are four different mechanisms that can be used to send notifications: PL/SQL
Callback procedure, OCI Callback procedure, email notification or by sending
notifications to a specific HTTP address.
Code 11.9 - dbms_aqelm.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
declare
v_mailhost varchar2(100);
v_port number;
v_sendfrom varchar2(100);
begin
dbms_aqelm.get_mailhost(v_mailhost);
dbms_aqelm.get_mailport(v_port);
dbms_aqelm.get_sendfrom(v_sendfrom);
dbms_output.put_line(a => 'Mail host is: '||v_mailhost);
dbms_output.put_line(a => 'Port is: '||v_port);
dbms_output.put_line(a => 'Send from: '||v_sendfrom);
end;
/
--Sending notification using send_email
begin
dbms_aqelm.send_email(
sendto => '[email protected]',
text => 'Testing email from send_email!');
end;
/
--Sending notification to a URL
declare
v_statuscode varchar2(100);
begin
dbms_aqelm.http_send(
url => 'http://10.10.10.1/',
what => 'test sending URL ',
what1 => 10,
status_code => v_statuscode);
dbms_output.put_line(a => 'Status code is: '||v_statuscode);
end;
/
Another frequent AQ task is finding the active subscribers. Check the
subscribers that have been configured to receive notifications by querying the sys.reg$
table.
select
subscription_name,
location_name
from
sys.reg$;
"str_source"."streams_queue":aq$_p@"str_ plsql://sys.dbms_isched.event_notify
target"."streams_queue"@targetdb
Information about the subscription was obtained and can always be checked using
the reg$ table.
Package dbms_hs_passthrough
Oracle Streams is a powerful tool which can be configured in a heterogeneous
environment; for example, with replication between an Oracle database and a SQL
Server database. This package provides the functionality to send a command directly
to a non-Oracle database.
Several SQL commands can be executed against non-Oracle databases: create table,
alter table, select, update, delete and many others. It is also possible to use bind variables
within these commands, as will be shown in the following examples.
In this example, we create a table in a SQL Server database by using a database link
named sqlserverdb configured with a transparent gateway.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
declare
ret integer;
begin
ret := dbms_hs_passthrough.execute_immediate@sqlserverdb(
'create table str_dest.test_table
(id_inst int NOT NULL,
id_role_inst char(2) NOT NULL,
id_direct_status tinyint,
flg_generate_title numeric(1)) ');
end;
/
commit;
--Add primary key on a table on a non-oracle database
declare
In the second example, we will execute a query using bind variables that are specified
by the bind_variable procedure of the dbms_hs_passthrough package. This is one of many
procedures available to deal with bind variables when using this package.
Code 11.11 - dbms_hs_passthrough_variable.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
dbms_hs_passthrough.bind_variable@sqlserver(v_cursor,2,v_bind_first_name);
begin
v_ret:=0;
while (true)
loop
v_ret:=dbms_hs_passthrough.fetch_row@sqlserver(v_cursor,false);
dbms_hs_passthrough.get_value@sqlserver(v_cursor,1,v_id);
dbms_hs_passthrough.get_value@sqlserver
(v_cursor,2,v_first_name);
dbms_hs_passthrough.get_value@sqlserver
(v_cursor,3,v_last_name);
dbms_output.put_line('First Name '||v_first_name);
dbms_output.put_line('Last Name '||v_last_name);
end loop;
exception
when no_data_found then
begin
Note that when querying data from a non-Oracle database, the execute_immediate
procedure cannot be used; instead, use procedures to open and fetch a cursor in order
to get a query value.
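To make that cursor flow explicit, here is a sketch of the full open/parse/fetch/close sequence over the same sqlserver database link; the remote table and column names are hypothetical:

```sql
declare
v_cursor integer;
v_ret integer;
v_id number;
v_first_name varchar2(30);
begin
--Open a passthrough cursor and parse the remote statement
v_cursor := dbms_hs_passthrough.open_cursor@sqlserver;
dbms_hs_passthrough.parse@sqlserver(v_cursor,
'select id, first_name from str_dest.test_emp where id > ?');
dbms_hs_passthrough.bind_variable@sqlserver(v_cursor, 1, 100);
loop
--fetch_row returns 0 when no more rows are available
v_ret := dbms_hs_passthrough.fetch_row@sqlserver(v_cursor, false);
exit when v_ret = 0;
dbms_hs_passthrough.get_value@sqlserver(v_cursor, 1, v_id);
dbms_hs_passthrough.get_value@sqlserver(v_cursor, 2, v_first_name);
dbms_output.put_line(v_id || ' - ' || v_first_name);
end loop;
dbms_hs_passthrough.close_cursor@sqlserver(v_cursor);
end;
/
```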
For the purpose of configuring Messaging Gateway, two packages were created:
dbms_mgwadm and dbms_mgwmsg. The next examples show how to configure Messaging
Gateway to be used in conjunction with WebSphere. The Oracle
Messaging Gateway has two main components: the agent and the dbms_mgwadm
administration package. The first thing to do is to create these packages by running
the catmgw.sql script, which is found in the $ORACLE_HOME/mgw/admin directory.
Next, configure listener.ora and tnsnames.ora in order to start the Messaging Gateway
agent. Some examples of how they should be configured are below:
Code 11.12 - dbms_mgw.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Check your listener.ora and tnsnames.ora and validate if they have the
proper values necessary to start the agent
--tnsnames.ora
mgw_agent=
(description =
(address_list =
(address = (protocol=ipc)(key=extproc)))
(connect_data = (sid=mgwextproc) (presentation=RO)))
agent_service =
(description =
--listener.ora
listener =
(description =
(address = (protocol = tcp) (host = 10.10.10 .113) (port = 1521))
(address = (protocol = ipc)(key =extproc))
)
sid_list_listener =
(sid_list =
(sid_desc =
(global_dbname = dbms)
(ORACLE_HOME = /u01/app/oracle/product/11.2.0/db11g)
(sid_name = dbms)
)
(sid_desc=
(sid_name= mgwextproc)
(envs="ld_library_path=/u01/app/oracle/product/11.2.0/db11g/jdk/jre/lib/i386:/u01/app/oracle/product/11.2.0/db11g/jdk/jre/lib/i386/server:/u01/app/oracle/product/11.2.0/db11g/lib")
(ORACLE_HOME=/u01/app/oracle/product/11.2.0/db11g)
(program = extproc))
)
Check the mgw.ora file that is used by the agent. This file is located in the
$ORACLE_HOME/mgw/admin directory and will look something like the lines
below:
set
ld_library_path=/u01/app/oracle/product/11.2.0/db11g/jdk/jre/lib/i386:/u01/app/oracle/product/11.2.0/db11g/jdk/jre/lib/i386/server:/u01/app/oracle/product/11.2.0/db11g/rdbms/lib:/u01/app/oracle/product/11.2.0/db11g/lib
log_directory=/u01/app/oracle/product/11.2.0/db11g/mgw/log
log_level = 0
set
classpath=/u01/app/oracle/product/11.2.0/db11g/jdbc/lib/ojdbc5.jar:/u01/app/oracle/product/11.2.0/db11g/jdk/jre/lib/i18n.jar:/u01/app/oracle/product/11.2.0/db11g/jdk/jre/lib/rt.jar:/u01/app/oracle/product/11.2.0/db11g/sqlj/lib/runtime12.jar:/u01/app/oracle/product/11.2.0/db11g/jlib/orai18n.jar:/u01/app/oracle/product/11.2.0/db11g/jlib/jta.jar:/u01/app/oracle/product/11.2.0/db11g/rdbms/jlib/jmscommon.jar:/u01/app/oracle/product/11.2.0/db11g/rdbms/jlib/aqapi.jar:/opt/mqm/java/lib/com.ibm.mqjms.jar:/opt/mqm/java/lib/com.ibm.mq.jar:/opt/mqm/java/lib:/opt/mqm/java/lib/connector.jar
Create a user that will have Messaging Gateway permissions and configure the connection
information used by the agent to connect to the database.
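A sketch of those steps, with a hypothetical user name; the agent connects with a user granted mgw_agent_role, and db_connect_info stores the credentials it uses:

```sql
create user mgw_user identified by mgw_user;
grant create session, mgw_agent_role to mgw_user;

begin
dbms_mgwadm.db_connect_info(
username => 'mgw_user',
password => 'mgw_user',
database => 'ora11g');
end;
/
```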
Now start the agent by using the startup procedure. Note that after the agent starts,
the agent_status column of the mgw$_gateway table will change in value from 0 to 1.
begin
dbms_mgwadm.startup;
end;
/
--Shutdown the agent
begin
dbms_mgwadm.shutdown(
sdmode => dbms_mgwadm.shutdown_immediate);
end;
/
Now it is time to create a particular queue table and a queue for this process. After
this has been created, start the queue.
begin
dbms_aqadm.create_queue_table(
queue_table => 'mgw_q_table',
multiple_consumers => TRUE,
queue_payload_type => 'sys.anydata',
compatible => '8.1');
dbms_aqadm.create_queue(
queue_name => 'mgw_q',
queue_table => 'mgw_q_table');
end;
/
Now that the queue has been created and started, the next step is creating the
link that is responsible for connecting the database to the WebSphere application.
This link is created using the create_msgsystem_link procedure:
declare
v_mgw_properties sys.mgw_properties;
v_mgw_mq_properties sys.mgw_mqseries_properties;
begin
v_mgw_mq_properties := sys.mgw_mqseries_properties.construct();
v_mgw_mq_properties.max_connections := 5;
v_mgw_mq_properties.interface_type :=
dbms_mgwadm.mqseries_base_java_interface;
v_mgw_mq_properties.username := 'webuser';
v_mgw_mq_properties.password := 'web123';
v_mgw_mq_properties.hostname := '10.10.10.114';
v_mgw_mq_properties.port := 5522;
v_mgw_mq_properties.channel := 'system.def.svrconn';
v_mgw_mq_properties.queue_manager := 'mgw_q_manager';
v_mgw_mq_properties.outbound_log_queue := 'mgw_out_log_q';
v_mgw_mq_properties.inbound_log_queue := 'mgw_in_log_q';
dbms_mgwadm.create_msgsystem_link(
linkname => 'mgw_link',
properties => v_mgw_mq_properties,
options => v_mgw_properties,
comment => 'This is the link to connect to Websphere.');
end;
/
The link just created can be checked by querying the mgw$_links table or the mgw_links
view as follows:
select
link_name,
link_type,
agent_name
from
mgw_links;
After creating the link, we register the non-Oracle queue in the Messaging
Gateway.
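A sketch of that registration; the provider queue name is hypothetical, while the registered name mgw_fk_queue is the one referenced by the propagation job created next:

```sql
begin
dbms_mgwadm.register_foreign_queue(
name => 'mgw_fk_queue',
linkname => 'mgw_link',
provider_queue => 'websphere_test_queue',
domain => dbms_mgwadm.domain_queue);
end;
/
```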
Finally, after creating the link that will route messages to WebSphere, a job is created
using the create_job procedure, which replaced the older add_subscriber and
schedule_propagation procedures. This job is created using the following script.
begin
dbms_mgwadm.create_job(
job_name => 'mgw_job',
propagation_type => dbms_mgwadm.outbound_propagation,
source => 'sys.mgw_q',
destination => 'mgw_fk_queue@mgw_link',
enabled => TRUE,
poll_interval => 10,
comments => 'My test mgw job.');
end;
/
--Check the subscriber using the mgw_subscribers view.
col subscriber_id for a10
col queue_name for a10
col destination for a22
select
subscriber_id,
propagation_type,
queue_name,
destination,
status
from
mgw_subscribers;
Package dbms_propagation_adm
The propagation process is responsible for sending enqueued messages from the source
queue to the destination queue. A queue can participate in many propagation processes
at once. One single source queue can propagate events to multiple destination queues.
Likewise, a single destination queue may receive events from multiple source queues.
The rules for the propagation process can be defined at the table, schema or global
level. They can propagate or discard row changes from DML and DDL commands.
The first step is to create a rule that will change the owner of an object. Suppose that
the owner in sourcedb is str_source and the owner in targetdb is str_target. In this example,
the Streams owner is the stradmin database user. Create the function that will be used
in rule. This function will change the object owner to str_target.
Connected to:
Now create the rule set and the rule using the PL/SQL block below; specifying the
function created above.
begin
dbms_rule_adm.create_rule_set(
rule_set_name => 'str_source.my_test_rule_set',
evaluation_context => 'sys.streams$_evaluation_context');
end;
/
declare
action_ctx sys.re$nv_list;
ac_name varchar2(30) := 'streams$_transform_function';
begin
action_ctx := sys.re$nv_list(sys.re$nv_array());
action_ctx.add_pair(ac_name,
sys.anydata.convertvarchar2('str_source.transf_function'));
dbms_rule_adm.create_rule(
rule_name => 'str_source.rule_test',
condition => ':dml.get_object_owner() = ''str_source'' and ' ||
':dml.is_null_tag() = ''Y''',
evaluation_context => 'sys.streams$_evaluation_context',
action_context => action_ctx);
dbms_rule_adm.add_rule(
rule_set_name => 'str_source.my_test_rule_set',
rule_name => 'str_source.rule_test');
end;
/
Confirm that the rule was created by querying the dba_rules and dba_rule_set_rules
views:
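A sketch of those checks; note that the data dictionary stores object names in upper case:

```sql
select
rule_owner,
rule_name,
rule_condition
from
dba_rules
where
rule_name = 'RULE_TEST';

select
rule_set_name,
rule_name
from
dba_rule_set_rules
where
rule_set_name = 'MY_TEST_RULE_SET';
```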
Lastly, initiate the propagation process by specifying the rule just created. This will tell
Streams that all objects belonging to the user str_source in sourcedb will belong to user
str_target in targetdb.
begin
dbms_propagation_adm.create_propagation(
propagation_name => 'prop_test',
source_queue => 'str_source.streams_queue',
destination_queue => 'str_target.streams_queue',
destination_dblink => 'targetdb',
rule_set_name => 'str_source.my_test_rule_set');
end;
/
/
Now all propagated objects will have their owner changed from str_source to str_target
as specified by the new rule.
There are other procedures and functions in these three packages. We will touch upon
some of the more important ones here.
Code 11.14 - dbms_propagation_and_rules.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
In the case below, we create a table alias for the test_table table which was just created. If
the rules just created need to be evaluated in their context, use the code shown here:
set serveroutput on
declare
v_true_rules sys.re$rule_hit_list;
v_maybe_rules sys.re$rule_hit_list;
rnum integer;
begin
dbms_rule.evaluate(
rule_set_name => 'str_source.my_test_rule_set',
evaluation_context => 'my_tab_alias_ctx',
true_rules => v_true_rules,
maybe_rules => v_maybe_rules);
end;
/
In the next example, we will show how to set a tag using the dbms_streams package.
Then we will locate the tag using both dbms_logmnr and the get_tag function.
Code 11.15 - dbms_streams_tag.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--First find out which archived logs are being used by querying the
--v$archived_log view
select
max(sequence#),
name
from
v$archived_log
group by
name;
--/u02/oradata/archives/l_27_729104245.dbf
First, create a table that will be used for this test. Create the table in both source and
target databases.
Code 11.16 - dbms_streams_get.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
alter table
stradmin.tab_test_get_str
add constraint
pk_coll
commit;
exception
when others then
dbms_flashback.disable;
raise;
end;
/
commit ;
set serveroutput on
begin
insert_reg(:scn);
end;
/
Now include this table in the replication environment using the following commands:
conn stradmin/stradmin@strsource;
begin
dbms_streams_adm.add_table_rules(
table_name => 'stradmin.tab_test_get_str',
streams_type => 'capture',
streams_name => 'rep_capture_ora',
queue_name => 'stradmin.streams_queue');
end;
/
begin
dbms_streams_adm.add_table_rules(
table_name => 'stradmin.tab_test_get_str',
streams_type => 'apply',
streams_name => 'rep_apply_to_SQLServer_ORA',
else
--Delete columns for update command in any case
lc_lcr.delete_column('col1');
lc_lcr.delete_column('col2');
lc_lcr.delete_column('col3');
end if;
end if;
lc_lcr.set_command_type('update');
lc_lcr.set_object_name('stradmin');
lc_lcr.add_column('new', 'col1', anydata.convertnumber(1));
lc_lcr.execute(true);
exception
when others then
lc_intError := sqlcode;
lc_varErrmsg := sqlerrm;
raise_application_error(-20001,
lc_varErrmsg || ' # ' || lc_cont_exec || ' - ' ||
'Error in stradmin.proc_dml_handler_tab_test');
end;
/
Next, create the DML handler for each command type linking it to the procedure we
just created:
begin
dbms_apply_adm.set_dml_handler(
object_name => 'stradmin.tab_test_get_str',
object_type => 'table',
operation_name => 'insert',
error_handler => FALSE,
user_procedure
The example above can be used in cases where there is a need to transform data while
it is being replicated. It can also be used as a template for other general cases.
First we create the Streams users on both the source and target databases:
Code 11.17 - dbms_streams_adm.sql
conn sys@ora11g as sysdba
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
create user
dbms_str_user
identified by
dbms_str_user;
grant
create database link,
unlimited tablespace,
connect,
resource,
aq_administrator_role,
create any directory,
dba
to
dbms_str_user;
begin
dbms_streams_auth.grant_admin_privilege(
grantee => 'dbms_str_user');
end;
/
commit;
Now create tablespace tbs_1 as well as one table inside it to be used with this
example.
create tablespace
tbs_1
datafile size
10M;
conn dbms_str_user@sourcedb
Create the local directory where scripts will be stored. In addition, we also need to
create an object directory which points where the tablespace datafile is located.
--sourcedb
create or replace directory
script_dir as
grant
read,
write on directory tbs_dir_source to public;
--Targetdb
create or replace directory
script_dir as
'/u01/app/oracle/admin/targetdb';
grant
read,
write on directory script_dir to public;
grant
read,
write on directory tbs_dir_target to public;
Create a database link in the source database pointing to the target database. Create a
reciprocal link in the target, pointing to the source database. Make sure that there is
the service_name in tnsnames.ora.
--sourcedb
drop database link targetdb;
create database link
targetdb
connect to
dbms_str_user
identified by
dbms_str_user
using
'targetdb';
--Targetdb
drop database link sourcedb;
Finally, use maintain_simple_tts to clone the tbs_1 tablespace from sourcedb to targetdb.
Since we are using the script_name parameter, it will generate a script with all the steps
necessary to create the replication of the tbs_1 tablespace. This procedure must be
executed on the capture database.
begin
dbms_streams_adm.maintain_simple_tts(
tablespace_name => 'tbs_1',
source_directory_object => 'tbs_dir_source',
destination_directory_object => 'tbs_dir_target',
source_database => 'sourcedb',
destination_database => 'targetdb',
script_name => 'my_script1.sql',
script_directory_object => 'script_dir');
end;
/
If the tasks should actually be executed rather than just creating the scripts, use the
command below:
begin
dbms_streams_adm.maintain_simple_tts(
tablespace_name => 'tbs_1',
source_directory_object => 'tbs_dir_source',
destination_directory_object => 'tbs_dir_target',
source_database =>'sourcedb',
destination_database => 'targetdb',
perform_actions => TRUE,
bi_directional => TRUE);
end;
/
There are some useful views that help solve problems that may occur when
configuring Oracle Streams with the dbms_streams_adm package. The next example
shows how to diagnose and fix such a problem.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
begin
dbms_streams_adm.maintain_simple_tts(
tablespace_name => 'tbs_1',
source_directory_object => 'tbs_dir_source',
destination_directory_object => 'tbs_dir_target',
source_database => 'sourcedb',
destination_database => 'targetdb',
script_name => 'my_script1.sql',
script_directory_object => 'script_dir');
end;
SQL>
select
script_id,
block_num,
error_message,
error_creation_time
from
dba_recoverable_script_errors
where
error_creation_time > systimestamp -1/24;
SCRIPT_ID BLOCK_NUM ERROR_MESSAGE ERROR_CREATION_TIME
In the previous scenario, note that the user must have DBA privileges. In some
cases, further investigation is needed using the dba_recoverable_script_blocks view.
This view shows the forward block script and undo block script. To execute one of these
blocks, run the recover_operation procedure.
begin
dbms_streams_adm.recover_operation(
script_id => '94F0120925608F08E0400A0A710A217D',
operation_mode => 'purge');
end;
/
Package dbms_streams_auth
As with any of Oracle's tools, Streams has a security layer; this is managed through
the dbms_streams_auth package. With this package, the database administrator can
grant and revoke Streams administration privileges to users. There is also an option
that can be used to grant administrative privileges to a remote user for managing local
databases through a database link.
This is a fairly straightforward package, so we will show all procedures using a single
example.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
The output of this script contains a lot of select and execute grants for many objects
used when configuring Oracle Streams. The output is as follows:
How to generate a script to revoke administrative privilege from a user is shown here:
begin
dbms_streams_auth.grant_remote_admin_access(
grantee => 'dbms_str_user');
end;
/
begin
dbms_streams_auth.revoke_remote_admin_access(
SQL>
Note that there is a column named access_from_remote which indicates whether or not a
user has remote privilege access to a local Streams administration.
Package dbms_streams_advisor_adm
Oracle is always creating tools that help manage and administer databases and their
features. This is the case for Oracle Streams: there is a tool named Oracle Streams
Performance Advisor, which is essentially a front end for the
dbms_streams_advisor_adm package.
Information about enqueue rate, current queue size, capture rate, send rate,
bandwidth, latency, message apply rate, transaction apply rate and more are gathered.
The results can be found in views like:
■ dba_streams_tp_component: Displays information about each Streams component
in each database
■ dba_streams_tp_component_link: Displays information about how message flow
moves between Oracle Streams components
■ dba_streams_tp_component_stat: Displays temporary performance statistics and
session statistics for Oracle Streams components
The next example shows how to gather statistics from the Streams
Performance Advisor by using the dbms_streams_advisor_adm package.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
dbms_streams_advisor_adm.analyze_current_performance;
end;
/
Use the views below to check all performance information generated by the
dbms_streams_advisor_adm package.
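For example, a sketch of two such queries against the advisor views listed earlier (the column lists are indicative; check the view definitions in your release):

```sql
select
component_id,
component_name,
component_type,
component_db
from
dba_streams_tp_component;

select
component_id,
statistic_name,
statistic_value,
statistic_unit
from
dba_streams_tp_component_stat
order by
component_id, statistic_name;
```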
This package enables the Streams administrator to find which components are causing
performance degradation, thus facilitating a solution to the problem.
Package dbms_streams_messaging
Another package frequently used by Streams administrators is dbms_streams_messaging.
It enables the enqueue and dequeue of messages into and from an anydata
queue. The dbms_streams_messaging package cannot be used to enqueue or dequeue
messages from a buffered queue; in that case, use the dbms_aq package.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--sourcedb to targetdb
create database link targetdb
connect to dbms_str_user identified by dbms_str_user
using 'targetdb';
--targetdb to sourcedb
create database link sourcedb
connect to dbms_str_user identified by dbms_str_user
using 'sourcedb';
dbms_aqadm.create_queue_table(
queue_table => 'my_queue_table',
queue_payload_type => 'sys.anydata',
storage_clause => ' tablespace tbs_streams_data',
sort_list => 'priority,enq_time',
multiple_consumers => TRUE,
Create the propagation used to send information to targetdb through a database link.
begin
dbms_propagation_adm.drop_propagation(
propagation_name => 'prop_tab_str_test',
drop_unused_rule_sets => TRUE);
dbms_propagation_adm.create_propagation(
propagation_name => 'prop_tab_str_test',
source_queue => 'dbms_str_user.my_anydata_queue',
destination_queue => 'dbms_str_user.my_anydata_queue',
destination_dblink => 'targetdb');
end;
/
Next, we create a trigger which will enqueue messages into the message system. Note
that every time a row is inserted into the tab_str_test table, it is also enqueued in
my_anydata_queue.
begin
dbms_streams_adm.add_message_rule (
message_type => 'dbms_str_user.type_tab_str_test',
rule_condition => ':msg.col1 > 1',
streams_type => 'dequeue',
streams_name => 'dbms_str_user_cons2',
queue_name => 'dbms_str_user.my_anydata_queue');
end;
/
The next step is to create a procedure which will dequeue messages from the queue
created.
commit;
if msg.gettypename() = 'dbms_str_user.type_tab_str_test' then
v_int := msg.getobject(v_message);
dbms_output.put_line(a => 'value for col1: ' || v_message.col1);
dbms_output.put_line(a => 'value for col2: ' || v_message.col2);
end if;
v_steps := 'next message';
commit;
exception
when sys.dbms_streams_messaging.endofcurtrans then
v_steps := 'next transaction';
when dbms_streams_messaging.nomoremsgs then
v_have_messages := false;
dbms_output.put_line(a => 'No more messages.');
dbms_output.put_line('The error code was = ' || sqlcode);
dbms_output.put_line('The error message was ' || sqlerrm);
Finally, perform some insert operations, check values in the queue and use the
procedure to dequeue the messages.
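A sketch of those inserts, assuming the tab_str_test table and the enqueuing trigger created earlier (the column names are illustrative):

```sql
insert into dbms_str_user.tab_str_test (col1, col2) values (1, 'first row');
insert into dbms_str_user.tab_str_test (col1, col2) values (2, 'second row');
commit;

--Check the queue table before dequeuing
select q_name, enq_time from dbms_str_user.my_queue_table;
```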
Run the procedure to dequeue the messages and check the output displayed:
set serveroutput on
begin
prc_dequeue;
end;
/
--Check queue table again on columns starting with deq%
select
q_name,
msgid,
enq_time,
enq_uid,
sender_name,
deq_time,
deq_uid
from
my_queue_table;
These examples have shown a simple way to dequeue messages from a queue table.
Other methods include the dbms_aq package covered earlier in this chapter.
The next chapter will show how to use the most important packages of the Oracle
HTML DB and XDB feature.
Introduction
Oracle Application Express (ApEx, formerly called HTML DB) is one of the most
exciting web application development tools on the market. It is a true Rapid
Application Development environment that can take an idea from concept to a
working production-level application in a very short period of time, and this book can
help with understanding the underlying packages that make this possible.
Oracle XML DB is another Oracle technology used to store and manage XML data in
the database. Oracle provides native support to XML data. The packages that will be
presented in this chapter make reference to both the XML and HTML DB. We will
show the main procedures of these packages regarding the installation and
management of these technologies.
There are some steps that need to be taken before using these features, as they are not
installed by default. To have access to the XML packages, it is necessary to install the
XML DB option by using the catqm.sql script found in
$ORACLE_HOME/rdbms/admin. In the same way, to use the HTML DB packages,
Oracle Application Express needs to be installed. This is a more advanced installation
where ApEx, the ApEx Listener and an HTTP server (in my case, I just downloaded the
OC4J containers) need to be installed. The step-by-step installation guide can be
found in the Oracle Application Express Installation Guide,
Release 4.0, Part Number E15513-01.
Next, how to use the most important and useful packages that concern HTML DB
and XML DB will be revealed.
Package htmldb_custom_auth
This package provides the user with procedures and functions that are related to
authentication and session management. Among other things, this package offers
functions and procedures that could be used to get the session id, check existence of
an item in a page, get authentication username, perform login and authentication and
more.
The following are some examples of these functions and procedures. Use the
application_page_item_exists function to check if an item exists in a page. If it exists, a
TRUE value is returned; otherwise, it returns FALSE.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
if htmldb_custom_auth.application_page_item_exists(p_item_name =>
'iteml') then
dbms_output.put_line(a => 'Exists.');
else
dbms_output.put_line(a => 'Does not exist.');
end if;
end;
begin
if htmldb_custom_auth.current_page_is_public then
dbms_output.put_line(a => 'Is a public page');
else
dbms_output.put_line(a => 'Is not a public page');
end if;
end;
/
If we want to define a user session setting the user and session id, use the
define_user_session procedure. To get the user and session id values, use the get_user and
get_session_id functions.
declare
v_user varchar2(30);
v_session_id number;
begin
htmldb_custom_auth.define_user_session(p_user => 'userl',p_session_id =>
10) ;
select
htmldb_custom_auth.get_user
into
v_user
from
dual ;
If we need to get cookie properties in the current session, we use the get_cookie_props
procedure.
set serveroutput on
declare
v_cookie_name varchar2(50);
v_cookie_path varchar2(100);
v_cookie_domain varchar2(20);
v_secure boolean;
begin
htmldb_custom_auth.get_cookie_props(
p_app_id => 1,
p_cookie_name => v_cookie_name,
p_cookie_path => v_cookie_path,
p_cookie_domain => v_cookie_domain,
p_secure => v_secure);
dbms_output.put_line(a => 'Cookie Name:'||v_cookie_name);
dbms_output.put_line(a => 'Cookie Path:'||v_cookie_path);
dbms_output.put_line(a => 'Cookie Domain:'||v_cookie_domain);
set serveroutput on
declare
v_ldap_host varchar2(30);
v_ldap_port number;
v_use_ssl varchar2(10);
v_ldap_use_exact_dn varchar2(30);
v_ldap_dn varchar2(30);
v_search_filter varchar2(30);
v_ldap_edit_function varchar2(30);
begin
htmldb_custom_auth.get_ldap_props(
p_ldap_host => v_ldap_host,
p_ldap_port => v_ldap_port,
HTML DB has a sequence that can be used to generate the next session id. To do
this, run the get_next_session_id function.
set serveroutput on
declare
v_next_session_id number;
v_current_session_id number;
begin
select
htmldb_custom_auth.get_next_session_id,
htmldb_custom_auth.get_session_id into v_next_session_id ,
v_current_session_id
from
dual ;
To get the current username registered with the current HTML DB session, use the
get_username function.
set serveroutput on
declare
v_username varchar2(30);
begin
select
htmldb_custom_auth.get_username
into
v_username
from
dual;
dbms_output.put_line(a => 'Username is:'||v_username);
end;
/
set serveroutput on
declare
v_sec_group_id number;
begin
select
htmldb_custom_auth.get_security_group_id
into
v_sec_group_id
from
dual ;
dbms_output.put_line(a => 'Security Group Number is:'||v_sec_group_id);
end;
/
set serveroutput on
begin
if htmldb_custom_auth.is_session_valid then
dbms_output.put_line(a => 'Is a valid session!');
else
dbms_output.put_line(a => 'Is NOT a valid session!');
end if;
end;
/
There are procedures used to log in, log out and authenticate a user session
registration. These are the login and logout procedures. See some examples below:
--login
begin
htmldb_custom_auth.login(
p_uname => 'pportugal',
p_password => 'XXX112',
p_session_id => 'dbms_session',
p_app_page => '1:1');
end;
/
--logout
begin
--p_next_app_page_sess indicates the application and page id where the user
--will be redirected to after logout; in this case, application 300, page 5
htmldb_custom_auth.logout(
p_this_app => '300',
p_next_app_page_sess => '300:5');
end;
/
While the login procedure makes the session authentication, the post_login procedure
will make session registration.
begin
if htmldb_custom_auth.session_id_exists then
dbms_output.put_line('Session ID exists.');
else
dbms_output.put_line('Session ID does not exist.');
end if;
end;
/
All these procedures and functions are used to manage session and user
authentication and registration with Oracle Application Express.
The most common use for the htmldb_item package is in a report or tabular form. The
most commonly used of all its procedures is the checkbox procedure, which is used
extensively to render checkboxes in reports. A few of the htmldb_item procedures are
outlined below. With the examples provided, users should have enough information
to go to the ApEx documentation and be able to use the rest of the dynamic page item
procedures.
With this package, for example, we can create checkboxes, different types of lists, text
areas, radio groups and pop ups such as creating checkboxes dynamically in a query
using the following command:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
--Use the command below to create a check box enabled for all employees on
table emp_del
select
htmldb_item.checkbox(
p_idx => 2,
p_value => employee_id,
p_attributes => 'checked') " ",
first_name,
last_name
from
hr.emp_del
order by 1;
--Or this command, which creates a check box only for employees that have a
--salary greater than 100.
select
htmldb_item.checkbox(p_idx => 1,p_value => employee_id,p_attributes =>
'checked') " ",
first_name,
last_name
from
hr.emp_del
where
salary > 100
order by 1;
Use the display_and_save function to display an item as text and save its value to the
session state.
select
htmldb_item.display_and_save(
p_idx => 10,
p_value => object_id)
from
dba_objects
where rownum < 10;
HTMLDB_ITEM.DISPLAY_AND_SAVE(P
To dynamically generate hidden fields, use the function named hidden and to generate
text fields, use the function named text. Both are used in the next example:
select object_id,
htmldb_item.hidden(
p_idx => 1,
p_value => object_id) ||
htmldb_item.text(
p_idx => 2,
p_value => object_name) Object_ID_Name
from dba_objects
where rownum < 5
order by 1;
Select lists, text fields and radio groups can also be generated from a list of values
or from a query. The next example builds a radio group with the radiogroup
function:
--Radiogroup function
select distinct
htmldb_item.radiogroup(
p_idx => 1,
p_value => object_type,
p_selected_value => 1,
p_display => object_type)
from
dba_objects;
Popups and popup keys can also be created from different sources like the LOV
query. See the next examples:
--popup_from_lov
select
htmldb_item.popup_from_lov(
p_idx => 1,
p_value => object_name,
p_lov_name => 'lov1')
from dba_objects;
--popup_from_query
select
htmldb_item.popup_from_query(
p_idx => 1,
p_value => username,
p_lov_query => 'select username from dba_users')
from dba_users;
--popupkey_from_lov
select
htmldb_item.popupkey_from_lov(
p_idx => 1,
p_value => object_id,
p_lov_name => 'lov1')
from
dba_objects;
--popupkey_from_query
select
htmldb_item.popupkey_from_query(
p_idx => 1,
p_value => object_id,
p_lov_query => 'select object_name, object_id from dba_objects')
from dba_objects;
All these functions are automatically used when running APEX wizards to create
pages. The intention here is to see how to use them when the specific needs are not
fully met by the APEX wizards.
Package htmldb_util
The htmldb_util package provides several procedures that can be used in ApEx or in
stored procedures within the database. The ApEx development environment provides
other methods for performing the same functionality as many of these functions do.
However, when developing stored procedures, the same functionality is available
through the use of the htmldb_util package.
Some procedures and functions of this package are described below. All these
procedures and functions described here are excerpts from Easy HTML-DB Oracle
Application Express by Michael Cunningham and Kent Crotty.
■ clear_appcache: This procedure clears all session states for the application
provided in the parameter.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Get email
select
htmldb_util.get_email(p_username => 'pportugal')
from
dual;
select
htmldb_util.get_last_name(p_username => 'pportugal')
from
dual;
--Get preference
set serveroutput on
declare
v_preference number;
begin
select
htmldb_util.get_preference(p_preference => 'my_pref',p_user =>
'pportugal')
into
v_preference
from
dual ;
dbms_output.put_line('Preference: '||v_preference);
end;
begin
htmldb_util.remove_user(p_user_name => 'pportugal');
end;
/
--Set preference in a persistent session state
begin
htmldb_util.set_session_state(
p_name => 'my_item',
p_value => 'myvalue');
end;
/
--Return a PL/SQL array given a string
set serveroutput on
declare
v_array htmldb_application_global.vc_arr2;
begin
v_array := htmldb_util.string_to_table(p_string => 'blue:red:green');
for z in 1..v_array.count loop
htp.print(v_array(z));
dbms_output.put_line(a => v_array(z));
end loop;
end;
Like any other html_db package, all procedures and functions of htmldb_util are
used automatically when invoking html_db wizards to create and modify pages.
Package dbms_xdb
One of the main packages related to Oracle XML DB is dbms_xdb. Within this
package, it is possible to manage and configure resources for XML DB and also
administer security privileges. Once XML DB is installed on the database, this
package and the others about XDB that will be covered later on in this chapter can be
used. In order to install XML DB, run the catqm.sql script. This script can be found in
the $ORACLE_HOME/rdbms/admin directory.
To check if XML DB is already installed on the database, just query the dba_registry
view. This view shows information about all components that are loaded into the
database. Some useful and practical examples will be shown below so the reader can
gain some knowledge on XML DB management.
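For instance, a quick check against dba_registry might look like the following sketch; the LIKE predicate is used rather than an exact component name, which can vary between releases:

```sql
--Check whether the XML Database component is loaded and valid
select
   comp_name,
   version,
   status
from
   dba_registry
where
   comp_name like '%XML%';
```

A status of VALID for the XML Database component indicates the installation is usable.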
The dbms_xdb package contains more than eighty procedures and functions and thus,
as it is not possible to show an example for each one of them, some of the most useful
and most important will be shown in the next example.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Folder examples
--Create a new folder resource
declare
v_return boolean;
begin
v_return := dbms_xdb.CreateFolder(abspath => '/usr');
commit;
end;
/
--Check if the resource is a folder or a container
set serveroutput on
declare
v_return boolean;
begin
if dbms_xdb.isFolder(abspath => '/usr') then
dbms_output.put_line(a => 'Is a folder or container!');
else
dbms_output.put_line(a => 'Is not a folder or container!');
end if;
end;
/
The next example shows some procedures and functions used to create, delete,
rename and manage resources.
Code 12.5 - dbms_xdb_resources.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Resources
--Create a new resource files
declare
v_res boolean;
carsxmlstring varchar2(300):=
'<?xml version="1.0"?>
<cars>
<car carno="1" modelno="10" carname="X" carcolor="blue"/>
<car carno="2" modelno="20" carname="Y" carcolor="red"/>
<car carno="3" modelno="30" carname="W" carcolor="green"/>
<car carno="4" modelno="40" carname="Z" carcolor="yellow"/>
begin
dbms_xdb.RenameResource(
srcpath => '/usr/tmp/cars.xml',
destfolder => '/usr/tmp/',
newname => 'cars2.xml');
commit;
end;
/
select
any_path
from
xdb.resource_view r;
It is possible to lock and unlock resources to work with them when necessary, and it
is also possible to check if a resource is locked. The next example shows some of the
lock procedures and functions of the dbms_xdb package.
--Find below more examples of how to lock, unlock and discover locks on
--resources.
--Lock examples
declare
v_lock boolean;
begin
if dbms_xdb.LockResource(abspath => '/usr',depthzero => TRUE,shared =>
FALSE) then
dbms_output.put_line(a => 'Resource locked!');
else
dbms_output.put_line(a => 'Resource unlocked!');
end if;
commit;
end;
/
If we try to execute the command above one more time, it will generate a message like
this:
By default, the XML database is configured with http port set to 8080 and FTP port
to 2100. To change these values or get the actual value, use the corresponding
functions shown next.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
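As the original listing is not reproduced here, the following is only a sketch of how these routines are typically used; gethttpport, getftpport, sethttpport and setftpport are the dbms_xdb functions and procedures that read and change these values, and the new port numbers 8081 and 2121 are arbitrary:

```sql
set serveroutput on
begin
   --Display the current port settings
   dbms_output.put_line('HTTP port: ' || dbms_xdb.gethttpport);
   dbms_output.put_line('FTP port : ' || dbms_xdb.getftpport);
   --Change the ports; the values 8081 and 2121 are arbitrary examples
   dbms_xdb.sethttpport(8081);
   dbms_xdb.setftpport(2121);
   commit;
end;
/
```

Setting a port to 0 disables the corresponding protocol listener.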
Some procedures are used to set parameters of a listener end point that corresponds
to the XML DB HTTP server. It is possible to set HTTP and HTTP2 listeners with
the following procedures:
Check if the XML database is running with secure settings by using the query below:
tcps 1443
declare
v_port number;
v_protocol number;
v_host varchar2(50);
begin
dbms_xdb.getListenerEndPoint(endpoint => 2,host => v_host,port =>
v_port,protocol => v_protocol);
dbms_output.put_line(a => 'Host :'||v_host);
dbms_output.put_line(a => 'Port :'||v_port);
dbms_output.put_line(a => 'Protocol:'||v_protocol);
end;
/
Host :10.10.10.113
Port :1443
Protocol:2
Sometimes the storage tablespace for XDB objects needs to be changed and dbms_xdb
provides a procedure to check and another to change all XDB objects to another
tablespace. The following shows how to do that.
Code 12.9 - dbms_xdb_tablespace.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
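The listing body is not reproduced here; as an illustrative sketch, the tablespace currently holding the XDB objects can be checked through dba_users, and the move is done with the movexdb_tablespace procedure. The target tablespace name XDB_TS below is an assumption:

```sql
--Check the tablespace currently holding the XDB objects
select
   default_tablespace
from
   dba_users
where
   username = 'XDB';

--Move all XDB objects to another tablespace; XDB_TS is a hypothetical name
begin
   dbms_xdb.movexdb_tablespace('XDB_TS');
end;
/
```

The move requires that no sessions are actively using the XML DB repository while it runs.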
The examples above were a useful introductory sample of the more than eighty
procedures and functions that compose the dbms_xdb package and are used to manage
the XDB database.
Package dbms_xdbt
As the XML database stores information in a different manner from a normal
database schema, Oracle provides some performance improvements by offering the
dbms_xdbt package, which supports creating text indexes on XML DB content such
as Microsoft Word documents and other binary data.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
xdb.dbms_xdbt.createPreferences;
xdb.dbms_xdbt.createindex;
end;
/
begin
xdb.dbms_xdbt.configureautosync;
end;
/
The command above will create and schedule a job with the following code:
xdb.dbms_xdbt.autosyncjobbycount('xdb$ci', 2, '50M');
begin
xdb.dbms_xdbt.autosyncjobbycount(
myIndexName => 'xdb$ci',
myMaxPendingCount => 2,
myIndexMemory => '50M');
end;
/
Package dbms_xdbz
To keep the XML database secure, Oracle provides different methods of security like
row-level and column-level security and also fine-grained access control for XML DB
resources. Before starting to explain what the dbms_xdbz package is and when and
how to use it, it is necessary to have at least a basic knowledge of what is understood
by ACL and ACE when the subject is the XML database.
Access Control Lists (ACLs) are a technique used to protect XML DB resources from
being accessed by unauthorized users. All resources receive an associated ACL when
they are created, and ACLs can also be created later and associated with an existing
resource. All ACLs are stored in the xdb$acl table in the XDB schema.
An Access Control Entry (ACE) is an entry in an ACL in the form of an XML
element. It is responsible for granting or denying access to XML resources. Taking
the explanation above into consideration, it is time to start demonstrating some
examples of this package usage.
A principal is a user or role that is granted, or has revoked, the privilege to access
an XML resource. Find below the procedures that are used to reset, set, delete,
add and change an application principal.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
--Application principal
--Add an Application user or role to XML DB.
declare
v_res boolean;
begin
v_res := xdb.dbms_xdbz.add_application_principal(name => 'SH');
dbms_output.put_line(a => 'Added application principal!');
commit;
end;
/
--Setting application principal
declare
v_res boolean;
begin
Also in the dbms_xdbz package are two purge procedures used to purge cache
information. They are exemplified below:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
There are also validating procedures inside the dbms_xdbz package that can be used to
validate ACLs before using them. For example, suppose that we want to add
privileges to an ACL but want to check if the ACL is valid before doing so.
Take a look at this next example that shows a simple way to validate an ACL. First
create an ACL, add a privilege to it and assign the ACL to host www.rampant.ee.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
dbms_network_acl_admin.create_acl(acl => 'www_ramp.xml',
description => 'www rampant acl',
principal => 'xdb',
is_grant => true,
privilege => 'connect');
dbms_network_acl_admin.add_privilege(acl => 'www_ramp.xml',
principal => 'xdb',
is_grant => true,
privilege => 'resolve');
dbms_network_acl_admin.assign_acl(acl => 'www_ramp.xml',
host => 'www.rampant.ee');
end;
/
commit;
--Now, suppose that we want to give XDB the connect privilege everywhere.
--Use the pl/sql block below:
declare
v_acl_path varchar2(500);
v_acl_id raw(16);
begin
-- Look for the ACL currently assigned to '*' and give flows_xxxxxx
-- the "connect" privilege if flows_xxxxxx does not have the privilege
-- yet.
select acl
into v_acl_path
from dba_network_acls
where host = '*'
and lower_port is null
and upper_port is null;
Check all the ACLs created by querying the dba_network_acls view, which shows the
network host, the ACL path and the upper and lower port range.
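A query along these lines lists them:

```sql
--List all network ACLs with their hosts and port ranges
select
   host,
   lower_port,
   upper_port,
   acl
from
   dba_network_acls;
```

A null lower_port and upper_port means the ACL applies to all ports of the host.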
The next packages have to do with XML management and document manipulation.
As these packages contain more than two hundred and fifty functions, procedures
and constants, it will not be possible to exemplify all of them in this chapter.
Instead, the example below will use some of these procedures and functions from
both packages. In this example, some procedures and functions of the dbms_xmldom
package will be used to create a DOM document into a table and manipulate this data.
An xmltype table is created and after that, some value is inserted into this table.
Procedures used to create an element node and manipulate the data of the element
created are explained later.
Connected to:
create table
tab_test_xmldom of
xmltype;
declare
v_doc dbms_xmldom.domdocument;
v_doc_node dbms_xmldom.domnode;
v_dom_element dbms_xmldom.domelement;
v_buffer varchar2(300);
v_dom_node dbms_xmldom.domnode;
v_variable XMLType;
v_chil_node dbms_xmldom.domnode;
v_node_list dbms_xmldom.domnodelist;
v_var2 varchar2(100);
v_elem dbms_xmldom.domelement;
v_doc1 dbms_xmldom.domdocument;
v_n_element dbms_xmldom.domnode;
begin
v_variable :=
xmltype('<tab_test_xmldom><car>audi_a4</car></tab_test_xmldom>');
v_doc := dbms_xmldom.newdomdocument(v_variable);
v_doc_node := dbms_xmldom.makenode(v_doc);
dbms_xmldom.writetobuffer(n => v_doc_node,buffer => v_buffer);
dbms_output.put_line(a => 'Before change:' || v_buffer);
v_doc1 := dbms_xmldom.newdomdocument;
v_elem := dbms_xmldom.createelement(doc => v_doc,tagName => 'CAR');
v_n_element := dbms_xmldom.makenode(v_elem);
dbms_output.put_line('node name = ' || dbms_xmldom.getnodename(n =>
v_n_element));
dbms_output.put_line('node value = '|| dbms_xmldom.getnodevalue(n =>
v_n_element));
dbms_output.put_line('node type = ' || dbms_xmldom.getnodetype(n =>
v_n_element));
end;
/
--Check the table values
SQL> select * from tab_test_xmldom;

SYS_NC_ROWINFO$
--------------------------------------------------------------------------------
<tab_test_xmldom>
<CAR>audi_a8</CAR>
</tab_test_xmldom>
Here are the procedures and functions used in the last example:
■ newdomdocument: Procedure that processes the XMLType document and creates
an instance
■ makenode: Function used to create a handle for the object
■ writetobuffer: Function used to write the document to a varchar buffer
■ getdocumentelement: This function gets the document element
■ getelementsbytagname: This function gets an element based on its tag name
■ getfirstchild: Function used to retrieve the first child of a node being used
■ setnodevalue: Procedure used to set the value of the node being used
■ writetobuffer: This procedure writes contents of a document into a buffer using the
database character set
■ freedocument: This procedure frees a dom document object
Much more can be done with the dbms_xmldom package, but it is beyond the scope of
this book. Instead, the focus will be on another XML package previously presented,
dbms_xmlparser.
XML elements, known also as storage units, contain parsed or unparsed data. A
parsed object may contain common character data or markup language. These
markups are used to describe the storage layout and logical structure of the object.
The dbms_xmlparser package allows access to XML documents, enabling the user to
get their structure and content and access or modify the document's elements and
attributes.
The next example will show how to parse an XML document using the dbms_xmlparser
package. A table containing XML data is created. Then this table is used in a function
that checks if the XML inside the table is formed correctly or not.
Now create a directory which will hold an XML file to be inserted into the newly
created table.
Create a procedure to insert the XML data from the file to the table tab_xml_parse.
insert into
tab_xml_parse
(col1,
col2)
values
(v_file_name,
empty_clob())
returning col2 into v_myclob;
Use the procedure created above to insert XML data into the table.
begin
prc_insert_xml_to_table(
v_directory => 'dir_xml_docs',
v_file_name => 'itunesLib.xml',
v_char_set => 'utf8');
end;
/
--Query table and check results
select * from tab_xml_parse;
Create the function to check whether or not the XML document is formed properly.
Lastly, execute the function to check if the XML data inside the tab_xml_parse table is
in a correct format.
set serveroutput on
declare
v_xmlfile clob;
v_is_ok boolean;
indoc varchar2(2000);
myparser dbms_xmlparser.parser;
indomdoc dbms_xmldom.domdocument;
innode dbms_xmldom.domnode;
buf varchar2(2000);
begin
select col2 into v_xmlfile
from tab_xml_parse
where col1 = 'itunesLib.xml';
v_is_ok := func_check_xml(v_xmlfile);
end;
/
Some of the main procedures and functions of dbms_xmlparser were used in the last
example. Here is what each one does:
■ newparser: Used to return a new parser instance
■ parseclob: Used to parse XML stored in a CLOB
■ freeparser: Used to free a parser object
■ parsebuffer: This function parses XML stored in a VARCHAR2 buffer
■ getdocument: Returns the root node of the document tree built by the parser
There are many other functions and procedures in dbms_xmlparser. Further information
can be found by consulting tahiti.oracle.com.
The syntax is select dbms_xmlgen.getxml('your query here') from dual and, with spool and
SQL*Plus settings set correctly, the output is a dump of data in XML format.
Of course, much more can take place with respect to manipulating the data. Oracle
recommends that data selection and formatting, as much as possible, be done via the
select statement as opposed to forcing the RTF processing engine to manipulate the
data. The RDBMS engine is obviously much more powerful than what Microsoft
Word has to offer. Oracle recommends that dbms_xmlgen be used over dbms_xmlquery.
Would it not be fantastic if data could be simply pulled from Oracle preformatted
with XML tags? Many Oracle shops use XML for data transfer, web services, reports,
and the Oracle data can be easily tagged using the dbms_xmlgen package. Oracle’s XML
Publisher product can retrieve XML from an HTTP feed and use it to generate rich
reports with graphs, images, and other content and then mail, fax, print, or FTP
them. All that is needed is the XML.
Oracle provides the dbms_xmlgen package for formatting Oracle output in XML. The
dbms_xmlgen package generates XML on-the-fly using any query desired; in addition, it
is extremely easy to use from either the SQL prompt or in code, as it is just a simple
query. Below are some useful examples for creating XML data using both the
dbms_xmlgen and dbms_xmlquery packages. Run this simple query:
Code 12.16 - dbms_xmlgen.sql
conn sys@ora11g as sysdba
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
select
owner,
table_name
from
dba_tables
where
owner='system'
and
rownum <2;
select
dbms_xmlgen.getXML('select owner,table_name from dba_tables where
owner=''system'' and rownum <2') query_xml
from
dual;
QUERY_XML
<?xml version="1.0"?>
<rowset>
<row>
<owner>system</owner>
<table_name>aq$_internet_agents</table_name>
</row>
</rowset>
It is very easy to generate XML data from a query using the getxml function.
Now examine more advanced XML tagging with dbms_xmlgen. Most XML has sub
nodes for each main node. For instance, what if we wanted to pull XML for every
department, and a sub-node for every employee under it? We can use the cursor
function!
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
cursor statement : 3
FIRST_NAME LAST_NAME
Jennifer Whalen
cursor statement : 3
FIRST_NAME LAST_NAME
Michael Hartstein
Pat Fay
cursor statement : 3
Den Raphaely
Alexander Khoo
Shelli Baida
Sigal Tobias
Guy Himuro
Karen Colmenares
The results do not look too impressive at the SQL prompt. However, watch as we
surround it with a call to dbms_xmlgen.getxml:
select dbms_xmlgen.getxml('
select department_id, department_name,
cursor(select first_name, last_name
from employees e
where e .department_id = d.department_id) emp_row
from departments d
where rownum < 4
') from dual;
<?xml version="1.0"?>
<rowset>
<row>
<department_id>10</department_id>
<department_name>administration</department_name>
<emp_row>
<emp_row_row>
<first_name>Jennifer</first_name>
<last_name>Whalen</last_name>
</emp_row_row>
</emp_row>
</row>
<row>
<department_id>20</department_id>
<department_name>marketing</department_name>
<emp_row>
<emp_row_row>
<first_name>Michael</first_name>
<last_name>Hartstein</last_name>
</emp_row_row>
<emp_row_row>
<first_name>Pat</first_name>
<last_name>Fay</last_name>
</emp_row_row>
</emp_row>
</row>
<row>
<department_id>30</department_id>
<department_name>Purchasing</department_name>
<emp_row>
<emp_row_row>
<first_name>Den</first_name>
<last_name>Raphaely</last_name>
</emp_row_row>
Note that we did not change the query syntax in any way. But check out the great
XML results! We have each department as a row tag, and the cursor we created gives us
an emp_row node containing recurring emp_row_row nodes.
With standard SQL queries tagged using dbms_xmlgen, XML Publisher can have a full
reporting suite that easily pulls Oracle data with XML tags, forms it into a PDF,
DOC, XLS, or HTML report, and distributes the report via e-mail using its native e-
mail capabilities. This is far easier than the traditional utl_mail or utl_smtp e-mail
packages, which require specialized invocation code. Next, some dbms_xmlquery
examples will be given to compare it with the dbms_xmlgen package.
For example, the following issue came up for one of my forum users while creating a
web service to return nested XML data. The data contains a special character that
prevented IE 6.0 from displaying the XML output. By default, xmlgen generates an xml
header <?xml version='1.0'?>. By converting to dbms_xmlquery, we can set the XML
header to a company standard encoding <?xml version='1.0' encoding='ISO-8859-
1'?>, and the web service will work fine. This code is almost identical, though the
irrelevant portions have been removed.
qryctx := dbms_xmlquery.newcontext(v_prd_query);
dbms_xmlquery.setencodingtag(qryctx, 'ISO-8859-1');
dbms_xmlquery.setrowsettag(qryctx, 'root');
dbms_xmlquery.setrowtag(qryctx, v_bayer_group_tag);
-- Execute the query and put the results into the clob
dbms_output.put_line('close context');
dbms_xmlquery.closecontext(qryctx);
The comment shows that multiple occurrences of a bind variable do not work in
dbms_xmlquery. The following error will be received:
This is where the index refers to which occurrence of a bind variable (starting at 0)
failed. In this case, it was the third bind_variable occurrence that failed.
Both dbms_xmlsave and dbms_xmlstore are part of the Oracle XML SQL Utility (XSU).
The main differences between them are that dbms_xmlstore is written in C and
compiled into the kernel, so it is faster, and dbms_xmlstore uses the Simple API for
XML (SAX) to parse XML, providing higher scalability and consuming less memory
than the dbms_xmlsave package. Another important difference between them is that
the dbms_xmlstore functions insertxml, updatexml and deletexml can take XMLType
instances in addition to CLOB values, thereby offering better integration with
XML DB.
Here is an example of uploading a new record into the EMP table using the
dbms_xmlsave package.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
declare
insctx dbms_xmlsave.ctxtype;
n_rows number;
s_xml varchar2 (32767);
begin
s_xml :=
'<rowset>'
|| '<row>'
|| '<empno>7783</empno>'
|| '<ename>clark</ename>'
|| '<job>manager</job>'
|| '<mgr>7839</mgr>'
|| '<sal>2450</sal>'
|| '<deptno>10</deptno>'
|| '</row>'
|| '</rowset>';
In this example, table emp3 is a copy of emp. The generated xml data file is named
emp3.xml and is located in a directory object named mydir - C:\Temp in this example.
Here is the procedure code to upload an xml file:
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
begin
dbms_lob.createtemporary(myclob, TRUE, 2);
-- open file
dbms_lob.fileopen(xmlfile);
-- context handle
insctx := dbms_xmlstore.newcontext(upper(tablename));
-- close handle
dbms_xmlstore.closecontext(insctx);
The process to upload a file is to execute the procedure and pass in the directory
object name, the file name, and the target table.
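A complete wrapper along these lines might look as follows. The procedure name load_xml and its parameter names are hypothetical, but every call is a documented dbms_lob or dbms_xmlstore routine; the usage line at the end matches the emp3.xml example above.

```sql
create or replace procedure load_xml (
   p_dir   in varchar2,   -- directory object name
   p_file  in varchar2,   -- XML file name
   p_table in varchar2)   -- target table
as
   l_bfile  bfile := bfilename(p_dir, p_file);
   l_clob   clob;
   l_ctx    dbms_xmlstore.ctxtype;
   l_rows   number;
   l_dest   integer := 1;
   l_src    integer := 1;
   l_lang   integer := dbms_lob.default_lang_ctx;
   l_warn   integer;
begin
   dbms_lob.createtemporary(l_clob, true);
   dbms_lob.fileopen(l_bfile, dbms_lob.file_readonly);
   -- read the whole file into a temporary CLOB
   dbms_lob.loadclobfromfile(l_clob, l_bfile, dbms_lob.lobmaxsize,
                             l_dest, l_src,
                             dbms_lob.default_csid, l_lang, l_warn);
   dbms_lob.fileclose(l_bfile);
   -- insert the XML rows into the target table
   l_ctx  := dbms_xmlstore.newcontext(upper(p_table));
   l_rows := dbms_xmlstore.insertxml(l_ctx, l_clob);
   dbms_xmlstore.closecontext(l_ctx);
   dbms_lob.freetemporary(l_clob);
   dbms_output.put_line(l_rows || ' row(s) loaded.');
end load_xml;
/

-- example call:
-- exec load_xml('MYDIR', 'emp3.xml', 'emp3')
```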
Package dbms_xmlschema
XSD (XML Schema Definition) is a language for describing the structure of XML
documents; it serves a purpose similar to a Document Type Definition (DTD), but is
itself written in XML. Once an XML schema is registered in the database, it can be
used to validate the structure of XML documents.
XML schemas are used to check whether XML instance documents conform to their
specification. Schema-based data can be stored in an XMLType storage model in one
of three ways: structured storage, unstructured (CLOB) storage or binary XML storage.
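For illustration, the storage model is chosen at table creation time. The table name xml_docs below is hypothetical:

```sql
-- Binary XML storage
create table xml_docs of xmltype
   xmltype store as binary xml;

-- Unstructured (CLOB) storage
-- create table xml_docs of xmltype
--    xmltype store as clob;

-- Structured storage requires a registered XML schema
-- (see the dbms_xmlschema examples that follow).
```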
The dbms_xmlschema is a package that offers procedures and functions for managing
XML schemas. A few of the things which can be accomplished with the
dbms_xmlschema package are:
■ Evolving an XML schema using the inplaceevolve and copyevolve procedures
■ Compiling an XML schema by using the compileschema procedure
■ Registering and deleting XML schemas by using the registerschema and
deleteschema procedures, respectively
■ Generating an XML schema using the generateschema procedure
Next are some examples of how and when to use the dbms_xmlschema procedures and
functions.
This example will show how to register and delete an XML schema. The first step is to
create directories where XML objects will be registered, using the dbms_xdb package.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
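The directory-creation step can be sketched with dbms_xdb.createfolder. The folder names below mirror the /dbms_book/schemas path used when the schema is registered, and should be adjusted to your own repository layout.

```sql
declare
   res boolean;
begin
   -- create the repository folders for the schema files
   res := dbms_xdb.createfolder('/dbms_book');
   res := dbms_xdb.createfolder('/dbms_book/schemas');
   commit;
end;
/
```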
Next, save the test.xsd file into the directory created above, by using Windows
Explorer, for example, and register the schema using the registerschema procedure.
--test.xsd file
<test
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:noNamespaceSchemaLocation=
      "http://localhost:8080/dbms_books/schemas/test.xsd">
begin
dbms_xmlschema.registerschema(
schemaURL => 'http://localhost:8080/dbms_book/schemas/test.xsd',
schemaDoc => sys.UriFactory.getUri('/dbms_book/schemas/test.xsd'));
end;
/
begin
dbms_xmlschema.deleteschema(schemaURL =>
'http://localhost:8080/dbms_book/schemas/test.xsd',
delete_option => dbms_xmlschema.delete_cascade_force);
end;
/
Even after we use the deleteschema procedure, the XML schema will still have some
information in the data dictionary. To completely remove an XML schema from any
dictionary table, use purgeschema. The schema must have been registered for binary
encoding and deleted using the hide mode.
Connected to:
Oracle 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP, Data Mining,
Oracle Database Vault and Real Application Testing options
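A purge along those lines can be sketched as follows. The lookup assumes the test.xsd registration shown earlier and that the binary-encoding and hide-mode deletion prerequisites have been met; verify the exact purgeschema signature against your release's documentation.

```sql
declare
   l_oid raw(16);
begin
   -- fetch the internal schema id of the deleted (hidden) schema
   select schema_id
     into l_oid
     from dba_xml_schemas
    where schema_url = 'http://localhost:8080/dbms_book/schemas/test.xsd';

   dbms_xmlschema.purgeschema(l_oid);
end;
/
```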
Check the dba_xml_schemas again; the row whose schema_id was just purged will not be
found.
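The verification query might look like this; once the schema has been purged, no rows should be returned.

```sql
select owner, schema_url
  from dba_xml_schemas
 where schema_url like '%test.xsd';
```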
Now move on to the next and final package, which pertains not to XML but to the
explain plan.
Summary
This last chapter of the book has covered packages pertaining to Oracle HTML
Database or Application Express and Oracle XML Database functionalities.
Packages presented in this chapter are used to create, delete and manipulate many
different types of objects and documents in both XML and HTML environments.
Book Summary
After we finish watching a show at the theatre or opera, we just have an image of the
final production. When we start to study or understand what happens behind the
scenes, we start to give much more value to how much work has gone into it.
Although command line tools are used less and less often, this book gives the reader
more detail through command line examples, showing what happens in the background
when graphical user interfaces are used by a database administrator or a developer.
Packages presented in this book are part of the most interesting fields of Oracle
Database Administration; for instance, performance and tuning, backup and recovery,
tablespace management, security, concurrency, Real Application Cluster, Data Guard,
Streams and others. It includes packages from different Oracle Database versions and
also covers the latest release, Oracle 11g Release 2.
Paulo greatly enjoys what he does and is always improving his technical
knowledge by attending events like Oracle Open World in San Francisco
(2005, 2006 and 2011), IBM Information on Demand in Los Angeles (2006)
and the Burleson Oracle RAC Cruise in 2009.
Performance?
BC is a leading provider of Remote Oracle Database Healthchecks
Burleson Consulting
800.766.1884
www.dba-oracle.com
Made in the USA
Lexington, KY
12 December 2014
Database/Oracle
Knowing when and how to use DBMS packages is key to a DBA's success in
administering and monitoring the database. This book contains information on
how to determine which Oracle packages are best to use in real world situations.
The examples supplied in this book will be a powerful tool for any experienced
DBA or developer interested in escaping from a reliance on queries or a GUI
interface. This book is not intended for beginners.
Key Features:
■ Understand how to automate database management with Oracle DBMS packages.
■ Learn how to write scripts that replicate the functionality of OEM screens.
■ Learn how to use DBMS packages when a GUI interface is not available.
■ Read about the DBMS packages used by the best DBAs.