LogMiner
This chapter describes LogMiner functionality as it is used from the command line. You also have the
option of accessing LogMiner functionality through the Oracle LogMiner Viewer graphical user
interface (GUI). The LogMiner Viewer is a part of Oracle Enterprise Manager.
LogMiner can also be used to make queries based on actual data values in the redo logs. See
Extracting Actual Data Values from Redo Logs for details.
Detecting and, whenever possible, correcting user error, which is a more likely scenario than
logical corruption. User errors include deleting the wrong rows because of incorrect values in a
WHERE clause, updating rows with incorrect values, dropping the wrong index, and so forth.
Determining what actions you would have to take to perform fine-grained recovery at the
transaction level. If you fully understand and take into account existing dependencies, it may be
possible to perform a table-based undo operation to roll back a set of changes. Normally you
would have to restore the table to its previous state, and then apply an archived redo log to roll it
forward.
Performance tuning and capacity planning through trend analysis. You can determine which
tables get the most updates and inserts. That information provides a historical perspective on
disk access statistics, which can be used for tuning purposes.
Performing post-auditing. The redo logs contain all the information necessary to track any DML
and DDL statements executed on the database, the order in which they were executed, and who
executed them.
The type of change made to the database (INSERT, UPDATE, DELETE, or DDL).
The SCN at which a change was made (SCN column).
The SCN at which a change was committed (COMMIT_SCN column).
The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN columns).
The table and schema name of the modified object (SEG_NAME and SEG_OWNER columns).
The name of the user who issued the DDL or DML statement to make the change (USERNAME
column).
Reconstructed SQL statements showing SQL that is equivalent (but not necessarily identical) to
the SQL used to generate the redo records (SQL_REDO column). If a password is part of the
statement in a SQL_REDO column, the password is encrypted.
Reconstructed SQL statements showing the SQL statements needed to undo the change
(SQL_UNDO column). SQL_UNDO columns that correspond to DDL statements are always
NULL. Similarly, the SQL_UNDO column may be NULL for some datatypes and for rolled back
operations.
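Taken together, these columns let you see what changed, when, and by whom. For example, once
LogMiner has been started (as described later in this chapter), a minimal query sketch such as the
following (the SEG_OWNER filter value is illustrative) returns several of these columns for one schema:
SQL> SELECT SCN, COMMIT_SCN, USERNAME, XIDUSN, XIDSLT, XIDSQN,
  2  SEG_OWNER, SEG_NAME, OPERATION, SQL_REDO, SQL_UNDO
  3  FROM V$LOGMNR_CONTENTS
  4  WHERE SEG_OWNER = 'SCOTT';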
The redo logs contain internally generated numerical identifiers to identify tables and their associated
columns. To reconstruct SQL statements, LogMiner needs to know how the internal identifiers map to
user-defined names. This mapping information is stored in the data dictionary for the database.
LogMiner provides a procedure (DBMS_LOGMNR_D.BUILD) that lets you extract the data dictionary.
Redo Logs
When you run LogMiner, you specify the names of redo logs that you want to analyze. LogMiner
retrieves information from those redo logs and returns it through the V$LOGMNR_CONTENTS view. To
ensure that the redo logs contain information of value to you, you must enable at least minimal
supplemental logging. See Supplemental Logging.
You can then use SQL to query the V$LOGMNR_CONTENTS view, as you would any other view. Each
select operation that you perform against the V$LOGMNR_CONTENTS view causes the redo logs to be
read sequentially.
Keep the following things in mind about redo logs:
The redo logs must be from a release 8.0 or later Oracle database. However, several of the
LogMiner features introduced as of release 9.0.1 only work with redo logs produced on an
Oracle9i or later database. See Restrictions.
Support for LOB and LONG datatypes is available as of release 9.2, but only for redo logs
generated on a release 9.2 Oracle database.
The redo logs must use a database character set that is compatible with the character set of the
database on which LogMiner is running.
In general, the analysis of redo logs requires a dictionary that was generated from the same
database that generated the redo logs.
If you are using a dictionary that is in flat file format or that is stored in the redo logs, then the
redo logs you want to analyze can be from either the database on which LogMiner is running or
from other databases.
If you are using the online catalog as the LogMiner dictionary, you can only analyze redo logs
from the database on which LogMiner is running.
LogMiner must be running on the same hardware platform that generated the redo logs being
analyzed. However, it does not have to be on the same system.
It is important to specify the correct redo logs when running LogMiner. If you omit redo logs
that contain some of the data you need, you will get inaccurate results when you query the
V$LOGMNR_CONTENTS view.
To determine which redo logs are being analyzed in the current LogMiner session you can look at the
V$LOGMNR_LOGS view, which contains one row for each redo log.
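For example, the following query (a minimal sketch) lists those redo logs by filename:
SQL> SELECT FILENAME FROM V$LOGMNR_LOGS;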
See Also:
"Specify Redo Logs for Analysis"
Dictionary Options
To fully translate the contents of redo logs, LogMiner requires access to a database dictionary.
LogMiner uses the dictionary to translate internal object identifiers and datatypes to object names and
external data formats. Without a dictionary, LogMiner returns internal object IDs and presents data as
hex bytes.
For example, instead of the SQL statement:
INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);
LogMiner will display:
insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));
A LogMiner dictionary file contains information that identifies the database it was created from and the
time it was created. This information is used to validate the dictionary against the selected redo logs,
automatically detecting any mismatch between LogMiner's internal dictionary and the redo logs.
The dictionary file must have the same database character set and be created from the same database as
the redo logs being analyzed. However, once the dictionary is extracted, you can use it to mine the redo
logs of that database in a separate database instance without being connected to the source database.
Extracting a dictionary file also prevents problems that can occur when the current data dictionary
contains only the newest table definitions. For instance, if a table you are searching for was dropped
sometime in the past, the current dictionary will not contain any references to it.
LogMiner gives you three choices for your source dictionary:
Extracting the Dictionary to a Flat File
Extracting a Dictionary to the Redo Logs
Using the Online Catalog
Extracting the Dictionary to a Flat File
When the dictionary is in a flat file, fewer system resources are used than when it is contained in the
redo logs. It is recommended that you regularly back up the dictionary extracts to ensure correct
analysis of older redo logs.
To extract database dictionary information to a flat file, use the DBMS_LOGMNR_D.BUILD procedure
with the STORE_IN_FLAT_FILE option.
Be sure that no DDL operations occur while the dictionary is being built.
The following steps describe how to extract a dictionary to a flat file (including extra steps you must
take if you are using Oracle8). Steps 1 through 4 are preparation steps. You only need to do them once,
and then you can extract a dictionary to a flat file as many times as you wish.
1. The DBMS_LOGMNR_D.BUILD procedure requires access to a directory where it can place the
dictionary file. Because PL/SQL procedures do not normally access user directories, you must
specify a directory for use by the DBMS_LOGMNR_D.BUILD procedure or the procedure will
fail. To specify a directory, set the initialization parameter, UTL_FILE_DIR, in the
init.ora file.
See Also:
Oracle9i Database Reference for more information about the init.ora file
For example, to set UTL_FILE_DIR to use /oracle/database as the directory where the
dictionary file is placed, enter the following in the init.ora file:
UTL_FILE_DIR = /oracle/database
Remember that for the changes to the init.ora file to take effect, you must stop and restart
the database.
2. For Oracle8 only. Otherwise, go to the next step: Use your operating system's copy command
to copy the dbmslmd.sql script, which is contained in the
$ORACLE_HOME/rdbms/admin directory on the Oracle8i database, to the same directory in
the Oracle8 database. For example, enter:
% cp /8.1/oracle/rdbms/admin/dbmslmd.sql /8.0/oracle/rdbms/admin/dbmslmd.sql
3. If the database is closed, use SQL*Plus to mount and then open the database whose redo logs
you want to analyze. For example, entering the STARTUP command mounts and opens the
database:
SQL> STARTUP
4. For Oracle8 only. Otherwise, go to the next step: Execute the copied dbmslmd.sql script
on the 8.0 database to install the DBMS_LOGMNR_D package. For example, enter:
@dbmslmd.sql
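5. Execute the DBMS_LOGMNR_D.BUILD procedure, specifying a filename for the dictionary and a
directory path name for the file. The following sketch (the filename dictionary.ora is an
illustrative choice) places the dictionary file in the /oracle/database directory set up in step 1:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora',
  2  '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);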
You could also specify a filename and location without specifying the STORE_IN_FLAT_FILE option.
Extracting a Dictionary to the Redo Logs
To extract a LogMiner dictionary to the redo logs, the database must be open and in ARCHIVELOG
mode, and archiving must be enabled.
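To do so, execute the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS
option, as in this sketch:
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);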
See Also:
Oracle9i Recovery Manager User's Guide for more information about
ARCHIVELOG mode
The process of extracting the dictionary to the redo logs does consume database resources, but if you
limit the extraction to off-peak hours, this should not be a problem and it is faster than extracting to a
flat file. Depending on the size of the dictionary, it may be contained in multiple redo logs. Provided
the relevant redo logs have been archived, you can find out which redo logs contain the start and end of
an extracted dictionary. To do so, query the V$ARCHIVED_LOG view, as follows:
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN='YES';
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE DICTIONARY_END='YES';
The names of the start and end redo logs, and possibly other logs in between them, are specified with
the ADD_LOGFILE procedure when you are preparing to start a LogMiner session.
It is recommended that you periodically back up the redo logs so that the information is saved and
available at a later date. Ideally, this will not involve any extra steps because if your database is being
properly managed, there should already be a process in place for backing up and restoring archived
redo logs. Again, because of the time required, it is good practice to do this during off-peak hours.
Using the Online Catalog
To direct LogMiner to use the dictionary currently in use for the database, specify the online catalog as
your dictionary source when you start LogMiner, as follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS =>
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
Using the online catalog means that you do not have to bother extracting a dictionary to a flat file or to
the redo logs. In addition to using the online catalog to analyze online redo logs, you can use it to
analyze archived redo logs provided you are on the same system that generated the archived redo logs.
The online catalog contains the latest information about the database and may be the fastest way to start
your analysis. Because DDL operations that change important tables are somewhat rare, the online
catalog generally contains the information you need for your analysis.
Remember, however, that the online catalog can only reconstruct SQL statements that are executed on
the latest version of a table. As soon as the table is altered, the online catalog no longer reflects the
previous version of the table. This means that LogMiner will not be able to reconstruct any SQL
statements that were executed on the previous version of the table. Instead, LogMiner generates
nonexecutable SQL in the SQL_REDO column (including hex-to-raw formatting of binary values)
similar to the following example:
insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
hextoraw('c306'));
With the DDL_DICT_TRACKING option set, LogMiner applies any DDL statements seen in the redo logs to its internal
dictionary. For example, to see all the DDLs executed by user SYS, you could issue the following
query:
SQL> SELECT USERNAME, SQL_REDO
2 FROM V$LOGMNR_CONTENTS
3 WHERE USERNAME = 'SYS' AND OPERATION = 'DDL';
The information returned might be similar to the following, although the actual information and how it
is displayed will be different on your screen.
USERNAME   SQL_REDO
SYS        ALTER TABLE SCOTT.ADDRESS ADD CODE NUMBER;
SYS        CREATE USER KATHY IDENTIFIED BY VALUES 'E4C8B920449B4C32' DEFAULT
           TABLESPACE TS1;
Keep the following in mind when you use the DDL_DICT_TRACKING option:
Note:
In general, it is a good idea to keep the DDL tracking feature enabled because if it
is not enabled and a DDL event occurs, LogMiner returns some of the redo data as
hex bytes. Also, a metadata version mismatch could occur.
Because LogMiner automatically assigns versions to the database metadata, it will detect and notify
you of any mismatch between its internal dictionary and the redo logs.
Note:
It is important to understand that the LogMiner internal dictionary is not the same
as the LogMiner dictionary contained in a flat file or in redo logs. LogMiner does
update its internal dictionary, but it does not update the dictionary that is contained
in a flat file or in redo logs.
Recommendations
Oracle Corporation recommends that you take the following into consideration when you are using
LogMiner:
All databases should employ an alternate tablespace for LogMiner tables. By default all
LogMiner tables are created to use the SYSTEM tablespace. Use the
DBMS_LOGMNR_D.SET_TABLESPACE routine to re-create all LogMiner tables in an alternate
tablespace. For example, the following statement will re-create all LogMiner tables to use the
logmnrts$ tablespace:
SQL> EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('logmnrts$');
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for a full description of
the DBMS_LOGMNR_D.SET_TABLESPACE routine
Restrictions
The following restrictions apply when you are using LogMiner:
The following are not supported:
Simple and nested abstract datatypes (ADTs)
Collections (nested tables and VARRAYs)
Object Refs
Index organized tables (IOTs)
CREATE TABLE AS SELECT of a table with a clustered key
LogMiner runs only on databases of release 8.1 or higher, but you can use it to analyze redo
logs from release 8.0 databases. However, the information that LogMiner is able to retrieve
from a redo log depends on the version of the log, not the version of the database in use. For
example, redo logs for Oracle9i can be augmented to capture additional information when
supplemental logging is enabled. This allows LogMiner functionality to be used to its fullest
advantage. Redo logs created with older releases of Oracle will not have that additional data and
may therefore have limitations on the operations and datatypes supported by LogMiner.
For example, the following features require that supplemental logging be turned on. (In Oracle9i
release 9.0.1, supplemental logging was always on; it was not available at all in releases prior to
9.0.1. As of release 9.2, you must specifically turn on supplemental logging; otherwise, it is not
enabled.)
Support for index clusters, chained rows, and migrated rows (for chained rows,
supplemental logging is required, regardless of the compatibility level to which the
database is set).
Support for direct-path inserts (also requires that ARCHIVELOG mode be enabled).
Extracting the data dictionary into the redo logs.
DDL tracking.
Generating SQL_REDO and SQL_UNDO with primary key information for updates.
LONG and LOB datatypes are supported only if supplemental logging is enabled.
See Also:
Supplemental Logging
When you specify the COMMITTED_DATA_ONLY option, LogMiner groups together all DML
operations that belong to the same transaction. Transactions are returned in the order in which they
were committed.
If long-running transactions are present in the redo logs being analyzed, use of this option may cause an
"Out of Memory" error.
The default is for LogMiner to show rows corresponding to all transactions and to return them in the
order in which they are encountered in the redo logs.
For example, suppose you start LogMiner without specifying COMMITTED_DATA_ONLY and you
execute the following query:
SQL> SELECT (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID,
2 USERNAME AS USR,
3 SQL_REDO AS SQL_REDO
4 FROM V$LOGMNR_CONTENTS;
The output would be as follows. Both committed and uncommitted transactions are returned and rows
from different transactions are interwoven.
XID       USR     SQL_REDO
1.5.123   SCOTT   SET TRANSACTION READ WRITE;
1.5.123   SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
                  VALUES (8782, 'Frost');
1.6.124   KATHY   SET TRANSACTION READ WRITE;
1.6.124   KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
                  VALUES (8839, 'Cummings', '415-321-1234');
1.6.124   KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
                  VALUES (7934, 'Yeats', '033-334-1234');
1.5.123   SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
                  VALUES (8566, 'Browning');
1.6.124   KATHY   COMMIT;
1.7.234   GOUTAM  SET TRANSACTION READ WRITE;
1.5.123   SCOTT   COMMIT;
1.7.234   GOUTAM  INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
                  VALUES (8499, 'Emerson', '202-334-1234');
Now suppose you start LogMiner, but this time you specify the COMMITTED_DATA_ONLY option. If
you executed the previous query again, the output would look as follows:
XID       USR     SQL_REDO
1.6.124   KATHY   SET TRANSACTION READ WRITE;
1.6.124   KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
                  VALUES (8839, 'Cummings', '415-321-1234');
1.6.124   KATHY   INSERT INTO "SCOTT"."CUSTOMER"("ID","NAME","PHONE_DAY")
                  VALUES (7934, 'Yeats', '033-334-1234');
1.6.124   KATHY   COMMIT;
1.5.123   SCOTT   SET TRANSACTION READ WRITE;
1.5.123   SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
                  VALUES (8782, 'Frost');
1.5.123   SCOTT   INSERT INTO "SCOTT"."EMP"("EMPNO","ENAME")
                  VALUES (8566, 'Browning');
1.5.123   SCOTT   COMMIT;
Because the commit for the 1.6.124 transaction happened before the commit for the 1.5.123
transaction, the entire 1.6.124 transaction is returned first. This is true even though the 1.5.123
transaction started before the 1.6.124 transaction. None of the 1.7.234 transaction is returned because a
commit was never issued for it.
Filtering Data By Time
If no STARTTIME or ENDTIME parameters are specified, the entire redo log is read from start to end,
for each SELECT statement issued.
The timestamps should not be used to infer ordering of redo records. You can infer the order of redo
records by using the SCN.
Filtering Data By SCN
The STARTSCN and ENDSCN parameters override the STARTTIME and ENDTIME parameters in
situations where all are specified.
If no STARTSCN or ENDSCN parameters are specified, the entire redo log is read from start to end, for
each SELECT statement issued.
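For example, the following sketch (the dictionary filename and SCN values are illustrative) starts
LogMiner restricted to an SCN range:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(
  2  DICTFILENAME => 'orcldict.ora',
  3  STARTSCN => 100,
  4  ENDSCN => 150);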
Querying V$LOGMNR_CONTENTS
LogMiner output is contained in the V$LOGMNR_CONTENTS view. After LogMiner is started, you can
issue SQL statements at the command line to query the data contained in V$LOGMNR_CONTENTS.
When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo logs are
read sequentially. Translated information from the redo logs is returned as rows in the
V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup are met
or the end of the redo log is reached.
LogMiner returns all the rows in SCN order unless you have used the COMMITTED_DATA_ONLY
option to specify that only committed transactions should be retrieved. SCN order is the order normally
applied in media recovery.
For example, suppose you wanted to find out about any delete operations that a user named Ron had
performed on the scott.orders table. You could issue a query similar to the following:
SQL> SELECT OPERATION, SQL_REDO, SQL_UNDO
2 FROM V$LOGMNR_CONTENTS
3 WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'ORDERS' AND
4 OPERATION = 'DELETE' AND USERNAME = 'RON';
The following output would be produced. The formatting may be different on your display than that
shown here.
OPERATION  SQL_REDO  SQL_UNDO
---------  --------  --------
DELETE
DELETE
This output shows that user Ron deleted two rows from the scott.orders table. The reconstructed
SQL statements are equivalent, but not necessarily identical, to the actual statement that Ron issued.
The reason for this is that the original WHERE clause is not logged in the redo logs, so LogMiner can
only show deleted (or updated or inserted) rows individually.
Therefore, even though a single DELETE statement may have been responsible for the deletion of both
rows, the output in V$LOGMNR_CONTENTS does not reflect that. Thus, the actual DELETE statement
may have been DELETE FROM SCOTT.ORDERS WHERE EXPR_SHIP = 'Y' or it might have
been DELETE FROM SCOTT.ORDERS WHERE QTY < 8.
SQL statements that are reconstructed when the PRINT_PRETTY_SQL option is enabled are not
executable because they do not use standard SQL syntax.
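Actual data values can be queried with the DBMS_LOGMNR.MINE_VALUE function. The following
sketch (the doubled-salary condition is illustrative) returns the redo for updates that more than doubled
the value of the SCOTT.EMP.SAL column:
SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
  2  WHERE SEG_OWNER = 'SCOTT' AND SEG_NAME = 'EMP' AND
  3  DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'SCOTT.EMP.SAL') >
  4  2 * DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'SCOTT.EMP.SAL');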
As shown in this example, the MINE_VALUE function takes two arguments. The first one specifies
whether to mine the redo (REDO_VALUE) or undo (UNDO_VALUE) portion of the data. The second
argument is a string that specifies the fully-qualified name of the column to be mined (in this case,
SCOTT.EMP.SAL). The MINE_VALUE function always returns a string that can be converted back to
the original datatype.
Supplemental Logging
Redo logs are generally used for instance recovery and media recovery. The data needed for such
operations is automatically recorded in the redo logs. However, a redo-based application may require
that additional information be logged in the redo logs. The following are examples of situations in
which supplemental data may be needed:
An application that wanted to apply the reconstructed SQL statements to a different database
would need to identify the update statement by its primary key, not by its ROWID, which is the
usual method used by LogMiner. (Primary keys are not, by default, logged in the redo logs
unless the key itself is changed by the update.)
To make tracking of row changes more efficient, an application may require that the before
image of the whole row be logged, not just the modified columns.
The default behavior of the Oracle database server is to not provide any supplemental logging at all,
which means that certain features will not be supported (see Restrictions). If you want to make full use
of LogMiner support, you must enable supplemental logging.
The use of LogMiner with minimal supplemental logging enabled does not have any significant
performance impact on the instance generating the redo logs. However, the use of LogMiner with
database-wide supplemental logging enabled does impose significant overhead and affects
performance.
There are two types of supplemental logging: database supplemental logging and table supplemental
logging. Each of these is described in the following sections.
Note:
In LogMiner release 9.0.1, minimal supplemental logging was the default
behavior. In release 9.2, the default is no supplemental logging. It must be
specifically enabled.
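To enable minimal supplemental logging, execute the following statement:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;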
Identification key logging enables database-wide before-image logging of primary keys or unique
indexes (in the absence of primary keys) for all updates. With this type of logging, an application can
identify updated rows logically rather than resorting to ROWIDs.
Identification key logging is necessary when supplemental log data will be the source of change in
another database, such as a logical standby.
To enable identification key logging, execute the following statement:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE INDEX)
COLUMNS;
This statement results in all primary key values, database-wide, being logged regardless of whether or
not any of them are modified.
If a table does not have a primary key, but has one or more non-null unique key constraints, one of the
constraints is chosen arbitrarily for logging as a means of identifying the row getting updated.
If the table has neither a primary key nor a unique index, then all columns except LONG and LOB are
supplementally logged. Therefore, Oracle Corporation recommends that when you use supplemental
logging, all or most tables be defined to have primary or unique keys.
Note:
Regardless of whether or not identification key logging is enabled, the SQL
statements returned by LogMiner always contain the ROWID clause. You can filter
out the ROWID clause by using the RTRIM function and appropriate arguments on
the reconstructed SQL statement.
To disable either minimal or identification key logging, execute the following statement:
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
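Table Supplemental Logging
Table supplemental logging uses log groups to log additional columns of a specific table. For example,
a statement such as the following sketch (matching the description below) creates an unconditional log
group; the ALWAYS clause causes the listed columns to be logged for every update:
SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_parttime (empno, ename, deptno) ALWAYS;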
This creates a log group named emp_parttime on scott.emp that consists of the columns
empno, ename, and deptno. These columns will be logged every time an UPDATE statement is
executed on scott.emp, regardless of whether or not the update affected them. If you wanted to have
the entire row image logged any time an update was made, you could create a log group that contained
all the columns in the table.
Note:
LOBs, LONGs, and ADTs cannot be part of a log group.
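Omitting the ALWAYS clause creates a conditional log group, as in the following sketch:
SQL> ALTER TABLE scott.emp
  2  ADD SUPPLEMENTAL LOG GROUP emp_fulltime (empno, ename, deptno);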
This creates a log group named emp_fulltime on scott.emp. Just like the previous example, it
consists of the columns empno, ename, and deptno. But because the ALWAYS clause was omitted,
before images of the columns will be logged only if at least one of the columns is updated.
Usage Notes for Log Groups
Keep the following in mind when you use log groups:
A column can belong to more than one log group. However, the before image of the columns
gets logged only once.
Redo logs do not contain any information about which log group a column is part of or whether
a column's before image is being logged because of log group logging or identification key
logging.
If you specify the same columns to be logged both conditionally and unconditionally, the
columns are logged unconditionally.
To run LogMiner, you use the DBMS_LOGMNR PL/SQL package. Additionally, you might also use the
DBMS_LOGMNR_D package if you choose to extract a dictionary rather than use the online catalog.
The DBMS_LOGMNR package contains the procedures used to initialize and run LogMiner, including
interfaces to specify names of redo logs, filter criteria, and session characteristics. The
DBMS_LOGMNR_D package queries the dictionary tables of the current database to create a LogMiner
dictionary file.
The LogMiner packages are owned by the SYS schema. Therefore, if you are not connected as user
SYS, you must include SYS in your call. For example:
EXECUTE SYS.DBMS_LOGMNR.END_LOGMNR
See Also:
Oracle9i Supplied PL/SQL Packages and Types Reference for details about
syntax and parameters for these LogMiner packages
Oracle9i Application Developer's Guide - Fundamentals for information
about executing PL/SQL procedures
Extract a Dictionary
To use LogMiner you must supply it with a dictionary by doing one of the following:
Extract database dictionary information to a flat file. See Extracting the Dictionary to a Flat
File.
Extract database dictionary information to the redo logs. See Extracting a Dictionary to the
Redo Logs.
Specify use of the online catalog by using the DICT_FROM_ONLINE_CATALOG option when
you start LogMiner. See Using the Online Catalog.
Note:
If you will be mining in the same instance that is generating the redo logs, you
only need to specify one archived redo log and the CONTINUOUS_MINE option
when you start LogMiner. See Continuous Mining.
Specify Redo Logs for Analysis
1. Use SQL*Plus to start an Oracle instance, with the database either mounted or unmounted. For
example, enter:
SQL> STARTUP
2. Create a list of redo logs. Specify the NEW option of the DBMS_LOGMNR.ADD_LOGFILE
procedure to signal that this is the beginning of a new list. For example, enter the following to
specify /oracle/logs/log1.f:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
  2  LOGFILENAME => '/oracle/logs/log1.f',
  3  OPTIONS => DBMS_LOGMNR.NEW);
3. If desired, add more redo logs by specifying the ADDFILE option of the
DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to add
/oracle/logs/log2.f:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
  2  LOGFILENAME => '/oracle/logs/log2.f',
  3  OPTIONS => DBMS_LOGMNR.ADDFILE);
The OPTIONS parameter is optional when you are adding additional redo logs. For example,
you could simply enter the following:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
  2  LOGFILENAME=>'/oracle/logs/log2.f');
Continuous Mining
The continuous mining option is useful if you are mining in the same instance that is generating the
redo logs. When you plan to use the continuous mining option, you only need to specify one archived
redo log before starting LogMiner. Then, when you start LogMiner, specify the CONTINUOUS_MINE
option (see the sketch following the note below).
Note:
Continuous mining is not available in a Real Application Clusters environment.
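A sketch of such a startup call, here combining continuous mining with the online catalog as the
dictionary source (your dictionary option may differ):
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS =>
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
  3  DBMS_LOGMNR.CONTINUOUS_MINE);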
Start LogMiner
1. Start LogMiner by executing the DBMS_LOGMNR.START_LOGMNR procedure.
If you are not specifying a flat file dictionary name, then use the OPTIONS parameter to specify
either the DICT_FROM_REDO_LOGS or DICT_FROM_ONLINE_CATALOG option.
If you specify DICT_FROM_REDO_LOGS, LogMiner expects to find a dictionary in the redo
logs that you specified with the DBMS_LOGMNR.ADD_LOGFILE procedure. To determine
which redo logs contain a dictionary, look at the V$ARCHIVED_LOG view. See Extracting a
Dictionary to the Redo Logs for an example.
Note:
If you add additional redo logs after your LogMiner session has been started, you
must restart LogMiner. You can specify new startup parameters if desired.
Otherwise, LogMiner uses the parameters you specified for the previous session.
For more information on using the online catalog, see Using the Online Catalog.
2. Optionally, you can filter your query by time or by SCN. See Filtering Data By Time or
Filtering Data By SCN.
3. You can also use the OPTIONS parameter to specify additional characteristics of your
LogMiner session. For example, you might decide to use the online catalog as your dictionary
and to have only committed transactions shown in the V$LOGMNR_CONTENTS view, as
follows:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS =>
  2  DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
  3  DBMS_LOGMNR.COMMITTED_DATA_ONLY);
For a summary of the LogMiner settings that you can specify with the OPTIONS parameter, see the
DBMS_LOGMNR.START_LOGMNR description in Oracle9i Supplied PL/SQL Packages and Types Reference.
Query V$LOGMNR_CONTENTS
At this point, LogMiner is started and you can perform queries against the V$LOGMNR_CONTENTS
view. See Querying V$LOGMNR_CONTENTS for examples of this.
End the LogMiner Session
To properly end your LogMiner session, execute the DBMS_LOGMNR.END_LOGMNR procedure:
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;
This procedure closes all the redo logs and allows all the database and system resources allocated by
LogMiner to be released.
If this procedure is not executed, LogMiner retains all its allocated resources until the end of the Oracle
session in which it was invoked. It is particularly important to use this procedure to end LogMiner if
either the DDL_DICT_TRACKING option or the DICT_FROM_REDO_LOGS option was used.
Step 1: Creating the Dictionary File
To use LogMiner to analyze joedevo's data, you must either create a dictionary file before joedevo
makes any changes or specify use of the online catalog at LogMiner startup. See Extract a Dictionary
for examples of creating dictionaries.
Step 2: Adding Redo Logs
Assume that joedevo has made some changes to the database. You can now specify the names of the
redo logs that you want to analyze, as follows:
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
  2  LOGFILENAME => 'log1orc1.ora',
  3  OPTIONS => DBMS_LOGMNR.NEW);
Step 3: Starting LogMiner and Limiting the Search Range
Start LogMiner and limit the search to the specified time range:
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(
  2  DICTFILENAME => 'orcldict.ora',
  3  STARTTIME => TO_DATE('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'),
  4  ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
Step 4: Querying V$LOGMNR_CONTENTS
At this point, the V$LOGMNR_CONTENTS view is available for queries. You decide to find all of the
changes made by user joedevo to the salary table. Execute the following SELECT statement:
SQL> SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS
2 WHERE USERNAME = 'joedevo' AND SEG_NAME = 'salary';
For both the SQL_REDO and SQL_UNDO columns, two rows are returned (the format of the data
display will be different on your screen). You discover that joedevo requested two operations: he
deleted his old salary and then inserted a new, higher salary. You now have the data necessary to undo
this operation.
SQL_REDO
--------
delete from SALARY
where EMPNO = 12345
and ROWID = 'AAABOOAABAAEPCABA';

SQL_UNDO
--------
insert into SALARY(NAME, EMPNO, SAL)
values ('JOEDEVO', 12345, 500)

2 rows selected
2. Query the V$LOGMNR_CONTENTS view to determine which tables were modified in the time
range you specified, as shown in the following example. (This query filters out system tables
that traditionally have a $ in their name.)
SQL> SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS Hits FROM
2 V$LOGMNR_CONTENTS WHERE SEG_NAME NOT LIKE '%$' GROUP BY
3 SEG_OWNER, SEG_NAME;
3. The following data is displayed. (The format of your display may be different.)
SEG_OWNER  SEG_NAME   Hits
---------  ---------  ----
CUST       ACCOUNT     384
SCOTT      EMP          12
SYS        DONOR        12
UNIV       DONOR       234
UNIV       EXECDONOR   325
UNIV       MEGADONOR    32
The values in the Hits column show the number of times that the named table had an insert,
delete, or update operation performed on it during the two-week period specified in the query.