11g_notes
1 Data Recovery Advisor in Oracle Database 11g ( can also be used if the current environment doesn't
have an RMAN configuration )
4 Undo Optimization ( speeds up the backup by skipping undo blocks that are not required for
recovery )
5 RMAN multisection backups ( speed up the backup of big datafiles )
6 Archived Redo Log Failover ( if an archived log is corrupt, RMAN looks for a valid copy in one of the
multiple archive destinations )
2 Fast mirror resync ( resyncs a disk quickly by writing only the modified extents after a disk goes
offline and comes back )
3 Preferred Read Failure Groups ( increases the speed of read operations by reading from the nearest
disk in a failure group setup )
=====================================================================
E ) Health Monitor checks ( detect file corruptions, physical and logical block corruptions, data
dictionary corruptions, and more )
F ) Database Replay ( capture a production workload and replay it on a test database )
H ) Temporary Tablespace Enhancements ( we can shrink the temporary tablespace and we have the
new view DBA_TEMP_FREE_SPACE )
I ) Table Compression ( OLTP-level compression )
J ) Read-Only Tables ( we can make a table read only; it restricts both DDL and DML operations )
K ) Query Result Cache ( caches the result of a query in the result cache area of the SGA;
subsequent executions get the data directly from the cache )
L ) DDL with the WAIT Option (DDL_LOCK_TIMEOUT) ( sets the number of seconds to wait to
acquire a lock, to prevent the ORA-00054: resource busy error )
M ) Password Case Sensitivity in 11g ( security )
P ) Flashback features
Q ) Performance features
1) Automatic Memory Management ( MEMORY_TARGET / MEMORY_MAX_TARGET )
2) Pending statistics ( we can keep newly gathered stats in a pending state until we publish them )
3) Adaptive Cursor Sharing ( unlike previous versions, we now have execution plans based on
bind variable values )
4) ADDM enhancements ( global analysis on all instances of an Oracle RAC cluster )
R ) Partitioning enhancements
Compatibility Matrix:
Refer to the following document and link to upgrade the database to 11g.
Metalink Note ID: 429825.1
The Data Recovery Advisor automatically diagnoses corruption or loss of persistent data on disk,
determines the appropriate repair options, and executes repairs at the user's request. This reduces the
complexity of the recovery process.
Oracle RMAN lists the failures, advises on the repair options, and also generates a script to fix the
failure, which we can run using RMAN commands.
Note : If RMAN backups are not configured in the existing environment, you can get the same
information by connecting RMAN to the target database, which uses the control file, as shown below.
$ rman target /
RMAN>
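The typical Data Recovery Advisor command sequence is shown below ( REPAIR FAILURE PREVIEW displays the repair script without running it ):

RMAN> LIST FAILURE;
RMAN> ADVISE FAILURE;
RMAN> REPAIR FAILURE PREVIEW;
RMAN> REPAIR FAILURE;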
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/db11g/DB11G/hm/reco_2408143298.hm
3) Active Database Duplication
From 11g, RMAN can duplicate a database directly from an active target database over the network
( DUPLICATE ... FROM ACTIVE DATABASE ), without needing any existing backups. The
NOFILENAMECHECK option is used when the duplicate database reuses the same file names on a
different host.
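A minimal sketch of the command, assuming a source database db11g and an auxiliary instance dup1 ( the names are illustrative ):

$ rman TARGET sys/password@db11g AUXILIARY sys/password@dup1
RMAN> DUPLICATE TARGET DATABASE TO dup1
2> FROM ACTIVE DATABASE
3> SPFILE
4> NOFILENAMECHECK;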
2) RMAN Virtual Private Catalog (VPC)
VPC is a subset of the base recovery catalog. It allows us to use additional security features:
we can grant a specific user access to view the backup details of only specific databases in
the catalog.
Practical log :
• Log into SQL*Plus as SYS and create a user with the RECOVERY_CATALOG_OWNER role
• Log into RMAN as the base recovery catalog owner and grant the new user access on the
relevant databases
• Grant the right for the VPC owner to register new target databases
• Log into RMAN as the VPC owner and issue the CREATE VIRTUAL CATALOG command
A consolidated sketch of these steps is shown below.
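Here the user vpcowner, the catalog database catdb and the target db11g are illustrative names:

$ sqlplus / as sysdba
SQL> CREATE USER vpcowner IDENTIFIED BY vpcowner QUOTA UNLIMITED ON users;
SQL> GRANT RECOVERY_CATALOG_OWNER TO vpcowner;
SQL> EXIT;

$ rman CATALOG rman/rman@catdb
RMAN> GRANT CATALOG FOR DATABASE db11g TO vpcowner;
RMAN> GRANT REGISTER DATABASE TO vpcowner;
RMAN> EXIT;

$ rman CATALOG vpcowner/vpcowner@catdb
RMAN> CREATE VIRTUAL CATALOG;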
4) Undo Optimization
The BACKUP command no longer backs up undo that is not needed for recovery. As the majority
of the undo tablespace is filled with undo generated for transactions that have subsequently been
committed, this can represent a substantial saving.
5) Multisection Backups
Up to 10g, RMAN backs up each datafile as a single unit. From 11g, if we have a big datafile ( say
1 TB ), RMAN can back it up in parallel by splitting the same datafile into multiple sections of a
specified size. This speeds up the backup.
Example :
• If SECTION SIZE is larger than the size of the file, RMAN does not use multisection backup
for that file
• If SECTION SIZE would produce more than 256 sections, RMAN will increase the section size
so that no more than 256 sections are produced
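For example, the following command backs up a tablespace in 500 MB sections ( the size and tablespace name are illustrative ):

RMAN> BACKUP SECTION SIZE 500M TABLESPACE users;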
7) Archived Log Deletion Policy
The archived log deletion policy of Oracle 11g has been extended to give greater flexibility and protection
in a Data Guard environment. Typical settings are shown below.
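In 10g the policy could only be based on whether a log had been applied on a standby; 11g adds a backup-based policy. For example ( the count and device type are illustrative ):

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;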
8) IMPORT CATALOG
Oracle 11g has introduced the IMPORT CATALOG command to allow recovery catalogs to be merged or
moved. Connect to the destination catalog and issue the IMPORT CATALOG command, specifying the
owner of the source catalog.
Example :
$ rman
RMAN> CONNECT CATALOG rman2/rman2
RMAN> IMPORT CATALOG rman@db11g;
RMAN>
Each target imported is unregistered from the source catalog. The import can be limited to a subset of the
catalog by specifying the DBID or DB_NAME of each target to import.
RMAN> IMPORT CATALOG rman@db11g DBID=1423241, 1423242;
RMAN> IMPORT CATALOG rman@db11g DB_NAME=prod3, prod4;
The version of the source catalog must match that of the RMAN executable for the import to be
successful.
To move an entire catalog to a new server, simply create a user on the new server to act as the catalog
owner, create a catalog and import the contents of the existing catalog into it.
$ sqlplus / as sysdba
SQL> CREATE USER rman2 IDENTIFIED BY rman2 QUOTA UNLIMITED ON rman_ts;
SQL> GRANT RECOVERY_CATALOG_OWNER TO rman2;
SQL> EXIT;
$ rman catalog=rman2/rman2
RMAN> CREATE CATALOG;
RMAN> IMPORT CATALOG rman@db11g;
If flashback logs are present, RMAN will use these in preference to backups during block media recovery
(BMR), which can significantly improve BMR speed.
RMAN now supports the ZLIB binary compression algorithm as part of the Oracle Advanced Compression
option. The ZLIB algorithm is optimized for CPU efficiency, but produces larger zip files than the BZIP2
algorithm available previously, which is optimized for compression. The choice of compression algorithm
is set using the CONFIGURE command.
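For example ( BZIP2 remains the default ):

RMAN> CONFIGURE COMPRESSION ALGORITHM 'ZLIB';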
Block change tracking is now supported on physical standby databases, which in turn means fast
incremental backups are now possible on standby databases.
Substitution variables can now be used in RMAN command scripts in a similar manner to SQL*Plus
scripts. For example, the following command script requires a tag name to be entered for each backup
run.
CONNECT TARGET /
BACKUP DATABASE TAG '&1';
BACKUP ARCHIVELOG ALL TAG '&2';
EXIT;
Notice the "&1" and "&2" placeholders. Assuming this were saved with a filename of
"/tmp/backup.cmd", it might be called with the following syntax.
$ rman @'/tmp/backup.cmd' USING DB_20070108 ARCH_20070108
Notice the use of the USING keyword, which accepts a space-separated list of values that are substituted
for the placeholders.
1) ROLLING UPGRADE:
Prior to 11g, when we wanted to upgrade or patch ASM instances in a cluster, we had to bring
down all the ASM instances, because all of them had to be upgraded at once.
From 11g onwards we can upgrade individual ASM instances in a cluster without bringing down
the whole ASM cluster.
Note 1: Using this feature we upgrade one ASM node at a time. It means in a cluster the ASM
instances are upgraded one by one, not all at once.
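A rolling upgrade is bracketed by the following commands, issued against the ASM instances ( the version string is illustrative ):

SQL> ALTER SYSTEM START ROLLING MIGRATION TO '11.1.0.7.0';
-- Upgrade each ASM instance in turn, then:
SQL> ALTER SYSTEM STOP ROLLING MIGRATION;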
2) FAST MIRROR RESYNC:
During transient disk failures within a failure group, ASM keeps track of the changed
extents that need to be applied to the offline disk. Once the disk is available, only the changed extents are
written to resynchronize the disk, rather than overwriting the contents of the entire disk. This can speed
up the resynchronization process considerably.
Fast mirror resync is only available when the disk group compatibility attributes are set to 11.1 or higher.
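A sketch of the related commands ( the disk group name and repair timeout are illustrative ):

SQL> ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1';
SQL> ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '3.6h';
-- After the transient failure is fixed, bring the offline disks back:
SQL> ALTER DISKGROUP data ONLINE ALL;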
3) PREFERRED READ FAILURE GROUPS:
Oracle 11g allows each node to define a preferred read failure group.
Example : Suppose a node has mirrored disks, and for failure protection we have placed one disk in
the same building and the other disk in another building or another city. Here the nearest disk for
read operations is the one in the same building, so we can set the failure group containing that disk
as the preferred read failure group to avoid network traffic.
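This is set per instance with the ASM_PREFERRED_READ_FAILURE_GROUPS parameter, whose value is a list of diskgroup.failgroup names ( the names below are illustrative ):

SQL> ALTER SYSTEM SET asm_preferred_read_failure_groups = 'data.fg1';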
The new asmcmd "cp" command copies ASM files between ASM and the operating system, or between
ASM instances.
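For example ( the file name is illustrative ):

ASMCMD> cp +DATA/db11g/datafile/users.273.661514191 /tmp/users.dbf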
ADR Definitions :
• ADR Base
– The root directory for all diagnostic data, set by the DIAGNOSTIC_DEST initialization parameter
– If the parameter is not defined, the database will set a default value derived from ORACLE_BASE
• ADR Home
– The root directory for the diagnostic data of a single instance or component, located under the ADR base
Each ADR home contains a subdirectory for each type of diagnostic data, such as alert, trace, incident, hm and cdump.
We can check or open all the trace information of multiple databases from one location using the adrci
utility, as below.
1: diag/diagtool/user_oracle/host_192545652_11
2: diag/tnslsnr/ptc-hp00/listener1
3: diag/tnslsnr/ptc-hp00/listener9
4: diag/tnslsnr/ptc-hp00/listener
5: diag/tnslsnr/ptc3-vl4/listener
6: diag/rdbms/ptc/ptc
7: diag/rdbms/dup1/DUP1
8: diag/asm/+asm/+ASM
9: diag/clients/user_unknown/host_411310321_11
10: diag/clients/user_oracle/host_2779827455_11
11: diag/clients/user_oracle/host_192545652_11
Q: to quit
adrci>
• Detect file corruptions, physical and logical block corruptions, data dictionary corruptions,
and more
• Checks can run in two modes:
– Reactive – run automatically by the fault diagnosability infrastructure when a critical error occurs
– Manual – the DBA can run health checks manually using the EM interface or the DBMS_HM package
• Health check findings and recommendations are stored in the ADR ( defined by the
parameter DIAGNOSTIC_DEST )
The available checks are listed in the NAME column of the V$HM_CHECK view. For example, to run a
check manually:

SQL> BEGIN
  2    DBMS_HM.run_check('Dictionary Integrity Check', 'my_run');
  3  END;
  4  /
We can check the results by querying the following views, or by using the adrci utility:
V$HM_RUN
V$HM_FINDING
V$HM_RECOMMENDATION
Checking using the adrci utility :
First, find the run ID of the check from SQL*Plus:

SQL> SELECT run_id FROM v$hm_run WHERE name = 'my_run';

RUN_ID
----------
10871
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@ptc-hp12 ~]$ adrci
adrci> help
HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL
There are other commands intended to be used directly by Oracle, type
"HELP EXTENDED" to see the list
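A Health Monitor run can also be inspected from adrci ( using the run name from the example above ):

adrci> show hm_run
adrci> create report hm_run my_run
adrci> show report hm_run my_run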
F ) Database Replay:
It captures the workload in one database and replays the same workload in another ( test ) database.
Suppose we have a performance issue in an Oracle 11g production database for a particular job or a
particular duration. We can capture the transactions performed in that duration or for that job, and once
the capture is complete, replay the same transactions in the other test database to find the root
cause.
Note : The process of capturing workloads will impact the performance of the database. The degree of
overhead incurred is highly dependent on the types of transactions being executed. This figure could be
as high as 30%, so this is a process that you would want to monitor carefully.
What is Database Replay?
• Useful for:
– Database changes
– Tuning changes
• Create an operating system directory to hold the capture files:
$ mkdir /u01/app/oracle/db_replay_capture
• Next, we create a directory object pointing to the new directory.
• The calibration step suggests a single client on a single CPU is enough, so we only need
to start a single replay client.
• Start the replay:
BEGIN
  DBMS_WORKLOAD_REPLAY.start_replay;
END;
/
• If you need to stop the replay before it is complete, call the CANCEL_REPLAY procedure.
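The surrounding capture and replay steps might look like this ( the directory object and capture/replay names are illustrative; capture runs on production, replay on the test database ):

-- On production: create the directory object and capture the workload.
CREATE OR REPLACE DIRECTORY db_replay_capture_dir AS '/u01/app/oracle/db_replay_capture';
EXEC DBMS_WORKLOAD_CAPTURE.start_capture(name => 'test_capture_1', dir => 'DB_REPLAY_CAPTURE_DIR');
EXEC DBMS_WORKLOAD_CAPTURE.finish_capture;

-- On the test database: process the capture, then initialize and prepare the replay.
EXEC DBMS_WORKLOAD_REPLAY.process_capture('DB_REPLAY_CAPTURE_DIR');
EXEC DBMS_WORKLOAD_REPLAY.initialize_replay(replay_name => 'test_replay_1', replay_dir => 'DB_REPLAY_CAPTURE_DIR');
EXEC DBMS_WORKLOAD_REPLAY.prepare_replay;

-- Start a replay client from the OS, then run START_REPLAY as shown above.
$ wrc system/password mode=replay replaydir=/u01/app/oracle/db_replay_capture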
The output from the replay client includes the start and finish time of the replay operation.
Oracle 11g allows indexes to be marked as invisible. Invisible indexes are maintained
like any other index, but they are ignored by the optimizer unless the
OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level.
Indexes can be created as invisible by using the INVISIBLE keyword, and their visibility can be toggled
using the ALTER INDEX command.
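For example ( the table, column and index names are illustrative ):

CREATE INDEX test_idx ON test_tab (id) INVISIBLE;
ALTER INDEX test_idx VISIBLE;
ALTER INDEX test_idx INVISIBLE;
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;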
Oracle 11g has a new view called DBA_TEMP_FREE_SPACE that displays information about temporary
tablespace usage.

SQL> SELECT * FROM dba_temp_free_space;

1 row selected.

SQL>
Perform an online shrink of a temporary tablespace using the ALTER TABLESPACE command.
SQL> ALTER TABLESPACE temp SHRINK SPACE;
Tablespace altered.
SQL> SELECT * FROM dba_temp_free_space;
1 row selected.
SQL>
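The shrink can also keep a minimum amount of space, or target a specific tempfile ( the size and file name are illustrative ):

SQL> ALTER TABLESPACE temp SHRINK SPACE KEEP 40M;
SQL> ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/app/oracle/oradata/DB11G/temp01.dbf';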
Table compression was introduced in Oracle 9i as a space saving feature for data warehousing projects.
In 11g it is now considered a mainstream feature that is acceptable for OLTP databases. In addition to
saving storage space, compression can result in increased I/O performance and reduced memory use in
the buffer cache. These advantages do come at a cost, since compression incurs a CPU overhead, so it
won't be of benefit to everyone.
The compression clause can be specified at the tablespace, table or partition level with the following
options:
NOCOMPRESS - The table or partition is not compressed. This is the default action when no
compression clause is specified.
COMPRESS - This option is considered suitable for data warehouse systems. Compression is enabled
on the table or partition during direct-path inserts only.
COMPRESS FOR DIRECT_LOAD OPERATIONS - This option has the same effect as the simple
COMPRESS keyword.
COMPRESS FOR ALL OPERATIONS - This option is considered suitable for OLTP systems. As the
name implies, this option enables compression for all operations, including regular DML statements. This
option requires the COMPATIBLE initialization parameter to be set to 11.1.0 or higher.
The following examples show the various compression options applied at table and partition level.
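For example ( the table names and structures are illustrative ):

CREATE TABLE comp_tab_1 (
  id          NUMBER(10),
  description VARCHAR2(50)
)
COMPRESS FOR ALL OPERATIONS;

CREATE TABLE comp_tab_2 (
  id           NUMBER(10),
  created_date DATE
)
PARTITION BY RANGE (created_date) (
  PARTITION part_1 VALUES LESS THAN (TO_DATE('01/01/2008', 'DD/MM/YYYY')) COMPRESS,
  PARTITION part_2 VALUES LESS THAN (MAXVALUE) NOCOMPRESS
);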
Default compression settings can be specified at the tablespace level using the CREATE TABLESPACE
and ALTER TABLESPACE commands. The current settings are displayed in the
DEF_TAB_COMPRESSION and COMPRESS_FOR columns of the DBA_TABLESPACES view.
SQL> SELECT def_tab_compression, compress_for
  2  FROM   dba_tablespaces
  3  WHERE  tablespace_name = 'TEST_TS';

DEF_TAB_ COMPRESS_FOR
-------- ------------------
ENABLED  FOR ALL OPERATIONS

1 row selected.

SQL> ALTER TABLESPACE test_ts DEFAULT NOCOMPRESS;

Tablespace altered.

SQL> SELECT def_tab_compression, compress_for
  2  FROM   dba_tablespaces
  3  WHERE  tablespace_name = 'TEST_TS';

DEF_TAB_ COMPRESS_FOR
-------- ------------------
DISABLED

1 row selected.

SQL>
When compression is specified at multiple levels, the most specific setting is always used. As such,
partition settings always override table settings, which always override tablespace settings.
In previous Oracle releases, tables could be made to appear read-only to other users by only granting the
SELECT object privilege to them, but the tables remained read-write for the owner. Oracle 11g allows
tables to be marked as read-only using the ALTER TABLE command. It restricts both DDL and DML
commands.
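For example ( the table name is illustrative ):

SQL> ALTER TABLE ro_tab READ ONLY;
SQL> ALTER TABLE ro_tab READ WRITE;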
Oracle 11g allows the results of SQL queries to be cached in the SGA (result_cache_max_size) and
reused to improve performance. Set up the following schema objects to see how the SQL query cache
works.
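A possible setup, assuming EXECUTE privilege on DBMS_LOCK ( the object names and the one-second sleep are illustrative ):

CREATE TABLE qrc_tab (
  id NUMBER
);

INSERT INTO qrc_tab SELECT level FROM dual CONNECT BY level <= 5;
COMMIT;

CREATE OR REPLACE FUNCTION slow_function (p_id IN NUMBER)
  RETURN NUMBER
AS
BEGIN
  -- Simulate an expensive call: sleep for one second per invocation.
  DBMS_LOCK.sleep(1);
  RETURN p_id;
END;
/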
Next, we query the test table using the slow function and check the elapsed time. Each run takes
approximately five seconds.
SELECT slow_function(id) FROM qrc_tab;
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:05.15
SQL>
Adding the RESULT_CACHE hint to the query tells the server to attempt to retrieve the information from
the result cache. If the information is not present, it will cache the results of the query provided there is
enough room in the result cache. Since we have no cached results, we would expect the first run to take
approximately five seconds, but subsequent runs to be much quicker.
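The hinted query, run twice to show the caching effect, might look like this:

SELECT /*+ RESULT_CACHE */ slow_function(id) FROM qrc_tab;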
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:05.20
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:00.15
SQL>
The default action of the result cache is controlled by the RESULT_CACHE_MODE parameter. When it is
set to MANUAL, the RESULT_CACHE hint must be used for a query to access the result cache.
If we set the RESULT_CACHE_MODE parameter to FORCE, the result cache is used by default, but we
can bypass it using the NO_RESULT_CACHE hint.
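Here the un-hinted query uses the cache by default, while the NO_RESULT_CACHE hint bypasses it:

SELECT slow_function(id) FROM qrc_tab;

SELECT /*+ NO_RESULT_CACHE */ slow_function(id) FROM qrc_tab;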
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:00.14
SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5
5 rows selected.
Elapsed: 00:00:05.14
SQL>
L) DDL with the WAIT Option (DDL_LOCK_TIMEOUT) in Oracle Database 11g Release 1
DDL commands require exclusive locks on internal structures. If these locks are not available the
commands return with an "ORA-00054: resource busy" error message. This can be especially frustrating
when trying to modify objects that are accessed frequently. To get round this Oracle 11g includes the
DDL_LOCK_TIMEOUT parameter, which can be set at instance or session level using the ALTER
SYSTEM and ALTER SESSION commands respectively.
The DDL_LOCK_TIMEOUT parameter indicates the number of seconds a DDL command should wait for
the locks to become available before throwing the resource busy error message. The default value is
zero. To see it in action, create a new table and insert a row, but don't commit the insert.
Leave this session alone and in a new session, set the DDL_LOCK_TIMEOUT at session level to a non-
zero value and attempt to add a column to the table.
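A sketch of the two sessions ( the table and column names are illustrative ):

-- Session 1: hold a row lock on the table.
CREATE TABLE lock_tab (id NUMBER);
INSERT INTO lock_tab VALUES (1);

-- Session 2: wait up to 30 seconds for the lock.
ALTER SESSION SET ddl_lock_timeout = 30;
ALTER TABLE lock_tab ADD (description VARCHAR2(50));

-- If session 1 commits within the timeout, the ALTER succeeds, as shown below.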
Table altered.
SQL>
Case sensitive passwords (and auditing) are a default feature of newly created Oracle 11g databases.
The Database Configuration Assistant (DBCA) allows you to revert these settings back to the pre-11g
functionality during database creation.
The following code demonstrates the case sensitive password functionality. First, it resets the
SEC_CASE_SENSITIVE_LOGON initialization parameter to TRUE and creates a new user with a mixed
case password.
CONN / AS SYSDBA
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = TRUE;
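The user creation might look like this ( test1/Test1 is an illustrative user ):

CREATE USER test1 IDENTIFIED BY Test1;
GRANT CONNECT TO test1;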
We can see the case sensitive password functionality in operation if we attempt to connect to the new
user with both the correct and incorrect case password.
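For example, using the test1 user created above:

SQL> CONN test1/Test1
Connected.
SQL> CONN test1/test1
ERROR:
ORA-01017: invalid username/password; logon denied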
The important thing to remember here is even when case sensitive passwords are not enabled, the
original case of the password is retained so it can be used if case sensitivity is subsequently enabled. The
following code disables case sensitivity and creates a new user with a mixed case password.
CONN / AS SYSDBA
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;
CREATE USER test3 IDENTIFIED BY Test3;
GRANT CONNECT TO test3;
As you would expect, connection to the user is possible regardless of the case of the password.
SQL> CONN test3/Test3
Connected.
SQL> CONN test3/test3
Connected.
SQL>
If we enable case sensitivity, authentication is done against the mixed case password.
CONN / AS SYSDBA
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = TRUE;
The DBA_USERS view includes a PASSWORD_VERSIONS column that indicates the database release
in which the password was created or last modified.
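For example:

SQL> SELECT username, password_versions FROM dba_users;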
USERNAME PASSWORD
------------------------------ --------
TEST 10G 11G
SPATIAL_WFS_ADMIN_USR 10G 11G
SPATIAL_CSW_ADMIN_USR 10G 11G
APEX_PUBLIC_USER 10G 11G
SYSTEM 10G 11G
SYS 10G 11G
MGMT_VIEW 10G 11G
OUTLN 10G 11G
32 rows selected.
SQL>
Users imported from a 10g database have a PASSWORD_VERSIONS value of "10G" and maintain case
insensitive passwords independent of the SEC_CASE_SENSITIVE_LOGON parameter setting. Their
passwords become case sensitive as soon as they are changed, assuming the
SEC_CASE_SENSITIVE_LOGON parameter is set to TRUE.
The ignorecase parameter of the orapwd utility allows control over case sensitivity of passwords in the
password file. The default value is "n", meaning the passwords are case sensitive. When privileged users
(SYSDBA & SYSOPER) are imported from a previous release their passwords are included in the
password file. These users will retain case insensitive passwords until the password is modified.
To create case insensitive passwords in the password file, recreate the password file using the
ignorecase=y option.
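For example ( the file name and password are illustrative ):

$ orapwd file=$ORACLE_HOME/dbs/orapwDB11G password=MyPassword1 ignorecase=y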
The passwords associated with database links are also case sensitive, which presents some issues when
connecting between different releases:
11g to 11g: The database link must be created with the password in the correct case to match the remote
user's password.
11g to Pre-11g: The database link can be created with the password in any case as case is ignored by
the remote database.
Pre-11g to 11g: The remote user must have its password modified to be totally in upper case, as this is
how it will be stored and passed by the Pre-11g database.
N ) AUDITING :
• The new DB_ULTRA_SAFE parameter controls the default values of the following data protection
parameters:
– DB_BLOCK_CHECKING
– DB_BLOCK_CHECKSUM
– DB_LOST_WRITE_PROTECT
• It also controls other data protection behavior within the Oracle database
DB_BLOCK_CHECKSUM
• Determines whether DBWn and the direct loader calculate a checksum and store it in the cache
header of every data block when writing it to disk
• Syntax: DB_BLOCK_CHECKSUM = { OFF | TYPICAL | FULL }
• Values:
– OFF or FALSE
– TYPICAL ( the default )
– FULL or TRUE
• Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O
systems
A lost write happens when Oracle writes a block to disk and the I/O subsystem signals the write is
complete, even though it isn't. When the block is next read the stale data is returned, which can result in
data corruption.
The DB_LOST_WRITE_PROTECT parameter can provide protection against lost writes depending on
the value set:
NONE - No lost write protection. The default.
TYPICAL - The instance logs buffer cache reads for read/write tablespaces in the redo log. This has an
approximate overhead of 5-10% in a RAC environment.
FULL - The instance logs buffer cache reads for read/write and read-only tablespaces in the redo log.
This has an approximate overhead of 20% in a RAC environment.
Lost write detection is most effective in Data Guard environments. Once the primary and standby
databases are protected, the SCNs of blocks applied to the standby database are compared to the SCN
logged in the redo logs. If the SCN on the primary database is smaller than the SCN on the standby
database, a lost write on the primary database has occurred and is signaled with an external error (ORA-
752). At this point you should failover to the standby database. If the SCN on the primary database is
bigger than on the standby database, a lost write on the standby database has occurred and is signaled
with an internal error (ORA-600 [3020]). At this point the standby database should be recreated.
Lost write protection can also be used in normal databases, although there is no signal that the lost write
has occurred. If you suspect a problem due to inconsistent data, you must recover the database to the
SCN of the stale block from a backup taken before the suspected problem occurred. This restore
operation will generate the lost write error (ORA-752). If the error is detected during a recovery, you have
no alternative but to open the database with the RESETLOGS option. All data after this point is lost.
DB_BLOCK_CHECKING
• Specifies whether or not Oracle performs block checking for database blocks
• Syntax: DB_BLOCK_CHECKING = { OFF | LOW | MEDIUM | FULL }
• Values:
– OFF or FALSE
– LOW
– MEDIUM
– FULL or TRUE
• The values FALSE and TRUE are supported for backwards compatibility
• Flashback Data Archives provide the ability to track changes that occur over the lifetime
of an individual table
• Unlike other flashback features, Flashback Data Archive does not rely upon undo records
or archived redo log files
• History records are stored in an archive tablespace and can be kept for years
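A minimal sketch, assuming a tablespace fda_ts and the FLASHBACK ARCHIVE ADMINISTER privilege ( the names, quota and retention are illustrative ):

CREATE FLASHBACK ARCHIVE test_archive
  TABLESPACE fda_ts
  QUOTA 1G
  RETENTION 1 YEAR;

ALTER TABLE emp FLASHBACK ARCHIVE test_archive;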
LogMiner Enhancements
In previous versions of Oracle the LogMiner viewer was a separate Java-based console, but in
Oracle 11g it has been incorporated into Enterprise Manager and integrated with the new
Flashback Transaction feature, making it simple to recover transactions that have had an
undesirable effect. The LogMiner functionality is accessed using the "View and Manage
Transactions" link on the "Availability" tab.
Q ) Performance features
1) Automatic Memory Management
• Set MEMORY_TARGET ( and optionally MEMORY_MAX_TARGET ) together with:
– SGA_TARGET = 0
– PGA_AGGREGATE_TARGET = 0
• This allows the SGA and instance PGA to be dynamically tuned up and down, as required
• The following views show how the memory components are sized:
– V$MEMORY_DYNAMIC_COMPONENTS
– V$MEMORY_TARGET_ADVICE
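For example ( the sizes are illustrative; MEMORY_MAX_TARGET is static, so a restart is required ):

ALTER SYSTEM SET memory_max_target = 1G SCOPE=SPFILE;
ALTER SYSTEM SET memory_target = 800M SCOPE=SPFILE;
ALTER SYSTEM SET sga_target = 0 SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE=SPFILE;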
2) Pending Statistics
In previous database versions, new optimizer statistics were automatically published when they were
gathered. In 11g this is still the default action, but you now have the option of keeping the newly gathered
statistics in a pending state until you choose to publish them.
The DBMS_STATS.GET_PREFS function allows you to check the 'PUBLISH' attribute to see if statistics
are automatically published. The default value of TRUE means they are automatically published, while
FALSE indicates they are held in a pending state.
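For example:

SQL> SELECT DBMS_STATS.get_prefs('PUBLISH') FROM dual;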
1 row selected.
SQL>
The 'PUBLISH' attribute is reset using the DBMS_STATS.SET_TABLE_PREFS procedure.
-- New statistics for SCOTT.EMP are kept in a pending state.
EXEC DBMS_STATS.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'false');
The DBMS_STATS package allows you to publish or delete pending statistics, as shown below.
-- Publish all pending statistics.
EXEC DBMS_STATS.publish_pending_stats(NULL, NULL);
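Pending statistics can also be discarded instead of published:

-- Delete the pending statistics for SCOTT.EMP.
EXEC DBMS_STATS.delete_pending_stats('SCOTT', 'EMP');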
Pending statistics can be transferred between databases by exporting them with the
DBMS_STATS.EXPORT_PENDING_STATS procedure.
3) Adaptive Cursor Sharing
With bind variable peeking, the optimizer peeks at the bind values on the first hard parse and the
resulting plan is then shared by all subsequent executions, whatever their bind values. Oracle 11g uses
Adaptive Cursor Sharing to solve this problem by allowing the server to compare the effectiveness of
execution plans between executions with different bind variable values. If it notices suboptimal plans, it
allows certain bind variable values, or ranges of values, to use alternate execution plans for the same
statement. This functionality requires no additional configuration. The following code provides an
example of adaptive cursor sharing.
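The test table and index used by the example are not shown above; they might look like this ( the names are taken from the execution plans below ):

CREATE TABLE acs_test_tab (
  id          NUMBER,
  record_type NUMBER,
  description VARCHAR2(50)
);

CREATE INDEX acs_test_tab_record_type_i ON acs_test_tab (record_type);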
DECLARE
TYPE t_acs_test_tab IS TABLE OF acs_test_tab%ROWTYPE;
l_tab t_acs_test_tab := t_acs_test_tab();
BEGIN
FOR i IN 1 .. 100000 LOOP
l_tab.extend;
IF MOD(i,2)=0 THEN
l_tab(l_tab.last).record_type := 2;
ELSE
l_tab(l_tab.last).record_type := i;
END IF;
l_tab(l_tab.last).id := i;
l_tab(l_tab.last).description := 'Description for ' || i;
END LOOP;

FORALL i IN l_tab.first .. l_tab.last
INSERT INTO acs_test_tab VALUES l_tab(i);

COMMIT;
END;
/
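Statistics must be gathered with a histogram for the skew to become visible to the optimizer; one common call is:

EXEC DBMS_STATS.gather_table_stats(USER, 'ACS_TEST_TAB', method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');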
The data in the RECORD_TYPE column is skewed, as shown by the presence of a histogram against the
column.
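The histogram can be checked as follows:

SQL> SELECT column_name, histogram FROM user_tab_cols WHERE table_name = 'ACS_TEST_TAB';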
COLUMN_NAME HISTOGRAM
------------------------------ ---------------
ID NONE
RECORD_TYPE HEIGHT BALANCED
DESCRIPTION NONE
3 rows selected.
SQL>
Next, we query the table and limit the rows returned based on the RECORD_TYPE column with a literal
value of "1".
MAX(ID)
----------
1
1 row selected.
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID cgt92vnmcytb0, child number 0
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = 1
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
| 2 | TABLE ACCESS BY INDEX ROWID| ACS_TEST_TAB | 1 | 9 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ACS_TEST_TAB_RECORD_TYPE_I | 1 | | 1 (0)|
00:00:01 |
-----------------------------------------------------------------------------------------------------------
This query has used the index as we would expect. Now we repeat the query, but this time use a bind
variable.
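The bind variable run might look like this ( DBMS_XPLAN.DISPLAY_CURSOR produces the plan output shown ):

VARIABLE l_record_type NUMBER;
EXEC :l_record_type := 1;
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type;
SELECT * FROM TABLE(DBMS_XPLAN.display_cursor);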
MAX(ID)
----------
1
1 row selected.
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID 9bmm6cmwa8saf, child number 0
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
| 2 | TABLE ACCESS BY INDEX ROWID| ACS_TEST_TAB | 1| 9| 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ACS_TEST_TAB_RECORD_TYPE_I | 1 | | 1 (0)|
00:00:01 |
-----------------------------------------------------------------------------------------------------------
So we ran what amounted to the same query, and got the same result and execution plan. The optimizer
picked an execution plan that it thinks is optimum for the query by peeking at the value of the bind
variable. The only problem is, this would be totally the wrong thing to do for other bind values.
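For example, a value that matches half the rows in the table:

EXEC :l_record_type := 2;
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type;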
MAX(ID)
----------
100000
1 row selected.
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID 9bmm6cmwa8saf, child number 0
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
| 2 | TABLE ACCESS BY INDEX ROWID| ACS_TEST_TAB | 1 | 9 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ACS_TEST_TAB_RECORD_TYPE_I | 1 | | 1 (0)|
00:00:01 |
-----------------------------------------------------------------------------------------------------------
If we look at the V$SQL view entry for this query, we can see the IS_BIND_SENSITIVE column is marked
as 'Y', so Oracle is aware this query may require differing execution plans depending on the bind variable
values, but currently the IS_BIND_AWARE column is marked as 'N', so Oracle has not acted on this yet.
SQL_ID        CHILD_NUMBER I I
------------- ------------ - -
9bmm6cmwa8saf            0 Y N
1 row selected.
SQL>
If we run the statement using the second bind variable again, we can see that Oracle has decided to use
an alternate, more efficient plan for this statement.
MAX(ID)
----------
100000
1 row selected.
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
SQL_ID 9bmm6cmwa8saf, child number 1
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type
-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 138 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
|* 2 | TABLE ACCESS FULL| ACS_TEST_TAB | 48031 | 422K| 138 (2)| 00:00:02 |
-----------------------------------------------------------------------------------
This change in behavior is also reflected in the V$SQL view, which now has the IS_BIND_AWARE
column marked as "Y".
SQL_ID        CHILD_NUMBER I I
------------- ------------ - -
9bmm6cmwa8saf            0 Y N
9bmm6cmwa8saf            1 Y Y
2 rows selected.
SQL>
4) ADDM Enhancements
• ADDM can now perform global analysis across all instances of an Oracle RAC cluster
• For example: The I/O levels of each cluster node may be insignificant when considered
locally, but the aggregate I/O may be significant for the cluster as a whole
DBA_ Views:
DBA_ADDM_FDG_BREAKDOWN
DBA_ADDM_FINDINGS
DBA_ADDM_INSTANCES
DBA_ADDM_SYSTEM_DIRECTIVES
DBA_ADDM_TASK_DIRECTIVES
DBA_ADDM_TASKS
Queries not considered:
• Ad hoc SQL
• Parallel queries
• Recursive SQL
• DML/DDL
Queries considered:
• Refreshing statistics
• Query rewrite
• SQL Tuning Sets
Core enhancements
• SQL Profiling
– Allows for different and better execution plans without altering the SQL text
• AWR baseline types:
– Fixed
– Moving Window
– Baseline Template
Fixed Baseline
• Computed over a fixed, contiguous time period in the past
Moving Window Baseline
• Default window size is eight days ( corresponds to the default AWR retention period )
Baseline Template
• Single: the single baseline template can be used to create a baseline for a single, fixed
time period in the future
• Repeating: the repeating baseline template can be used to create and drop baselines on a
repeating schedule
Automatic Optimizer Statistics Collection - Gathers stale or missing statistics for all schema objects. The
task name is 'auto optimizer stats collection'.
Automatic Segment Advisor - Identifies segments that could be reorganized to save space. The task
name is 'auto space advisor'.
Automatic SQL Tuning Advisor - Identifies and attempts to tune high-load SQL. The task name is 'sql
tuning advisor'.
These tasks run during maintenance windows scheduled to open overnight. Configuration of the tasks
is performed using the DBMS_AUTO_TASK_ADMIN package.
Relevant Views
The following views display information related to the automated database maintenance tasks:
DBA_AUTOTASK_CLIENT_JOB
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_JOB_HISTORY
DBA_AUTOTASK_WINDOW_CLIENTS
DBA_AUTOTASK_CLIENT_HISTORY
R ) Partitioning enhancements
• Interval partitioning
• Reference partitioning
• System partitioning
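For example, interval partitioning extends range partitioning by creating new partitions automatically as data arrives ( the table definition is illustrative ):

CREATE TABLE interval_tab (
  id           NUMBER,
  created_date DATE
)
PARTITION BY RANGE (created_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION part_01 VALUES LESS THAN (TO_DATE('01/01/2008', 'DD/MM/YYYY'))
);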