11g_notes

The following Oracle 11g new features are described below with practical examples (some of the feature descriptions are taken from Oracle documentation).

A ) Compatibility matrix to upgrade the database to 11g

B ) RMAN 11g new features:

1 Data Recovery Advisor in Oracle Database 11g ( can also be used if the current environment has no RMAN configuration)

2 Active Database duplication

3 RMAN virtual private catalog (Security )

4 Undo Optimization ( speeds up the backup by skipping undo blocks that are not required for recovery)

5 Using RMAN multisection backups (To speed up the backup of big datafiles )

6 Archived Redo Log Failover ( if an archived log copy is corrupt, RMAN looks for a valid copy in one of the other archive destinations )

7 Archived Log Deletion Policy Enhancements

8 IMPORT CATALOG ( to move or merge the catalog database )

9 Improved Block Media Recovery Performance

10 Faster Backup Compression.


11 Block Change Tracking Support for Standby Databases ( extends the 10g feature that speeds up incremental backups )

12 Improved Scripting with RMAN Substitution Variables

C ) Automatic Storage Management (ASM) 11g new features

1 Rolling upgrade ( No downtime required to upgrade ASM instance in cluster environment)

2 Fast mirror resync ( if a disk goes offline temporarily, only the modified extents are resynced when it comes back, instead of rebuilding the whole disk )

3 Preferred Read Failure Groups ( increases read speed by reading from the nearest disk in a failure-group setup )

4 Scalability and performance enhancements

5 Internal consistency checking ( checks the consistency of a disk group )

6 ASMCMD new features ( New commands with ASM command tool )

=====================================================================

D ) Automatic Diagnostic Repository ( the ADR holds all diagnostic data formerly kept in bdump, cdump, and udump, plus monitoring reports; the ADR is managed with the adrci utility )

E ) Health Monitor checks ( detect file corruptions, physical and logical block corruptions, and data dictionary corruptions; results are monitored using the adrci utility )
F ) Database Replay ( capture the workload in one database and replay the same workload in another test database )

G ) Invisible Indexes ( making optimizer to ignore index by making index as invisible)

H ) Temporary Tablespace Enhancements ( we can shrink the temporary tablespace, and there is a new view called DBA_TEMP_FREE_SPACE)

I ) Compression Enhancements in Oracle Database 11g Release 1 ( COMPRESS FOR DIRECT_LOAD OPERATIONS and tablespace-level compression)

J ) Read-Only Tables ( we can make a table read-only, which restricts both DDL and DML operations)

K ) Query Result Cache ( caches query results in the result_cache area of the SGA; subsequent executions get the data directly from the cache)

L ) DDL with the WAIT Option (DDL_LOCK_TIMEOUT) ( sets a value in seconds to wait to acquire a lock, preventing the ORA-00054: resource busy error)
M ) Password case sensitivity in 11g (Security )

N ) Auditing ( by default enabled in 11g )

O ) New parameters to detect data corruption

(DB_BLOCK_CHECKING, DB_BLOCK_CHECKSUM, DB_LOST_WRITE_PROTECT)

P ) Flashback features

1) Flashback Data Archives

2 ) Flashback transaction ( Now logminer integrated with OEM )

Q) Performance features

1) Enable Automatic Memory Management

2) Pending statistics ( we can keep newly gathered stats in a pending state until we publish them )

3) Adaptive Cursor Sharing ( unlike previous versions, execution plans can now vary based on bind variable values ).
4 ) ADDM enhancements ( global analysis across all instances of an Oracle RAC cluster )

5) Automatic SQL tuning

6) AWR baseline enhancements

7) Automated database maintenance tasks

R) Partitioning enhancements

A ) Upgrading database to 11g

Compatibility Matrix:

– 7.3.3 (or lower) -> 7.3.4 -> 9.2.0.8 -> 11.1

– 8.0.5 (or lower) -> 8.0.6 -> 9.2.0.8 -> 11.1

– 8.1.7 (or lower) -> 8.1.7.4 -> 9.2.0.8 -> 11.1

– 9.0.1.3 (or lower) -> 9.0.1.4 -> 9.2.0.8 -> 11.1

• Direct upgrade support

– 9.2.0.4 (or higher) -> 11.1

– 10.1.0.2 (or higher) -> 11.1

– 10.2.0.1 (or higher) -> 11.1

Refer to the following Metalink notes to upgrade the database to 11g.
Metalink Note ID: 429825.1

Metalink Note ID: 396671.1

Metalink Note ID: 396387.1

B ) RMAN 11g new features:

1 Data Recovery Advisor in Oracle Database 11g

The Data Recovery Advisor automatically diagnoses corruption or loss of persistent data on disk,
determines the appropriate repair options, and executes repairs at the user's request. This
reduces the complexity of the recovery process.

Example :

Oracle RMAN lists failures, advises on them, and also generates a script to fix the
failure, which we can run using RMAN commands.

Note : If RMAN backups are not configured in the existing environment, you can get the same
information from the target database control file using the command below.

$ rman target /

RMAN> LIST FAILURE;

List of Database Failures


=========================

Failure ID Priority Status Time Detected Summary


---------- -------- --------- ------------- -------
202 HIGH OPEN 03-JAN-08 One or more non-system datafiles are corrupt
RMAN> ADVISE FAILURE;

List of Database Failures


=========================

Failure ID Priority Status Time Detected Summary


---------- -------- --------- ------------- -------
202 HIGH OPEN 03-JAN-08 One or more non-system datafiles are corrupt

analyzing automatic repair options; this may take some time


allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions


=======================
no manual actions available

Automated Repair Options


========================
Option Repair Description
------ ------------------
1 Restore and recover datafile 4
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/db11g/DB11G/hm/reco_3657335472.hm

RMAN>

RMAN> REPAIR FAILURE PREVIEW;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/db11g/DB11G/hm/reco_2408143298.hm

contents of repair script:


# restore and recover datafile
sql 'alter database datafile 4 offline';
restore datafile 4;
recover datafile 4;
sql 'alter database datafile 4 online';
RMAN> REPAIR FAILURE NOPROMPT;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/db11g/DB11G/hm/reco_2408143298.hm

contents of repair script:


# restore and recover datafile
sql 'alter database datafile 4 offline';
restore datafile 4;
recover datafile 4;
sql 'alter database datafile 4 online';
executing repair script

sql statement: alter database datafile 4 offline

Starting restore at 03-JAN-08


using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore


channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/DB11G/users01.dbf
channel ORA_DISK_1: reading from backup piece
/u01/app/oracle/flash_recovery_area/DB11G/backupset/
2008_01_03/o1_mf_nnndf_BACKUP_DB11G.WORLD_0_3qsl2hy4_.bkp
channel ORA_DISK_1: piece
handle=/u01/app/oracle/flash_recovery_area/DB11G/backupset/2008_01_03/
o1_mf_nnndf_BACKUP_DB11G.WORLD_0_3qsl2hy4_.bkp
tag=BACKUP_DB11G.WORLD_010308113407
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 03-JAN-08

Starting recover at 03-JAN-08


using channel ORA_DISK_1

starting media recovery


media recovery complete, elapsed time: 00:00:01

Finished recover at 03-JAN-08

sql statement: alter database datafile 4 online


repair failure complete

2 ) Active database duplication


Now we can duplicate the database without using an intermediate backup. It means that we can duplicate
a live, running database.

RMAN> CONNECT TARGET SYS@prod1/password

connected to target database: PROD1 (DBID=39525561)

RMAN> CONNECT AUXILIARY SYS@dup1/password

connected to auxiliary database: DUP1 (not mounted)

RMAN> DUPLICATE TARGET DATABASE TO dup1

2> FROM ACTIVE DATABASE

3> PASSWORD FILE

4> SPFILE

5> NOFILENAMECHECK;

3) RMAN virtual private catalog

A virtual private catalog (VPC) is a subset of the base recovery catalog. It allows us to use additional
security features: we can grant a specific user access to view the backup details of specific
databases in the catalog.

Practical log :

• Log into SQL*Plus as SYS and create a user with the RECOVERY_CATALOG_OWNER

role

CREATE USER vpc1 IDENTIFIED BY vpc1 QUOTA UNLIMITED ON users;

GRANT RECOVERY_CATALOG_OWNER TO vpc1;

• Log into RMAN using the base recovery catalog owner and grant access on the relevant

databases to the VPC user


$ rman

RMAN> CONNECT CATALOG rman/rman;

RMAN> GRANT CATALOG FOR DATABASE db11g TO vpc1;

Grant succeeded.

RMAN>

• Grant the right for the VPC owner to register new target databases

$ rman

RMAN> GRANT REGISTER DATABASE TO vpc1;

Grant succeeded.

RMAN>

• Log into RMAN using the VPC owner and issue the CREATE VIRTUAL CATALOG

command

$ rman

RMAN> CONNECT CATALOG vpc1/vpc1;

RMAN> CREATE VIRTUAL CATALOG;

found eligible base catalog owned by RMAN

created virtual catalog against base catalog owned by RMAN

RMAN>
4) Undo Optimization

The BACKUP command no longer backs up undo that is not needed for recovery. As the majority
of the undo tablespace is filled with undo generated for transactions that have subsequently been
committed, this can represent a substantial saving.

This functionality is not configurable. It is not affected by the CONFIGURE BACKUP


OPTIMIZATION {ON | OFF} command.

5) Using RMAN multisection backups

Up to 10g, Oracle RMAN backs up each datafile as a single unit. From 11g, if we have a large file (say
a 1 TB bigfile datafile), RMAN can back it up in parallel by splitting the same datafile into multiple
sections. This speeds up the backup.

Example :

RMAN> BACKUP DATAFILE 4 SECTION SIZE = 25M;

• If SECTION SIZE is larger than the size of the file, RMAN does not use multisection backup for the file

• The maximum number of file sections is 256

– If SECTION SIZE would produce more than 256 sections, RMAN will increase SECTION SIZE to a value that results in 256 sections

6 ) Archived Redo Log Failover


When backing up archived redo logs RMAN only includes a single copy of each archived redo
log, regardless of how many archive log destinations are being written to. The Oracle 11g archived redo
log failover feature allows RMAN to complete a backup provided at least one valid copy of each archived
redo log is present in one of the specified archive destinations. If RMAN finds a log file containing corrupt
blocks, it searches the other archive destinations for a valid copy to back up.
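
As a sketch of the setup this relies on (destination paths below are hypothetical), failover only has somewhere to look when more than one archive destination is configured:

```sql
-- Hypothetical local archive destinations; adjust paths for your environment.
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u01/arch1' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_2 = 'LOCATION=/u02/arch2' SCOPE=BOTH;

-- In RMAN: one valid copy of each sequence is backed up. If the copy in
-- /u01/arch1 has corrupt blocks, RMAN fails over to the copy in /u02/arch2.
BACKUP ARCHIVELOG ALL;
```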

7) Archived Log Deletion Policy Enhancements

The archived log deletion policy of Oracle 11g has been extended to give greater flexibility and protection
in a Data Guard environment. The Oracle 10g and Oracle 11g syntax is displayed below.

# Oracle 10g Syntax.

CONFIGURE ARCHIVELOG DELETION POLICY {CLEAR | TO {APPLIED ON STANDBY | NONE}}

# Oracle 11g Syntax.

ARCHIVELOG DELETION POLICY {CLEAR | TO {APPLIED ON [ALL] STANDBY |


BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier |
NONE | SHIPPED TO [ALL] STANDBY}
[ {APPLIED ON [ALL] STANDBY | BACKED UP integer TIMES TO DEVICE TYPE deviceSpecifier |
NONE | SHIPPED TO [ALL] STANDBY}]...}
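
As a hedged illustration of the new syntax (the backup count and device type are only examples), typical 11g policies look like:

```sql
-- Allow deletion only after the log has been applied on all standby databases:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;

-- Or only after the log has been backed up twice to tape:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;
```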

8) IMPORT CATALOG

Oracle 11g has introduced the IMPORT CATALOG command to allow recovery catalogs to be merged or
moved. Connect to the destination catalog and issue the IMPORT CATALOG command, specifying the
owner of the source catalog.

Example :

$ rman
RMAN> CONNECT CATALOG rman2/rman2
RMAN> IMPORT CATALOG rman@db11g;

Starting import catalog at 07-JAN-08


source recovery catalog database Password:
connected to source recovery catalog database
import validation complete
database unregistered from the source recovery catalog
Finished import catalog at 07-JAN-08

RMAN>
Each target imported is unregistered from the source catalog. The import can be limited to a subset of the
catalog by specifying the DBID or DB_NAME of each target to import.
RMAN> IMPORT CATALOG rman@db11g DBID=1423241, 1423242;
RMAN> IMPORT CATALOG rman@db11g DB_NAME=prod3, prod4;
The version of the source catalog must match that of the RMAN executable for the import to be
successful.

To move an entire catalog to a new server, simply create a user on the new server to act as the catalog
owner, create a catalog and import the contents of the existing catalog into it.
$ sqlplus / as sysdba
SQL> CREATE USER rman2 IDENTIFIED BY rman2 QUOTA UNLIMITED ON rman_ts;
SQL> GRANT RECOVERY_CATALOG_OWNER TO rman2;
SQL> EXIT;

$ rman catalog=rman2/rman2
RMAN> CREATE CATALOG;
RMAN> IMPORT CATALOG rman@db11g;

9 ) Improved Block Media Recovery Performance

If flashback logs are present, RMAN will use them in preference to backups during block media recovery
(BMR), which can significantly improve BMR speed.
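
Block media recovery itself is invoked with the 11g RECOVER ... BLOCK syntax (the datafile and block numbers below are illustrative):

```sql
-- Repair a single corrupt block; RMAN prefers flashback logs over backups.
RECOVER DATAFILE 4 BLOCK 23;

-- Or repair every block listed in V$DATABASE_BLOCK_CORRUPTION:
RECOVER CORRUPTION LIST;
```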

10 ) Faster Backup Compression

RMAN now supports the ZLIB binary compression algorithm as part of the Oracle Advanced Compression
option. The ZLIB algorithm is optimized for CPU efficiency, but produces larger zip files than the BZIP2
algorithm available previously, which is optimized for compression. The choice of compression algorithm
is set using the CONFIGURE command.

CONFIGURE COMPRESSION ALGORITHM 'ZLIB';


CONFIGURE COMPRESSION ALGORITHM 'BZIP2';
To perform a compressed backup using the ZLIB algorithm you might do something like this.
# One-off configuration.
CONFIGURE COMPRESSION ALGORITHM 'ZLIB';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
# Backup.
BACKUP DATABASE PLUS ARCHIVELOG;

11) Block Change Tracking Support for Standby Databases

Block change tracking is now supported on physical standby databases, which in turn means fast
incremental backups are now possible on standby databases.
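
A minimal sketch of enabling it on the standby (the tracking file location is hypothetical):

```sql
-- Run on the physical standby:
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/oradata/STBY/bct.chg';

-- Subsequent incremental backups on the standby read only the changed blocks.
-- In RMAN:
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```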

12 ) Improved Scripting with RMAN Substitution Variables

Substitution variables can now be used in RMAN command scripts in a similar manner to SQL*Plus
scripts. For example, the following command script requires a tag name to be entered for each backup
run.
CONNECT TARGET /
BACKUP DATABASE TAG '&1';
BACKUP ARCHIVELOG ALL TAG '&2';
EXIT;

Notice the "&1" and "&2" placeholders. Assuming this were saved with a filename of
"/scripts/backup.cmd", it might be called with the following syntax.
$ rman @'/tmp/backup.cmd' USING DB_20070108 ARCH_20070108
Notice the use of the USING keyword, which accepts a space-separated list of values that are substituted
for the placeholders.

C) Automatic Storage Management (ASM) 11g new features

1) ROLLING UPGRADE:

Prior to 11g, when we wanted to upgrade or patch ASM instances in a cluster, we had to bring
down all the ASM instances, since all of them had to be upgraded at once.

From 11g onwards we can upgrade individual ASM instances in a cluster without bringing down
the other ASM instances.

Note 1: With this feature we upgrade one ASM node at a time. In a cluster the ASM instances are
upgraded one by one, not all at once.

Commands for example:

ALTER SYSTEM START ROLLING MIGRATION TO '11.1.0.7.0';

ALTER SYSTEM STOP ROLLING MIGRATION;

2) Fast mirror resync

During transient disk failures within a failure group, ASM keeps track of the changed
extents that need to be applied to the offline disk. Once the disk is available, only the changed extents are
written to resynchronize the disk, rather than overwriting the contents of the entire disk. This can speed
up the resynchronization process considerably.
Fast mirror resync is only available when the disk group's compatibility attributes are set to 11.1 or higher.
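
A hedged sketch of the setup (the disk group name and repair window are illustrative):

```sql
-- Compatibility attributes must be 11.1 or higher for fast mirror resync.
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm' = '11.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.1';

-- How long ASM tracks changed extents before dropping an offline disk:
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '3h';

-- After the transient failure is fixed, resync only the stale extents:
ALTER DISKGROUP data ONLINE ALL;
```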

3) Preferred Read Failure Groups


In Oracle 10g, ASM always reads the primary copy of the mirrored extent set. This is not a problem when
both nodes and both failure groups are all located in the same site, but it can be inefficient for extended
clusters, causing needless network traffic.

Oracle 11g allows each node to define a preferred read failure group.

Example : Suppose a node has mirrored disks and, for failure protection, we have placed one
disk in the same building and the other disk in another building or city. The nearest disk for
read operations is the one in the same building, so we can set the preferred read failure group
to that disk to avoid network traffic.

SQL> ALTER SYSTEM SET asm_preferred_read_failure_groups = 'data.data_0000', 'data.data_0001';

4) Scalability and performance enhancements

• Not much has really changed since Oracle 10g

– NFS performance is enhanced via DirectNFS

• Scalability

– 63 disk groups in a storage system

– 10,000 ASM disks in a storage system

– 4PB maximum storage for each ASM disk

– 40 exabyte maximum storage for each storage system


– 1 million files for each disk group

• Total capacity

– External redundancy - 140PB

– Normal redundancy - 42PB

– High redundancy - 15PB

5) Internal consistency checking

ALTER DISKGROUP <dgname> CHECK ALL;

• Verifies the consistency of the disk group

6) ASMCMD new features

• ASMCMD new commands

cp - Copies ASM files between ASM and the operating system, or between ASM instances

lsdsk - Lists disks visible to ASM

md_backup/md_restore - Creates/restores a backup of disk group metadata

remap - Repairs a range of blocks on disk
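
A brief, hypothetical session with two of the new commands (the file name shown is an example only; ASM file names differ per environment):

```text
ASMCMD> lsdsk
ASMCMD> cp +DATA/DB11G/DATAFILE/users.259.661514191 /tmp/users01.dbf
```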

D ) Automatic Diagnostic Repository (ADR)

• File-based repository for database diagnostic data

• Includes traces, the alert log, and health monitor reports

• Unified directory structure across multiple instances and multiple products

• All diagnostic data is stored in the ADR

ADR Definitions :

• ADR Base

– Root directory for all ADR Homes

– Location is specified by the DIAGNOSTIC_DEST parameter

– If the parameter is not defined, the database will set a default value

• ADR Home
– Root directory for all diagnostic data

– Defined for an instance of an Oracle product or component

Subdirectory - Description

adhoc - Ad hoc SQL scripts

arch - Archived redo log files

adump - Audit files (set the AUDIT_FILE_DEST initialization parameter to specify the adump directory; clean out this subdirectory periodically)

create - Scripts used to create the database

exp - Database export files

logbook - Files recording the status and history of the database

pfile - Instance parameter files

Diagnostic data: 10g location -> 11g location

Foreground process traces: user_dump_dest -> {ADR_HOME}/trace/

Background process traces: background_dump_dest -> {ADR_HOME}/trace/

Alert log data: background_dump_dest -> {ADR_HOME}/alert/

Core dumps: core_dump_dest -> {ADR_HOME}/incident/In/

Incident dumps: user_dump_dest or background_dump_dest -> {ADR_HOME}/incident/In/

We can check or open all the trace information of multiple databases from one location using the adrci
utility, as below:

[oracle@ptc-hp12 ~]$ adrci

ADRCI: Release 11.1.0.6.0 - Beta on Sat Oct 25 01:50:06 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

ADR base = "/u01/app/oracle"


adrci>
adrci>
adrci> show alert

Choose the alert log from the following homes to view:

1: diag/diagtool/user_oracle/host_192545652_11
2: diag/tnslsnr/ptc-hp00/listener1
3: diag/tnslsnr/ptc-hp00/listener9
4: diag/tnslsnr/ptc-hp00/listener
5: diag/tnslsnr/ptc3-vl4/listener
6: diag/rdbms/ptc/ptc
7: diag/rdbms/dup1/DUP1
8: diag/asm/+asm/+ASM
9: diag/clients/user_unknown/host_411310321_11
10: diag/clients/user_oracle/host_2779827455_11
11: diag/clients/user_oracle/host_192545652_11
Q: to quit

Please select option: 8


adrci>
adrci>
adrci>
adrci>
adrci>
adrci> show incident

ADR Home = /u01/app/oracle/diag/diagtool/user_oracle/host_192545652_11:


*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------
1 DIA 48001 [dbgrfafr_1] 2008-10-24 06:58:58.915084 -05:00
1 rows fetched
ADR Home = /u01/app/oracle/diag/rdbms/ptc/ptc:
*************************************************************************
INCIDENT_ID PROBLEM_KEY CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------
34803 other 2008-04-26 22:54:46.594510 -05:00
34802 other 2008-04-26 22:53:48.400962 -05:00
33769 ORA 1578 2008-04-25 15:48:11.945343 -05:00
3 rows fetched

adrci>

E ) Health Monitor checks

• Detect file corruptions, physical and logical block corruptions, data dictionary corruptions, and more

• Health checks can be run in two ways

– Reactive – The fault diagnosability infrastructure can run health checks automatically in response to a critical error (RMAN's LIST FAILURE and ADVISE FAILURE use this information)

– Manual – The DBA can run health checks manually using the EM interface or the DBMS_HM PL/SQL package

• Health check findings and recommendations are stored in the ADR (the location is defined by the DIAGNOSTIC_DEST parameter)

• Execute the RUN_CHECK procedure, supplying the name of the check and a name for the run

• Use this query to get a list of valid check names

SQL> SELECT name FROM v$hm_check WHERE internal_check='N';

NAME

DB Structure Integrity Check

Data Block Integrity Check

Redo Integrity Check

Transaction Integrity Check

Undo Segment Integrity Check

Dictionary Integrity Check

SQL> BEGIN
       DBMS_HM.RUN_CHECK('Dictionary Integrity Check', 'my_run');
     END;
     /

PL/SQL procedure successfully completed.

We can check the results by querying the following views, or by using the adrci utility:

V$HM_FINDING
V$HM_RECOMMENDATION
V$HM_RUN
Checking using adrci utility :

SQL> select run_id from v$hm_run where name='my_run';

RUN_ID
----------
10871

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@ptc-hp12 ~]$ adrci

ADRCI: Release 11.1.0.6.0 - Beta on Sat Oct 25 01:30:18 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.

ADR base = "/u01/app/oracle"

adrci> help

HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL
There are other commands intended to be used directly by Oracle, type
"HELP EXTENDED" to see the list

adrci> show hm_run -p "run_id=10871"


**********************************************************
HM RUN RECORD 1
**********************************************************
RUN_ID 10871
RUN_NAME my_run
CHECK_NAME Dictionary Integrity Check
NAME_ID 24
MODE 0
START_TIME 2008-10-25 01:22:10.669400 -05:00
RESUME_TIME <NULL>
END_TIME 2008-10-25 01:22:20.894431 -05:00
MODIFIED_TIME 2008-10-25 01:22:20.894431 -05:00
TIMEOUT 0
FLAGS 0
STATUS 5
SRC_INCIDENT_ID 0
NUM_INCIDENTS 0
ERR_NUMBER 0
REPORT_FILE <NULL>

1 rows fetched

F ) Database Replay:

It’s about capturing the work in the database and replaying same work in the other test database.

Suppose we have a performance issue in an Oracle 11g production database for a particular job or
a particular duration. We can capture the transactions performed during that duration or for that job,
and once the capture is complete, replay the same transactions in another test database to find the
root cause.

Note : 1) To capture, the database must be in archivelog mode.

2) The process of capturing workloads will impact the performance of the database. The degree of
overhead incurred is highly dependent on the types of transactions being executed. This figure could be
as high as 30%; this is a process that you would want to carefully monitor.

What is Database Replay?

• Allows for the capture of work/transactions

• Allows for the replay of this work/transactions

• Useful for:

– Database changes

– Tuning changes

– General regression testing

• The DBMS_WORKLOAD_CAPTURE package provides a set of procedures and functions to control the capture process. Before we can initiate the capture process we need an empty directory on the "prod-11g" database server to hold the capture logs.

$ mkdir /u01/app/oracle/db_replay_capture

• Next, we create a directory object pointing to the new directory.

CONN sys/password@prod AS SYSDBA

CREATE OR REPLACE DIRECTORY db_replay_capture_dir AS '/u01/app/oracle/db_replay_capture/';

-- Make sure existing processes are complete.
SHUTDOWN IMMEDIATE
STARTUP

• Once installed, you can invoke the dbms_workload_capture.start_capture and dbms_workload_capture.finish_capture procedures to capture a SQL tuning set (a representative workload of current SQL).

BEGIN
  DBMS_WORKLOAD_CAPTURE.start_capture (name     => 'test_capture_1',
                                       dir      => 'DB_REPLAY_CAPTURE_DIR',
                                       duration => NULL);
END;
/
• Once the work is complete we can stop the capture using the FINISH_CAPTURE procedure.

CONN sys/password@prod AS SYSDBA

BEGIN
  DBMS_WORKLOAD_CAPTURE.finish_capture;
END;
/

Doing a database workload replay


• Before we can start the replay, we need to calibrate and start a replay client using the
"wrc" utility. The calibration step tells us the number of replay clients and hosts necessary
to faithfully replay the workload.

• $ wrc mode=calibrate replaydir=/u01/app/oracle/db_replay_capture



• Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:33:42 2007

• Copyright (c) 1982, 2007, Oracle. All rights reserved.


• Report for Workload in: /u01/app/oracle/db_replay_capture
• -----------------------

• Recommendation:
• Consider using at least 1 clients divided among 1 CPU(s).

• Workload Characteristics:
• - max concurrency: 1 sessions
• - total number of sessions: 3

• Assumptions:
• - 1 client process per 50 concurrent sessions
• - 4 client process per CPU
• - think time scale = 100
• - connect time scale = 100
• - synchronization = TRUE

• $

• The calibration step suggests a single client on a single CPU is enough, so we only need
to start a single replay client, which is shown below.

• $ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture



• Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:34:14 2007

• Copyright (c) 1982, 2007, Oracle. All rights reserved.


• Wait for the replay to start (09:34:14)
• The replay client pauses waiting for replay to start. We initiate replay with the following
command.

BEGIN
  DBMS_WORKLOAD_REPLAY.start_replay;
END;
/
• If you need to stop the replay before it is complete, call the CANCEL_REPLAY
procedure.

The output from the replay client includes the start and finish time of the replay operation.

• $ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture



• Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:34:14 2007

• Copyright (c) 1982, 2007, Oracle. All rights reserved.


• Wait for the replay to start (09:34:14)
• Replay started (09:34:44)
• Replay finished (09:39:15)
• $
• Once complete, we can see the DB_REPLAY_TEST_TAB table has been created and
populated in the DB_REPLAY_TEST schema.

• SQL> CONN sys/password@test AS SYSDBA


• Connected.
• SQL> SELECT table_name FROM dba_tables WHERE owner = 'DB_REPLAY_TEST';

• TABLE_NAME
• ------------------------------
• DB_REPLAY_TEST_TAB

• SQL> SELECT COUNT(*) FROM db_replay_test.db_replay_test_tab;

• COUNT(*)
• ----------
• 500000

• SQL>

G) Invisible Indexes in Oracle Database 11g Release 1

Oracle 11g allows indexes to be marked as invisible. Invisible indexes are maintained
like any other index, but they are ignored by the optimizer unless the
OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level.
Indexes can be created as invisible by using the INVISIBLE keyword, and their visibility can be toggled
using the ALTER INDEX command.

CREATE INDEX index_name ON table_name(column_name) INVISIBLE;

ALTER INDEX index_name INVISIBLE;


ALTER INDEX index_name VISIBLE;
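
To make the optimizer consider invisible indexes without changing their state, the parameter mentioned above can be set at either scope, for example:

```sql
-- For the current session only:
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- Or instance-wide:
ALTER SYSTEM SET optimizer_use_invisible_indexes = TRUE;
```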

H ) Temporary Tablespace Enhancements in Oracle Database 11g Release 1

Oracle 11g has a new view called DBA_TEMP_FREE_SPACE that displays information about temporary
tablespace usage.

SQL> SELECT * FROM dba_temp_free_space;

TABLESPACE_NAME TABLESPACE_SIZE ALLOCATED_SPACE FREE_SPACE


------------------------------ --------------- --------------- ----------
TEMP 56623104 56623104 55574528

1 row selected.

SQL>

Perform an online shrink of a temporary tablespace using the ALTER TABLESPACE command.

SQL>
SQL> ALTER TABLESPACE temp SHRINK SPACE;

Tablespace altered.
SQL> SELECT * FROM dba_temp_free_space;

TABLESPACE_NAME TABLESPACE_SIZE ALLOCATED_SPACE FREE_SPACE


------------------------------ --------------- --------------- ----------
TEMP 1114112 65536 1048576

1 row selected.

SQL>

I ) Table Compression Enhancements in Oracle Database 11g Release 1

Table compression was introduced in Oracle 9i as a space saving feature for data warehousing projects.
In 11g it is now considered a mainstream feature that is acceptable for OLTP databases. In addition to
saving storage space, compression can result in increased I/O performance and reduced memory use in
the buffer cache. These advantages do come at a cost, since compression incurs a CPU overhead, so it
won't be of benefit to everyone.

The compression clause can be specified at the tablespace, table or partition level with the following
options:

NOCOMPRESS - The table or partition is not compressed. This is the default action when no
compression clause is specified.

COMPRESS - This option is considered suitable for data warehouse systems. Compression is enabled
on the table or partition during direct-path inserts only.

COMPRESS FOR DIRECT_LOAD OPERATIONS - This option has the same effect as the simple
COMPRESS keyword.

COMPRESS FOR ALL OPERATIONS - This option is considered suitable for OLTP systems. As the
name implies, this option enables compression for all operations, including regular DML statements. This
option requires the COMPATIBLE initialization parameter to be set to 11.1.0 or higher.
The following examples show the various compression options applied at table and partition level.
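
A sketch of such examples (the table, column, and partition names are hypothetical):

```sql
-- Direct-path-only compression, suited to data warehouses:
CREATE TABLE t_dwh (id NUMBER, data VARCHAR2(100)) COMPRESS;

-- Equivalent long form:
CREATE TABLE t_dl (id NUMBER, data VARCHAR2(100)) COMPRESS FOR DIRECT_LOAD OPERATIONS;

-- Compression for all operations, suited to OLTP (requires COMPATIBLE >= 11.1.0):
CREATE TABLE t_oltp (id NUMBER, data VARCHAR2(100)) COMPRESS FOR ALL OPERATIONS;

-- Partition-level settings override the table-level default:
CREATE TABLE t_part (id NUMBER, data VARCHAR2(100))
PARTITION BY RANGE (id)
(PARTITION p1 VALUES LESS THAN (100)      COMPRESS FOR ALL OPERATIONS,
 PARTITION p2 VALUES LESS THAN (MAXVALUE) NOCOMPRESS);
```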

Default compression settings can be specified at the tablespace level using the CREATE TABLESPACE
and ALTER TABLESPACE commands. The current settings are displayed in the
DEF_TAB_COMPRESSION and COMPRESS_FOR columns of the DBA_TABLESPACES view.

CREATE TABLESPACE test_ts


DATAFILE '/u01/app/oracle/oradata/DB11G/test_ts01.dbf'
SIZE 1M
DEFAULT COMPRESS FOR ALL OPERATIONS;

SELECT def_tab_compression, compress_for


FROM dba_tablespaces
WHERE tablespace_name = 'TEST_TS';

DEF_TAB_ COMPRESS_FOR
-------- ------------------
ENABLED FOR ALL OPERATIONS

1 row selected.

SQL>

ALTER TABLESPACE test_ts DEFAULT NOCOMPRESS;

SELECT def_tab_compression, compress_for


FROM dba_tablespaces
WHERE tablespace_name = 'TEST_TS';

DEF_TAB_ COMPRESS_FOR
-------- ------------------
DISABLED

1 row selected.

SQL>

DROP TABLESPACE test_ts INCLUDING CONTENTS AND DATAFILES;

When compression is specified at multiple levels, the most specific setting is always used. As such,
partition settings always override table settings, which always override tablespace settings.

The restrictions associated with table compression include:


Compressed tables can only have columns added or dropped if the COMPRESS FOR ALL
OPERATIONS option was used.
Compressed tables must not have more than 255 columns.
Compression is not applied to lob segments.
Table compression is only valid for heap organized tables, not index organized tables.
The compression clause cannot be applied to hash or hash-list partitions. Instead, they must inherit their
compression settings from the tablespace, table or partition settings.
Table compression cannot be specified for external or clustered tables.

J ) Read-Only Tables in Oracle Database 11g Release 1

In previous Oracle releases, tables could be made to appear read-only to other users by only granting the
SELECT object privilege to them, but the tables remained read-write for the owner. Oracle 11g allows
tables to be marked as read-only using the ALTER TABLE command. It restricts both DDL and DML
commands.

ALTER TABLE table_name READ ONLY;


ALTER TABLE table_name READ WRITE;
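A quick demonstration (the table name is illustrative): DML against a read-only table fails with ORA-12081.

```sql
CREATE TABLE ro_tab (id NUMBER);
INSERT INTO ro_tab VALUES (1);
COMMIT;

ALTER TABLE ro_tab READ ONLY;

-- Any attempt to modify the data now fails:
-- UPDATE ro_tab SET id = 2;
-- ORA-12081: update operation not allowed on table ...

ALTER TABLE ro_tab READ WRITE;
UPDATE ro_tab SET id = 2;   -- succeeds again
```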

K) Query Result Cache in Oracle Database 11g Release 1

Oracle 11g allows the results of SQL queries to be cached in the SGA (result_cache_max_size) and
reused to improve performance. Set up the following schema objects to see how the SQL query cache
works.
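The setup objects are not reproduced in these notes. A minimal sketch consistent with the queries and timings that follow might look like this (the one-second sleep per call is an assumption inferred from the roughly five-second elapsed time over five rows; DBMS_LOCK.sleep requires EXECUTE on DBMS_LOCK):

```sql
CREATE TABLE qrc_tab (
  id NUMBER
);

INSERT INTO qrc_tab VALUES (1);
INSERT INTO qrc_tab VALUES (2);
INSERT INTO qrc_tab VALUES (3);
INSERT INTO qrc_tab VALUES (4);
INSERT INTO qrc_tab VALUES (5);
COMMIT;

CREATE OR REPLACE FUNCTION slow_function (p_id IN qrc_tab.id%TYPE)
  RETURN qrc_tab.id%TYPE AS
BEGIN
  -- Simulate an expensive call: one second per invocation.
  DBMS_LOCK.sleep(1);
  RETURN p_id;
END;
/
```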

Next, we query the test table using the slow function and check the elapsed time. Each run takes
approximately five seconds.
SELECT slow_function(id) FROM qrc_tab;

SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5

5 rows selected.

Elapsed: 00:00:05.15
SQL>

Adding the RESULT_CACHE hint to the query tells the server to attempt to retrieve the information from
the result cache. If the information is not present, it will cache the results of the query provided there is
enough room in the result cache. Since we have no cached results, we would expect the first run to take
approximately five seconds, but subsequent runs to be much quicker.

SELECT /*+ result_cache */ slow_function(id) FROM qrc_tab;

SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5

5 rows selected.

Elapsed: 00:00:05.20

SELECT /*+ result_cache */ slow_function(id) FROM qrc_tab;

SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5

5 rows selected.

Elapsed: 00:00:00.15
SQL>

The default action of the result cache is controlled by the RESULT_CACHE_MODE parameter. When it is
set to MANUAL, the RESULT_CACHE hint must be used for a query to access the result cache.

SHOW PARAMETER RESULT_CACHE_MODE

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
result_cache_mode string MANUAL
SQL>

If we set the RESULT_CACHE_MODE parameter to FORCE, the result cache is used by default, but we
can bypass it using the NO_RESULT_CACHE hint.

ALTER SESSION SET RESULT_CACHE_MODE=FORCE;

SELECT slow_function(id) FROM qrc_tab;

SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5

5 rows selected.

Elapsed: 00:00:00.14

SELECT /*+ no_result_cache */ slow_function(id) FROM qrc_tab;

SLOW_FUNCTION(ID)
-----------------
1
2
3
4
5

5 rows selected.

Elapsed: 00:00:05.14
SQL>

L) DDL with the WAIT Option (DDL_LOCK_TIMEOUT) in Oracle Database 11g Release 1

DDL commands require exclusive locks on internal structures. If these locks are not available the
commands return with an "ORA-00054: resource busy" error message. This can be especially frustrating
when trying to modify objects that are accessed frequently. To get round this Oracle 11g includes the
DDL_LOCK_TIMEOUT parameter, which can be set at instance or session level using the ALTER
SYSTEM and ALTER SESSION commands respectively.

The DDL_LOCK_TIMEOUT parameter indicates the number of seconds a DDL command should wait for
the locks to become available before throwing the resource busy error message. The default value is
zero. To see it in action, create a new table and insert a row, but don't commit the insert.

CREATE TABLE lock_tab (


id NUMBER
);

INSERT INTO lock_tab VALUES (1);

Leave this session alone and in a new session, set the DDL_LOCK_TIMEOUT at session level to a non-
zero value and attempt to add a column to the table.

ALTER SESSION SET ddl_lock_timeout=30;

ALTER TABLE lock_tab ADD (


description VARCHAR2(50)
);

The session will wait for 30 seconds before failing.

ALTER TABLE lock_tab ADD (


*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
If we repeat the ALTER TABLE command and commit the insert in the first session within 30 seconds, the
ALTER TABLE will return a successful message.

ALTER TABLE lock_tab ADD (


description VARCHAR2(50)
);

Table altered.
SQL>

M ) Case Sensitive Passwords in Oracle Database 11g Release 1

Case sensitive passwords (and auditing) are a default feature of newly created Oracle 11g databases.
The Database Configuration Assistant (DBCA) allows you to revert these settings back to the pre-11g
functionality during database creation.

The SEC_CASE_SENSITIVE_LOGON initialization parameter gives control over case sensitive


passwords. If existing applications struggle to authenticate against 11g, you can use the ALTER SYSTEM
command to turn off this functionality.
SQL> SHOW PARAMETER SEC_CASE_SENSITIVE_LOGON

NAME TYPE VALUE


------------------------------------ ----------- ------------------------------
sec_case_sensitive_logon boolean TRUE
SQL>
SQL> ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;

System altered.

SQL>

The following code demonstrates the case sensitive password functionality. First, it resets the
SEC_CASE_SENSITIVE_LOGON initialization parameter to TRUE and creates a new user with a mixed
case password.

CONN / AS SYSDBA
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = TRUE;

CREATE USER test2 IDENTIFIED BY Test2;


GRANT CONNECT TO test2;

We can see the case sensitive password functionality in operation if we attempt to connect to the new
user with both the correct and incorrect case password.

SQL> CONN test2/Test2


Connected.
SQL> CONN test2/test2
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.


SQL>

By switching the SEC_CASE_SENSITIVE_LOGON initialization parameter to FALSE we are able to


connect using both variations of the password.

CONN / AS SYSDBA
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;

SQL> CONN test2/Test2


Connected.
SQL> CONN test2/test2
Connected.
SQL>

The important thing to remember here is even when case sensitive passwords are not enabled, the
original case of the password is retained so it can be used if case sensitivity is subsequently enabled. The
following code disables case sensitivity and creates a new user with a mixed case password.

CONN / AS SYSDBA
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = FALSE;
CREATE USER test3 IDENTIFIED BY Test3;
GRANT CONNECT TO test3;
As you would expect, connection to the user is possible regardless of the case of the password.
SQL> CONN test3/Test3
Connected.
SQL> CONN test3/test3
Connected.
SQL>

If we enable case sensitivity, authentication is done against the mixed case password.

CONN / AS SYSDBA
ALTER SYSTEM SET SEC_CASE_SENSITIVE_LOGON = TRUE;

SQL> CONN test3/Test3


Connected.
SQL> CONN test3/test3
ERROR:
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.


SQL>

The DBA_USERS view includes a PASSWORD_VERSIONS column that indicates the database release
in which the password was created or last modified.

SQL> SELECT username, password_versions FROM dba_users;

USERNAME PASSWORD
------------------------------ --------
TEST 10G 11G
SPATIAL_WFS_ADMIN_USR 10G 11G
SPATIAL_CSW_ADMIN_USR 10G 11G
APEX_PUBLIC_USER 10G 11G
SYSTEM 10G 11G
SYS 10G 11G
MGMT_VIEW 10G 11G
OUTLN 10G 11G

32 rows selected.

SQL>

Users imported from a 10g database have a PASSWORD_VERSIONS value of "10G" and maintain case
insensitive passwords independent of the SEC_CASE_SENSITIVE_LOGON parameter setting. Their
passwords become case sensitive as soon as they are changed, assuming the
SEC_CASE_SENSITIVE_LOGON parameter is set to TRUE.

The ignorecase parameter of the orapwd utility allows control over case sensitivity of passwords in the
password file. The default value is "n", meaning the passwords are case sensitive. When privileged users
(SYSDBA & SYSOPER) are imported from a previous release their passwords are included in the
password file. These users will retain case insensitive passwords until the password is modified.

To create case insensitive passwords in the password file, recreate the password file using the
ignorecase=y option.

$ orapwd file=orapwDB11Gb entries=100 ignorecase=y password=mypassword

The passwords associated with database links are also case sensitive, which presents some issues when
connecting between different releases:
11g to 11g: The database link must be created with the password in the correct case to match the remote
user's password.

11g to Pre-11g: The database link can be created with the password in any case as case is ignored by
the remote database.

Pre-11g to 11g: The remote user must have its password modified to be totally in upper case, as this is
how it will be stored and passed by the Pre-11g database.

N) AUDITING :

• Auditing is enabled by default in 11g

• AUDIT_SYS_OPERATIONS is off by default

• Configured to audit to the database

– Goes to sys.aud$ table

• Audits AUDIT ROLE by default

– This role contains a number of SQL statements

• Must be disabled by hand if desired

– Set AUDIT_TRAIL to none

• The AUDIT Role SQL statements are audited by default

ALTER ANY PROCEDURE
ALTER ANY TABLE
ALTER DATABASE
ALTER PROFILE
ALTER ROLE BY ACCESS
ALTER SYSTEM
ALTER USER
AUDIT SYSTEM
AUDIT SYSTEM BY ACCESS
CREATE ANY JOB
CREATE ANY LIBRARY
CREATE ANY PROCEDURE
CREATE ANY TABLE
CREATE ANY USER
CREATE EXTERNAL JOB
CREATE PUBLIC DATABASE LINK
CREATE SESSION
DROP ANY PROCEDURE
DROP ANY TABLE
DROP PROFILE
DROP USER
EXEMPT ACCESS POLICY
GRANT ANY OBJECT PRIVILEGE
GRANT ANY PRIVILEGE
GRANT ANY ROLE
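Disabling this default auditing, as mentioned above, can be sketched as follows. AUDIT_TRAIL is a static parameter, so the change must go to the SPFILE and takes effect after a restart:

```sql
ALTER SYSTEM SET audit_trail = NONE SCOPE = SPFILE;
-- Restart the instance for the change to take effect.
```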

O ) New initialization parameters to find data corruption

New initialization parameter: DB_ULTRA_SAFE

• This parameter controls the setting of other related parameters:

– DB_BLOCK_CHECKING

– DB_BLOCK_CHECKSUM

– DB_LOST_WRITE_PROTECT

• Also controls other data protection behavior with the Oracle database

• Offers an integrated mechanism to control various levels of protection against data corruption
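As a sketch, the umbrella parameter can be set as follows. It is static, so the change goes to the SPFILE and takes effect after a restart; valid values are OFF, DATA_ONLY and DATA_AND_INDEX:

```sql
ALTER SYSTEM SET db_ultra_safe = DATA_AND_INDEX SCOPE = SPFILE;
```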

DB_BLOCK_CHECKSUM
• Determines whether the database writer and the direct loader will calculate a checksum
and store it in the cache header of every data block when writing it to disk

• Syntax:

DB_BLOCK_CHECKSUM = { OFF | FALSE | TYPICAL | TRUE | FULL }

• Values:

• OFF or FALSE

• TYPICAL (Oracle recommended)

• FULL or TRUE

• Checksums allow Oracle to detect corruption caused by underlying disks, storage systems, or I/O
systems

• TYPICAL mode causes only a 1% to 2% overhead

• FULL mode causes 4% to 5% overhead

Lost Write Detection

A lost write happens when Oracle writes a block to disk and the I/O subsystem signals the write is
complete, even though it isn't. When the block is next read the stale data is returned, which can result in
data corruption.

The DB_LOST_WRITE_PROTECT parameter can provide protection against lost writes depending on
the value set:
NONE - No lost write protection. The default.
TYPICAL - The instance logs buffer cache reads for read/write tablespaces in the redo log. This has an
approximate overhead of 5-10% in a RAC environment.
FULL - The instance logs buffer cache reads for read/write and read-only tablespaces in the redo log.
This has an approximate overhead of 20% in a RAC environment.
Lost write detection is most effective in Data Guard environments. Once the primary and standby
databases are protected, the SCNs of blocks applied to the standby database are compared to the SCN
logged in the redo logs. If the SCN on the primary database is smaller than the SCN on the standby
database, a lost write on the primary database has occurred and is signaled with an external error (ORA-
752). At this point you should fail over to the standby database. If the SCN on the primary database is
bigger than on the standby database, a lost write on the standby database has occurred and is signaled
with an internal error (ORA-600 [3020]). At this point the standby database should be recreated.

Lost write protection can also be used outside Data Guard, although there is no signal that the lost write
has occurred. If you suspect a problem due to inconsistent data, you must recover the database to the
SCN of the stale block from a backup taken before the suspected problem occurred. This restore
operation will generate the lost write error (ORA-752). If the error is detected during a recovery, you have
no alternative but to open the database with the RESETLOGS option. All data after this point is lost.

• DB_BLOCK_CHECKING

• Specifies whether or not Oracle performs block checking for database blocks

• Syntax:

DB_BLOCK_CHECKING = {FALSE | OFF | LOW | MEDIUM | TRUE | FULL }

• Values:

– OFF or FALSE

– LOW

– MEDIUM

– FULL or TRUE

• The values FALSE and TRUE are supported for backwards compatibility.



P ) FLASHBACK FEATURES

1) Flashback Data Archives

• Flashback Data Archives provide the ability to track changes that occur over the lifetime

of an individual table

• Unlike other flashback features, Flashback Data Archive does not rely upon undo records

or archivelog files

History records are stored in an archive tablespace and can be kept for years

• Restrictions

– Cannot drop/rename/truncate a table assigned to a Flashback Archive

• Alter it first to unassign it from the archive

• Usage

– Use the “AS OF” timestamp parameter

• Not subject to undo information anymore

CREATE TABLESPACE fda_ts
  DATAFILE '/u01/app/oracle/oradata/DB11G/fda1_01.dbf'
  SIZE 1M AUTOEXTEND ON NEXT 1M;

CREATE FLASHBACK ARCHIVE DEFAULT fda_1year TABLESPACE fda_ts
  QUOTA 10G RETENTION 1 YEAR;
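Assigning a table to the archive and querying its history might then be sketched as (the table name is illustrative):

```sql
-- Assign a hypothetical table to the default flashback archive.
CREATE TABLE fda_tab (id NUMBER) FLASHBACK ARCHIVE;

-- History is queried with AS OF, independent of undo retention.
SELECT * FROM fda_tab AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

-- Unassign the table before dropping, renaming or truncating it.
ALTER TABLE fda_tab NO FLASHBACK ARCHIVE;
```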

LogMiner Enhancements

In previous versions of Oracle the LogMiner viewer was a separate Java-based console, but in
Oracle 11g it has been incorporated into Enterprise Manager and integrated with the new
Flashback Transaction feature, making it simple to recover transactions that have had an
undesirable effect. The LogMiner functionality is accessed using the "View and Manage
Transactions" link on the "Availability" tab.

Q ) Performance features

1 ) Enable Automatic Memory Management

• Set the following parameters to the desired values:


– MEMORY_TARGET (includes SGA + PGA )

– MEMORY_MAX_TARGET

• Also set the following parameters:

– SGA_TARGET = 0

– PGA_AGGREGATE_TARGET = 0

This allows the SGA and instance PGA to be dynamically tuned up and down, as required

• V$MEMORY_DYNAMIC_COMPONENTS

– Shows the current sizes of dynamically tuned memory components

– Includes total sizes of the SGA and the instance PGA

– All sizes are expressed in bytes

• V$MEMORY_TARGET_ADVICE

– Provides information about how the MEMORY_TARGET parameter should be

sized

– Values are based on current sizes and satisfaction metrics
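A sketch of enabling AMM (the sizes are illustrative; MEMORY_MAX_TARGET is static, so the changes go to the SPFILE and take effect after a restart):

```sql
ALTER SYSTEM SET memory_max_target = 1G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 800M SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;

-- After the restart, check the dynamically tuned components and the advice view.
SELECT component, current_size FROM v$memory_dynamic_components;
SELECT * FROM v$memory_target_advice ORDER BY memory_size;
```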

2) Pending Statistics
In previous database versions, new optimizer statistics were automatically published when they were
gathered. In 11g this is still the default action, but you now have the option of keeping the newly gathered
statistics in a pending state until you choose to publish them.

The DBMS_STATS.GET_PREFS function allows you to check the 'PUBLISH' attribute to see if statistics
are automatically published. The default value of TRUE means they are automatically published, while
FALSE indicates they are held in a pending state.

SELECT DBMS_STATS.get_prefs('PUBLISH') FROM dual;


DBMS_STATS.GET_PREFS('PUBLISH')
-------------------------------------------
TRUE

1 row selected.

SQL>
The 'PUBLISH' attribute is reset using the DBMS_STATS.SET_TABLE_PREFS procedure.
-- New statistics for SCOTT.EMP are kept in a pending state.
EXEC DBMS_STATS.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'false');

-- New statistics for SCOTT.EMP are published immediately.


EXEC DBMS_STATS.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'true');

Pending statistics are visible using the [DBA|ALL|USER]_TAB_PENDING_STATS and [DBA|ALL|


USER]_IND_PENDING_STATS views.

The DBMS_STATS package allows you to publish or delete pending statistics, as show below.
-- Publish all pending statistics.
EXEC DBMS_STATS.publish_pending_stats(NULL, NULL);

-- Publish pending statistics for a specific object.


EXEC DBMS_STATS.publish_pending_stats('SCOTT','EMP');

-- Delete pending statistics for a specific object.


EXEC DBMS_STATS.delete_pending_stats('SCOTT','EMP');

The optimizer is capable of using pending statistics if the OPTIMIZER_PENDING_STATISTICS


initialization parameter, which defaults to FALSE, is set to TRUE. Setting this parameter to TRUE at
session level allows you to test the impact of pending statistics before publishing them.
ALTER SESSION SET OPTIMIZER_PENDING_STATISTICS=TRUE;

Pending statistics can be transferred between databases by exporting them using the
DBMS_STATS.EXPORT_PENDING_STATS procedure.

3) Adaptive Cursor Sharing in Oracle Database 11g Release 1


DBAs are always encouraging developers to use bind variables, but when bind variables are used against
columns containing skewed data they sometimes lead to less than optimum execution plans. This is
because the optimizer peeks at the bind variable value during the hard parse of the statement, so the
value of a bind variable when the statement is first presented to the server can affect every execution of
the statement, regardless of the subsequent bind variable values.

Oracle 11g uses Adaptive Cursor Sharing to solve this problem by allowing the server to compare the
effectiveness of execution plans between executions with different bind variable values. If it notices
suboptimal plans, it allows certain bind variable values, or ranges of values, to use alternate execution
plans for the same statement. This functionality requires no additional configuration. The following code
provides an example of adaptive cursor sharing.

First we create and populate a test table.

DROP TABLE acs_test_tab;

CREATE TABLE acs_test_tab (


id NUMBER,
record_type NUMBER,
description VARCHAR2(50),
CONSTRAINT acs_test_tab_pk PRIMARY KEY (id)
);

CREATE INDEX acs_test_tab_record_type_i ON acs_test_tab(record_type);

DECLARE
TYPE t_acs_test_tab IS TABLE OF acs_test_tab%ROWTYPE;
l_tab t_acs_test_tab := t_acs_test_tab();

BEGIN
FOR i IN 1 .. 100000 LOOP
l_tab.extend;
IF MOD(i,2)=0 THEN
l_tab(l_tab.last).record_type := 2;
ELSE
l_tab(l_tab.last).record_type := i;
END IF;

l_tab(l_tab.last).id := i;
l_tab(l_tab.last).description := 'Description for ' || i;
END LOOP;

FORALL i IN l_tab.first .. l_tab.last


INSERT INTO acs_test_tab VALUES l_tab(i);

COMMIT;
END;
/

EXEC DBMS_STATS.gather_table_stats(USER, 'acs_test_tab', method_opt=>'for all indexed columns size skewonly', cascade=>TRUE);

The data in the RECORD_TYPE column is skewed, as shown by the presence of a histogram against the
column.

SELECT column_name, histogram FROM user_tab_cols WHERE table_name = 'ACS_TEST_TAB';

COLUMN_NAME HISTOGRAM
------------------------------ ---------------
ID NONE
RECORD_TYPE HEIGHT BALANCED
DESCRIPTION NONE

3 rows selected.

SQL>

Next, we query the table and limit the rows returned based on the RECORD_TYPE column with a literal
value of "1".

SET LINESIZE 200


SELECT MAX(id) FROM acs_test_tab WHERE record_type = 1;
SELECT * FROM TABLE(DBMS_XPLAN.display_cursor);

MAX(ID)
----------
1

1 row selected.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID cgt92vnmcytb0, child number 0
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = 1

Plan hash value: 3987223107

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
| 2 | TABLE ACCESS BY INDEX ROWID| ACS_TEST_TAB | 1 | 9 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ACS_TEST_TAB_RECORD_TYPE_I | 1 | | 1 (0)|
00:00:01 |
-----------------------------------------------------------------------------------------------------------

This query has used the index as we would expect. Now we repeat the query, but this time use a bind
variable.

VARIABLE l_record_type NUMBER;


EXEC :l_record_type := 1;

SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type;


SELECT * FROM TABLE(DBMS_XPLAN.display_cursor);

MAX(ID)
----------
1

1 row selected.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID 9bmm6cmwa8saf, child number 0
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type

Plan hash value: 3987223107

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
| 2 | TABLE ACCESS BY INDEX ROWID| ACS_TEST_TAB | 1| 9| 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ACS_TEST_TAB_RECORD_TYPE_I | 1 | | 1 (0)|
00:00:01 |
-----------------------------------------------------------------------------------------------------------

So we ran what amounted to the same query, and got the same result and execution plan. The optimizer
picked an execution plan that it thinks is optimal for the query by peeking at the value of the bind variable.
The only problem is, this would be totally the wrong thing to do for other bind values.

VARIABLE l_record_type NUMBER;


EXEC :l_record_type := 2;

SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type;


SELECT * FROM TABLE(DBMS_XPLAN.display_cursor);

MAX(ID)
----------
100000

1 row selected.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------
SQL_ID 9bmm6cmwa8saf, child number 0
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type

Plan hash value: 3987223107

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
| 2 | TABLE ACCESS BY INDEX ROWID| ACS_TEST_TAB | 1 | 9 | 2 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | ACS_TEST_TAB_RECORD_TYPE_I | 1 | | 1 (0)|
00:00:01 |
-----------------------------------------------------------------------------------------------------------

If we look at the V$SQL view entry for this query, we can see the IS_BIND_SENSITIVE column is marked
as 'Y', so Oracle is aware this query may require differing execution plans depending on the bind variable
values, but currently the IS_BIND_AWARE column is marked as 'N', so Oracle has not acted on this yet.

SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware


FROM v$sql
WHERE sql_text = 'SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type';

SQL_ID        CHILD_NUMBER I I
------------- ------------ - -
9bmm6cmwa8saf            0 Y N

1 row selected.
SQL>

If we run the statement using the second bind variable again, we can see that Oracle has decided to use
an alternate, more efficient plan for this statement.

VARIABLE l_record_type NUMBER;


EXEC :l_record_type := 2;

SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type;


SELECT * FROM TABLE(DBMS_XPLAN.display_cursor);

MAX(ID)
----------
100000

1 row selected.

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
SQL_ID 9bmm6cmwa8saf, child number 1
-------------------------------------
SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type

Plan hash value: 509473618

-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 138 (100)| |
| 1 | SORT AGGREGATE | | 1| 9| | |
|* 2 | TABLE ACCESS FULL| ACS_TEST_TAB | 48031 | 422K| 138 (2)| 00:00:02 |
-----------------------------------------------------------------------------------

This change in behavior is also reflected in the V$SQL view, which now has the IS_BIND_AWARE
column marked as "Y".

SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware


FROM v$sql
WHERE sql_text = 'SELECT MAX(id) FROM acs_test_tab WHERE record_type = :l_record_type';

SQL_ID        CHILD_NUMBER I I
------------- ------------ - -
9bmm6cmwa8saf            0 Y N
9bmm6cmwa8saf            1 Y Y

2 rows selected.

SQL>

4 ) ADDM Enhancements for Oracle 11g

• Global analysis on all instances of an Oracle RAC cluster


– ADDM for RAC considers database time as the sum of database times for all

instances

– Findings that are significant are reported at the cluster level

– Targeted analysis available at several granularity levels or specific targets

• For example: The I/O levels of each cluster node may be insignificant when considered

locally, but the aggregate I/O may be significant for the cluster as a whole

New ADDM views

DBA_ADDM_FDG_BREAKDOWN

DBA_ADDM_FINDINGS

DBA_ADDM_INSTANCES

DBA_ADDM_SYSTEM_DIRECTIVES

DBA_ADDM_TASK_DIRECTIVES

DBA_ADDM_TASKS

5) Automatic SQL tuning.


SQL Tuning Advisor

• Collects SQL Statements suspect of causing high load

• Attempts to improve execution plans

• Implement changes and generate better explain plans

• Works well for repeating SQL

– Does not analyze:

• Ad hoc SQL

• Parallel queries

• Recursive SQL

• DML/DDL

Changes considered

• Refreshing statistics

• Changing init.ora parameters

• Query rewrite

• Drop/add indexes and tables

• SQL plan baselines

Queries considered

• SQLSets

• Top SQL from past week

• Top SQL for any day in past week

• Top SQL for any hour

• Top SQL by average single execution

Core enhancements

• SQL Profiling

– Custom information to build a better explain plan

• Requires minimum 3x performance improvement to implement plan

– Stored persistently in the database

– Allows for different and better explain plans without altering the SQL Text

• SQL Tuning Advisor

– Produces statement-specific tuning advice

SQL Tuning Optimizer

• Gets SQL from SQL Tuning Advisor or manual submission

• Uses SQL Profile

• Gets history from AWR

• Does what-if analysis


• Produces new profile, new explain plan, and zero or more tuning recommendations

• Passed back to SQL Tuning Advisor for user input

6 Automatic Workload Repository Baselines

• Contains performance data from a specific time period

• Baseline snapshots are excluded from the AWR purging process

• Snapshots are retained indefinitely

• Three types of baselines:

– Fixed

– Moving Window

– Baseline Template

Fixed Baseline

• Corresponds to a fixed, contiguous time period

• Should represent the system operating at an optimal level

• Compare to periods of poor performance to analyze performance degradation over time

Moving Window Baseline


• Corresponds to all AWR data that exists within the AWR retention period

• A system-defined moving window baseline is automatically maintained by Oracle

• Default window size is eight days (corresponds to the default AWR retention period)

• For a larger window size, first increase AWR retention period

Baseline Template

Two types of baseline templates

• Single: The single baseline template can be used to create a baseline for a single

contiguous future time period

* Useful when time period is known beforehand

• Repeating: The repeating baseline template can be used to create and drop baselines

based on a repeating time schedule

* Useful when capturing a contiguous time period on an ongoing basis
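Creating a fixed baseline between two existing snapshots might be sketched with the DBMS_WORKLOAD_REPOSITORY package (the snapshot IDs and baseline name are illustrative):

```sql
BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline(
    start_snap_id => 100,
    end_snap_id   => 105,
    baseline_name => 'peak_load_bl');
END;
/
```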

7) Oracle 11g includes three automated database maintenance tasks:

Automatic Optimizer Statistics Collection - Gathers stale or missing statistics for all schema objects. The
task name is 'auto optimizer stats collection'.

Automatic Segment Advisor - Identifies segments that could be reorganized to save space. The task
name is 'auto space advisor'.

Automatic SQL Tuning Advisor - Identifies and attempts to tune high load SQL. The task name is 'sql
tuning advisor'.

These tasks run during maintenance windows that are scheduled to open overnight.

Relevant Views

The following views display information related to the automated database maintenance tasks:
DBA_AUTOTASK_CLIENT_JOB
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_JOB_HISTORY
DBA_AUTOTASK_WINDOW_CLIENTS
DBA_AUTOTASK_CLIENT_HISTORY
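The tasks can be switched off and on per client with the DBMS_AUTO_TASK_ADMIN package, using the task names listed above; a sketch:

```sql
-- Disable the automatic SQL tuning task in all maintenance windows.
BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/

-- Re-enable it later with the matching ENABLE call.
BEGIN
  DBMS_AUTO_TASK_ADMIN.enable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;
/
```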

8 Partitioning enhancements

New partitioning options

• Interval partitioning

• Reference partitioning

• System partitioning

• Virtual column-based partitioning

• Fine-grained partitioning of OLAP cubes
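As a sketch, interval partitioning extends range partitioning by creating new partitions automatically as data arrives; only the first partition must be defined (the table name and dates are illustrative):

```sql
CREATE TABLE interval_tab (
  id         NUMBER,
  created_on DATE
)
PARTITION BY RANGE (created_on)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_first VALUES LESS THAN (TO_DATE('01-JAN-2008', 'DD-MON-YYYY'))
);
```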


9
