Oracle 10g New Features
This article discusses the new features that automate the tuning of SQL statements in Oracle 10g: Overview SQL Tuning Advisor Managing SQL Profiles SQL Tuning Sets Useful Views
Overview
In its normal mode the query optimizer needs to make decisions about execution plans in a very short time. As a result it may not always be able to obtain enough information to make the best decision. Oracle 10g allows the optimizer to run in tuning mode, where it can gather additional information and make recommendations about how specific statements can be tuned further. This process may take several minutes for a single statement, so it is intended to be used on high-load resource-intensive statements. In tuning mode the optimizer performs the following analysis: Statistics Analysis - The optimizer recommends the gathering of statistics on objects with missing or stale statistics. Additional statistics for these objects are stored in an SQL profile. SQL Profiling - The optimizer may be able to improve performance by gathering additional statistics and altering session specific parameters such as the OPTIMIZER_MODE. If such improvements are possible the information is stored in an SQL profile. If accepted, this information can then be used by the optimizer when running in normal mode. Unlike a stored outline, which fixes the execution plan, an SQL profile may still be of benefit when the contents of the table alter drastically. Even so, it's sensible to update profiles periodically. SQL profiling is not performed when the optimizer is run in limited mode. Access Path Analysis - The optimizer investigates the effect of new or modified indexes on the access path. Its index recommendations relate to a specific statement, so where necessary it will also suggest the use of the SQL Access Advisor to check the impact of these indexes on a representative SQL workload. SQL Structure Analysis - The optimizer suggests alternatives for SQL statements that contain structures that may impact on performance. The implementation of these suggestions requires human intervention to check their validity.
The automatic SQL tuning features are accessible from Enterprise Manager on the "Advisor Central" page or from PL/SQL using the DBMS_SQLTUNE package. This article will focus on the PL/SQL API as the Enterprise Manager interface is reasonably intuitive.
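As a brief sketch of the PL/SQL route, a tuning task can be created, executed and reported on as follows. The SQL text, task name and time limit are illustrative:

DECLARE
  l_task_name VARCHAR2(100);
BEGIN
  -- Create a comprehensive tuning task for a single statement.
  l_task_name := DBMS_SQLTUNE.create_tuning_task (
                   sql_text    => 'SELECT e.*, d.dname FROM emp e, dept d WHERE e.deptno = d.deptno',
                   scope       => DBMS_SQLTUNE.scope_comprehensive,
                   time_limit  => 60,
                   task_name   => 'emp_dept_tuning_task',
                   description => 'Tuning task for an EMP/DEPT join.');

  -- Run the analysis.
  DBMS_SQLTUNE.execute_tuning_task(task_name => 'emp_dept_tuning_task');
END;
/

-- Display the findings and recommendations.
SET LONG 10000
SELECT DBMS_SQLTUNE.report_tuning_task('emp_dept_tuning_task') AS recommendations FROM dual;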
Overview
The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each of the identified issues it locates the root cause and provides recommendations for correcting the problem. An ADDM analysis task is performed and its findings and recommendations stored in the database every time an AWR snapshot is taken provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. The ADDM analysis includes: CPU load Memory usage I/O usage Resource intensive SQL Resource intensive PL/SQL and Java RAC issues Application issues Database configuration issues Concurrency issues Object contention
are written to two locations. In summary ASM provides the following functionality: Manages groups of disks, called disk groups. Manages disk redundancy within a disk group. Provides near-optimal I/O balancing without any manual tuning. Enables management of database objects without specifying mount points and filenames. Supports large files.
The repository is a source of information for several other Oracle 10g features including: Automatic Database Diagnostic Monitor SQL Tuning Advisor Undo Advisor Segment Advisor
Snapshots
By default snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,  -- Minutes (= 30 Days). Current value retained if NULL.
    interval  => 30);    -- Minutes. Current value retained if NULL.
END;
/
Oracle 10g has introduced the DBMS_FILE_TRANSFER package which provides an API for copying binary files between database servers. Common Usage Notes COPY_FILE GET_FILE PUT_FILE
COPY_FILE
The COPY_FILE procedure allows you to copy binary files from one location to another on the same server.
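A minimal sketch, assuming the directory objects DB_FILES_DIR1 and DB_FILES_DIR2 and the file my_file.dat already exist. Note that the file size must be a multiple of 512 bytes:

BEGIN
  -- Copy a file between two directory objects on the local server.
  DBMS_FILE_TRANSFER.copy_file (
    source_directory_object      => 'DB_FILES_DIR1',
    source_file_name             => 'my_file.dat',
    destination_directory_object => 'DB_FILES_DIR2',
    destination_file_name        => 'my_file_copy.dat');
END;
/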
GET_FILE
The GET_FILE procedure allows you to copy binary files from a remote server to the local server.
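A sketch of its use, assuming a database link called remote_db_link pointing at the remote server and suitable directory objects at both ends:

BEGIN
  -- Pull a file from the remote server, identified by a database link.
  DBMS_FILE_TRANSFER.get_file (
    source_directory_object      => 'DB_FILES_DIR1',
    source_file_name             => 'my_file.dat',
    source_database              => 'remote_db_link',
    destination_directory_object => 'DB_FILES_DIR2',
    destination_file_name        => 'my_file.dat');
END;
/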
PUT_FILE
The PUT_FILE procedure allows you to copy binary files from the local server to a remote server.
Flashback Transaction Query Flashback Table Flashback Drop (Recycle Bin) Flashback Database Flashback Query Functions
Flashback Query
Flashback Query allows the contents of a table to be queried with reference to a specific point in time, using the AS OF clause. Essentially it is the same as the DBMS_FLASHBACK functionality of Oracle 9i, but in a more convenient form. For example: CREATE TABLE flashback_query_test ( id NUMBER(10) ); SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;

CURRENT_SCN TO_CHAR(SYSTIMESTAM
----------- -------------------
     722452 2004-03-29 13:34:12
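Using the SCN or timestamp captured above, the contents of the table at that point in time can then be queried with the AS OF clause:

-- Query the table as it was at a specific timestamp.
SELECT COUNT(*)
FROM   flashback_query_test AS OF TIMESTAMP TO_TIMESTAMP('2004-03-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS');

-- Query the table as it was at a specific SCN.
SELECT COUNT(*)
FROM   flashback_query_test AS OF SCN 722452;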
Flashback Table
The FLASHBACK TABLE command allows point in time recovery of individual tables subject to the following requirements: You must have either the FLASHBACK ANY TABLE system privilege or have FLASHBACK object privilege on the table. You must have SELECT, INSERT, DELETE, and ALTER privileges on the table. There must be enough information in the undo tablespace to complete the operation. Row movement must be enabled on the table (ALTER TABLE tablename ENABLE ROW MOVEMENT;).
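With those requirements met, a table can be flashed back to an earlier timestamp or SCN. A sketch using an assumed table and illustrative values:

ALTER TABLE flashback_table_test ENABLE ROW MOVEMENT;

-- Return the table to its state 10 minutes ago.
FLASHBACK TABLE flashback_table_test TO TIMESTAMP SYSTIMESTAMP - INTERVAL '10' MINUTE;

-- Alternatively, return the table to its state at a specific SCN.
FLASHBACK TABLE flashback_table_test TO SCN 722452;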
The recycle bin is a logical collection of previously dropped objects, with access tied to the DROP privilege. The contents of the recycle bin can be shown using the SHOW RECYCLEBIN command and purged using the PURGE TABLE command. As a result, a previously dropped table can be recovered from the recycle bin:
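For example, assuming a table called flashback_drop_test has just been dropped, it can be restored, optionally under a new name:

DROP TABLE flashback_drop_test;
SHOW RECYCLEBIN

-- Restore the dropped table.
FLASHBACK TABLE flashback_drop_test TO BEFORE DROP;

-- Alternatively, restore it under a new name.
FLASHBACK TABLE flashback_drop_test TO BEFORE DROP RENAME TO flashback_drop_test_old;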
Flashback Database
The FLASHBACK DATABASE command is a fast alternative to performing an incomplete recovery. In order to flashback the database you must have the SYSDBA privilege and the flash recovery area must have been prepared in advance. If the database is in NOARCHIVELOG mode it must be switched to ARCHIVELOG mode:

CONN sys/password AS SYSDBA
ALTER SYSTEM SET log_archive_dest_1='location=d:\oracle\oradata\DB10G\archive\' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_format='ARC%S_%R.%T' SCOPE=SPFILE;
ALTER SYSTEM SET log_archive_start=TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ARCHIVE LOG START
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

Flashback must be enabled before any flashback operations are performed:

CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT EXCLUSIVE
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
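With flashback enabled, the database can subsequently be flashed back while mounted; the target time below is illustrative:

CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT EXCLUSIVE

-- Return the whole database to its state one hour ago.
FLASHBACK DATABASE TO TIMESTAMP SYSDATE - 1/24;

ALTER DATABASE OPEN RESETLOGS;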
The SQL_BIND and SQL_TEXT columns are only populated when the AUDIT_TRAIL=DB_EXTENDED initialization parameter is set:
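As a sketch, extended auditing might be configured and inspected as follows; the audited object is illustrative:

CONN sys/password AS SYSDBA
ALTER SYSTEM SET audit_trail=DB_EXTENDED SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP

-- Audit queries against an example table.
AUDIT SELECT ON scott.emp BY ACCESS;

-- Later, inspect the captured statements and bind values.
SELECT username, sql_text, sql_bind FROM dba_audit_trail;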
DBMS_CRYPTO
The DBMS_CRYPTO package is a replacement for the DBMS_OBFUSCATION_TOOLKIT package available in Oracle 8i and 9i. The new package is easier to use and contains more cryptographic algorithms: Cryptographic algorithms - DES, 3DES, AES, RC4, 3DES_2KEY Padding forms - PKCS5, zeroes Block cipher chaining modes - CBC, CFB, ECB, OFB Cryptographic hash algorithms - MD5, SHA-1, MD4 Keyed hash (MAC) algorithms - HMAC_MD5, HMAC_SH1 Cryptographic pseudo-random number generator - RAW, NUMBER, BINARY_INTEGER Database types - RAW, CLOB, BLOB
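A brief sketch of the new API, hashing a string with MD5. The data is purely illustrative, and EXECUTE privilege on DBMS_CRYPTO must be granted to the calling user:

SET SERVEROUTPUT ON
DECLARE
  l_src  RAW(100) := UTL_I18N.string_to_raw('Secret text', 'AL32UTF8');
  l_hash RAW(20);
BEGIN
  -- Produce an MD5 hash of the source data.
  l_hash := DBMS_CRYPTO.hash(src => l_src, typ => DBMS_CRYPTO.hash_md5);
  DBMS_OUTPUT.put_line('MD5 hash: ' || RAWTOHEX(l_hash));
END;
/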
Test Table
The following examples use the table defined below. CREATE TABLE test1 AS SELECT * FROM all_objects WHERE 1=2;
Optional Clauses
The MATCHED and NOT MATCHED clauses are now optional, making all of the following examples valid.

-- Both clauses present.
MERGE INTO test1 a
  USING all_objects b
    ON (a.object_id = b.object_id)
  WHEN MATCHED THEN
    UPDATE SET a.status = b.status
  WHEN NOT MATCHED THEN
    INSERT (object_id, status)
    VALUES (b.object_id, b.status);
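Either clause can now appear on its own, giving an update-only or insert-only merge:

-- Update-only merge.
MERGE INTO test1 a
  USING all_objects b
    ON (a.object_id = b.object_id)
  WHEN MATCHED THEN
    UPDATE SET a.status = b.status;

-- Insert-only merge.
MERGE INTO test1 a
  USING all_objects b
    ON (a.object_id = b.object_id)
  WHEN NOT MATCHED THEN
    INSERT (object_id, status)
    VALUES (b.object_id, b.status);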
Getting Started
For the examples to work we must first unlock the SCOTT account and create a directory object it can access: CONN sys/password@db10g AS SYSDBA ALTER USER scott IDENTIFIED BY tiger ACCOUNT UNLOCK; GRANT CREATE ANY DIRECTORY TO scott; CREATE OR REPLACE DIRECTORY test_dir AS '/u01/app/oracle/oradata/'; GRANT READ, WRITE ON DIRECTORY test_dir TO scott;
Table Exports/Imports
The TABLES parameter is used to specify the tables that are to be exported. The following is an example of the table export and import syntax: expdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=expdpEMP_DEPT.log impdp scott/tiger@db10g tables=EMP,DEPT directory=TEST_DIR dumpfile=EMP_DEPT.dmp logfile=impdpEMP_DEPT.log For example output files see expdpEMP_DEPT.log and impdpEMP_DEPT.log. The TABLE_EXISTS_ACTION=APPEND parameter allows data to be imported into existing tables.
Schema Exports/Imports
The OWNER parameter of exp has been replaced by the SCHEMAS parameter which is used to specify the schemas to be exported. The following is an example of the schema export and import syntax: expdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=expdpSCOTT.log impdp scott/tiger@db10g schemas=SCOTT directory=TEST_DIR dumpfile=SCOTT.dmp logfile=impdpSCOTT.log
The whole migration process is beyond the scope of this article so please refer to the Upgrading a Database to the New Oracle Database 10g Release document for further information.
Direct upgrades to 10g are possible from existing databases with versions listed in the table below. Upgrades from other versions are supported only via intermediate upgrades to a supported upgrade version.

Original Version  Upgrade Script
8.0.6             u0800060.sql
8.1.7             u0801070.sql
9.0.1             u0900010.sql
9.2.0             u0902000.sql
The preferred upgrade method is to use the Database Upgrade Assistant (DBUA), a GUI tool that performs all necessary prerequisite checks and operations before upgrading the specified instances. The DBUA can be started directly from the Oracle Universal Installer (OUI) or separately after the software installation is complete. Alternatively you may wish to perform a manual upgrade, which involves the following steps: Analyze the existing instance using the utlu101i.sql script, explained below. Backup the database. Start the original database in the new upgrade mode (see below) and proceed with the upgrade. The majority of the upgrade work is done by running the appropriate upgrade script for the current database version. Recompile invalid objects. Troubleshoot any issues or abort the upgrade.
Selecting the instance to upgrade. Analyzing the existing database to make sure it is suitable for upgrade. Creating the SYSAUX tablespace which is required for 10g. Deciding whether to recompile all invalid objects when the upgrade is complete. Selecting a backup option for the database. Deciding how the database should be managed (OEM Console or Grid Control) and defining the appropriate authentication. Defining the flash recovery area. Performing any necessary network configuration. Performing the upgrade process. Checking the upgrade results. Listing the changes in default behaviour between the old and new versions of the database. Completing the upgrade procedure.
The DBUA can also be started in silent mode provided all the necessary parameters are provided.
STARTUP UPGRADE
This is a new startup mode associated with the upgrade procedure in Oracle 10g. SQL> STARTUP UPGRADE;
Programs
The scheduler allows you to optionally create programs which hold metadata about a task, but no schedule information. A program may relate to a PL/SQL block, a stored procedure or an OS executable file. Programs are created using the CREATE_PROGRAM procedure: -- Create the test programs. BEGIN -- PL/SQL Block. DBMS_SCHEDULER.create_program ( program_name => 'test_plsql_block_prog', program_type => 'PLSQL_BLOCK',
program_action => 'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT''); END;', enabled => TRUE, comments => 'Program to gather SCOTT''s statistics using a PL/SQL block.');
Cluster Configuration
For services to function correctly the GSD daemon must be running on each node in the cluster. The GSD daemons are started using the gsdctl utility, which is part of the Cluster Ready Services (CRS) installation, so they must be started from that environment as follows. # Set environment. export ORACLE_HOME=/u01/app/oracle/product/10.1.0/crs export PATH=$ORACLE_HOME/bin:$PATH # Start GSD daemon. gsdctl start
The user profile files, glogin.sql and login.sql are now run after each successful connection in addition to SQL*Plus startup. This is particularly useful when the login.sql file is used to set the SQLPROMPT to the current connection details. Three new predefined variables have been added to SQL*Plus: _DATE - Contains the current date or a user defined fixed string. _PRIVILEGE - Contains privilege level such as AS SYSDBA, AS SYSOPER or blank. _USER - Contains the current username (like SHOW USER).
An example of their use would be: SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER _PRIVILEGE _DATE> " The values of the variables can be viewed using the DEFINE command with no parameters.
Most of these features are beyond the scope of this article and as such will be dealt with in separate articles.
JAVA_POOL_SIZE
If these parameters are set to a non-zero value they represent the minimum size for the pool. These minimum values may be necessary if you experience application errors when certain pool sizes drop below a specific threshold. The following parameters must be set manually and take memory from the quota allocated by the SGA_TARGET parameter: DB_KEEP_CACHE_SIZE DB_RECYCLE_CACHE_SIZE DB_nK_CACHE_SIZE (non-default block size) STREAMS_POOL_SIZE LOG_BUFFER
The new views include: V$ACTIVE_SESSION_HISTORY V$SYSTEM_WAIT_HISTORY V$SESS_TIME_MODEL V$SYS_TIME_MODEL V$SYSTEM_WAIT_CLASS V$SESSION_WAIT_CLASS V$EVENT_HISTOGRAM V$FILE_HISTOGRAM V$TEMP_HISTOGRAM
The following are some examples of how these updates can be used. The V$EVENT_NAME view has had three new columns added (WAIT_CLASS_ID, WAIT_CLASS# and WAIT_CLASS) which indicate the class of the event, allowing easier aggregation of event details. The V$SESSION view has had several columns added that include blocking session and wait information. The wait information means it's no longer necessary to join to V$SESSION_WAIT to get wait information for a session:

-- Display blocked sessions and their blocking session details.
SELECT sid, serial#, blocking_session_status, blocking_session
FROM   v$session
WHERE  blocking_session IS NOT NULL;
no rows selected -- Display the resource or event the session is waiting for. SELECT sid, serial#, event, (seconds_in_wait/1000000) seconds_in_wait FROM v$session ORDER BY sid; The V$SYSTEM_WAIT_HISTORY view shows historical wait information which allows you to identify issues after the session has ended.
The statistics can be gathered then locked at a time when the table contains the appropriate data:

BEGIN
  DBMS_STATS.gather_table_stats('MY_SCHEMA','LOAD_TABLE');
  DBMS_STATS.lock_table_stats('MY_SCHEMA','LOAD_TABLE');
END;
/

System statistics and statistics for fixed objects, such as dynamic performance tables, are not gathered automatically.
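These can be gathered manually using DBMS_STATS; for example:

BEGIN
  -- Gather CPU and I/O system statistics.
  DBMS_STATS.gather_system_stats;

  -- Gather statistics for fixed objects (X$ tables underlying the V$ views).
  DBMS_STATS.gather_fixed_objects_stats;
END;
/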
Dynamic Sampling
Dynamic sampling enables the server to improve performance by:
Estimating single-table predicate selectivities where available statistics are missing or may lead to bad estimations. Estimating statistics for tables and indexes with missing statistics. Estimating statistics for tables and indexes with out of date statistics.
Dynamic sampling is controlled by the OPTIMIZER_DYNAMIC_SAMPLING parameter, which accepts values from "0" (off) to "10" (aggressive sampling) with a default value of "2". At compile time Oracle determines if dynamic sampling would improve query performance. If so it issues recursive statements to estimate the necessary statistics. Dynamic sampling can be beneficial when: The sample time is small compared to the overall query execution time. Dynamic sampling results in a better performing query. The query may be executed multiple times.
In addition to the OPTIMIZER_DYNAMIC_SAMPLING system parameter, the dynamic sampling level can be set using the DYNAMIC_SAMPLING optimizer hint for specific queries like: SELECT /*+ dynamic_sampling(emp 10) */ empno, ename, job, sal FROM emp WHERE deptno = 30; The results of dynamic sampling are repeatable provided no rows are inserted, updated or deleted from the sampled table. The OPTIMIZER_FEATURES_ENABLE parameter turns off dynamic sampling if it is set to a version earlier than 9.2.0.
CPU Costing
By default the cost model for the optimizer is now CPU+I/O, with the cost unit as time.
Optimizer Hints
New hints: SPREAD_MIN_ANALYSIS - Specifies analysis options for spreadsheets. USE_NL_WITH_INDEX - Specifies a nested loops join. QB_NAME - Specifies a name for a query block. NO_QUERY_TRANSFORMATION - Prevents the optimizer performing query transformations. NO_USE_NL, NO_USE_MERGE, NO_USE_HASH, NO_INDEX_FFS, NO_INDEX_SS and NO_STAR_TRANSFORMATION - Excludes specific operations from the query plan. INDEX_SS, INDEX_SS_ASC, INDEX_SS_DESC - Excludes range scans from the query plan.
Updated hints: Hints that specify table names have been expanded to accept Global Table Hints. This allows a base table within a view to be specified using the "view-name.table-name" syntax.
Hints that specify index names have been expanded to accept Complex Index Hints. This allows an index to be specified using the "(table-name.column-name)" syntax instead of the index name. Some hints can now optionally accept a query block parameter.
Renamed hints: NO_PARALLEL - Formerly NOPARALLEL. NO_PARALLEL_INDEX - Formerly NOPARALLEL_INDEX. NO_REWRITE - Formerly NOREWRITE.
Deprecated hints: AND_EQUAL HASH_AJ MERGE_AJ NL_AJ HASH_SJ NL_SJ EXPAND_GSET_TO_UNION ORDERED_PREDICATES ROWID STAR
Tracing Enhancements
The Oracle Trace functionality has been removed from Oracle 10g. Instead the SQL Trace and TKPROF functionality should be used. In multi-tier environments where statements are passed to different sessions by the application server it can become difficult to trace an individual process from start to finish. To solve this problem Oracle have introduced End to End Application Tracing which allows a client process to be identified via the client identifier rather than the typical session id. Each piece of trace information is linked to the following information: Client Identifier - Specifies the "real" end user. Set using the DBMS_SESSION.SET_IDENTIFIER procedure.
Service - Specifies a group of related applications. Created using the DBMS_SERVICE.CREATE_SERVICE procedure. Module - Specifies a functional area or feature of an application. Set using the DBMS_APPLICATION_INFO.SET_MODULE procedure. Action - Specifies the current action (INSERT, UPDATE, DELETE etc.) within the current module. Set using the DBMS_APPLICATION_INFO.SET_ACTION procedure.
End to end tracing can be managed via Enterprise Manager or a set of APIs and views. Here are some examples of how to enable and disable the various types of tracing: BEGIN -- Enable/Disable Client Identifier Trace. Once the trace files are produced the trcsess command line utility can be used to filter out the relevant data from multiple files. The utility accepts the following parameters: OUTPUT - Specifies the name of the consolidated trace file. SESSION - Consolidates the file based on the specified session id (SID.SERIAL# columns from V$SESSION). CLIENT_ID - Consolidates the file based on the specified client identifier (CLIENT_IDENTIFIER column from V$SESSION). SERVICE - Consolidates the file based on the specified service (SERVICE_NAME column from V$SESSION). MODULE - Consolidates the file based on the specified module (MODULE column from V$SESSION). ACTION - Consolidates the file based on the specified action (ACTION column from V$SESSION). TRACE_FILES - A space separated list of trace files to be searched. If omitted all files in the local directory are searched.
At least one of the search criteria must be specified. If more than one is specified only trace data that matches all the criteria is consolidated. Examples of trcsess usage are: # Search all files for this session. trcsess output=session.trc session=144.2274 # Search the specified files for this client identifier. trcsess output=client.trc client_id=my_id db10g_ora_198.trc db10g_ora_206.trc # Search the specified files for this service, module and action combination. trcsess output=client.trc service=my_service module=my_module action=INSERT db10g_ora_198.trc db10g_ora_206.trc Once the consolidated trace file is produced it can be processed by the TKPROF utility like any other SQL Trace file. By default statistics are gathered at the session level. The DBMS_MONITOR package allows this to be altered to follow the client identifier, service or combinations of the service, module and action:
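As a sketch of the DBMS_MONITOR API, statistics gathering and tracing can be switched on for a client identifier or a service/module/action combination; the names below are illustrative:

BEGIN
  -- Gather statistics and trace for a specific client identifier.
  DBMS_MONITOR.client_id_stat_enable(client_id => 'my_id');
  DBMS_MONITOR.client_id_trace_enable(client_id => 'my_id', waits => TRUE, binds => FALSE);

  -- Gather statistics for a service/module/action combination.
  DBMS_MONITOR.serv_mod_act_stat_enable(service_name => 'my_service',
                                        module_name  => 'my_module',
                                        action_name  => 'INSERT');
END;
/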
DBA_ENABLED_AGGREGATIONS - Accumulated global statistics. V$CLIENT_STATS - Accumulated statistics for the specified client identifier. V$SERVICE_STATS - Accumulated statistics for the specified service. V$SERV_MOD_ACT_STATS - Accumulated statistics for the specified service, module and action combination. V$SVCMETRIC - Accumulated statistics for elapsed time of database calls and CPU usage.
The constants for NaN and infinity are also available in SQL.
CREATE OR REPLACE PACKAGE BODY numeric_overload_test AS
  PROCEDURE go (p_number NUMBER) AS
  BEGIN
    DBMS_OUTPUT.put_line('Using NUMBER');
  END;

  PROCEDURE go (p_number BINARY_FLOAT) AS
  BEGIN
    DBMS_OUTPUT.put_line('Using BINARY_FLOAT');
  END;

  PROCEDURE go (p_number BINARY_DOUBLE) AS
  BEGIN
    DBMS_OUTPUT.put_line('Using BINARY_DOUBLE');
  END;
END;
/

-- Test it.
SET SERVEROUTPUT ON
BEGIN
  numeric_overload_test.go(10);
  numeric_overload_test.go(10.1f);
  numeric_overload_test.go(10.1d);
END;
/

It is important to check that the correct overload is being used at all times. Using the appropriate suffix or conversion function ensures the engine picks the correct overload.
display('MULTISET UNION:', l_col_3); l_col_3 := l_col_1 MULTISET UNION DISTINCT l_col_2; display('MULTISET UNION DISTINCT:', l_col_3); l_col_3 := l_col_1 MULTISET INTERSECT l_col_2; display('MULTISET INTERSECT:', l_col_3); l_col_3 := l_col_1 MULTISET INTERSECT DISTINCT l_col_2; display('MULTISET INTERSECT DISTINCT:', l_col_3); l_col_3 := l_col_1 MULTISET EXCEPT l_col_2; display('MULTISET EXCEPT:', l_col_3); l_col_3 := l_col_1 MULTISET EXCEPT DISTINCT l_col_2; display('MULTISET EXCEPT DISTINCT:', l_col_3); END; /
Compile-Time Warnings
Oracle can now produce compile-time warnings when code is ambiguous or inefficient by setting the PLSQL_WARNINGS parameter at either instance or session level. The categories ALL, SEVERE, INFORMATIONAL and PERFORMANCE can be used to alter the type of warnings that are produced. Examples of their usage include: -- Instance and session level. ALTER SYSTEM SET PLSQL_WARNINGS='ENABLE:ALL'; ALTER SESSION SET PLSQL_WARNINGS='DISABLE:PERFORMANCE';
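For instance, once warnings are enabled, compiling a procedure containing unreachable code raises a warning; the procedure below is purely illustrative:

ALTER SESSION SET PLSQL_WARNINGS='ENABLE:ALL';

CREATE OR REPLACE PROCEDURE test_warnings AS
BEGIN
  RETURN;
  DBMS_OUTPUT.put_line('Never executed.');  -- Unreachable code.
END;
/

SHOW ERRORS

-- Typically reports something like: PLW-06002: Unreachable code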
Oracle 10g now supports implicit conversions between CLOBs and NCLOBs and vice-versa. As with all type conversions it is still better to be explicit and use the conversion functions TO_CLOB and TO_NCLOB for clarity.
Regular Expressions
Oracle 10g supports regular expressions in SQL and PL/SQL with the following functions: REGEXP_INSTR - Similar to INSTR except it uses a regular expression rather than a literal as the search string. REGEXP_LIKE - Similar to LIKE except it uses a regular expression as the search string. REGEXP_REPLACE - Similar to REPLACE except it uses a regular expression as the search string. REGEXP_SUBSTR - Returns the string matching the regular expression. Not really similar to SUBSTR.
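A few brief sketches of their use, assuming the standard EMP table:

-- Rows where ENAME contains only letters.
SELECT ename FROM emp WHERE REGEXP_LIKE(ename, '^[[:alpha:]]+$');

-- Position of the first number in a string. Returns 10.
SELECT REGEXP_INSTR('The year 2004', '[0-9]+') FROM dual;

-- Collapse runs of spaces into a single space.
SELECT REGEXP_REPLACE('one    space', ' {2,}', ' ') FROM dual;

-- Extract the first number from a string. Returns 2004.
SELECT REGEXP_SUBSTR('The year 2004', '[0-9]+') FROM dual;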
Building regular expressions to match your requirements can get a little confusing, and a full treatment is beyond the scope of this article.
UTL_COMPRESS
The UTL_COMPRESS package provides an API to allow compression and decompression of binary data (RAW, BLOB and BFILE). It uses the Lempel-Ziv compression algorithm, which is equivalent to the functionality of the gzip utility.
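A minimal sketch of its usage, compressing and uncompressing a highly compressible RAW value:

SET SERVEROUTPUT ON
DECLARE
  l_original     RAW(1000) := UTL_RAW.cast_to_raw(RPAD('A', 1000, 'A'));
  l_compressed   RAW(1000);
  l_uncompressed RAW(1000);
BEGIN
  -- Compress, then restore the original data.
  l_compressed   := UTL_COMPRESS.lz_compress(src => l_original);
  l_uncompressed := UTL_COMPRESS.lz_uncompress(src => l_compressed);

  DBMS_OUTPUT.put_line('Original    : ' || UTL_RAW.length(l_original) || ' bytes');
  DBMS_OUTPUT.put_line('Compressed  : ' || UTL_RAW.length(l_compressed) || ' bytes');
  DBMS_OUTPUT.put_line('Uncompressed: ' || UTL_RAW.length(l_uncompressed) || ' bytes');
END;
/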
UTL_MAIL
The UTL_MAIL package provides a simple API to allow email to be sent from PL/SQL. In prior versions this was possible using the UTL_SMTP package, but this required knowledge of the
SMTP protocol. The package is loaded by running the following scripts: CONN sys/password AS SYSDBA @$ORACLE_HOME/rdbms/admin/utlmail.sql @$ORACLE_HOME/rdbms/admin/prvtmail.plb In addition the SMTP_OUT_SERVER parameter must be set to identify the SMTP server: CONN sys/password AS SYSDBA ALTER SYSTEM SET smtp_out_server='smtp.domain.com' SCOPE=SPFILE; SHUTDOWN IMMEDIATE STARTUP With the configuration complete we can now send a mail using: BEGIN UTL_MAIL.send(sender => '[email protected]', recipients => '[email protected],[email protected]', cc => '[email protected]', bcc => '[email protected]', subject => 'UTL_MAIL Test', message => 'If you get this message it worked!'); END; / The package also supports sending mails with RAW and VARCHAR2 attachments.