Oracle SQL Tuning PDF
Christopher R. Spence
www.vampired.net
Version information
1.0 First version
Copyright Notice
Copyright © 2001 Christopher R. Spence, All Rights Reserved.
This document is free; you can redistribute it and/or modify it under the terms of the GNU General
Public License as published by the Free Software Foundation; either version 2 of the License, or (at
your option) any later version.
This document is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY;
without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE. See the GNU General Public License for more details.
For a copy of the GNU General Public License write to the Free Software
Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
Contents

COPYRIGHT NOTICE
CURSORS
SQL PROCESSING PHASES
THE PARSE PHASE
THE BIND PHASE
THE EXECUTE PHASE
THE FETCH PHASE
SQL STANDARDS
SHARED CURSORS
INDEXES
JOINS
Cursors
Every statement Oracle executes is handled through a cursor, or context area. If multiple users execute the same statement, they will share the same cursor as long as the statement in the shared pool has not been invalidated. If the statement is invalid, it must go through the parse phase again. A cursor contains the parsed statement, the list of objects referenced in the statement, and an execution plan.
Cursors are stored in the library cache area of the shared pool, more specifically the shared SQL area. The shared pool ages out statements to make room for new ones using the LRU (Least Recently Used) algorithm. The size of the pool is configured by the DBA with the SHARED_POOL_SIZE initialization parameter.
Note: Oracle does not create an execution plan with knowledge of the values inside bind variables; this prevents Oracle from using index selectivity when building an execution plan during the parse phase.
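As a minimal SQL*Plus sketch of the difference (using the standard SCOTT.EMP demo table), compare a statement with a literal, whose value the optimizer can see at parse time, with the same statement using a bind variable:

-- Literal value: visible to the optimizer during the parse phase
SELECT ename, sal FROM emp WHERE deptno = 10;

-- Bind variable: the value is unknown when the execution plan is built
VARIABLE dept NUMBER
EXECUTE :dept := 10
SELECT ename, sal FROM emp WHERE deptno = :dept;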
SQL Standards
One of the most frustrating things as a DBA is reading eight million different styles of SQL. I have a style I like to use, and I find statements much easier to read when they are formatted that way. Take a query written any which way, such as the following:
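select a.table_name, a.next_extent, a.tablespace_name from all_tables a,
(select tablespace_name, max(bytes) as big_chunk from dba_free_space
group by tablespace_name) f where f.tablespace_name = a.tablespace_name
and a.next_extent > f.big_chunk
/
Now compare it with the same statement written in a consistent style: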
SELECT a.table_name
, a.next_extent
, a.tablespace_name
FROM all_tables a
, (SELECT tablespace_name
, max(bytes) as big_chunk
FROM dba_free_space
GROUP BY tablespace_name) f
WHERE f.tablespace_name = a.tablespace_name
AND a.next_extent > f.big_chunk
/
Notice how much easier it is to read the second query? Adopt a style and stick to it; a single consistent style will save you time when reading statements. In a large company, the style adopted may not be your style. Instead of being a rebel and continuing to use your own, conform. It will be a lot less painful for everyone, and the extra effort will be appreciated by your teammates.
There is also a technical reason to use a set style throughout the organization. Remember the parse phase? If the statement being parsed does not match a statement in the shared pool precisely, character by character and case for case, Oracle will not re-use the parsed statement and will instead create a new cursor. This means your statement has to undergo the parse, bind, execute, and fetch phases again. Extra resources are also consumed in the library cache to maintain this copy of the statement, and a frequently used statement may even be aged out if the shared pool is too small. Now I want to explain what is commonly called "sharing cursors."
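For example (using the SCOTT.EMP demo table), the two statements below are logically identical, but because their text differs in case they will not match character for character, and each will get its own cursor in the shared pool:

SELECT ename FROM emp WHERE empno = 7369;
select ename from emp where empno = 7369;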
Shared Cursors
Using shared cursors will allow you to conserve memory while having optimal response time on
statements that have already been parsed. A cursor will only be shared if the following criteria are
met:
(Note: Naming convention of bind variables is not important as they will be renamed internally by
Oracle)
Oracle SQL Optimizer
Oracle uses the optimizer to determine the best path of execution given the information known to it. What does the optimizer take into account?
o Oracle version
o INIT.ORA parameters
o Oracle Optimizer mode (Rule or Cost based)
o Any SQL hints
o All available indexes
o Tables being referenced
o All conditions supplied in the where clause(s)
o Any object statistics (objects that have been Analyzed)
o Physical table location
Optimizer Modes
Oracle works in two distinctly different modes. You may have noticed Rule based (RBO) and
Cost based (CBO) mentioned above. These are the two modes Oracle optimizer can work under. The
distinct difference between the two is how Oracle determines the execution plan.
Under the RBO mode Oracle uses a set of 15 rules, the first being access by ROWID and the last being a full table scan. Under the CBO mode, Oracle uses the many different statistics available to it to choose the best possible execution plan. CBO mode requires objects to be analyzed frequently to provide accurate execution plans. Oracle mentions that an average database can analyze 3Gb of tables and indexes within an hour. Oracle does take locks on objects while you analyze them, so I would not recommend analyzing a large object during peak time.
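A sketch of gathering statistics on the demo EMP table and its primary key index (the appropriate sample size depends on your data volumes):

-- Compute exact statistics (can be expensive on large objects)
ANALYZE TABLE emp COMPUTE STATISTICS;

-- Or estimate from a sample to reduce the load
ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 10 PERCENT;
ANALYZE INDEX pk_emp COMPUTE STATISTICS;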
Rule based optimizer
The Rule based optimizer ranks access paths using the following list, the first entry being the most preferred:
1. Row ID
2. Single row by cluster join
3. Single row by hash cluster key with unique or primary key
4. Single row by unique or primary key
5. Clustered join
6. Hash cluster key
7. Indexed cluster key
8. Composite index
9. Single-column index
10. Bounded range search on index column
11. Unbounded range search on indexed column
12. Sort-Merge join
13. MAX or MIN of indexed column
14. Order by on index column
15. Full table scan
Note: Many of the newer advanced features of Oracle are not supported under the Rule based optimizer. Oracle has also made it apparent that future support of Rule based mode is doubtful.
There are a few things you can do to influence how the RBO interprets a statement. The order of the tables in the from clause can have a great effect on the execution path chosen. Altering the availability of an index (create/drop) will have a significant effect on the path chosen by the Rule based optimizer. Changing the order of the where clause(s) can also have a very significant effect on the execution path chosen and the amount of data processed.
Another notable feature of the Rule based optimizer is that it will always use an index if one is available. It does not weigh the number of rows being returned to determine whether using the index is most efficient. In some cases you may want to suppress an index and choose a more optimal full table scan rather than numerous iterations through the index tree. Oracle highly recommends that everyone move to the Cost based optimizer, although in some situations the Rule based optimizer may still provide a better path of execution.
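A common way to suppress an index under the Rule based optimizer is to wrap the indexed column in a harmless expression. A sketch, assuming an index exists on EMP(JOB):

-- The concatenation changes the expression, so the index on JOB is not used
-- and the optimizer falls back to a full table scan
SELECT ename, job
FROM emp
WHERE job || '' = 'CLERK';

(For a numeric column, adding 0 has the same effect.)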
Cost based optimizer
The Cost based optimizer will be used when any of the following is true:
o OPTIMIZER_MODE = CHOOSE and statistics are available on at least one object referenced
To use the Cost based optimizer efficiently, you must have statistics on all objects referenced in the statement. To choose the most efficient execution plan, CBO performs a cost calculation: Oracle uses the number of logical reads, the CPU time used, and the network transmission required to determine the cost value assigned to an execution plan.
The Cost based optimizer excels when used against untuned SQL statements; you may sometimes see great performance gains simply by using CBO on existing, poorly tuned statements. The Cost based optimizer is also constantly being revisited by Oracle, with new features and functionality added to it.
The Cost based optimizer has the ability to use index selectivity to determine the best path of execution. Combined with histograms, this can prove very effective in providing an optimal path. Some statements (identical except for a literal value) may perform well because there are only a few occurrences of that value in the index (high selectivity); but when a non-unique index has many occurrences of the same value, the optimizer will assign a low selectivity to the index and a higher cost. The path chosen by the Cost based optimizer is only as accurate as your latest set of statistics. Failure to update statistics can cause the optimizer to make very poor choices given the changed state of the referenced objects.
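Histograms are collected as part of statistics gathering. A sketch, assuming a skewed SAL column on the demo EMP table:

-- Build a histogram with up to 75 buckets on the SAL column
ANALYZE TABLE emp COMPUTE STATISTICS FOR COLUMNS sal SIZE 75;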
SQL Hints
Hints are used to influence the path chosen by the optimizer. Hints on insert statements are ignored except within sub-queries. Hints are not the best way to tune, but they may provide a very quick fix for that one offending statement. If a hint is specified incorrectly, Oracle will silently ignore it. Hints are specified as a special type of comment, and a SQL statement may have multiple hints supplied in a single hint comment. Any alias used in the statement must also be used in the hint. A simple example of a hint follows.
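For illustration, a sketch using the demo EMP table; the FULL hint (note the use of the table alias in the hint) tells the optimizer to scan the whole table even though EMPNO is the primary key:

SELECT /*+ FULL(e) */ ename, sal
FROM emp e
WHERE empno = 7369;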
Although it may be obvious, this retrieval that could use a unique index resorts to a painful full table scan. This may or may not be the best solution in all cases. Another thing to keep in mind is that a hint forcing a specific path may no longer be optimal once the table holds thousands more records than it did when the hint was written. This is what makes the Cost based optimizer so flexible: it has the ability to assess the current environment and make an optimal choice given the available information. However, if you are using old statistics, this may again be the wrong path.
Available SQL hints
o ALL_ROWS
This tells the optimizer to aim for the best possible throughput, completing the entire task in as short a time as possible. If statistics are not available, Oracle will estimate them.
o FIRST_ROWS
Oracle will aim to return the first rows of the result set as fast as possible at the expense of throughput. This is a common setting for OLTP applications. Oracle will also estimate statistics if none are available.
o CHOOSE
If statistics are available, Oracle will use the Cost based optimizer; if they are not, Oracle will use the Rule based optimizer.
o RULE
Oracle will blindly use the Rule based optimizer regardless of statistics. Oracle will disable
certain advanced features such as bitmap indexes and hash joins.
o USE_MERGE(table)
Oracle will attempt to use a sort-merge join for the specified table.
o USE_NL(table)
Oracle will attempt to use a nested loop for the join predicates.
o USE_CONCAT
Oracle will rewrite OR conditions in the where clause as a concatenation (UNION ALL) of separate query blocks.
o FULL(table)
Oracle will perform a full table scan of the specified table, even if an index is available.
o INDEX_DESC(table | index)
Oracle will attempt to scan an available index in descending order. You can specify an index, a table, or nothing and have the optimizer calculate costs.
o INDEX(table | index)
Oracle will attempt to use an available index if possible. You can specify an index, table, or
nothing and have the optimizer calculate costs.
o CACHE(table)
Oracle will place blocks visited from full table scan on the most recently used side of the LRU
list.
o NOCACHE(table)
Oracle will proceed as normal and place blocks visited from a full table scan on the least
recently used side of the LRU list.
o AND_EQUAL(table | index)
Oracle will attempt an execution plan that merges the scans of several single-column indexes.
o HASH(table)
Oracle will access the specified table with a hash scan (the table must be stored in a hash cluster).
o CLUSTER(table)
Oracle will access the specified table with a cluster scan (the table must be stored in an indexed cluster).
o NOPARALLEL(table)
Oracle will not use parallel query when scanning the specified table.
o PUSH_SUBQ
Oracle will evaluate non-merged sub queries at the beginning of the execution plan rather than at the end.
o ORDERED
Oracle will attempt to use the order of the from clause to determine the driving table of a join.
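For example, a sketch using the demo EMP and DEPT tables; with ORDERED, DEPT becomes the driving table simply because it is listed first in the from clause:

SELECT /*+ ORDERED */ e.ename, d.dname
FROM dept d, emp e
WHERE e.deptno = d.deptno;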
Oracle supplied tuning tools
Explain Plan
One of Oracle's best features is the ability to query the optimizer and find out how Oracle will execute a given statement based on all known information. This alone will give you enough information to effectively tune SQL statements. The only requirement for using Oracle's explain plan is to run utlxplan.sql to create the explain plan table (PLAN_TABLE), which is then used to store the information that makes up Oracle's explain plan output.
You can also create the plan table yourself by copying the CREATE TABLE statement out of utlxplan.sql rather than running the script. Explaining a statement looks like this:
EXPLAIN PLAN
SET statement_id = 'Chris Statement'
FOR
SELECT *
FROM emp
WHERE empno = 15;
Explained.
If you noticed, the explain plan statement doesn't show the results of the query; in fact, the query was never run! The EXPLAIN PLAN statement only creates an execution plan for the statement and stores it in the explain plan table (here the default PLAN_TABLE). To retrieve the information on this query you need to run the following query.
select id
, lpad(' ', 2*level) || operation
|| decode(id, 0, ' Cost = ' || position)
|| ' ' || options
|| ' ' || object_name as "Query Plan"
from plan_table
where statement_id = 'Chris Statement'
connect by prior id = parent_id
start with id = 0;
ID Query Plan
------- --------------------------------
0 SELECT STATEMENT Cost =
1 TABLE ACCESS BY INDEX ROWID EMP
2 INDEX UNIQUE SCAN PK_EMP
Given this information, you can tell that Oracle chose to do a unique scan on index PK_EMP, which happens to be the primary key of EMP, and then retrieved the row by ROWID from the EMP table. This is a very efficient query, and explain plan confirms that. There are some other pieces of information stored in the PLAN_TABLE that may be of interest to you.
COST The Cost based optimizer's estimate of the amount of work required to execute the statement. This value is a weighted number; there are no units assigned to it.
Considering that was an easy explain plan, there is not much to learn about how to read it; it is fairly easy to see what Oracle is doing. One thing to understand is that every row in the explain plan either retrieves rows from the database or takes its input from another step. Each step that retrieves rows from the database feeds those rows to another step as a "row source". In our previous example, step 2 creates a row source from the unique index scan of PK_EMP. This row source is used by step 1 to access the table via the ROWID in the row source. In this example the row source is only one record; in complex queries it could be hundreds, thousands, or even millions of records. You want to keep these row sources as small as possible, which means you want to use where clauses that cut down the number of rows passed to other steps as early as possible. If you have not noticed, you have been reading this explain plan from the bottom to the top, in reverse order of the ID. This is how Oracle executes statements, and this is how you have to read the explain plan to make sense of it.
Auto Trace
Auto trace is a very good feature of SQL*Plus that allows you to bypass all the work involved in creating explain plans and querying the plan table. It also adds some statistics to the picture. This is a very common way to use Oracle's explain plan features.
Turning on Auto Trace
To use auto trace you need to create a plan table, just as when using explain plan directly. For the statistics, you need to have access to the statistics tables or have the PLUSTRACE role. This role is created with the plustrce.sql script and is not created automatically.
Once you turn on autotrace, you can just run your query as you normally would and you will get the results, explain plan, and statistics of the query. What could be better?
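For example, assuming the plan table and the PLUSTRACE role are in place and using the demo EMP table, the plan and statistics portion of the output looks like the listing below:

SET AUTOTRACE ON
SELECT * FROM emp;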
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP'
Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
2 consistent gets
0 physical reads
0 redo size
1243 bytes sent via SQL*Net to client
430 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed
SQL Trace and TKPROF
SQL trace is a very useful Oracle feature that allows one to log a session in its entirety. This produces debugging, performance, and tuning information. Be very careful when using SQL trace, as trace files contain a lot of information and take up space quite rapidly. Make sure you check your dump destinations for old trace files and monitor space before working with traces. Although an individual trace file is not large by today's standards (megabytes), files generated in rapid succession can pile up and eventually fill an entire mount point. This is especially so when tracing is turned on at the instance level. Tracing at the instance level also incurs a large performance hit; only use it when you absolutely need to do debugging and tuning.
Oracle's SQL trace facility can be turned on at the instance or session level. TKPROF is generally used to format the raw trace file to make it more readable for the DBA. I would highly recommend setting the TIMED_STATISTICS parameter to increase the usefulness of trace files; timed statistics generate a lot of wait statistic information that is essential to tuning efforts. The USER_DUMP_DEST parameter determines the location of user generated trace files. As a DBA you can control trace file sizes using the MAX_DUMP_FILE_SIZE parameter. Keep in mind this is set in operating system blocks.
Note: Timed Statistics can be turned on in the parameter file, dynamically via alter system statement,
and at the session level with an alter session statement.
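The usual ways to turn SQL trace on are sketched below (the SID and SERIAL# values are illustrative, and the DBMS_SYSTEM call, which traces another user's session, requires privileges normally granted only to DBAs):

-- Instance wide, in the parameter file
SQL_TRACE = TRUE

-- For your own session
ALTER SESSION SET SQL_TRACE = TRUE;

-- For another session, identified by SID and SERIAL# from V$SESSION
EXECUTE dbms_system.set_sql_trace_in_session(12, 345, TRUE);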
Note: To turn off sql trace you just use FALSE for any of these methods.
The first thing you need to do is find your trace file. It will be located under the USER_DUMP_DEST directory specified in your instance parameter file. Your trace file will generally be named ora_<PROCESS_ID>.trc (ora<process_id>.trc on Windows). The raw trace file will provide you with instance information as well as information for each cursor executed. Unless you are doing in-depth debugging, you should use TKPROF to format your trace file in a more readable manner.
tkprof ora_1253.trc new_trace.txt explain=<user/password@database>
This will generate a version of the trace file with the same information in a more readable
format with explain plans for each cursor. This is very useful when doing SQL tuning or debugging.
Trace files will include all recursive SQL as well. These can be excluded with TKPROF by using the
SYS=NO option.
Note: The explain plan is generated at the time TKPROF is run, not at the time of execution. Any change in statistics, tables, access paths, or indexes will make this explain plan differ from the true execution path taken at the time of execution.
There are a few situations which Oracle describes as traps that may make reading a TKPROF formatted trace file confusing. Generally these issues arise because the database is being used concurrently by other users and transactions; sometimes another user may interfere with the statistics because of locks and other factors. Below you will find four traps that commonly confuse DBAs.
Read Consistency Trap
Other transactions can hold uncommitted changes against an object being referenced in a statement. This can increase the number of blocks read, as additional blocks have to be read for read consistency.
Time Trap
If elapsed time shows a very high value there may be interference from shared locks held by other users. Generally CPU time is more accurate for timing particular statements.
Schema Trap
If the statistics show a high number of blocks visited yet the explain plan shows index access, most likely the index was created after the statement was executed but before the explain plan was generated.
Trigger Trap
Trigger resources and recursive SQL are included with the statistics reported during a
statement. TKPROF will in fact report these twice. A good rule of thumb is to avoid tuning
SQL when resources are being exhausted at a lower level of recursion.
Indexes
Proper use of an index can dramatically improve query performance by lowering the number of logical/physical blocks retrieved from disk. Indexes provide table-of-contents-like access to tables. Proper use of indexes can be very beneficial to OLTP, DSS, and DW type systems. Improper use of indexes will just waste space and slow down inserts, updates, and deletes. The exception is on DW systems where loads are done at scheduled times with indexes disabled; there the performance disadvantages of indexes are null and void due to the lack of any inserts, updates, and deletes, although in these environments full table scans usually perform better than index access anyway.
You just said full table scan can be faster than an index, no way!
There are a few cases where a full table scan may perform better than index access. A simple example would be a query that returns a large number of rows relative to the total rows in the table. A full table scan simply retrieves all rows and then sorts out the rows needed. A full table scan also takes advantage of multiblock I/O, which can prove very fast compared to normal I/O. Under the index access approach the tree has to be traversed many times, resulting in high logical block retrieval compared to a simple full table scan; an average index takes 2-3 logical reads to fully traverse to the ROWID of a result row.
Indexes can perform badly when more than 20% of the rows have been deleted or when the index has become badly disorganized. Also, if your binary height is greater than 3 or 4 you should consider rebuilding your indexes. The binary height of an index refers to how many logical reads are needed to get to the data. Say an index has 3 levels: the binary height would be 4, since it takes 3 block reads to get to the ROWID and one more to retrieve the row at that ROWID.
Ok, why won’t Oracle use this concatenated index I just created?
To use a concatenated index, you must query the table using the leading columns of the index. If you have an index on (column1, column2, column3, column4) then you must use column1 in the where clause to use the index, and you must also use columns 1 and 2 if you want an index retrieval based on column3. In other words, to use a column in a concatenated index, you must also use all columns prior to it in the where clause. So a good rule of thumb is to put the most frequently queried column first in concatenated indexes; think about what information will be available when you query this table. Another thing to remember when using concatenated indexes is to put the most restrictive column first if you are going to query by all columns in the index. This is especially true with composite primary keys.
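A sketch, assuming a hypothetical concatenated index on EMP(deptno, job, mgr):

-- Can use the index: the leading column (DEPTNO) appears in the where clause
SELECT ename FROM emp WHERE deptno = 20 AND job = 'CLERK';

-- Cannot use the index in this way: the leading column is missing
SELECT ename FROM emp WHERE job = 'CLERK' AND mgr = 7902;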
Common traps that invalidate index use
Function trap
Use of a function on the indexed column on the left side of the where predicate will disable use of the index: Oracle has to do a full table scan, execute the function on the column, then do the comparison.
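A sketch, assuming an index on EMP(ename) and that the data is stored in upper case:

-- The function wrapped around the indexed column suppresses the index
SELECT * FROM emp WHERE UPPER(ename) = 'SMITH';

-- Leaving the indexed column bare keeps the index usable
SELECT * FROM emp WHERE ename = 'SMITH';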
Arithmetic trap
Use of arithmetic operators on the indexed column in the where clause will also disable the index; avoid them on the left side of the where predicate.
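A sketch, assuming an index on EMP(sal):

-- Arithmetic on the indexed column suppresses the index
SELECT ename FROM emp WHERE sal * 12 > 36000;

-- Moving the arithmetic to the other side keeps the index usable
SELECT ename FROM emp WHERE sal > 36000 / 12;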
To be or not to be trap
An index will be disabled if the column being referenced appears on both sides of the predicate,
even when used inside a function.
Negativity trap
Use of !=, <>, and NOT will disable an index. The obvious reason is that an index can tell you what is in the index, but it cannot tell you what is not. Try to rewrite the query using EXISTS.
Bitmap index trap
When you create a bitmap index on a column, Oracle will only use this index under the Cost based optimizer. Bitmap indexes are most efficient when used under a combination of AND/OR predicates or with an IN (value list) clause. Bitmap indexes cannot be declared as unique indexes. Bitmap indexes prove most effective when the number of distinct values is around 1% or less of the total number of rows. They are also very efficient for queries that use complex predicates on columns with low cardinality, and aggregate functions can benefit greatly from them. Bitmap indexes do slow down inserts and updates compared to conventional b-tree indexes, but they are very space efficient as they are compressed internally. Using fixed length data types and declaring columns NOT NULL can further reduce the storage used by bitmap indexes.
Under OLTP, bitmap indexes may prove disastrous to performance due to the way they lock bitmap segments. A bitmap segment is the smallest piece of a bitmap that may be locked, and locking one may lock multiple rows. With a high level of updates, deletes, and inserts this can prove devastating. The B_TREE_BITMAP_PLANS parameter will give you some of the benefit of bitmap access plans on tables that have only b-tree indexes.
Casting trap
Querying an indexed character column that holds numeric data will cause the index to be suppressed if quotes are not used. This is because Oracle will implicitly convert the column to a number, which forces a full table scan.
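A sketch using a hypothetical ACCOUNTS table whose ACCOUNT_NO column is VARCHAR2 and indexed:

-- Implicit TO_NUMBER(account_no) = 12345 suppresses the index
SELECT * FROM accounts WHERE account_no = 12345;

-- Quoting the literal keeps the comparison on the character column
SELECT * FROM accounts WHERE account_no = '12345';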
Null trap
Using IS NULL and IS NOT NULL will disable index usage; an index does not store null
values.
Distinct trap
Use of DISTINCT in the select statement will disable any available index and opt for a full table scan. If you must remove duplicates, try rewriting the query using EXISTS.
Or trap
Use of OR on an indexed column will disable the index as well. Try replacing the predicate with IN or with a UNION, as OR on an indexed column can force a full table scan.
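A sketch, assuming an index on EMP(deptno):

-- OR on the indexed column may force a full table scan
SELECT ename FROM emp WHERE deptno = 10 OR deptno = 20;

-- Rewritten with IN (or as a UNION of two indexed queries)
SELECT ename FROM emp WHERE deptno IN (10, 20);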
Concatenated columns trap
Concatenating columns on the left side of the where clause will also disable index use. Try rewriting the predicate with AND, comparing each column with its own equality predicate.
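A sketch, assuming indexes on EMP(ename) and EMP(job):

-- Concatenation on the left side disables both indexes
SELECT * FROM emp WHERE ename || job = 'SMITHCLERK';

-- Comparing each column separately lets the indexes be considered
SELECT * FROM emp WHERE ename = 'SMITH' AND job = 'CLERK';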
Index selectivity
Under the Cost based optimizer, Oracle evaluates the usefulness of an index using selectivity. Selectivity is basically the precision of an index. For example, a unique index is always seen as 100% selective, meaning there is no more precise access path. Oracle uses selectivity to determine the cost of using an index. For any other type of index, selectivity is calculated with the following formulas.
Unique index
Selectivity = 100%
Range scans

               Upper value - Lower value (in predicate)
Selectivity = --------------------------------------------
              Upper value - Lower value + 1 (in column)
Non-unique index

               Distinct values
Selectivity = -----------------
               Number of rows
For example, a non-unique index with 2,500 distinct values over 10,000 rows has a selectivity of 25%, while 5,000 distinct values over the same 10,000 rows gives a selectivity of 50%.
Multiple equality indexes in query
When Oracle sees equality predicates on several indexed columns of the same table, it can merge the index scans and then fetch the rows that are common to all of them. If the indexes differ in rank, Oracle will instead choose the one with the lower (better) rank.
Joins
The optimizer will determine which join method to use (if no hint is specified), the access path for the row sources, and the order in which to join the tables. Under the Rule based optimizer, parse time increases as tables are added to a join. Sub-queries generally execute as joins internally; this can be confirmed using explain plan. Under the Cost based optimizer you have two notable parameters that help tune joins (OPTIMIZER_SEARCH_LIMIT and OPTIMIZER_MAX_PERMUTATIONS). When joining three or more tables, use the table with the most dependencies as the driving table.
Hash Join
The first thing to remember about hash joins is that they are only used under the Cost based optimizer and only for equijoins. Hash joins are usually used in place of sort-merge joins and will generally outperform them. A hash join does a full table scan on both tables being joined and then breaks them up into partitions; the amount of available memory (HASH_AREA_SIZE, which defaults to twice SORT_AREA_SIZE if not set) determines how many partitions. You may also control the performance and frequency of hash joins with the HASH_MULTIBLOCK_IO_SIZE parameter.
Sort Merge Join
A sort-merge join performs a sort on each row source if it is not already sorted, then merges the results. Sort-merge joins are only usable for equijoins. Like the hash join, the sequence of the tables does not matter; there are no inner and outer tables with a sort-merge join. If the number of rows satisfying the join predicates is a large proportion of both tables, a sort-merge join will outperform a nested loop join. The sort parameters in the database have a big impact on the performance and efficiency of sort-merge joins; the important ones are SORT_AREA_SIZE and SORT_AREA_RETAINED_SIZE. Using a smaller SORT_AREA_SIZE is likely to increase the cost the Cost based optimizer assigns to sort operations, while increasing the multiblock read count can decrease the cost of sort operations.
Nested Loop Join
Nested loop joins use the concept of an inner and an outer table; the outer table is known as the driving table. Nested loop joins are used for non-equijoin queries. In a nested loop join, Oracle queries the inner table for all matching rows for each row in the outer table; in other words, Oracle repeatedly probes the inner table while it goes row by row through the outer table. You cannot avoid the full scan of the outer table, but you can use an index when retrieving matching rows from the inner table. Performance suffers seriously when the inner table is not accessed by an index, because the inner table will then be full table scanned over and over, once for each row in the outer table. Non-join predicates are resolved after the actual nested loop unless a composite index exists that satisfies both the join and the non-join predicate. In short, try to avoid nested loop joins where the inner table is accessed via a full table scan.
Outer Join
An outer join is simply a join that returns rows from the outer table even when the inner table has no row satisfying the join predicate. Non-join predicates will only use an index if the join predicate does not use an index access path.
Some Random Tuning Tips
Order of the from clause
The order of the from clause is only important when there are two equivalent execution paths, because the order of the where clause predicates has higher priority than the from clause. The ORDERED hint will force Oracle to use the current from clause order when choosing the execution path. In this case you want to have the smallest and most dependent table to the far right, as Oracle parses the from clause from right to left.
Use of HAVING will usually produce a suboptimal execution path, as it filters the results after the row source has been fetched. This can cause unneeded sorting. Restricting rows with a properly written where clause will usually perform a lot better than HAVING.
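A sketch of the rewrite using the demo EMP table:

-- Filtering after the group operation with HAVING
SELECT deptno, COUNT(*)
FROM emp
GROUP BY deptno
HAVING deptno = 10;

-- Restricting the rows first with WHERE is usually cheaper
SELECT deptno, COUNT(*)
FROM emp
WHERE deptno = 10
GROUP BY deptno;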
When there are non-indexed predicates, the way Oracle evaluates them is a little surprising: AND predicates are evaluated top to bottom, while OR predicates are evaluated bottom to top. To take advantage of this, order your clauses to cut down row source sizes as early as possible; depending on whether you are using AND or OR you may have to reverse the order.
You may be able to join unrelated but similar queries together into a single complex query to save parse time and client/server round trips. The DUAL table can often help here by avoiding the full table scans associated with trying to join up unrelated tables. These queries can be tricky to put together but may prove beneficial when dealing with busy or slow networks.
Count()
This discussion has come up many times and I have heard many different stories about it. The reason is that Oracle has changed count() and how it functions over the versions. Previously count(*) would perform faster than, say, count(1), and count(indexed column) would perform even faster. In current versions this has become a myth: Oracle has optimized how count() works and will choose the most optimal path regardless of what is used inside the count. For readability I would recommend sticking to count(*) as a standard.
Minimize your sub queries
Minimize redundant and similar sub queries against the same table by combining them into multiple-column sub queries. The number of blocks visited will be significantly lower when a single sub query is used instead of several.
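A sketch using a hypothetical EMP_AUDIT table with a FLAGGED column; the combined form visits EMP_AUDIT once and also pairs the columns, which is usually what was intended:

-- Two similar sub queries, each visiting EMP_AUDIT separately
SELECT ename
FROM emp e
WHERE e.deptno IN (SELECT deptno FROM emp_audit WHERE flagged = 'Y')
AND   e.job    IN (SELECT job    FROM emp_audit WHERE flagged = 'Y');

-- Combined into a single multiple-column sub query
SELECT ename
FROM emp e
WHERE (e.deptno, e.job) IN (SELECT deptno, job FROM emp_audit WHERE flagged = 'Y');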
The use of DECODE can often save multiple visits to a table. DECODE can be used to efficiently combine multiple group functions into a single query. Although DECODE can be quite confusing to use at times, its power and flexibility can prove well worth the effort. Significant improvements may be found with strategically placed DECODE expressions.
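A sketch using the demo EMP table: three counts from a single pass over the table instead of three separate queries. COUNT ignores NULLs, and DECODE returns NULL when the value does not match:

SELECT COUNT(DECODE(job, 'CLERK',    1)) clerks
,      COUNT(DECODE(job, 'MANAGER',  1)) managers
,      COUNT(DECODE(job, 'SALESMAN', 1)) salesmen
FROM   emp;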
Be careful of UNIONS
Always ask yourself whether you need duplicate row detection or not. UNION combines multiple queries and then sorts, merges, and filters the result to remove duplicate rows. UNION ALL simply returns the union of the two row sources. Avoiding the costly sort and filtering can save significant time.