Performance Tuning - Database

The document outlines an agenda for a 3-day performance tuning training for a database. Day 1 covers topics like database memory architecture, shared pool, SQL parsing, and buffer cache. Day 2 focuses on reading execution plans, optimizer statistics, case studies, and more. Day 3 discusses optimizer evolution, execution plan management, traces, and AWR. The document also provides overviews of key database concepts and common mistakes in query writing.


Performance Tuning - Database

May 31, 2017


Agenda – Day 1
● Database Memory Architecture
● Shared Pool
● SQL parsing
● Literal vs Bind
● Versioning
● Cursor Sharing to Rescue
● Buffer Cache
● Common Query writing Mistakes
● Indexes and Usage
● Related AWR
● Important SQL’s
Agenda – Day 2
● Reading optimizer execution plans
● Optimizer statistics
● gather_plan_statistics
● Various Case Studies
– Locking
– GTT tables
– Throw Away percentage
– Wrong statistics
– Fragmentation and FTS
– Sudden increase in data volume/data skewness
Agenda - Day 3
● Optimizer Evolution
– RBO
– CBO
– Histograms
– Bind Peeking
– Adaptive Cursor sharing
– Cardinality Feedback
● Execution Plan Management:
– Taking Baselines – SPM
– Manual Baselines
– Wrong plan in use: fixing the correct plan
– Copying plan from different environment.
– Asking Optimizer to look for a better plan
– Copying execution plan of a sql to another
– Hints
● Traces & Reviewing tkprof
● AWR- related to session
Performance Requirement

● Optimal Performance – Always


● Highly Scalable – Nearly Linear
● Ensure High Availability

Responsible Team
● Application Developer – Proactive
● Development DBA’s – Proactive
● Production DBA - Reactive
Database
● Understanding Architecture
– SGA (the Shared Pool / the Buffer Cache)
– Read Consistency
– Locking and Concurrency
● Optimizer
– Statistics
– Query Transformation
● Database Objects
– Tables & Indexes
– Partitioning
Database
SGA… the Shared Pool
● Objective – to read as much as possible from memory
● Stores Parsed Version of SQL’s / PLSQL’s
● Split into various components – library cache, dictionary cache.
● LRU Algorithm
● Protected by Latches / Mutexes (Mutual Exclusive Lock)
● Contention : Frequent Allocation / Deallocation of memory
● Contention : Frequent Loading of Cursors
● Shared pool sizing ????

• Sharing SQLs is the key to effective shared pool utilization
• Maintain coding standards
• SQL parsing: consider what it takes to avoid excessive parsing of SQL statements
• Impacts: a) CPU  b) Concurrency/latches  c) Performance
  (latch: shared pool, latch: row cache objects, library cache pin/lock, mutex-related events, etc.)
Column A                            Column B

SELECT COUNT(*)                     select count(*)
FROM after_deforestation;           from after_deforestation;          = ?

BEGIN                               BEGIN
  UPDATE favorites                    update favorites
  SET flavor = 'CHOCOLATE'            set flavor = 'CHOCOLATE'
  WHERE name = 'STEVEN';              where name = 'STEVEN';           = ?
END;                                END;

BEGIN                               BEGIN
  UPDATE ceo_compensation             update ceo_compensation
  SET stock_options = 1000000,        set stock_options = 1000000,
      salary = salary * 2.0               salary = salary * 2          = ?
  WHERE layoffs > 10000;              where layoffs > 10000;
END;                                END;
Literal vs Bind
● Literal:
  select object_name, owner from dba_objects_bkp where object_id = &no;
● Bind:
  var b1 number
  exec :b1 := &no;
  select object_name, owner from dba_objects_bkp where object_id = :b1;
Parsing… Soft v/s Hard
• Syntax / semantics checks
• Is a sharable parent cursor available?
  – N → store a parent cursor in the library cache
  – Y → is a sharable child cursor available?
      • Y → execute (soft parse)
      • N → query transformation / execution plan generation,
            store the child cursor in the library cache, then execute (hard parse)
Coding Standard… (Multiple Parent Cursors)

● V$SQL_SHARED_CURSOR
Coding Standard… (Multiple Child Cursors)
Example of multiple child cursors caused by a change in bind length; the full list of reasons is in V$SQL_SHARED_CURSOR (demo).

● desc v$sql_shared_cursor
Anchor Declarations of Variables

● Choices while declaring a variable:
  – Hard-coding the datatype
  – Anchoring the datatype to another structure
● Whenever possible, use anchored declarations rather than explicit datatype references
  – %TYPE for scalar structures
  – %ROWTYPE for composite structures

Hard-coded declarations:
  ename    VARCHAR2(30);
  totsales NUMBER(10,2);

Anchored declarations:
  v_ename  emp.ename%TYPE;
  totsales pkg.sales_amt%TYPE;
  emp_rec  emp%ROWTYPE;
  tot_rec  tot_cur%ROWTYPE;
Examples of Anchoring

The emp table:
  ename    VARCHAR2(60)
  empno    NUMBER
  hiredate DATE
  sal      NUMBER

DECLARE
  v_ename  emp.ename%TYPE;
  v_totsal config.dollar_amt%TYPE;
  newid    config.primary_key;
BEGIN
  . . .
END;

PACKAGE config
IS
  dollar_amt NUMBER (10, 2);
  pkey_var   NUMBER (6);
  SUBTYPE primary_key IS pkey_var%TYPE;
  SUBTYPE full_name IS VARCHAR2(100);
END config;

● Use %TYPE and %ROWTYPE when anchoring to database elements
● Use SUBTYPEs for programmatically-defined types (PLV.sps, aq.pkg)
Analysis – Am I Using Bind Variables?

am_i_using_bind_variables.sql
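The script itself is site-specific, but one common approach (a sketch of what am_i_using_bind_variables.sql may do; the threshold is arbitrary) is to group V$SQL by FORCE_MATCHING_SIGNATURE — statements that would collapse into a single cursor if literals were replaced by binds share a signature:

```sql
-- Statements differing only in literal values share one
-- FORCE_MATCHING_SIGNATURE; a high count suggests missing binds.
SELECT force_matching_signature,
       COUNT(*)    AS copies,
       MIN(sql_id) AS sample_sql_id
FROM   v$sql
WHERE  force_matching_signature <> 0
GROUP  BY force_matching_signature
HAVING COUNT(*) > 10
ORDER  BY copies DESC;
```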
Cursor Sharing & Event 10503 to the rescue
● cursor_sharing=FORCE converts literals to bind variables
● Calculates the SQL hash value and searches for an existing cursor in the library cache, reducing hard parsing and shared pool garbage.

● alter system set events '10503 trace name context forever, level 4000';
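A minimal demo of FORCE at session level (sketch; table t and the marker comment are hypothetical):

```sql
ALTER SESSION SET cursor_sharing = FORCE;

SELECT /* cs_demo */ COUNT(*) FROM t WHERE id = 1;
SELECT /* cs_demo */ COUNT(*) FROM t WHERE id = 2;

-- Both statements should now share one cursor, with the literal
-- replaced by a system-generated bind such as :"SYS_B_0".
SELECT sql_id, executions, sql_text
FROM   v$sql
WHERE  sql_text LIKE '%cs_demo%'
AND    sql_text NOT LIKE '%v$sql%';
```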
SGA… the Buffer Cache
● Objective – to read as much as possible from memory
● Caches Database Blocks to eliminate Disk I/O
● Blocks are either Dirty or Clean
● LRU Algorithm, in conjunction with, Touch Count (TCH)
● Protected by Latches to maintain LRU and TCH
● Contention : Unwanted I/O’s
● Contention : Concurrent Access to a Block

● Logical Reads are faster than Disk Reads

AWR Review – Sections

● Load profile
● Instance Efficiency Percentages
● Cache Sizes
● Parsing in Time Model Statistics
● SQL ordered by Sharable Memory
● SQL ordered by Parse Calls
● SQL ordered by Version Count
● Advisory Statistics
What's the Big Deal?
● How you write/design SQL/PLSQL is the most critical factor affecting the quality of PL/SQL-based applications

● Consider:
  – One of the reasons developers like PL/SQL so much is that it is so easy to write SQL and PL/SQL
  – One of the most dangerous aspects of PL/SQL is that it is so easy to write SQL and PL/SQL
Common Query Writing Mistakes
● Adding All Possible Joins
– Select * from t1,t2,t3 where t1.emp_id = t2.emp_id and t2.emp_id = t3.emp_id
– Make sure everything that can be joined is joined
– Select * from t1,t2,t3 where t1.emp_id = t2.emp_id and t2.emp_id = t3.emp_id
and t1.emp_id = t3.emp_id (Better Way)
● Ordering Via a Where Clause
  – select city from address order by city
  – A dummy where clause referencing an indexed column will retrieve all records in ascending order
  – Avoids a costly sort operation
  – select city from address where city > ' '
Common Query Writing Mistakes
• Functions on Indexed Columns
  - WHERE SUBSTR(party_name,1,7) = 'Capital'
    Solution: WHERE party_name LIKE 'Capital%'
  - WHERE TRUNC(last_update_date) = TRUNC(SYSDATE-2)
    Solution: WHERE last_update_date BETWEEN TRUNC(SYSDATE-2) AND TRUNC(SYSDATE-2) + 0.99999

• Implicit Conversions (employee_id here is a VARCHAR2 column)
  - SELECT * FROM emp3 WHERE employee_id = 100
    Solution: SELECT * FROM emp3 WHERE employee_id = '100'
  - Strings always lose in conversion.
Implicit Conversions Example

ID Data Type | Where Clause | What the Optimizer Does | Index Gets Used
VARCHAR2     | ID = 123     | TO_NUMBER(ID) = 123     | No
NUMBER       | ID = '123'   | ID = TO_NUMBER('123')   | Yes
VARCHAR2     | ID = :b1     | ID = :b1                | Yes
NUMBER       | ID = :b1     | ID = TO_NUMBER(:b1)     | Yes
SELECT STATEMENT- Best Practices
● Not exists in place of NOT IN
● Exists in place of IN
● Joins in place of sub queries
● UNION in place of OR on an index column
● WHERE instead of ORDER BY
● WHERE in place of HAVING wherever possible
● Using WITH Clause for repetitions in query
● CASE in place of DECODE
● Count(pkey col) instead of *
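To illustrate the first item, a hedged sketch of a NOT IN predicate rewritten as NOT EXISTS (emp and dept are hypothetical tables):

```sql
-- Original: NOT IN (also returns no rows at all if the subquery
-- produces any NULL department_id)
SELECT e.employee_id
FROM   emp e
WHERE  e.department_id NOT IN (SELECT d.department_id FROM dept d);

-- Rewrite: correlated NOT EXISTS
SELECT e.employee_id
FROM   emp e
WHERE  NOT EXISTS (SELECT 1
                   FROM   dept d
                   WHERE  d.department_id = e.department_id);
```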
Indexes
• Index Advantages:
- Performance

• Index Disadvantages:
- Inserts/Deletes

• Type of Indexes
- Normal Index
- Unique Index
- Bitmap Index
- Functional Index
- Reverse key index
- Intermedia Index
Solutions to basic trouble spots for SQL queries

select * from s_emp where title = 'Manager' and salary = 100000;

● Please provide your thoughts – which would be the best index?

Options:
1. Index on title only
2. Index on salary only
3. Index on (title, salary)
4. Index on (salary, title)
Which Columns to Index
• Know how the table is going to be used, and what data you are going after
in the table.
• Choose a column, or combination of columns which have the most unique
values.
• Do not choose a column, or columns, which have mostly the same values.
• For example, for an employee table you want an index on social security
  number (a unique value). But if you were going to search on name, you
  would want an index specifically on the NAME column.
• Try to create indexes on columns that have integer values rather than
character values
• Create indexes on columns frequently used in the WHERE,ORDER BY,
GROUP BY Clauses
• Order matters in a composite index
• Always create normal index on foreign key columns
• Don’t create indexes with less than 75% selectivity
• Remove unused indexes
Usage of Indexes

● Full table scan: performance degrades as volume increases
● Unique index: unique values
● Functional index: comparing functions
● Bitmap index: stores data as a bitmap. Efficient for AND/OR and count
  operations; makes updates and inserts very slow. Ideal for data
  warehousing / read-only environments.
● Oracle Text: text indexing engine. Works specifically on CLOB structures;
  occupies huge space and needs high maintenance.
● Reverse key index: in case of hot blocks.
Important SQL’s
● o.sql
● o1.sql
● sd.sql
● sdh.sql
● ap.sql
● ap1.sql
● blwt.sql
● Size.sql
● frag.sql
● bind.sql
● ind.sql
● pjobs.sql
● jobs.sql
● Jobh.sql
● anreq.sql
Script Location: /ots/scripts/dba_scripts/

Reading optimizer execution plans
● Execution Plan

Interpreting Execution Plan
• Look for the innermost indented statement
• The statement with the most indentation is usually executed first
• But not always: the first step executed is the first indented line that
  satisfies its parent step without further requirements
• If two statements appear at the same level of indentation, the top
  statement is executed first
Access Methods
● Full Table Scan (FTS)
● Index unique scan
● Index range scan
● Index full scan
● Index fast full scan

Access Methods – Full Table Scan (FTS)

Access Methods – Index Unique Scan

Access Methods – Index Range Scan

Access Methods – Index Full Scan

Access Methods – Index Skip Scan
Join
● A join is a predicate that combines exactly 2 row sources
● Join steps are performed serially
● The order in which joins are performed is very critical
  e.g. index on A(a.col1, a.col2)
  select A.col4 from A, B, C
  where B.col3 = 10
  and   A.col1 = B.col1
  and   A.col2 = C.col2
  and   C.col3 = 5
Join Methods

● Cartesian Product
● Nested Loops (NL)
● Hash Join
● Sort Merge Join (SMJ)

Join Methods – Cartesian Product

Join Methods – Nested Loops

Join Methods – Hash Join

Join Methods – Sort Merge Join
Operations & Sorts
Operations
● sort
● filter
● view

Sorts
● order by
● group by
● aggregate

Operations – Sort Aggregate

Operations – Sort Order By

Operations – Sort Group By

Operations – Filter

Operations – View
Generating Explain Plan
• TOAD
• Tkprof
• DBMS_XPLAN
• GV$SQL_PLAN or DBA_HIST_SQL_PLAN

Gather_plan_statistics (A Big Help)
select /*+ gather_plan_statistics */ ... your query ... ;
select * from table(dbms_xplan.display_cursor('&sql_id', &childnumber, 'ALLSTATS'));
Locking
● Check for wait event: enq: TX - row lock contention
● blwt.sql
● BLOCKING_SESSION column in V$SESSION
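Besides blwt.sql, the blockers can be read straight from V$SESSION (standard columns; a minimal sketch):

```sql
-- Sessions currently stuck on a row lock, with their blocker's SID
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL
AND    event = 'enq: TX - row lock contention';
```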
• Poor SQL Construct

  Search form (screenshot omitted):
    Department_id  [            ]  Leave blank for ALL
    First Name     [            ]  Leave blank for ALL
    [SUBMIT]
  • Note: selecting either or both columns is mandatory

  SELECT employee_id, first_name, last_name, department_name, city
  FROM   emp2 em, dep2 d, locations l
  WHERE  em.department_id = d.department_id
  AND    d.location_id = l.location_id
  AND    em.department_id = NVL(:dept_id, em.department_id)
  AND    first_name = NVL(:first_name, first_name);
Wrong Statistics
Inappropriate index with high throw away %
● A high number of rows returned by the index
● Most of the row filtering happens at the table level
Results in:
● High IO
● High elapsed time
● The execution plan of the SQL may turn to FTS in future
Solution: modify the index. For consistent performance, keep the throwaway percentage as low as possible.

Issues with Temporary Tables
● Temporary data can be preserved only for session or only for current transaction
defined by ON COMMIT DELETE ROWS/PRESERVE ROWS
● Truncate/ Delete impacts only session specific data
● Temp tables can have indexes/triggers/views etc
● Temporary tables with temporary=Y in DBA_TABLES

create global temporary table temp_test on commit delete rows
  as select * from apps.test123;

Issue: sub-optimal query plans
Reason: incorrect statistics — gather-statistics programs do not populate
current statistics, and it is difficult to form a correct query plan since
different systems/programs/sessions carry different types/volumes of data.
Approaches to Tune GTT tables related
queries
● Dynamic Sampling
● Collecting Statistics in Middle of Code
● Creating profiles on all SQL statements using these GTT tables
● Identifying Best Optimizer statistics for each GTT
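Of these, dynamic sampling is the lightest to apply — it can be requested per query with a hint (sketch; gtt_orders is a hypothetical GTT):

```sql
-- Level-2 dynamic sampling lets the optimizer sample the GTT's
-- blocks at parse time instead of trusting stale/absent statistics.
SELECT /*+ dynamic_sampling(g 2) */ g.order_id, g.status
FROM   gtt_orders g
WHERE  g.status = 'NEW';
```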

Question - Which is the best approach?

Impact of Fragmentation on Performance

Scan type         | Impacted by fragmentation?
Index unique scan | No
Index range scan  | No
Full table scan   | Yes

Always have appropriate indexes on volatile tables with high
insert/delete/update activity, even if they have very few rows.
When was code modified...Changes in code?
● Identify long running SQL
● Check the usual methods mentioned earlier
● Check sdh.sql
● Check other environments
● If the SQL is new and not found in any other environment, it could be new code.
● Use src.sql, which returns the pkg/proc/function name along with the line
  number in the code.
● Check when the code was modified and whether the same code exists in
  other environments.

Sudden increase in Data Volume/Data skewness
Symptoms:
● Sd.sql will show higher IO/high number of records.
● Capture bind values of SQL to check skewness in the data.
● Use tools to check growth of tables involved.

Optimizer Evolution
● RBO
● CBO
● Histograms
● Bind Peeking
● Adaptive Cursor sharing
● Cardinality Feedback
Optimizer modes: Rule vs Cost
● Rule
  – Hard-coded heuristic rules determine the plan
    ● "Access via index is better than full table scan"
    ● "Fully matched index is better than partially matched index"
    ● …
● Cost (2 modes)
  – Statistics of the data play a role in plan determination
  ● Best throughput mode: retrieve all rows asap
    – First compute, then return fast
    – alter session set optimizer_mode = ALL_ROWS / FIRST_ROWS_10
  ● Best response mode: retrieve the first row asap
    – Start returning while computing (if possible)
RBO
Under RBO, the optimizer chooses an execution plan based on the access paths available and the ranks of those access paths:
● RBO Path 1: Single Row by Rowid
● RBO Path 2: Single Row by Cluster Join
● RBO Path 3: Single Row by Hash Cluster Key with Unique or Primary Key
● RBO Path 4: Single Row by Unique or Primary Key
● RBO Path 5: Clustered Join
● RBO Path 6: Hash Cluster Key
● RBO Path 7: Indexed Cluster Key
● RBO Path 8: Composite Index
● RBO Path 9: Single-Column Indexes
● RBO Path 10: Bounded Range Search on Indexed Columns
● RBO Path 11: Unbounded Range Search on Indexed Columns
● RBO Path 12: Sort Merge Join
● RBO Path 13: MAX or MIN of Indexed Column
● RBO Path 14: ORDER BY on Indexed Column
● RBO Path 15: Full Table Scan
RBO
● Ranking multiple available indexes
1. Equality on single column unique index
2. Equality on concatenated unique index
3. Equality on concatenated index
4. Equality on single column index
5. Bounded range search in index
– Like, Between, Leading-part, …
6. Unbounded range search in index
– Greater, Smaller (on leading part)
CBO
● Works on Cost of the operation performed.
The optimizer determines the most efficient way to execute a SQL statement after considering many factors related to the conditions specified in the query and the statistics of the underlying objects. This determination is an important step in the processing of any SQL statement and can greatly affect execution time.

******Statistics are CRUCIAL to this processing!!!

● Statistics at various levels:
● Table:
– Num_rows, Total Blocks, Rows/Block, Empty_blocks, Avg_space
● Column:
– Num_values, Low_value, High_value, Num_nulls
● Index:
– Distinct_keys, Blevel, Avg_leaf_blocks_per_key, Avg_data_blocks_per_key, Leaf_blocks
Histograms – 8i (for literals)
● Take care of uneven distribution of data
● Created with:
  exec dbms_stats.gather_table_stats('APPS','EMP2', method_opt => 'FOR COLUMNS SIZE 2 PERMANENT');
● Disable new histogram creation by re-analyzing with dbms_stats and the
  argument method_opt => 'for all columns size 1';
Bind Peeking – 9i
● Lets the optimizer use HISTOGRAMS with BINDS
● Set the hidden parameter _optim_peek_user_binds = false to disable bind peeking
● Disable new histogram creation by re-analyzing with dbms_stats and the
  argument method_opt => 'for all columns size 1';
Bind Peeking – 9i – 10g
● Problem: usage of BINDS (histograms are bypassed)
Options:
a) Re-parse every execution of every statement that uses bind variables
b) Use literals, in order to get the benefit of histograms, wherever
   required based on the application
c) Implement a system which identifies statements that might benefit from
   different plans based on the values of bind variables, then peek at
   those variables for every execution of those "bind sensitive" statements

plan_changes.txt
Adaptive Cursor Sharing - Solution
● Two new columns in V$SQL:
a) IS_BIND_SENSITIVE: the optimizer peeked a bind variable value, and a
   different value may change the execution plan.
b) IS_BIND_AWARE: the query uses extended cursor sharing; set only after
   the query has been marked bind sensitive.
● Three new views:
a) V$SQL_CS_HISTOGRAM
b) V$SQL_CS_SELECTIVITY
c) V$SQL_CS_STATISTICS
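The new columns can be checked directly (a minimal sketch):

```sql
-- Cursors the optimizer considers sensitive to bind values,
-- and whether a bind-aware child has been created yet
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  is_bind_sensitive = 'Y'
ORDER  BY sql_id, child_number;
```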
Adaptive Cursor Sharing - Solution
● Can be disabled by
alter system set "_optimizer_extended_cursor_sharing_rel"=none scope=both;
alter system set "_optimizer_extended_cursor_sharing"=none scope= both;
alter system set "_optimizer_adaptive_cursor_sharing"=false scope= both;
Cardinality Feedback – 11gR2
Enables monitoring for:
● Missing or inaccurate statistics
● Complex predicates
USE_FEEDBACK_STATS column in V$SQL_SHARED_CURSOR.
● Records actual rows returned
● Compares them with the estimates
● Can be turned off by setting "_optimizer_use_feedback" to false

Cardinality Feedback – 11gR2
Bad execution Plan
1. Check for other available plans in memory using sd.sql
2. Check for other available plans in AWR repository using sdh.sql
3. Use coe_xfr_sql_profile.sql to force the correct plan.

E.g. SQLs: 4b17gdjg5y97q p3
Profile used for this SQL statement.
Copy execution plan from different
environment
1. Check for other available plans in memory using sd.sql
2. Check for other available plans in AWR repository using sdh.sql
3. Use coe_xfr_sql_profile.sql to generate plan file from other environment.
4. Run file in current environment.

Option 2:
1. Create a profile on the source environment.
2. Create a staging table on the source environment.
3. Take an export dump of the staging table.
4. Import the dump into the target database.
5. Import the profile from the staging table on the target.
The script is provided in MOS Note: Doc ID 457531.1
Asking Optimizer to look for a better plan
1. Run @/rdbms/admin/sqltrpt.sql to get the recommendations from Oracle.
2. Mostly suggestions would revolve around
a) New indexes
b) New plan with parallelism enabled
c) Collect statistics on Tables/Indexes
d) Better plan is available which can be enabled by running script provided.

Suitable options c and d. sqltrpt.txt

Copying execution plan of a SQL to another

Applicable to:
● SQLs that differ only in HINTS
● Rewriting the SQL to create a new plan, then creating appropriate HINTS
  to force the same plan as the optimized SQL
Options:
● Custom script fix_plan.sql – Takes original and Modified SQLid as input. Works for all
environments.
● SQL patching– 11g
● Sql Plan Management—11g
Requires: SQL rewrite skills

Hints
● Used to alter execution plans
● Have syntax like comments
– Select /*+ FULL(e)*/ from …
● Incorrectly structured or misspelled hints are ignored
● Conflicting hints are ignored
● Multiple hints may be required for complex statements
● Hints for join orders
– LEADING
– ORDERED
● Hints for Join operations
– USE_NL
– USE_MERGE
– USE_HASH
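Combining the two groups, a join order and a join method can be forced together (sketch over the classic emp/dept tables):

```sql
-- LEADING fixes the join order (dept first); USE_NL(e) asks for
-- nested loops with emp as the probed (inner) row source.
SELECT /*+ LEADING(d e) USE_NL(e) */ e.ename, d.dname
FROM   dept d, emp e
WHERE  e.deptno = d.deptno
AND    d.dname  = 'SALES';
```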
Capturing Baselines – SPM (11g)
● Enabled and used via 2 parameters:
a) OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES
b) OPTIMIZER_USE_SQL_PLAN_BASELINES
● Stored in DBA_SQL_PLAN_BASELINES
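Besides automatic capture, a plan already in the cursor cache can be loaded as a baseline with DBMS_SPM (sketch; the sql_id is a placeholder):

```sql
-- Load the cached plan(s) of one statement as accepted baselines
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => '4b17gdjg5y97q');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded');
END;
/
```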


Manual Baselines

● An alternative mechanism to capture ALL SQLs, execution plans and bind
  variables in custom tables.
● Execution plans can be generated from the custom tables and applied to
  the corresponding SQL statements.
Reviewing TKProf
• Trace Levels

Reviewing TKProf(Continued)

tkprof.txt

Reviewing AWR

AWR is used to collect performance statistics, including:
● Wait events used to identify performance problems.
● Time model statistics indicating the amount of DB time associated with a
  process, from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
● Active Session History (ASH) statistics from the
  V$ACTIVE_SESSION_HISTORY view.
● Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
● Object usage statistics.
● Resource-intensive SQL statements.
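A text AWR report for a snapshot range can also be produced without the interactive script (sketch; the dbid and snapshot ids are placeholders):

```sql
-- Interactive route: @?/rdbms/admin/awrrpt.sql
-- Programmatic alternative:
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT(
               l_dbid     => 1234567890,  -- from V$DATABASE.DBID
               l_inst_num => 1,
               l_bid      => 100,         -- begin snapshot id
               l_eid      => 101));       -- end snapshot id
```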
Reviewing AWR(Continued)

● Report Summary
● Time Model Statistics
● Foreground Wait Class
● Foreground Wait Events
● SQL Statistics
● IO Stats
● Advisory Statistics
● Segment Statistics
● Memory Statistics

Expected Benefits