The SQL Trace (ST05) Quick and Easy
The SQL Trace, which is part of the Performance Trace (transaction
ST05), is the most important tool for testing the performance of database
accesses. Unfortunately, information on how to use the SQL Trace, and
especially how to interpret its results, is not part of the standard ABAP
courses. This weblog gives you a quick introduction to the SQL Trace. It
shows you how to execute a trace, which is very straightforward, and how
to get a very condensed overview of the results, the SQL statements
summary, a feature that many are not so familiar with. The usefulness of
this list becomes obvious when the results are interpreted. A short
discussion of the ‘database explain’ concludes this introduction to the
SQL Trace.
1. Using the SQL Trace
Using the SQL trace is very straightforward:
1 Call the SQL Trace (transaction ST05) in a second mode.
2 Make sure that your test program has been executed at least once, or even
better a few times, so that the buffers and caches are filled. Only a
repeated execution provides reproducible trace results; initial costs are
neglected in our examination.
3 Start the trace.
4 Execute your test program in the first mode.
5 Switch off the trace. Note that only one SQL trace can be active on an
application server, so always switch your trace off immediately
after you are finished.
6 Display the trace results.
7 Interpret the results.
Note that the trace can also be switched on for a different user.
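If you need a trace target for a first try, a minimal report such as the
following sketch will do. The report name is made up, and SFLIGHT is the
standard flight demo table available in most demo and training systems.

REPORT z_st05_demo.

* Minimal ST05 trace target. The repeated execution fills buffers and
* caches, so later runs give reproducible trace results.
DATA lt_flights TYPE STANDARD TABLE OF sflight.

DO 3 TIMES.
  SELECT * FROM sflight
    INTO TABLE lt_flights
    WHERE carrid = 'LH'.
ENDDO.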
=> In this section we showed how the SQL trace is executed. The
execution is very straightforward and can be performed without any prior
knowledge. The interpretation of the results, however, requires some
experience. More on the interpretation will come in the next section.
2. Trace Results – The Extended Trace List
When the trace result is displayed, the extended trace list comes up. This
list shows all executed statements in the order of execution (as an extended
list it also includes the time stamp). One execution of a statement can
result in several lines: one REOPEN and one or several FETCHes. PREPARE
and OPEN lines also exist, but you should not see them, because you
should only analyze traces of repeated executions. So if you see a
PREPARE line, it is better to repeat the measurement, because an initial
execution has other effects that make the analysis difficult.
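To relate these list entries to your own code: one execution of a simple
array fetch like the sketch below (again on the demo table SFLIGHT) shows
up as one REOPEN line followed by one or more FETCH lines; how many
FETCHes you see depends on how many result packages the database
interface has to transfer.

DATA lt_flights TYPE STANDARD TABLE OF sflight.

* In the extended trace list this statement appears as one REOPEN plus
* one or several FETCHes; a PREPARE and OPEN only show up for the very
* first, not yet cached execution.
SELECT * FROM sflight
  INTO TABLE lt_flights
  WHERE carrid = 'LH'.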
If you want to take the quick and easy approach, the extended trace list
is much too detailed. To get a good overview you want to see all
executions of the same statement aggregated into one line. Such a list is
available and can be called via the menu ‘Trace List -> Summary by SQL
Statements’.
=> The extended trace list is the default result of the SQL Trace. It shows
a lot of very detailed information. For an overview it is much more
convenient to view an aggregated list of the trace results. This is the
Summarized SQL Statements list explained in the next section.
3. Trace Results – Summarized SQL Statements
This list contains all the information we need for most performance tuning
tasks.
The keys of the list are ‘Obj Name’ (col. 12), i.e. the table name, and ‘SQL
Statement’ (col. 13). When using the summarized list, keep the following
points in mind:
• Several coding positions can relate to the same statement.
• The statement shown can differ from its Open SQL formulation in
ABAP.
• The displayed length of the field ‘Statement’ is restricted, so two
different statements can show identical text; in this case the
statements differ in a part that is not displayed.
The important measured values are ‘Executions’ (col. 1), ‘Duration’ (col.
3) and ‘Records’ (col. 4). They tell you how often a statement was
executed, how much time it needed in total and how many records were
selected or changed. For these three columns the totals, displayed in the
last line, are also interesting. The totals of the other columns are
actually averages, which makes them less interesting.
Three columns are direct problem indicators: ‘Identical’ (col. 2), ‘BfTy’
(col. 10), i.e. the buffer type, and ‘MinTime/R.’ (col. 8), the minimal
time per record.
Additional, but less important information is given in the columns
‘Time/exec’ (col. 5), ‘Rec/exec’ (col. 6), ‘AvgTime/R.’ (col. 7), ‘Length’
(col. 9) and ‘TabType’ (col. 11).
For each line four functions are possible:
• The magnifying glass shows the statement details; these are the actual
values that were used in the execution. In the summary the values
of the last execution are displayed as an example.
• The ‘DDIC information’ provides some useful information about the
table and has links to further table details and technical settings.
• The ‘Explain’ shows how the statement was processed by the
database, particularly which index was used. More information
about ‘Explain’ can be found in the last section.
• The link to the source code shows where the statement comes from
and how it looks in Open SQL.
=> The statement summary introduced here will turn out to be a powerful
tool for performance analysis. It contains all the information we need in
a very condensed form. The next section explains which checks should be
done.
4. Checks on the SQL Statements
For each line the following five columns should be checked, as tuning
potential can be deduced from the information they contain. Select
statements and changing database statements, i.e. inserts, deletes and
updates, can behave differently, so the conclusions drawn from them also
differ.
For select statements please check the following:
• Entry in ‘BfTy’ = Why is the buffer not used?
• Tables which are buffered, i.e. with entries ‘ful’ for fully buffered,
‘gen’ for buffered by generic region and ‘sgl’ for single record buffer,
should not appear in the SQL Trace, because they should use the table
buffer. Therefore, you must check why the buffer was not used. Reasons
are that the statement bypasses the buffer (see the first sketch after
this list) or that the table was not yet in the buffer when the program
was executed. For the tables that are not buffered, but could be
buffered, i.e. with entries starting with ‘de’ for deactivated (‘deful’,
‘degen’, ‘desgl’ or ‘deact’) or the entry ‘cust’ for customizing table,
check whether buffering could be switched on.
• Entry in ‘Identical’ = Superfluous identical executions
• The column shows the identical overhead as a percentage. Identical
means that not only the statement, but also the values are
identical. Overhead expresses that of two identical executions only one
is necessary; the other is superfluous and could be saved (see the
second sketch after this list).
• Entry in ‘MinTime/R’ larger than 10,000 = Slow processing of statement
• An index-supported read from the database should need around
1,000 microseconds or even less per record. A value of 10,000
microseconds or more is a good indication that there is a
problem with the execution of that statement. Such statements
should be analyzed in detail using the database explain, which is
explained in the last section.
• Entry in ‘Records’ equal zero = No record found
• Although this problem is usually completely ignored, ‘no record found’
should be examined. First, check whether the table should actually
contain the record, i.e. whether the customizing and set-up of the
system is correct. Sometimes ‘no record found’ is expected
and used to determine program logic or to check whether keys are
still available, etc. In these cases only a few calls should be
necessary, and identical executions should absolutely not appear.
• High entries in ‘Executions’ or ‘Records’ = Really necessary?
• High numbers should be checked. Especially in the case of records, a
high number here can mean that too many records are read.
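As a supplement to the ‘BfTy’ check, the following sketch shows typical
Open SQL formulations that bypass the table buffer and therefore appear
in the trace even for a buffered table. ZBUFTAB and its fields are
made-up names standing for a fully buffered customizing table.

DATA: ls_buf   TYPE zbuftab,
      lt_buf   TYPE STANDARD TABLE OF zbuftab,
      lv_count TYPE i.

* Served from the table buffer, so it should not appear in the SQL Trace:
SELECT SINGLE * FROM zbuftab
  INTO ls_buf
  WHERE zkey = 'A001'.

* The following formulations bypass the buffer and therefore show up in ST05:
SELECT * FROM zbuftab
  INTO TABLE lt_buf
  BYPASSING BUFFER
  WHERE zkey = 'A001'.               " explicit bypass

SELECT COUNT( * ) FROM zbuftab
  INTO lv_count.                     " aggregate functions go to the database

SELECT * FROM zbuftab
  INTO TABLE lt_buf
  ORDER BY zfield.                   " ORDER BY on non-key fields bypasses the buffer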
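For the ‘Identical’ check, the classic source of identical executions is
the same SELECT SINGLE repeated inside a loop. The sketch below uses the
made-up tables ZORDERS and ZCUSTOMER (with a field CUSTOMER_ID) and shows
one possible fix with FOR ALL ENTRIES; it is only an illustration, not
the only way to remove identical executions.

DATA: lt_orders    TYPE STANDARD TABLE OF zorders,
      ls_order     TYPE zorders,
      lt_customers TYPE SORTED TABLE OF zcustomer
                        WITH UNIQUE KEY customer_id,
      ls_customer  TYPE zcustomer.

* Source of identical executions: the same customer is read again and again.
LOOP AT lt_orders INTO ls_order.
  SELECT SINGLE * FROM zcustomer
    INTO ls_customer
    WHERE customer_id = ls_order-customer_id.
ENDLOOP.

* Possible fix: read every needed customer only once, before the loop.
IF lt_orders IS NOT INITIAL.
  SELECT * FROM zcustomer
    INTO TABLE lt_customers
    FOR ALL ENTRIES IN lt_orders
    WHERE customer_id = lt_orders-customer_id.
ENDIF.
LOOP AT lt_orders INTO ls_order.
  READ TABLE lt_customers INTO ls_customer
       WITH TABLE KEY customer_id = ls_order-customer_id.
  " work with ls_customer
ENDLOOP.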
For changing statements, errors are fortunately much rarer. However, if
they occur then they are often more serious:
• Entry in ‘BfTy’ = Why is a buffered table changed?
• If a changing statement is executed on a buffered table, then it is
questionable whether this table is really suitable for buffering. In
the case of buffered tables, i.e. entries ‘ful’, ‘gen’ or ‘sgl’, it might be
better to switch off the buffering. In the case of tables whose buffering
is deactivated, the deactivation seems to be correct.
• Entry in ‘Identical’ = Identical changes must be avoided
• Identical executions of changing statements should definitely be
avoided.
• Entry in ‘MinTime/R’ larger than 20,000 = Changes can take longer
• Same argument as above, just the limit is higher for changing
statements.
• Entry in ‘Records’ equal zero = A change with no effect
• Changes should have an effect on the database, so this is usually a
real error which should be checked. However, the ABAP MODIFY statement
is realized on the database as an update, followed by an insert if the
record was not found (see the sketch after this list). In this case one
statement out of the group should have an effect.
• High entries in ‘Executions’ and ‘Records’ = Really necessary?
• Same problems as discussed above, but in this case even more
serious.
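To see this update-then-insert behaviour in the trace yourself, you can
trace a simple MODIFY on a database table. The sketch uses the demo table
SFLIGHT and should, of course, only be run in a test or demo system.

DATA ls_flight TYPE sflight.

ls_flight-carrid = 'LH'.
ls_flight-connid = '0400'.
ls_flight-fldate = sy-datum.
ls_flight-price  = '450.00'.

* The trace shows an UPDATE and, only if no row was changed, a subsequent
* INSERT. One of the two statements affects 0 records, which is expected
* here and not an error.
MODIFY sflight FROM ls_flight.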
=> In this section we explained detailed checks on the statements of the
SQL Statement Summary. The checks are slightly different for selecting
and changing statements. They address questions such as why a
statement does not use the table buffer, why statements are executed
identically, whether the processing is slow, why a statement was
executed but no record was selected or changed, and whether a
statement is executed too often or selects too many records.
5. Understanding the Database Explain
The ‘database explain’ shows the SQL statement as it is sent to the
database, together with the execution plan on the database. This view has a
different layout on the different database platforms supported by SAP,
and it can become quite complicated if the statement is complicated.
From this example you should understand the principle of the ‘Explain’,
so that you can also understand more complicated execution plans.
Some database platforms do not use graphical layouts and are a bit
harder to read, but still show all the relevant information.
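As a simple illustration of what to look for in an execution plan,
consider the following two selects on the demo table SFLIGHT (primary key
MANDT, CARRID, CONNID, FLDATE). Which access path the Explain actually
shows depends on the database platform, the optimizer and the existing
indexes.

DATA lt_flights TYPE STANDARD TABLE OF sflight.

* WHERE clause supported by the primary index: the Explain should show an
* index range scan on the primary index.
SELECT * FROM sflight
  INTO TABLE lt_flights
  WHERE carrid = 'LH'
    AND connid = '0400'.

* No index field in the WHERE clause: the Explain typically shows a full
* table scan (or a secondary index on PLANETYPE, if one exists).
SELECT * FROM sflight
  INTO TABLE lt_flights
  WHERE planetype = '747-400'.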
=> In this last section we showed an example of a database explain,
which is the only way to find out whether a statement uses an index, and
if so, which index. Especially in the case of a join, it is the proper index
support that determines whether a statement needs fractions of seconds
or even minutes to be finished.
Chapter Overview:
1 Introduction
2 Performance Tools
3 Database Know-How
4 Optimal Database Programming
5 Buffers
6 ABAP – Internal Tables
7 Analysis and Optimization
8 Programs and Processes
9 Further Topics
10 Appendix
In the book you will find detailed descriptions of all relevant performance
tools. An introduction to database processing, indexes, optimizers etc. is
also given. Many database statements are discussed and different
alternatives are compared. The resulting recommendations are
supported by ABAP test programs which you can download from the
publisher's webpage (see below). The importance of the buffers in the
SAP system is discussed in chapter five. Of all ABAP statements, mainly
the usage of internal tables is important for good performance. With all
the presented knowledge you will be able to analyse your programs and
optimize them. The performance implications of further topics, such as
modularisation, work processes, remote function calls (RFCs), locks and
enqueues, update tasks and parallelization, are explained in the eighth
chapter.
Even more information – including the test programs – can be found on
the webpage of the publisher.
I would especially recommend the examples for the different database
statements. The file with the test program (K4a) and the necessary
overview with the input numbers (K4b) can be used even if you do not
speak German!