
ERPGenie.COM ABAP Tips and Tricks Database

Handling SAP Performance Issues in General


Contributed by Kevin Wilson
Thursday, 30 April 2009

The following are some known reasons for a decrease in performance:

- Use of user exits (Note 178328)
- Obsolete statistics (Note 103212)
- Very large data volumes (Notes 118743 and 111813)
- Inefficient table buffering
- High CPU consumption, e.g. copying, sorting and searching in large internal tables (Note 185530)

Pre-Requisites

Authorizations for the following transactions:
- STAD
- ST05
- ST06
- SE30
- SM50
- SM66

Overview

The analysis steps should follow this order:

1. Examine the individual statistical records (statistical analysis, STAD / STAT)
   - Used to isolate the areas in which performance problems occur in individual programs.
   - Use Transaction STAD (or Transaction STAT if your release is older than 4.6C).
2. Use the Performance Trace (ST05) to examine SQL statements, database accesses, RFCs and lock operations (enqueues) in detail.
3. Use the ABAP Trace (SE30).
4. Use the ABAP Debugger.
5. Use the Runtime Overview (SM50).

Details

1. Statistical analysis (STAD / STAT)

Statistical information comprises:
- response times,
- main memory requirement,
- database accesses, etc.

The records are saved on the application servers for approximately one day. The statistical records can be grouped by transaction and then analyzed. On the selection screen, you enter the user name, transaction name or program name as well as the time period to be analyzed. Also note the Server Selection switch: by default, the data is collected from all application servers.

You can use the following option switches to select the display type of the individual record statistics:
- Show all statistic records, sorted by start time
- Show all records, grouped by business transaction
- Show business transaction sums (recommended for SD transactions)
What to check in the statistical analysis (STAD / STAT):

- Response times
  Are the response times consistently high, or do high response times only occur sporadically? You can use the Fcod column, which displays the function code within a transaction, to determine whether high response times only ever occur for one screen within a transaction. The records in question can be selected by double-clicking an individual statistical record. To display all of the details concerning the individual record, choose All details.

- High database times (DB req time)
  In principle, this is a database problem that can be analyzed using an SQL trace (ST05). You can use the values for 'kbytes transferred' or 'Database rows' to distinguish between two types of database problems:

  The database time is too high even though relatively little data is read (a high 'Avg. time/row (ms)' for sequential reads; the optimal read time is about 1 ms per data record for a 'sequential read'). For SELECT * clauses, check whether all of the fields are actually required in the program, or only some of them. WHERE clauses that should read one row at most, for example to check whether certain information exists, should contain the addition 'UP TO 1 ROWS':
SELECT * FROM dbtable UP TO 1 ROWS WHERE field1 = x1.
ENDSELECT.

  The database time is high because the volume of data transferred is high, but the rate at which the data is read (approximately 1 ms per record) is optimal.
  The WHERE clause should be used to keep the number of records read by the database for each SQL statement as low as possible. If, for example, CHECK statements on table fields are contained in SELECT .. ENDSELECT loops, they should be replaced with a suitable WHERE clause wherever possible. If there are SELECT statements without a WHERE clause on tables that are constantly growing, such as BSEG or VBRK, the program design must be revised.
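  A minimal sketch of such a rewrite (the table VBAK, the field VKORG and the value '1000' are used here purely for illustration and are not taken from the article): a CHECK inside a SELECT .. ENDSELECT loop can usually be pushed down into the WHERE clause so that the database only returns the rows that are actually needed.

DATA ls_vbak TYPE vbak.

* Inefficient: every row is transferred to the application server and filtered there.
SELECT * FROM vbak INTO ls_vbak.
  CHECK ls_vbak-vkorg = '1000'.
  "... process the record ...
ENDSELECT.

* Better: the WHERE clause lets the database return only the rows that are needed.
SELECT * FROM vbak INTO ls_vbak WHERE vkorg = '1000'.
  "... process the record ...
ENDSELECT.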
  Is identical data read repeatedly? To determine this, use the 'Goto -> Identical Selects' function in the SQL trace. Here, you have to check whether the identical accesses can be avoided.

- High roll wait times (wait time)
  These usually indicate communication problems. The RFC trace (ST05) is used to analyze problems of this type.

- High CPU times (CPU time)
  High CPU times imply that there are time-consuming calculations in the ABAP code or frequent accesses to the table buffer. Programs that spend more than 50% of their response time on CPU should be examined using the ABAP trace or the ABAP debugger.
2. Performance Trace (ST05)

During the program runtime, the following operations can be recorded: SQL statements of a user, RFC calls, enqueue operations and accesses to the SAP buffer. On the initial screen, there are start (Trace on), stop (Trace off) and analysis switches. Another user for whom the trace is to be activated can also be specified here; your own user is proposed by default. For standard program analysis, activate the SQL trace, the enqueue trace and the RFC trace.

When creating a trace (this also applies to the SQL trace and the ABAP trace), note the following points:

- So that the analysis remains transparent, the user should only run one activity (no other background jobs or update requests).
- Make sure that Transaction ST05 runs on the same application server as the programs to be monitored. If, for example, you want to record an update request or a background job and you are working in a system with distributed updates or distributed background processing, you must start a trace on all relevant application servers.
- You can use the SQL trace to record SQL statements. Note that there may be considerable differences between the SQL statement formulated in the ABAP code and the SQL statement that is sent to the database. To record SAP buffer accesses as well, you also have to activate the buffer trace.
- In the SQL trace, the buffer loading processes are also recorded during the first program run. Therefore, execute the program once without a trace first, so that reloading of the R/3 table buffers, program buffers, and so on, is not recorded.
- During the trace, note the following monitors for the general check: the work process overview, the operating system monitor (for monitoring possible CPU bottlenecks) and the database process monitor (for directly monitoring the SQL statements). This does not work if the trace is activated under your own user, because the SQL statements of the monitors would then also appear in the trace and make it unreadable.

SQL trace analysis

For the analysis, choose List Trace on the initial screen. Use the Trace Mode field to select the subsequent analysis and then choose Execute.

SQL Trace
Database operations:

Open/Reopen: The fields specified in the WHERE condition are the key fields of the table. The result of the request is exactly one record (Rec=1) or no record found (Rec=0). SQL statements in which all key fields are specified with 'equal' are called fully qualified accesses or 'Direct Reads'. A fully qualified database access should not take longer than 2-10 ms. However, if the database has to reload data blocks from the hard disk, for example, times of up to 100 ms are acceptable.
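A minimal sketch of a fully qualified access (the table VBAK and the document number are chosen only as an example): all key fields of the table, MANDT (supplied implicitly) and VBELN, are specified with 'equal', so the database can return at most one record.

DATA ls_vbak TYPE vbak.

* Direct read: the full primary key (client plus VBELN) is specified with 'equal'.
SELECT SINGLE * FROM vbak INTO ls_vbak WHERE vbeln = '0000004711'.
IF sy-subrc = 0.
  "... exactly one record was found ...
ENDIF.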

If you place the cursor on the OPEN/REOPEN statement, you can navigate to Explain SQL. When you double-click one of the table names, the system displays a dialog box that contains statistical information about the table and information about all of the table's indexes. If you double-click an index, you receive statistical information about that index.

You can choose Analyze to refresh the statistical information about a table and its indexes. Because this statistical information determines the behavior of the cost-based optimizer, you should refresh it each time a significant change is made to a table or its indexes. Index Statistics provides an overview of the statistical information about all indexes. Choose Explain with hint to use hints to view the execution plan of an SQL statement.

Fetch: The data records are transferred to the application server in packages, in one or more fetches. The SELECT clause of the SQL statement determines how many records are transferred with each fetch: if the fields to be transferred by the database are restricted or listed explicitly in the SELECT statement, more records fit into one fetch than with a 'SELECT *' statement. Here the optimal fetch times are around 10 ms per record.
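A minimal sketch of this difference (the table VBAK, the fields and the date value are purely illustrative): restricting the field list reduces the amount of data per record, so each fetch package holds more records.

DATA: lt_wide TYPE STANDARD TABLE OF vbak,
      BEGIN OF ls_slim,
        vbeln TYPE vbak-vbeln,
        erdat TYPE vbak-erdat,
      END OF ls_slim,
      lt_slim LIKE STANDARD TABLE OF ls_slim.

* Wide transfer: every field of VBAK is sent for each record.
SELECT * FROM vbak INTO TABLE lt_wide WHERE erdat >= '20090101'.

* Slim transfer: only the two fields that are actually needed are sent,
* so considerably more records fit into each fetch package.
SELECT vbeln erdat FROM vbak INTO TABLE lt_slim WHERE erdat >= '20090101'.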

If an SQL statement has been identified as having a long runtime, the trace should be performed both when the system load is high and when it is low. If the response times for a database access are only poor at certain times, this implies that there are throughput problems in the network or when accessing the database. If, on the other hand, the poor response times can be reproduced at any time, the SQL statement itself is probably inefficient.

You can distinguish between two types of SQL statements that perform poorly:

Suitable access path:
The SQL statement checks a lot of data blocks in the database and is time-consuming because a lot of data records are transferred from the database to the application server. The database performance is satisfactory if the SQL statement requires less than 10 ms for each data record transferred. In this case, performance can generally only be improved by changing the business process or the ABAP code.

Unsuitable access path:
The SQL statement checks a lot of data blocks but transfers only a few records from the database to the application server. The database performance is less than optimal if the SQL statement requires more than 10 ms for each data record transferred. You can improve it by changing the ABAP code (unsuitable access strategy: too complex or incorrect WHERE clause) or by changing the index design (unsuitable access strategy: incorrect or missing index).
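A hedged sketch of an unsuitable access strategy (the table VBAK, the field ERNAM, the user name and the index assumptions are purely illustrative and not taken from the article):

DATA: lt_vbak  TYPE STANDARD TABLE OF vbak,
      lr_vbeln TYPE RANGE OF vbak-vbeln.

* Assume, purely for illustration, that lr_vbeln has been filled with the
* required document numbers and that there is no index on VBAK-ERNAM.

* Unsuitable access path: the database has to scan many data blocks of VBAK
* to find the few rows created by this user.
SELECT * FROM vbak INTO TABLE lt_vbak WHERE ernam = 'SMITH'.

* Better access path: the primary key (client plus VBELN) can be used, so only
* the blocks containing the requested documents are read.
SELECT * FROM vbak INTO TABLE lt_vbak WHERE vbeln IN lr_vbeln.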

To recognize network problems between the database and the application servers, execute the SQL trace several times: on the one hand on the application server that runs on the database host, and on the other hand on an application server that is connected to the database via the TCP/IP network. If the response times on the server that is connected to the database via the network are considerably higher, there is a network problem.

In the trace list, choose the Summary function to obtain an overview of how often and for how long the tables listed in the SQL performance trace are accessed.

Goto -> Summary -> Summarize: The SQL statements are sorted according to their runtime. You should optimize the SQL statements whose runtime is high in comparison with the total runtime.

You can use Goto -> Identical Selects to identify identical SQL statements, that is, cases in which the database reads identical data repeatedly and in quick succession. Identical SQL statements return identical results, so the repeated accesses are unnecessary.
In such cases, the SQL statement results should be buffered, for example, in an internal table of the calling ABAP program.
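A minimal sketch of such a buffer (the table KNA1, the fields and the subroutine name are only examples): before the database is accessed, the program first checks an internal table that holds the results of earlier, identical selects.

TYPES: BEGIN OF ty_buffer,
         kunnr TYPE kna1-kunnr,
         name1 TYPE kna1-name1,
       END OF ty_buffer.

DATA gt_buffer TYPE HASHED TABLE OF ty_buffer WITH UNIQUE KEY kunnr.

FORM get_customer_name USING    iv_kunnr TYPE kna1-kunnr
                       CHANGING cv_name1 TYPE kna1-name1.

  FIELD-SYMBOLS <ls_buffer> TYPE ty_buffer.
  DATA ls_buffer TYPE ty_buffer.

* Reuse the result of an earlier identical select if it is already buffered.
  READ TABLE gt_buffer ASSIGNING <ls_buffer> WITH TABLE KEY kunnr = iv_kunnr.
  IF sy-subrc = 0.
    cv_name1 = <ls_buffer>-name1.
    RETURN.
  ENDIF.

* Otherwise read the database once and remember the result.
  SELECT SINGLE kunnr name1 FROM kna1 INTO ls_buffer WHERE kunnr = iv_kunnr.
  IF sy-subrc = 0.
    INSERT ls_buffer INTO TABLE gt_buffer.
    cv_name1 = ls_buffer-name1.
  ENDIF.

ENDFORM.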

On the trace list screen, you can choose ABAP display to jump to the section of the source code from which the SQL statement was executed. The ABAP display is shown for the SQL statement on which the cursor is currently positioned in the trace list, and it is available for the PREPARE, OPEN and REOPEN database operations.

RFC Trace

The RFC trace records the RFCs that were received and sent. By double-clicking a row in the RFC trace, or by choosing Details, you receive, for example, the names and IP addresses of the sender and recipient, the name of the RFC module and the volume of data transferred. To display the ABAP source code, choose ABAP display.

Enqueue Trace

The enqueue trace records R/3 lock requests and releases (including the lock keys and the objects in question).
3. ABAP Trace (SE30)

Use this function if you experience problems with high CPU consumption. Unlike the SQL trace, this function also provides time measurements for operations on internal tables (LOOP, READ, SORT, APPEND, and so on), as well as the runtime of database accesses (SELECT, EXEC SQL, and so on) and the time required for individual modularization units (MODULE, PERFORM, CALL FUNCTION, SUBMIT, and so on).

Alternatively, if programs have long runtimes, you can call the ABAP debugger from the work process overview
(Transaction SM50, see below) and trace the program in debugging.

On the initial screen, you can enter a transaction code, a program name or a function module. To start the trace, choose Execute. When you exit the program, transaction or function module in the usual way, the trace returns to the initial screen and displays the measurement data file that was just created in the lower area.

You can use the Analyze switch to display the various views of the measurement results:

Hit list (in the menu under 'Goto') displays the execution time for each statement, sorted in descending order by gross time. Sorting by net time provides an overview of the statements with the highest net times. The gross time is the total time required for a call. The net time is the gross time minus the time required for the modularization units called from it (MODULE, PERFORM, CALL FUNCTION, and so on) and for separate ABAP statements. For example, if a subroutine call has a gross time of 100 ms and the modules it calls account for 80 ms, its net time is 20 ms. For basic statements such as APPEND and SORT, the gross time is the same as the net time.
Hierarchy displays the chronological sequence of the transaction or program.

4. ABAP Debugger

This tool is used to identify errors in programs.

Note that if processing terminates during debugging, a database commit is automatically triggered. In this case, the work process (and therefore the SAP LUW) is interrupted, which may result in data inconsistencies (see Note 859240). We therefore recommend that you only debug in a test system, if possible.

Performance analysis using the debugger begins when you start the program to be examined. In a second session, you start the work process overview (Transaction SM50, see below). From the work process overview, you choose the Debugging function to navigate to the debugger. By repeatedly jumping into the debugger in quick succession, you can identify the sections of source code in the program that have a high CPU requirement. Experience has shown that these are often LOOP .. ENDLOOP loops over large internal tables.

The following programming errors always result in a high main memory requirement and a high CPU requirement:

- Missing REFRESH or FREE statements
  You use these statements to delete internal tables and release the assigned main memory. If these statements are missing, resources remain blocked unnecessarily.

- Reads on internal tables
  The ABAP statement READ TABLE .. WITH KEY .. without a further addition triggers a sequential search, which is very time-consuming for large tables. Performance improves considerably if the table is read with the BINARY SEARCH addition; for this, however, the table must be sorted beforehand. You can also optimize operations on large tables by using sorted tables (SORTED TABLE) or hashed tables (HASHED TABLE), as illustrated in the sketch below.
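A minimal sketch of these alternatives (the structure, field names and key values are chosen only for illustration):

TYPES: BEGIN OF ty_item,
         matnr TYPE matnr,
         menge TYPE i,
       END OF ty_item.

DATA: lt_std    TYPE STANDARD TABLE OF ty_item,
      lt_sorted TYPE SORTED TABLE OF ty_item WITH UNIQUE KEY matnr,
      ls_item   TYPE ty_item.

* Sequential search: the whole table may be scanned for every read.
READ TABLE lt_std INTO ls_item WITH KEY matnr = 'MAT-001'.

* Binary search: much faster, but lt_std must be sorted by matnr beforehand.
SORT lt_std BY matnr.
READ TABLE lt_std INTO ls_item WITH KEY matnr = 'MAT-001' BINARY SEARCH.

* Sorted (or hashed) table: the table type itself guarantees an efficient access path.
READ TABLE lt_sorted INTO ls_item WITH TABLE KEY matnr = 'MAT-001'.

* Release the memory of large internal tables that are no longer needed.
FREE lt_std.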
5. Runtime Overview (SM50)

Use the work process overview to answer questions such as:
- Which program or function consumes the most time?
- Is there a large number of sequential read accesses to one table?
- Does the program require too much memory?
- What is the relationship between CPU consumption and runtime? If the CPU consumption is high, the cause usually lies in the ABAP code; if it is low in relation to the runtime, the cause usually lies in the database accesses or the RFCs.

